Change in vdsm[master]: vdsm: Code simplifications
by Vinzenz Feenstra
Vinzenz Feenstra has uploaded a new change for review.
Change subject: vdsm: Code simplifications
......................................................................
vdsm: Code simplifications
* Made vmName available as a variable of the vm object
* Simplified the naming of the guest socket files
Change-Id: Iaff375542048f49da507910252a35780161ee09c
Signed-off-by: Vinzenz Feenstra <vfeenstr(a)redhat.com>
---
M vdsm/vm/vm.py
1 file changed, 6 insertions(+), 10 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/48/11748/1
diff --git a/vdsm/vm/vm.py b/vdsm/vm/vm.py
index 2286e41..ad6e969 100644
--- a/vdsm/vm/vm.py
+++ b/vdsm/vm/vm.py
@@ -159,14 +159,11 @@
self._devices = initDeviceMap()
self._connection = libvirtconnection.get(cif)
- if 'vmName' not in self.conf:
- self.conf['vmName'] = 'n%s' % self.id
- self._guestSocketFile = (constants.P_LIBVIRT_VMCHANNELS +
- self.conf['vmName'].encode('utf-8') +
- '.' + _VMCHANNEL_DEVICE_NAME)
- self._qemuguestSocketFile = (constants.P_LIBVIRT_VMCHANNELS +
- self.conf['vmName'].encode('utf-8') +
- '.' + _QEMU_GA_DEVICE_NAME)
+ self.vmName = self.conf.get('vmName', 'n%s' % self.id)
+ self.conf['vmName'] = self.vmName.encode('utf-8')
+ socketFilesBase = constants.P_LIBVIRT_VMCHANNELS + self.vmName + '.'
+ self._guestSocketFile = socketFilesBase + _VMCHANNEL_DEVICE_NAME
+ self._qemuguestSocketFile = socketFilesBase + _QEMU_GA_DEVICE_NAME
self._lastXMLDesc = '<domain><uuid>%s</uuid></domain>' % self.id
self._devXmlHash = '0'
self._released = False
@@ -2103,8 +2100,7 @@
def _getPid(self):
pid = '0'
try:
- vmName = self.conf['vmName'].encode('utf-8')
- pid = supervdsm.getProxy().getVmPid(vmName)
+ pid = supervdsm.getProxy().getVmPid(self.vmName)
except:
pass
return pid
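The core of the patch is replacing the explicit "if key not in dict" branch with `dict.get()` and a computed default. A minimal standalone sketch (`conf` and `vm_id` are illustrative stand-ins for `self.conf` and `self.id`):

```python
# Before the patch: explicit membership test plus assignment.
conf = {}
vm_id = '42'
if 'vmName' not in conf:
    conf['vmName'] = 'n%s' % vm_id
name_before = conf['vmName']

# After the patch: dict.get() with a computed default does the same in one step.
name_after = {}.get('vmName', 'n%s' % vm_id)

assert name_before == name_after == 'n42'
```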
--
To view, visit http://gerrit.ovirt.org/11748
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Iaff375542048f49da507910252a35780161ee09c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Vinzenz Feenstra <vfeenstr(a)redhat.com>
Change in vdsm[master]: [WIP] More refactor work
by Vinzenz Feenstra
Vinzenz Feenstra has abandoned this change.
Change subject: [WIP] More refactor work
......................................................................
Abandoned
Out of date
--
To view, visit http://gerrit.ovirt.org/11749
Gerrit-MessageType: abandon
Gerrit-Change-Id: I11154569c1d7593ebb70827d584337b0c2d32638
Gerrit-PatchSet: 3
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Vinzenz Feenstra <vfeenstr(a)redhat.com>
Gerrit-Reviewer: Vinzenz Feenstra <vfeenstr(a)redhat.com>
Gerrit-Reviewer: oVirt Jenkins CI Server
Change in vdsm[master]: jsonrpc: Create the java bindings and fix bugs
by smizrahi@redhat.com
Saggi Mizrahi has posted comments on this change.
Change subject: jsonrpc: Create the java bindings and fix bugs
......................................................................
Patch Set 5:
(15 comments)
....................................................
File lib/yajsonrpc/__init__.py
Line 88
Line 89
Line 90
Line 91
Line 92
The spec says that ID can be anything
Line 315: try:
Line 316: mobj = json.loads(message)
Line 317: isResponse = self._isResponse(mobj)
Line 318: except:
Line 319: self.log.warning("Problem parsing message from client")
It might be huge and it might contain sensitive data (passwords).
Since we never manually construct JSON objects, it only happens if the libraries are broken.
Line 320: transport.close()
Line 321: del self._clients[transport]
Line 322: continue
Line 323:
Line 333: if v is None:
Line 334: v = res
Line 335:
Line 336: if v != res:
Line 337: raise TypeError("batch is mixed")
Might contain sensitive data
Line 338:
Line 339: return v
Line 340: else:
Line 341: return ("result" in obj or "error" in obj)
Line 340: else:
Line 341: return ("result" in obj or "error" in obj)
Line 342:
Line 343: def close(self):
Line 344: self._inbox.put(None)
We need to properly close channels.
None is as good a flag as any IMO.
Line 345:
Line 346:
Line 347: class JsonRpcCall(object):
Line 348: def __init__(self):
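The sentinel-based shutdown the reviewer describes (putting `None` into the inbox to tell the consumer loop to exit) can be sketched like this; the names are illustrative, not the real yajsonrpc ones:

```python
import threading
from queue import Queue

def consume(inbox, out):
    # Consumer loop: drain the inbox until the None sentinel arrives.
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: channel closed
            break
        out.append(msg)

inbox = Queue()
seen = []
t = threading.Thread(target=consume, args=(inbox, seen))
t.start()
inbox.put('a')
inbox.put('b')
inbox.put(None)  # close(): unblocks the consumer and ends the loop
t.join()
print(seen)  # → ['a', 'b']
```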
Line 449: self._workQueue = Queue()
Line 450: self._threadFactory = threadFactory
Line 451:
Line 452: def queueRequest(self, req):
Line 453: print "DSAD"
oops :)
Line 454: self._workQueue.put_nowait(req)
Line 455: print "DSAD"
Line 456:
Line 457: def _serveRequest(self, ctx, req):
....................................................
File lib/yajsonrpc/betterAsyncore.py
Line 105: class AsyncChat(object):
Line 106: # these are overridable defaults
Line 107:
Line 108: ac_in_buffer_size = 4096
Line 109: ac_out_buffer_size = 4096
They are. They are just class-related constants and not global constants.
Line 110:
Line 111: def __init__(self, impl):
Line 112: self._fifoLock = Lock()
Line 113: self._impl = impl
Line 154:
Line 155: try:
Line 156: data = dispatcher.recv(self.ac_in_buffer_size)
Line 157: except socket.error:
Line 158: dispatcher.handle_error()
No, the API mandates that I am not responsible for error reporting.
Line 159: return
Line 160:
Line 161: self.ac_in_buffer = self.ac_in_buffer + data
Line 162:
Line 318:
Line 319: try:
Line 320: impl.init(self)
Line 321: except AttributeError:
Line 322: pass
No, it just means that it's optional
Line 323:
Line 324: def __invoke(self, name, *args, **kwargs):
Line 325: if hasattr(self.__impl, name):
Line 326: return getattr(self.__impl, name)(self, *args, **kwargs)
Line 373:
Line 374: def connect(self, addr):
Line 375: self.connected = False
Line 376: self.connecting = True
Line 377: socket = self.socket
It's faster and pyflakes can catch errors this way.
Line 378: socket.setblocking(1)
Line 379: socket.connect(addr)
Line 380: socket.setblocking(0)
Line 381: self.addr = addr
....................................................
File lib/yajsonrpc/protonReactor.py
Line 62: self._reactor = reactor
Line 63: self._connected = False
Line 64:
Line 65: def setTimeout(self, timeout):
Line 66: # TODO
I don't think it's needed. It will just make the log dirty.
Line 67: pass
Line 68:
Line 69: def closed(self):
Line 70: return (self.connector is None or
....................................................
File tests/jsonRpcTests.py
Line 18: # Refer to the README and COPYING files for full details of the license
Line 19: #
Line 20: import threading
Line 21: import socket
Line 22: import logging
Line 23: from Queue import Queue
Line 24: from contextlib import contextmanager
Line 25: from testValidation import brokentest
Line 26:
....................................................
File tests/jsonRpcUtils.py
Line 23: pass
Line 24:
Line 25:
Line 26: def hasProton():
Line 27: return protonReactor is not None
I don't understand the question
Line 28:
Line 29:
Line 30: def getFreePort():
Line 31: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
Line 57: ("127.0.0.1", port))
Line 58:
Line 59:
Line 60: REACTOR_CONSTRUCTORS = {"tcp": _tcpServerConstructor,
Line 61: "amqp": _protonServerConstructor}
Because proton is the implementation not the protocol
Line 62: REACTOR_TYPE_PERMUTATIONS = [[r] for r in REACTOR_CONSTRUCTORS.iterkeys()]
Line 63: SSL_OPTIONS = (True, False)
Line 64: CONNECTION_PERMUTATIONS = tuple(product(REACTOR_CONSTRUCTORS.iterkeys(),
Line 65: SSL_OPTIONS))
Line 59:
Line 60: REACTOR_CONSTRUCTORS = {"tcp": _tcpServerConstructor,
Line 61: "amqp": _protonServerConstructor}
Line 62: REACTOR_TYPE_PERMUTATIONS = [[r] for r in REACTOR_CONSTRUCTORS.iterkeys()]
Line 63: SSL_OPTIONS = (True, False)
True means ssl=True
False means ssl=False
Line 64: CONNECTION_PERMUTATIONS = tuple(product(REACTOR_CONSTRUCTORS.iterkeys(),
Line 65: SSL_OPTIONS))
Line 66:
Line 67: CERT_DIR = os.path.abspath(os.path.dirname(__file__))
....................................................
File vdsm_api/vdsmapi.py
Line 116: Find the API schema file whether we are running from within the source dir
Line 117: or from an installed location
Line 118: """
Line 119: # Don't depend on module VDSM if not looking for schema
Line 120: from vdsm import constants
I don't think it matters
Line 121:
Line 122: localpath = os.path.dirname(__file__)
Line 123: installedpath = constants.P_VDSM
Line 124: for directory in localpath, installedpath:
--
To view, visit http://gerrit.ovirt.org/19497
Gerrit-MessageType: comment
Gerrit-Change-Id: If828355b7efe28fe6a2e784069425fefd2f3f25c
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: Barak Azulay <bazulay(a)redhat.com>
Gerrit-Reviewer: Eduardo <ewarszaw(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybronhei(a)redhat.com>
Gerrit-Reviewer: mooli tayer <mtayer(a)redhat.com>
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes
Change in vdsm[master]: Drop single use inheritance
by asegurap@redhat.com
Antoni Segura Puimedon has uploaded a new change for review.
Change subject: Drop single use inheritance
......................................................................
Drop single use inheritance
There is only one class inheriting from Device: VmDevice. Device
has no instances, so it is better to drop it altogether in favor
of VmDevice, which is inherited from a sizeable number of places.
Change-Id: If781ab20110874e71ba16b60d1d5511a54914979
Signed-off-by: Antoni S. Puimedon <asegurap(a)redhat.com>
---
M vdsm/vm.py
1 file changed, 16 insertions(+), 19 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/51/18351/1
diff --git a/vdsm/vm.py b/vdsm/vm.py
index 4f80b4c..9a24822 100644
--- a/vdsm/vm.py
+++ b/vdsm/vm.py
@@ -90,24 +90,6 @@
diskDeviceXmlElements)
-class Device(object):
- def __init__(self, conf, log, **kwargs):
- for attr, value in kwargs.iteritems():
- try:
- setattr(self, attr, value)
- except AttributeError:
- # skip read-only properties
- pass
- self.conf = conf
- self.log = log
- self._deviceXML = None
-
- def __str__(self):
- attrs = [":".join((a, str(getattr(self, a)))) for a in dir(self)
- if not a.startswith('__')]
- return " ".join(attrs)
-
-
class _MigrationError(RuntimeError):
pass
@@ -1149,7 +1131,22 @@
return self.doc.toprettyxml(encoding='utf-8')
-class VmDevice(Device):
+class VmDevice(object):
+ def __init__(self, conf, log, **kwargs):
+ for attr, value in kwargs.iteritems():
+ try:
+ setattr(self, attr, value)
+ except AttributeError: # skip read-only properties
+ pass
+ self.conf = conf
+ self.log = log
+ self._deviceXML = None
+
+ def __str__(self):
+ attrs = [':'.join((a, str(getattr(self, a)))) for a in dir(self)
+ if not a.startswith('__')]
+ return ' '.join(attrs)
+
def createXmlElem(self, elemType, deviceType, attributes=[]):
"""
Create domxml device element according to passed in params
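The `__init__` moved into VmDevice maps arbitrary keyword arguments onto instance attributes, silently skipping read-only properties. A condensed sketch of that pattern (class names here are illustrative):

```python
class DeviceSketch(object):
    # Sketch of the kwargs-to-attributes __init__ moved into VmDevice.
    def __init__(self, conf, log, **kwargs):
        for attr, value in kwargs.items():
            try:
                setattr(self, attr, value)
            except AttributeError:
                pass  # skip read-only properties
        self.conf = conf
        self.log = log

class NicSketch(DeviceSketch):
    @property
    def readOnlyProp(self):  # no setter, so __init__ must skip it
        return 'fixed'

nic = NicSketch({'vmId': '1'}, None, alias='net0', readOnlyProp='ignored')
print(nic.alias, nic.readOnlyProp)  # → net0 fixed
```

Passing `readOnlyProp='ignored'` exercises the `except AttributeError` branch: assigning to a setter-less property raises, and the loop moves on.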
--
To view, visit http://gerrit.ovirt.org/18351
Gerrit-MessageType: newchange
Gerrit-Change-Id: If781ab20110874e71ba16b60d1d5511a54914979
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Antoni Segura Puimedon <asegurap(a)redhat.com>
Change in vdsm[master]: gluster: fix integer overflow error in rebalance status
by tjeyasin@redhat.com
Hello Ayal Baron, Bala.FA, Saggi Mizrahi, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/19863
to review the following change.
Change subject: gluster: fix integer overflow error in rebalance status
......................................................................
gluster: fix integer overflow error in rebalance status
Provide rebalance status values as strings to avoid an overflow error
when a rebalance status value exceeds the XML-RPC limits.
For more info: https://bugzilla.redhat.com/show_bug.cgi?id=1012393
Change-Id: Iec44c47268318bcc105c00c2de0cf483012d3723
Signed-off-by: Timothy Asir <tjeyasin(a)redhat.com>
---
M vdsm/gluster/cli.py
1 file changed, 10 insertions(+), 10 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/63/19863/1
diff --git a/vdsm/gluster/cli.py b/vdsm/gluster/cli.py
index 62b5f13..f9d6404 100644
--- a/vdsm/gluster/cli.py
+++ b/vdsm/gluster/cli.py
@@ -610,21 +610,21 @@
status = {
'summary': {
- 'filesScanned': int(tree.find('aggregate/lookups').text),
- 'filesMoved': int(tree.find('aggregate/files').text),
- 'filesFailed': int(tree.find('aggregate/failures').text),
- 'filesSkipped': int(tree.find('aggregate/failures').text),
- 'totalSizeMoved': int(tree.find('aggregate/size').text),
+ 'filesScanned': tree.find('aggregate/lookups').text,
+ 'filesMoved': tree.find('aggregate/files').text,
+ 'filesFailed': tree.find('aggregate/failures').text,
+ 'filesSkipped': tree.find('aggregate/failures').text,
+ 'totalSizeMoved': tree.find('aggregate/size').text,
'status': tree.find('aggregate/statusStr').text.upper()},
'hosts': []}
for el in tree.findall('node'):
status['hosts'].append({'name': el.find('nodeName').text,
- 'filesScanned': int(el.find('lookups').text),
- 'filesMoved': int(el.find('files').text),
- 'filesFailed': int(el.find('failures').text),
- 'filesSkipped': int(el.find('failures').text),
- 'totalSizeMoved': int(el.find('size').text),
+ 'filesScanned': el.find('lookups').text,
+ 'filesMoved': el.find('files').text,
+ 'filesFailed': el.find('failures').text,
+ 'filesSkipped': el.find('failures').text,
+ 'totalSizeMoved': el.find('size').text,
'status': el.find('statusStr').text.upper()})
return status
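The limit in question: XML-RPC's `<int>` element is a signed 32-bit value, so large counters such as `totalSizeMoved` overflow the marshaller once they pass 2**31 - 1. The patch targets the Python 2 `xmlrpclib`; the Python 3 `xmlrpc.client` sketch below enforces the same limit and shows why strings sidestep it:

```python
import xmlrpc.client

big = 2 ** 31  # one past the XML-RPC MAXINT of 2**31 - 1

# Marshalling the raw int fails with OverflowError.
try:
    xmlrpc.client.dumps((big,))
    overflowed = False
except OverflowError:
    overflowed = True

# Marshalling the same value as a string, as the patch does, works fine;
# the caller converts it back on the other side.
payload = xmlrpc.client.dumps((str(big),))
print(overflowed, '2147483648' in payload)  # → True True
```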
--
To view, visit http://gerrit.ovirt.org/19863
Gerrit-MessageType: newchange
Gerrit-Change-Id: Iec44c47268318bcc105c00c2de0cf483012d3723
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Timothy Asir <tjeyasin(a)redhat.com>
Gerrit-Reviewer: Ayal Baron <abaron(a)redhat.com>
Gerrit-Reviewer: Bala.FA <barumuga(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: Unified network persistence [2/3] - Respond to setSafeNetwor...
by amuller@redhat.com
Assaf Muller has uploaded a new change for review.
Change subject: Unified network persistence [2/3] - Respond to setSafeNetworkConfig
......................................................................
Unified network persistence [2/3] - Respond to setSafeNetworkConfig
Change-Id: I320677e40ff5b11da684d3ab7195d018135356b2
Signed-off-by: Assaf Muller <amuller(a)redhat.com>
---
M lib/vdsm/netinfo.py
M vdsm/Makefile.am
M vdsm/configNetwork.py
M vdsm/vdsm-store-net-config.in
4 files changed, 61 insertions(+), 10 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/09/17009/1
diff --git a/lib/vdsm/netinfo.py b/lib/vdsm/netinfo.py
index 6a75067..00ab7ad 100644
--- a/lib/vdsm/netinfo.py
+++ b/lib/vdsm/netinfo.py
@@ -44,6 +44,8 @@
# Unified persistence directories
NET_CONF_RUN_DIR = constants.P_VDSM_RUN + 'netconf/nets/'
BOND_CONF_RUN_DIR = constants.P_VDSM_RUN + 'netconf/bonds/'
+NET_CONF_PERS_DIR = constants.P_VDSM_LIB + 'persistence/netconf/nets/'
+BOND_CONF_PERS_DIR = constants.P_VDSM_LIB + 'persistence/netconf/bonds/'
NET_CONF_PREF = NET_CONF_DIR + 'ifcfg-'
PROC_NET_VLAN = '/proc/net/vlan/'
diff --git a/vdsm/Makefile.am b/vdsm/Makefile.am
index c30dcd9..7b56e6a 100644
--- a/vdsm/Makefile.am
+++ b/vdsm/Makefile.am
@@ -172,6 +172,7 @@
$(MKDIR_P) $(DESTDIR)$(vdsmrundir)/trackedInterfaces
$(MKDIR_P) $(DESTDIR)$(vdsmrundir)/payload
$(MKDIR_P) $(DESTDIR)$(vdsmlibdir)/netconfback
+ $(MKDIR_P) $(DESTDIR)$(vdsmlibdir)/persistence
$(MKDIR_P) $(DESTDIR)$(vdsmpoolsdir)
$(MKDIR_P) $(DESTDIR)$(vdsmbackupdir)
$(MKDIR_P) $(DESTDIR)$(localstatedir)/lib/libvirt/qemu/channels
diff --git a/vdsm/configNetwork.py b/vdsm/configNetwork.py
index 7827727..3399e51 100755
--- a/vdsm/configNetwork.py
+++ b/vdsm/configNetwork.py
@@ -649,7 +649,8 @@
def setSafeNetworkConfig():
"""Declare current network configuration as 'safe'"""
- execCmd([constants.EXT_VDSM_STORE_NET_CONFIG])
+ execCmd([constants.EXT_VDSM_STORE_NET_CONFIG,
+ config.get('vars', 'persistence')])
def usage():
diff --git a/vdsm/vdsm-store-net-config.in b/vdsm/vdsm-store-net-config.in
index ea87bca..f4ba1f4 100755
--- a/vdsm/vdsm-store-net-config.in
+++ b/vdsm/vdsm-store-net-config.in
@@ -5,16 +5,18 @@
. @LIBEXECDIR@/ovirt_functions.sh
+# ifcfg persistence directories
NET_CONF_DIR='/etc/sysconfig/network-scripts/'
-NET_CONF_BACK_DIR=@VDSMLIBDIR@/netconfback
-DELETE_HEADER='# original file did not exist'
+NET_CONF_BACK_DIR="@VDSMLIBDIR@/netconfback"
-if isOvirtNode
-then
- # for ovirt, persist the changed configuration files
+# Unified persistence directories
+RUN_CONF_DIR='@VDSMRUNDIR@/netconf'
+PERS_CONF_PATH="@VDSMLIBDIR@/persistence"
+PERS_NET_CONF_PATH="$PERS_CONF_PATH/netconf"
- . /usr/libexec/ovirt-functions
+PERSISTENCE=$1
+ifcfg_node_persist() {
for f in "$NET_CONF_BACK_DIR"/*;
do
[ ! -f "$f" ] && continue
@@ -27,9 +29,54 @@
fi
rm "$NET_CONF_BACK_DIR/$bf"
done
-else
- # for rhel, remove the backed up configuration files, and thus mark the
- # ones under /etc/sysconfig as "safe".
+}
+ifcfg_nonnode_persist() {
+ # Remove the backed up configuration files thus marking the ones under
+ # /etc/sysconfig as "safe".
rm -rf "$NET_CONF_BACK_DIR"/*
+}
+
+unified_node_persist() {
+ unified_nonnode_persist
+
+ # oVirt node ovirt_store_config puts the dir in persistent storage and
+ # bind mounts it in the original place. So that's all we really need to do.
+ ovirt_store_config "$PERS_CONF_PATH"
+}
+
+unified_nonnode_persist() {
+ # Atomic directory copy by using the atomicity of overwriting a link
+ # (rename syscall).
+ TIMESTAMP=$(date +%s)
+ PERS_CONF_SYMLINK=$PERS_NET_CONF_PATH
+ PERS_CONF_DIR_ROOTNAME="$PERS_CONF_SYMLINK."
+ PERS_CONF_NEW_DIR="$PERS_CONF_DIR_ROOTNAME$TIMESTAMP"
+ PERS_CONF_NEW_SYMLINK="$PERS_CONF_SYMLINK.link.$TIMESTAMP"
+
+ cp -r "$RUN_CONF_DIR" "$PERS_CONF_NEW_DIR"
+ ln -s "$PERS_CONF_NEW_DIR" "$PERS_CONF_NEW_SYMLINK"
+ mv -fT "$PERS_CONF_NEW_SYMLINK" "$PERS_CONF_SYMLINK"
+ find "$PERS_CONF_PATH" -type d -path "$PERS_CONF_DIR_ROOTNAME*" | \
+ grep -v "$PERS_CONF_NEW_DIR" | xargs rm -fr
+}
+
+
+if isOvirtNode
+then
+ # for node, persist the changed configuration files
+
+ . /usr/libexec/ovirt-functions
+
+ if [ "$PERSISTENCE" == "unified" ]; then
+ unified_node_persist
+ else
+ ifcfg_node_persist
+ fi
+else
+ if [ "$PERSISTENCE" == "unified" ]; then
+ unified_nonnode_persist
+ else
+ ifcfg_nonnode_persist
+ fi
fi
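The `unified_nonnode_persist` function above relies on the atomicity of `rename(2)`: copy the running config to a timestamped directory, then repoint a symlink over it in one `mv -fT`. A condensed, self-contained sketch of that trick (paths are illustrative, not the script's real `@VDSMLIBDIR@` locations; `mv -T` is GNU coreutils):

```shell
#!/bin/sh
set -e
BASE=$(mktemp -d)
RUN_CONF_DIR="$BASE/netconf.run"   # stand-in for @VDSMRUNDIR@/netconf
LINK="$BASE/netconf"               # stand-in for the persistent symlink

mkdir -p "$RUN_CONF_DIR"
echo ovirtmgmt > "$RUN_CONF_DIR/net0"

TS=$(date +%s)
NEW_DIR="$LINK.$TS"
cp -r "$RUN_CONF_DIR" "$NEW_DIR"   # snapshot the running config
ln -s "$NEW_DIR" "$LINK.link.$TS"  # stage a symlink to the snapshot
mv -fT "$LINK.link.$TS" "$LINK"    # single rename(2): readers see old or new, never a mix

cat "$LINK/net0"                   # → ovirtmgmt
```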
--
To view, visit http://gerrit.ovirt.org/17009
Gerrit-MessageType: newchange
Gerrit-Change-Id: I320677e40ff5b11da684d3ab7195d018135356b2
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Assaf Muller <amuller(a)redhat.com>
Change in vdsm[master]: [WIP] Unified persistence.
by asegurap@redhat.com
Antoni Segura Puimedon has uploaded a new change for review.
Change subject: [WIP] Unified persistence.
......................................................................
[WIP] Unified persistence.
This patch introduces the new persistence model for vdsm networking.
It is meant to provide a single, reliable way of abstracting persistence
out of the netconf configurators as much as possible.
To achieve this, it stores the network actions as setupNetwork
parameters serialized in JSON, which are then used for rollback and
initialization.
Change-Id: I7137a96f84abd2c5e532c6c37737e36ef17567a9
Signed-off-by: Antoni S. Puimedon <asegurap(a)redhat.com>
---
M lib/vdsm/config.py.in
M lib/vdsm/netinfo.py
M vdsm/configNetwork.py
M vdsm/vdsm-store-net-config.in
4 files changed, 124 insertions(+), 11 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/99/16699/1
diff --git a/lib/vdsm/config.py.in b/lib/vdsm/config.py.in
index 2a5618a..05a820b 100644
--- a/lib/vdsm/config.py.in
+++ b/lib/vdsm/config.py.in
@@ -42,6 +42,9 @@
'Comma-separated list of fnmatch-patterns for dummy hosts nics to '
'be shown to vdsm.'),
+ ('persistence', 'ifcfg',
+ 'Whether to use "ifcfg" or "unified" persistence for networks.'),
+
('nic_model', 'rtl8139,pv',
'NIC model is rtl8139, ne2k_pci pv or any other valid device '
'recognized by kvm/qemu if a coma separated list given then a '
diff --git a/lib/vdsm/netinfo.py b/lib/vdsm/netinfo.py
index 37cd0b4..6b22c77 100644
--- a/lib/vdsm/netinfo.py
+++ b/lib/vdsm/netinfo.py
@@ -37,9 +37,14 @@
import libvirtconnection
NET_CONF_DIR = '/etc/sysconfig/network-scripts/'
+# ifcfg persistence directories
NET_CONF_BACK_DIR = constants.P_VDSM_LIB + 'netconfback/'
NET_LOGICALNET_CONF_BACK_DIR = NET_CONF_BACK_DIR + 'logicalnetworks/'
+# Unified persistence directories
+NET_CONF_RUN_DIR = constants.P_VDSM_RUN + 'netconf/nets/'
+BOND_CONF_RUN_DIR = constants.P_VDSM_RUN + 'netconf/bonds/'
+
NET_CONF_PREF = NET_CONF_DIR + 'ifcfg-'
PROC_NET_VLAN = '/proc/net/vlan/'
NET_FN_MATCH = '/sys/class/net/*'
diff --git a/vdsm/configNetwork.py b/vdsm/configNetwork.py
index 1fde263..9c72639 100755
--- a/vdsm/configNetwork.py
+++ b/vdsm/configNetwork.py
@@ -17,12 +17,14 @@
# Refer to the README and COPYING files for full details of the license
#
+import json
import sys
import os
import traceback
import time
import logging
+from vdsm.config import config
from vdsm import constants
from vdsm import utils
from storage.misc import execCmd
@@ -152,6 +154,7 @@
configurator=None, bondingOptions=None, bridged=True,
_netinfo=None, qosInbound=None, qosOutbound=None, **options):
nics = nics or ()
+ saveRunConf = configurator is None
if _netinfo is None:
_netinfo = netinfo.NetInfo()
bridged = utils.tobool(bridged)
@@ -203,6 +206,36 @@
qosInbound=qosInbound,
qosOutbound=qosOutbound)
netEnt.configure(**options)
+ if saveRunConf:
+ pass
+
+
+def _setBondAsRunning(self, bond):
+ bondFileName = os.path.join(netinfo.BOND_CONF_RUN_DIR, bond.name)
+ if bond.name not in self._bonds:
+ try:
+ self._bonds[bond.name] = json.load(open(bondFileName))
+ logging.debug("Backed up %s", bondFileName)
+ except IOError as e:
+ if e.errno == os.errno.ENOENT:
+ self._bonds[bond.name] = None
+ else:
+ raise
+ json.dump(bond.setupify(), open(bondFileName, 'w'))
+
+
+def _setNetworkAsRunning(self, network, topNetDev):
+ netFileName = os.path.join(netinfo.NET_CONF_RUN_DIR, network)
+ if network not in self._nets:
+ try:
+ self._nets[network] = json.load(open(netFileName))
+ logging.debug("Backed up %s", netFileName)
+ except IOError as e:
+ if e.errno == os.errno.ENOENT:
+ self._nets[network] = None
+ else:
+ raise
+ json.dump(topNetDev.setupify(), open(netFileName, 'w'))
def assertBridgeClean(bridge, vlan, bonding, nics):
@@ -475,6 +508,7 @@
_netinfo = netinfo.NetInfo()
configurator = Ifcfg()
networksAdded = set()
+ networksDeleted = set()
logger.debug("Setting up network according to configuration: "
"networks:%r, bondings:%r, options:%r" % (networks,
@@ -505,6 +539,7 @@
_delBrokenNetwork(network, libvirt_nets[network],
configurator=configurator)
if 'remove' in networkAttrs:
+ networksDeleted.add(network)
del networks[network]
del libvirt_nets[network]
else:
@@ -543,11 +578,35 @@
except:
configurator.rollback()
raise
+ else:
+ if config.get('vars', 'persistence') != 'unified':
+ return
+ for netName in networksDeleted:
+ logger.info('Removing network %s from running configuration',
+ netName)
+ os.unlink(netinfo.NET_CONF_RUN_DIR + netName)
+ for netName in networksAdded:
+ logger.info('Adding network %s to running configuration',
+ netName)
+ json.dump(networks[netName],
+ open(netinfo.NET_CONF_RUN_DIR + netName, 'w'))
+ existingBonds = netinfo.bondings()
+ for bondName, attr in bondings:
+ if bondName in existingBonds:
+ logger.info('Removing bond %s from running configuration',
+ bondName)
+ os.unlink(netinfo.BOND_CONF_RUN_DIR + bondName)
+ else:
+ logger.info('Adding bond %s to running configuration',
+ bondName)
+ json.dump(bondings[bondName],
+ open(netinfo.BOND_CONF_RUN_DIR + bondName, 'w'))
def setSafeNetworkConfig():
"""Declare current network configuration as 'safe'"""
- execCmd([constants.EXT_VDSM_STORE_NET_CONFIG])
+ execCmd([constants.EXT_VDSM_STORE_NET_CONFIG,
+ config.get('vars', 'persistency')])
def usage():
diff --git a/vdsm/vdsm-store-net-config.in b/vdsm/vdsm-store-net-config.in
index ed3af8a..0ad57c9 100755
--- a/vdsm/vdsm-store-net-config.in
+++ b/vdsm/vdsm-store-net-config.in
@@ -5,20 +5,22 @@
. @LIBEXECDIR@/ovirt_functions.sh
+# ifcfg persistence directories
NET_CONF_DIR='/etc/sysconfig/network-scripts/'
-NET_CONF_BACK_DIR=@VDSMLIBDIR@/netconfback
-DELETE_HEADER='# original file did not exist'
+NET_CONF_BACK_DIR="@VDSMLIBDIR@/netconfback"
-if isOvirt
-then
- # for ovirt, persist the changed configuration files
+# Unified persistence directories
+RUN_CONF_DIR='@VDSMRUNDIR@/netconf'
+PERS_CONF_PATH="@VDSMLIBDIR@/persistence"
+PERS_NET_CONF_PATH="$PERS_CONF_PATH/netconf"
- . /usr/libexec/ovirt-functions
+PERSISTENCE=$1
+ifcfg_node_persist() {
for f in "$NET_CONF_BACK_DIR"/*;
do
[ ! -f "$f" ] && continue
- bf=`basename "$f"`
+ bf=$(basename "$f")
if [ -f "$NET_CONF_DIR/$bf" ];
then
ovirt_store_config "$NET_CONF_DIR/$bf"
@@ -27,9 +29,53 @@
fi
rm "$NET_CONF_BACK_DIR/$bf"
done
-else
- # for rhel, remove the backed up configuration files, and thus mark the
- # ones under /etc/sysconfig as "safe".
+}
+ifcfg_nonnode_persist() {
+ # Remove the backed up configuration files thus marking the ones under
+ # /etc/sysconfig as "safe".
rm -rf "$NET_CONF_BACK_DIR"/*
+}
+
+unified_node_persist() {
+ unified_nonnode_persist
+
+ # oVirt node ovirt_store_config puts the dir in persistent storage and
+ # bind mounts it in the original place. So that's all we really need to do.
+ ovirt_store_config "$PERS_CONF_PATH"
+}
+
+unified_nonnode_persist() {
+ # Atomic directory copy by using the atomicity of overwriting a link
+ # (rename syscall).
+ TIMESTAMP=$(date +%s)
+ PERS_CONF_SYMLINK=$PERS_NET_CONF_PATH
+ PERS_CONF_DIR_ROOTNAME="$PERS_CONF_SYMLINK."
+ PERS_CONF_NEW_DIR="$PERS_CONF_DIR_ROOTNAME$TIMESTAMP"
+ PERS_CONF_NEW_SYMLINK="$PERS_CONF_SYMLINK.link.$TIMESTAMP"
+
+ cp -r "$RUN_CONF_DIR" "$PERS_CONF_NEW_DIR"
+ ln -s "$PERS_CONF_NEW_DIR" "$PERS_CONF_NEW_SYMLINK"
+ mv -fT "$PERS_CONF_NEW_SYMLINK" "$PERS_CONF_SYMLINK"
+ find "$PERS_CONF_PATH" -type d -path "$PERS_CONF_DIR_ROOTNAME" | \
+ grep -v "$PERS_CONF_NEW_DIR" | xargs rm -fr
+}
+
+
+if isOvirt
+then
+ # for node, persist the changed configuration files
+
+ . /usr/libexec/ovirt-functions
+
+ if [ "$PERSISTENCE" == "unified"]; then
+ unified_node_persist
+ else
+ ifcfg_node_persist
+ fi
+else
+ if [ "$PERSISTENCE" == "unified"]; then
+ unified_nonnode_persist
+ else
+ ifcfg_nonnode_persist
fi
--
To view, visit http://gerrit.ovirt.org/16699
Gerrit-MessageType: newchange
Gerrit-Change-Id: I7137a96f84abd2c5e532c6c37737e36ef17567a9
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Antoni Segura Puimedon <asegurap(a)redhat.com>
Change in vdsm[master]: gluster: add gluster task support
by barumuga@redhat.com
Hello Ayal Baron, Timothy Asir, Saggi Mizrahi, Federico Simoncelli, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/10200
to review the following change.
Change subject: gluster: add gluster task support
......................................................................
gluster: add gluster task support
gluster volume operations like rebalance, replace-brick and remove-brick
are async operations which need to be tracked as async tasks in
oVirt. This is done by introducing the new verbs below and changes to the
existing rebalance, replace-brick and remove-brick verbs.
New verb:
* glusterTaskActionPerform
* glusterTasksList
- return value structure:
[{"id": TASKID,
"verb": VOLUMENAME,
"state": TaskStatus,
"code": TaskType,
"message": STRING,
"result": '',
"tag": 'gluster'}, ...]
As the verbs below are not consumed by engine/RHS-C yet, it is OK to break
their compatibility now.
glusterVolumeRebalanceStart
glusterVolumeRebalanceStatus
glusterVolumeReplaceBrickStart
glusterVolumeReplaceBrickStatus
glusterVolumeRemoveBrickStart
glusterVolumeRemoveBrickStatus
Change-Id: I154df353bc6f23001d7bf61b8f5345abd2019cb6
Signed-off-by: Bala.FA <barumuga(a)redhat.com>
---
M tests/gluster_cli_tests.py
M vdsm/gluster/api.py
M vdsm/gluster/cli.py
M vdsm/gluster/exception.py
M vdsm_cli/vdsClientGluster.py
5 files changed, 762 insertions(+), 169 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/00/10200/1
diff --git a/tests/gluster_cli_tests.py b/tests/gluster_cli_tests.py
index b5dedbb..227e6f8 100644
--- a/tests/gluster_cli_tests.py
+++ b/tests/gluster_cli_tests.py
@@ -1067,3 +1067,231 @@
def test_parseVolumeProfileInfo(self):
self._parseVolumeProfileInfo_test()
self._parseVolumeProfileInfoNfs_test()
+
+ def test_parseVolumeStatusAll(self):
+ out = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<cliOutput>
+ <opRet>0</opRet>
+ <opErrno>0</opErrno>
+ <opErrstr></opErrstr>
+ <volumes>
+ <volume>
+ <name>V1</name>
+ <id>03eace73-9197-49d0-a877-831bc6e9dac2</id>
+ <tasks>
+ <task>
+ <name>rebalance</name>
+ <id>12345473-9197-49d0-a877-831bc6e9dac2</id>
+ </task>
+ </tasks>
+ </volume>
+ <volume>
+ <name>V2</name>
+ <id>03eace73-1237-49d0-a877-831bc6e9dac2</id>
+ <tasks>
+ <task>
+ <name>replace-brick</name>
+ <id>12345473-1237-49d0-a877-831bc6e9dac2</id>
+ <sourceBrick>192.168.122.167:/tmp/V2-b1</sourceBrick>
+ <destBrick>192.168.122.168:/tmp/V2-b1</destBrick>
+ </task>
+ </tasks>
+ </volume>
+ <volume>
+ <name>V3</name>
+ <id>03eace73-1237-1230-a877-831bc6e9dac2</id>
+ <tasks>
+ <task>
+ <name>remove-brick</name>
+ <id>12345473-1237-1230-a877-831bc6e9dac2</id>
+ <BrickCount>2</BrickCount>
+ <brick>192.168.122.167:/tmp/V3-b1</brick>
+ <brick>192.168.122.168:/tmp/V3-b1</brick>
+ </task>
+ </tasks>
+ </volume>
+ </volumes>
+</cliOutput>
+"""
+ ostatus = {'12345473-1237-1230-a877-831bc6e9dac2':
+ {'bricks': ['192.168.122.167:/tmp/V3-b1',
+ '192.168.122.168:/tmp/V3-b1'],
+ 'taskType': 'remove-brick',
+ 'volumeId': '03eace73-1237-1230-a877-831bc6e9dac2',
+ 'volumeName': 'V3'},
+ '12345473-1237-49d0-a877-831bc6e9dac2':
+ {'bricks': ['192.168.122.167:/tmp/V2-b1',
+ '192.168.122.168:/tmp/V2-b1'],
+ 'taskType': 'replace-brick',
+ 'volumeId': '03eace73-1237-49d0-a877-831bc6e9dac2',
+ 'volumeName': 'V2'},
+ '12345473-9197-49d0-a877-831bc6e9dac2':
+ {'bricks': [],
+ 'taskType': 'rebalance',
+ 'volumeId': '03eace73-9197-49d0-a877-831bc6e9dac2',
+ 'volumeName': 'V1'}}
+ tree = etree.fromstring(out)
+ status = gcli._parseVolumeStatusAll(tree)
+ self.assertEquals(status, ostatus)
+
+ def test_parseVolumeRebalanceStatus(self):
+ out = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<cliOutput>
+ <opRet>0</opRet>
+ <opErrno>0</opErrno>
+ <opErrstr></opErrstr>
+ <volRebalance>
+ <id>03eace73-9197-49d0-a877-831bc6e9dac2</id>
+ <node>
+ <nodeName>192.168.122.2</nodeName>
+ <lookups>7628</lookups>
+ <files>273</files>
+ <failures>0</failures>
+ <size>468918728</size>
+ <status>RUNNING</status>
+ </node>
+ <node>
+ <nodeName>FC16-1</nodeName>
+ <lookups>2734</lookups>
+ <files>765</files>
+ <failures>57</failures>
+ <size>918728</size>
+ <status>FAILED</status>
+ </node>
+ <node>
+ <nodeName>FC16-2</nodeName>
+ <lookups>456</lookups>
+ <files>62</files>
+ <failures>0</failures>
+ <size>192876</size>
+ <status>COMPLETED</status>
+ </node>
+ <aggregate>
+ <lookups>10818</lookups>
+ <files>1100</files>
+ <failures>57</failures>
+ <size>470030332</size>
+ <status>RUNNING</status>
+ </aggregate>
+ </volRebalance>
+</cliOutput>
+"""
+ ostatus = {'host': [{'name': '192.168.122.2',
+ 'filesScanned': 7628,
+ 'filesMoved': 273,
+ 'filesFailed': 0,
+ 'totalSizeMoved': 468918728,
+ 'status': gcli.TaskStatus.RUNNING},
+ {'name': 'FC16-1',
+ 'filesScanned': 2734,
+ 'filesMoved': 765,
+ 'filesFailed': 57,
+ 'totalSizeMoved': 918728,
+ 'status': gcli.TaskStatus.FAILED},
+ {'name': 'FC16-2',
+ 'filesScanned': 456,
+ 'filesMoved': 62,
+ 'filesFailed': 0,
+ 'totalSizeMoved': 192876,
+ 'status': gcli.TaskStatus.COMPLETED}],
+ 'summary': {'filesScanned': 10818,
+ 'filesMoved': 1100,
+ 'filesFailed': 57,
+ 'totalSizeMoved': 470030332,
+ 'status': gcli.TaskStatus.RUNNING},
+ 'taskId': '03eace73-9197-49d0-a877-831bc6e9dac2'}
+ tree = etree.fromstring(out)
+ status = gcli._parseVolumeRebalanceStatus(tree)
+ self.assertEquals(status, ostatus)
+
+ def test_parseVolumeRemoveBrickStatus(self):
+ out = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<cliOutput>
+ <opRet>0</opRet>
+ <opErrno>0</opErrno>
+ <opErrstr></opErrstr>
+ <volRemoveBrick>
+ <id>03eace73-9197-49d0-a877-831bc6e9dac2</id>
+ <node>
+ <nodeName>192.168.122.2</nodeName>
+ <lookups>7628</lookups>
+ <files>273</files>
+ <failures>0</failures>
+ <size>468918728</size>
+ <status>RUNNING</status>
+ </node>
+ <node>
+ <nodeName>FC16-1</nodeName>
+ <lookups>2734</lookups>
+ <files>765</files>
+ <failures>57</failures>
+ <size>918728</size>
+ <status>FAILED</status>
+ </node>
+ <node>
+ <nodeName>FC16-2</nodeName>
+ <lookups>456</lookups>
+ <files>62</files>
+ <failures>0</failures>
+ <size>192876</size>
+ <status>COMPLETED</status>
+ </node>
+ <aggregate>
+ <lookups>10818</lookups>
+ <files>1100</files>
+ <failures>57</failures>
+ <size>470030332</size>
+ <status>RUNNING</status>
+ </aggregate>
+ </volRemoveBrick>
+</cliOutput>
+"""
+ ostatus = {'host': [{'name': '192.168.122.2',
+ 'filesScanned': 7628,
+ 'filesMoved': 273,
+ 'filesFailed': 0,
+ 'totalSizeMoved': 468918728,
+ 'status': gcli.TaskStatus.RUNNING},
+ {'name': 'FC16-1',
+ 'filesScanned': 2734,
+ 'filesMoved': 765,
+ 'filesFailed': 57,
+ 'totalSizeMoved': 918728,
+ 'status': gcli.TaskStatus.FAILED},
+ {'name': 'FC16-2',
+ 'filesScanned': 456,
+ 'filesMoved': 62,
+ 'filesFailed': 0,
+ 'totalSizeMoved': 192876,
+ 'status': gcli.TaskStatus.COMPLETED}],
+ 'summary': {'filesScanned': 10818,
+ 'filesMoved': 1100,
+ 'filesFailed': 57,
+ 'totalSizeMoved': 470030332,
+ 'status': gcli.TaskStatus.RUNNING},
+ 'taskId': '03eace73-9197-49d0-a877-831bc6e9dac2'}
+ tree = etree.fromstring(out)
+ status = gcli._parseVolumeRemoveBrickStatus(tree)
+ self.assertEquals(status, ostatus)
+
+ def test_parseVolumeReplaceBrickStatus(self):
+ out = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<cliOutput>
+ <opRet>0</opRet>
+ <opErrno>0</opErrno>
+ <opErrstr></opErrstr>
+ <volReplaceBrick>
+ <id>03eace73-9197-49d0-a877-831bc6e9dac2</id>
+ <filesMoved>273</filesMoved>
+ <movingFile>pixmaps/logfactor5.png</movingFile>
+ <status>RUNNING</status>
+ </volReplaceBrick>
+</cliOutput>
+"""
+ ostatus = {'filesMoved': 273,
+ 'movingFile': 'pixmaps/logfactor5.png',
+ 'status': gcli.TaskStatus.RUNNING,
+ 'taskId': '03eace73-9197-49d0-a877-831bc6e9dac2'}
+ tree = etree.fromstring(out)
+ status = gcli._parseVolumeReplaceBrickStatus(tree)
+ self.assertEquals(status, ostatus)
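For reviewers running these tests locally: the replace-brick parsing the last test exercises reduces to a few stdlib ElementTree lookups. A minimal standalone sketch, with the element names taken from the fixture above and the variable names purely illustrative:

```python
import xml.etree.ElementTree as etree

out = """<cliOutput>
  <volReplaceBrick>
    <id>03eace73-9197-49d0-a877-831bc6e9dac2</id>
    <filesMoved>273</filesMoved>
    <movingFile>pixmaps/logfactor5.png</movingFile>
    <status>RUNNING</status>
  </volReplaceBrick>
</cliOutput>"""

# find() accepts a simple path relative to the root element
tree = etree.fromstring(out)
status = {'taskId': tree.find('volReplaceBrick/id').text,
          'filesMoved': int(tree.find('volReplaceBrick/filesMoved').text),
          'movingFile': tree.find('volReplaceBrick/movingFile').text,
          'status': tree.find('volReplaceBrick/status').text}
```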
diff --git a/vdsm/gluster/api.py b/vdsm/gluster/api.py
index 5f0b0ed..2121ffd 100644
--- a/vdsm/gluster/api.py
+++ b/vdsm/gluster/api.py
@@ -19,9 +19,12 @@
#
from functools import wraps
+import logging
from vdsm.define import doneCode
import supervdsm as svdsm
+from cli import TaskType, TaskAction
+import exception as ge
_SUCCESS = {'status': doneCode}
@@ -45,11 +48,22 @@
The gluster interface of vdsm.
"""
+ svdsmProxy = svdsm.getProxy()
+ _taskActionMap = \
+ {TaskType.REBALANCE:
+ {TaskAction.STOP: svdsmProxy.glusterVolumeRebalanceStop},
+ TaskType.REPLACE_BRICK:
+ {TaskAction.STOP: svdsmProxy.glusterVolumeReplaceBrickStop,
+ TaskAction.ABORT: svdsmProxy.glusterVolumeReplaceBrickAbort,
+ TaskAction.PAUSE: svdsmProxy.glusterVolumeReplaceBrickPause,
+ TaskAction.COMMIT: svdsmProxy.glusterVolumeReplaceBrickCommit},
+ TaskType.REMOVE_BRICK:
+ {TaskAction.STOP: svdsmProxy.glusterVolumeRemoveBrickStop,
+ TaskAction.COMMIT: svdsmProxy.glusterVolumeRemoveBrickCommit}}
def __init__(self, cif, log):
self.cif = cif
self.log = log
- self.svdsmProxy = svdsm.getProxy()
@exportAsVerb
def volumesList(self, volumeName=None, options=None):
@@ -95,9 +109,9 @@
@exportAsVerb
def volumeRebalanceStart(self, volumeName, rebalanceType="",
force=False, options=None):
- self.svdsmProxy.glusterVolumeRebalanceStart(volumeName,
- rebalanceType,
- force)
+ return self.svdsmProxy.glusterVolumeRebalanceStart(volumeName,
+ rebalanceType,
+ force)
@exportAsVerb
def volumeRebalanceStop(self, volumeName, force=False, options=None):
@@ -105,15 +119,15 @@
@exportAsVerb
def volumeRebalanceStatus(self, volumeName, options=None):
- st, msg = self.svdsmProxy.glusterVolumeRebalanceStatus(volumeName)
- return {'rebalance': st, 'message': msg}
+ return {'volumeStatus':
+ self.svdsmProxy.glusterVolumeRebalanceStatus(volumeName)}
@exportAsVerb
def volumeReplaceBrickStart(self, volumeName, existingBrick, newBrick,
options=None):
- self.svdsmProxy.glusterVolumeReplaceBrickStart(volumeName,
- existingBrick,
- newBrick)
+ return self.svdsmProxy.glusterVolumeReplaceBrickStart(volumeName,
+ existingBrick,
+ newBrick)
@exportAsVerb
def volumeReplaceBrickAbort(self, volumeName, existingBrick, newBrick,
@@ -132,10 +146,10 @@
@exportAsVerb
def volumeReplaceBrickStatus(self, volumeName, oldBrick, newBrick,
options=None):
- st, msg = self.svdsmProxy.glusterVolumeReplaceBrickStatus(volumeName,
- oldBrick,
- newBrick)
- return {'replaceBrick': st, 'message': msg}
+ return {'volumeStatus':
+ self.svdsmProxy.glusterVolumeReplaceBrickStatus(volumeName,
+ oldBrick,
+ newBrick)}
@exportAsVerb
def volumeReplaceBrickCommit(self, volumeName, existingBrick, newBrick,
@@ -148,8 +162,9 @@
@exportAsVerb
def volumeRemoveBrickStart(self, volumeName, brickList,
replicaCount=0, options=None):
- self.svdsmProxy.glusterVolumeRemoveBrickStart(volumeName, brickList,
- replicaCount)
+ return self.svdsmProxy.glusterVolumeRemoveBrickStart(volumeName,
+ brickList,
+ replicaCount)
@exportAsVerb
def volumeRemoveBrickStop(self, volumeName, brickList,
@@ -160,10 +175,10 @@
@exportAsVerb
def volumeRemoveBrickStatus(self, volumeName, brickList,
replicaCount=0, options=None):
- message = self.svdsmProxy.glusterVolumeRemoveBrickStatus(volumeName,
- brickList,
- replicaCount)
- return {'message': message}
+ status = self.svdsmProxy.glusterVolumeRemoveBrickStatus(volumeName,
+ brickList,
+ replicaCount)
+ return {'volumeStatus': status}
@exportAsVerb
def volumeRemoveBrickCommit(self, volumeName, brickList,
@@ -186,6 +201,91 @@
return {'volumeStatus': status}
@exportAsVerb
+ def taskActionPerform(self, taskId, action, options=None):
+ tasks = self.svdsmProxy.glusterVolumeStatusAll()
+ if taskId not in tasks:
+ raise ge.GlusterTaskNotFoundException(taskId)
+
+ act = getattr(TaskAction, action, None)
+ if not act:
+ raise ge.GlusterTaskActionNotFoundException(taskId, action)
+
+ value = tasks[taskId]
+ taskType = value['taskType']
+ if act not in self._taskActionMap[taskType]:
+ raise ge.GlusterTaskActionUnsupportedException(taskId,
+ taskType,
+ action)
+
+ func = self._taskActionMap[taskType][act]
+ if taskType == TaskType.REBALANCE:
+ func(value['volumeName'])
+ elif taskType == TaskType.REMOVE_BRICK:
+ func(value['volumeName'], value['bricks'])
+ elif taskType == TaskType.REPLACE_BRICK:
+ func(value['volumeName'], value['bricks'][0], value['bricks'][1])
+ else:
+ raise ge.GlusterTaskTypeUnknownException(taskId, taskType)
+
+ @exportAsVerb
+ def tasksList(self, options=None):
+ """
+ Return all gluster tasks as
+ [{"id": TASKID,
+ "verb": VOLUMENAME,
+ "state": TaskStatus,
+ "code": TaskType,
+ "message": STRING,
+ "result": '',
+ "tag": 'gluster'}, ...]
+ """
+ subRes = {}
+ for taskId, value in self.svdsmProxy.glusterVolumeStatusAll().iteritems():
+ try:
+ msg = ''
+ state = ''
+ if value['taskType'] == TaskType.REBALANCE:
+ status = self.svdsmProxy.\
+ glusterVolumeRebalanceStatus(value['volumeName'])
+ msg = ('Files [scanned: %d, moved: %d, failed: %d], '
+ 'Total size moved: %d') % \
+ (status['summary']['filesScanned'],
+ status['summary']['filesMoved'],
+ status['summary']['filesFailed'],
+ status['summary']['totalSizeMoved'])
+ state = status['summary']['status']
+ elif value['taskType'] == TaskType.REMOVE_BRICK:
+ status = self.svdsmProxy.\
+ glusterVolumeRemoveBrickStatus(value['volumeName'],
+ value['bricks'])
+ msg = ('Files [scanned: %d, moved: %d, failed: %d], '
+ 'Total size moved: %d') % \
+ (status['summary']['filesScanned'],
+ status['summary']['filesMoved'],
+ status['summary']['filesFailed'],
+ status['summary']['totalSizeMoved'])
+ state = status['summary']['status']
+ elif value['taskType'] == TaskType.REPLACE_BRICK:
+ status = self.svdsmProxy.\
+ glusterVolumeReplaceBrickStatus(value['volumeName'],
+ value['bricks'][0],
+ value['bricks'][1])
+ msg = 'Files moved: %d, Moving file: %s' % \
+ (status['filesMoved'], status['movingFile'])
+ state = status['status']
+
+ subRes[taskId] = {"id": taskId,
+ "verb": value['volumeName'],
+ "state": state,
+ "code": value['taskType'],
+ "message": msg,
+ "result": '',
+ "tag": 'gluster'}
+ except ge.GlusterException:
+ logging.error("gluster exception occurred", exc_info=True)
+ return subRes
+
+ @exportAsVerb
def hostAdd(self, hostName, options=None):
self.svdsmProxy.glusterPeerProbe(hostName)
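The per-task-type dispatch table added to `GlusterApi.taskActionPerform` can be sketched in isolation. This is a hypothetical, trimmed-down version; the handler is a stand-in, not a real supervdsm proxy call:

```python
class TaskType:
    REBALANCE = 'REBALANCE'

class TaskAction:
    STOP = 'STOP'

def _rebalance_stop(volumeName):
    # stand-in for svdsmProxy.glusterVolumeRebalanceStop
    return 'rebalance stopped on %s' % volumeName

# map (task type -> supported action -> handler)
_taskActionMap = {TaskType.REBALANCE: {TaskAction.STOP: _rebalance_stop}}

def perform(taskType, action, volumeName):
    # an unsupported (taskType, action) pair must raise, so the
    # membership test is negated before dispatching
    actions = _taskActionMap.get(taskType, {})
    if action not in actions:
        raise KeyError('action %s unsupported for %s' % (action, taskType))
    return actions[action](volumeName)

result = perform(TaskType.REBALANCE, TaskAction.STOP, 'V1')
```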
diff --git a/vdsm/gluster/cli.py b/vdsm/gluster/cli.py
index 7136281..c3f2ed8 100644
--- a/vdsm/gluster/cli.py
+++ b/vdsm/gluster/cli.py
@@ -72,6 +72,25 @@
RDMA = 'RDMA'
+class TaskType:
+ REBALANCE = 'REBALANCE'
+ REPLACE_BRICK = 'REPLACE_BRICK'
+ REMOVE_BRICK = 'REMOVE_BRICK'
+
+
+class TaskStatus:
+ RUNNING = 'RUNNING'
+ FAILED = 'FAILED'
+ COMPLETED = 'COMPLETED'
+
+
+class TaskAction:
+ STOP = 'STOP'
+ ABORT = 'ABORT'
+ PAUSE = 'PAUSE'
+ COMMIT = 'COMMIT'
+
+
def _execGluster(cmd):
return utils.execCmd(cmd)
@@ -303,6 +322,50 @@
return _parseVolumeStatusMem(xmltree)
else:
return _parseVolumeStatus(xmltree)
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+
+
+def _parseVolumeStatusAll(tree):
+ """
+ returns {TaskId: {'volumeName': VolumeName,
+ 'volumeId': VolumeId,
+ 'taskType': TaskType,
+ 'bricks': BrickList}, ...}
+ """
+ tasks = {}
+ for el in tree.findall('volumes/volume'):
+ volumeName = el.find('name').text
+ volumeId = el.find('id').text
+ for c in el.findall('tasks/task'):
+ taskType = c.find('name').text
+ taskType = taskType.upper().replace('-', '_')
+ taskId = c.find('id').text
+ bricks = []
+ if taskType == TaskType.REPLACE_BRICK:
+ bricks.append(c.find('sourceBrick').text)
+ bricks.append(c.find('destBrick').text)
+ elif taskType == TaskType.REMOVE_BRICK:
+ for b in c.findall('brick'):
+ bricks.append(b.text)
+ elif taskType == TaskType.REBALANCE:
+ pass
+ tasks[taskId] = {'volumeName': volumeName,
+ 'volumeId': volumeId,
+ 'taskType': taskType,
+ 'bricks': bricks}
+ return tasks
+
+
+@exportToSuperVdsm
+def volumeStatusAll():
+ command = _getGlusterVolCmd() + ["status", "all"]
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeStatusAllFailedException(rc=e.rc, err=e.err)
+ try:
+ return _parseVolumeStatusAll(xmltree)
except (etree.ParseError, AttributeError, ValueError):
raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@@ -554,11 +617,15 @@
command.append("start")
if force:
command.append("force")
- rc, out, err = _execGluster(command)
- if rc:
- raise ge.GlusterVolumeRebalanceStartFailedException(rc, out, err)
- else:
- return True
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRebalanceStartFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return {'taskId': xmltree.find('id').text}
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
@@ -566,84 +633,141 @@
command = _getGlusterVolCmd() + ["rebalance", volumeName, "stop"]
if force:
command.append('force')
- rc, out, err = _execGluster(command)
- if rc:
- raise ge.GlusterVolumeRebalanceStopFailedException(rc, out, err)
- else:
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRebalanceStopFailedException(rc=e.rc,
+ err=e.err)
+
+
+def _parseVolumeRebalanceRemoveBrickStatus(xmltree, mode):
+ """
+ returns {'taskId': UUID,
+ 'host': [{'name': NAME,
+ 'filesScanned': INT,
+ 'filesMoved': INT,
+ 'filesFailed': INT,
+ 'totalSizeMoved': INT,
+ 'status': TaskStatus}, ...],
+ 'summary': {'filesScanned': INT,
+ 'filesMoved': INT,
+ 'filesFailed': INT,
+ 'totalSizeMoved': INT,
+ 'status': TaskStatus}}
+ """
+ if mode == 'rebalance':
+ tree = xmltree.find('volRebalance')
+ elif mode == 'remove-brick':
+ tree = xmltree.find('volRemoveBrick')
+ else:
+ return
+ status = \
+ {'taskId': tree.find('id').text,
+ 'summary':
+ {'filesScanned': int(tree.find('aggregate/lookups').text),
+ 'filesMoved': int(tree.find('aggregate/files').text),
+ 'filesFailed': int(tree.find('aggregate/failures').text),
+ 'totalSizeMoved': int(tree.find('aggregate/size').text),
+ 'status': tree.find('aggregate/status').text},
+ 'host': []}
+ for el in tree.findall('node'):
+ status['host'].append({'name': el.find('nodeName').text,
+ 'filesScanned':
+ int(el.find('lookups').text),
+ 'filesMoved': int(el.find('files').text),
+ 'filesFailed': int(el.find('failures').text),
+ 'totalSizeMoved':
+ int(el.find('size').text),
+ 'status': el.find('status').text})
+ return status
+
+
+def _parseVolumeRebalanceStatus(tree):
+ return _parseVolumeRebalanceRemoveBrickStatus(tree, 'rebalance')
@exportToSuperVdsm
def volumeRebalanceStatus(volumeName):
- rc, out, err = _execGluster(_getGlusterVolCmd() + ["rebalance", volumeName,
- "status"])
- if rc:
- raise ge.GlusterVolumeRebalanceStatusFailedException(rc, out, err)
- if 'in progress' in out[0]:
- return BrickStatus.RUNNING, "\n".join(out)
- elif 'complete' in out[0]:
- return BrickStatus.COMPLETED, "\n".join(out)
- else:
- return BrickStatus.UNKNOWN, "\n".join(out)
+ command = _getGlusterVolCmd() + ["rebalance", volumeName, "status"]
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRebalanceStatusFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return _parseVolumeRebalanceStatus(xmltree)
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
def volumeReplaceBrickStart(volumeName, existingBrick, newBrick):
- rc, out, err = _execGluster(_getGlusterVolCmd() + ["replace-brick",
- volumeName,
- existingBrick, newBrick,
- "start"])
- if rc:
- raise ge.GlusterVolumeReplaceBrickStartFailedException(rc, out, err)
- else:
- return True
+ command = _getGlusterVolCmd() + ["replace-brick", volumeName,
+ existingBrick, newBrick, "start"]
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeReplaceBrickStartFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return {'taskId': xmltree.find('id').text}
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
def volumeReplaceBrickAbort(volumeName, existingBrick, newBrick):
- rc, out, err = _execGluster(_getGlusterVolCmd() + ["replace-brick",
- volumeName,
- existingBrick, newBrick,
- "abort"])
- if rc:
- raise ge.GlusterVolumeReplaceBrickAbortFailedException(rc, out, err)
- else:
+ command = _getGlusterVolCmd() + ["replace-brick", volumeName,
+ existingBrick, newBrick, "abort"]
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeReplaceBrickAbortFailedException(rc=e.rc,
+ err=e.err)
@exportToSuperVdsm
def volumeReplaceBrickPause(volumeName, existingBrick, newBrick):
- rc, out, err = _execGluster(_getGlusterVolCmd() + ["replace-brick",
- volumeName,
- existingBrick, newBrick,
- "pause"])
- if rc:
- raise ge.GlusterVolumeReplaceBrickPauseFailedException(rc, out, err)
- else:
+ command = _getGlusterVolCmd() + ["replace-brick", volumeName,
+ existingBrick, newBrick, "pause"]
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeReplaceBrickPauseFailedException(rc=e.rc,
+ err=e.err)
+
+
+def _parseVolumeReplaceBrickStatus(tree):
+ """
+ returns {'taskId': UUID,
+ 'filesMoved': INT,
+ 'movingFile': STRING,
+ 'status': TaskStatus}
+ """
+ return {'taskId': tree.find('volReplaceBrick/id').text,
+ 'filesMoved': int(tree.find('volReplaceBrick/filesMoved').text),
+ 'movingFile': tree.find('volReplaceBrick/movingFile').text,
+ 'status': tree.find('volReplaceBrick/status').text}
@exportToSuperVdsm
def volumeReplaceBrickStatus(volumeName, existingBrick, newBrick):
- rc, out, err = _execGluster(_getGlusterVolCmd() + ["replace-brick",
- volumeName,
- existingBrick, newBrick,
- "status"])
- if rc:
- raise ge.GlusterVolumeReplaceBrickStatusFailedException(rc, out,
- err)
- message = "\n".join(out)
- statLine = out[0].strip().upper()
- if BrickStatus.PAUSED in statLine:
- return BrickStatus.PAUSED, message
- elif statLine.endswith('MIGRATION COMPLETE'):
- return BrickStatus.COMPLETED, message
- elif statLine.startswith('NUMBER OF FILES MIGRATED'):
- return BrickStatus.RUNNING, message
- elif statLine.endswith("UNKNOWN"):
- return BrickStatus.UNKNOWN, message
- else:
- return BrickStatus.NA, message
+ command = _getGlusterVolCmd() + ["replace-brick", volumeName,
+ existingBrick, newBrick, "status"]
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeReplaceBrickStatusFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return _parseVolumeReplaceBrickStatus(xmltree)
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
@@ -653,12 +777,12 @@
existingBrick, newBrick, "commit"]
if force:
command.append('force')
- rc, out, err = _execGluster(command)
- if rc:
- raise ge.GlusterVolumeReplaceBrickCommitFailedException(rc, out,
- err)
- else:
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeReplaceBrickCommitFailedException(rc=e.rc,
+ err=e.err)
@exportToSuperVdsm
@@ -667,12 +791,15 @@
if replicaCount:
command += ["replica", "%s" % replicaCount]
command += brickList + ["start"]
-
- rc, out, err = _execGluster(command)
- if rc:
- raise ge.GlusterVolumeRemoveBrickStartFailedException(rc, out, err)
- else:
- return True
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRemoveBrickStartFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return {'taskId': xmltree.find('id').text}
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
@@ -681,12 +808,16 @@
if replicaCount:
command += ["replica", "%s" % replicaCount]
command += brickList + ["stop"]
- rc, out, err = _execGluster(command)
-
- if rc:
- raise ge.GlusterVolumeRemoveBrickStopFailedException(rc, out, err)
- else:
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRemoveBrickStopFailedException(rc=e.rc,
+ err=e.err)
+
+
+def _parseVolumeRemoveBrickStatus(tree):
+ return _parseVolumeRebalanceRemoveBrickStatus(tree, 'remove-brick')
@exportToSuperVdsm
@@ -695,12 +826,15 @@
if replicaCount:
command += ["replica", "%s" % replicaCount]
command += brickList + ["status"]
- rc, out, err = _execGluster(command)
-
- if rc:
- raise ge.GlusterVolumeRemoveBrickStatusFailedException(rc, out, err)
- else:
- return "\n".join(out)
+ try:
+ xmltree = _execGlusterXml(command)
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRemoveBrickStatusFailedException(rc=e.rc,
+ err=e.err)
+ try:
+ return _parseVolumeRemoveBrickStatus(xmltree)
+ except (etree.ParseError, AttributeError, ValueError):
+ raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
@exportToSuperVdsm
@@ -709,12 +843,12 @@
if replicaCount:
command += ["replica", "%s" % replicaCount]
command += brickList + ["commit"]
- rc, out, err = _execGluster(command)
-
- if rc:
- raise ge.GlusterVolumeRemoveBrickCommitFailedException(rc, out, err)
- else:
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRemoveBrickCommitFailedException(rc=e.rc,
+ err=e.err)
@exportToSuperVdsm
@@ -723,12 +857,12 @@
if replicaCount:
command += ["replica", "%s" % replicaCount]
command += brickList + ["force"]
- rc, out, err = _execGluster(command)
-
- if rc:
- raise ge.GlusterVolumeRemoveBrickForceFailedException(rc, out, err)
- else:
+ try:
+ _execGlusterXml(command)
return True
+ except ge.GlusterCmdFailedException, e:
+ raise ge.GlusterVolumeRemoveBrickForceFailedException(rc=e.rc,
+ err=e.err)
@exportToSuperVdsm
diff --git a/vdsm/gluster/exception.py b/vdsm/gluster/exception.py
index e921d7d..bcc7835 100644
--- a/vdsm/gluster/exception.py
+++ b/vdsm/gluster/exception.py
@@ -351,6 +351,56 @@
message = "Volume profile info failed"
+class GlusterVolumeStatusAllFailedException(GlusterVolumeException):
+ code = 4161
+ message = "Volume status all failed"
+
+
+class GlusterTaskNotFoundException(GlusterVolumeException):
+ code = 4162
+ message = "Task not found"
+
+ def __init__(self, taskId):
+ self.taskId = taskId
+ s = 'task id: %s' % taskId
+ self.err = [s]
+
+
+class GlusterTaskActionNotFoundException(GlusterVolumeException):
+ code = 4163
+ message = "Task action not found"
+
+ def __init__(self, taskId, action):
+ self.taskId = taskId
+ self.action = action
+ s = 'Action %s not found for task %s' % (action, taskId)
+ self.err = [s]
+
+
+class GlusterTaskActionUnsupportedException(GlusterVolumeException):
+ code = 4164
+ message = "Task action unsupported"
+
+ def __init__(self, taskId, taskType, action):
+ self.taskId = taskId
+ self.taskType = taskType
+ self.action = action
+ s = 'Unsupported action %s for task %s and type %s' % \
+ (action, taskId, taskType)
+ self.err = [s]
+
+
+class GlusterTaskTypeUnknownException(GlusterVolumeException):
+ code = 4165
+ message = "Task type unknown"
+
+ def __init__(self, taskId, taskType):
+ self.taskId = taskId
+ self.taskType = taskType
+ s = 'Unknown task type %s for task %s' % (taskType, taskId)
+ self.err = [s]
+
+
# Host
class GlusterHostException(GlusterException):
code = 4400
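All of the vdsClientGluster handlers in the following diff switch from positional args to key=value pairs via the `_eqSplit` helper. A hypothetical re-implementation of that splitting (the real vdsm helper may differ in detail):

```python
def eq_split(args):
    # turn ['volumeName=V1', 'force=yes'] into {'volumeName': 'V1', ...};
    # an argument without '=' maps to an empty string value
    params = {}
    for arg in args:
        key, _, value = arg.partition('=')
        params[key] = value
    return params

params = eq_split(['volumeName=V1', 'force=yes'])
# boolean flags are compared case-insensitively against 'yes'
force = (params.get('force', 'no').upper() == 'YES')
```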
diff --git a/vdsm_cli/vdsClientGluster.py b/vdsm_cli/vdsClientGluster.py
index be47696..39c4782 100644
--- a/vdsm_cli/vdsClientGluster.py
+++ b/vdsm_cli/vdsClientGluster.py
@@ -112,23 +112,29 @@
return status['status']['code'], status['status']['message']
def do_glusterVolumeRebalanceStart(self, args):
- params = self._eqSplit(args[1:])
- rebalanceType = params.get('type', 'fix-layout')
- force = params.get('force', False)
- status = self.s.glusterVolumeRebalanceStart(args[0],
- rebalanceType, force)
- pp.pprint(status)
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ rebalanceType = params.get('rebalanceType', '')
+ force = (params.get('force', 'no').upper() == 'YES')
+
+ status = self.s.glusterVolumeRebalanceStart(volumeName,
+ rebalanceType,
+ force)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRebalanceStop(self, args):
- params = self._eqSplit(args[1:])
- force = params.get('force', False)
- status = self.s.glusterVolumeRebalanceStop(args[0], force)
- pp.pprint(status)
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ force = (params.get('force', 'no').upper() == 'YES')
+
+ status = self.s.glusterVolumeRebalanceStop(volumeName, force)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRebalanceStatus(self, args):
- status = self.s.glusterVolumeRebalanceStatus(args[0])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+
+ status = self.s.glusterVolumeRebalanceStatus(volumeName)
pp.pprint(status)
return status['status']['code'], status['status']['message']
@@ -148,76 +154,118 @@
return status['status']['code'], status['status']['message']
def do_glusterVolumeReplaceBrickStart(self, args):
- status = self.s.glusterVolumeReplaceBrickStart(args[0], args[1],
- args[2])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ existingBrick = params.get('existingBrick', '')
+ newBrick = params.get('newBrick', '')
+
+ status = self.s.glusterVolumeReplaceBrickStart(volumeName,
+ existingBrick,
+ newBrick)
return status['status']['code'], status['status']['message']
def do_glusterVolumeReplaceBrickAbort(self, args):
- status = self.s.glusterVolumeReplaceBrickAbort(args[0], args[1],
- args[2])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ existingBrick = params.get('existingBrick', '')
+ newBrick = params.get('newBrick', '')
+
+ status = self.s.glusterVolumeReplaceBrickAbort(volumeName,
+ existingBrick,
+ newBrick)
return status['status']['code'], status['status']['message']
def do_glusterVolumeReplaceBrickPause(self, args):
- status = self.s.glusterVolumeReplaceBrickPause(args[0], args[1],
- args[2])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ existingBrick = params.get('existingBrick', '')
+ newBrick = params.get('newBrick', '')
+
+ status = self.s.glusterVolumeReplaceBrickPause(volumeName,
+ existingBrick,
+ newBrick)
return status['status']['code'], status['status']['message']
def do_glusterVolumeReplaceBrickStatus(self, args):
- status = self.s.glusterVolumeReplaceBrickStatus(args[0], args[1],
- args[2])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ existingBrick = params.get('existingBrick', '')
+ newBrick = params.get('newBrick', '')
+
+ status = self.s.glusterVolumeReplaceBrickStatus(volumeName,
+ existingBrick,
+ newBrick)
+ pp.pprint(status)
return status['status']['code'], status['status']['message']
def do_glusterVolumeReplaceBrickCommit(self, args):
- status = self.s.glusterVolumeReplaceBrickCommit(args[0], args[1],
- args[2])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
+ existingBrick = params.get('existingBrick', '')
+ newBrick = params.get('newBrick', '')
+ force = (params.get('force', 'no').upper() == 'YES')
+
+ status = self.s.glusterVolumeReplaceBrickCommit(volumeName,
+ existingBrick,
+ newBrick,
+ force)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRemoveBrickStart(self, args):
- params = self._eqSplit(args[1:])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
try:
brickList = params['bricks'].split(',')
except:
raise ValueError
replicaCount = params.get('replica', '')
- status = self.s.glusterVolumeRemoveBrickStart(args[0], brickList,
+
+ status = self.s.glusterVolumeRemoveBrickStart(volumeName,
+ brickList,
replicaCount)
- pp.pprint(status)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRemoveBrickStop(self, args):
- params = self._eqSplit(args[1:])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
try:
brickList = params['bricks'].split(',')
except:
raise ValueError
replicaCount = params.get('replica', '')
- status = self.s.glusterVolumeRemoveBrickStop(args[0], brickList,
+
+ status = self.s.glusterVolumeRemoveBrickStop(volumeName,
+ brickList,
replicaCount)
- pp.pprint(status)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRemoveBrickStatus(self, args):
- params = self._eqSplit(args[1:])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
try:
brickList = params['bricks'].split(',')
except:
raise ValueError
replicaCount = params.get('replica', '')
- status = self.s.glusterVolumeRemoveBrickStatus(args[0], brickList,
+
+ status = self.s.glusterVolumeRemoveBrickStatus(volumeName,
+ brickList,
replicaCount)
pp.pprint(status)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRemoveBrickCommit(self, args):
- params = self._eqSplit(args[1:])
+ params = self._eqSplit(args)
+ volumeName = params.get('volumeName', '')
try:
brickList = params['bricks'].split(',')
except:
raise ValueError
replicaCount = params.get('replica', '')
- status = self.s.glusterVolumeRemoveBrickCommit(args[0], brickList,
+
+ status = self.s.glusterVolumeRemoveBrickCommit(volumeName,
+ brickList,
replicaCount)
- pp.pprint(status)
return status['status']['code'], status['status']['message']
def do_glusterVolumeRemoveBrickForce(self, args):
@@ -270,6 +318,14 @@
status = self.s.glusterVolumeProfileInfo(volumeName, nfs)
pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
+ def do_glusterTaskActionPerform(self, args):
+ params = self._eqSplit(args)
+ taskId = params.get('taskId', '')
+ action = params.get('action', '')
+
+ status = self.s.glusterTaskActionPerform(taskId, action)
return status['status']['code'], status['status']['message']
@@ -338,18 +394,22 @@
)),
'glusterVolumeRebalanceStart': (
serv.do_glusterVolumeRebalanceStart,
- ('<volume_name>\n\t<volume_name> is existing volume name',
+ ('volumeName=<volume_name> [rebalanceType=fix-layout] '
+ '[force={yes|no}]\n\t'
+ '<volume_name> is existing volume name',
'start volume rebalance'
)),
'glusterVolumeRebalanceStop': (
serv.do_glusterVolumeRebalanceStop,
- ('<volume_name>\n\t<volume_name> is existing volume name',
+ ('volumeName=<volume_name> [force={yes|no}]\n\t'
+ '<volume_name> is existing volume name',
'stop volume rebalance'
)),
'glusterVolumeRebalanceStatus': (
serv.do_glusterVolumeRebalanceStatus,
- ('<volume_name>\n\t<volume_name> is existing volume name',
- 'get volume rebalance status'
+ ('volumeName=<volume_name>\n\t'
+ '<volume_name> is existing volume name',
+ 'get volume rebalance status'
)),
'glusterVolumeDelete': (
serv.do_glusterVolumeDelete,
@@ -366,65 +426,79 @@
)),
'glusterVolumeReplaceBrickStart': (
serv.do_glusterVolumeReplaceBrickStart,
- ('<volume_name> <existing_brick> <new_brick> \n\t<volume_name> '
- 'is existing volume name\n\t<brick> is existing brick\n\t'
+ ('volumeName=<volume_name> existingBrick=<existing_brick> '
+ 'newBrick=<new_brick>\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<existing_brick> is existing brick\n\t'
'<new_brick> is new brick',
'start volume replace brick'
)),
'glusterVolumeReplaceBrickAbort': (
serv.do_glusterVolumeReplaceBrickAbort,
- ('<volume_name> <existing_brick> <new_brick> \n\t<volume_name> '
- 'is existing volume name\n\t<brick> is existing brick\n\t'
+ ('volumeName=<volume_name> existingBrick=<existing_brick> '
+ 'newBrick=<new_brick>\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<existing_brick> is existing brick\n\t'
'<new_brick> is new brick',
'abort volume replace brick'
)),
'glusterVolumeReplaceBrickPause': (
serv.do_glusterVolumeReplaceBrickPause,
- ('<volume_name> <existing_brick> <new_brick> \n\t<volume_name> '
- 'is existing volume name\n\t<brick> is existing brick\n\t'
+ ('volumeName=<volume_name> existingBrick=<existing_brick> '
+ 'newBrick=<new_brick>\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<existing_brick> is existing brick\n\t'
'<new_brick> is new brick',
'pause volume replace brick'
)),
'glusterVolumeReplaceBrickStatus': (
serv.do_glusterVolumeReplaceBrickStatus,
- ('<volume_name> <existing_brick> <new_brick> \n\t<volume_name> '
- 'is existing volume name\n\t<brick> is existing brick\n\t'
+ ('volumeName=<volume_name> existingBrick=<existing_brick> '
+ 'newBrick=<new_brick>\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<existing_brick> is existing brick\n\t'
'<new_brick> is new brick',
'get volume replace brick status'
)),
'glusterVolumeReplaceBrickCommit': (
serv.do_glusterVolumeReplaceBrickCommit,
- ('<volume_name> <existing_brick> <new_brick> \n\t<volume_name> '
- 'is existing volume name\n\t<brick> is existing brick\n\t'
+ ('volumeName=<volume_name> existingBrick=<existing_brick> '
+ 'newBrick=<new_brick> [force={yes|no}]\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<existing_brick> is existing brick\n\t'
'<new_brick> is new brick',
'commit volume replace brick'
)),
'glusterVolumeRemoveBrickStart': (
serv.do_glusterVolumeRemoveBrickStart,
- ('<volume_name> [replica=<count>] bricks=brick[,brick] ... \n\t'
- '<volume_name> is existing volume name\n\t<brick> is '
- 'existing brick',
+ ('volumeName=<volume_name> bricks=<brick[,brick, ...]> '
+ '[replica=<count>]\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<brick[,brick, ...]> is existing brick(s)',
'start volume remove bricks'
)),
'glusterVolumeRemoveBrickStop': (
serv.do_glusterVolumeRemoveBrickStop,
- ('<volume_name> [replica=<count>] bricks=brick[,brick] ... \n\t'
- '<volume_name> is existing volume name\n\t<brick> is '
- 'existing brick',
+ ('volumeName=<volume_name> bricks=<brick[,brick, ...]> '
+ '[replica=<count>]\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<brick[,brick, ...]> is existing brick(s)',
'stop volume remove bricks'
)),
'glusterVolumeRemoveBrickStatus': (
serv.do_glusterVolumeRemoveBrickStatus,
- ('<volume_name> [replica=<count>] bricks=brick[,brick] ... \n\t'
- '<volume_name> is existing volume name\n\t<brick> is '
- 'existing brick',
+ ('volumeName=<volume_name> bricks=<brick[,brick, ...]> '
+ '[replica=<count>]\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<brick[,brick, ...]> is existing brick(s)',
'get volume remove bricks status'
)),
'glusterVolumeRemoveBrickCommit': (
serv.do_glusterVolumeRemoveBrickCommit,
- ('<volume_name> [replica=<count>] bricks=brick[,brick] ... \n\t'
- '<volume_name> is existing volume name\n\t<brick> is '
- 'existing brick',
+ ('volumeName=<volume_name> bricks=<brick[,brick, ...]> '
+ '[replica=<count>]\n\t'
+ '<volume_name> is existing volume name\n\t'
+ '<brick[,brick, ...]> is existing brick(s)',
'commit volume remove bricks'
)),
'glusterVolumeRemoveBrickForce': (
@@ -468,4 +542,11 @@
('volumeName=<volume_name> [nfs={yes|no}]\n\t'
'<volume_name> is existing volume name',
'get gluster volume profile info'
+ )),
+ 'glusterTaskActionPerform': (
+ serv.do_glusterTaskActionPerform,
+ ('taskId=<task_id> action=<action>\n\t'
+ '<task_id> is running task id\n\t'
+ '<action> is task action to be performed',
+ 'perform action on gluster task'
)), }
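The verbs above all take the key=value argument style (`volumeName=...`, `bricks=...`). A minimal sketch of parsing that style — the helper name and exact error handling are hypothetical, not taken from vdsClient itself:

```python
def parse_key_value_args(args):
    """Split CLI args like ['volumeName=music', 'replica=2'] into a dict."""
    params = {}
    for arg in args:
        if '=' not in arg:
            raise ValueError('expected key=value, got: %r' % arg)
        key, value = arg.split('=', 1)  # split once: values may contain '='
        params[key] = value
    return params

params = parse_key_value_args(
    ['volumeName=music', 'bricks=host1:/b1,host2:/b2', 'replica=2'])
bricks = params['bricks'].split(',')  # brick list is comma-separated
```

Optional keys such as `replica=<count>` then simply show up as absent dict entries, which is why the help strings mark them with brackets.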
--
To view, visit http://gerrit.ovirt.org/10200
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I154df353bc6f23001d7bf61b8f5345abd2019cb6
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Bala.FA <barumuga(a)redhat.com>
Gerrit-Reviewer: Ayal Baron <abaron(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: Timothy Asir <tjeyasin(a)redhat.com>

Change in vdsm[master]: vdsm: support VIR_MIGRATE_ABORT_ON_ERROR
by peet@redhat.com
Peter V. Saveliev has uploaded a new change for review.
Change subject: vdsm: support VIR_MIGRATE_ABORT_ON_ERROR
......................................................................
vdsm: support VIR_MIGRATE_ABORT_ON_ERROR
Abort VM migration on EIO by default. The flag has been supported
upstream since libvirt 1.0.1, so use getattr() to keep the code
compatible with older libvirt versions; with such older versions,
aborting the migration on EIO will not work.
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=961154
Signed-off-by: Peter V. Saveliev <peet(a)redhat.com>
Change-Id: Ic7f715c51f28ef2cd01fb95d42553ca10c79ea80
---
M lib/vdsm/config.py.in
M vdsm/vm.py
2 files changed, 16 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/22/17422/1
diff --git a/lib/vdsm/config.py.in b/lib/vdsm/config.py.in
index bdbd91a..2cb1b48 100644
--- a/lib/vdsm/config.py.in
+++ b/lib/vdsm/config.py.in
@@ -83,6 +83,10 @@
('migration_downtime_steps', '10',
'Incremental steps used to reach migration_downtime.'),
+ ('migration_abort_on_eio', 'true',
+ 'Abort VM migration on I/O error and refuse to migrate '
+ 'VMs paused because of EIO.'),
+
('max_outgoing_migrations', '3',
'Maximum concurrent outgoing migrations'),
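Since `migration_abort_on_eio` is stored as the string `'true'`, it must be read with `getboolean()`; a plain `get()` returns a non-empty string, which is always truthy. A stand-alone sketch using stock ConfigParser (vdsm's own config wrapper is not reproduced here):

```python
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2, as vdsm used then

cfg = ConfigParser()
cfg.add_section('vars')
cfg.set('vars', 'migration_abort_on_eio', 'false')

# get() returns the raw string 'false', which is truthy in Python, so an
# `if config.get(...)` test would always enable the flag; getboolean()
# actually parses the value.
raw = cfg.get('vars', 'migration_abort_on_eio')
parsed = cfg.getboolean('vars', 'migration_abort_on_eio')
```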
diff --git a/vdsm/vm.py b/vdsm/vm.py
index 4333170..8a60159 100644
--- a/vdsm/vm.py
+++ b/vdsm/vm.py
@@ -379,12 +379,23 @@
# side
self._preparingMigrationEvt = False
if not self._migrationCanceledEvt:
+ # Note on VIR_MIGRATE_ABORT_ON_ERROR:
+ #
+ # The flag was added in libvirt-1.1.0 and is available
+ # since libvirt-0.10.2-20.el6 in RHEL 6.5, but it will
+ # never be backported to RHEL 6.4. A versioned
+ # dependency in the spec file could enforce this, but
+ # the getattr() trick lets us avoid an unneeded hard
+ # requirement on a newer libvirt.
self._vm._dom.migrateToURI2(
duri, muri, None,
libvirt.VIR_MIGRATE_LIVE |
libvirt.VIR_MIGRATE_PEER2PEER |
(libvirt.VIR_MIGRATE_TUNNELLED if
- self._tunneled else 0),
+ self._tunneled else 0) |
+ (getattr(libvirt, 'VIR_MIGRATE_ABORT_ON_ERROR', 0) if
+ config.getboolean('vars',
+ 'migration_abort_on_eio') else 0),
None, maxBandwidth)
finally:
t.cancel()
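Outside the diff context, the getattr() fallback can be sketched like this. SimpleNamespace stands in for old and new libvirt Python bindings, and the flag values are illustrative, not libvirt's real constants:

```python
import types

# Binding that predates the flag vs. one that has it (values illustrative).
old_libvirt = types.SimpleNamespace(VIR_MIGRATE_LIVE=1,
                                    VIR_MIGRATE_PEER2PEER=2)
new_libvirt = types.SimpleNamespace(VIR_MIGRATE_LIVE=1,
                                    VIR_MIGRATE_PEER2PEER=2,
                                    VIR_MIGRATE_ABORT_ON_ERROR=64)

def migration_flags(libvirt_mod, abort_on_eio):
    flags = (libvirt_mod.VIR_MIGRATE_LIVE |
             libvirt_mod.VIR_MIGRATE_PEER2PEER)
    if abort_on_eio:
        # On bindings missing the attribute, getattr() yields 0 and the
        # OR is a harmless no-op: migration still runs, but the EIO
        # abort behaviour is simply unavailable.
        flags |= getattr(libvirt_mod, 'VIR_MIGRATE_ABORT_ON_ERROR', 0)
    return flags
```

This is why no spec-file dependency bump is needed: the same code works against both binding generations, degrading gracefully on the old one.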
--
To view, visit http://gerrit.ovirt.org/17422
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic7f715c51f28ef2cd01fb95d42553ca10c79ea80
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Peter V. Saveliev <peet(a)redhat.com>