Change in vdsm[master]: multipath: Move all calls to multipath exe to a single method
by smizrahi@redhat.com
Saggi Mizrahi has uploaded a new change for review.
Change subject: multipath: Move all calls to multipath exe to a single method
......................................................................
multipath: Move all calls to multipath exe to a single method
This makes the code a bit cleaner
Change-Id: I52afc07a07a925ed7572eb369deb7c203edb04cd
Signed-off-by: Saggi Mizrahi <smizrahi(a)redhat.com>
---
M vdsm/storage/multipath.py
1 file changed, 11 insertions(+), 4 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/55/19255/1
diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
index 924d747..c31b5c3 100644
--- a/vdsm/storage/multipath.py
+++ b/vdsm/storage/multipath.py
@@ -94,6 +94,10 @@
)
+def _runCmd(args):
+    return misc.execCmd([constants.EXT_MULTIPATH] + args, sudo=True)
+
+
 def rescan():
     """
     Forces multipath daemon to rescan the list of available devices and
@@ -108,8 +112,8 @@
     supervdsm.getProxy().forceScsiScan()
     # Now let multipath daemon pick up new devices
-    cmd = [constants.EXT_MULTIPATH, "-r"]
-    misc.execCmd(cmd, sudo=True)
+
+    _runCmd(["-r"])
 def isEnabled():
@@ -154,6 +158,10 @@
     return False
+def flushAll():
+    _runCmd(["-F"])
+
+
 def setupMultipath():
     """
     Set up the multipath daemon configuration to the known and
@@ -173,8 +181,7 @@
         raise se.MultipathSetupError()
     misc.persistFile(MPATH_CONF)
-    # Flush all unused multipath device maps
-    misc.execCmd([constants.EXT_MULTIPATH, "-F"], sudo=True)
+    flushAll()
     cmd = [constants.EXT_VDSM_TOOL, "service-reload", "multipathd"]
     rc = misc.execCmd(cmd, sudo=True)[0]
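As a side note, the wrapper only composes correctly when callers pass a list: list + list concatenates, while list + str raises TypeError. A minimal sketch of the pattern (the `EXT_MULTIPATH` value is an assumption standing in for the patch's constant; the command is only built here, never executed):

```python
EXT_MULTIPATH = "/usr/sbin/multipath"  # assumed path; the patch takes it from constants

def build_cmd(args):
    # args must be a list of strings, e.g. ["-r"] or ["-F"]
    return [EXT_MULTIPATH] + args

print(build_cmd(["-r"]))  # ['/usr/sbin/multipath', '-r']

try:
    build_cmd("-r")  # a bare string raises: can only concatenate list to list
except TypeError:
    print("TypeError")
```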
--
To view, visit http://gerrit.ovirt.org/19255
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I52afc07a07a925ed7572eb369deb7c203edb04cd
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>
8 years, 8 months
Change in vdsm[master]: migration: Add incoming migration semaphore
by tjelinek@redhat.com
Tomas Jelinek has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 2:
(3 comments)
Please set a topic on this patch, like "migration". I guess many more patches will be coming, so it would be good to keep track of them.
https://gerrit.ovirt.org/#/c/45954/2/vdsm/virt/migration.py
File vdsm/virt/migration.py:
Line 54: VIR_MIGRATE_PARAM_GRAPHICS_URI = 'graphics_uri'
Line 55:
Line 56:
Line 57: mig = min(config.getint('vars', 'max_incoming_migrations'),
Line 58: caps.CpuTopology().cores())
> I don't think that there is any relationship between the number of incoming
Yeah, maybe it does not need a fallback at all. But I'm not sure how it would then behave after an update.
BTW, don't you also need to define it in config.py.in?
Line 59:
Line 60: incomingMigrations = threading.BoundedSemaphore(mig)
Line 61:
Line 62:
Line 330: dev._deviceXML, self._vm.conf, dev.custom)
Line 331: hooks.before_vm_migrate_source(self._vm._dom.XMLDesc(0),
Line 332: self._vm.conf)
Line 333:
Line 334: while True:
> Why doesn't engine handle the case of too many incoming migrations on the h
For the same reason that VDSM has an outgoing semaphore. The engine decides that these 10 machines should land on this VDSM, and that is correct, so it sends the commands. And VDSM migrates them, which is again correct. It just does not migrate them in one big batch, to protect itself from overload.
Line 335: # Do not measure the time spent for creating the VM on the
Line 336: # destination. In some cases some expensive operations can
Line 337: # cause the migration to get cancelled right after the
Line 338: # transfer started.
Line 347: SourceThread._ongoingMigrations.release()
Line 348: # the destination is busy with incoming migrations
Line 349: # release semaphore and give other outgoing migrations
Line 350: # a chance
Line 351: time.sleep(5)
> seems very arbitrary and possibly works for short migrations, how about lon
It is not "wait until the migration finishes". It is "wait so as not to eat up all the resources, and give other waiting threads a chance to get the lock and start migrating, possibly to other destinations". So I don't think it has anything to do with how long the migration takes.
Line 352: SourceThread._ongoingMigrations.acquire()
Line 353: else:
Line 354: break
Line 355:
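The release/sleep/re-acquire shape under discussion can be sketched in isolation (a hedged sketch: the semaphore limit, delay, and helper names here are hypothetical; only the release/sleep/acquire loop comes from the quoted code):

```python
import threading
import time

_ongoingMigrations = threading.BoundedSemaphore(2)  # hypothetical outgoing limit

def migrate_with_backoff(try_once, delay=0.01):
    """Keep retrying, but yield the semaphore between attempts so other
    outgoing migrations (possibly to other destinations) get a chance."""
    _ongoingMigrations.acquire()
    try:
        while True:
            if try_once():
                break
            # destination busy with incoming migrations: release and back off
            _ongoingMigrations.release()
            time.sleep(delay)
            _ongoingMigrations.acquire()
    finally:
        _ongoingMigrations.release()

attempts = []

def try_once():
    attempts.append(1)
    return len(attempts) >= 3  # pretend the destination frees up on the 3rd try

migrate_with_backoff(try_once)
print(len(attempts))  # 3
```

The point of the pattern, as argued above, is fairness among waiting senders, not bounding how long a migration takes.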
--
To view, visit https://gerrit.ovirt.org/45954
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: Yes
8 years, 8 months
Change in vdsm[master]: migration: Add incoming migration semaphore
by Martin Polednik
Martin Polednik has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 2: Code-Review-1
(5 comments)
https://gerrit.ovirt.org/#/c/45954/2/lib/vdsm/utils.py
File lib/vdsm/utils.py:
Line 1271: else:
Line 1272: yield
Line 1273:
Line 1274:
Line 1275: @contextmanager
See comment in api.py
Line 1276: def releaseOnError(resource):
Line 1277: try:
Line 1278: yield
Line 1279: except Exception:
https://gerrit.ovirt.org/#/c/45954/2/vdsm/API.py
File vdsm/API.py:
Line 574:
Line 575: if not migration.incomingMigrations.acquire(False):
Line 576: return response.error('migrationLimit')
Line 577:
Line 578: with utils.releaseOnError(migration.incomingMigrations):
This feels unpythonic. You should create a contextmanager function that handles this behavior without the need for releaseOnError. Something like:

    @contextmanager
    def acquire(semaphore, block):
        acquired = semaphore.acquire(block)
        try:
            yield acquired
        except Exception:
            if acquired:
                semaphore.release()
            raise

and then:

    with acquire(s, block=False) as acquired:
        if not acquired:
            return response.error('migrationLimit')
        ...
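For reference, a runnable version of this suggested pattern, with one tweak: release only if the acquire actually succeeded, and catch Exception rather than everything (the demo semaphore and the simulated failure are hypothetical):

```python
import threading
from contextlib import contextmanager

@contextmanager
def acquire(semaphore, block):
    # Yield whether the semaphore was acquired; on error, give it back
    # (only if we actually got it) and re-raise.
    acquired = semaphore.acquire(block)
    try:
        yield acquired
    except Exception:
        if acquired:
            semaphore.release()
        raise

sem = threading.BoundedSemaphore(1)

with acquire(sem, block=False) as got_first:
    pass  # True: the semaphore was free; it stays held on the normal path

with acquire(sem, block=False) as got_second:
    pass  # False: still held from above, the non-blocking acquire fails

sem.release()  # the normal-path release happens elsewhere (e.g. migration end)

try:
    with acquire(sem, block=False) as got_third:
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

with acquire(sem, block=False) as got_fourth:
    pass  # True again: the error path released the semaphore

print(got_first, got_second, got_third, got_fourth)
```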
Line 579: params['vmId'] = self._UUID
Line 580: result = self.create(params)
Line 581: if result['status']['code']:
Line 582: self.log.debug('Migration create - Failed')
https://gerrit.ovirt.org/#/c/45954/2/vdsm/virt/migration.py
File vdsm/virt/migration.py:
Line 54: VIR_MIGRATE_PARAM_GRAPHICS_URI = 'graphics_uri'
Line 55:
Line 56:
Line 57: mig = min(config.getint('vars', 'max_incoming_migrations'),
Line 58: caps.CpuTopology().cores())
I don't think that there is any relationship between the number of incoming migrations and CPU cores, unlike outgoing migrations, where one could argue that the number of CPU cores can equal the maximum number of compressions running.
Line 59:
Line 60: incomingMigrations = threading.BoundedSemaphore(mig)
Line 61:
Line 62:
Line 330: dev._deviceXML, self._vm.conf, dev.custom)
Line 331: hooks.before_vm_migrate_source(self._vm._dom.XMLDesc(0),
Line 332: self._vm.conf)
Line 333:
Line 334: while True:
Why doesn't the engine handle the case of too many incoming migrations on the host?
Line 335: # Do not measure the time spent for creating the VM on the
Line 336: # destination. In some cases some expensive operations can
Line 337: # cause the migration to get cancelled right after the
Line 338: # transfer started.
Line 347: SourceThread._ongoingMigrations.release()
Line 348: # the destination is busy with incoming migrations
Line 349: # release semaphore and give other outgoing migrations
Line 350: # a chance
Line 351: time.sleep(5)
Seems very arbitrary; it might work for short migrations, but what about long migrations?
Line 352: SourceThread._ongoingMigrations.acquire()
Line 353: else:
Line 354: break
Line 355:
--
To view, visit https://gerrit.ovirt.org/45954
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: Yes
8 years, 8 months
Change in vdsm[master]: spmprotect: Switch from fencing by pid to fencing using syst...
by dkuznets@redhat.com
Dima Kuznetsov has uploaded a new change for review.
Change subject: spmprotect: Switch from fencing by pid to fencing using systemctl
......................................................................
spmprotect: Switch from fencing by pid to fencing using systemctl
Change-Id: Ifdea618514232a1f751afae54337de787f297b9e
Signed-off-by: Dima Kuznetsov <dkuznets(a)redhat.com>
---
M vdsm/storage/protect/spmprotect.sh.in
1 file changed, 5 insertions(+), 6 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/11/43211/1
diff --git a/vdsm/storage/protect/spmprotect.sh.in b/vdsm/storage/protect/spmprotect.sh.in
index 101cac0..099e40f 100755
--- a/vdsm/storage/protect/spmprotect.sh.in
+++ b/vdsm/storage/protect/spmprotect.sh.in
@@ -25,14 +25,13 @@
LOGFILE="/var/log/vdsm/spm-lock.log"
VDS_CLIENT="/usr/bin/vdsClient"
LEASE_UTIL="@SAFELEASE_PATH@"
+SYSTEMCTL="/bin/systemctl"
KILL="/bin/kill"
PKILL="/usr/bin/pkill"
sdUUID=$2
CHECKVDSM=${CHECKVDSM:-"/usr/bin/pgrep vdsm"}
REBOOTCMD=${REBOOTCMD:-"sudo /sbin/reboot -f"}
RENEWDIR="/var/run/vdsm/spmprotect/$$"
-VDSM_PIDFILE="/var/run/vdsm/vdsmd.pid"
-VDSM_PID=`/bin/cat "$VDSM_PIDFILE"`
function usage() {
if [ -n "$1" ]; then
@@ -75,13 +74,13 @@
disown
(sleep 7
log "Trying to stop vdsm for sdUUID=$sdUUID id=$ID lease_path=$LEASE_FILE"
- echodo $KILL "$VDSM_PID"
+ echodo $SYSTEMCTL kill --signal=15 vdsmd.service
sleep 2
- echodo $KILL -9 "$VDSM_PID"
+ echodo $SYSTEMCTL kill --signal=9 vdsmd.service
)&
disown
- echodo $KILL -USR1 "$VDSM_PID"
+ echodo $SYSTEMCTL kill --signal=30 vdsmd.service
rm -fr $RENEWDIR
trap EXIT
@@ -206,7 +205,7 @@
dbg="-d"
fi
-log "Protecting spm lock for vdsm pid $VDSM_PID"
+log "Protecting spm lock for vdsmd"
case $1 in
start)
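One hedged observation on the numeric signals in the diff above: `--signal=15` and `--signal=9` are SIGTERM and SIGKILL everywhere, but the number for USR1 is architecture-dependent; on x86 Linux SIGUSR1 is 10 and signal 30 is SIGPWR, so `--signal=30` may not match the old `$KILL -USR1`. A quick check with Python's `signal` module (assumes x86 Linux numbering):

```python
import signal

# The numeric signals used in the patch, resolved on x86 Linux:
print(signal.SIGTERM.value)     # 15 -> --signal=15 is SIGTERM, as intended
print(signal.SIGKILL.value)     # 9  -> --signal=9 is SIGKILL, as intended
print(signal.SIGUSR1.value)     # 10 -> the old "$KILL -USR1" sent this one
print(signal.Signals(30).name)  # SIGPWR here, so --signal=30 is not USR1
```

`systemctl kill` also accepts symbolic names, e.g. `--signal=SIGUSR1`, which sidesteps the architecture-dependent numbering entirely.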
--
To view, visit https://gerrit.ovirt.org/43211
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ifdea618514232a1f751afae54337de787f297b9e
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Dima Kuznetsov <dkuznets(a)redhat.com>
8 years, 8 months
Change in vdsm[master]: migration: Add incoming migration semaphore
by automation@ovirt.org
automation(a)ovirt.org has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 2:
* Update tracker::IGNORE, no Bug-Url found
* Check Bug-Url::WARN, no bug url found, make sure header matches 'Bug-Url: ' and is a valid url.
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.5', 'ovirt-3.4', 'ovirt-3.3'])
--
To view, visit https://gerrit.ovirt.org/45954
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: No
8 years, 8 months
Change in vdsm[master]: migration: Add incoming migration semaphore
by automation@ovirt.org
automation(a)ovirt.org has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 1:
* Update tracker::IGNORE, no Bug-Url found
* Check Bug-Url::WARN, no bug url found, make sure header matches 'Bug-Url: ' and is a valid url.
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.5', 'ovirt-3.4', 'ovirt-3.3'])
--
To view, visit https://gerrit.ovirt.org/45954
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: No
8 years, 8 months
Change in vdsm[ovirt-3.5]: Live merge: Update base size after live merge
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: Live merge: Update base size after live merge
......................................................................
Live merge: Update base size after live merge
When performing a live merge, data is copied from a top volume into a
base volume. If the top volume is larger than the base volume (which
can happen if the drive size was extended), libvirt will change the size
of the base volume to match that of the top volume. When synchronizing
metadata after the merge, we need to update the 'capacity' field of the
base volume to reflect the new size. We do this inside the
LiveMergeCleanupThread to ensure that it gets retried in the event of
storage connection problems or vdsm restarts.
Change-Id: Ic351d694ddeed5b4bf92a211c5d64fa6673b3221
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1251958
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 25 insertions(+), 8 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/07/45107/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index 6d8cccb..62f40e4 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -5741,7 +5741,7 @@
     def queryBlockJobs(self):
         def startCleanup(job, drive, needPivot):
-            t = LiveMergeCleanupThread(self, job['jobID'], drive, needPivot)
+            t = LiveMergeCleanupThread(self, job, drive, needPivot)
             t.start()
             self._liveMergeCleanupThreads[job['jobID']] = t
@@ -6087,11 +6087,11 @@
 class LiveMergeCleanupThread(threading.Thread):
-    def __init__(self, vm, jobId, drive, doPivot):
+    def __init__(self, vm, job, drive, doPivot):
         threading.Thread.__init__(self)
         self.setDaemon(True)
         self.vm = vm
-        self.jobId = jobId
+        self.job = job
         self.drive = drive
         self.doPivot = doPivot
         self.success = False
@@ -6115,7 +6115,7 @@
         self.vm.stopDisksStatsCollection()
         self.vm.log.info("Requesting pivot to complete active layer commit "
-                         "(job %s)", self.jobId)
+                         "(job %s)", self.job['jobID'])
         try:
             flags = libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT
             ret = self.vm._dom.blockJobAbort(self.drive.name, flags)
@@ -6125,22 +6125,39 @@
         else:
             if ret != 0:
                 self.vm.log.error("Pivot failed for job %s (rc=%i)",
-                                  self.jobId, ret)
+                                  self.job['jobID'], ret)
                 raise RuntimeError("pivot failed")
         self._waitForXMLUpdate()
-        self.vm.log.info("Pivot completed (job %s)", self.jobId)
+        self.vm.log.info("Pivot completed (job %s)", self.job['jobID'])
+
+    def update_base_size(self):
+        # If the drive size was extended just after creating the snapshot which
+        # we are removing, the size of the top volume might be larger than the
+        # size of the base volume. In that case libvirt has enlarged the base
+        # volume automatically as part of the blockCommit operation. Update
+        # our metadata to reflect this change.
+        topVolUUID = self.job['topVolume']
+        baseVolUUID = self.job['baseVolume']
+        topVolInfo = self.vm._getVolumeInfo(self.drive.domainID,
+                                            self.drive.poolID,
+                                            self.drive.imageID, topVolUUID)
+        self.vm._setVolumeSize(self.drive.domainID, self.drive.poolID,
+                               self.drive.imageID, baseVolUUID,
+                               topVolInfo['capacity'])
     @utils.traceback()
     def run(self):
+        self.update_base_size()
         if self.doPivot:
             self.tryPivot()
         self.vm.log.info("Synchronizing volume chain after live merge "
-                         "(job %s)", self.jobId)
+                         "(job %s)", self.job['jobID'])
         self.vm._syncVolumeChain(self.drive)
         if self.doPivot:
             self.vm.startDisksStatsCollection()
         self.success = True
-        self.vm.log.info("Synchronization completed (job %s)", self.jobId)
+        self.vm.log.info("Synchronization completed (job %s)",
+                         self.job['jobID'])
     def isSuccessful(self):
         """
--
To view, visit https://gerrit.ovirt.org/45107
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic351d694ddeed5b4bf92a211c5d64fa6673b3221
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
8 years, 8 months
Change in vdsm[ovirt-3.5]: virt: Introduce Vm._setVolumeSize helper
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: virt: Introduce Vm._setVolumeSize helper
......................................................................
virt: Introduce Vm._setVolumeSize helper
Backport the Vm._setVolumeSize helper for use by LiveMergeCleanupThread
to update the size of the base volume in cases where the base volume was
smaller than the top volume.
Change-Id: I41ae9d92b3d22cda342209d33a87c7163ab1e5d5
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1251958
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 8 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/06/45106/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index e4f270d..6d8cccb 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -5917,6 +5917,14 @@
                 (domainID, volumeID))
         return res['info']
+    def _setVolumeSize(self, domainID, poolID, imageID, volumeID, size):
+        res = self.cif.irs.setVolumeSize(domainID, poolID, imageID, volumeID,
+                                         size)
+        if res['status']['code'] != 0:
+            raise StorageUnavailableError(
+                "Unable to set volume size to %s for domain %s volume %s" %
+                (size, domainID, volumeID))
+
     def _diskXMLGetVolumeChainInfo(self, diskXML, drive):
         def find_element_by_name(doc, name):
             for child in doc.childNodes:
--
To view, visit https://gerrit.ovirt.org/45106
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I41ae9d92b3d22cda342209d33a87c7163ab1e5d5
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
8 years, 8 months
Change in vdsm[ovirt-3.5]: Live Merge: Allow extension of non-leaf raw volumes
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: Live Merge: Allow extension of non-leaf raw volumes
......................................................................
Live Merge: Allow extension of non-leaf raw volumes
volume.extendSize() is currently prohibited for any non-leaf volume.
For a very specific live merge scenario we must permit extension of an
internal raw base volume. Allow this usage and add a comment explaining
the reasoning.
The scenario:
- User begins with a raw block disk.
- User creates a snapshot.
- User enlarges the disk (diskSizeExtend)
- User performs live merge to remove the snapshot
In this case the base volume is too small to accommodate the data
from the child volume and an error is raised since libvirt cannot
enlarge a block device. The solution is to require engine to call
extendVolumeSize on the base volume before requesting the live merge
operation.
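The new writability rule described here reduces to a small predicate; a sketch with hypothetical names (the patch itself expresses this via isShared(), isLeaf() and a getParent() == BLANK_UUID check):

```python
def may_extend(is_shared, is_leaf, is_base):
    """Shared volumes are never writable; non-leaf volumes may be
    extended only when they are base (parentless) volumes."""
    if is_shared:
        return False
    return is_leaf or is_base

print(may_extend(is_shared=False, is_leaf=True, is_base=False))   # True: leaf, as before
print(may_extend(is_shared=False, is_leaf=False, is_base=True))   # True: newly allowed
print(may_extend(is_shared=False, is_leaf=False, is_base=False))  # False: internal non-base
print(may_extend(is_shared=True, is_leaf=True, is_base=False))    # False: shared
```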
Change-Id: Ia1918aa11876e9ebe9e43f6f95f3fff50c21b41c
Signed-off-by: Adam Litke <alitke(a)redhat.com>
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1251958
---
M vdsm/storage/volume.py
1 file changed, 9 insertions(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/05/45105/1
diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
index 75fa1c4..0cd6de6 100644
--- a/vdsm/storage/volume.py
+++ b/vdsm/storage/volume.py
@@ -548,11 +548,10 @@
"""
Extend the size (virtual disk size seen by the guest) of the volume.
"""
- if not self.isLeaf() or self.isShared():
+ if self.isShared():
raise se.VolumeNonWritable(self.volUUID)
volFormat = self.getFormat()
-
if volFormat == COW_FORMAT:
self.log.debug("skipping cow size extension for volume %s to "
"size %s", self.volUUID, newSize)
@@ -560,6 +559,14 @@
         elif volFormat != RAW_FORMAT:
             raise se.IncorrectFormat(self.volUUID)
+        # Note: This function previously prohibited extending non-leaf volumes.
+        # If a disk is enlarged a volume may become larger than its parent. In
+        # order to support live merge of a larger volume into its raw parent we
+        # must permit extension of this raw volume prior to starting the merge.
+        isBase = self.getParent() == BLANK_UUID
+        if not (isBase or self.isLeaf()):
+            raise se.VolumeNonWritable(self.volUUID)
+
         curRawSize = self.getVolumeSize()
         if (newSize < curRawSize):
--
To view, visit https://gerrit.ovirt.org/45105
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia1918aa11876e9ebe9e43f6f95f3fff50c21b41c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
8 years, 8 months
Change in vdsm[ovirt-3.5]: virt: Add _getVolumeInfo helper
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: virt: Add _getVolumeInfo helper
......................................................................
virt: Add _getVolumeInfo helper
The next patch needs to call into storage to get detailed volume size
information. Add a helper to encapsulate this operation, which makes
error checking and exception raising consistent for all call sites.
Change-Id: Ib67eecc4725ac272695a64fabefb969882d9c0e8
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1251958
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 8 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/04/45104/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index f813af7..e4f270d 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -5909,6 +5909,14 @@
         return {'status': doneCode}
+    def _getVolumeInfo(self, domainID, poolID, imageID, volumeID):
+        res = self.cif.irs.getVolumeInfo(domainID, poolID, imageID, volumeID)
+        if res['status']['code'] != 0:
+            raise StorageUnavailableError(
+                "Unable to get volume info for domain %s volume %s" %
+                (domainID, volumeID))
+        return res['info']
+
     def _diskXMLGetVolumeChainInfo(self, diskXML, drive):
         def find_element_by_name(doc, name):
             for child in doc.childNodes:
--
To view, visit https://gerrit.ovirt.org/45104
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib67eecc4725ac272695a64fabefb969882d9c0e8
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
8 years, 8 months