Change in vdsm[master]: vm: Cleanup waiting for xml update
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: vm: Cleanup waiting for xml update
......................................................................
vm: Cleanup waiting for xml update
This patch cleans up the code that waits until the libvirt XML is
updated after a pivot has completed.
- Clarify a confusing log message claiming that pivot failed after it
completed successfully
- Clean up creation of the volume lists using a generator expression
- Clearer logic for checking the current volume list
- Replace a detailed log message and an unhelpful exception with a
detailed exception
- Move the comment out of the loop to make the loop clearer
- Remove unneeded keys() calls when looking up alias in chains
This code was added as a temporary solution until libvirt is fixed, but I
think we would like to keep a simplified version of it even after libvirt
is fixed, verifying that the operation was successful.
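For reference, here is the resulting wait loop in one piece (a sketch
reassembled from the diff below with indentation restored; not a drop-in
copy of vm.py):

    while True:
        chains = self.vm._driveGetActualVolumeChain([self.drive])
        if alias not in chains:
            raise RuntimeError("Failed to retrieve volume chain for "
                               "drive %s after pivot completed" % alias)

        curVols = sorted(entry.uuid for entry in chains[alias])

        if curVols == expectedVols:
            self.vm.log.info("The XML update has been completed")
            return

        if curVols != origVols:
            raise RuntimeError(
                "Bad volume chain after pivot for drive %s. Previous "
                "chain: %s, Expected chain: %s, Actual chain: %s" %
                (alias, origVols, expectedVols, curVols))

        time.sleep(1)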
Change-Id: I9fec5416a62736bad461ddd0b54093d23960b7a6
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 27 insertions(+), 24 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/38/39938/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index efadbdb..8ece47b 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -5100,40 +5100,43 @@
# synchronized and we may start the vm with a stale volume in the
# future. See https://bugzilla.redhat.com/show_bug.cgi?id=1202719 for
# more details.
- # TODO: Remove once we depend on a libvirt with this bug fixed.
# We expect libvirt to show that the original leaf has been removed
# from the active volume chain.
origVols = sorted([x['volumeID'] for x in self.drive.volumeChain])
- expectedVols = origVols[:]
- expectedVols.remove(self.drive.volumeID)
+ expectedVols = [v for v in origVols if v != self.drive.volumeID]
alias = self.drive['alias']
self.vm.log.info("Waiting for libvirt to update the XML after pivot "
"of drive %s completed", alias)
- while True:
- # This operation should complete in either one or two iterations of
- # this loop. Until libvirt updates the XML there is nothing to do
- # but wait. While we wait we continue to tell engine that the job
- # is ongoing. If we are still in this loop when the VM is powered
- # off, the merge will be resolved manually by engine using the
- # reconcileVolumeChain verb.
- chains = self.vm._driveGetActualVolumeChain([self.drive])
- if alias not in chains.keys():
- raise RuntimeError("Failed to retrieve volume chain for "
- "drive %s. Pivot failed.", alias)
- curVols = sorted([entry.uuid for entry in chains[alias]])
- if curVols == origVols:
- time.sleep(1)
- elif curVols == expectedVols:
+ # This operation should complete in either one or two iterations of
+ # this loop. Until libvirt updates the XML there is nothing to do
+ # but wait. While we wait we continue to tell engine that the job
+ # is ongoing. If we are still in this loop when the VM is powered
+ # off, the merge will be resolved manually by engine using the
+ # reconcileVolumeChain verb.
+ # TODO: Check only once when we depend on a libvirt with this bug fixed.
+
+ while True:
+ chains = self.vm._driveGetActualVolumeChain([self.drive])
+ if alias not in chains:
+ raise RuntimeError("Failed to retrieve volume chain for "
+ "drive %s after pivot completed", alias)
+
+ curVols = sorted(entry.uuid for entry in chains[alias])
+
+ if curVols == expectedVols:
self.vm.log.info("The XML update has been completed")
- break
- else:
- self.log.error("Bad volume chain found for drive %s. Previous "
- "chain: %s, Expected chain: %s, Actual chain: "
- "%s", alias, origVols, expectedVols, curVols)
- raise RuntimeError("Bad volume chain found")
+ return
+
+ if curVols != origVols:
+ raise RuntimeError(
+ "Bad volume chain after pivot for drive %s. Previous "
+ "chain: %s, Expected chain: %s, Actual chain: %s" %
+ (alias, origVols, expectedVols, curVols))
+
+ time.sleep(1)
def _devicesWithAlias(domXML):
--
To view, visit https://gerrit.ovirt.org/39938
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9fec5416a62736bad461ddd0b54093d23960b7a6
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: fc-scan: Use utilities from vdsm library.
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: fc-scan: Use utilities from vdsm library.
......................................................................
fc-scan: Use utilities from vdsm library.
Replace the low level threading code with a simpler concurrent.tmap()
call, and replace the duplicated monotonic_time() with
utils.monotonic_time().
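For context, a minimal sketch of the concurrent.tmap() interface used
here (the Result namedtuple variant; a later change in this digest
reverts it):

    from vdsm import concurrent

    results = concurrent.tmap(lambda x: x * 2, [1, 2, 3])
    # Each item is Result(succeeded, value), in argument order:
    # [Result(succeeded=True, value=2), Result(succeeded=True, value=4),
    #  Result(succeeded=True, value=6)]
    print(all(r.succeeded for r in results))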
Change-Id: Ic48748d6a43d41e034e16cb4f636ebe627881590
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/fc-scan
1 file changed, 24 insertions(+), 46 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/66/38466/1
diff --git a/vdsm/storage/fc-scan b/vdsm/storage/fc-scan
index 344345d..c746ea4 100755
--- a/vdsm/storage/fc-scan
+++ b/vdsm/storage/fc-scan
@@ -38,43 +38,11 @@
import logging
import os
import sys
-import threading
+
+from vdsm import concurrent
+from vdsm import utils
log = logging.getLogger("fc-scan")
-
-
-class Scan(object):
-
- def __init__(self, host):
- self.host = host
- self.succeeded = False
- self.thread = None
-
- def start(self):
- self.thread = threading.Thread(target=self.run)
- self.thread.daemon = True
- self.thread.start()
-
- def wait(self):
- self.thread.join()
-
- def run(self):
- try:
- path = "/sys/class/scsi_host/%s/scan" % self.host
- log.debug("Scanning %s", path)
- start = monotonic_time()
- fd = os.open(path, os.O_WRONLY)
- try:
- os.write(fd, "- - -")
- finally:
- os.close(fd)
- self.succeeded = True
- elapsed = monotonic_time() - start
- log.debug("Scanned %s in %.2f seconds", path, elapsed)
- except OSError as e:
- log.error("Scanning %s failed: %s", path, e)
- except Exception:
- log.exception("Scanning %s failed", path)
def main(args):
@@ -93,22 +61,32 @@
log.debug("No fc_host found")
return 0
- scans = []
-
- for host in hosts:
- s = Scan(host)
- s.start()
- scans.append(s)
-
- for s in scans:
- s.wait()
+ scans = concurrent.tmap(scan_host, hosts)
if not all(s.succeeded for s in scans):
return 1
+ return 0
-def monotonic_time():
- return os.times()[4]
+
+def scan_host(name):
+ try:
+ path = "/sys/class/scsi_host/%s/scan" % name
+ log.debug("Scanning %s", path)
+ start = utils.monotonic_time()
+ fd = os.open(path, os.O_WRONLY)
+ try:
+ os.write(fd, "- - -")
+ finally:
+ os.close(fd)
+ elapsed = utils.monotonic_time() - start
+ log.debug("Scanned %s in %.2f seconds", path, elapsed)
+ except OSError as e:
+ log.error("Scanning %s failed: %s", path, e)
+ raise
+ except Exception:
+ log.exception("Scanning %s failed", path)
+ raise
if __name__ == '__main__':
--
To view, visit https://gerrit.ovirt.org/38466
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic48748d6a43d41e034e16cb4f636ebe627881590
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: lib: Revert and refine error handling in tmap()
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: lib: Revert and refine error handling in tmap()
......................................................................
lib: Revert and refine error handling in tmap()
In commit 2b7155b696 (lib: Simplify and generalize concurrent.tmap()),
we simplified error handling by returning a named tuple with function
results. This turned out less useful than the original error handling.
This patch restores the previous error handling:
- Functions passed to tmap() should not raise - if they raise, this is
considered a bug in the function.
- The last error is raised by tmap() instead of returning the result.
This makes it easier to fail loudly for unexpected errors.
- The original exception is now re-raised with the original traceback.
- Error handling is now documented properly
Previously you had to make sure the function raises to signal failures:
def func():
try:
code that should not fail...
code that may fail...
code that should not fail...
except ExpectedError:
log.error(...)
raise
except Exception:
log.exception(...)
raise
results = concurrent.tmap(func, values)
if not all(r.succeeded for r in results):
...
Returning the result as is lets us have nicer code:
def func():
code that should not fail...
try:
code that may fail...
except ExpectedError:
log.error(...)
return False
code that should not fail...
return True
succeeded = concurrent.tmap(func, values)
if not all(succeeded):
...
We can ignore unexpected errors, since tmap() will log them and fail
loudly. We can also minimize the try/except blocks for expected errors.
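A concrete sketch of the new contract (func, ExpectedError and the
values are hypothetical):

    from vdsm import concurrent

    class ExpectedError(Exception):
        pass

    def func(value):
        try:
            return value * 2        # stands in for code that may fail
        except ExpectedError as e:
            return e                # expected: return it, handle later

    results = concurrent.tmap(func, [1, 2, 3])
    errors = [r for r in results if isinstance(r, ExpectedError)]
    # Any unexpected exception in func is logged by the worker thread and
    # re-raised here by tmap() itself.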
Change-Id: I0154b28ff7822c63e77181bbbf444c712bd0c31e
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M lib/vdsm/concurrent.py
M tests/concurrentTests.py
2 files changed, 45 insertions(+), 19 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/11/39211/1
diff --git a/lib/vdsm/concurrent.py b/lib/vdsm/concurrent.py
index 64e072d..5498052 100644
--- a/lib/vdsm/concurrent.py
+++ b/lib/vdsm/concurrent.py
@@ -18,22 +18,42 @@
# Refer to the README and COPYING files for full details of the license
#
+import logging
import threading
-from collections import namedtuple
-
-
-Result = namedtuple("Result", ["succeeded", "value"])
+import sys
def tmap(func, iterable):
+ """
+ Run func with arguments from iterable in multiple threads, returning
+ the output in the order of the arguments.
+
+ func should not raise exceptions - we consider this a bug in func, and will
+ fail the call and re-raise the exception in the caller thread.
+
+ Expected exceptions should be handled in func. If the caller would like
+ to handle the error later, func should return it:
+
+ def func(value):
+ try:
+ return something(value)
+ except ExpectedError as e:
+ return e
+
+ Unexpected exceptions should not be handled, as they are logged in the
+ worker threads and re-raised in the caller thread. If multiple exceptions
+ are raised, only the last one will be re-raised in the caller thread.
+ """
args = list(iterable)
results = [None] * len(args)
+ error = [None]
def worker(i, f, arg):
try:
- results[i] = Result(True, f(arg))
- except Exception as e:
- results[i] = Result(False, e)
+ results[i] = f(arg)
+ except Exception:
+ error[0] = sys.exc_info()
+ logging.exception("Unhandled exception in tmap worker thread")
threads = []
for i, arg in enumerate(args):
@@ -45,4 +65,8 @@
for t in threads:
t.join()
+ if error[0] is not None:
+ t, v, tb = error[0]
+ raise t, v, tb
+
return results
diff --git a/tests/concurrentTests.py b/tests/concurrentTests.py
index 307e397..5c0646b 100644
--- a/tests/concurrentTests.py
+++ b/tests/concurrentTests.py
@@ -26,13 +26,16 @@
from vdsm import concurrent
+class Error(Exception):
+ pass
+
+
class TMapTests(VdsmTestCase):
def test_results(self):
values = tuple(range(10))
results = concurrent.tmap(lambda x: x, values)
- expected = [concurrent.Result(True, x) for x in values]
- self.assertEqual(results, expected)
+ self.assertEqual(results, list(values))
def test_results_order(self):
def func(x):
@@ -40,8 +43,7 @@
return x
values = tuple(random.random() * 0.1 for x in range(10))
results = concurrent.tmap(func, values)
- expected = [concurrent.Result(True, x) for x in values]
- self.assertEqual(results, expected)
+ self.assertEqual(results, list(values))
def test_concurrency(self):
start = time.time()
@@ -49,12 +51,12 @@
elapsed = time.time() - start
self.assertTrue(0.1 < elapsed < 0.2)
- def test_error(self):
- error = RuntimeError("No result for you!")
-
+ def test_raise_last_error(self):
def func(x):
- raise error
-
- results = concurrent.tmap(func, range(10))
- expected = [concurrent.Result(False, error)] * 10
- self.assertEqual(results, expected)
+ raise Error(x)
+ try:
+ concurrent.tmap(func, (1, 2, 3))
+ except Error as e:
+ self.assertEqual(e.args, (3,))
+ else:
+ self.fail("Exception was not raised")
--
To view, visit https://gerrit.ovirt.org/39211
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I0154b28ff7822c63e77181bbbf444c712bd0c31e
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: virt: Use Drive.diskType instead of networkDev and blockDev
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: virt: Use Drive.diskType instead of networkDev and blockDev
......................................................................
virt: Use Drive.diskType instead of networkDev and blockDev
Now that we have an explicit diskType we no longer need the networkDev
and blockDev properties. This is very useful when we set the libvirt disk
type attribute, or when we check for a certain disk type.
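A minimal illustration of the simplification (the DISK_TYPE string
values follow from the hunks below; this is illustrative code, not
vdsm's own):

    class DISK_TYPE(object):
        BLOCK = 'block'
        FILE = 'file'
        NETWORK = 'network'

    def source_attr(disk_type):
        # The libvirt <source> attribute name follows from the disk type
        # (for block and file disks, as in _changeDisk below).
        return 'dev' if disk_type == DISK_TYPE.BLOCK else 'file'

    assert source_attr(DISK_TYPE.BLOCK) == 'dev'
    assert source_attr(DISK_TYPE.FILE) == 'file'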
Change-Id: Id68bc74b3d788dc82fc61bf8c3de5a52164d0989
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/vmStorageTests.py
M vdsm/virt/vm.py
2 files changed, 21 insertions(+), 28 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/72/40472/1
diff --git a/tests/vmStorageTests.py b/tests/vmStorageTests.py
index 8e8f7bc..dfe0991 100644
--- a/tests/vmStorageTests.py
+++ b/tests/vmStorageTests.py
@@ -268,74 +268,69 @@
def test_cdrom(self):
conf = drive_config(device='cdrom')
drive = Drive({}, self.log, **conf)
- self.assertFalse(drive.networkDev)
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.FILE)
def test_floppy(self):
conf = drive_config(device='floppy')
drive = Drive({}, self.log, **conf)
- self.assertFalse(drive.networkDev)
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.FILE)
def test_network_disk(self):
conf = drive_config(diskType=DISK_TYPE.NETWORK)
drive = Drive({}, self.log, **conf)
- self.assertTrue(drive.networkDev)
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.NETWORK)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: True)
def test_block_disk(self):
conf = drive_config(device='disk')
drive = Drive({}, self.log, **conf)
- self.assertFalse(drive.networkDev)
- self.assertTrue(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.BLOCK)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: False)
def test_file_disk(self):
conf = drive_config(device='disk')
drive = Drive({}, self.log, **conf)
- self.assertFalse(drive.networkDev)
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.FILE)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: False)
def test_migrate_from_file_to_block(self):
conf = drive_config(path='/filedomain/volume')
drive = Drive({}, self.log, **conf)
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.FILE)
# Migrate drive to block domain...
utils.isBlockDevice = lambda path: True
drive.path = "/blockdomain/volume"
- self.assertTrue(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.BLOCK)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: True)
def test_migrate_from_block_to_file(self):
conf = drive_config(path='/blockdomain/volume')
drive = Drive({}, self.log, **conf)
- self.assertTrue(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.BLOCK)
# Migrate drive to file domain...
utils.isBlockDevice = lambda path: False
drive.path = "/filedomain/volume"
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.FILE)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: True)
def test_migrate_from_block_to_network(self):
conf = drive_config(path='/blockdomain/volume')
drive = Drive({}, self.log, **conf)
- self.assertTrue(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.BLOCK)
# Migrate drive to network disk...
drive.path = "pool/volume"
drive.diskType = DISK_TYPE.NETWORK
- self.assertFalse(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.NETWORK)
@MonkeyPatch(utils, 'isBlockDevice', lambda path: True)
def test_migrate_network_to_block(self):
conf = drive_config(diskType=DISK_TYPE.NETWORK, path='pool/volume')
drive = Drive({}, self.log, **conf)
- self.assertTrue(drive.networkDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.NETWORK)
# Migrate drive to block domain...
drive.path = '/blockdomain/volume'
drive.diskType = None
- self.assertTrue(drive.blockDev)
+ self.assertEqual(drive.diskType, DISK_TYPE.BLOCK)
@expandPermutations
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index 78b55c9..a6e742e 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -1862,7 +1862,7 @@
def _changeDisk(self, diskDeviceXmlElement):
diskType = diskDeviceXmlElement.getAttribute('type')
- if diskType not in ['file', 'block']:
+ if diskType not in (DISK_TYPE.BLOCK, DISK_TYPE.FILE):
return
diskSerial = diskDeviceXmlElement. \
@@ -1871,13 +1871,12 @@
for vmDrive in self._devices[hwclass.DISK]:
if vmDrive.serial == diskSerial:
# update the type
- diskDeviceXmlElement.setAttribute(
- 'type', 'block' if vmDrive.blockDev else 'file')
+ diskDeviceXmlElement.setAttribute('type', vmDrive.diskType)
# update the path
+ attr = 'dev' if vmDrive.diskType == DISK_TYPE.BLOCK else 'file'
diskDeviceXmlElement.getElementsByTagName('source')[0]. \
- setAttribute('dev' if vmDrive.blockDev else 'file',
- vmDrive.path)
+ setAttribute(attr, vmDrive.path)
# update the format (the disk might have been collapsed)
diskDeviceXmlElement.getElementsByTagName('driver')[0]. \
@@ -2773,7 +2772,7 @@
# we specify type='block' and dev=path for block volumes but we
# always specify the file=path for backwards compatibility.
args = {'type': sourceType, 'file': newPath}
- if sourceType == 'block':
+ if sourceType == DISK_TYPE.BLOCK:
args['dev'] = newPath
disk.appendChildWithArgs('source', **args)
return disk
@@ -2881,7 +2880,7 @@
newDrives[vmDevName]["format"] = "cow"
# We need to keep track of the drive object because we cannot
- # safely access the blockDev property until after prepareVolumePath
+ # safely access the diskType property until after prepareVolumePath
vmDrives[vmDevName] = vmDrive
# If all the drives are the current ones, return success
@@ -2905,9 +2904,8 @@
_rollbackDrives(preparedDrives)
return errCode['snapshotErr']
- snapType = 'block' if vmDrives[vmDevName].blockDev else 'file'
snapelem = _diskSnapshot(vmDevName, newDrives[vmDevName]["path"],
- snapType)
+ vmDrives[vmDevName].diskType)
disks.appendChild(snapelem)
snap.appendChild(disks)
@@ -4553,7 +4551,7 @@
sourceXML = find_element_by_name(diskXML, 'source')
if not sourceXML:
break
- sourceAttr = ('file', 'dev')[drive.blockDev]
+ sourceAttr = 'dev' if drive.diskType == DISK_TYPE.BLOCK else 'file'
path = sourceXML.getAttribute(sourceAttr)
# TODO: Allocation information is not available in the XML. Switch
--
To view, visit https://gerrit.ovirt.org/40472
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id68bc74b3d788dc82fc61bf8c3de5a52164d0989
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: misc: Safer and simpler itmap
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: misc: Safer and simpler itmap
......................................................................
misc: Safer and simpler itmap
The previous code had a few issues:
- It used an unlimited number of threads by default. This may lead to the
creation of hundreds of threads if you do not specify a value.
- It used non-daemon threads, which could lead to unwanted delays during
vdsm shutdown.
- It tried to yield results before all arguments were handled. This could
delay argument processing if the caller blocked while processing the
results.
- It started one thread per value, even if maxthreads was smaller than
the number of values.
- It was too complicated.
Changes:
- The caller must specify the maximum number of threads.
- Use daemon threads
- Queue all values before yielding results
- Start up to maxthreads worker threads, each processing multiple values
- Simplify the code
- Add test for error handling
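A minimal usage sketch of the reworked itmap() (the worker and values
are hypothetical; import path as in the tests):

    from storage import misc

    def square(n):
        return n * n

    # Results arrive in completion order; per-value exceptions are
    # returned as values instead of being raised.
    for result in misc.itmap(square, range(8), maxthreads=4):
        if isinstance(result, Exception):
            pass                    # handle the per-value failure
        else:
            print(result)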
Change-Id: Iba6116ac4003702c8e921cebaf494491a6f9afaf
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/miscTests.py
M vdsm/storage/misc.py
2 files changed, 42 insertions(+), 42 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/19/39119/1
diff --git a/tests/miscTests.py b/tests/miscTests.py
index 31f64fa..4b3e3c3 100644
--- a/tests/miscTests.py
+++ b/tests/miscTests.py
@@ -196,7 +196,7 @@
# outOfProcess operation + 1. it let us know that oop and itmap operate
# properly with their limitations
data = frozenset(range(oop.HELPERS_PER_DOMAIN + 1))
- ret = frozenset(misc.itmap(dummy, data, misc.UNLIMITED_THREADS))
+ ret = frozenset(misc.itmap(dummy, data, len(data)))
self.assertEquals(ret, data)
def testMoreThreadsThanArgs(self):
@@ -207,6 +207,13 @@
data = 1
self.assertRaises(ValueError, misc.itmap(int, data, 0).next)
+ def testErrors(self):
+ err = Exception()
+ def dummy(arg):
+ raise err
+ data = [1, 2, 3]
+ self.assertEqual(list(misc.itmap(dummy, data, 4)), [err] * len(data))
+
class RotateFiles(TestCaseBase):
diff --git a/vdsm/storage/misc.py b/vdsm/storage/misc.py
index eb484c7..463fd04 100644
--- a/vdsm/storage/misc.py
+++ b/vdsm/storage/misc.py
@@ -58,7 +58,6 @@
STR_UUID_SIZE = 36
UUID_HYPHENS = [8, 13, 18, 23]
MEGA = 1 << 20
-UNLIMITED_THREADS = -1
log = logging.getLogger('Storage.Misc')
@@ -882,53 +881,47 @@
raise exception
-def itmap(func, iterable, maxthreads=UNLIMITED_THREADS):
+def itmap(func, iterable, maxthreads):
"""
- Make an iterator that computes the function using
- arguments from the iterable. It works similar to tmap
- by running each operation in a different thread, this
- causes the results not to return in any particular
- order so it's good if you don't care about the order
- of the results.
- maxthreads stands for maximum threads that we can initiate simultaneosly.
- If we reached to max threads the function waits for thread to
- finish before initiate the next one.
+ Return an iterator calling func with arguments from iterable in multiple threads.
+
+ Unlike tmap, the results are not returned in the original order of the
+ arguments, and number of threads is limited to maxthreads.
"""
- if maxthreads < 1 and maxthreads != UNLIMITED_THREADS:
- raise ValueError("Wrong input to function itmap: %s", maxthreads)
+ if maxthreads < 1:
+ raise ValueError("Invalid maxthreads value: %s" % maxthreads)
- respQueue = Queue.Queue()
+ DONE = object()
+ values = Queue.Queue()
+ results = Queue.Queue()
- def wrapper(value):
- try:
- respQueue.put(func(value))
- except Exception as e:
- respQueue.put(e)
+ def worker():
+ while True:
+ value = values.get()
+ if value is DONE:
+ return
+ try:
+ results.put(func(value))
+ except Exception as e:
+ results.put(e)
- threadsCount = 0
- for arg in iterable:
- if maxthreads != UNLIMITED_THREADS:
- if maxthreads == 0:
- # This not supposed to happened. If it does, it's a bug.
- # maxthreads should get to 0 only after threadsCount is
- # greater than 1
- if threadsCount < 1:
- raise RuntimeError("No thread initiated")
- else:
- yield respQueue.get()
- # if yield returns one thread stopped, so we can run
- # another thread in queue
- maxthreads += 1
- threadsCount -= 1
+ count = 0
+ threads = 0
- t = threading.Thread(target=wrapper, args=(arg,))
- t.start()
- threadsCount += 1
- maxthreads -= 1
+ for value in iterable:
+ values.put(value)
+ count += 1
+ if threads < maxthreads:
+ t = threading.Thread(target=worker)
+ t.daemon = True
+ t.start()
+ threads += 1
- # waiting for rest threads to end
- for i in xrange(threadsCount):
- yield respQueue.get()
+ for _ in range(threads):
+ values.put(DONE)
+
+ for _ in xrange(count):
+ yield results.get()
def isAscii(s):
--
To view, visit https://gerrit.ovirt.org/39119
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Iba6116ac4003702c8e921cebaf494491a6f9afaf
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: udevadm: More precise error handling
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: udevadm: More precise error handling
......................................................................
udevadm: More precise error handling
udevadm provides a --timeout option, but there is no robust way to
detect a timeout in EL6, EL7, and Fedora 20. In Fedora 21 and upstream,
udevadm ignores the timeout option. This patch improves error handling
by using our own timeout.
udevadm.settle() now raises udevadm.Failure or udevadm.Timeout, and the
caller is responsible for handling the error.
In both multipath.rescan() and IscsiConnection.connect(), we warn about
timeout but do not handle other errors, so real errors in udevadm will
fail loudly.
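The resulting caller pattern, mirroring the multipath.rescan() hunk
below (a sketch):

    import logging

    from storage import udevadm

    log = logging.getLogger("example")

    try:
        udevadm.settle(5)
    except udevadm.Timeout as e:
        # Timeouts are expected on loaded systems; warn and continue.
        log.warning("Timeout waiting for udev events: %s", e)
    # udevadm.Failure is not caught here, so real errors fail loudly.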
Change-Id: Ia0a7380b1b181ec93399ea741122cfa2e98086fb
Relates-To: https://bugzilla.redhat.com/1209474
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
A tests/udevadmTests.py
M vdsm/storage/multipath.py
M vdsm/storage/storageServer.py
M vdsm/storage/udevadm.py
4 files changed, 106 insertions(+), 21 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/40/39740/1
diff --git a/tests/udevadmTests.py b/tests/udevadmTests.py
new file mode 100644
index 0000000..90841b2
--- /dev/null
+++ b/tests/udevadmTests.py
@@ -0,0 +1,52 @@
+#
+# Copyright 2015 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+
+from monkeypatch import MonkeyPatch
+from testlib import VdsmTestCase
+
+from vdsm import utils
+from storage import udevadm
+
+TRUE = utils.CommandPath("true", "/bin/true", "/usr/bin/true")
+FALSE = utils.CommandPath("false", "/bin/false", "/usr/bin/false")
+READ = utils.CommandPath("read", "/bin/read", "/usr/bin/read")
+
+
+class UdevadmSettleTests(VdsmTestCase):
+
+ @MonkeyPatch(udevadm, "_UDEVADM", TRUE)
+ def test_success(self):
+ udevadm.settle(5)
+
+ @MonkeyPatch(udevadm, "_UDEVADM", FALSE)
+ def test_error(self):
+ try:
+ udevadm.settle(5)
+ except udevadm.Failure as e:
+ self.assertEqual(e.rc, 1)
+ self.assertEqual(e.out, "")
+ self.assertEqual(e.err, "")
+ else:
+ self.fail("Failure not raised")
+
+ @MonkeyPatch(udevadm, "_UDEVADM", READ)
+ def test_timeout(self):
+ self.assertRaises(udevadm.Timeout, udevadm.settle, 1)
diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
index a1c42b3..925c411 100644
--- a/vdsm/storage/multipath.py
+++ b/vdsm/storage/multipath.py
@@ -73,7 +73,10 @@
# events are processed, ensuring detection of new devices and creation or
# update of multipath devices.
timeout = config.getint('irs', 'scsi_settle_timeout')
- udevadm.settle(timeout)
+ try:
+ udevadm.settle(timeout)
+ except udevadm.Timeout as e:
+ log.warning("Timeout waiting for udev events: %s", e)
def deduceType(a, b):
diff --git a/vdsm/storage/storageServer.py b/vdsm/storage/storageServer.py
index 22a90d1..c19fb8d 100644
--- a/vdsm/storage/storageServer.py
+++ b/vdsm/storage/storageServer.py
@@ -382,7 +382,10 @@
def connect(self):
iscsi.addIscsiNode(self._iface, self._target, self._cred)
timeout = config.getint("irs", "scsi_settle_timeout")
- udevadm.settle(timeout)
+ try:
+ udevadm.settle(timeout)
+ except udevadm.Timeout as e:
+ self.log.warning("Timeout waiting for udev events: %s", e)
def _match(self, session):
target = session.target
diff --git a/vdsm/storage/udevadm.py b/vdsm/storage/udevadm.py
index 4b4b54a..a2afd04 100644
--- a/vdsm/storage/udevadm.py
+++ b/vdsm/storage/udevadm.py
@@ -18,22 +18,39 @@
# Refer to the README and COPYING files for full details of the license
#
-import logging
+import errno
+import signal
+
from vdsm import utils
+from vdsm.infra import zombiereaper
_UDEVADM = utils.CommandPath("udevadm", "/sbin/udevadm", "/usr/sbin/udevadm")
class Error(Exception):
+ message = None
- def __init__(self, rc, out, err):
+ def __str__(self):
+ return self.message.format(self=self)
+
+
+class Failure(Error):
+ message = ("udevadm failed cmd={self.cmd} rc={self.rc} out={self.out!r} "
+ "err={self.err!r}")
+
+ def __init__(self, cmd, rc, out, err):
+ self.cmd = cmd
self.rc = rc
self.out = out
self.err = err
- def __str__(self):
- return "Process failed with rc=%d out=%r err=%r" % (
- self.rc, self.out, self.err)
+
+class Timeout(Error):
+ message = ("udevadm timed out cmd={self.cmd} timeout={self.timeout}")
+
+ def __init__(self, cmd, timeout):
+ self.cmd = cmd
+ self.timeout = timeout
def settle(timeout, exit_if_exists=None):
@@ -44,25 +61,35 @@
Arguments:
timeout Maximum number of seconds to wait for the event queue to
- become empty. A value of 0 will check if the queue is empty
- and always return immediately.
+ become empty.
exit_if_exists Stop waiting if file exists.
+
+ Raises Failure if udevadm failed, or Timeout if udevadm did not terminate
+ within the requested timeout.
"""
- args = ["settle", "--timeout=%s" % timeout]
+ cmd = [_UDEVADM.cmd, "settle"]
if exit_if_exists:
- args.append("--exit-if-exists=%s" % exit_if_exists)
+ cmd.append("--exit-if-exists=%s" % exit_if_exists)
- try:
- _run_command(args)
- except Error as e:
- logging.error("%s", e)
+ _run_command(cmd, timeout)
-def _run_command(args):
- cmd = [_UDEVADM.cmd]
- cmd.extend(args)
- rc, out, err = utils.execCmd(cmd, raw=True)
- if rc != 0:
- raise Error(rc, out, err)
+def _run_command(cmd, timeout=None):
+ proc = utils.execCmd(cmd, sync=False, deathSignal=signal.SIGKILL)
+
+ if not proc.wait(timeout):
+ try:
+ proc.kill()
+ except OSError as e:
+ if e.errno != errno.ESRCH:
+ raise
+ finally:
+ zombiereaper.autoReapPID(proc.pid)
+ raise Timeout(cmd, timeout)
+
+ if proc.returncode != 0:
+ out = "".join(proc.stdout)
+ err = "".join(proc.stderr)
+ raise Failure(cmd, proc.returncode, out, err)
--
To view, visit https://gerrit.ovirt.org/39740
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia0a7380b1b181ec93399ea741122cfa2e98086fb
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: storage: Make Image.__chainSizeCalc public
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: storage: Make Image.__chainSizeCalc public
......................................................................
storage: Make Image.__chainSizeCalc public
The new SDM copyVolumeData wants to reuse the logic used by the classic
copy flows to extend the target volume to the appropriate size. Make
Image.__chainSizeCalc public so it can be accessed from the SDM code.
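The rename is needed because double-underscore attributes are
name-mangled by Python and are not cleanly reachable from other modules.
A minimal illustration:

    class Image(object):
        def __chainSizeCalc(self, size):
            # Mangled to _Image__chainSizeCalc at class definition time.
            return size

    img = Image()
    # img.__chainSizeCalc(1)             # AttributeError outside the class
    print(img._Image__chainSizeCalc(1))  # reachable only via mangled name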
Change-Id: Id079eb5067c16f934370e42b5f4e09bbcef1512b
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/storage/image.py
1 file changed, 2 insertions(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/95/38995/1
diff --git a/vdsm/storage/image.py b/vdsm/storage/image.py
index 791b48c..8d5e8c2 100644
--- a/vdsm/storage/image.py
+++ b/vdsm/storage/image.py
@@ -140,7 +140,7 @@
randomStr = misc.randomStr(RENAME_RANDOM_STRING_LEN)
return "%s%s_%s" % (sd.REMOVED_IMAGE_PREFIX, randomStr, uuid)
- def __chainSizeCalc(self, sdUUID, imgUUID, volUUID, size):
+ def chainSizeCalc(self, sdUUID, imgUUID, volUUID, size):
"""
Compute an estimate of the whole chain size
using the sum of the actual size of the chain's volumes
@@ -763,7 +763,7 @@
if volParams['volFormat'] != volume.COW_FORMAT or \
volParams['prealloc'] != volume.SPARSE_VOL:
raise se.IncorrectFormat(self)
- volParams['apparentsize'] = self.__chainSizeCalc(
+ volParams['apparentsize'] = self.chainSizeCalc(
sdUUID, srcImgUUID, srcVolUUID, volParams['size'])
# Find out dest volume parameters
--
To view, visit https://gerrit.ovirt.org/38995
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id079eb5067c16f934370e42b5f4e09bbcef1512b
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
Change in vdsm[master]: SDM: Add extendVolumeContainer API
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: SDM: Add extendVolumeContainer API
......................................................................
SDM: Add extendVolumeContainer API
The extendVolumeContainer API is used to extend LVM logical volumes
which store thinly-provisioned vdsm volumes.
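A hypothetical caller sketch of the new verb as exposed through the
XML-RPC binding below (host address and UUIDs are placeholders, and
vdsm's TLS client-certificate setup is ignored for brevity; the size is
passed as a string of bytes, as vdsClient would, since XML-RPC integers
are 32 bit):

    import xmlrpclib

    server = xmlrpclib.ServerProxy("https://myhost:54321")
    status = server.extendVolumeContainer(
        "sd-uuid", "img-uuid", "vol-uuid", str(10 * 2**30))
    if status['status']['code'] != 0:
        raise RuntimeError(status['status']['message'])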
Change-Id: I6a128ba3eab4116ff4e794e94a171e51d9e432de
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M client/vdsClient.py
M vdsm/API.py
M vdsm/rpc/BindingXMLRPC.py
M vdsm/rpc/vdsmapi-schema.json
M vdsm/storage/hsm.py
M vdsm/storage/sdm/__init__.py
M vdsm/storage/sdm/blockstore.py
M vdsm/storage/sdm/filestore.py
8 files changed, 123 insertions(+), 9 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/96/39696/1
diff --git a/client/vdsClient.py b/client/vdsClient.py
index 23bfc1a..f6c0c83 100755
--- a/client/vdsClient.py
+++ b/client/vdsClient.py
@@ -1953,6 +1953,17 @@
else:
return status['status']['code'], status['status']['message']
+ def extendVolumeContainer(self, args):
+ if len(args) != 4:
+ raise ValueError("Wrong number of arguments")
+
+ sdUUID, imgUUID, volUUID, size = args
+ status = self.s.extendVolumeContainer(sdUUID, imgUUID, volUUID, size)
+ if status['status']['code'] == 0:
+ return 0, ''
+ else:
+ return status['status']['code'], status['status']['message']
+
if __name__ == '__main__':
if _glusterEnabled:
@@ -2844,6 +2855,11 @@
'<srcImage> <dstImage> <collapse>',
'Copy the date from one volume into another.'
)),
+ 'extendVolumeContainer': (
+ serv.extendVolumeContainer, (
+ '<sdUUID> <imgUUID> <volUUID>',
+ 'Extend a thinly-provisioned block volume.'
+ )),
}
if _glusterEnabled:
commands.update(ge.getGlusterCmdDict(serv))
diff --git a/vdsm/API.py b/vdsm/API.py
index b4b7308..5661b82 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -1675,6 +1675,10 @@
def copyData(self, srcImage, dstImage, collapse):
return self._cif.irs.copyData(srcImage, dstImage, collapse)
+ def extendVolumeContainer(self, sdUUID, imgUUID, volUUID, size):
+ return self._cif.irs.extendVolumeContainer(sdUUID, imgUUID, volUUID,
+ size)
+
# take a rough estimate on how much free mem is available for new vm
# memTotal = memFree + memCached + mem_used_by_non_qemu + resident .
# simply returning (memFree + memCached) is not good enough, as the
diff --git a/vdsm/rpc/BindingXMLRPC.py b/vdsm/rpc/BindingXMLRPC.py
index 3e70e28..ae43614 100644
--- a/vdsm/rpc/BindingXMLRPC.py
+++ b/vdsm/rpc/BindingXMLRPC.py
@@ -993,6 +993,10 @@
api = API.Global()
return api.copyData(srcImage, dstImage, collapse)
+ def extendVolumeContainer(self, sdUUID, imgUUID, volUUID, size):
+ api = API.Global()
+ return api.extendVolumeContainer(sdUUID, imgUUID, volUUID, size)
+
def getGlobalMethods(self):
return ((self.vmDestroy, 'destroy'),
(self.vmCreate, 'create'),
@@ -1143,7 +1147,8 @@
'storageServer_ConnectionRefs_statuses'),
(self.volumeCreateContainer, 'createVolumeContainer'),
(self.volumeRemove, 'removeVolume'),
- (self.copyData, 'copyData'))
+ (self.copyData, 'copyData'),
+ (self.extendVolumeContainer, 'extendVolumeContainer'))
def wrapApiMethod(f):
diff --git a/vdsm/rpc/vdsmapi-schema.json b/vdsm/rpc/vdsmapi-schema.json
index 8720703..f507324 100644
--- a/vdsm/rpc/vdsmapi-schema.json
+++ b/vdsm/rpc/vdsmapi-schema.json
@@ -4182,6 +4182,24 @@
'data': {'srcImage': 'VolumeSpec', 'dstImage': 'VolumeSpec',
'collapse': 'bool'}}
+##
+# @Host.extendVolumeContainer:
+#
+# Extend a thinly-provisioned volume.
+#
+# @sdUUID: The UUID of the storage domain containing the volume
+#
+# @imgUUID: The UUID of the image containing the volume
+#
+# @volUUID: The UUID of the volume
+#
+# @size: The new desired size (in bytes)
+#
+# Since: 4.18.0
+##
+{'command': {'class': 'Host', 'name': 'extendVolumeContainer'},
+ 'data': {'sdUUID': 'UUID', 'imgUUID': 'UUID', 'volUUID': 'UUID',
+ 'size': 'uint'}}
## Category: @ConnectionRefs ##################################################
##
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 2236457..34730dd 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -854,13 +854,22 @@
"""
newSize = misc.validateN(newSize, "newSize") / 2 ** 20
- try:
- pool = self.getPool(spUUID)
- except se.StoragePoolUnknown:
- pass
+ enableSDM = True # Replace this with a check on the SD version
+ if enableSDM:
+ domain = sdCache.produce(volDict['domainID'])
+ self._sdmSchedule('extendVolumeContainer',
+ sdm.extendVolumeContainer, domain,
+ volDict['imageID'], volDict['volumeID'], newSize,
+ callbackFunc, volDict)
else:
- if pool.hsmMailer:
- pool.hsmMailer.sendExtendMsg(volDict, newSize, callbackFunc)
+ try:
+ pool = self.getPool(spUUID)
+ except se.StoragePoolUnknown:
+ pass
+ else:
+ if pool.hsmMailer:
+ pool.hsmMailer.sendExtendMsg(volDict, newSize,
+ callbackFunc)
def _spmSchedule(self, spUUID, name, func, *args):
self.validateSPM(spUUID)
@@ -3744,3 +3753,18 @@
se.VolumeCopyError("Unsupported combination of image types: "
"src:%s, dst:%s" % (src.get('type'),
dst.get('type')))
+
+ @public
+ def extendVolumeContainer(self, sdUUID, imgUUID, volUUID, size):
+ vars.task.setDefaultException(
+ se.VolumeExtendingError("sdUUID=%s, volumeUUID=%s, size=%s" % (
+ sdUUID, volUUID, size)))
+ size = misc.validateN(size, "size")
+ # ExtendVolume expects size in MB
+ size = int(math.ceil(size / float(2 ** 20)))
+
+ dom = sdCache.produce(sdUUID=sdUUID)
+ misc.validateUUID(imgUUID, 'imgUUID')
+ misc.validateUUID(volUUID, 'volUUID')
+ vars.task.getSharedLock(STORAGE, sdUUID)
+ return sdm.extendVolumeContainer(dom, imgUUID, volUUID, size)
diff --git a/vdsm/storage/sdm/__init__.py b/vdsm/storage/sdm/__init__.py
index 9d74bd5..7f80ca2 100644
--- a/vdsm/storage/sdm/__init__.py
+++ b/vdsm/storage/sdm/__init__.py
@@ -195,3 +195,16 @@
finally:
dstDom.releaseVolumeLease(dstImage['imgUUID'], dstImage['volUUID'])
srcDom.releaseVolumeLease(srcImage['imgUUID'], srcImage['volUUID'])
+
+
+def extendVolumeContainer(domain, imgUUID, volUUID, size,
+ cbFn=None, cbData=None):
+ cls = __getStoreClass(domain)
+ hostId = getDomainHostId(domain.sdUUID)
+ domain.acquireClusterLock(hostId)
+ try:
+ cls.extendVolume(domain, imgUUID, volUUID, size)
+ finally:
+ domain.releaseClusterLock()
+ if cbFn:
+ cbFn(cbData)
diff --git a/vdsm/storage/sdm/blockstore.py b/vdsm/storage/sdm/blockstore.py
index b80f869..73a12a8 100644
--- a/vdsm/storage/sdm/blockstore.py
+++ b/vdsm/storage/sdm/blockstore.py
@@ -20,6 +20,7 @@
import os
import logging
+import math
import vdsm.utils as utils
from vdsm.config import config
@@ -27,9 +28,13 @@
import volumestore
from .. import blockVolume
from .. import lvm
+from .. import resourceManager as rm
+from .. import sd
from .. import storage_exception as se
from .. import volume
+from ..resourceFactories import IMAGE_NAMESPACE
+rmanager = rm.ResourceManager.getInstance()
log = logging.getLogger('Storage.sdm.blockstore')
SECTORS_TO_MB = (1 << 20) / volume.BLOCK_SIZE
@@ -37,6 +42,9 @@
class BlockStore(volumestore.VolumeStore):
volClass = blockVolume.BlockVolume
+
+ # Estimate of the additional space needed for qcow format internal data.
+ VOLWM_COW_OVERHEAD = 1.1
@classmethod
def volFormatToPreallocate(cls, volFormat):
@@ -109,7 +117,7 @@
parent = tag[len(blockVolume.TAG_PREFIX_PARENT):]
if parent and image:
break
- vols.append(volumestore.GCVol(lv.name, volUUID, image, parent))
+ vols.append(volumestore.GCVol(lv.name, volUUID, image, parent))
return vols
@classmethod
@@ -119,3 +127,24 @@
except se.VolumeMetadataReadError:
pass
lvm.removeLVs(dom.sdUUID, volName)
+
+ @classmethod
+ def extendVolume(cls, dom, imgUUID, volUUID, size):
+ imageResourcesNamespace = sd.getNamespace(dom.sdUUID, IMAGE_NAMESPACE)
+ with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
+ rm.LockType.shared):
+ # Verify that the requested size is valid
+ vol = dom.produceVolume(imgUUID, volUUID)
+ volInfo = vol.getInfo()
+ maxSize = int(volInfo['capacity'])
+ if volInfo['format'] == volume.type2name(volume.COW_FORMAT):
+ maxSize = maxSize * cls.VOLWM_COW_OVERHEAD
+ maxSize = int(math.ceil(maxSize / float(2 ** 20)))
+ if size > maxSize:
+ raise se.VolumeExtendingError(
+ "Size %i exceeds the maximum extend size of %i for volume "
+ "%s" % (size, maxSize, volUUID))
+
+ dom.extendVolume(volUUID, size)
+
+
diff --git a/vdsm/storage/sdm/filestore.py b/vdsm/storage/sdm/filestore.py
index dd6c58f..cc3e2bb 100644
--- a/vdsm/storage/sdm/filestore.py
+++ b/vdsm/storage/sdm/filestore.py
@@ -157,4 +157,9 @@
except se.ImageDeleteError:
dom.imageGarbageCollector()
else:
- dom.oop.os.unlink(volPath)
\ No newline at end of file
+ dom.oop.os.unlink(volPath)
+
+ @classmethod
+ def extendVolume(cls, dom, imgUUID, volUUID, size):
+ # There is nothing to do for file domains. The filesystem handles it.
+ pass
--
To view, visit https://gerrit.ovirt.org/39696
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I6a128ba3eab4116ff4e794e94a171e51d9e432de
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
Change in vdsm[master]: SDM: isolateVolumes API
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: SDM: isolateVolumes API
......................................................................
SDM: isolateVolumes API
Change-Id: I9b67e2df82afba9956e8246c1a4f9093aed729f2
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M client/vdsClient.py
M vdsm/API.py
M vdsm/rpc/BindingXMLRPC.py
M vdsm/rpc/vdsmapi-schema.json
M vdsm/storage/hsm.py
M vdsm/storage/sdm/__init__.py
M vdsm/storage/sdm/blockstore.py
M vdsm/storage/sdm/filestore.py
M vdsm/storage/sdm/volumestore.py
M vdsm/storage/storage_exception.py
10 files changed, 134 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/79/40379/1
diff --git a/client/vdsClient.py b/client/vdsClient.py
index 633d1b7..d3c482e 100755
--- a/client/vdsClient.py
+++ b/client/vdsClient.py
@@ -1963,6 +1963,17 @@
else:
return status['status']['code'], status['status']['message']
+ def isolateVolumes(self, args):
+ if len(args) != 4:
+ raise ValueError('Wrong number of arguments')
+ sdUUID, srcImgUUID, dstImgUUID, volStr = args
+ volList = volStr.split(',')
+ status = self.s.isolateVolumes(sdUUID, srcImgUUID, dstImgUUID, volList)
+ if status['status']['code'] == 0:
+ return 0, ''
+ else:
+ return status['status']['code'], status['status']['message']
+
if __name__ == '__main__':
if _glusterEnabled:
@@ -2855,6 +2866,12 @@
'<sdUUID> <imgUUID> <volUUID>',
'Extend a thinly-provisioned block volume.'
)),
+ 'isolateVolumes': (
+ serv.isolateVolumes, (
+ '<sdUUID> <srcImgUUID> <dstImgUUID> <volUUID>[...,<volUUID>]',
+ 'Isolate volumes from one image into a new image for '
+ 'post-processing.'
+ )),
}
if _glusterEnabled:
commands.update(ge.getGlusterCmdDict(serv))
diff --git a/vdsm/API.py b/vdsm/API.py
index a50025f..44dddb4 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -1049,6 +1049,10 @@
def validate(self):
return self._irs.validateStorageDomain(self._UUID)
+ def isolateVolumes(self, srcImageID, dstImageID, volumeList):
+ return self._irs.isolateVolumes(self._UUID, srcImageID, dstImageID,
+ volumeList)
+
class StoragePool(APIBase):
ctorArgs = ['storagepoolID']
diff --git a/vdsm/rpc/BindingXMLRPC.py b/vdsm/rpc/BindingXMLRPC.py
index 7834e83..0e1e5e4 100644
--- a/vdsm/rpc/BindingXMLRPC.py
+++ b/vdsm/rpc/BindingXMLRPC.py
@@ -1000,6 +1000,10 @@
api = API.Global()
return api.extendVolumeContainer(sdUUID, imgUUID, volUUID, size)
+ def isolateVolumes(self, sdUUID, srcImgUUID, dstImgUUID, volumeList):
+ api = API.StorageDomain(sdUUID)
+ return api.isolateVolumes(srcImgUUID, dstImgUUID, volumeList)
+
def getGlobalMethods(self):
return ((self.vmDestroy, 'destroy'),
(self.vmCreate, 'create'),
@@ -1151,7 +1155,8 @@
'storageServer_ConnectionRefs_statuses'),
(self.volumeCreateContainer, 'createVolumeContainer'),
(self.copyData, 'copyData'),
- (self.extendVolumeContainer, 'extendVolumeContainer'))
+ (self.extendVolumeContainer, 'extendVolumeContainer'),
+ (self.isolateVolumes, 'isolateVolumes'))
def wrapApiMethod(f):
diff --git a/vdsm/rpc/vdsmapi-schema.json b/vdsm/rpc/vdsmapi-schema.json
index c0d8caf..a873d22 100644
--- a/vdsm/rpc/vdsmapi-schema.json
+++ b/vdsm/rpc/vdsmapi-schema.json
@@ -5459,6 +5459,25 @@
{'command': {'class': 'StorageDomain', 'name': 'validate'},
'data': {'storagedomainID': 'UUID'}}
+##
+# @StorageDomain.isolateVolumes:
+#
+# Isolate volumes from one image into a new image.
+#
+# @storagedomainID: The UUID of the Storage Domain
+#
+# @srcImageID: The UUID of the Image containing the volumes
+#
+# @dstImageID: The UUID of the destination Image
+#
+# @volumeList: Identifies a set of volumes to move
+#
+# Since: 4.18.0
+##
+{'command': {'class': 'StorageDomain', 'name': 'isolateVolumes'},
+ 'data': {'storagedomainID': 'UUID', 'srcImageID': 'UUID',
+ 'dstImageID': 'UUID', 'volumeList': ['UUID']}}
+
## Category: @StoragePool #####################################################
##
# @StoragePool:
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 63c9b3b..746423d 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -3760,3 +3760,13 @@
misc.validateUUID(volUUID, 'volUUID')
vars.task.getSharedLock(STORAGE, sdUUID)
return sdm.extendVolumeContainer(dom, imgUUID, volUUID, size)
+
+ @public
+ def isolateVolumes(self, sdUUID, srcImgUUID, dstImgUUID, volumeList):
+ vars.task.setDefaultException(
+ se.IsolateVolumesError(sdUUID, srcImgUUID, dstImgUUID, volumeList))
+ dom = sdCache.produce(sdUUID=sdUUID)
+ misc.validateUUID(srcImgUUID, 'srcImgUUID')
+ misc.validateUUID(dstImgUUID, 'dstImgUUID')
+ vars.task.getSharedLock(STORAGE, sdUUID)
+ return sdm.isolateVolumes(dom, srcImgUUID, dstImgUUID, volumeList)
diff --git a/vdsm/storage/sdm/__init__.py b/vdsm/storage/sdm/__init__.py
index da1e858..1636e63 100644
--- a/vdsm/storage/sdm/__init__.py
+++ b/vdsm/storage/sdm/__init__.py
@@ -208,3 +208,21 @@
domain.releaseClusterLock()
if cbFn:
cbFn(cbData)
+
+
+def isolateVolumes(domain, srcImgUUID, dstImgUUID, volumeList):
+ cls = __getStoreClass(domain)
+ imageResourcesNamespace = sd.getNamespace(domain.sdUUID, IMAGE_NAMESPACE)
+
+ hostId = getDomainHostId(domain.sdUUID)
+ domain.acquireClusterLock(hostId)
+ try:
+ with nested(rmanager.acquireResource(imageResourcesNamespace,
+ srcImgUUID,
+ rm.LockType.exclusive),
+ rmanager.acquireResource(imageResourcesNamespace,
+ dstImgUUID,
+ rm.LockType.exclusive)):
+ cls.isolateVolumes(domain, srcImgUUID, dstImgUUID, volumeList)
+ finally:
+ domain.releaseClusterLock()
diff --git a/vdsm/storage/sdm/blockstore.py b/vdsm/storage/sdm/blockstore.py
index 06a77e4..bd9d3f1 100644
--- a/vdsm/storage/sdm/blockstore.py
+++ b/vdsm/storage/sdm/blockstore.py
@@ -101,6 +101,18 @@
return newName
@classmethod
+ def _isolateVolume(cls, dom, srcImgUUID, dstImgUUID, vol):
+ pVolUUID = vol.getParent()
+ toAdd = [blockVolume.TAG_PREFIX_PARENT + volume.BLANK_UUID,
+ blockVolume.TAG_PREFIX_IMAGE + dstImgUUID]
+ toDel = [blockVolume.TAG_PREFIX_PARENT + pVolUUID,
+ blockVolume.TAG_PREFIX_IMAGE + srcImgUUID]
+ lvm.changeLVTags(dom.sdUUID, vol.volUUID, addTags=toAdd, delTags=toDel)
+ if pVolUUID and pVolUUID != volume.BLANK_UUID:
+ pVol = dom.produceVolume(srcImgUUID, pVolUUID)
+ cls.recheckIfLeaf(pVol)
+
+ @classmethod
def _getGCVolumes(cls, dom, onlyImg, onlyVol):
lvs = lvm.getLV(dom.sdUUID)
vols = []
diff --git a/vdsm/storage/sdm/filestore.py b/vdsm/storage/sdm/filestore.py
index 3e99ee9..9385d87 100644
--- a/vdsm/storage/sdm/filestore.py
+++ b/vdsm/storage/sdm/filestore.py
@@ -98,6 +98,21 @@
return newName
@classmethod
+ def _isolateVolume(cls, dom, srcImgUUID, dstImgUUID, vol):
+ srcImgPath = os.path.join(dom.getRepoPath(), dom.sdUUID,
+ sd.DOMAIN_IMAGES, srcImgUUID)
+ dstImgPath = os.path.join(dom.getRepoPath(), dom.sdUUID,
+ sd.DOMAIN_IMAGES, dstImgUUID)
+ pUUID = vol.getParent()
+ vol._share(dstImgPath)
+ dstVol = dom.produceVolume(dstImgUUID, vol.volUUID)
+ dstVol.setParent(volume.BLANK_UUID)
+ dstVol.setImage(dstImgUUID)
+ newName = cls._beginRemoveVolume(dom, srcImgPath, vol.volUUID)
+ volInfo = volumestore.GCVol(newName, vol.volUUID, srcImgUUID, pUUID)
+ cls._garbageCollectVolume(dom, volInfo)
+
+ @classmethod
def _getGCVolumes(cls, dom, onlyImg, onlyVol):
vols = []
volPaths = []
diff --git a/vdsm/storage/sdm/volumestore.py b/vdsm/storage/sdm/volumestore.py
index d518f20..6568d75 100644
--- a/vdsm/storage/sdm/volumestore.py
+++ b/vdsm/storage/sdm/volumestore.py
@@ -355,3 +355,27 @@
newName = cls._beginRemoveVolume(dom, imageDir, volUUID)
volInfo = GCVol(newName, volUUID, imgUUID, pUUID)
cls._garbageCollectVolume(dom, volInfo)
+
+ @classmethod
+ def isolateVolumes(cls, dom, srcImgUUID, dstImgUUID, volumeList):
+ repoPath = dom.getRepoPath()
+ # Create dest image
+ cls.createImage(repoPath, dom.sdUUID, dstImgUUID)
+ # Verify dest image contains only volumes in volumeList
+ uuidList = cls.volClass.getImageVolumes(repoPath, dom.sdUUID,
+ dstImgUUID)
+ extraVols = set(uuidList) - set(volumeList)
+ if extraVols:
+ log.error("Destination image contains unexpected volumes: %s",
+ extraVols)
+ raise se.IsolateVolumesError(dom.sdUUID, srcImgUUID,
+ dstImgUUID, volumeList)
+ # Iterate over volumes in volumeList
+ for volUUID in volumeList:
+ try:
+ vol = cls.volClass(repoPath, dom.sdUUID, srcImgUUID, volUUID)
+ except se.VolumeDoesNotExist:
+ log.debug("Skipping non-existent source volume %s", volUUID)
+ continue
+ vol.validateDelete()
+ cls._isolateVolume(dom, srcImgUUID, dstImgUUID, vol)
diff --git a/vdsm/storage/storage_exception.py b/vdsm/storage/storage_exception.py
index 1cfc8e4..a695cf4 100644
--- a/vdsm/storage/storage_exception.py
+++ b/vdsm/storage/storage_exception.py
@@ -453,6 +453,15 @@
message = "Image does not exist in domain"
+class IsolateVolumesError(StorageException):
+ def __init__(self, sdUUID, srcImgUUID, dstImgUUID, volumeList):
+ self.value = ("domain=%s srcImg=%s dstImg=%s "
+ "volumes=%s" % (sdUUID, srcImgUUID, dstImgUUID,
+ volumeList))
+ code = 269
+ message = "Unable to isolate volumes"
+
+
#################################################
# Pool Exceptions
#################################################
--
To view, visit https://gerrit.ovirt.org/40379
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9b67e2df82afba9956e8246c1a4f9093aed729f2
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
Change in vdsm[master]: HACK: run GC in domain monitor
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: HACK: run GC in domain monitor
......................................................................
HACK: run GC in domain monitor
Change-Id: I3c560e6fbccdf50b135cc9c90b23824ae04b0376
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/storage/monitor.py
M vdsm/storage/sdm/__init__.py
2 files changed, 22 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/80/40380/1
diff --git a/vdsm/storage/monitor.py b/vdsm/storage/monitor.py
index ef032a5..a2a1bd7 100644
--- a/vdsm/storage/monitor.py
+++ b/vdsm/storage/monitor.py
@@ -28,6 +28,7 @@
from . import clusterlock
from . import misc
+from . import sdm
from .sdc import sdCache
@@ -249,6 +250,7 @@
self._performDomainSelftest()
self._checkReadDelay()
self._collectStatistics()
+ self._garbageCollect()
except Exception as e:
self.log.exception("Error monitoring domain %s", self.sdUUID)
self.nextStatus.error = e
@@ -340,6 +342,14 @@
self.nextStatus.isoPrefix = self.isoPrefix
self.nextStatus.version = self.domain.getVersion()
+ def _garbageCollect(self):
+ if True: # XXX: limit this to domain ver 4 or later
+ try:
+ sdm.garbageCollectStorageDomain(self.domain)
+ except:
+ self.log.exception("Garbage collection failed for domain %s",
+ self.domain.sdUUID)
+
# Managing host id
def _shouldAcquireHostId(self):
diff --git a/vdsm/storage/sdm/__init__.py b/vdsm/storage/sdm/__init__.py
index 1636e63..4a31332 100644
--- a/vdsm/storage/sdm/__init__.py
+++ b/vdsm/storage/sdm/__init__.py
@@ -226,3 +226,15 @@
cls.isolateVolumes(domain, srcImgUUID, dstImgUUID, volumeList)
finally:
domain.releaseClusterLock()
+
+
+def garbageCollectStorageDomain(domain):
+ if domain.isISO():
+ return
+ cls = __getStoreClass(domain)
+ hostId = getDomainHostId(domain.sdUUID)
+ domain.acquireClusterLock(hostId)
+ try:
+ cls.garbageCollectStorageDomain(domain)
+ finally:
+ domain.releaseClusterLock()
--
To view, visit https://gerrit.ovirt.org/40380
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3c560e6fbccdf50b135cc9c90b23824ae04b0376
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>