Change in vdsm[master]: HACK: Use a monitor command to get watermarks
by alitke@redhat.com
Adam Litke has uploaded a new change for review.
Change subject: HACK: Use a monitor command to get watermarks
......................................................................
HACK: Use a monitor command to get watermarks
Change-Id: I6b3e408fda22ac4a4cb11f58d399b9d69906d72e
Signed-off-by: Adam Litke <alitke(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 27 insertions(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/20/28620/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index 708872a..5c192f0 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -2293,8 +2293,33 @@
self.conf['timeOffset'] = newTimeOffset
def _getMergeWriteWatermarks(self):
- # TODO: Adopt the future libvirt API
- return {}
+ ret = {}
+ cmd = {'execute': 'query-blockstats'}
+ resp = self._internalQMPMonitorCommand(cmd)
+ for device in resp['return']:
+ name = device['device']
+ if not name.startswith('drive-'):
+ continue
+ alias = name[6:]
+ try:
+ drive = self._lookupDeviceByAlias(DISK_DEVICES, alias)
+ job = self.getBlockJob(drive)
+ except LookupError:
+ continue
+
+ volChain = job['chain']
+ stats = []
+ vol = device
+ while vol:
+ stats.insert(0, vol['parent']['stats']['wr_highest_offset'])
+ vol = vol.get('backing')
+ if len(volChain) != len(stats):
+ self.log.debug("The number of wr_highest_offset stats does "
+ "not match the number of volumes. Skipping.")
+ continue
+ for vol, stat in zip(volChain, stats):
+ ret[vol] = stat
+ return ret
def _getLiveMergeExtendCandidates(self):
ret = {}
--
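The chain walk in _getMergeWriteWatermarks above can be sketched standalone. This is a simplified sketch, not the patch itself: it reads each node's own 'stats' entry, whereas the patch goes through the 'parent' entry, and the response shape below is an assumption based on QEMU's query-blockstats output (each entry carries 'stats' with 'wr_highest_offset' plus an optional nested 'backing' entry):

```python
def chain_watermarks(device):
    """Collect wr_highest_offset for each layer of a backing chain,
    ordered base-first to match a volume chain (base ... leaf)."""
    stats = []
    node = device
    while node:
        # Prepend so the base image ends up first in the list.
        stats.insert(0, node['stats']['wr_highest_offset'])
        node = node.get('backing')
    return stats


# Hypothetical two-layer response entry (leaf on top of one base):
resp = {
    'device': 'drive-virtio-disk0',
    'stats': {'wr_highest_offset': 4096},
    'backing': {'stats': {'wr_highest_offset': 1048576}},
}
```

With this entry, chain_watermarks(resp) yields the base's offset first, matching the base-to-leaf ordering of the job's volume chain.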
To view, visit http://gerrit.ovirt.org/28620
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I6b3e408fda22ac4a4cb11f58d399b9d69906d72e
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
8 years, 1 month
Change in vdsm[master]: spbackends: do not set spmRole on forceFreeSpm
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: spbackends: do not set spmRole on forceFreeSpm
......................................................................
spbackends: do not set spmRole on forceFreeSpm
Setting the spmRole to SPM_FREE on forceFreeSpm is harmful: the release
of the spm role (SPM_ACQUIRED) should go through the stopSpm procedure,
where the master filesystem is unmounted, etc.
Change-Id: I2bd9a0d9749e49a97a31c535c92dd242eb8f74ec
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/spbackends.py
1 file changed, 0 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/18/27318/1
diff --git a/vdsm/storage/spbackends.py b/vdsm/storage/spbackends.py
index 86714a3..dddb749 100644
--- a/vdsm/storage/spbackends.py
+++ b/vdsm/storage/spbackends.py
@@ -327,7 +327,6 @@
# DO NOT USE, STUPID, HERE ONLY FOR BC
# TODO: SCSI Fence the 'lastOwner'
self.setSpmStatus(LVER_INVALID, SPM_ID_FREE, __securityOverride=True)
- self.pool.spmRole = SPM_FREE
@classmethod
def _getPoolMD(cls, domain):
--
To view, visit http://gerrit.ovirt.org/27318
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I2bd9a0d9749e49a97a31c535c92dd242eb8f74ec
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: image: unify the prezeroing optimizations
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: image: unify the prezeroing optimizations
......................................................................
image: unify the prezeroing optimizations
The same prezeroing optimization logic was used in multiple places; this
patch unifies it in __optimizedCreateVolume.
Change-Id: I0fd90f85e9debf98bcac07d1b8d4b38c319c33f2
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/image.py
1 file changed, 43 insertions(+), 45 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/04/8504/1
diff --git a/vdsm/storage/image.py b/vdsm/storage/image.py
index e86d94c..19ab078 100644
--- a/vdsm/storage/image.py
+++ b/vdsm/storage/image.py
@@ -454,6 +454,37 @@
except Exception:
self.log.error("Unexpected error", exc_info=True)
+ def __optimizedCreateVolume(self, domain, imgUUID, size, apparentSize,
+ volFormat, preallocate, diskType, volUUID, desc, srcImgUUID,
+ srcVolUUID):
+ # To avoid 'prezeroing' preallocated volume on NFS domain,
+ # we create the target volume with minimal size and after
+ # that we'll change its metadata back to the original size.
+ if (volFormat == volume.COW_FORMAT
+ or preallocate == volume.SPARSE_VOL):
+ volTmpSize = size
+ else:
+ volTmpSize = TEMPORARY_VOLUME_SIZE
+
+ domain.createVolume(imgUUID, volTmpSize, volFormat, preallocate,
+ diskType, volUUID, desc, srcImgUUID, srcVolUUID)
+ newVolume = domain.produceVolume(imgUUID, volUUID)
+
+ if volFormat == volume.RAW_FORMAT:
+ extendSize = size
+ else:
+ extendSize = apparentSize
+
+ # Extend volume (for LV only) size to the actual size
+ newVolume.extend((extendSize + 511) / 512)
+
+ # Change destination volume metadata back to the original
+ # size. Heavy operation, do it only if necessary.
+ if volTmpSize != size:
+ newVolume.setSize(size)
+
+ return newVolume
+
def _createTargetImage(self, destDom, srcSdUUID, imgUUID):
# Before the actual data copying we need to perform several operations
# such as: create all volumes, create a fake template if needed, ...
@@ -500,34 +531,12 @@
# find out src volume parameters
volParams = srcVol.getVolumeParams(bs=1)
- # To avoid 'prezeroing' preallocated volume on NFS domain,
- # we create the target volume with minimal size and after
- # that w'll change its metadata back to the original size.
- if (volParams['volFormat'] == volume.COW_FORMAT
- or volParams['prealloc'] == volume.SPARSE_VOL):
- volTmpSize = volParams['size']
- else:
- volTmpSize = TEMPORARY_VOLUME_SIZE # in sectors (10M)
-
- destDom.createVolume(imgUUID=imgUUID, size=volTmpSize,
- volFormat=volParams['volFormat'],
- preallocate=volParams['prealloc'],
- diskType=volParams['disktype'],
- volUUID=srcVol.volUUID,
- desc=volParams['descr'],
- srcImgUUID=pimg,
- srcVolUUID=volParams['parent'])
-
- dstVol = destDom.produceVolume(imgUUID=imgUUID,
- volUUID=srcVol.volUUID)
-
- # Extend volume (for LV only) size to the actual size
- dstVol.extend((volParams['apparentsize'] + 511) / 512)
-
- # Change destination volume metadata back to the original
- # size.
- if volTmpSize != volParams['size']:
- dstVol.setSize(volParams['size'])
+ dstVol = self.__optimizedCreateVolume(
+ destDom, imgUUID, volParams['size'],
+ volParams['apparentsize'], volParams['volFormat'],
+ volParams['prealloc'], volParams['disktype'],
+ srcVol.volUUID, volParams['descr'], srcImgUUID=pimg,
+ srcVolUUID=volParams['parent'])
dstChain.append(dstVol)
except se.StorageException:
@@ -760,25 +769,14 @@
self.log.info("delete image %s on domain %s before overwriting", dstImgUUID, dstSdUUID)
self.delete(dstSdUUID, dstImgUUID, postZero, force=True)
- # To avoid 'prezeroing' preallocated volume on NFS domain,
- # we create the target volume with minimal size and after that w'll change
- # its metadata back to the original size.
- tmpSize = TEMPORARY_VOLUME_SIZE # in sectors (10M)
- destDom.createVolume(imgUUID=dstImgUUID, size=tmpSize,
- volFormat=dstVolFormat, preallocate=volParams['prealloc'],
- diskType=volParams['disktype'], volUUID=dstVolUUID, desc=descr,
- srcImgUUID=volume.BLANK_UUID, srcVolUUID=volume.BLANK_UUID)
+ dstVol = self.__optimizedCreateVolume(
+ destDom, dstImgUUID, volParams['size'],
+ volParams['apparentsize'], dstVolFormat,
+ volParams['prealloc'], volParams['disktype'],
+ dstVolUUID, descr, volume.BLANK_UUID,
+ volume.BLANK_UUID)
- dstVol = sdCache.produce(dstSdUUID).produceVolume(imgUUID=dstImgUUID, volUUID=dstVolUUID)
- # For convert to 'raw' we need use the virtual disk size instead of apparent size
- if dstVolFormat == volume.RAW_FORMAT:
- newsize = volParams['size']
- else:
- newsize = volParams['apparentsize']
- dstVol.extend(newsize)
dstPath = dstVol.getVolumePath()
- # Change destination volume metadata back to the original size.
- dstVol.setSize(volParams['size'])
except se.StorageException, e:
self.log.error("Unexpected error", exc_info=True)
raise
--
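The size juggling in __optimizedCreateVolume boils down to two details: rounding a byte count up to 512-byte sectors for extend(), and creating at a small temporary size only when the prezeroing cost would actually be paid (preallocated raw volumes). A minimal sketch with illustrative names, not the vdsm API; the 10M value mirrors the "in sectors (10M)" comment removed by the patch:

```python
SECTOR = 512
# 10 MiB expressed in sectors, mirroring TEMPORARY_VOLUME_SIZE ("10M").
TEMPORARY_VOLUME_SIZE = 10 * 1024 * 1024 // SECTOR

COW_FORMAT, RAW_FORMAT = 'cow', 'raw'
SPARSE_VOL, PREALLOC_VOL = 'sparse', 'prealloc'


def bytes_to_sectors(size_bytes):
    # Round up to whole 512-byte sectors; extend() takes sectors.
    return (size_bytes + SECTOR - 1) // SECTOR


def temporary_size(size, vol_format, preallocate):
    # COW and sparse volumes are cheap to create at full size; only
    # preallocated raw volumes get the create-small-then-extend trick.
    if vol_format == COW_FORMAT or preallocate == SPARSE_VOL:
        return size
    return TEMPORARY_VOLUME_SIZE
```

When temporary_size returns the small value, the metadata size is rewritten afterwards, which is exactly the `if volTmpSize != size: newVolume.setSize(size)` step in the patch.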
To view, visit http://gerrit.ovirt.org/8504
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I0fd90f85e9debf98bcac07d1b8d4b38c319c33f2
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: [wip] sdcache: avoid extra refresh due samplingmethod
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: [wip] sdcache: avoid extra refresh due samplingmethod
......................................................................
[wip] sdcache: avoid extra refresh due samplingmethod
In order to avoid an extra iSCSI rescan (a side effect of samplingmethod),
an additional lock has been introduced to queue the requests while the
storage is flagged as stale.
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=870768
Change-Id: If178a8eaeb94f1dfe9e0957036dde88f6a22829c
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/sdc.py
1 file changed, 25 insertions(+), 26 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/74/9274/1
diff --git a/vdsm/storage/sdc.py b/vdsm/storage/sdc.py
index f2f4534..978e3fa 100644
--- a/vdsm/storage/sdc.py
+++ b/vdsm/storage/sdc.py
@@ -62,32 +62,27 @@
STORAGE_UPDATED = 0
STORAGE_STALE = 1
- STORAGE_REFRESHING = 2
def __init__(self, storage_repo):
- self._syncroot = threading.Condition()
+ self._syncDomain = threading.Condition()
+ self._syncRefresh = threading.Lock()
self.__domainCache = {}
self.__inProgress = set()
self.__staleStatus = self.STORAGE_STALE
self.storage_repo = storage_repo
def invalidateStorage(self):
- with self._syncroot:
- self.__staleStatus = self.STORAGE_STALE
+ self.log.debug("The storages have been invalidated")
+ self.__staleStatus = self.STORAGE_STALE
@misc.samplingmethod
def refreshStorage(self):
- self.__staleStatus = self.STORAGE_REFRESHING
-
+ # We need to set the __staleStatus value at the beginning because we
+ # want to keep track of the future invalidateStorage calls that might
+ # arrive during the rescan procedure.
+ self.__staleStatus = self.STORAGE_UPDATED
multipath.rescan()
lvm.invalidateCache()
-
- # If a new invalidateStorage request came in after the refresh
- # started then we cannot flag the storages as updated (force a
- # new rescan later).
- with self._syncroot:
- if self.__staleStatus == self.STORAGE_REFRESHING:
- self.__staleStatus = self.STORAGE_UPDATED
def produce(self, sdUUID):
domain = DomainProxy(self, sdUUID)
@@ -98,7 +93,7 @@
return domain
def _realProduce(self, sdUUID):
- with self._syncroot:
+ with self._syncDomain:
while True:
domain = self.__domainCache.get(sdUUID)
@@ -109,25 +104,29 @@
self.__inProgress.add(sdUUID)
break
- self._syncroot.wait()
+ self._syncDomain.wait()
try:
- # If multiple calls reach this point and the storage is not
- # updated the refreshStorage() sampling method is called
- # serializing (and eventually grouping) the requests.
- if self.__staleStatus != self.STORAGE_UPDATED:
- self.refreshStorage()
+ # Here we cannot take full advantage of the refreshStorage
+ # samplingmethod since we might be scheduling an unneeded
+ # extra rescan. We need an additional lock (_syncRefresh)
+ # to make sure that __staleStatus is taken into account
+ # (without affecting all the other external refreshStorage
+ # calls, as it would if we moved this check there).
+ with self._syncRefresh:
+ if self.__staleStatus != self.STORAGE_UPDATED:
+ self.refreshStorage()
domain = self._findDomain(sdUUID)
- with self._syncroot:
+ with self._syncDomain:
self.__domainCache[sdUUID] = domain
return domain
finally:
- with self._syncroot:
+ with self._syncDomain:
self.__inProgress.remove(sdUUID)
- self._syncroot.notifyAll()
+ self._syncDomain.notifyAll()
def _findDomain(self, sdUUID):
import blockSD
@@ -162,16 +161,16 @@
return uuids
def refresh(self):
- with self._syncroot:
+ with self._syncDomain:
lvm.invalidateCache()
self.__domainCache.clear()
def manuallyAddDomain(self, domain):
- with self._syncroot:
+ with self._syncDomain:
self.__domainCache[domain.sdUUID] = domain
def manuallyRemoveDomain(self, sdUUID):
- with self._syncroot:
+ with self._syncDomain:
try:
del self.__domainCache[sdUUID]
except KeyError:
--
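The ordering the patch relies on can be sketched in isolation: clear the stale flag before rescanning, so an invalidateStorage arriving mid-rescan re-marks the cache stale and forces another pass instead of being lost. A minimal sketch with a plain lock (the rescans counter is instrumentation for the example, not part of the real cache):

```python
import threading

STORAGE_UPDATED, STORAGE_STALE = 0, 1


class Cache(object):
    def __init__(self):
        self._sync_refresh = threading.Lock()
        self._stale = STORAGE_STALE
        self.rescans = 0  # instrumentation for this example only

    def invalidate(self):
        # May run at any time, even during a rescan; the flag stays
        # stale so the next produce() rescans again.
        self._stale = STORAGE_STALE

    def _rescan(self):
        self.rescans += 1  # stands in for multipath.rescan() etc.

    def produce(self):
        # Double-checked refresh: clear the flag *before* rescanning,
        # and serialize the check so concurrent callers share one scan.
        with self._sync_refresh:
            if self._stale != STORAGE_UPDATED:
                self._stale = STORAGE_UPDATED
                self._rescan()
```

Two produce() calls in a row trigger a single rescan; an invalidate() in between forces a second one.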
To view, visit http://gerrit.ovirt.org/9274
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: If178a8eaeb94f1dfe9e0957036dde88f6a22829c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: sp: load dumped tasks when recovering
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: sp: load dumped tasks when recovering
......................................................................
sp: load dumped tasks when recovering
Change-Id: I1cd2ea34c2013870b213d8baa471248adabfbbe3
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/sp.py
1 file changed, 1 insertion(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/02/26902/1
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 338232f..e6a6bd0 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -288,8 +288,6 @@
self.log.error("Backup domain validation failed",
exc_info=True)
- self.taskMng.loadDumpedTasks(self.tasksDir)
-
self.spmRole = SPM_ACQUIRED
# Once setSecure completes we are running as SPM
@@ -322,6 +320,7 @@
# Restore tasks is last because tasks are spm ops (spm has to
# be started)
+ self.taskMng.loadDumpedTasks(self.tasksDir)
self.taskMng.recoverDumpedTasks()
self.log.debug("ended.")
--
To view, visit http://gerrit.ovirt.org/26902
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I1cd2ea34c2013870b213d8baa471248adabfbbe3
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: task: support task id in client request
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: task: support task id in client request
......................................................................
task: support task id in client request
Change-Id: Ib5034e6c3466d5a663699d4f924975b7e067c768
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/BindingXMLRPC.py
M vdsm/clientIF.py
M vdsm/storage/dispatcher.py
3 files changed, 16 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/09/26809/1
diff --git a/vdsm/BindingXMLRPC.py b/vdsm/BindingXMLRPC.py
index 76251f5..d449088 100644
--- a/vdsm/BindingXMLRPC.py
+++ b/vdsm/BindingXMLRPC.py
@@ -98,6 +98,7 @@
Create xml-rpc server over http or https.
"""
HTTP_HEADER_FLOWID = "FlowID"
+ HTTP_HEADER_TASKID = "TaskID"
threadLocal = self.cif.threadLocal
@@ -200,6 +201,7 @@
def parse_request(self):
r = basehandler.parse_request(self)
threadLocal.flowID = self.headers.get(HTTP_HEADER_FLOWID)
+ threadLocal.taskID = self.headers.get(HTTP_HEADER_TASKID)
return r
def finish(self):
@@ -207,6 +209,7 @@
threadLocal.client = None
threadLocal.server = None
threadLocal.flowID = None
+ threadLocal.taskID = None
if sys.version_info[:2] == (2, 6):
# Override BaseHTTPServer.BaseRequestHandler implementation to
@@ -246,6 +249,10 @@
fmt += " flowID [%s]"
logargs.append(self.cif.threadLocal.flowID)
+ if getattr(self.cif.threadLocal, 'taskID', None) is not None:
+ fmt += " taskID [%s]"
+ logargs.append(self.cif.threadLocal.taskID)
+
self.log.debug(fmt, *logargs)
try:
diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
index eac7950..21e3c71 100644
--- a/vdsm/clientIF.py
+++ b/vdsm/clientIF.py
@@ -99,6 +99,7 @@
self.channelListener.start()
self.threadLocal = threading.local()
self.threadLocal.client = ''
+ self.irs.setClientThreadLocal(self.threadLocal)
except:
self.log.error('failed to init clientIF, '
'shutting down storage dispatcher')
diff --git a/vdsm/storage/dispatcher.py b/vdsm/storage/dispatcher.py
index 6586492..3e2bf0a 100644
--- a/vdsm/storage/dispatcher.py
+++ b/vdsm/storage/dispatcher.py
@@ -45,10 +45,14 @@
self.storage_repository = config.get('irs', 'repository')
self._exposeFunctions(obj)
self.log.info("Starting StorageDispatcher...")
+ self._clientThreadLocal = None
@property
def ready(self):
return getattr(self._obj, 'ready', True)
+
+ def setClientThreadLocal(self, clientThreadLocal):
+ self._clientThreadLocal = clientThreadLocal
def _exposeFunctions(self, obj):
for funcName in dir(obj):
@@ -66,7 +70,10 @@
@wraps(func)
def wrapper(*args, **kwargs):
try:
- ctask = task.Task(id=None, name=name)
+ ctaskid = getattr(self._clientThreadLocal, 'taskID', None)
+ if ctaskid is not None:
+ self.log.info('using client requested taskID %s', ctaskid)
+ ctask = task.Task(id=ctaskid, name=name)
try:
response = self.STATUS_OK.copy()
result = ctask.prepare(func, *args, **kwargs)
--
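The plumbing across the three files is straightforward: the XML-RPC handler copies an HTTP header into a thread-local, and the dispatcher reads it back when building the task. A minimal sketch; headers is a plain dict here and FakeTask stands in for storage.task.Task (which generates its own id when given None):

```python
import threading

thread_local = threading.local()

HTTP_HEADER_TASKID = 'TaskID'


def parse_request(headers):
    # Stash the client-proposed task id for this request's thread.
    thread_local.taskID = headers.get(HTTP_HEADER_TASKID)


class FakeTask(object):
    def __init__(self, id=None):
        # Fall back to a server-generated id when the client sent none.
        self.id = id if id is not None else 'generated'


def dispatch():
    # The dispatcher wrapper reads the thread-local, defaulting to None
    # so requests without the header behave exactly as before.
    taskid = getattr(thread_local, 'taskID', None)
    return FakeTask(id=taskid)
```

Because the value is thread-local and reset in finish(), concurrent requests cannot leak task ids into each other.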
To view, visit http://gerrit.ovirt.org/26809
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib5034e6c3466d5a663699d4f924975b7e067c768
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: volume: prepare only one volume on clone
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: volume: prepare only one volume on clone
......................................................................
volume: prepare only one volume on clone
When we are cloning a volume we need to prepare only the volume itself
(justme=True), not its whole chain, and the teardown must run on every
code path (hence the finally block).
Change-Id: Idc009fac4dc1a258537b0ffb15bd627680d79330
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/volume.py
1 file changed, 3 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/20/26920/1
diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
index e1d3fc7..a36914f 100644
--- a/vdsm/storage/volume.py
+++ b/vdsm/storage/volume.py
@@ -271,7 +271,7 @@
wasleaf = True
self.setInternal()
try:
- self.prepare(rw=False)
+ self.prepare(rw=False, justme=True)
dst_path = os.path.join(dst_image_dir, dst_volUUID)
self.log.debug('cloning volume %s to %s', self.volumePath,
dst_path)
@@ -283,15 +283,15 @@
qemuimg.create(dst_path, backing=parent,
format=fmt2str(volFormat),
backingFormat=fmt2str(self.getFormat()))
- self.teardown(self.sdUUID, self.volUUID)
except Exception as e:
self.log.exception('cannot clone volume %s to %s',
self.volumePath, dst_path)
# FIXME: might race with other clones
if wasleaf:
self.setLeaf()
- self.teardown(self.sdUUID, self.volUUID)
raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
+ finally:
+ self.teardown(self.sdUUID, self.volUUID, justme=True)
def _shareLease(self, dstImgPath):
"""
--
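The structural point of the fix: teardown moves from two call sites (one on the success path, one in the error path) into a single finally block, so it runs exactly once on every exit. A sketch of the pattern with callables standing in for the volume operations (names are illustrative):

```python
def clone(prepare, do_copy, teardown):
    """prepare/do_copy/teardown stand in for the volume operations;
    teardown now runs exactly once on every exit path."""
    prepare()
    try:
        do_copy()
    except Exception as e:
        # Error handling (e.g. restoring the leaf flag) happens here,
        # then the failure is re-raised as a clone error.
        raise RuntimeError('cannot clone volume: %s' % e)
    finally:
        teardown()
```

On success the call order is prepare, copy, teardown; on failure the clone error propagates but teardown still runs.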
To view, visit http://gerrit.ovirt.org/26920
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Idc009fac4dc1a258537b0ffb15bd627680d79330
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: sp: prevent master demotion on activation
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: sp: prevent master demotion on activation
......................................................................
sp: prevent master demotion on activation
Change-Id: I47a0c3ea16bcdefa99899e04c453eaf995344dfb
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/sp.py
1 file changed, 9 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/32/28332/1
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 3f983b6..3762c10 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -1041,7 +1041,15 @@
# Domain conversion requires the links to be present
self._refreshDomainLinks(dom)
- self._backend.setDomainRegularRole(dom)
+
+ # This should never happen because we're not deactivating the
+ # current master in deactivateStorageDomain if a new master is
+ # not provided. It is also impossible to connect to a pool
+ # where the master domain is not active. Anyway, to be on the
+ # safe side, we must prevent the current master domain from
+ # being demoted to regular.
+ if sdUUID != self.masterDomain.sdUUID:
+ self._backend.setDomainRegularRole(dom)
if dom.getDomainClass() == sd.DATA_DOMAIN:
self._convertDomain(dom)
--
To view, visit http://gerrit.ovirt.org/28332
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I47a0c3ea16bcdefa99899e04c453eaf995344dfb
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: [WIP] BZ#844656 Release the lock during _findDomain
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: [WIP] BZ#844656 Release the lock during _findDomain
......................................................................
[WIP] BZ#844656 Release the lock during _findDomain
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
Change-Id: I8088d5fe716a3a08c3e5cef2d2d9a654ee96f60a
---
M vdsm/storage/sdc.py
1 file changed, 21 insertions(+), 7 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/22/6822/1
--
To view, visit http://gerrit.ovirt.org/6822
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8088d5fe716a3a08c3e5cef2d2d9a654ee96f60a
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: tests: Add storage mailbox tests
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: tests: Add storage mailbox tests
......................................................................
tests: Add storage mailbox tests
Use a fake repository for testing the content of the storage mailbox
without using real block devices.
Change-Id: If02ed99b95dfd0d6bc5cc9694e60c3808d0974aa
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/Makefile.am
A tests/mailboxTests.py
2 files changed, 91 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/71/29371/1
diff --git a/tests/Makefile.am b/tests/Makefile.am
index 6507165..a1fd385 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -46,6 +46,7 @@
libvirtconnectionTests.py \
lsblkTests.py \
lvmTests.py \
+ mailboxTests.py \
main.py \
md_utils_tests.py \
miscTests.py \
diff --git a/tests/mailboxTests.py b/tests/mailboxTests.py
new file mode 100644
index 0000000..ec6854c
--- /dev/null
+++ b/tests/mailboxTests.py
@@ -0,0 +1,90 @@
+import ConfigParser
+import errno
+import os
+import time
+
+from storage import sd
+from storage import storage_mailbox
+from vdsm import config
+
+from testrunner import VdsmTestCase as TestCaseBase
+import monkeypatch
+
+# Don't use /tmp, as O_DIRECT does not work with the tmpfs file system.
+# Keeping the test files in the source is ugly but makes it very easy to
+# debug the mailbox content.
+REPO_DIR = 'mailboxTests.tmp'
+
+HOST_ID = 1
+POOL_ID = 'pool'
+MD_DIR = os.path.join(REPO_DIR, POOL_ID, "mastersd", sd.DOMAIN_META_DATA)
+INBOX = os.path.join(MD_DIR, "inbox")
+OUTBOX = os.path.join(MD_DIR, "outbox")
+MAX_HOSTS = 2000
+
+# U (0x55) is a nice initial value
+DIRTY_MAILBOX = 'U' * storage_mailbox.MAILBOX_SIZE
+
+fake_config = ConfigParser.ConfigParser()
+config.set_defaults(fake_config)
+fake_config.set('irs', 'repository', REPO_DIR)
+
+
+class HSMMailboxTests(TestCaseBase):
+
+ def setUp(self):
+ # Note: we don't remove the inbox and outbox when test ends to make it
+ # easier to debug by checking inbox and outbox content after a test
+ # fails.
+ create_repository()
+ init_mailbox(INBOX)
+ init_mailbox(OUTBOX)
+
+ @monkeypatch.MonkeyPatch(storage_mailbox, 'config', fake_config)
+ def test_init_inbox(self):
+ mailer = storage_mailbox.HSM_Mailbox(HOST_ID, POOL_ID)
+ try:
+ time.sleep(0.5)
+ with open(INBOX) as f:
+ # First mailbox is not used
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, DIRTY_MAILBOX)
+ # When mailbox is started, it clears the host inbox
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, storage_mailbox.EMPTYMAILBOX)
+ # This mailbox belongs to another host, and should not change
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, DIRTY_MAILBOX)
+ finally:
+ mailer.stop()
+
+ @monkeypatch.MonkeyPatch(storage_mailbox, 'config', fake_config)
+ def test_keep_outbox(self):
+ mailer = storage_mailbox.HSM_Mailbox(HOST_ID, POOL_ID)
+ try:
+ time.sleep(0.5)
+ with open(OUTBOX) as f:
+ # First mailbox is not used
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, DIRTY_MAILBOX)
+ # Outbox is not touched
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, DIRTY_MAILBOX)
+ # This mailbox belongs to another host, and should not change
+ data = f.read(storage_mailbox.MAILBOX_SIZE)
+ self.assertEquals(data, DIRTY_MAILBOX)
+ finally:
+ mailer.stop()
+
+
+def init_mailbox(path):
+ with open(path, 'w') as f:
+ f.write(DIRTY_MAILBOX * MAX_HOSTS)
+
+
+def create_repository():
+ try:
+ os.makedirs(MD_DIR)
+ except OSError as e:
+ if e.errno != errno.EEXIST:
+ raise
--
To view, visit http://gerrit.ovirt.org/29371
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: If02ed99b95dfd0d6bc5cc9694e60c3808d0974aa
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>