Change in vdsm[master]: yml: parameter type fixes for StoragePool.spmStart
by Piotr Kliczewski
Piotr Kliczewski has uploaded a new change for review.
Change subject: yml: parameter type fixes for StoragePool.spmStart
......................................................................
yml: parameter type fixes for StoragePool.spmStart
Change-Id: I49072827b8ac04f720d50aca8e5a24b4be7582b7
Signed-off-by: Piotr Kliczewski <piotr.kliczewski@gmail.com>
---
M lib/api/vdsm-api.yml
M tests/vdsmapi_test.py
2 files changed, 17 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/04/59704/1
diff --git a/lib/api/vdsm-api.yml b/lib/api/vdsm-api.yml
index a289b7f..670eed7 100644
--- a/lib/api/vdsm-api.yml
+++ b/lib/api/vdsm-api.yml
@@ -8602,11 +8602,13 @@
- description: Deprecated. The lver of the previous SPM
name: prevLver
- type: int
+ type: string
+ datatype: int
- description: This parameter is not used
name: enableScsiFencing
- type: boolean
+ type: string
+ datatype: boolean
- defaultvalue: null
description: The maximum number of hosts that could be in the cluster
@@ -8616,7 +8618,8 @@
- defaultvalue: null
description: The expected Storage Domain version of the master domain
name: domVersion
- type: int
+ type: string
+ datatype: int
return:
description: A task UUID
type: *UUID
diff --git a/tests/vdsmapi_test.py b/tests/vdsmapi_test.py
index bff578c..04297b1 100644
--- a/tests/vdsmapi_test.py
+++ b/tests/vdsmapi_test.py
@@ -580,3 +580,14 @@
_schema.schema().verify_retval(
vdsmapi.MethodRep('Host', 'hostdevListByCaps'), ret)
+
+ def test_start_spm(self):
+ params = {u'prevLver': u'-1',
+ u'enableScsiFencing': u'false',
+ u'storagepoolID': u'636d9c59-f7ba-4115-87a1-44d6563a9610',
+ u'prevID': -1,
+ u'domVersion': u'3',
+ u'maxHostID': 250}
+
+ _schema.schema().verify_args(
+ vdsmapi.MethodRep('StoragePool', 'spmStart'), params)
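The intent of the type/datatype split can be illustrated with a small standalone check. This is a hypothetical helper, not the actual vdsmapi verifier: with the schema change above, a parameter is declared as wire type "string" while the datatype records the value it must parse as.

```python
def verify_param(value, type_, datatype=None):
    """Return True if value matches the declared type/datatype pair."""
    if type_ == 'string':
        if not isinstance(value, str):
            return False
        if datatype == 'int':
            # Wire value is a string, but must parse as an integer
            try:
                int(value)
            except ValueError:
                return False
        elif datatype == 'boolean':
            if value.lower() not in ('true', 'false'):
                return False
        return True
    if type_ == 'int':
        return isinstance(value, int)
    return True

# Values mirroring the test case in the patch
print(verify_param(u'-1', 'string', 'int'))         # True
print(verify_param(u'false', 'string', 'boolean'))  # True
print(verify_param(-1, 'int'))                      # True
```

This matches the test params above, where prevLver and domVersion are sent as strings that hold integers, while prevID and maxHostID stay plain ints.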
--
To view, visit https://gerrit.ovirt.org/59704
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I49072827b8ac04f720d50aca8e5a24b4be7582b7
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Piotr Kliczewski <piotr.kliczewski@gmail.com>
Change in vdsm[master]: core: Unlink image run directory when deleting a snapshot
by ahino@redhat.com
Ala Hino has uploaded a new change for review.
Change subject: core: Unlink image run directory when deleting a snapshot
......................................................................
core: Unlink image run directory when deleting a snapshot
Unlink image run directory,
/run/vdsm/storage/sdUUID/imgUUID/volUUID, when removing a
snapshot.
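The cleanup path added by this patch can be sketched as below. The constant name mirrors vdsm's P_VDSM_STORAGE; the UUID values are made up for the example.

```python
import os

P_VDSM_STORAGE = '/run/vdsm/storage/'


def image_run_path(sdUUID, imgUUID, volUUID):
    # Build the per-volume run directory path removed by this patch
    return os.path.join(P_VDSM_STORAGE, sdUUID, imgUUID, volUUID)


def unlink_if_exists(path):
    # Best-effort unlink: report failure instead of raising
    try:
        os.unlink(path)
        return True
    except OSError:
        return False


print(image_run_path('sd', 'img', 'vol'))
# /run/vdsm/storage/sd/img/vol
```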
Change-Id: Ib88bf92e702ac6c324b87c9459b01adf165eaca4
Bug-Url: https://bugzilla.redhat.com/1321018
Signed-off-by: Ala Hino <ahino@redhat.com>
---
M vdsm/storage/blockVolume.py
1 file changed, 11 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/25/59725/1
diff --git a/vdsm/storage/blockVolume.py b/vdsm/storage/blockVolume.py
index 4476a9e..080a858 100644
--- a/vdsm/storage/blockVolume.py
+++ b/vdsm/storage/blockVolume.py
@@ -604,12 +604,22 @@
try:
self.log.debug("Unlinking %s", vol_path)
os.unlink(vol_path)
- return True
except Exception as e:
eFound = e
self.log.error("cannot delete volume's %s/%s link path: %s",
self.sdUUID, self.volUUID, vol_path, exc_info=True)
+ try:
+ imgRundir = os.path.join(constants.P_VDSM_STORAGE, self.sdUUID,
+ self.imgUUID, self.volUUID)
+ self.log.debug("Unlinking %s", imgRundir)
+ os.unlink(imgRundir)
+ return True
+ except Exception as e:
+ eFound = e
+ self.log.error("cannot delete volume's %s/%s link path: %s",
+ self.sdUUID, self.volUUID, imgRundir, exc_info=True)
+
raise eFound
def extend(self, newSize):
--
To view, visit https://gerrit.ovirt.org/59725
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib88bf92e702ac6c324b87c9459b01adf165eaca4
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Ala Hino <ahino@redhat.com>
Change in vdsm[ovirt-3.5]: spec: Require sanlock 2.8-3
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: spec: Require sanlock 2.8-3
......................................................................
spec: Require sanlock 2.8-3
Sanlock 2.8-3 added missing dependency on /usr/sbin/useradd and
/usr/sbin/groupadd, used to add the sanlock user and group during
installation. Without these dependencies, sanlock installation fails to
add the user and group, which causes vdsm-tool configure to fail later.
Change-Id: I83ad11eda2695f161ee294571bbacbac11586b83
Bug-Url: https://bugzilla.redhat.com/1349068
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
---
M vdsm.spec.in
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/56/59656/1
diff --git a/vdsm.spec.in b/vdsm.spec.in
index 607532f..7f968bd 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -216,7 +216,7 @@
Requires: iscsi-initiator-utils >= 6.2.0.873-21
%endif
-Requires: sanlock >= 2.8, sanlock-python
+Requires: sanlock >= 2.8-3, sanlock-python
%if 0%{?rhel}
Requires: python-ethtool >= 0.6-3
--
To view, visit https://gerrit.ovirt.org/59656
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I83ad11eda2695f161ee294571bbacbac11586b83
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Nir Soffer <nsoffer@redhat.com>
Change in vdsm[master]: network: Use new concurrent.thread() utility
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: network: Use new concurrent.thread() utility
......................................................................
network: Use new concurrent.thread() utility
This patch updates the networking subsystem to use the new utility.
Behavior changes:
- dhclient.DhcpClient threads are protected from silent failures
- configurators/ifcfg._ifup threads are protected from silent
failures.
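A minimal sketch of what such a concurrent.thread() helper provides over a bare threading.Thread: daemon mode by default, plus a wrapper that logs unhandled exceptions instead of letting the thread die silently. This is a hypothetical reimplementation for illustration; the real helper lives in vdsm.concurrent and may differ.

```python
import logging
import threading


def thread(func, args=(), name=None):
    def run():
        try:
            func(*args)
        except Exception:
            # Log instead of losing the traceback when the thread dies
            logging.exception("Unhandled exception in thread %s", name)

    t = threading.Thread(target=run, name=name)
    t.daemon = True
    return t


results = []
t = thread(results.append, args=(1,), name='vdsm-example')
t.start()
t.join()
print(results)  # [1]
```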
Change-Id: I62e80bbbb9354d3173cce631ed5579532cf7cdcb
Relates-To: https://bugzilla.redhat.com/1141422
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
---
M vdsm/network/configurators/dhclient.py
M vdsm/network/configurators/ifcfg.py
M vdsm/network/sourceroutethread.py
3 files changed, 8 insertions(+), 14 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/53/45553/1
diff --git a/vdsm/network/configurators/dhclient.py b/vdsm/network/configurators/dhclient.py
index 181c302..a25ae8f 100644
--- a/vdsm/network/configurators/dhclient.py
+++ b/vdsm/network/configurators/dhclient.py
@@ -23,8 +23,8 @@
import logging
import os
import signal
-import threading
+from vdsm import concurrent
from vdsm import cmdutils
from vdsm import ipwrapper
from vdsm import netinfo
@@ -76,9 +76,8 @@
rc, _, _ = self._dhclient()
return rc
else:
- t = threading.Thread(target=self._dhclient, name='vdsm-dhclient-%s'
- % self.iface)
- t.daemon = True
+ t = concurrent.thread(self._dhclient,
+ name='vdsm-dhclient-%s' % self.iface)
t.start()
def shutdown(self):
diff --git a/vdsm/network/configurators/ifcfg.py b/vdsm/network/configurators/ifcfg.py
index e1d3e94..f676f83 100644
--- a/vdsm/network/configurators/ifcfg.py
+++ b/vdsm/network/configurators/ifcfg.py
@@ -28,11 +28,11 @@
import re
import selinux
import shutil
-import threading
from libvirt import libvirtError, VIR_ERR_NO_NETWORK
from vdsm.config import config
+from vdsm import concurrent
from vdsm import cmdutils
from vdsm import constants
from vdsm import ipwrapper
@@ -782,9 +782,8 @@
if not iface.blockingdhcp and (iface.ipv4.bootproto == 'dhcp' or
iface.ipv6.dhcpv6):
# wait for dhcp in another thread, so vdsm won't get stuck (BZ#498940)
- t = threading.Thread(target=_exec_ifup, name='ifup-waiting-on-dhcp',
- args=(iface.name, cgroup))
- t.daemon = True
+ t = concurrent.thread(_exec_ifup, name='ifup-waiting-on-dhcp',
+ args=(iface.name, cgroup))
t.start()
else:
_exec_ifup(iface.name, cgroup)
diff --git a/vdsm/network/sourceroutethread.py b/vdsm/network/sourceroutethread.py
index 0a49760..042e5bd 100644
--- a/vdsm/network/sourceroutethread.py
+++ b/vdsm/network/sourceroutethread.py
@@ -19,12 +19,11 @@
from __future__ import absolute_import
import logging
import os
-import threading
import pyinotify
from vdsm.constants import P_VDSM_RUN
-from vdsm import utils
+from vdsm import concurrent
from .configurators.iproute2 import Iproute2
from .sourceroute import DynamicSourceRoute
@@ -68,13 +67,10 @@
def start():
- thread = threading.Thread(target=_subscribeToInotifyLoop,
- name='sourceRoute')
- thread.daemon = True
+ thread = concurrent.thread(_subscribeToInotifyLoop, name='sourceRoute')
thread.start()
-@utils.traceback()
def _subscribeToInotifyLoop():
logging.debug("sourceRouteThread.subscribeToInotifyLoop started")
--
To view, visit https://gerrit.ovirt.org/45553
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I62e80bbbb9354d3173cce631ed5579532cf7cdcb
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer@redhat.com>
Change in vdsm[master]: libvirtconnection: Replace assert with AssertionError
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: libvirtconnection: Replace assert with AssertionError
......................................................................
libvirtconnection: Replace assert with AssertionError
The code wrongly assumed that assert always exists. When running in
optimized mode, the check would be skipped, and instead of getting an
AssertionError, which is the expected error for programmer error
(starting the eventloop twice), we could get a confusing
RuntimeException or RuntimeError from Thread.start (depending on Python
version).
RuntimeError is misused in the standard library for all kinds of errors
that do not have dedicated builtin error types. It is a particularly bad
option when used for a usage error.
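The failure mode the commit message describes can be demonstrated in miniature: a bare assert statement is compiled out entirely under "python -O", so the guard silently disappears, while an explicit raise always runs. A simplified stand-in for the patched check:

```python
def start_checked(running):
    # Explicit raise survives optimized mode, unlike "assert not running"
    if running:
        raise AssertionError("EventLoop is running")
    return "started"


print(start_checked(False))  # started

try:
    start_checked(True)
except AssertionError as e:
    print(e)  # EventLoop is running
```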
Change-Id: Icf1564f81f4c1fbf77ccaff6d93c047a02d946da
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
---
M lib/vdsm/libvirtconnection.py
1 file changed, 2 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/64/34364/1
diff --git a/lib/vdsm/libvirtconnection.py b/lib/vdsm/libvirtconnection.py
index 5430c82..009f8b7 100644
--- a/lib/vdsm/libvirtconnection.py
+++ b/lib/vdsm/libvirtconnection.py
@@ -37,7 +37,8 @@
self.__thread = None
def start(self):
- assert not self.run
+ if self.run:
+ raise AssertionError("EventLoop is running")
self.__thread = threading.Thread(target=self.__run,
name="libvirtEventLoop")
self.__thread.setDaemon(True)
--
To view, visit http://gerrit.ovirt.org/34364
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Icf1564f81f4c1fbf77ccaff6d93c047a02d946da
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer@redhat.com>
Change in vdsm[master]: vm: Remove useless volume size monitoring
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: vm: Remove useless volume size monitoring
......................................................................
vm: Remove useless volume size monitoring
We used to check drive volume size every 60 seconds but we do not use
the result of this check for anything. According to old comments, the
result was used in the past for extending disks. Since 3.6 we have been using
the capacity value from libvirt.
This check may block on storage APIs for minutes when using
unresponsive NFS storage, blocking executor workers and harming
unrelated VMs that use other storage.
Change-Id: Ib1436c2968f3e408ce38a913c6ca3146a25a312d
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
---
M lib/vdsm/config.py.in
M lib/vdsm/virt/periodic.py
M vdsm/virt/vm.py
M vdsm/virt/vmdevices/storage.py
4 files changed, 19 insertions(+), 89 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/01/59801/1
diff --git a/lib/vdsm/config.py.in b/lib/vdsm/config.py.in
index 8bcb09c..ad2f8c7 100644
--- a/lib/vdsm/config.py.in
+++ b/lib/vdsm/config.py.in
@@ -287,9 +287,6 @@
'volume_utilization_percent, set the free space limit. Use higher '
'values to extend in bigger chunks.'),
- ('vol_size_sample_interval', '60',
- 'How often should the volume size be checked (seconds).'),
-
('scsi_rescan_maximal_timeout', '30',
'The maximal number of seconds to wait for scsi scan to return.'),
diff --git a/lib/vdsm/virt/periodic.py b/lib/vdsm/virt/periodic.py
index 8ed5d67..72f33a9 100644
--- a/lib/vdsm/virt/periodic.py
+++ b/lib/vdsm/virt/periodic.py
@@ -71,12 +71,6 @@
return Operation(disp, period, scheduler)
_operations = [
- # needs dispatching becuse updating the volume stats needs the
- # access the storage, thus can block.
- per_vm_operation(
- UpdateVolumes,
- config.getint('irs', 'vol_size_sample_interval')),
-
# needs dispatching becuse access FS and libvirt data
per_vm_operation(
NumaInfoMonitor,
@@ -316,22 +310,6 @@
return '<%s vm=%s at 0x%x>' % (
self.__class__.__name__, self._vm.id, id(self)
)
-
-
-class UpdateVolumes(_RunnableOnVm):
-
- @property
- def required(self):
- return (super(UpdateVolumes, self).required and
- # Avoid queries from storage during recovery process
- self._vm.isDisksStatsCollectionEnabled())
-
- def _execute(self):
- for drive in self._vm.getDiskDevices():
- # TODO: If this block (it is actually possible?)
- # we must make sure we don't overwrite good data
- # with stale old data.
- self._vm.updateDriveVolume(drive)
class NumaInfoMonitor(_RunnableOnVm):
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index b5eec87..db2bec0 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -378,15 +378,6 @@
if 'device' not in drv:
drv['device'] = 'disk'
- if drv['device'] == 'disk':
- volsize = self._getVolumeSize(drv['domainID'], drv['poolID'],
- drv['imageID'], drv['volumeID'])
- drv['truesize'] = str(volsize.truesize)
- drv['apparentsize'] = str(volsize.apparentsize)
- else:
- drv['truesize'] = 0
- drv['apparentsize'] = 0
-
def __legacyDrives(self):
"""
Backward compatibility for qa scripts that specify direct paths.
@@ -398,8 +389,7 @@
if path:
legacies.append({'type': hwclass.DISK,
'device': 'disk', 'path': path,
- 'iface': 'ide', 'index': index,
- 'truesize': 0})
+ 'iface': 'ide', 'index': index})
return legacies
def __removableDrives(self):
@@ -408,8 +398,7 @@
'device': 'cdrom',
'iface': vmdevices.storage.DEFAULT_INTERFACE_FOR_ARCH[self.arch],
'path': self.conf.get('cdrom', ''),
- 'index': 2,
- 'truesize': 0}]
+ 'index': 2}]
floppyPath = self.conf.get('floppy')
if floppyPath:
removables.append({
@@ -417,8 +406,7 @@
'device': 'floppy',
'path': floppyPath,
'iface': 'fdc',
- 'index': 0,
- 'truesize': 0})
+ 'index': 0})
return removables
def _devMapFromDevSpecMap(self, dev_spec_map):
@@ -999,10 +987,9 @@
for drive, volumeID, capacity, alloc, physical in extend:
self.log.info(
- "Requesting extension for volume %s on domain %s (apparent: "
- "%s, capacity: %s, allocated: %s, physical: %s)",
- volumeID, drive.domainID, drive.apparentsize, capacity,
- alloc, physical)
+ "Requesting extension for volume %s on domain %s (capacity: "
+ "%s, allocated: %s, physical: %s)",
+ volumeID, drive.domainID, capacity, alloc, physical)
self.extendDriveVolume(drive, volumeID, physical, capacity)
return len(extend) > 0
@@ -1044,8 +1031,6 @@
raise RuntimeError(
"Volume extension failed for %s (domainID: %s, volumeID: %s)" %
(volInfo['name'], volInfo['domainID'], volInfo['volumeID']))
-
- return volSize
def __afterReplicaExtension(self, volInfo):
self.__verifyVolumeExtension(volInfo)
@@ -1093,14 +1078,7 @@
def __afterVolumeExtension(self, volInfo):
# Check if the extension succeeded. On failure an exception is raised
# TODO: Report failure to the engine.
- volSize = self.__verifyVolumeExtension(volInfo)
-
- # Only update apparentsize and truesize if we've resized the leaf
- if not volInfo['internal']:
- vmDrive = self._findDriveByName(volInfo['name'])
- vmDrive.apparentsize = volSize.apparentsize
- vmDrive.truesize = volSize.truesize
-
+ self.__verifyVolumeExtension(volInfo)
try:
self.cont()
except libvirt.libvirtError:
@@ -3235,22 +3213,6 @@
return device
raise LookupError("No such disk %r" % name)
- def updateDriveVolume(self, vmDrive):
- if not vmDrive.device == 'disk' or not isVdsmImage(vmDrive):
- return
-
- try:
- volSize = self._getVolumeSize(
- vmDrive.domainID, vmDrive.poolID, vmDrive.imageID,
- vmDrive.volumeID)
- except StorageUnavailableError as e:
- self.log.error("Unable to update drive %s volume size: %s",
- vmDrive.name, e)
- return
-
- vmDrive.truesize = volSize.truesize
- vmDrive.apparentsize = volSize.apparentsize
-
def updateDriveParameters(self, driveParams):
"""Update the drive with the new volume information"""
@@ -3259,7 +3221,6 @@
if vmDrive.name == driveParams["name"]:
for k, v in driveParams.iteritems():
setattr(vmDrive, k, v)
- self.updateDriveVolume(vmDrive)
break
else:
self.log.error("Unable to update the drive object for: %s",
@@ -3499,11 +3460,8 @@
# see the XML even with 'info' as default level.
self.log.info(snapxml)
- # We need to stop the collection of the stats for two reasons, one
- # is to prevent spurious libvirt errors about missing drive paths
- # (since we're changing them), and also to prevent to trigger a drive
- # extension for the new volume with the apparent size of the old one
- # (the apparentsize is updated as last step in updateDriveParameters)
+ # Prevent spurious libvirt errors about missing drive paths (since
+ # we're changing them).
self.stopDisksStatsCollection()
try:
diff --git a/vdsm/virt/vmdevices/storage.py b/vdsm/virt/vmdevices/storage.py
index 78bf00c..eb8b2ae 100644
--- a/vdsm/virt/vmdevices/storage.py
+++ b/vdsm/virt/vmdevices/storage.py
@@ -56,13 +56,14 @@
class Drive(Base):
- __slots__ = ('iface', '_path', 'readonly', 'bootOrder', 'domainID',
- 'poolID', 'imageID', 'UUID', 'volumeID', 'format',
- 'propagateErrors', 'address', 'apparentsize', 'volumeInfo',
- 'index', 'name', 'optional', 'shared', 'truesize',
- 'volumeChain', 'baseVolumeID', 'serial', 'reqsize', 'cache',
- '_blockDev', 'extSharedState', 'drv', 'sgio', 'GUID',
- 'diskReplicate', '_diskType', 'hosts', 'protocol', 'auth')
+ __slots__ = (
+ 'iface', '_path', 'readonly', 'bootOrder', 'domainID', 'poolID',
+ 'imageID', 'UUID', 'volumeID', 'format', 'propagateErrors', 'address',
+ 'volumeInfo', 'index', 'name', 'optional', 'shared', 'volumeChain',
+ 'baseVolumeID', 'serial', 'reqsize', 'cache', '_blockDev',
+ 'extSharedState', 'drv', 'sgio', 'GUID', 'diskReplicate', '_diskType',
+ 'hosts', 'protocol', 'auth'
+ )
VOLWM_CHUNK_SIZE = (config.getint('irs', 'volume_utilization_chunk_mb') *
constants.MEGAB)
VOLWM_FREE_PCT = 100 - config.getint('irs', 'volume_utilization_percent')
@@ -165,8 +166,6 @@
self.device = getattr(self, 'device', 'disk')
# Keep sizes as int
self.reqsize = int(kwargs.get('reqsize', '0')) # Backward compatible
- self.truesize = int(kwargs.get('truesize', '0'))
- self.apparentsize = int(kwargs.get('apparentsize', '0'))
self.name = makeName(self.iface, self.index)
self.cache = config.get('vars', 'qemu_drive_cache')
@@ -247,10 +246,8 @@
Returns the next volume size in bytes. This value is based on the
volExtensionChunk property and it's the size that should be requested
for the next LV extension. curSize is the current size of the volume
- to be extended. For the leaf volume curSize == self.apparentsize.
- For internal volumes it is discovered by calling irs.getVolumeSize().
- capacity is the maximum size of the volume. It can be discovered using
- libvirt.virDomain.blockInfo() or qemuimg.info().
+ to be extended. capacity is the maximum size of the volume. It can be
+ discovered using libvirt.virDomain.blockInfo() or qemuimg.info().
"""
nextSize = utils.round(curSize + self.volExtensionChunk,
constants.MEGAB)
--
To view, visit https://gerrit.ovirt.org/59801
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib1436c2968f3e408ce38a913c6ca3146a25a312d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer@redhat.com>
Change in vdsm[master]: cache: Add caching decorator with invalidation
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: cache: Add caching decorator with invalidation
......................................................................
cache: Add caching decorator with invalidation
The new cache.memoized extends utils.memoized, adding invalidation
support.
Features added:
- An optional "validate" argument. This is a callable invoked each time
the memoized function is called. When the callable returns False, the
cache is invalidated.
- Memoized functions have an "invalidate" method, used to invalidate the
cache during testing.
- file_validator - invalidates the cache when a file changes.
Example usage:
from vdsm.cache import memoized, file_validator
@memoized(file_validator('/bigfile'))
def parse_bigfile():
# Expensive code processing '/bigfile' contents
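A standalone sketch of the decorator described above, simplified from the patch so it runs on its own (the real module is vdsm.cache):

```python
import functools


def memoized(validate=None):
    def decorator(f):
        cache = {}

        @functools.wraps(f)
        def wrapper(*args):
            # Invalidate the whole cache when the validator says so
            if validate is not None and not validate():
                cache.clear()
            try:
                return cache[args]
            except KeyError:
                value = cache[args] = f(*args)
                return value

        wrapper.invalidate = cache.clear
        return wrapper

    return decorator


calls = []


@memoized()
def square(x):
    calls.append(x)
    return x * x


square(3)
square(3)
print(len(calls))  # 1 - second call was served from the cache
square.invalidate()
square(3)
print(len(calls))  # 2 - cache was cleared, so the function ran again
```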
Change-Id: I6dd8fb29d94286e3e3a3e29b8218501cbdc5c018
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
---
M lib/vdsm/Makefile.am
A lib/vdsm/cache.py
M tests/Makefile.am
A tests/cacheTests.py
4 files changed, 366 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/09/34709/1
diff --git a/lib/vdsm/Makefile.am b/lib/vdsm/Makefile.am
index b862e71..6f0040d 100644
--- a/lib/vdsm/Makefile.am
+++ b/lib/vdsm/Makefile.am
@@ -23,6 +23,7 @@
dist_vdsmpylib_PYTHON = \
__init__.py \
+ cache.py \
compat.py \
define.py \
exception.py \
diff --git a/lib/vdsm/cache.py b/lib/vdsm/cache.py
new file mode 100644
index 0000000..9806e40
--- /dev/null
+++ b/lib/vdsm/cache.py
@@ -0,0 +1,98 @@
+#
+# Copyright 2014 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+import errno
+import os
+import functools
+
+
+def memoized(validate=None):
+ """
+ Return a caching decorator supporting invalidation.
+
+ The decorator accepts an optional validate callable, called each time the
+ memoized function is called. If the validate callable returns True, the
+ memoized function will use the cache. If the validate callable returns
+ False, the memoized cache is cleared.
+
+ The memoized function may accept multiple positional arguments. The
+ cache stores the result for each combination of arguments. Functions with
+ kwargs are not supported.
+
+ Memoized functions have an "invalidate" method, used to invalidate the
+ memoized cache during testing.
+
+ To invalidate the cache when a file changes, use the file_validator from
+ this module.
+
+ Example usage:
+
+ from vdsm.cache import memoized, file_validator
+
+ @memoized(file_validator('/bigfile'))
+ def parse_bigfile():
+ # Expensive code processing '/bigfile' contents
+
+ """
+ def decorator(f):
+ cache = {}
+
+ @functools.wraps(f)
+ def wrapper(*args):
+ if validate is not None and not validate():
+ cache.clear()
+ try:
+ value = cache[args]
+ except KeyError:
+ value = cache[args] = f(*args)
+ return value
+
+ wrapper.invalidate = cache.clear
+ return wrapper
+
+ return decorator
+
+
+class file_validator(object):
+ """
+ I'm a validator returning False when a file has changed since the last
+ validation.
+ """
+
+ UNKNOWN = 0
+ MISSING = 1
+
+ def __init__(self, path):
+ self.path = path
+ self.stats = self.UNKNOWN
+
+ def __call__(self):
+ try:
+ stats = os.stat(self.path)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ stats = self.MISSING
+ else:
+ stats = stats.st_ino, stats.st_size, stats.st_mtime
+ if stats != self.stats:
+ self.stats = stats
+ return False
+ return True
diff --git a/tests/Makefile.am b/tests/Makefile.am
index 36a1cdd..6fa7e64 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -26,6 +26,7 @@
alignmentScanTests.py \
blocksdTests.py \
bridgeTests.py \
+ cacheTests.py \
cPopenTests.py \
capsTests.py \
clientifTests.py \
diff --git a/tests/cacheTests.py b/tests/cacheTests.py
new file mode 100644
index 0000000..8927b39
--- /dev/null
+++ b/tests/cacheTests.py
@@ -0,0 +1,266 @@
+#
+# Copyright 2014 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+import os
+from vdsm.cache import memoized
+from vdsm.cache import file_validator
+from testlib import VdsmTestCase
+from testlib import namedTemporaryDir
+
+
+class Validator(object):
+ """ I'm a callable returning a boolean value (self.valid) """
+
+ def __init__(self):
+ self.valid = True
+ self.count = 0
+
+ def __call__(self):
+ self.count += 1
+ return self.valid
+
+
+class Accessor(object):
+ """ I'm recording how many times a dict was accessed. """
+
+ def __init__(self, d):
+ self.d = d
+ self.count = 0
+
+ def get(self, key):
+ self.count += 1
+ return self.d[key]
+
+
+class MemoizedTests(VdsmTestCase):
+
+ def setUp(self):
+ self.values = {'a': 0, 'b': 10, ('a',): 20, ('a', 'b'): 30}
+
+ def test_no_args(self):
+ accessor = Accessor(self.values)
+
+ @memoized()
+ def func(key):
+ return accessor.get(key)
+
+ # Fill the cache
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 2)
+
+ # Values served now from the cache
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 2)
+
+ def test_validation(self):
+ accessor = Accessor(self.values)
+ validator = Validator()
+
+ @memoized(validator)
+ def func(key):
+ return accessor.get(key)
+
+ # Fill the cache
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 2)
+ self.assertEqual(validator.count, 2)
+
+ # Values served now from the cache
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 2)
+ self.assertEqual(validator.count, 4)
+
+ # Values has changed
+ self.values['a'] += 1
+ self.values['b'] += 1
+
+ # Next call should clear the cache
+ validator.valid = False
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(accessor.count, 3)
+ self.assertEqual(validator.count, 5)
+
+ # Next call should add next value to cache
+ validator.valid = True
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 4)
+ self.assertEqual(validator.count, 6)
+
+ # Values served now from the cache
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(func('b'), self.values['b'])
+ self.assertEqual(accessor.count, 4)
+ self.assertEqual(validator.count, 8)
+
+ def test_raise_errors_in_memoized_func(self):
+ accessor = Accessor(self.values)
+ validator = Validator()
+
+ @memoized(validator)
+ def func(key):
+ return accessor.get(key)
+
+ # First run should fail, second should fill the cache
+ self.assertRaises(KeyError, func, 'no such key')
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(accessor.count, 2)
+ self.assertEqual(validator.count, 2)
+
+ def test_multiple_args(self):
+ accessor = Accessor(self.values)
+
+ @memoized()
+ def func(*args):
+ return accessor.get(args)
+
+ # Fill the cache
+ self.assertEqual(func('a'), self.values[('a',)])
+ self.assertEqual(func('a', 'b'), self.values[('a', 'b')])
+ self.assertEqual(accessor.count, 2)
+
+ # Values served now from the cache
+ self.assertEqual(func('a'), self.values[('a',)])
+ self.assertEqual(func('a', 'b'), self.values[('a', 'b')])
+ self.assertEqual(accessor.count, 2)
+
+ def test_kwargs_not_supported(self):
+ @memoized()
+ def func(a=None, b=None):
+ pass
+ self.assertRaises(TypeError, func, a=1, b=2)
+
+ def test_invalidate(self):
+ accessor = Accessor(self.values)
+
+ @memoized()
+ def func(key):
+ return accessor.get(key)
+
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(accessor.count, 1)
+
+ func.invalidate()
+
+ self.assertEqual(func('a'), self.values['a'])
+ self.assertEqual(accessor.count, 2)
+
+
+class FileValidatorTests(VdsmTestCase):
+
+ def test_no_file(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+
+ # Must be False so memoized calls the decorated function
+ self.assertEqual(validator(), False)
+
+ # Since file state did not change, must remain True
+ self.assertEqual(validator(), True)
+
+ def test_file_created(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ with open(path, 'w') as f:
+ f.write('data')
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ def test_file_removed(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+
+ with open(path, 'w') as f:
+ f.write('data')
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ os.unlink(path)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ def test_size_changed(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+ data = 'old data'
+ with open(path, 'w') as f:
+ f.write(data)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ with open(path, 'w') as f:
+ f.write(data + ' new data')
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ def test_mtime_changed(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+ data = 'old data'
+ with open(path, 'w') as f:
+ f.write(data)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ # Fake timestamp change, as timestamp resolution may not be good
+ # enough when comparing changes during the test.
+ atime = mtime = os.path.getmtime(path) + 1
+ os.utime(path, (atime, mtime))
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ def test_ino_changed(self):
+ with namedTemporaryDir() as tempdir:
+ path = os.path.join(tempdir, 'data')
+ validator = file_validator(path)
+ data = 'old data'
+ with open(path, 'w') as f:
+ f.write(data)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
+
+ tmp = path + '.tmp'
+ with open(tmp, 'w') as f:
+ f.write(data)
+ os.rename(tmp, path)
+
+ self.assertEqual(validator(), False)
+ self.assertEqual(validator(), True)
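The tests above exercise a `file_validator` whose callable returns `False` on the first call after the file's state changes (including creation and removal) and `True` once the new state has been observed. A minimal sketch reconstructed from the tests — not necessarily vdsm's actual implementation — could track the file's (size, mtime, inode) triple:

```python
import os


def file_validator(path):
    # Hypothetical reconstruction based on the tests above, not
    # vdsm's actual implementation.
    last = [None]  # last observed state: (size, mtime, ino) or 'missing'

    def validator():
        try:
            st = os.stat(path)
            current = (st.st_size, st.st_mtime, st.st_ino)
        except OSError:
            current = 'missing'
        unchanged = (current == last[0])
        last[0] = current  # remember what we saw for the next call
        return unchanged

    return validator
```

This matches the test pattern: every state change yields exactly one `False`, and a repeated call with no further change yields `True`.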
--
To view, visit http://gerrit.ovirt.org/34709
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I6dd8fb29d94286e3e3a3e29b8218501cbdc5c018
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: fencing: Make getHostLeaseStatus API public
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: fencing: Make getHostLeaseStatus API public
......................................................................
fencing: Make getHostLeaseStatus API public
Getting the host lease status will allow the engine to make smarter
decisions when a host is non-responsive, by using a proxy host to query
the non-responsive host's status.
See http://pastebin.com/KqqeAdSu for example output from this API.
Change-Id: I415c1fee6256bf8d4e03ee542cc58e193162e9b8
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M client/vdsClient.py
M vdsm/API.py
M vdsm/rpc/BindingXMLRPC.py
M vdsm/rpc/Bridge.py
M vdsm/rpc/vdsmapi-schema.json
5 files changed, 62 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/57/29157/1
diff --git a/client/vdsClient.py b/client/vdsClient.py
index 2c09b28..aea1503 100644
--- a/client/vdsClient.py
+++ b/client/vdsClient.py
@@ -1760,6 +1760,18 @@
status = self.s.stopMonitoringDomain(sdUUID)
return status['status']['code'], status['status']['message']
+ def getHostLeaseStatus(self, args):
+ domains = {}
+ for pair in args:
+ sdUUID, hostId = pair.split('=', 1)
+ domains[sdUUID] = int(hostId)
+ response = self.s.getHostLeaseStatus(domains)
+ if response['status']['code']:
+ print "Cannot get host storage liveliness"
+ return response['status']['code'], response['status']['message']
+ pp.pprint(response['domains'])
+ return 0, ''
+
def snapshot(self, args):
vmUUID, sdUUID, imgUUID, baseVolUUID, volUUID = args
@@ -2579,6 +2591,11 @@
('<sdUUID>',
'Stop monitoring SD: sdUUID'
)),
+ 'getHostLeaseStatus': (serv.getHostLeaseStatus,
+ ('<sdUUID>=<hostId> [<sdUUID>=<hostId>] ...',
+ 'Returns host lease status for hostId on '
+ 'each domain.'
+ )),
'snapshot': (serv.snapshot,
('<vmId> <sdUUID> <imgUUID> <baseVolUUID> <volUUID>',
'Take a live snapshot'
diff --git a/vdsm/API.py b/vdsm/API.py
index e739294..0b44459 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -1497,6 +1497,9 @@
def stopMonitoringDomain(self, sdUUID):
return self._irs.stopMonitoringDomain(sdUUID)
+ def getHostLeaseStatus(self, domains):
+ return self._irs.getHostLeaseStatus(domains)
+
def getLVMVolumeGroups(self, storageType=None):
return self._irs.getVGList(storageType)
diff --git a/vdsm/rpc/BindingXMLRPC.py b/vdsm/rpc/BindingXMLRPC.py
index c1c7490..a06a3b4 100644
--- a/vdsm/rpc/BindingXMLRPC.py
+++ b/vdsm/rpc/BindingXMLRPC.py
@@ -917,6 +917,10 @@
api = API.Global()
return api.stopMonitoringDomain(sdUUID)
+ def getHostLeaseStatus(self, domains, options=None):
+ api = API.Global()
+ return api.getHostLeaseStatus(domains)
+
def vgsGetList(self, storageType=None, options=None):
api = API.Global()
return api.getLVMVolumeGroups(storageType)
@@ -1070,6 +1074,7 @@
(self.storageRepoGetStats, 'repoStats'),
(self.startMonitoringDomain, 'startMonitoringDomain'),
(self.stopMonitoringDomain, 'stopMonitoringDomain'),
+ (self.getHostLeaseStatus, 'getHostLeaseStatus'),
(self.vgsGetList, 'getVGList'),
(self.devicesGetList, 'getDeviceList'),
(self.devicesGetVisibility, 'getDevicesVisibility'),
diff --git a/vdsm/rpc/Bridge.py b/vdsm/rpc/Bridge.py
index 7e898de..ba700d1 100644
--- a/vdsm/rpc/Bridge.py
+++ b/vdsm/rpc/Bridge.py
@@ -349,6 +349,7 @@
'Host_getStorageRepoStats': {'ret': Host_getStorageRepoStats_Ret},
'Host_startMonitoringDomain': {},
'Host_stopMonitoringDomain': {},
+ 'Host_getHostLeaseStatus': {'ret': 'domains'},
'Host_getVMList': {'call': Host_getVMList_Call, 'ret': Host_getVMList_Ret},
'Host_getVMFullList': {'call': Host_getVMFullList_Call, 'ret': 'vmList'},
'Host_getAllVmStats': {'ret': 'statsList'},
diff --git a/vdsm/rpc/vdsmapi-schema.json b/vdsm/rpc/vdsmapi-schema.json
index 0c8a6f6..7617185 100644
--- a/vdsm/rpc/vdsmapi-schema.json
+++ b/vdsm/rpc/vdsmapi-schema.json
@@ -2052,6 +2052,42 @@
'returns': ''}
##
+# @HostIdMap:
+#
+# A mapping of hostId indexed by domain UUID.
+#
+# Since: 4.15.0
+##
+{'map': 'HostIdMap',
+ 'key': 'UUID', 'value': 'int'}
+
+##
+# @HostLeaseStatusMap:
+#
+# A mapping of status codes indexed by domain UUID.
+#
+# Since: 4.15.0
+##
+{'map': 'HostLeaseStatusMap',
+ 'key': 'UUID', 'value': 'str'}
+
+##
+# @Host.getHostLeaseStatus:
+#
+# Returns host lease status for the specified domains
+#
+# @domains: A mapping of hostId indexed by domain UUID
+#
+# Returns:
+# Host status code for each domain
+#
+# Since: 4.15.0
+##
+{'command': {'class': 'Host', 'name': 'getHostLeaseStatus'},
+ 'data': {'domains': 'HostIdMap'},
+ 'returns': {'domains': 'HostLeaseStatusMap'}}
+
+##
# @VmStatus:
#
# An enumeration of possible virtual machine statuses.
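The CLI verb packs its `<sdUUID>=<hostId>` arguments into the HostIdMap the schema describes. Extracted as a standalone helper (the helper name is made up for illustration), the parsing mirrors vdsClient.getHostLeaseStatus above:

```python
def parse_host_id_pairs(args):
    # Illustrative standalone version of the parsing done in
    # vdsClient.getHostLeaseStatus; the function name is hypothetical.
    domains = {}
    for pair in args:
        sdUUID, hostId = pair.split('=', 1)  # split on the first '=' only
        domains[sdUUID] = int(hostId)
    return domains
```

Splitting on the first `=` only keeps the parsing robust even though a hostId is numeric and a sdUUID never contains `=`.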
--
To view, visit http://gerrit.ovirt.org/29157
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I415c1fee6256bf8d4e03ee542cc58e193162e9b8
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: sdc: Rename method to make it less confusing
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: sdc: Rename method to make it less confusing
......................................................................
sdc: Rename method to make it less confusing
StorageCache.refresh() does not do any refresh; it clears the domain
cache and invalidates the lvm cache (actually clearing it). Rename it to
clear() to reflect what it does.
Change-Id: I2c67ae0ddc98857e406fec62be0cbcf817213236
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/sdc.py
M vdsm/storage/sp.py
2 files changed, 3 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/16/47916/1
diff --git a/vdsm/storage/sdc.py b/vdsm/storage/sdc.py
index ecb9708..a26c05a 100644
--- a/vdsm/storage/sdc.py
+++ b/vdsm/storage/sdc.py
@@ -181,7 +181,7 @@
return uuids
- def refresh(self):
+ def clear(self):
with self._syncroot:
lvm.invalidateCache()
self.__domainCache.clear()
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index b8fd8f3..6ecaf8d 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -627,7 +627,7 @@
self.id = hostID
# Make sure SDCache doesn't have stale data (it can be in case of FC)
sdCache.invalidateStorage()
- sdCache.refresh()
+ sdCache.clear()
# Rebuild whole Pool
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
self.__createMailboxMonitor()
@@ -1245,7 +1245,7 @@
'msdUUID' - master storage domain UUID
"""
sdCache.invalidateStorage()
- sdCache.refresh()
+ sdCache.clear()
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
def updateVM(self, vmList, sdUUID):
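The rename is purely about honest naming; a toy model (not vdsm's actual StorageDomainCache) shows why clear() fits — nothing is refetched, cached state is only dropped so it is repopulated lazily later:

```python
class DomainCache(object):
    # Toy model, not vdsm's StorageDomainCache: illustrates that the
    # renamed method only drops cached state, it does not refetch.
    def __init__(self):
        self._domains = {'sd-uuid': 'cached-domain-object'}

    def clear(self):
        # Formerly named refresh(): nothing is reloaded here; entries
        # are repopulated lazily on the next lookup.
        self._domains.clear()
```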
--
To view, visit https://gerrit.ovirt.org/47916
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I2c67ae0ddc98857e406fec62be0cbcf817213236
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: scsi: Scan only the required domain type
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: scsi: Scan only the required domain type
......................................................................
scsi: Scan only the required domain type
We used to perform both an iSCSI and an FCP rescan when creating or
editing a storage domain, connecting to a storage server, getting the vg
and storage domain lists, and more.
The unneeded rescan is typically fast, but if a storage server or device
is not accessible, a SCSI rescan may block for a couple of minutes,
leading to unwanted blocking of unrelated storage threads. This is
particularly bad when you are interested only in one domain type, but
the host gets stuck scanning the other type.
To improve storage domain isolation, we use the specified storage type
to perform a rescan only of the relevant type. If the storage type is
not specified, we scan both iSCSI and FCP, keeping the old behavior.
Change-Id: Ic32cd683020e94df016dd77b19ae3eb7317c5554
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/hsm.py
M vdsm/storage/multipath.py
M vdsm/storage/sdc.py
3 files changed, 25 insertions(+), 17 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/24/45824/1
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 1b8c064..541b699 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -1984,15 +1984,16 @@
return dict(devList=devices)
def _getDeviceList(self, storageType=None, guids=(), checkStatus=True):
- sdCache.refreshStorage()
- typeFilter = lambda dev: True
- if storageType:
- if sd.storageType(storageType) == sd.type2name(sd.ISCSI_DOMAIN):
- typeFilter = \
- lambda dev: multipath.devIsiSCSI(dev.get("devtype"))
- elif sd.storageType(storageType) == sd.type2name(sd.FCP_DOMAIN):
- typeFilter = \
- lambda dev: multipath.devIsFCP(dev.get("devtype"))
+ domType = sd.storageType(storageType) if storageType else None
+
+ sdCache.refreshStorage(domType)
+
+ if domType == sd.ISCSI_DOMAIN:
+ typeFilter = lambda dev: multipath.devIsiSCSI(dev.get("devtype"))
+ elif domType == sd.FCP_DOMAIN:
+ typeFilter = lambda dev: multipath.devIsFCP(dev.get("devtype"))
+ else:
+ typeFilter = lambda dev: True
devices = []
pvs = {}
@@ -2470,7 +2471,7 @@
# while the VDSM was not connected, we need to
# call refreshStorage.
if domType in (sd.FCP_DOMAIN, sd.ISCSI_DOMAIN):
- sdCache.refreshStorage()
+ sdCache.refreshStorage(domType)
try:
doms = self.__prefetchDomains(domType, conObj)
except:
@@ -2864,7 +2865,8 @@
"""
vars.task.setDefaultException(
se.StorageDomainActionError("spUUID: %s" % spUUID))
- sdCache.refreshStorage()
+ domType = sd.storageType(storageType) if storageType else None
+ sdCache.refreshStorage(domType)
if spUUID and spUUID != volume.BLANK_UUID:
domList = self.getPool(spUUID).getDomains()
domains = domList.keys()
@@ -2925,7 +2927,8 @@
:rtype: dict
"""
vars.task.setDefaultException(se.VolumeGroupActionError())
- sdCache.refreshStorage()
+ domType = sd.storageType(storageType) if storageType else None
+ sdCache.refreshStorage(domType)
# getSharedLock(connectionsResource...)
vglist = []
vgs = self.__getVGsInfo()
diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
index ad81d2d..32deb98 100644
--- a/vdsm/storage/multipath.py
+++ b/vdsm/storage/multipath.py
@@ -39,6 +39,7 @@
import misc
import iscsi
import devicemapper
+import sd
DEV_ISCSI = "iSCSI"
DEV_FCP = "FCP"
@@ -61,7 +62,7 @@
""" multipath operation failed """
-def rescan():
+def rescan(domType=None):
"""
Forces multipath daemon to rescan the list of available devices and
refresh the mapping table. New devices can be found under /dev/mapper
@@ -70,8 +71,12 @@
"""
# First rescan iSCSI and FCP connections
- iscsi.rescan()
- hba.rescan()
+
+ if domType in (None, sd.ISCSI_DOMAIN):
+ iscsi.rescan()
+
+ if domType in (None, sd.FCP_DOMAIN):
+ hba.rescan()
# Now let multipath daemon pick up new devices
misc.execCmd([constants.EXT_MULTIPATH], sudo=True)
diff --git a/vdsm/storage/sdc.py b/vdsm/storage/sdc.py
index ecb9708..273c5c0 100644
--- a/vdsm/storage/sdc.py
+++ b/vdsm/storage/sdc.py
@@ -77,10 +77,10 @@
self.__staleStatus = self.STORAGE_STALE
@misc.samplingmethod
- def refreshStorage(self):
+ def refreshStorage(self, domType=None):
self.__staleStatus = self.STORAGE_REFRESHING
- multipath.rescan()
+ multipath.rescan(domType)
multipath.resize_devices()
lvm.invalidateCache()
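The dispatch added to multipath.rescan() can be sketched in isolation. The domain-type constants below are illustrative stand-ins for sd.ISCSI_DOMAIN and sd.FCP_DOMAIN, and the real function invokes the iscsi/hba rescans rather than returning a list:

```python
ISCSI_DOMAIN = 'iscsi'  # stand-in for sd.ISCSI_DOMAIN
FCP_DOMAIN = 'fcp'      # stand-in for sd.FCP_DOMAIN


def rescan(dom_type=None):
    # Scan only the transport matching dom_type; None keeps the old
    # scan-everything behavior described in the commit message.
    scanned = []
    if dom_type in (None, ISCSI_DOMAIN):
        scanned.append('iscsi')
    if dom_type in (None, FCP_DOMAIN):
        scanned.append('fcp')
    return scanned
```

Passing `None` (the default, and what existing callers get) preserves backward compatibility, while a known type skips the irrelevant, potentially blocking scan.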
--
To view, visit https://gerrit.ovirt.org/45824
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic32cd683020e94df016dd77b19ae3eb7317c5554
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>