Change in vdsm[master]: vm: sync device update during creation of a vm
by fromani@redhat.com
Francesco Romani has posted comments on this change.
Change subject: vm: sync device update during creation of a vm
......................................................................
Patch Set 2:
We have quite a few things mixed together here.
Most likely (but not 100% certain) this hasn't happened before because
the XML marshaller (as Piotr found) used a different approach when iterating over Python dictionaries, which kept things reasonably safe.
This changed in the 3.5 timeframe, mostly due to json-rpc and live-merge. Neither change is to blame on its own; their interaction with the existing codebase, however, broke things.
Simply reverting to the old method is feasible as a short-term solution, but not as a long-term one.
The deeper problem is how badly the Vm class is intertwined. Fixing this will take time - a first approach is here: http://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:...
As a first-aid patch, I'll see whether it is possible to revert to the old path (items() vs. iteritems()), but this solution is not very good either.
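For illustration only (this is my own sketch, not code from the patch, and it assumes the failure mode is the usual "dictionary changed size during iteration" RuntimeError), the difference between the two iteration styles in Python 2 looks roughly like this:

    # items() copies the (key, value) pairs into a list first, so mutating
    # the dict while looping over the copy is harmless:
    devices = {'vda': {}, 'net0': {}}
    for name, conf in devices.items():
        devices.setdefault('balloon0', {})   # no error

    # iteritems() is a lazy iterator over the live dict, so the same kind
    # of mutation (e.g. a device update from another thread) blows up:
    try:
        for name, conf in devices.iteritems():
            devices['watchdog0'] = {}        # grows the dict mid-iteration
    except RuntimeError as e:
        print(e)   # dictionary changed size during iteration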
--
To view, visit http://gerrit.ovirt.org/34599
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I477551500ccc2297eb0c05d6562710bc420363a5
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Piotr Kliczewski <piotr.kliczewski(a)gmail.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <michal.skrivanek(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Oved Ourfali <oourfali(a)redhat.com>
Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczewski(a)gmail.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: No
Change in vdsm[master]: vm: sync device update during creation of a vm
by smizrahi@redhat.com
Saggi Mizrahi has posted comments on this change.
Change subject: vm: sync device update during creation of a vm
......................................................................
Patch Set 2: Code-Review+1
--
To view, visit http://gerrit.ovirt.org/34599
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I477551500ccc2297eb0c05d6562710bc420363a5
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Piotr Kliczewski <piotr.kliczewski(a)gmail.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <michal.skrivanek(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Oved Ourfali <oourfali(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: No
Change in vdsm[master]: vdsm: refactored new functional tests
by ykleinbe@redhat.com
Yoav Kleinberger has uploaded a new change for review.
Change subject: vdsm: refactored new functional tests
......................................................................
vdsm: refactored new functional tests
Separated the storage backend and verifier base classes into two
separate modules.
Change-Id: Ie9dbe640943f86fcb2415c25498f663304653b3a
Signed-off-by: Yoav Kleinberger <ykleinbe(a)redhat.com>
---
D tests/functional/testlib/storagecontexts/base.py
A tests/functional/testlib/storagecontexts/base/__init__.py
A tests/functional/testlib/storagecontexts/base/storagebackend.py
A tests/functional/testlib/storagecontexts/base/verify.py
M tests/functional/testlib/storagecontexts/filebased.py
M tests/functional/testlib/storagecontexts/iscsi.py
6 files changed, 150 insertions(+), 147 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/42/34242/1
diff --git a/tests/functional/testlib/storagecontexts/base.py b/tests/functional/testlib/storagecontexts/base.py
deleted file mode 100644
index 2e308ad..0000000
--- a/tests/functional/testlib/storagecontexts/base.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import logging
-import os
-import random
-import time
-import uuid
-from functional.testlib import vdsmcaller
-
-
-class Verify(object):
- def __init__(self, vdsm):
- self.vdsm = vdsm
-
- def sleepWhileVDSMCompletesTask(self, duration):
- time.sleep(duration)
-
- def storagePoolCreated(self, poolID, masterDomainID):
- self.sleepWhileVDSMCompletesTask(duration=1)
- linkToDomain =\
- os.path.join('/rhev/data-center', poolID, masterDomainID)
- linkToMasterDomain =\
- os.path.join('/rhev/data-center', poolID, 'mastersd')
- assert os.path.lexists(linkToDomain)
- assert os.path.lexists(linkToMasterDomain)
-
- def waitFor(self, seconds, description, predicate, *args, **kwargs):
- logging.info('waiting for "%s"' % description)
- start = time.time()
- for _ in xrange(seconds):
- if predicate(*args, **kwargs):
- logging.info('it took %0.3f seconds' % (time.time() - start))
- return
- time.sleep(1)
-
- MESSAGE = 'waited %s seconds for "%s" but it did not happen' %\
- (timeout, description)
- assert False, MESSAGE
-
- def spmStarted(self, poolID):
- self.sleepWhileVDSMCompletesTask(duration=1)
- masterDomainDirectory = '/rhev/data-center/%s/mastersd' % poolID
- master = os.path.join(masterDomainDirectory, 'master')
- tasks = os.path.join(master, 'tasks')
- vms = os.path.join(master, 'vms')
- self.waitFor(
- 60,
- 'SPM related subdirectories exist',
- self._allExist,
- [master, tasks, vms])
-
- def _allExist(self, paths):
- result = True
- for path in paths:
- result = result and os.path.exists(path)
- return result
-
- def waitUntilVDSMTaskFinished(self, taskID, timeout):
- self.waitFor(
- timeout,
- 'vdsm task to be finished',
- self._taskFinished,
- taskID)
- taskStatus = self._taskStatus(taskID)
- assert taskStatus['code'] == 0, taskStatus['message']
-
- def _taskFinished(self, taskID):
- return self._taskStatus(taskID)['taskState'] == 'finished'
-
- def _taskStatus(self, taskID):
- result = self.vdsm().getTaskStatus(taskID)
- return result['taskStatus']
-
-
-class StorageBackend(object):
- def __init__(self):
- self._vdsmCaller = vdsmcaller.VDSMCaller()
- self._domainID = self._newUUID()
- self._poolID = self._newUUID()
- self._imageID = self._newUUID()
- self._volumeID = self._newUUID()
- self._connectionID = self._newUUID()
-
- def connectionID(self):
- return self._connectionID
-
- def volumeID(self):
- return self._volumeID
-
- def imageID(self):
- return self._imageID
-
- def poolID(self):
- return self._poolID
-
- def domainID(self):
- return self._domainID
-
- def vdsm(self):
- return self._vdsmCaller
-
- def _newUUID(self):
- return str(uuid.uuid4())
-
- def randomName(self, base):
- return "%s_%04d" % (base, random.randint(1, 10000))
-
- def createStoragePool(self):
- POOL_TYPE_DEPRECATED = 0
- self.vdsm().createStoragePool(
- POOL_TYPE_DEPRECATED,
- self.poolID(),
- self.randomName('pool'),
- self.domainID(),
- [self.domainID()],
- 1)
- return self.poolID()
-
- def connectStoragePool(self, poolID, masterDomainID):
- SCSI_KEY_DEPRECATED = 0
- self.vdsm().connectStoragePool(
- poolID,
- 1,
- SCSI_KEY_DEPRECATED,
- masterDomainID,
- 1)
-
- def spmStart(self, poolID):
- RECOVERY_MODE_DEPRECATED = 0
- SCSI_FENCING_DEPRECATED = 0
- self.vdsm().spmStart(
- poolID,
- -1,
- '-1',
- SCSI_FENCING_DEPRECATED,
- RECOVERY_MODE_DEPRECATED)
-
- def activateStorageDomain(self, domainID, poolID):
- self.vdsm().activateStorageDomain(domainID, poolID)
-
- def largeIntegerXMLRPCWorkaround(self, number):
- return str(number)
diff --git a/tests/functional/testlib/storagecontexts/base/__init__.py b/tests/functional/testlib/storagecontexts/base/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/functional/testlib/storagecontexts/base/__init__.py
diff --git a/tests/functional/testlib/storagecontexts/base/storagebackend.py b/tests/functional/testlib/storagecontexts/base/storagebackend.py
new file mode 100644
index 0000000..9fad2e9
--- /dev/null
+++ b/tests/functional/testlib/storagecontexts/base/storagebackend.py
@@ -0,0 +1,73 @@
+import random
+import uuid
+from functional.testlib import vdsmcaller
+
+
+class StorageBackend(object):
+ def __init__(self):
+ self._vdsmCaller = vdsmcaller.VDSMCaller()
+ self._domainID = self._newUUID()
+ self._poolID = self._newUUID()
+ self._imageID = self._newUUID()
+ self._volumeID = self._newUUID()
+ self._connectionID = self._newUUID()
+
+ def connectionID(self):
+ return self._connectionID
+
+ def volumeID(self):
+ return self._volumeID
+
+ def imageID(self):
+ return self._imageID
+
+ def poolID(self):
+ return self._poolID
+
+ def domainID(self):
+ return self._domainID
+
+ def vdsm(self):
+ return self._vdsmCaller
+
+ def _newUUID(self):
+ return str(uuid.uuid4())
+
+ def randomName(self, base):
+ return "%s_%04d" % (base, random.randint(1, 10000))
+
+ def createStoragePool(self):
+ POOL_TYPE_DEPRECATED = 0
+ self.vdsm().createStoragePool(
+ POOL_TYPE_DEPRECATED,
+ self.poolID(),
+ self.randomName('pool'),
+ self.domainID(),
+ [self.domainID()],
+ 1)
+ return self.poolID()
+
+ def connectStoragePool(self, poolID, masterDomainID):
+ SCSI_KEY_DEPRECATED = 0
+ self.vdsm().connectStoragePool(
+ poolID,
+ 1,
+ SCSI_KEY_DEPRECATED,
+ masterDomainID,
+ 1)
+
+ def spmStart(self, poolID):
+ RECOVERY_MODE_DEPRECATED = 0
+ SCSI_FENCING_DEPRECATED = 0
+ self.vdsm().spmStart(
+ poolID,
+ -1,
+ '-1',
+ SCSI_FENCING_DEPRECATED,
+ RECOVERY_MODE_DEPRECATED)
+
+ def activateStorageDomain(self, domainID, poolID):
+ self.vdsm().activateStorageDomain(domainID, poolID)
+
+ def largeIntegerXMLRPCWorkaround(self, number):
+ return str(number)
diff --git a/tests/functional/testlib/storagecontexts/base/verify.py b/tests/functional/testlib/storagecontexts/base/verify.py
new file mode 100644
index 0000000..0ecc956
--- /dev/null
+++ b/tests/functional/testlib/storagecontexts/base/verify.py
@@ -0,0 +1,67 @@
+import logging
+import os
+import time
+
+
+class Verify(object):
+ def __init__(self, vdsm):
+ self.vdsm = vdsm
+
+ def sleepWhileVDSMCompletesTask(self, duration):
+ time.sleep(duration)
+
+ def storagePoolCreated(self, poolID, masterDomainID):
+ self.sleepWhileVDSMCompletesTask(duration=1)
+ linkToDomain =\
+ os.path.join('/rhev/data-center', poolID, masterDomainID)
+ linkToMasterDomain =\
+ os.path.join('/rhev/data-center', poolID, 'mastersd')
+ assert os.path.lexists(linkToDomain)
+ assert os.path.lexists(linkToMasterDomain)
+
+ def waitFor(self, seconds, description, predicate, *args, **kwargs):
+ logging.info('waiting for "%s"' % description)
+ start = time.time()
+ for _ in xrange(seconds):
+ if predicate(*args, **kwargs):
+ logging.info('it took %0.3f seconds' % (time.time() - start))
+ return
+ time.sleep(1)
+
+ MESSAGE = 'waited %s seconds for "%s" but it did not happen' %\
+ (seconds, description)
+ assert False, MESSAGE
+
+ def spmStarted(self, poolID):
+ self.sleepWhileVDSMCompletesTask(duration=1)
+ masterDomainDirectory = '/rhev/data-center/%s/mastersd' % poolID
+ master = os.path.join(masterDomainDirectory, 'master')
+ tasks = os.path.join(master, 'tasks')
+ vms = os.path.join(master, 'vms')
+ self.waitFor(
+ 60,
+ 'SPM related subdirectories exist',
+ self._allExist,
+ [master, tasks, vms])
+
+ def _allExist(self, paths):
+ result = True
+ for path in paths:
+ result = result and os.path.exists(path)
+ return result
+
+ def waitUntilVDSMTaskFinished(self, taskID, timeout):
+ self.waitFor(
+ timeout,
+ 'vdsm task to be finished',
+ self._taskFinished,
+ taskID)
+ taskStatus = self._taskStatus(taskID)
+ assert taskStatus['code'] == 0, taskStatus['message']
+
+ def _taskFinished(self, taskID):
+ return self._taskStatus(taskID)['taskState'] == 'finished'
+
+ def _taskStatus(self, taskID):
+ result = self.vdsm().getTaskStatus(taskID)
+ return result['taskStatus']
diff --git a/tests/functional/testlib/storagecontexts/filebased.py b/tests/functional/testlib/storagecontexts/filebased.py
index bdb15d7..ba7ab10 100644
--- a/tests/functional/testlib/storagecontexts/filebased.py
+++ b/tests/functional/testlib/storagecontexts/filebased.py
@@ -2,10 +2,12 @@
import storage.volume
import storage.image
from . import base
+from functional.testlib.storagecontexts.base import storagebackend
+from functional.testlib.storagecontexts.base import verify
import logging
-class Verify(base.Verify):
+class Verify(verify.Verify):
def rhevMountPoint(self):
raise Exception('you must override this function')
@@ -52,7 +54,7 @@
assert os.path.exists(path)
-class FileBased(base.StorageBackend):
+class FileBased(storagebackend.StorageBackend):
def createVolume(self, size):
PREALLOCATE = 1
result = self.vdsm().createVolume(
diff --git a/tests/functional/testlib/storagecontexts/iscsi.py b/tests/functional/testlib/storagecontexts/iscsi.py
index eb19715..640ec14 100644
--- a/tests/functional/testlib/storagecontexts/iscsi.py
+++ b/tests/functional/testlib/storagecontexts/iscsi.py
@@ -8,12 +8,13 @@
import storage.sd
import storage.volume
import storage.image
-from . import base
+from functional.testlib.storagecontexts.base import storagebackend
+from functional.testlib.storagecontexts.base import verify
-class Verify(base.Verify):
+class Verify(verify.Verify):
def __init__(self, iqn, volumeGroup, vdsm, volumeID):
- base.Verify.__init__(self, vdsm)
+ verify.Verify.__init__(self, vdsm)
self._iqn = iqn
self._volumeGroup = volumeGroup
self._volumeID = volumeID
@@ -53,11 +54,11 @@
assert result == 0, "did not find logical volume in volume group"
-class ISCSI(base.StorageBackend):
+class ISCSI(storagebackend.StorageBackend):
_NULL_UUID = '00000000-0000-0000-0000-000000000000'
def __init__(self):
- base.StorageBackend.__init__(self)
+ storagebackend.StorageBackend.__init__(self)
self._iqn = 'iqn.1970-01.functional.test:%04d' %\
random.randint(1, 10000)
self._volumeGroup = {'uuid': self.domainID(), 'vgs_uuid': None}
--
To view, visit http://gerrit.ovirt.org/34242
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie9dbe640943f86fcb2415c25498f663304653b3a
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yoav Kleinberger <ykleinbe(a)redhat.com>
Change in vdsm[ovirt-3.5]: caps: Additional ppc64 hardware information
by vdelima@redhat.com
Hello Antoni Segura Puimedon, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/34617
to review the following change.
Change subject: caps: Additional ppc64 hardware information
......................................................................
caps: Additional ppc64 hardware information
Includes extra information (manufacturer and product name) about ppc64
hosts in the getVdsHardwareInfo command. This extra information is
obtained from the device tree and is skipped if it is missing.
Change-Id: I8f67a830740b64bc246f680f2c7a18a4293f4cc2
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1149262
Signed-off-by: Vitor de Lima <vdelima(a)redhat.com>
Reviewed-on: http://gerrit.ovirt.org/33857
Reviewed-by: Antoni Segura Puimedon <asegurap(a)redhat.com>
Reviewed-by: Dan Kenigsberg <danken(a)redhat.com>
---
M vdsm/ppc64HardwareInfo.py
1 file changed, 16 insertions(+), 9 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/17/34617/1
diff --git a/vdsm/ppc64HardwareInfo.py b/vdsm/ppc64HardwareInfo.py
index 1a4b30e..029eaa4 100644
--- a/vdsm/ppc64HardwareInfo.py
+++ b/vdsm/ppc64HardwareInfo.py
@@ -21,14 +21,21 @@
import os
+def _getFromDeviceTree(treeProperty):
+ path = '/proc/device-tree/%s' % treeProperty
+ if os.path.exists(path):
+ with open(path) as f:
+ value = f.readline().rstrip('\0').replace(',', '')
+ return value
+ else:
+ return 'unavailable'
+
+
@utils.memoized
def getHardwareInfoStructure():
- infoStructure = {'systemProductName': 'unavailable',
- 'systemSerialNumber': 'unavailable',
+ infoStructure = {'systemSerialNumber': 'unavailable',
'systemFamily': 'unavailable',
- 'systemVersion': 'unavailable',
- 'systemUUID': 'unavailable',
- 'systemManufacturer': 'unavailable'}
+ 'systemVersion': 'unavailable'}
for line in file('/proc/cpuinfo'):
if line.strip() == '':
@@ -42,11 +49,11 @@
elif key == 'machine':
infoStructure['systemVersion'] = value
- if os.path.exists('/proc/device-tree/system-id'):
- with open('/proc/device-tree/system-id') as f:
- vdsmId = f.readline().rstrip('\0').replace(',', '')
+ infoStructure['systemUUID'] = _getFromDeviceTree('system-id')
- infoStructure['systemUUID'] = vdsmId
+ infoStructure['systemProductName'] = _getFromDeviceTree('model-name')
+
+ infoStructure['systemManufacturer'] = _getFromDeviceTree('vendor')
return infoStructure
--
To view, visit http://gerrit.ovirt.org/34617
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8f67a830740b64bc246f680f2c7a18a4293f4cc2
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Vitor de Lima <vdelima(a)redhat.com>
Gerrit-Reviewer: Antoni Segura Puimedon <asegurap(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Change in vdsm[ovirt-3.5]: volume: Log the correct error when creating a volume fails
by Allon Mureinik
Allon Mureinik has uploaded a new change for review.
Change subject: volume: Log the correct error when creating a volume fails
......................................................................
volume: Log the correct error when creating a volume fails
When a volume creation failed because of a CannotCreateLogicalVolume
exception, we used to lie and log "volume already exists". This log
confused developers and wasted many hours. Now we log the exception
value instead.
Change-Id: I603b055658950dae5ccc3806b8b7a9e53762c5ef
Bug-Url: https://bugzilla.redhat.com/1143830
Relates-To: https://bugzilla.redhat.com/1142710
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
Reviewed-on: http://gerrit.ovirt.org/33301
Reviewed-by: Yoav Kleinberger <ykleinbe(a)redhat.com>
Reviewed-by: Dan Kenigsberg <danken(a)redhat.com>
(cherry picked from commit 3e17f9828576f16e4ef95f805c2e7ce27b63d812)
---
M vdsm/storage/volume.py
1 file changed, 1 insertion(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/39/34639/1
diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
index 13bd256..75fa1c4 100644
--- a/vdsm/storage/volume.py
+++ b/vdsm/storage/volume.py
@@ -434,8 +434,7 @@
preallocate, volParent, srcImgUUID,
srcVolUUID, volPath)
except (se.VolumeAlreadyExists, se.CannotCreateLogicalVolume) as e:
- cls.log.error("Failed to create volume: %s, volume already "
- "exists", volPath)
+ cls.log.error("Failed to create volume %s: %s", volPath, e)
vars.task.popRecovery()
raise e
# When the volume format is raw what the guest sees is the apparent
--
To view, visit http://gerrit.ovirt.org/34639
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I603b055658950dae5ccc3806b8b7a9e53762c5ef
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Allon Mureinik <amureini(a)redhat.com>
Gerrit-Reviewer: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[ovirt-3.5]: netinfo:nicSpeed(): fix nicSpeed condition
by phoracek@redhat.com
Petr Horáček has uploaded a new change for review.
Change subject: netinfo:nicSpeed(): fix nicSpeed condition
......................................................................
netinfo:nicSpeed(): fix nicSpeed condition
In `if s not in (2 ** 16 - 1, 2 ** 32 - 1) or s > 0` the first part doesn't
make sense with OR. Changed to AND.
I also restructured the conditions to make the function more readable and
created a unit test to make sure the function returns correct values.
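As a rough illustration (a sketch of the boolean logic only, not code from the patch), the sentinel values that sysfs reports for a downed device slipped through the original OR-based check, while the AND form rejects them:

    SENTINELS = (2 ** 16 - 1, 2 ** 32 - 1)

    def speed_ok_old(s):
        # original condition: 65535 passes, because 65535 > 0 makes the OR true
        return s not in SENTINELS or s > 0

    def speed_ok_new(s):
        # fixed condition: sentinels and non-positive readings are both rejected
        return s not in SENTINELS and s > 0

    assert speed_ok_old(2 ** 16 - 1) is True    # bug: sentinel accepted
    assert speed_ok_new(2 ** 16 - 1) is False   # fix: sentinel rejected
    assert speed_ok_new(-10) is False
    assert speed_ok_new(123) is True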
Change-Id: I34cfca16909f9695441e26d3ddd508e7a4210c12
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1157224
Signed-off-by: Petr Horáček <phoracek(a)redhat.com>
---
M lib/vdsm/netinfo.py
M tests/netinfoTests.py
2 files changed, 28 insertions(+), 13 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/01/34701/1
diff --git a/lib/vdsm/netinfo.py b/lib/vdsm/netinfo.py
index 174e5e4..74c9f6c 100644
--- a/lib/vdsm/netinfo.py
+++ b/lib/vdsm/netinfo.py
@@ -246,19 +246,17 @@
def nicSpeed(nicName):
- """Returns the nic speed if it is a legal value and nicName refers to a
- nic, 0 otherwise."""
+ """Returns the nic speed if it is a legal value, nicName refers to a
+ nic and nic is UP, 0 otherwise."""
try:
- # if the device is not up we must report 0
- if operstate(nicName) != OPERSTATE_UP:
- return 0
- with open('/sys/class/net/%s/speed' % nicName) as speedFile:
- s = int(speedFile.read())
- # the device may have been disabled/downed after checking
- # so we validate the return value as sysfs may return
- # special values to indicate the device is down/disabled
- if s not in (2 ** 16 - 1, 2 ** 32 - 1) or s > 0:
- return s
+ if operstate(nicName) == OPERSTATE_UP:
+ with open('/sys/class/net/%s/speed' % nicName) as speedFile:
+ s = int(speedFile.read())
+ # the device may have been disabled/downed after checking
+ # so we validate the return value as sysfs may return
+ # special values to indicate the device is down/disabled
+ if s not in (2 ** 16 - 1, 2 ** 32 - 1) and s > 0:
+ return s
except IOError as ose:
if ose.errno == errno.EINVAL:
return _ibHackedSpeed(nicName)
diff --git a/tests/netinfoTests.py b/tests/netinfoTests.py
index 8900b24..62c46b6 100644
--- a/tests/netinfoTests.py
+++ b/tests/netinfoTests.py
@@ -18,9 +18,11 @@
#
# Refer to the README and COPYING files for full details of the license
#
+import __builtin__
import os
from datetime import datetime
from functools import partial
+import io
import time
import ethtool
@@ -29,7 +31,7 @@
from vdsm import netconfpersistence
from vdsm import netinfo
from vdsm.netinfo import (getBootProtocol, getDhclientIfaces, BONDING_MASTERS,
- BONDING_OPT, _getBondingOptions)
+ BONDING_OPT, _getBondingOptions, OPERSTATE_UP)
from vdsm.tool.dump_bonding_defaults import _random_iface_name
from functional import dummy, veth
@@ -93,6 +95,21 @@
for s, addr in zip(inputs, ip):
self.assertEqual(addr, netinfo.ipv6StrToAddress(s))
+ def testValidNicSpeed(self):
+ values = ((0, OPERSTATE_UP, 0),
+ (-10, OPERSTATE_UP, 0),
+ (2 ** 16 - 1, OPERSTATE_UP, 0),
+ (2 ** 32 - 1, OPERSTATE_UP, 0),
+ (123, OPERSTATE_UP, 123),
+ (123, 'unknown', 0))
+
+ for passed, operstate, expected in values:
+ with MonkeyPatchScope([(__builtin__, 'open',
+ lambda x: io.BytesIO(str(passed))),
+ (netinfo, 'operstate',
+ lambda x: operstate)]):
+ self.assertEqual(netinfo.nicSpeed('fake_nic'), expected)
+
@MonkeyPatch(ipwrapper.Link, '_detectType',
partial(_fakeTypeDetection, ipwrapper.Link))
@MonkeyPatch(netinfo, 'networks', lambda: {'fake': {'bridged': True}})
--
To view, visit http://gerrit.ovirt.org/34701
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I34cfca16909f9695441e26d3ddd508e7a4210c12
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Petr Horáček <phoracek(a)redhat.com>
Change in vdsm[master]: persistence: fails while removing non-existing network
by phoracek@redhat.com
Petr Horáček has uploaded a new change for review.
Change subject: persistence: fails while removing non-existing network
......................................................................
persistence: fails while removing non-existing network
vdsm-restore-net-config:unified_restoration() removes current networks
(listed in runningConfig) before it tries to restore persisted ones.
The problem occurs when a current network (listed in runningConfig) does not
exist in the system (for example when we manually delete its nic). In
this case, setupNetworks() raises an exception (network does not exist)
and restoration cannot continue.
I added a new condition to setupNetworks(): it checks whether a network is in
runningConfig when it was not listed in netinfo and libvirt_nets. If
so, the invalid network is removed from runningConfig.
Change-Id: I0c507626705d7ead84db2f3aa15e4032f9558d12
Signed-off-by: Petr Horáček <phoracek(a)redhat.com>
---
M vdsm/network/api.py
M vdsm/vdsm-restore-net-config
2 files changed, 11 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/95/33995/1
diff --git a/vdsm/network/api.py b/vdsm/network/api.py
index e70d954..dc372a7 100755
--- a/vdsm/network/api.py
+++ b/vdsm/network/api.py
@@ -719,6 +719,16 @@
del networks[network]
del libvirt_nets[network]
_netinfo.updateDevices()
+ elif network in configurator.runningConfig.networks:
+ # If the network was not in _netinfo or libvirt_nets but is in
+ # the networks returned by running configurator, we might
+ # remove it from runningConfig.
+ logger.debug('Removing non-existing network %r from '
+ 'runningConfig', network)
+ configurator.runningConfig.removeNetwork(network)
+ if 'remove' in networkAttrs:
+ del networks[network]
+ _netinfo.updateDevices()
elif 'remove' in networkAttrs:
raise ConfigNetworkError(ne.ERR_BAD_BRIDGE, "Cannot delete "
"network %r: It doesn't exist in the "
diff --git a/vdsm/vdsm-restore-net-config b/vdsm/vdsm-restore-net-config
index f9ca589..6c9681a 100755
--- a/vdsm/vdsm-restore-net-config
+++ b/vdsm/vdsm-restore-net-config
@@ -31,6 +31,7 @@
# Unified persistence restoration
from network.api import setupNetworks
from network import configurators
+from network import errors as ne
from vdsm.netconfpersistence import RunningConfig, PersistentConfig
import pkgutil
--
To view, visit http://gerrit.ovirt.org/33995
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I0c507626705d7ead84db2f3aa15e4032f9558d12
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Petr Horáček <phoracek(a)redhat.com>
Change in vdsm[master]: openstacknet: Fix migration when using security groups
by asegurap@redhat.com
Antoni Segura Puimedon has uploaded a new change for review.
Change subject: openstacknet: Fix migration when using security groups
......................................................................
openstacknet: Fix migration when using security groups
When migrating, the destination libvirt receives an XML that has already
been altered by before_device_create, like this test one:
<?xml version="1.0" encoding="utf-8"?>
<interface type="bridge">
<mac address="00:1a:4a:16:01:51"/>
<model type="virtio"/>
<source bridge="qbrtest_port_i"/>
<target dev="taptest_port_i"/>
</interface>
The issue is that the before_device_create hooking point is not part
of the migration flow, and it is the only thing that creates the security
groups bridge and the necessary veths. To make migration work, it was
therefore necessary to add a hook on the 'before_device_migrate_destination'
hooking point that creates the security groups bridge and the veths.
Change-Id: Icd8a789c4565f32b32965af3966a4edd361949ea
Bug-Url: https://bugzilla.redhat.com/1048880
Signed-off-by: Antoni S. Puimedon <asegurap(a)redhat.com>
---
M vdsm_hooks/openstacknet/Makefile.am
M vdsm_hooks/openstacknet/before_device_create.py
A vdsm_hooks/openstacknet/before_device_migrate_destination.py
M vdsm_hooks/openstacknet/openstacknet_utils.py
4 files changed, 137 insertions(+), 47 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/06/34406/1
diff --git a/vdsm_hooks/openstacknet/Makefile.am b/vdsm_hooks/openstacknet/Makefile.am
index 2baab05..4cd9c67 100644
--- a/vdsm_hooks/openstacknet/Makefile.am
+++ b/vdsm_hooks/openstacknet/Makefile.am
@@ -32,6 +32,7 @@
after_device_create.py \
after_device_destroy.py \
before_device_create.py \
+ before_device_migrate_destination.py \
$(constsfile) \
sudoers.in
@@ -57,6 +58,9 @@
$(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_device_create
$(INSTALL_SCRIPT) $(srcdir)/before_device_create.py \
$(DESTDIR)$(vdsmhooksdir)/before_device_create/50_openstacknet
+ $(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_device_migrate_destination
+ $(INSTALL_SCRIPT) $(srcdir)/before_device_migrate_destination.py \
+ $(DESTDIR)$(vdsmhooksdir)/before_device_migrate_destination/50_openstacknet
$(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug
$(INSTALL_SCRIPT) $(srcdir)/before_device_create.py \
$(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug/50_openstacknet
@@ -68,6 +72,7 @@
$(RM) $(DESTDIR)$(vdsmhooksdir)/after_nic_hotplug/50_openstacknet
$(RM) $(DESTDIR)$(vdsmhooksdir)/after_nic_hotunplug/50_openstacknet
$(RM) $(DESTDIR)$(vdsmhooksdir)/before_device_create/50_openstacknet
+ $(RM) $(DESTDIR)$(vdsmhooksdir)/before_device_migrate_destination/50_openstacknet
$(RM) $(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug/50_openstacknet
install-data-consts:
@@ -89,6 +94,8 @@
$(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_device_create
$(INSTALL_SCRIPT) $(srcdir)/$(constsfile) \
$(DESTDIR)$(vdsmhooksdir)/before_device_create/$(constsfile)
+ $(INSTALL_SCRIPT) $(srcdir)/$(constsfile) \
+ $(DESTDIR)$(vdsmhooksdir)/before_device_migrate_destination/$(constsfile)
$(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug
$(INSTALL_SCRIPT) $(srcdir)/$(constsfile) \
$(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug/$(constsfile)
@@ -100,6 +107,7 @@
$(RM) $(DESTDIR)$(vdsmhooksdir)/after_nic_hotplug/$(constsfile)
$(RM) $(DESTDIR)$(vdsmhooksdir)/after_nic_hotunplug/$(constsfile)
$(RM) $(DESTDIR)$(vdsmhooksdir)/before_device_create/$(constsfile)
+ $(RM) $(DESTDIR)$(vdsmhooksdir)/before_device_migrate_destination/$(constsfile)
$(RM) $(DESTDIR)$(vdsmhooksdir)/before_nic_hotplug/$(constsfile)
install-data-sudoers:
diff --git a/vdsm_hooks/openstacknet/before_device_create.py b/vdsm_hooks/openstacknet/before_device_create.py
index de7a896..d326cd5 100755
--- a/vdsm_hooks/openstacknet/before_device_create.py
+++ b/vdsm_hooks/openstacknet/before_device_create.py
@@ -31,15 +31,12 @@
'''
import os
-import subprocess
import sys
import traceback
from xml.dom import minidom
import hooking
from openstacknet_utils import DUMMY_BRIDGE
-from openstacknet_utils import EXT_BRCTL
-from openstacknet_utils import EXT_IP
from openstacknet_utils import INTEGRATION_BRIDGE
from openstacknet_utils import OPENSTACK_NET_PROVIDER_TYPE
from openstacknet_utils import PLUGIN_TYPE_KEY
@@ -48,10 +45,8 @@
from openstacknet_utils import PT_OVS
from openstacknet_utils import SECURITY_GROUPS_KEY
from openstacknet_utils import VNIC_ID_KEY
-from openstacknet_utils import deviceExists
from openstacknet_utils import devName
-from openstacknet_utils import executeOrExit
-from openstacknet_utils import ovs_vsctl
+from openstacknet_utils import setUpSecurityGroupVnic
HELP_ARG = "-h"
TEST_ARG = "-t"
@@ -94,36 +89,11 @@
def addOvsHybridVnic(domxml, iface, portId):
+ setUpSecurityGroupVnic(
+ iface.getElementsByTagName('mac')[0].getAttribute('address'),
+ portId)
+
brName = devName("qbr", portId)
-
- # TODO: Remove this check after bz 1045626 is fixed
- if not deviceExists(brName):
- executeOrExit([EXT_BRCTL, 'addbr', brName])
- executeOrExit([EXT_BRCTL, 'setfd', brName, '0'])
- executeOrExit([EXT_BRCTL, 'stp', brName, 'off'])
-
- vethBr = devName("qvb", portId)
- vethOvs = devName("qvo", portId)
-
- # TODO: Remove this check after bz 1045626 is fixed
- if not deviceExists(vethOvs):
- executeOrExit([EXT_IP, 'link', 'add', vethBr, 'type', 'veth', 'peer',
- 'name', vethOvs])
- for dev in [vethBr, vethOvs]:
- executeOrExit([EXT_IP, 'link', 'set', dev, 'up'])
- executeOrExit([EXT_IP, 'link', 'set', dev, 'promisc', 'on'])
-
- executeOrExit([EXT_IP, 'link', 'set', brName, 'up'])
- executeOrExit([EXT_BRCTL, 'addif', brName, vethBr])
-
- mac = iface.getElementsByTagName('mac')[0].getAttribute('address')
- executeOrExit([ovs_vsctl.cmd, '--', '--may-exist', 'add-port',
- INTEGRATION_BRIDGE, vethOvs,
- '--', 'set', 'Interface', vethOvs,
- 'external-ids:iface-id=%s' % portId,
- 'external-ids:iface-status=active',
- 'external-ids:attached-mac=%s' % mac])
-
defineLinuxBridge(domxml, iface, portId, brName)
@@ -165,16 +135,6 @@
hooking.write_domxml(domxml)
-def mockExecuteOrExit(command):
- print("Mocking successful execution of: %s"
- % subprocess.list2cmdline(command))
- return (0, '', '')
-
-
-def mockDeviceExists(dev):
- return False
-
-
def test(ovs, withSecurityGroups):
domxml = minidom.parseString("""<?xml version="1.0" encoding="utf-8"?>
<interface type="bridge">
@@ -188,8 +148,9 @@
else:
pluginType = PT_BRIDGE
- globals()['executeOrExit'] = mockExecuteOrExit
- globals()['deviceExists'] = mockDeviceExists
+ import openstacknet_utils
+ openstacknet_utils.executeOrExit = openstacknet_utils.mockExecuteOrExit
+ openstacknet_utils.deviceExists = openstacknet_utils.mockDeviceExists
addOpenstackVnic(domxml,
pluginType,
'test_port_id',
diff --git a/vdsm_hooks/openstacknet/before_device_migrate_destination.py b/vdsm_hooks/openstacknet/before_device_migrate_destination.py
new file mode 100644
index 0000000..684a2ac
--- /dev/null
+++ b/vdsm_hooks/openstacknet/before_device_migrate_destination.py
@@ -0,0 +1,78 @@
+#!/usr/bin/env python
+
+"""
+OpenStack Network Hook (pre device migration)
+=============================================
+The hook receives a port_id for a migrated virtual NIC that is to be handled by
+creating a security groups bridge if security groups are needed. If no security
+groups are needed, the XML of the device will already tell libvirt to use OVS,
+so no change is needed in that flow.
+
+For the security groups flow, the current implementation connects the vNIC tap
+to a dedicated Linux bridge, which is connected by a veth pair to the OVS
+integration bridge. The reason for this is that currently the Security Groups
+implementation (iptables) doesn't work on the OVS bridge, so a workaround had
+to be taken (the same one OpenStack Compute uses).
+
+Syntax:
+ { 'provider_type': 'OPENSTACK_NETWORK', 'vnic_id': 'port_id',
+ 'plugin_type': 'plugin_type_value', 'security_groups': .* }
+Where:
+ port_id should be replaced with the port id of the virtual NIC to be
+ connected to OpenStack Network.
+ plugin_type_value should be replaced with OPEN_VSWITCH for the OVS plugin
+ or anything else for other plugins.
+ security_groups will trigger the correct behavior for enabling security
+ groups support, mainly when using OVS. The value is unimportant.
+"""
+import hooking
+import os
+import sys
+import traceback
+
+from openstacknet_utils import OPENSTACK_NET_PROVIDER_TYPE
+from openstacknet_utils import PLUGIN_TYPE_KEY
+from openstacknet_utils import PROVIDER_TYPE_KEY
+from openstacknet_utils import PT_OVS
+from openstacknet_utils import SECURITY_GROUPS_KEY
+from openstacknet_utils import VNIC_ID_KEY
+from openstacknet_utils import setUpSecurityGroupVnic
+
+
+def main():
+ if PROVIDER_TYPE_KEY not in os.environ:
+ return
+
+ providerType = os.environ[PROVIDER_TYPE_KEY]
+ pluginType = os.environ[PLUGIN_TYPE_KEY]
+ if (providerType == OPENSTACK_NET_PROVIDER_TYPE and
+ pluginType == PT_OVS and SECURITY_GROUPS_KEY in os.environ):
+ domxml = hooking.read_domxml()
+ portId = os.environ[VNIC_ID_KEY]
+ iface = domxml.getElementsByTagName('interface')[0]
+ mac = iface.getElementsByTagName('mac')[0].getAttribute('address')
+ setUpSecurityGroupVnic(mac, portId)
+
+
+def test():
+ """Should create:
+ - qbrtest_port_i linux bridge
+ - qvbtest_port_i veth attached to the bridge above
+ - qvotest_port_i matching veth attached to the br-int with portId
+ 'test_port_id' and mac '00:1a:4a:16:01:51'
+ """
+ import openstacknet_utils
+ openstacknet_utils.executeOrExit = openstacknet_utils.mockExecuteOrExit
+ openstacknet_utils.deviceExists = openstacknet_utils.mockDeviceExists
+ setUpSecurityGroupVnic("00:1a:4a:16:01:51", 'test_port_id')
+
+
+if __name__ == '__main__':
+ try:
+ if '-t' in sys.argv:
+ test()
+ else:
+ main()
+ except:
+ hooking.exit_hook('openstacknet hook: [unexpected error]: %s\n' %
+ traceback.format_exc())
diff --git a/vdsm_hooks/openstacknet/openstacknet_utils.py b/vdsm_hooks/openstacknet/openstacknet_utils.py
index 5a75fb6..7593a6d 100644
--- a/vdsm_hooks/openstacknet/openstacknet_utils.py
+++ b/vdsm_hooks/openstacknet/openstacknet_utils.py
@@ -1,6 +1,7 @@
#!/usr/bin/python
import hooking
+import subprocess
from vdsm.netinfo import DUMMY_BRIDGE
from vdsm.utils import CommandPath
@@ -36,6 +37,12 @@
(command, err))
+def mockExecuteOrExit(command):
+ print("Mocking successful execution of: %s" %
+ subprocess.list2cmdline(command))
+ return (0, '', '')
+
+
def devName(prefix, name):
return (prefix + name)[:DEV_MAX_LENGTH]
@@ -44,3 +51,39 @@
command = [EXT_IP, 'link', 'show', 'dev', dev]
retcode, out, err = hooking.execCmd(command, raw=True)
return retcode == 0
+
+
+def mockDeviceExists(dev):
+ return False
+
+
+def setUpSecurityGroupVnic(macAddr, portId):
+ hooking.log('Setting up vNIC (portId %s) security groups' % portId)
+ brName = devName("qbr", portId)
+
+ # TODO: Remove this check after bz 1045626 is fixed
+ if not deviceExists(brName):
+ executeOrExit([EXT_BRCTL, 'addbr', brName])
+ executeOrExit([EXT_BRCTL, 'setfd', brName, '0'])
+ executeOrExit([EXT_BRCTL, 'stp', brName, 'off'])
+
+ vethBr = devName("qvb", portId)
+ vethOvs = devName("qvo", portId)
+
+ # TODO: Remove this check after bz 1045626 is fixed
+ if not deviceExists(vethOvs):
+ executeOrExit([EXT_IP, 'link', 'add', vethBr, 'type', 'veth', 'peer',
+ 'name', vethOvs])
+ for dev in [vethBr, vethOvs]:
+ executeOrExit([EXT_IP, 'link', 'set', dev, 'up'])
+ executeOrExit([EXT_IP, 'link', 'set', dev, 'promisc', 'on'])
+
+ executeOrExit([EXT_IP, 'link', 'set', brName, 'up'])
+ executeOrExit([EXT_BRCTL, 'addif', brName, vethBr])
+
+ executeOrExit([ovs_vsctl.cmd, '--', '--may-exist', 'add-port',
+ INTEGRATION_BRIDGE, vethOvs,
+ '--', 'set', 'Interface', vethOvs,
+ 'external-ids:iface-id=%s' % portId,
+ 'external-ids:iface-status=active',
+ 'external-ids:attached-mac=%s' % macAddr])
--
To view, visit http://gerrit.ovirt.org/34406
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Icd8a789c4565f32b32965af3966a4edd361949ea
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Antoni Segura Puimedon <asegurap(a)redhat.com>
Change in vdsm[master]: vdsm: documentation for new storage functional tests
by ykleinbe@redhat.com
Yoav Kleinberger has uploaded a new change for review.
Change subject: vdsm: documentation for new storage functional tests
......................................................................
vdsm: documentation for new storage functional tests
Added a markdown README file that explains the design and usage of the
new functional tests. This will help future developers who wish to
extend the tests.
Change-Id: I7858c5ce6e17f2dbf11bf7ec6d041e4dbef0457c
Signed-off-by: Yoav Kleinberger <ykleinbe(a)redhat.com>
---
A tests/functional/testlib/README.md
1 file changed, 144 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/78/34478/1
diff --git a/tests/functional/testlib/README.md b/tests/functional/testlib/README.md
new file mode 100644
index 0000000..3a2847c
--- /dev/null
+++ b/tests/functional/testlib/README.md
@@ -0,0 +1,144 @@
+# New Functional Tests for VDSM
+
+By Yoav Kleinberger (ykleinbe(a)redhat.com ; haarcuba(a)gmail.com)
+
+## Introduction
+This document describes new functional tests for VDSM storage,
+intended to replace the old tests at `tests/functional/storageTests.py`. The old
+tests need replacing, because they
+
+1. Are hard to understand.
+2. Do not really verify VDSM behaviour.
+
+The new tests *do* verify VDSM behaviour, and are, so I hope, easier to understand.
+
+## Running the Tests
+
+First things first: to run the new functional tests, change into the `tests` directory and then
+
+ $ ./run_functional_storage_tests.sh
+
+You can use
+
+ $ ./run_functional_storage_tests.sh --verbose
+
+to see verbose logging.
+
+## The Basic Test: Create Volume
+
+There is currently one basic flow we test
+
+1. Creation of a Storage Domain
+2. Creation of a Virtual Disk on that Domain
+
+We want to test this flow for various storage backends, e.g. for NFS, iSCSI and
+Fibre Channel. This requires that the test framework support different storage
+backends. The problem is solved at the price of a non-trivial design of test
+"Storage Contexts", which will now be explained. The test itself resides in
+`tests/functional/basicStorageTest.py`.
+
+## Storage Contexts
+
+### iSCSI example
+
+As explained above, our create-volume test needs to run in different contexts:
+an iSCSI context, an NFS context, and so on. These contexts are represented by
+modules residing in
+
+ functional/testlib/storagecontexts
+
+To take a concrete example, let's look at the iSCSI context, located at
+`functional/testlib/storagecontexts/iscsi.py`. This file contains two classes:
+`ISCSI` and `Verify`. The usage is as follows:
+
+ with iscsi.ISCSI() as (vdsm, verify):
+ storageServerID = vdsm.connectStorageServer()
+ verify.storageServerConnected()
+
+ domainID = vdsm.createStorageDomain()
+ verify.storageDomainCreated(domainID)
+
+ poolID = vdsm.createStoragePool()
+ verify.storagePoolCreated(poolID, masterDomainID=domainID)
+ ...
+
+Note the structure of action/verification. The `vdsm` object lets you give VDSM
+a command [just like Engine does]. The `verify` object checks for observable
+changes induced by the command, e.g. the existence of logical volumes, etc.
+
+The ISCSI storage context makes sure that the `(vdsm, verify)` pair acts on an
+actual iSCSI storage set up for the purpose of this test. The ISCSI object will
+set up an iSCSI storage (on the host where the test runs), and will clean it up
+when the test is finished. In the example above, `vdsm.connectStorageServer()`
+tells vdsm to connect to the iSCSI storage, and the `verify` object then checks
+inside the `sysfs` interface exposed by the linux kernel that the connection
+actually exists.
+
+### Under the Hood of the iSCSI Storage Context
+
+As a further clarification of the workings of these tests, let's look at what
+happens when the `vdsm.connectStorageServer()` call above is executed. If you
+look inside, you'll find that this gets translated into calling the VDSM API
+with something like
+
+
+ connectStorageServer( storage.sd.ISCSI_DOMAIN,
+ '00000000-0000-0000-0000-000000000000',
+ [{ 'connection': '127.0.0.1',
+ 'iqn': 'iqn.1970-01.functional.test:3243',
+ 'user': '',
+ 'tpgt': '1',
+ 'password': '',
+ 'id': '00000000-0000-0000-0000-000000000000',
+ 'port': '3260' }])
+
+Thus, the ISCSI storage context object translates the essence of
+`connectStorageServer` to proper iSCSI terms.
+
+The same goes for the `verify`
+object. The `verify.storageServerConnected` call above will result in globbing
+files under `/sys/devices/platform/host*/session*/iscsi_session/*/targetname`
+and looking for the IQN inside.
+
+### Other Storage Contexts
+
+As demonstrated above, the `ISCSI` class takes care of setting up iSCSI storage,
+and cleaning up afterwards. Similarly, if you want to test the same operations
+on NFS, you would use an NFS storage context (to be found in
+`functional/testlib/storagecontexts/nfs.py`) in exactly the same way:
+
+ with nfs.NFS() as (vdsm, verify):
+ storageServerID = vdsm.connectStorageServer()
+ verify.storageServerConnected()
+ ...
+
+This time, the `NFS` object will set up NFS storage, use the proper NFS
+parameters for calling VDSM and for verifying its behaviour, and remove it
+afterwards.
+
+Currently, we have iSCSI, LocalFS and NFS support. This may be extended by
+writing new storage contexts, e.g. for Fibre Channel.
+
+## VDSM Service Shutdown and Bringup
+
+Between tests, we shutdown the VDSM service, clean up the `/rhev/data-center`
+directory, and restart the service. The class in charge of this is
+`ControlVDSM`, located in `tests/functional/testlib/controlvdsm.py`. It also
+checks that VDSM is listening for commands before returning.
+
+## Randomness
+
+The storage contexts include a fair amount of randomizing, e.g. the `iqn` iSCSI
+parameter is different every time the test is run. This helps us avoid cases in
+which some leftovers from manual checks or previous test runs mess with our
+test results. I recommend continuing this practice.
+
+## Known Issues
+
+1. Currently, cleanup after the tests is not perfect: if you work with the
+ tests you'll see the problems. The tests work well enough that it's not an
+ immediate concern, but there is room for improvement. At this stage I wanted
+ to get the tests into wider acceptance and worry about improving this later.
+2. The storage context classes are obviously quite tailored to the
+ create volume test, and will probably require some changes when adding
+ future tests.
--
To view, visit http://gerrit.ovirt.org/34478
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I7858c5ce6e17f2dbf11bf7ec6d041e4dbef0457c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yoav Kleinberger <ykleinbe(a)redhat.com>