Change in vdsm[master]: tests: Add loop module
by Nir Soffer
Nir Soffer has posted comments on this change.
Change subject: tests: Add loop module
......................................................................
Patch Set 5: Code-Review-1
(6 comments)
Needs more work to be useful for more interesting LVM tests.
https://gerrit.ovirt.org/#/c/64329/5/tests/loop.py
File tests/loop.py:
Line 31:
Line 32: log = logging.getLogger("test")
Line 33:
Line 34:
Line 35: class Device(object):
Rename to open()?
with loop.open("backing_file"):
...
And maybe convert it to a function?
Line 36:
Line 37: def __init__(self, backing_file):
Line 38: self._backing_file = backing_file
Line 39: self._path = None
Line 40:
Line 41: @property
Line 42: def path(self):
Line 43: return self._path
Can be returned by the context manager.
Line 44:
Line 45: @property
Line 46: def backing_file(self):
Line 47: return self._backing_file
The caller has this value; we don't need to keep it.
Line 48:
Line 49: def __enter__(self):
This can accept the backing file.
Extract a create_device function, usable by code that needs to create many devices.
Line 50: cmd = ["losetup", "--find", "--show", self._backing_file]
Line 51: rc, out, err = commands.execCmd(cmd, raw=True)
Line 52: if rc != 0:
Line 53: raise cmdutils.Error(cmd, rc, out, err)
Line 54: self._path = out.strip()
Line 55: return self
We can return the path here instead of self.
Line 56:
Line 57: def __exit__(self, t, v, tb):
Extract a detach_device function, usable by code that created many devices and needs to detach them.
Line 58: cmd = ["losetup", "--detach", self._path]
Line 59: rc, out, err = commands.execCmd(cmd, raw=True)
Line 60: if rc != 0:
Line 61: if t is None:
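Taken together, the comments suggest an API along these lines. This is a minimal sketch, not the actual patch; the create_device/detach_device names come from the comments above, and the vdsm-style imports are an assumption:

from contextlib import contextmanager

from vdsm import cmdutils
from vdsm import commands


def create_device(backing_file):
    # Attach backing_file to the first free loop device and return the
    # device path printed by "losetup --find --show".
    cmd = ["losetup", "--find", "--show", backing_file]
    rc, out, err = commands.execCmd(cmd, raw=True)
    if rc != 0:
        raise cmdutils.Error(cmd, rc, out, err)
    return out.strip()


def detach_device(path):
    # Detach a loop device created by create_device().
    cmd = ["losetup", "--detach", path]
    rc, out, err = commands.execCmd(cmd, raw=True)
    if rc != 0:
        raise cmdutils.Error(cmd, rc, out, err)


@contextmanager
def open(backing_file):
    # Yield the device path, so callers need neither a Device object
    # nor a path property:
    #
    #     with loop.open("backing_file") as path:
    #         ...
    path = create_device(backing_file)
    try:
        yield path
    finally:
        detach_device(path)

Shadowing the builtin open() is harmless here because callers always use the module-qualified name, loop.open().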
--
To view, visit https://gerrit.ovirt.org/64329
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I37184887930ad9ec1234036fdcdeb6cee8ccac42
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: Adam Litke <alitke(a)redhat.com>
Gerrit-Reviewer: Ala Hino <ahino(a)redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Freddy Rolland <frolland(a)redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
Gerrit-HasComments: Yes
Change in vdsm[master]: guest-lvs: Add failing test for guest lvs
by automation@ovirt.org
gerrit-hooks has posted comments on this change.
Change subject: guest-lvs: Add failing test for guest lvs
......................................................................
Patch Set 9:
* #1374545::Update tracker: OK
* Check Bug-Url::OK
* Check Public Bug::#1374545::OK, public bug
* Check Product::#1374545::OK, Correct product Red Hat Enterprise Virtualization Manager
* Check TM::SKIP, not in a monitored branch (ovirt-3.6 ovirt-4.0)
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.6', 'ovirt-4.0'])
--
To view, visit https://gerrit.ovirt.org/64330
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I80d37278225262bc5692e00aed15654e84119590
Gerrit-PatchSet: 9
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: Adam Litke <alitke(a)redhat.com>
Gerrit-Reviewer: Ala Hino <ahino(a)redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini(a)redhat.com>
Gerrit-Reviewer: Freddy Rolland <frolland(a)redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
Gerrit-HasComments: No
Change in vdsm[master]: tests: Add loop module
by automation@ovirt.org
gerrit-hooks has posted comments on this change.
Change subject: tests: Add loop module
......................................................................
Patch Set 5:
* #1374545::Update tracker: OK
* Check Bug-Url::OK
* Check Public Bug::#1374545::OK, public bug
* Check Product::#1374545::OK, Correct product Red Hat Enterprise Virtualization Manager
* Check TM::SKIP, not in a monitored branch (ovirt-3.6 ovirt-4.0)
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.6', 'ovirt-4.0'])
--
To view, visit https://gerrit.ovirt.org/64329
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I37184887930ad9ec1234036fdcdeb6cee8ccac42
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: Adam Litke <alitke(a)redhat.com>
Gerrit-Reviewer: Ala Hino <ahino(a)redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Freddy Rolland <frolland(a)redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
Gerrit-HasComments: No
Change in vdsm[master]: tests: Use fresh ResourceManager for every test
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: tests: Use fresh ResourceManager for every test
......................................................................
tests: Use fresh ResourceManager for every test
Previously, if unregistering a namespace failed during tearDown, we
would reset the manager instance, but a test that failed could still
leave the manager in an inconsistent state, possibly breaking unrelated
tests.
Replace fragile setUp and tearDown code with monkeypatching, setting a
fresh instance before each test, and restoring the previous instance
after the test.
Change-Id: Ib4a68ce670aab78b15d11dafa155b58beb6265ec
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/storage_resourcemanager_test.py
1 file changed, 77 insertions(+), 54 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/04/65004/1
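(For readers unfamiliar with the helper imported below, here is a simplified sketch of what the MonkeyPatch decorator from tests/monkeypatch.py presumably does; the real implementation may differ:

import functools


def MonkeyPatch(obj, name, value):
    # Replace obj.<name> with value for the duration of the decorated
    # test, restoring the original attribute afterwards.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            old = getattr(obj, name)
            setattr(obj, name, value)
            try:
                return f(*args, **kwargs)
            finally:
                setattr(obj, name, old)
        return wrapper
    return decorator

Note that manager() is evaluated once per decorated test when the class body is executed, so every test method gets its own fresh instance.)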
diff --git a/tests/storage_resourcemanager_test.py b/tests/storage_resourcemanager_test.py
index 88b92fd..307fea5 100644
--- a/tests/storage_resourcemanager_test.py
+++ b/tests/storage_resourcemanager_test.py
@@ -28,6 +28,7 @@
from storage import resourceManager as rm
+from monkeypatch import MonkeyPatch
from storagefakelib import FakeResourceManager
from testlib import expandPermutations, permutations
from testlib import VdsmTestCase as TestCaseBase
@@ -112,30 +113,39 @@
return s
-class ResourceManagerTests(TestCaseBase):
- def setUp(self):
- manager = self.manager = rm.ResourceManager.getInstance()
- manager.registerNamespace("storage", rm.SimpleResourceFactory())
- manager.registerNamespace("null", NullResourceFactory())
- manager.registerNamespace("string", StringResourceFactory())
- manager.registerNamespace("error", ErrorResourceFactory())
- manager.registerNamespace("switchfail", SwitchFailFactory())
- manager.registerNamespace("crashy", CrashOnCloseFactory())
- manager.registerNamespace("failAfterSwitch", FailAfterSwitchFactory())
+def manager():
+ """
+ Create fresh ResourceManager instance for testing.
+ """
+ manager = rm.ResourceManager()
+ manager.registerNamespace("storage", rm.SimpleResourceFactory())
+ manager.registerNamespace("null", NullResourceFactory())
+ manager.registerNamespace("string", StringResourceFactory())
+ manager.registerNamespace("error", ErrorResourceFactory())
+ manager.registerNamespace("switchfail", SwitchFailFactory())
+ manager.registerNamespace("crashy", CrashOnCloseFactory())
+ manager.registerNamespace("failAfterSwitch", FailAfterSwitchFactory())
+ return manager
+
+class ResourceManagerTests(TestCaseBase):
+
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testErrorInFactory(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
req = manager.registerResource("error", "resource", rm.EXCLUSIVE,
lambda req, res: 1)
self.assertTrue(req.canceled())
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testSingleton(self):
a = rm.ResourceManager.getInstance()
b = rm.ResourceManager.getInstance()
self.assertEquals(id(a), id(b))
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRegisterInvalidNamespace(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
try:
manager.registerNamespace("I.HEART.DOTS",
rm.SimpleResourceFactory())
@@ -144,13 +154,14 @@
self.fail("Managed to register an invalid namespace")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testFailCreateAfterSwitch(self):
resources = []
def callback(req, res):
resources.append(res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusive1 = manager.acquireResource(
"failAfterSwitch", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
@@ -159,30 +170,35 @@
self.assertTrue(sharedReq1.canceled())
self.assertEquals(resources[0], None)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testReregisterNamespace(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.assertRaises((ValueError, KeyError), manager.registerNamespace,
"storage", rm.SimpleResourceFactory())
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceSwitchLockTypeFail(self):
self.testResourceLockSwitch("switchfail")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestInvalidResource(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.assertRaises(ValueError, manager.acquireResource,
"storage", "DOT.DOT", rm.SHARED)
self.assertRaises(ValueError, manager.acquireResource,
"DOT.DOT", "resource", rm.SHARED)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testReleaseInvalidResource(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.assertRaises(ValueError, manager.releaseResource,
"DONT_EXIST", "resource")
self.assertRaises(ValueError, manager.releaseResource, "storage",
"DOT")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceWrapper(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
s = StringIO
with manager.acquireResource(
"string", "test",
@@ -192,8 +208,9 @@
continue
self.assertTrue(hasattr(resource, attr))
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testAccessAttributeNotExposedByWrapper(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
with manager.acquireResource(
"string", "test",
rm.EXCLUSIVE) as resource:
@@ -208,13 +225,14 @@
self.fail("Managed to access an attribute not exposed by wrapper")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testAccessAttributeNotExposedByRequestRef(self):
resources = []
def callback(req, res):
resources.insert(0, res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
req = manager.registerResource(
"string", "resource", rm.SHARED, callback)
try:
@@ -230,13 +248,14 @@
self.fail("Managed to access an attribute not exposed by wrapper")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestRefStr(self):
resources = []
def callback(req, res):
resources.insert(0, res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
req = manager.registerResource(
"string", "resource", rm.SHARED, callback)
try:
@@ -245,6 +264,7 @@
req.wait()
resources[0].release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestRefCmp(self):
resources = []
requests = []
@@ -253,7 +273,7 @@
resources.insert(0, res)
requests.insert(0, req)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
req1 = manager.registerResource(
"string", "resource", rm.EXCLUSIVE, callback)
req2 = manager.registerResource(
@@ -276,13 +296,14 @@
self.assertNotEqual(req1, "STUFF")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestRecancel(self):
resources = []
def callback(req, res):
resources.insert(0, res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
blocker = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
req = manager.registerResource(
"string", "resource", rm.EXCLUSIVE, callback)
@@ -293,6 +314,7 @@
blocker.release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestRegrant(self):
resources = []
@@ -303,11 +325,12 @@
req.grant()
self.assertRaises(rm.RequestAlreadyProcessedError, req.grant)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestWithBadCallbackOnCancel(self):
def callback(req, res):
raise Exception("BUY MILK!")
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
blocker = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
req = manager.registerResource(
"string", "resource", rm.EXCLUSIVE, callback)
@@ -316,29 +339,32 @@
blocker.release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRequestWithBadCallbackOnGrant(self):
def callback(req, res):
res.release()
raise Exception("BUY MILK!")
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
req = manager.registerResource(
"string", "resource", rm.EXCLUSIVE, callback)
req.wait()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testRereleaseResource(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
res = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
res.release()
res.release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testCancelExclusiveBetweenShared(self):
resources = []
def callback(req, res):
resources.insert(0, res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusive1 = manager.acquireResource(
"string", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
@@ -377,16 +403,18 @@
while len(resources) > 0:
resources.pop().release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testCrashOnSwitch(self):
self.testResourceLockSwitch("crashy")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceLockSwitch(self, namespace="string"):
resources = []
def callback(req, res):
resources.insert(0, res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusive1 = manager.acquireResource(
namespace, "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
@@ -422,8 +450,9 @@
hash(exclusive3)
hash(sharedReq3)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceAcquireTimeout(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusive1 = manager.acquireResource(
"string", "resource", rm.EXCLUSIVE)
self.assertRaises(rm.RequestTimedOutError,
@@ -431,13 +460,15 @@
rm.EXCLUSIVE, 1)
exclusive1.release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceAcquireInvalidTimeout(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.assertRaises(TypeError, manager.acquireResource, "string",
"resource", rm.EXCLUSIVE, "A")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceInvalidation(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
resource = manager.acquireResource("string", "test",
rm.EXCLUSIVE)
try:
@@ -447,12 +478,14 @@
resource.release()
self.assertRaises(Exception, resource.write, "test")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testForceRegisterNamespace(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
manager.registerNamespace("storage", rm.SimpleResourceFactory(), True)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceAutorelease(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.log.info("Acquiring resource", extra={'resource': "bob"})
res = manager.acquireResource("storage", "resource", rm.SHARED)
resProxy = proxy(res)
@@ -471,16 +504,18 @@
break
time.sleep(1)
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testAcquireResourceShared(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
res1 = manager.acquireResource("storage", "resource", rm.SHARED)
res2 = manager.acquireResource("storage", "resource", rm.SHARED, 10)
res1.release()
res2.release()
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testResourceStatuses(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
self.assertEquals(manager.getResourceStatus("storage", "resource"),
rm.LockState.free)
exclusive1 = manager.acquireResource(
@@ -500,8 +535,9 @@
self.fail("Managed to get status on a non existing resource")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testAcquireNonExistingResource(self):
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
try:
manager.acquireResource("null", "resource", rm.EXCLUSIVE)
except KeyError:
@@ -509,13 +545,14 @@
self.fail("Managed to get status on a non existing resource")
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testAcquireResourceExclusive(self):
resources = []
def callback(req, res):
resources.append(res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusive1 = manager.acquireResource(
"storage", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
@@ -550,13 +587,14 @@
self.assertTrue(exclusiveReq2.granted())
resources.pop().release() # exclusiveReq 2
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
def testCancelRequest(self):
resources = []
def callback(req, res):
resources.append(res)
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
exclusiveReq1 = manager.registerResource(
"storage", "resource", rm.EXCLUSIVE, callback)
exclusiveReq2 = manager.registerResource(
@@ -577,6 +615,7 @@
self.assertTrue(exclusiveReq3.granted())
resources.pop().release() # exclusiveReq 3
+ @MonkeyPatch(rm.ResourceManager, "_instance", manager())
@slowtest
@stresstest
def testStressTest(self):
@@ -612,7 +651,7 @@
res.release()
threadLimit.release()
- manager = self.manager
+ manager = rm.ResourceManager.getInstance()
rnd = Random()
lockTranslator = [rm.EXCLUSIVE, rm.SHARED]
@@ -662,22 +701,6 @@
for t in releaseThreads:
t.join()
-
- def tearDown(self):
- manager = self.manager
-
- manager.unregisterNamespace("null")
-
- try:
- manager.unregisterNamespace("storage")
- manager.unregisterNamespace("string")
- manager.unregisterNamespace("error")
- manager.unregisterNamespace("switchfail")
- manager.unregisterNamespace("crashy")
- manager.unregisterNamespace("failAfterSwitch")
- except:
- rm.ResourceManager._instance = None
- raise
@expandPermutations
--
To view, visit https://gerrit.ovirt.org/65004
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib4a68ce670aab78b15d11dafa155b58beb6265ec
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: tests: Streamline resourceManager import
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: tests: Streamline resourceManager import
......................................................................
tests: Streamline resourceManager import
In the application we typically import resourceManager as rm. Do the
same in the tests to streamline them and make the usage of this module
the same in both the tests and the real code.
Change-Id: Ie06a26c4fcf6a3f4d81339df1633d28d87b99520
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/storage_resourcemanager_test.py
1 file changed, 86 insertions(+), 107 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/03/65003/1
diff --git a/tests/storage_resourcemanager_test.py b/tests/storage_resourcemanager_test.py
index f616e09..88b92fd 100644
--- a/tests/storage_resourcemanager_test.py
+++ b/tests/storage_resourcemanager_test.py
@@ -26,14 +26,15 @@
import types
from resource import getrlimit, RLIMIT_NPROC
-import storage.resourceManager as resourceManager
+from storage import resourceManager as rm
+
from storagefakelib import FakeResourceManager
from testlib import expandPermutations, permutations
from testlib import VdsmTestCase as TestCaseBase
from testValidation import slowtest, stresstest
-class NullResourceFactory(resourceManager.SimpleResourceFactory):
+class NullResourceFactory(rm.SimpleResourceFactory):
"""
A resource factory that has no resources. Used for testing.
"""
@@ -41,7 +42,7 @@
return False
-class ErrorResourceFactory(resourceManager.SimpleResourceFactory):
+class ErrorResourceFactory(rm.SimpleResourceFactory):
"""
A resource factory that has no resources. Used for testing.
"""
@@ -49,7 +50,7 @@
raise Exception("EPIC FAIL!! LOLZ!!")
-class StringResourceFactory(resourceManager.SimpleResourceFactory):
+class StringResourceFactory(rm.SimpleResourceFactory):
def createResource(self, name, lockType):
s = StringIO("%s:%s" % (name, lockType))
s.seek(0)
@@ -66,7 +67,7 @@
return s
-class SwitchFailFactory(resourceManager.SimpleResourceFactory):
+class SwitchFailFactory(rm.SimpleResourceFactory):
def createResource(self, name, lockType):
s = StringIO("%s:%s" % (name, lockType))
s.seek(0)
@@ -78,7 +79,7 @@
return s
-class CrashOnCloseFactory(resourceManager.SimpleResourceFactory):
+class CrashOnCloseFactory(rm.SimpleResourceFactory):
def createResource(self, name, lockType):
s = StringIO("%s:%s" % (name, lockType))
s.seek(0)
@@ -90,7 +91,7 @@
return s
-class FailAfterSwitchFactory(resourceManager.SimpleResourceFactory):
+class FailAfterSwitchFactory(rm.SimpleResourceFactory):
def __init__(self):
self.fail = False
@@ -113,9 +114,8 @@
class ResourceManagerTests(TestCaseBase):
def setUp(self):
- manager = self.manager = resourceManager.ResourceManager.getInstance()
- manager.registerNamespace("storage",
- resourceManager.SimpleResourceFactory())
+ manager = self.manager = rm.ResourceManager.getInstance()
+ manager.registerNamespace("storage", rm.SimpleResourceFactory())
manager.registerNamespace("null", NullResourceFactory())
manager.registerNamespace("string", StringResourceFactory())
manager.registerNamespace("error", ErrorResourceFactory())
@@ -125,21 +125,20 @@
def testErrorInFactory(self):
manager = self.manager
- req = manager.registerResource("error", "resource",
- resourceManager.EXCLUSIVE,
+ req = manager.registerResource("error", "resource", rm.EXCLUSIVE,
lambda req, res: 1)
self.assertTrue(req.canceled())
def testSingleton(self):
- a = resourceManager.ResourceManager.getInstance()
- b = resourceManager.ResourceManager.getInstance()
+ a = rm.ResourceManager.getInstance()
+ b = rm.ResourceManager.getInstance()
self.assertEquals(id(a), id(b))
def testRegisterInvalidNamespace(self):
manager = self.manager
try:
manager.registerNamespace("I.HEART.DOTS",
- resourceManager.SimpleResourceFactory())
+ rm.SimpleResourceFactory())
except ValueError:
return
@@ -153,9 +152,9 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "failAfterSwitch", "resource", resourceManager.EXCLUSIVE)
+ "failAfterSwitch", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "failAfterSwitch", "resource", resourceManager.SHARED, callback)
+ "failAfterSwitch", "resource", rm.SHARED, callback)
exclusive1.release()
self.assertTrue(sharedReq1.canceled())
self.assertEquals(resources[0], None)
@@ -163,7 +162,7 @@
def testReregisterNamespace(self):
manager = self.manager
self.assertRaises((ValueError, KeyError), manager.registerNamespace,
- "storage", resourceManager.SimpleResourceFactory())
+ "storage", rm.SimpleResourceFactory())
def testResourceSwitchLockTypeFail(self):
self.testResourceLockSwitch("switchfail")
@@ -171,9 +170,9 @@
def testRequestInvalidResource(self):
manager = self.manager
self.assertRaises(ValueError, manager.acquireResource,
- "storage", "DOT.DOT", resourceManager.SHARED)
+ "storage", "DOT.DOT", rm.SHARED)
self.assertRaises(ValueError, manager.acquireResource,
- "DOT.DOT", "resource", resourceManager.SHARED)
+ "DOT.DOT", "resource", rm.SHARED)
def testReleaseInvalidResource(self):
manager = self.manager
@@ -187,7 +186,7 @@
s = StringIO
with manager.acquireResource(
"string", "test",
- resourceManager.EXCLUSIVE) as resource:
+ rm.EXCLUSIVE) as resource:
for attr in dir(s):
if attr == "close":
continue
@@ -197,7 +196,7 @@
manager = self.manager
with manager.acquireResource(
"string", "test",
- resourceManager.EXCLUSIVE) as resource:
+ rm.EXCLUSIVE) as resource:
try:
resource.THERE_IS_NO_WAY_I_EXIST
except AttributeError:
@@ -217,7 +216,7 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
try:
req.grant()
except AttributeError:
@@ -239,7 +238,7 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
try:
str(req)
finally:
@@ -256,9 +255,9 @@
manager = self.manager
req1 = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
req2 = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
self.assertNotEqual(req1, req2)
self.assertEqual(req1, req1)
@@ -284,15 +283,13 @@
resources.insert(0, res)
manager = self.manager
- blocker = manager.acquireResource("string", "resource",
- resourceManager.EXCLUSIVE)
+ blocker = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
req = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
req.cancel()
- self.assertRaises(resourceManager.RequestAlreadyProcessedError,
- req.cancel)
+ self.assertRaises(rm.RequestAlreadyProcessedError, req.cancel)
blocker.release()
@@ -302,21 +299,18 @@
def callback(req, res):
resources.insert(0, res)
- req = resourceManager.Request(
- "namespace", "name", resourceManager.EXCLUSIVE, callback)
+ req = rm.Request("namespace", "name", rm.EXCLUSIVE, callback)
req.grant()
- self.assertRaises(resourceManager.RequestAlreadyProcessedError,
- req.grant)
+ self.assertRaises(rm.RequestAlreadyProcessedError, req.grant)
def testRequestWithBadCallbackOnCancel(self):
def callback(req, res):
raise Exception("BUY MILK!")
manager = self.manager
- blocker = manager.acquireResource("string", "resource",
- resourceManager.EXCLUSIVE)
+ blocker = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
req = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
req.cancel()
@@ -329,13 +323,12 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
req.wait()
def testRereleaseResource(self):
manager = self.manager
- res = manager.acquireResource("string", "resource",
- resourceManager.EXCLUSIVE)
+ res = manager.acquireResource("string", "resource", rm.EXCLUSIVE)
res.release()
res.release()
@@ -347,17 +340,17 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "string", "resource", resourceManager.EXCLUSIVE)
+ "string", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
sharedReq2 = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
exclusiveReq1 = manager.registerResource(
- "string", "resource", resourceManager.EXCLUSIVE, callback)
+ "string", "resource", rm.EXCLUSIVE, callback)
sharedReq3 = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
sharedReq4 = manager.registerResource(
- "string", "resource", resourceManager.SHARED, callback)
+ "string", "resource", rm.SHARED, callback)
self.assertFalse(sharedReq1.granted())
self.assertFalse(sharedReq2.granted())
@@ -395,19 +388,17 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- namespace, "resource", resourceManager.EXCLUSIVE)
+ namespace, "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- namespace, "resource", resourceManager.SHARED, callback)
+ namespace, "resource", rm.SHARED, callback)
sharedReq2 = manager.registerResource(
- namespace, "resource", resourceManager.SHARED, callback)
+ namespace, "resource", rm.SHARED, callback)
exclusive2 = manager.registerResource(
- namespace, "resource", resourceManager.EXCLUSIVE,
- callback)
+ namespace, "resource", rm.EXCLUSIVE, callback)
exclusive3 = manager.registerResource(
- namespace, "resource", resourceManager.EXCLUSIVE,
- callback)
+ namespace, "resource", rm.EXCLUSIVE, callback)
sharedReq3 = manager.registerResource(
- namespace, "resource", resourceManager.SHARED, callback)
+ namespace, "resource", rm.SHARED, callback)
self.assertEquals(exclusive1.read(), "resource:exclusive")
exclusive1.release()
@@ -434,21 +425,21 @@
def testResourceAcquireTimeout(self):
manager = self.manager
exclusive1 = manager.acquireResource(
- "string", "resource", resourceManager.EXCLUSIVE)
- self.assertRaises(resourceManager.RequestTimedOutError,
+ "string", "resource", rm.EXCLUSIVE)
+ self.assertRaises(rm.RequestTimedOutError,
manager.acquireResource, "string", "resource",
- resourceManager.EXCLUSIVE, 1)
+ rm.EXCLUSIVE, 1)
exclusive1.release()
def testResourceAcquireInvalidTimeout(self):
manager = self.manager
self.assertRaises(TypeError, manager.acquireResource, "string",
- "resource", resourceManager.EXCLUSIVE, "A")
+ "resource", rm.EXCLUSIVE, "A")
def testResourceInvalidation(self):
manager = self.manager
resource = manager.acquireResource("string", "test",
- resourceManager.EXCLUSIVE)
+ rm.EXCLUSIVE)
try:
resource.write("dsada")
except:
@@ -458,14 +449,12 @@
def testForceRegisterNamespace(self):
manager = self.manager
- manager.registerNamespace(
- "storage", resourceManager.SimpleResourceFactory(), True)
+ manager.registerNamespace("storage", rm.SimpleResourceFactory(), True)
def testResourceAutorelease(self):
manager = self.manager
self.log.info("Acquiring resource", extra={'resource': "bob"})
- res = manager.acquireResource("storage", "resource",
- resourceManager.SHARED)
+ res = manager.acquireResource("storage", "resource", rm.SHARED)
resProxy = proxy(res)
res = None
# wait for object to die
@@ -478,16 +467,14 @@
self.log.info("Waiting for autoclean")
while True:
resStatus = manager.getResourceStatus("storage", "resource")
- if resStatus == resourceManager.LockState.free:
+ if resStatus == rm.LockState.free:
break
time.sleep(1)
def testAcquireResourceShared(self):
manager = self.manager
- res1 = manager.acquireResource("storage", "resource",
- resourceManager.SHARED)
- res2 = manager.acquireResource("storage", "resource",
- resourceManager.SHARED, 10)
+ res1 = manager.acquireResource("storage", "resource", rm.SHARED)
+ res2 = manager.acquireResource("storage", "resource", rm.SHARED, 10)
res1.release()
res2.release()
@@ -495,20 +482,19 @@
def testResourceStatuses(self):
manager = self.manager
self.assertEquals(manager.getResourceStatus("storage", "resource"),
- resourceManager.LockState.free)
+ rm.LockState.free)
exclusive1 = manager.acquireResource(
- "storage", "resource", resourceManager.EXCLUSIVE)
+ "storage", "resource", rm.EXCLUSIVE)
self.assertEquals(manager.getResourceStatus("storage", "resource"),
- resourceManager.LockState.locked)
+ rm.LockState.locked)
exclusive1.release()
- shared1 = manager.acquireResource("storage", "resource",
- resourceManager.SHARED)
+ shared1 = manager.acquireResource("storage", "resource", rm.SHARED)
self.assertEquals(manager.getResourceStatus("storage", "resource"),
- resourceManager.LockState.shared)
+ rm.LockState.shared)
shared1.release()
try:
self.assertEquals(manager.getResourceStatus("null", "resource"),
- resourceManager.LockState.free)
+ rm.LockState.free)
except KeyError:
return
@@ -517,8 +503,7 @@
def testAcquireNonExistingResource(self):
manager = self.manager
try:
- manager.acquireResource("null", "resource",
- resourceManager.EXCLUSIVE)
+ manager.acquireResource("null", "resource", rm.EXCLUSIVE)
except KeyError:
return
@@ -532,17 +517,15 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "storage", "resource", resourceManager.EXCLUSIVE)
+ "storage", "resource", rm.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "storage", "resource", resourceManager.SHARED, callback)
+ "storage", "resource", rm.SHARED, callback)
sharedReq2 = manager.registerResource(
- "storage", "resource", resourceManager.SHARED, callback)
+ "storage", "resource", rm.SHARED, callback)
exclusiveReq1 = manager.registerResource(
- "storage", "resource", resourceManager.EXCLUSIVE,
- callback)
+ "storage", "resource", rm.EXCLUSIVE, callback)
exclusiveReq2 = manager.registerResource(
- "storage", "resource", resourceManager.EXCLUSIVE,
- callback)
+ "storage", "resource", rm.EXCLUSIVE, callback)
self.assertFalse(sharedReq1.granted())
self.assertFalse(sharedReq2.granted())
@@ -575,14 +558,11 @@
manager = self.manager
exclusiveReq1 = manager.registerResource(
- "storage", "resource", resourceManager.EXCLUSIVE,
- callback)
+ "storage", "resource", rm.EXCLUSIVE, callback)
exclusiveReq2 = manager.registerResource(
- "storage", "resource", resourceManager.EXCLUSIVE,
- callback)
+ "storage", "resource", rm.EXCLUSIVE, callback)
exclusiveReq3 = manager.registerResource(
- "storage", "resource", resourceManager.EXCLUSIVE,
- callback)
+ "storage", "resource", rm.EXCLUSIVE, callback)
self.assertTrue(exclusiveReq1.granted())
self.assertFalse(exclusiveReq2.canceled())
@@ -624,7 +604,7 @@
threadLimit.release()
def releaseShared(req, res):
- self.assertEquals(req.lockType, resourceManager.SHARED)
+ self.assertEquals(req.lockType, rm.SHARED)
res.release()
threadLimit.release()
@@ -635,7 +615,7 @@
manager = self.manager
rnd = Random()
- lockTranslator = [resourceManager.EXCLUSIVE, resourceManager.SHARED]
+ lockTranslator = [rm.EXCLUSIVE, rm.SHARED]
threads = []
for i in range(procLimit / 2):
@@ -696,7 +676,7 @@
manager.unregisterNamespace("crashy")
manager.unregisterNamespace("failAfterSwitch")
except:
- resourceManager.ResourceManager._instance = None
+ rm.ResourceManager._instance = None
raise
@@ -704,7 +684,7 @@
class ResourceManagerLockTest(TestCaseBase):
def test_properties(self):
- a = resourceManager.ResourceManagerLock('ns', 'name', 'mode')
+ a = rm.ResourceManagerLock('ns', 'name', 'mode')
self.assertEqual('ns', a.ns)
self.assertEqual('name', a.name)
self.assertEqual('mode', a.mode)
@@ -714,31 +694,30 @@
(('nsA', 'nameA', 'mode'), ('nsA', 'nameB', 'mode')),
))
def test_less_than(self, a, b):
- b = resourceManager.ResourceManagerLock(*b)
- a = resourceManager.ResourceManagerLock(*a)
+ b = rm.ResourceManagerLock(*b)
+ a = rm.ResourceManagerLock(*a)
self.assertLess(a, b)
def test_equality(self):
- a = resourceManager.ResourceManagerLock('ns', 'name', 'mode')
- b = resourceManager.ResourceManagerLock('ns', 'name', 'mode')
+ a = rm.ResourceManagerLock('ns', 'name', 'mode')
+ b = rm.ResourceManagerLock('ns', 'name', 'mode')
self.assertEqual(a, b)
def test_mode_used_for_equality(self):
- a = resourceManager.ResourceManagerLock('nsA', 'nameA', 'modeA')
- b = resourceManager.ResourceManagerLock('nsA', 'nameA', 'modeB')
+ a = rm.ResourceManagerLock('nsA', 'nameA', 'modeA')
+ b = rm.ResourceManagerLock('nsA', 'nameA', 'modeB')
self.assertNotEqual(a, b)
def test_mode_ignored_for_sorting(self):
- a = resourceManager.ResourceManagerLock('nsA', 'nameA', 'modeA')
- b = resourceManager.ResourceManagerLock('nsA', 'nameA', 'modeB')
+ a = rm.ResourceManagerLock('nsA', 'nameA', 'modeA')
+ b = rm.ResourceManagerLock('nsA', 'nameA', 'modeB')
self.assertFalse(a < b)
self.assertFalse(b < a)
def test_acquire_release(self):
fake_rm = FakeResourceManager()
- lock = resourceManager.ResourceManagerLock(
- 'ns_A', 'name_A', resourceManager.SHARED)
+ lock = rm.ResourceManagerLock('ns_A', 'name_A', rm.SHARED)
lock._rm = fake_rm
expected = []
lock.acquire()
@@ -750,8 +729,8 @@
self.assertEqual(expected, fake_rm.__calls__)
def test_repr(self):
- mode = resourceManager.SHARED
- lock = resourceManager.ResourceManagerLock('ns', 'name', mode)
+ mode = rm.SHARED
+ lock = rm.ResourceManagerLock('ns', 'name', mode)
lock_string = str(lock)
self.assertIn("ResourceManagerLock", lock_string)
self.assertIn("ns=ns", lock_string)
--
To view, visit https://gerrit.ovirt.org/65003
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie06a26c4fcf6a3f4d81339df1633d28d87b99520
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: resourceManager: Remove unused listNamespaces
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: resourceManager: Remove unused listNamespaces
......................................................................
resourceManager: Remove unused listNamespaces
Its only usage was a test of this method itself, so we don't really
need it; removing it makes the ResourceManager interface smaller. This
will make further refactoring easier.
Change-Id: Ie284d49eaf63ce767375288bcab67f10001e9f14
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/storage_resourcemanager_test.py
M vdsm/storage/resourceManager.py
2 files changed, 0 insertions(+), 9 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/02/65002/1
diff --git a/tests/storage_resourcemanager_test.py b/tests/storage_resourcemanager_test.py
index 3ced8c3..f616e09 100644
--- a/tests/storage_resourcemanager_test.py
+++ b/tests/storage_resourcemanager_test.py
@@ -461,11 +461,6 @@
manager.registerNamespace(
"storage", resourceManager.SimpleResourceFactory(), True)
- def testListNamespaces(self):
- manager = self.manager
- namespaces = manager.listNamespaces()
- self.assertEquals(len(namespaces), 7)
-
def testResourceAutorelease(self):
manager = self.manager
self.log.info("Acquiring resource", extra={'resource': "bob"})
diff --git a/vdsm/storage/resourceManager.py b/vdsm/storage/resourceManager.py
index 2949d10..338a885 100644
--- a/vdsm/storage/resourceManager.py
+++ b/vdsm/storage/resourceManager.py
@@ -361,10 +361,6 @@
return cls._instance
- def listNamespaces(self):
- with self._syncRoot.shared:
- return self._namespaces.keys()
-
def registerNamespace(self, namespace, factory, force=False):
if not self._namespaceValidator.match(namespace):
raise ValueError("Illegal namespace '%s'" % namespace)
--
To view, visit https://gerrit.ovirt.org/65002
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie284d49eaf63ce767375288bcab67f10001e9f14
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: blockSD: Storage domain life cycle management
by automation@ovirt.org
gerrit-hooks has posted comments on this change.
Change subject: blockSD: Storage domain life cycle management
......................................................................
Patch Set 7:
* #1331978::Update tracker: OK
* Check Bug-Url::OK
* Check Public Bug::#1331978::OK, public bug
* Check Product::#1331978::OK, Correct classification oVirt
* Check TM::SKIP, not in a monitored branch (ovirt-3.6 ovirt-4.0)
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.6', 'ovirt-4.0'])
--
To view, visit https://gerrit.ovirt.org/56876
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I7227bb43c2e1ee67a6239956aae48173a27f566e
Gerrit-PatchSet: 7
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: Adam Litke <alitke(a)redhat.com>
Gerrit-Reviewer: Ala Hino <ahino(a)redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini(a)redhat.com>
Gerrit-Reviewer: Freddy Rolland <frolland(a)redhat.com>
Gerrit-Reviewer: Idan Shaby <ishaby(a)redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <nsoffer(a)redhat.com>
Gerrit-Reviewer: Simone Tiraboschi <stirabos(a)redhat.com>
Gerrit-Reviewer: Tal Nisan <tnisan(a)redhat.com>
Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
Gerrit-HasComments: No
Change in vdsm[master]: resourceManager: Move ResourceInfo class to module
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: resourceManager: Move ResourceInfo class to module
......................................................................
resourceManager: Move ResourceInfo class to module
Simplify the ResourceManager class by moving the nested ResourceInfo class to
the module. This will make further refactoring easier.
Change-Id: I21170d863d19c981995aca6794a1a346e1fbe31b
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/resourceManager.py
1 file changed, 15 insertions(+), 15 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/01/65001/1
diff --git a/vdsm/storage/resourceManager.py b/vdsm/storage/resourceManager.py
index a85fc80..2949d10 100644
--- a/vdsm/storage/resourceManager.py
+++ b/vdsm/storage/resourceManager.py
@@ -345,19 +345,6 @@
_instance = None
_singletonLock = threading.Lock()
- class ResourceInfo(object):
- """
- Resource struct
- """
- def __init__(self, realObj, namespace, name):
- self.queue = []
- self.activeUsers = 0
- self.currentLock = None
- self.realObj = realObj
- self.namespace = namespace
- self.name = name
- self.fullName = "%s.%s" % (namespace, name)
-
def __init__(self):
self._syncRoot = rwlock.RWLock()
self._namespaces = {}
@@ -575,8 +562,7 @@
contextCleanup.defer(request.cancel)
return RequestRef(request)
- resource = resources[name] = ResourceManager.ResourceInfo(
- obj, namespace, name)
+ resource = resources[name] = ResourceInfo(obj, namespace, name)
resource.currentLock = request.lockType
resource.activeUsers += 1
@@ -713,6 +699,20 @@
self.factory = factory
+class ResourceInfo(object):
+ """
+ Resource struct
+ """
+ def __init__(self, realObj, namespace, name):
+ self.queue = []
+ self.activeUsers = 0
+ self.currentLock = None
+ self.realObj = realObj
+ self.namespace = namespace
+ self.name = name
+ self.fullName = "%s.%s" % (namespace, name)
+
+
class Owner(object):
log = logging.getLogger('storage.ResourceManager.Owner')
--
To view, visit https://gerrit.ovirt.org/65001
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I21170d863d19c981995aca6794a1a346e1fbe31b
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: resourceManager: Move Namespace class to module
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: resourceManager: Move Namespace class to module
......................................................................
resourceManager: Move Namespace class to module
Simplify the ResourceManager class by moving the nested Namespace class
to the module. This makes further refactoring easier.
Change-Id: I82240985fe156345fee0c08a46de5251e9d65be8
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/resourceManager.py
1 file changed, 11 insertions(+), 10 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/00/65000/1
diff --git a/vdsm/storage/resourceManager.py b/vdsm/storage/resourceManager.py
index 9060c44..a85fc80 100644
--- a/vdsm/storage/resourceManager.py
+++ b/vdsm/storage/resourceManager.py
@@ -358,15 +358,6 @@
self.name = name
self.fullName = "%s.%s" % (namespace, name)
- class Namespace(object):
- """
- Namespace struct
- """
- def __init__(self, factory):
- self.resources = {}
- self.lock = threading.Lock() # rwlock.RWLock()
- self.factory = factory
-
def __init__(self):
self._syncRoot = rwlock.RWLock()
self._namespaces = {}
@@ -404,7 +395,7 @@
self._log.debug("Registering namespace '%s'", namespace)
- self._namespaces[namespace] = ResourceManager.Namespace(factory)
+ self._namespaces[namespace] = Namespace(factory)
def unregisterNamespace(self, namespace):
with self._syncRoot.exclusive:
@@ -712,6 +703,16 @@
resource.activeUsers)
+class Namespace(object):
+ """
+ Namespace struct
+ """
+ def __init__(self, factory):
+ self.resources = {}
+ self.lock = threading.Lock() # rwlock.RWLock()
+ self.factory = factory
+
+
class Owner(object):
log = logging.getLogger('storage.ResourceManager.Owner')
--
To view, visit https://gerrit.ovirt.org/65000
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I82240985fe156345fee0c08a46de5251e9d65be8
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>
Change in vdsm[master]: resourceManager: Flatten LockType constants
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: resourceManager: Flatten LockType constants
......................................................................
resourceManager: Flatten LockType constants
Replace the LockType class with flat constants, streamlining client
code.
Change-Id: Id78e07814f21a1dcf33efa2afe400eff041e3001
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M tests/resourceManagerTests.py
M tests/storage_sdm_copy_data_test.py
M tests/storage_sdm_create_volume_test.py
M tests/storage_volume_test.py
M vdsm/storage/blockVolume.py
M vdsm/storage/hsm.py
M vdsm/storage/image.py
M vdsm/storage/resourceFactories.py
M vdsm/storage/resourceManager.py
M vdsm/storage/sdm/api/copy_data.py
M vdsm/storage/sdm/api/create_volume.py
M vdsm/storage/sp.py
M vdsm/storage/task.py
M vdsm/storage/volume.py
14 files changed, 119 insertions(+), 138 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/28/63628/1
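(A minimal before/after sketch of the flattening described above; the string values are an assumption, the real constants live in vdsm/storage/resourceManager.py:

# Before: constants nested in a class
class LockType(object):
    shared = "shared"
    exclusive = "exclusive"

# After: flat module-level constants
SHARED = "shared"
EXCLUSIVE = "exclusive"

Call sites then shorten from resourceManager.LockType.exclusive to rm.EXCLUSIVE, as the diff below shows.)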
diff --git a/tests/resourceManagerTests.py b/tests/resourceManagerTests.py
index bc3002c..91f0b20 100644
--- a/tests/resourceManagerTests.py
+++ b/tests/resourceManagerTests.py
@@ -126,7 +126,7 @@
def testErrorInFactory(self):
manager = self.manager
req = manager.registerResource("error", "resource",
- resourceManager.LockType.exclusive,
+ resourceManager.EXCLUSIVE,
lambda req, res: 1)
self.assertTrue(req.canceled())
@@ -153,10 +153,9 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "failAfterSwitch", "resource", resourceManager.LockType.exclusive)
+ "failAfterSwitch", "resource", resourceManager.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "failAfterSwitch", "resource", resourceManager.LockType.shared,
- callback)
+ "failAfterSwitch", "resource", resourceManager.SHARED, callback)
exclusive1.release()
self.assertTrue(sharedReq1.canceled())
self.assertEquals(resources[0], None)
@@ -172,11 +171,9 @@
def testRequestInvalidResource(self):
manager = self.manager
self.assertRaises(ValueError, manager.acquireResource,
- "storage", "DOT.DOT",
- resourceManager.LockType.shared)
+ "storage", "DOT.DOT", resourceManager.SHARED)
self.assertRaises(ValueError, manager.acquireResource,
- "DOT.DOT", "resource",
- resourceManager.LockType.shared)
+ "DOT.DOT", "resource", resourceManager.SHARED)
def testReleaseInvalidResource(self):
manager = self.manager
@@ -190,7 +187,7 @@
s = StringIO
with manager.acquireResource(
"string", "test",
- resourceManager.LockType.exclusive) as resource:
+ resourceManager.EXCLUSIVE) as resource:
for attr in dir(s):
if attr == "close":
continue
@@ -200,7 +197,7 @@
manager = self.manager
with manager.acquireResource(
"string", "test",
- resourceManager.LockType.exclusive) as resource:
+ resourceManager.EXCLUSIVE) as resource:
try:
resource.THERE_IS_NO_WAY_I_EXIST
except AttributeError:
@@ -220,7 +217,7 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
try:
req.grant()
except AttributeError:
@@ -242,7 +239,7 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
try:
str(req)
finally:
@@ -259,9 +256,9 @@
manager = self.manager
req1 = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
req2 = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
self.assertNotEqual(req1, req2)
self.assertEqual(req1, req1)
@@ -288,9 +285,9 @@
manager = self.manager
blocker = manager.acquireResource("string", "resource",
- resourceManager.LockType.exclusive)
+ resourceManager.EXCLUSIVE)
req = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
req.cancel()
@@ -306,7 +303,7 @@
resources.insert(0, res)
req = resourceManager.Request(
- "namespace", "name", resourceManager.LockType.exclusive, callback)
+ "namespace", "name", resourceManager.EXCLUSIVE, callback)
req.grant()
self.assertRaises(resourceManager.RequestAlreadyProcessedError,
req.grant)
@@ -317,9 +314,9 @@
manager = self.manager
blocker = manager.acquireResource("string", "resource",
- resourceManager.LockType.exclusive)
+ resourceManager.EXCLUSIVE)
req = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
req.cancel()
@@ -332,13 +329,13 @@
manager = self.manager
req = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
req.wait()
def testRereleaseResource(self):
manager = self.manager
res = manager.acquireResource("string", "resource",
- resourceManager.LockType.exclusive)
+ resourceManager.EXCLUSIVE)
res.release()
res.release()
@@ -350,17 +347,17 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "string", "resource", resourceManager.LockType.exclusive)
+ "string", "resource", resourceManager.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
sharedReq2 = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
exclusiveReq1 = manager.registerResource(
- "string", "resource", resourceManager.LockType.exclusive, callback)
+ "string", "resource", resourceManager.EXCLUSIVE, callback)
sharedReq3 = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
sharedReq4 = manager.registerResource(
- "string", "resource", resourceManager.LockType.shared, callback)
+ "string", "resource", resourceManager.SHARED, callback)
self.assertFalse(sharedReq1.granted())
self.assertFalse(sharedReq2.granted())
@@ -398,19 +395,19 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- namespace, "resource", resourceManager.LockType.exclusive)
+ namespace, "resource", resourceManager.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- namespace, "resource", resourceManager.LockType.shared, callback)
+ namespace, "resource", resourceManager.SHARED, callback)
sharedReq2 = manager.registerResource(
- namespace, "resource", resourceManager.LockType.shared, callback)
+ namespace, "resource", resourceManager.SHARED, callback)
exclusive2 = manager.registerResource(
- namespace, "resource", resourceManager.LockType.exclusive,
+ namespace, "resource", resourceManager.EXCLUSIVE,
callback)
exclusive3 = manager.registerResource(
- namespace, "resource", resourceManager.LockType.exclusive,
+ namespace, "resource", resourceManager.EXCLUSIVE,
callback)
sharedReq3 = manager.registerResource(
- namespace, "resource", resourceManager.LockType.shared, callback)
+ namespace, "resource", resourceManager.SHARED, callback)
self.assertEquals(exclusive1.read(), "resource:exclusive")
exclusive1.release()
@@ -437,21 +434,21 @@
def testResourceAcquireTimeout(self):
manager = self.manager
exclusive1 = manager.acquireResource(
- "string", "resource", resourceManager.LockType.exclusive)
+ "string", "resource", resourceManager.EXCLUSIVE)
self.assertRaises(resourceManager.RequestTimedOutError,
manager.acquireResource, "string", "resource",
- resourceManager.LockType.exclusive, 1)
+ resourceManager.EXCLUSIVE, 1)
exclusive1.release()
def testResourceAcquireInvalidTimeout(self):
manager = self.manager
self.assertRaises(TypeError, manager.acquireResource, "string",
- "resource", resourceManager.LockType.exclusive, "A")
+ "resource", resourceManager.EXCLUSIVE, "A")
def testResourceInvalidation(self):
manager = self.manager
resource = manager.acquireResource("string", "test",
- resourceManager.LockType.exclusive)
+ resourceManager.EXCLUSIVE)
try:
resource.write("dsada")
except:
@@ -473,7 +470,7 @@
manager = self.manager
self.log.info("Acquiring resource", extra={'resource': "bob"})
res = manager.acquireResource("storage", "resource",
- resourceManager.LockType.shared)
+ resourceManager.SHARED)
resProxy = proxy(res)
res = None
# wait for object to die
@@ -493,9 +490,9 @@
def testAcquireResourceShared(self):
manager = self.manager
res1 = manager.acquireResource("storage", "resource",
- resourceManager.LockType.shared)
+ resourceManager.SHARED)
res2 = manager.acquireResource("storage", "resource",
- resourceManager.LockType.shared, 10)
+ resourceManager.SHARED, 10)
res1.release()
res2.release()
@@ -505,12 +502,12 @@
self.assertEquals(manager.getResourceStatus("storage", "resource"),
resourceManager.LockState.free)
exclusive1 = manager.acquireResource(
- "storage", "resource", resourceManager.LockType.exclusive)
+ "storage", "resource", resourceManager.EXCLUSIVE)
self.assertEquals(manager.getResourceStatus("storage", "resource"),
resourceManager.LockState.locked)
exclusive1.release()
shared1 = manager.acquireResource("storage", "resource",
- resourceManager.LockType.shared)
+ resourceManager.SHARED)
self.assertEquals(manager.getResourceStatus("storage", "resource"),
resourceManager.LockState.shared)
shared1.release()
@@ -526,7 +523,7 @@
manager = self.manager
try:
manager.acquireResource("null", "resource",
- resourceManager.LockType.exclusive)
+ resourceManager.EXCLUSIVE)
except KeyError:
return
@@ -540,16 +537,16 @@
manager = self.manager
exclusive1 = manager.acquireResource(
- "storage", "resource", resourceManager.LockType.exclusive)
+ "storage", "resource", resourceManager.EXCLUSIVE)
sharedReq1 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.shared, callback)
+ "storage", "resource", resourceManager.SHARED, callback)
sharedReq2 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.shared, callback)
+ "storage", "resource", resourceManager.SHARED, callback)
exclusiveReq1 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.exclusive,
+ "storage", "resource", resourceManager.EXCLUSIVE,
callback)
exclusiveReq2 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.exclusive,
+ "storage", "resource", resourceManager.EXCLUSIVE,
callback)
self.assertFalse(sharedReq1.granted())
@@ -583,13 +580,13 @@
manager = self.manager
exclusiveReq1 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.exclusive,
+ "storage", "resource", resourceManager.EXCLUSIVE,
callback)
exclusiveReq2 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.exclusive,
+ "storage", "resource", resourceManager.EXCLUSIVE,
callback)
exclusiveReq3 = manager.registerResource(
- "storage", "resource", resourceManager.LockType.exclusive,
+ "storage", "resource", resourceManager.EXCLUSIVE,
callback)
self.assertTrue(exclusiveReq1.granted())
@@ -632,7 +629,7 @@
threadLimit.release()
def releaseShared(req, res):
- self.assertEquals(req.lockType, resourceManager.LockType.shared)
+ self.assertEquals(req.lockType, resourceManager.SHARED)
res.release()
threadLimit.release()
@@ -643,8 +640,7 @@
manager = self.manager
rnd = Random()
- lockTranslator = [resourceManager.LockType.exclusive,
- resourceManager.LockType.shared]
+ lockTranslator = [resourceManager.EXCLUSIVE, resourceManager.SHARED]
threads = []
for i in range(procLimit / 2):
@@ -747,7 +743,7 @@
fake_rm = FakeResourceManager()
lock = resourceManager.ResourceManagerLock(
- 'ns_A', 'name_A', resourceManager.LockType.shared)
+ 'ns_A', 'name_A', resourceManager.SHARED)
lock._rm = fake_rm
expected = []
lock.acquire()
diff --git a/tests/storage_sdm_copy_data_test.py b/tests/storage_sdm_copy_data_test.py
index dbe6e20..1796544 100644
--- a/tests/storage_sdm_copy_data_test.py
+++ b/tests/storage_sdm_copy_data_test.py
@@ -94,15 +94,14 @@
ret = [
# Domain lock for each volume
resourceManager.ResourceManagerLock(
- sc.STORAGE, src_vol.sdUUID, resourceManager.LockType.shared),
+ sc.STORAGE, src_vol.sdUUID, resourceManager.SHARED),
resourceManager.ResourceManagerLock(
- sc.STORAGE, dst_vol.sdUUID, resourceManager.LockType.shared),
+ sc.STORAGE, dst_vol.sdUUID, resourceManager.SHARED),
# Image lock for each volume, exclusive for the destination
resourceManager.ResourceManagerLock(
- src_img_ns, src_vol.imgUUID, resourceManager.LockType.shared),
+ src_img_ns, src_vol.imgUUID, resourceManager.SHARED),
resourceManager.ResourceManagerLock(
- dst_img_ns, dst_vol.imgUUID,
- resourceManager.LockType.exclusive),
+ dst_img_ns, dst_vol.imgUUID, resourceManager.EXCLUSIVE),
# Volume lease for the destination volume
volume.VolumeLease(
0, dst_vol.sdUUID, dst_vol.imgUUID, dst_vol.volUUID)
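
Note: the expected lock set above encodes the copy job's locking policy, matching the inline comments: a shared domain lock per volume, a shared image lock on the source, an exclusive image lock on the destination, plus a volume lease on the destination. A minimal self-contained sketch of that policy; the Lock class and copy_locks function are hypothetical, not the vdsm API:

    SHARED = "shared"
    EXCLUSIVE = "exclusive"

    class Lock(object):
        def __init__(self, ns, name, mode):
            self.ns, self.name, self.mode = ns, name, mode

        def __repr__(self):
            return "Lock(%s, %s, %s)" % (self.ns, self.name, self.mode)

    def copy_locks(src, dst):
        # src and dst are (domain, image) pairs; the destination image
        # is the only resource taken exclusively.
        return [
            Lock("storage", src[0], SHARED),
            Lock("storage", dst[0], SHARED),
            Lock("image-ns", src[1], SHARED),
            Lock("image-ns", dst[1], EXCLUSIVE),
        ]

    print(copy_locks(("sd-a", "img-1"), ("sd-b", "img-2")))
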
diff --git a/tests/storage_sdm_create_volume_test.py b/tests/storage_sdm_create_volume_test.py
index 26c1118..d4e3507 100644
--- a/tests/storage_sdm_create_volume_test.py
+++ b/tests/storage_sdm_create_volume_test.py
@@ -125,7 +125,7 @@
# Verify that the image resource was locked and released
image_ns = sd.getNamespace(sc.IMAGE_NAMESPACE, job.sd_manifest.sdUUID)
- rm_args = (image_ns, job.vol_info.img_id, rm.LockType.exclusive)
+ rm_args = (image_ns, job.vol_info.img_id, rm.EXCLUSIVE)
self.assertEqual([('acquireResource', rm_args, {}),
('releaseResource', rm_args, {})],
self.rm.__calls__)
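
Note: the assertion above relies on a fake resource manager that records every call it receives (the real helper exposes them as __calls__). A stripped-down sketch of that test double, with simplified names:

    EXCLUSIVE = "exclusive"

    class FakeResourceManager(object):
        # Test double: records each acquire/release so a test can assert
        # that every acquired resource was also released.
        def __init__(self):
            self.calls = []

        def acquireResource(self, *args, **kwargs):
            self.calls.append(("acquireResource", args, kwargs))

        def releaseResource(self, *args, **kwargs):
            self.calls.append(("releaseResource", args, kwargs))

    rm = FakeResourceManager()
    rm_args = ("image-ns", "img-1", EXCLUSIVE)
    rm.acquireResource(*rm_args)
    rm.releaseResource(*rm_args)
    assert rm.calls == [("acquireResource", rm_args, {}),
                        ("releaseResource", rm_args, {})]
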
diff --git a/tests/storage_volume_test.py b/tests/storage_volume_test.py
index 79cae70..1114095 100644
--- a/tests/storage_volume_test.py
+++ b/tests/storage_volume_test.py
@@ -54,7 +54,7 @@
self.assertEqual(sd.getNamespace(sc.VOLUME_LEASE_NAMESPACE, 'dom'),
a.ns)
self.assertEqual('vol', a.name)
- self.assertEqual(rm.LockType.exclusive, a.mode)
+ self.assertEqual(rm.EXCLUSIVE, a.mode)
@permutations((
(('domA', 'img', 'vol'), ('domB', 'img', 'vol')),
diff --git a/vdsm/storage/blockVolume.py b/vdsm/storage/blockVolume.py
index f411545..f1ef30a 100644
--- a/vdsm/storage/blockVolume.py
+++ b/vdsm/storage/blockVolume.py
@@ -371,7 +371,7 @@
"""
if setrw:
self.setrw(rw=rw)
- access = rm.LockType.exclusive if rw else rm.LockType.shared
+ access = rm.EXCLUSIVE if rw else rm.SHARED
activation = rmanager.acquireResource(self.lvmActivationNamespace,
self.volUUID, access)
activation.autoRelease = False
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 7f22ac0..296039d 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -981,8 +981,7 @@
"spUUID=%s, msdUUID=%s, masterVersion=%s, hostID=%s, "
"domainsMap=%s" %
(spUUID, msdUUID, masterVersion, hostID, domainsMap)))
- with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK,
- rm.LockType.exclusive):
+ with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK, rm.EXCLUSIVE):
return self._connectStoragePool(
spUUID, hostID, msdUUID, masterVersion, domainsMap)
@@ -1020,7 +1019,7 @@
except se.StoragePoolUnknown:
pass # pool not connected yet
else:
- with rmanager.acquireResource(STORAGE, spUUID, rm.LockType.shared):
+ with rmanager.acquireResource(STORAGE, spUUID, rm.SHARED):
# FIXME: this breaks in case of a race as it assumes that the
# pool is still available. At the moment we maintain this
# behavior as it's inherited from the previous implementation
@@ -1030,7 +1029,7 @@
masterVersion, domainsMap)
return True
- with rmanager.acquireResource(STORAGE, spUUID, rm.LockType.exclusive):
+ with rmanager.acquireResource(STORAGE, spUUID, rm.EXCLUSIVE):
try:
pool = self.getPool(spUUID)
except se.StoragePoolUnknown:
@@ -1096,8 +1095,7 @@
def _disconnectPool(self, pool, hostID, remove):
pool.validateNotSPM()
- with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK,
- rm.LockType.exclusive):
+ with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK, rm.EXCLUSIVE):
res = pool.disconnect()
del self.pools[pool.spUUID]
return res
@@ -1822,7 +1820,7 @@
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.shared):
+ rm.SHARED):
image.Image(repoPath).syncVolumeChain(sdUUID, imgUUID, volUUID,
newChain)
@@ -3242,8 +3240,7 @@
for sdUUID in activeDoms:
dom = sdCache.produce(sdUUID=sdUUID)
if dom.isData():
- with rmanager.acquireResource(STORAGE, sdUUID,
- rm.LockType.shared):
+ with rmanager.acquireResource(STORAGE, sdUUID, rm.SHARED):
try:
imgs = dom.getAllImages()
except se.StorageDomainDoesNotExist:
@@ -3473,8 +3470,7 @@
@deprecated
@public
def startMonitoringDomain(self, sdUUID, hostID, options=None):
- with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK,
- rm.LockType.exclusive):
+ with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK, rm.EXCLUSIVE):
# Note: We cannot raise here StorageDomainIsMemberOfPool, as it
# will break old hosted engine agent.
self.domainMonitor.startMonitoring(sdUUID, int(hostID), False)
@@ -3482,8 +3478,7 @@
@deprecated
@public
def stopMonitoringDomain(self, sdUUID, options=None):
- with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK,
- rm.LockType.exclusive):
+ with rmanager.acquireResource(STORAGE, HSM_DOM_MON_LOCK, rm.EXCLUSIVE):
if sdUUID in self.domainMonitor.poolDomains:
raise se.StorageDomainIsMemberOfPool(sdUUID)
self.domainMonitor.stopMonitoring([sdUUID])
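
Note: the hsm.py hunks are all the same mechanical substitution, and the shorter constant lets most call sites fit on one line. For reference, a self-contained sketch of the acquire-as-context-manager pattern these call sites rely on; the lock below is a toy stand-in, not the real resource manager:

    import threading
    from contextlib import contextmanager

    SHARED = "shared"
    EXCLUSIVE = "exclusive"

    _mutex = threading.Lock()  # toy stand-in for a per-resource queue

    @contextmanager
    def acquire_resource(namespace, name, lock_type):
        if lock_type not in (SHARED, EXCLUSIVE):
            raise ValueError("invalid lock type %r" % lock_type)
        _mutex.acquire()
        try:
            yield
        finally:
            # Released on normal exit and on exception alike.
            _mutex.release()

    with acquire_resource("storage", "dom-uuid", EXCLUSIVE):
        pass  # critical section
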
diff --git a/vdsm/storage/image.py b/vdsm/storage/image.py
index e265f88..c642c89 100644
--- a/vdsm/storage/image.py
+++ b/vdsm/storage/image.py
@@ -379,7 +379,7 @@
destDom.sdUUID)
        # In destination domain we need to lock image's template if it exists
with rmanager.acquireResource(dstImageResourcesNamespace, pimg,
- rm.LockType.shared) \
+ rm.SHARED) \
if pimg != sc.BLANK_UUID else justLogIt(imgUUID):
if fakeTemplate:
self.createFakeTemplate(destDom.sdUUID, volParams)
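
Note: the image.py hunk keeps an unusual construct: the context manager is chosen inline, with a no-op fallback (justLogIt) when there is no template to lock. A self-contained sketch of that pattern, with hypothetical names and values:

    from contextlib import contextmanager

    BLANK_UUID = "00000000-0000-0000-0000-000000000000"

    @contextmanager
    def just_log_it(img_uuid):
        # No-op context manager: nothing to lock, just log and proceed.
        print("image %s has no template, nothing to lock" % img_uuid)
        yield

    @contextmanager
    def acquire(ns, name, mode):
        print("lock %s/%s (%s)" % (ns, name, mode))
        try:
            yield
        finally:
            print("unlock %s/%s" % (ns, name))

    pimg = "template-uuid"  # parent (template) image; BLANK_UUID means none
    with acquire("image-ns", pimg, "shared") if pimg != BLANK_UUID \
            else just_log_it("img-uuid"):
        pass  # create the image under the appropriate lock
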
diff --git a/vdsm/storage/resourceFactories.py b/vdsm/storage/resourceFactories.py
index a18360e..b36faa5 100644
--- a/vdsm/storage/resourceFactories.py
+++ b/vdsm/storage/resourceFactories.py
@@ -154,7 +154,7 @@
if len(volUUIDChain) > 0:
volRes = rmanager.acquireResource(
self.volumeResourcesNamespace,
- template, rm.LockType.shared,
+ template, rm.SHARED,
timeout=self.resource_default_timeout)
else:
volRes = rmanager.acquireResource(
diff --git a/vdsm/storage/resourceManager.py b/vdsm/storage/resourceManager.py
index b766082..e048d01 100644
--- a/vdsm/storage/resourceManager.py
+++ b/vdsm/storage/resourceManager.py
@@ -52,9 +52,8 @@
# enums.
-class LockType:
- shared = "shared"
- exclusive = "exclusive"
+SHARED = "shared"
+EXCLUSIVE = "exclusive"
class LockState:
@@ -80,9 +79,9 @@
@classmethod
def fromType(cls, locktype):
- if str(locktype) == LockType.shared:
+ if str(locktype) == SHARED:
return cls.shared
- if str(locktype) == LockType.exclusive:
+ if str(locktype) == EXCLUSIVE:
return cls.locked
raise ValueError("invalid locktype %s" % locktype)
@@ -524,7 +523,7 @@
if not self._resourceNameValidator.match(name):
raise ValueError("Invalid resource name '%s'" % name)
- if lockType not in (LockType.shared, LockType.exclusive):
+ if lockType not in (SHARED, EXCLUSIVE):
raise ValueError("invalid lock type %r" % lockType)
request = Request(namespace, name, lockType, callback)
@@ -547,8 +546,8 @@
raise KeyError("No such resource '%s'" % (fullName))
else:
if len(resource.queue) == 0 and \
- resource.currentLock == LockType.shared and \
- request.lockType == LockType.shared:
+ resource.currentLock == SHARED and \
+ request.lockType == SHARED:
resource.activeUsers += 1
self._log.debug("Resource '%s' found in shared state "
"and queue is empty, Joining current "
@@ -677,7 +676,7 @@
break
        # If the lock is exclusive we're done
- if resource.currentLock == LockType.exclusive:
+ if resource.currentLock == EXCLUSIVE:
return
# Keep granting shared locks
@@ -690,7 +689,7 @@
resource.queue.pop()
continue
- if nextRequest.lockType == LockType.exclusive:
+ if nextRequest.lockType == EXCLUSIVE:
break
nextRequest = resource.queue.pop()
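
Note: this file is the heart of the change: the enum-like LockType class becomes two module-level string constants, and validation stays at the point where requests are registered. A minimal sketch of the before/after, for reference only; since the string values are identical, the change is value-compatible:

    # Before: class attributes posing as an enum.
    class LockType:
        shared = "shared"
        exclusive = "exclusive"

    # After: plain module constants; callers import and use them directly.
    SHARED = "shared"
    EXCLUSIVE = "exclusive"

    def register(lock_type):
        # Validation at the entry point, as in registerResource() above.
        if lock_type not in (SHARED, EXCLUSIVE):
            raise ValueError("invalid lock type %r" % lock_type)
        return lock_type

    assert register(SHARED) == LockType.shared  # values are unchanged
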
diff --git a/vdsm/storage/sdm/api/copy_data.py b/vdsm/storage/sdm/api/copy_data.py
index 0a1cdb6..109bd75 100644
--- a/vdsm/storage/sdm/api/copy_data.py
+++ b/vdsm/storage/sdm/api/copy_data.py
@@ -98,9 +98,8 @@
@property
def locks(self):
img_ns = sd.getNamespace(sc.IMAGE_NAMESPACE, self.sd_id)
- mode = rm.LockType.exclusive if self._writable else rm.LockType.shared
- ret = [rm.ResourceManagerLock(sc.STORAGE, self.sd_id,
- rm.LockType.shared),
+ mode = rm.EXCLUSIVE if self._writable else rm.SHARED
+ ret = [rm.ResourceManagerLock(sc.STORAGE, self.sd_id, rm.SHARED),
rm.ResourceManagerLock(img_ns, self.img_id, mode)]
if self._writable:
ret.append(volume.VolumeLease(self._host_id, self.sd_id,
diff --git a/vdsm/storage/sdm/api/create_volume.py b/vdsm/storage/sdm/api/create_volume.py
index ada1d94..cc269ad 100644
--- a/vdsm/storage/sdm/api/create_volume.py
+++ b/vdsm/storage/sdm/api/create_volume.py
@@ -45,7 +45,7 @@
image_res_ns = sd.getNamespace(sc.IMAGE_NAMESPACE,
self.sd_manifest.sdUUID)
with rmanager.acquireResource(image_res_ns, self.vol_info.img_id,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
artifacts = self.sd_manifest.get_volume_artifacts(
self.vol_info.img_id, self.vol_info.vol_id)
artifacts.create(
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 44e2ed8..0ddcc5a 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -140,15 +140,14 @@
return
domain = sdCache.produce(sdUUID)
- with rmanager.acquireResource(sc.STORAGE, self.spUUID,
- rm.LockType.shared):
+ with rmanager.acquireResource(sc.STORAGE, self.spUUID, rm.SHARED):
if sdUUID not in self.getDomains(activeOnly=True):
self.log.debug("Domain %s is not an active pool domain, "
"skipping domain links refresh",
sdUUID)
return
with rmanager.acquireResource(sc.STORAGE, sdUUID + "_repo",
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
self.log.debug("Refreshing domain links for %s", sdUUID)
self._refreshDomainLinks(domain)
@@ -175,9 +174,8 @@
return
with rmanager.acquireResource(sc.STORAGE, "upgrade_" + self.spUUID,
- rm.LockType.shared):
- with rmanager.acquireResource(sc.STORAGE, sdUUID,
- rm.LockType.exclusive):
+ rm.SHARED):
+ with rmanager.acquireResource(sc.STORAGE, sdUUID, rm.EXCLUSIVE):
if sdUUID not in self._domainsToUpgrade:
return
@@ -350,7 +348,7 @@
def _shutDownUpgrade(self):
self.log.debug("Shutting down upgrade process")
with rmanager.acquireResource(sc.STORAGE, "upgrade_" + self.spUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
try:
self.domainMonitor.onDomainStateChange.unregister(
self._upgradeCallback)
@@ -421,14 +419,13 @@
def _upgradePool(self, targetDomVersion, lockTimeout=None):
try:
with rmanager.acquireResource(sc.STORAGE, "upgrade_" + self.spUUID,
- rm.LockType.exclusive,
- timeout=lockTimeout):
+ rm.EXCLUSIVE, timeout=lockTimeout):
sd.validateDomainVersion(targetDomVersion)
self.log.info("Trying to upgrade master domain `%s`",
self.masterDomain.sdUUID)
with rmanager.acquireResource(sc.STORAGE,
self.masterDomain.sdUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
self._convertDomain(self.masterDomain,
str(targetDomVersion))
@@ -1348,7 +1345,7 @@
def extendVolumeSize(self, sdUUID, imgUUID, volUUID, newSize):
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
return sdCache.produce(sdUUID) \
.produceVolume(imgUUID, volUUID).extendSize(int(newSize))
@@ -1537,9 +1534,9 @@
dstImageResourcesNamespace = srcImageResourcesNamespace
with nested(rmanager.acquireResource(srcImageResourcesNamespace,
- srcImgUUID, rm.LockType.shared),
+ srcImgUUID, rm.SHARED),
rmanager.acquireResource(dstImageResourcesNamespace,
- dstImgUUID, rm.LockType.exclusive)
+ dstImgUUID, rm.EXCLUSIVE)
):
dstUUID = image.Image(self.poolPath).copyCollapsed(
sdUUID, vmUUID, srcImgUUID, srcVolUUID, dstImgUUID,
@@ -1578,16 +1575,16 @@
# For MOVE_OP acquire exclusive lock
# For COPY_OP shared lock is enough
if op == image.MOVE_OP:
- srcLock = rm.LockType.exclusive
+ srcLock = rm.EXCLUSIVE
elif op == image.COPY_OP:
- srcLock = rm.LockType.shared
+ srcLock = rm.SHARED
else:
raise se.MoveImageError(imgUUID)
with nested(rmanager.acquireResource(srcImageResourcesNamespace,
imgUUID, srcLock),
rmanager.acquireResource(dstImageResourcesNamespace,
- imgUUID, rm.LockType.exclusive)):
+ imgUUID, rm.EXCLUSIVE)):
image.Image(self.poolPath).move(srcDomUUID, dstDomUUID, imgUUID,
vmUUID, op, postZero, force)
@@ -1625,10 +1622,10 @@
# Since source volume is only a parent of temporary volume, we don't
# need to acquire any lock for it.
with nested(
- rmanager.acquireResource(srcNamespace, tmpImgUUID,
- rm.LockType.exclusive),
- rmanager.acquireResource(dstNamespace, dstImgUUID,
- rm.LockType.exclusive)):
+ rmanager.acquireResource(srcNamespace, tmpImgUUID,
+ rm.EXCLUSIVE),
+ rmanager.acquireResource(dstNamespace, dstImgUUID,
+ rm.EXCLUSIVE)):
image.Image(self.poolPath).sparsify(
tmpSdUUID, tmpImgUUID, tmpVolUUID, dstSdUUID, dstImgUUID,
dstVolUUID)
@@ -1652,8 +1649,8 @@
# Preparing the ordered resource list to be acquired
resList = (rmanager.acquireResource(*x) for x in sorted((
- (srcImgResNs, imgUUID, rm.LockType.shared),
- (dstImgResNs, imgUUID, rm.LockType.exclusive),
+ (srcImgResNs, imgUUID, rm.SHARED),
+ (dstImgResNs, imgUUID, rm.EXCLUSIVE),
)))
with nested(*resList):
@@ -1680,8 +1677,8 @@
# Preparing the ordered resource list to be acquired
resList = (rmanager.acquireResource(*x) for x in sorted((
- (srcImgResNs, imgUUID, rm.LockType.shared),
- (dstImgResNs, imgUUID, rm.LockType.exclusive),
+ (srcImgResNs, imgUUID, rm.SHARED),
+ (dstImgResNs, imgUUID, rm.EXCLUSIVE),
)))
with nested(*resList):
@@ -1694,8 +1691,7 @@
methodArgs.
"""
imgResourceLock = rmanager.acquireResource(
- sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID,
- rm.LockType.shared)
+ sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID, rm.SHARED)
with imgResourceLock:
return image.Image(self.poolPath) \
@@ -1707,8 +1703,7 @@
and methodArgs.
"""
imgResourceLock = rmanager.acquireResource(
- sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID,
- rm.LockType.exclusive)
+ sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID, rm.EXCLUSIVE)
with imgResourceLock:
return image.Image(self.poolPath) \
@@ -1724,8 +1719,7 @@
startEvent.wait()
imgResourceLock = rmanager.acquireResource(
- sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID,
- rm.LockType.shared)
+ sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID, rm.SHARED)
with imgResourceLock:
try:
@@ -1740,8 +1734,7 @@
Download an image from a stream.
"""
imgResourceLock = rmanager.acquireResource(
- sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID,
- rm.LockType.exclusive)
+ sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID), imgUUID, rm.EXCLUSIVE)
with imgResourceLock:
try:
@@ -1780,9 +1773,9 @@
resourceList = []
for imgUUID in imgList:
resourceList.append(rmanager.acquireResource(
- srcImageResourcesNamespace, imgUUID, rm.LockType.exclusive))
+ srcImageResourcesNamespace, imgUUID, rm.EXCLUSIVE))
resourceList.append(rmanager.acquireResource(
- dstImageResourcesNamespace, imgUUID, rm.LockType.exclusive))
+ dstImageResourcesNamespace, imgUUID, rm.EXCLUSIVE))
with nested(*resourceList):
image.Image(self.poolPath).multiMove(
@@ -1805,7 +1798,7 @@
"""
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
img = image.Image(self.poolPath)
chain = img.reconcileVolumeChain(sdUUID, imgUUID, leafVolUUID)
return dict(volumes=chain)
@@ -1832,7 +1825,7 @@
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
image.Image(self.poolPath).merge(
sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
@@ -1890,7 +1883,7 @@
if srcVol.getParent() == sc.BLANK_UUID:
with rmanager.acquireResource(imageResourcesNamespace,
srcImgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
self.log.debug("volume %s is not shared. "
"Setting it as shared", srcVolUUID)
@@ -1899,7 +1892,7 @@
raise se.VolumeNonShareable(srcVol)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
newVolUUID = sdCache.produce(sdUUID).createVolume(
imgUUID=imgUUID, size=size, volFormat=volFormat,
preallocate=preallocate, diskType=diskType, volUUID=volUUID,
@@ -1928,7 +1921,7 @@
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
dom = sdCache.produce(sdUUID)
for volUUID in volumes:
dom.produceVolume(imgUUID, volUUID).delete(
@@ -1981,7 +1974,7 @@
self.validatePoolSD(sdUUID)
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
sdCache.produce(sdUUID).produceVolume(
imgUUID=imgUUID,
volUUID=volUUID).setDescription(descr=description)
@@ -1990,7 +1983,7 @@
self.validatePoolSD(sdUUID)
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
with rmanager.acquireResource(imageResourcesNamespace, imgUUID,
- rm.LockType.exclusive):
+ rm.EXCLUSIVE):
sdCache.produce(sdUUID).produceVolume(
imgUUID=imgUUID,
volUUID=volUUID).setLegality(legality=legality)
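
Note: two sp.py hunks above build the resource list with sorted() before nested(); sorting the (namespace, name, mode) tuples gives every caller the same acquisition order for the same pair of resources, which is the standard way to avoid ABBA deadlocks between concurrent operations. A runnable Python 2 sketch (contextlib.nested is deprecated but is what this code uses); namespaces here are made up:

    from contextlib import contextmanager, nested  # nested: Python 2 only

    SHARED = "shared"
    EXCLUSIVE = "exclusive"

    @contextmanager
    def acquire(ns, name, mode):
        print("lock %s/%s (%s)" % (ns, name, mode))
        try:
            yield
        finally:
            print("unlock %s/%s" % (ns, name))

    # Sorting fixes the acquisition order regardless of which side is
    # source and which is destination.
    res_list = (acquire(*x) for x in sorted((
        ("src-ns", "img-uuid", SHARED),
        ("dst-ns", "img-uuid", EXCLUSIVE),
    )))

    with nested(*res_list):
        pass  # operate on both images
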
diff --git a/vdsm/storage/task.py b/vdsm/storage/task.py
index a2f6f62..daeb2a1 100644
--- a/vdsm/storage/task.py
+++ b/vdsm/storage/task.py
@@ -1360,7 +1360,7 @@
'task_resource_default_timeout')):
self.resOwner.acquire(namespace,
resName,
- resourceManager.LockType.exclusive,
+ resourceManager.EXCLUSIVE,
timeout)
def getSharedLock(self,
@@ -1370,5 +1370,5 @@
'task_resource_default_timeout')):
self.resOwner.acquire(namespace,
resName,
- resourceManager.LockType.shared,
+ resourceManager.SHARED,
timeout)
diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
index f02be02..aa318c0 100644
--- a/vdsm/storage/volume.py
+++ b/vdsm/storage/volume.py
@@ -615,8 +615,8 @@
imageResourcesNamespace = sd.getNamespace(sc.IMAGE_NAMESPACE, sdUUID)
- with rmanager.acquireResource(imageResourcesNamespace,
- srcImg, rm.LockType.exclusive):
+ with rmanager.acquireResource(imageResourcesNamespace, srcImg,
+ rm.EXCLUSIVE):
vol = sdCache.produce(sdUUID).produceVolume(srcImg, srcVol)
vol.prepare(rw=True, chainrw=True, setrw=True)
@@ -1205,7 +1205,7 @@
@property
def mode(self):
- return rm.LockType.exclusive # All volume leases are exclusive
+ return rm.EXCLUSIVE # All volume leases are exclusive
def acquire(self):
dom = sdCache.produce_manifest(self._sd_id)
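
Note: the final hunk makes VolumeLease.mode return the constant directly; per the inline comment, a volume lease can never be shared, so the property is hardwired. A trivial sketch of that shape:

    EXCLUSIVE = "exclusive"

    class VolumeLease(object):
        # All volume leases are exclusive, so mode never varies.
        @property
        def mode(self):
            return EXCLUSIVE

    assert VolumeLease().mode == EXCLUSIVE
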
--
To view, visit https://gerrit.ovirt.org/63628
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id78e07814f21a1dcf33efa2afe400eff041e3001
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>