Federico Simoncelli has uploaded a new change for review.
Change subject: [wip] sdcache: avoid extra refresh due to samplingmethod
......................................................................
[wip] sdcache: avoid extra refresh due to samplingmethod
In order to avoid an extra iSCSI rescan (a side effect of the samplingmethod
decorator), an additional lock has been introduced to queue the requests
while the storage is flagged as stale.
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=870768
Change-Id: If178a8eaeb94f1dfe9e0957036dde88f6a22829c
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/sdc.py
1 file changed, 25 insertions(+), 26 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/74/9274/1
diff --git a/vdsm/storage/sdc.py b/vdsm/storage/sdc.py
index f2f4534..978e3fa 100644
--- a/vdsm/storage/sdc.py
+++ b/vdsm/storage/sdc.py
@@ -62,32 +62,27 @@
STORAGE_UPDATED = 0
STORAGE_STALE = 1
- STORAGE_REFRESHING = 2
def __init__(self, storage_repo):
- self._syncroot = threading.Condition()
+ self._syncDomain = threading.Condition()
+ self._syncRefresh = threading.Lock()
self.__domainCache = {}
self.__inProgress = set()
self.__staleStatus = self.STORAGE_STALE
self.storage_repo = storage_repo
def invalidateStorage(self):
- with self._syncroot:
- self.__staleStatus = self.STORAGE_STALE
+ self.log.debug("The storages have been invalidated")
+ self.__staleStatus = self.STORAGE_STALE
@misc.samplingmethod
def refreshStorage(self):
- self.__staleStatus = self.STORAGE_REFRESHING
-
+ # We need to set the __staleStatus value at the beginning because we
+ # want to keep track of the future invalidateStorage calls that might
+ # arrive during the rescan procedure.
+ self.__staleStatus = self.STORAGE_UPDATED
multipath.rescan()
lvm.invalidateCache()
-
- # If a new invalidateStorage request came in after the refresh
- # started then we cannot flag the storages as updated (force a
- # new rescan later).
- with self._syncroot:
- if self.__staleStatus == self.STORAGE_REFRESHING:
- self.__staleStatus = self.STORAGE_UPDATED
def produce(self, sdUUID):
domain = DomainProxy(self, sdUUID)
@@ -98,7 +93,7 @@
return domain
def _realProduce(self, sdUUID):
- with self._syncroot:
+ with self._syncDomain:
while True:
domain = self.__domainCache.get(sdUUID)
@@ -109,25 +104,29 @@
self.__inProgress.add(sdUUID)
break
- self._syncroot.wait()
+ self._syncDomain.wait()
try:
- # If multiple calls reach this point and the storage is not
- # updated the refreshStorage() sampling method is called
- # serializing (and eventually grouping) the requests.
- if self.__staleStatus != self.STORAGE_UPDATED:
- self.refreshStorage()
+ # Here we cannot take full advantage of the refreshStorage
+ # samplingmethod since we might be scheduling an unneeded
+ # extra rescan. We need an additional lock (_syncRefresh)
+ # to make sure that __staleStatus is taken in account
+ # (without affecting all the other external refreshStorage
+ # calls as it would be if we move this check there).
+ with self._syncRefresh:
+ if self.__staleStatus != self.STORAGE_UPDATED:
+ self.refreshStorage()
domain = self._findDomain(sdUUID)
- with self._syncroot:
+ with self._syncDomain:
self.__domainCache[sdUUID] = domain
return domain
finally:
- with self._syncroot:
+ with self._syncDomain:
self.__inProgress.remove(sdUUID)
- self._syncroot.notifyAll()
+ self._syncDomain.notifyAll()
def _findDomain(self, sdUUID):
import blockSD
@@ -162,16 +161,16 @@
return uuids
def refresh(self):
- with self._syncroot:
+ with self._syncDomain:
lvm.invalidateCache()
self.__domainCache.clear()
def manuallyAddDomain(self, domain):
- with self._syncroot:
+ with self._syncDomain:
self.__domainCache[domain.sdUUID] = domain
def manuallyRemoveDomain(self, sdUUID):
- with self._syncroot:
+ with self._syncDomain:
try:
del self.__domainCache[sdUUID]
except KeyError:
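
The locking pattern in the patch above can be illustrated with a minimal,
self-contained sketch (hypothetical class and method names; the real code
relies on misc.samplingmethod and rescans via multipath/lvm, which are only
stubbed out here):

```python
import threading

class StorageStatusSketch:
    """Minimal sketch of the stale-flag + extra-lock pattern."""
    STORAGE_UPDATED = 0
    STORAGE_STALE = 1

    def __init__(self):
        self._syncRefresh = threading.Lock()
        self.__staleStatus = self.STORAGE_STALE
        self.rescans = 0  # counts actual rescans, for illustration only

    def invalidateStorage(self):
        # A plain flag write; it is observed later under _syncRefresh.
        self.__staleStatus = self.STORAGE_STALE

    def _rescan(self):
        # Stand-in for multipath.rescan() and lvm.invalidateCache().
        self.rescans += 1

    def refreshStorage(self):
        # Flag UPDATED *before* rescanning, so an invalidateStorage() that
        # arrives mid-rescan leaves the flag STALE and forces a later refresh.
        self.__staleStatus = self.STORAGE_UPDATED
        self._rescan()

    def produce(self):
        # Serialize the stale check so concurrent producers cannot schedule
        # an unneeded extra rescan.
        with self._syncRefresh:
            if self.__staleStatus != self.STORAGE_UPDATED:
                self.refreshStorage()
```

With this shape, two producers racing while the storage is stale trigger a
single rescan, and a later produce() on updated storage triggers none.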
--
To view, visit http://gerrit.ovirt.org/9274
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: If178a8eaeb94f1dfe9e0957036dde88f6a22829c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Eduardo has uploaded a new change for review.
Change subject: [WIP] Towards a more (block) secure HSM.
......................................................................
[WIP] Towards a more (block) secure HSM.
Change-Id: I30df4ee5cdb6b44cf14d8cb155436aac7442a07d
---
M vdsm/storage/hsm.py
M vdsm/storage/lvm.py
M vdsm/storage/sp.py
3 files changed, 25 insertions(+), 5 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/18/2218/1
--
To view, visit http://gerrit.ovirt.org/2218
Gerrit-MessageType: newchange
Gerrit-Change-Id: I30df4ee5cdb6b44cf14d8cb155436aac7442a07d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Eduardo <ewarszaw(a)redhat.com>
Saggi Mizrahi has uploaded a new change for review.
Change subject: [WIP] Implement a process to do dangerous IO in C
......................................................................
[WIP] Implement a process to do dangerous IO in C
This replaces the process pool with a single process, written in C, that can
serve multiple requests.
This implementation is much more scalable and lightweight. It should solve
bugs related to running out of helpers, logging getting stuck, Python
forking deadlocks, running out of memory, and other things as well.
The communication between VDSM and the IOProcess is done with json
objects.
The IOProcess starts with 3 threads:
1. requestReader - reads requests from the pipe, builds a DOM
representation of each one, and queues it up for handling
2. responseWriter - gets response DOMs from the queue, converts them to
JSON strings, and sends them over the pipe
3. requestHandler - pops requests from the queue and provisions threads
for handling them. Currently it just allocates a new thread per
request. If there is ever a need for a thread pool, this is where
the load balancing is going to sit.
Each request gets its arguments as a JsonNode and returns a response that is
a JsonNode as well. Most exported functions are fairly trivial and are a
good example of how to write new ones.
Unlike the ProcessPoolHelper, high-level commands sit on the OopWrapper
and are run from the client side instead of being implemented in C on
the IOProcess side.
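
The reader/handler/writer flow described above can be sketched in Python
(hypothetical message fields and function names; the actual IOProcess wire
format is defined by the C code in the patch, not reproduced here):

```python
import json
import queue
import threading

def handle_request(req):
    # Stand-in for dispatching to an exported function by method name.
    if req.get("method") == "echo":
        return {"id": req["id"], "result": req.get("args")}
    return {"id": req.get("id"), "error": "unknown method"}

def serve(lines_in):
    """Sketch of requestReader -> requestHandler -> responseWriter,
    with a new thread provisioned per request, as described above."""
    responses = queue.Queue()
    workers = []
    for line in lines_in:                      # requestReader: parse JSON
        req = json.loads(line)
        t = threading.Thread(                  # requestHandler: thread per request
            target=lambda r=req: responses.put(handle_request(r)))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()
    out = []
    while not responses.empty():               # responseWriter: serialize back
        out.append(json.dumps(responses.get()))
    return out
```

A real implementation would run the three roles as separate threads over a
pipe; this sketch collapses them into one function to show the data flow.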
Change-Id: Ie4664d5330debbe38ba33b74ebb586ac42913b4a
Signed-off-by: Saggi Mizrahi <smizrahi(a)redhat.com>
---
M configure.ac
M tests/Makefile.am
A tests/ioprocessTests.py
A tests/outOfProcessTests.py
D tests/processPoolTests.py
M vdsm.spec.in
M vdsm/constants.py.in
M vdsm/storage/Makefile.am
M vdsm/storage/fileSD.py
M vdsm/storage/fileUtils.py
M vdsm/storage/fileVolume.py
A vdsm/storage/ioprocess.py
A vdsm/storage/ioprocess/.gitignore
A vdsm/storage/ioprocess/Makefile.am
A vdsm/storage/ioprocess/exported-functions.c
A vdsm/storage/ioprocess/exported-functions.h
A vdsm/storage/ioprocess/ioprocess.c
A vdsm/storage/ioprocess/json-dom-generator.c
A vdsm/storage/ioprocess/json-dom-generator.h
A vdsm/storage/ioprocess/json-dom-parser.c
A vdsm/storage/ioprocess/json-dom-parser.h
A vdsm/storage/ioprocess/json-dom.c
A vdsm/storage/ioprocess/json-dom.h
M vdsm/storage/misc.py
M vdsm/storage/nfsSD.py
M vdsm/storage/outOfProcess.py
D vdsm/storage/processPool.py
M vdsm/storage/sd.py
M vdsm/storage/sp.py
M vdsm/storage/task.py
30 files changed, 3,018 insertions(+), 666 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/46/3946/1
--
To view, visit http://gerrit.ovirt.org/3946
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie4664d5330debbe38ba33b74ebb586ac42913b4a
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>