Change in vdsm[master]: utils: add ovirt-node persistence functions
by Alon Bar-Lev
Alon Bar-Lev has uploaded a new change for review.
Change subject: utils: add ovirt-node persistence functions
......................................................................
utils: add ovirt-node persistence functions
Change-Id: Ib93af61a44a52c37faf92d6f6081babefa3a09aa
Signed-off-by: Alon Bar-Lev <alonbl(a)redhat.com>
---
M lib/vdsm/utils.py
1 file changed, 24 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/11/20811/1
diff --git a/lib/vdsm/utils.py b/lib/vdsm/utils.py
index 78d055e..acb284c 100644
--- a/lib/vdsm/utils.py
+++ b/lib/vdsm/utils.py
@@ -877,3 +877,27 @@
logging.error("Panic: %s", msg, exc_info=True)
os.killpg(0, 9)
sys.exit(-3)
+
+
+@memoized
+def isOvirtNode():
+ return (
+ os.path.exists('/etc/rhev-hypervisor-release') or
+ glob.glob('/etc/ovirt-node-*-release')
+ )
+
+
+def ovirtNodePersist(files):
+ if isOvirtNode():
+ from ovirtnode import ovirtfunctions
+ ovirtfunctions.ovirt_store_config(files)
+
+
+def ovirtNodeUnpersist(files):
+ if isOvirtNode():
+ from ovirtnode import ovirtfunctions
+ todo = []
+ for f in files:
+ if ovirtfunctions.is_persisted(f):
+ todo.append(f)
+ ovirtfunctions.remove_config(todo)
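The detection logic this patch adds can be sketched in isolation like so (the marker paths match the patch; making them parameters is purely for illustration and testing):

```python
import glob
import os


def is_ovirt_node(release_path='/etc/rhev-hypervisor-release',
                  release_glob='/etc/ovirt-node-*-release'):
    # A host counts as an oVirt Node when either marker file exists;
    # the defaults are the paths probed by the patch.
    return os.path.exists(release_path) or bool(glob.glob(release_glob))
```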
--
To view, visit http://gerrit.ovirt.org/20811
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib93af61a44a52c37faf92d6f6081babefa3a09aa
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Alon Bar-Lev <alonbl(a)redhat.com>
9 years, 11 months
Change in vdsm[master]: fileUtils.validateAccess ioprocess implementation
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: fileUtils.validateAccess ioprocess implementation
......................................................................
fileUtils.validateAccess ioprocess implementation
Change-Id: Ide82ef85d245216492e1e4327efb37c6c32a55dc
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/outOfProcess.py
1 file changed, 18 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/20/27120/1
diff --git a/vdsm/storage/outOfProcess.py b/vdsm/storage/outOfProcess.py
index ead305b..3b5071c 100644
--- a/vdsm/storage/outOfProcess.py
+++ b/vdsm/storage/outOfProcess.py
@@ -18,6 +18,9 @@
# Refer to the README and COPYING files for full details of the license
#
import types
+import os
+import errno
+import logging
from ioprocess import IOProcess
@@ -37,6 +40,8 @@
_procLock = threading.Lock()
_proc = {}
_ioproc = {}
+
+log = logging.getLogger('oop')
def setDefaultImpl(impl):
@@ -75,6 +80,17 @@
return self._iop.glob(pattern)
+class _ioprocessFileUtils(object):
+ def __init__(self, iop):
+ self._iop = iop
+
+ def validateAccess(self, targetPath, perms=(os.R_OK | os.W_OK | os.X_OK)):
+ if not self._iop.access(targetPath, perms):
+ log.warning("Permission denied for directory: %s with permissions: "
+ "%s", targetPath, perms)
+ raise OSError(errno.EACCES, os.strerror(errno.EACCES))
+
+
class _ModuleWrapper(types.ModuleType):
def __init__(self, modName, procPool, ioproc, timeout, subModNames=()):
self._modName = modName
@@ -98,6 +114,8 @@
if ioproc:
self.glob = _ioprocessGlob(ioproc)
+ self.fileUtils.validateAccess = \
+ _ioprocessFileUtils(ioproc).validateAccess
def __getattr__(self, name):
# Root modules is fake, we need to remove it
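Outside of vdsm, the same contract can be approximated with `os.access` standing in for the ioprocess call (a hedged sketch; the real implementation performs the check in a separate helper process precisely to avoid blocking the main one):

```python
import errno
import os


def validate_access(target_path, perms=os.R_OK | os.W_OK | os.X_OK):
    # Mirrors _ioprocessFileUtils.validateAccess: raise EACCES when the
    # requested permission bits are not all available on target_path.
    if not os.access(target_path, perms):
        raise OSError(errno.EACCES, os.strerror(errno.EACCES), target_path)
```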
--
To view, visit http://gerrit.ovirt.org/27120
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ide82ef85d245216492e1e4327efb37c6c32a55dc
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
Change in vdsm[master]: repoStats implementation using ioprocess instead of RFH
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: repoStats implementation using ioprocess instead of RFH
......................................................................
repoStats implementation using ioprocess instead of RFH
Change-Id: I225b8914801628c625716f58cdca19884081b4b6
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/outOfProcess.py
1 file changed, 11 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/66/27266/1
diff --git a/vdsm/storage/outOfProcess.py b/vdsm/storage/outOfProcess.py
index 3b5071c..f2de5f4 100644
--- a/vdsm/storage/outOfProcess.py
+++ b/vdsm/storage/outOfProcess.py
@@ -90,6 +90,17 @@
"%s", targetPath, perms)
raise OSError(errno.EACCES, os.strerror(errno.EACCES))
+ def pathExists(self, filename, writable=False):
+ return self._iop.pathExists(filename, writable)
+
+
+class _ioprocessOs(object):
+ def __init__(self, iop):
+ self._iop = iop
+
+ def statvfs(self, path):
+ return self._iop.statvfs(path)
+
class _ModuleWrapper(types.ModuleType):
def __init__(self, modName, procPool, ioproc, timeout, subModNames=()):
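The hunk only forwards `statvfs`; what repoStats does with the result is not shown here, but the usual capacity computation from a statvfs struct looks like this (an assumed usage, not taken from the patch):

```python
import os


def disk_stats(path):
    # f_frsize is the fragment size the block counts are expressed in;
    # multiplying gives byte totals suitable for a stats report.
    st = os.statvfs(path)
    return {
        'capacity': st.f_blocks * st.f_frsize,
        'free': st.f_bavail * st.f_frsize,
    }
```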
--
To view, visit http://gerrit.ovirt.org/27266
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I225b8914801628c625716f58cdca19884081b4b6
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
Change in vdsm[master]: Convert file metadata to use ioprocess for read/writes
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: Convert file metadata to use ioprocess for read/writes
......................................................................
Convert file metadata to use ioprocess for read/writes
Change-Id: I86beb53885dc935ad473498208e486895eab8315
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/outOfProcess.py
1 file changed, 12 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/67/27267/1
diff --git a/vdsm/storage/outOfProcess.py b/vdsm/storage/outOfProcess.py
index f2de5f4..c864985 100644
--- a/vdsm/storage/outOfProcess.py
+++ b/vdsm/storage/outOfProcess.py
@@ -98,9 +98,19 @@
def __init__(self, iop):
self._iop = iop
+ def rename(self, oldpath, newpath):
+ self._iop.rename(oldpath, newpath)
+
def statvfs(self, path):
return self._iop.statvfs(path)
+
+def directReadLines(ioproc, path):
+ return ioproc.readlines(path, direct=True)
+
+
+def writeLines(ioproc, path, lines):
+ ioproc.writefile(path, lines)
class _ModuleWrapper(types.ModuleType):
def __init__(self, modName, procPool, ioproc, timeout, subModNames=()):
@@ -127,6 +137,8 @@
self.glob = _ioprocessGlob(ioproc)
self.fileUtils.validateAccess = \
_ioprocessFileUtils(ioproc).validateAccess
+ self.directReadLines = partial(directReadLines, ioproc)
+ self.writeLines = partial(writeLines, ioproc)
def __getattr__(self, name):
# Root modules is fake, we need to remove it
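The `partial(...)` calls above bind the ioproc handle as the first argument of the module-level helpers. The pattern in isolation, with a dict standing in for the ioprocess handle (purely illustrative names):

```python
from functools import partial


def write_lines(backend, path, lines):
    # 'backend' plays the role of the bound ioproc handle.
    backend[path] = ''.join(lines)


store = {}
# Same shape as self.writeLines = partial(writeLines, ioproc) above:
# callers pass only (path, lines), the handle is pre-bound.
writer = partial(write_lines, store)
writer('/rhev/metadata', ['a\n', 'b\n'])
```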
--
To view, visit http://gerrit.ovirt.org/27267
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I86beb53885dc935ad473498208e486895eab8315
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
Change in vdsm[master]: Change file permissions using ioprocess
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: Change file permissions using ioprocess
......................................................................
Change file permissions using ioprocess
Change-Id: If71ebd4172e53fe0a9c530d29584603b9d2eef5c
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/outOfProcess.py
1 file changed, 3 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/68/27268/1
diff --git a/vdsm/storage/outOfProcess.py b/vdsm/storage/outOfProcess.py
index c864985..516f15d 100644
--- a/vdsm/storage/outOfProcess.py
+++ b/vdsm/storage/outOfProcess.py
@@ -98,6 +98,9 @@
def __init__(self, iop):
self._iop = iop
+ def chmod(self, path, mode):
+ self._iop.chmod(path, mode)
+
def rename(self, oldpath, newpath):
self._iop.rename(oldpath, newpath)
--
To view, visit http://gerrit.ovirt.org/27268
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: If71ebd4172e53fe0a9c530d29584603b9d2eef5c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
Change in vdsm[master]: virt: stats: move the guest stats in a method
by fromani@redhat.com
Francesco Romani has uploaded a new change for review.
Change subject: virt: stats: move the guest stats in a method
......................................................................
virt: stats: move the guest stats in a method
This patch moves the guest statistics gathering into
a separate method, with no functional changes.
Change-Id: I83adb6ddc28777c50190229e5da728a3bdb3b24e
Signed-off-by: Francesco Romani <fromani(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 10 insertions(+), 7 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/54/26554/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index 922595e..894f717 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -2414,15 +2414,9 @@
self._addVmStatusStats(stats)
try:
- stats.update(self.guestAgent.getGuestInfo())
+ self._addGuestInfoStats(stats)
except Exception:
return stats
- memUsage = 0
- realMemUsage = int(stats['memUsage'])
- if realMemUsage != 0:
- memUsage = (100 - float(realMemUsage) /
- int(self.conf['memSize']) * 100)
- stats['memUsage'] = utils.convertToStr(int(memUsage))
stats['balloonInfo'] = self._getBalloonInfo()
@@ -2518,6 +2512,15 @@
if self.isMigrating():
stats['migrationProgress'] = self.migrateStatus()['progress']
+ def _addGuestInfoStats(self, stats):
+ stats.update(self.guestAgent.getGuestInfo())
+ memUsage = 0
+ realMemUsage = int(stats['memUsage'])
+ if realMemUsage != 0:
+ memUsage = (100 - float(realMemUsage) /
+ int(self.conf['memSize']) * 100)
+ stats['memUsage'] = utils.convertToStr(int(memUsage))
+
def isMigrating(self):
return self._migrationSourceThread.isAlive()
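The moved computation treats the agent-reported 'memUsage' as the free amount and converts it to a used percentage. In isolation (units and the zero passthrough per the hunk; the function name is ours):

```python
def mem_usage_percent(real_mem_usage, mem_size):
    # Mirrors _addGuestInfoStats: a report of 0 is passed through,
    # otherwise usage is the complement of free/total as a percentage.
    if real_mem_usage == 0:
        return 0
    return int(100 - float(real_mem_usage) / mem_size * 100)
```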
--
To view, visit http://gerrit.ovirt.org/26554
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I83adb6ddc28777c50190229e5da728a3bdb3b24e
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <fromani(a)redhat.com>
Change in vdsm[master]: jsonrpc: Stomp support
by smizrahi@redhat.com
Saggi Mizrahi has uploaded a new change for review.
Change subject: jsonrpc: Stomp support
......................................................................
jsonrpc: Stomp support
The mini broker turned out to be much more work than I expected.
This is an intermediate solution. VDSM will accept broker-like
messages but will ignore most of them and not enforce policy.
This is so that we can get the engine talking in the correct way and
work on the mini broker stress-free. This also means that we can release
VDSM without the mini broker, using a message format that is future-proof.
The engine is expected to send the CONNECT and SUBSCRIBE frames
even though VDSM doesn't yet enforce them, because future VDSMs probably
will.
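On the wire, the CONNECT frame described above looks like this (a hedged sketch built from the `accept-version` and `host` headers that `AsyncClient.handle_connect` sends in this patch; the helper name is ours):

```python
def connect_frame(hostname):
    # STOMP frame layout: command line, header lines, a blank line,
    # then a NUL-terminated (here empty) body.
    return ('CONNECT\n'
            'accept-version:1.2\n'
            'host:%s\n'
            '\n\0' % hostname)


frame = connect_frame('vdsm')
```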
Change-Id: I22bcae1e150dea7bc7d9fecefb6847c48bfe8949
Signed-off-by: Saggi Mizrahi <smizrahi(a)redhat.com>
---
M lib/yajsonrpc/__init__.py
M lib/yajsonrpc/betterAsyncore.py
A lib/yajsonrpc/stomp.py
A lib/yajsonrpc/stompReactor.py
M tests/jsonRpcTests.py
M tests/jsonRpcUtils.py
M vdsm_api/BindingJsonRpc.py
7 files changed, 895 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/50/26750/1
diff --git a/lib/yajsonrpc/__init__.py b/lib/yajsonrpc/__init__.py
index fabc5d4..34ff97f 100644
--- a/lib/yajsonrpc/__init__.py
+++ b/lib/yajsonrpc/__init__.py
@@ -466,6 +466,7 @@
self._threadFactory = threadFactory
def queueRequest(self, req):
+ self.log.debug("Queueing request")
self._workQueue.put_nowait(req)
def _serveRequest(self, ctx, req):
@@ -502,11 +503,14 @@
def serve_requests(self):
while True:
+ self.log.debug("Waiting for request")
obj = self._workQueue.get()
+ self.log.debug("Popped request")
if obj is None:
break
client, msg = obj
+ self.log.debug("Parsing message")
self._parseMessage(client, msg)
def _parseMessage(self, client, msg):
@@ -560,4 +564,5 @@
self._threadFactory(partial(self._serveRequest, ctx, request))
def stop(self):
+ self.log.info("Stopping JsonRPC Server")
self._workQueue.put_nowait(None)
diff --git a/lib/yajsonrpc/betterAsyncore.py b/lib/yajsonrpc/betterAsyncore.py
index 8806ee5..1e2c5ba 100644
--- a/lib/yajsonrpc/betterAsyncore.py
+++ b/lib/yajsonrpc/betterAsyncore.py
@@ -41,6 +41,9 @@
self.connection = connection
self.__wctx = wrappedContext
+ def pending(self):
+ return self.connection.pending()
+
def get_context(self):
return self.__wctx
diff --git a/lib/yajsonrpc/stomp.py b/lib/yajsonrpc/stomp.py
new file mode 100644
index 0000000..3e2ecfb
--- /dev/null
+++ b/lib/yajsonrpc/stomp.py
@@ -0,0 +1,522 @@
+import logging
+import socket
+import cStringIO
+from threading import Timer, Event
+from uuid import uuid4
+from collections import deque
+import time
+
+from betterAsyncore import Dispatcher
+import asyncore
+
+
+_ESCAPE_CHARS = (('\\\\', '\\'), ('\\r', '\r'), ('\\n', '\n'), ('\\c', ':'))
+
+
+class Command:
+ MESSAGE = "MESSAGE"
+ SEND = "SEND"
+ SUBSCRIBE = "SUBSCRIBE"
+ UNSUBSCRIBE = "UNSUBSCRIBE"
+ CONNECT = "CONNECT"
+ CONNECTED = "CONNECTED"
+ ERROR = "ERROR"
+ RECEIPT = "RECEIPT"
+
+
+class AckMode:
+ AUTO = "auto"
+
+
+class StompError(RuntimeError):
+ def __init__(self, frame):
+ RuntimeError.__init__(self, frame.body)
+
+
+class Frame(object):
+ def __init__(self, command="", headers=None, body=None):
+ self.command = command
+ if headers is None:
+ headers = {}
+
+ self.headers = headers
+ if isinstance(body, unicode):
+ body = body.encode('utf-8')
+
+ self.body = body
+
+ def encode(self):
+ body = self.body
+ # We do it here so we are sure header is up to date
+ if body is not None:
+ self.headers["content-length"] = len(body)
+
+ data = self.command + '\n'
+ data += '\n'.join(["%s:%s" % (encodeValue(key), encodeValue(value))
+ for key, value in self.headers.iteritems()])
+ data += '\n\n'
+ if body is not None:
+ data += body
+
+ data += "\0"
+ return data
+
+ def __repr__(self):
+ return "<StompFrame command=%s>" % (repr(self.command))
+
+ def copy(self):
+ return Frame(self.command, self.headers.copy(), self.body)
+
+
+def decodeValue(s):
+ s = s.decode('utf-8')
+ for escaped, raw in _ESCAPE_CHARS:
+ s = s.replace(escaped, raw)
+
+ # TODO : Throw error on invalid escape char
+ # spec: Undefined escape sequences such as \t (octet 92 and 116) MUST be
+ # treated as a fatal protocol error.
+ return s
+
+
+def encodeValue(s):
+ if not isinstance(s, (str, unicode)):
+ s = str(s)
+ for escaped, raw in _ESCAPE_CHARS:
+ s = s.replace(raw, escaped)
+
+ if isinstance(s, unicode):
+ s = s.encode('utf-8')
+
+ return s
+
+
+class Parser(object):
+ _STATE_CMD = "Parsing command"
+ _STATE_HEADER = "Parsing headers"
+ _STATE_BODY = "Receiving body"
+
+ def __init__(self):
+ self._states = {
+ self._STATE_CMD: self._parse_command,
+ self._STATE_HEADER: self._parse_header,
+ self._STATE_BODY: self._parse_body}
+ self._frames = deque()
+ self._state = self._STATE_CMD
+ self._contentLength = -1
+ self._flush()
+
+ def _flush(self):
+ self._buffer = cStringIO.StringIO()
+
+ def _handle_terminator(self, term):
+ if term not in self._buffer.getvalue():
+ return None
+
+ data = self._buffer.getvalue()
+ res, rest = data.split(term, 1)
+ self._flush()
+ self._buffer.write(rest)
+
+ return res
+
+ def _parse_command(self):
+ cmd = self._handle_terminator('\n')
+ if cmd is None:
+ return False
+
+ if len(cmd) > 0 and cmd[-1] == '\r':
+ cmd = cmd[:-1]
+
+ if cmd == "":
+ return True
+
+ cmd = decodeValue(cmd)
+ self._tmpFrame = Frame(cmd)
+
+ self._state = self._STATE_HEADER
+ return True
+
+ def _parse_header(self):
+ header = self._handle_terminator('\n')
+ if header is None:
+ return False
+
+ headers = self._tmpFrame.headers
+ if len(header) > 0 and header[-1] == '\r':
+ header = header[:-1]
+
+ if header == "":
+ self._contentLength = int(headers.get('content-length', -1))
+ self._state = self._STATE_BODY
+ return True
+
+ # TODO: Proper error on missing or too many ':'
+ key, value = header.split(":")
+ key = decodeValue(key)
+ value = decodeValue(value)
+
+ # If a client or a server receives repeated frame header entries, only
+ # the first header entry SHOULD be used as the value of header entry.
+ # Subsequent values are only used to maintain a history of state
+ # changes of the header and MAY be ignored.
+ if key not in headers:
+ headers[key] = value
+
+ return True
+
+ def _pushFrame(self):
+ self._frames.append(self._tmpFrame)
+ self._state = self._STATE_CMD
+ self._tmpFrame = None
+ self._contentLength = -1
+
+ def _parse_body(self):
+ if self._contentLength >= 0:
+ return self._parse_body_length()
+ else:
+ return self._parse_body_terminator()
+
+ def _parse_body_terminator(self):
+ body = self._handle_terminator('\0')
+ if body is None:
+ return False
+
+ self._tmpFrame.body = body
+ self._pushFrame()
+ return True
+
+ def _parse_body_length(self):
+ buf = self._buffer
+ cl = self._contentLength
+ ndata = buf.tell()
+ if ndata < cl + 1: # body plus the trailing NUL octet must be buffered
+ return False
+
+ self._flush()
+ data = buf.getvalue()
+ self._buffer.write(data[cl + 1:])
+
+ self._tmpFrame.body = data[:cl]
+ self._pushFrame()
+
+ return True
+
+ @property
+ def pending(self):
+ return len(self._frames)
+
+ def parse(self, data):
+ states = self._states
+ self._buffer.write(data)
+ while states[self._state]():
+ pass
+
+ def popFrame(self):
+ try:
+ return self._frames.popleft()
+ except IndexError:
+ return None
+
+
+class Client(object):
+ def __init__(self, sock=None):
+ """
+ Initialize the client.
+
+ The socket parameter can be an already initialized socket. Should be
+ used to pass specialized socket objects like SSL sockets.
+ """
+ if sock is None:
+ sock = self._sock = socket.socket()
+ else:
+ self._sock = sock
+
+ self._map = {}
+ # Because we don't know how large the frames are
+ # we have to use non-blocking I/O
+ sock.setblocking(False)
+
+ # We have our own timeout for operations we
+ # pretend to be synchronous (like connect).
+ self._timeout = None
+ self._connected = Event()
+ self._subscriptions = {}
+
+ self._aclient = None
+ self._adisp = None
+
+ self._inbox = deque()
+
+ @property
+ def outgoing(self):
+ return self._adisp.outgoing
+
+ def _registerSubscription(self, sub):
+ self._subscriptions[sub.id] = sub
+
+ def _unregisterSubscription(self, sub):
+ del self._subscriptions[sub.id]
+
+ @property
+ def connected(self):
+ return self._connected.isSet()
+
+ def handle_connect(self, aclient, frame):
+ self._connected.set()
+
+ def handle_message(self, aclient, frame):
+ self._inbox.append(frame)
+
+ def process(self, timeout=None):
+ if timeout is None:
+ timeout = self._timeout
+
+ asyncore.loop(use_poll=True, timeout=timeout, map=self._map, count=1)
+
+ def connect(self, address, hostname):
+ sock = self._sock
+
+ self._aclient = AsyncClient(self, hostname)
+ adisp = self._adisp = AsyncDispatcher(self._aclient)
+ disp = self._disp = Dispatcher(adisp, sock, self._map)
+ sock.setblocking(True)
+ disp.connect(address)
+ sock.setblocking(False)
+ self.recv() # wait for CONNECTED msg
+
+ if not self._connected.isSet():
+ sock.close()
+ raise socket.error()
+
+ def recv(self):
+ timeout = self._timeout
+ s = time.time()
+ duration = 0
+ while timeout is None or duration < timeout:
+ try:
+ return self._inbox.popleft()
+ except IndexError:
+ td = timeout - duration if timeout is not None else None
+ self.process(td)
+ duration = time.time() - s
+
+ return None
+
+ def put_subscribe(self, destination, ack=None):
+ subid = self._aclient.subscribe(self._adisp, destination, ack)
+ sub = Subsciption(self, subid, ack)
+ self._registerSubscription(sub)
+ return sub
+
+ def put_send(self, destination, data="", headers=None):
+ self._aclient.send(self._adisp, destination, data, headers)
+
+ def put(self, frame):
+ self._adisp.send_raw(frame)
+
+ def send(self):
+ disp = self._disp
+ timeout = self._timeout
+ duration = 0
+ s = time.time()
+ while ((timeout is None or duration < timeout) and
+ (disp.writable() or not self._connected.isSet())):
+ td = timeout - duration if timeout is not None else None
+ self.process(td)
+ duration = time.time() - s
+
+ def gettimeout(self):
+ return self._timeout
+
+ def settimeout(self, value):
+ self._timeout = value
+
+
+class AsyncDispatcher(object):
+ log = logging.getLogger("stomp.AsyncDispatcher")
+
+ def __init__(self, frameHandler, bufferSize=1024):
+ self._frameHandler = frameHandler
+ self._bufferSize = bufferSize
+ self._parser = Parser()
+ self._outbox = deque()
+ self._outbuf = None
+
+ def _queueFrame(self, frame):
+ self._outbox.append(frame)
+
+ @property
+ def outgoing(self):
+ n = len(self._outbox)
+ if self._outbuf is not None:
+ n += 1
+
+ return n
+
+ def handle_connect(self, dispatcher):
+ self._frameHandler.handle_connect(self)
+
+ def handle_read(self, dispatcher):
+ pending = self._bufferSize
+ while pending:
+ try:
+ data = dispatcher.recv(pending)
+ except socket.error:
+ dispatcher.handle_error()
+ return
+
+ try:
+ pending = dispatcher.socket.pending()
+ except AttributeError:
+ pending = 0
+
+ parser = self._parser
+
+ if data is not None:
+ parser.parse(data)
+
+ frameHandler = self._frameHandler
+ if hasattr(frameHandler, "handle_frame"):
+ while parser.pending > 0:
+ frameHandler.handle_frame(self, parser.popFrame())
+
+ def popFrame(self):
+ return self._parser.popFrame()
+
+ def handle_write(self, dispatcher):
+ if self._outbuf is None:
+ try:
+ frame = self._outbox.popleft()
+ except IndexError:
+ return
+
+ self._outbuf = frame.encode()
+
+ data = self._outbuf
+ numSent = dispatcher.send(data)
+ if numSent == len(data):
+ self._outbuf = None
+ else:
+ self._outbuf = data[numSent:]
+
+ def send_raw(self, frame):
+ self._queueFrame(frame)
+
+ def writable(self, dispatcher):
+ return len(self._outbox) > 0 or self._outbuf is not None
+
+ def readable(self, dispatcher):
+ return True
+
+
+class AsyncClient(object):
+ log = logging.getLogger("yajsonrpc.protocols.stomp.AsyncClient")
+
+ def __init__(self, frameHandler, hostname):
+ self._hostname = hostname
+ self._frameHandler = frameHandler
+ self._connected = False
+ self._error = None
+ self._commands = {
+ Command.CONNECTED: self._process_connected,
+ Command.MESSAGE: self._process_message,
+ Command.RECEIPT: self._process_receipt,
+ Command.ERROR: self._process_error}
+
+ @property
+ def connected(self):
+ return self._connected
+
+ def getLastError(self):
+ return self._error
+
+ def handle_connect(self, dispatcher):
+ self.log.debug("Sending CONNECT frame...")
+ hostname = self._hostname
+ frame = Frame(
+ Command.CONNECT,
+ {"accept-version": "1.2",
+ "host": hostname})
+
+ dispatcher.send_raw(frame)
+
+ def handle_frame(self, dispatcher, frame):
+ self._commands[frame.command](frame, dispatcher)
+
+ def _process_connected(self, frame, dispatcher):
+ self._connected = True
+ frameHandler = self._frameHandler
+ if hasattr(frameHandler, "handle_connect"):
+ frameHandler.handle_connect(self, frame)
+
+ self.log.debug("Stomp connection established")
+
+ def _process_message(self, frame, dispatcher):
+ frameHandler = self._frameHandler
+
+ if hasattr(frameHandler, "handle_message"):
+ frameHandler.handle_message(self, frame)
+
+ def _process_receipt(self, frame, dispatcher):
+ # TODO:
+ pass
+
+ def _process_error(self, frame, dispatcher):
+ raise StompError(frame)
+
+ def send(self, dispatcher, destination, data="", headers=None):
+ frame = Frame(
+ Command.SEND,
+ {"destination": destination},
+ data)
+
+ dispatcher.send_raw(frame)
+
+ def subscribe(self, dispatcher, destination, ack=None):
+ if ack is None:
+ ack = AckMode.AUTO
+
+ subscriptionID = str(uuid4())
+
+ frame = Frame(
+ Command.SUBSCRIBE,
+ {"destination": destination,
+ "ack": ack,
+ "id": subscriptionID})
+
+ dispatcher.send_raw(frame)
+
+ return subscriptionID
+
+
+class Subsciption(object):
+ def __init__(self, client, subid, ack):
+ self._ack = ack
+ self._subid = subid
+ self._client = client
+ self._valid = True
+
+ @property
+ def id(self):
+ return self._subid
+
+ def unsubscribe(self):
+ client = self._client
+ subid = self._subid
+
+ client._unregisterSubscription(self)
+
+ frame = Frame(Command.UNSUBSCRIBE,
+ {"id": str(subid)})
+ client.put(frame)
+ self._valid = False
+
+ def __del__(self):
+ # Using a timer because unsubscribe action might involve taking locks.
+ if self._valid:
+ Timer(0, self.unsubscribe).start()
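The `_ESCAPE_CHARS` table at the top of stomp.py drives both directions of header escaping. The roundtrip in isolation (ordering matters: the backslash pair must be processed first when escaping so later replacements are not double-escaped; function names here are ours):

```python
ESCAPE_CHARS = (('\\\\', '\\'), ('\\r', '\r'), ('\\n', '\n'), ('\\c', ':'))


def escape_header(s):
    # Replace raw characters with their STOMP escape sequences.
    for escaped, raw in ESCAPE_CHARS:
        s = s.replace(raw, escaped)
    return s


def unescape_header(s):
    # Inverse mapping, as in decodeValue above.
    for escaped, raw in ESCAPE_CHARS:
        s = s.replace(escaped, raw)
    return s
```

Note that sequential replacement is a simplification: a strict STOMP 1.2 parser decodes escape sequences in a single left-to-right pass.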
diff --git a/lib/yajsonrpc/stompReactor.py b/lib/yajsonrpc/stompReactor.py
new file mode 100644
index 0000000..7448d76
--- /dev/null
+++ b/lib/yajsonrpc/stompReactor.py
@@ -0,0 +1,335 @@
+# Copyright (C) 2014 Saggi Mizrahi, Red Hat Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+
+import struct
+import asyncore
+import socket
+import os
+import threading
+import logging
+
+import stomp
+
+from betterAsyncore import \
+ Dispatcher, \
+ SSLDispatcher, \
+ SSLContext
+
+
+_Size = struct.Struct("!Q")
+
+_STATE_LEN = "Waiting for message length"
+_STATE_MSG = "Waiting for message"
+
+
+_DEFAULT_RESPONSE_DESTINATIOM = "/queue/_local/vdsm/reponses"
+_DEFAULT_REQUEST_DESTINATION = "/queue/_local/vdsm/requests"
+
+
+def _SSLContextFactory(ctxdef):
+ """Creates an appropriate ssl context from the generic definition
+ defined in __init__.py"""
+ return SSLContext(cert_file=ctxdef.cert_file,
+ key_file=ctxdef.key_file,
+ ca_cert=ctxdef.ca_cert,
+ session_id=ctxdef.session_id,
+ protocol=ctxdef.protocol)
+
+
+class StompAdapterImpl(object):
+ log = logging.getLogger("Broker.StompAdapter")
+
+ def __init__(self, reactor, messageHandler):
+ self._reactor = reactor
+ self._messageHandler = messageHandler
+ self._commands = {
+ stomp.Command.CONNECT: self._cmd_connect,
+ stomp.Command.SEND: self._cmd_send,
+ stomp.Command.SUBSCRIBE: self._cmd_subscribe,
+ stomp.Command.UNSUBSCRIBE: self._cmd_unsubscribe}
+
+ def _cmd_connect(self, dispatcher, frame):
+ self.log.info("Processing CONNECT request")
+ version = frame.headers.get("accept-version", None)
+ if version != "1.2":
+ res = stomp.Frame(stomp.Command.ERROR, None, "Version unsupported")
+
+ else:
+ res = stomp.Frame(stomp.Command.CONNECTED, {"version": "1.2"})
+
+ dispatcher.send_raw(res)
+ self.log.info("CONNECT response queued")
+ self._reactor.wakeup()
+
+ def _cmd_subscribe(self, dispatcher, frame):
+ self.log.debug("Subscribe command ignored")
+
+ def _cmd_unsubscribe(self, dispatcher, frame):
+ self.log.debug("Unsubscribe command ignored")
+
+ def _cmd_send(self, dispatcher, frame):
+ self.log.debug("Passing incoming message")
+ self._messageHandler(self, frame.body)
+
+ def handle_frame(self, dispatcher, frame):
+ self.log.debug("Handling message %s", frame)
+ try:
+ self._commands[frame.command](dispatcher, frame)
+ except KeyError:
+ self.log.warn("Unknown command %s", frame)
+ dispatcher.handle_error()
+
+
+class StompServer(object):
+ log = logging.getLogger("yajsonrpc.StompServer")
+
+ def __init__(self, sock, reactor, addr, sslctx=None):
+ self._addr = addr
+ self._reactor = reactor
+ self._messageHandler = None
+
+ adapter = StompAdapterImpl(reactor, self._handleMessage)
+ adisp = self._adisp = stomp.AsyncDispatcher(adapter)
+ if sslctx is None:
+ try:
+ sslctx = sock.get_context()
+ except AttributeError:
+ # if get_context() is missing it just means we received a
+ # socket that doesn't use SSL
+ pass
+ else:
+ sslctx = _SSLContextFactory(sslctx)
+
+ if sslctx is None:
+ dispatcher = Dispatcher(adisp, sock=sock, map=reactor._map)
+ else:
+ dispatcher = SSLDispatcher(adisp, sslctx, sock=sock,
+ map=reactor._map)
+
+ if sock is None:
+ address_family = socket.getaddrinfo(*addr)[0][0]
+ dispatcher.create_socket(address_family, socket.SOCK_STREAM)
+
+ self._dispatcher = dispatcher
+
+ def setTimeout(self, timeout):
+ self._dispatcher.socket.settimeout(timeout)
+
+ def connect(self):
+ self._dispatcher.connect(self._addr)
+
+ def _handleMessage(self, impl, data):
+ if self._messageHandler is not None:
+ self.log.debug("Processing incoming message")
+ self._messageHandler((self, data))
+
+ def setMessageHandler(self, msgHandler):
+ self._messageHandler = msgHandler
+
+ def send(self, message):
+ self.log.debug("Sending response")
+ res = stomp.Frame(stomp.Command.MESSAGE,
+ {"destination": _DEFAULT_RESPONSE_DESTINATIOM,
+ "content-type": "application/json"},
+ message)
+ self._adisp.send_raw(res)
+ self._reactor.wakeup()
+
+ def close(self):
+ self._dispatcher.close()
+
+
+class StompClient(object):
+ log = logging.getLogger("jsonrpc.AsyncoreClient")
+
+ def __init__(self, sock, reactor, addr, sslctx=None):
+ self._addr = addr
+ self._reactor = reactor
+ self._messageHandler = None
+
+ self._aclient = stomp.AsyncClient(self, "vdsm")
+ adisp = self._adisp = stomp.AsyncDispatcher(self._aclient)
+ if sslctx is None:
+ try:
+ sslctx = sock.get_context()
+ except AttributeError:
+ # if get_context() is missing it just means we received a
+ # socket that doesn't use SSL
+ pass
+ else:
+ sslctx = _SSLContextFactory(sslctx)
+
+ if sslctx is None:
+ dispatcher = Dispatcher(adisp, sock=sock, map=reactor._map)
+ else:
+ dispatcher = SSLDispatcher(adisp, sslctx, sock=sock,
+ map=reactor._map)
+
+ if sock is None:
+ address_family = socket.getaddrinfo(*addr)[0][0]
+ dispatcher.create_socket(address_family, socket.SOCK_STREAM)
+
+ self._dispatcher = dispatcher
+
+ def setTimeout(self, timeout):
+ self._dispatcher.socket.settimeout(timeout)
+
+ def connect(self):
+ self._dispatcher.connect(self._addr)
+
+ def handle_message(self, impl, frame):
+ if self._messageHandler is not None:
+ self.log.debug("Queueing incoming message")
+ self._messageHandler((self, frame.body))
+
+ def setMessageHandler(self, msgHandler):
+ self._messageHandler = msgHandler
+
+ def send(self, message):
+ self._aclient.send(self._adisp, _DEFAULT_REQUEST_DESTINATION,
+ message,
+ {"content-type": "application/json"})
+ self._reactor.wakeup()
+ self.log.debug("Message queued for delivery")
+
+ def close(self):
+ self._dispatcher.close()
+
+
+def StompListener(reactor, address, acceptHandler, sslctx=None):
+ impl = StompListenerImpl(reactor, address, acceptHandler)
+ if sslctx is None:
+ return Dispatcher(impl, map=reactor._map)
+ else:
+ sslctx = _SSLContextFactory(sslctx)
+ return SSLDispatcher(impl, sslctx, map=reactor._map)
+
+
+# FIXME: We should go about making a listener wrapper like the client wrapper
+# This is not as high priority as users don't interact with listeners
+# as much
+class StompListenerImpl(object):
+ log = logging.getLogger("jsonrpc.StompListener")
+
+ def __init__(self, reactor, address, acceptHandler):
+ self._reactor = reactor
+ self._address = address
+ self._acceptHandler = acceptHandler
+
+ def init(self, dispatcher):
+ address_family = socket.getaddrinfo(*self._address)[0][0]
+ dispatcher.create_socket(address_family, socket.SOCK_STREAM)
+
+ dispatcher.set_reuse_addr()
+ dispatcher.bind(self._address)
+ dispatcher.listen(5)
+
+ def handle_accept(self, dispatcher):
+ try:
+ pair = dispatcher.accept()
+ except Exception as e:
+ self.log.exception(e)
+ raise
+ if pair is None:
+ return
+
+ sock, addr = pair
+ self.log.debug("Accepting connection from client "
+ "at tcp://%s:%s", addr[0], addr[1])
+
+ client = StompServer(sock, self._reactor, addr)
+ self._acceptHandler(self, client)
+
+ def writable(self, dispatcher):
+ return False
+
+
+class _AsyncoreEvent(asyncore.file_dispatcher):
+ def __init__(self, map):
+ self._lock = threading.Lock()
+ r, w = os.pipe()
+ self._w = w
+ try:
+ asyncore.file_dispatcher.__init__(self, r, map=map)
+ except:
+ os.close(r)
+ os.close(w)
+ raise
+
+ # the file_dispatcher ctor dups the file in order to take ownership of
+ # it
+ os.close(r)
+ self._isSet = False
+
+ def writable(self):
+ return False
+
+ def set(self):
+ with self._lock:
+ if self._isSet:
+ return
+
+ self._isSet = True
+
+ os.write(self._w, "a")
+
+ def handle_read(self):
+ with self._lock:
+ self.recv(1)
+ self._isSet = False
+
+ def close(self):
+ try:
+ os.close(self._w)
+ except (OSError, IOError):
+ pass
+
+ asyncore.file_dispatcher.close(self)
+
+
+class StompReactor(object):
+ def __init__(self, sslctx=None):
+ self.sslctx = sslctx
+ self._map = {}
+ self._isRunning = False
+ self._wakeupEvent = _AsyncoreEvent(self._map)
+
+ def createListener(self, address, acceptHandler):
+ listener = StompListener(self, address, acceptHandler, self.sslctx)
+ self.wakeup()
+ return listener
+
+ def createClient(self, address):
+ return StompClient(None, self, address, self.sslctx)
+
+ def process_requests(self):
+ self._isRunning = True
+ while self._isRunning:
+ asyncore.loop(use_poll=True, map=self._map, count=1)
+
+ for key, dispatcher in self._map.items():
+ del self._map[key]
+ dispatcher.close()
+
+ def wakeup(self):
+ self._wakeupEvent.set()
+
+ def stop(self):
+ self._isRunning = False
+ try:
+ self.wakeup()
+ except (IOError, OSError):
+ # Client woke up and closed the event dispatcher without our help
+ pass
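A note on the `_AsyncoreEvent` class above: it is an instance of the classic
self-pipe trick — writing a single byte to a pipe that sits in the poll set
makes a blocking poll() return, so the reactor loop can notice queued work.
A minimal standalone sketch of the same idea (using the stdlib `selectors`
module instead of `asyncore`, purely for illustration; names are not vdsm's):

```python
import os
import selectors

# Self-pipe: the read end is registered with the poll loop; writing to
# the write end wakes any thread blocked in select()/poll().
r, w = os.pipe()
sel = selectors.DefaultSelector()
sel.register(r, selectors.EVENT_READ)

os.write(w, b"a")             # the "set" side: wake the loop up
events = sel.select(timeout=1.0)
assert events                 # poll returned because of the pipe byte
data = os.read(r, 1)          # the "handle_read" side: drain the byte
assert data == b"a"

sel.close()
os.close(r)
os.close(w)
```

The drain step matters: without consuming the byte, the descriptor stays
readable and the loop would spin, which is why `_AsyncoreEvent.handle_read`
does `self.recv(1)` and clears `_isSet` under the lock.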
diff --git a/tests/jsonRpcTests.py b/tests/jsonRpcTests.py
index 3b63c01..3eb210b 100644
--- a/tests/jsonRpcTests.py
+++ b/tests/jsonRpcTests.py
@@ -149,7 +149,10 @@
class _DummyBridge(object):
+ log = logging.getLogger("tests.DummyBridge")
+
def echo(self, text):
+ self.log.info("ECHO: '%s'", text)
return text
def ping(self):
@@ -185,6 +188,7 @@
bridge = _DummyBridge()
with constructServer(rt, bridge, ssl) as (server, clientFactory):
with self._client(clientFactory) as client:
+ self.log.info("Calling 'echo'")
self.assertEquals(self._callTimeout(client, "echo", (data,),
apiTests.id,
CALL_TIMEOUT), data)
diff --git a/tests/jsonRpcUtils.py b/tests/jsonRpcUtils.py
index 6d6dcf2..e3f54f6 100644
--- a/tests/jsonRpcUtils.py
+++ b/tests/jsonRpcUtils.py
@@ -10,6 +10,7 @@
from yajsonrpc import \
JsonRpcServer, \
asyncoreReactor, \
+ stompReactor, \
JsonRpcClientPool, \
SSLContext
@@ -36,6 +37,14 @@
@contextmanager
+def _stompServerConstructor():
+ port = getFreePort()
+ address = ("127.0.0.1", port)
+ reactorType = stompReactor.StompReactor
+ yield reactorType, address
+
+
+@contextmanager
def _tcpServerConstructor():
port = getFreePort()
address = ("127.0.0.1", port)
@@ -56,7 +65,8 @@
REACTOR_CONSTRUCTORS = {"tcp": _tcpServerConstructor,
- "amqp": _protonServerConstructor}
+ "amqp": _protonServerConstructor,
+ "stomp": _stompServerConstructor}
REACTOR_TYPE_PERMUTATIONS = [[r] for r in REACTOR_CONSTRUCTORS.iterkeys()]
CONNECTION_PERMUTATIONS = tuple(product(REACTOR_CONSTRUCTORS.iterkeys(),
(True, False)))
diff --git a/vdsm_api/BindingJsonRpc.py b/vdsm_api/BindingJsonRpc.py
index 6c0fa47..471cb70 100644
--- a/vdsm_api/BindingJsonRpc.py
+++ b/vdsm_api/BindingJsonRpc.py
@@ -21,6 +21,7 @@
from yajsonrpc import JsonRpcServer
from yajsonrpc.asyncoreReactor import AsyncoreReactor
+from yajsonrpc.stompReactor import StompReactor
from yajsonrpc.betterAsyncore import SSLContext
ProtonReactor = None
try:
@@ -49,6 +50,9 @@
if backendType not in reactors:
if backendType == "tcp":
reactors["tcp"] = self._createTcpReactor(truststore_path)
+ elif backendType == "stomp":
+ reactors["stomp"] = \
+ self._createStompReactor(truststore_path)
elif backendType == "amqp":
if ProtonReactor is None:
continue
@@ -84,6 +88,15 @@
ca_cert = truststore_path + '/certs/cacert.pem'
return AsyncoreReactor(SSLContext(cert_file, key_file, ca_cert))
+ def _createStompReactor(self, truststore_path=None):
+ if truststore_path is None:
+ return StompReactor()
+ else:
+ key_file = truststore_path + '/keys/vdsmkey.pem'
+ cert_file = truststore_path + '/certs/vdsmcert.pem'
+ ca_cert = truststore_path + '/certs/cacert.pem'
+ return StompReactor(SSLContext(cert_file, key_file, ca_cert))
+
def _createProtonReactor(self):
return ProtonReactor()
@@ -104,6 +117,8 @@
try:
if backendType == "tcp":
self._createTcpListener(cfg)
+ elif backendType == "stomp":
+ self._createStompListener(cfg)
elif backendType == "amqp":
self._createProtonListener(cfg)
except:
--
To view, visit http://gerrit.ovirt.org/26750
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I22bcae1e150dea7bc7d9fecefb6847c48bfe8949
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: drop ominous log for libvirt errors
by Dan Kenigsberg
Dan Kenigsberg has uploaded a new change for review.
Change subject: drop ominous log for libvirt errors
......................................................................
drop ominous log for libvirt errors
Patch http://gerrit.ovirt.org/13990 has introduced a log line whenever a
libvirt exception flows through our libvirtconnection.
E.g. Unknown libvirterror: ecode: 9 edom: 19 level: 2 message: operation
failed: network 'vdsm-ovirtmgmt' already exists...
The log line appears out of context and is repeated once its traceback
is properly caught.
Change-Id: Icd2a53ffee7fb78cb1c8d171093e93e233ed5ad4
Signed-off-by: Dan Kenigsberg <danken(a)redhat.com>
---
M lib/vdsm/libvirtconnection.py
1 file changed, 0 insertions(+), 4 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/24/17324/1
diff --git a/lib/vdsm/libvirtconnection.py b/lib/vdsm/libvirtconnection.py
index cdba57c..9891691 100644
--- a/lib/vdsm/libvirtconnection.py
+++ b/lib/vdsm/libvirtconnection.py
@@ -95,10 +95,6 @@
if killOnFailure:
log.error('taking calling process down.')
os.kill(os.getpid(), signal.SIGTERM)
- else:
- log.debug('Unknown libvirterror: ecode: %d edom: %d '
- 'level: %d message: %s', ecode, edom,
- e.get_error_level(), e.get_error_message())
raise
wrapper.__name__ = f.__name__
wrapper.__doc__ = f.__doc__
--
To view, visit http://gerrit.ovirt.org/17324
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Icd2a53ffee7fb78cb1c8d171093e93e233ed5ad4
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Dan Kenigsberg <danken(a)redhat.com>
Change in vdsm[master]: IGNORE: testing gerrit triggered jobs
by dcaroest@redhat.com
David Caro has uploaded a new change for review.
Change subject: IGNORE: testing gerrit triggered jobs
......................................................................
IGNORE: testing gerrit triggered jobs
Change-Id: Ic8946fd09d50e68f5111860a90572c12d14c7326
Signed-off-by: David Caro <dcaroest(a)redhat.com>
---
M vdsm/sos/vdsm.py.in
1 file changed, 5 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/70/26570/1
diff --git a/vdsm/sos/vdsm.py.in b/vdsm/sos/vdsm.py.in
index d90940f..b94bd6a 100644
--- a/vdsm/sos/vdsm.py.in
+++ b/vdsm/sos/vdsm.py.in
@@ -119,9 +119,11 @@
self.collectExtOutput(vdsclient + "getDeviceList")
self.collectExtOutput(vdsclient + "getAllTasksInfo")
self.collectExtOutput(vdsclient + "getAllTasksStatuses")
- p = subprocess.Popen(vdsclient + "getConnectedStoragePoolsList",
- shell=True, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
+ p = subprocess.Popen(
+ vdsclient + "getConnectedStoragePoolsList",
+ shell=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines()[1:-1]:
pool = line.strip()
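The reformatted call above feeds the vdsClient output through
`communicate()` and drops the listing's header and footer lines. The same
parse pattern, standalone (the shell command is only a stand-in for
`vdsClient getConnectedStoragePoolsList`):

```python
import subprocess

# Stand-in command producing a header line, two pool lines, and a
# trailing line, shaped like the vdsClient listing parsed above.
p = subprocess.Popen(
    "printf 'Pool list:\\npool-1\\npool-2\\n\\n'",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True)
out, err = p.communicate()
# Mirror the sos plugin: skip the first and last lines of the listing
pools = [line.strip() for line in out.splitlines()[1:-1]]
```

`pools` then holds only the pool identifiers, which the plugin goes on to
use for the per-pool `getStoragePoolInfo` calls.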
--
To view, visit http://gerrit.ovirt.org/26570
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic8946fd09d50e68f5111860a90572c12d14c7326
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: David Caro <dcaroest(a)redhat.com>
Change in vdsm[master]: multipath: Remove unneeded and dangerous -r parameter
by Nir Soffer
Nir Soffer has uploaded a new change for review.
Change subject: multipath: Remove unneeded and dangerous -r parameter
......................................................................
multipath: Remove unneeded and dangerous -r parameter
Since commit dbf2089488 (Jul 9 2013) the multipath call was changed to use
the -r flag, forcing a reload of the device map. This was tested to fix
a case where new lun is created on the storage server, while a host was
connected, and the new device is not available when issuing the
getDeviceList command. According to a comment on gerrit, the change was
tested for ISCSI and FC storage types, but there is no documentation of
the testing procedure. The related bug was verified, but has no
information about how it was verified.
We have two related bugs:
- Bug 1078879 tells us that invoking multipath with the -r flag sometimes
triggers a segfault in the multipathd daemon. In the bug, multipath
developer suggests that as long as multipathd daemon is running,
there is no need to invoke multipath to detect new devices, and
"multipath -r really isn't useful for much of anything".
- Bug 1071654 tells us that device rescanning is broken on FC storage
domains (although the -r flag is used). I reproduced this bug using a
storage QE FC server.
This patch removes the -r flag. To be on the safe side, I left the
multipath call as it has been since the first multipath commit in 2009. We
will work with kernel and multipath developers further on removing this
call if it is indeed unneeded.
Bug-Url: https://bugzilla.redhat.com/1078879
Relates-to: https://bugzilla.redhat.com/1071654
Relates-to: http://gerrit.ovirt.org/17263
Change-Id: I880ab5343df3e0030638901e188320b20570747d
Signed-off-by: Nir Soffer <nsoffer(a)redhat.com>
---
M vdsm/storage/multipath.py
1 file changed, 1 insertion(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/42/27242/1
diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
index 69a758e..ba98866 100644
--- a/vdsm/storage/multipath.py
+++ b/vdsm/storage/multipath.py
@@ -107,8 +107,7 @@
supervdsm.getProxy().hbaRescan()
# Now let multipath daemon pick up new devices
- cmd = [constants.EXT_MULTIPATH, "-r"]
- misc.execCmd(cmd, sudo=True)
+ misc.execCmd([constants.EXT_MULTIPATH], sudo=True)
def isEnabled():
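Outside vdsm, the resulting bare invocation boils down to a plain
subprocess call. A hedged sketch of the equivalent (the binary path and the
absence of the sudo wrapper are assumptions made for illustration; vdsm
itself goes through `misc.execCmd` with `sudo=True`):

```python
import subprocess


def reload_multipath(binary="/usr/sbin/multipath"):
    # Bare invocation, no "-r": per the commit message, multipathd is
    # expected to pick up new devices on its own, and "-r" can trigger
    # a segfault in the daemon (bug 1078879).
    p = subprocess.Popen([binary],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    return p.returncode
```

The `binary` parameter exists only so the sketch can be exercised on a
machine without multipath installed.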
--
To view, visit http://gerrit.ovirt.org/27242
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I880ab5343df3e0030638901e188320b20570747d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsoffer(a)redhat.com>