[vdsm/f18] update to vdsm-4.10.3-6

Federico Simoncelli fsimonce at fedoraproject.org
Wed Jan 30 16:39:56 UTC 2013


commit fb9d01be8b64fd021822468460b6d48b5ff0c522
Author: Federico Simoncelli <fsimonce at redhat.com>
Date:   Wed Jan 30 17:38:19 2013 +0100

    update to vdsm-4.10.3-6
    
    - Explicitly shutdown m2crypto socket
    - spec: require policycoreutils and skip sebool errors
    - spec: requires selinux-policy to avoid selinux failure
    - vdsmd.service: require either ntpd or chronyd
    - isRunning didn't check local variable before reading
    - udev: Race fix: load and trigger dev rule (#891300)
    - Change scsi_id command path to be configured at runtime (#886087)
    - upgrade: force upgrade to v2 before upgrading to v3 (#893184)
    - misc: rename safelease to clusterlock
    - domain: select the cluster lock using makeClusterLock
    - clusterlock: add the local locking implementation (#877715)
    - upgrade: catch MetaDataKeyNotFoundError when preparing
    - vdsm.spec: Require openssl (#905728)
    - Fedora 18: require a newer udev
    - fix sloppy backport of safelease rename
    - removing the use of zombie reaper from supervdsm

 0012-Explicitly-shutdown-m2crypto-socket.patch     |   55 ++
 ...re-policycoreutils-and-skip-sebool-errors.patch |   59 ++
 ...es-selinux-policy-to-avoid-selinux-failur.patch |   42 +
 ...md.service-require-either-ntpd-or-chronyd.patch |   36 +
 ...idn-t-check-local-variable-before-reading.patch |   44 +
 0017-udev-Race-fix-load-and-trigger-dev-rule.patch |  115 +++
 ..._id-command-path-to-be-configured-at-runt.patch |  171 ++++
 ...orce-upgrade-to-v2-before-upgrading-to-v3.patch |   91 ++
 0020-misc-rename-safelease-to-clusterlock.patch    |  866 ++++++++++++++++++++
 ...ct-the-cluster-lock-using-makeClusterLock.patch |  151 ++++
 ...lock-add-the-local-locking-implementation.patch |  225 +++++
 ...ch-MetaDataKeyNotFoundError-when-preparin.patch |   38 +
 0024-vdsm.spec-Require-openssl.patch               |   31 +
 0025-Fedora-18-require-a-newer-udev.patch          |   36 +
 0026-fix-sloppy-backport-of-safelease-rename.patch |   40 +
 ...g-the-use-of-zombie-reaper-from-supervdsm.patch |   52 ++
 vdsm.spec                                          |   72 ++-
 17 files changed, 2117 insertions(+), 7 deletions(-)
---
diff --git a/0012-Explicitly-shutdown-m2crypto-socket.patch b/0012-Explicitly-shutdown-m2crypto-socket.patch
new file mode 100644
index 0000000..00ac452
--- /dev/null
+++ b/0012-Explicitly-shutdown-m2crypto-socket.patch
@@ -0,0 +1,55 @@
+From dcdc1ce83f0f4d426d31401ca14fb8c685150c45 Mon Sep 17 00:00:00 2001
+From: Andrey Gordeev <dreyou at gmail.com>
+Date: Mon, 14 Jan 2013 10:30:52 +0100
+Subject: [PATCH 12/22] Explicitly shutdown m2crypto socket
+
+Apparently some versions of the m2crypto library don't correctly shut
+down the underlying sockets when an SSL connection is closed.
+
+In Python 2.6.6 (the version in RHEL6 and in CentOS6) when the XML RPC
+server closes a connection it calls the shutdown method on that
+connection with sock.SHUT_WR as the parameter. This works fine for plain
+sockets, and works well also for SSL sockets using the builtin ssl
+module as it translates the call to shutdown to a complete shutdown of
+the SSL connection. But m2crypto does a different translation and the
+net result is that the underlying SSL connection is not completely
+closed.
+
+In Python 2.7.3 (the version in Fedora 18) when the XML RPC server
+closes a connection it calls the shutdown method on that connection with
+sock.SHUT_RDWR, so no matter what SSL implementation is used the
+underlying SSL connection is completely closed.
+
+This patch changes the SSLSocket class so that it explicitly shuts down
+and closes the underlying socket when the connection is closed.
+
+Change-Id: Ie1a471aaccb32554b94340ebfb92b9d7ba14407a
+Signed-off-by: Juan Hernandez <juan.hernandez at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10972
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Tested-by: Dan Kenigsberg <danken at redhat.com>
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11384
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/SecureXMLRPCServer.py | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/vdsm/SecureXMLRPCServer.py b/vdsm/SecureXMLRPCServer.py
+index 2de1cf7..bad2067 100644
+--- a/vdsm/SecureXMLRPCServer.py
++++ b/vdsm/SecureXMLRPCServer.py
+@@ -57,6 +57,10 @@ class SSLSocket(object):
+     def gettimeout(self):
+         return self.connection.socket.gettimeout()
+ 
++    def close(self):
++        self.connection.shutdown(socket.SHUT_RDWR)
++        self.connection.close()
++
+     def __getattr__(self, name):
+         # This is how we delegate all the rest of the methods to the
+         # underlying SSL connection:
+-- 
+1.8.1
+
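
The pattern above in a nutshell: a minimal, self-contained sketch of a
delegating socket wrapper whose close() shuts down both directions
explicitly. The class and attribute names mirror the patch; everything
else (including the name SSLSocketSketch) is illustrative, not the
actual vdsm code.

    import socket

    class SSLSocketSketch(object):
        """Delegating wrapper that fully shuts down on close()."""

        def __init__(self, connection):
            # 'connection' stands in for the m2crypto SSL connection.
            self.connection = connection

        def close(self):
            # SHUT_RDWR closes both directions; the server's SHUT_WR
            # alone can leave an m2crypto connection half-open.
            self.connection.shutdown(socket.SHUT_RDWR)
            self.connection.close()

        def __getattr__(self, name):
            # Everything else is delegated to the wrapped connection.
            return getattr(self.connection, name)
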
diff --git a/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch b/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
new file mode 100644
index 0000000..d8f3a98
--- /dev/null
+++ b/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
@@ -0,0 +1,59 @@
+From f28df85573914d1ccb57fdc7bae5121a9a24576c Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Tue, 11 Dec 2012 04:01:21 -0500
+Subject: [PATCH 13/22] spec: require policycoreutils and skip sebool errors
+
+In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+disabled, we now require policycoreutils 2.1.13-44 (or newer) on
+Fedora. Additionally, we now skip any errors in the rpm scriptlets for
+the sebool configuration (sebool-config), since they could interfere
+with the rpm installation and potentially leave multiple packages
+installed.
+
+Change-Id: Iefd5f53c9118eeea6817ce9660ea18abcfd1955c
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/9840
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11363
+---
+ vdsm.spec.in | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 5b13419..e153880 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -131,6 +131,12 @@ Requires: selinux-policy-targeted >= 3.10.0-149
+ Requires: lvm2 >= 2.02.95
+ %endif
+ 
++# In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
++# disabled, we now require policycoreutils 2.1.13-44 (or newer) on Fedora.
++%if 0%{?fedora} >= 18
++Requires: policycoreutils >= 2.1.13-44
++%endif
++
+ Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
+ Requires: psmisc >= 22.6-15
+ Requires: fence-agents
+@@ -479,7 +485,7 @@ export LC_ALL=C
+ /usr/sbin/usermod -a -G %{qemu_group},%{vdsm_group} %{snlk_user}
+ 
+ %post
+-%{_bindir}/vdsm-tool sebool-config
++%{_bindir}/vdsm-tool sebool-config || :
+ # set the vdsm "secret" password for libvirt
+ %{_bindir}/vdsm-tool set-saslpasswd
+ 
+@@ -521,7 +527,7 @@ then
+     /bin/sed -i '/# VDSM section begin/,/# VDSM section end/d' \
+         /etc/sysctl.conf
+ 
+-    %{_bindir}/vdsm-tool sebool-unconfig
++    %{_bindir}/vdsm-tool sebool-unconfig || :
+ 
+     /usr/sbin/saslpasswd2 -p -a libvirt -d vdsm at ovirt
+ 
+-- 
+1.8.1
+
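
The trailing "|| :" added to the sebool-config calls is the usual shell
idiom for "run this step best-effort and never fail the scriptlet". A
rough Python rendering of the same pattern (the function name and the
warning messages are illustrative only):

    import subprocess

    def best_effort(cmd):
        # Equivalent of `cmd || :` in an rpm scriptlet: report a
        # failure, but never propagate it and abort the transaction.
        try:
            rc = subprocess.call(cmd)
            if rc != 0:
                print("warning: %s exited with status %d" % (cmd[0], rc))
        except OSError as e:
            print("warning: could not run %s: %s" % (cmd[0], e))

    best_effort(["/usr/bin/vdsm-tool", "sebool-config"])
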
diff --git a/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch b/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
new file mode 100644
index 0000000..360ec83
--- /dev/null
+++ b/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
@@ -0,0 +1,42 @@
+From d28299352e54fe6615aba70280f76d25a088f851 Mon Sep 17 00:00:00 2001
+From: Mark Wu <wudxw at linux.vnet.ibm.com>
+Date: Wed, 23 Jan 2013 10:55:47 +0800
+Subject: [PATCH 14/22] spec: requires selinux-policy to avoid selinux failure
+ on access tls cert
+
+selinux-policy tightened up the security on svirt_t on Fedora 18. As a
+result, svirt_t is no longer allowed to access cert_t files, which
+blocks qemu from running a spice server with TLS. For more details,
+please see:
+https://bugzilla.redhat.com/show_bug.cgi?id=890345
+
+Change-Id: I9fe74c6187e7e9f2a8c0b2a824d2871fb5497d86
+Signed-off-by: Mark Wu <wudxw at linux.vnet.ibm.com>
+Reviewed-on: http://gerrit.ovirt.org/11290
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11364
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm.spec.in | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index e153880..dfc2459 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -131,9 +131,10 @@ Requires: selinux-policy-targeted >= 3.10.0-149
+ Requires: lvm2 >= 2.02.95
+ %endif
+ 
++%if 0%{?fedora} >= 18
++Requires: selinux-policy-targeted >= 3.11.1-71
+ # In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+ # disabled, we now require policycoreutils 2.1.13-44 (or newer) on Fedora.
+-%if 0%{?fedora} >= 18
+ Requires: policycoreutils >= 2.1.13-44
+ %endif
+ 
+-- 
+1.8.1
+
diff --git a/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch b/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
new file mode 100644
index 0000000..cceab27
--- /dev/null
+++ b/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
@@ -0,0 +1,36 @@
+From 8a0e831eafedb0aefc6aadab7ac1448cab6b7643 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Wed, 23 Jan 2013 10:27:01 +0200
+Subject: [PATCH 15/22] vdsmd.service: require either ntpd or chronyd
+
+Fedora 18 ships with chronyd by default, which conflicts with ntpd. We
+do not really care which one of the two is running, as long as the host
+clock is synchronized. That's what requiring time-sync.target means.
+
+Change-Id: Ie0605bea6d34c214aea8814a72a03e9ad2883fdb
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11291
+Reviewed-by: Zhou Zheng Sheng <zhshzhou at linux.vnet.ibm.com>
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11366
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/vdsmd.service | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/vdsm/vdsmd.service b/vdsm/vdsmd.service
+index 6a650f4..9823b34 100644
+--- a/vdsm/vdsmd.service
++++ b/vdsm/vdsmd.service
+@@ -1,6 +1,6 @@
+ [Unit]
+ Description=Virtual Desktop Server Manager
+-Requires=multipathd.service libvirtd.service ntpd.service
++Requires=multipathd.service libvirtd.service time-sync.target
+ Conflicts=libvirt-guests.service
+ 
+ [Service]
+-- 
+1.8.1
+
diff --git a/0016-isRunning-didn-t-check-local-variable-before-reading.patch b/0016-isRunning-didn-t-check-local-variable-before-reading.patch
new file mode 100644
index 0000000..c2b85cb
--- /dev/null
+++ b/0016-isRunning-didn-t-check-local-variable-before-reading.patch
@@ -0,0 +1,44 @@
+From c95e492ccef335e82b2eb79495c35d08beab6629 Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Thu, 17 Jan 2013 10:01:54 +0200
+Subject: [PATCH 16/22] isRunning didn't check local variable before reading
+ saved data
+
+The internal svdsm files contained the last svdsm instance's info;
+after a restart we didn't verify the local manager instance before
+processing the operation, and got an AttributeError exception when
+calling the svdsm manager.
+
+This patch returns False when the _svdsm instance is None or during
+first launch.
+
+Change-Id: I9dec0c6955dadcd959cc1c8df4e9745322fb0ce3
+Bug-Id: https://bugzilla.redhat.com/show_bug.cgi?id=890365
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10491
+Reviewed-by: Ayal Baron <abaron at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11135
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/supervdsm.py | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/vdsm/supervdsm.py b/vdsm/supervdsm.py
+index 740a93e..6a38076 100644
+--- a/vdsm/supervdsm.py
++++ b/vdsm/supervdsm.py
+@@ -148,6 +148,9 @@ class SuperVdsmProxy(object):
+         self._firstLaunch = True
+ 
+     def isRunning(self):
++        if self._firstLaunch or self._svdsm is None:
++            return False
++
+         try:
+             with open(self.pidfile, "r") as f:
+                 spid = f.read().strip()
+-- 
+1.8.1
+
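
For illustration, a stripped-down sketch of the failure mode the guard
prevents (the class name and stub body are assumptions, not vdsm code):
calling through self._svdsm while it is still None raises
AttributeError, so isRunning() must bail out first.

    class SuperVdsmProxySketch(object):
        def __init__(self):
            self._svdsm = None        # proxy not connected yet
            self._firstLaunch = True

        def isRunning(self):
            # The added guard: without it the method read state left
            # over from a previous svdsm instance and then called
            # methods on self._svdsm, raising AttributeError.
            if self._firstLaunch or self._svdsm is None:
                return False
            # ... the real method then checks the pidfile ...
            return True
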
diff --git a/0017-udev-Race-fix-load-and-trigger-dev-rule.patch b/0017-udev-Race-fix-load-and-trigger-dev-rule.patch
new file mode 100644
index 0000000..7615ee8
--- /dev/null
+++ b/0017-udev-Race-fix-load-and-trigger-dev-rule.patch
@@ -0,0 +1,115 @@
+From 959a8703937f01161988221940d189acc4f7a796 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Sun, 13 Jan 2013 16:21:55 +0200
+Subject: [PATCH 17/22] udev: Race fix: load and trigger dev rule
+
+The rule file is generated but not yet synchronously loaded into
+memory, so a VM with a direct LUN fails to start.
+This patch reloads the rules before triggering, using the new private
+udev function udevReloadRules() in supervdsmServer.py.
+It also adds a check in appropriateDevice() (hsm.py) to make sure the
+mapping is indeed there.
+
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=891300
+Change-Id: If3b2008a3d9df2dcaf54190721c2dd9764338627
+Signed-off-by: Lee Yarwood <lyarwood at redhat.com>
+Signed-off-by: Vered Volansky <vvolansk at redhat.com>
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11410
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+---
+ vdsm/storage/hsm.py     |  7 +++++++
+ vdsm/supervdsmServer.py | 31 +++++++++++++++++++++++++++++++
+ 2 files changed, 38 insertions(+)
+
+diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
+index efb2749..62e9f74 100644
+--- a/vdsm/storage/hsm.py
++++ b/vdsm/storage/hsm.py
+@@ -67,6 +67,7 @@ import mount
+ import dispatcher
+ import supervdsm
+ import storageServer
++from vdsm import utils
+ 
+ GUID = "guid"
+ NAME = "name"
+@@ -87,6 +88,8 @@ SECTOR_SIZE = 512
+ 
+ STORAGE_CONNECTION_DIR = os.path.join(constants.P_VDSM_RUN, "connections/")
+ 
++QEMU_READABLE_TIMEOUT = 30
++
+ 
+ def public(f=None, **kwargs):
+     if f is None:
+@@ -2925,6 +2928,10 @@ class HSM:
+         """
+         supervdsm.getProxy().appropriateDevice(guid, thiefId)
+         supervdsm.getProxy().udevTrigger(guid)
++        devPath = devicemapper.DMPATH_FORMAT % guid
++        utils.retry(partial(fileUtils.validateQemuReadable, devPath),
++                    expectedException=OSError,
++                    timeout=QEMU_READABLE_TIMEOUT)
+ 
+     @public
+     def inappropriateDevices(self, thiefId):
+diff --git a/vdsm/supervdsmServer.py b/vdsm/supervdsmServer.py
+index dc89218..833e91f 100755
+--- a/vdsm/supervdsmServer.py
++++ b/vdsm/supervdsmServer.py
+@@ -89,6 +89,10 @@ LOG_CONF_PATH = "/etc/vdsm/logger.conf"
+ 
+ class _SuperVdsm(object):
+ 
++    UDEV_WITH_RELOAD_VERSION = 181
++
++    log = logging.getLogger("SuperVdsm.ServerCallback")
++
+     @logDecorator
+     def ping(self, *args, **kwargs):
+         # This method exists for testing purposes
+@@ -226,6 +230,7 @@ class _SuperVdsm(object):
+ 
+     @logDecorator
+     def udevTrigger(self, guid):
++        self.__udevReloadRules(guid)
+         cmd = [EXT_UDEVADM, 'trigger', '--verbose', '--action', 'change',
+                '--property-match=DM_NAME=%s' % guid]
+         rc, out, err = misc.execCmd(cmd, sudo=False)
+@@ -304,6 +309,32 @@ class _SuperVdsm(object):
+     def removeFs(self, path):
+         return mkimage.removeFs(path)
+ 
++    def __udevReloadRules(self, guid):
++        if self.__udevOperationReload():
++            reload = "--reload"
++        else:
++            reload = "--reload-rules"
++        cmd = [EXT_UDEVADM, 'control', reload]
++        rc, out, err = misc.execCmd(cmd, sudo=False)
++        if rc:
++            self.log.error("Udevadm reload-rules command failed rc=%s, "
++                           "out=\"%s\", err=\"%s\"", rc, out, err)
++            raise OSError(errno.EINVAL, "Could not reload-rules for device "
++                          "%s" % guid)
++
++    @utils.memoized
++    def __udevVersion(self):
++        cmd = [EXT_UDEVADM, '--version']
++        rc, out, err = misc.execCmd(cmd, sudo=False)
++        if rc:
++            self.log.error("Udevadm version command failed rc=%s, "
++                           " out=\"%s\", err=\"%s\"", rc, out, err)
++            raise RuntimeError("Could not get udev version number")
++        return int(out[0])
++
++    def __udevOperationReload(self):
++        return self.__udevVersion() > self.UDEV_WITH_RELOAD_VERSION
++
+ 
+ def __pokeParent(parentPid, address, log):
+     try:
+-- 
+1.8.1
+
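
Condensed, the version switch added above does the following: newer
udev renamed "udevadm control --reload-rules" to "--reload", so the
flag is chosen from the installed version. A minimal sketch using plain
subprocess (vdsm itself goes through misc.execCmd):

    import subprocess

    UDEV_WITH_RELOAD_VERSION = 181

    def udev_version():
        out = subprocess.check_output(['udevadm', '--version'])
        return int(out.splitlines()[0])

    def reload_udev_rules():
        # udev releases after 181 accept --reload instead of
        # --reload-rules; pick whichever matches the running version.
        if udev_version() > UDEV_WITH_RELOAD_VERSION:
            flag = '--reload'
        else:
            flag = '--reload-rules'
        subprocess.check_call(['udevadm', 'control', flag])
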
diff --git a/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch b/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
new file mode 100644
index 0000000..2412a47
--- /dev/null
+++ b/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
@@ -0,0 +1,171 @@
+From c7ee9217ec6edebec7b1d3a2536792114fd1a258 Mon Sep 17 00:00:00 2001
+From: Yeela Kaplan <ykaplan at redhat.com>
+Date: Fri, 25 Jan 2013 15:54:07 +0200
+Subject: [PATCH 18/22] Change scsi_id command path to be configured at runtime
+
+On Fedora 18 scsi_id is no longer located at /sbin/scsi_id, so
+we configure vdsm to look up the path at runtime
+and thus remove it from constants.
+
+Change-Id: I409d4da0ba429564466271aded32e96f9401cd6c
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=886087
+Signed-off-by: Yeela Kaplan <ykaplan at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10824
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11393
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ configure.ac              |  2 --
+ vdsm/constants.py.in      | 24 ------------------------
+ vdsm/storage/multipath.py | 45 ++++++++++++++++++++++++++++++++++++++++-----
+ vdsm/sudoers.vdsm.in      |  1 -
+ 4 files changed, 40 insertions(+), 32 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 3489e38..edc0b50 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -167,8 +167,6 @@ AC_PATH_PROG([QEMUIMG_PATH], [qemu-img], [/usr/bin/qemu-img])
+ AC_PATH_PROG([REBOOT_PATH], [reboot], [/usr/bin/reboot])
+ AC_PATH_PROG([RPM_PATH], [rpm], [/bin/rpm])
+ AC_PATH_PROG([RSYNC_PATH], [rsync], [/usr/bin/rsync])
+-AC_PATH_PROG([SCSI_ID_PATH], [scsi_id], [/sbin/scsi_id],
+-             [$PATH$PATH_SEPARATOR/lib/udev])
+ AC_PATH_PROG([SED_PATH], [sed], [/bin/sed])
+ AC_PATH_PROG([SERVICE_PATH], [service], [/sbin/service])
+ AC_PATH_PROG([SETSID_PATH], [setsid], [/usr/bin/setsid])
+diff --git a/vdsm/constants.py.in b/vdsm/constants.py.in
+index 8034b8e..ec5fff9 100644
+--- a/vdsm/constants.py.in
++++ b/vdsm/constants.py.in
+@@ -127,7 +127,6 @@ EXT_QEMUIMG = '@QEMUIMG_PATH@'
+ EXT_REBOOT = '@REBOOT_PATH@'
+ EXT_RSYNC = '@RSYNC_PATH@'
+ 
+-EXT_SCSI_ID = '@SCSI_ID_PATH@'  # TBD !
+ EXT_SERVICE = '@SERVICE_PATH@'
+ EXT_SETSID = '@SETSID_PATH@'
+ EXT_SH = '/bin/sh'  # The shell path is invariable
+@@ -157,26 +156,3 @@ CMD_LOWPRIO = [EXT_NICE, '-n', '19', EXT_IONICE, '-c', '3']
+ STRG_ISCSI_HOST = "iscsi_host/"
+ STRG_SCSI_HOST = "scsi_host/"
+ STRG_ISCSI_SESSION = "iscsi_session/"
+-STRG_MPATH_CONF = (
+-    "\n\n"
+-    "defaults {\n"
+-    "    polling_interval        5\n"
+-    "    getuid_callout          \"@SCSI_ID_PATH@ --whitelisted "
+-                                    "--replace-whitespace --device=/dev/%n\"\n"
+-    "    no_path_retry           fail\n"
+-    "    user_friendly_names     no\n"
+-    "    flush_on_last_del       yes\n"
+-    "    fast_io_fail_tmo        5\n"
+-    "    dev_loss_tmo            30\n"
+-    "    max_fds                 4096\n"
+-    "}\n"
+-    "\n"
+-    "devices {\n"
+-    "device {\n"
+-    "    vendor                  \"HITACHI\"\n"
+-    "    product                 \"DF.*\"\n"
+-    "    getuid_callout          \"@SCSI_ID_PATH@ --whitelisted "
+-                                    "--replace-whitespace --device=/dev/%n\"\n"
+-    "}\n"
+-    "}"
+-)
+diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
+index 741f1a1..05fd186 100644
+--- a/vdsm/storage/multipath.py
++++ b/vdsm/storage/multipath.py
+@@ -30,6 +30,7 @@ import re
+ from collections import namedtuple
+ 
+ from vdsm import constants
++from vdsm import utils
+ import misc
+ import iscsi
+ import supervdsm
+@@ -49,13 +50,47 @@ MPATH_CONF = "/etc/multipath.conf"
+ 
+ OLD_TAGS = ["# RHAT REVISION 0.2", "# RHEV REVISION 0.3",
+             "# RHEV REVISION 0.4", "# RHEV REVISION 0.5",
+-            "# RHEV REVISION 0.6", "# RHEV REVISION 0.7"]
+-MPATH_CONF_TAG = "# RHEV REVISION 0.8"
++            "# RHEV REVISION 0.6", "# RHEV REVISION 0.7",
++            "# RHEV REVISION 0.8", "# RHEV REVISION 0.9"]
++MPATH_CONF_TAG = "# RHEV REVISION 1.0"
+ MPATH_CONF_PRIVATE_TAG = "# RHEV PRIVATE"
+-MPATH_CONF_TEMPLATE = MPATH_CONF_TAG + constants.STRG_MPATH_CONF
++STRG_MPATH_CONF = (
++    "\n\n"
++    "defaults {\n"
++    "    polling_interval        5\n"
++    "    getuid_callout          \"%(scsi_id_path)s --whitelisted "
++    "--replace-whitespace --device=/dev/%%n\"\n"
++    "    no_path_retry           fail\n"
++    "    user_friendly_names     no\n"
++    "    flush_on_last_del       yes\n"
++    "    fast_io_fail_tmo        5\n"
++    "    dev_loss_tmo            30\n"
++    "    max_fds                 4096\n"
++    "}\n"
++    "\n"
++    "devices {\n"
++    "device {\n"
++    "    vendor                  \"HITACHI\"\n"
++    "    product                 \"DF.*\"\n"
++    "    getuid_callout          \"%(scsi_id_path)s --whitelisted "
++    "--replace-whitespace --device=/dev/%%n\"\n"
++    "}\n"
++    "device {\n"
++    "    vendor                  \"COMPELNT\"\n"
++    "    product                 \"Compellent Vol\"\n"
++    "    no_path_retry           fail\n"
++    "}\n"
++    "}"
++)
++MPATH_CONF_TEMPLATE = MPATH_CONF_TAG + STRG_MPATH_CONF
+ 
+ log = logging.getLogger("Storage.Multipath")
+ 
++_scsi_id = utils.CommandPath("scsi_id",
++                             "/sbin/scsi_id",  # EL6
++                             "/usr/lib/udev/scsi_id",  # Fedora
++                             )
++
+ 
+ def rescan():
+     """
+@@ -127,7 +162,7 @@ def setupMultipath():
+                 os.path.basename(MPATH_CONF), MAX_CONF_COPIES,
+                 cp=True, persist=True)
+     with tempfile.NamedTemporaryFile() as f:
+-        f.write(MPATH_CONF_TEMPLATE)
++        f.write(MPATH_CONF_TEMPLATE % {'scsi_id_path': _scsi_id.cmd})
+         f.flush()
+         cmd = [constants.EXT_CP, f.name, MPATH_CONF]
+         rc = misc.execCmd(cmd, sudo=True)[0]
+@@ -173,7 +208,7 @@ def getDeviceSize(dev):
+ 
+ def getScsiSerial(physdev):
+     blkdev = os.path.join("/dev", physdev)
+-    cmd = [constants.EXT_SCSI_ID,
++    cmd = [_scsi_id.cmd,
+            "--page=0x80",
+            "--whitelisted",
+            "--export",
+diff --git a/vdsm/sudoers.vdsm.in b/vdsm/sudoers.vdsm.in
+index ab99e8e..4fc75f9 100644
+--- a/vdsm/sudoers.vdsm.in
++++ b/vdsm/sudoers.vdsm.in
+@@ -23,7 +23,6 @@ Cmnd_Alias VDSM_STORAGE = @MOUNT_PATH@, @UMOUNT_PATH@, \
+     @SERVICE_PATH@ iscsid *, \
+     @SERVICE_PATH@ multipathd restart, \
+     @SERVICE_PATH@ multipathd reload, \
+-    @SCSI_ID_PATH@, \
+     @ISCSIADM_PATH@ *, \
+     @LVM_PATH@, \
+     @CAT_PATH@ /sys/block/*/device/../../*, \
+-- 
+1.8.1
+
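
utils.CommandPath, which the patch switches to, resolves a binary among
candidate locations the first time it is needed instead of baking the
path in at build time. A simplified stand-in for that helper (the class
below is a sketch, not the vdsm implementation):

    import os

    class CommandPathSketch(object):
        """First-match lookup of an executable among candidates."""

        def __init__(self, name, *candidates):
            self.name = name
            self._candidates = candidates
            self._cmd = None

        @property
        def cmd(self):
            if self._cmd is None:
                for path in self._candidates:
                    if os.access(path, os.X_OK):
                        self._cmd = path
                        break
                else:
                    raise OSError("no executable found for %r" % self.name)
            return self._cmd

    _scsi_id = CommandPathSketch("scsi_id",
                                 "/sbin/scsi_id",           # EL6
                                 "/usr/lib/udev/scsi_id")   # Fedora
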
diff --git a/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch b/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
new file mode 100644
index 0000000..5a1c4a9
--- /dev/null
+++ b/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
@@ -0,0 +1,91 @@
+From 33480cbc90a4810aa99e3fc7b36e879cdb0c19d4 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Wed, 9 Jan 2013 09:57:52 +0200
+Subject: [PATCH 19/22] upgrade: force upgrade to v2 before upgrading to v3
+
+During the upgrade of a domain to version 3, vdsm reallocates the
+metadata slots that are higher than 1947 (given a leases LV of 2GB)
+in order to use the same offsets for the volume leases (BZ#882276
+and git commit hash 2ba76e3).
+This has no effect when the domain is version 0, since the metadata
+slot offsets are fixed (the first physical extent of the LV) and
+can't be reallocated. In that case the domain must be upgraded
+to version 2 first.
+
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=893184
+Change-Id: I2bd424ad29e76d1368ff2959bb8fe45afc595cdb
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10792
+Reviewed-by: Ayal Baron <abaron at redhat.com>
+Tested-by: Haim Ateya <hateya at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11462
+---
+ vdsm/storage/imageRepository/formatConverter.py | 26 +++++++++++++++++--------
+ vdsm/storage/volume.py                          |  4 +++-
+ 2 files changed, 21 insertions(+), 9 deletions(-)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 0d7dd6d..88b053d 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -93,6 +93,23 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+     log = logging.getLogger('Storage.v3DomainConverter')
+     log.debug("Starting conversion for domain %s", domain.sdUUID)
+ 
++    targetVersion = 3
++    currentVersion = domain.getVersion()
++
++    # For block domains if we're upgrading from version 0 we need to first
++    # upgrade to version 2 and then proceed to upgrade to version 3.
++    if domain.getStorageType() in sd.BLOCK_DOMAIN_TYPES:
++        if currentVersion == 0:
++            log.debug("Upgrading domain %s from version %s to version 2",
++                      domain.sdUUID, currentVersion)
++            v2DomainConverter(repoPath, hostId, domain, isMsd)
++            currentVersion = domain.getVersion()
++
++        if currentVersion != 2:
++            log.debug("Unsupported conversion from version %s to version %s",
++                      currentVersion, targetVersion)
++            raise se.UnsupportedDomainVersion(currentVersion)
++
+     if domain.getStorageType() in sd.FILE_DOMAIN_TYPES:
+         log.debug("Setting permissions for domain %s", domain.sdUUID)
+         domain.setMetadataPermissions()
+@@ -268,17 +285,10 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+                               "not critical since the volume might be in use",
+                               imgUUID, exc_info=True)
+ 
+-        targetVersion = 3
+-        currentVersion = domain.getVersion()
+         log.debug("Finalizing the storage domain upgrade from version %s to "
+                   "version %s for domain %s", currentVersion, targetVersion,
+                   domain.sdUUID)
+-
+-        if (currentVersion not in blockSD.VERS_METADATA_TAG
+-                        and domain.getStorageType() in sd.BLOCK_DOMAIN_TYPES):
+-            __convertDomainMetadataToTags(domain, targetVersion)
+-        else:
+-            domain.setMetaParam(sd.DMDK_VERSION, targetVersion)
++        domain.setMetaParam(sd.DMDK_VERSION, targetVersion)
+ 
+     except:
+         if isMsd:
+diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
+index cde612a..12dd188 100644
+--- a/vdsm/storage/volume.py
++++ b/vdsm/storage/volume.py
+@@ -503,7 +503,9 @@ class Volume(object):
+             cls.newMetadata(metaId, sdUUID, imgUUID, srcVolUUID, size,
+                             type2name(volFormat), type2name(preallocate),
+                             volType, diskType, desc, LEGAL_VOL)
+-            cls.newVolumeLease(metaId, sdUUID, volUUID)
++
++            if dom.hasVolumeLeases():
++                cls.newVolumeLease(metaId, sdUUID, volUUID)
+ 
+         except se.StorageException:
+             cls.log.error("Unexpected error", exc_info=True)
+-- 
+1.8.1
+
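
In outline, the converter now forces block domains through the v0 -> v2
step before attempting v3, because v0 metadata slots live at fixed
physical extents and cannot be reallocated for volume leases. A
schematic of that control flow (isBlockDomain() and the stubs are
assumptions for brevity; the real code checks getStorageType() against
sd.BLOCK_DOMAIN_TYPES):

    class UnsupportedDomainVersion(Exception):
        pass

    def v2_convert(domain):
        pass  # stub: the real converter moves metadata into LV tags

    def v3_convert(domain):
        current = domain.getVersion()
        if domain.isBlockDomain():
            if current == 0:
                # v0 slots cannot be reallocated: go through v2 first.
                v2_convert(domain)
                current = domain.getVersion()
            if current != 2:
                raise UnsupportedDomainVersion(current)
        # ... proceed with the v3 conversion proper ...
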
diff --git a/0020-misc-rename-safelease-to-clusterlock.patch b/0020-misc-rename-safelease-to-clusterlock.patch
new file mode 100644
index 0000000..3e1684c
--- /dev/null
+++ b/0020-misc-rename-safelease-to-clusterlock.patch
@@ -0,0 +1,866 @@
+From e60206af7781c86ddb5d2ef1fcac3f8f8b086ee4 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Fri, 14 Dec 2012 06:42:09 -0500
+Subject: [PATCH 20/22] misc: rename safelease to clusterlock
+
+The safelease module now also contains the sanlock implementation,
+and soon it might contain others (e.g. a special lock for local storage
+domains); for this reason it has been renamed with the more general
+name clusterlock. The safelease implementation also required some
+cleanup in order to achieve more uniformity between the locking
+mechanisms.
+
+Change-Id: I74070ebb43dd726362900a0746c08b2ee3d6eac7
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10067
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11463
+---
+ vdsm.spec.in                                    |   2 +-
+ vdsm/API.py                                     |   4 +-
+ vdsm/storage/Makefile.am                        |   4 +-
+ vdsm/storage/blockSD.py                         |   4 +-
+ vdsm/storage/clusterlock.py                     | 251 ++++++++++++++++++++++++
+ vdsm/storage/hsm.py                             |  20 +-
+ vdsm/storage/imageRepository/formatConverter.py |   6 +-
+ vdsm/storage/safelease.py                       | 250 -----------------------
+ vdsm/storage/sd.py                              |  12 +-
+ vdsm/storage/sp.py                              |  25 ++-
+ 10 files changed, 289 insertions(+), 289 deletions(-)
+ create mode 100644 vdsm/storage/clusterlock.py
+ delete mode 100644 vdsm/storage/safelease.py
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index dfc2459..8ad4dce 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -685,7 +685,7 @@ exit 0
+ %{_datadir}/%{vdsm_name}/storage/resourceFactories.py*
+ %{_datadir}/%{vdsm_name}/storage/remoteFileHandler.py*
+ %{_datadir}/%{vdsm_name}/storage/resourceManager.py*
+-%{_datadir}/%{vdsm_name}/storage/safelease.py*
++%{_datadir}/%{vdsm_name}/storage/clusterlock.py*
+ %{_datadir}/%{vdsm_name}/storage/sdc.py*
+ %{_datadir}/%{vdsm_name}/storage/sd.py*
+ %{_datadir}/%{vdsm_name}/storage/securable.py*
+diff --git a/vdsm/API.py b/vdsm/API.py
+index 732f8a3..a050a51 100644
+--- a/vdsm/API.py
++++ b/vdsm/API.py
+@@ -33,7 +33,7 @@ import configNetwork
+ from vdsm import netinfo
+ from vdsm import constants
+ import storage.misc
+-import storage.safelease
++import storage.clusterlock
+ import storage.volume
+ import storage.sd
+ import storage.image
+@@ -992,7 +992,7 @@ class StoragePool(APIBase):
+     def spmStart(self, prevID, prevLver, enableScsiFencing,
+                  maxHostID=None, domVersion=None):
+         if maxHostID is None:
+-            maxHostID = storage.safelease.MAX_HOST_ID
++            maxHostID = storage.clusterlock.MAX_HOST_ID
+         recoveryMode = None   # unused
+         return self._irs.spmStart(self._UUID, prevID, prevLver,
+                 recoveryMode, enableScsiFencing, maxHostID, domVersion)
+diff --git a/vdsm/storage/Makefile.am b/vdsm/storage/Makefile.am
+index cff09be..abc1545 100644
+--- a/vdsm/storage/Makefile.am
++++ b/vdsm/storage/Makefile.am
+@@ -25,6 +25,7 @@ dist_vdsmstorage_PYTHON = \
+ 	__init__.py \
+ 	blockSD.py \
+ 	blockVolume.py \
++	clusterlock.py \
+ 	devicemapper.py \
+ 	dispatcher.py \
+ 	domainMonitor.py \
+@@ -35,8 +36,8 @@ dist_vdsmstorage_PYTHON = \
+ 	hba.py \
+ 	hsm.py \
+ 	image.py \
++	iscsiadm.py \
+ 	iscsi.py \
+-        iscsiadm.py \
+ 	localFsSD.py \
+ 	lvm.py \
+ 	misc.py \
+@@ -48,7 +49,6 @@ dist_vdsmstorage_PYTHON = \
+ 	remoteFileHandler.py \
+ 	resourceFactories.py \
+ 	resourceManager.py \
+-	safelease.py \
+ 	sdc.py \
+ 	sd.py \
+ 	securable.py \
+diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
+index 61ec996..862e413 100644
+--- a/vdsm/storage/blockSD.py
++++ b/vdsm/storage/blockSD.py
+@@ -37,7 +37,7 @@ import misc
+ import fileUtils
+ import sd
+ import lvm
+-import safelease
++import clusterlock
+ import blockVolume
+ import multipath
+ import resourceFactories
+@@ -63,7 +63,7 @@ log = logging.getLogger("Storage.BlockSD")
+ 
+ # FIXME: Make this calculated from something logical
+ RESERVED_METADATA_SIZE = 40 * (2 ** 20)
+-RESERVED_MAILBOX_SIZE = MAILBOX_SIZE * safelease.MAX_HOST_ID
++RESERVED_MAILBOX_SIZE = MAILBOX_SIZE * clusterlock.MAX_HOST_ID
+ METADATA_BASE_SIZE = 378
+ # VG's min metadata threshold is 20%
+ VG_MDA_MIN_THRESHOLD = 0.2
+diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
+new file mode 100644
+index 0000000..4525b2f
+--- /dev/null
++++ b/vdsm/storage/clusterlock.py
+@@ -0,0 +1,251 @@
++#
++# Copyright 2011 Red Hat, Inc.
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program; if not, write to the Free Software
++# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
++#
++# Refer to the README and COPYING files for full details of the license
++#
++
++import os
++import threading
++import logging
++import subprocess
++from contextlib import nested
++import sanlock
++
++import misc
++import storage_exception as se
++from vdsm import constants
++from vdsm.config import config
++
++
++MAX_HOST_ID = 250
++
++# The LEASE_OFFSET is used by SANLock to not overlap with safelease in
++# order to preserve the ability to acquire both locks (e.g.: during the
++# domain upgrade)
++SDM_LEASE_NAME = 'SDM'
++SDM_LEASE_OFFSET = 512 * 2048
++
++
++class SafeLease(object):
++    log = logging.getLogger("SafeLease")
++
++    lockUtilPath = config.get('irs', 'lock_util_path')
++    lockCmd = config.get('irs', 'lock_cmd')
++    freeLockCmd = config.get('irs', 'free_lock_cmd')
++
++    def __init__(self, sdUUID, idsPath, leasesPath, lockRenewalIntervalSec,
++                 leaseTimeSec, leaseFailRetry, ioOpTimeoutSec):
++        self._lock = threading.Lock()
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++        self.setParams(lockRenewalIntervalSec, leaseTimeSec, leaseFailRetry,
++                       ioOpTimeoutSec)
++
++    def initLock(self):
++        lockUtil = os.path.join(self.lockUtilPath, "safelease")
++        initCommand = [lockUtil, "release", "-f", self._leasesPath, "0"]
++        rc, out, err = misc.execCmd(initCommand, sudo=False,
++                cwd=self.lockUtilPath)
++        if rc != 0:
++            self.log.warn("could not initialise spm lease (%s): %s", rc, out)
++            raise se.ClusterLockInitError()
++
++    def setParams(self, lockRenewalIntervalSec, leaseTimeSec, leaseFailRetry,
++                  ioOpTimeoutSec):
++        self._lockRenewalIntervalSec = lockRenewalIntervalSec
++        self._leaseTimeSec = leaseTimeSec
++        self._leaseFailRetry = leaseFailRetry
++        self._ioOpTimeoutSec = ioOpTimeoutSec
++
++    def getReservedId(self):
++        return 1000
++
++    def acquireHostId(self, hostId, async):
++        self.log.debug("Host id for domain %s successfully acquired (id: %s)",
++                       self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        self.log.debug("Host id for domain %s released successfully (id: %s)",
++                       self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        return True
++
++    def acquire(self, hostID):
++        leaseTimeMs = self._leaseTimeSec * 1000
++        ioOpTimeoutMs = self._ioOpTimeoutSec * 1000
++        with self._lock:
++            self.log.debug("Acquiring cluster lock for domain %s" %
++                    self._sdUUID)
++
++            lockUtil = self.getLockUtilFullPath()
++            acquireLockCommand = subprocess.list2cmdline([
++                lockUtil, "start", self._sdUUID, str(hostID),
++                str(self._lockRenewalIntervalSec), str(self._leasesPath),
++                str(leaseTimeMs), str(ioOpTimeoutMs), str(self._leaseFailRetry)
++            ])
++
++            cmd = [constants.EXT_SETSID, constants.EXT_IONICE, '-c1', '-n0',
++                constants.EXT_SU, misc.IOUSER, '-s', constants.EXT_SH, '-c',
++                acquireLockCommand]
++            (rc, out, err) = misc.execCmd(cmd, cwd=self.lockUtilPath,
++                    sudo=True)
++            if rc != 0:
++                raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
++            self.log.debug("Clustered lock acquired successfully")
++
++    def getLockUtilFullPath(self):
++        return os.path.join(self.lockUtilPath, self.lockCmd)
++
++    def release(self):
++        with self._lock:
++            freeLockUtil = os.path.join(self.lockUtilPath, self.freeLockCmd)
++            releaseLockCommand = [freeLockUtil, self._sdUUID]
++            self.log.info("Releasing cluster lock for domain %s" %
++                    self._sdUUID)
++            (rc, out, err) = misc.execCmd(releaseLockCommand, sudo=False,
++                    cwd=self.lockUtilPath)
++            if rc != 0:
++                self.log.error("Could not release cluster lock "
++                        "rc=%s out=%s, err=%s" % (str(rc), out, err))
++
++            self.log.debug("Cluster lock released successfully")
++
++
++class SANLock(object):
++    log = logging.getLogger("SANLock")
++
++    _sanlock_fd = None
++    _sanlock_lock = threading.Lock()
++
++    def __init__(self, sdUUID, idsPath, leasesPath, *args):
++        self._lock = threading.Lock()
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++        self._sanlockfd = None
++
++    def initLock(self):
++        try:
++            sanlock.init_lockspace(self._sdUUID, self._idsPath)
++            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
++                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
++        except sanlock.SanlockException:
++            self.log.warn("Cannot initialize clusterlock", exc_info=True)
++            raise se.ClusterLockInitError()
++
++    def setParams(self, *args):
++        pass
++
++    def getReservedId(self):
++        return MAX_HOST_ID
++
++    def acquireHostId(self, hostId, async):
++        with self._lock:
++            self.log.info("Acquiring host id for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            try:
++                sanlock.add_lockspace(self._sdUUID, hostId, self._idsPath,
++                                      async=async)
++            except sanlock.SanlockException, e:
++                if e.errno == os.errno.EINPROGRESS:
++                    # if the request is not asynchronous wait for the ongoing
++                    # lockspace operation to complete
++                    if not async and not sanlock.inq_lockspace(
++                            self._sdUUID, hostId, self._idsPath, wait=True):
++                        raise se.AcquireHostIdFailure(self._sdUUID, e)
++                    # else silently continue, the host id has been acquired
++                    # or it's in the process of being acquired (async)
++                elif e.errno != os.errno.EEXIST:
++                    raise se.AcquireHostIdFailure(self._sdUUID, e)
++
++            self.log.debug("Host id for domain %s successfully acquired "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        with self._lock:
++            self.log.info("Releasing host id for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            try:
++                sanlock.rem_lockspace(self._sdUUID, hostId, self._idsPath,
++                                      async=async, unused=unused)
++            except sanlock.SanlockException, e:
++                if e.errno != os.errno.ENOENT:
++                    raise se.ReleaseHostIdFailure(self._sdUUID, e)
++
++            self.log.debug("Host id for domain %s released successfully "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        with self._lock:
++            try:
++                return sanlock.inq_lockspace(self._sdUUID,
++                                             hostId, self._idsPath)
++            except sanlock.SanlockException:
++                self.log.debug("Unable to inquire sanlock lockspace "
++                               "status, returning False", exc_info=True)
++                return False
++
++    # The hostId parameter is maintained here only for compatibility with
++    # ClusterLock. We could consider to remove it in the future but keeping it
++    # for logging purpose is desirable.
++    def acquire(self, hostId):
++        with nested(self._lock, SANLock._sanlock_lock):
++            self.log.info("Acquiring cluster lock for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            while True:
++                if SANLock._sanlock_fd is None:
++                    try:
++                        SANLock._sanlock_fd = sanlock.register()
++                    except sanlock.SanlockException, e:
++                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
++                                        "Cannot register to sanlock", str(e))
++
++                try:
++                    sanlock.acquire(self._sdUUID, SDM_LEASE_NAME,
++                                    [(self._leasesPath, SDM_LEASE_OFFSET)],
++                                    slkfd=SANLock._sanlock_fd)
++                except sanlock.SanlockException, e:
++                    if e.errno != os.errno.EPIPE:
++                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
++                                        "Cannot acquire cluster lock", str(e))
++                    SANLock._sanlock_fd = None
++                    continue
++
++                break
++
++            self.log.debug("Cluster lock for domain %s successfully acquired "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def release(self):
++        with self._lock:
++            self.log.info("Releasing cluster lock for domain %s", self._sdUUID)
++
++            try:
++                sanlock.release(self._sdUUID, SDM_LEASE_NAME,
++                                [(self._leasesPath, SDM_LEASE_OFFSET)],
++                                slkfd=SANLock._sanlock_fd)
++            except sanlock.SanlockException, e:
++                raise se.ReleaseLockFailure(self._sdUUID, e)
++
++            self._sanlockfd = None
++            self.log.debug("Cluster lock for domain %s successfully released",
++                           self._sdUUID)
+diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
+index 62e9f74..8bbe3b8 100644
+--- a/vdsm/storage/hsm.py
++++ b/vdsm/storage/hsm.py
+@@ -53,7 +53,7 @@ import iscsi
+ import misc
+ from misc import deprecated
+ import taskManager
+-import safelease
++import clusterlock
+ import storage_exception as se
+ from threadLocal import vars
+ from vdsm import constants
+@@ -528,7 +528,7 @@ class HSM:
+ 
+     @public
+     def spmStart(self, spUUID, prevID, prevLVER, recoveryMode, scsiFencing,
+-                 maxHostID=safelease.MAX_HOST_ID, domVersion=None,
++                 maxHostID=clusterlock.MAX_HOST_ID, domVersion=None,
+                  options=None):
+         """
+         Starts an SPM.
+@@ -845,7 +845,7 @@ class HSM:
+         :raises: an :exc:`Storage_Exception.InvalidParameterException` if the
+                  master domain is not supplied in the domain list.
+         """
+-        safeLease = sd.packLeaseParams(
++        leaseParams = sd.packLeaseParams(
+             lockRenewalIntervalSec=lockRenewalIntervalSec,
+             leaseTimeSec=leaseTimeSec,
+             ioOpTimeoutSec=ioOpTimeoutSec,
+@@ -853,9 +853,9 @@ class HSM:
+         vars.task.setDefaultException(
+             se.StoragePoolCreationError(
+                 "spUUID=%s, poolName=%s, masterDom=%s, domList=%s, "
+-                "masterVersion=%s, safelease params: (%s)" %
++                "masterVersion=%s, clusterlock params: (%s)" %
+                 (spUUID, poolName, masterDom, domList, masterVersion,
+-                 safeLease)))
++                 leaseParams)))
+         misc.validateUUID(spUUID, 'spUUID')
+         if masterDom not in domList:
+             raise se.InvalidParameterException("masterDom", str(masterDom))
+@@ -892,7 +892,7 @@ class HSM:
+ 
+         return sp.StoragePool(
+             spUUID, self.taskMng).create(poolName, masterDom, domList,
+-                                         masterVersion, safeLease)
++                                         masterVersion, leaseParams)
+ 
+     @public
+     def connectStoragePool(self, spUUID, hostID, scsiKey,
+@@ -1701,7 +1701,7 @@ class HSM:
+         :returns: Nothing ? pool.reconstructMaster return nothing
+         :rtype: ?
+         """
+-        safeLease = sd.packLeaseParams(
++        leaseParams = sd.packLeaseParams(
+             lockRenewalIntervalSec=lockRenewalIntervalSec,
+             leaseTimeSec=leaseTimeSec,
+             ioOpTimeoutSec=ioOpTimeoutSec,
+@@ -1710,9 +1710,9 @@ class HSM:
+ 
+         vars.task.setDefaultException(
+             se.ReconstructMasterError(
+-                "spUUID=%s, masterDom=%s, masterVersion=%s, safelease "
++                "spUUID=%s, masterDom=%s, masterVersion=%s, clusterlock "
+                 "params: (%s)" % (spUUID, masterDom, masterVersion,
+-                                  safeLease)))
++                                  leaseParams)))
+ 
+         self.log.info("spUUID=%s master=%s", spUUID, masterDom)
+ 
+@@ -1738,7 +1738,7 @@ class HSM:
+                 domDict[d] = sd.validateSDDeprecatedStatus(status)
+ 
+         return pool.reconstructMaster(hostId, poolName, masterDom, domDict,
+-                                      masterVersion, safeLease)
++                                      masterVersion, leaseParams)
+ 
+     def _logResp_getDeviceList(self, response):
+         logableDevs = deepcopy(response)
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 88b053d..0742560 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -26,7 +26,7 @@ from vdsm import qemuImg
+ from storage import sd
+ from storage import blockSD
+ from storage import image
+-from storage import safelease
++from storage import clusterlock
+ from storage import volume
+ from storage import blockVolume
+ from storage import storage_exception as se
+@@ -115,8 +115,8 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+         domain.setMetadataPermissions()
+ 
+     log.debug("Initializing the new cluster lock for domain %s", domain.sdUUID)
+-    newClusterLock = safelease.SANLock(domain.sdUUID, domain.getIdsFilePath(),
+-                                       domain.getLeasesFilePath())
++    newClusterLock = clusterlock.SANLock(
++        domain.sdUUID, domain.getIdsFilePath(), domain.getLeasesFilePath())
+     newClusterLock.initLock()
+ 
+     log.debug("Acquiring the host id %s for domain %s", hostId, domain.sdUUID)
+diff --git a/vdsm/storage/safelease.py b/vdsm/storage/safelease.py
+deleted file mode 100644
+index 88a4eae..0000000
+--- a/vdsm/storage/safelease.py
++++ /dev/null
+@@ -1,250 +0,0 @@
+-#
+-# Copyright 2011 Red Hat, Inc.
+-#
+-# This program is free software; you can redistribute it and/or modify
+-# it under the terms of the GNU General Public License as published by
+-# the Free Software Foundation; either version 2 of the License, or
+-# (at your option) any later version.
+-#
+-# This program is distributed in the hope that it will be useful,
+-# but WITHOUT ANY WARRANTY; without even the implied warranty of
+-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-# GNU General Public License for more details.
+-#
+-# You should have received a copy of the GNU General Public License
+-# along with this program; if not, write to the Free Software
+-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+-#
+-# Refer to the README and COPYING files for full details of the license
+-#
+-
+-import os
+-from vdsm.config import config
+-import misc
+-import subprocess
+-import sanlock
+-from contextlib import nested
+-from vdsm import constants
+-import storage_exception as se
+-import threading
+-import logging
+-
+-
+-MAX_HOST_ID = 250
+-
+-# The LEASE_OFFSET is used by SANLock to not overlap with safelease in
+-# orfer to preserve the ability to acquire both locks (e.g.: during the
+-# domain upgrade)
+-SDM_LEASE_NAME = 'SDM'
+-SDM_LEASE_OFFSET = 512 * 2048
+-
+-
+-class ClusterLock(object):
+-    log = logging.getLogger("ClusterLock")
+-    lockUtilPath = config.get('irs', 'lock_util_path')
+-    lockCmd = config.get('irs', 'lock_cmd')
+-    freeLockCmd = config.get('irs', 'free_lock_cmd')
+-
+-    def __init__(self, sdUUID, idFile, leaseFile,
+-            lockRenewalIntervalSec,
+-            leaseTimeSec,
+-            leaseFailRetry,
+-            ioOpTimeoutSec):
+-        self._lock = threading.RLock()
+-        self._sdUUID = sdUUID
+-        self._leaseFile = leaseFile
+-        self.setParams(lockRenewalIntervalSec, leaseTimeSec,
+-                       leaseFailRetry, ioOpTimeoutSec)
+-
+-    def initLock(self):
+-        lockUtil = os.path.join(self.lockUtilPath, "safelease")
+-        initCommand = [lockUtil, "release", "-f", self._leaseFile, "0"]
+-        rc, out, err = misc.execCmd(initCommand, sudo=False,
+-                cwd=self.lockUtilPath)
+-        if rc != 0:
+-            self.log.warn("could not initialise spm lease (%s): %s", rc, out)
+-            raise se.ClusterLockInitError()
+-
+-    def setParams(self, lockRenewalIntervalSec,
+-                    leaseTimeSec,
+-                    leaseFailRetry,
+-                    ioOpTimeoutSec):
+-        self._lockRenewalIntervalSec = lockRenewalIntervalSec
+-        self._leaseTimeSec = leaseTimeSec
+-        self._leaseFailRetry = leaseFailRetry
+-        self._ioOpTimeoutSec = ioOpTimeoutSec
+-
+-    def getReservedId(self):
+-        return 1000
+-
+-    def acquireHostId(self, hostId, async):
+-        pass
+-
+-    def releaseHostId(self, hostId, async, unused):
+-        pass
+-
+-    def hasHostId(self, hostId):
+-        return True
+-
+-    def acquire(self, hostID):
+-        leaseTimeMs = self._leaseTimeSec * 1000
+-        ioOpTimeoutMs = self._ioOpTimeoutSec * 1000
+-        with self._lock:
+-            self.log.debug("Acquiring cluster lock for domain %s" %
+-                    self._sdUUID)
+-
+-            lockUtil = self.getLockUtilFullPath()
+-            acquireLockCommand = subprocess.list2cmdline([lockUtil, "start",
+-                self._sdUUID, str(hostID), str(self._lockRenewalIntervalSec),
+-                str(self._leaseFile), str(leaseTimeMs), str(ioOpTimeoutMs),
+-                str(self._leaseFailRetry)])
+-
+-            cmd = [constants.EXT_SETSID, constants.EXT_IONICE, '-c1', '-n0',
+-                constants.EXT_SU, misc.IOUSER, '-s', constants.EXT_SH, '-c',
+-                acquireLockCommand]
+-            (rc, out, err) = misc.execCmd(cmd, cwd=self.lockUtilPath,
+-                    sudo=True)
+-            if rc != 0:
+-                raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
+-            self.log.debug("Clustered lock acquired successfully")
+-
+-    def getLockUtilFullPath(self):
+-        return os.path.join(self.lockUtilPath, self.lockCmd)
+-
+-    def release(self):
+-        with self._lock:
+-            freeLockUtil = os.path.join(self.lockUtilPath, self.freeLockCmd)
+-            releaseLockCommand = [freeLockUtil, self._sdUUID]
+-            self.log.info("Releasing cluster lock for domain %s" %
+-                    self._sdUUID)
+-            (rc, out, err) = misc.execCmd(releaseLockCommand, sudo=False,
+-                    cwd=self.lockUtilPath)
+-            if rc != 0:
+-                self.log.error("Could not release cluster lock "
+-                        "rc=%s out=%s, err=%s" % (str(rc), out, err))
+-
+-            self.log.debug("Cluster lock released successfully")
+-
+-
+-class SANLock(object):
+-    log = logging.getLogger("SANLock")
+-
+-    _sanlock_fd = None
+-    _sanlock_lock = threading.Lock()
+-
+-    def __init__(self, sdUUID, idsPath, leasesPath, *args):
+-        self._lock = threading.Lock()
+-        self._sdUUID = sdUUID
+-        self._idsPath = idsPath
+-        self._leasesPath = leasesPath
+-        self._sanlockfd = None
+-
+-    def initLock(self):
+-        try:
+-            sanlock.init_lockspace(self._sdUUID, self._idsPath)
+-            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
+-                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
+-        except sanlock.SanlockException:
+-            self.log.warn("Cannot initialize clusterlock", exc_info=True)
+-            raise se.ClusterLockInitError()
+-
+-    def setParams(self, *args):
+-        pass
+-
+-    def getReservedId(self):
+-        return MAX_HOST_ID
+-
+-    def acquireHostId(self, hostId, async):
+-        with self._lock:
+-            self.log.info("Acquiring host id for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            try:
+-                sanlock.add_lockspace(self._sdUUID, hostId, self._idsPath,
+-                                      async=async)
+-            except sanlock.SanlockException, e:
+-                if e.errno == os.errno.EINPROGRESS:
+-                    # if the request is not asynchronous wait for the ongoing
+-                    # lockspace operation to complete
+-                    if not async and not sanlock.inq_lockspace(
+-                            self._sdUUID, hostId, self._idsPath, wait=True):
+-                        raise se.AcquireHostIdFailure(self._sdUUID, e)
+-                    # else silently continue, the host id has been acquired
+-                    # or it's in the process of being acquired (async)
+-                elif e.errno != os.errno.EEXIST:
+-                    raise se.AcquireHostIdFailure(self._sdUUID, e)
+-
+-            self.log.debug("Host id for domain %s successfully acquired "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def releaseHostId(self, hostId, async, unused):
+-        with self._lock:
+-            self.log.info("Releasing host id for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            try:
+-                sanlock.rem_lockspace(self._sdUUID, hostId, self._idsPath,
+-                                      async=async, unused=unused)
+-            except sanlock.SanlockException, e:
+-                if e.errno != os.errno.ENOENT:
+-                    raise se.ReleaseHostIdFailure(self._sdUUID, e)
+-
+-            self.log.debug("Host id for domain %s released successfully "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def hasHostId(self, hostId):
+-        with self._lock:
+-            try:
+-                return sanlock.inq_lockspace(self._sdUUID,
+-                                             hostId, self._idsPath)
+-            except sanlock.SanlockException:
+-                self.log.debug("Unable to inquire sanlock lockspace "
+-                               "status, returning False", exc_info=True)
+-                return False
+-
+-    # The hostId parameter is maintained here only for compatibility with
+-    # ClusterLock. We could consider to remove it in the future but keeping it
+-    # for logging purpose is desirable.
+-    def acquire(self, hostId):
+-        with nested(self._lock, SANLock._sanlock_lock):
+-            self.log.info("Acquiring cluster lock for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            while True:
+-                if SANLock._sanlock_fd is None:
+-                    try:
+-                        SANLock._sanlock_fd = sanlock.register()
+-                    except sanlock.SanlockException, e:
+-                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
+-                                        "Cannot register to sanlock", str(e))
+-
+-                try:
+-                    sanlock.acquire(self._sdUUID, SDM_LEASE_NAME,
+-                                    [(self._leasesPath, SDM_LEASE_OFFSET)],
+-                                    slkfd=SANLock._sanlock_fd)
+-                except sanlock.SanlockException, e:
+-                    if e.errno != os.errno.EPIPE:
+-                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
+-                                        "Cannot acquire cluster lock", str(e))
+-                    SANLock._sanlock_fd = None
+-                    continue
+-
+-                break
+-
+-            self.log.debug("Cluster lock for domain %s successfully acquired "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def release(self):
+-        with self._lock:
+-            self.log.info("Releasing cluster lock for domain %s", self._sdUUID)
+-
+-            try:
+-                sanlock.release(self._sdUUID, SDM_LEASE_NAME,
+-                                [(self._leasesPath, SDM_LEASE_OFFSET)],
+-                                slkfd=SANLock._sanlock_fd)
+-            except sanlock.SanlockException, e:
+-                raise se.ReleaseLockFailure(self._sdUUID, e)
+-
+-            self._sanlockfd = None
+-            self.log.debug("Cluster lock for domain %s successfully released",
+-                           self._sdUUID)
+diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
+index 1b11017..dbc1beb 100644
+--- a/vdsm/storage/sd.py
++++ b/vdsm/storage/sd.py
+@@ -31,7 +31,7 @@ import resourceFactories
+ from resourceFactories import IMAGE_NAMESPACE, VOLUME_NAMESPACE
+ import resourceManager as rm
+ from vdsm import constants
+-import safelease
++import clusterlock
+ import outOfProcess as oop
+ from persistentDict import unicodeEncoder, unicodeDecoder
+ 
+@@ -307,12 +307,12 @@ class StorageDomain:
+                 DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
+                 DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
+                 DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC])
+-            self._clusterLock = safelease.ClusterLock(self.sdUUID,
+-                    self.getIdsFilePath(), self.getLeasesFilePath(),
+-                    *leaseParams)
++            self._clusterLock = clusterlock.SafeLease(
++                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath(),
++                *leaseParams)
+         elif domversion in DOM_SANLOCK_VERS:
+-            self._clusterLock = safelease.SANLock(self.sdUUID,
+-                    self.getIdsFilePath(), self.getLeasesFilePath())
++            self._clusterLock = clusterlock.SANLock(
++                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath())
+         else:
+             raise se.UnsupportedDomainVersion(domversion)
+ 
+diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
+index 40d15b3..e13d088 100644
+--- a/vdsm/storage/sp.py
++++ b/vdsm/storage/sp.py
+@@ -494,7 +494,7 @@ class StoragePool(Securable):
+             return config.getint("irs", "maximum_domains_in_pool")
+ 
+     @unsecured
+-    def _acquireTemporaryClusterLock(self, msdUUID, safeLease):
++    def _acquireTemporaryClusterLock(self, msdUUID, leaseParams):
+         try:
+             # Master domain is unattached and all changes to unattached domains
+             # must be performed under storage lock
+@@ -504,7 +504,7 @@ class StoragePool(Securable):
+             # assigned id for this pool
+             self.id = msd.getReservedId()
+ 
+-            msd.changeLeaseParams(safeLease)
++            msd.changeLeaseParams(leaseParams)
+ 
+             msd.acquireHostId(self.id)
+ 
+@@ -527,7 +527,7 @@ class StoragePool(Securable):
+         self.id = SPM_ID_FREE
+ 
+     @unsecured
+-    def create(self, poolName, msdUUID, domList, masterVersion, safeLease):
++    def create(self, poolName, msdUUID, domList, masterVersion, leaseParams):
+         """
+         Create new storage pool with single/multiple image data domain.
+         The command will create new storage pool meta-data attach each
+@@ -537,10 +537,9 @@ class StoragePool(Securable):
+          'msdUUID' - master domain of this pool (one of domList)
+          'domList' - list of domains (i.e sdUUID,sdUUID,...,sdUUID)
+         """
+-        self.log.info("spUUID=%s poolName=%s master_sd=%s "
+-                      "domList=%s masterVersion=%s %s",
+-                      self.spUUID, poolName, msdUUID,
+-                      domList, masterVersion, str(safeLease))
++        self.log.info("spUUID=%s poolName=%s master_sd=%s domList=%s "
++                      "masterVersion=%s %s", self.spUUID, poolName, msdUUID,
++                      domList, masterVersion, leaseParams)
+ 
+         if msdUUID not in domList:
+             raise se.InvalidParameterException("masterDomain", msdUUID)
+@@ -565,7 +564,7 @@ class StoragePool(Securable):
+                     raise se.StorageDomainAlreadyAttached(spUUIDs[0], sdUUID)
+ 
+         fileUtils.createdir(self.poolPath)
+-        self._acquireTemporaryClusterLock(msdUUID, safeLease)
++        self._acquireTemporaryClusterLock(msdUUID, leaseParams)
+ 
+         try:
+             self._setSafe()
+@@ -573,7 +572,7 @@ class StoragePool(Securable):
+             # We should do it before actually attaching this domain to the pool.
+             # During 'master' marking we create pool metadata and each attached
+             # domain should register there
+-            self.createMaster(poolName, msd, masterVersion, safeLease)
++            self.createMaster(poolName, msd, masterVersion, leaseParams)
+             self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
+             # Attach storage domains to the storage pool
+             # Since we are creating the pool then attach is done from the hsm and not the spm
+@@ -722,10 +721,10 @@ class StoragePool(Securable):
+ 
+     @unsecured
+     def reconstructMaster(self, hostId, poolName, msdUUID, domDict,
+-                          masterVersion, safeLease):
++                          masterVersion, leaseParams):
+         self.log.info("spUUID=%s hostId=%s poolName=%s msdUUID=%s domDict=%s "
+                       "masterVersion=%s leaseparams=(%s)", self.spUUID, hostId,
+-                      poolName, msdUUID, domDict, masterVersion, str(safeLease))
++                      poolName, msdUUID, domDict, masterVersion, leaseParams)
+ 
+         if msdUUID not in domDict:
+             raise se.InvalidParameterException("masterDomain", msdUUID)
+@@ -736,7 +735,7 @@ class StoragePool(Securable):
+         # For backward compatibility we must support a reconstructMaster
+        # that doesn't specify a hostId.
+         if not hostId:
+-            self._acquireTemporaryClusterLock(msdUUID, safeLease)
++            self._acquireTemporaryClusterLock(msdUUID, leaseParams)
+             temporaryLock = True
+         else:
+             # Forcing to acquire the host id (if it's not acquired already).
+@@ -749,7 +748,7 @@ class StoragePool(Securable):
+ 
+         try:
+             self.createMaster(poolName, futureMaster, masterVersion,
+-                              safeLease)
++                              leaseParams)
+ 
+             for sdUUID in domDict:
+                 domDict[sdUUID] = domDict[sdUUID].capitalize()
+-- 
+1.8.1
+
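
For reference, the acquire path above is sanlock's register/acquire
idiom: the process registers once with the sanlock daemon and, when the
daemon connection goes stale (EPIPE), re-registers and retries. Below is
a minimal standalone sketch of that idiom; the sanlock calls mirror the
patch, while the function name and the bounded retry count are
illustrative additions:

    import errno
    import sanlock

    def acquire_with_reregister(lockspace, resource, disks, retries=3):
        # Register once with the sanlock daemon; the returned fd ties
        # the acquired leases to this process.
        fd = sanlock.register()
        for _ in range(retries):
            try:
                sanlock.acquire(lockspace, resource, disks, slkfd=fd)
                return fd
            except sanlock.SanlockException as e:
                if e.errno != errno.EPIPE:
                    raise
                # EPIPE: stale daemon connection (e.g. sanlock restart);
                # register again and retry the acquire.
                fd = sanlock.register()
        raise RuntimeError("cannot acquire %s:%s" % (lockspace, resource))

Keeping the registered fd in a class attribute, as SANLock does with
_sanlock_fd, lets every domain in the process share a single daemon
connection.
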
diff --git a/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch b/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
new file mode 100644
index 0000000..3d0666c
--- /dev/null
+++ b/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
@@ -0,0 +1,151 @@
+From 5da363f0412d2b709fb1460324ee04b5905e492b Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Thu, 20 Dec 2012 06:12:52 -0500
+Subject: [PATCH 21/22] domain: select the cluster lock using makeClusterLock
+
+In order to support different locking mechanisms (not only per-domain
+format but also per-domain type) a new makeClusterLock method has been
+introduced to select the appropriate cluster lock.
+
+Change-Id: I78072254441335a420292af642985840e9b2ac68
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10281
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11464
+---
+ vdsm/storage/imageRepository/formatConverter.py | 11 +++--
+ vdsm/storage/sd.py                              | 54 +++++++++++++++----------
+ 2 files changed, 39 insertions(+), 26 deletions(-)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 0742560..95a77d1 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -26,7 +26,6 @@ from vdsm import qemuImg
+ from storage import sd
+ from storage import blockSD
+ from storage import image
+-from storage import clusterlock
+ from storage import volume
+ from storage import blockVolume
+ from storage import storage_exception as se
+@@ -91,7 +90,12 @@ def v2DomainConverter(repoPath, hostId, domain, isMsd):
+ 
+ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+     log = logging.getLogger('Storage.v3DomainConverter')
+-    log.debug("Starting conversion for domain %s", domain.sdUUID)
++
++    targetVersion = 3
++    currentVersion = domain.getVersion()
++
++    log.debug("Starting conversion for domain %s from version %s "
++              "to version %s", domain.sdUUID, currentVersion, targetVersion)
+ 
+     targetVersion = 3
+     currentVersion = domain.getVersion()
+@@ -115,8 +119,7 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+         domain.setMetadataPermissions()
+ 
+     log.debug("Initializing the new cluster lock for domain %s", domain.sdUUID)
+-    newClusterLock = clusterlock.SANLock(
+-        domain.sdUUID, domain.getIdsFilePath(), domain.getLeasesFilePath())
++    newClusterLock = domain._makeClusterLock(targetVersion)
+     newClusterLock.initLock()
+ 
+     log.debug("Acquiring the host id %s for domain %s", hostId, domain.sdUUID)
+diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
+index dbc1beb..a55ce06 100644
+--- a/vdsm/storage/sd.py
++++ b/vdsm/storage/sd.py
+@@ -101,10 +101,6 @@ BACKUP_DOMAIN = 3
+ DOMAIN_CLASSES = {DATA_DOMAIN: 'Data', ISO_DOMAIN: 'Iso',
+                   BACKUP_DOMAIN: 'Backup'}
+ 
+-# Lock Version
+-DOM_SAFELEASE_VERS = (0, 2)
+-DOM_SANLOCK_VERS = (3,)
+-
+ # Metadata keys
+ DMDK_VERSION = "VERSION"
+ DMDK_SDUUID = "SDUUID"
+@@ -292,29 +288,20 @@ class StorageDomain:
+     mdBackupVersions = config.get('irs', 'md_backup_versions')
+     mdBackupDir = config.get('irs', 'md_backup_dir')
+ 
++    # version: (clusterLockClass, hasVolumeLeases)
++    _clusterLockTable = {
++        0: (clusterlock.SafeLease, False),
++        2: (clusterlock.SafeLease, False),
++        3: (clusterlock.SANLock, True),
++    }
++
+     def __init__(self, sdUUID, domaindir, metadata):
+         self.sdUUID = sdUUID
+         self.domaindir = domaindir
+         self._metadata = metadata
+         self._lock = threading.Lock()
+         self.stat = None
+-
+-        domversion = self.getVersion()
+-
+-        if domversion in DOM_SAFELEASE_VERS:
+-            leaseParams = (
+-                DEFAULT_LEASE_PARAMS[DMDK_LOCK_RENEWAL_INTERVAL_SEC],
+-                DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
+-                DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
+-                DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC])
+-            self._clusterLock = clusterlock.SafeLease(
+-                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath(),
+-                *leaseParams)
+-        elif domversion in DOM_SANLOCK_VERS:
+-            self._clusterLock = clusterlock.SANLock(
+-                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath())
+-        else:
+-            raise se.UnsupportedDomainVersion(domversion)
++        self._clusterLock = self._makeClusterLock()
+ 
+     def __del__(self):
+         if self.stat:
+@@ -328,6 +315,25 @@ class StorageDomain:
+     def oop(self):
+         return oop.getProcessPool(self.sdUUID)
+ 
++    def _makeClusterLock(self, domVersion=None):
++        if not domVersion:
++            domVersion = self.getVersion()
++
++        leaseParams = (
++            DEFAULT_LEASE_PARAMS[DMDK_LOCK_RENEWAL_INTERVAL_SEC],
++            DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
++            DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
++            DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC],
++        )
++
++        try:
++            clusterLockClass = self._clusterLockTable[domVersion][0]
++        except KeyError:
++            raise se.UnsupportedDomainVersion(domVersion)
++
++        return clusterLockClass(self.sdUUID, self.getIdsFilePath(),
++                                self.getLeasesFilePath(), *leaseParams)
++
+     @classmethod
+     def create(cls, sdUUID, domainName, domClass, typeSpecificArg, version):
+         """
+@@ -436,7 +442,11 @@ class StorageDomain:
+         return self._clusterLock.hasHostId(hostId)
+ 
+     def hasVolumeLeases(self):
+-        return self.getVersion() in DOM_SANLOCK_VERS
++        domVersion = self.getVersion()
++        try:
++            return self._clusterLockTable[domVersion][1]
++        except KeyError:
++            raise se.UnsupportedDomainVersion(domVersion)
+ 
+     def getVolumeLease(self, volUUID):
+         """
+-- 
+1.8.1
+
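
The patch replaces the if/elif version checks with a table lookup. A
self-contained sketch of the same shape, where the stub classes and the
ValueError stand in for the clusterlock classes and
se.UnsupportedDomainVersion:

    class SafeLease(object):
        def __init__(self, sdUUID, idsPath, leasesPath, *leaseParams):
            self.sdUUID = sdUUID

    class SANLock(SafeLease):
        pass

    # version: (clusterLockClass, hasVolumeLeases)
    CLUSTER_LOCK_TABLE = {
        0: (SafeLease, False),
        2: (SafeLease, False),
        3: (SANLock, True),
    }

    def make_cluster_lock(version, sdUUID, idsPath, leasesPath, leaseParams):
        try:
            lockClass = CLUSTER_LOCK_TABLE[version][0]
        except KeyError:
            raise ValueError("unsupported domain version: %s" % version)
        return lockClass(sdUUID, idsPath, leasesPath, *leaseParams)

    # Illustrative lease parameters (renewal interval, lease time,
    # retries, I/O timeout).
    lock = make_cluster_lock(3, "sd-uuid", "/ids", "/leases", (5, 60, 3, 10))

Putting the table in a class attribute is what lets a subclass such as
localFsSD (patch 0022 below) swap in a different lock class for version
3 by overriding a single dictionary.
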
diff --git a/0022-clusterlock-add-the-local-locking-implementation.patch b/0022-clusterlock-add-the-local-locking-implementation.patch
new file mode 100644
index 0000000..d496718
--- /dev/null
+++ b/0022-clusterlock-add-the-local-locking-implementation.patch
@@ -0,0 +1,225 @@
+From e73c5bc586d1e689fc33ba77082488d755e3a621 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Thu, 20 Dec 2012 08:08:21 -0500
+Subject: [PATCH 22/22] clusterlock: add the local locking implementation
+
+In order to have a faster and more lightweight locking mechanism on
+local storage domains a new cluster lock (based on flock) has been
+introduced.
+
+Change-Id: I106618a9a61cc96727edaf2e3ab043b2ec28ebe1
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10282
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11465
+---
+ vdsm/storage/clusterlock.py | 122 +++++++++++++++++++++++++++++++++++++++++---
+ vdsm/storage/localFsSD.py   |   7 +++
+ vdsm/storage/misc.py        |  19 +++++++
+ 3 files changed, 141 insertions(+), 7 deletions(-)
+
+diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
+index 4525b2f..cabf174 100644
+--- a/vdsm/storage/clusterlock.py
++++ b/vdsm/storage/clusterlock.py
+@@ -19,6 +19,7 @@
+ #
+ 
+ import os
++import fcntl
+ import threading
+ import logging
+ import subprocess
+@@ -127,6 +128,22 @@ class SafeLease(object):
+             self.log.debug("Cluster lock released successfully")
+ 
+ 
++initSANLockLog = logging.getLogger("initSANLock")
++
++
++def initSANLock(sdUUID, idsPath, leasesPath):
++    initSANLockLog.debug("Initializing SANLock for domain %s", sdUUID)
++
++    try:
++        sanlock.init_lockspace(sdUUID, idsPath)
++        sanlock.init_resource(sdUUID, SDM_LEASE_NAME,
++                              [(leasesPath, SDM_LEASE_OFFSET)])
++    except sanlock.SanlockException:
++        initSANLockLog.error("Cannot initialize SANLock for domain %s",
++                             sdUUID, exc_info=True)
++        raise se.ClusterLockInitError()
++
++
+ class SANLock(object):
+     log = logging.getLogger("SANLock")
+ 
+@@ -141,13 +158,7 @@ class SANLock(object):
+         self._sanlockfd = None
+ 
+     def initLock(self):
+-        try:
+-            sanlock.init_lockspace(self._sdUUID, self._idsPath)
+-            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
+-                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
+-        except sanlock.SanlockException:
+-            self.log.warn("Cannot initialize clusterlock", exc_info=True)
+-            raise se.ClusterLockInitError()
++        initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
+ 
+     def setParams(self, *args):
+         pass
+@@ -249,3 +260,100 @@ class SANLock(object):
+             self._sanlockfd = None
+             self.log.debug("Cluster lock for domain %s successfully released",
+                            self._sdUUID)
++
++
++class LocalLock(object):
++    log = logging.getLogger("LocalLock")
++
++    _globalLockMap = {}
++    _globalLockMapSync = threading.Lock()
++
++    def __init__(self, sdUUID, idsPath, leasesPath, *args):
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++
++    def initLock(self):
++        # The LocalLock initialization is based on SANLock to keep the
++        # on-disk domain format consistent across all the V3 types.
++        # The advantage is that the domain can be exposed as an NFS/GlusterFS
++        # domain later on without any modification.
++        # XXX: Keep in mind that LocalLock and SANLock cannot detect each other
++        # and therefore concurrently using the same domain as local domain and
++        # NFS domain (or any other shared file-based domain) will certainly
++        # lead to disastrous consequences.
++        initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
++
++    def setParams(self, *args):
++        pass
++
++    def getReservedId(self):
++        return MAX_HOST_ID
++
++    def acquireHostId(self, hostId, async):
++        self.log.debug("Host id for domain %s successfully acquired (id: %s)",
++                       self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        self.log.debug("Host id for domain %s released successfully (id: %s)",
++                       self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        return True
++
++    def acquire(self, hostId):
++        with self._globalLockMapSync:
++            self.log.info("Acquiring local lock for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            lockFile = self._globalLockMap.get(self._sdUUID, None)
++
++            if lockFile:
++                try:
++                    misc.NoIntrCall(fcntl.fcntl, lockFile, fcntl.F_GETFD)
++                except IOError as e:
++                    # We found a stale file descriptor, removing.
++                    del self._globalLockMap[self._sdUUID]
++
++                    # Raise any other unknown error.
++                    if e.errno != os.errno.EBADF:
++                        raise
++                else:
++                    self.log.debug("Local lock already acquired for domain "
++                                   "%s (id: %s)", self._sdUUID, hostId)
++                    return  # success, the lock was already acquired
++
++            lockFile = misc.NoIntrCall(os.open, self._idsPath, os.O_RDONLY)
++
++            try:
++                misc.NoIntrCall(fcntl.flock, lockFile,
++                                fcntl.LOCK_EX | fcntl.LOCK_NB)
++            except IOError as e:
++                misc.NoIntrCall(os.close, lockFile)
++                if e.errno in (os.errno.EACCES, os.errno.EAGAIN):
++                    raise se.AcquireLockFailure(
++                        self._sdUUID, e.errno, "Cannot acquire local lock",
++                        str(e))
++                raise
++            else:
++                self._globalLockMap[self._sdUUID] = lockFile
++
++        self.log.debug("Local lock for domain %s successfully acquired "
++                       "(id: %s)", self._sdUUID, hostId)
++
++    def release(self):
++        with self._globalLockMapSync:
++            self.log.info("Releasing local lock for domain %s", self._sdUUID)
++
++            lockFile = self._globalLockMap.get(self._sdUUID, None)
++
++            if not lockFile:
++                self.log.debug("Local lock already released for domain %s",
++                               self._sdUUID)
++                return
++
++            misc.NoIntrCall(os.close, lockFile)
++            del self._globalLockMap[self._sdUUID]
++
++            self.log.debug("Local lock for domain %s successfully released",
++                           self._sdUUID)
+diff --git a/vdsm/storage/localFsSD.py b/vdsm/storage/localFsSD.py
+index 198c073..7d59894 100644
+--- a/vdsm/storage/localFsSD.py
++++ b/vdsm/storage/localFsSD.py
+@@ -26,9 +26,16 @@ import fileSD
+ import fileUtils
+ import storage_exception as se
+ import misc
++import clusterlock
+ 
+ 
+ class LocalFsStorageDomain(fileSD.FileStorageDomain):
++    # version: (clusterLockClass, hasVolumeLeases)
++    _clusterLockTable = {
++        0: (clusterlock.SafeLease, False),
++        2: (clusterlock.SafeLease, False),
++        3: (clusterlock.LocalLock, True),
++    }
+ 
+     @classmethod
+     def _preCreateValidation(cls, sdUUID, domPath, typeSpecificArg, version):
+diff --git a/vdsm/storage/misc.py b/vdsm/storage/misc.py
+index 17d38ee..b26a317 100644
+--- a/vdsm/storage/misc.py
++++ b/vdsm/storage/misc.py
+@@ -1344,6 +1344,25 @@ def itmap(func, iterable, maxthreads=UNLIMITED_THREADS):
+         yield respQueue.get()
+ 
+ 
++def NoIntrCall(fun, *args, **kwargs):
++    """
++    This wrapper is used to handle the interrupt exceptions that might
++    occur during a system call.
++    """
++    while True:
++        try:
++            return fun(*args, **kwargs)
++        except (IOError, select.error) as e:
++            if e.args[0] == os.errno.EINTR:
++                continue
++            raise
++        break
++
++
++# NOTE: it would be best to try and unify NoIntrCall and NoIntrPoll.
+# We could do so by defining a new object that can be used as a placeholder
+# for the changing timeout value in the *args/**kwargs. This would
+# lead us to rebuild the function arguments on each iteration.
+ def NoIntrPoll(pollfun, timeout=-1):
+     """
+     This wrapper is used to handle the interrupt exceptions that might occur
+-- 
+1.8.1
+
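
LocalLock above reduces to a non-blocking exclusive flock on the
domain's ids file, with every syscall wrapped against EINTR. A condensed
standalone sketch, with illustrative helper names:

    import errno
    import fcntl
    import os

    def no_intr(fun, *args, **kwargs):
        # Retry a syscall interrupted by a signal, like misc.NoIntrCall.
        while True:
            try:
                return fun(*args, **kwargs)
            except (IOError, OSError) as e:
                if e.errno == errno.EINTR:
                    continue
                raise

    def acquire_local_lock(idsPath):
        # Open the ids file and take a non-blocking exclusive lock;
        # EACCES/EAGAIN means another process already holds the domain.
        fd = no_intr(os.open, idsPath, os.O_RDONLY)
        try:
            no_intr(fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError as e:
            no_intr(os.close, fd)
            if e.errno in (errno.EACCES, errno.EAGAIN):
                raise RuntimeError("domain already locked: %s" % idsPath)
            raise
        return fd  # keep it open; closing the fd releases the lock
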
diff --git a/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch b/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
new file mode 100644
index 0000000..cda48c4
--- /dev/null
+++ b/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
@@ -0,0 +1,38 @@
+From 94a3b6b63449ac4a16c76c1d5c52d58f5c895ecc Mon Sep 17 00:00:00 2001
+From: Lee Yarwood <lyarwood at redhat.com>
+Date: Tue, 22 Jan 2013 14:09:28 +0000
+Subject: [PATCH 23/27] upgrade: catch MetaDataKeyNotFoundError when preparing
+ images
+
+Ensure that we catch and continue past any MetaDataKeyNotFoundError
+exception when preparing images that may contain partially removed
+volumes. For example, where the LV is still present but the metadata
+block has been blanked out.
+
+Change-Id: I92f7a61bf6d1e24e84711486fd4f8ba67e2a0077
+Signed-off-by: Lee Yarwood <lyarwood at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11485
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/storage/imageRepository/formatConverter.py | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 95a77d1..cbf64f5 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -280,6 +280,11 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+                 log.error("It is not possible to prepare the image %s, the "
+                           "volume chain looks damaged", imgUUID, exc_info=True)
+ 
++            except se.MetaDataKeyNotFoundError:
++                log.error("It is not possible to prepare the image %s, the "
++                          "volume metadata looks damaged", imgUUID,
++                          exc_info=True)
++
+             finally:
+                 try:
+                     img.teardown(domain.sdUUID, imgUUID)
+-- 
+1.8.1
+
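
The resulting convert loop catches each damage class per image and
always runs teardown, so one broken image cannot abort the whole
upgrade. A runnable toy of that shape, with stand-in names for the
storage exception and image helpers:

    import logging

    logging.basicConfig()
    log = logging.getLogger("v3convert")

    class MetaDataKeyNotFoundError(Exception):
        pass

    def prepare_image(imgUUID):
        # Stand-in for preparing the volume chain of one image.
        if imgUUID == "damaged":
            raise MetaDataKeyNotFoundError(imgUUID)

    def teardown_image(imgUUID):
        pass  # stand-in for img.teardown(sdUUID, imgUUID)

    for imgUUID in ("img-1", "damaged", "img-2"):
        try:
            prepare_image(imgUUID)
        except MetaDataKeyNotFoundError:
            # Log and continue: skip the damaged image, convert the rest.
            log.error("cannot prepare image %s: metadata looks damaged",
                      imgUUID, exc_info=True)
        finally:
            teardown_image(imgUUID)
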
diff --git a/0024-vdsm.spec-Require-openssl.patch b/0024-vdsm.spec-Require-openssl.patch
new file mode 100644
index 0000000..ab86511
--- /dev/null
+++ b/0024-vdsm.spec-Require-openssl.patch
@@ -0,0 +1,31 @@
+From 53e8505e34f1e7a76adfa0de74d5eb9d27efd586 Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Wed, 30 Jan 2013 09:27:38 -0500
+Subject: [PATCH 24/27] vdsm.spec: Require openssl
+
+deployUtil uses the openssl command, so we should Require it.
+
+Change-Id: Ib53aa66bad94e9c4046f3430b892a60cbc80c520
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=905728
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11543
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 8ad4dce..e898c59 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -207,6 +207,7 @@ BuildArch:      noarch
+ 
+ Requires: %{name} = %{version}-%{release}
+ Requires: m2crypto
++Requires: openssl
+ 
+ %description reg
+ VDSM registration package. Used to register a Linux host to a Virtualization
+-- 
+1.8.1
+
diff --git a/0025-Fedora-18-require-a-newer-udev.patch b/0025-Fedora-18-require-a-newer-udev.patch
new file mode 100644
index 0000000..6464c3f
--- /dev/null
+++ b/0025-Fedora-18-require-a-newer-udev.patch
@@ -0,0 +1,36 @@
+From 674f1003f05d84609e4555c1509b1409475e1c97 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Tue, 29 Jan 2013 10:44:01 +0200
+Subject: [PATCH 25/27] Fedora 18: require a newer udev
+
+Due to https://bugzilla.redhat.com/903716 `udev: device node permissions
+not applied with "change" event' we could not use block storage in
+Fedora. Let us explicitly require a newer systemd that fixes this
+issue, to avoid users' dismay.
+
+Change-Id: Ie17abb2af146c492efafc94bfbb533c7f6c8025c
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11489
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11534
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm.spec.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index e898c59..00c1259 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -136,6 +136,7 @@ Requires: selinux-policy-targeted >= 3.11.1-71
+ # In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+# disabled, we now require version 2.1.13-44 (or newer) on Fedora.
+ Requires: policycoreutils >= 2.1.13-44
++Requires: systemd >= 197-1.fc18.2
+ %endif
+ 
+ Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
+-- 
+1.8.1
+
diff --git a/0026-fix-sloppy-backport-of-safelease-rename.patch b/0026-fix-sloppy-backport-of-safelease-rename.patch
new file mode 100644
index 0000000..8ae6b1e
--- /dev/null
+++ b/0026-fix-sloppy-backport-of-safelease-rename.patch
@@ -0,0 +1,40 @@
+From 7904843648c7dd368f832d8f2b652290ca717424 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Wed, 30 Jan 2013 13:43:33 +0200
+Subject: [PATCH 26/27] fix sloppy backport of safelease rename
+
+Somehow, this sloppy backport of I74070ebb43dd726362900a0746c
+was not caught by Jenkins. Any idea why?
+
+Change-Id: Iaf1dc264d17b59934b78877a11f37b21614b268e
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11544
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 80d6f52..06ffbf9 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -54,6 +54,7 @@ PEP8_WHITELIST = \
+ 	vdsm/*.py.in \
+ 	vdsm/storage/__init__.py \
+ 	vdsm/storage/blockVolume.py \
++	vdsm/storage/clusterlock.py \
+ 	vdsm/storage/devicemapper.py \
+ 	vdsm/storage/domainMonitor.py \
+ 	vdsm/storage/fileSD.py \
+@@ -74,7 +75,6 @@ PEP8_WHITELIST = \
+ 	vdsm/storage/persistentDict.py \
+ 	vdsm/storage/remoteFileHandler.py \
+ 	vdsm/storage/resourceFactories.py \
+-	vdsm/storage/safelease.py \
+ 	vdsm/storage/sd.py \
+ 	vdsm/storage/sdc.py \
+ 	vdsm/storage/securable.py \
+-- 
+1.8.1
+
diff --git a/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch b/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
new file mode 100644
index 0000000..14ce5fd
--- /dev/null
+++ b/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
@@ -0,0 +1,52 @@
+From 18c24f7c7c27ac732c4a760caa9524e7319cd47e Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Tue, 29 Jan 2013 13:49:46 +0200
+Subject: [PATCH 27/27] removing the use of zombie reaper from supervdsm
+
+This may solve validateAccess errors, but can cause defunct subprocesses.
+This patch is marked as WIP until we find a better solution; until then
+it helps verify that the errors previously caused by the zombie reaper
+handling no longer occur.
+
+Change-Id: If3f9bae47f2894cc95785de8f19f6ec388ea58da
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11491
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/supervdsmServer.py | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/vdsm/supervdsmServer.py b/vdsm/supervdsmServer.py
+index 833e91f..21e7c94 100755
+--- a/vdsm/supervdsmServer.py
++++ b/vdsm/supervdsmServer.py
+@@ -56,7 +56,6 @@ import tc
+ import ksm
+ import mkimage
+ from storage.multipath import MPATH_CONF
+-import zombieReaper
+ 
+ _UDEV_RULE_FILE_DIR = "/etc/udev/rules.d/"
+ _UDEV_RULE_FILE_PREFIX = "99-vdsm-"
+@@ -199,7 +198,6 @@ class _SuperVdsm(object):
+         pipe, hisPipe = Pipe()
+         proc = Process(target=child, args=(hisPipe,))
+         proc.start()
+-        zombieReaper.autoReapPID(proc.pid)
+ 
+         if not pipe.poll(RUN_AS_TIMEOUT):
+             try:
+@@ -391,8 +389,6 @@ def main():
+         if os.path.exists(address):
+             os.unlink(address)
+ 
+-        zombieReaper.registerSignalHandler()
+-
+         log.debug("Setting up keep alive thread")
+ 
+         monThread = threading.Thread(target=__pokeParent,
+-- 
+1.8.1
+
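
With zombieReaper removed, reaping the helper processes becomes the
parent's own job. A hedged sketch of one way to keep the pipe timeout
while still reaping explicitly; the names and timeout value are
illustrative, not the actual supervdsmServer ones:

    from multiprocessing import Pipe, Process

    RUN_AS_TIMEOUT = 5  # seconds, illustrative

    def child(pipe):
        pipe.send("done")

    def run_helper():
        parentEnd, childEnd = Pipe()
        proc = Process(target=child, args=(childEnd,))
        proc.start()
        result = None
        if parentEnd.poll(RUN_AS_TIMEOUT):
            result = parentEnd.recv()
        else:
            proc.terminate()
        # Without an external reaper the parent must join() the child
        # itself, otherwise it lingers as a defunct process.
        proc.join()
        return result

    if __name__ == "__main__":
        print(run_helper())
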
diff --git a/vdsm.spec b/vdsm.spec
index 3a227ba..6b0b3dd 100644
--- a/vdsm.spec
+++ b/vdsm.spec
@@ -32,7 +32,7 @@
 
 Name:           %{vdsm_name}
 Version:        4.10.3
-Release:        5%{?vdsm_relvtag}%{?dist}%{?extra_release}
+Release:        6%{?vdsm_relvtag}%{?dist}%{?extra_release}
 Summary:        Virtual Desktop Server Manager
 
 Group:          Applications/System
@@ -59,6 +59,22 @@ Patch7:         0008-vdsm.spec-Don-t-require-python-ordereddict-on-fedora.patch
 Patch8:         0009-vdsm.spec-BuildRequires-python-pthreading.patch
 Patch9:         0010-Searching-for-both-py-and-pyc-file-to-start-super-vd.patch
 Patch10:        0011-adding-getHardwareInfo-API-to-vdsm.patch
+Patch11:        0012-Explicitly-shutdown-m2crypto-socket.patch
+Patch12:        0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
+Patch13:        0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
+Patch14:        0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
+Patch15:        0016-isRunning-didn-t-check-local-variable-before-reading.patch
+Patch16:        0017-udev-Race-fix-load-and-trigger-dev-rule.patch
+Patch17:        0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
+Patch18:        0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
+Patch19:        0020-misc-rename-safelease-to-clusterlock.patch
+Patch20:        0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
+Patch21:        0022-clusterlock-add-the-local-locking-implementation.patch
+Patch22:        0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
+Patch23:        0024-vdsm.spec-Require-openssl.patch
+Patch24:        0025-Fedora-18-require-a-newer-udev.patch
+Patch25:        0026-fix-sloppy-backport-of-safelease-rename.patch
+Patch26:        0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
 
 
 BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
@@ -69,6 +85,7 @@ BuildRequires: python-nose
 
 # BuildRequires needed by the tests during the build
 BuildRequires: python-ethtool
+BuildRequires: python-pthreading
 BuildRequires: libselinux-python
 BuildRequires: libvirt-python
 BuildRequires: genisoimage
@@ -77,7 +94,7 @@ BuildRequires: m2crypto
 %ifarch x86_64
 BuildRequires: python-dmidecode
 %endif
-%if 0%{?rhel}
+%if 0%{?rhel} == 6
 BuildRequires: python-ordereddict
 %endif
 
@@ -93,7 +110,6 @@ BuildRequires: libtool
 BuildRequires: pyflakes
 BuildRequires: python-pep8
 BuildRequires: systemd-units
-BuildRequires: python-pthreading
 %endif
 
 Requires: which
@@ -161,6 +177,13 @@ Requires: selinux-policy-targeted >= 3.10.0-149
 Requires: lvm2 >= 2.02.95
 %endif
 
+%if 0%{?fedora} >= 18
+Requires: selinux-policy-targeted >= 3.11.1-71
+# In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+# disabled, we now require version 2.1.13-44 (or newer) on Fedora.
+Requires: policycoreutils >= 2.1.13-44
+%endif
+
 Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
 Requires: psmisc >= 22.6-15
 Requires: fence-agents
@@ -210,7 +233,7 @@ Summary:        VDSM API Server
 BuildArch:      noarch
 
 Requires: %{name}-python = %{version}-%{release}
-%if 0%{?rhel}
+%if 0%{?rhel} == 6
 Requires: python-ordereddict
 %endif
 
@@ -231,6 +254,7 @@ BuildArch:      noarch
 
 Requires: %{name} = %{version}-%{release}
 Requires: m2crypto
+Requires: openssl
 
 %description reg
 VDSM registration package. Used to register a Linux host to a Virtualization
@@ -444,6 +468,22 @@ Gluster plugin enables VDSM to serve Gluster functionalities.
 %patch8 -p1 -b .patch8
 %patch9 -p1 -b .patch9
 %patch10 -p1 -b .patch10
+%patch11 -p1 -b .patch11
+%patch12 -p1 -b .patch12
+%patch13 -p1 -b .patch13
+%patch14 -p1 -b .patch14
+%patch15 -p1 -b .patch15
+%patch16 -p1 -b .patch16
+%patch17 -p1 -b .patch17
+%patch18 -p1 -b .patch18
+%patch19 -p1 -b .patch19
+%patch20 -p1 -b .patch20
+%patch21 -p1 -b .patch21
+%patch22 -p1 -b .patch22
+%patch23 -p1 -b .patch23
+%patch24 -p1 -b .patch24
+%patch25 -p1 -b .patch25
+%patch26 -p1 -b .patch26
 
 %if 0%{?rhel} == 6
 sed -i '/ su /d' vdsm/vdsm-logrotate.conf.in
@@ -526,7 +566,7 @@ export LC_ALL=C
 /usr/sbin/usermod -a -G %{qemu_group},%{vdsm_group} %{snlk_user}
 
 %post
-%{_bindir}/vdsm-tool sebool-config
+%{_bindir}/vdsm-tool sebool-config || :
 # set the vdsm "secret" password for libvirt
 %{_bindir}/vdsm-tool set-saslpasswd
 
@@ -568,7 +608,7 @@ then
     /bin/sed -i '/# VDSM section begin/,/# VDSM section end/d' \
         /etc/sysctl.conf
 
-    %{_bindir}/vdsm-tool sebool-unconfig
+    %{_bindir}/vdsm-tool sebool-unconfig || :
 
     /usr/sbin/saslpasswd2 -p -a libvirt -d vdsm at ovirt
 
@@ -725,7 +765,7 @@ exit 0
 %{_datadir}/%{vdsm_name}/storage/resourceFactories.py*
 %{_datadir}/%{vdsm_name}/storage/remoteFileHandler.py*
 %{_datadir}/%{vdsm_name}/storage/resourceManager.py*
-%{_datadir}/%{vdsm_name}/storage/safelease.py*
+%{_datadir}/%{vdsm_name}/storage/clusterlock.py*
 %{_datadir}/%{vdsm_name}/storage/sdc.py*
 %{_datadir}/%{vdsm_name}/storage/sd.py*
 %{_datadir}/%{vdsm_name}/storage/securable.py*
@@ -1034,6 +1074,24 @@ exit 0
 %{_datadir}/%{vdsm_name}/gluster/hostname.py*
 
 %changelog
+* Wed Jan 30 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-6
+- Explicitly shutdown  m2crypto socket
+- spec: require policycoreutils and skip sebool errors
+- spec: requires selinux-policy to avoid selinux failure
+- vdsmd.service: require either ntpd or chronyd
+- isRunning didn't check local variable before reading
+- udev: Race fix- load and trigger dev rule (#891300)
+- Change scsi_id command path to be configured at runtime (#886087)
+- upgrade: force upgrade to v2 before upgrading to v3 (#893184)
+- misc: rename safelease to clusterlock
+- domain: select the cluster lock using makeClusterLock
+- clusterlock: add the local locking implementation (#877715)
+- upgrade: catch MetaDataKeyNotFoundError when preparing
+- vdsm.spec: Require openssl (#905728)
+- Fedora 18: require a newer udev
+- fix sloppy backport of safelease rename
+- removing the use of zombie reaper from supervdsm
+
 * Fri Jan 18 2013 Douglas Schilling Landgraf <dougsland at redhat.com> 4.10.3-5
 - Searching for both py and pyc file to start super vdsm
 - adding getHardwareInfo API to vdsm

