[vdsm] update master branch with vdsm-4.10.3-11.fc19

Douglas Schilling Landgraf dougsland at fedoraproject.org
Mon Mar 25 13:52:53 UTC 2013


commit 83d581e06eed322982da89f7411b0c331048f06e
Author: Douglas Schilling Landgraf <dougsland at redhat.com>
Date:   Mon Mar 25 10:52:19 2013 -0300

    update master branch with vdsm-4.10.3-11.fc19

 0001-schema-Fix-schema-for-VM.updateDevice.patch   |   65 ++
 ...hema-Missing-comment-for-new-VmDeviceType.patch |   36 +
 ...ort-CPU-thread-info-in-getVdsCapabilities.patch |   53 ++
 0004-caps.py-osversion-validate-OVIRT.patch        |   35 +
 ...-libvirtd-didn-t-work-over-allinone-setup.patch |   37 +
 0006-Integrate-Smartcard-support.patch             |  421 ++++++++++
 ...m.spec-python-ordereddict-only-for-rhel-7.patch |   42 +
 ...on-t-require-python-ordereddict-on-fedora.patch |   42 +
 ...vdsm.spec-BuildRequires-python-pthreading.patch |   31 +
 ...or-both-py-and-pyc-file-to-start-super-vd.patch |   54 ++
 0011-adding-getHardwareInfo-API-to-vdsm.patch      |  334 ++++++++
 0012-Explicitly-shutdown-m2crypto-socket.patch     |   55 ++
 ...re-policycoreutils-and-skip-sebool-errors.patch |   59 ++
 ...es-selinux-policy-to-avoid-selinux-failur.patch |   42 +
 ...md.service-require-either-ntpd-or-chronyd.patch |   36 +
 ...idn-t-check-local-variable-before-reading.patch |   44 +
 0017-udev-Race-fix-load-and-trigger-dev-rule.patch |  115 +++
 ..._id-command-path-to-be-configured-at-runt.patch |  171 ++++
 ...orce-upgrade-to-v2-before-upgrading-to-v3.patch |   91 ++
 0020-misc-rename-safelease-to-clusterlock.patch    |  866 ++++++++++++++++++++
 ...ct-the-cluster-lock-using-makeClusterLock.patch |  151 ++++
 ...lock-add-the-local-locking-implementation.patch |  225 +++++
 ...ch-MetaDataKeyNotFoundError-when-preparin.patch |   38 +
 0024-vdsm.spec-Require-openssl.patch               |   31 +
 0025-Fedora-18-require-a-newer-udev.patch          |   36 +
 0026-fix-sloppy-backport-of-safelease-rename.patch |   40 +
 ...g-the-use-of-zombie-reaper-from-supervdsm.patch |   52 ++
 ...llow-delete-update-of-devices-with-no-ifc.patch |   63 ++
 ...licycoreutils-2.1.13-55-to-avoid-another-.patch |   41 +
 ...to-connect-to-supervdsm-more-than-3-time-.patch |   50 ++
 0031-packaging-add-load_needed_modules.py.in.patch |   81 ++
 ...e_bond_dev-reopen-bonding_masters-per-bon.patch |   38 +
 ...er-Handling-Attribute-error-in-Python-2.6.patch |   91 ++
 0034-bootstrap-remove-glusterfs-packages.patch     |   52 ++
 ...-gluster-set-glusterfs-dependency-version.patch |   33 +
 ...te-the-template-when-zeroing-a-dependant-.patch |   65 ++
 0037-vdsm.spec-fence-agents-all.patch              |   59 ++
 sources                                            |    2 +-
 vdsm.spec                                          |  179 ++++-
 39 files changed, 3929 insertions(+), 27 deletions(-)
---
diff --git a/0001-schema-Fix-schema-for-VM.updateDevice.patch b/0001-schema-Fix-schema-for-VM.updateDevice.patch
new file mode 100644
index 0000000..59cd4bc
--- /dev/null
+++ b/0001-schema-Fix-schema-for-VM.updateDevice.patch
@@ -0,0 +1,65 @@
+From 844f10a9aafd55018e9966a61dece726d3e62511 Mon Sep 17 00:00:00 2001
+From: Adam Litke <agl at us.ibm.com>
+Date: Wed, 12 Dec 2012 13:52:19 -0600
+Subject: [PATCH 1/3] schema: Fix schema for VM.updateDevice
+
+Another recent update broke the schema file.  I'll take the blame for this one
+since I approved the change :)  Some missing and/or malformed data in comments
+was causing the process-schema script to fail.  Another reason for validating
+the schema during the build.
+
+Change-Id: If88596050ace9511bcc7be65ee46645359e30532
+Signed-off-by: Adam Litke <agl at us.ibm.com>
+Reviewed-on: http://gerrit.ovirt.org/10012
+Reviewed-by: Saggi Mizrahi <smizrahi at redhat.com>
+Tested-by: Saggi Mizrahi <smizrahi at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10019
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Tested-by: Antoni Segura Puimedon <asegurap at redhat.com>
+---
+ vdsm_api/vdsmapi-schema.json | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/vdsm_api/vdsmapi-schema.json b/vdsm_api/vdsmapi-schema.json
+index 9772e18..487e69a 100644
+--- a/vdsm_api/vdsmapi-schema.json
++++ b/vdsm_api/vdsmapi-schema.json
+@@ -4894,10 +4894,12 @@
+ ##
+ # @vmUpdateDeviceParams:
+ #
++# A discriminated record of update parameters for a VM device.
++#
+ # @deviceType: The VM device type to update. For example 'interface' for
+ #              network devices or 'disk' for disk storage devices.
+ #
+-# Since 4.10.3
++# Since: 4.10.3
+ ##
+ {'type': 'vmUpdateDeviceParams',
+  'data': {'deviceType': 'VmDeviceType'},
+@@ -4924,11 +4926,11 @@
+ #                 by alias. If omitted, it keeps the current mirroring
+ #                 configuration.
+ #
+-# Since 4.10.3
++# Since: 4.10.3
+ ##
+ {'type': 'vmUpdateInterfaceDeviceParams',
+  'data': {'*network': 'str', '*linkActive': 'bool',
+-          'alias': 'str', '*portMirroring': '[str]'}}
++          'alias': 'str', '*portMirroring': ['str']}}
+ 
+ ##
+ # @VM.updateDevice:
+@@ -4943,7 +4945,6 @@
+ # The VM definition, as updated
+ #
+ # Since: 4.10.3
+-#
+ ##
+ {'command': {'class': 'VM', 'name': 'updateDevice'},
+  'data': {'vmId': 'UUID', 'params': 'vmUpdateDeviceParams'},
+-- 
+1.7.11.7
+
diff --git a/0002-schema-Missing-comment-for-new-VmDeviceType.patch b/0002-schema-Missing-comment-for-new-VmDeviceType.patch
new file mode 100644
index 0000000..d44579b
--- /dev/null
+++ b/0002-schema-Missing-comment-for-new-VmDeviceType.patch
@@ -0,0 +1,36 @@
+From 244713456b87d4ea1873c98cbaae52bffd3ce1ee Mon Sep 17 00:00:00 2001
+From: Adam Litke <agl at us.ibm.com>
+Date: Wed, 12 Dec 2012 14:11:16 -0600
+Subject: [PATCH 2/3] schema: Missing comment for new VmDeviceType
+
+When adding a new 'console' VmDeviceType, the submitter forgot to include
+documentation in the comment block.  This caused process-schema.py to fail.
+
+Change-Id: Icd7db71a4cd1a2addd31815a73dcd5c1cda7af4f
+Signed-off-by: Adam Litke <agl at us.ibm.com>
+Reviewed-on: http://gerrit.ovirt.org/10013
+Reviewed-by: Saggi Mizrahi <smizrahi at redhat.com>
+Tested-by: Saggi Mizrahi <smizrahi at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10020
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Tested-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm_api/vdsmapi-schema.json | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/vdsm_api/vdsmapi-schema.json b/vdsm_api/vdsmapi-schema.json
+index 487e69a..58dd6e2 100644
+--- a/vdsm_api/vdsmapi-schema.json
++++ b/vdsm_api/vdsmapi-schema.json
+@@ -1681,6 +1681,8 @@
+ #
+ # @channel:     A host-guest communication channel
+ #
++# @console:     A console device
++#
+ # Since: 4.10.0
+ ##
+ {'enum': 'VmDeviceType',
+-- 
+1.7.11.7
+
diff --git a/0003-api-Report-CPU-thread-info-in-getVdsCapabilities.patch b/0003-api-Report-CPU-thread-info-in-getVdsCapabilities.patch
new file mode 100644
index 0000000..e502d73
--- /dev/null
+++ b/0003-api-Report-CPU-thread-info-in-getVdsCapabilities.patch
@@ -0,0 +1,53 @@
+From b8c1c973d9767859d144042531f7a5ae76fce389 Mon Sep 17 00:00:00 2001
+From: Greg Padgett <gpadgett at redhat.com>
+Date: Thu, 20 Dec 2012 09:43:29 -0500
+Subject: [PATCH 3/3] api: Report CPU thread info in getVdsCapabilities
+
+Report CPU thread info in getVdsCapabilities
+
+Change-Id: I11f1e139a3f3d5bd18032713694bdebf9ab8c1d7
+Signed-off-by: Greg Padgett <gpadgett at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10300
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/caps.py                 | 1 +
+ vdsm_api/vdsmapi-schema.json | 4 +++-
+ 2 files changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm/caps.py b/vdsm/caps.py
+index 0438fb4..6749227 100644
+--- a/vdsm/caps.py
++++ b/vdsm/caps.py
+@@ -261,6 +261,7 @@ def get():
+     else:
+         caps['cpuCores'] = str(cpuTopology.cores())
+ 
++    caps['cpuThreads'] = str(cpuTopology.threads())
+     caps['cpuSockets'] = str(cpuTopology.sockets())
+     caps['cpuSpeed'] = cpuInfo.mhz()
+     if config.getboolean('vars', 'fake_kvm_support'):
+diff --git a/vdsm_api/vdsmapi-schema.json b/vdsm_api/vdsmapi-schema.json
+index 58dd6e2..c72ec4b 100644
+--- a/vdsm_api/vdsmapi-schema.json
++++ b/vdsm_api/vdsmapi-schema.json
+@@ -848,6 +848,8 @@
+ #
+ # @kvmEnabled:          KVM is enabled on the host
+ #
++# @cpuThreads:          The number of CPU threads present
++#
+ # @cpuCores:            The number of CPU cores present
+ #
+ # @cpuSockets:          The numbet of CPU sockets
+@@ -909,7 +911,7 @@
+ #        the current API truncates @software_version to 'x.y'.
+ ##
+ {'type': 'VdsmCapabilities',
+- 'data': {'kvmEnabled': 'bool', 'cpuCores': 'uint',
++ 'data': {'kvmEnabled': 'bool', 'cpuThreads': 'uint', 'cpuCores': 'uint',
+           'cpuSockets': 'uint', 'cpuSpeed': 'float', 'cpuModel': 'str',
+           'cpuFlags': 'str', 'version_name': 'str', 'software_version': 'str',
+           'software_revision': 'str', 'supportedENGINEs': ['str'],
+-- 
+1.7.11.7
+
diff --git a/0004-caps.py-osversion-validate-OVIRT.patch b/0004-caps.py-osversion-validate-OVIRT.patch
new file mode 100644
index 0000000..9333042
--- /dev/null
+++ b/0004-caps.py-osversion-validate-OVIRT.patch
@@ -0,0 +1,35 @@
+From 95be2057db1bea2730bc4b65058f5b1a42ab45bd Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Mon, 17 Dec 2012 19:23:21 -0500
+Subject: [PATCH 4/6] caps.py: osversion() validate OVIRT
+
+Currently we are only validating the RHEV-H node.
+This patch makes osversion() validate the oVirt node as well.
+
+Change-Id: I58efd4660f94b2f68ead470a54f0f301a1b9b4ba
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=873917
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10177
+Reviewed-by: Alon Bar-Lev <alonbl at redhat.com>
+Tested-by: Alon Bar-Lev <alonbl at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm/caps.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/vdsm/caps.py b/vdsm/caps.py
+index 6749227..e9a1476 100644
+--- a/vdsm/caps.py
++++ b/vdsm/caps.py
+@@ -229,7 +229,7 @@ def osversion():
+ 
+     osname = getos()
+     try:
+-        if osname == OSName.RHEVH:
++        if osname == OSName.RHEVH or osname == OSName.OVIRT:
+             d = _parseKeyVal(file('/etc/default/version'))
+             version = d.get('VERSION', '')
+             release = d.get('RELEASE', '')
+-- 
+1.7.11.7
+
diff --git a/0005-restarting-libvirtd-didn-t-work-over-allinone-setup.patch b/0005-restarting-libvirtd-didn-t-work-over-allinone-setup.patch
new file mode 100644
index 0000000..bc7af92
--- /dev/null
+++ b/0005-restarting-libvirtd-didn-t-work-over-allinone-setup.patch
@@ -0,0 +1,37 @@
+From 87dfb384debe7c171c3df27f13dc95ba3f0f0d5b Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Sun, 23 Dec 2012 10:40:29 +0200
+Subject: [PATCH 5/6] restarting libvirtd didn't work over allinone setup
+
+Change-Id: I300adc5ac3d9b12fee49023a54e5ffb4dee98da1
+Bug-Id: https://bugzilla.redhat.com/show_bug.cgi?id=888258
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10237
+Tested-by: Ohad Basan <obasan at redhat.com>
+Reviewed-by: Ohad Basan <obasan at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10316
+---
+ vdsm/vdsmd.init.in | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm/vdsmd.init.in b/vdsm/vdsmd.init.in
+index 7a7ae84..c8223d3 100755
+--- a/vdsm/vdsmd.init.in
++++ b/vdsm/vdsmd.init.in
+@@ -362,7 +362,11 @@ EOF
+ 
+     ovirt_store_config "$lconf" "$qconf" "$ldconf" "$llogr"
+ 
+-    /sbin/initctl restart libvirtd 2>/dev/null || :
++    if libvirt_should_use_upstart; then
++        /sbin/initctl restart libvirtd 2>/dev/null || :
++    else
++        /bin/systemctl restart libvirtd.service
++    fi
+ 
+     #
+     # finished reconfiguration, do not trigger
+-- 
+1.7.11.7
+
diff --git a/0006-Integrate-Smartcard-support.patch b/0006-Integrate-Smartcard-support.patch
new file mode 100644
index 0000000..7837b0e
--- /dev/null
+++ b/0006-Integrate-Smartcard-support.patch
@@ -0,0 +1,421 @@
+From d33da223da1160b1ad2a8d02316b6b435bee9c29 Mon Sep 17 00:00:00 2001
+From: Tomas Jelinek <tjelinek at redhat.com>
+Date: Wed, 10 Oct 2012 03:06:21 -0400
+Subject: [PATCH 6/6] Integrate Smartcard support
+
+This patch is the VDSM part of integrating
+smartcard support into oVirt:
+
+This VDSM part integrates smartcards in
+a supported way, not just as an unsupported custom hook.
+
+It also removes the smartcard hook itself.
+
+Change-Id: I7cdaef420c8381d588f6215e66e6a80dd9d2e44b
+Signed-off-by: Tomas Jelinek <tjelinek at redhat.com>
+Signed-off-by: Peter V. Saveliev <peet at redhat.com>
+Signed-off-by: Antoni S. Puimedon <asegurap at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10547
+Tested-by: Omer Frenkel <ofrenkel at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ configure.ac                            |  1 -
+ tests/libvirtvmTests.py                 |  7 +++++
+ vdsm.spec.in                            | 12 --------
+ vdsm/libvirtvm.py                       | 44 ++++++++++++++++++++++++++++-
+ vdsm/vm.py                              | 19 +++++++++++--
+ vdsm_api/vdsmapi-schema.json            | 49 +++++++++++++++++++++++++++++++--
+ vdsm_hooks/Makefile.am                  |  1 -
+ vdsm_hooks/README                       |  4 +--
+ vdsm_hooks/smartcard/Makefile.am        | 18 ------------
+ vdsm_hooks/smartcard/README             |  9 ------
+ vdsm_hooks/smartcard/before_vm_start.py | 29 -------------------
+ 11 files changed, 116 insertions(+), 77 deletions(-)
+ delete mode 100644 vdsm_hooks/smartcard/Makefile.am
+ delete mode 100644 vdsm_hooks/smartcard/README
+ delete mode 100755 vdsm_hooks/smartcard/before_vm_start.py
+
+diff --git a/configure.ac b/configure.ac
+index a6265b5..3489e38 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -209,7 +209,6 @@ AC_OUTPUT([
+ 	vdsm_hooks/qos/Makefile
+ 	vdsm_hooks/qemucmdline/Makefile
+ 	vdsm_hooks/scratchpad/Makefile
+-	vdsm_hooks/smartcard/Makefile
+ 	vdsm_hooks/smbios/Makefile
+ 	vdsm_hooks/sriov/Makefile
+ 	vdsm_hooks/vhostmd/Makefile
+diff --git a/tests/libvirtvmTests.py b/tests/libvirtvmTests.py
+index 4ed7318..bd68f2a 100644
+--- a/tests/libvirtvmTests.py
++++ b/tests/libvirtvmTests.py
+@@ -91,6 +91,13 @@ class TestLibvirtvm(TestCaseBase):
+             domxml.appendOs()
+             self.assertXML(domxml.dom, xml, 'os')
+ 
++    def testSmartcardXML(self):
++        smartcardXML = '<smartcard mode="passthrough" type="spicevmc"/>'
++        dev = {'device': 'smartcard',
++               'specParams': {'mode': 'passthrough', 'type': 'spicevmc'}}
++        smartcard = libvirtvm.SmartCardDevice(self.conf, self.log, **dev)
++        self.assertXML(smartcard.getXML(), smartcardXML)
++
+     def testFeaturesXML(self):
+         featuresXML = """
+             <features>
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 1978fa9..f9e238b 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -353,14 +353,6 @@ Hook creates a disk for a VM onetime usage,
+ the disk will be erased when the VM destroyed.
+ VM cannot be migrated when using scratchpad hook
+ 
+-%package hook-smartcard
+-Summary:        Smartcard support for Spice protocol in VDSM
+-BuildArch:      noarch
+-
+-%description hook-smartcard
+-Smartcard hook add support for spice in VDSM.
+-Smartcard hook enable user to use its smartcard inside virtual machines.
+-
+ %package hook-smbios
+ Summary:        Adding custom smbios entries to libvirt domain via VDSM
+ BuildArch:      noarch
+@@ -887,10 +879,6 @@ exit 0
+ %{_libexecdir}/%{vdsm_name}/hooks/before_vm_migrate_source/50_scratchpad
+ %{_libexecdir}/%{vdsm_name}/hooks/after_vm_destroy/50_scratchpad
+ 
+-%files hook-smartcard
+-%defattr(-, root, root, -)
+-%{_libexecdir}/%{vdsm_name}/hooks/before_vm_start/50_smartcard
+-
+ %files hook-smbios
+ %defattr(-, root, root, -)
+ %{_libexecdir}/%{vdsm_name}/hooks/before_vm_start/50_smbios
+diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
+index 219e382..fe140ec 100644
+--- a/vdsm/libvirtvm.py
++++ b/vdsm/libvirtvm.py
+@@ -1243,6 +1243,21 @@ class WatchdogDevice(LibvirtVmDevice):
+         return m
+ 
+ 
++class SmartCardDevice(LibvirtVmDevice):
++    def getXML(self):
++        """
++        Add smartcard section to domain xml
++
++        <smartcard mode='passthrough' type='spicevmc'>
++          <address ... />
++        </smartcard>
++        """
++        card = self.createXmlElem(self.device, None, ['address'])
++        card.setAttribute('mode', self.specParams['mode'])
++        card.setAttribute('type', self.specParams['type'])
++        return card
++
++
+ class RedirDevice(LibvirtVmDevice):
+     def getXML(self):
+         """
+@@ -1395,6 +1410,7 @@ class LibvirtVm(vm.Vm):
+         self._getUnderlyingControllerDeviceInfo()
+         self._getUnderlyingBalloonDeviceInfo()
+         self._getUnderlyingWatchdogDeviceInfo()
++        self._getUnderlyingSmartcardDeviceInfo()
+         # Obtain info of all unknown devices. Must be last!
+         self._getUnderlyingUnknownDeviceInfo()
+ 
+@@ -1486,7 +1502,8 @@ class LibvirtVm(vm.Vm):
+                   vm.BALLOON_DEVICES: BalloonDevice,
+                   vm.WATCHDOG_DEVICES: WatchdogDevice,
+                   vm.REDIR_DEVICES: RedirDevice,
+-                  vm.CONSOLE_DEVICES: ConsoleDevice}
++                  vm.CONSOLE_DEVICES: ConsoleDevice,
++                  vm.SMARTCARD_DEVICES: SmartCardDevice}
+ 
+         for devType, devClass in devMap.items():
+             for dev in devices[devType]:
+@@ -2800,6 +2817,31 @@ class LibvirtVm(vm.Vm):
+                     dev['address'] = address
+                     dev['alias'] = alias
+ 
++    def _getUnderlyingSmartcardDeviceInfo(self):
++        """
++        Obtain smartcard device info from libvirt.
++        """
++        smartcardxml = _domParseStr(self._lastXMLDesc).childNodes[0].\
++            getElementsByTagName('devices')[0].\
++            getElementsByTagName('smartcard')
++        for x in smartcardxml:
++            if not x.getElementsByTagName('address'):
++                continue
++
++            address = self._getUnderlyingDeviceAddress(x)
++            alias = x.getElementsByTagName('alias')[0].getAttribute('name')
++
++            for dev in self._devices[vm.SMARTCARD_DEVICES]:
++                if not hasattr(dev, 'address'):
++                    dev.address = address
++                    dev.alias = alias
++
++            for dev in self.conf['devices']:
++                if dev['device'] == vm.SMARTCARD_DEVICES and \
++                        not dev.get('address'):
++                    dev['address'] = address
++                    dev['alias'] = alias
++
+     def _getUnderlyingWatchdogDeviceInfo(self):
+         """
+         Obtain watchdog device info from libvirt.
+diff --git a/vdsm/vm.py b/vdsm/vm.py
+index 49fcb11..9610644 100644
+--- a/vdsm/vm.py
++++ b/vdsm/vm.py
+@@ -47,6 +47,7 @@ BALLOON_DEVICES = 'balloon'
+ REDIR_DEVICES = 'redir'
+ WATCHDOG_DEVICES = 'watchdog'
+ CONSOLE_DEVICES = 'console'
++SMARTCARD_DEVICES = 'smartcard'
+ 
+ """
+ A module containing classes needed for VM communication.
+@@ -364,7 +365,8 @@ class Vm(object):
+                          SOUND_DEVICES: [], VIDEO_DEVICES: [],
+                          CONTROLLER_DEVICES: [], GENERAL_DEVICES: [],
+                          BALLOON_DEVICES: [], REDIR_DEVICES: [],
+-                         WATCHDOG_DEVICES: [], CONSOLE_DEVICES: []}
++                         WATCHDOG_DEVICES: [], CONSOLE_DEVICES: [],
++                         SMARTCARD_DEVICES: []}
+ 
+     def _get_lastStatus(self):
+         PAUSED_STATES = ('Powering down', 'RebootInProgress', 'Up')
+@@ -447,7 +449,8 @@ class Vm(object):
+                    SOUND_DEVICES: [], VIDEO_DEVICES: [],
+                    CONTROLLER_DEVICES: [], GENERAL_DEVICES: [],
+                    BALLOON_DEVICES: [], REDIR_DEVICES: [],
+-                   WATCHDOG_DEVICES: [], CONSOLE_DEVICES: []}
++                   WATCHDOG_DEVICES: [], CONSOLE_DEVICES: [],
++                   SMARTCARD_DEVICES: []}
+         for dev in self.conf.get('devices'):
+             try:
+                 devices[dev['type']].append(dev)
+@@ -485,6 +488,7 @@ class Vm(object):
+             devices[GENERAL_DEVICES] = []
+             devices[BALLOON_DEVICES] = []
+             devices[WATCHDOG_DEVICES] = []
++            devices[SMARTCARD_DEVICES] = self.getConfSmartcard()
+             devices[REDIR_DEVICES] = []
+             devices[CONSOLE_DEVICES] = []
+         else:
+@@ -549,6 +553,17 @@ class Vm(object):
+ 
+         return vcards
+ 
++    def getConfSmartcard(self):
++        """
++        Normalize smartcard device (now there is only one)
++        """
++        cards = []
++        if self.conf.get('smartcard'):
++            cards.append({'device': SMARTCARD_DEVICES,
++                          'specParams': {'mode': 'passthrough',
++                                         'type': 'spicevmc'}})
++        return cards
++
+     def getConfSound(self):
+         """
+         Normalize sound device provided by conf.
+diff --git a/vdsm_api/vdsmapi-schema.json b/vdsm_api/vdsmapi-schema.json
+index c72ec4b..7c9ef22 100644
+--- a/vdsm_api/vdsmapi-schema.json
++++ b/vdsm_api/vdsmapi-schema.json
+@@ -1685,11 +1685,13 @@
+ #
+ # @console:     A console device
+ #
++# @smartcard:   A smartcard device
++#
+ # Since: 4.10.0
+ ##
+ {'enum': 'VmDeviceType',
+  'data': ['disk', 'interface', 'video', 'sound', 'controller', 'balloon',
+-          'channel', 'console']}
++          'channel', 'console', 'smartcard']}
+ 
+ ##
+ # @VmDiskDeviceType:
+@@ -2356,6 +2358,48 @@
+           'address': 'VmDeviceAddress', 'alias': 'str', 'deviceId': 'UUID'}}
+ 
+ ##
++# @VmSmartcardDeviceSpecParams:
++#
++# Additional VM smartcard device parameters.
++#
++# Since: 4.10.3
++##
++{'type': 'VmSmartcardDeviceSpecParams', 'data': {}}
++
++##
++# @VmSmartcardDeviceType:
++#
++# An enumeration of VM smartcard device types.
++#
++# @smartcard: A smartcard
++#
++# Since: 4.10.3
++##
++{'enum': 'VmSmartcardDeviceType', 'data': ['smartcard']}
++
++##
++# @VmSmartcardDevice:
++#
++# Properties of a VM smartcard device.
++#
++# @deviceType:  The device type (always @smartcard)
++#
++# @device:      The the type of smartcard device
++#
++# @address:     Device hardware address
++#
++# @alias:       Alias used to identify this device in commands
++#
++# @specParams:  #optional Additional device parameters
++#
++# Since: 4.10.3
++##
++{'type': 'VmSmartcardDevice',
++ 'data': {'deviceType': 'VmDeviceType', 'device': 'VmSmartcardDeviceType',
++          'address': 'VmDeviceAddress', 'alias': 'str',
++          '*specParams': 'VmSmartcardDeviceSpecParams'}}
++
++##
+ # @VmConsoleDevice:
+ #
+ # Properties of a VM console device.
+@@ -2382,7 +2426,8 @@
+  'data': {'deviceType': 'VmDeviceType',},
+  'union': ['VmDiskDevice', 'VmInterfaceDevice', 'VmVideoDevice',
+           'VmSoundDevice', 'VmControllerDevice', 'VmBalloonDevice',
+-          'VmChannelDevice', 'VmWatchdogDevice', 'VmConsoleDevice']}
++          'VmChannelDevice', 'VmWatchdogDevice', 'VmConsoleDevice',
++          'VmSmartcardDevice']}
+ 
+ ##
+ # @VmShortStatus:
+diff --git a/vdsm_hooks/Makefile.am b/vdsm_hooks/Makefile.am
+index 9f00d4d..0e27a98 100644
+--- a/vdsm_hooks/Makefile.am
++++ b/vdsm_hooks/Makefile.am
+@@ -37,7 +37,6 @@ SUBDIRS += \
+ 	promisc \
+ 	qos \
+ 	scratchpad \
+-	smartcard \
+ 	smbios \
+ 	sriov \
+ 	vmdisk \
+diff --git a/vdsm_hooks/README b/vdsm_hooks/README
+index b45b93e..1659610 100644
+--- a/vdsm_hooks/README
++++ b/vdsm_hooks/README
+@@ -24,7 +24,7 @@ To work with VDSM hooks you need first to do the following:
+ 
+    If you want to enable more then one custom hook use the semicolon as
+    a separator:
+-   # rhevm-config -s UserDefinedVMProperties='pincpu=^[0-9]+$;smartcard=^(true|false)$' --cver=3.0
++   # rhevm-config -s UserDefinedVMProperties='pincpu=^[0-9]+$;sap_agent=^(true|false)$' --cver=3.0
+ 
+    The convention is [hook name]=[value], the value is evaluate with regular expression,
+    If you find regular expression too complex, you can always use the following command:
+@@ -47,7 +47,7 @@ To work with VDSM hooks you need first to do the following:
+       pincpu=1
+       if you want to use more then on hook and you did enable it with the rhevm-config
+       tool, you can use the semicolon as a separator:
+-      pincpu=1;smartcard=true
++      pincpu=1;sap_agent=true
+    b. Another option is to use "Run Once" dialog which mean that you add a custom property
+       only this time to the VM, next time that you run the VM it will run without the
+       custom property that you provided.
+diff --git a/vdsm_hooks/smartcard/Makefile.am b/vdsm_hooks/smartcard/Makefile.am
+deleted file mode 100644
+index dfefff5..0000000
+--- a/vdsm_hooks/smartcard/Makefile.am
++++ /dev/null
+@@ -1,18 +0,0 @@
+-# Copyright 2008 Red Hat, Inc. and/or its affiliates.
+-#
+-# Licensed to you under the GNU General Public License as published by
+-# the Free Software Foundation; either version 2 of the License, or
+-# (at your option) any later version.  See the files README and
+-# LICENSE_GPL_v2 which accompany this distribution.
+-#
+-
+-EXTRA_DIST = \
+-	before_vm_start.py
+-
+-install-data-local:
+-	$(MKDIR_P) $(DESTDIR)$(vdsmhooksdir)/before_vm_start
+-	$(INSTALL_SCRIPT) $(srcdir)/before_vm_start.py \
+-		$(DESTDIR)$(vdsmhooksdir)/before_vm_start/50_smartcard
+-
+-uninstall-local:
+-	$(RM) $(DESTDIR)$(vdsmhooksdir)/before_vm_start/50_smartcard
+diff --git a/vdsm_hooks/smartcard/README b/vdsm_hooks/smartcard/README
+deleted file mode 100644
+index bd376bf..0000000
+--- a/vdsm_hooks/smartcard/README
++++ /dev/null
+@@ -1,9 +0,0 @@
+-smartcard hook:
+-===============
+-add smartcard support for spice
+-
+-syntax:
+-smartcard: smartcard=true
+-
+-libvirt xml:
+-<smartcard mode='passthrough' type='spicevmc'/>
+diff --git a/vdsm_hooks/smartcard/before_vm_start.py b/vdsm_hooks/smartcard/before_vm_start.py
+deleted file mode 100755
+index 1944978..0000000
+--- a/vdsm_hooks/smartcard/before_vm_start.py
++++ /dev/null
+@@ -1,29 +0,0 @@
+-#!/usr/bin/python
+-
+-import os
+-import sys
+-import hooking
+-import traceback
+-
+-'''
+-smartcard vdsm hook
+-adding to domain xml
+-<smartcard mode='passthrough' type='spicevmc'/>
+-'''
+-
+-if 'smartcard' in os.environ:
+-    try:
+-        sys.stderr.write('smartcard: adding smartcard support\n')
+-        domxml = hooking.read_domxml()
+-
+-        devices = domxml.getElementsByTagName('devices')[0]
+-        card = domxml.createElement('smartcard')
+-        card.setAttribute('mode', 'passthrough')
+-        card.setAttribute('type', 'spicevmc')
+-
+-        devices.appendChild(card)
+-
+-        hooking.write_domxml(domxml)
+-    except:
+-        sys.stderr.write('smartcard: [unexpected error]: %s\n' % traceback.format_exc())
+-        sys.exit(2)
+-- 
+1.7.11.7
+
diff --git a/0007-vdsm.spec-python-ordereddict-only-for-rhel-7.patch b/0007-vdsm.spec-python-ordereddict-only-for-rhel-7.patch
new file mode 100644
index 0000000..d20a3c9
--- /dev/null
+++ b/0007-vdsm.spec-python-ordereddict-only-for-rhel-7.patch
@@ -0,0 +1,42 @@
+From 03389bcfa0808927ad7e8115073f89f05ce7fe5b Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Tue, 15 Jan 2013 13:12:08 -0500
+Subject: [PATCH 07/22] vdsm.spec: python-ordereddict only for rhel < 7
+
+rhel7 contains python 2.7, which already includes the ordereddict module,
+so no extra python package is needed.
+
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=891542
+Change-Id: I784d82a7fb5a1c6a13f015747f077020b91c19cb
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11056
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index f9e238b..c01d29f 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -44,7 +44,7 @@ BuildRequires: libvirt-python
+ BuildRequires: genisoimage
+ BuildRequires: openssl
+ BuildRequires: m2crypto
+-%if 0%{?rhel}
++%if 0%{?rhel} < 7
+ BuildRequires: python-ordereddict
+ %endif
+ 
+@@ -174,7 +174,7 @@ Summary:        VDSM API Server
+ BuildArch:      noarch
+ 
+ Requires: %{name}-python = %{version}-%{release}
+-%if 0%{?rhel}
++%if 0%{?rhel} < 7
+ Requires: python-ordereddict
+ %endif
+ 
+-- 
+1.8.1
+
diff --git a/0008-vdsm.spec-Don-t-require-python-ordereddict-on-fedora.patch b/0008-vdsm.spec-Don-t-require-python-ordereddict-on-fedora.patch
new file mode 100644
index 0000000..20dcf5b
--- /dev/null
+++ b/0008-vdsm.spec-Don-t-require-python-ordereddict-on-fedora.patch
@@ -0,0 +1,42 @@
+From 95765433ada7074f49fb33bb2f2ea59799022e98 Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Tue, 15 Jan 2013 13:16:06 -0500
+Subject: [PATCH 08/22] vdsm.spec: Don't require python-ordereddict on fedora
+
+It is a regression introduced by commit bb0620f. The condition "0%{?rhel} < 7"
+also holds true on fedora, so it causes python-ordereddict to be required on fedora
+
+Change-Id: I87e167f207669f4dda5bb71d7809fe301ea7e905
+Signed-off-by: Mark Wu <wudxw at linux.vnet.ibm.com>
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11057
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index c01d29f..b518045 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -44,7 +44,7 @@ BuildRequires: libvirt-python
+ BuildRequires: genisoimage
+ BuildRequires: openssl
+ BuildRequires: m2crypto
+-%if 0%{?rhel} < 7
++%if 0%{?rhel} == 6
+ BuildRequires: python-ordereddict
+ %endif
+ 
+@@ -174,7 +174,7 @@ Summary:        VDSM API Server
+ BuildArch:      noarch
+ 
+ Requires: %{name}-python = %{version}-%{release}
+-%if 0%{?rhel} < 7
++%if 0%{?rhel} == 6
+ Requires: python-ordereddict
+ %endif
+ 
+-- 
+1.8.1
+
diff --git a/0009-vdsm.spec-BuildRequires-python-pthreading.patch b/0009-vdsm.spec-BuildRequires-python-pthreading.patch
new file mode 100644
index 0000000..1aff2c9
--- /dev/null
+++ b/0009-vdsm.spec-BuildRequires-python-pthreading.patch
@@ -0,0 +1,31 @@
+From bed07103fc279eefd3f879f7f47e7d3b3c2cd8a9 Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Tue, 15 Jan 2013 14:16:01 -0500
+Subject: [PATCH 09/22] vdsm.spec: BuildRequires: python-pthreading
+
+Moving python-pthreading to be a generic BuildRequirement
+
+Change-Id: If996c366cac42e149193651d2d8a6dbde6d8c81e
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=891542
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11058
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index b518045..14381e5 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -39,6 +39,7 @@ BuildRequires: python-nose
+ 
+ # BuildRequires needed by the tests during the build
+ BuildRequires: python-ethtool
++BuildRequires: python-pthreading
+ BuildRequires: libselinux-python
+ BuildRequires: libvirt-python
+ BuildRequires: genisoimage
+-- 
+1.8.1
+
diff --git a/0010-Searching-for-both-py-and-pyc-file-to-start-super-vd.patch b/0010-Searching-for-both-py-and-pyc-file-to-start-super-vd.patch
new file mode 100644
index 0000000..658729a
--- /dev/null
+++ b/0010-Searching-for-both-py-and-pyc-file-to-start-super-vd.patch
@@ -0,0 +1,54 @@
+From 72ff12082013c1682604e2ffb9e8a06b2714d64e Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Thu, 17 Jan 2013 09:38:43 +0200
+Subject: [PATCH 10/22] Searching for both py and pyc file to start super vdsm
+
+In oVirt Node we don't keep py files.
+
+Change-Id: I36771ce46f5d00ad8befe33569252bdb8cffeaa1
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10854
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11133
+Reviewed-by: Vinzenz Feenstra <vfeenstr at redhat.com>
+---
+ vdsm/supervdsm.py | 14 +++++++++-----
+ 1 file changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/vdsm/supervdsm.py b/vdsm/supervdsm.py
+index 532d5ac..740a93e 100644
+--- a/vdsm/supervdsm.py
++++ b/vdsm/supervdsm.py
+@@ -35,19 +35,23 @@ _g_singletonSupervdsmInstance = None
+ _g_singletonSupervdsmInstance_lock = threading.Lock()
+ 
+ 
+-def __supervdsmServerPath(serverFile):
++def __supervdsmServerPath():
+     base = os.path.dirname(__file__)
+ 
+-    serverPath = os.path.join(base, serverFile)
+-    if os.path.exists(serverPath):
+-        return os.path.abspath(serverPath)
++    # serverFile can be both the py or pyc file. In oVirt node we don't keep
++    # py files. this method looks for one of the two to calculate the absolute
++    # path of supervdsmServer
++    for serverFile in ("supervdsmServer.py", "supervdsmServer.pyc"):
++        serverPath = os.path.join(base, serverFile)
++        if os.path.exists(serverPath):
++            return os.path.abspath(serverPath)
+ 
+     raise RuntimeError("SuperVDSM Server not found")
+ 
+ PIDFILE = os.path.join(constants.P_VDSM_RUN, "svdsm.pid")
+ TIMESTAMP = os.path.join(constants.P_VDSM_RUN, "svdsm.time")
+ ADDRESS = os.path.join(constants.P_VDSM_RUN, "svdsm.sock")
+-SUPERVDSM = __supervdsmServerPath("supervdsmServer.py")
++SUPERVDSM = __supervdsmServerPath()
+ 
+ extraPythonPathList = []
+ 
+-- 
+1.8.1
+
diff --git a/0011-adding-getHardwareInfo-API-to-vdsm.patch b/0011-adding-getHardwareInfo-API-to-vdsm.patch
new file mode 100644
index 0000000..85a1509
--- /dev/null
+++ b/0011-adding-getHardwareInfo-API-to-vdsm.patch
@@ -0,0 +1,334 @@
+From 6d25123c36637eb82966d52e33330bdbfa733413 Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Fri, 18 Jan 2013 15:01:37 +0200
+Subject: [PATCH 11/22] adding getHardwareInfo API to vdsm
+
+Super vdsm retrieves system info about host hardware
+parameters. This info will be shown as part of the getHardwareInfo
+API call in a structure called HardwareInformation.
+
+This feature is currently available only for x86 cpu platforms; for other
+platforms the api call returns an empty dictionary.
+
+Feature-Description:
+http://wiki.ovirt.org/wiki/Features/Design/HostHardwareInfo
+
+Change-Id: Ic429ef101fcf9047c4b552405314dc7ba9ba07a0
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/9258
+Reviewed-by: Barak Azulay <bazulay at redhat.com>
+Reviewed-by: Shu Ming <shuming at linux.vnet.ibm.com>
+Reviewed-by: Saggi Mizrahi <smizrahi at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11168
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in                 |  5 +++
+ vdsm/API.py                  | 11 +++++
+ vdsm/BindingXMLRPC.py        |  5 +++
+ vdsm/Makefile.am             |  1 +
+ vdsm/define.py               |  3 ++
+ vdsm/dmidecodeUtil.py        | 99 ++++++++++++++++++++++++++++++++++++++++++++
+ vdsm/supervdsmServer.py      | 10 +++++
+ vdsm_api/vdsmapi-schema.json | 25 +++++++++++
+ vdsm_cli/vdsClient.py        |  7 ++++
+ 9 files changed, 166 insertions(+)
+ create mode 100644 vdsm/dmidecodeUtil.py
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 14381e5..5b13419 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -45,6 +45,9 @@ BuildRequires: libvirt-python
+ BuildRequires: genisoimage
+ BuildRequires: openssl
+ BuildRequires: m2crypto
++%ifarch x86_64
++BuildRequires: python-dmidecode
++%endif
+ %if 0%{?rhel} == 6
+ BuildRequires: python-ordereddict
+ %endif
+@@ -77,6 +80,7 @@ Requires: m2crypto
+ Requires: %{name}-xmlrpc = %{version}-%{release}
+ 
+ %ifarch x86_64
++Requires: python-dmidecode
+ Requires: dmidecode
+ %endif
+ 
+@@ -610,6 +614,7 @@ exit 0
+ %{_datadir}/%{vdsm_name}/blkid.py*
+ %{_datadir}/%{vdsm_name}/caps.py*
+ %{_datadir}/%{vdsm_name}/clientIF.py*
++%{_datadir}/%{vdsm_name}/dmidecodeUtil.py*
+ %{_datadir}/%{vdsm_name}/API.py*
+ %{_datadir}/%{vdsm_name}/hooking.py*
+ %{_datadir}/%{vdsm_name}/hooks.py*
+diff --git a/vdsm/API.py b/vdsm/API.py
+index a1e3f2c..732f8a3 100644
+--- a/vdsm/API.py
++++ b/vdsm/API.py
+@@ -1119,6 +1119,17 @@ class Global(APIBase):
+ 
+         return {'status': doneCode, 'info': c}
+ 
++    def getHardwareInfo(self):
++        """
++        Report host hardware information
++        """
++        try:
++            hw = supervdsm.getProxy().getHardwareInfo()
++            return {'status': doneCode, 'info': hw}
++        except:
++            self.log.error("failed to retrieve hardware info", exc_info=True)
++            return errCode['hwInfoErr']
++
+     def getStats(self):
+         """
+         Report host statistics.
+diff --git a/vdsm/BindingXMLRPC.py b/vdsm/BindingXMLRPC.py
+index 73185b9..f19a8bb 100644
+--- a/vdsm/BindingXMLRPC.py
++++ b/vdsm/BindingXMLRPC.py
+@@ -288,6 +288,10 @@ class BindingXMLRPC(object):
+         ret['info'].update(self.getServerInfo())
+         return ret
+ 
++    def getHardwareInfo(self):
++        api = API.Global()
++        return api.getHardwareInfo()
++
+     def getStats(self):
+         api = API.Global()
+         return api.getStats()
+@@ -768,6 +772,7 @@ class BindingXMLRPC(object):
+                 (self.vmGetMigrationStatus, 'migrateStatus'),
+                 (self.vmMigrationCancel, 'migrateCancel'),
+                 (self.getCapabilities, 'getVdsCapabilities'),
++                (self.getHardwareInfo, 'getVdsHardwareInfo'),
+                 (self.getStats, 'getVdsStats'),
+                 (self.vmGetStats, 'getVmStats'),
+                 (self.getAllVmStats, 'getAllVmStats'),
+diff --git a/vdsm/Makefile.am b/vdsm/Makefile.am
+index dc0590e..88b3287 100644
+--- a/vdsm/Makefile.am
++++ b/vdsm/Makefile.am
+@@ -32,6 +32,7 @@ dist_vdsm_PYTHON = \
+ 	configNetwork.py \
+ 	debugPluginClient.py \
+ 	dummybr.py \
++	dmidecodeUtil.py \
+ 	guestIF.py \
+ 	hooking.py \
+ 	hooks.py \
+diff --git a/vdsm/define.py b/vdsm/define.py
+index efebea1..01b0c60 100644
+--- a/vdsm/define.py
++++ b/vdsm/define.py
+@@ -127,6 +127,9 @@ errCode = {'noVM': {'status':
+             'updateDevice': {'status':
+                              {'code': 56,
+                               'message': 'Failed to update device'}},
++            'hwInfoErr': {'status':
++                          {'code': 57,
++                           'message': 'Failed to read hardware information'}},
+             'recovery': {'status':
+                          {'code': 99,
+                           'message':
+diff --git a/vdsm/dmidecodeUtil.py b/vdsm/dmidecodeUtil.py
+new file mode 100644
+index 0000000..eb8d834
+--- /dev/null
++++ b/vdsm/dmidecodeUtil.py
+@@ -0,0 +1,99 @@
++#
++# Copyright 2012 Red Hat, Inc.
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program; if not, write to the Free Software
++# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
++#
++# Refer to the README and COPYING files for full details of the license
++#
++
++import dmidecode
++from vdsm import utils
++
++
++# This function gets dict and returns new dict that includes only string
++# value for each key. Keys in d that their value is a dictionary will be
++# ignored because those keys define a lable for the sub dictionary
++# (and those keys are irrelevant for us in dmidecode output)
++def __leafDict(d):
++    ret = {}
++    for k, v in d.iteritems():
++        if isinstance(v, dict):
++            ret.update(__leafDict(v))
++        else:
++            ret[k] = v
++    return ret
++
++
++ at utils.memoized
++def getAllDmidecodeInfo():
++    myLeafDict = {}
++    for k in ('system', 'bios', 'cache', 'processor', 'chassis', 'memory'):
++        myLeafDict[k] = __leafDict(getattr(dmidecode, k)())
++    return myLeafDict
++
++
++ at utils.memoized
++def getHardwareInfoStructure():
++    dmiInfo = getAllDmidecodeInfo()
++    sysStruct = {}
++    for k1, k2 in (('system', 'Manufacturer'),
++                   ('system', 'Product Name'),
++                   ('system', 'Version'),
++                   ('system', 'Serial Number'),
++                   ('system', 'UUID'),
++                   ('system', 'Family')):
++        sysStruct[(k1 + k2).replace(' ', '')] = dmiInfo[k1][k2]
++
++    return sysStruct
++
++
++def printInfo(d):
++
++    def formatData(data):
++        return '\n'.join(['%s - %s' % (k, v) for k, v in data.iteritems()])
++
++    print(
++        """
++        SYSTEM INFORMATION
++        ==================
++{system}
++
++        BIOS INFORMATION
++        ================
++{bios}
++
++        CACHE INFORMATION
++        =================
++{cache}
++
++        PROCESSOR INFO
++        ==============
++{processor}
++
++        CHASSIS INFO
++        ============
++{chassis}
++
++        MEMORY INFORMATION
++        ==================
++{memory}
++        """.format(
++        system=formatData(d['system']),
++        bios=formatData(d['bios']),
++        cache=formatData(d['cache']),
++        processor=formatData(d['processor']),
++        chassis=formatData(d['chassis']),
++        memory=formatData(d['memory']))
++    )
+diff --git a/vdsm/supervdsmServer.py b/vdsm/supervdsmServer.py
+index 5effd41..dc89218 100755
+--- a/vdsm/supervdsmServer.py
++++ b/vdsm/supervdsmServer.py
+@@ -17,6 +17,7 @@
+ # Refer to the README and COPYING files for full details of the license
+ #
+ 
++import platform
+ import logging
+ import logging.config
+ import sys
+@@ -94,6 +95,15 @@ class _SuperVdsm(object):
+         return True
+ 
+     @logDecorator
++    def getHardwareInfo(self, *args, **kwargs):
++        if platform.machine() in ('x86_64', 'i686'):
++            from dmidecodeUtil import getHardwareInfoStructure
++            return getHardwareInfoStructure()
++        else:
++            #  not implemented over other architecture
++            return {}
++
++    @logDecorator
+     def getDevicePartedInfo(self, *args, **kwargs):
+         return _getDevicePartedInfo(*args, **kwargs)
+ 
+diff --git a/vdsm_api/vdsmapi-schema.json b/vdsm_api/vdsmapi-schema.json
+index 7c9ef22..b69d33e 100644
+--- a/vdsm_api/vdsmapi-schema.json
++++ b/vdsm_api/vdsmapi-schema.json
+@@ -690,6 +690,31 @@
+  'data': {'release': 'str', 'version': 'str', 'name': 'OSName'}}
+ 
+ ##
++# @HardwareInformation:
++#
++# Host hardware fields.
++#
++# @systemManufacturer:  Host manufacturer's name
++#
++# @systemProductName:   Host's hardware module
++#
++# @systemSerialNumber:  Hardware serial number
++#
++# @systemFamily:        Processor type
++#
++# @systemUUID:          Host's hardware UUID
++#
++# @systemVersion:       Host's hardware version
++#
++# Since: 4.10.3
++##
++{'type': 'HardwareInformation',
++ 'data': {'systemManufacturer': 'str',
++          'systemProductName': 'str', 'systemVersion': 'str',
++          'systemSerialNumber': 'str', 'systemUUID': 'str',
++          'systemFamily': 'str'}}
++
++##
+ # @SoftwarePackage:
+ #
+ # An enumeration of aliases for important software components.
+diff --git a/vdsm_cli/vdsClient.py b/vdsm_cli/vdsClient.py
+index c67e3fe..884dc5d 100644
+--- a/vdsm_cli/vdsClient.py
++++ b/vdsm_cli/vdsClient.py
+@@ -411,6 +411,9 @@ class service:
+     def do_getCap(self, args):
+         return self.ExecAndExit(self.s.getVdsCapabilities())
+ 
++    def do_getHardware(self, args):
++        return self.ExecAndExit(self.s.getVdsHardwareInfo())
++
+     def do_getVdsStats(self, args):
+         return self.ExecAndExit(self.s.getVdsStats())
+ 
+@@ -1900,6 +1903,10 @@ if __name__ == '__main__':
+                        ('',
+                         'Get Capabilities info of the VDS'
+                         )),
++        'getVdsHardwareInfo': (serv.do_getHardware,
++                               ('',
++                                'Get hardware info of the VDS'
++                                )),
+         'getVdsStats': (serv.do_getVdsStats,
+                        ('',
+                         'Get Statistics info on the VDS'
+-- 
+1.8.1
+
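(A minimal usage sketch, not part of the commit: the getVdsHardwareInfo verb added
above is exposed over vdsm's XML-RPC interface; this assumes a vdsm instance
listening without SSL on the default port 54321.)

    import xmlrpclib  # Python 2, as used by vdsm at this time

    server = xmlrpclib.ServerProxy('http://localhost:54321')
    reply = server.getVdsHardwareInfo()
    if reply['status']['code'] == 0:
        print reply['info']    # systemManufacturer, systemUUID, ...
    else:
        print reply['status']['message']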
diff --git a/0012-Explicitly-shutdown-m2crypto-socket.patch b/0012-Explicitly-shutdown-m2crypto-socket.patch
new file mode 100644
index 0000000..00ac452
--- /dev/null
+++ b/0012-Explicitly-shutdown-m2crypto-socket.patch
@@ -0,0 +1,55 @@
+From dcdc1ce83f0f4d426d31401ca14fb8c685150c45 Mon Sep 17 00:00:00 2001
+From: Andrey Gordeev <dreyou at gmail.com>
+Date: Mon, 14 Jan 2013 10:30:52 +0100
+Subject: [PATCH 12/22] Explicitly shutdown  m2crypto socket
+
+Apparently some versions of the m2crypto library don't correctly shut down
+the underlying sockets when an SSL connection is closed.
+
+In Python 2.6.6 (the version in RHEL6 and in CentOS6) when the XML RPC
+server closes a connection it calls the shutdown method on that
+connection with sock.SHUT_WR as the parameter. This works fine for plain
+sockets, and works well also for SSL sockets using the builtin ssl
+module as it translates the call to shutdown to a complete shutdown of
+the SSL connection. But m2crypto does a different translation and the
+net result is that the underlying SSL connection is not completely
+closed.
+
+In Python 2.7.3 (the version in Fedora 18) when the XML RPC server
+closes a connection it calls the shutdown method on that connection with
+sock.SHUT_RDWR, so no matter what SSL implementation is used the
+underlying SSL connection is completely closed.
+
+This patch changes the SSLSocket class so that it explicitly shuts down
+and closes the underlying socket when the connection is closed.
+
+Change-Id: Ie1a471aaccb32554b94340ebfb92b9d7ba14407a
+Signed-off-by: Juan Hernandez <juan.hernandez at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10972
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Tested-by: Dan Kenigsberg <danken at redhat.com>
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11384
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/SecureXMLRPCServer.py | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/vdsm/SecureXMLRPCServer.py b/vdsm/SecureXMLRPCServer.py
+index 2de1cf7..bad2067 100644
+--- a/vdsm/SecureXMLRPCServer.py
++++ b/vdsm/SecureXMLRPCServer.py
+@@ -57,6 +57,10 @@ class SSLSocket(object):
+     def gettimeout(self):
+         return self.connection.socket.gettimeout()
+ 
++    def close(self):
++        self.connection.shutdown(socket.SHUT_RDWR)
++        self.connection.close()
++
+     def __getattr__(self, name):
+         # This is how we delegate all the rest of the methods to the
+         # underlying SSL connection:
+-- 
+1.8.1
+
diff --git a/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch b/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
new file mode 100644
index 0000000..d8f3a98
--- /dev/null
+++ b/0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
@@ -0,0 +1,59 @@
+From f28df85573914d1ccb57fdc7bae5121a9a24576c Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Tue, 11 Dec 2012 04:01:21 -0500
+Subject: [PATCH 13/22] spec: require policycoreutils and skip sebool errors
+
+In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+disabled, we now require policycoreutils version 2.1.13-44 (or newer) on Fedora.
+Additionally we now skip any error in the rpm scriptlets for the sebool
+configuration (sebool-config) since they could interfere with the rpm
+installation potentially leaving multiple packages installed.
+
+Change-Id: Iefd5f53c9118eeea6817ce9660ea18abcfd1955c
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/9840
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11363
+---
+ vdsm.spec.in | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 5b13419..e153880 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -131,6 +131,12 @@ Requires: selinux-policy-targeted >= 3.10.0-149
+ Requires: lvm2 >= 2.02.95
+ %endif
+ 
++# In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
++# disabled we now require the version 2.1.13-44 (or newer) of Fedora.
++%if 0%{?fedora} >= 18
++Requires: policycoreutils >= 2.1.13-44
++%endif
++
+ Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
+ Requires: psmisc >= 22.6-15
+ Requires: fence-agents
+@@ -479,7 +485,7 @@ export LC_ALL=C
+ /usr/sbin/usermod -a -G %{qemu_group},%{vdsm_group} %{snlk_user}
+ 
+ %post
+-%{_bindir}/vdsm-tool sebool-config
++%{_bindir}/vdsm-tool sebool-config || :
+ # set the vdsm "secret" password for libvirt
+ %{_bindir}/vdsm-tool set-saslpasswd
+ 
+@@ -521,7 +527,7 @@ then
+     /bin/sed -i '/# VDSM section begin/,/# VDSM section end/d' \
+         /etc/sysctl.conf
+ 
+-    %{_bindir}/vdsm-tool sebool-unconfig
++    %{_bindir}/vdsm-tool sebool-unconfig || :
+ 
+     /usr/sbin/saslpasswd2 -p -a libvirt -d vdsm at ovirt
+ 
+-- 
+1.8.1
+
diff --git a/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch b/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
new file mode 100644
index 0000000..360ec83
--- /dev/null
+++ b/0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
@@ -0,0 +1,42 @@
+From d28299352e54fe6615aba70280f76d25a088f851 Mon Sep 17 00:00:00 2001
+From: Mark Wu <wudxw at linux.vnet.ibm.com>
+Date: Wed, 23 Jan 2013 10:55:47 +0800
+Subject: [PATCH 14/22] spec: requires selinux-policy to avoid selinux failure
+ on access tls cert
+
+selinux-policy tightened up the security on svirt_t on fedora18. As a result,
+svirt_t is not allowed to access cert_t files, and therefore it will block
+qemu from running the spice server with tls. For more details, please see:
+https://bugzilla.redhat.com/show_bug.cgi?id=890345
+
+Change-Id: I9fe74c6187e7e9f2a8c0b2a824d2871fb5497d86
+Signed-off-by: Mark Wu <wudxw at linux.vnet.ibm.com>
+Reviewed-on: http://gerrit.ovirt.org/11290
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11364
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm.spec.in | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index e153880..dfc2459 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -131,9 +131,10 @@ Requires: selinux-policy-targeted >= 3.10.0-149
+ Requires: lvm2 >= 2.02.95
+ %endif
+ 
++%if 0%{?fedora} >= 18
++Requires: selinux-policy-targeted >= 3.11.1-71
+ # In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+ # disabled we now require the version 2.1.13-44 (or newer) of Fedora.
+-%if 0%{?fedora} >= 18
+ Requires: policycoreutils >= 2.1.13-44
+ %endif
+ 
+-- 
+1.8.1
+
diff --git a/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch b/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
new file mode 100644
index 0000000..cceab27
--- /dev/null
+++ b/0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
@@ -0,0 +1,36 @@
+From 8a0e831eafedb0aefc6aadab7ac1448cab6b7643 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Wed, 23 Jan 2013 10:27:01 +0200
+Subject: [PATCH 15/22] vdsmd.service: require either ntpd or chronyd
+
+Fedora 18 ships with chronyd by default, which conflicts with ntpd. We
+do not really care which one of the two is running, as long as the host
+clock is synchronized. That's what requiring time-sync.target means.
+
+Change-Id: Ie0605bea6d34c214aea8814a72a03e9ad2883fdb
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11291
+Reviewed-by: Zhou Zheng Sheng <zhshzhou at linux.vnet.ibm.com>
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11366
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/vdsmd.service | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/vdsm/vdsmd.service b/vdsm/vdsmd.service
+index 6a650f4..9823b34 100644
+--- a/vdsm/vdsmd.service
++++ b/vdsm/vdsmd.service
+@@ -1,6 +1,6 @@
+ [Unit]
+ Description=Virtual Desktop Server Manager
+-Requires=multipathd.service libvirtd.service ntpd.service
++Requires=multipathd.service libvirtd.service time-sync.target
+ Conflicts=libvirt-guests.service
+ 
+ [Service]
+-- 
+1.8.1
+
diff --git a/0016-isRunning-didn-t-check-local-variable-before-reading.patch b/0016-isRunning-didn-t-check-local-variable-before-reading.patch
new file mode 100644
index 0000000..c2b85cb
--- /dev/null
+++ b/0016-isRunning-didn-t-check-local-variable-before-reading.patch
@@ -0,0 +1,44 @@
+From c95e492ccef335e82b2eb79495c35d08beab6629 Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Thu, 17 Jan 2013 10:01:54 +0200
+Subject: [PATCH 16/22] isRunning didn't check local variable before reading
+ saved data
+
+All internal svdsm files contained the last svdsm instance info;
+after a restart we didn't verify the local manager instance before processing
+the operation, and got an AttributeError exception when calling the svdsm
+manager.
+
+This patch returns False when the _svdsm instance is None or on firstLaunch.
+
+Change-Id: I9dec0c6955dadcd959cc1c8df4e9745322fb0ce3
+Bug-Id: https://bugzilla.redhat.com/show_bug.cgi?id=890365
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10491
+Reviewed-by: Ayal Baron <abaron at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11135
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/supervdsm.py | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/vdsm/supervdsm.py b/vdsm/supervdsm.py
+index 740a93e..6a38076 100644
+--- a/vdsm/supervdsm.py
++++ b/vdsm/supervdsm.py
+@@ -148,6 +148,9 @@ class SuperVdsmProxy(object):
+         self._firstLaunch = True
+ 
+     def isRunning(self):
++        if self._firstLaunch or self._svdsm is None:
++            return False
++
+         try:
+             with open(self.pidfile, "r") as f:
+                 spid = f.read().strip()
+-- 
+1.8.1
+
diff --git a/0017-udev-Race-fix-load-and-trigger-dev-rule.patch b/0017-udev-Race-fix-load-and-trigger-dev-rule.patch
new file mode 100644
index 0000000..7615ee8
--- /dev/null
+++ b/0017-udev-Race-fix-load-and-trigger-dev-rule.patch
@@ -0,0 +1,115 @@
+From 959a8703937f01161988221940d189acc4f7a796 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Sun, 13 Jan 2013 16:21:55 +0200
+Subject: [PATCH 17/22] udev: Race fix- load and trigger dev rule
+
+The rule file is generated, yet not synch-loaded in memory, so a VM with
+a direct lun fails to start.
+This patch reloads the rules before triggering using the new private
+udev functions - udevReloadRules() in supervdsmServer.py .
+Also added a check in appropriateDevice() (hsm.py) to make sure the
+mapping is indeed there.
+
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=891300
+Change-Id: If3b2008a3d9df2dcaf54190721c2dd9764338627
+Signed-off-by: Lee Yarwood <lyarwood at redhat.com>
+Signed-off-by: Vered Volansky <vvolansk at redhat.com>
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11410
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+---
+ vdsm/storage/hsm.py     |  7 +++++++
+ vdsm/supervdsmServer.py | 31 +++++++++++++++++++++++++++++++
+ 2 files changed, 38 insertions(+)
+
+diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
+index efb2749..62e9f74 100644
+--- a/vdsm/storage/hsm.py
++++ b/vdsm/storage/hsm.py
+@@ -67,6 +67,7 @@ import mount
+ import dispatcher
+ import supervdsm
+ import storageServer
++from vdsm import utils
+ 
+ GUID = "guid"
+ NAME = "name"
+@@ -87,6 +88,8 @@ SECTOR_SIZE = 512
+ 
+ STORAGE_CONNECTION_DIR = os.path.join(constants.P_VDSM_RUN, "connections/")
+ 
++QEMU_READABLE_TIMEOUT = 30
++
+ 
+ def public(f=None, **kwargs):
+     if f is None:
+@@ -2925,6 +2928,10 @@ class HSM:
+         """
+         supervdsm.getProxy().appropriateDevice(guid, thiefId)
+         supervdsm.getProxy().udevTrigger(guid)
++        devPath = devicemapper.DMPATH_FORMAT % guid
++        utils.retry(partial(fileUtils.validateQemuReadable, devPath),
++                    expectedException=OSError,
++                    timeout=QEMU_READABLE_TIMEOUT)
+ 
+     @public
+     def inappropriateDevices(self, thiefId):
+diff --git a/vdsm/supervdsmServer.py b/vdsm/supervdsmServer.py
+index dc89218..833e91f 100755
+--- a/vdsm/supervdsmServer.py
++++ b/vdsm/supervdsmServer.py
+@@ -89,6 +89,10 @@ LOG_CONF_PATH = "/etc/vdsm/logger.conf"
+ 
+ class _SuperVdsm(object):
+ 
++    UDEV_WITH_RELOAD_VERSION = 181
++
++    log = logging.getLogger("SuperVdsm.ServerCallback")
++
+     @logDecorator
+     def ping(self, *args, **kwargs):
+         # This method exists for testing purposes
+@@ -226,6 +230,7 @@ class _SuperVdsm(object):
+ 
+     @logDecorator
+     def udevTrigger(self, guid):
++        self.__udevReloadRules(guid)
+         cmd = [EXT_UDEVADM, 'trigger', '--verbose', '--action', 'change',
+                '--property-match=DM_NAME=%s' % guid]
+         rc, out, err = misc.execCmd(cmd, sudo=False)
+@@ -304,6 +309,32 @@ class _SuperVdsm(object):
+     def removeFs(self, path):
+         return mkimage.removeFs(path)
+ 
++    def __udevReloadRules(self, guid):
++        if self.__udevOperationReload():
++            reload = "--reload"
++        else:
++            reload = "--reload-rules"
++        cmd = [EXT_UDEVADM, 'control', reload]
++        rc, out, err = misc.execCmd(cmd, sudo=False)
++        if rc:
++            self.log.error("Udevadm reload-rules command failed rc=%s, "
++                           "out=\"%s\", err=\"%s\"", rc, out, err)
++            raise OSError(errno.EINVAL, "Could not reload-rules for device "
++                          "%s" % guid)
++
++    @utils.memoized
++    def __udevVersion(self):
++        cmd = [EXT_UDEVADM, '--version']
++        rc, out, err = misc.execCmd(cmd, sudo=False)
++        if rc:
++            self.log.error("Udevadm version command failed rc=%s, "
++                           " out=\"%s\", err=\"%s\"", rc, out, err)
++            raise RuntimeError("Could not get udev version number")
++        return int(out[0])
++
++    def __udevOperationReload(self):
++        return self.__udevVersion() > self.UDEV_WITH_RELOAD_VERSION
++
+ 
+ def __pokeParent(parentPid, address, log):
+     try:
+-- 
+1.8.1
+
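For readers skimming the patch, here is a minimal standalone sketch of the
reload-then-trigger ordering it introduces. The udevadm flags and the version
threshold mirror the supervdsmServer.py hunk above; the helper names
(udev_version, reload_rules_and_trigger) are illustrative and subprocess
stands in for misc.execCmd:

    import subprocess

    # Newer udev (> 181) renamed "udevadm control --reload-rules" to "--reload".
    UDEV_WITH_RELOAD_VERSION = 181

    def udev_version():
        out = subprocess.check_output(['udevadm', '--version'])
        return int(out.decode().strip())

    def reload_rules_and_trigger(guid):
        if udev_version() > UDEV_WITH_RELOAD_VERSION:
            reload_flag = '--reload'
        else:
            reload_flag = '--reload-rules'
        # Load the freshly generated rule file into udevd first, otherwise the
        # change event below can race with the rule generation.
        subprocess.check_call(['udevadm', 'control', reload_flag])
        subprocess.check_call(['udevadm', 'trigger', '--verbose',
                               '--action', 'change',
                               '--property-match=DM_NAME=%s' % guid])

The patch additionally retries fileUtils.validateQemuReadable() on the device
path for up to 30 seconds in hsm.py, which the sketch above omits.
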
diff --git a/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch b/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
new file mode 100644
index 0000000..2412a47
--- /dev/null
+++ b/0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
@@ -0,0 +1,171 @@
+From c7ee9217ec6edebec7b1d3a2536792114fd1a258 Mon Sep 17 00:00:00 2001
+From: Yeela Kaplan <ykaplan at redhat.com>
+Date: Fri, 25 Jan 2013 15:54:07 +0200
+Subject: [PATCH 18/22] Change scsi_id command path to be configured at runtime
+
+On Fedora 18 the scsi_id binary is no longer installed as /sbin/scsi_id,
+so we configure vdsm to look up the path at runtime
+and remove it from constants.
+
+Change-Id: I409d4da0ba429564466271aded32e96f9401cd6c
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=886087
+Signed-off-by: Yeela Kaplan <ykaplan at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10824
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11393
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ configure.ac              |  2 --
+ vdsm/constants.py.in      | 24 ------------------------
+ vdsm/storage/multipath.py | 45 ++++++++++++++++++++++++++++++++++++++++-----
+ vdsm/sudoers.vdsm.in      |  1 -
+ 4 files changed, 40 insertions(+), 32 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 3489e38..edc0b50 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -167,8 +167,6 @@ AC_PATH_PROG([QEMUIMG_PATH], [qemu-img], [/usr/bin/qemu-img])
+ AC_PATH_PROG([REBOOT_PATH], [reboot], [/usr/bin/reboot])
+ AC_PATH_PROG([RPM_PATH], [rpm], [/bin/rpm])
+ AC_PATH_PROG([RSYNC_PATH], [rsync], [/usr/bin/rsync])
+-AC_PATH_PROG([SCSI_ID_PATH], [scsi_id], [/sbin/scsi_id],
+-             [$PATH$PATH_SEPARATOR/lib/udev])
+ AC_PATH_PROG([SED_PATH], [sed], [/bin/sed])
+ AC_PATH_PROG([SERVICE_PATH], [service], [/sbin/service])
+ AC_PATH_PROG([SETSID_PATH], [setsid], [/usr/bin/setsid])
+diff --git a/vdsm/constants.py.in b/vdsm/constants.py.in
+index 8034b8e..ec5fff9 100644
+--- a/vdsm/constants.py.in
++++ b/vdsm/constants.py.in
+@@ -127,7 +127,6 @@ EXT_QEMUIMG = '@QEMUIMG_PATH@'
+ EXT_REBOOT = '@REBOOT_PATH@'
+ EXT_RSYNC = '@RSYNC_PATH@'
+ 
+-EXT_SCSI_ID = '@SCSI_ID_PATH@'  # TBD !
+ EXT_SERVICE = '@SERVICE_PATH@'
+ EXT_SETSID = '@SETSID_PATH@'
+ EXT_SH = '/bin/sh'  # The shell path is invariable
+@@ -157,26 +156,3 @@ CMD_LOWPRIO = [EXT_NICE, '-n', '19', EXT_IONICE, '-c', '3']
+ STRG_ISCSI_HOST = "iscsi_host/"
+ STRG_SCSI_HOST = "scsi_host/"
+ STRG_ISCSI_SESSION = "iscsi_session/"
+-STRG_MPATH_CONF = (
+-    "\n\n"
+-    "defaults {\n"
+-    "    polling_interval        5\n"
+-    "    getuid_callout          \"@SCSI_ID_PATH@ --whitelisted "
+-                                    "--replace-whitespace --device=/dev/%n\"\n"
+-    "    no_path_retry           fail\n"
+-    "    user_friendly_names     no\n"
+-    "    flush_on_last_del       yes\n"
+-    "    fast_io_fail_tmo        5\n"
+-    "    dev_loss_tmo            30\n"
+-    "    max_fds                 4096\n"
+-    "}\n"
+-    "\n"
+-    "devices {\n"
+-    "device {\n"
+-    "    vendor                  \"HITACHI\"\n"
+-    "    product                 \"DF.*\"\n"
+-    "    getuid_callout          \"@SCSI_ID_PATH@ --whitelisted "
+-                                    "--replace-whitespace --device=/dev/%n\"\n"
+-    "}\n"
+-    "}"
+-)
+diff --git a/vdsm/storage/multipath.py b/vdsm/storage/multipath.py
+index 741f1a1..05fd186 100644
+--- a/vdsm/storage/multipath.py
++++ b/vdsm/storage/multipath.py
+@@ -30,6 +30,7 @@ import re
+ from collections import namedtuple
+ 
+ from vdsm import constants
++from vdsm import utils
+ import misc
+ import iscsi
+ import supervdsm
+@@ -49,13 +50,47 @@ MPATH_CONF = "/etc/multipath.conf"
+ 
+ OLD_TAGS = ["# RHAT REVISION 0.2", "# RHEV REVISION 0.3",
+             "# RHEV REVISION 0.4", "# RHEV REVISION 0.5",
+-            "# RHEV REVISION 0.6", "# RHEV REVISION 0.7"]
+-MPATH_CONF_TAG = "# RHEV REVISION 0.8"
++            "# RHEV REVISION 0.6", "# RHEV REVISION 0.7",
++            "# RHEV REVISION 0.8", "# RHEV REVISION 0.9"]
++MPATH_CONF_TAG = "# RHEV REVISION 1.0"
+ MPATH_CONF_PRIVATE_TAG = "# RHEV PRIVATE"
+-MPATH_CONF_TEMPLATE = MPATH_CONF_TAG + constants.STRG_MPATH_CONF
++STRG_MPATH_CONF = (
++    "\n\n"
++    "defaults {\n"
++    "    polling_interval        5\n"
++    "    getuid_callout          \"%(scsi_id_path)s --whitelisted "
++    "--replace-whitespace --device=/dev/%%n\"\n"
++    "    no_path_retry           fail\n"
++    "    user_friendly_names     no\n"
++    "    flush_on_last_del       yes\n"
++    "    fast_io_fail_tmo        5\n"
++    "    dev_loss_tmo            30\n"
++    "    max_fds                 4096\n"
++    "}\n"
++    "\n"
++    "devices {\n"
++    "device {\n"
++    "    vendor                  \"HITACHI\"\n"
++    "    product                 \"DF.*\"\n"
++    "    getuid_callout          \"%(scsi_id_path)s --whitelisted "
++    "--replace-whitespace --device=/dev/%%n\"\n"
++    "}\n"
++    "device {\n"
++    "    vendor                  \"COMPELNT\"\n"
++    "    product                 \"Compellent Vol\"\n"
++    "    no_path_retry           fail\n"
++    "}\n"
++    "}"
++)
++MPATH_CONF_TEMPLATE = MPATH_CONF_TAG + STRG_MPATH_CONF
+ 
+ log = logging.getLogger("Storage.Multipath")
+ 
++_scsi_id = utils.CommandPath("scsi_id",
++                             "/sbin/scsi_id",  # EL6
++                             "/usr/lib/udev/scsi_id",  # Fedora
++                             )
++
+ 
+ def rescan():
+     """
+@@ -127,7 +162,7 @@ def setupMultipath():
+                 os.path.basename(MPATH_CONF), MAX_CONF_COPIES,
+                 cp=True, persist=True)
+     with tempfile.NamedTemporaryFile() as f:
+-        f.write(MPATH_CONF_TEMPLATE)
++        f.write(MPATH_CONF_TEMPLATE % {'scsi_id_path': _scsi_id.cmd})
+         f.flush()
+         cmd = [constants.EXT_CP, f.name, MPATH_CONF]
+         rc = misc.execCmd(cmd, sudo=True)[0]
+@@ -173,7 +208,7 @@ def getDeviceSize(dev):
+ 
+ def getScsiSerial(physdev):
+     blkdev = os.path.join("/dev", physdev)
+-    cmd = [constants.EXT_SCSI_ID,
++    cmd = [_scsi_id.cmd,
+            "--page=0x80",
+            "--whitelisted",
+            "--export",
+diff --git a/vdsm/sudoers.vdsm.in b/vdsm/sudoers.vdsm.in
+index ab99e8e..4fc75f9 100644
+--- a/vdsm/sudoers.vdsm.in
++++ b/vdsm/sudoers.vdsm.in
+@@ -23,7 +23,6 @@ Cmnd_Alias VDSM_STORAGE = @MOUNT_PATH@, @UMOUNT_PATH@, \
+     @SERVICE_PATH@ iscsid *, \
+     @SERVICE_PATH@ multipathd restart, \
+     @SERVICE_PATH@ multipathd reload, \
+-    @SCSI_ID_PATH@, \
+     @ISCSIADM_PATH@ *, \
+     @LVM_PATH@, \
+     @CAT_PATH@ /sys/block/*/device/../../*, \
+-- 
+1.8.1
+
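A rough standalone illustration of the runtime lookup that replaces the
configure-time @SCSI_ID_PATH@ substitution. vdsm's utils.CommandPath behaves
approximately like the simplified stand-in below; this is a sketch, not the
vdsm implementation:

    import os

    class CommandPath(object):
        """Resolve an executable lazily from a list of candidate paths."""
        def __init__(self, name, *paths):
            self.name = name
            self.paths = paths

        @property
        def cmd(self):
            for path in self.paths:
                if os.path.exists(path):
                    return path
            raise OSError("No such executable: %s" % self.name)

    _scsi_id = CommandPath("scsi_id",
                           "/sbin/scsi_id",          # EL6
                           "/usr/lib/udev/scsi_id")  # Fedora

    # e.g. cmd = [_scsi_id.cmd, "--page=0x80", "--whitelisted", "--export",
    #             "--replace-whitespace", "--device=/dev/sda"]
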
diff --git a/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch b/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
new file mode 100644
index 0000000..5a1c4a9
--- /dev/null
+++ b/0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
@@ -0,0 +1,91 @@
+From 33480cbc90a4810aa99e3fc7b36e879cdb0c19d4 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Wed, 9 Jan 2013 09:57:52 +0200
+Subject: [PATCH 19/22] upgrade: force upgrade to v2 before upgrading to v3
+
+During the upgrade of a domain to version 3 vdsm reallocates the
+metadata slots that are higher than 1947 (given a leases LV of 2GB)
+in order to use the same offsets for the volume leases (BZ#882276
+and git commit hash 2ba76e3).
+This has no effect when the domain is at version 0, since the metadata
+slot offsets are fixed (the first physical extent of the LV) and
+cannot be reallocated. In that case the domain must be upgraded
+to version 2 first.
+
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=893184
+Change-Id: I2bd424ad29e76d1368ff2959bb8fe45afc595cdb
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10792
+Reviewed-by: Ayal Baron <abaron at redhat.com>
+Tested-by: Haim Ateya <hateya at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11462
+---
+ vdsm/storage/imageRepository/formatConverter.py | 26 +++++++++++++++++--------
+ vdsm/storage/volume.py                          |  4 +++-
+ 2 files changed, 21 insertions(+), 9 deletions(-)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 0d7dd6d..88b053d 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -93,6 +93,23 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+     log = logging.getLogger('Storage.v3DomainConverter')
+     log.debug("Starting conversion for domain %s", domain.sdUUID)
+ 
++    targetVersion = 3
++    currentVersion = domain.getVersion()
++
++    # For block domains if we're upgrading from version 0 we need to first
++    # upgrade to version 2 and then proceed to upgrade to version 3.
++    if domain.getStorageType() in sd.BLOCK_DOMAIN_TYPES:
++        if currentVersion == 0:
++            log.debug("Upgrading domain %s from version %s to version 2",
++                      domain.sdUUID, currentVersion)
++            v2DomainConverter(repoPath, hostId, domain, isMsd)
++            currentVersion = domain.getVersion()
++
++        if currentVersion != 2:
++            log.debug("Unsupported conversion from version %s to version %s",
++                      currentVersion, targetVersion)
++            raise se.UnsupportedDomainVersion(currentVersion)
++
+     if domain.getStorageType() in sd.FILE_DOMAIN_TYPES:
+         log.debug("Setting permissions for domain %s", domain.sdUUID)
+         domain.setMetadataPermissions()
+@@ -268,17 +285,10 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+                               "not critical since the volume might be in use",
+                               imgUUID, exc_info=True)
+ 
+-        targetVersion = 3
+-        currentVersion = domain.getVersion()
+         log.debug("Finalizing the storage domain upgrade from version %s to "
+                   "version %s for domain %s", currentVersion, targetVersion,
+                   domain.sdUUID)
+-
+-        if (currentVersion not in blockSD.VERS_METADATA_TAG
+-                        and domain.getStorageType() in sd.BLOCK_DOMAIN_TYPES):
+-            __convertDomainMetadataToTags(domain, targetVersion)
+-        else:
+-            domain.setMetaParam(sd.DMDK_VERSION, targetVersion)
++        domain.setMetaParam(sd.DMDK_VERSION, targetVersion)
+ 
+     except:
+         if isMsd:
+diff --git a/vdsm/storage/volume.py b/vdsm/storage/volume.py
+index cde612a..12dd188 100644
+--- a/vdsm/storage/volume.py
++++ b/vdsm/storage/volume.py
+@@ -503,7 +503,9 @@ class Volume(object):
+             cls.newMetadata(metaId, sdUUID, imgUUID, srcVolUUID, size,
+                             type2name(volFormat), type2name(preallocate),
+                             volType, diskType, desc, LEGAL_VOL)
+-            cls.newVolumeLease(metaId, sdUUID, volUUID)
++
++            if dom.hasVolumeLeases():
++                cls.newVolumeLease(metaId, sdUUID, volUUID)
+ 
+         except se.StorageException:
+             cls.log.error("Unexpected error", exc_info=True)
+-- 
+1.8.1
+
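The ordering the patch enforces, condensed into a sketch. The callables
upgrade_to_v2 and is_block_domain are hypothetical stand-ins for the vdsm
converters, and the exception type is illustrative:

    def convert_domain_to_v3(domain, upgrade_to_v2, is_block_domain):
        target_version = 3
        current_version = domain.getVersion()

        if is_block_domain(domain):
            # Version 0 uses fixed metadata slot offsets, so bring the domain
            # to version 2 first; only then can slots be reallocated for v3.
            if current_version == 0:
                upgrade_to_v2(domain)
                current_version = domain.getVersion()

            if current_version != 2:
                raise ValueError("unsupported conversion from version %s to %s"
                                 % (current_version, target_version))

        # ... continue with the regular v3 conversion steps
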
diff --git a/0020-misc-rename-safelease-to-clusterlock.patch b/0020-misc-rename-safelease-to-clusterlock.patch
new file mode 100644
index 0000000..3e1684c
--- /dev/null
+++ b/0020-misc-rename-safelease-to-clusterlock.patch
@@ -0,0 +1,866 @@
+From e60206af7781c86ddb5d2ef1fcac3f8f8b086ee4 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Fri, 14 Dec 2012 06:42:09 -0500
+Subject: [PATCH 20/22] misc: rename safelease to clusterlock
+
+The safelease module now also contains the sanlock implementation
+and soon it might contain others (e.g. a special lock for local storage
+domains); for this reason it has been renamed to the more general name
+clusterlock. The safelease implementation also required some cleanup in
+order to achieve more uniformity between the locking mechanisms.
+
+Change-Id: I74070ebb43dd726362900a0746c08b2ee3d6eac7
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10067
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11463
+---
+ vdsm.spec.in                                    |   2 +-
+ vdsm/API.py                                     |   4 +-
+ vdsm/storage/Makefile.am                        |   4 +-
+ vdsm/storage/blockSD.py                         |   4 +-
+ vdsm/storage/clusterlock.py                     | 251 ++++++++++++++++++++++++
+ vdsm/storage/hsm.py                             |  20 +-
+ vdsm/storage/imageRepository/formatConverter.py |   6 +-
+ vdsm/storage/safelease.py                       | 250 -----------------------
+ vdsm/storage/sd.py                              |  12 +-
+ vdsm/storage/sp.py                              |  25 ++-
+ 10 files changed, 289 insertions(+), 289 deletions(-)
+ create mode 100644 vdsm/storage/clusterlock.py
+ delete mode 100644 vdsm/storage/safelease.py
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index dfc2459..8ad4dce 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -685,7 +685,7 @@ exit 0
+ %{_datadir}/%{vdsm_name}/storage/resourceFactories.py*
+ %{_datadir}/%{vdsm_name}/storage/remoteFileHandler.py*
+ %{_datadir}/%{vdsm_name}/storage/resourceManager.py*
+-%{_datadir}/%{vdsm_name}/storage/safelease.py*
++%{_datadir}/%{vdsm_name}/storage/clusterlock.py*
+ %{_datadir}/%{vdsm_name}/storage/sdc.py*
+ %{_datadir}/%{vdsm_name}/storage/sd.py*
+ %{_datadir}/%{vdsm_name}/storage/securable.py*
+diff --git a/vdsm/API.py b/vdsm/API.py
+index 732f8a3..a050a51 100644
+--- a/vdsm/API.py
++++ b/vdsm/API.py
+@@ -33,7 +33,7 @@ import configNetwork
+ from vdsm import netinfo
+ from vdsm import constants
+ import storage.misc
+-import storage.safelease
++import storage.clusterlock
+ import storage.volume
+ import storage.sd
+ import storage.image
+@@ -992,7 +992,7 @@ class StoragePool(APIBase):
+     def spmStart(self, prevID, prevLver, enableScsiFencing,
+                  maxHostID=None, domVersion=None):
+         if maxHostID is None:
+-            maxHostID = storage.safelease.MAX_HOST_ID
++            maxHostID = storage.clusterlock.MAX_HOST_ID
+         recoveryMode = None   # unused
+         return self._irs.spmStart(self._UUID, prevID, prevLver,
+                 recoveryMode, enableScsiFencing, maxHostID, domVersion)
+diff --git a/vdsm/storage/Makefile.am b/vdsm/storage/Makefile.am
+index cff09be..abc1545 100644
+--- a/vdsm/storage/Makefile.am
++++ b/vdsm/storage/Makefile.am
+@@ -25,6 +25,7 @@ dist_vdsmstorage_PYTHON = \
+ 	__init__.py \
+ 	blockSD.py \
+ 	blockVolume.py \
++	clusterlock.py \
+ 	devicemapper.py \
+ 	dispatcher.py \
+ 	domainMonitor.py \
+@@ -35,8 +36,8 @@ dist_vdsmstorage_PYTHON = \
+ 	hba.py \
+ 	hsm.py \
+ 	image.py \
++	iscsiadm.py \
+ 	iscsi.py \
+-        iscsiadm.py \
+ 	localFsSD.py \
+ 	lvm.py \
+ 	misc.py \
+@@ -48,7 +49,6 @@ dist_vdsmstorage_PYTHON = \
+ 	remoteFileHandler.py \
+ 	resourceFactories.py \
+ 	resourceManager.py \
+-	safelease.py \
+ 	sdc.py \
+ 	sd.py \
+ 	securable.py \
+diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
+index 61ec996..862e413 100644
+--- a/vdsm/storage/blockSD.py
++++ b/vdsm/storage/blockSD.py
+@@ -37,7 +37,7 @@ import misc
+ import fileUtils
+ import sd
+ import lvm
+-import safelease
++import clusterlock
+ import blockVolume
+ import multipath
+ import resourceFactories
+@@ -63,7 +63,7 @@ log = logging.getLogger("Storage.BlockSD")
+ 
+ # FIXME: Make this calculated from something logical
+ RESERVED_METADATA_SIZE = 40 * (2 ** 20)
+-RESERVED_MAILBOX_SIZE = MAILBOX_SIZE * safelease.MAX_HOST_ID
++RESERVED_MAILBOX_SIZE = MAILBOX_SIZE * clusterlock.MAX_HOST_ID
+ METADATA_BASE_SIZE = 378
+ # VG's min metadata threshold is 20%
+ VG_MDA_MIN_THRESHOLD = 0.2
+diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
+new file mode 100644
+index 0000000..4525b2f
+--- /dev/null
++++ b/vdsm/storage/clusterlock.py
+@@ -0,0 +1,251 @@
++#
++# Copyright 2011 Red Hat, Inc.
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program; if not, write to the Free Software
++# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
++#
++# Refer to the README and COPYING files for full details of the license
++#
++
++import os
++import threading
++import logging
++import subprocess
++from contextlib import nested
++import sanlock
++
++import misc
++import storage_exception as se
++from vdsm import constants
++from vdsm.config import config
++
++
++MAX_HOST_ID = 250
++
++# The LEASE_OFFSET is used by SANLock to not overlap with safelease in
++# orfer to preserve the ability to acquire both locks (e.g.: during the
++# domain upgrade)
++SDM_LEASE_NAME = 'SDM'
++SDM_LEASE_OFFSET = 512 * 2048
++
++
++class SafeLease(object):
++    log = logging.getLogger("SafeLease")
++
++    lockUtilPath = config.get('irs', 'lock_util_path')
++    lockCmd = config.get('irs', 'lock_cmd')
++    freeLockCmd = config.get('irs', 'free_lock_cmd')
++
++    def __init__(self, sdUUID, idsPath, leasesPath, lockRenewalIntervalSec,
++                 leaseTimeSec, leaseFailRetry, ioOpTimeoutSec):
++        self._lock = threading.Lock()
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++        self.setParams(lockRenewalIntervalSec, leaseTimeSec, leaseFailRetry,
++                       ioOpTimeoutSec)
++
++    def initLock(self):
++        lockUtil = os.path.join(self.lockUtilPath, "safelease")
++        initCommand = [lockUtil, "release", "-f", self._leasesPath, "0"]
++        rc, out, err = misc.execCmd(initCommand, sudo=False,
++                cwd=self.lockUtilPath)
++        if rc != 0:
++            self.log.warn("could not initialise spm lease (%s): %s", rc, out)
++            raise se.ClusterLockInitError()
++
++    def setParams(self, lockRenewalIntervalSec, leaseTimeSec, leaseFailRetry,
++                  ioOpTimeoutSec):
++        self._lockRenewalIntervalSec = lockRenewalIntervalSec
++        self._leaseTimeSec = leaseTimeSec
++        self._leaseFailRetry = leaseFailRetry
++        self._ioOpTimeoutSec = ioOpTimeoutSec
++
++    def getReservedId(self):
++        return 1000
++
++    def acquireHostId(self, hostId, async):
++        self.log.debug("Host id for domain %s successfully acquired (id: %s)",
++                       self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        self.log.debug("Host id for domain %s released successfully (id: %s)",
++                       self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        return True
++
++    def acquire(self, hostID):
++        leaseTimeMs = self._leaseTimeSec * 1000
++        ioOpTimeoutMs = self._ioOpTimeoutSec * 1000
++        with self._lock:
++            self.log.debug("Acquiring cluster lock for domain %s" %
++                    self._sdUUID)
++
++            lockUtil = self.getLockUtilFullPath()
++            acquireLockCommand = subprocess.list2cmdline([
++                lockUtil, "start", self._sdUUID, str(hostID),
++                str(self._lockRenewalIntervalSec), str(self._leasesPath),
++                str(leaseTimeMs), str(ioOpTimeoutMs), str(self._leaseFailRetry)
++            ])
++
++            cmd = [constants.EXT_SETSID, constants.EXT_IONICE, '-c1', '-n0',
++                constants.EXT_SU, misc.IOUSER, '-s', constants.EXT_SH, '-c',
++                acquireLockCommand]
++            (rc, out, err) = misc.execCmd(cmd, cwd=self.lockUtilPath,
++                    sudo=True)
++            if rc != 0:
++                raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
++            self.log.debug("Clustered lock acquired successfully")
++
++    def getLockUtilFullPath(self):
++        return os.path.join(self.lockUtilPath, self.lockCmd)
++
++    def release(self):
++        with self._lock:
++            freeLockUtil = os.path.join(self.lockUtilPath, self.freeLockCmd)
++            releaseLockCommand = [freeLockUtil, self._sdUUID]
++            self.log.info("Releasing cluster lock for domain %s" %
++                    self._sdUUID)
++            (rc, out, err) = misc.execCmd(releaseLockCommand, sudo=False,
++                    cwd=self.lockUtilPath)
++            if rc != 0:
++                self.log.error("Could not release cluster lock "
++                        "rc=%s out=%s, err=%s" % (str(rc), out, err))
++
++            self.log.debug("Cluster lock released successfully")
++
++
++class SANLock(object):
++    log = logging.getLogger("SANLock")
++
++    _sanlock_fd = None
++    _sanlock_lock = threading.Lock()
++
++    def __init__(self, sdUUID, idsPath, leasesPath, *args):
++        self._lock = threading.Lock()
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++        self._sanlockfd = None
++
++    def initLock(self):
++        try:
++            sanlock.init_lockspace(self._sdUUID, self._idsPath)
++            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
++                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
++        except sanlock.SanlockException:
++            self.log.warn("Cannot initialize clusterlock", exc_info=True)
++            raise se.ClusterLockInitError()
++
++    def setParams(self, *args):
++        pass
++
++    def getReservedId(self):
++        return MAX_HOST_ID
++
++    def acquireHostId(self, hostId, async):
++        with self._lock:
++            self.log.info("Acquiring host id for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            try:
++                sanlock.add_lockspace(self._sdUUID, hostId, self._idsPath,
++                                      async=async)
++            except sanlock.SanlockException, e:
++                if e.errno == os.errno.EINPROGRESS:
++                    # if the request is not asynchronous wait for the ongoing
++                    # lockspace operation to complete
++                    if not async and not sanlock.inq_lockspace(
++                            self._sdUUID, hostId, self._idsPath, wait=True):
++                        raise se.AcquireHostIdFailure(self._sdUUID, e)
++                    # else silently continue, the host id has been acquired
++                    # or it's in the process of being acquired (async)
++                elif e.errno != os.errno.EEXIST:
++                    raise se.AcquireHostIdFailure(self._sdUUID, e)
++
++            self.log.debug("Host id for domain %s successfully acquired "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        with self._lock:
++            self.log.info("Releasing host id for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            try:
++                sanlock.rem_lockspace(self._sdUUID, hostId, self._idsPath,
++                                      async=async, unused=unused)
++            except sanlock.SanlockException, e:
++                if e.errno != os.errno.ENOENT:
++                    raise se.ReleaseHostIdFailure(self._sdUUID, e)
++
++            self.log.debug("Host id for domain %s released successfully "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        with self._lock:
++            try:
++                return sanlock.inq_lockspace(self._sdUUID,
++                                             hostId, self._idsPath)
++            except sanlock.SanlockException:
++                self.log.debug("Unable to inquire sanlock lockspace "
++                               "status, returning False", exc_info=True)
++                return False
++
++    # The hostId parameter is maintained here only for compatibility with
++    # ClusterLock. We could consider to remove it in the future but keeping it
++    # for logging purpose is desirable.
++    def acquire(self, hostId):
++        with nested(self._lock, SANLock._sanlock_lock):
++            self.log.info("Acquiring cluster lock for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            while True:
++                if SANLock._sanlock_fd is None:
++                    try:
++                        SANLock._sanlock_fd = sanlock.register()
++                    except sanlock.SanlockException, e:
++                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
++                                        "Cannot register to sanlock", str(e))
++
++                try:
++                    sanlock.acquire(self._sdUUID, SDM_LEASE_NAME,
++                                    [(self._leasesPath, SDM_LEASE_OFFSET)],
++                                    slkfd=SANLock._sanlock_fd)
++                except sanlock.SanlockException, e:
++                    if e.errno != os.errno.EPIPE:
++                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
++                                        "Cannot acquire cluster lock", str(e))
++                    SANLock._sanlock_fd = None
++                    continue
++
++                break
++
++            self.log.debug("Cluster lock for domain %s successfully acquired "
++                           "(id: %s)", self._sdUUID, hostId)
++
++    def release(self):
++        with self._lock:
++            self.log.info("Releasing cluster lock for domain %s", self._sdUUID)
++
++            try:
++                sanlock.release(self._sdUUID, SDM_LEASE_NAME,
++                                [(self._leasesPath, SDM_LEASE_OFFSET)],
++                                slkfd=SANLock._sanlock_fd)
++            except sanlock.SanlockException, e:
++                raise se.ReleaseLockFailure(self._sdUUID, e)
++
++            self._sanlockfd = None
++            self.log.debug("Cluster lock for domain %s successfully released",
++                           self._sdUUID)
+diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
+index 62e9f74..8bbe3b8 100644
+--- a/vdsm/storage/hsm.py
++++ b/vdsm/storage/hsm.py
+@@ -53,7 +53,7 @@ import iscsi
+ import misc
+ from misc import deprecated
+ import taskManager
+-import safelease
++import clusterlock
+ import storage_exception as se
+ from threadLocal import vars
+ from vdsm import constants
+@@ -528,7 +528,7 @@ class HSM:
+ 
+     @public
+     def spmStart(self, spUUID, prevID, prevLVER, recoveryMode, scsiFencing,
+-                 maxHostID=safelease.MAX_HOST_ID, domVersion=None,
++                 maxHostID=clusterlock.MAX_HOST_ID, domVersion=None,
+                  options=None):
+         """
+         Starts an SPM.
+@@ -845,7 +845,7 @@ class HSM:
+         :raises: an :exc:`Storage_Exception.InvalidParameterException` if the
+                  master domain is not supplied in the domain list.
+         """
+-        safeLease = sd.packLeaseParams(
++        leaseParams = sd.packLeaseParams(
+             lockRenewalIntervalSec=lockRenewalIntervalSec,
+             leaseTimeSec=leaseTimeSec,
+             ioOpTimeoutSec=ioOpTimeoutSec,
+@@ -853,9 +853,9 @@ class HSM:
+         vars.task.setDefaultException(
+             se.StoragePoolCreationError(
+                 "spUUID=%s, poolName=%s, masterDom=%s, domList=%s, "
+-                "masterVersion=%s, safelease params: (%s)" %
++                "masterVersion=%s, clusterlock params: (%s)" %
+                 (spUUID, poolName, masterDom, domList, masterVersion,
+-                 safeLease)))
++                 leaseParams)))
+         misc.validateUUID(spUUID, 'spUUID')
+         if masterDom not in domList:
+             raise se.InvalidParameterException("masterDom", str(masterDom))
+@@ -892,7 +892,7 @@ class HSM:
+ 
+         return sp.StoragePool(
+             spUUID, self.taskMng).create(poolName, masterDom, domList,
+-                                         masterVersion, safeLease)
++                                         masterVersion, leaseParams)
+ 
+     @public
+     def connectStoragePool(self, spUUID, hostID, scsiKey,
+@@ -1701,7 +1701,7 @@ class HSM:
+         :returns: Nothing ? pool.reconstructMaster return nothing
+         :rtype: ?
+         """
+-        safeLease = sd.packLeaseParams(
++        leaseParams = sd.packLeaseParams(
+             lockRenewalIntervalSec=lockRenewalIntervalSec,
+             leaseTimeSec=leaseTimeSec,
+             ioOpTimeoutSec=ioOpTimeoutSec,
+@@ -1710,9 +1710,9 @@ class HSM:
+ 
+         vars.task.setDefaultException(
+             se.ReconstructMasterError(
+-                "spUUID=%s, masterDom=%s, masterVersion=%s, safelease "
++                "spUUID=%s, masterDom=%s, masterVersion=%s, clusterlock "
+                 "params: (%s)" % (spUUID, masterDom, masterVersion,
+-                                  safeLease)))
++                                  leaseParams)))
+ 
+         self.log.info("spUUID=%s master=%s", spUUID, masterDom)
+ 
+@@ -1738,7 +1738,7 @@ class HSM:
+                 domDict[d] = sd.validateSDDeprecatedStatus(status)
+ 
+         return pool.reconstructMaster(hostId, poolName, masterDom, domDict,
+-                                      masterVersion, safeLease)
++                                      masterVersion, leaseParams)
+ 
+     def _logResp_getDeviceList(self, response):
+         logableDevs = deepcopy(response)
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 88b053d..0742560 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -26,7 +26,7 @@ from vdsm import qemuImg
+ from storage import sd
+ from storage import blockSD
+ from storage import image
+-from storage import safelease
++from storage import clusterlock
+ from storage import volume
+ from storage import blockVolume
+ from storage import storage_exception as se
+@@ -115,8 +115,8 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+         domain.setMetadataPermissions()
+ 
+     log.debug("Initializing the new cluster lock for domain %s", domain.sdUUID)
+-    newClusterLock = safelease.SANLock(domain.sdUUID, domain.getIdsFilePath(),
+-                                       domain.getLeasesFilePath())
++    newClusterLock = clusterlock.SANLock(
++        domain.sdUUID, domain.getIdsFilePath(), domain.getLeasesFilePath())
+     newClusterLock.initLock()
+ 
+     log.debug("Acquiring the host id %s for domain %s", hostId, domain.sdUUID)
+diff --git a/vdsm/storage/safelease.py b/vdsm/storage/safelease.py
+deleted file mode 100644
+index 88a4eae..0000000
+--- a/vdsm/storage/safelease.py
++++ /dev/null
+@@ -1,250 +0,0 @@
+-#
+-# Copyright 2011 Red Hat, Inc.
+-#
+-# This program is free software; you can redistribute it and/or modify
+-# it under the terms of the GNU General Public License as published by
+-# the Free Software Foundation; either version 2 of the License, or
+-# (at your option) any later version.
+-#
+-# This program is distributed in the hope that it will be useful,
+-# but WITHOUT ANY WARRANTY; without even the implied warranty of
+-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-# GNU General Public License for more details.
+-#
+-# You should have received a copy of the GNU General Public License
+-# along with this program; if not, write to the Free Software
+-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+-#
+-# Refer to the README and COPYING files for full details of the license
+-#
+-
+-import os
+-from vdsm.config import config
+-import misc
+-import subprocess
+-import sanlock
+-from contextlib import nested
+-from vdsm import constants
+-import storage_exception as se
+-import threading
+-import logging
+-
+-
+-MAX_HOST_ID = 250
+-
+-# The LEASE_OFFSET is used by SANLock to not overlap with safelease in
+-# orfer to preserve the ability to acquire both locks (e.g.: during the
+-# domain upgrade)
+-SDM_LEASE_NAME = 'SDM'
+-SDM_LEASE_OFFSET = 512 * 2048
+-
+-
+-class ClusterLock(object):
+-    log = logging.getLogger("ClusterLock")
+-    lockUtilPath = config.get('irs', 'lock_util_path')
+-    lockCmd = config.get('irs', 'lock_cmd')
+-    freeLockCmd = config.get('irs', 'free_lock_cmd')
+-
+-    def __init__(self, sdUUID, idFile, leaseFile,
+-            lockRenewalIntervalSec,
+-            leaseTimeSec,
+-            leaseFailRetry,
+-            ioOpTimeoutSec):
+-        self._lock = threading.RLock()
+-        self._sdUUID = sdUUID
+-        self._leaseFile = leaseFile
+-        self.setParams(lockRenewalIntervalSec, leaseTimeSec,
+-                       leaseFailRetry, ioOpTimeoutSec)
+-
+-    def initLock(self):
+-        lockUtil = os.path.join(self.lockUtilPath, "safelease")
+-        initCommand = [lockUtil, "release", "-f", self._leaseFile, "0"]
+-        rc, out, err = misc.execCmd(initCommand, sudo=False,
+-                cwd=self.lockUtilPath)
+-        if rc != 0:
+-            self.log.warn("could not initialise spm lease (%s): %s", rc, out)
+-            raise se.ClusterLockInitError()
+-
+-    def setParams(self, lockRenewalIntervalSec,
+-                    leaseTimeSec,
+-                    leaseFailRetry,
+-                    ioOpTimeoutSec):
+-        self._lockRenewalIntervalSec = lockRenewalIntervalSec
+-        self._leaseTimeSec = leaseTimeSec
+-        self._leaseFailRetry = leaseFailRetry
+-        self._ioOpTimeoutSec = ioOpTimeoutSec
+-
+-    def getReservedId(self):
+-        return 1000
+-
+-    def acquireHostId(self, hostId, async):
+-        pass
+-
+-    def releaseHostId(self, hostId, async, unused):
+-        pass
+-
+-    def hasHostId(self, hostId):
+-        return True
+-
+-    def acquire(self, hostID):
+-        leaseTimeMs = self._leaseTimeSec * 1000
+-        ioOpTimeoutMs = self._ioOpTimeoutSec * 1000
+-        with self._lock:
+-            self.log.debug("Acquiring cluster lock for domain %s" %
+-                    self._sdUUID)
+-
+-            lockUtil = self.getLockUtilFullPath()
+-            acquireLockCommand = subprocess.list2cmdline([lockUtil, "start",
+-                self._sdUUID, str(hostID), str(self._lockRenewalIntervalSec),
+-                str(self._leaseFile), str(leaseTimeMs), str(ioOpTimeoutMs),
+-                str(self._leaseFailRetry)])
+-
+-            cmd = [constants.EXT_SETSID, constants.EXT_IONICE, '-c1', '-n0',
+-                constants.EXT_SU, misc.IOUSER, '-s', constants.EXT_SH, '-c',
+-                acquireLockCommand]
+-            (rc, out, err) = misc.execCmd(cmd, cwd=self.lockUtilPath,
+-                    sudo=True)
+-            if rc != 0:
+-                raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
+-            self.log.debug("Clustered lock acquired successfully")
+-
+-    def getLockUtilFullPath(self):
+-        return os.path.join(self.lockUtilPath, self.lockCmd)
+-
+-    def release(self):
+-        with self._lock:
+-            freeLockUtil = os.path.join(self.lockUtilPath, self.freeLockCmd)
+-            releaseLockCommand = [freeLockUtil, self._sdUUID]
+-            self.log.info("Releasing cluster lock for domain %s" %
+-                    self._sdUUID)
+-            (rc, out, err) = misc.execCmd(releaseLockCommand, sudo=False,
+-                    cwd=self.lockUtilPath)
+-            if rc != 0:
+-                self.log.error("Could not release cluster lock "
+-                        "rc=%s out=%s, err=%s" % (str(rc), out, err))
+-
+-            self.log.debug("Cluster lock released successfully")
+-
+-
+-class SANLock(object):
+-    log = logging.getLogger("SANLock")
+-
+-    _sanlock_fd = None
+-    _sanlock_lock = threading.Lock()
+-
+-    def __init__(self, sdUUID, idsPath, leasesPath, *args):
+-        self._lock = threading.Lock()
+-        self._sdUUID = sdUUID
+-        self._idsPath = idsPath
+-        self._leasesPath = leasesPath
+-        self._sanlockfd = None
+-
+-    def initLock(self):
+-        try:
+-            sanlock.init_lockspace(self._sdUUID, self._idsPath)
+-            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
+-                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
+-        except sanlock.SanlockException:
+-            self.log.warn("Cannot initialize clusterlock", exc_info=True)
+-            raise se.ClusterLockInitError()
+-
+-    def setParams(self, *args):
+-        pass
+-
+-    def getReservedId(self):
+-        return MAX_HOST_ID
+-
+-    def acquireHostId(self, hostId, async):
+-        with self._lock:
+-            self.log.info("Acquiring host id for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            try:
+-                sanlock.add_lockspace(self._sdUUID, hostId, self._idsPath,
+-                                      async=async)
+-            except sanlock.SanlockException, e:
+-                if e.errno == os.errno.EINPROGRESS:
+-                    # if the request is not asynchronous wait for the ongoing
+-                    # lockspace operation to complete
+-                    if not async and not sanlock.inq_lockspace(
+-                            self._sdUUID, hostId, self._idsPath, wait=True):
+-                        raise se.AcquireHostIdFailure(self._sdUUID, e)
+-                    # else silently continue, the host id has been acquired
+-                    # or it's in the process of being acquired (async)
+-                elif e.errno != os.errno.EEXIST:
+-                    raise se.AcquireHostIdFailure(self._sdUUID, e)
+-
+-            self.log.debug("Host id for domain %s successfully acquired "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def releaseHostId(self, hostId, async, unused):
+-        with self._lock:
+-            self.log.info("Releasing host id for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            try:
+-                sanlock.rem_lockspace(self._sdUUID, hostId, self._idsPath,
+-                                      async=async, unused=unused)
+-            except sanlock.SanlockException, e:
+-                if e.errno != os.errno.ENOENT:
+-                    raise se.ReleaseHostIdFailure(self._sdUUID, e)
+-
+-            self.log.debug("Host id for domain %s released successfully "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def hasHostId(self, hostId):
+-        with self._lock:
+-            try:
+-                return sanlock.inq_lockspace(self._sdUUID,
+-                                             hostId, self._idsPath)
+-            except sanlock.SanlockException:
+-                self.log.debug("Unable to inquire sanlock lockspace "
+-                               "status, returning False", exc_info=True)
+-                return False
+-
+-    # The hostId parameter is maintained here only for compatibility with
+-    # ClusterLock. We could consider to remove it in the future but keeping it
+-    # for logging purpose is desirable.
+-    def acquire(self, hostId):
+-        with nested(self._lock, SANLock._sanlock_lock):
+-            self.log.info("Acquiring cluster lock for domain %s (id: %s)",
+-                          self._sdUUID, hostId)
+-
+-            while True:
+-                if SANLock._sanlock_fd is None:
+-                    try:
+-                        SANLock._sanlock_fd = sanlock.register()
+-                    except sanlock.SanlockException, e:
+-                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
+-                                        "Cannot register to sanlock", str(e))
+-
+-                try:
+-                    sanlock.acquire(self._sdUUID, SDM_LEASE_NAME,
+-                                    [(self._leasesPath, SDM_LEASE_OFFSET)],
+-                                    slkfd=SANLock._sanlock_fd)
+-                except sanlock.SanlockException, e:
+-                    if e.errno != os.errno.EPIPE:
+-                        raise se.AcquireLockFailure(self._sdUUID, e.errno,
+-                                        "Cannot acquire cluster lock", str(e))
+-                    SANLock._sanlock_fd = None
+-                    continue
+-
+-                break
+-
+-            self.log.debug("Cluster lock for domain %s successfully acquired "
+-                           "(id: %s)", self._sdUUID, hostId)
+-
+-    def release(self):
+-        with self._lock:
+-            self.log.info("Releasing cluster lock for domain %s", self._sdUUID)
+-
+-            try:
+-                sanlock.release(self._sdUUID, SDM_LEASE_NAME,
+-                                [(self._leasesPath, SDM_LEASE_OFFSET)],
+-                                slkfd=SANLock._sanlock_fd)
+-            except sanlock.SanlockException, e:
+-                raise se.ReleaseLockFailure(self._sdUUID, e)
+-
+-            self._sanlockfd = None
+-            self.log.debug("Cluster lock for domain %s successfully released",
+-                           self._sdUUID)
+diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
+index 1b11017..dbc1beb 100644
+--- a/vdsm/storage/sd.py
++++ b/vdsm/storage/sd.py
+@@ -31,7 +31,7 @@ import resourceFactories
+ from resourceFactories import IMAGE_NAMESPACE, VOLUME_NAMESPACE
+ import resourceManager as rm
+ from vdsm import constants
+-import safelease
++import clusterlock
+ import outOfProcess as oop
+ from persistentDict import unicodeEncoder, unicodeDecoder
+ 
+@@ -307,12 +307,12 @@ class StorageDomain:
+                 DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
+                 DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
+                 DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC])
+-            self._clusterLock = safelease.ClusterLock(self.sdUUID,
+-                    self.getIdsFilePath(), self.getLeasesFilePath(),
+-                    *leaseParams)
++            self._clusterLock = clusterlock.SafeLease(
++                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath(),
++                *leaseParams)
+         elif domversion in DOM_SANLOCK_VERS:
+-            self._clusterLock = safelease.SANLock(self.sdUUID,
+-                    self.getIdsFilePath(), self.getLeasesFilePath())
++            self._clusterLock = clusterlock.SANLock(
++                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath())
+         else:
+             raise se.UnsupportedDomainVersion(domversion)
+ 
+diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
+index 40d15b3..e13d088 100644
+--- a/vdsm/storage/sp.py
++++ b/vdsm/storage/sp.py
+@@ -494,7 +494,7 @@ class StoragePool(Securable):
+             return config.getint("irs", "maximum_domains_in_pool")
+ 
+     @unsecured
+-    def _acquireTemporaryClusterLock(self, msdUUID, safeLease):
++    def _acquireTemporaryClusterLock(self, msdUUID, leaseParams):
+         try:
+             # Master domain is unattached and all changes to unattached domains
+             # must be performed under storage lock
+@@ -504,7 +504,7 @@ class StoragePool(Securable):
+             # assigned id for this pool
+             self.id = msd.getReservedId()
+ 
+-            msd.changeLeaseParams(safeLease)
++            msd.changeLeaseParams(leaseParams)
+ 
+             msd.acquireHostId(self.id)
+ 
+@@ -527,7 +527,7 @@ class StoragePool(Securable):
+         self.id = SPM_ID_FREE
+ 
+     @unsecured
+-    def create(self, poolName, msdUUID, domList, masterVersion, safeLease):
++    def create(self, poolName, msdUUID, domList, masterVersion, leaseParams):
+         """
+         Create new storage pool with single/multiple image data domain.
+         The command will create new storage pool meta-data attach each
+@@ -537,10 +537,9 @@ class StoragePool(Securable):
+          'msdUUID' - master domain of this pool (one of domList)
+          'domList' - list of domains (i.e sdUUID,sdUUID,...,sdUUID)
+         """
+-        self.log.info("spUUID=%s poolName=%s master_sd=%s "
+-                      "domList=%s masterVersion=%s %s",
+-                      self.spUUID, poolName, msdUUID,
+-                      domList, masterVersion, str(safeLease))
++        self.log.info("spUUID=%s poolName=%s master_sd=%s domList=%s "
++                      "masterVersion=%s %s", self.spUUID, poolName, msdUUID,
++                      domList, masterVersion, leaseParams)
+ 
+         if msdUUID not in domList:
+             raise se.InvalidParameterException("masterDomain", msdUUID)
+@@ -565,7 +564,7 @@ class StoragePool(Securable):
+                     raise se.StorageDomainAlreadyAttached(spUUIDs[0], sdUUID)
+ 
+         fileUtils.createdir(self.poolPath)
+-        self._acquireTemporaryClusterLock(msdUUID, safeLease)
++        self._acquireTemporaryClusterLock(msdUUID, leaseParams)
+ 
+         try:
+             self._setSafe()
+@@ -573,7 +572,7 @@ class StoragePool(Securable):
+             # We should do it before actually attaching this domain to the pool.
+             # During 'master' marking we create pool metadata and each attached
+             # domain should register there
+-            self.createMaster(poolName, msd, masterVersion, safeLease)
++            self.createMaster(poolName, msd, masterVersion, leaseParams)
+             self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
+             # Attach storage domains to the storage pool
+             # Since we are creating the pool then attach is done from the hsm and not the spm
+@@ -722,10 +721,10 @@ class StoragePool(Securable):
+ 
+     @unsecured
+     def reconstructMaster(self, hostId, poolName, msdUUID, domDict,
+-                          masterVersion, safeLease):
++                          masterVersion, leaseParams):
+         self.log.info("spUUID=%s hostId=%s poolName=%s msdUUID=%s domDict=%s "
+                       "masterVersion=%s leaseparams=(%s)", self.spUUID, hostId,
+-                      poolName, msdUUID, domDict, masterVersion, str(safeLease))
++                      poolName, msdUUID, domDict, masterVersion, leaseParams)
+ 
+         if msdUUID not in domDict:
+             raise se.InvalidParameterException("masterDomain", msdUUID)
+@@ -736,7 +735,7 @@ class StoragePool(Securable):
+         # For backward compatibility we must support a reconstructMaster
+         # that doesn't specify an hostId.
+         if not hostId:
+-            self._acquireTemporaryClusterLock(msdUUID, safeLease)
++            self._acquireTemporaryClusterLock(msdUUID, leaseParams)
+             temporaryLock = True
+         else:
+             # Forcing to acquire the host id (if it's not acquired already).
+@@ -749,7 +748,7 @@ class StoragePool(Securable):
+ 
+         try:
+             self.createMaster(poolName, futureMaster, masterVersion,
+-                              safeLease)
++                              leaseParams)
+ 
+             for sdUUID in domDict:
+                 domDict[sdUUID] = domDict[sdUUID].capitalize()
+-- 
+1.8.1
+
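One effect of the cleanup is that SafeLease and SANLock now expose the same
surface, so callers can stay agnostic about the backing mechanism. Roughly,
as a sketch of the shared interface (the function names below are
illustrative; the method names and their parameters come from the patch):

    def start_using_domain(cluster_lock, host_id):
        # Same call sequence whether cluster_lock is SafeLease or SANLock:
        # acquireHostId(hostId, async), then acquire(hostId).
        cluster_lock.acquireHostId(host_id, False)
        cluster_lock.acquire(host_id)

    def stop_using_domain(cluster_lock, host_id):
        # release(), then releaseHostId(hostId, async, unused).
        cluster_lock.release()
        cluster_lock.releaseHostId(host_id, False, True)
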
diff --git a/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch b/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
new file mode 100644
index 0000000..3d0666c
--- /dev/null
+++ b/0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
@@ -0,0 +1,151 @@
+From 5da363f0412d2b709fb1460324ee04b5905e492b Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Thu, 20 Dec 2012 06:12:52 -0500
+Subject: [PATCH 21/22] domain: select the cluster lock using makeClusterLock
+
+In order to support different locking mechanisms (not only per-domain
+format but also per-domain type), a new makeClusterLock method has been
+introduced to select the appropriate cluster lock.
+
+Change-Id: I78072254441335a420292af642985840e9b2ac68
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10281
+Reviewed-by: Allon Mureinik <amureini at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11464
+---
+ vdsm/storage/imageRepository/formatConverter.py | 11 +++--
+ vdsm/storage/sd.py                              | 54 +++++++++++++++----------
+ 2 files changed, 39 insertions(+), 26 deletions(-)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 0742560..95a77d1 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -26,7 +26,6 @@ from vdsm import qemuImg
+ from storage import sd
+ from storage import blockSD
+ from storage import image
+-from storage import clusterlock
+ from storage import volume
+ from storage import blockVolume
+ from storage import storage_exception as se
+@@ -91,7 +90,12 @@ def v2DomainConverter(repoPath, hostId, domain, isMsd):
+ 
+ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+     log = logging.getLogger('Storage.v3DomainConverter')
+-    log.debug("Starting conversion for domain %s", domain.sdUUID)
++
++    targetVersion = 3
++    currentVersion = domain.getVersion()
++
++    log.debug("Starting conversion for domain %s from version %s "
++              "to version %s", domain.sdUUID, currentVersion, targetVersion)
+ 
+     targetVersion = 3
+     currentVersion = domain.getVersion()
+@@ -115,8 +119,7 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+         domain.setMetadataPermissions()
+ 
+     log.debug("Initializing the new cluster lock for domain %s", domain.sdUUID)
+-    newClusterLock = clusterlock.SANLock(
+-        domain.sdUUID, domain.getIdsFilePath(), domain.getLeasesFilePath())
++    newClusterLock = domain._makeClusterLock(targetVersion)
+     newClusterLock.initLock()
+ 
+     log.debug("Acquiring the host id %s for domain %s", hostId, domain.sdUUID)
+diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
+index dbc1beb..a55ce06 100644
+--- a/vdsm/storage/sd.py
++++ b/vdsm/storage/sd.py
+@@ -101,10 +101,6 @@ BACKUP_DOMAIN = 3
+ DOMAIN_CLASSES = {DATA_DOMAIN: 'Data', ISO_DOMAIN: 'Iso',
+                   BACKUP_DOMAIN: 'Backup'}
+ 
+-# Lock Version
+-DOM_SAFELEASE_VERS = (0, 2)
+-DOM_SANLOCK_VERS = (3,)
+-
+ # Metadata keys
+ DMDK_VERSION = "VERSION"
+ DMDK_SDUUID = "SDUUID"
+@@ -292,29 +288,20 @@ class StorageDomain:
+     mdBackupVersions = config.get('irs', 'md_backup_versions')
+     mdBackupDir = config.get('irs', 'md_backup_dir')
+ 
++    # version: (clusterLockClass, hasVolumeLeases)
++    _clusterLockTable = {
++        0: (clusterlock.SafeLease, False),
++        2: (clusterlock.SafeLease, False),
++        3: (clusterlock.SANLock, True),
++    }
++
+     def __init__(self, sdUUID, domaindir, metadata):
+         self.sdUUID = sdUUID
+         self.domaindir = domaindir
+         self._metadata = metadata
+         self._lock = threading.Lock()
+         self.stat = None
+-
+-        domversion = self.getVersion()
+-
+-        if domversion in DOM_SAFELEASE_VERS:
+-            leaseParams = (
+-                DEFAULT_LEASE_PARAMS[DMDK_LOCK_RENEWAL_INTERVAL_SEC],
+-                DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
+-                DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
+-                DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC])
+-            self._clusterLock = clusterlock.SafeLease(
+-                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath(),
+-                *leaseParams)
+-        elif domversion in DOM_SANLOCK_VERS:
+-            self._clusterLock = clusterlock.SANLock(
+-                self.sdUUID, self.getIdsFilePath(), self.getLeasesFilePath())
+-        else:
+-            raise se.UnsupportedDomainVersion(domversion)
++        self._clusterLock = self._makeClusterLock()
+ 
+     def __del__(self):
+         if self.stat:
+@@ -328,6 +315,25 @@ class StorageDomain:
+     def oop(self):
+         return oop.getProcessPool(self.sdUUID)
+ 
++    def _makeClusterLock(self, domVersion=None):
++        if not domVersion:
++            domVersion = self.getVersion()
++
++        leaseParams = (
++            DEFAULT_LEASE_PARAMS[DMDK_LOCK_RENEWAL_INTERVAL_SEC],
++            DEFAULT_LEASE_PARAMS[DMDK_LEASE_TIME_SEC],
++            DEFAULT_LEASE_PARAMS[DMDK_LEASE_RETRIES],
++            DEFAULT_LEASE_PARAMS[DMDK_IO_OP_TIMEOUT_SEC],
++        )
++
++        try:
++            clusterLockClass = self._clusterLockTable[domVersion][0]
++        except KeyError:
++            raise se.UnsupportedDomainVersion(domVersion)
++
++        return clusterLockClass(self.sdUUID, self.getIdsFilePath(),
++                                self.getLeasesFilePath(), *leaseParams)
++
+     @classmethod
+     def create(cls, sdUUID, domainName, domClass, typeSpecificArg, version):
+         """
+@@ -436,7 +442,11 @@ class StorageDomain:
+         return self._clusterLock.hasHostId(hostId)
+ 
+     def hasVolumeLeases(self):
+-        return self.getVersion() in DOM_SANLOCK_VERS
++        domVersion = self.getVersion()
++        try:
++            return self._clusterLockTable[domVersion][1]
++        except KeyError:
++            raise se.UnsupportedDomainVersion(domVersion)
+ 
+     def getVolumeLease(self, volUUID):
+         """
+-- 
+1.8.1
+
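The selection logic boils down to a table lookup keyed on the domain format
version: each version maps to its cluster lock class and to whether volume
leases exist. A minimal sketch of that idea, with the classes represented by
strings and an illustrative exception type:

    # version: (cluster lock class, domain has volume leases)
    CLUSTER_LOCK_TABLE = {
        0: ("SafeLease", False),
        2: ("SafeLease", False),
        3: ("SANLock", True),
    }

    def make_cluster_lock(dom_version):
        try:
            lock_class, _ = CLUSTER_LOCK_TABLE[dom_version]
        except KeyError:
            raise ValueError("Unsupported domain version: %s" % dom_version)
        return lock_class

    def has_volume_leases(dom_version):
        try:
            return CLUSTER_LOCK_TABLE[dom_version][1]
        except KeyError:
            raise ValueError("Unsupported domain version: %s" % dom_version)
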
diff --git a/0022-clusterlock-add-the-local-locking-implementation.patch b/0022-clusterlock-add-the-local-locking-implementation.patch
new file mode 100644
index 0000000..d496718
--- /dev/null
+++ b/0022-clusterlock-add-the-local-locking-implementation.patch
@@ -0,0 +1,225 @@
+From e73c5bc586d1e689fc33ba77082488d755e3a621 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Thu, 20 Dec 2012 08:08:21 -0500
+Subject: [PATCH 22/22] clusterlock: add the local locking implementation
+
+In order to have a faster and more lightweight locking mechanism on
+local storage domains a new cluster lock (based on flock) has been
+introduced.
+
+Change-Id: I106618a9a61cc96727edaf2e3ab043b2ec28ebe1
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/10282
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11465
+---
+ vdsm/storage/clusterlock.py | 122 +++++++++++++++++++++++++++++++++++++++++---
+ vdsm/storage/localFsSD.py   |   7 +++
+ vdsm/storage/misc.py        |  19 +++++++
+ 3 files changed, 141 insertions(+), 7 deletions(-)
+
+diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
+index 4525b2f..cabf174 100644
+--- a/vdsm/storage/clusterlock.py
++++ b/vdsm/storage/clusterlock.py
+@@ -19,6 +19,7 @@
+ #
+ 
+ import os
++import fcntl
+ import threading
+ import logging
+ import subprocess
+@@ -127,6 +128,22 @@ class SafeLease(object):
+             self.log.debug("Cluster lock released successfully")
+ 
+ 
++initSANLockLog = logging.getLogger("initSANLock")
++
++
++def initSANLock(sdUUID, idsPath, leasesPath):
++    initSANLockLog.debug("Initializing SANLock for domain %s", sdUUID)
++
++    try:
++        sanlock.init_lockspace(sdUUID, idsPath)
++        sanlock.init_resource(sdUUID, SDM_LEASE_NAME,
++                              [(leasesPath, SDM_LEASE_OFFSET)])
++    except sanlock.SanlockException:
++        initSANLockLog.error("Cannot initialize SANLock for domain %s",
++                             sdUUID, exc_info=True)
++        raise se.ClusterLockInitError()
++
++
+ class SANLock(object):
+     log = logging.getLogger("SANLock")
+ 
+@@ -141,13 +158,7 @@ class SANLock(object):
+         self._sanlockfd = None
+ 
+     def initLock(self):
+-        try:
+-            sanlock.init_lockspace(self._sdUUID, self._idsPath)
+-            sanlock.init_resource(self._sdUUID, SDM_LEASE_NAME,
+-                                  [(self._leasesPath, SDM_LEASE_OFFSET)])
+-        except sanlock.SanlockException:
+-            self.log.warn("Cannot initialize clusterlock", exc_info=True)
+-            raise se.ClusterLockInitError()
++        initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
+ 
+     def setParams(self, *args):
+         pass
+@@ -249,3 +260,100 @@ class SANLock(object):
+             self._sanlockfd = None
+             self.log.debug("Cluster lock for domain %s successfully released",
+                            self._sdUUID)
++
++
++class LocalLock(object):
++    log = logging.getLogger("LocalLock")
++
++    _globalLockMap = {}
++    _globalLockMapSync = threading.Lock()
++
++    def __init__(self, sdUUID, idsPath, leasesPath, *args):
++        self._sdUUID = sdUUID
++        self._idsPath = idsPath
++        self._leasesPath = leasesPath
++
++    def initLock(self):
++        # The LocalLock initialization is based on SANLock to maintain on-disk
++        # domain format consistent across all the V3 types.
++        # The advantage is that the domain can be exposed as an NFS/GlusterFS
++        # domain later on without any modification.
++        # XXX: Keep in mind that LocalLock and SANLock cannot detect each other
++        # and therefore concurrently using the same domain as local domain and
++        # NFS domain (or any other shared file-based domain) will certainly
++        # lead to disastrous consequences.
++        initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
++
++    def setParams(self, *args):
++        pass
++
++    def getReservedId(self):
++        return MAX_HOST_ID
++
++    def acquireHostId(self, hostId, async):
++        self.log.debug("Host id for domain %s successfully acquired (id: %s)",
++                       self._sdUUID, hostId)
++
++    def releaseHostId(self, hostId, async, unused):
++        self.log.debug("Host id for domain %s released successfully (id: %s)",
++                       self._sdUUID, hostId)
++
++    def hasHostId(self, hostId):
++        return True
++
++    def acquire(self, hostId):
++        with self._globalLockMapSync:
++            self.log.info("Acquiring local lock for domain %s (id: %s)",
++                          self._sdUUID, hostId)
++
++            lockFile = self._globalLockMap.get(self._sdUUID, None)
++
++            if lockFile:
++                try:
++                    misc.NoIntrCall(fcntl.fcntl, lockFile, fcntl.F_GETFD)
++                except IOError as e:
++                    # We found a stale file descriptor, removing.
++                    del self._globalLockMap[self._sdUUID]
++
++                    # Raise any other unknown error.
++                    if e.errno != os.errno.EBADF:
++                        raise
++                else:
++                    self.log.debug("Local lock already acquired for domain "
++                                   "%s (id: %s)", self._sdUUID, hostId)
++                    return  # success, the lock was already acquired
++
++            lockFile = misc.NoIntrCall(os.open, self._idsPath, os.O_RDONLY)
++
++            try:
++                misc.NoIntrCall(fcntl.flock, lockFile,
++                                fcntl.LOCK_EX | fcntl.LOCK_NB)
++            except IOError as e:
++                misc.NoIntrCall(os.close, lockFile)
++                if e.errno in (os.errno.EACCES, os.errno.EAGAIN):
++                    raise se.AcquireLockFailure(
++                        self._sdUUID, e.errno, "Cannot acquire local lock",
++                        str(e))
++                raise
++            else:
++                self._globalLockMap[self._sdUUID] = lockFile
++
++        self.log.debug("Local lock for domain %s successfully acquired "
++                       "(id: %s)", self._sdUUID, hostId)
++
++    def release(self):
++        with self._globalLockMapSync:
++            self.log.info("Releasing local lock for domain %s", self._sdUUID)
++
++            lockFile = self._globalLockMap.get(self._sdUUID, None)
++
++            if not lockFile:
++                self.log.debug("Local lock already released for domain %s",
++                               self._sdUUID)
++                return
++
++            misc.NoIntrCall(os.close, lockFile)
++            del self._globalLockMap[self._sdUUID]
++
++            self.log.debug("Local lock for domain %s successfully released",
++                           self._sdUUID)
+diff --git a/vdsm/storage/localFsSD.py b/vdsm/storage/localFsSD.py
+index 198c073..7d59894 100644
+--- a/vdsm/storage/localFsSD.py
++++ b/vdsm/storage/localFsSD.py
+@@ -26,9 +26,16 @@ import fileSD
+ import fileUtils
+ import storage_exception as se
+ import misc
++import clusterlock
+ 
+ 
+ class LocalFsStorageDomain(fileSD.FileStorageDomain):
++    # version: (clusterLockClass, hasVolumeLeases)
++    _clusterLockTable = {
++        0: (clusterlock.SafeLease, False),
++        2: (clusterlock.SafeLease, False),
++        3: (clusterlock.LocalLock, True),
++    }
+ 
+     @classmethod
+     def _preCreateValidation(cls, sdUUID, domPath, typeSpecificArg, version):
+diff --git a/vdsm/storage/misc.py b/vdsm/storage/misc.py
+index 17d38ee..b26a317 100644
+--- a/vdsm/storage/misc.py
++++ b/vdsm/storage/misc.py
+@@ -1344,6 +1344,25 @@ def itmap(func, iterable, maxthreads=UNLIMITED_THREADS):
+         yield respQueue.get()
+ 
+ 
++def NoIntrCall(fun, *args, **kwargs):
++    """
++    This wrapper is used to handle the interrupt exceptions that might
++    occur during a system call.
++    """
++    while True:
++        try:
++            return fun(*args, **kwargs)
++        except (IOError, select.error) as e:
++            if e.args[0] == os.errno.EINTR:
++                continue
++            raise
++        break
++
++
++# NOTE: it would be best to try and unify NoIntrCall and NoIntrPoll.
++# We could do so by defining a new object that can be used as a placeholder
++# for the changing timeout value in the *args/**kwargs. This would
++# lead us to rebuilding the function arguments at each loop.
+ def NoIntrPoll(pollfun, timeout=-1):
+     """
+     This wrapper is used to handle the interrupt exceptions that might occur
+-- 
+1.8.1
+
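Outside of vdsm, the flock technique used by LocalLock.acquire() boils down to a non-blocking exclusive lock plus an EINTR-safe syscall wrapper. A rough standalone sketch, assuming a plain file path instead of the domain ids file and a generic error instead of se.AcquireLockFailure:

    import errno
    import fcntl
    import os

    def no_intr(func, *args, **kwargs):
        # Retry a call that was interrupted by a signal (EINTR),
        # similar in spirit to the NoIntrCall helper added above.
        while True:
            try:
                return func(*args, **kwargs)
            except (IOError, OSError) as e:
                if e.errno == errno.EINTR:
                    continue
                raise

    def acquire_local_lock(path):
        # Take a non-blocking exclusive flock on 'path'; return the fd.
        fd = no_intr(os.open, path, os.O_RDONLY)
        try:
            no_intr(fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError as e:
            no_intr(os.close, fd)
            if e.errno in (errno.EACCES, errno.EAGAIN):
                raise RuntimeError("lock already held: %s" % path)
            raise
        return fd

    def release_local_lock(fd):
        # Closing the descriptor releases the flock.
        no_intr(os.close, fd)

As the XXX comment in the patch warns, flock and sanlock are unaware of each other, so a domain must never be used as a local domain and a shared file-based domain at the same time.
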
diff --git a/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch b/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
new file mode 100644
index 0000000..cda48c4
--- /dev/null
+++ b/0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
@@ -0,0 +1,38 @@
+From 94a3b6b63449ac4a16c76c1d5c52d58f5c895ecc Mon Sep 17 00:00:00 2001
+From: Lee Yarwood <lyarwood at redhat.com>
+Date: Tue, 22 Jan 2013 14:09:28 +0000
+Subject: [PATCH 23/27] upgrade: catch MetaDataKeyNotFoundError when preparing
+ images
+
+Ensure that we catch and continue past any MetaDataKeyNotFoundError
+exception when preparing images that may contain partially removed
+volumes. For example where the LV is still present but the metadata
+block has been blanked out.
+
+Change-Id: I92f7a61bf6d1e24e84711486fd4f8ba67e2a0077
+Signed-off-by: Lee Yarwood <lyarwood at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11485
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/storage/imageRepository/formatConverter.py | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/vdsm/storage/imageRepository/formatConverter.py b/vdsm/storage/imageRepository/formatConverter.py
+index 95a77d1..cbf64f5 100644
+--- a/vdsm/storage/imageRepository/formatConverter.py
++++ b/vdsm/storage/imageRepository/formatConverter.py
+@@ -280,6 +280,11 @@ def v3DomainConverter(repoPath, hostId, domain, isMsd):
+                 log.error("It is not possible to prepare the image %s, the "
+                           "volume chain looks damaged", imgUUID, exc_info=True)
+ 
++            except se.MetaDataKeyNotFoundError:
++                log.error("It is not possible to prepare the image %s, the "
++                          "volume metadata looks damaged", imgUUID,
++                          exc_info=True)
++
+             finally:
+                 try:
+                     img.teardown(domain.sdUUID, imgUUID)
+-- 
+1.8.1
+
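The pattern here is a best-effort conversion loop: per-image failures are logged and skipped so a single damaged image cannot abort the whole domain upgrade. A simplified sketch with stand-in exception types (not the real storage_exception classes):

    import logging

    log = logging.getLogger("upgrade-sketch")

    class VolumeChainDamaged(Exception):        # stand-in exception type
        pass

    class MetaDataKeyNotFoundError(Exception):  # stand-in exception type
        pass

    def prepare_all_images(images, prepare, teardown):
        for img in images:
            try:
                prepare(img)
            except (VolumeChainDamaged, MetaDataKeyNotFoundError):
                # Log and continue: the image is partially removed or damaged.
                log.error("cannot prepare image %s, it looks damaged",
                          img, exc_info=True)
            finally:
                try:
                    teardown(img)
                except Exception:
                    log.debug("unable to tear down image %s", img,
                              exc_info=True)
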
diff --git a/0024-vdsm.spec-Require-openssl.patch b/0024-vdsm.spec-Require-openssl.patch
new file mode 100644
index 0000000..ab86511
--- /dev/null
+++ b/0024-vdsm.spec-Require-openssl.patch
@@ -0,0 +1,31 @@
+From 53e8505e34f1e7a76adfa0de74d5eb9d27efd586 Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Wed, 30 Jan 2013 09:27:38 -0500
+Subject: [PATCH 24/27] vdsm.spec: Require openssl
+
+deployUtil uses the openssl command, so we should Require it.
+
+Change-Id: Ib53aa66bad94e9c4046f3430b892a60cbc80c520
+Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=905728
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11543
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 8ad4dce..e898c59 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -207,6 +207,7 @@ BuildArch:      noarch
+ 
+ Requires: %{name} = %{version}-%{release}
+ Requires: m2crypto
++Requires: openssl
+ 
+ %description reg
+ VDSM registration package. Used to register a Linux host to a Virtualization
+-- 
+1.8.1
+
diff --git a/0025-Fedora-18-require-a-newer-udev.patch b/0025-Fedora-18-require-a-newer-udev.patch
new file mode 100644
index 0000000..6464c3f
--- /dev/null
+++ b/0025-Fedora-18-require-a-newer-udev.patch
@@ -0,0 +1,36 @@
+From 674f1003f05d84609e4555c1509b1409475e1c97 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Tue, 29 Jan 2013 10:44:01 +0200
+Subject: [PATCH 25/27] Fedora 18: require a newer udev
+
+Due to https://bugzilla.redhat.com/903716 `udev: device node permissions
+not applied with "change" event' we could not use block storage in
+Fedora. Let us explicitly require a newer systemd that fixes this
+issue, to avoid users' dismay.
+
+Change-Id: Ie17abb2af146c492efafc94bfbb533c7f6c8025c
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11489
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11534
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm.spec.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index e898c59..00c1259 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -136,6 +136,7 @@ Requires: selinux-policy-targeted >= 3.11.1-71
+ # In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+ # disabled we now require the version 2.1.13-44 (or newer) of Fedora.
+ Requires: policycoreutils >= 2.1.13-44
++Requires: systemd >= 197-1.fc18.2
+ %endif
+ 
+ Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
+-- 
+1.8.1
+
diff --git a/0026-fix-sloppy-backport-of-safelease-rename.patch b/0026-fix-sloppy-backport-of-safelease-rename.patch
new file mode 100644
index 0000000..8ae6b1e
--- /dev/null
+++ b/0026-fix-sloppy-backport-of-safelease-rename.patch
@@ -0,0 +1,40 @@
+From 7904843648c7dd368f832d8f2b652290ca717424 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Wed, 30 Jan 2013 13:43:33 +0200
+Subject: [PATCH 26/27] fix sloppy backport of safelease rename
+
+Somehow, this sloppy backport of I74070ebb43dd726362900a0746c
+was not caught by Jenkins. Any idea why?
+
+Change-Id: Iaf1dc264d17b59934b78877a11f37b21614b268e
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11544
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 80d6f52..06ffbf9 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -54,6 +54,7 @@ PEP8_WHITELIST = \
+ 	vdsm/*.py.in \
+ 	vdsm/storage/__init__.py \
+ 	vdsm/storage/blockVolume.py \
++	vdsm/storage/clusterlock.py \
+ 	vdsm/storage/devicemapper.py \
+ 	vdsm/storage/domainMonitor.py \
+ 	vdsm/storage/fileSD.py \
+@@ -74,7 +75,6 @@ PEP8_WHITELIST = \
+ 	vdsm/storage/persistentDict.py \
+ 	vdsm/storage/remoteFileHandler.py \
+ 	vdsm/storage/resourceFactories.py \
+-	vdsm/storage/safelease.py \
+ 	vdsm/storage/sd.py \
+ 	vdsm/storage/sdc.py \
+ 	vdsm/storage/securable.py \
+-- 
+1.8.1
+
diff --git a/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch b/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
new file mode 100644
index 0000000..14ce5fd
--- /dev/null
+++ b/0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
@@ -0,0 +1,52 @@
+From 18c24f7c7c27ac732c4a760caa9524e7319cd47e Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Tue, 29 Jan 2013 13:49:46 +0200
+Subject: [PATCH 27/27] removing the use of zombie reaper from supervdsm
+
+This may solve validateAccess errors, but can cause defunct subprocesses.
+This patch is marked as WIP until we find a better solution; until then
+it helps to verify that the previous errors caused by the zombie reaper
+handling no longer occur.
+
+Change-Id: If3f9bae47f2894cc95785de8f19f6ec388ea58da
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11491
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/supervdsmServer.py | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/vdsm/supervdsmServer.py b/vdsm/supervdsmServer.py
+index 833e91f..21e7c94 100755
+--- a/vdsm/supervdsmServer.py
++++ b/vdsm/supervdsmServer.py
+@@ -56,7 +56,6 @@ import tc
+ import ksm
+ import mkimage
+ from storage.multipath import MPATH_CONF
+-import zombieReaper
+ 
+ _UDEV_RULE_FILE_DIR = "/etc/udev/rules.d/"
+ _UDEV_RULE_FILE_PREFIX = "99-vdsm-"
+@@ -199,7 +198,6 @@ class _SuperVdsm(object):
+         pipe, hisPipe = Pipe()
+         proc = Process(target=child, args=(hisPipe,))
+         proc.start()
+-        zombieReaper.autoReapPID(proc.pid)
+ 
+         if not pipe.poll(RUN_AS_TIMEOUT):
+             try:
+@@ -391,8 +389,6 @@ def main():
+         if os.path.exists(address):
+             os.unlink(address)
+ 
+-        zombieReaper.registerSignalHandler()
+-
+         log.debug("Setting up keep alive thread")
+ 
+         monThread = threading.Thread(target=__pokeParent,
+-- 
+1.8.1
+
diff --git a/0028-configNet-allow-delete-update-of-devices-with-no-ifc.patch b/0028-configNet-allow-delete-update-of-devices-with-no-ifc.patch
new file mode 100644
index 0000000..a85b5fc
--- /dev/null
+++ b/0028-configNet-allow-delete-update-of-devices-with-no-ifc.patch
@@ -0,0 +1,63 @@
+From c1465ed861233cf90a1cace4c41560cbd48a61b3 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Fri, 1 Feb 2013 23:04:50 +0200
+Subject: [PATCH 28/30] configNet: allow delete/update of devices with no ifcfg
+
+In Fedora 18, ifcfg files are missing by default. This patch assumes
+that there are no custom MTU settings for a device with no ifcfg file.
+
+This version of the patch owes a lot to Mark Wu's
+http://gerrit.ovirt.org/11357 and to Toni who convinced me that it is
+better to read the MTU directly from the kernel.
+
+Change-Id: Icb3a623ca3d3b560288cbe4141eea6bd060ac798
+Bug-Url: https://bugzilla.redhat.com/906383
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11621
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Tested-by: Mike Kolesnik <mkolesni at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11680
+Tested-by: Antoni Segura Puimedon <asegurap at redhat.com>
+---
+ vdsm/configNetwork.py | 18 +++++-------------
+ 1 file changed, 5 insertions(+), 13 deletions(-)
+
+diff --git a/vdsm/configNetwork.py b/vdsm/configNetwork.py
+index 53debfa..041e1a4 100755
+--- a/vdsm/configNetwork.py
++++ b/vdsm/configNetwork.py
+@@ -600,12 +600,10 @@ class ConfigWriter(object):
+         it check if a vlan, bond that have a higher mtu value
+         """
+         for nic in nics:
+-            cf = self.NET_CONF_PREF + nic
+-            mtuval = self._getConfigValue(cf, 'MTU')
+-            if not mtuval is None:
+-                mtuval = int(mtuval)
+-                if mtuval > mtu:
+-                    mtu = mtuval
++            mtuval = int(netinfo.getMtu(nic))
++
++            if mtuval > mtu:
++                mtu = mtuval
+         return mtu
+ 
+     def setNewMtu(self, network, bridged):
+@@ -623,13 +621,7 @@ class ConfigWriter(object):
+         _netinfo = netinfo.NetInfo()
+         currmtu = None
+         if bridged:
+-            cf = self.NET_CONF_PREF + network
+-            currmtu = self._getConfigValue(cf, 'MTU')
+-            if currmtu:
+-                currmtu = int(currmtu)
+-            else:
+-                # Optimization: if network hasn't custom MTU, do nothing
+-                return
++            currmtu = int(netinfo.getMtu(network))
+ 
+         nics, delvlan, bonding = \
+             _netinfo.getNicsVlanAndBondingForNetwork(network)
+-- 
+1.8.1.2
+
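netinfo.getMtu is used above instead of parsing ifcfg files; presumably it returns the value the kernel exposes through sysfs. A hedged sketch of that approach (path and fallback are illustrative assumptions, not vdsm's actual code):

    def get_mtu(dev, default=1500):
        # Read the live MTU for a network device from sysfs.
        try:
            with open("/sys/class/net/%s/mtu" % dev) as f:
                return int(f.read().strip())
        except (IOError, ValueError):
            return default

    def max_mtu(devices, mtu=0):
        # Mirror of the loop above: the highest MTU among the devices wins.
        for dev in devices:
            mtu = max(mtu, get_mtu(dev))
        return mtu
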
diff --git a/0029-Requires-policycoreutils-2.1.13-55-to-avoid-another-.patch b/0029-Requires-policycoreutils-2.1.13-55-to-avoid-another-.patch
new file mode 100644
index 0000000..83a40f6
--- /dev/null
+++ b/0029-Requires-policycoreutils-2.1.13-55-to-avoid-another-.patch
@@ -0,0 +1,41 @@
+From 6c021fc54d446f944acd497fb7e2110428ac289c Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Tue, 5 Feb 2013 14:30:49 -0500
+Subject: [PATCH 29/30] Requires policycoreutils-2.1.13-55 to avoid another
+ break on selinux disabled.
+
+When selinux is disabled on f18, importing the sepolicy module fails
+with an exception. This makes the vdsm-tool unavailable, and therefore
+the bonding module can't be loaded when vdsm starts up. For details,
+please see https://bugzilla.redhat.com/show_bug.cgi?id=889698
+
+Change-Id: I09387167ceeffdc104910103b8381954296cdbe9
+Signed-off-by: Mark Wu <wudxw at linux.vnet.ibm.com>
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11731
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Tested-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 00c1259..0aad124 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -133,9 +133,9 @@ Requires: lvm2 >= 2.02.95
+ 
+ %if 0%{?fedora} >= 18
+ Requires: selinux-policy-targeted >= 3.11.1-71
+-# In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
+-# disabled we now require the version 2.1.13-44 (or newer) of Fedora.
+-Requires: policycoreutils >= 2.1.13-44
++# In order to avoid a policycoreutils bug (rhbz 889698) when selinux is
++# disabled we now require the version 2.1.13-55 (or newer) of Fedora.
++Requires: policycoreutils >= 2.1.13-55
+ Requires: systemd >= 197-1.fc18.2
+ %endif
+ 
+-- 
+1.8.1.2
+
diff --git a/0030-After-fail-to-connect-to-supervdsm-more-than-3-time-.patch b/0030-After-fail-to-connect-to-supervdsm-more-than-3-time-.patch
new file mode 100644
index 0000000..6f21182
--- /dev/null
+++ b/0030-After-fail-to-connect-to-supervdsm-more-than-3-time-.patch
@@ -0,0 +1,50 @@
+From 73b120fd6d0cd8215f3857c034cc7d4584c8ee05 Mon Sep 17 00:00:00 2001
+From: Yaniv Bronhaim <ybronhei at redhat.com>
+Date: Thu, 14 Feb 2013 13:46:06 +0200
+Subject: [PATCH 30/30] After fail to connect to supervdsm more than 3 time
+ vdsm gets into panic
+
+Due to a race between the old supervdsm instance and the new one after
+prepareForShutdown, sometimes the socket is removed after the
+new supervdsm has started to listen on it.
+The _pokeParent thread unlinks the socket when it detects that vdsm is dead.
+This can take longer than the time it takes vdsm to start up and
+start the new instance of supervdsm. The unlink removes the socket file
+and vdsm cannot communicate with supervdsm.
+When the communication fails, vdsm panics and restarts itself, which
+starts supervdsm again as needed.
+
+Change-Id: Iafe112893a76686edd2949d4f40b734646fd74df
+Bug-Id: https://bugzilla.redhat.com/show_bug.cgi?id=910005
+Signed-off-by: Yaniv Bronhaim <ybronhei at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/11932
+Reviewed-by: Saggi Mizrahi <smizrahi at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/12053
+---
+ vdsm/supervdsm.py | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm/supervdsm.py b/vdsm/supervdsm.py
+index 6a38076..1b6402d 100644
+--- a/vdsm/supervdsm.py
++++ b/vdsm/supervdsm.py
+@@ -194,7 +194,14 @@ class SuperVdsmProxy(object):
+     def launch(self):
+         self._firstLaunch = False
+         self._start()
+-        utils.retry(self._connect, Exception, timeout=60)
++        try:
++            # We retry 3 times to connect to avoid exceptions that are raised
++            # due to the process initializing. It might take time to create
++            # the communication socket or other initialization methods take
++            # more time than expected.
++            utils.retry(self._connect, Exception, timeout=60)
++        except:
++            misc.panic("Couldn't connect to supervdsm")
+ 
+     def __getattr__(self, name):
+         return ProxyCaller(self, name)
+-- 
+1.8.1.2
+
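The retry-then-panic logic can be illustrated with a generic helper; the parameters below are illustrative and do not reproduce the actual signature of vdsm's utils.retry:

    import time

    def retry(func, exceptions=(Exception,), attempts=3, sleep=1, timeout=60):
        # Keep calling func() until it succeeds, the attempts run out, or
        # the overall timeout would be exceeded; then re-raise the error.
        deadline = time.time() + timeout
        for i in range(attempts):
            try:
                return func()
            except exceptions:
                if i == attempts - 1 or time.time() + sleep > deadline:
                    raise
                time.sleep(sleep)

In the patch, any failure that survives the retries ends in misc.panic(), which restarts vdsm and thereby respawns supervdsm with a fresh socket.
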
diff --git a/0031-packaging-add-load_needed_modules.py.in.patch b/0031-packaging-add-load_needed_modules.py.in.patch
new file mode 100644
index 0000000..25fc677
--- /dev/null
+++ b/0031-packaging-add-load_needed_modules.py.in.patch
@@ -0,0 +1,81 @@
+From 45fec93749f8474db30cb0b518ca9b2f7b80f142 Mon Sep 17 00:00:00 2001
+From: Federico Simoncelli <fsimonce at redhat.com>
+Date: Thu, 28 Feb 2013 10:44:49 +0100
+Subject: [PATCH 31/32] packaging: add load_needed_modules.py.in
+
+Signed-off-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm-tool/load_needed_modules.py.in | 61 +++++++++++++++++++++++++++++++++++++
+ 1 file changed, 61 insertions(+)
+ create mode 100644 vdsm-tool/load_needed_modules.py.in
+
+diff --git a/vdsm-tool/load_needed_modules.py.in b/vdsm-tool/load_needed_modules.py.in
+new file mode 100644
+index 0000000..675a172
+--- /dev/null
++++ b/vdsm-tool/load_needed_modules.py.in
+@@ -0,0 +1,61 @@
++# Copyright IBM, Corp. 2012
++#
++# This program is free software; you can redistribute it and/or modify
++# it under the terms of the GNU General Public License as published by
++# the Free Software Foundation; either version 2 of the License, or
++# (at your option) any later version.
++#
++# This program is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++# GNU General Public License for more details.
++#
++# You should have received a copy of the GNU General Public License
++# along with this program; if not, write to the Free Software
++# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
++#
++# Refer to the README and COPYING files for full details of the license
++#
++
++
++import subprocess
++
++from vdsm.tool import expose
++
++
++EX_MODPROBE = '@MODPROBE_PATH@'
++
++
++def _exec_command(argv):
++    """
++    This function executes a given shell command.
++    """
++
++    p = subprocess.Popen(argv, stdout=subprocess.PIPE,
++                         stderr=subprocess.PIPE)
++    out, err = p.communicate()
++    rc = p.returncode
++    if rc != 0:
++        raise Exception("Execute command %s failed: %s" % (argv, err))
++
++
++def _enable_bond_dev():
++    REQUIRED = set(['bond0', 'bond1', 'bond2', 'bond3', 'bond4'])
++    MASTER_FILE = '/sys/class/net/bonding_masters'
++
++    # @ENGINENAME@ currently assumes that all bonding devices pre-exist
++    existing = set(file(MASTER_FILE).read().split())
++    with open(MASTER_FILE, 'w') as f:
++        for bond in REQUIRED - existing:
++            f.write('+%s\n' % bond)
++
++
++ at expose('load-needed-modules')
++def load_needed_modules():
++    """
++    Load needed modules
++    """
++
++    for mod in ['tun', 'bonding', '8021q']:
++        _exec_command([EX_MODPROBE, mod])
++    _enable_bond_dev()
+-- 
+1.8.1.2
+
diff --git a/0032-tool-_enable_bond_dev-reopen-bonding_masters-per-bon.patch b/0032-tool-_enable_bond_dev-reopen-bonding_masters-per-bon.patch
new file mode 100644
index 0000000..b73ce8b
--- /dev/null
+++ b/0032-tool-_enable_bond_dev-reopen-bonding_masters-per-bon.patch
@@ -0,0 +1,38 @@
+From 8cca94da272babb5eef7a81a7a05d178070efe69 Mon Sep 17 00:00:00 2001
+From: Dan Kenigsberg <danken at redhat.com>
+Date: Mon, 25 Feb 2013 16:06:56 +0200
+Subject: [PATCH 32/32] tool: _enable_bond_dev: reopen bonding_masters per bond
+
+Writing multiple +bondnames into /sys/class/net/bonding_masters is not
+enough to add new bonding devices. One has to reopen that file for each
+added bond.
+
+Change-Id: I4693a3d2cc3beba0b5961d16fb1ef25170f99f9e
+Signed-off-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/12409
+Reviewed-by: Antoni Segura Puimedon <asegurap at redhat.com>
+Tested-by: gena cher <genadic at gmail.com>
+Reviewed-on: http://gerrit.ovirt.org/12447
+Reviewed-by: Igor Lvovsky <ilvovsky at redhat.com>
+---
+ vdsm-tool/load_needed_modules.py.in | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/vdsm-tool/load_needed_modules.py.in b/vdsm-tool/load_needed_modules.py.in
+index 675a172..89c2b3f 100644
+--- a/vdsm-tool/load_needed_modules.py.in
++++ b/vdsm-tool/load_needed_modules.py.in
+@@ -45,8 +45,8 @@ def _enable_bond_dev():
+ 
+     # @ENGINENAME@ currently assumes that all bonding devices pre-exist
+     existing = set(file(MASTER_FILE).read().split())
+-    with open(MASTER_FILE, 'w') as f:
+-        for bond in REQUIRED - existing:
++    for bond in REQUIRED - existing:
++        with open(MASTER_FILE, 'w') as f:
+             f.write('+%s\n' % bond)
+ 
+ 
+-- 
+1.8.1.2
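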
+
diff --git a/0033-gluster-Handling-Attribute-error-in-Python-2.6.patch b/0033-gluster-Handling-Attribute-error-in-Python-2.6.patch
new file mode 100644
index 0000000..52909b2
--- /dev/null
+++ b/0033-gluster-Handling-Attribute-error-in-Python-2.6.patch
@@ -0,0 +1,91 @@
+From 345374cfd88cc5bd1295d8e72c579edf3669ab1a Mon Sep 17 00:00:00 2001
+From: Aravinda VK <avishwan at redhat.com>
+Date: Wed, 6 Mar 2013 14:34:12 +0530
+Subject: [PATCH 33/36] gluster: Handling Attribute error in Python 2.6
+
+xml.etree.cElementTree in Python 2.6 doesn't have the attribute
+ParseError (introduced in Python 2.7). VDSM gluster/cli.py tries
+to capture etree.ParseError when gluster cli returns incompatible
+xml output.
+
+Change-Id: I63c33b34ce11473636365ea094e267c5424c7255
+Signed-off-by: Aravinda VK <avishwan at redhat.com>
+(cherry picked from commit 5759c876e414b433b1a42e50e1817d5841ec2ef2)
+Reviewed-on: http://gerrit.ovirt.org/12829
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm/gluster/cli.py | 18 ++++++++++++------
+ 1 file changed, 12 insertions(+), 6 deletions(-)
+
+diff --git a/vdsm/gluster/cli.py b/vdsm/gluster/cli.py
+index 7136281..d4f65ab 100644
+--- a/vdsm/gluster/cli.py
++++ b/vdsm/gluster/cli.py
+@@ -31,6 +31,12 @@ _glusterCommandPath = utils.CommandPath("gluster",
+                                         )
+ 
+ 
++if hasattr(etree, 'ParseError'):
++    _etreeExceptions = (etree.ParseError, AttributeError, ValueError)
++else:
++    _etreeExceptions = (SyntaxError, AttributeError, ValueError)
++
++
+ def _getGlusterVolCmd():
+     return [_glusterCommandPath.cmd, "--mode=script", "volume"]
+ 
+@@ -85,7 +91,7 @@ def _execGlusterXml(cmd):
+         tree = etree.fromstring('\n'.join(out))
+         rv = int(tree.find('opRet').text)
+         msg = tree.find('opErrstr').text
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=out)
+     if rv == 0:
+         return tree
+@@ -303,7 +309,7 @@ def volumeStatus(volumeName, brick=None, option=None):
+             return _parseVolumeStatusMem(xmltree)
+         else:
+             return _parseVolumeStatus(xmltree)
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+ 
+ 
+@@ -427,7 +433,7 @@ def volumeInfo(volumeName=None):
+         raise ge.GlusterVolumesListFailedException(rc=e.rc, err=e.err)
+     try:
+         return _parseVolumeInfo(xmltree)
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+ 
+ 
+@@ -448,7 +454,7 @@ def volumeCreate(volumeName, brickList, replicaCount=0, stripeCount=0,
+         raise ge.GlusterVolumeCreateFailedException(rc=e.rc, err=e.err)
+     try:
+         return {'uuid': xmltree.find('volCreate/volume/id').text}
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+ 
+ 
+@@ -787,7 +793,7 @@ def peerStatus():
+         return _parsePeerStatus(xmltree,
+                                 _getLocalIpAddress() or _getGlusterHostName(),
+                                 _getGlusterUuid(), HostStatus.CONNECTED)
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+ 
+ 
+@@ -878,5 +884,5 @@ def volumeProfileInfo(volumeName, nfs=False):
+         raise ge.GlusterVolumeProfileInfoFailedException(rc=e.rc, err=e.err)
+     try:
+         return _parseVolumeProfileInfo(xmltree, nfs)
+-    except (etree.ParseError, AttributeError, ValueError):
++    except _etreeExceptions:
+         raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+-- 
+1.8.1.4
+
diff --git a/0034-bootstrap-remove-glusterfs-packages.patch b/0034-bootstrap-remove-glusterfs-packages.patch
new file mode 100644
index 0000000..8351336
--- /dev/null
+++ b/0034-bootstrap-remove-glusterfs-packages.patch
@@ -0,0 +1,52 @@
+From f2277d595e06d86791abffabe52f4e397f26cb0f Mon Sep 17 00:00:00 2001
+From: "Bala.FA" <barumuga at redhat.com>
+Date: Tue, 6 Nov 2012 16:09:49 +0530
+Subject: [PATCH 34/36] bootstrap: remove glusterfs packages
+
+As the glusterfs packages are dependencies of the vdsm-gluster package,
+having them in bootstrap is redundant.
+
+Change-Id: I5bdde338599155df43af3a0fd0d14a02e3bddda8
+Signed-off-by: Bala.FA <barumuga at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/12959
+Reviewed-by: Timothy Asir <tjeyasin at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+---
+ vds_bootstrap/vds_bootstrap.py | 3 +--
+ vdsm.spec.in                   | 5 ++++-
+ 2 files changed, 5 insertions(+), 3 deletions(-)
+
+diff --git a/vds_bootstrap/vds_bootstrap.py b/vds_bootstrap/vds_bootstrap.py
+index 61e0f6f..219e6e4 100755
+--- a/vds_bootstrap/vds_bootstrap.py
++++ b/vds_bootstrap/vds_bootstrap.py
+@@ -159,8 +159,7 @@ if rhel6based:
+                 'seabios', 'qemu-img', 'fence-agents',
+                 'libselinux-python', 'sanlock', 'sanlock-python')
+     # Gluster packages
+-    GLUSTER_PACK = ('vdsm-gluster', 'glusterfs-server', 'glusterfs-rdma',
+-                    'glusterfs-geo-replication')
++    GLUSTER_PACK = (VDSM_NAME + '-gluster', )
+ else:
+     # Devel packages
+     DEVEL_PACK = ('gdb', 'tcpdump', 'strace', 'ltrace', 'sysstat', 'ntp',
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 0aad124..1cb189e 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -402,7 +402,10 @@ Summary:        Gluster Plugin for VDSM
+ BuildArch:      noarch
+ 
+ Requires: %{name} = %{version}-%{release}
+-Requires: glusterfs glusterfs-server glusterfs-fuse
++Requires: glusterfs
++Requires: glusterfs-server
++Requires: glusterfs-fuse
++Requires: glusterfs-rdma
+ 
+ %description gluster
+ Gluster plugin enables VDSM to serve Gluster functionalities.
+-- 
+1.8.1.4
+
diff --git a/0035-gluster-set-glusterfs-dependency-version.patch b/0035-gluster-set-glusterfs-dependency-version.patch
new file mode 100644
index 0000000..97bf1a0
--- /dev/null
+++ b/0035-gluster-set-glusterfs-dependency-version.patch
@@ -0,0 +1,33 @@
+From 8ad23d1cbc02d2a923fcdcf050d8a4faf26948b2 Mon Sep 17 00:00:00 2001
+From: "Bala.FA" <barumuga at redhat.com>
+Date: Tue, 12 Mar 2013 14:28:08 +0530
+Subject: [PATCH 35/36] gluster: set glusterfs dependency version
+
+Now vdsm-gluster depends on glusterfs version 3.4.0 or higher.
+
+Change-Id: Icb42bf4dec26b118f52cc51701faa5e611f63c00
+Signed-off-by: Bala.FA <barumuga at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/12960
+Reviewed-by: Aravinda VK <avishwan at redhat.com>
+Tested-by: Aravinda VK <avishwan at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index 1cb189e..60e76e5 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -402,7 +402,7 @@ Summary:        Gluster Plugin for VDSM
+ BuildArch:      noarch
+ 
+ Requires: %{name} = %{version}-%{release}
+-Requires: glusterfs
++Requires: glusterfs >= 3.4.0
+ Requires: glusterfs-server
+ Requires: glusterfs-fuse
+ Requires: glusterfs-rdma
+-- 
+1.8.1.4
+
diff --git a/0036-Do-not-delete-the-template-when-zeroing-a-dependant-.patch b/0036-Do-not-delete-the-template-when-zeroing-a-dependant-.patch
new file mode 100644
index 0000000..dc34ce0
--- /dev/null
+++ b/0036-Do-not-delete-the-template-when-zeroing-a-dependant-.patch
@@ -0,0 +1,65 @@
+From 37334a9b538fdb7af6e2d49eea5f7bd7bedd82b4 Mon Sep 17 00:00:00 2001
+From: Eduardo Warszawski <ewarszaw at redhat.com>
+Date: Mon, 18 Feb 2013 18:14:31 +0200
+Subject: [PATCH 36/36] Do not delete the template when zeroing a dependant
+ image.
+
+Change-Id: I9e472334efa9dadb5389db70b0953f88b9be858a
+Bug-url: https://bugzilla.redhat.com/show_bug.cgi?id=910013
+Signed-off-by: Eduardo <ewarszaw at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/12178
+Tested-by: Haim Ateya <hateya at redhat.com>
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+Reviewed-by: Yeela Kaplan <ykaplan at redhat.com>
+Reviewed-by: Ayal Baron <abaron at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/13028
+Tested-by: Federico Simoncelli <fsimonce at redhat.com>
+---
+ vdsm/storage/blockSD.py | 12 +++++++++---
+ vdsm/storage/hsm.py     |  2 +-
+ 2 files changed, 10 insertions(+), 4 deletions(-)
+
+diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
+index 862e413..e66256d 100644
+--- a/vdsm/storage/blockSD.py
++++ b/vdsm/storage/blockSD.py
+@@ -945,14 +945,20 @@ class BlockStorageDomain(sd.StorageDomain):
+             self.log.debug("removed image dir: %s", imgPath)
+         return imgPath
+ 
++    def _getImgExclusiveVols(self, imgUUID, volsImgs):
++        """Filter vols belonging to imgUUID only."""
++        exclusives = dict((vName, v) for vName, v in volsImgs.iteritems()
++                          if v.imgs[0] == imgUUID)
++        return exclusives
++
+     def deleteImage(self, sdUUID, imgUUID, volsImgs):
+-        toDel = tuple(vName for vName, v in volsImgs.iteritems()
+-                                                    if v.imgs[0] == imgUUID)
++        toDel = self._getImgExclusiveVols(imgUUID, volsImgs)
+         deleteVolumes(sdUUID, toDel)
+         self.rmDCImgDir(imgUUID, volsImgs)
+ 
+     def zeroImage(self, sdUUID, imgUUID, volsImgs):
+-        zeroImgVolumes(sdUUID, imgUUID, volsImgs)
++        toZero = self._getImgExclusiveVols(imgUUID, volsImgs)
++        zeroImgVolumes(sdUUID, imgUUID, toZero)
+         self.rmDCImgDir(imgUUID, volsImgs)
+ 
+     def getAllVolumes(self):
+diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
+index 8bbe3b8..32a32c9 100644
+--- a/vdsm/storage/hsm.py
++++ b/vdsm/storage/hsm.py
+@@ -1455,7 +1455,7 @@ class HSM:
+             # postZero implies block domain. Backup domains are always NFS
+             # hence no need to create fake template if postZero is true.
+             self._spmSchedule(spUUID, "zeroImage_%s" % imgUUID, dom.zeroImage,
+-                              sdUUID, imgUUID, volsByImg.keys())
++                              sdUUID, imgUUID, volsByImg)
+         else:
+             dom.deleteImage(sdUUID, imgUUID, volsByImg)
+             # This is a hack to keep the interface consistent
+-- 
+1.8.1.4
+
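The fix above hinges on filtering the volume set down to the volumes owned exclusively by the image being removed, so a template volume shared with other images is never zeroed or deleted. A toy, self-contained illustration (Vol is a stand-in for the real volume metadata object):

    from collections import namedtuple

    Vol = namedtuple('Vol', 'imgs')  # imgs[0] is the owning image

    def img_exclusive_vols(img_uuid, vols_imgs):
        # Keep only volumes whose owning image is img_uuid.
        return dict((name, vol) for name, vol in vols_imgs.items()
                    if vol.imgs[0] == img_uuid)

    vols = {'tmpl': Vol(imgs=['template-img', 'child-img']),
            'data': Vol(imgs=['child-img'])}
    assert list(img_exclusive_vols('child-img', vols)) == ['data']
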
diff --git a/0037-vdsm.spec-fence-agents-all.patch b/0037-vdsm.spec-fence-agents-all.patch
new file mode 100644
index 0000000..398c464
--- /dev/null
+++ b/0037-vdsm.spec-fence-agents-all.patch
@@ -0,0 +1,59 @@
+From ed7c81971e539e0940d3ffdfe3f913ca99cde14d Mon Sep 17 00:00:00 2001
+From: Douglas Schilling Landgraf <dougsland at redhat.com>
+Date: Fri, 22 Mar 2013 21:46:34 -0400
+Subject: [PATCH] vdsm.spec: fence-agents-all
+
+The fence-agents package was renamed to fence-agents-all in version 4.
+This patch adapts the vdsm spec to this change.
+
+Change-Id: Ie4e92fda50aadb8223e721d72d43ea03b9b24f2d
+Signed-off-by: Douglas Schilling Landgraf <dougsland at redhat.com>
+Reviewed-on: http://gerrit.ovirt.org/13249
+Reviewed-by: Dan Kenigsberg <danken at redhat.com>
+---
+ vdsm.spec.in | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/vdsm.spec.in b/vdsm.spec.in
+index abfdbf9..3f7397a 100644
+--- a/vdsm.spec.in
++++ b/vdsm.spec.in
+@@ -101,6 +101,7 @@ Requires: dmidecode
+ ExclusiveArch:  x86_64
+ Requires: device-mapper-multipath
+ Requires: e2fsprogs
++Requires: fence-agents-all
+ Requires: iscsi-initiator-utils
+ Requires: libvirt
+ Requires: lvm2
+@@ -118,6 +119,7 @@ Requires: libvirt >= 0.10.2-18.el6_4.2
+ Requires: iscsi-initiator-utils >= 6.2.0.872-15
+ Requires: device-mapper-multipath >= 0.4.9-52
+ Requires: e2fsprogs >= 1.41.12-11
++Requires: fence-agents
+ Requires: kernel >= 2.6.32-279.9.1
+ Requires: sanlock >= 2.3-4, sanlock-python
+ Requires: initscripts >= 9.03.31-2.el6_3.1
+@@ -127,6 +129,11 @@ Requires: lvm2 >= 2.02.95-10.el6_3.2
+ Requires: logrotate < 3.8.0
+ %endif
+ %else
++%if 0%{?fedora} >= 19
++Requires: fence-agents-all
++%else
++Requires: fence-agents
++%endif
+ # Subprocess and thread bug was found on python 2.7.2
+ Requires: python >= 2.7.3
+ Requires: qemu-kvm >= 2:0.15.0-4
+@@ -156,7 +163,6 @@ Requires: systemd >= 197-1.fc18.2
+ 
+ Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
+ Requires: psmisc >= 22.6-15
+-Requires: fence-agents
+ Requires: bridge-utils
+ Requires: sos
+ Requires: tree
+-- 
+1.8.1.4
+
diff --git a/sources b/sources
index 0027272..d373508 100644
--- a/sources
+++ b/sources
@@ -1 +1 @@
-55dae854f0d9d71ee791c0d7373ca7e9  vdsm-4.10.3-b005b54.tar.gz
+6e181a25b2b4fc9e5f85faafc9d73aa1  vdsm-4.10.3.tar.gz
diff --git a/vdsm.spec b/vdsm.spec
index 2d72259..2ff3e87 100644
--- a/vdsm.spec
+++ b/vdsm.spec
@@ -4,9 +4,9 @@
 %global vdsm_reg vdsm-reg
 
 # Upstream git release
-%global vdsm_release b005b54
-%global vdsm_relvtag .git%{vdsm_release}
-%global vdsm_relttag -%{vdsm_release}
+# % global vdsm_release gf2f6683
+# % global vdsm_relvtag .git%{vdsm_release}
+# % global vdsm_relttag -%{vdsm_release}
 
 # Required users and groups
 %global vdsm_user vdsm
@@ -32,7 +32,7 @@
 
 Name:           %{vdsm_name}
 Version:        4.10.3
-Release:        5%{?vdsm_relvtag}%{?dist}%{?extra_release}
+Release:        11%{?vdsm_relvtag}%{?dist}%{?extra_release}
 Summary:        Virtual Desktop Server Manager
 
 Group:          Applications/System
@@ -47,6 +47,48 @@ Url:            http://www.ovirt.org/wiki/Vdsm
 #  make VERSION={version}-{vdsm_release} dist
 Source0:        %{vdsm_name}-%{version}%{?vdsm_relttag}.tar.gz
 
+# ovirt-3.2 patches
+Patch0:         0001-schema-Fix-schema-for-VM.updateDevice.patch
+Patch1:         0002-schema-Missing-comment-for-new-VmDeviceType.patch
+Patch2:         0003-api-Report-CPU-thread-info-in-getVdsCapabilities.patch
+Patch3:         0004-caps.py-osversion-validate-OVIRT.patch
+Patch4:         0005-restarting-libvirtd-didn-t-work-over-allinone-setup.patch
+Patch5:         0006-Integrate-Smartcard-support.patch
+Patch6:         0007-vdsm.spec-python-ordereddict-only-for-rhel-7.patch
+Patch7:         0008-vdsm.spec-Don-t-require-python-ordereddict-on-fedora.patch
+Patch8:         0009-vdsm.spec-BuildRequires-python-pthreading.patch
+Patch9:         0010-Searching-for-both-py-and-pyc-file-to-start-super-vd.patch
+Patch10:        0011-adding-getHardwareInfo-API-to-vdsm.patch
+Patch11:        0012-Explicitly-shutdown-m2crypto-socket.patch
+Patch12:        0013-spec-require-policycoreutils-and-skip-sebool-errors.patch
+Patch13:        0014-spec-requires-selinux-policy-to-avoid-selinux-failur.patch
+Patch14:        0015-vdsmd.service-require-either-ntpd-or-chronyd.patch
+Patch15:        0016-isRunning-didn-t-check-local-variable-before-reading.patch
+Patch16:        0017-udev-Race-fix-load-and-trigger-dev-rule.patch
+Patch17:        0018-Change-scsi_id-command-path-to-be-configured-at-runt.patch
+Patch18:        0019-upgrade-force-upgrade-to-v2-before-upgrading-to-v3.patch
+Patch19:        0020-misc-rename-safelease-to-clusterlock.patch
+Patch20:        0021-domain-select-the-cluster-lock-using-makeClusterLock.patch
+Patch21:        0022-clusterlock-add-the-local-locking-implementation.patch
+Patch22:        0023-upgrade-catch-MetaDataKeyNotFoundError-when-preparin.patch
+Patch23:        0024-vdsm.spec-Require-openssl.patch
+Patch24:        0025-Fedora-18-require-a-newer-udev.patch
+Patch25:        0026-fix-sloppy-backport-of-safelease-rename.patch
+Patch26:        0027-removing-the-use-of-zombie-reaper-from-supervdsm.patch
+Patch27:        0028-configNet-allow-delete-update-of-devices-with-no-ifc.patch
+Patch28:        0029-Requires-policycoreutils-2.1.13-55-to-avoid-another-.patch
+Patch29:        0030-After-fail-to-connect-to-supervdsm-more-than-3-time-.patch
+# This patch is not present in the upstream ovirt-3.2 branch; it was added
+# to provide a missing file in the base vdsm tar.gz
+Patch30:        0031-packaging-add-load_needed_modules.py.in.patch
+Patch31:        0032-tool-_enable_bond_dev-reopen-bonding_masters-per-bon.patch
+Patch32:        0033-gluster-Handling-Attribute-error-in-Python-2.6.patch
+Patch33:        0034-bootstrap-remove-glusterfs-packages.patch
+Patch34:        0035-gluster-set-glusterfs-dependency-version.patch
+Patch35:        0036-Do-not-delete-the-template-when-zeroing-a-dependant-.patch
+Patch36:        0037-vdsm.spec-fence-agents-all.patch
+
+
 BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
 BuildRequires: python
@@ -88,7 +130,7 @@ Requires: logrotate
 Requires: xz
 Requires: ntp
 Requires: iproute
-Requires: python-ethtool >= 0.6-3
+Requires: python-ethtool
 Requires: rpm-python
 Requires: nfs-utils
 Requires: python-pthreading
@@ -118,7 +160,7 @@ Requires: python
 # Update the qemu-kvm requires when block_stream will be included
 Requires: qemu-kvm >= 2:0.12.1.2-2.295.el6_3.4
 Requires: qemu-img >= 2:0.12.1.2-2.295.el6_3.4
-Requires: libvirt >= 0.9.10-21.el6_3.6
+Requires: libvirt >= 0.9.10-21.el6_3.5
 Requires: iscsi-initiator-utils >= 6.2.0.872-15
 Requires: device-mapper-multipath >= 0.4.9-52
 Requires: e2fsprogs >= 1.41.12-11
@@ -126,7 +168,7 @@ Requires: kernel >= 2.6.32-279.9.1
 Requires: sanlock >= 2.3-4, sanlock-python
 Requires: initscripts >= 9.03.31-2.el6_3.1
 Requires: mom >= 0.3.0
-Requires: selinux-policy-targeted >= 3.7.19-155
+Requires: selinux-policy-targeted >= 3.7.19-80
 Requires: lvm2 >= 2.02.95-10.el6_3.2
 Requires: logrotate < 3.8.0
 %endif
@@ -147,10 +189,12 @@ Requires: selinux-policy-targeted >= 3.10.0-149
 Requires: lvm2 >= 2.02.95
 %endif
 
-# In order to avoid a policycoreutils bug (rhbz 883355) when selinux is
-# disabled we now require the version 2.1.13-44 (or newer) of Fedora.
 %if 0%{?fedora} >= 18
-Requires: policycoreutils >= 2.1.13-44
+Requires: selinux-policy-targeted >= 3.11.1-71
+# In order to avoid a policycoreutils bug (rhbz 889698) when selinux is
+# disabled we now require the version 2.1.13-55 (or newer) of Fedora.
+Requires: policycoreutils >= 2.1.13-55
+Requires: systemd >= 197-1.fc18.2
 %endif
 
 Requires: libvirt-python, libvirt-lock-sanlock, libvirt-client
@@ -223,6 +267,7 @@ BuildArch:      noarch
 
 Requires: %{name} = %{version}-%{release}
 Requires: m2crypto
+Requires: openssl
 
 %description reg
 VDSM registration package. Used to register a Linux host to a Virtualization
@@ -331,11 +376,11 @@ If the nested virtualization is enabled in your kvm module
 this hook will expose it to the guests.
 
 %package hook-numa
-Summary:        NUMA support for VDSM
+Summary:        numa sopport for VDSM
 BuildArch:      noarch
 
 %description hook-numa
-Hooks is getting number/rage of NUMA nodes and NUMA mode,
+Hooks is getting number/rage of numa nodes and numa mode,
 and update the VM xml.
 
 %package hook-pincpu
@@ -416,7 +461,10 @@ Summary:        Gluster Plugin for VDSM
 BuildArch:      noarch
 
 Requires: %{name} = %{version}-%{release}
-Requires: glusterfs glusterfs-server glusterfs-fuse
+Requires: glusterfs >= 3.4.0
+Requires: glusterfs-server
+Requires: glusterfs-fuse
+Requires: glusterfs-rdma
 
 %description gluster
 Gluster plugin enables VDSM to serve Gluster functionalities.
@@ -424,6 +472,45 @@ Gluster plugin enables VDSM to serve Gluster functionalities.
 %prep
 %setup -q
 
+# ovirt-3.2 patches
+%patch0 -p1 -b .patch0
+%patch1 -p1 -b .patch1
+%patch2 -p1 -b .patch2
+%patch3 -p1 -b .patch3
+%patch4 -p1 -b .patch4
+%patch5 -p1 -b .patch5
+%patch6 -p1 -b .patch6
+%patch7 -p1 -b .patch7
+%patch8 -p1 -b .patch8
+%patch9 -p1 -b .patch9
+%patch10 -p1 -b .patch10
+%patch11 -p1 -b .patch11
+%patch12 -p1 -b .patch12
+%patch13 -p1 -b .patch13
+%patch14 -p1 -b .patch14
+%patch15 -p1 -b .patch15
+%patch16 -p1 -b .patch16
+%patch17 -p1 -b .patch17
+%patch18 -p1 -b .patch18
+%patch19 -p1 -b .patch19
+%patch20 -p1 -b .patch20
+%patch21 -p1 -b .patch21
+%patch22 -p1 -b .patch22
+%patch23 -p1 -b .patch23
+%patch24 -p1 -b .patch24
+%patch25 -p1 -b .patch25
+%patch26 -p1 -b .patch26
+%patch27 -p1 -b .patch27
+%patch28 -p1 -b .patch28
+%patch29 -p1 -b .patch29
+%patch30 -p1 -b .patch30
+%patch31 -p1 -b .patch31
+%patch32 -p1 -b .patch32
+%patch33 -p1 -b .patch33
+%patch34 -p1 -b .patch34
+%patch35 -p1 -b .patch35
+%patch36 -p1 -b .patch36
+
 %if 0%{?rhel} == 6
 sed -i '/ su /d' vdsm/vdsm-logrotate.conf.in
 %endif
@@ -622,7 +709,7 @@ exit 0
 
 %files
 %defattr(-, root, root, -)
-%doc COPYING README vdsm/vdsm.conf.sample vdsm_api/vdsm-api.html
+%doc COPYING README vdsm/vdsm.conf.sample
 %if 0%{?rhel}
 %{_initrddir}/vdsmd
 %else
@@ -704,7 +791,7 @@ exit 0
 %{_datadir}/%{vdsm_name}/storage/resourceFactories.py*
 %{_datadir}/%{vdsm_name}/storage/remoteFileHandler.py*
 %{_datadir}/%{vdsm_name}/storage/resourceManager.py*
-%{_datadir}/%{vdsm_name}/storage/safelease.py*
+%{_datadir}/%{vdsm_name}/storage/clusterlock.py*
 %{_datadir}/%{vdsm_name}/storage/sdc.py*
 %{_datadir}/%{vdsm_name}/storage/sd.py*
 %{_datadir}/%{vdsm_name}/storage/securable.py*
@@ -761,11 +848,6 @@ exit 0
 %{_datadir}/%{vdsm_name}/neterrors.py*
 %{_datadir}/%{vdsm_name}/respawn
 %{_datadir}/%{vdsm_name}/set-conf-item
-%dir %{_datadir}/%{vdsm_name}/gluster
-%{_datadir}/%{vdsm_name}/gluster/__init__.py*
-%{_datadir}/%{vdsm_name}/gluster/cli.py*
-%{_datadir}/%{vdsm_name}/gluster/exception.py*
-%{_datadir}/%{vdsm_name}/gluster/hostname.py*
 %{python_sitelib}/sos/plugins/vdsm.py*
 /lib/udev/rules.d/12-vdsm-lvm.rules
 /etc/security/limits.d/99-vdsm.conf
@@ -832,7 +914,6 @@ exit 0
 %{_datadir}/%{vdsm_name}/tests/netmaskconversions
 %{_datadir}/%{vdsm_name}/tests/run_tests.sh
 %{_datadir}/%{vdsm_name}/tests/route_info.out
-%{_datadir}/%{vdsm_name}/tests/route_info_ppc64.out
 %{_datadir}/%{vdsm_name}/tests/tc_filter_show.out
 %{_datadir}/%{vdsm_name}/tests/glusterVolumeProfileInfo.xml
 %{_datadir}/%{vdsm_name}/tests/glusterVolumeProfileInfoNfs.xml
@@ -1012,14 +1093,60 @@ exit 0
 %defattr(-, root, root, -)
 %dir %{_datadir}/%{vdsm_name}/gluster
 %doc COPYING
+%{_datadir}/%{vdsm_name}/gluster/__init__.py*
 %{_datadir}/%{vdsm_name}/gluster/api.py*
+%{_datadir}/%{vdsm_name}/gluster/cli.py*
+%{_datadir}/%{vdsm_name}/gluster/exception.py*
+%{_datadir}/%{vdsm_name}/gluster/hostname.py*
 
 %changelog
-* Fri Feb 15 2013 Fedora Release Engineering <rel-eng at lists.fedoraproject.org> - 4.10.3-5.gitb005b54
-- Rebuilt for https://fedoraproject.org/wiki/Fedora_19_Mass_Rebuild
-
-* Sun Jan 13 2013 Douglas Schilling Landgraf <dougsland at redhat.com> v4.10.3-78-gb005b54
-* v4.10.3-78-gb005b54
+* Mon Mar 25 2013 Douglas Schilling Landgraf <dougsland at redhat.com> 4.10.3-11
+- adapt vdsm.spec to new fence-agents package name.
+
+* Thu Mar 14 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-10
+- gluster: Handling Attribute error in Python 2.6
+- bootstrap: remove glusterfs packages
+- gluster: set glusterfs dependency version
+- Do not delete the template when zeroing a dependant
+
+* Wed Feb 27 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-9
+- packaging: add load_needed_modules.py.in
+- tool: _enable_bond_dev: reopen bonding_masters per bond
+
+* Tue Feb 19 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-8
+- configNet: allow delete/update of devices with no ifcfg (#906383)
+- Requires policycoreutils-2.1.13-55 to avoid another
+- After fail to connect to supervdsm more than 3 time
+
+* Thu Feb 14 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-7
+- Fedora 18: require a newer udev (applied properly to the fedora specfile)
+
+* Wed Jan 30 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-6
+- Explicitly shutdown m2crypto socket
+- spec: require policycoreutils and skip sebool errors
+- spec: requires selinux-policy to avoid selinux failure
+- vdsmd.service: require either ntpd or chronyd
+- isRunning didn't check local variable before reading
+- udev: Race fix- load and trigger dev rule (#891300)
+- Change scsi_id command path to be configured at runtime (#886087)
+- upgrade: force upgrade to v2 before upgrading to v3 (#893184)
+- misc: rename safelease to clusterlock
+- domain: select the cluster lock using makeClusterLock
+- clusterlock: add the local locking implementation (#877715)
+- upgrade: catch MetaDataKeyNotFoundError when preparing
+- vdsm.spec: Require openssl (#905728)
+- Fedora 18: require a newer udev
+- fix sloppy backport of safelease rename
+- removing the use of zombie reaper from supervdsm
+
+* Fri Jan 18 2013 Douglas Schilling Landgraf <dougsland at redhat.com> 4.10.3-5
+- Searching for both py and pyc file to start super vdsm
+- adding getHardwareInfo API to vdsm
+
+* Tue Jan 15 2013 Douglas Schilling Landgraf <dougsland at redhat.com> 4.10.3-4
+- python-ordereddict only for rhel older than 7
+- don't require python-ordereddict on fedora
+- BuildRequires python-pthreading
 
 * Wed Jan 02 2013 Federico Simoncelli <fsimonce at redhat.com> 4.10.3-3
 - caps.py: osversion() validate OVIRT


More information about the scm-commits mailing list