Change in vdsm[ovirt-3.5]: virt: Add _getVolumeInfo helper
by alitke@redhat.com
Hello Dan Kenigsberg, Francesco Romani,
I'd like you to do a code review. Please visit
https://gerrit.ovirt.org/45947
to review the following change.
Change subject: virt: Add _getVolumeInfo helper
......................................................................
virt: Add _getVolumeInfo helper
The next patch needs to call into storage to get detailed volume size
information. Add a helper that encapsulates this operation and makes
error checking and exception raising consistent across all call sites.
Change-Id: I11eefb292e5d08458cf3a16ef9c444fb9c08702b
Signed-off-by: Adam Litke <alitke(a)redhat.com>
Reviewed-on: https://gerrit.ovirt.org/43560
Reviewed-by: Nir Soffer <nsoffer(a)redhat.com>
Continuous-Integration: Jenkins CI
Reviewed-by: Francesco Romani <fromani(a)redhat.com>
Reviewed-by: Dan Kenigsberg <danken(a)redhat.com>
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1251958
---
M vdsm/virt/vm.py
1 file changed, 8 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/47/45947/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index f813af7..e656a63 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -6069,6 +6069,14 @@
if dev['type'] == BALLOON_DEVICES:
yield dev
+ def _getVolumeInfo(self, domainID, poolID, imageID, volumeID):
+ res = self.cif.irs.getVolumeInfo(domainID, poolID, imageID, volumeID)
+ if res['status']['code'] != 0:
+ raise StorageUnavailableError(
+ "Unable to get volume info for domain %s volume %s" %
+ (domainID, volumeID))
+ return res['info']
+
class LiveMergeCleanupThread(threading.Thread):
def __init__(self, vm, jobId, drive, doPivot):
--
To view, visit https://gerrit.ovirt.org/45947
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I11eefb292e5d08458cf3a16ef9c444fb9c08702b
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Adam Litke <alitke(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
8 years, 8 months
Change in vdsm[master]: configurator doesn't load pyc files under configurators folder
by ybronhei@redhat.com
Yaniv Bronhaim has uploaded a new change for review.
Change subject: configurator doesn't load pyc files under configurators folder
......................................................................
configurator doesn't load pyc files under configurators folder
The configurator dynamically loads modules from the configurators
folder, but it searches for .py files only. An ovirt-node installation
ships only .pyc files, so all configurator modules were missed there.
Change-Id: Ia529de0069e2f4ec168a4b9df82ba62c56d66730
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1259247
Signed-off-by: Yaniv Bronhaim <ybronhei(a)redhat.com>
---
M lib/vdsm/tool/configurator.py
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/46/45846/1
diff --git a/lib/vdsm/tool/configurator.py b/lib/vdsm/tool/configurator.py
index 5ed905d..c9984aa 100644
--- a/lib/vdsm/tool/configurator.py
+++ b/lib/vdsm/tool/configurator.py
@@ -58,7 +58,7 @@
return [
getmname(module)
- for module in iglob("%s*.py" % path)
+ for module in iglob("%s*.py*" % path)
if filter_(getmname(module))
]
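The discovery logic amounts to globbing for module files and stripping extensions; a simplified sketch follows (the helper names and the dunder filter are illustrative assumptions, not vdsm's exact code). Note that `"%s*.py*"` matches `.py`, `.pyc` and `.pyo`, so names must be deduplicated:

```python
import os
from glob import iglob


def getmname(module_path):
    # ".../sanlock.pyc" -> "sanlock"
    return os.path.splitext(os.path.basename(module_path))[0]


def list_configurators(path):
    # path is a directory prefix ending with a separator; matching
    # "*.py*" picks up the .pyc-only layout used on ovirt-node, and
    # the set() collapses foo.py/foo.pyc duplicates.
    return sorted({getmname(m) for m in iglob("%s*.py*" % path)
                   if not getmname(m).startswith('__')})
```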
--
To view, visit https://gerrit.ovirt.org/45846
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia529de0069e2f4ec168a4b9df82ba62c56d66730
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yaniv Bronhaim <ybronhei(a)redhat.com>
Change in vdsm[master]: migration: Add incoming migration semaphore
by automation@ovirt.org
automation(a)ovirt.org has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 3:
* Update tracker::IGNORE, no Bug-Url found
* Check Bug-Url::WARN, no bug url found, make sure header matches 'Bug-Url: ' and is a valid url.
* Check merged to previous::IGNORE, Not in stable branch (['ovirt-3.5', 'ovirt-3.4', 'ovirt-3.3'])
--
To view, visit https://gerrit.ovirt.org/45954
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 3
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: No
Change in vdsm[master]: core: moving InquireNotSupportedError to storage_exception.py
by laravot@redhat.com
Liron Aravot has uploaded a new change for review.
Change subject: core: moving InquireNotSupportedError to storage_exception.py
......................................................................
core: moving InquireNotSupportedError to storage_exception.py
InquireNotSupportedError is currently defined in clusterlock.py, which
prevents assigning it a meaningful error code and makes it awkward to
use outside that module without departing from our widely used
convention. This patch moves it to storage_exception.py so we can catch
and inspect it like any other clusterlock-related error.
Change-Id: I8201794dc96ee24dc9c0da5b7c3d71ab0b75e9f3
Bug-Url: https://bugzilla.redhat.com/1242092
Signed-off-by: Liron Aravot <laravot(a)redhat.com>
---
M vdsm/storage/clusterlock.py
M vdsm/storage/hsm.py
M vdsm/storage/storage_exception.py
3 files changed, 7 insertions(+), 5 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/63/45763/1
diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
index 541df2a..8f5b355 100644
--- a/vdsm/storage/clusterlock.py
+++ b/vdsm/storage/clusterlock.py
@@ -69,10 +69,6 @@
HOST_STATUS_DEAD = "dead"
-class InquireNotSupportedError(Exception):
- """Raised when the clusterlock class is not supporting inquire"""
-
-
class SafeLease(object):
log = logging.getLogger("Storage.SafeLease")
@@ -150,7 +146,7 @@
self.log.debug("Clustered lock acquired successfully")
def inquire(self):
- raise InquireNotSupportedError()
+ raise se.InquireNotSupportedError()
def getLockUtilFullPath(self):
return os.path.join(self.lockUtilPath, self.lockCmd)
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 82f4426..5583f76 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -599,6 +599,9 @@
# This happens when we cannot read the MD LV
self.log.error("Can't read LV based metadata", exc_info=True)
raise se.StorageDomainMasterError("Can't read LV based metadata")
+ except se.InquireNotSupportedError:
+ self.log.error("Inquire spm status isn't supported by the used cluster lock", exc_info=True)
+ raise
except se.StorageException as e:
self.log.error("MD read error: %s", str(e), exc_info=True)
raise se.StorageDomainMasterError("MD read error")
diff --git a/vdsm/storage/storage_exception.py b/vdsm/storage/storage_exception.py
index 1e9c3f8..be6f26b 100644
--- a/vdsm/storage/storage_exception.py
+++ b/vdsm/storage/storage_exception.py
@@ -1617,6 +1617,9 @@
code = 701
message = "Could not initialize cluster lock"
+class InquireNotSupportedError(StorageException):
+ code = 702
+ message = "Cluster lock inquire isnt supported"
#################################################
# Meta data related Exceptions
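For context, vdsm storage exceptions follow a code/message class pattern; the base class below is an abbreviated sketch, not the full storage_exception.py implementation, showing what the move buys callers:

```python
class StorageException(Exception):
    # Each concrete storage error carries a stable numeric code that
    # callers (and the engine) can match on, plus a human message.
    code = 100
    message = "General Storage Exception"

    def __init__(self, *value):
        self.value = value

    def __str__(self):
        return "%s: %s" % (self.message, repr(self.value))


class InquireNotSupportedError(StorageException):
    code = 702
    message = "Cluster lock inquire isn't supported"
```

With the class in this hierarchy, hsm.py can catch it alongside any other `StorageException` and still report a distinct error code.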
--
To view, visit https://gerrit.ovirt.org/45763
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8201794dc96ee24dc9c0da5b7c3d71ab0b75e9f3
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Liron Aravot <laravot(a)redhat.com>
Change in vdsm[master]: migration: Add incoming migration semaphore
by mbetak@redhat.com
Martin Betak has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 2:
(3 comments)
https://gerrit.ovirt.org/#/c/45954/2/lib/vdsm/utils.py
File lib/vdsm/utils.py:
Line 1271: else:
Line 1272: yield
Line 1273:
Line 1274:
Line 1275: @contextmanager
> See comment in api.py
Done
Line 1276: def releaseOnError(resource):
Line 1277: try:
Line 1278: yield
Line 1279: except Exception:
https://gerrit.ovirt.org/#/c/45954/2/vdsm/API.py
File vdsm/API.py:
Line 574:
Line 575: if not migration.incomingMigrations.acquire(False):
Line 576: return response.error('migrationLimit')
Line 577:
Line 578: with utils.releaseOnError(migration.incomingMigrations):
> This feels unpythonic. You should create a contextmanager function that is
Done
Line 579: params['vmId'] = self._UUID
Line 580: result = self.create(params)
Line 581: if result['status']['code']:
Line 582: self.log.debug('Migration create - Failed')
https://gerrit.ovirt.org/#/c/45954/2/vdsm/virt/migration.py
File vdsm/virt/migration.py:
Line 54: VIR_MIGRATE_PARAM_GRAPHICS_URI = 'graphics_uri'
Line 55:
Line 56:
Line 57: mig = min(config.getint('vars', 'max_incoming_migrations'),
Line 58: caps.CpuTopology().cores())
> should be exposed as a configuration imho
I wanted to remain consistent with the handling of outgoing migrations, which uses the same logic. Also added it to config.py.in.
Line 59:
Line 60: incomingMigrations = threading.BoundedSemaphore(mig)
Line 61:
Line 62:
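The scheme under discussion can be sketched as follows; the limit and function names here are illustrative, not vdsm's exact API. The key points are the non-blocking `acquire(False)` (refuse rather than queue) and a context manager that releases the slot only on error, since a successful migration holds it until the VM arrives:

```python
import threading
from contextlib import contextmanager


@contextmanager
def releaseOnError(resource):
    # Release the slot only if the guarded block raises; on success
    # the slot stays held until the migration actually finishes.
    try:
        yield
    except Exception:
        resource.release()
        raise


MAX_INCOMING = 2  # assumed; vdsm derives this from config and CPU cores
incomingMigrations = threading.BoundedSemaphore(MAX_INCOMING)


def migrationCreate(create):
    # Non-blocking acquire: refuse the migration instead of queueing,
    # so the source host can retry or pick another destination.
    if not incomingMigrations.acquire(False):
        return {'status': 'migrationLimit'}
    with releaseOnError(incomingMigrations):
        return create()
```

`BoundedSemaphore` (rather than `Semaphore`) also guards against a stray double release.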
--
To view, visit https://gerrit.ovirt.org/45954
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: Yes
Change in vdsm[master]: migration: Add incoming migration semaphore
by Martin Polednik
Martin Polednik has posted comments on this change.
Change subject: migration: Add incoming migration semaphore
......................................................................
Patch Set 2:
(3 comments)
https://gerrit.ovirt.org/#/c/45954/2/vdsm/virt/migration.py
File vdsm/virt/migration.py:
Line 54: VIR_MIGRATE_PARAM_GRAPHICS_URI = 'graphics_uri'
Line 55:
Line 56:
Line 57: mig = min(config.getint('vars', 'max_incoming_migrations'),
Line 58: caps.CpuTopology().cores())
> yeah, maybe it does not need a fallback at all. But not sure how would it t
should be exposed as a configuration imho
Line 59:
Line 60: incomingMigrations = threading.BoundedSemaphore(mig)
Line 61:
Line 62:
Line 330: dev._deviceXML, self._vm.conf, dev.custom)
Line 331: hooks.before_vm_migrate_source(self._vm._dom.XMLDesc(0),
Line 332: self._vm.conf)
Line 333:
Line 334: while True:
> For the same reason why the VDSM has an outgoing semaphore. Engine decides
we need something more robust than just retrying - there should be some kind of migration status tracking, and acting on that. What if multiple long-running migrations block small ones, and by the time they finish, different hosts could be more suitable for the migration?
Line 335: # Do not measure the time spent for creating the VM on the
Line 336: # destination. In some cases some expensive operations can
Line 337: # cause the migration to get cancelled right after the
Line 338: # transfer started.
Line 347: SourceThread._ongoingMigrations.release()
Line 348: # the destination is busy with incoming migrations
Line 349: # release semaphore and give other outgoing migrations
Line 350: # a chance
Line 351: time.sleep(5)
> It is not "wait until migration finishes". It is "wait to not to eat up all
see above
Line 352: SourceThread._ongoingMigrations.acquire()
Line 353: else:
Line 354: break
Line 355:
--
To view, visit https://gerrit.ovirt.org/45954
Gerrit-MessageType: comment
Gerrit-Change-Id: I8952f732033ed160292b11fbc0c4deac099b2b3e
Gerrit-PatchSet: 2
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Betak <mbetak(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Gerrit-Reviewer: Michal Skrivanek <mskrivan(a)redhat.com>
Gerrit-Reviewer: Tomas Jelinek <tjelinek(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Gerrit-HasComments: Yes
Change in vdsm[master]: ifcfg: make removeNic cope with a missing ifcfg file
by osvoboda@redhat.com
Ondřej Svoboda has uploaded a new change for review.
Change subject: ifcfg: make removeNic cope with a missing ifcfg file
......................................................................
ifcfg: make removeNic cope with a missing ifcfg file
The file is read only so that the HWADDR property can be written back.
This information may already be available in ConfigWriter._backups,
but let's just patch around the obvious problem first.
Change-Id: Ica18b30d508224903e7dcb898d74ae8ec35d4a23
Signed-off-by: Ondřej Svoboda <osvoboda(a)redhat.com>
---
M vdsm/network/configurators/ifcfg.py
1 file changed, 16 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/93/45893/1
diff --git a/vdsm/network/configurators/ifcfg.py b/vdsm/network/configurators/ifcfg.py
index 161a3b2..6315b15 100644
--- a/vdsm/network/configurators/ifcfg.py
+++ b/vdsm/network/configurators/ifcfg.py
@@ -647,11 +647,24 @@
def removeNic(self, nic):
cf = netinfo.NET_CONF_PREF + nic
self._backup(cf)
- with open(cf) as nicFile:
- hwlines = [line for line in nicFile if line.startswith('HWADDR=')]
+
+ try:
+ with open(cf) as nicFile:
+ # TODO: how about getting the MAC from netinfo.gethwaddr or
+ # self._backups instead?
+ hwlines = [line for line in nicFile if line.startswith(
+ 'HWADDR=')]
+ except IOError as e:
+ if e.errno == os.errno.ENOENT:
+ # TODO: does logging not work during network restoration?
+ logging.warning("%s didn't exist, HWADDR is unknown", cf)
+ else:
+ logging.exception("%s couldn't be read, HWADDR is unknown", cf)
+ hwlines = []
+
l = [self.CONFFILE_HEADER + '\n', 'DEVICE=%s\n' % nic, 'ONBOOT=yes\n',
'MTU=%s\n' % netinfo.DEFAULT_MTU] + hwlines
- l += 'NM_CONTROLLED=no\n'
+ l += 'NM_CONTROLLED=no\n' # TODO: why care?
with open(cf, 'w') as nicFile:
nicFile.writelines(l)
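The error-handling idiom of the patch, extracted into a standalone sketch (the function name is assumed): a missing ifcfg file is tolerated with a warning, any other read failure is logged with a traceback, and both fall back to writing the config without HWADDR:

```python
import errno
import logging


def read_hwaddr_lines(cf):
    # Keep only the HWADDR= line(s); a missing ifcfg file is not fatal.
    try:
        with open(cf) as nicFile:
            return [line for line in nicFile
                    if line.startswith('HWADDR=')]
    except IOError as e:
        if e.errno == errno.ENOENT:
            logging.warning("%s didn't exist, HWADDR is unknown", cf)
        else:
            logging.exception("%s couldn't be read, HWADDR is unknown", cf)
        return []
```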
--
To view, visit https://gerrit.ovirt.org/45893
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ica18b30d508224903e7dcb898d74ae8ec35d4a23
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Ondřej Svoboda <osvoboda(a)redhat.com>
Change in vdsm[ovirt-3.6]: caps: more precise emulated machines selection
by fromani@redhat.com
Hello Dan Kenigsberg, Martin Polednik,
I'd like you to do a code review. Please visit
https://gerrit.ovirt.org/45850
to review the following change.
Change subject: caps: more precise emulated machines selection
......................................................................
caps: more precise emulated machines selection
Current VDSM code fetches emulated machines from the first section that
matches the required architecture.
This works in the simplest (and only recommended) case, but not if,
for any reason, there is more than one valid emulator for a given
architecture.
The only known case is a user who managed to install qemu-kvm alongside
plain qemu.
This patch implements more robust capabilities fetching to deal with
this corner case.
Change-Id: I4deebbc90bf1cec53fc40bc6a35c6ada933296c3
Bug-Url: https://bugzilla.redhat.com/1239258
Signed-off-by: Francesco Romani <fromani(a)redhat.com>
Reviewed-on: https://gerrit.ovirt.org/45257
Continuous-Integration: Jenkins CI
Reviewed-by: Martin Polednik <mpolednik(a)redhat.com>
Reviewed-by: Dan Kenigsberg <danken(a)redhat.com>
---
M tests/Makefile.am
M tests/capsTests.py
A tests/caps_libvirt_multiqemu.out
M vdsm.spec.in
M vdsm/caps.py
5 files changed, 606 insertions(+), 13 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/50/45850/1
diff --git a/tests/Makefile.am b/tests/Makefile.am
index 174982c..e247551 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -154,6 +154,7 @@
caps_libvirt_intel_E5606.out \
caps_libvirt_intel_i73770.out \
caps_libvirt_intel_i73770_nosnap.out \
+ caps_libvirt_multiqemu.out \
cpu_map.xml \
caps_numactl_4_nodes.out \
glusterGeoRepStatus.xml \
diff --git a/tests/capsTests.py b/tests/capsTests.py
index 1bdccce..a70de5f 100644
--- a/tests/capsTests.py
+++ b/tests/capsTests.py
@@ -317,6 +317,24 @@
'pc-i440fx-rhel7.0.0']
self.assertEqual(expected, result)
+ def test_getEmulatedMachinesWithTwoQEMUInstalled(self):
+ capsData = self._readCaps("caps_libvirt_multiqemu.out")
+ result = caps._getEmulatedMachines('x86_64', capsData)
+ expected = ['pc-i440fx-rhel7.1.0',
+ 'rhel6.3.0',
+ 'pc-q35-rhel7.0.0',
+ 'rhel6.1.0',
+ 'rhel6.6.0',
+ 'rhel6.2.0',
+ 'pc',
+ 'pc-q35-rhel7.1.0',
+ 'q35',
+ 'rhel6.4.0',
+ 'rhel6.0.0',
+ 'rhel6.5.0',
+ 'pc-i440fx-rhel7.0.0']
+ self.assertEqual(expected, result)
+
def test_getNumaTopology(self):
capsData = self._readCaps("caps_libvirt_intel_i73770_nosnap.out")
result = caps.getNumaTopology(capsData)
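The intended selection can be sketched against a capabilities document like the one added below. This is a simplified reconstruction, not vdsm's exact caps.py code: when a `<domain>` element (e.g. `type='kvm'` pointing at /usr/libexec/qemu-kvm) carries its own `<machine>` list, prefer it over the arch-level list that belongs to the plain qemu emulator:

```python
import xml.etree.ElementTree as ET


def emulated_machines(arch, caps_xml):
    machines = []
    for guest in ET.fromstring(caps_xml).iter('guest'):
        arch_el = guest.find('arch')
        if arch_el is None or arch_el.get('name') != arch:
            continue
        # Machines declared inside a <domain> (the qemu-kvm case)
        # take precedence over the arch-level plain-qemu list.
        for dom in arch_el.findall('domain'):
            machines.extend(m.text for m in dom.findall('machine'))
        if not machines:
            machines = [m.text for m in arch_el.findall('machine')]
    return machines
```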
diff --git a/tests/caps_libvirt_multiqemu.out b/tests/caps_libvirt_multiqemu.out
new file mode 100644
index 0000000..38e962f
--- /dev/null
+++ b/tests/caps_libvirt_multiqemu.out
@@ -0,0 +1,547 @@
+<capabilities>
+
+ <host>
+ <uuid>8760ad12-e4e8-43ca-8a11-93c59fbf148d</uuid>
+ <cpu>
+ <arch>x86_64</arch>
+ <model>SandyBridge</model>
+ <vendor>Intel</vendor>
+ <topology sockets='1' cores='6' threads='2'/>
+ <feature name='invtsc'/>
+ <feature name='invpcid'/>
+ <feature name='erms'/>
+ <feature name='bmi2'/>
+ <feature name='smep'/>
+ <feature name='avx2'/>
+ <feature name='bmi1'/>
+ <feature name='fsgsbase'/>
+ <feature name='abm'/>
+ <feature name='pdpe1gb'/>
+ <feature name='rdrand'/>
+ <feature name='f16c'/>
+ <feature name='osxsave'/>
+ <feature name='movbe'/>
+ <feature name='dca'/>
+ <feature name='pcid'/>
+ <feature name='pdcm'/>
+ <feature name='xtpr'/>
+ <feature name='fma'/>
+ <feature name='tm2'/>
+ <feature name='est'/>
+ <feature name='smx'/>
+ <feature name='vmx'/>
+ <feature name='ds_cpl'/>
+ <feature name='monitor'/>
+ <feature name='dtes64'/>
+ <feature name='pbe'/>
+ <feature name='tm'/>
+ <feature name='ht'/>
+ <feature name='ss'/>
+ <feature name='acpi'/>
+ <feature name='ds'/>
+ <feature name='vme'/>
+ <pages unit='KiB' size='4'/>
+ <pages unit='KiB' size='2048'/>
+ </cpu>
+ <power_management>
+ <suspend_mem/>
+ </power_management>
+ <migration_features>
+ <live/>
+ <uri_transports>
+ <uri_transport>tcp</uri_transport>
+ <uri_transport>rdma</uri_transport>
+ </uri_transports>
+ </migration_features>
+ <topology>
+ <cells num='2'>
+ <cell id='0'>
+ <memory unit='KiB'>33444704</memory>
+ <pages unit='KiB' size='4'>8361176</pages>
+ <pages unit='KiB' size='2048'>0</pages>
+ <distances>
+ <sibling id='0' value='10'/>
+ <sibling id='1' value='21'/>
+ </distances>
+ <cpus num='12'>
+ <cpu id='0' socket_id='0' core_id='0' siblings='0,12'/>
+ <cpu id='1' socket_id='0' core_id='1' siblings='1,13'/>
+ <cpu id='2' socket_id='0' core_id='2' siblings='2,14'/>
+ <cpu id='3' socket_id='0' core_id='3' siblings='3,15'/>
+ <cpu id='4' socket_id='0' core_id='4' siblings='4,16'/>
+ <cpu id='5' socket_id='0' core_id='5' siblings='5,17'/>
+ <cpu id='12' socket_id='0' core_id='0' siblings='0,12'/>
+ <cpu id='13' socket_id='0' core_id='1' siblings='1,13'/>
+ <cpu id='14' socket_id='0' core_id='2' siblings='2,14'/>
+ <cpu id='15' socket_id='0' core_id='3' siblings='3,15'/>
+ <cpu id='16' socket_id='0' core_id='4' siblings='4,16'/>
+ <cpu id='17' socket_id='0' core_id='5' siblings='5,17'/>
+ </cpus>
+ </cell>
+ <cell id='1'>
+ <memory unit='KiB'>33554432</memory>
+ <pages unit='KiB' size='4'>8388608</pages>
+ <pages unit='KiB' size='2048'>0</pages>
+ <distances>
+ <sibling id='0' value='21'/>
+ <sibling id='1' value='10'/>
+ </distances>
+ <cpus num='12'>
+ <cpu id='6' socket_id='1' core_id='0' siblings='6,18'/>
+ <cpu id='7' socket_id='1' core_id='1' siblings='7,19'/>
+ <cpu id='8' socket_id='1' core_id='2' siblings='8,20'/>
+ <cpu id='9' socket_id='1' core_id='3' siblings='9,21'/>
+ <cpu id='10' socket_id='1' core_id='4' siblings='10,22'/>
+ <cpu id='11' socket_id='1' core_id='5' siblings='11,23'/>
+ <cpu id='18' socket_id='1' core_id='0' siblings='6,18'/>
+ <cpu id='19' socket_id='1' core_id='1' siblings='7,19'/>
+ <cpu id='20' socket_id='1' core_id='2' siblings='8,20'/>
+ <cpu id='21' socket_id='1' core_id='3' siblings='9,21'/>
+ <cpu id='22' socket_id='1' core_id='4' siblings='10,22'/>
+ <cpu id='23' socket_id='1' core_id='5' siblings='11,23'/>
+ </cpus>
+ </cell>
+ </cells>
+ </topology>
+ <secmodel>
+ <model>selinux</model>
+ <doi>0</doi>
+ <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
+ <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
+ </secmodel>
+ <secmodel>
+ <model>dac</model>
+ <doi>0</doi>
+ <baselabel type='kvm'>+107:+107</baselabel>
+ <baselabel type='qemu'>+107:+107</baselabel>
+ </secmodel>
+ </host>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='alpha'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-alpha</emulator>
+ <machine maxCpus='4'>clipper</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='armv7l'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-arm</emulator>
+ <machine maxCpus='1'>borzoi</machine>
+ <machine maxCpus='4'>virt</machine>
+ <machine maxCpus='4'>midway</machine>
+ <machine maxCpus='1'>tosa</machine>
+ <machine maxCpus='1'>cheetah</machine>
+ <machine maxCpus='1'>realview-pb-a8</machine>
+ <machine maxCpus='1'>collie</machine>
+ <machine maxCpus='1'>n800</machine>
+ <machine maxCpus='4'>highbank</machine>
+ <machine maxCpus='1'>kzm</machine>
+ <machine maxCpus='1'>integratorcp</machine>
+ <machine maxCpus='1'>sx1-v1</machine>
+ <machine maxCpus='2'>smdkc210</machine>
+ <machine maxCpus='1'>akita</machine>
+ <machine maxCpus='1'>canon-a1100</machine>
+ <machine maxCpus='1'>spitz</machine>
+ <machine maxCpus='1'>verdex</machine>
+ <machine maxCpus='1'>xilinx-zynq-a9</machine>
+ <machine maxCpus='4'>realview-eb-mpcore</machine>
+ <machine maxCpus='2'>nuri</machine>
+ <machine maxCpus='4'>vexpress-a15</machine>
+ <machine maxCpus='1'>n810</machine>
+ <machine maxCpus='1'>terrier</machine>
+ <machine maxCpus='1'>mainstone</machine>
+ <machine maxCpus='1'>musicpal</machine>
+ <machine maxCpus='4'>realview-pbx-a9</machine>
+ <machine maxCpus='1'>lm3s6965evb</machine>
+ <machine maxCpus='4'>vexpress-a9</machine>
+ <machine maxCpus='1'>cubieboard</machine>
+ <machine maxCpus='1'>realview-eb</machine>
+ <machine maxCpus='1'>sx1</machine>
+ <machine maxCpus='1'>connex</machine>
+ <machine maxCpus='1'>z2</machine>
+ <machine maxCpus='1'>lm3s811evb</machine>
+ <machine maxCpus='1'>versatilepb</machine>
+ <machine maxCpus='1'>versatileab</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <cpuselection/>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='cris'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-cris</emulator>
+ <machine maxCpus='1'>axis-dev88</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='i686'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-i386</emulator>
+ <machine canonical='pc-i440fx-2.0' maxCpus='255'>pc</machine>
+ <machine maxCpus='255'>pc-0.12</machine>
+ <machine maxCpus='255'>pc-1.3</machine>
+ <machine maxCpus='255'>pc-q35-1.6</machine>
+ <machine maxCpus='255'>pc-q35-1.5</machine>
+ <machine maxCpus='255'>pc-i440fx-1.6</machine>
+ <machine maxCpus='255'>pc-i440fx-1.7</machine>
+ <machine maxCpus='255'>pc-0.11</machine>
+ <machine maxCpus='255'>pc-0.10</machine>
+ <machine maxCpus='255'>pc-1.2</machine>
+ <machine maxCpus='1'>isapc</machine>
+ <machine maxCpus='255'>pc-q35-1.4</machine>
+ <machine maxCpus='255'>pc-0.15</machine>
+ <machine maxCpus='255'>pc-0.14</machine>
+ <machine maxCpus='255'>pc-i440fx-1.5</machine>
+ <machine canonical='pc-q35-2.0' maxCpus='255'>q35</machine>
+ <machine maxCpus='255'>pc-i440fx-1.4</machine>
+ <machine maxCpus='255'>pc-1.1</machine>
+ <machine maxCpus='255'>pc-q35-1.7</machine>
+ <machine maxCpus='255'>pc-1.0</machine>
+ <machine maxCpus='255'>pc-0.13</machine>
+ <domain type='qemu'>
+ </domain>
+ <domain type='kvm'>
+ <emulator>/usr/libexec/qemu-kvm</emulator>
+ <machine canonical='pc-i440fx-rhel7.1.0' maxCpus='240'>pc</machine>
+ <machine maxCpus='240'>rhel6.6.0</machine>
+ <machine maxCpus='240'>pc-q35-rhel7.0.0</machine>
+ <machine maxCpus='240'>rhel6.4.0</machine>
+ <machine canonical='pc-q35-rhel7.1.0' maxCpus='240'>q35</machine>
+ <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
+ <machine maxCpus='240'>rhel6.2.0</machine>
+ <machine maxCpus='240'>rhel6.1.0</machine>
+ <machine maxCpus='240'>rhel6.5.0</machine>
+ <machine maxCpus='240'>rhel6.0.0</machine>
+ <machine maxCpus='240'>rhel6.3.0</machine>
+ </domain>
+ </arch>
+ <features>
+ <cpuselection/>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ <acpi default='on' toggle='yes'/>
+ <apic default='on' toggle='no'/>
+ <pae/>
+ <nonpae/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='lm32'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-lm32</emulator>
+ <machine maxCpus='1'>lm32-evr</machine>
+ <machine maxCpus='1'>milkymist</machine>
+ <machine maxCpus='1'>lm32-uclinux</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='m68k'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-m68k</emulator>
+ <machine maxCpus='1'>mcf5208evb</machine>
+ <machine maxCpus='1'>dummy</machine>
+ <machine maxCpus='1'>an5206</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='microblaze'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-microblaze</emulator>
+ <machine maxCpus='1'>petalogix-s3adsp1800</machine>
+ <machine maxCpus='1'>petalogix-ml605</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='microblazeel'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-microblazeel</emulator>
+ <machine maxCpus='1'>petalogix-s3adsp1800</machine>
+ <machine maxCpus='1'>petalogix-ml605</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='mips'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-mips</emulator>
+ <machine maxCpus='16'>malta</machine>
+ <machine maxCpus='1'>mipssim</machine>
+ <machine maxCpus='1'>magnum</machine>
+ <machine maxCpus='1'>pica61</machine>
+ <machine maxCpus='1'>mips</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='mipsel'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-mipsel</emulator>
+ <machine maxCpus='16'>malta</machine>
+ <machine maxCpus='1'>mipssim</machine>
+ <machine maxCpus='1'>magnum</machine>
+ <machine maxCpus='1'>pica61</machine>
+ <machine maxCpus='1'>mips</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='mips64'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-mips64</emulator>
+ <machine maxCpus='16'>malta</machine>
+ <machine maxCpus='1'>mipssim</machine>
+ <machine maxCpus='1'>magnum</machine>
+ <machine maxCpus='1'>mips</machine>
+ <machine maxCpus='1'>pica61</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='mips64el'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-mips64el</emulator>
+ <machine maxCpus='16'>malta</machine>
+ <machine maxCpus='1'>fulong2e</machine>
+ <machine maxCpus='1'>magnum</machine>
+ <machine maxCpus='1'>mipssim</machine>
+ <machine maxCpus='1'>mips</machine>
+ <machine maxCpus='1'>pica61</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='s390x'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-s390x</emulator>
+ <machine canonical='s390-virtio' maxCpus='255'>s390</machine>
+ <machine canonical='s390-ccw-virtio' maxCpus='255'>s390-ccw</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <cpuselection/>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='sh4'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-sh4</emulator>
+ <machine maxCpus='1'>shix</machine>
+ <machine maxCpus='1'>r2d</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='sh4eb'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-sh4eb</emulator>
+ <machine maxCpus='1'>shix</machine>
+ <machine maxCpus='1'>r2d</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='unicore32'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-unicore32</emulator>
+ <machine maxCpus='1'>puv3</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='x86_64'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-x86_64</emulator>
+ <machine canonical='pc-i440fx-2.0' maxCpus='255'>pc</machine>
+ <machine maxCpus='255'>pc-1.3</machine>
+ <machine maxCpus='255'>pc-0.12</machine>
+ <machine maxCpus='255'>pc-q35-1.6</machine>
+ <machine maxCpus='255'>pc-q35-1.5</machine>
+ <machine maxCpus='255'>pc-i440fx-1.6</machine>
+ <machine maxCpus='255'>pc-i440fx-1.7</machine>
+ <machine maxCpus='255'>pc-0.11</machine>
+ <machine maxCpus='255'>pc-1.2</machine>
+ <machine maxCpus='255'>pc-0.10</machine>
+ <machine maxCpus='1'>isapc</machine>
+ <machine maxCpus='255'>pc-q35-1.4</machine>
+ <machine maxCpus='255'>pc-0.15</machine>
+ <machine maxCpus='255'>pc-0.14</machine>
+ <machine maxCpus='255'>pc-i440fx-1.5</machine>
+ <machine maxCpus='255'>pc-i440fx-1.4</machine>
+ <machine canonical='pc-q35-2.0' maxCpus='255'>q35</machine>
+ <machine maxCpus='255'>pc-1.1</machine>
+ <machine maxCpus='255'>pc-q35-1.7</machine>
+ <machine maxCpus='255'>pc-1.0</machine>
+ <machine maxCpus='255'>pc-0.13</machine>
+ <domain type='qemu'>
+ </domain>
+ <domain type='kvm'>
+ <emulator>/usr/libexec/qemu-kvm</emulator>
+ <machine canonical='pc-i440fx-rhel7.1.0' maxCpus='240'>pc</machine>
+ <machine maxCpus='240'>rhel6.6.0</machine>
+ <machine maxCpus='240'>pc-q35-rhel7.0.0</machine>
+ <machine maxCpus='240'>rhel6.4.0</machine>
+ <machine canonical='pc-q35-rhel7.1.0' maxCpus='240'>q35</machine>
+ <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
+ <machine maxCpus='240'>rhel6.2.0</machine>
+ <machine maxCpus='240'>rhel6.1.0</machine>
+ <machine maxCpus='240'>rhel6.5.0</machine>
+ <machine maxCpus='240'>rhel6.0.0</machine>
+ <machine maxCpus='240'>rhel6.3.0</machine>
+ </domain>
+ </arch>
+ <features>
+ <cpuselection/>
+ <deviceboot/>
+ <disksnapshot default='on' toggle='no'/>
+ <acpi default='on' toggle='yes'/>
+ <apic default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='xtensa'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-xtensa</emulator>
+ <machine maxCpus='4'>sim</machine>
+ <machine maxCpus='4'>lx60</machine>
+ <machine maxCpus='4'>kc705</machine>
+ <machine maxCpus='4'>ml605</machine>
+ <machine maxCpus='4'>lx200</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='xtensaeb'>
+ <wordsize>32</wordsize>
+ <emulator>/usr/bin/qemu-system-xtensaeb</emulator>
+ <machine maxCpus='4'>sim</machine>
+ <machine maxCpus='4'>kc705</machine>
+ <machine maxCpus='4'>ml605</machine>
+ <machine maxCpus='4'>lx200</machine>
+ <machine maxCpus='4'>lx60</machine>
+ <domain type='qemu'>
+ </domain>
+ </arch>
+ <features>
+ <disksnapshot default='on' toggle='no'/>
+ </features>
+ </guest>
+
+</capabilities>
+
+
diff --git a/vdsm.spec.in b/vdsm.spec.in
index f0e061e..9a38359 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -1101,6 +1101,7 @@
%{_datadir}/%{vdsm_name}/tests/caps_libvirt_intel_E5606.out
%{_datadir}/%{vdsm_name}/tests/caps_libvirt_intel_i73770.out
%{_datadir}/%{vdsm_name}/tests/caps_libvirt_intel_i73770_nosnap.out
+%{_datadir}/%{vdsm_name}/tests/caps_libvirt_multiqemu.out
%{_datadir}/%{vdsm_name}/tests/caps_numactl_4_nodes.out
%{_datadir}/%{vdsm_name}/tests/cpu_map.xml
%{_datadir}/%{vdsm_name}/tests/devices/*.py*
diff --git a/vdsm/caps.py b/vdsm/caps.py
index 89c96ba..c0cb2f6 100644
--- a/vdsm/caps.py
+++ b/vdsm/caps.py
@@ -381,25 +381,51 @@
return AutoNumaBalancingStatus.UNKNOWN
+def _get_emulated_machines_from_node(node):
+ # We have to make sure to inspect 'canonical' attribute where
+ # libvirt puts the real machine name. Relevant bug:
+ # https://bugzilla.redhat.com/show_bug.cgi?id=1229666
+ return list(set((itertools.chain.from_iterable(
+ (
+ (m.text, m.get('canonical'))
+ if m.get('canonical') else
+ (m.text,)
+ )
+ for m in node.iterfind('machine')))))
+
+
+def _get_emulated_machines_from_arch(arch, caps):
+ arch_tag = caps.find('.//guest/arch[@name="%s"]' % arch)
+ if arch_tag is None:
+ logging.error('Error while looking for architecture '
+ '"%s" in libvirt capabilities', arch)
+ return []
+
+ return _get_emulated_machines_from_node(arch_tag)
+
+
+def _get_emulated_machines_from_domain(arch, caps):
+ domain_tag = caps.find(
+ './/guest/arch[@name="%s"]/domain[@type="kvm"]' % arch)
+ if domain_tag is None:
+ logging.error('Error while looking for kvm domain (%s) '
+ 'in libvirt capabilities', arch)
+ return []
+
+ return _get_emulated_machines_from_node(domain_tag)
+
+
@utils.memoized
def _getEmulatedMachines(arch, capabilities=None):
if capabilities is None:
capabilities = _getCapsXMLStr()
caps = ET.fromstring(capabilities)
- for archTag in caps.iter(tag='arch'):
- if archTag.get('name') == arch:
- # We have to make sure to inspect 'canonical' attribute where
- # libvirt puts the real machine name. Relevant bug:
- # https://bugzilla.redhat.com/show_bug.cgi?id=1229666
- return list(set((itertools.chain.from_iterable(
- (
- (m.text, m.get('canonical')) if
- m.get('canonical') else (m.text,)
- )
- for m in archTag.iterfind('machine')))))
-
- return []
+ # machine list from domain can legally be empty
+ # (e.g. only qemu-kvm installed)
+ # in that case it is fine to use machines list from arch
+ return (_get_emulated_machines_from_domain(arch, caps) or
+ _get_emulated_machines_from_arch(arch, caps))
def _getAllCpuModels(capfile=CPU_MAP_FILE, arch=None):
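
The canonical-name handling above can be exercised outside VDSM; the sketch below (capabilities fragment and helper name are illustrative, not real host output) shows why both the machine alias and its 'canonical' attribute must be reported, and why the kvm domain list takes precedence over the arch list:

```python
# Minimal standalone sketch of _get_emulated_machines_from_node's logic.
import itertools
import xml.etree.ElementTree as ET

CAPS = """
<capabilities>
  <guest>
    <arch name='x86_64'>
      <machine canonical='pc-i440fx-2.0' maxCpus='255'>pc</machine>
      <machine maxCpus='255'>pc-1.3</machine>
      <domain type='kvm'>
        <machine canonical='pc-i440fx-rhel7.1.0' maxCpus='240'>pc</machine>
        <machine maxCpus='240'>rhel6.6.0</machine>
      </domain>
    </arch>
  </guest>
</capabilities>
"""

def machines_from_node(node):
    # Emit both the alias (element text) and the 'canonical' name when
    # present, since libvirt puts the real machine name in 'canonical'.
    return sorted(set(itertools.chain.from_iterable(
        (m.text, m.get('canonical')) if m.get('canonical') else (m.text,)
        for m in node.iterfind('machine'))))

caps = ET.fromstring(CAPS)
domain = caps.find('.//guest/arch[@name="x86_64"]/domain[@type="kvm"]')
print(machines_from_node(domain))
# ['pc', 'pc-i440fx-rhel7.1.0', 'rhel6.6.0']
```

Only if the domain node had no machine list at all (e.g. only qemu-kvm installed) would the caller fall back to the arch-level list, which is exactly the `or` in the patched `_getEmulatedMachines`.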
--
To view, visit https://gerrit.ovirt.org/45850
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4deebbc90bf1cec53fc40bc6a35c6ada933296c3
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.6
Gerrit-Owner: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Martin Polednik <mpolednik(a)redhat.com>
Change in vdsm[master]: vm: do not immediately kill after failed migration
by fromani@redhat.com
Francesco Romani has uploaded a new change for review.
Change subject: vm: do not immediately kill after failed migration
......................................................................
vm: do not immediately kill after failed migration
WRITEME
Change-Id: Ice18b65e335f18b4ca406557a5e65266aa229854
Signed-off-by: Francesco Romani <fromani(a)redhat.com>
---
M vdsm/virt/vm.py
1 file changed, 6 insertions(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/88/42888/1
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index bb8c062..f3e2502 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -771,8 +771,12 @@
'exitMessage', ''))
self.recovering = False
except MigrationError:
- self.log.exception("Failed to start a migration destination vm")
- self.setDownStatus(ERROR, vmexitreason.MIGRATION_FAILED)
+ with self._releaseLock:
+ aborted = self._released
+ if not aborted:
+ # an explicit abort already explains the failure;
+ # log a stack trace only when it was unexpected
+ self.log.exception("Failed to start a migration destination vm")
+ self.setDownStatus(ERROR, vmexitreason.MIGRATION_FAILED)
except Exception as e:
if self.recovering:
self.log.info("Skipping errors on recovery", exc_info=True)
--
To view, visit https://gerrit.ovirt.org/42888
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ice18b65e335f18b4ca406557a5e65266aa229854
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani(a)redhat.com>
Gerrit-Reviewer: automation(a)ovirt.org
Change in vdsm[master]: migration: on abort, override reason on dest VM
by fromani@redhat.com
Francesco Romani has uploaded a new change for review.
Change subject: migration: on abort, override reason on dest VM
......................................................................
migration: on abort, override reason on dest VM
When a migration is aborted, Engine looks for the
migration failure reason on the destination VM.
Unfortunately, it is (almost always) the source VM that
decided to abort the migration, so it is actually the
source VM that knows the failure reason.
We can either change VDSM to share this information, or
fix Engine to ask the source VM.
The first option is probably safer and simpler, so this
patch implements it by extending the destroy() verb with
an optional vmexitreason field, so the destroying agent
(maybe the source VDSM, maybe Engine) can propagate the
right reason.
Change-Id: I021774437e969930880ba1602893bbd5ed2c1c1a
Signed-off-by: Francesco Romani <fromani(a)redhat.com>
---
M vdsm/API.py
M vdsm/rpc/bindingxmlrpc.py
M vdsm/rpc/vdsmapi-schema.json
M vdsm/virt/migration.py
M vdsm/virt/vm.py
M vdsm/virt/vmexitreason.py
6 files changed, 57 insertions(+), 12 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/01/42801/1
diff --git a/vdsm/API.py b/vdsm/API.py
index bc74e67..1b517e1 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -43,6 +43,7 @@
import storage.volume
import storage.sd
import storage.image
+from virt import vmexitreason
from virt import vmstatus
from virt.vmdevices import graphics
from virt.vmdevices import hwclass
@@ -327,7 +328,7 @@
else:
return errCode['nonresp']
- def destroy(self):
+ def destroy(self, exitReason=vmexitreason.SUCCESS):
"""
Destroy the specified VM.
"""
@@ -337,7 +338,7 @@
v = self._cif.vmContainer.get(self._UUID)
if not v:
return errCode['noVM']
- res = v.destroy()
+ res = v.destroy(exitReason)
status = utils.picklecopy(res)
if status['status']['code'] == 0:
status['status']['message'] = "Machine destroyed"
diff --git a/vdsm/rpc/bindingxmlrpc.py b/vdsm/rpc/bindingxmlrpc.py
index 21a545d..2a344c4 100644
--- a/vdsm/rpc/bindingxmlrpc.py
+++ b/vdsm/rpc/bindingxmlrpc.py
@@ -36,6 +36,7 @@
from vdsm.netinfo import getDeviceByIP
import API
from vdsm.exception import VdsmException
+from virt import vmexitreason
try:
from gluster.api import getGlusterMethods
@@ -350,9 +351,9 @@
#
# Callable methods:
#
- def vmDestroy(self, vmId):
+ def vmDestroy(self, vmId, reason=vmexitreason.SUCCESS):
vm = API.VM(vmId)
- return vm.destroy()
+ return vm.destroy(reason)
def vmCreate(self, vmParams):
vm = API.VM(vmParams['vmId'])
diff --git a/vdsm/rpc/vdsmapi-schema.json b/vdsm/rpc/vdsmapi-schema.json
index b16657d..81ec00c 100644
--- a/vdsm/rpc/vdsmapi-schema.json
+++ b/vdsm/rpc/vdsmapi-schema.json
@@ -6575,12 +6575,16 @@
#
# Forcibly stop a running VM.
#
-# @vmID: The UUID of the VM
+# @vmID: The UUID of the VM
+#
+# @exitReason: #optional Override VM exit reason.
+# Ignored if SUCCESS.
+# Internal usage only. (new in version 4.17.0)
#
# Since: 4.10.0
##
{'command': {'class': 'VM', 'name': 'destroy'},
- 'data': {'vmID': 'UUID'}}
+ 'data': {'vmID': 'UUID', '*exitReason': 'VmExitReason'}}
##
# @VM.getInfo:
diff --git a/vdsm/virt/migration.py b/vdsm/virt/migration.py
index 0e6a024..9ceecb8 100644
--- a/vdsm/virt/migration.py
+++ b/vdsm/virt/migration.py
@@ -67,6 +67,13 @@
STALLED = 3
+_ABORT_REASON_TO_EXIT_REASON = {
+ MigrationAbortReason.CLIENT: vmexitreason.MIGRATION_ABORTED_CLIENT,
+ MigrationAbortReason.MAX_TIME: vmexitreason.MIGRATION_ABORTED_TIMEOUT,
+ MigrationAbortReason.STALLED: vmexitreason.MIGRATION_ABORTED_STALLED,
+}
+
+
class SourceThread(threading.Thread):
"""
A thread that takes care of migration on the source vdsm.
@@ -143,6 +150,14 @@
if self._monitorThread is not None:
# we need to report monotonic progress
self._progress = self._monitorThread.progress
+
+ def _get_abort_reason(self):
+ # FIXME: racy.
+ if (self._monitorThread is not None and
+ self._monitorThread.abort_reason is not None):
+ return self._monitorThread.abort_reason
+ else:
+ return MigrationAbortReason.CLIENT
def _addMigrationFields(self, res):
"""
@@ -240,7 +255,11 @@
self.log.error(message)
if not self.hibernating:
try:
- self._destServer.destroy(self._vm.id)
+ reason = _ABORT_REASON_TO_EXIT_REASON.get(
+ self._get_abort_reason(),
+ vmexitreason.SUCCESS
+ )
+ self._destServer.destroy(self._vm.id, reason)
except Exception:
self.log.exception("Failed to destroy remote VM")
# if the guest was stopped before migration, we need to cont it
@@ -509,6 +528,15 @@
self._startTime = startTime
self.daemon = True
self.progress = 0
+ self._abort_reason = None
+
+ @property
+ def canceled(self):
+ return self._abort_reason is not None
+
+ @property
+ def abort_reason(self):
+ return self._abort_reason
@property
def enabled(self):
@@ -555,6 +583,7 @@
'migration will be aborted.',
now - self._startTime,
migrationMaxTime)
+ self._abort_reason = MigrationAbortReason.MAX_TIME
abort = True
elif (lowmark is None) or (lowmark > dataRemaining):
lowmark = dataRemaining
@@ -564,6 +593,7 @@
self._vm.log.warn(
'Migration is stuck: Hasn\'t progressed in %s seconds. '
'Aborting.' % (now - lastProgressTime))
+ self._abort_reason = MigrationAbortReason.STALLED
abort = True
if abort:
diff --git a/vdsm/virt/vm.py b/vdsm/virt/vm.py
index 4f7a271..bb8c062 100644
--- a/vdsm/virt/vm.py
+++ b/vdsm/virt/vm.py
@@ -3674,10 +3674,10 @@
except Exception:
self.log.exception("Failed to delete VM %s", self.conf['vmId'])
- def destroy(self):
+ def destroy(self, reason=vmexitreason.SUCCESS):
self.log.debug('destroy Called')
- result = self.doDestroy()
+ result = self.doDestroy(reason)
if result['status']['code']:
return result
# Clean VM from the system
@@ -3685,14 +3685,17 @@
return {'status': doneCode}
- def doDestroy(self):
+ def doDestroy(self, reason):
for dev in self._customDevices():
hooks.before_device_destroy(dev._deviceXML, self.conf,
dev.custom)
hooks.before_vm_destroy(self._domain.xml, self.conf)
with self._shutdownLock:
- self._shutdownReason = vmexitreason.ADMIN_SHUTDOWN
+ if reason != vmexitreason.SUCCESS:
+ self._shutdownReason = reason
+ else:
+ self._shutdownReason = vmexitreason.ADMIN_SHUTDOWN
self._destroyed = True
return self.releaseVm()
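
The override rule in doDestroy() above is small but easy to get backwards; a minimal standalone sketch follows. SUCCESS and MIGRATION_FAILED values come from the vmexitreason.py diff below; ADMIN_SHUTDOWN = 6 is an assumption, as its value is not shown in this change:

```python
# Sketch of the doDestroy() shutdown-reason override.
SUCCESS = 0
ADMIN_SHUTDOWN = 6        # assumed value, not visible in this diff
MIGRATION_FAILED = 8

def shutdown_reason(requested):
    # SUCCESS acts as "no override requested": keep the default
    # ADMIN_SHUTDOWN. Any other reason passed to destroy() wins.
    if requested != SUCCESS:
        return requested
    return ADMIN_SHUTDOWN

print(shutdown_reason(MIGRATION_FAILED))  # 8
print(shutdown_reason(SUCCESS))           # 6
```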
diff --git a/vdsm/virt/vmexitreason.py b/vdsm/virt/vmexitreason.py
index c04e65e..944d893 100644
--- a/vdsm/virt/vmexitreason.py
+++ b/vdsm/virt/vmexitreason.py
@@ -28,6 +28,9 @@
USER_SHUTDOWN = 7
MIGRATION_FAILED = 8
LIBVIRT_DOMAIN_MISSING = 9
+MIGRATION_ABORTED_CLIENT = 10
+MIGRATION_ABORTED_TIMEOUT = 11
+MIGRATION_ABORTED_STALLED = 12
exitReasons = {
@@ -40,5 +43,8 @@
ADMIN_SHUTDOWN: 'Admin shut down from the engine',
USER_SHUTDOWN: 'User shut down from within the guest',
MIGRATION_FAILED: 'VM failed to migrate',
- LIBVIRT_DOMAIN_MISSING: 'Failed to find the libvirt domain'
+ LIBVIRT_DOMAIN_MISSING: 'Failed to find the libvirt domain',
+ MIGRATION_ABORTED_CLIENT: 'VM migration canceled by client',
+ MIGRATION_ABORTED_TIMEOUT: 'VM took too much time to migrate',
+ MIGRATION_ABORTED_STALLED: 'VM migration failed to progress'
}
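
Taken together, the mapping in migration.py and the new exit-reason constants behave as sketched below. Only STALLED = 3 is visible in this diff; CLIENT = 1 and MAX_TIME = 2 are assumptions for illustration:

```python
# Standalone sketch of the abort-reason -> exit-reason translation.
class MigrationAbortReason(object):
    CLIENT = 1    # assumed value
    MAX_TIME = 2  # assumed value
    STALLED = 3   # shown in the migration.py diff

SUCCESS = 0
MIGRATION_ABORTED_CLIENT = 10
MIGRATION_ABORTED_TIMEOUT = 11
MIGRATION_ABORTED_STALLED = 12

_ABORT_REASON_TO_EXIT_REASON = {
    MigrationAbortReason.CLIENT: MIGRATION_ABORTED_CLIENT,
    MigrationAbortReason.MAX_TIME: MIGRATION_ABORTED_TIMEOUT,
    MigrationAbortReason.STALLED: MIGRATION_ABORTED_STALLED,
}

def exit_reason_for(abort_reason):
    # Unknown (or None) abort reasons map to SUCCESS, which destroy()
    # treats as "no override", falling back to ADMIN_SHUTDOWN.
    return _ABORT_REASON_TO_EXIT_REASON.get(abort_reason, SUCCESS)

print(exit_reason_for(MigrationAbortReason.STALLED))  # 12
print(exit_reason_for(None))                          # 0
```

The `.get(..., SUCCESS)` fallback is what keeps the new wire field backward compatible: a destination that receives no meaningful reason behaves exactly as before.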
--
To view, visit https://gerrit.ovirt.org/42801
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I021774437e969930880ba1602893bbd5ed2c1c1a
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <fromani(a)redhat.com>