Change in vdsm[master]: virt: add test to allow custom XML parsing test
by Martin Polednik
Martin Polednik has uploaded a new change for review.
Change subject: virt: add test to allow custom XML parsing test
......................................................................
virt: add test to allow custom XML parsing test
Tests should run against explicit device specifications, but it may be
useful to have a means of simply testing that parsing of a given XML
succeeds. This patch implements such functionality, allowing the user
to supply XML in tests/devices/data and modify
tests/parsing/custom_vm_tests.py to verify that the parsing conforms
to the general device specification.
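As a quick sanity check outside the test harness, the "parsing succeeds" idea can be sketched with the standard library alone; the helper below is illustrative and not part of vdsm:

```python
# Hypothetical helper: check that a domain XML snippet is well-formed
# before wiring it into tests/devices/data and custom_vm_tests.py.
import xml.etree.ElementTree as ET


def xml_parses(xml_text):
    """Return True if xml_text is well-formed XML, False otherwise."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```

Note this only checks well-formedness; the actual test additionally verifies the parsed devices against the device specification via verify.DeviceMixin.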
Change-Id: I3bb24e2854e6f7b93c5108eca8b6f79f17353e85
Signed-off-by: Martin Polednik <mpolednik(a)redhat.com>
---
M tests/Makefile.am
M tests/devices/data/Makefile.am
A tests/devices/data/testSriovVm.xml
M tests/devices/parsing/Makefile.am
A tests/devices/parsing/custom_vm_tests.py
5 files changed, 181 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/09/39909/1
diff --git a/tests/Makefile.am b/tests/Makefile.am
index b8a8964..2bc9018 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -27,6 +27,7 @@
device_modules = \
devices/parsing/complex_vm_tests.py \
+ devices/parsing/custom_vm_tests.py \
$(NULL)
test_modules = \
diff --git a/tests/devices/data/Makefile.am b/tests/devices/data/Makefile.am
index 0ead0e2..6a6288c 100644
--- a/tests/devices/data/Makefile.am
+++ b/tests/devices/data/Makefile.am
@@ -22,4 +22,5 @@
dist_vdsmdevdatatests_DATA = \
testComplexVm.xml \
+ testSriovVm.xml \
$(NULL)
diff --git a/tests/devices/data/testSriovVm.xml b/tests/devices/data/testSriovVm.xml
new file mode 100644
index 0000000..2980c7f
--- /dev/null
+++ b/tests/devices/data/testSriovVm.xml
@@ -0,0 +1,144 @@
+<domain type='kvm' id='2'>
+ <name>vm1_Copy</name>
+ <uuid>78144ebf-7894-456e-997f-9fc96083341e</uuid>
+ <memory unit='KiB'>1048576</memory>
+ <currentMemory unit='KiB'>1048576</currentMemory>
+ <vcpu placement='static' current='1'>16</vcpu>
+ <cputune>
+ <shares>1020</shares>
+ <period>12500</period>
+ <quota>100000</quota>
+ </cputune>
+ <resource>
+ <partition>/machine</partition>
+ </resource>
+ <sysinfo type='smbios'>
+ <system>
+ <entry name='manufacturer'>oVirt</entry>
+ <entry name='product'>oVirt Node</entry>
+ <entry name='version'>7.1-0.1.el7</entry>
+ <entry name='serial'>38373035-3536-4247-3830-333334344139</entry>
+ <entry name='uuid'>78144ebf-7894-456e-997f-9fc96083341e</entry>
+ </system>
+ </sysinfo>
+ <os>
+ <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
+ <smbios mode='sysinfo'/>
+ </os>
+ <features>
+ <acpi/>
+ </features>
+ <cpu mode='custom' match='exact'>
+ <model fallback='allow'>Conroe</model>
+ <topology sockets='16' cores='1' threads='1'/>
+ </cpu>
+ <clock offset='variable' adjustment='0' basis='utc'>
+ <timer name='rtc' tickpolicy='catchup'/>
+ <timer name='pit' tickpolicy='delay'/>
+ <timer name='hpet' present='no'/>
+ </clock>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/libexec/qemu-kvm</emulator>
+ <disk type='file' device='cdrom'>
+ <driver name='qemu' type='raw'/>
+ <source startupPolicy='optional'/>
+ <backingStore/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ <serial></serial>
+ <alias name='ide0-1-0'/>
+ <address type='drive' controller='0' bus='1' target='0' unit='0'/>
+ </disk>
+ <disk type='file' device='disk' snapshot='no'>
+ <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
+ <source file='/rhev/data-center/bd4ba8d0-024e-412b-aa6e-b22a1654f53e/9a8980dc-b533-4085-884e-daa9f3753ce7/images/369a5d94-05bc-41c0-84c5-ed3b1b8d2d89/79d06c74-8a95-4e4b-afba-8d09975c5f8d'>
+ <seclabel model='selinux' labelskip='yes'/>
+ </source>
+ <backingStore/>
+ <target dev='vda' bus='virtio'/>
+ <serial>369a5d94-05bc-41c0-84c5-ed3b1b8d2d89</serial>
+ <boot order='1'/>
+ <alias name='virtio-disk0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
+ </disk>
+ <controller type='scsi' index='0' model='virtio-scsi'>
+ <alias name='scsi0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
+ </controller>
+ <controller type='virtio-serial' index='0' ports='16'>
+ <alias name='virtio-serial0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
+ </controller>
+ <controller type='usb' index='0'>
+ <alias name='usb0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
+ </controller>
+ <controller type='pci' index='0' model='pci-root'>
+ <alias name='pci.0'/>
+ </controller>
+ <controller type='ide' index='0'>
+ <alias name='ide0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+ </controller>
+ <interface type='hostdev'>
+ <mac address='00:1a:4a:16:01:53'/>
+ <driver name='vfio'/>
+ <source>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
+ </source>
+ <alias name='hostdev0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
+ </interface>
+ <interface type='bridge'>
+ <mac address='00:1a:4a:16:01:54'/>
+ <source bridge='ovirtmgmt'/>
+ <bandwidth>
+ </bandwidth>
+ <target dev='vnet0'/>
+ <model type='virtio'/>
+ <filterref filter='vdsm-no-mac-spoofing'/>
+ <link state='down'/>
+ <alias name='net1'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+ </interface>
+ <channel type='unix'>
+ <source mode='bind' path='/var/lib/libvirt/qemu/channels/78144ebf-7894-456e-997f-9fc96083341e.com.redhat.rhevm.vdsm'/>
+ <target type='virtio' name='com.redhat.rhevm.vdsm'/>
+ <alias name='channel0'/>
+ <address type='virtio-serial' controller='0' bus='0' port='1'/>
+ </channel>
+ <channel type='unix'>
+ <source mode='bind' path='/var/lib/libvirt/qemu/channels/78144ebf-7894-456e-997f-9fc96083341e.org.qemu.guest_agent.0'/>
+ <target type='virtio' name='org.qemu.guest_agent.0'/>
+ <alias name='channel1'/>
+ <address type='virtio-serial' controller='0' bus='0' port='2'/>
+ </channel>
+ <channel type='spicevmc'>
+ <target type='virtio' name='com.redhat.spice.0'/>
+ <alias name='channel2'/>
+ <address type='virtio-serial' controller='0' bus='0' port='3'/>
+ </channel>
+ <input type='mouse' bus='ps2'/>
+ <input type='keyboard' bus='ps2'/>
+ <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='0' passwdValidTo='1970-01-01T00:00:01'>
+ <listen type='address' address='0'/>
+ </graphics>
+ <video>
+ <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1'/>
+ <alias name='video0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
+ </video>
+ <memballoon model='virtio'>
+ <alias name='balloon0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
+ </memballoon>
+ </devices>
+ <seclabel type='dynamic' model='selinux' relabel='yes'>
+ <label>system_u:system_r:svirt_t:s0:c799,c978</label>
+ <imagelabel>system_u:object_r:svirt_image_t:s0:c799,c978</imagelabel>
+ </seclabel>
+</domain>
+
diff --git a/tests/devices/parsing/Makefile.am b/tests/devices/parsing/Makefile.am
index 978e9bd..26edbd4 100644
--- a/tests/devices/parsing/Makefile.am
+++ b/tests/devices/parsing/Makefile.am
@@ -23,4 +23,5 @@
dist_vdsmdevparsingtests_PYTHON = \
__init__.py \
complex_vm_tests.py \
+ custom_vm_tests.py \
$(NULL)
diff --git a/tests/devices/parsing/custom_vm_tests.py b/tests/devices/parsing/custom_vm_tests.py
new file mode 100644
index 0000000..eca8db9
--- /dev/null
+++ b/tests/devices/parsing/custom_vm_tests.py
@@ -0,0 +1,34 @@
+import os
+
+from testlib import permutations, expandPermutations
+from testlib import XMLTestCase
+
+from virt import domain_descriptor
+import vmfakelib as fake
+
+import verify
+
+
+@expandPermutations
+class TestVmDevicesXmlParsing(XMLTestCase, verify.DeviceMixin):
+
+ @permutations([['testComplexVm.xml'], ['testSriovVm.xml']])
+ def test_custom_vm(self, domain_xml):
+ params = {'name': 'complexVm', 'displaySecurePort': '-1',
+ 'memSize': '256', 'displayPort': '-1', 'display': 'qxl'}
+
+ devices = [{'device': 'spice', 'type': 'graphics'}]
+
+ test_path = os.path.realpath(__file__)
+ dir_name = os.path.split(test_path)[0]
+ api_path = os.path.join(
+ dir_name, '..', 'data', domain_xml)
+
+ domain = None
+ with open(api_path, 'r') as domxml:
+ domain = domxml.read()
+
+ with fake.VM(params=params, devices=devices) as vm:
+ vm._domain = domain_descriptor.DomainDescriptor(domain)
+ vm._getUnderlyingVmDevicesInfo()
+ self.verifyDevicesConf(vm.conf['devices'])
--
To view, visit https://gerrit.ovirt.org/39909
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3bb24e2854e6f7b93c5108eca8b6f79f17353e85
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Polednik <mpolednik(a)redhat.com>
Change in vdsm[master]: virtTests: add way to query full stats via jsonrpc
by Martin Polednik
Martin Polednik has uploaded a new change for review.
Change subject: virtTests: add way to query full stats via jsonrpc
......................................................................
virtTests: add way to query full stats via jsonrpc
Change-Id: Ic2c2f50d862bfd1447aa812a037b5b41f3efd6af
Signed-off-by: Martin Polednik <mpolednik(a)redhat.com>
---
M lib/vdsm/jsonrpcvdscli.py
M tests/functional/utils.py
2 files changed, 2 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/09/45309/1
diff --git a/lib/vdsm/jsonrpcvdscli.py b/lib/vdsm/jsonrpcvdscli.py
index 6e65e0d..eda01c4 100644
--- a/lib/vdsm/jsonrpcvdscli.py
+++ b/lib/vdsm/jsonrpcvdscli.py
@@ -43,6 +43,7 @@
'getVdsStats': 'Host.getStats',
'getVmStats': 'VM.getStats',
'list': 'Host.getVMList',
+ 'fullList': 'Host.getVMFullList',
'migrationCreate': 'VM.migrationCreate',
'ping': 'Host.ping',
'setBalloonTarget': 'VM.setBalloonTarget',
diff --git a/tests/functional/utils.py b/tests/functional/utils.py
index 26562aa..1414386 100644
--- a/tests/functional/utils.py
+++ b/tests/functional/utils.py
@@ -221,7 +221,7 @@
return _parse_result(result)
def getVmList(self, vmId):
- result = self.vdscli.list('true', [vmId])
+ result = self.vdscli.fullList([vmId])
code, msg, vm_list = _parse_result(result, True)
return code, msg, vm_list[0]
--
To view, visit https://gerrit.ovirt.org/45309
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic2c2f50d862bfd1447aa812a037b5b41f3efd6af
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Polednik <mpolednik(a)redhat.com>
Change in vdsm[master]: caps: add fake emulated machines
by Martin Polednik
Martin Polednik has uploaded a new change for review.
Change subject: caps: add fake emulated machines
......................................................................
caps: add fake emulated machines
When running fakekvm, the libvirt capabilities XML doesn't contain
emulated machine types of the opposite architecture. When adding such a
host to the engine, it will be considered non-operational. This patch
hardcodes the emulated machine types of both architectures and returns
them, causing the engine to accept the host.
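The gating the patch adds boils down to a config-driven fallback; a minimal sketch with illustrative names (not vdsm's actual API):

```python
# Hypothetical sketch of the config-gated fallback: with fake KVM
# support enabled, return a hardcoded machine list instead of
# querying the libvirt capabilities XML.
FAKE_MACHINES = ['pseries', 'pc-i440fx-rhel7.2.0', 'pc', 'q35']


def emulated_machines(fake_kvm_support, query_libvirt):
    if fake_kvm_support:
        return FAKE_MACHINES
    return query_libvirt()
```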
Change-Id: I707515225b9034adda1770870d9a6939ecca9e5d
Signed-off-by: Martin Polednik <mpolednik(a)redhat.com>
---
M vdsm/caps.py
1 file changed, 24 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/52/46852/1
diff --git a/vdsm/caps.py b/vdsm/caps.py
index fed29f1..a3c1f16 100644
--- a/vdsm/caps.py
+++ b/vdsm/caps.py
@@ -381,6 +381,27 @@
return AutoNumaBalancingStatus.UNKNOWN
+def _get_fake_emulated_machines():
+ ppc64le_machines = ['pseries', 'pseries-rhel7.2.0']
+ x86_64_machines = ['pc-i440fx-rhel7.1.0',
+ 'rhel6.3.0',
+ 'pc-q35-rhel7.2.0',
+ 'pc-i440fx-rhel7.0.0',
+ 'rhel6.1.0',
+ 'rhel6.6.0',
+ 'rhel6.2.0',
+ 'pc',
+ 'pc-q35-rhel7.0.0',
+ 'pc-q35-rhel7.1.0',
+ 'q35',
+ 'pc-i440fx-rhel7.2.0',
+ 'rhel6.4.0',
+ 'rhel6.0.0',
+ 'rhel6.5.0']
+
+ return ppc64le_machines + x86_64_machines
+
+
def _get_emulated_machines_from_node(node):
# We have to make sure to inspect 'canonical' attribute where
# libvirt puts the real machine name. Relevant bug:
@@ -417,6 +438,9 @@
@utils.memoized
def _getEmulatedMachines(arch, capabilities=None):
+ if config.getboolean('vars', 'fake_kvm_support'):
+ return _get_fake_emulated_machines()
+
if capabilities is None:
capabilities = _getCapsXMLStr()
caps = ET.fromstring(capabilities)
--
To view, visit https://gerrit.ovirt.org/46852
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I707515225b9034adda1770870d9a6939ecca9e5d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Martin Polednik <mpolednik(a)redhat.com>
Change in vdsm[master]: recovery: move from clientIF to virt package
by fromani@redhat.com
Francesco Romani has uploaded a new change for review.
Change subject: recovery: move from clientIF to virt package
......................................................................
recovery: move from clientIF to virt package
The code with recover VMs belongs to the virt vertical,
and was part of clientIF mostly ofr historical reasons.
To unclutter a bit the codebase, and to move forward the long
term goal to drop clientIF, this patch moves the vm recovery
code into a new virt module (virt/recovery.py).
Along the way, we also modernize the names and make them pep8
friendly. Besides that, there are no changes in logic.
Tests will follow up in a future patch.
Change-Id: I3c72782bb2e4a62f94514eb3059f2ba45f01b6e2
Signed-off-by: Francesco Romani <fromani(a)redhat.com>
---
M debian/vdsm.install
M vdsm.spec.in
M vdsm/clientIF.py
M vdsm/virt/Makefile.am
A vdsm/virt/recovery.py
5 files changed, 118 insertions(+), 77 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/12/44412/1
diff --git a/debian/vdsm.install b/debian/vdsm.install
index a26c790..fe25828 100644
--- a/debian/vdsm.install
+++ b/debian/vdsm.install
@@ -146,6 +146,7 @@
./usr/share/vdsm/virt/guestagent.py
./usr/share/vdsm/virt/migration.py
./usr/share/vdsm/virt/periodic.py
+./usr/share/vdsm/virt/recovery.py
./usr/share/vdsm/virt/sampling.py
./usr/share/vdsm/virt/secret.py
./usr/share/vdsm/virt/vm.py
diff --git a/vdsm.spec.in b/vdsm.spec.in
index fd21bc1..e2f5311 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -814,6 +814,7 @@
%{_datadir}/%{vdsm_name}/virt/guestagent.py*
%{_datadir}/%{vdsm_name}/virt/migration.py*
%{_datadir}/%{vdsm_name}/virt/periodic.py*
+%{_datadir}/%{vdsm_name}/virt/recovery.py*
%{_datadir}/%{vdsm_name}/virt/sampling.py*
%{_datadir}/%{vdsm_name}/virt/secret.py*
%{_datadir}/%{vdsm_name}/virt/vmchannels.py*
diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
index 3ec3e42..d147655 100644
--- a/vdsm/clientIF.py
+++ b/vdsm/clientIF.py
@@ -34,7 +34,6 @@
import alignmentScan
from vdsm.config import config
from momIF import MomClient
-from vdsm.compat import pickle
from vdsm.define import doneCode, errCode
import libvirt
from vdsm import sslutils
@@ -47,6 +46,7 @@
from protocoldetector import MultiProtocolAcceptor
from virt import migration
+from virt import recovery
from virt import sampling
from virt import secret
from virt import vm
@@ -466,46 +466,7 @@
caps.CpuTopology().cores())
migration.SourceThread.setMaxOutgoingMigrations(mog)
- # Recover stage 1: domains from libvirt
- doms = getVDSMDomains()
- num_doms = len(doms)
- for idx, v in enumerate(doms):
- vmId = v.UUIDString()
- if self._recoverVm(vmId):
- self.log.info(
- 'recovery [1:%d/%d]: recovered domain %s from libvirt',
- idx+1, num_doms, vmId)
- else:
- self.log.info(
- 'recovery [1:%d/%d]: loose domain %s found, killing it.',
- idx+1, num_doms, vmId)
- try:
- v.destroy()
- except libvirt.libvirtError:
- self.log.exception(
- 'recovery [1:%d/%d]: failed to kill loose domain %s',
- idx+1, num_doms, vmId)
-
- # Recover stage 2: domains from recovery files
- # we do this to safely handle VMs which disappeared
- # from the host while VDSM was down/restarting
- rec_vms = self._getVDSMVmsFromRecovery()
- num_rec_vms = len(rec_vms)
- if rec_vms:
- self.log.warning(
- 'recovery: found %i VMs from recovery files not'
- ' reported by libvirt. This should not happen!'
- ' Will try to recover them.', num_rec_vms)
-
- for idx, vmId in enumerate(rec_vms):
- if self._recoverVm(vmId):
- self.log.info(
- 'recovery [2:%d/%d]: recovered domain %s'
- ' from data file', idx+1, num_rec_vms, vmId)
- else:
- self.log.warning(
- 'recovery [2:%d/%d]: VM %s failed to recover from data'
- ' file, reported as Down', idx+1, num_rec_vms, vmId)
+ recovery.all_vms(self)
# recover stage 3: waiting for domains to go up
while self._enabled:
@@ -518,7 +479,9 @@
'recovery: waiting for %d domains to go up',
launching)
time.sleep(1)
- self._cleanOldFiles()
+
+ recovery.clean_vm_files(self)
+
self._recovery = False
# Now if we have VMs to restore we should wait pool connection
@@ -553,41 +516,6 @@
except:
self.log.exception("recovery: failed")
raise
-
- def _getVDSMVmsFromRecovery(self):
- vms = []
- for f in os.listdir(constants.P_VDSM_RUN):
- vmId, fileType = os.path.splitext(f)
- if fileType == ".recovery":
- if vmId not in self.vmContainer:
- vms.append(vmId)
- return vms
-
- def _recoverVm(self, vmid):
- try:
- recoveryFile = constants.P_VDSM_RUN + vmid + ".recovery"
- params = pickle.load(file(recoveryFile))
- now = time.time()
- pt = float(params.pop('startTime', now))
- params['elapsedTimeOffset'] = now - pt
- self.log.debug("Trying to recover " + params['vmId'])
- if not self.createVm(params, vmRecover=True)['status']['code']:
- return recoveryFile
- except:
- self.log.debug("Error recovering VM", exc_info=True)
- return None
-
- def _cleanOldFiles(self):
- for f in os.listdir(constants.P_VDSM_RUN):
- try:
- vmId, fileType = f.split(".", 1)
- exts = ["guest.socket", "monitor.socket",
- "stdio.dump", "recovery"]
- if fileType in exts and vmId not in self.vmContainer:
- self.log.debug("removing old file " + f)
- utils.rmFile(constants.P_VDSM_RUN + f)
- except:
- pass
def dispatchLibvirtEvents(self, conn, dom, *args):
try:
diff --git a/vdsm/virt/Makefile.am b/vdsm/virt/Makefile.am
index 745e7dc..f6065ad 100644
--- a/vdsm/virt/Makefile.am
+++ b/vdsm/virt/Makefile.am
@@ -29,6 +29,7 @@
guestagent.py \
migration.py \
periodic.py \
+ recovery.py \
sampling.py \
secret.py \
vm.py \
diff --git a/vdsm/virt/recovery.py b/vdsm/virt/recovery.py
new file mode 100644
index 0000000..8ab65cd
--- /dev/null
+++ b/vdsm/virt/recovery.py
@@ -0,0 +1,110 @@
+#
+# Copyright 2011-2015 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+import os
+import os.path
+import time
+
+import libvirt
+
+from vdsm.compat import pickle
+from vdsm import constants
+from vdsm import utils
+
+from .vm import getVDSMDomains
+
+
+def all_vms(cif):
+ # Recover stage 1: domains from libvirt
+ doms = getVDSMDomains()
+ num_doms = len(doms)
+ for idx, v in enumerate(doms):
+ vm_id = v.UUIDString()
+ if vm_from_file(cif, vm_id):
+ cif.log.info(
+ 'recovery [1:%d/%d]: recovered domain %s from libvirt',
+ idx+1, num_doms, vm_id)
+ else:
+ cif.log.info(
+ 'recovery [1:%d/%d]: loose domain %s found, killing it.',
+ idx+1, num_doms, vm_id)
+ try:
+ v.destroy()
+ except libvirt.libvirtError:
+ cif.log.exception(
+ 'recovery [1:%d/%d]: failed to kill loose domain %s',
+ idx+1, num_doms, vm_id)
+
+ # Recover stage 2: domains from recovery files
+ # we do this to safely handle VMs which disappeared
+ # from the host while VDSM was down/restarting
+ rec_vms = vdsm_vms_from_files(cif)
+ num_rec_vms = len(rec_vms)
+ if rec_vms:
+ cif.log.warning(
+ 'recovery: found %i VMs from recovery files not'
+ ' reported by libvirt. This should not happen!'
+ ' Will try to recover them.', num_rec_vms)
+
+ for idx, vm_id in enumerate(rec_vms):
+ if vm_from_file(cif, vm_id):
+ cif.log.info(
+ 'recovery [2:%d/%d]: recovered domain %s'
+ ' from data file', idx+1, num_rec_vms, vm_id)
+ else:
+ cif.log.warning(
+ 'recovery [2:%d/%d]: VM %s failed to recover from data'
+ ' file, reported as Down', idx+1, num_rec_vms, vm_id)
+
+
+def vdsm_vms_from_files(cif):
+ vms = []
+ for f in os.listdir(constants.P_VDSM_RUN):
+ vm_id, fileType = os.path.splitext(f)
+ if fileType == ".recovery":
+ if vm_id not in cif.vmContainer:
+ vms.append(vm_id)
+ return vms
+
+
+def vm_from_file(cif, vmid):
+ try:
+ recovery_file = constants.P_VDSM_RUN + vmid + ".recovery"
+ params = pickle.load(file(recovery_file))
+ now = time.time()
+ pt = float(params.pop('startTime', now))
+ params['elapsedTimeOffset'] = now - pt
+ cif.log.debug("Trying to recover " + params['vmId'])
+ if not cif.createVm(params, vmRecover=True)['status']['code']:
+ return recovery_file
+ except:
+ cif.log.debug("Error recovering VM", exc_info=True)
+ return None
+
+
+def clean_vm_files(cif):
+ for f in os.listdir(constants.P_VDSM_RUN):
+ try:
+ vm_id, fileType = f.split(".", 1)
+ exts = ["guest.socket", "monitor.socket",
+ "stdio.dump", "recovery"]
+ if fileType in exts and vm_id not in cif.vmContainer:
+ cif.log.debug("removing old file " + f)
+ utils.rmFile(constants.P_VDSM_RUN + f)
+ except:
+ pass
--
To view, visit https://gerrit.ovirt.org/44412
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3c72782bb2e4a62f94514eb3059f2ba45f01b6e2
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <fromani(a)redhat.com>
Change in vdsm[ovirt-3.5]: sp: deactivateSd - remove domain from pending for upgrade list
by laravot@redhat.com
Liron Aravot has uploaded a new change for review.
Change subject: sp: deactivateSd - remove domain from pending for upgrade list
......................................................................
sp: deactivateSd - remove domain from pending for upgrade list
When a domain is deactivated it is not cleared from the
_domainsToUpgrade list, which causes the list to hold wrong and
unneeded information (the domain might later be detached from the
storage pool).
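The cleanup the patch adds can be sketched as an idempotent removal followed by a finalize check; the function and parameter names below are illustrative, not vdsm's:

```python
def forget_domain(domains_to_upgrade, sd_uuid, unregister):
    # Drop the domain if present (idempotent: a missing domain is not
    # an error), then unregister the upgrade callback once no domains
    # are left to upgrade.
    try:
        domains_to_upgrade.remove(sd_uuid)
    except ValueError:
        pass
    if not domains_to_upgrade:
        unregister()
```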
Change-Id: I4451b348b8837dd83d95aea2be4a4536b33cdd99
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1260429
Signed-off-by: Liron Aravot <laravot(a)redhat.com>
---
M vdsm/storage/sp.py
1 file changed, 16 insertions(+), 8 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/78/45978/1
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 834eefd..43220c2 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -188,14 +188,7 @@
return
self._domainsToUpgrade.remove(sdUUID)
- if len(self._domainsToUpgrade) == 0:
- self.log.debug("All domains are upgraded, unregistering "
- "from state change event")
- try:
- self.domainMonitor.onDomainStateChange.\
- unregister(self._upgradeCallback)
- except KeyError:
- pass
+ self._finalizePoolUpgradeIfNeeded()
def _updateDomainsRole(self):
for sdUUID in self.getDomains(activeOnly=True):
@@ -1119,8 +1112,23 @@
domList[sdUUID] = sd.DOM_ATTACHED_STATUS
self._backend.setDomainsMap(domList)
+ try:
+ self._domainsToUpgrade.remove(sdUUID)
+ except ValueError:
+ pass
+ self._finalizePoolUpgradeIfNeeded()
self.updateMonitoringThreads()
+ def _finalizePoolUpgradeIfNeeded(self):
+ if len(self._domainsToUpgrade) == 0:
+ self.log.debug("No domains left for upgrade, unregistering "
+ "from state change event")
+ try:
+ self.domainMonitor.onDomainStateChange.unregister(
+ self._upgradeCallback)
+ except KeyError:
+ pass
+
@unsecured
def _linkStorageDomain(self, linkTarget, linkName):
self.log.info("Linking %s to %s", linkTarget, linkName)
--
To view, visit https://gerrit.ovirt.org/45978
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4451b348b8837dd83d95aea2be4a4536b33cdd99
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Liron Aravot <laravot(a)redhat.com>
Change in vdsm[ovirt-3.5]: hsm: lock pool when running upgradeStoragePool
by laravot@redhat.com
Liron Aravot has uploaded a new change for review.
Change subject: hsm: lock pool when running upgradeStoragePool
......................................................................
hsm: lock pool when running upgradeStoragePool
When running upgradeStoragePool we use the pool metadata to know which
domains need an upgrade. Having no pool lock means that races with
flows that manipulate the pool metadata, like activateSd and
deactivateSd, might occur.
Change-Id: I1d5b65a75b1b50d5f5991334cf6221c067a31f5b
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1260429
Signed-off-by: Liron Aravot <laravot(a)redhat.com>
---
M vdsm/storage/hsm.py
1 file changed, 5 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/77/45977/1
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 57063f1..da571fc 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -3550,7 +3550,11 @@
def upgradeStoragePool(self, spUUID, targetDomVersion):
targetDomVersion = int(targetDomVersion)
pool = self.getPool(spUUID)
- pool._upgradePool(targetDomVersion)
+ # This lock has to be mutual with the pool metadata operations (like
+ # activateSD/deactivateSD) as it uses the pool metadata.
+ with rmanager.acquireResource(STORAGE, spUUID,
+ rm.LockType.exclusive):
+ pool._upgradePool(targetDomVersion)
return {"upgradeStatus": "started"}
def _getDomsStats(self, domainMonitor, doms):
--
To view, visit https://gerrit.ovirt.org/45977
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I1d5b65a75b1b50d5f5991334cf6221c067a31f5b
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Liron Aravot <laravot(a)redhat.com>
Change in vdsm[ovirt-3.5]: sp: startSpm - clusterlock inquire leads to failure
by laravot@redhat.com
Liron Aravot has uploaded a new change for review.
Change subject: sp: startSpm - clusterlock inquire leads to failure
......................................................................
sp: startSpm - clusterlock inquire leads to failure
Currently startSpm() is responsible for the pool upgrade; it attempts
to upgrade the pool domains to the desired version. During the
execution of startSpm() we attempt to retrieve the current SPM status
in order to compare it with the parameters sent by the engine, and in
case of a difference we log the info to the user.
When a DC upgrade is performed while the DC is down, from a version
that uses V1 as the domain version to a version that uses the
StoragePoolMemoryBackend, we'll encounter a problem: the
StoragePoolMemoryBackend uses the information from the clusterlock
only, and the current clusterlock used on the domain might not support
inquiring (safelease, for example), which will cause it to throw
InquireNotSupportedError.
As we use the inquired information just to display a warning, in the
case of a clusterlock that doesn't support inquiring we should just
log it and continue with starting the SPM.
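The fallback behaviour can be sketched as a try/except around the status query; the class and sentinel below are simplified stand-ins for the vdsm originals:

```python
class InquireNotSupportedError(Exception):
    """Raised when the cluster lock backend cannot be inquired."""


LVER_INVALID = -1  # sentinel lease version, mirroring the patch


def spm_status(backend):
    # Fall back to an invalid lease version when the lock backend
    # (e.g. safelease) does not support inquire, instead of failing
    # startSpm() over information used only for a log message.
    try:
        return backend.getSpmStatus()
    except InquireNotSupportedError:
        return (LVER_INVALID, None)
```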
Change-Id: I082dc83ea410768db3819e7259089c20c2614b07
Bug-Url: https://bugzilla.redhat.com/1242092
Signed-off-by: Liron Aravot <laravot(a)redhat.com>
---
M vdsm/storage/sp.py
1 file changed, 11 insertions(+), 5 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/76/45976/1
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 323f878..834eefd 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -242,7 +242,6 @@
raise se.MiscOperationInProgress("spm start %s" % self.spUUID)
self.updateMonitoringThreads()
- oldlver, oldid = self._backend.getSpmStatus()
masterDomVersion = self.getVersion()
# If no specific domain version was specified use current master
# domain version
@@ -254,10 +253,17 @@
self.masterDomain.sdUUID, curVer=masterDomVersion,
expVer=expectedDomVersion)
- if int(oldlver) != int(prevLVER) or int(oldid) != int(prevID):
- self.log.info("expected previd:%s lver:%s got request for "
- "previd:%s lver:%s" %
- (oldid, oldlver, prevID, prevLVER))
+ try:
+ oldlver, oldid = self._backend.getSpmStatus()
+ except se.InquireNotSupportedError:
+ self.log.info("cluster lock inquire isn't supported. "
+ "proceeding with startSpm()")
+ oldlver = LVER_INVALID
+ else:
+ if int(oldlver) != int(prevLVER) or int(oldid) != int(prevID):
+ self.log.info("expected previd:%s lver:%s got request for "
+ "previd:%s lver:%s" %
+ (oldid, oldlver, prevID, prevLVER))
self.spmRole = SPM_CONTEND
--
To view, visit https://gerrit.ovirt.org/45976
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I082dc83ea410768db3819e7259089c20c2614b07
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Liron Aravot <laravot(a)redhat.com>
Change in vdsm[ovirt-3.5]: core: moving InquireNotSupportedError to storage_exception.py
by laravot@redhat.com
Liron Aravot has uploaded a new change for review.
Change subject: core: moving InquireNotSupportedError to storage_exception.py
......................................................................
core: moving InquireNotSupportedError to storage_exception.py
InquireNotSupportedError is currently defined in clusterlock.py, which
prevents assigning a meaningful error code to the error and using it
outside of that class's scope without a different mechanism than our
widely used one. In this patch it is moved to storage_exception.py so
we'll be able to catch and inspect the error like any other
clusterlock-related error. The engine will use this error as well and
will attempt to start the SPM if getSpmStatus() fails, as we don't
have the "current" SPM information; in the worst case startSpm() will
fail.
Change-Id: I8201794dc96ee24dc9c0da5b7c3d71ab0b75e9f3
Bug-Url: https://bugzilla.redhat.com/1242092
Signed-off-by: Liron Aravot <laravot(a)redhat.com>
---
M vdsm/storage/clusterlock.py
M vdsm/storage/hsm.py
M vdsm/storage/storage_exception.py
3 files changed, 10 insertions(+), 5 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/75/45975/1
diff --git a/vdsm/storage/clusterlock.py b/vdsm/storage/clusterlock.py
index 24a5d81..7373e1f 100644
--- a/vdsm/storage/clusterlock.py
+++ b/vdsm/storage/clusterlock.py
@@ -69,10 +69,6 @@
HOST_STATUS_DEAD = "dead"
-class InquireNotSupportedError(Exception):
- """Raised when the clusterlock class is not supporting inquire"""
-
-
class SafeLease(object):
log = logging.getLogger("Storage.SafeLease")
@@ -146,7 +142,7 @@
self.log.debug("Clustered lock acquired successfully")
def inquire(self):
- raise InquireNotSupportedError()
+ raise se.InquireNotSupportedError()
def getLockUtilFullPath(self):
return os.path.join(self.lockUtilPath, self.lockCmd)
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 579f5a6..57063f1 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -635,6 +635,10 @@
# This happens when we cannot read the MD LV
self.log.error("Can't read LV based metadata", exc_info=True)
raise se.StorageDomainMasterError("Can't read LV based metadata")
+ except se.InquireNotSupportedError:
+ self.log.error("Inquire spm status isn't supported by "
+ "the current cluster lock")
+ raise
except se.StorageException as e:
self.log.error("MD read error: %s", str(e), exc_info=True)
raise se.StorageDomainMasterError("MD read error")
diff --git a/vdsm/storage/storage_exception.py b/vdsm/storage/storage_exception.py
index b4f3b66..5a26402 100644
--- a/vdsm/storage/storage_exception.py
+++ b/vdsm/storage/storage_exception.py
@@ -1593,6 +1593,11 @@
message = "Could not initialize cluster lock"
+class InquireNotSupportedError(StorageException):
+ code = 702
+ message = "Cluster lock inquire isn't supported"
+
+
#################################################
# Meta data related Exceptions
#################################################
--
To view, visit https://gerrit.ovirt.org/45975
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8201794dc96ee24dc9c0da5b7c3d71ab0b75e9f3
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Liron Aravot <laravot(a)redhat.com>
Change in vdsm[master]: stomp: make sure that subscriptions use unique id
by Piotr Kliczewski
Piotr Kliczewski has uploaded a new change for review.
Change subject: stomp: make sure that subscriptions use unique id
......................................................................
stomp: make sure that subscriptions use unique id
There was a bug in the engine where two subscriptions were created
with the same id. As a result, this created an fd leak because we were
not able to clean up the subscription id on connection close or
unsubscribe.
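The guard the patch adds reduces to a dict-membership check before registering; a simplified sketch with illustrative names:

```python
def subscribe(subscriptions, sub_id, connection):
    # Reject duplicate ids so every subscription can be cleaned up
    # later on unsubscribe or connection close, avoiding the fd leak
    # described above. Returns False when the id is already taken.
    if sub_id in subscriptions:
        return False
    subscriptions[sub_id] = connection
    return True
```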
Change-Id: I3883bb68134a6e2cc52cf54ce4027122db8150e9
Signed-off-by: pkliczewski <piotr.kliczewski(a)gmail.com>
---
M lib/yajsonrpc/stompreactor.py
1 file changed, 5 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/56/46656/1
diff --git a/lib/yajsonrpc/stompreactor.py b/lib/yajsonrpc/stompreactor.py
index 71fd2ab..7e74708 100644
--- a/lib/yajsonrpc/stompreactor.py
+++ b/lib/yajsonrpc/stompreactor.py
@@ -133,6 +133,11 @@
dispatcher.connection)
return
+ if sub_id in self._sub_ids.keys():
+ self._send_error("Subscription id already exists",
+ dispatcher.connection)
+ return
+
ack = frame.headers.get("ack", stomp.AckMode.AUTO)
subscription = stomp._Subscription(dispatcher.connection, destination,
sub_id, ack, None)
--
To view, visit https://gerrit.ovirt.org/46656
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3883bb68134a6e2cc52cf54ce4027122db8150e9
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Piotr Kliczewski <piotr.kliczewski(a)gmail.com>
Change in vdsm[master]: schema: rpm for jsonrpc schema files
by Piotr Kliczewski
Piotr Kliczewski has uploaded a new change for review.
Change subject: schema: rpm for jsonrpc schema files
......................................................................
schema: rpm for jsonrpc schema files
We need to use schema files outside of vdsm for external clients and in
the future for the engine. In order to do it we need to have separate
rpm which provides required files.
Change-Id: I13d6291ddbf5bf7d8e6a0956db3300cd0c45e563
Signed-off-by: pkliczewski <piotr.kliczewski(a)gmail.com>
---
M configure.ac
M lib/Makefile.am
R lib/api/process-schema.py
A lib/api/vdsm-api.html
R lib/api/vdsmapi-gluster-schema.json
R lib/api/vdsmapi-schema.json
R lib/api/vdsmapi.py
M tests/schemaTests.py
M tests/schemaValidationTest.py
M tests/vmApiTests.py
M vdsm.spec.in
M vdsm/rpc/Bridge.py
M vdsm/rpc/Makefile.am
13 files changed, 6,201 insertions(+), 39 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/50/45750/1
--
To view, visit https://gerrit.ovirt.org/45750
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I13d6291ddbf5bf7d8e6a0956db3300cd0c45e563
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Piotr Kliczewski <piotr.kliczewski(a)gmail.com>