Storage location of the 'recover' files
by Vinzenz Feenstra
Hi,
We have had some verbal discussion about the restore state files, and we
came to the conclusion that it might be a good idea to store the
configuration we currently pickle into the recovery files inside the
Domain XML itself.
The point is that we currently have a potential problem (noted in
comments all over vm.py): the recovery file and the actual configuration
of the VM in the Domain XML might differ.
The idea is that we could use the <metadata> element, introduced in
libvirt before 0.10, to store our current configuration embedded in the
Domain XML kept by libvirt.
When recovering, we could read the configuration back from there.
Additionally, we would be able to update the Domain XML and the current
configuration at the same time, which would give us a more consistent state.
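For illustration, a minimal sketch (not VDSM code) of how the libvirt
metadata API could be used for this; the namespace URI, prefix, and payload
format below are invented for the example:

import libvirt

# Hypothetical namespace for VDSM's own VM configuration; not a real URI.
NS_URI = 'http://example.org/vdsm/vm-conf/1.0'

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('testvm')

# Store the serialized VM config under <metadata> in the domain XML.
# The snippet must be well-formed XML; here it is just a placeholder.
dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                '<conf>serialized-vm-config-goes-here</conf>',
                'vdsmconf',   # namespace prefix
                NS_URI,
                0)

# On recovery, read the same element back instead of a separate recovery file.
saved = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NS_URI, 0)
print(saved)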
This is just an idea and I am curious what you guys think about it.
Let me know, thanks
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Fwd: RHEV-m hosts with certs configured
by navin p
Hi,
I have a couple of RHEV hosts (ovpxen, RHV2, RHV10, etc.) and I'm trying to
connect to them from a client machine (C1). All the RHEV hosts have their
libvirt configuration managed by vdsm; it looks like the following:
## beginning of configuration section by vdsm-4.10.2
listen_addr="0.0.0.0"
unix_sock_group="kvm"
unix_sock_rw_perms="0770"
auth_unix_rw="sasl"
host_uuid="036118ab-705f-4aeb-9a13-013dc8af6b41"
keepalive_interval=-1
log_outputs="1:file:/var/log/libvirtd.log"
log_filters="3:virobject 3:virfile 2:virnetlink 3:cgroup 3:event 3:json
1:libvirt 1:util 1:qemu"
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"
## end of configuration section by vdsm-4.10.2
# ls
bkp-2013-08-16_110734_cacert.pem cacert.pem vdsmcert.pem
bkp-2013-08-16_110734_vdsmcert.pem engine_web_ca.pem
[root@ovpxen certs]# pwd
/etc/pki/vdsm/certs
[root@ovpxen certs]# certtool -i --infile engine_web_ca.pem | head
X.509 Certificate Information:
Version: 3
Serial Number (hex): 09
Issuer: C=US,O=HP,CN=CA-IWFVM00772.hpswlabs.adapps.hp.com.64431
Validity:
Not Before: Wed Jan 23 13:24:14 UTC 2013
Not After: Sun Jan 22 07:54:14 UTC 2023
Subject: C=US,O=HP,CN=CA-IWFVM00772.hpswlabs.adapps.hp.com.64431
Subject Public Key Algorithm: RSA
Modulus (bits 1024):
certtool -i --infile cacert.pem | head
X.509 Certificate Information:
Version: 3
Serial Number (hex): 09
Issuer: C=US,O=HP,CN=CA-IWFVM00772.hpswlabs.adapps.hp.com.64431
Validity:
Not Before: Wed Jan 23 13:24:14 UTC 2013
Not After: Sun Jan 22 07:54:14 UTC 2023
Subject: C=US,O=HP,CN=CA-IWFVM00772.hpswlabs.adapps.hp.com.64431
Subject Public Key Algorithm: RSA
Modulus (bits 1024):
[root@ovpxen certs]# certtool -i --infile vdsmcert.pem | head
X.509 Certificate Information:
Version: 3
Serial Number (hex): 0c
Issuer: C=US,O=HP,CN=CA-IWFVM00772.hpswlabs.adapps.hp.com.64431
Validity:
Not Before: Thu Aug 15 11:09:22 UTC 2013
Not After: Wed Aug 15 05:39:22 UTC 2018
Subject: O=HP,CN=16.184.46.53
Subject Public Key Algorithm: RSA
Modulus (bits 2048):
Now, from the client C1, which cert should I place in /etc/pki/CA/cacert.pem
so that I can access the host from the client using the URI
qemu+tls://ovpxen.ind.hp.com/system? Please note that the host
IWFVM00772.hpswlabs.adapps.hp.com is an oVirt managed host, not the
client.
My problem is that I can't change the hypervisor hosts, as there are too
many of them and they are configured by vdsm. Which certs should I take from
a host so that I can use them on the client to connect to multiple hosts
with virsh or virt-manager? I need TLS as the remote protocol because I'm
trying to automate commands.
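For reference, a minimal sketch of the client side, assuming the stock
libvirt TLS layout from the libvirt documentation (the paths below are the
upstream defaults, not something read from these hosts):

# /etc/pki/CA/cacert.pem                 - CA that signed the *server* certs
# /etc/pki/libvirt/clientcert.pem        - client cert signed by a CA the server trusts
# /etc/pki/libvirt/private/clientkey.pem - matching private key
import libvirt

try:
    conn = libvirt.open('qemu+tls://ovpxen.ind.hp.com/system')
    print(conn.getHostname())
    conn.close()
except libvirt.libvirtError as e:
    # Certificate/handshake problems surface here; libvirtd.log on the host
    # usually has the matching server-side error.
    print('TLS connection failed: %s' % e)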
Regards,
Navin
Test failure not free loop devices
by dcaroest@redhat.com
Sometimes we get this error when running the vdsm tests:
======================================================================
ERROR: testLoopMount (mountTests.MountTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/ephemeral0/vdsm_unit_tests_gerrit_el/tests/mountTests.py", line 69, in testLoopMount
m.mount(mntOpts="loop")
File "/ephemeral0/vdsm_unit_tests_gerrit_el/vdsm/storage/mount.py", line 222, in mount
return self._runcmd(cmd, timeout)
File "/ephemeral0/vdsm_unit_tests_gerrit_el/vdsm/storage/mount.py", line 238, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (2, ';mount: could not find any free loop device\n')
-------------------- >> begin captured logging << --------------------
Storage.Misc.excCmd: DEBUG: '/sbin/mkfs.ext2 -F /tmp/tmpq95svr' (cwd None)
Storage.Misc.excCmd: DEBUG: SUCCESS: <err> = 'mke2fs 1.41.12 (17-May-2010)\n'; <rc> = 0
Storage.Misc.excCmd: DEBUG: '/usr/bin/sudo -n /bin/mount -o loop /tmp/tmpq95svr /tmp/tmpcS29EU' (cwd None)
The problem seems to be that the loop devices are not being released (maybe when a test fails?), so the system eventually runs out of free devices.
Can you take a look at where the cleanup fails and fix it?
Thanks!
P.S. To free a loop device you have to unmount it and then run losetup -d <loopdevice>; that releases the device. Another option would be to create a dedicated device per test, so we never run out of them no matter how many tests run in parallel, but that requires a few more modifications.
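For illustration, a rough sketch of the cleanup idea (not the actual vdsm
test code; the paths and helper below are just for the example):

import os
import subprocess
import tempfile

def run(*cmd):
    subprocess.check_call(cmd)

fd, backing = tempfile.mkstemp()
os.close(fd)
mnt = tempfile.mkdtemp()
try:
    run('/bin/dd', 'if=/dev/zero', 'of=' + backing, 'bs=1M', 'count=16')
    run('/sbin/mkfs.ext2', '-F', backing)
    loopdev = subprocess.check_output(
        ['/sbin/losetup', '--find', '--show', backing]).decode().strip()
    try:
        run('/bin/mount', loopdev, mnt)
        try:
            pass  # ... test body goes here ...
        finally:
            run('/bin/umount', mnt)
    finally:
        # Without this the device stays allocated and the pool dries up.
        run('/sbin/losetup', '-d', loopdev)
finally:
    os.remove(backing)
    os.rmdir(mnt)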
----
David Caro
Red Hat Czech s.r.o.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro(a)redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
RHT Global #: 82-62605
vdsm-sync meeting August 12th 2013
by ybronhei@redhat.com
Hey,
I couldn't join the call from the Israeli number.
I wanted to raise that we recently added a locking mechanism to the vdsmd.init script (http://gerrit.ovirt.org/#/c/17662/), which led to some regressions on Fedora 19.
The fixes should be provided as part of http://gerrit.ovirt.org/#/c/17926/ - reviews are welcome :)
Other than that, I'll try to look into the call issues; I understand it worked for some. Hope to be there next time.
If someone can share what was said, that would be great.
Thanks,
Yaniv Bronhaim
Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain
by Andrew Cathrow
> ----- Forwarded Message -----
> > From: "Itamar Heim" <iheim(a)redhat.com>
> > To: "Sahina Bose" <sabose(a)redhat.com>
> > Cc: "engine-devel" <engine-devel(a)ovirt.org>, "VDSM Project
> > Development" <vdsm-devel(a)lists.fedorahosted.org>
> > Sent: Wednesday, August 7, 2013 1:30:54 PM
> > Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
> > Domain
> >
> > On 08/07/2013 08:21 AM, Sahina Bose wrote:
> > > [Adding engine-devel]
> > >
> > > On 08/06/2013 10:48 AM, Deepak C Shetty wrote:
> > >> Hi All,
> > >> There were 2 learnings from BZ
> > >> https://bugzilla.redhat.com/show_bug.cgi?id=988299
> > >>
> > >> 1) Gluster RPM deps were not proper in VDSM when using Gluster
> > >> Storage
> > >> Domain. This has been partly addressed
> > >> by the gluster-devel thread @
> > >> http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg00008.html
> > >> and will be fully addressed once Gluster folks ensure their
> > >> packaging
> > >> is friendly enuf for VDSM to consume
> > >> just the needed bits. Once that happens, i will be sending a
> > >> patch to
> > >> vdsm.spec.in to update the gluster
> > >> deps correctly. So this issue gets addressed in near term.
> > >>
> > >> 2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu
> > >> 1.3.
> > >>
> > >> libvirt 1.0.1 has the support for representing gluster as a
> > >> network
> > >> block device and qemu 1.3 has the
> > >> native support for gluster block backend which supports
> > >> gluster://...
> > >> URI way of representing a gluster
> > >> based file (aka volume/vmdisk in VDSM case). Many distros (incl.
> > >> centos 6.4 in the BZ) won't have qemu
> > >> 1.3 in their distro repos! How do we handle this dep in VDSM ?
> > >>
> > >> Do we disable gluster storage domain in oVirt engine if VDSM
> > >> reports
> > >> qemu < 1.3 as part of getCapabilities ?
> > >> or
> > >> Do we ensure qemu 1.3 is present in ovirt.repo assuming
> > >> ovirt.repo is
> > >> always present on VDSM hosts in which
> > >> case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in will
> > >> install qemu 1.3 from the ovirt.repo
> > >> instead of the distro repo. This means vdsm.spec.in will have
> > >> qemu >=
> > >> 1.3 under Requires.
> > >>
> > > Is this possible to make this a conditional install? That is,
> > > only if
> > > Storage Domain = GlusterFS in the Data center, the bootstrapping
> > > of host
> > > will install the qemu 1.3 and dependencies.
> > >
> > > (The question still remains as to where the qemu 1.3 rpms will be
> > > available)
RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support, so we shouldn't need to require qemu 1.3, just the appropriate qemu-kvm version from 6.5.
https://bugzilla.redhat.com/show_bug.cgi?id=848070
> > >
> >
> > hosts are installed prior to storage domain definition usually.
> > we need to find a solution to having a qemu > 1.3 for .el6 (or
> > another
> > version of qemu with this feature set).
> >
>
> > >> What will be a good way to handle this ?
> > >> Appreciate your response
> > >>
> > >> thanx,
> > >> deepak
How to handle qemu 1.3 dep for Gluster Storage Domain
by deepakcs@linux.vnet.ibm.com
Hi All,
There were 2 learnings from BZ
https://bugzilla.redhat.com/show_bug.cgi?id=988299
1) Gluster RPM deps were not proper in VDSM when using the Gluster Storage
Domain. This has been partly addressed by the gluster-devel thread at
http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg00008.html
and will be fully addressed once the Gluster folks make their packaging
friendly enough for VDSM to consume just the needed bits. Once that
happens, I will send a patch for vdsm.spec.in to update the gluster deps
correctly. So this issue gets addressed in the near term.
2) The Gluster storage domain needs at least libvirt 1.0.1 and qemu 1.3.
libvirt 1.0.1 adds support for representing gluster as a network block
device, and qemu 1.3 has native support for the gluster block backend,
which supports the gluster://... URI way of representing a gluster-based
file (aka volume/vmdisk in the VDSM case). Many distros (including CentOS
6.4 in the BZ) won't have qemu 1.3 in their repos! How do we handle this
dependency in VDSM?
Do we disable the gluster storage domain in oVirt engine if VDSM reports
qemu < 1.3 as part of getCapabilities (a sketch of this option follows below)?
or
Do we ensure qemu 1.3 is present in ovirt.repo, assuming ovirt.repo is
always present on VDSM hosts, in which case when VDSM gets installed, the
qemu 1.3 dependency in vdsm.spec.in will install qemu 1.3 from ovirt.repo
instead of the distro repo? This means vdsm.spec.in would have qemu >= 1.3
under Requires.
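A rough sketch of what the first option could look like on the consuming
side, assuming a 'qemuVersion' style field in the reported capabilities (the
field name and the helper are illustrative, not the real getVdsCaps schema):

from distutils.version import LooseVersion

MIN_QEMU_FOR_GLUSTER = LooseVersion('1.3')

def gluster_domain_supported(caps):
    # caps would be the capability dict reported by the host.
    qemu_ver = caps.get('qemuVersion')
    if qemu_ver is None:
        return False
    return LooseVersion(qemu_ver) >= MIN_QEMU_FOR_GLUSTER

# e.g. gluster_domain_supported({'qemuVersion': '0.12.1.2'}) -> False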
What would be a good way to handle this?
Appreciate your responses.
thanx,
deepak
ovirt 3.3 RC packages
by oschreib@redhat.com
Dear maintainers,
As you probably know, we're heading towards the 3.3 release of oVirt.
I'd like to get a short status on your project and its readiness for the upcoming release.
If your project is blocker-free, please let me know which build to pick up into the RC repo.
Current known blockers (as in https://bugzilla.redhat.com/show_bug.cgi?id=918494 - Tracker: oVirt 3.3 release):
ovirt-engine
============
984586 ovirt-engine-backend infra Cannot start a VM with USB Native - Exit message: internal error Could not format channel target type.
988299 ovirt-engine-core gluster Impossible to start VM from Gluster Storage Domain
987939 ovirt-engine-installer integration engine-setup -> engine-cleanup -> engine-setup -> fails
vdsm
====
988004 vdsm network [vdsm] OSError: [Errno 2] No such file or directory: '/sys/class/net/ovirtmgmt/brif'
988065 vdsm virt Migration fails - AttributeError: 'ConsoleDevice' object has no attribute 'alias'
988397 vdsm network ovirt-node post-installation setup networks fails when NetworkManager is running
988990 vdsm network oVirt 3.3 - (vdsm-network): netinfo - ValueError: unknown bridge ens3
990854 vdsm network Multiple Gateways: Upgrade VDSM to 3.3 must reconfigure networking on host
990963 vdsm vdsm must require selinux-policy-3.12.1-68.fc19
ovirt-node
====
988986 ovirt-node libvirt network directory is not persisted
other
=====
990509 selinux-policy Current selinux policy prevents running a VM with volumes under /var/run/vdsm/storage
Thanks,
Ofer Schreiber
broken test FileVolumeGetVSizeTest in git master
by asegurap@redhat.com
Hi oVirters,
Apparently due to commit:
commit a3b4b2cb6ee0ea8dbe04413882d52d8a0e776ef8
Author: Eduardo Warszawski <ewarszaw(a)redhat.com>
Date: Wed Jul 31 04:49:52 2013 +0300
Avoid Img and Vol produces in fileVolume.getV*Size
No need for produce Images and Volumes for get the volume path
when calculating the size.
Related to BZ#960952, BZ#769502.
which adds the line:
volPath = os.path.join(sdobj.mountpoint, sdobj.sdUUID, 'images',
the test in $subj, FileVolumeGetVSizeTest is broken, since it calls:
volSize = FileVolume.getVSize(self.sdobj, self.imgUUID, self.volUUID)
which uses FileDomainMockObject (which does not have a mountpoint attribute), resulting in:
AttributeError: 'FileDomainMockObject' object has no attribute 'mountpoint'
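One possible direction, sketched below (not the actual tests/ code): give
the mock domain object the mountpoint attribute the new code path expects.
The class name is taken from the error message; the rest is illustrative.

import os

class FileDomainMockObject(object):
    def __init__(self, repoPath, sdUUID):
        self.repoPath = repoPath
        self.sdUUID = sdUUID

    @property
    def mountpoint(self):
        # Whatever directory the test prepared as the fake mount point;
        # getVSize would then join sdUUID/'images'/... onto it.
        return self.repoPath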
Best regards,
Toni
Error while rebooting VM
by Sandro Bonazzola
Hi, while rebooting the hosted-engine VM I get the following error:
vdsm-4.12.0-10.git295a069.fc19.x86_64 (nightly)
libvirt-1.0.5.4-1.fc19.x86_64 (F19)
mom-0.3.2-3.fc19.noarch (manually rebuilt from master due to missing
package in nightly)
Thread-621::DEBUG::2013-08-01
10:42:53,011::BindingXMLRPC::986::vds::(wrapper) return vmGetStats with
{'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status':
'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true',
'guestFQDN': '', 'pid': '26846', 'displayIp': '0', 'displayPort': '-1',
'session': 'Unknown', 'displaySecurePort': u'5900', 'cdrom':
'/home/Fedora-19-x86_64-DVD.iso', 'hash': '-6106157858121768929',
'balloonInfo': {}, 'pauseCode': 'NOERR', 'clientIp': '127.0.0.1',
'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr':
'00:16:3e:4f:10:9a', 'rxDropped': '0', 'rxErrors': '0', 'txDropped':
'0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state':
'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId':
'fd470849-17c9-436e-818d-16232f5b032b', 'monitorResponse': '0',
'cpuUser': '0.69', 'disks': {u'hdc': {'readLatency': '0',
'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0',
'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}, u'hda':
{'readLatency': '0', 'apparentsize': '26843545600', 'writeLatency': '0',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'flushLatency': '0',
'readRate': '0.00', 'truesize': '1594064896', 'writeRate': '0.00'}},
'boot': 'd', 'statsAge': '0.74', 'elapsedTime': '2834', 'vmType': 'kvm',
'cpuSys': '0.13', 'timeOffset': -500L, 'appsList': [], 'guestIPs': '',
'displayType': 'qxl'}]}
Thread-622::DEBUG::2013-08-01
10:42:58,024::BindingXMLRPC::979::vds::(wrapper) client
[127.0.0.1]::call vmGetStats with
('fd470849-17c9-436e-818d-16232f5b032b',) {}
Thread-622::DEBUG::2013-08-01
10:42:58,024::BindingXMLRPC::986::vds::(wrapper) return vmGetStats with
{'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status':
'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true',
'guestFQDN': '', 'pid': '26846', 'displayIp': '0', 'displayPort': '-1',
'session': 'Unknown', 'displaySecurePort': u'5900', 'cdrom':
'/home/Fedora-19-x86_64-DVD.iso', 'hash': '-6106157858121768929',
'balloonInfo': {}, 'pauseCode': 'NOERR', 'clientIp': '127.0.0.1',
'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr':
'00:16:3e:4f:10:9a', 'rxDropped': '0', 'rxErrors': '0', 'txDropped':
'0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state':
'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId':
'fd470849-17c9-436e-818d-16232f5b032b', 'monitorResponse': '0',
'cpuUser': '0.69', 'disks': {u'hdc': {'readLatency': '0',
'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0',
'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}, u'hda':
{'readLatency': '0', 'apparentsize': '26843545600', 'writeLatency': '0',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'flushLatency': '0',
'readRate': '0.00', 'truesize': '1594064896', 'writeRate': '0.00'}},
'boot': 'd', 'statsAge': '0.76', 'elapsedTime': '2839', 'vmType': 'kvm',
'cpuSys': '0.13', 'timeOffset': -500L, 'appsList': [], 'guestIPs': '',
'displayType': 'qxl'}]}
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,268::vm::4714::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::event Shutdown detail 0
opaque None
VM Channels Listener::ERROR::2013-08-01
10:42:59,335::vmChannels::53::vds::(_handle_event) Received 00000019 on
fileno 32
VM Channels Listener::DEBUG::2013-08-01
10:42:59,336::vmChannels::128::vds::(_handle_unconnected) Trying to
connect fileno 32.
VM Channels Listener::DEBUG::2013-08-01
10:42:59,336::guestIF::147::vm.Vm::(_connect)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::Attempting connection to
/var/lib/libvirt/qemu/channels/fd470849-17c9-436e-818d-16232f5b032b.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-08-01
10:42:59,336::guestIF::158::vm.Vm::(_connect)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::Failed to connect to
/var/lib/libvirt/qemu/channels/fd470849-17c9-436e-818d-16232f5b032b.com.redhat.rhevm.vdsm
with 111
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,672::vm::4714::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::event Stopped detail 0
opaque None
libvirtEventLoop::INFO::2013-08-01
10:42:59,672::vm::2092::vm.Vm::(_onQemuDeath)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::underlying process disconnected
libvirtEventLoop::INFO::2013-08-01
10:42:59,672::vm::4214::vm.Vm::(releaseVm)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::Release VM resources
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,674::libvirtconnection::101::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Dominio non
trovato: no domain with matching uuid
'fd470849-17c9-436e-818d-16232f5b032b' (oVirtHostedEngine)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,675::sampling::292::vm.Vm::(stop)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::Stop statistics collection
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,675::vmChannels::205::vds::(unregister) Delete fileno 32 from
listener.
Thread-44::DEBUG::2013-08-01 10:42:59,675::sampling::323::vm.Vm::(run)
vmId=`fd470849-17c9-436e-818d-16232f5b032b`::Stats thread finished
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,675::libvirtconnection::101::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Dominio non
trovato: no domain with matching uuid
'fd470849-17c9-436e-818d-16232f5b032b' (oVirtHostedEngine)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,676::task::579::TaskManager.Task::(_updateState)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::moving from state init ->
state preparing
libvirtEventLoop::INFO::2013-08-01
10:42:59,676::logUtils::44::dispatcher::(wrapper) Run and protect:
teardownImage(sdUUID='ab35a1ff-700b-4354-8539-bc5f0daa6348',
spUUID='0c66b59f-bfa5-475d-82c8-ac8878db2565',
imgUUID='9ac2ea13-1de5-4a60-83c5-8700a23203b7', volUUID=None)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,676::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348`ReqID=`7b99cc2f-21fe-4cd3-9ed3-4be8ffb3b37a`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3299' at 'teardownImage'
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,676::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource
'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348' for lock type 'shared'
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,676::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348' is free. Now
locking as 'shared' (1 active user)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,676::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348`ReqID=`7b99cc2f-21fe-4cd3-9ed3-4be8ffb3b37a`::Granted
request
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::task::811::TaskManager.Task::(resourceAcquired)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::_resourcesAcquired:
Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348 (shared)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::task::974::TaskManager.Task::(_decref)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::ref 1 aborting False
libvirtEventLoop::INFO::2013-08-01
10:42:59,677::logUtils::47::dispatcher::(wrapper) Run and protect:
teardownImage, Return response: None
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::task::1168::TaskManager.Task::(prepare)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::finished: None
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::task::579::TaskManager.Task::(_updateState)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::moving from state preparing
-> state finished
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348': < ResourceRef
'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348', isValid: 'True' obj:
'None'>}
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348'
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348' (0
active users)
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348' is free, finding
out if anyone is waiting for it.
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,677::resourceManager::648::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.ab35a1ff-700b-4354-8539-bc5f0daa6348', Clearing records.
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,678::task::974::TaskManager.Task::(_decref)
Task=`9be47a03-b9d9-4686-b43b-9764070a0e4f`::ref 0 aborting False
libvirtEventLoop::WARNING::2013-08-01
10:42:59,678::clientIF::384::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method
Drive._checkIoTuneCategories of <vm.Drive object at 0x2f14210>>
_customize:<bound method Drive._customize of <vm.Drive object at
0x2f14210>> _deviceXML:<disk device="cdrom" snapshot="no"
type="file"><source file="/home/Fedora-19-x86_64-DVD.iso"
startupPolicy="optional"/><target bus="ide"
dev="hdc"/><serial></serial></disk> _makeName:<bound method
Drive._makeName of <vm.Drive object at 0x2f14210>>
_validateIoTuneParams:<bound method Drive._validateIoTuneParams of
<vm.Drive object at 0x2f14210>> address:{u'bus': u'1', u'controller':
u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'} alias:ide0-1-0
apparentsize:0 blockDev:False cache:none conf:{'status': 'Up', 'bridge':
'ovirtmgmt', 'vmId': 'fd470849-17c9-436e-818d-16232f5b032b', 'pid':
'26846', 'drives': [{'index': '0', 'domainID':
'ab35a1ff-700b-4354-8539-bc5f0daa6348', 'reqsize': '0', 'name': u'hda',
'format': 'raw', 'volumeInfo': {'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'volType': 'path'}, 'address': {u'bus': u'0', u'controller': u'0',
u'type': u'drive', u'target': u'0', u'unit': u'0'}, 'volumeID':
'1b63a000-a23e-46ad-9283-e7227cca4879', 'apparentsize': '26843545600',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'alias': u'ide0-0-0',
'readonly': 'False', 'iface': 'ide', 'truesize': '0', 'poolID':
'0c66b59f-bfa5-475d-82c8-ac8878db2565', 'device': 'disk', 'shared':
False, 'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'propagateErrors': 'off', 'type': 'disk', 'volumeChain': [{'domainID':
'ab35a1ff-700b-4354-8539-bc5f0daa6348', 'vmVolInfo': {'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'volType': 'path'}, 'leaseOffset': 0, 'volumeID':
'1b63a000-a23e-46ad-9283-e7227cca4879', 'leasePath':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879.lease',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'shared': False,
'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879'}]},
{'index': 2, 'iface': 'ide', 'name': u'hdc', 'format': 'raw', 'address':
{u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0',
u'unit': u'0'}, 'alias': u'ide0-1-0', 'readonly': 'True',
'propagateErrors': 'off', 'shared': False, 'device': 'cdrom', 'path':
'/home/Fedora-19-x86_64-DVD.iso', 'truesize': 0, 'type': 'disk'}],
'cdrom': '/home/Fedora-19-x86_64-DVD.iso', 'displaySecurePort': u'5900',
'displayPort': '-1', 'pauseCode': 'NOERR', 'clientIp': '127.0.0.1',
'nicModel': 'virtio', 'macAddr': '00:16:3e:4f:10:9a', 'vmName':
'oVirtHostedEngine', 'boot': 'd', 'devices': [{'device': 'memballoon',
'specParams': {'model': 'none'}, 'type': 'balloon'}, {'device':
'virtio-serial', 'alias': u'virtio-serial0', 'type': 'controller',
'address': {u'slot': u'0x04', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}}, {'device': 'qxl', 'specParams':
{'vram': '65536'}, 'alias': u'video0', 'type': 'video', 'address':
{u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}}, {'nicModel': 'virtio', 'macAddr':
'00:16:3e:4f:10:9a', 'linkActive': True, 'network': 'ovirtmgmt',
'alias': u'net0', 'address': {u'slot': u'0x03', u'bus': u'0x00',
u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, 'device':
'bridge', 'type': 'interface', 'name': u'vnet0'}, {'index': '0',
'domainID': 'ab35a1ff-700b-4354-8539-bc5f0daa6348', 'reqsize': '0',
'name': u'hda', 'format': 'raw', 'volumeInfo': {'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'volType': 'path'}, 'address': {u'bus': u'0', u'controller': u'0',
u'type': u'drive', u'target': u'0', u'unit': u'0'}, 'volumeID':
'1b63a000-a23e-46ad-9283-e7227cca4879', 'apparentsize': '26843545600',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'alias': u'ide0-0-0',
'readonly': 'False', 'iface': 'ide', 'truesize': '0', 'poolID':
'0c66b59f-bfa5-475d-82c8-ac8878db2565', 'device': 'disk', 'shared':
False, 'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'propagateErrors': 'off', 'type': 'disk', 'volumeChain': [{'domainID':
'ab35a1ff-700b-4354-8539-bc5f0daa6348', 'vmVolInfo': {'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879',
'volType': 'path'}, 'leaseOffset': 0, 'volumeID':
'1b63a000-a23e-46ad-9283-e7227cca4879', 'leasePath':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879.lease',
'imageID': '9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'shared': False,
'path':
'/rhev/data-center/0c66b59f-bfa5-475d-82c8-ac8878db2565/ab35a1ff-700b-4354-8539-bc5f0daa6348/images/9ac2ea13-1de5-4a60-83c5-8700a23203b7/1b63a000-a23e-46ad-9283-e7227cca4879'}]},
{'index': 2, 'iface': 'ide', 'name': u'hdc', 'format': 'raw', 'address':
{u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0',
u'unit': u'0'}, 'alias': u'ide0-1-0', 'readonly': 'True',
'propagateErrors': 'off', 'shared': False, 'device': 'cdrom', 'path':
'/home/Fedora-19-x86_64-DVD.iso', 'truesize': 0, 'type': 'disk'},
{'device': u'usb', 'alias': u'usb0', 'type': 'controller', 'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x2'}}, {'device': u'ide', 'alias': u'ide0',
'type': 'controller', 'address': {u'slot': u'0x01', u'bus': u'0x00',
u'domain': u'0x0000', u'type': u'pci', u'function': u'0x1'}}, {'device':
u'unix', 'alias': u'channel0', 'type': u'channel', 'address': {u'bus':
u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'1'}},
{'device': u'unix', 'alias': u'channel1', 'type': u'channel', 'address':
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port':
u'2'}}, {'device': u'spicevmc', 'alias': u'channel2', 'type':
u'channel', 'address': {u'bus': u'0', u'controller': u'0', u'type':
u'virtio-serial', u'port': u'3'}}], 'smp': '2', 'vmType': 'kvm',
'timeOffset': -500L, 'memSize': '4096', 'spiceSecureChannels':
'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
'displayIp': '0', 'display': 'qxl'} createXmlElem:<bound method
Drive.createXmlElem of <vm.Drive object at 0x2f14210>> device:cdrom
drv:raw format:raw getNextVolumeSize:<bound method
Drive.getNextVolumeSize of <vm.Drive object at 0x2f14210>> getXML:<bound
method Drive.getXML of <vm.Drive object at 0x2f14210>> iface:ide index:2
isDiskReplicationInProgress:<bound method
Drive.isDiskReplicationInProgress of <vm.Drive object at 0x2f14210>>
isVdsmImage:<bound method Drive.isVdsmImage of <vm.Drive object at
0x2f14210>> log:<logUtils.SimpleLogAdapter object at 0x2f23650> name:hdc
networkDev:False path:/home/Fedora-19-x86_64-DVD.iso propagateErrors:off
readonly:True reqsize:0 serial: shared:False truesize:0 type:cdrom
volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
File "/usr/share/vdsm/clientIF.py", line 378, in teardownVolumePath
res = self.irs.teardownImage(drive['domainID'],
File "/usr/share/vdsm/vm.py", line 1343, in __getitem__
raise KeyError(key)
KeyError: 'domainID'
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,679::task::579::TaskManager.Task::(_updateState)
Task=`23ebd2c4-84a1-4abe-a656-fa27d36d8425`::moving from state init ->
state preparing
libvirtEventLoop::INFO::2013-08-01
10:42:59,679::logUtils::44::dispatcher::(wrapper) Run and protect:
inappropriateDevices(thiefId='fd470849-17c9-436e-818d-16232f5b032b')
libvirtEventLoop::INFO::2013-08-01
10:42:59,680::logUtils::47::dispatcher::(wrapper) Run and protect:
inappropriateDevices, Return response: None
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,680::task::1168::TaskManager.Task::(prepare)
Task=`23ebd2c4-84a1-4abe-a656-fa27d36d8425`::finished: None
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,680::task::579::TaskManager.Task::(_updateState)
Task=`23ebd2c4-84a1-4abe-a656-fa27d36d8425`::moving from state preparing
-> state finished
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,680::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,681::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
libvirtEventLoop::DEBUG::2013-08-01
10:42:59,681::task::974::TaskManager.Task::(_decref)
Task=`23ebd2c4-84a1-4abe-a656-fa27d36d8425`::ref 0 aborting False
libvirtEventLoop::ERROR::2013-08-01
10:42:59,681::clientIF::612::vds::(dispatchLibvirtEvents) Error running
VM callback
Traceback (most recent call last):
File "/usr/share/vdsm/clientIF.py", line 584, in dispatchLibvirtEvents
v._onLibvirtLifecycleEvent(event, detail, None)
File "/usr/share/vdsm/vm.py", line 4728, in _onLibvirtLifecycleEvent
self._onQemuDeath()
File "/usr/share/vdsm/vm.py", line 2095, in _onQemuDeath
response = self.releaseVm()
File "/usr/share/vdsm/vm.py", line 4246, in releaseVm
self.cif.removeVmFromMonitoredDomains(self.id)
File "/usr/share/vdsm/clientIF.py", line 125, in
removeVmFromMonitoredDomains
for dom in self.domainVmIds:
RuntimeError: dictionary changed size during iteration
Thread-24::DEBUG::2013-08-01
10:43:00,274::fileSD::238::Storage.Misc.excCmd::(getReadDelay)
'/usr/bin/dd iflag=direct
if=/rhev/data-center/mnt/192.168.1.104:_home_images/ab35a1ff-700b-4354-8539-bc5f0daa6348/dom_md/metadata
bs=4096 count=1' (cwd None)
The VM is in the following state:
# vdsClient -s localhost getVmStats fd470849-17c9-436e-818d-16232f5b032b
fd470849-17c9-436e-818d-16232f5b032b
Status = Powering down
username = Unknown
memUsage = 0
acpiEnable = true
guestFQDN =
displayPort = -1
session = Unknown
displaySecurePort = 5900
timeOffset = -500
balloonInfo = {}
pauseCode = NOERR
network = {'vnet0': {'macAddr': '00:16:3e:4f:10:9a', 'rxDropped':
'0', 'txDropped': '0', 'rxErrors': '0', 'txRate': '0.0', 'rxRate':
'0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name':
'vnet0'}}
displayType = qxl
cpuUser = 0.69
boot = d
elapsedTime = 3400
vmType = kvm
cpuSys = 0.13
appsList = []
hash = -6106157858121768929
pid = 26846
displayIp = 0
cdrom = /home/Fedora-19-x86_64-DVD.iso
guestIPs =
kvmEnable = true
disks = {'hdc': {'readLatency': '0', 'apparentsize': '0',
'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00',
'truesize': '0', 'writeRate': '0.00'}, 'hda': {'readLatency': '0',
'apparentsize': '26843545600', 'writeLatency': '0', 'imageID':
'9ac2ea13-1de5-4a60-83c5-8700a23203b7', 'flushLatency': '0', 'readRate':
'0.00', 'truesize': '1594064896', 'writeRate': '0.00'}}
monitorResponse = -1
statsAge = 559.96
clientIp = 127.0.0.1
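Two distinct problems show up in the log above: teardownVolumePath raises
KeyError: 'domainID' for the cdrom drive (which, as the warning says, is not
a vdsm image), and the lifecycle callback dies with "dictionary changed size
during iteration". The latter is a classic concurrent-mutation issue; below
is an isolated illustration and one common fix (iterating over a snapshot of
the keys), not the actual vdsm code:

# Illustration only: why "dictionary changed size during iteration" fires,
# and one common fix. In vdsm the dict is apparently mutated while
# removeVmFromMonitoredDomains iterates it.
domain_vm_ids = {'sd-uuid-1': {'vm-a'}, 'sd-uuid-2': {'vm-b'}}

def remove_vm(vm_id):
    # Fragile: deleting entries (or having another thread do so) while
    # iterating the dict itself raises RuntimeError.
    # Safer: iterate over a snapshot of the keys; a lock shared with the
    # writers would also be needed for full thread safety.
    for dom in list(domain_vm_ids):
        domain_vm_ids[dom].discard(vm_id)
        if not domain_vm_ids[dom]:
            del domain_vm_ids[dom]

remove_vm('vm-a')
print(domain_vm_ids)  # {'sd-uuid-2': {'vm-b'}}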
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com