On Thu, Sep 24, 2009 at 5:42 PM, gianluca.cecchi
<gianluca.cecchi(a)gmail.com> wrote:
On Tue, Sep 22, 2009 at 3:00 PM, Mark McLoughlin <markmc(a)redhat.com> wrote:
On Tue, 2009-09-22 at 10:05 +0200, Gianluca Cecchi wrote:
> > Yeah, I think there's a bug currently in the code. I believe markmc has
> > recently ported that patch back, though, so upgrading to a newer version
> > (when he pushes it) should fix the bug.
> >
> > --
> > Chris Lalancette
> >
>
> OK, I'll wait for the fix and then I'll test.
I think the fix Chris means is in libvirt-0.7.1-5
Cheers,
Mark.
I confirm that with that version of libvirt I'm able to save my VM
(or at least the command completes; I haven't tried to restore yet ;-)
Inside the saved data file I can see that the xml config for the guest
is also embedded ....
As the VM is killed at the end of the save, can I then:
- copy the disk(s) of the VM to another host
- copy the save file to another host
- restore the save on that secondary host, given that it retains the
same config?
At this time, trying to restore on another host that doesn't have the
same storage pool config gives this:
Error restoring domain '/mnt/rhel54_x86_64': internal error unable
to start guest: qemu: could not open disk image /dev/vg_qemu01/rhel53_64
and in the details window:
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/manager.py", line 458, in restore_saved_callback
    newconn.restore(file_to_load)
  File "/usr/share/virt-manager/virtManager/connection.py", line 642, in restore
    self.vmm.restore(frm)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1420, in restore
    if ret == -1: raise libvirtError ('virDomainRestore() failed', conn=self)
libvirtError: internal error unable to start guest: qemu: could not open disk image /dev/vg_qemu01/rhel53_64
Supposing I recreate a pool with the same paths and copy over the hard
disk of the VM, are there other limitations/restrictions on restoring
on another host?
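For reference, the save/copy/restore workflow being asked about would look roughly like this with the virsh CLI. This is only a sketch: the hostname, disk copy method, and assumption that the target host has an identical LV layout are all hypothetical, and the script deliberately just prints each command (dry run) so nothing is executed as-is.

```shell
# Sketch: move a saved VM to a second host. TARGET and the copy method
# are hypothetical placeholders -- adapt them to your environment.
DOMAIN=rhel53_64
TARGET=qemu02.example.com
SAVEFILE=/mnt/rhel54_240909_1750.save
DISK=/dev/vg_qemu01/rhel53_64

# Dry run: print each step instead of executing it.
run() { echo "+ $*"; }

# 1. Save the guest; the save file embeds the domain XML.
run virsh save "$DOMAIN" "$SAVEFILE"

# 2. Copy the disk and the save file to the target. The disk here is an
#    LV, so dd over ssh is one option; shared storage would avoid this.
run "dd if=$DISK | ssh root@$TARGET dd of=$DISK"
run scp "$SAVEFILE" "root@$TARGET:$SAVEFILE"

# 3. Restore on the target. The embedded XML still references $DISK, so
#    the same pool/LV paths must exist there, as the error above shows.
run ssh "root@$TARGET" virsh restore "$SAVEFILE"
```

The key point the error message illustrates: virsh restore replays the embedded XML verbatim, so every disk path in it must resolve on the restoring host.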
BTW: the restore on the same host (with the VM still stopped, as at the
end of the save) doesn't work either. I get:
Error restoring domain '/mnt/rhel54_240909_1750.save': Unable to read
QEMU help output: Interrupted system call
And inside the details window:
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/manager.py", line 458, in restore_saved_callback
    newconn.restore(file_to_load)
  File "/usr/share/virt-manager/virtManager/connection.py", line 642, in restore
    self.vmm.restore(frm)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1420, in restore
    if ret == -1: raise libvirtError ('virDomainRestore() failed', conn=self)
libvirtError: Unable to read QEMU help output: Interrupted system call
[root@virtfedbis qemu]# ll /mnt/rhel54_240909_1750.save
-rw------- 1 root root 334221504 2009-09-24 17:51
/mnt/rhel54_240909_1750.save
The VM has an 8 GB hard disk and 4 GB of RAM.
If I want to debug by setting LIBVIRTD_DEBUG=1 and then running the
restore command from the command line, what would that command be, so
that I can file this in Bugzilla (if it's not already there)?
Inside /var/log/messages I have this:
Sep 24 17:51:51 virtfedbis kernel: libvirtd[25820]: segfault at 0 ip
00000039762a4822 sp 00007f20a7546880 error 4 in libc-2.10.1.so[3976200000+164000]
Sep 24 17:51:51 virtfedbis libvirtd: 17:51:51.379: error :
qemudExtractVersionInfo:1032 : Unable to read QEMU help output:
Interrupted system call
OK, yeah, it looks like libvirtd segfaulted, which is obviously not good.
You'll want to stop libvirtd, then run (as root):
# ulimit -c unlimited
# LIBVIRT_DEBUG=1 /usr/sbin/libvirtd --verbose
Assuming you get a core out of it, the best information to get is the output of
"thread apply all bt" with gdb.
--
Chris Lalancette
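Putting Chris's steps together, the whole debug session would look something like the sketch below. The core file name and the debuginfo package set are hypothetical (the actual core name depends on your core_pattern); the script only prints the commands, since they require root and an actual crash.

```shell
# Sketch of the debugging steps described above. CORE is a hypothetical
# example name -- the real one depends on the system's core_pattern.
CORE=core.25820

# Dry run: print each step instead of executing it.
run() { echo "+ $*"; }

# 1. Allow core dumps, then start libvirtd in the foreground with
#    debug logging, as suggested.
run ulimit -c unlimited
run "LIBVIRT_DEBUG=1 /usr/sbin/libvirtd --verbose"

# 2. Reproduce the crash (the failing restore), then feed the core to
#    gdb in batch mode to capture all thread backtraces. Installing
#    debuginfo first makes the backtrace symbols readable.
run debuginfo-install libvirt glibc
run gdb /usr/sbin/libvirtd "$CORE" -batch -ex "thread apply all bt"
```

The `-batch -ex "thread apply all bt"` form dumps every thread's backtrace non-interactively, which is convenient for pasting into a Bugzilla report.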