[fedora-virt] KVM high availability

iarly selbir iarlyy at gmail.com
Tue Jan 18 13:06:14 UTC 2011


I configured a Gluster volume to work as my backend storage instead of GFS,
but I can't finish setting up a VM. After clicking Finish in virt-manager it
shows me this error:

Unable to complete install 'libvirt.libvirtError internal error unable to
start guest: qemu: could not open disk image
/var/lib/libvirt/images/test.img

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/create.py", line 724, in do_install
    dom = guest.start_install(False, meter = meter)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 541, in start_install
    return self._do_install(consolecb, meter, removeOld, wait)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 633, in _do_install
    self.domain = self.conn.createLinux(install_xml, 0)
  File "/usr/lib64/python2.4/site-packages/libvirt.py", line 974, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/test.img

# /var/log/messages

Jan 18 05:36:05 kvmsrv001 libvirtd: 05:36:05.227: error : internal error Timed out while reading console log output
Jan 18 05:36:05 kvmsrv001 libvirtd: 05:36:05.227: error : internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/test.img


The file test.img is created but the domain is not. If I unmount
/var/lib/libvirt/images (the mount point of my Gluster volume), everything
works fine.

Has anyone run into this?
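
A plausible culprit, since the open only fails when the image lives on the
GlusterFS mount: with cache='none' qemu opens the disk with O_DIRECT, and
GlusterFS FUSE mounts of that era did not support O_DIRECT, so the open can
fail even though the file exists. Below is a minimal sketch of the guest's
<disk> element with the cache mode relaxed; only the image path is taken
from the error above, while the format, bus and target device are
assumptions:

    <disk type='file' device='disk'>
      <!-- cache='writethrough' avoids the O_DIRECT open() that a
           GlusterFS FUSE mount may reject; checking SELinux denials
           against the FUSE mount is the other thing to rule out -->
      <driver name='qemu' type='raw' cache='writethrough'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>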


- -
iarlyy selbir

:wq!



On Tue, Jan 18, 2011 at 3:52 AM, wariola <wariola at gmail.com> wrote:

> I think you can use the cluster suite and fence through libvirt as a
> fencing device.
>
> (Though I only know the theory; I've never tried it myself.)
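>
> One way to read that, as a rough sketch: the fence_virsh agent from the
> fence-agents package logs into a KVM host over ssh and uses virsh to
> power off a stuck guest. A hypothetical cluster.conf fragment follows;
> every name, address and credential is a made-up placeholder, untested:
>
>     <clusternode name="guest001" nodeid="1">
>       <fence>
>         <method name="virsh">
>           <!-- port = the libvirt domain name to fence -->
>           <device name="kvm001-fence" port="guest001"/>
>         </method>
>       </fence>
>     </clusternode>
>     <fencedevices>
>       <fencedevice agent="fence_virsh" name="kvm001-fence"
>                    ipaddr="kvm001.example.com" login="root"
>                    passwd="secret"/>
>     </fencedevices>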
>
> -wariola-
>
> On Mon, Jan 17, 2011 at 4:17 PM, Dor Laor <dlaor at redhat.com> wrote:
>
>> On 01/14/2011 08:34 PM, iarly selbir wrote:
>> > Sorry, I forgot to mention it. Yes, I have that configuration: two
>> > failover domains, the first with host-1 on top and the other with
>> > host-2 on top.
>> >
>> > My question is: how are you configuring your guest resources to fail
>> > over from one host to another? Remember that I have the same machines
>> > defined on both KVM hosts, i.e. kvm001 has guest001 on it, and
>> > kvmsrv002 also has guest001 (which must be powered on only if guest001
>> > on kvm001 fails).
>>
>> Using heartbeat or a lightweight cluster manager à la the Linux-HA
>> package might be a good option for you.
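>>
>> For illustration, a rough sketch of what a guest resource could look
>> like in Pacemaker (the Linux-HA successor) CIB XML, using the
>> ocf:heartbeat:VirtualDomain agent. The domain name, config path and
>> migration setting are assumptions, not a tested configuration:
>>
>>     <primitive id="guest001" class="ocf" provider="heartbeat"
>>                type="VirtualDomain">
>>       <instance_attributes id="guest001-params">
>>         <!-- libvirt domain XML the agent defines/starts the guest from -->
>>         <nvpair id="guest001-config" name="config"
>>                 value="/etc/libvirt/qemu/guest001.xml"/>
>>         <nvpair id="guest001-hv" name="hypervisor"
>>                 value="qemu:///system"/>
>>       </instance_attributes>
>>       <meta_attributes id="guest001-meta">
>>         <!-- let the cluster live-migrate instead of stop/start -->
>>         <nvpair id="guest001-migrate" name="allow-migrate" value="true"/>
>>       </meta_attributes>
>>     </primitive>
>>
>> The point either way: define each domain as exactly one cluster
>> resource, so the cluster (not libvirt autostart) decides which host
>> runs guest001, and the copy on the other host stays off until failover.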
>>
>> >
>> > I hope that's clear enough.
>> >
>> > Thank you so much.
>> >
>> > - -
>> > iarlyy selbir
>> >
>> > :wq!
>> >
>> >
>> >
>> > On Fri, Jan 14, 2011 at 3:18 PM, Thomas Sjolshagen
>> > <thomas at sjolshagen.net> wrote:
>> >
>> >     On Fri, 14 Jan 2011 14:19:52 -0300, iarly selbir
>> >     <iarlyy at gmail.com> wrote:
>> >
>> >>     Hi there,
>> >>     I'm joining today and would like to share my knowledge of
>> >>     virtualization and learn more. =)
>> >>     When KVM-HOST-001 fails, KVM-HOST-002 should take over all
>> >>     machines from the other host. I'm sharing a storage volume
>> >>     between the two nodes (GFS2), so all hosts can see the guest
>> >>     images, but how do I configure the cluster resources to migrate
>> >>     the guests? That is my question, and any suggestions will be
>> >>     appreciated.
>> >     Assuming you're using libvirt to manage the VMs (guests), I'd
>> >     configure them as <vm> resources in rgmanager and make them
>> >     members of a failover domain that gives the highest priority
>> >     (which shows up as the lowest priority number in the example) to
>> >     the KVM-HOST-* you want the guest to start on (if it's available).
>> >     Here are a couple of example <vm> resources I have configured in
>> >     my 2-node GFS2-based KVM cluster (some of the info in the vm
>> >     resource tags is actually not necessary, but I was both
>> >     experimenting and playing it safe when I set this up):
>> >     <rm>
>> >       <failoverdomains>
>> >         <failoverdomain name="prefer-virt0" restricted="0" ordered="1">
>> >           <failoverdomainnode name="virt0-backup" priority="10"/>
>> >           <failoverdomainnode name="virt1-backup" priority="20"/>
>> >         </failoverdomain>
>> >         <failoverdomain name="prefer-virt1" restricted="0" ordered="1">
>> >           <failoverdomainnode name="virt1-backup" priority="10"/>
>> >           <failoverdomainnode name="virt0-backup" priority="20"/>
>> >         </failoverdomain>
>> >       </failoverdomains>
>> >       <!-- VM resources -->
>> >       <vm name="imap1" autostart="1" recovery="restart" migrate="live"
>> >           domain="prefer-virt0" use_virsh="1" hypervisor="qemu"/>
>> >       <vm name="imap2" autostart="1" recovery="restart" migrate="live"
>> >           domain="prefer-virt1" use_virsh="1" hypervisor="qemu"/>
>> >     </rm>
>> >     Hope this helps to illustrate.
>> >     // Thomas
>> >
>> >
>> >
>> >
>
>
>
> --
> .: war|ola :.
> Use Fedora Linux for a better computing experience
> http://fedoraproject.org
>