On 01/18/2011 02:06 PM, iarly selbir wrote:
I configured a Gluster volume to work as my backend storage instead of
gfs, but I can't finish the setup of a VM; after clicking Finish in
virt-manager it shows me this error:
Unable to complete install 'libvirt.libvirtError internal error unable
to start guest: qemu: could not open disk image
/var/lib/libvirt/images/test.img

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/create.py", line 724, in do_install
    dom = guest.start_install(False, meter = meter)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 541, in start_install
    return self._do_install(consolecb, meter, removeOld, wait)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 633, in _do_install
    self.domain = self.conn.createLinux(install_xml, 0)
  File "/usr/lib64/python2.4/site-packages/libvirt.py", line 974, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error unable to start guest: qemu: could not open
disk image /var/lib/libvirt/images/test.img
# /var/log/messages
Jan 18 05:36:05 kvmsrv001 libvirtd: 05:36:05.227: error : internal error Timed out while reading console log output
Jan 18 05:36:05 kvmsrv001 libvirtd: 05:36:05.227: error : internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/test.img
The file test.img is created but the domain is not. If I umount
/var/lib/libvirt/images (the mount point of my gluster volume),
everything works fine.

Has anyone experienced this?
It seems like there is some sort of access/permission issue. Since your
disk should be shared, make sure it is referenced in the libvirt XML
and that the file is writable.
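A minimal check from the shell might look like this (a sketch, not a
definitive diagnosis; the stand-in path lets it run anywhere, but on the
real host you would point IMG at the image on the gluster mount):

```shell
# Stand-in path so the sketch can be tried anywhere; on the real host,
# use the image on the gluster mount instead
# (/var/lib/libvirt/images/test.img from the error above).
IMG="${IMG:-/tmp/test.img}"
touch "$IMG"

# Show owner and mode, then test writability for the current user.
ls -l "$IMG"
if [ -w "$IMG" ]; then
    echo "writable"
else
    echo "not writable"
fi
```

On many distros qemu runs as a dedicated user rather than root, so it's
worth repeating the check as that user too, e.g.
`sudo -u qemu test -w /var/lib/libvirt/images/test.img`.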
- -
iarlyy selbir
:wq!
On Tue, Jan 18, 2011 at 3:52 AM, wariola@gmail.com wrote:
I think you can use Cluster Suite and fence through libvirt as a
fencing device.
(Though I only know the theory; I've never tried it myself.)
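In theory the call would go through the fence_virsh agent, roughly like
this (host name, credentials, and domain name below are placeholders,
and the command is guarded so it is harmless where the agent is absent):

```shell
# Placeholders: kvmsrv001 (KVM host), root/secret (ssh credentials),
# guest001 (libvirt domain). fence_virsh logs into the host over ssh
# and drives virsh to power-cycle the named guest.
if command -v fence_virsh >/dev/null 2>&1; then
    fence_virsh -a kvmsrv001 -l root -p secret -n guest001 -o reboot
else
    echo "fence_virsh not installed"
fi
```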
-wariola-
On Mon, Jan 17, 2011 at 4:17 PM, Dor Laor <dlaor@redhat.com> wrote:
On 01/14/2011 08:34 PM, iarly selbir wrote:
> Sorry, I forgot to mention it. Yes, I have this configuration: two
> failover domains, the first with host-1 on top, and the other with
> host-2 on top.
>
> My question is: how are you guys configuring your guest resources to
> fail over from one host to another, remembering that I have the same
> machines on two KVM hosts, i.e. kvm001 has guest001 on it, and
> kvmsrv002 has guest001 (which must be powered on only if guest001 on
> kvm001 fails)?
Using heartbeat/lightweight cluster management, à la the Linux-HA
package, might be a good option for you.
>
> I hope I'm being clear enough.
>
> Thank you so much.
>
> - -
> iarlyy selbir
>
> :wq!
>
> On Fri, Jan 14, 2011 at 3:18 PM, Thomas Sjolshagen
> <thomas@sjolshagen.net> wrote:
>
> On Fri, 14 Jan 2011 14:19:52 -0300, iarly selbir
> <iarlyy@gmail.com> wrote:
>
>> Hi there,
>>
>> I'm joining today and would like to share my knowledge of
>> virtualization and gain more. =)
>>
>> When KVM-HOST-001 fails, KVM-HOST-002 should take over all the
>> machines from the other host. I'm sharing a storage volume (gfs2)
>> between the two nodes, so all hosts can see the guest images, but
>> how do I configure the cluster resources to migrate the guests?
>> This is my question, and any suggestions will be appreciated.
> Assuming you're using libvirt to manage the VMs (guests), I'd
> configure them as <vm> resources in rgmanager and make them members
> of a failover domain whose highest priority (shown as the lowest
> priority number in the example) points to the KVM-HOST-* you want
> the guest to start on (if it's available).
>
> Here are a couple of example <vm> resources I have configured in my
> 2-node GFS2-based KVM cluster (some of the info in the vm resource
> tags is actually not necessary, but I was both experimenting and
> playing it safe when I set this up):
> <rm>
>   <failoverdomains>
>     <failoverdomain name="prefer-virt0" restricted="0" ordered="1">
>       <failoverdomainnode name="virt0-backup" priority="10" />
>       <failoverdomainnode name="virt1-backup" priority="20" />
>     </failoverdomain>
>     <failoverdomain name="prefer-virt1" restricted="0" ordered="1">
>       <failoverdomainnode name="virt1-backup" priority="10" />
>       <failoverdomainnode name="virt0-backup" priority="20" />
>     </failoverdomain>
>   </failoverdomains>
>
>   <!-- VM resources -->
>   <vm name="imap1" autostart="1" recovery="restart" migrate="live"
>       domain="prefer-virt0" use_virsh="1" hypervisor="qemu" />
>   <vm name="imap2" autostart="1" recovery="restart" migrate="live"
>       domain="prefer-virt1" use_virsh="1" hypervisor="qemu" />
> </rm>
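With resources like these defined, the guests are driven from
rgmanager's CLI; a rough sketch using the service and node names from
the config above (guarded so it is harmless on a box without the tools):

```shell
# Show cluster and service status if rgmanager's tools are installed.
if command -v clustat >/dev/null 2>&1; then
    clustat
else
    echo "clustat not installed"
fi

# Live-migrate the imap1 guest to the other node; vm:imap1 and
# virt1-backup come from the example cluster.conf above.
if command -v clusvcadm >/dev/null 2>&1; then
    clusvcadm -M vm:imap1 -m virt1-backup
else
    echo "clusvcadm not installed"
fi
```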
> Hope this helps to illustrate.
> // Thomas
>
> _______________________________________________
> virt mailing list
> virt@lists.fedoraproject.org
> https://admin.fedoraproject.org/mailman/listinfo/virt
--
.: war|ola :.
Use Fedora Linux for a better computing experience
http://fedoraproject.org