Spec weirdness
by Jesse Keating
I'm looking at the spec file for libguestfs, and all I can say is WTF.
There is a lot of craziness going on in this spec: chroots within
chroots, making a repo out of yum cache packages and using it again,
calling qemu, and none of it is really documented in the spec as to
what is being done or why.
The review for this is extremely light on comments regarding the spec
file construction as well; it looks very suspiciously like "it built
and passed rpmlint, let it in!"
I'd really like to see some comments about what's going on here, and
maybe some discussion on public lists about what you're trying to do
and whether there are better ways to do it within our build system.
--
Jesse Keating
Fedora -- Freedom² is a feature!
identi.ca: http://identi.ca/jkeating
14 years, 8 months
Am I using kvm?
by Tom Horsley
That is the question :-).
I've used virt-manager on F11 to create a VM, in which I am currently
installing F11, but I don't see the string "kvm" in anything
involved in this VM. How can I tell what kind of emulation is being
used for the machine? (Shouldn't that be mentioned in the Details
tab somewhere?)
The machine is a Xeon system where "vmx" does show up in /proc/cpuinfo
and lsmod shows a "kvm" module. Is it just a given that I will
therefore be using kvm? The only "Virt Type" virt-manager offered me
when doing the install was "qemu".
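One way to answer this from the shell (a sketch, not authoritative: it only checks the usual preconditions on an F11 host): qemu-kvm can use hardware virtualization when the CPU advertises vmx (Intel) or svm (AMD) and /dev/kvm exists.

```shell
# Rough check for whether kvm acceleration is even possible here:
# the CPU flag must be present and the kvm device node must exist.
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo && [ -e /dev/kvm ]; then
    echo "kvm usable"
else
    echo "kvm not usable"
fi
```

For a specific guest, `virsh dumpxml <name>` is more direct: the opening `<domain type='kvm'>` versus `<domain type='qemu'>` tells you which emulation that domain was defined with.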
14 years, 8 months
AVC denials on F-11
by Jerry James
I just did a yum upgrade this morning, and got glibc-2.10.1-4.x86_64,
where the top ChangeLog entry says:
* Tue Aug 04 2009 Andreas Schwab <schwab(a)redhat.com> - 2.10.1-4
- Reenable setuid on pt_chown.
Now trying to start a virtual machine with virt-manager yields this AVC denial:
node=localhost.localdomain type=AVC msg=audit(1250004330.149:46142):
avc: denied { setrlimit } for pid=18539 comm="qemu-kvm"
scontext=system_u:system_r:svirt_t:s0:c141,c175
tcontext=system_u:system_r:svirt_t:s0:c141,c175 tclass=process
node=localhost.localdomain type=SYSCALL
msg=audit(1250004330.149:46142): arch=c000003e syscall=160 success=no
exit=-13 a0=4 a1=7fff65c9ef50 a2=0 a3=7fac28fde220 items=0 ppid=18535
pid=18539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="qemu-kvm"
exe="/usr/bin/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c141,c175
key=(null)
... and two instances of this AVC denial:
node=localhost.localdomain type=AVC msg=audit(1250004330.150:46143):
avc: denied { setattr } for pid=18539 comm="pt_chown" name="6"
dev=devpts ino=9 scontext=system_u:system_r:svirt_t:s0:c141,c175
tcontext=system_u:object_r:devpts_t:s0:c141,c175 tclass=chr_file
node=localhost.localdomain type=SYSCALL
msg=audit(1250004330.150:46143): arch=c000003e syscall=92 success=no
exit=-13 a0=7fc194e8f1d0 a1=0 a2=5 a3=7fff01c34de0 items=0 ppid=18535
pid=18539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="pt_chown"
exe="/usr/libexec/pt_chown"
subj=system_u:system_r:svirt_t:s0:c141,c175 key=(null)
... and a dialog box from virt-manager that says:
"Error starting domain: internal error unable to start guest: qemu:
could not open monitor device 'pty'"
with this traceback:
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/engine.py", line 493, in run_domain
vm.startup()
File "/usr/share/virt-manager/virtManager/domain.py", line 573, in startup
self.vm.create()
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 287, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error unable to start guest: qemu: could not
open monitor device 'pty'
Whose bug is this? Also, is there anything to be done about this
besides rolling glibc back to its previous version?
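For what it's worth, the fields that matter in a denial like the ones above can be pulled out mechanically (they are what tools like audit2allow key on). A small sketch, using the pt_chown record quoted above as canned input:

```shell
# Extract the denied permission, object class, and target context
# from an AVC record (input is the pt_chown denial quoted above).
avc='avc: denied { setattr } for pid=18539 comm="pt_chown" name="6" dev=devpts ino=9 scontext=system_u:system_r:svirt_t:s0:c141,c175 tcontext=system_u:object_r:devpts_t:s0:c141,c175 tclass=chr_file'
perm=$(printf '%s\n' "$avc" | sed -n 's/.*denied { \([^}]*\) }.*/\1/p')
tclass=$(printf '%s\n' "$avc" | sed -n 's/.*tclass=\([a-z_]*\).*/\1/p')
tctx=$(printf '%s\n' "$avc" | sed -n 's/.*tcontext=\([^ ]*\).*/\1/p')
echo "denied=$perm class=$tclass tcontext=$tctx"
```

On a real system you would not paste the record by hand: `ausearch -m avc -ts recent | audit2allow -M mysvirt` followed by `semodule -i mysvirt.pp` builds a local policy module as a stopgap, though for a denial introduced by a package update the right long-term answer is a policy or glibc fix from the distro.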
--
Jerry James
http://www.jamezone.org/
14 years, 8 months
fc11 and xen
by Jon R.
Hello List,
I would like to install xen with dom0 and HVM support on FC11 with a
recent kernel. Is this possible and if so, is there any documentation
that would help me to get the right pieces?
Thank you for any help,
Jon
14 years, 8 months
no virbr0 with libvirt-0.7.0-2
by Gianluca Cecchi
With libvirt-0.7.0-0.9.gite195b43.fc11.x86_64 everything was OK;
with libvirt-0.7.0-2.fc11.x86_64, updated today, virbr0 is broken.
It doesn't start automatically as configured, and in /var/log/messages
I find this:
Aug 7 12:57:43 virtfed libvirtd: 12:57:43.559: error :
networkDisableIPV6:806 : cannot enable
/proc/sys/net/ipv6/conf/virbr0/disable_ipv6: No such file or directory
Previously I didn't get this error (and NetworkManager and ipv6 were
already disabled).
Trying to manually start the "default" network from the virt-manager
host details window, I get this traceback:
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/host.py", line 247, in start_network
net.start()
File "/usr/share/virt-manager/virtManager/network.py", line 71, in start
self.net.create()
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 620, in create
if ret == -1: raise libvirtError ('virNetworkCreate() failed', net=self)
libvirtError: cannot enable
/proc/sys/net/ipv6/conf/virbr0/disable_ipv6: No such file or directory
Current config files on my f11 x86_64 with fedora-virt-preview repo enabled:
[root]# cat /etc/modprobe.d/noipv6.conf
alias net-pf-10 off
alias ipv6 off
[root]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=myhostname.domain.com
NOZEROCONF=yes
NETWORKING_IPV6=off
[root@virtfed log]# cat /etc/libvirt/qemu/networks/default.xml
<network>
<name>default</name>
<uuid>2e877982-bc6c-48b6-975b-317231c43ce4</uuid>
<bridge name="virbr0" />
<forward/>
<ip address="192.168.122.1" netmask="255.255.255.0">
<dhcp>
<range start="192.168.122.2" end="192.168.122.254" />
</dhcp>
</ip>
</network>
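The error suggests this libvirt build now writes the per-interface disable_ipv6 sysctl unconditionally, and that sysctl tree never exists when the ipv6 module is aliased off as in the noipv6.conf above. A quick check (a sketch; dropping the alias is an assumption about a workaround, not a confirmed fix):

```shell
# If the ipv6 module never loads, /proc/sys/net/ipv6 is absent, and
# libvirt's networkDisableIPV6() fails while bringing up virbr0.
if [ -d /proc/sys/net/ipv6 ]; then
    echo "ipv6 sysctl tree present"
else
    echo "ipv6 sysctl tree missing: try removing 'alias ipv6 off' and rebooting, or wait for a libvirt fix"
fi
```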
Thanks,
Gianluca
14 years, 8 months
no virbr0
by Guy Carmin
After adding the "Rawhide Virtualization Repository available for Fedora
11" and running "yum update" (virt-manager 0.7*), the default network does
not start up (automatically or manually), with the following error:
virsh # net-start default
error: Failed to start network default
error: cannot enable /proc/sys/net/ipv6/conf/virbr0/disable_ipv6: No such
file or directory
whereas IPv6 is disabled on my F11 host (and virbr0 worked before
the update to 0.7).
----
“Only when the last tree has died,
and the last river has been poisoned,
and the last fish has been caught,
will we realize that we cannot eat money.”
(Chief Seattle)
----
may the source be with you.
-----
Sure Linux is user-friendly,
it's just picky about who its friends are.
------
Free your mind,
And your OS will follow
------
echo hvz/dbsnjoAhnbjm/dpn | perl -pe 's/./chr(ord($&)-1)/ge'
14 years, 8 months
live migration blocked until dump run in virsh
by Gianluca Cecchi
With a standard F11 x86_64 host I was able to do live migration through
virsh but not inside virt-manager
(I got "libvirtError: invalid argument: only tcp URIs are supported
for KVM migrations").
After enabling fedora-virt-preview repo, I now have:
gpxe-roms-qemu-0.9.7-4.fc11.noarch
libvirt-0.7.0-0.9.gite195b43.fc11.x86_64
libvirt-client-0.7.0-0.9.gite195b43.fc11.x86_64
libvirt-python-0.7.0-0.9.gite195b43.fc11.x86_64
perl-Sys-Virt-0.2.0-2.fc11.x86_64
python-virtinst-0.500.0-1.fc11.noarch
qemu-common-0.10.91-0.4.rc1.fc11.x86_64
qemu-img-0.10.91-0.4.rc1.fc11.x86_64
qemu-kvm-0.10.91-0.4.rc1.fc11.x86_64
qemu-system-x86-0.10.91-0.4.rc1.fc11.x86_64
virt-df-1.0.64-2.fc11.x86_64
virt-manager-0.8.0-1.fc11.noarch
virt-top-1.0.3-4.fc11.x86_64
virt-viewer-0.2.0-1.fc11.x86_64
I'm able to run live migration inside virt-manager: it completes, and
the vm appears running on the new host (I can also see the qemu
process), but it is actually frozen from an OS point of view.
I found a sort of workaround doing this from inside virsh:
virsh # list
Id Name State
----------------------------------
1 centos53 running
2 centos53_node2 running
3 prova2 running
virsh # dump 3 /tmp/dump_prova2.log
Domain 3 dumped to /tmp/dump_prova2.log
As soon as the command completes (about 10 seconds for a vm with
768MB of RAM defined) the vm is running again and reachable
through the network.....
Any insight?
Also, how to migrate a powered off vm?
Question: a vm migrated from host1 to host2 still appears on the
source host as shut off.... but right-clicking on it in virt-manager
gives the option to "run" it.....
What happens in this case? I suspect corruption, since the vm is
actually running on host2.... Or would I get some sort of error
preventing me from doing this?
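For comparison, the virsh invocation I'd expect to use looks like this (a sketch only: "host2" and the domain name are placeholders, and the tcp transport is the one the earlier error message said KVM requires):

```shell
# Build the live-migration command. Echoed rather than executed,
# since it needs two configured hosts to actually run.
domain="prova2"                    # placeholder guest name
dest="qemu+tcp://host2/system"     # placeholder destination URI
cmd="virsh migrate --live $domain $dest"
echo "$cmd"
```

As for a powered-off vm: it has no runtime state to migrate, so my understanding (an assumption, not something from this thread) is that you copy the disk image over and recreate the definition on the destination with `virsh dumpxml` on the source and `virsh define` on the target.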
Thanks,
Gianluca
14 years, 8 months
end_request: I/O error, dev vda, sector 0
by Jerry James
On Thu, Jul 2, 2009 at 8:55 AM, Jerry James<loganjerry(a)gmail.com> wrote:
> I just made a Rawhide virtual machine to see if it suffers from the
> same problem. Zillions of copies of this message are being spewed to
> the console, although the machine itself seems to be running normally
> (if sluggishly):
>
> end_request: I/O error, dev vda, sector 0
>
> Any idea what that's all about?
On a Fedora 11 x86_64 host with the latest updates installed, I just
made a fresh KVM guest with virt-manager. I made the disk image
fresh, too, just in case that had something to do with it. I
installed Rawhide. I cannot login through GDM for reasons I haven't
tracked down yet. Sending Ctrl+Alt+F2 to the guest gives me a text
console where the message above is printed frequently. Is anybody
else seeing this?
--
Jerry James
http://www.jamezone.org/
14 years, 8 months
sysrq key when using 'virsh console'
by James Laska
Greetings,
I haven't been able to find out whether this exists or not, but in
attempting to debug a problem observed in a guest, I need to grab
output from several sysrq commands.
Does the kvm ttyS0 console offer a sysrq hotkey when connecting using
'virsh console'?
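I don't know of a sysrq hotkey in 'virsh console' itself, but if you can run commands in the guest at all, /proc/sysrq-trigger sidesteps the console question entirely (a sketch; run as root inside the guest):

```shell
# Trigger sysrq-t (dump task states) without any console hotkey.
# The output lands in the guest's kernel log (dmesg / ttyS0).
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq   # enable all sysrq functions
    echo t > /proc/sysrq-trigger
    echo "sysrq-t sent"
else
    echo "/proc/sysrq-trigger not writable here (need root in the guest)"
fi
```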
Thanks,
James
14 years, 8 months
Supermin question
by Thomas S Hatch
I imagine I am missing something obvious, but I can't figure out how to
start a supermin appliance. I ran the helper script and generated my
initrd and a symbolic link to my kernel, but I can't figure out how to
use it beyond that: how to actually boot it.
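In case it helps while waiting for a better answer: a kernel/initrd pair like the one the helper produces can in principle be booted directly with qemu-kvm, the way libguestfs itself launches its appliance. A sketch only; the paths are placeholders for your helper's output, and the options are assumptions rather than documented supermin behavior:

```shell
# Echoed rather than executed, since it needs a real kernel/initrd.
kernel="/path/to/vmlinuz"       # the symlink the helper created (assumed)
initrd="/path/to/initrd.img"    # the initrd the helper generated (assumed)
cmd="qemu-kvm -m 512 -kernel $kernel -initrd $initrd -append console=ttyS0 -nographic"
echo "$cmd"
```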
Thanks!
-Tom Hatch
14 years, 8 months