interesting kvm glitches
by Tom Horsley
I have been converting lots of old Xen HVM images to run under
KVM, with a new Fedora 12 replacing the old Debian on the
host machine, and I have a couple of examples of strange
behavior I wonder if anyone else has seen:
One machine booted just fine several times, then on one boot
the kernel panicked saying the APIC wasn't working and that I
should boot with noapic. It has worked fine ever since, but
somehow it had a problem that once.
Another machine apparently boots OK, but /proc/cpuinfo reports
the CPU speed as 0 Hz - quite an accomplishment (which I
noticed because some test programs were trying to do some
time-base calculations and they all divided by zero :-).
I haven't yet booted this machine again to see whether it
does this every time or not.
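A defensive guard for this case might look like the following sketch (an assumption on my part; the original test programs aren't shown):

```shell
# Sketch: read the advertised clock from /proc/cpuinfo and refuse to
# divide by it when the kernel reports 0, as this buggy guest did.
mhz=$(awk -F: '/^cpu MHz/ {gsub(/[ \t]/, "", $2); print $2; exit}' /proc/cpuinfo)
if [ -z "$mhz" ] || [ "${mhz%%.*}" -eq 0 ]; then
    echo "cpu speed missing or zero; skipping clock-based calibration" >&2
else
    echo "calibrating against ${mhz} MHz"
fi
```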
Of course, I also have a handful of machines that simply
refuse to boot all the way up (mostly various openSUSE versions).
I may have to try reinstalling them and see if they work
better when created under KVM from the start.
14 years, 4 months
stop ksm on the fly?
by Tom Horsley
I see ksmd using 60% CPU on my KVM host, which is running
many totally different Linux distros as KVM guests.
It seems likely to me that KSM will be of no value on this
machine, and I could use the CPU time for the VMs themselves :-).
My question is this: can I stop ksmd without breaking the
existing VMs? What is the best way to turn it off
"on the fly"? (Or is "service ksmd stop" perfectly
safe?)
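For what it's worth, the kernel's own knob for this lives in sysfs (documented in Documentation/vm/ksm.txt): writing 0 stops the scanner but leaves already-merged pages shared, and writing 2 additionally splits the merged pages back out. Either should be safe for running guests:

```shell
# Stop the ksmd scanner; pages already merged stay shared.
echo 0 > /sys/kernel/mm/ksm/run
# Or stop it AND unmerge everything (uses more memory, still safe):
echo 2 > /sys/kernel/mm/ksm/run
```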
Re: [fedora-virt] Fedora 11 on XEN Host
by Dale Bewley
----- "sungsoo khim" <skhim(a)ebrary.com> wrote:
> Excuse me if this was already discussed (a few hours of searching didn't
> really help).
>
> We have some servers running CentOS 5.4 as Dom0, and a few more Fedora
> 11 guests running on top of them using Xen.
>
> From time to time, one or two of our Fedora VMs' load average will rise
> to 100+ over an 8-hour period, in a very linear fashion.
>
> When we see the trend starting (regardless of the load average at the
> moment), we are not able to log in using SSH or even the console (xm
> console [VMNAME]).
>
> The only way to deal with it is to reboot the VM, and there is no
> significant log entry at all to indicate any system-specific errors
> (as if nothing had happened). No log messages are present during the
> high-load period.
>
> When it happens, however, actively running processes will reply back,
> such as a web proxy process. All socket open requests work as they
> should (SSH, HTTP, etc.), but they will not go further than opening
> the socket with a greeting message.
>
> Has anyone had a similar experience, and found a resolution?
I see something very similar on an F8 Xen dom0. It seems to manifest
during high I/O for me, like during a mysqldump of a large DB. The
guest system eventually becomes completely blocked.
This one was a record for me:
load average: 301.04, 301.00, 298.46
Install sar on the guest, and see if you can correlate load increase
with some activity on the system. Maybe you can spread that activity
out.
My solution was to move problematic guests to newer hardware with
support for KVM. Unfortunately, most of my hardware has no such
support.
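A sketch of that sar workflow (assumes the sysstat package; the intervals are arbitrary):

```shell
# Install sysstat, then sample run-queue/load (-q) and block I/O (-b)
# every 60 seconds for an hour, to correlate load spikes with activity.
yum install -y sysstat
sar -q 60 60 > /tmp/load.log &
sar -b 60 60 > /tmp/io.log &
wait
```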
Fedora virt status
by Justin Forbes
Fedora 13
=========
As we move towards F-13, here's the schedule:
2010-01-26 Feature Submission Deadline (40 days)
2010-02-09 Feature Freeze (54 days)
2010-02-16 Alpha Freeze (61 days)
2010-03-23 Beta (Final Development) Freeze (96 days)
2010-04-29 Compose Release Candidate (133 days)
For the Fedora 13 release, we have expanded bug classification a bit.
Instead of the previous virt blockers and virt target bugs, we now have 4
classifications (all rolling up into the virt target for easy tracking). As
always, the trackers are referenced at
https://fedoraproject.org/wiki/Virtualization_bugs
F13VirtBlocker: These are bugs which are critical to fix before release,
and worth holding the release if they are not fixed in time.
F13VirtImportant: These bugs are not quite blockers, but should be
             considered high priority.
F13VirtTarget: This is somewhat the default bucket for virt bugs. Things
which are supported and should be fixed before release if at
all possible.
F13VirtPonies: These are typically feature requests, or bugs in features
we don't really support. As such they are less likely to get
attention, but patches are welcome. It's not that we don't
want these fixed, but we just don't have the resources to get
to everything.
F12 Virt Preview
================
As was announced before, the virt-preview repository for F12 users wishing
to test out the latest virtualization bits is available. Updates in this
repository include:
bochs-2.3.8-0.9.git04387139e3b:
- Include symlinks to VGABIOS in vgabios rpm, BZ 544310.
- Enable cpu level 6.
libvirt-0.7.4-1:
- upstream release of 0.7.4
- udev node device backend
- API to check object properties
- better QEmu monitor processing
- MAC address based port filtering for qemu
- support IPv6 and multiple addresses per interfaces
- a lot of fixes
- Really fix restore file labelling this time
- Fix QEMU save/restore permissions / labelling
libvirt-java-0.4.0-2:
- Modified the dependency to be libvirt-client instead of libvirt.
- Added libvirt APIs up through 0.7.0
python-virtinst-0.500.1-2:
- Fix interface API detection for libvirt < 0.7.4
- Update to version 0.500.1
- virt-install now attempts --os-variant detection by default.
- New --disk option 'format', for creating image formats like qcow2 or
vmdk
- Many improvements and bugfixes
qemu-0.11.0-12:
- Fix a use-after-free crasher in the slirp code (#539583)
- Fix overflow in the parallels image format support (#533573)
virt-manager-0.8.2-1:
- Update to 0.8.2 release
- Fix first virt-manager run on a new install
- Enable floppy media eject/connect
- Select manager row on right click, regressed with 0.8.1
- Set proper version Requires: for python-virtinst
- VM Migration wizard, exposing various migration options
- Enumerate CDROM and bridge devices on remote connections
- Support storage pool source enumeration for LVM, NFS, and SCSI
We should see qemu move to the 0.12 tree before the end of the year.
Bugs
====
DOOM-O-METER: 173 bugs open 4 weeks ago, up to 211 now.
We have a lot of work to do!
= Important =
== kvm ==
https://bugzilla.redhat.com/show_bug.cgi?id=478317
almost 9 thousand syscalls per second while idle
This is believed to be a result of the USB Tablet device, but several
users have noticed high host CPU usage while guests were idle.
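A hedged workaround sketch, not from the bug report: since the USB tablet device is the suspected polling source, removing it from a guest's domain XML (at the cost of absolute-pointer tracking) may reduce the idle load. "myguest" is a placeholder name.

```shell
# Dump the domain XML, drop the USB tablet <input> line, and redefine.
virsh dumpxml myguest | sed "/input type='tablet'/d" > /tmp/myguest.xml
virsh define /tmp/myguest.xml
```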
https://bugzilla.redhat.com/show_bug.cgi?id=544339
Segfaults logged from kvm (qemu-kvm) resulting in guest sudden crash
and data loss
A number of users have complained of guests crashing, sometimes taking
the host with them. It is possible that this is related to video
emulation.
https://bugzilla.redhat.com/show_bug.cgi?id=544940
reattach virtio to rhel{5,6} guests will cause qemu-kvm crash
== libvirt ==
https://bugzilla.redhat.com/show_bug.cgi?id=541966
Occasional crash on vm shutdown/reboot
libvirt will occasionally crash when a VM shuts down. I am testing
now, but it seems that a couple of other bugs will be marked as duplicates
of this one.
transmit queue 0 timed out
by Andrés García
Hi,
In F12 I often get a kernel warning reported by the debug app; the problem
seems to be:
NETDEV WATCHDOG: eth1 (r8169): transmit queue 0 timed out
where eth1 is the network adapter I have bridged for the guest machines.
Is this something I should report, or is it to be expected?
Regards,
Andres.
P.S. The full dump is:
WARNING: at net/sched/sch_generic.c:246 dev_watchdog+0xf3/0x164() (Not
tainted)
Hardware name: P55-US3L
NETDEV WATCHDOG: eth1 (r8169): transmit queue 0 timed out
Modules linked in: joydev tun ipt_MASQUERADE iptable_nat nf_nat sunrpc
bridge stp llc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables
ipv6 cpufreq_ondemand acpi_cpufreq freq_table dm_multipath kvm_intel kvm
snd_hda_codec_realtek snd_hda_intel ppdev parport_pc snd_hda_codec
snd_hwdep snd_seq snd_seq_device r8169 snd_pcm parport mii snd_timer
i2c_i801 snd soundcore snd_page_alloc btusb bluetooth rfkill serio_raw
ata_generic pata_acpi pata_jmicron nouveau ttm drm_kms_helper drm
i2c_algo_bit i2c_core [last unloaded: microcode]
Pid: 0, comm: swapper Not tainted 2.6.31.6-166.fc12.x86_64 #1
Call Trace:
<IRQ> [<ffffffff810516f4>] warn_slowpath_common+0x84/0x9c
[<ffffffff81051763>] warn_slowpath_fmt+0x41/0x43
[<ffffffff8138e831>] ? netif_tx_lock+0x44/0x6d
[<ffffffff8138e99b>] dev_watchdog+0xf3/0x164
[<ffffffff812fd5c5>] ? usb_hcd_poll_rh_status+0x13f/0x14e
[<ffffffff81071900>] ? clockevents_program_event+0xb/0x83
[<ffffffff81072ab9>] ? tick_dev_program_event+0x3c/0xaa
[<ffffffff8105bec4>] run_timer_softirq+0x19f/0x21c
[<ffffffff8106ae47>] ? hrtimer_interrupt+0x13c/0x153
[<ffffffff81057614>] __do_softirq+0xdd/0x1ad
[<ffffffff81026936>] ? apic_write+0x16/0x18
[<ffffffff81012eac>] call_softirq+0x1c/0x30
[<ffffffff810143fb>] do_softirq+0x47/0x8d
[<ffffffff81057326>] irq_exit+0x44/0x86
[<ffffffff8141ecf5>] do_IRQ+0xa5/0xbc
[<ffffffff810126d3>] ret_from_intr+0x0/0x11
<EOI> [<ffffffff81267b22>] ? acpi_idle_enter_simple+0x111/0x145
[<ffffffff81267b1b>] ? acpi_idle_enter_simple+0x10a/0x145
[<ffffffff81267834>] ? acpi_idle_enter_bm+0xd8/0x2b5
[<ffffffff8106aa45>] ? hrtimer_start+0x18/0x1a
[<ffffffff81353b7f>] ? cpuidle_idle_call+0x99/0xce
[<ffffffff81010c60>] ? cpu_idle+0xa6/0xe9
[<ffffffff81405db7>] ? rest_init+0x6b/0x6d
[<ffffffff81714dc9>] ? start_kernel+0x3ef/0x3fa
[<ffffffff817142a1>] ? x86_64_start_reservations+0xac/0xb0
[<ffffffff8171439d>] ? x86_64_start_kernel+0xf8/0x10
Semi-OT: Running Windows 2000 guest with more than one vCPU using qemu-kvm.
by Gilboa Davara
Hello all,
I'm trying to run a 32-bit Windows 2000 guest using qemu-kvm (on a dual
Xeon E5335 workstation).
At least for now, I'd rather use qemu-kvm directly and not libvirt. (I've got
a number of guest generation / management scripts that I don't have time
to convert to virsh.)
If I start the guest with -smp 1 and use the "ACPI Uniprocessor" HAL,
qemu-kvm never eats more than 10-20% CPU when the guest is idle.
If I start the guest with -smp 2 and use the "ACPI Multiprocessor" HAL,
qemu-kvm eats ~110% (1.1 cores) even when the guest is fully idle.
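For reference, a minimal sketch of that comparison (the disk image path, memory size, and VNC display are placeholders; -smp is the variable under test):

```shell
# Boot the guest uniprocessor, let it settle, then sample qemu-kvm's CPU use.
qemu-kvm -m 512 -smp 1 -hda /path/to/win2000.img -vnc :1 &
sleep 300
top -b -n 1 -p "$(pgrep -n qemu-kvm)"
# Repeat with "-smp 2" and the multiprocessor HAL to compare idle load.
```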
I found a bugzilla entry (#479977), but it seems to suggest that the only
solution is enabling ACPI (which should be enabled by default).
Any idea what I can do next?
- Gilboa
Virtualization: qemu-kvm: Excessive CPU usage on host
by Dario Lesca
Hello everyone. On my laptop with F12 x86_64 I ran 'yum -y install
virtualization', then installed an F12 x86_64 virtual machine
(without X, it's a server) for testing.
Unfortunately, when the guest machine is on and idle (doing
nothing), the host machine always has 15-25% of a CPU committed to
qemu-kvm:
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 3584 qemu 20 0 764m 179m 3116 S 19.3 5.9 10:06.45 qemu-kvm
Also, after a while (more than an hour) with the guest machine
unused, the ksmd process uses excessive CPU (about 30%): another process
that slows down the host (my laptop) for no apparent reason:
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 32 root 25 5 0 0 0 R 30.1 0.0 39:00.62 ksmd
> 3584 qemu 20 0 764m 523m 3116 S 15.9 17.3 31:59.25 qemu-kvm
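Before killing ksmd it may be worth checking what it is actually buying, via the counters KSM exports in sysfs (pages_sharing much larger than pages_shared means real dedup; values close together mean little benefit for the CPU spent):

```shell
# pages_shared:  unique page frames KSM is keeping
# pages_sharing: guest pages mapped onto those shared frames
shared=$(cat /sys/kernel/mm/ksm/pages_shared)
sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)
echo "KSM: ${sharing} mappings over ${shared} frames"
```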
None of these things happened with VMware Server.
Is this normal?
Is it possible to do something to improve the situation?
Otherwise, I will have to go back to VMware.
Thanks in advance.
--
Dario Lesca <d.lesca(a)solinos.it>
old xen Windows XP hvm to kvm avoiding activation problems?
by Tom Horsley
Are there any hints out there for how I can convert a
Windows XP HVM guest running on a fairly old version of Xen to
a nice new Fedora 12 KVM guest and (the tricky part) avoid XP
deciding it needs to be reactivated?
What things should I look at in Windows to decide how to
tweak the KVM machine XML so it will be as close as possible to
the same virtual hardware Windows had before?
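One starting point, sketched with virsh ("winxp" is a placeholder domain name, and this assumes the old guest is visible to libvirt; otherwise read the MAC from the old xm config file): XP's activation hash is commonly reported to weight the NIC MAC address heavily, so carrying the old MAC (and a stable <uuid>) into the new KVM domain XML is the first thing to check.

```shell
# Pull the identifiers XP is most likely to notice from the old definition,
# then paste them into the new domain with "virsh edit".
virsh dumpxml winxp | grep -E '<uuid>|mac address'
virsh edit winxp   # carry over <uuid> and <mac address='...'/>
```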