Guest shutdown problems
by Christian Axelsson
Hello
I have a problem shutting down xen guests. Using 'xm shutdown' the guests get
shut down but get stuck in state '---s-d' or sometimes '------'.
When trying to clone a domain in this state (the original purpose of
the whole operation) I get this error:
[root@hydra virtinst--devel]# ./virt-clone -o minimal -n new_img -f
/var/lib/xen/images/new_img.img
ERROR: virDomainGetXMLDesc() failed failed Xen syscall
xenDaemonDomainDumpXMLByID failed to find this domain -490299505
The same error occurs when, for example, trying to attach to the console
using virsh.
I have tried to use 'xm destroy' to kill the guest the hard way but it
has no effect - the state remains unchanged. I have also tried this on a
few different guest installations with the same result. One thing worth
noting is that the output from 'xm list --long' differs; I've attached
the output pre-boot, after boot and after shutdown. Note how all the
devices in the guests are missing after shutdown.
Both the hosts and the guests are fedora 8 installations.
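For reference, a rough sketch of the kind of checks that can help with a zombie
domain like this; the domain name and domid 2 are taken from the attached output:

xm list --long minimal | grep state    # confirm the ---s-d / zombie state
xenstore-ls /local/domain/2            # look for leftover xenstore entries
/etc/init.d/xend restart               # restarting xend sometimes reaps zombie domains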
Regards,
Christian Axelsson
smiler(a)lanil.mine.nu
(domain
(domid 0)
(on_crash restart)
(uuid 00000000-0000-0000-0000-000000000000)
(bootloader_args )
(vcpus 2)
(name Domain-0)
(on_poweroff destroy)
(on_reboot restart)
(bootloader )
(maxmem 16777215)
(memory 1491)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(cpu_time 1644.84369405)
(online_vcpus 2)
(image (linux (kernel )))
(status 2)
(state r-----)
)
(domain
(domid 2)
(on_crash restart)
(uuid a7638797-e237-3891-5e64-390f828238ca)
(bootloader_args )
(vcpus 1)
(name minimal)
(on_poweroff destroy)
(on_reboot restart)
(bootloader /usr/bin/pygrub)
(maxmem 512)
(memory 512)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(start_time 1206360333.14)
(cpu_time 9.753408915)
(online_vcpus 1)
(image
(linux
(kernel )
(notes
(FEATURES
'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel'
)
(VIRT_BASE 18446744071562067968)
(GUEST_VERSION 2.6)
(PADDR_OFFSET 18446744071562067968)
(GUEST_OS linux)
(HYPERCALL_PAGE 18446744071564189696)
(LOADER generic)
(SUSPEND_CANCEL 1)
(ENTRY 18446744071564165120)
(XEN_VERSION xen-3.0)
)
)
)
(status 2)
(state -b----)
(store_mfn 196619)
(console_mfn 196618)
(device
(vif
(bridge xenbr0)
(mac 00:16:3e:3f:93:b8)
(script vif-bridge)
(uuid 94afd732-920b-2e0b-b3d5-e79174754a80)
(backend 0)
)
)
(device
(vbd
(uname file:/var/lib/xen/images/minimal.img)
(uuid 8f4f4da3-5f8a-3fee-28e8-41dc49e876cd)
(mode w)
(dev xvda:disk)
(backend 0)
(bootable 1)
)
)
(device
(console
(protocol vt100)
(location 2)
(uuid 0046f2d3-058b-d524-9273-f1dac2ca950b)
)
)
)
(domain
(domid 0)
(on_crash restart)
(uuid 00000000-0000-0000-0000-000000000000)
(bootloader_args )
(vcpus 2)
(name Domain-0)
(on_poweroff destroy)
(on_reboot restart)
(bootloader )
(maxmem 16777215)
(memory 1491)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(cpu_time 1648.92600832)
(online_vcpus 2)
(image (linux (kernel )))
(status 2)
(state r-----)
)
(domain
(domid 2)
(on_crash restart)
(uuid a7638797-e237-3891-5e64-390f828238ca)
(bootloader_args )
(vcpus 1)
(name minimal)
(on_poweroff destroy)
(on_reboot restart)
(bootloader /usr/bin/pygrub)
(maxmem 512)
(memory 512)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(start_time 1206360333.14)
(cpu_time 13.048743365)
(online_vcpus 1)
(image
(linux
(kernel )
(notes
(FEATURES
'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel'
)
(VIRT_BASE 18446744071562067968)
(GUEST_VERSION 2.6)
(PADDR_OFFSET 18446744071562067968)
(GUEST_OS linux)
(HYPERCALL_PAGE 18446744071564189696)
(LOADER generic)
(SUSPEND_CANCEL 1)
(ENTRY 18446744071564165120)
(XEN_VERSION xen-3.0)
)
)
)
(status 0)
(state ---s-d)
(store_mfn 196619)
(console_mfn 196618)
)
(domain
(domid 0)
(on_crash restart)
(uuid 00000000-0000-0000-0000-000000000000)
(bootloader_args )
(vcpus 2)
(name Domain-0)
(on_poweroff destroy)
(on_reboot restart)
(bootloader )
(maxmem 16777215)
(memory 1491)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(cpu_time 1635.21430615)
(online_vcpus 2)
(image (linux (kernel )))
(status 2)
(state r-----)
)
(domain
(on_crash restart)
(uuid a7638797-e237-3891-5e64-390f828238ca)
(bootloader_args )
(vcpus 1)
(name minimal)
(on_poweroff destroy)
(on_reboot restart)
(bootloader /usr/bin/pygrub)
(maxmem 512)
(memory 512)
(shadow_memory 0)
(cpu_weight 256)
(cpu_cap 0)
(features )
(on_xend_start ignore)
(on_xend_stop ignore)
(start_time 1206309092.82)
(cpu_time 0.0)
(image
(linux
(kernel )
(notes
(FEATURES
'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel'
)
(VIRT_BASE 18446744071562067968)
(GUEST_VERSION 2.6)
(PADDR_OFFSET 18446744071562067968)
(GUEST_OS linux)
(HYPERCALL_PAGE 18446744071564189696)
(LOADER generic)
(SUSPEND_CANCEL 1)
(ENTRY 18446744071564165120)
(XEN_VERSION xen-3.0)
)
)
)
(status 0)
(device
(vif
(bridge xenbr0)
(mac 00:16:3e:3f:93:b8)
(backend 0)
(uuid 94afd732-920b-2e0b-b3d5-e79174754a80)
(script vif-bridge)
)
)
(device
(vbd
(uuid 8f4f4da3-5f8a-3fee-28e8-41dc49e876cd)
(bootable 1)
(driver paravirtualised)
(dev xvda:disk)
(uname file:/var/lib/xen/images/minimal.img)
(mode w)
(backend 0)
)
)
(device
(console
(protocol vt100)
(location 2)
(uuid 0046f2d3-058b-d524-9273-f1dac2ca950b)
)
)
)
problems with f9 guest on f8 dom0
by Matt Cowan
Any time I shut down or restart an f9 guest on an f8 dom0 (the guest itself
restarts successfully), it causes virt-manager to hang and virt-install
to fail. Attempts to restart virt-manager hang at "Connecting". No
luck restarting xend, libvirtd, or even xenstored; the only solution I've
found is a dom0 reboot! Same problem on two different host systems, with
different hardware and dom0s installed by different people. I have no
trouble with <=f8 guests on either of these boxes.
more details at
https://bugzilla.redhat.com/show_bug.cgi?id=429403#c7
I assume other people are successfully running f9 guests on f8 dom0?!
No one else having this issue?
thanks.
-matt
F10 crystal ball gazing
by Andy Burns
Given that xen is entering feature freeze for the updated release ~August,
is it too early to ask what the *hopes* are for xen in fedora 10?
xen 3.3.0 (or a late rc), or is this still likely to be 3.2.1 or 3.2.2?
kernel 2.6.27 (or a late rc) with pv_ops for 32- and 64-bit, dom0 and domU?
oVirt?
updates to libvirt/virt-manager as usual
[Xen-devel] State of Xen in upstream Linux
by Pasi Kärkkäinen
----- Forwarded message from Jeremy Fitzhardinge <jeremy(a)goop.org> -----
From: Jeremy Fitzhardinge <jeremy(a)goop.org>
To: Xen-devel <xen-devel(a)lists.xensource.com>,
xen-users(a)lists.xensource.com,
Virtualization Mailing List <virtualization(a)lists.osdl.org>
Cc:
Date: Wed, 30 Jul 2008 17:51:37 -0700
Subject: [Xen-devel] State of Xen in upstream Linux
Well, the mainline kernel just hit 2.6.27-rc1, so it's time for an
update about what's new with Xen. I'm trying to aim this at both the
user and developer audiences, so bear with me if I seem to be waffling
about something irrelevant.
2.6.26 was mostly a bugfix update compared with 2.6.25, with a few small
issues fixed up. Feature-wise, it supports 32-bit domU with the core
devices needed to make it work (netfront, blockfront, console). It also
has xen-pvfb support, which means you can run the standard X server
without needing to set up Xvnc.
I don't know of any bugs in 2.6.26, so I'd recommend you try it out for
all your 32-bit domU needs. It has had fairly wide exposure in Fedora
kernels, so I'd rank its stability as fairly high. If you're migrating
from 2.6.18-xen, then there'll be a few things you need to pay attention
to. http://wiki.xensource.com/xenwiki/XenParavirtOps should help, but
if it doesn't, please either fix it and/or ask!
2.6.27 will be a much more interesting release. It has two major
feature additions: save/restore/migrate (including checkpoint and live
migration), and x86-64 support. In keeping with the overall unification
of i386 and x86-64 code in the kernel, the 32- and 64-bit Xen code is
largely shared, so they have feature parity.
The Xen support seems fairly stable in linux-2.6.git, but the kernel is
still at -rc1, so lots of other things will tend to break. I encourage
you to try it out if you're comfortable with what's still a fairly high
rate of change.
My current patch stack is pretty much empty - everything has been merged
into linux-2.6.git - so it makes a good base for any changes you may have.
Now that Xen can directly boot a bzImage format kernel, distros have a
lot of flexibility in how they package Xen. A single grub.conf entry can
be used to boot either a native kernel (via normal grub), or a
paravirtualized Xen kernel (via pygrub), without modification.
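As an illustration of what that looks like in practice, a single stanza like the
following inside the guest's grub.conf can be booted natively by grub or picked
up by pygrub when the same image is started as a domU (kernel version and root
device are placeholders):

title Fedora (2.6.26-xx)
        root (hd0,0)
        kernel /vmlinuz-2.6.26-xx ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.26-xx.img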
Fedora 9's kernel-xen package has been based on the mainline kernel from
the outset, but it is still packaged as a separate kernel. kernel-xen
has been dropped from rawhide (what will become Fedora 10), and all Xen
support - both 32 and 64 bit - has been rolled into the main kernel
package.
So, what's next?
The obvious big piece of missing functionality is dom0 support. That
will be my focus in this next kernel development window, and I hope
we'll have it merged into 2.6.28. Some roadblock may appear which
prevents this (kernel development is always a bit uncertain), but that's
the current plan.
We're planning on setting up a xen.git on xen.org somewhere. We still
need to work out the precise details, but my expectation is that it will
become the place where dom0 work continues, and I also hope that other
Xen developers will start using it as the base for their own Xen work.
Expect to see some more concrete details over the next week or so.
What can I do?
I'm glad you asked. Here's my current TODO list. These are mostly
fairly small-scale projects which just need some attention. I'd love
people to adopt things from this list.
x86-64: SMP broken with CONFIG_PREEMPT
It crashes early after bringing up a second CPU when preempt is
enabled. I think it's failing to set up the CPU topology properly,
and leaving something uninitialized. The desired topology is the
simplest possible - one core per package, no SMT/HT, no multicore,
no shared caches. It should be simple to set up.
irq balancing causes lockups
Using irq balancing causes the kernel to lock up after a while. It
looks like it's losing interrupts. It's probably dropping
interrupts if you migrate an irq between vcpus while an event is
pending. Shouldn't be too hard to fix. (In the meantime, the
workaround is to make sure that you don't enable in-kernel irq
balancing, and you don't run irqbalanced.)
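A minimal sketch of that workaround on a Fedora guest, assuming the userspace
daemon is packaged as the usual irqbalance service:

service irqbalance stop       # stop the running balancer daemon
chkconfig irqbalance off      # keep it from starting at boot
# and leave any in-kernel irq balancing option disabled in the kernel config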
block device hotplug
Hotplugging devices should work already, but I haven't really tested
it. Need to make sure that both the in-kernel driver stuff works
properly, and that udev events are raised properly, scripts run,
device nodes added - and conversely for unplug. Also, a modular
xen-blockfront.ko should be unloadable.
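A sketch of the kind of smoke test this needs, run from dom0 against an existing
guest; the guest name and the spare image path are placeholders:

xm block-attach minimal file:/var/lib/xen/images/spare.img xvdb w
# inside the guest: /dev/xvdb should show up, with udev events and device nodes
xm block-detach minimal xvdb
# with a modular blockfront, unloading the module should then also succeed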
net device hotplug
Similar to block devices, but with a slight extra complication. If
the driver has outstanding granted pages, then the module can't be
immediately unloaded, because you can't free the pages if dom0 has a
reference to them. My thought is to add a simple kernel thread
which takes ownership of unwanted granted pages: it would
periodically try to ungrant them, and if successful, free the page.
That means that netfront could hand ownership of those pages over to
that thread, and unload immediately.
Performance measurement and tuning
By design, the paravirt-ops-based Xen implementation should have
high performance. It uses batching wherever possible, late
pin/early unpin, and all the other performance tricks available to a
Xen kernel. However, my emphasis has been on correctness and
features, so I have not extensively benchmarked or performance tuned
the code. There's plenty of scope for measuring both synthetic and
real-world benchmarks (ideally, applications you really care about),
and trying to work out how things can be tuned.
One thing that has already come to light is a general regression in
context switch time compared to 2.6.18.8-xen. It's unclear where
it's coming from; a close look at the actual context switch code
itself shows that it should perform the same as 2.6.18-xen (same
number of hypercalls performed, for example).
This would be an excellent opportunity to become familiar with Xen's
tracing and performance measurement tools...
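As a starting point for that, the context switch regression can be seen with
something as simple as lmbench, and the hypervisor side can be watched with the
standard Xen tools (treat the exact invocations as a sketch):

lat_ctx -s 0 2           # lmbench context-switch latency between two processes
xentrace /tmp/xen.trace  # capture the hypervisor trace buffer for later analysis
xenmon.py                # per-domain execution/CPU summary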
Balloon driver
The current in-kernel balloon driver only supports shrinking and
regrowing a domain up to its original size. There's no support for
growing a domain beyond that.
My plan is to use hotplug memory to add new memory to the system. I
have some prototype code to do this, which works OK, but the hotplug
memory subsystem needs some modifications to really deal with the
kinds of incremental memory increases that we need for ballooning
(it assumes that you're actually plugging in physical DIMMs).
The other area which needs attention is some sanity checking when
deflating a domain, to prevent killing the domain by stealing too
much memory. 2.6.18-xen uses a simple static minimum memory
heuristic based on the original size of the domain. This helps, but
doesn't really prevent over-shrinking a domain which is already
under memory pressure. A better approach might be to register a
shrinker callback, which means that the balloon driver can see how
much memory pressure the system is under by getting feedback
from it.
A more advanced project is to modify the kernel VM subsystem to
measure refault distance, which is how long a page is evicted before
being faulted back in again. That measurement can tell you how much
more memory you need to add to a domain in order to get the fault
rate below a given rate.
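For anyone who wants to experiment with this, the user-visible side is just the
xm memory commands (domain name and sizes are placeholders):

xm mem-set minimal 256    # balloon the guest down to 256 MB
xm mem-set minimal 512    # grow it back up to its boot-time size
xm mem-max minimal 1024   # raises the ceiling, but growing past the boot-time size is the missing piece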
gdb gives bad info in a 64-bit domain
For some reason, gdb doesn't work properly. If you set a
breakpoint, the program will stop as expected, but the register
state will be wrong. Other users of the ptrace syscall, such as
strace, seem to get good results, so I'm not sure what's going on
here. It might be a simple fix, or symptomatic of a more serious
problem. But it needs investigation first.
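A minimal way to reproduce what's described here, with any small test program in
a 64-bit domU:

gdb ./a.out
(gdb) break main
(gdb) run
(gdb) info registers      # the register state reported here is wrong
strace -f ./a.out         # ptrace via strace, by contrast, looks sane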
My Pet Project
What's missing? What do you depend on? What's needed before you
can use mainline Xen as your sole Xen kernel?
Thanks,
J
diskless boot
by Luca
Hi all,
following the fedora-xen instructions, I was able to install Xen and Domain
0 and create an initial ram disk with mkinitrd (which then mounts the
real root filesystem located on a hard drive on the same workstation).
I wonder if what follows is possible.
On a workstation, I would not install anything. Instead I would use
etherboot, for instance, to download the Xen hypervisor, Domain 0 and an initial
ram disk from a server. Everything would be loaded in memory. The initial
ram disk would then mount the real root filesystem, located this time not on
the same workstation, but on a different workstation connected to the
network.
I have tried building such an initial ram disk myself, capable of mounting a
real root filesystem located on a different workstation, but it didn't work. I'm
not a fedora or Xen expert, so I hope someone here could help me.
Of course, I assume the NFS server is working.
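In case it helps, a rough sketch of the kind of pxelinux entry involved, using
the mboot.c32 multiboot module; the server address, file names and versions are
placeholders, and the initrd still needs network and NFS support built in:

label xen-diskless
  kernel mboot.c32
  append xen.gz dom0_mem=512M --- vmlinuz-xen ro root=/dev/nfs nfsroot=192.168.1.10:/srv/dom0-root ip=dhcp --- initrd-xen.img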
Thanks,
Luca
source code
by Luca
Hi all,
I'm using Fedora 8.
With
yum install xen
yum groupinstall 'Virtualization'
I can install Xen and Domain 0 and everything works.
Now I would like to get the source code for the version of Xen and the
kernel installed with the previous commands (I eventually need to modify and
recompile them). How can I get the source code?
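One way that usually works for stock Fedora packages is pulling the source RPMs
with yum-utils (this assumes the F8 xen and kernel-xen packages, and that the
*-source repositories are enabled):

yum install yum-utils rpm-build
yumdownloader --source xen kernel-xen
rpm -ivh xen-*.src.rpm kernel-xen-*.src.rpm
# the sources and patches then land in the rpmbuild SOURCES/SPECS directories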
Thanks in advance.
Luca
Testing LiveCD distros as guests?
by Philip Rhoades
People,
I want to test out a bunch of LiveCD distributions - is it possible to
set these up as guests under Xen so I don't have to shut down my main
machine?
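It should be possible with a fully virtualized (HVM) guest, since LiveCDs are
not paravirtualized; a rough virt-install sketch, assuming VT/AMD-V capable
hardware, with the ISO path as a placeholder:

virt-install --hvm -n livecd-test -r 512 --vnc \
    -c /path/to/livecd.iso \
    -f /var/lib/xen/images/livecd-test.img -s 4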
Thanks,
Phil.
--
Philip Rhoades
Pricom Pty Limited (ACN 003 252 275 ABN 91 003 252 275)
GPO Box 3411
Sydney NSW 2001
Australia
E-mail: phil(a)pricom.com.au
How to kill a domU that has no id
by Gerhard Scheffler
How to kill a domU that has no id?
# xm list
Name                 ID   Mem VCPUs   State    Time(s)
Domain-0              0   512     2   r-----    2745.2
m1                    1  1024     2   -b----    7764.6
m3                    2  1024     2   -b----    6765.8
m4                    5   512     2   -b----    1032.5
m5                        512     2                0.0
Does m5 really consume memory?
xm lists m5 even after a dom0 reboot.
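If m5 is a managed (inactive) domain registered with xend rather than a running
one - which the missing ID and its persistence across reboots suggest - something
along these lines should clear it, and the 512 shown would just be its configured
memory rather than an actual allocation:

xm list m5         # an inactive managed domain shows no ID and no CPU time
xm delete m5       # remove it from xend's managed-domain store
virsh undefine m5  # the libvirt equivalent, if it was defined via virt-manager/virsh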
Gerhard