ANNOUNCE: Rawhide virt repo for F11 users
by Mark McLoughlin
Hey,
The idea[1] was discussed here before, so I'll keep this short.
We've set up a repository for people running Fedora 11 who would like
to test the rawhide/F12 virt packages. To use it, do e.g. (note the
quoted "EOF", which stops the shell from expanding $basearch so that
yum sees it literally):
$> cat > /etc/yum.repos.d/fedora-virt-preview.repo << "EOF"
[rawvirt]
name=Virtualization Rawhide for Fedora 11
baseurl=http://markmc.fedorapeople.org/virt-preview/f11/$basearch/
enabled=1
gpgcheck=0
EOF
$> yum update
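To check that the repo is active, and to preview which packages would
move to the F-12 versions before committing, the standard yum queries
should work (output will of course vary with the current package set):
$> yum repolist rawvirt
$> yum list updates 'qemu*' 'libvirt*'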
At the moment, it contains the F-12 versions of libvirt and qemu, but
as F-12 development continues, it will contain more. I'll send periodic
mails to the list detailing the latest updates.
Also, this is very much a work-in-progress. The TODO list includes:
- get createrepo installed on fedorapeople.org
- include debuginfo packages in the repo (need more quota)
- find a better location than markmc.fedorapeople.org
- enable package builders to upload directly to the repo
- automatically do preview build/upload for devel/ builds
- allow building against preview packages; e.g. for language bindings
Comments most welcome. Help with the TODO list is even more welcome :-)
Thanks,
Mark.
[1] - https://www.redhat.com/archives/fedora-virt/2009-April/msg00154.html
bridge network with iptables running on host?
by Tom Horsley
Long before I ever tried using virtual machines, I painstakingly
came up with some iptables settings to make my system as closed
as possible to most of the outside world while still being open
to my local 192.168.1.0/24 network.
I'm now playing around with VMs on my system, using bridging
because I want each VM to be a fully fledged member of my
local network.
It works great as long as I turn off iptables on the host, so
now I wonder what the heck is preventing the bridge traffic
from operating? (Actually it is just the VMs that can't
get out - I can get into them OK.)
Do I have to tell the host to forward everything (rather than
forwarding nothing, as I have it now)?
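For reference, a likely culprit: on kernels with bridge-netfilter
enabled (the Fedora default), frames crossing a bridge are passed
through the host's iptables FORWARD chain, so a restrictive FORWARD
policy blocks guest traffic even though nothing is being routed. Two
common workarounds, sketched here (rule placement is an assumption;
adapt it to your own ruleset):
# accept traffic that is only being bridged, not routed
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# ... or stop handing bridged frames to iptables entirely
sysctl -w net.bridge.bridge-nf-call-iptables=0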
2nd NIC hangs udev
by Gene Czarcinski
This is on an F11 x86_64 host WITHOUT virt-preview.
This happens with F11 and F9 guests (both 32-bit and 64-bit architectures).
Installing and booting work fine with the default NIC. But when I add a
second NIC (such as a private network), boot hangs for a long time at
"Starting udev".
The NICs mostly specify a "virtio" device but this also occurs if you specify
something else. Both NICs use the same device type.
At first I thought it was really hung, but after a long wait (at least
a minute) it breaks free and everything proceeds normally.
This does not happen on real hardware ... I have one system with two NICs
and another with three, and there are no "long pauses".
Any ideas??
Gene
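A diagnostic sketch that may narrow this down (it assumes udevadm is
available in the guest): watch the udev event queue during the pause
to see whether one event handler is timing out:
# in the guest, while the pause is happening
udevadm monitor --environment &
udevadm settle --timeout=180
If settle only returns after the same minute-long delay, the hang is
a single event handler timing out rather than udev itself.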
Dom0 kernels
by M A Young
I have built a new set of kernel packages based on fedora rawhide
kernels and the xen/dom0/hackery branch of Jeremy's git repository
( http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=summary ).
This batch (kernel-2.6.29-0.114.2.6.rc6.fc11) is available via the koji
build system at
http://koji.fedoraproject.org/koji/taskinfo?taskID=1149500
These are really for development and debugging purposes only, as I am
still having problems getting them to boot, but others have reported more
success at getting kernels based on this git repository working, so you
might be lucky.
Note that to install these packages on Fedora 10 you will need
rpm-4.6.0-1.fc10 installed (currently in updates-testing, but it should be
available in updates soon), because recent Fedora 11 builds switched to
SHA-256 file digests.
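For anyone trying them on Fedora 10, a minimal sketch of the install
steps (the kernel file name below is an example for x86_64; download
the actual RPMs from the koji task page above):
# make sure rpm understands SHA-256 file digests first
yum --enablerepo=updates-testing update rpm
# then install alongside the existing kernel (-i, not -U, for kernels)
rpm -ivh kernel-2.6.29-0.114.2.6.rc6.fc11.x86_64.rpm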
Michael Young
F12 alpha
by Gene Czarcinski
I was able to install and run F12 alpha (just the packages on the DVD) on F11
plus current maintenance qemu-kvm.
I also tried F12 alpha on F11 + virt-preview ... I could install (if I
specified the guest OS as F9), but the guest reset immediately, just after
the "udev" message. I copied the .img and .xml files from the other system
(where it ran successfully) and it also died.
Is this a known problem? Any suggestions on data to look at?
Gene
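In case it helps others collect the same data, the usual places to
look (a sketch; "f12alpha" is a placeholder for the guest's name):
# qemu's per-guest log, including the full command line it was run with
less /var/log/libvirt/qemu/f12alpha.log
# the guest configuration actually in use
virsh dumpxml f12alpha
# host-side kernel messages around the time of the reset
dmesg | tail -50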
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
Thanks for keeping the vanilla kernel alive.
Boris.
migrating to FE V12
by Paul Lambert
I would like to install FE V12 and try out the changes integrated into
virt-manager. There are several download options; however, I do not see
a complete DVD install disk. If I just download the boot.iso and install,
does that imply I will then need to go back and add packages one by one to
re-establish where my DVD-installed FE V11 is today? Is there any
difference between the latest Rawhide and FE V12 alpha at this point?
Finally, can I migrate to FE 12, or is this a complete reinstall, meaning I
need to preserve my current VMs and other data on my hard drive, which only
has one partition? I plan to fix this scenario later, but for now the OS and
data are on one disk.
Thanks
PJAL
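On the migration question: there is an in-place upgrade path via
preupgrade (a sketch; whether the F12 alpha is already offered as a
target depends on the release metadata at the time you run it):
# on the existing F11 system
yum install preupgrade
preupgrade   # downloads the new release and stages it for the next reboot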
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
I've made one more clean install with this repo file:
[root@ServerXen341 yum.repos.d]# cat fedora-myoung-dom0.repo
[myoung-dom0]
name=myoung's repository of Fedora based dom0 kernels - $basearch
baseurl=http://fedorapeople.org/~myoung/dom0/$basearch/
enabled=1
gpgcheck=0
[myoung-dom0-source]
name=myoung's repository of Fedora based dom0 kernels - Source
baseurl=http://fedorapeople.org/~myoung/dom0/src/
enabled=1
gpgcheck=0
Then I tried to load the kernel built as vanilla; the console dropped into
a stack trace and hung.
However, the kernel loads fine and works under Xen. The dmesg report
under Xen stays the same (stack trace entries).
Boris.
--- On Wed, 8/26/09, Boris Derzhavets <bderzhavets(a)yahoo.com> wrote:
From: Boris Derzhavets <bderzhavets(a)yahoo.com>
Subject: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
To: fedora-xen(a)redhat.com, fedora-virt(a)redhat.com, "M A Young" <m.a.young(a)durham.ac.uk>
Date: Wednesday, August 26, 2009, 7:26 AM
The upstream issues look DomU-related.
Kernel 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 works stably at runtime.
Boris.
--- On Wed, 8/26/09, Boris Derzhavets <bderzhavets(a)yahoo.com> wrote:
From: Boris Derzhavets <bderzhavets(a)yahoo.com>
Subject: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
To: fedora-xen(a)redhat.com, fedora-virt(a)redhat.com, "M A Young" <m.a.young(a)durham.ac.uk>
Cc: xen-devel(a)xensource.com
Date: Wednesday, August 26, 2009, 3:58 AM
Not sure how 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 was built.
There are pretty recent ongoing issues with 2.6.31-rc7 upstream:
http://lkml.org/lkml/2009/8/25/347
http://patchwork.kernel.org/patch/43791/
I was able to load a Xen guest under 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64,
followed by a kernel error in sky2 (?) and loss of the VNC connection to the
Xen 3.4.1 host on top of F11. I'll do some more testing.
Boris.
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
With rpm upgraded, I can load Dom0. However, the dmesg report contains:
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 (root@ServerXenSRC) (gcc version 4.4.0 20090506 (Red Hat 4.4.0-4) (GCC) ) #1 SMP Wed Aug 26 08:14:58 MSD 2009
Command line: root=/dev/mapper/vg_serverxensrc-LogVol00 ro console=tty0
KERNEL supported cpus:
Intel GenuineIntel
AMD AuthenticAMD
Centaur CentaurHauls
BIOS-provided physical RAM map:
Xen: 0000000000000000 - 000000000009ec00 (usable)
Xen: 000000000009ec00 - 0000000000100000 (reserved)
Xen: 0000000000100000 - 00000000cff80000 (usable)
Xen: 00000000cff80000 - 00000000cff8e000 (ACPI data)
Xen: 00000000cff8e000 - 00000000cffe0000 (ACPI NVS)
Xen: 00000000cffe0000 - 00000000d0000000 (reserved)
Xen: 00000000fee00000 - 00000000fee01000 (reserved)
Xen: 00000000ffe00000 - 0000000100000000 (reserved)
Xen: 0000000100000000 - 00000001f1a6b000 (usable)
DMI 2.4 present.
AMI BIOS detected: BIOS may corrupt low RAM, working around it.
e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
last_pfn = 0x1f1a6b max_arch_pfn = 0x400000000
last_pfn = 0xcff80 max_arch_pfn = 0x400000000
initial memory mapped : 0 - 20000000
init_memory_mapping: 0000000000000000-00000000cff80000
0000000000 - 00cff80000 page 4k
kernel direct mapping tables up to cff80000 @ 100000-785000
init_memory_mapping: 0000000100000000-00000001f1a6b000
0100000000 - 01f1a6b000 page 4k
. . . . . . .
======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 #1
------------------------------------------------------
khubd/28 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
(&retval->lock){......}, at: [<ffffffff8112a240>] dma_pool_alloc+0x45/0x321
and this task is already holding:
(&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
which would create a new lock dependency:
(&ehci->lock){-.....} -> (&retval->lock){......}
but this new dependency connects a HARDIRQ-irq-safe lock:
(&ehci->lock){-.....}
... which became HARDIRQ-irq-safe at:
[<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff813d325c>] ehci_irq+0x41/0x441
[<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
[<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
[<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
[<ffffffff81017078>] handle_irq+0x9a/0xba
[<ffffffff8130a0c6>] xen_evtchn_do_upcall+0x10c/0x1bd
[<ffffffff8101527e>] xen_do_hypervisor_callback+0x1e/0x30
[<ffffffffffffffff>] 0xffffffffffffffff
to a HARDIRQ-irq-unsafe lock:
(purge_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
... [<ffffffff810990ff>] __lock_acquire+0x2c8/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff811233a4>] __purge_vmap_area_lazy+0x63/0x198
[<ffffffff81124c74>] vm_unmap_aliases+0x18f/0x1b2
[<ffffffff8100eeb3>] xen_alloc_ptpage+0x5a/0xa0
[<ffffffff8100ef97>] xen_alloc_pte+0x26/0x3c
[<ffffffff81118381>] __pte_alloc_kernel+0x6f/0xdd
[<ffffffff811241a1>] vmap_page_range_noflush+0x1c5/0x315
[<ffffffff81124332>] map_vm_area+0x41/0x6b
[<ffffffff8112448b>] __vmalloc_area_node+0x12f/0x167
[<ffffffff81124553>] __vmalloc_node+0x90/0xb5
[<ffffffff811243c8>] __vmalloc_area_node+0x6c/0x167
[<ffffffff81124553>] __vmalloc_node+0x90/0xb5
[<ffffffff811247ca>] __vmalloc+0x28/0x3e
[<ffffffff818504c9>] alloc_large_system_hash+0x12f/0x1fb
[<ffffffff81852b4e>] vfs_caches_init+0xb8/0x140
[<ffffffff8182b061>] start_kernel+0x3ef/0x44c
[<ffffffff8182a2d0>] x86_64_start_reservations+0xbb/0xd6
[<ffffffff8182e6d1>] xen_start_kernel+0x5ab/0x5b2
[<ffffffffffffffff>] 0xffffffffffffffff
other info that might help us debug this:
2 locks held by khubd/28:
#0: (usb_address0_mutex){+.+...}, at: [<ffffffff813b2e82>] hub_port_init+0x8c/0x7ee
#1: (&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
the HARDIRQ-irq-safe lock's dependencies:
-> (&ehci->lock){-.....} ops: 0 {
IN-HARDIRQ-W at:
[<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff813d325c>] ehci_irq+0x41/0x441
[<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
[<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
[<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
. . . . . .
Jeremy's current version is rc6. I believe this issue has been noticed on xen-devel.
Boris