No pcifront in F10?
by Jim Lutz
I have a virtual machine landscape consisting of a CentOS 5.3 Dom0 with
Xen 3.1.2 and some PV CentOS DomUs with PCI passthrough (same versions),
and I'm trying to add an F10 DomU with PCI passthrough. I set everything
up for the passthrough, but the F10 guest can't see any of the devices
with lspci. If I boot a CentOS PV machine with the same PCI passthrough
configuration, the devices show up in lspci. I've seen some posts in
various places about this issue, but no solutions. How do I implement
this?
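For reference, the usual Xen 3.1-era PV passthrough setup looks roughly
like this; a minimal sketch, with an illustrative BDF (0000:00:19.0) and
guest config path rather than the poster's actual device:

# dom0: detach the device from its dom0 driver and hand it to pciback
modprobe pciback
echo 0000:00:19.0 > /sys/bus/pci/devices/0000:00:19.0/driver/unbind
echo 0000:00:19.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:00:19.0 > /sys/bus/pci/drivers/pciback/bind

# guest config (e.g. /etc/xen/f10guest): request the passthrough
pci = [ '0000:00:19.0' ]

For a PV guest the device only appears if the guest kernel provides the
pcifront driver, which is what the subject line is asking about.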
12 years, 11 months
Dom0 kernels
by M A Young
I have built a new set of kernel packages based on fedora rawhide
kernels and the xen/dom0/hackery branch of Jeremy's git repository
( http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=summary ).
This batch (kernel-2.6.29-0.114.2.6.rc6.fc11) is available via the koji
build system at
http://koji.fedoraproject.org/koji/taskinfo?taskID=1149500
These are really for development and debugging purposes only, as I am
still having problems getting them to boot, but others have reported more
success at getting kernels based on this git repository working, so you
might be lucky.
Note: to install these packages on Fedora 10 you will need to have
rpm-4.6.0-1.fc10 installed (currently in updates-testing, but it should
be available in updates soon) because of the change to SHA-256 file
digests in recent Fedora 11 builds.
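As a sketch, after downloading the rpms from the koji task above, that
amounts to something like (x86_64 file names assumed):

yum --enablerepo=updates-testing update rpm
rpm -ivh kernel-2.6.29-0.114.2.6.rc6.fc11.x86_64.rpm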
Michael Young
13 years, 6 months
missing xen-debuginfo-3.4.1-1.fc11.x86_64.rpm
by Jon R.
I am following Boris' guide at:
http://bderzhavets.wordpress.com/2009/08/20/setup-libvirt-0-7-0-6-xen-3-4...
After I issue the "rpmbuild -ba ./xen.spec" command and then run
yum install xen-3.4.1-1.fc11.x86_64.rpm \
xen-debuginfo-3.4.1-1.fc11.x86_64.rpm \
xen-devel-3.4.1-1.fc11.x86_64.rpm \
xen-doc-3.4.1-1.fc11.x86_64.rpm \
xen-hypervisor-3.4.1-1.fc11.x86_64.rpm \
xen-libs-3.4.1-1.fc11.x86_64.rpm \
xen-runtime-3.4.1-1.fc11.x86_64.rpm
I am always missing xen-debuginfo-3.4.1-1.fc11.x86_64.rpm.
Is there a certain package that needs to be installed to get this rpm,
and can I get by without it?
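One way to narrow it down is to list what rpmbuild actually produced
(the RPMS path depends on %_topdir; /usr/src/redhat is the old default
when building as root, ~/rpmbuild is common otherwise):

ls /usr/src/redhat/RPMS/x86_64/ | grep '^xen-'

A common cause of missing -debuginfo subpackages is building without
redhat-rpm-config installed, since it supplies the debuginfo generation
scripts, so installing it before rebuilding may help. And because
-debuginfo is only needed for debugging, simply dropping it from the
yum install line above is also an option.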
Thanks for any help....again :)
Jon
AMD64 fc11 4core proc
[root@localhost x86_64]# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9500 Quad-Core Processor
stepping : 2
cpu MHz : 2210.236
cache size : 512 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb
rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc pni monitor
cx16 lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
3dnowprefetch osvw ibs
bogomips : 4420.46
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
14 years, 1 month
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
Thanks for keeping the vanilla kernel alive.
Boris.
14 years, 1 month
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
I've made one more clean install with this repo file:
[root@ServerXen341 yum.repos.d]# cat fedora-myoung-dom0.repo
[myoung-dom0]
name=myoung's repository of Fedora based dom0 kernels - $basearch
baseurl=http://fedorapeople.org/~myoung/dom0/$basearch/
enabled=1
gpgcheck=0
[myoung-dom0-source]
name=myoung's repository of Fedora based dom0 kernels - Source
baseurl=http://fedorapeople.org/~myoung/dom0/src/
enabled=1
gpgcheck=0
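With that repo in place, the dom0 kernel installs the usual way; a
sketch, assuming the package is named kernel as in the version strings
below:

yum --disablerepo='*' --enablerepo=myoung-dom0 install kernel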
Then I tried to boot the kernel built as vanilla (i.e. on bare metal).
The console dropped into a stack trace and hung. However, the kernel
loads fine and works under Xen; the dmesg report under Xen stays the
same (stack trace entries).
Boris.
--- On Wed, 8/26/09, Boris Derzhavets <bderzhavets(a)yahoo.com> wrote:
From: Boris Derzhavets <bderzhavets(a)yahoo.com>
Subject: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
To: fedora-xen(a)redhat.com, fedora-virt(a)redhat.com, "M A Young" <m.a.young(a)durham.ac.uk>
Date: Wednesday, August 26, 2009, 7:26 AM
The upstream issues look DomU-related.
Kernel 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 works stably at runtime.
Boris.
--- On Wed, 8/26/09, Boris Derzhavets <bderzhavets(a)yahoo.com> wrote:
From: Boris Derzhavets <bderzhavets(a)yahoo.com>
Subject: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
To: fedora-xen(a)redhat.com, fedora-virt(a)redhat.com, "M A Young" <m.a.young(a)durham.ac.uk>
Cc: xen-devel(a)xensource.com
Date: Wednesday, August 26, 2009, 3:58 AM
Not sure how 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 was built.
There are pretty recent, ongoing issues with 2.6.31-rc7 upstream:
http://lkml.org/lkml/2009/8/25/347
http://patchwork.kernel.org/patch/43791/
I was able to load a Xen guest under 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64,
followed by a kernel error with sky2 (?) and loss of the VNC connection
to the Xen 3.4.1 host on top of F11. I'll do some more testing.
Boris.
14 years, 1 month
Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
by Boris Derzhavets
With rpm upgraded I can load Dom0. However, the dmesg report contains:
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 (root@ServerXenSRC) (gcc version 4.4.0 20090506 (Red Hat 4.4.0-4) (GCC) ) #1 SMP Wed Aug 26 08:14:58 MSD 2009
Command line: root=/dev/mapper/vg_serverxensrc-LogVol00 ro console=tty0
KERNEL supported cpus:
Intel GenuineIntel
AMD AuthenticAMD
Centaur CentaurHauls
BIOS-provided physical RAM map:
Xen: 0000000000000000 - 000000000009ec00 (usable)
Xen: 000000000009ec00 - 0000000000100000 (reserved)
Xen: 0000000000100000 - 00000000cff80000 (usable)
Xen: 00000000cff80000 - 00000000cff8e000 (ACPI data)
Xen: 00000000cff8e000 - 00000000cffe0000 (ACPI NVS)
Xen: 00000000cffe0000 - 00000000d0000000 (reserved)
Xen: 00000000fee00000 - 00000000fee01000 (reserved)
Xen: 00000000ffe00000 - 0000000100000000 (reserved)
Xen: 0000000100000000 - 00000001f1a6b000 (usable)
DMI 2.4 present.
AMI BIOS detected: BIOS may corrupt low RAM, working around it.
e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
last_pfn = 0x1f1a6b max_arch_pfn = 0x400000000
last_pfn = 0xcff80 max_arch_pfn = 0x400000000
initial memory mapped : 0 - 20000000
init_memory_mapping: 0000000000000000-00000000cff80000
0000000000 - 00cff80000 page 4k
kernel direct mapping tables up to cff80000 @ 100000-785000
init_memory_mapping: 0000000100000000-00000001f1a6b000
0100000000 - 01f1a6b000 page 4k
. . . . . . .
======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 #1
------------------------------------------------------
khubd/28 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
(&retval->lock){......}, at: [<ffffffff8112a240>] dma_pool_alloc+0x45/0x321
and this task is already holding:
(&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
which would create a new lock dependency:
(&ehci->lock){-.....} -> (&retval->lock){......}
but this new dependency connects a HARDIRQ-irq-safe lock:
(&ehci->lock){-.....}
... which became HARDIRQ-irq-safe at:
[<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff813d325c>] ehci_irq+0x41/0x441
[<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
[<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
[<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
[<ffffffff81017078>] handle_irq+0x9a/0xba
[<ffffffff8130a0c6>] xen_evtchn_do_upcall+0x10c/0x1bd
[<ffffffff8101527e>] xen_do_hypervisor_callback+0x1e/0x30
[<ffffffffffffffff>] 0xffffffffffffffff
to a HARDIRQ-irq-unsafe lock:
(purge_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
... [<ffffffff810990ff>] __lock_acquire+0x2c8/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff811233a4>] __purge_vmap_area_lazy+0x63/0x198
[<ffffffff81124c74>] vm_unmap_aliases+0x18f/0x1b2
[<ffffffff8100eeb3>] xen_alloc_ptpage+0x5a/0xa0
[<ffffffff8100ef97>] xen_alloc_pte+0x26/0x3c
[<ffffffff81118381>] __pte_alloc_kernel+0x6f/0xdd
[<ffffffff811241a1>] vmap_page_range_noflush+0x1c5/0x315
[<ffffffff81124332>] map_vm_area+0x41/0x6b
[<ffffffff8112448b>] __vmalloc_area_node+0x12f/0x167
[<ffffffff81124553>] __vmalloc_node+0x90/0xb5
[<ffffffff811243c8>] __vmalloc_area_node+0x6c/0x167
[<ffffffff81124553>] __vmalloc_node+0x90/0xb5
[<ffffffff811247ca>] __vmalloc+0x28/0x3e
[<ffffffff818504c9>] alloc_large_system_hash+0x12f/0x1fb
[<ffffffff81852b4e>] vfs_caches_init+0xb8/0x140
[<ffffffff8182b061>] start_kernel+0x3ef/0x44c
[<ffffffff8182a2d0>] x86_64_start_reservations+0xbb/0xd6
[<ffffffff8182e6d1>] xen_start_kernel+0x5ab/0x5b2
[<ffffffffffffffff>] 0xffffffffffffffff
other info that might help us debug this:
2 locks held by khubd/28:
#0: (usb_address0_mutex){+.+...}, at: [<ffffffff813b2e82>] hub_port_init+0x8c/0x7ee
#1: (&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
the HARDIRQ-irq-safe lock's dependencies:
-> (&ehci->lock){-.....} ops: 0 {
IN-HARDIRQ-W at:
[<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
[<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
[<ffffffff813d325c>] ehci_irq+0x41/0x441
[<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
[<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
[<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
. . . . . .
Jeremy's current version is rc6. I believe this issue has been noticed on xen-devel.
Boris
14 years, 1 month
Re: [Fedora-xen] BZIP2 and LZMA support patch for Xen PV domU bootloader
by Boris Derzhavets
Virt-install of an F12 PV guest hangs on 64-bit. Report attached.
Boris.
--- On Sun, 8/23/09, Pasi Kärkkäinen <pasik(a)iki.fi> wrote:
From: Pasi Kärkkäinen <pasik(a)iki.fi>
Subject: Re: [Fedora-xen] BZIP2 and LZMA support patch for Xen PV domU bootloader
To: "Mark McLoughlin" <markmc(a)redhat.com>
Cc: fedora-xen(a)redhat.com
Date: Sunday, August 23, 2009, 12:45 PM
On Thu, Aug 20, 2009 at 10:04:09PM +0300, Pasi Kärkkäinen wrote:
> On Thu, Aug 20, 2009 at 07:02:12PM +0100, Mark McLoughlin wrote:
> > On Thu, 2009-08-20 at 19:44 +0300, Pasi Kärkkäinen wrote:
> > > On Thu, Aug 20, 2009 at 05:05:17PM +0100, Mark McLoughlin wrote:
> > > > On Thu, 2009-08-20 at 18:41 +0300, Pasi Kärkkäinen wrote:
> > > >
> > > > > # CONFIG_KERNEL_GZIP is not set
> > > > > # CONFIG_KERNEL_BZIP2 is not set
> > > > > CONFIG_KERNEL_LZMA=y
> > > > >
> > > > > OK, so the problem is the rawhide kernel is LZMA compressed. Wasn't this
> > > > > changed back to GZIP earlier?
> > > >
> > > > Thanks for pointing that out, see:
> > > >
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=515831
> > > >
> > > > It got disabled on a branch for the F12 Alpha release, but never got
> > > > disabled on the devel/ branch. Should be fixed now.
> > > >
> > >
> > > Thanks. Now I'll just wait for the next kernel build.. :)
> > >
> > > (or try applying the patch Chris sent to support BZIP2 and LZMA for Xen PV bootloader).
> >
> > Testing Chris's patch would certainly be a good idea. LZMA will come
> > back again in Fedora 13, so we should make sure to have the patch in
> > good shape by then
> >
>
> I actually already 'ported' the patch to Xen 3.4.1 and made sure it
> compiles OK. I've included it (and extra gcc 4.4.0 compilefix) to this
> email, and also sent them to xen-devel.
>
> At the moment I'm trying to make the patch work/apply with the Fedora
> xen-3.4.1-1 src.rpm, and sorting out a Fedora/rpm-specific compilation failure.
>
Here's an updated patch that actually works. I added it to xen-3.4.1-1.src.rpm
and verified the resulting rpms are able to boot LZMA-compressed kernels.
It's still not the final patch, since the stubdom/pvgrub bzip2/LZMA support is
not yet included (until the proper method for the Makefile hacks
is determined upstream).
-- Pasi
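For anyone reproducing this, the general src.rpm rebuild workflow is as
follows (a sketch; the patch file name and Patch number are illustrative):

rpm -ivh xen-3.4.1-1.fc11.src.rpm
cd /usr/src/redhat/SPECS   # or ~/rpmbuild/SPECS, depending on %_topdir
# in xen.spec, add:  Patch100: xen-3.4.1-bzip2-lzma.patch
# and in %prep:      %patch100 -p1
rpmbuild -ba xen.spec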
14 years, 1 month