I just hit the following on a vm running 2.6.34.8-68.fc13.x86_64 after doing a lot of network traffic:
kswapd0: page allocation failure. order:0, mode:0x20
Pid: 33, comm: kswapd0 Not tainted 2.6.34.8-68.fc13.x86_64 #1
Call Trace:
 <IRQ>
 [<ffffffff810d158c>] __alloc_pages_nodemask+0x5c1/0x62f
 [<ffffffff810f8607>] alloc_pages_current+0x95/0x9e
 [<ffffffffa001ca06>] try_fill_recv+0x6c/0x1f3 [virtio_net]
 [<ffffffffa001d61f>] virtnet_poll+0x617/0x738 [virtio_net]
 [<ffffffff813a6243>] net_rx_action+0xaf/0x1ca
 [<ffffffff810533a6>] __do_softirq+0xe2/0x1a4
 [<ffffffff8109e1e5>] ? handle_IRQ_event+0x60/0x121
 [<ffffffff8100ab5c>] call_softirq+0x1c/0x30
 [<ffffffff8100c342>] do_softirq+0x46/0x83
 [<ffffffff8105321a>] irq_exit+0x3b/0x7d
 [<ffffffff8145328c>] do_IRQ+0xac/0xc3
 [<ffffffff8144d9d3>] ret_from_intr+0x0/0x11
 <EOI>
 [<ffffffff810d04d2>] ? __pagevec_free+0x66/0x77
 [<ffffffff810d626b>] shrink_page_list+0x3a0/0x477
 [<ffffffff8110ac42>] ? lookup_page_cgroup+0x32/0x48
 [<ffffffff810d66d4>] shrink_inactive_list+0x392/0x68e
 [<ffffffff810d1f9f>] ? determine_dirtyable_memory+0x1a/0x2c
 [<ffffffff810d2027>] ? get_dirty_limits+0x27/0x252
 [<ffffffff810483a8>] ? load_balance+0xd9/0x67f
 [<ffffffff810d6d2d>] shrink_zone+0x35d/0x411
 [<ffffffff810d7aaf>] balance_pgdat+0x337/0x5b6
 [<ffffffff810d51a9>] ? isolate_pages_global+0x0/0x1f7
 [<ffffffff810d7f69>] kswapd+0x23b/0x266
 [<ffffffff8106625f>] ? autoremove_wake_function+0x0/0x39
 [<ffffffff810d7d2e>] ? kswapd+0x0/0x266
 [<ffffffff81065de5>] kthread+0x7f/0x87
 [<ffffffff8100aa64>] kernel_thread_helper+0x4/0x10
 [<ffffffff81065d66>] ? kthread+0x0/0x87
 [<ffffffff8100aa60>] ? kernel_thread_helper+0x0/0x10
Mem-Info:
Node 0 DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
Node 0 DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd:   0
CPU    1: hi:  186, btch:  31 usd:  66
active_anon:15196 inactive_anon:31193 isolated_anon:0
 active_file:65429 inactive_file:117341 isolated_file:32
 unevictable:8 dirty:8205 writeback:0 unstable:0
 free:1359 slab_reclaimable:9424 slab_unreclaimable:6876
 mapped:3140 shmem:44 pagetables:1109 bounce:0
Node 0 DMA free:3992kB min:60kB low:72kB high:88kB active_anon:0kB inactive_anon:0kB active_file:16kB inactive_file:11812kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15360kB mlocked:0kB dirty:352kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:72kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 994 994 994
Node 0 DMA32 free:1444kB min:4000kB low:5000kB high:6000kB active_anon:60784kB inactive_anon:124772kB active_file:261700kB inactive_file:457552kB unevictable:32kB isolated(anon):0kB isolated(file):128kB present:1018068kB mlocked:32kB dirty:32468kB writeback:0kB mapped:12560kB shmem:176kB slab_reclaimable:37624kB slab_unreclaimable:27504kB kernel_stack:864kB pagetables:4436kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:32 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 0*4kB 1*8kB 5*16kB 2*32kB 4*64kB 6*128kB 3*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3992kB
Node 0 DMA32: 1*4kB 0*8kB 0*16kB 1*32kB 0*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 1444kB
182826 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap  = 1044188kB
Total swap = 1044188kB
262140 pages RAM
6290 pages reserved
94483 pages shared
170020 pages non-shared
This looks a lot like https://bugs.launchpad.net/ubuntu/+source/linux/+bug/579276
and should be fixed by upstream commit 3e9d08e ("virtio_net: Add schedule check to napi_enable call"), which is also in the -longterm and -stable releases.
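For reference, my understanding of that commit is that it wraps napi_enable() in a helper which immediately schedules a poll if one isn't already pending, so that receive buffers the host filled while NAPI was disabled get processed (and the RX ring refilled) right away instead of waiting for an interrupt that never comes. Roughly along these lines -- this is a paraphrase from memory of the then-current upstream driver, not the verbatim patch, and the virtnet_napi_enable() name, the rvq field and virtqueue_disable_cb() may well look different in a 2.6.34/2.6.35 backport:

static void virtnet_napi_enable(struct virtnet_info *vi)
{
        napi_enable(&vi->napi);

        /* If all buffers were filled by the host before NAPI was enabled,
         * we won't get another interrupt, so process any outstanding
         * packets now.  virtnet_poll() re-enables the queue itself, so
         * disable callbacks here; NAPI_STATE_SCHED serializes us against
         * the interrupt path. */
        if (napi_schedule_prep(&vi->napi)) {
                virtqueue_disable_cb(vi->rvq);
                __napi_schedule(&vi->napi);
        }
}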
Is this something which can be picked up for the F-13 and F-14 kernels?
Kind regards,
Ruben
On Mon, Mar 21, 2011 at 07:08:44PM +0100, Ruben Kerkhof wrote:
> I just hit the following on a vm running 2.6.34.8-68.fc13.x86_64 after doing a lot of network traffic:
want to test: http://koji.fedoraproject.org/koji/taskinfo?taskID=2936814 and make sure I didn't screw up the backport?
--Kyle
On Wed, Mar 23, 2011 at 17:29, Kyle McMartin <kyle@mcmartin.ca> wrote:
> On Mon, Mar 21, 2011 at 07:08:44PM +0100, Ruben Kerkhof wrote:
>> I just hit the following on a vm running 2.6.34.8-68.fc13.x86_64 after doing a lot of network traffic:
> want to test: http://koji.fedoraproject.org/koji/taskinfo?taskID=2936814 and make sure I didn't screw up the backport?
Sure, once I've figured out how to get it installed since I'm running the same version :)
Kind regards,
Ruben
On Wed, Mar 23, 2011 at 21:24, Ruben Kerkhof <ruben@rubenkerkhof.com> wrote:
> On Wed, Mar 23, 2011 at 17:29, Kyle McMartin <kyle@mcmartin.ca> wrote:
>> On Mon, Mar 21, 2011 at 07:08:44PM +0100, Ruben Kerkhof wrote:
>>> I just hit the following on a vm running 2.6.34.8-68.fc13.x86_64 after doing a lot of network traffic:
>> want to test: http://koji.fedoraproject.org/koji/taskinfo?taskID=2936814 and make sure I didn't screw up the backport?
> Sure, once I've figured out how to get it installed since I'm running the same version :)
I've got it running now, and will generate some network traffic for a few hours to see if the problem has disappeared.
I'll report back tomorrow.
Thanks again!
Ruben
On Wed, Mar 23, 2011 at 09:43:32PM +0100, Ruben Kerkhof wrote:
>>> want to test: http://koji.fedoraproject.org/koji/taskinfo?taskID=2936814 and make sure I didn't screw up the backport?
>> Sure, once I've figured out how to get it installed since I'm running the same version :)
> I've got it running now, and will generate some network traffic for a few hours to see if the problem has disappeared.
> I'll report back tomorrow.
> Thanks again!
no problem, i should have cranked baserelease by .1 for the scratch build. :/
thanks for testing.
--kyle
On Wed, Mar 23, 2011 at 21:57, Kyle McMartin <kyle@mcmartin.ca> wrote:
> On Wed, Mar 23, 2011 at 09:43:32PM +0100, Ruben Kerkhof wrote:
>>>> want to test: http://koji.fedoraproject.org/koji/taskinfo?taskID=2936814 and make sure I didn't screw up the backport?
>>> Sure, once I've figured out how to get it installed since I'm running the same version :)
>> I've got it running now, and will generate some network traffic for a few hours to see if the problem has disappeared.
>> I'll report back tomorrow.
>> Thanks again!
> no problem, i should have cranked baserelease by .1 for the scratch build. :/
> thanks for testing.
No problems here after doing about 1.5TB of traffic :)
Kind regards,
Ruben