Kernel eating memory, ends up thrashing

Dag Wieers dag at wieers.com
Mon Sep 8 04:50:23 UTC 2003


On Sun, 7 Sep 2003, Rik van Riel wrote:

> On Mon, 8 Sep 2003, Dag Wieers wrote:
> 
> > On August 8 I filed a bug report for a machine that thrashes approximately 
> > every 2 weeks because of the kernel eating memory. You can find it here:
> 
> > I know at least two other people who have the same problem (on different 
> > machines with a similar workload/functionality), so it seems to be a bug in 
> > the kernel for a particular use of the VM.
> 
> The thing is, RHL8 and 9 have a completely different VM from
> Severn.  Also, many people cannot reproduce this problem at
> all.

Would the best thing be to try out the Severn kernel on RH9?
I cannot simply install Severn on this system ;/


> This, together with your graphs, suggests it's more likely
> to be a memory leak in some driver than a bug in the core VM
> code.

By 'driver', you don't mean a loaded module, do you? Is there some way to find 
out which driver/module is allocating this memory?


> Do you have anything suspicious in /proc/slabinfo when your
> system gets close to crashing ?

It has now been up for 7 days (36 MB used, 25 MB free), so we'll see in another
5 days. These are the entries with the highest numbers (columns: active objects,
total objects, object size in bytes, active slabs, total slabs, pages per slab):

	inode_cache          812   1232    480  154  154    1
	dentry_cache         572   1380    128   46   46    1
	filp                 620    630    128   21   21    1
	buffer_head         4349  12363    100  162  317    1
	mm_struct           6038   6069    224  356  357    1
	vm_area_struct      1315   4920     96   38  123    1
	pte_chain            729   3277     32   14   29    1

I'm not sure what I should be looking for. I guess I'd better also save this 
directly after booting up.
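
For scale, multiplying objects by object size: 6069 mm_structs at 224 bytes is
about 1.3 MB and 12363 buffer_heads at 100 bytes is about 1.2 MB, so the entries
above only add up to a few MB so far. To see which cache (if any) actually grows,
a rough, untested sketch like this could compare two saved copies of
/proc/slabinfo (it assumes the 2.4 column layout described above):

#!/usr/bin/env python
# Untested sketch: report which slab caches grew between two saved
# copies of /proc/slabinfo (e.g. one taken right after boot and one
# taken when the memory has gone missing).
import sys

def read_slabinfo(path):
    caches = {}
    for line in open(path):
        fields = line.split()
        # skip the "slabinfo - version" header and short lines
        if len(fields) < 4 or not fields[1].isdigit():
            continue
        total_objs, objsize = int(fields[2]), int(fields[3])
        caches[fields[0]] = total_objs * objsize  # approximate bytes in this cache
    return caches

old = read_slabinfo(sys.argv[1])
new = read_slabinfo(sys.argv[2])
growth = sorted(((new[n] - old.get(n, 0), n) for n in new), reverse=True)
for delta, name in growth[:10]:
    print("%-20s grew by about %d kB" % (name, delta // 1024))

So: save /proc/slabinfo right after a reboot, save it again once the memory has
started to disappear, and run the script on the two files.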


> In the RHL9 and RHL8 kernels, how big is free + active +
> inactive* together?  How much space is unaccounted for ?

I'm not sure how it all adds up. The memory usage that ps/top reports for all 
processes is almost constant (around 12 MB), whereas roughly 3.5 MB extra per 
day slowly becomes unaccounted for.

This is after 7 days running:

	[root@breeg root]# free
	             total       used       free     shared    buffers     cached
	Mem:         61676      60648       1028          0       2212      20416
	-/+ buffers/cache:      38020      23656
	Swap:       457836       8876     448960

	[root@breeg root]# cat /proc/meminfo
	        total:    used:    free:  shared: buffers:  cached:
	Mem:  63156224 62107648  1048576        0  2277376 25321472
	Swap: 468824064  9084928 459739136
	MemTotal:        61676 kB
	MemFree:          1024 kB
	MemShared:           0 kB
	Buffers:          2224 kB
	Cached:          20416 kB
	SwapCached:       4312 kB
	Active:          20688 kB
	ActiveAnon:       7660 kB
	ActiveCache:     13028 kB
	Inact_dirty:         0 kB
	Inact_laundry:    7036 kB
	Inact_clean:       580 kB
	Inact_target:     5660 kB
	HighTotal:           0 kB
	HighFree:            0 kB
	LowTotal:        61676 kB
	LowFree:          1024 kB
	SwapTotal:      457836 kB
	SwapFree:       448964 kB
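
Doing the sum Rik asked for on the numbers above: MemFree (1024) + Active (20688)
+ Inact_dirty (0) + Inact_laundry (7036) + Inact_clean (580) = 29328 kB, which
leaves 61676 - 29328 = 32348 kB (roughly 32 MB) unaccounted for, presumably
kernel allocations (slab, page tables and the like). A quick, untested sketch
that does the same tally from /proc/meminfo, assuming the field names shown
above:

#!/usr/bin/env python
# Untested sketch: tally free + active + inactive from /proc/meminfo
# (the 2.4 rmap field names above) and show how much of MemTotal is left.
def meminfo():
    info = {}
    for line in open("/proc/meminfo"):
        parts = line.split(":")
        # only the "Name:   12345 kB" lines, not the summary block at the top
        if len(parts) == 2 and parts[1].strip().endswith("kB"):
            info[parts[0].strip()] = int(parts[1].split()[0])
    return info

m = meminfo()
accounted = (m["MemFree"] + m["Active"] + m["Inact_dirty"]
             + m["Inact_laundry"] + m["Inact_clean"])
print("accounted:   %6d kB" % accounted)
print("unaccounted: %6d kB" % (m["MemTotal"] - accounted))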

Kind regards,
--   dag wieers,  dag at wieers.com,  http://dag.wieers.com/   --
[Any errors in spelling, tact or fact are transmission errors]




