On Mon, 8 Sep 2003, Dag Wieers wrote:
On Sun, 7 Sep 2003, Rik van Riel wrote:
This, together with your graphs, suggests it's more likely to be a memory leak in some driver than a bug in the core VM code.
By driver, do you mean a loaded module? Is there some way to find out which driver/module is allocating this memory?
There's no really easy way. The best way would be to look at which drivers you are using and compare that list with the drivers being used by the other people who have this problem.
Then look at people who don't have the problem at all. Scratch the drivers on problemless systems from the list of suspects.
Hopefully, you'll end up with just one or a few suspect drivers...
Do you have anything suspicious in /proc/slabinfo when your system gets close to crashing?
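One way to eyeball slabinfo for suspects is to rank the caches by estimated memory use. A minimal sketch, assuming the 2.4-kernel /proc/slabinfo layout (cache name followed by six numeric columns: active objects, total objects, object size, active slabs, total slabs, pages per slab); the sample data below is illustrative, not from the poster's machine:

```python
# Sketch: rank slab caches by estimated memory use, assuming the
# 2.4-kernel /proc/slabinfo column layout. Sample data is made up.
SAMPLE = """\
slabinfo - version: 1.1
dentry_cache       12034  15960    128   504   532    1
inode_cache         9871  11340    480  1402  1620    1
buffer_head        30210  31160     96   760   779    1
size-32             1200   1356     32    11    12    1
"""

def top_slabs(text, n=3):
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Data lines have a cache name plus six numeric columns.
        if len(parts) == 7 and parts[1].isdigit():
            name, num_objs, objsize = parts[0], int(parts[2]), int(parts[3])
            rows.append((name, num_objs * objsize))  # bytes (upper bound)
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows[:n]

for name, size in top_slabs(SAMPLE):
    print(f"{name:16s} {size // 1024:6d} kB")
```

A cache whose total keeps growing between runs (on a live box: `cat /proc/slabinfo`) would be a prime leak suspect; a steadily shrinking gap between active and total objects is normal churn.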
Hmmm ok, so nothing bad in /proc/slabinfo.
In the RHL9 and RHL8 kernels, how big are free + active + inactive* together? How much space is unaccounted for?
I'm not sure how it all adds up. The information I get from ps/top for all the processes is almost constant (around 12 MB), whereas roughly 3.5 MB extra seems to go unaccounted for each day.
This is after 7 days running:
MemTotal:      61676 kB
MemFree:        1024 kB
Active:        20688 kB
Inact_dirty:       0 kB
Inact_laundry:  7036 kB
Inact_clean:     580 kB
OK, that's 1 + 20.5 + 7 + .5 = 29 MB pageable memory.
In other words, 30+ MB is already taken by the kernel!
Definitely looks like a memory leak.
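The accounting above can be sketched exactly: sum the pageable-memory counters from the quoted /proc/meminfo output and treat the remainder of MemTotal as memory held by the kernel. Field names follow the 2.4 VM as posted:

```python
# Sketch of the accounting above: pageable memory is MemFree plus the
# active/inactive lists; the rest of MemTotal is taken by the kernel.
# Values are the ones quoted in the thread, in kB.
meminfo_kb = {
    "MemTotal":      61676,
    "MemFree":        1024,
    "Active":        20688,
    "Inact_dirty":       0,
    "Inact_laundry":  7036,
    "Inact_clean":     580,
}

pageable = sum(meminfo_kb[k] for k in
               ("MemFree", "Active", "Inact_dirty",
                "Inact_laundry", "Inact_clean"))
unaccounted = meminfo_kb["MemTotal"] - pageable

print(f"pageable:    {pageable} kB (~{pageable / 1024:.1f} MB)")
print(f"unaccounted: {unaccounted} kB (~{unaccounted / 1024:.1f} MB)")
```

This gives about 28.6 MB pageable and about 31.6 MB unaccounted for, matching the "30+ MB taken by the kernel" estimate; since slabinfo looked clean, that memory is being allocated outside the slab caches, which fits a driver-level leak.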