On 19 Sep 2003, Stephen C. Tweedie wrote:
On Fri, 2003-09-19 at 18:30, Dag Wieers wrote:
Exactly which kernel are you running ?
2.4.20-19.9 currently, but I see the same effect with 2.4.20-18 and 2.4.20-20. The graph is exactly the same as it was weeks ago (this happens every 2 weeks; only this time the system was responsive enough to return the information ;)
We recently found one problem which could affect the size of the inode cache. It's not exactly a leak, because the resources can still be reclaimed, but the inodes were filed away in a place where the VM was least likely to go looking to reclaim them.
Could you "cat /proc/slabinfo" on the affected systems and see what the inode_cache entry looks like, please?
The system was rebooted 2 days ago. The effect will be much bigger if I wait at least a week.
This is what I pasted earlier on Rik's request:
| > Do you have anything suspicious in /proc/slabinfo when your
| > system gets close to crashing ?
|
| It's now up 7 days (36MB used, 25MB free) so we'll see in another
| 5 days. These are the higher numbers:
|
| inode_cache      812  1232  480  154  154  1
| dentry_cache     572  1380  128   46   46  1
| filp             620   630  128   21   21  1
| buffer_head     4349 12363  100  162  317  1
| mm_struct       6038  6069  224  356  357  1
| vm_area_struct  1315  4920   96   38  123  1
| pte_chain        729  3277   32   14   29  1
|
| I'm not sure what I have to look for. I guess I better save this also
| directly after booting up.
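For readers unsure what to look for in these columns: on 2.4-era kernels the fields are roughly name, active objects, total objects, object size in bytes, active slabs, total slabs, and pages per slab. A small sketch (not from the thread; the column layout is assumed, and the memory figure is an upper bound since it counts all allocated slots, not just active ones) shows how to turn the pasted lines into active/total ratios and approximate memory use:

```python
# Hedged sketch: parse 2.4-style /proc/slabinfo lines.
# Assumed column layout: name, active_objs, total_objs, obj_size,
# active_slabs, total_slabs, pages_per_slab.
def parse_slabinfo(text):
    caches = {}
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) < 7:
            continue  # skip headers or malformed lines
        name = fields[0]
        active, total, size = int(fields[1]), int(fields[2]), int(fields[3])
        caches[name] = {
            "active": active,
            "total": total,
            "obj_size": size,
            # upper bound: all allocated slots, not only active objects
            "kbytes": total * size / 1024.0,
        }
    return caches

# The numbers pasted earlier in this thread:
sample = """\
inode_cache 812 1232 480 154 154 1
dentry_cache 572 1380 128 46 46 1
buffer_head 4349 12363 100 162 317 1
"""

caches = parse_slabinfo(sample)
for name, c in caches.items():
    print(f"{name}: {c['active']}/{c['total']} objects, ~{c['kbytes']:.1f} KB")
```

A cache whose total keeps growing while its active count stays low is the kind of "filed away where the VM won't reclaim it" pattern described above.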
I'll give you some other numbers later. If there's some more stuff you need from that system just before it starts thrashing, please tell me now ;)
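Saving snapshots right after boot and again before the slowdown, as suggested above, could be done with a small helper (a hedged sketch; the cache names are just the ones mentioned in this thread, and the source path is a parameter so it also works on a saved copy of /proc/slabinfo):

```shell
# Hypothetical helper: append a timestamped snapshot of the slab caches
# discussed in this thread to a log file for later comparison.
snapshot_slabs() {
    slabinfo=${1:-/proc/slabinfo}
    log=${2:-slab-snapshots.log}
    {
        date
        grep -E '^(inode_cache|dentry_cache|filp|buffer_head|mm_struct) ' "$slabinfo"
        echo
    } >> "$log"
}
```

Run it from cron (or by hand) and diff the inode_cache and dentry_cache lines between snapshots to see whether the caches are growing without being reclaimed.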
Thanks,
--
dag wieers, dag@wieers.com, http://dag.wieers.com/
[Any errors in spelling, tact or fact are transmission errors]