On Tue, 2004-06-01 at 18:05, Havoc Pennington wrote:
> Something like:
>  5M sum of per-app icon theme caching
>  5M sum of per-app base gtk_init() overhead
> 10M sum of per-email data in Evolution
>  7M base evo overhead with no mail loaded
> 30M sum of all executable pages (libraries and binaries)
> ...
I don't think people agree with me, but in my opinion it is important to
measure the working set. A program can malloc() 500 MB and then just sit
in poll(), never touching that memory, so the amount allocated says very
little about the memory that is actually in use.
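A quick way to see this on Linux is to watch VmRSS in /proc/self/status
around the allocation. A minimal sketch (sizes and output format are
just for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define SIZE (500 * 1024 * 1024)

    static void print_rss (const char *when)
    {
        char line[256];
        FILE *f = fopen ("/proc/self/status", "r");

        while (fgets (line, sizeof line, f))
            if (strncmp (line, "VmRSS:", 6) == 0)
                printf ("%-15s %s", when, line);
        fclose (f);
    }

    int main (void)
    {
        char *p;

        print_rss ("before malloc:");
        p = malloc (SIZE);             /* reserve 500M ...         */
        if (!p)
            return 1;
        print_rss ("after malloc:");   /* ... RSS barely moves     */
        memset (p, 1, SIZE);           /* now touch every page ... */
        print_rss ("after touching:"); /* ... RSS jumps by ~500M   */
        free (p);
        return 0;
    }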
An approach that could give you something close to what you are after is
to LD_PRELOAD a new malloc() for the entire desktop and have that new
malloc() report a stack trace for each allocation to another process
that could then process the data (a sketch of such a wrapper follows the
list below):
	- Calculate the total amount of memory used by applications:

	      sum of all mmap()ed pages in physical RAM
	    + sum of all anonymous pages in physical RAM

	  where "sum of all anonymous pages in physical RAM" is
	  calculated, for each application, by subtracting the
	  number of mapped pages in RAM from the RSS.
	- Report, like memprof does now, the amount of memory allocated
	  by a function and its children, and divide all the numbers by
	  the total amount of memory used, so that each item becomes a
	  percentage of the desktop's total memory usage.
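Roughly, the preload part could look something like the sketch below.
This is glibc-specific and illustrative only (file names are made up):
the transport to the analysis process is left as a comment, and a real
wrapper would also have to interpose calloc()/realloc()/free() and cope
with dlsym() itself allocating during bootstrap.

    /* mallocspy.c
     * build: gcc -shared -fPIC -o mallocspy.so mallocspy.c -ldl
     * run:   LD_PRELOAD=./mallocspy.so some-app
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <execinfo.h>
    #include <stddef.h>

    static void *(*real_malloc) (size_t);
    static __thread int in_hook;   /* backtrace() may malloc() itself */

    void *malloc (size_t size)
    {
        void *frames[32];
        void *result;
        int depth;

        if (!real_malloc)
            real_malloc = (void *(*)(size_t)) dlsym (RTLD_NEXT, "malloc");

        result = real_malloc (size);

        if (result && !in_hook) {
            in_hook = 1;
            depth = backtrace (frames, 32);
            /* here 'size' plus frames[0..depth-1] would be sent to
             * the analysis process, e.g. over a pipe or a socket */
            (void) depth;
            in_hook = 0;
        }
        return result;
    }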
The amount of memory used by mmap()ed files is easy to measure:
	- scan /proc and build a list of mmap()ed files
	- mmap() each of those files
	- use mincore() to find out how many pages of those files
	  are actually in RAM.
(I have a program that does this somewhere).
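In outline, the mincore() part for a single file could look like this
(error handling mostly omitted; the actual tool would loop over every
file found in the /proc/<pid>/maps lists):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main (int argc, char **argv)
    {
        struct stat st;
        long page = sysconf (_SC_PAGESIZE);
        size_t n_pages, resident = 0, i;
        unsigned char *vec;
        void *map;
        int fd;

        if (argc < 2)
            return 1;

        fd = open (argv[1], O_RDONLY);
        fstat (fd, &st);
        n_pages = (st.st_size + page - 1) / page;

        map = mmap (NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        vec = malloc (n_pages);

        if (mincore (map, st.st_size, vec) == 0) {
            for (i = 0; i < n_pages; i++)
                if (vec[i] & 1)   /* low bit set => page is resident */
                    resident++;
            printf ("%s: %lu of %lu pages in RAM\n", argv[1],
                    (unsigned long) resident, (unsigned long) n_pages);
        }

        munmap (map, st.st_size);
        close (fd);
        return 0;
    }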
> i.e. try to get an idea of where focused optimization could have the
> most impact on the desktop overall - what percentage of TOTAL memory
> usage for the whole desktop can be blamed on each optimizable item, with
> sufficient granularity to be useful.
The above might give you something like what you are after. It would
also be possible to report at filename granularity instead of function
granularity, which, as Alex suggested, might be interesting.
Soeren