On 10/4/05, Arjan van de Ven <arjanv@redhat.com> wrote:
On Tue, 2005-10-04 at 12:24 -0500, Justin Conover wrote:
> tune2fs
> -m reserved-blocks-percentage
> -r reserved-blocks-count
>
> If what I have read is correct, -m by default uses 5% of the disk
> space for "reserved root use".  On my server at home with a /home of
> 1TB that's about 50GB of wasted space.
>
> Is this reserved space actually used by ANYTHING?  Like LVM, some kind
> of fragmentation?

well, it's for emergency root use; logging in without any free disk
space is hard, and lots of things want to create temporary files etc.

but your second point is true too: most filesystems (ext3, but most
others as well) start to fragment like hell once they go over about
95% full.

Think of it this way: if you have half your disk empty, the filesystem
can do a proper job of finding non-fragmented space.
If only 0.0001% is free, it has almost no freedom of choice, and you
get blocks in whatever order they happen to become free.
Those are the extremes; there's been a bunch of research, and the
outcome was that 5% free seems to be roughly the turning point in this
respect.

I suspect that research predates TB-sized volumes, so I don't know
whether it's maybe 1% on such volumes; but then again, to some extent
the freedom needed will scale with the size of the filesystem.
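
For reference, a quick way to check the current reservation on a given
filesystem is tune2fs's list mode (here /dev/sda1 is just a stand-in
for the real device):

  # print the superblock info and pick out the reserved block count
  tune2fs -l /dev/sda1 | grep -i 'reserved block count'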


So, let's say you have a production server with a 20GB filesystem used
for an Oracle database: would lowering it to 1-2% be safe, or should
you leave this at the default?

FYI, I'm not asking for support, just curious where the line is on
importance.  Once you get into 200GB-1TB filesystems, you have a
pretty large area of unallocated space.
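
For what it's worth, the change itself is a one-liner if it turns out
to be safe for that workload (again, /dev/sda1 is just an example
device; -m takes the reserved percentage):

  # drop the root-reserved space from the 5% default down to 1%
  tune2fs -m 1 /dev/sda1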