Hard drive spec change
Ric Wheeler
rwheeler at redhat.com
Sat Mar 13 13:17:25 UTC 2010
On 03/13/2010 12:45 AM, Felix Miata wrote:
> On 2010/03/10 21:28 (GMT-0500) Ric Wheeler composed:
>
>
>> For anyone serious about storage (performance, reliability and power
>> consumption) this will be a positive step.
>>
> Not everyone. Users of larger numbers of small files and small numbers of
> large files already lose a heap of space to slack even with 1024k blocksize,
> which will at least quadruple if forced to 4k sectors.
> http://en.wikipedia.org/wiki/Internal_fragmentation#Internal_fragmentation
>
Second on my list of annoying replies is a pointer to Wikipedia (trumped
only by replies consisting of nothing but random URLs!).
If you really want to store lots of really tiny files (< 1KB), you
probably want to store them in more efficient ways (tar them up, use a
lightweight DB, etc.). Having been in the business of making storage
appliances that stored lots of small files, I can say it is a real
challenge.
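For example, packing tiny files into a single archive means the filesystem only allocates blocks for one file instead of rounding every tiny file up to a full block. A minimal Python sketch of the tar approach (the file names, sizes, and count here are made up for illustration):

```python
import io
import tarfile

# Pack 1000 tiny (100-byte) files into one in-memory tar archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for i in range(1000):
        data = b"x" * 100  # a 100-byte "tiny file"
        info = tarfile.TarInfo(name="tiny/%04d.txt" % i)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# tar still rounds each member to 512-byte records, but the whole
# archive is far smaller than 1000 separate files would be if each
# one consumed a full 4 KB block.
print(buf.getbuffer().nbytes)
```

Note that tar itself pads each member to 512 bytes, so it is not free either; a lightweight DB (sqlite, berkeley db, etc.) can pack even tighter.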
Also note that the overhead of creating a file/directory entry/inode in
most modern file systems can easily consume more space than a tiny file
itself. If you want to test this, just take your favourite file system
and make a brand new, empty FS. Fill it with zero-length files and then
see what your per-file overhead is.
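A quick way to see part of that overhead from userspace (a rough sketch; exact inode and directory-entry sizes vary by filesystem and mkfs options):

```python
import os
import tempfile

# Create a batch of zero-length files and inspect their on-disk cost.
d = tempfile.mkdtemp()
n = 100
for i in range(n):
    open(os.path.join(d, "f%03d" % i), "w").close()

st = os.stat(os.path.join(d, "f000"))
print(st.st_size)    # 0 bytes of data...
print(st.st_blocks)  # ...and 0 data blocks allocated,
# yet each file still consumes an inode and a directory entry.
# On ext4, for instance, an inode is commonly 256 bytes, so these
# 100 "empty" files already cost tens of KB of metadata.
```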
In any case, you could use a file system (like reiserfs) that does tail
packing.
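To put numbers on the slack-space concern raised in the quoted message (simple allocation arithmetic, not measurements from any particular filesystem):

```python
# Bytes wasted when a file is rounded up to whole allocation blocks.
def slack(file_size, block_size):
    if file_size == 0:
        return 0
    allocated = ((file_size + block_size - 1) // block_size) * block_size
    return allocated - file_size

# A 100-byte file wastes most of its single block:
print(slack(100, 1024))  # 924 bytes of slack with 1 KB blocks
print(slack(100, 4096))  # 3996 bytes of slack with 4 KB blocks
print(slack(4096, 4096)) # 0 -- an exact multiple wastes nothing
```

Tail packing (as in reiserfs) avoids much of this by storing several file tails inside a shared block.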
Ric