Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

Eric Sandeen sandeen at redhat.com
Mon Oct 3 23:03:56 UTC 2011


On 10/3/11 5:53 PM, Farkas Levente wrote:
> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>> I wasn't able to give the VM enough memory to make this succeed.  I've
>>> only got 8G on this laptop.  Should I need large amounts of memory to
>>> create these filesystems?
>>>
>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>> starts to show.  The underlying qcow2 file grows to several gigs and I
>>> had to kill it.  I need to play with the lazy init features of ext4.
>>>
>>> Rich.
>>>
>>
>> Bleah.  Care to use xfs? ;)
> 
> why do we have to use xfs? really? does nobody use large filesystems on
> linux? does nobody use rhel? why doesn't e2fsprogs get more upstream
> support? with 2-3TB disks the 16TB fs limit is really funny... or not
> so funny :-(

XFS has been proven at this scale on Linux for a very long time, is all.

But, that comment was mostly tongue in cheek.

Large filesystem support for ext4 has languished upstream for a very
long time, and few in the community seemed terribly interested in
testing it, either.

It's all fairly late in the game for ext4, but it's finally gaining some
momentum, I hope.  At least, the > 16T code is in the main git branch
now, and the next release will pretty well have to have the restriction
lifted.  As Richard found, there are sure to be a few rough edges.

Luckily nobody is really talking about deploying ext4 (or XFS for that matter)
at 1024 petabytes.
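(That 1024 PB isn't arbitrary, by the way: extents carry 48-bit physical
block numbers, and 2^48 blocks * 4k blocksize = 1 EB.)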

Testing in the 50T range is probably reasonable now, and pushing at the
boundaries (or even well shy of them) is worth doing.
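The same sparse-file trick works at 50T, and the lazy init options
should avoid the image growth Richard hit, since mkfs then writes almost
nothing up front (again an untested sketch, with example paths):

  truncate -s 50T /var/tmp/ext4-50t.img

  # Defer inode table and journal zeroing past mkfs time.  Note the
  # kernel's ext4lazyinit thread will still zero the tables in the
  # background after first mount, so the image grows then instead.
  mkfs.ext4 -F -O 64bit -E lazy_itable_init=1,lazy_journal_init=1 \
      /var/tmp/ext4-50t.img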

-Eric
