default file system, was: Comparison to Workstation Technical Specification

Simo Sorce simo at redhat.com
Sat Mar 1 18:57:24 UTC 2014


On Sat, 2014-03-01 at 12:04 +0000, Ian Malone wrote:
> On 28 February 2014 20:45, Adam Williamson <awilliam at redhat.com> wrote:
> > On Thu, 2014-02-27 at 23:16 -0700, Chris Murphy wrote:
> >> On Feb 27, 2014, at 11:07 PM, James Wilson Harshaw IV <jwharshaw at gmail.com> wrote:
> >>
> >> > I apologize, I guess I did not get the whole background out of it.
> >> >
> >> > What filesystems are we considering?
> >>
> >> It's XFS vs ext4 and Server WG has agreed on XFS on LVM.
> >
> > As a server WG member I voted +1 on XFS as I have no particular
> > objection to XFS as a filesystem, but I do think it seems a bit
> > sub-optimal for us to wind up with server and desktop having defaults
> > that are very similar but slightly different, for no apparently great
> > reason.
> >
> > ext4 and xfs are basically what I refer to as 'plain' filesystems (i.e.
> > not all the souped-up btrfs/zfs stuff): they're stable, mature, and
> > generally good enough for just about all cases. Is xfs really so much
> > better for servers, and ext4 so much better for desktops, that it's
> > worth the extra development/maintenance to allow Desktop to use ext4 and
> > Server to use xfs?
> >
> > Basically, what I'm saying is that if Desktop would be OK with using
> > xfs-on-LVM as default with all choices demoted to custom partitioning
> > (no dropdown), as Server has currently agreed on, that'd be great. Or if
> > we could otherwise achieve agreement on something.
> >
> 
> As you say, they are 'plain' filesystems. I now regret not sending in
> my small datapoint before the Server WG decision: a while ago, after
> using XFS for a long time, we started putting new filesystems onto
> ext4, and in the past month we moved probably our largest remaining
> dataset (1.1TB) from XFS to ext4. The main reason has been flexibility
> with resizing, particularly the XFS 32-bit inode ceiling (inode64 not
> working well with NFS for us).
> 1TB doesn't sound very big. These are imaging datasets in a research
> environment, with files ranging from small text files to images of
> tens of MB (some bigger, but not the dominant type). Projects usually
> get their own FS (for a variety of reasons, including backup, audit
> and budgeting). And often it's not known how large a project will
> eventually be, so filesystems get extended as appropriate. With XFS we
> have to take care to avoid the 32-bit inode ceiling, and most recently
> we found a filesystem that refused to take any more files for some
> other reason, even after creating a new clean copy. We didn't get to
> the bottom of that, and moved the data to ext4.
> Which is not to say XFS is a bad decision; plenty of people are using
> it for other tasks. But I looked through the Server WG meeting and
> couldn't see any mention of the for/against arguments. If my ramble
> above demonstrates anything, it's not really about XFS: it's that
> server admins have reasons for choosing an FS, and the scope to look
> at and change to alternatives. The default FS on Server does not
> actually have a massive impact; LVM is a different decision, and it
> makes sense there.
> LVM on a workstation, though: you can make it the default, but a
> couple of releases ago I started turning it off, and I will continue
> doing so. It adds an extra level of complication to management which
> gains you nothing on a workstation.
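
A quick illustration of the two XFS points raised above, resizing and
the inode ceiling. The device and mount names are hypothetical, the
admin commands are shown as comments only (they need root and real
volumes), and the closing arithmetic is just a back-of-envelope check:

```shell
# Resizing asymmetry (hypothetical LVM device /dev/vg0/data):
#
#   lvextend -L +100G /dev/vg0/data
#   xfs_growfs /mnt/data          # XFS can grow online, but has no shrink
#
#   lvextend -L +100G /dev/vg0/data
#   resize2fs /dev/vg0/data       # ext4 grows online...
#   umount /mnt/data
#   e2fsck -f /dev/vg0/data
#   resize2fs /dev/vg0/data 500G  # ...and can also shrink offline
#
# The 32-bit inode ceiling: XFS inode numbers encode the inode's disk
# location, so with the common 256-byte inodes, 32-bit inode numbers can
# only reach inodes stored in roughly the first:
echo "$(( 2**32 * 256 / 1024**4 )) TiB"   # prints "1 TiB"
```

Growing a filesystem past that point is exactly where inode64 (and the
old NFS concern around it) comes in.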

As far as I know, inode64 is not really a problem on NFS anymore, which
is why I did not raise it as an issue at all (I use NFS, and I have a
6TB XFS filesystem mounted with inode64).
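
For reference, a minimal sketch of how inode64 is typically enabled;
the device and mountpoint below are placeholders, not the actual setup
described in this thread:

```shell
# Hypothetical /etc/fstab entry enabling 64-bit inode numbers on XFS:
#
#   /dev/vg0/bigdata  /export/data  xfs  defaults,inode64  0 0
#
# One-off equivalent (needs root and a real XFS volume):
#
#   mount -o inode64 /dev/vg0/bigdata /export/data
```

On kernels from around 3.7 onward, inode64 is the default XFS mount
behaviour anyway, so the option mostly matters on older installs or
when forcing inode32 for legacy 32-bit clients.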

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


