partitioning

Chris Murphy lists at colorremedies.com
Wed Jul 23 18:01:22 UTC 2014


On Jul 16, 2014, at 11:21 AM, dustin kempter <dustink at consistentstate.com> wrote:

> Hi all, I'm an SA in training and I've been reading a lot about the importance of separating your workstation/server into separate partitions such as /, /data, /home, /ftp, /usr, /boot versus dividing it into just /, /boot, /data. It seems that doing it the way I've been reading about, with more partitions, is more secure, but what about when one partition becomes full?

I'm not sure how it's made more secure. What would do that is the technology backing a particular mount point: for example, raid1 for /home, raid5/6 for /var on a server, or a gluster volume at /var or /home.
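As a minimal sketch of that idea, assuming two spare disks /dev/sdb and /dev/sdc (hypothetical device names), a raid1-backed /home would look something like:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /home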

Partitions are kind of annoying actually, for the exact reason of what happens when a partition gets full. LVM makes this easier to manage because you can resize an LV and then the filesystem. Even better is LVM thin provisioning, where you make each volume an "ideal" size for its practical lifetime and it only consumes from the VG what is actually being used. Filesystem resizing is avoided, which causes certain inefficiencies anyway and just adds to the non-deterministic nature of filesystems (Btrfs is sort of an exception).
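As a rough sketch (the VG, LV, and device names are made up), the conventional resize versus the thin-provisioned approach looks something like:

# grow a conventional LV, then the filesystem on it
lvextend -L +20G /dev/vg0/data
xfs_growfs /data        # resize2fs for ext4

# or create a thin pool and overprovisioned thin volumes;
# space is only taken from the pool as blocks are actually written
lvcreate --type thin-pool -L 100G -n pool0 vg0
lvcreate -V 500G -T vg0/pool0 -n data
lvcreate -V 500G -T vg0/pool0 -n home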


> Isn't that more of a problem versus one big /data partition, where that is not an issue?

Sure, so use one big partition and maybe quotas to contain things, or LVM, or gluster or ceph volumes. /boot on a plain partition makes sense, and for workstations/laptops it's useful to have /home separate, just because an OS reinstall is then easier than blowing away the whole system and restoring /home from a backup.
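For example, XFS project quotas can cap a directory tree on one big filesystem (the /data/ftp path, project id 42, and the 50g limit are made-up values):

mount -o prjquota /dev/sda3 /data
echo "42:/data/ftp" >> /etc/projects
echo "ftp:42" >> /etc/projid
xfs_quota -x -c 'project -s ftp' /data
xfs_quota -x -c 'limit -p bhard=50g ftp' /data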

> What would you guys say the best solution would be? I've also read that you want to have twice as much swap as RAM, and that dividing swap into 2 partitions helps with performance. Is this true?


The best solution depends on the problem you want to avoid or solve. The installer's python-blivet code has swap recommendations. It's something like 2x RAM up to a certain amount of memory, then 1x, and above maybe 64GB it's 1/2x. You really don't want to be under swap pressure with any regularity, to the point that, if this is a server, you might be better off with swap on an SSD. If you're using XFS, you can estimate the memory requirements for a filesystem repair with:

xfs_repair -n -vv -m 1 <dev>    # -n = no modify; -m 1 with -vv reports the estimated memory needed

If you give xfs_repair only the minimum, a repair could take hours, which is not good for a server. So again, if you can't afford the right amount of RAM to support the filesystem size, then in a bind you can use an SSD for swap; while it won't be "fast", it won't be dog slow (hours or days).

http://xfs.org/index.php/XFS_FAQ#Q:_Which_factors_influence_the_memory_usage_of_xfs_repair.3F
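On the question of splitting swap across two partitions: it can help if they sit on separate physical disks, because giving them equal priority lets the kernel interleave pages between them. A minimal sketch, with made-up device names:

mkswap /dev/sdb2
mkswap /dev/sdc2
swapon -p 5 /dev/sdb2
swapon -p 5 /dev/sdc2

Or add pri=5 to both swap entries in /etc/fstab so the same interleaving happens at boot.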


Chris Murphy

