F16 unusable while writing to pendrive

sourcerer_sea at riseup.net
Sun Dec 18 05:15:35 UTC 2011


> Actually, the fact that Linux drives don't need regular defragging has
> nothing to do with the file system.

Well, what is that reason? I know that ext4 offers 'extents' and that
reduces fragmentation, but I don't see any other reason why the ext*
family of filesystems would be more immune to fragmentation than other
filesystems.

It should still be possible to fragment a large file if, for example, you
opened a file that spanned three or more blocks and inserted data into the
middle of it. The effect should be that another block is allocated and
added to the inode's block list to hold the new data, with that new block
chaining to block 3.
N.B.
Before the change:
[file-> [BLOCK1]->[BLOCK2]->[BLOCK3]] The file is contiguous on disk.
After the change:
[file-> [BLOCK1]->[BLOCK2]->[BLOCK242]->[BLOCK3]] The file is fragmented.
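
To make this concrete, here is a toy sketch in Python of the scenario
above. The "lowest free block" allocator is a hypothetical stand-in, not
how ext* actually allocates; it just shows how an insert in the middle of
a chained file picks up a far-away block:

def next_free_block(used):
    # Naive allocator: hand out the lowest block number not yet in use.
    b = 0
    while b in used:
        b += 1
    return b

# Our file holds blocks 1, 2, 3 contiguously; blocks 0..241 are taken
# by the file itself and by other files on the disk.
used = set(range(242))
file_blocks = [1, 2, 3]

# Inserting data between block 2 and block 3 forces a new allocation:
new_block = next_free_block(used)     # -> 242
used.add(new_block)
file_blocks.insert(2, new_block)

print(file_blocks)                    # [1, 2, 242, 3] -- fragmented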

Depending on the size of the file, the size of the blocks, and the
location of the edit, almost any file can become fragmented this way.

Another example:

Creating a file of initial size 0.
[file-> [BLOCKN]]
Before any data is written to the file, some other files are modified;
this is typical on a busy multitasking system. Those files allocate
blocks N+1 through N+10 for their own use, as in the first example.

Make a change to the file:
[file-> [BLOCKN]->[BLOCKN+11]]
The nearest free block was block N+11, leaving a 10-block gap in the file.
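
The same toy allocator from the sketch above reproduces this second
scenario:

def next_free_block(used):
    # Naive allocator: hand out the lowest block number not yet in use.
    b = 0
    while b in used:
        b += 1
    return b

used = set()
N = next_free_block(used)             # our empty file claims block N
used.add(N)
ours = [N]

# Ten writes from other processes each grab the next free block,
# taking blocks N+1 through N+10.
for _ in range(10):
    used.add(next_free_block(used))

ours.append(next_free_block(used))    # our first write lands at N+11
print(ours)                           # [0, 11] -- a 10-block gap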

Now, these differences are small. However, suppose a block is 256 KiB in
size. Each block pair can then be read in one cycle of the drive head
(hard drives typically read 512 KiB at a time). It would take more time
to read the file if the drive must read the first block pair (512 KiB),
seek 256 KiB * 10 = 2560 KiB forward, and then read another 512 KiB while
only getting one useful block out of it. (A suboptimal read for the
drive's transfer size.)
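
Spelled out in numbers (the 512 KiB read unit is the figure from above;
the rest follows from it):

KIB = 1024
block = 256 * KIB                 # one 256 KiB block
read_unit = 2 * block             # the drive reads 512 KiB per cycle
gap = 10 * block                  # ten intervening blocks to skip
print(gap // KIB, "KiB seek")     # 2560 KiB (~2.5 MiB) of extra head travel
# Fragmented: read 512 KiB (blocks 1+2), seek 2560 KiB, read 512 KiB but
# keep only 256 KiB of it -- one extra seek plus one half-wasted read.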

Here is how extents work to reduce the risk of fragmentation:

Allocation of a file:

[file-> [BLOCKN]->...->[BLOCKN+M]]
The file allocates all of its blocks contiguously on disk, from block N
to block N+M.

This is great for large files, because you can then use those blocks
freely to grow your file (i.e. append) as needed. As long as you reserved
the correct size, appending does not lead to fragmentation.
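
Applications can ask for exactly this kind of reservation up front. Here
is a minimal sketch using os.posix_fallocate from Python's standard
library (Linux, Python 3.3+; the filename and the 64 MiB size are made-up
examples):

import os

size = 64 * 1024 * 1024                   # expected final size: 64 MiB
fd = os.open("bigfile.dat", os.O_CREAT | os.O_WRONLY, 0o644)
try:
    # Reserve blocks for the whole range now; on ext4 the extent
    # allocator can satisfy this with one contiguous run.
    os.posix_fallocate(fd, 0, size)
    # ...write the data; appends stay inside the reservation...
finally:
    os.close(fd)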

However, the 'fragmentation due to inserting into the middle of a file'
problem still exists.

I am by no means an expert in filesystems, but I know a little bit about
their inner workings. I hope that you found my explanation informative.

sea


