On Thu, Mar 12, 2015 at 3:58 PM, Chris Murphy <lists@colorremedies.com> wrote:
Anyway, seeing as this happens on an fsck, that means filesystem metadata is affected, and if e2fsck -f doesn't fix it, then the fs is toast. I honestly would immediately remount it ro and back it up before forcing an fsck, though. An fsck ought to fail gracefully and not make things worse, but...
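Rough sequence I'd use (untested here; /mnt/data and /dev/sdXn are just placeholders for your actual mount point and device):

    mount -o remount,ro /mnt/data         # stop any further writes right away
    rsync -aHAX /mnt/data/ /mnt/backup/   # get a copy off first, preserving hard links/ACLs/xattrs
    umount /mnt/data
    e2fsck -f /dev/sdXn                   # force the full check only after the backup is safe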
For what it's worth, in the same scenario, Btrfs on an HDD uses duplicate (DUP) metadata on a single drive by default. So the same read error would get reported, but then there'd be something like this:
[48466.824770] BTRFS: checksum error at logical 20971520 on dev /dev/sdb, sector 57344: metadata leaf (level 0) in tree 3
[48466.829900] BTRFS: checksum error at logical 20971520 on dev /dev/sdb, sector 57344: metadata leaf (level 0) in tree 3
[48466.834944] BTRFS: bdev /dev/sdb errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
[48466.853589] BTRFS: fixed up error at logical 20971520 on dev /dev/sdb
That's actually a corrupt sector rather than a read error, but the result is the same: Btrfs uses the duplicate copy and fixes the bad one automatically. Life continues. The same thing happens for data if there's a mirror copy (or raid56 parity, since kernel 3.19).
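If you want to confirm the metadata profile and sweep the whole fs for this kind of thing proactively, a scrub does it (the mount point is just an example):

    btrfs filesystem df /mnt/data    # look for "Metadata, DUP" in the output
    btrfs scrub start /mnt/data      # reads every block, repairs from the good copy where it can
    btrfs scrub status /mnt/data     # reports errors found and corrected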
For single-copy data, it'll show a path to the affected file. For a single copy of fs metadata, well, bad things happen too: chances are the fs will abruptly go forced read-only. For the most part Btrfs has been decently graceful lately, if it successfully mounts read-only.
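Those per-device error counters from dmesg (wr/rd/flush/corrupt/gen) are kept persistently, so you can check later whether a drive has been quietly accumulating problems:

    btrfs device stats /mnt/data     # prints write/read/flush/corruption/generation error counts per device
    btrfs device stats -z /mnt/data  # same, but resets the counters afterwards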
I still keep multiple backups though. Ultimately I trust nothing but many copies.