BTRFS Question

Chris Murphy lists at colorremedies.com
Wed Jun 24 22:42:34 UTC 2015


On Mon, May 25, 2015 at 5:21 PM, Javier Perez <pepebuho at gmail.com> wrote:
> Hi
>
> I am just learning about btrfs on my home system.
>
> How do I know what kind of options were used to create a BTRFS filesystem?
> I installed F21 from scratch and I created a BTRFS filesystem on a HDD. I
> think I created it with the "single" option.

btrfs filesystem df <mp>
OR
btrfs fi df <mp>   # is shorter

The mountpoint is presumably / but could also be /home, if you've done
a default Btrfs installation. This will show the data and metadata
profiles.

The default mkfs for HDD is data profile single, and metadata profile DUP.
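
For illustration, on a single-HDD default install the output looks
something like this (the exact lines and sizes vary with btrfs-progs
version; the numbers here are made up):

btrfs fi df /
Data, single: total=30.01GiB, used=12.34GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=512.00MiB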


> Now I read that that is dangerous, I should have created it as one of the
> raids if I want to have data redundancy and protection vs bit rot (which is
> what I was looking forward to).

It's no more dangerous than any other single-drive installation. If
there's any bit rot, you are at least told which files are affected,
via kernel messages (dmesg). This detection happens passively during
normal use (any file read operation), and can also be run on demand with:

btrfs scrub start <mp>

And then checking kernel messages.
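
For example, after a scrub you can check both the summary and the kernel
log (the exact wording varies by kernel and btrfs-progs version, so treat
this as a sketch):

btrfs scrub status /    # summary: bytes scrubbed plus error counts
dmesg | grep -i btrfs   # per-file messages for any checksum errors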

If corruption happens in metadata, the default DUP metadata profile on
an HDD means the problem is fixed automatically from the second copy.
If it happens in single-profile data, the scrub just lists the path of
the affected file.



>
> How can I verify it? Also, if I want to change it, can I do it IN-PLACE or
> do I have to reformat/recreate the filesystem?

It can be done in place. Add a new drive, optionally partition it and
then, e.g.:

btrfs device add /dev/sdb1 <mp>
btrfs balance start -dconvert=raid1 -mconvert=raid1 <mp>

Note that you must convert both the data and metadata profiles in order
to really get raid1 protection. If you only change the data profile to
raid1, the metadata (the filesystem itself) stays as DUP, which means it
lives on only one device. These days the -m conversion also converts the
system chunks, so they don't need to be listed separately.

Conversion progress shows up in dmesg, and also via:

btrfs balance status <mp>
btrfs fi df <mp>

Eventually all chunks will be converted and have the same profile.
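
As a rough sketch of what to expect mid-conversion (numbers made up), the
data chunks show up under both profiles until the balance finishes:

btrfs balance status /
Balance on '/' is running
12 out of about 40 chunks balanced (14 considered), 70% left

btrfs fi df /
Data, RAID1: total=10.00GiB, used=9.80GiB
Data, single: total=20.01GiB, used=19.55GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=512.00MiB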

Note that Btrfs and GNOME have no mechanism to notify the user of a
drive failure the way mdadm-managed drives do. If a drive dies in an
mdadm raid1, GNOME Shell produces a notification; that doesn't happen
right now with Btrfs multiple-device volumes. Further, there are some
problems booting a Btrfs raid1 after a drive failure:

1. When not all devices of a Btrfs volume are available, somewhere
between the btrfs kernel code and libblkid no volume UUID is produced;
2. so udev doesn't scan for it;
3. so systemd can't find root by UUID, and boot fails.
4. Separately, the degraded mount option is not enabled for Btrfs in the
kernel by default, which also results in boot failure.

So to fix this you manually edit the boot entry (GRUB) and change
root=UUID=<uuid> to root=/dev/sdXY for the surviving drive, which can
sometimes be tricky since these assignments can change, especially after
a drive has died. You also need to change rootflags=subvol=root to
rootflags=subvol=root,degraded.
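
As a sketch, if /dev/sdb3 happens to be the Btrfs partition on the
surviving drive (the device name here is hypothetical), the relevant part
of the kernel command line changes from:

root=UUID=<uuid> ro rootflags=subvol=root

to:

root=/dev/sdb3 ro rootflags=subvol=root,degraded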

Now you can boot, and then do replacement with a new drive, e.g.:

btrfs replace start <devid> /dev/sdc1 <mp>

Finding the devid is non-obvious. You should be able to infer it from:

btrfs fi show

which lists the devid for the devices that are still present, but not for
missing ones; it's a bit annoying that this isn't clearer.
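
To illustrate (the label, uuid and sizes are hypothetical), a two-device
raid1 with one dead drive might show something like:

btrfs fi show
Label: 'fedora'  uuid: <uuid>
        Total devices 2 FS bytes used 12.34GiB
        devid    2 size 100.00GiB used 14.03GiB path /dev/sdb1
        *** Some devices missing

Here devid 1 is the one that's absent, so that's the number to give to
btrfs replace start.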

Anyway, that does a device add, device delete, and balance all in one
step. It's an online replace and rebuild, and it tends to be fast because
it only has to sync used blocks rather than all blocks, the way md and
lvm raid do.
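
You can watch the rebuild with:

btrfs replace status <mp>

which should report percent done along with read/write error counters.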


>
> Thanks for whatever help/illumination you might provide. I am catching up on
> btrfs, but I am not there yet.

Yeah, I highly advise frequent backups with Btrfs. It's very unlikely
you will experience serious problems, but based on btrfs@ list
experience, if you do end up in an edge case there's a good chance
you'll need that backup. Fortunately even bad problems tend to leave the
filesystem mountable read-only, so at least you can update your backup,
but there are still cases where btrfs check cannot fix the problem.

And it's still generally advisable not to run btrfs check --repair
without first posting the complete output of plain btrfs check to the
btrfs@ list.

So if this all sounds overly complicated and daunting then I'd say
mdadm or lvm raid + ext4 or XFS is a better choice for your use case
right now.


-- 
Chris Murphy

