On Tue, Jan 31, 2017 at 6:52 AM, Dominik 'Rathann' Mierzejewski
<dominik(a)greysector.net> wrote:
> I'd like to see (a link to) a more comprehensive discussion of the
> purported advantages of LVM RAID over LVM on MD RAID here.
If the user never interacts with the storage stack, it's a wash.
Otherwise, the advantage is that RAID level is an LV attribute, set at
the time the LV is created, which means LVs can have different RAID
levels and are resizable within the VG they belong to. So a VG with
three disks can have a raid0 LV, a raid1 LV, and a raid5 LV - and
they're all resizable.
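As a sketch of what that looks like with the LVM tools (the VG name
vg0, sizes, and LV names here are made up for illustration; the disk
counts assume the three-PV VG above):

```shell
# raid1 LV: one mirror, so two PVs are used
lvcreate --type raid1 -m 1 -L 10G -n mirror_lv vg0

# raid5 LV: two data stripes plus parity, using all three PVs
lvcreate --type raid5 -i 2 -L 10G -n parity_lv vg0

# raid0 LV: striped across all three PVs, no redundancy
lvcreate --type raid0 -i 3 -L 10G -n stripe_lv vg0
```

All three coexist in the same VG, and each can later be grown with
lvextend as long as the VG has free extents on enough PVs.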
I think the resize benefit is minor, because only Btrfs supports
online shrink; ext4 can only shrink offline, and XFS doesn't support
shrink at all. Mitigating this means leaving some unused space in the
VG (and on all PVs).
As for drawbacks, as a practical matter very few people are familiar
with managing LVM RAID with the LVM tools. While it uses md kernel
code, it uses LVM metadata, not mdadm metadata, so mdadm cannot be
used to manage these LVs at all.
On Tue, Jan 31, 2017 at 7:33 AM, Chris Adams <linux(a)cmadams.net> wrote:
> How do LVM RAID volumes get tested? There's a regular cron job for
> testing MD RAID volumes, but I'm not aware of something like that
> for LVM RAID.
I'm not aware of an upstream cron job for this, nor of one in Fedora.
'echo check > /sys/block/mdX/md/sync_action' works for either mdadm
or LVM RAID, since it's the same kernel code doing the scrub; but LVM
also has a command for it: 'lvchange --syncaction {check|repair}
vg/raid_lv'
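For example, a manual scrub of an LVM RAID LV could look like this
(vg/raid_lv is a placeholder name; run as root on a host that
actually has such an LV):

```shell
# start a read-only scrub of the RAID LV
lvchange --syncaction check vg/raid_lv

# watch progress and the mismatch count as it runs
lvs -o +raid_sync_action,raid_mismatch_count vg/raid_lv
```

A nonzero raid_mismatch_count after the check suggests running
'lvchange --syncaction repair vg/raid_lv' to rewrite the
inconsistent blocks.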
--
Chris Murphy