RAID-1 Question

DJ Delorie dj at delorie.com
Tue Apr 10 17:07:13 UTC 2007


"Ashley M. Kirchner" <ashley at pcraft.com> writes:
>     1. Should I use the BIOS to create/manage my RAIDs or should I let
> linux do that for me?

I prefer letting linux manage it.  That way, you can replace the
motherboard or controller without losing your raid array.
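
For example, creating a two-disk mirror with mdadm might look like
this (a minimal sketch; the device names are just placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1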

>     2. Can I add drives to the RAID in the future and simply GROW it,
> without losing information?  Or am I going to be adding more RAID
> volumes as I add pairs of drives?

What I do is partition each drive into, say, ten partitions.  Then I
combine all sd*1 into md1, all sd*2 into md2, etc., and use LVM to
combine the md* into a big volume group.  Then, as long as I'm under
90% full, I can move data off the individual PVs, shut down the md*
for that "slice", rebuild it to include a new drive, and re-add it to
the VG.
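
A rough sketch of that layout (the device names, the raid level, and
the volume group name "foo" are just examples here): each slice
becomes its own md array, and the arrays become the PVs:

mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
# ...one md per slice, up through md10
pvcreate /dev/md1 /dev/md2      # likewise for the rest
vgcreate foo /dev/md1 /dev/md2  # the big volume group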

In fact, I just did that this weekend.  I had five 200G drives, raid5
across them, in a seven slot chassis.  That left two free slots: I
added two of my three new 750G drives, but had to remove one of the
200's to fit the third 750 in.  So, I used one of the partitions on
the 750 as temp space and rebuilt the ten existing raids from raid5x5
to raid5x4, removing the fifth drive from
each.  Once they were all done, I physically swapped drives and built
the raid5x3 slices on the 750's, moving the data off the temp
partition once I had room for it elsewhere.

pvmove /dev/md4                # empty the old PV
vgreduce foo /dev/md4          # drop it from the volume group
mdadm --stop /dev/md4          # tear down the old array
mdadm --create /dev/md4 ...    # recreate it with the new drive set
pvcreate /dev/md4              # make it a PV again
vgextend foo /dev/md4          # and put it back in the VG
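
(The freshly created array will resync in the background, but it's
usable, and can go back into the VG, while that runs; "cat
/proc/mdstat" shows the progress.)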

(it helps to "pvchange -x n" all of the slices you're migrating, ahead
of time, so that you cut down on duplicate moves)
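
For example (again, the PV names here are placeholders):

pvchange -x n /dev/md5 /dev/md6 /dev/md7

That marks those PVs non-allocatable, so pvmove won't land extents on
a slice you're about to empty anyway.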



