Disk Druid and RAID 10 Partitions

Aero Maxx aero.maxx.d at gmail.com
Fri Mar 2 22:20:34 UTC 2012


Hello Reindl,

Thanks very much for replying to my question. What if I wanted swap to
be a partition, though? Would I create a 4GB partition on each of the 4
drives, or a 2GB partition on each of the 4 drives?
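
Just to check my arithmetic, here is a sketch of what I think the partition
route would look like, assuming the default near-2 layout (so usable space is
half the raw total) and hypothetical fourth partitions sda4..sdd4 of 2GB each:

  # RAID 10 with 2 near-copies stores every block twice,
  # so 4 x 2GB members give roughly 4GB of usable swap
  mdadm --create /dev/md3 --level=10 --raid-devices=4 \
        /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
  mkswap /dev/md3
  swapon /dev/md3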

What about using LVM with RAID 10?
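
What I have in mind there is roughly layering LVM on top of the md array,
something like the following (just a sketch; the volume group and logical
volume names are made up, and /dev/md1 stands for whatever RAID 10 array the
installer creates):

  pvcreate /dev/md1                      # use the RAID 10 array as a physical volume
  vgcreate vg_sys /dev/md1
  lvcreate -L 4G -n swap vg_sys          # swap as a logical volume
  lvcreate -l 100%FREE -n root vg_sys    # rest for the OS
  mkswap /dev/vg_sys/swap
  mkfs.ext4 /dev/vg_sys/root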

On 2 Mar 2012, at 21:48, Reindl Harald <h.reindl at thelounge.net> wrote:

>
>
> Am 02.03.2012 22:39, schrieb Reindl Harald:
>> Am 02.03.2012 22:32, schrieb Aero Maxx:
>>> If I am using a RAID 10 setup on 4 hard drives, would I make a RAID partition of 250MB on each of the 4 drives,
>>> since in total this would be 500MB, or would I still make a 500MB RAID partition? Same for swap: still make a
>>> 4992MB partition, or halve it and spread it over the 4 drives?
>>
>> 1 x RAID 1 for /boot over all 4 drives
>> grub-install on each of /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd to have all bootable
>>
>> 1 x RAID10 for the OS
>> 1 x RAID10 for data
>>
>> SWAP can these days be a file
>> /home here is a folder on /mnt/data with a bind-mount
>>
>> this setup has been running on 2 identical machines since June 2011,
>> installed once and then cloned with dd over ssh, originally F15 and
>> in the meantime updated to F16
>> _______________________
>>
>> /dev/md0      ext4    497M   66M  427M  14% /boot
>> /dev/md1      ext4     30G  7,3G   22G  25% /
>> /dev/md2      ext4    3,7T  1,6T  2,1T  44% /mnt/data
>>
>> [root@srv-rhsoft:~]$ cat /proc/mdstat
>> Personalities : [raid1] [raid10]
>> md2 : active raid10 sdc3[0] sdd3[3] sda3[4] sdb3[5]
>>      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>>      bitmap: 1/29 pages [4KB], 65536KB chunk
>>
>> md1 : active raid10 sdc2[0] sdd2[3] sda2[4] sdb2[5]
>>      30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>>      bitmap: 1/1 pages [4KB], 65536KB chunk
>>
>> md0 : active raid1 sdc1[0] sdd1[3] sda1[4] sdb1[5]
>>      511988 blocks super 1.0 [4/4] [UUUU]
>
> forgot the fdisk output
>
> /dev/sda1   *        2048     1026047      512000   fd  Linux raid autodetect
> /dev/sda2         1026048    31746047    15360000   fd  Linux raid autodetect
> /dev/sda3        31746048  3906971647  1937612800   fd  Linux raid autodetect
>
> /dev/sdb1   *        2048     1026047      512000   fd  Linux raid autodetect
> /dev/sdb2         1026048    31746047    15360000   fd  Linux raid autodetect
> /dev/sdb3        31746048  3906971647  1937612800   fd  Linux raid autodetect
>
> /dev/sdc1   *        2048     1026047      512000   fd  Linux raid autodetect
> /dev/sdc2         1026048    31746047    15360000   fd  Linux raid autodetect
> /dev/sdc3        31746048  3906971647  1937612800   fd  Linux raid autodetect
>
> /dev/sdd1   *        2048     1026047      512000   fd  Linux raid autodetect
> /dev/sdd2         1026048    31746047    15360000   fd  Linux raid autodetect
> /dev/sdd3        31746048  3906971647  1937612800   fd  Linux raid autodetect
>
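
For reference, here is a rough sketch of how I read the quoted layout; the
device names follow the fdisk output above, and the mdadm options and swap
file size are only my assumptions:

  # /boot: RAID 1 across all four drives (metadata 1.0, as in the mdstat above)
  mdadm --create /dev/md0 --level=1  --metadata=1.0 --raid-devices=4 /dev/sd[abcd]1
  # OS and data: RAID 10 across the second and third partitions
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
  mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3

  # boot loader on every drive so any of them can boot
  for d in sda sdb sdc sdd; do grub-install /dev/$d; done

  # swap as a file instead of a partition (size is just an example)
  dd if=/dev/zero of=/swapfile bs=1M count=4096
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile

  # /home as a bind mount of a folder on the data array (fstab line)
  # /mnt/data/home  /home  none  bind  0  0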

