Help: very slow software RAID 5.

Lamar Owen lowen at
Wed Sep 19 17:20:58 UTC 2007

On Tuesday 18 September 2007, Dean S. Messing wrote:
> What are others who run software RAID 5 seeing compared to the
> individual partition speeds?

I have one server here with two ATA drives on a single motherboard PATA 
channel (master/slave setup), and two drives on an add-on ATA/133 PATA (two 
masters, no slaves).  Here's the simple hdparm read test results:
[root@itadmin ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hdg1[3] hde1[2] hdb1[1] hda1[0]
      480238656 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@itadmin ~]# for disk in hda hdb hde hdg md0
> do
> hdparm -t /dev/${disk}
> done

/dev/hda:
 Timing buffered disk reads:  170 MB in  3.02 seconds =  56.26 MB/sec

/dev/hdb:
 Timing buffered disk reads:  172 MB in  3.03 seconds =  56.70 MB/sec

/dev/hde:
 Timing buffered disk reads:  170 MB in  3.01 seconds =  56.55 MB/sec

/dev/hdg:
 Timing buffered disk reads:  170 MB in  3.01 seconds =  56.51 MB/sec

/dev/md0:
 Timing buffered disk reads:  372 MB in  3.01 seconds = 123.77 MB/sec
[root@itadmin ~]#

It would run faster still if I added another PATA HBA and moved hdb to it; any 
master/slave ATA setup limits throughput.  A good rule of thumb is to make 
each PATA drive a master, alone on its channel; adding a slave drive to any of 
the PATA HBA ports will not increase (and will likely decrease) array 
throughput.

You might think that's not the case from the way the numbers look above.  
Well, I tried a little test with four hdparm -t's running concurrently (this 
is a dual Xeon box, which handles the test nicely).  Note how the two drives 
set as master and slave slow down when accessed concurrently:
[root@itadmin ~]# hdparm -t /dev/hda & hdparm -t /dev/hdb & hdparm -t /dev/hde & hdparm -t /dev/hdg
[2] 17631
[3] 17632
[4] 17633

 Timing buffered disk reads:  106 MB in  3.01 seconds =  35.20 MB/sec
 Timing buffered disk reads:  106 MB in  3.02 seconds =  35.06 MB/sec
 Timing buffered disk reads:  170 MB in  3.02 seconds =  56.22 MB/sec
 Timing buffered disk reads:  170 MB in  3.03 seconds =  56.17 MB/sec
[1]   Done                    hdparm -t /dev/hde
[2]   Done                    hdparm -t /dev/hda
[3]-  Done                    hdparm -t /dev/hdb
[4]+  Done                    hdparm -t /dev/hde
[root@itadmin ~]#
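A tidier way to script that concurrent test is to background each read and 
then wait for all of them to finish; a minimal sketch (READ_TEST is a 
hypothetical override hook I've added so the loop can be dry-run without 
touching real hardware):

```shell
#!/bin/sh
# Kick off one read test per device in the background, then wait for all.
# READ_TEST defaults to "hdparm -t" (assumed installed); override it to
# dry-run the loop without real drives.
concurrent_read_test() {
    for dev in "$@"; do
        ${READ_TEST:-hdparm -t} "$dev" &
    done
    wait   # block until every background test has finished
}

# Example: concurrent_read_test /dev/hda /dev/hdb /dev/hde /dev/hdg
```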

SATA, on the other hand, is point-to-point: every drive gets its own channel, 
so there's no master/slave contention to worry about.

If your box has multiple PCI buses, put each HBA (particularly if they are 
32-bit PCI and the drives are ATA/133 or SATA) on a separate bus if possible. 
The box above has three PCI-X buses; the motherboard HBA (along with the 
motherboard's integrated U320 SCSI) is on one, the ATA/133 HBA is on another, 
and the GigE NIC is on the third.
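If you're not sure how your slots map onto buses, lspci can print the 
topology as a tree, so HBAs sharing a bus (and hence its bandwidth) show up 
under the same branch (stock pciutils, nothing exotic assumed):

```shell
# Print the PCI bus topology as a tree; devices under the same branch
# share that bus's bandwidth.  Falls back gracefully if pciutils is
# not installed on the box.
lspci -tv 2>/dev/null || echo "lspci (pciutils) not installed"
```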

Note that 32-bit PCI on a single bus will throttle your total throughput to 
about 133MB/s anyway (33MHz PCI clock times 4 bytes transferred per clock).  
If you have PCI-e slots, even x1, getting PCI-e HBAs will dramatically 
improve total throughput if your drives can handle it, as each PCI-e lane can 
do 250MB/s (2.5Gb/s signaling, 8B/10B encoded).
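That back-of-the-envelope arithmetic is easy to check at the shell:

```shell
# Conventional PCI: 33.33 MHz clock x 4 bytes per transfer ~= 133 MB/s,
# shared by every device on the bus.
pci_mbs=$(awk 'BEGIN { printf "%d", 33.33 * 4 }')
# PCIe 1.x lane: 2.5 Gb/s raw; 8B/10B coding keeps 8/10 of that,
# i.e. 2.0 Gb/s = 250 MB/s, per lane and per direction.
pcie_mbs=$(awk 'BEGIN { printf "%d", 2.5e9 * 8 / 10 / 8 / 1e6 }')
echo "PCI: ${pci_mbs} MB/s   PCIe x1: ${pcie_mbs} MB/s"
```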

Bonnie++ gives me a different picture:
Size: 3072MB
Sequential Output: 
Per Chr: 31.1MB/s
Block: 43.8MB/s
Rewrite: 25.3MB/s

Sequential Input:
Per Chr: 33.1MB/s
Block: 87.7MB/s

Random Seeks:
203.7 per second
Which is not too awful (not great compared to my FC SAN's results, but I 
can't publish those due to EULA restrictions).
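For reference, those figures came from an ordinary bonnie++ run; something 
along these lines reproduces the setup (the mount point is a placeholder I've 
invented, and -s sets the working-file size in MB):

```shell
# Sketch of a bonnie++ run against the array; /mnt/raid is a placeholder
# mount point, -s is the file size in MB (pick at least 2x RAM so the
# page cache can't flatter the results), -u is the user to run as.
if command -v bonnie++ >/dev/null; then
    bonnie++ -d /mnt/raid -s 3072 -u nobody
else
    echo "bonnie++ not installed"
fi
```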

For comparison, my laptop (single 7.2K RPM 100GB SATA drive, Intel Core 2 Duo 
2GHz, 2GB RAM, Fedora 7):
Size: 4096MB
Sequential Output:
Per Chr: 30MB/s
Block: 30MB/s
Rewrite: 17MB/s

Sequential Input:
Per Chr: 42MB/s
Block: 44MB/s

Random Seeks:
118.4 per second

Lamar Owen
Chief Information Officer
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
