I have an F7/rawhide machine with six 400GB SATA II disks, all partitioned as 100MB + 399.9GB.
/boot is on /dev/sda1 (it would be RAID1 across the /dev/sd[abc]1 partitions, except for the mkinitrd raid1.ko breakage), with /dev/md1 as RAID5 on /dev/sd[abcdef]2.
Then a single LVM vg00 sits on top of /dev/md1, with root and swap as LVs within vg00 and plenty of spare space.
I've been doing some timings of the various block devices (so far just roughly with hdparm; bonnie is installed for more detail later). Results of "hdparm -Tt", averaged over a few runs and showing cached and buffered speeds:
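(For reference, this is roughly what I'm running; three passes per device, averaged by hand:)

    for dev in /dev/sda1 /dev/md1 /dev/mapper/vg00-lv01; do
        for run in 1 2 3; do
            hdparm -Tt $dev    # -T = cached reads, -t = buffered disk reads
        done
    done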
/dev/sda1 gives 1995MB/s and 72MB/s, which seems quite good for a single spindle.
/dev/md1 gives 2040MB/s and 260MB/s, also fairly good (with six spindles and parity I had hoped to get closer to 5x single-disk performance than 3.5x).
/dev/mapper/vg00-lv01 (my root fs) gives 2100MB/s and 135MB/s, which is a little disappointing.
Nearly a 50% speed penalty seems a heavy price to pay for LVM. Is there any slow debug code currently in rawhide that might explain it? Could I have made some bad choices of block sizes between the RAID and LVM layers which are reducing throughput by splitting reads? Anything else?
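For what it's worth, one thing I plan to check is whether the LV inherits the same readahead as the underlying md device, since a much smaller value on the LV could by itself explain a sequential-read gap. Roughly (the 4096 is just an example value to test with, not a recommendation):

    blockdev --getra /dev/md1                      # readahead, in 512-byte sectors
    blockdev --getra /dev/mapper/vg00-lv01         # compare with the LV's value
    blockdev --setra 4096 /dev/mapper/vg00-lv01    # try matching them, then re-run hdparm -t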
Sun, 18 03 2007 at 16:34 +0000, Andy Burns wrote:
> I have an F7/rawhide machine with six 400GB SATA II disks, all partitioned as 100MB + 399.9GB.
> /boot is on /dev/sda1 (it would be RAID1 across the /dev/sd[abc]1 partitions, except for the mkinitrd raid1.ko breakage), with /dev/md1 as RAID5 on /dev/sd[abcdef]2.
> Then a single LVM vg00 sits on top of /dev/md1, with root and swap as LVs within vg00 and plenty of spare space.
> I've been doing some timings of the various block devices (so far just roughly with hdparm; bonnie is installed for more detail later). Results of "hdparm -Tt", averaged over a few runs and showing cached and buffered speeds:
> /dev/sda1 gives 1995MB/s and 72MB/s, which seems quite good for a single spindle.
> /dev/md1 gives 2040MB/s and 260MB/s, also fairly good (with six spindles and parity I had hoped to get closer to 5x single-disk performance than 3.5x).
> /dev/mapper/vg00-lv01 (my root fs) gives 2100MB/s and 135MB/s, which is a little disappointing.
> Nearly a 50% speed penalty seems a heavy price to pay for LVM. Is there any slow debug code currently in rawhide that might explain it? Could I have made some bad choices of block sizes between the RAID and LVM layers which are reducing throughput by splitting reads? Anything else?
Wow, that explains a lot... performance on Development has been awful for me lately and I have been unable to figure out why; this might be related.
My setup is two 400GB drives in RAID0 using dmraid, with LVM on top.
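(For reference, assuming the usual dmraid naming, both layers show up under device-mapper, so they can be inspected separately:)

    dmraid -s      # show the BIOS RAID set dmraid has assembled
    dmsetup ls     # list device-mapper devices: the RAID set plus the LVs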
- David Nielsen
> Wow, that explains a lot... performance on Development has been awful for me lately and I have been unable to figure out why; this might be related.
> My setup is two 400GB drives in RAID0 using dmraid, with LVM on top.
It's a while since I tried dmraid instead of mdraid; does it use a UUID as the PV name under /dev/mapper? I suppose pvdisplay would show it, and then you could try hdparm on the PV underneath LVM compared to the LV above it.
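Something along these lines, where the device paths are placeholders for whatever pvdisplay reports on your box:

    pvdisplay -c | cut -d: -f1             # first colon-separated field is the PV device path
    hdparm -t /dev/mapper/<pv-from-above>  # the dmraid device underneath LVM
    hdparm -t /dev/mapper/<vg>-<lv>        # the LV on top of it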
Sun, 18 03 2007 at 17:59 +0000, Andy Burns wrote:
> > Wow, that explains a lot... performance on Development has been awful for me lately and I have been unable to figure out why; this might be related.
> > My setup is two 400GB drives in RAID0 using dmraid, with LVM on top.
> It's a while since I tried dmraid instead of mdraid; does it use a UUID as the PV name under /dev/mapper? I suppose pvdisplay would show it, and then you could try hdparm on the PV underneath LVM compared to the LV above it.
/dev/mapper/VolGroup00-LogVol00:
 Timing cached reads:   1050 MB in 2.00 seconds = 524.68 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  162 MB in 3.01 seconds = 53.84 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

/dev/mapper/nvidia_bbgdbcgj:
 Timing cached reads:   1142 MB in 2.00 seconds = 571.12 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  300 MB in 3.00 seconds = 99.97 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

/dev/sda:
 Timing cached reads:   1096 MB in 2.00 seconds = 547.65 MB/sec
 Timing buffered disk reads:  182 MB in 3.00 seconds = 60.61 MB/sec
Yep, something odd is definitely going on: the LV reads at roughly half the speed of the dmraid PV underneath it, which looks like the same sort of penalty you're seeing.
On 18/03/07, David Nielsen david@lovesunix.net wrote:
> Yep, something odd is definitely going on.
ok, bz'ed https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=232843