Convert ext4 lvm to normal ext4 partition

Michael Miles mmamiga6 at gmail.com
Sat Nov 13 18:08:12 UTC 2010


Lamar Owen wrote:
> On Friday, November 12, 2010 07:12:23 pm Peter Larsen wrote:
>    
>> So create a partition, test it without lvm. Then add it as a pv, and do
>> the same test on the lvm on the same implementation.
>>      
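> Concretely, that setup looks roughly like the following (a sketch only; the
> benchtest/50g naming and the /opt mount points follow the tests below, and the
> mkfs/mount options are assumptions):
>
>   # raw-partition side
>   mkfs.ext3 /dev/sdb1
>   mkdir -p /opt/50g-straight && mount /dev/sdb1 /opt/50g-straight
>
>   # LVM side: the second, same-sized partition as a single PV with one LV on it
>   pvcreate /dev/sdb2
>   vgcreate benchtest /dev/sdb2
>   lvcreate -l 100%FREE -n 50g benchtest
>   mkfs.ext3 /dev/benchtest/50g
>   mkdir -p /opt/50g-lvm && mount /dev/benchtest/50g /opt/50g-lvm
>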
> Ok, the first set of two results are in.  And I am surprised by one data point in one of them.  Surprised enough that I ran the benchmarks three times, and got substantially the same results all three times.  I also show hdparm -t output below that confirms that hdparm -t is at best a 'best case' figure, especially when used with a heavily cached controller.  And last, but not least, is Seeker output that should really shed some light on random access benchmarks on different sized partitions.
>
> System is running CentOS 5.5 x86_64, two 2.8GHz Opterons, 10GB RAM.  Disk array connected by 4Gb/s fibre-channel, using a QLogic QLE2460 PCI-e 4x HBA.  Individual drives on the array are 500GB 7200RPM FCAL drives. LUN was (as far as I can tell) properly stripe-aligned prior to testing. RAID group containing the LUN is 16 drives, in a RAID6 configuration; the other LUNs in the RAID group had little to no traffic during the testing. Array controller has substantial read and write caches (multiple GB) and powerful CPUs.  In other words, not your typical home system.  But it's what I had on-hand and available to test in a rapid manner.
>
> Using bonnie++ levels the playing field substantially, and wrings out what the disk performance actually is; and I do know that the choice of 7200RPM drives isn't the fastest; that's not the point here.  The point is comparing the performance of two ext3 filesystems (may possibly be doing the ext4 tests later today, but honestly it shouldn't matter), where one is on a raw partition and the other is in an LVM logical volume.  Given the results, I should probably swap the partitions, making sdb1 the LVM and sdb2 the raw ext3 (currently it's the other way, with sdb1 the raw and sdb2 the LVM), and rerun the tests to make sure I'm not running afoul of /dev/sdb1 not being stripe-aligned but /dev/sdb2 being stripe-aligned.
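>
> One quick way to double-check that alignment, as a sketch (print the partition
> table in sectors and see whether each partition's start sector is a multiple of
> the array's stripe width; the stripe width itself has to come from the array
> configuration):
>
>   parted /dev/sdb unit s print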
>
> bonnie++ command line:
> bonnie++ -d /opt/${filesystem}/bonnie -u nobody:nobody
> No special options; nobody:nobody owns /opt/${filesystem}/bonnie.  ${filesystem} is 50g-straight for the raw partition, 50g-lvm for the logical volume.
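>
> For anyone reproducing this, the same run with the defaults spelled out would
> look roughly like the following (a sketch; bonnie++ picks the file size itself
> at about twice RAM, so -s here just pins what this run happened to use, and
> -n 16 is the default file-creation count):
>
>   bonnie++ -d /opt/${filesystem}/bonnie -s 19496 -n 16 -u nobody:nobody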
>
> The results:
>
> +++++++++++++++++++++++++++++++
> Raw Ext3:
> Size: 19496M
> SeqOutput:
> PerChr: 48445K/sec
> Block: 52836K/sec
> Rewrite: 19134K/sec
>
> SeqInput:
> PerChr: 51529K/sec
> Block: 26327K/sec (<--- this surprised me; one would expect it to be larger. It might be related to stripe-size alignment, though I thought I had compensated for that)
> RandomSeeks: 576.5 per sec.
>
> SeqCreate:
> Files: 16
> Creates/second: 10544
> RandomCreate:
> Creates/second: 11512
>
> Time output: real 50m16.811s, user 6m49.498s, sys 5m45.078s
> +++++++++++++++++++++++++++++++++++++++
> For the LVM filesystem:
>
> Size: 19496M
> SeqOutput:
> PerChr: 51841K/sec
> Block: 54266K/sec
> Rewrite: 26642K/sec
>
> SeqInput:
> PerChr: 54674K/sec
> Block: 69696K/sec (<--- this looks better and more normal)
> RandomSeeks: 757.9 per sec.
>
> SeqCreate:
> Files: 16
> Creates/second: 10540
> RandomCreate:
> Creates/second: 11127
>
> Time output: real 36m21.393s, user 6m47.328s, sys 6m17.813s
> +++++++++++++++++++++++++++++++++++++++
>
> Yeah, that means on this box with this array, LVM is somewhat faster than the raw partition ext3, especially for sequential block reads.  That doesn't seem to make sense; the Sequential and Random Create results are more in line with what I expected, with a small performance degradation on LVM.
>
> Using the other common tools:
> First, hdparm -t.  Note that with this much RAM in the array controller, this isn't a valid test, as the results below show very clearly (they also show just how fast the machine can pull data down the 4Gb/s fibre-channel link!).
>
> +++++++++++++++++++++++++++++++++++++++
> [root@migration ~]# hdparm -t /dev/sdb1
>
> /dev/sdb1:
>   Timing buffered disk reads:  298 MB in  3.00 seconds =  99.32 MB/sec
> [root@migration ~]# hdparm -t /dev/sdb2
>
> /dev/sdb2:
>   Timing buffered disk reads:  386 MB in  3.01 seconds = 128.07 MB/sec
> [root@migration ~]# hdparm -t /dev/sdb1
>
> /dev/sdb1:
>   Timing buffered disk reads:  552 MB in  3.01 seconds = 183.67 MB/sec
> [root@migration ~]# hdparm -t /dev/sdb2
>
> /dev/sdb2:
>   Timing buffered disk reads:  562 MB in  3.01 seconds = 186.86 MB/sec
> [root@migration ~]# hdparm -t /dev/sdb1
>
> /dev/sdb1:
>   Timing buffered disk reads:  704 MB in  3.01 seconds = 233.85 MB/sec
> [root@migration ~]# hdparm -t /dev/sdb2
>
> /dev/sdb2:
>   Timing buffered disk reads:  614 MB in  3.01 seconds = 204.16 MB/sec
> [root@migration ~]#
> +++++++++++++++++++++++++++++++++++++++
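>
> A read check that at least takes the host page cache out of the picture would
> be something like this (a sketch using direct I/O; it still cannot get past the
> array controller's cache, and the 1GiB read size is arbitrary):
>
>   dd if=/dev/sdb1 of=/dev/null bs=1M count=1024 iflag=direct
>   dd if=/dev/sdb2 of=/dev/null bs=1M count=1024 iflag=direct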
>
> Now, the Seeker results (I test the raw disk first, then /dev/sdb1 and sdb2 in turn, then twice on the LVM logical volume's device node, and then once on the much smaller /dev/sdb3):
>
> +++++++++++++++++++++++++++++++++++++++
>
> [root@migration ~]# ./seeker /dev/sdb
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/sdb [102400MB], wait 30 seconds..............................
> Results: 230 seeks/second, 4.34 ms random access time
> [root@migration ~]# ./seeker /dev/sdb1
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/sdb1 [47692MB], wait 30 seconds.............................
> Results: 210 seeks/second, 4.75 ms random access time
> [root@migration ~]# ./seeker /dev/sdb2
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/sdb2 [47692MB], wait 30 seconds..............................
> Results: 219 seeks/second, 4.56 ms random access time
> [root@migration ~]# ./seeker /dev/benchtest/50g
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/benchtest/50g [47692MB], wait 30 seconds..............................
> Results: 217 seeks/second, 4.61 ms random access time
> [root@migration ~]# ./seeker /dev/benchtest/50g
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/benchtest/50g [47692MB], wait 30 seconds.............................
> Results: 203 seeks/second, 4.91 ms random access time
> [root@migration ~]# ./seeker /dev/sdb3
> Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
> Benchmarking /dev/sdb3 [7012MB], wait 30 seconds..............................
> Results: 21459 seeks/second, 0.05 ms random access time
> [root@migration ~]#
>
> +++++++++++++++++++++++++++++++++++++++
>
> That last seeker run on the 7G /dev/sdb3 partition should really light things up.  No filesystem exists on /dev/sdb3, by the way, but I set it aside for a 'small versus large' partition Seeker test.
>
> So there you have it.  Comments welcome. (Yes, I'm going to do ext4, and yes I'm going to double-check stripe alignment, and yes I'm going to test with Fedora 14 (it will be 32-bit, though), and yes I'm going to mix up and rearrange partitions, but these things take time.)
>    


I have run all these tests, and I have to say that Seeker is not a valid
test for showing the speed of these disks.
I ran hdparm, and it shows the LVM to be a bit slower, but not by a lot.
With Seeker there is a large difference simply because of the area of the
disk being tested.
That's quite a difference on sdb3, by the way. It's amazing how much
speed a filesystem takes away from a disk.
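
If anyone wants a random-read comparison that isn't skewed by which part of
the disk each device node covers, something like fio can be pointed at a
same-sized region of each device. A rough sketch (assuming fio is available;
the 40g region and 30-second runtime are arbitrary choices):

  fio --name=randread --filename=/dev/sdb1 --direct=1 --rw=randread \
      --bs=4k --size=40g --runtime=30 --time_based --ioengine=libaio
  fio --name=randread --filename=/dev/benchtest/50g --direct=1 --rw=randread \
      --bs=4k --size=40g --runtime=30 --time_based --ioengine=libaio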

