HD to SSD question.

Reindl Harald h.reindl at thelounge.net
Tue Aug 20 17:41:20 UTC 2013


Am 20.08.2013 19:15, schrieb Mihai T. Lazarescu:
> On Tue, Aug 20, 2013 at 03:56:56PM +0200, Heinz Diehl wrote:
>> All these numbers are pointless, because when I see
>> your results I'm quite sure you ran the test without
>> generating loads of disk I/O in parallel.  What you actually
>> measured is the latency in the idle state ;-)
>>
>> Open a console and run
>>  while : ; do time sh -c "dd if=/dev/zero of=bigfile bs=8M count=256 ; sync; rm bigfile"; done
>>
>> Then, open another one and run fsync-tester.  The numbers
>> that count for comparing different elevators on your system
>> are the ones fsync-tester generates while your machine is
>> generating the "bigfile".
> 
> Thanks Heinz!  Indeed, I took every precaution to keep the
> machine idle. :-)
> 
> Here are the fsync-tester numbers while running the dd loop:
> 
> CFQ:
>     write time: 0.0010s fsync time: 0.8647s
>     write time: 0.1340s fsync time: 3.1220s
>     write time: 2.1435s fsync time: 2.8134s
>     write time: 0.1458s fsync time: 8.3726s
>     write time: 0.0005s fsync time: 1.0401s
>     write time: 0.0175s fsync time: 1.0270s
>     write time: 4.0406s fsync time: 0.0321s
>     write time: 0.0005s fsync time: 4.8683s
>     write time: 0.0004s fsync time: 0.3178s
> 
> Deadline run1:
>     write time: 0.0009s fsync time: 82.3477s
> 
> Deadline run2:
>     write time: 0.0007s fsync time: 659.2289s

all these numbers are also pointless, because the exact moment
the sync command in the dd loop fires makes the biggest
difference, and depending on when it happens it slows down
everything else
however, I did run the loop, and below the first result set is
"cfq" and the second one "deadline", on a simple software RAID10
with 4x2 TB 7200 RPM ROTATING disks

looking at all the values, "deadline" is faster here because it
stays far away from the 12-second spike; in reality there is no
real difference between the two, while on I/O workloads like
virtualization "deadline" is usually faster
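
for reference, the kernel shows the active elevator in brackets,
so each member disk can be checked before a run (example output,
not captured from this box):

 [root at srv-rhsoft:/data]$ cat /sys/block/sda/queue/scheduler
 noop deadline [cfq]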
__________________________________________________________

[root at srv-rhsoft:/data]$ fsync-tester.sh

setting up random write file
done setting up random write file
starting fsync run
starting random io!
write time: 0.0005s fsync time: 1.3178s
write time: 0.0004s fsync time: 0.1273s
write time: 0.0495s fsync time: 5.1039s
write time: 0.0719s fsync time: 3.6246s
write time: 0.0005s fsync time: 0.3371s
write time: 0.0003s fsync time: 12.1480s
write time: 0.0006s fsync time: 0.6380s
write time: 0.0004s fsync time: 2.1844s
write time: 0.0004s fsync time: 2.3435s
write time: 0.0006s fsync time: 0.4382s
run done 10 fsyncs total, killing random writer

setting up random write file
done setting up random write file
starting fsync run
starting random io!
write time: 0.0005s fsync time: 0.0607s
write time: 0.0003s fsync time: 5.3616s
write time: 0.1376s fsync time: 2.7067s
write time: 0.0003s fsync time: 0.1988s
write time: 0.0003s fsync time: 0.5727s
write time: 0.0003s fsync time: 3.0089s
write time: 0.0256s fsync time: 1.2177s
write time: 0.0212s fsync time: 0.0670s
write time: 0.0179s fsync time: 0.2713s
write time: 0.0002s fsync time: 1.2871s
write time: 0.0162s fsync time: 2.8389s
write time: 0.0319s fsync time: 1.8701s
run done 12 fsyncs total, killing random writer
__________________________________________________________

[root at srv-rhsoft:/data]$ cat /scripts/fsync-tester.sh
#!/usr/bin/bash

cd /data/

# run this loop in a second console while this script runs:
# while : ; do time sh -c "dd if=/dev/zero of=bigfile bs=8M count=256 ; sync; rm bigfile"; done

# first pass: set cfq on all RAID member disks
echo cfq > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sdb/queue/scheduler
echo cfq > /sys/block/sdc/queue/scheduler
echo cfq > /sys/block/sdd/queue/scheduler

# remove leftover test files and flush pending writes before the run
rm -f /data/fsync-tester*file
sync
echo ""
fsync-tester

# second pass: repeat with deadline on all RAID member disks
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
echo deadline > /sys/block/sdc/queue/scheduler
echo deadline > /sys/block/sdd/queue/scheduler

# clean up and flush again before the second run
rm -f /data/fsync-tester*file
sync
echo ""
fsync-tester
__________________________________________________________
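
to make the scheduler choice survive a reboot, a udev rule can
replace the echo lines in the script above; a sketch, where the
sd[a-d] match is an assumption for this box:

 [root at srv-rhsoft:/data]$ cat /etc/udev/rules.d/60-scheduler.rules
 ACTION=="add|change", KERNEL=="sd[a-d]", ATTR{queue/scheduler}="deadline"

alternatively, booting with elevator=deadline on the kernel
command line sets the default for all disks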

/dev/md1       ext4   29G    6,0G   23G   21% /
/dev/md0       ext4  477M     30M  443M    7% /boot
/dev/md2       ext4  3,6T    1,7T  1,9T   48% /mnt/data

Personalities : [raid10] [raid1]
md2 : active raid10 sdd3[0] sdc3[5] sda3[4] sdb3[3]
      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 5/29 pages [20KB], 65536KB chunk

md1 : active raid10 sda2[4] sdb2[3] sdc2[5] sdd2[0]
      30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sdb1[3] sda1[4] sdd1[0] sdc1[5]
      511988 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>

