HD to SSD question.

Mihai T. Lazarescu mtlagm at gmail.com
Tue Aug 20 21:58:38 UTC 2013


On Tue, Aug 20, 2013 at 04:02:42PM +0200, Heinz Diehl wrote:

> On 20.08.2013, Heinz Diehl wrote: 
> 
> > Then, open another one and run fsync-tester. The numbers that count
> > when comparing different elevators on your system are the output
> > fsync-tester generates while your machine is generating the "bigfile".
> 
> And while you're at it, you could also consider doing some
> fsmark runs (without the load), e.g.:
> 
>  ./fs_mark -S 1 -D 10000 -N 100000 -d /home/htd/fsmark/test -s 65536 -t4 -w 4096 -F 
> 
> Notice the -t switch, which lets you specify the number of
> threads used.
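
The suggested comparison could be scripted roughly like this. This is only a
dry-run sketch: the device name "sda" is an assumption, and the lines that
actually switch the elevator and launch fs_mark are left commented out because
they need root and an fs_mark build in the current directory.

```shell
#!/bin/sh
# Dry-run sketch of cycling the I/O elevator between fs_mark runs.
# "sda" is a hypothetical device name; pass your own as the first argument.
DEV=${1:-sda}
for SCHED in cfq deadline noop; do
    echo "selecting elevator '$SCHED' on /dev/$DEV"
    # The real steps need root and fs_mark; uncomment on a test box:
    # echo "$SCHED" > "/sys/block/$DEV/queue/scheduler"
    # ./fs_mark -S 1 -D 10000 -N 100000 -d ./test -s 65536 -t 4 -w 4096 -F
done
```

On a real machine, `cat /sys/block/$DEV/queue/scheduler` shows the available
elevators with the active one in brackets, so it is worth checking it between
runs to confirm the switch took effect.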

Here they are:

CFQ:

    #  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/mihai/tmp/c/fs_mark-3.3  -s  65536  -t4  -w  4096  -F 
    # Version 3.3, 4 thread(s) starting at Tue Aug 20 23:30:40 2013
    # Sync method: INBAND FSYNC: fsync() per file in write loop.
    # Directories: Round Robin between directories across 10000
    # subdirectories with 100000 files per subdirectory.
    # File names: 40 bytes long, (16 initial bytes of time stamp
    # with 24 random bytes at end of name)
    # Files info: size 65536 bytes, written with an IO size of
    # 4096 bytes per write
    # App overhead is time in microseconds spent in the test not
    # doing file writing related system calls.
    #
    FSUse%        Count         Size    Files/sec     App Overhead
        77         4000        65536         50.4            88160
        77         8000        65536         43.2           104150
        77        12000        65536         44.4            94512

Deadline:

    #  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/mihai/tmp/c/fs_mark-3.3/test  -s  65536  -t4  -w  4096  -F 
    # Version 3.3, 4 thread(s) starting at Tue Aug 20 23:42:19 2013
    # Sync method: INBAND FSYNC: fsync() per file in write loop.
    # Directories: Round Robin between directories across 10000
    # subdirectories with 100000 files per subdirectory.
    # File names: 40 bytes long, (16 initial bytes of time stamp
    # with 24 random bytes at end of name)
    # Files info: size 65536 bytes, written with an IO size of
    # 4096 bytes per write
    # App overhead is time in microseconds spent in the test not
    # doing file writing related system calls.
    #
    FSUse%        Count         Size    Files/sec     App Overhead
        77         4000        65536         47.6            92902
        77         8000        65536         41.6           101888
        77        12000        65536         39.6            97937

Noop:

    #  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/mihai/tmp/c/fs_mark-3.3/test  -s  65536  -t4  -w  4096  -F 
    # Version 3.3, 4 thread(s) starting at Tue Aug 20 23:48:10 2013
    # Sync method: INBAND FSYNC: fsync() per file in write loop.
    # Directories: Round Robin between directories across 10000
    # subdirectories with 100000 files per subdirectory.
    # File names: 40 bytes long, (16 initial bytes of time stamp
    # with 24 random bytes at end of name)
    # Files info: size 65536 bytes, written with an IO size of
    # 4096 bytes per write
    # App overhead is time in microseconds spent in the test not
    # doing file writing related system calls.
    #
    FSUse%        Count         Size    Files/sec     App Overhead
        77         4000        65536         46.8            89478
        77         8000        65536         42.0           101337
        77        12000        65536         43.2            95834

There do not appear to be significant differences between the schedulers.
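
Averaging the Files/sec columns of the three runs above makes that concrete; a
quick awk sketch over the reported numbers:

```shell
# Mean Files/sec per scheduler, computed from the nine data points above.
MEANS="$(awk '{ sum[$1] += $2; n[$1]++ }
              END { for (s in sum) printf "%s %.1f\n", s, sum[s]/n[s] }' <<'EOF' | sort
cfq 50.4
cfq 43.2
cfq 44.4
deadline 47.6
deadline 41.6
deadline 39.6
noop 46.8
noop 42.0
noop 43.2
EOF
)"
echo "$MEANS"
```

The means come out around 46.0 (CFQ), 42.9 (deadline), and 44.0 (noop)
files/sec, a spread comparable to the run-to-run variation within each
scheduler.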

Mihai
