Overwriting a 4096-byte-sector hard disk drive with random data

Gilboa Davara gilboad at gmail.com
Thu Jul 21 12:34:35 UTC 2011



> Sorry, could you please elaborate a bit more on how a larger block
> size results in better performance?
> 
> -- 
> Kind regards,
> Yudi
> 

Ouch. Off the top of my head, there are two major reasons:
1. (Mechanical) disk drives (AKA hard drives) dislike random reads/writes,
as they require the drive to constantly "move" the head to a different
track (rough request counts below).
2. File systems tend to like big blocks, as it's easier for them to
allocate adjacent blocks, reducing file fragmentation, which would
otherwise increase the number of seeks (see 1).
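
To put rough numbers on the per-request cost behind both points (my own
back-of-the-envelope sketch, not taken from the runs below): dd issues one
read()/write() pair per block, so the request count drops sharply as the
block grows:

$ for bs in 512 16384 65536 1048576; do echo "bs=$bs -> $(( (4096*1024*1024) / bs )) write() calls for 4 GiB"; done
bs=512 -> 8388608 write() calls for 4 GiB
bs=16384 -> 262144 write() calls for 4 GiB
bs=65536 -> 65536 write() calls for 4 GiB
bs=1048576 -> 4096 write() calls for 4 GiB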

However, using uber-large blocks (such as 1M) may actually decrease
performance due to the synchronous nature of dd itself: dd processes one
block at a time, waiting for each read()/write() pair to complete before
issuing the next.
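
If you want to find the sweet spot on your own hardware, a loop along
these lines will sweep a few block sizes (my own sketch, reusing the same
temp.img approach as the demo below):

$ for BS in 512 16384 65536 $((1024*1024)); do rm -f temp.img; echo "== bs=$BS =="; time dd if=/dev/zero of=temp.img bs=$BS count=$(((4096*1024*1024)/$BS)); done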

Here's a short demo:
(Taken from a 5 x 500GB software RAID5; ext4 partition over LVM)
Notice that the best performance is reached when using 64KB blocks.

$ rm -f temp.img; export BS=512; time dd if=/dev/zero of=temp.img bs=$BS count=$(((4096*1024*1024)/$BS))
4294967296 bytes (4.3 GB) copied, 151.229 s, 28.4 MB/s

real    2m31.231s
user    0m0.830s
sys     0m32.678s

$ rm -f temp.img; export BS=16384; time dd if=/dev/zero of=temp.img bs=$BS count=$(((4096*1024*1024)/$BS))
4294967296 bytes (4.3 GB) copied, 106.988 s, 40.1 MB/s

real    1m46.990s
user    0m0.041s
sys     0m15.659s

$ rm -f temp.img; export BS=65536; time dd if=/dev/zero of=temp.img bs=$BS count=$(((4096*1024*1024)/$BS))
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 69.0871 s, 62.2 MB/s

real    1m9.089s
user    0m0.012s
sys     0m47.636s

$ rm -f temp.img; export BS=$((1024*1024)); time dd if=/dev/zero of=temp.img bs=$BS count=$(((4096*1024*1024)/$BS))
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 98.6219 s, 43.5 MB/s

real    1m38.639s
user    0m0.003s
sys     0m4.317s
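
Applied to the original question (overwriting a drive that has 4096-byte
physical sectors with random data), something along these lines should do.
/dev/sdX here is only a placeholder for the target disk, so double-check
the device name first; 64K is a whole multiple of the 4096-byte sector, so
every write stays aligned, and in practice /dev/urandom, not the block
size, tends to be the bottleneck:

$ dd if=/dev/urandom of=/dev/sdX bs=64K

dd will exit with a "No space left on device" error once it reaches the
end of the disk; that's expected and simply means the whole drive was
written.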
