On Fri, May 20, 2022 at 11:10:11AM +0800, Dave Young wrote:
> yes, if we can have some more data it would be better, randomly
> google the "dd" performance, someone said below:
>
> https://stackoverflow.com/questions/33485108/why-is-dd-with-the-direct-o-...
>
> Above is for directio, seems sync io is different,
Yes, sync I/O is different. man 2 open says the following for O_SYNC:

    By the time write(2) (or similar) returns, the output data and
    associated file metadata have been transferred to the underlying
    hardware (i.e., as though each write(2) was followed by a call
    to fsync(2)). See NOTES below.
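A minimal Python sketch of those O_SYNC semantics (file name and payload are arbitrary): each write(2) on an O_SYNC descriptor behaves as if it were followed by fsync(2).

```python
import os
import tempfile

# O_SYNC demo: data still passes through the page cache, but write()
# returns only after data and file metadata have reached the device.
path = os.path.join(tempfile.mkdtemp(), "sync_demo")

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
payload = b"x" * 4096
written = os.write(fd, payload)  # blocks until data + metadata are persisted
os.close(fd)

print(written)  # -> 4096
```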
I think the I/O still goes through the page cache, but it is immediately
written back to disk.

As long as the size of I/O per write is big (big block size), I think both
direct I/O and sync I/O will perform reasonably well. But if your write
sizes are small, both will perform poorly, and sync will probably perform
worse than direct I/O. Given that we are not reading back the contents of
vmcore, there is not much point in going through the cache.
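A rough timing sketch of the write-size point above, using O_SYNC writes (paths are temporary; absolute numbers depend heavily on the filesystem and device, so only the relative trend is meaningful):

```python
import os
import tempfile
import time

def timed_sync_writes(path, total, chunk):
    """Write `total` bytes in `chunk`-sized O_SYNC writes; return seconds elapsed."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
    buf = b"\0" * chunk
    t0 = time.monotonic()
    for _ in range(total // chunk):
        os.write(fd, buf)  # each write is synchronous: data hits the device
    elapsed = time.monotonic() - t0
    os.close(fd)
    return elapsed

d = tempfile.mkdtemp()
total = 1 << 20  # 1 MiB
small = timed_sync_writes(os.path.join(d, "small"), total, 4096)   # 256 writes
big = timed_sync_writes(os.path.join(d, "big"), total, total)      # 1 write
print(f"4 KiB O_SYNC writes: {small:.4f}s, single 1 MiB O_SYNC write: {big:.4f}s")
```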
> anyway with below test on laptop, it seems directio is faster:
>
> [dyoung]$ dd if=/dev/zero of=testfile bs=1M count=100 oflag=direct
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB, 100 MiB) copied, 0.202895 s, 517 MB/s
>
> [dyoung]$ dd if=/dev/zero of=testfile bs=1M count=100 oflag=sync
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB, 100 MiB) copied, 1.13316 s, 92.5 MB/s
Yes, O_SYNC is a little slower than direct I/O even for bs=1M.

So performance will depend on how makedumpfile is writing data: how many
writes is it doing, and what are the sizes of those writes?
Also, even if you use O_DIRECT, you will have to issue fsync at the end
anyway to ensure that the file (and any associated metadata) actually got
persisted on the disk.
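A sketch of that final-fsync pattern (using a plain buffered write here for portability, since O_DIRECT needs aligned buffers and is not supported on all filesystems; the fsync logic at the end is the same either way). Syncing the parent directory as well makes the new directory entry itself durable:

```python
import os
import tempfile

def durable_write(path, data):
    """Write data, then persist both the file and its directory entry."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and inode metadata to the device
    finally:
        os.close(fd)
    # Sync the containing directory so the new name is durable too.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

d = tempfile.mkdtemp()
p = os.path.join(d, "vmcore_copy")  # hypothetical dump target name
durable_write(p, b"dump data")
print(os.path.getsize(p))  # -> 9
```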
So while we can play with O_DIRECT and O_SYNC, that does not seem to be a
real requirement.
Using "sync -f" is the fastest fix for the issue.
Thanks
Vivek