On Fri, Apr 30, 2021 at 7:56 PM Roger Heflin <rogerheflin@gmail.com> wrote:
> 388,217 extents * 10 ms = about 3,900 seconds to read that file, or about 26 MB/sec. With all of the seeks, most of that time will be idle time waiting on the disk (iowait), and it is very possible that parts of the file have large extents while other parts are horribly fragmented. And that ignores any time spent on other work related to the rsync and file I/O. How long does it take to copy the file?
Not sure; I don't make a habit of moving 100 GB files around :)
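(For the archives, spelling that estimate out, taking the ~10 ms average seek and the ~100 GB file size from the thread at face value:

    388,217 extents * ~10 ms/seek ≈ 3,882 s ≈ 65 minutes
    100 GB / 3,882 s ≈ 26 MB/s

In other words, the disk would spend essentially all of its time seeking rather than transferring.)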
> btrfs has an autodefrag mount option. No idea how well it works, but given enough time it might be able to reduce the extents to a reasonable number and keep the file under control.
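For anyone trying that, autodefrag is a mount option, so it goes in /etc/fstab or a remount; the UUID and mount point below are placeholders:

    # /etc/fstab - UUID and mount point are illustrative
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults,autodefrag  0  0

    # or apply it to a live mount without rebooting
    sudo mount -o remount,autodefrag /data

It is aimed at exactly this pattern (small random writes into a big file), though as noted further down it may not retroactively clean up fragmentation that is already there.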
I'm doing a forced defragment specifically on that file now, and it's taking a while.
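For reference, the single-file defragment looks something like this; the path is a placeholder and -t sets the target extent size:

    sudo btrfs filesystem defragment -t 256M /path/to/data.mdb

    # afterwards, check how many extents remain
    sudo filefrag /path/to/data.mdb

One caveat worth knowing: on btrfs, defragmenting breaks reflink/snapshot sharing, so if the file is part of any snapshots this can increase disk usage.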
> So long as you are using rsync to read the file, the fact that the db is CoW is probably not an issue (since from rsync's point of view it is just one big file). If you have small writes to the file and btrfs was set to CoW, that would make a mess. I'm not sure btrfs is a good filesystem choice for a db on a spinning disk; disabling CoW might have mostly fixed this. It's not clear to me whether setting the defrag option now will fix the already fragmented parts of the file or not. And if you only turned off CoW later, the file may already have been heavily fragmented.
I'm not accessing the file directly; the Monero daemon is, so no rsync is involved as far as I know. I have already set nodatacow on the directory containing the file and moved the file back into it to ensure the attribute was applied.
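For the archives, the usual sequence for that, since chattr +C only takes effect on files created after the flag is set (paths here are placeholders):

    sudo chattr +C /srv/db                        # new files created here get No_COW
    cp --reflink=never data.mdb /srv/db/data.mdb  # a real copy picks up the attribute
    lsattr /srv/db/data.mdb                       # should show the 'C' flag

Note that a plain mv within the same filesystem only renames the file and will not apply the attribute, so lsattr is worth checking.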
Thanks,
Richard