On Thu, 10 Mar 2022 at 18:43, Patrick O'Callaghan <pocallaghan@gmail.com> wrote:
On Thu, 2022-03-10 at 22:40 +0000, Patrick O'Callaghan wrote:
On Thu, 2022-03-10 at 11:13 -0800, Samuel Sieb wrote:
On 3/10/22 02:47, Patrick O'Callaghan wrote:
On Wed, 2022-03-09 at 11:25 -0800, Samuel Sieb wrote:
On 3/9/22 02:44, Patrick O'Callaghan wrote:
Thanks for the detailed info. My current SSD is a Samsung 860 EVO (2TB), so for comparison I did this:
$ time dd if=/dev/zero bs=1G count=23 of=Big
23+0 records in
23+0 records out
24696061952 bytes (25 GB, 23 GiB) copied, 14.9873 s, 1.6 GB/s

real    0m15.087s
user    0m0.000s
sys     0m14.640s
However, that's clearly not a reflection of actual I/O speed, as the writes will have been cached.
You can use "conv=fdatasync" to make sure all data is written before giving the final result.
Didn't seem to make a difference:
$ time dd if=/dev/zero bs=1G count=23 conv=fdatasync of=Big
23+0 records in
23+0 records out
24696061952 bytes (25 GB, 23 GiB) copied, 15.3153 s, 1.6 GB/s
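As a cross-check, dd's oflag=direct should take the page cache out of the picture entirely (untested sketch; a smaller block size than the 1G above is safer with O_DIRECT):

$ # O_DIRECT write of the same 23 GiB, in 1 MiB blocks
$ time dd if=/dev/zero bs=1M count=23552 oflag=direct of=Big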
Are you using btrfs with compression enabled?
Btrfs in the default F35 configuration, so no compression.
Oops. It actually has "compress=zstd:1" in the fstab line.
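A quick way to confirm what the mounted filesystem is actually using, rather than trusting a reading of fstab, is findmnt (sketch; adjust the mount point, assumed here to be /home):

$ # print the live mount options for the given mount point
$ findmnt -no OPTIONS /home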
Apologies. That completely invalidates the numbers.
Not completely invalid: they still say something about a real-world use case (I work with optical remote sensing, where many images have big blocks of "missing" data codes, e.g. clouds), but the interpretation changes. We have been using NetCDF-4 files with internal compression, but now I'm motivated to compare that against uncompressed files on btrfs for "scratch" data that doesn't move over a network.
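Something like this is what I have in mind (a rough sketch; the file names and sizes are placeholders, and compsize is packaged separately in Fedora):

$ # incompressible input, so zstd can't flatter the write numbers
$ dd if=/dev/urandom bs=1M count=4096 of=random.bin
$ # time the write with a flush, as in the earlier tests
$ time dd if=random.bin bs=1M conv=fdatasync of=Big
$ # see how much of the file actually hit the disk after compression
$ compsize Big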