On Thu, 2022-03-10 at 19:47 -0400, George N. White III wrote:
Oops. It actually has "compress=zstd:1" in the fstab line.
Apologies. That completely invalidates the numbers.
Not completely invalid; they still say something about a real-world use case (I work with optical remote sensing, where many images have big blocks of "missing" data codes, e.g. clouds), but the interpretation changes. We have been using netCDF-4 files with internal compression, but now I'm motivated to compare that against uncompressed files on btrfs for "scratch" files that don't move over a network.
I'm calling it invalid because the data is a stream of zeroes, i.e. it's pretty much maximally compressible.
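As a quick illustration of that point (a sketch, using gzip -1 as a widely available stand-in for zstd:1; the qualitative behavior is similar), a stream of zeroes collapses to a tiny fraction of its size while random bytes barely shrink at all:

```shell
# Compress 16 MiB of zeroes vs 16 MiB of random bytes at the fastest level.
# gzip -1 here is an assumption/stand-in for btrfs's zstd:1.
zeros=$(dd if=/dev/zero bs=1M count=16 2>/dev/null | gzip -1 | wc -c)
random=$(dd if=/dev/urandom bs=1M count=16 2>/dev/null | gzip -1 | wc -c)
echo "zeros:  $zeros bytes compressed"
echo "random: $random bytes compressed"
```

The zeroes end up well under 1% of the input size, while the random data comes out slightly *larger* than the input, which is why a zero-filled test file mostly measures the compressor's fast path rather than real disk throughput.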
This might be more realistic, using /dev/urandom:
$ time dd if=/dev/urandom bs=1M count=23000 of=Big
23000+0 records in
23000+0 records out
24117248000 bytes (24 GB, 22 GiB) copied, 81.9688 s, 294 MB/s
real    1m22.106s
user    0m0.040s
sys     1m21.753s
poc