On Wed, Jul 8, 2020 at 9:50 AM Matthew Miller <mattdm(a)fedoraproject.org> wrote:
> On Wed, Jul 08, 2020 at 12:24:27AM -0600, Chris Murphy wrote:
> > 2. Benchmarking: this is hard. A simple tool for doing comparisons
> > among algorithms on a specific bit of hardware is lzbench.
> > How to compile on F32.
> > But is that adequate? How do we confirm/deny on a wide variety of
> > hardware that this meets the goal? And how is this test going to
> > account for parallelization, and read ahead? Do we need a lot of data
> > or is it adequate to get a sample "around the edges" (e.g. slow cpu
> > fast drive; fast cpu slow drive; fast cpu fast drive; slow cpu slow
> > drive). What algorithm?
> More data is always better. I like qualifying the situations in that way. I
> think we should make our decision based on the "center" rather than the
> edges.
If it's the center, I think that favors the mount option approach, done
with the lowest level of compression, i.e. zstd:1. But this suggests
still more benchmarking, to make certain the range of possible write
performance hit in those scenarios is well understood.

The curated approach, by contrast, bypasses most of that question: the
payload and workload for flatpaks, /usr, and containers are fairly
fixed across all Fedora users, unlike the mixed content and workloads
found in ~/.
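For the record, the two approaches look roughly like this (the UUID and
subvolume name below are placeholders, and as far as I know the property
interface takes only an algorithm name, not a level):

```shell
# Mount-option approach: filesystem-wide zstd level 1, set in fstab.
# UUID=xxxxxxxx-xxxx  /  btrfs  subvol=root,compress=zstd:1  0 0

# Curated approach: enable compression per directory with a property;
# new writes under /usr then get compressed regardless of mount options.
sudo btrfs property set /usr compression zstd
sudo btrfs property get /usr compression
```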
> For I hope obvious reasons, I'd love to see this tested on a
> Carbon Gen 8 with the default SSD options.
>
> And, for benchmarks, I'm thinking more application benchmarks than a
> benchmark of the compression itself. How much does compressed /usr affect
> boot times for GNOME and KDE? What about startup time for LibreOffice,
> Firefox, etc? Any impact on run-time usage?
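One way to put numbers on the boot-time part of that is systemd-analyze,
run once on an uncompressed /usr and once with compression enabled
(cold boots, several samples each). Something like:

```shell
# Overall boot time, and the slowest units during startup:
systemd-analyze time
systemd-analyze blame | head -20

# Crude application-startup measure; LibreOffice has a flag that
# initializes and then exits immediately:
time libreoffice --terminate_after_init
```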
> > D. Which directories? Some may be outside of the installer's scope.
> As I noted in the bugzilla entry, the /var/log on this system compresses to
> 3.6% of its original size.
Yep. I'm not opposed to it by any means. I'm not sure what would live
there other than VMs and databases, and we still have to figure out who
"owns" those locations to decide how we get them set up.

There is a bit of a rabbit hole for /var/log/journal. systemd-journald
detects btrfs and automatically sets nodatacow on its journals, which
means no compression. On a HDD that makes sense. But on an SSD the
files are relatively small, and while fragmentation can be bad, the
tracking overhead isn't that bad on an SSD; I'm pretty sure compression
makes up for it, though I haven't benchmarked it. Anyway, I 'touch
/etc/tmpfiles.d/journal-nocow.conf' to prevent nodatacow from being set
on journals.
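For anyone who wants to try the same thing: the empty file in /etc
masks the copy systemd ships in /usr/lib/tmpfiles.d/, per the usual
tmpfiles.d precedence rules. A sketch (the lsattr check is just to
verify the result):

```shell
# Mask the shipped journal-nocow.conf so systemd-tmpfiles never
# applies chattr +C (nodatacow) to the journal directory:
sudo touch /etc/tmpfiles.d/journal-nocow.conf

# Verify: the 'C' attribute should be absent on the journal dir.
# (Files created before the mask keep whatever attribute they had.)
lsattr -d /var/log/journal/$(cat /etc/machine-id)
```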