On Wed, Jul 08, 2020 at 12:24:27AM -0600, Chris Murphy wrote:
> 2. Benchmarking: this is hard. A simple tool for doing comparisons
> among algorithms on a specific bit of hardware is lzbench.
> https://github.com/inikep/lzbench
> How to compile on F32:
> https://github.com/inikep/lzbench/issues/69
> But is that adequate? How do we confirm/deny on a wide variety of
> hardware that this meets the goal? And how is this test going to
> account for parallelization and readahead? Do we need a lot of data,
> or is it adequate to get a sample "around the edges" (e.g. slow cpu
> fast drive; fast cpu slow drive; fast cpu fast drive; slow cpu slow
> drive)? What algorithm?
More data is always better. I like qualifying the situations in that way. I
think we should make our decision based on the "center" rather than the
edges, though.
For I hope obvious reasons, I'd love to see this tested on a Lenovo X1
Carbon Gen 8 with the default SSD options.
And, for benchmarks, I'm thinking more application benchmarks than a
benchmark of the compression itself. How much does compressed /usr affect
boot times for GNOME and KDE? What about startup time for LibreOffice,
Firefox, etc? Any impact on run-time usage?
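For the application-side measurements, something as simple as timing repeated launches would do as a first pass. A minimal sketch (not an agreed methodology, just one way to collect numbers; "true" below is a stand-in for whatever application is actually under test, and cold-start runs would additionally need the page cache dropped, which is not shown):

```python
# Crude start-up timing for comparing an application on compressed vs.
# uncompressed /usr. Run the same script on both configurations.
import statistics
import subprocess
import time

def time_startup(cmd, runs=5):
    """Return the median wall-clock time (seconds) to run cmd to completion."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    # Substitute the real target, e.g. ["libreoffice", "--version"].
    print(f"median: {time_startup(['true']):.4f}s")
```

The median rather than the mean keeps one slow outlier run from skewing the comparison.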
[...]
> D. Which directories? Some may be outside of the installer's scope.
As I noted in the bugzilla entry, the /var/log on this system compresses to
3.6% of its original size.
(Methodology: I tarred up the dir and then ran zstd -1 on the tar file. If I
use -19, it's unsurprisingly slow and saves another whole one percent of the
original.)
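That methodology is easy to reproduce on any directory. A rough sketch, with one loud caveat: the Python standard library has no zstd binding, so zlib at level 1 stands in for "zstd -1" below (the exact ratio will differ; swap in the third-party zstandard module to match the numbers above):

```python
# Approximate the tar-then-compress methodology: build a tar archive of a
# directory in memory and report compressed size as a fraction of the
# uncompressed tar. zlib level 1 is a stand-in for "zstd -1".
import io
import tarfile
import zlib

def compression_ratio(path, level=1):
    """Return compressed size / uncompressed tar size for the given directory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(path, arcname=".")
    raw = buf.getvalue()
    return len(zlib.compress(raw, level)) / len(raw)

# Example: compression_ratio("/var/log") -- needs read access to everything
# under the directory, so run it as root for /var/log.
```

Log files are highly repetitive text, which is why ratios in the single-digit percents are unsurprising there.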
--
Matthew Miller
<mattdm(a)fedoraproject.org>
Fedora Project Leader