On Wed, May 13, 2020 at 6:03 PM Chris Murphy <lists(a)colorremedies.com> wrote:
On Wed, May 13, 2020 at 5:32 AM Bohdan Khomutskyi <bkhomuts(a)redhat.com> wrote:
> It has been a long time since the last message in this change proposal.
> Recently I was working to reduce the impact of the increased compression
> ratio on the installation image size for Fedora. I have achieved
> outstanding results -- a working proof of concept. With the following
> change:
> , not only does the higher compression not impact the installation time;
> in certain cases, the installation time is even reduced. This is because
> a process that is aware of the filesystem's internal structure is used to
> install the system from the SquashFS. The new process also takes
> advantage of the system's multi-core architecture during installation,
> performing the decompression on multiple processors in parallel.
> The combination of https://github.com/rhinstaller/anaconda/pull/2292
> should reduce _both_ the image size and the installation time. The
> installation time will be reduced when the system is installed from the
> SquashFS, as is the case in Fedora Workstation.
> To optimize the SquashFS, I will work on requesting support for the
> required functionality in the Pungi compose build software.
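For reference, the knobs being discussed map onto mksquashfs/unsquashfs options. A dry-run sketch, not a tested recipe: the paths are hypothetical, and the flags assume squashfs-tools >= 4.4, which added zstd support:

```shell
# Dry-run sketch: print, rather than execute, the commands under discussion.
run() { echo "+ $*"; }

ROOTFS=rootfs/   # hypothetical unpacked root tree
# zstd: much cheaper to decompress than xz, at some cost in image size.
run mksquashfs "$ROOTFS" root-zstd.img -comp zstd -Xcompression-level 19 -b 1M
# xz, the current Fedora default, for comparison.
run mksquashfs "$ROOTFS" root-xz.img -comp xz -b 1M
# unsquashfs decompresses blocks on multiple cores; -p caps the worker count.
run unsquashfs -p 3 -d target/ root-zstd.img
```

The `-b 1M` block size is also illustrative; larger blocks generally improve the compression ratio at the cost of more read amplification.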
Hi, since the feedback was that a higher emphasis be placed on install
time being reduced, even if there is some increase in ISO size (not
without limit, it's a balancing act), I'm still curious how the change
compares when using zstd, all other things being equal.
For example, Solus recently changed from xz to zstd in squashfs, and
claims 3-4x faster install times, with some increase in image size.
From anaconda.log for a default/auto LVM+ext4 install using
19:51:52 DBG ui.gui.spokes.installation_progress: The installation has started.
19:57:16 DBG ui.gui.spokes.installation_progress: The installation has finished.
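Those two timestamps work out to 5m24s of wall-clock install time; a minimal check with GNU date, using the times copied from the log excerpt:

```shell
# Compute the install duration from the two anaconda.log timestamps.
start="19:51:52"
finish="19:57:16"
s=$(date -u -d "1970-01-01 $start" +%s)    # GNU date: parse HH:MM:SS as epoch seconds
f=$(date -u -d "1970-01-01 $finish" +%s)
dur=$((f - s))
printf '%dm%ds\n' $((dur / 60)) $((dur % 60))   # prints 5m24s
```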
This is not an exact comparison to using a plain squashfs image and
writing out (I'm guessing) 30,000 files to the install target. But
using unsquashfs to extract the root.img and write it to the same

real    0m50.315s
user    2m18.318s
sys     0m6.569s
I'm extracting just one file, the embedded ext4. But (a) unsquashfs is
parallelizing at about 270% CPU on a 3-virtual-core VM, and (b)
/dev/loop1 isn't busy at all. Do unsquashfs and ext4 slow down when
handed 30K files to write out instead of one big one? Dunno. But since
prior testing suggests this is a CPU-bound problem, not a
disk-contention problem, I'm definitely in the "tell me more" position.
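The utilization figure can be cross-checked from the time(1) numbers above as (user + sys) / real; averaged over the whole run that comes out near 288%, consistent with the roughly 270% observed live. A quick awk check, with the values copied from the run above:

```shell
# CPU utilization of the unsquashfs run: (user + sys) / real, as a percentage.
real=50.315; user=138.318; sys=6.569   # from the time(1) output above
awk -v r="$real" -v u="$user" -v s="$sys" \
    'BEGIN { printf "%.0f%%\n", 100 * (u + s) / r }'   # prints 288%
```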
2m18s is a lot better than 5m24s. And honestly 5m isn't bad; it
takes a lot longer to install Windows 10 and macOS.
I still think that zstd would get even better decompression rates with
less of a CPU (and thus power) hit, though it could be splitting hairs. I'm