On Fri, Jun 5, 2020 at 12:33 AM Milan Crha <mcrha(a)redhat.com> wrote:
> On Thu, 2020-06-04 at 16:30 -0400, Ben Cotton wrote:
> > ... The memory used is not preallocated. It's
> > dynamically allocated and deallocated, on demand. ...
> > The system will use RAM normally up until it's full, and then start
> > paging out to swap-on-zram, same as a conventional swap-on-drive....
> I confess I've absolutely no idea how this works in practice, but
> when you tell me "we do not allocate on start, we allocate on demand,
> when the memory is full", then a logical question is: where do you
> allocate, when the memory is already full? If there's some threshold,
> then it's quite the same as preallocating the ...
Yes, it's an oversimplification. There isn't a case when the memory is
truly completely full. The kernel starts to swap before that, and it
can move things around from buffers and cache, and even do reclaim, in
order to allocate memory for the zram device. And at that point the
compression means memory is freed by eviction at a greater rate than
it is consumed by allocations to the zram device.
This is taken just now from a laptop with 2+ days uptime:

]$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7845        3292         699         532        3852        3714
Swap:          3921         192        3729

]$ swapon --show
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition 3.8G 193M   -2

]$ zramctl
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo-rle       3.9G 187.3M   50M 52.7M       4 [SWAP]
This is a small amount of swap in use right now. In this case, ~190M
has been paged out to swap, and the zram device has compressed that to
~50M. That's pretty good, so it suggests highly compressible data. The
savings is ~140M. That 140M can be used to avoid reclaim of file pages
(programs can stay in RAM instead of being pushed out and read back in
later) or for more active user data, etc. Anything really.
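The savings arithmetic above is just the difference between what was paged out and what it occupies compressed; a minimal sketch, using rounded figures from the zramctl output:

```shell
# Rounded figures from the zramctl output above: ~187M of data paged
# out, stored compressed in ~50M of RAM on the zram device.
data_mib=187
compr_mib=50

# RAM effectively freed by compressing the swapped-out pages
saved_mib=$(( data_mib - compr_mib ))
echo "saved ${saved_mib}M"   # -> saved 137M
```

With less compressible data the ratio drops, and so does the benefit; the ~3.7:1 ratio here is on the favorable end.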
> Leaving aside how the compression itself works: does it mean that the
> actual outcome will be a quicker response (than when using swap on
> disk) for the price of higher demands on the CPU (so it can
> compress/decompress in a timely manner), thus eventually higher power
> consumption, thus a shorter time before the battery runs down? I know
> I can easily misunderstand things, but it feels like a side effect of
> the ...
I'm not sure I can quantify the battery consumption aspect. I've been
using this configuration for a year and haven't noticed a difference
in battery life, or heat. But under heavy swap, it's remarkably faster
than SSD; for HDD it's an even bigger difference. Writes to either
drive type also come with a power cost. I'm not sure which one is more
efficient.
> Virtual machines. I usually do not give them access to all the host
> system's resources; I give them 1 or 2 CPUs (sometimes more, but only
> when I know I want to do something expensive there), and around 2GB
> of memory (I used to give them one, but since Fedora began to require
> at least 2, I increased it). With this swap-to-RAM on, should I give
> them even more memory (and eventually CPU), so Fedora can boot and
> work ...
No. I also use VMs, and I leave the zram device configured at 50% of
RAM, which is 4GiB on both of my laptops.
And in fact, I've been using swap-on-zram in the guest VM too, instead
of swap-on-drive. These are 2-3GiB VMs.
> When you've a machine with 4GB RAM, a 1TB HDD and a 1.6GHz CPU, it's
> much cheaper to use swap on disk than to waste RAM, which should be
> used for other things (it's not even a fully usable 4GB, because the
> integrated graphics card can use some of it). I guess this will
> penalize such low cost machines, though it needs testing to know for
> sure. There are devices with a lot fewer resources (IoT had been
> mentioned, under which, I suppose, also belong toys like the
> Raspberry Pi, which does not have a lot of RAM (not counting the
> Raspberry Pi 4B)), where it might (or might not) hurt even more.
These are really good questions. I have an Intel NUC with 4GiB RAM and
a Raspberry Pi Zero with only 512M RAM, and I use swap-on-zram there
too, because (a) swap on SD card has terrible performance, (b) SD
cards have a pretty low maximum of lifetime writes, so I try to reduce
writes as much as possible to extend their life, and (c) Fedora IoT,
and I think some of the other ARM products, have been using
swap-on-zram sized to 50% of RAM by default in Fedora for years.
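For reference, that 50%-of-RAM sizing is what zram-generator expresses in its config file; a minimal sketch (the file path and keys are zram-generator's, the exact values here are just the example from this thread, not necessarily your distro's defaults):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# Size the device to half of RAM, capped at 4 GiB
zram-size = min(ram / 2, 4096)
compression-algorithm = lzo-rle
```

Dropping a file like this in place and rebooting (or restarting the generated swap unit) is all the per-machine tuning normally needed.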
It's actually very well suited for the low memory device, as well as
desktop and servers with tons of memory. And cloudy things as well.
> My main machine has 16GB RAM. When I compile some bigger projects
> (like WebKitGTK in my case, but I do not want to even think of giants
> like LibreOffice), then I face memory pressure, also depending on the
> target type (debug/release) and how many cores I give it. It happens
> from time to time that the whole system freezes (keyboard/mouse
> doesn't react, nor do Caps Lock/Num Lock), with high CPU usage. I
> suppose it keeps trying to compile, but I never let it run that long;
> I just turn off the machine and start again, and the new run
> eventually survives. Should this swap to RAM help at all in such a
> situation?
Yes. The WebKitGTK compile is a good brutal test; I've been using it
quite a lot this whole time to evaluate the feature and come up with
the size proposals. The compile will go a lot faster with swap-on-zram
than with heavy swap-on-drive. But in this case it's important to get
the -j option correct for your actual resources. In my case it's 8GiB
RAM and 8 cores. The default means 10 jobs, and around midway into the
compile it wants ~18-20GiB for all the jobs that have spawned. This
fails for me whether swap-on-zram or swap-on-drive, but it fails much
faster with the combination of zram *and* earlyoom.
earlyoom is enabled by default in Fedora Workstation 32.
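One way to pick a safer -j value is to cap it by memory as well as cores; a rough sketch, assuming roughly 2 GiB of peak RAM per WebKitGTK compile job (that per-job figure is my assumption, not something from this thread; tune it per project):

```shell
# Cap parallel build jobs by both CPU count and total memory,
# assuming ~2 GiB of RAM per compile job (adjust for your project).
cores=$(nproc)
mem_gib=$(free -g | awk '/^Mem:/ {print $2}')
mem_jobs=$(( mem_gib / 2 ))
(( mem_jobs < 1 )) && mem_jobs=1

# Use whichever limit is smaller
jobs=$(( cores < mem_jobs ? cores : mem_jobs ))
echo "building with -j${jobs}"
# e.g.: ninja -j "$jobs"
```

earlyoom itself is a separate service; on editions where it isn't on by default, `systemctl enable --now earlyoom` turns it on.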
> > ... a [QA: SwapOnZram Test Day] to
> > discover edge cases, and tweak the default configuration if necessary
> > to establish a good one-size-fits-all approach.
> Developer usage and average (office) user usage are very different.
> I'm afraid you cannot find any number that will fit all.
I'm reasonably confident sane defaults can be found. This doesn't mean
they're perfect for all workloads. Right now users pretty much accept
the one-size-fits-all swap-on-disk and never change it, because it's a
pain to do that. In the zram case it's easy to make changes, and