On 7/6/20 6:49 PM, John M. Harris Jr wrote:
> Unless you're actively using all of those tabs (I don't know how you would be,
> but it's certainly possible), swap sounds like the perfect solution. Unless
> Firefox keeps JS running in there, and it's updating the DOM, these would
> likely be able to get swapped out.

> Firefox will actually unload tabs that you haven't done anything with in a
> while under specific circumstances, but I don't know what those are. You may
> notice, for example, that the page "reloads" without network traffic, when
> going to a tab you haven't had open in a while. I've seen this on my system
> recently.
Take a look at the Task Manager. You will see tabs running even though you're not touching them: the pages have elements (ads, animations, etc.) that keep executing even while the tabs are not visible. True, the browser tries to pacify them (turns off sound/video, and whatnot), but they still run, and if the JS has memory leaks their footprint keeps growing. You can spot the culprits by sorting on the "Energy Impact" or "Memory" column headers.

>> More swap doesn't necessarily solve this problem: remember that 1GB/min
>> is a ballpark HD speed so if you have 10GB swap that your system is
>> actually trying to use, you will just sit there for 10 minutes.
> I don't really understand how that'd be the case. For that to happen, you'd
> have to load all of those into memory, have them swap out, then try to swap
> them all back in at the same time.

That's my point: you don't have control over it. The swap algorithm decides which pages get evicted from RAM and which get brought back. If the browser starts allocating memory, my FreeCAD might get pushed out, and if I click on the GIMP window after not using it for an hour, the system has to drag all of it back in.
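If you're curious who got pushed out at any given moment, something along these lines works as a quick diagnostic (VmSwap in /proc/<pid>/status is the amount of that process's memory currently sitting in swap):

  # list the processes with the most swapped-out memory, largest first
  grep VmSwap /proc/[0-9]*/status | sort -k2 -n -r | head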

One way to think about it is that disk is tens of thousands of times slower than RAM, so whenever you actually have to use it, your system is commensurately slower. That's why zram is such a good idea. Swap was always a tradeoff: you saved the dollars not spent on RAM, and paid with your time sitting idle waiting for the computer.
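For anyone who wants to play with zram by hand, roughly this is all it takes (a sketch only; the device, size and compression algorithm are arbitrary examples, and availability of zstd depends on your kernel):

  # rough sketch: create a compressed RAM-backed swap device
  sudo modprobe zram
  sudo zramctl --find --size 4G --algorithm zstd   # allocates e.g. /dev/zram0
  sudo mkswap /dev/zram0
  sudo swapon --priority 100 /dev/zram0            # prefer it over any disk swap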

With the modern way of computing, where your data is mostly NOT on your system (so you don't lose it if your application shuts down), I am beginning to think that application crashes aren't as big a deal as they used to be. I'd rather crash and restart where I left off than have the computer drag me along while it tries to keep my application alive.

Having said that, of course lots of applications ARE local and will lose data if they crash, so maybe the cgroup-based approach is the definitive solution: hard-limit the memory for cloud apps, so the local apps are protected from resource exhaustion.
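Just to illustrate the idea (a minimal sketch, assuming systemd with cgroup v2; the 4G cap is an arbitrary number): you can launch the browser in its own scope with a hard memory ceiling and no swap, so when it hits the limit the kernel OOM-kills it inside that cgroup instead of evicting everything else:

  # run the browser under a hard memory cap in its own cgroup scope
  systemd-run --user --scope -p MemoryMax=4G -p MemorySwapMax=0 firefox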