On Sun, Aug 11, 2019 at 11:21 AM Jan Kratochvil <jan.kratochvil@redhat.com> wrote:
On Sun, 11 Aug 2019 17:50:17 +0200, Chris Murphy wrote:
I don't follow. You're saying RelWithDebInfo is never suitable for a local build?
Most of the time. What is your use case for it?
My use case is testing the responsiveness of Fedora Workstation under CPU and memory pressure, as experienced by an ordinary user.
In file included from Source/JavaScriptCore/config.h:32,
                 from Source/JavaScriptCore/llint/LLIntSettingsExtractor.cpp:26:
Source/JavaScriptCore/runtime/JSExportMacros.h:32:10: fatal error: wtf/ExportMacros.h: No such file or directory
You are reinventing the wheel; the Fedora packager has already done this work for this package.
That's out of scope.
I said from the outset this is an example. The central topic is that an unprivileged program can ask for resources that do not exist; the operating system tries and fails to supply them, and the result is not merely task failure but the loss of the entire system. In this example the user is doing other things concurrently and likely suffers data loss, and possibly even file system corruption, as a direct consequence of having to force power off the machine once, for all practical purposes, normal control has been lost.
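To make the mechanism concrete: the kernel's default overcommit policy lets allocations succeed on paper even when no backing memory exists, and the failure only surfaces later, under pressure. The knobs involved can be inspected like this (standard procfs paths; the last line is a sketch of the strict setting, not a recommendation):

    cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (the default)
    grep -i commit /proc/meminfo         # CommitLimit vs. Committed_AS
    # vm.overcommit_memory=2 makes the kernel refuse requests beyond CommitLimit:
    echo 2 | sudo tee /proc/sys/vm/overcommit_memory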
Let me try another argument: if the user manually specifies 'ninja -j 64' on this same system, is that sabotage?
For untrusted users, Linux has given up on that; it is too big a can of worms. Use a virtual machine (KVM) with specified resources (memory size). Nowadays it should also be possible with less overhead using Docker containers.
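For example, Docker can cap both memory and CPU; the image name and limits here are only illustrative:

    docker run --rm -it --memory=8g --cpus=4 fedora:30 bash

Inside such a container, a runaway build can exhaust only its own allowance rather than the whole machine's.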
If you mean local builds of your own running away, then:
(1) Turn off swap, as RAM is cheap enough today. If something really runs out of RAM, it gets killed by the kernel OOM killer.
(2) Have the swap on NVMe; in my experience that does not kill the machine.
(3) Use some reasonable ulimits in your ~/.bash_profile (see the sketch below).
(4) When the machine is really unresponsive, log in from a different box and kill the culprits. In my own experience the machine is still able to accept a new SSH connection, if a bit slowly.
But yes, I agree this problem has AFAIK no perfect solution.
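A minimal sketch of (3), with values that are only illustrative and need tuning to the machine:

    # in ~/.bash_profile
    ulimit -v 8388608   # cap address space of the shell and its children at 8 GiB (value in KiB)
    ulimit -u 4096      # cap the number of processes the user may run

With a cap like this, an oversized allocation fails in the offending process instead of dragging the whole system into swap.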
I don't think it's acceptable in 2019 for an unprivileged task to take out the entire operating system. As I mentioned in the very first post, remote ssh was not responsive for 30 minutes, at which point I gave up and forced power off. It's a bit of a trap, though, to suggest the user needs the ability and skill to ssh in remotely and kill off runaway programs; I reject that premise.
It's completely reasonable for an ordinary user to conclude that control of the system has been lost the moment the mouse pointer freezes.