On Sun, Aug 11, 2019 at 10:36 AM mcatanzaro@gnome.org wrote:
On Sun, Aug 11, 2019 at 10:50 AM, Chris Murphy lists@colorremedies.com wrote:
Let's take another argument. If the user manually specifies 'ninja -j 64' on this same system, is that sabotage? I'd say it is. If so, why isn't it also sabotage that ninja's default computes the job count as nrcpus + 2, and doesn't take available memory into account when deciding what resources to demand? I can build linux all day long on this system with its defaults and never run into a concurrent usability problem.
There does seem to be a shared responsibility between the operating system and the application to make sure sane resource requests are made and honored.
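[Editor's note: the "nrcpus + 2" default mentioned above can be sketched roughly as follows. This is a simplified illustration of the heuristic, not ninja's actual code; ninja special-cases very low CPU counts, and exact behavior may vary between versions.]

```python
import os

def default_build_jobs() -> int:
    # Sketch of the default heuristic discussed in the thread:
    # job count = CPU count + 2, so that jobs blocked on I/O
    # don't leave cores idle. Note that nothing here consults
    # available memory -- which is the objection being raised.
    cpus = os.cpu_count() or 1
    if cpus < 2:
        return 2   # too few cores to benefit from the +2 offset
    if cpus == 2:
        return 3
    return cpus + 2
```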
This seems like a distraction from the real goal here, which is to ensure Fedora remains responsive under heavy memory pressure, and to ensure unprivileged processes cannot take down the system by allocating large amounts of memory. Fixing ninja and make to dynamically scale the number of parallel build processes based on memory pressure would be wonderful, but it's not going to solve the underlying issue here, which is that random user processes should never be able to hang the system.
That's fair.
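[Editor's note: the memory-aware scaling mentioned above could look roughly like the sketch below. The function names and the per-job memory estimate (2 GiB per compile job) are illustrative assumptions, not actual ninja or make behavior; the MemAvailable parsing is Linux-specific.]

```python
import os

GIB = 1024 ** 3

def mem_available_bytes() -> int:
    # Parse MemAvailable from /proc/meminfo (Linux, kernel >= 3.14).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024  # value is in kB
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

def memory_aware_jobs(per_job_bytes: int = 2 * GIB) -> int:
    # Take the usual CPU-based default, then cap it by how many
    # jobs the currently available memory can plausibly support.
    cpu_jobs = (os.cpu_count() or 1) + 2
    mem_jobs = max(1, mem_available_bytes() // per_job_bytes)
    return min(cpu_jobs, mem_jobs)
```

As the thread notes, even if build tools adopted something like this, it would not address the underlying problem that any unprivileged process can still allocate enough memory to hang the system.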