For what it's worth, my research group attacked essentially this exact problem some time ago. We built a modified Linux kernel, which we called Redline, that was utterly resilient to fork bombs, malloc bombs, and so on. No process could take down the system, let alone an unprivileged one. I think some of the ideas we described back then would be worth adopting / adapting today (the code is of course hopelessly out of date: we published our paper on this at OSDI 2008).
We had a demo where we would run two identical systems side by side, with the same workloads (a number of videos playing simultaneously), one running Redline and the other running stock Linux. We would launch a fork/malloc bomb on both. The Redline system barely hiccuped; the stock Linux kernel would freeze and become totally unresponsive (or panic). It was a great demo, but also a pain, since we invariably had to restart the stock Linux box :).
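For anyone who has never (deliberately) set one off: the classic shell fork bomb is a one-liner. This is the textbook version for illustration, not necessarily the exact bomb we used in the demo, and it will hang an unprotected machine, so only try it in a throwaway VM:

    # Define a function named ':' that pipes itself into itself and
    # backgrounds the result, then invoke it; the process count grows
    # exponentially until the kernel runs out of resources.
    :(){ :|:& };: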
Redline: First Class Support for Interactivity in Commodity Operating Systems
While modern workloads are increasingly interactive and resource-intensive (e.g., graphical user interfaces, browsers, and multimedia players), current operating systems have not kept up. These operating systems, which evolved from core designs that date to the 1970s and 1980s, provide good support for batch and command-line applications, but their ad hoc attempts to handle interactive workloads are poor. Their best-effort, priority-based schedulers provide no bounds on delays, and their resource managers (e.g., memory managers and disk I/O schedulers) are mostly oblivious to response time requirements. Pressure on any one of these resources can significantly degrade application responsiveness.
We present Redline, a system that brings first-class support for interactive applications to commodity operating systems. Redline works with unaltered applications and standard APIs. It uses lightweight specifications to orchestrate memory and disk I/O management so that they serve the needs of interactive applications. Unlike real-time systems that treat specifications as strict requirements and thus pessimistically limit system utilization, Redline dynamically adapts to recent load, maximizing responsiveness and system utilization. We show that Redline delivers responsiveness to interactive applications even in the face of extreme workloads, including fork bombs, memory bombs, and bursty, large disk I/O requests, reducing application pauses by up to two orders of magnitude.
Paper here (in case the attachment fails):
https://www.usenix.org/legacy/events/osdi08/tech/full_papers/yang/yang.pdf
And links to code here:
https://emeryberger.com/research/redline/
There has been some recent follow-on work in this direction: see this work out of Remzi and Andrea's lab at Wisconsin: http://pages.cs.wisc.edu/~remzi/Classes/739/Fall2016/Papers/splitio-sosp15.p...
-- emery

--
Professor Emery Berger
College of Information and Computer Sciences
University of Massachusetts Amherst
www.emeryberger.org, @emeryberger
On Mon, Aug 12, 2019 at 10:07 AM Benjamin Kircher <benjamin.kircher@gmail.com> wrote:
On 12. Aug 2019, at 18:16, Lennart Poettering <mzerqung@0pointer.de> wrote:
On Mon, 12.08.19 09:40, Chris Murphy (lists@colorremedies.com) wrote:
How to do this automatically? Could there be a mechanism for the system and the requesting application to negotiate resources?
Ideally, GNOME would run all its apps as systemd --user services. We could then set DefaultMemoryHigh= globally for the systemd --user instance to some percentage value (which is taken relative to the physical RAM size). This would then mean every user app individually could use — let's say — 75% of the physical RAM size and when it wants more it would be penalized during reclaim compared to apps using less.
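To make that concrete: DefaultMemoryHigh= is the proposed global default for the user manager. A rough sketch of the same policy with per-unit knobs that already exist would be to put a MemoryHigh= ceiling on the user manager's app.slice (assuming, as in recent GNOME, that apps get placed there):

    # Sketch only: cap all apps under the systemd --user instance's
    # app.slice. Percentages are taken relative to physical RAM; units
    # over the ceiling are throttled/reclaimed first rather than
    # OOM-killed outright. See systemd.resource-control(5).
    systemctl --user set-property app.slice MemoryHigh=75%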
If GNOME ran all apps as user services, we could do various other nice things too. For example, it could dynamically assign the foreground app more CPU/IO weight than the background apps if the system is starved of both.
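As a hypothetical illustration of what the shell could do on a focus change (the .scope names below are made up, and CPUWeight=/IOWeight= assume the unified cgroup v2 hierarchy with the io controller enabled):

    # Boost the newly focused app, demote the one losing focus.
    systemctl --user set-property app-gnome-editor-1234.scope CPUWeight=500 IOWeight=500
    systemctl --user set-property app-gnome-builder-5678.scope CPUWeight=50 IOWeight=50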
I really like these ideas. Why isn’t it already done this way?
I don’t have a GNOME desktop at hand right now to investigate how GNOME starts applications and so on, but aren’t new processes started by the user — GNOME or not — always children of user.slice? Is there a difference between starting a GNOME application and starting a normal process from my shell?
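One way to find out is to compare the cgroup tree for both cases; a quick sketch:

    # Show everything below the user's slice:
    systemd-cgls /user.slice

    # Roughly what you would expect to see: an app started from a shell
    # lives inside the terminal's own unit/scope, while an app GNOME
    # launches as a systemd --user service or scope gets a unit of its
    # own under user@UID.service, which is what per-app limits like
    # MemoryHigh= would attach to.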
And to begin with, wouldn’t it be enough to differentiate between the user slices and the system slice, and set DefaultMemoryHigh= in a way that makes sure there is always some headroom left for the system?
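A minimal sketch of that headroom idea, assuming a systemd new enough to accept template drop-ins such as user-.slice.d (the 85% figure and the file name are placeholders):

    sudo mkdir -p /etc/systemd/system/user-.slice.d
    # Applies to every user-UID.slice: leave ~15% of RAM as headroom
    # for system.slice before user memory is pushed into reclaim.
    printf '[Slice]\nMemoryHigh=85%%\n' \
        | sudo tee /etc/systemd/system/user-.slice.d/50-headroom.conf
    sudo systemctl daemon-reload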
BK
(… I definitely need to play around with Silverblue to learn what they are doing.)