On Mo, 12.08.19 19:06, Benjamin Kircher (benjamin.kircher(a)gmail.com) wrote:
> On 12. Aug 2019, at 18:16, Lennart Poettering <mzerqung(a)0pointer.de> wrote:
>> On Mo, 12.08.19 09:40, Chris Murphy (lists(a)colorremedies.com) wrote:
>>> How to do this automatically? Could there be a mechanism for the
>>> system and the requesting application to negotiate resources?
>> Ideally, GNOME would run all its apps as systemd --user services. We
>> could then set DefaultMemoryHigh= globally for the systemd --user
>> instance to some percentage value (which is taken relative to the
>> physical RAM size). This would then mean every user app individually
>> could use — let's say — 75% of the physical RAM size, and when it wants
>> more it would be penalized during reclaim compared to apps using less.
>> If GNOME ran all apps as user services we could do various other
>> nice things too. For example, it could dynamically assign the fg app
>> more CPU/IO weight than the bg apps, if the system is starved of
>> resources.
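The MemoryHigh= idea above can be sketched as a per-service drop-in. This is a sketch only: the unit name `example.service` is made up, and it writes into a temporary directory instead of the real ~/.config/systemd/user/ tree. `MemoryHigh=` does accept percentage values, taken relative to physical RAM:

```shell
# Sketch: a drop-in capping a per-user service at 75% of physical RAM.
# Unit name and target directory are illustrative; a real setup would use
# ~/.config/systemd/user/<unit>.d/ followed by "systemctl --user daemon-reload".
dropin_dir="$(mktemp -d)/example.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/50-memory.conf" <<'EOF'
[Service]
# Above this threshold the service is throttled and preferred during
# reclaim, rather than hard-killed (a hard kill would be MemoryMax=).
MemoryHigh=75%
EOF
cat "$dropin_dir/50-memory.conf"
```

The point of MemoryHigh= over MemoryMax= is exactly the "penalized during reclaim" behavior: it is a soft pressure limit, not an OOM boundary.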
> I really like the ideas. Why isn’t this done this way anyway?
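The fg/bg weighting mentioned above could look roughly like this from a compositor's point of view. A sketch only: the unit names `app-foo.scope` and `app-bar.scope` are made up, and the helper degrades to a dry run when no running unit is reachable:

```shell
# Sketch: shift CPU/IO weight toward the focused app, away from a
# background one. Unit names are illustrative. set_weights only touches
# units that are actually active; otherwise it prints what it would do.
set_weights() {
    # args: unit cpu_weight io_weight
    if systemctl --user is-active --quiet "$1" 2>/dev/null; then
        systemctl --user set-property --runtime "$1" "CPUWeight=$2" "IOWeight=$3"
    else
        echo "dry run: would set $1 CPUWeight=$2 IOWeight=$3"
    fi
}
set_weights app-foo.scope 500 500   # focused app: boosted
set_weights app-bar.scope 50 50     # background app: demoted
```

`--runtime` keeps the change ephemeral, which fits focus changes: nothing is persisted to disk and the weights reset on the next session.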
Well, let's just say certain popular container managers blocked
switching to cgroupsv2, and only in cgroupsv2 is delegating cgroup
subtrees to unprivileged users safe. Hence this kind of resource
management wasn't really doable without ugly hacks.
But it appears cgroupsv2 now has a chance of becoming a reality on
Fedora, and this opens a lot of doors.
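Whether a machine is actually on the unified (cgroupsv2) hierarchy, and which controllers PID 1 delegated to the per-user manager, can be probed roughly like this. The sketch degrades gracefully on systems without cgroupsv2 or without a user session:

```shell
# Rough probe; paths follow the unified-hierarchy layout.
fs_type=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unavailable)
echo "cgroup mount type: $fs_type"    # "cgroup2fs" on a pure v2 system
uid=$(id -u)
delegated="/sys/fs/cgroup/user.slice/user-${uid}.slice/user@${uid}.service/cgroup.controllers"
if [ -r "$delegated" ]; then
    echo "controllers delegated to user@${uid}.service: $(cat "$delegated")"
else
    echo "no delegated per-user subtree (not cgroupsv2, or no user session)"
fi
```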
> I don’t have a GNOME desktop at hand right now to investigate how
> GNOME starts applications and so on, but aren’t new processes started
> by the user — GNOME or not — always children of the user.slice? Is
> there a difference if I start a GNOME application or a normal
> process from my shell?
Well, "user.slice" is a concept of the *system* service manager, but
desktop apps are if anything a concept of the *per-user* service
manager.
> And for the beginning, wouldn’t it be enough to differentiate
> between user slices and the system slice and set DefaultMemoryHigh= in a
> way to make sure there is always some headroom left for the system?
From the system service manager's PoV all user apps together make up
the user's 'user@.service' instance; it doesn't look below that.
I.e. cgroups is hierarchical, and various components can manage their
own subtrees. PID 1 manages the top of the tree, and the per-user
service manager a subtree below it, arranging per-user
apps below that. But from PID 1's PoV each of those per-user subtrees
is opaque and it won't do resource management beneath that
boundary. It's the job of the per-user service manager to do resource
management below it.
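That delegation boundary maps directly onto paths in the v2 hierarchy. An illustration with a made-up uid and app unit; on a live system `systemd-cgls` shows the real tree:

```shell
# Illustrative only: where PID 1's management stops and the per-user
# manager's subtree begins. uid 1000 and app-foo.scope are made-up examples.
uid=1000
boundary="/sys/fs/cgroup/user.slice/user-${uid}.slice/user@${uid}.service"
echo "PID 1 manages down to:         $boundary"
echo "per-user manager manages e.g.: $boundary/app.slice/app-foo.scope"
```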
Lennart Poettering, Berlin