On 08/10/18 18:28 -0500, Zebediah Figura wrote:
On 08/10/18 16:43, John Reiser wrote:
> On 10/8/18 2026 UTC, Zebediah Figura wrote:
>> On 08/10/18 2000 UTC, John Reiser wrote:
>>> Allowing 1M open files per unprivileged process is too many.
>>> Megabytes of RAM are precious. A hard limit of 1M open files per
>>> process allows each process to eat at least 256MB (1M * sizeof(struct file)
>>> [linux/fs.h]) of RAM. If a single user is allowed 1000 processes,
>>> then that's 256GB of RAM, which is a Denial-of-Service attack.
>>> Yes, 4096 open files is not enough. Raise it to 65536.
>> Correct me if I'm wrong, but wouldn't this be capped by the
>> system-wide limit (i.e. it would hit ENFILE) before presenting
>> a problem?
> That means that a different DoS can happen even sooner,
> at (ENFILE / 1M) processes. No other process could open() a file.
The attack surface is substantially broader than that: for example,
executables may no longer launch at all (effectively a denial of
service similar to exhausting /proc/sys/kernel/pid_max), because
shared libraries fail to load, if execution even gets that far.
Sure, but in order to prevent that you'd almost always need to lower
NOFILE. I don't know what kind of policies Fedora (or any other
distribution) has regarding this kind of attack mitigation, but it
seems dubious to me that this is worth doing.
Yes, it feels somewhat uneasy that unprivileged users/processes are,
in a pristine configuration, given a free permit to consume an order
(or two) of magnitude more resources from the globally shared pool
than the globally imposed limits allow, possibly impacting privileged
entities. Given that, rather than raising one of these limits
globally, the increase would preferably be confined to the
Workstation edition; for the rest, the question of possibly lowering
these limits for unprivileged use cases would deserve some attention,
IMHO.