[HEADS-UP] Rawhide: /tmp is now on tmpfs

Gregory Maxwell gmaxwell at gmail.com
Wed Jun 20 18:16:14 UTC 2012


On Wed, Jun 20, 2012 at 1:54 PM, Jef Spaleta <jspaleta at gmail.com> wrote:
> On Wed, Jun 20, 2012 at 9:41 AM, Gregory Maxwell <gmaxwell at gmail.com> wrote:
>> Tmpfs volumes have a size set as a mount option. The default is half
>> the physical ram (not physical ram plus swap). You can change the size
>> with a remount. When it's full, it's full, like any other filesystem.
>
> Okay, that was what I was missing. Pegging the tmpfs to some percentage
> of available ram by default.
>
> Follow-up question: is this value defined at install time, or is it
> 50% of ram as seen at boot-up?
> If I add or remove ram between boot-ups, does the tmpfs size
> automatically adjust post-install to 50% of what is available at boot-up?

It's 50% of the ram seen at boot-up by default (i.e. this is what you get
when you provide no size option while mounting). I'm not sure what the
systemd stuff is doing; I had the impression it was leaving this as the
default.  I don't know whether that is a good thing or not.
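For illustration only (the mount point here is made up, and this is just
what a plain mount does, not necessarily what the systemd unit configures):

  # no size= option: the kernel defaults the limit to 50% of physical ram
  mount -t tmpfs tmpfs /mnt/scratch
  # see what limit was actually applied
  df -h /mnt/scratch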

On Wed, Jun 20, 2012 at 1:56 PM, Brian Wheeler <bdwheele at indiana.edu> wrote:
> I don't think it's just a matter of quantity of I/O but _when_ the I/O
> happens.  Instead of the pagecache getting flushed to disk when it is
> convenient for the system (presumably during a lull in I/O) the I/O is
> concentrated when there is a large change in the VM allocations -- which
> makes it very similar to a thrashing situation.
>
> With a real filesystem behind it, the pages can just be discarded and reused
> when needed (providing they've been flushed) but in the case of tmpfs the
> pages only get flushed to swap when there is memory pressure.

An anecdote is not data, but I've never personally experienced
negative "thrashing" behavior from high tmpfs usage.  I suppose
thrashing only really happens when there is latency-sensitive
competition for the IO, and the kernel must be aggressive enough to
avoid that.

When data is written to regular file systems, the bulk will also
remain in the buffer cache for some span of time until there is
memory pressure.  The differences are how long it can remain before
being backed by disk (tmpfs has no mandatory flush), how much extra
overhead there is from maintaining metadata (less for tmpfs than for
persistent file systems), and how much must be written right away to
keep the fs consistent (none for tmpfs).
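For what it's worth, you can watch this from the outside: tmpfs pages are
accounted as Shmem in /proc/meminfo and only move to swap under pressure,
while writes to a disk-backed fs pass through Dirty/Writeback. Something like

  grep -E 'Shmem|Dirty|Writeback|SwapFree' /proc/meminfo

before and after filling /tmp makes the difference visible (Shmem grows,
Dirty and Writeback stay quiet); the choice of fields to watch is just my
suggestion, not anything the kernel documents as a tmpfs-specific metric.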

On Wed, Jun 20, 2012 at 2:06 PM, Brian Wheeler <bdwheele at indiana.edu> wrote:
> So the default is that I can use 2G in /tmp regardless of how much swap is
> present if the system memory size is 4G?  So the only way to get more /tmp
> is to either mess with the max% or buy more ram?

On systems where tmpfs is provisioned for /tmp in fstab, you change a
setting to get more space (provide a size=fooG mount option).  This is
easier than adding more space to /tmp when it lives on the root or some
other file system.
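As a rough sketch (the 8G figure is arbitrary, not a recommendation), such
an fstab line would look something like:

  tmpfs   /tmp   tmpfs   size=8G,mode=1777   0 0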

I don't know how it will be set in systemd.  Regardless of what systemd
offers, you could still toss in an option to remount it with more space
after bootup.
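For example, assuming /tmp is already mounted as tmpfs, something like

  mount -o remount,size=8G /tmp

raises the limit on the fly without touching fstab; a later remount can
shrink it again as long as the data currently in /tmp still fits under the
new size.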

Buying more ram to increase /tmp is silly, of course.  The default
behavior is just a default; it doesn't imply some kind of cosmic
relationship between your tmpfs size and the amount of physical ram.

