On Mon, Jun 1, 2015 at 6:16 PM, Kevin Fenzi <kevin(a)scrye.com> wrote:
On Mon, 1 Jun 2015 17:15:42 +0100
Peter Robinson <pbrobinson(a)gmail.com> wrote:
> On Mon, Jun 1, 2015 at 4:59 PM, Miroslav Suchý <msuchy(a)redhat.com>
> > On 1 Jun 2015 at 16:58, Peter Robinson wrote:
> >> And a lot of swap disk can cause the IO issues that you mention and
> >> ultimately lead to performance problems.
> > ...
> >> What package set did you test this against?
> > See the 6th paragraph - it answers both questions.
> Blogs are great, but it's also TL;DR .... sometimes the key bits here
> are useful too. The kernel isn't a major issue compared to others; I'd
> like to see how well it works for things like java, eclipse, and
> libreoffice.
To add some info, current configurations:
Type / Memory / Swap:
arm     /  4 GB / 4 GB
buildvm / 10 GB / 2 GB
buildhw / 20 GB / 0 GB
The arm builders have 300 GB drives.
The buildvm's are on an iSCSI LUN and each has 150 GB allocated. The LUN
is 4.5 TB, and there's 650 GB or so free on it.
The buildhw's have two 300 GB drives in a RAID, so 300 GB usable.
So, on the arm and buildhw machines we could probably add a 50 GB swap,
at the cost of the base / partition dropping to 240-250 GB.
On the buildvm's we could only add 25 GB or so, unless we increased the
space (which we may be able to do down the road, but cannot now).
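For reference, adding swap like that is just the standard swap-file
dance; a minimal sketch (the path and size here are illustrative, not
the actual builder layout):

```shell
# Sketch: add a 50 GB swap file on a builder. Path /swapfile is a
# placeholder, not the real infra layout. Needs root.
fallocate -l 50G /swapfile    # preallocate (or dd from /dev/zero on
                              # filesystems without fallocate support)
chmod 600 /swapfile           # swap files must not be world-readable
mkswap /swapfile              # write the swap signature
swapon /swapfile              # enable it immediately
# Persist across reboots:
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```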
The thing I find odd about this is that the Linux kernel's caching
doesn't seem to be enough of a win. Shouldn't it cache the buildroot
and such in memory anyhow? Why is tmpfs performing so much better?
The page cache will cache pages as they are read and keep them around
in an LRU fashion, yes. But if the workload is primarily I/O bound,
it's a combination of both reads and writes. I suspect the majority of
the gain from tmpfs is writing out to memory instead of disk. The page
cache will help with frequently used files (like the toolchain
executables), but all of the compiler's output is brand new and being
written for the first time.
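For context, the tmpfs under discussion is mock's tmpfs plugin, which
mounts the buildroot on tmpfs. From memory (check the site-defaults.cfg
shipped with your mock version for the exact option names), enabling it
looks roughly like this config fragment:

```python
# /etc/mock/site-defaults.cfg -- sketch from memory; option names
# should be verified against mock's own site-defaults.cfg.
config_opts['plugin_conf']['tmpfs_enable'] = True
config_opts['plugin_conf']['tmpfs_opts'] = {
    'required_ram_mb': 6144,  # skip tmpfs on hosts with less RAM
    'max_fs_size': '5g',      # matches the 5GB figure in this thread
    'mode': '0755',
    'keep_mounted': False,    # tear down the mount after the build
}
```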
If we're talking about things happening in a VM, there are other
things in play as well. Like interaction from other guests on the
physical disks, whether KSM is running on the host to share common
pages between guests, etc. And if the guests are frequently being
created and torn down, that isn't going to help at all.
I will say that I'm surprised the kernel build was that much of a win,
particularly since it is bound to hit swap even with a 5GB tmpfs. I
suppose some of that makes sense, as a kernel build has massive output
that isn't read back much. The swap hit might be minimal in that case
because we aren't going to be paging a bunch of stuff back in from
disk for most of the build duration. Instrumenting it would still be
fun, to see what all is in play.
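On instrumenting it: even without anything fancy, sampling vmstat and
/proc/meminfo for the duration of a build would show how much swap and
writeback traffic is actually in play. A rough sketch (the log file
name and the build command are placeholders):

```shell
# Sample swap-in/out (si/so) and block I/O (bi/bo) every 5 seconds
# in the background while the build runs.
vmstat 5 > build-vmstat.log 2>&1 &
VMSTAT_PID=$!

# ... kick off the mock build here ...

# Point-in-time view of swap and writeback state:
grep -E 'SwapTotal|SwapFree|SwapCached|Dirty|Writeback' /proc/meminfo

kill "$VMSTAT_PID"
```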