On 02/16/2017 04:10 AM, Gerd Hoffmann wrote:
> What was the reason to go 64k pages in the first place?
1). Some (server) implementations have higher performance under 64K.
2). VA sizes greater than 48-bit require a 64K translation granule. I
can mention that one publicly now. There are similar reasons in the
pipeline for why 64K is going to be needed on big iron ARM systems.
3). We thought 64K was going to matter, and we wanted to make sure we
could support it (4K is "easy", those other guys do that already).
> Sure, with larger pages memory management overhead goes down. But on
> the other hand the memory footprint goes up, and frankly I'm a bit
> surprised how much it goes up.
There's much room for optimization. I want to try to avoid throwing the
baby out with the bathwater as we push ARM and others to clean this up.
For another example, we see kernel structure ballooning caused by the
lack of support for sparse CPU masks and the like - all things that
ARM should be addressing upstream. Our use of 64K helps keep the
pressure on them to clean this up. RHEL (and CentOS) will use 64K no
matter what, but there could be a (short term) case for Fedora having
a cycle or two with a smaller size - I would prefer to avoid that.
> So I'm wondering whether 64k pages are a net win even
> on enterprise machines. Did people benchmark this?
It is a net win on Enterprise, required for some of the insanely large
machines being designed now. I can give you one example - Cray have
recently announced that they will be shipping a very large ARM system
in the next couple of months. There are many other such machines coming.
Benchmarking was done based upon models by the RH perf team about
3-4 years ago, yes. Again, there's a lot of cleanup to do, but that's
not in and of itself a reason to throw out 64K, especially as it's the
only path to >48-bit VA support in the coming iteration.
Computer Architect | Sent from my Fedora powered laptop