Ideal Swap Partition Size
yinyang at eburg.com
Thu Jan 22 01:53:18 UTC 2009
Patrick O'Callaghan wrote:
> On Tue, 2009-01-20 at 20:06 -0800, Gordon Messmer wrote:
>> It's also important to bear in mind that under the standard
>> configuration, you must have at least as much "free" memory as the
>> largest application in your server, or else that application won't be
>> able to call external programs.
> If you don't have enough memory (RAM+Swap) for the largest app you need
> to run, then that app won't be able to run.
You don't understand. Of course you need enough total memory to run
your applications. What I was pointing out was that you need to have
enough "free" memory beyond that for a second copy of the largest
application that you run. Even if it won't be filled, it needs to be available.
>> Let's imagine that you have a server with 2GB of RAM, and just 512MB of
>> swap (maybe based on the idea that swapping will cause the system to
>> behave badly). Let's also imagine that you've tuned your SQL server to
>> keep as much data in memory as possible, so it's 1.5GB resident. Now,
>> if your SQL server has helper applications that it wants to call, it has
>> to fork() and then exec() to start them. When it does a fork(), the
>> system doesn't actually copy all of its pages for the new process, but
>> it does require that the memory be available (the extent to which that
>> is true depends on your overcommit settings).
> This was true in older systems (actually the system just allocated space
> for data and stack, since the code segment was shared) but Linux uses a
> copy-on-write policy so I don't think it's true any more.
The feature that you're referring to is called "overcommit". I had
hoped that by referring to it *by name*, I could avoid inaccurate
corrections, but I guess not.
Overcommit uses a heuristic algorithm to determine whether or not a
request to allocate more memory than is present (either by malloc or
fork) will be allowed. In many cases, fork() will fail if you do not
have enough memory for a second copy of the application, even though
Linux doesn't copy a complete set of pages during fork(). If you want
the system to work *reliably*, you must have enough free memory for a
second copy of your largest application. In most cases you should
achieve that by having at least as much swap as physical memory.
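The fork-then-exec pattern described above can be sketched as follows. This is a minimal illustration, not anyone's production code; the helper name `run_helper` and its arguments are made up for the example. The point is that the ENOMEM failure happens at fork(), before exec() ever runs:

```python
import errno
import os

def run_helper(path, args):
    """Spawn a helper the way a large server process would: fork, then exec.

    If overcommit accounting cannot promise space for a second copy of
    the (possibly huge) parent, fork() raises OSError with ENOMEM here,
    before the small helper program is ever exec'd.
    """
    try:
        pid = os.fork()
    except OSError as e:
        if e.errno == errno.ENOMEM:
            # The failure mode discussed above: the parent is so large
            # that the kernel refuses to promise a second copy.
            return None
        raise
    if pid == 0:
        # Child: replace ourselves with the helper program.
        try:
            os.execv(path, [path] + args)
        finally:
            os._exit(127)  # only reached if exec failed
    # Parent: wait for the helper and report its exit status.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

For example, `run_helper("/bin/true", [])` returns 0 on a system with memory to spare, while the same call from a 1.5GB-resident server with too little free RAM+swap returns None without the helper ever starting.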
>> However, since you don't
>> have 1.5GB of memory available, the fork() will probably fail, and the
>> SQL server process can't execute its helper script.
> I don't think so (see above).
You're wrong. I helped a friend track down exactly this issue just a
couple of months ago.
>> This situation would be much harder to diagnose if you had 1GB of swap
>> and your SQL server were something like 1.3 GB. In that case, it might
>> sometimes work and sometimes fail depending on how many other processes
>> were using memory.
> And on how much the SQL process is using for a specific run.
If it were specifically using 1.3 GB of memory in a total of 3GB, it
might work some of the time and fail some of the time depending on
whether the rest of the system were using 400MB of memory or 800MB.
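The arithmetic in that scenario can be written out explicitly. This sketch uses the strict-accounting view for clarity (the kernel's actual heuristic is more involved), with the example's numbers: 2GB RAM plus 1GB swap, a 1.3GB server, and either 400MB or 800MB used by everything else:

```python
def fork_would_succeed(total_mb, server_mb, other_mb):
    # Strict accounting view: fork() must be able to promise a full
    # second copy of the server's pages, even though copy-on-write
    # means they are rarely all duplicated in practice.
    free_mb = total_mb - server_mb - other_mb
    return free_mb >= server_mb

# 2GB RAM + 1GB swap = 3072MB total, 1.3GB (~1331MB) SQL server:
fork_would_succeed(3072, 1331, 400)  # rest of system light: enough free
fork_would_succeed(3072, 1331, 800)  # rest of system heavy: fork fails
```

With 400MB in use elsewhere there is about 1341MB free, just enough for a second 1331MB copy; with 800MB in use only 941MB is free, and the fork is refused. That intermittent behavior is exactly what makes the problem hard to diagnose.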
>> So, even if you expect to never *use* swap space, you should have at
>> least as much swap as physical RAM.
> There is no "reasonable" amount of swap that will stop you from running
> out of memory in *every* conceivable circumstance. You need to know the
> behaviour of your system to make an educated guess.
That's exactly what I'm trying to illustrate, because it is frequently
overlooked. In systems which run applications that consume a lot of
memory, you need to make sure that your total amount of physical memory
and swap will leave enough free for a second copy of your very large
application. If not, then fork() may fail, even though fork() doesn't
actually copy the pages.
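On Linux you can inspect the overcommit policy that governs this behavior directly through procfs; a quick sketch (the paths are the standard sysctl files under /proc/sys/vm):

```python
def overcommit_settings():
    """Read Linux's overcommit knobs from procfs.

    overcommit_memory mode 0: heuristic overcommit (the default
        discussed in this thread); fork() may still fail with ENOMEM
    overcommit_memory mode 1: always overcommit, never refuse
    overcommit_memory mode 2: strict accounting against
        swap + overcommit_ratio percent of RAM
    """
    with open("/proc/sys/vm/overcommit_memory") as f:
        mode = int(f.read())
    with open("/proc/sys/vm/overcommit_ratio") as f:
        ratio = int(f.read())
    return mode, ratio
```

Under mode 2 in particular, the "at least as much swap as RAM" rule of thumb above maps directly onto the commit limit the kernel enforces.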