I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
On 8/18/10, Sam Varshavchik mrsam@courier-mta.com wrote:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
I'm not sure what you mean by "need", but Fedora will run without a swap partition.
Andras
On Wed, Aug 18, 2010 at 1:00 PM, Sam Varshavchik mrsam@courier-mta.com wrote:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
On servers I normally use strict overcommit mode, so that memory claims which have already been granted can never fail when they are used. Swap just raises the CommitLimit, so that I am able to commit more memory (see Documentation/vm/overcommit-accounting and the meminfo section of Documentation/filesystems/proc.txt in the kernel docs).
When using heuristic mode you certainly don't need swap, but you run a higher risk of the out-of-memory handler kicking in earlier. In a lot of situations the kernel can page out rarely-used memory to disk to make more room for I/O buffering (see the swappiness tunable), thus improving performance.
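If you want to see where you stand, a quick sketch like this prints the two numbers that matter for commit accounting:

    /* Sketch: print the kernel's commit accounting from /proc/meminfo.
     * Under strict overcommit (vm.overcommit_memory=2) an allocation
     * fails once Committed_AS would exceed CommitLimit; adding swap
     * raises CommitLimit. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("/proc/meminfo"); return 1; }
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "CommitLimit:", 12) == 0 ||
                strncmp(line, "Committed_AS:", 13) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }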
Sam Varshavchik <mrsam <at> courier-mta.com> writes:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
It is a matter of having it ready when you need it. Even if you think you do not need it, a software installer may check for it.
Example: 2.1.2 Server Component Swap Space Requirements

Table 2: Swap Space Requirements for Oracle Database XE Server 10g

    Your Computer's RAM              Swap Space Needed
    Between 0 and 256 megabytes      3 times the size of RAM
    Between 256 and 512 megabytes    2 times the size of RAM
    512 megabytes and greater        1024 megabytes
JB
On Wed, 2010-08-18 at 11:37 +0000, JB wrote:
Sam Varshavchik <mrsam <at> courier-mta.com> writes:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
It is a matter of having it ready when you need it. Even if you think you do not need it, a software installer may check for it.
Example: 2.1.2 Server Component Swap Space Requirements

Table 2: Swap Space Requirements for Oracle Database XE Server 10g

    Your Computer's RAM              Swap Space Needed
    Between 0 and 256 megabytes      3 times the size of RAM
    Between 256 and 512 megabytes    2 times the size of RAM
    512 megabytes and greater        1024 megabytes
"swap space" != "swap partition". If it matters you can always create a swap file when needed.
poc
On 08/18/2010 04:00 AM, Sam Varshavchik wrote:
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
Depends on the purpose of the machine. Desktops often don't need it. If your system is dedicated to running a single very large application, it probably does. Although processes under Linux get a COW image of their parent's memory at the time they fork(), the default configuration requires that there is at least enough memory for a copy.
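A rough illustration of that last point (the size is arbitrary, and whether the fork() actually fails depends on the vm.overcommit_memory setting and how much swap backs the commit limit):

    /* Sketch: under strict overcommit (vm.overcommit_memory=2), fork()
     * must be able to commit a full copy of the parent's writable
     * pages even though they are only shared copy-on-write. With a
     * large allocation and little swap the fork() itself can fail. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        size_t big = (size_t)4 * 1024 * 1024 * 1024;  /* 4 GB, adjust */
        char *p = malloc(big);
        if (!p) { perror("malloc"); return 1; }
        memset(p, 1, big);        /* touch the pages so they are dirty */

        pid_t pid = fork();       /* child gets a COW image of the 4 GB */
        if (pid < 0) {
            perror("fork");       /* ENOMEM if the commit limit is hit */
            free(p);
            return 1;
        }
        if (pid == 0)
            _exit(0);             /* child exits; nothing is ever copied */
        wait(NULL);
        free(p);
        return 0;
    }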
Sam Varshavchik wrote:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
If by "really needed" you mean will the system run without, then no. if you mean can bad things happen other than running out of memory, maybe. I never found a good reason not to have swap, on a server at least a few GB larger than memory, on a desktop 2x the size of the largest single process you expect to run.
If you have that much memory I would hope you have enough disk that this isn't a means of saving disk space. :-(
On 08/18/2010 05:00 AM, Sam Varshavchik wrote:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
Many server types can run happily without swap.
Virtual machine servers and/or grid servers run specialized applications and are often installed and managed in groups of hundreds of servers. Running them diskless allows much simpler administration of those systems.
You just need to know whether the applications you run really need swap. Nowadays, with very large memory systems, they don't as a general rule.
Just as specialized servers used to get by without swap, nowadays there are specialized servers that must have swap. I would never run a mission-critical database server without swap, unless it was running on a grid. See what I mean?
If you have over 4GB RAM on a desktop, you will probably never touch your swap partition. However, if that desktop does video editing or image rasterizing, then you might want some swap on it just to be sure it does not crash the app in the middle of a multi-day run.
Good Luck!
On 08/18/2010 07:00 PM, Sam Varshavchik wrote:
I'll probably have a new server with 16 gigs of RAM on the way, soon.
With this amount of RAM being sufficient, do I really need a swap partition set up? I do understand that a swap partition is needed for hibernation, but this server does not need to hibernate.
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
Code pages, of course, can just be conveniently "forgotten" and re-read back in on demand. Dirty data must be written to swap. Removing swap removes this possibility from the kernel. This might not be a problem for you; it depends on your workloads and their memory footprint requirements.
-Greg
So the moral of the story, as I understand it (forgive me if I misunderstood), is: a regular day-to-day desktop OS doesn't need swap space (especially if it has 4 GB of RAM or more), while a mission-critical server running a large database or something like that must have swap space even if it has 32 GB of RAM.
Is that right?
Regards
On Thu, 19 Aug 2010, Gregory Hosler wrote:
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
I didn't realize that memory could get fragged. I'd thought that one reason for virtual memory was allowing pages to be renumbered at will, the kernel's will, of course.
On Thu, 2010-08-19 at 09:22 -0500, Michael Hennebry wrote:
On Thu, 19 Aug 2010, Gregory Hosler wrote:
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
I didn't realize that memory could get fragged. I'd thought that one reason for virtual memory was allowing pages to be renumbered at will, the kernel's will, of course.
I thought so too, but see: http://lwn.net/Articles/211505/
poc
On Thu, 19 Aug 2010, Patrick O'Callaghan wrote:
On Thu, 2010-08-19 at 09:22 -0500, Michael Hennebry wrote:
On Thu, 19 Aug 2010, Gregory Hosler wrote:
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
I didn't realize that memory could get fragged. I'd thought that one reason for virtual memory was allowing pages to be renumbered at will, the kernel's will, of course.
I thought so too, but see: http://lwn.net/Articles/211505/
Posted November 28, 2006 by corbet:
If a large ("high order") block of memory is not available when needed, something will fail and yet another user will start to consider switching to BSD.
BSD does it differently?
On 08/19/2010 03:22 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, Gregory Hosler wrote:
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
I didn't realize that memory could get fragged. I'd thought that one reason for virtual memory was allowing pages to be renumbered at will, the kernel's will, of course.
Virtual memory allows us to present a simple, linear address space to running processes even though the physical memory backing it may be highly non-contiguous (or not even present).
Unfortunately someone still has to manage the pool of available physical memory pages ("page frames") and it is possible for this space to become fragmented over time as repeated allocation/deallocation cycles create "holes" in the available memory.
The kernel implements the buddy algorithm in the page allocator to try to minimize external fragmentation but some workloads can still lead to a lot of pages on the low-order free lists and no larger blocks to satisfy bigger requests.
Forcing everything out to swap and then pulling it back in is one crude way of forcing a level of de-fragmentation. There was some discussion on linux-mm.org of implementing novel de-fragmentation and fragmentation avoidance techniques a while back but I'm not sure where those initiatives are at the moment.
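If you want to look at the state of those free lists on a live system, here's a rough sketch; each column of /proc/buddyinfo is the count of free blocks of order N, i.e. runs of 2^N physically contiguous pages:

    /* Sketch: report, per memory zone, the largest order that still
     * has free blocks, by parsing /proc/buddyinfo. A small number
     * means physical memory is fragmented into small pieces. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[512];
        FILE *f = fopen("/proc/buddyinfo", "r");
        if (!f) { perror("/proc/buddyinfo"); return 1; }

        while (fgets(line, sizeof line, f)) {
            /* Lines look like: "Node 0, zone   Normal   120 45 ..." */
            char zone[64];
            int used;
            char *p = strstr(line, "zone");
            if (!p || sscanf(p, "zone %63s%n", zone, &used) < 1)
                continue;
            p += used;

            long count;
            int order = 0, largest = -1;
            while (sscanf(p, "%ld%n", &count, &used) == 1) {
                if (count > 0)
                    largest = order;
                p += used;
                order++;
            }
            printf("zone %-8s largest free order: %d\n", zone, largest);
        }
        fclose(f);
        return 0;
    }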
Regards, Bryn.
On Thu, 2010-08-19 at 09:22 -0500, Michael Hennebry wrote:
I didn't realize that memory could get fragged.
An old problem, and one reason why some other OSs *needed* occasional reboots after a while. Even quitting all running applications, back down to just the basic desktop, and deliberately issuing flush commands wasn't enough to free up all the RAM. Particularly when something needed a big contiguous block, because system things that started later than boot time might be sitting in the middle of the RAM.
On 08/19/2010 10:46 AM, Tim wrote:
On Thu, 2010-08-19 at 09:22 -0500, Michael Hennebry wrote:
I didn't realize that memory could get fragged.
An old problem, and one reason why some other OSs *needed* occasional reboots after a while. Even quitting all running applications, back down to just the basic desktop, and deliberately issuing flush commands wasn't enough to free up all the RAM. Particularly when something needed a big contiguous block, because system things that started later than boot time might be sitting in the middle of the RAM.
The FS page cache also tries to cache as much of the filesystem as it can in RAM. This is not a real problem, though, because when RAM is needed, LRU pages are reclaimed, and if they are dirty they are flushed first before being re-allocated to a process. The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
On 08/19/2010 02:15 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
It is done in a driver on the process's behalf when doing direct physical I/O. Typically, such blocks of physically contiguous memory are set aside during boot. I have also seen special embedded Linux drivers that provide an ioctl to let the process get a set of physically contiguous pages and map the space to user virtual space. This is done for performance reasons, to reduce copying between user space and kernel space when large amounts of data need to be moved. This is not a new idea; it has been around for many years. I first saw it in Linux back in 1998/1999.
On Thu, 19 Aug 2010, JD wrote:
On 08/19/2010 02:15 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
It is done in a driver on the process's behalf when doing direct physical I/O. Typically, such blocks of physically contiguous memory are set aside during boot. I have also seen special embedded Linux drivers that provide an ioctl to let the process get a set of physically contiguous pages and map the space to user virtual space. This is done for performance reasons, to reduce copying between user space and kernel space when large amounts of data need to be moved. This is not a new idea; it has been around for many years. I first saw it in Linux back in 1998/1999.
Perhaps I misunderstood. Do both of the following necessarily require physically contiguous memory?

    char fred[69000];
    char *greg = malloc(96000);

Would they sometimes require physically contiguous memory?
On 08/20/2010 06:44 AM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
On 08/19/2010 02:15 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
It is done in a driver on the process's behalf when doing direct physical I/O. Typically, such blocks of physically contiguous memory are set aside during boot. I have also seen special embedded Linux drivers that provide an ioctl to let the process get a set of physically contiguous pages and map the space to user virtual space. This is done for performance reasons, to reduce copying between user space and kernel space when large amounts of data need to be moved. This is not a new idea; it has been around for many years. I first saw it in Linux back in 1998/1999.
Perhaps I misunderstood. Do both of the following necessarily require physically contiguous memory?

    char fred[69000];
    char *greg = malloc(96000);

Would they sometimes require physically contiguous memory?
It depends on what you want to achieve. If the target device you will write that buffer to can handle a contiguous physical space of, say ... a few pages, then you would want to ask the special driver of that device, via an ioctl, to give you those pages, and map them to user virtual space - i.e. you would not allocate them from the heap.
On Fri, 20 Aug 2010, JD wrote:
On 08/20/2010 06:44 AM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
On 08/19/2010 02:15 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
It is done in a driver on the process's behalf when doing direct physical I/O. Typically, such blocks of physically contiguous memory are set aside during boot. I have also seen special embedded Linux drivers that provide an ioctl to let the process get a set of physically contiguous pages and map the space to user virtual space. This is done for performance reasons, to reduce copying between user space and kernel space when large amounts of data need to be moved. This is not a new idea; it has been around for many years. I first saw it in Linux back in 1998/1999.
Perhaps I misunderstood. Do both of the following necessarily require physically contiguous memory?

    char fred[69000];
    char *greg = malloc(96000);

Would they sometimes require physically contiguous memory?
It depends on what you want to achieve. If the target device you will write that buffer to can handle a contiguous physical space of, say ... a few pages, then you would want to ask the special driver of that device, via an ioctl, to give you those pages, and map them to user virtual space - i.e. you would not allocate them from the heap.
It makes sense that if a process insists on physically contiguous memory and can't get it, the process would die, but the above code does not tell the compiler what is to be achieved.
In the following, would fred or greg necessarily refer to physically contiguous memory?
    #include <stdlib.h>

    extern void hank(char *);

    int main(void)
    {
        char fred[69000];
        char *greg = malloc(96000);
        hank(fred);
        hank(greg);
        return 0;
    }
On Fri, 20 Aug 2010 13:22:33 -0500 (CDT) Michael Hennebry hennebry@web.cs.ndsu.nodak.edu wrote:
It makes sense that if a process insists on physically contiguous memory and can't get it, the process would die, but the above code does not tell the compiler what is to be achieved.
In the following, would fred or greg necessarily refer to physically contiguous memory?
    #include <stdlib.h>

    extern void hank(char *);

    int main(void)
    {
        char fred[69000];
        char *greg = malloc(96000);
        hank(fred);
        hank(greg);
        return 0;
    }
If I remember my Kernighan and Ritchie correctly, the answer is yes, since C relies on pointer arithmetic to refer to the elements of an array. The "fred" and "greg" variables are pointers to the beginning of the corresponding memory areas, and referencing fred[i] goes to the start of the array at fred and then moves forward i elements to reach the wanted element.
On Fri, 20 Aug 2010, Jussi Lehtola wrote:
On Fri, 20 Aug 2010 13:22:33 -0500 (CDT) Michael Hennebry hennebry@web.cs.ndsu.nodak.edu wrote:
It makes sense that if a process insists on physically contiguous memory and can't get it, the process would die, but the above code does not tell the compiler what is to be achieved.
In the following, would fred or greg necessarily refer to physically contiguous memory?
    #include <stdlib.h>

    extern void hank(char *);

    int main(void)
    {
        char fred[69000];
        char *greg = malloc(96000);
        hank(fred);
        hank(greg);
        return 0;
    }
If I remember my Kernighan and Ritchie correctly, the answer is yes, since C relies on pointer arithmetic to refer to the elements of an array. The "fred" and "greg" variables are pointers to the beginning of the corresponding memory areas, and referencing fred[i] goes to the start of the array at fred and then moves forward i elements to reach the wanted element.
That is contiguous in terms of virtual memory. Adjacent virtual addresses do not have to have adjacent physical addresses.
Michael Hennebry <hennebry <at> web.cs.ndsu.nodak.edu> writes:
If I remember my Kernighan and Ritchie correctly, the answer is yes, since C relies on pointer arithmetic to refer to the elements of an array. The "fred" and "greg" variables are pointers to the beginning of the corresponding memory areas, and referencing fred[i] goes to the start of the array at fred and then moves forward i elements to reach the wanted element.
That is contiguous in terms of virtual memory. Adjacent virtual addresses do not have to have adjacent physical addresses.
Hi,

http://www.patentstorm.us/patents/6986016/description.html

"... For application (that is, user-mode) programming, standard APIs provided for memory allocation are malloc() and realloc(). In both cases, the contiguity of the underlying physical memory is not guaranteed. Consequently, these calls are not suitable for use in cases in which contiguous physical memory is required. ..."

JB
On 08/20/2010 12:00 PM, Jussi Lehtola wrote:
On Fri, 20 Aug 2010 13:22:33 -0500 (CDT) Michael Hennebry hennebry@web.cs.ndsu.nodak.edu wrote:
It makes sense that if a process insists on physically contiguous memory and can't get it, the process would die, but the above code does not tell the compiler what is to be achieved.
In the following, would fred or greg necessarily refer to physically contiguous memory?
    #include <stdlib.h>

    extern void hank(char *);

    int main(void)
    {
        char fred[69000];
        char *greg = malloc(96000);
        hank(fred);
        hank(greg);
        return 0;
    }
If I remember my Kernighan and Ritchie correctly, the answer is yes, since C relies on pointer arithmetic to refer to the elements of an array. The "fred" and "greg" variables are pointers to the beginning of the corresponding memory areas, and referencing fred[i] goes to the start of the array at fred and then moves forward i elements to reach the wanted element.
No. User virtual space (say, a 128-megabyte char array) would NOT have a correspondingly contiguous physical space of 128 MB. Each virtual page corresponds to a particular physical page, but those corresponding physical pages are not contiguous with each other.
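You can even watch this from user space. A rough sketch (depending on kernel version you may need root to see real frame numbers):

    /* Sketch: print the physical page frame number (PFN) backing each
     * of the first few pages of a malloc'd buffer, via
     * /proc/self/pagemap. Adjacent virtual pages will generally show
     * non-adjacent PFNs. Bits 0-54 of each 64-bit entry hold the PFN. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        size_t len = 16 * (size_t)psize;
        char *buf = malloc(len);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, 1, len);                  /* fault the pages in */

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("pagemap"); return 1; }

        for (int i = 0; i < 16; i++) {
            uint64_t entry;
            uintptr_t vpage = (uintptr_t)(buf + i * psize) / psize;
            if (pread(fd, &entry, sizeof entry,
                      (off_t)(vpage * sizeof entry)) != (ssize_t)sizeof entry)
                break;
            printf("virtual page %2d -> PFN %llu\n", i,
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        }
        close(fd);
        free(buf);
        return 0;
    }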
On 08/20/2010 11:22 AM, Michael Hennebry wrote:
On Fri, 20 Aug 2010, JD wrote:
On 08/20/2010 06:44 AM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
On 08/19/2010 02:15 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
It is done in a driver on the process's behalf when doing direct physical I/O. Typically, such blocks of physically contiguous memory are set aside during boot. I have also seen special embedded Linux drivers that provide an ioctl to let the process get a set of physically contiguous pages and map the space to user virtual space. This is done for performance reasons, to reduce copying between user space and kernel space when large amounts of data need to be moved. This is not a new idea; it has been around for many years. I first saw it in Linux back in 1998/1999.
Perhaps I misunderstood. Do both of the following necessarily require physically contiguous memory?

    char fred[69000];
    char *greg = malloc(96000);

Would they sometimes require physically contiguous memory?
It depends on what you want to achieve. If the target device you will write that buffer to can handle a contiguous physical space of, say ... a few pages, then you would want to ask the special driver of that device, via an ioctl, to give you those pages, and map them to user virtual space - i.e. you would not allocate them from the heap.
It makes sense that if a process insists on physically contiguous memory and can't get it, the process would die,
Not necessarily. It is not clear what you mean by "insists". For example, the process could be put to sleep until the contiguous memory became available. But this is not achieved via malloc or the like.
but the above code does not tell the compiler what is to be achieved.
And it cannot. It is not the compiler's job.
In the following, would fred or greg necessarily refer to physically contiguous memory?
No! Not at all. In older versions of Unix there used to be a facility for the user to ask for physically contiguous memory. That has disappeared from most versions of Unix and its clones. There are drivers that, upon bootup, acquire some number of physically contiguous pages through the kernel's internal interfaces. The driver then manages this pool of pages. The driver exports to the user process (via a header file) an ioctl through which the process can request a number of pages; these are then mapped into user virtual space so the user can write to them, and when the write(2) call is made to write those pages out, there is NO copy from user space to kernel space. This was done in some embedded Linux drivers for certain devices to achieve higher performance.
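Just to make the shape of that pattern concrete, here is a purely hypothetical sketch; the device node, the ioctl number and the request structure are all made up for illustration:

    /* Hypothetical sketch of the pattern described above: a driver
     * hands out physically contiguous pages via a made-up ioctl and
     * the process maps them into its address space with mmap(2).
     * /dev/mydma, MYDEV_ALLOC_CONTIG and struct contig_req are
     * inventions, not a real interface. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    #define MYDEV_ALLOC_CONTIG 0x4d01    /* hypothetical ioctl request */

    struct contig_req {                  /* hypothetical request struct */
        unsigned long npages;            /* in: pages wanted */
        unsigned long offset;            /* out: mmap offset from driver */
    };

    int main(void)
    {
        int fd = open("/dev/mydma", O_RDWR);   /* hypothetical device */
        if (fd < 0) { perror("open"); return 1; }

        struct contig_req req = { 16, 0 };
        if (ioctl(fd, MYDEV_ALLOC_CONTIG, &req) != 0) {
            perror("ioctl");
            return 1;
        }

        /* Map the driver's contiguous pages into user virtual space;
         * a later write(2) on the device then needs no extra copy. */
        size_t len = req.npages * 4096;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, (off_t)req.offset);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... fill buf, then issue the device I/O ... */

        munmap(buf, len);
        close(fd);
        return 0;
    }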
So, let's not beat this thing any further. You might want to do your own googling.
    #include <stdlib.h>

    extern void hank(char *);

    int main(void)
    {
        char fred[69000];
        char *greg = malloc(96000);
        hank(fred);
        hank(greg);
        return 0;
    }
On Thu, 2010-08-19 at 16:15 -0500, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
I have an equally interesting question: why do you think malloc allocates memory blocks in the swap area? Do you have a reference for such a statement?
On 08/20/2010 03:36 PM, Aaron Konstam wrote:
On Thu, 2010-08-19 at 16:15 -0500, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
I have an equally interesting question: why do you think malloc allocates memory blocks in the swap area?
JD didn't say that.
He said that when a process needs a large physically contiguous chunk of memory, you may need to use swap space to move other processes out of the way.
Andrew.
On 08/20/2010 07:36 AM, Aaron Konstam wrote:
On Thu, 2010-08-19 at 16:15 -0500, Michael Hennebry wrote:
On Thu, 19 Aug 2010, JD wrote:
The problem comes, as Michael explains, when a process needs a large "physically contiguous" chunk of memory: it might not be available. That said, requests for physically contiguous memory are usually only needed when mapping a very large number of DMA pages for direct physical I/O. Otherwise, a process itself does not need physically contiguous pages. Only the virtual space allocated to that "malloc" or large buffer declaration in a program is contiguous.
Why would malloc or a large buffer declaration require physically contiguous memory?
I have an equally interesting question: why do you think malloc allocates memory blocks in the swap area? Do you have a reference for such a statement?
Who said what you claim was said? An OP already posted that you CAN run Linux without swap. Normally, when you DO have swap space, userland data areas (both static and dynamic) will be paged out to swap if and when you run out of memory and some other process needs it.
On 08/19/2010 10:22 PM, Michael Hennebry wrote:
On Thu, 19 Aug 2010, Gregory Hosler wrote:
If the memory gets fragged and the kernel wants to defrag, e.g. for a memory request from an application, then in order to move any "dirty" data pages (those pages that have been written to), the kernel *requires* there to be swap. Otherwise there is no place to write the dirty pages out in order to read them in elsewhere.
I didn't realize that memory could get fragged. I'd thought that one reason for virtual memory was allowing pages to be renumbered at will, the kernel's will, of course.
Virtual memory, yes. Physical memory, no.
:-)
-G