I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp.
The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time, and df shows data is still being written).
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Thanks
Simon.
On Tue, 28 Sep 2010 15:07:25 +0100 Simon Andrews wrote:
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
I don't know how to fix it, but I don't think it has anything to do with NFS. I see the same with ext3 disks at both ends.
On 09/28/2010 07:07 AM, Simon Andrews wrote:
I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp.
The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time, and df shows data is still being written).
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Thanks
Simon.
I understand you mean your desktop NFS-mounts a directory exported to your machine by a server over a 10Mb/s link. I understand that you have permission to write to this NFS-mounted directory on your machine. I understand that you scp from a third machine (again at 10Mb/s? - you did not specify this part) to the NFS-mounted directory on YOUR machine.
Just to get the data from the third machine to yours (before it is even sent to the NFS server):
On a 10 megabit/s link, only 80% of which is payload data, transferring 2GB (I assume you mean binary GB): 0.80 * 10000000 = 8000000 bits/s payload data. 8000000 / 8 = 1000000 bytes/s payload data. 2147483648 / 1000000 = 2147.48 seconds to download 2GB. 2147.48 / 60 = 35.79 minutes to download 2GB. So, that's close enough to your 20 minutes download time. I am being a bit pessimistic here as to how much of the ether bandwidth is used for payload data. At 90%, the transfer time comes down to 31 minutes.
So, 20 minutes is absolutely miraculous!!! Be HAPPY!!
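JD's back-of-the-envelope arithmetic can be reproduced in a few lines. This is just a sketch of the same calculation under the same assumptions (80% payload efficiency, binary gigabytes); none of the figures are measurements.

```python
# Rough transfer-time estimate for a 2 GiB file over a 10 Mbit/s link.
# Assumption (from the post above): ~80% of the raw bandwidth is payload.

link_bits_per_s = 10_000_000          # 10 Mbit/s raw link speed
payload_fraction = 0.80               # assumed 20% protocol overhead
file_bytes = 2 * 1024**3              # 2 GiB, i.e. binary gigabytes

payload_bytes_per_s = link_bits_per_s * payload_fraction / 8   # 1,000,000 B/s
seconds = file_bytes / payload_bytes_per_s
print(f"{seconds:.0f} s = {seconds / 60:.1f} min")             # ~2147 s = ~35.8 min
```

At a 90% payload fraction the same calculation gives roughly 31.8 minutes, matching the figure quoted above.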
On Tue, Sep 28, 2010 at 1:39 PM, JD jd1008@gmail.com wrote:
At 90%, the transfer time comes down to 31 minutes.
So, 20 minutes is absolutely miraculous!!! Be HAPPY!!
Could be compression on the SCP link...
On 09/28/2010 11:58 AM, Kwan Lowe wrote:
On Tue, Sep 28, 2010 at 1:39 PM, JDjd1008@gmail.com wrote:
At 90%, the transfer time comes down to 31 minutes.
So, 20 minutes is absolutely miraculous!!! Be HAPPY!!
Could be compression on the SCP link..
Hmm... doubtful!
On Tue, Sep 28, 2010 at 10:07 PM, Simon Andrews simon.andrews@bbsrc.ac.ukwrote:
I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp.
The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time, and df shows data is still being written).
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Thanks
Simon.
-- users mailing list users@lists.fedoraproject.org To unsubscribe or change subscription options: https://admin.fedoraproject.org/mailman/listinfo/users Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
This article http://nfs.sourceforge.net/nfs-howto/ar01s05.html contains information on how to adjust the buffer size of NFS and optimise file transfers. Also, scp has a -C option to enable compression.
On 09/28/2010 06:26 PM, Samuel Kidman wrote:
On Tue, Sep 28, 2010 at 10:07 PM, Simon Andrews <simon.andrews@bbsrc.ac.uk mailto:simon.andrews@bbsrc.ac.uk> wrote:
I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp. The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed. It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time and df shows data is still being written). Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress? Thanks Simon.
Hey! Simon, Listen: buffering is done by the filesystem internals in collaboration with the block I/O layer. Once the filesystem commits the write to the block I/O layer, the write call returns to the calling program, and there is not an iota you can do about it! In the case of NFS, buffering is done by nfsiod. Buffering will be done at both the server AND the client. This is especially noticeable when the NFS client writes onto an NFS-mounted filesystem. nfsiod is the "helper" kernel thread. There will be as many of these as the admin configures the system for. Ditto with the main dispatcher, the nfsd process. The nfsiod is what buffers writes on the client side.
If you want scp to function more synchronously, you need to rewrite scp so that it calls fsync after each write! This will force the scp process to wait for the data to be flushed before it moves on.
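The fsync-after-each-write idea JD describes can be sketched as a standalone copy loop. This is an illustration, not an actual scp patch; the chunk size is an arbitrary choice.

```python
import os

def copy_sync(src, dst, chunk_size=64 * 1024):
    """Copy src to dst, forcing every chunk to storage before continuing."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            fout.flush()                 # push Python's userspace buffer to the kernel
            os.fsync(fout.fileno())      # block until the kernel flushes the data
```

The trade-off is exactly the one discussed in this thread: progress reporting becomes honest, but throughput drops because every chunk waits for a round trip to stable storage (or, over NFS, to the server).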
On 29/09/2010 03:19, JD wrote:
<simon.andrews@bbsrc.ac.ukmailto:simon.andrews@bbsrc.ac.uk> wrote: The problem is that after scp reports that it's 100% complete the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
Hey! Simon, Listen: buffering is done by the filesystem internals in collaboration with the block io layer. Once the filesystem commits the write to block io layer, the write call returns to the calling program, and there is not an iota you can do about it!
I think I get the general process by which the caching happens, and I'm not necessarily aiming to get rid of it, just set the cache size to something which is appropriate for the speed of the link I'm operating over.
In the case of nfs, buffering is done by the nfsiod. Buffering will be done at both the server AND the client. This is especially noticeable when the nfs client writes onto an nfs mounted filesystem. nfsiod is the "helper" kernel thread. There will be as many of these as the admin configures the system for.
OK, so I'm the admin on the client system. How do I configure nfsiod?
Ditto with the main dispatcher, the nfsd process. The nfsiod is what buffers writes on the client side.
Which means that that is the part which is causing my problems. The main problem is the disparity between the size of the cache on the client (somewhere around 2GB) and the speed of transfer onto the NFS mounted share (around 2MB/s). This means that every time the cache needs to be flushed there is a ~20min wait during which the client is completely blocked (so no chance to kill it or interact with it in any way). Give me a 5MB cache and I'm a happy man!
If you want the scp to function more synchronously, you need to rewrite scp, so that it calls fsync after each write! This will force scp process to wait for the data to be flushed before the write call returns.
I don't need to be that draconian, just configure a sensible cache size. I just can't see where I can set that.
Thanks for any advice
Simon.
On 09/29/2010 02:03 AM, Simon Andrews wrote:
On 29/09/2010 03:19, JD wrote:
<simon.andrews@bbsrc.ac.ukmailto:simon.andrews@bbsrc.ac.uk> wrote: The problem is that after scp reports that it's 100% complete the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
Hey! Simon, Listen: buffering is done by the filesystem internals in collaboration with the block io layer. Once the filesystem commits the write to block io layer, the write call returns to the calling program, and there is not an iota you can do about it!
I think I get the general process by which the caching happens, and I'm not necessarily aiming to get rid of it, just set the cache size to something which is appropriate for the speed of the link I'm operating over.
YOU CANNOT!!! IT IS DONE BY THE KERNEL!!! AND IT IS SET IN CONCRETE!! READ MY SUBSEQUENT REPLY AS WELL.
In the case of nfs, buffering is done by the nfsiod. Buffering will be done at both the server AND the client. This is especially noticeable when the nfs client writes onto an nfs mounted filesystem. nfsiod is the "helper" kernel thread. There will be as many of these as the admin configures the system for.
OK, so I'm the admin on the client system. How do I configure nfsiod?
You can only set the NUMBER of how many to run. But I have forgotten where it is done - perhaps in some .conf file in /etc, or a file in /etc/sysconfig/
Ditto with the main dispatcher, the nfsd process. The nfsiod is what buffers writes on the client side.
Which means that that is the part which is causing my problems. The main problem is the disparity between the size of the cache on the client (somewhere around 2GB) and the speed of transfer onto the NFS mounted share (around 2MB/s). This means that every time the cache needs to be flushed there is a ~20min wait during which the client is completely blocked (so no chance to kill it or interact with it in any way). Give me a 5MB cache and I'm a happy man
If you want the scp to function more synchronously, you need to rewrite scp, so that it calls fsync after each write! This will force scp process to wait for the data to be flushed before the write call returns.
I don't need to be that draconian, just configure a sensible cache size. I just can't see where I can set that.
Just give it up dude! I told you already - cache amount CAN NOT BE CHANGED!!
Thanks for any advice
Simon.
On 09/28/2010 06:26 PM, Samuel Kidman wrote:
On Tue, Sep 28, 2010 at 10:07 PM, Simon Andrews <simon.andrews@bbsrc.ac.uk mailto:simon.andrews@bbsrc.ac.uk> wrote:
I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp. The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed. It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time and df shows data is still being written). Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress? Thanks Simon.
Hey! Simon,
You could also re-write scp to open the file descriptor with O_SYNC to force all writes to be synchronous, which obviates the need to call fsync().
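The O_SYNC variant can be sketched the same way. Again, this is an illustrative copy loop under the assumptions above, not a real scp modification; paths and chunk size are placeholders.

```python
import os

def copy_osync(src, dst, chunk_size=64 * 1024):
    """Copy src to dst with the destination opened O_SYNC."""
    # O_SYNC makes each write(2) block until the data is flushed,
    # so no explicit fsync() call is needed afterwards.
    fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
    try:
        with open(src, "rb") as fin:
            while chunk := fin.read(chunk_size):
                os.write(fd, chunk)
    finally:
        os.close(fd)
```

Functionally this resembles mounting with the sync option (discussed later in the thread), but scoped to a single file descriptor instead of the whole mount.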
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
I have a Fedora 13 box on which I have a remote-mounted NFS share over a fairly slow (10Mb/s) link. I'm then transferring data onto this share from a different machine using scp.
The problem is that after scp reports that it's 100% complete, the program will hang for ~20 mins before it will move on to another file. At this point it can't be killed.
It looks like the NFS daemon is caching write data (around 2GB of it), which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time, and df shows data is still being written).
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Thanks
Simon.
Hi Simon,
You know, we are Linux users from Missouri, USA, here ... Can you tell us what system your nfs server is installed on, and how its exported nfs shares are configured? Then we can gain some valuable clues regarding the performance of all the factors involved.
JB
On 29/09/2010 09:55, JB wrote:
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
Hi Simon,
You know, we are Linux users from Missouri, USA, here ... Can you tell us what system your nfs server is installed on, and how its exported nfs shares are configured? Then we can gain some valuable clues regarding the performance of all the factors involved.
The server is an exported NFS share from a NetApp filer. Since the server isn't maintained by us I can't provide specifics about how it's configured.
The limiting factor seems to be the speed of the link between our client and the server. We're actually operating pretty close to the available wire speed so I don't think there's a problem with the setup in that respect.
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
...
Hi, thanks.
It looks like the nfs daemon is caching write data (around 2GB of it) which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time and df shows data is still being written).
Your nfs client is a Fedora 13. Can you tell me what nfsiod is and where it came from? Can you see it? $ ps aux | grep -i nfsiod
JB
On 29/09/2010 12:19, JB wrote:
Simon Andrews<simon.andrews<at> bbsrc.ac.uk> writes:
...
Hi, thanks.
It looks like the nfs daemon is caching write data (around 2GB of it) which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time and df shows data is still being written).
Your nfs client is a Fedora 13. Can you tell me what nfsiod is and where it came from? Can you see it? $ ps aux | grep -i nfsiod
I think it must be part of the kernel. There's no nfsiod binary anywhere on the system, but:
$ ps aux | grep -i nfsiod root 8974 0.0 0.0 0 0 ? S Sep27 0:15 [nfsiod]
Where the parent process is kthread.
Simon.
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
... I think it must be part of the kernel. There's no nfsiod binary anywhere on the system, but:
$ ps aux | grep -i nfsiod root 8974 0.0 0.0 0 0 ? S Sep27 0:15 [nfsiod]
Where the parent process is kthread.
Simon.
Thanks. Could you please give us (on Fedora 13) an unedited output of: # cat /etc/fstab # mount
JB
JB <jb.1234abcd <at> gmail.com> writes:
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
... I think it must be part of the kernel. There's no nfsiod binary anywhere on the system, but:
$ ps aux | grep -i nfsiod root 8974 0.0 0.0 0 0 ? S Sep27 0:15 [nfsiod]
Where the parent process is kthread.
Simon.
Thanks. Could you please give us (on Fedora 13) an unedited output of: # cat /etc/fstab # mount
JB
Well, actually these :-) :
$ man 8 mount ... It is possible that files /etc/mtab and /proc/mounts don’t match. The first file is based only on the mount command options, but the content of the second file also depends on the kernel and others settings (e.g. remote NFS server. In particular case the mount command may reports unreliable information about a NFS mount point and the /proc/mounts file usually contains more reliable information.)
# cat /etc/mtab # cat /proc/mounts
JB
On 29/09/2010 20:09, JB wrote:
Thanks. Could you please give us (on Fedora 13) an unedited output of:
# cat /etc/mtab # cat /proc/mounts
remote.server.name:/vol/ftp/ftp-1 /mnt/remote nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=149.155.100.6,mountvers=3,mountport=4046,mountproto=tcp,addr=XXX.XXX.XXX.XXX 0 0
remote.server.name:/vol/ftp/ftp-1 /mnt/remote nfs rw,tcp,addr=XXX.XXX.XXX.XXX 0 0
On Thu, Sep 30, 2010 at 1:33 AM, Simon Andrews simon.andrews@bbsrc.ac.uk wrote:
On 29/09/2010 20:09, JB wrote:
Thanks. Could you please give us (on Fedora 13) an unedited output of:
# cat /etc/mtab # cat /proc/mounts
remote.server.name:/vol/ftp/ftp-1 /mnt/remote nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=NNN.MMM.100.6,mountvers=3,mountport=4046,mountproto=tcp,addr=XXX.XXX.XXX.XXX 0 0
remote.server.name:/vol/ftp/ftp-1 /mnt/remote nfs rw,tcp,addr=XXX.XXX.XXX.XXX 0 0
Do check with traceroute how packets are routed. It looks like the server has multiple IP addresses. If you pick the wrong one, a straight-shot route can be lost in preference to some roundabout path out past the neighbor's barn.
Simon Andrews <simon.andrews <at> bbsrc.ac.uk> writes:
...
It looks like the nfs daemon is caching write data (around 2GB of it) which lets scp think it's finished when actually there's loads of data sitting in a write buffer. The hanging is presumably the time it takes to flush the buffer (there is a process called nfsiod which is active during this time and df shows data is still being written).
...
... $ ps aux | grep -i nfsiod root 8974 0.0 0.0 0 0 ? S Sep27 0:15 [nfsiod]
Where the parent process is kthread.
Simon.
This is from HP Tru64 UNIX. $ man 8 nfsiod http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51_HTML/MAN/MAN8/0043...
JB
JB <jb.1234abcd <at> gmail.com> writes:
...
There are other things to consider.
$ man 5 exports ... /etc/exports ... General Options ... async ... sync ...
$ man 5 nfs ... /etc/fstab ... Valid options for either the nfs or nfs4 file system type ... wsize=n The maximum number of bytes per network WRITE request that the NFS client can send when writing data to a file on an NFS server. ... ... ac / noac Selects whether the client may cache file attributes. ... Editor's Note: The noac option is a mixture of a generic option, sync, and an NFS-specific option actimeo=0. So it causes a significant performance penalty. But do not confuse file attributes caching with data caching. ... The sync mount option The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:
Memory pressure forces reclamation of system memory resources.
An application flushes file data explicitly with sync(2), msync(2), or fsync(3).
An application closes a file with close(2).
The file is locked/unlocked via fcntl(2).
In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file.
If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. This provides greater data cache coherence among clients, but at a significant performance cost.
Applications can use the O_SYNC open flag to force application writes to individual files to go to the server immediately without the use of the sync mount option. ... NFS version 4 caching features ... A file delegation ... ... Once a file has been delegated to a client, the client can cache that file’s data and metadata aggressively without contacting the server. ...
JB
JB <jb.1234abcd <at> gmail.com> writes:
...
How is nfsiod's number of read-ahead/write-behind worker threads determined and configured?
$ cat linux-2.6.34.i686/fs/nfs/internal.h /* * NFS internal definitions */ ... /* Maximum number of readahead requests * FIXME: this should really be a sysctl so that users may tune it to suit * their needs. People that do NFS over a slow network, might for * instance want to reduce it to something closer to 1 for improved * interactive response. */ #define NFS_MAX_READAHEAD (RPC_DEF_SLOT_TABLE - 1) ...
$ cat linux-2.6.34.i686/include/linux/sunrpc/xprt.h /* ... * Declarations for the RPC transport interface. ... */ ... #define RPC_DEF_SLOT_TABLE (16U) ...
So, the max # of workers is 16 - 1 = 15
See: linux-2.6.34.i686/Documentation/kernel-parameters.txt ... sunrpc.tcp_slot_table_entries= sunrpc.udp_slot_table_entries= [NFS,SUNRPC] Sets the upper limit on the number of simultaneous RPC calls that can be sent from the client to a server. Increasing these values may allow you to improve throughput, but will also increase the amount of memory reserved for use by the client. ...
$ lsmod |grep -i sunrpc sunrpc 163601 1 $ $ modinfo sunrpc filename: /lib/modules/2.6.34.7-56.fc13.i686/kernel/net/sunrpc/sunrpc.ko license: GPL srcversion: 274F8EC6B56054A06EDF2A4 depends: vermagic: 2.6.34.7-56.fc13.i686 SMP mod_unload 686 parm: min_resvport:portnr parm: max_resvport:portnr parm: tcp_slot_table_entries:slot_table_size parm: udp_slot_table_entries:slot_table_size
$ ls /sys/module/sunrpc/parameters/ max_resvport pool_mode udp_slot_table_entries min_resvport tcp_slot_table_entries $ cat /sys/module/sunrpc/parameters/udp_slot_table_entries 16 $ cat /sys/module/sunrpc/parameters/tcp_slot_table_entries 16
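If one wanted to pin these module parameters across reboots, a modprobe options file is one common place to do it. This is a hedged sketch: the file name is an arbitrary choice, and the values shown are simply the defaults reported above, not recommendations.

```
# /etc/modprobe.d/sunrpc.conf  (file name is an arbitrary choice)
options sunrpc tcp_slot_table_entries=16
options sunrpc udp_slot_table_entries=16
```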
Managing NFS and NIS, 2nd Edition. By Hal Stern, Mike Eisler and Ricardo Labiaga 18.5. NFS async thread tuning. ... If you are running eight NFS async threads on an NFS client, then the client will generate eight NFS write requests at once when it is performing a sequential write to a large file. The eight requests are handled by the NFS async threads. ... when a Solaris process issues a new write requests while all the NFS async threads are blocked waiting for a reply from the server, the write request is queued in the kernel and the requesting process returns successfully without blocking. The requesting process does not issue an RPC to the NFS server itself, only the NFS async threads do. When an NFS async thread RPC call completes, it proceeds to grab the next request from the queue and sends a new RPC to the server. It may be necessary to reduce the number of NFS requests if a server cannot keep pace with the incoming NFS write requests. ...
And for those who are not into NFS ...
Lesley Gore - It's My Party (1965) http://www.youtube.com/watch?v=XsYJyVEUaC4
JB
JB <jb.1234abcd <at> gmail.com> writes:
...
Firewall impact.
See 'man 5 nfs' - Mounting through a firewall.
Because you do not have access to your nfs server, the nfs client remains to be checked for any messages coming from the nfs server that may be blocked, or perhaps erroneously forwarded, by your Fedora 13 client.
Your nfs server may be sending unsolicited NEW messages (as opposed to ESTABLISHED, RELATED) that may be rejected by your (default) firewall settings.
You are also communicating with the source-of-the-big-file machine.
Iptables is your default netfilter/firewall on Fedora 13. # iptables -L -n -v
If needed you should add extra log rules (man iptables; see LOG target; see presumably /var/log/messages).
Test the firewall on your nfs client machine from an external machine.
JB
On 28/09/10 22:07, Simon Andrews wrote:
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
You could try adjusting /proc/sys/vm/dirty_ratio.
"Contains, as a percentage of total system memory, the number of pages at which a process which is generating disk writes will itself start writing out dirty data"
The default seems to be 20 on Fedora 13. For example, to set it to 5% (which is the lowest it will go, I believe):
echo 5 > /proc/sys/vm/dirty_ratio
This should effectively lower the size of the buffer, but note this will affect all file systems, not just NFS. However you may find that acceptable.
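To get a rough sense of what a given dirty_ratio means in bytes on a particular box, one can read MemTotal from /proc/meminfo. This is an illustrative sketch only: the kernel's actual dirty accounting is against dirtyable memory, which is more subtle than MemTotal times the ratio.

```python
def dirty_limit_bytes(dirty_ratio_percent, meminfo_path="/proc/meminfo"):
    """Approximate the writeback cache ceiling implied by vm.dirty_ratio."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                total_kib = int(line.split()[1])   # value is reported in kB
                return total_kib * 1024 * dirty_ratio_percent // 100
    raise RuntimeError("MemTotal not found in " + meminfo_path)
```

On a machine with 8 GiB of RAM, for instance, the default ratio of 20 would allow on the order of 1.6 GiB of dirty data, which is in the same ballpark as the ~2GB Simon reports; dropping the ratio to 5 cuts that to about 400 MiB.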
On 29/09/2010 14:12, Ian Chapman wrote:
On 28/09/10 22:07, Simon Andrews wrote:
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
You could try adjusting /proc/sys/vm/dirty_ratio.
"Contains, as a percentage of total system memory, the number of pages at which a process which is generating disk writes will itself start writing out dirty data"
The default seems to be 20 on Fedora 13. For example to set it to 5% (which is the lowest it will go I believe):
That sounds like it might be an answer. It's a shame there's no way to specify this per-process, but this machine does have quite a bit of RAM in it, so having this specified as a percentage of RAM might make it larger than we'd want.
I'll have a play and see what effect this has.
Cheers
Simon.
On 29/09/10 22:51, Simon Andrews wrote:
That sounds like it might be an answer. It's a shame there's no way to specify this per-process, but this machine does have quite a bit of RAM in it, so having this specified as a percentage of RAM might make it larger than we'd want.
I'll have a play and see what effect this has.
No problem, there's also a good page here which explains a bit more how all this works, as well as some explanation on other tunables that might suit you better.
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
On 29/09/2010 16:27, Ian Chapman wrote:
On 29/09/10 22:51, Simon Andrews wrote:
That sounds like it might be an answer. It's a shame there's no way to specify this per-process, but this machine does have quite a bit of RAM in it, so having this specified as a percentage of RAM might make it larger than we'd want.
I'll have a play and see what effect this has.
No problem, there's also a good page here which explains a bit more how all this works, as well as some explanation on other tunables that might suit you better.
Thanks, that's really helpful.
On Wed, Sep 29, 2010 at 8:27 AM, Ian Chapman packages@amiga-hardware.com wrote:
On 29/09/10 22:51, Simon Andrews wrote:
That sounds like it might be an answer. It's a shame there's no way to specify this per-process, but this machine does have quite a bit of RAM in it, so having this specified as a percentage of RAM might make it larger than we'd want.
I'll have a play and see what effect this has.
No problem, there's also a good page here which explains a bit more how all this works, as well as some explanation on other tunables that might suit you better.
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
--
If you have a lot of RAM you can boot with a kernel flag that tells the kernel to use less memory (e.g. the mem= boot parameter).
OR
You can write a program that gobbles a lot of memory (compare malloc(); calloc()). Running a memory hog can trigger IO of dirty pages out to disk, or push memory use past the high/low water marks used to trigger IO. This works because at the OS level all free memory can be used by the OS as a buffer of one type or another.
The nice thing about a little-piggie program is that you can run it from user space and adjust how much memory it grabs and when, so you can tune things to your liking. When you kill it you quickly get all the memory back.
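A minimal sketch of such a little-piggie program follows. The 1 MiB figure is just a demonstration size; a real run would scale it up until the desired memory pressure appears.

```python
# Sketch of the "little piggie" idea: grab memory from user space to
# pressure the kernel into flushing dirty pages sooner.

def gobble(mib):
    # bytearray zero-fills the buffer, so the pages are actually
    # committed, unlike a lazy allocation that is never written to.
    return bytearray(mib * 1024 * 1024)

hog = gobble(1)  # hold 1 MiB as a demonstration; scale up to taste
print(f"holding {len(hog) // (1024 * 1024)} MiB")
```

Killing the process (or letting it exit) returns the memory to the system immediately, which is the appeal of doing this from user space rather than with a boot-time limit.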
Simon Andrews simon.andrews@bbsrc.ac.uk writes:
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Why not cut to the chase and run scp straight to the remote server? TCP will do a wonderful job of filling your pipe, but not too much.
-wolfgang
On Wed, 29 Sep 2010 17:21:25 -0700 Wolfgang S. Rupprecht wrote:
Why not cut to the chase and run scp to the remote server. TCP will do a wonderful job of filling your pipe, but not too much.
I see long delays with scp between two different systems talking to ext3 filesystems on both ends. The scp command will say 100%, but not actually exit, and the network monitor applet will show lots of network traffic for quite a while. When the network traffic dies down, the scp command finally exits.
I always figured it must be caching lots of packets in the 8 gig of memory I have.
On 30/09/2010 01:21, Wolfgang S. Rupprecht wrote:
Simon Andrewssimon.andrews@bbsrc.ac.uk writes:
Does anyone know how to either make this buffer smaller, or get rid of it altogether, so that scp can accurately report on its progress?
Why not cut to the chase and run scp to the remote server. TCP will do a wonderful job of filling your pipe, but not too much.
I'd love to, but I don't control the remote end of the connection and the only access I have is via an NFS mount.