logrotate(8) and copytruncate as default

P J P pj.pandit at yahoo.co.in
Fri Jun 28 12:04:54 UTC 2013


   Hello Jan,

----- Original Message -----
> From: Jan Kaluza <jkaluza at redhat.com>
> Subject: Re: logrotate(8) and copytruncate as default
> 
> Right now, without locking, logrotate would lose more messages if the
> logs are big, because copying takes more time. It would be interesting
> to mention the file size in your tests too.
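
A rough way to see how long the copy itself takes at different sizes
(a hypothetical sketch, not part of the experiment below; file names
are made up and it assumes time(1) is available):

===
#!/bin/sh
# Time a plain cp at increasing sizes; the copy duration is the window
# during which an unlocked writer can keep appending to the old file.
for mb in 2 8 32 128; do
    dd if=/dev/zero of=big.log bs=1M count=$mb 2>/dev/null
    echo "--- ${mb} MB ---"
    time cp big.log big.log.1
done
===

The experiment below does the full thing: a watcher copy-truncates the
log under an exclusive lock while a writer appends continuously.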


===
#!/bin/sh
#
# flockexp.sh: a simple experiment that appends records to a log file
# and copy-truncates it under an exclusive flock(1) once its size
# exceeds a predefined limit.
#

if [ $# -lt 2 ]; then
    echo "Usage: $0 <output-file> <maxsize>"
    exit 1
fi

# main
{
    > "$1"
    logsize=$2

    # Rotator: every 2 seconds (watch's default interval), copy and
    # truncate the log under an exclusive lock once it exceeds
    # $logsize bytes.
    watch "if [ \$(stat -c '%s' $1) -gt $logsize ]; " \
        "then flock -x $1 -c 'cp $1 $1.1; > $1'; fi" > /dev/null &

    # Writer: append one timestamped, numbered record per iteration.
    # The unquoted substitutions collapse the padding in /proc/meminfo.
    count=0
    while true; do
        count=$((count + 1))
        echo $(date '+%d %a %Y %T') $count $(grep 'MemFree' /proc/meminfo)
    done >> "$1"
}

===


I tried it with different file sizes. It starts showing data loss once the size grows beyond 2 MB.


===
$ stat -c '%s' test.log test.log.1 
418065
2007074
$
$ tail -n 4 test.log.1
28 Fri 2013 17:20:28 42937 MemFree: 3655640 kB
28 Fri 2013 17:20:28 42938 MemFree: 3655580 kB
28 Fri 2013 17:20:28 42939 MemFree: 3655068 kB
28 Fri 2013 17:20:28 42940 MemFree: 3655436 kB
$
$ head -n 4 test.log
28 Fri 2013 17:20:28 42942 MemFree: 3655428 kB
28 Fri 2013 17:20:28 42943 MemFree: 3655448 kB
28 Fri 2013 17:20:28 42944 MemFree: 3655812 kB
28 Fri 2013 17:20:28 42945 MemFree: 3655824 kB
===
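
The counter column makes the loss easy to quantify: record 42941 is
missing between the two files. Something like this tallies the dropped
records across the rotated and current log (assuming the counter is
the fifth whitespace-separated field, as in the output above):

===
cat test.log.1 test.log | awk '
    prev && $5 != prev + 1 { lost += $5 - prev - 1 }
    { prev = $5 }
    END { print lost + 0, "record(s) lost" }'
===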

I guess the kernel buffers start dropping writes beyond a fairly small limit.
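
One way to tell buffer drops apart from the copy/truncate race itself
would be to have the writer take the same exclusive lock around every
append. A minimal, untested sketch of that writer loop ($1 is the log
file, as in flockexp.sh above):

===
count=0
while true; do
    count=$((count + 1))
    (
        # Block until the rotator releases the lock, then append
        # through the locked descriptor (fd 9 is the log file,
        # opened in append mode below).
        flock -x 9
        echo $(date '+%d %a %Y %T') $count $(grep 'MemFree' /proc/meminfo) >&9
    ) 9>>"$1"
done
===

If no records go missing with this writer, the loss above is the race
between the writer and the truncation rather than the kernel dropping
writes.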

---
Regards
   -Prasad
http://feedmug.com

