"why is my Linux so damn slow?"

Rick Sewill rsewill at gmail.com
Sat Feb 12 19:15:02 UTC 2011


On Saturday, February 12, 2011 12:43:53 pm M. Fioretti wrote:
> On Sat, Feb 12, 2011 12:27:55 PM -0600, Rick Sewill (rsewill at gmail.com) wrote:
> > Could you show the output of iostat -x 1,
> > not iostat -x 1 | egrep -i 'device|sda'
> > please?
> 
> Sure, sorry, here you go (this is with Firefox open, right now)
> 
> 
> Linux 2.6.35.10-74.fc14.x86_64 (polaris.localdomain)	02/12/2011	_x86_64_	(2 CPU)
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           28.93    0.00    3.23    0.69    0.00   67.15
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.76    12.23    1.72    2.07    96.94   111.76    54.97     0.10   26.58   4.13   1.57
> dm-0              0.00     0.00    2.45   13.98    96.65   111.76    12.68     2.18  132.68   0.95   1.57
> dm-1              0.00     0.00    0.01    0.00     0.09     0.00     8.00     0.00    5.45   3.18   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           48.76    0.00    0.50    0.00    0.00   50.75
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           16.58    0.00    1.01    0.00    0.00   82.41
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00   19.00     0.00   152.00     8.00     0.01    0.79   0.11   0.20
> dm-0              0.00     0.00    0.00   19.00     0.00   152.00     8.00     0.01    0.79   0.11   0.20
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            4.46    0.00    0.99    4.95    0.00   89.60
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00    27.00    0.00    9.00     0.00   272.00    30.22     0.07    7.67   7.67   6.90
> dm-0              0.00     0.00    0.00   34.00     0.00   272.00     8.00     0.10    2.82   2.03   6.90
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            4.50    0.00    1.00    0.00    0.00   94.50
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           12.87    0.00    0.99    0.00    0.00   86.14
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           39.30    0.00    0.50    0.00    0.00   60.20
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Is there any correlation between avg-cpu %user and Device sda 
wsec/s writes?

Is there a burst of %user cpu activity followed by a burst of wsec/s writes?
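
One way to check that, assuming the sysstat package on your box includes pidstat with the -d option (I believe recent versions do), would be to run something like

    pidstat -u -d 1

which prints per-process CPU usage and per-process disk reads/writes every second, so a process whose %CPU spikes just before its kB_wr/s spikes would show the correlation directly. This is just a sketch; adjust the options to whatever your pidstat supports.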

If the system is doing so little, I'd expect less %user CPU activity.
Since the system has 2 CPUs, does 48% mean one CPU ran solid for a second?
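
My understanding is that iostat averages the CPU figures over all CPUs, so on a 2-CPU box roughly 49% user works out to about one CPU busy for the whole interval (0.49 x 2 CPUs ≈ 1 CPU). Running

    mpstat -P ALL 1

(also from the sysstat package) should show each CPU separately and confirm whether one of them is pegged.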

Can someone help us? I know there is a command, lsof, that shows open files.
Does lsof have a way to show disk activity per file, or is there another
command that can? I'm hoping that if we identify the file(s) with disk
activity, we can identify the service/application/kernel feature that is
hogging the CPU.
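
I don't know of a standard tool that reports disk activity per file, but per-process I/O might be enough to narrow it down. If the iotop utility is available (it may need to be installed first, e.g. yum install iotop on Fedora), something like

    iotop -o

shows only the processes that are currently doing I/O. Once we have a PID, lsof -p <PID> lists the files that process has open, and /proc/<PID>/io has cumulative read_bytes/write_bytes counters for it. (The <PID> here is a placeholder, of course.)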