[fedora-virt] possible memory leak ?

Razvan Radu razvan.radu at voxility.com
Wed Apr 21 14:02:46 UTC 2010


it seems to be the same issue recently reported on the kvm list 
(related to virtio-blk); I will follow up there

regarding my memory calculations for the host: for a system that runs 
only kvm virtual machines and nothing else,
is an overhead of 20% of the guest memory sufficient? 
should I also reserve some additional memory for the host kernel?

I am trying to come up with a scheme for dimensioning host machines that 
have 12, 18 and 24 GB of RAM, while avoiding swap usage
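As a rough sketch of that dimensioning: the 20% per-guest overhead is the figure observed with pmap later in the thread, and the 1 GB host reserve is purely an assumption, not a measured value.

```shell
#!/bin/sh
# Hypothetical sizing sketch for a kvm-only host.
# Assumptions: ~20% qemu-kvm overhead per guest (as seen with pmap),
# plus a fixed 1 GB reserve for the host kernel/userspace (a guess).
HOST_MB=12288          # 12 GB host
RESERVE_MB=1024        # assumed host reserve
GUEST_MB=3072          # 3 GB guest, as in the report below
OVERHEAD_PCT=20        # per-guest overhead

PER_GUEST_MB=$(( GUEST_MB + GUEST_MB * OVERHEAD_PCT / 100 ))
USABLE_MB=$(( HOST_MB - RESERVE_MB ))
MAX_GUESTS=$(( USABLE_MB / PER_GUEST_MB ))
echo "per-guest footprint: ${PER_GUEST_MB} MB, fits ${MAX_GUESTS} guests"
# prints: per-guest footprint: 3686 MB, fits 3 guests
```

Repeating the same arithmetic for the 18 GB and 24 GB hosts gives the other two sizing tiers.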

Razvan RADU
+40 (72) 7772218 / mobile


On 04/21/2010 02:22 PM, Dor Laor wrote:
> On 04/20/2010 05:17 PM, Razvan Radu wrote:
>> hello,
>>
>> after running a stress program in the guest for 6 to 24 hours, the
>> qemu-kvm process uses up all the available memory and gets killed by
>> the OOM killer; is this a bug or a configuration problem?
>>
>> host:
>> arch: Intel i7, 12 GB RAM
>> fedora 12 updated as of 2010-04-19 with fedora-virt packages
>> kernel: 2.6.32.11-99.fc12.x86_64
>> qemu: qemu*0.12.3-7.fc12.x86_64, also tested with 
>> qemu*0.12.3-6.fc12.x86_64
>> libvirt: libvirt*0.7.7-2.fc12.x86_64
>> qemu command: /usr/bin/qemu-kvm -S -M pc-0.11 -enable-kvm -m 3072 -smp
>> 4,sockets=4,cores=1,threads=1 -name guest-00574 -uuid
>> 3f26c499-b029-3e1c-2410-fd197b262057 -nodefaults -chardev
>> socket,id=monitor,path=/var/lib/libvirt/qemu/guest-00574.monitor,server,nowait
>> -mon chardev=monitor,mode=readline -rtc base=utc -boot dc -drive
>> if=none,media=cdrom,id=drive-ide0-1-0 -device
>> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>> file=/dev/hdd.img/guest-00574-1,if=none,id=drive-virtio-disk0,boot=on,cache=none
>> -device
>> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>> -device
>> virtio-net-pci,vlan=0,id=net0,mac=54:52:00:00:00:49,bus=pci.0,addr=0x5
>> -net tap,fd=21,vlan=0,name=hostnet0 -usb -device usb-tablet,id=input0
>> -vnc 0.0.0.0:2 -vga cirrus -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
>>
>> guest:
>> CentOS release 5.4
>> kernel: 2.6.18-164.15.1.el5
>>
>> stress program:
>> http://weather.ou.edu/~apw/projects/stress/
>> command line: ./stress -v -c 4 -m 4 --vm-bytes 800M -d 4 --hdd-bytes 100M
>> -t 100000
>>
>>
>> the qemu process is constantly using more and more memory, and after a
>> while it gets killed
>> I have tested with:
>> - no hugepages
>> - hugepages
>> - lvm
>> - qcow2 image files
>> - cache=writeback
>> - cache=none
>> all resulted in the same memory usage pattern (slowly growing)
>>
>>
>> do you know of a formula for calculating the memory overhead of
>> qemu-kvm (absent a potential memory leak ...)?
>> I have noticed a 20% overhead with and without hugepages (pmap reported
>> a total of 1.2 GB for a 1 GB guest and 3.6 GB total for a 3 GB guest)
>> is 20% a correct assumption (it seems a little high)?
>> I have observed the same overhead with hugepages; is this normal?
>
> That roughly looks fine; in addition to the guest's RAM there is 
> overhead for the PCI config space of the devices, qcow2 metadata, etc.
>
> How much RAM do you have on your host? Please provide the top data, 
> vmstat 1 output and slabtop info.
>
> QEMU does copy I/O requests, but each request should eventually be freed.
>
>>
>>
>> thanks,
>>
>
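The diagnostics requested in the quoted reply (top data, vmstat 1, slabtop) could be gathered with something like the following; the pgrep pattern and output file names are placeholders to adjust for the actual guest.

```shell
#!/bin/sh
# Collect the diagnostics requested above. The pgrep pattern and the
# output file names are placeholders, not from the original thread.
QEMU_PID=$(pgrep -f 'qemu-kvm.*guest-00574' | head -n 1 || true)

top -b -n 1 > top.out 2>&1 || true      # one batch-mode snapshot of top
vmstat 1 5 > vmstat.out 2>&1 || true    # five one-second vmstat samples
slabtop -o > slabtop.out 2>&1 || true   # one-shot slab listing (needs root)
if [ -n "$QEMU_PID" ]; then
    # the last line of pmap -x is the total mapped/resident size
    pmap -x "$QEMU_PID" | tail -n 1 > pmap-total.out
fi
```

Comparing successive pmap totals over the 6-24 hour stress run would also show whether the growth is in anonymous memory or in file mappings.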
