Hi, I have two virtual machine setups, one FC13 and the other FC14, using KVM with guests created using the Virtual Machine Manager.
I have noticed that the disk performance is slower than I would have expected. Here are some figures:
FC14 Host (i7, 8Gb memory, Two 1Tb sata disks in soft Raid0)
Host, 9.76 MBytes/Sec
Centos 5.6 Guest, 6.45 MBytes/Sec
RedHat 8 Guest, 0.426 MBytes/Sec
FC13 Host (i7, 4Gb Memory, One 1Tb sata disk)
Host, 47.6 MBytes/Sec
Centos 5.6 Guest, 12.52 MBytes/Sec
RedHat 7.3 Guest, 1.42 MBytes/Sec
FC6 Guest, 16 MBytes/Sec
Centos 6 Guest, 15.72 MBytes/Sec
The tests were run by copying large files (650 Mbytes) and timing the result.
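For reproducibility, a timed copy along those lines can be scripted; the paths below are illustrative, and a much smaller file than the real 650 MByte test is used here:

```shell
# Create a 64 MB test file of zeros (a real run used ~650 MB).
dd if=/dev/zero of=/tmp/perftest.src bs=1M count=64 2>/dev/null
sync
# Time the copy, including a final sync so buffered writes are counted.
time sh -c 'cp /tmp/perftest.src /tmp/perftest.dst && sync'
```

Dividing the file size by the elapsed time gives a MBytes/Sec figure.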
I didn't expect the legacy 2.4 kernel guests to perform so slowly.
The performance of the soft raid0 host machine is disappointing.
On the FC13 host, without Raid, the guest performance at 1/3 of the host is also a surprise.
What experience and guidance is out there in this area?
Many thanks
Ken
On Sun, Oct 30, 2011 at 08:28:30AM +0000, Ken Smith wrote:
Hi, I have two virtual machine setups, one is FC13 and the other is FC14, using kvm and guests created using the Virtual Machine Manager.
I have noticed that the disk performance is slower than I would have expected. Here are some figures
FC14 Host (i7, 8Gb memory, Two 1Tb sata disks in soft Raid0)
Host, 9.76 MBytes/Sec
Centos 5.6 Guest, 6.45 MBytes/Sec
RedHat 8 Guest, 0.426 MBytes/Sec
FC13 Host (i7, 4Gb Memory, One 1Tb sata disk)
Host, 47.6 MBytes/Sec
Centos 5.6 Guest, 12.52 MBytes/Sec
RedHat 7.3 Guest, 1.42 MBytes/Sec
FC6 Guest, 16 MBytes/Sec
Centos 6 Guest, 15.72 MBytes/Sec
The tests were run by copying large files (650 Mbytes) and timing the result.
I didn't expect the legacy 2.4 kernel guests to perform so slowly.
The performance of the soft raid0 host machine is disappointing.
On the FC13 host, without Raid, the guest performance at 1/3 of the host is also a surprise.
What experience and guidance is out there in this area?
There's not enough information in this post to say what is going on.
What device are you exporting to the guest? virtio? IDE?
What are you using on the host to store the disks? qcow2? raw file? sparse or not? an LV? a partition?
What precise settings for cache etc are being used? Use 'virsh dumpxml' and look at the <disk> section.
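As a sketch, extracting the <disk> element from a dumped domain XML might look like this; the XML below is an illustrative sample, not taken from either machine (in practice: virsh dumpxml GUEST > domain.xml):

```shell
# Illustrative domain XML sample, standing in for 'virsh dumpxml' output.
cat > domain.xml <<'EOF'
<domain>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
EOF
# Print just the <disk> section. Note: no cache attribute on <driver>
# here means the hypervisor default caching mode is in effect.
sed -n '/<disk /,/<\/disk>/p' domain.xml
```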
Also you should try a later host. Some performance improvements have been made in more recent versions of qemu, and in any case F14 is almost out of support.
Rich.
Richard W.M. Jones wrote:
On Sun, Oct 30, 2011 at 08:28:30AM +0000, Ken Smith wrote:
Hi, I have two virtual machine setups, one is FC13 and the other is FC14, using kvm and guests created using the Virtual Machine Manager.
I have noticed that the disk performance is slower than I would have expected. Here are some figures
FC14 Host (i7, 8Gb memory, Two 1Tb sata disks in soft Raid0)
Host, 9.76 MBytes/Sec
Host file system Ext4 formatted on md0, Intel MB
Centos 5.6 Guest, 6.45 MBytes/Sec
Virtio, QEMU/RAW, No Specific cache setting
RedHat 8 Guest, 0.426 MBytes/Sec
Correction this is FC8, IDE Device, QEMU/RAW, No Specific cache setting
FC13 Host (i7, 4Gb Memory, One 1Tb sata disk)
Host, 47.6 MBytes/Sec
Host file system Ext4 formatted LV on partition(s), ASUS MB
Centos 5.6 Guest, 12.52 MBytes/Sec
IDE, QEMU/RAW, No Specific cache setting
RedHat 7.3 Guest, 1.42 MBytes/Sec
IDE, QEMU/RAW, No Specific cache setting
FC6 Guest, 16 MBytes/Sec
IDE, QEMU/RAW, No Specific cache setting
Centos 6 Guest, 15.72 MBytes/Sec
Virtio, QEMU/RAW, No Specific cache setting
The tests were run by copying large files (650 Mbytes) and timing the result.
I didn't expect the legacy 2.4 kernel guests to perform so slowly.
The performance of the soft raid0 host machine is disappointing.
On the FC13 host, without Raid, the guest performance at 1/3 of the host is also a surprise.
What experience and guidance is out there in this area?
There's not enough information in this post to say what is going on.
What device are you exporting to the guest? virtio? IDE?
What are you using on the host to store the disks? qcow2? raw file? sparse or not? an LV? a partition?
What precise settings for cache etc are being used? Use 'virsh dumpxml' and look at the <disk> section.
Also you should try a later host. Some performance improvements have been made in more recent versions of qemu, and in any case F14 is almost out of support.
Rich.
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine? The raw files are the full size of the filesystem they are intended to hold.
Thanks
Ken
On Sun, Oct 30, 2011 at 04:56:51PM +0000, Ken Smith wrote:
Richard W.M. Jones wrote:
On Sun, Oct 30, 2011 at 08:28:30AM +0000, Ken Smith wrote:
Hi, I have two virtual machine setups, one is FC13 and the other is FC14, using kvm and guests created using the Virtual Machine Manager.
I have noticed that the disk performance is slower than I would have expected. Here are some figures
FC14 Host (i7, 8Gb memory, Two 1Tb sata disks in soft Raid0)
Host, 9.76 MBytes/Sec
Host file system Ext4 formatted on md0, Intel MB
Centos 5.6 Guest, 6.45 MBytes/Sec
Virtio, QEMU/RAW, No Specific cache setting
Is the raw file sparse or fully allocated? It makes a difference if the host is having to find and allocate blocks while writing. But nevertheless, this is about what I'd expect.
RedHat 8 Guest, 0.426 MBytes/Sec
Correction: this is FC8, IDE Device, QEMU/RAW, No Specific cache setting
IDE, so we'd expect the performance to be terrible, and it is. IDE is just there for compatibility.
In general the best performance is going to be when you use a host LV (or partition) and avoid files. In the guest you should enable virtio. And for best performance make sure you are using the most recent qemu and kernel since a lot of work last year went into improving virtio.
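As a sketch of that recommendation, the <disk> element for an LV-backed virtio disk might look like this (the volume group and device names are illustrative):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/guest-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```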
Here are some numbers from my laptop (consumer SATA drive), which has not had any real tuning or attention to performance. The numbers are from the command:
dd if=/dev/zero of=<output> bs=8k count=131072 conv=fsync
Host (F17) write to LV: 59.5 MB/s
Guest (F16, virtio) write to file on ext4 filesystem: 37.4 MB/s
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
The raw files are the full size of the filesystem they are intended to hold.
Does this mean they're not sparse?
Rich.
Richard W.M. Jones wrote:
{snip} In general the best performance is going to be when you use a host LV (or partition) and avoid files. In the guest you should enable virtio. And for best performance make sure you are using the most recent qemu and kernel since a lot of work last year went into improving virtio.
Here are some numbers from my laptop (consumer SATA drive), which has not had any real tuning or attention to performance. The numbers are from the command:
dd if=/dev/zero of=<output> bs=8k count=131072 conv=fsync
Host (F17) write to LV: 59.5 MB/s
Guest (F16, virtio) write to file on ext4 filesystem: 37.4 MB/s
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
The raw files are the full size of the filesystem they are intended to hold.
Does this mean they're not sparse?
Rich.
Thank you Richard. That has given me some pointers.
Ken
Am 30.10.2011 21:59, schrieb Richard W.M. Jones:
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
cache=writethrough is the default. And indeed this is the safest option, and at the same time by far the slowest option. I would expect that you get much better write performance with cache=none.
Note however that older guest OSes cannot deal correctly with disks that have a volatile write cache, like cache=none emulates. This means that in case of a host crash, and with some bad luck, your guest might experience file system corruption.
Recent guest OSes are safe in this respect and can be used with cache=none or cache=writeback in order to improve performance without any such risk.
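For reference, on the qemu command line these modes are selected with the cache= suboption of -drive; a sketch, with an illustrative backing device:

```
qemu-kvm -drive file=/dev/vg0/guest-disk,if=virtio,cache=none ...
```

In libvirt this corresponds to the cache attribute on the <driver> element inside <disk>.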
Kevin
On Mon, Oct 31, 2011 at 10:18:50AM +0100, Kevin Wolf wrote:
Am 30.10.2011 21:59, schrieb Richard W.M. Jones:
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
cache=writethrough is the default. And indeed this is the safest option, and at the same time by far the slowest option. I would expect that you get much better write performance with cache=none.
Note however that older guest OSes cannot deal correctly with disks that have a volatile write cache, like cache=none emulates. This means that in case of a host crash, and with some bad luck, your guest might experience file system corruption.
Recent guest OSes are safe in this respect and can be used with cache=none or cache=writeback in order to improve performance without any such risk.
Just FYI, here is how libvirt XML types are mapped to qemu types, found by examining the libvirt code:
libvirt        old qemu    modern qemu
none           cache=off   cache=none
writeback      cache=on    cache=writeback
writethrough   cache=off   cache=writethrough
directsync     cache=off   cache=directsync (if supported by qemu)
unsafe         cache=off   cache=unsafe (if supported by qemu)
I tested this to see what libvirt would actually use with my fairly recent qemu-kvm, and what sort of timings I would get:
libvirt        MB/s  modern qemu command line
(no setting)   29    (nothing)
none           39    cache=none
writeback      24*   cache=writeback
writethrough   28    cache=writethrough
directsync     [did not work for me -- "unknown disk cache mode"]
unsafe         [did not work for me -- "unknown disk cache mode"]
* = large variability in this result
What's also interesting is that performance today with no cache setting is slower than it was yesterday, even though I'm not running anything significantly different on my laptop.
Kevin, I'm using 'dd ... conv=fsync' for these tests. What's a good, reliable method for testing read and write speeds?
Rich.
Am 31.10.2011 16:07, schrieb Richard W.M. Jones:
On Mon, Oct 31, 2011 at 10:18:50AM +0100, Kevin Wolf wrote:
Am 30.10.2011 21:59, schrieb Richard W.M. Jones:
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
cache=writethrough is the default. And indeed this is the safest option, and at the same time by far the slowest option. I would expect that you get much better write performance with cache=none.
Note however that older guest OSes cannot deal correctly with disks that have a volatile write cache, like cache=none emulates. This means that in case of a host crash, and with some bad luck, your guest might experience file system corruption.
Recent guest OSes are safe in this respect and can be used with cache=none or cache=writeback in order to improve performance without any such risk.
Just FYI, here is how libvirt XML types are mapped to qemu types, found by examining the libvirt code:
libvirt        old qemu    modern qemu
none           cache=off   cache=none
writeback      cache=on    cache=writeback
writethrough   cache=off   cache=writethrough
directsync     cache=off   cache=directsync (if supported by qemu)
unsafe         cache=off   cache=unsafe (if supported by qemu)
cache=off is a strange fallback for unsafe...
I tested this to see what libvirt would actually use with my fairly recent qemu-kvm, and what sort of timings I would get:
What does fairly recent mean? cache=unsafe is supported since 0.13. If it doesn't work this might indicate a libvirt bug.
libvirt        MB/s  modern qemu command line
(no setting)   29    (nothing)
none           39    cache=none
writeback      24*   cache=writeback
writethrough   28    cache=writethrough
directsync     [did not work for me -- "unknown disk cache mode"]
unsafe         [did not work for me -- "unknown disk cache mode"]
* = large variability in this result
What's also interesting is that performance today with no cache setting is slower than it was yesterday, even though I'm not running anything significantly different on my laptop.
Kevin, I'm using 'dd ... conv=fsync' for these tests. What's a good, reliable method for testing read and write speeds?
Depends on what you want to test, obviously. conv=fsync will be very much in favour of cache=writethrough because you force all other caching options to do the disk flushes as well that make writethrough so slow.
When I use dd for testing, I usually use oflag=direct to bypass the guest's page cache, but don't issue any fsyncs. You should see that the writeback modes (none/writeback/unsafe) perform quite a bit better with that, while I'd expect writethrough to stay at about the same level.
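For example, a direct-I/O write test of that shape; the path is illustrative, and the count is much smaller than a real benchmark would use:

```shell
# Write ~8 MB with O_DIRECT, bypassing the page cache on the writing
# side. No fsync is issued; writeback caching in the storage stack
# below still applies.
dd if=/dev/zero of=ddtest.img bs=8k count=1024 oflag=direct
```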
Kevin
On 10/31/2011 04:26 PM, Kevin Wolf wrote:
Am 31.10.2011 16:07, schrieb Richard W.M. Jones:
On Mon, Oct 31, 2011 at 10:18:50AM +0100, Kevin Wolf wrote:
Am 30.10.2011 21:59, schrieb Richard W.M. Jones:
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
cache=writethrough is the default. And indeed this is the safest option, and at the same time by far the slowest option. I would expect that you get much better write performance with cache=none.
Note however that older guest OSes cannot deal correctly with disks that have a volatile write cache, like cache=none emulates. This means that in case of a host crash, and with some bad luck, your guest might experience file system corruption.
Recent guest OSes are safe in this respect and can be used with cache=none or cache=writeback in order to improve performance without any such risk.
Just FYI, here is how libvirt XML types are mapped to qemu types, found by examining the libvirt code:
libvirt        old qemu    modern qemu
none           cache=off   cache=none
writeback      cache=on    cache=writeback
writethrough   cache=off   cache=writethrough
directsync     cache=off   cache=directsync (if supported by qemu)
unsafe         cache=off   cache=unsafe (if supported by qemu)
cache=off is a strange fallback for unsafe...
I tested this to see what libvirt would actually use with my fairly recent qemu-kvm, and what sort of timings I would get:
What does fairly recent mean? cache=unsafe is supported since 0.13. If it doesn't work this might indicate a libvirt bug.
libvirt        MB/s  modern qemu command line
(no setting)   29    (nothing)
none           39    cache=none
writeback      24*   cache=writeback
writethrough   28    cache=writethrough
directsync     [did not work for me -- "unknown disk cache mode"]
unsafe         [did not work for me -- "unknown disk cache mode"]
* = large variability in this result
What's also interesting is that performance today with no cache setting is slower than it was yesterday, even though I'm not running anything significantly different on my laptop.
Kevin, I'm using 'dd ... conv=fsync' for these tests. What's a good, reliable method for testing read and write speeds?
Depends on what you want to test, obviously. conv=fsync will be very much in favour of cache=writethrough because you force all other caching options to do the disk flushes as well that make writethrough so slow.
When I use dd for testing, I usually use oflag=direct to bypass the guest's page cache, but don't issue any fsyncs. You should see that the writeback modes (none/writeback/unsafe) perform quite a bit better with that, while I'd expect writethrough to stay at about the same level.
FYI I've seen really strange behavior when I tried to do some benchmarking so I'm interested in this topic as well. I opened a bug for my problems:
https://bugzilla.redhat.com/show_bug.cgi?id=750202
Regards, Dennis
On Mon, Oct 31, 2011 at 04:26:33PM +0100, Kevin Wolf wrote:
Am 31.10.2011 16:07, schrieb Richard W.M. Jones:
On Mon, Oct 31, 2011 at 10:18:50AM +0100, Kevin Wolf wrote:
Am 30.10.2011 21:59, schrieb Richard W.M. Jones:
I've pulled most of the information you requested. See above. I don't see any specific cache settings on either machine. Is there somewhere I would see the default setting on the machine?
I'm out of the loop on what the default caching policy is these days. Hopefully libvirt is at least choosing a safe one.
cache=writethrough is the default. And indeed this is the safest option, and at the same time by far the slowest option. I would expect that you get much better write performance with cache=none.
Note however that older guest OSes cannot deal correctly with disks that have a volatile write cache, like cache=none emulates. This means that in case of a host crash, and with some bad luck, your guest might experience file system corruption.
Recent guest OSes are safe in this respect and can be used with cache=none or cache=writeback in order to improve performance without any such risk.
Just FYI, here is how libvirt XML types are mapped to qemu types, found by examining the libvirt code:
libvirt        old qemu    modern qemu
none           cache=off   cache=none
writeback      cache=on    cache=writeback
writethrough   cache=off   cache=writethrough
directsync     cache=off   cache=directsync (if supported by qemu)
unsafe         cache=off   cache=unsafe (if supported by qemu)
cache=off is a strange fallback for unsafe...
(CC'd to Dan)
That was based on my reading of the libvirt code, but it might be wrong. In any case that would only apply to some very old version of qemu.
I tested this to see what libvirt would actually use with my fairly recent qemu-kvm, and what sort of timings I would get:
What does fairly recent mean? cache=unsafe is supported since 0.13. If it doesn't work this might indicate a libvirt bug.
libvirt seems to think that cache=unsafe isn't supported. However it clearly is in my version of qemu:
$ qemu-kvm -help | grep cache
[,cache=writethrough|writeback|none|unsafe][,format=f]
qemu-0.15.0-5.fc17.x86_64
But:
$ sudo virsh edit F16x64
error: internal error unknown disk cache mode 'unsafe'
I need to dig into libvirt code a bit deeper to see what this means.
libvirt        MB/s  modern qemu command line
(no setting)   29    (nothing)
none           39    cache=none
writeback      24*   cache=writeback
writethrough   28    cache=writethrough
directsync     [did not work for me -- "unknown disk cache mode"]
unsafe         [did not work for me -- "unknown disk cache mode"]
* = large variability in this result
What's also interesting is that performance today with no cache setting is slower than it was yesterday, even though I'm not running anything significantly different on my laptop.
Kevin, I'm using 'dd ... conv=fsync' for these tests. What's a good, reliable method for testing read and write speeds?
Depends on what you want to test, obviously. conv=fsync will be very much in favour of cache=writethrough because you force all other caching options to do the disk flushes as well that make writethrough so slow.
When I use dd for testing, I usually use oflag=direct to bypass the guest's page cache, but don't issue any fsyncs. You should see that the writeback modes (none/writeback/unsafe) perform quite a bit better with that, while I'd expect writethrough to stay at about the same level.
OK, that's useful to know.
Rich.