Anyone know why qemu wouldn't want to run a virtual machine installing to a qcow2 image file on a tmpfs filesystem?
They are doing a lot of test installs in virtual machines at work, and I suggested creating a storage pool in tmpfs (the host machine has vast amounts of memory) as something that might make the install go faster, but it seems to refuse to even attempt to use the tmpfs image.
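For reference, the setup I had in mind is roughly this (the pool name, mount point and tmpfs size below are only placeholders):

    # mount a tmpfs large enough to hold the install image (size here is a guess)
    mount -t tmpfs -o size=32G tmpfs /var/lib/libvirt/images/tmpfs

    # put a plain directory-backed libvirt pool on top of it
    virsh pool-define-as tmpfs-pool dir --target /var/lib/libvirt/images/tmpfs
    virsh pool-start tmpfs-pool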
On 06/05/2014 10:13 AM, Tom Horsley wrote:
> Anyone know why qemu wouldn't want to run a virtual machine installing to a qcow2 image file on a tmpfs filesystem?
> They are doing a lot of test installs in virtual machines at work, and I suggested creating a storage pool in tmpfs (the host machine has vast amounts of memory) as something that might make the install go faster, but it seems to refuse to even attempt to use the tmpfs image.
What caching mode are you requesting? tmpfs can't (yet) support O_DIRECT, and if the caching mode you request causes qemu to try O_DIRECT, that would explain why it is failing.
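An easy way to see the O_DIRECT limitation outside of qemu is to ask dd for a direct write on a tmpfs mount; the open fails with EINVAL, something like:

    # assumes /dev/shm is a tmpfs mount; exact message varies by coreutils version
    $ dd if=/dev/zero of=/dev/shm/test.img bs=4k count=1 oflag=direct
    dd: failed to open '/dev/shm/test.img': Invalid argument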
On Thu, 05 Jun 2014 10:30:31 -0600 Eric Blake wrote:
> What caching mode are you requesting? tmpfs can't (yet) support O_DIRECT, and if the caching mode you request causes qemu to try O_DIRECT, that would explain why it is failing.
Is that qemu's default mode? I don't think anyone is explicitly setting it, but I'll have to find out from the folks actually trying this.
On Thu, Jun 05, 2014 at 12:54:19PM -0400, Tom Horsley wrote:
> On Thu, 05 Jun 2014 10:30:31 -0600 Eric Blake wrote:
>> What caching mode are you requesting? tmpfs can't (yet) support O_DIRECT, and if the caching mode you request causes qemu to try O_DIRECT, that would explain why it is failing.
> Is that qemu's default mode? I don't think anyone is explicitly setting it, but I'll have to find out from the folks actually trying this.
I can't recall what the defaults are, but if you run 'virsh dumpxml' it will tell you the cache mode for each drive.
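For example (the domain name and the exact output are illustrative; if no cache attribute shows up, the drive is using the hypervisor default):

    # 'testguest' is just a placeholder domain name
    $ virsh dumpxml testguest | grep '<driver'
        <driver name='qemu' type='qcow2' cache='none'/>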
In any case, cache='none' (ie. O_DIRECT) won't work, and you shouldn't use it for throwaway test machines anyway. Since tmpfs always disappears at reboot, any machine you create on a tmpfs is a throwaway one, whether you intended that or not!
Use cache='unsafe' on tmpfs.
(This advice applies to libguestfs, virt-builder, etc too)
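In libvirt terms that's the cache attribute on the disk's <driver> element, e.g. (the file path and target device here are just examples):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/var/lib/libvirt/images/tmpfs/test.qcow2'/>  <!-- placeholder path -->
      <target dev='vda' bus='virtio'/>
    </disk>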
For more info on caching modes, see: http://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-b...
Rich.
On Sat, Jun 07, 2014 at 08:59:44AM -0400, Tom Horsley wrote:
> On Sat, 7 Jun 2014 12:33:52 +0100 Richard W.M. Jones wrote:
>> any machine you create on a tmpfs is a throwaway one, whether you intended that or not!
> That depends on if you migrate it off tmpfs to hard disk once all the I/O involved in doing an install is finished :-).
Indeed. Please note there is a potential trap here. If you just 'cp' the guest to another disk and immediately boot it with cache=none, qemu might see a corrupt disk. You have to sync the disk after the copy. The easiest way is to use 'dd' to copy the disk with conv=fsync.
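Something along these lines (both paths are placeholders):

    # copy the image out of tmpfs and force it to hit the disk before first boot
    dd if=/var/lib/libvirt/images/tmpfs/test.qcow2 \
       of=/var/lib/libvirt/images/test.qcow2 bs=1M conv=fsync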
Rich.
On Sat, 2014-06-07 at 12:33 +0100, Richard W.M. Jones wrote:
> In any case, cache='none' (ie. O_DIRECT) won't work, and you shouldn't use it for throwaway test machines anyway. Since tmpfs always disappears at reboot, any machine you create on a tmpfs is a throwaway one, whether you intended that or not!
> Use cache='unsafe' on tmpfs.
> (This advice applies to libguestfs, virt-builder, etc too)
> For more info on caching modes, see: http://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-b...
Does the type of storage being used for the KVM (e.g. LVM, a file, etc.) change which cache mode should be employed? I ask because this page says if raw volumes or partitions are used, the cache should be set to "none":
http://www.linux-kvm.org/page/Tuning_KVM
Does that still hold true?
Regards,
Ranbir
On Sat, Jun 07, 2014 at 11:22:10AM -0400, Kanwar Ranbir Sandhu wrote:
> On Sat, 2014-06-07 at 12:33 +0100, Richard W.M. Jones wrote:
>> In any case, cache='none' (ie. O_DIRECT) won't work, and you shouldn't use it for throwaway test machines anyway. Since tmpfs always disappears at reboot, any machine you create on a tmpfs is a throwaway one, whether you intended that or not!
>> Use cache='unsafe' on tmpfs.
>> (This advice applies to libguestfs, virt-builder, etc too)
>> For more info on caching modes, see: http://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-b...
> Does the type of storage being used for the KVM (e.g. LVM, a file, etc.) change which cache mode should be employed?
There's no hard rule. You need to test realistic workloads to find out which is best on your hardware.
> I ask because this page says if raw volumes or partitions are used, the cache should be set to "none":
> http://www.linux-kvm.org/page/Tuning_KVM
> Does that still hold true?
There is one pretty big reason to use cache=none, even though its performance is fairly terrible: Live migration is not possible unless you use cache='none'.
http://wiki.qemu.org/Migration/Storage
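As a rough sketch, a live migration that also copies non-shared storage looks something like this (host and domain names are placeholders), with every disk's <driver> set to cache='none':

    # migrate 'testguest' to desthost, copying its disks as part of the migration
    virsh migrate --live --copy-storage-all testguest qemu+ssh://desthost/system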
Rich.