Hi all, sorry for the slightly off-topic post, but I'm looking for some definitive answers here. A couple of days ago, I performed a "boot race" between Fedora 11 and Ubuntu 9.04, thinking it would be the easiest way to visualize which is faster to boot in a default installation. So basically, I created two identical VMs with virt-manager, installed both from the live CD, then fully updated them. The recorded boot sequence was repeated multiple times to see if the results were consistent.
Now, of course, I hit a nerve there, because I received many complaints, ranging from claims that the video was a fake, to personal attacks, to more or less weird attempts to explain why Fedora booted faster.
The most common complaint was that competing for the host's resources was not fair, something along the lines of: Fedora starts first because it has no GRUB, grabs some critical resource, and Ubuntu has to wait for it before continuing.
So, basically, the question is: do you think there could be any reason why such a test could be unfair to one of the VMs?
Hi Gianluca,
On Fri, 2009-04-24 at 09:15 +0200, Gianluca Sforna wrote:
Hi all, sorry for the slightly off-topic post, but I'm looking for some definitive answers here. A couple of days ago, I performed a "boot race" between Fedora 11 and Ubuntu 9.04, thinking it would be the easiest way to visualize which is faster to boot in a default installation. So basically, I created two identical VMs with virt-manager, installed both from the live CD, then fully updated them. The recorded boot sequence was repeated multiple times to see if the results were consistent.
That's crazy talk! How could you possibly hope to get any useful results from such a stupid experiment? VMs aren't real machines, dude!
Now, of course, I hit a nerve there, because I received many complaints, ranging from claims that the video was a fake, to personal attacks, to more or less weird attempts to explain why Fedora booted faster.
Fedora booted faster? Wait, I take it back, that's a wonderful experiment! :-)
The most common complaint was that competing for the host's resources was not fair, something along the lines of: Fedora starts first because it has no GRUB, grabs some critical resource, and Ubuntu has to wait for it before continuing.
So, basically, the question is: do you think there could be any reason why such a test could be unfair to one of the VMs?
Okay, seriously - I think it's a reasonable experiment.
However, the fact that you have two guests competing for resources is always going to make people suspicious. It may well be a deterministic experiment, but you're always going to have a hard time convincing people of that.
Personally, I'd do it by timing each VM on its own and comparing the boot times.
Also, I personally wouldn't be so interested in which boots faster, but rather in what the bottlenecks are in both cases and how to make them both boot faster.
Cheers, Mark.
On Fri, Apr 24, 2009 at 09:10:47AM +0100, Mark McLoughlin wrote:
The most common complaint was that competing for the host's resources was not fair, something along the lines of: Fedora starts first because it has no GRUB, grabs some critical resource, and Ubuntu has to wait for it before continuing.
So, basically, the question is: do you think there could be any reason why such a test could be unfair to one of the VMs?
Okay, seriously - I think it's a reasonable experiment.
However, the fact that you have two guests competing for resources is always going to make people suspicious. It may well be a deterministic experiment, but you're always going to have a hard time convincing people of that.
This would be particularly true if the VMs were over-committing on CPU resources. E.g., if you had a 2-CPU host and gave both VMs 2 vCPUs each, you'd have 4 vCPUs total competing for 2 pCPUs. They will also likely compete on disk I/O during boot and impact each other that way. Which reminds me that you need to be careful with the QEMU disk caching modes. By default QEMU will use the host OS disk cache, so if one VM's data was already in the host cache and the other's wasn't, one would have an unfair I/O advantage.
I'd recommend running with cache=off for the -drive parameters to ensure they are guaranteed to be using direct I/O, avoiding any cache on the host OS.
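For illustration, roughly what that looks like on a raw QEMU command line; the image path and memory size here are made up for the example, and on newer QEMU versions the same mode is spelled cache=none:

```shell
# Boot the guest with the host page cache bypassed (O_DIRECT), so
# neither VM gets an unfair advantage from data the host has
# already cached. Image path and sizes are hypothetical.
qemu-kvm -m 512 -smp 1 \
    -drive file=/var/lib/libvirt/images/f11.img,cache=off
```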
Personally, I'd do it by timing each VM on its own and comparing the boot times.
You've also got to be sure both VMs are being run with comparable configs. E.g., it'd be totally unfair to compare a Fedora VM with an IDE disk and an RTL8139 NIC against an Ubuntu VM using virtio disk & net, or vice versa.
Daniel
On Fri, Apr 24, 2009 at 11:33 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Fri, Apr 24, 2009 at 09:10:47AM +0100, Mark McLoughlin wrote:
The most common complaint was that competing for the host's resources was not fair, something along the lines of: Fedora starts first because it has no GRUB, grabs some critical resource, and Ubuntu has to wait for it before continuing.
So, basically, the question is: do you think there could be any reason why such a test could be unfair to one of the VMs?
Okay, seriously - I think it's a reasonable experiment.
However, the fact that you have two guests competing for resources is always going to make people suspicious. It may well be a deterministic experiment, but you're always going to have a hard time convincing people of that.
This would be particularly true if the VMs were over-committing on CPU resources. E.g., if you had a 2-CPU host and gave both VMs 2 vCPUs each, you'd have 4 vCPUs total competing for 2 pCPUs.
The setup was on a 2-CPU host; each guest was assigned 1 vCPU.
They will also likely compete on disk I/O during boot and impact each other that way.
I expected this, of course; I just assumed there was no reason for the host kernel to serve data in an uneven way.
Which reminds me that you need to be careful with the QEMU disk caching modes. By default QEMU will use the host OS disk cache, so if one VM's data was already in the host cache and the other's wasn't, one would have an unfair I/O advantage.
Ah, OK. Suspecting something like this, I repeated the experiment multiple times to see if there was any fluctuation. After the first three runs, there was none.
I'd recommend running with cache=off for the -drive parameters to ensure they are guaranteed to be using direct I/O, avoiding any cache on the host OS.
Is this exposed in the virt-manager interface (F10)?
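As far as I know, the F10 virt-manager UI doesn't expose the cache mode; one workaround sketch (the guest name here is hypothetical) is to edit the domain XML directly:

```shell
# Open the guest definition in an editor via libvirt:
virsh edit f11-guest
# Then, inside the <disk> element, set the cache attribute on the
# <driver> tag, e.g.:
#   <driver name='qemu' cache='none'/>
# libvirt regenerates the QEMU command line from this on next start.
```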
Personally, I'd do it by timing each VM on its own and comparing the boot times.
You've also got to be sure both VMs are being run with comparable configs. E.g., it'd be totally unfair to compare a Fedora VM with an IDE disk and an RTL8139 NIC against an Ubuntu VM using virtio disk & net, or vice versa.
The setup was done with the same (default) options offered by virt-manager. I also chose the "Generic" OS type for both.
Thank you very much
Gianluca Sforna wrote:
Hi all, sorry for the slightly off-topic post, but I'm looking for some definitive answers here. A couple of days ago, I performed a "boot race" between Fedora 11 and Ubuntu 9.04, thinking it would be the easiest way to visualize which is faster to boot in a default installation. So basically, I created two identical VMs with virt-manager, installed both from the live CD, then fully updated them. The recorded boot sequence was repeated multiple times to see if the results were consistent.
Now, of course, I hit a nerve there, because I received many complaints, ranging from claims that the video was a fake, to personal attacks, to more or less weird attempts to explain why Fedora booted faster.
The most common complaint was that competing for the host's resources was not fair, something along the lines of: Fedora starts first because it has no GRUB, grabs some critical resource, and Ubuntu has to wait for it before continuing.
So, basically, the question is: do you think there could be any reason why such a test could be unfair to one of the VMs?
Not unfair, but meaningless as a way of comparing which would boot faster on real hardware. Since disk writes seriously slow reads (per recent kernel list discussion), they clearly will interact, making the pattern of reads and writes in the startup scripts a factor in their own performance and definitely in the way they affect each other.
I would boot one, then the other, multiple times. And given the speed of the boots, I bet I could hold my breath while either one booted, so it won't take long. Then repeat with two CPUs configured to produce another data point, possibly a more relevant one.
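A rough host-side sketch of timing a single guest's boot that way, assuming the guest brings up an SSH server; the guest name and IP address here are made up:

```shell
# Time one guest's boot in isolation: start it, then poll until
# it answers on port 22. Repeat per guest, several runs each.
start=$(date +%s.%N)
virsh start f11-guest
until nc -z 192.168.122.10 22 2>/dev/null; do
    sleep 0.2   # poll interval bounds the measurement error
done
end=$(date +%s.%N)
echo "boot took $(echo "$end - $start" | bc) seconds"
```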
berrange@redhat.com noted:
I'd recommend running with cache=off for the -driver parameters to ensure they a guarenteed to be using Direct IO,avoiding any cache on the host OS.
At the least, I would `echo 1 > /proc/sys/vm/drop_caches` before starting. I'm not sure how that would compare with the cache=off option, but it's good practice with any benchmark.
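I.e., before each timed run, something like the following (needs root; syncing first is good practice, since drop_caches only discards clean pages):

```shell
# Flush the host page cache so neither guest starts a run with its
# disk image already cached in host RAM.
sync                                # write out dirty pages first
echo 1 > /proc/sys/vm/drop_caches   # drop the clean page cache
```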