I'm considering using AoE with Xen. My setup would be vblade on one FC7 storage server with mdraid over 6x SATA disks, and two Xen hosts running FC7 with aoe+aoetools; eth1 on all servers goes to a separate VLAN for SAN traffic, eth0 is used for normal LAN traffic, all gigabit.
I'm wondering whether it makes more sense to slice up /dev/md1 on the storage server with LVM and then serve multiple /dev/VGxx/LVxx block devices with individual vblade processes, each with its own AoE shelf/slot ID, to the individual dom0s (or directly via AoE in the domUs)?
Or to serve the whole /dev/md1 using a single vblade process on the storage server, and then use CLVM or GFS on each Xen dom0 to slice up the single /dev/etherd/e0.0 in a coordinated way?
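To make the first option concrete, something along these lines is what I have in mind on the storage server (volume group and LV names are only illustrative):

    # carve up the array with plain LVM on the storage server
    pvcreate /dev/md1
    vgcreate vg_san /dev/md1
    lvcreate -L 20G -n lv_guest1 vg_san
    lvcreate -L 20G -n lv_guest2 vg_san

    # one vblade process per LV, each with its own shelf/slot, bound to the SAN NIC
    vbladed 0 1 eth1 /dev/vg_san/lv_guest1
    vbladed 0 2 eth1 /dev/vg_san/lv_guest2

    # on each Xen host the exports then show up after discovery
    modprobe aoe
    aoe-discover
    aoe-stat        # should list e0.1 and e0.2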
My thinking is that using a cluster filesystem would save the hassle of starting/stopping vblade processes whenever LVs are resized, and the associated confusion over shelf/slot IDs.
But I'm not sure about the overhead of cluster filesystems: does the DLM only get involved for maintenance operations on LVs, or for all I/O activity?
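For the second option, the rough shape I'm picturing is a single export plus a clustered volume group (again, all names are illustrative, and this assumes clvmd/cman are already running on both dom0s):

    # on the storage server: export the whole array once, as shelf 0 slot 0
    vbladed 0 0 eth1 /dev/md1

    # on each dom0
    modprobe aoe
    aoe-discover          # the array appears as /dev/etherd/e0.0

    # on one dom0 only; CLVM propagates the metadata to the other
    pvcreate /dev/etherd/e0.0
    vgcreate --clustered y vg_shared /dev/etherd/e0.0
    lvcreate -L 20G -n lv_guest1 vg_shared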
Thoughts welcome from anyone using (or having attempted) either approach ...
I used kvblade, the AoE server implemented as a kernel module. I was impressed by its greater performance.
But I stopped using AoE because it didn't work with kernel-xen-2.6.{19,20}; it caused a panic or reset immediately after boot. The last kernel-xen that can handle AoE is kernel-xen-2.6.18-1.2869.fc6. I have not tried AoE on the kernel-xen in rawhide.
On 23/04/07, Kazutoshi Morioka morioka@at.wakwak.com wrote:
I used kvblade, the AoE server implemented as a kernel module. I was impressed by its greater performance.
So far I've only tried vblade, not kvblade; I wasn't sure about the experimental status of the kernel-module version.
I think interrupt throttling on my e1000 cards is my bottleneck at the moment.
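For what it's worth, the throttling can be adjusted through the driver's InterruptThrottleRate module option; something like this in /etc/modprobe.conf (values are only an example, one per port, 0 disables throttling entirely):

    options e1000 InterruptThrottleRate=0,0

after which the driver needs to be reloaded (or the box rebooted), and the effect can be watched in /proc/interrupts.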
But I stopped using AoE because it didn't work with kernel-xen-2.6.{19,20}; it caused a panic or reset immediately after boot. The last kernel-xen that can handle AoE is kernel-xen-2.6.18-1.2869.fc6. I have not tried AoE on the kernel-xen in rawhide.
Oh :-( Thanks for the warning. Were you mounting the AoE volumes in dom0 as the backend and exporting them as a frontend into domU, or using AoE directly in domU?
Andy Burns wrote:
But I stopped using AoE because it didn't work with kernel-xen-2.6.{19,20}; it caused a panic or reset immediately after boot. The last kernel-xen that can handle AoE is kernel-xen-2.6.18-1.2869.fc6. I have not tried AoE on the kernel-xen in rawhide.
Oh :-( Thanks for the warning. Were you mounting the AoE volumes in dom0 as the backend and exporting them as a frontend into domU, or using AoE directly in domU?
I mounted AoE in dom0 and exported it into domU. In this case domU can be 2.6.{19,20} without any problem; only dom0 must be 2.6.18. If dom0 is 2.6.{19,20}, then dom0 (and the entire system) crashes.
If domU is 2.6.{19,20} and uses AoE directly in domU, the domU also crashes.
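In case it helps, the working arrangement is roughly: attach the AoE device in dom0, then hand it to the guest as an ordinary phy: backend in the domU config, something like (device name is only an example):

    disk = [ 'phy:/dev/etherd/e0.1,xvda,w' ]

so the domU itself never needs the aoe module.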