On Tue, Apr 23, 2013 at 10:56:58AM -0500, Russell Jones wrote:
> Thanks! I figured this seemed like my only option, and we have the
> means of doing it. I just don't know how to configure it properly
> within Sanlock and cannot find documentation on doing this.
> This man page (https://fedorahosted.org/sanlock/) appears to show
> utilizing LVs from shared storage, but it's not very clear why you
> need two LUNs (or if you even need two for a 2 node setup),
I'm not sure why I used two different vgs/lockspaces in the example. If
you have only one shared vg, you'd use only one lockspace. The number of
hosts doesn't matter.
> why it has to use the "direct init" command,
"direct init" and "client init" do the same thing. The client init
uses the sanlock daemon to do the init, the former doesn't require
the daemon to be running.
> if you have to manually add leases now instead of using the auto lease
> feature (if you're using that), etc. One of the things I liked about
> using Sanlock on NFS is the auto lease creation feature.
Yes, the libvirt auto leases and RHEV/ovirt do all of the sanlock setup
for you, so you'll need to replace that automation with manual steps:
- create the lease lvs and initialize them for sanlock
- configure the lease lvs in the libvirt config
- start the wdmd and sanlock services
- run the sanlock add_lockspace command
> Anywhere you can think of to point me towards more information on
> going down this route with a 2 node setup?
The libvirt syntax is on this page under "Device leases":
http://libvirt.org/formatdomain.html#elementsEvents
Here is an example similar to the one in the sanlock man page. I'll put
all the leases at different offsets on a single 1GB lv (instead of using
one lv per lease, which is also possible). (Sorry, I haven't actually
tried this myself.)
shared storage for vms and leases: /dev/sdb1
shared vg for vms and leases: pool1
shared lv for all leases: /dev/pool1/leases
lockspace name: LS1
three vms: A, B, C
lease names: leaseA, leaseB, leaseC
vgcreate pool1 /dev/sdb1
lvcreate -n leases -L 1GB pool1
sanlock direct init -s LS1:0:/dev/pool1/leases:0
sanlock direct init -r LS1:leaseA:/dev/pool1/leases:1048576
sanlock direct init -r LS1:leaseB:/dev/pool1/leases:2097152
sanlock direct init -r LS1:leaseC:/dev/pool1/leases:3145728
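The resource offsets above are just consecutive 1MiB multiples, with the
lockspace itself at offset 0. In case you want more leases, here's an
untested sketch that prints the same init commands (it only echoes them;
pipe the output to sh to actually run them, which requires the leases lv
to exist):

```shell
# Print the sanlock init commands for a list of leases, one 1MiB slot
# per lease, with the lockspace record at offset 0 of the shared lv.
LV=/dev/pool1/leases
LS=LS1
echo "sanlock direct init -s $LS:0:$LV:0"
i=1
for name in leaseA leaseB leaseC; do
    echo "sanlock direct init -r $LS:$name:$LV:$((i * 1048576))"
    i=$((i + 1))
done
```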
The libvirt syntax for vm A:

<lease>
  <lockspace>LS1</lockspace>
  <key>leaseA</key>
  <target path='/dev/pool1/leases' offset='1048576'/>
</lease>

The libvirt syntax for vm B:

<lease>
  <lockspace>LS1</lockspace>
  <key>leaseB</key>
  <target path='/dev/pool1/leases' offset='2097152'/>
</lease>
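And following the same pattern (again, untested), vm C would reference
the lease initialized at offset 3145728:

```xml
<lease>
  <lockspace>LS1</lockspace>
  <key>leaseC</key>
  <target path='/dev/pool1/leases' offset='3145728'/>
</lease>
```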
Running this would be roughly:
all hosts: service wdmd start
all hosts: service sanlock start
all hosts: service libvirtd start
host 1: sanlock add_lockspace -s LS1:1:/dev/pool1/leases:0
host 2: sanlock add_lockspace -s LS1:2:/dev/pool1/leases:0
(Note that each uses a different host_id there.)
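To sanity-check that the daemon has joined the lockspace on each host,
you can look at the sanlock status output (I haven't verified the exact
formatting, but joined lockspaces are listed with lines of the form
"s <name>:<host_id>:<path>:<offset>"):

```shell
# On each host, after add_lockspace returns, look for the lockspace
# line; host 1 should show something like "s LS1:1:/dev/pool1/leases:0".
sanlock client status | grep '^s LS1:'
```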
Then, libvirt should acquire the leases when you run the vms.
Dave