Hi all,
I'm new to sanlock and trying to make it work on Ubuntu 12.04 (using the following PPA: https://launchpad.net/~wb-munzinger/+archive/ppa-sanlock). libvirt-bin and libvirt-sanlock are both version 0.9.11-2.2.ubuntu.1, and sanlock is version 2.1-4.ubuntu.1. I just cannot make it work:
Here is my configuration. My NFS storage is mounted via fstab on both hosts:

  MYIP:/SANLOCK /var/lib/libvirt/sanlock nfs hard,nointr,_netdev 0 0
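As a sanity check that both hosts really see the same export, I ran this on each of them:

  mount | grep /var/lib/libvirt/sanlock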
First, I don't want my hosts to reboot in case of concurrent access; I just don't want libvirt to be able to launch the same qcow2 twice. So, as far as I understood, there is no need to modprobe softdog (tell me if I'm wrong).

cat /etc/default/sanlock:

  enabled=1
  sanlock_opts="-w 0"
Must I use wdmd? It's enabled=0 for now...
In my /etc/libvirt/qemu-sanlock.conf:

  auto_disk_leases = 1
  disk_lease_dir = "/var/lib/libvirt/sanlock"

And each host does have a different host_id (1 and 2)!
In qemu.conf:

  lock_manager = "sanlock"
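After editing these files, I restart the daemons and check that the lockspace is joined (Ubuntu 12.04 service names; adjust if yours differ):

  service sanlock restart
  service libvirt-bin restart
  sanlock client status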
So I've got two .qcow2 files in /var/lib/libvirt/sanlock, and a __LIBVIRT__DISKS__:0 lease file has been created. sanlock status shows:

  p -1 listener
  p -1 status
  s __LIBVIRT__DISKS__:1:/var/lib/libvirt/sanlock/__LIBVIRT__DISKS__:0

(and s __LIBVIRT__DISKS__:2:/var/lib/libvirt/sanlock/__LIBVIRT__DISKS__:0 on the second host)
Now, when I virsh start myvm1 on my first KVM host, a sanlock direct dump of __LIBVIRT__DISKS__:0 shows only the column headers (offset, lockspace, etc.) with no entries, on both hosts. I can then start the same vm1 on the second KVM host without any warning or error (!). Nevertheless, some kind of lock does seem to be taken on the disks, because a lock file is created (named after an md5sum, I guess). And when I stop my VM, this md5 lock file is never removed; I must run virt-sanlock-cleanup to get rid of it...
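To summarize what I'm doing (the dump path is simply where my lease file lives):

  # on host 1
  virsh start myvm1
  sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__:0

  # on host 2 -- I would expect this to fail, but it succeeds:
  virsh start myvm1

  # after shutting the VM down, the md5-named lock file stays until:
  virt-sanlock-cleanup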
If you could help me, that would be really great. Thank you.
On Wed, Oct 09, 2013 at 11:02:21AM -0400, David Teigland wrote:
On Wed, Oct 09, 2013 at 10:55:59AM +0200, NEVEU Stephane wrote:
In my /etc/libvirt/qemu-sanlock.conf
The better method of libvirt locking is virtlockd. I've never seen this old libvirt+sanlock combination work well for anyone.
I'm considering using the explicit lease variant of libvirt+sanlock (i.e., lease config defined in the domain XML), with the leases maintained on a shared block device, as I wish to avoid the need for a shared filesystem.
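Concretely, I have in mind something like this in each domain's XML (the lockspace name, key, device path, and offset below are made-up examples):

  <lease>
    <lockspace>LS</lockspace>
    <key>vm1</key>
    <target path='/dev/mapper/leases' offset='1048576'/>
  </lease>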
I'm under the impression this is the same configuration RHEV/oVirt uses; would you have any reservations about using this locking style outside of those environments?
Thanks,
-- Adam
On Mon, Nov 18, 2013 at 08:36:50AM -0800, Adam Tilghman wrote:
On Wed, Oct 09, 2013 at 11:02:21AM -0400, David Teigland wrote:
On Wed, Oct 09, 2013 at 10:55:59AM +0200, NEVEU Stephane wrote:
In my /etc/libvirt/qemu-sanlock.conf
The better method of libvirt locking is virtlockd. I've never seen this old libvirt+sanlock combination work well for anyone.
I'm considering using the explicit lease variant of libvirt+sanlock (i.e., lease config defined in the domain XML), with the leases maintained on a shared block device, as I wish to avoid the need for a shared filesystem.
I'm under the impression this is the same configuration RHEV/oVirt uses; would you have any reservations about using this locking style outside of those environments?
Hi, I'm not too familiar with the details of libvirt/ovirt, so take this for what it's worth...
The design exists to use sanlock leases for VMs through libvirt, and some of the parts have been written (in libvirt and sanlock), but ovirt/vdsm does not yet use those capabilities. (ovirt does use sanlock for protecting the SPM now.)
Also, there is more to do than specifying the leases in the configuration. The sanlock lockspaces need to be created and managed (vdsm would do this).
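For example, bringing a lockspace up by hand looks roughly like this (the lockspace name and device are illustrative):

  # initialize the lockspace area once, from any host
  sanlock direct init -s LS:0:/dev/mapper/leases:0

  # then each host joins it using its own host_id (1, 2, ...)
  sanlock client add_lockspace -s LS:1:/dev/mapper/leases:0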
I would have reservations about manually setting up and administering a cluster directly at the libvirt/sanlock level if you're looking for something sustainable and supportable.
Dave
The design exists to use sanlock leases for VMs through libvirt, and some of the parts have been written (in libvirt and sanlock), but ovirt/vdsm does not yet use those capabilities. (ovirt does use sanlock for protecting the SPM now.)
Also, there is more to do than specifying the leases in the configuration. The sanlock lockspaces need to be created and managed (vdsm would do this).
I would have reservations about manually setting up and administering a cluster directly at the libvirt/sanlock level if you're looking for something sustainable and supportable.
Thanks, I'll keep this in mind. We'll plan to fall back to libvirt's virtlockd-on-NFS if we encounter any problems.
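For that fallback, my understanding is that virtlockd needs little more than the following (a sketch; the option names are as I read them in the libvirt docs, and the directory is just an example that would sit on NFS):

  # /etc/libvirt/qemu.conf
  lock_manager = "lockd"

  # /etc/libvirt/qemu-lockd.conf
  auto_disk_leases = 1
  file_lockspace_dir = "/var/lib/libvirt/lockd/files"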
-- Adam
--
Adam Tilghman
IT Architecture / Development
Academic Computing & Media Services, UC San Diego