Hi all,
I have an idea for a cluster lock built on sanlock. However, I am not familiar with the details of how sanlock works, so I am sending this mail to the sanlock experts to discuss its feasibility.
My idea about the cluster lock:
1) Initialize the lockspace and resource first.
2) Add each node's host_id to the lockspace after step 1.
3) Loop:
4) When a node wants to acquire the cluster lock, it calls "sanlock.acquire()"; "sanlock.release()" is called once the operation is finished.
How feasible is this idea?
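In code, the loop of steps 3) and 4) would look roughly like the sketch below. This is only a sketch: do_work() is a placeholder for the operation protected by the lock, the acquire call follows the demo code quoted further down, and I assume that sanlock.release() takes the same arguments and that a failed acquire raises sanlock.SanlockException so the node can retry.

    import time
    import sanlock

    sd_path = "/dev/pool/"            # lockspace name, as in the demo below
    lease_file = sd_path + "lease_file"
    LEASE_NAME = "LEASE"

    fd = sanlock.register()           # register this process with the sanlock daemon

    while True:
        try:
            # try to take the cluster lock; this fails while the
            # other node holds it, so back off and retry
            sanlock.acquire(sd_path, LEASE_NAME, [lease_file], slkfd=fd)
        except sanlock.SanlockException:
            time.sleep(1)
            continue
        try:
            do_work()                 # placeholder for the protected operation
        finally:
            sanlock.release(sd_path, LEASE_NAME, [lease_file], slkfd=fd)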
I have debugged the sanlock demo code 'python/example.py' step by step on two nodes (node1, node2). The demo code looks like the following segment:
Demo code:

    L1  import sanlock
    L2  sd_path = "/dev/pool/"
    L3  ids_file = sd_path + "ids_file"
    L4  lease_file = sd_path + "lease_file"
    L5  LEASE_NAME = "LEASE"
    L6  sanlock.init_lockspace(sd_path, ids_file)
    L7  sanlock.init_resource(sd_path, LEASE_NAME, [lease_file])
    L8  host_id = 2    # <----- each node uses its own host_id here
    L9  sanlock.add_lockspace(sd_path, host_id, ids_file)
    L10 _sanlock_fd = sanlock.register()
    L11 sanlock.acquire(sd_path, LEASE_NAME, [lease_file], slkfd=_sanlock_fd)
Lines L1~L8 are executed on both node1 and node2.
Executed at the same time: L9 cannot run on both nodes simultaneously, and one node fails.
Executed one after the other: when node1 has added its host_id to the lockspace successfully first, node2's add_lockspace overwrites the lockspace in "ids_file" without raising any exception. In the end only one node writes its timestamp into its sector.
I would appreciate your reply.
Thank you very much.
Qi
On Tue, Jun 18, 2013 at 09:20:07AM +0000, Qixiaozhen wrote:
> My idea about the cluster lock:
> 1) Initialize the lockspace and resource first.
> 2) Add each node's host_id to the lockspace after step 1.
> 3) Loop:
> 4) When a node wants to acquire the cluster lock, it calls "sanlock.acquire()"; "sanlock.release()" is called once the operation is finished.
> How feasible is this idea?
> I have debugged the sanlock demo code 'python/example.py' step by step on two nodes (node1, node2). The demo code looks like the following segment:
I'm not sure exactly how you are running this, but you're probably doing it wrong.
Here is an example using the command line and a simple test program I've attached (compile with -lsanlock).
1. set up shared storage (/dev/sdb)
host1: vgcreate test /dev/sdb
host1: lvcreate -n leases -L 1G test
host2: vgscan
host2: lvchange -ay /dev/test/leases
2. start the daemons
host1: modprobe softdog
host2: modprobe softdog
host1: wdmd
host2: wdmd
host1: sanlock daemon
host2: sanlock daemon
(it's best to use a real watchdog driver instead of softdog)
3. initialize the lockspace (named "LS") and the resource (named "RX")
host1: sanlock client init -s LS:0:/dev/test/leases:0
host1: sanlock client init -r LS:RX:/dev/test/leases:1048576
(done from only one host)
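(for reference, the same one-time initialization through the python binding would be roughly the sketch below; the offset keyword and the (path, offset) tuple form of the resource disk are assumptions about the binding's argument format, so check them against your version)

    import sanlock

    LOCKSPACE = "LS"
    RESOURCE = "RX"
    PATH = "/dev/test/leases"

    # equivalent of: sanlock client init -s LS:0:/dev/test/leases:0
    sanlock.init_lockspace(LOCKSPACE, PATH, offset=0)

    # equivalent of: sanlock client init -r LS:RX:/dev/test/leases:1048576
    sanlock.init_resource(LOCKSPACE, RESOURCE, [(PATH, 1048576)])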
4. add the lockspace
host1: sanlock client add_lockspace -s LS:1:/dev/test/leases:0
host2: sanlock client add_lockspace -s LS:2:/dev/test/leases:0
(this will take 20+ seconds)
(each host uses a different host id)
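(the python binding equivalent would be roughly the following sketch; the offset keyword is an assumption)

    import sanlock

    # on host1 (host2 passes host_id 2 instead)
    sanlock.add_lockspace("LS", 1, "/dev/test/leases", offset=0)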
5. verify that both hosts have joined the lockspace
host1: sanlock client host_status
lockspace LS
1 timestamp 1687
2 timestamp 1582
host2: sanlock client host_status
lockspace LS
1 timestamp 1687
2 timestamp 1561
6. acquire/release lock on RX
host1: sanlk_lockr LS RX /dev/test/leases 1048576 5
host2: sanlk_lockr LS RX /dev/test/leases 1048576 5
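The attached C program is not reproduced here, but the same acquire/hold/release cycle through the python binding would look roughly like the sketch below (argument layout follows the demo code quoted above; the (path, offset) disk tuple is my assumption about the binding's disk format):

    import time
    import sanlock

    LOCKSPACE = "LS"
    RESOURCE = "RX"
    DISKS = [("/dev/test/leases", 1048576)]

    fd = sanlock.register()            # register this pid with the daemon
    sanlock.acquire(LOCKSPACE, RESOURCE, DISKS, slkfd=fd)
    print("lock acquired")
    time.sleep(5)                      # hold the lease, like the "5" argument above
    sanlock.release(LOCKSPACE, RESOURCE, DISKS, slkfd=fd)
    print("lock released")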