file locking...

bruce bedouglas at earthlink.net
Tue Mar 3 14:53:24 UTC 2009


Hi Dennis...

Thanks for the reply... Here's my solution so far... it might change in the
future...

The problem:
The app has a bunch of clients that each need to get a separate/unique list
of files from a master server app. The files are created by the master server
process and reside on the filesystem behind the server process. (This is a
client/server app: the client sends a request to the server, the backend
operation of the server fetches the required files, and the server returns
them to the client app.)

A key issue is that I don't want to run into potential race conditions,
which would result in a given client never being served the files it's
trying to fetch.

Potential Solns:
1) Invoke a form of file locking, with each client process waiting
   until it gets its lock (a minimal locking sketch follows this list).
2) Invoke some form of round-robin scheme, where the master process
   puts files in different dirs, so each client has a better chance
   of getting a "lock" on its own dir..
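
For option 1, here's a minimal sketch of what the locking could look like
with fcntl.flock on a POSIX box. The lock-file path is just a placeholder,
not anything from my actual app:

import fcntl
import os

LOCK_FILE = "/var/myapp/files.lock"   # assumed path, for illustration only

def fetch_with_lock(fetch_func):
    """Block until we hold the exclusive lock, run fetch_func(), then release."""
    lock_fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(lock_fd, fcntl.LOCK_EX)   # blocks until the lock is free
        return fetch_func()
    finally:
        fcntl.flock(lock_fd, fcntl.LOCK_UN)
        os.close(lock_fd)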

Final Soln: (for now)
I decided to cheat!
I realized that since each client process is essentially unique, I can
create a uniqueId (uuid) for each process. Remember, the client app is
hitting the master server/file process via a webservice, so I have each
client send its uuid to the master server via that webservice. The server
appends the uuid to a file, which gives me a kind of FIFO approach for
creating unique dirs for each client. On the backend, a master cron process
reads the FIFO file and, for each uuid in it, creates a tmp dir for that
uuid. The cron process then populates the dir with the required files for
the given client.
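
Roughly, the webservice append and the cron pass look something like this.
All of the paths and the batch size are made-up placeholders, and a real
version would also need to guard the fifo file itself against the webservice
appending while the cron truncates:

import os
import shutil

FIFO_FILE = "/var/myapp/client_fifo.txt"   # assumed path, one uuid per line
WORK_ROOT = "/var/myapp/work"              # assumed path, one subdir per uuid
INCOMING  = "/var/myapp/incoming"          # assumed pool of server-created files

def register_client(client_uuid):
    """Webservice side: append the client's uuid to the fifo file."""
    with open(FIFO_FILE, "a") as f:
        f.write(client_uuid + "\n")

def process_fifo(batch_size=500):
    """Cron side: create a tmp dir per uuid and move a batch of files into it."""
    if not os.path.exists(FIFO_FILE):
        return
    with open(FIFO_FILE) as f:
        uuids = [line.strip() for line in f if line.strip()]
    pending = sorted(os.listdir(INCOMING))
    for client_uuid in uuids:
        client_dir = os.path.join(WORK_ROOT, client_uuid)
        if not os.path.isdir(client_dir):
            os.makedirs(client_dir)
        batch, pending = pending[:batch_size], pending[batch_size:]
        for name in batch:
            shutil.move(os.path.join(INCOMING, name), client_dir)
    # crude: clear the fifo file once every uuid in this pass has been handled
    open(FIFO_FILE, "w").close()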

On the client side, the client sits in a wait loop, checking to see if
anything has been created/placed in its tmp 'uuid' dir. If files are there,
it fetches the files and proceeds..
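
And the client-side wait loop is basically this (again, WORK_ROOT is a
made-up path that would have to match the server side):

import os
import time
import uuid

WORK_ROOT = "/var/myapp/work"   # assumed; must match the server-side layout

def wait_for_files(client_uuid, poll_seconds=5):
    """Sleep until files show up in this client's uuid dir, then return their paths."""
    client_dir = os.path.join(WORK_ROOT, client_uuid)
    while True:
        if os.path.isdir(client_dir):
            names = sorted(os.listdir(client_dir))
            if names:
                return [os.path.join(client_dir, n) for n in names]
        time.sleep(poll_seconds)

# usage: generate the uuid once, send it to the server via the webservice
# (not shown here), then block until the batch shows up
my_id = str(uuid.uuid4())
# batch = wait_for_files(my_id)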

This approach ensures that a client never runs into a situation where files
are available for processing but it can never get them. If there are no
files, the client simply sleeps until there are. If a client requests files
by sending its uuid and then dies before fetching them, but the master cron
has already placed them in the uuid dir, a cleanup process will reabsorb
those files back into the system...
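
The cleanup pass will probably end up looking something like this, treating
a uuid dir as abandoned once it hasn't been touched for a while (the paths
and the one-hour cutoff are just placeholders):

import os
import shutil
import time

WORK_ROOT = "/var/myapp/work"       # assumed; matches the sketches above
INCOMING  = "/var/myapp/incoming"   # abandoned files get reabsorbed back here
MAX_AGE   = 60 * 60                 # one hour, purely illustrative

def reabsorb_abandoned():
    """Move files from stale uuid dirs back into the incoming pool."""
    now = time.time()
    for client_uuid in os.listdir(WORK_ROOT):
        client_dir = os.path.join(WORK_ROOT, client_uuid)
        if not os.path.isdir(client_dir):
            continue
        if now - os.path.getmtime(client_dir) < MAX_AGE:
            continue   # still fresh; the client may just be slow
        for name in os.listdir(client_dir):
            shutil.move(os.path.join(client_dir, name), INCOMING)
        os.rmdir(client_dir)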

thanks to all who gave input/pointers!!

thoughts/comments/etc...



-----Original Message-----
From: python-list-bounces+bedouglas=earthlink.net at python.org
[mailto:python-list-bounces+bedouglas=earthlink.net at python.org]On Behalf
Of Dennis Lee Bieber
Sent: Sunday, March 01, 2009 11:41 AM
To: undisclosed-recipients:
Subject: Re: file locking...


On Sun, 1 Mar 2009 10:00:54 -0800, "bruce" <bedouglas at earthlink.net>
declaimed the following in comp.lang.python:

>
> Except in my situation.. the client has no knowledge of the file-naming
> situation, and I might have 1000s of files... think of the FIFO, first in,
> first out.. so I'm looking for a fast solution that would allow me to
> create groups of say, 500 files, that get batched and processed by the
> client app...
>
	My silly thoughts...

	Main process creates temp/scratch directories for each subprocess;
spawn each subprocess, passing the directory path to it;
main process then just loops over the files moving them, one at a time,
to one of the temp/scratch directories, probably in cyclic order to
distribute the load;
when main/input directory is empty, sleep then check again (or, if the
OS supports it -- use some directory change notification) for new files.

	Each subprocess only sees its files in the applicable temp/scratch
directory.
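
A bare-bones sketch of that cyclic move loop (all directory names made up
for illustration):

import itertools
import os
import shutil
import time

INPUT_DIR    = "/var/myapp/incoming"                          # assumed
SCRATCH_DIRS = ["/var/myapp/worker%d" % i for i in range(4)]  # assumed

def distribute_forever(poll_seconds=5):
    """Move incoming files into the scratch dirs in cyclic order."""
    targets = itertools.cycle(SCRATCH_DIRS)
    while True:
        names = sorted(os.listdir(INPUT_DIR))
        if not names:
            time.sleep(poll_seconds)   # or use a directory-change notification
            continue
        for name in names:
            shutil.move(os.path.join(INPUT_DIR, name), next(targets))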
--
	Wulfraed	Dennis Lee Bieber		KD6MOG
	wlfraed at ix.netcom.com		wulfraed at bestiaria.com
		HTTP://wlfraed.home.netcom.com/
	(Bestiaria Support Staff:		web-asst at bestiaria.com)
		HTTP://www.bestiaria.com/
--
http://mail.python.org/mailman/listinfo/python-list



