Gossamer Forum

FLOCK vs manual locking

Flock (file locking and unlocking) will not work on some types of server.

Can anyone suggest a manual locking technique to replace the flock command in dbman?

Keef

Re: FLOCK vs manual locking
Keef

You may need to use the search feature of the support forum and read through various threads to see if you can locate an answer to your question.

You may want to search in several of the forums.


Unofficial DBMan FAQ
http://webmagic.hypermart.net/dbman/
Re: FLOCK vs manual locking
Hi Lois,

I have searched many of the forums and this problem surfaces a number of times but never seems to be solved. It is a fairly serious limitation on 'network servers', where the cgi-bin is located on a different server from the web server and flock is not supported.

I have been doing some investigation at different Perl/CGI sites and will post a solution if I get it to work.

Keef

Re: FLOCK vs manual locking
Keef emailed me and I sent him this; if anyone else has this problem, here's the solution.

I used the piece of code below to create a lockfile. While the lockfile
exists, no other routine can open the data file (or whatever file you are
opening) until the lock is released. Requests will queue and be dealt with
in turn.
To create the lockfile, put

$lock_file = "path to a directory for lock files/lock.file";

at the top of your script where the globals are, and create that directory.

Then put

&GetFileLock ("$lock_file");

before the part of your routine that opens the file, and release the lock
file with

&ReleaseFileLock ("$lock_file");

after the file has been closed.

You will need to paste the following two subroutines, as is, into the file
where this is taking place; they can go at the end.


# SUB GETFILELOCK #

# This subroutine gets a lock file so that a file can only be modified
# by one user at a time. This is to prevent data mixing. If you want
# to use flock, uncomment that line. We only wait for 60 seconds if
# there is a lock file already in place, at which point we take over,
# assuming the old lock file is from a previous hang.

sub GetFileLock {
    my ($lock_file) = $_[0];
    my ($endtime) = 60;
    $endtime = time + $endtime;
    while (-e $lock_file && time < $endtime) { sleep(2); }
    open(LOCK_FILE, ">$lock_file");
    # flock(LOCK_FILE, 2);
}

# SUB RELEASEFILELOCK #

# This subroutine removes the passed lock file to free up a file
# for further editing. If you want to use flock, uncomment that line.

sub ReleaseFileLock {
    my ($lock_file) = $_[0];
    # flock(LOCK_FILE, 8);
    close(LOCK_FILE);
    unlink($lock_file);
}
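
Just to make the order of operations clear, here is a minimal usage sketch
(the data file name and the record text are made up for illustration; only
the two subroutines above and $lock_file are real):

$lock_file = "path to a directory for lock files/lock.file";

&GetFileLock ("$lock_file");        # wait for, then claim, the lock

# made-up example of the guarded work: append one record to a data file
open (FILE, ">>/path/to/example/data.db") or die "unable to open data file: $!";
print FILE "a new record\n";
close (FILE);

&ReleaseFileLock ("$lock_file");    # delete the lockfile so queued requests can go ahead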


P.S.
With these two subroutines it should be possible for you to ensure
corruption does not occur with any programme you use, by including them in
the script when flock is not available, as on network file systems where the
cgi server is a different machine to the webserver.

Hope that helps.

chmod

>> Instructions for DBMan - nice and easy, here's how you implement it.

Copy both the subroutines to the bottom of db.cgi.

Then in default.cfg, below-

# Full path and file name of the html routines.
require $db_script_path . "/html.pl";

place-

# Full path to a lockfile.
$lock_file = $db_script_path . "/lock/lock.file";

and create a directory called lock, with read and write permissions, alongside
your script.
This directory is where your lockfiles will be created and then deleted.
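
If you want the script to complain early when that directory is missing or
not writable, a small sanity check could go just below the $lock_file line
(purely optional, not part of DBMan; $db_script_path and &cgierr are DBMan's
own):

# optional: stop early if the lock directory is missing or not writable
# by the CGI process
my $lock_dir = $db_script_path . "/lock";
(-d $lock_dir and -w $lock_dir)
    or &cgierr("lock directory $lock_dir is missing or not writable.");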

Then for example in db.cgi in sub add_record where it says-

if ($status eq "ok") {
    open (DB, ">>$db_file_name") or &cgierr("error in add_record. unable to open database: $db_file_name.\nReason: $!");
    if ($db_use_flock) {
        flock(DB, 2) or &cgierr("unable to get exclusive lock on $db_file_name.\nReason: $!");
    }
    print DB &join_encode(%in);
    close DB;   # automatically removes file lock

Place

&GetFileLock ("$lock_file");

before it, and

&ReleaseFileLock ("$lock_file");

after it.
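
Put together, that part of sub add_record would look roughly like this (only
a sketch - the open/flock/print/close block itself is exactly the one shown
above, and the rest of the routine is unchanged):

&GetFileLock ("$lock_file");       # claim the lock before touching the database

if ($status eq "ok") {
    # ... the open / flock / print / close DB block shown above, unchanged ...
}

&ReleaseFileLock ("$lock_file");   # then delete the lockfile so queued requests can go ahead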

You will need to do this for every instance of a file being opened and
written to, like the counter file, the log file if you are using it, and
within the modify and delete subroutines in db.cgi. (For testing, just put
it in sub add_record as above.)

The lockfile will not allow another operation until it is deleted by
&ReleaseFileLock, which should be very quick (less than a second), but if
there is a hang the lockfile will be ignored after 60 seconds. You can
change this timeout in sub GetFileLock.
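
For example, to let a stale lock survive for two minutes instead of one,
change the 60 in sub GetFileLock:

# in sub GetFileLock - wait up to 120 seconds instead of 60 before
# assuming the old lockfile is from a hang and taking over
my ($endtime) = 120;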

You can test it by leaving out the &ReleaseFileLock ("$lock_file"); above,
then adding a record and trying to add another record straight away; the
programme will wait for 60 seconds before going ahead.

chmod




Re: FLOCK vs manual locking
Hi chmod.

Thanks for posting this one . . . and for everyone else, it works! Should this mod go into the DBMan Mods section? I think it should.

A question?

Have you actually modified DBMan yourself with this mod?

If you have, did you find a missing 'Close PASS;' in the db.cgi file?

I would value your input on my post in the DBMan Customisation forum - look for 'Open/Close Issue'.


Thanks again for your mod.


Keef

Re: FLOCK vs manual locking
The Perl FAQ recommends not using code like that. In their words, "a common bit of code NOT TO USE is this":

sleep(3) while -e "file.lock";    # PLEASE DO NOT USE
open(LCK, "> file.lock");         # THIS BROKEN CODE

Check out http://www.perl.com/..._I_just_open_FH_file for the explanation.

The FAQ question above that one states:

Some versions of flock() can't lock files over a network (e.g. on NFS file systems), so you'd need to force the use of fcntl(2) when you build Perl. See the flock entry of the perlfunc manpage, and the INSTALL file in the source distribution for information on building Perl to do this.

For more information on file locking, see also perlopentut if you have it (new for 5.006).
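
If flock is not available at all, one alternative that often gets suggested
for lockfiles is to create them with sysopen and O_EXCL, so the "does it
exist" test and the creation happen in a single atomic step instead of two.
Only a sketch (the sub name is mine, and note that O_EXCL itself has
historically not been reliable over NFS either):

use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

# Try to create the lockfile atomically; sysopen with O_EXCL fails if the
# file already exists, so two processes cannot both think they got the lock.
# Returns 1 on success, 0 if no lock could be obtained within 60 seconds.
sub GetFileLockAtomic {
    my ($lock_file) = @_;
    my $endtime = time + 60;
    while (time < $endtime) {
        return 1 if sysopen(LOCK_FILE, $lock_file, O_WRONLY | O_CREAT | O_EXCL);
        sleep(2);
    }
    return 0;    # timed out; the caller decides whether to take over or give up
}

The ReleaseFileLock sub from earlier in the thread (close, then unlink) works
unchanged with this version.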

Take care,
RJ
http://LetsGoPens.com/