It is a relatively rudimentary Mod....
The benefits of this Mod are the following:
1) Reduced disk space - the current backup process stores about 3 MB of data in the backup directory per day, versus about 200 KB via the compressed tar.gz file.
2) Keeps latest data - stores the latest data of the day rather than 5 or 7 backup copies per day.
3) Easier storage and data transfer - you can download a single tar.gz file rather than waiting to download a long list of backup copies.
Here is what you do...
1) Add the following variables in your Links.pm file:
Code:
# Backup Path - No Trailing Slash
$LINKS{db_backup_path} = "$LINKS{admin_root_path}/backup";
# Tar Path - No Trailing Slash
$LINKS{tar_path} = "/usr/local/bin/tar";
# GZIP Path - No Trailing Slash
$LINKS{gzip_path} = "/usr/local/bin/gzip";
To find the tar and gzip paths on your server, simply type the following commands at the telnet command prompt:
Code:
which tar
which gzip
OR
Code:
where tar
where gzip
You will also have to make sure that a backup directory exists in your admin directory...make sure that the directory permission is world readable/writable (777).
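If the directory does not exist yet, you can create it from the telnet prompt (this assumes you are already sitting in your admin directory):
Code:
mkdir backup
chmod 777 backup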
2) Then add the DBSQL objects before the sub build_all routine in the nph-build.cgi file...
Example:
Code:
$CATALT = new Links::DBSQL "$LINKS{admin_root_path}/defs/CategoryAlternates.def";
NOTE: I have added most of my tables into the backup process.
You will also have to add the DBSQL object variables to the use vars qw(....); line in the Load Modules section of the nph-build.cgi file.
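For example, with the $CATALT object added, the line would look something like this (the other variable names here match the four tables backed up in the routine below; your copy of nph-build.cgi will already declare its own list, so just append the new names to it):
Code:
use vars qw($LINKDB $CATDB $CATALT $USERDB);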
3) Then replace your sub build_backup routine with the following code:
Code:
sub build_backup {
# --------------------------------------------------------
# Backs up important database files.
#
my $today = $LINKDB->get_date;
my $time = $LINKDB->get_time;
print "\t=========================================\n";
print "\tBacking up Database....\n";
print "\t=========================================\n";
if (-e "$LINKS{db_backup_path}/$today.tar.gz") {
print "\t=========================================\n";
print "\tCompressed Backup File exists for today.. Skipping\n";
print "\t=========================================\n";
return;
}
else {
print "\tBacking up Links... \n";
$LINKDB->export_data ( { file => "$LINKS{db_backup_path}/Links$today.db", header => 1 } );
print "\tBacking up Category... \n";
$CATDB->export_data ( { file => "$LINKS{db_backup_path}/Categories$today.db", header => 1 } );
print "\tBacking up Alternative Category... \n";
$CATALT->export_data ( { file => "$LINKS{db_backup_path}/AltCategories$today.db", header => 1 } );
print "\tBacking up Users... \n";
$USERDB->export_data ( { file => "$LINKS{db_backup_path}/Users$today.db", header => 1 } );
}
# Archive only the exported .db files so that previous days' tar.gz files are not pulled into the new archive.
if (open (TAR, "$LINKS{tar_path} cf $LINKS{db_backup_path}/temp.tar $LINKS{db_backup_path}/*.db |")) {
close TAR;
open (GZ, "$LINKS{gzip_path} -c $LINKS{db_backup_path}/temp.tar > $LINKS{db_backup_path}/$today.tar.gz |") or die($!);
print "\t=========================================\n";
print qq"\tFile Compression Completed... \n";
print "\t=========================================\n";
close GZ;
unlink("$LINKS{db_backup_path}/temp.tar") or die($!);
system ("rm $LINKS{db_backup_path}/*.db");
}
print "\t=========================================\n";
print "\tScript Completed on $today at $time\n";
print "\t=========================================\n";
}
You will notice that I have added the AltCategories and Users tables.
NOTE: As Alex stated...backing up the whole database, either through mysqldump or through the above method, is time-consuming, and you might want to consider creating a separate script for the backups...I have done this and the script looks like the following:
Code:
#!/usr/local/bin/perl
# Backup WWWVL: Anthropology MySQL Database
# File Name: nph-backup.cgi
# Description: Backs up all pertinent tables in the MySQL Database.
#=======================================================================
# Load required modules.
# ---------------------------------------------------
use lib '/mnt/web/guide/anthrotech/cgibin/vlib';
use CGI ();
use CGI::Carp qw/fatalsToBrowser/;
use Links::Links;
use Links::DBSQL;
use Links::DB_Utils;
use Links::HTML_Templates;
use strict;
use vars qw($USE_HTML $BANNERDB $LINKDB $CATDB $CATALT $EDITDB $USREVDB $USERDB);
$|++;
# Determine whether we should print HTML or text.
$USE_HTML = 0;
$ENV{'REQUEST_METHOD'} and ($USE_HTML = 1);
# Create the DBSQL objects we will use.
$BANNERDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Banners.def";
$LINKDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Links.def";
$CATDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Category.def";
$CATALT = new Links::DBSQL "$LINKS{admin_root_path}/defs/CategoryAlternates.def";
$EDITDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Editor_Reviews.def";
$USREVDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/User_Reviews.def";
$USERDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Users.def";
# Determine what type of build we are doing and do it.
if ($USE_HTML) {
CGI->nph(1);
print CGI->header(), CGI->start_html ( -title => 'Backing up Database' ), CGI->h1('Backing up Database ... '), "<pre>";
}
my $s = time();
&main ();
$USE_HTML and print "</pre>";
sub main {
# --------------------------------------------------------
# Backs up important database files.
#
my $today = $LINKDB->get_date;
my $time = $LINKDB->get_time;
print "\t=========================================\n";
print "\tBacking up Database....\n";
print "\t=========================================\n";
if (-e "$LINKS{db_backup_path}/$today.tar.gz") {
print "\t=========================================\n";
print "\tCompressed Backup File exists for today.. Skipping\n";
print "\t=========================================\n";
return;
}
else {
print "\tBacking up Banner... \n";
$BANNERDB->export_data ( { file => "$LINKS{db_backup_path}/Banners$today.db", header => 1 } );
print "\tBacking up Links... \n";
$LINKDB->export_data ( { file => "$LINKS{db_backup_path}/Links$today.db", header => 1 } );
print "\tBacking up Category... \n";
$CATDB->export_data ( { file => "$LINKS{db_backup_path}/Categories$today.db", header => 1 } );
print "\tBacking up Alternative Category... \n";
$CATALT->export_data ( { file => "$LINKS{db_backup_path}/AltCategories$today.db", header => 1 } );
print "\tBacking up Editor Reviews... \n";
$EDITDB->export_data ( { file => "$LINKS{db_backup_path}/Editor_Reviews$today.db", header => 1 } );
print "\tBacking up User Reviews... \n";
$USREVDB->export_data ( { file => "$LINKS{db_backup_path}/User_Reviews$today.db", header => 1 } );
print "\tBacking up Users... \n";
$USERDB->export_data ( { file => "$LINKS{db_backup_path}/Users$today.db", header => 1 } );
}
# As above, archive only the exported .db files. The tar and gzip paths come from the variables added to Links.pm in step 1.
if (open (TAR, "$LINKS{tar_path} cf $LINKS{db_backup_path}/temp.tar $LINKS{db_backup_path}/*.db |")) {
close TAR;
open (GZ, "$LINKS{gzip_path} -c $LINKS{db_backup_path}/temp.tar > $LINKS{db_backup_path}/$today.tar.gz |") or die($!);
print "\t=========================================\n";
print qq"\tFile Compression Completed... \n";
print "\t=========================================\n";
close GZ;
unlink("$LINKS{db_backup_path}/temp.tar") or die($!);
system ("rm $LINKS{db_backup_path}/*.db");
}
print "\t=========================================\n";
print "\tScript Completed on $today at $time\n";
print "\t=========================================\n";
}
Notice that I have extra tables like User Reviews and Editor Reviews...you can simply replace these with other relevant tables, as the sketch below shows.
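The pattern is the same for any table. For example, to back up a hypothetical Sessions table (Sessions.def is just an illustration here, not a standard Links def file), you would add:
Code:
# Create the DBSQL object (and add $SESSDB to the use vars line)
$SESSDB = new Links::DBSQL "$LINKS{admin_root_path}/defs/Sessions.def";
# Then, inside sub main, export it alongside the other tables
print "\tBacking up Sessions... \n";
$SESSDB->export_data ( { file => "$LINKS{db_backup_path}/Sessions$today.db", header => 1 } );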
4) Then you will have to create a crontab entry.
NOTE: The code above checks whether a compressed backup already exists for today...the best method is to use the separate backup script and set the cron job to run at 23:00 so that you get the latest data of the day into the day's tar.gz file.
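For example, a crontab entry along these lines would do it (the perl path and the script path are placeholders...adjust them for your own server):
Code:
# Run the backup script at 23:00 every day
0 23 * * * /usr/local/bin/perl /path/to/admin/nph-backup.cgi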
Regards,
Eliot Lee