
SVN Incremental and Full Backup - with Email Notification

To start off with, let's do an incremental backup for every commit, and then look into taking a full backup of all our repositories. (Files attached below.)

Incremental Backup.
An incremental SVN backup on every commit can be done using the post-commit hook.
A template for this hook ships as REPOS_DIR/hooks/post-commit.tmpl; copy it to post-commit
and make it executable.

$ cp post-commit.tmpl post-commit
$ chmod 755 post-commit

NOTE - Important links:
For rsync, refer to this link.
For mutt configuration, refer to this link.
For running crontab, refer to this link.
To know more about SVN backups, you can go here.
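
Since the hook below sends the notification mail with mutt, it is worth confirming first that mutt can send mail non-interactively from this machine. A minimal sketch of such a check - the address below is only a placeholder, replace it with one you can actually read:

$ echo "mutt test from the svn server" | mutt -s "SVN backup - mutt test" your.address@example.com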

Here is what needs to go into the post-commit hook in SVN.

#!/bin/sh
#
# Arguments passed to the post-commit hook by Subversion:
# the repository path and the revision that was just committed.
#
REPOS="$1"
REV="$2"

#
# Change the parameters below as required.
#
LOCAL_BACKUP_PATH=/home/ahmed/SVNREPOS_TEST/test_backup_incremental
REMOTE_BACKUP_PATH=/home/ahmed/SVNREPOS_TEST/test_backup_full
LOCAL_MOUNT_BACKUP_PATH=/home/ahmed/SVNREPOS_TEST/test_backup_full
REMOTE_USER="backup_trusted_user"
REMOTE_SERVER="remote.server.com"
# Email addresses, separated by spaces.
EMAIL_GROUP="ahmed@gmail.com ahmed@groups.com"


#
# you need NOT TOUCH THIS !!
#
DATE=`date '+%d'-'%m'-'%Y'-'%H':'%M':'%S'`
BACKUP_FILENAME=commit-inc-ver\($REV\)-$DATE.bkp'.gz'
DIFF_FILENAME=diff-inc-ver\($REV\)-$DATE.gz


#
# Take the incremental dump and (optionally) rsync it to a remote server.
#
svnadmin dump "$REPOS" --revision "$REV" --incremental | gzip > "$LOCAL_BACKUP_PATH/$BACKUP_FILENAME"
# rsync -avzh -e ssh "$LOCAL_BACKUP_PATH" "$REMOTE_USER@$REMOTE_SERVER:$REMOTE_BACKUP_PATH"


#
# The dump can also be copied to a locally mounted backup drive.
# Commented out, since a mounted drive may not be available.
#
#cp "$LOCAL_BACKUP_PATH/$BACKUP_FILENAME" "$LOCAL_MOUNT_BACKUP_PATH"


#
# Now let's send mail to all the people - using mutt :)
# The diff for this revision is attached to the notification mail.
#
svnlook diff "$REPOS" -r "$REV" | gzip > "$DIFF_FILENAME"
mutt -s "SVN Commit Complete - Back-Up for Version $REV" -a "$DIFF_FILENAME" -- $EMAIL_GROUP < /dev/null
rm -f "$DIFF_FILENAME"
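
To restore from these incremental dumps, each gzipped dump is loaded back into a repository with svnadmin load, in revision order. A rough sketch, assuming a freshly created repository and a placeholder dump file name:

$ svnadmin create /path/to/restored_repo
$ gunzip -c 'commit-inc-ver(5)-01-01-2015-10:30:00.bkp.gz' | svnadmin load /path/to/restored_repo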

Full SVN Backup of all Repos.
This is a periodic SVN backup script which needs to be run using crontab.
Setting up a cron job is simple; you can go through the link above, and a sample crontab entry follows the script.
The script below takes a FULL backup of all the repositories within an SVN root directory.

#!/bin/sh
SVN_REPOSITORIES_ROOT_DIR="/home/ahmed/SVNREPOS_TEST/multiple_repos"
BACKUP_DIRECTORY="/home/ahmed/SVNREPOS_TEST/test_backup_full"
DATE=`date '+%d-%m-%Y-%H:%M:%S'`

# Dump every repository under the SVN root directory to its own gzipped file.
for REPOSITORY in `ls -1 "$SVN_REPOSITORIES_ROOT_DIR"`
do
        #echo 'dumping repository: ' $REPOSITORY
        svnadmin dump "$SVN_REPOSITORIES_ROOT_DIR/$REPOSITORY" | gzip > "$BACKUP_DIRECTORY/full-backup-$REPOSITORY-$DATE.gz"
done
echo "dumping repositories: Successful - $DATE" > /tmp/full-backup-$DATE.log
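
To schedule the full backup, the script can be added to crontab. A sketch that runs it every night at 2 AM - the script path is only an assumption about where FullBackupScriptMultipleRepos.sh was saved:

$ crontab -e
# m h dom mon dow   command
0 2 * * * /home/ahmed/scripts/FullBackupScriptMultipleRepos.sh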

Now we are ready for some coding.
Scripts will take care of the rest :)

Files can be found here - Download them as required.
FullBackupScriptMultipleRepos.sh
dot.muttrc
post.commit
