
Server Backup via Cron to Amazon S3



Has anyone set something like this up? I want to do it but I'm not quite sure how to set it up. I want to automate the process so I don't have to think about it at all.

Doing some sort of incremental backup would be best, but I would even take a full backup once a week if that's what it has to be.

This doesn't have to be to S3 either - it could be to another web server or something.

Thanks!


This is the script that I use for this. It works well for me:

=====

#!/bin/bash

# Where backup archives are staged locally before upload
BACKUP_DIR="/backups"
MYSQL_FILENAME="$BACKUP_DIR/prod-$(date +%m%d%Y).sql"
DOCROOT_FILENAME="$BACKUP_DIR/prod-$(date +%m%d%Y).tar.gz"
ETC_FILENAME="$BACKUP_DIR/etc-$(date +%m%d%Y).tar.gz"
BACKUP_HISTORY="365 days"

echo "Creating backups..."

# Dump and compress the production database
mysqldump -u mysql_username --password=mysql_password mysql_database > "$MYSQL_FILENAME"
gzip "$MYSQL_FILENAME"

# Archive the document root
pushd /opt/www
tar czf "$DOCROOT_FILENAME" prod/
popd

# Archive the relevant service configuration from /etc
pushd /etc
tar czf "$ETC_FILENAME" php* nginx* http* sphinx*
popd

# Upload each archive to S3; delete the local copy only if the upload succeeded
s3cmd put "${MYSQL_FILENAME}.gz" s3://yoursite-backups/ && rm -f "${MYSQL_FILENAME}.gz"
s3cmd put "$DOCROOT_FILENAME" s3://yoursite-backups/ && rm -f "$DOCROOT_FILENAME"
s3cmd put "$ETC_FILENAME" s3://yoursite-backups/ && rm -f "$ETC_FILENAME"

echo "... done."
echo
echo "Cleaning up old backups, deleting files older than $BACKUP_HISTORY..."

# Walk the bucket listing and delete anything older than BACKUP_HISTORY
s3cmd ls s3://yoursite-backups/ | while read -r line; do
    createDate=$(echo "$line" | awk '{print $1" "$2}')
    createDate=$(date -d "$createDate" +%s)
    olderThan=$(date -d "-$BACKUP_HISTORY" +%s)
    if [[ $createDate -lt $olderThan ]]; then
        fileName=$(echo "$line" | awk '{print $4}')
        if [[ $fileName != "" ]]; then
            echo "... removing $fileName ..."
            s3cmd del "$fileName"
        fi
    fi
done

echo "... done."
=====
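
For the "via cron" part of the question, a minimal crontab sketch that would run a script like the one above nightly; the script path, log file, and schedule are illustrative assumptions, not taken from the post above:

=====
# Illustrative crontab entry (crontab -e): run the backup script nightly at 02:30
# and append its output to a log file. Path and schedule are placeholders.
30 2 * * * /usr/local/bin/backup-to-s3.sh >> /var/log/backup-to-s3.log 2>&1
=====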

robert


I used to use S3, but I like to rsync data and just got a 25 GB VPS for this... it's cheaper and I have full control.

The problem with rsync is that you're only getting one copy of the data. That's not a really good backup strategy. It is not uncommon for corruption to take more than a day to discover, or for you to want back something that was deleted a week ago. There are rsync-based solutions that do cover this, but I'm not sure whether you're using one of them.

Storage is really cheap, even cheaper with Amazon's Glacier functionality. There is basically no reason not to keep daily backups going for a year or so. IMHO.
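
As a rough sketch only (not from this thread, and using the AWS CLI rather than s3cmd), the Glacier transition is usually automated with a bucket lifecycle rule along these lines; the bucket name and day counts are assumptions:

=====
# Sketch: move objects in the backup bucket to Glacier after 30 days and
# expire them after 365 days. Bucket name and day counts are placeholders.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire-backups",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Apply the rule (requires the AWS CLI)
aws s3api put-bucket-lifecycle-configuration \
    --bucket yoursite-backups \
    --lifecycle-configuration file://lifecycle.json
=====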

robert


The problem with rsync is that you're only getting one copy of the data. That's not a really good backup strategy.


"rsync" was The word used but off course this isn't the way of doing backups. I was answering to this paragraph :smile:

This doesn't have to be to S3 either - it could be to another web server or something.


I use rdiff-backup locally and remotely as an incremental solution.
I use rsync locally and remotely so I can have a "fresh" copy that is at most an hour old (roughly the kind of cron setup sketched below).

All this is replicated at home
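
A rough crontab sketch of that kind of setup; the paths, host, and schedule are assumptions, not the poster's actual configuration:

=====
# Hourly rsync mirror of the docroot to a second server -- the "fresh" copy
0 * * * * rsync -a --delete /opt/www/prod/ backup@backuphost:/srv/mirror/prod/

# Nightly incremental snapshot with rdiff-backup, then prune old increments
30 3 * * * rdiff-backup /opt/www/prod backup@backuphost::/srv/rdiff/prod
45 3 * * * rdiff-backup --remove-older-than 1Y backup@backuphost::/srv/rdiff/prod
=====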

:smile:


