Colo backup, rsync, ssh, mysql and you… well me.

I have a server located with ServerPronto in Florida. It's one of their low-end servers and for the most part it works great. Sometimes they have network problems, and once a drive started failing; they replaced it after a few support tickets. All in all I think it's a great deal for lower-end hosting. This is not my hardware but some white-box clone running Debian that is more like a desktop than a "server". The best thing is that I have root and I'm in control of the server; the hosting company never gets in my way and they don't know the root password or have an account. It's really a colo for computer people, as they are very hands-off in my experience. Not good for the average beginner who may break their box or need help from time to time. Still, I'll give them 5 stars for value, 4 stars for network and 3 stars for support.

Anyway. There are about 5 domains running on my server (the largest being my wife's www.klosterisland.com) and it holds up well enough. As I'm an IT manager at Insight Communications, I have uptime, backups and recoverability on the brain. And since this is a low-end server, it's single points of failure all over the place.

With this server being 900 miles away on low-end hardware, where strangers have physical access and I don't know the people, backups are very, very important to me. I use mysqldump scripts, rsync, ssh keys, screen and gzip to get backups to my NAS at home.
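The ssh keys part is just the standard passwordless key setup. A rough sketch, assuming the key lives on the colo box since the rsync pushes from there (the xxx host is the same placeholder as in the rsync command further down):

ssh-keygen -t rsa -N ""
ssh-copy-id xxx@xxxx.xxxx.xxx
ssh xxx@xxxx.xxxx.xxx hostname    # should come back with no password prompt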

Using a directory called /backup, I dump the MySQL DBs and gzip them there. Then, using rsync, I sync the colo server data to a server located at home every day at 3:00 AM. On the home server, a script gzips the synced data into a single file under a mon/tue/wed/thur/fri/sat/sun directory structure. That gives me 7 days of backups on my NAS, and I can recover the websites without much stress.
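The dump script itself is nothing fancy. A minimal sketch of the idea (the password, script names and paths here are made-up examples, not my real ones):

#!/bin/sh
# dump every database to /backup as a gzip'd .sql file before the rsync runs
BACKUP_DIR=/backup
for DB in $(mysql -u root -pPASSWORD -N -e 'show databases' | grep -v '^information_schema$'); do
    mysqldump -u root -pPASSWORD "$DB" | gzip > "$BACKUP_DIR/$DB.sql.gz"
done

Cron on the colo server kicks off the dump first and the rsync at 3:00 AM, something like:

30 2 * * * /root/bin/mysql_backup.sh
0 3 * * * /root/bin/colo_rsync.sh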

By also exporting a regular list of installed packages and shipping the logs off the server, migrating to a new colo or moving to a replacement server becomes merely time consuming, not a bad experience full of sadness and data loss.
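On Debian that package list is a one-liner, dropped into the backup directory so the nightly rsync carries it along (the file name is just an example):

dpkg --get-selections > /backup/package-selections.txt
# and on a replacement box, roughly:
dpkg --set-selections < package-selections.txt && apt-get dselect-upgrade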

Below is the rsync command I use. Your mileage may vary.

rsync --exclude /tmp/ --exclude /proc/ --exclude /dev/ --exclude /sys/ --exclude="*.MYD" --exclude="*.MYI" --bwlimit=100 -avz -e ssh /* xxx@xxxx.xxxx.xxx:/opt/storage/colo/current

Reading this: I'm excluding /tmp, /proc, /dev, /sys and the raw MySQL DB files, limiting the bandwidth to 100 KB/s, and using ssh (with ssh keys) as the backhaul. Before cron runs the rsync, I mysqldump the DBs and gzip them into my /opt/backup directory on the colo server. The rsync process picks them up and sends them to the server at home, updating the target directory called "current". I then gzip the contents of the "current" directory and place that .gz file into a mon, tue, wed… directory, just to be overwritten next week.
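That home-side rotation is equally dumb. A sketch of the idea, with made-up paths and file names:

#!/bin/sh
# after the nightly rsync finishes, tar+gzip the "current" tree into a
# day-of-week archive; each weekday's file simply gets overwritten a week later
DAY=$(date +%a | tr 'A-Z' 'a-z')    # mon, tue, wed...
mkdir -p /opt/storage/colo/$DAY
tar -czf /opt/storage/colo/$DAY/colo-backup.tar.gz -C /opt/storage/colo current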

I know there are other good, free backup applications out there for Linux, however this process is SO SIMPLE.

I could add in email notification on a failed rsync, perform independent file counts on source and target, build in process locking, etcetera. I did not, as this seems to work 360 days of the year, and for my little colo server that's, well… good enough.
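If I ever change my mind, the locking and failure mail would only be a couple of lines anyway. A rough sketch of what that could look like, not something I actually run (script path and address are made up):

# grab a lock so overlapping runs can't stack up; mail me if the backup exits non-zero
flock -n /var/lock/colo_rsync.lock /root/bin/colo_rsync.sh \
  || echo "colo rsync failed $(date)" | mail -s "colo backup failed" me@example.com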