In the world of containerization, Docker has emerged as the go-to solution for creating, deploying, and running applications in containers. However, as with any data-driven application, backing up your data is paramount so that you can recover from unforeseen data loss or corruption. This article will guide you through creating a shell script that backs up Docker volumes on a regular schedule, uploads the backups to Amazon S3 for offsite storage, and cleans up old backups to conserve space.
Why Back Up Docker Volumes?
Docker volumes are used to persist data generated by and used by Docker containers. By backing up these volumes, you safeguard against data loss that can occur from container failures, accidental deletions, system crashes, or corruption.
Write a Shell Script to Back Up Docker Volumes
First, let’s set up our shell script environment. Create a new file named docker_volume_backup.sh, open it in your favorite text editor, and copy in the script below.
#!/bin/bash
# Script Name: Docker Volume Backup Script
# Description: This script backs up Docker volumes, uploads them to Amazon S3, and cleans up old backups.
# Written by: Rahul Kumar
# Website: https://tecadmin.net
# Version: 1.0.1

# Configuration
BACKUP_PATH="/path/to/backup"
S3_BUCKET="s3://your-bucket-name/docker-volumes"
MAX_BACKUPS=5   # The maximum number of local backups to keep

# Create the backup directory if it does not exist
mkdir -p "$BACKUP_PATH"

# Function to back up a single Docker volume.
# Sets BACKUP_NAME (global) so the caller can pass it to upload_to_s3.
backup_volume() {
    VOLUME_NAME=$1
    TIMESTAMP=$(date +"%Y%m%d-%H%M%S")
    BACKUP_NAME="${VOLUME_NAME}_${TIMESTAMP}.tar.gz"
    echo "Backing up Docker volume '$VOLUME_NAME' to $BACKUP_NAME"
    # Mount the volume read-only into a throwaway Alpine container and tar its contents
    docker run --rm -v "$VOLUME_NAME":/volume:ro -v "$BACKUP_PATH":/backup alpine \
        tar czf "/backup/$BACKUP_NAME" -C /volume ./
}

# Function to upload a backup file to S3
upload_to_s3() {
    BACKUP_FILE=$1
    echo "Uploading $BACKUP_FILE to S3"
    aws s3 cp "$BACKUP_PATH/$BACKUP_FILE" "$S3_BUCKET/$BACKUP_FILE"
}

# Function to remove older local backups, keeping only the newest MAX_BACKUPS files
cleanup_backups() {
    echo "Cleaning up old backups"
    BACKUPS=$(ls "$BACKUP_PATH" | wc -l)
    BACKUPS_TO_DELETE=$((BACKUPS - MAX_BACKUPS))
    if [ "$BACKUPS_TO_DELETE" -gt 0 ]; then
        # List newest first, then delete everything past the first MAX_BACKUPS entries
        ls -t "$BACKUP_PATH" | tail -n "$BACKUPS_TO_DELETE" | while read -r BACKUP_FILE; do
            echo "Deleting old backup $BACKUP_FILE"
            rm "$BACKUP_PATH/$BACKUP_FILE"
        done
    fi
}

# Define your Docker volumes in an array
VOLUMES=("volume1" "volume2")

for VOLUME in "${VOLUMES[@]}"; do
    backup_volume "$VOLUME"
    upload_to_s3 "$BACKUP_NAME"
done

cleanup_backups

echo "Backup process completed."
The configuration section at the top of the script defines the local backup directory, the S3 bucket path for uploads, and the maximum number of backups to retain:
- BACKUP_PATH="/path/to/backup": Specifies the local directory path where Docker volume backups will be stored.
- S3_BUCKET="s3://your-bucket-name/docker-volumes": Sets the Amazon S3 bucket path, including the directory for storing Docker volume backups. The upload step uses the AWS CLI, so make sure it is installed and configured with credentials (for example via aws configure) before running the script.
- MAX_BACKUPS=5: Determines the maximum number of Docker volume backup files to keep locally before older backups are deleted.
- VOLUMES=("volume1" "volume2"): Defines all the Docker volumes the script should back up. You can list the available volumes with the `docker volume ls` command, as shown below.
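To fill in the VOLUMES array, list the volumes on your host; the --format option of docker volume ls prints just the names, one per line:

# List all Docker volumes with their drivers
docker volume ls

# Print only the volume names (convenient for the VOLUMES array)
docker volume ls --format "{{.Name}}"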
Running the Script Manually
To ensure the script functions correctly before setting it up on a schedule, execute it manually. Begin by making the script executable with the following command:
chmod +x docker_volume_backup.sh
Next, execute the script:
./docker_volume_backup.sh
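If everything is configured correctly, the output should look roughly like the following; the timestamps here are illustrative, and the AWS CLI prints its own progress lines between the upload messages:

Backing up Docker volume 'volume1' to volume1_20240101-020000.tar.gz
Uploading volume1_20240101-020000.tar.gz to S3
Backing up Docker volume 'volume2' to volume2_20240101-020001.tar.gz
Uploading volume2_20240101-020001.tar.gz to S3
Cleaning up old backups
Backup process completed.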
After the script has run successfully, confirm that the backups have been created in the location specified by $BACKUP_PATH and that they are also present in the designated S3 bucket.
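A quick way to check both locations, assuming the example paths from the configuration section (the archive name below is illustrative):

# Check the local backup directory
ls -lh /path/to/backup

# Inspect an archive's contents without extracting it
tar -tzf /path/to/backup/volume1_20240101-020000.tar.gz

# List the uploaded backups in the S3 bucket
aws s3 ls s3://your-bucket-name/docker-volumes/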
Scheduling the Script with Crontab
After manually verifying the script’s functionality, you can schedule it to run automatically at regular intervals using crontab. To edit the crontab file and add a new scheduled task, execute crontab -e and include the following line:
0 2 * * * bash /path/to/docker_volume_backup.sh
This crontab entry schedules the script to run daily at 2:00 AM. Adjust the timing as needed to suit your backup strategy.
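Note that cron runs with a minimal environment, so the docker and aws commands must be on cron's PATH (or referenced by absolute path). It is also useful to capture the script's output in a log file for troubleshooting; the log path below is just an example:

0 2 * * * bash /path/to/docker_volume_backup.sh >> /var/log/docker_volume_backup.log 2>&1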
Conclusion
This script provides a comprehensive approach to backing up Docker volumes, uploading those backups to Amazon S3, and managing backup storage space. Remember to make the script executable with chmod +x docker_volume_backup.sh and to test it thoroughly in a non-production environment before setting it to run as a scheduled cron job for regular backups.
By automating your Docker volume backups and ensuring offsite storage, you’re taking crucial steps to protect your data and ensure business continuity, no matter what challenges you may face.
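Finally, a backup is only useful if you can restore it. The same pattern from the backup function works in reverse; here is a minimal restore sketch, where the volume name and archive file are illustrative:

# Restore a backup archive into a (new or existing) Docker volume
docker run --rm -v volume1:/volume -v /path/to/backup:/backup alpine \
    tar xzf /backup/volume1_20240101-020000.tar.gz -C /volume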