Summary
Build a bash script that controls what, where, and when backups run.
Schedule the executable backup.sh with cron, and review the logs and destination after runs.
Test the script, use absolute paths to avoid cron and runtime failures, and use comments for documentation.
A tool like Déjà Dup is an easy and excellent graphical way to automate Linux backups. However, writing your own backup script is the superior, DIY way to automate them: it offers far more control over what gets backed up, where the data goes, and how often backup jobs run.
It’s also a fun introduction to basic scripting. Here’s how I created a simple backup script and used cron to automate backup jobs.

Creating a simple backup script and automating it

As long as you edit and update the script with your absolute paths, your backup location can be a local folder, a new disk partition, or an external drive.
Choosing a backup destination

The plan was to create a lean backup script, but I also wanted a copy of essential Linux directories, just in case I ever needed to recover the system. My backup location for this project was a 128 GB USB flash drive.
Preparing the backup location

Whether your backup location is a USB flash drive, an external hard disk, or a new partition, pay attention to absolute directory paths and mount points; you’ll need them once we start scripting. This backup script uses rsync, which is usually preinstalled on most distros. Confirm your rsync version, and if it’s missing, use your package manager to install it.
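If you’re unsure of the drive’s exact mount point, you can confirm it before touching the script. A quick check, assuming the drive is mounted at the same path as mine (substitute your own):

findmnt -T "/media/htg/DATA BACKUP"   # prints the device, filesystem, and mount options for that path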
I started the project by creating a new directory called ‘Backup’ on the mounted USB flash drive. You can create this folder graphically by navigating to your backup location and then using the Ctrl+Shift+N shortcut. I opted for the terminal and used the lsblk, cd, and mkdir commands.
lsblk
cd "/media/htg/DATA BACKUP"
mkdir Backup
rsync --version
sudo apt update -y && sudo apt install rsync

Deciding what to back up and creating a backup.sh script

For system recovery purposes, I chose to back up a directory containing consolidated personal files and essential Linux directories like /home, /etc, /var, /usr/local, /root, and /opt. Most Linux terminal text editors are fine for writing a basic script. I used nano to create and save a simple backup.sh bash script in the home directory.
Copy-paste the following script into your preferred terminal text editor:

#!/bin/bash
# This bash script backs up Linux recovery directories and personal files
# Preserves ownership, permissions, timestamps, ACLs, and xattrs
set -euo pipefail

# ---------- CONFIG ----------
HOSTNAME="$(hostname)"
DATE="$(date +%F)"
BACKUP_ROOT="/media/htg/DATA BACKUP/Backup"
DEST_DIR="$BACKUP_ROOT/archives/${HOSTNAME}_${DATE}"
LOG_FILE="$BACKUP_ROOT/logs/backup_${HOSTNAME}_${DATE}.log"
TMP_DIR="$BACKUP_ROOT/tmp"
# Don't forget to edit the BACKUP_ROOT, DEST_DIR, LOG_FILE, and TMP_DIR paths
# Customize SOURCES=( ) to include the files and directories you want to back up
SOURCES=(
  "/home"
  "/etc"
  "/var"
  "/root"
  "/opt"
  "/usr/local"
  "/home/htg/Backup"
)
EXCLUDES=(
  "--exclude=/var/cache/"
  "--exclude=/var/tmp/"
  "--exclude=/var/lib/apt/lists/"
  "--exclude=/home/*/.cache/"
  "--exclude=/home/*/Downloads/"
  "--exclude=/var/lib/docker/"
  "--exclude=/var/lib/containers/"
)
# Add these flags for backup summary and progress:
# --info=progress2: Total progress line
# --info=name0: Hide individual filenames
# --stats: Final summary block
# --no-inc-recursive: Better progress accuracy
RSYNC_FLAGS=(-aAXH --numeric-ids --delete --human-readable --inplace --partial --info=progress2 --info=name0 --stats --no-inc-recursive)

# ---------- PREPARATION ----------
# Create destination folders on the flash drive
mkdir -p "$DEST_DIR" "$TMP_DIR" "$(dirname "$LOG_FILE")"
touch "$LOG_FILE"

# ---------- BACKUP ----------
echo "[$(date)] Starting backup to $DEST_DIR" | tee -a "$LOG_FILE"
for SRC in "${SOURCES[@]}"; do
  echo "[$(date)] Backing up $SRC ..." | tee -a "$LOG_FILE"
  # Run rsync and log output
  rsync "${RSYNC_FLAGS[@]}" "${EXCLUDES[@]}" "$SRC" "$DEST_DIR" >>"$LOG_FILE" 2>&1
done
echo "[$(date)] Backup completed" | tee -a "$LOG_FILE"

# ---------- VERIFY ----------
echo "[$(date)] Listing destination sizes:" | tee -a "$LOG_FILE"
du -sh "$DEST_DIR"/* 2>/dev/null | tee -a "$LOG_FILE"

exit 0

Edit the script by replacing BACKUP_ROOT, DEST_DIR, LOG_FILE, and TMP_DIR with your absolute paths, then write out and exit; in nano, press Ctrl+O, Enter, and then Ctrl+X. The next part is using the chmod command to make the script executable and the ls command to confirm the execute permission (look for the x bits in the permissions string, e.g., -rwxr-xr-x):

nano ~/backup.sh
chmod +x ~/backup.sh
ls -l ~/backup.sh

Testing the backup script and automating it using cron jobs

Even though any backup strategy can eventually fail, testing backup systems and scripts reduces the chances of catastrophic data loss.
It catches errors that could cause the process to fail, and it’s a simple but essential step that safeguards the integrity of any backup process. On some systems, you may need to use sudo to test-run the script:

~/backup.sh
sudo ~/backup.sh
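Before a full run, you can also preview what would be copied. This isn’t part of the script above, but rsync’s --dry-run (-n) flag makes a safe no-op pass; a minimal sketch:

# Preview one source without writing anything (sudo avoids permission
# errors on protected files like /etc/shadow):
sudo rsync -aAXHn --stats /etc "/media/htg/DATA BACKUP/Backup/archives/test"
# Or temporarily append the flag to the script's RSYNC_FLAGS array:
RSYNC_FLAGS+=(--dry-run)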
If it’s all good, you’ll see the ‘Starting backup’ and ‘Backing up /home ...’ messages, and if you go to your backup destination, you should see the backed-up directories. If you’re backing up a lot of data to an external drive, the process may take a while.
As long as you don’t get an error message, be patient, even if it seems like the terminal has frozen. After testing the script and confirming that it works, I used cron to schedule it to run at 8:00 p.m., crontab -l to confirm the scheduled backup job, the ls command to confirm the script was still executable, and the systemctl command to check cron’s service status:

sudo crontab -e
0 20 * * * /usr/bin/bash /home/htg/backup.sh >> "/media/htg/DATA BACKUP/Backup/logs/cron_backup.log" 2>&1
sudo crontab -l
ls -l /home/htg/backup.sh
systemctl status cron

At this point, I had successfully created and tested my backup script and used cron to automate it. The final step was confirming the backup ran and checking the logs after 8:00 p.m. the next day.
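Here’s one way to do that check, assuming the paths from my setup (adjust to yours); the journalctl line applies to systemd distros, while on others you’d look in /var/log/syslog or /var/log/cron:

journalctl -u cron --since today | grep backup.sh                 # did cron launch the job?
tail -n 30 "/media/htg/DATA BACKUP/Backup/logs/cron_backup.log"   # script output captured by cron
du -sh "/media/htg/DATA BACKUP/Backup/archives/"*                 # spot-check the backed-up sizes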
The 3 lessons I learned the hard way

Creating this backup script was a fun, educational experience. The project taught me three lessons that have stuck with me to this day.
Always test backup scripts

I didn’t test my first backup script. Can you guess what happened? That’s right: when I ran a test backup, the script failed. It took a while to figure out that I’d messed up a shell variable; I’d typed DATE+"$(date +%F)" instead of DATE="$(date +%F)", which caused a ‘command not found’ error on line 10.
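Reconstructed from that anecdote, the one-character difference looks like this:

DATE+"$(date +%F)"   # broken: without '=', bash expands this to a word like 'DATE+2025-01-15'
                     # and tries to execute it as a command -> 'command not found'
DATE="$(date +%F)"   # fixed: '=' with no surrounding spaces assigns the value to DATE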
Testing backup scripts catches syntax mistakes and permission errors before they can ruin a scheduled run, and it’s simply good practice.

Use absolute paths

Because cron runs jobs in a minimal environment with a limited PATH, and a single wrong path can make a job fail, get into the habit of using full, absolute directory and file paths in your script. It minimizes errors and streamlines your backups, as the sketch below shows.
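The crontab entry here is the one from my setup above, while the PATH line is an optional addition (cron accepts plain variable assignments at the top of a crontab):

# cron runs jobs with a minimal environment, so set PATH explicitly
# and keep every file reference absolute:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 20 * * * /usr/bin/bash /home/htg/backup.sh >> "/media/htg/DATA BACKUP/Backup/logs/cron_backup.log" 2>&1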
Comment your script

Commenting your script is like leaving notes for your future self (and other people) that explain the intention behind a specific part or section of the script. In bash scripting, the hash symbol (#) marks a comment.
You may have noticed several comments in my bash script. Consider an example where you need to change a script six months after creating it. Without comments, the chances of wrongly editing essential parts and breaking your script are high, because the details are no longer fresh in your mind.
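As a quick illustration, take one of the excludes from my script, with and without a note:

# Uncommented -- future you has to guess whether this is safe to remove:
EXCLUDES+=("--exclude=/var/lib/docker/")
# Commented -- the intent travels with the line:
EXCLUDES+=("--exclude=/var/lib/docker/")   # container layers are rebuildable; skipping them saves gigabytes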
That’s how I automated Linux backups with a simple bash script, and those are the three lessons I learned the hard way. While scripting is not the easiest way to automate backups, it’s fun and educational, especially if you’re looking to learn basic bash scripting.