Automated backups are essential for data protection and disaster recovery. This guide will help you implement a robust backup automation system on Alpine Linux, covering various backup strategies, tools, and best practices for ensuring your data is always protected.
Table of Contents
- Prerequisites
- Understanding Backup Strategies
- Installing Backup Tools
- Configuring Rsync Backups
- Setting Up Borg Backup
- Database Backup Automation
- System Backup Configuration
- Backup Scheduling
- Remote Backup Setup
- Backup Verification
- Retention Policies
- Monitoring and Alerts
- Disaster Recovery Planning
- Troubleshooting
- Best Practices
- Conclusion
Prerequisites
Before setting up backup automation, ensure you have:
- Alpine Linux with root access
- Sufficient storage for backups
- Basic understanding of cron and shell scripting
- Network connectivity for remote backups
- List of critical data to backup
- Backup destination (local/remote storage)
Understanding Backup Strategies
Backup Types
Before choosing a strategy, measure how much data you actually need to protect:
# Check current disk usage
df -h
# List important directories
du -sh /home /etc /var/lib/* | sort -h
Common backup strategies:
- Full Backup: Complete copy of all data
- Incremental: Only changed files since last backup
- Differential: Changes since last full backup
- 3-2-1 Rule: 3 copies, 2 different media, 1 offsite
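As a quick sanity check against the 3-2-1 rule, confirm that your local backup target sits on a different filesystem than the data it protects (the /backup path here matches the layout used throughout this guide):
# /backup should live on a separate disk from / (copy 2 of 3)
df -h / /backup
# The offsite copy (copy 3) is covered in the Remote Backup Setup section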
Installing Backup Tools
Step 1: Install Core Backup Utilities
# Update package repository
apk update
# Install rsync
apk add rsync rsync-doc
# Install compression tools
apk add gzip bzip2 xz zstd
# Install archiving tools
apk add tar cpio
# Install backup utilities
apk add borgbackup restic duplicity
Step 2: Install Supporting Tools
# Install monitoring tools
apk add msmtp mailx
# Install database clients for backups
apk add postgresql-client mariadb-client
# Install encryption tools
apk add gnupg openssl
# Install network tools
apk add openssh-client curl wget
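Before wiring these tools into scripts, confirm they are installed and on the PATH:
# Verify installed versions
rsync --version | head -1
borg --version
restic version
tar --version | head -1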
Configuring Rsync Backups
Step 1: Basic Rsync Backup Script
# Create backup script directory
mkdir -p /usr/local/scripts/backup
# Create rsync backup script
cat > /usr/local/scripts/backup/rsync-backup.sh << 'EOF'
#!/bin/sh
# Rsync backup script
# Configuration
SOURCE_DIRS="/home /etc /var/www /var/lib"
BACKUP_DIR="/backup/rsync"
DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backup/rsync-$DATE.log"
EXCLUDE_FILE="/etc/backup/rsync-exclude.txt"
# Create directories
mkdir -p "$BACKUP_DIR" "$(dirname "$LOG_FILE")"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Start backup
log_message "Starting rsync backup"
# Backup each directory
for dir in $SOURCE_DIRS; do
if [ -d "$dir" ]; then
log_message "Backing up $dir"
rsync -avz --delete \
--exclude-from="$EXCLUDE_FILE" \
"$dir" "$BACKUP_DIR/" \
>> "$LOG_FILE" 2>&1
else
log_message "Warning: $dir does not exist"
fi
done
# Create backup summary
BACKUP_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
log_message "Backup completed. Total size: $BACKUP_SIZE"
# Compress any tar archives older than 7 days
# (only applies if you also drop tar archives under $BACKUP_DIR;
# the rsync mirror itself creates none)
find "$BACKUP_DIR" -name "*.tar" -mtime +7 -exec gzip {} \;
log_message "Backup process finished"
EOF
chmod +x /usr/local/scripts/backup/rsync-backup.sh
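Run the script once by hand before scheduling it, and review the log it writes:
# First manual run
/usr/local/scripts/backup/rsync-backup.sh
# Inspect the most recent log
ls -1t /var/log/backup/rsync-*.log | head -1 | xargs tail -n 20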
Step 2: Configure Exclusions
# Create exclusion file
mkdir -p /etc/backup
cat > /etc/backup/rsync-exclude.txt << 'EOF'
# Rsync exclusion patterns
*.tmp
*.temp
*.cache
*.swp
.cache/
/tmp/*
/var/tmp/*
/var/cache/*
/proc/*
/sys/*
/dev/*
/run/*
*.log
core
core.*
EOF
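A dry run (-n) shows what rsync would transfer without copying anything, which is a safe way to confirm the exclusion patterns behave as intended:
# Preview what would be copied from /etc, honoring the exclusions
rsync -avn --exclude-from=/etc/backup/rsync-exclude.txt /etc /tmp/exclude-test/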
Step 3: Advanced Rsync Configuration
# Create incremental rsync backup script
cat > /usr/local/scripts/backup/rsync-incremental.sh << 'EOF'
#!/bin/sh
# Incremental rsync backup using hard links
# Configuration
SOURCE_DIRS="/home /etc /var/www"
BACKUP_ROOT="/backup/incremental"
CURRENT_BACKUP="$BACKUP_ROOT/current"
DATE=$(date +%Y%m%d_%H%M%S)
NEW_BACKUP="$BACKUP_ROOT/backup-$DATE"
# Create backup directory
mkdir -p "$BACKUP_ROOT"
# Perform incremental backup
if [ -d "$CURRENT_BACKUP" ]; then
# Create new backup with hard links to unchanged files
# (resolve the symlink first; cp -al on the link itself would only copy the link)
cp -al "$(readlink -f "$CURRENT_BACKUP")" "$NEW_BACKUP"
fi
# Sync changes
for dir in $SOURCE_DIRS; do
mkdir -p "$NEW_BACKUP/$(dirname "$dir")"
rsync -av --delete "$dir" "$NEW_BACKUP/$(dirname "$dir")/"
done
# Update current symlink
rm -f "$CURRENT_BACKUP"
ln -s "$NEW_BACKUP" "$CURRENT_BACKUP"
# Clean old backups: tail -n +31 keeps the 30 newest (portable to busybox)
ls -1dt "$BACKUP_ROOT"/backup-* | tail -n +31 | xargs rm -rf
echo "Incremental backup completed: $NEW_BACKUP"
EOF
chmod +x /usr/local/scripts/backup/rsync-incremental.sh
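Unchanged files in consecutive snapshots should share an inode. After two runs you can confirm the deduplication with ls -li; matching inode numbers (first column) mean the file is stored only once. This assumes /etc/hostname exists and did not change between runs:
# Run two snapshots, then compare inode numbers of an unchanged file
/usr/local/scripts/backup/rsync-incremental.sh
/usr/local/scripts/backup/rsync-incremental.sh
ls -li /backup/incremental/backup-*/etc/hostname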
Setting Up Borg Backup
Step 1: Initialize Borg Repository
# Create Borg backup script
cat > /usr/local/scripts/backup/borg-backup.sh << 'EOF'
#!/bin/sh
# Borg backup script
# Configuration
export BORG_REPO="/backup/borg"
export BORG_PASSPHRASE="your-secure-passphrase"
BACKUP_DIRS="/home /etc /var/www /var/lib/docker"
BACKUP_NAME="$(hostname)-$(date +%Y%m%d_%H%M%S)"
LOG_FILE="/var/log/backup/borg-$(date +%Y%m%d).log"
# Ensure log directory exists
mkdir -p "$(dirname "$LOG_FILE")"
# Initialize repository if needed
if [ ! -d "$BORG_REPO" ]; then
echo "Initializing Borg repository..."
borg init --encryption=repokey "$BORG_REPO"
fi
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Create backup
log_message "Starting Borg backup: $BACKUP_NAME"
borg create \
--verbose \
--filter AME \
--list \
--stats \
--show-rc \
--compression zstd,5 \
--exclude-caches \
--exclude '/tmp/*' \
--exclude '/var/tmp/*' \
--exclude '/var/cache/*' \
"$BORG_REPO::$BACKUP_NAME" \
$BACKUP_DIRS \
>> "$LOG_FILE" 2>&1
# $? here is borg's exit code (piping through tee would report tee's instead)
backup_exit=$?
# Prune old backups
log_message "Pruning old backups"
borg prune \
--list \
--prefix "$(hostname)-" \
--show-rc \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 12 \
--keep-yearly 2 \
"$BORG_REPO" \
>> "$LOG_FILE" 2>&1
prune_exit=$?
# Verify backup
log_message "Verifying backup integrity"
borg check \
--repository-only \
"$BORG_REPO" \
>> "$LOG_FILE" 2>&1
check_exit=$?
# Summary
if [ $backup_exit -eq 0 ] && [ $prune_exit -eq 0 ] && [ $check_exit -eq 0 ]; then
log_message "Backup completed successfully"
else
log_message "Backup completed with errors"
fi
# List backups
borg list "$BORG_REPO"
# Clear the passphrase from the environment
unset BORG_PASSPHRASE
EOF
chmod +x /usr/local/scripts/backup/borg-backup.sh
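After the first run you can browse the repository; the archive name is whatever borg list prints:
# List archives and inspect the most recent one
export BORG_PASSPHRASE="your-secure-passphrase"
borg list /backup/borg
borg info /backup/borg::"$(borg list --short /backup/borg | tail -1)"
unset BORG_PASSPHRASE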
Step 2: Borg Configuration File
# Create Borg configuration
cat > /etc/backup/borg.conf << 'EOF'
# Borg Backup Configuration
# Repository location
BORG_REPO="/backup/borg"
# Encryption passphrase (store securely!)
BORG_PASSPHRASE="your-secure-passphrase"
# Backup paths
BACKUP_PATHS="/home /etc /var/www /var/lib"
# Exclusion patterns
BORG_EXCLUDE_PATTERNS="
*.pyc
*.cache
*/tmp/*
*/cache/*
*/temp/*
"
# Retention policy
KEEP_DAILY=7
KEEP_WEEKLY=4
KEEP_MONTHLY=12
KEEP_YEARLY=2
# Compression
COMPRESSION="zstd,5"
# Remote repository (optional)
# BORG_REPO="ssh://user@backup-server/path/to/repo"
EOF
# Secure the configuration file
chmod 600 /etc/backup/borg.conf
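The scripts above hardcode their settings; a variant that reads this file instead might look like the following sketch (the scripts in this guide do not source it automatically):
# Load shared settings at the top of a backup script
. /etc/backup/borg.conf
export BORG_REPO BORG_PASSPHRASE
borg create --compression "$COMPRESSION" \
"$BORG_REPO::$(hostname)-$(date +%Y%m%d)" $BACKUP_PATHS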
Database Backup Automation
Step 1: PostgreSQL Backup
# Create PostgreSQL backup script
cat > /usr/local/scripts/backup/postgres-backup.sh << 'EOF'
#!/bin/sh
# PostgreSQL backup script
# Configuration
BACKUP_DIR="/backup/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backup/postgres-$DATE.log"
RETENTION_DAYS=7
# Database connection
export PGHOST="localhost"
export PGPORT="5432"
export PGUSER="postgres"
# Create backup and log directories
mkdir -p "$BACKUP_DIR" "$(dirname "$LOG_FILE")"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "Starting PostgreSQL backup"
# Get list of databases
DATABASES=$(psql -t -c "SELECT datname FROM pg_database WHERE datname NOT IN ('template0', 'template1', 'postgres');")
# Backup each database
for db in $DATABASES; do
db=$(echo "$db" | tr -d ' ')
if [ -n "$db" ]; then
log_message "Backing up database: $db"
pg_dump -Fc -f "$BACKUP_DIR/${db}_${DATE}.dump" "$db" 2>&1 | tee -a "$LOG_FILE"
# Compress backup (custom-format dumps are already compressed, so gains are modest)
gzip "$BACKUP_DIR/${db}_${DATE}.dump"
fi
done
# Backup global objects
log_message "Backing up global objects"
pg_dumpall --globals-only > "$BACKUP_DIR/globals_${DATE}.sql"
gzip "$BACKUP_DIR/globals_${DATE}.sql"
# Clean old backups
log_message "Cleaning old backups"
find "$BACKUP_DIR" -name "*.gz" -mtime +$RETENTION_DAYS -delete
log_message "PostgreSQL backup completed"
EOF
chmod +x /usr/local/scripts/backup/postgres-backup.sh
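To restore one of these custom-format dumps, decompress it and feed it to pg_restore (the database and file names here are illustrative):
# Recreate the database, then restore into it
createdb mydb
gunzip -c /backup/postgresql/mydb_20240101_020000.dump.gz | pg_restore -d mydb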
Step 2: MySQL/MariaDB Backup
# Create MySQL backup script
cat > /usr/local/scripts/backup/mysql-backup.sh << 'EOF'
#!/bin/sh
# MySQL/MariaDB backup script
# Configuration
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backup/mysql-$DATE.log"
MYSQL_USER="root"
MYSQL_PASS="your-password" # consider an option file (e.g. /root/.my.cnf) instead of a hardcoded password
RETENTION_DAYS=7
# Create backup and log directories
mkdir -p "$BACKUP_DIR" "$(dirname "$LOG_FILE")"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "Starting MySQL backup"
# Get list of databases
DATABASES=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
# Backup each database
for db in $DATABASES; do
log_message "Backing up database: $db"
mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASS" \
--single-transaction \
--routines \
--triggers \
--events \
"$db" | gzip > "$BACKUP_DIR/${db}_${DATE}.sql.gz"
done
# Backup all databases
log_message "Creating full backup"
mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASS" \
--all-databases \
--single-transaction \
--routines \
--triggers \
--events | gzip > "$BACKUP_DIR/all_databases_${DATE}.sql.gz"
# Clean old backups
log_message "Cleaning old backups"
find "$BACKUP_DIR" -name "*.gz" -mtime +$RETENTION_DAYS -delete
log_message "MySQL backup completed"
EOF
chmod +x /usr/local/scripts/backup/mysql-backup.sh
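Restoring is the reverse of the dump (names are illustrative):
# Restore a single database from its compressed dump
mysql -u"root" -p -e "CREATE DATABASE IF NOT EXISTS mydb"
gunzip -c /backup/mysql/mydb_20240101_020000.sql.gz | mysql -u"root" -p mydb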
System Backup Configuration
Step 1: Full System Backup
# Create system backup script
cat > /usr/local/scripts/backup/system-backup.sh << 'EOF'
#!/bin/sh
# Full system backup script
# Configuration
BACKUP_DIR="/backup/system"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/system-backup-$DATE.tar.gz"
LOG_FILE="/var/log/backup/system-$DATE.log"
# Exclude patterns
EXCLUDES="
--exclude=/proc
--exclude=/sys
--exclude=/dev
--exclude=/run
--exclude=/mnt
--exclude=/media
--exclude=/tmp
--exclude=/var/tmp
--exclude=/backup
--exclude=/swapfile
--exclude=/var/cache
"
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "Starting full system backup"
# Create backup
tar -czpf "$BACKUP_FILE" \
$EXCLUDES \
--warning=no-file-changed \
--warning=no-file-removed \
/ 2>&1 | tee -a "$LOG_FILE"
# Check backup integrity
log_message "Verifying backup integrity"
tar -tzf "$BACKUP_FILE" > /dev/null 2>&1
if [ $? -eq 0 ]; then
log_message "Backup integrity verified"
else
log_message "ERROR: Backup integrity check failed"
fi
# Calculate backup size
BACKUP_SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
log_message "Backup completed. Size: $BACKUP_SIZE"
# Clean old backups (keep last 3)
ls -1t "$BACKUP_DIR"/system-backup-*.tar.gz | tail -n +4 | xargs rm -f
log_message "System backup process finished"
EOF
chmod +x /usr/local/scripts/backup/system-backup.sh
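A full-system tarball is also handy for pulling back individual files. Since GNU tar strips the leading / when archiving, members are addressed with relative paths (etc/fstab is just an example):
# Extract a single file from the newest system backup into /tmp/restore
mkdir -p /tmp/restore
tar -xzpf "$(ls -1t /backup/system/system-backup-*.tar.gz | head -1)" \
-C /tmp/restore etc/fstab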
Step 2: Configuration Backup
# Create configuration backup script
cat > /usr/local/scripts/backup/config-backup.sh << 'EOF'
#!/bin/sh
# Configuration files backup
# Configuration
BACKUP_DIR="/backup/config"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/config-$DATE.tar.gz"
# Important configuration directories
CONFIG_DIRS="
/etc
/usr/local/etc
/opt/*/etc
/var/lib/docker/volumes/*/config
"
# Important configuration files
CONFIG_FILES="
/boot/config.txt
/boot/cmdline.txt
"
# Package list backup
mkdir -p "$BACKUP_DIR"
# Save installed packages
apk info -v > "$BACKUP_DIR/packages-$DATE.txt"
# Create configuration backup
tar -czf "$BACKUP_FILE" $CONFIG_DIRS $CONFIG_FILES 2>/dev/null
echo "Configuration backup completed: $BACKUP_FILE"
# Git repository for /etc (etckeeper alternative)
if [ -d /etc/.git ]; then
cd /etc
git add -A
git commit -m "Automated backup: $DATE"
fi
EOF
chmod +x /usr/local/scripts/backup/config-backup.sh
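The git commit step only fires if /etc is already under version control; a one-time setup (requires git) could look like this:
# One-time: put /etc under git so config-backup.sh can commit changes
apk add git
cd /etc
git init
git config user.name "backup"
git config user.email "backup@$(hostname)"
git add -A
git commit -m "Initial /etc baseline"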
Backup Scheduling
Step 1: Cron Configuration
# Create cron jobs for backups
# NOTE: this overwrites root's existing crontab; merge entries if you already have one
cat > /etc/crontabs/root << 'EOF'
# Backup Schedule
# Daily backups at 2 AM
0 2 * * * /usr/local/scripts/backup/rsync-incremental.sh
15 2 * * * /usr/local/scripts/backup/borg-backup.sh
# Database backups every 6 hours
0 */6 * * * /usr/local/scripts/backup/postgres-backup.sh
30 */6 * * * /usr/local/scripts/backup/mysql-backup.sh
# Weekly system backup (Sunday 3 AM)
0 3 * * 0 /usr/local/scripts/backup/system-backup.sh
# Daily configuration backup
0 1 * * * /usr/local/scripts/backup/config-backup.sh
# Monthly backup verification (1st of month)
0 4 1 * * /usr/local/scripts/backup/verify-backups.sh
# Cleanup old logs weekly
0 5 * * 0 find /var/log/backup -name "*.log" -mtime +30 -delete
EOF
# Restart cron
rc-service crond restart
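Make sure crond itself survives a reboot:
# Enable the cron daemon at boot and verify it is running
rc-update add crond default
rc-service crond status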
Step 2: Systemd Timer Alternative
# Alpine uses OpenRC by default; these units only apply if you run the same
# backups on a systemd-based distribution
cat > /etc/systemd/system/backup-daily.service << 'EOF'
[Unit]
Description=Daily Backup Service
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/scripts/backup/borg-backup.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
cat > /etc/systemd/system/backup-daily.timer << 'EOF'
[Unit]
Description=Daily Backup Timer
Requires=backup-daily.service
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
EOF
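On a systemd machine the timer still has to be enabled (standard systemctl invocations, not Alpine/OpenRC commands):
systemctl daemon-reload
systemctl enable --now backup-daily.timer
systemctl list-timers backup-daily.timer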
Remote Backup Setup
Step 1: SSH Key Configuration
# Generate SSH key for backups
ssh-keygen -t ed25519 -f /root/.ssh/backup_key -N ""
# Create remote backup script
cat > /usr/local/scripts/backup/remote-backup.sh << 'EOF'
#!/bin/sh
# Remote backup via SSH
# Configuration
LOCAL_DIRS="/home /etc /var/www"
REMOTE_USER="backup"
REMOTE_HOST="backup.example.com"
REMOTE_DIR="/backups/$(hostname)"
SSH_KEY="/root/.ssh/backup_key"
LOG_FILE="/var/log/backup/remote-$(date +%Y%m%d).log"
# SSH options (accept-new pins the host key on first connect instead of
# disabling host verification entirely)
SSH_OPTS="-i $SSH_KEY -o StrictHostKeyChecking=accept-new"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "Starting remote backup"
# Create remote directory
ssh $SSH_OPTS "$REMOTE_USER@$REMOTE_HOST" "mkdir -p $REMOTE_DIR"
# Perform rsync backup
for dir in $LOCAL_DIRS; do
log_message "Backing up $dir to remote"
rsync -avz --delete \
-e "ssh $SSH_OPTS" \
"$dir" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/" \
>> "$LOG_FILE" 2>&1
done
log_message "Remote backup completed"
EOF
chmod +x /usr/local/scripts/backup/remote-backup.sh
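The public half of the key must be installed on the backup server before the first run (user and host match the placeholders above):
# Install the key for the remote backup user
cat /root/.ssh/backup_key.pub | \
ssh backup@backup.example.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"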
Step 2: Remote Borg Backup
# Create remote Borg backup script
cat > /usr/local/scripts/backup/borg-remote.sh << 'EOF'
#!/bin/sh
# Remote Borg backup
# Configuration
export BORG_REPO="ssh://backup@backup.example.com/~/repos/$(hostname)"
export BORG_PASSPHRASE="your-secure-passphrase"
export BORG_RSH="ssh -i /root/.ssh/backup_key"
# Source directories
BACKUP_DIRS="/home /etc /var/www"
BACKUP_NAME="$(hostname)-$(date +%Y%m%d_%H%M%S)"
# Initialize remote repository if needed
borg init --encryption=repokey "$BORG_REPO" 2>/dev/null
# Create backup
borg create \
--verbose \
--stats \
--compression zstd,5 \
"$BORG_REPO::$BACKUP_NAME" \
$BACKUP_DIRS
# Prune old backups
borg prune \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 12 \
"$BORG_REPO"
unset BORG_PASSPHRASE
EOF
chmod +x /usr/local/scripts/backup/borg-remote.sh
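On the server you can pin the key to Borg only, which limits the damage if the client is ever compromised. The line below goes in the backup user's ~/.ssh/authorized_keys (key material elided; the repo path assumes that user's home is /home/backup):
# Force this key to run borg serve, confined to one repository path
command="borg serve --restrict-to-path /home/backup/repos",restrict ssh-ed25519 AAAA... root@client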
Backup Verification
Step 1: Verification Script
# Create backup verification script
cat > /usr/local/scripts/backup/verify-backups.sh << 'EOF'
#!/bin/sh
# Backup verification script
LOG_FILE="/var/log/backup/verify-$(date +%Y%m%d).log"
ALERT_EMAIL="admin@example.com"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Check function: path may contain a glob, so expand it unquoted
check_backup() {
local name=$1
local path=$2
local max_age=$3
if ! ls $path >/dev/null 2>&1; then
log_message "ERROR: $name backup not found at $path"
return 1
fi
# Warn if no matching file is newer than max_age days
if [ -n "$max_age" ]; then
if [ -z "$(find $path -mtime -"$max_age" 2>/dev/null)" ]; then
log_message "WARNING: no $name backup newer than $max_age days"
return 1
fi
fi
log_message "OK: $name backup verified"
return 0
}
log_message "Starting backup verification"
# Verify backups
ERRORS=0
# Check Borg repository (an encrypted repo needs BORG_PASSPHRASE set)
if [ -d "/backup/borg" ]; then
if ! borg check /backup/borg >> "$LOG_FILE" 2>&1; then
ERRORS=$((ERRORS + 1))
fi
fi
# Check recent backups
check_backup "Rsync" "/backup/rsync/home" 1 || ERRORS=$((ERRORS + 1))
check_backup "PostgreSQL" "/backup/postgresql/*.gz" 1 || ERRORS=$((ERRORS + 1))
check_backup "MySQL" "/backup/mysql/*.gz" 1 || ERRORS=$((ERRORS + 1))
# Send alert if errors
if [ $ERRORS -gt 0 ]; then
log_message "Verification completed with $ERRORS errors"
echo "Backup verification failed. Check $LOG_FILE" | mail -s "Backup Verification Alert" "$ALERT_EMAIL"
else
log_message "All backups verified successfully"
fi
EOF
chmod +x /usr/local/scripts/backup/verify-backups.sh
Step 2: Restore Test Script
# Create restore test script
cat > /usr/local/scripts/backup/test-restore.sh << 'EOF'
#!/bin/sh
# Test backup restoration
TEST_DIR="/tmp/restore-test"
LOG_FILE="/var/log/backup/restore-test-$(date +%Y%m%d).log"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Clean test directory
rm -rf "$TEST_DIR"
mkdir -p "$TEST_DIR"
log_message "Starting restore test"
# Test Borg restore
if [ -d "/backup/borg" ]; then
log_message "Testing Borg restore"
export BORG_PASSPHRASE="your-secure-passphrase"
# Get latest backup
LATEST=$(borg list /backup/borg | tail -1 | cut -d' ' -f1)
# Extract sample file
cd "$TEST_DIR"
borg extract /backup/borg::"$LATEST" etc/passwd
if [ -f "etc/passwd" ]; then
log_message "Borg restore test: SUCCESS"
else
log_message "Borg restore test: FAILED"
fi
unset BORG_PASSPHRASE
fi
# Test database restore
if ls /backup/mysql/test_db_*.sql.gz >/dev/null 2>&1; then
log_message "Testing MySQL restore"
# Get latest backup
LATEST_DB=$(ls -1t /backup/mysql/test_db_*.sql.gz | head -1)
# Test extraction
gunzip -t "$LATEST_DB"
if [ $? -eq 0 ]; then
log_message "MySQL backup test: SUCCESS"
else
log_message "MySQL backup test: FAILED"
fi
fi
# Cleanup
rm -rf "$TEST_DIR"
log_message "Restore test completed"
EOF
chmod +x /usr/local/scripts/backup/test-restore.sh
Retention Policies
Automated Cleanup Script
# Create retention policy script
cat > /usr/local/scripts/backup/apply-retention.sh << 'EOF'
#!/bin/sh
# Apply backup retention policies
# Configuration
# (daily retention is applied below; the weekly/monthly/yearly values
# mirror the 'borg prune' policy in borg-backup.sh)
RETENTION_DAILY=7
RETENTION_WEEKLY=4
RETENTION_MONTHLY=12
RETENTION_YEARLY=2
LOG_FILE="/var/log/backup/retention-$(date +%Y%m%d).log"
# Logging function
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Apply retention to directory
apply_retention() {
local dir=$1
local pattern=$2
local days=$3
log_message "Applying $days day retention to $dir"
find "$dir" -name "$pattern" -mtime +$days -exec rm -f {} \; -print | \
while read file; do
log_message "Deleted: $file"
done
}
log_message "Starting retention policy application"
# Apply retention policies
apply_retention "/backup/rsync" "*.tar.gz" 30
apply_retention "/backup/postgresql" "*.gz" $RETENTION_DAILY
apply_retention "/backup/mysql" "*.gz" $RETENTION_DAILY
apply_retention "/backup/system" "*.tar.gz" 90
apply_retention "/var/log/backup" "*.log" 90
# Disk usage report
log_message "Backup disk usage:"
df -h /backup | tee -a "$LOG_FILE"
du -sh /backup/* | tee -a "$LOG_FILE"
log_message "Retention policy application completed"
EOF
chmod +x /usr/local/scripts/backup/apply-retention.sh
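Before trusting a new retention rule, preview what it would delete by dropping the rm action:
# Dry run: list what a 90-day policy would remove, without deleting anything
find /backup/system -name "*.tar.gz" -mtime +90 -print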
Monitoring and Alerts
Step 1: Backup Monitoring Script
# Create monitoring script
cat > /usr/local/scripts/backup/monitor-backups.sh << 'EOF'
#!/bin/sh
# Backup monitoring and alerting
# Configuration
ALERT_EMAIL="admin@example.com"
WARNING_SIZE_GB=100
CRITICAL_SIZE_GB=200
LOG_FILE="/var/log/backup/monitor-$(date +%Y%m%d).log"
# Check backup status
check_backup_status() {
local name=$1
local log_pattern=$2
local max_age_hours=$3
# Find a log newer than max_age_hours
LATEST_LOG=$(find /var/log/backup -name "$log_pattern" -mmin -$((max_age_hours * 60)) | sort | tail -1)
if [ -z "$LATEST_LOG" ]; then
echo "CRITICAL: No recent $name backup found"
return 2
fi
# Check for errors in log
if grep -q "ERROR\|FAILED" "$LATEST_LOG"; then
echo "WARNING: $name backup completed with errors"
return 1
fi
echo "OK: $name backup successful"
return 0
}
# Check disk space used under /backup (df -Pm is portable; busybox df lacks -BG)
BACKUP_SIZE_GB=$(df -Pm /backup | awk 'NR==2 {print int($3 / 1024)}')
if [ "$BACKUP_SIZE_GB" -gt "$CRITICAL_SIZE_GB" ]; then
echo "CRITICAL: Backup size ${BACKUP_SIZE_GB}GB exceeds critical threshold" | \
mail -s "Backup Space Critical" "$ALERT_EMAIL"
elif [ "$BACKUP_SIZE_GB" -gt "$WARNING_SIZE_GB" ]; then
echo "WARNING: Backup size ${BACKUP_SIZE_GB}GB exceeds warning threshold" | \
mail -s "Backup Space Warning" "$ALERT_EMAIL"
fi
# Check each backup type
check_backup_status "Borg" "borg-*.log" 24
check_backup_status "PostgreSQL" "postgres-*.log" 6
check_backup_status "MySQL" "mysql-*.log" 6
# Create status report (the inner here-doc delimiter must differ from the
# outer 'EOF', or it would terminate the wrapper early)
cat > /var/www/localhost/htdocs/backup-status.txt << EOT
Backup Status Report
Generated: $(date)
Disk Usage: ${BACKUP_SIZE_GB}GB / ${CRITICAL_SIZE_GB}GB
Recent Backups:
$(find /backup -type f -mtime -1 -exec ls -lh {} \; | tail -10)
Log Files:
$(ls -lt /var/log/backup/*.log | head -10)
EOT
EOF
chmod +x /usr/local/scripts/backup/monitor-backups.sh
Step 2: Integration with Monitoring Systems
# Create Nagios plugin
cat > /usr/local/bin/check_backup_age << 'EOF'
#!/bin/sh
# Nagios plugin to check backup age
BACKUP_DIR=$1
WARNING_HOURS=${2:-24}
CRITICAL_HOURS=${3:-48}
if [ -z "$BACKUP_DIR" ]; then
echo "UNKNOWN: No backup directory specified"
exit 3
fi
# Find newest file's mtime (stat is used because busybox find lacks -printf)
NEWEST=$(find "$BACKUP_DIR" -type f -exec stat -c %Y {} \; | sort -n | tail -1)
if [ -z "$NEWEST" ]; then
echo "CRITICAL: No backups found in $BACKUP_DIR"
exit 2
fi
# Calculate age in hours
NOW=$(date +%s)
AGE_HOURS=$(( (NOW - NEWEST) / 3600 ))
if [ "$AGE_HOURS" -gt "$CRITICAL_HOURS" ]; then
echo "CRITICAL: Newest backup is $AGE_HOURS hours old"
exit 2
elif [ "$AGE_HOURS" -gt "$WARNING_HOURS" ]; then
echo "WARNING: Newest backup is $AGE_HOURS hours old"
exit 1
else
echo "OK: Newest backup is $AGE_HOURS hours old"
exit 0
fi
EOF
chmod +x /usr/local/bin/check_backup_age
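Invoked by hand, the plugin follows the usual Nagios exit-code convention (0 OK, 1 warning, 2 critical, 3 unknown):
# Warn past 24 hours, go critical past 48
/usr/local/bin/check_backup_age /backup/borg 24 48
echo "exit code: $?"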
Disaster Recovery Planning
Step 1: Recovery Documentation
# Create recovery documentation
cat > /backup/RECOVERY_PROCEDURE.md << 'EOF'
# Disaster Recovery Procedure
## Prerequisites
- Alpine Linux installation media
- Access to backup storage
- Network connectivity
## System Recovery Steps
### 1. Base System Installation
```bash
# Boot from Alpine Linux media
# Run setup-alpine
# Configure network and disk
```

### 2. Restore System Configuration
```bash
# Mount backup storage
mount /dev/sdb1 /mnt/backup
# Restore /etc
cd /
tar -xzf /mnt/backup/config/config-latest.tar.gz etc/
# Restore package list (strip versions from the apk info -v output)
apk add $(sed 's/-[0-9].*$//' /mnt/backup/config/packages-latest.txt)
```

### 3. Restore Data
```bash
# Using Borg
export BORG_PASSPHRASE="your-passphrase"
borg extract /mnt/backup/borg::latest
# Using rsync
rsync -av /mnt/backup/rsync/ /
```

### 4. Database Recovery
```bash
# PostgreSQL (custom-format dumps)
gunzip -c /mnt/backup/postgresql/dbname_latest.dump.gz | pg_restore -d dbname
# PostgreSQL globals (roles, tablespaces)
gunzip -c /mnt/backup/postgresql/globals_latest.sql.gz | psql postgres
# MySQL
gunzip -c /mnt/backup/mysql/dbname_latest.sql.gz | mysql dbname
```

## Verification
- Check system services: rc-status
- Verify database connections
- Test application functionality
- Review logs for errors
EOF
Step 2: Recovery Test Script
# Create disaster recovery test
cat > /usr/local/scripts/backup/dr-test.sh << 'EOF'
#!/bin/sh
# Disaster recovery test script
LOG_FILE="/var/log/backup/dr-test-$(date +%Y%m%d).log"
TEST_VM="dr-test-vm"
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "Starting disaster recovery test"
# Create test VM (if using virtualization)
# This is a placeholder - adapt to your environment
# Test backup accessibility
log_message "Testing backup accessibility"
for backup_dir in /backup/*; do
if [ -d "$backup_dir" ]; then
log_message "Checking $backup_dir"
ls -la "$backup_dir" | head -5 >> "$LOG_FILE"
fi
done
# Test restore procedures
log_message "Testing restore procedures"
# Document test results
cat >> "$LOG_FILE" << EOL
Disaster Recovery Test Summary
==============================
Date: $(date)
Backup Locations Verified: YES
Restore Procedures Tested: PARTIAL
Issues Found: NONE
Next Test Date: $(date -d "+3 months" +%Y-%m-%d)
EOL
log_message "Disaster recovery test completed"
EOF
chmod +x /usr/local/scripts/backup/dr-test.sh
Troubleshooting
Common Issues
- Backup fails with "No space left on device":
# Check disk space
df -h /backup
# Clean old backups
/usr/local/scripts/backup/apply-retention.sh
# Check for large log files
find /var/log -type f -size +100M -exec ls -lh {} \;
- Borg "Cache is newer than repository":
# Delete the stale cache
rm -rf ~/.cache/borg
# The cache is rebuilt automatically on the next borg command
borg list /backup/borg
- Permission denied errors:
# Check backup script permissions
ls -la /usr/local/scripts/backup/
# Check backup directory permissions
ls -la /backup/
# Fix permissions
chmod 700 /usr/local/scripts/backup/*
chown -R root:root /backup
Debug Mode
# Enable debug mode in scripts
cat > /usr/local/scripts/backup/debug-wrapper.sh << 'EOF'
#!/bin/sh
# Debug wrapper for backup scripts
SCRIPT=$1
shift
# Enable debugging
set -x
# Run script with timing
time $SCRIPT "$@"
# Show exit code
echo "Exit code: $?"
EOF
chmod +x /usr/local/scripts/backup/debug-wrapper.sh
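Wrap any of the scripts above to get a full execution trace plus timing:
# Trace a backup run and report its duration and exit code
/usr/local/scripts/backup/debug-wrapper.sh /usr/local/scripts/backup/rsync-backup.sh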
Best Practices
Backup Security
# Encrypt backup archives
cat > /usr/local/scripts/backup/encrypt-backup.sh << 'EOF'
#!/bin/sh
# Encrypt backup files
BACKUP_FILE=$1
PASSPHRASE_FILE="/etc/backup/.passphrase"
if [ ! -f "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup-file>"
exit 1
fi
# Encrypt with GPG (loopback pinentry lets GnuPG 2.x read the passphrase
# file in batch mode)
gpg --batch --yes \
--pinentry-mode loopback \
--passphrase-file "$PASSPHRASE_FILE" \
--cipher-algo AES256 \
--symmetric \
"$BACKUP_FILE"
# Remove unencrypted file
rm -f "$BACKUP_FILE"
echo "Encrypted: ${BACKUP_FILE}.gpg"
EOF
chmod +x /usr/local/scripts/backup/encrypt-backup.sh
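The script expects the passphrase file to exist; generate one once (readable by root only) and keep the matching decryption command somewhere safe. The backup.tar.gz name below is illustrative:
# One-time: create a random passphrase
umask 077
openssl rand -base64 32 > /etc/backup/.passphrase
# Decrypting later
gpg --batch --pinentry-mode loopback \
--passphrase-file /etc/backup/.passphrase \
--decrypt backup.tar.gz.gpg > backup.tar.gz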
Backup Checklist
# Create backup checklist
cat > /backup/BACKUP_CHECKLIST.md << 'EOF'
# Backup Checklist
## Daily Tasks
- [ ] Verify overnight backups completed
- [ ] Check backup logs for errors
- [ ] Monitor disk space usage
## Weekly Tasks
- [ ] Test restore procedure
- [ ] Review retention policies
- [ ] Update documentation
## Monthly Tasks
- [ ] Full disaster recovery test
- [ ] Backup system audit
- [ ] Review and update backup scripts
## Quarterly Tasks
- [ ] Off-site backup verification
- [ ] Security assessment
- [ ] Performance optimization
EOF
Conclusion
You've successfully implemented a comprehensive backup automation system on Alpine Linux. This setup provides multiple backup methods, automated scheduling, verification procedures, and disaster recovery capabilities. Your data is now protected with industry-standard practices including encryption, retention policies, and monitoring.
Remember to regularly test your restore procedures, monitor backup health, and update your disaster recovery documentation. A backup is only as good as its ability to restore, so frequent testing is crucial for ensuring data protection.