AlmaLinux Backup and Disaster Recovery Complete Setup Guide
Ready to protect your precious data and ensure your systems can survive any disaster? Today we're diving into the essential world of backup and disaster recovery on AlmaLinux! Whether you're running a home server or managing enterprise infrastructure, having rock-solid backup strategies isn't just smart, it's absolutely critical for business survival!
In this comprehensive guide, we'll transform your AlmaLinux system into a fortress of data protection with automated backups, cloud integration, and bulletproof recovery procedures. By the end, you'll have enterprise-grade backup solutions that would make even the biggest corporations jealous!
Why Is Backup and Disaster Recovery So Important?
Think backup and disaster recovery is just for paranoid sysadmins? Think again! Here's why every AlmaLinux user absolutely needs this:
- Data Protection: Safeguard years of work from hardware failures, human errors, and cyberattacks
- Business Continuity: Keep operations running even when disaster strikes your primary systems
- Cost Savings: Prevent massive financial losses from data loss and extended downtime
- Compliance: Meet regulatory requirements for data retention and recovery capabilities
- Peace of Mind: Sleep soundly knowing your data is safe and recoverable
- Quick Recovery: Restore systems in minutes instead of days or weeks
- Version Control: Access historical versions of files and system configurations
- Geographic Protection: Protect against localized disasters with off-site backups
What You Need to Get Started
Before we build your backup fortress, let's make sure you have everything ready!
System Requirements:
- ✅ AlmaLinux 8 or 9 installed and running
- ✅ Root access or sudo privileges
- ✅ At least 50GB free disk space for local backups
- ✅ Basic understanding of Linux file systems
- ✅ Network connectivity for remote/cloud backups
Recommended Tools:
- ✅ External storage device or network storage
- ✅ Cloud storage account (AWS S3, Google Cloud, etc.)
- ✅ Spare system for testing recovery procedures
- ✅ Documentation tools for recovery procedures
Time Investment:
- ✅ Initial setup: 2-3 hours
- ✅ Testing and validation: 1-2 hours
- ✅ Documentation: 30 minutes
Step 1: Install Essential Backup Tools
Let's start by installing the powerful backup tools that will become your data protection arsenal!
# Update your system first (always a good practice!)
sudo dnf update -y
# Install core backup and archiving tools
sudo dnf install -y rsync tar gzip bzip2 xz
# Install advanced backup tools
sudo dnf install -y rsnapshot rdiff-backup duplicity
# Install cloud storage tools
sudo dnf install -y s3cmd rclone
# Install system monitoring tools
sudo dnf install -y htop iotop ncdu
# Verify installations
rsync --version && echo "✅ Rsync installed successfully!"
rsnapshot -V && echo "✅ Rsnapshot installed successfully!"
duplicity --version && echo "✅ Duplicity installed successfully!"
What each tool does:
- rsync: Lightning-fast file synchronization and incremental backups
- rsnapshot: Filesystem snapshot utility that uses hard links for space-efficient snapshots
- duplicity: Encrypted, bandwidth-efficient backups
- s3cmd & rclone: Cloud storage integration tools
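To see how the hard-link technique behind rsnapshot works under the hood, here is a minimal rsync sketch using --link-dest; the paths are illustrative and assume a previous snapshot already exists:
# Minimal hard-link snapshot sketch (paths are illustrative)
PREV=/backup/local/daily/files/previous # an existing snapshot
NEXT=/backup/local/daily/files/$(date +%Y%m%d_%H%M%S)
# Unchanged files become hard links into PREV, so the new snapshot
# costs almost no extra space; only changed files are copied in full.
rsync -a --delete --link-dest="$PREV" /home/ "$NEXT/home/"
Because every snapshot looks like a complete copy, you can restore from any of them with a plain cp or rsync.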
Step 2: Create Backup Directory Structure
Let's organize our backup system with a professional directory structure!
# Create main backup directories
sudo mkdir -p /backup/{local,scripts,logs,configs}
sudo mkdir -p /backup/local/{daily,weekly,monthly,system}
# Create subdirectories for different backup types
sudo mkdir -p /backup/local/daily/{files,databases,configs}
sudo mkdir -p /backup/local/weekly/{full-system,applications}
sudo mkdir -p /backup/local/monthly/{archives,long-term}
# Set proper permissions
sudo chown -R root:root /backup
sudo chmod -R 755 /backup
sudo chmod 700 /backup/scripts # Scripts should be more restrictive
# Create backup log directory
sudo mkdir -p /var/log/backup
sudo chmod 755 /var/log/backup
# Verify structure
tree /backup || ls -la /backup/
echo "✅ Backup directory structure created successfully!"
Step 3: Configure Automated File Backups with Rsync
Now let's create intelligent backup scripts that automatically protect your important files!
# Create the main backup script
sudo tee /backup/scripts/daily-backup.sh << 'EOF'
#!/bin/bash
# Daily Backup Script for AlmaLinux
set -o pipefail # so the rsync | tee pipelines below report rsync's exit status
# Configuration
BACKUP_ROOT="/backup/local/daily"
LOG_FILE="/var/log/backup/daily-backup-$(date +%Y%m%d).log"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
# Source directories to backup
SOURCES=(
"/home"
"/etc"
"/var/www"
"/opt"
"/usr/local"
)
# Destinations
DEST_FILES="$BACKUP_ROOT/files/$DATE"
DEST_CONFIGS="$BACKUP_ROOT/configs/$DATE"
# Create destination directories
mkdir -p "$DEST_FILES" "$DEST_CONFIGS"
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Function to calculate size
calculate_size() {
du -sh "$1" 2>/dev/null | cut -f1
}
log_message "=== Starting Daily Backup Process ==="
log_message "Backup destination: $DEST_FILES"
# Backup user files and data
for source in "${SOURCES[@]}"; do
if [ -d "$source" ]; then
log_message "Backing up $source..."
# Use rsync for efficient incremental backups
if rsync -avz --progress --delete \
--exclude="*.tmp" \
--exclude="*.log" \
--exclude="cache/*" \
--exclude=".cache/*" \
"$source/" "$DEST_FILES/$(basename $source)/" 2>&1 | tee -a "$LOG_FILE"; then
size=$(calculate_size "$DEST_FILES/$(basename $source)")
log_message "✅ Successfully backed up $source ($size)"
else
log_message "❌ Failed to backup $source"
fi
else
log_message "⚠️ Source directory $source does not exist"
fi
done
# Backup system configurations separately
log_message "Backing up system configurations..."
rsync -avz /etc/ "$DEST_CONFIGS/etc/" 2>&1 | tee -a "$LOG_FILE"
# Create backup manifest
cat > "$DEST_FILES/backup-manifest.txt" << EOL
Backup Created: $(date)
Hostname: $(hostname)
AlmaLinux Version: $(cat /etc/almalinux-release)
Kernel Version: $(uname -r)
Backup Size: $(calculate_size "$DEST_FILES")
Sources Backed Up:
$(for source in "${SOURCES[@]}"; do echo " - $source"; done)
EOL
# Cleanup old backups
log_message "Cleaning up backups older than $RETENTION_DAYS days..."
# Only prune top-level snapshot directories; rsync-preserved timestamps
# deeper down would otherwise match and gut recent backups
find "$BACKUP_ROOT/files" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null
find "$BACKUP_ROOT/configs" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null
# Calculate total backup size
total_size=$(calculate_size "$BACKUP_ROOT")
log_message "=== Backup Process Completed ==="
log_message "Total backup storage used: $total_size"
log_message "Backup location: $DEST_FILES"
# Send notification (optional)
if command -v mail >/dev/null 2>&1; then
echo "Daily backup completed successfully on $(hostname)" | \
mail -s "Backup Success - $(date +%Y-%m-%d)" root
fi
EOF
# Make script executable
sudo chmod +x /backup/scripts/daily-backup.sh
# Test the backup script
echo "Testing backup script..."
sudo /backup/scripts/daily-backup.sh
echo "✅ Daily backup script created and tested!"
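Before you ever need a real restore, rehearse one. A minimal sketch, assuming at least one snapshot from the script above exists; the -n (dry-run) flag previews the restore without touching anything:
# Pick the newest /home snapshot (path layout from daily-backup.sh)
SNAP=$(ls -dt /backup/local/daily/files/*/home 2>/dev/null | head -1)
# Dry run: list what a restore would copy back, change nothing
rsync -avn "$SNAP/" /home/
# When the preview looks right, drop -n to actually restore
# rsync -av "$SNAP/" /home/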
Step 4: Set Up Database Backups
Databases need special attention! Let's create automated database backup solutions!
# Create database backup script
sudo tee /backup/scripts/database-backup.sh << 'EOF'
#!/bin/bash
# Database Backup Script for AlmaLinux
# Supports MySQL/MariaDB and PostgreSQL
set -o pipefail # so the mysqldump | gzip pipelines below report dump failures
# Configuration
BACKUP_DIR="/backup/local/daily/databases"
LOG_FILE="/var/log/backup/database-backup-$(date +%Y%m%d).log"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=14
# Create backup directory
mkdir -p "$BACKUP_DIR/$DATE"
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "=== Starting Database Backup Process ==="
# MySQL/MariaDB backup (if installed)
if systemctl is-active --quiet mariadb || systemctl is-active --quiet mysql; then
log_message "Backing up MySQL/MariaDB databases..."
# Get list of databases (excluding system databases)
DATABASES=$(mysql -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
for db in $DATABASES; do
log_message "Backing up database: $db"
if mysqldump --single-transaction --routines --triggers "$db" | \
gzip > "$BACKUP_DIR/$DATE/${db}_${DATE}.sql.gz"; then
size=$(du -sh "$BACKUP_DIR/$DATE/${db}_${DATE}.sql.gz" | cut -f1)
log_message "✅ Successfully backed up $db ($size)"
else
log_message "❌ Failed to backup database $db"
fi
done
# Also backup all databases in one file
if mysqldump --all-databases --single-transaction --routines --triggers | \
gzip > "$BACKUP_DIR/$DATE/all_databases_${DATE}.sql.gz"; then
size=$(du -sh "$BACKUP_DIR/$DATE/all_databases_${DATE}.sql.gz" | cut -f1)
log_message "✅ All databases backup completed ($size)"
fi
fi
# PostgreSQL backup (if installed)
if systemctl is-active --quiet postgresql; then
log_message "Backing up PostgreSQL databases..."
# Switch to postgres user and backup
sudo -u postgres pg_dumpall | gzip > "$BACKUP_DIR/$DATE/postgresql_all_${DATE}.sql.gz"
if [ $? -eq 0 ]; then
size=$(du -sh "$BACKUP_DIR/$DATE/postgresql_all_${DATE}.sql.gz" | cut -f1)
log_message "✅ PostgreSQL backup completed ($size)"
else
log_message "❌ PostgreSQL backup failed"
fi
fi
# Cleanup old database backups
log_message "Cleaning up old database backups..."
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null
total_size=$(du -sh "$BACKUP_DIR" | cut -f1)
log_message "=== Database Backup Process Completed ==="
log_message "Total database backup size: $total_size"
EOF
# Make script executable
sudo chmod +x /backup/scripts/database-backup.sh
# Test if we have any databases to backup
if systemctl is-active --quiet mariadb || systemctl is-active --quiet mysql || systemctl is-active --quiet postgresql; then
echo "Testing database backup script..."
sudo /backup/scripts/database-backup.sh
echo "✅ Database backup script created and tested!"
else
echo "ℹ️ No databases detected. Script ready for when databases are installed."
fi
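A dump you cannot read back is not a backup. Here is a quick integrity-check sketch for the gzipped dumps the script produces (paths follow the layout above):
# Verify each gzip container is intact (gunzip -t reads without extracting)
for dump in /backup/local/daily/databases/*/*.sql.gz; do
gunzip -t "$dump" 2>/dev/null && echo "OK: $dump" || echo "CORRUPT: $dump"
done
# Spot-check that the newest dump actually contains SQL
latest=$(ls -t /backup/local/daily/databases/*/*.sql.gz 2>/dev/null | head -1)
[ -n "$latest" ] && zcat "$latest" | head -5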
Step 5: Configure System State Backups
Let's back up your entire system configuration and package lists for complete recovery!
# Create system state backup script
sudo tee /backup/scripts/system-state-backup.sh << 'EOF'
#!/bin/bash
# System State Backup Script for AlmaLinux
# Captures system configuration, packages, and settings
# Configuration
BACKUP_DIR="/backup/local/daily/system"
LOG_FILE="/var/log/backup/system-state-$(date +%Y%m%d).log"
DATE=$(date +%Y%m%d_%H%M%S)
STATE_DIR="$BACKUP_DIR/system-state-$DATE"
# Create backup directory
mkdir -p "$STATE_DIR"
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "=== Starting System State Backup ==="
# Capture installed packages
log_message "Capturing installed packages..."
dnf list installed > "$STATE_DIR/installed-packages.txt"
rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE} %{ARCH}\n' > "$STATE_DIR/rpm-packages.txt"
# Capture enabled repositories
log_message "Capturing repository configuration..."
dnf repolist all > "$STATE_DIR/repositories.txt"
cp -r /etc/yum.repos.d/ "$STATE_DIR/yum-repos-backup/"
# Capture system services
log_message "Capturing system services state..."
systemctl list-units --all --no-pager > "$STATE_DIR/systemctl-all-units.txt"
systemctl list-unit-files --no-pager > "$STATE_DIR/systemctl-unit-files.txt"
systemctl list-unit-files --state=enabled --no-pager > "$STATE_DIR/enabled-services.txt"
# Capture network configuration
log_message "Capturing network configuration..."
ip addr show > "$STATE_DIR/network-interfaces.txt"
ip route show > "$STATE_DIR/routing-table.txt"
cp -r /etc/NetworkManager/ "$STATE_DIR/networkmanager-backup/" 2>/dev/null || true
cp /etc/hosts "$STATE_DIR/hosts-backup" 2>/dev/null || true
# Capture firewall configuration
log_message "Capturing firewall configuration..."
firewall-cmd --list-all-zones > "$STATE_DIR/firewall-zones.txt" 2>/dev/null || true
iptables-save > "$STATE_DIR/iptables-rules.txt" 2>/dev/null || true
# Capture user accounts
log_message "Capturing user account information..."
cp /etc/passwd "$STATE_DIR/passwd-backup"
cp /etc/group "$STATE_DIR/group-backup"
cp /etc/shadow "$STATE_DIR/shadow-backup"
cp /etc/gshadow "$STATE_DIR/gshadow-backup"
chmod 600 "$STATE_DIR/shadow-backup" "$STATE_DIR/gshadow-backup" # keep password hashes private
# Capture cron jobs
log_message "Capturing scheduled tasks..."
cp -r /etc/cron* "$STATE_DIR/cron-backup/" 2>/dev/null || true
crontab -l > "$STATE_DIR/root-crontab.txt" 2>/dev/null || echo "No root crontab" > "$STATE_DIR/root-crontab.txt"
# Capture system information
log_message "Capturing system information..."
cat > "$STATE_DIR/system-info.txt" << EOL
=== AlmaLinux System Information ===
Date: $(date)
Hostname: $(hostname)
FQDN: $(hostname -f)
AlmaLinux Release: $(cat /etc/almalinux-release)
Kernel Version: $(uname -r)
Architecture: $(uname -m)
Uptime: $(uptime)
Memory: $(free -h)
Disk Usage: $(df -h)
CPU Info: $(lscpu | grep "Model name" | cut -d: -f2 | xargs)
Load Average: $(cat /proc/loadavg)
EOL
# Capture mounted filesystems
mount > "$STATE_DIR/mounted-filesystems.txt"
cat /etc/fstab > "$STATE_DIR/fstab-backup"
# Create restoration script
cat > "$STATE_DIR/restore-packages.sh" << 'EOL'
#!/bin/bash
# Package Restoration Script
echo "This script will restore packages from the backup"
echo "WARNING: This will install packages that may not be needed"
read -p "Continue? (y/N): " confirm
if [[ $confirm == [yY] ]]; then
echo "Restoring packages..."
# Install packages (be careful with this!)
while read -r package version arch; do
if [ "$package" != "gpg-pubkey" ]; then
echo "Installing: $package"
dnf install -y "$package" 2>/dev/null || echo "Failed to install $package"
fi
done < rpm-packages.txt
echo "Package restoration completed!"
else
echo "Package restoration cancelled"
fi
EOL
chmod +x "$STATE_DIR/restore-packages.sh"
# Compress the backup
log_message "Compressing system state backup..."
cd "$BACKUP_DIR" || exit 1
tar -czf "system-state-$DATE.tar.gz" "system-state-$DATE/"
rm -rf "system-state-$DATE/"
backup_size=$(du -sh "system-state-$DATE.tar.gz" | cut -f1)
log_message "✅ System state backup completed: $backup_size"
log_message "Backup file: $BACKUP_DIR/system-state-$DATE.tar.gz"
# Cleanup old system state backups (keep last 7)
ls -t "$BACKUP_DIR"/system-state-*.tar.gz | tail -n +8 | xargs rm -f 2>/dev/null || true
log_message "=== System State Backup Completed ==="
EOF
# Make script executable
sudo chmod +x /backup/scripts/system-state-backup.sh
# Test the system state backup
echo "Testing system state backup..."
sudo /backup/scripts/system-state-backup.sh
echo "✅ System state backup script created and tested!"
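One handy use for the captured package lists is spotting drift between a backup and the live system. A minimal sketch, assuming you have extracted one of the system-state archives (the path is illustrative):
STATE_DIR=/tmp/system-state-20250101_020000 # illustrative extracted archive
# Same format as the rpm-packages.txt saved in the backup
rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE} %{ARCH}\n' | sort > /tmp/packages-now.txt
sort "$STATE_DIR/rpm-packages.txt" > /tmp/packages-then.txt
# Column 1: packages removed since the backup; column 2: installed after it
comm -3 /tmp/packages-then.txt /tmp/packages-now.txt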
Step 6: Set Up Cloud Backup Integration
Let's connect your backups to the cloud for ultimate protection!
# Create cloud backup configuration
sudo tee /backup/scripts/cloud-backup.sh << 'EOF'
#!/bin/bash
# Cloud Backup Script for AlmaLinux
# Supports AWS S3, Google Cloud, and other cloud providers
set -o pipefail # a pipeline fails if any stage fails
# Configuration (modify these for your cloud provider)
CLOUD_PROVIDER="s3" # s3, gcs, azure, etc.
BUCKET_NAME="your-backup-bucket"
LOCAL_BACKUP_DIR="/backup/local"
LOG_FILE="/var/log/backup/cloud-backup-$(date +%Y%m%d).log"
ENCRYPTION_PASSWORD="your-secure-password" # Use a strong password!
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Function to setup AWS S3 (example)
setup_s3_backup() {
log_message "Setting up AWS S3 backup..."
# Check if s3cmd is configured
if [ ! -f ~/.s3cfg ]; then
log_message "⚠️ S3 not configured. Run 's3cmd --configure' first"
return 1
fi
# Sync to S3 with encryption
if s3cmd sync --encrypt --delete-removed "$LOCAL_BACKUP_DIR/" "s3://$BUCKET_NAME/$(hostname)/" 2>&1 | tee -a "$LOG_FILE"; then
log_message "✅ S3 backup completed successfully"
else
log_message "❌ S3 backup failed"
return 1
fi
}
# Function to setup rclone backup (universal)
setup_rclone_backup() {
log_message "Setting up cloud backup with rclone..."
# Check if rclone is configured
if ! rclone listremotes | grep -q .; then
log_message "⚠️ Rclone not configured. Run 'rclone config' first"
return 1
fi
# Get first configured remote
REMOTE=$(rclone listremotes | head -1)
if [ -n "$REMOTE" ]; then
log_message "Using remote: $REMOTE"
# Sync to cloud with progress
if rclone sync --progress --transfers 4 --checkers 8 \
"$LOCAL_BACKUP_DIR/" "$REMOTE$BUCKET_NAME/$(hostname)/" 2>&1 | tee -a "$LOG_FILE"; then
log_message "✅ Cloud backup completed successfully"
else
log_message "❌ Cloud backup failed"
return 1
fi
else
log_message "❌ No cloud remotes configured"
return 1
fi
}
# Function to create encrypted archive for cloud upload
create_encrypted_archive() {
# Log to stderr so the command substitution at the call site
# captures only the archive path on stdout
log_message "Creating encrypted backup archive..." >&2
ARCHIVE_NAME="backup-$(hostname)-$(date +%Y%m%d_%H%M%S).tar.gz.gpg"
ARCHIVE_PATH="/tmp/$ARCHIVE_NAME"
# Create compressed, encrypted archive
if tar -czf - -C "$LOCAL_BACKUP_DIR" . | \
gpg --symmetric --cipher-algo AES256 --compress-algo 1 \
--passphrase "$ENCRYPTION_PASSWORD" --batch --yes \
--output "$ARCHIVE_PATH" 2>>"$LOG_FILE"; then
archive_size=$(du -sh "$ARCHIVE_PATH" | cut -f1)
log_message "✅ Encrypted archive created: $ARCHIVE_NAME ($archive_size)" >&2
echo "$ARCHIVE_PATH"
else
log_message "❌ Failed to create encrypted archive" >&2
return 1
fi
}
log_message "=== Starting Cloud Backup Process ==="
# Check what cloud tools are available and configured
if command -v s3cmd >/dev/null 2>&1 && [ -f ~/.s3cfg ]; then
setup_s3_backup
elif command -v rclone >/dev/null 2>&1 && rclone listremotes | grep -q .; then
setup_rclone_backup
else
log_message "Creating encrypted archive for manual cloud upload..."
archive_path=$(create_encrypted_archive)
if [ $? -eq 0 ]; then
log_message "Upload this file to your cloud storage:"
log_message " $archive_path"
log_message "Remember to delete the local archive after upload!"
log_message "Decryption command: gpg --decrypt $archive_path | tar -xzf -"
fi
fi
log_message "=== Cloud Backup Process Completed ==="
EOF
# Make script executable
sudo chmod +x /backup/scripts/cloud-backup.sh
# Create cloud backup configuration helper
sudo tee /backup/scripts/setup-cloud-backup.sh << 'EOF'
#!/bin/bash
# Cloud Backup Configuration Helper
echo "Cloud Backup Configuration Helper"
echo "======================================"
# Check available tools
echo "Checking available cloud backup tools..."
if command -v s3cmd >/dev/null 2>&1; then
echo "✅ s3cmd available (AWS S3)"
if [ ! -f ~/.s3cfg ]; then
echo "To configure AWS S3, run: s3cmd --configure"
else
echo "✅ s3cmd already configured"
fi
else
echo "❌ s3cmd not available"
echo " Install with: sudo dnf install s3cmd"
fi
if command -v rclone >/dev/null 2>&1; then
echo "✅ rclone available (universal cloud storage)"
if ! rclone listremotes | grep -q .; then
echo "To configure cloud storage, run: rclone config"
else
echo "✅ rclone remotes configured:"
rclone listremotes | sed 's/^/ /'
fi
else
echo "❌ rclone not available"
echo " Install with: sudo dnf install rclone"
fi
echo ""
echo "Recommended setup steps:"
echo "1. Choose your cloud provider (AWS S3, Google Drive, Dropbox, etc.)"
echo "2. Install and configure the appropriate tool (s3cmd or rclone)"
echo "3. Test the connection with a small file"
echo "4. Update the cloud-backup.sh script with your settings"
echo "5. Test the full backup process"
EOF
chmod +x /backup/scripts/setup-cloud-backup.sh
echo "✅ Cloud backup scripts created!"
echo "Run '/backup/scripts/setup-cloud-backup.sh' to configure cloud storage"
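After your first cloud sync, verify that what landed remotely matches what you hold locally. A minimal sketch using rclone's built-in checker; the remote name and bucket are assumptions, substitute your own from rclone config:
# Compare sizes/hashes without transferring data (remote name is illustrative)
REMOTE="gdrive:your-backup-bucket/$(hostname)"
# --one-way only checks that local files exist and match on the remote
rclone check /backup/local "$REMOTE" --one-way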
Step 7: Create Master Backup Orchestration Script
Now let's create a master script that coordinates all your backup operations!
# Create the master backup orchestration script
sudo tee /backup/scripts/master-backup.sh << 'EOF'
#!/bin/bash
# Master Backup Orchestration Script for AlmaLinux
# Coordinates all backup operations with intelligent scheduling
set -o pipefail # so the script | tee pipelines below report the script's exit status
# Configuration
LOG_FILE="/var/log/backup/master-backup-$(date +%Y%m%d).log"
BACKUP_SCRIPTS_DIR="/backup/scripts"
STATUS_FILE="/backup/last-backup-status.txt"
EMAIL_RECIPIENT="root"
# Backup script paths
DAILY_BACKUP="$BACKUP_SCRIPTS_DIR/daily-backup.sh"
DATABASE_BACKUP="$BACKUP_SCRIPTS_DIR/database-backup.sh"
SYSTEM_STATE_BACKUP="$BACKUP_SCRIPTS_DIR/system-state-backup.sh"
CLOUD_BACKUP="$BACKUP_SCRIPTS_DIR/cloud-backup.sh"
# Function to log messages
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Function to send notification
send_notification() {
local subject="$1"
local message="$2"
echo "$message" | tee -a "$LOG_FILE"
# Send email if mail is available
if command -v mail >/dev/null 2>&1; then
echo "$message" | mail -s "$subject" "$EMAIL_RECIPIENT"
fi
# Log to system log
logger "$subject: $message"
}
# Function to check disk space
check_disk_space() {
local backup_dir="/backup"
local threshold=90 # 90% threshold
usage=$(df "$backup_dir" | awk 'NR==2 {print int($5)}')
if [ "$usage" -gt "$threshold" ]; then
log_message "⚠️ WARNING: Backup disk usage is ${usage}%"
send_notification "Backup Disk Space Warning" \
"Backup disk usage is ${usage}%. Consider cleaning old backups."
return 1
else
log_message "✅ Disk space OK: ${usage}% used"
return 0
fi
}
# Function to run backup script with error handling
run_backup_script() {
local script="$1"
local description="$2"
local start_time=$(date +%s)
log_message "Starting $description..."
if [ ! -f "$script" ]; then
log_message "❌ Script not found: $script"
return 1
fi
if [ ! -x "$script" ]; then
log_message "❌ Script not executable: $script"
return 1
fi
# Run the script and capture exit code
if "$script" 2>&1 | tee -a "$LOG_FILE"; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
log_message "✅ $description completed successfully (${duration}s)"
return 0
else
local end_time=$(date +%s)
local duration=$((end_time - start_time))
log_message "❌ $description failed (${duration}s)"
return 1
fi
}
# Function to get backup schedule based on day of week
get_backup_schedule() {
local day=$(date +%u) # 1=Monday, 7=Sunday
case $day in
1) # Monday - Full backup
echo "full"
;;
7) # Sunday - Weekly backup
echo "weekly"
;;
*) # Other days - Daily backup
echo "daily"
;;
esac
}
# Function to cleanup old logs
cleanup_logs() {
log_message "Cleaning up old log files..."
# Keep logs (including already-compressed ones) for 30 days
find /var/log/backup -name "*.log*" -mtime +30 -delete 2>/dev/null || true
# Compress logs older than 7 days
find /var/log/backup -name "*.log" -mtime +7 ! -name "*$(date +%Y%m%d)*" \
-exec gzip {} \; 2>/dev/null || true
}
# Main execution starts here
log_message "========================================"
log_message "=== Master Backup Process Started ==="
log_message "========================================"
log_message "Hostname: $(hostname)"
log_message "Date: $(date)"
log_message "Backup Schedule: $(get_backup_schedule)"
# Initialize status tracking
backup_start_time=$(date +%s)
failed_backups=0
successful_backups=0
# Check prerequisites
log_message "Checking system prerequisites..."
# Check disk space
if ! check_disk_space; then
failed_backups=$((failed_backups + 1))
fi
# Check if backup directory exists
if [ ! -d "/backup" ]; then
log_message "❌ Backup directory not found!"
exit 1
fi
# Run backup operations based on schedule
schedule=$(get_backup_schedule)
case $schedule in
"full"|"weekly")
log_message "Running full/weekly backup schedule..."
# Run all backup scripts
if run_backup_script "$DAILY_BACKUP" "File Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
if run_backup_script "$DATABASE_BACKUP" "Database Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
if run_backup_script "$SYSTEM_STATE_BACKUP" "System State Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
if run_backup_script "$CLOUD_BACKUP" "Cloud Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
;;
"daily")
log_message "Running daily backup schedule..."
# Run essential backups only
if run_backup_script "$DAILY_BACKUP" "File Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
if run_backup_script "$DATABASE_BACKUP" "Database Backup"; then
successful_backups=$((successful_backups + 1))
else
failed_backups=$((failed_backups + 1))
fi
;;
esac
# Calculate total execution time
backup_end_time=$(date +%s)
total_duration=$((backup_end_time - backup_start_time))
minutes=$((total_duration / 60))
seconds=$((total_duration % 60))
# Generate status summary
log_message "========================================"
log_message "=== Backup Process Summary ==="
log_message "========================================"
log_message "Total Duration: ${minutes}m ${seconds}s"
log_message "Successful Backups: $successful_backups"
log_message "Failed Backups: $failed_backups"
# Calculate backup sizes
backup_size=$(du -sh /backup/local 2>/dev/null | cut -f1 || echo "Unknown")
log_message "Total Backup Size: $backup_size"
# Write status file
cat > "$STATUS_FILE" << EOL
Last Backup: $(date)
Schedule: $schedule
Duration: ${minutes}m ${seconds}s
Successful: $successful_backups
Failed: $failed_backups
Total Size: $backup_size
Status: $([ $failed_backups -eq 0 ] && echo "SUCCESS" || echo "PARTIAL_FAILURE")
EOL
# Cleanup old logs
cleanup_logs
# Send final notification
if [ $failed_backups -eq 0 ]; then
log_message "All backup operations completed successfully!"
send_notification "Backup Success" \
"All backup operations completed successfully. Duration: ${minutes}m ${seconds}s, Size: $backup_size"
else
log_message "⚠️ Backup completed with $failed_backups failed operations"
send_notification "Backup Warning" \
"Backup completed with $failed_backups failed operations. Check logs: $LOG_FILE"
fi
log_message "=== Master Backup Process Completed ==="
exit $failed_backups
EOF
# Make script executable
sudo chmod +x /backup/scripts/master-backup.sh
# Test the master backup script
echo "Testing master backup orchestration..."
sudo /backup/scripts/master-backup.sh
echo "✅ Master backup script created and tested!"
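One failure mode worth guarding against is a slow backup still running when the next scheduled one starts. A small sketch using flock from util-linux so only one master backup runs at a time; the lock file path is an arbitrary choice:
# Non-blocking lock: a second invocation exits instead of piling up
flock -n /var/lock/master-backup.lock /backup/scripts/master-backup.sh \
|| echo "Backup already running, skipping this run"
The same wrapper can be used verbatim in the cron entries configured in the next step.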
Step 8: Configure Automated Scheduling with Cron
Let's automate everything with intelligent cron scheduling!
# Create cron configuration for backups
echo "Setting up automated backup scheduling..."
# Create cron job file for root
sudo tee /etc/cron.d/almalinux-backups << 'EOF'
# AlmaLinux Automated Backup Schedule
# Ensures comprehensive data protection with intelligent timing
# Environment
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=root
# Daily backup run (2 AM) - master-backup.sh selects the daily or
# full/weekly schedule itself based on the day of the week, so one
# entry covers all three cases
0 2 * * * root /backup/scripts/master-backup.sh >/dev/null 2>&1
# Monthly system state archive (1st day of month, 4 AM)
0 4 1 * * root /backup/scripts/system-state-backup.sh >/dev/null 2>&1
# Cloud backup sync (daily at 5 AM, after local backups complete)
0 5 * * * root /backup/scripts/cloud-backup.sh >/dev/null 2>&1
# Backup log cleanup (Weekly, Sunday 6 AM)
0 6 * * 0 root find /var/log/backup -name "*.log" -mtime +30 -delete 2>/dev/null
# Disk space monitoring (Every 6 hours)
0 */6 * * * root df /backup | awk 'NR==2 {if(int($5) > 85) print "Warning: Backup disk " $5 " full"}' | mail -s "Backup Disk Space Alert" root 2>/dev/null || true
EOF
# Set proper permissions on cron file
sudo chmod 644 /etc/cron.d/almalinux-backups
# Create manual backup convenience script
sudo tee /usr/local/bin/backup-now << 'EOF'
#!/bin/bash
# Manual Backup Trigger Script
echo "Starting manual backup process..."
echo "This will run all backup operations immediately."
echo ""
read -p "Continue with manual backup? (y/N): " confirm
if [[ $confirm == [yY] ]]; then
echo "Starting backup process..."
/backup/scripts/master-backup.sh
echo ""
echo "Backup Status:"
if [ -f /backup/last-backup-status.txt ]; then
cat /backup/last-backup-status.txt
fi
echo ""
echo "Backup Location: /backup/local"
echo "Logs Location: /var/log/backup"
else
echo "Manual backup cancelled."
fi
EOF
sudo chmod +x /usr/local/bin/backup-now
# Create backup status checking script
sudo tee /usr/local/bin/backup-status << 'EOF'
#!/bin/bash
# Backup Status Checker Script
echo "AlmaLinux Backup Status Report"
echo "=================================="
# Show last backup status
if [ -f /backup/last-backup-status.txt ]; then
echo "Last Backup Status:"
cat /backup/last-backup-status.txt
echo ""
else
echo "❌ No backup status file found"
echo ""
fi
# Show backup sizes
echo "Backup Storage Usage:"
if [ -d /backup/local ]; then
du -sh /backup/local/* 2>/dev/null | sort -hr || echo "No backups found"
echo ""
echo "Total Backup Size: $(du -sh /backup/local 2>/dev/null | cut -f1)"
else
echo "❌ Backup directory not found"
fi
echo ""
# Show recent log files
echo "Recent Backup Logs:"
if [ -d /var/log/backup ]; then
ls -la /var/log/backup/*.log 2>/dev/null | tail -5 || echo "No log files found"
else
echo "❌ Backup log directory not found"
fi
echo ""
# Show scheduled jobs
echo "Scheduled Backup Jobs:"
if [ -f /etc/cron.d/almalinux-backups ]; then
grep -v "^#" /etc/cron.d/almalinux-backups | grep -v "^$"
else
echo "❌ No scheduled backup jobs found"
fi
EOF
sudo chmod +x /usr/local/bin/backup-status
# Restart cron service to load new jobs
sudo systemctl restart crond
sudo systemctl enable crond
echo "✅ Automated backup scheduling configured!"
echo "Available commands:"
echo " backup-now - Run manual backup"
echo " backup-status - Check backup status"
echo "Scheduled times:"
echo " Daily backups: 2:00 AM (weekly schedule on Sundays, full on Mondays)"
echo " Monthly system state: 4:00 AM on the 1st"
echo " Cloud sync: 5:00 AM"
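If you prefer systemd timers to cron (they log to the journal, and Persistent=true catches runs missed while the machine was off), here is a minimal sketch of an equivalent unit pair; the unit names are assumptions:
# Optional: systemd timer equivalent of the 2 AM cron entry (sketch)
sudo tee /etc/systemd/system/master-backup.service << 'EOF'
[Unit]
Description=AlmaLinux master backup

[Service]
Type=oneshot
ExecStart=/backup/scripts/master-backup.sh
EOF
sudo tee /etc/systemd/system/master-backup.timer << 'EOF'
[Unit]
Description=Run master backup daily at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now master-backup.timer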
Quick Examples: Real-World Backup Scenarios
Let's see your backup system in action with practical examples!
Example 1: Testing File Recovery
# Test file recovery process
echo "Testing file recovery capabilities..."
# Create a test file
echo "This is a test file created on $(date)" > /tmp/test-recovery.txt
cp /tmp/test-recovery.txt /home/
# Run a backup to capture the file
sudo /backup/scripts/daily-backup.sh
# "Accidentally" delete the file
rm /home/test-recovery.txt
echo "❌ File deleted! Let's recover it..."
# Find the file in backups
find /backup/local -name "test-recovery.txt" -type f | head -1
# Restore the file
latest_backup=$(find /backup/local/daily/files -type d -name "*home*" | sort | tail -1)
if [ -n "$latest_backup" ]; then
cp "$latest_backup/test-recovery.txt" /home/
echo "✅ File recovered successfully!"
echo "Content: $(cat /home/test-recovery.txt)"
else
echo "❌ Backup not found"
fi
Example 2: Emergency System Recovery Simulation
# Simulate system recovery process
echo "Emergency System Recovery Simulation"
echo "========================================"
# Show current system state
echo "Current system packages: $(rpm -qa | wc -l)"
echo "Current enabled services: $(systemctl list-unit-files --state=enabled --no-pager | wc -l)"
# Create system state backup
sudo /backup/scripts/system-state-backup.sh
# Show what would be restored
latest_state=$(ls -t /backup/local/daily/system/system-state-*.tar.gz 2>/dev/null | head -1)
if [ -n "$latest_state" ]; then
echo "Latest system state backup: $latest_state"
echo "Contents:"
tar -tzf "$latest_state" | head -10
echo " ... and more"
echo ""
echo "To restore system state:"
echo " 1. Extract: tar -xzf $latest_state"
echo " 2. Review: cat system-state-*/system-info.txt"
echo " 3. Restore packages: cd system-state-*/ && ./restore-packages.sh"
echo " 4. Restore configs: copy the saved files (hosts-backup, fstab-backup, yum-repos-backup/, ...)"
else
echo "❌ No system state backup found"
fi
Example 3: Database Recovery Process
# Test database recovery (if databases are installed)
echo "Database Recovery Test"
echo "========================="
# Check for existing databases
if systemctl is-active --quiet mariadb || systemctl is-active --quiet mysql; then
echo "✅ MySQL/MariaDB detected"
# Show current databases
echo "Current databases:"
mysql -e "SHOW DATABASES;" 2>/dev/null || echo "Database connection failed"
# Run database backup
sudo /backup/scripts/database-backup.sh
# Show backup files
latest_db_backup=$(find /backup/local/daily/databases -type d | sort | tail -1)
if [ -n "$latest_db_backup" ] && [ -d "$latest_db_backup" ]; then
echo "Database backup files:"
ls -lh "$latest_db_backup"/*.sql.gz 2>/dev/null || echo "No database backups found"
echo ""
echo "To restore a database:"
echo " gunzip < backup_file.sql.gz | mysql database_name"
echo " Example: gunzip < wordpress_backup.sql.gz | mysql wordpress"
fi
elif systemctl is-active --quiet postgresql; then
echo "✅ PostgreSQL detected"
# Show PostgreSQL backup
latest_pg_backup=$(find /backup/local/daily/databases -name "postgresql_all_*.sql.gz" | sort | tail -1)
if [ -n "$latest_pg_backup" ]; then
echo "PostgreSQL backup: $latest_pg_backup"
echo "To restore: gunzip < $latest_pg_backup | sudo -u postgres psql"
fi
else
echo "ℹ️ No databases detected for recovery testing"
fi
Fix Common Problems: Troubleshooting Guide
Here are solutions to backup issues you might encounter!
Problem 1: Backup Scripts Fail with Permission Errors
# Solution: Fix backup permissions
echo "Fixing backup permission issues..."
# Set correct ownership and permissions
sudo chown -R root:root /backup/
sudo chmod -R 755 /backup/
sudo chmod 700 /backup/scripts/
sudo chmod +x /backup/scripts/*.sh
# Fix log directory permissions
sudo mkdir -p /var/log/backup
sudo chmod 755 /var/log/backup
sudo chown root:root /var/log/backup
# Verify permissions
ls -la /backup/scripts/
echo "✅ Backup permissions fixed!"
Problem 2: Cloud Backup Fails or Not Configured
# Solution: Configure cloud backup step by step
echo "Configuring cloud backup..."
# Check available tools
if ! command -v rclone >/dev/null 2>&1; then
echo "Installing rclone..."
sudo dnf install -y rclone
fi
# Quick rclone setup for Google Drive (example)
echo "Setting up Google Drive backup (example):"
echo "1. Run: rclone config"
echo "2. Choose 'n' for new remote"
echo "3. Name it 'gdrive'"
echo "4. Choose Google Drive from the storage list"
echo "5. Leave client_id and secret blank"
echo "6. Follow the authentication steps"
# Test cloud connection
if rclone listremotes | grep -q .; then
echo "✅ Cloud remotes configured:"
rclone listremotes
# Test with small file
echo "test" > /tmp/backup-test.txt
remote=$(rclone listremotes | head -1)
if rclone copy /tmp/backup-test.txt "${remote}backup-test/"; then
echo "✅ Cloud backup test successful!"
rclone delete "${remote}backup-test/backup-test.txt"
else
echo "❌ Cloud backup test failed"
fi
rm /tmp/backup-test.txt
else
echo "⚠️ No cloud remotes configured. Run 'rclone config' to set up."
fi
Problem 3: Backups Taking Too Much Disk Space
# Solution: Implement intelligent cleanup
echo "Implementing backup cleanup strategies..."
# Create advanced cleanup script
sudo tee /backup/scripts/cleanup-old-backups.sh << 'EOF'
#!/bin/bash
# Advanced Backup Cleanup Script
# Configuration
BACKUP_ROOT="/backup/local"
LOG_FILE="/var/log/backup/cleanup-$(date +%Y%m%d).log"
# Retention policies (days)
DAILY_RETENTION=7
WEEKLY_RETENTION=30
MONTHLY_RETENTION=90
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log_message "=== Starting Backup Cleanup ==="
# Show current disk usage
df -h /backup | tee -a "$LOG_FILE"
# Cleanup daily backups older than retention period
# (timestamped snapshot dirs sit two levels down, e.g. daily/files/<DATE>;
# the depth limits stop find from matching old-timestamped dirs inside fresh snapshots)
log_message "Cleaning daily backups older than $DAILY_RETENTION days..."
find "$BACKUP_ROOT/daily" -mindepth 2 -maxdepth 2 -type d -mtime +$DAILY_RETENTION -exec rm -rf {} + 2>/dev/null
# Keep weekly backups but clean older ones
log_message "Cleaning weekly backups older than $WEEKLY_RETENTION days..."
find "$BACKUP_ROOT/weekly" -mindepth 2 -maxdepth 2 -type d -mtime +$WEEKLY_RETENTION -exec rm -rf {} + 2>/dev/null
# Keep monthly backups but clean older ones
log_message "Cleaning monthly backups older than $MONTHLY_RETENTION days..."
find "$BACKUP_ROOT/monthly" -mindepth 2 -maxdepth 2 -type d -mtime +$MONTHLY_RETENTION -exec rm -rf {} + 2>/dev/null
# Compress old backups instead of deleting (space-saving alternative)
log_message "Compressing backups older than 3 days..."
find "$BACKUP_ROOT" -type d -name "*20*" -mtime +3 ! -name "*.tar.gz" -exec bash -c '
for dir; do
if [ -d "$dir" ] && [[ $(basename "$dir") =~ ^[0-9]{8}_[0-9]{6}$ ]]; then
echo "Compressing $dir..."
tar -czf "${dir}.tar.gz" -C "$(dirname "$dir")" "$(basename "$dir")" && rm -rf "$dir"
fi
done
' bash {} +
# Show final disk usage
log_message "=== Cleanup Completed ==="
df -h /backup | tee -a "$LOG_FILE"
EOF
chmod +x /backup/scripts/cleanup-old-backups.sh
# Run cleanup
sudo /backup/scripts/cleanup-old-backups.sh
echo "✅ Backup cleanup configured and executed!"
Problem 4: Database Backup Fails
# Solution: Fix database backup issues
echo "Troubleshooting database backup issues..."
# Check database service status
if systemctl is-active --quiet mariadb; then
echo "✅ MariaDB is running"
# Test database connection
if mysql -e "SELECT 1;" >/dev/null 2>&1; then
echo "✅ Database connection successful"
else
echo "❌ Database connection failed"
echo "Solution: Check credentials and permissions"
echo " sudo mysql -u root -p"
echo " CREATE USER 'backup'@'localhost' IDENTIFIED BY 'secure_password';"
echo " GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'backup'@'localhost';"
fi
elif systemctl is-active --quiet postgresql; then
echo "✅ PostgreSQL is running"
# Test PostgreSQL connection
if sudo -u postgres psql -c "SELECT 1;" >/dev/null 2>&1; then
echo "✅ PostgreSQL connection successful"
else
echo "❌ PostgreSQL connection failed"
echo "Solution: Check PostgreSQL configuration"
echo " sudo systemctl start postgresql"
echo " sudo -u postgres psql"
fi
else
echo "ℹ️ No database services detected"
echo "To install MariaDB: sudo dnf install mariadb-server"
echo "To install PostgreSQL: sudo dnf install postgresql-server"
fi
Problem 5: Cron Jobs Not Running
# Solution: Fix cron job issues
echo "Troubleshooting cron job issues..."
# Check if cron service is running
if systemctl is-active --quiet crond; then
echo "✅ Cron service is running"
else
echo "❌ Cron service not running"
echo "Starting cron service..."
sudo systemctl start crond
sudo systemctl enable crond
fi
# Check cron job file
if [ -f /etc/cron.d/almalinux-backups ]; then
echo "✅ Backup cron file exists"
# Verify syntax
echo "Cron job contents:"
cat /etc/cron.d/almalinux-backups
# Check permissions
ls -la /etc/cron.d/almalinux-backups
else
echo "❌ Backup cron file missing"
echo "Recreating cron file..."
# Recreate the cron file (repeat the cron creation from Step 8)
fi
# Check cron logs
echo "Recent cron activity:"
grep -i backup /var/log/cron 2>/dev/null | tail -5 || echo "No cron backup activity found"
# Test manual execution
echo "Testing manual backup execution..."
if sudo /backup/scripts/master-backup.sh >/dev/null 2>&1; then
echo "✅ Manual backup execution successful"
else
echo "❌ Manual backup execution failed"
echo "Check logs in /var/log/backup/"
fi
Simple Commands Summary
Here's your quick reference for all backup operations!
| Operation | Command | Description |
|---|---|---|
| Manual Backup | backup-now | Run complete backup immediately |
| Check Status | backup-status | View backup status and sizes |
| View Logs | tail -f /var/log/backup/master-backup-*.log | Monitor backup progress |
| List Backups | ls -la /backup/local/daily/files/ | See available file backups |
| Disk Usage | du -sh /backup/local/* | Check backup storage usage |
| Test Recovery | find /backup -name "filename" -type f | Locate a file in backups |
| Database Backup | sudo /backup/scripts/database-backup.sh | Manual database backup |
| System State | sudo /backup/scripts/system-state-backup.sh | Backup system configuration |
| Cloud Sync | sudo /backup/scripts/cloud-backup.sh | Upload to cloud storage |
| Cleanup Old | sudo /backup/scripts/cleanup-old-backups.sh | Remove old backup files |
| Setup Cloud | /backup/scripts/setup-cloud-backup.sh | Configure cloud storage |
| Restore File | cp /backup/local/daily/files/*/path/file /original/path/ | Restore specific file |
Tips for Success: Backup Best Practices
Follow these pro tips to ensure your backup system remains bulletproof!
Security Best Practices
- Encrypt sensitive backups - Always encrypt backups containing personal or business data
- Use strong passwords - Cloud backup encryption should use complex, unique passwords
- Test restore procedures - Regularly verify you can actually restore from your backups
- Secure backup storage - Protect backup locations with appropriate permissions
- Monitor access logs - Keep track of who accesses backup systems
Performance Optimization
- Schedule during off-hours - Run backups when system load is low
- Use incremental backups - Only backup changed files to save time and space
- Compress older backups - Archive old backups to save storage space
- Monitor network usage - Cloud backups can consume significant bandwidth (see the throttling sketch after this list)
- Use local staging - Create local backups first, then sync to cloud
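If cloud syncs are saturating your link, rclone can throttle itself on a timetable. A minimal sketch; the limits and remote name are purely illustrative:
# Limit cloud sync to 512 KiB/s during working hours, unthrottled overnight
# (timetable format: "HH:MM,BANDWIDTH" pairs; remote name is illustrative)
rclone sync /backup/local "gdrive:your-backup-bucket/$(hostname)" \
--bwlimit "08:00,512k 19:00,off"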
Reliability Guidelines
- Follow 3-2-1 rule - 3 copies, 2 different media types, 1 offsite
- Test monthly - Perform monthly restore tests to verify backup integrity (a checksum spot-check sketch follows this list)
- Document procedures - Keep recovery procedures updated and accessible
- Monitor disk space - Ensure backup storage doesnโt fill up
- Verify backup completion - Always check that backups completed successfully
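A restore test is only meaningful if you compare content, not just file names. A minimal checksum spot-check sketch, with paths assumed from the snapshot layout used in this guide:
# Restore one file to a scratch location and compare hashes
SNAP=$(ls -dt /backup/local/daily/files/*/home 2>/dev/null | head -1)
SAMPLE="someuser/.bashrc" # illustrative file to verify
mkdir -p /tmp/restore-test
rsync -a "$SNAP/$SAMPLE" /tmp/restore-test/
# Matching checksums mean the backup copy is byte-identical to the live file
sha256sum "/home/$SAMPLE" "/tmp/restore-test/$(basename "$SAMPLE")"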
Monitoring and Maintenance
- Set up alerts - Get notified immediately when backups fail (see the heartbeat sketch after this list)
- Review logs regularly - Check backup logs for warnings and errors
- Update retention policies - Adjust how long to keep different types of backups
- Clean up regularly - Remove old backups to manage storage costs
- Test different scenarios - Practice various types of disaster recovery
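Local mail is easy to miss, and it cannot tell you when cron itself stops running. A dead-man's-switch heartbeat covers both; a minimal sketch, where the URL is a placeholder you would get from your own monitoring service:
# Ping a heartbeat URL only when the backup exits cleanly; the monitoring
# service alerts you if the pings stop arriving (URL is a placeholder)
if /backup/scripts/master-backup.sh; then
curl -fsS --max-time 10 "https://example.com/ping/your-check-id" >/dev/null
fi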
What You've Accomplished - Congratulations!
Look at everything you've mastered! You should be incredibly proud!
Backup Infrastructure Mastery
- ✅ Automated File Backups - Your important files are protected with intelligent incremental backups
- ✅ Database Protection - MySQL/MariaDB and PostgreSQL databases are automatically backed up
- ✅ System State Capture - Complete system configuration and package lists are preserved
- ✅ Cloud Integration - Your backups are safely stored in cloud storage for off-site protection
- ✅ Intelligent Scheduling - Everything runs automatically with smart cron job scheduling
Enterprise-Grade Features
- ✅ Master Orchestration - Coordinated backup operations with error handling and notifications
- ✅ Comprehensive Logging - Detailed logs track every backup operation for troubleshooting
- ✅ Disk Space Management - Automatic cleanup prevents storage from filling up
- ✅ Recovery Procedures - Documented and tested recovery processes for various scenarios
- ✅ Security Implementation - Encrypted backups and secure storage practices
Professional Tools
- ✅ Command-line Utilities - backup-now and backup-status for easy management
- ✅ Monitoring Scripts - Automated alerts for backup failures and disk space issues
- ✅ Recovery Testing - Scripts to validate backup integrity and test restore procedures
- ✅ Documentation - Complete procedures for disaster recovery scenarios
Why This Matters in the Real World
Your backup and disaster recovery system isn't just about protecting files; you've built something that provides real business value!
Business Continuity Impact
Data Protection: Your comprehensive backup strategy protects against hardware failures, human errors, cyberattacks, and natural disasters. No more sleepless nights worrying about data loss!
Compliance Readiness: Many industries require specific data retention and recovery capabilities. Your system meets these requirements with automated, documented procedures.
Cost Savings: Data loss can cost businesses thousands or millions of dollars. Your backup system provides insurance against these catastrophic losses at a fraction of the cost.
Technical Excellence
Scalability: Your backup system can grow with your infrastructure, handling everything from single servers to complex multi-system environments.
Automation: Once configured, your system runs without manual intervention, freeing up time for other important tasks.
Professional Standards: You've implemented enterprise-grade backup practices that rival those used by major corporations and data centers.
Congratulations - You're Now a Backup Expert!
You've successfully transformed your AlmaLinux system into a fortress of data protection! Your automated backup and disaster recovery system provides enterprise-grade protection for your valuable data and systems.
Whether you're managing a home lab, running a small business, or working in enterprise IT, you now have the skills and tools to protect critical data and ensure business continuity. Your backup system will serve as a safety net that lets you innovate and experiment with confidence, knowing that recovery is always possible.
Keep exploring, keep backing up, and remember: the best backup is the one you never have to use, but you'll be incredibly grateful it exists when you need it!
Happy backing up, and may your data always be safe and recoverable!