AlmaLinux Backup & Disaster Recovery: Complete Guide to Data Protection
Hey there, data guardian! Ready to build a bulletproof backup and disaster recovery system that can survive anything from accidental deletions to complete server meltdowns? Today we're creating a comprehensive data protection strategy on AlmaLinux that will let you sleep peacefully knowing your data is safe!
Whether you're protecting critical business data, personal files, or entire server infrastructures, this guide will turn your AlmaLinux system into a fortress of data resilience that can recover from almost any disaster!
Why is Backup & Disaster Recovery Important?
Imagine losing years of work in seconds because of a hardware failure, ransomware attack, or simple human error! Without proper backups, you're one disaster away from catastrophe!
Here's why backup & disaster recovery on AlmaLinux is absolutely critical:
- Data Protection - Safeguard against hardware failures and corruption
- Quick Recovery - Restore operations in minutes, not days
- Ransomware Defense - Recover from attacks without paying ransoms
- Business Continuity - Keep operations running during disasters
- Point-in-Time Recovery - Roll back to a known-good moment in time
- Geographic Redundancy - Protect against regional disasters
- Compliance Requirements - Meet regulatory backup requirements
- Cost Savings - Avoid expensive data recovery services
What You Need
Before we start building your data protection fortress, let's make sure you have everything ready:
- ✅ AlmaLinux 9.x system (production server)
- ✅ Backup storage location (external drive, NAS, or cloud)
- ✅ Root or sudo access for configuration
- ✅ Internet connection for cloud backups
- ✅ 50+ GB storage for local backup staging
- ✅ Critical data identified (what needs backing up)
- ✅ Recovery time objectives (how fast you need to recover)
- ✅ A desire for peace of mind
Step 1: Install Backup Tools
Let's start by installing the essential backup tools on AlmaLinux!
# Update system first
sudo dnf update -y
# Install essential backup tools
sudo dnf install -y rsync tar gzip bzip2 xz zip unzip
# Install advanced backup solutions
sudo dnf install -y borgbackup restic duplicity
# Install RAID and filesystem tools
sudo dnf install -y mdadm lvm2 xfsprogs e2fsprogs
# Install monitoring tools
sudo dnf install -y smartmontools iotop htop
# Install cloud backup tools (optional)
sudo dnf install -y rclone s3cmd
# Verify installations
rsync --version
borg --version
restic version
Expected output:
rsync version 3.2.x protocol version 31
borg 1.2.x
restic 0.16.x compiled with go1.20
Great! Your backup tools are installed!
Step 2: Create a Comprehensive Backup Strategy
Now let's implement a multi-layered backup strategy:
# Create backup directory structure
sudo mkdir -p /backup/{daily,weekly,monthly,scripts,configs,logs}
sudo mkdir -p /var/backup/staging
# Create main backup script
sudo tee /backup/scripts/backup-system.sh << 'EOF'
#!/bin/bash
# Comprehensive System Backup Script
# Configuration
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backup"
STAGING_DIR="/var/backup/staging"
LOG_FILE="/backup/logs/backup_${BACKUP_DATE}.log"
RETENTION_DAILY=7
RETENTION_WEEKLY=4
RETENTION_MONTHLY=12
# Email configuration
ADMIN_EMAIL="[email protected]"
HOSTNAME=$(hostname)
# Start logging
exec 1> >(tee -a "$LOG_FILE")
exec 2>&1
echo "========================================="
echo "๐ Starting backup at $(date)"
echo "========================================="
# Function to check backup success
check_status() {
if [ $? -eq 0 ]; then
echo "โ
$1 completed successfully"
else
echo "โ $1 failed!"
send_alert "Backup Failed" "$1 failed on $HOSTNAME"
exit 1
fi
}
# Function to send email alerts
send_alert() {
echo "$2" | mail -s "$1 - $HOSTNAME" "$ADMIN_EMAIL"
}
# Create backup manifest
echo "๐ Creating backup manifest..."
cat > "${STAGING_DIR}/manifest_${BACKUP_DATE}.txt" << MANIFEST
Backup Date: $(date)
Hostname: $HOSTNAME
Kernel: $(uname -r)
OS: $(cat /etc/os-release | grep PRETTY_NAME | cut -d'"' -f2)
Uptime: $(uptime)
Disk Usage:
$(df -h)
MANIFEST
# 1. System Configuration Backup
echo "๐ Backing up system configurations..."
tar czf "${STAGING_DIR}/configs_${BACKUP_DATE}.tar.gz" \
/etc \
/root/.bashrc \
/root/.ssh \
/var/spool/cron \
--exclude=/etc/shadow.lock \
2>/dev/null
check_status "System configuration backup"
# 2. Package List Backup
echo "๐ฆ Backing up package list..."
rpm -qa > "${STAGING_DIR}/packages_${BACKUP_DATE}.txt"
dnf history info > "${STAGING_DIR}/dnf_history_${BACKUP_DATE}.txt"
check_status "Package list backup"
# 3. Database Backups (if applicable)
if command -v mysql &> /dev/null; then
echo "๐๏ธ Backing up MySQL databases..."
mysqldump --all-databases --single-transaction --quick \
> "${STAGING_DIR}/mysql_${BACKUP_DATE}.sql"
gzip "${STAGING_DIR}/mysql_${BACKUP_DATE}.sql"
check_status "MySQL backup"
fi
if command -v psql &> /dev/null; then
echo "Backing up PostgreSQL databases..."
sudo -u postgres pg_dumpall > "${STAGING_DIR}/postgresql_${BACKUP_DATE}.sql"
gzip "${STAGING_DIR}/postgresql_${BACKUP_DATE}.sql"
check_status "PostgreSQL backup"
fi
# 4. Application Data Backup
echo "๐ Backing up application data..."
BACKUP_DIRS="/home /var/www /opt /srv"
for dir in $BACKUP_DIRS; do
if [ -d "$dir" ]; then
dir_name=$(echo $dir | tr '/' '_')
tar czf "${STAGING_DIR}/${dir_name}_${BACKUP_DATE}.tar.gz" "$dir" 2>/dev/null
check_status "Backup of $dir"
fi
done
# 5. Docker Volumes Backup (if Docker is installed)
if command -v docker &> /dev/null; then
echo "๐ณ Backing up Docker volumes..."
docker volume ls -q | while read volume; do
docker run --rm -v "$volume":/data -v "${STAGING_DIR}":/backup \
alpine tar czf "/backup/docker_${volume}_${BACKUP_DATE}.tar.gz" /data
done
check_status "Docker volumes backup"
fi
# 6. Move to permanent storage with rotation
echo "๐ Moving to permanent storage with rotation..."
# Determine backup type (daily, weekly, monthly)
DAY_OF_WEEK=$(date +%u)
DAY_OF_MONTH=$(date +%d)
if [ "$DAY_OF_MONTH" -eq "01" ]; then
BACKUP_TYPE="monthly"
RETENTION=$RETENTION_MONTHLY
elif [ "$DAY_OF_WEEK" -eq "7" ]; then
BACKUP_TYPE="weekly"
RETENTION=$RETENTION_WEEKLY
else
BACKUP_TYPE="daily"
RETENTION=$RETENTION_DAILY
fi
# Move staging to permanent location
DEST_DIR="${BACKUP_DIR}/${BACKUP_TYPE}/${BACKUP_DATE}"
mkdir -p "$DEST_DIR"
mv "${STAGING_DIR}"/* "$DEST_DIR/"
check_status "Moving to permanent storage"
# 7. Cleanup old backups
echo "๐งน Cleaning up old backups..."
find "${BACKUP_DIR}/${BACKUP_TYPE}" -maxdepth 1 -type d -mtime +${RETENTION} -exec rm -rf {} \;
check_status "Cleanup old backups"
# 8. Create backup report
BACKUP_SIZE=$(du -sh "$DEST_DIR" | cut -f1)
DISK_USAGE=$(df -h "$BACKUP_DIR" | tail -1 | awk '{print $5}')
cat > "${DEST_DIR}/backup_report.txt" << REPORT
Backup Report
=============
Date: $(date)
Type: $BACKUP_TYPE
Location: $DEST_DIR
Size: $BACKUP_SIZE
Disk Usage: $DISK_USAGE
Status: SUCCESS
Files Backed Up:
$(ls -lh "$DEST_DIR")
REPORT
echo "========================================="
echo "โ
Backup completed successfully!"
echo "๐ Backup size: $BACKUP_SIZE"
echo "๐พ Backup location: $DEST_DIR"
echo "========================================="
# Send success notification
send_alert "Backup Successful" "Backup completed successfully. Size: $BACKUP_SIZE"
exit 0
EOF
sudo chmod +x /backup/scripts/backup-system.sh
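This script only helps if it actually runs, so schedule it with cron. A minimal example; the 2:00 AM slot is just an assumption, pick your own maintenance window:
# Run the full system backup every night at 2:00 AM (in root's crontab)
(sudo crontab -l 2>/dev/null; echo "0 2 * * * /backup/scripts/backup-system.sh") | sudo crontab -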
Perfect! Your backup system is configured!
Step 3: Implement Advanced Backup with Borg
Let's set up Borg for efficient, deduplicated backups:
# Initialize Borg repository
export BORG_REPO="/backup/borg"
borg init --encryption=repokey "$BORG_REPO"
# Save the key (VERY IMPORTANT!)
borg key export "$BORG_REPO" /backup/borg-key.txt
echo "โ ๏ธ IMPORTANT: Save the key file /backup/borg-key.txt in a secure location!"
# Create Borg backup script
sudo tee /backup/scripts/borg-backup.sh << 'EOF'
#!/bin/bash
# Borg Backup Script with Deduplication
export BORG_REPO="/backup/borg"
export BORG_PASSPHRASE="your-secure-passphrase" # Use key file in production
# Backup name
BACKUP_NAME="${HOSTNAME}-$(date +%Y%m%d-%H%M%S)"
echo "๐ Starting Borg backup: $BACKUP_NAME"
# Create backup with compression and deduplication
borg create \
--verbose \
--filter AME \
--list \
--stats \
--show-rc \
--compression lz4 \
--exclude-caches \
--exclude '/home/*/.cache/*' \
--exclude '/var/cache/*' \
--exclude '/var/tmp/*' \
"${BORG_REPO}::${BACKUP_NAME}" \
/etc \
/home \
/root \
/var \
/opt \
/srv
# Prune old backups
echo "๐งน Pruning old backups..."
borg prune \
--list \
--prefix "${HOSTNAME}-" \
--show-rc \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 12 \
--keep-yearly 2 \
"${BORG_REPO}"
# Verify backup integrity
echo "โ
Verifying backup integrity..."
borg check "${BORG_REPO}"
# Show repository info
echo "๐ Repository statistics:"
borg info "${BORG_REPO}"
unset BORG_PASSPHRASE
echo "โ
Borg backup completed!"
EOF
sudo chmod +x /backup/scripts/borg-backup.sh
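Before trusting the repository with a real disaster, run a quick restore drill. A minimal sketch; the archive name below is a placeholder, so substitute a real one from borg list:
# List available archives, then extract a single file from one of them
borg list "$BORG_REPO"
cd /tmp
# Borg stores paths without the leading slash, so /etc/hosts becomes etc/hosts
borg extract "${BORG_REPO}::myhost-20250101-020000" etc/hosts
ls -l /tmp/etc/hosts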
Excellent! Borg backup is ready for efficient storage!
Step 4: Configure Disaster Recovery
Now let's create a comprehensive disaster recovery system:
# Create disaster recovery script
sudo tee /backup/scripts/disaster-recovery.sh << 'EOF'
#!/bin/bash
# Disaster Recovery Script for AlmaLinux
RECOVERY_MODE="$1"
BACKUP_SOURCE="$2"
LOG_FILE="/var/log/disaster_recovery_$(date +%Y%m%d_%H%M%S).log"
# Logging setup
exec 1> >(tee -a "$LOG_FILE")
exec 2>&1
echo "๐จ DISASTER RECOVERY INITIATED"
echo "================================"
echo "Mode: $RECOVERY_MODE"
echo "Source: $BACKUP_SOURCE"
echo "Time: $(date)"
echo "================================"
# Function to restore system configurations
restore_configs() {
echo "๐ Restoring system configurations..."
if [ -f "$BACKUP_SOURCE/configs_*.tar.gz" ]; then
tar xzf "$BACKUP_SOURCE"/configs_*.tar.gz -C /
echo "โ
Configurations restored"
else
echo "โ Configuration backup not found!"
return 1
fi
}
# Function to restore databases
restore_databases() {
echo "๐๏ธ Restoring databases..."
# MySQL restoration
if [ -f "$BACKUP_SOURCE/mysql_*.sql.gz" ]; then
echo "Restoring MySQL databases..."
gunzip -c "$BACKUP_SOURCE"/mysql_*.sql.gz | mysql
echo "โ
MySQL restored"
fi
# PostgreSQL restoration
if [ -f "$BACKUP_SOURCE/postgresql_*.sql.gz" ]; then
echo "Restoring PostgreSQL databases..."
gunzip -c "$BACKUP_SOURCE"/postgresql_*.sql.gz | sudo -u postgres psql
echo "โ
PostgreSQL restored"
fi
}
# Function to restore application data
restore_applications() {
echo "๐ Restoring application data..."
for archive in "$BACKUP_SOURCE"/*.tar.gz; do
if [[ ! "$archive" =~ (configs|mysql|postgresql) ]]; then
echo "Extracting: $archive"
tar xzf "$archive" -C /
fi
done
echo "โ
Application data restored"
}
# Function to restore from Borg backup
restore_from_borg() {
echo "๐ Restoring from Borg backup..."
export BORG_REPO="/backup/borg"
# List available archives
echo "Available backups:"
borg list "$BORG_REPO"
# Mount the latest backup
MOUNT_POINT="/mnt/borg-restore"
mkdir -p "$MOUNT_POINT"
# Get latest archive name
LATEST_ARCHIVE=$(borg list "$BORG_REPO" --last 1 --format '{archive}')
echo "Mounting archive: $LATEST_ARCHIVE"
borg mount "${BORG_REPO}::${LATEST_ARCHIVE}" "$MOUNT_POINT"
# Restore files
rsync -av "$MOUNT_POINT"/ / --exclude=/proc --exclude=/sys --exclude=/dev
# Unmount
borg umount "$MOUNT_POINT"
echo "โ
Borg restoration completed"
}
# Function for bare metal recovery
bare_metal_recovery() {
echo "๐ BARE METAL RECOVERY MODE"
echo "============================"
# 1. Restore partition table
if [ -f "$BACKUP_SOURCE/partition_table.txt" ]; then
echo "Restoring partition table..."
sfdisk /dev/sda < "$BACKUP_SOURCE/partition_table.txt"
fi
# 2. Restore LVM configuration (vgcfgrestore needs the volume group name)
if [ -f "$BACKUP_SOURCE/lvm_backup.txt" ]; then
echo "Restoring LVM..."
vgcfgrestore -f "$BACKUP_SOURCE/lvm_backup.txt" myvg  # replace myvg with your volume group name (see vgs)
fi
# 3. Restore bootloader
echo "Reinstalling bootloader..."
grub2-install /dev/sda
grub2-mkconfig -o /boot/grub2/grub.cfg
# 4. Restore system files
restore_configs
restore_applications
restore_databases
# 5. Rebuild initramfs
echo "Rebuilding initramfs..."
dracut -f --regenerate-all
echo "โ
Bare metal recovery completed!"
echo "โ ๏ธ System reboot required!"
}
# Main recovery logic
case "$RECOVERY_MODE" in
"config")
restore_configs
;;
"database")
restore_databases
;;
"application")
restore_applications
;;
"full")
restore_configs
restore_databases
restore_applications
;;
"borg")
restore_from_borg
;;
"bare-metal")
bare_metal_recovery
;;
*)
echo "Usage: $0 {config|database|application|full|borg|bare-metal} <backup-source>"
exit 1
;;
esac
echo "================================"
echo "โ
Recovery completed at $(date)"
echo "๐ Log saved to: $LOG_FILE"
echo "================================"
EOF
sudo chmod +x /backup/scripts/disaster-recovery.sh
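One caveat: the bare_metal_recovery function expects partition_table.txt and lvm_backup.txt to exist in the backup source, and nothing above creates them. Here is a hedged sketch for capturing that metadata into the staging area (it assumes your system disk is /dev/sda; myvg is a placeholder for your volume group name, which vgs will show):
# Dump the partition table so sfdisk can replay it during recovery
sudo sfdisk -d /dev/sda > /var/backup/staging/partition_table.txt
# Save LVM metadata for the volume group
sudo vgcfgbackup -f /var/backup/staging/lvm_backup.txt myvg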
Fantastic! Your disaster recovery system is ready!
Quick Examples
Example 1: Automated Cloud Backup with Rclone
# Configure rclone for cloud storage
rclone config
# Create cloud backup script
cat > /backup/scripts/cloud-backup.sh << 'EOF'
#!/bin/bash
# Cloud Backup Script using Rclone
BACKUP_DIR="/backup/daily"
CLOUD_REMOTE="mycloud:backups"
LOG_FILE="/backup/logs/cloud_backup_$(date +%Y%m%d).log"
echo "โ๏ธ Starting cloud backup at $(date)" | tee -a "$LOG_FILE"
# Find latest local backup
LATEST_BACKUP=$(ls -t "$BACKUP_DIR" | head -1)
if [ -z "$LATEST_BACKUP" ]; then
echo "โ No local backup found!" | tee -a "$LOG_FILE"
exit 1
fi
# Sync to cloud (use an rclone crypt remote if you need client-side encryption)
rclone sync \
"$BACKUP_DIR/$LATEST_BACKUP" \
"$CLOUD_REMOTE/$(hostname)/$LATEST_BACKUP" \
--progress \
--transfers 4 \
--checkers 8 \
--log-file="$LOG_FILE" \
--log-level INFO
# Verify upload
rclone check \
"$BACKUP_DIR/$LATEST_BACKUP" \
"$CLOUD_REMOTE/$(hostname)/$LATEST_BACKUP" \
--log-file="$LOG_FILE"
if [ $? -eq 0 ]; then
echo "โ
Cloud backup completed successfully!" | tee -a "$LOG_FILE"
else
echo "โ Cloud backup verification failed!" | tee -a "$LOG_FILE"
exit 1
fi
# Clean old cloud backups (keep 30 days)
rclone delete \
"$CLOUD_REMOTE/$(hostname)" \
--min-age 30d \
--log-file="$LOG_FILE"
echo "โ๏ธ Cloud backup finished at $(date)" | tee -a "$LOG_FILE"
EOF
chmod +x /backup/scripts/cloud-backup.sh
# Schedule cloud backup
(crontab -l 2>/dev/null; echo "0 3 * * * /backup/scripts/cloud-backup.sh") | crontab -
This enables automatic cloud backups! ☁️
Example 2: RAID Configuration for Data Protection
# Create software RAID for backup storage
cat > /backup/scripts/setup-raid.sh << 'EOF'
#!/bin/bash
# Setup RAID for backup redundancy
# Create RAID 1 (mirror) for critical backups
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Save RAID configuration
mdadm --detail --scan >> /etc/mdadm.conf
# Format RAID array
mkfs.xfs /dev/md0
# Mount RAID array
mkdir -p /backup/raid
mount /dev/md0 /backup/raid
# Add to fstab
echo "/dev/md0 /backup/raid xfs defaults 0 0" >> /etc/fstab
# Monitor RAID health
cat > /backup/scripts/monitor-raid.sh << 'MONITOR'
#!/bin/bash
# RAID Health Monitor
RAID_STATUS=$(mdadm --detail /dev/md0 | grep "State :" | awk '{print $3}')
if [ "$RAID_STATUS" != "clean" ] && [ "$RAID_STATUS" != "active" ]; then
echo "โ ๏ธ RAID degraded! Status: $RAID_STATUS"
echo "RAID array degraded on $(hostname)" | mail -s "RAID Alert" [email protected]
# Detailed status
mdadm --detail /dev/md0
else
echo "โ
RAID healthy: $RAID_STATUS"
fi
# Check for failed disks
mdadm --detail /dev/md0 | grep -q "faulty"
if [ $? -eq 0 ]; then
echo "๐จ FAILED DISK DETECTED!"
mdadm --detail /dev/md0 | grep "faulty"
fi
MONITOR
chmod +x /backup/scripts/monitor-raid.sh
# Add to crontab for regular monitoring
(crontab -l 2>/dev/null; echo "*/10 * * * * /backup/scripts/monitor-raid.sh") | crontab -
EOF
chmod +x /backup/scripts/setup-raid.sh
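When the monitor does flag a faulty member, replacement follows the standard mdadm sequence. A sketch assuming the failed member is /dev/sdb1 and its replacement is already partitioned to match:
# Mark the member as failed and remove it from the array
sudo mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# After swapping the physical disk and recreating the partition, add it back; the mirror resyncs automatically
sudo mdadm --manage /dev/md0 --add /dev/sdb1
# Watch the rebuild progress
cat /proc/mdstat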
This provides disk-level redundancy for your backup storage!
Example 3: Incremental Backup with Rsync
# Create incremental backup script
cat > /backup/scripts/incremental-backup.sh << 'EOF'
#!/bin/bash
# Incremental Backup using Rsync
SOURCE_DIRS="/home /etc /var/www /opt"
BACKUP_DIR="/backup/incremental"
CURRENT_DATE=$(date +%Y%m%d_%H%M%S)
CURRENT_BACKUP="${BACKUP_DIR}/backup_${CURRENT_DATE}"
LATEST_LINK="${BACKUP_DIR}/latest"
LOG_FILE="/backup/logs/incremental_${CURRENT_DATE}.log"
echo "๐ Starting incremental backup at $(date)" | tee -a "$LOG_FILE"
# Create backup directory
mkdir -p "$CURRENT_BACKUP"
# Perform incremental backup
for SOURCE in $SOURCE_DIRS; do
if [ -d "$SOURCE" ]; then
echo "Backing up: $SOURCE" | tee -a "$LOG_FILE"
rsync -av --delete \
--link-dest="$LATEST_LINK" \
"$SOURCE" \
"$CURRENT_BACKUP/" \
--exclude='*.tmp' \
--exclude='*.cache' \
--exclude='*.log' \
>> "$LOG_FILE" 2>&1
fi
done
# Update latest symlink
rm -f "$LATEST_LINK"
ln -s "$CURRENT_BACKUP" "$LATEST_LINK"
# Calculate backup size and deduplication savings
BACKUP_SIZE=$(du -sh "$CURRENT_BACKUP" | cut -f1)
TOTAL_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
echo "================================" | tee -a "$LOG_FILE"
echo "โ
Incremental backup completed!" | tee -a "$LOG_FILE"
echo "๐ This backup: $BACKUP_SIZE" | tee -a "$LOG_FILE"
echo "๐พ Total storage: $TOTAL_SIZE" | tee -a "$LOG_FILE"
echo "================================" | tee -a "$LOG_FILE"
# Cleanup old backups (keep 30 days)
find "$BACKUP_DIR" -maxdepth 1 -type d -name "backup_*" -mtime +30 -exec rm -rf {} \;
echo "๐งน Cleanup completed" | tee -a "$LOG_FILE"
EOF
chmod +x /backup/scripts/incremental-backup.sh
# Schedule hourly incremental backups
(crontab -l 2>/dev/null; echo "0 * * * * /backup/scripts/incremental-backup.sh") | crontab -
This creates space-efficient incremental backups!
Fix Common Problems
Problem 1: Backup Failing Due to Disk Space
Symptoms: Backup scripts fail with "No space left on device"
# Check disk usage
df -h
du -sh /backup/*
# Clean up old backups
find /backup -type f -name "*.tar.gz" -mtime +30 -delete
# Compress existing backups
for file in /backup/daily/*/*.tar; do
gzip "$file"
done
# Move old backups to archive
mkdir -p /backup/archive
find /backup/daily -type d -mtime +7 -exec mv {} /backup/archive/ \;
# Enable compression in backup scripts
sed -i 's/tar cf/tar czf/' /backup/scripts/*.sh
# Set up automatic cleanup
cat > /backup/scripts/cleanup.sh << 'EOF'
#!/bin/bash
# Cleanup old backups
find /backup/daily -type d -mtime +7 -exec rm -rf {} \;
find /backup/weekly -type d -mtime +30 -exec rm -rf {} \;
find /backup/monthly -type d -mtime +365 -exec rm -rf {} \;
EOF
chmod +x /backup/scripts/cleanup.sh
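Like the backups themselves, cleanup only helps if it runs on a schedule; one way to automate it:
# Run the cleanup script daily at 4:00 AM
(crontab -l 2>/dev/null; echo "0 4 * * * /backup/scripts/cleanup.sh") | crontab -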
Problem 2: Slow Backup Performance
Symptoms: Backups take hours to complete
# Optimize backup performance
# 1. Use parallel compression with pigz (install via dnf; may require the EPEL repo)
tar cf - /large/directory | pigz > backup.tar.gz
# 2. Exclude unnecessary files
cat > /backup/exclude-list.txt << 'EOF'
*.tmp
*.cache
*.log
/var/cache/*
/tmp/*
/var/tmp/*
*.swp
core.*
EOF
# 3. Use a faster compression algorithm
# LZ4 is much faster than gzip (GNU tar has no --lz4 flag; pipe through the lz4 binary with -I)
tar -I lz4 -cf backup.tar.lz4 /directory
# 4. Optimize rsync for speed
rsync -av --inplace --no-whole-file --compress-level=1 /source/ /destination/
# 5. Monitor I/O performance during backup
iotop -b -n 1
iostat -x 5
Problem 3: Restore Failing
Symptoms: Cannot restore from backup
# Troubleshoot restore issues
# 1. Verify backup integrity
tar tzf backup.tar.gz > /dev/null
echo $? # Should be 0
# 2. Check permissions
ls -la /backup/
sudo chown -R root:root /backup/
# 3. Test restore to alternate location
mkdir /tmp/test-restore
tar xzf backup.tar.gz -C /tmp/test-restore
# 4. Fix corrupted archive
# Try to extract as much as possible
tar xzf backup.tar.gz --ignore-zeros --warning=no-timestamp
# 5. Use Borg to verify and repair
borg check --repair /backup/borg
# 6. Emergency single-file recovery
tar xzf backup.tar.gz path/to/specific/file
Problem 4: Backup Encryption Issues
Symptoms: Cannot decrypt encrypted backups
# Setup proper encryption
# 1. Create encrypted backup with GPG
tar czf - /data | gpg --cipher-algo AES256 --symmetric > backup.tar.gz.gpg
# 2. Decrypt backup
gpg --decrypt backup.tar.gz.gpg | tar xzf -
# 3. Store encryption keys safely
cat > /backup/scripts/key-management.sh << 'EOF'
#!/bin/bash
# Backup encryption keys
# Export GPG keys
gpg --export-secret-keys > /secure/location/private-keys.gpg
gpg --export > /secure/location/public-keys.gpg
# Backup Borg key
borg key export /backup/borg /secure/location/borg-key.txt
# Create recovery info
cat > /secure/location/recovery-info.txt << INFO
Backup Encryption Recovery Information
======================================
Date: $(date)
System: $(hostname)
GPG Key ID: $(gpg --list-secret-keys --keyid-format LONG | grep sec | awk '{print $2}')
Borg Repo: /backup/borg
Recovery Steps:
1. Import GPG keys: gpg --import private-keys.gpg
2. Restore Borg key: borg key import /backup/borg borg-key.txt
INFO
echo "โ
Encryption keys backed up to /secure/location/"
EOF
chmod +x /backup/scripts/key-management.sh
Simple Commands Summary
| Command | Purpose |
|---|---|
| rsync -av /source/ /destination/ | Basic file synchronization |
| tar czf backup.tar.gz /directory | Create compressed archive |
| borg create repo::name /path | Create Borg backup |
| borg list repo | List Borg archives |
| borg mount repo::archive /mnt | Mount Borg archive |
| rclone sync /local remote:path | Sync to cloud storage |
| mdadm --detail /dev/md0 | Check RAID status |
| df -h | Check disk usage |
| du -sh /backup/* | Check backup sizes |
| crontab -e | Edit backup schedule |
Tips for Success
- Test Restores Regularly: A backup is only good if you can restore from it (see the sketch after this list)
- Monitor Backup Status: Set up alerts for failed backups
- Document Everything: Keep detailed records of what's backed up
- 3-2-1 Rule: 3 copies, 2 different media, 1 offsite
- Automate Everything: Manual backups are forgotten backups
- Version Control Configs: Keep configuration files in Git
- Regular Maintenance: Clean old backups and verify integrity
- Optimize for Recovery: Fast recovery is more important than fast backup
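Here is the restore-drill sketch promised in the first tip. It assumes the daily archive layout created in Step 2; adjust the paths if yours differ:
# Extract the newest daily config archive into a scratch directory and spot-check a known file
LATEST=$(ls -t /backup/daily | head -1)
TEST_DIR=$(mktemp -d)
tar xzf /backup/daily/"$LATEST"/configs_*.tar.gz -C "$TEST_DIR"
if [ -f "$TEST_DIR/etc/fstab" ]; then
    echo "✅ Restore drill passed"
else
    echo "❌ Restore drill FAILED - investigate before you need this backup!"
fi
rm -rf "$TEST_DIR"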
What You Learned
Congratulations! You've successfully mastered backup and disaster recovery on AlmaLinux!
- ✅ Implemented comprehensive backup strategies
- ✅ Configured automated backup schedules
- ✅ Set up Borg for efficient deduplication
- ✅ Created disaster recovery procedures
- ✅ Established cloud backup integration
- ✅ Built RAID arrays for redundancy
- ✅ Developed restore and recovery scripts
- ✅ Learned troubleshooting techniques
Why This Matters
Robust backup and disaster recovery isn't optional; it's essential! With your AlmaLinux backup system, you now have:
- Comprehensive data protection against a wide range of failures
- Rapid recovery capabilities to minimize downtime
- Business continuity assurance for critical operations
- Peace of mind knowing your data is safe
- Compliance readiness for regulatory requirements
You're now equipped to protect against data loss from hardware failures, human errors, cyberattacks, and natural disasters. Your backup expertise puts you in the league of professional system administrators who understand that data is the most valuable asset!
Keep backing up, keep testing restores, and remember: it's not a matter of if disaster will strike, but when. You're prepared!