☁️ AlmaLinux Cloud Migration: Complete AWS & Azure Guide for Enterprise Scale
Hey there, cloud pioneer! 🎉 Ready to lift your AlmaLinux infrastructure into the cloud and unlock unlimited scalability, reliability, and cost savings? Today we’re mastering cloud migration to AWS and Azure – the journey that transforms traditional infrastructure into modern, elastic, and globally distributed systems! 🚀
Whether you’re migrating a single server or an entire data center, this guide will turn you into a cloud migration expert who can move mountains (of servers) into the cloud! ⛰️➡️☁️
🤔 Why is Cloud Migration Important?
Imagine scaling from 1 server to 1000 servers in minutes, paying only for what you use, and having global reach with 99.99% uptime! 🌍 Cloud migration isn’t just trendy – it’s the foundation of modern business agility!
Here’s why cloud migration with AlmaLinux is absolutely game-changing:
- 🚀 Infinite Scalability - Scale up or down instantly based on demand
- 💰 Cost Optimization - Pay only for resources you actually use
- 🌍 Global Reach - Deploy worldwide with a few clicks
- 🛡️ Enhanced Security - Enterprise-grade security built-in
- ⚡ High Availability - 99.99% uptime with automatic failover
- 🔄 Disaster Recovery - Automated backups and recovery
- 📈 Performance Boost - Latest hardware without capital investment
- 🎯 Focus on Business - Let cloud providers handle infrastructure
🎯 What You Need
Before we start your cloud journey, let’s prepare for takeoff:
✅ AlmaLinux systems to migrate (on-premise or VMs)
✅ AWS/Azure accounts with appropriate permissions
✅ Cloud CLI tools (AWS CLI, Azure CLI)
✅ Migration assessment of current infrastructure
✅ Network connectivity for data transfer
✅ Backup of critical systems (safety first!)
✅ Migration timeline and rollback plans
✅ Excitement for the cloud revolution! ☁️
📝 Step 1: Cloud Migration Assessment
Let’s start by assessing what we’re migrating! 🎯
# Create migration assessment script
cat > ~/cloud-migration-assessment.sh << 'EOF'
#!/bin/bash
# Cloud Migration Assessment Tool
REPORT_FILE="migration-assessment-$(date +%Y%m%d).txt"
echo "☁️ Cloud Migration Assessment Report" > $REPORT_FILE
echo "====================================" >> $REPORT_FILE
echo "Date: $(date)" >> $REPORT_FILE
echo "Hostname: $(hostname)" >> $REPORT_FILE
echo "" >> $REPORT_FILE
# System specifications
echo "🖥️ System Specifications:" >> $REPORT_FILE
echo "CPU Cores: $(nproc)" >> $REPORT_FILE
echo "Memory: $(free -h | grep Mem | awk '{print $2}')" >> $REPORT_FILE
echo "Architecture: $(uname -m)" >> $REPORT_FILE
echo "OS: $(grep PRETTY_NAME /etc/os-release | cut -d'"' -f2)" >> $REPORT_FILE
echo "Kernel: $(uname -r)" >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Storage assessment
echo "💾 Storage Assessment:" >> $REPORT_FILE
lsblk >> $REPORT_FILE
echo "" >> $REPORT_FILE
df -h >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Network configuration
echo "🌐 Network Configuration:" >> $REPORT_FILE
ip addr show >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Running services
echo "🔧 Running Services:" >> $REPORT_FILE
systemctl list-units --state=running --type=service >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Installed packages (key ones)
echo "📦 Key Installed Packages:" >> $REPORT_FILE
rpm -qa | grep -E "(httpd|nginx|mysql|mariadb|postgresql|php|python|java|docker)" >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Performance baseline
echo "📊 Performance Baseline:" >> $REPORT_FILE
echo "Load Average: $(uptime | awk -F'load average:' '{print $2}')" >> $REPORT_FILE
echo "Memory Usage: $(free | grep Mem | awk '{printf "%.1f%%", $3/$2 * 100.0}')" >> $REPORT_FILE
# Disk utilization snapshot (requires sysstat: sudo dnf install -y sysstat)
echo "Disk I/O (%util): $(iostat -x 1 2 | awk 'NF' | tail -1 | awk '{print $NF}')" >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Network ports
echo "🔌 Open Network Ports:" >> $REPORT_FILE
ss -tuln >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Database assessment
echo "🗄️ Database Assessment:" >> $REPORT_FILE
if systemctl is-active --quiet mysqld || systemctl is-active --quiet mariadb; then
echo "MySQL/MariaDB detected" >> $REPORT_FILE
mysql -e "SELECT @@version;" 2>/dev/null >> $REPORT_FILE
fi
if systemctl is-active --quiet postgresql; then
echo "PostgreSQL detected" >> $REPORT_FILE
su - postgres -c "psql -c 'SELECT version();'" 2>/dev/null >> $REPORT_FILE
fi
echo "" >> $REPORT_FILE
# Configuration files to backup
echo "📋 Important Configuration Files:" >> $REPORT_FILE
find /etc -name "*.conf" -type f | head -20 >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Migration complexity assessment
echo "🎯 Migration Complexity:" >> $REPORT_FILE
COMPLEXITY_SCORE=0
# Check for databases
if systemctl is-active --quiet mysqld || systemctl is-active --quiet mariadb || systemctl is-active --quiet postgresql; then
echo "- Database server detected (+2 complexity)" >> $REPORT_FILE
COMPLEXITY_SCORE=$((COMPLEXITY_SCORE + 2))
fi
# Check for web servers
if systemctl is-active --quiet httpd || systemctl is-active --quiet nginx; then
echo "- Web server detected (+1 complexity)" >> $REPORT_FILE
COMPLEXITY_SCORE=$((COMPLEXITY_SCORE + 1))
fi
# Check for custom applications
if [ $(systemctl list-units --type=service | grep -v "system" | wc -l) -gt 50 ]; then
echo "- Many custom services (+2 complexity)" >> $REPORT_FILE
COMPLEXITY_SCORE=$((COMPLEXITY_SCORE + 2))
fi
echo "Total Complexity Score: $COMPLEXITY_SCORE/10" >> $REPORT_FILE
echo "" >> $REPORT_FILE
# Cloud sizing recommendations
echo "☁️ Cloud Sizing Recommendations:" >> $REPORT_FILE
CPU_CORES=$(nproc)
MEMORY_GB=$(free -g | grep Mem | awk '{print $2}')
if [ $CPU_CORES -le 2 ] && [ $MEMORY_GB -le 4 ]; then
echo "AWS: t3.medium or t3.large" >> $REPORT_FILE
echo "Azure: Standard_B2s or Standard_B2ms" >> $REPORT_FILE
elif [ $CPU_CORES -le 4 ] && [ $MEMORY_GB -le 8 ]; then
echo "AWS: t3.xlarge or m5.xlarge" >> $REPORT_FILE
echo "Azure: Standard_B4ms or Standard_D4s_v3" >> $REPORT_FILE
else
echo "AWS: m5.2xlarge or larger" >> $REPORT_FILE
echo "Azure: Standard_D8s_v3 or larger" >> $REPORT_FILE
fi
echo "" >> $REPORT_FILE
echo "✅ Assessment completed! Report saved to: $REPORT_FILE"
EOF
chmod +x ~/cloud-migration-assessment.sh
~/cloud-migration-assessment.sh
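The sizing thresholds buried in the assessment script are easy to sanity-check in isolation. Here's a small sketch; the function name recommend_aws_size is ours, and it mirrors only the AWS half of the recommendation table:

```bash
#!/bin/bash
# Hypothetical helper mirroring the assessment script's AWS sizing
# thresholds, so the mapping can be checked without running the full report.
recommend_aws_size() {
    cores="$1"; mem_gb="$2"
    if [ "$cores" -le 2 ] && [ "$mem_gb" -le 4 ]; then
        echo "t3.medium"          # small host
    elif [ "$cores" -le 4 ] && [ "$mem_gb" -le 8 ]; then
        echo "t3.xlarge"          # mid-size host
    else
        echo "m5.2xlarge"         # anything larger
    fi
}

recommend_aws_size 2 4
recommend_aws_size 16 64
```

If you tune the thresholds in the assessment script, updating a helper like this first makes the change easy to verify.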
Perfect! We now have a complete assessment! 🎉
🔧 Step 2: AWS Migration Setup
Let’s set up AWS for AlmaLinux migration:
# Install AWS CLI v2 (the awscli package is not in AlmaLinux's base repos;
# AWS ships an official installer)
curl -o /tmp/awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip /tmp/awscliv2.zip -d /tmp
sudo /tmp/aws/install
# Configure AWS credentials (you'll need your Access Key ID and Secret Access Key)
aws configure
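aws configure stores what you enter in two INI-style files under ~/.aws/, which is handy to know for non-interactive setups. A sketch of the layout (the key values are obvious fakes, and a temp directory stands in for ~/.aws):

```bash
#!/bin/bash
# Sketch of the files `aws configure` writes under ~/.aws/.
# The credentials below are fake placeholders for illustration only.
AWS_DIR=$(mktemp -d)   # stands in for ~/.aws

cat > "$AWS_DIR/credentials" << 'INI'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecretkey
INI

cat > "$AWS_DIR/config" << 'INI'
[default]
region = us-east-1
output = json
INI

# Pull the configured region back out, as tooling would
REGION=$(awk -F' *= *' '/^region/ {print $2}' "$AWS_DIR/config")
echo "Configured region: $REGION"
```

Writing these files directly (or exporting AWS_ACCESS_KEY_ID and friends) lets migration scripts run unattended.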
# Create AWS migration script
cat > ~/aws-migration.sh << 'EOF'
#!/bin/bash
# AWS Migration Script for AlmaLinux
# Configuration
AMI_NAME="almalinux-migration-$(date +%Y%m%d)"
INSTANCE_TYPE="${1:-t3.medium}"
KEY_NAME="${2:-my-key-pair}"
SECURITY_GROUP="${3:-sg-12345678}"
SUBNET_ID="${4:-subnet-12345678}"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'
log() { echo -e "${BLUE}[AWS]${NC} $1"; }
success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Create custom AMI from current system
create_ami() {
log "Creating custom AlmaLinux AMI..."
# This would typically be done from an existing EC2 instance
# For on-premise, we'll use VM Export
echo "For on-premise migration, consider these options:"
echo "1. Use AWS VM Import/Export"
echo "2. Create AMI from existing EC2 instance"
echo "3. Use AWS Application Migration Service"
}
# Launch EC2 instance
launch_instance() {
log "Launching EC2 instance..."
# Get latest AlmaLinux AMI
AMI_ID=$(aws ec2 describe-images \
--owners 764336703387 \
--filters "Name=name,Values=AlmaLinux OS 9*" \
--query 'Images[*].[ImageId,CreationDate]' \
--output text | sort -k2 -r | head -1 | awk '{print $1}')
log "Using AMI: $AMI_ID"
# Launch instance
INSTANCE_ID=$(aws ec2 run-instances \
--image-id $AMI_ID \
--count 1 \
--instance-type $INSTANCE_TYPE \
--key-name $KEY_NAME \
--security-group-ids $SECURITY_GROUP \
--subnet-id $SUBNET_ID \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=AlmaLinux-Migration}]" \
--query 'Instances[0].InstanceId' \
--output text)
success "Instance launched: $INSTANCE_ID"
# Wait for instance to be running
log "Waiting for instance to be running..."
aws ec2 wait instance-running --instance-ids $INSTANCE_ID
# Get public IP
PUBLIC_IP=$(aws ec2 describe-instances \
--instance-ids $INSTANCE_ID \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text)
success "Instance ready! Public IP: $PUBLIC_IP"
echo "SSH Command: ssh -i ~/.ssh/$KEY_NAME.pem ec2-user@$PUBLIC_IP"
}
# Set up Auto Scaling
setup_autoscaling() {
log "Setting up Auto Scaling..."
# AMI_ID is only set by launch_instance; look it up again when running standalone
if [ -z "$AMI_ID" ]; then
AMI_ID=$(aws ec2 describe-images \
--owners 764336703387 \
--filters "Name=name,Values=AlmaLinux OS 9*" \
--query 'Images[*].[ImageId,CreationDate]' \
--output text | sort -k2 -r | head -1 | awk '{print $1}')
fi
# Create launch template
aws ec2 create-launch-template \
--launch-template-name almalinux-template \
--launch-template-data '{
"ImageId": "'$AMI_ID'",
"InstanceType": "'$INSTANCE_TYPE'",
"KeyName": "'$KEY_NAME'",
"SecurityGroupIds": ["'$SECURITY_GROUP'"],
"TagSpecifications": [{
"ResourceType": "instance",
"Tags": [{"Key": "Name", "Value": "AlmaLinux-AutoScale"}]
}]
}'
# Create Auto Scaling group
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name almalinux-asg \
--launch-template LaunchTemplateName=almalinux-template,Version='$Latest' \
--min-size 1 \
--max-size 5 \
--desired-capacity 2 \
--vpc-zone-identifier $SUBNET_ID
success "Auto Scaling configured"
}
# Set up CloudWatch monitoring
setup_monitoring() {
log "Setting up CloudWatch monitoring..."
# Create custom metric for application monitoring
aws logs create-log-group --log-group-name /aws/ec2/almalinux
# Install CloudWatch agent (to be done on instance)
cat > /tmp/cloudwatch-config.json << 'CLOUDWATCH'
{
"metrics": {
"namespace": "AlmaLinux/Application",
"metrics_collected": {
"cpu": {
"measurement": ["cpu_usage_idle", "cpu_usage_iowait"],
"metrics_collection_interval": 60
},
"disk": {
"measurement": ["used_percent"],
"metrics_collection_interval": 60,
"resources": ["*"]
},
"mem": {
"measurement": ["mem_used_percent"],
"metrics_collection_interval": 60
}
}
},
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/var/log/messages",
"log_group_name": "/aws/ec2/almalinux",
"log_stream_name": "{instance_id}/messages"
}
]
}
}
}
}
CLOUDWATCH
success "CloudWatch monitoring configured"
}
# Main migration function
main() {
echo "☁️ AWS Migration for AlmaLinux"
echo "=============================="
echo "Instance Type: $INSTANCE_TYPE"
echo "Key Pair: $KEY_NAME"
echo ""
case "${5:-launch}" in
"assess")
create_ami
;;
"launch")
launch_instance
;;
"autoscale")
setup_autoscaling
;;
"monitor")
setup_monitoring
;;
"full")
launch_instance
setup_autoscaling
setup_monitoring
;;
*)
echo "Usage: $0 <instance-type> <key-name> <security-group> <subnet-id> <action>"
echo "Actions: assess, launch, autoscale, monitor, full"
;;
esac
}
main "$@"
EOF
chmod +x ~/aws-migration.sh
Excellent! AWS migration tools are ready! 🌟
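The AMI-lookup pipeline in launch_instance (sort by creation date, keep the newest) can be rehearsed offline against fabricated describe-images output before you point it at a real account:

```bash
#!/bin/bash
# Offline rehearsal of the newest-AMI selection used in launch_instance.
# The AMI IDs and dates below are fabricated sample data.
SAMPLE=$(printf 'ami-aaa111\t2023-05-01T00:00:00.000Z\nami-bbb222\t2024-02-15T00:00:00.000Z\nami-ccc333\t2023-11-30T00:00:00.000Z')

# Same pipeline as the script: reverse-sort on the date column,
# keep the first row, print the AMI ID
NEWEST=$(printf '%s\n' "$SAMPLE" | sort -k2 -r | head -1 | awk '{print $1}')
echo "Newest AMI: $NEWEST"   # picks ami-bbb222
```

The same selection can also be pushed into JMESPath with a query like `sort_by(Images,&CreationDate)[-1].ImageId`, which avoids the external sort entirely.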
🌟 Step 3: Azure Migration Setup
Now let’s set up Azure for AlmaLinux migration:
# Install Azure CLI
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo dnf config-manager --add-repo https://packages.microsoft.com/config/rhel/9/prod.repo
sudo dnf install -y azure-cli
# Login to Azure
az login
# Create Azure migration script
cat > ~/azure-migration.sh << 'EOF'
#!/bin/bash
# Azure Migration Script for AlmaLinux
# Configuration
RESOURCE_GROUP="${1:-AlmaLinux-Migration-RG}"
LOCATION="${2:-East US}"
VM_NAME="${3:-almalinux-vm}"
VM_SIZE="${4:-Standard_B2s}"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'
log() { echo -e "${BLUE}[AZURE]${NC} $1"; }
success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Create resource group
create_resource_group() {
log "Creating resource group: $RESOURCE_GROUP"
az group create \
--name $RESOURCE_GROUP \
--location "$LOCATION"
success "Resource group created"
}
# Create virtual network
create_network() {
log "Creating virtual network..."
# Create VNet
az network vnet create \
--resource-group $RESOURCE_GROUP \
--name AlmaLinux-VNet \
--address-prefix 10.0.0.0/16 \
--subnet-name default \
--subnet-prefix 10.0.1.0/24
# Create Network Security Group
az network nsg create \
--resource-group $RESOURCE_GROUP \
--name AlmaLinux-NSG
# Add SSH rule
az network nsg rule create \
--resource-group $RESOURCE_GROUP \
--nsg-name AlmaLinux-NSG \
--name SSH \
--priority 1001 \
--source-address-prefixes '*' \
--source-port-ranges '*' \
--destination-address-prefixes '*' \
--destination-port-ranges 22 \
--access Allow \
--protocol Tcp
# Add HTTP rule
az network nsg rule create \
--resource-group $RESOURCE_GROUP \
--nsg-name AlmaLinux-NSG \
--name HTTP \
--priority 1002 \
--source-address-prefixes '*' \
--source-port-ranges '*' \
--destination-address-prefixes '*' \
--destination-port-ranges 80 \
--access Allow \
--protocol Tcp
success "Virtual network created"
}
# Create virtual machine
create_vm() {
log "Creating virtual machine: $VM_NAME"
# Generate SSH key if not exists
if [ ! -f ~/.ssh/id_rsa ]; then
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
fi
# Create VM (check the current AlmaLinux image URN first:
#   az vm image list --publisher almalinux --all --output table)
az vm create \
--resource-group $RESOURCE_GROUP \
--name $VM_NAME \
--image AlmaLinux:almalinux:9-gen1:latest \
--size $VM_SIZE \
--admin-username azureuser \
--ssh-key-values ~/.ssh/id_rsa.pub \
--vnet-name AlmaLinux-VNet \
--subnet default \
--nsg AlmaLinux-NSG \
--public-ip-sku Standard \
--storage-sku Premium_LRS
# Get public IP
PUBLIC_IP=$(az vm show \
--resource-group $RESOURCE_GROUP \
--name $VM_NAME \
--show-details \
--query publicIps \
--output tsv)
success "VM created! Public IP: $PUBLIC_IP"
echo "SSH Command: ssh azureuser@$PUBLIC_IP"
}
# Set up Azure Monitor
setup_monitoring() {
log "Setting up Azure Monitor..."
# Enable Azure Monitor for VMs
az vm extension set \
--resource-group $RESOURCE_GROUP \
--vm-name $VM_NAME \
--name AzureMonitorLinuxAgent \
--publisher Microsoft.Azure.Monitor
# Create Log Analytics workspace
WORKSPACE_NAME="AlmaLinux-Workspace"
az monitor log-analytics workspace create \
--resource-group $RESOURCE_GROUP \
--workspace-name $WORKSPACE_NAME \
--location "$LOCATION"
success "Azure Monitor configured"
}
# Set up VM Scale Set
setup_scaleset() {
log "Setting up VM Scale Set..."
az vmss create \
--resource-group $RESOURCE_GROUP \
--name AlmaLinux-VMSS \
--image AlmaLinux:almalinux:9-gen1:latest \
--vm-sku $VM_SIZE \
--admin-username azureuser \
--ssh-key-values ~/.ssh/id_rsa.pub \
--instance-count 2 \
--vnet-name AlmaLinux-VNet \
--subnet default \
--lb-name AlmaLinux-LB \
--backend-pool-name AlmaLinux-Pool
# Set up auto-scaling
az monitor autoscale create \
--resource-group $RESOURCE_GROUP \
--name AlmaLinux-Autoscale \
--resource AlmaLinux-VMSS \
--resource-type Microsoft.Compute/virtualMachineScaleSets \
--min-count 1 \
--max-count 10 \
--count 2
# Add scaling rule based on CPU
az monitor autoscale rule create \
--resource-group $RESOURCE_GROUP \
--autoscale-name AlmaLinux-Autoscale \
--condition "Percentage CPU > 70 avg 5m" \
--scale out 1
success "VM Scale Set configured"
}
# Set up backup
setup_backup() {
log "Setting up Azure Backup..."
# Create Recovery Services vault
az backup vault create \
--resource-group $RESOURCE_GROUP \
--name AlmaLinux-Vault \
--location "$LOCATION"
# Enable backup for VM
az backup protection enable-for-vm \
--resource-group $RESOURCE_GROUP \
--vault-name AlmaLinux-Vault \
--vm $VM_NAME \
--policy-name DefaultPolicy
success "Azure Backup configured"
}
# Main migration function
main() {
echo "☁️ Azure Migration for AlmaLinux"
echo "==============================="
echo "Resource Group: $RESOURCE_GROUP"
echo "Location: $LOCATION"
echo "VM Name: $VM_NAME"
echo "VM Size: $VM_SIZE"
echo ""
case "${5:-full}" in
"rg")
create_resource_group
;;
"network")
create_network
;;
"vm")
create_vm
;;
"monitor")
setup_monitoring
;;
"scaleset")
setup_scaleset
;;
"backup")
setup_backup
;;
"full")
create_resource_group
create_network
create_vm
setup_monitoring
setup_backup
;;
*)
echo "Usage: $0 <resource-group> <location> <vm-name> <vm-size> <action>"
echo "Actions: rg, network, vm, monitor, scaleset, backup, full"
;;
esac
}
main "$@"
EOF
chmod +x ~/azure-migration.sh
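Both migration scripts route their fifth argument through a case statement inside main, which only works because the script's arguments are forwarded with main "$@" (a bare main call would leave $5 empty and always hit the default action). A minimal reproduction of the pattern:

```bash
#!/bin/bash
# Minimal sketch of the argument-forwarding pattern both migration scripts
# use: positional defaults at the top, an action selector in $5, and
# main "$@" to carry the script's arguments into main's scope.
VM_NAME="${3:-almalinux-vm}"

main() {
    case "${5:-full}" in
        vm)   echo "would create VM: $VM_NAME" ;;
        full) echo "would run full migration for $VM_NAME" ;;
        *)    echo "unknown action: $5" ;;
    esac
}

main "$@"   # without "$@", $5 inside main is always empty and the default wins
```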
✅ Step 4: Data Migration and Synchronization
Let’s create tools for migrating data and applications:
# Create data migration script
cat > ~/data-migration.sh << 'EOF'
#!/bin/bash
# Data Migration and Synchronization Tool
SOURCE_HOST="${1:-localhost}"
DEST_HOST="${2:-cloud-server}"
MIGRATION_TYPE="${3:-files}"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() { echo -e "${BLUE}[MIGRATE]${NC} $1"; }
success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
# File system migration
migrate_files() {
log "Starting file system migration..."
# Define directories to migrate
DIRS_TO_MIGRATE=(
"/home"
"/etc"
"/var/www"
"/opt"
"/srv"
)
for dir in "${DIRS_TO_MIGRATE[@]}"; do
if [ -d "$dir" ]; then
log "Migrating $dir..."
# Create compressed archive
tar czf "/tmp/$(basename $dir)-backup.tar.gz" -C "$(dirname $dir)" "$(basename $dir)"
# Transfer to destination
if [ "$DEST_HOST" != "localhost" ]; then
scp "/tmp/$(basename $dir)-backup.tar.gz" "root@$DEST_HOST:/tmp/"
ssh "root@$DEST_HOST" "cd / && tar xzf /tmp/$(basename $dir)-backup.tar.gz"
fi
success "Migrated $dir"
fi
done
}
# Database migration
migrate_databases() {
log "Starting database migration..."
# MySQL/MariaDB migration
if systemctl is-active --quiet mysqld || systemctl is-active --quiet mariadb; then
log "Migrating MySQL/MariaDB databases..."
# Export all databases
mysqldump --all-databases --single-transaction --routines --triggers > /tmp/mysql-backup.sql
# Transfer and import
if [ "$DEST_HOST" != "localhost" ]; then
scp /tmp/mysql-backup.sql "root@$DEST_HOST:/tmp/"
ssh "root@$DEST_HOST" "mysql < /tmp/mysql-backup.sql"
fi
success "MySQL/MariaDB migrated"
fi
# PostgreSQL migration
if systemctl is-active --quiet postgresql; then
log "Migrating PostgreSQL databases..."
# Export all databases
sudo -u postgres pg_dumpall > /tmp/postgresql-backup.sql
# Transfer and import
if [ "$DEST_HOST" != "localhost" ]; then
scp /tmp/postgresql-backup.sql "root@$DEST_HOST:/tmp/"
ssh "root@$DEST_HOST" "sudo -u postgres psql < /tmp/postgresql-backup.sql"
fi
success "PostgreSQL migrated"
fi
}
# Application migration
migrate_applications() {
log "Starting application migration..."
# Create application inventory
cat > /tmp/app-inventory.txt << 'INVENTORY'
# Application Inventory for Migration
# Format: service_name:config_path:data_path:port
# Web servers
httpd:/etc/httpd:/var/www:80
nginx:/etc/nginx:/usr/share/nginx:80
# Databases
mysqld:/etc/my.cnf:/var/lib/mysql:3306
# On RHEL-family systems PostgreSQL keeps its config inside the data directory
postgresql:/var/lib/pgsql/data:/var/lib/pgsql/data:5432
# Application servers
tomcat:/etc/tomcat:/var/lib/tomcat:8080
docker:/etc/docker:/var/lib/docker:2376
INVENTORY
# Process each application
while IFS=':' read -r service config data port; do
if [[ ! "$service" =~ ^# ]] && [ -n "$service" ]; then
if systemctl is-active --quiet "$service"; then
log "Migrating application: $service"
# Stop service
systemctl stop "$service"
# Backup configuration and data
if [ -d "$config" ]; then
tar czf "/tmp/${service}-config.tar.gz" -C "$(dirname $config)" "$(basename $config)"
fi
if [ -d "$data" ]; then
tar czf "/tmp/${service}-data.tar.gz" -C "$(dirname $data)" "$(basename $data)"
fi
# Transfer to destination
if [ "$DEST_HOST" != "localhost" ]; then
if [ -f "/tmp/${service}-config.tar.gz" ]; then
scp "/tmp/${service}-config.tar.gz" "root@$DEST_HOST:/tmp/"
fi
if [ -f "/tmp/${service}-data.tar.gz" ]; then
scp "/tmp/${service}-data.tar.gz" "root@$DEST_HOST:/tmp/"
fi
# Install and configure service on destination
ssh "root@$DEST_HOST" "
dnf install -y $service
systemctl stop $service
if [ -f /tmp/${service}-config.tar.gz ]; then
cd / && tar xzf /tmp/${service}-config.tar.gz
fi
if [ -f /tmp/${service}-data.tar.gz ]; then
cd / && tar xzf /tmp/${service}-data.tar.gz
fi
systemctl start $service
systemctl enable $service
"
fi
success "Migrated $service"
fi
fi
done < /tmp/app-inventory.txt
}
# Continuous synchronization
setup_sync() {
log "Setting up continuous synchronization..."
# Install and configure lsyncd for real-time sync
dnf install -y lsyncd
cat > /etc/lsyncd.conf << 'LSYNCD'
settings {
logfile = "/var/log/lsyncd.log",
statusFile = "/var/log/lsyncd-status.log",
nodaemon = false
}
sync {
default.rsync,
source = "/var/www/",
target = "root@DEST_HOST:/var/www/",
rsync = {
binary = "/usr/bin/rsync",
archive = true,
compress = true,
verbose = true
}
}
LSYNCD
# Replace DEST_HOST with actual destination
sed -i "s/DEST_HOST/$DEST_HOST/g" /etc/lsyncd.conf
systemctl enable lsyncd
systemctl start lsyncd
success "Continuous sync configured"
}
# Migration validation
validate_migration() {
log "Validating migration..."
# Check file checksums
find /var/www -type f -exec md5sum {} \; | sort -k2 > /tmp/source-checksums.txt
if [ "$DEST_HOST" != "localhost" ]; then
ssh "root@$DEST_HOST" "find /var/www -type f -exec md5sum {} \; | sort -k2" > /tmp/dest-checksums.txt
if diff /tmp/source-checksums.txt /tmp/dest-checksums.txt; then
success "File integrity verified"
else
warning "File integrity check failed - some files may need re-sync"
fi
fi
# Check services
log "Verifying services on destination..."
if [ "$DEST_HOST" != "localhost" ]; then
ssh "root@$DEST_HOST" "systemctl list-units --state=running --type=service" > /tmp/dest-services.txt
success "Service verification completed"
fi
}
# Main migration function
main() {
echo "🚚 Data Migration Tool"
echo "====================="
echo "Source: $SOURCE_HOST"
echo "Destination: $DEST_HOST"
echo "Type: $MIGRATION_TYPE"
echo ""
case "$MIGRATION_TYPE" in
"files")
migrate_files
;;
"databases")
migrate_databases
;;
"applications")
migrate_applications
;;
"sync")
setup_sync
;;
"validate")
validate_migration
;;
"full")
migrate_files
migrate_databases
migrate_applications
validate_migration
;;
*)
echo "Usage: $0 <source-host> <dest-host> <type>"
echo "Types: files, databases, applications, sync, validate, full"
;;
esac
}
main
EOF
chmod +x ~/data-migration.sh
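Before running migrate_files against a real host, the whole tar-and-verify round trip can be rehearsed locally. This sketch uses throwaway temp directories in place of source and destination hosts, and the same sorted-checksum comparison as validate_migration:

```bash
#!/bin/bash
# Local rehearsal of migrate_files: archive a directory, unpack it
# elsewhere, and compare sorted checksums (as validate_migration does
# over SSH). Temp dirs stand in for the two hosts.
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/www"
echo "hello" > "$SRC/www/index.html"
echo "data"  > "$SRC/www/app.conf"

# Same tar pattern as the migration script
tar czf /tmp/www-backup.tar.gz -C "$SRC" www
tar xzf /tmp/www-backup.tar.gz -C "$DEST"

# Sort by path so find's traversal order can't cause a false mismatch
(cd "$SRC"  && find www -type f -exec md5sum {} \; | sort -k2) > /tmp/src.sums
(cd "$DEST" && find www -type f -exec md5sum {} \; | sort -k2) > /tmp/dst.sums

diff /tmp/src.sums /tmp/dst.sums && echo "integrity OK"
```

A clean diff here means the archive/extract/verify chain is sound; only then is it worth adding the scp and ssh hops.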
🎮 Quick Examples
Example 1: Hybrid Cloud Setup
# Create hybrid cloud configuration
cat > ~/hybrid-cloud-setup.sh << 'EOF'
#!/bin/bash
# Hybrid Cloud Configuration
# Configure VPN connection to cloud
setup_cloud_vpn() {
# Install VPN client
dnf install -y openvpn
# Configure cloud VPN (AWS/Azure specific)
echo "Setting up VPN connection to cloud..."
# AWS VPN configuration
if [ "$1" = "aws" ]; then
# Download VPN configuration from AWS
# aws ec2 describe-vpn-connections --vpn-connection-ids vpn-12345
echo "AWS VPN configuration needed"
fi
# Azure VPN configuration
if [ "$1" = "azure" ]; then
# Download VPN configuration from Azure
# az network vpn-connection show --name connection-name
echo "Azure VPN configuration needed"
fi
}
# Set up cloud bursting
setup_cloud_bursting() {
echo "Configuring cloud bursting for peak loads..."
# Monitor local resources
cat > /usr/local/bin/cloud-burst-monitor.sh << 'MONITOR'
#!/bin/bash
# Cloud Bursting Monitor
LOAD_THRESHOLD=5.0
MEMORY_THRESHOLD=80
current_load=$(uptime | awk -F'load average:' '{print $2}' | awk -F',' '{print $1}' | tr -d ' ')
memory_usage=$(free | grep Mem | awk '{printf "%.0f", $3/$2 * 100.0}')
if (( $(echo "$current_load > $LOAD_THRESHOLD" | bc -l) )) || [ $memory_usage -gt $MEMORY_THRESHOLD ]; then
echo "High load detected - triggering cloud burst"
# Launch additional cloud instances
~/aws-migration.sh t3.large my-key sg-12345 subnet-12345 launch
fi
MONITOR
chmod +x /usr/local/bin/cloud-burst-monitor.sh
# Schedule monitoring
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/cloud-burst-monitor.sh") | crontab -
}
setup_cloud_vpn "$1"
setup_cloud_bursting
EOF
chmod +x ~/hybrid-cloud-setup.sh
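The burst monitor leans on bc for its floating-point load comparison, but bc isn't part of a minimal AlmaLinux install. awk handles the same comparison with no extra package; load_exceeds below is a hypothetical helper name for the drop-in replacement:

```bash
#!/bin/bash
# bc-free float comparison for the burst monitor. awk ships with the base
# system, so this avoids an extra dependency on bc.
load_exceeds() {
    # returns 0 (true) when $1 > $2; both may be floats like "6.2"
    awk -v cur="$1" -v max="$2" 'BEGIN { exit !(cur > max) }'
}

if load_exceeds 6.2 5.0; then echo "burst"; else echo "normal"; fi
```

In cloud-burst-monitor.sh the bc line would become `if load_exceeds "$current_load" "$LOAD_THRESHOLD" || [ "$memory_usage" -gt "$MEMORY_THRESHOLD" ]; then ...`.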
Example 2: Cost Optimization
# Create cloud cost optimization script
cat > ~/cloud-cost-optimizer.sh << 'EOF'
#!/bin/bash
# Cloud Cost Optimization Tool
# AWS cost optimization
optimize_aws_costs() {
echo "🔍 AWS Cost Optimization Analysis"
# Find unused EBS volumes
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[*].[VolumeId,Size,VolumeType]' --output table
# Find unused Elastic IPs
aws ec2 describe-addresses --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' --output table
# Suggest right-sizing based on CloudWatch metrics
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--start-time $(date -d '7 days ago' --iso-8601) \
--end-time $(date --iso-8601) \
--period 3600 \
--statistics Average
echo "💡 Cost Optimization Recommendations:"
echo "1. Delete unused EBS volumes"
echo "2. Release unused Elastic IPs"
echo "3. Consider Reserved Instances for steady workloads"
echo "4. Use Spot Instances for batch processing"
echo "5. Enable auto-scaling to match demand"
}
# Azure cost optimization
optimize_azure_costs() {
echo "🔍 Azure Cost Optimization Analysis"
# Find unused disks
az disk list --query '[?managedBy==null].[name,resourceGroup,diskSizeGb]' --output table
# Find unused public IPs
az network public-ip list --query '[?ipConfiguration==null].[name,resourceGroup]' --output table
# Analyze VM utilization
az monitor metrics list --resource /subscriptions/{subscription}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{vm} --metric "Percentage CPU"
echo "💡 Cost Optimization Recommendations:"
echo "1. Delete unused managed disks"
echo "2. Release unused public IPs"
echo "3. Consider Azure Reserved VM Instances"
echo "4. Use Azure Spot VMs for development"
echo "5. Implement auto-shutdown for dev/test VMs"
}
# Automated cost monitoring
setup_cost_monitoring() {
cat > /usr/local/bin/cost-monitor.sh << 'COST'
#!/bin/bash
# Daily cost monitoring
# AWS cost monitoring
if command -v aws &> /dev/null; then
COST=$(aws ce get-cost-and-usage \
--time-period Start=$(date -d 'yesterday' +%Y-%m-%d),End=$(date +%Y-%m-%d) \
--granularity DAILY \
--metrics BlendedCost \
--query 'ResultsByTime[0].Total.BlendedCost.Amount' \
--output text)
echo "Yesterday's AWS cost: $COST USD"
fi
# Azure cost monitoring
if command -v az &> /dev/null; then
az consumption usage list --start-date $(date -d 'yesterday' +%Y-%m-%d) --end-date $(date +%Y-%m-%d)
fi
COST
chmod +x /usr/local/bin/cost-monitor.sh
# Schedule daily cost monitoring
(crontab -l 2>/dev/null; echo "0 9 * * * /usr/local/bin/cost-monitor.sh | mail -s 'Daily Cloud Cost Report' [email protected]") | crontab -
}
case "$1" in
"aws") optimize_aws_costs ;;
"azure") optimize_azure_costs ;;
"monitor") setup_cost_monitoring ;;
*) echo "Usage: $0 {aws|azure|monitor}" ;;
esac
EOF
chmod +x ~/cloud-cost-optimizer.sh
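A natural companion to the daily cost monitor is a day-over-day delta. This hypothetical helper computes the percentage change from two cost figures, such as the amounts returned by aws ce get-cost-and-usage on consecutive days (it assumes a nonzero previous-day cost):

```bash
#!/bin/bash
# Hypothetical helper for cost-monitor.sh: percentage change between two
# daily cost figures. Assumes the previous day's cost is nonzero.
cost_change_pct() {
    awk -v prev="$1" -v cur="$2" 'BEGIN { printf "%.1f", (cur - prev) / prev * 100 }'
}

echo "Change: $(cost_change_pct 40.00 46.00)%"   # prints Change: 15.0%
```

Feeding two stored daily totals through this in cost-monitor.sh makes the emailed report actionable at a glance: a sudden jump is the usual first sign of a forgotten test instance.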
🚨 Fix Common Problems
Problem 1: Migration Connectivity Issues
# Test network connectivity
ping -c 4 $CLOUD_HOST
# Test SSH connectivity
ssh -o ConnectTimeout=10 user@$CLOUD_HOST echo "Connection successful"
# Check firewall rules
# AWS
aws ec2 describe-security-groups --group-ids sg-12345
# Azure
az network nsg rule list --resource-group $RG --nsg-name $NSG
# Fix SSH key issues
ssh-copy-id user@$CLOUD_HOST
Problem 2: Performance Issues After Migration
# Check cloud instance performance
# Monitor CPU, memory, and I/O
htop
iostat -x 1 5
# Check network latency
ping -c 10 $CLOUD_HOST
# Optimize cloud instance
# Consider larger instance types or optimized storage
Problem 3: Data Synchronization Problems
# Check rsync/lsyncd status
systemctl status lsyncd
# Manual sync test
rsync -avz --dry-run /local/path/ user@remote:/remote/path/
# Check disk space on destination
ssh user@remote "df -h"
# Verify file permissions
ssh user@remote "ls -la /destination/path"
📋 Simple Commands Summary
| Command | Purpose |
|---|---|
| ~/cloud-migration-assessment.sh | Assess system for migration |
| ~/aws-migration.sh t3.medium key-name | Launch AWS instance |
| ~/azure-migration.sh RG "East US" vm-name | Create Azure VM |
| ~/data-migration.sh local remote full | Migrate all data |
| aws ec2 describe-instances | List AWS instances |
| az vm list --output table | List Azure VMs |
| ~/cloud-cost-optimizer.sh aws | Optimize AWS costs |
| ssh user@cloud-host | Connect to cloud instance |
🏆 What You Learned
Congratulations! You’ve mastered cloud migration for AlmaLinux! 🎉
✅ Conducted migration assessment and planning
✅ Set up AWS migration with EC2 and auto-scaling
✅ Configured Azure migration with VMs and monitoring
✅ Created data migration and synchronization tools
✅ Implemented hybrid cloud strategies
✅ Built cost optimization monitoring
✅ Learned to troubleshoot migration issues
✅ Mastered cloud infrastructure automation
🎯 Why This Matters
Cloud migration is the pathway to infinite scalability and modern infrastructure! 🌟 With your AlmaLinux cloud skills, you now have:
- Unlimited scalability to handle any workload
- Global infrastructure at your fingertips
- Cost optimization through pay-as-you-use models
- Enterprise reliability with 99.99% uptime
- Future-proof architecture ready for growth
You’re now equipped to migrate entire data centers to the cloud and build globally distributed applications! Your cloud expertise puts you at the forefront of modern infrastructure! 🚀
Keep scaling, keep optimizing, and remember – the cloud is limitless! You’ve got this! ⭐🙌