☁️ AlmaLinux Cloud Integration: AWS & Azure Complete Guide

Published Sep 18, 2025

Master cloud integration on AlmaLinux with comprehensive AWS and Azure setup, cloud services configuration, hybrid cloud architecture, automation tools, monitoring, and enterprise cloud deployment strategies.

Ready to soar into the clouds and master enterprise-grade cloud integration? 🚀 This comprehensive guide will transform you into a cloud integration expert, covering everything from basic AWS and Azure setup to advanced hybrid cloud architectures that power the world’s largest enterprises!

Cloud integration isn’t just about moving to the cloud – it’s about creating intelligent, scalable, and resilient infrastructure that seamlessly bridges on-premises and cloud environments while maximizing performance, security, and cost efficiency. Let’s build your cloud empire! ⛅

🤔 Why is Cloud Integration Important?

Imagine having unlimited computing power at your fingertips, scaling instantly based on demand! ⚡ Here’s why cloud integration is absolutely revolutionary:

  • 🌍 Global Reach: Deploy applications worldwide with millisecond latency
  • 📈 Infinite Scalability: Scale from zero to millions of users automatically
  • 💰 Cost Optimization: Pay only for resources you actually use
  • 🛡️ Enterprise Security: Bank-level security with compliance certifications
  • 🔄 High Availability: 99.99%+ uptime with automatic failover and disaster recovery
  • ⚡ Performance Boost: SSDs, CDNs, and optimized networks deliver blazing speed
  • 🤖 AI/ML Integration: Access cutting-edge AI services without infrastructure complexity
  • 🔧 DevOps Acceleration: Automated deployment pipelines and infrastructure as code

🎯 What You Need

Before we embark on this cloud mastery journey, let’s make sure you have everything ready:

✅ AlmaLinux server(s) (your cloud integration command center!)
✅ AWS account (free tier available for learning and testing)
✅ Azure account (free tier with $200 credit for new users)
✅ Basic networking knowledge (VPCs, subnets, security groups)
✅ SSH key pair (for secure server access)
✅ Credit card (for cloud account verification - free tiers available)
✅ Text editor (nano, vim, or VS Code for configuration files)
✅ Curiosity and ambition (we’re building enterprise-scale infrastructure!)
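If you don’t have an SSH key pair yet, you can generate one locally before touching either cloud. A minimal sketch (the file name `almalinux-cloud` is just an example, not a requirement of either provider):

```shell
# Generate an ed25519 key pair for cloud access (no passphrase here for
# brevity - use one in production). "almalinux-cloud" is an example name.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/almalinux-cloud -N "" -C "almalinux-cloud-access"

# The public half is what you upload to AWS/Azure; the private half stays local
chmod 600 ~/.ssh/almalinux-cloud
cat ~/.ssh/almalinux-cloud.pub
```

You’ll reference this key pair later when launching EC2 instances and Azure VMs.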

📝 Step 1: AWS CLI and SDK Setup

Let’s start by setting up the AWS command line interface and tools! Think of this as installing your cloud control panel. 🎛️

# Update system packages
sudo dnf update -y
# Ensures you have the latest security patches

# Install Python pip and development tools
sudo dnf install -y python3-pip python3-devel gcc
# pip: for installing AWS CLI and tools
# python3-devel: for compiling Python modules
# gcc: for building native extensions

# Install AWS CLI v2 (latest and most powerful version)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verify AWS CLI installation
aws --version
# Should show AWS CLI version 2.x

# Install the AWS SDK for Python
pip3 install --user boto3 botocore
# boto3: AWS SDK for Python
# botocore: low-level core library that boto3 is built on
# Note: CLI plugins such as awscli-plugin-endpoint target AWS CLI v1 and
# are not compatible with the v2 installer used above

Configure AWS credentials and settings:

# Configure AWS CLI with your credentials
aws configure
# Enter your Access Key ID, Secret Access Key, default region, and output format

# Alternative: Use environment variables (more secure for automation)
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"

# Test AWS CLI connectivity
aws sts get-caller-identity
# Should return your AWS account information

# List available regions
aws ec2 describe-regions --output table
# Shows all AWS regions worldwide
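If you work with more than one AWS account, named profiles keep credentials separate. A sketch of the config-file layout AWS CLI v2 reads (the `staging` profile name and region are illustrative):

```shell
# Profiles live in ~/.aws/config (and ~/.aws/credentials for keys).
# Append an extra, named profile - "staging" is just an example name.
mkdir -p ~/.aws
cat >> ~/.aws/config << 'EOF'
[profile staging]
region = eu-west-1
output = json
EOF

# Select it per command with --profile, or for a whole session:
# export AWS_PROFILE=staging
```

Commands like `aws s3 ls --profile staging` then run against that account without disturbing your default credentials.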

Create AWS management script:

# Create AWS management helper
nano ~/aws-manager.sh

#!/bin/bash
echo "☁️ AWS MANAGEMENT TOOL"
echo "====================="

show_account_info() {
    echo "📊 AWS Account Information:"
    echo "=========================="
    aws sts get-caller-identity
    echo ""

    echo "🌍 Current Region:"
    aws configure get region
    echo ""

    echo "💰 Billing Information (last month):"
    aws ce get-cost-and-usage \
        --time-period Start=$(date -d "1 month ago" +%Y-%m-01),End=$(date +%Y-%m-01) \
        --granularity MONTHLY \
        --metrics BlendedCost \
        --query 'ResultsByTime[0].Total.BlendedCost.Amount' \
        --output text 2>/dev/null || echo "Billing API access required"
}

list_resources() {
    echo "🖥️ EC2 Instances:"
    echo "================="
    aws ec2 describe-instances \
        --query 'Reservations[*].Instances[*].[InstanceId,InstanceType,State.Name,PublicIpAddress,Tags[?Key==`Name`]|[0].Value]' \
        --output table

    echo ""
    echo "💾 EBS Volumes:"
    echo "==============="
    aws ec2 describe-volumes \
        --query 'Volumes[*].[VolumeId,Size,VolumeType,State,Attachments[0].InstanceId]' \
        --output table

    echo ""
    echo "🌐 VPCs:"
    echo "========"
    aws ec2 describe-vpcs \
        --query 'Vpcs[*].[VpcId,CidrBlock,State,Tags[?Key==`Name`]|[0].Value]' \
        --output table

    echo ""
    echo "🗄️ S3 Buckets:"
    echo "=============="
    aws s3 ls
}

create_instance() {
    local instance_type="${1:-t3.micro}"
    local key_name="${2:-my-key-pair}"

    echo "🚀 Creating EC2 Instance:"
    echo "========================="
    echo "Instance Type: $instance_type"
    echo "Key Pair: $key_name"

    # Get the latest AlmaLinux AMI ID
    AMI_ID=$(aws ec2 describe-images \
        --owners 764336703387 \
        --filters "Name=name,Values=AlmaLinux OS 9*" "Name=state,Values=available" \
        --query 'Images|sort_by(@, &CreationDate)[-1].ImageId' \
        --output text)

    echo "Using AMI: $AMI_ID"

    # Create security group if it doesn't exist
    SG_ID=$(aws ec2 describe-security-groups \
        --group-names almalinux-sg \
        --query 'SecurityGroups[0].GroupId' \
        --output text 2>/dev/null)

    if [ "$SG_ID" == "None" ] || [ -z "$SG_ID" ]; then
        echo "Creating security group..."
        SG_ID=$(aws ec2 create-security-group \
            --group-name almalinux-sg \
            --description "AlmaLinux Security Group" \
            --query 'GroupId' \
            --output text)

        # Add SSH access
        aws ec2 authorize-security-group-ingress \
            --group-id $SG_ID \
            --protocol tcp \
            --port 22 \
            --cidr 0.0.0.0/0

        # Add HTTP access
        aws ec2 authorize-security-group-ingress \
            --group-id $SG_ID \
            --protocol tcp \
            --port 80 \
            --cidr 0.0.0.0/0

        # Add HTTPS access
        aws ec2 authorize-security-group-ingress \
            --group-id $SG_ID \
            --protocol tcp \
            --port 443 \
            --cidr 0.0.0.0/0
    fi

    # Launch instance
    INSTANCE_ID=$(aws ec2 run-instances \
        --image-id $AMI_ID \
        --count 1 \
        --instance-type $instance_type \
        --key-name $key_name \
        --security-group-ids $SG_ID \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=AlmaLinux-Cloud-Server}]' \
        --query 'Instances[0].InstanceId' \
        --output text)

    echo "✅ Instance created: $INSTANCE_ID"
    echo "⏳ Waiting for instance to be running..."

    aws ec2 wait instance-running --instance-ids $INSTANCE_ID

    # Get public IP
    PUBLIC_IP=$(aws ec2 describe-instances \
        --instance-ids $INSTANCE_ID \
        --query 'Reservations[0].Instances[0].PublicIpAddress' \
        --output text)

    echo "🌐 Public IP: $PUBLIC_IP"
    echo "🔑 SSH command: ssh -i ~/.ssh/$key_name.pem ec2-user@$PUBLIC_IP"
}

setup_s3_bucket() {
    local bucket_name="${1:-almalinux-backup-$(date +%s)}"

    echo "🗄️ Creating S3 Bucket:"
    echo "======================"
    echo "Bucket Name: $bucket_name"

    # Create bucket
    aws s3 mb s3://$bucket_name

    # Enable versioning
    aws s3api put-bucket-versioning \
        --bucket $bucket_name \
        --versioning-configuration Status=Enabled

    # Set lifecycle policy
    cat > /tmp/lifecycle.json << EOF
{
    "Rules": [
        {
            "ID": "AlmaLinuxBackupRule",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {
                    "Days": 30,
                    "StorageClass": "STANDARD_IA"
                },
                {
                    "Days": 90,
                    "StorageClass": "GLACIER"
                }
            ]
        }
    ]
}
EOF

    aws s3api put-bucket-lifecycle-configuration \
        --bucket $bucket_name \
        --lifecycle-configuration file:///tmp/lifecycle.json

    echo "✅ S3 bucket created: $bucket_name"
    echo "📁 Upload files: aws s3 cp file.txt s3://$bucket_name/"
}

case "$1" in
    info)
        show_account_info
        ;;
    list)
        list_resources
        ;;
    create)
        create_instance "$2" "$3"
        ;;
    s3)
        setup_s3_bucket "$2"
        ;;
    *)
        echo "Usage: $0 {info|list|create|s3}"
        echo "  info              - Show account information"
        echo "  list              - List AWS resources"
        echo "  create [type] [key] - Create EC2 instance"
        echo "  s3 [bucket]       - Create S3 bucket"
        echo ""
        echo "Examples:"
        echo "  $0 create t3.micro my-key-pair"
        echo "  $0 s3 my-backup-bucket"
        ;;
esac

chmod +x ~/aws-manager.sh
# Test AWS setup
~/aws-manager.sh info
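The billing query in `show_account_info` builds a month-aligned time window with GNU `date`. Isolated, the arithmetic looks like this (anchoring on the 15th is my tweak to sidestep a `date` quirk, not something the script above does):

```shell
# First day of last month and first day of the current month, in the form
# Cost Explorer's --time-period expects. Anchoring on the 15th avoids GNU
# date's end-of-month normalization quirk (e.g. "1 month ago" run on
# March 31 lands in March, not February).
START=$(date -d "$(date +%Y-%m-15) -1 month" +%Y-%m-01)
END=$(date +%Y-%m-01)
echo "Start=$START,End=$END"
```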

🔧 Step 2: Azure CLI and SDK Setup

Now let’s set up Azure integration! 🔷 Azure provides powerful enterprise features and excellent hybrid cloud capabilities.

# Install Azure CLI
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

# Add Azure CLI repository
cat << EOF | sudo tee /etc/yum.repos.d/azure-cli.repo
[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
EOF

# Install Azure CLI
sudo dnf install -y azure-cli

# Verify Azure CLI installation
az --version
# Should show Azure CLI version and extensions

# Install Azure PowerShell (optional)
sudo dnf install -y powershell
# pwsh -c "Install-Module -Name Az -AllowClobber -Force"

Configure Azure authentication:

# Login to Azure (opens web browser for authentication)
az login

# Alternative: Login with service principal (for automation)
# az login --service-principal -u <app-id> -p <password> --tenant <tenant-id>

# Set default subscription
az account list --output table
az account set --subscription "your-subscription-id"

# Verify login
az account show
# Shows current subscription and tenant information

# List available locations
az account list-locations --output table
# Shows all Azure regions worldwide

Create Azure management script:

# Create Azure management helper
nano ~/azure-manager.sh

#!/bin/bash
echo "🔷 AZURE MANAGEMENT TOOL"
echo "========================"

show_account_info() {
    echo "📊 Azure Account Information:"
    echo "============================"
    az account show
    echo ""

    echo "💰 Cost Information:"
    az consumption usage list --top 5 --output table 2>/dev/null || echo "Cost management API access required"
    echo ""

    echo "🌍 Available Locations:"
    az account list-locations --query '[?metadata.regionCategory==`Recommended`].[name,displayName]' --output table
}

list_resources() {
    echo "🖥️ Virtual Machines:"
    echo "==================="
    az vm list --output table

    echo ""
    echo "🌐 Virtual Networks:"
    echo "==================="
    az network vnet list --output table

    echo ""
    echo "🗄️ Storage Accounts:"
    echo "==================="
    az storage account list --output table

    echo ""
    echo "📊 Resource Groups:"
    echo "=================="
    az group list --output table
}

create_resource_group() {
    local rg_name="${1:-almalinux-rg}"
    local location="${2:-eastus}"

    echo "📂 Creating Resource Group:"
    echo "=========================="
    echo "Name: $rg_name"
    echo "Location: $location"

    az group create --name $rg_name --location $location

    echo "✅ Resource group created: $rg_name"
}

create_vm() {
    local vm_name="${1:-almalinux-vm}"
    local rg_name="${2:-almalinux-rg}"
    local vm_size="${3:-Standard_B2s}"

    echo "🚀 Creating Virtual Machine:"
    echo "=========================="
    echo "VM Name: $vm_name"
    echo "Resource Group: $rg_name"
    echo "VM Size: $vm_size"

    # Create virtual machine
    az vm create \
        --resource-group $rg_name \
        --name $vm_name \
        --image "almalinux:almalinux-x86_64:9-gen2:latest" \
        --size $vm_size \
        --admin-username azureuser \
        --generate-ssh-keys \
        --public-ip-sku Standard \
        --security-type TrustedLaunch \
        --enable-secure-boot true \
        --enable-vtpm true

    # Open SSH port
    az vm open-port --resource-group $rg_name --name $vm_name --port 22

    # Open HTTP and HTTPS ports
    az vm open-port --resource-group $rg_name --name $vm_name --port 80 --priority 1010
    az vm open-port --resource-group $rg_name --name $vm_name --port 443 --priority 1020

    # Get public IP
    PUBLIC_IP=$(az vm show -d --resource-group $rg_name --name $vm_name --query publicIps --output tsv)

    echo "✅ VM created successfully"
    echo "🌐 Public IP: $PUBLIC_IP"
    echo "🔑 SSH command: ssh azureuser@$PUBLIC_IP"
}

setup_storage() {
    local storage_name="${1:-almalinuxstorage$(date +%s)}"
    local rg_name="${2:-almalinux-rg}"

    echo "🗄️ Creating Storage Account:"
    echo "=========================="
    echo "Storage Name: $storage_name"
    echo "Resource Group: $rg_name"

    # Create storage account
    az storage account create \
        --name $storage_name \
        --resource-group $rg_name \
        --location eastus \
        --sku Standard_LRS \
        --encryption-services blob file \
        --https-only true

    # Create container for backups
    az storage container create \
        --name backups \
        --account-name $storage_name \
        --auth-mode login

    # Create file share
    az storage share create \
        --name almalinux-share \
        --account-name $storage_name \
        --quota 100 \
        --auth-mode login

    echo "✅ Storage account created: $storage_name"
    echo "📁 Container: backups"
    echo "🗂️ File share: almalinux-share"
}

setup_monitoring() {
    local rg_name="${1:-almalinux-rg}"
    local workspace_name="${2:-almalinux-workspace}"

    echo "📊 Setting up Monitoring:"
    echo "========================"

    # Create Log Analytics workspace
    az monitor log-analytics workspace create \
        --resource-group $rg_name \
        --workspace-name $workspace_name

    # Create Application Insights (requires the CLI extension:
    #   az extension add --name application-insights)
    az monitor app-insights component create \
        --app almalinux-insights \
        --location eastus \
        --resource-group $rg_name \
        --workspace $workspace_name

    echo "✅ Monitoring setup complete"
    echo "📈 Log Analytics: $workspace_name"
    echo "👁️ Application Insights: almalinux-insights"
}

case "$1" in
    info)
        show_account_info
        ;;
    list)
        list_resources
        ;;
    rg)
        create_resource_group "$2" "$3"
        ;;
    vm)
        create_vm "$2" "$3" "$4"
        ;;
    storage)
        setup_storage "$2" "$3"
        ;;
    monitor)
        setup_monitoring "$2" "$3"
        ;;
    *)
        echo "Usage: $0 {info|list|rg|vm|storage|monitor}"
        echo "  info                    - Show account information"
        echo "  list                    - List Azure resources"
        echo "  rg [name] [location]    - Create resource group"
        echo "  vm [name] [rg] [size]   - Create virtual machine"
        echo "  storage [name] [rg]     - Create storage account"
        echo "  monitor [rg] [workspace] - Setup monitoring"
        echo ""
        echo "Examples:"
        echo "  $0 rg almalinux-rg eastus"
        echo "  $0 vm almalinux-vm almalinux-rg Standard_B2s"
        echo "  $0 storage mystorage123 almalinux-rg"
        ;;
esac

chmod +x ~/azure-manager.sh
# Test Azure setup
~/azure-manager.sh info
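One gotcha with `setup_storage`: Azure storage account names must be 3–24 characters, lowercase letters and digits only, and globally unique. The script’s default `almalinuxstorage$(date +%s)` satisfies this, but custom names may not. A small guard you could add (a sketch; `valid_storage_name` is my own helper name, not part of the script above):

```shell
# Return success only for a valid Azure storage account name:
# 3-24 characters, lowercase letters and digits only.
valid_storage_name() {
    [[ "$1" =~ ^[a-z0-9]{3,24}$ ]]
}

valid_storage_name "almalinuxstorage123" && echo "ok"
valid_storage_name "My-Storage" || echo "rejected: uppercase and hyphens not allowed"
```

Calling this before `az storage account create` gives an immediate, local error instead of a round-trip to the API.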

🌟 Step 3: Hybrid Cloud Architecture

Let’s build a sophisticated hybrid cloud setup! 🌉 This connects your on-premises AlmaLinux infrastructure with both AWS and Azure.

Set up VPN connections for hybrid connectivity:

# Install VPN client software
sudo dnf install -y openvpn strongswan NetworkManager-strongswan

# Create AWS VPN configuration script
nano ~/setup-aws-vpn.sh

#!/bin/bash
echo "🔒 AWS VPN SETUP"
echo "================"

create_aws_vpn() {
    local customer_gateway_ip="$1"

    if [ -z "$customer_gateway_ip" ]; then
        echo "Usage: create_aws_vpn <your-public-ip>"
        return 1
    fi

    echo "Setting up AWS VPN with IP: $customer_gateway_ip"

    # Create customer gateway
    CGW_ID=$(aws ec2 create-customer-gateway \
        --type ipsec.1 \
        --public-ip $customer_gateway_ip \
        --bgp-asn 65000 \
        --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=AlmaLinux-CGW}]' \
        --query 'CustomerGateway.CustomerGatewayId' \
        --output text)

    echo "✅ Customer Gateway created: $CGW_ID"

    # Create virtual private gateway
    VGW_ID=$(aws ec2 create-vpn-gateway \
        --type ipsec.1 \
        --amazon-side-asn 64512 \
        --tag-specifications 'ResourceType=vpn-gateway,Tags=[{Key=Name,Value=AlmaLinux-VGW}]' \
        --query 'VpnGateway.VpnGatewayId' \
        --output text)

    echo "✅ Virtual Private Gateway created: $VGW_ID"

    # Wait for the gateway to become available (the CLI has no built-in
    # waiter for VPN gateways, so poll its state)
    until [ "$(aws ec2 describe-vpn-gateways --vpn-gateway-ids $VGW_ID \
        --query 'VpnGateways[0].State' --output text)" = "available" ]; do
        sleep 10
    done

    # Get default VPC
    VPC_ID=$(aws ec2 describe-vpcs \
        --filters "Name=is-default,Values=true" \
        --query 'Vpcs[0].VpcId' \
        --output text)

    # Attach VGW to VPC
    aws ec2 attach-vpn-gateway \
        --vpn-gateway-id $VGW_ID \
        --vpc-id $VPC_ID

    # Create VPN connection
    VPN_ID=$(aws ec2 create-vpn-connection \
        --type ipsec.1 \
        --customer-gateway-id $CGW_ID \
        --vpn-gateway-id $VGW_ID \
        --tag-specifications 'ResourceType=vpn-connection,Tags=[{Key=Name,Value=AlmaLinux-VPN}]' \
        --query 'VpnConnection.VpnConnectionId' \
        --output text)

    echo "✅ VPN Connection created: $VPN_ID"
    echo "⏳ Waiting for VPN to be available..."

    aws ec2 wait vpn-connection-available --vpn-connection-ids $VPN_ID

    # Download configuration
    aws ec2 describe-vpn-connections \
        --vpn-connection-ids $VPN_ID \
        --query 'VpnConnections[0].CustomerGatewayConfiguration' \
        --output text > aws-vpn-config.txt

    echo "📁 VPN configuration saved to: aws-vpn-config.txt"
}

# Get your public IP
PUBLIC_IP=$(curl -s ifconfig.me)
echo "Your public IP: $PUBLIC_IP"

create_aws_vpn "$PUBLIC_IP"

chmod +x ~/setup-aws-vpn.sh
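The script trusts whatever `curl -s ifconfig.me` returns, but these services occasionally hand back an error page instead of an address. A defensive sketch (the `looks_like_ipv4` helper and the icanhazip.com fallback are my additions, not part of the script above):

```shell
# Sanity-check helper: does $1 look like a dotted-quad IPv4 address?
looks_like_ipv4() {
    [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]
}

# Fetch the public IP with a fallback service, then validate it
PUBLIC_IP=$(curl -s --max-time 5 ifconfig.me || curl -s --max-time 5 icanhazip.com)
if looks_like_ipv4 "$PUBLIC_IP"; then
    echo "Your public IP: $PUBLIC_IP"
else
    echo "Could not determine a valid public IP" >&2
fi
```

Passing an HTML error page as the customer gateway IP would make the AWS API call fail with a confusing message, so failing fast here is worth the extra lines.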

Create Azure VPN setup:

# Create Azure VPN configuration script
nano ~/setup-azure-vpn.sh

#!/bin/bash
echo "🔷 AZURE VPN SETUP"
echo "=================="

create_azure_vpn() {
    local rg_name="${1:-almalinux-rg}"
    local location="${2:-eastus}"
    local on_prem_ip="$3"

    if [ -z "$on_prem_ip" ]; then
        echo "Usage: create_azure_vpn <resource-group> <location> <on-premises-ip>"
        return 1
    fi

    echo "Setting up Azure VPN in $rg_name with on-premises IP: $on_prem_ip"

    # Create virtual network
    az network vnet create \
        --resource-group $rg_name \
        --name AlmaLinux-VNet \
        --address-prefix 10.1.0.0/16 \
        --subnet-name GatewaySubnet \
        --subnet-prefix 10.1.255.0/27

    # Create additional subnet for VMs
    az network vnet subnet create \
        --resource-group $rg_name \
        --vnet-name AlmaLinux-VNet \
        --name VMSubnet \
        --address-prefix 10.1.1.0/24

    # Create public IP for the VPN gateway (newer gateway deployments
    # expect a Standard SKU static address; Basic/Dynamic is being retired)
    az network public-ip create \
        --resource-group $rg_name \
        --name VPNGateway-IP \
        --sku Standard \
        --allocation-method Static

    # Create VPN gateway (this takes 15-45 minutes)
    echo "⏳ Creating VPN Gateway (this will take 15-45 minutes)..."
    az network vnet-gateway create \
        --resource-group $rg_name \
        --name AlmaLinux-VPNGateway \
        --public-ip-address VPNGateway-IP \
        --vnet AlmaLinux-VNet \
        --gateway-type Vpn \
        --vpn-type RouteBased \
        --sku VpnGw1 \
        --no-wait

    # Create local network gateway
    az network local-gateway create \
        --resource-group $rg_name \
        --name OnPremises-Gateway \
        --gateway-ip-address $on_prem_ip \
        --local-address-prefixes 192.168.0.0/16

    echo "✅ VPN setup initiated"
    echo "⏳ Gateway creation in progress (check status with: az network vnet-gateway show -g $rg_name -n AlmaLinux-VPNGateway)"
    echo "🔑 Create connection after gateway is ready with shared key"
}

# Get your public IP
PUBLIC_IP=$(curl -s ifconfig.me)
echo "Your public IP: $PUBLIC_IP"

create_azure_vpn "almalinux-rg" "eastus" "$PUBLIC_IP"

chmod +x ~/setup-azure-vpn.sh

Create multi-cloud management script:

# Create multi-cloud management script
nano ~/multi-cloud-manager.sh

#!/bin/bash
echo "🌩️ MULTI-CLOUD MANAGEMENT TOOL"
echo "==============================="

sync_data() {
    echo "🔄 Multi-Cloud Data Synchronization:"
    echo "===================================="

    # Sync to AWS S3
    echo "📤 Syncing to AWS S3..."
    aws s3 sync /var/backups/ s3://almalinux-backup-bucket/$(hostname)/ \
        --exclude "*.tmp" --exclude "*.log" --delete

    # Sync to Azure Blob Storage
    echo "📤 Syncing to Azure Storage..."
    az storage blob upload-batch \
        --destination backups \
        --source /var/backups \
        --account-name almalinuxstorage \
        --auth-mode login

    echo "✅ Data synchronization complete"
}

deploy_application() {
    local app_name="$1"
    local cloud_provider="$2"

    echo "🚀 Deploying Application: $app_name"
    echo "=================================="

    case "$cloud_provider" in
        "aws")
            deploy_to_aws "$app_name"
            ;;
        "azure")
            deploy_to_azure "$app_name"
            ;;
        "both")
            deploy_to_aws "$app_name"
            deploy_to_azure "$app_name"
            ;;
        *)
            echo "❌ Unknown cloud provider: $cloud_provider"
            echo "Available options: aws, azure, both"
            return 1
            ;;
    esac
}

deploy_to_aws() {
    local app_name="$1"

    echo "☁️ Deploying to AWS..."

    # Create deployment package
    tar -czf /tmp/${app_name}.tar.gz -C /opt/$app_name .

    # Upload to S3
    aws s3 cp /tmp/${app_name}.tar.gz s3://almalinux-deployments/

    # Create EC2 instances and deploy
    ~/aws-manager.sh create t3.micro my-key-pair

    echo "✅ AWS deployment initiated"
}

deploy_to_azure() {
    local app_name="$1"

    echo "🔷 Deploying to Azure..."

    # Create deployment package
    tar -czf /tmp/${app_name}.tar.gz -C /opt/$app_name .

    # Upload to Azure Storage
    az storage blob upload \
        --file /tmp/${app_name}.tar.gz \
        --name "${app_name}.tar.gz" \
        --container-name deployments \
        --account-name almalinuxstorage \
        --auth-mode login

    # Create VM and deploy
    ~/azure-manager.sh vm almalinux-vm almalinux-rg Standard_B2s

    echo "✅ Azure deployment initiated"
}

monitor_costs() {
    echo "💰 Multi-Cloud Cost Monitoring:"
    echo "=============================="

    echo "☁️ AWS Costs:"
    aws ce get-cost-and-usage \
        --time-period Start=$(date -d "1 month ago" +%Y-%m-01),End=$(date +%Y-%m-01) \
        --granularity MONTHLY \
        --metrics BlendedCost \
        --query 'ResultsByTime[0].Total.BlendedCost.Amount' \
        --output text 2>/dev/null || echo "Billing API access required"

    echo ""
    echo "🔷 Azure Costs:"
    az consumption usage list --top 5 --output table 2>/dev/null || echo "Cost management access required"
}

health_check() {
    echo "🏥 Multi-Cloud Health Check:"
    echo "==========================="

    echo "☁️ AWS Resources:"
    aws ec2 describe-instance-status --output table

    echo ""
    echo "🔷 Azure Resources:"
    az vm list --show-details --output table

    echo ""
    echo "🌐 Network Connectivity:"
    ping -c 3 8.8.8.8 >/dev/null 2>&1 && echo "✅ Internet connectivity: OK" || echo "❌ Internet connectivity: FAILED"

    echo ""
    echo "🔒 VPN Status:"
    systemctl is-active strongswan >/dev/null 2>&1 && echo "✅ VPN service: Active" || echo "❌ VPN service: Inactive"
}

backup_configs() {
    echo "💾 Backing up Cloud Configurations:"
    echo "=================================="

    BACKUP_DIR="/var/backups/cloud-configs-$(date +%Y%m%d_%H%M%S)"
    mkdir -p $BACKUP_DIR
    chmod 700 $BACKUP_DIR  # credentials get copied here - keep it private

    # Backup AWS configs
    cp -r ~/.aws $BACKUP_DIR/aws-config 2>/dev/null || echo "No AWS config found"

    # Backup Azure configs
    cp -r ~/.azure $BACKUP_DIR/azure-config 2>/dev/null || echo "No Azure config found"

    # Export resource configurations
    aws ec2 describe-instances > $BACKUP_DIR/aws-instances.json 2>/dev/null
    az vm list > $BACKUP_DIR/azure-vms.json 2>/dev/null

    echo "✅ Configurations backed up to: $BACKUP_DIR"
}

case "$1" in
    sync)
        sync_data
        ;;
    deploy)
        deploy_application "$2" "$3"
        ;;
    costs)
        monitor_costs
        ;;
    health)
        health_check
        ;;
    backup)
        backup_configs
        ;;
    *)
        echo "Usage: $0 {sync|deploy|costs|health|backup}"
        echo "  sync                      - Sync data to both clouds"
        echo "  deploy <app> <provider>   - Deploy application (aws|azure|both)"
        echo "  costs                     - Monitor cloud costs"
        echo "  health                    - Check cloud resource health"
        echo "  backup                    - Backup cloud configurations"
        echo ""
        echo "Examples:"
        echo "  $0 deploy myapp both"
        echo "  $0 sync"
        echo "  $0 health"
        ;;
esac

chmod +x ~/multi-cloud-manager.sh
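Since `backup_configs` creates a fresh timestamped directory on every run, pairing it with a retention sweep keeps `/var/backups` from growing forever. A sketch (the `prune_old_backups` helper and the 30-day window are an example policy, not part of the script above):

```shell
# Remove timestamped cloud-config backup directories older than N days.
# Defaults match the cloud-configs-* naming used by backup_configs.
prune_old_backups() {
    local dir="${1:-/var/backups}" days="${2:-30}"
    find "$dir" -maxdepth 1 -type d -name 'cloud-configs-*' \
        -mtime +"$days" -exec rm -rf {} +
}

# Example: keep only the last 30 days of config backups
# prune_old_backups /var/backups 30
```

Dropping a call like this into a daily cron job alongside the backup itself closes the loop.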

✅ Step 4: Cloud Automation and Infrastructure as Code

Let’s automate everything with Infrastructure as Code! 🤖 We’ll use Terraform for cross-cloud provisioning.

Install Terraform:

# Install Terraform
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo dnf install -y terraform

# Verify installation
terraform --version

# Initialize Terraform working directory
mkdir ~/terraform-cloud
cd ~/terraform-cloud

Create Terraform configuration for multi-cloud deployment:

# Create main Terraform configuration
cat > main.tf << 'EOF'
# Configure Terraform providers
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

# Configure AWS Provider
provider "aws" {
  region = var.aws_region
}

# Configure Azure Provider
provider "azurerm" {
  features {}
}

# Variables
variable "aws_region" {
  description = "AWS region"
  default     = "us-east-1"
}

variable "azure_location" {
  description = "Azure location"
  default     = "East US"
}

variable "environment" {
  description = "Environment name"
  default     = "production"
}

variable "project_name" {
  description = "Project name"
  default     = "almalinux-cloud"
}

# AWS Resources
resource "aws_instance" "almalinux_aws" {
  ami           = data.aws_ami.almalinux.id
  instance_type = "t3.micro"
  key_name      = aws_key_pair.deployer.key_name

  vpc_security_group_ids = [aws_security_group.almalinux_sg.id]

  user_data = base64encode(templatefile("${path.module}/user_data.sh", {
    hostname = "almalinux-aws-${var.environment}"
  }))

  tags = {
    Name        = "AlmaLinux-AWS-${var.environment}"
    Environment = var.environment
    Project     = var.project_name
  }
}

# AWS Security Group
resource "aws_security_group" "almalinux_sg" {
  name_prefix = "almalinux-sg-"
  description = "Security group for AlmaLinux instances"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "AlmaLinux-SG-${var.environment}"
  }
}

# AWS Key Pair
resource "aws_key_pair" "deployer" {
  key_name   = "almalinux-deployer"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Data source for AlmaLinux AMI
data "aws_ami" "almalinux" {
  most_recent = true
  owners      = ["764336703387"] # AlmaLinux

  filter {
    name   = "name"
    values = ["AlmaLinux OS 9*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Azure Resource Group
resource "azurerm_resource_group" "main" {
  name     = "rg-${var.project_name}-${var.environment}"
  location = var.azure_location

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Azure Virtual Network
resource "azurerm_virtual_network" "main" {
  name                = "vnet-${var.project_name}-${var.environment}"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Azure Subnet
resource "azurerm_subnet" "main" {
  name                 = "subnet-${var.project_name}-${var.environment}"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Azure Network Security Group
resource "azurerm_network_security_group" "main" {
  name                = "nsg-${var.project_name}-${var.environment}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "HTTP"
    priority                   = 1002
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "HTTPS"
    priority                   = 1003
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Azure Public IP
resource "azurerm_public_ip" "main" {
  name                = "pip-${var.project_name}-${var.environment}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  allocation_method   = "Static"
  sku                 = "Standard"

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Azure Network Interface
resource "azurerm_network_interface" "main" {
  name                = "nic-${var.project_name}-${var.environment}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.main.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.main.id
  }

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Associate Network Security Group to Network Interface
resource "azurerm_network_interface_security_group_association" "main" {
  network_interface_id      = azurerm_network_interface.main.id
  network_security_group_id = azurerm_network_security_group.main.id
}

# Azure Virtual Machine
resource "azurerm_linux_virtual_machine" "main" {
  name                = "vm-${var.project_name}-${var.environment}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  size                = "Standard_B2s"
  admin_username      = "azureuser"

  # Disable password authentication
  disable_password_authentication = true

  network_interface_ids = [
    azurerm_network_interface.main.id,
  ]

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "almalinux"
    offer     = "almalinux-x86_64"
    sku       = "9-gen2"
    version   = "latest"
  }

  custom_data = base64encode(templatefile("${path.module}/user_data.sh", {
    hostname = "almalinux-azure-${var.environment}"
  }))

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Outputs
output "aws_instance_public_ip" {
  description = "Public IP address of AWS instance"
  value       = aws_instance.almalinux_aws.public_ip
}

output "aws_instance_private_ip" {
  description = "Private IP address of AWS instance"
  value       = aws_instance.almalinux_aws.private_ip
}

output "azure_vm_public_ip" {
  description = "Public IP address of Azure VM"
  value       = azurerm_public_ip.main.ip_address
}

output "azure_vm_private_ip" {
  description = "Private IP address of Azure VM"
  value       = azurerm_network_interface.main.private_ip_address
}
EOF

# Create user data script for instance initialization
cat > user_data.sh << 'EOF'
#!/bin/bash

# Update system
dnf update -y

# Set hostname
hostnamectl set-hostname ${hostname}

# Install essential packages
dnf install -y nginx htop curl wget git

# Start and enable nginx
systemctl enable --now nginx

# Create simple index page
cat > /usr/share/nginx/html/index.html << HTML
<!DOCTYPE html>
<html>
<head>
    <title>AlmaLinux Cloud Instance</title>
    <style>
        body { font-family: Arial; text-align: center; margin-top: 50px; }
        .success { color: #28a745; }
    </style>
</head>
<body>
    <h1 class="success">🚀 AlmaLinux Cloud Instance</h1>
    <p>Hostname: ${hostname}</p>
    <p>Deployed via Terraform</p>
    <p>$(date)</p>
</body>
</html>
HTML

# Configure firewall
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload

# Log deployment
echo "$(date): AlmaLinux cloud instance initialized" >> /var/log/cloud-init.log
EOF

# Create Terraform variables file
cat > terraform.tfvars << 'EOF'
aws_region      = "us-east-1"
azure_location  = "East US"
environment     = "production"
project_name    = "almalinux-cloud"
EOF

Create Terraform automation script:

# Create Terraform automation script
cat > terraform-deploy.sh << 'EOF'
#!/bin/bash
echo "🏗️ TERRAFORM CLOUD DEPLOYMENT"
echo "=============================="

init_terraform() {
    echo "📦 Initializing Terraform..."
    terraform init

    if [ $? -eq 0 ]; then
        echo "✅ Terraform initialized successfully"
    else
        echo "❌ Terraform initialization failed"
        exit 1
    fi
}

plan_deployment() {
    echo "📋 Planning deployment..."
    terraform plan -out=tfplan

    if [ $? -eq 0 ]; then
        echo "✅ Terraform plan completed successfully"
        echo "📊 Review the plan above before applying"
    else
        echo "❌ Terraform plan failed"
        exit 1
    fi
}

apply_deployment() {
    echo "🚀 Applying deployment..."
    terraform apply tfplan

    if [ $? -eq 0 ]; then
        echo "✅ Deployment completed successfully"
        echo ""
        echo "📊 Deployment outputs:"
        terraform output

        # Save outputs to file
        terraform output -json > deployment-outputs.json
        echo "📁 Outputs saved to deployment-outputs.json"
    else
        echo "❌ Deployment failed"
        exit 1
    fi
}

destroy_deployment() {
    echo "💥 Destroying deployment..."
    echo "⚠️  This will delete all resources!"
    read -p "Are you sure? (yes/no): " confirm

    if [ "$confirm" = "yes" ]; then
        terraform destroy -auto-approve
        echo "✅ Resources destroyed"
    else
        echo "❌ Destruction cancelled"
    fi
}

show_resources() {
    echo "📊 Current Terraform Resources:"
    echo "=============================="
    terraform state list

    echo ""
    echo "💰 Estimated costs:"
    echo "Note: Use a tool such as Infracost (infracost breakdown) for detailed cost analysis"
}

validate_config() {
    echo "🧪 Validating Terraform configuration..."
    terraform validate

    if [ $? -eq 0 ]; then
        echo "✅ Configuration is valid"
    else
        echo "❌ Configuration validation failed"
        exit 1
    fi

    echo ""
    echo "🔍 Formatting configuration..."
    terraform fmt -recursive
    echo "✅ Configuration formatted"
}

case "$1" in
    init)
        init_terraform
        ;;
    plan)
        validate_config
        plan_deployment
        ;;
    apply)
        apply_deployment
        ;;
    destroy)
        destroy_deployment
        ;;
    show)
        show_resources
        ;;
    validate)
        validate_config
        ;;
    full)
        init_terraform
        validate_config
        plan_deployment
        echo ""
        read -p "Apply this plan? (yes/no): " confirm
        if [ "$confirm" = "yes" ]; then
            apply_deployment
        else
            echo "❌ Deployment cancelled"
        fi
        ;;
    *)
        echo "Usage: $0 {init|plan|apply|destroy|show|validate|full}"
        echo "  init     - Initialize Terraform"
        echo "  plan     - Plan deployment"
        echo "  apply    - Apply deployment"
        echo "  destroy  - Destroy all resources"
        echo "  show     - Show current resources"
        echo "  validate - Validate configuration"
        echo "  full     - Complete deployment workflow"
        ;;
esac
EOF

chmod +x terraform-deploy.sh

Generate SSH key if needed and deploy:

# Generate SSH key if it doesn't exist
if [ ! -f ~/.ssh/id_rsa ]; then
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
    echo "✅ SSH key generated"
fi

# Run full Terraform deployment
./terraform-deploy.sh full

🌟 Step 5: Cloud Monitoring and Cost Management

Let’s implement comprehensive monitoring and cost optimization! 📊 We’ll track performance, costs, and resource utilization across both clouds.

Set up CloudWatch for AWS monitoring:

# Install CloudWatch agent (use the RHEL build on AlmaLinux)
wget https://s3.amazonaws.com/amazoncloudwatch-agent/redhat/amd64/latest/amazon-cloudwatch-agent.rpm
sudo rpm -U ./amazon-cloudwatch-agent.rpm

# Create CloudWatch configuration
sudo nano /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json

{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "root"
    },
    "metrics": {
        "namespace": "AlmaLinux/System",
        "metrics_collected": {
            "cpu": {
                "measurement": [
                    "cpu_usage_idle",
                    "cpu_usage_iowait",
                    "cpu_usage_user",
                    "cpu_usage_system"
                ],
                "metrics_collection_interval": 60,
                "totalcpu": false
            },
            "disk": {
                "measurement": [
                    "used_percent"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "diskio": {
                "measurement": [
                    "io_time"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "mem": {
                "measurement": [
                    "mem_used_percent"
                ],
                "metrics_collection_interval": 60
            },
            "netstat": {
                "measurement": [
                    "tcp_established",
                    "tcp_time_wait"
                ],
                "metrics_collection_interval": 60
            },
            "swap": {
                "measurement": [
                    "swap_used_percent"
                ],
                "metrics_collection_interval": 60
            }
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/messages",
                        "log_group_name": "AlmaLinux-System-Logs",
                        "log_stream_name": "{instance_id}-messages"
                    },
                    {
                        "file_path": "/var/log/secure",
                        "log_group_name": "AlmaLinux-Security-Logs",
                        "log_stream_name": "{instance_id}-secure"
                    }
                ]
            }
        }
    }
}

# Start CloudWatch agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config \
    -m ec2 \
    -s \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
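After fetching the config, it is worth confirming the agent actually came up. A small check like the one below works either way (it assumes the default install path used above):

```shell
# Confirm the CloudWatch agent is up; "status" reports "running" when healthy
CTL=/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
if [ -x "$CTL" ]; then
    sudo "$CTL" -a status
else
    echo "CloudWatch agent not installed on this host"
fi
```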

Set up Azure Monitor:

# Install the Azure Connected Machine (Arc) agent
wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh
bash ~/install_linux_azcmagent.sh

# Connect to Azure Arc (for hybrid monitoring)
sudo azcmagent connect \
    --resource-group "almalinux-rg" \
    --tenant-id "your-tenant-id" \
    --location "eastus" \
    --subscription-id "your-subscription-id"
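Once the connect step finishes, you can verify the machine registered with Azure Arc. This sketch degrades gracefully if the agent is not on the box:

```shell
# Verify the Azure Arc connection (skips cleanly if the agent is absent)
if command -v azcmagent >/dev/null 2>&1; then
    azcmagent show
else
    echo "azcmagent not installed on this host"
fi
```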

Create comprehensive monitoring script:

# Create cloud monitoring script
nano ~/cloud-monitor.sh

#!/bin/bash
echo "📊 CLOUD MONITORING DASHBOARD"
echo "============================="

show_aws_metrics() {
    echo "☁️ AWS CloudWatch Metrics:"
    echo "=========================="

    # Get instance IDs
    INSTANCES=$(aws ec2 describe-instances \
        --query 'Reservations[*].Instances[*].InstanceId' \
        --output text)

    for instance in $INSTANCES; do
        echo "📊 Instance: $instance"

        # CPU Utilization
        CPU=$(aws cloudwatch get-metric-statistics \
            --namespace AWS/EC2 \
            --metric-name CPUUtilization \
            --dimensions Name=InstanceId,Value=$instance \
            --start-time $(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%S) \
            --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
            --period 300 \
            --statistics Average \
            --query 'Datapoints[0].Average' \
            --output text 2>/dev/null)

        echo "   CPU: ${CPU:-N/A}%"

        # Network In
        NETWORK_IN=$(aws cloudwatch get-metric-statistics \
            --namespace AWS/EC2 \
            --metric-name NetworkIn \
            --dimensions Name=InstanceId,Value=$instance \
            --start-time $(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%S) \
            --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
            --period 300 \
            --statistics Sum \
            --query 'Datapoints[0].Sum' \
            --output text 2>/dev/null)

        echo "   Network In: ${NETWORK_IN:-N/A} bytes"
        echo ""
    done
}

show_azure_metrics() {
    echo "🔷 Azure Monitor Metrics:"
    echo "========================"

    # Get VM information
    VMS=$(az vm list --query '[].{Name:name, ResourceGroup:resourceGroup}' --output tsv)

    while IFS=$'\t' read -r vm_name rg_name; do
        [ -n "$vm_name" ] || continue
        echo "📊 VM: $vm_name"

        # CPU Percentage
        CPU=$(az monitor metrics list \
            --resource "/subscriptions/$(az account show --query id -o tsv)/resourceGroups/$rg_name/providers/Microsoft.Compute/virtualMachines/$vm_name" \
            --metric "Percentage CPU" \
            --interval PT5M \
            --query 'value[0].timeseries[0].data[-1].average' \
            --output tsv 2>/dev/null)

        echo "   CPU: ${CPU:-N/A}%"

        # Network In
        NETWORK_IN=$(az monitor metrics list \
            --resource "/subscriptions/$(az account show --query id -o tsv)/resourceGroups/$rg_name/providers/Microsoft.Compute/virtualMachines/$vm_name" \
            --metric "Network In Total" \
            --interval PT5M \
            --query 'value[0].timeseries[0].data[-1].total' \
            --output tsv 2>/dev/null)

        echo "   Network In: ${NETWORK_IN:-N/A} bytes"
        echo ""
    done <<< "$VMS"
}

cost_analysis() {
    echo "💰 Cost Analysis:"
    echo "================="

    echo "☁️ AWS Costs (Current Month):"
    START_DATE=$(date +%Y-%m-01)
    END_DATE=$(date -d "$(date +%Y-%m-01) +1 month" +%Y-%m-01)

    AWS_COST=$(aws ce get-cost-and-usage \
        --time-period Start=$START_DATE,End=$END_DATE \
        --granularity MONTHLY \
        --metrics BlendedCost \
        --query 'ResultsByTime[0].Total.BlendedCost.Amount' \
        --output text 2>/dev/null)

    echo "   Current month: \$${AWS_COST:-N/A}"

    echo ""
    echo "🔷 Azure Costs:"
    az consumption usage list \
        --top 5 \
        --query '[].{Service:meterName, Cost:pretaxCost, Currency:currency}' \
        --output table 2>/dev/null || echo "   Cost data unavailable"
}

resource_utilization() {
    echo "📈 Resource Utilization Summary:"
    echo "==============================="

    echo "☁️ AWS Resources:"
    echo "   Running EC2 instances: $(aws ec2 describe-instances --query 'Reservations[*].Instances[?State.Name==`running`] | length(@)' --output text)"
    echo "   Total EBS volumes: $(aws ec2 describe-volumes --query 'length(Volumes)' --output text)"
    echo "   S3 buckets: $(aws s3 ls | wc -l)"

    echo ""
    echo "🔷 Azure Resources:"
    echo "   Running VMs: $(az vm list --show-details --query "[?powerState=='VM running'] | length(@)" --output tsv)"
    echo "   Storage accounts: $(az storage account list --query 'length(@)' --output tsv)"
    echo "   Resource groups: $(az group list --query 'length(@)' --output tsv)"
}

generate_report() {
    echo "📋 Generating Cloud Report..."

    REPORT_FILE="/tmp/cloud-report-$(date +%Y%m%d_%H%M%S).txt"

    {
        echo "Cloud Infrastructure Report"
        echo "Generated: $(date)"
        echo "=========================="
        echo ""

        show_aws_metrics
        echo ""
        show_azure_metrics
        echo ""
        cost_analysis
        echo ""
        resource_utilization

    } > "$REPORT_FILE"

    echo "✅ Report saved to: $REPORT_FILE"
}

set_alerts() {
    echo "🔔 Setting up Cloud Alerts..."

    # AWS CloudWatch Alarm for high CPU
    aws cloudwatch put-metric-alarm \
        --alarm-name "AlmaLinux-High-CPU" \
        --alarm-description "Alert when CPU exceeds 80%" \
        --metric-name CPUUtilization \
        --namespace AWS/EC2 \
        --statistic Average \
        --period 300 \
        --threshold 80 \
        --comparison-operator GreaterThanThreshold \
        --evaluation-periods 2 \
        --alarm-actions "arn:aws:sns:us-east-1:123456789012:cpu-alerts" 2>/dev/null || echo "Configure SNS topic for AWS alerts"

    # Azure alert rule for high CPU
    az monitor metrics alert create \
        --name "AlmaLinux-High-CPU-Azure" \
        --resource-group "almalinux-rg" \
        --scopes "/subscriptions/$(az account show --query id -o tsv)/resourceGroups/almalinux-rg" \
        --condition "avg Percentage CPU > 80" \
        --description "Alert when CPU exceeds 80%" \
        --evaluation-frequency 5m \
        --window-size 15m \
        --severity 2 2>/dev/null || echo "Configure action group for Azure alerts"

    echo "✅ Alerts configured"
}

case "$1" in
    aws)
        show_aws_metrics
        ;;
    azure)
        show_azure_metrics
        ;;
    costs)
        cost_analysis
        ;;
    utilization)
        resource_utilization
        ;;
    report)
        generate_report
        ;;
    alerts)
        set_alerts
        ;;
    all)
        show_aws_metrics
        echo ""
        show_azure_metrics
        echo ""
        cost_analysis
        echo ""
        resource_utilization
        ;;
    *)
        echo "Usage: $0 {aws|azure|costs|utilization|report|alerts|all}"
        echo "  aws         - Show AWS metrics"
        echo "  azure       - Show Azure metrics"
        echo "  costs       - Show cost analysis"
        echo "  utilization - Show resource utilization"
        echo "  report      - Generate comprehensive report"
        echo "  alerts      - Set up monitoring alerts"
        echo "  all         - Show all monitoring data"
        ;;
esac

chmod +x ~/cloud-monitor.sh
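The monitoring script is most useful when it runs on a schedule. A minimal sketch that prepares a cron table for it (log file name and times are just examples; the paths assume the script above):

```shell
# Write a cron schedule for the monitoring script created above
cat > ~/cloud-report.cron << 'EOF'
# Daily infrastructure report at 07:00, cost check at noon
0 7 * * *  $HOME/cloud-monitor.sh report >> $HOME/cloud-monitor-cron.log 2>&1
0 12 * * * $HOME/cloud-monitor.sh costs  >> $HOME/cloud-monitor-cron.log 2>&1
EOF

# Install it for the current user when you are ready:
# crontab ~/cloud-report.cron
```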

🎮 Quick Examples

Let’s see your cloud integration in action with real-world scenarios! 🎯

Example 1: Auto-scaling Web Application

# Deploy auto-scaling web application to both clouds
~/multi-cloud-manager.sh deploy webapp both

# Monitor the deployment
~/cloud-monitor.sh all

# Check costs
~/cloud-monitor.sh costs

Example 2: Disaster Recovery Setup

# Create disaster recovery setup
cat > disaster-recovery.tf << 'EOF'
# Primary region deployment
resource "aws_instance" "primary" {
  ami           = data.aws_ami.almalinux.id
  instance_type = "t3.small"
  availability_zone = "us-east-1a"

  tags = {
    Name = "Primary-Web-Server"
    Role = "primary"
  }
}

# Secondary region deployment
resource "aws_instance" "secondary" {
  provider      = aws.secondary
  ami           = data.aws_ami.almalinux_secondary.id
  instance_type = "t3.small"
  availability_zone = "us-west-2a"

  tags = {
    Name = "Secondary-Web-Server"
    Role = "secondary"
  }
}

# Cross-region replication bucket
resource "aws_s3_bucket" "backup" {
  bucket = "almalinux-dr-backup-${random_string.suffix.result}"
}

resource "aws_s3_bucket_versioning" "backup" {
  bucket = aws_s3_bucket.backup.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_replication_configuration" "backup" {
  role   = aws_iam_role.replication.arn
  bucket = aws_s3_bucket.backup.id

  rule {
    id     = "cross-region-replication"
    status = "Enabled"

    destination {
      bucket        = aws_s3_bucket.backup_replica.arn
      storage_class = "STANDARD_IA"
    }
  }
}
EOF
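Note that the plan above references an `aws.secondary` provider alias and a `random_string.suffix` resource that are not defined in the snippet. A minimal sketch of those two pieces is shown below (the replica bucket and replication IAM role would still need definitions before the plan applies cleanly):

```shell
# Append minimal definitions for the secondary provider and the bucket suffix
cat >> disaster-recovery.tf << 'EOF'

provider "aws" {
  alias  = "secondary"
  region = "us-west-2"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
  upper   = false
}
EOF
```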

# Apply disaster recovery setup
terraform apply

Example 3: Cost Optimization Automation

# Create cost optimization script
cat > cost-optimizer.sh << 'EOF'
#!/bin/bash

optimize_aws_costs() {
    echo "💰 AWS Cost Optimization:"

    # Stop underutilized instances
    INSTANCES=$(aws ec2 describe-instances \
        --filters "Name=instance-state-name,Values=running" \
        --query 'Reservations[*].Instances[*].InstanceId' \
        --output text)

    for instance in $INSTANCES; do
        # Get CPU utilization for last 7 days
        CPU_AVG=$(aws cloudwatch get-metric-statistics \
            --namespace AWS/EC2 \
            --metric-name CPUUtilization \
            --dimensions Name=InstanceId,Value=$instance \
            --start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
            --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
            --period 86400 \
            --statistics Average \
            --query 'Datapoints[].Average | avg(@)' \
            --output text)

        if [ -n "$CPU_AVG" ] && [ "$CPU_AVG" != "None" ] && (( $(echo "$CPU_AVG < 5" | bc -l) )); then
            echo "⚠️  Instance $instance has low CPU usage ($CPU_AVG%)"
            echo "   Consider stopping or downsizing"
        fi
    done

    # Find unused EBS volumes
    echo "🔍 Checking for unused EBS volumes..."
    aws ec2 describe-volumes \
        --filters "Name=status,Values=available" \
        --query 'Volumes[].{VolumeId:VolumeId,Size:Size,VolumeType:VolumeType}' \
        --output table
}

optimize_azure_costs() {
    echo "💰 Azure Cost Optimization:"

    # Find stopped VMs that can be deallocated
    STOPPED_VMS=$(az vm list -d \
        --query "[?powerState=='VM stopped'].{Name:name,ResourceGroup:resourceGroup}" \
        --output tsv)

    while IFS=$'\t' read -r vm_name rg_name; do
        [ -n "$vm_name" ] || continue
        echo "⚠️  VM $vm_name is stopped but not deallocated"
        echo "   Run: az vm deallocate --name $vm_name --resource-group $rg_name"
    done <<< "$STOPPED_VMS"

    # Find unused disks
    echo "🔍 Checking for unattached disks..."
    az disk list \
        --query "[?diskState=='Unattached'].{Name:name,Size:diskSizeGb,ResourceGroup:resourceGroup}" \
        --output table
}

# Run optimizations
optimize_aws_costs
echo ""
optimize_azure_costs
EOF

chmod +x cost-optimizer.sh
./cost-optimizer.sh

🚨 Fix Common Problems

Don’t worry when cloud integration issues arise – here are solutions to common problems! 🛠️

Problem 1: Authentication Failures

Symptoms: “Access denied”, “Invalid credentials” errors

# Verify AWS credentials
aws sts get-caller-identity

# Reconfigure AWS CLI
aws configure

# Check AWS environment variables
env | grep AWS

# Verify Azure login
az account show

# Re-login to Azure
az login --use-device-code

# Check Azure permissions
az role assignment list --assignee $(az ad signed-in-user show --query id -o tsv)

Problem 2: Resource Quota Limits

Symptoms: “Quota exceeded”, “Limit reached” errors

# Check AWS service limits
aws service-quotas list-service-quotas --service-code ec2

# Request quota increase
aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 20

# Check Azure quotas
az vm list-usage --location eastus

# Request Azure quota increase through portal
echo "Request quota increase at: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade"

Problem 3: Network Connectivity Issues

Symptoms: VPN connection failures, network timeouts

# Check VPN status
systemctl status strongswan

# Test connectivity to cloud service endpoints (many block ICMP, so use HTTPS)
curl -sI https://ec2.us-east-1.amazonaws.com | head -1
curl -sI https://management.azure.com | head -1

# Check security group rules
aws ec2 describe-security-groups --group-ids sg-12345678

# Check Azure NSG rules
az network nsg show --resource-group almalinux-rg --name almalinux-nsg

# Test specific ports
nc -zv ec2-instance-ip 22
nc -zv azure-vm-ip 80

Problem 4: High Cloud Costs

Symptoms: Unexpected high bills, cost spikes

# Analyze AWS costs
aws ce get-cost-and-usage \
    --time-period Start=2024-01-01,End=2024-02-01 \
    --granularity DAILY \
    --metrics BlendedCost \
    --group-by Type=DIMENSION,Key=SERVICE

# Set up AWS billing alerts
aws budgets create-budget \
    --account-id 123456789012 \
    --budget file://budget.json

# Check Azure costs
az consumption usage list --top 10

# Set up Azure budget alerts
az consumption budget create \
    --budget-name "MonthlyBudget" \
    --amount 100 \
    --time-grain Monthly
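The AWS budgets command above reads its definition from `file://budget.json`. A minimal sketch of that file might look like this (the name and amount are examples):

```shell
# Minimal budget definition consumed by the aws budgets command above
cat > budget.json << 'EOF'
{
    "BudgetName": "MonthlyBudget",
    "BudgetLimit": {
        "Amount": "100",
        "Unit": "USD"
    },
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
}
EOF
```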

📋 Simple Commands Summary

Here’s your cloud integration quick reference guide! 📚

| Task | Command | Purpose |
|------|---------|---------|
| AWS CLI Setup | `aws configure` | Configure AWS credentials |
| Azure CLI Setup | `az login` | Login to Azure account |
| AWS Resources | `~/aws-manager.sh list` | List AWS resources |
| Azure Resources | `~/azure-manager.sh list` | List Azure resources |
| Deploy Infrastructure | `./terraform-deploy.sh full` | Deploy with Terraform |
| Monitor Clouds | `~/cloud-monitor.sh all` | Monitor both clouds |
| Sync Data | `~/multi-cloud-manager.sh sync` | Sync data to clouds |
| Check Costs | `~/cloud-monitor.sh costs` | Monitor cloud costs |
| Health Check | `~/multi-cloud-manager.sh health` | Check cloud health |
| Backup Configs | `~/multi-cloud-manager.sh backup` | Backup configurations |
| Deploy App | `~/multi-cloud-manager.sh deploy app both` | Deploy to both clouds |
| Optimize Costs | `./cost-optimizer.sh` | Find cost optimizations |

💡 Tips for Success

Follow these expert strategies to master cloud integration! 🌟

🎯 Cloud Architecture Best Practices

  • Multi-cloud strategy – Avoid vendor lock-in by using multiple cloud providers
  • Infrastructure as Code – Use Terraform for consistent, repeatable deployments
  • Security first – Implement proper IAM, encryption, and network security
  • Cost optimization – Regular monitoring and right-sizing of resources
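For the Infrastructure as Code point, team workflows usually mean moving Terraform state off your laptop into remote storage with locking. A hedged sketch using the S3 backend (the bucket and DynamoDB table names are placeholders you would create first):

```shell
# Remote state keeps Terraform state shared, versioned, and locked for team use
# (bucket and DynamoDB table below are placeholders that must exist already)
cat > backend.tf << 'EOF'
terraform {
  backend "s3" {
    bucket         = "almalinux-cloud-tfstate"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
EOF

# Re-initialize so Terraform migrates existing local state to the backend:
# terraform init -migrate-state
```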

🔧 Automation and DevOps

  • CI/CD pipelines – Automate deployment to cloud environments
  • Monitoring and alerting – Set up comprehensive monitoring across all clouds
  • Disaster recovery – Plan and test disaster recovery procedures
  • Documentation – Keep detailed documentation of your cloud architecture

🛡️ Security and Compliance

  • Least privilege access – Grant minimum necessary permissions
  • Regular security audits – Use cloud security tools and third-party scanners
  • Data encryption – Encrypt data at rest and in transit
  • Compliance frameworks – Implement SOC2, ISO27001, or other relevant standards
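Least-privilege access in practice means narrow policy documents instead of broad built-in roles. A hypothetical read-only EC2 policy illustrates the idea (the policy name and scope are examples, not a recommendation for your workload):

```shell
# Example least-privilege policy: describe-only EC2 access (name is illustrative)
cat > ec2-readonly-policy.json << 'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*"
        }
    ]
}
EOF

# Create and attach it when ready (requires IAM permissions):
# aws iam create-policy --policy-name EC2ReadOnly --policy-document file://ec2-readonly-policy.json
```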

🚀 Advanced Cloud Features

  • Serverless computing – Use Lambda/Azure Functions for event-driven architectures
  • Container orchestration – Deploy Kubernetes clusters in the cloud
  • AI/ML services – Leverage cloud-native AI and machine learning capabilities
  • Edge computing – Use CDNs and edge locations for global performance

🏆 What You Learned

Congratulations! You’ve mastered cloud integration on AlmaLinux! 🎉 Here’s your incredible achievement:

✅ Configured multi-cloud infrastructure with AWS and Azure integration
✅ Implemented Infrastructure as Code using Terraform for automated deployments
✅ Set up hybrid cloud connectivity with VPN and secure networking
✅ Built comprehensive monitoring across multiple cloud providers
✅ Created automation scripts for cloud management and operations
✅ Implemented cost optimization strategies and monitoring tools
✅ Configured security and compliance with cloud best practices
✅ Developed disaster recovery procedures and backup strategies
✅ Mastered cloud CLI tools for both AWS and Azure platforms
✅ Built scalable cloud architecture for enterprise workloads

🎯 Why This Matters

Cloud integration expertise is absolutely essential in today’s digital world! ☁️

Every modern business relies on cloud computing for scalability, reliability, and innovation. From startups leveraging cloud-native architectures to enterprises migrating legacy systems, cloud integration skills are in massive demand across all industries.

These skills open doors to the highest-paying cloud architect, DevOps engineer, and site reliability engineer positions. Companies desperately need cloud experts who can design, implement, and manage multi-cloud infrastructures that scale globally while maintaining security and cost efficiency.

Remember, you haven’t just learned cloud technologies – you’ve mastered the art of building resilient, scalable infrastructure that adapts to any business need. Your ability to seamlessly integrate on-premises and cloud environments positions you at the forefront of modern IT.

Keep innovating, keep optimizing, and keep pushing the boundaries of what’s possible with cloud computing! Your expertise will power the next generation of cloud-native applications and services! ☁️⚡🙌