๐Ÿณ AlmaLinux Microservices: Complete Docker & Container Guide

Published Sep 18, 2025

Master microservices architecture on AlmaLinux with Docker! Learn container orchestration, service mesh, API gateways, and scalable deployment patterns. Complete guide for cloud-native applications.


Welcome to the exciting world of microservices on AlmaLinux! ๐Ÿš€ Whether youโ€™re breaking down monoliths, building cloud-native applications, or creating scalable distributed systems, this comprehensive guide will transform you into a microservices architect who can design and deploy applications that scale to millions of users! ๐ŸŽฏ

Microservices arenโ€™t just about containers โ€“ theyโ€™re about building resilient, scalable, and maintainable systems that can evolve with your business. Letโ€™s containerize everything and build the future! ๐Ÿ’ช

๐Ÿค” Why are Microservices Important?

Imagine building applications like LEGO blocks โ€“ thatโ€™s the microservices advantage! ๐Ÿงฑ Hereโ€™s why mastering microservices on AlmaLinux is absolutely essential:

  • ๐Ÿš€ Independent Scaling - Scale only what needs scaling
  • ๐Ÿ”„ Technology Diversity - Use the right tool for each job
  • ๐Ÿ’ช Fault Isolation - One service failure doesnโ€™t kill everything
  • ๐ŸŒ Team Autonomy - Teams can work independently
  • ๐Ÿ“ฆ Easy Deployment - Deploy individual services without downtime
  • ๐Ÿ”ง Maintainability - Smaller codebases are easier to understand
  • ๐ŸŽฏ Business Alignment - Services match business capabilities
  • โ˜๏ธ Cloud Native - Perfect fit for cloud platforms

๐ŸŽฏ What You Need

Letโ€™s prepare your microservices development environment! โœ…

System Requirements:

  • โœ… AlmaLinux 8.x or 9.x with 8GB+ RAM
  • โœ… Docker and Docker Compose installed
  • โœ… Container registry access (Docker Hub or private)
  • โœ… Load balancer (HAProxy or Nginx)
  • โœ… Service discovery mechanism

Development Tools:

  • โœ… Docker and container runtime
  • โœ… Docker Compose for local orchestration
  • โœ… API gateway (Kong, Traefik, or Envoy)
  • โœ… Service mesh (Istio or Linkerd)
  • โœ… Monitoring stack (Prometheus + Grafana)

๐Ÿ“ Setting Up Docker Environment

Letโ€™s create the perfect container platform for microservices! ๐Ÿ”ง

Installing Docker and Tools

# Install Docker CE
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Start and enable Docker
sudo systemctl enable --now docker

# Add user to docker group
sudo usermod -aG docker $USER
# Log out and back in for group changes

# Install Docker Compose standalone
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Install additional tools
sudo dnf install -y git curl jq

# Verify installation
docker --version
docker-compose --version
docker run hello-world

Container Registry Setup

# Set up private registry (optional)
docker run -d -p 5000:5000 --restart=always --name registry \
    -v /opt/registry:/var/lib/registry \
    registry:2

# Configure Docker daemon for insecure registry
sudo tee /etc/docker/daemon.json << 'EOF'
{
    "insecure-registries": ["localhost:5000"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    },
    "storage-driver": "overlay2"
}
EOF

sudo systemctl restart docker

# Test registry
docker tag hello-world localhost:5000/hello-world
docker push localhost:5000/hello-world
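
To confirm the push worked, query the registry's HTTP API (the catalog and tags endpoints are part of the standard Docker Registry v2 API):

# List repositories stored in the local registry
curl http://localhost:5000/v2/_catalog

# List tags for the image we just pushed
curl http://localhost:5000/v2/hello-world/tags/list

# Pull it back to verify the round trip
docker pull localhost:5000/hello-world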

๐Ÿ”ง Building Microservices Architecture

Letโ€™s design and build a complete microservices system! ๐ŸŒŸ

Sample Microservices Application Structure

# Create microservices project structure
mkdir -p ~/microservices-demo/{api-gateway,user-service,product-service,order-service,notification-service,shared}
cd ~/microservices-demo

# Create shared Docker network
docker network create microservices-net

# Create shared configurations
cat > shared/docker-compose.yml << 'EOF'
version: '3.8'

networks:
  microservices-net:
    external: true

services:
  # Message broker
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password123
    networks:
      - microservices-net
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

  # Database for shared services
  postgres:
    image: postgres:14
    container_name: postgres-main
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: microservices
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password123
    networks:
      - microservices-net
    volumes:
      - postgres_data:/var/lib/postgresql/data

  # Redis for caching
  redis:
    image: redis:7-alpine
    container_name: redis-cache
    ports:
      - "6379:6379"
    networks:
      - microservices-net
    volumes:
      - redis_data:/data

volumes:
  rabbitmq_data:
  postgres_data:
  redis_data:
EOF
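
With the shared infrastructure defined, bring it up and confirm everything joined the shared network (paths assume the project layout created above):

# Start the shared infrastructure (RabbitMQ, PostgreSQL, Redis)
cd ~/microservices-demo
docker-compose -f shared/docker-compose.yml up -d

# Verify the containers are running on the shared network
docker ps --filter network=microservices-net
docker network inspect microservices-net --format '{{range .Containers}}{{.Name}} {{end}}'

# RabbitMQ management UI: http://localhost:15672 (admin / password123)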

User Service Implementation

# Create user service
mkdir -p user-service/{src,tests}

cat > user-service/Dockerfile << 'EOF'
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
# npm ci needs a package-lock.json (not generated here), so use npm install instead
RUN npm install --omit=dev

COPY src ./src

EXPOSE 3001

USER node

CMD ["node", "src/index.js"]
EOF

cat > user-service/package.json << 'EOF'
{
  "name": "user-service",
  "version": "1.0.0",
  "description": "User management microservice",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "nodemon src/index.js",
    "test": "jest"
  },
  "dependencies": {
    "express": "^4.18.2",
    "pg": "^8.11.0",
    "bcrypt": "^5.1.0",
    "jsonwebtoken": "^9.0.0",
    "helmet": "^7.0.0",
    "cors": "^2.8.5",
    "amqplib": "^0.10.3"
  },
  "devDependencies": {
    "nodemon": "^3.0.1",
    "jest": "^29.5.0"
  }
}
EOF

cat > user-service/src/index.js << 'EOF'
const express = require('express');
const helmet = require('helmet');
const cors = require('cors');
const { Pool } = require('pg');
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');

const app = express();
const PORT = process.env.PORT || 3001;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());

// Database connection
const pool = new Pool({
  host: process.env.DB_HOST || 'postgres',
  port: process.env.DB_PORT || 5432,
  database: process.env.DB_NAME || 'microservices',
  user: process.env.DB_USER || 'admin',
  password: process.env.DB_PASSWORD || 'password123',
});

// Initialize database
async function initDB() {
  try {
    await pool.query(`
      CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        username VARCHAR(50) UNIQUE NOT NULL,
        email VARCHAR(100) UNIQUE NOT NULL,
        password_hash VARCHAR(255) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);
    console.log('Database initialized');
  } catch (error) {
    console.error('Database initialization error:', error);
  }
}

// Routes
app.get('/health', (req, res) => {
  res.json({ status: 'healthy', service: 'user-service', timestamp: new Date().toISOString() });
});

app.post('/users/register', async (req, res) => {
  try {
    const { username, email, password } = req.body;

    // Hash password
    const passwordHash = await bcrypt.hash(password, 10);

    // Insert user
    const result = await pool.query(
      'INSERT INTO users (username, email, password_hash) VALUES ($1, $2, $3) RETURNING id, username, email',
      [username, email, passwordHash]
    );

    res.status(201).json({ user: result.rows[0] });
  } catch (error) {
    console.error('Registration error:', error);
    res.status(500).json({ error: 'Registration failed' });
  }
});

app.post('/users/login', async (req, res) => {
  try {
    const { username, password } = req.body;

    // Find user
    const result = await pool.query('SELECT * FROM users WHERE username = $1', [username]);
    const user = result.rows[0];

    if (!user || !await bcrypt.compare(password, user.password_hash)) {
      return res.status(401).json({ error: 'Invalid credentials' });
    }

    // Generate JWT
    const token = jwt.sign(
      { userId: user.id, username: user.username },
      process.env.JWT_SECRET || 'default-secret',
      { expiresIn: '24h' }
    );

    res.json({ token, user: { id: user.id, username: user.username, email: user.email } });
  } catch (error) {
    console.error('Login error:', error);
    res.status(500).json({ error: 'Login failed' });
  }
});

app.get('/users/:id', async (req, res) => {
  try {
    const { id } = req.params;
    const result = await pool.query('SELECT id, username, email, created_at FROM users WHERE id = $1', [id]);

    if (result.rows.length === 0) {
      return res.status(404).json({ error: 'User not found' });
    }

    res.json({ user: result.rows[0] });
  } catch (error) {
    console.error('Get user error:', error);
    res.status(500).json({ error: 'Failed to get user' });
  }
});

// Start server
app.listen(PORT, async () => {
  await initDB();
  console.log(`User service running on port ${PORT}`);
});
EOF

cat > user-service/docker-compose.yml << 'EOF'
version: '3.8'

services:
  user-service:
    build: .
    container_name: user-service
    ports:
      - "3001:3001"
    environment:
      DB_HOST: postgres
      DB_NAME: microservices
      DB_USER: admin
      DB_PASSWORD: password123
      JWT_SECRET: user-service-secret-key
    networks:
      - microservices-net
    # postgres is started from shared/docker-compose.yml (a separate compose project),
    # so it cannot be referenced with depends_on here
    restart: unless-stopped

networks:
  microservices-net:
    external: true
EOF
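
A quick smoke test, assuming the shared PostgreSQL container from shared/docker-compose.yml is already running:

# Build and start the user service
cd ~/microservices-demo/user-service
docker-compose up -d --build

# Health check
curl http://localhost:3001/health

# Register a user, then log in to receive a JWT
curl -X POST http://localhost:3001/users/register \
    -H "Content-Type: application/json" \
    -d '{"username": "alice", "email": "alice@example.com", "password": "secret123"}'

curl -X POST http://localhost:3001/users/login \
    -H "Content-Type: application/json" \
    -d '{"username": "alice", "password": "secret123"}'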

Product Service Implementation

# Create product service
mkdir -p product-service/{src,tests}

cat > product-service/Dockerfile << 'EOF'
# slim (glibc) base so the psycopg2-binary wheel installs without a compiler toolchain
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY src ./src

EXPOSE 3002

CMD ["python", "src/app.py"]
EOF

cat > product-service/requirements.txt << 'EOF'
Flask==2.3.2
Flask-CORS==4.0.0
psycopg2-binary==2.9.6
redis==4.5.5
pika==1.3.2
gunicorn==20.1.0
EOF

cat > product-service/src/app.py << 'EOF'
from flask import Flask, request, jsonify
from flask_cors import CORS
import psycopg2
import redis
import json
import os
from datetime import datetime

app = Flask(__name__)
CORS(app)

# Database connection
def get_db_connection():
    return psycopg2.connect(
        host=os.getenv('DB_HOST', 'postgres'),
        port=os.getenv('DB_PORT', 5432),
        database=os.getenv('DB_NAME', 'microservices'),
        user=os.getenv('DB_USER', 'admin'),
        password=os.getenv('DB_PASSWORD', 'password123')
    )

# Redis connection
redis_client = redis.Redis(
    host=os.getenv('REDIS_HOST', 'redis'),
    port=int(os.getenv('REDIS_PORT', 6379)),
    db=0,
    decode_responses=True
)

# Initialize database
def init_db():
    try:
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute('''
            CREATE TABLE IF NOT EXISTS products (
                id SERIAL PRIMARY KEY,
                name VARCHAR(255) NOT NULL,
                description TEXT,
                price DECIMAL(10,2) NOT NULL,
                category VARCHAR(100),
                stock_quantity INTEGER DEFAULT 0,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        ''')

        # Insert sample data
        cur.execute("SELECT COUNT(*) FROM products")
        if cur.fetchone()[0] == 0:
            sample_products = [
                ('Laptop Pro', 'High-performance laptop', 1299.99, 'Electronics', 50),
                ('Wireless Mouse', 'Ergonomic wireless mouse', 29.99, 'Electronics', 200),
                ('Coffee Mug', 'Ceramic coffee mug', 12.99, 'Home', 100),
                ('Notebook', 'Spiral notebook 200 pages', 5.99, 'Office', 150)
            ]

            cur.executemany('''
                INSERT INTO products (name, description, price, category, stock_quantity)
                VALUES (%s, %s, %s, %s, %s)
            ''', sample_products)

        conn.commit()
        cur.close()
        conn.close()
        print("Database initialized")
    except Exception as e:
        print(f"Database initialization error: {e}")

@app.route('/health', methods=['GET'])
def health_check():
    return jsonify({
        'status': 'healthy',
        'service': 'product-service',
        'timestamp': datetime.now().isoformat()
    })

@app.route('/products', methods=['GET'])
def get_products():
    try:
        # Check cache first
        cached_products = redis_client.get('products:all')
        if cached_products:
            return jsonify(json.loads(cached_products))

        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute('SELECT * FROM products ORDER BY created_at DESC')

        products = []
        for row in cur.fetchall():
            products.append({
                'id': row[0],
                'name': row[1],
                'description': row[2],
                'price': float(row[3]),
                'category': row[4],
                'stock_quantity': row[5],
                'created_at': row[6].isoformat() if row[6] else None
            })

        # Cache results for 5 minutes
        redis_client.setex('products:all', 300, json.dumps(products))

        cur.close()
        conn.close()

        return jsonify(products)
    except Exception as e:
        print(f"Error getting products: {e}")
        return jsonify({'error': 'Failed to get products'}), 500

@app.route('/products/<int:product_id>', methods=['GET'])
def get_product(product_id):
    try:
        # Check cache first
        cache_key = f'product:{product_id}'
        cached_product = redis_client.get(cache_key)
        if cached_product:
            return jsonify(json.loads(cached_product))

        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute('SELECT * FROM products WHERE id = %s', (product_id,))
        row = cur.fetchone()

        if not row:
            return jsonify({'error': 'Product not found'}), 404

        product = {
            'id': row[0],
            'name': row[1],
            'description': row[2],
            'price': float(row[3]),
            'category': row[4],
            'stock_quantity': row[5],
            'created_at': row[6].isoformat() if row[6] else None
        }

        # Cache for 10 minutes
        redis_client.setex(cache_key, 600, json.dumps(product))

        cur.close()
        conn.close()

        return jsonify(product)
    except Exception as e:
        print(f"Error getting product: {e}")
        return jsonify({'error': 'Failed to get product'}), 500

@app.route('/products', methods=['POST'])
def create_product():
    try:
        data = request.get_json()

        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute('''
            INSERT INTO products (name, description, price, category, stock_quantity)
            VALUES (%s, %s, %s, %s, %s) RETURNING id
        ''', (data['name'], data.get('description'), data['price'],
              data.get('category'), data.get('stock_quantity', 0)))

        product_id = cur.fetchone()[0]
        conn.commit()
        cur.close()
        conn.close()

        # Clear cache
        redis_client.delete('products:all')

        return jsonify({'id': product_id, 'message': 'Product created'}), 201
    except Exception as e:
        print(f"Error creating product: {e}")
        return jsonify({'error': 'Failed to create product'}), 500

if __name__ == '__main__':
    init_db()
    app.run(host='0.0.0.0', port=int(os.getenv('PORT', 3002)), debug=False)
EOF

cat > product-service/docker-compose.yml << 'EOF'
version: '3.8'

services:
  product-service:
    build: .
    container_name: product-service
    ports:
      - "3002:3002"
    environment:
      DB_HOST: postgres
      DB_NAME: microservices
      DB_USER: admin
      DB_PASSWORD: password123
      REDIS_HOST: redis
      REDIS_PORT: 6379
    networks:
      - microservices-net
    # postgres and redis are started from shared/docker-compose.yml (a separate
    # compose project), so they cannot be referenced with depends_on here
    restart: unless-stopped

networks:
  microservices-net:
    external: true
EOF
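
Same idea for the product service; it expects the shared PostgreSQL and Redis containers to be up:

# Build and start the product service
cd ~/microservices-demo/product-service
docker-compose up -d --build

# Health check and sample queries (repeat a product request to hit the Redis cache)
curl http://localhost:3002/health
curl http://localhost:3002/products
curl http://localhost:3002/products/1

# Create a product (this clears the cached product list)
curl -X POST http://localhost:3002/products \
    -H "Content-Type: application/json" \
    -d '{"name": "USB-C Cable", "description": "1m braided cable", "price": 9.99, "category": "Electronics", "stock_quantity": 300}'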

๐ŸŒŸ API Gateway Implementation

Letโ€™s create a central API gateway for our microservices! ๐Ÿšช

Nginx-based API Gateway

# Create API gateway
mkdir -p api-gateway/{nginx,scripts}

cat > api-gateway/nginx/nginx.conf << 'EOF'
upstream user_service {
    server user-service:3001;
}

upstream product_service {
    server product-service:3002;
}

upstream order_service {
    server order-service:3003;
}

# Rate limiting (limit_req_zone must be defined at http level, outside the server block)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name localhost;

    # Enable logging
    access_log /var/log/nginx/api_access.log;
    error_log /var/log/nginx/api_error.log;


    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # User service routes
    location /api/users {
        limit_req zone=api_limit burst=20 nodelay;

        # Strip the /api prefix so /api/users/... reaches the service as /users/...
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass http://user_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # CORS headers
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization" always;

        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }

    # Product service routes
    location /api/products {
        limit_req zone=api_limit burst=30 nodelay;

        # Strip the /api prefix so /api/products/... reaches the service as /products/...
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass http://product_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Caching for GET requests (only takes effect if a proxy_cache_path zone
        # and a proxy_cache directive are also configured)
        proxy_cache_methods GET HEAD;
        proxy_cache_valid 200 5m;

        # CORS headers
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization" always;

        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }

    # Order service routes
    location /api/orders {
        limit_req zone=api_limit burst=15 nodelay;

        # Strip the /api prefix so /api/orders/... reaches the service as /orders/...
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass http://order_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # CORS headers
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization" always;

        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }

    # Default route
    location / {
        return 404 "Service not found";
    }
}
EOF

cat > api-gateway/Dockerfile << 'EOF'
FROM nginx:alpine

COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
EOF

cat > api-gateway/docker-compose.yml << 'EOF'
version: '3.8'

services:
  api-gateway:
    build: .
    container_name: api-gateway
    ports:
      - "8080:80"
    networks:
      - microservices-net
    # user-service and product-service run from their own compose files on the
    # shared external network, so they cannot be referenced with depends_on here
    restart: unless-stopped
    volumes:
      - ./logs:/var/log/nginx

networks:
  microservices-net:
    external: true
EOF
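
Build the gateway and exercise the same endpoints through port 8080 (the user and product services should already be running on the shared network):

# Build and start the API gateway
cd ~/microservices-demo/api-gateway
mkdir -p logs
docker-compose up -d --build

# Gateway health check
curl http://localhost:8080/health

# Service endpoints routed through the gateway
curl http://localhost:8080/api/products
curl -X POST http://localhost:8080/api/users/register \
    -H "Content-Type: application/json" \
    -d '{"username": "bob", "email": "bob@example.com", "password": "secret123"}'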

โœ… Service Discovery and Load Balancing

Letโ€™s implement service discovery for our microservices! ๐Ÿ”

Consul-based Service Discovery

# Create service discovery setup
mkdir -p service-discovery

cat > service-discovery/docker-compose.yml << 'EOF'
version: '3.8'

services:
  consul:
    image: consul:latest
    container_name: consul
    ports:
      - "8500:8500"
      - "8600:8600/udp"
    environment:
      - CONSUL_BIND_INTERFACE=eth0
    command: |
      consul agent -server -bootstrap-expect=1 -ui -bind=0.0.0.0
      -client=0.0.0.0 -data-dir=/consul/data
    networks:
      - microservices-net
    volumes:
      - consul_data:/consul/data

  registrator:
    image: gliderlabs/registrator
    container_name: registrator
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -internal consul://consul:8500
    networks:
      - microservices-net
    depends_on:
      - consul

volumes:
  consul_data:

networks:
  microservices-net:
    external: true
EOF

# Create service registration script
cat > service-discovery/register-service.sh << 'EOF'
#!/bin/bash
# Service registration script

CONSUL_URL="http://localhost:8500"
SERVICE_NAME="$1"
SERVICE_ID="$2"
SERVICE_PORT="$3"
SERVICE_IP="$4"

curl -X PUT "$CONSUL_URL/v1/agent/service/register" \
    -H "Content-Type: application/json" \
    -d "{
        \"ID\": \"$SERVICE_ID\",
        \"Name\": \"$SERVICE_NAME\",
        \"Tags\": [\"api\", \"microservice\"],
        \"Address\": \"$SERVICE_IP\",
        \"Port\": $SERVICE_PORT,
        \"Check\": {
            \"HTTP\": \"http://$SERVICE_IP:$SERVICE_PORT/health\",
            \"Interval\": \"10s\",
            \"Timeout\": \"3s\"
        }
    }"

echo "Service $SERVICE_NAME registered with Consul"
EOF

chmod +x service-discovery/register-service.sh
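
With Consul and Registrator running, containers that publish ports are registered automatically; the script above is for manual registration. A quick check, using example values you may need to adjust:

# Start Consul and Registrator
cd ~/microservices-demo
docker-compose -f service-discovery/docker-compose.yml up -d

# Manually register the user service (service name, ID, port, address)
./service-discovery/register-service.sh user-service user-service-1 3001 user-service

# Query the Consul catalog and health status
curl http://localhost:8500/v1/catalog/services
curl "http://localhost:8500/v1/health/service/user-service?passing"

# The Consul web UI is available at http://localhost:8500/ui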

๐ŸŽฎ Quick Examples

Example 1: Order Service with Event-Driven Architecture

# Create order service
mkdir -p order-service/{src,tests}

cat > order-service/Dockerfile << 'EOF'
FROM golang:1.21-alpine AS builder

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY src/ ./src/
RUN go build -o order-service src/main.go

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/

COPY --from=builder /app/order-service .

EXPOSE 3003

CMD ["./order-service"]
EOF

cat > order-service/go.mod << 'EOF'
module order-service

go 1.21

require (
    github.com/gin-gonic/gin v1.9.1
    github.com/lib/pq v1.10.9
    github.com/streadway/amqp v1.1.0
    github.com/go-redis/redis/v8 v8.11.5
)
EOF

cat > order-service/src/main.go << 'EOF'
package main

import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/gin-gonic/gin"
    _ "github.com/lib/pq"
    "github.com/streadway/amqp"
)

type Order struct {
    ID          int       `json:"id"`
    UserID      int       `json:"user_id"`
    ProductID   int       `json:"product_id"`
    Quantity    int       `json:"quantity"`
    TotalPrice  float64   `json:"total_price"`
    Status      string    `json:"status"`
    CreatedAt   time.Time `json:"created_at"`
}

type OrderRequest struct {
    UserID    int     `json:"user_id" binding:"required"`
    ProductID int     `json:"product_id" binding:"required"`
    Quantity  int     `json:"quantity" binding:"required"`
    Price     float64 `json:"price" binding:"required"`
}

var db *sql.DB
var amqpConn *amqp.Connection

func initDB() {
    var err error
    dbHost := getEnv("DB_HOST", "postgres")
    dbPort := getEnv("DB_PORT", "5432")
    dbName := getEnv("DB_NAME", "microservices")
    dbUser := getEnv("DB_USER", "admin")
    dbPassword := getEnv("DB_PASSWORD", "password123")

    connStr := "host=" + dbHost + " port=" + dbPort + " user=" + dbUser +
               " password=" + dbPassword + " dbname=" + dbName + " sslmode=disable"

    db, err = sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal("Failed to connect to database:", err)
    }

    // Create orders table
    _, err = db.Exec(`
        CREATE TABLE IF NOT EXISTS orders (
            id SERIAL PRIMARY KEY,
            user_id INTEGER NOT NULL,
            product_id INTEGER NOT NULL,
            quantity INTEGER NOT NULL,
            total_price DECIMAL(10,2) NOT NULL,
            status VARCHAR(50) DEFAULT 'pending',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    `)
    if err != nil {
        log.Fatal("Failed to create orders table:", err)
    }
}

func initAMQP() {
    var err error
    amqpURL := getEnv("AMQP_URL", "amqp://admin:password123@rabbitmq:5672/")

    amqpConn, err = amqp.Dial(amqpURL)
    if err != nil {
        log.Fatal("Failed to connect to RabbitMQ:", err)
    }
}

func publishEvent(eventType string, data interface{}) error {
    ch, err := amqpConn.Channel()
    if err != nil {
        return err
    }
    defer ch.Close()

    err = ch.ExchangeDeclare("events", "topic", true, false, false, false, nil)
    if err != nil {
        return err
    }

    body, err := json.Marshal(data)
    if err != nil {
        return err
    }

    return ch.Publish("events", eventType, false, false, amqp.Publishing{
        ContentType: "application/json",
        Body:       body,
    })
}

func healthCheck(c *gin.Context) {
    c.JSON(http.StatusOK, gin.H{
        "status":    "healthy",
        "service":   "order-service",
        "timestamp": time.Now().Format(time.RFC3339),
    })
}

func createOrder(c *gin.Context) {
    var req OrderRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
        return
    }

    totalPrice := req.Price * float64(req.Quantity)

    var orderID int
    err := db.QueryRow(`
        INSERT INTO orders (user_id, product_id, quantity, total_price, status)
        VALUES ($1, $2, $3, $4, 'pending') RETURNING id
    `, req.UserID, req.ProductID, req.Quantity, totalPrice).Scan(&orderID)

    if err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to create order"})
        return
    }

    order := Order{
        ID:         orderID,
        UserID:     req.UserID,
        ProductID:  req.ProductID,
        Quantity:   req.Quantity,
        TotalPrice: totalPrice,
        Status:     "pending",
        CreatedAt:  time.Now(),
    }

    // Publish the order created event; log failures but don't fail the request
    if err := publishEvent("order.created", order); err != nil {
        log.Printf("Failed to publish order.created event: %v", err)
    }

    c.JSON(http.StatusCreated, order)
}

func getOrders(c *gin.Context) {
    rows, err := db.Query("SELECT id, user_id, product_id, quantity, total_price, status, created_at FROM orders ORDER BY created_at DESC")
    if err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to get orders"})
        return
    }
    defer rows.Close()

    var orders []Order
    for rows.Next() {
        var order Order
        err := rows.Scan(&order.ID, &order.UserID, &order.ProductID, &order.Quantity, &order.TotalPrice, &order.Status, &order.CreatedAt)
        if err != nil {
            continue
        }
        orders = append(orders, order)
    }

    c.JSON(http.StatusOK, orders)
}

func getEnv(key, defaultValue string) string {
    if value := os.Getenv(key); value != "" {
        return value
    }
    return defaultValue
}

func main() {
    initDB()
    initAMQP()

    r := gin.Default()

    r.GET("/health", healthCheck)
    r.POST("/orders", createOrder)
    r.GET("/orders", getOrders)

    port := getEnv("PORT", "3003")
    log.Printf("Order service starting on port %s", port)
    r.Run(":" + port)
}
EOF
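
The order service needs a go.sum before the Docker build succeeds (generate it once with a local Go toolchain), plus a compose file like the other services. A minimal sketch following the same pattern:

# Generate go.sum so the Dockerfile's "COPY go.mod go.sum" step works
cd ~/microservices-demo/order-service
go mod tidy
cd ~/microservices-demo

cat > order-service/docker-compose.yml << 'EOF'
version: '3.8'

services:
  order-service:
    build: .
    container_name: order-service
    ports:
      - "3003:3003"
    environment:
      DB_HOST: postgres
      DB_NAME: microservices
      DB_USER: admin
      DB_PASSWORD: password123
      AMQP_URL: amqp://admin:password123@rabbitmq:5672/
    networks:
      - microservices-net
    restart: unless-stopped

networks:
  microservices-net:
    external: true
EOF

# Build, start, and smoke-test (shared PostgreSQL and RabbitMQ must be running)
docker-compose -f order-service/docker-compose.yml up -d --build
curl http://localhost:3003/health
curl -X POST http://localhost:3003/orders \
    -H "Content-Type: application/json" \
    -d '{"user_id": 1, "product_id": 1, "quantity": 2, "price": 29.99}'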

Example 2: Service Mesh with Istio

# Create Istio configuration for microservices
mkdir -p service-mesh

cat > service-mesh/istio-config.yaml << 'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
  labels:
    istio-injection: enabled
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: microservices-gateway
  namespace: microservices
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservices-vs
  namespace: microservices
spec:
  hosts:
  - "*"
  gateways:
  - microservices-gateway
  http:
  - match:
    - uri:
        prefix: /api/users
    route:
    - destination:
        host: user-service
        port:
          number: 3001
  - match:
    - uri:
        prefix: /api/products
    route:
    - destination:
        host: product-service
        port:
          number: 3002
  - match:
    - uri:
        prefix: /api/orders
    route:
    - destination:
        host: order-service
        port:
          number: 3003
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-service-dr
  namespace: microservices
spec:
  host: user-service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
EOF
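
Applying this manifest assumes a Kubernetes cluster with Istio installed (e.g. via istioctl install) and the three services deployed into the microservices namespace as Kubernetes Services with matching names and ports:

# Apply the gateway, routing, and traffic-policy resources
kubectl apply -f service-mesh/istio-config.yaml

# Verify the Istio resources (gw/vs/dr are the Istio CRD short names)
kubectl get gw,vs,dr -n microservices

# Find the ingress gateway address and send a test request through the mesh
kubectl get svc istio-ingressgateway -n istio-system
curl http://<INGRESS_IP>/api/products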

Example 3: Comprehensive Monitoring Setup

# Create monitoring stack
mkdir -p monitoring/{prometheus,grafana}

cat > monitoring/docker-compose.yml << 'EOF'
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    networks:
      - microservices-net

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin123
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    networks:
      - microservices-net

  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    ports:
      - "16686:16686"
      - "14268:14268"
    environment:
      - COLLECTOR_JAEGER_HTTP_PORT=14268
    networks:
      - microservices-net

volumes:
  prometheus_data:
  grafana_data:

networks:
  microservices-net:
    external: true
EOF

cat > monitoring/prometheus/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'user-service'
    static_configs:
      - targets: ['user-service:3001']
    metrics_path: '/metrics'
    scrape_interval: 5s

  - job_name: 'product-service'
    static_configs:
      - targets: ['product-service:3002']
    metrics_path: '/metrics'
    scrape_interval: 5s

  - job_name: 'order-service'
    static_configs:
      - targets: ['order-service:3003']
    metrics_path: '/metrics'
    scrape_interval: 5s

  - job_name: 'api-gateway'
    static_configs:
      - targets: ['api-gateway:80']
    metrics_path: '/metrics'
    scrape_interval: 5s
EOF
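
Start the stack and check that Prometheus can reach its targets. The sample services above don't expose a /metrics endpoint yet, so their targets will show as down until you add an exporter (for example prom-client for Node.js, prometheus_client for Flask, or client_golang's promhttp for Go):

# Start Prometheus, Grafana, and Jaeger
cd ~/microservices-demo
docker-compose -f monitoring/docker-compose.yml up -d

# Prometheus health and scrape targets
curl -s http://localhost:9090/-/healthy
# Open http://localhost:9090/targets in a browser to see per-service scrape status

# Grafana (admin / admin123): http://localhost:3000
# Jaeger UI: http://localhost:16686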

๐Ÿšจ Fix Common Microservices Problems

Letโ€™s solve frequent microservices challenges! ๐Ÿ› ๏ธ

Problem 1: Service Communication Failures

Symptoms: Services can’t communicate, timeouts

Solution:

# Check network connectivity
docker network ls
docker network inspect microservices-net

# Test service connectivity
docker exec -it user-service ping product-service

# Check service logs
docker logs user-service
docker logs product-service

# Verify port bindings
docker ps -a

# Recreate the network (stop or disconnect attached containers first)
docker network rm microservices-net
docker network create microservices-net

Problem 2: Database Connection Pool Exhaustion

Symptoms: Database connection errors under load

Solution:

# Monitor database connections
docker exec -it postgres-main psql -U admin -d microservices -c "SELECT count(*) FROM pg_stat_activity;"

# Optimize connection pools in services
# For Node.js services:
pool.max = 20
pool.idleTimeoutMillis = 30000

# For Python services:
SQLALCHEMY_POOL_SIZE = 20
SQLALCHEMY_POOL_TIMEOUT = 30

Problem 3: Circuit Breaker Trips

Symptoms: Cascade failures, services becoming unavailable

Solution:

# Implement circuit breaker in services
# Add to package.json:
"opossum": "^6.3.0"

# Circuit breaker configuration:
const CircuitBreaker = require('opossum');
const options = {
  timeout: 3000,
  errorThresholdPercentage: 50,
  resetTimeout: 30000
};
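
A fuller sketch of how this could look in the Node.js user service when it calls the product service (the file name and wiring are illustrative, not part of the code above):

cat > user-service/src/product-client.js << 'EOF'
// Hypothetical helper: call the product service through an opossum circuit breaker.
const CircuitBreaker = require('opossum');

// Plain HTTP call (global fetch is available on Node 18)
async function fetchProduct(productId) {
  const response = await fetch(`http://product-service:3002/products/${productId}`);
  if (!response.ok) throw new Error(`Product service returned ${response.status}`);
  return response.json();
}

// Open the circuit after 50% failures, fail fast, retry after 30 seconds
const breaker = new CircuitBreaker(fetchProduct, {
  timeout: 3000,
  errorThresholdPercentage: 50,
  resetTimeout: 30000
});

// Serve a degraded response instead of cascading the failure
breaker.fallback(() => ({ error: 'product-service temporarily unavailable' }));

module.exports = { getProduct: (id) => breaker.fire(id) };
EOF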

Problem 4: Message Queue Backlog

Symptoms: High latency, messages not being processed

Solution:

# Check RabbitMQ queue status
docker exec -it rabbitmq rabbitmqctl list_queues

# Scale consumers
docker-compose up --scale order-service=3

# Monitor queue depth
curl -u admin:password123 http://localhost:15672/api/queues

๐Ÿ“‹ Microservices Commands Reference

Essential microservices management commands! โšก

Command                          | Purpose
docker-compose up -d             | Start all services
docker-compose scale service=3   | Scale specific service
docker logs service-name         | View service logs
docker exec -it service bash     | Shell into service
docker network inspect net-name  | Check network config
curl service:port/health         | Health check
docker stats                     | Resource usage
docker system prune              | Clean unused resources

๐Ÿ’ก Microservices Best Practices

Master these microservices principles! ๐ŸŽฏ

  • ๐ŸŽฏ Single Responsibility - One service, one business capability
  • ๐Ÿ”„ Stateless Design - Services should not maintain state
  • ๐Ÿ“ฆ Containerize Everything - Each service in its own container
  • ๐ŸŒ API First - Design APIs before implementation
  • ๐Ÿ” Service Discovery - Services find each other dynamically
  • ๐Ÿ“Š Observability - Logging, metrics, and tracing everywhere
  • ๐Ÿ›ก๏ธ Security by Default - Secure service-to-service communication
  • ๐Ÿš€ Independent Deployment - Deploy services separately
  • ๐Ÿ’พ Data Ownership - Each service owns its data
  • ๐Ÿ”ง Automation - Automate testing, building, and deployment

๐Ÿ† What Youโ€™ve Accomplished

Congratulations on mastering microservices on AlmaLinux! ๐ŸŽ‰ Youโ€™ve achieved:

  • โœ… Complete microservices architecture designed and implemented
  • โœ… Docker containerization for all services
  • โœ… API gateway for centralized routing and security
  • โœ… Service discovery with health checking
  • โœ… Event-driven communication using message queues
  • โœ… Database per service pattern implemented
  • โœ… Monitoring and observability stack deployed
  • โœ… Load balancing and circuit breaker patterns
  • โœ… Service mesh integration capabilities
  • โœ… Best practices applied throughout

๐ŸŽฏ Why These Skills Matter

Your microservices expertise enables modern application development! ๐ŸŒŸ With these skills, you can:

Immediate Benefits:

  • ๐Ÿš€ Build applications that scale to millions of users
  • ๐Ÿ”„ Deploy features independently and continuously
  • ๐Ÿ›ก๏ธ Create resilient systems that survive failures
  • ๐Ÿ’ฐ Optimize resource usage and reduce costs

Long-term Value:

  • โ˜๏ธ Design cloud-native architectures
  • ๐Ÿ† Lead digital transformation initiatives
  • ๐Ÿ’ผ Architect enterprise-scale distributed systems
  • ๐ŸŒ Build globally distributed applications

Youโ€™re now equipped to design and build microservices architectures that power the worldโ€™s largest applications! From startup MVPs to enterprise platforms, you can create systems that scale infinitely and adapt to any business need! ๐ŸŒŸ

Keep building, keep scaling! ๐Ÿ™Œ