⚖️ Configuring Process Load Balancing: Simple Guide

Published Jun 13, 2025

Easy tutorial on setting up process load balancing in Alpine Linux, perfect for beginners who want to distribute work across multiple processes and improve performance.

8 min read

Let me show you how to configure process load balancing on Alpine Linux! Load balancing spreads work across multiple processes, making your system faster and more reliable. It’s like having multiple checkout lines at a store - customers get served faster when the load is distributed!

🤔 What is Process Load Balancing?

Process load balancing distributes incoming requests or tasks across multiple worker processes. Instead of one process handling everything (and possibly getting overwhelmed), multiple processes share the work. This improves performance, prevents bottlenecks, and provides redundancy if a process fails.

Why use load balancing?

  • Better performance
  • Higher reliability
  • Efficient resource use
  • Automatic failover
  • Scalable architecture

🎯 What You Need

Before starting, you’ll need:

  • Alpine Linux installed
  • Multi-core CPU (recommended)
  • Running services to balance
  • Root access
  • About 20 minutes
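
A quick sanity check before you start (both commands are part of Alpine's base tools):

# Should print 0 if you are root
id -u

# Number of CPU cores available
nproc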

📋 Step 1: Install Load Balancing Tools

Let’s get the necessary tools:

# Update packages
apk update

# Install HAProxy for TCP/HTTP load balancing
apk add haproxy

# Install Nginx for web load balancing
apk add nginx

# Install process management tools
apk add supervisor runit

# Install monitoring tools (socat is needed later to query HAProxy's stats socket)
apk add htop iotop nethogs socat

# Check installations
haproxy -v
nginx -v
supervisord --version

📋 Step 2: Configure HAProxy

Set up HAProxy for basic load balancing:

# Backup original config
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.backup

# Create HAProxy configuration
cat > /etc/haproxy/haproxy.cfg << 'EOF'
global
    daemon
    maxconn 4096
    log 127.0.0.1 local0
    # Runtime/stats socket (used later for "show stat" and runtime commands)
    stats socket /var/run/haproxy.sock mode 600 level admin
    
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option httplog
    
# Statistics page (must live in its own listen section; port 8404 is an arbitrary choice)
listen stats
    bind *:8404
    stats enable
    stats uri /haproxy-stats
    stats refresh 10s

# Frontend - what users connect to
frontend web_frontend
    bind *:80
    default_backend web_servers
    
# Backend - actual servers
backend web_servers
    balance roundrobin
    option httpchk GET /health
    
    # Define backend servers
    server web1 127.0.0.1:8001 check
    server web2 127.0.0.1:8002 check
    server web3 127.0.0.1:8003 check
    server web4 127.0.0.1:8004 check
EOF

# Start HAProxy
rc-service haproxy start
rc-update add haproxy
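
Whenever you edit the file, validate it before restarting; the -c flag performs a syntax check without starting the proxy:

# Check the configuration for errors
haproxy -c -f /etc/haproxy/haproxy.cfg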

📋 Step 3: Create Worker Processes

Set up multiple worker processes:

# Create simple web server script
cat > /usr/local/bin/web-worker.sh << 'EOF'
#!/bin/sh
# Simple web worker process

PORT=$1
WORKER_ID=$2

echo "Starting worker $WORKER_ID on port $PORT"

# Create response handler (\$\$ and \$(date) are escaped so they are
# evaluated per request, not once when this file is generated)
cat > /tmp/worker-$WORKER_ID.sh << SCRIPT
#!/bin/sh
echo "HTTP/1.1 200 OK"
echo "Content-Type: text/html"
echo ""
echo "<h1>Worker $WORKER_ID</h1>"
echo "<p>Served from port $PORT</p>"
echo "<p>Process ID: \$\$</p>"
echo "<p>Time: \$(date)</p>"
SCRIPT

chmod +x /tmp/worker-$WORKER_ID.sh

# Start a simple HTTP server (busybox nc on stock Alpine supports -l/-p/-e)
while true; do
    nc -l -p $PORT -e /tmp/worker-$WORKER_ID.sh
done
EOF

chmod +x /usr/local/bin/web-worker.sh

# Create supervisor configuration
cat > /etc/supervisord.conf << 'EOF'
[supervisord]
nodaemon=false
logfile=/var/log/supervisord.log

[program:web-worker-1]
command=/usr/local/bin/web-worker.sh 8001 1
autostart=true
autorestart=true
stderr_logfile=/var/log/worker1.err.log
stdout_logfile=/var/log/worker1.out.log

[program:web-worker-2]
command=/usr/local/bin/web-worker.sh 8002 2
autostart=true
autorestart=true
stderr_logfile=/var/log/worker2.err.log
stdout_logfile=/var/log/worker2.out.log

[program:web-worker-3]
command=/usr/local/bin/web-worker.sh 8003 3
autostart=true
autorestart=true
stderr_logfile=/var/log/worker3.err.log
stdout_logfile=/var/log/worker3.out.log

[program:web-worker-4]
command=/usr/local/bin/web-worker.sh 8004 4
autostart=true
autorestart=true
stderr_logfile=/var/log/worker4.err.log
stdout_logfile=/var/log/worker4.out.log

[group:web-workers]
programs=web-worker-1,web-worker-2,web-worker-3,web-worker-4
EOF

# Start supervisor
supervisord -c /etc/supervisord.conf
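
With supervisor running, a quick smoke test confirms each worker answers on its own port:

# Each worker should identify itself in its response
curl -s http://127.0.0.1:8001
curl -s http://127.0.0.1:8002

# All four workers should show RUNNING
supervisorctl status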

📋 Step 4: Nginx Load Balancing

Configure Nginx as a load balancer:

# Create Nginx load balancer config
cat > /etc/nginx/conf.d/load-balancer.conf << 'EOF'
# Upstream servers
upstream backend {
    # Load balancing methods:
    # round-robin (default)
    # least_conn - least connections
    # ip_hash - session persistence
    # hash - consistent hashing
    
    least_conn;
    
    # Backend servers
    server 127.0.0.1:8001 weight=1 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8002 weight=1 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8003 weight=2;  # Higher weight = more requests
    server 127.0.0.1:8004 weight=1;
    
    # Backup server (only used if all others fail)
    server 127.0.0.1:8005 backup;
    
    # Keep connections alive
    keepalive 32;
}

# Main server block
server {
    listen 8080;
    server_name localhost;
    
    # Pass requests to backend
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        
        # Headers for backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
    
    # Health check endpoint
    location /nginx-health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
    
    # Status page
    location /nginx-status {
        stub_status on;
        access_log off;
    }
}
EOF

# Test and reload Nginx
nginx -t
nginx -s reload
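
A few requests through port 8080 should now be spread across workers by least_conn, and the built-in endpoints confirm Nginx itself is healthy:

# Requests through the Nginx balancer
for i in 1 2 3; do curl -s http://localhost:8080 | grep Worker; done

# Health and status endpoints
curl -s http://localhost:8080/nginx-health
curl -s http://localhost:8080/nginx-status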

📋 Step 5: Process Pool Management

Create dynamic process pools:

# Create process pool manager
cat > /usr/local/bin/process-pool.sh << 'EOF'
#!/bin/sh
# Dynamic Process Pool Manager

MIN_WORKERS=2
MAX_WORKERS=8    # keep <= 8: the 800$i port scheme only works for single digits
CURRENT_WORKERS=0
CHECK_INTERVAL=10

# CPU threshold for scaling
SCALE_UP_CPU=70
SCALE_DOWN_CPU=30

# Function to get CPU usage
get_cpu_usage() {
    top -bn1 | grep "CPU:" | awk '{print $2}' | cut -d'%' -f1
}

# Function to get current web-worker count
get_worker_count() {
    supervisorctl status | grep "web-worker" | grep -c "RUNNING"
}

# Function to scale workers
scale_workers() {
    local target=$1
    local current=$(get_worker_count)
    
    if [ $target -gt $current ]; then
        echo "📈 Scaling up to $target workers"
        for i in $(seq $((current + 1)) $target); do
            cat >> /etc/supervisord.conf << CONFIG
[program:web-worker-$i]
command=/usr/local/bin/web-worker.sh 800$i $i
autostart=false
autorestart=true
CONFIG
            supervisorctl reread
            supervisorctl add web-worker-$i
            supervisorctl start web-worker-$i
        done
    elif [ $target -lt $current ]; then
        echo "📉 Scaling down to $target workers"
        for i in $(seq $target $((current - 1))); do
            supervisorctl stop web-worker-$((i + 1))
            supervisorctl remove web-worker-$((i + 1))
        done
    fi
}

# Main monitoring loop
echo "🔄 Process Pool Manager Started"
echo "================================"

while true; do
    CPU_USAGE=$(get_cpu_usage)
    CURRENT_WORKERS=$(get_worker_count)
    
    echo "CPU: ${CPU_USAGE}% | Workers: $CURRENT_WORKERS"
    
    if [ $CPU_USAGE -gt $SCALE_UP_CPU ] && [ $CURRENT_WORKERS -lt $MAX_WORKERS ]; then
        scale_workers $((CURRENT_WORKERS + 1))
    elif [ $CPU_USAGE -lt $SCALE_DOWN_CPU ] && [ $CURRENT_WORKERS -gt $MIN_WORKERS ]; then
        scale_workers $((CURRENT_WORKERS - 1))
    fi
    
    sleep $CHECK_INTERVAL
done
EOF

chmod +x /usr/local/bin/process-pool.sh

# Run in background
nohup /usr/local/bin/process-pool.sh > /var/log/process-pool.log 2>&1 &
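
You can follow the manager's scaling decisions in its log and cross-check the worker list with supervisor:

# Follow scaling decisions
tail -f /var/log/process-pool.log

# Cross-check running workers
supervisorctl status | grep web-worker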

📋 Step 6: Advanced Load Balancing

Implement sophisticated balancing strategies:

# Weighted round-robin with health checks
cat > /etc/haproxy/haproxy-advanced.cfg << 'EOF'
global
    daemon
    maxconn 10000
    log 127.0.0.1 local0
    stats socket /var/run/haproxy.sock mode 600 level admin
    
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httplog
    option dontlognull
    retries 3
    
# Frontend with ACLs
frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/cert.pem   # requires a combined cert+key PEM at this path
    
    # ACL rules
    acl is_api path_beg /api
    acl is_static path_end .jpg .png .css .js
    acl is_websocket hdr(Upgrade) -i WebSocket
    
    # Route based on ACL
    use_backend api_servers if is_api
    use_backend static_servers if is_static
    use_backend websocket_servers if is_websocket
    default_backend web_servers
    
# API backend (least connections)
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    
    server api1 127.0.0.1:9001 check weight 10
    server api2 127.0.0.1:9002 check weight 10
    
# Static content (round-robin)
backend static_servers
    balance roundrobin
    
    server static1 127.0.0.1:8081 check
    server static2 127.0.0.1:8082 check
    
# WebSocket backend (source IP hash)
backend websocket_servers
    balance source
    
    server ws1 127.0.0.1:8090 check
    server ws2 127.0.0.1:8091 check
    
# Default web backend
backend web_servers
    balance uri  # Balance based on URI
    hash-type consistent  # Consistent hashing
    
    # Servers with different weights (each needs a cookie value for persistence)
    server web1 127.0.0.1:8001 check weight 10 maxconn 100 cookie web1
    server web2 127.0.0.1:8002 check weight 20 maxconn 200 cookie web2
    server web3 127.0.0.1:8003 check weight 30 maxconn 300 cookie web3
    server web4 127.0.0.1:8004 check weight 10 maxconn 100 cookie web4
    
    # Session persistence (the cookie overrides the URI hash for returning clients)
    cookie SERVERID insert indirect nocache
    
    # Slow start for new servers
    server web5 127.0.0.1:8005 check weight 10 slowstart 30s cookie web5
EOF

# Validate, install, and apply the configuration
# (rc-service reloads /etc/haproxy/haproxy.cfg, so copy the file over first)
haproxy -c -f /etc/haproxy/haproxy-advanced.cfg
cp /etc/haproxy/haproxy-advanced.cfg /etc/haproxy/haproxy.cfg
rc-service haproxy reload
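
A couple of curl calls verify the ACL routing (this assumes services are actually listening on the api and static backend ports; otherwise expect 503s):

# Should route to api_servers
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/api/health

# Should route to static_servers
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/style.css

# Everything else falls through to web_servers
curl -s http://localhost/ | grep Worker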

📋 Step 7: Monitoring and Metrics

Set up load balancing monitoring:

# Create monitoring dashboard
cat > /usr/local/bin/lb-monitor.sh << 'EOF'
#!/bin/sh
# Load Balancer Monitor

clear
echo "⚖️ Load Balancer Status Dashboard"
echo "================================="
echo ""

while true; do
    # Move the cursor to row 4, column 1 (keeps the header in place)
    printf "\033[4;1H"
    
    # HAProxy Stats
    echo "📊 HAProxy Statistics:"
    echo "---------------------"
    echo "show stat" | socat stdio /var/run/haproxy.sock | \
        cut -d',' -f1,2,18,19,20 | column -t -s','
    
    # Worker Status
    echo -e "\n👷 Worker Processes:"
    echo "-------------------"
    supervisorctl status | awk '{
        if($2 == "RUNNING") 
            printf "✅ %-20s PID: %-8s Uptime: %s\n", $1, $4, $6
        else
            printf "❌ %-20s %s\n", $1, $2
    }'
    
    # Connection Distribution
    echo -e "\n🔗 Connection Distribution:"
    echo "--------------------------"
    for port in 8001 8002 8003 8004; do
        CONNS=$(netstat -an | grep -c ":$port.*ESTABLISHED")
        printf "Port %s: %3d connections " $port $CONNS
        printf "%0.s█" $(seq 1 $((CONNS / 2)))
        echo ""
    done
    
    # System Resources
    echo -e "\n💻 System Resources:"
    echo "-------------------"
    CPU=$(top -bn1 | grep "CPU:" | awk '{print $2}')
    MEM=$(free -m | awk '/^Mem:/ {printf "%.1f%%", $3/$2 * 100}')
    LOAD=$(uptime | awk -F'load average:' '{print $2}')
    
    echo "CPU Usage: $CPU"
    echo "Memory Usage: $MEM"
    echo "Load Average:$LOAD"
    
    sleep 2
done
EOF

chmod +x /usr/local/bin/lb-monitor.sh

# Create OpenRC service for monitoring (Alpine uses OpenRC, not systemd)
cat > /etc/init.d/lb-monitor << 'EOF'
#!/sbin/openrc-run

name="lb-monitor"
description="Load Balancer Monitor"
command="/usr/local/bin/lb-monitor.sh"
command_background=true
pidfile="/var/run/lb-monitor.pid"
output_log="/var/log/lb-monitor.log"
error_log="/var/log/lb-monitor.log"
EOF

chmod +x /etc/init.d/lb-monitor
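
Start it and enable it at boot like any other OpenRC service:

# Start now and enable at boot
rc-service lb-monitor start
rc-update add lb-monitor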

📋 Step 8: Load Testing

Test your load balancing setup:

# Install testing tools
apk add apache2-utils curl siege

# Create load test script
cat > /usr/local/bin/load-test.sh << 'EOF'
#!/bin/sh
# Load Balancer Test

echo "🧪 Load Balancer Test Suite"
echo "=========================="

# Test 1: Basic connectivity
echo -e "\n📍 Test 1: Basic Connectivity"
for i in 1 2 3 4 5; do
    WORKER=$(curl -s http://localhost | grep -o "Worker [0-9]" | awk '{print $2}')
    echo "Request $i -> Worker $WORKER"
done

# Test 2: Load distribution
echo -e "\n📍 Test 2: Load Distribution (100 requests)"
> /tmp/lb-test.log
for i in $(seq 1 100); do
    curl -s http://localhost | grep -o "Worker [0-9]" >> /tmp/lb-test.log
done

echo "Distribution:"
sort /tmp/lb-test.log | uniq -c | sort -nr

# Test 3: Concurrent connections
echo -e "\n📍 Test 3: Concurrent Connections"
ab -n 1000 -c 50 http://localhost/ | grep -E "Requests per second|Time per request|Transfer rate"

# Test 4: Failover
echo -e "\n📍 Test 4: Failover Test"
echo "Stopping worker 1..."
supervisorctl stop web-worker-1
sleep 2

echo "Sending 10 requests..."
FAILED=0
for i in $(seq 1 10); do
    if ! curl -s --max-time 2 http://localhost > /dev/null; then
        FAILED=$((FAILED + 1))
    fi
done
echo "Failed requests: $FAILED/10"

echo "Restarting worker 1..."
supervisorctl start web-worker-1

# Test 5: Performance under load
echo -e "\n📍 Test 5: Performance Test"
siege -c 25 -t 30s http://localhost/ 2>&1 | grep -E "Transactions|Availability|Response time"
EOF

chmod +x /usr/local/bin/load-test.sh

# Run the test
/usr/local/bin/load-test.sh

🎮 Practice Exercise

Try different load balancing configurations (the commands below walk through each step):

  1. Change balancing algorithm
  2. Add more workers
  3. Test failover
  4. Monitor performance
# Switch to least connections
sed -i 's/balance roundrobin/balance leastconn/' /etc/haproxy/haproxy.cfg
rc-service haproxy reload

# Add more workers (also add matching server lines to the
# HAProxy/Nginx backends, or the new workers will sit idle)
for i in 5 6 7 8; do
    cat >> /etc/supervisord.conf << EOF
[program:web-worker-$i]
command=/usr/local/bin/web-worker.sh 800$i $i
autostart=true
autorestart=true
EOF
done
supervisorctl reread
supervisorctl update

# Test the new configuration
/usr/local/bin/load-test.sh

🚨 Troubleshooting Common Issues

Workers Not Starting

Fix worker startup issues:

# Check supervisor logs
tail -f /var/log/supervisord.log

# Check individual worker logs
tail -f /var/log/worker*.log

# Verify ports are available
netstat -tlnp | grep 800

# Start workers manually
supervisorctl start all

Uneven Load Distribution

Balance the load better:

# Check current distribution
echo "show stat" | socat stdio /var/run/haproxy.sock

# Adjust weights
# Edit haproxy.cfg and change server weights
server web1 127.0.0.1:8001 check weight 10
server web2 127.0.0.1:8002 check weight 10  # Equal weights

# Use different algorithm
balance leastconn  # Better for long connections
balance source     # Session persistence

High CPU Usage

Optimize performance:

# Limit connections per worker
server web1 127.0.0.1:8001 maxconn 50

# Close idle server-side connections promptly
# ("option forceclose" is deprecated in modern HAProxy;
# use "option httpclose" if you need to force-close both sides)
option http-server-close

# Tune kernel parameters
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.conf
echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
sysctl -p
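
After reloading, confirm the kernel picked up the new values:

# Show the effective values
sysctl net.ipv4.tcp_tw_reuse net.core.somaxconn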

💡 Pro Tips

Tip 1: Health Checks

Implement smart health checks:

# Custom health check script (requires bc: apk add bc)
cat > /usr/local/bin/health-check.sh << 'EOF'
#!/bin/sh
PORT=$1
# CPU usage of the worker on this port (grep -v grep avoids matching ourselves)
CPU=$(ps aux | grep "web-worker.sh $PORT" | grep -v grep | head -n1 | awk '{print $3}')
if [ $(echo "${CPU:-0} > 80" | bc) -eq 1 ]; then
    exit 1
fi
echo "OK"
EOF

# Use in HAProxy (the check must be reachable over HTTP; see the sketch below)
option httpchk GET /health
http-check expect string OK
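
Since httpchk needs an HTTP endpoint, one way to expose the script is to reuse the busybox nc pattern from Step 3 (a minimal sketch; port 8101 is an arbitrary choice):

# Wrap the health check in a minimal HTTP response
cat > /tmp/health-http.sh << 'EOF'
#!/bin/sh
echo "HTTP/1.1 200 OK"
echo "Content-Type: text/plain"
echo ""
/usr/local/bin/health-check.sh 8001
EOF
chmod +x /tmp/health-http.sh

# Serve it on port 8101 in the background
while true; do nc -l -p 8101 -e /tmp/health-http.sh; done &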

Tip 2: Auto-scaling

Scale based on metrics:

# Auto-scale based on response time (requires bc; run from cron or a loop)
RESPONSE_TIME=$(curl -o /dev/null -s -w '%{time_total}' http://localhost)
if [ $(echo "$RESPONSE_TIME > 1.0" | bc) -eq 1 ]; then
    # Scale up: bring an extra worker online
    supervisorctl start web-worker-5
fi

Tip 3: Blue-Green Deployment

Zero-downtime updates:

# Create two backend groups
backend blue_servers
    server blue1 127.0.0.1:8001
    server blue2 127.0.0.1:8002

backend green_servers
    server green1 127.0.0.1:8011
    server green2 127.0.0.1:8012

# Switch traffic gradually
backend web_servers
    balance roundrobin
    server blue 127.0.0.1:8001 weight 90
    server green 127.0.0.1:8011 weight 10
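
To shift the split without editing the config, you can change weights through HAProxy's runtime socket (this assumes the stats socket was declared with level admin, as in the configs above):

# Move traffic from blue to green at runtime
echo "set weight web_servers/blue 50" | socat stdio /var/run/haproxy.sock
echo "set weight web_servers/green 50" | socat stdio /var/run/haproxy.sock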

✅ Best Practices

  1. Monitor everything

    • Worker health
    • Response times
    • Error rates
    • Resource usage
  2. Plan for failure

    # Always have backup servers
    server web1 127.0.0.1:8001 check
    server backup 127.0.0.1:9001 backup
  3. Use appropriate algorithms

    • Round-robin: Equal servers
    • Least connections: Varying load
    • Source hash: Session persistence
  4. Set realistic limits

    maxconn 100      # Per server
    timeout server 30s
    retries 3
  5. Regular testing

    # Weekly load test (crontab entry: Sundays at 02:00)
    0 2 * * 0 /usr/local/bin/load-test.sh

🏆 What You Learned

Excellent work! You can now:

  • ✅ Configure HAProxy and Nginx
  • ✅ Create worker process pools
  • ✅ Implement load balancing algorithms
  • ✅ Monitor and scale processes
  • ✅ Handle failover scenarios

Your processes are now perfectly balanced!

🎯 What’s Next?

Now that you understand load balancing, explore:

  • Container orchestration
  • Microservices architecture
  • Global load balancing
  • Service mesh technologies

Keep balancing those loads! ⚖️