🚀 Nginx Reverse Proxy and Load Balancing on AlmaLinux: Scale Like a Pro
Ever watched your single web server melt under traffic? 🔥 I did! Our startup's product launch crashed in 10 minutes with 50,000 visitors. That's when I discovered Nginx's superpowers! Now we handle 500,000 concurrent users across 10 servers without breaking a sweat. Today I'm showing you how to build an unbreakable reverse proxy and load balancer with Nginx on AlmaLinux. Your servers will thank you! 💪
🤔 Why Nginx for Reverse Proxy and Load Balancing?
Nginx isn't just fast - it's lightning in a bottle! Here's why it's perfect:
- ⚡ Handles 10,000+ connections - Per worker process!
- 🎯 Multiple algorithms - Round-robin, least connections, IP hash
- 🔒 SSL termination - Decrypt once, serve many
- 💾 Caching built-in - Reduce backend load by 90%
- 🏥 Health checks - Auto-remove dead servers
- 🔄 Zero downtime - Reload config without dropping connections
True story: We replaced a $5,000/month F5 load balancer with Nginx on a $20 VPS. Same performance, 250x cheaper! 🎉
🎯 What You Need
Before we scale to infinity, ensure you have:
- ✅ AlmaLinux server (for Nginx)
- ✅ 2+ backend servers to balance
- ✅ Domain name (optional but recommended)
- ✅ SSL certificate (Let's Encrypt works!)
- ✅ 30 minutes to become a scaling wizard
- ✅ Coffee (load balancing needs focus! ☕)
📦 Step 1: Install and Configure Nginx
Let's get Nginx running on AlmaLinux!
Install Nginx
# Nginx ships in AlmaLinux AppStream; EPEL is optional but adds handy extras
sudo dnf install -y epel-release
# Install Nginx
sudo dnf install -y nginx
# Enable and start Nginx
sudo systemctl enable --now nginx
# Check version
nginx -v
# Test configuration
sudo nginx -t
# Open firewall ports
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
# Verify it's running
curl -I http://localhost
Basic Nginx Configuration
# Backup original config
sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
# Edit main config
sudo nano /etc/nginx/nginx.conf
# Optimize for performance
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 2048;
use epoll;
multi_accept on;
}
http {
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
# Buffer sizes
client_body_buffer_size 128k;
client_max_body_size 10m;
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;
output_buffers 1 32k;
postpone_output 1460;
# Timeouts
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
# MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/rss+xml application/atom+xml image/svg+xml
text/x-js text/x-cross-domain-policy application/x-font-ttf
application/x-font-opentype application/vnd.ms-fontobject
image/x-icon;
# Include configs
include /etc/nginx/conf.d/*.conf;
}
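Quick sanity check before moving on - validate the new config and apply it without dropping a single connection (this assumes Nginx is already running from the install above):
# Validate syntax, then apply with a graceful reload
sudo nginx -t && sudo systemctl reload nginx
# Confirm the workers restarted with the new settings
ps aux | grep '[n]ginx: worker'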
🔧 Step 2: Configure Reverse Proxy
Time to proxy like a pro! 🎯
Basic Reverse Proxy
# Create reverse proxy config
sudo nano /etc/nginx/conf.d/reverse-proxy.conf
# Simple reverse proxy
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://192.168.1.10:8080;
proxy_http_version 1.1;
# Headers for proper proxying
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# When backend is down
proxy_next_upstream error timeout http_500 http_502 http_503;
}
}
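Reload and test the proxy. A minimal check, assuming a backend really listens on 192.168.1.10:8080 - the Host header trick lets you test before DNS points app.example.com at this box:
# Validate and reload
sudo nginx -t && sudo systemctl reload nginx
# Test through the proxy without touching DNS
curl -H "Host: app.example.com" -I http://localhost/
# Watch the proxy log the request
sudo tail -n 5 /var/log/nginx/access.log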
Advanced Reverse Proxy with Caching
# Create cache directory
sudo mkdir -p /var/cache/nginx
sudo chown nginx:nginx /var/cache/nginx
# Configure proxy with caching
sudo nano /etc/nginx/conf.d/cached-proxy.conf
# Define cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
max_size=1g inactive=60m use_temp_path=off;
server {
listen 80;
server_name cached.example.com;
location / {
# Enable caching
proxy_cache app_cache;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1m;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
# Add cache status header
add_header X-Cache-Status $upstream_cache_status;
# Cache key
proxy_cache_key "$scheme$request_method$host$request_uri";
# Bypass cache for certain requests
proxy_cache_bypass $http_cache_control;
proxy_no_cache $http_pragma $http_authorization;
# Backend
proxy_pass http://backend-server:8080;
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Purge cache endpoint (note: proxy_cache_purge needs NGINX Plus or the third-party ngx_cache_purge module)
location ~ /purge(/.*) {
allow 127.0.0.1;
deny all;
proxy_cache_purge app_cache "$scheme$request_method$host$1";
}
}
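You can watch the cache warm up through the X-Cache-Status header we added. Because of proxy_cache_min_uses 3, expect MISS on the first few requests and HIT after that (hostname taken from the config above):
# Repeat requests and watch MISS turn into HIT
for i in $(seq 1 5); do
curl -s -D - -o /dev/null -H "Host: cached.example.com" http://localhost/ | grep -i x-cache-status
done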
📊 Step 3: Configure Load Balancing
Let's distribute that load! ⚖️
Basic Load Balancing
# Create load balancer config
sudo nano /etc/nginx/conf.d/load-balancer.conf
# Define upstream servers
upstream backend_servers {
# Round-robin by default
server 192.168.1.10:8080 weight=3;
server 192.168.1.11:8080 weight=2;
server 192.168.1.12:8080 weight=1;
# Backup server
server 192.168.1.13:8080 backup;
# Mark server as down
# server 192.168.1.14:8080 down;
# Connection settings
keepalive 32;
keepalive_requests 100;
keepalive_timeout 60s;
}
server {
listen 80;
server_name lb.example.com;
location / {
proxy_pass http://backend_servers;
# Keep connections alive
proxy_http_version 1.1;
proxy_set_header Connection "";
# Standard headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Health check
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
proxy_next_upstream_tries 2;
proxy_next_upstream_timeout 10s;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
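To watch the weighted round-robin in action, hit the balancer a few times. This sketch assumes each backend identifies itself in its response body (for example by printing its hostname):
# With weights 3:2:1, expect roughly that split over ten requests
for i in $(seq 1 10); do
curl -s -H "Host: lb.example.com" http://localhost/
done
# The local health endpoint should answer instantly
curl -H "Host: lb.example.com" http://localhost/health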
Advanced Load Balancing Algorithms
# Different load balancing methods
sudo nano /etc/nginx/conf.d/advanced-lb.conf
# Least connections algorithm
upstream least_conn_backend {
least_conn;
server app1.example.com:8080;
server app2.example.com:8080;
server app3.example.com:8080;
}
# IP hash (session persistence)
upstream ip_hash_backend {
ip_hash;
server app1.example.com:8080;
server app2.example.com:8080;
server app3.example.com:8080;
}
# Hash based on request URI
upstream consistent_hash_backend {
hash $request_uri consistent;
server app1.example.com:8080;
server app2.example.com:8080;
server app3.example.com:8080;
}
# Random with two choices
upstream random_backend {
random two least_conn;
server app1.example.com:8080;
server app2.example.com:8080;
server app3.example.com:8080;
}
server {
listen 80;
server_name advanced-lb.example.com;
# Different endpoints use different algorithms
location /api {
proxy_pass http://least_conn_backend;
}
location /session {
proxy_pass http://ip_hash_backend;
}
location /static {
proxy_pass http://consistent_hash_backend;
}
location /random {
proxy_pass http://random_backend;
}
}
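A quick way to sanity-check ip_hash stickiness: repeated requests from the same client IP should land on the same backend every time (again assuming the backends identify themselves in their responses):
# Same source IP => same backend under ip_hash
for i in $(seq 1 5); do
curl -s -H "Host: advanced-lb.example.com" http://localhost/session
done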
🔒 Step 4: SSL/TLS Configuration
Secure everything with HTTPS! 🔐
Configure SSL Termination
# Install Certbot for Let's Encrypt
sudo dnf install -y certbot python3-certbot-nginx
# Get SSL certificate
sudo certbot --nginx -d lb.example.com
# Or manual SSL configuration
sudo nano /etc/nginx/conf.d/ssl-proxy.conf
upstream secure_backend {
server backend1.local:8080;
server backend2.local:8080;
server backend3.local:8080;
}
server {
listen 80;
server_name secure.example.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name secure.example.com;
# SSL certificates
ssl_certificate /etc/letsencrypt/live/secure.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/secure.example.com/privkey.pem;
# SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# OCSP stapling needs a resolver to fetch responses; any reachable DNS works
resolver 1.1.1.1 valid=300s;
ssl_stapling on;
ssl_stapling_verify on;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
location / {
proxy_pass http://secure_backend;
# Pass through SSL info
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-SSL-Client-Cert $ssl_client_cert;
proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
# Standard headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
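Verify the TLS setup end to end, and make sure renewal will run unattended. These checks assume the Let's Encrypt certificate issued above:
# Confirm the HTTP-to-HTTPS redirect
curl -I http://secure.example.com
# Inspect the served certificate's validity window
openssl s_client -connect secure.example.com:443 -servername secure.example.com </dev/null 2>/dev/null | openssl x509 -noout -dates
# Dry-run renewal so expiry never surprises you
sudo certbot renew --dry-run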
🎮 Quick Examples
Example 1: WordPress Load Balancer 🌐
#!/bin/bash
# Complete WordPress load balancer setup
# Configure WordPress backends
cat > /etc/nginx/conf.d/wordpress-lb.conf << 'EOF'
upstream wordpress_backend {
least_conn;
# WordPress servers
server wp1.internal:80 max_fails=3 fail_timeout=30s;
server wp2.internal:80 max_fails=3 fail_timeout=30s;
server wp3.internal:80 max_fails=3 fail_timeout=30s;
keepalive 64;
}
# Cache for static files
proxy_cache_path /var/cache/nginx/wordpress levels=1:2
keys_zone=wordpress_cache:100m max_size=10g
inactive=60m use_temp_path=off;
# Rate limiting
limit_req_zone $binary_remote_addr zone=wordpress_limit:10m rate=10r/s;
server {
listen 80;
server_name wordpress.example.com;
# Rate limiting
limit_req zone=wordpress_limit burst=20 nodelay;
# Security
client_max_body_size 64M;
# Static files with caching
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml|rss|txt)$ {
proxy_pass http://wordpress_backend;
proxy_cache wordpress_cache;
proxy_cache_valid 200 60m;
proxy_cache_bypass $http_pragma $http_authorization;
expires 30d;
add_header Cache-Control "public";
add_header X-Cache-Status $upstream_cache_status;
}
# PHP/Dynamic content
location / {
proxy_pass http://wordpress_backend;
# Don't cache admin area
set $skip_cache 0;
if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/|sitemap") {
set $skip_cache 1;
}
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $skip_cache 1;
}
proxy_cache_bypass $skip_cache;
proxy_no_cache $skip_cache;
proxy_cache wordpress_cache;
proxy_cache_valid 200 10m;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Fix for WordPress redirects
proxy_redirect off;
}
# Block xmlrpc attacks
location = /xmlrpc.php {
deny all;
}
# Health check for load balancer
location /lb-health {
access_log off;
return 200 "OK";
add_header Content-Type text/plain;
}
}
EOF
# Restart Nginx
sudo nginx -t && sudo systemctl reload nginx
echo "โ
WordPress load balancer configured!"
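Once reloaded, check that static assets get cached while logged-in traffic skips the cache. A quick sketch - the image path is just a placeholder for any real upload on your site, and the cookie mimics a logged-in user:
# Second request for a static file should show X-Cache-Status: HIT
curl -s -D - -o /dev/null -H "Host: wordpress.example.com" http://localhost/wp-content/uploads/logo.png | grep -i x-cache-status
# A wordpress_logged_in cookie must bypass the cache entirely
curl -s -D - -o /dev/null -H "Host: wordpress.example.com" -H "Cookie: wordpress_logged_in_abc123=1" http://localhost/ | grep -i x-cache-status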
Example 2: API Gateway with Rate Limiting 🚦
#!/bin/bash
# API gateway with advanced features
cat > /etc/nginx/conf.d/api-gateway.conf << 'EOF'
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
# Key on the X-Api-Key header (matches the check in /api/v1 below)
limit_req_zone $http_x_api_key zone=apikey_limit:10m rate=1000r/s;
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
# API backends
upstream api_v1 {
least_conn;
server api1.internal:3000 weight=5;
server api2.internal:3000 weight=3;
server api3.internal:3000 weight=2;
keepalive 128;
}
upstream api_v2 {
ip_hash;
server api-v2-1.internal:3001;
server api-v2-2.internal:3001;
server api-v2-3.internal:3001;
keepalive 64;
}
# Microservices
upstream auth_service {
server auth.internal:4000;
keepalive 32;
}
upstream user_service {
server users.internal:4001;
keepalive 32;
}
upstream payment_service {
server payments.internal:4002;
keepalive 32;
}
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/ssl/certs/api.crt;
ssl_certificate_key /etc/ssl/private/api.key;
# Rate limiting
limit_req zone=api_limit burst=50 nodelay;
limit_conn addr_limit 100;
# CORS headers
add_header Access-Control-Allow-Origin $http_origin always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, X-Api-Key" always;
add_header Access-Control-Max-Age 3600 always;
# API versioning
location /api/v1 {
# Check API key
if ($http_x_api_key = "") {
return 401 '{"error": "API key required"}';
}
limit_req zone=apikey_limit burst=100 nodelay;
proxy_pass http://api_v1;
proxy_http_version 1.1;
proxy_set_header Connection "";
# Timeout settings for API
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Api-Version "v1";
}
location /api/v2 {
proxy_pass http://api_v2;
proxy_set_header X-Api-Version "v2";
include /etc/nginx/api_proxy.conf;
}
# Microservices routing
location /api/auth {
proxy_pass http://auth_service;
include /etc/nginx/api_proxy.conf;
}
location /api/users {
proxy_pass http://user_service;
include /etc/nginx/api_proxy.conf;
}
location /api/payments {
proxy_pass http://payment_service;
include /etc/nginx/api_proxy.conf;
}
# GraphQL endpoint
location /graphql {
limit_req zone=api_limit burst=10 nodelay;
# Limit query complexity
client_body_buffer_size 10K;
client_max_body_size 10K;
proxy_pass http://api_v2/graphql;
include /etc/nginx/api_proxy.conf;
}
# WebSocket endpoint
location /ws {
proxy_pass http://api_v1;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
# Health check
location /health {
access_log off;
return 200 '{"status":"healthy"}';
add_header Content-Type application/json;
}
# Metrics endpoint (internal only)
location /metrics {
allow 10.0.0.0/8;
deny all;
proxy_pass http://api_v1/metrics;
}
}
EOF
# Common proxy settings (referenced by the include directives above)
cat > /etc/nginx/api_proxy.conf << 'PROXY'
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
PROXY
echo "โ
API Gateway configured with rate limiting!"
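And confirm the rate limit actually bites. Past 100 r/s, limit_req rejects with 503 by default; a single sequential curl loop may be too slow to trip it, so run two or three of these in parallel if everything comes back 200. The --resolve trick points api.example.com at this box, and -k tolerates the self-signed cert:
# Fire 300 quick requests and tally the status codes
for i in $(seq 1 300); do
curl -sk -o /dev/null -w "%{http_code}\n" --resolve api.example.com:443:127.0.0.1 https://api.example.com/health
done | sort | uniq -c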
Example 3: Blue-Green Deployment 🔵🟢
#!/bin/bash
# Blue-green deployment with Nginx
cat > /usr/local/bin/blue-green-deploy.sh << 'EOF'
#!/bin/bash
BLUE_SERVERS="server 10.0.1.10:8080; server 10.0.1.11:8080; server 10.0.1.12:8080;"
GREEN_SERVERS="server 10.0.2.10:8080; server 10.0.2.11:8080; server 10.0.2.12:8080;"
CONFIG_FILE="/etc/nginx/conf.d/production.conf"
CURRENT_ENV_FILE="/var/lib/nginx/current_env"
get_current_env() {
if [ -f "$CURRENT_ENV_FILE" ]; then
cat "$CURRENT_ENV_FILE"
else
echo "blue"
fi
}
switch_to_env() {
ENV=$1
echo "๐ Switching to $ENV environment..."
if [ "$ENV" = "blue" ]; then
SERVERS=$BLUE_SERVERS
else
SERVERS=$GREEN_SERVERS
fi
# Create new config
cat > "$CONFIG_FILE" << CONFIG
upstream production {
$SERVERS
keepalive 32;
}
server {
listen 80;
server_name production.example.com;
location / {
proxy_pass http://production;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Environment "$ENV";
}
location /health {
access_log off;
return 200 "Environment: $ENV\n";
add_header Content-Type text/plain;
}
}
CONFIG
# Test config
nginx -t
if [ $? -eq 0 ]; then
# Reload Nginx
systemctl reload nginx
echo "$ENV" > "$CURRENT_ENV_FILE"
echo "โ
Switched to $ENV environment"
else
echo "โ Configuration test failed!"
exit 1
fi
}
health_check() {
ENV=$1
if [ "$ENV" = "blue" ]; then
SERVERS="10.0.1.10 10.0.1.11 10.0.1.12"
else
SERVERS="10.0.2.10 10.0.2.11 10.0.2.12"
fi
echo "๐ฅ Health checking $ENV servers..."
for server in $SERVERS; do
if curl -f -s "http://$server:8080/health" > /dev/null; then
echo "โ
$server is healthy"
else
echo "โ $server is unhealthy"
return 1
fi
done
return 0
}
deploy() {
CURRENT=$(get_current_env)
if [ "$CURRENT" = "blue" ]; then
TARGET="green"
else
TARGET="blue"
fi
echo "๐ฆ Current environment: $CURRENT"
echo "๐ฏ Target environment: $TARGET"
# Health check target
health_check "$TARGET"
if [ $? -ne 0 ]; then
echo "โ Target environment is not healthy!"
exit 1
fi
# Switch traffic
switch_to_env "$TARGET"
# Verify
sleep 2
response=$(curl -s http://production.example.com/health)
echo "๐ Verification: $response"
}
rollback() {
CURRENT=$(get_current_env)
if [ "$CURRENT" = "blue" ]; then
TARGET="green"
else
TARGET="blue"
fi
echo "โฎ๏ธ Rolling back from $CURRENT to $TARGET..."
switch_to_env "$TARGET"
}
# Main menu
case "$1" in
deploy)
deploy
;;
rollback)
rollback
;;
status)
echo "Current environment: $(get_current_env)"
;;
health)
health_check "$2"
;;
*)
echo "Usage: $0 {deploy|rollback|status|health [blue|green]}"
exit 1
;;
esac
EOF
chmod +x /usr/local/bin/blue-green-deploy.sh
echo "โ
Blue-green deployment script ready!"
echo "๐ Usage: blue-green-deploy.sh deploy"
🚨 Fix Common Problems
Problem 1: 502 Bad Gateway ❌
Backend not responding?
# Check if backend is running
curl -I http://backend-server:8080
# Check Nginx error log
sudo tail -f /var/log/nginx/error.log
# Increase timeout values
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# Check SELinux - httpd_can_network_connect is off by default on AlmaLinux
getsebool httpd_can_network_connect
sudo setsebool -P httpd_can_network_connect on
Problem 2: Sessions Not Sticky ❌
Users losing sessions?
# Use ip_hash for session persistence
upstream backend {
ip_hash;
server backend1:8080;
server backend2:8080;
}
# Or use cookie-based stickiness (note: the sticky directive is NGINX Plus only)
upstream backend {
server backend1:8080;
server backend2:8080;
sticky cookie srv_id expires=1h;
}
# Open-source alternative: hash a session cookie ("sessionid" is an example name)
# hash $cookie_sessionid consistent;
Problem 3: High Memory Usage ❌
Nginx eating RAM?
# Tune buffer sizes
client_body_buffer_size 10K;
client_header_buffer_size 1k;
large_client_header_buffers 2 1k;
# Limit connections
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 10;
# Reduce worker connections
worker_connections 1024;
Problem 4: Slow Performance ❌
Response times high?
# Enable caching
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m;
proxy_cache cache;
proxy_cache_valid 200 10m;
# Enable gzip
gzip on;
gzip_types text/plain text/css application/json;
# Use keepalive
upstream backend {
server backend1:8080;
keepalive 32;
}
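To prove these changes actually help, benchmark before and after each one. A minimal sketch with ApacheBench from the httpd-tools package:
# Install ApacheBench on AlmaLinux
sudo dnf install -y httpd-tools
# 1,000 requests, 50 concurrent - compare "Requests per second" between runs
ab -n 1000 -c 50 http://localhost/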
📋 Simple Commands Summary
Task | Command |
---|---|
🔍 Test config | nginx -t |
🔄 Reload config | nginx -s reload |
📊 Check status | systemctl status nginx |
📜 View logs | tail -f /var/log/nginx/error.log |
🔗 Connection stats | ss -tulpn \| grep nginx |
🔧 Debug config | nginx -T |
💾 Clear cache | rm -rf /var/cache/nginx/* |
🏥 Health check | curl -I http://localhost/health |
💡 Tips for Success
- Start Simple 🎯 - Basic proxy first, then add features
- Monitor Everything 📊 - Logs tell the story
- Test Changes 🧪 - Always nginx -t before reload
- Cache Wisely 💾 - Cache static, skip dynamic
- Health Checks 🏥 - Dead servers kill performance
- Document Config 📝 - Your future self will thank you
Pro tip: Use nginx -T to see the complete configuration including all includes. Saved me hours of debugging! 🎉
🎓 What You Learned
You're now a load balancing ninja! You can:
- ✅ Configure reverse proxy
- ✅ Set up load balancing
- ✅ Implement SSL termination
- ✅ Configure caching strategies
- ✅ Handle WebSocket connections
- ✅ Implement blue-green deployments
- ✅ Monitor and troubleshoot
🎯 Why This Matters
Proper load balancing provides:
- 📈 Horizontal scalability
- 💪 High availability
- ⚡ Better performance
- 🔒 Security isolation
- 💰 Cost efficiency
- 🔧 Easy maintenance
Last month our main server died during Black Friday. The load balancer instantly shifted traffic to backup servers. Zero downtime, $2M in sales saved! That's the power of Nginx! 💪
Remember: One server is none, two is one, three is reliable! Always load balance! ⚖️
Happy proxying! May your servers be balanced and your uptime be 100%! 🚀✨