⚖️ AlmaLinux Load Balancer with Nginx: Complete Setup Guide
Ready to scale your applications to handle millions of users? 🚀 Load balancing is the secret weapon that powers the world’s largest websites, distributing traffic across multiple servers to ensure lightning-fast response times and bulletproof reliability! This comprehensive guide will transform your AlmaLinux system into a high-performance load balancer that can handle any traffic surge! ✨
Think of a load balancer as the ultimate traffic director - intelligently routing requests to the best available server, detecting failures instantly, and ensuring your users never experience downtime! ⚖️
🤔 Why Use Load Balancing with Nginx?
Nginx-powered load balancing provides incredible advantages for modern applications! 🌟
Essential Benefits:
- ⚡ Lightning Performance - Handle 10x more concurrent users
- 🛡️ Zero Downtime - Automatic failover when servers fail
- 📈 Infinite Scalability - Add servers without service interruption
- 🔄 Intelligent Distribution - Route traffic based on server capacity
- 🌐 SSL Termination - Centralized certificate management
- 📊 Real-time Monitoring - Track server health and performance
- 💾 Session Persistence - Keep users connected to the same server
- 🔧 Advanced Algorithms - Round-robin, weighted, IP hash, and more
- 🚀 Caching & Compression - Boost performance with built-in features
- 💰 Cost Efficiency - Maximize resource utilization
🎯 What You Need Before Starting
Let’s ensure you’re ready for this high-availability adventure! ✅
System Requirements:
- ✅ AlmaLinux 8 or 9 (fresh installation recommended)
- ✅ Minimum 4GB RAM (8GB+ for high-traffic setups)
- ✅ Multiple backend servers or VMs for testing
- ✅ Static public IP address for the load balancer
- ✅ Network connectivity between all servers
- ✅ Basic understanding of HTTP/HTTPS protocols
Infrastructure Setup:
- ✅ Load balancer server (main Nginx instance)
- ✅ 2+ backend web servers for testing
- ✅ SSL certificates for HTTPS termination
- ✅ DNS configuration pointing to load balancer
What We’ll Build:
- ✅ High-performance Nginx load balancer
- ✅ Multiple load balancing algorithms
- ✅ Health checks and automatic failover
- ✅ SSL termination and security features
- ✅ Session persistence mechanisms
- ✅ Advanced monitoring and logging
- ✅ Performance optimization techniques
📝 Step 1: System Preparation and Nginx Installation
Let’s start by preparing our AlmaLinux system for high-performance load balancing!
# Update system packages
sudo dnf update -y
# Install Nginx and essential tools
# (the headers-more module is not in the AlmaLinux base repos; enable
#  EPEL or build it separately if you need it)
sudo dnf install -y \
nginx \
nginx-mod-stream \
openssl \
curl \
wget \
bc \
net-tools \
htop \
iotop \
tcpdump \
wireshark-cli \
bind-utils
System Optimization for Load Balancing:
# Optimize kernel parameters for high-performance networking
# (a sysctl.d drop-in is idempotent, unlike appending to /etc/sysctl.conf)
sudo tee /etc/sysctl.d/90-loadbalancer.conf << 'EOF'
# Load Balancer Performance Tuning
# Network performance
net.core.somaxconn = 65536
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Connection handling
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_fin_timeout = 30
# File descriptor limits
fs.file-max = 1000000
fs.nr_open = 1000000
# Memory management
vm.swappiness = 10
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
EOF
# Apply kernel optimizations (loads sysctl.d drop-ins too)
sudo sysctl --system
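After applying the settings, you can read a few of them back straight from `/proc` to confirm the kernel picked them up. A small sketch (the key names mirror the tuning file above):

```shell
# Read back a few of the tuned sysctls directly from /proc
show_sysctls() {
    for key in net.core.somaxconn net.ipv4.tcp_fin_timeout vm.swappiness; do
        path="/proc/sys/${key//./\/}"
        # Print "key = value" only when the entry exists and is readable
        [ -r "$path" ] && printf '%s = %s\n' "$key" "$(cat "$path")"
    done
}
show_sysctls
```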
# Increase file descriptor limits for login sessions
# (systemd services ignore limits.conf; the nginx service itself is
#  covered by worker_rlimit_nofile in nginx.conf)
sudo tee /etc/security/limits.d/nginx.conf << 'EOF'
nginx soft nofile 65536
nginx hard nofile 65536
* soft nofile 65536
* hard nofile 65536
EOF
Configure Firewall for Load Balancer:
# Configure firewall for load balancing
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-port=8080/tcp # Backend health checks
sudo firewall-cmd --permanent --add-port=9000/tcp # Nginx status page
# Allow connections from backend networks (these RFC 1918 ranges are broad; tighten them to your actual backend subnets in production)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/8" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="172.16.0.0/12" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.0/16" accept'
# Reload firewall rules
sudo firewall-cmd --reload
🔧 Step 2: Basic Load Balancer Configuration
Let’s create a robust Nginx load balancer configuration with multiple backend servers!
# Backup original Nginx configuration
sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
# Create optimized main Nginx configuration
sudo tee /etc/nginx/nginx.conf << 'EOF'
# High-Performance Load Balancer Configuration
user nginx;
worker_processes auto;
worker_rlimit_nofile 65536;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;
# Load dynamic modules
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 8192;
use epoll;
multi_accept on;
accept_mutex off;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
keepalive_requests 1000;
types_hash_max_size 2048;
server_tokens off;
# MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'upstream_addr="$upstream_addr" '
'upstream_status="$upstream_status" '
'upstream_response_time="$upstream_response_time" '
'request_time="$request_time"';
# Load balancer logging
log_format lb_log '$remote_addr - [$time_local] "$request" '
'$status $body_bytes_sent '
'upstream="$upstream_addr" '
'upstream_time="$upstream_response_time" '
'total_time="$request_time" '
'cache_status="$upstream_cache_status"';
access_log /var/log/nginx/access.log main;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/m;
limit_req_zone $binary_remote_addr zone=general:10m rate=1000r/m;
# Connection limiting
limit_conn_zone $binary_remote_addr zone=addr:10m;
# Upstream backend servers
upstream backend_servers {
# Load balancing method (round-robin is default)
# Other options: least_conn, ip_hash, hash, random
least_conn;
# Backend server definitions
server 192.168.1.101:80 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.102:80 weight=2 max_fails=3 fail_timeout=30s;
server 192.168.1.103:80 weight=1 max_fails=3 fail_timeout=30s backup;
# Keepalive connections to backends
keepalive 32;
keepalive_requests 1000;
keepalive_timeout 60s;
}
# API backend servers (separate pool)
upstream api_servers {
least_conn;
server 192.168.1.201:8080 weight=1 max_fails=2 fail_timeout=15s;
server 192.168.1.202:8080 weight=1 max_fails=2 fail_timeout=15s;
keepalive 16;
}
# SSL/TLS configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE+AESGCM:ECDHE+AES256:ECDHE+AES128:!aNULL:!MD5:!DSS;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Include additional configuration files
include /etc/nginx/conf.d/*.conf;
}
# Stream module for TCP/UDP load balancing
stream {
# TCP load balancing for databases
upstream database_servers {
server 192.168.1.301:3306 weight=1 max_fails=2 fail_timeout=10s;
server 192.168.1.302:3306 weight=1 max_fails=2 fail_timeout=10s;
}
server {
listen 3306;
proxy_pass database_servers;
# Database connections are long-lived; 1s timeouts would cut them off
proxy_timeout 10m;
proxy_connect_timeout 5s;
}
}
EOF
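The limit_req zones above use nginx's leaky-bucket algorithm: requests drain at the configured rate and up to `burst` may queue. A rough bash sketch of how rate and burst interact (simplified integer math, hypothetical timestamps — not nginx's actual implementation):

```shell
# Simplified leaky bucket: requests drain at `rate` per second and at
# most `burst` may queue; anything beyond that is rejected outright
rate=5; burst=10
excess=0; last_ms=0
verdict=""

handle_request() {  # $1 = arrival time in milliseconds
    local now=$1
    local leaked=$(( (now - last_ms) * rate / 1000 ))
    excess=$(( excess - leaked ))
    [ "$excess" -lt 0 ] && excess=0
    last_ms=$now
    if [ "$excess" -gt "$burst" ]; then
        verdict=reject
    else
        excess=$(( excess + 1 ))
        verdict=accept
    fi
}

# 15 requests in the same millisecond: ten queue in the burst, one is
# in flight, and the remaining four are rejected
accepted=0
for i in $(seq 1 15); do
    handle_request 0
    [ "$verdict" = accept ] && accepted=$(( accepted + 1 ))
done
echo "accepted=$accepted out of 15"
```

This is why `burst=50 nodelay` on the general zone lets short spikes through while a sustained flood still gets 503s.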
Create Main Load Balancer Virtual Host:
# Create load balancer configuration
sudo tee /etc/nginx/conf.d/load-balancer.conf << 'EOF'
# Main Load Balancer Configuration
# HTTP to HTTPS redirect
server {
listen 80;
server_name example.com www.example.com;
# Security headers
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
# Redirect all HTTP traffic to HTTPS ($host keeps the name the client asked for)
return 301 https://$host$request_uri;
}
# HTTPS Load Balancer
server {
listen 443 ssl http2;
server_name example.com www.example.com;
# SSL Certificate configuration
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
# SSL optimization
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/example.com.ca.crt;
# Enhanced security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Access and error logs
access_log /var/log/nginx/lb_access.log lb_log;
error_log /var/log/nginx/lb_error.log;
# Rate limiting
limit_req zone=general burst=50 nodelay;
limit_conn addr 20;
# Main application traffic
location / {
# Proxy settings
proxy_pass http://backend_servers;
proxy_http_version 1.1;
# An empty Connection header lets nginx reuse the upstream keepalive
# connections defined above; for WebSockets, map $http_upgrade to a
# $connection_upgrade variable and use that here instead
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "";
proxy_cache_bypass $http_upgrade;
# Headers for backend servers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
# Timeouts
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# Error handling
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
proxy_next_upstream_tries 3;
proxy_next_upstream_timeout 10s;
}
# API endpoints with different backend pool
location /api/ {
proxy_pass http://api_servers;
proxy_http_version 1.1;
# API-specific headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# API rate limiting
limit_req zone=api burst=20 nodelay;
# API timeouts (shorter for APIs)
proxy_connect_timeout 3s;
proxy_send_timeout 5s;
proxy_read_timeout 5s;
# No caching for API responses
proxy_cache_bypass 1;
proxy_no_cache 1;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Static file caching
# (proxy_cache_valid only takes effect once a proxy_cache_path zone is
#  defined in the http block and enabled here with a proxy_cache directive)
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
proxy_pass http://backend_servers;
proxy_cache_valid 200 30d;
proxy_cache_valid 404 1m;
expires 30d;
add_header Cache-Control "public, immutable";
add_header X-Cache-Status $upstream_cache_status;
}
# Admin area with stricter access
location /admin/ {
# Restrict admin access by IP (adjust as needed)
allow 192.168.1.0/24;
allow 10.0.0.0/8;
deny all;
# Stricter rate limiting for admin
limit_req zone=login burst=5 nodelay;
proxy_pass http://backend_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Load balancer status and monitoring
server {
listen 9000;
server_name localhost;
# Restrict access to monitoring
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# Nginx status
location /nginx_status {
stub_status on;
access_log off;
}
# Upstream status
location /upstream_status {
# This requires nginx-plus or custom module
# For open source nginx, we'll create a custom endpoint
proxy_pass http://127.0.0.1:8080/status;
}
}
EOF
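Two pieces referenced above are worth defining explicitly in the http block: a cache zone (without one, the `proxy_cache_valid` lines in the static-file location are inert) and, if you serve WebSockets, a Connection header map. A sketch — the path, zone name `static_cache`, and sizes are illustrative, not mandated:

```nginx
# Place inside the http block, e.g. /etc/nginx/conf.d/cache-and-ws.conf
# Cache zone backing the static-file location (name and path are examples)
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

# WebSocket-aware Connection header: "upgrade" when the client asks for
# an upgrade, empty otherwise so upstream keepalive still works
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}
```

With this in place, add `proxy_cache static_cache;` to the static-file location and use `proxy_set_header Connection $connection_upgrade;` in locations that proxy WebSocket traffic.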
🌟 Step 3: Advanced Load Balancing Algorithms
Let’s implement different load balancing strategies for various use cases!
# Create configuration for different load balancing methods
sudo tee /etc/nginx/conf.d/lb-algorithms.conf << 'EOF'
# Advanced Load Balancing Algorithms
# Round Robin (default) - Simple rotation
upstream web_round_robin {
server 192.168.1.101:80;
server 192.168.1.102:80;
server 192.168.1.103:80;
}
# Least Connections - Route to server with fewest active connections
upstream web_least_conn {
least_conn;
server 192.168.1.101:80 weight=1;
server 192.168.1.102:80 weight=1;
server 192.168.1.103:80 weight=1;
}
# IP Hash - Same client always goes to same server (session persistence)
upstream web_ip_hash {
ip_hash;
server 192.168.1.101:80;
server 192.168.1.102:80;
server 192.168.1.103:80;
}
# Weighted Round Robin - Different server capacities
upstream web_weighted {
server 192.168.1.101:80 weight=5; # High-performance server
server 192.168.1.102:80 weight=3; # Medium-performance server
server 192.168.1.103:80 weight=1; # Low-performance server
}
# Hash-based (consistent hashing) - Based on URI or custom variable
upstream web_hash_uri {
hash $request_uri consistent;
server 192.168.1.101:80;
server 192.168.1.102:80;
server 192.168.1.103:80;
}
# Random with two choices - Good for large server pools
upstream web_random {
random two least_conn;
server 192.168.1.101:80;
server 192.168.1.102:80;
server 192.168.1.103:80;
server 192.168.1.104:80;
server 192.168.1.105:80;
}
# Geographic load balancing based on client location
# (requires ngx_http_geoip_module and a GeoIP country database; the map
#  keys must be two-letter ISO country codes, not region names)
map $geoip_country_code $geo_backend {
default web_round_robin;
US web_us_servers;
DE web_eu_servers;
JP web_asia_servers;
}
upstream web_us_servers {
server 192.168.1.101:80;
server 192.168.1.102:80;
}
upstream web_eu_servers {
server 192.168.2.101:80;
server 192.168.2.102:80;
}
upstream web_asia_servers {
server 192.168.3.101:80;
server 192.168.3.102:80;
}
# A/B testing upstream groups
map $cookie_ab_test $ab_backend {
default backend_a;
"b" backend_b;
}
upstream backend_a {
server 192.168.1.101:80;
server 192.168.1.102:80;
}
upstream backend_b {
server 192.168.1.201:80;
server 192.168.1.202:80;
}
EOF
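nginx's weighted round-robin is the "smooth" variant: on each pick it raises every server's current weight by its configured weight, serves the highest, then subtracts the total weight from the winner. A bash sketch mirroring the 5/3/1 weights of the web_weighted pool (server names are placeholders):

```shell
# Smooth weighted round-robin over weights 5/3/1
servers="srv_a srv_b srv_c"
declare -A weight=( [srv_a]=5 [srv_b]=3 [srv_c]=1 )
declare -A current=( [srv_a]=0 [srv_b]=0 [srv_c]=0 )
declare -A hits=( [srv_a]=0 [srv_b]=0 [srv_c]=0 )
total=9   # sum of weights = one full selection cycle

pick() {
    local best=""
    for s in $servers; do
        current[$s]=$(( current[$s] + weight[$s] ))
        if [ -z "$best" ] || [ "${current[$s]}" -gt "${current[$best]}" ]; then
            best=$s
        fi
    done
    # Winner pays back the total weight, spreading picks evenly over time
    current[$best]=$(( current[$best] - total ))
    picked=$best
}

for i in $(seq 1 $total); do
    pick
    hits[$picked]=$(( hits[$picked] + 1 ))
done
for s in $servers; do echo "$s served ${hits[$s]} of $total"; done
```

Over one full cycle of 9 picks, each server is chosen exactly as many times as its weight, without ever sending long unbroken runs to the heaviest server.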
Create Algorithm Selection Virtual Host:
# Create configuration to demonstrate different algorithms
sudo tee /etc/nginx/conf.d/algorithm-demo.conf << 'EOF'
# Algorithm Demo Configuration
server {
listen 80;
server_name lb-demo.example.com;
# Default location uses least connections
location / {
proxy_pass http://web_least_conn;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Add header to show which algorithm is used
add_header X-LB-Algorithm "least_conn";
}
# Session-sticky routing for user sessions
location /app/ {
proxy_pass http://web_ip_hash;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-LB-Algorithm "ip_hash";
}
# Weighted distribution for different server capacities
location /heavy/ {
proxy_pass http://web_weighted;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-LB-Algorithm "weighted";
}
# URI-based distribution (same URI always goes to same server)
location /cache/ {
proxy_pass http://web_hash_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-LB-Algorithm "hash_uri";
}
# Geographic distribution
location /geo/ {
proxy_pass http://$geo_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-LB-Algorithm "geographic";
add_header X-LB-Region $geoip_country_code;
}
# A/B testing
location /test/ {
proxy_pass http://$ab_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-LB-Algorithm "ab_test";
add_header X-AB-Group $cookie_ab_test;
}
}
EOF
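The ip_hash method used for /app/ keys on the first three octets of the client's IPv4 address, so every client in one /24 lands on the same backend. A simplified sketch of that stickiness — `cksum` stands in for nginx's internal hash, and the documentation IPs are just examples:

```shell
# ip_hash-style stickiness: hash the client's /24 onto the server list
servers="192.168.1.101 192.168.1.102 192.168.1.103"

pick_backend() {
    local prefix=${1%.*}    # drop the last octet, like ip_hash does
    local h
    h=$(printf '%s' "$prefix" | cksum | cut -d' ' -f1)
    # Index into the server list by hash modulo pool size
    set -- $servers
    shift $(( h % $# ))
    echo "$1"
}

pick_backend 203.0.113.7
pick_backend 203.0.113.99    # same /24 -> same backend, guaranteed
pick_backend 198.51.100.23   # different /24 may map elsewhere
```

This is also why ip_hash misbehaves behind another proxy layer: every client appears to come from the proxy's /24 and all traffic sticks to one backend.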
✅ Step 4: Health Checks and Automatic Failover
Implement sophisticated health monitoring to ensure high availability!
# Create health check configuration
sudo tee /etc/nginx/conf.d/health-checks.conf << 'EOF'
# Advanced Health Check Configuration
# Upstream with detailed health check parameters
upstream web_with_health {
# Primary servers
# (slow_start is an NGINX Plus feature and is rejected by open source nginx)
server 192.168.1.101:80 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.102:80 weight=2 max_fails=3 fail_timeout=30s;
# Backup server (only used when primary servers fail)
server 192.168.1.103:80 weight=1 max_fails=2 fail_timeout=60s backup;
# Down server (temporarily removed from rotation)
# server 192.168.1.104:80 down;
keepalive 32;
}
# NOTE: TCP services such as MySQL and Redis cannot be proxied from the
# http context; define pools like these inside the stream block instead.
# Redis Sentinel (port 26379) speaks its own protocol and should not be
# mixed into a Redis data pool as a backup server.
#
# upstream db_cluster {
#     server 192.168.1.201:3306 weight=1 max_fails=2 fail_timeout=10s;
#     server 192.168.1.202:3306 weight=1 max_fails=2 fail_timeout=10s;
#     server 192.168.1.203:3306 weight=1 max_fails=2 fail_timeout=10s backup;
# }
#
# upstream redis_cluster {
#     server 192.168.1.301:6379 weight=1 max_fails=1 fail_timeout=5s;
#     server 192.168.1.302:6379 weight=1 max_fails=1 fail_timeout=5s;
# }
EOF
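max_fails and fail_timeout implement passive health checking: after max_fails failed attempts, nginx skips the server for fail_timeout seconds, then tries it again. A bash sketch of that bookkeeping for a single server (timestamps are illustrative):

```shell
# Passive health-check bookkeeping: max_fails within fail_timeout
max_fails=3; fail_timeout=30
fails=0; down_until=0

report() {  # $1 = time in seconds, $2 = ok|fail
    local now=$1 result=$2
    if [ "$now" -lt "$down_until" ]; then
        echo "t=$now skipped (marked down)"
        return
    fi
    if [ "$result" = ok ]; then
        fails=0                      # any success resets the counter
        echo "t=$now served"
    else
        fails=$(( fails + 1 ))
        if [ "$fails" -ge "$max_fails" ]; then
            down_until=$(( now + fail_timeout ))
            fails=0
            echo "t=$now marked down until t=$down_until"
        else
            echo "t=$now failure $fails/$max_fails"
        fi
    fi
}

report 0 fail    # failure 1/3
report 1 fail    # failure 2/3
report 2 fail    # third failure: marked down until t=32
report 10 ok     # still inside fail_timeout: skipped
report 40 ok     # fail_timeout elapsed: served again
```

The external health-checker script below complements this by proactively probing /health, rather than waiting for real client requests to fail.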
# Create external health check script
sudo tee /usr/local/bin/nginx-health-checker.sh << 'EOF'
#!/bin/bash
# Nginx Backend Health Checker
LOG_FILE="/var/log/nginx/health-check.log"
BACKEND_SERVERS=(
"192.168.1.101:80"
"192.168.1.102:80"
"192.168.1.103:80"
)
check_backend_health() {
local server=$1
local host=$(echo $server | cut -d: -f1)
local port=$(echo $server | cut -d: -f2)
# HTTP health check
if curl -sf --connect-timeout 5 --max-time 10 "http://$server/health" > /dev/null 2>&1; then
echo "$(date): ✅ $server - Healthy" >> $LOG_FILE
return 0
else
echo "$(date): ❌ $server - Unhealthy" >> $LOG_FILE
return 1
fi
}
check_tcp_connectivity() {
local server=$1
local host=$(echo $server | cut -d: -f1)
local port=$(echo $server | cut -d: -f2)
if timeout 5 bash -c "</dev/tcp/$host/$port" 2>/dev/null; then
return 0
else
return 1
fi
}
update_nginx_config() {
local action=$1
local server=$2
case $action in
"disable")
# Add 'down' parameter to server
sed -i "s|server $server |server $server down |g" /etc/nginx/conf.d/health-checks.conf
;;
"enable")
# Remove 'down' parameter from server
sed -i "s|server $server down |server $server |g" /etc/nginx/conf.d/health-checks.conf
;;
esac
# Test configuration before reloading
if nginx -t; then
systemctl reload nginx
echo "$(date): 🔄 Nginx configuration reloaded" >> $LOG_FILE
else
echo "$(date): ❌ Nginx configuration test failed" >> $LOG_FILE
fi
}
main_health_check() {
echo "$(date): 🏥 Starting health check cycle" >> $LOG_FILE
for server in "${BACKEND_SERVERS[@]}"; do
if check_backend_health "$server"; then
# Server is healthy - ensure it's enabled
if grep -q "server $server down" /etc/nginx/conf.d/health-checks.conf; then
echo "$(date): 🟢 Re-enabling healthy server: $server" >> $LOG_FILE
update_nginx_config "enable" "$server"
fi
else
# Server is unhealthy - check if we should disable it
if ! grep -q "server $server down" /etc/nginx/conf.d/health-checks.conf; then
echo "$(date): 🔴 Disabling unhealthy server: $server" >> $LOG_FILE
update_nginx_config "disable" "$server"
# Send alert (customize as needed)
echo "Server $server failed health check at $(date)" | \
mail -s "Load Balancer Alert: Server Down" [email protected] 2>/dev/null || true
fi
fi
done
echo "$(date): ✅ Health check cycle completed" >> $LOG_FILE
}
# Run health check
main_health_check
EOF
sudo chmod +x /usr/local/bin/nginx-health-checker.sh
# Create systemd service for health checks
sudo tee /etc/systemd/system/nginx-health-checker.service << 'EOF'
[Unit]
Description=Nginx Backend Health Checker
After=network.target nginx.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/nginx-health-checker.sh
# Runs as root: the script rewrites /etc/nginx and reloads the service
User=root
[Install]
WantedBy=multi-user.target
EOF
# Create timer for regular health checks
sudo tee /etc/systemd/system/nginx-health-checker.timer << 'EOF'
[Unit]
Description=Run Nginx Health Checker every 30 seconds
Requires=nginx-health-checker.service
[Timer]
OnCalendar=*:*:0/30
Persistent=true
[Install]
WantedBy=timers.target
EOF
# Enable and start health checker
sudo systemctl enable nginx-health-checker.timer
sudo systemctl start nginx-health-checker.timer
🛡️ Step 5: SSL Termination and Security Features
Implement advanced SSL termination and security features for your load balancer!
# Create SSL termination configuration
sudo tee /etc/nginx/conf.d/ssl-termination.conf << 'EOF'
# SSL Termination Load Balancer
# SSL configuration for multiple domains
# (variables in ssl_certificate require nginx 1.15.9+; with one server
#  block per domain, SNI already selects the matching certificate)
map $ssl_server_name $ssl_certificate {
default /etc/ssl/certs/default.crt;
example.com /etc/ssl/certs/example.com.crt;
api.example.com /etc/ssl/certs/api.example.com.crt;
admin.example.com /etc/ssl/certs/admin.example.com.crt;
}
map $ssl_server_name $ssl_certificate_key {
default /etc/ssl/private/default.key;
example.com /etc/ssl/private/example.com.key;
api.example.com /etc/ssl/private/api.example.com.key;
admin.example.com /etc/ssl/private/admin.example.com.key;
}
# Main SSL termination server
server {
listen 443 ssl http2;
server_name example.com www.example.com;
# Dynamic SSL certificate selection
ssl_certificate $ssl_certificate;
ssl_certificate_key $ssl_certificate_key;
# SSL optimization
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# SSL session optimization
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Security headers
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https:; connect-src 'self'; frame-ancestors 'none';" always;
# Perfect Forward Secrecy
ssl_dhparam /etc/ssl/certs/dhparam.pem;
location / {
proxy_pass http://backend_servers;
# SSL headers for backend
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Host $host;
# SSL-specific headers
proxy_set_header X-SSL-Client-Cert $ssl_client_cert;
proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;
proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
proxy_set_header X-SSL-Protocol $ssl_protocol;
proxy_set_header X-SSL-Cipher $ssl_cipher;
}
}
# API SSL termination with client certificate authentication
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/ssl/certs/api.example.com.crt;
ssl_certificate_key /etc/ssl/private/api.example.com.key;
# Client certificate authentication
ssl_client_certificate /etc/ssl/certs/client-ca.crt;
ssl_verify_client optional;
# API-specific SSL settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_timeout 5m;
location / {
# Require client certificate for sensitive API
if ($ssl_client_verify != SUCCESS) {
return 403;
}
proxy_pass http://api_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-SSL-Client-Cert $ssl_client_cert;
proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;
}
}
# Admin SSL with additional security
server {
listen 443 ssl http2;
server_name admin.example.com;
ssl_certificate /etc/ssl/certs/admin.example.com.crt;
ssl_certificate_key /etc/ssl/private/admin.example.com.key;
# Restrict admin access by IP
allow 192.168.1.0/24;
allow 10.0.0.0/8;
deny all;
# Additional security for admin
add_header X-Frame-Options "DENY" always;
add_header X-Robots-Tag "noindex, nofollow" always;
location / {
proxy_pass http://backend_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Admin-specific authentication headers
proxy_set_header X-Admin-Access "true";
proxy_set_header X-Access-Level "admin";
}
}
EOF
# Generate strong DH parameters
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
# Create SSL optimization script
sudo tee /usr/local/bin/optimize-ssl.sh << 'EOF'
#!/bin/bash
# SSL Optimization Script
echo "🔐 Optimizing SSL configuration..."
# Test SSL configuration
echo "Testing SSL configuration..."
nginx -t
if [ $? -eq 0 ]; then
echo "✅ SSL configuration is valid"
# Test SSL certificate validity
for cert in /etc/ssl/certs/*.crt; do
if [ -f "$cert" ]; then
domain=$(basename "$cert" .crt)
echo "🔍 Checking certificate: $domain"
# Check certificate expiration
expiry=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
expiry_epoch=$(date -d "$expiry" +%s)
current_epoch=$(date +%s)
days_left=$(( (expiry_epoch - current_epoch) / 86400 ))
if [ $days_left -lt 30 ]; then
echo "⚠️ Certificate $domain expires in $days_left days"
else
echo "✅ Certificate $domain is valid for $days_left days"
fi
fi
done
# Reload Nginx to apply changes
systemctl reload nginx
echo "✅ Nginx configuration reloaded"
else
echo "❌ SSL configuration has errors"
exit 1
fi
EOF
sudo chmod +x /usr/local/bin/optimize-ssl.sh
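The expiry arithmetic in the script is worth seeing in isolation: openssl prints an `notAfter` date, and the days remaining fall out of simple epoch subtraction. A self-contained sketch (GNU date assumed, fixed example date):

```shell
# Days until a certificate expiry date (GNU date on AlmaLinux)
days_until() {  # $1 = expiry string, e.g. "Dec 31 23:59:59 2030 GMT"
    local expiry_epoch current_epoch
    expiry_epoch=$(date -d "$1" +%s)
    current_epoch=$(date +%s)
    echo $(( (expiry_epoch - current_epoch) / 86400 ))
}

days_until "Dec 31 23:59:59 2030 GMT"
```

In the real script the input comes from `openssl x509 -enddate -noout -in "$cert" | cut -d= -f2`, which produces exactly this date format.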
🎮 Quick Examples - Load Balancer Testing and Validation
Let’s create practical examples for testing and validating your load balancer! 🚀
Example 1: Load Balancer Testing Suite
# Create the scripts directory and the load balancer testing script
sudo mkdir -p /opt/nginx/scripts
sudo tee /opt/nginx/scripts/test-load-balancer.sh << 'EOF'
#!/bin/bash
# Load Balancer Testing Suite
TARGET_URL="https://example.com"
LB_IP="192.168.1.100"
TEST_RESULTS="/tmp/lb-test-results.log"
echo "⚖️ Load Balancer Testing Suite" | tee $TEST_RESULTS
echo "==============================" | tee -a $TEST_RESULTS
echo "Target: $TARGET_URL" | tee -a $TEST_RESULTS
echo "Load Balancer IP: $LB_IP" | tee -a $TEST_RESULTS
echo "Started: $(date)" | tee -a $TEST_RESULTS
echo "" | tee -a $TEST_RESULTS
test_basic_connectivity() {
echo "🔌 Testing Basic Connectivity..." | tee -a $TEST_RESULTS
# Test HTTP redirect
http_response=$(curl -s -o /dev/null -w "%{http_code}" "http://$LB_IP")
if [ "$http_response" -eq 301 ] || [ "$http_response" -eq 302 ]; then
echo " ✅ HTTP to HTTPS redirect working (HTTP $http_response)" | tee -a $TEST_RESULTS
else
echo " ❌ HTTP redirect not working (HTTP $http_response)" | tee -a $TEST_RESULTS
fi
# Test HTTPS connectivity
https_response=$(curl -s -o /dev/null -w "%{http_code}" -k "$TARGET_URL")
if [ "$https_response" -eq 200 ]; then
echo " ✅ HTTPS connectivity working (HTTP $https_response)" | tee -a $TEST_RESULTS
else
echo " ❌ HTTPS connectivity failed (HTTP $https_response)" | tee -a $TEST_RESULTS
fi
echo "" | tee -a $TEST_RESULTS
}
test_load_distribution() {
echo "📊 Testing Load Distribution..." | tee -a $TEST_RESULTS
echo " Sending 100 requests to analyze distribution..." | tee -a $TEST_RESULTS
# Track which backend servers respond
declare -A server_counts
for i in {1..100}; do
# Extract backend server from response headers (assumes each backend
# adds an X-Backend-Server header, e.g. via add_header)
backend=$(curl -s -k -I "$TARGET_URL" | grep -i "x-backend-server" | cut -d: -f2 | tr -d ' \r')
if [ -n "$backend" ]; then
((server_counts["$backend"]++))
fi
sleep 0.1
done
echo " Distribution results:" | tee -a $TEST_RESULTS
for server in "${!server_counts[@]}"; do
count=${server_counts[$server]}
percentage=$count # 100 requests total, so the count is the percentage
echo " $server: $count requests ($percentage%)" | tee -a $TEST_RESULTS
done
echo "" | tee -a $TEST_RESULTS
}
test_ssl_termination() {
echo "🔐 Testing SSL Termination..." | tee -a $TEST_RESULTS
# Test SSL certificate
ssl_info=$(echo | openssl s_client -connect $LB_IP:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -dates)
if [ $? -eq 0 ]; then
echo " ✅ SSL certificate is valid" | tee -a $TEST_RESULTS
echo " Certificate info:" | tee -a $TEST_RESULTS
echo "$ssl_info" | sed 's/^/ /' | tee -a $TEST_RESULTS
else
echo " ❌ SSL certificate validation failed" | tee -a $TEST_RESULTS
fi
# Test SSL protocols
for protocol in "tls1_2" "tls1_3"; do
if echo | openssl s_client -connect $LB_IP:443 -$protocol 2>/dev/null | grep -q "Protocol.*TLS"; then
echo " ✅ $protocol is supported" | tee -a $TEST_RESULTS
else
echo " ❌ $protocol is not supported" | tee -a $TEST_RESULTS
fi
done
echo "" | tee -a $TEST_RESULTS
}
test_health_checks() {
echo "🏥 Testing Health Check Functionality..." | tee -a $TEST_RESULTS
# Test health endpoint
health_response=$(curl -s -o /dev/null -w "%{http_code}" "$TARGET_URL/health")
if [ "$health_response" -eq 200 ]; then
echo " ✅ Health check endpoint working (HTTP $health_response)" | tee -a $TEST_RESULTS
else
echo " ❌ Health check endpoint failed (HTTP $health_response)" | tee -a $TEST_RESULTS
fi
# Test failover (simulate backend failure)
echo " Testing failover scenario..." | tee -a $TEST_RESULTS
# The real test is stopping one backend and re-running the distribution
# check; here we simply count backend servers configured in nginx.conf
backend_count=$(grep -cE "^[[:space:]]*server[[:space:]]+[0-9]" /etc/nginx/nginx.conf)
if [ $backend_count -gt 1 ]; then
echo " ✅ Multiple backends configured for failover" | tee -a $TEST_RESULTS
else
echo " ⚠️ Only one backend detected" | tee -a $TEST_RESULTS
fi
echo "" | tee -a $TEST_RESULTS
}
test_performance() {
echo "⚡ Testing Performance..." | tee -a $TEST_RESULTS
# Test response times
echo " Measuring response times (10 requests):" | tee -a $TEST_RESULTS
total_time=0
for i in {1..10}; do
response_time=$(curl -s -o /dev/null -w "%{time_total}" -k "$TARGET_URL")
total_time=$(echo "$total_time + $response_time" | bc)
echo " Request $i: ${response_time}s" | tee -a $TEST_RESULTS
done
avg_time=$(echo "scale=3; $total_time / 10" | bc)
echo " Average response time: ${avg_time}s" | tee -a $TEST_RESULTS
# Test concurrent connections
echo " Testing concurrent connections (50 parallel requests):" | tee -a $TEST_RESULTS
start_time=$(date +%s.%N)
for i in {1..50}; do
curl -s -o /dev/null -k "$TARGET_URL" &
done
wait
end_time=$(date +%s.%N)
duration=$(echo "$end_time - $start_time" | bc)
rps=$(echo "scale=2; 50 / $duration" | bc)
echo " 50 concurrent requests completed in ${duration}s" | tee -a $TEST_RESULTS
echo " Requests per second: $rps" | tee -a $TEST_RESULTS
echo "" | tee -a $TEST_RESULTS
}
test_security_headers() {
echo "🛡️ Testing Security Headers..." | tee -a $TEST_RESULTS
headers=$(curl -s -k -I "$TARGET_URL")
security_headers=(
"Strict-Transport-Security"
"X-Frame-Options"
"X-Content-Type-Options"
"X-XSS-Protection"
"Referrer-Policy"
"Content-Security-Policy"
)
for header in "${security_headers[@]}"; do
if echo "$headers" | grep -qi "$header"; then
echo " ✅ $header header present" | tee -a $TEST_RESULTS
else
echo " ❌ $header header missing" | tee -a $TEST_RESULTS
fi
done
echo "" | tee -a $TEST_RESULTS
}
# Run all tests
test_basic_connectivity
test_load_distribution
test_ssl_termination
test_health_checks
test_performance
test_security_headers
echo "🎉 Load balancer testing completed!" | tee -a $TEST_RESULTS
echo "Results saved to: $TEST_RESULTS" | tee -a $TEST_RESULTS
echo "Completed: $(date)" | tee -a $TEST_RESULTS
EOF
sudo chmod +x /opt/nginx/scripts/test-load-balancer.sh
Example 2: Real-time Load Balancer Monitoring
# Create real-time monitoring dashboard
sudo mkdir -p /opt/nginx/scripts
sudo tee /opt/nginx/scripts/lb-monitor.sh << 'EOF'
#!/bin/bash
# Real-time Load Balancer Monitoring
monitor_load_balancer() {
while true; do
clear
echo "⚖️ Load Balancer Real-time Monitor"
echo "=================================="
echo "Monitoring started: $(date)"
echo ""
# Nginx process status
echo "🖥️ Nginx Status:"
echo "================"
if systemctl is-active --quiet nginx; then
echo "✅ Nginx: Running"
nginx_pid=$(pgrep -f "nginx: master")
nginx_workers=$(pgrep -f "nginx: worker" | wc -l)
echo " Master PID: $nginx_pid"
echo " Worker processes: $nginx_workers"
else
echo "❌ Nginx: Stopped"
fi
echo ""
# Connection statistics
echo "🌐 Connection Statistics:"
echo "========================"
if command -v ss >/dev/null 2>&1; then
# Match the local-address column exactly so :8080 and remote ports are not counted
http_conn=$(ss -Htn | awk '$4 ~ /:80$/' | wc -l)
https_conn=$(ss -Htn | awk '$4 ~ /:443$/' | wc -l)
echo "HTTP connections: $http_conn"
echo "HTTPS connections: $https_conn"
echo "Total connections: $((http_conn + https_conn))"
else
echo "ss command not available"
fi
echo ""
# Backend server status
echo "🖥️ Backend Server Status:"
echo "=========================="
# Parse upstream configuration to get server list
if [ -f "/etc/nginx/conf.d/load-balancer.conf" ]; then
servers=$(grep -E "server [0-9]+\." /etc/nginx/conf.d/load-balancer.conf | awk '{print $2}' | cut -d: -f1 | sort -u)
for server in $servers; do
if ping -c 1 -W 1 "$server" >/dev/null 2>&1; then
echo "✅ $server: Reachable"
else
echo "❌ $server: Unreachable"
fi
done
else
echo "Configuration file not found"
fi
echo ""
# Request statistics from access logs
echo "📊 Request Statistics (Last 5 minutes):"
echo "======================================="
if [ -f "/var/log/nginx/access.log" ]; then
last_5min=$(date -d "5 minutes ago" "+%d/%b/%Y:%H:%M")
# No upper bound needed: nothing in a live log is newer than "now", and dropping it avoids excluding requests from the current minute
total_requests=$(awk -v start="$last_5min" '$4 >= "["start' /var/log/nginx/access.log | wc -l)
status_200=$(awk -v start="$last_5min" '$4 >= "["start && $9 == "200"' /var/log/nginx/access.log | wc -l)
status_4xx=$(awk -v start="$last_5min" '$4 >= "["start && $9 ~ /^4/' /var/log/nginx/access.log | wc -l)
status_5xx=$(awk -v start="$last_5min" '$4 >= "["start && $9 ~ /^5/' /var/log/nginx/access.log | wc -l)
echo "Total requests: $total_requests"
echo "200 (Success): $status_200"
echo "4xx (Client Error): $status_4xx"
echo "5xx (Server Error): $status_5xx"
if [ $total_requests -gt 0 ]; then
success_rate=$(( status_200 * 100 / total_requests ))
echo "Success rate: $success_rate%"
fi
else
echo "Access log not found"
fi
echo ""
# System resources
echo "💻 System Resources:"
echo "==================="
echo "CPU (user): $(top -bn1 | grep "Cpu(s)" | awk '{print $2}')%"
echo "Memory: $(free -m | awk 'NR==2{printf "%.1f%% (%sMi/%sMi)", $3*100/$2, $3, $2}')"
echo "Load Average: $(uptime | awk -F'load average:' '{print $2}')"
echo ""
# Top client IPs
echo "🎯 Top Client IPs (Last hour):"
echo "=============================="
if [ -f "/var/log/nginx/access.log" ]; then
awk -v hour="$(date -d '1 hour ago' '+%d/%b/%Y:%H')" '$4 ~ hour {print $1}' /var/log/nginx/access.log | \
sort | uniq -c | sort -nr | head -5 | \
while read count ip; do
echo "$ip: $count requests"
done
else
echo "No access log data available"
fi
echo ""
echo "🔄 Refreshing in 10 seconds... (Ctrl+C to exit)"
sleep 10
done
}
# Start monitoring
monitor_load_balancer
EOF
chmod +x /opt/nginx/scripts/lb-monitor.sh
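The grep/awk pipeline that extracts backend addresses is worth exercising against a scratch config before trusting it in the monitor. Everything in this sketch (the upstream name, the addresses, the temp file) is illustrative:

```shell
#!/bin/sh
# Sketch of the upstream-parsing pipeline from lb-monitor.sh, pointed at a
# throwaway config instead of /etc/nginx/conf.d/load-balancer.conf.
conf=$(mktemp)
cat > "$conf" << 'CFG'
upstream backend_pool {
    server 192.168.1.10:8080 weight=3;
    server 192.168.1.11:8080;
    server 192.168.1.10:8081;
}
CFG
# Same pipeline as the monitor: match "server <ip>", keep the address,
# strip the port, and de-duplicate hosts listed on several ports
servers=$(grep -E "server [0-9]+\." "$conf" | awk '{print $2}' | cut -d: -f1 | sort -u)
echo "$servers"
rm -f "$conf"
```

The `sort -u` step collapses the two 192.168.1.10 entries, so the monitor pings each backend host once regardless of how many ports it serves.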
Example 3: Automated Load Balancer Maintenance
# Create automated maintenance system
cat > /opt/nginx/scripts/lb-maintenance.sh << 'EOF'
#!/bin/bash
# Automated Load Balancer Maintenance
MAINTENANCE_LOG="/var/log/nginx/lb-maintenance.log"
BACKUP_DIR="/opt/nginx/backups"
CONFIG_DIR="/etc/nginx"
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $MAINTENANCE_LOG
}
backup_configuration() {
log_message "💾 Backing up load balancer configuration..."
backup_date=$(date +%Y%m%d_%H%M%S)
backup_path="$BACKUP_DIR/nginx_config_$backup_date"
mkdir -p "$backup_path"
# Backup configuration files
cp -r $CONFIG_DIR/* "$backup_path/"
# Backup SSL certificates
mkdir -p "$backup_path/ssl"
cp -r /etc/ssl/certs "$backup_path/ssl/" 2>/dev/null || true
cp -r /etc/ssl/private "$backup_path/ssl/" 2>/dev/null || true
# Create manifest
cat > "$backup_path/manifest.txt" << MANIFEST_EOF
Load Balancer Configuration Backup
==================================
Backup Date: $(date)
Nginx Version: $(nginx -v 2>&1)
System: $(uname -a)
Configuration Files:
- Main config: nginx.conf
- Virtual hosts: conf.d/*.conf
- SSL certificates and keys
Restore Instructions:
1. Stop Nginx: systemctl stop nginx
2. Restore configs: cp -r backup/* /etc/nginx/
3. Test config: nginx -t
4. Start Nginx: systemctl start nginx
MANIFEST_EOF
# Compress backup
tar -czf "$backup_path.tar.gz" -C "$BACKUP_DIR" "nginx_config_$backup_date"
rm -rf "$backup_path"
# Keep only last 10 backups
ls -t "$BACKUP_DIR"/nginx_config_*.tar.gz 2>/dev/null | tail -n +11 | xargs -r rm -f
log_message "✅ Configuration backup completed: $backup_path.tar.gz"
}
rotate_logs() {
log_message "🗂️ Rotating load balancer logs..."
# Rotate access logs
if [ -f "/var/log/nginx/access.log" ]; then
mv /var/log/nginx/access.log /var/log/nginx/access.log.$(date +%Y%m%d)
touch /var/log/nginx/access.log
chown nginx:nginx /var/log/nginx/access.log
chmod 644 /var/log/nginx/access.log
fi
# Rotate error logs
if [ -f "/var/log/nginx/error.log" ]; then
mv /var/log/nginx/error.log /var/log/nginx/error.log.$(date +%Y%m%d)
touch /var/log/nginx/error.log
chown nginx:nginx /var/log/nginx/error.log
chmod 644 /var/log/nginx/error.log
fi
# Compress old logs
find /var/log/nginx -name "*.log.*" -mtime +1 -exec gzip {} \;
# Remove very old logs
find /var/log/nginx -name "*.log.*.gz" -mtime +30 -delete
# Reload Nginx to reopen log files
systemctl reload nginx
log_message "✅ Log rotation completed"
}
check_ssl_certificates() {
log_message "🔐 Checking SSL certificate expiration..."
for cert_file in /etc/ssl/certs/*.crt; do
if [ -f "$cert_file" ]; then
domain=$(basename "$cert_file" .crt)
# Check certificate expiration
expiry_date=$(openssl x509 -enddate -noout -in "$cert_file" | cut -d= -f2)
expiry_epoch=$(date -d "$expiry_date" +%s)
current_epoch=$(date +%s)
days_left=$(( (expiry_epoch - current_epoch) / 86400 ))
if [ $days_left -lt 30 ]; then
log_message "⚠️ Certificate $domain expires in $days_left days"
# Send alert email (customize as needed)
echo "SSL certificate for $domain expires in $days_left days" | \
mail -s "SSL Certificate Expiration Warning" admin@example.com 2>/dev/null || true
else
log_message "✅ Certificate $domain is valid for $days_left days"
fi
fi
done
}
optimize_performance() {
log_message "⚡ Optimizing load balancer performance..."
# Check current system load
load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
cpu_count=$(nproc)
load_threshold=$(echo "$cpu_count * 0.8" | bc)
if (( $(echo "$load_avg > $load_threshold" | bc -l) )); then
log_message "⚠️ High system load detected ($load_avg)"
# Temporarily reduce worker connections if load is high
current_connections=$(grep -o "worker_connections [0-9]*" /etc/nginx/nginx.conf | awk '{print $2}')
if [ "${current_connections:-0}" -gt 4096 ]; then
sed -i 's/worker_connections [0-9]*/worker_connections 4096/' /etc/nginx/nginx.conf
nginx -t && systemctl reload nginx
log_message "🔧 Reduced worker connections to 4096 due to high load"
# Schedule restoration of original settings
echo "sed -i 's/worker_connections 4096/worker_connections $current_connections/' /etc/nginx/nginx.conf && systemctl reload nginx" | at now + 1 hour
fi
fi
# Clear old cache files if they exist
if [ -d "/var/cache/nginx" ]; then
find /var/cache/nginx -type f -mtime +7 -delete
log_message "🧹 Cleared old cache files"
fi
log_message "✅ Performance optimization completed"
}
health_check_maintenance() {
log_message "🏥 Running health check maintenance..."
# Check if health check service is running
if systemctl is-active --quiet nginx-health-checker.timer; then
log_message "✅ Health checker service is running"
else
log_message "❌ Health checker service is not running, restarting..."
systemctl start nginx-health-checker.timer
fi
# Analyze health check logs
if [ -f "/var/log/nginx/health-check.log" ]; then
unhealthy_count=$(tail -n 100 /var/log/nginx/health-check.log | grep -c "Unhealthy")
if [ $unhealthy_count -gt 10 ]; then
log_message "⚠️ High number of health check failures detected: $unhealthy_count"
fi
fi
log_message "✅ Health check maintenance completed"
}
generate_status_report() {
report_file="/opt/nginx/reports/weekly_report_$(date +%Y%m%d).txt"
mkdir -p "$(dirname $report_file)"
cat > "$report_file" << REPORT_EOF
Load Balancer Weekly Status Report
==================================
Report Date: $(date)
Reporting Period: $(date -d '7 days ago') to $(date)
System Status:
- Nginx Version: $(nginx -v 2>&1)
- Uptime: $(uptime)
- System Load: $(uptime | awk -F'load average:' '{print $2}')
Configuration:
- Main config file: $(stat -c '%y' /etc/nginx/nginx.conf)
- Number of virtual hosts: $(ls -1 /etc/nginx/conf.d/*.conf | wc -l)
- SSL certificates: $(ls -1 /etc/ssl/certs/*.crt 2>/dev/null | wc -l)
Recent Activity:
- Total requests (last 7 days): $(wc -l /var/log/nginx/access.log* 2>/dev/null | tail -1 | awk '{print $1}')
- Average response time: $(awk '{print $NF}' /var/log/nginx/access.log | head -1000 | awk '{sum+=$1} END {print sum/NR}' 2>/dev/null | cut -c1-4)s
- Top 5 client IPs:
$(awk '{print $1}' /var/log/nginx/access.log* 2>/dev/null | sort | uniq -c | sort -nr | head -5)
Maintenance Activities:
- Last backup: $(ls -t $BACKUP_DIR/nginx_config_*.tar.gz 2>/dev/null | head -1)
- Log rotation: Completed $(date)
- SSL certificate check: Completed $(date)
REPORT_EOF
log_message "📋 Status report generated: $report_file"
}
# Main maintenance routine
case "$1" in
daily)
log_message "🌅 Starting daily load balancer maintenance..."
rotate_logs
check_ssl_certificates
optimize_performance
health_check_maintenance
log_message "✅ Daily maintenance completed"
;;
weekly)
log_message "📅 Starting weekly load balancer maintenance..."
backup_configuration
rotate_logs
check_ssl_certificates
optimize_performance
health_check_maintenance
generate_status_report
log_message "✅ Weekly maintenance completed"
;;
*)
echo "Usage: $0 {daily|weekly}"
echo " daily - Run daily maintenance tasks"
echo " weekly - Run weekly maintenance tasks"
exit 1
;;
esac
EOF
chmod +x /opt/nginx/scripts/lb-maintenance.sh
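The days-left arithmetic in `check_ssl_certificates` can be verified against a disposable self-signed certificate rather than your production certs; the CN and the 90-day validity below are arbitrary choices for the test:

```shell
#!/bin/sh
# Sketch of the expiry calculation from check_ssl_certificates, run against
# a disposable self-signed certificate rather than /etc/ssl/certs/*.crt.
cert=$(mktemp)
key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -keyout "$key" -out "$cert" \
    -days 90 -nodes -subj "/CN=lb.example.com" 2>/dev/null
expiry_date=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
expiry_epoch=$(date -d "$expiry_date" +%s)
current_epoch=$(date +%s)
# Integer division truncates, so a cert issued "now" for 90 days reports 89
days_left=$(( (expiry_epoch - current_epoch) / 86400 ))
echo "days_left=$days_left"
rm -f "$cert" "$key"
```

Note that `date -d` is GNU-specific; it works on AlmaLinux but not on BSD/macOS `date`.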
# Set up automated maintenance schedule
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/nginx/scripts/lb-maintenance.sh daily") | crontab -
(crontab -l 2>/dev/null; echo "0 3 * * 0 /opt/nginx/scripts/lb-maintenance.sh weekly") | crontab -
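One caveat: the append-to-crontab pattern above is not idempotent, so re-running the setup adds duplicate entries. The sketch below demonstrates the problem with plain text instead of a live crontab, then shows the guarded merge (filter the line out, then add it back) that keeps exactly one copy:

```shell
#!/bin/sh
# Demonstrates why plain appending duplicates crontab entries, using text
# pipelines only -- no live crontab is touched.
entry="0 2 * * * /opt/nginx/scripts/lb-maintenance.sh daily"
# Naive append run twice: two copies of the same entry
table=$(printf '%s\n%s\n' "$entry" "$entry")
naive_count=$(printf '%s\n' "$table" | grep -Fxc "$entry")
# Guarded merge: strip any existing copies, then add exactly one back
deduped=$( { printf '%s\n' "$table" | grep -Fxv "$entry"; echo "$entry"; } )
dedup_count=$(printf '%s\n' "$deduped" | grep -Fxc "$entry")
echo "naive=$naive_count deduped=$dedup_count"
```

To apply the guarded variant for real, pipe `crontab -l 2>/dev/null | grep -Fxv "$entry"` together with `echo "$entry"` back into `crontab -`.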