Memory issues in Alpine Linux can cause system instability, application crashes, and performance degradation. This comprehensive guide provides systematic approaches to diagnose, resolve, and prevent memory-related problems in Alpine Linux environments.
🔍 Understanding Alpine Linux Memory Management
Alpine Linux uses the Linux kernel’s memory management system with lightweight userspace components, making it efficient but requiring careful monitoring and tuning for optimal performance.
Memory Architecture Overview
- Physical Memory (RAM) - System hardware memory 🖥️
- Virtual Memory - Process address space abstraction 🔄
- Swap Space - Disk-based memory extension 💾
- Page Cache - File system cache in memory 📄
- Buffer Cache - Block device cache 🔧
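Most commands in this guide (ps aux --sort, free, vmstat) assume the full procps tools; busybox's built-in applets accept fewer options. A quick, illustrative way to see the categories above on a running system:
# Install the full procps tools (package name is procps, procps-ng on newer releases)
apk add procps
# RAM, buffers/cache and swap in one view
free -h
# si/so columns show swap-in/out activity; sustained non-zero values indicate memory pressure
vmstat 1 5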
Common Memory Problem Types
# Memory pressure indicators
cat /proc/meminfo | grep -E "(MemTotal|MemFree|MemAvailable|Buffers|Cached|SwapTotal|SwapFree)"
# Out of Memory (OOM) events
dmesg | grep -i "killed process"
grep -i "out of memory" /var/log/messages    # Alpine logs via syslogd, not journald
# Memory fragmentation
cat /proc/buddyinfo
cat /proc/pagetypeinfo
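Each row of /proc/buddyinfo lists, per memory zone, the number of free blocks of order 0 through 10 (a block of order N is 2^N contiguous pages). A minimal sketch for spotting fragmentation by flagging zones with few high-order blocks left:
# Sum free blocks of order >= 5 (>= 128 KiB with 4 KiB pages) for each zone
awk '{sub(",","",$2); h=0; for(i=10;i<=NF;i++) h+=$i; printf "node %s zone %-8s high-order free blocks: %d\n", $2, $4, h}' /proc/buddyinfo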
🚨 Diagnosing Memory Issues
System Memory Analysis
# Comprehensive memory analysis script
cat > /usr/local/bin/analyze-memory << 'EOF'
#!/bin/sh
# Alpine Linux memory analysis tool
echo "=== Alpine Linux Memory Analysis ==="
echo "Date: $(date)"
echo "Hostname: $(hostname)"
echo ""
# Basic memory information
echo "1. Memory Overview:"
echo "==================="
free -h
echo ""
# Detailed memory statistics
echo "2. Detailed Memory Statistics:"
echo "=============================="
cat /proc/meminfo | head -20
echo ""
# Memory usage by process
echo "3. Top Memory Consumers:"
echo "======================="
ps aux --sort=-%mem | head -10
echo ""
# Memory maps for high-usage processes
echo "4. Memory Maps Analysis:"
echo "======================="
for pid in $(ps aux --sort=-%mem --no-headers | head -5 | awk '{print $2}'); do
if [ -f "/proc/$pid/status" ]; then
echo "Process PID $pid:"
grep -E "(Name|VmSize|VmRSS|VmData|VmStk|VmExe)" /proc/$pid/status 2>/dev/null
echo ""
fi
done
# Swap usage analysis
echo "5. Swap Analysis:"
echo "================"
if [ -f /proc/swaps ]; then
cat /proc/swaps
echo ""
# Swap usage by process
echo "Processes using swap:"
for pid in $(ps -eo pid --no-headers); do
if [ -f "/proc/$pid/status" ]; then
swap_kb=$(grep VmSwap /proc/$pid/status 2>/dev/null | awk '{print $2}')
if [ -n "$swap_kb" ] && [ "$swap_kb" -gt 0 ]; then
process_name=$(grep Name /proc/$pid/status 2>/dev/null | awk '{print $2}')
echo " PID $pid ($process_name): ${swap_kb} kB"
fi
fi
done | sort -k4 -nr | head -10
else
echo "No swap configured"
fi
echo ""
# Memory fragmentation analysis
echo "6. Memory Fragmentation:"
echo "======================="
cat /proc/buddyinfo
echo ""
# OOM killer events
echo "7. Recent OOM Events:"
echo "===================="
dmesg | grep -i "killed process" | tail -5
echo ""
# Memory pressure indicators
echo "8. Memory Pressure Indicators:"
echo "=============================="
echo "Available memory: $(cat /proc/meminfo | grep MemAvailable | awk '{print $2 " " $3}')"
echo "Free memory: $(cat /proc/meminfo | grep MemFree | awk '{print $2 " " $3}')"
echo "Cached memory: $(cat /proc/meminfo | grep "^Cached" | awk '{print $2 " " $3}')"
echo "Buffer memory: $(cat /proc/meminfo | grep Buffers | awk '{print $2 " " $3}')"
# Calculate memory pressure percentage
total_mem=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
avail_mem=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
pressure=$(( (total_mem - avail_mem) * 100 / total_mem ))
echo "Memory pressure: ${pressure}%"
if [ $pressure -gt 90 ]; then
echo "⚠️ CRITICAL: High memory pressure detected!"
elif [ $pressure -gt 80 ]; then
echo "⚠️ WARNING: Elevated memory pressure"
else
echo "✅ Memory pressure normal"
fi
echo ""
echo "Analysis completed at $(date)"
EOF
chmod +x /usr/local/bin/analyze-memory
# Run memory analysis
analyze-memory
OOM Killer Investigation
# OOM killer analysis and prevention
cat > /usr/local/bin/investigate-oom << 'EOF'
#!/bin/sh
# OOM killer investigation tool
OOM_LOG="/var/log/oom-events.log"
echo "=== OOM Killer Investigation ==="
echo "Date: $(date)"
echo ""
# Check for recent OOM events
echo "1. Recent OOM Events:"
echo "===================="
dmesg | grep -i "out of memory\|killed process" | tail -10
echo ""
# Analyze OOM score for running processes
echo "2. Current OOM Scores:"
echo "====================="
printf "%-8s %-8s %-8s %-20s %s\n" "PID" "OOM_SCORE" "OOM_ADJ" "COMMAND" "RSS(MB)"
echo "================================================================"
for pid in $(ps -eo pid --no-headers); do
if [ -f "/proc/$pid/oom_score" ]; then
oom_score=$(cat /proc/$pid/oom_score 2>/dev/null)
oom_adj=$(cat /proc/$pid/oom_score_adj 2>/dev/null)
cmd=$(ps -p $pid -o comm= 2>/dev/null)
rss_kb=$(grep VmRSS /proc/$pid/status 2>/dev/null | awk '{print $2}')
rss_mb=$(( ${rss_kb:-0} / 1024 ))
if [ -n "$oom_score" ] && [ "$oom_score" -gt 0 ]; then
printf "%-8s %-8s %-8s %-20s %s\n" "$pid" "$oom_score" "$oom_adj" "$cmd" "$rss_mb"
fi
fi
done | sort -k2 -nr | head -15
echo ""
# Memory overcommit analysis
echo "3. Memory Overcommit Settings:"
echo "============================="
echo "vm.overcommit_memory: $(cat /proc/sys/vm/overcommit_memory)"
echo "vm.overcommit_ratio: $(cat /proc/sys/vm/overcommit_ratio)"
echo "vm.overcommit_kbytes: $(cat /proc/sys/vm/overcommit_kbytes)"
case "$(cat /proc/sys/vm/overcommit_memory)" in
0) echo "Mode: Heuristic overcommit (default)" ;;
1) echo "Mode: Always overcommit" ;;
2) echo "Mode: Don't overcommit" ;;
esac
echo ""
# Calculate commit limit
commit_limit=$(cat /proc/meminfo | grep CommitLimit | awk '{print $2}')
committed=$(cat /proc/meminfo | grep Committed_AS | awk '{print $2}')
commit_usage=$(( committed * 100 / commit_limit ))
echo "Commit limit: ${commit_limit} kB"
echo "Committed memory: ${committed} kB"
echo "Commit usage: ${commit_usage}%"
if [ $commit_usage -gt 95 ]; then
echo "⚠️ CRITICAL: Near commit limit!"
elif [ $commit_usage -gt 85 ]; then
echo "⚠️ WARNING: High commit usage"
fi
echo ""
# Generate recommendations
echo "4. Recommendations:"
echo "=================="
if [ $commit_usage -gt 90 ]; then
echo "- Add more physical memory or swap space"
echo "- Reduce memory usage by stopping unnecessary services"
echo "- Consider tuning vm.overcommit_ratio"
fi
# busybox/procps ps has no oom_score column, so read the scores from /proc directly
high_oom_processes=$(awk '$1 > 500' /proc/[0-9]*/oom_score 2>/dev/null | wc -l)
if [ $high_oom_processes -gt 0 ]; then
echo "- $high_oom_processes processes have high OOM scores"
echo "- Consider adjusting OOM scores for critical processes"
fi
echo ""
echo "Investigation completed at $(date)"
EOF
chmod +x /usr/local/bin/investigate-oom
# Run OOM investigation
investigate-oom
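If the report flags critical services with high OOM scores, you can make them less attractive to the OOM killer by lowering oom_score_adj. A sketch; sshd is only an example target, and a value of -1000 would exempt the process entirely:
# Lower the OOM score adjustment for a critical daemon (example: sshd)
for pid in $(pidof sshd); do
echo -500 > /proc/$pid/oom_score_adj
done
# Verify the effect
cat /proc/$(pidof -s sshd)/oom_score_adj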
🔧 Memory Issue Resolution Techniques
Memory Leak Detection
# Memory leak detection system
cat > /usr/local/bin/detect-memory-leaks << 'EOF'
#!/bin/sh
# Memory leak detection for Alpine Linux
MONITOR_DURATION="${1:-300}" # 5 minutes default
SAMPLE_INTERVAL="${2:-10}" # 10 seconds default
LEAK_THRESHOLD="${3:-10}" # 10MB growth threshold
echo "Starting memory leak detection..."
echo "Duration: ${MONITOR_DURATION}s, Interval: ${SAMPLE_INTERVAL}s"
echo "Leak threshold: ${LEAK_THRESHOLD}MB"
echo ""
# Create monitoring directory
MONITOR_DIR="/tmp/memory-leak-detection"
mkdir -p "$MONITOR_DIR"
# Initial memory snapshot
ps aux --sort=-%mem | tail -n +2 | head -20 > "$MONITOR_DIR/initial_snapshot"
# Monitor memory usage over time
echo "Monitoring memory usage..."
END_TIME=$(($(date +%s) + MONITOR_DURATION))
while [ $(date +%s) -lt $END_TIME ]; do
TIMESTAMP=$(date +%s)
# Capture memory data
ps aux --sort=-%mem | head -20 > "$MONITOR_DIR/snapshot_$TIMESTAMP"
# Log system memory
echo "$TIMESTAMP $(free | grep Mem | awk '{print $3}')" >> "$MONITOR_DIR/system_memory.log"
sleep $SAMPLE_INTERVAL
done
echo "Monitoring complete. Analyzing results..."
# Analyze for memory leaks
echo ""
echo "Memory Leak Analysis:"
echo "===================="
# Compare initial and final snapshots
while read line; do
PID=$(echo "$line" | awk '{print $2}')
INITIAL_MEM=$(echo "$line" | awk '{print $6}')
CMD=$(echo "$line" | awk '{print $11}')
# Find same process in final snapshot
FINAL_SNAPSHOT=$(ls "$MONITOR_DIR"/snapshot_* | tail -1)
FINAL_MEM=$(grep "^[^ ]* *$PID " "$FINAL_SNAPSHOT" | awk '{print $6}')
if [ -n "$FINAL_MEM" ] && [ "$FINAL_MEM" -gt "$INITIAL_MEM" ]; then
# Convert to MB for easier reading
INITIAL_MB=$((INITIAL_MEM / 1024))
FINAL_MB=$((FINAL_MEM / 1024))
GROWTH_MB=$((FINAL_MB - INITIAL_MB))
if [ $GROWTH_MB -ge $LEAK_THRESHOLD ]; then
echo "⚠️ Potential leak detected:"
echo " PID: $PID"
echo " Command: $CMD"
echo " Initial memory: ${INITIAL_MB}MB"
echo " Final memory: ${FINAL_MB}MB"
echo " Growth: ${GROWTH_MB}MB"
echo ""
# Detailed process analysis
if [ -f "/proc/$PID/status" ]; then
echo " Detailed memory info:"
grep -E "(VmSize|VmRSS|VmData|VmStk)" /proc/$PID/status | sed 's/^/ /'
echo ""
fi
fi
fi
done < "$MONITOR_DIR/initial_snapshot"
# System memory trend
echo "System Memory Trend:"
echo "==================="
if [ -f "$MONITOR_DIR/system_memory.log" ]; then
INITIAL_SYS=$(head -1 "$MONITOR_DIR/system_memory.log" | awk '{print $2}')
FINAL_SYS=$(tail -1 "$MONITOR_DIR/system_memory.log" | awk '{print $2}')
SYS_GROWTH_KB=$((FINAL_SYS - INITIAL_SYS))
SYS_GROWTH_MB=$((SYS_GROWTH_KB / 1024))
echo "Initial system memory usage: $((INITIAL_SYS / 1024))MB"
echo "Final system memory usage: $((FINAL_SYS / 1024))MB"
echo "System memory growth: ${SYS_GROWTH_MB}MB"
fi
# Cleanup
rm -rf "$MONITOR_DIR"
echo ""
echo "Memory leak detection completed"
EOF
chmod +x /usr/local/bin/detect-memory-leaks
# Usage examples:
# detect-memory-leaks 600 15 20 # Monitor for 10 minutes, 15s intervals, 20MB threshold
# detect-memory-leaks # Default: 5 minutes, 10s intervals, 10MB threshold
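The detector above samples RSS from ps, which overstates usage for processes that share many pages. For a fairer per-process figure you can read PSS (proportional set size), which splits shared pages among their users; a small sketch, assuming a kernel that provides /proc/&lt;pid&gt;/smaps_rollup (4.14+):
# PSS of a single process (1234 is a placeholder PID)
PID=1234
awk '/^Pss:/ {print "PSS:", $2, "kB"}' /proc/$PID/smaps_rollup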
Memory Optimization and Tuning
# Memory optimization script
cat > /usr/local/bin/optimize-memory << 'EOF'
#!/bin/sh
# Alpine Linux memory optimization
echo "=== Alpine Linux Memory Optimization ==="
echo "Starting memory optimization process..."
echo ""
# 1. Kernel memory parameters tuning
echo "1. Tuning kernel memory parameters..."
echo "======================================"
# Backup current sysctl configuration
cp /etc/sysctl.conf /etc/sysctl.conf.backup.$(date +%Y%m%d)
# Apply memory optimizations
cat >> /etc/sysctl.conf << 'SYSCTL'
# Memory optimization settings
# Reduce memory pressure triggers
vm.swappiness=10
vm.vfs_cache_pressure=50
# Memory overcommit settings
vm.overcommit_memory=2
vm.overcommit_ratio=80
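# Note: mode 2 disables overcommit and refuses allocations beyond CommitLimit;
# applications that rely on large sparse allocations may fail to start.
# Keep vm.overcommit_memory=0 if you are unsure.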
# Dirty memory management
vm.dirty_ratio=15
vm.dirty_background_ratio=5
vm.dirty_expire_centisecs=3000
vm.dirty_writeback_centisecs=500
# Memory allocation policies
vm.min_free_kbytes=65536
vm.zone_reclaim_mode=0
# OOM killer tuning
vm.panic_on_oom=0
vm.oom_kill_allocating_task=1
SYSCTL
# Apply settings immediately
sysctl -p
echo "✅ Kernel parameters optimized"
echo ""
# 2. Service memory optimization
echo "2. Optimizing service memory usage..."
echo "====================================="
# Stop services that are often unnecessary on minimal servers
# (review this list first: chronyd provides time sync and dbus may be required by other services)
UNNECESSARY_SERVICES="chronyd ntp dbus avahi-daemon bluetooth cups"
for service in $UNNECESSARY_SERVICES; do
if rc-service $service status >/dev/null 2>&1; then
echo "Stopping unnecessary service: $service"
rc-service $service stop
rc-update del $service default 2>/dev/null
fi
done
# Optimize running services
if rc-service nginx status >/dev/null 2>&1; then
echo "Optimizing Nginx memory usage..."
# Add memory-optimized nginx config
cat > /etc/nginx/conf.d/memory-optimization.conf << 'NGINX'
# Note: worker_processes, worker_rlimit_nofile (main context) and worker_connections
# (events context) must be set in /etc/nginx/nginx.conf; only http-context
# directives such as the buffers below are valid in this included file.
# Memory optimization
client_body_buffer_size 128k;
client_max_body_size 10m;
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;
output_buffers 1 32k;
postpone_output 1460;
NGINX
fi
echo "✅ Services optimized"
echo ""
# 3. Memory cache optimization
echo "3. Optimizing memory caches..."
echo "=============================="
# Clear page cache if memory pressure is high
TOTAL_MEM=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
FREE_MEM=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
USAGE_PERCENT=$(( (TOTAL_MEM - FREE_MEM) * 100 / TOTAL_MEM ))
if [ $USAGE_PERCENT -gt 85 ]; then
echo "High memory usage detected (${USAGE_PERCENT}%). Clearing caches..."
# Sync first to ensure data integrity
sync
# Clear page cache
echo 1 > /proc/sys/vm/drop_caches
sleep 2
# Clear dentries and inodes
echo 2 > /proc/sys/vm/drop_caches
sleep 2
# Clear page cache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
echo "✅ Memory caches cleared"
else
echo "Memory usage normal (${USAGE_PERCENT}%). No cache clearing needed."
fi
echo ""
# 4. Swap optimization
echo "4. Optimizing swap configuration..."
echo "=================================="
# Check if swap exists
if [ ! -f /proc/swaps ] || [ $(cat /proc/swaps | wc -l) -eq 1 ]; then
echo "Creating swap file for memory relief..."
# Choose swap size from RAM: 2G for systems with under 2GB RAM, otherwise cap at 4G
TOTAL_MEM_GB=$(( TOTAL_MEM / 1024 / 1024 ))
if [ $TOTAL_MEM_GB -lt 2 ]; then
SWAP_SIZE="2G"
elif [ $TOTAL_MEM_GB -lt 4 ]; then
SWAP_SIZE="4G"
else
SWAP_SIZE="4G" # Cap at 4GB for most systems
fi
# Create swap file (fall back to dd if fallocate is unavailable or unsupported by the filesystem)
fallocate -l $SWAP_SIZE /swapfile 2>/dev/null || dd if=/dev/zero of=/swapfile bs=1M count=$(( ${SWAP_SIZE%G} * 1024 ))
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Add to fstab
echo "/swapfile none swap sw 0 0" >> /etc/fstab
echo "✅ Swap file created (${SWAP_SIZE})"
else
echo "Swap already configured"
fi
echo ""
# 5. Process memory limits
echo "5. Setting process memory limits..."
echo "=================================="
# Create memory limits configuration
cat > /etc/security/limits.d/memory-limits.conf << 'LIMITS'
# Memory limits for users and processes
* soft memlock 64
* hard memlock 64
* soft as 2097152
* hard as 4194304
# Specific limits for known memory-hungry processes
nginx soft as 524288
nginx hard as 1048576
mysql soft as 1048576
mysql hard as 2097152
LIMITS
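# Note: limits.d is enforced by pam_limits; Alpine's default busybox login does not use PAM,
# so these limits apply mainly where PAM (linux-pam/shadow) is installed. For OpenRC-managed
# services, set rc_ulimit in /etc/conf.d/<service> instead.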
echo "✅ Process memory limits configured"
echo ""
# 6. Generate optimization report
echo "6. Optimization Report:"
echo "======================"
NEW_FREE=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
NEW_USAGE=$(( (TOTAL_MEM - NEW_FREE) * 100 / TOTAL_MEM ))
echo "Memory status after optimization:"
echo "- Total memory: $((TOTAL_MEM / 1024))MB"
echo "- Available memory: $((NEW_FREE / 1024))MB"
echo "- Memory usage: ${NEW_USAGE}%"
echo ""
echo "Optimization recommendations:"
if [ $NEW_USAGE -gt 80 ]; then
echo "⚠️ Consider adding more physical RAM"
echo "⚠️ Monitor for memory leaks in applications"
elif [ $NEW_USAGE -gt 60 ]; then
echo "✅ Memory usage acceptable, monitor regularly"
else
echo "✅ Memory usage optimal"
fi
echo ""
echo "Memory optimization completed at $(date)"
EOF
chmod +x /usr/local/bin/optimize-memory
# Run memory optimization
optimize-memory
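To confirm the kernel parameters set by the script are active:
# Spot-check the applied sysctl values
sysctl vm.swappiness vm.overcommit_memory vm.overcommit_ratio vm.dirty_ratio vm.min_free_kbytes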
📊 Memory Monitoring and Alerting
Continuous Memory Monitoring
# Advanced memory monitoring system
cat > /usr/local/bin/monitor-memory-continuous << 'EOF'
#!/bin/sh
# Continuous memory monitoring for Alpine Linux
MONITOR_INTERVAL="${1:-60}" # 1 minute default
ALERT_THRESHOLD="${2:-85}" # 85% default
LOG_FILE="/var/log/memory-monitor.log"
ALERT_EMAIL="${ALERT_EMAIL:-admin@example.com}"  # set ALERT_EMAIL in the environment to override this placeholder
# Ensure log directory exists
mkdir -p "$(dirname "$LOG_FILE")"
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S'): $1" | tee -a "$LOG_FILE"
}
send_alert() {
local alert_type="$1"
local message="$2"
# Log the alert
log_message "ALERT: $alert_type - $message"
# Send email if configured
if [ -n "$ALERT_EMAIL" ] && command -v mail >/dev/null 2>&1; then
echo "Memory Alert on $(hostname)
Alert Type: $alert_type
Message: $message
Timestamp: $(date)
System Memory Information:
$(free -h)
Top Memory Consumers:
$(ps aux --sort=-%mem | head -10)
Please investigate immediately." | mail -s "Memory Alert: $alert_type" "$ALERT_EMAIL"
fi
# Log to syslog if available
if command -v logger >/dev/null 2>&1; then
logger -p user.warning "Memory Alert: $alert_type - $message"
fi
}
check_memory_status() {
# Get memory information
TOTAL_MEM=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
AVAIL_MEM=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
FREE_MEM=$(cat /proc/meminfo | grep MemFree | awk '{print $2}')
BUFFERS=$(cat /proc/meminfo | grep Buffers | awk '{print $2}')
CACHED=$(cat /proc/meminfo | grep "^Cached" | awk '{print $2}')
# Calculate usage percentages
USED_MEM=$((TOTAL_MEM - AVAIL_MEM))
USAGE_PERCENT=$((USED_MEM * 100 / TOTAL_MEM))
# Check swap usage
if [ -f /proc/swaps ] && [ $(cat /proc/swaps | wc -l) -gt 1 ]; then
SWAP_TOTAL=$(cat /proc/meminfo | grep SwapTotal | awk '{print $2}')
SWAP_FREE=$(cat /proc/meminfo | grep SwapFree | awk '{print $2}')
SWAP_USED=$((SWAP_TOTAL - SWAP_FREE))
SWAP_USAGE_PERCENT=$((SWAP_USED * 100 / SWAP_TOTAL))
else
SWAP_USAGE_PERCENT=0
fi
# Log current status
log_message "Memory: ${USAGE_PERCENT}% used ($((USED_MEM/1024))MB/$((TOTAL_MEM/1024))MB), Swap: ${SWAP_USAGE_PERCENT}% used"
# Check for alert conditions
if [ $USAGE_PERCENT -ge $ALERT_THRESHOLD ]; then
send_alert "HIGH_MEMORY_USAGE" "Memory usage at ${USAGE_PERCENT}% (threshold: ${ALERT_THRESHOLD}%)"
fi
if [ $SWAP_USAGE_PERCENT -ge 50 ]; then
send_alert "HIGH_SWAP_USAGE" "Swap usage at ${SWAP_USAGE_PERCENT}%"
fi
# Check for OOM events
if dmesg | tail -100 | grep -q "killed process"; then
RECENT_OOM=$(dmesg | grep "killed process" | tail -1)
send_alert "OOM_KILLER_ACTIVE" "Recent OOM kill detected: $RECENT_OOM"
fi
# Check for memory pressure
if [ $USAGE_PERCENT -ge 95 ]; then
send_alert "CRITICAL_MEMORY_PRESSURE" "Critical memory pressure detected"
# Emergency memory cleanup
log_message "Performing emergency memory cleanup..."
sync
echo 3 > /proc/sys/vm/drop_caches
fi
}
# Signal handlers for graceful shutdown
trap 'log_message "Memory monitoring stopped"; exit 0' TERM INT
log_message "Starting continuous memory monitoring (interval: ${MONITOR_INTERVAL}s, threshold: ${ALERT_THRESHOLD}%)"
# Main monitoring loop
while true; do
check_memory_status
sleep $MONITOR_INTERVAL
done
EOF
chmod +x /usr/local/bin/monitor-memory-continuous
# Create OpenRC service for continuous monitoring
cat > /etc/init.d/memory-monitor << 'EOF'
#!/sbin/openrc-run
name="memory-monitor"
description="Continuous memory monitoring service"
command="/usr/local/bin/monitor-memory-continuous"
command_args="60 85"
command_background="yes"
pidfile="/var/run/memory-monitor.pid"
command_user="root"
depend() {
need localmount
after bootmisc
}
EOF
chmod +x /etc/init.d/memory-monitor
# Enable and start the service
rc-update add memory-monitor default
rc-service memory-monitor start
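A quick check that the monitor is running and logging:
# Confirm the service is up and producing log entries
rc-service memory-monitor status
tail -n 5 /var/log/memory-monitor.log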
🛠️ Advanced Memory Troubleshooting
Memory Debugging Tools
# Install debugging tools
apk add gdb strace valgrind htop
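For applications you can launch under your own control, valgrind gives much more precise leak information than sampling RSS. A minimal sketch (myapp is a placeholder binary; memcheck slows the target considerably, so use it outside production):
# Run the target under memcheck and summarise definite leaks
valgrind --leak-check=full --log-file=/tmp/valgrind-myapp.log ./myapp
grep -A2 "definitely lost" /tmp/valgrind-myapp.log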
# Advanced memory debugging script
cat > /usr/local/bin/debug-memory-process << 'EOF'
#!/bin/sh
# Memory debugging for specific processes
PID="$1"
DURATION="${2:-60}"
if [ -z "$PID" ]; then
echo "Usage: $0 <pid> [duration_seconds]"
echo ""
echo "Available processes with high memory usage:"
ps aux --sort=-%mem | head -10
exit 1
fi
if [ ! -f "/proc/$PID/status" ]; then
echo "Error: Process $PID not found"
exit 1
fi
PROCESS_NAME=$(grep Name /proc/$PID/status | awk '{print $2}')
echo "Debugging memory usage for process: $PROCESS_NAME (PID: $PID)"
echo "Duration: ${DURATION} seconds"
echo ""
# 1. Process memory information
echo "1. Current Memory Information:"
echo "=============================="
grep -E "(VmSize|VmRSS|VmData|VmStk|VmExe|VmLib|VmPTE|VmSwap)" /proc/$PID/status
echo ""
# 2. Memory maps
echo "2. Memory Maps:"
echo "==============="
if [ -f "/proc/$PID/maps" ]; then
cat /proc/$PID/maps | head -20
echo "... (showing first 20 entries)"
else
echo "Memory maps not available"
fi
echo ""
# 3. File descriptors
echo "3. Open File Descriptors:"
echo "========================="
if [ -d "/proc/$PID/fd" ]; then
FD_COUNT=$(ls /proc/$PID/fd | wc -l)
echo "Total open file descriptors: $FD_COUNT"
if [ $FD_COUNT -gt 100 ]; then
echo "High FD count detected. Sample of open files:"
ls -la /proc/$PID/fd | head -10
fi
else
echo "File descriptor information not available"
fi
echo ""
# 4. System call tracing
echo "4. Memory-related System Calls (${DURATION}s sample):"
echo "===================================================="
timeout $DURATION strace -e trace=mmap,munmap,brk,mprotect -p $PID 2>&1 | head -20
echo ""
# 5. Memory allocation patterns
echo "5. Memory Allocation Analysis:"
echo "=============================="
# Create a temporary GDB script that counts malloc/free calls via dprintf
# (requires gdb; libc symbols are normally resolvable even for stripped binaries)
cat > /tmp/memory_trace_$PID.gdb << 'GDB'
set pagination off
set logging file /tmp/memory_allocation.log
set logging on
dprintf malloc,"malloc\n"
dprintf free,"free\n"
continue
GDB
# Attach GDB briefly to trace allocations (adds overhead to the target while attached)
timeout 30 gdb -batch -x /tmp/memory_trace_$PID.gdb -p $PID >/dev/null 2>&1
# Analyze results
if [ -f /tmp/memory_allocation.log ]; then
MALLOC_COUNT=$(grep -c "^malloc$" /tmp/memory_allocation.log)
FREE_COUNT=$(grep -c "^free$" /tmp/memory_allocation.log)
echo "Malloc calls: $MALLOC_COUNT"
echo "Free calls: $FREE_COUNT"
if [ $MALLOC_COUNT -gt $FREE_COUNT ]; then
LEAK_CALLS=$((MALLOC_COUNT - FREE_COUNT))
echo "⚠️ Potential memory leak: $LEAK_CALLS unfreed allocations"
fi
rm -f /tmp/memory_allocation.log
fi
rm -f /tmp/memory_trace_$PID.gdb
# 6. Recommendations
echo ""
echo "6. Recommendations:"
echo "=================="
RSS_KB=$(grep VmRSS /proc/$PID/status | awk '{print $2}')
RSS_MB=$((RSS_KB / 1024))
if [ $RSS_MB -gt 1000 ]; then
echo "- Process using significant memory (${RSS_MB}MB)"
echo "- Consider process restart if memory usage is unexpected"
echo "- Monitor for memory leaks"
elif [ $RSS_MB -gt 500 ]; then
echo "- Moderate memory usage (${RSS_MB}MB)"
echo "- Monitor trends over time"
else
echo "- Normal memory usage (${RSS_MB}MB)"
fi
if [ -n "$FD_COUNT" ] && [ "$FD_COUNT" -gt 1000 ]; then
echo "- High file descriptor usage detected"
echo "- Check for file descriptor leaks"
fi
echo ""
echo "Memory debugging completed for PID $PID"
EOF
chmod +x /usr/local/bin/debug-memory-process
# Usage example:
# debug-memory-process 1234 120 # Debug PID 1234 for 2 minutes
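When a process cannot be traced for long, another option is to capture a core image and inspect it offline; the gcore helper ships with gdb. A sketch (1234 is a placeholder PID; the process is paused while the core is written):
# Dump a core of the suspect process for offline analysis
gcore -o /tmp/suspect 1234
# Inspect the dump later without touching the live process
gdb /proc/1234/exe /tmp/suspect.1234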
🎯 Memory Recovery Procedures
Emergency Memory Recovery
# Emergency memory recovery script
cat > /usr/local/bin/emergency-memory-recovery << 'EOF'
#!/bin/sh
# Emergency memory recovery for Alpine Linux
echo "=== EMERGENCY MEMORY RECOVERY ==="
echo "Starting emergency memory recovery procedures..."
echo "Timestamp: $(date)"
echo ""
# Check current memory status
TOTAL_MEM=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
AVAIL_MEM=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
USAGE_PERCENT=$(( (TOTAL_MEM - AVAIL_MEM) * 100 / TOTAL_MEM ))
echo "Current memory usage: ${USAGE_PERCENT}%"
echo "Available memory: $((AVAIL_MEM / 1024))MB"
echo ""
if [ $USAGE_PERCENT -lt 90 ]; then
echo "Memory usage not critical. Emergency recovery not needed."
exit 0
fi
echo "CRITICAL MEMORY SITUATION DETECTED"
echo "Initiating emergency recovery procedures..."
echo ""
# 1. Emergency cache clearing
echo "1. Clearing system caches..."
sync
echo 3 > /proc/sys/vm/drop_caches
echo "✅ System caches cleared"
# 2. Stop non-essential services
echo ""
echo "2. Stopping non-essential services..."
NON_ESSENTIAL="bluetooth avahi-daemon cups chronyd"
for service in $NON_ESSENTIAL; do
if rc-service $service status >/dev/null 2>&1; then
echo "Stopping $service..."
rc-service $service stop
fi
done
echo "✅ Non-essential services stopped"
# 3. Kill memory-hungry processes (with user confirmation)
echo ""
echo "3. Identifying memory-hungry processes..."
echo "Top memory consumers:"
ps aux --sort=-%mem | head -10
echo ""
echo "WARNING: The following processes are using significant memory:"
HEAVY_PROCESSES=$(ps aux --sort=-%mem --no-headers | head -5 | awk '$4 > 10 {print $2 ":" $11 " (" $4 "% memory)"}')
if [ -n "$HEAVY_PROCESSES" ]; then
echo "$HEAVY_PROCESSES"
echo ""
echo "Do you want to terminate these processes? (y/N)"
read -r CONFIRM
if [ "$CONFIRM" = "y" ] || [ "$CONFIRM" = "Y" ]; then
ps aux --sort=-%mem --no-headers | head -5 | awk '$4 > 10 {print $2}' | while read pid; do
if [ "$pid" -ne "$$" ] && [ "$pid" -ne "1" ]; then
echo "Terminating PID $pid..."
kill -TERM "$pid"
sleep 2
if kill -0 "$pid" 2>/dev/null; then
kill -KILL "$pid"
fi
fi
done
echo "✅ Memory-hungry processes terminated"
else
echo "Skipping process termination"
fi
fi
# 4. Adjust OOM killer aggressiveness
echo ""
echo "4. Adjusting OOM killer settings..."
echo 1 > /proc/sys/vm/oom_kill_allocating_task
echo 0 > /proc/sys/vm/panic_on_oom
echo "✅ OOM killer settings adjusted"
# 5. Create emergency swap if needed
echo ""
echo "5. Checking swap availability..."
if [ ! -f /proc/swaps ] || [ $(cat /proc/swaps | wc -l) -eq 1 ]; then
echo "No swap detected. Creating emergency swap file..."
# Create 1GB emergency swap
if [ -f /emergency_swap ]; then
echo "Emergency swap file already exists"
else
fallocate -l 1G /emergency_swap 2>/dev/null || dd if=/dev/zero of=/emergency_swap bs=1M count=1024
chmod 600 /emergency_swap
mkswap /emergency_swap
swapon /emergency_swap
echo "✅ Emergency swap file created (1GB)"
fi
else
echo "Swap already available"
fi
# 6. Final memory status check
echo ""
echo "6. Post-recovery memory status:"
echo "=============================="
NEW_AVAIL=$(cat /proc/meminfo | grep MemAvailable | awk '{print $2}')
NEW_USAGE=$(( (TOTAL_MEM - NEW_AVAIL) * 100 / TOTAL_MEM ))
echo "Memory usage after recovery: ${NEW_USAGE}%"
echo "Available memory: $((NEW_AVAIL / 1024))MB"
echo "Memory freed: $(( (AVAIL_MEM - NEW_AVAIL) / -1024 ))MB"
if [ $NEW_USAGE -lt 80 ]; then
echo "✅ Memory recovery successful"
else
echo "⚠️ Memory usage still high. Consider:"
echo " - Adding more physical RAM"
echo " - Restarting the system if possible"
echo " - Investigating for memory leaks"
fi
echo ""
echo "Emergency memory recovery completed at $(date)"
EOF
chmod +x /usr/local/bin/emergency-memory-recovery
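As a last resort on a system that is barely responsive, you can ask the kernel to run the OOM killer once through the magic SysRq interface. A sketch; the write requires root and takes effect immediately (the kernel.sysrq mask only restricts keyboard-invoked SysRq, not writes to sysrq-trigger):
# Check the keyboard SysRq mask for reference
sysctl kernel.sysrq
# Ask the kernel to pick and kill the task with the highest OOM score
echo f > /proc/sysrq-trigger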
🎉 Conclusion
Resolving memory issues in Alpine Linux requires systematic diagnosis, proper monitoring, and proactive optimization. With the tools and techniques in this guide, you can effectively manage memory resources and maintain system stability.
Key takeaways:
- Monitor memory usage continuously 📊
- Implement proper alerting systems 🚨
- Optimize kernel parameters for your workload ⚙️
- Use debugging tools for complex issues 🔍
- Maintain emergency recovery procedures 🛠️
With proper memory management, your Alpine Linux systems will run efficiently and reliably! 🚀