📊 AlmaLinux Centralized Log Management Complete Guide

Published Sep 18, 2025

Master centralized log management on AlmaLinux with ELK Stack (Elasticsearch, Logstash, Kibana), Rsyslog, journald, log aggregation, real-time monitoring, and automated log analysis for enterprise-grade logging solutions.


Ready to become a log management master? 🎯 This comprehensive guide will transform you from log chaos to log clarity! You'll learn to set up enterprise-grade centralized logging that makes troubleshooting a breeze and gives you incredible insights into your systems. 🚀

Centralized log management isn't just about collecting logs – it's about creating a powerful observatory for your entire infrastructure. Whether you're managing a single server or a complex multi-server environment, this guide will help you build a logging system that scales! 📈

🤔 Why is Centralized Log Management Important?

Imagine trying to find a specific conversation in thousands of chat rooms without a search function – that's what managing logs without centralization feels like! 😵 Here's why centralized logging is a game-changer:

  • 🔍 Unified Visibility: See all your system activity in one place, like mission control!
  • ⚡ Lightning-Fast Troubleshooting: Find issues across multiple servers instantly
  • 📈 Powerful Analytics: Discover patterns and trends you never knew existed
  • 🛡️ Enhanced Security: Detect threats and monitor suspicious activities
  • 📋 Compliance Made Easy: Automated log retention and auditing capabilities
  • 🔔 Real-time Alerts: Get notified the moment something goes wrong
  • 💾 Efficient Storage: Compress and organize logs for optimal disk usage
  • 🎯 Better Decision Making: Data-driven insights from comprehensive log analysis

🎯 What You Need

Before we dive into the exciting world of centralized logging, let's make sure you have everything ready:

✅ AlmaLinux server(s) (we'll set up the perfect logging infrastructure!)
✅ Root or sudo access (needed for installing and configuring services)
✅ At least 4GB RAM (recommended for ELK stack - more is better!)
✅ 20GB+ free disk space (logs can grow quickly, plan accordingly!)
✅ Network connectivity (for log shipping between servers)
✅ Basic understanding of logs (don't worry, we'll explain everything!)
✅ Text editor familiarity (nano, vim, or your favorite editor)
✅ Patience and curiosity (we're building something amazing together!)
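
Want to check the hardware boxes quickly before you start? Here's a tiny sanity check (the thresholds are this guide's recommendations, not hard limits):

# Verify RAM, disk, and CPU against the checklist above
free -h    # look for 4GB+ total memory
df -h /    # look for 20GB+ available disk space
nproc      # 2+ cores will keep the ELK stack comfortable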

๐Ÿ“ Step 1: Understanding Your Log Ecosystem

Letโ€™s start by exploring the AlmaLinux logging landscape! Think of this as taking inventory of all your log sources before we organize them. ๐Ÿ—บ๏ธ

# Discover all active log files on your system
sudo find /var/log -type f -name "*.log" | head -20
# Shows the most common log files your system is generating

# Check systemd journal status
sudo journalctl --disk-usage
# Shows how much space systemd journal is using

# View real-time log generation
sudo tail -f /var/log/messages
# Watch system messages in real-time (press Ctrl+C to stop)

# Check rsyslog configuration
sudo systemctl status rsyslog
# Verify rsyslog service is running

Let's create a comprehensive log discovery script:

# Create log discovery script
sudo nano /usr/local/bin/log-discovery.sh

# Add this content:
#!/bin/bash
echo "๐Ÿ” ALMALINUX LOG DISCOVERY REPORT"
echo "================================="
echo "Date: $(date)"
echo ""

echo "๐Ÿ“‹ ACTIVE LOG FILES:"
find /var/log -type f -name "*.log" -exec ls -lh {} \; | sort -k5 -hr | head -15
echo ""

echo "๐Ÿ’พ JOURNAL DISK USAGE:"
journalctl --disk-usage
echo ""

echo "๐Ÿ”„ RSYSLOG STATUS:"
systemctl status rsyslog --no-pager -l
echo ""

echo "๐Ÿ“Š LOG ROTATION STATUS:"
ls -la /etc/logrotate.d/ | wc -l
echo "Log rotation configs found: $(ls /etc/logrotate.d/ | wc -l)"
echo ""

echo "๐ŸŒ NETWORK LOG DESTINATIONS:"
grep -E "^[^#].*@" /etc/rsyslog.conf /etc/rsyslog.d/* 2>/dev/null || echo "No remote logging configured"
echo ""

echo "Discovery complete! โœ…"
# Make the script executable and run it
sudo chmod +x /usr/local/bin/log-discovery.sh
sudo /usr/local/bin/log-discovery.sh
# This gives you a complete picture of your current logging setup!

🔧 Step 2: Setting Up ELK Stack (Elasticsearch, Logstash, Kibana)

Time to build your log management powerhouse! 💪 The ELK Stack is like having a supercomputer dedicated to understanding your logs.

# Add Elasticsearch repository
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Create repository configuration
sudo nano /etc/yum.repos.d/elasticsearch.repo

# Add this content:
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
# Install Elasticsearch
sudo dnf install -y --enablerepo=elasticsearch elasticsearch

# Configure Elasticsearch for single-node setup
sudo nano /etc/elasticsearch/elasticsearch.yml

# Add/modify these settings:
cluster.name: almalinux-logs
node.name: log-server-01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: localhost
http.port: 9200
discovery.type: single-node
xpack.security.enabled: false
# Note: In production, enable security!
# Configure JVM heap size (adjust based on your RAM)
sudo nano /etc/elasticsearch/jvm.options.d/heap.options

# Add these lines (use 50% of available RAM, max 32GB):
-Xms2g
-Xmx2g
# Adjust these values based on your system's RAM

# Start and enable Elasticsearch
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch

# Verify Elasticsearch is running
sleep 30  # Wait for startup
curl -X GET "localhost:9200/"
# Should return cluster information
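
For a slightly deeper check than the root endpoint, the cluster health API should report green (or yellow, if an index is still waiting on replicas that a single node can't allocate):

# Check overall cluster health
curl -s "localhost:9200/_cluster/health?pretty"
# Look for "status" : "green" (or "yellow" on single-node setups)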

Now let's install Logstash:

# Install Logstash
sudo dnf install -y --enablerepo=elasticsearch logstash

# Create basic Logstash configuration
sudo nano /etc/logstash/conf.d/almalinux-logs.conf

# Add this comprehensive configuration:
input {
  # Receive logs from rsyslog
  syslog {
    port => 5514
    type => "syslog"
  }

  # Monitor specific log files
  # (the logstash user needs read access to these, e.g. via ACLs:
  #  sudo setfacl -m u:logstash:r /var/log/messages /var/log/secure)
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }

  file {
    path => "/var/log/secure"
    type => "security"
    start_position => "beginning"
  }

  # Beats input for log shippers
  beats {
    port => 5044
  }
}

filter {
  # Parse syslog messages
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:host} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" }
      overwrite => [ "message" ]  # keep the parsed message body instead of duplicating the raw line
    }

    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  # Parse security logs
  if [type] == "security" {
    if "Failed password" in [message] {
      mutate {
        add_tag => [ "failed_login" ]
      }
    }

    if "Accepted password" in [message] {
      mutate {
        add_tag => [ "successful_login" ]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "almalinux-logs-%{+YYYY.MM.dd}"
  }

  # Also output to stdout for debugging
  stdout {
    codec => rubydebug
  }
}
# Start and enable Logstash
sudo systemctl enable --now logstash

# Check Logstash status
sudo systemctl status logstash
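
Before moving on, you can sanity-check the pipeline by handing Logstash a raw syslog-formatted line directly. This sketch assumes nc is available (sudo dnf install -y nmap-ncat if it isn't):

# Push a hand-crafted RFC3164 message at the Logstash syslog input
echo "<134>$(date '+%b %d %H:%M:%S') $(hostname) logtest: hello logstash" | nc -w1 localhost 5514
# The event should show up in the stdout debug output (sudo journalctl -u logstash -f)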

Finally, let's install Kibana:

# Install Kibana
sudo dnf install -y --enablerepo=elasticsearch kibana

# Configure Kibana
sudo nano /etc/kibana/kibana.yml

# Add/modify these settings:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
# Start and enable Kibana
sudo systemctl enable --now kibana

# Wait for Kibana to start (can take a few minutes)
sleep 60

# Check if Kibana is accessible
curl -I http://localhost:5601
# Expect an HTTP response (200, or a 302 redirect while Kibana finishes starting)
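
Since server.host is bound to localhost, Kibana isn't reachable from your workstation yet. An SSH tunnel is a safe way to browse it without opening a firewall port (replace user@your-server with your own login):

# Forward local port 5601 to Kibana on the server, then open http://localhost:5601 locally
ssh -L 5601:localhost:5601 user@your-server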

🌟 Step 3: Configuring Rsyslog for Centralized Collection

Now let's turn your AlmaLinux server into a log collection powerhouse! 🎯 Rsyslog will be our reliable log courier, delivering messages exactly where they need to go.

# Backup original rsyslog configuration
sudo cp /etc/rsyslog.conf /etc/rsyslog.conf.backup

# Configure rsyslog for centralized logging
sudo nano /etc/rsyslog.conf

# Find and uncomment these lines to enable log reception:
# module(load="imudp")
# input(type="imudp" port="514")
# module(load="imtcp")
# input(type="imtcp" port="514")

# Add these lines at the end:
# Forward logs to Logstash
*.* @@localhost:5514

# Local log templates for better formatting
$template DailyPerHostLogs,"/var/log/hosts/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log"
*.* ?DailyPerHostLogs
& stop

# High-precision timestamps (define the template, then make it the default)
$template HighPrecision,"%timegenerated:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
$ActionFileDefaultTemplate HighPrecision
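
A quick syntax check now will save you a silent logging outage later:

# Dry-run the rsyslog configuration (level-1 validation, no daemon started)
sudo rsyslogd -N1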

Create an advanced rsyslog configuration:

# Create custom rsyslog configuration for log management
sudo nano /etc/rsyslog.d/10-almalinux-centralized.conf

# Add this content:
# Enhanced logging for centralized management
$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 0.0.0.0

$ModLoad imtcp
$InputTCPServerRun 514

# Create directories for organized logging
$CreateDirs on
$Umask 0022
$DirCreateMode 0755
$FileCreateMode 0644

# Templates for structured logging
$template RemoteHost,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
$template DetailedFormat,"%timegenerated:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"

# File every message per host and program (this feeds /var/log/remote/,
# which the logrotate rule later in this guide rotates)
*.* ?RemoteHost

# Log separation by severity ("=" matches exactly that priority, so each
# message lands in one severity file instead of every file at or above it)
*.=debug /var/log/debug.log
*.=info /var/log/info.log
*.=warning /var/log/warn.log
*.err /var/log/error.log

# Security-specific logging
auth,authpriv.* /var/log/auth.log
mail.* /var/log/mail.log
cron.* /var/log/cron.log

# Forward everything to Logstash
*.* @@localhost:5514;DetailedFormat

# Stop processing after forwarding to prevent duplication
& stop
# Restart rsyslog to apply changes
sudo systemctl restart rsyslog

# Verify rsyslog is listening on the correct ports
sudo ss -tulpn | grep ":514"
# Should show rsyslog listening on ports 514

# Test log forwarding
logger "Test message from rsyslog to centralized logging"
# This should appear in your logs and be forwarded to Logstash

✅ Step 4: Advanced Log Parsing and Analysis

Let's make your logs tell amazing stories! 📚 We'll set up intelligent parsing that transforms raw log data into meaningful insights.

Create advanced Logstash parsing rules:

# Create advanced parsing configuration
sudo nano /etc/logstash/conf.d/advanced-parsing.conf

# Note: Logstash merges every file in conf.d into one pipeline, so the beats and
# syslog inputs below replace the ones in almalinux-logs.conf. Move that file out
# of the way first, or Logstash will fail trying to bind the same ports twice:
sudo mv /etc/logstash/conf.d/almalinux-logs.conf /etc/logstash/almalinux-logs.conf.bak

# Add this sophisticated parsing configuration:
input {
  beats {
    port => 5044
  }

  syslog {
    port => 5514
    type => "syslog"
  }
}

filter {
  # Parse Apache/Nginx access logs
  if [fields][logtype] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }

    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }

    mutate {
      convert => [ "response", "integer" ]
      convert => [ "bytes", "integer" ]
    }

    if [response] >= 400 {
      mutate {
        add_tag => [ "error" ]
      }
    }
  }

  # Parse SSH authentication logs
  if "sshd" in [program] {
    if "Failed password" in [message] {
      grok {
        match => { "message" => "Failed password for %{USERNAME:username} from %{IP:src_ip} port %{INT:src_port}" }
      }
      mutate {
        add_tag => [ "ssh_failed" ]
      }
    }

    if "Accepted password" in [message] {
      grok {
        match => { "message" => "Accepted password for %{USERNAME:username} from %{IP:src_ip} port %{INT:src_port}" }
      }
      mutate {
        add_tag => [ "ssh_success" ]
      }
    }
  }

  # Parse system performance logs
  if [type] == "performance" {
    grok {
      match => { "message" => "CPU: %{NUMBER:cpu_usage}%, Memory: %{NUMBER:memory_usage}%, Disk: %{NUMBER:disk_usage}%" }
    }

    mutate {
      convert => [ "cpu_usage", "float" ]
      convert => [ "memory_usage", "float" ]
      convert => [ "disk_usage", "float" ]
    }
  }

  # Enrich logs with geographic information for IP addresses
  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
    }
  }

  # Add timestamp and host information
  mutate {
    add_field => { "received_at" => "%{@timestamp}" }
    add_field => { "log_host" => "%{host}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "almalinux-%{type}-%{+YYYY.MM.dd}"
    template_name => "almalinux"
    template_pattern => "almalinux-*"
    template => "/etc/logstash/templates/almalinux-template.json"
  }
}

Create an Elasticsearch template for optimized storage:

# Create template directory
sudo mkdir -p /etc/logstash/templates

# Create Elasticsearch template
sudo nano /etc/logstash/templates/almalinux-template.json

{
  "index_patterns": ["almalinux-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index.refresh_interval": "30s",
    "index.translog.flush_threshold_size": "1gb"
  },
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "host": {
        "type": "keyword"
      },
      "message": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "src_ip": {
        "type": "ip"
      },
      "response": {
        "type": "integer"
      },
      "bytes": {
        "type": "integer"
      },
      "cpu_usage": {
        "type": "float"
      },
      "memory_usage": {
        "type": "float"
      },
      "disk_usage": {
        "type": "float"
      },
      "geoip": {
        "properties": {
          "country_name": {
            "type": "keyword"
          },
          "city_name": {
            "type": "keyword"
          },
          "location": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
# Restart Logstash to apply new configuration
sudo systemctl restart logstash

# Monitor Logstash logs for errors
sudo tail -f /var/log/logstash/logstash-plain.log
# Press Ctrl+C to stop monitoring
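
Once Logstash has been running for a minute or two, confirm that the daily indices from the output section are actually being created:

# List the pipeline's indices (names come from the elasticsearch output block)
curl -s "localhost:9200/_cat/indices/almalinux-*?v"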

🌟 Step 5: Real-time Monitoring and Alerting

Let's add some intelligence to your logging system! 🧠 We'll set up real-time monitoring that keeps you informed about what's happening.

# Install the Python Elasticsearch client for log monitoring
sudo dnf install -y python3-pip
sudo pip3 install elasticsearch

# Create real-time log monitor script
sudo nano /usr/local/bin/log-monitor.py

#!/usr/bin/env python3
import time
import json
from datetime import datetime, timedelta
from elasticsearch import Elasticsearch

# Connect to Elasticsearch (the 8.x Python client needs the full URL, scheme included)
es = Elasticsearch(['http://localhost:9200'])

def check_error_rate():
    """Monitor error rate in the last 5 minutes"""
    now = datetime.utcnow()  # UTC, to match Elasticsearch's @timestamp
    five_minutes_ago = now - timedelta(minutes=5)

    query = {
        "query": {
            "bool": {
                "must": [
                    {"range": {"@timestamp": {"gte": five_minutes_ago.isoformat()}}},
                    {"terms": {"tags": ["error", "ssh_failed"]}}
                ]
            }
        }
    }

    result = es.search(index="almalinux-*", body=query)
    error_count = result['hits']['total']['value']

    if error_count > 10:  # Threshold: more than 10 errors in 5 minutes
        print(f"๐Ÿšจ ALERT: High error rate detected! {error_count} errors in last 5 minutes")
        return True
    return False

def check_failed_logins():
    """Monitor failed SSH login attempts"""
    now = datetime.utcnow()  # UTC, to match Elasticsearch's @timestamp
    ten_minutes_ago = now - timedelta(minutes=10)

    query = {
        "query": {
            "bool": {
                "must": [
                    {"range": {"@timestamp": {"gte": ten_minutes_ago.isoformat()}}},
                    {"term": {"tags": "ssh_failed"}}
                ]
            }
        },
        "aggs": {
            "by_ip": {
                "terms": {
                    "field": "src_ip",
                    "size": 10
                }
            }
        }
    }

    result = es.search(index="almalinux-*", body=query)

    for bucket in result['aggregations']['by_ip']['buckets']:
        ip = bucket['key']
        count = bucket['doc_count']

        if count > 5:  # More than 5 failed attempts from same IP
            print(f"๐Ÿ”’ ALERT: Brute force attempt detected from {ip} ({count} attempts)")

def main():
    print("๐Ÿ” Starting AlmaLinux Log Monitor...")
    print("Press Ctrl+C to stop")

    while True:
        try:
            check_error_rate()
            check_failed_logins()
            time.sleep(60)  # Check every minute
        except KeyboardInterrupt:
            print("\n๐Ÿ‘‹ Monitor stopped.")
            break
        except Exception as e:
            print(f"โŒ Error: {e}")
            time.sleep(60)

if __name__ == "__main__":
    main()
# Make the monitor script executable
sudo chmod +x /usr/local/bin/log-monitor.py

# Create systemd service for log monitoring
sudo nano /etc/systemd/system/log-monitor.service

[Unit]
Description=AlmaLinux Log Monitor
After=elasticsearch.service
Wants=elasticsearch.service

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/log-monitor.py
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
# Enable and start the log monitor service
sudo systemctl daemon-reload
sudo systemctl enable --now log-monitor.service

# Check monitor status
sudo systemctl status log-monitor.service
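
The monitor prints its alerts to stdout, and systemd captures that, so you can watch alerts arrive in the journal:

# Follow the monitor's alert output live
sudo journalctl -u log-monitor.service -f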

Create a log analysis dashboard script:

# Create dashboard script
sudo nano /usr/local/bin/log-dashboard.sh

#!/bin/bash
echo "๐Ÿ“Š ALMALINUX LOG DASHBOARD"
echo "=========================="
echo "๐Ÿ“… $(date)"
echo ""

# Check Elasticsearch health
echo "๐Ÿ”‹ ELASTICSEARCH STATUS:"
curl -s localhost:9200/_cluster/health | python3 -m json.tool | grep -E "(status|number_of_nodes|active_primary_shards)"
echo ""

# Get log statistics
echo "๐Ÿ“ˆ LOG STATISTICS (Last 24 hours):"
curl -s "localhost:9200/almalinux-*/_search?size=0" -H 'Content-Type: application/json' -d '{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-24h"
      }
    }
  },
  "aggs": {
    "by_type": {
      "terms": {
        "field": "type.keyword",
        "size": 10
      }
    },
    "by_host": {
      "terms": {
        "field": "host.keyword",
        "size": 5
      }
    }
  }
}' | python3 -c "
import sys, json
data = json.load(sys.stdin)
print('Total logs:', data['hits']['total']['value'])
print('\nBy Type:')
for bucket in data['aggregations']['by_type']['buckets']:
    print(f'  {bucket[\"key\"]}: {bucket[\"doc_count\"]}')
print('\nBy Host:')
for bucket in data['aggregations']['by_host']['buckets']:
    print(f'  {bucket[\"key\"]}: {bucket[\"doc_count\"]}')
"

echo ""
echo "๐Ÿ” Top Error Messages:"
curl -s "localhost:9200/almalinux-*/_search" -H 'Content-Type: application/json' -d '{
  "query": {
    "bool": {
      "must": [
        {"range": {"@timestamp": {"gte": "now-1h"}}},
        {"terms": {"tags": ["error", "failed"]}}
      ]
    }
  },
  "size": 5,
  "sort": [{"@timestamp": {"order": "desc"}}]
}' | python3 -c "
import sys, json
data = json.load(sys.stdin)
for hit in data['hits']['hits']:
    source = hit['_source']
    print(f'  {source.get(\"@timestamp\", \"N/A\")} - {source.get(\"message\", \"N/A\")[:100]}')
"

echo ""
echo "Dashboard complete! โœ…"
# Make dashboard executable and run it
sudo chmod +x /usr/local/bin/log-dashboard.sh
sudo /usr/local/bin/log-dashboard.sh
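
If you'd like this report generated automatically, a cron entry works nicely (the 08:00 schedule and report path here are just examples):

# Run the dashboard every morning and append the output to a report file
echo '0 8 * * * root /usr/local/bin/log-dashboard.sh >> /var/log/log-dashboard-report.log 2>&1' | sudo tee /etc/cron.d/log-dashboard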

🎮 Quick Examples

Let's see your centralized logging system in action with real-world scenarios! 🎯

Example 1: Web Server Log Analysis

# Create web server log simulator
sudo nano /usr/local/bin/web-log-simulator.sh

#!/bin/bash
echo "๐ŸŒ Simulating web server logs..."

IPS=("192.168.1.100" "10.0.0.50" "203.0.113.45" "198.51.100.22")
CODES=(200 200 200 404 500)
PAGES=("/index.html" "/login.php" "/api/users" "/admin" "/dashboard")

for i in {1..20}; do
    IP=${IPS[$RANDOM % ${#IPS[@]}]}
    CODE=${CODES[$RANDOM % ${#CODES[@]}]}
    PAGE=${PAGES[$RANDOM % ${#PAGES[@]}]}

    # Simulate Apache combined log format (client IP first, timestamp in brackets,
    # so the COMBINEDAPACHELOG grok pattern from Step 4 can match it)
    echo "$IP - - [$(date '+%d/%b/%Y:%H:%M:%S %z')] \"GET $PAGE HTTP/1.1\" $CODE $((RANDOM % 5000))" | \
    logger -t apache-access

    sleep 1
done

echo "โœ… Web logs generated!"
# Run the simulator
sudo chmod +x /usr/local/bin/web-log-simulator.sh
sudo /usr/local/bin/web-log-simulator.sh

Example 2: Security Event Simulation

# Create security event simulator
sudo nano /usr/local/bin/security-simulator.sh

#!/bin/bash
echo "๐Ÿ”’ Simulating security events..."

ATTACKERS=("203.0.113.100" "198.51.100.200" "192.0.2.150")
USERS=("admin" "root" "test" "user1")

for i in {1..10}; do
    ATTACKER=${ATTACKERS[$RANDOM % ${#ATTACKERS[@]}]}
    USER=${USERS[$RANDOM % ${#USERS[@]}]}

    # Simulate failed SSH attempts
    logger -p auth.warning "sshd[$$]: Failed password for $USER from $ATTACKER port $((20000 + RANDOM % 10000)) ssh2"

    sleep 2
done

# Simulate one successful login
logger -p auth.info "sshd[$$]: Accepted password for admin from 192.168.1.100 port 22000 ssh2"

echo "โœ… Security events generated!"
# Run the security simulator
sudo chmod +x /usr/local/bin/security-simulator.sh
sudo /usr/local/bin/security-simulator.sh

Example 3: Performance Metrics Logging

# Create performance logger
sudo nano /usr/local/bin/perf-logger.sh

#!/bin/bash
while true; do
    # Get system metrics
    CPU=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
    MEM=$(free | grep Mem | awk '{printf("%.1f", $3/$2 * 100.0)}')
    DISK=$(df / | awk 'NR==2{print $5}' | cut -d'%' -f1)

    # Log performance metrics
    logger -t performance "CPU: ${CPU}%, Memory: ${MEM}%, Disk: ${DISK}%"

    sleep 30  # Log every 30 seconds
done
# Run performance logger in background
sudo chmod +x /usr/local/bin/perf-logger.sh
sudo nohup /usr/local/bin/perf-logger.sh > /dev/null 2>&1 &
# This runs in the background and logs performance metrics
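
Because the loop runs detached via nohup, remember how to stop it when you're done experimenting:

# Stop the background performance logger
sudo pkill -f perf-logger.sh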

🚨 Fix Common Problems

Don't panic if you encounter issues – here are solutions to common log management problems! 🛠️

Problem 1: Elasticsearch Won't Start

Symptoms: Service fails to start, "unable to lock JVM memory" errors

# Check Elasticsearch logs
sudo journalctl -u elasticsearch -f

# Common fix: Increase virtual memory
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Fix memory lock issues
sudo nano /etc/elasticsearch/elasticsearch.yml
# Add: bootstrap.memory_lock: false

# Restart Elasticsearch
sudo systemctl restart elasticsearch

Problem 2: Logstash Parsing Errors

Symptoms: Logs not appearing in Elasticsearch, parsing failures

# Check Logstash logs for errors
sudo tail -f /var/log/logstash/logstash-plain.log

# Test configuration syntax
sudo -u logstash /usr/share/logstash/bin/logstash --config.test_and_exit --path.config=/etc/logstash/conf.d/

# Common fix: Grok pattern issues
# Validate patterns with Kibana's Grok Debugger (Dev Tools → Grok Debugger)

# Restart Logstash after fixes
sudo systemctl restart logstash

Problem 3: Disk Space Issues

Symptoms: System running out of space, logs growing too large

# Check disk usage by logs
sudo du -sh /var/log/* | sort -hr

# Set up log rotation for Elasticsearch
sudo nano /etc/cron.daily/elasticsearch-cleanup

#!/bin/bash
# Delete indices older than 30 days
# Note: Elasticsearch 8.x refuses wildcard deletes unless
# action.destructive_requires_name is set to false in elasticsearch.yml
curl -XDELETE "localhost:9200/almalinux-*$(date -d '30 days ago' '+%Y.%m.%d')"

# Make executable
sudo chmod +x /etc/cron.daily/elasticsearch-cleanup

# Configure rsyslog rotation
sudo nano /etc/logrotate.d/rsyslog-custom
/var/log/remote/*/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    create 644 root root
}
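
You can dry-run the new rule to make sure it parses before the nightly logrotate cron job picks it up:

# Debug mode: show what logrotate would do without rotating anything
sudo logrotate -d /etc/logrotate.d/rsyslog-custom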

Problem 4: Network Connectivity Issues

Symptoms: Remote logs not arriving, connection refused errors

# Check if rsyslog is listening
sudo ss -tulpn | grep :514

# Test connectivity
nc -zv localhost 514
telnet localhost 5514

# Check firewall settings
sudo firewall-cmd --list-all
sudo firewall-cmd --add-port=514/tcp --permanent
sudo firewall-cmd --add-port=514/udp --permanent
sudo firewall-cmd --add-port=5514/tcp --permanent
sudo firewall-cmd --reload

# Restart services
sudo systemctl restart rsyslog logstash

📋 Simple Commands Summary

Here's your quick reference for centralized log management! 📚

Task | Command | Purpose
Check Logs | sudo journalctl -f | Follow system journal
Test Logging | logger "Test message" | Send test log message
ELK Status | sudo systemctl status elasticsearch logstash kibana | Check ELK services
View Indices | curl localhost:9200/_cat/indices | List Elasticsearch indices
Search Logs | curl "localhost:9200/almalinux-*/_search" | Search all log indices
Rsyslog Test | rsyslogd -N1 | Test rsyslog config
Log Stats | sudo /usr/local/bin/log-dashboard.sh | Show log statistics
Monitor Errors | sudo systemctl status log-monitor.service | Check error monitoring
Kibana Access | http://localhost:5601 | Open Kibana interface
Clear Cache | curl -XPOST localhost:9200/_cache/clear | Clear Elasticsearch cache
Delete Old Logs | curl -XDELETE "localhost:9200/old-index" | Remove old indices
Logstash Test | sudo -u logstash /usr/share/logstash/bin/logstash --config.test_and_exit | Test Logstash config

💡 Tips for Success

Follow these expert tips to become a log management wizard! 🧙‍♂️

🎯 Smart Log Management Strategy

  • Plan your retention policy – Decide how long to keep different types of logs
  • Use structured logging – JSON format makes parsing and searching much easier (see the sketch after this list)
  • Set up log rotation – Prevent logs from consuming all your disk space
  • Monitor your monitoring – Make sure your log system itself is healthy
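
Here's a minimal sketch of what structured logging buys you with the pipeline from this guide. The app-events tag and the JSON field names are illustrative, not part of the configs above:

# Emit a JSON event instead of free-form text
logger -t app-events '{"event":"user_login","user":"alice","duration_ms":42}'

# Then, in a Logstash filter block, explode it into real, queryable fields:
# filter {
#   if [program] == "app-events" {
#     json { source => "message" }   # yields event, user, duration_ms as fields
#   }
# }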

🔧 Performance Optimization

  • Index optimization – Use appropriate Elasticsearch mappings for your data
  • Batch processing – Configure Logstash to process logs in batches for better performance
  • Resource allocation – Give Elasticsearch enough memory but not too much
  • Network tuning – Use appropriate protocols (TCP vs UDP) based on reliability needs

๐Ÿ›ก๏ธ Security Best Practices

  • Enable authentication โ€“ Never run ELK stack without security in production
  • Encrypt in transit โ€“ Use TLS for log shipping over networks
  • Log sanitization โ€“ Remove sensitive data before storage
  • Access control โ€“ Implement role-based access to different log types
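
As a starting point for encrypted shipping, rsyslog's gtls driver can wrap the forwarding rule from Step 3 in TLS. This is a sketch, not a drop-in config: it assumes the rsyslog-gnutls package is installed, a CA certificate already exists at the path shown, and logs.example.com stands in for your real collector:

# /etc/rsyslog.d/20-tls-forward.conf (sketch)
$DefaultNetstreamDriverCAFile /etc/pki/rsyslog/ca.pem
$DefaultNetstreamDriver gtls
$ActionSendStreamDriverMode 1          # force TLS for this action
$ActionSendStreamDriverAuthMode anon   # demo only; use x509/name in production
*.* @@logs.example.com:6514            # TLS syslog conventionally uses port 6514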

🚀 Advanced Features

  • Set up alerting – Use Watcher or external tools for automated alerts
  • Create dashboards – Kibana visualizations make data analysis much easier
  • Use machine learning – Elasticsearch ML can detect anomalies automatically
  • Implement log correlation – Connect related events across different systems

๐Ÿ† What You Learned

Congratulations! Youโ€™ve mastered centralized log management on AlmaLinux! ๐ŸŽ‰ Hereโ€™s your impressive skill set:

โœ… Built a complete ELK stack from scratch with Elasticsearch, Logstash, and Kibana โœ… Configured centralized log collection using rsyslog and advanced parsing rules โœ… Set up intelligent log parsing with Grok patterns and structured data extraction โœ… Implemented real-time monitoring with automated alerts and dashboards โœ… Created performance metrics logging for comprehensive system monitoring โœ… Mastered log analysis techniques for troubleshooting and security monitoring โœ… Designed scalable log architecture that can grow with your infrastructure โœ… Implemented security monitoring with failed login detection and threat analysis โœ… Built automated maintenance with log rotation and cleanup procedures โœ… Created custom monitoring tools for real-time system health checks

🎯 Why This Matters

Centralized log management is like having X-ray vision for your infrastructure! 🦸‍♂️

In today's complex IT environments, logs are your best friends for understanding what's really happening. Whether you're troubleshooting a mysterious outage at 3 AM or investigating a security incident, your centralized logging system provides the answers you need instantly.

These skills are incredibly valuable in the job market – companies desperately need people who can implement and manage robust logging solutions. From DevOps engineers to security analysts, centralized logging expertise opens doors to exciting career opportunities.

Remember, great logging isn't just about collecting data – it's about turning that data into actionable insights that make your systems more reliable, secure, and performant. You now have the power to build logging solutions that scale from small startups to enterprise-grade infrastructures!

Keep exploring, keep learning, and keep making your logs tell amazing stories! 📊⭐🙌