📚 AlmaLinux Logging: Complete ELK Stack Guide for Centralized Log Management
Published Sep 18, 2025

Master centralized logging on AlmaLinux! Learn Elasticsearch, Logstash, Kibana setup, log aggregation, analysis, and visualization. Complete guide with real examples and troubleshooting.


Hey there, logging legend! 🎉 Ready to transform your scattered log files into a powerful, searchable, and visualizable intelligence system? Today we're building the mighty ELK Stack (Elasticsearch, Logstash, Kibana) on AlmaLinux that will turn your logs from mysterious text files into actionable insights! 🚀

Whether you're debugging applications, tracking security events, or analyzing system behavior, this guide will turn your AlmaLinux system into a log management powerhouse that makes finding needles in haystacks look easy! 💪

🤔 Why is the ELK Stack Important?

Imagine trying to find a specific error across hundreds of servers by manually checking log files – it's like searching for a specific grain of sand on a beach! 😱 Without centralized logging, you're blind to patterns and trends that could save your system!

Here's why the ELK Stack on AlmaLinux is absolutely game-changing:

  • 🔍 Instant Search - Find any log entry across all systems in seconds
  • 📊 Beautiful Visualizations - Turn boring logs into insightful dashboards
  • 🚨 Real-Time Analysis - Spot problems as they happen, not hours later
  • 📈 Pattern Detection - Identify trends and anomalies automatically
  • 🛡️ Security Monitoring - Track suspicious activities across your infrastructure
  • 💾 Centralized Storage - All logs in one searchable location
  • 🔄 Automatic Processing - Parse and enrich logs without manual intervention
  • 📱 Alert Integration - Get notified when critical events occur

🎯 What You Need

Before we start building your logging empire, let's make sure you have everything ready:

✅ AlmaLinux 9.x system (with 8+ GB RAM recommended)
✅ Java 11 or higher (we'll install it)
✅ Root or sudo access for installation
✅ Internet connection for downloading packages
✅ 50+ GB disk space for log storage
✅ Basic understanding of log files and formats
✅ Systems generating logs (web servers, applications, etc.)
✅ Enthusiasm for data analysis! 📊

📝 Step 1: Install Elasticsearch

Let's start by setting up Elasticsearch, the heart of our logging system! 🎯

# Install Java (Elasticsearch 8.x ships with a bundled JDK, but a system JDK is useful for other tools)
sudo dnf install -y java-11-openjdk java-11-openjdk-devel

# Verify Java installation
java -version

# Import Elasticsearch GPG key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Add Elasticsearch repository
sudo tee /etc/yum.repos.d/elasticsearch.repo << 'EOF'
[elasticsearch-8.x]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

# Install Elasticsearch
sudo dnf install -y elasticsearch

# Configure Elasticsearch for single-node setup
sudo tee /etc/elasticsearch/elasticsearch.yml << 'EOF'
# Cluster Settings
cluster.name: almalinux-logs
node.name: node-1

# Network Settings
network.host: 0.0.0.0
http.port: 9200

# Discovery Settings (single-node)
discovery.type: single-node

# Security Settings
xpack.security.enabled: false
xpack.security.enrollment.enabled: false

# Memory Settings
indices.breaker.total.limit: 70%
indices.fielddata.cache.size: 30%

# Path Settings
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
EOF

# Set JVM heap size (here 2 GB; use ~50% of available RAM, up to 32 GB)
sudo tee /etc/elasticsearch/jvm.options.d/heap.options << 'EOF'
-Xms2g
-Xmx2g
EOF

# Start and enable Elasticsearch
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Wait for Elasticsearch to start
sleep 30

# Test Elasticsearch
curl -X GET "localhost:9200"

Expected output:

{
  "name" : "node-1",
  "cluster_name" : "almalinux-logs",
  "version" : {
    "number" : "8.x.x"
  },
  "tagline" : "You Know, for Search"
}

Perfect! Elasticsearch is running! 🎉
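
If you want to go one step further than the banner check, you can index and search a throwaway document; here is a minimal sketch (the smoke-test index name is just an example):

# Index a test document (the index is created on the fly)
curl -X POST "localhost:9200/smoke-test/_doc?refresh=true" -H 'Content-Type: application/json' -d'
{
  "message": "hello from almalinux",
  "@timestamp": "2025-09-18T12:00:00Z"
}'

# Search for it
curl -X GET "localhost:9200/smoke-test/_search?q=message:hello&pretty"

# Clean up the test index
curl -X DELETE "localhost:9200/smoke-test"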

🔧 Step 2: Install and Configure Logstash

Now let's set up Logstash to process and parse our logs:

# Install Logstash
sudo dnf install -y logstash

# Create Logstash pipeline configuration
sudo tee /etc/logstash/conf.d/main-pipeline.conf << 'EOF'
# Input plugins - where logs come from
input {
  # Accept logs from Beats
  beats {
    port => 5044
  }

  # Accept syslog messages
  syslog {
    port => 5514
    type => "syslog"
  }

  # Accept JSON over HTTP
  http {
    port => 8080
    codec => json
  }

  # Read local log files
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
}

# Filter plugins - process and enrich logs
filter {
  # Parse syslog messages
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:msg}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  # Parse Apache/Nginx access logs
  if [type] == "apache" or [type] == "nginx" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
      target => "geoip"
    }
    useragent {
      source => "agent"
      target => "useragent"
    }
  }

  # Parse application JSON logs
  if [type] == "application" {
    json {
      source => "message"
    }
  }

  # Add metadata
  mutate {
    add_field => { "[@metadata][environment]" => "production" }
    add_field => { "[@metadata][datacenter]" => "almalinux-dc1" }
  }

  # Remove unnecessary fields
  mutate {
    remove_field => [ "host", "port" ]
  }
}

# Output plugins - where processed logs go
output {
  # Send to Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-%{[type]}-%{+YYYY.MM.dd}"
    template_overwrite => true
  }

  # Debug output (comment out in production)
  stdout {
    codec => rubydebug
  }
}
EOF

# Test Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/main-pipeline.conf

# Start and enable Logstash
sudo systemctl enable logstash
sudo systemctl start logstash

# Check Logstash status
sudo systemctl status logstash

Excellent! Logstash is processing logs! 🌟
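
To confirm the pipeline end to end, you can post a JSON event to the HTTP input we opened on port 8080 and then look for it in Elasticsearch; a minimal sketch (the smoke-test type and message are just examples):

# Send a test event to the Logstash HTTP input
curl -X POST "http://localhost:8080" -H 'Content-Type: application/json' -d'
{
  "type": "smoke-test",
  "message": "logstash pipeline check",
  "level": "INFO"
}'

# Give Logstash a few seconds to flush, then search for the event
sleep 10
curl -X GET "localhost:9200/logs-smoke-test-*/_search?q=message:pipeline&pretty"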

🌟 Step 3: Install and Configure Kibana

Time to add the visual magic with Kibana:

# Install Kibana
sudo dnf install -y kibana

# Configure Kibana
sudo tee /etc/kibana/kibana.yml << 'EOF'
# Server Settings
server.port: 5601
server.host: "0.0.0.0"
server.name: "almalinux-kibana"

# Elasticsearch Settings
elasticsearch.hosts: ["http://localhost:9200"]

# Logging Settings (Kibana 8.x appender syntax)
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: pattern
  root:
    appenders: [default, file]

# UI Settings
server.defaultRoute: /app/discover
telemetry.enabled: false

# Note: xpack.security.enabled is no longer a valid kibana.yml setting in 8.x;
# security is controlled on the Elasticsearch side.
EOF

# Create log directory
sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana

# Start and enable Kibana
sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana

# Configure firewall
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --reload

# Wait for Kibana to start
echo "โณ Waiting for Kibana to start (this may take a minute)..."
sleep 60

# Check if Kibana is running
curl -I http://localhost:5601

Access Kibana at http://your-server:5601

Fantastic! Your ELK Stack is ready! 🎯
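
If you'd rather script the health check than open a browser, Kibana also exposes a status API; a minimal sketch (the exact JSON layout varies a little between 8.x releases):

# Ask Kibana for its own status; look for "available" in the reported levels
curl -s http://localhost:5601/api/status | grep -o '"level":"[a-z]*"' | head -n 5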

✅ Step 4: Configure Filebeat for Log Collection

Let's set up Filebeat to ship logs from various sources:

# Install Filebeat
sudo dnf install -y filebeat

# Configure Filebeat
sudo tee /etc/filebeat/filebeat.yml << 'EOF'
# Filebeat Configuration

filebeat.inputs:
  # System logs
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/cron
    fields:
      logtype: system
    multiline.pattern: '^\['
    multiline.negate: false
    multiline.match: after

  # Apache/Nginx logs
  - type: log
    enabled: true
    paths:
      - /var/log/httpd/access_log
      - /var/log/nginx/access.log
    fields:
      logtype: webserver

  - type: log
    enabled: true
    paths:
      - /var/log/httpd/error_log
      - /var/log/nginx/error.log
    fields:
      logtype: webserver-error

  # Application logs
  - type: log
    enabled: true
    paths:
      - /opt/application/logs/*.log
    fields:
      logtype: application
    json.keys_under_root: true
    json.add_error_key: true

  # Docker container logs
  - type: container
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    fields:
      logtype: docker

# Processors to enhance logs
processors:
  - add_host_metadata:
      when.not.contains:
        tags: forwarded
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# Output to Logstash
output.logstash:
  hosts: ["localhost:5044"]

# Logging configuration
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
EOF

# Enable and start Filebeat
sudo systemctl enable filebeat
sudo systemctl start filebeat

# Test Filebeat configuration
sudo filebeat test config
sudo filebeat test output

Perfect! Filebeat is shipping logs to Logstash! 📚
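
A quick way to prove the whole path (syslog file → Filebeat → Logstash → Elasticsearch) is to write a recognizable line into the system log and search for it; a minimal sketch, assuming rsyslog is writing to /var/log/messages (the AlmaLinux default):

# Write a marker message into the system log
logger "ELK-PIPELINE-TEST $(date +%s)"

# Give the pipeline a little time, then search for the marker
sleep 15
curl -X GET "localhost:9200/logs-*/_search?q=ELK-PIPELINE-TEST&pretty"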

🎮 Quick Examples

Example 1: Custom Log Parser in Logstash

# Create custom parser for application logs
sudo tee /etc/logstash/conf.d/app-parser.conf << 'EOF'
filter {
  if [fields][logtype] == "application" {
    grok {
      match => {
        "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:module}\] %{GREEDYDATA:log_message}"
      }
    }

    # Note: join multi-line stack traces at the shipper (e.g. Filebeat's multiline
    # settings); the standalone multiline filter plugin is not shipped with Logstash 8.

    # Extract custom metrics
    if [log_message] =~ /response_time/ {
      grok {
        match => {
          "log_message" => "response_time=(?<response_time>[0-9.]+)ms"
        }
      }
      mutate {
        convert => { "response_time" => "float" }
      }
    }

    # Add alert flag for errors
    if [level] in ["ERROR", "CRITICAL"] {
      mutate {
        add_tag => [ "alert" ]
        add_field => { "alert_priority" => "high" }
      }
    }
  }
}
EOF

# Restart Logstash to pick up the new pipeline
sudo systemctl restart logstash

This creates advanced log parsing! 🔍
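
Before relying on a pattern like this, it helps to feed a sample line through a throwaway stdin pipeline and check the parsed fields; a minimal sketch (the sample log line and /tmp path are just examples):

# Throwaway test config: stdin input, the same grok pattern, pretty-printed output
cat > /tmp/grok-test.conf << 'EOF'
input { stdin { } }
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:module}\] %{GREEDYDATA:log_message}"
    }
  }
}
output { stdout { codec => rubydebug } }
EOF

# Feed one sample application log line through it
echo '[2025-09-18T10:15:30.123Z] [ERROR] [auth] Login failed response_time=154.2ms' | \
  sudo /usr/share/logstash/bin/logstash -f /tmp/grok-test.conf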

Example 2: Kibana Dashboard Creation

# Create dashboard configuration
cat > create-dashboard.sh << 'EOF'
#!/bin/bash
# Create comprehensive Kibana dashboard

KIBANA_URL="http://localhost:5601"

# Create index pattern
curl -X POST "$KIBANA_URL/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "attributes": {
      "title": "logs-*",
      "timeFieldName": "@timestamp"
    }
  }'

# Create visualizations
curl -X POST "$KIBANA_URL/api/saved_objects/visualization" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "attributes": {
      "title": "Log Volume Over Time",
      "visState": "{\"title\":\"Log Volume Over Time\",\"type\":\"line\",\"aggs\":[{\"id\":\"1\",\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\"}}]}",
      "uiStateJSON": "{}",
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"index\":\"logs-*\",\"query\":{\"match_all\":{}}}"
      }
    }
  }'

# Create dashboard
curl -X POST "$KIBANA_URL/api/saved_objects/dashboard" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "attributes": {
      "title": "System Monitoring Dashboard",
      "hits": 0,
      "description": "Complete system monitoring dashboard",
      "panelsJSON": "[{\"id\":\"log-volume\",\"type\":\"visualization\",\"size_x\":12,\"size_y\":4,\"col\":1,\"row\":1}]",
      "version": 1
    }
  }'

echo "โœ… Dashboard created successfully!"
EOF

chmod +x create-dashboard.sh
./create-dashboard.sh

This creates interactive dashboards! 📊
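
Once you have dashboards you care about, the Kibana saved objects export API gives you a way to back them up or move them between environments; a minimal sketch (the output filename is just an example):

# Export all dashboards plus the objects they reference to an NDJSON file
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{ "type": ["dashboard"], "includeReferencesDeep": true }' \
  -o dashboards-export.ndjson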

Example 3: Real-Time Alert Configuration

# Create alerting rules
sudo tee /etc/logstash/conf.d/alerts.conf << 'EOF'
filter {
  # Detect brute force attempts
  if [type] == "syslog" and [program] == "sshd" {
    if [msg] =~ /Failed password/ {
      throttle {
        before_count => 3
        after_count => 5
        period => 60
        key => "%{hostname}"
        add_tag => "brute_force_attempt"
      }
    }
  }

  # Detect high error rates
  if [level] == "ERROR" {
    metrics {
      meter => "error_rate"
      add_tag => "metric"
      rates => [1, 5, 15]
    }

    if [error_rate][rate_1m] > 10 {
      mutate {
        add_tag => [ "high_error_rate", "alert" ]
      }
    }
  }

  # Detect slow responses
  if [response_time] > 1000 {
    mutate {
      add_tag => [ "slow_response", "performance_issue" ]
    }
  }
}

output {
  # Send alerts to monitoring system
  if "alert" in [tags] {
    http {
      url => "http://alerting-system/webhook"
      http_method => "post"
      format => "json"
      mapping => {
        "alert_type" => "%{alert_priority}"
        "message" => "%{message}"
        "host" => "%{hostname}"
        "timestamp" => "%{@timestamp}"
      }
    }

    # Also send email for critical alerts
    if [alert_priority] == "critical" {
      email {
        to => "[email protected]"
        subject => "Critical Alert: %{alert_type}"
        body => "Alert detected at %{@timestamp}: %{message}"
      }
    }
  }
}
EOF

# Restart Logstash with new configuration
sudo systemctl restart logstash

This enables real-time alerting! 🚨
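
You can generate test traffic for the brute-force rule by sending fake sshd failures to the Logstash syslog input on port 5514 and then checking what landed in Elasticsearch; a minimal sketch (the user and IP are made up, and whether the brute_force_attempt tag actually fires depends on how your syslog events are parsed and on the throttle window above):

# Send six fake "Failed password" lines to the Logstash syslog input (TCP 5514)
for i in {1..6}; do
  logger -n 127.0.0.1 -P 5514 -T -t sshd \
    "Failed password for invalid user demo from 203.0.113.10 port 4242 ssh2"
done

# Wait for the events to be indexed, then inspect them (and their tags)
sleep 30
curl -X GET "localhost:9200/logs-syslog-*/_search?q=Failed&pretty"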

🚨 Fix Common Problems

Problem 1: Elasticsearch Won't Start

Symptoms: Elasticsearch fails to start or crashes

# Check Elasticsearch logs
sudo tail -f /var/log/elasticsearch/almalinux-logs.log

# Common fixes:

# 1. Fix memory issues
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# 2. Fix permissions
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch

# 3. Clear corrupted indices (WARNING: this deletes ALL indexed data)
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch

# 4. Adjust heap size based on available memory
free -h
sudo vim /etc/elasticsearch/jvm.options
# Set -Xms and -Xmx to 50% of available RAM (max 32GB)

# 5. Check disk space
df -h
# Elasticsearch needs at least 10% free disk space

Problem 2: Logstash Pipeline Errors

Symptoms: Logs not appearing in Elasticsearch

# Debug Logstash pipeline
sudo /usr/share/logstash/bin/logstash --log.level=debug -f /etc/logstash/conf.d/main-pipeline.conf

# Check for configuration errors
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/

# Monitor Logstash logs
sudo journalctl -u logstash -f

# Test a pipeline interactively (add a stdin {} input first, and stop the logstash
# service so the Beats/syslog ports are free)
echo "test log message" | sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/main-pipeline.conf

# Fix common issues:
# - Wrong Elasticsearch host in the output block
sudo grep -n 'hosts' /etc/logstash/conf.d/*.conf

# - Permission issues
sudo usermod -a -G adm logstash
sudo systemctl restart logstash

Problem 3: Kibana Can't Connect to Elasticsearch

Symptoms: Kibana shows "Unable to connect to Elasticsearch"

# Verify Elasticsearch is running
curl -X GET "localhost:9200"

# Check Kibana configuration
sudo grep elasticsearch.hosts /etc/kibana/kibana.yml

# Test connectivity
curl -X GET "localhost:9200/_cluster/health"

# Fix common issues:
# 1. Restart services in order
sudo systemctl restart elasticsearch
sleep 30
sudo systemctl restart kibana

# 2. Check firewall
sudo firewall-cmd --list-all

# 3. Reset Kibana saved objects
curl -X DELETE "localhost:9200/.kibana*"
sudo systemctl restart kibana

Problem 4: High Resource Usage

Symptoms: System running slow, high CPU/memory usage

# Monitor resource usage
htop
docker stats  # only relevant if parts of the stack run in containers

# Optimize Elasticsearch
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "indices.breaker.request.limit": "40%",
    "indices.breaker.total.limit": "70%"
  }
}'

# Reduce Logstash workers
sudo sed -i 's/^# pipeline.workers:.*/pipeline.workers: 2/' /etc/logstash/logstash.yml

# Implement index lifecycle management
curl -X PUT "localhost:9200/_ilm/policy/logs-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "10GB",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'

# Clean up old indices
curl -X DELETE "localhost:9200/logs-*-2024.01.*"
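
One thing to keep in mind: an ILM policy only acts on indices that reference it, so attach logs-policy to new indices through an index template; a minimal sketch matching the logs-* naming used by our Logstash output (the template name is just an example, and the rollover action additionally expects a rollover alias if you keep it):

# Attach the ILM policy to future logs-* indices via an index template
curl -X PUT "localhost:9200/_index_template/logs-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "logs-policy"
    }
  }
}'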

📋 Simple Commands Summary

  • sudo systemctl status elasticsearch - Check Elasticsearch status
  • curl -X GET "localhost:9200/_cluster/health" - Check cluster health
  • sudo systemctl status logstash - Check Logstash status
  • sudo systemctl status kibana - Check Kibana status
  • curl -X GET "localhost:9200/_cat/indices" - List all indices
  • sudo filebeat test config - Test Filebeat configuration
  • sudo journalctl -u logstash -f - View Logstash logs
  • curl -X GET "localhost:9200/logs-*/_search" - Search logs
  • sudo /usr/share/logstash/bin/logstash --config.test_and_exit - Test Logstash config
  • curl -X DELETE "localhost:9200/logs-old-*" - Delete old indices

💡 Tips for Success

🎯 Start Simple: Begin with basic log collection before complex parsing

🔍 Index Strategy: Plan your index naming and retention policies

📊 Dashboard Design: Create focused dashboards for different use cases

🛡️ Security First: Enable authentication and SSL in production

🚀 Performance Tune: Monitor and optimize based on your log volume

📝 Document Patterns: Keep a library of useful Grok patterns

🔄 Regular Maintenance: Schedule index cleanup and optimization

⚡ Buffer Wisely: Use persistent queues for reliability
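
On the persistent-queue tip: Logstash buffers events in memory by default, so anything in flight is lost on a crash or restart; a minimal sketch of switching to the disk-backed queue (the 4gb cap is just an example, size it to your disk budget):

# Enable Logstash's persistent (disk-backed) queue
sudo tee -a /etc/logstash/logstash.yml << 'EOF'
queue.type: persisted
queue.max_bytes: 4gb
EOF

sudo systemctl restart logstash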

🏆 What You Learned

Congratulations! You've successfully mastered the ELK Stack on AlmaLinux! 🎉

✅ Installed Elasticsearch for log storage and search
✅ Configured Logstash for log processing and enrichment
✅ Set up Kibana for visualization and analysis
✅ Deployed Filebeat for log collection and shipping
✅ Created custom parsers for different log formats
✅ Built dashboards for monitoring and analysis
✅ Implemented alerting for critical events
✅ Optimized performance for production use

🎯 Why This Matters

Centralized logging is the foundation of modern operations! 🌟 With your AlmaLinux ELK Stack, you now have:

  • Complete visibility into all system and application logs
  • Powerful search capabilities across terabytes of data
  • Real-time analysis for immediate problem detection
  • Historical insights for trend analysis and capacity planning
  • Foundation for compliance and security monitoring

You're now equipped to handle logging at scale, turning mountains of log data into actionable intelligence that drives better decisions and faster problem resolution! 🚀

Keep logging, keep analyzing, and remember – logs are your system's story, and now you can read every chapter! You've got this! ⭐🙌