AlmaLinux Logging: Complete ELK Stack Guide for Centralized Log Management
Hey there, logging legend! Ready to transform your scattered log files into a powerful, searchable, and visualizable intelligence system? Today we're building the mighty ELK Stack (Elasticsearch, Logstash, Kibana) on AlmaLinux that will turn your logs from mysterious text files into actionable insights!
Whether you're debugging applications, tracking security events, or analyzing system behavior, this guide will turn your AlmaLinux system into a log management powerhouse that makes finding needles in haystacks look easy!
Why is the ELK Stack Important?
Imagine trying to find a specific error across hundreds of servers by manually checking log files: it's like searching for a specific grain of sand on a beach! Without centralized logging, you're blind to patterns and trends that could save your system!
Here's why the ELK Stack on AlmaLinux is absolutely game-changing:
- Instant Search - Find any log entry across all systems in seconds
- Beautiful Visualizations - Turn boring logs into insightful dashboards
- Real-Time Analysis - Spot problems as they happen, not hours later
- Pattern Detection - Identify trends and anomalies automatically
- Security Monitoring - Track suspicious activities across your infrastructure
- Centralized Storage - All logs in one searchable location
- Automatic Processing - Parse and enrich logs without manual intervention
- Alert Integration - Get notified when critical events occur
What You Need
Before we start building your logging empire, let's make sure you have everything ready:
- AlmaLinux 9.x system (with 8+ GB RAM recommended)
- Java 11 or higher (we'll install it)
- Root or sudo access for installation
- Internet connection for downloading packages
- 50+ GB disk space for log storage
- Basic understanding of log files and formats
- Systems generating logs (web servers, applications, etc.)
- Enthusiasm for data analysis!
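Want to double-check before you begin? Here's a quick sanity check (plain commands on a stock AlmaLinux 9 box) for the OS version, RAM, and free disk space:
# Confirm OS release, available memory, and disk headroom
cat /etc/almalinux-release
free -h
df -h /var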
Step 1: Install Elasticsearch
Let's start by setting up Elasticsearch, the heart of our logging system!
# Install Java (Elasticsearch 8 bundles its own JDK, but a system Java is useful for other tooling)
sudo dnf install -y java-11-openjdk java-11-openjdk-devel
# Verify Java installation
java -version
# Import Elasticsearch GPG key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# Add Elasticsearch repository
sudo tee /etc/yum.repos.d/elasticsearch.repo << 'EOF'
[elasticsearch-8.x]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
# Install Elasticsearch
sudo dnf install -y elasticsearch
# Configure Elasticsearch for single-node setup
sudo tee /etc/elasticsearch/elasticsearch.yml << 'EOF'
# Cluster Settings
cluster.name: almalinux-logs
node.name: node-1
# Network Settings
network.host: 0.0.0.0
http.port: 9200
# Discovery Settings (single-node)
discovery.type: single-node
# Security Settings
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Memory Settings
indices.breaker.total.limit: 70%
indices.fielddata.cache.size: 30%
# Path Settings
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
EOF
# Set JVM heap size (rule of thumb: ~50% of system RAM, never above ~32GB)
# On Elasticsearch 8, use a drop-in file instead of editing jvm.options directly
sudo tee /etc/elasticsearch/jvm.options.d/heap.options << 'EOF'
-Xms2g
-Xmx2g
EOF
# Start and enable Elasticsearch
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
# Wait for Elasticsearch to start
sleep 30
# Test Elasticsearch
curl -X GET "localhost:9200"
Expected output:
{
"name" : "node-1",
"cluster_name" : "almalinux-logs",
"version" : {
"number" : "8.x.x"
},
"tagline" : "You Know, for Search"
}
Perfect! Elasticsearch is running!
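As an extra sanity check, the cluster health API should report "green" or "yellow" (single-node clusters commonly show "yellow" because replica shards have nowhere to go):
# Check cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"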
Step 2: Install and Configure Logstash
Now let's set up Logstash to process and parse our logs:
# Install Logstash
sudo dnf install -y logstash
# Create Logstash pipeline configuration
sudo tee /etc/logstash/conf.d/main-pipeline.conf << 'EOF'
# Input plugins - where logs come from
input {
# Accept logs from Beats
beats {
port => 5044
}
# Accept syslog messages
syslog {
port => 5514
type => "syslog"
}
# Accept JSON over HTTP
http {
port => 8080
codec => json
}
# Read local log files
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
# Filter plugins - process and enrich logs
filter {
# Parse syslog messages
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:msg}" }
}
date {
match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
# Parse Apache/Nginx access logs
if [type] == "apache" or [type] == "nginx" {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
geoip {
source => "clientip"
target => "geoip"
}
useragent {
source => "agent"
target => "useragent"
}
}
# Parse application JSON logs
if [type] == "application" {
json {
source => "message"
}
}
# Add metadata
mutate {
add_field => { "[@metadata][environment]" => "production" }
add_field => { "[@metadata][datacenter]" => "almalinux-dc1" }
}
# Remove unnecessary fields
mutate {
remove_field => [ "host", "port" ]
}
}
# Output plugins - where processed logs go
output {
# Send to Elasticsearch
elasticsearch {
hosts => ["localhost:9200"]
index => "logs-%{[type]}-%{+YYYY.MM.dd}"
template_overwrite => true
}
# Debug output (comment out in production)
stdout {
codec => rubydebug
}
}
EOF
# Test Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/main-pipeline.conf
# Start and enable Logstash
sudo systemctl enable logstash
sudo systemctl start logstash
# Check Logstash status
sudo systemctl status logstash
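If you want more than a green systemd status, Logstash exposes a monitoring API on port 9600 (the default; adjust if you changed api.http.port) that confirms the pipeline actually loaded:
# Query the Logstash node API for pipeline details
curl -s "localhost:9600/_node/pipelines?pretty"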
Excellent! Logstash is processing logs!
Step 3: Install and Configure Kibana
Time to add the visual magic with Kibana:
# Install Kibana
sudo dnf install -y kibana
# Configure Kibana
sudo tee /etc/kibana/kibana.yml << 'EOF'
# Server Settings
server.port: 5601
server.host: "0.0.0.0"
server.name: "almalinux-kibana"
# Elasticsearch Settings
elasticsearch.hosts: ["http://localhost:9200"]
# Logging Settings (Kibana 8 replaced logging.dest with the appenders syntax)
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: pattern
  root:
    appenders: [default, file]
# UI Settings
server.defaultRoute: /app/discover
telemetry.enabled: false
# Note: xpack.security.enabled is not a valid kibana.yml setting in 8.x;
# security is controlled on the Elasticsearch side
EOF
# Create log directory
sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana
# Start and enable Kibana
sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana
# Configure firewall
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --reload
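The Logstash pipeline above also listens on 5514 (syslog) and 8080 (HTTP JSON). If remote machines will send events straight to those inputs, open them as well; skip this if everything ships through Filebeat on 5044:
# Optional: open the Logstash syslog and HTTP input ports for remote senders
sudo firewall-cmd --permanent --add-port=5514/tcp
sudo firewall-cmd --permanent --add-port=5514/udp
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload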
# Wait for Kibana to start
echo "Waiting for Kibana to start (this may take a minute)..."
sleep 60
# Check if Kibana is running
curl -I http://localhost:5601
Access Kibana at http://your-server:5601
Fantastic! Your ELK Stack is ready!
Step 4: Configure Filebeat for Log Collection
Let's set up Filebeat to ship logs from various sources:
# Install Filebeat
sudo dnf install -y filebeat
# Configure Filebeat
sudo tee /etc/filebeat/filebeat.yml << 'EOF'
# Filebeat Configuration
filebeat.inputs:
# System logs
- type: log
enabled: true
paths:
- /var/log/messages
- /var/log/secure
- /var/log/cron
fields:
logtype: system
multiline.pattern: '^\['
multiline.negate: false
multiline.match: after
# Apache/Nginx logs
- type: log
enabled: true
paths:
- /var/log/httpd/access_log
- /var/log/nginx/access.log
fields:
logtype: webserver
- type: log
enabled: true
paths:
- /var/log/httpd/error_log
- /var/log/nginx/error.log
fields:
logtype: webserver-error
# Application logs
- type: log
enabled: true
paths:
- /opt/application/logs/*.log
fields:
logtype: application
json.keys_under_root: true
json.add_error_key: true
# Docker container logs
- type: container
enabled: true
paths:
- '/var/lib/docker/containers/*/*.log'
fields:
logtype: docker
# Processors to enhance logs
processors:
- add_host_metadata:
when.not.contains:
tags: forwarded
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
# Output to Logstash
output.logstash:
hosts: ["localhost:5044"]
# Logging configuration
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
# Enable and start Filebeat
sudo systemctl enable filebeat
sudo systemctl start filebeat
# Test Filebeat configuration
sudo filebeat test config
sudo filebeat test output
Perfect! Filebeat is shipping logs to Logstash!
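Give it a minute or two, then verify the full path from Filebeat through Logstash into Elasticsearch (index names follow the logs-<type>-YYYY.MM.dd pattern from the Logstash output):
# List the indices created by the pipeline
curl -X GET "localhost:9200/_cat/indices/logs-*?v"
# Pull one recent document to confirm fields are being parsed
curl -X GET "localhost:9200/logs-*/_search?size=1&pretty"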
Quick Examples
Example 1: Custom Log Parser in Logstash
# Create custom parser for application logs
sudo tee /etc/logstash/conf.d/app-parser.conf << 'EOF'
filter {
if [fields][logtype] == "application" {
grok {
match => {
"message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:module}\] %{GREEDYDATA:log_message}"
}
}
# Stack traces should be joined at the shipper: the standalone multiline
# filter is no longer bundled with modern Logstash, so use Filebeat's
# multiline settings (see the sketch after this example) or a multiline codec
# Extract custom metrics
if [log_message] =~ /response_time/ {
grok {
match => {
"log_message" => "response_time=(?<response_time>[0-9.]+)ms"
}
}
mutate {
convert => { "response_time" => "float" }
}
}
# Add alert flag for errors
if [level] in ["ERROR", "CRITICAL"] {
mutate {
add_tag => [ "alert" ]
add_field => { "alert_priority" => "high" }
}
}
}
}
EOF
# Restart Logstash to pick up the new file (the stock systemd unit has no reload action)
sudo systemctl restart logstash
This creates advanced log parsing!
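Because stack-trace joining now belongs at the shipper, here's a rough sketch of the matching Filebeat settings for the application input from Step 4 (the pattern assumes Java-style traces; tune it for your framework):
# Add to the application log input in /etc/filebeat/filebeat.yml
  multiline.pattern: '^\s+at |^Caused by:'
  multiline.negate: false
  multiline.match: after
# Then restart Filebeat
sudo systemctl restart filebeat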
Example 2: Kibana Dashboard Creation
# Create dashboard configuration
cat > create-dashboard.sh << 'EOF'
#!/bin/bash
# Create comprehensive Kibana dashboard
KIBANA_URL="http://localhost:5601"
# Create index pattern
curl -X POST "$KIBANA_URL/api/saved_objects/index-pattern" \
-H "kbn-xsrf: true" \
-H "Content-Type: application/json" \
-d '{
"attributes": {
"title": "logs-*",
"timeFieldName": "@timestamp"
}
}'
# Create visualizations
curl -X POST "$KIBANA_URL/api/saved_objects/visualization" \
-H "kbn-xsrf: true" \
-H "Content-Type: application/json" \
-d '{
"attributes": {
"title": "Log Volume Over Time",
"visState": "{\"title\":\"Log Volume Over Time\",\"type\":\"line\",\"aggs\":[{\"id\":\"1\",\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\"}}]}",
"uiStateJSON": "{}",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logs-*\",\"query\":{\"match_all\":{}}}"
}
}
}'
# Create dashboard
curl -X POST "$KIBANA_URL/api/saved_objects/dashboard" \
-H "kbn-xsrf: true" \
-H "Content-Type: application/json" \
-d '{
"attributes": {
"title": "System Monitoring Dashboard",
"hits": 0,
"description": "Complete system monitoring dashboard",
"panelsJSON": "[{\"id\":\"log-volume\",\"type\":\"visualization\",\"size_x\":12,\"size_y\":4,\"col\":1,\"row\":1}]",
"version": 1
}
}'
echo "Dashboard created successfully!"
EOF
chmod +x create-dashboard.sh
./create-dashboard.sh
This creates interactive dashboards!
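To confirm the objects were actually accepted, you can list them back through the saved objects API (same KIBANA_URL assumption as the script above):
# List the dashboards and index patterns Kibana has stored
curl -s "http://localhost:5601/api/saved_objects/_find?type=dashboard" -H "kbn-xsrf: true"
curl -s "http://localhost:5601/api/saved_objects/_find?type=index-pattern" -H "kbn-xsrf: true"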
Example 3: Real-Time Alert Configuration
# Create alerting rules
sudo tee /etc/logstash/conf.d/alerts.conf << 'EOF'
filter {
# Detect brute force attempts
if [type] == "syslog" and [program] == "sshd" {
if [msg] =~ /Failed password/ {
throttle {
before_count => 3
after_count => 5
period => 60
key => "%{hostname}"
add_tag => "brute_force_attempt"
}
}
}
# Detect high error rates
if [level] == "ERROR" {
metrics {
meter => "error_rate"
add_tag => "metric"
rates => [1, 5, 15]
}
if [error_rate][rate_1m] > 10 {
mutate {
add_tag => [ "high_error_rate", "alert" ]
}
}
}
# Detect slow responses
if [response_time] > 1000 {
mutate {
add_tag => [ "slow_response", "performance_issue" ]
}
}
}
output {
# Send alerts to monitoring system
if "alert" in [tags] {
http {
url => "http://alerting-system/webhook"
http_method => "post"
format => "json"
mapping => {
"alert_type" => "%{alert_priority}"
"message" => "%{message}"
"host" => "%{hostname}"
"timestamp" => "%{@timestamp}"
}
}
# Also send email for critical alerts
if [alert_priority] == "critical" {
email {
to => "[email protected]"
subject => "Critical Alert: %{alert_type}"
body => "Alert detected at %{@timestamp}: %{message}"
}
}
}
}
EOF
# Restart Logstash with new configuration
sudo systemctl restart logstash
This enables real-time alerting!
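The throttle, metrics, http, and email plugins used above ship with most Logstash bundles, but it's worth checking before you depend on an alert path; a quick verification (and install command if one is missing):
# Confirm the filter and output plugins are installed
sudo /usr/share/logstash/bin/logstash-plugin list | grep -E 'throttle|metrics|output-http|output-email'
# Install any missing plugin, e.g. the email output
sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-email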
Fix Common Problems
Problem 1: Elasticsearch Won't Start
Symptoms: Elasticsearch fails to start or crashes
# Check Elasticsearch logs
sudo tail -f /var/log/elasticsearch/almalinux-logs.log
# Common fixes:
# 1. Fix memory issues
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# 2. Fix permissions
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch
# 3. Clear corrupted indices (WARNING: this permanently deletes all indexed data; last resort only)
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch
# 4. Adjust heap size based on available memory
free -h
sudo vim /etc/elasticsearch/jvm.options.d/heap.options
# Set -Xms and -Xmx to roughly 50% of available RAM (never above ~32GB)
# 5. Check disk space
df -h
# Elasticsearch enforces disk watermarks (85%/90%/95% by default), so keep comfortable free space
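If the disk did fill up, Elasticsearch marks indices read-only once the flood-stage watermark (95% by default) is reached; after freeing space, clear that block. A hedged example:
# Remove the read-only block applied by the flood-stage watermark
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{ "index.blocks.read_only_allow_delete": null }'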
Problem 2: Logstash Pipeline Errors
Symptoms: Logs not appearing in Elasticsearch
# Debug Logstash pipeline
sudo /usr/share/logstash/bin/logstash --debug -f /etc/logstash/conf.d/main-pipeline.conf
# Check for configuration errors
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/
# Monitor Logstash logs
sudo journalctl -u logstash -f
# Test a minimal pipeline from stdin (the main pipeline has no stdin input, so use -e)
echo "test log message" | sudo /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
# Fix common issues:
# - Wrong Elasticsearch host
sudo grep -R "hosts =>" /etc/logstash/conf.d/
# - Permission issues (on AlmaLinux, /var/log/messages is readable only by root; give Logstash read access)
sudo setfacl -m u:logstash:r /var/log/messages
sudo systemctl restart logstash
Problem 3: Kibana Can't Connect to Elasticsearch
Symptoms: Kibana shows "Unable to connect to Elasticsearch"
# Verify Elasticsearch is running
curl -X GET "localhost:9200"
# Check Kibana configuration
sudo grep elasticsearch.hosts /etc/kibana/kibana.yml
# Test connectivity
curl -X GET "localhost:9200/_cluster/health"
# Fix common issues:
# 1. Restart services in order
sudo systemctl restart elasticsearch
sleep 30
sudo systemctl restart kibana
# 2. Check firewall
sudo firewall-cmd --list-all
# 3. Reset Kibana saved objects (WARNING: this deletes all saved dashboards, visualizations, and settings)
curl -X DELETE "localhost:9200/.kibana*"
sudo systemctl restart kibana
Problem 4: High Resource Usage
Symptoms: System running slow, high CPU/memory usage
# Monitor resource usage
htop
docker stats
# Optimize Elasticsearch
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"indices.breaker.request.limit": "40%",
"indices.breaker.total.limit": "70%"
}
}'
# Reduce Logstash workers
sudo sed -i 's/^# *pipeline.workers:.*/pipeline.workers: 2/' /etc/logstash/logstash.yml
# Implement index lifecycle management
curl -X PUT "localhost:9200/_ilm/policy/logs-policy" -H 'Content-Type: application/json' -d'
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_size": "10GB",
"max_age": "7d"
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}'
# Clean up old indices
curl -X DELETE "localhost:9200/logs-*-2024.01.*"
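Keep in mind an ILM policy only acts on indices that reference it. A minimal sketch of an index template that attaches logs-policy to future logs-* indices (the rollover action also requires a write alias, which the daily date-stamped indices above don't use, so you may prefer to rely on the delete phase alone):
# Attach the ILM policy to new logs-* indices via an index template
curl -X PUT "localhost:9200/_index_template/logs-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": { "index.lifecycle.name": "logs-policy" }
  }
}'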
Simple Commands Summary
| Command | Purpose |
|---|---|
| `sudo systemctl status elasticsearch` | Check Elasticsearch status |
| `curl -X GET "localhost:9200/_cluster/health"` | Check cluster health |
| `sudo systemctl status logstash` | Check Logstash status |
| `sudo systemctl status kibana` | Check Kibana status |
| `curl -X GET "localhost:9200/_cat/indices"` | List all indices |
| `sudo filebeat test config` | Test Filebeat configuration |
| `sudo journalctl -u logstash -f` | View Logstash logs |
| `curl -X GET "localhost:9200/logs-*/_search"` | Search logs |
| `sudo /usr/share/logstash/bin/logstash --config.test_and_exit` | Test Logstash config |
| `curl -X DELETE "localhost:9200/logs-old-*"` | Delete old indices |
Tips for Success
- Start Simple: Begin with basic log collection before complex parsing
- Index Strategy: Plan your index naming and retention policies
- Dashboard Design: Create focused dashboards for different use cases
- Security First: Enable authentication and SSL in production
- Performance Tune: Monitor and optimize based on your log volume
- Document Patterns: Keep a library of useful Grok patterns
- Regular Maintenance: Schedule index cleanup and optimization
- Buffer Wisely: Use persistent queues for reliability (see the sketch after this list)
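A minimal sketch of turning on persistent queues (queue.type and queue.max_bytes are standard logstash.yml settings; the 4gb cap is an assumption you should size to your disk):
# Enable disk-backed queues so events survive Logstash restarts
sudo tee -a /etc/logstash/logstash.yml << 'EOF'
queue.type: persisted
queue.max_bytes: 4gb
EOF
sudo systemctl restart logstash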
What You Learned
Congratulations! You've successfully mastered the ELK Stack on AlmaLinux!
- Installed Elasticsearch for log storage and search
- Configured Logstash for log processing and enrichment
- Set up Kibana for visualization and analysis
- Deployed Filebeat for log collection and shipping
- Created custom parsers for different log formats
- Built dashboards for monitoring and analysis
- Implemented alerting for critical events
- Optimized performance for production use
Why This Matters
Centralized logging is the foundation of modern operations! With your AlmaLinux ELK Stack, you now have:
- Complete visibility into all system and application logs
- Powerful search capabilities across terabytes of data
- Real-time analysis for immediate problem detection
- Historical insights for trend analysis and capacity planning
- Foundation for compliance and security monitoring
You're now equipped to handle logging at scale, turning mountains of log data into actionable intelligence that drives better decisions and faster problem resolution!
Keep logging, keep analyzing, and remember: logs are your system's story, and now you can read every chapter! You've got this!