๐Ÿ” Elasticsearch and Kibana on AlmaLinux: Search and Visualize Everything

Published Aug 20, 2025

Deploy Elasticsearch and Kibana on AlmaLinux. Build powerful search, analyze logs, create stunning dashboards, and master the Elastic Stack with practical examples.

14 min read

Drowning in logs and can't find anything? 😵 I was there! Our app generated 10GB of logs daily and grep was useless. Then I discovered Elasticsearch and Kibana - suddenly I could search millions of logs in milliseconds and create beautiful dashboards! Today I'm showing you how to build your own search and analytics powerhouse on AlmaLinux. Let's turn data chaos into insights! 📊

🤔 Why Elasticsearch and Kibana Are Game-Changers

This isn't just search - it's intelligence! Here's why everyone uses it:

  • 🚀 Lightning fast search - Milliseconds for billions of records
  • 📊 Real-time analytics - Live dashboards and alerts
  • 🔍 Full-text search - Google-like search for your data
  • 📈 Beautiful visualizations - Charts, maps, graphs
  • 🌍 Scalable - From laptop to thousands of nodes
  • 🔧 RESTful API - Integrate with anything

True story: We found a critical bug that was happening once every 100,000 requests. Grep? Impossible. Elasticsearch? Found it in 2 seconds! 🎯

🎯 What You Need

Before we search everything, ensure you have:

  • ✅ AlmaLinux server with 8GB+ RAM
  • ✅ Java 11+ installed
  • ✅ 50GB+ free disk space
  • ✅ Root or sudo access
  • ✅ 60 minutes to master search
  • ✅ Coffee (ELK stack needs energy! ☕)
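
If you want to sanity-check those prerequisites from the shell first, here's a minimal sketch (thresholds mirror the list above; note that free -g rounds down, so an 8GB box reports 7):

# Quick preflight check
free -g | awk '/^Mem:/ {print ($2 >= 7) ? "RAM: OK" : "RAM: need 8GB+"}'
df -BG --output=avail / | awk 'NR==2 {gsub("G",""); print ($1 >= 50) ? "Disk: OK" : "Disk: need 50GB+ free"}'
java -version 2>&1 | head -1   # "command not found" is fine - we install Java in Step 1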

๐Ÿ“ Step 1: Install Elasticsearch

Let's get the search engine running!

Install Java and Elasticsearch

# Install Java 11
sudo dnf install -y java-11-openjdk java-11-openjdk-devel

# Verify Java
java -version

# Import Elasticsearch GPG key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Add Elasticsearch repository
sudo tee /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF

# Install Elasticsearch
sudo dnf install -y --enablerepo=elasticsearch elasticsearch

# Save the generated password!
# The installation will show: "The generated password for the elastic built-in superuser is: YOUR_PASSWORD"
# SAVE THIS PASSWORD - you'll need it!

# Enable and start Elasticsearch
sudo systemctl enable --now elasticsearch

# Wait 30 seconds for startup
sleep 30

# If you missed the generated password, reset it now
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Note the new password!

Configure Elasticsearch

# Backup original config
sudo cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.backup

# Edit configuration
sudo nano /etc/elasticsearch/elasticsearch.yml

# Essential settings:
cluster.name: my-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# Memory settings (important!)
# Set to 50% of system RAM (4GB for 8GB system)
sudo nano /etc/elasticsearch/jvm.options

-Xms4g
-Xmx4g

# Security settings (these go back in /etc/elasticsearch/elasticsearch.yml; 8.x enables them by default)
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Restart Elasticsearch
sudo systemctl restart elasticsearch

# Check status
sudo systemctl status elasticsearch

# View logs if needed
sudo tail -f /var/log/elasticsearch/my-cluster.log

Configure Firewall

# Open Elasticsearch port
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300/tcp
sudo firewall-cmd --reload

# Test connection (with authentication)
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200

# Should return cluster info

🔧 Step 2: Install and Configure Kibana

Time for beautiful dashboards! 🎨

Install Kibana

# Install Kibana
sudo dnf install -y --enablerepo=elasticsearch kibana

# Configure Kibana
sudo nano /etc/kibana/kibana.yml

# Essential settings:
server.port: 5601
server.host: "0.0.0.0"
server.name: "my-kibana"
elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.ssl.verificationMode: none

# Create kibana_system user password
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
# Note the password!

# Add to kibana.yml
elasticsearch.password: "KIBANA_SYSTEM_PASSWORD"

# Generate encryption keys
sudo /usr/share/kibana/bin/kibana-encryption-keys generate
# Add the generated keys to kibana.yml

# Enable and start Kibana
sudo systemctl enable --now kibana

# Open firewall port
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --reload

# Check status
sudo systemctl status kibana

# Access Kibana at http://your-server:5601
# Login with elastic / YOUR_PASSWORD

Configure Index Patterns

# Create sample data
curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/logs-2024.01/_doc" -H 'Content-Type: application/json' -d'
{
  "@timestamp": "2024-01-15T10:00:00",
  "level": "INFO",
  "message": "Application started",
  "service": "web-app",
  "host": "server1"
}'

# Create a data view (index pattern) in Kibana
# 1. Go to Stack Management → Data Views (called Index Patterns before Kibana 8)
# 2. Create pattern: logs-*
# 3. Select @timestamp as the time field
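
If you prefer automation, Kibana 8 also exposes a data view API - a sketch using the same logs-* pattern (the kbn-xsrf header is required on Kibana API calls):

curl -u elastic:YOUR_PASSWORD -X POST "http://localhost:5601/api/data_views/data_view" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
  "data_view": {
    "title": "logs-*",
    "timeFieldName": "@timestamp"
  }
}'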

🌟 Step 3: Ingest and Search Data

Let's put data in and search it! 🔍
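
Before wiring up full pipelines, a quick smoke test shows the core loop - index a document, then search for it (refresh=true makes the document searchable immediately):

# Index a test document
curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/logs-2024.01/_doc?refresh=true" -H 'Content-Type: application/json' -d'
{
  "@timestamp": "2024-01-15T10:05:00",
  "level": "ERROR",
  "message": "Disk space running low",
  "service": "web-app"
}'

# Full-text search for it
curl -k -u elastic:YOUR_PASSWORD -X GET "https://localhost:9200/logs-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "message": "disk space" } }
}'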

Using Logstash for Data Ingestion

# Install Logstash
sudo dnf install -y --enablerepo=elasticsearch logstash

# Create Logstash pipeline
sudo nano /etc/logstash/conf.d/system-logs.conf

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
    tags => ["syslog"]
  }
  
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    tags => ["nginx"]
  }
}

filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}: %{GREEDYDATA:log_message}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  
  if "nginx" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
  
  mutate {
    remove_field => [ "message" ]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "YOUR_PASSWORD"
    ssl => true
    ssl_certificate_verification => false
  }
  
  stdout {
    codec => rubydebug
  }
}

# Start Logstash
sudo systemctl enable --now logstash
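
Before trusting the pipeline, validate its syntax - Logstash has a built-in config check:

# Test the pipeline config (exits 0 if valid)
sudo -u logstash /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/system-logs.conf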

Filebeat for Log Shipping

# Install Filebeat
sudo dnf install -y --enablerepo=elasticsearch filebeat

# Configure Filebeat
sudo nano /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
  exclude_files: ['\.gz$']
  
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  fields:
    service: nginx
    
- type: log
  enabled: true
  paths:
    - /var/log/mysql/*.log
  fields:
    service: mysql
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "YOUR_PASSWORD"
  ssl.verification_mode: none
  index: "filebeat-%{+yyyy.MM.dd}"

processors:
  - add_host_metadata:
      when.not.contains:
        tags: forwarded
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# Enable and start Filebeat
sudo filebeat modules enable system nginx mysql
sudo filebeat setup -e
sudo systemctl enable --now filebeat
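
Filebeat ships with self-tests that are worth running before you rely on it:

# Verify config syntax and Elasticsearch connectivity
sudo filebeat test config
sudo filebeat test output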

✅ Step 4: Create Dashboards and Visualizations

Time to make data beautiful! 🎨

Sample Dashboard Creation Script

#!/usr/bin/env python3
# create_dashboard.py - Create Kibana dashboards programmatically

import requests
import json
from datetime import datetime

class KibanaDashboard:
    def __init__(self, kibana_url, username, password):
        self.kibana_url = kibana_url
        self.auth = (username, password)
        self.headers = {
            'Content-Type': 'application/json',
            'kbn-xsrf': 'true'
        }
    
    def create_index_pattern(self, pattern, time_field='@timestamp'):
        """Create index pattern"""
        data = {
            "attributes": {
                "title": pattern,
                "timeFieldName": time_field
            }
        }
        
        response = requests.post(
            f"{self.kibana_url}/api/saved_objects/index-pattern",
            auth=self.auth,
            headers=self.headers,
            json=data,
            verify=False
        )
        return response.json()
    
    def create_visualization(self, title, index_pattern, viz_type='line'):
        """Create visualization"""
        viz_state = {
            "title": title,
            "type": viz_type,
            "params": {
                "grid": {"categoryLines": False, "style": {"color": "#eee"}},
                "categoryAxes": [{"id": "CategoryAxis-1", "type": "category", "position": "bottom"}],
                "valueAxes": [{"id": "ValueAxis-1", "type": "value", "position": "left"}],
                "seriesParams": [{"type": "line", "mode": "normal"}],
                "addTooltip": True,
                "addLegend": True,
                "legendPosition": "right"
            },
            "aggs": [
                {"id": "1", "type": "count", "schema": "metric", "params": {}},
                {"id": "2", "type": "date_histogram", "schema": "segment", 
                 "params": {"field": "@timestamp", "interval": "auto"}}
            ]
        }
        
        data = {
            "attributes": {
                "title": title,
                "visState": json.dumps(viz_state),
                "uiStateJSON": "{}",
                "kibanaSavedObjectMeta": {
                    "searchSourceJSON": json.dumps({
                        "index": index_pattern,
                        "query": {"match_all": {}},
                        "filter": []
                    })
                }
            }
        }
        
        response = requests.post(
            f"{self.kibana_url}/api/saved_objects/visualization",
            auth=self.auth,
            headers=self.headers,
            json=data,
            verify=False
        )
        return response.json()
    
    def create_dashboard(self, title, visualizations):
        """Create dashboard with visualizations"""
        panels = []
        for i, viz_id in enumerate(visualizations):
            panels.append({
                "id": viz_id,
                "type": "visualization",
                "gridData": {
                    "x": (i % 2) * 24,
                    "y": (i // 2) * 15,
                    "w": 24,
                    "h": 15
                }
            })
        
        data = {
            "attributes": {
                "title": title,
                "hits": 0,
                "description": f"Dashboard created on {datetime.now()}",
                "panelsJSON": json.dumps(panels),
                "timeRestore": False,
                "kibanaSavedObjectMeta": {
                    "searchSourceJSON": json.dumps({
                        "query": {"match_all": {}},
                        "filter": []
                    })
                }
            }
        }
        
        response = requests.post(
            f"{self.kibana_url}/api/saved_objects/dashboard",
            auth=self.auth,
            headers=self.headers,
            json=data,
            verify=False
        )
        return response.json()

# Usage
dashboard = KibanaDashboard('http://localhost:5601', 'elastic', 'YOUR_PASSWORD')

# Create index pattern
pattern = dashboard.create_index_pattern('logs-*')

# Create visualizations
viz1 = dashboard.create_visualization('Log Count Over Time', pattern['id'])
viz2 = dashboard.create_visualization('Error Rate', pattern['id'])

# Create dashboard
dashboard.create_dashboard('System Monitoring', [viz1['id'], viz2['id']])
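
To try the script (it only needs the requests library; urllib3 will warn about verify=False, which is expected with self-signed certs):

pip install requests
python3 create_dashboard.py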

🎮 Quick Examples

Example 1: Application Log Analysis 📱

#!/bin/bash
# Application log analyzer

# Create an index template so the daily app-logs-* indices all get this mapping
# (message gets a keyword subfield so we can aggregate on it later)
curl -k -u elastic:YOUR_PASSWORD -X PUT "https://localhost:9200/_index_template/app-logs-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["app-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "level": { "type": "keyword" },
        "service": { "type": "keyword" },
        "host": { "type": "keyword" },
        "message": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
        "stack_trace": { "type": "text" },
        "user_id": { "type": "keyword" },
        "request_id": { "type": "keyword" },
        "duration_ms": { "type": "float" },
        "status_code": { "type": "integer" },
        "ip_address": { "type": "ip" },
        "user_agent": { "type": "text" },
        "geo": { "type": "geo_point" }
      }
    }
  }
}'

# Python script to send application logs
cat > /usr/local/bin/app-logger.py << 'EOF'
#!/usr/bin/env python3

from elasticsearch import Elasticsearch
from datetime import datetime

class ElasticsearchLogger:
    def __init__(self, es_host='localhost', es_port=9200):
        self.es = Elasticsearch(
            [f'https://{es_host}:{es_port}'],
            basic_auth=('elastic', 'YOUR_PASSWORD'),
            verify_certs=False
        )
        self.index = f"app-logs-{datetime.now().strftime('%Y.%m.%d')}"
    
    def log(self, level, message, **kwargs):
        doc = {
            '@timestamp': datetime.utcnow(),
            'level': level,
            'message': message,
            'service': kwargs.get('service', 'app'),
            'host': kwargs.get('host', 'localhost'),
            **kwargs
        }
        
        self.es.index(index=self.index, document=doc)
    
    def search_errors(self, hours=24):
        """Search for errors in last N hours"""
        query = {
            "query": {
                "bool": {
                    "must": [
                        {"term": {"level": "ERROR"}},
                        {"range": {
                            "@timestamp": {
                                "gte": f"now-{hours}h"
                            }
                        }}
                    ]
                }
            },
            "aggs": {
                "error_types": {
                    "terms": {
                        "field": "message.keyword",
                        "size": 10
                    }
                }
            }
        }
        
        return self.es.search(index="app-logs-*", body=query)
    
    def get_slow_requests(self, threshold_ms=1000):
        """Find slow requests"""
        query = {
            "query": {
                "range": {
                    "duration_ms": {
                        "gte": threshold_ms
                    }
                }
            },
            "sort": [
                {"duration_ms": {"order": "desc"}}
            ]
        }
        
        return self.es.search(index="app-logs-*", body=query)

# Usage
logger = ElasticsearchLogger()

# Log events
logger.log('INFO', 'Application started', service='web-api')
logger.log('ERROR', 'Database connection failed', 
           service='web-api', 
           stack_trace='Connection timeout at...',
           duration_ms=5000)

# Search for errors
errors = logger.search_errors()
print(f"Found {errors['hits']['total']['value']} errors")
EOF

chmod +x /usr/local/bin/app-logger.py
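
The logger needs the official Python client; install it, then run the script to send a couple of test events:

pip install elasticsearch
python3 /usr/local/bin/app-logger.py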

Example 2: Security Monitoring Dashboard 🔒

#!/bin/bash
# Security monitoring with Elasticsearch

# Create security index template
curl -k -u elastic:YOUR_PASSWORD -X PUT "https://localhost:9200/_index_template/security-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["security-*"],
  "template": {
    "settings": {
      "number_of_shards": 2,
      "number_of_replicas": 1
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "event_type": { "type": "keyword" },
        "severity": { "type": "keyword" },
        "source_ip": { "type": "ip" },
        "destination_ip": { "type": "ip" },
        "source_port": { "type": "integer" },
        "destination_port": { "type": "integer" },
        "username": { "type": "keyword" },
        "action": { "type": "keyword" },
        "result": { "type": "keyword" },
        "message": { "type": "text" },
        "geo_location": { "type": "geo_point" }
      }
    }
  }
}'

# Script to monitor SSH attempts
cat > /usr/local/bin/ssh-monitor.sh << 'EOF'
#!/bin/bash

ES_HOST="https://localhost:9200"
ES_USER="elastic"
ES_PASS="YOUR_PASSWORD"
INDEX="security-$(date +%Y.%m.%d)"

# Parse SSH logs and send to Elasticsearch
tail -F /var/log/secure | while read line; do
    if echo "$line" | grep -q "sshd"; then
        timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")
        
        if echo "$line" | grep -q "Failed password"; then
            username=$(echo "$line" | grep -oP 'for (invalid user )?\K\w+')
            ip=$(echo "$line" | grep -oP 'from \K[\d.]+')
            
            curl -k -u $ES_USER:$ES_PASS -X POST "$ES_HOST/$INDEX/_doc" \
                -H 'Content-Type: application/json' -d "{
                \"@timestamp\": \"$timestamp\",
                \"event_type\": \"ssh_failed_login\",
                \"severity\": \"WARNING\",
                \"source_ip\": \"$ip\",
                \"username\": \"$username\",
                \"action\": \"authentication\",
                \"result\": \"failed\",
                \"message\": \"$line\"
            }"
        fi
        
        if echo "$line" | grep -q "Accepted password"; then
            username=$(echo "$line" | grep -oP 'for \K\w+')
            ip=$(echo "$line" | grep -oP 'from \K[\d.]+')
            
            curl -k -u $ES_USER:$ES_PASS -X POST "$ES_HOST/$INDEX/_doc" \
                -H 'Content-Type: application/json' -d "{
                \"@timestamp\": \"$timestamp\",
                \"event_type\": \"ssh_successful_login\",
                \"severity\": \"INFO\",
                \"source_ip\": \"$ip\",
                \"username\": \"$username\",
                \"action\": \"authentication\",
                \"result\": \"success\",
                \"message\": \"$line\"
            }"
        fi
    fi
done
EOF

chmod +x /usr/local/bin/ssh-monitor.sh

# Create systemd service
cat > /etc/systemd/system/ssh-monitor.service << EOF
[Unit]
Description=SSH Login Monitor for Elasticsearch
After=elasticsearch.service

[Service]
Type=simple
ExecStart=/usr/local/bin/ssh-monitor.sh
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF

systemctl enable --now ssh-monitor
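
Once events are flowing, a terms aggregation surfaces the noisiest attackers - a sketch you can run as soon as the index has data:

# Top 10 source IPs for failed SSH logins
curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/security-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": { "term": { "event_type": "ssh_failed_login" } },
  "aggs": {
    "top_attackers": { "terms": { "field": "source_ip", "size": 10 } }
  }
}'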

Example 3: Performance Metrics Collector 📈

#!/bin/bash
# System metrics collector for Elasticsearch

cat > /usr/local/bin/metrics-collector.py << 'EOF'
#!/usr/bin/env python3

import psutil
import socket
import time
from elasticsearch import Elasticsearch
from datetime import datetime

class MetricsCollector:
    def __init__(self):
        self.es = Elasticsearch(
            ['https://localhost:9200'],
            basic_auth=('elastic', 'YOUR_PASSWORD'),
            verify_certs=False
        )
    
    def collect_metrics(self):
        """Collect system metrics"""
        metrics = {
            '@timestamp': datetime.utcnow(),
            'host': socket.gethostname(),
            'cpu': {
                'percent': psutil.cpu_percent(interval=1),
                'cores': psutil.cpu_count(),
                'load_avg': psutil.getloadavg()
            },
            'memory': {
                'total': psutil.virtual_memory().total,
                'used': psutil.virtual_memory().used,
                'percent': psutil.virtual_memory().percent,
                'available': psutil.virtual_memory().available
            },
            'disk': {},
            'network': {},
            'processes': {
                'total': len(psutil.pids()),
                'running': len([p for p in psutil.process_iter() if p.status() == 'running'])
            }
        }
        
        # Disk metrics
        for partition in psutil.disk_partitions():
            if partition.mountpoint == '/':
                usage = psutil.disk_usage(partition.mountpoint)
                metrics['disk']['root'] = {
                    'total': usage.total,
                    'used': usage.used,
                    'free': usage.free,
                    'percent': usage.percent
                }
        
        # Network metrics
        net_io = psutil.net_io_counters()
        metrics['network'] = {
            'bytes_sent': net_io.bytes_sent,
            'bytes_recv': net_io.bytes_recv,
            'packets_sent': net_io.packets_sent,
            'packets_recv': net_io.packets_recv
        }
        
        return metrics
    
    def send_to_elasticsearch(self, metrics):
        """Send metrics to Elasticsearch"""
        index = f"metrics-{datetime.now().strftime('%Y.%m.%d')}"
        self.es.index(index=index, document=metrics)
    
    def run(self, interval=60):
        """Run collector"""
        print("๐Ÿ“Š Starting metrics collector...")
        while True:
            try:
                metrics = self.collect_metrics()
                self.send_to_elasticsearch(metrics)
                print(f"โœ… Metrics sent at {datetime.now()}")
            except Exception as e:
                print(f"โŒ Error: {e}")
            
            time.sleep(interval)

if __name__ == '__main__':
    collector = MetricsCollector()
    collector.run(interval=30)
EOF

chmod +x /usr/local/bin/metrics-collector.py
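
The collector depends on psutil and the Elasticsearch client, so install them system-wide before enabling the service:

sudo pip3 install psutil elasticsearch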

# Create service
cat > /etc/systemd/system/metrics-collector.service << EOF
[Unit]
Description=System Metrics Collector for Elasticsearch
After=elasticsearch.service

[Service]
Type=simple
ExecStart=/usr/bin/python3 /usr/local/bin/metrics-collector.py
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF

systemctl enable --now metrics-collector
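
With metrics accumulating, you can chart them in Kibana or pull an average-CPU time series straight from the API (field names match the collector above):

# Average CPU per 5-minute bucket
curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/metrics-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "cpu_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" },
      "aggs": { "avg_cpu": { "avg": { "field": "cpu.percent" } } }
    }
  }
}'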

🚨 Fix Common Problems

Problem 1: Elasticsearch Won't Start ❌

Service fails to start?

# Check logs
sudo journalctl -u elasticsearch -n 50

# Common issues:

# 1. Memory issues
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# 2. Permissions
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch

# 3. Java heap size
# Edit /etc/elasticsearch/jvm.options
-Xms2g  # Set to 50% of RAM
-Xmx2g  # Same as Xms

Problem 2: Kibana Can't Connect ❌

Kibana shows connection error?

# Check Elasticsearch is running
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200

# Reset kibana_system password
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

# Update in kibana.yml
elasticsearch.password: "NEW_PASSWORD"

# Restart Kibana
sudo systemctl restart kibana

Problem 3: High Memory Usage ❌

Elasticsearch using all RAM?

# Limit heap size in jvm.options
-Xms2g  # Don't exceed 50% of RAM
-Xmx2g  # Don't exceed 32GB

# Drop replicas to zero (a single node can't allocate them anyway, and this frees heap and disk)
curl -k -u elastic:YOUR_PASSWORD -X PUT "https://localhost:9200/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 0
  }
}'

# Clear old indices (requires the separate Elasticsearch Curator tool; newer Curator versions configure this via action files)
curator delete indices --older-than 30 --time-unit days
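
On current stacks, index lifecycle management (ILM) handles this natively - a minimal policy that deletes indices 30 days after creation (attach it to indices via an index template's index.lifecycle.name setting):

curl -k -u elastic:YOUR_PASSWORD -X PUT "https://localhost:9200/_ilm/policy/delete-after-30d" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'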

Problem 4: Slow Searches ❌

Queries taking too long?

# Check cluster health
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200/_cluster/health?pretty

# Optimize indices
curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/_forcemerge?max_num_segments=1"

# Increase refresh interval
curl -k -u elastic:YOUR_PASSWORD -X PUT "https://localhost:9200/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "refresh_interval": "30s"
  }
}'

📋 Simple Commands Summary

Task               Command
🔍 Check health    curl -k -u elastic:PASS https://localhost:9200/_cluster/health
📊 List indices    curl -k -u elastic:PASS https://localhost:9200/_cat/indices
🔎 Search all      curl -k -u elastic:PASS https://localhost:9200/_search
💾 Create index    curl -k -u elastic:PASS -X PUT https://localhost:9200/my-index
🗑️ Delete index    curl -k -u elastic:PASS -X DELETE https://localhost:9200/my-index
📈 Check stats     curl -k -u elastic:PASS https://localhost:9200/_stats
🔄 Restart         sudo systemctl restart elasticsearch kibana
📝 View logs       sudo tail -f /var/log/elasticsearch/*.log

💡 Tips for Success

  1. Start Small 🎯 - Single node first
  2. Monitor JVM 📊 - Heap usage is critical
  3. Rotate Indices 🔄 - Don't keep everything forever
  4. Use Templates 📋 - Consistent mappings
  5. Secure Everything 🔒 - Enable X-Pack security
  6. Learn Query DSL 🔍 - It's powerful! (see the example after this list)
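
For instance, here's a bool query combining full-text search with filters - the bread and butter of the Query DSL (index and field names follow the logging examples above):

curl -k -u elastic:YOUR_PASSWORD -X POST "https://localhost:9200/logs-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [ { "match": { "message": "timeout" } } ],
      "filter": [
        { "term": { "level": "ERROR" } },
        { "range": { "@timestamp": { "gte": "now-24h" } } }
      ]
    }
  }
}'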

Pro tip: Use Curator to automatically delete old indices. Saved us 500GB of disk space! 💾

๐Ÿ† What You Learned

You're now a search master! You can:

  • ✅ Install Elasticsearch and Kibana
  • ✅ Ingest logs with Logstash/Filebeat
  • ✅ Create beautiful dashboards
  • ✅ Search millions of records instantly
  • ✅ Monitor security events
  • ✅ Analyze performance metrics
  • ✅ Troubleshoot ELK issues

🎯 Why This Matters

The ELK stack provides:

  • ๐Ÿ” Google-like search for your data
  • ๐Ÿ“Š Real-time analytics
  • ๐ŸŽจ Beautiful visualizations
  • ๐Ÿšจ Instant alerting
  • ๐Ÿ“ˆ Scalable architecture
  • ๐Ÿ”ง Flexible integration

We found a memory leak that only happened on Tuesdays at 3 PM. Grep? Never would've found it. Kibana dashboard? Spotted the pattern instantly! That's the power of visualization! 📊

Remember: Data without search is just noise. Make it sing with Elasticsearch! 🎵

Happy searching! May your queries be fast and your dashboards beautiful! 🔍✨