๐Ÿ”ญ OpenTelemetry Observability on AlmaLinux 9: Complete Guide

Published Sep 6, 2025

Master unified observability with OpenTelemetry on AlmaLinux 9! Learn collector setup, auto-instrumentation, and collection of metrics, traces, and logs, with practical examples.


Ready to unify all your observability data? ๐Ÿš€ Today weโ€™ll deploy OpenTelemetry on AlmaLinux 9, creating a powerful observability platform that collects metrics, traces, and logs through one pipeline! Letโ€™s dive in! โœจ๐ŸŽฏ

๐Ÿค” Why is OpenTelemetry Important?

Imagine one standard for all observability data! ๐ŸŒŸ Thatโ€™s OpenTelemetryโ€™s superpower! Hereโ€™s why itโ€™s revolutionary:

  • ๐Ÿ”„ Unified Collection - Metrics, traces, and logs in one place!
  • ๐Ÿ“ฆ Vendor Neutral - Works with any observability backend
  • ๐Ÿš€ Auto-Instrumentation - Zero-code observability
  • ๐ŸŽฏ Industry Standard - CNCF incubating project
  • ๐Ÿ“Š Rich Context - Correlate all telemetry types
  • ๐ŸŒ Language Support - Works with all major languages
  • ๐Ÿ›ก๏ธ Production Ready - Used by major companies
  • ๐Ÿ’ก Future Proof - The future of observability

๐ŸŽฏ What You Need

Before we observe everything, gather these:

  • โœ… AlmaLinux 9 server (4GB RAM minimum, 8GB recommended)
  • โœ… Kubernetes cluster 1.24+ (optional but recommended)
  • โœ… kubectl configured (for K8s deployment)
  • โœ… Observability backend (Jaeger, Prometheus, etc.)
  • โœ… Application to monitor
  • โœ… Root or sudo access
  • โœ… Basic monitoring knowledge
  • โœ… Ready for complete observability! ๐ŸŽ‰

๐Ÿ“ Step 1: Install OpenTelemetry Collector

Letโ€™s install the collector on AlmaLinux 9! ๐Ÿ› ๏ธ

Direct Installation on AlmaLinux

# Update system
sudo dnf update -y  # Keep everything current

# Download OpenTelemetry Collector
OTEL_VERSION="0.108.0"  # Check latest version
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${OTEL_VERSION}/otelcol_${OTEL_VERSION}_linux_amd64.rpm

# Install the collector
sudo rpm -ivh otelcol_${OTEL_VERSION}_linux_amd64.rpm

# Verify installation
otelcol --version  # Shows version

# Check service status
sudo systemctl status otelcol  # Should be inactive initially

Configure Collector

# Create configuration directory
sudo mkdir -p /etc/otelcol

# Create basic configuration
sudo tee /etc/otelcol/config.yaml <<EOF
receivers:
  # Receive OTLP data via gRPC and HTTP
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  
  # Collect host metrics
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      load:
      memory:
      network:
      paging:
      processes:
  
  # Collect Prometheus metrics
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:8888']

processors:
  # Batch telemetry data
  batch:
    timeout: 10s
    send_batch_size: 1024
  
  # Add resource attributes
  resource:
    attributes:
      - key: service.name
        value: "otel-collector"
        action: insert
      - key: host.name
        from_attribute: host.hostname
        action: insert
  
  # Memory limiter prevents OOM
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
    spike_limit_mib: 128

exporters:
  # Export to console for debugging
  debug:
    verbosity: detailed
  
  # Export OTLP to backends
  otlp/jaeger:
    endpoint: localhost:4317
    tls:
      insecure: true
  
  # Export metrics to Prometheus
  prometheusremotewrite:
    endpoint: http://localhost:9090/api/v1/write
  
  # Export to file
  file:
    path: /var/log/otelcol/telemetry.json

extensions:
  # Health check
  health_check:
    endpoint: :13133
  
  # Performance profiling
  pprof:
    endpoint: :1888
  
  # zPages for debugging
  zpages:
    endpoint: :55679

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch, resource]
      exporters: [debug, otlp/jaeger]
    
    metrics:
      receivers: [otlp, hostmetrics, prometheus]
      processors: [memory_limiter, batch, resource]
      exporters: [debug, prometheusremotewrite]
    
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [debug, file]
EOF

# Create the directory used by the file exporter
sudo mkdir -p /var/log/otelcol
sudo chown otelcol:otelcol /var/log/otelcol  # The rpm package runs the service as this user

# Start the collector
sudo systemctl enable --now otelcol

# Check status
sudo systemctl status otelcol  # Should be active
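Once the service is active, you can smoke-test the OTLP/HTTP receiver by hand. The sketch below (stdlib-only Python, assuming the collector listens on localhost:4318 as configured above) builds a minimal OTLP/JSON payload with a single span; uncomment the last line to post it and watch it surface in the debug exporter's output:

```python
import json
import secrets
import time
import urllib.request


def build_otlp_trace(service_name: str, span_name: str) -> dict:
    """Build a minimal OTLP/JSON trace payload containing one span."""
    now = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": service_name},
            }]},
            "scopeSpans": [{
                "scope": {"name": "manual-smoke-test"},
                "spans": [{
                    "traceId": secrets.token_hex(16),  # 16 bytes -> 32 hex chars
                    "spanId": secrets.token_hex(8),    # 8 bytes -> 16 hex chars
                    "name": span_name,
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now),
                    "endTimeUnixNano": str(now + 1_000_000),  # 1 ms later
                }],
            }],
        }]
    }


def send_trace(payload: dict, endpoint: str = "http://localhost:4318/v1/traces") -> int:
    """POST the payload to the collector's OTLP/HTTP receiver; 200 means accepted."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = build_otlp_trace("smoke-test", "hello-span")
# send_trace(payload)  # uncomment with a collector listening on 4318
```

If the call returns 200 but nothing reaches your backend, check the pipeline wiring in `service.pipelines` first.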

๐Ÿ”ง Step 2: Deploy on Kubernetes

Letโ€™s deploy OpenTelemetry on Kubernetes! ๐ŸŽŠ

Install with Helm

# Add OpenTelemetry Helm repository
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Create namespace
kubectl create namespace opentelemetry

# Install as DaemonSet (recommended for node metrics)
cat <<EOF > otel-values.yaml
mode: daemonset

presets:
  # Enable logs collection
  logsCollection:
    enabled: true
    includeCollectorLogs: true
  
  # Enable Kubernetes attributes
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
    extractAllPodAnnotations: true
  
  # Enable host metrics
  hostMetrics:
    enabled: true
  
  # Enable kubelet metrics
  kubeletMetrics:
    enabled: true

config:
  receivers:
    # OTLP receiver for applications
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    
    # Kubernetes events
    k8s_events:
      auth_type: serviceAccount
  
  processors:
    # K8s metadata enrichment
    k8sattributes:
      extract:
        metadata:
          - k8s.namespace.name
          - k8s.deployment.name
          - k8s.statefulset.name
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
          - k8s.node.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
  
  exporters:
    # Debug output
    debug:
      verbosity: detailed
    
    # Send to Jaeger
    otlp/jaeger:
      endpoint: jaeger-collector.tracing:4317
      tls:
        insecure: true
    
    # Send to Prometheus
    prometheus:
      endpoint: "0.0.0.0:8889"
      
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [k8sattributes, memory_limiter, batch]
        exporters: [debug, otlp/jaeger]
      
      metrics:
        receivers: [otlp, hostmetrics, kubeletstats]
        processors: [k8sattributes, memory_limiter, batch]
        exporters: [debug, prometheus]
      
      logs:
        receivers: [otlp, filelog]
        processors: [k8sattributes, memory_limiter, batch]
        exporters: [debug]

resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 200m
    memory: 512Mi
EOF

# Install OpenTelemetry Collector
helm install opentelemetry-collector \
  open-telemetry/opentelemetry-collector \
  --namespace opentelemetry \
  --values otel-values.yaml

# Verify installation
kubectl get pods -n opentelemetry
kubectl get svc -n opentelemetry

Install OpenTelemetry Operator

# Install cert-manager (required)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# Wait for cert-manager (its webhook must be ready before installing the operator)
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s

# Install OpenTelemetry Operator
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml

# Verify operator
kubectl get pods -n opentelemetry-operator-system

๐ŸŒŸ Step 3: Auto-Instrumentation Setup

Letโ€™s enable zero-code instrumentation! ๐Ÿš€

Configure Auto-Instrumentation

# Create Instrumentation resource
cat <<EOF | kubectl apply -f -
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
  namespace: default
spec:
  # Common configuration
  exporter:
    endpoint: http://opentelemetry-collector.opentelemetry:4317
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "1.0"
  
  # Java auto-instrumentation
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp
  
  # Python auto-instrumentation
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:latest
    env:
      - name: OTEL_METRICS_EXPORTER
        value: otlp
      - name: OTEL_LOGS_EXPORTER
        value: otlp
  
  # Node.js auto-instrumentation
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:latest
  
  # .NET auto-instrumentation
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
  
  # Go auto-instrumentation
  go:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-go:latest
EOF

# Add the injection annotations to the POD TEMPLATE -- annotating the
# Deployment object itself does not trigger injection
kubectl patch deployment my-app -p '{
  "spec": {"template": {"metadata": {"annotations": {
    "instrumentation.opentelemetry.io/inject-python": "true",
    "instrumentation.opentelemetry.io/container-names": "my-container"
  }}}}}'

# The template change rolls the pods automatically; force one if needed
kubectl rollout restart deployment my-app

โœ… Step 4: Collect All Telemetry Types

Letโ€™s collect metrics, traces, and logs! ๐Ÿ“Š

Application Instrumentation Example

# Example Python application with OpenTelemetry
cat <<EOF > app.py
from opentelemetry import trace, metrics
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from flask import Flask
import logging

# Configure tracing
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

span_processor = BatchSpanProcessor(
    OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
)
trace.get_tracer_provider().add_span_processor(span_processor)

# Configure metrics
metric_reader = PeriodicExportingMetricReader(
    exporter=OTLPMetricExporter(endpoint="localhost:4317", insecure=True),
    export_interval_millis=10000
)
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))
meter = metrics.get_meter(__name__)

# Create metrics
request_counter = meter.create_counter(
    "requests_total",
    description="Total number of requests"
)

# Flask app
app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)

@app.route('/')
def hello():
    with tracer.start_as_current_span("process_request"):
        request_counter.add(1, {"endpoint": "/"})
        logging.info("Processing request")
        return "Hello from OpenTelemetry! ๐Ÿ”ญ"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
EOF

# Install dependencies
pip install opentelemetry-distro opentelemetry-exporter-otlp
pip install opentelemetry-instrumentation-flask flask

# Run with auto-instrumentation
opentelemetry-instrument python app.py

๐ŸŽฎ Quick Examples

Letโ€™s explore OpenTelemetry features! ๐ŸŽฌ

Example 1: Custom Collector Pipeline

# Advanced collector snippet. Note: docker_stats, transform, and
# loadbalancing ship in the contrib distribution (otelcol-contrib), and
# these keys must be MERGED into the existing sections of
# /etc/otelcol/config.yaml -- appending a second top-level "receivers:"
# key with tee -a would produce invalid YAML
cat <<EOF > /tmp/otelcol-extras.yaml
receivers:
  # Scrape Prometheus endpoints
  prometheus:
    config:
      scrape_configs:
        - job_name: 'kubernetes-pods'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
  
  # Collect Docker metrics
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    collection_interval: 10s

processors:
  # Filter by attributes
  filter:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - prefix/.*
          - prefix2/.*
  
  # Transform data
  transform:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["environment"], "production")

exporters:
  # Multiple destinations
  loadbalancing:
    protocol:
      otlp:
        timeout: 1s
    resolver:
      static:
        hostnames:
          - backend-1:4317
          - backend-2:4317
EOF

# After merging the new keys into /etc/otelcol/config.yaml and wiring the
# new components into service.pipelines, restart the collector
sudo systemctl restart otelcol
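The filter processor's include list keeps only metrics whose names match one of the listed patterns. A rough Python sketch of that behavior (using `re.search` to approximate Go's partial-match regexp semantics; the `prefix/` names simply mirror the placeholder patterns in the config above):

```python
import re


def keep_metric(name: str, include_patterns: list[str]) -> bool:
    # Keep a metric only if its name matches one of the include patterns;
    # everything else is dropped before it reaches the exporters
    return any(re.search(p, name) for p in include_patterns)


patterns = [r"prefix/.*", r"prefix2/.*"]
print(keep_metric("prefix/cpu.usage", patterns))   # True: matches prefix/.*
print(keep_metric("other/cpu.usage", patterns))    # False: dropped by the filter
```

Filtering early like this is one of the cheapest ways to cut exporter traffic and backend cost.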

Example 2: Trace Context Propagation

# Deploy sample microservices
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        instrumentation.opentelemetry.io/inject-nodejs: "true"
    spec:
      containers:
      - name: frontend
        image: otel/demo:frontend
        env:
        - name: BACKEND_URL
          value: http://backend:8080
        - name: OTEL_SERVICE_NAME
          value: frontend-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
      - name: backend
        image: otel/demo:backend
        env:
        - name: OTEL_SERVICE_NAME
          value: backend-service
EOF

# Traces will automatically propagate between services!

Example 3: Metrics Dashboard

# Create ServiceMonitor for Prometheus
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: otel-metrics
  namespace: opentelemetry
  labels:
    app: opentelemetry-collector
spec:
  ports:
  - name: metrics
    port: 8889
    targetPort: 8889
  selector:
    app.kubernetes.io/name: opentelemetry-collector
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel-collector
  namespace: opentelemetry
spec:
  selector:
    matchLabels:
      app: opentelemetry-collector
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
EOF

๐Ÿšจ Fix Common Problems

Donโ€™t panic! Here are solutions! ๐Ÿ’ช

Problem 1: Collector Not Receiving Data

# Check collector status
sudo systemctl status otelcol
sudo journalctl -u otelcol -f

# Verify endpoints are listening (ss replaces netstat on AlmaLinux 9)
sudo ss -tlnp | grep otelcol

# Test OTLP endpoint
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{}'

# Check firewall
sudo firewall-cmd --add-port=4317/tcp --permanent
sudo firewall-cmd --add-port=4318/tcp --permanent
sudo firewall-cmd --reload

Problem 2: High Memory Usage

# Adjust memory limits in config
sudo vi /etc/otelcol/config.yaml
# Modify memory_limiter processor:
# limit_mib: 256
# spike_limit_mib: 64

# Monitor collector metrics
curl http://localhost:8888/metrics | grep memory

# Enable sampling
# Add to config:
# processors:
#   probabilistic_sampler:
#     sampling_percentage: 10
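The probabilistic sampler hinted at above keys its decision off the trace ID, so every span in a trace gets the same keep-or-drop verdict. A simplified Python sketch (SHA-256 is an illustrative stand-in here, not the hash the real processor uses):

```python
import hashlib


def sample_trace(trace_id_hex: str, sampling_percentage: float) -> bool:
    # Hash the trace ID into a bucket in [0, 1) and keep the trace when
    # the bucket falls below the configured percentage. Hashing the trace
    # ID (instead of calling random()) makes the decision identical for
    # every span and every service participating in the same trace.
    digest = hashlib.sha256(bytes.fromhex(trace_id_hex)).digest()
    bucket = int.from_bytes(digest[:4], "big") / 0x1_0000_0000
    return bucket < sampling_percentage / 100.0


# Roughly 10% of 10,000 trace IDs should survive 10% sampling
kept = sum(sample_trace(f"{i:032x}", 10) for i in range(10_000))
print(f"kept {kept} of 10000 traces")
```

Because the decision is deterministic per trace, you never end up with half a trace sampled and the other half missing.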

Problem 3: Auto-Instrumentation Not Working

# Check operator logs
kubectl logs -n opentelemetry-operator-system deployment/opentelemetry-operator-controller-manager

# Verify instrumentation resource
kubectl get instrumentation -A
kubectl describe instrumentation my-instrumentation

# Check pod annotations
kubectl describe pod <your-pod>

# Restart pod after annotation
kubectl delete pod <your-pod>

๐Ÿ“‹ Simple Commands Summary

Your OpenTelemetry command toolkit! ๐Ÿ“š

| Command | What It Does | When to Use |
| --- | --- | --- |
| otelcol --config /etc/otelcol/config.yaml | Start collector in foreground | Manual start |
| sudo systemctl status otelcol | Check service | Verify running |
| curl localhost:13133 | Health check | Monitor health |
| curl localhost:55679/debug/tracez | View recent traces | Debug traces |
| kubectl get instrumentation -A | List auto-instrumentation | Check setup |
| helm upgrade opentelemetry-collector open-telemetry/opentelemetry-collector -f otel-values.yaml | Update collector | Apply changes |
| otelcol validate --config /etc/otelcol/config.yaml | Validate config | Before restart |
| kubectl logs -l app.kubernetes.io/name=opentelemetry-collector -n opentelemetry | View collector logs | Troubleshoot |
| curl localhost:8888/metrics | Collector self-metrics | Monitor performance |
| kubectl port-forward svc/opentelemetry-collector 4317 -n opentelemetry | Access collector locally | Local testing |

๐Ÿ’ก Tips for Success

Master observability with these tips! ๐Ÿ†

Collection Strategy

  • ๐Ÿ“Š Start with essential signals only
  • ๐ŸŽฏ Use sampling for high-volume traces
  • ๐Ÿ’พ Batch data for efficiency
  • ๐Ÿ”„ Enable compression for exports
  • โšก Use memory limiter to prevent OOM
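The batching advice above boils down to two triggers that mirror the `send_batch_size` and `timeout` settings from Step 1: flush when the buffer is full, or when it has sat too long. A simplified in-memory sketch (the real batch processor flushes on a background timer, not only when new data arrives):

```python
import time


class Batcher:
    """Simplified sketch of the batch processor: buffer telemetry items and
    flush when the batch reaches send_batch_size or timeout_s elapses."""

    def __init__(self, send_batch_size=1024, timeout_s=10.0, export=print):
        self.send_batch_size = send_batch_size
        self.timeout_s = timeout_s
        self.export = export          # downstream exporter callback
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, item):
        self.buffer.append(item)
        if (len(self.buffer) >= self.send_batch_size
                or time.monotonic() - self.last_flush >= self.timeout_s):
            self.flush()

    def flush(self):
        if self.buffer:
            self.export(self.buffer)  # one network call for many items
        self.buffer = []
        self.last_flush = time.monotonic()


batches = []
b = Batcher(send_batch_size=3, timeout_s=60, export=batches.append)
for i in range(7):
    b.add(i)
b.flush()  # drain the remainder on shutdown
print([len(x) for x in batches])  # -> [3, 3, 1]
```

The payoff is fewer, larger export calls, which is why the batch processor belongs in every pipeline.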

Best Practices

  • ๐Ÿ›ก๏ธ Secure OTLP endpoints with TLS
  • ๐Ÿ“ Use semantic conventions
  • ๐Ÿ” Add resource attributes
  • โš ๏ธ Set up health monitoring
  • ๐Ÿ“ˆ Track collector metrics
  • ๐ŸŽจ Use consistent naming
  • ๐Ÿ’ก Document your setup

Performance Optimization

  • ๐Ÿš€ Use batch processor always
  • ๐ŸŽฏ Filter unnecessary data early
  • ๐Ÿ“Š Monitor collector resource usage
  • ๐Ÿ’พ Configure appropriate queue sizes
  • ๐Ÿ”„ Use load balancing for scale

๐Ÿ† What You Learned

Fantastic work! Youโ€™re now an OpenTelemetry expert! ๐ŸŽ‰ You can:

  • โœ… Install OpenTelemetry on AlmaLinux 9
  • โœ… Deploy collectors on Kubernetes
  • โœ… Configure auto-instrumentation
  • โœ… Collect metrics, traces, and logs
  • โœ… Set up processing pipelines
  • โœ… Export to multiple backends
  • โœ… Troubleshoot common issues
  • โœ… Optimize collector performance

๐ŸŽฏ Why This Matters

Youโ€™ve unified all observability! ๐Ÿš€ With OpenTelemetry:

  • One Standard - No more proprietary formats
  • Vendor Freedom - Switch backends anytime
  • Complete Context - Correlate all signals
  • Zero-Code - Auto-instrumentation magic
  • Future Proof - Industry-wide adoption
  • Cost Efficient - One collector for everything
  • Production Ready - Battle-tested at scale

Your observability is now truly unified! No more data silos, no more vendor lock-in. Everything flows through one standard pipeline.

Keep exploring features like tail sampling, semantic conventions, and custom instrumentation. Youโ€™re building the future of observability! ๐ŸŒŸ

Remember: Observability is a journey - OpenTelemetry is your compass! Happy observing! ๐ŸŽŠ๐Ÿ”ญ


P.S. - Join the OpenTelemetry community, contribute to the project, and share your observability journey! Together weโ€™re standardizing observability! โญ๐Ÿ™Œ