🤖 StackStorm Event-Driven Automation on AlmaLinux: IFTTT for Infrastructure
stackstorm automation almalinux


Published Sep 6, 2025

Master StackStorm on AlmaLinux! Learn installation, rule creation, workflows, auto-remediation, and ChatOps. Perfect event-driven automation platform for DevOps!

5 min read

Welcome to intelligent automation that responds to events! 🎉 Ready to build self-healing infrastructure? StackStorm is the “IFTTT for Ops” - an event-driven automation platform for auto-remediation, incident response, ChatOps, and more! It makes your infrastructure smart and responsive - think of it as your infrastructure’s nervous system, reacting to events automatically! 🚀✨

🤔 Why is StackStorm Important?

StackStorm transforms infrastructure from reactive to proactive! 🚀 Here’s why it’s amazing:

  • 🎯 Event-Driven - If-This-Then-That for operations!
  • 🔄 Auto-Remediation - Self-healing infrastructure!
  • 📚 160+ Integration Packs - 6000+ pre-built actions!
  • 🤖 ChatOps - Slack/Teams integration!
  • 📊 Workflow Engine - Complex automation chains!
  • 🔌 Extensible - Python-based plugins!

It’s like having a smart assistant for your infrastructure! 💰

🎯 What You Need

Before building your event-driven platform, ensure you have:

  • ✅ AlmaLinux 9 server (RHEL-compatible)
  • ✅ Root or sudo access
  • ✅ At least 4GB RAM (8GB recommended)
  • ✅ 4 CPU cores minimum
  • ✅ 20GB free disk space
  • ✅ Python 3.8 or newer
  • ✅ Love for automation magic! 🤖

📝 Step 1: System Preparation - Building the Foundation!

Let’s prepare AlmaLinux 9 for StackStorm! 🏗️

# Update system
sudo dnf update -y

# Install required packages
sudo dnf install -y curl wget git gcc gcc-c++ make

# Install Python 3.8+ and pip
sudo dnf install -y python3 python3-pip python3-devel

# Verify Python version
python3 --version
# Should show: Python 3.9.x or higher

# Install development tools
sudo dnf groupinstall -y "Development Tools"

# Install additional dependencies
sudo dnf install -y openssl-devel libffi-devel \
  postgresql-devel redis nginx

# Disable SELinux (temporarily for installation)
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

Configure firewall for StackStorm:

# Open required ports
sudo firewall-cmd --permanent --add-port=443/tcp    # HTTPS Web UI
sudo firewall-cmd --permanent --add-port=9100/tcp   # Auth API
sudo firewall-cmd --permanent --add-port=9101/tcp   # API
sudo firewall-cmd --permanent --add-port=9102/tcp   # Stream API
sudo firewall-cmd --reload

# Verify ports
sudo firewall-cmd --list-ports
# Should show all configured ports

Perfect! System is ready! 🎯

🔧 Step 2: Installing StackStorm - The Quick Way!

Let’s install StackStorm using the official script! 🚀

Quick Installation Script:

# Download and run installation script
curl -sSL https://stackstorm.com/packages/install.sh | bash -s -- --user=st2admin --password='Ch@ngeMe123!'

# This will install:
# - StackStorm components
# - MongoDB (metadata storage)
# - RabbitMQ (message bus)
# - PostgreSQL (optional)
# - Redis (optional)
# - Nginx (web server)

# Wait for installation to complete (10-15 minutes)

Manual Installation (Alternative):

# Add StackStorm repository
sudo rpm --import https://packagecloud.io/StackStorm/stable/gpgkey

# Create repo file
sudo tee /etc/yum.repos.d/StackStorm_stable.repo << 'EOF'
[StackStorm_stable]
name=StackStorm Stable
baseurl=https://packagecloud.io/StackStorm/stable/el/9/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packagecloud.io/StackStorm/stable/gpgkey
EOF

# Install StackStorm
sudo dnf install -y st2

# Install and configure MongoDB
sudo dnf install -y mongodb-org mongodb-org-server
sudo systemctl start mongod
sudo systemctl enable mongod

# Install and configure RabbitMQ
sudo dnf install -y rabbitmq-server
sudo systemctl start rabbitmq-server
sudo systemctl enable rabbitmq-server

# Setup StackStorm
sudo st2ctl bootstrap

Verify Installation:

# Check StackStorm status
sudo st2ctl status

# All services should show as running:
# st2actionrunner... running
# st2api... running
# st2auth... running
# st2garbagecollector... running
# st2notifier... running
# st2rulesengine... running
# st2scheduler... running
# st2sensorcontainer... running
# st2stream... running
# st2timersengine... running
# st2workflowengine... running

# Test authentication
st2 login st2admin -p 'Ch@ngeMe123!'
# Should show: Logged in as st2admin

🌟 Step 3: Accessing StackStorm - Your Automation Brain!

Time to access your event-driven platform! 🎮

Web UI Access:

# Get your server IP
ip addr show | grep inet
# Note your server IP address

# Access StackStorm Web UI
# URL: https://your-server-ip
# Username: st2admin
# Password: Ch@ngeMe123!

# Accept self-signed certificate warning

Command Line Access:

# Configure CLI
export ST2_AUTH_TOKEN=$(st2 auth st2admin -p 'Ch@ngeMe123!' -t)

# Test CLI
st2 action list --limit 5
# Should show list of actions

# Get help
st2 --help

Dashboard shows:

  • 📊 Actions - Available automations
  • 📋 Workflows - Automation chains
  • 🎯 Rules - Event triggers
  • 📦 Packs - Integration packages
  • 📈 History - Execution logs
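
Before moving on, it is worth firing one simple action from the CLI so you know the whole pipeline (API, message bus, action runner) is wired up. The core.local action ships with StackStorm, so no extra packs are needed:

# Run a shell command through StackStorm
st2 run core.local cmd="uptime"

# The output includes an execution id; full history is available with
st2 execution list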

✅ Step 4: Creating Your First Rule - If-This-Then-That!

Let’s create event-driven automation! 🎯

Install Example Pack:

# Install Linux pack for monitoring
st2 pack install linux

# List installed packs
st2 pack list

# View available actions
st2 action list --pack=linux
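
To exercise the pack right away, you can run one of its check actions - for example the load-average check (action names can vary between pack versions, so confirm against the st2 action list output above first):

# Run a sample action from the linux pack
st2 run linux.check_loadavg period=all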

Create Simple Rule:

# Create rule file
cat << 'EOF' > /tmp/disk_space_rule.yaml
---
name: "disk_space_alert"
description: "Alert when disk space is low"
enabled: true
trigger:
  type: "core.st2.IntervalTimer"
  parameters:
    unit: "minutes"
    delta: 5
criteria: {}
action:
  ref: "core.local"
  parameters:
    cmd: |
      USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
      if [ $USAGE -gt 80 ]; then
        echo "ALERT: Disk usage is ${USAGE}%"
        # Could trigger cleanup action here
      else
        echo "OK: Disk usage is ${USAGE}%"
      fi
EOF

# Create the rule
st2 rule create /tmp/disk_space_rule.yaml

# List rules
st2 rule list

# Get rule info
st2 rule get disk_space_alert
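
Because the rule uses a 5-minute interval timer, you can verify it is firing by watching trigger instances and the executions they spawn (give it at least one interval):

# Recent trigger instances and the executions created by the rule
st2 trigger-instance list
st2 execution list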

Create Auto-Remediation Rule:

# Auto-restart failed service
cat << 'EOF' > /tmp/service_restart_rule.yaml
---
name: "auto_restart_service"
description: "Automatically restart failed services"
enabled: true
trigger:
  type: "core.st2.webhook"
  parameters:
    url: "service_down"
action:
  ref: "core.local"
  parameters:
    cmd: |
      SERVICE="{{ trigger.body.service }}"
      echo "Service $SERVICE is down. Attempting restart..."
      sudo systemctl restart $SERVICE
      sleep 5
      if systemctl is-active --quiet $SERVICE; then
        echo "SUCCESS: $SERVICE restarted successfully"
      else
        echo "FAILED: Could not restart $SERVICE"
        # Could trigger alert here
      fi
EOF

# Create the rule
st2 rule create /tmp/service_restart_rule.yaml

# Test webhook trigger
curl -X POST https://localhost/api/v1/webhooks/service_down \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $ST2_AUTH_TOKEN" \
  -d '{"service": "nginx"}' \
  --insecure
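
If the webhook call returns an error, first check that the rule actually registered the webhook, then look at the resulting execution:

# Webhooks registered by rules
st2 webhook list

# Most recent executions (the restart attempt should appear here)
st2 execution list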

🌟 Step 5: Creating Workflows - Complex Automation!

Let’s build multi-step workflows! 🎯

Create Workflow:

# Create workflow directory
mkdir -p /opt/stackstorm/packs/custom/actions/workflows

# Create workflow definition
cat << 'EOF' > /opt/stackstorm/packs/custom/actions/workflows/deployment_workflow.yaml
version: '2.0'

deployment_workflow:
  description: "Complete deployment workflow"
  type: direct
  
  input:
    - app_name
    - environment
  
  tasks:
    backup_current:
      action: core.local
      input:
        cmd: "tar czf /backup/{{ _.app_name }}-$(date +%Y%m%d).tar.gz /var/www/{{ _.app_name }}"
      on-success:
        - pull_code
      on-error:
        - send_alert
    
    pull_code:
      action: core.local
      input:
        cmd: "cd /var/www/{{ _.app_name }} && git pull origin main"
      on-success:
        - run_tests
      on-error:
        - rollback
    
    run_tests:
      action: core.local
      input:
        cmd: "cd /var/www/{{ _.app_name }} && npm test"
      on-success:
        - deploy_app
      on-error:
        - rollback
    
    deploy_app:
      action: core.local
      input:
        cmd: |
          cd /var/www/{{ _.app_name }}
          npm install
          npm run build
          sudo systemctl restart {{ _.app_name }}
      on-success:
        - health_check
      on-error:
        - rollback
    
    health_check:
      action: core.local
      input:
        cmd: "curl -f http://localhost:3000/health"
      retry:
        count: 3
        delay: 10
      on-success:
        - send_success
      on-error:
        - rollback
    
    rollback:
      action: core.local
      input:
        cmd: |
          echo "Deployment failed! Rolling back..."
          tar xzf /backup/{{ _.app_name }}-$(date +%Y%m%d).tar.gz -C /
          sudo systemctl restart {{ _.app_name }}
      on-complete:
        - send_alert
    
    send_success:
      action: core.sendmail
      input:
        to: "[email protected]"
        subject: "Deployment Success: {{ _.app_name }}"
        body: "{{ _.app_name }} deployed successfully to {{ _.environment }}"
    
    send_alert:
      action: core.sendmail
      input:
        to: "[email protected]"
        subject: "Deployment Failed: {{ _.app_name }}"
        body: "Deployment of {{ _.app_name }} to {{ _.environment }} failed!"
EOF

# Register workflow
st2ctl reload --register-all

# Execute workflow
st2 run custom.deployment_workflow app_name=myapp environment=production
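
One thing the walkthrough above glosses over: st2ctl reload registers workflows through their action metadata, so the run command only works once the custom pack has a pack.yaml and an action definition pointing at the workflow file. Here is a minimal sketch with assumed values - the mistral-v2 runner matches the workflow syntax above, while newer StackStorm releases ship the Orquesta runner instead, so adjust the runner and workflow format to whatever your version supports:

# Minimal pack metadata (placeholder author/email - adjust to taste)
cat << 'EOF' > /opt/stackstorm/packs/custom/pack.yaml
---
name: custom
ref: custom
description: Local automation pack
version: 0.1.0
author: ops
email: [email protected]
EOF

# Action metadata that exposes the workflow as custom.deployment_workflow
cat << 'EOF' > /opt/stackstorm/packs/custom/actions/deployment_workflow.yaml
---
name: deployment_workflow
pack: custom
description: Complete deployment workflow
runner_type: mistral-v2
entry_point: workflows/deployment_workflow.yaml
enabled: true
parameters:
  app_name:
    type: string
    required: true
  environment:
    type: string
    default: production
EOF

# Re-register so the new action shows up
st2ctl reload --register-all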

🎮 Quick Examples

Example 1: ChatOps Integration

Setup Slack integration:

# Install Slack pack
st2 pack install slack

# Configure Slack
st2 pack config slack

# Create ChatOps alias
cat << 'EOF' > /opt/stackstorm/packs/custom/aliases/restart_service.yaml
---
name: "restart_service"
action_ref: "core.local"
description: "Restart a service via Slack"
formats:
  - "restart {{ service }}"
ack:
  enabled: true
  append_url: false
  format: "Restarting {{ service }}..."
result:
  format: |
    {% if execution.result.result.stdout %}
    Service restarted successfully!
    Output: {{ execution.result.result.stdout }}
    {% else %}
    Failed to restart service!
    Error: {{ execution.result.result.stderr }}
    {% endif %}
EOF

# Reload aliases
st2ctl reload --register-aliases

# Now in Slack: "@bot restart nginx"
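
The alias only becomes reachable from Slack once the ChatOps component itself is running. On a package-based install that is the st2chatops service, configured with your bot token (file path and variable names follow the standard st2chatops setup - double-check them against your version's docs):

# Install and configure the ChatOps service (assumes the package repo from Step 2)
sudo dnf install -y st2chatops

# Set the adapter and Slack bot token
sudo vi /opt/stackstorm/chatops/st2chatops.env
# export HUBOT_ADAPTER=slack
# export HUBOT_SLACK_TOKEN=xoxb-your-bot-token

sudo systemctl enable --now st2chatops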

Example 2: Monitoring Integration

# Create monitoring sensor
cat << 'EOF' > /opt/stackstorm/packs/custom/sensors/cpu_monitor.py
import socket

import eventlet  # cooperative sleep so the sensor container is not blocked
import psutil    # must also be importable by st2sensorcontainer (e.g. via the pack's requirements.txt)

from st2reactor.sensor.base import Sensor


class CPUMonitorSensor(Sensor):
    def setup(self):
        self._threshold = 80
        self._poll_interval = 60
    
    def run(self):
        while True:
            cpu_percent = psutil.cpu_percent(interval=1)
            if cpu_percent > self._threshold:
                payload = {
                    'cpu_percent': cpu_percent,
                    'threshold': self._threshold,
                    'hostname': socket.gethostname()
                }
                self.sensor_service.dispatch(
                    trigger='custom.high_cpu',
                    payload=payload
                )
            eventlet.sleep(self._poll_interval)
    
    def cleanup(self):
        pass
    
    def add_trigger(self, trigger):
        pass
    
    def update_trigger(self, trigger):
        pass
    
    def remove_trigger(self, trigger):
        pass
EOF

# Register sensor
st2ctl reload --register-sensors
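
Registration only picks the sensor up if it is accompanied by a metadata file describing the sensor class and the trigger it emits (the custom.high_cpu trigger used in dispatch()). A minimal sketch, with an assumed payload schema:

cat << 'EOF' > /opt/stackstorm/packs/custom/sensors/cpu_monitor.yaml
---
class_name: "CPUMonitorSensor"
entry_point: "cpu_monitor.py"
description: "Dispatches custom.high_cpu when CPU usage crosses a threshold"
trigger_types:
  - name: "high_cpu"
    description: "CPU usage above the configured threshold"
    payload_schema:
      type: "object"
      properties:
        cpu_percent:
          type: "number"
        threshold:
          type: "number"
        hostname:
          type: "string"
EOF

# Register again after adding the metadata
st2ctl reload --register-sensors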

Example 3: Kubernetes Integration

# Install Kubernetes pack
st2 pack install kubernetes

# Create pod restart rule
cat << 'EOF' > /tmp/k8s_pod_restart.yaml
---
name: "k8s_pod_auto_restart"
description: "Auto-restart failed pods"
enabled: true
trigger:
  type: "kubernetes.pod_status"
  parameters:
    status: "Failed"
action:
  ref: "kubernetes.restart_pod"
  parameters:
    namespace: "{{ trigger.namespace }}"
    pod: "{{ trigger.pod_name }}"
EOF

st2 rule create /tmp/k8s_pod_restart.yaml
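
As with the Slack pack, the Kubernetes pack needs cluster credentials before its sensors and actions can talk to the API server; the interactive config prompt handles this. The trigger and action names in the rule above should also be checked against what your pack version actually exposes:

# Configure cluster access for the pack
st2 pack config kubernetes

# Confirm the triggers and actions the pack provides
st2 trigger list --pack=kubernetes
st2 action list --pack=kubernetes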

🚨 Fix Common Problems

Problem 1: Services Not Starting

Symptom: StackStorm services fail to start 😰

Fix:

# Check service status
sudo st2ctl status

# Check individual service logs
sudo journalctl -u st2api -n 50
sudo journalctl -u st2actionrunner -n 50

# Common issue: MongoDB not running
sudo systemctl status mongod
sudo systemctl start mongod

# Common issue: RabbitMQ not running
sudo systemctl status rabbitmq-server
sudo systemctl start rabbitmq-server

# Restart all services
sudo st2ctl restart

Problem 2: Authentication Fails

Symptom: Cannot login to Web UI or CLI 🔐

Fix:

# Reset admin password
sudo htpasswd /etc/st2/htpasswd st2admin
# Enter new password

# Generate new auth token
st2 auth st2admin -p 'YourNewPassword' -t

# Check auth service
sudo systemctl status st2auth
sudo journalctl -u st2auth -n 50

# Verify API is accessible
curl -k https://localhost/api/v1/actions

Problem 3: Workflows Not Executing

Symptom: Workflows stuck or failing 🔴

Fix:

# Check workflow engine
sudo systemctl status st2workflowengine

# View execution details
st2 execution list
st2 execution get <execution-id>

# Check action runner
sudo systemctl status st2actionrunner

# Increase action runners
sudo vi /etc/st2/st2.conf
# [actionrunner]
# workers = 4

# Restart services
sudo st2ctl restart

# Clear stuck executions
st2 execution cancel <execution-id>

📋 Simple Commands Summary

| Task | Command | Purpose |
|------|---------|---------|
| Check status | sudo st2ctl status | Service status |
| Restart all | sudo st2ctl restart | Restart services |
| List actions | st2 action list | Show actions |
| Run action | st2 run core.local cmd="ls" | Execute action |
| List rules | st2 rule list | Show rules |
| List packs | st2 pack list | Show packs |
| Install pack | st2 pack install <pack> | Add integration |
| View logs | st2 execution list | Execution history |
| Get help | st2 --help | CLI help |

💡 Tips for Success

🚀 Performance Optimization

Make StackStorm super fast:

# Increase action runners
sudo vi /etc/st2/st2.conf
# [actionrunner]
# workers = 10
# [mistral]
# max_workflow_executions = 100

# Optimize MongoDB
sudo vi /etc/mongod.conf
# storage:
#   wiredTiger:
#     engineConfig:
#       cacheSizeGB: 2

# Enable result caching
# [resultstracker]
# query_interval = 0.1

# Restart services
sudo st2ctl restart

🔒 Security Best Practices

Keep StackStorm secure:

  1. Enable HTTPS - Use proper SSL certificates! 🔐
  2. RBAC - Role-based access control! 👥
  3. API Keys - Use token authentication! 🔑
  4. Secrets - Use datastore encryption! 🔓
  5. Audit - Enable comprehensive logging! 📝

# Enable RBAC
sudo vi /etc/st2/st2.conf
# [rbac]
# enable = True

# Store secrets securely
st2 key set api_key "secret_value" --encrypt

# Configure SSL
sudo vi /etc/nginx/conf.d/st2.conf
# Add SSL configuration

# Restart services
sudo st2ctl restart
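
Encrypted keys can be read back with the --decrypt flag and referenced from rules or workflows through the datastore Jinja context (decrypt_kv is the documented filter for using encrypted values in parameters):

# Read the encrypted value back
st2 key get api_key --decrypt

# Reference it in a rule or action parameter, e.g.:
#   token: "{{ st2kv.system.api_key | decrypt_kv }}"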

📊 Monitoring and Backup

Keep StackStorm healthy:

# Backup script
cat << 'EOF' > /usr/local/bin/backup-stackstorm.sh
#!/bin/bash
BACKUP_DIR="/backup/stackstorm"
DATE=$(date +%Y%m%d)

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup MongoDB
mongodump --out $BACKUP_DIR/mongo-$DATE

# Backup configurations
tar czf $BACKUP_DIR/config-$DATE.tar.gz /etc/st2/

# Backup packs
tar czf $BACKUP_DIR/packs-$DATE.tar.gz /opt/stackstorm/packs/

# Backup datastore
st2 key list -j > $BACKUP_DIR/datastore-$DATE.json

# Cleanup old backups
find $BACKUP_DIR -type f -mtime +30 -delete
EOF

chmod +x /usr/local/bin/backup-stackstorm.sh
# Add to cron: 0 2 * * * /usr/local/bin/backup-stackstorm.sh
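
A backup is only as useful as the restore, so keep the reverse steps next to the script. A sketch using the paths from the backup script above (stop StackStorm around the MongoDB restore so nothing writes mid-restore):

# Restore sketch - replace <DATE> with the backup date
sudo st2ctl stop
mongorestore /backup/stackstorm/mongo-<DATE>
sudo tar xzf /backup/stackstorm/config-<DATE>.tar.gz -C /
sudo tar xzf /backup/stackstorm/packs-<DATE>.tar.gz -C /
sudo st2ctl start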

🏆 What You Learned

You’re now a StackStorm automation expert! 🎓 You’ve successfully:

  • ✅ Installed StackStorm on AlmaLinux 9
  • ✅ Created event-driven rules
  • ✅ Built complex workflows
  • ✅ Implemented auto-remediation
  • ✅ Integrated monitoring
  • ✅ Set up ChatOps
  • ✅ Mastered IFTTT for infrastructure

Your infrastructure is now intelligent and self-healing! 🤖

🎯 Why This Matters

StackStorm transforms infrastructure operations! With your event-driven platform, you can:

  • 🚀 React instantly - Events trigger automation!
  • 🔧 Self-heal - Problems fix themselves!
  • 💬 ChatOps ready - Automation via Slack!
  • 🔄 Integrate everything - 160+ packs available!
  • 💰 Save time - Let robots handle repetitive tasks!

You’re not just automating - you’re building intelligent infrastructure that responds and adapts! Every event is captured, every response is automated! 🎭

Keep automating, keep innovating, and remember - with StackStorm, your infrastructure thinks for itself! ⭐

May your events trigger smoothly and your infrastructure self-heal perfectly! 🚀🤖🙌