🚢 HashiCorp Nomad on AlmaLinux: Simple and Flexible Workload Orchestration

Tags: nomad, hashicorp, almalinux
Published Sep 6, 2025 · 5 min read

Master HashiCorp Nomad on AlmaLinux! Learn installation, job scheduling, container orchestration, and cluster management. Perfect alternative to Kubernetes!

Welcome to simple orchestration that just works! 🎉 Ready to deploy any workload anywhere? HashiCorp Nomad is a flexible orchestrator that runs containers, VMs, and standalone applications with ease! It keeps deployments simple without the usual orchestration complexity! Think of it as your universal workload scheduler! 🚀✨

๐Ÿค” Why is Nomad Important?

Nomad revolutionizes workload orchestration! ๐Ÿš€ Hereโ€™s why itโ€™s amazing:

  • ๐ŸŽฏ Simple to Learn - Single binary, easy concepts!
  • ๐Ÿ“ฆ Any Workload - Containers, VMs, binaries, and more!
  • ๐Ÿš€ Massive Scale - Proven with 10,000+ nodes!
  • ๐Ÿ”ง Self-Contained - No external dependencies!
  • ๐ŸŒ Multi-Region - Built-in federation!
  • ๐Ÿ†“ Open Source - Free community edition!

Itโ€™s like Kubernetes but actually simple! ๐Ÿ’ฐ

๐ŸŽฏ What You Need

Before building your orchestration platform, ensure you have:

  • โœ… AlmaLinux 9 server (or cluster)
  • โœ… Root or sudo access
  • โœ… At least 2GB RAM (4GB recommended)
  • โœ… 2 CPU cores minimum
  • โœ… 10GB free disk space
  • โœ… Docker installed (optional)
  • โœ… Love for simplicity! ๐Ÿšข

๐Ÿ“ Step 1: System Preparation - Getting Ready!

Letโ€™s prepare AlmaLinux 9 for Nomad! ๐Ÿ—๏ธ

# Update system packages
sudo dnf update -y

# Install required packages
sudo dnf install -y wget unzip curl jq

# Install Docker (for container workloads)
sudo dnf install -y dnf-utils
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# Start Docker
sudo systemctl start docker
sudo systemctl enable docker

# Create nomad user
sudo useradd -r -d /var/lib/nomad -s /bin/false nomad

# Create necessary directories
sudo mkdir -p /etc/nomad /opt/nomad /var/lib/nomad
sudo mkdir -p /var/log/nomad

# Set proper ownership
sudo chown -R nomad:nomad /etc/nomad /var/lib/nomad /var/log/nomad

Configure firewall for Nomad:

# Open Nomad ports
sudo firewall-cmd --permanent --add-port=4646/tcp  # HTTP API
sudo firewall-cmd --permanent --add-port=4647/tcp  # RPC
sudo firewall-cmd --permanent --add-port=4648/tcp  # Serf WAN
sudo firewall-cmd --permanent --add-port=4648/udp  # Serf WAN
sudo firewall-cmd --reload

# For dynamic port allocation (optional)
sudo firewall-cmd --permanent --add-port=20000-32000/tcp
sudo firewall-cmd --reload

# Verify ports
sudo firewall-cmd --list-ports

Perfect! System is ready! ๐ŸŽฏ

๐Ÿ”ง Step 2: Installing Nomad - The Single Binary!

Letโ€™s install Nomad! ๐Ÿš€

Install from HashiCorp Repository:

# Add HashiCorp repository
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

# Install Nomad
sudo dnf install -y nomad

# Verify installation
nomad version
# Should show: Nomad v1.7.x

# Or manual installation:
cd /tmp
NOMAD_VERSION="1.7.2"
wget https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip
unzip nomad_${NOMAD_VERSION}_linux_amd64.zip
sudo mv nomad /usr/local/bin/
sudo chmod +x /usr/local/bin/nomad

Configure Nomad Server:

# Create server configuration
sudo tee /etc/nomad/nomad.hcl << 'EOF'
datacenter = "dc1"
data_dir = "/var/lib/nomad"
log_level = "INFO"

server {
  enabled = true
  bootstrap_expect = 1
}

client {
  enabled = true
  servers = ["127.0.0.1:4647"]
}

ui {
  enabled = true
}

plugin "docker" {
  config {
    allow_privileged = true
    volumes {
      enabled = true
    }
  }
}

plugin "raw_exec" {
  config {
    enabled = true
  }
}

bind_addr = "0.0.0.0"

advertise {
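  # GetInterfaceIP is a go-sockaddr template - replace "eth0" with your
  # actual interface name (check with: ip addr), e.g. "ens18" on many VMs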
  http = "{{ GetInterfaceIP \"eth0\" }}"
  rpc  = "{{ GetInterfaceIP \"eth0\" }}"
  serf = "{{ GetInterfaceIP \"eth0\" }}"
}
EOF

# Set proper permissions
sudo chown nomad:nomad /etc/nomad/nomad.hcl
sudo chmod 640 /etc/nomad/nomad.hcl

Create Systemd Service:

# Create service file
sudo tee /etc/systemd/system/nomad.service << 'EOF'
[Unit]
Description=HashiCorp Nomad
Documentation=https://www.nomadproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/nomad/nomad.hcl
StartLimitBurst=3

[Service]
Type=notify
# Nomad clients usually need extra privileges to run task drivers; either run
# the agent as root or add the nomad user to the docker group for Docker tasks
User=nomad
Group=nomad
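# If you installed the binary manually, change this to /usr/local/bin/nomad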
ExecStart=/usr/bin/nomad agent -config=/etc/nomad/nomad.hcl
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
TasksMax=infinity

[Install]
WantedBy=multi-user.target
EOF

# Reload and start Nomad
sudo systemctl daemon-reload
sudo systemctl enable nomad
sudo systemctl start nomad

# Check status
sudo systemctl status nomad
# Should show: active (running)

๐ŸŒŸ Step 3: Access Nomad UI - Your Orchestration Dashboard!

Time to explore Nomad! ๐ŸŽฎ

Access Web UI:

# Get your server IP
ip addr show | grep inet

# Access Nomad UI
# URL: http://your-server-ip:4646
# No authentication by default

# Check cluster status
nomad server members
# Should show your server

nomad node status
# Should show your client node

Dashboard shows:

  • ๐Ÿ’ผ Jobs - Running workloads
  • ๐Ÿ“ฆ Allocations - Task instances
  • ๐Ÿ–ฅ๏ธ Clients - Worker nodes
  • ๐ŸŒ Servers - Control plane
  • ๐Ÿ“Š Topology - Cluster visualization

โœ… Step 4: Deploy Your First Job - Letโ€™s Run Workloads!

Time to deploy applications! ๐ŸŽฏ

Create Docker Job:

# Create job specification
cat << 'EOF' > webapp.nomad
job "webapp" {
  datacenters = ["dc1"]
  type = "service"
  
  group "frontend" {
    count = 3
    
    network {
      port "http" {
        to = 80
      }
    }
    
    task "nginx" {
      driver = "docker"
      
      config {
        image = "nginx:latest"
        ports = ["http"]
        
        volumes = [
          "local/html:/usr/share/nginx/html"
        ]
      }
      
      template {
        data = <<HTML
<html>
  <body>
    <h1>Hello from Nomad! ๐Ÿš€</h1>
    <p>Node: {{ env "node.unique.name" }}</p>
    <p>Allocation: {{ env "NOMAD_ALLOC_ID" }}</p>
  </body>
</html>
HTML
        destination = "local/html/index.html"
      }
      
      resources {
        cpu    = 100
        memory = 128
      }
      
      service {
        name = "webapp"
        port = "http"
        
        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
EOF

# Plan the job (dry run)
nomad job plan webapp.nomad

# Run the job
nomad job run webapp.nomad

# Check job status
nomad job status webapp

# View allocations
nomad alloc status -short
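
One thing to keep in mind: the service and check blocks above register with Consul by default. If you are not running a Consul agent, a minimal variation is to switch the service block to Nomad's built-in service discovery (available since Nomad 1.4):

# In the task's service block, set the native provider
service {
  name     = "webapp"
  port     = "http"
  provider = "nomad"   # register in Nomad's own catalog instead of Consul

  check {
    type     = "http"
    path     = "/"
    interval = "10s"
    timeout  = "2s"
  }
}

# Native services can then be listed with:
nomad service list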

Create Batch Job:

# Create batch job
cat << 'EOF' > batch-process.nomad
job "batch-process" {
  datacenters = ["dc1"]
  type = "batch"
  
  group "processing" {
    task "process-data" {
      driver = "raw_exec"
      
      config {
        command = "/bin/bash"
        args = ["-c", "echo 'Processing started...'; sleep 30; echo 'Processing complete!'"]
      }
      
      resources {
        cpu    = 500
        memory = 256
      }
      
      logs {
        max_files     = 10
        max_file_size = 10
      }
    }
  }
}
EOF

# Run batch job
nomad job run batch-process.nomad

# Monitor batch job
nomad job status batch-process
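
To read what the batch task printed, grab an allocation ID from the job status output and pull its logs (alloc-id below is a placeholder):

# View batch task output
nomad job status batch-process            # copy an Allocation ID
nomad alloc logs <alloc-id> process-data  # stdout of the task
nomad alloc logs -stderr <alloc-id> process-data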

๐ŸŒŸ Step 5: Advanced Features - Scaling and Updates!

Letโ€™s explore advanced orchestration! ๐ŸŽฏ

Rolling Updates:

# Job with update strategy
cat << 'EOF' > rolling-update.nomad
job "api" {
  datacenters = ["dc1"]
  
  update {
    max_parallel     = 1
    min_healthy_time = "30s"
    healthy_deadline = "5m"
    auto_revert      = true
    canary           = 1
  }
  
  group "api" {
    count = 5
    
    task "api-server" {
      driver = "docker"
      
      config {
        image = "myapi:v1"
      }
      
      resources {
        cpu    = 200
        memory = 512
      }
    }
  }
}
EOF

# Deploy with canary
nomad job run rolling-update.nomad

# Promote canary if healthy
nomad job promote api
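
Each update creates a deployment you can watch, and old job versions stick around for manual rollback; a quick sketch (the ID and version are placeholders):

# Watch the deployment created by the update
nomad deployment list
nomad deployment status <deployment-id>

# Review and revert job versions if needed
nomad job history api
nomad job revert api <version>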

Auto-Scaling:

# Scaling policies live in the job spec itself and are evaluated by the
# separately deployed Nomad Autoscaler. Add a scaling block to the
# "frontend" group in webapp.nomad:

    scaling {
      enabled = true
      min     = 1
      max     = 10

      policy {
        cooldown            = "1m"
        evaluation_interval = "30s"

        check "avg_cpu" {
          source = "nomad-apm"
          query  = "avg_cpu"

          strategy "target-value" {
            target = 70
          }
        }
      }
    }

# Re-run the job, then inspect the registered policies
nomad job run webapp.nomad
nomad scaling policy list

Multi-Region Deployment:

# Multi-region job (the multiregion block needs Nomad Enterprise and
# already-federated regions)
cat << 'EOF' > multi-region.nomad
job "global-app" {
  multiregion {
    strategy {
      max_parallel = 1
      on_failure   = "fail_local"
    }
    
    region "us-east" {
      count = 3
      datacenters = ["us-east-1"]
    }
    
    region "eu-west" {
      count = 2
      datacenters = ["eu-west-1"]
    }
  }
  
  group "app" {
    task "server" {
      driver = "docker"
      config {
        image = "myapp:latest"
      }
    }
  }
}
EOF
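
The multiregion block assumes your regions are already federated. Federation itself is a one-time server join over the gossip port (a sketch with placeholder addresses):

# From a server in one region, join a server in the other region
nomad server join <eu-west-server-ip>:4648

# Servers from every federated region should now appear
nomad server members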

๐ŸŽฎ Quick Examples

Example 1: Consul Integration

# Job with Consul service mesh
cat << 'EOF' > consul-connect.nomad
job "connect-demo" {
  datacenters = ["dc1"]
  
  group "api" {
    network {
      mode = "bridge"
    }
    
    service {
      name = "api"
      port = "9090"
      
      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "database"
              local_bind_port  = 5432
            }
          }
        }
      }
    }
    
    task "api" {
      driver = "docker"
      config {
        image = "myapi:latest"
      }
    }
  }
}
EOF
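
Connect sidecars assume a local Consul agent with Connect enabled, and bridge networking needs the CNI reference plugins on every client node. A minimal sketch of the CNI install (the pinned version is just an example - grab a current release):

# Install CNI reference plugins to the path Nomad checks by default
curl -L -o /tmp/cni-plugins.tgz \
  https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -C /opt/cni/bin -xzf /tmp/cni-plugins.tgz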

Example 2: Parameterized Jobs

# Parameterized batch job
cat << 'EOF' > parameterized.nomad
job "data-processor" {
  datacenters = ["dc1"]
  type = "batch"
  
  parameterized {
    payload       = "required"
    meta_required = ["input_file"]
  }
  
  group "process" {
    task "processor" {
      driver = "raw_exec"
      
      config {
        command = "/usr/local/bin/process.sh"
        args    = ["${NOMAD_META_input_file}"]
      }
      
      dispatch_payload {
        file = "input.txt"
      }
    }
  }
}
EOF

# Dispatch job with parameters
echo "data to process" | nomad job dispatch -meta input_file=/data/file.txt data-processor -

Example 3: System Jobs

# System job (runs on all nodes)
cat << 'EOF' > monitoring.nomad
job "monitoring" {
  datacenters = ["dc1"]
  type = "system"
  
  group "prometheus" {
    task "node-exporter" {
      driver = "docker"
      
      config {
        image = "prom/node-exporter:latest"
        network_mode = "host"
        pid_mode = "host"
        
        volumes = [
          "/:/host:ro,rslave"
        ]
      }
      
      resources {
        cpu    = 100
        memory = 128
      }
    }
  }
}
EOF

๐Ÿšจ Fix Common Problems

Problem 1: Job Stuck Pending

Symptom: Job wonโ€™t start, stays pending ๐Ÿ˜ฐ

Fix:

# Check job status
nomad job status job-name

# Check allocation status
nomad alloc status alloc-id

# View placement failures
nomad job status -verbose job-name

# Common issues:
# - No nodes with required resources
# - Constraint conflicts
# - Driver not available

# Check node resources
nomad node status -verbose

# Relax constraints or add resources

Problem 2: Node Not Ready

Symptom: Client node not available ๐Ÿ–ฅ๏ธ

Fix:

# Check node status
nomad node status

# Check client logs
sudo journalctl -u nomad -n 100

# Verify Docker is running (if using)
sudo systemctl status docker

# Check node eligibility
nomad node eligibility -enable node-id

# Drain node for maintenance
nomad node drain -enable node-id

Problem 3: Networking Issues

Symptom: Services canโ€™t communicate ๐Ÿ”Œ

Fix:

# Check CNI plugins (if using)
ls -la /opt/cni/bin/

# Verify bridge network
docker network ls

# Check allocation networking
nomad alloc exec alloc-id ip addr

# Debug service discovery
nomad alloc exec alloc-id nslookup service.consul

# Check firewall rules
sudo firewall-cmd --list-all

๐Ÿ“‹ Simple Commands Summary

Task          Command                                   Purpose
Start Nomad   sudo systemctl start nomad                Start service
Run job       nomad job run job.nomad                   Deploy workload
Stop job      nomad job stop job-name                   Stop workload
Job status    nomad job status job-name                 Check job
List jobs     nomad job list                            Show all jobs
Node status   nomad node status                         Show nodes
Alloc logs    nomad alloc logs alloc-id                 View logs
Exec into     nomad alloc exec alloc-id bash            Container shell
Scale job     nomad job scale job-name group count      Change count

๐Ÿ’ก Tips for Success

๐Ÿš€ Performance Optimization

Make Nomad super fast:

# Tune scheduler and client settings - merge these into the existing
# server and client blocks of /etc/nomad/nomad.hcl rather than appending
# duplicate blocks, then restart Nomad (sudo systemctl restart nomad)

server {
  heartbeat_grace = "10s"

  plan_rejection_tracker {
    enabled = true
    node_threshold = 100
    node_window = "10m"
  }
}

client {
  max_kill_timeout = "30s"

  options {
    "driver.raw_exec.enable" = "1"
    "docker.volumes.enabled" = "true"
  }
}

# Memory oversubscription must be enabled cluster-wide before jobs can burst
# (Nomad 1.2+ can set this from the CLI)
nomad operator scheduler set-config -memory-oversubscription=true

# Then, in the job spec:
# resources {
#   memory     = 256   # scheduled amount
#   memory_max = 1024  # allow bursting up to 1GB
# }

๐Ÿ”’ Security Best Practices

Keep Nomad secure:

  1. Enable ACLs - Access control! ๐Ÿ”
  2. TLS everywhere - Encrypt traffic! ๐Ÿ”‘
  3. Vault integration - Secure secrets! ๐Ÿ”“
  4. Namespace isolation - Multi-tenancy! ๐Ÿข
  5. Audit logging - Track changes! ๐Ÿ“
# Enable ACLs
nomad acl bootstrap

# Create namespace
nomad namespace apply -description "Production apps" production

# Enable TLS
nomad tls ca create
nomad tls cert create -server -region global
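
For Vault integration, here is a minimal sketch of the job side, assuming Vault is already wired into the Nomad agent config and that a "myapp-read" policy and a secret/data/myapp KV path exist (both names are assumptions for illustration):

# Inside a task block:
vault {
  policies = ["myapp-read"]   # assumed Vault policy name
}

template {
  data = <<SECRETS
DB_PASSWORD={{ with secret "secret/data/myapp" }}{{ .Data.data.password }}{{ end }}
SECRETS
  destination = "secrets/app.env"
  env         = true
}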

๐Ÿ“Š Monitoring and Backup

Keep Nomad healthy:

# Backup script
sudo tee /usr/local/bin/backup-nomad.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/backup/nomad"
DATE=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"

# Snapshot Raft state (run against a server)
nomad operator snapshot save "$BACKUP_DIR/nomad-$DATE.snap"

# Export job specs (skip the header line of 'nomad job list')
for job in $(nomad job list | awk 'NR>1 {print $1}'); do
  nomad job inspect "$job" > "$BACKUP_DIR/job-$job-$DATE.json"
done

# Keep only the last 7 days of snapshots
find "$BACKUP_DIR" -name "*.snap" -mtime +7 -delete

echo "Backup completed!"
EOF

sudo chmod +x /usr/local/bin/backup-nomad.sh
# Add to cron: 0 2 * * * /usr/local/bin/backup-nomad.sh
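
To restore later, replay a snapshot into the running servers (the filename is a placeholder):

# Restore cluster state from a snapshot
nomad operator snapshot restore /backup/nomad/nomad-<timestamp>.snap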

๐Ÿ† What You Learned

Youโ€™re now a Nomad expert! ๐ŸŽ“ Youโ€™ve successfully:

  • โœ… Installed Nomad on AlmaLinux 9
  • โœ… Deployed various workload types
  • โœ… Configured job specifications
  • โœ… Implemented rolling updates
  • โœ… Set up auto-scaling
  • โœ… Managed cluster operations
  • โœ… Mastered simple orchestration

Your orchestration platform is production-ready! ๐Ÿšข

๐ŸŽฏ Why This Matters

Nomad simplifies orchestration! With your platform, you can:

  • ๐Ÿš€ Deploy anything - Containers, VMs, binaries!
  • ๐ŸŽฏ Keep it simple - No complexity overhead!
  • ๐Ÿ“ˆ Scale massively - 10,000+ nodes proven!
  • ๐ŸŒ Go multi-region - Built-in federation!
  • ๐Ÿ’ฐ Save resources - Efficient bin-packing!

Youโ€™re not just orchestrating - youโ€™re deploying workloads the simple way! Every job is scheduled, every resource is optimized! ๐ŸŽญ

Keep deploying, keep scaling, and remember - with Nomad, orchestration is actually simple! โญ

May your workloads run smoothly and your clusters scale effortlessly! ๐Ÿš€๐Ÿšข๐Ÿ™Œ