🚢 AlmaLinux Container Orchestration: Complete Kubernetes Setup Guide

Published Sep 17, 2025

Master AlmaLinux Kubernetes cluster deployment for container orchestration! Learn cluster setup, pod management, service discovery, persistent storage, monitoring, and enterprise-grade container platform deployment.

52 min read

Welcome to the comprehensive AlmaLinux Kubernetes container orchestration guide! 🎉 Kubernetes is the world's leading container orchestration platform, enabling you to deploy, manage, and scale containerized applications with ease. Whether you're building microservices architectures, implementing CI/CD pipelines, or creating cloud-native applications, Kubernetes provides the foundation for modern application deployment! 🌟

Setting up a Kubernetes cluster might seem daunting, but we'll guide you through every step to build a production-ready container orchestration platform. By the end of this guide, you'll have a fully functional Kubernetes cluster that can handle enterprise workloads with automatic scaling, service discovery, and self-healing capabilities! 🚀

🤔 Why is Container Orchestration Important?

Container orchestration with Kubernetes is absolutely essential for modern application deployment! Here's why setting up Kubernetes is incredibly valuable: ✨

  • 🎯 Automated Deployment: Deploy and update applications without downtime
  • 📈 Auto-Scaling: Automatically scale applications based on demand
  • 🔄 Self-Healing: Automatically restart failed containers and replace unhealthy nodes
  • 🌐 Service Discovery: Built-in load balancing and service networking
  • 💾 Storage Orchestration: Automatically mount storage systems for persistent data
  • 🔐 Secret Management: Securely manage sensitive configuration and credentials
  • 📊 Resource Management: Efficiently allocate CPU, memory, and storage resources
  • 🛡️ Security Policies: Implement network policies and access controls
  • 🔧 Configuration Management: Manage application configuration separately from code
  • 🌍 Multi-Cloud: Run consistently across different cloud providers and on-premises

🎯 What You Need

Before we start building your Kubernetes cluster, make sure you have these essentials ready:

✅ Multiple AlmaLinux 9.x servers (1 master + 2+ worker nodes)
✅ Minimum 2GB RAM per node and 20GB disk space
✅ Static IP addresses for all cluster nodes
✅ Network connectivity between all nodes
✅ Basic container knowledge (we'll guide you through everything!)
✅ Terminal/SSH access to all servers
✅ Text editor familiarity (nano, vim, or gedit)
✅ Firewall admin access for port configuration
✅ Container registry access (Docker Hub or private registry)
✅ Storage for persistent volumes (local or network storage)

๐Ÿ“ Step 1: System Preparation and Container Runtime

Let's start by preparing all nodes and installing the container runtime! 🎯

# Run these commands on ALL nodes (master and workers)

# Update system packages to latest versions
sudo dnf update -y

# Disable swap (required for Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Configure kernel parameters for Kubernetes
sudo tee /etc/modules-load.d/k8s.conf << 'EOF'
overlay
br_netfilter
EOF

# Load kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl parameters
sudo tee /etc/sysctl.d/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl parameters
sudo sysctl --system

# Install containerd container runtime
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io

# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Enable SystemdCgroup for containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Start and enable containerd
sudo systemctl enable containerd
sudo systemctl start containerd

# Verify containerd is running
sudo systemctl status containerd

# Install cri-tools (optional but useful)
sudo dnf install -y cri-tools

# Test containerd
sudo ctr version

Expected output:

Complete!
โ— containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled)
     Active: active (running) since Tue 2025-09-17 17:00:15 EDT

Client:
  Version:  1.6.24
  Revision: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
  Go version: go1.19.13

Server:
  Version:  1.6.24
  Revision: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
  UUID: 12345678-1234-5678-9abc-123456789012

Perfect! 🌟 Container runtime is installed and configured on all nodes!
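💡 Optional: since cri-tools is installed, you can also verify containerd over its CRI socket. A minimal sketch (crictl just needs to be pointed at the same containerd socket used throughout this guide; exact output fields vary by version):

# Point crictl at the containerd CRI socket
sudo tee /etc/crictl.yaml << 'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
EOF

# Confirm the runtime answers over CRI
sudo crictl version
sudo crictl info | head -20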

🔧 Step 2: Install Kubernetes Components

Install kubeadm, kubelet, and kubectl on all nodes! ⚡

# Run these commands on ALL nodes (master and workers)

# Add Kubernetes repository
sudo tee /etc/yum.repos.d/kubernetes.repo << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# Install Kubernetes components
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Enable kubelet service
sudo systemctl enable kubelet

# Check installed versions
kubeadm version
kubelet --version
kubectl version --client

# Configure kubelet (RHEL-family systems such as AlmaLinux read
# /etc/sysconfig/kubelet, not /etc/default/kubelet)
sudo tee /etc/sysconfig/kubelet << 'EOF'
KUBELET_EXTRA_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
EOF

# Create kubelet systemd drop-in directory
sudo mkdir -p /etc/systemd/system/kubelet.service.d

# Configure kubelet for containerd
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf << 'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_KUBEADM_ARGS=--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
# Pull in KUBELET_EXTRA_ARGS from /etc/sysconfig/kubelet (leading '-' makes the file optional)
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF

# Reload systemd
sudo systemctl daemon-reload

# Set SELinux to permissive (required for Kubernetes)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Configure firewall for Kubernetes
sudo systemctl enable firewalld
sudo systemctl start firewalld

# On MASTER node only - add these firewall rules:
sudo firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp   # Kubelet API
sudo firewall-cmd --permanent --add-port=10259/tcp   # kube-scheduler
sudo firewall-cmd --permanent --add-port=10257/tcp   # kube-controller-manager

# On WORKER nodes only - add these firewall rules:
sudo firewall-cmd --permanent --add-port=10250/tcp   # Kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services

# On ALL nodes - add common ports:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --permanent --add-masquerade

# Reload firewall
sudo firewall-cmd --reload

echo "Kubernetes components installed successfully!"

Expected output:

Complete!
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2"}
kubelet version: v1.28.2
Client Version: v1.28.2
Kubernetes components installed successfully!

Excellent! ✅ Kubernetes components are installed on all nodes!
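💡 Optional: before initializing the cluster, a quick spot check of the Step 1 prerequisites can save a failed kubeadm pre-flight later. A minimal sketch:

# Swap should be off, the bridge modules loaded, and forwarding enabled
free -h | grep -i swap
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward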

🌟 Step 3: Initialize Kubernetes Master Node

Set up the Kubernetes control plane on the master node! 📊

# Run these commands ONLY on the MASTER node

# Initialize the Kubernetes cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=$(hostname -I | awk '{print $1}') --upload-certs

# Wait for initialization to complete, then set up kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify cluster status
kubectl cluster-info
kubectl get nodes

# Install Flannel CNI (Container Network Interface)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Wait for Flannel pods to be ready
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=300s

# Check all system pods
kubectl get pods -n kube-system

# Generate join command for worker nodes
kubeadm token create --print-join-command > /tmp/kubeadm-join-command.txt
cat /tmp/kubeadm-join-command.txt

# Create cluster management script
sudo tee /usr/local/bin/k8s-cluster-manager.sh << 'EOF'
#!/bin/bash
# Kubernetes Cluster Management Script

case "$1" in
    status)
        echo "=== Cluster Status ==="
        kubectl cluster-info
        echo ""
        kubectl get nodes -o wide
        echo ""
        kubectl top nodes 2>/dev/null || echo "Metrics server not installed"
        ;;

    pods)
        echo "=== All Pods ==="
        kubectl get pods --all-namespaces -o wide
        ;;

    services)
        echo "=== All Services ==="
        kubectl get services --all-namespaces -o wide
        ;;

    health)
        echo "=== Cluster Health ==="
        kubectl get componentstatuses
        echo ""
        kubectl get pods -n kube-system
        ;;

    join-command)
        echo "=== Worker Join Command ==="
        kubeadm token create --print-join-command
        ;;

    logs)
        if [ -z "$2" ]; then
            echo "Usage: $0 logs <pod-name> [namespace]"
            exit 1
        fi
        NAMESPACE=${3:-default}
        kubectl logs $2 -n $NAMESPACE --tail=50
        ;;

    describe)
        if [ -z "$2" ]; then
            echo "Usage: $0 describe <resource> <name> [namespace]"
            exit 1
        fi
        NAMESPACE=${4:-default}
        kubectl describe $2 $3 -n $NAMESPACE
        ;;

    *)
        echo "Usage: $0 {status|pods|services|health|join-command|logs|describe}"
        echo "Examples:"
        echo "  $0 status"
        echo "  $0 pods"
        echo "  $0 logs nginx-pod default"
        echo "  $0 describe pod nginx-pod default"
        ;;
esac
EOF

# Make cluster manager executable
sudo chmod +x /usr/local/bin/k8s-cluster-manager.sh

# Test cluster management script
sudo /usr/local/bin/k8s-cluster-manager.sh status

echo "Kubernetes master node initialized successfully!"
echo "Join command saved to /tmp/kubeadm-join-command.txt"
echo "Copy this command and run it on worker nodes"

Expected output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Kubernetes control plane is running at https://192.168.1.10:6443
CoreDNS is running at https://192.168.1.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

=== Cluster Status ===
Kubernetes control plane is running at https://192.168.1.10:6443
CoreDNS is running at https://192.168.1.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

NAME           STATUS   ROLES           AGE   VERSION
master-node    Ready    control-plane   5m    v1.28.2

Amazing! 🌟 Kubernetes master node is initialized and ready!

✅ Step 4: Join Worker Nodes to Cluster

Add worker nodes to the Kubernetes cluster! 🔥

# Run these commands on WORKER nodes

# Copy the join command from master node (example)
# sudo kubeadm join 192.168.1.10:6443 --token abc123.xyz456 \
#     --discovery-token-ca-cert-hash sha256:hash_value

# After joining, verify from master node:
# kubectl get nodes

# On MASTER node - verify all nodes are ready
kubectl get nodes -o wide

# Check node labels
kubectl get nodes --show-labels

# Label worker nodes (optional but recommended)
kubectl label node worker-node-1 node-role.kubernetes.io/worker=worker
kubectl label node worker-node-2 node-role.kubernetes.io/worker=worker

# Create node management script
sudo tee /usr/local/bin/k8s-node-manager.sh << 'EOF'
#!/bin/bash
# Kubernetes Node Management Script

case "$1" in
    list)
        echo "=== Cluster Nodes ==="
        kubectl get nodes -o wide
        ;;

    describe)
        if [ -z "$2" ]; then
            echo "Usage: $0 describe <node-name>"
            exit 1
        fi
        kubectl describe node $2
        ;;

    drain)
        if [ -z "$2" ]; then
            echo "Usage: $0 drain <node-name>"
            exit 1
        fi
        echo "Draining node $2..."
        kubectl drain $2 --ignore-daemonsets --force --delete-emptydir-data
        ;;

    uncordon)
        if [ -z "$2" ]; then
            echo "Usage: $0 uncordon <node-name>"
            exit 1
        fi
        echo "Uncordoning node $2..."
        kubectl uncordon $2
        ;;

    remove)
        if [ -z "$2" ]; then
            echo "Usage: $0 remove <node-name>"
            exit 1
        fi
        echo "Removing node $2 from cluster..."
        kubectl drain $2 --ignore-daemonsets --force --delete-emptydir-data
        kubectl delete node $2
        echo "Node $2 removed. Run 'kubeadm reset' on the node itself."
        ;;

    taint)
        if [ -z "$2" ] || [ -z "$3" ]; then
            echo "Usage: $0 taint <node-name> <taint>"
            echo "Example: $0 taint worker-1 key=value:NoSchedule"
            exit 1
        fi
        kubectl taint node $2 $3
        ;;

    untaint)
        if [ -z "$2" ] || [ -z "$3" ]; then
            echo "Usage: $0 untaint <node-name> <taint>"
            echo "Example: $0 untaint worker-1 key=value:NoSchedule-"
            exit 1
        fi
        kubectl taint node $2 $3
        ;;

    *)
        echo "Usage: $0 {list|describe|drain|uncordon|remove|taint|untaint}"
        echo "Examples:"
        echo "  $0 list"
        echo "  $0 describe worker-node-1"
        echo "  $0 drain worker-node-1"
        echo "  $0 uncordon worker-node-1"
        ;;
esac
EOF

sudo chmod +x /usr/local/bin/k8s-node-manager.sh

# Test node management
sudo /usr/local/bin/k8s-node-manager.sh list

echo "Worker nodes joined successfully!"

Expected output:

=== Cluster Nodes ===
NAME           STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION
master-node    Ready    control-plane   15m   v1.28.2   192.168.1.10   <none>        AlmaLinux 9.2        5.14.0-284.30.1.el9_2.x86_64
worker-node-1  Ready    worker          5m    v1.28.2   192.168.1.11   <none>        AlmaLinux 9.2        5.14.0-284.30.1.el9_2.x86_64
worker-node-2  Ready    worker          5m    v1.28.2   192.168.1.12   <none>        AlmaLinux 9.2        5.14.0-284.30.1.el9_2.x86_64

Perfect! 🎉 All nodes are joined and the cluster is ready!
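💡 Optional: a quick scheduling smoke test proves the kubelet and CNI paths on the new workers. A minimal sketch (the deployment name hello is arbitrary, and it is deleted at the end):

# Schedule a few pods and confirm they land on the worker nodes
kubectl create deployment hello --image=nginx:alpine --replicas=3
kubectl rollout status deployment hello
kubectl get pods -l app=hello -o wide   # NODE column should list the workers

# Clean up
kubectl delete deployment hello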

🔧 Step 5: Deploy Essential Cluster Components

Install essential components for a production-ready cluster! 📈

# Run these commands on the MASTER node

# Install Metrics Server for resource monitoring
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Patch Metrics Server for development environments (if using self-signed certs)
kubectl patch deployment metrics-server -n kube-system --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'

# Wait for Metrics Server to be ready
kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=300s

# Install Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Create admin user for dashboard
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user

# Create token for dashboard access
kubectl -n kubernetes-dashboard create token admin-user > /tmp/dashboard-token.txt

# Install Ingress Controller (NGINX)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml

# Wait for ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=300s

# Create storage class for persistent volumes
kubectl apply -f - << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
EOF

# Create persistent volume (example)
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disk1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1
EOF

# Create cluster monitoring script
sudo tee /usr/local/bin/k8s-monitor.sh << 'EOF'
#!/bin/bash
# Kubernetes Cluster Monitoring Script

case "$1" in
    overview)
        echo "=== Cluster Overview ==="
        kubectl cluster-info
        echo ""
        kubectl get nodes -o wide
        echo ""
        kubectl top nodes 2>/dev/null
        echo ""
        kubectl get pods --all-namespaces | grep -E "(Running|Pending|Failed|CrashLoopBackOff)"
        ;;

    resources)
        echo "=== Resource Usage ==="
        kubectl top nodes
        echo ""
        kubectl top pods --all-namespaces --sort-by=cpu
        ;;

    storage)
        echo "=== Storage Information ==="
        kubectl get storageclass
        echo ""
        kubectl get pv
        echo ""
        kubectl get pvc --all-namespaces
        ;;

    network)
        echo "=== Network Information ==="
        kubectl get services --all-namespaces -o wide
        echo ""
        kubectl get ingress --all-namespaces
        echo ""
        kubectl get endpoints --all-namespaces
        ;;

    events)
        echo "=== Recent Events ==="
        kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -20
        ;;

    health)
        echo "=== Cluster Health Check ==="

        # Check node status (--no-headers keeps the header line from
        # matching; " Ready" does not match "NotReady")
        echo "Node Health:"
        kubectl get nodes --no-headers | grep -v " Ready" && echo "❌ Some nodes not ready" || echo "✅ All nodes ready"

        # Check system pods
        echo -e "\nSystem Pods Health:"
        FAILED_PODS=$(kubectl get pods -n kube-system | grep -v Running | grep -v Completed | tail -n +2)
        if [ -z "$FAILED_PODS" ]; then
            echo "โœ… All system pods healthy"
        else
            echo "โŒ Failed system pods:"
            echo "$FAILED_PODS"
        fi

        # Check metrics server
        echo -e "\nMetrics Server:"
        kubectl top nodes >/dev/null 2>&1 && echo "✅ Metrics server working" || echo "❌ Metrics server not responding"

        # Check DNS
        echo -e "\nDNS Health:"
        kubectl get pods -n kube-system -l k8s-app=kube-dns | grep Running >/dev/null && echo "✅ DNS pods running" || echo "❌ DNS pods not running"
        ;;

    dashboard)
        echo "=== Dashboard Access ==="
        echo "Dashboard URL: https://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/"
        echo "Run: kubectl proxy"
        echo "Token saved in: /tmp/dashboard-token.txt"
        cat /tmp/dashboard-token.txt
        ;;

    *)
        echo "Usage: $0 {overview|resources|storage|network|events|health|dashboard}"
        ;;
esac
EOF

sudo chmod +x /usr/local/bin/k8s-monitor.sh

# Test cluster monitoring
sudo /usr/local/bin/k8s-monitor.sh overview

echo "Essential cluster components installed successfully!"

Expected output:

deployment.apps/metrics-server patched
pod/metrics-server-6d94bc8694-xyz123 condition met

=== Cluster Overview ===
Kubernetes control plane is running at https://192.168.1.10:6443
CoreDNS is running at https://192.168.1.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

NAME           STATUS   ROLES           AGE   VERSION
master-node    Ready    control-plane   25m   v1.28.2
worker-node-1  Ready    worker          15m   v1.28.2
worker-node-2  Ready    worker          15m   v1.28.2

NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master-node    156m         7%     1456Mi          74%
worker-node-1  45m          2%     892Mi           45%
worker-node-2  43m          2%     876Mi           44%

Excellent! ✅ Essential cluster components are installed and monitoring is active!
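💡 With the Metrics Server in place, you can already experiment with horizontal pod autoscaling. A minimal sketch against a throwaway deployment (the name hpa-demo and the thresholds are illustrative):

# Create a small deployment with a CPU request (the HPA computes utilization against requests)
kubectl create deployment hpa-demo --image=nginx:alpine
kubectl set resources deployment hpa-demo --requests=cpu=100m

# Scale between 1 and 5 replicas, targeting 50% average CPU utilization
kubectl autoscale deployment hpa-demo --min=1 --max=5 --cpu-percent=50

# Watch the autoscaler pick up metrics (TARGETS shows current/target CPU)
kubectl get hpa hpa-demo

# Clean up
kubectl delete hpa hpa-demo
kubectl delete deployment hpa-demo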

🎮 Quick Examples

Here are practical examples of deploying applications on your Kubernetes cluster! 🌟

Example 1: Deploy Multi-Tier Web Application 🌐

# Create namespace for the application
kubectl create namespace webapp

# Deploy MySQL database
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: webapp
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootpassword
        - name: MYSQL_DATABASE
          value: webapp
        - name: MYSQL_USER
          value: webuser
        - name: MYSQL_PASSWORD
          value: webpassword
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: webapp
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
EOF

# Deploy web application
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:alpine
        ports:
        - containerPort: 80
        env:
        - name: DATABASE_HOST
          value: mysql
        - name: DATABASE_NAME
          value: webapp
        volumeMounts:
        - name: webapp-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: webapp-config
        configMap:
          name: webapp-config
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: webapp
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
  namespace: webapp
data:
  default.conf: |
    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        location /api/ {
            proxy_pass http://api-service:8080/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
EOF

# Create Ingress for external access
kubectl apply -f - << 'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: webapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port:
              number: 80
EOF

# Check deployment status
kubectl get all -n webapp

# Scale the application
kubectl scale deployment webapp --replicas=5 -n webapp

# Create application management script
sudo tee /usr/local/bin/k8s-app-manager.sh << 'EOF'
#!/bin/bash
# Kubernetes Application Management Script

NAMESPACE=${2:-default}

case "$1" in
    deploy)
        if [ -z "$2" ]; then
            echo "Usage: $0 deploy <app-name> [replicas]"
            exit 1
        fi
        REPLICAS=${3:-1}

        kubectl create deployment $2 --image=nginx:alpine --replicas=$REPLICAS
        kubectl expose deployment $2 --port=80 --type=ClusterIP
        echo "Application $2 deployed with $REPLICAS replicas"
        ;;

    scale)
        if [ -z "$2" ] || [ -z "$3" ]; then
            echo "Usage: $0 scale <deployment-name> <replicas> [namespace]"
            exit 1
        fi
        kubectl scale deployment $2 --replicas=$3 -n $NAMESPACE
        echo "Scaled $2 to $3 replicas in namespace $NAMESPACE"
        ;;

    restart)
        if [ -z "$2" ]; then
            echo "Usage: $0 restart <deployment-name> [namespace]"
            exit 1
        fi
        kubectl rollout restart deployment $2 -n $NAMESPACE
        echo "Restarted deployment $2 in namespace $NAMESPACE"
        ;;

    logs)
        if [ -z "$2" ]; then
            echo "Usage: $0 logs <pod-name> [namespace]"
            exit 1
        fi
        kubectl logs $2 -n $NAMESPACE --tail=50 -f
        ;;

    exec)
        if [ -z "$2" ]; then
            echo "Usage: $0 exec <pod-name> [namespace]"
            exit 1
        fi
        kubectl exec -it $2 -n $NAMESPACE -- /bin/sh
        ;;

    delete)
        if [ -z "$2" ]; then
            echo "Usage: $0 delete <resource-type> <resource-name> [namespace]"
            exit 1
        fi
        kubectl delete $2 $3 -n $NAMESPACE
        echo "Deleted $2 $3 from namespace $NAMESPACE"
        ;;

    *)
        echo "Usage: $0 {deploy|scale|restart|logs|exec|delete}"
        echo "Examples:"
        echo "  $0 deploy myapp 3"
        echo "  $0 scale myapp 5 production"
        echo "  $0 restart myapp production"
        echo "  $0 logs mypod-12345 production"
        echo "  $0 exec mypod-12345 production"
        ;;
esac
EOF

sudo chmod +x /usr/local/bin/k8s-app-manager.sh

echo "Multi-tier web application deployed successfully!"

Example 2: Deploy Microservices with an API Gateway 🕸️

# Create namespace for microservices
kubectl create namespace microservices

# Deploy User Service
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: microservices
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
        version: v1
    spec:
      containers:
      - name: user-service
        image: httpd:alpine
        ports:
        - containerPort: 80
        env:
        - name: SERVICE_NAME
          value: "user-service"
        - name: SERVICE_VERSION
          value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: microservices
spec:
  selector:
    app: user-service
  ports:
  - port: 80
    targetPort: 80
EOF

# Deploy Order Service
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
        version: v1
    spec:
      containers:
      - name: order-service
        image: httpd:alpine
        ports:
        - containerPort: 80
        env:
        - name: SERVICE_NAME
          value: "order-service"
        - name: USER_SERVICE_URL
          value: "http://user-service"
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: microservices
spec:
  selector:
    app: order-service
  ports:
  - port: 80
    targetPort: 80
EOF

# Deploy API Gateway
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: microservices
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: gateway-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: gateway-config
        configMap:
          name: gateway-config
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: microservices
spec:
  selector:
    app: api-gateway
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-config
  namespace: microservices
data:
  default.conf: |
    upstream user-service {
        server user-service:80;
    }

    upstream order-service {
        server order-service:80;
    }

    server {
        listen 80;

        location /users/ {
            proxy_pass http://user-service/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /orders/ {
            proxy_pass http://order-service/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /health {
            return 200 "OK";
            add_header Content-Type text/plain;
        }
    }
EOF

# Label the namespace so the namespaceSelector in the policy below can match it
kubectl label namespace microservices name=microservices --overwrite

# Create network policies for security
kubectl apply -f - << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: microservices-network-policy
  namespace: microservices
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: microservices
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: microservices
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
EOF

echo "Microservices architecture deployed successfully!"

Example 3: Deploy Monitoring and Logging Stack 📊

# Create monitoring namespace
kubectl create namespace monitoring

# Deploy Prometheus
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus
        - name: prometheus-storage
          mountPath: /prometheus
        args:
        - '--config.file=/etc/prometheus/prometheus.yml'
        - '--storage.tsdb.path=/prometheus'
        - '--web.console.libraries=/etc/prometheus/console_libraries'
        - '--web.console.templates=/etc/prometheus/consoles'
        - '--storage.tsdb.retention.time=200h'
        - '--web.enable-lifecycle'
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: prometheus-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  selector:
    app: prometheus
  ports:
  - port: 9090
    targetPort: 9090
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s

    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
EOF
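One caveat with the scrape config above: the kubernetes-apiservers and kubernetes-nodes jobs use in-cluster service discovery, which the pod's default service account is not permitted to perform under RBAC. A minimal sketch of the extra grant (the service account name prometheus is an assumption; it must also be attached to the deployment, as the patch at the end does):

kubectl apply -f - << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-read
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/metrics", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-read
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
EOF

# Attach the service account to the existing Prometheus deployment
kubectl patch deployment prometheus -n monitoring \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"prometheus"}}}}'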

# Deploy Grafana
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000
        env:
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: admin123
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
  type: LoadBalancer
EOF

# Create monitoring management script
sudo tee /usr/local/bin/k8s-monitoring.sh << 'EOF'
#!/bin/bash
# Kubernetes Monitoring Management Script

case "$1" in
    status)
        echo "=== Monitoring Stack Status ==="
        kubectl get pods -n monitoring
        echo ""
        kubectl get services -n monitoring
        ;;

    prometheus-url)
        echo "=== Prometheus Access ==="
        echo "Port forward: kubectl port-forward -n monitoring svc/prometheus 9090:9090"
        echo "URL: http://localhost:9090"
        ;;

    grafana-url)
        echo "=== Grafana Access ==="
        echo "Port forward: kubectl port-forward -n monitoring svc/grafana 3000:3000"
        echo "URL: http://localhost:3000"
        echo "Username: admin"
        echo "Password: admin123"
        ;;

    logs)
        if [ -z "$2" ]; then
            echo "Usage: $0 logs <prometheus|grafana>"
            exit 1
        fi
        kubectl logs -n monitoring deployment/$2 --tail=50
        ;;

    *)
        echo "Usage: $0 {status|prometheus-url|grafana-url|logs}"
        ;;
esac
EOF

sudo chmod +x /usr/local/bin/k8s-monitoring.sh

echo "Monitoring and logging stack deployed successfully!"

🚨 Fix Common Problems

Here are solutions to common Kubernetes cluster issues you might encounter! 🔧

Problem 1: Pods Stuck in Pending State ❌

# Check pod status and events
kubectl get pods --all-namespaces | grep Pending
kubectl describe pod <pod-name> -n <namespace>

# Check node resources
kubectl top nodes
kubectl describe nodes

# Check for resource constraints
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | grep -i "insufficient"

# Check for scheduling issues
kubectl get pods -o wide | grep Pending

# Check taints and tolerations
kubectl describe nodes | grep -A 5 Taints

# Common fixes:
echo "Common solutions for pending pods:"
echo "1. Check if nodes have sufficient CPU/Memory"
echo "2. Verify pod resource requests are reasonable"
echo "3. Check for node taints blocking scheduling"
echo "4. Ensure storage is available for PVC claims"

# Add more resources to nodes or reduce pod requests
# Example: Remove taints from master node
kubectl taint nodes master-node node-role.kubernetes.io/control-plane:NoSchedule-

# Check after fixes
kubectl get pods --all-namespaces

echo "โœ… Pending pod issues diagnosed!"

Problem 2: Service Discovery and DNS Issues ❌

# Check CoreDNS status
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Test DNS resolution from a pod
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default

# Check CoreDNS configuration
kubectl get configmap coredns -n kube-system -o yaml

# Test service connectivity
kubectl get services --all-namespaces

# Create debug pod for network testing
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: network-debug
spec:
  containers:
  - name: debug
    image: nicolaka/netshoot
    command: ["sleep", "3600"]
EOF

# Wait for pod to be ready
kubectl wait --for=condition=ready pod network-debug --timeout=60s

# Test DNS from debug pod
kubectl exec -it network-debug -- nslookup kubernetes.default.svc.cluster.local

# Test API server connectivity (the API serves HTTPS on 443; -k skips certificate verification)
kubectl exec -it network-debug -- curl -k https://kubernetes.default:443

# Check iptables rules (on nodes)
sudo iptables -t nat -L | grep -i kube

# Restart CoreDNS if needed
kubectl rollout restart deployment coredns -n kube-system

# Clean up debug pod
kubectl delete pod network-debug

echo "โœ… DNS and service discovery issues resolved!"

Problem 3: Network Plugin Issues ❌

# Check Flannel status
kubectl get pods -n kube-flannel -o wide

# Check Flannel logs
kubectl logs -n kube-flannel -l app=flannel --tail=50

# Check node network configuration
ip addr show flannel.1
ip route show | grep flannel

# Verify CNI configuration
ls -la /etc/cni/net.d/
cat /etc/cni/net.d/10-flannel.conflist

# Check for network policy conflicts
kubectl get networkpolicy --all-namespaces

# Test pod-to-pod connectivity
kubectl run test1 --image=busybox --restart=Never -- sleep 3600
kubectl run test2 --image=busybox --restart=Never -- sleep 3600

# Get pod IPs
POD1_IP=$(kubectl get pod test1 -o jsonpath='{.status.podIP}')
POD2_IP=$(kubectl get pod test2 -o jsonpath='{.status.podIP}')

# Test connectivity
kubectl exec test1 -- ping -c 3 $POD2_IP

# Reinstall Flannel if needed
kubectl delete -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Clean up test pods
kubectl delete pod test1 test2

echo "โœ… Network plugin issues resolved!"

Problem 4: Storage and Persistent Volume Issues ❌

# Check storage classes
kubectl get storageclass

# Check persistent volumes
kubectl get pv

# Check persistent volume claims
kubectl get pvc --all-namespaces

# Check for failed PVC bindings
kubectl get events --all-namespaces | grep -i "persistentvolume"

# Describe failed PVC
kubectl describe pvc <pvc-name> -n <namespace>

# Check node storage
df -h

# Create test storage
sudo mkdir -p /mnt/disk{1,2,3}
sudo chmod 777 /mnt/disk{1,2,3}

# Create additional persistent volumes
for i in {2..3}; do
kubectl apply -f - << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-$i
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disk$i
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
EOF
done

# Test PVC creation
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
EOF

# Check PVC status
kubectl get pvc test-pvc
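# NOTE: because local-storage uses volumeBindingMode: WaitForFirstConsumer,
# the claim reports Pending until a pod actually mounts it. A throwaway
# consumer pod (hypothetical name) triggers the binding:
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# The claim should now move from Pending to Bound
kubectl get pvc test-pvc

# Remove the consumer pod before deleting the claim
kubectl delete pod pvc-consumer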

# Clean up test PVC
kubectl delete pvc test-pvc

echo "โœ… Storage and PV issues resolved!"

📋 Simple Commands Summary

Here's a quick reference for essential Kubernetes cluster management commands! 📚

Cluster Management
  kubectl cluster-info – Show cluster information
  kubectl get nodes – List all nodes
  kubectl top nodes – Show node resource usage
  kubectl describe node <name> – Show detailed node info

Pod Management
  kubectl get pods --all-namespaces – List all pods
  kubectl describe pod <name> -n <namespace> – Show pod details
  kubectl logs <pod-name> -n <namespace> – View pod logs
  kubectl exec -it <pod-name> -- /bin/sh – Execute commands in pod

Deployment Management
  kubectl create deployment <name> --image=<image> – Create deployment
  kubectl scale deployment <name> --replicas=3 – Scale deployment
  kubectl rollout restart deployment <name> – Restart deployment
  kubectl rollout status deployment <name> – Check rollout status

Service Management
  kubectl get services --all-namespaces – List all services
  kubectl expose deployment <name> --port=80 – Expose deployment
  kubectl port-forward svc/<service> 8080:80 – Port-forward a service

Namespace Management
  kubectl get namespaces – List namespaces
  kubectl create namespace <name> – Create namespace
  kubectl config set-context --current --namespace=<name> – Set default namespace

Configuration
  kubectl apply -f <file.yaml> – Apply configuration
  kubectl delete -f <file.yaml> – Delete configuration
  kubectl get events --sort-by='.lastTimestamp' – View recent events

Troubleshooting
  /usr/local/bin/k8s-monitor.sh health – Run cluster health check
  /usr/local/bin/k8s-cluster-manager.sh status – Check cluster status

💡 Tips for Success

Here are expert tips to make your Kubernetes cluster even better! 🌟

Cluster Management Excellence 🎯

  • 📊 Resource monitoring: Regularly monitor node and pod resource usage
  • 🔄 Rolling updates: Use rolling updates for zero-downtime deployments
  • 💾 Backup strategy: Implement etcd backups and disaster recovery (see the sketch after this list)
  • 🎛️ Resource quotas: Set resource quotas to prevent resource exhaustion
  • 📈 Autoscaling: Implement horizontal and vertical pod autoscaling
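For the backup bullet, a minimal etcd snapshot sketch for a kubeadm-built cluster. The certificate paths are the kubeadm defaults; run it on the master, and treat it as a starting point rather than a full disaster-recovery plan (the etcdctl client may need to be installed separately, and package availability varies by repo):

# Snapshot etcd using the kubeadm-generated client certificates
sudo mkdir -p /var/backups
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot is readable
sudo ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-$(date +%F).db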

Security Best Practices 🛡️

  • ๐Ÿ” RBAC: Implement Role-Based Access Control for fine-grained permissions
  • ๐Ÿšซ Network policies: Use network policies to restrict pod communication
  • ๐Ÿ“ Pod security: Implement pod security standards and admission controllers
  • ๐Ÿ” Image scanning: Scan container images for vulnerabilities
  • ๐ŸŽ›๏ธ Secrets management: Use Kubernetes secrets for sensitive data

Performance Optimization ⚡

  • 🎯 Resource requests: Set appropriate CPU and memory requests/limits (see the sketch after this list)
  • 📊 Node sizing: Right-size nodes based on workload requirements
  • 🔄 Efficient scheduling: Use node affinity and anti-affinity rules
  • 💾 Storage optimization: Choose appropriate storage classes for workloads
  • 🌐 Network optimization: Optimize CNI plugin configuration

Operational Excellence 🏢

  • 📚 GitOps: Implement GitOps for configuration management
  • 🎛️ Monitoring: Deploy comprehensive monitoring with Prometheus/Grafana
  • 👥 Documentation: Maintain clear documentation of cluster architecture
  • 📊 Capacity planning: Monitor usage and plan for growth
  • 🔧 Automation: Automate cluster operations and maintenance tasks

๐Ÿ† What You Learned

Congratulations! You've successfully mastered AlmaLinux Kubernetes container orchestration! Here's everything you've accomplished: 🎉

✅ Cluster Setup: Built a complete multi-node Kubernetes cluster from scratch
✅ Container Runtime: Configured containerd runtime for optimal performance
✅ Network Configuration: Set up Flannel CNI for pod-to-pod communication
✅ Storage Management: Implemented persistent storage with local volumes
✅ Application Deployment: Deployed multi-tier applications and microservices
✅ Service Discovery: Configured services and ingress for external access
✅ Monitoring Stack: Deployed Prometheus and Grafana for cluster monitoring
✅ Security Implementation: Applied network policies and RBAC controls
✅ Troubleshooting Skills: Learned to diagnose and fix common cluster issues
✅ Operational Tools: Created management scripts for daily operations

🎯 Why This Matters

Building a production-ready Kubernetes cluster is fundamental to modern application deployment! 🌍 Here's the real-world impact of what you've accomplished:

For Modern Applications: Your Kubernetes cluster enables cloud-native application architectures with microservices, containerization, and automated scaling that handle millions of users. 🏗️

For DevOps Practices: Container orchestration provides the foundation for CI/CD pipelines, automated deployments, and infrastructure-as-code practices that accelerate development cycles. 🚀

For Operational Excellence: Kubernetes provides self-healing, automatic scaling, and efficient resource utilization that reduces operational overhead and improves reliability. 📈

For Business Agility: Your cluster enables rapid application deployment, easy scaling, and consistent environments from development to production, accelerating time-to-market. ⚡

Your AlmaLinux Kubernetes cluster is now providing the container orchestration platform that powers modern cloud-native applications with automatic scaling, self-healing, and enterprise-grade reliability! You're not just running containers – you're operating the infrastructure platform that enables modern application architecture! ⭐

Continue exploring advanced Kubernetes features like service mesh, serverless computing with Knative, and multi-cluster management. The container orchestration expertise you've developed is essential for modern infrastructure! 🙌