Kubernetes has become the de facto standard for container orchestration, and AlmaLinux 9 provides an excellent, enterprise-grade foundation for running production Kubernetes clusters. This comprehensive guide walks you through deploying Kubernetes on AlmaLinux 9, from single-node development environments to multi-node production clusters, covering networking, storage, security, and real-world application deployment.
🌟 Why Kubernetes on AlmaLinux 9?
The combination of Kubernetes and AlmaLinux 9 offers a powerful platform for modern containerized applications, bringing together enterprise stability with cutting-edge container orchestration capabilities.
Key Advantages
- Enterprise Stability - AlmaLinux 9’s RHEL compatibility ensures production-grade reliability 🏢
- Long-term Support - AlmaLinux 9 is supported until 2032, giving a stable base for long-lived Kubernetes clusters 📅
- Cost-Effective - No licensing fees while maintaining enterprise features 💰
- Container-Optimized - Modern kernel with advanced container features 🐳
- Security-First - SELinux integration with Kubernetes security policies 🔒
📋 Prerequisites and System Requirements
Hardware Requirements
# Minimum Requirements (Development)
- Control Plane: 2 CPU cores, 2 GB RAM, 20 GB storage
- Worker Nodes: 1 CPU core, 1 GB RAM, 20 GB storage
# Recommended Production Requirements
- Control Plane: 4 CPU cores, 8 GB RAM, 100 GB SSD
- Worker Nodes: 4 CPU cores, 16 GB RAM, 200 GB SSD
- Network: 1 Gbps minimum, 10 Gbps recommended
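A node can be checked against these figures in seconds before you start installing anything; the commands below are a quick sanity check.
# Quick sanity check of a node against the sizing above
nproc          # CPU cores
free -h        # total memory
df -h /        # available storage on the root filesystem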
Network Planning
# Example Network Layout
- Pod Network: 10.244.0.0/16
- Service Network: 10.96.0.0/12
- Node Network: 192.168.1.0/24
- External Load Balancer: 192.168.1.10
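These ranges are only an example; what matters is that the pod and service CIDRs do not overlap the node network or any routed range in your environment. A quick check on each node, assuming the layout above:
# Verify the planned CIDRs are not already in use on the nodes
ip -br addr show
ip route show | grep -E '10\.244\.|10\.96\.' || echo "pod and service CIDRs look free"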
🔧 Preparing AlmaLinux 9 for Kubernetes
System Preparation
# Update system
sudo dnf update -y
# Install essential packages
sudo dnf install -y \
curl \
wget \
vim \
git \
net-tools \
bridge-utils \
bash-completion \
yum-utils \
device-mapper-persistent-data \
lvm2
# Set hostname (on each node)
sudo hostnamectl set-hostname k8s-master-01 # On master
sudo hostnamectl set-hostname k8s-worker-01 # On worker nodes
Configure System Settings
# Disable swap (Kubernetes requirement)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Configure kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Configure sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Configure Firewall
# Master node firewall rules
sudo firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
sudo firewall-cmd --permanent --add-port=10259/tcp # kube-scheduler
sudo firewall-cmd --permanent --add-port=10257/tcp # kube-controller-manager
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services
# Worker node firewall rules
sudo firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services
# Calico networking (if using Calico)
sudo firewall-cmd --permanent --add-port=179/tcp # BGP
sudo firewall-cmd --permanent --add-port=4789/udp # VXLAN
sudo firewall-cmd --reload
Configure SELinux
# Set SELinux to permissive mode (for initial setup)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Install SELinux utilities for Kubernetes
sudo dnf install -y container-selinux
🐳 Installing Container Runtime
Option 1: containerd (Recommended)
# Install containerd
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io
# Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Configure SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Start and enable containerd
sudo systemctl enable --now containerd
Option 2: CRI-O
# Define the Kubernetes minor version the CRI-O packages should track
VERSION=v1.28
# Add the CRI-O repository (CRI-O 1.28+ packages are published on pkgs.k8s.io;
# the older devel:kubic OBS repositories are no longer maintained for new releases)
cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$VERSION/rpm/repodata/repomd.xml.key
EOF
# Install CRI-O
sudo dnf install -y cri-o
# Start and enable CRI-O
sudo systemctl enable --now crio
📦 Installing Kubernetes Components
Add Kubernetes Repository
# Add the Kubernetes repository (the community-hosted pkgs.k8s.io repo;
# the legacy packages.cloud.google.com repository has been deprecated and frozen)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install Kubernetes components
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet
sudo systemctl enable --now kubelet
Configure crictl (CRI command-line tool)
# Configure crictl
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
EOF
🚀 Initializing Kubernetes Cluster
Initialize Master Node
# Initialize cluster with kubeadm
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--control-plane-endpoint=k8s-master-01:6443 \
--upload-certs
# Configure kubectl for regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Enable kubectl autocompletion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
Install Network Plugin
Option 1: Calico (Recommended for Production)
# Install Calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# Create Calico custom resource
cat <<EOF | kubectl create -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
# Verify Calico installation
kubectl get pods -n calico-system
Option 2: Flannel (Simple Alternative)
# Install Flannel
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Verify Flannel installation
kubectl get pods -n kube-flannel
Join Worker Nodes
# On master node, generate join command
kubeadm token create --print-join-command
# On worker nodes, run the join command (example)
sudo kubeadm join k8s-master-01:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
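Once the join command completes on each worker, the nodes should show up on the control plane within a minute or so; the role label below is optional and purely cosmetic.
# On the master node, confirm the workers registered and become Ready
kubectl get nodes -o wide
# Optional: label workers so 'kubectl get nodes' shows a role
kubectl label node k8s-worker-01 node-role.kubernetes.io/worker=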
📊 Deploying Essential Add-ons
Metrics Server
# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Patch for self-signed certificates (development only)
kubectl patch deployment metrics-server -n kube-system --type='json' -p='[
{
"op": "add",
"path": "/spec/template/spec/containers/0/args/-",
"value": "--kubelet-insecure-tls"
}
]'
# Verify metrics
kubectl top nodes
kubectl top pods --all-namespaces
Kubernetes Dashboard
# Install Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Create admin user
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Get token
kubectl -n kubernetes-dashboard create token admin-user
# Access dashboard (run on local machine)
kubectl proxy
# Visit: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Ingress Controller (NGINX)
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml
# Verify installation
kubectl get pods -n ingress-nginx
kubectl get services -n ingress-nginx
💾 Storage Configuration
Local Storage Provider
# Create StorageClass for local storage
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
# Create local persistent volume
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/k8s-local-storage/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-worker-01
EOF
# Create directory on worker node
sudo mkdir -p /mnt/k8s-local-storage/pv1
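Because the StorageClass uses WaitForFirstConsumer, the volume is only bound when a pod that claims it is scheduled. A minimal sketch of a claim plus a test pod (the names are illustrative) looks like this:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: local-storage-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-pvc-1
EOF
# The PVC binds (and the pod starts) only on k8s-worker-01, where the PV lives
kubectl get pvc local-pvc-1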
NFS Storage Provider
# Install NFS server (on storage node)
sudo dnf install -y nfs-utils
sudo systemctl enable --now nfs-server
# Configure NFS exports
echo "/srv/nfs/k8s *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo mkdir -p /srv/nfs/k8s
sudo exportfs -rav
# Install NFS client on all nodes
sudo dnf install -y nfs-utils
# Deploy NFS provisioner
kubectl create namespace nfs-provisioner
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
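The RBAC objects above only grant permissions; the provisioner itself still has to be deployed and exposed through a StorageClass. A minimal sketch using the community nfs-subdir-external-provisioner image follows; the NFS server address 192.168.1.50 is a placeholder for your storage node, and the upstream project's manifests additionally include a namespaced leader-election Role that is omitted here for brevity.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: "192.168.1.50"          # placeholder: your NFS server
        - name: NFS_PATH
          value: /srv/nfs/k8s
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
      volumes:
      - name: nfs-client-root
        nfs:
          server: "192.168.1.50"         # placeholder: your NFS server
          path: /srv/nfs/k8s
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
EOF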
🔒 Security Hardening
Network Policies
# Default deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow traffic from specific namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080
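The namespaces referenced above (production, backend, frontend) must exist before these policies can be applied. Saving the manifests to a file such as network-policies.yaml (the file name is only an example) and applying them looks like this:
# Apply and inspect the policies
kubectl apply -f network-policies.yaml
kubectl get networkpolicy -A
kubectl describe networkpolicy allow-from-frontend -n backend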
Pod Security Standards
# Create the target namespace if it does not exist yet, then enable Pod Security Standards
kubectl create namespace production --dry-run=client -o yaml | kubectl apply -f -
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted
kubectl label namespace production pod-security.kubernetes.io/audit=restricted
kubectl label namespace production pod-security.kubernetes.io/warn=restricted
# Create restricted pod example
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-compliant-pod
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var-cache
      mountPath: /var/cache/nginx
    - name: var-run
      mountPath: /var/run
  volumes:
  - name: tmp
    emptyDir: {}
  - name: var-cache
    emptyDir: {}
  - name: var-run
    emptyDir: {}
EOF
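To confirm the restricted profile is actually enforced, try to create a pod that violates it; the API server should reject the request instead of scheduling the pod. A quick negative test (the pod name is illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
  namespace: production
spec:
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      privileged: true
EOF
# Expected result: the request is denied with a PodSecurity "restricted" violation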
RBAC Configuration
# Create namespace-specific admin
apiVersion: v1
kind: ServiceAccount
metadata:
  name: namespace-admin
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-admin
  namespace: production
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: namespace-admin
subjects:
- kind: ServiceAccount
  name: namespace-admin
  namespace: production
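kubectl auth can-i is a quick way to confirm the binding behaves as intended: full rights inside production, nothing cluster-wide.
# Should return "yes" (namespaced resources in production are allowed)
kubectl auth can-i create deployments -n production \
  --as=system:serviceaccount:production:namespace-admin
# Should return "no" (cluster-scoped resources are outside the Role)
kubectl auth can-i list nodes \
  --as=system:serviceaccount:production:namespace-admin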
🚀 Deploying Applications
Sample Microservices Application
# Frontend Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
---
# Frontend Service
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
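Assuming the manifests above are saved to a file such as frontend-stack.yaml (the file name is only an example), deploying and verifying the stack looks like this:
kubectl apply -f frontend-stack.yaml
# Watch the rollout and confirm the Service and Ingress are in place
kubectl rollout status deployment/frontend -n production
kubectl get pods,svc,ingress -n production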
Deploying with Helm
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Add Helm repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Deploy PostgreSQL using Helm
helm install postgresql bitnami/postgresql \
--namespace production \
--create-namespace \
--set auth.postgresPassword=secretpassword \
--set primary.persistence.size=10Gi
# Deploy Redis
helm install redis bitnami/redis \
--namespace production \
--set auth.password=secretpassword \
--set master.persistence.size=8Gi
📊 Monitoring and Observability
Prometheus and Grafana Stack
# Add Prometheus community Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install kube-prometheus-stack
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--set prometheus.prometheusSpec.retention=30d \
--set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi
# Access Grafana (get password)
kubectl get secret --namespace monitoring monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
# Port forward to access Grafana
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
Application Monitoring
# ServiceMonitor for application
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
  namespace: production
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
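For this ServiceMonitor to produce data, a Service with the matching label and a port literally named metrics has to exist; with the kube-prometheus-stack defaults from the previous section, the ServiceMonitor itself is also typically only discovered if it carries the Helm release label (release: monitoring). A matching Service might look like the sketch below (the app name and port number are illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
  - name: metrics        # must match the endpoint port name in the ServiceMonitor
    port: 9090
    targetPort: 9090
EOF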
🔧 Cluster Maintenance
Backup and Restore
# Backup etcd
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /backup/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db
# Backup Kubernetes resources
kubectl get all --all-namespaces -o yaml > /backup/k8s-resources-$(date +%Y%m%d-%H%M%S).yaml
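Restoring is the other half of the story. A rough sketch for a kubeadm cluster with a single control plane node, assuming a static-pod etcd and the example snapshot file name below; test the procedure on a non-production cluster first.
# Stop the static control-plane pods by moving their manifests aside
sudo mkdir -p /etc/kubernetes/manifests-stopped
sudo mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-stopped/
# Restore the snapshot into a fresh data directory (file name is an example)
sudo ETCDCTL_API=3 etcdctl snapshot restore \
  /backup/etcd-snapshot-20250101-120000.db \
  --data-dir=/var/lib/etcd-restore
# Swap the restored data into place and bring the control plane back
sudo mv /var/lib/etcd /var/lib/etcd.old
sudo mv /var/lib/etcd-restore /var/lib/etcd
sudo mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/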
Cluster Upgrades
# Check available versions
sudo dnf list --showduplicates kubeadm
# Upgrade control plane
sudo dnf install -y 'kubeadm-1.28.2-*' --disableexcludes=kubernetes
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.28.2
# Upgrade kubelet and kubectl
sudo dnf install -y 'kubelet-1.28.2-*' 'kubectl-1.28.2-*' --disableexcludes=kubernetes
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Upgrade worker nodes (on each worker)
sudo dnf install -y 'kubeadm-1.28.2-*' --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo dnf install -y 'kubelet-1.28.2-*' 'kubectl-1.28.2-*' --disableexcludes=kubernetes
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Node Maintenance
# Drain node for maintenance
kubectl drain k8s-worker-01 --ignore-daemonsets --delete-emptydir-data
# Perform maintenance...
# Uncordon node
kubectl uncordon k8s-worker-01
🚨 Troubleshooting
Common Issues and Solutions
# Check component status
kubectl get componentstatuses
kubectl get nodes
kubectl get pods --all-namespaces
# Check kubelet logs
sudo journalctl -u kubelet -f
# Check container runtime
sudo crictl ps
sudo crictl logs <container-id>
# DNS troubleshooting
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
# Network connectivity test
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- bash
# Check cluster events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
Performance Tuning
# Adjust kube-apiserver flags
--max-requests-inflight=400
--max-mutating-requests-inflight=200
# Optimize etcd
--quota-backend-bytes=8589934592 # 8GB
# Configure kubelet
--max-pods=110
--kube-api-qps=50
--kube-api-burst=100
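On a kubeadm cluster these settings do not live in one place: API server and etcd flags go into the static pod manifests on the control plane (the kubelet restarts those pods automatically when the files change), while the kubelet values are best expressed in its configuration file rather than as command-line flags. A sketch of where to make the changes:
# API server and etcd flags: edit the static pod manifests on the control plane
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # add flags under 'command:'
sudo vi /etc/kubernetes/manifests/etcd.yaml
# Kubelet settings: use the kubelet config file, then restart the service
sudo vi /var/lib/kubelet/config.yaml    # e.g. maxPods: 110, kubeAPIQPS: 50, kubeAPIBurst: 100
sudo systemctl restart kubelet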
🎯 Best Practices
Production Readiness Checklist
- High Availability
  - ✅ Multiple control plane nodes (3 or 5)
  - ✅ Load balancer for API server
  - ✅ etcd cluster with odd number of members
  - ✅ Multiple worker nodes across availability zones
- Security
  - ✅ Network policies enabled
  - ✅ Pod Security Standards enforced
  - ✅ RBAC properly configured
  - ✅ Secrets encryption at rest
  - ✅ Regular security updates
- Monitoring
  - ✅ Metrics collection (Prometheus)
  - ✅ Log aggregation (EFK stack)
  - ✅ Alerting configured
  - ✅ Dashboard access secured
- Backup and Recovery
  - ✅ Regular etcd backups
  - ✅ Disaster recovery plan tested
  - ✅ Application data backed up
  - ✅ Configuration in version control
- Resource Management (see the example sketch after this checklist)
  - ✅ Resource quotas per namespace
  - ✅ LimitRanges configured
  - ✅ Pod Disruption Budgets
  - ✅ Horizontal Pod Autoscaling
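As referenced in the Resource Management item, the sketch below shows one way to wire up a quota, default limits, and autoscaling for the production namespace; the numbers are illustrative and should be sized to your workloads.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "100"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: production
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF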
📚 Advanced Topics
Custom Resource Definitions (CRDs)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: applications.apps.example.com
spec:
  group: apps.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
              image:
                type: string
  scope: Namespaced
  names:
    plural: applications
    singular: application
    kind: Application
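Once the CRD is registered, instances of the new kind can be created and listed like any built-in resource; a minimal example object:
cat <<EOF | kubectl apply -f -
apiVersion: apps.example.com/v1
kind: Application
metadata:
  name: sample-app
  namespace: production
spec:
  replicas: 3
  image: nginx:alpine
EOF
kubectl get applications -n production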
Operators
# Install Operator SDK
curl -LO https://github.com/operator-framework/operator-sdk/releases/latest/download/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64
sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
# Create new operator
operator-sdk init --domain example.com --repo github.com/example/app-operator
operator-sdk create api --group apps --version v1 --kind Application --resource --controller
🌐 Next Steps and Resources
Learning Path
- Container Fundamentals - Deep dive into container technology
- Kubernetes Architecture - Understanding internal components
- Cloud Native Patterns - Microservices and 12-factor apps
- Service Mesh - Istio or Linkerd implementation
- GitOps - ArgoCD or Flux for declarative deployments
Running Kubernetes on AlmaLinux 9 provides a robust, enterprise-ready platform for container orchestration. Start with a single-node cluster for learning, then scale to multi-node production deployments as your needs grow. Remember that Kubernetes is a journey, not a destination – continuous learning and adaptation are key to success in the cloud-native ecosystem. Happy orchestrating! 🚢