Installing Kubernetes on Alpine Linux: Lightweight Container Orchestration
I’ll show you how to install Kubernetes on Alpine Linux for a production-ready cluster. After running K8s on various distributions, I’ve found Alpine provides the perfect balance of security, performance, and resource efficiency for container orchestration.
Introduction
Kubernetes on Alpine Linux is a match made in heaven. Alpine’s minimal footprint means more resources for your workloads, while the security-focused design reduces your cluster’s attack surface. Plus, the consistent package management makes cluster maintenance much easier.
I’ve been running Kubernetes clusters on Alpine in production for years. The combination gives you enterprise-grade orchestration without the bloat of heavier distributions.
Why You Need This
- Minimize resource overhead on cluster nodes
- Reduce security vulnerabilities in production
- Achieve faster boot times and updates
- Create lightweight, efficient container hosts
Prerequisites
You’ll need these things first:
- Multiple Alpine Linux servers (3+ for HA)
- At least 2GB RAM per node (4GB+ recommended)
- Network connectivity between all nodes
- Root access to all machines
- Basic understanding of Kubernetes concepts
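If you want a quick sanity check before digging in, the snippet below can be run on each node. It's a minimal sketch: the PEERS addresses are placeholders for your own node IPs (only 192.168.1.10 appears later in this guide), and every check is read-only.
# Quick pre-flight check (adjust PEERS to your own node addresses)
PEERS="192.168.1.11 192.168.1.12"
# Confirm we're root
[ "$(id -u)" -eq 0 ] || echo "WARNING: run the rest of this guide as root"
# Confirm total memory (aim for at least 2048 MiB)
awk '/MemTotal/ {print "MemTotal:", int($2/1024), "MiB"}' /proc/meminfo
# Confirm the other nodes are reachable
for ip in $PEERS; do ping -c 1 -W 2 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"; done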
Step 1: Prepare Alpine Linux Nodes
Install Base System
Let’s start with a clean Alpine installation optimized for Kubernetes.
What we’re doing: Setting up Alpine with essential packages for K8s operation.
# Update system packages
apk update && apk upgrade
# Install required system packages
apk add \
curl \
ca-certificates \
iptables \
ip6tables \
cni-plugins \
containerd \
runc
# Enable required services
rc-update add containerd default
rc-update add cgroups default
# Start containerd
service containerd start
Code explanation:
- cni-plugins: Container Network Interface plugins for pod networking
- containerd: Container runtime required by Kubernetes
- cgroups: Control groups service for resource management
- runc: Low-level container runtime
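Before moving on, it's worth confirming containerd is up and the services are registered; these checks are read-only:
# Confirm containerd is running and responding
service containerd status
ctr version
# Confirm the services are registered in their runlevels
rc-update show | grep -E 'containerd|cgroups'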
Configure System Settings
What we’re doing: Optimizing Alpine for Kubernetes workloads.
# Load required kernel modules first (the bridge sysctls below need br_netfilter)
mkdir -p /etc/modules-load.d
echo 'br_netfilter' >> /etc/modules-load.d/k8s.conf
echo 'overlay' >> /etc/modules-load.d/k8s.conf
# Load modules now
modprobe br_netfilter
modprobe overlay
# Enable IP forwarding and bridged traffic filtering
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
# Apply sysctl settings
sysctl -p
# Disable swap (required for kubelet)
swapoff -a
sed -i '/swap/d' /etc/fstab
Configuration explanation:
- ip_forward = 1: Enables packet forwarding between network interfaces
- bridge-nf-call-iptables: Allows iptables to see bridged traffic
- br_netfilter: Required for network policy enforcement
- Swap must be disabled for kubelet to work properly
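A quick way to confirm the settings took effect before moving on:
# Confirm sysctl values are active
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Confirm the kernel modules are loaded
lsmod | grep -E 'br_netfilter|overlay'
# Confirm swap is off (this should print nothing but the header)
cat /proc/swaps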
Set Up Container Runtime
What we’re doing: Configuring containerd for Kubernetes integration.
# Create containerd configuration directory
mkdir -p /etc/containerd
# Generate default configuration
containerd config default > /etc/containerd/config.toml
# NOTE: Leave SystemdCgroup = false (the default). Alpine uses OpenRC rather
# than systemd, so the systemd cgroup driver is not available here; containerd
# and kubelet must both use the cgroupfs driver.
# Set sandbox image
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
# Restart containerd
service containerd restart
Tip: Upstream guides recommend the systemd cgroup driver, but that only applies to systemd-based distributions. On Alpine, stick with containerd's default cgroupfs driver and make sure kubelet uses the same driver (cgroupDriver: cgroupfs); see the kubeadm configuration sketch in Step 3.
Step 2: Install Kubernetes Components
Kubernetes Packages on Alpine
What we're doing: Deciding where the Kubernetes components come from.
Note: The upstream pkgs.k8s.io repositories only publish deb and rpm packages, which apk can't consume, and Alpine doesn't ship official Kubernetes packages. We'll install kubelet, kubeadm, and kubectl directly from the official release binaries, pinned to an exact version, for the most reliable setup.
Install Kubernetes Binaries
What we’re doing: Installing kubelet, kubeadm, and kubectl from official releases.
# Set Kubernetes version
K8S_VERSION="v1.28.4"
# Download Kubernetes binaries
cd /tmp
curl -L "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubelet" -o kubelet
curl -L "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubeadm" -o kubeadm
curl -L "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl" -o kubectl
# Make binaries executable
chmod +x kubelet kubeadm kubectl
# Install to system
mv kubelet kubeadm kubectl /usr/local/bin/
# Verify installation
kubelet --version
kubeadm version
kubectl version --client
Code explanation:
- kubelet: Node agent that manages containers
- kubeadm: Tool for bootstrapping clusters
- kubectl: Command-line interface for Kubernetes
- Using a specific version ensures compatible components across nodes
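Optionally, verify the downloads before installing them: dl.k8s.io publishes a .sha256 file next to each binary. A minimal sketch, run from /tmp before the mv step above:
# Verify each binary against its published SHA-256 checksum
for bin in kubelet kubeadm kubectl; do
  curl -L "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/${bin}.sha256" -o "${bin}.sha256"
  echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum -c
done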
Configure kubelet Service
What we’re doing: Creating an OpenRC service for kubelet.
# Create kubelet configuration directory
mkdir -p /var/lib/kubelet /etc/kubernetes
# Create kubelet service script
cat > /etc/init.d/kubelet << 'EOF'
#!/sbin/openrc-run
name="kubelet"
command="/usr/local/bin/kubelet"
command_args="--config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
command_background=true
pidfile="/run/kubelet.pid"
output_log="/var/log/kubelet.log"
error_log="/var/log/kubelet.log"
depend() {
    need containerd
    after net
}

start_pre() {
    checkpath -d -m 0755 -o root:root /var/lib/kubelet
    checkpath -d -m 0755 -o root:root /etc/kubernetes
}
EOF
# Make service executable
chmod +x /etc/init.d/kubelet
# Enable kubelet service (don't start yet)
rc-update add kubelet default
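We enable but don't start kubelet on purpose: it has nothing to run until kubeadm generates /var/lib/kubelet/config.yaml during init or join, and it would just crash-loop before that. You can confirm it's registered without starting it:
# Confirm the service is registered for the default runlevel (but not running)
rc-update show default | grep kubelet
rc-service kubelet status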
Step 3: Initialize Kubernetes Cluster
Bootstrap Control Plane
What we’re doing: Creating the first master node using kubeadm.
# Initialize cluster on master node
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.1.10 \
--control-plane-endpoint=192.168.1.10 \
--upload-certs
# Save the join command output for later use
# It will look like: kubeadm join 192.168.1.10:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
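Because kubeadm configures kubelet with the systemd cgroup driver by default, and this Alpine setup uses cgroupfs (see Step 1), it's safer to drive the init from a config file that pins cgroupDriver explicitly. Below is a sketch using the same CIDRs and the example address 192.168.1.10; kubeadm doesn't let you mix --config with most of the flags above, so pick one approach. Either way, the output should look like the one below.
# Alternative: initialize from a config file that pins the kubelet cgroup driver
cat > kubeadm-config.yaml << 'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: 192.168.1.10
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
EOF
kubeadm init --config kubeadm-config.yaml --upload-certs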
Expected Output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of control-plane nodes by running the following command on each as root:
kubeadm join 192.168.1.10:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key xxx
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.10:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
Configure kubectl Access
What we’re doing: Setting up kubectl for cluster administration.
# Set up kubectl for root user
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown root:root /root/.kube/config
# Create regular user for cluster management
adduser -D -s /bin/sh k8sadmin
addgroup k8sadmin wheel
# Set up kubectl for regular user
mkdir -p /home/k8sadmin/.kube
cp /etc/kubernetes/admin.conf /home/k8sadmin/.kube/config
chown k8sadmin:k8sadmin /home/k8sadmin/.kube/config
# Test cluster access
kubectl get nodes
kubectl get pods -A
Step 4: Install Network Plugin
Deploy Flannel CNI
What we’re doing: Installing Flannel for pod-to-pod networking.
# Download Flannel manifest
curl -L https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml -o kube-flannel.yml
# Apply Flannel to cluster
kubectl apply -f kube-flannel.yml
# Wait for Flannel pods to be ready
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=300s
# Check network plugin status
kubectl get pods -n kube-flannel
kubectl get nodes
Alternative: Calico CNI
# For production environments, consider Calico
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml -O
# Edit calico.yaml to match your pod CIDR if needed
# Then apply
kubectl apply -f calico.yaml
Network plugin explanation:
- Flannel: Simple overlay network, good for basic setups
- Calico: Advanced features like network policies and BGP routing
- Both work well with Alpine Linux
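As a quick illustration of what network policies give you (Calico enforces them; stock Flannel does not), here's a small example that blocks all ingress traffic to pods in the default namespace until more specific policies allow it:
# Example: deny all ingress traffic in the default namespace
# (requires a CNI that enforces NetworkPolicy, such as Calico)
cat > deny-all-ingress.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
kubectl apply -f deny-all-ingress.yaml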
Step 5: Join Worker Nodes
Prepare Worker Nodes
What we’re doing: Setting up additional nodes to join the cluster.
# On each worker node, repeat the Alpine preparation steps:
# - Install base packages
# - Configure system settings
# - Set up container runtime
# - Install Kubernetes binaries
# - Configure kubelet service
# Use the join command from kubeadm init output
kubeadm join 192.168.1.10:6443 \
--token abc123.def456ghi789jkl \
--discovery-token-ca-cert-hash sha256:1234567890abcdef...
# Verify the node joined successfully (run this from a control plane node)
kubectl get nodes
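Join tokens expire after 24 hours by default. If you've lost the original output or the token has aged out, print a fresh join command from the control plane:
# Generate a new worker join command (run on a control plane node)
kubeadm token create --print-join-command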
Add Additional Control Plane Nodes
What we’re doing: Creating high-availability control plane.
# On additional master nodes, use the control-plane join command
kubeadm join 192.168.1.10:6443 \
--token abc123.def456ghi789jkl \
--discovery-token-ca-cert-hash sha256:1234567890abcdef... \
--control-plane \
--certificate-key fedcba0987654321...
# Set up kubectl on new master
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
# Verify cluster health
kubectl get nodes
kubectl get pods -A
Practical Examples
Example 1: Deploy Sample Application
What we’re doing: Testing the cluster with a real workload.
# Create a deployment
kubectl create deployment nginx-test --image=nginx:alpine --replicas=3
# Expose the deployment
kubectl expose deployment nginx-test --port=80 --type=NodePort
# Check deployment status
kubectl get deployments
kubectl get pods -o wide
kubectl get services
# Test application access
curl http://192.168.1.10:$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
# Scale the deployment
kubectl scale deployment nginx-test --replicas=5
# Clean up test deployment
kubectl delete deployment nginx-test
kubectl delete service nginx-test
Example 2: Configure Resource Quotas
What we’re doing: Setting up resource management for namespaces.
# Create a namespace for applications
kubectl create namespace production
# Create resource quota
cat > resource-quota.yaml << 'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"
    services: "5"
EOF
# Apply resource quota
kubectl apply -f resource-quota.yaml
# Verify quota
kubectl describe quota production-quota -n production
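One behavior worth knowing: once a quota limits requests and limits, every pod created in that namespace has to declare them or the API server rejects it. Here's a minimal sketch of a deployment that fits inside the quota above; the name nginx-prod and the resource numbers are just examples:
# Example deployment with explicit requests/limits so the quota admits it
cat > nginx-prod.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-prod
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-prod
  template:
    metadata:
      labels:
        app: nginx-prod
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
EOF
kubectl apply -f nginx-prod.yaml
kubectl get pods -n production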
Troubleshooting
Node Not Ready Issues
Problem: Nodes show a “NotReady” status.
Solution: Check kubelet and the container runtime.
# Check node status
kubectl describe node node-name
# Check kubelet logs
tail -f /var/log/kubelet.log
# Check containerd status
service containerd status
ctr --namespace k8s.io containers list
# Restart services if needed
service containerd restart
service kubelet restart
Pod Networking Problems
Problem: Pods can’t communicate with each other or reach external networks.
Solution: Verify the CNI configuration and firewall rules.
# Check CNI pods
kubectl get pods -n kube-flannel
kubectl logs -n kube-flannel -l app=flannel
# Test pod networking
kubectl run test-pod --image=alpine:latest --rm -it -- sh
# Inside pod: ping google.com, nslookup kubernetes
# Check iptables rules
iptables -t nat -L
iptables -t filter -L FORWARD
Cluster Certificate Issues
Problem: Certificate errors or expired certificates.
Solution: Check and renew the cluster certificates.
# Check certificate expiration
kubeadm certs check-expiration
# Renew certificates if needed
kubeadm certs renew all
# Restart control plane components so they pick up the renewed certificates.
# They run as static pods, so deleting their mirror pods via kubectl does not
# restart the containers; move the manifests out and back instead.
mkdir -p /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
sleep 20
mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/
Best Practices
- High Availability Setup:
  - Use an odd number of control plane nodes (3 or 5)
  - Load balance API server access
  - Take regular etcd backups (see the snapshot sketch after this list)
  - Check etcd health: kubectl get pods -n kube-system -l component=etcd
- Security Hardening:
  - Enable RBAC (enabled by default)
  - Use network policies
  - Apply security updates regularly
  - Enforce Pod Security Standards
- Resource Management:
  - Set resource requests and limits
  - Use horizontal pod autoscalers
  - Monitor cluster resource usage
  - Plan capacity carefully
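For the etcd backups mentioned above, one common approach on a kubeadm cluster is to take a snapshot with etcdctl from inside the etcd static pod, using the certificates kubeadm keeps under /etc/kubernetes/pki/etcd. A hedged sketch; replace <node-name> with the control plane node's name from kubectl get nodes:
# Take an etcd snapshot (run on a control plane node)
kubectl -n kube-system exec etcd-<node-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/lib/etcd/snapshot-$(date +%F).db
# The snapshot lands on the host under /var/lib/etcd, which the etcd pod mounts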
Verification
To verify your Kubernetes cluster is working correctly:
# Check cluster info
kubectl cluster-info
kubectl get nodes -o wide
# Verify all system pods are running
kubectl get pods -A
# Test DNS resolution
kubectl run test-dns --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
# Check control plane health (componentstatuses is deprecated in recent releases but still works)
kubectl get componentstatuses
Wrapping Up
You just set up a production-ready Kubernetes cluster on Alpine Linux:
- Prepared Alpine nodes with optimal configuration
- Installed Kubernetes components from official binaries
- Initialized a high-availability control plane
- Configured pod networking with CNI plugins
- Added worker nodes and tested the cluster
This setup gives you a lightweight, secure foundation for container orchestration. Alpine’s minimal footprint means more resources for your applications, and the robust security model makes it perfect for production workloads. I’ve been running clusters like this for years and they’re incredibly reliable.