Edge computing has become critical for processing data closer to its source, reducing latency, and enabling real-time decision-making. K3s, a lightweight Kubernetes distribution, is perfectly suited for edge deployments where resources are constrained. This comprehensive guide will walk you through building a production-ready edge computing platform using K3s on Rocky Linux.
Understanding Edge Computing Architecture
Edge computing brings computation and data storage closer to data sources. Key benefits include:
- Low Latency: Millisecond-or-better response times for critical applications, since traffic never leaves the site
- Bandwidth Optimization: Process data locally, send only relevant information to cloud
- Offline Capability: Continue operating when disconnected from central infrastructure
- Data Sovereignty: Keep sensitive data within geographical boundaries
- Cost Efficiency: Reduce cloud egress costs and bandwidth requirements
Why K3s for Edge Computing
K3s is optimized for edge deployments:
- Lightweight: Binary under 100MB; runs in as little as 512MB of RAM
- Single Binary: Easy deployment and updates
- ARM Support: Runs on Raspberry Pi and other ARM devices
- Built-in Components: Includes storage, networking, and ingress
- Production Ready: CNCF certified Kubernetes distribution
Prerequisites and Planning
Hardware Requirements
For different edge scenarios:
Edge Gateway (Main Node):
- Rocky Linux 9
- 4 CPU cores
- 8GB RAM
- 64GB storage
- 2x Network interfaces
Edge Worker Nodes:
- 2 CPU cores
- 4GB RAM
- 32GB storage
- Network connectivity
IoT/Minimal Nodes:
- ARM64/AMD64 processor
- 1GB RAM minimum
- 16GB storage
Network Architecture
Plan your edge network:
- Management Network: 192.168.1.0/24 (Central management)
- Edge Network: 10.0.0.0/16 (Edge devices)
- IoT Network: 172.16.0.0/12 (IoT devices)
- 5G/LTE Backup: Failover connectivity
Preparing Rocky Linux for Edge
Initial System Setup
On all edge nodes:
# Set hostname
sudo hostnamectl set-hostname edge-gateway-01
# Update system
sudo dnf update -y
# Install essential packages
sudo dnf install -y curl wget vim git htop iotop sysstat
sudo dnf install -y NetworkManager-tui firewalld
# Configure firewall for K3s
sudo firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server
sudo firewall-cmd --permanent --add-port=10250/tcp # Kubelet metrics
sudo firewall-cmd --permanent --add-port=8472/udp # Flannel VXLAN
sudo firewall-cmd --permanent --add-port=51820/udp # Flannel WireGuard (if enabled)
sudo firewall-cmd --permanent --add-port=51821/udp # Flannel WireGuard IPv6 (if enabled)
sudo firewall-cmd --permanent --add-port=2379-2380/tcp # Embedded etcd (HA servers only)
sudo firewall-cmd --reload
# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Configure kernel parameters
cat << EOF | sudo tee /etc/sysctl.d/k3s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
vm.swappiness = 0
vm.panic_on_oom = 0
vm.overcommit_memory = 1
kernel.panic = 10
kernel.panic_on_oops = 1
EOF
sudo sysctl --system
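The bridge-related sysctls above only take effect once the br_netfilter kernel module is loaded, which a minimal Rocky Linux install may not do by default. A small sketch that loads it (plus the overlay module used by containerd) persistently and re-applies the settings:
# Load the kernel modules required by the bridge sysctls and containerd
cat << EOF | sudo tee /etc/modules-load.d/k3s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Re-apply the sysctls now that the modules are present
sudo sysctl --system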
Network Configuration for Edge
# Configure multiple network interfaces
# Management interface
sudo nmcli connection modify ens160 connection.id management
sudo nmcli connection modify management ipv4.addresses 192.168.1.10/24
sudo nmcli connection modify management ipv4.gateway 192.168.1.1
sudo nmcli connection modify management ipv4.dns 8.8.8.8
sudo nmcli connection modify management ipv4.method manual
sudo nmcli connection up management
# Edge network interface
sudo nmcli connection add type ethernet con-name edge ifname ens192
sudo nmcli connection modify edge ipv4.addresses 10.0.0.1/16
sudo nmcli connection modify edge ipv4.method manual
sudo nmcli connection up edge
# Enable IP masquerading for edge network
sudo firewall-cmd --permanent --zone=public --add-masquerade
sudo firewall-cmd --reload
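firewalld applies masquerading per zone, so the interfaces need to land in the zones you expect for the rule above to NAT edge traffic out the uplink. A minimal sketch, assuming the management connection stays in the public zone (where masquerading was enabled) and the edge-facing connection moves to internal; adjust zone names to your policy:
# Bind each connection to a firewalld zone so masquerading applies where intended
sudo nmcli connection modify management connection.zone public
sudo nmcli connection modify edge connection.zone internal
# Allow the K3s API from the edge-facing zone as well
sudo firewall-cmd --permanent --zone=internal --add-port=6443/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --get-active-zones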
Storage Optimization for Edge
# Configure storage for edge workloads
# Create separate partition for K3s data
sudo fdisk /dev/sdb # Create new partition
# Create XFS filesystem optimized for containers
sudo mkfs.xfs -f -L k3s-data /dev/sdb1
sudo mkdir -p /var/lib/rancher/k3s
# Mount with optimized options
echo "LABEL=k3s-data /var/lib/rancher/k3s xfs defaults,noatime,nodiratime 0 0" | sudo tee -a /etc/fstab
sudo mount -a
# Configure log rotation for edge
cat << EOF | sudo tee /etc/logrotate.d/k3s
/var/log/k3s.log {
daily
rotate 7
compress
missingok
notifempty
create 0640 root root
}
EOF
Installing K3s
Installing K3s Server (Control Plane)
# Install K3s server with custom configuration
curl -sfL https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--disable traefik \
--disable servicelb \
--cluster-cidr 10.42.0.0/16 \
--service-cidr 10.43.0.0/16 \
--cluster-dns 10.43.0.10 \
--kubelet-arg="max-pods=110" \
--kubelet-arg="eviction-hard=memory.available<500Mi" \
--kubelet-arg="eviction-soft=memory.available<1Gi" \
--kubelet-arg="eviction-soft-grace-period=memory.available=2m" \
--kube-controller-manager-arg="node-monitor-period=20s" \
--kube-controller-manager-arg="node-monitor-grace-period=60s" \
--data-dir /var/lib/rancher/k3s
# Verify installation
sudo systemctl status k3s
sudo k3s kubectl get nodes
# Get node token for agents
sudo cat /var/lib/rancher/k3s/server/node-token
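The same flags can also live in /etc/rancher/k3s/config.yaml, which survives reinstalls and upgrades better than a long install command. A sketch of a partial equivalent (values mirror a subset of the install flags above; adjust to taste):
cat << EOF | sudo tee /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "644"
disable:
  - traefik
  - servicelb
cluster-cidr: 10.42.0.0/16
service-cidr: 10.43.0.0/16
cluster-dns: 10.43.0.10
kubelet-arg:
  - "max-pods=110"
  - "eviction-hard=memory.available<500Mi"
data-dir: /var/lib/rancher/k3s
EOF
# Restart K3s to pick up the file
sudo systemctl restart k3s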
Installing K3s Agents (Worker Nodes)
On each worker node:
# Set variables
K3S_URL="https://edge-gateway-01:6443"
K3S_TOKEN="<node token from /var/lib/rancher/k3s/server/node-token on the server>"
# Install K3s agent
curl -sfL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s - agent \
--kubelet-arg="max-pods=110" \
--kubelet-arg="eviction-hard=memory.available<300Mi" \
--node-label="node-role.kubernetes.io/edge=true" \
--node-label="edge.location/zone=warehouse-01"
# Verify agent
sudo systemctl status k3s-agent
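The manifests later in this guide schedule onto edge nodes via the node-role.kubernetes.io/edge=true label, and the iot-collector DaemonSet additionally tolerates a matching NoSchedule taint. If the agent refuses those labels at registration, a sketch of applying them from the gateway instead, assuming the node registered as edge-worker-01:
# Label the edge node from the control plane (node name is an example)
kubectl label node edge-worker-01 node-role.kubernetes.io/edge=true --overwrite
kubectl label node edge-worker-01 edge.location/zone=warehouse-01 --overwrite
kubectl get node edge-worker-01 --show-labels
# Optional: taint the node so only edge-tolerant workloads land on it; note that
# only pods with a matching toleration (e.g. the iot-collector DaemonSet below)
# will then schedule there
# kubectl taint node edge-worker-01 node-role.kubernetes.io/edge=true:NoSchedule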
Configuring kubectl Access
# Configure kubectl for local user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config
# Install kubectl bash completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
# Test access
kubectl get nodes
kubectl get pods -A
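To manage the cluster from a workstation rather than from the gateway itself, copy the kubeconfig and point it at the gateway's address. A sketch, assuming SSH access and the management IP configured earlier (192.168.1.10):
# On your workstation
scp root@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/edge-config
sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/edge-config
export KUBECONFIG=~/.kube/edge-config
kubectl get nodes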
Edge-Specific Configurations
Implementing Network Policies for Edge
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: edge-isolation
namespace: edge-apps
spec:
podSelector:
matchLabels:
tier: edge
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: edge-apps
- podSelector:
matchLabels:
tier: edge
ports:
- protocol: TCP
port: 8080
egress:
- to:
- namespaceSelector:
matchLabels:
name: edge-apps
ports:
- protocol: TCP
port: 443
- to:
- namespaceSelector:
matchLabels:
name: kube-system
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
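The selectors above assume two labels that do not exist by default: the edge-apps namespace must carry name=edge-apps (the application manifests later in this guide create it that way), and kube-system only carries kubernetes.io/metadata.name out of the box, not name. A sketch that sets both up before applying the policy:
kubectl create namespace edge-apps --dry-run=client -o yaml | kubectl apply -f -
kubectl label namespace edge-apps name=edge-apps --overwrite
kubectl label namespace kube-system name=kube-system --overwrite
kubectl apply -f network-policy.yaml
kubectl describe networkpolicy edge-isolation -n edge-apps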
Resource Constraints for Edge
# resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: edge-quota
namespace: edge-apps
spec:
hard:
requests.cpu: "4"
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
persistentvolumeclaims: "10"
pods: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
name: edge-limit-range
namespace: edge-apps
spec:
limits:
- max:
cpu: "2"
memory: "4Gi"
min:
cpu: "100m"
memory: "128Mi"
default:
cpu: "500m"
memory: "512Mi"
defaultRequest:
cpu: "200m"
memory: "256Mi"
type: Container
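Once applied, the quota and limit range can be inspected at any time to see current consumption against the caps — a quick check:
kubectl apply -f resource-quota.yaml
kubectl describe resourcequota edge-quota -n edge-apps
kubectl describe limitrange edge-limit-range -n edge-apps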
Deploying Edge Applications
IoT Data Collector
# iot-collector.yaml
apiVersion: v1
kind: Namespace
metadata:
name: edge-apps
labels:
name: edge-apps
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: iot-collector
namespace: edge-apps
spec:
selector:
matchLabels:
app: iot-collector
template:
metadata:
labels:
app: iot-collector
tier: edge
spec:
nodeSelector:
node-role.kubernetes.io/edge: "true"
tolerations:
- key: node-role.kubernetes.io/edge
operator: Equal
value: "true"
effect: NoSchedule
containers:
- name: collector
image: edge/iot-collector:v1.0
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
env:
- name: MQTT_BROKER
value: "tcp://localhost:1883"
- name: COLLECTION_INTERVAL
value: "10s"
volumeMounts:
- name: data
mountPath: /data
- name: config
mountPath: /etc/collector
- name: mqtt
image: eclipse-mosquitto:2.0
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "200m"
ports:
- containerPort: 1883
hostPort: 1883
volumes:
- name: data
hostPath:
path: /var/edge/data
type: DirectoryOrCreate
- name: config
configMap:
name: collector-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: collector-config
namespace: edge-apps
data:
collector.yaml: |
sensors:
- id: temp-sensor-01
type: temperature
interval: 5s
topic: sensors/temperature/warehouse
- id: humidity-sensor-01
type: humidity
interval: 5s
topic: sensors/humidity/warehouse
processing:
batch_size: 100
compression: true
encryption: true
upstream:
endpoint: https://cloud.example.com/api/v1/data
retry_count: 3
timeout: 30s
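Once applied, the DaemonSet should land one collector pod per edge-labeled node, with the Mosquitto sidecar listening on host port 1883. A quick way to verify the broker, using the clients bundled in the eclipse-mosquitto image and an example node IP from the edge network:
kubectl apply -f iot-collector.yaml
kubectl -n edge-apps rollout status daemonset/iot-collector
# Publish a test message to the broker exposed on the node (10.0.0.1 is an example)
kubectl run mqtt-test -n edge-apps --rm -it --restart=Never \
  --image=eclipse-mosquitto:2.0 -- \
  mosquitto_pub -h 10.0.0.1 -p 1883 -t sensors/test -m "hello-edge"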
Edge ML Inference
# edge-inference.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ml-inference
namespace: edge-apps
spec:
replicas: 1
selector:
matchLabels:
app: ml-inference
template:
metadata:
labels:
app: ml-inference
tier: edge
spec:
nodeSelector:
kubernetes.io/arch: amd64
node-role.kubernetes.io/edge: "true"
containers:
- name: inference
image: edge/ml-inference:v1.0
resources:
requests:
memory: "1Gi"
cpu: "1"
limits:
memory: "2Gi"
cpu: "2"
env:
- name: MODEL_PATH
value: "/models/edge-model.onnx"
- name: INFERENCE_THREADS
value: "4"
- name: BATCH_SIZE
value: "1"
volumeMounts:
- name: models
mountPath: /models
- name: dshm
mountPath: /dev/shm
ports:
- containerPort: 8080
name: http
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: models
persistentVolumeClaim:
claimName: model-storage
- name: dshm
emptyDir:
medium: Memory
sizeLimit: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: ml-inference
namespace: edge-apps
spec:
selector:
app: ml-inference
ports:
- port: 80
targetPort: 8080
type: NodePort
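With the Deployment and Service applied, the model endpoint can be smoke-tested through a port-forward before wiring it into the rest of the pipeline — a sketch using the /health path defined in the probes:
kubectl apply -f edge-inference.yaml
kubectl -n edge-apps rollout status deployment/ml-inference
# Forward the service locally and hit the health endpoint
kubectl -n edge-apps port-forward svc/ml-inference 8080:80 &
curl -s http://localhost:8080/health
# Or find the NodePort assigned by the service
kubectl -n edge-apps get svc ml-inference -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'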
Video Analytics Pipeline
# video-analytics.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: video-config
namespace: edge-apps
data:
pipeline.yaml: |
sources:
- name: camera-01
type: rtsp
url: rtsp://192.168.1.100/stream
fps: 15
- name: camera-02
type: rtsp
url: rtsp://192.168.1.101/stream
fps: 15
analytics:
- name: object-detection
model: yolov5s
confidence: 0.5
classes: [person, vehicle]
- name: motion-detection
sensitivity: 0.7
area_threshold: 100
output:
- type: mqtt
topic: analytics/detections
qos: 1
- type: http
endpoint: http://ml-inference/process
batch: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: video-analytics
namespace: edge-apps
spec:
serviceName: video-analytics
replicas: 1
selector:
matchLabels:
app: video-analytics
template:
metadata:
labels:
app: video-analytics
tier: edge
spec:
nodeSelector:
node-role.kubernetes.io/edge: "true"
edge.capability/gpu: "true"
containers:
- name: analytics
image: edge/video-analytics:v1.0
resources:
requests:
memory: "2Gi"
cpu: "2"
nvidia.com/gpu: 1 # If GPU available
limits:
memory: "4Gi"
cpu: "4"
nvidia.com/gpu: 1
volumeMounts:
- name: config
mountPath: /etc/analytics
- name: cache
mountPath: /cache
env:
- name: CUDA_VISIBLE_DEVICES
value: "0"
- name: TF_FORCE_GPU_ALLOW_GROWTH
value: "true"
volumes:
- name: config
configMap:
name: video-config
- name: cache
emptyDir:
sizeLimit: 10Gi
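The StatefulSet selects on an edge.capability/gpu=true node label, which has to be applied by hand, and the nvidia.com/gpu requests only schedule once the node actually advertises that extended resource (NVIDIA driver, container runtime, and device plugin installed, which is outside the scope of this manifest). A quick check before applying, with <gpu-node-name> as a placeholder:
# Label the GPU-equipped node and confirm it advertises the extended resource
kubectl label node <gpu-node-name> edge.capability/gpu=true --overwrite
kubectl describe node <gpu-node-name> | grep -A 8 "Allocatable:"
# If no nvidia.com/gpu line appears, install the NVIDIA device plugin first,
# or remove the nvidia.com/gpu requests/limits to run the pipeline on CPU only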
Edge Storage Solutions
Local Path Provisioner
# K3s already bundles local-path-provisioner and ships local-path as the default
# StorageClass; install it manually only if it was disabled (--disable local-storage)
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
# Ensure it is marked as the default storage class
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
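The ml-inference Deployment above references a model-storage PersistentVolumeClaim that is never defined in this guide. A minimal sketch of that claim backed by local-path (the 5Gi size is an example; size it for your model artifacts):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-storage
  namespace: edge-apps
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
EOF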
Distributed Storage for Edge
# longhorn-edge.yaml
apiVersion: v1
kind: Namespace
metadata:
name: longhorn-system
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: longhorn
namespace: kube-system
spec:
repo: https://charts.longhorn.io
chart: longhorn
targetNamespace: longhorn-system
createNamespace: true
valuesContent: |-
persistence:
defaultClass: false
defaultClassReplicaCount: 1
csi:
attacherReplicaCount: 1
provisionerReplicaCount: 1
resizerReplicaCount: 1
snapshotterReplicaCount: 1
defaultSettings:
defaultReplicaCount: 1
guaranteedEngineCPU: 0.1
storageMinimalAvailablePercentage: 10
upgradeChecker: false
createDefaultDiskLabeledNodes: true
defaultDataLocality: best-effort
replicaZoneSoftAntiAffinity: false
nodeDownPodDeletionPolicy: delete-both-statefulset-and-deployment-pod
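K3s's built-in Helm controller picks this manifest up and installs the chart. Longhorn also needs the iSCSI initiator on every node, which is worth confirming first — a quick sketch of the prerequisite and of watching the rollout:
# Longhorn requires the iSCSI initiator on each node
sudo dnf install -y iscsi-initiator-utils
sudo systemctl enable --now iscsid
# Watch the K3s Helm controller roll the chart out
kubectl -n kube-system get helmchart longhorn
kubectl -n longhorn-system get pods -w
kubectl get storageclass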
Networking and Connectivity
5G/LTE Failover Configuration
# network-failover.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: network-monitor
namespace: kube-system
data:
monitor.sh: |
#!/bin/bash
PRIMARY_GW="192.168.1.1"
LTE_INTERFACE="wwan0"
CHECK_INTERVAL=10
FAIL_COUNT=0
while true; do
if ! ping -c 1 -W 2 $PRIMARY_GW > /dev/null 2>&1; then
FAIL_COUNT=$((FAIL_COUNT + 1))
if [ $FAIL_COUNT -ge 3 ]; then
echo "Primary network failed, switching to LTE"
ip route del default
ip route add default dev $LTE_INTERFACE
FAIL_COUNT=0
fi
else
FAIL_COUNT=0
fi
sleep $CHECK_INTERVAL
done
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: network-monitor
namespace: kube-system
spec:
selector:
matchLabels:
app: network-monitor
template:
metadata:
labels:
app: network-monitor
spec:
hostNetwork: true
hostPID: true
containers:
- name: monitor
image: alpine:3.18
command: ["/bin/sh"]
args: ["/scripts/monitor.sh"]
securityContext:
privileged: true
volumeMounts:
- name: scripts
mountPath: /scripts
volumes:
- name: scripts
configMap:
name: network-monitor
defaultMode: 0755
Edge-to-Cloud VPN
# wireguard-vpn.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: wireguard-config
namespace: kube-system
data:
wg0.conf: |
[Interface]
Address = 10.100.0.2/24
PrivateKey = <EDGE_PRIVATE_KEY>
ListenPort = 51820
[Peer]
PublicKey = <CLOUD_PUBLIC_KEY>
Endpoint = cloud.example.com:51820
AllowedIPs = 10.100.0.0/24, 10.43.0.0/16
PersistentKeepalive = 25
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: wireguard
namespace: kube-system
spec:
selector:
matchLabels:
app: wireguard
template:
metadata:
labels:
app: wireguard
spec:
hostNetwork: true
initContainers:
- name: sysctls
image: busybox
command:
- sh
- -c
- |
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
securityContext:
privileged: true
containers:
- name: wireguard
image: linuxserver/wireguard:latest
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
privileged: true
volumeMounts:
- name: config
mountPath: /config/wg0.conf
subPath: wg0.conf
- name: modules
mountPath: /lib/modules
readOnly: true
volumes:
- name: config
configMap:
name: wireguard-config
- name: modules
hostPath:
path: /lib/modules
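The <EDGE_PRIVATE_KEY> and <CLOUD_PUBLIC_KEY> placeholders must be filled in before the DaemonSet is useful. A sketch of generating a key pair with the wg tool on the edge side; the cloud endpoint needs the mirror-image configuration with a [Peer] entry for this node:
# On the edge gateway: generate a key pair and keep the private key out of git
umask 077
wg genkey | tee edge-private.key | wg pubkey > edge-public.key
# Paste edge-private.key into PrivateKey above and exchange public keys
# with the cloud endpoint
cat edge-public.key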
Edge Management and Monitoring
Lightweight Monitoring Stack
# monitoring-edge.yaml
apiVersion: v1
kind: Namespace
metadata:
name: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus:v2.45.0
args:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=7d'
- '--storage.tsdb.retention.size=5GB'
- '--web.enable-lifecycle'
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
volumeMounts:
- name: config
mountPath: /etc/prometheus
- name: data
mountPath: /prometheus
volumes:
- name: config
configMap:
name: prometheus-config
- name: data
persistentVolumeClaim:
claimName: prometheus-data
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitoring
data:
prometheus.yml: |
global:
scrape_interval: 30s
evaluation_interval: 30s
scrape_configs:
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
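A few pieces are assumed but not created by the manifest above: the prometheus-data PersistentVolumeClaim, a ServiceAccount with RBAC permissions to list nodes and pods (without it the kubernetes_sd_configs discovery fails), and a node exporter answering on port 9100 for the kubernetes-nodes job. A sketch of the missing claim, backed by local-path; deploy node_exporter separately or drop that scrape job:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
EOF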
Edge Fleet Management
# fleet-manager.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fleet-config
namespace: kube-system
data:
fleet.yaml: |
clusters:
- name: edge-warehouse-01
endpoint: https://10.0.1.10:6443
location: warehouse
zone: us-east
- name: edge-warehouse-02
endpoint: https://10.0.2.10:6443
location: warehouse
zone: us-west
sync:
interval: 5m
applications:
- name: iot-collector
version: v1.0
targets: [warehouse]
- name: ml-inference
version: v1.0
targets: [warehouse]
monitoring:
metrics_retention: 7d
logs_retention: 3d
alerts:
- name: edge-offline
condition: up == 0
duration: 5m
severity: critical
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: fleet-sync
namespace: kube-system
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: sync
image: edge/fleet-manager:v1.0
command:
- /bin/sh
- -c
- |
for cluster in $(kubectl get configmap fleet-config -o jsonpath='{.data.fleet\.yaml}' | yq e '.clusters[].name' -); do
echo "Syncing cluster: $cluster"
kubectl --context=$cluster apply -f /manifests/
done
volumeMounts:
- name: config
mountPath: /etc/fleet
- name: manifests
mountPath: /manifests
volumes:
- name: config
configMap:
name: fleet-config
- name: manifests
persistentVolumeClaim:
claimName: fleet-manifests
restartPolicy: OnFailure
Security for Edge
Edge Security Policies
# edge-security.yaml
# NOTE: PodSecurityPolicy was only ever served as policy/v1beta1 and was removed
# in Kubernetes 1.25, so this manifest applies only to older clusters; see the
# Pod Security Admission sketch after this section for current K3s releases.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: edge-restricted
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: edge-restricted
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- edge-restricted
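On Kubernetes 1.25 and later, which current K3s releases track, PodSecurityPolicy no longer exists; the built-in Pod Security Admission controller covers the same ground via namespace labels. A minimal sketch for the edge-apps namespace — warn and audit only, because the iot-collector DaemonSet above relies on hostPath volumes and a hostPort, which the restricted and baseline profiles would reject if enforced:
kubectl label namespace edge-apps \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted --overwrite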
Certificate Management
# Install cert-manager for edge
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
# Create self-signed issuer for edge
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: edge-selfsigned
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: edge-ca
namespace: cert-manager
spec:
isCA: true
commonName: edge-ca
secretName: edge-ca-secret
issuerRef:
name: edge-selfsigned
kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: edge-ca-issuer
spec:
ca:
secretName: edge-ca-secret
EOF
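With the CA issuer in place, workloads can request certificates from it. A sketch of a serving certificate for the ml-inference service (names are examples; the inference container as deployed above serves plain HTTP, so this only illustrates requesting a certificate from the edge CA):
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ml-inference-tls
  namespace: edge-apps
spec:
  secretName: ml-inference-tls
  dnsNames:
    - ml-inference.edge-apps.svc
    - ml-inference.edge-apps.svc.cluster.local
  issuerRef:
    name: edge-ca-issuer
    kind: ClusterIssuer
EOF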
Backup and Disaster Recovery
Edge Backup Strategy
#!/bin/bash
# backup-edge.sh
# Backup K3s configuration
kubectl get all --all-namespaces -o yaml > k3s-resources-$(date +%Y%m%d).yaml
# Backup etcd (requires the embedded etcd datastore, i.e. a server started with
# --cluster-init; default single-server installs use SQLite, so back up
# /var/lib/rancher/k3s/server/db/ instead)
sudo k3s etcd-snapshot save --name edge-snapshot-$(date +%Y%m%d)
# Backup persistent volumes
for pv in $(kubectl get pv -o jsonpath='{.items[*].metadata.name}'); do
echo "Backing up PV: $pv"
# Implement your backup logic based on storage type
done
# Sync to cloud
aws s3 sync /var/lib/rancher/k3s/server/db/snapshots/ s3://edge-backups/$(hostname)/
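Rather than (or in addition to) cron-driving the script above, K3s can take and prune etcd snapshots on its own schedule when the server runs with embedded etcd. A sketch of the relevant settings in /etc/rancher/k3s/config.yaml; adjust the cron expression and retention to your storage budget:
cat << EOF | sudo tee -a /etc/rancher/k3s/config.yaml
etcd-snapshot-schedule-cron: "0 */6 * * *"
etcd-snapshot-retention: 14
EOF
sudo systemctl restart k3s
# K3s also offers etcd-s3 options to push snapshots straight to S3,
# which could replace the aws s3 sync step above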
Automated Recovery
# recovery-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: edge-recovery
namespace: kube-system
spec:
template:
spec:
containers:
- name: recovery
image: edge/recovery-tool:v1.0
command:
- /bin/sh
- -c
- |
# Check cluster health
if ! kubectl get nodes | grep -q Ready; then
echo "Cluster unhealthy, initiating recovery"
# Download latest backup
aws s3 cp s3://edge-backups/$(hostname)/latest.db /tmp/
# Restore etcd (must run on the control-plane host with K3s stopped;
# shown here for illustration)
k3s server --cluster-reset --cluster-reset-restore-path=/tmp/latest.db
fi
restartPolicy: OnFailure
Performance Optimization
Edge-Specific Optimizations
# Optimize K3s for edge
# Disable unnecessary controllers by adding these flags at install time or in
# /etc/rancher/k3s/config.yaml (shown here as an install command; running
# "k3s server" ad hoc would start a second, unmanaged server process)
curl -sfL https://get.k3s.io | sh -s - server \
--disable-cloud-controller \
--disable-network-policy \
--kube-controller-manager-arg="controllers=*,bootstrapsigner,tokencleaner,-attachdetach,-cloud-node-lifecycle" \
--kubelet-arg="feature-gates=RotateKubeletServerCertificate=true" \
--kubelet-arg="max-pods=50" \
--kubelet-arg="pods-per-core=10"
# Configure aggressive image garbage collection; note this ConfigMap is a
# reference copy of the desired settings - K3s applies kubelet configuration via
# --kubelet-arg flags or a kubelet config file (--kubelet-arg=config=<path>)
sudo k3s kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: kubelet-config
namespace: kube-system
data:
config.yaml: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
evictionHard:
memory.available: "500Mi"
nodefs.available: "10%"
evictionSoft:
memory.available: "1Gi"
nodefs.available: "15%"
evictionSoftGracePeriod:
memory.available: "2m"
nodefs.available: "2m"
EOF
Troubleshooting
Common Edge Issues
# Debug networking issues
kubectl run debug --image=nicolaka/netshoot -it --rm -- /bin/bash
# Check K3s logs
journalctl -u k3s -f
# Verify cluster connectivity
k3s check-config
# Reset node if needed
sudo k3s-killall.sh
sudo systemctl restart k3s
# Check resource usage
kubectl top nodes
kubectl top pods -A
# Debug DNS issues
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
Best Practices for Edge
1. Resource Management
   - Set appropriate resource limits
   - Use node selectors for workload placement
   - Implement pod priority classes (see the sketch after this list)
2. Connectivity
   - Plan for intermittent connectivity
   - Implement local data caching
   - Use message queuing for reliability
3. Security
   - Minimize attack surface
   - Use network policies
   - Apply security updates regularly
4. Monitoring
   - Use lightweight monitoring solutions
   - Keep alerting local so it works offline
   - Manage logs efficiently
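For the priority-class point above, a minimal sketch of a class that keeps latency-sensitive edge workloads scheduled ahead of batch jobs; the name and value are examples:
cat << EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: edge-critical
value: 100000
globalDefault: false
description: "Priority for latency-sensitive edge workloads"
EOF
# Reference it from a pod spec with: priorityClassName: edge-critical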
Conclusion
You’ve successfully deployed a production-ready edge computing platform using K3s on Rocky Linux. This lightweight yet powerful solution enables you to run Kubernetes workloads at the edge, process data locally, and maintain reliable operations even in challenging environments.
The combination of Rocky Linux’s stability and K3s’s efficiency creates an ideal platform for edge computing scenarios, from IoT data processing to AI inference at the edge. Continue to adapt and optimize your edge deployment based on your specific use cases and requirements.