Cilium revolutionizes Kubernetes networking by leveraging eBPF technology to provide a high-performance, secure service mesh without sidecars. This comprehensive guide demonstrates deploying Cilium on Rocky Linux, implementing advanced networking features, security policies, and deep observability for cloud-native applications.
Understanding Cilium and eBPF
Cilium uses eBPF to implement networking, security, and observability directly in the Linux kernel:
- Sidecar-free: No proxy containers, reducing resource overhead
- Kernel-level performance: Near-native networking speed
- Identity-based security: Policies keyed to workload identity (labels) rather than IP addresses
- Deep visibility: Kernel-level observability with minimal overhead
- Protocol awareness: HTTP/gRPC/Kafka-aware policies and visibility
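These capabilities depend on what the running kernel exposes, so a quick probe on each Rocky Linux node is a reasonable sanity check before going further. This is a minimal sketch; bpftool is installed in the preparation steps below, and the exact output varies by kernel build.
# Kernel version (the full Cilium feature set expects 5.10 or newer)
uname -r
# Confirm eBPF is compiled into the kernel
grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=' /boot/config-$(uname -r)
# List the eBPF program types the running kernel supports (needs root)
sudo bpftool feature probe kernel | grep -i program_type | head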
Architecture Overview
- Cilium Agent: Runs on each node, programs eBPF
- Cilium Operator: Manages cluster-wide operations
- Hubble: Observability platform built on Cilium
- eBPF Programs: Kernel-level data plane
- Identity Management: Workload identity allocation
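Once Cilium is installed (see below), each of these components can be inspected directly. The commands assume the `cilium` namespace used throughout this guide and a Hubble-enabled install.
# Agents run as a DaemonSet, the operator and Hubble components as Deployments
kubectl -n cilium get ds/cilium deploy/cilium-operator deploy/hubble-relay deploy/hubble-ui
# One-line summary of what the agent on a node has programmed
kubectl -n cilium exec ds/cilium -- cilium status --brief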
Prerequisites
Before deploying Cilium:
- Rocky Linux 9 with kernel 5.10+ (eBPF support)
- Kubernetes 1.23+ cluster
- Minimum 3 nodes with 8GB RAM each
- Helm 3.x installed
- kubectl configured
- No existing CNI plugin installed
Preparing Rocky Linux for Cilium
System Requirements
# Update system
sudo dnf update -y
# Install required packages
sudo dnf install -y \
kernel-devel \
kernel-headers \
elfutils-libelf-devel \
gcc \
make \
git \
bpftool \
perf
# Configure kernel network parameters (the net.bridge.* keys only exist when the br_netfilter module is loaded)
cat <<EOF | sudo tee /etc/sysctl.d/99-cilium.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.fib_multipath_use_neigh = 1
EOF
sudo sysctl -p /etc/sysctl.d/99-cilium.conf
# Disable firewalld (Cilium will handle firewalling)
sudo systemctl stop firewalld
sudo systemctl disable firewalld
# Load required kernel modules
sudo modprobe ip_tables
sudo modprobe ip6_tables
sudo modprobe netlink_diag
sudo modprobe tcp_diag
sudo modprobe udp_diag
sudo modprobe inet_diag
sudo modprobe unix_diag
# Make modules persistent
cat <<EOF | sudo tee /etc/modules-load.d/cilium.conf
ip_tables
ip6_tables
netlink_diag
tcp_diag
udp_diag
inet_diag
unix_diag
EOF
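Before continuing, confirm that the settings above actually took effect; a quick verification pass:
# Verify forwarding and rp_filter settings
sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter
# Confirm the diag and iptables modules are loaded
lsmod | grep -E 'diag|ip_tables'
# Modules that will be reloaded on boot
cat /etc/modules-load.d/cilium.conf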
Kubernetes Cluster Setup
# Initialize Kubernetes cluster without CNI
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --skip-phases=addon/kube-proxy
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Remove kube-proxy if it exists (kube-proxy was skipped above, so these may report "not found")
kubectl -n kube-system delete ds kube-proxy --ignore-not-found
kubectl -n kube-system delete cm kube-proxy --ignore-not-found
# Delete iptables rules installed by kube-proxy
sudo iptables-save | grep -v KUBE | sudo iptables-restore
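At this point the control-plane node reports NotReady, which is expected: no CNI is installed yet, so kubelet cannot set up pod networking until Cilium is deployed. Join any worker nodes now using the kubeadm join command printed by kubeadm init, then verify:
# Nodes stay NotReady and CoreDNS stays Pending until Cilium is installed -- this is normal
kubectl get nodes -o wide
kubectl -n kube-system get pods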
Installing Cilium
Helm Installation
# Add Cilium Helm repository
helm repo add cilium https://helm.cilium.io/
helm repo update
# Create Cilium namespace
kubectl create namespace cilium
# Generate Cilium configuration (set API_SERVER_IP and API_SERVER_PORT to your API server endpoint first)
cat <<EOF > cilium-values.yaml
# Cilium configuration for service mesh
kubeProxyReplacement: strict
k8sServiceHost: ${API_SERVER_IP}
k8sServicePort: ${API_SERVER_PORT}
# Enable Hubble for observability
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
    service:
      type: NodePort
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - icmp
      - http
    serviceMonitor:
      enabled: true
# Enable service mesh features
serviceAccounts:
  cilium:
    name: cilium
  operator:
    name: cilium-operator
# Security and encryption
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true
# Enable L7 proxy
l7Proxy: true
# eBPF features
bpf:
  masquerade: true
  clockProbe: true
  preallocateMaps: true
  lbMapMax: 65536
  policyMapMax: 16384
  monitorAggregation: medium
  monitorFlags: all
# IPAM configuration
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - 10.244.0.0/16
# Enable bandwidth manager
bandwidthManager:
  enabled: true
  bbr: true
# Cluster mesh for multi-cluster
cluster:
  name: cluster1
  id: 1
# Enable BGP
bgp:
  enabled: false
  announce:
    loadbalancerIP: true
# Prometheus metrics
prometheus:
  enabled: true
  serviceMonitor:
    enabled: true
operator:
  replicas: 2
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true
# Debug and monitoring
debug:
  enabled: true
  verbose: flow
# Load balancing
loadBalancer:
  algorithm: maglev
  mode: hybrid
  acceleration: native
# MTU configuration
mtu: 0 # Auto-detect
# Enable endpoint routes
endpointRoutes:
  enabled: true
# Host firewall
hostFirewall:
  enabled: true
# Policy audit mode
policyAuditMode: false
# Enable CiliumEndpointSlice
enableCiliumEndpointSlice: true
EOF
# Install Cilium
helm install cilium cilium/cilium \
--version 1.14.5 \
--namespace cilium \
--values cilium-values.yaml
# Wait for Cilium to be ready
kubectl -n cilium rollout status deployment/cilium-operator
kubectl -n cilium rollout status daemonset/cilium
Verify Installation
# Check Cilium status
kubectl -n cilium exec ds/cilium -- cilium status
# Validate installation
kubectl create ns cilium-test
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.14/examples/kubernetes/connectivity-check/connectivity-check.yaml
# Check connectivity test results
kubectl -n cilium-test get pods -w
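The watch above should eventually show every connectivity-check pod Running (or Completed for the one-shot probes); once it does, the test workloads can be removed:
# Clean up the connectivity-check namespace
kubectl delete namespace cilium-test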
Implementing Service Mesh Features
Ingress Controller with Cilium
# cilium-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: cilium
spec:
  controller: cilium.io/ingress-controller
  parameters:
    apiGroup: cilium.io
    kind: CiliumIngressParameters
    name: cilium-ingress-params
---
apiVersion: cilium.io/v2
kind: CiliumIngressParameters
metadata:
  name: cilium-ingress-params
spec:
  loadBalancerMode: dedicated
  serviceType: LoadBalancer
  insecureNodePort: 30000
  secureNodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: cilium-ingress
  namespace: cilium
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app.kubernetes.io/name: cilium-ingress
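With the IngressClass in place, a workload opts in through a standard Kubernetes Ingress that references it. The sketch below assumes a Service named my-app listening on port 80 in the default namespace and a hostname of app.example.com; adjust both to your environment.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  ingressClassName: cilium
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
EOF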
Service Mesh Configuration
# cilium-service-mesh.yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: service-mesh-default
spec:
  description: "Default service mesh policy"
  endpointSelector: {}
  ingress:
    - fromEndpoints:
        - {}
  egress:
    - toEndpoints:
        - {}
    - toServices:
        - k8sService:
            serviceName: kube-dns
            namespace: kube-system
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
---
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: envoy-prometheus-metrics
spec:
  services:
    - name: prometheus-metrics
      namespace: monitoring
  resources:
    - "@type": type.googleapis.com/envoy.config.listener.v3.Listener
      name: prometheus_metrics_listener
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9090
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: prometheus_metrics
                route_config:
                  name: prometheus_metrics_route
                  virtual_hosts:
                    - name: prometheus_metrics_vhost
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/metrics"
                          route:
                            prefix_rewrite: "/stats/prometheus"
                            cluster: envoy-admin
Advanced Networking Policies
Identity-Based Security
# identity-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: production
spec:
  description: "Allow frontend to communicate with backend"
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/.*"
              - method: "POST"
                path: "/api/v1/users"
                headers:
                  - "Content-Type: application/json"
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-to-database
  namespace: production
spec:
  description: "Allow backend to access database"
  endpointSelector:
    matchLabels:
      app: backend
  egress:
    - toEndpoints:
        - matchLabels:
            app: postgres
      toPorts:
        - ports:
            - port: "5432"
              protocol: TCP
    - toFQDNs:
        - matchPattern: "*.amazonaws.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
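A policy is only useful if you can confirm what it allows and denies. The commands below are one way to inspect enforcement for these rules; they assume frontend and backend pods exist in the production namespace, and Hubble (configured later in this guide) is used for the flow view.
# Show the policy the agent has realized for its endpoints
kubectl -n cilium exec ds/cilium -- cilium policy get | head -50
# Watch verdicts in the production namespace
hubble observe --namespace production --last 50
# Denied traffic shows up with a DROPPED verdict and a policy reason
hubble observe --namespace production --verdict DROPPED --last 50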
Layer 7 Policy Enforcement
# l7-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-rate-limiting
  namespace: api
spec:
  endpointSelector:
    matchLabels:
      app: api-gateway
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: external-lb
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/public/.*"
                headers:
                  - "X-API-Key: .*"
              - method: "POST"
                path: "/api/v1/auth/.*"
                rateLimit:
                  burst: 100
                  average: 50
                  interval: "1m"
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: grpc-service-policy
spec:
  endpointSelector:
    matchLabels:
      app: grpc-service
  ingress:
    - toPorts:
        - ports:
            - port: "9090"
              protocol: TCP
          rules:
            grpc:
              - serviceName: "user.UserService"
                method: "GetUser"
              - serviceName: "user.UserService"
                method: "ListUsers"
Kafka-Aware Policies
# kafka-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kafka-producer-consumer
spec:
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: producer
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              - apiVersion: 3
                apiKeys:
                  - "produce"
                  - "metadata"
                topic: "events"
                clientID: "producer-.*"
    - fromEndpoints:
        - matchLabels:
            role: consumer
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              - apiVersion: 3
                apiKeys:
                  - "fetch"
                  - "metadata"
                topic: "events"
                clientID: "consumer-.*"
Implementing Encryption and Security
Transparent Encryption
# Enable WireGuard encryption via the ConfigMap (only needed if encryption was not already enabled through the Helm values above)
kubectl -n cilium patch configmap cilium-config \
--type merge \
--patch '{"data":{"enable-wireguard":"true","encryption-type":"wireguard"}}'
# Restart Cilium pods
kubectl -n cilium rollout restart daemonset/cilium
# Verify encryption status
kubectl -n cilium exec ds/cilium -- cilium encrypt status
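Beyond cilium encrypt status, you can verify on the node itself that Cilium created a WireGuard interface and established peers. A sketch, assuming the interface name cilium_wg0 that current releases use and that wireguard-tools is installed on the host.
# WireGuard interface created by the agent
ip link show cilium_wg0
# Peer list and last-handshake timestamps
sudo wg show cilium_wg0
# Encryption summary as seen by the agent
kubectl -n cilium exec ds/cilium -- cilium status | grep -i encryption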
Certificate Management
# cert-manager-integration.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cilium-server-cert
  namespace: cilium
spec:
  secretName: cilium-server-tls
  issuerRef:
    name: cilium-ca-issuer
    kind: ClusterIssuer
  commonName: cilium.cluster.local
  dnsNames:
    - cilium.cluster.local
    - "*.cilium.cluster.local"
  duration: 8760h # 1 year
  renewBefore: 720h # 30 days
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: mutual-tls-enforcement
spec:
  endpointSelector:
    matchLabels:
      security: high
  ingress:
    - fromEndpoints:
        - matchLabels:
            tls: required
      authentication:
        mode: required
        mutual:
          spiffe:
            trustDomain: cluster.local
Observability with Hubble
Hubble UI and CLI Setup
# Install Hubble CLI
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz
# Port forward to Hubble Relay
kubectl port-forward -n cilium deployment/hubble-relay 4245:4245 &
# Configure Hubble CLI
hubble config set server localhost:4245
# Test Hubble
hubble status
hubble observe
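A few hubble observe filters tend to cover most day-to-day questions; all of these run against the relay port-forwarded above.
# Follow flows live for one namespace
hubble observe --namespace production --follow
# Only dropped traffic, with the drop reason
hubble observe --verdict DROPPED --last 100
# DNS traffic only
hubble observe --protocol dns --last 100
# Raw JSON for scripting
hubble observe --last 10 -o json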
Flow Visibility
# hubble-metrics.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hubble-metrics-config
  namespace: cilium
data:
  metrics.yaml: |
    metrics:
      - name: "http_requests_total"
        context:
          - source_pod
          - destination_pod
          - http_status_code
          - http_method
        options:
          - labelsContext:
              - source_namespace
              - destination_namespace
      - name: "http_request_duration_seconds"
        context:
          - source_pod
          - destination_pod
        options:
          - buckets: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
      - name: "tcp_connections_total"
        context:
          - source_pod
          - destination_pod
          - verdict
      - name: "dns_queries_total"
        context:
          - source_pod
          - dns_query
          - dns_rcode
      - name: "network_bytes_total"
        context:
          - source_pod
          - destination_pod
          - direction
Service Map Generation
#!/usr/bin/env python3
# generate_service_map.py
import subprocess
import json
import networkx as nx
import matplotlib.pyplot as plt
from collections import defaultdict


def get_hubble_flows():
    """Fetch flows from Hubble"""
    cmd = ["hubble", "observe", "--output", "json", "--last", "1000"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    flows = []
    for line in result.stdout.strip().split('\n'):
        if line:
            flows.append(json.loads(line))
    return flows


def build_service_graph(flows):
    """Build service dependency graph from flows"""
    G = nx.DiGraph()
    edge_counts = defaultdict(int)
    for flow in flows:
        if flow.get('verdict') == 'FORWARDED':
            source = flow.get('source', {})
            dest = flow.get('destination', {})
            src_name = source.get('pod_name', 'unknown')
            dst_name = dest.get('pod_name', 'unknown')
            if src_name != 'unknown' and dst_name != 'unknown':
                edge_counts[(src_name, dst_name)] += 1
    # Add edges with weights
    for (src, dst), count in edge_counts.items():
        G.add_edge(src, dst, weight=count)
    return G


def visualize_service_map(G):
    """Visualize service dependency graph"""
    plt.figure(figsize=(12, 8))
    # Calculate node positions
    pos = nx.spring_layout(G, k=2, iterations=50)
    # Draw nodes
    nx.draw_networkx_nodes(G, pos, node_size=3000, node_color='lightblue')
    # Draw edges with varying widths based on traffic
    edges = G.edges()
    weights = [G[u][v]['weight'] for u, v in edges]
    max_weight = max(weights) if weights else 1
    edge_widths = [5 * w / max_weight for w in weights]
    nx.draw_networkx_edges(G, pos, width=edge_widths, alpha=0.6, edge_color='gray')
    # Draw labels
    nx.draw_networkx_labels(G, pos, font_size=8)
    plt.title("Service Dependency Map")
    plt.axis('off')
    plt.tight_layout()
    plt.savefig('service_map.png', dpi=300)
    plt.show()


if __name__ == "__main__":
    flows = get_hubble_flows()
    G = build_service_graph(flows)
    visualize_service_map(G)
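The script depends on networkx and matplotlib and reads flows through the Hubble CLI, so the port-forward from the previous section must still be running. A usage sketch:
# Install the Python dependencies (a virtualenv keeps them isolated)
python3 -m venv venv && source venv/bin/activate
pip install networkx matplotlib
# Generate service_map.png from the last 1000 observed flows
python3 generate_service_map.py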
Advanced eBPF Features
Custom eBPF Programs
// custom_network_policy.c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/pkt_cls.h>      /* TC_ACT_OK / TC_ACT_SHOT */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define RATE_LIMIT_WINDOW_NS 1000000000 // 1 second
#define MAX_REQUESTS_PER_WINDOW 100

struct rate_limit_key {
    __u32 src_ip;
    __u16 dst_port;
};

struct rate_limit_value {
    __u64 last_seen_ns;
    __u32 request_count;
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 10000);
    __type(key, struct rate_limit_key);
    __type(value, struct rate_limit_value);
} rate_limit_map SEC(".maps");

SEC("tc")
int rate_limit_ingress(struct __sk_buff *skb)
{
    void *data = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return TC_ACT_OK;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return TC_ACT_OK;
    if (ip->protocol != IPPROTO_TCP)
        return TC_ACT_OK;

    struct tcphdr *tcp = (void *)ip + (ip->ihl * 4);
    if ((void *)(tcp + 1) > data_end)
        return TC_ACT_OK;

    // Rate limiting logic
    struct rate_limit_key key = {
        .src_ip = ip->saddr,
        .dst_port = bpf_ntohs(tcp->dest)
    };

    __u64 now = bpf_ktime_get_ns();
    struct rate_limit_value *value = bpf_map_lookup_elem(&rate_limit_map, &key);

    if (value) {
        if (now - value->last_seen_ns > RATE_LIMIT_WINDOW_NS) {
            // New window
            value->last_seen_ns = now;
            value->request_count = 1;
        } else {
            value->request_count++;
            if (value->request_count > MAX_REQUESTS_PER_WINDOW) {
                // Drop packet - rate limit exceeded
                return TC_ACT_SHOT;
            }
        }
    } else {
        // First request
        struct rate_limit_value new_value = {
            .last_seen_ns = now,
            .request_count = 1
        };
        bpf_map_update_elem(&rate_limit_map, &key, &new_value, BPF_ANY);
    }

    return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";
Loading Custom eBPF Programs
#!/bin/bash
# load_custom_ebpf.sh
# Compile eBPF program
clang -O2 -target bpf -c custom_network_policy.c -o custom_network_policy.o
# Load into Cilium
kubectl -n cilium create configmap custom-ebpf --from-file=custom_network_policy.o
# Apply custom program
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumBPFProgram
metadata:
  name: custom-rate-limiter
spec:
  program: custom_network_policy.o
  attachType: TC
  direction: ingress
  selector:
    matchLabels:
      rate-limit: enabled
EOF
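If you prefer to test the program outside of Cilium first, it can be attached directly to a node interface with tc. This is a manual sketch that is independent of the custom resource above; the interface name eth0 is an assumption, and the clsact qdisc should be removed afterwards if Cilium manages that interface.
# Attach the compiled object to ingress traffic on eth0
sudo tc qdisc add dev eth0 clsact
sudo tc filter add dev eth0 ingress bpf da obj custom_network_policy.o sec tc
# Confirm the program is loaded and attached
sudo bpftool prog show | grep rate_limit
sudo tc filter show dev eth0 ingress
# Detach when done
sudo tc qdisc del dev eth0 clsact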
Multi-Cluster Service Mesh
Cluster Mesh Setup
# Enable cluster mesh on cluster 1
cilium clustermesh enable --context cluster1
# Enable cluster mesh on cluster 2
cilium clustermesh enable --context cluster2
# Connect clusters
cilium clustermesh connect --context cluster1 --destination-context cluster2
# Verify mesh status
cilium clustermesh status --context cluster1
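Cross-cluster connectivity is easiest to validate with the cilium CLI itself; the commands below assume the same kubectl contexts (cluster1, cluster2) used above.
# Wait until both sides report ready
cilium clustermesh status --context cluster1 --wait
cilium clustermesh status --context cluster2 --wait
# Each agent should list the remote cluster
kubectl --context cluster1 -n cilium exec ds/cilium -- cilium status | grep -i clustermesh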
Global Services
# global-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: global-service
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/affinity: "local"
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-cross-cluster
spec:
  endpointSelector:
    matchLabels:
      app: my-app
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.cilium.k8s.policy.cluster: cluster2
            app: client
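Whether a global service actually picks up backends from both clusters is visible in the agent's service table. A quick check, assuming global-service has been created with the same name in each cluster:
# The ClusterIP of the global service...
kubectl get svc global-service
# ...should appear as a frontend in the agent's service table, with backends
# from both the local and the remote cluster
kubectl -n cilium exec ds/cilium -- cilium service list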
Performance Optimization
eBPF Map Tuning
# cilium-performance-tuning.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: cilium
data:
  bpf-lb-map-max: "65536"
  bpf-policy-map-max: "16384"
  bpf-ct-global-tcp-max: "524288"
  bpf-ct-global-any-max: "262144"
  bpf-nat-global-max: "524288"
  preallocate-bpf-maps: "true"
  enable-bpf-clock-probe: "true"
  enable-tcp-early-demux: "true"
  enable-bpf-bandwidth-manager: "true"
  enable-bbr: "true"
  kube-proxy-replacement: "strict"
  enable-host-reachable-services: "true"
  native-routing-cidr: "10.0.0.0/8"
  enable-endpoint-routes: "true"
  enable-local-redirect-policy: "true"
XDP Acceleration
# xdp-acceleration.yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: xdp-pool
spec:
  cidrs:
    - cidr: "10.100.0.0/24"
  disabled: false
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: cilium
data:
  enable-xdp-acceleration: "true"
  xdp-mode: "native" # native, generic, or testing
  enable-xdp-prefilter: "true"
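Native XDP only works when the NIC driver supports it, so it is worth verifying what mode actually took effect. A sketch, assuming the node's data interface is eth0; the exact wording of the agent's status output varies by release.
# The agent reports the acceleration mode it is actually using
kubectl -n cilium exec ds/cilium -- cilium status --verbose | grep -i xdp
# On the node, an attached XDP program shows up on the interface
ip link show dev eth0 | grep -i xdp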
Monitoring and Troubleshooting
Prometheus Integration
# prometheus-scrape-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'cilium-agent'
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - cilium
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_label_k8s_app]
            action: keep
            regex: cilium
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: instance
          - target_label: __address__
            replacement: ${1}:9962
            source_labels: [__meta_kubernetes_pod_ip]
      - job_name: 'hubble'
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - cilium
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_label_k8s_app]
            action: keep
            regex: hubble-relay
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: instance
          - target_label: __address__
            replacement: ${1}:9965
            source_labels: [__meta_kubernetes_pod_ip]
Debugging Tools
#!/bin/bash
# cilium-debug.sh
# Check Cilium status
echo "=== Cilium Status ==="
kubectl -n cilium exec ds/cilium -- cilium status --verbose
# List endpoints
echo -e "\n=== Cilium Endpoints ==="
kubectl -n cilium exec ds/cilium -- cilium endpoint list
# Check BPF maps
echo -e "\n=== BPF Maps ==="
kubectl -n cilium exec ds/cilium -- cilium bpf lb list
kubectl -n cilium exec ds/cilium -- cilium bpf ct list global
# Monitor drops (streams live events; press Ctrl-C to continue with the script)
echo -e "\n=== Monitoring Drops ==="
kubectl -n cilium exec ds/cilium -- cilium monitor --type drop
# Check policy enforcement
echo -e "\n=== Policy Trace ==="
kubectl -n cilium exec ds/cilium -- cilium policy trace --src-k8s-pod default:pod1 --dst-k8s-pod default:pod2
# Hubble flows
echo -e "\n=== Recent Flows ==="
hubble observe --last 100 --verdict DROPPED
Grafana Dashboards
{
  "dashboard": {
    "title": "Cilium Service Mesh Metrics",
    "panels": [
      {
        "title": "Request Rate by Service",
        "targets": [{
          "expr": "sum(rate(hubble_flows_processed_total[5m])) by (destination_service)"
        }]
      },
      {
        "title": "P95 Latency",
        "targets": [{
          "expr": "histogram_quantile(0.95, sum(rate(hubble_http_request_duration_seconds_bucket[5m])) by (le, destination_service))"
        }]
      },
      {
        "title": "Policy Drops",
        "targets": [{
          "expr": "sum(rate(cilium_drop_count_total[5m])) by (reason)"
        }]
      },
      {
        "title": "Endpoint Status",
        "targets": [{
          "expr": "cilium_endpoint_state"
        }]
      }
    ]
  }
}
Production Best Practices
High Availability Configuration
# ha-configuration.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cilium-operator
  namespace: cilium
spec:
  replicas: 3
  selector:
    matchLabels:
      name: cilium-operator
  template:
    metadata:
      labels:
        name: cilium-operator
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: name
                    operator: In
                    values:
                      - cilium-operator
              topologyKey: kubernetes.io/hostname
      containers:
        - name: cilium-operator
          image: quay.io/cilium/operator-generic:v1.14.5
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi
Backup and Recovery
#!/bin/bash
# backup-cilium.sh
BACKUP_DIR="/backup/cilium/$(date +%Y%m%d_%H%M%S)"
mkdir -p $BACKUP_DIR
# Backup Cilium configuration
kubectl -n cilium get cm cilium-config -o yaml > $BACKUP_DIR/cilium-config.yaml
# Backup network policies
kubectl get cnp -A -o yaml > $BACKUP_DIR/cilium-network-policies.yaml
kubectl get ccnp -o yaml > $BACKUP_DIR/cilium-clusterwide-policies.yaml
# Backup Cilium identities
kubectl -n cilium exec ds/cilium -- cilium identity list > $BACKUP_DIR/cilium-identities.txt
# Backup endpoints
kubectl -n cilium exec ds/cilium -- cilium endpoint list -o json > $BACKUP_DIR/cilium-endpoints.json
echo "Backup completed: $BACKUP_DIR"
Conclusion
Cilium with eBPF on Rocky Linux provides a powerful, high-performance service mesh solution that eliminates the overhead of traditional sidecar proxies. By leveraging kernel-level networking and security enforcement, Cilium enables advanced features like transparent encryption, L7 visibility, and multi-cluster connectivity while maintaining near-native performance.
The combination of eBPF’s efficiency, Cilium’s feature richness, and Rocky Linux’s stability creates an ideal platform for modern cloud-native applications requiring sophisticated networking, security, and observability capabilities.