Prerequisites
- Basic understanding of programming concepts
- Python installation (3.8+)
- VS Code or preferred IDE
What you'll learn
- Understand Kubernetes fundamentals
- Apply Kubernetes in real projects
- Debug common issues
- Write clean, Pythonic code
Introduction
Welcome to the exciting world of Kubernetes! In this guide, we'll explore how to orchestrate containers like a maestro conducting a symphony orchestra.
Kubernetes (often abbreviated as K8s) is the superhero of container orchestration, helping you manage, scale, and deploy containerized applications with ease. Whether you're building microservices, scaling web applications, or managing complex distributed systems, understanding Kubernetes is essential for modern DevOps and cloud-native development.
By the end of this tutorial, you'll feel confident deploying and managing Python applications in Kubernetes. Let's dive in!
Understanding Kubernetes
What is Kubernetes?
Kubernetes is like a smart apartment building manager for your containers. Think of containers as individual apartments, and Kubernetes as the management system that handles electricity, water, security, and maintenance for all residents automatically!
In technical terms, Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. This means you can:
- Deploy applications across multiple servers automatically
- Scale up or down based on demand
- Self-heal when containers crash
- Roll out updates without downtime
- Manage configuration and secrets securely
Why Use Kubernetes?
Here's why developers love Kubernetes:
- Automatic Scaling: Handle traffic spikes like Black Friday sales
- Self-Healing: Containers restart automatically when they fail
- Load Balancing: Distribute traffic evenly across containers
- Rolling Updates: Deploy new versions without downtime
- Resource Optimization: Use computing resources efficiently
Real-world example: Imagine running an e-commerce site. With Kubernetes, you can automatically scale up during sales, heal crashed services instantly, and deploy updates while customers keep shopping!
Basic Syntax and Usage
Simple Python App for Kubernetes
Let's start with a simple Python Flask application:
# app.py
from flask import Flask, jsonify
import os
import socket

app = Flask(__name__)

@app.route('/')
def hello():
    # Get environment info
    hostname = socket.gethostname()
    version = os.environ.get('APP_VERSION', '1.0')
    return jsonify({
        'message': 'Hello from Kubernetes!',
        'hostname': hostname,  # which pod are we in?
        'version': version     # which version is running?
    })

@app.route('/health')
def health():
    # Health check endpoint for Kubernetes
    return jsonify({'status': 'healthy'}), 200

if __name__ == '__main__':
    # Run on port 5000
    app.run(host='0.0.0.0', port=5000)
Explanation: This Flask app has a health check endpoint that Kubernetes uses to monitor whether your app is running properly.
Dockerfile for Our App
# Multi-stage build for a smaller image
FROM python:3.9-slim AS builder

# Install dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

# Final stage
FROM python:3.9-slim
WORKDIR /app

# Run as a non-root user
RUN useradd -m appuser
USER appuser

# Copy dependencies and app
COPY --from=builder /root/.local /home/appuser/.local
COPY app.py .

# Make sure installed scripts are on PATH
ENV PATH=/home/appuser/.local/bin:$PATH

# Expose port and run
EXPOSE 5000
CMD ["python", "app.py"]
Kubernetes Manifest
Here's a basic Kubernetes deployment:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
  labels:
    app: python-app
spec:
  replicas: 3  # run 3 instances
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
      - name: python-app
        image: your-registry/python-app:1.0  # your Docker image
        ports:
        - containerPort: 5000
        env:
        - name: APP_VERSION
          value: "1.0"
        resources:
          requests:
            memory: "64Mi"  # minimum memory
            cpu: "250m"     # minimum CPU
          limits:
            memory: "128Mi" # maximum memory
            cpu: "500m"     # maximum CPU
        livenessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 10
---
# Service to expose the app
apiVersion: v1
kind: Service
metadata:
  name: python-app-service
spec:
  selector:
    app: python-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  type: LoadBalancer  # cloud load balancer
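The Service above finds its backing pods by label: a pod backs the Service when its labels include every key/value pair in the Service's selector, extra labels notwithstanding. Here is a plain-Python sketch of that matching rule (`selector_matches` is a helper of my own for illustration, not the real Kubernetes implementation):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches when it carries every key/value pair in the selector.
    Extra labels on the pod are fine; missing or mismatched ones are not."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

pods = [
    {"name": "python-app-abc", "labels": {"app": "python-app", "tier": "web"}},
    {"name": "redis-xyz",      "labels": {"app": "redis"}},
]
selector = {"app": "python-app"}
matched = [p["name"] for p in pods if selector_matches(selector, p["labels"])]
print(matched)  # ['python-app-abc']
```

This is why consistent labeling matters: the Deployment's `selector.matchLabels`, the pod template's `labels`, and the Service's `selector` all have to agree.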
Practical Examples
Example 1: E-Commerce Microservice
Let's build a product catalog service:
# Product catalog microservice
# catalog_service.py
from flask import Flask, jsonify
import redis
import json
import os

app = Flask(__name__)

# Connect to Redis (another container, reached through its Service name)
redis_host = os.environ.get('REDIS_HOST', 'redis-service')
redis_client = redis.Redis(host=redis_host, port=6379, decode_responses=True)

# Sample products
def init_products():
    products = [
        {'id': '1', 'name': 'Kubernetes Book', 'price': 39.99},
        {'id': '2', 'name': 'Docker Mug', 'price': 14.99},
        {'id': '3', 'name': 'Cloud Native Shirt', 'price': 24.99}
    ]
    for product in products:
        redis_client.set(f"product:{product['id']}", json.dumps(product))

@app.route('/products', methods=['GET'])
def get_products():
    # Get all products
    products = []
    for key in redis_client.keys('product:*'):
        products.append(json.loads(redis_client.get(key)))
    return jsonify({
        'products': products,
        'count': len(products),
        'service': 'catalog-service'
    })

@app.route('/products/<product_id>', methods=['GET'])
def get_product(product_id):
    # Get a specific product
    product_data = redis_client.get(f'product:{product_id}')
    if product_data:
        return jsonify(json.loads(product_data))
    return jsonify({'error': 'Product not found'}), 404

@app.route('/health', methods=['GET'])
def health():
    # Health check that also verifies the Redis connection
    try:
        redis_client.ping()
        return jsonify({'status': 'healthy', 'redis': 'connected'})
    except redis.RedisError:
        return jsonify({'status': 'unhealthy', 'redis': 'disconnected'}), 503

if __name__ == '__main__':
    init_products()  # initialize sample data
    app.run(host='0.0.0.0', port=5000)
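If you want to exercise the catalog logic without a running Redis pod, you can swap in a tiny in-memory stand-in that mimics the handful of redis-py methods the service uses (`set`, `get`, `keys`, `ping`). `FakeRedis` is a test double of my own, not part of redis-py:

```python
import fnmatch
import json

class FakeRedis:
    """In-memory stand-in for the few redis-py methods the catalog uses."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def keys(self, pattern='*'):
        # redis KEYS uses glob patterns; fnmatch is close enough for a demo
        return [k for k in self._store if fnmatch.fnmatch(k, pattern)]

    def ping(self):
        return True

redis_client = FakeRedis()
redis_client.set('product:1', json.dumps({'id': '1', 'name': 'Kubernetes Book'}))
print(redis_client.keys('product:*'))  # ['product:1']
```

Injecting a double like this (or running a real Redis via `docker run redis:alpine`) lets you test handlers locally before any YAML is involved.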
Kubernetes deployment with a ConfigMap:
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  REDIS_HOST: "redis-service"
  APP_NAME: "Product Catalog Service"
---
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: catalog-service
  template:
    metadata:
      labels:
        app: catalog-service
    spec:
      containers:
      - name: catalog-app
        image: your-registry/catalog-service:1.0
        ports:
        - containerPort: 5000
        envFrom:
        - configMapRef:
            name: app-config
        readinessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5
---
# Redis deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
---
# Services
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: catalog-service
spec:
  selector:
    app: catalog-service
  ports:
  - port: 80
    targetPort: 5000
  type: LoadBalancer
Example 2: Job Processing System
Let's create a job processing system:
# Job processor
# job_processor.py
import time

from kubernetes import client, config

class JobProcessor:
    def __init__(self):
        # Initialize the Kubernetes client
        try:
            config.load_incluster_config()  # running inside the cluster
        except config.ConfigException:
            config.load_kube_config()       # running locally
        self.v1 = client.CoreV1Api()
        self.batch_v1 = client.BatchV1Api()

    def process_job(self, job_data):
        # Process a job
        print(f"Starting job: {job_data['name']}")
        steps = job_data.get('steps', 5)
        # Simulate work
        for i in range(steps):
            print(f"  Step {i + 1}/{steps} complete")
            time.sleep(2)
        print(f"Job {job_data['name']} completed!")
        return {'status': 'completed', 'result': 'Success!'}

    def create_k8s_job(self, job_name, image, command):
        # Create a Kubernetes Job
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name=job_name),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        containers=[
                            client.V1Container(
                                name="job-container",
                                image=image,
                                command=command
                            )
                        ],
                        restart_policy="Never"
                    )
                ),
                backoff_limit=3  # retry up to 3 times on failure
            )
        )
        # Create the job
        self.batch_v1.create_namespaced_job(
            namespace="default",
            body=job
        )
        print(f"Created Kubernetes job: {job_name}")

# Example usage
if __name__ == '__main__':
    processor = JobProcessor()

    # Sample job
    job = {
        'name': 'data-processing-job',
        'steps': 10,
        'type': 'batch-processing'
    }

    # Process the job
    result = processor.process_job(job)
    print(f"Result: {result}")

    # Create a Kubernetes Job
    processor.create_k8s_job(
        job_name="python-batch-job",
        image="python:3.9-slim",
        command=["python", "-c", "print('Hello from a Kubernetes Job!')"]
    )
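The `backoff_limit=3` above means Kubernetes retries a failed Job pod up to three times, with an exponential back-off between attempts (10s, 20s, 40s, capped at six minutes). The retry loop can be sketched like this; the delays are shrunk to fractions of a second so the demo runs instantly:

```python
import time

def run_with_backoff(task, backoff_limit=3, base_delay=0.01, cap=0.08):
    """Retry `task` up to backoff_limit times with doubling delays.
    Real Jobs start at 10s and cap at 6 minutes; tiny values here for demo."""
    delay = base_delay
    for attempt in range(backoff_limit + 1):
        try:
            return task(attempt)
        except Exception as exc:
            if attempt == backoff_limit:
                raise RuntimeError(f"Job failed after {attempt + 1} attempts") from exc
            time.sleep(delay)
            delay = min(delay * 2, cap)

calls = []
def flaky(attempt):
    # Fails twice, then succeeds, like a pod hitting a transient error
    calls.append(attempt)
    if attempt < 2:
        raise ValueError("transient failure")
    return "completed"

result = run_with_backoff(flaky)
print(result)  # completed
```

Once the limit is exhausted, the real Job is marked `Failed`, just as the sketch raises after its last attempt.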
Advanced Concepts
Horizontal Pod Autoscaling
When you're ready to level up, implement auto-scaling:
# Auto-scaling metrics server
# metrics_server.py
from flask import Flask, jsonify
import psutil
import os

app = Flask(__name__)

@app.route('/metrics')
def metrics():
    # Expose custom metrics for the HPA
    cpu_percent = psutil.cpu_percent(interval=1)
    memory_percent = psutil.virtual_memory().percent

    # Custom business metric
    active_users = int(os.environ.get('ACTIVE_USERS', '100'))

    return jsonify({
        'cpu_usage': f'{cpu_percent}%',
        'memory_usage': f'{memory_percent}%',
        'active_users': active_users,
        'scaling_recommendation': 'Scale up!' if cpu_percent > 70 else 'Normal'
    })
# HPA configuration
"""
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: python-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: python-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale at 70% CPU
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80  # scale at 80% memory
"""
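The HPA's core decision boils down to one formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped between `minReplicas` and `maxReplicas`. A minimal sketch of that calculation (ignoring refinements like tolerance windows and stabilization):

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=2, max_replicas=10):
    """The core HPA formula: scale proportionally to how far the observed
    metric is from its target, then clamp to the configured bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(3, 90, 70))   # 4: scale up
print(desired_replicas(3, 35, 70))   # 2: scale down to the minimum
print(desired_replicas(3, 70, 70))   # 3: steady state
```

So at 90% observed CPU against a 70% target, three replicas become four; well below target, the HPA shrinks the Deployment, but never past `minReplicas`.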
StatefulSets for Databases
For stateful applications like databases:
# Stateful database application
# stateful_app.py
import os
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

# Persistent volume path
DB_PATH = '/data/app.db'

def init_db():
    # Initialize the database
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS events (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL,
            timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
            pod_name TEXT
        )
    ''')
    conn.commit()
    conn.close()

@app.route('/events', methods=['POST'])
def create_event():
    # Store an event with the pod identifier
    data = request.json
    pod_name = os.environ.get('HOSTNAME', 'unknown')

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute(
        'INSERT INTO events (name, pod_name) VALUES (?, ?)',
        (data['name'], pod_name)
    )
    conn.commit()
    conn.close()

    return jsonify({
        'message': 'Event created!',
        'pod': pod_name,
        'storage': 'persistent'
    })

# StatefulSet configuration
"""
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stateful-app
spec:
  serviceName: "stateful-service"
  replicas: 3
  selector:
    matchLabels:
      app: stateful-app
  template:
    metadata:
      labels:
        app: stateful-app
    spec:
      containers:
      - name: app
        image: your-registry/stateful-app:1.0
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
"""

if __name__ == '__main__':
    init_db()
    app.run(host='0.0.0.0', port=5000)
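Unlike Deployment pods, StatefulSet pods get stable, ordered names with an ordinal suffix (stateful-app-0, stateful-app-1, ...), and the app sees that name via `HOSTNAME`, as in `create_event` above. A small helper of my own for extracting the ordinal, which is handy when, say, only pod 0 should run database migrations:

```python
import re

def pod_ordinal(hostname: str):
    """Extract the ordinal from a StatefulSet pod name, or None if absent."""
    match = re.fullmatch(r'.+-(\d+)', hostname)
    return int(match.group(1)) if match else None

print(pod_ordinal('stateful-app-0'))   # 0
print(pod_ordinal('stateful-app-12'))  # 12
print(pod_ordinal('random-host'))      # None
```

Pairing stable names with `volumeClaimTemplates` means pod `stateful-app-1` always reattaches to its own `data` volume across restarts.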
Common Pitfalls and Solutions
Pitfall 1: Not Setting Resource Limits
# Wrong way: no resource limits!
spec:
  containers:
  - name: python-app
    image: myapp:latest
    # This container can consume all of the node's resources!

# Correct way: set requests and limits
spec:
  containers:
  - name: python-app
    image: myapp:latest
    resources:
      requests:
        memory: "128Mi"  # guaranteed minimum
        cpu: "100m"      # 0.1 CPU core
      limits:
        memory: "256Mi"  # maximum allowed
        cpu: "500m"      # 0.5 CPU core
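Kubernetes resource quantities use their own units: CPU in whole cores or millicores ("500m" is 0.5 core) and memory with binary suffixes (Mi = 2^20 bytes, Gi = 2^30). A sketch of a parser for just the subset of forms used in this guide (the full Kubernetes quantity grammar supports more suffixes):

```python
def parse_cpu(quantity: str) -> float:
    """Return CPU in cores: '500m' -> 0.5, '2' -> 2.0."""
    if quantity.endswith('m'):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Return bytes for the binary suffixes used in this guide."""
    units = {'Ki': 1024, 'Mi': 1024 ** 2, 'Gi': 1024 ** 3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # plain bytes

print(parse_cpu('250m'))      # 0.25
print(parse_memory('128Mi'))  # 134217728
```

Keeping the units straight matters: "100m" CPU is a tenth of a core, and exceeding a memory limit gets the container OOM-killed.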
Pitfall 2: Ignoring Health Checks
# Dangerous: no health endpoint!
@app.route('/')
def home():
    return "Hello!"  # Kubernetes has no way to check health

# Safe: a proper health check
@app.route('/health')
def health():
    # Check all dependencies (check_database and check_external_api
    # stand in for your own dependency checks)
    try:
        db_status = check_database()
        api_status = check_external_api()

        if db_status and api_status:
            return jsonify({'status': 'healthy'}), 200
        else:
            return jsonify({'status': 'degraded'}), 503
    except Exception as e:
        return jsonify({'status': 'unhealthy', 'error': str(e)}), 503

# Kubernetes probes
"""
livenessProbe:
  httpGet:
    path: /health
    port: 5000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 5
"""
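A probe does not restart the container on the first failed check: by default the kubelet requires `failureThreshold` (3) consecutive failures, and a single success resets the count. That bookkeeping, sketched as a small class of my own:

```python
class LivenessTracker:
    """Mimics the kubelet's consecutive-failure counting for a liveness probe."""
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0

    def record(self, probe_succeeded: bool) -> bool:
        """Record one probe result; return True when a restart is due."""
        if probe_succeeded:
            self.consecutive_failures = 0  # one success resets the streak
            return False
        self.consecutive_failures += 1
        return self.consecutive_failures >= self.failure_threshold

tracker = LivenessTracker()
results = [tracker.record(ok) for ok in [False, False, True, False, False, False]]
print(results)  # [False, False, False, False, False, True]
```

This is why a flaky dependency in your `/health` handler is dangerous: three unlucky checks in a row (30 seconds at `periodSeconds: 10`) and the container gets restarted.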
Best Practices
- Use Namespaces: Organize resources by environment or team
- Label Everything: Use consistent labels for organization
- Set Resource Limits: Prevent resource starvation
- Use Secrets: Never hardcode sensitive data
- Implement Health Checks: Enable self-healing
- Monitor Everything: Use Prometheus and Grafana
- Use Rolling Updates: Zero-downtime deployments
Hands-On Exercise
Challenge: Build a Microservices System
Create a complete microservices architecture:
Requirements:
- API Gateway service that routes requests
- User service for authentication
- Product service for catalog
- Order service for purchases
- Each service has its own database
- Services communicate via REST APIs
Bonus Points:
- Add a service mesh with Istio
- Implement circuit breakers
- Add distributed tracing
Solution
# API Gateway Service
# gateway.py
from flask import Flask, jsonify, request
import requests
import os

app = Flask(__name__)

# Service URLs from the environment
USER_SERVICE = os.environ.get('USER_SERVICE_URL', 'http://user-service')
PRODUCT_SERVICE = os.environ.get('PRODUCT_SERVICE_URL', 'http://product-service')
ORDER_SERVICE = os.environ.get('ORDER_SERVICE_URL', 'http://order-service')

@app.route('/api/users/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE'])
def proxy_users(path):
    # Route to the user service
    resp = requests.request(
        method=request.method,
        url=f'{USER_SERVICE}/{path}',
        headers={key: value for (key, value) in request.headers if key != 'Host'},
        data=request.get_data(),
        allow_redirects=False
    )
    return resp.content, resp.status_code

@app.route('/api/products/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE'])
def proxy_products(path):
    # Route to the product service
    resp = requests.request(
        method=request.method,
        url=f'{PRODUCT_SERVICE}/{path}',
        headers={key: value for (key, value) in request.headers if key != 'Host'},
        data=request.get_data(),
        allow_redirects=False
    )
    return resp.content, resp.status_code

@app.route('/health')
def health():
    # Check all downstream services
    services_health = {}
    for service_name, service_url in [
        ('users', USER_SERVICE),
        ('products', PRODUCT_SERVICE),
        ('orders', ORDER_SERVICE)
    ]:
        try:
            resp = requests.get(f'{service_url}/health', timeout=2)
            services_health[service_name] = 'up' if resp.status_code == 200 else 'down'
        except requests.RequestException:
            services_health[service_name] = 'down'

    all_healthy = all(status == 'up' for status in services_health.values())
    return jsonify({
        'gateway': 'healthy',
        'services': services_health
    }), 200 if all_healthy else 503
# Kubernetes deployment
"""
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  selector:
    app: api-gateway
  ports:
  - port: 80
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: gateway
        image: your-registry/api-gateway:1.0
        ports:
        - containerPort: 5000
        env:
        - name: USER_SERVICE_URL
          value: "http://user-service"
        - name: PRODUCT_SERVICE_URL
          value: "http://product-service"
        - name: ORDER_SERVICE_URL
          value: "http://order-service"
        livenessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 10
"""
# Test with port-forward:
# kubectl port-forward service/api-gateway 8080:80
# curl http://localhost:8080/api/products
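Behind each Service name (`user-service`, `product-service`, ...) kube-proxy spreads connections across the matching pod endpoints; in the default iptables mode the choice is effectively random per connection, while IPVS mode supports strategies like round-robin. For intuition, here is a round-robin picker; this is an illustration of the idea, not how kube-proxy is implemented:

```python
import itertools

class RoundRobin:
    """Cycle through a fixed list of backend addresses."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        return next(self._cycle)

# Hypothetical pod IPs behind one Service
lb = RoundRobin(['10.0.0.1:5000', '10.0.0.2:5000', '10.0.0.3:5000'])
picks = [lb.next_backend() for _ in range(4)]
print(picks)  # wraps back to the first backend on the fourth request
```

This is also why the gateway can simply call `http://user-service`: the Service plus kube-proxy handle endpoint selection, so the gateway never tracks pod IPs itself.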
Key Takeaways
You've learned a lot! Here's what you can now do:
- Deploy Python apps to Kubernetes with confidence
- Create scalable microservices that handle real traffic
- Implement health checks and monitoring for reliability
- Use ConfigMaps and Secrets for configuration
- Scale applications automatically based on load
Remember: Kubernetes is powerful, but start simple! Master the basics before diving into advanced features.
Next Steps
Congratulations! You've mastered the Kubernetes basics!
Here's what to do next:
- Deploy the examples to a real cluster (try Minikube locally)
- Build a complete microservices project
- Learn about Helm charts for packaging applications
- Explore service meshes like Istio
Keep orchestrating, keep scaling, and most importantly, have fun with Kubernetes!
Happy container orchestration!