Prerequisites
- Basic understanding of JavaScript 📝
- TypeScript installation ⚡
- VS Code or preferred IDE 💻
- Docker fundamentals 🐳
- Node.js and npm/pnpm 📦
What you'll learn
- Understand Kubernetes orchestration fundamentals 🎯
- Apply the Kubernetes TypeScript client in real projects 🏗️
- Debug common Kubernetes integration issues 🐛
- Write type-safe Kubernetes automation code ✨
🎯 Introduction
Welcome to the exciting world of Kubernetes container orchestration with TypeScript! 🚀 In this guide, we'll explore how to harness the power of Kubernetes to manage your containerized applications at scale.
You'll discover how Kubernetes can transform your TypeScript application deployment experience. Whether you're building microservices 🏗️, managing application scaling 📈, or orchestrating complex workflows 🔄, understanding Kubernetes with TypeScript is essential for modern cloud-native development.
By the end of this tutorial, you'll feel confident building TypeScript applications that interact with Kubernetes clusters! Let's dive in! 🏊‍♂️
📚 Understanding Kubernetes
🤔 What is Kubernetes?
Kubernetes is like having a super-smart conductor for your application orchestra! 🎼 Think of it as an automated stage manager that knows exactly when to bring in more musicians (containers), when to replace tired ones, and how to keep the whole performance running smoothly.
In TypeScript terms, Kubernetes provides a powerful API that you can interact with programmatically 🔧. This means you can:
- ✨ Deploy applications automatically
- 📈 Scale services based on demand
- 🛡️ Ensure high availability and fault tolerance
- 🔄 Manage rolling updates seamlessly
💡 Why Use Kubernetes with TypeScript?
Here's why developers love this combination:
- Type Safety 🔒: The TypeScript client ensures API interactions are type-safe
- Better IDE Support 💻: Autocomplete for Kubernetes resources
- Infrastructure as Code 📝: Define deployments in strongly-typed code
- Automation Power 🔧: Build custom operators and controllers
Real-world example: Imagine deploying a microservices shopping platform 🛒. With Kubernetes and TypeScript, you can automatically scale your payment service during peak shopping hours while keeping everything type-safe!
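Here's a tiny sketch of what that type safety buys you: the compiler rejects fields that don't exist on a Deployment (the image name below is just a placeholder):
// 📝 Type-safety sketch: the compiler catches invalid fields
import * as k8s from '@kubernetes/client-node';
const typedDeployment: k8s.V1Deployment = {
  metadata: { name: 'checkout' },
  spec: {
    replicas: 2,
    selector: { matchLabels: { app: 'checkout' } },
    template: {
      metadata: { labels: { app: 'checkout' } },
      spec: { containers: [{ name: 'checkout', image: 'myapp/checkout:1.0.0' }] }
    }
    // replicaCount: 2  // ❌ Compile-time error: not a field of V1DeploymentSpec
  }
};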
🔧 Basic Syntax and Usage
🚀 Setting Up the Kubernetes Client
Let's start by installing the official Kubernetes client:
# 📦 Install the Kubernetes JavaScript client
pnpm add @kubernetes/client-node
pnpm add -D @types/node
// 👋 Hello, Kubernetes with TypeScript!
import * as k8s from '@kubernetes/client-node';
// 🎨 Create a Kubernetes configuration
const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // 📁 Load from ~/.kube/config
// 🚀 Create API clients
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
const appsApi = kc.makeApiClient(k8s.AppsV1Api);
console.log('🎉 Kubernetes client ready!');
💡 Explanation: The KubeConfig loads your cluster credentials, and we create typed API clients for different Kubernetes resources!
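If your credentials don't live in the default location, KubeConfig offers other loaders too; a quick sketch (the file path is a placeholder):
// 📁 Alternative credential sources
const fromFile = new k8s.KubeConfig();
fromFile.loadFromFile('/path/to/kubeconfig'); // explicit kubeconfig file
const inCluster = new k8s.KubeConfig();
inCluster.loadFromCluster(); // 🛡️ service-account credentials when running inside a pod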
🎯 Common Kubernetes Operations
Here are patterns you'll use daily:
// 🗂️ Pattern 1: Listing pods
const listPods = async (): Promise<void> => {
  try {
    const response = await k8sApi.listNamespacedPod('default');
    console.log('🏃‍♂️ Active pods:');
    response.body.items.forEach(pod => {
      console.log(`  🐳 ${pod.metadata?.name}: ${pod.status?.phase}`);
    });
  } catch (error) {
    console.error('❌ Error listing pods:', error);
  }
};
// 🎨 Pattern 2: Creating a service
const createService = async (name: string, port: number): Promise<void> => {
const service: k8s.V1Service = {
metadata: {
name: name,
labels: { app: name, emoji: '🌐' }
},
spec: {
selector: { app: name },
ports: [{
protocol: 'TCP',
port: port,
targetPort: port
}],
type: 'ClusterIP'
}
};
try {
await k8sApi.createNamespacedService('default', service);
console.log(`✅ Service ${name} created on port ${port}!`);
} catch (error) {
console.error('❌ Error creating service:', error);
}
};
// 👀 Pattern 3: Watching for changes
const watchPods = (): void => {
const watch = new k8s.Watch(kc);
watch.watch('/api/v1/namespaces/default/pods',
{},
(type, apiObj) => {
const pod = apiObj as k8s.V1Pod;
console.log(`🔄 Pod ${pod.metadata?.name} was ${type.toLowerCase()}`);
},
(err) => {
console.error('⚠️ Watch error:', err);
}
);
};
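One more everyday pattern worth keeping handy is fetching container logs. A minimal sketch against the same client (the pod name would come from a listing like Pattern 1):
// 📜 Pattern 4: Reading pod logs
const printPodLogs = async (podName: string): Promise<void> => {
  try {
    const response = await k8sApi.readNamespacedPodLog(podName, 'default');
    console.log(`📜 Logs for ${podName}:`);
    console.log(response.body);
  } catch (error) {
    console.error('❌ Error reading logs:', error);
  }
};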
💡 Practical Examples
🛒 Example 1: E-commerce Microservice Deployer
Let's build a TypeScript tool to deploy our shopping platform:
// 🗂️ Define our application types
interface MicroserviceConfig {
name: string;
image: string;
port: number;
replicas: number;
emoji: string;
env?: Record<string, string>;
}
// 🏗️ Microservice deployment manager
class MicroserviceDeployer {
private k8sApi: k8s.CoreV1Api;
private appsApi: k8s.AppsV1Api;
constructor(kubeConfig: k8s.KubeConfig) {
this.k8sApi = kubeConfig.makeApiClient(k8s.CoreV1Api);
this.appsApi = kubeConfig.makeApiClient(k8s.AppsV1Api);
}
// 🚀 Deploy a microservice
async deployMicroservice(config: MicroserviceConfig): Promise<void> {
console.log(`🚀 Deploying ${config.emoji} ${config.name}...`);
// 📦 Create deployment
const deployment: k8s.V1Deployment = {
metadata: {
name: config.name,
labels: { app: config.name, type: 'microservice' }
},
spec: {
replicas: config.replicas,
selector: {
matchLabels: { app: config.name }
},
template: {
metadata: {
labels: { app: config.name }
},
spec: {
containers: [{
name: config.name,
image: config.image,
ports: [{ containerPort: config.port }],
env: config.env ? Object.entries(config.env).map(([key, value]) => ({
name: key,
value: value
})) : []
}]
}
}
}
};
try {
await this.appsApi.createNamespacedDeployment('default', deployment);
await this.createService(config.name, config.port);
console.log(`✅ ${config.emoji} ${config.name} deployed successfully!`);
} catch (error) {
console.error(`❌ Failed to deploy ${config.name}:`, error);
}
}
// 🌐 Create service for the deployment
private async createService(name: string, port: number): Promise<void> {
const service: k8s.V1Service = {
metadata: {
name: `${name}-service`,
labels: { app: name }
},
spec: {
selector: { app: name },
ports: [{ port: port, targetPort: port }],
type: 'ClusterIP'
}
};
await this.k8sApi.createNamespacedService('default', service);
}
// 📊 Get deployment status
async getDeploymentStatus(name: string): Promise<void> {
try {
const response = await this.appsApi.readNamespacedDeployment(name, 'default');
const deployment = response.body;
const available = deployment.status?.availableReplicas || 0;
const desired = deployment.spec?.replicas || 0;
console.log(`📊 ${name} Status:`);
console.log(`  🎯 Desired: ${desired} replicas`);
console.log(`  ✅ Available: ${available} replicas`);
console.log(`  🏁 Ready: ${available === desired ? 'Yes' : 'No'}`);
} catch (error) {
console.error(`❌ Error getting status for ${name}:`, error);
}
}
}
// 🎮 Let's use it!
const deployer = new MicroserviceDeployer(kc);
// 🛒 Deploy our shopping platform services
const services: MicroserviceConfig[] = [
{
name: 'user-service',
image: 'myapp/user-service:latest',
port: 3001,
replicas: 2,
emoji: '👤',
env: { NODE_ENV: 'production', DB_HOST: 'postgres-service' }
},
{
name: 'product-service',
image: 'myapp/product-service:latest',
port: 3002,
replicas: 3,
emoji: '📦',
env: { NODE_ENV: 'production', REDIS_URL: 'redis-service:6379' }
},
{
name: 'payment-service',
image: 'myapp/payment-service:latest',
port: 3003,
replicas: 2,
emoji: '💳',
env: { NODE_ENV: 'production', STRIPE_KEY: 'sk_...' }
}
];
// 🚀 Deploy all services
services.forEach(service => deployer.deployMicroservice(service));
🎯 Try it yourself: Add a scaleService method that can automatically scale services based on CPU usage!
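One possible starting point for that exercise, sketched with the same merge-patch call used later in this guide (reading real CPU usage is deliberately left out, since those numbers come from the metrics API):
// 📈 Sketch of a scaleService method for MicroserviceDeployer
async scaleService(name: string, replicas: number): Promise<void> {
  const patch = { spec: { replicas } };
  try {
    await this.appsApi.patchNamespacedDeployment(
      name, 'default', patch,
      undefined, undefined, undefined, undefined,
      { headers: { 'Content-Type': 'application/merge-patch+json' } }
    );
    console.log(`📈 ${name} scaled to ${replicas} replicas`);
  } catch (error) {
    console.error(`❌ Error scaling ${name}:`, error);
  }
}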
🎮 Example 2: Game Server Orchestrator
Let's create a fun game server manager:
// 🏆 Game server configuration
interface GameServerConfig {
gameType: 'battle-royale' | 'racing' | 'puzzle';
maxPlayers: number;
region: string;
emoji: string;
}
class GameServerOrchestrator {
private k8sApi: k8s.CoreV1Api;
private appsApi: k8s.AppsV1Api;
private gameServers: Map<string, GameServerConfig> = new Map();
constructor(kubeConfig: k8s.KubeConfig) {
this.k8sApi = kubeConfig.makeApiClient(k8s.CoreV1Api);
this.appsApi = kubeConfig.makeApiClient(k8s.AppsV1Api);
}
// 🎮 Spawn a new game server
async spawnGameServer(config: GameServerConfig): Promise<string> {
const serverId = `game-${config.gameType}-${Date.now()}`;
console.log(`🎮 Spawning ${config.emoji} ${config.gameType} server...`);
const deployment: k8s.V1Deployment = {
metadata: {
name: serverId,
labels: {
app: 'game-server',
gameType: config.gameType,
region: config.region
}
},
spec: {
replicas: 1,
selector: { matchLabels: { app: serverId } },
template: {
metadata: { labels: { app: serverId } },
spec: {
containers: [{
name: 'game-server',
image: `game-servers/${config.gameType}:latest`,
ports: [{ containerPort: 7777 }],
env: [
{ name: 'MAX_PLAYERS', value: config.maxPlayers.toString() },
{ name: 'REGION', value: config.region },
{ name: 'GAME_TYPE', value: config.gameType }
],
resources: {
requests: { cpu: '500m', memory: '1Gi' },
limits: { cpu: '1000m', memory: '2Gi' }
}
}]
}
}
}
};
try {
await this.appsApi.createNamespacedDeployment('game-servers', deployment);
this.gameServers.set(serverId, config);
console.log(`✅ ${config.emoji} Game server ${serverId} spawned!`);
return serverId;
} catch (error) {
console.error(`❌ Failed to spawn game server:`, error);
throw error;
}
}
// 🔥 Terminate idle servers
async cleanupIdleServers(): Promise<void> {
console.log('🧹 Cleaning up idle game servers...');
try {
const response = await this.appsApi.listNamespacedDeployment('game-servers');
const deployments = response.body.items;
for (const deployment of deployments) {
const name = deployment.metadata?.name;
if (name && await this.isServerIdle(name)) {
await this.appsApi.deleteNamespacedDeployment(name, 'game-servers');
console.log(`🗑️ Removed idle server: ${name}`);
}
}
} catch (error) {
console.error('❌ Error during cleanup:', error);
}
}
// 🔍 Check if server is idle
private async isServerIdle(serverName: string): Promise<boolean> {
// 🎯 In a real implementation, you'd check player count or CPU usage
return Math.random() > 0.7; // 🎲 Simulate a 30% chance of being idle
}
// 🌍 Scale servers by region
async scaleByRegion(region: string, targetCount: number): Promise<void> {
console.log(`🌍 Scaling ${region} servers to ${targetCount}...`);
const currentServers = Array.from(this.gameServers.entries())
.filter(([_, config]) => config.region === region);
if (currentServers.length < targetCount) {
// 📈 Scale up
const serversToAdd = targetCount - currentServers.length;
for (let i = 0; i < serversToAdd; i++) {
await this.spawnGameServer({
gameType: 'battle-royale', // 🎯 Default type for new capacity
maxPlayers: 100,
region: region,
emoji: '⚔️'
});
}
} else if (currentServers.length > targetCount) {
console.log('📉 Scale-down is omitted in this sketch');
}
console.log(`🎯 Region ${region} now targets ${targetCount} servers!`);
}
}
// 🎮 Let's set up our game servers!
const gameOrchestrator = new GameServerOrchestrator(kc);
// 🌍 Deploy servers across regions
const gameConfigs: GameServerConfig[] = [
{ gameType: 'battle-royale', maxPlayers: 100, region: 'us-east', emoji: '⚔️' },
{ gameType: 'racing', maxPlayers: 12, region: 'eu-west', emoji: '🏎️' },
{ gameType: 'puzzle', maxPlayers: 4, region: 'asia-pacific', emoji: '🧩' }
];
// spawnGameServer rethrows on failure, so catch rejections instead of fire-and-forget forEach
Promise.all(gameConfigs.map(config => gameOrchestrator.spawnGameServer(config)))
.catch(error => console.error('❌ Failed to spawn a server:', error));
🚀 Advanced Concepts
🧙‍♂️ Advanced Topic 1: Custom Resource Definitions (CRDs)
When you're ready to level up, create your own Kubernetes resources:
// 🎯 Define a custom TypeScript application resource
interface TypeScriptApp {
apiVersion: 'apps.mycompany.com/v1';
kind: 'TypeScriptApp';
metadata: {
name: string;
namespace?: string;
};
spec: {
image: string;
replicas: number;
nodeVersion: string;
buildCommand: string;
startCommand: string;
emoji: string;
};
status?: {
phase: 'Building' | 'Running' | 'Failed';
buildTime?: string;
readyReplicas?: number;
};
}
// 🎪 TypeScript App Controller
class TypeScriptAppController {
private customApi: k8s.CustomObjectsApi;
private kc: k8s.KubeConfig; // 📁 stored so we can create watches later
constructor(kubeConfig: k8s.KubeConfig) {
this.kc = kubeConfig;
this.customApi = kubeConfig.makeApiClient(k8s.CustomObjectsApi);
}
// 🎨 Create a TypeScript application
async createTypeScriptApp(app: TypeScriptApp): Promise<void> {
console.log(`🚀 Creating TypeScript app: ${app.spec.emoji} ${app.metadata.name}`);
try {
await this.customApi.createNamespacedCustomObject(
'apps.mycompany.com',
'v1',
app.metadata.namespace || 'default',
'typescriptapps',
app
);
console.log(`✅ TypeScript app ${app.metadata.name} created!`);
} catch (error) {
console.error('❌ Error creating TypeScript app:', error);
}
}
// 👀 Watch for TypeScript app changes
watchTypeScriptApps(): void {
const watch = new k8s.Watch(this.kc); // 📁 CustomObjectsApi doesn't expose its KubeConfig, so we keep our own
watch.watch(
'/apis/apps.mycompany.com/v1/namespaces/default/typescriptapps',
{},
(type, obj) => {
const app = obj as TypeScriptApp;
console.log(`🔄 TypeScript app ${app.metadata.name} was ${type.toLowerCase()}`);
if (type === 'ADDED') {
this.processNewApp(app);
}
},
(err) => console.error('⚠️ Watch error:', err)
);
}
// 🏗️ Process new TypeScript applications
private async processNewApp(app: TypeScriptApp): Promise<void> {
console.log(`🔨 Building ${app.spec.emoji} ${app.metadata.name}...`);
// 🎯 Create build job, then deployment
// This would trigger your CI/CD pipeline!
console.log(`🎉 ${app.metadata.name} is now running with ${app.spec.replicas} replicas!`);
}
}
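Note that the cluster will reject TypeScriptApp objects until the CRD itself is registered. Here's a hedged sketch of that one-time setup; the group and plural names match the interface above, but the schema is trimmed to a stub (a real CRD needs a full structural schema):
// 🎯 One-time CRD registration (sketch; schema abbreviated)
const extensionsApi = kc.makeApiClient(k8s.ApiextensionsV1Api);
const crd: k8s.V1CustomResourceDefinition = {
  metadata: { name: 'typescriptapps.apps.mycompany.com' },
  spec: {
    group: 'apps.mycompany.com',
    scope: 'Namespaced',
    names: { plural: 'typescriptapps', singular: 'typescriptapp', kind: 'TypeScriptApp' },
    versions: [{
      name: 'v1',
      served: true,
      storage: true,
      // ⚠️ Stub schema for brevity; declare your real spec/status properties here
      schema: { openAPIV3Schema: { type: 'object', properties: { spec: { type: 'object' }, status: { type: 'object' } } } }
    }]
  }
};
await extensionsApi.createCustomResourceDefinition(crd);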
🏗️ Advanced Topic 2: Health Monitoring & Auto-Healing
For the brave developers who want bulletproof systems:
// 🩺 Health monitoring system
interface HealthCheck {
name: string;
endpoint: string;
expectedStatus: number;
timeout: number;
emoji: string;
}
class KubernetesHealthMonitor {
private k8sApi: k8s.CoreV1Api;
private appsApi: k8s.AppsV1Api;
private healthChecks: Map<string, HealthCheck> = new Map();
constructor(kubeConfig: k8s.KubeConfig) {
this.k8sApi = kubeConfig.makeApiClient(k8s.CoreV1Api);
this.appsApi = kubeConfig.makeApiClient(k8s.AppsV1Api);
}
// 🔍 Monitor application health
async monitorHealth(): Promise<void> {
console.log('🩺 Starting health monitoring...');
setInterval(async () => {
for (const [serviceName, healthCheck] of this.healthChecks) {
const isHealthy = await this.checkServiceHealth(healthCheck);
if (!isHealthy) {
console.log(`⚠️ ${healthCheck.emoji} ${serviceName} is unhealthy!`);
await this.healService(serviceName);
} else {
console.log(`✅ ${healthCheck.emoji} ${serviceName} is healthy`);
}
}
}, 30000); // 🔄 Check every 30 seconds
}
// 🔧 Auto-heal unhealthy services
private async healService(serviceName: string): Promise<void> {
console.log(`👷 Healing service: ${serviceName}`);
try {
// 🔄 Restart deployment by updating annotation
const patch = {
spec: {
template: {
metadata: {
annotations: {
'kubectl.kubernetes.io/restartedAt': new Date().toISOString()
}
}
}
}
};
await this.appsApi.patchNamespacedDeployment(
serviceName,
'default',
patch,
undefined,
undefined,
undefined,
undefined,
{ headers: { 'Content-Type': 'application/merge-patch+json' } }
);
console.log(`🔄 Service ${serviceName} restarted for healing!`);
} catch (error) {
console.error(`❌ Failed to heal ${serviceName}:`, error);
}
}
// 🩺 Check individual service health
private async checkServiceHealth(healthCheck: HealthCheck): Promise<boolean> {
try {
// 🌐 In a real implementation, make an HTTP request to the health endpoint
return Math.random() > 0.2; // 🎲 Simulate 80% uptime
} catch (error) {
return false;
}
}
// 📝 Register health check
registerHealthCheck(serviceName: string, healthCheck: HealthCheck): void {
this.healthChecks.set(serviceName, healthCheck);
console.log(`📝 Registered health check for ${healthCheck.emoji} ${serviceName}`);
}
}
// 🎮 Set up health monitoring
const healthMonitor = new KubernetesHealthMonitor(kc);
healthMonitor.registerHealthCheck('user-service', {
name: 'user-service',
endpoint: '/health',
expectedStatus: 200,
timeout: 5000,
emoji: '👤'
});
healthMonitor.monitorHealth();
⚠️ Common Pitfalls and Solutions
😱 Pitfall 1: Ignoring Resource Limits
// ❌ Wrong way - no resource limits!
const badDeployment: k8s.V1Deployment = {
  metadata: { name: 'memory-hog' },
  spec: {
    selector: { matchLabels: { app: 'memory-hog' } },
    template: {
      metadata: { labels: { app: 'memory-hog' } },
      spec: {
        containers: [{
          name: 'app',
          image: 'myapp:latest'
          // 💥 No resource limits = potential cluster crash!
        }]
      }
    }
  }
};
// ✅ Correct way - always set resource limits!
const goodDeployment: k8s.V1Deployment = {
  metadata: { name: 'well-behaved-app' },
  spec: {
    selector: { matchLabels: { app: 'well-behaved-app' } },
    template: {
      metadata: { labels: { app: 'well-behaved-app' } },
      spec: {
        containers: [{
          name: 'app',
          image: 'myapp:latest',
          resources: {
            requests: { // 🎯 Minimum resources needed
              cpu: '100m',
              memory: '128Mi'
            },
            limits: { // 🛡️ Maximum resources allowed
              cpu: '500m',
              memory: '512Mi'
            }
          }
        }]
      }
    }
  }
};
🤯 Pitfall 2: Not Handling API Errors Properly
// ❌ Dangerous - no error handling!
const badPodCreation = async (): Promise<void> => {
const pod = { /* pod definition */ };
await k8sApi.createNamespacedPod('default', pod); // 💥 Might throw!
console.log('Pod created!'); // 😫 This might never run!
};
// ✅ Safe - proper error handling!
const goodPodCreation = async (): Promise<void> => {
try {
const pod: k8s.V1Pod = {
metadata: { name: 'safe-pod' },
spec: {
containers: [{
name: 'app',
image: 'nginx:latest'
}]
}
};
const response = await k8sApi.createNamespacedPod('default', pod);
console.log(`✅ Pod created: ${response.body.metadata?.name}`);
} catch (error) {
if (error instanceof k8s.HttpError) {
console.error(`❌ Kubernetes API error: ${error.statusCode} - ${error.body?.message}`);
} else {
console.error('❌ Unexpected error:', error);
}
}
};
🚨 Pitfall 3: Forgetting Cleanup
// ❌ Memory leak - watching without cleanup!
const badWatcher = (): void => {
const watch = new k8s.Watch(kc);
watch.watch('/api/v1/pods', {}, (type, obj) => {
console.log('Pod event:', type);
});
// 💥 Watch never stops!
};
// ✅ Proper cleanup - abort the watch when you're done!
const goodWatcher = async (): Promise<void> => {
  const watch = new k8s.Watch(kc);
  // 👀 watch() resolves to a handle you can abort
  // (older client versions return the request object, newer ones an AbortController; both expose abort())
  const req = await watch.watch('/api/v1/pods',
    {},
    (type, obj) => {
      console.log('Pod event:', type);
    },
    (err) => {
      // ⚠️ an aborted watch may surface here as an error; only real failures matter
      if (err) {
        console.error('Watch error:', err);
      }
    }
  );
  // 🧹 Clean up after 5 minutes
  setTimeout(() => {
    req.abort();
    console.log('🛑 Watch stopped');
  }, 5 * 60 * 1000);
};
🛠️ Best Practices
- 🎯 Use TypeScript Interfaces: Define clear types for your Kubernetes resources
- 📊 Resource Limits: Always set CPU and memory limits
- 🛡️ Error Handling: Wrap API calls in try-catch blocks
- 🔄 Graceful Cleanup: Abort watches when you no longer need them
- ✨ Namespace Organization: Use namespaces to organize resources (see the sketch after this list)
- 📈 Monitoring: Implement health checks and logging
- 🔒 Security: Use RBAC and service accounts properly
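As a small illustration of the namespace point above, here's a sketch of creating a dedicated namespace with the same CoreV1Api client (the name and label are examples):
// ✨ Create a dedicated namespace for related resources
const createNamespace = async (name: string): Promise<void> => {
  const ns: k8s.V1Namespace = { metadata: { name, labels: { team: 'platform' } } };
  try {
    await k8sApi.createNamespace(ns);
    console.log(`✨ Namespace ${name} created`);
  } catch (error) {
    console.error(`❌ Error creating namespace ${name}:`, error);
  }
};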
🧪 Hands-On Exercise
🎯 Challenge: Build a TypeScript Microservice Auto-Scaler
Create a smart auto-scaling system for your TypeScript microservices:
📋 Requirements:
- ✅ Monitor CPU and memory usage of deployments
- 🏷️ Scale up when usage > 70% for 2 minutes
- 📉 Scale down when usage < 30% for 5 minutes
- 🤖 Support min/max replica limits
- 🎨 Add fun emojis and logging throughout!
🚀 Bonus Points:
- Add custom metrics (request rate, queue length)
- Implement predictive scaling based on time patterns
- Create a dashboard showing scaling events
- Add Slack/Discord notifications for scaling events
💡 Solution
📖 Click to see solution
// 🎯 Our TypeScript microservice auto-scaler!
interface ScalingConfig {
deploymentName: string;
namespace: string;
minReplicas: number;
maxReplicas: number;
targetCpuPercent: number;
targetMemoryPercent: number;
scaleUpThreshold: number;
scaleDownThreshold: number;
emoji: string;
}
interface MetricPoint {
timestamp: number;
cpuPercent: number;
memoryPercent: number;
}
class TypeScriptAutoScaler {
private k8sApi: k8s.CoreV1Api;
private appsApi: k8s.AppsV1Api;
private metricsApi: k8s.Metrics;
private scalingConfigs: Map<string, ScalingConfig> = new Map();
private metricsHistory: Map<string, MetricPoint[]> = new Map();
constructor(kubeConfig: k8s.KubeConfig) {
this.k8sApi = kubeConfig.makeApiClient(k8s.CoreV1Api);
this.appsApi = kubeConfig.makeApiClient(k8s.AppsV1Api);
this.metricsApi = new k8s.Metrics(kubeConfig); // 📊 Metrics is constructed directly, not via makeApiClient
}
// 📝 Register a deployment for auto-scaling
registerDeployment(config: ScalingConfig): void {
this.scalingConfigs.set(config.deploymentName, config);
this.metricsHistory.set(config.deploymentName, []);
console.log(`📝 Registered ${config.emoji} ${config.deploymentName} for auto-scaling`);
}
// 🔄 Start the auto-scaling loop
startAutoScaling(): void {
console.log('🎯 Starting TypeScript Auto-Scaler!');
setInterval(async () => {
for (const [deploymentName, config] of this.scalingConfigs) {
await this.checkAndScale(deploymentName, config);
}
}, 30000); // 🔄 Check every 30 seconds
}
// 🔍 Check metrics and scale if needed
private async checkAndScale(deploymentName: string, config: ScalingConfig): Promise<void> {
try {
const metrics = await this.getDeploymentMetrics(deploymentName, config.namespace);
if (!metrics) {
console.log(`⚠️ No metrics available for ${config.emoji} ${deploymentName}`);
return;
}
// 📊 Store metrics history
const history = this.metricsHistory.get(deploymentName) || [];
history.push({
timestamp: Date.now(),
cpuPercent: metrics.cpuPercent,
memoryPercent: metrics.memoryPercent
});
// 🧹 Keep only last 10 minutes of data
const cutoff = Date.now() - (10 * 60 * 1000);
const recentHistory = history.filter(point => point.timestamp > cutoff);
this.metricsHistory.set(deploymentName, recentHistory);
// 🎯 Make scaling decision
await this.makeScalingDecision(deploymentName, config, metrics, recentHistory);
} catch (error) {
console.error(`❌ Error checking metrics for ${deploymentName}:`, error);
}
}
// 📊 Get current deployment metrics
private async getDeploymentMetrics(deploymentName: string, namespace: string): Promise<{cpuPercent: number, memoryPercent: number} | null> {
try {
// 🎯 In a real implementation, you'd get metrics from Prometheus or metrics-server
// For this example, we'll simulate metrics
const cpuPercent = Math.random() * 100;
const memoryPercent = Math.random() * 100;
console.log(`📊 ${deploymentName} - CPU: ${cpuPercent.toFixed(1)}%, Memory: ${memoryPercent.toFixed(1)}%`);
return { cpuPercent, memoryPercent };
} catch (error) {
console.error(`❌ Error getting metrics for ${deploymentName}:`, error);
return null;
}
}
// 🎯 Make scaling decision based on metrics
private async makeScalingDecision(
deploymentName: string,
config: ScalingConfig,
currentMetrics: {cpuPercent: number, memoryPercent: number},
history: MetricPoint[]
): Promise<void> {
const currentReplicas = await this.getCurrentReplicas(deploymentName, config.namespace);
if (currentReplicas === null) return;
const avgCpu = currentMetrics.cpuPercent;
const avgMemory = currentMetrics.memoryPercent;
// 📈 Scale up conditions
if ((avgCpu > config.scaleUpThreshold || avgMemory > config.scaleUpThreshold) &&
currentReplicas < config.maxReplicas) {
const newReplicas = Math.min(currentReplicas + 1, config.maxReplicas);
await this.scaleDeployment(deploymentName, config.namespace, newReplicas);
console.log(`📈 ${config.emoji} ${deploymentName} scaled UP to ${newReplicas} replicas (CPU: ${avgCpu.toFixed(1)}%, Memory: ${avgMemory.toFixed(1)}%)`);
}
// 📉 Scale down conditions
else if ((avgCpu < config.scaleDownThreshold && avgMemory < config.scaleDownThreshold) &&
currentReplicas > config.minReplicas) {
// 🔍 Only scale down if consistently low for 5 minutes
const fiveMinutesAgo = Date.now() - (5 * 60 * 1000);
const recentLowUsage = history
.filter(point => point.timestamp > fiveMinutesAgo)
.every(point => point.cpuPercent < config.scaleDownThreshold && point.memoryPercent < config.scaleDownThreshold);
if (recentLowUsage && history.length >= 10) {
const newReplicas = Math.max(currentReplicas - 1, config.minReplicas);
await this.scaleDeployment(deploymentName, config.namespace, newReplicas);
console.log(`📉 ${config.emoji} ${deploymentName} scaled DOWN to ${newReplicas} replicas (consistently low usage)`);
}
}
}
// 🔢 Get current replica count
private async getCurrentReplicas(deploymentName: string, namespace: string): Promise<number | null> {
try {
const response = await this.appsApi.readNamespacedDeployment(deploymentName, namespace);
return response.body.spec?.replicas || 0;
} catch (error) {
console.error(`❌ Error getting replica count for ${deploymentName}:`, error);
return null;
}
}
// ⚖️ Scale the deployment
private async scaleDeployment(deploymentName: string, namespace: string, replicas: number): Promise<void> {
try {
const patch = {
spec: {
replicas: replicas
}
};
await this.appsApi.patchNamespacedDeployment(
deploymentName,
namespace,
patch,
undefined,
undefined,
undefined,
undefined,
{ headers: { 'Content-Type': 'application/merge-patch+json' } }
);
console.log(`✅ Successfully scaled ${deploymentName} to ${replicas} replicas`);
} catch (error) {
console.error(`❌ Error scaling ${deploymentName}:`, error);
}
}
// 📊 Get scaling statistics
getScalingStats(): void {
console.log('📊 Auto-Scaling Statistics:');
for (const [deploymentName, config] of this.scalingConfigs) {
const history = this.metricsHistory.get(deploymentName) || [];
const recent = history.slice(-5);
if (recent.length === 0) continue; // 🧹 Skip deployments with no samples yet (avoids NaN averages)
const avgCpu = recent.reduce((sum, p) => sum + p.cpuPercent, 0) / recent.length;
const avgMemory = recent.reduce((sum, p) => sum + p.memoryPercent, 0) / recent.length;
console.log(`  ${config.emoji} ${deploymentName}:`);
console.log(`    🎯 Target: ${config.targetCpuPercent}% CPU, ${config.targetMemoryPercent}% Memory`);
console.log(`    📊 Current: ${avgCpu.toFixed(1)}% CPU, ${avgMemory.toFixed(1)}% Memory`);
console.log(`    📈 Range: ${config.minReplicas}-${config.maxReplicas} replicas`);
}
}
}
// 🎮 Set up our auto-scaler!
const autoScaler = new TypeScriptAutoScaler(kc);
// 📝 Register our microservices
const scalingConfigs: ScalingConfig[] = [
{
deploymentName: 'user-service',
namespace: 'default',
minReplicas: 2,
maxReplicas: 10,
targetCpuPercent: 50,
targetMemoryPercent: 60,
scaleUpThreshold: 70,
scaleDownThreshold: 30,
emoji: '👤'
},
{
deploymentName: 'payment-service',
namespace: 'default',
minReplicas: 3,
maxReplicas: 15,
targetCpuPercent: 60,
targetMemoryPercent: 70,
scaleUpThreshold: 80,
scaleDownThreshold: 20,
emoji: '💳'
}
];
scalingConfigs.forEach(config => autoScaler.registerDeployment(config));
// 🚀 Start auto-scaling!
autoScaler.startAutoScaling();
// 📊 Show stats every 2 minutes
setInterval(() => autoScaler.getScalingStats(), 2 * 60 * 1000);
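The simulated getDeploymentMetrics above is the one piece you would swap out in a real cluster. If metrics-server is installed, the client's Metrics helper and topPods utility can supply actual usage; a hedged sketch (aggregating per-pod numbers into a deployment average is left to you):
// 📊 Sketch: real pod metrics via metrics-server (requires metrics-server in the cluster)
const realMetrics = async (): Promise<void> => {
  const metricsClient = new k8s.Metrics(kc);
  const coreClient = kc.makeApiClient(k8s.CoreV1Api);
  const podMetrics = await k8s.topPods(coreClient, metricsClient, 'default');
  for (const status of podMetrics) {
    console.log(`📊 ${status.Pod.metadata?.name}: CPU ${status.CPU.CurrentUsage}, Memory ${status.Memory.CurrentUsage}`);
  }
};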
🎉 Key Takeaways
You've learned so much about Kubernetes orchestration with TypeScript! Here's what you can now do:
- ✅ Deploy containerized applications with confidence 💪
- ✅ Manage Kubernetes resources programmatically 🛡️
- ✅ Scale applications automatically based on metrics 🎯
- ✅ Build custom controllers and operators 🔧
- ✅ Monitor and heal your applications 🔍
Remember: Kubernetes is your orchestration partner, not your enemy! It's here to help you run resilient, scalable applications. 🤝
🤝 Next Steps
Congratulations! 🎉 You've mastered Kubernetes container orchestration with TypeScript!
Here's what to do next:
- 💻 Practice with the auto-scaler exercise above
- 🏗️ Build a small microservice and deploy it with your TypeScript tools
- 📚 Move on to our next tutorial: Serverless Functions with TypeScript
- 🌟 Share your Kubernetes journey with the community!
Remember: Every Kubernetes expert was once a beginner. Keep experimenting, keep learning, and most importantly, have fun orchestrating your applications! 🚀
Happy orchestrating! 🚀🐳✨