Prerequisites
- Basic understanding of JavaScript 📝
- TypeScript installation ⚡
- VS Code or preferred IDE 💻
What you'll learn
- Understand the fundamentals of runtime performance profiling 🎯
- Apply profiling techniques in real TypeScript projects 🏗️
- Debug common performance issues 🐛
- Write type-safe profiling code ✨
📘 Performance Profiling: Runtime Analysis
🎯 Introduction
Ever wonder why your TypeScript app sometimes feels sluggish? 🐌 Or why that button click takes forever to respond? Welcome to the exciting world of performance profiling! 🎉
Performance profiling is like being a detective 🕵️♂️ for your code - you get to investigate what’s making your app slow and fix it! In this tutorial, we’ll explore how to analyze your TypeScript application’s runtime performance and make it lightning fast! ⚡
By the end of this guide, you’ll be able to spot performance bottlenecks like a pro and optimize your TypeScript applications to run smoother than butter! 🧈✨
📚 Understanding Performance Profiling
Think of performance profiling like checking your car’s dashboard 🚗. Just like how you monitor speed, fuel, and engine temperature, profiling lets you monitor your code’s execution time, memory usage, and function calls!
What is Runtime Analysis? 🤔
Runtime analysis means observing how your code behaves while it's actually running. It's different from static analysis (which examines code without executing it) - this is the real deal! 🎯
Here’s what we can discover:
- Which functions take the longest to execute ⏱️
- Where memory is being consumed 💾
- How often functions are called 📊
- Network request timings 🌐
🔧 Basic Syntax and Usage
Let’s start with the built-in performance API that works in both browser and Node.js environments! 🚀
// 👋 Hello Performance!
const startTime = performance.now();
// 🎨 Some operation we want to measure - calculateSomethingExpensive() stands in for any function you want to time
const result = calculateSomethingExpensive();
const endTime = performance.now();
const duration = endTime - startTime;
console.log(`⏱️ Operation took ${duration.toFixed(2)} milliseconds`);
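A quick note for Node.js: in recent Node releases the performance object is available as a global, but in older versions (or if your setup complains) you can import it from the built-in perf_hooks module. A minimal sketch:
// 🟢 Node.js: import performance explicitly if it isn't available as a global
import { performance } from 'perf_hooks';

const start = performance.now();
// ... the code you want to time ...
console.log(`⏱️ Took ${(performance.now() - start).toFixed(2)}ms`);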
Using Performance Marks and Measures 🎯
interface PerformanceMarker {
markStart(label: string): void;
markEnd(label: string): void;
measure(name: string, startMark: string, endMark: string): number;
}
class PerformanceProfiler implements PerformanceMarker {
markStart(label: string): void {
performance.mark(`${label}-start`);
}
markEnd(label: string): void {
performance.mark(`${label}-end`);
}
measure(name: string, startMark: string, endMark: string): number {
performance.measure(name, startMark, endMark);
const measures = performance.getEntriesByName(name);
return measures[measures.length - 1].duration;
}
}
// 🚀 Let's use it!
const profiler = new PerformanceProfiler();
profiler.markStart('data-processing');
// 🎨 Process some data - processLargeDataset() is a stand-in for your own function
const data = processLargeDataset();
profiler.markEnd('data-processing');
const duration = profiler.measure(
'DataProcessingTime',
'data-processing-start',
'data-processing-end'
);
console.log(`📊 Data processing took ${duration.toFixed(2)}ms`);
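The same marks-and-measures pattern works for async code too. Here's a minimal sketch of a hypothetical measureAsync helper (the name and shape are ours, not part of the Performance API) that wraps any promise-returning function:
// 🕐 Hypothetical helper: time any async function with marks and measures
async function measureAsync<T>(label: string, fn: () => Promise<T>): Promise<T> {
  performance.mark(`${label}-start`);
  try {
    return await fn();
  } finally {
    performance.mark(`${label}-end`);
    performance.measure(label, `${label}-start`, `${label}-end`);
    const entries = performance.getEntriesByName(label, 'measure');
    const duration = entries[entries.length - 1]?.duration ?? 0;
    console.log(`⏱️ ${label}: ${duration.toFixed(2)}ms`);
  }
}

// 🚀 Usage (assumes an ES module so top-level await works)
const payload = await measureAsync('load-products', () =>
  fetch('/api/products').then(response => response.json())
);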
💡 Practical Examples
Example 1: E-Commerce Search Performance 🛒
Let’s profile a product search feature that’s been running slowly:
interface Product {
id: number;
name: string;
price: number;
category: string;
tags: string[];
}
class ProductSearchProfiler {
private products: Product[] = [];
constructor(products: Product[]) {
this.products = products;
}
// ❌ Wrong way - No profiling, hard to identify bottlenecks
searchProductsBad(query: string): Product[] {
return this.products.filter(product =>
product.name.toLowerCase().includes(query.toLowerCase()) ||
product.tags.some(tag => tag.toLowerCase().includes(query.toLowerCase()))
);
}
// ✅ Correct way - With detailed profiling
searchProductsGood(query: string): Product[] {
performance.mark('search-start');
// 📊 Profile query normalization
performance.mark('normalize-start');
const normalizedQuery = query.toLowerCase();
performance.mark('normalize-end');
// 🔍 Profile filtering
// ⚠️ Note: don't put marks inside the per-product callback - with thousands of
// products that would add overhead and pile up unmeasured entries
performance.mark('filter-start');
const results = this.products.filter(product => {
const nameMatch = product.name.toLowerCase().includes(normalizedQuery);
const tagMatch = product.tags.some(tag =>
tag.toLowerCase().includes(normalizedQuery)
);
return nameMatch || tagMatch;
});
performance.mark('filter-end');
performance.mark('search-end');
// 📈 Log performance metrics
this.logPerformanceMetrics();
return results;
}
private logPerformanceMetrics(): void {
performance.measure('Total Search Time', 'search-start', 'search-end');
performance.measure('Normalization Time', 'normalize-start', 'normalize-end');
performance.measure('Filter Time', 'filter-start', 'filter-end');
const measures = performance.getEntriesByType('measure');
console.log('🎯 Performance Report:');
measures.forEach(measure => {
console.log(` ${measure.name}: ${measure.duration.toFixed(2)}ms`);
});
// 🧹 Clean up marks
performance.clearMarks();
performance.clearMeasures();
}
}
// 🚀 Usage
const products: Product[] = generateTestProducts(10000); // Large dataset - illustrative stub defined below
const profiler = new ProductSearchProfiler(products);
console.log('🔍 Searching for "gaming"...');
const results = profiler.searchProductsGood('gaming');
console.log(`✅ Found ${results.length} products!`);
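The example above assumes a generateTestProducts helper. If you want to run it end to end, here's one possible stub (purely illustrative - use whatever test data you like):
// 🧪 Illustrative stub: generate a large fake product catalog
function generateTestProducts(count: number): Product[] {
  const categories = ['gaming', 'office', 'audio', 'accessories'];
  return Array.from({ length: count }, (_, i) => ({
    id: i,
    name: `Product ${i} ${categories[i % categories.length]}`,
    price: Math.round(Math.random() * 50000) / 100,
    category: categories[i % categories.length],
    tags: [categories[i % categories.length], i % 2 === 0 ? 'sale' : 'new']
  }));
}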
Example 2: Game Loop Performance Monitor 🎮
interface GameMetrics {
fps: number;
frameTime: number;
updateTime: number;
renderTime: number;
}
class GamePerformanceMonitor {
private frameCount = 0;
private lastTime = performance.now();
private metrics: GameMetrics = {
fps: 0,
frameTime: 0,
updateTime: 0,
renderTime: 0
};
measureFrame(update: () => void, render: () => void): void {
const frameStart = performance.now();
// 🎯 Measure update phase
const updateStart = performance.now();
update();
const updateEnd = performance.now();
// 🎨 Measure render phase
const renderStart = performance.now();
render();
const renderEnd = performance.now();
const frameEnd = performance.now();
// 📊 Calculate metrics
this.frameCount++;
this.metrics.frameTime = frameEnd - frameStart;
this.metrics.updateTime = updateEnd - updateStart;
this.metrics.renderTime = renderEnd - renderStart;
// 🎯 Calculate FPS every second
if (frameEnd - this.lastTime >= 1000) {
this.metrics.fps = this.frameCount;
this.frameCount = 0;
this.lastTime = frameEnd;
this.logMetrics();
}
}
private logMetrics(): void {
console.log(`
🎮 Game Performance:
📊 FPS: ${this.metrics.fps}
⏱️ Frame Time: ${this.metrics.frameTime.toFixed(2)}ms
🔄 Update Time: ${this.metrics.updateTime.toFixed(2)}ms
🎨 Render Time: ${this.metrics.renderTime.toFixed(2)}ms
`);
// ⚠️ Warn if frame rate drops
if (this.metrics.fps < 30) {
console.warn('⚠️ Performance warning: FPS below 30!');
}
}
getMetrics(): GameMetrics {
return { ...this.metrics };
}
}
// 🚀 Game loop example
const monitor = new GamePerformanceMonitor();
let gameRunning = true;
const updateGame = () => {
// 🎲 Game logic here
for (let i = 0; i < 1000000; i++) {
// Simulate complex calculations
}
};
const renderGame = () => {
// 🎨 Rendering logic here
for (let i = 0; i < 500000; i++) {
// Simulate rendering
}
};
// 🎮 Main game loop
const gameLoop = () => {
if (gameRunning) {
monitor.measureFrame(updateGame, renderGame);
requestAnimationFrame(gameLoop);
}
};
requestAnimationFrame(gameLoop); // 🚀 Kick off the loop
Example 3: API Response Time Tracker 🌐
interface ApiMetrics {
endpoint: string;
method: string;
duration: number;
status: number;
timestamp: Date;
}
class ApiPerformanceTracker {
private metrics: ApiMetrics[] = [];
private slowThreshold = 1000; // 1 second
async trackRequest<T>(
url: string,
options: RequestInit = {}
): Promise<T> {
const startTime = performance.now();
const method = options.method || 'GET';
try {
// 🚀 Make the request - we time it with performance.now(), so there are no marks to clean up later
const response = await fetch(url, options);
const endTime = performance.now();
const duration = endTime - startTime;
// 📊 Record metrics
const metric: ApiMetrics = {
endpoint: url,
method,
duration,
status: response.status,
timestamp: new Date()
};
this.metrics.push(metric);
this.analyzePerformance(metric);
return await response.json();
} catch (error) {
console.error('❌ API request failed:', error);
throw error;
}
}
private analyzePerformance(metric: ApiMetrics): void {
// 🎯 Log the performance
console.log(`
📡 API Performance:
🔗 ${metric.method} ${metric.endpoint}
⏱️ Duration: ${metric.duration.toFixed(2)}ms
📊 Status: ${metric.status}
`);
// ⚠️ Warn about slow requests
if (metric.duration > this.slowThreshold) {
console.warn(`⚠️ Slow API call detected: ${metric.duration.toFixed(2)}ms`);
}
}
getAverageResponseTime(endpoint?: string): number {
const relevantMetrics = endpoint
? this.metrics.filter(m => m.endpoint === endpoint)
: this.metrics;
if (relevantMetrics.length === 0) return 0;
const total = relevantMetrics.reduce((sum, m) => sum + m.duration, 0);
return total / relevantMetrics.length;
}
getSlowestEndpoints(count: number = 5): ApiMetrics[] {
return [...this.metrics]
.sort((a, b) => b.duration - a.duration)
.slice(0, count);
}
}
// 🚀 Usage example - User is assumed to be one of your app's own interfaces; top-level await needs an ES module
const apiTracker = new ApiPerformanceTracker();
// Track API calls
const userData = await apiTracker.trackRequest<User>('/api/users/123');
const products = await apiTracker.trackRequest<Product[]>('/api/products');
// 📊 Get performance insights
console.log(`Average response time: ${apiTracker.getAverageResponseTime().toFixed(2)}ms`);
console.log('🐌 Slowest endpoints:', apiTracker.getSlowestEndpoints(3));
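Averages can hide outliers, so it's worth looking at percentiles as well. Here's a sketch of a small standalone percentile helper (hypothetical, not part of the tracker above) fed with the tracker's recorded durations:
// 📐 Hypothetical helper: nearest-rank percentile of recorded durations
function percentile(durations: number[], p: number): number {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// The metrics array is private, so we go through getSlowestEndpoints with a large count
const durations = apiTracker.getSlowestEndpoints(1000).map(m => m.duration);
console.log(`📊 p95 response time: ${percentile(durations, 95).toFixed(2)}ms`);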
🚀 Advanced Concepts
Memory Profiling with Performance Observer 🧠
class MemoryProfiler {
private observer: PerformanceObserver | null = null;
private memorySnapshots: any[] = [];
private snapshotTimer: ReturnType<typeof setInterval> | null = null;
startProfiling(snapshotIntervalMs = 5000): void {
// 🎯 Set up performance observer
this.observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.entryType === 'measure') {
this.logMeasurement(entry as PerformanceMeasure);
}
}
});
this.observer.observe({ entryTypes: ['measure', 'mark'] });
// 📊 Take memory snapshots periodically if the (Chrome-only) memory API is available
// We need at least two snapshots for analyzeMemoryTrend() to have anything to compare
if ('memory' in performance) {
this.takeMemorySnapshot();
this.snapshotTimer = setInterval(() => this.takeMemorySnapshot(), snapshotIntervalMs);
}
}
private takeMemorySnapshot(): void {
const memory = (performance as any).memory;
if (memory) {
const snapshot = {
timestamp: Date.now(),
usedJSHeapSize: memory.usedJSHeapSize,
totalJSHeapSize: memory.totalJSHeapSize,
jsHeapSizeLimit: memory.jsHeapSizeLimit
};
this.memorySnapshots.push(snapshot);
console.log(`
💾 Memory Snapshot:
📊 Used: ${(snapshot.usedJSHeapSize / 1048576).toFixed(2)} MB
📈 Total: ${(snapshot.totalJSHeapSize / 1048576).toFixed(2)} MB
🚀 Limit: ${(snapshot.jsHeapSizeLimit / 1048576).toFixed(2)} MB
`);
}
}
private logMeasurement(entry: PerformanceMeasure): void {
console.log(`⏱️ ${entry.name}: ${entry.duration.toFixed(2)}ms`);
}
stopProfiling(): void {
if (this.observer) {
this.observer.disconnect();
this.observer = null;
}
if (this.snapshotTimer) {
clearInterval(this.snapshotTimer);
this.snapshotTimer = null;
}
}
analyzeMemoryTrend(): void {
if (this.memorySnapshots.length < 2) {
console.log('📊 Not enough snapshots for trend analysis');
return;
}
const first = this.memorySnapshots[0];
const last = this.memorySnapshots[this.memorySnapshots.length - 1];
const memoryGrowth = last.usedJSHeapSize - first.usedJSHeapSize;
console.log(`
📈 Memory Trend Analysis:
🕐 Duration: ${((last.timestamp - first.timestamp) / 1000).toFixed(2)}s
📊 Memory Growth: ${(memoryGrowth / 1048576).toFixed(2)} MB
${memoryGrowth > 0 ? '⚠️ Memory grew over the profiling window - worth checking for leaks' : '✅ Memory usage stable'}
`);
}
}
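Keep in mind that performance.memory is a non-standard, Chrome-only API. In Node.js you can get similar numbers from process.memoryUsage() - a minimal sketch:
// 💾 Node.js variant: heap usage via process.memoryUsage()
function logNodeMemory(): void {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  console.log(`
    💾 Node Memory Snapshot:
    📊 Heap Used: ${(heapUsed / 1048576).toFixed(2)} MB
    📈 Heap Total: ${(heapTotal / 1048576).toFixed(2)} MB
    🧱 RSS: ${(rss / 1048576).toFixed(2)} MB
  `);
}

logNodeMemory();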
Custom Performance Decorators 🎨
// 🎯 Performance decorator for methods (uses TypeScript's legacy experimentalDecorators setting)
function measurePerformance(
target: any,
propertyKey: string,
descriptor: PropertyDescriptor
): PropertyDescriptor {
const originalMethod = descriptor.value;
descriptor.value = async function(...args: any[]) {
const start = performance.now();
const result = await originalMethod.apply(this, args);
const end = performance.now();
console.log(`⏱️ ${propertyKey} took ${(end - start).toFixed(2)}ms`);
return result;
};
return descriptor;
}
// 🚀 Usage with decorators
class DataProcessor {
@measurePerformance
async processLargeDataset(data: any[]): Promise<any[]> {
// 🎨 Simulate complex processing
return data.map(item => ({
...item,
processed: true,
timestamp: Date.now()
}));
}
@measurePerformance
async optimizeImages(images: string[]): Promise<string[]> {
// 🖼️ Simulate image optimization
await new Promise(resolve => setTimeout(resolve, 100));
return images.map(img => img.replace('.png', '-optimized.png'));
}
}
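The decorator above relies on TypeScript's legacy experimental decorators (experimentalDecorators: true in tsconfig). If you'd rather avoid decorators entirely, a plain higher-order function gives you the same effect - a minimal sketch:
// 🎁 Decorator-free alternative: wrap any async function with timing
function withTiming<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = performance.now();
    try {
      return await fn(...args);
    } finally {
      console.log(`⏱️ ${name} took ${(performance.now() - start).toFixed(2)}ms`);
    }
  };
}

// 🚀 Usage (the endpoint is made up for illustration)
const timedLoadUser = withTiming('loadUser', async (id: number) => {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
});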
⚠️ Common Pitfalls and Solutions
Pitfall 1: Measuring in Development Mode 🚧
// ❌ Wrong way - Development mode adds overhead
if (process.env.NODE_ENV === 'development') {
performance.mark('operation-start');
doExpensiveOperation();
performance.mark('operation-end');
// This will show inflated times!
}
// ✅ Correct way - Conditional profiling
const isProfiling = process.env.ENABLE_PROFILING === 'true';
if (isProfiling) {
performance.mark('operation-start');
}
doExpensiveOperation();
if (isProfiling) {
performance.mark('operation-end');
performance.measure('Operation', 'operation-start', 'operation-end');
}
Pitfall 2: Memory Leaks from Profiling 💾
// ❌ Wrong way - Keeping all measurements in memory
class BadProfiler {
private allMeasurements: any[] = [];
measure(name: string, fn: () => void): void {
const start = performance.now();
fn();
const end = performance.now();
// This array keeps growing!
this.allMeasurements.push({
name,
duration: end - start,
timestamp: Date.now()
});
}
}
// ✅ Correct way - Limit stored measurements
class GoodProfiler {
private recentMeasurements: any[] = [];
private maxMeasurements = 1000;
measure(name: string, fn: () => void): void {
const start = performance.now();
fn();
const end = performance.now();
this.recentMeasurements.push({
name,
duration: end - start,
timestamp: Date.now()
});
// 🧹 Keep only recent measurements
if (this.recentMeasurements.length > this.maxMeasurements) {
this.recentMeasurements.shift();
}
}
clearMeasurements(): void {
this.recentMeasurements = [];
performance.clearMarks();
performance.clearMeasures();
}
}
🛠️ Best Practices
1. Profile in Production Mode 🚀
   - Development builds have extra overhead
   - Use production builds for accurate measurements
   - Enable source maps for debugging
2. Use Sampling for High-Frequency Operations 📊
   let sampleCount = 0;
   const sampleRate = 100; // Profile 1 in 100 operations
   if (++sampleCount % sampleRate === 0) {
     // Profile this operation
   }
3. Clean Up Performance Entries 🧹
   - Clear marks and measures regularly
   - Prevent memory buildup
   - Use performance.clearMarks() and performance.clearMeasures()
4. Set Performance Budgets 💰
   const performanceBudget = {
     apiCall: 500,    // 500ms max
     pageLoad: 3000,  // 3s max
     animation: 16.67 // 60fps
   };
5. Use Performance Observer API 👀
   - Non-blocking performance monitoring
   - Better for continuous profiling
   - Works well with async operations
6. Profile User Interactions 👆 (see the sketch after this list)
   - Measure click-to-response time
   - Track perceived performance
   - Focus on user experience metrics
7. Aggregate and Analyze Trends 📈
   - Don't rely on single measurements
   - Look for patterns over time
   - Identify performance regressions
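Here's a minimal sketch of the "profile user interactions" idea - timing a button click from event to response. The element id and the submitOrder handler are made up for illustration:
// 👆 Illustrative only: measure click-to-response time for a button
const checkoutButton = document.querySelector<HTMLButtonElement>('#checkout');

checkoutButton?.addEventListener('click', async () => {
  const start = performance.now();
  await submitOrder(); // hypothetical handler doing the real work
  const elapsed = performance.now() - start;
  console.log(`👆 Click-to-response: ${elapsed.toFixed(2)}ms`);
  if (elapsed > 100) {
    console.warn('⚠️ Interaction feels sluggish (over 100ms)');
  }
});

// Hypothetical handler used above
async function submitOrder(): Promise<void> {
  await fetch('/api/orders', { method: 'POST' });
}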
🧪 Hands-On Exercise
Let’s build a performance monitoring system for a shopping cart! 🛒
Your Challenge: Create a ShoppingCartProfiler that tracks the performance of adding items, calculating totals, and applying discounts.
interface CartItem {
id: string;
name: string;
price: number;
quantity: number;
}
interface PerformanceReport {
operation: string;
averageTime: number;
slowestTime: number;
callCount: number;
}
// 🎯 Your task: Implement this class!
class ShoppingCartProfiler {
private items: CartItem[] = [];
// Add your profiling logic here!
addItem(item: CartItem): void {
// Profile this operation
}
calculateTotal(): number {
// Profile this operation
return 0;
}
applyDiscount(percentage: number): number {
// Profile this operation
return 0;
}
getPerformanceReport(): PerformanceReport[] {
// Return performance statistics
return [];
}
}
// Test your implementation!
const cart = new ShoppingCartProfiler();
// Add some items
cart.addItem({ id: '1', name: 'Gaming Mouse 🖱️', price: 59.99, quantity: 1 });
cart.addItem({ id: '2', name: 'Mechanical Keyboard ⌨️', price: 129.99, quantity: 1 });
cart.addItem({ id: '3', name: 'LED Monitor 🖥️', price: 299.99, quantity: 2 });
// Calculate and apply discount
const total = cart.calculateTotal();
const discountedTotal = cart.applyDiscount(10);
// Get performance report
const report = cart.getPerformanceReport();
console.log('📊 Performance Report:', report);
💡 Solution
class ShoppingCartProfiler {
private items: CartItem[] = [];
private performanceData: Map<string, number[]> = new Map();
private profile<T>(operation: string, fn: () => T): T {
const start = performance.now();
const result = fn();
const duration = performance.now() - start;
// 📊 Store performance data
if (!this.performanceData.has(operation)) {
this.performanceData.set(operation, []);
}
this.performanceData.get(operation)!.push(duration);
console.log(`⏱️ ${operation} took ${duration.toFixed(2)}ms`);
return result;
}
addItem(item: CartItem): void {
this.profile('addItem', () => {
const existingItem = this.items.find(i => i.id === item.id);
if (existingItem) {
existingItem.quantity += item.quantity;
} else {
this.items.push({ ...item });
}
});
}
calculateTotal(): number {
return this.profile('calculateTotal', () => {
return this.items.reduce((total, item) => {
return total + (item.price * item.quantity);
}, 0);
});
}
applyDiscount(percentage: number): number {
return this.profile('applyDiscount', () => {
const total = this.calculateTotal();
const discount = total * (percentage / 100);
return total - discount;
});
}
getPerformanceReport(): PerformanceReport[] {
const reports: PerformanceReport[] = [];
this.performanceData.forEach((durations, operation) => {
const average = durations.reduce((sum, d) => sum + d, 0) / durations.length;
const slowest = Math.max(...durations);
reports.push({
operation,
averageTime: average,
slowestTime: slowest,
callCount: durations.length
});
});
return reports;
}
}
// 🎉 Great job! You've built a performance-aware shopping cart!
🎓 Key Takeaways
You’ve mastered performance profiling in TypeScript! Here’s what you learned:
- ⏱️ Performance API - Use performance.now(), marks, and measures
- 🎯 Runtime Analysis - Monitor code execution in real-time
- 📊 Metrics Collection - Track duration, frequency, and patterns
- 🧠 Memory Profiling - Detect leaks and monitor heap usage
- 🎨 Decorators - Create reusable performance monitoring
- 🚀 Best Practices - Sample, aggregate, and set budgets
Remember:
- Profile in production mode for accurate results 🏭
- Clean up performance entries to prevent memory leaks 🧹
- Focus on user-perceived performance metrics 👤
- Look for trends, not individual measurements 📈
🤝 Next Steps
Congratulations on becoming a performance profiling expert! 🎉 Your TypeScript apps are about to get a serious speed boost! 🚀
Here’s what to explore next:
- 🔍 Chrome DevTools Profiler - Deep dive into browser profiling
- 📊 Performance CI/CD - Automate performance testing
- 🎯 Web Vitals - Master Core Web Vitals metrics
- 🧪 Load Testing - Profile under stress conditions
- 📈 APM Tools - Application Performance Monitoring
Keep profiling, keep optimizing, and remember - fast apps make happy users! 😊✨
Ready to dive deeper into TypeScript tooling? Check out our next tutorial on debugging techniques! 🔍🎯