
🚀 Optimizing Database Performance: Simple Guide

Published Jun 13, 2025

Easy tutorial on optimizing database performance in Alpine Linux. Perfect for beginners to speed up queries, reduce resource usage, and improve application responsiveness.

9 min read

I'll show you how to optimize database performance on Alpine Linux! A slow database can make your entire application crawl. With these optimization techniques, your queries will fly and your apps will feel snappy. It's like tuning up a car engine - small adjustments can make a huge difference!

🤔 Why Optimize Database Performance?

Database optimization makes your queries run faster, use less memory, and handle more users. A well-tuned database can be 10x or even 100x faster than an unoptimized one. This means happier users, lower server costs, and fewer late-night emergency calls!

Benefits of optimization:

  • Faster query response
  • Lower server costs
  • Better user experience
  • Higher capacity
  • Improved reliability

🎯 What You Need

Before starting, you'll need:

  • Alpine Linux installed
  • Database installed (MySQL/PostgreSQL)
  • Sample data to work with
  • Root access
  • About 30 minutes

📋 Step 1: Install Database and Tools

Let's set up our optimization toolkit:

# Update packages
apk update

# Install PostgreSQL (or MySQL)
apk add postgresql postgresql-client postgresql-contrib

# Install monitoring tools (pg_top may not be packaged for Alpine;
# htop and iotop are in the community repository)
apk add htop iotop

# Install benchmarking tools (pgbench comes with the PostgreSQL
# packages above; sysbench is in the community repository)
apk add sysbench

# Initialize PostgreSQL
/etc/init.d/postgresql setup
rc-service postgresql start
rc-update add postgresql

# Create test database
su - postgres -c "createdb testdb"

# pg_stat_statements must be preloaded before the extension can collect data
echo "shared_preload_libraries = 'pg_stat_statements'" >> /etc/postgresql/postgresql.conf
rc-service postgresql restart
su - postgres -c "psql testdb -c 'CREATE EXTENSION pg_stat_statements;'"

# For the MariaDB alternative (Alpine packages MySQL as MariaDB):
# apk add mariadb mariadb-client
# /etc/init.d/mariadb setup
# rc-service mariadb start
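
If everything installed cleanly, a quick sanity check (using the service names from above) should show PostgreSQL running and accepting connections:

# Verify the service and connectivity
rc-service postgresql status
su - postgres -c "psql -c 'SELECT version();'"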

📋 Step 2: Analyze Current Performance

First, understand your baseline:

# Create performance analysis script
cat > /usr/local/bin/db-analyze.sh << 'EOF'
#!/bin/sh
# Database Performance Analyzer

DB_NAME="${1:-testdb}"
DB_USER="${2:-postgres}"

echo "๐Ÿ” Database Performance Analysis"
echo "================================"
echo ""

# PostgreSQL analysis
if [ "$DB_USER" = "postgres" ]; then
    echo "๐Ÿ“Š Database Size:"
    psql -U $DB_USER -d $DB_NAME -c "
        SELECT pg_database.datname,
               pg_size_pretty(pg_database_size(pg_database.datname)) AS size
        FROM pg_database
        ORDER BY pg_database_size(pg_database.datname) DESC;"
    
    echo -e "\n📊 Table Sizes:"
    psql -U $DB_USER -d $DB_NAME -c "
        SELECT schemaname AS schema,
               tablename AS table_name,
               pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
        FROM pg_tables
        ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
        LIMIT 10;"
    
    echo -e "\n📊 Slow Queries:"
    # Column names for PostgreSQL 13+; on older versions use
    # total_time/mean_time instead of total_exec_time/mean_exec_time
    psql -U $DB_USER -d $DB_NAME -c "
        SELECT query,
               calls,
               round(total_exec_time::numeric, 2) AS total_time,
               round(mean_exec_time::numeric, 2) AS mean_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 5;"
    
    echo -e "\n📊 Index Usage:"
    psql -U $DB_USER -d $DB_NAME -c "
        SELECT schemaname,
               relname AS table_name,
               indexrelname AS index_name,
               idx_scan AS index_scans
        FROM pg_stat_user_indexes
        ORDER BY idx_scan ASC
        LIMIT 10;"
fi

# MySQL analysis
if [ "$DB_USER" = "mysql" ]; then
    mysql -u root -e "
        SELECT table_schema AS 'Database',
               ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
        FROM information_schema.TABLES
        GROUP BY table_schema;"
fi
EOF

chmod +x /usr/local/bin/db-analyze.sh

# Run initial analysis
/usr/local/bin/db-analyze.sh
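
Because pg_stat_statements accumulates counters from the moment it loads, reset them right before the workload you want to measure so the "Slow Queries" report reflects only that run:

# Clear accumulated query statistics before a measurement run
psql -U postgres -d testdb -c "SELECT pg_stat_statements_reset();"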

📋 Step 3: Optimize Configuration

Tune database settings for performance:

# PostgreSQL optimization
cat > /etc/postgresql/postgresql-optimized.conf << 'EOF'
# Memory Settings
shared_buffers = 256MB              # 25% of total RAM
effective_cache_size = 768MB        # 75% of total RAM
work_mem = 4MB                      # RAM per query operation
maintenance_work_mem = 64MB         # RAM for maintenance tasks

# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100

# Connection Settings
max_connections = 100               # Adjust based on needs

# Query Planner
random_page_cost = 1.1             # SSD = 1.1, HDD = 4
effective_io_concurrency = 200      # SSD = 200, HDD = 2

# Logging
log_min_duration_statement = 1000   # Log queries over 1 second
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0

# Autovacuum
autovacuum = on
autovacuum_max_workers = 4
autovacuum_naptime = 30s
EOF

# Apply PostgreSQL config (appended settings override earlier duplicates)
cat /etc/postgresql/postgresql-optimized.conf >> /etc/postgresql/postgresql.conf
rc-service postgresql restart
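
After the restart, confirm the new values are actually live (SHOW reads the running configuration):

# Confirm the running values
psql -U postgres -c "SHOW shared_buffers;"
psql -U postgres -c "SHOW effective_cache_size;"
psql -U postgres -c "SHOW work_mem;"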

# MySQL/MariaDB optimization
cat > /etc/my.cnf.d/optimization.cnf << 'EOF'
[mysqld]
# Buffer Pool (50-80% of RAM)
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 4

# Log Settings
innodb_log_file_size = 128M
innodb_log_buffer_size = 16M

# Connection Settings
max_connections = 100
thread_cache_size = 8

# Query Cache (removed in MySQL 8.0; still supported by MariaDB)
query_cache_type = 1
query_cache_size = 32M
query_cache_limit = 2M

# Temp Tables
tmp_table_size = 64M
max_heap_table_size = 64M

# Other Settings
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
EOF

# Apply MySQL config (if using MySQL)
# rc-service mariadb restart

📋 Step 4: Create and Optimize Indexes

Indexes are crucial for performance:

# Create index optimization script
cat > /usr/local/bin/optimize-indexes.sh << 'EOF'
#!/bin/sh
# Index Optimization Tool

DB_NAME="${1:-testdb}"

echo "๐Ÿ”ง Index Optimization"
echo "===================="

# Find missing indexes (PostgreSQL)
psql -U postgres -d $DB_NAME << SQL
-- Find tables without primary keys
SELECT schemaname, tablename 
FROM pg_tables t
LEFT JOIN pg_indexes i ON t.tablename = i.tablename 
    AND t.schemaname = i.schemaname 
    AND i.indexname LIKE '%_pkey'
WHERE i.indexname IS NULL 
    AND t.schemaname NOT IN ('pg_catalog', 'information_schema');

-- Find foreign keys without indexes
SELECT
    conrelid::regclass AS table_name,
    a.attname AS column_name,
    'CREATE INDEX idx_' || conrelid::regclass || '_' || a.attname || 
    ' ON ' || conrelid::regclass || '(' || a.attname || ');' AS create_index_sql
FROM pg_constraint c
JOIN pg_attribute a ON a.attnum = ANY(c.conkey) AND a.attrelid = c.conrelid
LEFT JOIN pg_index i ON i.indrelid = c.conrelid AND a.attnum = ANY(i.indkey)
WHERE c.contype = 'f' AND i.indexrelid IS NULL;

-- Suggest indexes based on query patterns
SELECT 
    schemaname,
    tablename,
    attname,
    n_distinct,
    correlation,
    'CREATE INDEX idx_' || tablename || '_' || attname || 
    ' ON ' || schemaname || '.' || tablename || '(' || attname || ');' AS suggested_index
FROM pg_stats
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    AND n_distinct > 100
    AND correlation < 0.1
ORDER BY n_distinct DESC
LIMIT 10;
SQL

# Create commonly needed indexes
cat > /tmp/create-indexes.sql << SQL
-- Common performance indexes

-- For user tables
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_created ON users(created_at);

-- For session/auth tables  
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sessions_expires ON sessions(expires_at);

-- For audit/log tables
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_logs_timestamp ON logs(timestamp);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_logs_level ON logs(level) WHERE level IN ('ERROR', 'CRITICAL');

-- Partial indexes for common queries
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_active ON users(id) WHERE active = true;
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_pending ON orders(created_at) WHERE status = 'pending';

-- Composite indexes
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_user_date ON orders(user_id, created_at);
SQL

echo -e "\n📝 Index creation script saved to /tmp/create-indexes.sql"
echo "Review and run with: psql -U postgres -d $DB_NAME -f /tmp/create-indexes.sql"
EOF

chmod +x /usr/local/bin/optimize-indexes.sh
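
Run it against the test database to get a list of candidate indexes (the script takes the database name as its first argument):

/usr/local/bin/optimize-indexes.sh testdb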

📋 Step 5: Query Optimization

Optimize slow queries:

# Query optimization helper
cat > /usr/local/bin/query-optimizer.sh << 'EOF'
#!/bin/sh
# Query Optimization Assistant

echo "๐Ÿ” Query Optimization Guide"
echo "=========================="

# Create example tables for testing
psql -U postgres -d testdb << SQL
-- Create test tables if not exists
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE,
    name VARCHAR(255),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    active BOOLEAN DEFAULT true
);

CREATE TABLE IF NOT EXISTS orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    total DECIMAL(10,2),
    status VARCHAR(50),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Insert sample data if empty
INSERT INTO users (email, name) 
SELECT 
    'user' || generate_series || '@example.com',
    'User ' || generate_series
FROM generate_series(1, 10000)
ON CONFLICT DO NOTHING;

INSERT INTO orders (user_id, total, status)
SELECT 
    floor(random() * 10000)::int + 1,  -- floor keeps ids within 1..10000
    (random() * 1000)::numeric(10,2),
    CASE (random() * 3)::int
        WHEN 0 THEN 'pending'
        WHEN 1 THEN 'completed'
        ELSE 'cancelled'
    END
FROM generate_series(1, 100000)
ON CONFLICT DO NOTHING;
SQL

echo -e "\n📊 Query Optimization Examples:"

# Bad query example
echo -e "\n❌ Slow Query Patterns:"
cat << 'SQL'
-- Slow: Leading wildcard prevents index usage
SELECT * FROM users WHERE email LIKE '%@gmail.com';

-- Slow: Function on indexed column
SELECT * FROM orders WHERE DATE(created_at) = '2024-01-01';

-- Slow: OR conditions can prevent index usage
SELECT * FROM users WHERE email = '[email protected]' OR name = 'Test User';
SQL

# Good query example
echo -e "\n✅ Optimized Queries:"
cat << 'SQL'
-- Fast: prefix match can use an index (with text_pattern_ops or C collation)
SELECT * FROM users WHERE email LIKE 'user%@gmail.com';

-- Fast: Direct comparison on indexed column
SELECT * FROM orders WHERE created_at >= '2024-01-01' AND created_at < '2024-01-02';

-- Fast: UNION can use indexes
SELECT * FROM users WHERE email = '[email protected]'
UNION
SELECT * FROM users WHERE name = 'Test User';

-- Use EXPLAIN ANALYZE to see query plan
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
SQL

# Query rewriting rules
echo -e "\n📚 Query Optimization Rules:"
cat << 'RULES'
1. Avoid SELECT * - only fetch needed columns
2. Use indexes on WHERE, JOIN, and ORDER BY columns  
3. Avoid functions on indexed columns
4. Use EXISTS instead of IN for subqueries
5. Paginate large result sets with LIMIT/OFFSET
6. Consider partial indexes for filtered queries
7. Use prepared statements to avoid parsing overhead
8. Batch multiple INSERT/UPDATE operations
RULES
EOF

chmod +x /usr/local/bin/query-optimizer.sh
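
Rule 4 deserves a concrete example. Against the users/orders tables this script creates, the two forms below are equivalent; EXISTS can short-circuit per row, though the modern planner often rewrites IN to a semi-join anyway, so verify with EXPLAIN ANALYZE rather than assuming:

-- Subquery form: materializes the full user_id list
SELECT * FROM users
WHERE id IN (SELECT user_id FROM orders WHERE status = 'pending');

-- EXISTS form: can stop at the first matching order per user
SELECT * FROM users u
WHERE EXISTS (
    SELECT 1 FROM orders o
    WHERE o.user_id = u.id AND o.status = 'pending'
);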

📋 Step 6: Monitor and Tune

Set up continuous monitoring:

# Create monitoring dashboard
cat > /usr/local/bin/db-monitor.sh << 'EOF'
#!/bin/sh
# Database Performance Monitor

clear
echo "๐Ÿš€ Database Performance Monitor"
echo "=============================="

while true; do
    # Move cursor to position
    printf "\033[3;0H"
    
    # PostgreSQL stats
    echo "๐Ÿ“Š PostgreSQL Statistics:"
    echo "------------------------"
    
    # Connection info
    CONNECTIONS=$(psql -U postgres -t -c "SELECT count(*) FROM pg_stat_activity;")
    MAX_CONN=$(psql -U postgres -t -c "SHOW max_connections;")
    echo "Connections: $CONNECTIONS / $MAX_CONN"
    
    # Cache hit ratio
    CACHE_HIT=$(psql -U postgres -t -c "
        SELECT round(100.0 * sum(heap_blks_hit) / 
               (sum(heap_blks_hit) + sum(heap_blks_read)), 2) 
        FROM pg_statio_user_tables;")
    echo "Cache Hit Ratio: ${CACHE_HIT}%"
    
    # Transaction rate
    TPS=$(psql -U postgres -t -c "
        SELECT round(xact_commit::numeric / 
               EXTRACT(EPOCH FROM (now() - stats_reset)), 2) 
        FROM pg_stat_database 
        WHERE datname = 'testdb';")
    echo "Transactions/sec: $TPS"
    
    # Active queries
    echo -e "\n📝 Active Queries:"
    psql -U postgres -x -c "
        SELECT pid, 
               usename, 
               application_name,
               state,
               SUBSTRING(query, 1, 50) AS query_preview,
               query_start
        FROM pg_stat_activity 
        WHERE state != 'idle' 
        ORDER BY query_start
        LIMIT 5;"
    
    # Table activity
    echo -e "\n📝 Table Activity (Top 5):"
    psql -U postgres -c "
        SELECT schemaname || '.' || relname AS table_name,
               n_tup_ins AS inserts,
               n_tup_upd AS updates,
               n_tup_del AS deletes,
               n_live_tup AS live_rows
        FROM pg_stat_user_tables
        ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
        LIMIT 5;"
    
    # System resources (busybox top/ps column layout may differ slightly)
    echo -e "\n💻 System Resources:"
    CPU=$(top -bn1 | grep '[p]ostgres' | awk '{sum += $9} END {print sum}')
    echo "PostgreSQL CPU: ${CPU}%"
    
    MEM=$(ps aux | grep '[p]ostgres' | awk '{sum += $6} END {print sum/1024}')
    echo "PostgreSQL Memory: ${MEM} MB"
    
    sleep 5
done
EOF

chmod +x /usr/local/bin/db-monitor.sh

# Create auto-tuning script
cat > /usr/local/bin/db-autotune.sh << 'EOF'
#!/bin/sh
# Database Auto-Tuning

echo "๐Ÿ”ง Database Auto-Tuning"
echo "======================"

# Get system info
TOTAL_MEM=$(free -m | awk '/^Mem:/ {print $2}')
CPU_COUNT=$(nproc)
DISK_TYPE=$(lsblk -d -o name,rota | grep -E "sd|nvme" | awk '{if($2==0) print "SSD"; else print "HDD"}' | head -1)

echo "System: ${TOTAL_MEM}MB RAM, ${CPU_COUNT} CPUs, ${DISK_TYPE} storage"

# Calculate optimal settings
SHARED_BUFFERS=$((TOTAL_MEM / 4))
EFFECTIVE_CACHE=$((TOTAL_MEM * 3 / 4))
WORK_MEM=$((TOTAL_MEM / 100))
MAINTENANCE_MEM=$((TOTAL_MEM / 10))

echo -e "\n📝 Recommended Settings:"
echo "shared_buffers = ${SHARED_BUFFERS}MB"
echo "effective_cache_size = ${EFFECTIVE_CACHE}MB"
echo "work_mem = ${WORK_MEM}MB"
echo "maintenance_work_mem = ${MAINTENANCE_MEM}MB"

if [ "$DISK_TYPE" = "SSD" ]; then
    echo "random_page_cost = 1.1"
    echo "effective_io_concurrency = 200"
else
    echo "random_page_cost = 4"
    echo "effective_io_concurrency = 2"
fi

echo "max_connections = $((CPU_COUNT * 25))"
echo "max_worker_processes = $CPU_COUNT"
echo "max_parallel_workers_per_gather = $((CPU_COUNT / 2))"
echo "max_parallel_workers = $CPU_COUNT"

echo -e "\n⚡ Apply these settings to postgresql.conf and restart"
EOF

chmod +x /usr/local/bin/db-autotune.sh

📋 Step 7: Maintenance Automation

Keep the database healthy automatically:

# Create maintenance script
cat > /usr/local/bin/db-maintenance.sh << 'EOF'
#!/bin/sh
# Database Maintenance

LOG_FILE="/var/log/db-maintenance.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> $LOG_FILE
}

echo "๐Ÿ”ง Database Maintenance"
echo "====================="

# PostgreSQL maintenance
if pgrep postgres > /dev/null; then
    log "Starting PostgreSQL maintenance"
    
    # Update statistics
    log "Updating statistics"
    psql -U postgres -d testdb -c "ANALYZE;"
    
    # Reindex bloated indexes
    log "Checking index bloat"
    BLOATED_INDEXES=$(psql -U postgres -d testdb -t -c "
        SELECT schemaname || '.' || indexrelname
        FROM pg_stat_user_indexes
        JOIN pg_index ON pg_index.indexrelid = pg_stat_user_indexes.indexrelid
        WHERE pg_relation_size(pg_stat_user_indexes.indexrelid) > 100000000
        AND NOT indisunique;")
    
    for idx in $BLOATED_INDEXES; do
        log "Reindexing $idx"
        psql -U postgres -d testdb -c "REINDEX INDEX CONCURRENTLY $idx;"
    done
    
    # Vacuum and analyze in parallel, one job per CPU
    log "Running vacuum"
    CPU_COUNT=$(nproc)
    vacuumdb -U postgres -d testdb -z -j $CPU_COUNT
    
    # Clean up old logs
    find /var/log/postgresql -name "*.log" -mtime +7 -delete
    
    log "PostgreSQL maintenance completed"
fi

# MySQL maintenance
if pgrep mysqld > /dev/null; then
    log "Starting MySQL maintenance"
    
    # Optimize tables (-N suppresses the column header so the
    # generated statements pipe cleanly into mysql)
    mysql -N -e "SELECT CONCAT('OPTIMIZE TABLE ', table_schema, '.', table_name, ';') 
              FROM information_schema.tables 
              WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema');" | \
    mysql
    
    log "MySQL maintenance completed"
fi

# Report
echo "Maintenance completed. Check $LOG_FILE for details."
EOF

chmod +x /usr/local/bin/db-maintenance.sh

# Schedule weekly maintenance (append rather than overwrite the crontab)
(crontab -l 2>/dev/null; echo "0 2 * * 0 /usr/local/bin/db-maintenance.sh") | crontab -

📋 Step 8: Performance Testing

Benchmark your optimizations:

# Create benchmark script
cat > /usr/local/bin/db-benchmark.sh << 'EOF'
#!/bin/sh
# Database Benchmark

echo "๐Ÿƒ Database Performance Benchmark"
echo "================================"

# PostgreSQL benchmark
if command -v pgbench > /dev/null; then
    echo -e "\n📊 PostgreSQL Benchmark:"
    
    # Initialize pgbench tables (scale factor 10 = ~1M rows)
    pgbench -U postgres -i -s 10 testdb
    
    # Run read-only test
    echo "Read-only test (1 minute):"
    pgbench -U postgres -c 10 -j 2 -T 60 -S testdb
    
    # Run read-write test
    echo -e "\nRead-write test (1 minute):"
    pgbench -U postgres -c 10 -j 2 -T 60 testdb
fi

# Custom query benchmark
echo -e "\n📊 Custom Query Benchmark:"

# Test query performance
cat > /tmp/benchmark-queries.sql << SQL
-- Test 1: Simple select
\timing on
SELECT COUNT(*) FROM users WHERE active = true;

-- Test 2: Join query
SELECT u.name, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.name
ORDER BY order_count DESC
LIMIT 10;

-- Test 3: Complex aggregation
SELECT 
    DATE_TRUNC('day', created_at) as day,
    COUNT(*) as orders,
    SUM(total) as revenue,
    AVG(total) as avg_order
FROM orders
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY DATE_TRUNC('day', created_at)
ORDER BY day;
SQL

psql -U postgres -d testdb -f /tmp/benchmark-queries.sql

# Compare before/after optimization
echo -e "\n📈 Performance Improvement:"
echo "Run this benchmark before and after optimization to measure improvement"
EOF

chmod +x /usr/local/bin/db-benchmark.sh
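
A simple way to quantify your gains is to capture the benchmark output before and after tuning and compare the tps lines pgbench prints (the file names here are just a suggestion):

# Baseline run
/usr/local/bin/db-benchmark.sh > /tmp/bench-before.txt 2>&1

# ...apply the changes from Steps 3-5, then run again
/usr/local/bin/db-benchmark.sh > /tmp/bench-after.txt 2>&1

# Compare transactions per second
grep "tps" /tmp/bench-before.txt /tmp/bench-after.txt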

🎮 Practice Exercise

Try optimizing a slow query:

  1. Create a slow query
  2. Analyze its performance
  3. Add appropriate indexes
  4. Test the improvement
# Create problematic query
psql -U postgres -d testdb << SQL
-- Slow query without index
EXPLAIN ANALYZE 
SELECT u.*, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.email LIKE '%gmail.com'
GROUP BY u.id
HAVING COUNT(o.id) > 5;

-- Add indexes (text_pattern_ops only helps prefix LIKE; a leading
-- wildcard like '%gmail.com' needs a trigram index instead)
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_users_email_trgm ON users USING gin (email gin_trgm_ops);
CREATE INDEX idx_orders_user_id ON orders(user_id);

-- Test again - should be much faster!
SQL

🚨 Troubleshooting Common Issues

High Memory Usage

Control memory consumption:

# Check current usage ([p]ostgres keeps grep from matching itself)
ps aux | grep '[p]ostgres' | awk '{sum += $6} END {print "Total: " sum/1024 " MB"}'

# Reduce shared_buffers if needed (takes effect only after a restart)
sed -i 's/shared_buffers = .*/shared_buffers = 128MB/' /etc/postgresql/postgresql.conf
rc-service postgresql restart

# Limit work_mem per query (run via psql)
psql -U postgres -c "ALTER SYSTEM SET work_mem = '2MB';"
psql -U postgres -c "SELECT pg_reload_conf();"

Slow Queries Still Slow

Debug query performance:

-- Enable detailed logging (run in psql; auto_explain must be loaded,
-- e.g. via shared_preload_libraries, before its settings apply)
ALTER SYSTEM SET log_min_duration_statement = 0;
ALTER SYSTEM SET auto_explain.log_min_duration = 0;
SELECT pg_reload_conf();

-- Check the query plan
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;

-- Force index usage for testing only (never leave this on in production)
SET enable_seqscan = OFF;

Lock Contention

Handle locking issues:

-- Find blocking queries (simplified; run in psql)
SELECT 
    blocked_locks.pid AS blocked_pid,
    blocking_locks.pid AS blocking_pid,
    blocked_activity.query AS blocked_query,
    blocking_activity.query AS blocking_query
FROM pg_locks blocked_locks
JOIN pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.pid <> blocked_locks.pid
JOIN pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;

-- Kill a blocking backend if needed (substitute the actual pid)
SELECT pg_terminate_backend(<blocking_pid>);
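
On PostgreSQL 9.6 and later there is a simpler built-in, pg_blocking_pids(), which does the lock matching for you:

-- List every blocked backend with the pids blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;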

💡 Pro Tips

Tip 1: Partition Large Tables

Handle big data efficiently:

-- Create a partitioned table (a unique constraint on a partitioned
-- table must include the partition key, so copy the structure
-- without the original primary key)
CREATE TABLE orders_partitioned (
    LIKE orders INCLUDING DEFAULTS
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE orders_2024_01 PARTITION OF orders_partitioned
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
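
On PostgreSQL 11 and later you can also add a DEFAULT partition so rows that match no monthly range don't fail to insert:

-- Catch-all partition for out-of-range rows
CREATE TABLE orders_default PARTITION OF orders_partitioned DEFAULT;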

Tip 2: Use Read Replicas

Scale read operations:

-- Set up streaming replication: on the primary (run in psql)
ALTER SYSTEM SET wal_level = replica;
ALTER SYSTEM SET max_wal_senders = 3;
-- wal_level changes require a restart; then configure the replica (below)
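
Setting up the replica itself takes a few more steps; a minimal sketch, assuming a replication user named replicator, a primary at 10.0.0.1 (both placeholders), and a PostgreSQL 12+ data directory at /var/lib/postgresql/data:

# On the replica: clone the primary's data directory
pg_basebackup -h 10.0.0.1 -U replicator -D /var/lib/postgresql/data -R -X stream -P

# -R writes standby.signal and primary_conninfo for you;
# once started, the replica serves read-only queries
rc-service postgresql start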

Tip 3: Query Result Caching

Cache expensive queries:

-- Use materialized views
CREATE MATERIALIZED VIEW daily_stats AS
SELECT DATE(created_at) AS day, COUNT(*) AS orders, SUM(total) AS revenue
FROM orders
GROUP BY DATE(created_at);

-- REFRESH ... CONCURRENTLY requires a UNIQUE index on the view
CREATE UNIQUE INDEX ON daily_stats(day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_stats;
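
To keep the view fresh, refresh it on a schedule; a sketch reusing the crontab-append pattern from Step 7:

# Refresh the materialized view every 30 minutes
(crontab -l 2>/dev/null; echo "*/30 * * * * psql -U postgres -d testdb -c 'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_stats;'") | crontab -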

✅ Best Practices

  1. Regular maintenance

    # Weekly VACUUM ANALYZE
    0 2 * * 0 vacuumdb -azw
  2. Monitor everything

    • Query performance
    • Index usage
    • Cache hit ratios
    • Connection count
  3. Test before production

    # Always EXPLAIN ANALYZE
    # Test on staging first
  4. Index strategically (see the unused-index query after this list)

    • Don't over-index
    • Remove unused indexes
    • Use partial indexes
  5. Keep statistics updated

    -- More aggressive auto-analyze (run in psql)
    ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.02;
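
For item 4, this query surfaces indexes that have never been scanned since the last statistics reset and are therefore candidates for removal (check replicas and recent stat resets before dropping anything):

-- Indexes with zero scans since the last stats reset
SELECT schemaname, relname AS table_name, indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;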

๐Ÿ† What You Learned

Fantastic work! You can now:

  • ✅ Analyze database performance
  • ✅ Optimize configuration settings
  • ✅ Create effective indexes
  • ✅ Tune slow queries
  • ✅ Monitor and maintain databases

Your database is now turbocharged!

🎯 What's Next?

Now that you've optimized your database, explore:

  • Database clustering and replication
  • Advanced indexing strategies
  • Query plan optimization
  • Database security hardening

Keep those queries flying! 🚀