🗄️ AlmaLinux Database Optimization & MySQL Performance Complete Guide
Ready to transform your database from sluggish to lightning-fast? ⚡ This comprehensive guide will turn you into a database performance wizard, covering everything from basic MySQL tuning to advanced enterprise configurations that handle millions of queries per second!
Database optimization isn't just about making queries faster; it's about building a robust, scalable data foundation that grows with your business while maintaining rock-solid reliability. Whether you're running a small application or managing enterprise-scale databases, these techniques will supercharge your performance! 🚀
🤔 Why is Database Optimization Important?
Think of database optimization as tuning a race car engine: every adjustment brings real performance gains! 🏎️ Here's why it's absolutely crucial:
- ⚡ Lightning-Fast Queries: Transform slow queries that take minutes into millisecond responses
- 💰 Cost Savings: Optimized databases need fewer resources, saving on hardware and cloud costs
- 📈 Massive Scalability: Handle 10x more concurrent users with the same infrastructure
- 🛡️ Rock-Solid Reliability: Properly tuned databases crash less and recover faster
- 🎯 Better User Experience: Fast databases mean happy users and higher conversion rates
- 🔍 Easier Troubleshooting: Well-optimized systems are easier to monitor and debug
- 🌱 Future-Proof Growth: Scalable database architecture grows with your business
- 💪 Competitive Advantage: Faster applications outperform slower competitors
🎯 What You Need
Before we dive into database mastery, let's make sure you have everything ready:
- ✅ AlmaLinux server (we'll optimize any database configuration!)
- ✅ Root or sudo access (needed for MySQL installation and configuration)
- ✅ At least 4GB RAM (more RAM = better database performance!)
- ✅ Fast storage (SSDs dramatically improve database performance)
- ✅ Network connectivity (for replication and remote access)
- ✅ Basic SQL knowledge (don't worry, we'll explain optimization concepts!)
- ✅ Text editor (nano, vim, or your favorite editor)
- ✅ Backup plan (always back up before major database changes!)
🚀 Step 1: MySQL Installation and Initial Setup
Let's start by installing MySQL and setting up a solid foundation! Think of this as preparing the perfect workspace for your database engine. 🔧
# Update system packages
sudo dnf update -y
# Ensures you have the latest security patches and packages
# Install MySQL server
sudo dnf install -y mysql-server mysql
# Installs MySQL server and client tools
# Start and enable MySQL service
sudo systemctl enable --now mysqld
# Starts MySQL and configures it to start automatically on boot
# Check MySQL service status
sudo systemctl status mysqld
# Verifies MySQL is running properly
Secure your MySQL installation:
# Run MySQL security script
sudo mysql_secure_installation
# Follow the prompts to:
# - Set root password
# - Remove anonymous users
# - Disable remote root login
# - Remove test database
# - Reload privilege tables
# Connect to MySQL as root
mysql -u root -p
# Enter the password you just set
Create a comprehensive database setup script:
# Create database setup script
sudo nano /usr/local/bin/mysql-setup.sh
#!/bin/bash
echo "🗄️ MYSQL ENTERPRISE SETUP"
echo "=========================="
# Create performance monitoring database
mysql -u root -p -e "
CREATE DATABASE IF NOT EXISTS performance_monitoring;
CREATE USER IF NOT EXISTS 'monitor'@'localhost' IDENTIFIED BY 'MonitorPass123!';
GRANT SELECT ON performance_schema.* TO 'monitor'@'localhost';
GRANT SELECT ON information_schema.* TO 'monitor'@'localhost';
GRANT SELECT ON performance_monitoring.* TO 'monitor'@'localhost';
FLUSH PRIVILEGES;
"
# Create application database
mysql -u root -p -e "
CREATE DATABASE IF NOT EXISTS app_production;
CREATE USER IF NOT EXISTS 'app_user'@'localhost' IDENTIFIED BY 'AppSecurePass456!';
GRANT ALL PRIVILEGES ON app_production.* TO 'app_user'@'localhost';
FLUSH PRIVILEGES;
"
# Create sample tables for testing
mysql -u root -p app_production -e "
CREATE TABLE IF NOT EXISTS users (
id INT AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_username (username),
INDEX idx_email (email),
INDEX idx_created_at (created_at)
);
CREATE TABLE IF NOT EXISTS orders (
id INT AUTO_INCREMENT PRIMARY KEY,
user_id INT NOT NULL,
order_total DECIMAL(10,2) NOT NULL,
order_status ENUM('pending', 'processing', 'shipped', 'delivered', 'cancelled'),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users(id),
INDEX idx_user_id (user_id),
INDEX idx_status (order_status),
INDEX idx_created_at (created_at)
);
"
echo "✅ MySQL setup complete!"
echo "📊 Databases created: performance_monitoring, app_production"
echo "👤 Users created: monitor, app_user"
echo "📋 Sample tables created for testing"
# Make the script executable and run it
sudo chmod +x /usr/local/bin/mysql-setup.sh
# Note: You'll be prompted for the MySQL root password multiple times
🔧 Step 2: MySQL Configuration Optimization
Time to supercharge your MySQL configuration! 🚀 We'll optimize memory usage, connection handling, and query performance.
# Backup original MySQL configuration
sudo cp /etc/my.cnf /etc/my.cnf.backup
# Create optimized MySQL configuration
sudo nano /etc/my.cnf.d/optimization.cnf
# Add this optimized configuration:
[mysqld]
# Basic Settings
default-storage-engine = InnoDB
sql_mode = STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO
# Connection Settings
max_connections = 200
max_connect_errors = 1000000
wait_timeout = 28800
interactive_timeout = 28800
# Buffer Pool Settings (adjust based on available RAM)
# Use 70-80% of available RAM for innodb_buffer_pool_size
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 8
innodb_log_file_size = 512M
innodb_log_buffer_size = 16M
# Query Cache (removed entirely in MySQL 8.0; uncomment these only on
# MySQL 5.7 or earlier, otherwise mysqld will refuse to start)
# query_cache_type = 1
# query_cache_size = 128M
# query_cache_limit = 2M
# Table Cache
table_open_cache = 4000
table_definition_cache = 2000
# Temporary Tables
tmp_table_size = 256M
max_heap_table_size = 256M
# MyISAM Settings
key_buffer_size = 256M
myisam_sort_buffer_size = 64M
# InnoDB Settings
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT
innodb_io_capacity = 1000
innodb_io_capacity_max = 2000
innodb_read_io_threads = 8
innodb_write_io_threads = 8
# Binary Logging
log_bin = mysql-bin
binlog_format = ROW
# expire_logs_days is deprecated in MySQL 8.0; use the seconds-based variable
binlog_expire_logs_seconds = 604800
max_binlog_size = 100M
# Slow Query Log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
# Error Log
log_error = /var/log/mysql/error.log
# General Log (disable in production)
# general_log = 1
# general_log_file = /var/log/mysql/general.log
[mysql]
default-character-set = utf8mb4
[client]
default-character-set = utf8mb4
Create MySQL log directories and restart:
# Create log directories
sudo mkdir -p /var/log/mysql
sudo chown mysql:mysql /var/log/mysql
# Restart MySQL to apply new configuration
sudo systemctl restart mysqld
# Check MySQL status
sudo systemctl status mysqld
# Verify configuration is loaded
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
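The 70-80%-of-RAM guidance for innodb_buffer_pool_size can be turned into a concrete starting value with a small calculation. This is a sketch only: it assumes a Linux host with /proc/meminfo and a machine dedicated to MySQL, so leave headroom for the OS and any other daemons.

```shell
#!/usr/bin/env bash
# Suggest an innodb_buffer_pool_size of ~70% of total RAM,
# rounded down to whole gigabytes (megabytes on small machines).
suggest_buffer_pool() {
    local total_kb=$1
    local target_kb=$(( total_kb * 70 / 100 ))
    if [ "$target_kb" -ge $(( 1024 * 1024 )) ]; then
        echo "innodb_buffer_pool_size = $(( target_kb / 1024 / 1024 ))G"
    else
        echo "innodb_buffer_pool_size = $(( target_kb / 1024 ))M"
    fi
}

# Feed it the machine's actual memory when /proc/meminfo is available
if [ -r /proc/meminfo ]; then
    suggest_buffer_pool "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
fi
```

Paste the printed line into /etc/my.cnf.d/optimization.cnf under [mysqld] in place of the static 2G value, then restart mysqld.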
Create a MySQL tuning analyzer script:
# Create MySQL tuning script
sudo nano /usr/local/bin/mysql-tuner.sh
#!/bin/bash
echo "🔍 MYSQL PERFORMANCE ANALYSIS"
echo "=============================="
# Connect to MySQL and run diagnostics. Status counters live in
# performance_schema on MySQL 5.7+; the query cache was removed in 8.0,
# so no query cache metrics are reported here.
mysql -u root -p -e "
SELECT
'Buffer Pool Usage' as Metric,
ROUND((SELECT (PagesData*PageSize)/(1024*1024*1024) FROM
(SELECT variable_value AS PagesData FROM performance_schema.global_status WHERE variable_name='Innodb_buffer_pool_pages_data') AS A,
(SELECT variable_value AS PageSize FROM performance_schema.global_status WHERE variable_name='Innodb_page_size') AS B
), 2) AS 'Used_GB',
ROUND((SELECT variable_value/(1024*1024*1024) FROM performance_schema.global_variables WHERE variable_name='innodb_buffer_pool_size'), 2) AS 'Total_GB'
UNION ALL
SELECT
'Connection Usage' as Metric,
(SELECT variable_value FROM performance_schema.global_status WHERE variable_name='Threads_connected') AS 'Current',
(SELECT variable_value FROM performance_schema.global_variables WHERE variable_name='max_connections') AS 'Maximum';
"
echo ""
echo "📊 Current MySQL Process List:"
mysql -u root -p -e "SHOW PROCESSLIST;"
echo ""
echo "🐌 Recent Slow Queries:"
if [ -f /var/log/mysql/slow.log ]; then
echo "Last 5 slow queries:"
tail -20 /var/log/mysql/slow.log | grep -A 5 "Query_time"
else
echo "Slow query log not found or not enabled"
fi
echo ""
echo "Analysis complete! ✅"
# Make executable
sudo chmod +x /usr/local/bin/mysql-tuner.sh
# Run it: sudo /usr/local/bin/mysql-tuner.sh
📈 Step 3: Query Optimization and Indexing
Let's make your queries fly! ⚡ Proper indexing and query optimization can improve performance by 1000x or more.
# Connect to MySQL for optimization work
mysql -u root -p app_production
Now let's create some test data and optimize queries:
-- Insert sample data for testing
INSERT INTO users (username, email) VALUES
('john_doe', '[email protected]'),
('jane_smith', '[email protected]'),
('bob_johnson', '[email protected]'),
('alice_wilson', '[email protected]'),
('charlie_brown', '[email protected]');
-- Insert sample orders
INSERT INTO orders (user_id, order_total, order_status) VALUES
(1, 99.99, 'delivered'),
(2, 149.50, 'shipped'),
(1, 75.25, 'delivered'),
(3, 200.00, 'processing'),
(4, 50.75, 'pending'),
(5, 300.00, 'delivered'),
(2, 125.99, 'cancelled');
-- Generate more test data (run this to create larger dataset)
DELIMITER //
CREATE PROCEDURE generate_test_data()
BEGIN
DECLARE i INT DEFAULT 1;
WHILE i <= 10000 DO
INSERT INTO users (username, email) VALUES
(CONCAT('user_', i), CONCAT('user', i, '@example.com'));
INSERT INTO orders (user_id, order_total, order_status) VALUES
(i, ROUND(RAND() * 500, 2),
ELT(FLOOR(1 + RAND() * 5), 'pending', 'processing', 'shipped', 'delivered', 'cancelled'));
SET i = i + 1;
END WHILE;
END //
DELIMITER ;
-- Run the procedure to generate test data
CALL generate_test_data();
Now let's analyze and optimize queries:
-- Enable query profiling
SET profiling = 1;
-- Example of a slow query (before optimization)
SELECT u.username, u.email, COUNT(o.id) as order_count, SUM(o.order_total) as total_spent
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id, u.username, u.email
ORDER BY total_spent DESC;
-- Check query execution time
SHOW PROFILES;
-- Analyze query execution plan
EXPLAIN FORMAT=JSON
SELECT u.username, u.email, COUNT(o.id) as order_count, SUM(o.order_total) as total_spent
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id, u.username, u.email
ORDER BY total_spent DESC\G
-- Add an optimized composite index. Note: users.created_at is already
-- covered by idx_created_at from the table definition, so only the
-- orders index is new.
CREATE INDEX idx_orders_user_total ON orders(user_id, order_total);
-- Test the query again after indexing
SELECT u.username, u.email, COUNT(o.id) as order_count, SUM(o.order_total) as total_spent
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id, u.username, u.email
ORDER BY total_spent DESC;
-- Check performance improvement
SHOW PROFILES;
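SHOW PROFILES gives overall timings; on MySQL 8.0.18 and later, EXPLAIN ANALYZE goes further by actually executing the statement and reporting per-step row counts and timings, which makes it easy to see whether the new index is really used. A short example against the same test tables:

```sql
-- EXPLAIN ANALYZE (MySQL 8.0.18+) runs the query and shows actual
-- rows and time per plan step, not just the optimizer's estimates
EXPLAIN ANALYZE
SELECT u.username, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.username;
```

Look for "index lookup" versus "table scan" lines in the output to confirm an index is being used.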
Create a query optimization script:
# Create query optimization helper
sudo nano /usr/local/bin/query-optimizer.sh
#!/bin/bash
analyze_slow_queries() {
echo "🐌 SLOW QUERY ANALYSIS"
echo "======================"
if [ -f /var/log/mysql/slow.log ]; then
echo "Top 10 slowest queries by total time:"
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
echo ""
echo "Top 10 most frequent slow queries:"
mysqldumpslow -s c -t 10 /var/log/mysql/slow.log
else
echo "❌ Slow query log not found. Enable it in MySQL configuration."
fi
}
check_index_usage() {
echo "🔍 INDEX USAGE ANALYSIS"
echo "======================="
mysql -u root -p -e "
SELECT
table_schema as 'Database',
table_name as 'Table',
round(((data_length + index_length) / 1024 / 1024), 2) as 'Size_MB',
table_rows as 'Rows'
FROM information_schema.TABLES
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
ORDER BY (data_length + index_length) DESC
LIMIT 10;
"
echo ""
echo "Tables without primary keys:"
mysql -u root -p -e "
SELECT
t.table_schema,
t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints tc
ON t.table_schema = tc.table_schema
AND t.table_name = tc.table_name
AND tc.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND tc.constraint_name IS NULL
AND t.table_type = 'BASE TABLE';
"
}
suggest_optimizations() {
echo "💡 OPTIMIZATION SUGGESTIONS"
echo "==========================="
echo "1. Check for unused indexes:"
mysql -u root -p -e "
SELECT
object_schema as 'Database',
object_name as 'Table',
index_name as 'Index'
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
AND count_star = 0
AND object_schema NOT IN ('mysql', 'performance_schema', 'information_schema')
ORDER BY object_schema, object_name;
"
echo ""
echo "2. Tables with the heaviest read I/O (candidates for full-scan investigation):"
mysql -u root -p -e "
SELECT
object_schema as 'Database',
object_name as 'Table',
count_read as 'Full_Scans'
FROM performance_schema.table_io_waits_summary_by_table
WHERE count_read > 0
AND object_schema NOT IN ('mysql', 'performance_schema', 'information_schema')
ORDER BY count_read DESC
LIMIT 10;
"
}
case "$1" in
slow)
analyze_slow_queries
;;
indexes)
check_index_usage
;;
suggest)
suggest_optimizations
;;
*)
echo "Usage: $0 {slow|indexes|suggest}"
echo " slow - Analyze slow queries"
echo " indexes - Check index usage"
echo " suggest - Get optimization suggestions"
;;
esac
# Make executable
sudo chmod +x /usr/local/bin/query-optimizer.sh
# Usage: sudo /usr/local/bin/query-optimizer.sh slow
🔄 Step 4: MySQL Replication Setup
Let's set up high-availability replication! 🌐 This provides data redundancy and allows read scaling across multiple servers.
First, configure the master server:
# Configure master server
sudo nano /etc/my.cnf.d/replication-master.cnf
[mysqld]
# Server identification
server-id = 1
log-bin = mysql-bin
binlog-format = ROW
# Databases to replicate (optional - replicate all if not specified)
# binlog-do-db = app_production
# Improved binary logging
expire-logs-days = 7
max-binlog-size = 100M
sync-binlog = 1
# For GTID-based replication (recommended)
gtid-mode = ON
enforce-gtid-consistency = ON
log-slave-updates = ON
# Restart MySQL to apply replication settings
sudo systemctl restart mysqld
# Create replication user
mysql -u root -p -e "
CREATE USER 'replicator'@'%' IDENTIFIED BY 'ReplicationPass789!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
"
# Get master status (save this information!)
mysql -u root -p -e "SHOW MASTER STATUS;"
Configure the slave server (on second AlmaLinux server):
# On slave server - configure slave settings
sudo nano /etc/my.cnf.d/replication-slave.cnf
[mysqld]
# Server identification (must be unique)
server-id = 2
log-bin = mysql-bin
binlog-format = ROW
# Read-only slave (optional)
read-only = 1
# Relay log settings
relay-log = relay-bin
relay-log-index = relay-bin.index
# For GTID-based replication
gtid-mode = ON
enforce-gtid-consistency = ON
log-slave-updates = ON
# Slave settings
slave-skip-errors = 1062
# Restart MySQL on slave
sudo systemctl restart mysqld
# Configure slave to connect to master
mysql -u root -p -e "
CHANGE MASTER TO
MASTER_HOST='192.168.1.100', -- Replace with master IP
MASTER_USER='replicator',
MASTER_PASSWORD='ReplicationPass789!',
MASTER_AUTO_POSITION=1;
START SLAVE;
"
# Check slave status
mysql -u root -p -e "SHOW SLAVE STATUS\G"
Create replication monitoring script:
# Create replication monitor
sudo nano /usr/local/bin/replication-monitor.sh
#!/bin/bash
check_master_status() {
echo "📡 MASTER SERVER STATUS"
echo "======================="
mysql -u root -p -e "SHOW MASTER STATUS;"
echo ""
echo "Connected slaves:"
mysql -u root -p -e "SHOW PROCESSLIST;" | grep "Binlog Dump"
echo ""
echo "Binary log files:"
mysql -u root -p -e "SHOW BINARY LOGS;"
}
check_slave_status() {
echo "🔄 SLAVE SERVER STATUS"
echo "======================"
# Get detailed slave status
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "(Slave_IO_Running|Slave_SQL_Running|Last_Error|Seconds_Behind_Master|Master_Log_File|Read_Master_Log_Pos)"
echo ""
echo "Replication health check:"
# Check if replication is running
IO_RUNNING=$(mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep "Slave_IO_Running:" | awk '{print $2}')
SQL_RUNNING=$(mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep "Slave_SQL_Running:" | awk '{print $2}')
if [ "$IO_RUNNING" = "Yes" ] && [ "$SQL_RUNNING" = "Yes" ]; then
echo "✅ Replication is healthy"
else
echo "❌ Replication has issues:"
echo " IO Thread: $IO_RUNNING"
echo " SQL Thread: $SQL_RUNNING"
fi
}
test_replication() {
echo "🧪 REPLICATION TEST"
echo "=================="
# Create test on master
mysql -u root -p -e "
USE app_production;
CREATE TABLE IF NOT EXISTS replication_test (
id INT AUTO_INCREMENT PRIMARY KEY,
test_data VARCHAR(100),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO replication_test (test_data) VALUES ('Replication test at $(date)');
"
echo "✅ Test data created on master"
echo "⏳ Waiting 5 seconds for replication..."
sleep 5
echo "🔍 Checking if data replicated to slave..."
# This would be run on slave server
echo "Run this on slave: mysql -u root -p -e 'SELECT * FROM app_production.replication_test ORDER BY id DESC LIMIT 1;'"
}
case "$1" in
master)
check_master_status
;;
slave)
check_slave_status
;;
test)
test_replication
;;
*)
echo "Usage: $0 {master|slave|test}"
echo " master - Check master server status"
echo " slave - Check slave server status"
echo " test - Test replication functionality"
;;
esac
# Make executable
sudo chmod +x /usr/local/bin/replication-monitor.sh
📊 Step 5: Database Monitoring and Maintenance
Time to set up comprehensive monitoring! 📈 Great databases need constant attention to perform their best.
# Install additional monitoring tools (mytop is packaged in EPEL)
sudo dnf install -y epel-release
sudo dnf install -y mytop
# Create database monitoring script
sudo nano /usr/local/bin/db-monitor.sh
#!/bin/bash
show_realtime_stats() {
echo "📊 REAL-TIME DATABASE STATISTICS"
echo "================================="
# Status counters moved from information_schema to performance_schema in MySQL 5.7+
mysql -u root -p -e "
SELECT
'Queries per second' as Metric,
ROUND(variable_value / (SELECT variable_value FROM performance_schema.global_status WHERE variable_name='Uptime'), 2) as Value
FROM performance_schema.global_status
WHERE variable_name='Questions'
UNION ALL
SELECT
'Connections' as Metric,
variable_value as Value
FROM performance_schema.global_status
WHERE variable_name='Threads_connected'
UNION ALL
SELECT
'InnoDB Buffer Pool Hit Rate %' as Metric,
ROUND((1 - (variable_value / (SELECT variable_value FROM performance_schema.global_status WHERE variable_name='Innodb_buffer_pool_read_requests'))) * 100, 2) as Value
FROM performance_schema.global_status
WHERE variable_name='Innodb_buffer_pool_reads'
UNION ALL
SELECT
'Slow Queries' as Metric,
variable_value as Value
FROM performance_schema.global_status
WHERE variable_name='Slow_queries';
"
}
check_locks_and_deadlocks() {
echo "🔒 LOCKS AND DEADLOCKS"
echo "======================"
echo "Current locks:"
# information_schema.innodb_lock_waits was removed in MySQL 8.0;
# the sys schema view gives the same waiting/blocking picture
mysql -u root -p -e "
SELECT
wait_started,
waiting_pid,
waiting_query,
blocking_pid,
blocking_query
FROM sys.innodb_lock_waits;
"
echo ""
echo "Deadlock information:"
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -A 20 "LATEST DETECTED DEADLOCK"
}
analyze_table_sizes() {
echo "📊 TABLE SIZE ANALYSIS"
echo "======================"
mysql -u root -p -e "
SELECT
table_schema as 'Database',
table_name as 'Table',
ROUND(((data_length + index_length) / 1024 / 1024), 2) as 'Size_MB',
ROUND((data_length / 1024 / 1024), 2) as 'Data_MB',
ROUND((index_length / 1024 / 1024), 2) as 'Index_MB',
table_rows as 'Rows'
FROM information_schema.TABLES
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
ORDER BY (data_length + index_length) DESC
LIMIT 20;
"
}
case "$1" in
stats)
show_realtime_stats
;;
locks)
check_locks_and_deadlocks
;;
tables)
analyze_table_sizes
;;
*)
echo "Usage: $0 {stats|locks|tables}"
echo " stats - Show real-time database statistics"
echo " locks - Check for locks and deadlocks"
echo " tables - Analyze table sizes"
;;
esac
Create automated maintenance script:
# Create maintenance script
sudo nano /usr/local/bin/db-maintenance.sh
#!/bin/bash
optimize_tables() {
echo "🔧 OPTIMIZING TABLES"
echo "===================="
# Get list of tables that need optimization (-N suppresses the column
# header so the generated file can be replayed as SQL)
mysql -u root -p -N -e "
SELECT CONCAT('OPTIMIZE TABLE ', table_schema, '.', table_name, ';') as 'Optimization_Commands'
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND engine = 'MyISAM'
" > /tmp/optimize_commands.sql
if [ -s /tmp/optimize_commands.sql ]; then
echo "Running table optimizations..."
mysql -u root -p < /tmp/optimize_commands.sql
echo "✅ Table optimization complete"
else
echo "ℹ️ No MyISAM tables found to optimize"
fi
# Analyze InnoDB tables
echo ""
echo "Analyzing InnoDB tables..."
mysql -u root -p -N -e "
SELECT CONCAT('ANALYZE TABLE ', table_schema, '.', table_name, ';') as 'Analysis_Commands'
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND engine = 'InnoDB'
" > /tmp/analyze_commands.sql
if [ -s /tmp/analyze_commands.sql ]; then
mysql -u root -p < /tmp/analyze_commands.sql
echo "✅ Table analysis complete"
fi
# Cleanup
rm -f /tmp/optimize_commands.sql /tmp/analyze_commands.sql
}
cleanup_logs() {
echo "🧹 CLEANING UP LOGS"
echo "==================="
# Purge old binary logs
mysql -u root -p -e "PURGE BINARY LOGS BEFORE DATE(NOW() - INTERVAL 7 DAY);"
echo "✅ Old binary logs purged"
# Rotate slow query log
if [ -f /var/log/mysql/slow.log ]; then
mv /var/log/mysql/slow.log /var/log/mysql/slow.log.$(date +%Y%m%d_%H%M%S)
mysqladmin -u root -p flush-logs
echo "✅ Slow query log rotated"
fi
# Clean up old log files (keep last 30 days)
find /var/log/mysql/ -name "*.log.*" -mtime +30 -delete
echo "✅ Old log files cleaned up"
}
backup_databases() {
echo "💾 BACKING UP DATABASES"
echo "======================="
BACKUP_DIR="/storage/mysql-backups/$(date +%Y%m%d)"
mkdir -p $BACKUP_DIR
# Get list of databases to backup
DATABASES=$(mysql -u root -p -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
for db in $DATABASES; do
echo "Backing up database: $db"
mysqldump -u root -p --single-transaction --routines --triggers $db > $BACKUP_DIR/$db.sql
gzip $BACKUP_DIR/$db.sql
echo "✅ $db backed up and compressed"
done
# Keep only last 7 days of backups
find /storage/mysql-backups/ -type d -mtime +7 -exec rm -rf {} \; 2>/dev/null
echo "✅ Database backup complete"
}
case "$1" in
optimize)
optimize_tables
;;
cleanup)
cleanup_logs
;;
backup)
backup_databases
;;
all)
optimize_tables
cleanup_logs
backup_databases
;;
*)
echo "Usage: $0 {optimize|cleanup|backup|all}"
echo " optimize - Optimize and analyze tables"
echo " cleanup - Clean up old logs"
echo " backup - Backup all databases"
echo " all - Run all maintenance tasks"
;;
esac
# Make scripts executable
sudo chmod +x /usr/local/bin/db-monitor.sh
sudo chmod +x /usr/local/bin/db-maintenance.sh
# Set up automated maintenance cron job
sudo crontab -e
# Add this line to run maintenance every Sunday at 2 AM:
# 0 2 * * 0 /usr/local/bin/db-maintenance.sh all
🎮 Quick Examples
Let's see your optimized database in action with real-world scenarios! 🎯
Example 1: E-commerce Database Optimization
-- Create e-commerce tables
USE app_production;
CREATE TABLE products (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
price DECIMAL(10,2) NOT NULL,
category_id INT,
stock_quantity INT DEFAULT 0,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
CREATE TABLE categories (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
parent_id INT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Add optimized indexes for e-commerce queries
CREATE INDEX idx_products_category ON products(category_id);
CREATE INDEX idx_products_active_price ON products(is_active, price);
-- Use a FULLTEXT index (not a plain B-tree index) so MATCH ... AGAINST searches work
CREATE FULLTEXT INDEX idx_products_name_fulltext ON products(name);
CREATE INDEX idx_categories_parent ON categories(parent_id);
-- Optimized query for product listing with pagination
EXPLAIN
SELECT p.id, p.name, p.price, c.name as category_name
FROM products p
INNER JOIN categories c ON p.category_id = c.id
WHERE p.is_active = TRUE
ORDER BY p.created_at DESC
LIMIT 20 OFFSET 0;
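One caveat worth knowing: LIMIT ... OFFSET gets slower as the offset grows, because the server still reads and discards every skipped row. A keyset (seek-method) variant avoids that by filtering on the last row the client saw. The literal timestamp and id below are placeholders for illustration, not values from this dataset:

```sql
-- Keyset pagination sketch: the client passes back the created_at and id
-- of the last product it displayed, and the next page starts after it
SELECT p.id, p.name, p.price, c.name AS category_name
FROM products p
INNER JOIN categories c ON p.category_id = c.id
WHERE p.is_active = TRUE
  AND (p.created_at, p.id) < ('2025-06-01 00:00:00', 12345)
ORDER BY p.created_at DESC, p.id DESC
LIMIT 20;
```

This stays fast on page 1,000 just as on page 1, at the cost of only supporting "next page" style navigation rather than arbitrary jumps.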
Example 2: Performance Testing Script
# Create performance testing script
sudo nano /usr/local/bin/db-performance-test.sh
#!/bin/bash
run_insert_test() {
echo "🚀 INSERT PERFORMANCE TEST"
echo "=========================="
start_time=$(date +%s.%3N)
mysql -u root -p app_production -e "
INSERT INTO users (username, email)
SELECT
CONCAT('testuser_', UUID_SHORT()) as username,
CONCAT('test', UUID_SHORT(), '@example.com') as email
FROM
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t1,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t2,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t3,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t4,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t5,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5) t6
LIMIT 10000;
"
# Six cross-joined derived tables yield 5^6 = 15,625 candidate rows, which
# LIMIT trims to exactly 10,000; UUID_SHORT() guarantees values that won't
# collide with the users table's UNIQUE constraints
"
end_time=$(date +%s.%3N)
duration=$(echo "$end_time - $start_time" | bc)
echo "✅ Inserted 10,000 records in $duration seconds"
echo "📊 Rate: $(echo "scale=2; 10000 / $duration" | bc) inserts/second"
}
run_select_test() {
echo "🔍 SELECT PERFORMANCE TEST"
echo "=========================="
start_time=$(date +%s.%3N)
mysql -u root -p app_production -e "
SELECT u.username, u.email, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.username, u.email
ORDER BY order_count DESC
LIMIT 1000;
" > /dev/null
end_time=$(date +%s.%3N)
duration=$(echo "$end_time - $start_time" | bc)
echo "✅ Complex SELECT query completed in $duration seconds"
}
run_concurrent_test() {
echo "🔄 CONCURRENT CONNECTION TEST"
echo "============================="
# Create test script for concurrent connections
cat > /tmp/concurrent_test.sql << 'EOF'
SELECT SLEEP(0.1);
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM orders;
INSERT INTO users (username, email) VALUES (CONCAT('concurrent_', CONNECTION_ID()), CONCAT('concurrent_', CONNECTION_ID(), '@test.com'));
EOF
echo "Starting 50 concurrent connections..."
for i in {1..50}; do
mysql -u root -p app_production < /tmp/concurrent_test.sql &
done
wait
echo "✅ Concurrent test completed"
# Cleanup
rm -f /tmp/concurrent_test.sql
mysql -u root -p app_production -e "DELETE FROM users WHERE username LIKE 'concurrent_%';"
}
case "$1" in
insert)
run_insert_test
;;
select)
run_select_test
;;
concurrent)
run_concurrent_test
;;
all)
run_insert_test
echo ""
run_select_test
echo ""
run_concurrent_test
;;
*)
echo "Usage: $0 {insert|select|concurrent|all}"
;;
esac
# Make executable and run tests
sudo chmod +x /usr/local/bin/db-performance-test.sh
# Usage: sudo /usr/local/bin/db-performance-test.sh all
Example 3: Database Health Dashboard
# Create database health dashboard
sudo nano /usr/local/bin/db-dashboard.sh
#!/bin/bash
while true; do
clear
echo "🗄️ MYSQL HEALTH DASHBOARD"
echo "=========================="
echo "📅 $(date)"
echo ""
# Server status
echo "🔌 SERVER STATUS:"
if systemctl is-active --quiet mysqld; then
echo " MySQL Service: ✅ Running"
else
echo " MySQL Service: ❌ Stopped"
fi
# Connection info
echo " Uptime: $(mysql -u root -p -e "SELECT CONCAT(FLOOR(variable_value/3600), 'h ', FLOOR((variable_value%3600)/60), 'm') as uptime FROM performance_schema.global_status WHERE variable_name='Uptime';" 2>/dev/null | tail -1)"
echo " Connections: $(mysql -u root -p -e "SELECT variable_value FROM performance_schema.global_status WHERE variable_name='Threads_connected';" 2>/dev/null | tail -1) active"
echo ""
echo "📊 PERFORMANCE METRICS:"
mysql -u root -p -e "
SELECT 'QPS' as Metric, ROUND(Questions/Uptime,2) as Value FROM
(SELECT variable_value as Questions FROM performance_schema.global_status WHERE variable_name='Questions') q,
(SELECT variable_value as Uptime FROM performance_schema.global_status WHERE variable_name='Uptime') u
UNION ALL
SELECT 'Buffer Pool Hit Rate %', ROUND((1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)) * 100, 2)
FROM
(SELECT variable_value as Innodb_buffer_pool_reads FROM performance_schema.global_status WHERE variable_name='Innodb_buffer_pool_reads') r,
(SELECT variable_value as Innodb_buffer_pool_read_requests FROM performance_schema.global_status WHERE variable_name='Innodb_buffer_pool_read_requests') req
UNION ALL
SELECT 'Slow Queries', variable_value FROM performance_schema.global_status WHERE variable_name='Slow_queries';
" 2>/dev/null | column -t
echo ""
echo "Press Ctrl+C to exit, or wait 10 seconds for refresh..."
sleep 10
done
# Make executable
sudo chmod +x /usr/local/bin/db-dashboard.sh
# Run: sudo /usr/local/bin/db-dashboard.sh
🚨 Fix Common Problems
Don't panic when database issues arise; here are solutions to common MySQL problems! 🛠️
Problem 1: MySQL Won't Start
Symptoms: Service fails to start, "Can't connect to MySQL server" errors
# Check MySQL error log
sudo tail -50 /var/log/mysql/error.log
# Common fixes:
# 1. Fix permissions
sudo chown -R mysql:mysql /var/lib/mysql
sudo chmod 750 /var/lib/mysql
# 2. Remove MySQL lock files
sudo rm -f /var/lib/mysql/mysql.sock.lock
sudo rm -f /var/lib/mysql/*.pid
# 3. Check disk space
df -h /var/lib/mysql
# 4. Start MySQL in safe mode for recovery
sudo mysqld_safe --user=mysql &
Problem 2: Slow Query Performance
Symptoms: Queries taking too long, high CPU usage
# Enable slow query log temporarily
mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON';"
mysql -u root -p -e "SET GLOBAL long_query_time = 0.5;"
# Analyze slow queries
sudo /usr/local/bin/query-optimizer.sh slow
# Common solutions:
# 1. Add missing indexes
# 2. Rewrite inefficient queries
# 3. Increase buffer pool size
# 4. Optimize table structure
Problem 3: Too Many Connections
Symptoms: "Too many connections" error messages
# Check current connections
mysql -u root -p -e "SHOW PROCESSLIST;"
# Kill long-running queries
mysql -u root -p -e "KILL <process_id>;"
# Increase max_connections temporarily
mysql -u root -p -e "SET GLOBAL max_connections = 500;"
# Permanent fix: add max_connections = 500 under the [mysqld] section of
# /etc/my.cnf.d/optimization.cnf (don't append it to the end of the file,
# where it would land under the [client] section and be ignored)
sudo nano /etc/my.cnf.d/optimization.cnf
sudo systemctl restart mysqld
Problem 4: Replication Lag
Symptoms: Slave behind master, data inconsistency
# Check replication status
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master"
# Solutions:
# 1. Optimize slave hardware
# 2. Use parallel replication
mysql -u root -p -e "
SET GLOBAL slave_parallel_workers = 4;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
STOP SLAVE;
START SLAVE;
"
# 3. Check for blocking queries
mysql -u root -p -e "SHOW PROCESSLIST;" | grep "Waiting for"
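That lag check is easy to script into an alert. A minimal sketch that pulls `Seconds_Behind_Master` out of the status output – the sample text and the 60-second threshold are my assumptions; on a live replica you would pipe the real `mysql` output in:

```shell
# Extract the replication lag figure from `SHOW SLAVE STATUS\G` output.
lag_seconds() {
  awk -F': ' '/Seconds_Behind_Master/ {print $2}'
}

# Illustrative sample of the \G-formatted status output:
status_sample="        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
   Seconds_Behind_Master: 120"

lag=$(lag_seconds <<< "$status_sample")
if [ "$lag" -gt 60 ]; then
  echo "WARNING: replica is ${lag}s behind"
fi
```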
📋 Simple Commands Summary
Here's your MySQL optimization quick reference guide! 🚀
| Task | Command | Purpose |
|---|---|---|
| MySQL Status | `sudo systemctl status mysqld` | Check MySQL service |
| Connect MySQL | `mysql -u root -p` | Connect to database |
| Show Databases | `SHOW DATABASES;` | List all databases |
| Show Processlist | `SHOW PROCESSLIST;` | See active connections |
| Check Slow Queries | `sudo /usr/local/bin/query-optimizer.sh slow` | Analyze slow queries |
| Optimize Tables | `sudo /usr/local/bin/db-maintenance.sh optimize` | Optimize database tables |
| Monitor Performance | `sudo /usr/local/bin/db-monitor.sh stats` | Show performance stats |
| Check Replication | `sudo /usr/local/bin/replication-monitor.sh slave` | Monitor replication |
| Backup Database | `sudo /usr/local/bin/db-maintenance.sh backup` | Backup databases |
| Performance Test | `sudo /usr/local/bin/db-performance-test.sh all` | Run performance tests |
| Health Dashboard | `sudo /usr/local/bin/db-dashboard.sh` | Real-time monitoring |
| Restart MySQL | `sudo systemctl restart mysqld` | Restart MySQL service |
💡 Tips for Success
Follow these expert strategies to become a database optimization master! 🚀
🎯 Optimization Strategy
- Monitor first, optimize second – Always establish baseline performance before making changes
- Index strategically – More indexes aren't always better; they slow down writes
- Query optimization – Focus on the 20% of queries that cause 80% of the problems
- Regular maintenance – Schedule weekly optimization and cleanup tasks
🔧 Performance Best Practices
- Use appropriate data types – Smaller data types mean faster queries and less storage
- Normalize wisely – Balance between normalization and query performance
- Partition large tables – Split huge tables into manageable chunks
- Monitor buffer pool hit ratio – Should be >99% for good performance
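As a worked example of that last bullet: a common approximation of the hit ratio uses two InnoDB counters, `Innodb_buffer_pool_reads` (requests that missed the pool and hit disk) and `Innodb_buffer_pool_read_requests`. A minimal sketch with made-up numbers – on a live server, pull the counters from `SHOW GLOBAL STATUS`:

```shell
# hit ratio (%) = (1 - disk_reads / total_read_requests) * 100
hit_ratio() {
  local disk_reads=$1 read_requests=$2
  awk -v r="$disk_reads" -v q="$read_requests" \
    'BEGIN { printf "%.2f", (1 - r / q) * 100 }'
}

# Illustrative counters: 1,000 disk reads out of 1,000,000 requests
hit_ratio 1000 1000000   # prints 99.90 - just under the 99%+ target zone
```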
🛡️ Reliability & Backup
- Set up replication – Always have at least one replica for high availability
- Test your backups – Backups are useless if you can't restore from them
- Monitor disk space – Databases grow quickly; plan for capacity
- Use transactions – Ensure data consistency with proper transaction handling
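"Test your backups" can be partially automated. A minimal sketch, assuming gzip-compressed logical dumps – `verify_backup` and the paths are placeholders of mine:

```shell
# First line of defense: confirm the archive itself is readable.
verify_backup() {
  gunzip -t "$1" 2>/dev/null && echo "OK" || echo "CORRUPT"
}

# verify_backup /backup/mysql/app_db.sql.gz
# A real test goes further: restore into a scratch database and spot-check
# row counts, e.g.:
# mysql -u root -p -e "CREATE DATABASE restore_test;"
# gunzip -c /backup/mysql/app_db.sql.gz | mysql -u root -p restore_test
```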
🚀 Advanced Techniques
- Connection pooling – Use tools like ProxySQL for better connection management
- Read/write splitting – Direct reads to slaves and writes to the master
- Caching layers – Implement Redis or Memcached for frequently accessed data
- Database sharding – Distribute data across multiple servers for massive scale
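Sharding usually starts with a deterministic key-to-shard mapping. A toy sketch using a CRC hash – `pick_shard` and the shard hostname are hypothetical, for illustration only:

```shell
# Map a key to one of N shards: hash the key, then take it modulo N.
# cksum's CRC is stable across runs, so a key always lands on the same shard.
pick_shard() {
  local key=$1 shards=$2
  local crc
  crc=$(printf '%s' "$key" | cksum | awk '{print $1}')
  echo $(( crc % shards ))
}

user_id="customer-42"
shard=$(pick_shard "$user_id" 4)
echo "route $user_id to db-shard-$shard.example.com"
```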
🎓 What You Learned
Congratulations! You've mastered MySQL database optimization on AlmaLinux! 🎉 Here's your impressive skill set:
✅ Installed and configured MySQL with enterprise-grade optimization settings
✅ Mastered query optimization with advanced indexing and EXPLAIN analysis
✅ Set up MySQL replication for high availability and read scaling
✅ Created comprehensive monitoring with real-time performance dashboards
✅ Built automated maintenance scripts for optimization and cleanup
✅ Implemented performance testing to measure and validate improvements
✅ Configured advanced MySQL features including GTID replication and binary logging
✅ Developed troubleshooting skills for common database problems
✅ Created backup and recovery procedures for data protection
✅ Built monitoring and alerting systems for proactive database management
🎯 Why This Matters
Database optimization expertise is incredibly valuable in today's data-driven world! 🌍
Every application depends on fast, reliable database performance. From e-commerce sites handling thousands of transactions per minute to analytics platforms processing massive datasets, optimized databases are the foundation of successful digital businesses.
These skills open doors to high-paying roles like Database Administrator, DevOps Engineer, Backend Developer, and Data Engineer. Companies desperately need people who can design, optimize, and maintain database systems that scale with their business growth.
Remember, a well-optimized database doesn't just make applications faster – it reduces costs, improves user experience, and enables businesses to make data-driven decisions quickly. You now have the power to build database systems that can handle any workload while maintaining peak performance.
Keep practicing, keep optimizing, and keep pushing the boundaries of database performance! Your expertise will power the next generation of amazing applications! 🏎️⚡🚀