🐳 Docker Installation and Management on AlmaLinux: Container Revolution
"It works on my machine!" Sound familiar? 🤔 That excuse died the day I discovered Docker! Our deployment nightmares ended when we containerized everything. No more dependency hell, no more environment conflicts, just pure portable bliss! Today I'm showing you how to master Docker on AlmaLinux. Get ready to ship code, not excuses! 🚢
🤔 Why Docker Changes Everything
Docker isn't just trendy - it's transformative! Here's why everyone's hooked:
- 🚀 Lightning deployments - Seconds, not hours
- 📦 Perfect isolation - No more conflicts
- 🌍 Total portability - Runs anywhere
- 💾 Tiny footprint - Containers, not VMs
- 🎯 Microservices ready - Scale what you need
- 🛠️ DevOps friendly - CI/CD paradise
True story: We reduced deployment time from 2 hours to 30 seconds. Our rollback strategy? One command. Docker saved our sanity! 🧠
🎯 What You Need
Before we containerize everything, ensure you have:
- ✅ AlmaLinux 8/9 server
- ✅ 4GB+ RAM minimum
- ✅ 20GB+ free disk space
- ✅ Root or sudo access
- ✅ 45 minutes to become a Docker ninja
- ✅ Coffee (containers need caffeine! ☕)
🚀 Step 1: Install Docker Engine
Let's get the whale swimming! 🌊
Remove Old Versions
# Remove any old Docker installations
sudo dnf remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine \
podman \
runc
# Clean up
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
Install Docker CE
# Install required packages
sudo dnf install -y yum-utils device-mapper-persistent-data lvm2
# Add Docker repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker Engine
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Start Docker
sudo systemctl enable --now docker
# Verify installation
sudo docker version
sudo docker run hello-world
# Add user to docker group (logout/login required)
sudo usermod -aG docker $USER
newgrp docker
# Test without sudo
docker ps
Configure Docker Daemon
# Create daemon configuration
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://mirror.gcr.io"],
  "insecure-registries": [],
  "debug": false,
  "experimental": true,
  "features": {
    "buildkit": true
  },
  "live-restore": true,
  "default-runtime": "runc",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
EOF
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Configure firewall
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --permanent --zone=trusted --add-interface=br-+
sudo firewall-cmd --reload
🔧 Step 2: Essential Docker Commands
Master the Docker CLI! 🎮
Container Management
# Run containers
docker run -d --name webserver -p 80:80 nginx
docker run -it --rm alpine sh
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:8
# Container lifecycle
docker ps # List running containers
docker ps -a # List all containers
docker start container_name # Start container
docker stop container_name # Stop container
docker restart container_name # Restart container
docker kill container_name # Force stop
docker rm container_name # Remove container
docker rm -f $(docker ps -aq) # Remove all containers
# Container inspection
docker logs container_name # View logs
docker logs -f container_name # Follow logs
docker exec -it container_name bash # Enter container
docker inspect container_name # Container details
docker stats # Resource usage
docker top container_name # Running processes
docker port container_name # Port mappings
# Copy files
docker cp file.txt container:/path/
docker cp container:/path/file.txt .
Image Management
# Pull images
docker pull nginx:latest
docker pull alpine:3.18
docker pull redis:7-alpine
# List images
docker images
docker images -a
# Remove images
docker rmi image_name
docker rmi $(docker images -q) # Remove all images
# Tag images
docker tag source_image:tag new_image:tag
# Save and load images
docker save -o image.tar image_name
docker load -i image.tar
# Search Docker Hub
docker search nginx
Network Management
# List networks
docker network ls
# Create networks
docker network create myapp-network
docker network create --driver bridge --subnet 172.20.0.0/16 custom-net
# Connect containers
docker network connect myapp-network container_name
docker network disconnect myapp-network container_name
# Inspect network
docker network inspect bridge
# Remove network
docker network rm network_name
Volume Management
# Create volumes
docker volume create mydata
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.1,rw --opt device=:/path/to/dir nfs-volume
# List volumes
docker volume ls
# Inspect volume
docker volume inspect mydata
# Remove volumes
docker volume rm mydata
docker volume prune # Remove unused volumes
# Use volumes in containers
docker run -v mydata:/data alpine
docker run -v /host/path:/container/path:ro alpine # Read-only mount
🚀 Step 3: Build Custom Images
Create your own containers! 🏗️
Basic Dockerfile
# Create a Node.js application image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
# Change ownership
RUN chown -R nodejs:nodejs /app
# Switch to non-root user
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js || exit 1
# Start application
CMD ["node", "server.js"]
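COPY . . sends the entire build context to the daemon, including node_modules and .git. A .dockerignore keeps builds fast and images clean (a minimal sketch; extend it for your project):

```
node_modules
npm-debug.log
.git
.gitignore
.env
Dockerfile
docker-compose.yml
*.md
```

Place it next to the Dockerfile; entries use the same glob syntax as .gitignore.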
Multi-stage Build
# Multi-stage build for Go application
# Build stage
FROM golang:1.20-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
# Final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy binary from builder
COPY --from=builder /build/app .
# Add non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 8080
CMD ["./app"]
Build and Push Images
# Build image
docker build -t myapp:latest .
docker build -t myapp:v1.0 -f Dockerfile.prod .
docker build --no-cache -t myapp:latest .
# Tag for registry
docker tag myapp:latest registry.example.com/myapp:latest
# Login to registry
docker login registry.example.com
# Push image
docker push registry.example.com/myapp:latest
# Build with BuildKit (faster)
DOCKER_BUILDKIT=1 docker build -t myapp:latest .
⚙ Step 4: Docker Compose
Orchestrate multi-container apps! 🎭
Install Docker Compose
# Docker Compose v2 is included as a plugin (installed above with docker-compose-plugin)
docker compose version
# Legacy standalone installation (if needed)
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Docker Compose Example
# docker-compose.yml
version: '3.9'
services:
  web:
    build: .
    image: myapp:latest
    container_name: web-app
    ports:
      - "80:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
      - REDIS_HOST=redis
    depends_on:
      - db
      - redis
    volumes:
      - ./uploads:/app/uploads
      - app-data:/app/data
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  db:
    image: postgres:15-alpine
    container_name: postgres-db
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    container_name: redis-cache
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - app-network
    restart: unless-stopped
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - web
    networks:
      - app-network
    restart: unless-stopped
volumes:
  app-data:
  postgres-data:
  redis-data:
networks:
  app-network:
    driver: bridge
Docker Compose Commands
# Start services
docker compose up
docker compose up -d # Detached mode
docker compose up --scale web=3 # Scale service
# Stop services
docker compose down
docker compose down -v # Remove volumes
# View logs
docker compose logs
docker compose logs -f web
# Execute commands
docker compose exec db psql -U admin
docker compose run --rm web npm test
# Build/rebuild
docker compose build
docker compose build --no-cache
# Service management
docker compose ps
docker compose restart web
docker compose stop db
🎮 Quick Examples
Example 1: Complete Web Stack 🌐
#!/bin/bash
# Deploy complete web application stack
# Create project structure
mkdir -p webapp/{app,nginx,db}
cd webapp
# Create Node.js app
cat > app/server.js << 'EOF'
const express = require('express');
const redis = require('redis');
const { Pool } = require('pg');

const app = express();
// node-redis v4 takes a connection URL and needs an explicit connect()
const redisClient = redis.createClient({ url: 'redis://redis:6379' });
redisClient.connect().catch(console.error);
const pgPool = new Pool({
  host: 'db',
  database: 'webapp',
  user: 'admin',
  password: 'secret'
});

app.get('/', async (req, res) => {
  const visits = await redisClient.incr('visits');
  const result = await pgPool.query('SELECT NOW()');
  res.json({
    visits,
    time: result.rows[0].now,
    container: process.env.HOSTNAME
  });
});

app.listen(3000, () => console.log('Server running on port 3000'));
EOF
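The Dockerfile in this example copies package*.json into the image, but the script never creates one, so the build would fail. A minimal package.json closes that gap (the dependency versions are illustrative; pin what you actually test against):

```shell
# Create package.json for the sample app (versions are examples)
mkdir -p app
cat > app/package.json << 'EOF'
{
  "name": "webapp",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "^4.18.0",
    "redis": "^4.6.0",
    "pg": "^8.11.0"
  }
}
EOF
```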
# Create Dockerfile
cat > app/Dockerfile << 'EOF'
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
# Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
version: '3.9'
services:
  app:
    build: ./app
    deploy:
      replicas: 3
    environment:
      - NODE_ENV=production
    depends_on:
      - db
      - redis
    networks:
      - webapp
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
    networks:
      - webapp
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: webapp
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - webapp
  redis:
    image: redis:7-alpine
    networks:
      - webapp
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    networks:
      - webapp
volumes:
  db-data:
networks:
  webapp:
EOF
# Create Nginx config
cat > nginx/nginx.conf << 'EOF'
events { worker_connections 1024; }
http {
    upstream app {
        least_conn;
        server app:3000;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
EOF
# Deploy
docker compose up -d
echo "✅ Web stack deployed! Access at http://localhost"
Example 2: CI/CD Pipeline with Docker 🔄
#!/bin/bash
# Docker-based CI/CD pipeline
# Create Jenkins with Docker-in-Docker
cat > jenkins-compose.yml << 'EOF'
version: '3.9'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    privileged: true
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins-data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
    networks:
      - cicd
  registry:
    image: registry:2
    container_name: docker-registry
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
    networks:
      - cicd
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    hostname: gitlab.local
    ports:
      - "8081:80"
      - "8443:443"
      - "8022:22"
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab
    networks:
      - cicd
  runner:
    image: gitlab/gitlab-runner:latest
    container_name: gitlab-runner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - runner-config:/etc/gitlab-runner
    networks:
      - cicd
volumes:
  jenkins-data:
  registry-data:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
  runner-config:
networks:
  cicd:
    driver: bridge
EOF
# Create build pipeline script
cat > build-pipeline.sh << 'EOF'
#!/bin/bash
# Automated build and deploy
PROJECT=$1
VERSION=$2
echo "🔨 Building $PROJECT:$VERSION"
# Build image
docker build -t $PROJECT:$VERSION .
# Run tests
docker run --rm $PROJECT:$VERSION npm test
# Tag for registry
docker tag $PROJECT:$VERSION localhost:5000/$PROJECT:$VERSION
# Push to registry
docker push localhost:5000/$PROJECT:$VERSION
# Deploy to production (docker service update requires a Docker Swarm
# manager running a service named production_app)
docker service update --image localhost:5000/$PROJECT:$VERSION production_app
echo "✅ Deployed $PROJECT:$VERSION to production"
EOF
chmod +x build-pipeline.sh
# Deploy CI/CD stack
docker compose -f jenkins-compose.yml up -d
Example 3: Monitoring Stack 📊
#!/bin/bash
# Complete Docker monitoring solution
cat > monitoring-compose.yml << 'EOF'
version: '3.9'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    ports:
      - "9090:9090"
    networks:
      - monitoring
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_INSTALL_PLUGINS=redis-datasource
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
    ports:
      - "3000:3000"
    networks:
      - monitoring
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    privileged: true
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /dev/disk:/dev/disk:ro
    ports:
      - "8080:8080"
    networks:
      - monitoring
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
    networks:
      - monitoring
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer-data:/data
    ports:
      - "9000:9000"
    networks:
      - monitoring
volumes:
  prometheus-data:
  grafana-data:
  portainer-data:
networks:
  monitoring:
    driver: bridge
EOF
# Create Prometheus config
cat > prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'docker'
    static_configs:
      - targets: ['172.17.0.1:9323']
EOF
# Enable Docker metrics (WARNING: this replaces /etc/docker/daemon.json -
# merge these keys into your existing config instead of overwriting it)
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
EOF
sudo systemctl restart docker
# Deploy monitoring
docker compose -f monitoring-compose.yml up -d
echo "✅ Monitoring stack deployed!"
echo "📊 Grafana: http://localhost:3000 (admin/admin)"
echo "📈 Prometheus: http://localhost:9090"
echo "🐳 Portainer: http://localhost:9000"
🚨 Fix Common Problems
Problem 1: Cannot Connect to Docker Daemon ❌
Permission denied error?
# Check Docker status
sudo systemctl status docker
# Add user to docker group
sudo usermod -aG docker $USER
# Apply changes
newgrp docker
# Or logout and login again
Problem 2: No Space Left ❌
Disk full with images/containers?
# Clean up everything
docker system prune -a --volumes
# Remove specific items
docker container prune
docker image prune -a
docker volume prune
docker network prune
# Check disk usage
docker system df
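Pruning by hand gets forgotten. A weekly cron job keeps disk usage in check (a sketch - the 80% threshold, the 7-day `until` filter, and the weekly schedule are choices, not requirements):

```shell
#!/bin/sh
# Save as /etc/cron.weekly/docker-prune and mark it executable (chmod +x).
# Prune only when the filesystem holding /var/lib/docker is over 80% full;
# --volumes is deliberately omitted so named volumes survive.
usage=$(df --output=pcent /var/lib/docker 2>/dev/null | tail -1 | tr -dc '0-9')
if [ "${usage:-0}" -gt 80 ]; then
    docker system prune -af --filter "until=168h"
fi
```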
Problem 3: Container Can't Access Internet ❌
DNS or network issues?
# Check DNS in container
docker run --rm alpine nslookup google.com
# Fix DNS (note: this replaces /etc/docker/daemon.json - merge the "dns"
# key into any existing config instead of overwriting it)
echo '{"dns": ["8.8.8.8", "8.8.4.4"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# Check firewall
sudo firewall-cmd --zone=trusted --add-interface=docker0 --permanent
sudo firewall-cmd --reload
Problem 4: Containers Keep Restarting ❌
Crash loop?
# Check logs
docker logs container_name --tail 50
# Check restart policy
docker inspect container_name | grep -A 5 RestartPolicy
# Update restart policy
docker update --restart=no container_name
# Debug interactively
docker run -it --entrypoint /bin/sh image_name
📋 Simple Commands Summary

| Task | Command |
|---|---|
| 🚀 Run container | docker run -d nginx |
| 📋 List containers | docker ps -a |
| 🛑 Stop container | docker stop container |
| 📜 View logs | docker logs -f container |
| 🗑️ Remove all | docker system prune -a |
| 🏗️ Build image | docker build -t app . |
| 📦 Save image | docker save app > app.tar |
| 🚀 Compose up | docker compose up -d |
💡 Tips for Success
- Use Alpine Images 🏔️ - Smaller is better
- Multi-stage Builds 🏗️ - Minimize final size
- Health Checks 🏥 - Know when containers die
- Use Volumes for Data 💾 - Never store data inside containers
- One Process per Container 1️⃣ - Unix philosophy
- Tag Everything 🏷️ - Never use latest in production
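To make "never use latest" concrete: pin an explicit version tag, or better, a digest. The digest below is a placeholder - resolve the real one with docker images --digests:

```dockerfile
# Good: an explicit version tag
FROM node:18.19-alpine

# Better: tag plus digest - immutable even if the tag is re-pushed
# (replace <digest> with the value reported by `docker images --digests`)
# FROM node:18.19-alpine@sha256:<digest>
```

The same applies to images you push: tag releases with a version (myapp:1.4.2), not just latest.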
Pro tip: Always use docker compose for multi-container apps. Managing individual containers by hand is a nightmare! 😭
🎓 What You Learned
You're now a Docker master! You can:
- ✅ Install and configure Docker
- ✅ Manage containers and images
- ✅ Build custom images
- ✅ Use Docker Compose
- ✅ Create multi-container apps
- ✅ Implement monitoring
- ✅ Troubleshoot issues
🎯 Why This Matters
Docker provides:
- 🚀 Instant deployments
- 📦 Perfect isolation
- 🔄 Easy rollbacks
- 💰 Resource efficiency
- 🌍 Cloud portability
- 🛠️ DevOps excellence
We ship 50+ deployments daily with zero downtime. Before Docker? Lucky to do 2 per week. Containers didn't just improve our workflow - they revolutionized it! 🚀
Remember: Friends don't let friends deploy without containers! 🐳
Happy containerizing! May your builds be fast and your containers be stable! 📦✨