LVM Configuration and Management on AlmaLinux: Complete Storage Guide

Published Jul 27, 2025

Master Logical Volume Management (LVM) on AlmaLinux. Learn to create, resize, and manage flexible storage solutions with volume groups, logical volumes, snapshots, and advanced features.

26 min read

Logical Volume Management (LVM) is a powerful storage virtualization technology that provides a flexible abstraction layer between your physical storage devices and the filesystems that use them. Unlike traditional partitioning, LVM allows you to dynamically resize volumes, create snapshots, migrate data between physical disks, and manage storage with unprecedented flexibility. This comprehensive guide explores LVM implementation on AlmaLinux, from basic concepts to advanced configurations.

Understanding LVM Architecture

The LVM Storage Stack

LVM operates through a hierarchical architecture that abstracts physical storage into manageable logical units:

┌─────────────────────────────────────────────────────────┐
│                    Filesystems (ext4, xfs)              │
├─────────────────────────────────────────────────────────┤
│                  Logical Volumes (LV)                    │
│     /dev/vg_data/lv_home    /dev/vg_data/lv_var       │
├─────────────────────────────────────────────────────────┤
│                   Volume Groups (VG)                     │
│                      vg_data                            │
├─────────────────────────────────────────────────────────┤
│                  Physical Volumes (PV)                   │
│    /dev/sda1        /dev/sdb1        /dev/sdc1         │
├─────────────────────────────────────────────────────────┤
│                  Physical Devices                        │
│     /dev/sda         /dev/sdb         /dev/sdc         │
└─────────────────────────────────────────────────────────┘

Key Advantages of LVM

  1. Dynamic Volume Management

    • Resize volumes online without downtime
    • Add or remove storage on demand
    • No need to backup/restore for resizing
  2. Storage Pooling

    • Combine multiple disks into single storage pools
    • Abstract physical layout from logical organization
    • Efficient space utilization
  3. Advanced Features

    • Instant snapshots for backups
    • Thin provisioning for overcommit
    • Built-in RAID functionality
    • Live data migration
  4. Flexibility

    • Move data between physical devices
    • Stripe data across multiple disks
    • Mirror data for redundancy

LVM Components and Terminology

Physical Volumes (PV)

Physical volumes are the foundation of LVM, representing block devices or partitions:

# Physical volume characteristics
- Can be entire disks (/dev/sdb)
- Can be partitions (/dev/sda2)
- Can be RAID arrays (/dev/md0)
- Divided into Physical Extents (PE)
- Default PE size: 4MB

Volume Groups (VG)

Volume groups aggregate physical volumes into storage pools:

# Volume group properties
- Collection of one or more PVs
- Storage pool for creating LVs
- Defines extent size for all LVs
- Can span multiple physical devices
- Dynamic expansion by adding PVs

Logical Volumes (LV)

Logical volumes are virtual block devices carved from volume groups:

# Logical volume features
- Created from free space in VG
- Can span multiple PVs
- Resizable (grow/shrink)
- Support various layouts (linear, striped, mirrored)
- Accessible as /dev/vg_name/lv_name

Extents

The smallest allocatable unit in LVM:

# Extent types
Physical Extent (PE): Fixed-size chunks on PV
Logical Extent (LE): Maps to PE, same size within VG
Default size: 4MB (configurable)
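The extent sizes above make LV sizing simple integer arithmetic. A small sketch (illustrative numbers, assuming the default 4MB PE size) showing how a size in gigabytes maps to an extent count:

```shell
#!/bin/bash
# Extent arithmetic: how many default-size (4MB) physical extents
# does a 10G logical volume occupy?
PE_SIZE_MB=4     # default PE size
LV_SIZE_GB=10    # requested LV size

LV_SIZE_MB=$((LV_SIZE_GB * 1024))
EXTENTS=$((LV_SIZE_MB / PE_SIZE_MB))

echo "A ${LV_SIZE_GB}G LV occupies ${EXTENTS} extents of ${PE_SIZE_MB}MB"
# → 2560 extents, so `lvcreate -l 2560` and `lvcreate -L 10G`
#   request the same size in a VG with the default PE size.
```

This is why `lvcreate` accepts either `-L` (size) or `-l` (extent count): both resolve to the same number of extents.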

Installing and Enabling LVM

Checking LVM Installation

# Verify LVM installation
rpm -qa | grep lvm2

# Check kernel support
lsmod | grep dm_mod

# View LVM version
lvm version

Installing LVM Tools

# Install LVM2 package
sudo dnf install lvm2 -y

# Install additional utilities
sudo dnf install lvm2-libs lvm2-dbusd device-mapper-persistent-data -y

# Enable and start LVM services
sudo systemctl enable lvm2-monitor.service
sudo systemctl start lvm2-monitor.service

# Verify services
sudo systemctl status lvm2-monitor.service

Loading LVM Modules

# Load device mapper modules
sudo modprobe dm_mod
sudo modprobe dm_mirror
sudo modprobe dm_snapshot

# Make modules persistent
echo "dm_mod" | sudo tee -a /etc/modules-load.d/lvm.conf
echo "dm_mirror" | sudo tee -a /etc/modules-load.d/lvm.conf
echo "dm_snapshot" | sudo tee -a /etc/modules-load.d/lvm.conf

Creating Physical Volumes

Preparing Disks

# List available disks
lsblk
sudo fdisk -l

# Create partition table (optional)
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 100%

# Or use entire disk without partitioning
# This is often preferred for LVM

Creating Physical Volumes

# Create PV on entire disk
sudo pvcreate /dev/sdb

# Create PV on partition
sudo pvcreate /dev/sdc1

# Create multiple PVs at once
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd

# Force creation (use with caution)
sudo pvcreate -f /dev/sdb

# Create with specific metadata size
sudo pvcreate --metadatasize 10M /dev/sdb

Viewing Physical Volume Information

# Display all PVs
sudo pvs

# Detailed PV information
sudo pvdisplay

# Specific PV details
sudo pvdisplay /dev/sdb

# Show PV with attributes
sudo pvs -v

# Display PV allocation
sudo pvs -o+pv_used

# Scan for PVs
sudo pvscan

Physical Volume Management

# Check PV consistency
sudo pvck /dev/sdb

# Remove PV (must not be in use)
sudo pvremove /dev/sdb

# Move extents off PV
sudo pvmove /dev/sdb

# Change PV attributes
sudo pvchange -x n /dev/sdb  # Disallow allocation

Managing Volume Groups

Creating Volume Groups

# Create VG with single PV
sudo vgcreate vg_data /dev/sdb

# Create VG with multiple PVs
sudo vgcreate vg_storage /dev/sdb /dev/sdc /dev/sdd

# Create with a larger extent size (-s is short for --physicalextentsize)
sudo vgcreate -s 16M vg_large /dev/sdb

# Long-option form of the same setting
sudo vgcreate --physicalextentsize 8M vg_custom /dev/sdc

Displaying Volume Group Information

# List all VGs
sudo vgs

# Detailed VG information
sudo vgdisplay

# Specific VG details
sudo vgdisplay vg_data

# Show VG with all attributes
sudo vgs -v

# Display VG allocation
sudo vgs -o +vg_free_count,vg_extent_count

# Scan for VGs
sudo vgscan

Extending Volume Groups

# Add PV to existing VG
sudo vgextend vg_data /dev/sdd

# Add multiple PVs
sudo vgextend vg_data /dev/sde /dev/sdf

# Verify extension
sudo vgdisplay vg_data

Reducing Volume Groups

# Remove PV from VG (must be empty)
sudo vgreduce vg_data /dev/sdd

# Move data before removing
sudo pvmove /dev/sdd
sudo vgreduce vg_data /dev/sdd

# Remove missing PVs
sudo vgreduce --removemissing vg_data

Managing Volume Groups

# Rename VG
sudo vgrename vg_old vg_new

# Change VG attributes
sudo vgchange -a y vg_data  # Activate
sudo vgchange -a n vg_data  # Deactivate

# Export VG (for moving to another system)
sudo vgchange -a n vg_data
sudo vgexport vg_data

# Import VG
sudo vgimport vg_data
sudo vgchange -a y vg_data

# Remove VG (all LVs must be removed first)
sudo vgremove vg_data

Creating Logical Volumes

Basic Logical Volume Creation

# Create LV with specific size
sudo lvcreate -L 10G -n lv_home vg_data

# Create LV with percentage of VG
sudo lvcreate -l 50%VG -n lv_var vg_data

# Create LV with percentage of free space
sudo lvcreate -l 100%FREE -n lv_backup vg_data

# Create LV with specific number of extents
sudo lvcreate -l 2560 -n lv_custom vg_data

Creating Striped Logical Volumes

# Create striped LV across 3 PVs
sudo lvcreate -L 30G -i 3 -I 64 -n lv_striped vg_data

# -i: number of stripes
# -I: stripe size in KB

# Specify PVs for striping
sudo lvcreate -L 20G -i 2 -n lv_fast vg_data /dev/sdb /dev/sdc

Creating Mirrored Logical Volumes

# Create mirrored LV
sudo lvcreate -L 10G -m1 -n lv_mirror vg_data

# Create mirror with specific PVs
sudo lvcreate -L 10G -m1 -n lv_secure vg_data /dev/sdb /dev/sdc

# Create mirror with separate log
sudo lvcreate -L 10G -m1 --mirrorlog disk -n lv_logged vg_data

Formatting and Mounting Logical Volumes

# Create filesystem
sudo mkfs.xfs /dev/vg_data/lv_home
sudo mkfs.ext4 /dev/vg_data/lv_var

# Create mount points
sudo mkdir -p /mnt/home
sudo mkdir -p /mnt/var

# Mount LVs
sudo mount /dev/vg_data/lv_home /mnt/home
sudo mount /dev/vg_data/lv_var /mnt/var

# Add to fstab for persistent mounting
echo "/dev/vg_data/lv_home /mnt/home xfs defaults 0 0" | sudo tee -a /etc/fstab
echo "/dev/vg_data/lv_var /mnt/var ext4 defaults 0 0" | sudo tee -a /etc/fstab

# Mount all from fstab
sudo mount -a

Extending and Reducing Volumes

Extending Logical Volumes

# Extend LV by specific size
sudo lvextend -L +5G /dev/vg_data/lv_home

# Extend LV to specific size
sudo lvextend -L 20G /dev/vg_data/lv_home

# Extend LV by percentage
sudo lvextend -l +50%FREE /dev/vg_data/lv_home

# Extend and resize filesystem (ext4)
sudo lvextend -L +10G -r /dev/vg_data/lv_var

# Manual filesystem resize
# For XFS
sudo xfs_growfs /mnt/home

# For ext4
sudo resize2fs /dev/vg_data/lv_var

Reducing Logical Volumes

# WARNING: Always backup data before reducing!

# For ext4 filesystems
# 1. Unmount the filesystem
sudo umount /mnt/var

# 2. Check filesystem
sudo e2fsck -f /dev/vg_data/lv_var

# 3. Resize filesystem
sudo resize2fs /dev/vg_data/lv_var 15G

# 4. Reduce LV
sudo lvreduce -L 15G /dev/vg_data/lv_var

# 5. Remount
sudo mount /dev/vg_data/lv_var /mnt/var

# For XFS (cannot be reduced - must backup/recreate)

Resizing with Different Filesystems

# Online resize for XFS (grow only)
sudo lvextend -L +5G /dev/vg_data/lv_xfs
sudo xfs_growfs /mount/point

# Online resize for ext4
sudo lvextend -L +5G /dev/vg_data/lv_ext4
sudo resize2fs /dev/vg_data/lv_ext4

# Btrfs resize
sudo lvextend -L +5G /dev/vg_data/lv_btrfs
sudo btrfs filesystem resize max /mount/point

LVM Snapshots

Creating Snapshots

# Create snapshot of existing LV
sudo lvcreate -L 1G -s -n lv_home_snap /dev/vg_data/lv_home

# Create snapshot with percentage of origin
sudo lvcreate -l 20%ORIGIN -s -n lv_var_snap /dev/vg_data/lv_var

# Create read-only snapshot
sudo lvcreate -L 1G -s -p r -n lv_backup_snap /dev/vg_data/lv_backup
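Because a snapshot stores only the blocks that change on the origin after it is taken, it can be far smaller than the origin. A rough sizing sketch (the 20% figure is a common rule of thumb, not an LVM requirement):

```shell
#!/bin/bash
# Rule-of-thumb snapshot sizing: reserve room for roughly 20% of
# the origin to change while the snapshot exists (illustrative).
ORIGIN_GB=10
CHANGE_PCT=20

SNAP_GB=$((ORIGIN_GB * CHANGE_PCT / 100))
echo "For a ${ORIGIN_GB}G origin, start with a ${SNAP_GB}G snapshot"
```

If `snap_percent` reported by `lvs` nears 100, extend the snapshot with `lvextend`; a snapshot that fills completely becomes invalid and is dropped.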

Managing Snapshots

# List snapshots
sudo lvs -o +lv_snapshot_invalid,snap_percent

# Monitor snapshot usage
sudo lvdisplay /dev/vg_data/lv_home_snap

# Extend snapshot
sudo lvextend -L +500M /dev/vg_data/lv_home_snap

# Mount snapshot
sudo mkdir -p /mnt/snap
sudo mount -o ro /dev/vg_data/lv_home_snap /mnt/snap

Snapshot Restoration

# Merge snapshot back to origin
# WARNING: This will revert origin to snapshot state

# 1. Unmount origin and snapshot
sudo umount /mnt/home
sudo umount /mnt/snap

# 2. Merge snapshot
sudo lvconvert --merge /dev/vg_data/lv_home_snap

# 3. Remount origin
sudo mount /dev/vg_data/lv_home /mnt/home

Automated Snapshot Management

#!/bin/bash
# snapshot_backup.sh

VG="vg_data"
LV="lv_home"
SNAP_SIZE="2G"
SNAP_NAME="${LV}_snap_$(date +%Y%m%d_%H%M%S)"
BACKUP_DIR="/backup"

# Create snapshot
lvcreate -L $SNAP_SIZE -s -n $SNAP_NAME /dev/$VG/$LV

# Mount snapshot
mkdir -p /mnt/snapshot
mount -o ro /dev/$VG/$SNAP_NAME /mnt/snapshot

# Backup snapshot
rsync -av /mnt/snapshot/ $BACKUP_DIR/

# Cleanup
umount /mnt/snapshot
lvremove -f /dev/$VG/$SNAP_NAME

Thin Provisioning

Creating Thin Pools

# Create thin pool
sudo lvcreate -L 100G --thinpool tp_data vg_data

# Create thin pool with custom chunk size
sudo lvcreate -L 100G --thinpool tp_fast --chunksize 128K vg_data

# Create thin pool with metadata size
sudo lvcreate -L 100G --thinpool tp_large --poolmetadatasize 1G vg_data

Creating Thin Volumes

# Create thin LV
sudo lvcreate -V 50G --thin -n lv_thin1 vg_data/tp_data

# Create multiple thin LVs
sudo lvcreate -V 50G --thin -n lv_thin2 vg_data/tp_data
sudo lvcreate -V 50G --thin -n lv_thin3 vg_data/tp_data

# Create thin LV with specific size
sudo lvcreate -V 200G --thin -n lv_overcommit vg_data/tp_data
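The volumes above, three 50G thin LVs plus a 200G one carved from a 100G pool, overcommit the pool. A quick sketch of the overcommit ratio (illustrative sizes from the examples above):

```shell
#!/bin/bash
# Overcommit check: total virtual size of thin LVs vs. pool size.
POOL_GB=100
THIN_SIZES_GB="50 50 50 200"   # virtual sizes from the examples above

TOTAL=0
for size in $THIN_SIZES_GB; do
    TOTAL=$((TOTAL + size))
done

RATIO=$(awk "BEGIN { printf \"%.1f\", $TOTAL / $POOL_GB }")
echo "Virtual: ${TOTAL}G, pool: ${POOL_GB}G, overcommit: ${RATIO}x"
# → 3.5x overcommitted: the pool must be extended (or space
#   reclaimed) before real usage approaches 100G.
```

Overcommit is the point of thin provisioning, but it only works if the pool is monitored and extended before it fills, as shown in the next subsection.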

Managing Thin Provisioning

# Monitor thin pool usage
sudo lvs -o +data_percent,metadata_percent

# Extend thin pool
sudo lvextend -L +50G vg_data/tp_data

# Enable monitoring for the thin pool
sudo lvchange --monitor y vg_data/tp_data

# Configure auto-extension in the activation section of
# /etc/lvm/lvm.conf (edit the existing file; do not overwrite it):
#
#   activation {
#       thin_pool_autoextend_threshold = 80
#       thin_pool_autoextend_percent = 20
#   }

Thin Provisioning Best Practices

# Monitor script for thin pools
#!/bin/bash
# monitor_thin_pools.sh

THRESHOLD=80

lvs --noheadings -S 'segtype=thin-pool' -o vg_name,lv_name,data_percent,metadata_percent | while read vg lv data meta; do
    # Skip any line without a usage figure
    [ -z "$data" ] && continue

    data_int=${data%.*}
    meta_int=${meta%.*}

    if [ "$data_int" -gt "$THRESHOLD" ]; then
        echo "WARNING: Thin pool $vg/$lv data usage at $data%"
        # Send alert or auto-extend
    fi

    if [ "$meta_int" -gt "$THRESHOLD" ]; then
        echo "WARNING: Thin pool $vg/$lv metadata usage at $meta%"
    fi
done

RAID with LVM

Creating RAID Logical Volumes

# Create RAID1 (mirror)
sudo lvcreate --type raid1 -m 1 -L 20G -n lv_raid1 vg_data

# Create RAID5
sudo lvcreate --type raid5 -i 3 -L 30G -n lv_raid5 vg_data

# Create RAID6
sudo lvcreate --type raid6 -i 4 -L 40G -n lv_raid6 vg_data

# Create RAID10
sudo lvcreate --type raid10 -m 1 -i 2 -L 20G -n lv_raid10 vg_data
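Usable capacity differs sharply between the RAID types above. With lvcreate, `-i` counts data stripes only; raid5 adds one parity device and raid6 adds two on top of that count. A sketch assuming equally sized 10G devices (illustrative numbers):

```shell
#!/bin/bash
# Usable capacity per RAID type for the stripe counts used above,
# assuming every underlying device contributes 10G (illustrative).
DEV_GB=10

raid1_usable=$DEV_GB            # -m 1 → 2 devices, 1 copy usable
raid5_usable=$((3 * DEV_GB))    # -i 3 → 4 devices, 3 carry data
raid6_usable=$((4 * DEV_GB))    # -i 4 → 6 devices, 4 carry data
raid10_usable=$((2 * DEV_GB))   # -i 2 -m 1 → 4 devices, half usable

echo "raid1=${raid1_usable}G raid5=${raid5_usable}G" \
     "raid6=${raid6_usable}G raid10=${raid10_usable}G"
```

The trade-off: raid5/raid6 give more usable space per device, while raid1/raid10 survive device loss with less rebuild cost.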

RAID Configuration Options

# Create RAID with specific stripe size
sudo lvcreate --type raid5 -i 3 -I 64 -L 30G -n lv_raid5_fast vg_data

# Create RAID with specific PVs
sudo lvcreate --type raid1 -m 1 -L 10G -n lv_raid1_specific vg_data /dev/sdb /dev/sdc

# Create RAID with region size
sudo lvcreate --type raid1 -m 1 --regionsize 2M -L 10G -n lv_raid1_region vg_data

Managing RAID Volumes

# Check RAID sync status
sudo lvs -o +raid_sync_action,sync_percent

# Repair RAID
sudo lvchange --syncaction repair vg_data/lv_raid1

# Replace failed device
sudo lvconvert --replace /dev/sdb vg_data/lv_raid1

# Convert linear to RAID1
sudo lvconvert --type raid1 -m 1 vg_data/lv_linear

# Add mirror to RAID1
sudo lvconvert -m +1 vg_data/lv_raid1

RAID Monitoring

#!/bin/bash
# raid_monitor.sh

# Check RAID health
lvs --noheadings --separator '|' -o vg_name,lv_name,health_status,raid_sync_action | while IFS='|' read -r vg lv health sync; do
    # Trim the whitespace lvs pads around each field
    vg=$(echo $vg); lv=$(echo $lv); health=$(echo $health); sync=$(echo $sync)

    # health_status is empty for healthy volumes; any value means trouble
    if [ -n "$health" ]; then
        echo "WARNING: RAID volume $vg/$lv health: $health"
        # Send alert
    fi

    if [ -n "$sync" ] && [ "$sync" != "idle" ]; then
        echo "INFO: RAID volume $vg/$lv sync action: $sync"
    fi
done

Data Migration

Migrating Data Between PVs

# Move all data from one PV
sudo pvmove /dev/sdb

# Move specific LV
sudo pvmove -n vg_data/lv_home /dev/sdb /dev/sdc

# Move with progress
sudo pvmove -i 5 /dev/sdb

# Background move
sudo pvmove -b /dev/sdb

# Check move status
sudo lvs -o +move_pv

Migrating Volume Groups

# Export VG from source system
sudo vgchange -a n vg_migrate
sudo vgexport vg_migrate

# Move disks to new system
# Import on target system
sudo pvscan
sudo vgimport vg_migrate
sudo vgchange -a y vg_migrate

Online Volume Migration

# Create mirror of existing LV on new PV
sudo lvconvert --type mirror -m 1 vg_data/lv_migrate /dev/new_disk

# Wait for sync
watch 'sudo lvs -o +copy_percent vg_data/lv_migrate'

# Remove old mirror leg
sudo lvconvert -m 0 vg_data/lv_migrate /dev/old_disk

# Convert back to linear
sudo lvconvert --type linear vg_data/lv_migrate

Performance Optimization

Optimizing Extent Size

# For large files (databases, media)
sudo vgcreate -s 32M vg_large_files /dev/sdb

# For many small files
sudo vgcreate -s 1M vg_small_files /dev/sdc

# Check current extent size
sudo vgdisplay vg_data | grep "PE Size"

Striping for Performance

# Create striped LV for performance
sudo lvcreate -L 100G -i 4 -I 128 -n lv_fast vg_data

# Optimal stripe size:
# 64K - General purpose
# 128K - Large sequential I/O
# 256K - Very large files

# Test performance
sudo fio --name=test --filename=/dev/vg_data/lv_fast \
    --size=1G --rw=randread --bs=4k --iodepth=32 --direct=1
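A write that spans a full stripe (stripe count × stripe size) can be committed without read-modify-write overhead; for the 4-way, 128K layout created above:

```shell
#!/bin/bash
# Full-stripe width for the striped LV above: 4 stripes of 128K.
STRIPES=4
STRIPE_KB=128

FULL_STRIPE_KB=$((STRIPES * STRIPE_KB))
echo "Full stripe width: ${FULL_STRIPE_KB}K"
# Aligning application I/O (e.g. database block size, fio --bs)
# to this width maximizes sequential throughput on striped LVs.
```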

Cache Volumes

# Create cache pool
sudo lvcreate -L 10G -n lv_cache_pool vg_data /dev/fast_ssd
sudo lvcreate -L 100M -n lv_cache_meta vg_data /dev/fast_ssd
sudo lvconvert --type cache-pool --poolmetadata vg_data/lv_cache_meta vg_data/lv_cache_pool

# Add cache to existing LV
sudo lvconvert --type cache --cachepool vg_data/lv_cache_pool vg_data/lv_slow

I/O Scheduling Optimization

# Set optimal scheduler for LVM
# (modern multi-queue kernels use "none" rather than the old "noop")
echo none | sudo tee /sys/block/sdb/queue/scheduler

# Set read-ahead
sudo blockdev --setra 1024 /dev/vg_data/lv_sequential

# Persistent configuration
cat << EOF | sudo tee /etc/udev/rules.d/60-lvm-scheduler.rules
ACTION=="add|change", KERNEL=="dm-[0-9]*", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="dm-[0-9]*", ATTR{queue/read_ahead_kb}="1024"
EOF

Backup and Recovery

LVM Metadata Backup

# Manual metadata backup
sudo vgcfgbackup vg_data

# Backup all VGs
sudo vgcfgbackup

# Backup to specific location
sudo vgcfgbackup -f /backup/vg_data_backup vg_data

# View backup files
ls -la /etc/lvm/backup/

Restoring LVM Configuration

# Restore from backup
sudo vgcfgrestore -f /etc/lvm/backup/vg_data vg_data

# List available backups
sudo vgcfgrestore -l vg_data

# Restore from specific backup
sudo vgcfgrestore -f /etc/lvm/archive/vg_data_00001.vg vg_data

Complete Backup Strategy

#!/bin/bash
# lvm_backup.sh

BACKUP_DIR="/backup/lvm"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p $BACKUP_DIR/$DATE

# Backup all VG metadata
vgcfgbackup -f $BACKUP_DIR/$DATE/%s_backup

# Backup PV information
pvdisplay > $BACKUP_DIR/$DATE/pvdisplay.txt
pvs -o +pv_uuid > $BACKUP_DIR/$DATE/pvs.txt

# Backup VG information
vgdisplay > $BACKUP_DIR/$DATE/vgdisplay.txt
vgs -o +vg_uuid > $BACKUP_DIR/$DATE/vgs.txt

# Backup LV information
lvdisplay > $BACKUP_DIR/$DATE/lvdisplay.txt
lvs -o +lv_uuid > $BACKUP_DIR/$DATE/lvs.txt

# Backup device mappings
dmsetup table > $BACKUP_DIR/$DATE/dmsetup_table.txt
dmsetup info > $BACKUP_DIR/$DATE/dmsetup_info.txt

# Create tarball
tar -czf $BACKUP_DIR/lvm_backup_$DATE.tar.gz -C $BACKUP_DIR $DATE

# Cleanup
rm -rf $BACKUP_DIR/$DATE

echo "LVM backup completed: $BACKUP_DIR/lvm_backup_$DATE.tar.gz"

Disaster Recovery Procedures

# Recovery checklist
# 1. Boot from rescue media
# 2. Install LVM tools
# 3. Scan for PVs
pvscan

# 4. Restore VG metadata if needed
vgcfgrestore vg_data

# 5. Activate VGs
vgchange -a y

# 6. Check and mount filesystems
fsck /dev/vg_data/lv_home
mount /dev/vg_data/lv_home /mnt

# 7. Verify data integrity

Troubleshooting Common Issues

Device Missing or Failed

# Check for missing PVs
sudo vgs -o +vg_missing_pv_count

# Repair VG with missing PV
sudo vgreduce --removemissing vg_data

# Force removal of missing PV
sudo vgreduce --removemissing --force vg_data

# Recreate missing PV (if metadata exists)
sudo pvcreate --uuid "xxxxx" --restorefile /etc/lvm/backup/vg_data /dev/sdb

Cannot Remove LV

# Check if LV is open
sudo lsof /dev/vg_data/lv_problem
sudo fuser -m /dev/vg_data/lv_problem

# Force unmount
sudo umount -l /mount/point

# Deactivate LV
sudo lvchange -a n vg_data/lv_problem

# Remove LV
sudo lvremove vg_data/lv_problem

Metadata Corruption

# Check metadata
sudo vgck vg_data

# Repair metadata
sudo vgck --updatemetadata vg_data

# Restore from archive
sudo vgcfgrestore --list vg_data
sudo vgcfgrestore -f /etc/lvm/archive/vg_data_00010.vg vg_data

Performance Issues

# Check for fragmentation
sudo pvs -o +pv_pe_count,pv_pe_alloc_count

# Display LV segments
sudo lvdisplay -m /dev/vg_data/lv_fragmented

# Defragment by moving data
sudo pvmove --alloc anywhere /dev/sdb

Boot Issues with LVM

# In rescue mode
# Load LVM modules
modprobe dm_mod
modprobe dm_mirror

# Scan and activate
vgscan
vgchange -a y

# Mount root filesystem
mount /dev/vg_root/lv_root /mnt

# Chroot and repair
chroot /mnt
dracut -f  # Rebuild initramfs

Best Practices

Planning and Design

  1. Naming Conventions
# Use descriptive names
vg_<purpose>     # vg_database, vg_backup
lv_<function>    # lv_mysql_data, lv_apache_logs

# Avoid generic names
# Bad: vg1, lv1
# Good: vg_webserver, lv_www_data
  2. Sizing Guidelines
# Leave free space in VG (20%)
# For snapshots and growth
# Don't allocate 100% initially

# Appropriate extent sizes:
# 4MB - Default, good for most uses
# 16-32MB - Large files, databases
# 1-2MB - Many small files
  3. Physical Volume Selection
# Use whole disks when possible
# Avoid partitions for flexibility
# Similar disk sizes in same VG
# Separate VGs for different purposes

Security Considerations

  1. Encryption with LVM
# Create encrypted PV
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup open /dev/sdb crypt_sdb
sudo pvcreate /dev/mapper/crypt_sdb

# Add to VG
sudo vgcreate vg_secure /dev/mapper/crypt_sdb
  2. Access Control
# Set proper permissions
sudo chmod 600 /etc/lvm/backup/*
sudo chmod 700 /etc/lvm/backup

# Restrict LVM commands
# Use sudo rules for specific commands

Maintenance Procedures

  1. Regular Tasks
# Weekly metadata backup
0 2 * * 0 /usr/sbin/vgcfgbackup

# Monthly consistency check
0 3 1 * * /usr/sbin/vgck

# Monitor thin pool usage
*/5 * * * * /usr/local/bin/check_thin_pools.sh
  2. Documentation
# Document configuration
lvs > /root/lvm_layout.txt
pvs >> /root/lvm_layout.txt
vgs >> /root/lvm_layout.txt

# Keep change log
echo "$(date): Extended lv_home by 10G" >> /root/lvm_changes.log

Performance Guidelines

  1. Optimal Configurations
# Database servers
- Stripe across multiple PVs
- Separate logs and data
- Use cache volumes for hot data

# File servers
- Large extent sizes
- Consider thin provisioning
- Regular monitoring

# Virtual machines
- Thin provisioning
- Snapshot before updates
- Monitor overcommit
  2. Monitoring Script
#!/bin/bash
# lvm_monitor.sh

# Check utilization
echo "=== Volume Group Usage ==="
vgs -o vg_name,vg_size,vg_free,vg_free_percent

echo -e "\n=== Logical Volume Usage ==="
df -h | grep "/dev/mapper"

echo -e "\n=== Thin Pool Usage ==="
lvs -o lv_name,data_percent,metadata_percent | grep "tp_"

echo -e "\n=== RAID Status ==="
lvs -o lv_name,raid_sync_action,sync_percent | grep "raid"

Conclusion

LVM provides unparalleled flexibility in managing storage on AlmaLinux systems. From basic volume management to advanced features like thin provisioning and integrated RAID, LVM enables administrators to adapt storage configurations to changing needs without downtime. Key takeaways include:

  • Always plan storage layout considering future growth
  • Regular metadata backups are crucial for disaster recovery
  • Monitor thin provisioning to avoid overcommit issues
  • Use appropriate RAID levels for data protection
  • Test recovery procedures before emergencies

By mastering LVM, you gain the ability to manage storage efficiently, respond to changing requirements dynamically, and maintain high availability for critical data. Whether managing a single server or enterprise storage infrastructure, LVM’s features provide the tools needed for modern storage administration.