
LVM Operations: Expand, Shrink, and Migrate Volumes

A complete guide to Logical Volume Manager operations—expanding partitions online, shrinking safely, and migrating directories with minimal downtime.

Case Snapshot

Situation:

While managing a fleet of RHEL servers, I found storage operations to be a recurring pain point. Applications would outgrow their allocated space, migrations required careful coordination, and shrink operations were dreaded because of the risk of data loss.

Issue:

Storage operations were handled inconsistently across the team. Some admins would reboot servers for partition changes, others would attempt risky online operations without proper checkpoints, and migrations often resulted in extended downtime windows.

Solution:

Documented a standardized LVM playbook covering the three core operations—expansion, shrinking, and migration—with clear pre-flight checks, execution steps, and rollback procedures.

Used In:

Used in Linux platform engineering for SAP deployments, PostgreSQL database servers, and application server fleets managed via Ansible.

Impact:

Reduced storage-related incidents by establishing consistent procedures. Expansion operations now happen without reboots, migrations complete within predictable maintenance windows, and shrink operations have clear safety gates.

Situation

During server provisioning and lifecycle management, storage operations are inevitable. Applications grow beyond their allocated space, new directories need dedicated volumes, and sometimes space needs to be reclaimed from over-provisioned filesystems.

I’ve handled three distinct LVM scenarios repeatedly across our server fleet:

  1. Expansion: A VM’s disk is extended at the hypervisor, but the Linux partition and LVM need to catch up—ideally without a reboot.
  2. Shrinking: An application was decommissioned and its volume can be reduced, but shrinking ext4 is risky if done wrong.
  3. Migration: An application directory like /opt is consuming root filesystem space and needs to move to a dedicated volume with minimal downtime.

Operation 1: Expanding LVM Online (No Reboot)

When a disk is expanded at the hypervisor level but growpart isn’t available, you can resize the partition manually using fdisk. This works without a reboot as long as the recreated partition starts at the exact same sector and the LVM signature is preserved.

The fdisk Procedure

Assuming we’re expanding /dev/sda3:

fdisk /dev/sda

Inside the interactive prompt:

  1. p — Print the partition table and note the starting sector of sda3
  2. d, then 3 — Delete partition 3 (the data in the disk blocks is untouched)
  3. n, then p, then 3 — Create a new primary partition 3
  4. First sector: accept the default only if it matches the start sector noted in step 1; otherwise type it explicitly
  5. Last sector: Press ENTER (uses the new disk end)
  6. When prompted about removing the LVM signature: Type N
  7. t, then 3, then 8e — Set the partition type back to Linux LVM
  8. w — Write changes and exit
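Because the start sector must survive the delete/recreate cycle, it is worth capturing it before touching fdisk and comparing it afterwards. A minimal sketch — the start_sector helper is mine, and it assumes the standard `fdisk -l` column layout (device in column 1, start sector in column 2):

```shell
# Hypothetical helper: pull the start sector of one partition out of
# `fdisk -l` output (column 2 for partitions without a boot flag).
start_sector() {
  awk -v dev="$1" '$1 == dev { print $2 }'
}

# As root, record the value before editing, then compare after `w`:
#   before=$(fdisk -l /dev/sda | start_sector /dev/sda3)
#   ... run the fdisk session above ...
#   after=$(fdisk -l /dev/sda | start_sector /dev/sda3)
#   [ "$before" = "$after" ] || echo "START SECTOR CHANGED - do not pvresize"
```

If the two values differ, stop before pvresize: recreating the partition at a different start sector is exactly the mistake that destroys the PV.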

Update the Kernel and Expand

# Reread partition table without reboot
partprobe /dev/sda

# Expand the physical volume
pvresize /dev/sda3

# Verify new free space
vgs

# Expand the LV and filesystem in one command
lvextend -r -l +100%FREE /dev/mapper/vg_system-root

Key insight: The -r flag tells lvextend to resize the filesystem automatically. No separate resize2fs (or xfs_growfs on XFS) step is needed.
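Before running lvextend, it also pays to confirm that pvresize actually handed free extents to the volume group; vgs can report the free physical-extent count directly. A small guard sketch (the has_free_extents helper is mine, assuming the standard vg_free_count output field):

```shell
# Hypothetical guard: succeed only if the volume group reports free
# physical extents; reads `vgs --noheadings -o vg_free_count` on stdin.
has_free_extents() {
  read -r count
  [ "${count:-0}" -gt 0 ]
}

# As root:
#   vgs --noheadings -o vg_free_count vg_system | has_free_extents \
#     || { echo "no free extents - did pvresize run?" >&2; exit 1; }
```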


Operation 2: Shrinking an ext4 Filesystem

Unlike expansion, shrinking ext4 requires unmounting—the filesystem must be offline. This is a destructive operation if done incorrectly.

Pre-flight Checklist

  • Full backup exists
  • Maintenance window confirmed
  • Application services stopped
  • Filesystem unmounted
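The last checklist item can be enforced rather than trusted. A minimal guard sketch, assuming a Linux /proc/mounts and matching on the device path as it appears there (the is_mounted name is mine):

```shell
# Hypothetical pre-flight guard: check /proc/mounts for the device
# before any shrink step runs.
is_mounted() {
  grep -qs "^$1 " /proc/mounts
}

# Abort early rather than shrink a mounted filesystem:
#   if is_mounted /dev/mapper/vg_data-lv_data; then
#     echo "still mounted - aborting" >&2
#     exit 1
#   fi
```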

The Shrink Procedure

Target: Shrink /data (mapped to /dev/mapper/vg_data-lv_data) to 10G.

# 1. Unmount
umount /data

# 2. Force filesystem check (mandatory)
e2fsck -f /dev/mapper/vg_data-lv_data

# 3. Shrink filesystem SMALLER than the target LV (safety margin;
#    resize2fs takes whole numbers only, so 9.5G is written as 9728M)
resize2fs /dev/mapper/vg_data-lv_data 9728M

# 4. Shrink the logical volume to final size
lvreduce -L 10G /dev/mapper/vg_data-lv_data
# Confirm when prompted

# 5. Expand filesystem to fill the LV exactly
resize2fs /dev/mapper/vg_data-lv_data

# 6. Remount and verify
mount /data
df -h /data

Why shrink the filesystem smaller first? If the LV ever ends up smaller than the filesystem on it, you corrupt data. The two-step shrink (filesystem to 9.5G, then LV to 10G, then filesystem grown back to fill the LV) guarantees the filesystem always sits inside the LV.
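The safety margin can be derived rather than guessed. A sketch that computes a 5% smaller intermediate resize2fs target from the final LV size (the function name is mine; sizes are in MiB, so 9.5G is 9728M):

```shell
# Hypothetical helper: given the final LV size in MiB, print an
# intermediate filesystem target 5% smaller, so the filesystem is
# guaranteed to fit inside the LV after lvreduce.
shrink_target_mib() {
  final=$1
  echo $(( final * 95 / 100 ))
}

# Example: a 10 GiB (10240 MiB) final LV
#   resize2fs /dev/mapper/vg_data-lv_data "$(shrink_target_mib 10240)M"
#   lvreduce -L 10240M /dev/mapper/vg_data-lv_data
#   resize2fs /dev/mapper/vg_data-lv_data   # grow back to fill the LV
```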


Operation 3: Migrating a Directory to a New Volume

When /opt or another application directory is consuming root filesystem space, migrate it to a dedicated LVM volume. The key is using rsync for an initial sync before the maintenance window.

Preparation (Before Maintenance Window)

# Create the new LV
lvcreate -n opt_vol -L 10G vg_app
mkfs.ext4 /dev/mapper/vg_app-opt_vol

# Mount temporarily
mkdir /mnt/new_opt
mount /dev/mapper/vg_app-opt_vol /mnt/new_opt

# Initial sync (application still running; -z only helps over a
# network, so it is dropped for this local copy; -HAX preserves
# hard links, ACLs, and xattrs, including SELinux contexts)
rsync -aHAX /opt/ /mnt/new_opt/

Execution (During Maintenance Window)

# Stop application services
systemctl stop myapp-agent.service

# Final sync (catches changes since the initial pass; --delete also
# removes files that disappeared from the source in the meantime)
rsync -aHAX --delete /opt/ /mnt/new_opt/

# Switchover
umount /mnt/new_opt
mv /opt /opt_old
mkdir /opt
mount /dev/mapper/vg_app-opt_vol /opt

# Restore SELinux contexts (critical!)
restorecon -Rv /opt

# Make permanent (/dev/mapper names are stable; the UUID from blkid
# works as well)
echo '/dev/mapper/vg_app-opt_vol  /opt  ext4  defaults,nodev  1 2' >> /etc/fstab

# Start services
systemctl start myapp-agent.service

Cleanup (Days Later)

After confirming stability:

rm -rf /opt_old
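Before the irreversible rm -rf, a quick size comparison adds confidence that the final rsync really converged. A sketch (the trees_match helper is mine; du -sb is GNU coreutils and compares apparent byte counts, not checksums):

```shell
# Hypothetical sanity check: compare apparent byte counts of two trees.
trees_match() {
  a=$(du -sb "$1" | cut -f1)
  b=$(du -sb "$2" | cut -f1)
  [ "$a" -eq "$b" ]
}

# Only delete once both trees report the same size:
#   trees_match /opt /opt_old && rm -rf /opt_old
```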

Quick Reference Table

| Operation | Online/Offline | Risk Level | Key Command |
| --- | --- | --- | --- |
| Expand LV | Online | Low | lvextend -r -l +100%FREE /dev/mapper/vg-lv |
| Shrink LV | Offline | High | resize2fs → lvreduce → resize2fs |
| Migrate | Online + brief offline | Medium | rsync → stop → rsync → switchover |

Common Failure Modes

| Failure | Cause | Prevention |
| --- | --- | --- |
| Data loss during shrink | LV shrunk below filesystem size | Always shrink the filesystem first, with margin |
| LVM signature lost | Answered Y to “remove signature” in fdisk | Always answer N |
| SELinux denials after migration | Security contexts not restored | Run restorecon -Rv on the mount point |
| “Target is busy” during unmount | Process holding the filesystem open | Use lsof +D /mount to identify it |

Architecture Diagram

LVM Migration Flow

This diagram shows the migration flow from a high-risk state (application data on rootfs) to an isolated state using a dedicated Logical Volume.

Post-Specific Engineering Lens

For this post, the primary objective is: Change storage allocation safely with reversible checkpoints.

Implementation decisions for this case

  • Documented three distinct operations based on real production scenarios
  • Established clear pre-flight checklists for high-risk operations
  • Used rsync incremental sync to minimize maintenance windows

Practical command path

# Pre-change baseline
lsblk -f
lvdisplay; vgdisplay; pvdisplay

# Post-change verification
df -h
systemctl status <affected-service>

Validation Matrix

| Validation goal | What to baseline | What confirms success |
| --- | --- | --- |
| Functional stability | Service status, mount points | systemctl --failed empty; df -h shows expected sizes |
| Operational safety | Backup exists, rollback plan documented | Backup verified restorable |
| Production readiness | Application starts, data accessible | Application health check passes |

Failure Modes and Mitigations

| Failure mode | Why it appears | Mitigation |
| --- | --- | --- |
| Incorrect device target | Similar device names | Verify with lsblk before each command |
| Insufficient free extents | Math error in size planning | Pre-calculate with vgs and lvs |
| Rollback ambiguity | No clear recovery path | Document exact rollback commands before starting |

Recruiter-Readable Impact Summary

  • Scope: Storage operations for enterprise Linux fleet
  • Execution quality: Clear procedures reduce incident risk
  • Outcome signal: Reproducible runbooks for LVM operations