This guide outlines the steps to recover software RAID arrays and LVM volumes after a system reinstall, or in any similar situation where an existing storage configuration needs to be reactivated.
1. Prerequisites
Install the necessary packages:
sudo apt-get update
sudo apt-get install mdadm lvm2
If this is a recovery session, avoid writing to the disks until you have confirmed the array layout and the LVM metadata.
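As an extra precaution, you can mark the RAID member partitions read-only at the block layer while you investigate. The partition names below are the ones from the example configuration; adjust them for your hardware, and restore read-write access before actually assembling or repairing anything:
# Block writes to the member partitions during investigation
sudo blockdev --setro /dev/nvme1n1p3 /dev/nvme3n1p3
# Undo once you are confident in the layout
sudo blockdev --setrw /dev/nvme1n1p3 /dev/nvme3n1p3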
2. Identifying RAID Arrays and LVM Volumes
Check for RAID Arrays
# Check for existing RAID arrays
sudo mdadm --detail --scan
cat /proc/mdstat
You can also inspect candidate member devices directly:
sudo mdadm --examine /dev/nvme1n1p3 /dev/nvme3n1p3
Adjust the device names to match your machine. Use lsblk, blkid, or your installer/live environment’s disk utility to confirm the correct devices before assembling anything.
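For example, a wider lsblk listing makes RAID members and LVM physical volumes easy to spot (RAID members report FSTYPE linux_raid_member, LVM physical volumes report LVM2_member):
lsblk -o NAME,SIZE,TYPE,FSTYPE,UUID,MOUNTPOINT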
Identify LVM Configuration
# Examine partition types to identify RAID and LVM members
sudo blkid
# Check for LVM physical volumes
sudo pvs
# Check volume groups
sudo vgs
# Check logical volumes
sudo lvs
# Get detailed information about physical volumes
sudo pvdisplay --maps
If the RAID device has not been assembled yet, LVM may not show the physical volumes that live on top of it. Assemble the RAID array first, then rescan LVM.
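A quick rescan after assembly looks like this:
# Once the RAID device exists (see the next section), rescan LVM
sudo pvscan
sudo vgscan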
3. Activating RAID Arrays
# Assemble all detected RAID arrays
sudo mdadm --assemble --scan
# Verify successful assembly
cat /proc/mdstat
# Get detailed information about a specific RAID array
# Replace md0 with the actual device name if different
sudo mdadm --detail /dev/md0
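Once the array assembles correctly, you can record it in mdadm.conf so it is assembled automatically on future boots. The paths below assume a Debian/Ubuntu layout, matching the apt-get instructions above:
# Append the detected array definitions to mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the array is assembled early during boot
sudo update-initramfs -u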
If normal assembly fails, inspect the metadata first:
sudo mdadm --examine /dev/nvme1n1p3 /dev/nvme3n1p3
If the members are correct and the array is known to be consistent, you can try forcing assembly:
# Only use this if normal assembly fails and you are sure these are the correct members
sudo mdadm --assemble --force /dev/md0 /dev/nvme1n1p3 /dev/nvme3n1p3
For RAID-0 in particular, member order matters. If the array does not assemble cleanly, stop and verify the original layout before trying destructive commands.
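One way to verify the recorded layout is to pull the relevant fields out of the member metadata; the field names below match mdadm's --examine output for 1.x metadata:
# Compare level, device count, chunk size, and role across the members
sudo mdadm --examine /dev/nvme1n1p3 /dev/nvme3n1p3 | grep -E 'Array UUID|Raid Level|Raid Devices|Chunk Size|Device Role'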
4. Activating LVM Volumes
# Scan for volume groups
sudo vgscan
# Activate all volume groups
sudo vgchange -ay
# Verify all logical volumes are available
sudo lvs -a
If specific volumes need to be activated:
sudo lvchange -ay vg_name/lv_name
# For example:
sudo lvchange -ay vg_home/lv_home
You can confirm the resulting device paths with:
lsblk -f
sudo lvs -o +devices
5. Mounting the Volumes
Create mount points and mount the logical volumes:
# Create mount points if they do not exist
sudo mkdir -p /mnt/home /mnt/home_lfs /mnt/storage /mnt/trash
# Mount the logical volumes
sudo mount /dev/vg_home/lv_home /mnt/home
sudo mount /dev/vg_home_lfs/home_lfs /mnt/home_lfs
sudo mount /dev/vg_storage/lv_storage /mnt/storage
sudo mount /dev/vg_trash/lv_trash /mnt/trash
# Verify successful mounting
df -h
If you are not sure about the filesystem type or the correct mount options, inspect the logical volume first:
sudo blkid /dev/vg_home/lv_home
sudo file -s /dev/vg_home/lv_home
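If the filesystem type checks out but you want to be cautious on the first mount, mount read-only first and remount read-write once the data looks correct:
sudo mount -o ro /dev/vg_home/lv_home /mnt/home
# Later, once the contents have been verified
sudo mount -o remount,rw /mnt/home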
6. Making Mounts Permanent (Optional)
To make these mounts permanent across reboots, add them to /etc/fstab.
First, get the UUID of each logical volume:
sudo blkid | grep /dev/mapper
Then edit /etc/fstab:
sudo vim /etc/fstab
Entries using UUIDs are usually more stable than entries using device names. Example format:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/home ext4 defaults 0 2
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/home_lfs ext4 defaults 0 2
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/storage ext4 defaults 0 2
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/trash ext4 defaults 0 2
Device-mapper paths can also work if they match your system:
/dev/mapper/vg_home-lv_home /mnt/home ext4 defaults 0 2
/dev/mapper/vg_home_lfs-home_lfs /mnt/home_lfs ext4 defaults 0 2
/dev/mapper/vg_storage-lv_storage /mnt/storage ext4 defaults 0 2
/dev/mapper/vg_trash-lv_trash /mnt/trash ext4 defaults 0 2
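If there is any chance a volume may be missing at boot (for example, a degraded or disconnected array), adding the nofail option keeps the system from dropping into emergency mode over the missing mount:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/storage ext4 defaults,nofail 0 2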
Before rebooting, test the file:
sudo mount -a
findmnt /mnt/home /mnt/home_lfs /mnt/storage /mnt/trash
7. Troubleshooting
If RAID Assembly Fails
Check RAID metadata on the member devices:
sudo mdadm --examine /dev/nvme1n1p3 /dev/nvme3n1p3
Check the current kernel view of RAID devices:
cat /proc/mdstat
lsblk -f
If the array was previously created with mdadm, prefer assembling from existing metadata. Avoid mdadm --create during recovery unless you fully understand the original RAID level, member order, chunk size, metadata version, and device count. A wrong --create command can destroy access to the data.
If you are certain you are recreating the exact original RAID-0 definition, the command would look like this:
# Dangerous during recovery unless the original layout is known exactly
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1p3 /dev/nvme3n1p3
If LVM Volumes Are Not Found
# Scan physical devices for LVM metadata
sudo pvscan
# Rescan for volume groups and recreate any missing LVM device nodes
sudo vgscan --mknodes
# Check a specific volume group
sudo vgdisplay vg_name
If the physical volume is inside a RAID array, confirm that /dev/md0 or the correct /dev/md/* device exists before rescanning LVM.
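For example:
# Confirm the md device exists, then rescan for physical volumes
ls -l /dev/md0 /dev/md/ 2>/dev/null
cat /proc/mdstat
sudo pvscan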
If Volumes Will Not Mount
# Check the filesystem type
sudo file -s /dev/mapper/vg_name-lv_name
sudo blkid /dev/mapper/vg_name-lv_name
# Check kernel messages for mount errors
dmesg | tail -n 50
Only run filesystem repair after confirming the correct device and filesystem type:
# For ext filesystems, use the matching logical volume path
sudo fsck -f /dev/mapper/vg_name-lv_name
For XFS, use xfs_repair instead of fsck:
sudo xfs_repair /dev/mapper/vg_name-lv_name
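Both checkers support a no-modify mode, which is a safer first pass during recovery:
# Report problems without writing any changes
sudo fsck -n /dev/mapper/vg_name-lv_name
sudo xfs_repair -n /dev/mapper/vg_name-lv_name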
8. Quick Recovery Commands
For quick recovery, the minimal command set is:
sudo apt-get update && sudo apt-get install -y mdadm lvm2
sudo mdadm --assemble --scan
sudo vgscan
sudo vgchange -ay
sudo lvs -a
lsblk -f
# Create mount points and mount volumes as needed
For the specific configuration with RAID-0 on nvme1n1p3 and nvme3n1p3, and LVM volume groups vg_home, vg_home_lfs, vg_storage, and vg_trash, these commands should restore the storage setup after a system reinstall, assuming the RAID metadata and LVM metadata are still intact.
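As a convenience, the mounting step from section 5 can be condensed into a short loop. The volume and mount point names below mirror the example layout and are only a sketch:
# Mount each "vg/lv mountpoint" pair from the example layout
while read -r lv mnt; do
  sudo mkdir -p "$mnt"
  sudo mount "/dev/$lv" "$mnt"
done <<'EOF'
vg_home/lv_home /mnt/home
vg_home_lfs/home_lfs /mnt/home_lfs
vg_storage/lv_storage /mnt/storage
vg_trash/lv_trash /mnt/trash
EOF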
Adjust device names, logical volume names, filesystem types, and mount points for your actual setup before writing anything to disk.
