
mdadm is the standard Linux utility for creating and managing software RAID arrays. It lets you combine multiple raw block devices into a single logical device with configurable performance, redundancy, or both, without requiring dedicated hardware.
This guide walks through creating RAID 0, 1, 5, 6, and 10 arrays on Ubuntu using mdadm. For each level you will identify component disks, create the array, format and mount it, and persist the configuration across reboots. A dedicated section covers array monitoring, disk failure handling, failed disk replacement, and a comparison of mdadm software RAID against ZFS and LVM.
To make each array persistent, you will append its definition to /etc/mdadm/mdadm.conf, run update-initramfs -u, and add the device to /etc/fstab.
To follow the steps in this guide, you will need:
sudo privileges on an Ubuntu server. To learn how to set up an account with these privileges, follow our Ubuntu initial server setup guide.
Info: Due to the inefficiency of RAID setups on virtual private servers, we don’t recommend deploying a RAID setup on DigitalOcean Droplets. The efficiency of data center disk replication makes the benefits of RAID negligible relative to a setup on bare-metal hardware. This tutorial aims to be a reference for a conventional RAID setup.
Note: This guide uses whole disks as array members, which is valid for dedicated storage arrays. If you need to partition disks before use - for example, to mark them with the Linux RAID type code (0xfd) - see our guide on how to partition and format storage devices in Linux. For background on storage terminology used throughout this tutorial, see our introduction to storage terminology and concepts in Linux.
You can skip this section for now if you have not yet set up any arrays. This guide introduces a number of different RAID levels. If you want to follow along and try each level, you will likely need to reuse your storage devices after each section. Refer back to this section, Resetting Existing RAID Devices, whenever you need to reset your component storage devices before testing a new RAID level.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied any data you need to retain prior to destroying the array.
Begin by finding the active arrays in the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc[1] sdd[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
Then unmount the array from the filesystem:
- sudo umount /dev/md0
Now stop and remove the array:
- sudo mdadm --stop /dev/md0
Find the devices that were used to build the array with the following command:
Warning: Keep in mind that the /dev/sd* names can change any time you reboot. Check them every time to make sure you are operating on the correct devices.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G linux_raid_member disk
sdb 100G linux_raid_member disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
The linux_raid_member value in the FSTYPE column identifies disks that are currently part of a RAID array. After identifying them, zero their superblock, which holds the RAID metadata. Removing this metadata means the disk is no longer recognized as an array member and can be reused:
- sudo mdadm --zero-superblock /dev/sda
- sudo mdadm --zero-superblock /dev/sdb
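With several disks attached, you can also identify current RAID members programmatically rather than reading the lsblk table by eye. A small sketch (list_raid_members is a hypothetical helper name; the sample device names in the demonstration are illustrative):

```shell
# List every device whose filesystem signature marks it as a RAID member.
# -n suppresses the lsblk header, -r prints raw space-separated columns.
list_raid_members() {
    lsblk -nr -o NAME,FSTYPE | awk '$2 == "linux_raid_member" { print "/dev/" $1 }'
}

# The same awk filter, demonstrated against captured sample output:
printf 'sda linux_raid_member\nsdb linux_raid_member\nsdc ext4\n' \
    | awk '$2 == "linux_raid_member" { print "/dev/" $1 }'
# prints /dev/sda and /dev/sdb
```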
It’s recommended to also remove any persistent references to the array. Edit the /etc/fstab file and comment out or remove the reference to your array. You can comment it out by inserting a hash symbol (#) at the beginning of the line, using nano or your preferred text editor:
- sudo nano /etc/fstab
. . .
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
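If you prefer a non-interactive edit, sed can comment the line for you. This is a sketch: the real command is shown as a comment, and the logic is demonstrated on a scratch copy of the entry so nothing on your system is touched:

```shell
# Real edit, run only once you are sure the pattern matches a single line:
#   sudo sed -i 's|^/dev/md0|#&|' /etc/fstab

# Demonstration on a scratch file containing a copy of the entry:
tmp=$(mktemp)
printf '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0\n' > "$tmp"
sed -i 's|^/dev/md0|#&|' "$tmp"   # '&' re-inserts the matched text after '#'
grep '^#/dev/md0' "$tmp"          # the line is now commented out
rm -f "$tmp"
```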
Also, comment out or remove the array definition from the /etc/mdadm/mdadm.conf file:
- sudo nano /etc/mdadm/mdadm.conf
. . .
# ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91
Finally, update the initramfs again so that the early boot process does not try to bring an unavailable array online:
- sudo update-initramfs -u
From here, you should be ready to reuse the storage devices individually, or as components of a different array.
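If you plan to cycle through several RAID levels, the reset sequence can be wrapped in a helper. The sketch below is deliberately a dry run: it only prints the commands so you can review them before running anything destructive (reset_array is a hypothetical name):

```shell
# Print (do not run) the reset sequence for a given array and its members.
reset_array() {
    md_dev=$1; shift
    echo "umount $md_dev"
    echo "mdadm --stop $md_dev"
    for disk in "$@"; do
        echo "mdadm --zero-superblock $disk"
    done
    echo "update-initramfs -u"
}

reset_array /dev/md0 /dev/sda /dev/sdb
```

Review the printed commands, then run each one manually with sudo, and remember to clear the matching lines from /etc/fstab and /etc/mdadm/mdadm.conf as described above.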
The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this session and will be the raw components used to build the array. The vda device is the system disk. The vdb entry with iso9660 filesystem is a read-only cloud-init or metadata disk attached by the hypervisor — you can ignore it. You are only working with the raw, unformatted disks, which are the ones showing no FSTYPE value.
To create a RAID 0 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will be naming the device /dev/md0, and include the two disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
Outputmdadm: layout defaults to -p original
mdadm: chunk size defaults to 512K
mdadm: size set to 104792064K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Confirm that the RAID was successfully created by checking the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
This output reveals that the /dev/md0 device was created in the RAID 0 configuration using the /dev/sda and /dev/sdb devices.
Next, create an ext4 filesystem on the array. The -F flag tells mkfs.ext4 to skip the confirmation prompt it can otherwise raise because /dev/md0 is an entire block device rather than a partition:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following command:
- sudo mount /dev/md0 /mnt/md0
After mounting, check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible. The available space shows 196G rather than the full 200G (2x 100G) because ext4 reserves approximately 5% of the filesystem for root and internal metadata by default.
Persisting the array across reboots requires three separate steps, each solving a different failure mode:
1. mdadm --detail --scan writes the array definition, including its UUID, to /etc/mdadm/mdadm.conf, which is what mdadm reads at boot to know which arrays to assemble and under which device names.
2. update-initramfs -u rebuilds the early boot image to include the updated mdadm.conf. Without this step, the initramfs used during boot still reflects the old configuration, and the array may not assemble at all, or may assemble under an unexpected name like /dev/md127.
3. An /etc/fstab entry with nofail tells the OS where to mount the array after it assembles. The nofail flag is critical: if the array fails to assemble at boot for any reason, nofail allows the system to continue booting instead of dropping to emergency mode.
Skipping any one of these three steps is the most common cause of an array that works on first boot but fails to come back after a reboot.
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 0 array will now automatically assemble and mount each boot.
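Before rebooting, you can confirm all three persistence pieces are in place. This sketch takes the file paths as arguments so it can be exercised against scratch copies; check_persistence is a hypothetical helper name:

```shell
# Succeed only when the array is referenced in both configuration files.
check_persistence() {
    md_dev=$1 mdadm_conf=$2 fstab=$3
    grep -q "ARRAY $md_dev" "$mdadm_conf" || { echo "missing in $mdadm_conf"; return 1; }
    grep -q "^$md_dev " "$fstab" || { echo "missing in $fstab"; return 1; }
    echo "ok: $md_dev persisted"
}

# On a live system you would run:
#   check_persistence /dev/md0 /etc/mdadm/mdadm.conf /etc/fstab
```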
You’re now finished with your RAID setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
mdadm can service read requests from both disks simultaneously.
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this session and will be the raw components you use to build the array.
To create a RAID 1 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will be naming the device /dev/md0, and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot flag enabled, you will likely receive the following warning. It is safe to respond with y and continue:
Outputmdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[====>................] resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device was created in the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The resync line shows the progress of the mirroring. You can continue to the next step while this process completes.
Next, create an ext4 filesystem on the array. The -F flag tells mkfs.ext4 to skip the confirmation prompt it can otherwise raise because /dev/md0 is an entire block device rather than a partition:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by running the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 99G 60M 94G 1% /mnt/md0
The new filesystem is mounted and accessible. The available space shows 94G rather than the full 100G because ext4 reserves approximately 5% of the filesystem for root and internal metadata by default.
As with RAID 0, persisting this array across reboots requires the same three steps: append the array definition to /etc/mdadm/mdadm.conf with mdadm --detail --scan, rebuild the initramfs with update-initramfs -u, and add an /etc/fstab entry with the nofail option. Skipping any one of these is the most common cause of an array that works on first boot but fails to come back after a reboot.
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 1 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.
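The parity in each stripe is a plain XOR of the data chunks, which is why any single missing chunk can be recomputed from the rest. A minimal arithmetic sketch using single bytes as stand-in blocks (the values are arbitrary):

```shell
# Two data chunks (single bytes for illustration) and their XOR parity.
d1=170   # 0xAA, chunk stored on disk 1
d2=60    # 0x3C, chunk stored on disk 2
parity=$(( d1 ^ d2 ))   # stored on disk 3

# If disk 1 fails, XOR-ing the survivors reproduces its chunk exactly.
recovered=$(( parity ^ d2 ))
echo "parity=$parity recovered=$recovered"   # recovered equals d1 (170)
```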
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
You have three disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, and /dev/sdc identifiers for this session and will be the raw components you use to build the array.
To create a RAID 5 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will be naming the device /dev/md0, and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
Outputmdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 104791040K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
The mdadm tool will start to configure the array. It uses the recovery process to build the array because writing parity incrementally as data is placed is faster than zeroing all blocks first, which means the array is technically in a recovery state during the initial build. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209582080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.9% (957244/104791040) finish=18.0min speed=95724K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device was created in the RAID 5 configuration using the /dev/sda, /dev/sdb, and /dev/sdc devices. The recovery line shows the progress of the build.
Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. This means that you must wait for the array to finish assembling before updating the /etc/mdadm/mdadm.conf file. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.
You can continue the guide while this process completes.
Next, create an ext4 filesystem on the array. The -F flag tells mkfs.ext4 to skip the confirmation prompt it can otherwise raise because /dev/md0 is an entire block device rather than a partition:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
As with RAID 0, persisting this array across reboots requires the same three steps: append the array definition to /etc/mdadm/mdadm.conf with mdadm --detail --scan, rebuild the initramfs with update-initramfs -u, and add an /etc/fstab entry with the nofail option. Skipping any one of these is the most common cause of an array that works on first boot but fails to come back after a reboot.
Warning: Before running the steps below, confirm the array has finished assembling. If you run mdadm --detail --scan while the array is still building, the output will show an incorrect spare count and the UUID written to mdadm.conf may be wrong. A wrong entry means mdadm cannot match the stored definition to the array at boot and will assemble it under a fallback name like /dev/md127 instead of /dev/md0. If this happens after a reboot, run sudo mdadm --detail --scan again, clear the incorrect line from /etc/mdadm/mdadm.conf, append the correct entry, and run sudo update-initramfs -u again.
You can check on the progress of the build by examining the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
This output shows that the build is complete. Now you can automatically scan the active array and append its definition to the file:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 5 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data. The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.
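Usable capacity follows directly from the parity count: RAID 5 retains n-1 disks' worth of space and RAID 6 retains n-2. A quick arithmetic sketch using four of the 100G disks from this guide:

```shell
# Usable space for parity RAID levels: (n - parity_disks) * disk_size.
disks=4
disk_gb=100

raid5_gb=$(( (disks - 1) * disk_gb ))   # one disk's worth of parity
raid6_gb=$(( (disks - 2) * disk_gb ))   # two disks' worth of parity
echo "RAID5: ${raid5_gb}G  RAID6: ${raid6_gb}G"
# prints: RAID5: 300G  RAID6: 200G
```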
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have four disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session and will be the raw components used to build the array.
To create a RAID 6 array with these components, pass them into the mdadm --create command. You have to specify the device name you wish to create, the RAID level, and the number of devices. In this following command example, you will be naming the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Outputmdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 104792064K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
The mdadm tool will start to configure the array, computing the parity blocks across the member disks in an initial sync. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the sync by checking the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 0.6% (668572/104792064) finish=10.3min speed=167143K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device has been created in the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The resync line shows the progress of the build. You can continue the guide while this process completes.
Next, create an ext4 filesystem on the array. The -F flag tells mkfs.ext4 to skip the confirmation prompt it can otherwise raise because /dev/md0 is an entire block device rather than a partition:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
As with RAID 0, persisting this array across reboots requires the same three steps: append the array definition to /etc/mdadm/mdadm.conf with mdadm --detail --scan, rebuild the initramfs with update-initramfs -u, and add an /etc/fstab entry with the nofail option. Skipping any one of these is the most common cause of an array that works on first boot but fails to come back after a reboot.
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 6 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
The RAID 10 array type is traditionally implemented by creating a striped RAID 0 array composed of sets of RAID 1 arrays. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The mdadm utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. You will be using the mdadm RAID 10 here.
mdadm’s native RAID 10 technically accepts 3 disks, but 4 is the practical minimum for balanced mirroring.
By default, two copies of each data block will be stored in what is called the near layout. The possible layouts that dictate how each data block is stored are as follows:
near: The default arrangement. Copies of each chunk are written at roughly the same offset on different disks.
far: Copies are written at very different offsets, for instance one near the beginning of a disk and its copy in the second half of another. This gives read performance closer to RAID 0, at some cost to write performance.
offset: Each stripe is repeated, shifted by one device, so the copy of a chunk lands on the next disk one chunk further along.
You can find out more about these layouts by checking out the RAID10 section of this man page:
- man 4 md
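Whichever layout you choose, the usable capacity of an mdadm RAID 10 array is the total raw space divided by the number of copies. A quick sketch with the four 100G disks used below:

```shell
# Usable space for RAID 10: total raw capacity / number of copies.
disks=4
disk_gb=100

copies=2
echo "n2 layout usable: $(( disks * disk_gb / copies ))G"   # prints 200G

copies=3
echo "o3 layout usable: $(( disks * disk_gb / copies ))G"   # prints 133G (integer division)
```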
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have four disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session and will be the raw components used to build the array.
To create a RAID 10 array with these components, pass them into the mdadm --create command. You have to specify the device name you wish to create, the RAID level, and the number of devices. In this following command example, you will be naming the device /dev/md0 and include the disks that will build the array:
You can set up two copies using the near layout by not specifying a layout and copy number:
- sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Outputmdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 104792064K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
If you want to use a different layout or change the number of copies, you will have to use the --layout= option, which takes a layout and copy identifier. The layouts are n for near, f for far, and o for offset. The number of copies to store is appended afterward.
For instance, to create an array that has three copies in the offset layout, the command would include the following:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
The mdadm tool will start to configure the array, syncing the mirrored copies across the member disks. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the sync by checking the /proc/mdstat file:
- cat /proc/mdstat
OutputPersonalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[===>.................] resync = 18.1% (37959424/209584128) finish=13.8min speed=206120K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device has been created in the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The 2 near-copies field shows the layout used for this example (two copies in the near configuration), and the resync line shows the progress of the build. You can continue the guide while this process completes.
Next, create an ext4 filesystem on the array. The -F flag tells mkfs.ext4 to skip the confirmation prompt it can otherwise raise because /dev/md0 is an entire block device rather than a partition:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
As with RAID 0, persisting this array across reboots requires the same three steps: append the array definition to /etc/mdadm/mdadm.conf with mdadm --detail --scan, rebuild the initramfs with update-initramfs -u, and add an /etc/fstab entry with the nofail option. Skipping any one of these is the most common cause of an array that works on first boot but fails to come back after a reboot.
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 10 array will now automatically assemble and mount each boot.
Once an array is running, mdadm provides several tools to inspect its health, respond to disk failures, and replace degraded members.
The /proc/mdstat file is the fastest way to check array state:
- cat /proc/mdstat
OutputPersonalities : [raid1] [raid5] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Each U in the [UU] field represents a healthy member disk. A _ indicates a missing or degraded member. For a deeper summary of a specific array, use --detail:
- sudo mdadm --detail /dev/md0
Output/dev/md0:
Version : 1.2
Creation Time : Mon Jan 1 00:00:00
Raid Level : raid1
Array Size : 104792064 (99.94 GiB 107.31 GB)
Used Dev Size : 104792064 (99.94 GiB 107.31 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Jan 1 00:05:00
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : hostname:0 (local to host hostname)
UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Events : 17
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
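The same [UU] status field can be checked mechanically, which is useful in a cron job or health check. The sketch below scans mdstat-formatted text for an underscore in any status field; degraded_arrays is a hypothetical helper name, demonstrated against captured sample output so it can run anywhere:

```shell
# Print the name of every array whose status field contains '_'.
degraded_arrays() {
    awk '/^md/          { name = $1 }
         /\[[U_]+\]$/   { if ($NF ~ /_/) print name }'
}

# Demonstration against sample /proc/mdstat content:
printf 'md0 : active raid1 sdb[1] sda[2](F)\n      104792064 blocks super 1.2 [2/1] [_U]\n' \
    | degraded_arrays
# prints: md0

# On a live system: degraded_arrays < /proc/mdstat
```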
Configure mdadm to send email notifications when array events occur. Add the following line to /etc/mdadm/mdadm.conf:
- sudo nano /etc/mdadm/mdadm.conf
MAILADDR admin@example.com
Then enable and start the mdmonitor service:
- sudo systemctl enable mdmonitor
- sudo systemctl start mdmonitor
Verify the service is running:
- sudo systemctl status mdmonitor
Output● mdmonitor.service - MD array monitor
Loaded: loaded (/lib/systemd/system/mdmonitor.service; enabled)
Active: active (running)
mdadm --monitor sends alerts for events including Fail, DegradedArray, SparesMissing, and RebuildFinished.
Note: If mdmonitor.service is not found, verify the correct service name on your system with systemctl list-units | grep -i md.
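In addition to MAILADDR, mdadm.conf supports a PROGRAM line that runs an external command for each monitored event; mdadm invokes it with the event name, the array device, and, for some events, the member device. The handler below is a sketch: the function name, output format, and the /usr/local/sbin path in the comment are all illustrative, not mdadm requirements.

```shell
# Hypothetical event hook for mdadm --monitor. Enabled with a line like
#   PROGRAM /usr/local/sbin/md-event-handler
# in /etc/mdadm/mdadm.conf. mdadm calls the program with the event
# name, the array device, and (for some events) the member device.
md_event_handler() {
  event="$1"; md="$2"; member="${3:-none}"
  printf 'mdadm event=%s array=%s member=%s\n' "$event" "$md" "$member"
  # A real handler might forward this to syslog instead:
  # logger -t mdadm-event "$event on $md ($member)"
}
```

For example, a disk failure would reach the handler as md_event_handler Fail /dev/md0 /dev/sda, which a real deployment could route to syslog, email, or a pager.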
A hot spare is a standby disk that mdadm automatically uses to begin rebuilding the array the moment a member disk fails. To add a spare to an existing array:
- sudo mdadm /dev/md0 --add /dev/sde
Outputmdadm: added /dev/sde
Confirm the spare is registered:
- sudo mdadm --detail /dev/md0 | grep spare
Output Spare Devices : 1
2 8 64 - spare /dev/sde
When mdadm marks a disk as failed, the array enters a degraded state. The /proc/mdstat output will show a _ in the device status field:
Outputmd0 : active raid1 sdb[1] sda[2](F)
104792064 blocks super 1.2 [2/1] [_U]
The (F) flag on /dev/sda confirms the failure. If no hot spare is configured, the array continues to operate in degraded mode, but has no further redundancy until the failed disk is replaced.
To replace the failed disk, first mark it as failed (if mdadm has not already done so automatically):
- sudo mdadm /dev/md0 --fail /dev/sda
Remove the failed disk from the array:
- sudo mdadm /dev/md0 --remove /dev/sda
Physically replace the disk, then add the new disk to the array:
- sudo mdadm /dev/md0 --add /dev/sda
mdadm immediately begins rebuilding the array onto the replacement disk. Monitor the rebuild progress:
- watch -n 1 cat /proc/mdstat
Outputmd0 : active raid1 sda[2] sdb[1]
104792064 blocks super 1.2 [2/1] [_U]
[=>...................] recovery = 6.4% (6710784/104792064) finish=13.8min speed=118080K/sec
The array returns to a fully healthy state when the rebuild finishes and the status field shows [UU].
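The finish= estimate in the sample output can be reproduced from the other numbers: remaining kibibytes divided by the reported speed. Here is a sketch of that arithmetic (the function name is illustrative); real rebuild speed fluctuates with I/O load and the dev.raid.speed_limit_* sysctls, so treat the result as a rough estimate.

```shell
# estimate_rebuild_minutes: rough rebuild ETA from the remaining data
# (in 1K blocks, the unit /proc/mdstat reports) and the current rebuild
# speed (K/sec), truncated to whole minutes.
estimate_rebuild_minutes() {
  remaining_kb="$1"; speed_kps="$2"
  echo $(( remaining_kb / speed_kps / 60 ))
}
```

For the sample output above, the remaining data is 104792064 - 6710784 = 98081280 K; at 118080 K/sec that works out to about 13 minutes, consistent with the reported finish=13.8min.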
Warning: On Ubuntu systems using udev, device names such as /dev/sda can change after a reboot or hardware change. Use persistent identifiers from ls -l /dev/disk/by-id/ when scripting replacement procedures, or when the server will be rebooted between the remove and add steps.
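A replacement script can pin a disk by its persistent name and resolve it to the current kernel device only at the moment a command runs. The sketch below assumes this approach; the function name and the by-id identifier in the example are placeholders (list your real ones with ls -l /dev/disk/by-id/).

```shell
# resolve_disk: map a persistent symlink (e.g. under /dev/disk/by-id/)
# to the current kernel device name, so scripted --remove/--add steps
# keep targeting the same physical disk even if /dev/sdX names shuffle.
resolve_disk() {
  readlink -f "$1"
}

# Example usage with a placeholder id:
# sudo mdadm /dev/md0 --add "$(resolve_disk /dev/disk/by-id/ata-EXAMPLE_SERIAL)"
```

readlink -f follows the symlink chain to the real device node, which is exactly what udev's by-id links are: stable names pointing at whatever /dev/sdX the disk currently holds.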
mdadm software RAID is not the only option for redundant storage on Linux. The table below summarises when to use each approach.
| Approach | Best for | Key tradeoff |
|---|---|---|
| mdadm software RAID | General-purpose redundancy and performance on bare metal or dedicated servers | CPU overhead; kernel must load correct RAID personality at boot |
| ZFS (via zfsutils-linux) | Data integrity-critical workloads; built-in checksumming, snapshots, and RAID-Z | Higher memory requirement (a common guideline is 1 GB of RAM per TB of storage); licensing constraints keep it out of the mainline kernel |
| LVM striping/mirroring | Flexible volume management layered on top of RAID or single disks | No checksumming; LVM mirroring is less battle-tested for failure scenarios than mdadm |
| Hardware RAID controller | Environments requiring RAID offload from the host CPU | Controller cost; portability risk if controller fails and replacement is unavailable |
When to choose mdadm:
- You need general-purpose redundancy or striping on bare metal or a dedicated server.
- You want a kernel-native, lightweight solution that is well tested across a wide range of hardware.
When to consider ZFS instead:
- Silent data corruption is a concern and you want built-in checksumming.
- You want copy-on-write snapshots or RAID-Z, and can afford the additional memory overhead.
For volume management layered on top of an existing RAID array, see our guide on how to use LVM to manage storage devices on Ubuntu.
What is the difference between RAID 5 and RAID 10 in mdadm?
RAID 5 distributes parity across three or more disks, using one disk-equivalent of capacity for parity. It tolerates one disk failure. RAID 10 mirrors pairs of disks and stripes across the mirrors, tolerating one failure per mirrored pair. RAID 10 generally delivers better write performance and faster rebuild times; RAID 5 gives more usable capacity per disk added.
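The capacity tradeoff in that answer is easy to quantify. The following sketch captures the arithmetic for n identical disks (the function names are illustrative):

```shell
# Usable capacity in GB for n identical disks of size_gb each.
# RAID 5 spends one disk-equivalent on parity; RAID 10 mirrors every
# disk, halving raw capacity.
raid5_usable_gb()  { n="$1"; size_gb="$2"; echo $(( (n - 1) * size_gb )); }
raid10_usable_gb() { n="$1"; size_gb="$2"; echo $(( n * size_gb / 2 )); }
```

With four 1000 GB disks, for example, RAID 5 yields 3000 GB usable while RAID 10 yields 2000 GB, and the gap widens as disks are added.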
How do I check the status of a RAID array created with mdadm?
Run cat /proc/mdstat for a quick overview of all active arrays. For detailed information about a specific array including member disk state, run sudo mdadm --detail /dev/md0.
How do I make an mdadm RAID array persist after a reboot on Ubuntu?
Three steps are required. First, append the array definition to /etc/mdadm/mdadm.conf by running sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf. Second, update the initramfs with sudo update-initramfs -u. Third, add the mount entry to /etc/fstab using the nofail option so the system boots even if the array is unavailable.
Can I add a hot spare to an existing mdadm RAID array?
Yes. Run sudo mdadm /dev/md0 --add /dev/sdX where /dev/sdX is the spare disk. mdadm registers it as a spare and will automatically begin a rebuild onto it if a member disk fails.
What happens when a disk in an mdadm RAID array fails?
mdadm marks the disk with a (F) flag in /proc/mdstat and the array enters degraded state. If a hot spare is configured, mdadm begins rebuilding automatically. If no spare is present, the array continues operating with reduced redundancy. RAID 0 has no redundancy: a single disk failure destroys all data.
How do I replace a failed disk in an mdadm RAID 1 or RAID 5 array?
Mark the disk as failed with sudo mdadm /dev/md0 --fail /dev/sdX, remove it with sudo mdadm /dev/md0 --remove /dev/sdX, replace the physical disk, then add the new disk with sudo mdadm /dev/md0 --add /dev/sdX. The array begins rebuilding immediately.
What is the /etc/mdadm/mdadm.conf file used for?
/etc/mdadm/mdadm.conf stores persistent array definitions including the array UUID, member devices, and monitoring configuration such as the alert email address. Without a correct entry here, mdadm cannot reliably reassemble the array at boot using the expected device name.
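A configuration produced by the steps in this guide looks roughly like the following; the UUID, hostname, and email address are placeholders:

```
# /etc/mdadm/mdadm.conf (illustrative)
MAILADDR admin@example.com
ARRAY /dev/md0 metadata=1.2 name=hostname:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

The ARRAY line is what mdadm --detail --scan appends; the MAILADDR line is read by mdadm --monitor to route alert emails.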
How does mdadm software RAID compare to ZFS on Ubuntu?
mdadm is kernel-native, lightweight, and well-tested across a wide range of hardware. ZFS adds built-in data checksumming, copy-on-write snapshots, and RAID-Z (similar to RAID 5/6), but requires more RAM and has licensing constraints that prevent it from being shipped in the Linux kernel directly. For most general-purpose redundancy use cases on Ubuntu servers, mdadm is the straightforward choice. For large-scale file servers where silent data corruption is a concern, ZFS is worth the additional overhead.
In this guide, you created RAID 0, 1, 5, 6, and 10 arrays using mdadm on Ubuntu. You formatted and mounted each array, persisted the configuration across reboots by updating /etc/mdadm/mdadm.conf and running update-initramfs -u, and set up monitoring and disk replacement procedures.
Last step editing the fstab:
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
should be
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 2' | sudo tee -a /etc/fstab
The 2 in place of the final 0 lets the system bring up the main OS first and then the array afterwards. If it is left at 0, there are reports (and I experienced this myself) of the array not assembling in time before the system fully boots, so the nofail option never reports why it did not load. This is shown in the fstab(5) Linux manual page (man7.org).
Hi, thank you for this awesome tutorial (and others).
On a new Ubuntu 22.04 install I created a RAID 10 array per your code (sudo mdadm --create --verbose /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd) using f2 for a far/2-copies setup on a 4 TB x 4 array to get approx. 8 TB. I've created/mounted/saved per all of your instructions. I also set USB shares to mount at /media/ instead of /media/user/ per this tutorial. I started yesterday afternoon, and currently when I run "cat /proc/mdstat" the resync = 13.7%.
My 2 questions, please and thank you:
Do I need to keep this computer on until the resync is finished, or can I reboot and assume it will continue where it left off?
At what point can I add md0 to Samba and start copying data onto the new array?
Newbie here, sorry:(
OK, so I have followed this how-to several times now but seem to have ongoing issues with array stability. Today I went to access the array, having copied 1.5 TB of data to it, and it wasn't accessible. It said the superblock was unreadable and I could not save the array. So now I am rebuilding it as a RAID 6 volume and am currently waiting for the build to finish. I read the comment below about changing the 0 to a 2 so that the array would load at boot time, and used that config. Is this causing my issues? I am running Ubuntu 22.04.3 LTS with a 256 GB SSD OS drive and four 834 GB SSD drives for the array, which gives me a 1.7 TB disk for storage. I am not a big Linux guy; my day job is Windows admin, so this stuff is a bit foreign to me. I am using this as a Photoprism photo repository and don't want to get all my photos on the array only to lose them down the road. Any help is much appreciated.
After following this guide for RAID 0 my RAID was not coming up automatically after reboot. I created the RAID on the partition instead of the device (used sda1 instead of sda) and it’s working now.
There should be a section added to resetting devices. For drives that were formatted GPT, there is a possibility that they will lose their superblock upon reboot, causing the array to fail. This can be cleared off with sgdisk --zap /dev/sdX. I spent a few days trying to figure out why my RAID 5 array wouldn't survive reboot; this fixed the problem by clearing whatever corrupted partition data was on the drive.
First, congrats on and thanks for an excellent article. Just what I need, as I want to implement a software raid at home.
I hate to be pedantic…
You have a typesetting issue above. In the code boxes, up in the first section on resetting the drives for the array, when you show a pathname such as /dev/sda or /dev/whatever, your HTML causes a gap to appear that looks like a space between /dev/ and whatever follows. This looks to a beginner like a space. If you use the copy button, or copy manually, the path is correct in the copy buffer, but if you are retyping, you may be tricked into thinking there is an extra, separate argument.
The HTML is more intricate right there.
<span class="token parameter variable">--stop</span> /dev/<mark>md0</mark>
It’s the <mark> that does it.
This is on Chrome using a Chromebook.
Hi team,
There is a heading where, if I copy its link, it links to the wrong heading.
It will link to:
It needs to link to the RAID 1 subheading here, as I am using RAID 1 only, thanks.