20.5 RAID3 - Byte-level Striping with Dedicated Parity

Written by Mark Gladman and Daniel Gerzo. Based on documentation by Tom Rhodes and Murray Stokely.

RAID3 is a method used to combine several disk drives into a single volume with a dedicated parity disk. In a RAID3 system, data is split up into a number of bytes that are written across all the drives in the array except for one, which acts as the dedicated parity disk. This means that reading 1024 KB from a RAID3 implementation will access all disks in the array. Performance can be enhanced by using multiple disk controllers. A RAID3 array provides a fault tolerance of one drive, while offering a capacity of 1 - 1/n times the total capacity of all drives in the array, where n is the number of drives in the array. Such a configuration is best suited to storing larger chunks of data, such as multimedia files.
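
For example, three 1 TB members (n = 3) leave 1 - 1/3, or two thirds of the 3 TB total, as usable space, roughly 2 TB, with the remaining space holding parity. The arithmetic can be checked with a throwaway sh(1) calculation; the variable names below are purely illustrative:

    # n=3; total_gb=3072    # three 1 TB (1024 GB) drives
    # echo "usable: $((total_gb - total_gb / n)) GB"
    usable: 2048 GB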

At least 3 physical hard drives are required to build a RAID3 array. Each disk must be of the same size, since I/O requests are interleaved to read or write to multiple disks in parallel. Also, due to the nature of RAID3, the number of drives must be equal to 3, 5, 9, 17, etc. (2^n + 1).
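
The sequence 3, 5, 9, 17, and so on is simply 2^n + 1 for n = 1, 2, 3, 4. A quick sh(1) loop, shown here only for illustration, prints the first few valid member counts:

    # p=1; for i in 1 2 3 4; do p=$((p * 2)); echo "$((p + 1)) drives"; done
    3 drives
    5 drives
    9 drives
    17 drives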

20.5.1 Creating a Dedicated RAID3 Array

In FreeBSD, support for RAID3 is implemented by the graid3(8) GEOM class. Creating a dedicated RAID3 array on FreeBSD requires the following steps.

Note: While it is theoretically possible to boot from a RAID3 array on FreeBSD, that configuration is uncommon and is not advised.

  1. First, load the geom_raid3.ko kernel module by issuing the following command:

    # graid3 load
    

    Alternatively, it is possible to manually load the geom_raid3.ko module:

    # kldload geom_raid3.ko
    
  2. Ensure that a suitable mount point exists, creating it if necessary:

    # mkdir /multimedia/
    
  3. Determine the device names for the disks which will be added to the array, and create the new RAID3 device. The final device listed will act as the dedicated parity disk. This example uses three unpartitioned ATA drives: ada1 and ada2 for data, and ada3 for parity.

    # graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3
    Metadata value stored on /dev/ada1.
    Metadata value stored on /dev/ada2.
    Metadata value stored on /dev/ada3.
    Done.
    
  4. Partition the newly created gr0 device and put a UFS file system on it:

    # gpart create -s GPT /dev/raid3/gr0
    # gpart add -t freebsd-ufs /dev/raid3/gr0
    # newfs -j /dev/raid3/gr0p1
    

    Many numbers will glide across the screen, and after a bit of time, the process will be complete. The volume has been created and is ready to be mounted.

  5. The last step is to mount the file system:

    # mount /dev/raid3/gr0p1 /multimedia/
    

    The RAID3 array is now ready to use.
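
Before relying on the new volume, it can be useful to confirm that the kernel module is loaded, that all three components of the array are active, and that the file system is mounted. The commands below assume the gr0 label and /multimedia mount point used in this example; the exact output will vary with the hardware:

    # kldstat | grep geom_raid3
    # graid3 status gr0
    # df -h /multimedia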

Additional configuration is needed to retain the above setup across system reboots.

  1. The geom_raid3.ko module must be loaded before the array can be mounted. To automatically load the kernel module during the system initialization, add the following line to the /boot/loader.conf file:

    geom_raid3_load="YES"
    
  2. The following volume information must be added to the /etc/fstab file in order to automatically mount the array's file system during the system boot process:

    /dev/raid3/gr0p1	/multimedia	ufs	rw	2	2
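
A quick way to test the fstab line without a full reboot is to unmount the file system and mount it again by mount point alone, which makes mount(8) consult /etc/fstab. The loader.conf entry itself only takes effect at the next boot. This assumes the mount point from the example above:

    # umount /multimedia
    # mount /multimedia
    # df -h /multimedia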