RAID0, Software, How to setup
The following is a practical guide to setting up software RAID0 on FreeBSD using the GEOM subsystem. It may read like an aide-mémoire, but it is a real, working example written as the author configured an actual system.
The system is intended to be a file server using Samba.
By using RAID0, you are at least doubling the chance of data loss over any given period of time. There is no parity in RAID0, which means that failure of any drive in the array will destroy the entire volume. If you are well aware of the MTBF (Mean Time Between Failure) ratings of all the drives you will use, understand the increased risk of data loss, and have a satisfactory backup plan to compensate for this, read on. If any of this sounds scary - and it should - Jimbo strongly suggests you consider adding one more drive and setting up a RAID3 array instead.
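The arithmetic behind that warning can be sketched quickly. Assuming, purely for illustration, that each drive independently has a 3% chance of failing over some period, a two-drive stripe is lost if either drive fails:

```shell
# Hypothetical figure: p is the probability of one drive failing in a given
# period. A two-drive RAID0 volume is lost if either member fails, so the
# combined probability is 1 - (1 - p)^2 - just under double the single-drive risk.
awk 'BEGIN { p = 0.03; printf "%.4f\n", 1 - (1 - p)^2 }'
```

With p = 0.03 this prints 0.0591, slightly less than 2p only because of the small chance that both drives fail in the same period.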
The system comprises the following hardware:
- Processor: AMD Athlon XP 3000;
- Motherboard: ASUS KT4AV;
- Memory: 512MB;
- Hard Drive: Maxtor 30GB IDE (for Operating System);
- Hard Drive: Seagate 500GB SATA x2 (for RAID0 file share);
- Hard Drive Controller: Promise PDC20375 SATA150;
- Case: basic full-ATX sized;
- PSU: Antec ATX (with SATA power connections).
The Promise SATA controller is capable of RAID0 and RAID1; however, it is being used as a simple hard drive controller for the two SATA Seagate drives, since the motherboard has no SATA ports. RAID0 itself is provided by the FreeBSD software-based solution documented in this article.
This guide wouldn't be here unless it involved FreeBSD! It is intended that the system will be a file server for media files, using Samba not only to share the files but also to offer WINS for name resolution on a small LAN.
The following steps are based on a working implementation; however, they should be broad enough to cover most instances and serve as a guide to others wishing to implement RAID0.
On FreeBSD the RAID0 "driver" is provided by the GEOM subsystem and is referred to as disk striping. The driver is available as a loadable kernel module called 'geom_stripe'. In order to load this driver automatically on boot, the line geom_stripe_load="YES" needs to be added to the /boot/loader.conf file. We can avoid having to reboot the system right now, however, by loading the driver manually.
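For reference, this is the line exactly as it would appear in /boot/loader.conf:

```
geom_stripe_load="YES"
```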
server# kldload geom_stripe
There needs to be a mount point for this RAID drive. Because this RAID drive is intended to be used with Samba it will be called '/smb' and created as follows.
server# mkdir /smb
In order to establish the RAID0 drive the underlying drives need to be determined. Here we'll use grep with a regular expression to find all kernel messages that begin with 'ad0:' through 'ad9:'. (For SCSI drives we would have used 'da' instead.) This will show us the drives the kernel detected the last time it was booted.
server# dmesg | grep -e "ad[0-9]:"
ad0: 29325MB &lt;Maxtor 6E030L0 NAR61590&gt; at ata0-master UDMA133
ad4: 476940MB &lt;Seagate ST3500630AS 3.AAK&gt; at ata2-master SATA150
ad6: 476940MB &lt;Seagate ST3500630AS 3.AAK&gt; at ata3-master SATA150
This reveals that the drives intended for use as part of the RAID0 setup - the two Seagate drives - have been allocated 'ad4' and 'ad6'. The operating system drive, the Maxtor, is allocated 'ad0'.
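As a quick aside, the regular expression can be tried against canned text. The sample lines below are made up for illustration; note how the trailing colon in 'ad[0-9]:' keeps the expression from matching other devices such as an 'acd0' CD drive:

```shell
# Sample kernel messages (invented for illustration, not from a live system)
sample='ad0: 29325MB <Maxtor> at ata0-master UDMA133
acd0: DVD-ROM <drive> at ata1-master UDMA33
ad4: 476940MB <Seagate> at ata2-master SATA150'

# Only the lines for ATA disks ad0 through ad9 survive the filter
printf '%s\n' "$sample" | grep -e "ad[0-9]:"
```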
To create the RAID0 drive using the two drives determined above the gstripe command is used as follows.
server# gstripe label -v st0 /dev/ad4 /dev/ad6
Metadata value stored on /dev/ad4.
Metadata value stored on /dev/ad6.
Done.
This created a RAID0 drive called st0, which is a virtual device the system treats in much the same way as the physical drives found under ad4 and ad6. The use of -v instructed the gstripe command to be more verbose - without that argument, it would have returned us to the prompt completely silently.
server# dmesg | grep GEOM_STRIPE
GEOM_STRIPE: Device st0 created (id=2925520033).
GEOM_STRIPE: Disk ad4 attached to st0.
GEOM_STRIPE: Disk ad6 attached to st0.
GEOM_STRIPE: Device st0 activated.
The new RAID0 drive is located under /dev/stripe/st0.
Before FreeBSD can utilise a drive, whether it is a regular single drive or a RAID array, it must be initialised and marked as an available drive. This is done by writing a label to the drive using the 'bsdlabel' command.
server# bsdlabel -w /dev/stripe/st0
This simply writes a label to the new virtual device that hosts the RAID0 drive, enabling FreeBSD to reference it as an available drive. As a result of this command the virtual device shows a single partition - 'a' - under /dev/stripe/st0a. (Strictly speaking bsdlabel creates partitions; the FreeBSD term "slice" refers to what Microsoft calls a partition.)
To allow data to be written to the RAID0 drive it must be formatted. FreeBSD has a native file system called UFS2 but is also capable of reading and writing to file systems of other operating systems. These "foreign" file systems include Microsoft's variations of the FAT system (called 'MSDOS' on FreeBSD). Since Samba will be running locally on this system the file system will be the native UFS2 and is created using the following command.
server# newfs -U /dev/stripe/st0a
The "-U" instructs newfs to enable soft updates on the file system as it is formatted. The command will fill the screen with output similar to the following.
/dev/stripe/st0a: 953880.0MB (1953546304 sectors) block size 16384, fragment size 2048
        using 5191 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
 3387328, 3763680, 4140032, 4516384, 4892736, 5269088, 5645440, 6021792,
 (and on and on...),
 1951385280, 1951761632, 1952137984, 1952514336, 1952890688, 1953267040
The 'newfs' command has many more options available, including one to limit the size of the file system it creates, meaning it is possible to leave room for more than one file system on the drive. Since no such option was specified the entire drive was formatted as one file system. This resulted in the creation of almost 1TB of storage using the two 500GB drives.
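The capacity figure is easy to verify: striping concatenates the usable space of both members, so the 476940MB the kernel reported for each drive simply doubles (the shortfall versus the advertised 500GB is the usual decimal-versus-binary discrepancy):

```shell
# Each Seagate reported 476940MB to the kernel; a two-disk stripe offers the sum,
# matching the 953880.0MB figure printed by newfs above
awk 'BEGIN { printf "%dMB\n", 2 * 476940 }'
```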
The final stage in permitting FreeBSD to access this drive - and from there allowing Samba to read and write to it via network file shares - is to actually mount it. In order to do this the RAID0 drive needs to be added as a mountable drive to the '/etc/fstab' file. The following example, taken from the system above, shows the required entry on the last line.
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ad0s1b             none            swap    sw              0       0
/dev/ad0s1a             /               ufs     rw              1       1
/dev/ad0s1e             /tmp            ufs     rw              2       2
/dev/ad0s1f             /usr            ufs     rw              2       2
/dev/ad0s1d             /var            ufs     rw              2       2
/dev/acd0               /cdrom          cd9660  ro,noauto       0       0
/dev/stripe/st0a        /smb            ufs     rw              2       2
After a reboot the drive will be mounted as '/smb'; however, issuing the following command will save a reboot at this stage.
server# mount /dev/stripe/st0a /smb
This will allow you to use the RAID0 drive much like any other drive on the system. The drive can be verified and the free space determined by using the df command.
server# df -h /smb
The "-h" shows free space in "human readable" sizes - which may be kilobytes, megabytes, gigabytes, terabytes, or even petabytes as appropriate. (You may also ask for a specific block size directly; for example "df -g" shows free space in gigabytes whether that "seems like" the most readable option or not.)
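The output can also be fed to other tools. The figures below are hypothetical, but the field layout matches df's, so awk can pluck out, say, the Avail column for the /smb mount:

```shell
# Hypothetical df -h output for a system like the one above (sizes invented)
sample='Filesystem          Size    Used   Avail Capacity  Mounted on
/dev/stripe/st0a    902G    4.0G    826G     0%    /smb'

# Print the fourth field (Avail) of the line whose last field is /smb
printf '%s\n' "$sample" | awk '$NF == "/smb" { print $4 }'
```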
While the drives can simply be removed, thereby destroying the RAID0 drive, the system will very likely become unstable as a result. The following explains the cleaner way to remove a RAID0 drive from a running system.
Removing the RAID0 configuration will result in loss of all data. Ensure all essential data is backed up prior to doing the following.
Ensure any and all services that might use the drive are either stopped or configured to no longer have a dependency on the drive.
Remove the drive's entry from '/etc/fstab' (see above with regards to how it was added) and unmount the drive from its mount point.
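Editing the file by hand is simplest, but the deletion can also be scripted. A hedged sketch, operating on a throwaway copy rather than the real /etc/fstab:

```shell
# Work on an illustrative copy; on the live system this would be /etc/fstab
fstab=/tmp/fstab.example
printf '%s\n' \
    '/dev/ad0s1a      /     ufs  rw  1  1' \
    '/dev/stripe/st0a /smb  ufs  rw  2  2' > "$fstab"

# Drop the stripe's entry, then move the edited copy into place
grep -v 'stripe/st0a' "$fstab" > "$fstab.new" && mv "$fstab.new" "$fstab"
```

Afterwards the file no longer mentions the stripe, and unmounting '/smb' completes this step.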
Using the gstripe command, instruct the GEOM driver to stop the RAID0 drive. Note that the 'stop' subcommand takes the stripe's name rather than its device path.
server# gstripe stop st0
To ensure the underlying drives of the RAID0 set can be used for other purposes it is recommended the metadata (data used by GEOM, stored on the last sector of each drive, describing the setup in use) is cleared.
server# gstripe clear -v /dev/ad4
server# gstripe clear -v /dev/ad6
The 'ad4' and 'ad6' being the drives used in the above example.
Issuing these commands will result in the following from dmesg, confirming the removal of RAID0 was successful.
GEOM_STRIPE: Disk ad4 removed from st0.
GEOM_STRIPE: Device st0 removed.
GEOM_STRIPE: Disk ad6 removed from st0.
GEOM_STRIPE: Device st0 destroyed.