RAID, performance tests
Equipment:
System 1: Athlon X2 5000+, 3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02, SATA-300), Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller
System 2: Athlon 64 3500+, 5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE, SATA-150), Nvidia nForce onboard RAID controller
Procedure:
Read performance was measured with /usr/bin/time -h, timing a simultaneous cp of two 3.1GB random binary files to /dev/null. The files were generated with dd if=/dev/random of=/data/random.bin bs=16M count=200.
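One way to reproduce the procedure, as a minimal sketch; the second file name (random2.bin) and the /data mount point of the array under test are assumptions, not details from the original tests:

  # generate two ~3.1GB files of random data on the array being tested
  dd if=/dev/random of=/data/random.bin bs=16M count=200
  dd if=/dev/random of=/data/random2.bin bs=16M count=200
  # time two cp processes reading both files to /dev/null simultaneously
  /usr/bin/time -h sh -c 'cp /data/random.bin /dev/null & cp /data/random2.bin /dev/null & wait'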
Notes:
The system default is vfs.read_max=8.
bonnie++ was also tried, but we couldn't figure out how to make it read chunks of data large enough to ever once hit the disk instead of the cache!
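The read-ahead value can be checked and raised at runtime with sysctl; 128 is the value used in the tuned runs below:

  # show the current cluster read-ahead setting (default: 8)
  sysctl vfs.read_max
  # raise it for the tuned test runs
  sysctl vfs.read_max=128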
Test data:
vfs.read_max=8:
1 250GB disk: 3m 56s
2 250GB disks, gmirror round-robin: 4m 38s
3 250GB disks, gmirror round-robin: 3m 24s
vfs.read_max=128:
5 750GB disks, graid3: 0m 51s (peak: 140+ MB/sec)
3 250GB disks, graid3: 1m 05s (peak: 130+ MB/sec)
3 250GB disks, graid3 -r: 1m 13s (peak: 120+ MB/sec)
2 250GB disks, nVidia onboard RAID1: 1m 19s (peak: 120+ MB/sec)
2 250GB disks, Promise TX2300 RAID1: 1m 32s (peak: 100+ MB/sec)
3 250GB disks, gmirror round-robin: 1m 40s (peak: 65+ MB/sec)
3 250GB disks, gmirror split 128K: 1m 52s (peak: 65+ MB/sec)
1 250GB disk: 1m 55s (peak: 60+ MB/sec)
2 250GB disks, gmirror round-robin: 1m 57s (peak: 65+ MB/sec)
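For reference, the gmirror and graid3 configurations listed above would be built roughly as sketched below; the device names (ad4, ad6, ad8) and array labels are illustrative, not the actual ones used in these tests, and each labeled array is an alternative configuration for the same disks:

  # load the GEOM classes
  gmirror load
  graid3 load
  # three-disk mirror with round-robin read balancing
  gmirror label -v -b round-robin gm0 ad4 ad6 ad8
  # three-disk mirror with the split balance algorithm, 131072-byte (128K) slices
  gmirror label -v -b split -s 131072 gm1 ad4 ad6 ad8
  # three-disk RAID3 array; adding -r also reads from the parity component
  graid3 label -v gr0 ad4 ad6 ad8
  graid3 label -v -r gr0 ad4 ad6 ad8
  # create and mount a filesystem on the new array
  newfs /dev/raid3/gr0
  mount /dev/raid3/gr0 /data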
Ancillary data:
Single copy time for 1x 250GB drive is 57.1s @ 58MB/sec sustained, with very little variation.
Single copy time for 3x 250GB graid3 is 28.4s @ 120MB/sec sustained, with dips down to 90MB/sec.
Single copy time for any gmirror configuration is roughly equal to the single copy time for one drive.
Preliminary conclusions:
The system default of vfs.read_max=8 is insufficient for ANY configuration, including a vanilla single drive.
gmirror read performance sucks; surprisingly, so do both Promise RAID1 and nVidia RAID1: why the hell aren't RAID1 reads done like RAID0 reads?
graid3 is the clear performance king here, and offers a very significant write performance increase as well.
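To keep the higher read-ahead across reboots, the setting can be made persistent in /etc/sysctl.conf (standard FreeBSD practice, not something the tests above describe doing):

  # /etc/sysctl.conf
  # raise the cluster read-ahead from the default of 8
  vfs.read_max=128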