RAID, performance tests
Equipment:
 Athlon X2 5000+
  3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300)
  2x Western Digital 500GB drives (WDC WD5000AAKS-00YGA0 12.01C02)
  Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller

 Athlon 64 3500+
  5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE SATA-150)
  Nvidia nForce onboard RAID controller
Procedure:
 /usr/bin/time -h measuring simultaneous cp of 3.1GB test files to /dev/null
 simultaneous cp processes use separate files to help avoid cache hits altering results
 sysctl -w vfs.read_max=128 unless otherwise stated
 files generated with dd if=/dev/zero bs=16M count=200
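For illustration, a two-process run looks roughly like this as shell commands; /data and the test file names are placeholders, not the paths actually used in these tests:

 # set read-ahead (system default is 8)
 sysctl -w vfs.read_max=128
 # generate ~3.1GB test files, one per cp process
 dd if=/dev/zero of=/data/test1 bs=16M count=200
 dd if=/dev/zero of=/data/test2 bs=16M count=200
 # time two simultaneous readers, each on its own file so neither is served from cache
 /usr/bin/time -h cp /data/test1 /dev/null &
 /usr/bin/time -h cp /data/test2 /dev/null &
 wait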
Notes:
 the system default is vfs.read_max=8
 testing showed that files generated with /dev/random performed no differently on read than files generated with /dev/zero
 bonnie++ was flirted with, but we couldn't figure out how to make it read big enough chunks of data to ever '''once''' hit the disk instead of the cache!
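For what it's worth, bonnie++ does have knobs for this: -s sets the total test file size in MB, -r declares RAM size, and the file size generally needs to be at least double RAM before reads stop coming from cache. A hedged example (the directory and sizes are assumptions, not from these tests):

 # assumes a box with 2GB RAM; -d test directory, -s file size in MB, -r RAM in MB
 bonnie++ -d /data -s 4096 -r 2048 -u nobody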
Test data:
 '''write performance (1 process)'''
  5 250GB/500GB, graid3    : 153MB/s
  5 250GB/500GB, graid3 -r : 142MB/s

 '''read performance (1 process)'''
  5 250GB/500GB, graid3 : 213MB/s (dips down to 160MB/s)
  5 750GB disks, graid3 : 152MB/s (wildly fluctuating 120MB/s-200MB/s)
  3 250GB disks, graid3 : 114MB/s (dips down to 90MB/s)
  1 750GB drive         : 65MB/s  (60MB/s-70MB/s)
  1 250GB drive         : 56MB/s  (very little variation)

 '''read performance (2 processes)'''
  5 250GB/500GB, graid3              : 128MB/s (peak: 155+ MB/s)
  5 750GB disks, graid3              : 125MB/s (peak: 140+ MB/s)
  3 250GB disks, graid3              : 98MB/s  (peak: 130+ MB/s)
  3 250GB disks, graid3 -r           : 88MB/s  (peak: 120+ MB/s)
  2 250GB disks, nVidia onboard RAID1: 81MB/s  (peak: 120+ MB/s)
  2 250GB disks, Promise TX2300 RAID1: 70MB/s  (peak: 100+ MB/s)
  3 250GB disks, gmirror round-robin : 64MB/s  (peak: 65+ MB/s)
  3 250GB disks, gmirror split 128K  : 57MB/s  (peak: 65+ MB/s)
  1 250GB disk                       : 56MB/s  (peak: 60+ MB/s)
  2 250GB disks, gmirror round-robin : 55MB/s  (peak: 65+ MB/s)

 '''read performance (3 processes)'''
  5 250GB/500GB, graid3    : 106MB/s (peak: 130+ MB/s, low: 90+ MB/s)
  5 250GB/500GB, graid3 -r : 103MB/s (peak: 120+ MB/s, low: 80+ MB/s)

 '''read performance (4 processes)'''
  5 250GB/500GB, graid3    : 105MB/s (peak: 130+ MB/s, low: 90+ MB/s)
  5 250GB/500GB, graid3 -r : 105MB/s (peak: 120+ MB/s, low: 80+ MB/s)

 '''read performance (5 processes)'''
  5 250GB/500GB, graid3 -r : 107MB/s (peak: 120+ MB/s, low: 80+ MB/s)
  5 250GB/500GB, graid3    : 105MB/s (peak: 130+ MB/s, low: 90+ MB/s)
Ancillary data:
 '''vfs.read_max=8, 2 parallel cp processes'''
  1 250GB disk                       : 3m 56s
  2 250GB disks, gmirror round-robin : 4m 38s
  3 250GB disks, gmirror round-robin : 3m 24s
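For reference, here's roughly how the graid3 and gmirror configurations compared above get created; ad4/ad6/ad8 and the volume names are placeholders, not the devices actually used in these tests:

 # 3-disk RAID3 (the "graid3 -r" rows add -r: round-robin reads from the parity disk too)
 kldload geom_raid3
 graid3 label -v data ad4 ad6 ad8
 newfs /dev/raid3/data

 # 2-disk mirror with round-robin reads
 kldload geom_mirror
 gmirror label -v -b round-robin gm0 ad4 ad6
 # ...or split balance with a 128KB (131072-byte) slice size instead:
 # gmirror label -v -b split -s 131072 gm0 ad4 ad6
 newfs /dev/mirror/gm0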
Preliminary conclusions:
 the system default of vfs.read_max=8 is insufficient for ANY configuration, including a vanilla single drive
 gmirror read performance sucks: even with three disks it barely beats a single drive, regardless of balance algorithm
 Promise and nVidia RAID1 do better, but are oddly still SIGNIFICANTLY slower than graid3: wtf?
 graid3 is the clear performance king here, and offers a very significant write performance increase as well
 SATA-II offers significant performance increases over SATA-I on large arrays
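If you want the read-ahead setting to survive a reboot, the obvious place is /etc/sysctl.conf; 128 is simply the value used in these tests, not a tuned recommendation:

 # /etc/sysctl.conf
 vfs.read_max=128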