
RAID, performance tests

From FreeBSDwiki
Revision as of 13:17, 26 December 2007


Equipment:

Athlon X2 5000+ 
    3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300)
    2x Western Digital 500GB drives (WDC WD5000AAKS-00YGA0 12.01C02)
    Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller
Athlon 64 3500+ 
    5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE SATA-150)
    Nvidia nForce onboard RAID controller

Procedure:

/usr/bin/time -h measuring simultaneous cp of 3.1GB random binary files to /dev/null
simultaneous cp processes use separate files to help avoid cache hits altering results
sysctl -w vfs.read_max=128 unless otherwise stated
files generated with dd if=/dev/zero bs=16M count=200
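
Put together, a minimal sketch of one test run looks like the shell session below. The file names and the /data mount point are illustrative placeholders, not the exact paths used for these numbers:

 # raise read-ahead from the system default of 8 (see Notes)
 sysctl -w vfs.read_max=128
 # generate two ~3.1GB test files (200 blocks of 16MB each)
 dd if=/dev/zero of=/data/test1.bin bs=16M count=200
 dd if=/dev/zero of=/data/test2.bin bs=16M count=200
 # time two simultaneous reads, each cp on its own file so neither
 # benefits from blocks the other has already pulled into the cache
 /usr/bin/time -h cp /data/test1.bin /dev/null &
 /usr/bin/time -h cp /data/test2.bin /dev/null &
 wait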

Notes:

the system default is vfs.read_max=8
testing proved files generated with /dev/random performed no 
  differently on read than files generated with /dev/zero
bonnie++ was flirted with, but we couldn't figure out how to make it read
  big enough chunks of data to ever once hit the disk instead of the cache!

Test data:

write performance (1 process)
5 250GB/500GB, graid3               : 153MB/s
5 250GB/500GB, graid3 -r            : 142MB/s
read performance (1 process)
5 250GB/500GB, graid3               : 213MB/s (dips down to 160MB/sec)
5 750GB disks, graid3               : 152MB/s (wildly fluctuating 120MB/s-200MB/s)
3 250GB disks, graid3               : 114MB/s (dips down to 90MB/sec)
1 750GB drive                       :  65MB/s (60MB/s-70MB/s)
1 250GB drive                       :  56MB/s (very little variation)
2 processes
 5 250GB/500GB, graid3              : 128MB/s (peak: 155+ MB/sec)
 5 750GB disks, graid3              : 125MB/s (peak: 140+ MB/sec)
 3 250GB disks, graid3              :  98MB/s (peak: 130+ MB/sec)
 3 250GB disks, graid3 -r           :  88MB/s (peak: 120+ MB/sec)
 2 250GB disks, nVidia onboard RAID1:  81MB/s (peak: 120+ MB/sec)
 2 250GB disks, Promise TX2300 RAID1:  70MB/s (peak: 100+ MB/sec)
 3 250GB disks, gmirror round-robin :  64MB/s (peak: 65+ MB/sec)
 3 250GB disks, gmirror split 128K  :  57MB/s (peak: 65+ MB/sec)
 1 250GB disk                       :  56MB/s (peak: 60+ MB/sec)
 2 250GB disks, gmirror round-robin :  55MB/s (peak: 65+ MB/sec)
3 processes 
5 250GB/500GB, graid3               : 106MB/s (peak: 130+ MB/sec low: 90+MB/sec)
5 250GB/500GB, graid3 -r            : 103MB/s (peak: 120+ MB/sec low: 80+MB/sec)
4 processes
5 250GB/500GB, graid3               : 105MB/s (peak: 130+ MB/sec low: 90+MB/sec)
5 250GB/500GB, graid3 -r            : 105MB/s (peak: 120+ MB/sec low: 80+MB/sec)
5 processes
5 250GB/500GB, graid3 -r            : 107MB/s (peak: 120+ MB/sec low: 80+MB/sec) 
5 250GB/500GB, graid3               : 105MB/s (peak: 130+ MB/sec low: 90+MB/sec)
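
For reference, the gmirror and graid3 configurations tested above are created with GEOM commands along the following lines; the array and disk names (gm0, gr0, ad4/ad6/ad8) are examples only, and the flags shown are a sketch rather than the exact invocations used for these tests:

 # load the GEOM classes
 gmirror load
 graid3 load
 # 3-disk mirror with round-robin reads ("gmirror round-robin" above)...
 gmirror label -b round-robin gm0 ad4 ad6 ad8
 # ...or with the split balance algorithm and 128K (131072-byte) slices
 gmirror label -b split -s 131072 gm0 ad4 ad6 ad8
 # 3-disk graid3; adding -r makes the parity component serve reads too
 # (the "graid3 -r" rows above)
 graid3 label gr0 ad4 ad6 ad8
 # the new devices appear as /dev/mirror/gm0 and /dev/raid3/gr0
 newfs /dev/raid3/gr0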


Ancillary data:

 vfs.read_max=8, 2 parallel cp processes
 1 250GB disk: 3m 56s
 2 250GB disks, gmirror round-robin: 4m 38s
 3 250GB disks, gmirror round-robin: 3m 24s

Preliminary conclusions:

system default of vfs.read_max=8 is insufficient for ANY configuration, including vanilla single-drive
gmirror read performance sucks
Promise and nVidia RAID1 are better, but oddly still SIGNIFICANTLY slower than graid3: wtf?
graid3 is the clear performance king here and offers very significant write performance increase as well
SATA-II offers significant performance increases over SATA-I on large arrays
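
To keep the read_max tuning across reboots instead of re-running sysctl -w by hand, the usual place for it is /etc/sysctl.conf:

 # /etc/sysctl.conf
 vfs.read_max=128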