Talk:RAID, performance tests

From FreeBSDwiki

raw info, temporarily moved here from the main article page

Equipment:

Athlon X2 5000+ 
    3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300)
    2x Western Digital 500GB drives (WDC WD5000AAKS-00YGA0 12.01C02)
    Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller
Athlon 64 3500+ 
    5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE SATA-150)
    Nvidia nForce onboard RAID controller

Procedure:

read performance measured with /usr/bin/time -h timing simultaneous cp of 3.1GB files to /dev/null
    files generated with dd if=/dev/random bs=16M count=200
    simultaneous cp processes use physically separate files
write performance tested with dd if=/dev/zero bs=16M count=200
sysctl -w vfs.read_max=128 unless otherwise stated (a command sketch follows this list)
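
A minimal sketch of one test run, assuming the array is mounted at /mnt/array and three parallel readers; the paths and file names are illustrative, not the exact ones used:

    #!/bin/sh
    # sketch only -- /mnt/array and the testfile names are assumptions
    sysctl -w vfs.read_max=128          # raised from the system default of 8

    # generate three physically separate 3.1GB test files (16M x 200)
    for i in 1 2 3; do
        dd if=/dev/random of=/mnt/array/testfile$i bs=16M count=200
    done

    # read test: time simultaneous cp processes, one per file
    for i in 1 2 3; do
        /usr/bin/time -h cp /mnt/array/testfile$i /dev/null &
    done
    wait

    # write test: stream zeroes onto the array
    /usr/bin/time -h dd if=/dev/zero of=/mnt/array/writetest bs=16M count=200

Scale the loop count to match the process counts in the results below.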

Notes:

system default of vfs.read_max=8
bonnie++ was flirted with, but we couldn't figure out how to make it read
  chunks of data big enough to ever hit the disk instead of the cache!

Ancillary data (a setup sketch for the tested configurations follows the list):

write performance (1 process)
5 250GB/500GB, graid3               : 153 MB/s
5 250GB/500GB, graid3 -r            : 142 MB/s
1 500GB drive                       :  72 MB/s
1 WD Raptor 74GB drive (Cygwin)     :  60 MB/s
1 250GB drive                       :  58 MB/s
5 250GB/500GB, gmirror round-robin  :  49 MB/s
5 250GB/500GB, gmirror split 128k   :  49 MB/s
read performance (1 process)
5 250GB/500GB, graid3               : 213 MB/s (dips down to 160MB/sec)
5 750GB disks, graid3               : 152 MB/s (wildly fluctuating 120MB/s-200MB/s)
3 250GB disks, graid3               : 114 MB/s (dips down to 90MB/sec)
1 500GB disk                        :  76 MB/s
1 750GB disk                        :  65 MB/s (60MB/s-70MB/s)
5 250GB/500GB, gmirror round-robin  :  63 MB/s
3 250GB disks, gmirror round-robin  :  59 MB/s
1 250GB disk                        :  56 MB/s (very little variation)
3 250GB disks, gmirror split 128K   :  55 MB/s
5 250GB/500GB, gmirror split 128K   :  54 MB/s
read performance (2 processes)
5 250GB/500GB, graid3               : 128 MB/s (peak: 155+ MB/sec)
5 750GB disks, graid3               : 125 MB/s (peak: 140+ MB/sec)
3 250GB disks, graid3               :  98 MB/s (peak: 130+ MB/sec)
3 250GB disks, graid3 -r            :  88 MB/s (peak: 120+ MB/sec)
2 250GB disks, nVidia onboard RAID1 :  81 MB/s (peak: 120+ MB/sec) // initial test had flawed data - retested for final article
5 250GB/500GB, gmirror round-robin  :  73 MB/s
2 250GB disks, Promise TX2300 RAID1 :  70 MB/s (peak: 100+ MB/sec) // initial test had flawed data - retested for final article
1 500GB disk                        :  70 MB/s
1 250GB disk                        :  56 MB/s (peak: 60+ MB/sec)
2 250GB disks, gmirror round-robin  :  55 MB/s (peak: 65+ MB/sec)
3 250GB disks, gmirror round-robin  :  53 MB/s
5 250GB/500GB, gmirror split 128K   :  50 MB/s
3 250GB disks, gmirror split 128K   :  46 MB/s
read performance (3 processes)
5 250GB/500GB, graid3               : 106 MB/s (peak: 130+ MB/sec low: 90+ MB/sec)
5 250GB/500GB, graid3 -r            : 103 MB/s (peak: 120+ MB/sec low: 80+ MB/sec)
1 500GB disk                        :  72 MB/s
5 250GB/500GB, gmirror round-robin  :  69 MB/s
1 250GB disk                        :  55 MB/s
3 250GB disks, gmirror round-robin  :  53 MB/s
3 250GB disks, gmirror split 128K   :  49 MB/s
5 250GB/500GB, gmirror split 128K   :  47 MB/s
read performance (4 processes)
5 250GB/500GB, graid3               : 105 MB/s (peak: 130+ MB/sec low: 90+ MB/sec)
5 250GB/500GB, graid3 -r            : 105 MB/s (peak: 120+ MB/sec low: 80+ MB/sec)
1 500GB disk                        :  72 MB/s
5 250GB/500GB, gmirror round-robin  :  71 MB/s (peak:  75+ MB/sec low: 64+ MB/sec)
3 250GB disks, gmirror round-robin  :  65 MB/s
1 250GB disk                        :  55 MB/s
3 250GB disks, gmirror split 128K   :  55 MB/s
5 250GB/500GB, gmirror split 128K   :  47 MB/s (peak:  59+ MB/sec low: 31+ MB/sec)
read performance (5 processes)
5 250GB/500GB, graid3 -r            : 107 MB/s (peak: 120+ MB/sec low: 80+ MB/sec) 
5 250GB/500GB, graid3               : 105 MB/s (peak: 130+ MB/sec low: 90+ MB/sec)
5 250GB/500GB, gmirror round-robin  :  72 MB/s (peak:  80+ MB/sec low: 67+ MB/sec)
1 500GB disk                        :  72 MB/s
1 250GB disk                        :  56 MB/s
5 250GB/500GB, gmirror split 128K   :  47 MB/s (peak: 60+ MB/sec low: 35+ MB/sec)
read performance (vfs.read_max=8, 2 parallel cp processes)
3 250GB disks, gmirror round-robin : 31 MB/s
1 250GB disk                       : 27 MB/s
2 250GB disks, gmirror round-robin : 23 MB/s
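
For reference, a sketch of how the gmirror and graid3 configurations above might be created; the device names (ad4, ad6, ad8) are placeholders, and each label command is an alternative setup, not run together. graid3 requires 2^n+1 components, which is why the arrays are 3 and 5 disks:

    # gmirror with round-robin read balancing (3-disk example)
    gmirror load
    gmirror label -b round-robin gm0 ad4 ad6 ad8

    # gmirror with split balancing and a 128K slice size
    gmirror label -b split -s 128k gm0 ad4 ad6 ad8

    # graid3; the -r variant also reads from the parity component
    graid3 load
    graid3 label gr0 ad4 ad6 ad8
    graid3 label -r gr0 ad4 ad6 ad8

    # put a filesystem on the resulting device
    newfs /dev/mirror/gm0        # or /dev/raid3/gr0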

Preliminary conclusions:

the system default of vfs.read_max=8 is insufficient for ANY configuration, including a vanilla single drive (see the sysctl.conf note below)
read performance is poor for gmirror, the Promise TX2300, and the nVidia onboard RAID1 alike (the nVidia especially for single-process reads)
graid3 is the clear performance king here, and offers a very significant write performance increase as well
SATA-II seems to offer significant performance increases over SATA-I on large arrays
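
A natural follow-up to the first conclusion is making the larger read_max persistent across reboots; one way, using the value from these tests (128, which worked well here but isn't necessarily optimal):

    # apply at every boot via /etc/sysctl.conf
    echo 'vfs.read_max=128' >> /etc/sysctl.conf
    # confirm the running value
    sysctl vfs.read_max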

/end raw info
