RAID, performance tests
[Image: gmirror-performance.png - click for raw data and test equipment information]
Gmirror Disk Performance
Gmirror, unfortunately, is not doing well at all at this time. The 2-drive and 3-drive gmirror arrays performed grossly worse than even a single baseline drive. The 5-drive gmirror managed to outperform the baseline 250GB drive, but it was handily beaten by the 500GB baseline drive, by the Nvidia onboard RAID1 implementation, and especially by the Linux RAID1 implementations, which dominated everything else across the board.
Only results for gmirror's round-robin balance algorithm are shown here, because the load and split balance algorithms performed even worse than round-robin. Results for split are available as raw data if you click the image, but are not included on the graph itself. Load results are not available at all: initial testing showed it performing even worse than split, so those tests were not allowed to complete.
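For reference, the gmirror arrays and balance-algorithm switches described above correspond to commands along these lines. This is a sketch with placeholder device and array names, not the exact commands used in these tests:

  # create a 2-drive mirror using the round-robin balance algorithm
  gmirror label -v -b round-robin gm0 /dev/ad4 /dev/ad6

  # switch a running mirror to the split (or load) algorithm between test runs
  gmirror configure -b split gm0
  gmirror configure -b load gm0

  # confirm which balance algorithm is currently active
  gmirror list gm0 | grep Balance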
It is interesting to note that the Nvidia, Promise, and Linux RAID1 implementations all showed significant variation in how they handled simultaneous processes - all three exhibited gaps of 15, 30, and even 38 seconds between the completion times of otherwise identical, simultaneously started cp processes to /dev/null. While gmirror's raw performance is abysmal, it does at least handle processes consistently; it never finished processes more than a few hundred milliseconds apart.
The Promise TX-2300 RAID1 implementation was just plain poor: it performed only marginally better than gmirror, failed to improve on the scores of the single baseline drive, and turned in the same oddly inconsistent times that the vastly higher-performing arrays did.
The gmirror and Linux implementations were the only ones tested that allowed RAID1 arrays with more than two member drives.
Graid3 Disk Performance
Graid3 is doing noticeably better than gmirror. The 5-drive graid3 array handily outperformed everything else tested. The 3-drive graid3 array was slightly slower than the Nvidia RAID1 in the 2-process and 3-process tests and significantly slower in the 4-process and 5-process tests, but it nearly doubled the Nvidia RAID1's single-process performance, thanks to Nvidia's curious failure to accelerate single-process copying at all.
Only results for the -R configuration (do not use the parity drive for read operations on a healthy array) are shown here, because it outperformed the -r configuration (always use the parity member during reads) by a slight to significant margin in every test except the 5-process test, where it performed only very slightly worse. It is possible that a more massively parallel test (or a test on a much less contiguous filesystem) would show some advantage for -r, but in these tests no advantage is apparent. Raw data is available on the image page itself.
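For reference, the two read modes compared above map onto graid3's flags roughly as follows; device and array names are placeholders, not the exact ones used here:

  # label a 3-component graid3 array; the last component listed holds the parity data
  graid3 label -v gr0 /dev/ad4 /dev/ad6 /dev/ad8

  # -r: turn ON reading from the parity component while the array is healthy
  graid3 configure -r gr0

  # -R: turn OFF reading from the parity component (the configuration graphed here)
  graid3 configure -R gr0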
Equipment
- FreeBSD 6.2-RELEASE (amd64)
- Ubuntu Server 7.04 (amd64)
- Athlon X2 5000+
- 2GB DDR2 SDRAM
- Nvidia nForce MCP51 SATA 300 onboard RAID controller
- Promise TX2300 SATA 300 RAID controller
- 3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300)
- 2x Western Digital 500GB drives (WDC WD5000AAKS-00YGA0 12.01C02 SATA-300)
Methodology
The read-ahead cache was changed from the default value of 8 to 128 for all tests performed, using sysctl -w vfs.read_max=128. Initial testing showed dramatic performance increases for all tested configurations, including the baseline single drive, as vfs.read_max was increased. The value of 128 was arrived at by doubling vfs.read_max until no further significant performance increase was seen (at vfs.read_max=256) and then backing down to the last value tried.
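In script form, that tuning loop amounts to something like the following; the mount point and file name are placeholders, and each copy reads one of the 3200MB test files so that, with only 2GB of RAM, caching between runs is limited:

  # try successively larger read-ahead values and time the same large copy each time
  for n in 8 16 32 64 128 256; do
      sysctl -w vfs.read_max=$n
      /usr/bin/time cp /test/random1.bin /dev/null
  done

  # make the chosen value permanent across reboots
  echo 'vfs.read_max=128' >> /etc/sysctl.conf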
Similarly, for the Linux tests the read-ahead cache was changed from the default value of 256 to 4096, using hdparm -a4096 /dev/md0. Baseline single-drive performance was not tested under Linux, but extremely erratic initial results on the first RAID1 configuration tested led me to google Linux disk performance tuning so as to make a completely fair comparison. The value of 4096 was arrived at by successive doubling and testing until the highest-performing value was found, then testing against 3/4 of its value. The RAID1 array was created using the command mdadm --create /dev/md0 --level raid1 -n 5 --assume-clean /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf, and subsequently shrunk to three members and then to two members as testing completed.
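For completeness, here is a sketch of the Linux side: checking the read-ahead setting and one plausible way of shrinking the mirror between test runs. The exact removal sequence was not recorded, so the drive names and ordering below are assumptions rather than a transcript of the commands actually run:

  # set and verify read-ahead on the md device
  hdparm -a4096 /dev/md0
  hdparm -a /dev/md0

  # shrink the 5-way mirror to 3 members: fail and remove two drives,
  # then tell mdadm the array now has fewer active devices
  mdadm /dev/md0 --fail /dev/sde --remove /dev/sde
  mdadm /dev/md0 --fail /dev/sdf --remove /dev/sdf
  mdadm --grow /dev/md0 --raid-devices=3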
For the actual testing, 5 individual 3200MB files were created on each tested device or array as random1.bin through random5.bin, using dd if=/dev/random of=randomN.bin bs=16m count=200. These files were then cp'ed from the device or array to /dev/null. Elapsed times were generated by echoing a timestamp immediately before beginning the test and immediately at the end of each individual process, then subtracting the beginning timestamp from the last completed timestamp. Speeds shown are simply the amount of data in MB copied to /dev/null (3200, 6400, 9600, 12800, or 16000) divided by the total elapsed time.
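A minimal sketch of that timing harness is below. The original scripts were not published, so the paths and details are assumed, and this version uses whole-second timestamps for simplicity, whereas the original timestamps evidently had sub-second resolution:

  #!/bin/sh
  # copy N of the 3200MB test files to /dev/null in parallel and report the
  # elapsed time from the common start until the last copy finishes
  N=5
  START=`date +%s`
  i=1
  while [ $i -le $N ]; do
      ( cp /test/random$i.bin /dev/null; echo "random$i.bin finished at `date +%s`" ) &
      i=$((i + 1))
  done
  wait
  END=`date +%s`
  ELAPSED=$((END - START))
  echo "copied $((N * 3200)) MB in $ELAPSED s = $(((N * 3200) / ELAPSED)) MB/s"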
Notes
The methodology used produces a very highly contiguous filesystem, which may skew results significantly higher than in some real-world settings - particularly in the single-process test. Presumably the multiple-process copy tests would be much less affected by fragmentation in real-world filesystems, since by their nature they already require a significant number of drive seeks between blocks of the individual files being copied throughout the test.
In the 5-drive graid3 array tested, the (significantly faster) 500GB drives were positioned as the last two elements of the array. This matters because it means the parity drive was noticeably faster than 3 of the 4 data drives in this configuration; some other testing, on equipment not listed here, leads me to believe this had a favorable impact when using the -r configuration. There was not, however, enough of an improvement to make the -r results worth including on the graph.
Write performance was also tested on each of the devices and arrays listed and will be included in graphs at a later date (for now, raw data is available in the discussion page).
Googling "gmirror performance" and "gmirror slow" did not get me much of a return; just one other individual wondering why his gmirror was so abominably slow - so I reformatted the test system with 6.2-RELEASE (i386) and retested. Unfortunately, the gmirror results did not improve with the change of platform back to i386. It strikes me as very odd that graid3 with only 3 drives (therefore only 2 data drives) outperforms even a five-drive gmirror implementation. And in sharp contrast to gmirror, of course, the Linux kernel RAID1 results speak for themselves.