Performance testing
Iometer version 2006.07.27 was used for performance testing. We ran it at block level on an unformatted array and also at file-system level over NTFS. We saw minimal difference between the two, so block-level results are used from here on; the exception is the test rig's internal array, which was tested with a 1GB test file.
Four Seagate 7200.10 750GB SATA drives were provided by Boston for testing.
Setup
The host machine for Iometer was as follows:
| Component | Details |
|---|---|
| CPU | AMD Opteron 146 @ 2.5GHz |
| Motherboard | ABIT AN8 Ultra (nForce 4 Ultra) |
| Memory | 2.0GB PC3200 DDR @ 209MHz |
| Disks | 4x Seagate 7200.8 250GB in RAID 10 |
| Graphics | NVIDIA GeForce 6800 GT 256MiB |
| Network | NVIDIA network controller, 1Gbps, 9,000-byte frames |
| OS | Microsoft Windows XP x64 |
| RAID card | RocketRAID 2300 PCIe x1, BIOS v2.01, driver 1.1.0.0 |
Here is our Iometer test regime:
| Option/test | Configuration |
|---|---|
| Outstanding I/Os | 10 |
| Individual test run time | 30 seconds |
| Read test access spec | 1MB transfers, 100% sequential, 100% read |
| Write test access spec | 1MB transfers, 100% sequential, 100% write |
| General usage access spec | 64KB transfers, 50% sequential / 50% random, 33% write / 67% read |
To give the RocketRAID 2300 something to compare against, we ran the same Iometer tests on the four-disk nVRAID level-10 array already installed in the test system.
We also have performance figures for a number of other setups, available in our Boston RAID -X- pack review, but the test platform and method have changed slightly since those tests, so we won't be drawing direct comparisons against them. Still, if you're after a rough comparison, those numbers are worth a look.
We configured the 2300 with four disks in RAID-5 and RAID-0, giving 2.25TB and 3TB respectively. Or, for the more adventurous with standard indices, that's 4.2Gibinibbles and 5.9Gibinibbles. Don't worry, we're just messing with you!
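As a quick sanity check on those capacity figures, the usable space for each RAID level over four 750GB drives works out as follows (a minimal sketch using the drive count and size from our test setup):

```python
# RAID usable-capacity check for four 750GB drives, matching the
# configurations tested in the review (decimal GB, as vendors quote).
DISKS = 4
DRIVE_GB = 750

raid0 = DISKS * DRIVE_GB           # striping: all capacity usable
raid5 = (DISKS - 1) * DRIVE_GB     # one disk's worth lost to parity
raid10 = (DISKS // 2) * DRIVE_GB   # mirrored stripes: half usable

print(raid0 / 1000, "TB in RAID-0")    # 3.0 TB
print(raid5 / 1000, "TB in RAID-5")    # 2.25 TB
print(raid10 / 1000, "TB in RAID-10")  # 1.5 TB
```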
For the next three graphs, this legend might be useful.
Read performance
Our Iometer read-performance test paints an interesting picture of the RocketRAID 2300. The disks used with it will out-pace the slower, older 250GB Seagates hung off the nVRAID controller in any situation.
Imagine, though, that all the disks were the same. A four-disk RAID-0 should be roughly twice as fast as a four-disk RAID-10, with a four-disk RAID-5 likely sitting somewhere just behind the RAID-0 configuration.
Considering the slower disks in nVRAID, the RocketRAID delivers good but not mind-blowing read performance.
Then look at the RAID-0/5 difference... hardly anything.
And then remember our earlier musings about the bandwidth of an x1 PCIe connection. We believe the interface is the bottleneck here, not the card, RAID level or disks.
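A back-of-envelope calculation supports the bottleneck theory. A PCIe 1.x lane signals at 2.5GT/s with 8b/10b encoding, giving 250MB/s of raw payload bandwidth per direction; the usable figure after protocol overhead and the per-drive throughput below are rough assumptions, not measurements from this review:

```python
# Back-of-envelope check of the PCIe x1 bottleneck argument.
# PCIe 1.x: 2.5GT/s per lane, 8b/10b encoding (8 payload bits per
# 10 transferred), so one lane tops out at 250MB/s per direction.
LANE_MB_S = 2.5e9 * 8 / 10 / 8 / 1e6   # 250.0 MB/s theoretical
USABLE_MB_S = 200                      # rough post-overhead figure (assumption)

# ~75MB/s sustained per 7200.10 drive is an assumed ballpark figure.
PER_DRIVE_MB_S = 75
aggregate = 4 * PER_DRIVE_MB_S         # 300 MB/s the disks could supply

print(LANE_MB_S)                       # 250.0
print(aggregate > USABLE_MB_S)         # True: the link saturates first
```

With the four drives able to supply more data than the lane can carry, RAID-0 and RAID-5 reads would converge on the link's ceiling, which matches what the graphs show.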
Write performance
In the past, we've run tests that have seen 750GB Seagate drives - which use perpendicular recording - take a bit of a performance hit on writes. Note how the nVRAID array's write speed is more or less the same as its read speed, while the RocketRAID arrays slow somewhat.
That's not HighPoint's fault, though, and RAID-0 throughput is still good. RAID-5 is hit quite hard, however, perhaps down to the lack of any XOR processor and RAM on the card itself.
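To illustrate the work the host ends up doing: RAID-5 parity is the byte-wise XOR of the data chunks in a stripe, and with no dedicated XOR engine or cache RAM on the card, that computation falls on the CPU for every stripe written. A minimal sketch of the idea:

```python
# Minimal sketch of RAID-5 parity: parity is the byte-wise XOR of
# the data chunks in a stripe. Without an XOR engine on the card,
# this work falls to the host CPU on every stripe write.
def parity(chunks: list[bytes]) -> bytes:
    p = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            p[i] ^= b
    return bytes(p)

# Reconstruction uses the same XOR: a lost chunk is the XOR of the
# surviving chunks plus parity - which is also why degraded reads
# cost extra work.
stripe = [b"\x01\x02", b"\x0f\x0f", b"\xf0\x00"]
p = parity(stripe)
recovered = parity([stripe[0], stripe[2], p])  # rebuild the middle chunk
print(recovered == stripe[1])  # True
```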
General performance
Here we find something the RocketRAID 2300 is damn good at. RAID-0 proves mightily fast in our general test, which uses small requests and randomised disk access.
RAID-5 performance looks meagre in comparison. But in actual fact, we reckon it's OK for a card that's not doing anything in the way of acceleration.
RAID testing
This is where we give the RAID array a bit of John McClane treatment, causing it some... damage!
Pulling a disk from a RAID-0 array destroys it outright - there's no redundancy to fall back on - so we only tested the RAID-5 configuration in a degraded state. We also performed testing at "low" and "high" rebuild priorities.
Read performance plummets to less than half of normal speed when one of the disks pops its clogs. A high-priority rebuild will see that performance loss continue. Lowering the priority will buy you a throughput boost but at the cost of a longer rebuild time.
Write performance is somewhat counter-intuitive, rising amid a disk failure. But with one fewer disk to write to, perhaps it makes sense that the RocketRAID 2300 is quicker to service write requests.
Rebuild priority has little impact on write performance.
Finally, general performance reflects that of read performance, albeit at lower speeds and with less of a throughput hit should a disk die.
A note on rebuilding
While initialising and rebuilding arrays, we kept an eye on CPU usage to see whether the controller would consume more processor time.
We didn't observe any increase in CPU load, but drvInst.exe - part of the 2300's driver package - performed a lot of I/O operations.
The result was a system that became sluggish when swapping between windows or starting new applications. This sluggishness was only present during initialisation and rebuilds, however.