Promise VTrak J300s
by Jason Clark & Dave Muysson on February 2, 2007 8:00 AM EST - Posted in IT Computing
Test results - Read Performance
100% Sequential Reads
100% Sequential Read Performance (MB/sec)

|         | 1 DQL/Disk | 2 DQL/Disk | 3 DQL/Disk |
|---------|------------|------------|------------|
| SAS     | 536        | 536        | 540        |
| SATA II | 370        | 374        | 407        |
| SATA I  | 356        | 333        | 348        |
| SCSI    | 129        | 128        | 126        |
When we first gathered the test data together, we were very impressed with the results. Both SAS and SATA I/II show extremely high performance with 100% sequential accesses. The SCSI and SAS drives are similar in terms of capacity, while the SAS drives offer higher rotational speeds. Meanwhile, the SATA drives are all 7200 RPM models, offering lower theoretical performance but higher capacities than either of the other two solutions.
We can really see the benefit a point-to-point architecture has to offer, as well as the toll the shared bus takes on performance. All of the SAS/SATA solutions offer much higher read performance than the SCSI offering here, which helps to explain why SAS solutions are rapidly gaining market share. SCSI Ultra320 can't come near its theoretical 320 MB/s transfer rate because every drive on the bus shares that bandwidth. Strangely, we see SCSI and SATA I drop slightly as the Disk Queue Length (DQL) increases, while SAS and SATA II increase slightly.
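To put the shared-bus penalty in perspective, here is a back-of-the-envelope sketch. The 12-drive count follows the test setup described in the comments below; the per-drive streaming rate is an illustrative assumption, not a measured figure.

```python
# Back-of-the-envelope: shared bus vs. point-to-point bandwidth ceilings.
# PER_DRIVE_STREAM is an assumed figure for a mid-2000s 7200 RPM drive.
DRIVES = 12
ULTRA320_BUS = 320        # MB/s, theoretical, shared by every drive on the bus
PER_PORT = {"SATA I": 150, "SATA II": 300, "SAS": 300}  # MB/s per dedicated link
PER_DRIVE_STREAM = 60     # MB/s sustained per drive (assumption)

# Shared bus: all drives compete for one 320 MB/s pipe (minus protocol
# overhead -- hence a plateau well below 320 MB/s in practice).
print(f"Ultra320 shared-bus ceiling: {min(DRIVES * PER_DRIVE_STREAM, ULTRA320_BUS)} MB/s")

# Point-to-point: each drive gets its own link, so the aggregate is bounded
# by the controller and host bus rather than the drive interconnect.
for name, link in PER_PORT.items():
    print(f"{name:8s} aggregate:  {DRIVES * min(PER_DRIVE_STREAM, link)} MB/s")
```

The point the arithmetic makes is that with dedicated links the interconnect stops being the bottleneck entirely; the drives and controller set the ceiling instead.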
It is interesting to compare the performance of the different technologies at low Disk Queue Lengths. Both SAS and SCSI are very close to their peak performance even at a DQL of 2, whereas SATA needs a DQL of 4 before it starts performing well. As the Disk Queue Length increases, both SAS and SCSI hold their performance well, and a gap starts to emerge between SATA I and SATA II.
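For readers who want to experiment with this kind of queue-depth sweep themselves, the sketch below keeps a fixed number of reads in flight, loosely mimicking the outstanding-I/O setting our charts call Disk Queue Length. It is a minimal illustration, not our test harness: the file path is hypothetical, it requires a Unix-like OS (for os.pread), and a real benchmark would use raw devices and direct I/O to keep the OS page cache from inflating the numbers.

```python
# Minimal sketch of a queue-depth sweep: N worker threads keep N sequential
# reads outstanding, analogous to the "DQL" axis in the charts above.
import os
import threading
import time

TARGET = "testfile.bin"    # hypothetical path; use a file much larger than RAM
BLOCK = 64 * 1024          # 64 KB requests
DURATION = 10.0            # seconds per data point

def run(outstanding):
    fd = os.open(TARGET, os.O_RDONLY)
    size = os.fstat(fd).st_size
    lock = threading.Lock()
    state = {"offset": 0, "bytes": 0}
    stop = time.monotonic() + DURATION

    def worker():
        while time.monotonic() < stop:
            with lock:
                off = state["offset"]
                state["offset"] = (off + BLOCK) % size  # wrap for a timed run
            n = len(os.pread(fd, BLOCK, off))           # Unix-only call
            with lock:
                state["bytes"] += n

    threads = [threading.Thread(target=worker) for _ in range(outstanding)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.close(fd)
    return state["bytes"] / DURATION / 2**20            # MB/s

for dql in (1, 2, 4, 8, 16, 32):
    print(f"DQL {dql:2d}: {run(dql):7.1f} MB/s")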
95% Sequential Reads
95% Sequential Read Performance (MB/sec)

|         | 1 DQL/Disk | 2 DQL/Disk | 3 DQL/Disk |
|---------|------------|------------|------------|
| SAS     | 254        | 323        | 356        |
| SATA II | 138        | 178        | 194        |
| SATA I  | 132        | 167        | 182        |
| SCSI    | 93         | 119        | 126        |
Here we start to see the impact that randomness has on our read performance. SAS is still in the lead, but SCSI is starting to catch up to SATA I/II. The higher rotational speeds of SCSI and SAS drives help relative to most SATA offerings, as access times depend in part on rotational latency. Performance continues to grow through the whole range of Disk Queue Lengths, with SAS breaking 350 MB/sec at a DQL of around 34. This is significantly higher than the 175-200 MB/sec of SATA I/II and the 125 MB/sec that SCSI is able to obtain.
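The rotational-latency point is easy to quantify: on average, the platter must spin half a revolution before the target sector arrives under the head. A quick calculation (spindle speed only; seek time is a separate, additive cost):

```python
# Average rotational latency = half a revolution = 0.5 * (60,000 ms / RPM).
for rpm in (15000, 10000, 7200):
    latency_ms = 0.5 * 60_000 / rpm
    print(f"{rpm:>6} RPM: {latency_ms:.2f} ms average rotational latency")
# 15000 RPM: 2.00 ms, 10000 RPM: 3.00 ms, 7200 RPM: 4.17 ms -- every random
# or semi-random access pays this cost before a single byte transfers.
```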
Again, we see the benefit that the point-to-point architecture has to offer over the old shared-bus architecture. SATA II maintains a slight lead over SATA I, but neither comes close to touching SAS. SCSI gains ground, but given the maximum throughput of around 130 MB/s that was measured in the 100% sequential test, it still trails by a large margin in this workload.
80% Sequential Reads
80% Sequential Read Performance (MB/sec)

|         | 1 DQL/Disk | 2 DQL/Disk | 3 DQL/Disk |
|---------|------------|------------|------------|
| SAS     | 144        | 188        | 210        |
| SATA II | 75         | 96         | 103        |
| SATA I  | 74         | 96         | 102        |
| SCSI    | 77         | 104        | 115        |
At 20% random reads we start to see SATA I/II losing its performance advantage over SCSI, while SAS clearly remains the performance leader. We continue to see performance grow as the DQL increases, but it's interesting to see that SAS is ramping up faster than everyone else. SATA I and SATA II are struggling to keep up with SCSI, which has now moved into second place. At this point, the performance of SATA I and SATA II is almost identical.
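Why does a mere 20% of random requests hurt so much? Because one random access costs as much time as dozens of sequential ones, the average service time is dominated by the random fraction. A simple weighted-cost model makes this vivid; the per-request service times below are plausible assumptions for drives of this era, not measured values from our tests.

```python
# Weighted service-time model for mixed workloads. A 64 KB sequential read on
# a drive streaming ~65 MB/s takes about 1 ms; a random read adds seek plus
# rotational latency. Both service times are assumptions for illustration.
BLOCK_MB = 64 / 1024            # 64 KB request size
T_SEQ_MS = 1.0                  # ~64 KB / 65 MB/s (assumed)
T_RAND_MS = 12.0                # ~8 ms seek + ~4 ms rotational latency, 7200 RPM (assumed)

for p_random in (0.0, 0.05, 0.20, 1.00):
    avg_ms = (1 - p_random) * T_SEQ_MS + p_random * T_RAND_MS
    mb_per_s = BLOCK_MB / (avg_ms / 1000)
    print(f"{p_random:4.0%} random: {mb_per_s:6.1f} MB/s per drive")
# 0% random: 62.5 MB/s; 5%: 40.3 MB/s; 20%: 19.5 MB/s; 100%: 5.2 MB/s --
# the same steep falloff the SATA drives show across these charts.
```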
Random (0% Sequential) Reads
Random Read Performance (MB/sec)

|         | 1 DQL/Disk | 2 DQL/Disk | 3 DQL/Disk |
|---------|------------|------------|------------|
| SAS     | 113        | 165        | 197        |
| SATA II | 43         | 53         | 56         |
| SATA I  | 43         | 53         | 55         |
| SCSI    | 70         | 100        | 112        |
In the previous graphs we've seen the impact that random data has on our overall throughput. Now that we are at 100% random accesses, we get to see how the drives perform in a 'worst-case' scenario. SAS continues to hold the performance lead with throughput that's almost double that of SCSI, but now SCSI is almost twice as fast as SATA I/II. Looking at the results through the whole range, SAS continues to grow without showing signs of slowing down, whereas SCSI and SATA I/II appear to level off at a DQL of around 32-36.
While a point-to-point architecture clearly has advantages with sequential accesses, there is still something to be said for spindle speed and disk firmware. However, given that SCSI maxed out at ~130 MB/s in the sequential testing, higher spindle speeds would only go so far in closing the gap between SCSI and SAS. SATA drives like Western Digital's 10K RPM Raptor should also be able to close the distance with SCSI in highly random usage scenarios.
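One way to sanity-check these random-read numbers is to work up from per-drive IOPS. The seek times below are typical published specs for each class of drive, and the 64 KB request size is our assumption; neither is taken verbatim from the test configuration.

```python
# Per-drive random IOPS ~= 1000 / (avg seek ms + avg rotational latency ms),
# scaled by drive count and request size. Seek figures are typical class
# specs (assumptions), and the 64 KB request size is likewise assumed.
BLOCK_MB = 64 / 1024
DRIVES = 12

drives = {               # (avg seek ms, spindle RPM)
    "SAS 15K":   (3.8, 15000),
    "SCSI 10K":  (4.9, 10000),
    "SATA 7200": (8.5, 7200),
}

for name, (seek_ms, rpm) in drives.items():
    rot_ms = 0.5 * 60_000 / rpm
    iops = 1000 / (seek_ms + rot_ms)
    print(f"{name:10s}: {iops:5.0f} IOPS/drive, "
          f"{iops * DRIVES * BLOCK_MB:5.1f} MB/s for {DRIVES} drives")
# The model reproduces the ordering (and roughly the SATA result) but
# understates SAS and SCSI at high DQL: command queuing (TCQ/NCQ) lets those
# drives reorder requests and shorten effective seeks as the queue deepens.
```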
We've covered the read performance, but what happens when we test writes?
31 Comments
yyrkoon - Friday, February 2, 2007 - link
When are you guys going to do some reviews on consumer grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but would like to know if buying a Highpoint 16-port SATA RAID HBA would really be any worse than getting an Areca 8-port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support - or does it? If so, would it really make much of a difference in a SOHO application? I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer such as myself to get lost in, and basically I am interested in reliability first, speed second (to a point).
Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it matters most to your readers (the home PC/enthusiast/SOHO crowd).
mino - Saturday, February 3, 2007 - link
I would stay away from Highpoint. We have had several instances of a RAID HBA (a new one!) consistently going down AND corrupting the whole RAID 5 under some workloads. For the money, one is better off with a QuadFX ASUS board than with Highpoint-like solutions.
Areca is pretty much on a different level, of course...
yyrkoon - Sunday, February 4, 2007 - link
Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
mino - Sunday, February 4, 2007 - link
Any 3Ware, Areca, LSI, or Adaptec solution should be just fine. Most people do not actually need RAID 5 for home usage, and it is usually cheaper to go _software_ RAID 1 with every drive in the array attached to a different controller. In such a scenario, even the cheapest or onboard controller offers fault tolerance comparable to high-end RAID 5 solutions.
However, the simplest way to go is really two NAS RAID 5 boxes mirroring each other.
dropadrop - Tuesday, February 6, 2007 - link
I would rule out Adaptec and the older LSI chipsets still available (under several brands, Intel for example). We replaced a bunch of Intel 6 & 8 port controllers with top-of-the-line 8-port Adaptec SATA II controllers.
The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13 MB/s sequential writes with RAID 10 arrays, and tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.
I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
yyrkoon - Sunday, February 4, 2007 - link
Why would I want to spend $1300+ USD per 5-disk array (minus drives) when I could build my own system much cheaper, and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
Bob Markinson - Friday, February 2, 2007 - link
Interesting review! I would have liked to see a comparison with latest-gen 15K SCSI drives rather than 10K SCSI drives to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions - one with 128 MB cache and the other with 256 MB cache. Also, there were performance issues with early 7.xx firmware/software revisions on the 6M at high IO loads - hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?
Lifted - Tuesday, February 6, 2007 - link
Based on the title of the article, Promise VTrak J300S, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs 15K SAS? It's supposed to tell us what, exactly? You clearly need to look elsewhere for a SAS vs U320 comparison if that's what you were expecting here. This was more for kicks I think, and perhaps to make the J300S look better than ____ ??? I don't get it; it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article? Odd, quite odd.
fjeske - Friday, February 2, 2007 - link
Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on StorageReview.com. Compare to Seagate's Cheetah 15K.5 and I think you'll see a difference. Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually use paired buses when connecting this many drives.
cgaspar - Friday, February 2, 2007 - link
SAS and SCSI drives have disk write caches disabled by default, as the drives' caches are not battery backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, those writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
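[Editor's note: the volatile-cache risk this comment describes is easy to demonstrate one layer up, with the OS page cache standing in for the drive's onboard cache. The sketch below (scratch filename is arbitrary) times buffered writes, which return as soon as data hits volatile memory, against fsync-ed writes, which wait for stable storage. On hardware of this era, a drive with its write cache enabled could acknowledge even a flushed write before it was truly durable, which is exactly the comment's point.]

```python
# Timing buffered writes vs. fsync-ed writes. Buffered writes land in a
# volatile cache and return immediately; fsync waits for stable storage.
# The same trade-off exists one level down in the drive's onboard cache.
import os
import time

PATH = "cache_demo.bin"      # arbitrary scratch file
BLOCK = b"x" * 4096
WRITES = 500

def timed(flush_each):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(WRITES):
        os.write(fd, BLOCK)
        if flush_each:
            os.fsync(fd)     # force the write out of volatile OS caches
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

print(f"buffered: {timed(False):.3f} s")
print(f"fsync'd:  {timed(True):.3f} s   # dramatically slower on spinning disks")
os.remove(PATH)
```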