Test results - Write Performance

100% Sequential Writes


100% Sequential Write Performance (MB/sec)

         1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS         329          333          333
SATA II     242          245          245
SATA I      201          201          202
SCSI         96           96           95

Using 100% sequential writes we get to see the 'best case' for overall write throughput. SAS once again takes the lead, achieving over 325MB/sec. SCSI falls behind but maintains a solid rate, almost reaching 100MB/sec. The SATA drives, however, tell a different story: while starting out very close to one another, SATA II quickly jumps ahead of SATA I by almost 50MB/sec and maintains that lead throughout the DQL range. This may be the result of the 16MB of disk cache on the SATA II drives vs. the 8MB cache on the SATA I drives. It will be interesting to see what impact this has as we introduce random access into the tests.
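If it helps to think in operations per second, here is a quick Python sketch converting the peak numbers from the table. The 64KB transfer size comes from the 64K read/write tests discussed below; the 12-drive array size is an assumption (a DQL of 36 at 3 outstanding I/Os per disk implies 12 drives):

    # Back-of-envelope: the aggregate MB/sec above as 64KB IOPS and as
    # per-drive throughput. The 12-drive array size is an assumption
    # (the "DQL of 36" quoted below, at 3 outstanding I/Os per disk,
    # implies 12 drives).
    DRIVES = 12
    BLOCK_MB = 64 / 1024.0   # 64KB transfer size used in these tests

    peak_seq_write = {"SAS": 333, "SATA II": 245, "SATA I": 202, "SCSI": 96}

    for name, mbs in peak_seq_write.items():
        print(f"{name:8s} {mbs:3d} MB/sec -> {mbs / BLOCK_MB:5.0f} IOPS total, "
              f"{mbs / DRIVES:5.1f} MB/sec per drive")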

It's worth noting that maximum write performance is substantially lower than maximum read performance: all of the tested drives write at least 20% slower than they read. There is some logic to this, however. Typical storage usage tends to be far more read heavy than write heavy, so optimizing for better read performance makes sense.

95% Sequential Writes


95% Sequential Write Performance (MB/sec)

         1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS         111          122          134
SATA II     189          188          188
SATA I       69           72           74
SCSI         83           85           84

Now this was very surprising. SATA II has jumped ahead of everyone else and leads at almost 190MB/sec from a DQL of 6 all the way to 36, whereas SAS struggles to reach 140MB/sec by the time it gets to a DQL of 36. Both SCSI and SATA I stay below 100MB/sec, with SATA I slightly lower than SCSI. This created a lot of discussion inside our team and led to a lot of testing and retesting of the drives. The results you see here are accurate and reflect what we were able to achieve with IOMeter.

Let's step back for a minute and go back to the 64K read graphs. Adding 5% random access to reads cost SAS 34%, SATA I 48%, SATA II 52%, and SCSI 0%. For 64K writes with 5% random access, SAS drops 60%, SATA I 63%, SATA II 23%, and SCSI 11%. So writes suffer a significant drop just as reads did, but SATA II clearly has an advantage with writes.
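Those write-side percentages can be recomputed directly from the two tables above; a minimal check using the peak value for each interface:

    # Sanity-checking the write-side drops quoted above against the two
    # tables, using the peak value for each interface at any DQL:
    peak_100 = {"SAS": 333, "SATA I": 202, "SATA II": 245, "SCSI": 96}
    peak_95  = {"SAS": 134, "SATA I": 74,  "SATA II": 188, "SCSI": 85}

    for name in peak_100:
        drop = 1 - peak_95[name] / peak_100[name]
        print(f"{name:8s} {peak_100[name]:3d} -> {peak_95[name]:3d} MB/sec "
              f"({drop:.0%} drop)")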

It appears, then, that the 16MB cache on the SATA II drives is reducing the impact random access has on overall write throughput. Since reads must come straight from the platters, the extra cache is of little help there. The SATA II drive also has NCQ, but so do SAS and SCSI; NCQ should only bring SATA II closer to SAS/SCSI, not past them. Since writes can be cached, and therefore benefit from a larger cache, this is the only difference we could find that explains the high SATA II performance.

As the data stream becomes more and more random, the advantage the extra cache gives the SATA II drives should shrink, since the drive won't be able to flush data from cache to the platters as quickly as it can with 95% sequential writes. Let's find out with empirical testing, though.
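To make that intuition concrete, here is a deliberately crude Python sketch. It models no real drive's firmware: the "cache" simply buffers N writes, sorts them by LBA, and flushes them in one elevator sweep, and all the cost constants are made up. Buffering 128 vs. 256 64KB writes roughly stands in for 8MB vs. 16MB of cache:

    import random

    # Toy model of why a larger write-back cache blunts the random-access
    # penalty, and why the benefit should fade as the stream approaches
    # 100% random. Seek cost = fixed settle component + distance term.

    DISK_BLOCKS = 1_000_000
    TRANSFER = 1.0         # cost to transfer one block
    SETTLE = 8.0           # fixed cost for any non-contiguous write
    SEEK_PER_BLOCK = 1e-4  # cost proportional to LBA distance travelled

    def flush_cost(batch, head):
        """Cost of writing a batch in sorted LBA order from position head."""
        total = 0.0
        for lba in sorted(batch):
            if lba != head + 1:                      # not contiguous: seek
                total += SETTLE + SEEK_PER_BLOCK * abs(lba - head)
            total += TRANSFER
            head = lba
        return total, head

    def mean_cost(seq_fraction, cache_blocks, n_writes=200_000):
        random.seed(0)                # same workload for every cache size
        lba, head, total, batch = 0, 0, 0.0, []
        for _ in range(n_writes):
            if random.random() < seq_fraction:
                lba += 1                             # sequential write
            else:
                lba = random.randrange(DISK_BLOCKS)  # random write
            batch.append(lba)
            if len(batch) == cache_blocks:           # cache full: flush
                cost, head = flush_cost(batch, head)
                total += cost
                batch = []
        # (any final partial batch is ignored for simplicity)
        return total / n_writes

    for seq in (1.0, 0.95, 0.80, 0.0):
        small, large = mean_cost(seq, 128), mean_cost(seq, 256)
        print(f"{seq:4.0%} sequential: doubling the cache cuts cost per "
              f"write by {1 - large / small:.0%}")

In this toy model doubling the buffer saves nothing at 100% sequential, the most at 95%, and progressively less as the mix goes fully random, which is the same shape the next two tests trace out.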

80% Sequential Writes


80% Sequential Write Performance (MB/sec)

         1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS          85           93           98
SATA II     114          118          121
SATA I       32           33           33
SCSI         61           62           61

In this graph we start to see SAS catching up to SATA II. The advantage SATA II gained from its extra cache appears to shrink as the access pattern becomes more random. SCSI holds fairly steady around the 60MB/sec mark, while SATA I drops far behind everyone else.
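Putting numbers on "catching up", a tiny sketch using the peak values from the 95% and 80% tables:

    # SATA II's lead over SAS, from the peak values of the last two tables:
    lead_95 = 188 - 134   # 54 MB/sec at 95% sequential
    lead_80 = 121 - 98    # 23 MB/sec at 80% sequential
    print(f"lead shrinks from {lead_95} to {lead_80} MB/sec "
          f"({1 - lead_80 / lead_95:.0%} of it gone)")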

Random (0% Sequential) Writes


Random Write Performance (MB/sec)

         1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS          90           94           94
SATA II      59           61           62
SATA I       24           25           25
SCSI         38           38           40

Now we see our 'worst-case' scenario: 100% random writes. SAS takes back the lead at 94MB/sec, which is rather impressive when you consider it was at 98MB/sec with only 20% random access. SATA II is in second place at 62MB/sec, a roughly 50% drop from the last test. SCSI maintains third place at 40MB/sec, and SATA I stays in last place at 25MB/sec.
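Viewed per drive (same hedged assumptions as the earlier sketch: 64KB transfers, a 12-drive array), the random-write table works out to:

    # The random-write table as per-drive 64KB write IOPS (same 12-drive,
    # 64KB-transfer assumptions as before):
    DRIVES, BLOCK_MB = 12, 64 / 1024.0
    random_write = {"SAS": 94, "SATA II": 62, "SATA I": 25, "SCSI": 40}

    for name, mbs in random_write.items():
        print(f"{name:8s} ~{mbs / BLOCK_MB / DRIVES:4.0f} random 64KB "
              f"writes/sec per drive")

At roughly 125 operations per second for SAS down to about 33 for SATA I, these are rates where seek time and rotational latency, not the interface or cache, set the pace.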

Again, the spindle speeds of the various offerings help performance, but only to a certain extent. Additional cache is also clearly beneficial, as SATA II's strong second-place showing demonstrates. There's still a lot to think about, however, so let's take a look at the overall picture.

Comments

  • yyrkoon - Friday, February 2, 2007 - link

    When are you guys going to do some reviews on consumer grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but would like to know if buying a Highpoint 16-port SATA RAID HBA would really be any worse than getting an Areca 8-port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support, or does it? If so, would it really make much of a difference in a SOHO application?

    I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer such as myself to get lost in, and basically I am interested in reliability first, speed second (to a point).

    Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it really matters most to your readers (home PC/enthusiast/SOHO crowd).
  • mino - Saturday, February 3, 2007 - link

    I would stay away from Highpoint.
    We have had several instances of a RAID HBA (a new one!) consistently going down AND screwing up the whole RAID5 under some workloads. For the money, one is better off with a QuadFX ASUS board than with Highpoint-like solutions.
    Areca is pretty much on a different level, of course...
  • yyrkoon - Sunday, February 4, 2007 - link

    Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
  • mino - Sunday, February 4, 2007 - link

    Any 3Ware, Areca, LSI, or Adaptec solution should be just fine.

    Most people do not actually need RAID5 for home usage, and it is usually cheaper to go _software_ RAID1 with every drive in the array attached to a different controller. In such a scenario even the cheapest onboard controller offers fault tolerance comparable to high-end RAID5 solutions.

    However, the simplest way to go is really two NAS RAID5 boxes mirroring each other.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    Any 3Ware, Areca, LSI, or Adaptec solution should be just fine.


    I would rule out Adaptec and the older LSI chipsets still available (under several brands, like Intel for example). We replaced a bunch of Intel 6- and 8-port controllers with top-of-the-line 8-port Adaptec SATA II controllers.

    The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13MB/s sequential writes with RAID 10 arrays, and tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.

    I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
  • yyrkoon - Sunday, February 4, 2007 - link

    Why would I want to spend $1300+ USD per 5-disk array (minus drives) when I could build my own system much cheaper and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
  • Bob Markinson - Friday, February 2, 2007 - link

    Interesting review!
    I would have liked to see a comparison with latest-gen 15K SCSI drives rather than 10K SCSI drives, to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions: one with 128MB cache and the other with 256MB cache. Also, there were performance issues with early 7.xx firmware/software revisions on the 6M at high I/O loads; hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?

  • Lifted - Tuesday, February 6, 2007 - link

    Based on the title of the article, Promise VTrak J300S, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs. 15K SAS? It's supposed to tell us what, exactly? You clearly need to look elsewhere for a SAS vs. U320 comparison if that's what you were expecting here. This was more for kicks, I think, and perhaps to make the J300S look better than ____ ??? I don't get it, it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article?

    Odd, quite odd.
  • fjeske - Friday, February 2, 2007 - link

    Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on StorageReview.com. Compare to Seagate's Cheetah 15K.5 and I think you'll see a difference.

    Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually pair the buses when connecting this many drives.
  • cgaspar - Friday, February 2, 2007 - link

    SAS and SCSI drives have their disk write caches disabled by default, as the drives' caches are not battery-backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, the writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
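(For anyone wanting to repeat the experiment cgaspar suggests: on Linux, the volatile write cache on ATA/SATA drives can typically be toggled with hdparm, e.g. "hdparm -W0 /dev/sdX" to disable it and "hdparm -W1 /dev/sdX" to re-enable, and on SAS/SCSI drives with sdparm, e.g. "sdparm --clear WCE /dev/sdX". The device names here are placeholders, and some RAID controllers override or hide the drive-level setting.)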
