Test results - Read Performance

100% Sequential Reads


100% Sequential Read Performance (MB/sec)
            1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS            536          536          540
SATA II        370          374          407
SATA I         356          333          348
SCSI           129          128          126

When we first gathered the test data, we were very impressed with the results. Both SAS and SATA I/II show extremely high performance with 100% sequential accesses. The SCSI and SAS drives are similar in terms of capacity, while the SAS drives offer higher rotational speeds. Meanwhile, the SATA drives are all 7200 RPM models, theoretically offering lower performance but higher capacities than either of the other two solutions.

We can really see the benefit a point-to-point architecture has to offer, as well as the impact the shared bus has on performance. All of the SAS/SATA solutions offer much higher read performance than the SCSI offering here, which helps to explain why SAS solutions are rapidly gaining market share. SCSI Ultra320 can't even come near its theoretical 320 MB/s transfer rate due to the overhead of sharing a single channel. Strangely, we see SCSI and SATA I drop slightly as the DQL increases, while SAS and SATA II increase slightly.

It is interesting to compare the performance of the different technologies at low Disk Queue Lengths. Both SAS and SCSI are very close to their peak performance even at a DQL of 2, whereas SATA needs a DQL of 4 before it starts performing well. As the Disk Queue Length increases, both SAS and SCSI hold their performance well, and a gap starts to emerge between SATA I and SATA II.
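
To make the methodology concrete, here is a minimal sketch (not the benchmark tool actually used for these numbers) of what a queue-depth read test does: it keeps a fixed number of 64KB sequential reads outstanding against a target and reports the resulting throughput. The target path and sizes below are illustrative assumptions.

    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative settings only -- not the configuration used in this review.
    TARGET = "/path/to/large_testfile"  # a big file or raw device you may read
    BLOCK = 64 * 1024                   # 64KB per request
    TOTAL_BLOCKS = 16384                # ~1GB of sequential reads
    QUEUE_DEPTH = 2                     # outstanding requests, akin to DQL per disk

    def read_block(fd, index):
        # os.pread reads at an absolute offset, so workers never fight over a
        # shared file position; the offsets still walk the target sequentially.
        return len(os.pread(fd, BLOCK, index * BLOCK))

    fd = os.open(TARGET, os.O_RDONLY)
    start = time.time()
    with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
        total = sum(pool.map(lambda i: read_block(fd, i), range(TOTAL_BLOCKS)))
    elapsed = time.time() - start
    os.close(fd)

    # Note: the OS page cache can inflate the result; real benchmarks bypass it.
    print(f"{total / elapsed / 1_000_000:.1f} MB/sec at queue depth {QUEUE_DEPTH}")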

95% Sequential Reads


95% Sequential Read Performance (MB/sec)
            1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS            254          323          356
SATA II        138          178          194
SATA I         132          167          182
SCSI            93          119          126

Here we start to see the impact that randomness has on our read performance. SAS is still in the lead, but SCSI is starting to catch up to SATA I/II. The higher rotational speeds of SCSI and SAS solutions can help relative to most SATA offerings, as access times depend in part on rotational latency. Performance continues to grow through the whole range of Disk Queue Lengths, with SAS breaking 350 MB/sec around a DQL of 34. This is significantly higher than the 175-200 MB/sec of SATA I/II and the 125 MB/sec that SCSI is able to obtain.

Again, we see the benefit that the point-to-point architecture has to offer over the old shared bus architecture. SATA II maintains a slight lead over SATA I, but neither comes close to touching SAS. SCSI gains ground, but given the maximum throughput of around 130 MB/s that was measured in the 100% sequential test, it still trails by a large margin in this workload.
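
As a point of reference, a "95% sequential" workload simply means that most requests continue from the previous block while a small fraction jump to a random location on the disk. A rough sketch of how such an offset stream can be generated is below; the function and parameter names are our own, and this is not the actual workload generator used for these tests.

    import random

    def mixed_offsets(total_requests, block_size, device_blocks,
                      pct_sequential=0.95, seed=0):
        # Yield byte offsets: pct_sequential of requests continue from the
        # previous block, the rest seek to a random block on the device.
        rng = random.Random(seed)
        block = 0
        for _ in range(total_requests):
            if rng.random() < pct_sequential:
                block = (block + 1) % device_blocks   # sequential continuation
            else:
                block = rng.randrange(device_blocks)  # random seek
            yield block * block_size

    # First ten offsets of a 95% sequential pattern over a 100,000-block extent.
    print(list(mixed_offsets(10, 64 * 1024, 100_000)))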

80% Sequential Reads


80% Sequential Read Performance (MB/sec)
            1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS            144          188          210
SATA II         75           96          103
SATA I          74           96          102
SCSI            77          104          115

At 20% random reads we start to see SATA I/II losing their performance advantage over SCSI, while SAS clearly remains the performance leader. We continue to see performance grow as the DQL increases, but it's interesting to see that SAS is ramping up faster than everyone else. SATA I and SATA II are struggling to keep up with SCSI, which has now moved into second place. At this point, the performance of SATA I and SATA II is almost identical.

Random (0% Sequential) Reads


Random Read Performance (MB/sec)
            1 DQL/Disk   2 DQL/Disk   3 DQL/Disk
SAS            113          165          197
SATA II         43           53           56
SATA I          43           53           55
SCSI            70          100          112

In the previous graphs we've seen the impact that random accesses have on our overall throughput. Now that we are at 100% random accesses, we get to see how the drives perform in a 'worst-case' scenario. SAS continues to lead with throughput that's almost double that of SCSI, but now SCSI is almost twice as fast as SATA I/II. Looking at the results through the whole DQL range, SAS continues to grow without showing signs of slowing down, whereas SCSI and SATA I/II appear to level off around 32-36 DQL.

While a point-to-point architecture clearly has advantages with sequential accesses, there is still something to be said for spindle speed and disk firmware. However, given that SCSI maxed out at ~130 MB/s in the sequential testing, higher spindle speeds would only go so far in closing the gap between SCSI and SAS. SATA drives like Western Digital's 10K RPM Raptor should also be able to close the distance with SCSI in highly random usage scenarios.
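
To see why spindle speed matters so much once accesses are fully random, consider a rough per-drive model: every request pays an average seek, half a rotation of latency, and the media transfer time for the block. The sketch below uses spec-sheet style figures of our own choosing rather than measured values from this test; multiplying the per-drive result by the number of spindles gives a ballpark for an ideal array.

    def random_read_mb_per_sec(seek_ms, rpm, media_mb_per_sec, block_kb=64):
        rotational_ms = 0.5 * 60_000 / rpm                    # half a revolution
        transfer_ms = (block_kb / 1024) / media_mb_per_sec * 1000
        service_ms = seek_ms + rotational_ms + transfer_ms    # per-request time
        iops = 1000 / service_ms
        return iops * (block_kb / 1024)                       # MB/sec per drive

    # Illustrative inputs: (average seek ms, RPM, sustained media MB/sec).
    print("15K drive :", round(random_read_mb_per_sec(3.5, 15000, 90), 1), "MB/sec")
    print("10K drive :", round(random_read_mb_per_sec(4.7, 10000, 70), 1), "MB/sec")
    print("7200 drive:", round(random_read_mb_per_sec(8.5, 7200, 65), 1), "MB/sec")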

We've covered the read performance, but what happens when we test writes?

Comments

  • LordConrad - Sunday, February 4, 2007 - link

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.


    Yeah, I never saw a commercial product offered with Raptors. SATA always seems to be paired with 500GB 7200rpm drives. I guess the logic is that people will only go with SATA to get 'cheap' space. The price/capacity ratio would fall quite drastically as soon as you move to Raptors, negating the advantage.
  • bob4432 - Saturday, February 3, 2007 - link

    how can you compare older 10k scsi with brand new fujitsu max 15k sas? you do know that they make a u320 version of the max drive? or the industry leader atm - the seagate 15k.5 (which i currently own and have both a str and burst of 96MB/s on a single channel u160 card due to 32bit pci limitations...)? why would you compare apples to oranges when you could compare apples to apples? why not add some 5400rpm hdds to the mix too???
  • JarredWalton - Saturday, February 3, 2007 - link

    Sometimes you have to test with what you have available. Obviously, the SCSI setup is going to perform better with a 15K spindle, and we mention this numerous times in various ways. However, the realizable throughput is not going to come anywhere near SAS. The sequential tests show maximum throughput, and while having a SCSI setup with two connections rather than one would improve throughput, SCSI's parallel design is becoming outdated. It can still hold its own for now, but most drive companies are putting more effort into higher capacity, higher performance SAS models now.
  • shady28 - Sunday, February 4, 2007 - link


    I agree: your approach to SCSI is tabloid-like. You are looking at a JBOD array on a single SCSI channel using obsolete 3-year-old drives. Moreover, I have yet to see a production SCSI system utilize only one SCSI channel. A setup like that is the mark of a newbie, and a dangerous one if handling critical data.

    There is a huge difference in the performance of new 15K SCSI drives and the old 10K drives. Check storagereview.com and look at their IOPS readings - a critical measure for databases and OLTP applications. The top two ranked drives are SCSI; you don't even see SATA until you get down to the Raptor - a drive with an IOPS rating that is more than 1/3 lower than the top rated Atlas 15K II 147GB. Even the SCSI JBOD array you used was pulled from the market some 7 months ago.

    If that doesn't convince you of how silly your SCSI approach is consider this :

    A single Seagate Cheetah 15k.5 U320 drive has a sequential transfer rate better than your entire array of 14 10k rpm SCSI drives. I have seen two drives on the even older U160 interface do better in sequential reads than your array.

    None of this is really a good way to benchmark arrays. A much better and more informative method would be to utilize benchmarks with Oracle and MS-SQL server under Linux and Windows with various disk configurations.
  • yyrkoon - Sunday, February 4, 2007 - link

    Guys, you completely missed the whole point of WHY they used those drives in the comparison. They already had those drives, so that's what they used. In other words, they couldn't afford whatever the latest greatest SCSI drive costs x14 (and to be honest, why even bother buying SCSI drives when you already have a goodly amount of SAS drives?).

    Some of you guys, I really don't know what to think about you. You seem to think that reviewers have endless amounts of cash to drop on stuff they don't need, and would most likely never use, because they already have something better. Regardless of whether you accept it or not, SAS is far superior to SCSI, and has a very visible road map compared to SCSI's 'shaky' and uncertain future. Yes, SCSI has proven itself many times in the past, and for a long time was the fastest option without using solid state, but now a NEW technology, BASED on SCSI and SATA, has emerged, and I personally think that SCSI's days are drawing to an end. Who knows though, maybe I'm wrong, and it's not like it would be the first time either . . .
  • JarredWalton - Monday, February 5, 2007 - link

    I can't say that we purchase most of the hardware that we review, simply because it would be too expensive. In this case, however, why would a manufacturer want to send us SCSI hard drives when they already know SAS is going to be faster in many instances? Basically, SCSI and SAS 15K RPM drives cost about the same amount, but either the enclosures cost more for SCSI (in order to get multiple SCSI channels) or else they offer lower total performance (throughput). In random access tests where seek times take precedence over throughput, SAS and SCSI are going to perform about the same. With most storage arrays being used for a variety of purposes, however, why would you want a SCSI setup that offers equally good performance in a few areas but lower performance in others?

    At this point, the only major reason to purchase SCSI hard drives is because of existing infrastructure. For companies that have a lot of high-end SCSI equipment, it would probably make more sense to upgrade the hard drives rather than purchasing Serial Attached SCSI enclosures and hard drives, at least in the short-term. The long-term prospects definitely favor SAS over SCSI, however -- at least in my book.
  • yyrkoon - Monday, February 5, 2007 - link

    Oh, hey Jarred, whilst you guys are still paying attention to this thread, something I personally would like to see is minimum hardware requirements for certain storage 'protocols'. I don't suppose you guys plan on doing something like this?

    Let me clarify a little. Lately, I've been doing a LOT of experimentation with Linux / Windows file / block level storage. This includes AoE, iSCSI, CIFS, NFS, and FTP. Between two of my latest systems, I seem to be limited to around ~30MB/s (megabytes/second). The hardware I'm using isn't server grade, but isn't shabby either, so I'm a bit confused as to what is going on. Anyhow, the network is p2p GbE, and I've used multiple different drive configurations (including a 4x RAID0 array capable of 210MB/s reads). My personal end goal is to have a very reliable storage server, but a secondary goal is as high a speed as possible. I wasn't expecting too much, I don't think, in thinking that ~30MB/s is too slow (I was hoping for ~80-100MB/s, but would settle for ~50-60MB/s).

    Anyhow, some food for thought?
  • JarredWalton - Monday, February 5, 2007 - link

    I actually don't do too much with high-end storage. I've had transfer rates between systems of about 50 MB/s, which is close to my HDD's maximum, but as soon as there's some fragmentation it drops pretty quickly when doing network transfers. 20-30 MBps seems typical. I don't know how the OS, NIC, switch, etc. will impact things - I would assume all can have an impact, depending on the hardware and situation. Motherboard and CPU could also impact things.

    Best theoretical performance on GbE tends to be around 900-920 Mbps, but I've seen quite a few NICs that will top out at around 500-600 Mbps. That also creates a CPU load of 20-50% depending on CPU. Depending on your hardware, you might actually be hitting a bottleneck somewhere that caps you at ~30 MBps, but I wouldn't know much about the cause without knowing a lot more about the hardware and doing lots of testing. :|

    Maybe Jason or Dave can respond - you might try emailing them, though.
  • yyrkoon - Monday, February 5, 2007 - link

    I understand that you guys do not buy most of your hardware, well, the hardware that you review, but that's part of my point. I'm assuming Promise either 1) gave you the SAS enclosure for the review, or 2) 'lent' you the system for review. Either way, in my book, it doesn't really matter. Anyhow, Promise sent you guys hardware, you reviewed it, and compared it to whatever else you had on hand (no?).
