Closing Thoughts

We've run quite a few tests, and encountered some surprising results. What exactly does all this tell us? We've only truly tested four different drives, each representing one of the current interface standards. Ideally, we would have tested nearly identical drives on each interface, but sometimes that's just not possible. Even with differences in basic features, however, we can draw some meaningful conclusions.

SAS

SAS clearly dominates when it comes to pure I/O throughput, with the exception of 5%-20% random writes. With the move to a point-to-point topology, the performance floodgates have opened: reads of up to 540MB/sec are now achievable with just 12 drives. Even writes have jumped from 95MB/sec on SCSI to over 330MB/sec with SAS. It's also remarkable that with just 12 drives in a RAID 10 you can almost guarantee a minimum of 100MB/sec sustained for both reads and writes at any mix of sequential and random access.
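
To put those numbers in perspective, here is a minimal back-of-the-envelope sketch of RAID 10 scaling. The per-drive rates and the 75% efficiency factor are illustrative assumptions, not figures measured in this review:

    # Back-of-the-envelope RAID 10 throughput estimate (assumed per-drive numbers).
    # In RAID 10, reads can be serviced by every spindle, while each logical write
    # must hit both members of a mirrored pair, halving the effective write spindles.

    def raid10_estimate(drives, read_mbs_per_drive, write_mbs_per_drive, efficiency=0.75):
        """Rough sustained array throughput in MB/s; the efficiency factor covers
        controller and striping overhead, since arrays rarely scale linearly."""
        read = drives * read_mbs_per_drive * efficiency
        write = (drives // 2) * write_mbs_per_drive * efficiency
        return read, write

    # Assumed 15K SAS per-drive sustained rates, roughly in line with drives of this era.
    reads, writes = raid10_estimate(12, 60, 55)
    print(f"Estimated reads:  {reads:.0f} MB/s")   # ~540 MB/s
    print(f"Estimated writes: {writes:.0f} MB/s")  # ~248 MB/s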

SATA I/II

It's hard to find a good place for SATA I and II when it comes to performance. Reads for both can hold their own against SCSI when the access pattern is between 0 and 20% random, but beyond that they are slower. SATA II performs extremely well with writes from 0 to 100% random, but SATA I drops below SCSI as soon as you reach 5% random. So why would you want SATA I or II? Put simply, the cost per GB of SATA cannot be touched by SAS or SCSI, which makes SATA the perfect choice for high capacity, albeit slower, storage. If you are considering this type of configuration, we strongly recommend buying drives with a large disk cache, as it appears a larger cache can improve write performance significantly.

The Raptor line is also unique at present, as it is the only 10K RPM SATA solution, and it comes with a 16MB cache. It is "only" a SATA I interface, but without further testing it's difficult to say for certain how much of the SATA II performance advantage is due to the interface and how much simply comes from having newer hard drive models with more cache. We're inclined to think the cache and platter density are playing a significant role, along with NCQ support, as no individual drive should come anywhere near saturating SATA's 150MB/s bandwidth limit per connection.
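
As a quick sanity check on that last point, here is a sketch comparing sustained transfer rates against each interface's per-connection bandwidth. The STR figures are our own rough assumptions for drives of this era, not measured values:

    # Per-link utilization: a drive only stresses its interface if its sustained
    # transfer rate (STR) approaches the link's usable bandwidth.
    links = {"SATA I": 150, "SATA II": 300}                # MB/s per connection
    assumed_str = {"10K Raptor": 75, "7200RPM SATA": 60}   # assumed sustained MB/s

    for drive, str_mbs in assumed_str.items():
        for link, bw in links.items():
            print(f"{drive} on {link}: {str_mbs / bw:.0%} of link bandwidth")

Even with generous assumptions, no single drive comes close to the link limit, which is why cache, platter density, and NCQ are the more likely explanations.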

The simple fact of the matter, however, is that businesses looking at something like the VTrak J300s (or any other SAS enclosure) are not really going to be interested in the Raptor line. Most will want a high performance, low capacity SAS drive array, perhaps complemented by a few high capacity SATA drives. The Raptor is either not fast enough or not large enough; the lower price of the Raptors vs. 15K SAS models doesn't normally enter the picture.

SCSI

SCSI still has its place in the server market when you compare it against SATA I/II, but it simply cannot touch SAS. SCSI has been around for a long time now, and SAS is clearly the next logical step. On a drive-by-drive basis we're sure the gap between SAS and SCSI would be very small, but as you add more drives to the array the point-to-point architecture clearly wins. SCSI drives have become faster and faster over the years, but the shared bus architecture is definitely holding them back. This is very apparent when you consider how little SCSI drops from 0% random to 100% random in both reads and writes. Higher RPM drives would drop even less, but they still won't be able to improve on the bandwidth limits we encountered. The only way to improve overall bandwidth is to move to multiple SCSI connections, which can improve performance but comes with increased cost and more cable clutter.
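
A minimal sketch of why the shared bus caps aggregate throughput, assuming a U320 channel at roughly 80% bus efficiency and an assumed 60MB/s sustained rate per drive (both illustrative figures, not from our test data):

    # Shared-bus vs. point-to-point: every drive on a U320 channel splits one
    # 320MB/s bus, while SAS gives each drive a dedicated 3Gbps (~300MB/s) link.

    def shared_bus_per_drive(drives, bus_mbs=320, efficiency=0.8):
        # All drives contend for one bus, so usable bandwidth divides among them.
        return (bus_mbs * efficiency) / drives

    def point_to_point_per_drive(link_mbs=300, drive_str_mbs=60):
        # Each drive gets its own link; the drive itself becomes the limit.
        return min(link_mbs, drive_str_mbs)

    for n in (4, 8, 12):
        print(f"{n:2d} drives: U320 shared bus ~{shared_bus_per_drive(n):5.1f} MB/s per drive, "
              f"SAS point-to-point ~{point_to_point_per_drive():5.1f} MB/s per drive")

By around eight drives the shared bus is already the bottleneck, which matches how flat the SCSI results stay across access patterns.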

Performance Conclusion

Both SAS and SATA I/II have their place in the Direct Attached Storage market. If your application needs high I/O throughput and can live within low to medium capacities, then SAS is definitely the way to go. If you need high capacity more than you need high I/O, then SATA I/II is the best choice. Of course, with 300GB SAS drives (soon to be 600GB) you can still reach very high capacities, but at a cost that is significantly higher than with SATA. In the end you have to balance performance against cost and determine which is best for you.
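
To make that trade-off concrete, here is a small cost-per-GB sketch. The capacities and prices below are hypothetical placeholders for drives of this era, not quotes from this review:

    # Hypothetical cost-per-GB comparison (capacity in GB, assumed street price in USD).
    drives = {
        "300GB 15K SAS":    (300, 900),
        "500GB 7200 SATA":  (500, 170),
        "150GB 10K Raptor": (150, 250),
    }
    for name, (gb, usd) in drives.items():
        print(f"{name}: ${usd / gb:.2f}/GB")

Even if the exact prices are off, the ratio is the point: SATA's cost per GB sits far below SAS, which is the entire case for mixing the two.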

Conclusion

Through our testing, the Promise VTrak J300s has proven itself to be a reliable solution with good performance that is well suited to the SMB world. The ability to use high capacity SATA or high performance SAS in the same chassis gives users a level of flexibility that wasn't an option before. Add to that the performance levels that SAS can reach vs. SCSI, and DAS becomes a feasible solution again before moving to a full Fibre Channel/SAN infrastructure. When that time comes, the enclosure can be filled with high capacity SATA disks and still be useful for hosting a Virtual Tape Library, a NAS backend, or long term data storage.

At the time of writing, Shopping.com reported a price of $2,100 to $2,500 USD for the VTrak J300s, which is a decent price for an external SAS/SATA chassis. We had a hard time coming up with things to say about our experience with the Promise VTrak because it simply worked, and in the end, isn't that what we're all looking for?

Comments

  • LordConrad - Sunday, February 4, 2007 - link

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.


    Yeah, I never saw a commercial product offered with Raptors. SATA always seems to come with 500GB 7200RPM drives. I guess the logic is that people will only go with SATA to get 'cheap' space. The price/capacity ratio would fall quite drastically as soon as you move to Raptors, negating the advantage.
  • bob4432 - Saturday, February 3, 2007 - link

    How can you compare older 10K SCSI with a brand new Fujitsu MAX 15K SAS? You do know that they make a U320 version of the MAX drive? Or the industry leader atm - the Seagate 15K.5 (which I currently own, and which hits both a STR and burst of 96MB/s on a single channel U160 card due to 32-bit PCI limitations...)? Why would you compare apples to oranges when you could do apples to apples? Why not add some 5400RPM HDDs to the mix too???
  • JarredWalton - Saturday, February 3, 2007 - link

    Sometimes you have to test with what you have available. Obviously, the SCSI setup is going to perform better with a 15K spindle, and we mention this numerous times in various ways. However, the realizable throughput is not going to come anywhere near SAS. The sequential tests show maximum throughput, and while having a SCSI setup with two connections rather than one would improve throughput, SCSI's parallel design is becoming outdated. It can still hold its own for now, but most drive companies are putting their effort into higher capacity, higher performance SAS models.
  • shady28 - Sunday, February 4, 2007 - link


    I agree: your approach to SCSI is tabloid-like. You are looking at a JBOD array on a single SCSI channel using obsolete three-year-old drives. Moreover, I have yet to see a production SCSI system utilize only one SCSI channel. A setup like that is the mark of a newbie, and a dangerous one if handling critical data.

    There is a huge difference in performance between new 15K SCSI drives and the old 10K drives. Check storagereview.com and look at their IOPS readings - a critical measure for databases and OLTP applications. The top two ranked drives are SCSI; you don't even see SATA until you get down to the Raptor - a drive with an IOPS rating more than 1/3 lower than the top rated Atlas 15K II 147GB. Even the SCSI JBOD array you used was pulled from the market some 7 months ago.

    If that doesn't convince you of how silly your SCSI approach is, consider this:

    A single Seagate Cheetah 15K.5 U320 drive has a sequential transfer rate better than your entire array of 14 10K RPM SCSI drives. I have seen two drives on the even older U160 interface do better in sequential reads than your array.

    None of this is really a good way to benchmark arrays. A much better and more informative method would be to utilize benchmarks with Oracle and MS SQL Server under Linux and Windows with various disk configurations.
  • yyrkoon - Sunday, February 4, 2007 - link

    Guys, you completely missed the whole point of WHY they used those drives in the comparison. They already had those drives, so that's what they used. In other words, they couldn't afford whatever the latest and greatest SCSI drive costs x14 (and to be honest, why even bother buying SCSI drives when you already have a goodly number of SAS drives?).

    Some of you guys, I really don't know what to think about you. You seem to think that reviewers have endless amounts of cash to drop on stuff they don't need and would most likely never use, because they already have something better. Regardless of whether you accept it or not, SAS is far superior to SCSI, and has a very visible road map compared to SCSI's 'shaky' and uncertain future. Yes, SCSI has proven itself many times in the past, and for a long time it was the fastest option without using solid state, but now a NEW technology, BASED on SCSI and SATA, has emerged, and I personally think SCSI's days are drawing to an end. Who knows though, maybe I'm wrong, and it wouldn't be the first time either . . .
  • JarredWalton - Monday, February 5, 2007 - link

    I can't say that we purchase most of the hardware that we review, simply because it would be too expensive. In this case, however, why would a manufacturer want to send us SCSI hard drives when they already know SAS is going to be faster in many instances? Basically, SCSI and SAS 15K RPM drives cost about the same, but SCSI enclosures either cost more (in order to get multiple SCSI channels) or else offer lower total throughput. In random access tests, where seek times take precedence over throughput, SAS and SCSI are going to perform about the same. With most storage arrays being used for a variety of purposes, however, why would you want a SCSI setup that offers equally good performance in a few areas but lower performance in others?

    At this point, the only major reason to purchase SCSI hard drives is because of existing infrastructure. For companies that have a lot of high-end SCSI equipment, it would probably make more sense to upgrade the hard drives rather than purchasing Serial Attached SCSI enclosures and hard drives, at least in the short-term. The long-term prospects definitely favor SAS over SCSI, however -- at least in my book.
  • yyrkoon - Monday, February 5, 2007 - link

    Oh, hey Jarred, whilst you guys are still paying attention to this thread: something I personally would like to see is minimum hardware requirements for certain storage 'protocols'. I don't suppose you guys plan on doing something like this?

    Let me clarify a little. Lately, I've been doing a LOT of experimentation with Linux / Windows file / block level storage. This includes AoE, iSCSI, CIFS, NFS, and FTP. Between two of my latest systems, I seem to be limited to around ~30MB/s (megabytes/second). The hardware I'm using isn't server grade, but isn't shabby either, so I'm a bit confused as to what is going on. Anyhow, the network is point-to-point GbE, and I've used multiple different drive configurations (including a 4x RAID 0 array capable of 210MB/s reads). My personal end goal is a very reliable storage server, with as much speed as possible as a secondary goal. I don't think I was expecting too much in considering ~30MB/s too slow (I was hoping for ~80-100MB/s, but would settle for ~50-60MB/s).

    Anyhow, some food for thought?
  • JarredWalton - Monday, February 5, 2007 - link

    I actually don't do too much with high-end storage. I've had transfer rates between systems of about 50 MB/s, which is close to my HDD's maximum, but as soon as there's some fragmentation it drops pretty quickly when doing network transfers. 20-30 MBps seems typical. I don't know how the OS, NIC, switch, etc. will impact things - I would assume all can have an impact, depending on the hardware and situation. Motherboard and CPU could also impact things.

    Best theoretical performance on GbE tends to be around 900-920 Mbps, but I've seen quite a few NICs that will top out at around 500-600 Mbps. That also creates a CPU load of 20-50% depending on CPU. Depending on your hardware, you might actually be hitting a bottleneck somewhere that caps you at ~30 MBps, but I wouldn't know much about the cause without knowing a lot more about the hardware and doing lots of testing. :|
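
    A rough conversion of those line rates into file-transfer MB/s (the ~7% protocol overhead is an assumed, illustrative figure):

        # Convert GbE line rates (Mbps) to approximate payload throughput (MB/s).
        # The overhead fraction is an assumption covering TCP/IP and Ethernet framing.
        def gbe_payload_mbs(line_rate_mbps, overhead=0.07):
            return line_rate_mbps * (1 - overhead) / 8

        for rate in (920, 600, 500):
            print(f"{rate} Mbps on the wire -> ~{gbe_payload_mbs(rate):.0f} MB/s payload")
        # ~30 MB/s corresponds to only ~260 Mbps on the wire - well below even a
        # mediocre NIC's ceiling, so the cap is likely elsewhere (CPU, protocol, or disk).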

    Maybe Jason or Dave can respond - you might try emailing them, though.
  • yyrkoon - Monday, February 5, 2007 - link

    I understand that you guys don't buy most of your hardware - well, the hardware that you review - but that's part of my point. I'm assuming Promise either 1) gave you the SAS enclosure for the review, or 2) 'lent' you the system for review. Either way, in my book, it doesn't really matter. Anyhow, Promise sent you guys hardware, you reviewed it, and compared it to whatever else you had on hand (no?).
