Update: I'll be answering questions about the S3700 live from this year's SC12 conference. Head over here to get your questions answered!

When Intel arrived on the scene with its first SSD, it touted superiority in controller, firmware and NAND as the reason it was able to so significantly outperform the competition. Slow but steady improvements to the design followed over the next year, and until 2010 Intel held our recommendation for the best SSD on the market. The long-awaited X25-M G3 ended up being based on the same 3Gbps SATA controller as the previous two drives, just sold under a new brand. Intel changed its tune, claiming that the controller (or who made it) wasn't as important as the firmware and NAND.

Then came the 510, Intel's first 6Gbps SATA drive...based on a Marvell controller. Its follow-on, the Intel SSD 520, used a SandForce controller. With the release of the Intel SSD 330, Intel had almost completely moved to third party SSD controllers. Intel still claimed proprietary firmware; however, in the case of the SandForce based drives Intel never seemed to have access to the firmware source code. Instead, its custom firmware was the result of Intel validation, with SandForce integrating changes into a custom branch made specifically for Intel. Intel increasingly looked like a NAND and validation house, giving it only a small edge over the competition. Meanwhile, Samsung aggressively went after Intel's consumer SSD business with the SSD 830/840 Pro, while others pursued Intel's enterprise customers.

This is hardly a fall from grace, but Intel hasn't been able to lead the market it helped establish in 2008 - 2009. The SSD division at Intel is a growing one. Unlike the CPU architecture group, the NAND Solutions Group simply hasn't been around for very long. Growing pains are still evident, and Intel management isn't too keen on investing heavily there. Despite SSDs' extremely positive impact on the industry, storage has always been a heavily commoditized market. Just as Intel is in no rush to sell $20 smartphone SoCs, it's similarly in no hurry to dominate the consumer storage market.

I had been hearing rumors of an Intel-designed 6Gbps SATA controller for a while now. Work on the project began years ago, but the scope of Intel's true next-generation SATA SSD controller changed many times over the years. What started as a client focused controller eventually morphed into an enterprise specific design, with its scope and feature set reinvented many times over. That's typical of any new company or group. Oftentimes the only way to learn focus is to pay the penalty for not being focused, and that usually happens when you're really late with a product. Intel's NSG had yet to come into its own; it hadn't yet found its ideal development/validation/release cadence. Considering how long it took the CPU folks to get to tick-tock, it's still very early to expect the same from NSG.

The true 3rd generation Intel SATA SSD controller

Today all of that becomes moot as Intel releases its first brand-new SSD controller in five years. This controller has been built from the ground up rather than as an evolution of a previous generation. It corrects a lot of the flaws of the original design and removes many of its constraints. Finally, this new controller marks the era of a completely new performance focus. For the past couple of years we've seen controllers quickly saturate 6Gbps SATA and slowly raise the bar for random IO performance. With its first in-house 6Gbps SATA controller, Intel significantly improves performance along both traditional vectors, but it also adds an entirely new one: performance consistency. All SSDs see their performance degrade over time, but with its new controller Intel wanted to deliver steady state performance that's far more consistent than the competition's.

I originally thought that we wouldn't see much innovation in the non-PCIe SSD space until SATA Express. It turns out I was wrong. It's time to review Intel's SSD DC S3700, codenamed Taylorsville.

Comments

  • Hans Hagberg - Monday, November 12, 2012 - link

    An enterprise storage review today is not really complete without an array of 15K mechanical disks for comparison. That is still what is being used for performance in most cases, and it is what we are up against when looking to justify SSDs in existing configurations.

    And for completeness, please throw in PCI-based SSD storage as well. Such storage always comes up in discussions around SSDs, but there is too little independent test data available to base decisions on.

    Another question when reading the review is about the test system being used. I couldn't find this information.

    Also - enterprise storage is most often fronted by high-end controllers with lots of cache. It would be interesting to see an analysis of how that impacts the different drives and their consistency. Will the consistency be equalized by a big controller and cache in front of the drives?

    The Swingbench anomaly is unfortunate because database servers are probably the primary application for massive deployment of SSD storage. It would be nice if the anomaly could be sorted out so we could see what the units can do. Normally, if you care about enterprise performance, you are careful with alignment and separation of storage (data, logs, etc.), so I agree with the Intel statement on this. Changing the benchmark would invalidate the old test data, so I'm not sure how to fix it without starting over.

    The review format and test case selection is excellent. Just give us some more data points.
    I would go as far as to say I would pay good money to read the review if the above was included.
  • Sb1 - Tuesday, November 13, 2012 - link

    "An enterprise storage review today is not really complete without an array of 15K mechanical disks for comparison."
    ... "And for completeness, please throw in PCI-based SSD storage as well."

    I __fully__ agree with Hans Hagberg.

    I thought this was a good article, but it would be an excellent one with both of these.

    Still, keep up the good work.
  • Troff - Wednesday, November 14, 2012 - link

    I agree as far as PCI-based SSDs go, but I see no point in including the 15K mechanical drive array for the same reason you don't see velocipedes in car reviews.
  • ilkhan - Tuesday, November 13, 2012 - link

    So what I see here is that for an enterprise server drive, go with this Intel. For a desktop drive, this Intel or a Samsung 840 Pro; for a laptop drive, the Samsung 840 Pro is best.

    That about sum it up?
  • korbendallas - Friday, November 16, 2012 - link

    Instead of average and max latency figures, I would love to see percentiles: 50%, 90%, 99%, 99.9% for instance. If you look at Intel's claims for these drives, they're in percentiles too.

    If your distribution does not follow a bell curve, which is the case for many of the SSDs you are testing, the average is useless. And as you already know (and why you didn't include it before now), max is useless too.
  • dananski - Saturday, November 17, 2012 - link

    I'd really like to see more graphs like the ones on "Consistent Performance: A Reality" showing how much variation drives can have in instantaneous IOPS. These really do a great job of showing exactly what Intel has fixed and I can see the benefit in some enterprise situations. A millisecond hiccup is an eternity for the CPU waiting for that data.

    Personally I'd now like to know:
    * How much of a problem can this be on consumer drives, where sustained random IO is less common?
    * Is this test a good way to characterise the microstutter problem for a particular drive?
    * How badly are drives with uneven IOPS distributions affected by RAID? (I know this was touched on briefly in the webcast with Intel)
  • junky77 - Sunday, November 18, 2012 - link

    What about the consistency of current consumer SSDs?
  • virtualstorage - Tuesday, March 12, 2013 - link

    I see the test results up to 2000 seconds. With an enterprise array, there will be continuous IOs in a 24/7 production environment. What is the performance behavior of the Intel SSD DC S3700 with continuous IO over many hours?
  • rayoflight - Sunday, October 6, 2013 - link

    Got two of these. Both of them failed after approx. 30 boot-ups. They aren't recognised anymore by the BIOS, or as external hard drives on a different system; it's as if they are completely dead. Faulty batch? Or do they "lock up"? Anyone had this problem?
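The percentile reporting korbendallas asks for is easy to sketch. Below is a minimal Python illustration, assuming a flat list of per-IO latency samples; the nearest-rank method, the sample counts and the millisecond ranges are all synthetic choices for demonstration, not the methodology used in this review:

```python
import random

def latency_percentiles(samples, percentiles=(50, 90, 99, 99.9)):
    """Report latency percentiles using the nearest-rank method on sorted samples."""
    ordered = sorted(samples)
    n = len(ordered)
    result = {}
    for p in percentiles:
        # Nearest-rank percentile: the smallest sample at or above
        # the requested fraction of the distribution.
        rank = min(n - 1, max(0, int(p / 100.0 * n + 0.5) - 1))
        result[p] = ordered[rank]
    return result

# Synthetic latencies (ms): mostly fast IOs plus a 1% tail of slow outliers,
# the kind of skewed distribution where the average hides the tail behavior.
random.seed(42)
samples = [random.uniform(0.1, 0.5) for _ in range(9900)] + \
          [random.uniform(20.0, 50.0) for _ in range(100)]

pcts = latency_percentiles(samples)
avg = sum(samples) / len(samples)
print(f"avg={avg:.2f}ms p50={pcts[50]:.2f}ms "
      f"p99={pcts[99]:.2f}ms p99.9={pcts[99.9]:.2f}ms")
```

On a distribution like this the average lands well above the median while p99 still looks healthy; only p99.9 exposes the outliers, which is exactly why percentile figures say more about consistency than average and max do.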
