Final Words

The X25-E remains one of the fastest Intel SSDs in the enterprise despite being three generations old from a controller standpoint. The inherent advantages of SLC NAND are undeniable. Intel's SSD 520 regularly comes close to the X25-E in performance and easily surpasses it if you've got a 6Gbps interface. Over a 3Gbps interface, most of these drives end up performing very similarly.

We also showed a clear relationship between performance and drive capacity/spare area. Sizing your drive appropriately for your workload is extremely important for both client and enterprise SSD deployments. On the client side we've typically advocated keeping around 20% of your drive free at all times, but for write-heavy enterprise workloads you should shoot for more. How much spare area you need obviously depends on your workload, but if you do a lot of writing, don't skimp on capacity.
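
To put rough numbers on that, here's a minimal sketch (my own illustration, not from the article) of how the controller's effective spare area grows as you leave more of the drive empty. The 160GB drive with 160GiB of raw NAND is a hypothetical example:

    # Effective spare area as a function of how full the drive is kept.
    # A "160GB" drive is assumed to carry 160GiB of raw NAND (roughly 7%
    # factory spare from the GB/GiB difference); hypothetical numbers.

    def effective_spare(nand_gib, data_stored_gb):
        """Fraction of raw NAND the controller can use as spare area."""
        data_gib = data_stored_gb * 1e9 / 2**30  # decimal GB -> GiB
        return (nand_gib - data_gib) / nand_gib

    for fill_gb in (160, 128, 80):  # 100%, 80% and 50% of user capacity
        print(f"{fill_gb}GB stored -> {effective_spare(160, fill_gb):.0%} spare")

Under those assumptions, keeping 20% of the drive free turns the built-in ~7% of spare area into roughly 25%, which is where the client-side advice above comes from.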

What's most interesting to me is that although the 520 offers great performance, it didn't offer a tremendous advantage in endurance in our tests. Its endurance was in line with the SSD 320's, and actually a bit lower once we normalize for capacity. Granted, this will likely vary with the workload, but don't assume that the 520's lower write amplification alone will bring you enterprise-class endurance.
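
One way to see why lower write amplification alone isn't enough: total host writes before wear-out scale with capacity and the NAND's rated P/E cycles as well. A back-of-the-envelope sketch, where every input is an illustrative assumption rather than a measurement from this article:

    # Lifetime host writes ~= capacity * rated P/E cycles / write amplification.
    # All inputs below are illustrative assumptions.

    def lifetime_host_writes_tb(capacity_gb, pe_cycles, write_amp):
        """Total host writes (TB) before the NAND reaches its rated cycles."""
        return capacity_gb * pe_cycles / write_amp / 1000

    # A lower WA helps, but capacity and the rated cycle count of the NAND
    # matter just as much, which is why normalizing endurance to capacity
    # is the fair comparison.
    tb = lifetime_host_writes_tb(capacity_gb=240, pe_cycles=3000, write_amp=1.1)
    print(f"~{tb:.0f} TB of host writes before rated wear-out")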

This brings us to the final point. If endurance is a concern, there really is no replacement for the Intel SSD 710. Depending on the workload, you get almost an order of magnitude improvement in drive longevity. You do pay for that endurance, though. While an Intel SSD 320 performs similarly to the 710 in a number of areas, the 710 weighs in at around $6/GB compared to sub-$2/GB for the 320. If you can get by with the consumer drives, either the 320 or the 520, they are a much better solution from a cost perspective.
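
If you're endurance-bound rather than capacity-bound, it's worth working the cost out per unit of endurance rather than per GB. A quick sketch using the article's rough prices; the 10x multiplier for the 710 is my own reading of "almost an order of magnitude", not a spec:

    # $/GB figures are from the article; the 10x relative endurance for the
    # 710 is an assumed reading of "almost an order of magnitude".
    drives = {
        "SSD 320": {"price_per_gb": 2.0, "relative_endurance": 1.0},
        "SSD 710": {"price_per_gb": 6.0, "relative_endurance": 10.0},
    }

    for name, d in drives.items():
        cost = d["price_per_gb"] / d["relative_endurance"]
        print(f"{name}: ${cost:.2f} per GB per unit of endurance")

Under those assumptions the 710 is actually the cheaper drive per byte of endurance, which is exactly why it only makes sense if your workload would genuinely wear the consumer drives out.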

Intel gives you the tools to figure out how much NAND endurance you actually need; the only trick is that you'll need to run your workload on an Intel SSD first. It's a clever way to sell drives. The good news is that if you're moving from a hard drive based setup, you should be able to at least try out your workload on a small number of SSDs (maybe even one, if your data set isn't too large) before deciding on a final configuration. There are obviously software tools you can use to monitor writes, but they won't give you an idea of write amplification.
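
As for the monitoring itself, here's a minimal sketch of what tracking host writes looks like in practice, assuming smartctl is installed and the drive exposes Intel's usual attributes. On many Intel SSDs of this era, attribute 225 reports host writes in 32MiB units and attribute 233 is the media wearout indicator; IDs vary by drive, so treat both as assumptions:

    # Read Intel-style SMART attributes via smartctl. Attribute IDs are
    # assumptions: 225 = Host_Writes_32MiB (raw value), 233 =
    # Media_Wearout_Indicator (normalized value) on many Intel SSDs
    # of this generation.
    import subprocess

    def smart_fields(device, attr_id):
        """Return the whitespace-split 'smartctl -A' row for one attribute."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == str(attr_id):
                return fields
        raise KeyError(f"attribute {attr_id} not reported by {device}")

    dev = "/dev/sda"  # adjust for your system
    host_writes_gib = int(smart_fields(dev, 225)[-1]) * 32 / 1024  # 32MiB units
    wearout = int(smart_fields(dev, 233)[3])  # normalized: 100 new -> 1 worn
    print(f"~{host_writes_gib:.0f} GiB host writes, wearout indicator {wearout}")

Sampling those counters before and after a representative stretch of your workload gives you host writes; Intel's timed workload attributes then let you back out media wear, and from the two you can estimate write amplification.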

Comments

  • ssj4Gogeta - Thursday, February 9, 2012 - link

    I think what you're forgetting here is that the 90% or 100% figures _include_ the extra work an SSD has to do when writing to already-used blocks. That doesn't mean the data is incompressible; it means it's quite compressible.
    For example, if the SF drive compresses the data to 0.3x its original size, then even with all that extra work included, the final value comes out to 0.9x, while the other drives write the data directly and see an amplification of 3x.
  • jwilliams4200 - Thursday, February 9, 2012 - link

    No, not at all. The other SSDs have a WA of about 1.1 when writing the same data.
  • Anand Lal Shimpi - Thursday, February 9, 2012 - link

    Haha yes I do :) These SSDs were all deployed in actual systems, replacing other SSDs or hard drives. At the end of the study we looked at write amplification. The shortest use case was around 2 months I believe and the longest was 8 months of use.

    This wasn't simulated, these were actual primary use systems that we monitored over months.

    Take care,
    Anand
  • Ryan Smith - Thursday, February 9, 2012 - link

    Indeed. I was the "winner" with the highest write amplification because I regularly had large compressed archives residing on my Vertex 2, and even then, as Anand notes, the write amplification was below 1.0.
  • jwilliams4200 - Thursday, February 9, 2012 - link

    And still you dodge my question.

    If the Sandforce controller can achieve decent compression, why did it not do better than the Intel 320 in the endurance test in this article?

    I think the answer is that your "8 month study" is invalid.
  • Anand Lal Shimpi - Thursday, February 9, 2012 - link

    SandForce can achieve decent compression, but not across all workloads. Our study was limited to client workloads as these were all primary use desktops/notebooks. The benchmarks here were derived from enterprise workloads and some tasks on our own servers.

    It's all workload dependent, but to say that SandForce is incapable of low write amplification in any environment is incorrect.

    Take care,
    Anand
  • jwilliams4200 - Friday, February 10, 2012 - link

    If we look at the three "workloads" discussed in this thread:

    (1) anandtech "enterprise workload"

    (2) xtremesystems.org client-workload obtained by using data actually found on user drives and writing it (mostly sequential) to a Sandforce 2281 SSD

    (3) anandtech "8 month" client study

    we find that two out of three show that Sandforce cannot achieve decent compression on realistic data.

    I think you should repeat your "client workload" tests and be more careful with tracking exactly what is being written. I suspect there was a flaw in your study. Either benchmarks were run that you were not aware of, or else it could be something like frequent hibernation where a lot of empty RAM is being dumped to SSD. I can believe Sandforce can achieve a decent compression ratio on unused RAM! :)
  • RGrizzzz - Wednesday, February 8, 2012 - link

    What the heck is your site doing where you're writing that much data? Does that include the Anandtech forums, or just Anandtech.com?
  • extide - Wednesday, February 8, 2012 - link

    Probably logs requests and browser info and whatnot.
  • Stuka87 - Wednesday, February 8, 2012 - link

    That most likely includes the CMS and a large amount of the content, the ad system, our users' accounts for commenting here, all the Bench data, etc.

    The forums would use their own vBulletin database, but most likely run on the same servers.
