55 Comments

  • jeremyshaw - Wednesday, February 08, 2012 - link

    woah... I've been waiting for an article like this for a long time.

    Thank you Anandtech!
    Reply
  • ckryan - Wednesday, February 08, 2012 - link

    Is AnandTech ever planning on doing a longer period SSD test? A long term testing scenario would make for interesting reading. Reply
  • Anand Lal Shimpi - Wednesday, February 08, 2012 - link

    Technically all of our SSD tests are long term. We're still testing Vertex 2 class drives and I actually still have six Intel X25-M G1s deployed in systems in my lab alone. You only hear about them when things go wrong. Most of the time I feed errors back to the vendors to get fixes put into firmware updates. The fact that you aren't seeing more of this sort of stuff means that things are working well :-P

    But the results of our long term tests directly impact our reviews/recommendations. It's one of the reasons I've been so positive on the Samsung SSD 830 lately. I've been using 830s 24/7 since our review published in September with very good results :)

    Take care,
    Anand
    Reply
  • Samus - Thursday, February 09, 2012 - link

I've had an X25-M G1 in my Macbook since 2009, used daily, never a problem. Lack of trim support doesn't really seem to matter unless you're the type that writes/deletes a lot of data. Reply
  • jwilliams4200 - Wednesday, February 08, 2012 - link

    Since you found that the 520 does not really do any better than the 320 for endurance, does this also imply that the Sandforce controller was not able to achieve significant compression on the workload that you fed to it? In other words, Sandforce compression does not work very well on real data as opposed to artificial benchmark data. Reply
  • ckryan - Wednesday, February 08, 2012 - link

    SF is really good at compressing fake data. I suppose some logs could really benefit, but one of my personal SF drives has 10% more raw writes than host writes. I suspect I'm not alone with this either.

People doing repeated incompressible benches could have WA higher than 1 with SF, but once you install the OS and programs, everyday writes are less compressible than promised, it would seem.
    Reply
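ckryan's 10% figure is just the ratio of the drive's raw (NAND) writes to its host writes, both of which SMART exposes. A minimal sketch of the arithmetic (the numbers below are illustrative, not readings from any particular drive):

```python
def write_amplification(nand_gib_written, host_gib_written):
    """Write amplification: data physically written to NAND
    divided by the data the host asked to write."""
    return nand_gib_written / host_gib_written

# 10% more raw writes than host writes, as described above:
print(write_amplification(110, 100))  # 1.1
```

Anything above 1.0 means the controller wrote more to flash than the host sent it; SandForce's compression is what lets it dip below 1.0 on compressible data.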
  • Anand Lal Shimpi - Wednesday, February 08, 2012 - link

    Keep in mind that only 10% more NAND writes than host writes is *really* good. It's not uncommon to get much, much higher than that with other controllers.

    We did an 8 month study on SF drives internally. The highest write amp we saw was 0.7x. On my personal drive I saw a write amp of around 0.6x.

    Take care,
    Anand
    Reply
  • jwilliams4200 - Thursday, February 09, 2012 - link

    Baloney!

    You just saw a write amplification of near 1 on this very article. Why do you dodge my question?
    Reply
  • erple2 - Thursday, February 09, 2012 - link

    I suspect that the workloads that they were testing for with the SF drives internally are not what is reflected in this article.

That implies, then, that the SF drives have been doing other workloads, like desktop and/or laptop duty. For those kinds of things, I suspect that 0.6-0.7x is more reasonable (assuming there isn't much reading/writing of incompressible data).

    Given that some of the workload may be for mobile applications, and given a strong focus on WDE for laptops, I wonder how that ultimately impacts the write amplification for drives with WDE on them.
    Reply
  • jwilliams4200 - Thursday, February 09, 2012 - link

    The "8 month study" that he refers to is very hard to believe.

    Does he really expect us to believe that the people in Anand's test lab used these SSDs for 8 months and did not run any benchmarks on them?

    Most benchmarks write easily compressible data, and a lot of it.

The real way to test the Sandforce compression is to write typical user data to the SSD and monitor the raw write and host write attributes. That experiment has already been done on xtremesystems.org, and the findings were that typical user data barely compresses at all -- at best raw writes were 90% of host writes, but for most data it was 100% or higher. The only thing that got some compression was the OS and application installs, and most people only do those once, so they should not be counted towards user data when estimating endurance.
    Reply
  • ssj4Gogeta - Thursday, February 09, 2012 - link

    I think what you're forgetting here is that the 90% or 100% figures are _including_ the extra work that an SSD has to do for writing on already used blocks. That doesn't mean the data is incompressible; it means it's quite compressible.
    For example, if the SF drive compresses the data to 0.3x its original size, then including all the extra work that has to be done, the final value comes out to be 0.9x. The other drives would directly write the data and have an amplification of 3x.
    Reply
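ssj4Gogeta's point is easier to see with the arithmetic written out. The 0.3x compression and 3x overhead figures are his hypothetical values, not measurements:

```python
def net_write_amp(compression_ratio, overhead_factor):
    # Net write amplification: size after compression times the
    # controller/garbage-collection overhead multiplier.
    return compression_ratio * overhead_factor

# SandForce-style drive: data shrinks to 0.3x, then 3x overhead applies
print(net_write_amp(0.3, 3.0))  # ~0.9
# Non-compressing drive: same 3x overhead on uncompressed data
print(net_write_amp(1.0, 3.0))  # 3.0
```

On this reading, a raw-to-host ratio near 0.9 would not mean the data is incompressible; it would mean compression is masking a much larger underlying overhead.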
  • jwilliams4200 - Thursday, February 09, 2012 - link

    No, not at all. The other SSDs have a WA of about 1.1 when writing the same data. Reply
  • Anand Lal Shimpi - Thursday, February 09, 2012 - link

    Haha yes I do :) These SSDs were all deployed in actual systems, replacing other SSDs or hard drives. At the end of the study we looked at write amplification. The shortest use case was around 2 months I believe and the longest was 8 months of use.

    This wasn't simulated, these were actual primary use systems that we monitored over months.

    Take care,
    Anand
    Reply
  • Ryan Smith - Thursday, February 09, 2012 - link

    Indeed. I was the "winner" with the highest write amplification due to the fact that I had large compressed archives regularly residing on my Vertex 2, and even then as Anand notes the write amplification was below 1.0. Reply
  • jwilliams4200 - Thursday, February 09, 2012 - link

    And still you dodge my question.

    If the Sandforce controller can achieve decent compression, why did it not do better than the Intel 320 in the endurance test in this article?

    I think the answer is that your "8 month study" is invalid.
    Reply
  • Anand Lal Shimpi - Thursday, February 09, 2012 - link

    SandForce can achieve decent compression, but not across all workloads. Our study was limited to client workloads as these were all primary use desktops/notebooks. The benchmarks here were derived from enterprise workloads and some tasks on our own servers.

    It's all workload dependent, but to say that SandForce is incapable of low write amplification in any environment is incorrect.

    Take care,
    Anand
    Reply
  • jwilliams4200 - Friday, February 10, 2012 - link

    If we look at the three "workloads" discussed in this thread:

    (1) anandtech "enterprise workload"

    (2) xtremesystems.org client-workload obtained by using data actually found on user drives and writing it (mostly sequential) to a Sandforce 2281 SSD

    (3) anandtech "8 month" client study

    we find that two out of three show that Sandforce cannot achieve decent compression on realistic data.

    I think you should repeat your "client workload" tests and be more careful with tracking exactly what is being written. I suspect there was a flaw in your study. Either benchmarks were run that you were not aware of, or else it could be something like frequent hibernation where a lot of empty RAM is being dumped to SSD. I can believe Sandforce can achieve a decent compression ratio on unused RAM! :)
    Reply
  • RGrizzzz - Wednesday, February 08, 2012 - link

    What the heck is your site doing where you're writing that much data? Does that include the Anandtech forums, or just Anandtech.com? Reply
  • extide - Wednesday, February 08, 2012 - link

    Probably logs requests and browser info and whatnot. Reply
  • Stuka87 - Wednesday, February 08, 2012 - link

    That most likely includes the CMS and a large amount of the content, the Ad system, our users accounts for commenting here, all the Bench data, etc.

The forums would use their own vBulletin database, but most likely run on the same servers.
    Reply
  • Anand Lal Shimpi - Wednesday, February 08, 2012 - link

    There's a *ton* of data that we manage. We run statistics, ad serving and forums all in house. Among other things, we can guarantee that no one funny is looking at the data we manage.

    Statistics are pretty beefy (they are one of our enterprise workloads after all) as we're tracking requests to all articles published. Couple a few hundred thousand readers per day with multiple article requests per reader and that's a lot of traffic to keep track of. Multiply all of that by a few ads per page and you can see where ad serving/tracking gets insane.

    Then there are the forums. Repeat the same workload as above but across a different, but also quite large community.

The MS SQL server handles the main site; the MySQL server handles the forums + ads :)

    Take care,
    Anand
    Reply
  • Lord 666 - Wednesday, February 08, 2012 - link

    @Anand,

What model server and what controller was it using with the qty 8 320 drives? Been waiting for an article like this for some time.
    Reply
  • Anand Lal Shimpi - Thursday, February 09, 2012 - link

    The temporary hardware is a Dell R710 I believe. We're simply using Intel's Matrix RAID, no real need for a discrete PCIe RAID solution for what we're doing. I'll be providing more details about our final hardware configuration and how it compares to what we were running on for the past few years in the not too distant future.

    Take care,
    Anand
    Reply
  • mojobary - Thursday, February 09, 2012 - link

    Hi,

This is exactly the information I am interested in. I am a video editor, so my needs are typically long sequential reads. I would be interested in RAID adapters and iSCSI and Fibre Channel RAID enclosures with respect to using SSDs. There is not much good information about this out in the wild. I have been researching this topic for about nine months and do not have conclusive information. Even vendors that say they "support SSD" don't list supported drives or even TRIM support. I typically like this site as it seems unbiased in this regard and usually helps drive my purchasing decisions.
    Reply
  • bobbozzo - Monday, February 13, 2012 - link

    HDDs are pretty good at sequential reads (and writes).

    For the same $, you'd be able to get more HDDs, and therefore higher sequential performance, than SSDs.

    This will remain true until SSDs get MUCH faster sequential performance, or get MUCH cheaper than they currently are.
    Reply
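bobbozzo's cost argument can be sketched numerically; the prices and throughput figures below are illustrative 2012-era assumptions, not quotes:

```python
def aggregate_seq_mb_s(budget_usd, price_per_drive_usd, seq_mb_s_per_drive):
    # Total sequential bandwidth a fixed budget buys, assuming
    # throughput scales linearly across drives (ideal RAID-0).
    drives = budget_usd // price_per_drive_usd
    return drives * seq_mb_s_per_drive

# Illustrative: $100 HDD @ 130 MB/s vs $400 SSD @ 500 MB/s
print(aggregate_seq_mb_s(1200, 100, 130))  # 1560 (12 HDDs)
print(aggregate_seq_mb_s(1200, 400, 500))  # 1500 (3 SSDs)
```

On numbers like these the HDD array also carries far more capacity, which is the stronger half of the argument for a sequential-heavy video workload.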
  • Movieman420 - Wednesday, February 08, 2012 - link

Given the 710's obvious benefit of using HET NAND, it'd be nice to see a comparison between it and an eMLC-equipped OCZ SF-2500 Deneva 2 drive. :evil grin: Reply
  • Anand Lal Shimpi - Thursday, February 09, 2012 - link

    I'm trying to get more enterprise SSDs in house. I've got a bunch that I'm working on now actually. Not the Deneva 2 sadly :)

    Take care,
    Anand
    Reply
  • Sufo - Thursday, February 09, 2012 - link

How about the HP drives for ProLiant? Also, anything from Anobit?

    Is the 710 a realistic option for enterprise?
    Reply
  • zepi - Wednesday, February 08, 2012 - link

    I was hoping on some input regarding TRIM and SSD RAIDs in enterprise environments. What if I stick these babies to a proper raid-controller to run them in RAID 5? Or how about under other operating systems than Windows? Do the drives choke quickly if trim is not available or is it a non-issue? Does trim work in a software RAID array, assuming my operating system supports it? And how about trim / garbage collection behavior if the drives are never idle?

Afaik Intel has released RAID 0- and RAID 1-compatible drivers that support trim, but only for Windows. Was that active in your test, or does it even matter in the slightest?
    Reply
  • lonestar212 - Wednesday, February 08, 2012 - link

I was about to ask exactly the same thing. Very curious about this! Reply
  • Anand Lal Shimpi - Thursday, February 09, 2012 - link

    Given enough spare area and a good enough SSD controller, TRIM isn't as important. It's still nice to have, but it's more of a concern on a drive where you're running much closer to capacity. Take the Intel SSD 710 in our benchmarks for example. We're putting a ~60GB data set on a 200GB drive with 320GB of NAND. With enough spare area it's possible to maintain low write amplification without TRIM. That's not to say that it's not valuable, but for the discussion today it's not at the top of the list.

    The beauty of covering the enterprise SSD space is that you avoid a lot of the high write amp controllers to begin with and extra spare area isn't unheard of. Try selling a 320GB consumer SSD with only 200GB of capacity and things look quite different :-P

    Take care,
    Anand
    Reply
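The spare-area numbers Anand quotes work out as follows (a sketch of the arithmetic only; the "effective spare" line is one common way to think about it, not an official Intel formula):

```python
def over_provisioning_pct(raw_nand_gb, user_capacity_gb):
    # Spare area as a percentage of user-visible capacity
    return (raw_nand_gb - user_capacity_gb) / user_capacity_gb * 100

# Intel SSD 710 in the article: 320GB of NAND sold as a 200GB drive
print(over_provisioning_pct(320, 200))  # 60.0

# With only a ~60GB data set resident, the controller effectively has
# 320 - 60 = 260GB of NAND to rotate writes through.
```

The more spare NAND the controller can rotate through, the less often it must relocate live data during garbage collection, which is why write amplification can stay low even without TRIM.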
  • Stuka87 - Wednesday, February 08, 2012 - link

Great article Anand, I have been waiting for one like this. It will really come in handy to refer back to myself, and to refer others to when they ask about SSDs in an enterprise environment. Reply
  • Iketh - Thursday, February 09, 2012 - link

    Anand's nickname should be Magnitude or the OOM Guy. Reply
  • wrednys - Thursday, February 09, 2012 - link

What's going on with the media wear indicator in the first screenshot? 656%?
    Or is the data meaningless before the first E4 reset?
    Reply
  • Kristian Vättö - Thursday, February 09, 2012 - link

    Great article Anand, very interesting stuff! Reply
  • ssj3gohan - Thursday, February 09, 2012 - link

    So... something I'm missing entirely in the article: what is your estimate of write amplification for the various drives? Like you said in another comment, typical workloads on Sandforce usually see WA < 1.0, while in this article it seems to be squarely above 1. Why is that, what is your estimate of the exact value and can you show us a workload that would actually benefit from Sandforce?

This is very important, because with any reliability qualms out of the way the Intel SSD 520 could be a solid recommendation for certain kinds of workloads. This article does not show any benefit to the 520.
    Reply
  • Christopher29 - Thursday, February 09, 2012 - link

Members of this forum are testing (Anvil) SSDs with VERY extreme workloads. The X25-V 40GB (an Intel drive) already has 685TB of WRITES! This is WAY more than the 5TB suggested by Intel. They also fill the drives completely! This means that your 120GB SSDs (even limited to 100GB) could withstand almost 1PB of writes. One of their 40GB Intel 320s failed after writing 400TB!
The Corsair Force 3 120GB already has 1050TB of writes! You should reconsider your assumptions, because it seems that those drives (and large ones especially) will last much longer.

    Stats for today:
    - Intel 320 40GB – 400TB (dead)
    - Samsung 470 64GB – 490TB (dead)
    - Crucial M4 64GB – 780TB (dead)
    - Crucial M225 60GB – 840TB (dead)
    - Corsair F40A - 210TB (dead)
    - Mushkin Chronos Deluxe 60GB – 480TB (dead)
    - Corsair Force 3 120GB – 1050TB (1 PB! and still going)
    - Kingston SSDNow 40GB (X25-V) (34nm) - 640TB

    SOURCE:
    http://www.xtremesystems.org/forums/showthread.php...
    Reply
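For context, a back-of-the-envelope endurance estimate consistent with those numbers (the P/E cycle count and write amplification below are assumed illustrative values, not vendor specs):

```python
def endurance_tb(capacity_gb, pe_cycles, write_amp):
    # Rough total host writes before NAND wear-out, in TB:
    # capacity times rated P/E cycles, divided by write amplification.
    return capacity_gb * pe_cycles / write_amp / 1000

# A 40GB drive, assuming 5,000 P/E cycles and WA of 1.1
# under a mostly sequential workload:
print(round(endurance_tb(40, 5000, 1.1)))  # 182 (TB)
```

The drives in that thread blowing far past estimates like this is consistent with rated P/E cycle counts being conservative, which is the commenter's point.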
  • Christopher29 - Thursday, February 09, 2012 - link

PS: Also interestingly, the Force 3 (which lasted longest) is exactly an SF-2281 drive. So what is it in reality, Anand: does this mean that SF does write less and therefore the SSD lasts longer? Reply
  • Death666Angel - Thursday, February 09, 2012 - link

    In every sentence, he commented how he was being conservative and that real numbers would likely be higher. However, given the sensitive nature of business data/storage needs, I think most of them are conservative and rightly so. The mentioned p/e cycles are also just estimates and likely vary a lot. Without anyone showing 1000 Force 3 drives doing over 1PB, that number is pretty much useless for such an article. :-) Reply
  • Kristian Vättö - Thursday, February 09, 2012 - link

    I agree. In this case, it's better to underestimate than overestimate. Reply
  • ckryan - Thursday, February 09, 2012 - link

Very true. And again, many 60/64GB drives could do 1PB with an entirely sequential workload. Under such conditions, most non-SF drives typically experience a WA of 1.10 to 1.20.

    Reality has a way of biting you in the ass, so in reality, be conservative and reasonable about how long a drive will last.

    No one will throw a parade if a drive lasts 5 years, but if it only lasts 3 you're gonna hear about it.
    Reply
  • ckryan - Thursday, February 09, 2012 - link

    The 40GB 320 failed with almost 700TB, not 400. Remember though, the workload is mostly sequential. That particular 320 40GB also suffered a failure of what may have been an entire die last year, and just recently passed on to the SSD afterlife.

    So that's pretty reassuring. The X25-V is right around 700TB now, and it's still chugging along.
    Reply
  • eva2000 - Thursday, February 09, 2012 - link

Would be interesting to see how consumer drives fare in these tests, and what their life expectancy is, if they are configured with >40% over-provisioning. Reply
  • vectorm12 - Thursday, February 09, 2012 - link

    Thanks for the insight into this subject Anand.

However, I am curious as to why controller manufacturers haven't come up with a controller to manage cell wear across multiple drives without RAID.

Basically, throw more drives at the problem. As you would to some extent be mirroring most of your P/E cycles in a traditional RAID, I feel there should be room for an extra layer of management. For instance, having a traditional RAID 1 between two drives and keeping another one or two as "hot spares" for when cells start to go bad.

After all, if you deploy SSDs in RAID you're likely to be subjecting them to a similar if not identical number of P/E cycles. This would force you to proactively swap out drives (naturally most would anyway) in order to guarantee you won't be subjected to a massive, collective failure of drives risking loss of data.

Proactive measures are the correct way of dealing with this issue, but in all honesty I love "set and forget" systems more than anything else. If a drive has exhausted its NAND, I'd much rather get an email from a controller telling me to replace the drive and that it's already handled the emergency by reallocating data to a spare drive.

Also, I'm still seeing the 320 8MB bug despite running the latest firmware in a couple of servers hosting low-access-rate files, for some strange reason. It seems as though they behave fine as long as they are constantly stressed, but leave them idle for too long and things start to go wrong. Have you guys observed anything like this behavior?
    Reply
  • Kristian Vättö - Thursday, February 09, 2012 - link

    I've read some reports of the 8MB bug persisting even after the FW update. Your experience sounds similar - problems start to occur when you power off the SSD (i.e. power cycling). A guy I know actually bought the 80GB model just to try this out but unfortunately he couldn't make it repeatable. Reply
  • vectorm12 - Monday, February 13, 2012 - link

Unfortunately I'm in the same boat. 320s keep failing left and right (up to three now), all running the latest firmware. However, the issues aren't directly related to power cycles, as these drives run 24/7 without any downtime.

I've made sure drive spin-down is deactivated, as well as all other power-management features I could think of. I've also moved the RAIDs from Adaptec controllers to the integrated SAS controllers, and still had a third drive fail.

I've actually switched out the remaining 320s for Samsung 830s now to see how they behave in this configuration.
    Reply
  • DukeN - Thursday, February 09, 2012 - link

How about an article with RAID'd drives, whether on a DAS or a high-end SAN?

    Would love to see how 12 SSDs in (for argument's sake) an MSA1000 compare to 12 15K SAS drives.

    TIA
    Reply
  • ggathagan - Thursday, February 09, 2012 - link

    Compare in what respect? Reply
  • FunBunny2 - Thursday, February 09, 2012 - link

    Anand:

I've been thinking about the case of using an SSD, which has a calculable (sort of, as this piece describes) lifespan, as swap (Linux context). Have you done (and I can't find), or are you considering doing, such an experiment? From a multi-user, server perspective, the bang for the buck might be very high.
    Reply
  • varunkrish - Thursday, February 09, 2012 - link

I have recently seen 2 SSDs fail without warning, and they are now completely undetectable. While I love the performance gains from an SSD, the lower noise and cooler operation, I feel you have to be more careful about storing critical data on an SSD as recovery is next to impossible.

    I would love to see an article which addresses SSDs from this angle.
    Reply
  • krazyderek - Thursday, February 09, 2012 - link

I've been thinking about recycling some Agility 2s into a RAID array on a server, and this article gives a great blueprint on the Intel side of things! Thank you! Reply
  • neotiger - Thursday, February 09, 2012 - link

    It's important to note that most of the SSDs you tested are not suitable for "enterprise" use because they are not crash-safe.

    X25-E, 510, 520 - none of them come with capacitors. That means in the event of a crash or power outage your data will be lost or corrupted (most likely both). They are not suited for enterprise use.
    Reply
  • Per Hansson - Sunday, February 12, 2012 - link

    Hi Anand,
    Any interest in testing Adaptecs Hybrid RAID?
    It claims to offer good speed on a RAID-1 setup with a normal HDD together with a SSD.
    Something that on a normal controller would limit the SSD to the write speed of the HDD...

    http://ask.adaptec.com/scripts/adaptec_tic.cfg/php...

    Also will you be including any more SLC drives in your tests?
    Like the Micron RealSSD P300

    http://www.micron.com/products/solid-state-storage...

    I love that you are finally starting to do enterprise tests :)

    Regards,
    Per Hansson
    Reply
  • silversurferer - Saturday, February 18, 2012 - link

    Hi,

    Fabulous article - very well written!

Just digging into SSDs, since I'm having huge problems with my mail server as the accounts and files grow in number and size.

It seems that SSDs are made for this, if I'm not mistaken. Which SSD would be suited for this, and what kind of setup is recommended? I'd gladly take any pointers on this subject.

    Thx.
    Reply
  • enealDC - Monday, February 20, 2012 - link

    Don't normally post, but I wanted to say great read! Reply
