Enterprise Storage Bench - Oracle Swingbench

We begin with a popular benchmark from our server reviews: Oracle Swingbench. This is a fairly typical OLTP workload that simulates a light to medium load of 100 - 150 concurrent users. The database size is fairly small at 10GB; the workload itself, however, is absolutely brutal.

Swingbench consists of over 1.28 million read IOs and 3.55 million write IOs. The read/write ratio by GB transferred is nearly 1:1 (reads are larger than writes). Parallelism in this workload comes through aggregating IOs, as 88% of the operations in this benchmark are 8KB or smaller. This test is actually something we use in our CPU reviews, so its average queue depth is only 1.33. We will follow up with a version that features a much higher queue depth in the future.
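As a quick sanity check on those numbers, the IO counts and a near-1:1 GB ratio imply that reads must average roughly 2.8x the size of writes. A back-of-the-envelope sketch (the 8KB average write size is an assumption suggested by the size distribution above, not a measured figure):

```python
# Back-of-the-envelope check of the Swingbench IO mix described above.
read_ios = 1_280_000
write_ios = 3_550_000

# For the GB moved by reads and writes to be roughly equal,
# avg_read_size / avg_write_size must equal write_ios / read_ios.
size_ratio = write_ios / read_ios
print(f"reads must average {size_ratio:.2f}x the size of writes")

# Illustration: if writes averaged 8KB (88% of IOs are 8KB or smaller),
# reads would need to average about 22KB for a 1:1 GB ratio.
avg_write_kb = 8
print(f"implied average read size: {avg_write_kb * size_ratio:.1f} KB")
```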

Oracle Swingbench - Average Data Rate

SLC NAND offers great write latency, and we see a definite advantage for the older X25-E here in our Swingbench test. Only the 6Gbps SSD 520 is able to deliver better performance; everything else trails the 3+ year old drive. Note that the Marvell based Intel SSD 510, even with a 6Gbps interface, is the slowest drive in Intel's lineup. From a write amplification perspective, Marvell's controller has always been significantly behind Intel's own designs, so the drop in performance isn't surprising. The 710 actually delivers lower performance than the 320, but you do get much better endurance out of the 710.
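Write amplification, referenced above, is simply the ratio of data the controller physically writes to NAND versus data the host asked it to write. A minimal sketch with made-up counter values (the GB figures are hypothetical, not taken from this review):

```python
def write_amplification(nand_writes_gb: float, host_writes_gb: float) -> float:
    """Ratio of data physically written to NAND vs. data the host requested."""
    return nand_writes_gb / host_writes_gb

# Hypothetical: a controller that writes 150GB to NAND to service
# 100GB of host writes has a write amplification of 1.5x.
print(write_amplification(150, 100))  # 1.5
# Compressing controllers can drop below 1.0x on compressible data:
print(write_amplification(60, 100))   # 0.6
```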

Oracle Swingbench - Disk Busy Time

Oracle Swingbench - Average Service Time

While throughput isn't much better on the 6Gbps Intel SSD 520, average service time is tangibly lower. There's clearly a benefit to higher bandwidth IO interfaces in the enterprise space, which is a big reason we're seeing a tremendous push for PCIe based SSDs. The 710 does well here but not nearly as well as the X25-E which continues to behave like a modern SSD thanks to its SLC NAND.
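Average service time, throughput, and queue depth are tied together by Little's law (mean outstanding IOs = IOPS x mean service time), which is why lower service time at similar throughput still matters. A small sketch, where the 6,650 IOPS and 0.2ms figures are illustrative assumptions chosen to reproduce this benchmark's 1.33 average queue depth:

```python
def avg_queue_depth(iops: float, avg_service_time_s: float) -> float:
    """Little's law: mean outstanding IOs = arrival rate * mean time in system."""
    return iops * avg_service_time_s

# Assumed numbers for illustration: 6,650 IOPS at a 0.2ms average
# service time works out to the ~1.33 average queue depth quoted earlier.
print(round(avg_queue_depth(6650, 0.0002), 2))  # 1.33
```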


55 Comments

  • jeremyshaw - Wednesday, February 08, 2012 - link

    woah... I've been waiting for an article like this for a long time.

    Thank you Anandtech!
  • ckryan - Wednesday, February 08, 2012 - link

    Is AnandTech ever planning on doing a longer period SSD test? A long term testing scenario would make for interesting reading.
  • Anand Lal Shimpi - Wednesday, February 08, 2012 - link

    Technically all of our SSD tests are long term. We're still testing Vertex 2 class drives and I actually still have six Intel X25-M G1s deployed in systems in my lab alone. You only hear about them when things go wrong. Most of the time I feed errors back to the vendors to get fixes put into firmware updates. The fact that you aren't seeing more of this sort of stuff means that things are working well :-P

    But the results of our long term tests directly impact our reviews/recommendations. It's one of the reasons I've been so positive on the Samsung SSD 830 lately. I've been using 830s 24/7 since our review published in September with very good results :)

    Take care,
    Anand
  • Samus - Thursday, February 09, 2012 - link

    I've had an X25-M G1 in my Macbook since 2009, used daily, never a problem. Lack of TRIM support doesn't really seem to matter unless you're the type that writes/deletes a lot of data.
  • jwilliams4200 - Wednesday, February 08, 2012 - link

    Since you found that the 520 does not really do any better than the 320 for endurance, does this also imply that the Sandforce controller was not able to achieve significant compression on the workload that you fed to it? In other words, Sandforce compression does not work very well on real data as opposed to artificial benchmark data.
  • ckryan - Wednesday, February 08, 2012 - link

    SF is really good at compressing fake data. I suppose some logs could really benefit, but one of my personal SF drives has 10% more raw writes than host writes. I suspect I'm not alone with this either.

    People doing repeated incompressible benches could have WA higher than 1 with SF, but once you install the OS and programs, everyday writes are less compressible than promised, it would seem.
  • Anand Lal Shimpi - Wednesday, February 08, 2012 - link

    Keep in mind that only 10% more NAND writes than host writes is *really* good. It's not uncommon to get much, much higher than that with other controllers.

    We did an 8 month study on SF drives internally. The highest write amp we saw was 0.7x. On my personal drive I saw a write amp of around 0.6x.

    Take care,
    Anand
  • jwilliams4200 - Thursday, February 09, 2012 - link

    Baloney!

    You just saw a write amplification of near 1 on this very article. Why do you dodge my question?
  • erple2 - Thursday, February 09, 2012 - link

    I suspect that the workloads that they were testing for with the SF drives internally are not what is reflected in this article.

    That implies, then, that the SF drives have been doing other workloads like acting in desktops and/or laptop duties. For those kinds of things, I suspect that a 0.6-0.7x is more reasonable (assuming there isn't much reading/writing of incompressible data).

    Given that some of the workload may be for mobile applications, and given a strong focus on WDE for laptops, I wonder how that ultimately impacts the write amplification for drives with WDE on them.
  • jwilliams4200 - Thursday, February 09, 2012 - link

    The "8 month study" that he refers to is very hard to believe.

    Does he really expect us to believe that the people in Anand's test lab used these SSDs for 8 months and did not run any benchmarks on them?

    Most benchmarks write easily compressible data, and a lot of it.

    The real way to test the Sandforce compression is to write typical user data to the SSD and monitor the raw write and host write attributes. That experiment has already been done on xtremesystems.org, and the findings were that typical user data barely compresses at all -- at best raw writes were 90% of host writes, but for most data it was 100% or higher. The only thing that got some compression was the OS and application installs, and most people only do those once, so it should not be counted towards user data when estimating endurance.
