Enterprise Storage Bench - Oracle Swingbench

We've been toying with the idea of bringing enterprise performance tests to our SSD reviews for over a year now. We had expected to have more time to launch the suite, but the Z-Drive R4 (and another enterprise SSD) was dropped on us unexpectedly, so we had to rush things a bit. By no means is this the extent of our enterprise storage bench, but it's a start.

We begin with a popular benchmark from our server reviews: Oracle Swingbench. This is a pretty typical OLTP benchmark that simulates a light-to-medium load of 100 - 150 concurrent users. The database is fairly small at 10GB; the workload it generates, however, is absolutely brutal.

Swingbench consists of over 1.28 million read IOs and 3.55 million writes. The read/write ratio in GB transferred is nearly 1:1, which means the average read is considerably larger than the average write. Parallelism in this workload comes from aggregating IOs, as 88% of the operations in this benchmark are 8KB or smaller. This trace is actually borrowed from our CPU reviews, so its average queue depth is only 1.33. We will be following up with a version that features a much higher queue depth in the coming weeks.
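
To make the implied IO sizes concrete, here's a quick back-of-the-envelope sketch in Python, using only the IO counts quoted above and treating the ~1:1 GB ratio as exact:

```python
# Back-of-the-envelope check on the workload mix described above.
# IO counts are from the trace; if total GB read ~= total GB written,
# then: read_ios * avg_read_size ~= write_ios * avg_write_size.

read_ios = 1_280_000   # total read IOs in the trace
write_ios = 3_550_000  # total write IOs in the trace

size_ratio = write_ios / read_ios
print(f"average read is ~{size_ratio:.1f}x the size of the average write")
# -> average read is ~2.8x the size of the average write
```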

Oracle Swingbench - Average Data Rate

Surprisingly enough, the SF-2281 based drives don't do all that well here. A single Vertex 3 is only good for 38MB/s, while the old X25-E manages 56.1MB/s. The Z-Drive R4 is the fastest drive in the chart, but only by a roughly 30% margin.

Oracle Swingbench - Disk Busy Time

The same standings are reflected in disk busy time. However, as I mentioned before, this benchmark features a very low average queue depth. With only one IO outstanding most of the time, you're not going to see much of a difference between multi-controller and single-controller drives. Despite that, the Z-Drive R4 still enjoys a 90%+ advantage over a single Vertex 3, implying that there are periods of extremely high queue depth that are simply hidden in the average. To see how well the drive does during these bursts of very heavy IO activity we need to look at another metric: average service time.
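
As an illustration, the sketch below uses a purely hypothetical queue depth profile (not measured data) to show how a trace can spend almost all of its time at a queue depth of 1 and still report an average close to 1.33:

```python
# Hypothetical queue depth profile: (fraction of runtime, queue depth).
# 99% of the time a single IO is outstanding; 1% of the time the
# queue spikes to 35 outstanding IOs.
qd_profile = [(0.99, 1.0), (0.01, 35.0)]

# Time-weighted average -- the headline number a trace reports.
avg_qd = sum(frac * qd for frac, qd in qd_profile)
print(f"average queue depth: {avg_qd:.2f}")  # -> 1.34
```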

Oracle Swingbench - Average Service Time

Average data rate and disk busy time tell us how much data is moved, but not how quickly each queued IO completes. Average service time gives us exactly that. Here we see huge advantages for the SF-2281 drives, and the Z-Drive R4 is significantly faster than even the RevoDrive 3 X2.
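
Conceptually, average service time is just the device's total busy time spread across every IO it completed over the same interval. The sketch below walks through the arithmetic with hypothetical counter values (not numbers from this benchmark):

```python
# Hypothetical counters of the kind iostat-style tools expose.
busy_time_ms = 50_000     # time the device spent servicing IOs (ms)
ios_completed = 250_000   # IOs completed over the same interval

# Average service time: busy time divided across every completed IO.
avg_service_time_ms = busy_time_ms / ios_completed
print(f"average service time: {avg_service_time_ms:.2f} ms")  # -> 0.20 ms
```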

Both metrics are important - one gives you an idea of average performance while the other illustrates how the drive performs under heavy load.

Comments

  • jdietz - Tuesday, September 27, 2011

    I looked up the prices for these on Google Shopping - $7 / GB.

    These offer extreme performance, but probably only an enterprise server can ever benefit from this much of it. Enthusiast users of single-user machines should probably stick with the RevoDrive X2 for around $2 / GB.
  • NCM - Tuesday, September 27, 2011

    Anand writes: "During periods of extremely high queuing the Z-Drive R4 is a few orders of magnitude faster than a single drive."

    Umm, a bit hyperbolic! With "a few" meaning three or more, the R4 would need to be at least 1000 times faster. That's nowhere near the case.
  • JarredWalton - Tuesday, September 27, 2011

    Correct. I've edited the text slightly, though even a single order of magnitude is huge, and we're looking at over 30x faster with the R4 CM88 (and over two orders of magnitude faster on the service times for the weekly stats update).
  • Casper42 - Tuesday, September 27, 2011

    Where do you plan on testing it? (EU vs US)

    Have you tried asking HP for an "IO Accelerator"? (It's a Fusion-io card.)

    I worked with a customer near me a few weeks ago who was testing 10 x 1.28TB Fusion-io cards across 2 different DB server upgrade projects: 8 in a DL980 for one project and 2 in a DL580 G7 for a separate project.
  • Movieman420 - Tuesday, September 27, 2011

    I see all these posts taking all kinds of punishment. Please try to remember that ANY company that uses SandForce has the SAME issues, but since OCZ is the largest they catch all the flak. If anything, SF needs to beef up validation testing first and foremost.
  • josephjpeters - Wednesday, September 28, 2011

    Like I said before, it's really more of a motherboard issue with the SATA ports than it is a SF/OCZ issue. They designed to spec...
  • Yabbadooo - Tuesday, September 27, 2011

    I note that on the Windows Live Team blog they write that they are moving to flash-based blob storage for their file systems.

    Maybe they will use a few of these? That would definitely be a big vote of confidence, and the testimonials from that would be influential.
  • Guspaz - Tuesday, September 27, 2011

    I have to wonder at the utility of these drives. They're not really PCIe drives; they're four or eight SAS drives in RAID-0 plus a SAS controller on a single PCB, so they're still going to be bound by the limitations of RAID-0 and SAS. There are proper PCIe SSDs on the market (Fusion-io makes some), but considering the price per gig, these Z-Drives seem to offer little benefit other than saving space.

    Why should I spend $11,200 on a 1600GB Z-Drive when I can spend about the same on eight OCZ Talos SAS drives and a SAS RAID controller, and get 3840GB of capacity? Or spend half as much on eight OCZ Vertex 3 drives and a SATA RAID controller, and get 1920GB of capacity?

    I'm just trying to see the value proposition here. Even with enterprise-grade SSDs (like the Talos) and RAID controllers, the Z-Drive seems to cost twice as much per gig as OCZ's own products.
  • lorribot - Tuesday, September 27, 2011

    I'm with you on this.

    What happens if a controller toasts itself? Where's your data then?
    I would rather have smaller hot-swap units sitting behind a RAID controller.
    It is a shame OCZ couldn't supply such a setup for you to compare performance against, or perhaps they know it would be comparable.

    Yes, it is a great bit of kit, but if I can't RAID it then it is of no more use to me than as a cache, and RAM is better at that, and a lot cheaper: $11,000 buys some big quantities of DDR3.

    In the enterprise space, security of data is king; speed is secondary. Losing data means finding a new job; with slow data you just get moaned at. That is why SANs are so widely used. Having all your storage in one basket that could easily fail is a big no-no, and has been for many years.
  • Guspaz - Tuesday, September 27, 2011

    To be fair, you can RAID it in software if required. You could RAID a bunch of USB sticks if you really wanted to. There are more than a few enterprise-grade SAN solutions out there that ultimately rely on Linux's software RAID, after all.
