AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).
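Expressed as percentages, that split works out to roughly 45% reads and 55% writes. A quick sketch in Python, using only the counts quoted above:

    # Read/write split for the light workload trace (counts from the text above)
    reads = 372630
    writes = 459709
    total = reads + writes

    print("reads:  {:.1%}".format(reads / total))   # ~44.8% of all I/O operations
    print("writes: {:.1%}".format(writes / total))  # ~55.2% of all I/O operations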

The I/O breakdown is similar to the heavy workload at small IO sizes; however, you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size    % of Total
4KB        27%
16KB       8%
32KB       6%
64KB       5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.
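For those curious how a figure like average queue depth falls out of a trace: it is just the time-weighted average number of outstanding I/Os, i.e. the sum of individual I/O service times divided by the wall-clock span of the trace. Below is a minimal sketch in Python, assuming a hypothetical list of (issue, completion) timestamps; it is illustrative only and not the tooling used for the benchmark:

    # Sketch: time-weighted average queue depth from an I/O trace.
    # Assumes a hypothetical list of (issue_time, completion_time) pairs in seconds.
    def average_queue_depth(ios):
        outstanding_time = sum(done - issued for issued, done in ios)
        wall_time = max(done for _, done in ios) - min(issued for issued, _ in ios)
        return outstanding_time / wall_time

    toy_trace = [(0.000, 0.004), (0.001, 0.003), (0.002, 0.006)]  # toy data
    print(average_queue_depth(toy_trace))  # ~1.67 outstanding I/Os on average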

Light Workload 2011 - Average Data Rate

The performance advantage over the RevoDrive 3 X2 drops to around 29% in our lighter workload. The narrowing gap makes sense given the nature of this workload: there's less data to break up and distribute among all of the controllers, and thus we see less of a speedup.
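To be clear about how a number like that is derived, the advantage is simply the ratio of the two drives' average data rates minus one. The MB/s figures in the sketch below are placeholders chosen to illustrate the math, not the measured results from the chart above:

    # Percentage advantage = (review drive rate / comparison drive rate) - 1.
    # The data rates below are placeholder values for illustration only;
    # see the Average Data Rate chart for the measured numbers.
    review_drive_mbps = 258.0      # hypothetical
    revodrive_3_x2_mbps = 200.0    # hypothetical
    advantage = review_drive_mbps / revodrive_3_x2_mbps - 1
    print("{:.0%}".format(advantage))  # -> 29% with these placeholder inputs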

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)

Comments

  • caliche - Wednesday, September 28, 2011 - link

    I am sure he is referring to the previous versions of the z-drive, which is all you can use as an indicator.

    I am an enterprise customer. Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production ready. We are using them in benchmarking servers running Red Hat Enterprise 5 for database stores, mostly read-only, to break other pieces of software talking to it. Very low writes.

    But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on board software on the SSD board itself, no OS, no I/O on it at all besides the power up RAID confidence check. Power on the server, works one day, next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

    In a perfect world, you have redundant and distributed everything with spare capacity and this is not a factor. But then you start looking at dealing with these failures and you start to ask yourself whether your time is better spent screwing around with an RMA process and rebuilds, or optimizing your environment.
  • ypsylon - Thursday, September 29, 2011 - link

    Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs, at a ridiculous price point. You can lose all of your data much more quickly than from a normal HDD. RAID arrays built from standard HDDs are just as fast as 1 or 2 "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID 0 (except maybe for video processing). RAID 0 is pretty much non-existent in serious storage applications. As a backup I much prefer another HDD array to an unreliable, impossible-to-test, super-duper expensive SSD.

    You can't test NAND reliability. That is the biggest problem with SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry, and if you can't hold on to the big storage market then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.
  • Zan Lynx - Thursday, September 29, 2011 - link

    You are so, so wrong.

    Enterprises are loving SSDs and are buying piles of them.

    SSDs are the best thing since sliced bread if you run a database server.

    For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times less than a 4K read off a 15K SAS drive. The drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

    If you have a lot of operations that work at a queue depth of 1, the SSD will win every time, no matter how large the disk array. [A rough worked example of this follows the comment thread below.]
  • leonzio666 - Wednesday, November 2, 2011 - link

    Bear in mind though, that enterprises (the real heavyweights) probably prefer something like Fusion-io ioDrives, which by the way are the only SSDs running in IBM-driven blade servers. With speeds up to 3 Gb/s and over 320k IOPS, it's not surprising they cost ca. $20k per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot - these SSDs use SLC NAND...
  • MCS7 - Thursday, September 29, 2011 - link

    I remember Anand doing a Voodoo2 card review (video) way, way, way back at the turn of the millennium! Oh boy... we are getting OLD... lol. Take care all.
  • Googer - Thursday, September 29, 2011 - link

    Statistics for CPU usage would have been handy, as some storage devices place greater demands on the CPU than others. Even between various HDD makes, CPU use varies.
  • alpha754293 - Thursday, September 29, 2011 - link

    Were you able to reproduce the SF-2xxx BSOD issue with this, or is it limited to just the SF-2281?
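On the queue-depth-of-1 point Zan Lynx raises above: at QD1 a device can only service one request at a time, so throughput is simply the reciprocal of latency, and adding spindles to an array doesn't raise it. Here is a rough sketch with assumed ballpark latencies, not measurements from this review:

    # Queue depth 1: throughput is bounded by single-request latency,
    # so adding drives to an array does not raise it.
    # Latencies below are assumed ballpark figures, not measured values.
    sas_15k_latency_s = 0.005     # ~5 ms for a random 4KB read on a 15K RPM SAS drive
    pcie_ssd_latency_s = 0.00005  # ~50 us for a random 4KB read on a PCIe SSD

    print(1 / sas_15k_latency_s)   # ~200 IOPS at QD1, regardless of array size
    print(1 / pcie_ssd_latency_s)  # ~20,000 IOPS at QD1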
