Enterprise Storage Bench - Microsoft SQL WeeklyMaintenance

Our final enterprise storage bench test once again comes from our own internal databases. We're looking at the stats DB again; however, this time we're running a trace of our WeeklyMaintenance procedure. This procedure runs a consistency check on the 30GB database, followed by an index rebuild on all tables to eliminate fragmentation. As its name implies, we run this procedure weekly against our stats DB.
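For context, here's a minimal sketch of what such a weekly maintenance pass might look like, assuming a pyodbc connection and a database named StatsDB (both hypothetical; the actual procedure isn't published):

    import pyodbc

    # Hypothetical connection string; the real server and database names aren't published.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=dbserver;DATABASE=StatsDB;Trusted_Connection=yes",
        autocommit=True,
    )
    cur = conn.cursor()

    # Step 1: consistency check on the whole database (a heavy, mostly-read workload).
    cur.execute("DBCC CHECKDB ('StatsDB')")

    # Step 2: rebuild every index on every user table to eliminate fragmentation
    # (reads the existing index pages, then writes out rebuilt ones).
    cur.execute(
        "SELECT s.name, t.name FROM sys.tables t "
        "JOIN sys.schemas s ON t.schema_id = s.schema_id"
    )
    for schema, table in cur.fetchall():
        cur.execute(f"ALTER INDEX ALL ON [{schema}].[{table}] REBUILD")

    conn.close()

The consistency check explains the heavy read component of the trace, while the index rebuilds account for most of the writes.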

The read:write ratio here remains around 3:1 but we're dealing with far more operations: approximately 1.8M reads and 1M writes. Average queue depth is up to 5.43.

Microsoft SQL WeeklyMaintenance - Average Data Rate

We don't see perfect scaling going from 4 to 8 controllers, but the performance gains are tangible: +42% over the RevoDrive 3 X2 and nearly 3x the performance of a single Vertex 3.

Microsoft SQL WeeklyMaintenance - Disk Busy Time

Microsoft SQL WeeklyMaintenance - Average Service Time

Average service time continues to be where the Z-Drive R4 really dominates. Running eight controllers in parallel appears to significantly reduce average service times when queue depths skyrocket. The R4 CM88 is now over two orders of magnitude (136x) faster than a single Vertex 3, and 227x faster than the Intel X25-E. Again we see that the RevoDrive 3 X2 is much slower than it should be here, possibly pointing to a firmware issue on its part or some other enhancement on the Z-Drive R4.
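The mechanism is straightforward queueing math (Little's law): with a fixed number of outstanding requests, average service time is roughly queue depth divided by IOPS, so spreading the same queue across more controllers directly shrinks the time each request spends waiting. A rough sketch with made-up throughput figures (illustrative only, not the measured results):

    # Little's law: outstanding IOs = IOPS x average latency,
    # so average latency = queue depth / IOPS.
    def avg_service_time_ms(queue_depth, iops):
        return queue_depth / iops * 1000.0

    # Hypothetical throughput numbers for illustration only:
    configs = [("single SATA SSD", 8000), ("4-controller card", 30000), ("8-controller card", 55000)]
    for name, iops in configs:
        print(f"{name}: {avg_service_time_ms(32, iops):.2f} ms at QD 32")

The same queue depth that buries a single drive gets spread across eight controllers on the R4, which is why its service times stay low even as the RevoDrive 3 X2 and single-drive configurations fall behind.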

Comments

  • caliche - Wednesday, September 28, 2011 - link

    I am sure he is referring to the previous versions of the z-drive, which is all you can use as an indicator.

    I am an enterprise customer. Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production ready. We are using them in benchmarking servers running Red Hat Enterprise 5 for database stores, mostly read-only to break other pieces of software talking to it. Very low writes.

    But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on board software on the SSD board itself, no OS, no I/O on it at all besides the power up RAID confidence check. Power on the server, works one day, next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

    In a perfect world, you have redundant and distributed everything with spare capacity and this is not a factor. But then you start looking at dealing with these failures and you start to ask yourself is your time better spent on screwing around with an RMA process and rebuilds or optimizing your environment?
  • ypsylon - Thursday, September 29, 2011 - link

    Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs at a ridiculous price point. You can lose all of your data much quicker than from a normal HDD. RAID arrays built from standard HDDs are just as fast as 1 or 2 "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID0 (except maybe video processing). RAID0 is pretty much non-existent in serious storage applications. As a backup I much prefer another HDD array over an unreliable, impossible-to-test, super-duper expensive SSD.

    You can't test NAND reliability. That is the biggest problem with SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry, and if you can't hold on to the big storage market then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.
  • Zan Lynx - Thursday, September 29, 2011 - link

    You are so, so wrong.

    Enterprises are loving SSDs and are buying piles of them.

    SSDs are the best thing since sliced bread if you run a database server.

    For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times less than a 4K read off a 15K SAS drive. The drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

    If you have a lot of operations that work at queue depth of 1, the SSD will win every time, no matter how large the disk array.
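    To put rough numbers on that queue-depth-of-1 argument (illustrative latencies, not figures from the review or the comment):

        # Hypothetical per-IO latencies for illustration:
        ssd_latency_s = 50e-6   # ~50 microseconds for a PCIe SSD 4K read
        sas_latency_s = 5e-3    # ~5 ms seek + rotation for a 15K SAS drive

        # At queue depth 1, throughput is simply 1 / latency:
        ssd_iops = 1 / ssd_latency_s   # ~20,000 IOPS
        sas_iops = 1 / sas_latency_s   # ~200 IOPS

        # Spindles needed just to match the SSD's QD1 throughput; each
        # individual request on the array still waits ~5 ms regardless.
        print(f"drives needed: {ssd_iops / sas_iops:.0f}")

    Even with enough spindles to match aggregate throughput, the array never matches the per-request latency.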
  • leonzio666 - Wednesday, November 2, 2011 - link

    Bear in mind though, that enterprises (real heavyweights) probably prefer something like Fusion-io ioDrives, which btw are the only SSDs running in IBM-driven blade servers. With speeds up to 3 Gb/s and over 320k IOPS, it's not surprising they cost ca 20k $$ per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot - these SSDs use SLC NAND...
  • MCS7 - Thursday, September 29, 2011 - link

    I remember Anand doing a Voodoo 2 card review (video) way way way back at the turn of the millennium! Oh boy... we are getting OLD... lol take care all
  • Googer - Thursday, September 29, 2011 - link

    Statistics for CPU usage would have been handy, as some storage devices place greater demands on the CPU than others. Even between various HDD makes, CPU use varies.
  • alpha754293 - Thursday, September 29, 2011 - link

    Were you able to reproduce the SF-2xxxx BSOD issue with this? or is it limited to just the SF-2281?
