Enterprise Storage Bench - Microsoft SQL WeeklyMaintenance

Our final enterprise storage bench test once again comes from our own internal databases. We're looking at the stats DB again; however, this time we're running a trace of our Weekly Maintenance procedure. This procedure runs a consistency check on the 30GB database, followed by an index rebuild on all tables to eliminate fragmentation. As its name implies, we run this procedure weekly against our stats DB.
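
For readers curious what such a job looks like in practice, here is a minimal sketch in Python using pyodbc: a consistency check followed by a blanket index rebuild. The connection string, database name, and the decision to rebuild every index unconditionally are illustrative assumptions, not our actual maintenance script.

```python
# Minimal sketch of a weekly maintenance job like the one traced here.
# Server, database, and driver names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;"
    "DATABASE=StatsDB;Trusted_Connection=yes;",
    autocommit=True,  # run each statement outside an explicit transaction
)
cur = conn.cursor()

# Step 1: consistency check on the whole database.
cur.execute("DBCC CHECKDB ('StatsDB') WITH NO_INFOMSGS;")

# Step 2: rebuild all indexes on every user table to remove fragmentation.
cur.execute(
    "SELECT s.name, t.name FROM sys.tables t "
    "JOIN sys.schemas s ON t.schema_id = s.schema_id;"
)
for schema, table in cur.fetchall():
    conn.cursor().execute(f"ALTER INDEX ALL ON [{schema}].[{table}] REBUILD;")

conn.close()
```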

The read:write ratio here remains around 3:1, but we're dealing with far more operations: approximately 1.8M reads and 1M writes. Average queue depth is up as well, at 5.43.
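
As an aside for readers new to these metrics: average queue depth, throughput, and latency are tied together by Little's law, so any two determine the third. A quick sketch, with purely hypothetical numbers chosen only to show the relationship:

```python
# Little's law: mean outstanding I/Os (queue depth) = arrival rate x mean
# time each I/O spends in the system. Both inputs below are hypothetical.
iops = 10_000              # assumed I/O arrival rate (operations/second)
time_in_system_s = 543e-6  # assumed mean time per I/O (543 microseconds)

avg_queue_depth = iops * time_in_system_s
print(f"average queue depth ~= {avg_queue_depth:.2f}")  # prints 5.43
```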

Microsoft SQL WeeklyMaintenance - Average Data Rate

Once again we see great performance out of the 910 here. The 800GB 910 is significantly faster than the SandForce-based drives from OCZ, but at 400GB performance is once again cut in half. In the 2.5" form factor, Intel's SSD 520 is in the lead, followed by Toshiba's 400GB SAS drive.

Microsoft SQL WeeklyMaintenance - Disk Busy Time

Microsoft SQL WeeklyMaintenance - Average Service Time

Comments

  • JellyRoll - Friday, August 10, 2012 - link

    WOW. Low QD testing on an enterprise PCIe storage card is ridiculous. End users of these SSDs will use them in datacenters, and the average QD will be ridiculously high. This evaluation shows absolutely nothing that will be encountered in this type of SSD's actual usage. No administrator in their right mind would purchase these for such ridiculously low workloads.
  • SanX - Friday, August 10, 2012 - link

    If you do not need more than 16/32/64GB for your speedy needs, then consider an almost-free RAMdisk with backup. It will be 4-8x faster than this card.
  • marcplante - Friday, August 10, 2012 - link

    It seems that there would be a market for a consumer desktop implementation.
  • Ksman - Friday, August 10, 2012 - link

    Given how well the 520s perform, perhaps a RAID of 520s on an LSI RAID adapter would be a very good solution, and a comparison vs. the 910 would be interesting. If RAID > 0, then one could pull drives and attach them directly for TRIM etc., which would eliminate the problem where SSDs in a RAID cannot be managed.
  • Pixelpusher6 - Friday, August 10, 2012 - link

    I was wondering the exact same thing. What are the advantages of offering a PCIe solution like this compared to, say, just throwing in a SAS RAID card and connecting a bunch of SAS SSDs in RAID 0? Is the Intel 910 mainly targeted at 1U/2U servers that might not have space available for a 2.5" drive? Is it possible to over-provision any 2.5" drive to increase endurance and reduce write amplification (I think the desktop Samsung 830 I have allows this)? Seeing the performance charts, I wonder how two of those Toshiba 400GB SAS drives would compare against the Intel 910.

    Is the enterprise market moving towards MLC-HET NAND with tons of spare area vs. SLC NAND because of the low cost of MLC NAND now that fabs have ramped up production? I was under the impression that SLC NAND was preferable in the enterprise segment, but I might be wrong. What are some usage scenarios where SLC would be better than MLC-HET, and vice versa?

    I think lorribot brought up a good point:

    "I like the idea but coming from a highly redundant arrays point of view how do you set this all up in a a safe and secure way, what are the points of failure? what happens if you lose the bridge chip, is all your data dead and buried?"

    I wonder if it is possible to just swap the first PCIe PCB with all the controllers and DRAM in case of a failure of the bridge chip or controller, so the data remains safe. Can SSD controllers fail? Is it likely that the Intel 910 will be used in RAID 0? I didn't think RAID 0 was used much in the enterprise. Sorry for all the questions. I have been visiting this site for over 10 years and I just now registered an account.
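
On Pixelpusher6's over-provisioning question, the intuition is simple: the more spare area a drive has, the fewer valid pages its garbage collector must relocate per erased block, so write amplification falls. The toy flash-translation-layer simulation below illustrates the effect; the geometry, block counts, and uniform random workload are illustrative assumptions, not a model of any particular drive.

```python
# Toy FTL model: uniform random host writes with greedy garbage
# collection. Write amplification = NAND page programs / host writes;
# it falls as spare (over-provisioned) area grows. Parameters are
# hypothetical and scaled down so the sketch runs quickly.
import random

def simulate(spare_fraction, blocks=64, pages_per_block=32, host_writes=50_000):
    total_pages = blocks * pages_per_block
    logical_pages = int(total_pages * (1 - spare_fraction))
    page_of = {}                    # logical page -> physical block holding it
    valid = [0] * blocks            # valid-page count per physical block
    free_blocks = list(range(1, blocks))
    state = {"block": 0, "page": 0, "nand": 0}

    def append(lpn):
        while state["page"] == pages_per_block:       # current block is full
            if free_blocks:
                state["block"], state["page"] = free_blocks.pop(), 0
            else:
                # Greedy GC: erase the block with the fewest valid pages.
                victim = min((b for b in range(blocks) if b != state["block"]),
                             key=lambda b: valid[b])
                movers = [l for l, b in page_of.items() if b == victim]
                free_blocks.append(victim)            # victim is erased
                for l in movers:                      # relocate its valid pages
                    append(l)
        if lpn in page_of:
            valid[page_of[lpn]] -= 1                  # invalidate the old copy
        page_of[lpn] = state["block"]
        valid[state["block"]] += 1
        state["page"] += 1
        state["nand"] += 1                            # one physical page program

    for _ in range(host_writes):
        append(random.randrange(logical_pages))       # uniform random host write
    return state["nand"] / host_writes

for spare in (0.07, 0.28, 0.50):
    print(f"spare area {spare:.0%}: write amplification ~= {simulate(spare):.2f}")
```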
  • FunBunny2 - Saturday, August 11, 2012 - link

    eMLC/MLC-HET/foo-MLC are all attempts to get cheaper parts into SSD chassis, even for enterprise companies such as Texas Memory. Part of the motivation is yet more sophisticated controllers, and, I suspect, the realization that enterprises understand duty life far better than consumers (who'll run an HDD forever if it survives infant mortality). The SSD survival curve (due to NAND failure) is more predictable than an HDD's, so with the very much faster operations, if 5 years remains the lifetime, the parts used don't matter. The part gets swapped out at 90% or 95% of duty life (or whatever percentage the shop decides); end of story. Five years ago, SLC was the only way to 5 years. That's no longer true.
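
FunBunny2's planned-replacement point reduces to simple arithmetic: given a rated endurance and a measured write rate, the swap date falls out directly. Every figure below is hypothetical, chosen only to show the calculation.

```python
# Back-of-envelope drive-life estimate from rated endurance and measured
# write rate, with a planned swap at 90% of duty life. All numbers are
# hypothetical, for illustration only.
rated_endurance_pb = 14.0   # assumed rated lifetime NAND writes, petabytes
writes_per_day_tb = 5.0     # assumed measured host writes per day, terabytes
write_amplification = 1.4   # assumed controller write amplification
swap_threshold = 0.90       # replace the drive at 90% of duty life

nand_writes_per_day_tb = writes_per_day_tb * write_amplification
life_days = (rated_endurance_pb * 1000) / nand_writes_per_day_tb
print(f"estimated duty life: {life_days / 365:.1f} years")
print(f"planned swap after : {life_days * swap_threshold / 365:.1f} years")
```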
  • GatoRat - Sunday, August 12, 2012 - link

    "the 800GB 910 is easily the fastest SSD we've ever tested."

    Yet the tests clearly show that it isn't. In fact, the Oracle tests show it's a dog. In other tests, it doesn't come out on top. The OCZ Z-Drive R4 CM84 600GB is clearly the faster drive overall.
  • Galcobar - Sunday, August 12, 2012 - link

    Grok!

    I'm impressed both to see the literary reference, correctly used, and that nobody has called it a typo in the comments. Not bad for a fifty-year-old novel once dismissed by the New York Times as a puerile mishmash.
  • a50505 - Thursday, August 30, 2012 - link

    So, has anyone heard of a workstation-class laptop with a PCIe-based SSD?
