Enterprise Storage Bench - Microsoft SQL UpdateDailyStats

Our next two tests are taken from our own internal infrastructure. We do a lot of statistics tracking at AnandTech - we record traffic data for all articles as well as aggregate traffic for the entire site (including the forums) on a daily basis. We also keep a running total of traffic for the month. Our first benchmark is a trace of the MS SQL process that does all of the daily and monthly stats processing for the site. We run this process once a day, as it puts a fairly high load on our DB server. Then again, we don't have a beefy SSD array in there yet :)

The UpdateDailyStats procedure is mostly reads (a 3:1 ratio of gigabytes read to gigabytes written), with 431K read ops and 179K write ops. Average queue depth is 4.2, and only 34% of all IOs are issued at a queue depth of 1. The transfer size breakdown is as follows (a short sketch after the table shows how figures like these can be pulled from a raw IO trace):

AnandTech Enterprise Storage Bench - MS SQL UpdateDailyStats IO Breakdown
IO Size    % of Total
8KB        21%
64KB       35%
128KB      35%
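
For those curious how numbers like these come out of a trace, below is a minimal sketch in Python. The CSV layout (op, size_bytes, and queue_depth columns) is purely hypothetical - our actual traces are captured and processed with different tooling.

```python
import csv
from collections import Counter

def summarize_trace(path: str) -> None:
    """Tally read/write mix, queue depth, and transfer sizes from an IO trace.

    Assumes a hypothetical CSV with columns: op ("R"/"W"), size_bytes,
    and queue_depth (the depth at the time the IO was issued).
    """
    reads = writes = read_bytes = write_bytes = 0
    qd_sum = qd1 = total = 0
    sizes = Counter()

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            size, qd = int(row["size_bytes"]), int(row["queue_depth"])
            if row["op"] == "R":
                reads += 1
                read_bytes += size
            else:
                writes += 1
                write_bytes += size
            qd_sum += qd
            qd1 += (qd == 1)      # count IOs issued at queue depth 1
            total += 1
            sizes[size] += 1      # histogram of transfer sizes

    print(f"GB read:written  {read_bytes / write_bytes:.1f}:1")
    print(f"ops              {reads} reads, {writes} writes")
    print(f"avg queue depth  {qd_sum / total:.1f} ({100 * qd1 / total:.0f}% at QD1)")
    for size, count in sizes.most_common():
        print(f"{size // 1024:>4} KB IOs      {100 * count / total:.0f}%")
```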

[Chart: Microsoft SQL UpdateDailyStats - Average Data Rate]

Things look a lot better in our first SQL benchmark: Micron's P320h outperforms both of the OCZ SandForce based offerings. Only Intel's 910 is faster, and it maintains a healthy performance advantage (~44%) over the P320h here.

Average service times are very low, one of the benefits of a native PCIe SSD controller that can serve so many IOs in parallel.

[Chart: Microsoft SQL UpdateDailyStats - Disk Busy Time]

[Chart: Microsoft SQL UpdateDailyStats - Average Service Time]

Comments

  • JellyRoll - Monday, October 15, 2012

    Of course you have absolutely no experience with virtualization, which would mean that for your archaic workloads you wouldn't need something of this nature.
    Users that purchase this will not be running one database at such low queue depths; that would be an insane waste of money.
    This is designed for high-load OLTP and virtualized environments, not to run the database of one website.
    You may be in IT at some small company, but you haven't seen anything on a datacenter scale, apparently.
  • DataC - Tuesday, October 16, 2012

    JellyRoll is correct. I work for Micron, and we developed the P320h’s controller and firmware through collaboration with enterprise OEMs—which is why we optimized for higher queue depths. When the P320h is run in these environments (which are common in datacenters), you’ll see significantly higher performance than what’s shown in the charts above.
  • jospoortvliet - Tuesday, October 16, 2012

    Yup. And it should be tested on a proper enterprise platform - this test is like running a NASCAR vehicle with the handbrake on.

    Time for an upgrade to a real OS, Anand.
  • Denithor - Monday, October 15, 2012

    Would have liked to see the fastest consumer-grade drive thrown in just to see exactly how much faster enterprise drives go. Also would like to see how this drive would perform in the standard Light and Heavy Bench tests.
  • FunBunny2 - Monday, October 15, 2012

    Actually, against a Fusion-io part - the closest comparable example.
  • jwilliams4200 - Monday, October 15, 2012

    Right, enterprise drives should get all the standard consumer SSD tests run on them in addition to the enterprise tests.
  • mckirkus - Wednesday, October 17, 2012

    And I'd argue a RAMDisk should be included just to get a sense of relative performance.
  • Kevin G - Monday, October 15, 2012

    I'm kinda surprised there wasn't more discussion of the effects of the native PCI-e controller. Lower latency results do crop up in various benchmarks here. I wonder if the impact is merely 'benchmark only' and not anything that'd be noticeable in more real world tests.

    By going with 34 nm SLC, they have limited capacity, but this article seems to indicate that the controller is capable of supporting MLC in the 20 to 30 nm range. That would allow it to hit the 4 TB maximum capacity of the controller. I'm also curious how such a change would perform. The current P320h does need its PCI-e 2.0 x8 connection, as some of the benchmarks (barely) exceed what a PCI-e 2.0 x4 link can provide (rough link math after this comment). With faster NAND, a move to PCI-e 3.0 x8 or PCI-e 2.0 x16 may be warranted.

    I'm also curious if multiple P320h's can be used in a system behind a RAID. Overkill the overkill?

    Now for a few general comments about NVMe. I'd love to see NAND chips on DIMMs at the enterprise level. If the controller detects NAND failure or chips reaching their maximum endurance, they could potentially be swapped out. This is akin to current ECC DIMMs. Along those same lines it would be nice to see a SAS or SATA port on the board so that it could fail over to a hard drive in the event of multiple impending NAND failures. The main reasoning I can see to avoid DIMMs would simply be physical space.

    This is also a good preview of what to expect with SATA-Express drives next year. They won't reach such bandwidth figures as they'll be limited to two PCI-e lanes but the latency improvements should carry over with a good controller.
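
A quick sanity check on the link widths mentioned above - a minimal sketch using the published PCIe transfer rates and line encodings, ignoring packet and protocol overhead (so real-world usable throughput is lower):

```python
# Per-direction PCIe payload bandwidth: transfers/s * encoding efficiency,
# converted from gigabits to gigabytes. Protocol overhead is ignored here.

GT_PER_LANE = {"2.0": 5.0, "3.0": 8.0}        # gigatransfers/s per lane
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130}  # 8b/10b vs 128b/130b line code

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Raw per-direction bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

for gen, lanes in [("2.0", 4), ("2.0", 8), ("2.0", 16), ("3.0", 8)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth_gbs(gen, lanes):.1f} GB/s")
```

A PCIe 2.0 x4 link tops out around 2 GB/s per direction, so results that brush against that figure do justify the card's x8 connection.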
  • PCTC2 - Monday, October 15, 2012

    You could probably just do an OS-level software stripe (like in Linux; the address math is sketched after this comment). I think the bigger benefit would be usable capacity rather than performance, though the performance increase could be tangible, depending on your workload.

    As for the link, I think performance is constrained more by the controller than by the NAND. I don't think this iteration of the controller needs a PCIe 3.0 or PCIe 2.0 x16 link; it wouldn't saturate them. As you said, some of the tests don't even saturate a PCIe x4 link, and that's before accounting for overhead (there is overhead).

    Also, Anand did point out that a 25nm eMLC version is coming in the future.

    As for putting chips on DIMMs, for a HH/HL PCIe card that is a waste of space, as you said yourself. Between the controller, the DRAM, and the NAND, the sockets would just take up room. The daughterboard approach allows a much more compact, proprietary layout tailored to the board itself. If you wanted a FH/HL card, I'm sure DIMMs would be more feasible.
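
For reference, a minimal sketch of the address math behind an OS-level RAID-0 stripe like the one suggested above - the chunk size and device count are illustrative choices, not Linux md defaults:

```python
# How a RAID-0 stripe maps a logical byte offset onto one of N member
# devices: chunks rotate across devices, so adjacent chunks land on
# different drives and can be serviced in parallel.

CHUNK = 128 * 1024   # illustrative 128 KB chunk size
DEVICES = 2          # e.g. two P320h cards striped together

def raid0_map(offset: int) -> tuple[int, int]:
    """Return (device index, offset within that device) for a logical offset."""
    chunk_no, within = divmod(offset, CHUNK)
    device = chunk_no % DEVICES
    dev_offset = (chunk_no // DEVICES) * CHUNK + within
    return device, dev_offset

# A 256 KB sequential read spans both devices, so both work in parallel:
for off in range(0, 256 * 1024, CHUNK):
    dev, dev_off = raid0_map(off)
    print(f"logical {off:>6} -> device {dev}, offset {dev_off}")
```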
  • FunBunny2 - Monday, October 15, 2012

    Check out the Sun/Oracle flash appliance. Other niche enterprise flash storage products also exist.
