AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Like our earlier Storage Benches, the test is application trace based: we record all IO requests made to a test system, play them back on the drive we're testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test here for readability's sake, so make sure to read our Storage Bench 2013 introduction for the full details.
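For a rough sense of scale, the quoted totals imply an average transfer size of around 48 KiB per IO. A back-of-the-envelope sketch (assuming "GB" here means 10^9 bytes, which may not match the article's convention):

```python
# Totals quoted for the Destroyer trace (values from the text above).
TOTAL_IOS = 49.8e6          # 49.8 million IO operations
READ_GB, WRITE_GB = 1583.0, 875.6

# Assumption: GB = 10^9 bytes.
total_bytes = (READ_GB + WRITE_GB) * 1e9
avg_io_bytes = total_bytes / TOTAL_IOS

print(f"average transfer size: {avg_io_bytes / 1024:.1f} KiB")  # ~48.2 KiB
```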

AnandTech Storage Bench 2013 - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the drive's throughput while it was running the test workload, which can be a very good indication of overall performance. What average data rate doesn't do a good job of is accounting for the response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weight latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
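The two metrics can be sketched from per-IO records of a trace replay. This is an illustrative reduction, not AnandTech's actual tooling; the record layout and sample values are made up:

```python
# Hypothetical per-IO records from a trace replay: (bytes_transferred, service_time_us).
# The field layout and values are illustrative only.
ios = [
    (4096, 120.0),
    (131072, 850.0),
    (4096, 95.0),
]

total_bytes = sum(b for b, _ in ios)
total_service_us = sum(t for _, t in ios)

# Average data rate in MB/s (here computed over summed service time;
# a real harness would divide by wall-clock test duration instead).
avg_data_rate_mbs = (total_bytes / 1e6) / (total_service_us / 1e6)

# Average service time in microseconds, weighting every queued IO equally.
avg_service_time_us = total_service_us / len(ios)
```

Note how a single slow, deeply queued IO drags the average service time up even when throughput looks healthy, which is exactly why the metric complements average data rate.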

AnandTech Storage Bench 2013 - The Destroyer (Data Rate)

The P3700 takes the performance crown away from Samsung's XP941. Granted we are talking about a much larger and more expensive drive, but if you're after the absolute best performance for a workstation or high-end client, the P3700 is without equal.

AnandTech Storage Bench 2013 - The Destroyer (Service Time)

In our initial P3700 review we talked about the impact of NVMe and a lower-overhead interface stack on IO latency; we see the benefits of that here in our look at average service times.

AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

The XP941 remains the king in our 2011 heavy test. I was pretty surprised to find the P3700 lose its first place position here, but it's still competitive.

Light Workload 2011 - Average Data Rate

The situation reverts to normal when we look at the light workload.

36 Comments


  • [-Stash-] - Tuesday, June 10, 2014 - link

    Will be great to see how the 35 and 36 perform in the client workloads – really quite excited about these.
  • romrunning - Tuesday, June 10, 2014 - link

    In the 25% OP 4KB Random Write (QD32) graph, the Samsung XP941 showed a massive drop between its 512GB and 384GB (25% spare area) results. From 30k down to 5k - is that an anomaly?

    Also, what's with the vertical scale showing 1, 10, and 100 ranges? That forces all of the data points into a much smaller range, and it's visually not as informative.
  • Kristian Vättö - Tuesday, June 10, 2014 - link

    Are you sure you are not mixing up the graphs? With 25% spare area, the results are better (~30K IOPS), which is how things should be.
  • romrunning - Tuesday, June 10, 2014 - link

I see - the two charts at the bottom of page one are the same but with different vertical scales (as explained so well by DanNeely below).

    Yes, I suppose instead of calling it a drop, you could say it's actually a rise from 5K to 30K IOPS when you go to 25% spare area. It seems Samsung drives especially like extra spare area.
  • DanNeely - Tuesday, June 10, 2014 - link

It's called a log scale, and it's generally the better choice when you've got values spanning more than 2 or 3 orders of magnitude (i.e. more than 100x or 1000x). A linear scale would crush everything but the largest values into a tiny area at the bottom. With the highly consistent performance of the DC P3700 it's not as big a concern, but with the less consistent behavior of most consumer or older enterprise drives it's the only good way to see the behavior. Look at the Samsung 840: it has peak IOPS of ~100,000, drops to a steady state mostly between ~3,000-9,000, but has occasional excursions as low as ~110.

Also, the tests are being done to show consistency of performance, and a log scale makes seeing that much easier because the same vertical distance between min and max always represents the same multiple (e.g. 2x, 3x, 10x) between the values. A linear scale would mask inconsistency in slower drives while exaggerating it in higher-performing ones, because 20,000-22,000 is a much larger interval on a linear scale than 10-1,000, despite the former being a highly consistent drive with 10% variation and the latter having a 100x variation.
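DanNeely's point can be illustrated numerically: on a log axis, the distance between two values depends only on their ratio, so a consistent drive's spread stays visually small regardless of where it sits. The figures below reuse the IOPS numbers from the comment:

```python
import math

def log_distance(a, b):
    """Vertical distance between two values on a base-10 log axis, in decades."""
    return abs(math.log10(b) - math.log10(a))

# Consistent drive: 20,000 vs 22,000 IOPS (a 1.1x spread).
consistent = log_distance(20000, 22000)

# Inconsistent drive: 10 vs 1,000 IOPS (a 100x spread).
inconsistent = log_distance(10, 1000)

print(f"consistent drive:   {consistent:.3f} decades")    # ~0.041
print(f"inconsistent drive: {inconsistent:.3f} decades")  # 2.000
```

On a linear axis the 2,000 IOPS gap would dwarf the 990 IOPS gap; on a log axis the 100x spread is roughly 48 times taller than the 1.1x spread, matching what the eye should conclude about consistency.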
  • ryanjacoby2 - Tuesday, June 10, 2014 - link

    I'm enjoying the pivotchart type graphs you guys transitioned to a while ago. Makes readability and comparisons a breeze without the clutter, thanks for the change!
  • uruturu - Tuesday, June 10, 2014 - link

Why don't you test the same drives with 25% OP in Storage Bench 2013 as well?
    Samsung 840 Pro (25% OP) vs SanDisk Extreme II (25% OP)... factory settings seem to penalize some drives.
  • morganf - Tuesday, June 10, 2014 - link

    I thought the same thing. The 2013 storage bench writes an insane amount of data in a short time. If someone really cares about performance with such a heavy write-load, then they would certainly overprovision their SSD. The P3700 obviously already comes from the factory with a lot of OP. It only makes sense to also OP the Samsung 840 Pro and other drives, as a point of comparison, when running the 2013 Storage bench.

    To be clear, I mean to run the 2013 Storage Bench on each SSD with and without OP.
  • eanazag - Tuesday, June 10, 2014 - link

    When's the Intel giveaway for the P3700?
  • extide - Tuesday, June 10, 2014 - link

    LOL! Yeah, SIGN ME UP!! Haha ;)
