AnandTech Storage Bench - The Destroyer

The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO-intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.

AnandTech Storage Bench - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads are run independently in the trace, but obviously there are various operations (such as backups) running in the background at the same time.

AnandTech Storage Bench - The Destroyer - Specs
Reads: 38.83 million
Writes: 10.98 million
Total IO Operations: 49.8 million
Total GB Read: 1583.02 GB
Total GB Written: 875.62 GB
Average Queue Depth: ~5.5
Focus: Worst-case multitasking, IO consistency

The Destroyer gets its name from the sheer number of IO operations in the trace: nearly 50 million. That's enough to effectively put the drive into steady-state and give an idea of performance in worst-case multitasking scenarios. About 67% of the IOs are sequential in nature, with the rest ranging from pseudo-random to fully random.
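For readers curious how a trace gets characterized this way, here is a minimal Python sketch of one way to tally sequential versus random accesses. The trace format (a list of (offset, size) tuples) and the simple adjacency rule are illustrative assumptions, not the actual tooling behind the numbers quoted above.

```python
# Minimal sketch (not the actual trace tooling): count IOs whose start
# offset immediately follows the previous IO's end as "sequential".
def classify_ios(trace):
    """trace: list of (offset_bytes, size_bytes) tuples in issue order."""
    sequential = 0
    prev_end = None
    for offset, size in trace:
        if prev_end is not None and offset == prev_end:
            sequential += 1
        prev_end = offset + size
    return sequential, len(trace) - sequential

# Hypothetical sample: two back-to-back 128KB reads, then two scattered IOs.
sample = [(0, 131072), (131072, 131072), (4096, 4096), (262144, 65536)]
seq, rnd = classify_ios(sample)
print(f"sequential: {seq}, random: {rnd}")
```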

AnandTech Storage Bench - The Destroyer - IO Breakdown
IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB
% of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0%

I've included a breakdown of the IOs in the table above, which accounts for 95.8% of the total IOs in the trace. The remaining IOs are relatively rare in-between sizes that don't have a significant (>1%) share on their own. Over half of the transfers are large IOs (64KB and 128KB), with about a quarter being 4KB in size.
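As a rough illustration of how such a breakdown is produced, the sketch below bins a list of IO sizes into the buckets used in the table. The bucket edges follow the table; the function names and sample data are assumptions for the example, not the script used for the figures above.

```python
from collections import Counter

# Minimal sketch: bin IO sizes (in bytes) into the table's buckets.
# In-between sizes fall into "other"; the sample sizes are made up.
EXACT = {4096: "4KB", 8192: "8KB", 16384: "16KB",
         32768: "32KB", 65536: "64KB", 131072: "128KB"}

def bucket(size_bytes):
    if size_bytes < 4096:
        return "<4KB"
    return EXACT.get(size_bytes, "other")

def size_breakdown(sizes):
    counts = Counter(bucket(s) for s in sizes)
    return {label: 100.0 * n / len(sizes) for label, n in counts.items()}

print(size_breakdown([2048, 4096, 4096, 65536, 131072, 49152]))
# roughly: <4KB 16.7%, 4KB 33.3%, 64KB 16.7%, 128KB 16.7%, other 16.7%
```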

AnandTech Storage Bench - The Destroyer - QD Breakdown
Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32
% of Total | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0% | 2.1% | 1.4%

Despite the average queue depth of ~5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.
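To show how a queue depth histogram like this can be derived, here is a small sketch that counts, for each IO, how many IOs are still in flight at the moment it is issued. The (submit, complete) timestamp format and the inline sample are hypothetical; this is not the trace format behind the figures above, and the quadratic scan is only for illustration.

```python
from collections import Counter

def qd_histogram(ios):
    """ios: list of (submit_time, complete_time) pairs.
    QD at submit time = IOs still in flight, including the one being issued."""
    hist = Counter()
    for submit, _ in ios:
        in_flight = sum(1 for s, c in ios if s <= submit and c > submit)
        hist[in_flight] += 1
    return hist

# Hypothetical sample trace with overlapping IOs.
sample = [(0.0, 5.0), (1.0, 3.0), (2.0, 6.0), (7.0, 8.0)]
for qd, count in sorted(qd_histogram(sample).items()):
    print(f"QD {qd}: {100.0 * count / len(sample):.0f}% of IOs")
```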

The two key metrics I'm reporting haven't changed, and I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is more critical. As a result, I'm also reporting two new stats that provide very good insight into high-latency IOs: the share of >10ms and >100ms IOs as a percentage of the total.
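To make the different weightings concrete, the sketch below computes all four numbers from a hypothetical list of (size, latency) records: data rate is driven by total bytes over wall-clock time, so large IOs dominate it, while average latency and the >10ms/>100ms shares count every IO equally regardless of size. The record format and sample values are assumptions for illustration.

```python
# Minimal sketch: data rate vs. latency weighting, plus high-latency shares.
# ios: hypothetical list of (size_bytes, latency_seconds) records.
def summarize(ios, wall_clock_s):
    total_bytes = sum(size for size, _ in ios)
    data_rate_mb_s = total_bytes / wall_clock_s / 1e6              # dominated by large IOs
    avg_latency_ms = 1000 * sum(lat for _, lat in ios) / len(ios)  # every IO counts once
    pct_over_10ms = 100.0 * sum(lat > 0.010 for _, lat in ios) / len(ios)
    pct_over_100ms = 100.0 * sum(lat > 0.100 for _, lat in ios) / len(ios)
    return data_rate_mb_s, avg_latency_ms, pct_over_10ms, pct_over_100ms

sample = [(131072, 0.0004), (4096, 0.00008), (131072, 0.012), (4096, 0.00009)]
print(summarize(sample, wall_clock_s=0.5))
```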

AnandTech Storage Bench - The Destroyer (Data Rate)

In terms of throughput, the SSD 750 is actually marginally slower than the SM951, although when you look at latency the SSD 750 wins by a large margin. The difference between the two scores comes down to Intel's priorities: the firmware is specifically optimized for high random IO performance, which has some impact on sequential performance. As explained above, data rate puts more emphasis on large IO transfers, whereas latency treats all IOs the same regardless of their size.

AnandTech Storage Bench - The Destroyer (Latency)

The share of high-latency IOs is also excellent, in fact the best we have tested. The SSD 750 is without a doubt a very consistent drive.

AnandTech Storage Bench - The Destroyer (Latency >10ms)

AnandTech Storage Bench - The Destroyer (Latency >100ms)

Comments

  • Kristian Vättö - Thursday, April 2, 2015 - link

    That's up to the motherboard manufacturers. If they provide BIOS with NVMe support then yes, but I wouldn't get my hopes up as the motherboard OEMs don't usually do updates for old boards.
  • vailr - Thursday, April 2, 2015 - link

    If Z97 board bioses from Asus, Gigabyte, etc. are going to be upgradeable to support Broadwell for all desktop (socket 1150) motherboards, wouldn't they also want to include NVMe support? I'm assuming such support is at least within the realm of possibility, for both Z87 and Z97 boards.
  • TheRealPD - Thursday, April 2, 2015 - link

    Has anyone worked out exactly what the limitation is/why the bios needs upgrading yet?

    Simply that I had the idea that the P3700 had its own nvme orom, nominally akin to a raid card... ...& that people have had issues with the updated mobo bioses replacing intel's one with a generic one...

    ...which kind of suggests that the bios update could conceivably not be a requirement for some nvme drives.
  • vailr - Friday, April 3, 2015 - link

    A motherboard bios update would be required to provide bootability. Without that update, an NVMe drive could only function as a secondary storage drive. As stated elsewhere, each device model needs specific support added to the motherboard bios. Samsung's SM941 (an M.2 SSD form factor device) is a prime example of this conundrum, and why it's not generally available as a retail device. Although it can be found for sale at Newegg or on eBay.
  • TheRealPD - Friday, April 3, 2015 - link

    Ummmm... Well, for example, looking at http://www.thessdreview.com/Forums/ssd-discussion/... then the P3700 could be used as a boot drive on a Z87 board in July 2014 - so clearly that wasn't using a mobo bios with an added nvme orom as ami hadn't even released their generic nvme orom that's being added to the Z97 boards.

    (& from recollection, on Z97 boards, in Windows the P3700 is detected as an intel nvme device without the bios update... ...& an ami nvme one with the update)

    This appears to be effectively the same as, say, an lsi sas raid card loading its own orom during the boot process & the drives on it becoming bootable - as obviously, as new raid cards with new feature sets are introduced, you don't have to have updates for every mobo bios.

    Now, whilst I can clearly appreciate that *if* a nvme drive didn't have its own orom then there would be issues, it really doesn't seem to be the case with drives that do... ...so is there some other issue with the nvme feature set or...?

    Now, obviously this review is about another intel nvme pcie ssd - so it might be reasonable to imagine that it could well also have its own orom - but, more generally, I'm questioning the assumption that just because it's an nvme drive you can *only* fully utilise it with a board with an updated bios...

    ...& that if it's the case that some nvme ssds will & some won't have their own orom (& it doesn't affect the feature set), it would be a handy thing to see talked about in the reviews as it means that people with older machines are neither put off buying nor buy an inappropriate ssd when more consumer orientated ones are released.
  • TheRealPD - Saturday, April 4, 2015 - link

    I think I've kind of found the answer via a few different sources - it's not that nvme drives necessarily won't work properly with booting & whatnot on older boards... it's that there's no stated consistency as to what will & won't work...

    So apparently they can simply not work on some boards d.t. a bios conflict & there can separately be address space issues... So the ami nvme orom & uefi bios updates are about compatibility - *not* that an nvme ssd with its own orom will or won't necessarily work without them on any particular setup.

    it would be very useful if there was some extra info about this though...

    - well, it's conceivable that at least part of the problem is akin to the issues on much older boards with the free bios capacity for oroms & multiple raid configurations... ...where if you attempted to both enable all of the onboard controllers for raid (as this alters the bios behaviour to load them) &/or had too many additional controllers then one or more of them simply wouldn't operate d.t. the bios limitation; whereas they'd all work both individually & with smaller no's enabled/installed... ...so people with older machines who haven't seen this issue previously simply because they've never used cards with their own oroms or the ssd is the extra thing where they're hitting the limit, are now seeing what some of us experienced years ago.

    - or, similarly, that there's a min uefi version that's needed - I know that intel's recommending 2.3.1 or later for compatibility but clearly they were working on some boards prior to that...
  • pesho00 - Thursday, April 2, 2015 - link

    Why did they omit M.2? I really think this is a mistake, missing the whole mobile market, while the SM951 will penetrate both!
  • Kristian Vättö - Thursday, April 2, 2015 - link

    Because M.2 would melt with that beast of a controller.
  • metayoshi - Thursday, April 2, 2015 - link

    The Idle power spec of this drive is 4W, while the SM951 is at 50 mW with an L1.2 power consumption at 2mW. Your notebook's battery life will suffer greatly with a drive this power hungry.
  • jwilliams4200 - Thursday, April 2, 2015 - link

    Even though you could not run the performance tests with additional overprovisioning on the 750, you should still show the comparison SSDs with additional overprovisioning.

    The fair comparison is NOT the Intel 750 with no OP versus other SSDs with no OP. The comparison you should be showing is similar capacity vs. similar capacity. So, for example, a 512GB Samsung 850 Pro with OP to leave it with 400GB usable, versus an Intel 750 with 400GB usable.

    I also think it would be good testing policy to test ALL SSDs twice, once with no OP, and once with 50% overprovisioning, running them through all the tests with 0% and 50% OP. The point is not that 50% OP is typical, but rather that it will reveal the best and worst case performance that the SSD is capable of. The reason I say 50% rather than 20% or 25% is that the optimal OP varies from SSD to SSD, especially among models that already come with significant OP. So, to be sure that you OP enough that you reach optimal performance, and to provide historical comparison tests, it is best just to arbitrarily choose 50% OP since that should be more than enough to achieve optimal sustained performance on any SSD.
