AnandTech Storage Bench - Heavy

While The Destroyer focuses on sustained and worst-case performance by hammering the drive with nearly 1TB worth of writes, the Heavy trace provides a more typical enthusiast and power user workload. Because it writes far less to the drive, the Heavy trace doesn't push the SSD into steady state, and thus it gives us a good idea of peak performance combined with some basic garbage collection routines.
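For readers unfamiliar with trace-based testing, the core idea is to record the IO stream of real applications once and then replay it against each review drive. The AnandTech playback tool isn't public, so the following is only a minimal Python sketch of the replay side, assuming a simplified, hypothetical trace format of (operation, offset, size) tuples:

```python
import os

# Hypothetical trace entries: (operation, byte offset, transfer size).
# A real storage trace would also carry timestamps and queue depth.
TRACE = [
    ("read", 0, 4096),
    ("write", 1 << 20, 131072),
    ("read", 8 << 20, 16384),
]

def replay(path, trace):
    # A production harness would open with O_DIRECT and aligned buffers
    # to bypass the page cache; omitted here to keep the sketch portable.
    fd = os.open(path, os.O_RDWR)
    try:
        for op, offset, size in trace:
            if op == "read":
                os.pread(fd, size, offset)
            else:
                os.pwrite(fd, b"\0" * size, offset)
    finally:
        os.close(fd)

# replay("/dev/sdX", TRACE)  # destructive: writes to the target device!
```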

AnandTech Storage Bench - Heavy

| Workload | Description | Applications Used |
|---|---|---|
| Photo Editing | Import images, edit, export | Adobe Photoshop |
| Gaming | Play games, load levels | Starcraft II, World of Warcraft |
| Content Creation | HTML editing | Dreamweaver |
| General Productivity | Browse the web, manage local email, document creation, application install, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Application Development | Compile Chromium | Visual Studio 2008 |

The Heavy trace drops virtualization from the equation and goes a bit lighter on photo editing and gaming, making it more relevant to the majority of end-users.

AnandTech Storage Bench - Heavy - Specs

| Reads | 2.17 million |
| Writes | 1.78 million |
| Total IO Operations | 3.99 million |
| Total GB Read | 48.63 GB |
| Total GB Written | 106.32 GB |
| Average Queue Depth | ~4.6 |
| Focus | Peak IO, basic GC routines |

The Heavy trace is actually more write-centric than The Destroyer. Part of that is explained by the lack of virtualization, because operating systems tend to be read-intensive, whether local or virtual. The total number of IOs is less than 10% of The Destroyer's, so the Heavy trace is much easier on the drive and doesn't even overwrite it once.
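The read/write mix is easy to sanity-check from the specs table above; a quick Python back-of-the-envelope (using only the figures as listed) shows that the trace is write-centric by volume even though reads outnumber writes by count:

```python
reads_m, writes_m = 2.17, 1.78        # IO counts, millions
gb_read, gb_written = 48.63, 106.32   # transfer volume, GB

print(f"write share of IO count: {writes_m / (reads_m + writes_m):.1%}")  # ~45.1%
print(f"GB written per GB read:  {gb_written / gb_read:.2f}")             # ~2.19
```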

AnandTech Storage Bench - Heavy - IO Breakdown

| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---|---|---|---|---|---|---|---|
| % of Total | 7.8% | 29.2% | 3.5% | 10.3% | 10.8% | 4.1% | 21.7% |

The Heavy trace puts more focus on the 16KB and 32KB IO sizes, but more than half of the IOs are still either 4KB or 128KB. About 43% of the IOs are sequential, with the rest leaning slightly more toward fully random than pseudo-random access.

AnandTech Storage Bench - Heavy - QD Breakdown

| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|---|---|---|---|---|---|---|---|---|
| % of Total | 63.5% | 10.4% | 5.1% | 5.0% | 6.4% | 6.0% | 3.2% | 0.3% |

In terms of queue depths, the Heavy trace is even more focused on very low queue depths, with nearly three quarters of all IOs happening at a queue depth of one or two.
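Both of these distribution claims can be verified straight from the breakdown tables; a small Python snippet using the percentages as given:

```python
# Percentages copied from the IO size and queue depth tables above.
io_size = {"<4KB": 7.8, "4KB": 29.2, "8KB": 3.5, "16KB": 10.3,
           "32KB": 10.8, "64KB": 4.1, "128KB": 21.7}
queue_depth = {"1": 63.5, "2": 10.4, "3": 5.1, "4-5": 5.0,
               "6-10": 6.4, "11-20": 6.0, "21-32": 3.2, ">32": 0.3}

print(f"4KB + 128KB share: {io_size['4KB'] + io_size['128KB']:.1f}%")    # 50.9% -> just over half
print(f"QD1 + QD2 share:   {queue_depth['1'] + queue_depth['2']:.1f}%")  # 73.9% -> ~three quarters
```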

I'm reporting the same performance metrics as in The Destroyer benchmark, but I'm running the drive in both empty and full states. Some manufacturers tend to focus intensely on peak performance with an empty drive, but in reality the drive will always contain some data. Testing the drive in a full state gives us valuable information about whether it loses performance once it's filled with data.
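The article doesn't spell out how the drive is preconditioned for the full-state run, but one straightforward approach is to fill the drive with data before replaying the trace. A minimal sketch, assuming a filesystem mounted on the review drive (the mount point and reserve size below are illustrative):

```python
import os

def fill_drive(mount_point, reserve_gb=1, chunk_mb=128):
    """Write junk data until only `reserve_gb` remains free, so the
    trace can then be replayed against a (nearly) full drive."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    with open(os.path.join(mount_point, "filler.bin"), "wb") as f:
        while True:
            st = os.statvfs(mount_point)
            if st.f_bavail * st.f_frsize <= reserve_gb * 1024**3:
                break
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the data to actually hit the drive

# fill_drive("/mnt/testdrive")  # hypothetical mount point for the review drive
```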

AnandTech Storage Bench - Heavy (Data Rate)

It turns out that the SM951 is overall faster than the SSD 750 in our Heavy trace, as it beats the SSD 750 in both data rate and average latency. I was expecting the SSD 750 to do better thanks to NVMe, but the SM951 is a very capable drive despite lacking NVMe (although there appears to be an NVMe version after all). On the other hand, I'm not too surprised, because the SM951 has been built specifically for client workloads, whereas the SSD 750 has an enterprise heritage, and even on the client side it's designed for the most intensive workloads.

AnandTech Storage Bench - Heavy (Latency)

Comments

  • knweiss - Thursday, April 2, 2015 - link

    According to Semiaccurate the 400 GB drive has "only" 512 MB DRAM.
    (Unfortunately, ARK hasn't been updated yet so I can't verify.)
  • eddieobscurant - Thursday, April 2, 2015 - link

    You're right, it's probably 512MB for the 400GB model and 1GB for the 1.2TB model.
  • Azunia - Thursday, April 2, 2015 - link

    In PCPer's review of this drive, they actually talk about the problems of benchmarking this drive. (https://www.youtube.com/watch?v=ubxgTBqgXV8)

    Seems like some benchmarks, such as Iometer, can't actually keep the drive fed because they're programmed with a single thread. Have you had similar experiences during benchmarking, or is their logic faulty?
  • Kristian Vättö - Friday, April 3, 2015 - link

    I didn't notice anything that would suggest a problem with Iometer's capability of saturating the drive. In fact, Intel provided us Iometer benchmarking guidelines for the review, although they didn't really differ from what I've been doing for a while now.
  • Azunia - Friday, April 3, 2015 - link

    I reread their article and it seems like the only problem is Iometer's file server IOPS test, which peaks at around 200,000 IOPS. Since you don't use that one, that's probably why you saw no problem.
  • Gigaplex - Thursday, April 2, 2015 - link

    "so if you were to put two SSD 750s in RAID 0 the only option would be to use software RAID. That in turn will render the volume unbootable"

    It's incredibly easy to use software RAID in Linux on the boot drives. Not all software RAID implementations are as limiting as Windows.
  • PubFiction - Friday, April 3, 2015 - link

    "For better readability, I now provide bar graphs with the first one being an average IOPS of the last 400 seconds and the second graph displaying the standard deviation during the same period"

    lol, why not just portray standard deviation as error bars, the way it's supposed to be shown? Kudos for being one of the few sites to recognize this, but what a convoluted, senseless way of showing it.
  • Chloiber - Friday, April 3, 2015 - link

    I think the practical tests in many other reviews show that the normal consumer gets absolutely no benefit (except being able to copy files faster) from such an SSD. We reached that peak a long time ago; SSDs are not the limiting factor anymore.

    Still, it's great to finally see major improvements again. It was always sad that all SSDs were limited by the interface: that was the case with SATA 2, and it's the case with SATA 3.
  • akdj - Friday, April 3, 2015 - link

    Thanks for sharing, Kristian.
    A query about throughput when using these drives in external Thunderbolt docks and PCIe enclosures (several new third-party drive and GPU enclosures are experimenting with the latter, adding powerful desktop GPUs, etc.): would there still be a bottleneck if you were to utilize the Thunderbolt headers to the PCIe lanes and on to the CPU? (Not that SLI or Crossfire would be a concern in OS X, with the exception of the Mac Pro and how its two AMD cards work together, but on Windows motherboards...) These seem like a better idea than external video cards for what I'm doing on the rMBPs. The GPUs are quick enough, especially in tandem with the Iris Pro and its ability to 'calculate' ;) -- but a 2.4GB/s twin-card RAID external box with a one-cord plug, hot or cold, would be SWEEET.
  • wyewye - Friday, April 3, 2015 - link

    Kristian: test with QD128, moron, it's NVMe.

    Anandtech becomes more and more idiotic: poor articles and crappy hosting, you have to reload pages multiple times to access anything.

    Go look at TSSDreview for a competent review.
