AnandTech Storage Bench - Heavy

While The Destroyer focuses on sustained and worst-case performance by hammering the drive with nearly 1TB worth of writes, the Heavy trace provides a more typical enthusiast and power-user workload. Because it writes less to the drive, the Heavy trace doesn't push the SSD into steady-state, so it gives us a good idea of peak performance combined with some basic garbage collection routines.

AnandTech Storage Bench - Heavy
Workload | Description | Applications Used
Photo Editing | Import images, edit, export | Adobe Photoshop
Gaming | Play games, load levels | Starcraft II, World of Warcraft
Content Creation | HTML editing | Dreamweaver
General Productivity | Browse the web, manage local email, document creation, application install, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Application Development | Compile Chromium | Visual Studio 2008

The Heavy trace drops virtualization from the equation and goes a bit lighter on photo editing and gaming, making it more relevant to the majority of end-users.

AnandTech Storage Bench - Heavy - Specs
Reads 2.17 million
Writes 1.78 million
Total IO Operations 3.99 million
Total GB Read 48.63 GB
Total GB Written 106.32 GB
Average Queue Depth ~4.6
Focus Peak IO, basic GC routines

The Heavy trace is actually more write-centric than The Destroyer. Part of that is explained by the lack of virtualization, because operating systems tend to be read-intensive, whether local or virtual. The total number of IOs is less than 10% of The Destroyer's, so the Heavy trace is much easier on the drive and doesn't even overwrite the drive once.
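As a quick sanity check on these figures, the average transfer sizes implied by the spec table can be derived with simple division. This is a back-of-the-envelope sketch; treating the GB figures as decimal gigabytes is an assumption on my part:

```python
# Derive average transfer sizes from the Heavy trace spec table.
reads = 2.17e6          # total read operations
writes = 1.78e6         # total write operations
bytes_read = 48.63e9    # total bytes read (assuming decimal GB)
bytes_written = 106.32e9  # total bytes written

avg_read_kb = bytes_read / reads / 1024
avg_write_kb = bytes_written / writes / 1024

print(f"average read:  {avg_read_kb:.1f} KB")   # ~21.9 KB
print(f"average write: {avg_write_kb:.1f} KB")  # ~58.3 KB
```

The larger average write size is consistent with the trace being more write-centric than its IO counts alone suggest.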

AnandTech Storage Bench - Heavy - IO Breakdown
IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB
% of Total | 7.8% | 29.2% | 3.5% | 10.3% | 10.8% | 4.1% | 21.7%

The Heavy trace has more focus on 16KB and 32KB IO sizes, but more than half of the IOs are still either 4KB or 128KB. About 43% of the IOs are sequential, with the rest leaning slightly more toward fully random than pseudo-random access.
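To approximate this mix in a synthetic workload, the breakdown table can be used as sampling weights. A minimal sketch, assuming the unlisted ~12.6% of IOs falls into other transfer sizes (that "other" bucket is my assumption, not from the article):

```python
import random

# Approximate the Heavy trace's IO size mix with weighted sampling.
# Buckets and shares come from the IO breakdown table; the remaining
# ~12.6% (sizes not listed) is lumped into a hypothetical "other" bucket.
sizes = ["<4KB", "4KB", "8KB", "16KB", "32KB", "64KB", "128KB", "other"]
weights = [7.8, 29.2, 3.5, 10.3, 10.8, 4.1, 21.7, 12.6]

random.seed(0)  # reproducible sample
sample = random.choices(sizes, weights=weights, k=10)
print(sample)
```

Over a long run, roughly half the sampled IOs land in the 4KB and 128KB buckets, matching the observation above.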

AnandTech Storage Bench - Heavy - QD Breakdown
Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32
% of Total | 63.5% | 10.4% | 5.1% | 5.0% | 6.4% | 6.0% | 3.2% | 0.3%

In terms of queue depths, the Heavy trace is even more focused on very low queue depths, with roughly three quarters of all IOs happening at a queue depth of one or two.
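The "three quarters" figure falls straight out of the QD breakdown table, a one-liner to verify:

```python
# Cumulative share of IOs at low queue depths, from the QD breakdown table.
qd_share = {"1": 63.5, "2": 10.4, "3": 5.1, "4-5": 5.0,
            "6-10": 6.4, "11-20": 6.0, "21-32": 3.2, ">32": 0.3}

low_qd = qd_share["1"] + qd_share["2"]
print(f"QD 1-2: {low_qd:.1f}% of all IOs")  # 73.9%
```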

I'm reporting the same performance metrics as in The Destroyer benchmark, but I'm running the drive in both empty and full states. Some manufacturers tend to focus intensively on peak performance with an empty drive, but in reality the drive will always contain some data. Testing the drive in a full state gives us valuable insight into whether the drive loses performance once it's filled with data.

AnandTech Storage Bench - Heavy (Data Rate)

It turns out that the SM951 is overall faster than the SSD 750 in our Heavy trace, beating the SSD 750 in both data rate and average latency. I was expecting the SSD 750 to do better thanks to NVMe, but the SM951 is a very capable drive despite lacking NVMe (although there appears to be an NVMe version too after all). On the other hand, I'm not too surprised: the SM951 has been built specifically for client workloads, whereas the SSD 750 has an enterprise heritage, and even on the client side it's designed for the most intensive workloads.

AnandTech Storage Bench - Heavy (Latency)


132 Comments

  • kaisellgren - Friday, May 1, 2015 - link

    Do not forget the Fiji 390x!
  • dzezik - Saturday, May 7, 2016 - link

    Who needs a chipset for PCIe if you have 40 lanes directly from the CPU? It's a step back in the configuration. It was a big step ahead to move memory and PCIe onto the CPU; the chipset is useless.
  • zrav - Thursday, April 2, 2015 - link

    >It's again a bit disappointing that the SSD 750 isn't that well optimized for sequential IO because there's practically no scaling at all

    That's a weird conclusion. I'd say it is quite impressive that the drive almost reaches peak throughput at QD 1 already. Requiring a higher QD to achieve more throughput is not a positive characteristic. But whether that matters depends on the usage scenario ofc.
  • Kristian Vättö - Thursday, April 2, 2015 - link

    It's impressive that the performance is almost the same regardless of queue depth, but I don't find 1.2GB/s to be very impressive for a 1.2TB PCIe drive.
  • futrtrubl - Thursday, April 2, 2015 - link

    Unfortunately your use of un-normalised standard deviation for performance consistency makes them a barrier to understanding. A 1000 IOPS drive with 5% variance is going to have lower standard deviation and by the way you have presented it "better consistency" than a 10000 IOPS drive with 1% variance.
  • Kristian Vättö - Thursday, April 2, 2015 - link

    Any suggestions for improving the metric? Perhaps divide by the average IOPS or its square root to take that into account as well?
  • futrtrubl - Thursday, April 2, 2015 - link

    Yes, I think dividing by the average IOPs would be perfect. You could even x100 to get it to a sort of percentage deviation.
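    The normalization being discussed here is the coefficient of variation (standard deviation divided by the mean, times 100). A minimal sketch using the hypothetical drives from futrtrubl's example above:

    ```python
    import statistics

    def consistency_cv(iops_samples):
        """Coefficient of variation of IOPS, in percent; lower = more consistent."""
        mean = statistics.mean(iops_samples)
        return statistics.pstdev(iops_samples) / mean * 100

    # Hypothetical drives: the slow drive with ~5% swings has a smaller raw
    # standard deviation than the fast drive with ~1% swings, but the
    # normalized metric ranks their consistency correctly.
    slow_drive = [1000, 950, 1050, 1000]      # ~5% swings around 1,000 IOPS
    fast_drive = [10000, 9900, 10100, 10000]  # ~1% swings around 10,000 IOPS

    print(consistency_cv(slow_drive) > consistency_cv(fast_drive))  # True
    ```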
  • bricko - Saturday, April 4, 2015 - link

    Here is a test and review of the new 750. What is up with the boot time? It's the SLOWEST of 14 drives. Everything else is great, but boot time. The Plextor M6 is 15 seconds, the 750 is 34 sec....ideas?

    http://techreport.com/review/28050/intel-750-serie...
  • Ethos Evoss - Saturday, April 4, 2015 - link

    Plextor SSDs - BEST
  • bricko - Saturday, April 4, 2015 - link

    Its only slow on the boot time, otherwise it beats ALL other ssd on different loads and tests , by 2 - 3 times....odd it seems
