AnandTech Storage Bench - Light

The Light trace is designed to be an accurate illustration of basic usage. It's essentially a subset of the Heavy trace, but with some workloads left out to reduce the number of writes and make the trace more read-intensive overall.

AnandTech Storage Bench - Light - Specs
Reads: 372,630
Writes: 459,709
Total IO Operations: 832,339
Total Data Read: 17.97 GB
Total Data Written: 23.25 GB
Average Queue Depth: ~4.6
Focus: Basic, light IO usage

The Light trace still has more writes than reads, but a very light workload would be even more read-centric (think web browsing, document editing, etc.). It has about 23GB of writes, which would account for roughly two to three days of average usage (i.e. 7-11GB of writes per day).
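For reference, here is a quick back-of-the-envelope check of that estimate; the 23.25GB total and the 7-11GB/day range are the figures quoted above, not new measurements.

```python
# Back-of-the-envelope check: how many days of "average" client writes does the
# Light trace's 23.25GB of writes represent? The 7-11GB/day range is the article's
# own estimate of typical daily writes.
total_written_gb = 23.25

for daily_writes_gb in (7, 11):
    days = total_written_gb / daily_writes_gb
    print(f"{daily_writes_gb} GB/day -> {days:.1f} days of usage")

# 7 GB/day -> 3.3 days of usage
# 11 GB/day -> 2.1 days of usage
```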

AnandTech Storage Bench - Light - IO Breakdown (% of total IOs)
<4KB: 6.2%
4KB: 27.6%
8KB: 2.4%
16KB: 8.0%
32KB: 6.5%
64KB: 4.8%
128KB: 26.4%

The IO distribution of the Light trace is very similar to the Heavy trace's, with slightly more 128KB IOs. About 70% of the IOs are sequential, however, which is a major difference compared to the Heavy trace.
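To make it concrete how a breakdown like this is derived, here's a minimal sketch in Python. The trace format is hypothetical (a list of (offset, size) records in issue order) and this is not our actual capture tooling; an IO is counted as sequential when it starts exactly where the previous IO ended.

```python
from collections import Counter

# Minimal sketch (not AnandTech's actual trace tooling) of how an IO-size and
# sequentiality breakdown like the tables above could be derived. The trace format
# here is hypothetical: a list of (offset_bytes, size_bytes) tuples in issue order.
BUCKETS = (4, 8, 16, 32, 64, 128)  # KB buckets used in the tables

def size_bucket(size_bytes):
    kb = size_bytes / 1024
    if kb < 4:
        return "<4KB"
    for b in BUCKETS:
        if kb <= b:
            return f"{b}KB"
    return ">128KB"

def breakdown(trace):
    bins = Counter(size_bucket(size) for _, size in trace)
    # Count an IO as sequential if it starts exactly where the previous IO ended.
    sequential = sum(1 for prev, cur in zip(trace, trace[1:])
                     if cur[0] == prev[0] + prev[1])
    total = len(trace)
    size_pct = {k: round(100 * v / total, 1) for k, v in bins.items()}
    seq_pct = round(100 * sequential / total, 1)
    return size_pct, seq_pct

# Example: two back-to-back 128KB reads followed by a random 4KB read
sizes, seq = breakdown([(0, 131072), (131072, 131072), (10_000_000, 4096)])
print(sizes, f"- {seq}% sequential")
```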

AnandTech Storage Bench - Light - QD Breakdown (% of total IOs)
QD 1: 73.4%
QD 2: 16.8%
QD 3: 2.6%
QD 4-5: 2.3%
QD 6-10: 3.1%
QD 11-20: 1.5%
QD 21-32: 0.2%
QD >32: 0.2%

Over 90% of the IOs have a queue depth of one or two, which further underscores the importance of low queue depth performance.
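Similarly, here's a minimal sketch of how a queue depth distribution can be computed from a trace, again assuming a hypothetical format where each record carries an issue and a completion timestamp; an IO's queue depth is the number of IOs outstanding at the moment it is issued.

```python
from collections import Counter

# Minimal sketch (hypothetical trace format, not AnandTech's tooling): each record
# is (issue_time, completion_time). An IO's queue depth is the number of IOs in
# flight at the moment it is issued, including itself. O(n^2), but fine as an
# illustration.
def qd_breakdown(trace):
    depths = Counter()
    for issue, _ in trace:
        outstanding = sum(1 for other_issue, other_done in trace
                          if other_issue <= issue < other_done)
        depths[outstanding] += 1
    total = len(trace)
    return {qd: round(100 * n / total, 1) for qd, n in sorted(depths.items())}

# Example: three overlapping IOs plus one issued after the others have completed
print(qd_breakdown([(0.0, 5.0), (1.0, 4.0), (2.0, 3.0), (6.0, 7.0)]))
# -> {1: 50.0, 2: 25.0, 3: 25.0}
```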

AnandTech Storage Bench - Light (Data Rate)

The same trend continues in our Light trace, where the SM951 is still the king of the hill. It's obvious that Intel didn't design the SSD 750 with such light workloads in mind; ultimately, you need a relatively IO-intensive workload to get the full benefit of PCIe and NVMe.

AnandTech Storage Bench - Light (Latency)

Comments

  • knweiss - Thursday, April 2, 2015 - link

    According to Semiaccurate the 400 GB drive has "only" 512 MB DRAM.
    (Unfortunately, ARK hasn't been updated yet so I can't verify.)
  • eddieobscurant - Thursday, April 2, 2015 - link

    You're right, it's probably 512MB for the 400GB model and 1GB for the 1.2TB model.
  • Azunia - Thursday, April 2, 2015 - link

    In PCPer's review of this drive, they actually talk about the problems of benchmarking it (https://www.youtube.com/watch?v=ubxgTBqgXV8).

    Seems like some benchmarks like Iometer cannot actually feed the drive, due to being programmed with a single thread. Have you had similar experiences during benchmarking, or is their logic faulty?
  • Kristian Vättö - Friday, April 3, 2015 - link

    I didn't notice anything that would suggest a problem with Iometer's ability to saturate the drive. In fact, Intel provided us with Iometer benchmarking guidelines for the review, although they didn't really differ from what I've been doing for a while now.
  • Azunia - Friday, April 3, 2015 - link

    Reread their article and it seems like the only problem is with Iometer's fileserver IOPS test, which peaks at around 200,000 IOPS. Since you don't use that one, that's probably why you saw no problem.
  • Gigaplex - Thursday, April 2, 2015 - link

    "so if you were to put two SSD 750s in RAID 0 the only option would be to use software RAID. That in turn will render the volume unbootable"

    It's incredibly easy to use software RAID in Linux on the boot drives. Not all software RAID implementations are as limiting as Windows'.
  • PubFiction - Friday, April 3, 2015 - link

    "For better readability, I now provide bar graphs with the first one being an average IOPS of the last 400 seconds and the second graph displaying the standard deviation during the same period"

    lol, why not just portray standard deviation as error bars, the way it's supposed to be shown? Kudos for being one of the few sites to recognize this, but what a convoluted, senseless way of showing it.
  • Chloiber - Friday, April 3, 2015 - link

    I think the practical tests in many other reviews show that the normal consumer gets absolutely no benefit (except being able to copy files faster) from such an SSD. We reached the peak a long time ago; SSDs are not the limiting factor anymore.

    Still, it's great to finally see major improvements again. It was always sad that all SSDs were limited by the interface. That was the case with SATA 2, and it's the case with SATA 3.
  • akdj - Friday, April 3, 2015 - link

    Thanks for sharing, Kristian.
    A query about throughput when using these in external Thunderbolt docks and PCIe 'decks' (several new third-party drive and GPU enclosures are experimenting with the latter, adding powerful desktop cards {GPU}, etc.)... Would there still be a 'bottleneck' (not that SLI or Crossfire, with the exception of the Mac Pro and how its two AMDs work together, would be a concern in OS X, but Windows motherboards...) if you were to utilize the TBolt headers to the PCIe lanes-->CPU? These seem like a better idea than external video cards for what I'm doing on the rMBPs. The GPUs are quick enough, especially in tandem with the Iris Pro and its ability to 'calculate' ;) -- but a 2.4GB twin-card RAID external box with a 'one cord' plug, hot or cold, would be SWEEET.
  • wyewye - Friday, April 3, 2015 - link

    Kristian: test with QD128, moron, it's NVMe.

    Anandtech becomes more and more idiotic: poor articles and crappy hosting; you have to reload pages multiple times to access anything.

    Go look at TSSDreview for a competent review.
