Random Read Performance

The random read test requests 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, which is filled before the test starts. The primary score we report is an average of the performance at queue depths 1, 2 and 4, as client usage consists mostly of low queue depth operations.
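For reference, the ramp works out to six three-minute steps. A minimal sketch of that schedule in Python (the run_step callback is a hypothetical stand-in for the Iometer workload, not part of our test suite):

    # Queue-depth ramp used by the random I/O tests described above:
    # six steps, doubling from QD1 to QD32, three minutes per step.
    STEP_SECONDS = 3 * 60
    QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]

    def run_ramp(run_step):
        """run_step(qd, seconds) should issue 4kB random I/O at the given
        queue depth for the given duration and return the measured rate."""
        return {qd: run_step(qd, STEP_SECONDS) for qd in QUEUE_DEPTHS}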

Iometer - 4KB Random Read

It is unsurprising to see that the TLC-based 960 EVO has slower random read speeds than the MLC-based 950 Pro and 960 Pro, but the 960 EVO still manages to be faster than all the non-Samsung drives.

Iometer - 4KB Random Read (Power)

The 960 EVO's power consumption is essentially the same as that of Samsung's other drives, which puts it at an efficiency disadvantage against their MLC PCIe SSDs but still leaves it more efficient than all of the lower-performing drives.

As with Samsung's other SSDs, random read speed scales with queue depth until hitting a limit at QD16.

Random Write Performance

The random write test writes 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test is limited to a 16GB portion of the drive, and the drive is empty save for the 16GB test file. The primary score we report is an average of the performance at queue depths 1, 2 and 4, as client usage consists mostly of low queue depth operations.
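The headline number in the charts is then simply the mean of the three low-queue-depth results. A minimal sketch of that calculation (the results mapping is illustrative):

    # The primary score reported above: an average of the QD1, QD2 and QD4
    # results. 'results' maps queue depth -> measured throughput or IOPS.
    def primary_score(results):
        low_qd = (1, 2, 4)
        return sum(results[qd] for qd in low_qd) / len(low_qd)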

Iometer - 4KB Random Write

The Samsung 960 EVO's random write speed is essentially tied with the 960 Pro and the OCZ RD400A, while the Intel 750 holds on to a comfortable lead.

Iometer - 4KB Random Write (Power)

The 960 EVO is not as power efficient as the 960 Pro, but it is still far more efficient than everything else.

The scaling behavior of the 960 EVO is essentially the same as the 960 Pro: full performance is reached at QD4, and there's no indication of any severe thermal throttling.

Comments

  • Foralin - Tuesday, November 15, 2016 - link

    I'd like to see this kind of analysis for the new MacBook Pro's SSD
  • philehidiot - Tuesday, November 15, 2016 - link

    I think that often Apple use a couple of different suppliers for their SSDs (certainly was the case when I bought my Air ages ago), and they're unlikely to hand out samples for testing, as if there's one thing Apple seems to hate, it's scrutiny. This means that you might have to buy quite a few MacBooks, ID the SSD, and then you'd still never know if they were using one, two, three or even four different suppliers unless you got loads of people to run the appropriate software and then went on a shopping spree. Hoping, of course, that you could return those you've unpacked, set up, tested and carefully repackaged... Whilst it'd be nice, Apple don't make it easy, and unless you're loaded it's not going to be practical.
  • repoman27 - Tuesday, November 15, 2016 - link

    Apple sourced SSDs from Samsung, SanDisk and Toshiba back when they used SATA SSDs, but went 100% Samsung when they switched to PCIe. The 2015 MBPs were all SM951, for instance. From what I've seen thus far, the 2016 MBPs use a new, in-house designed PCIe 3.0 x4 NVMe controller paired with SanDisk NAND.
  • repoman27 - Tuesday, November 15, 2016 - link

    And I take back that last bit, because I just saw a post with a photo of the internals of the MBP w/ TouchBar and it looked to have a Samsung SSD on board.
  • Threska - Tuesday, November 15, 2016 - link

    One disadvantage I see of the M.2 form-factor is inadequate cooling on some motherboards, compared to their more traditional SSD brethren.
  • willis936 - Tuesday, November 15, 2016 - link

    There's a quick fix for that: an ugly PCIe adapter with a heatsink. Or actually slapping some RAM heatsinks on the drive itself. I've been looking for a 2x M.2 to PCIe x8 adapter. The only ones I've found are expensive server adapters. Considering one of these drives nearly saturates 4 PCIe 3.0 lanes, it seems that a regular consumer who wants to do RAID 0 should run their GPU in x8 (or go all out on HEDT) and get two PCIe adapters with heatsinks.
  • TheinsanegamerN - Tuesday, November 15, 2016 - link

    The issue is that this is only possible on desktops. Laptops are more SOL in this regard.
  • willis936 - Tuesday, November 15, 2016 - link

    More performance = more power. It would be neat if they made different power profiles that could be set by the user through the OS. I don't want 5W pulled from my laptop just for my SSD to read 2 GB/s, but I also don't need it to run that quickly.
  • MajGenRelativity - Tuesday, November 15, 2016 - link

    That's a nifty idea! I would like that too :)
  • Billy Tallis - Tuesday, November 15, 2016 - link

    NVMe already has that feature. Drives can define multiple power states, both operational and non-operational idle states. The definition of those power states can include information about the relative performance impact on read and write throughput and latency, and how long it takes to enter and leave the different idle power states. For example, the 960 Pro declares a full-power operational power state with a maximum power draw of up to 6.9W, and restricted operational power states with limits of 5.5W and 5.1W. It also declares two non-operational idle power states with limits of 0.05W and 0.008W, which my measurements haven't accurately captured.

    Making full use of this capability requires better support on the software side.
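On Linux, the power state descriptors described in the comment above can be inspected, and an operational state selected, from user space with nvme-cli. A rough sketch, assuming nvme-cli is installed and the drive is reachable as root; the device path and the state index are placeholders:

    # Rough sketch: inspect an NVMe drive's power state descriptors and
    # request a lower-power operational state via the Power Management
    # feature (feature ID 0x02). /dev/nvme0 and state index 2 are examples.
    import subprocess

    DEV = "/dev/nvme0"

    # The identify-controller data includes the power state descriptors
    # (max power, entry/exit latency, relative read/write performance).
    print(subprocess.run(["nvme", "id-ctrl", DEV], capture_output=True,
                         text=True, check=True).stdout)

    # Read the currently selected power state...
    subprocess.run(["nvme", "get-feature", DEV, "-f", "0x02"], check=True)

    # ...and ask the controller to drop to operational power state 2.
    subprocess.run(["nvme", "set-feature", DEV, "-f", "0x02", "-v", "2"],
                   check=True)

The non-operational idle states are normally entered autonomously by the drive (via the Autonomous Power State Transition feature) rather than set by hand, which is where the OS-side support mentioned above comes in.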
