Mixed Random Read/Write Performance

The mixed random I/O benchmark starts with a pure read test and gradually increases the proportion of writes, finishing with pure writes. The queue depth is 3 for the entire test and each subtest lasts for 3 minutes, for a total test duration of 18 minutes. As with the pure random write test, this test is restricted to a 16GB span of the drive, and the drive is otherwise empty save for the 16GB test file.
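
This sweep can be approximated at home with fio on Linux in place of Iometer. The sketch below is a rough stand-in under stated assumptions (the test file path is hypothetical, and six 3-minute subtests stepping the read share down in 20% increments match the 18-minute total), not the review's actual test configuration:

    import subprocess

    TESTFILE = "/mnt/test/testfile_16g.bin"  # hypothetical 16GB test file

    # Sweep from pure reads to pure writes in 20% steps, 3 minutes each,
    # mirroring the six subtests described above.
    for read_pct in (100, 80, 60, 40, 20, 0):
        subprocess.run([
            "fio",
            f"--name=mixed-rand-{read_pct}",
            "--filename=" + TESTFILE,
            "--rw=randrw",                     # mixed random read/write
            f"--rwmixread={read_pct}",         # share of reads in the mix
            "--bs=4k",                         # 4KB transfers
            "--iodepth=3",                     # queue depth of 3
            "--size=16g",                      # restrict I/O to a 16GB span
            "--runtime=180", "--time_based",   # 3-minute subtest
            "--ioengine=libaio", "--direct=1"  # unbuffered I/O on Linux
        ], check=True)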

Iometer - Mixed 4KB Random Read/Write

The 960 EVO is essentially tied for second place with the OCZ RD400 and significantly behind the 960 Pro in overall performance on mixed random I/O.

Iometer - Mixed 4KB Random Read/Write (Power)

The 960 EVO's power efficiency on this test is not great, but it is a big improvement over last year's 950 Pro.

The 960 EVO's high overall score comes primarily from its strong showing in the final, pure-write phase of the test. Throughout the rest of the test, the 960 EVO is not as fast as the 950 Pro.

Mixed Sequential Read/Write Performance

The mixed sequential access test covers the entire span of the drive and uses a queue depth of one. It starts with a pure read test and gradually increases the proportion of writes, finishing with pure writes. Each subtest lasts for 3 minutes, for a total test duration of 18 minutes. The drive is filled before the test starts.
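
The same loop, with the access pattern, block size, queue depth, and target changed, approximates this sequential sweep (again a hypothetical fio stand-in for the Iometer setup; the device path is an assumption, and writing to a raw device destroys its contents):

    # Continues the sketch above; targets the raw device to span the whole drive.
    for read_pct in (100, 80, 60, 40, 20, 0):
        subprocess.run([
            "fio",
            f"--name=mixed-seq-{read_pct}",
            "--filename=/dev/nvme0n1",         # assumed raw device path
            "--rw=rw",                         # sequential mixed read/write
            f"--rwmixread={read_pct}",
            "--bs=128k",                       # 128KB transfers
            "--iodepth=1",                     # queue depth of one
            "--runtime=180", "--time_based",   # 3-minute subtest
            "--ioengine=libaio", "--direct=1"
        ], check=True)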

Iometer - Mixed 128KB Sequential Read/Write

The 960 EVO's mixed sequential I/O performance is the second-fastest among M.2 SSDs and third-fastest overall. Performance is modestly improved over the 950 Pro.

Iometer - Mixed 128KB Sequential Read/Write (Power)

The 960 EVO's power efficiency is better than most PCIe SSDs, but still well behind the 960 Pro.

The 960 EVO's performance in the pure-read first phase of the test is great, but its performance with an 80/20 mix is much worse than that of the 950 Pro or OCZ RD400. Its worst-case performance is also not as good as the RD400 or 960 Pro.

87 Comments

  • Foralin - Tuesday, November 15, 2016 - link

    I'd like to see this kind of analysis for the new MacBook Pro's SSD
  • philehidiot - Tuesday, November 15, 2016 - link

    I think that often Apple use a couple of different suppliers for their SSDs (certainly was the case when I bought my Air ages ago), and they're unlikely to hand out samples for testing; if there's one thing Apple seems to hate, it's scrutiny. This means that you might have to buy quite a few MacBooks, ID the SSD, and then you'd still never know if they were using one, two, three or even four different suppliers unless you got loads of people to run the appropriate software and then went on a shopping spree, hoping of course that you could return those you've unpacked, set up, tested and carefully repackaged... Whilst it'd be nice, Apple don't make it easy, and unless you're loaded it's not going to be practical.
  • repoman27 - Tuesday, November 15, 2016 - link

    Apple sourced SSDs from Samsung, SanDisk and Toshiba back when they used SATA SSDs, but went 100% Samsung when they switched to PCIe. The 2015 MBPs were all SM951, for instance. From what I've seen thus far, the 2016 MBPs use a new, in-house designed PCIe 3.0 x4 NVMe controller paired with SanDisk NAND.
  • repoman27 - Tuesday, November 15, 2016 - link

    And I take back that last bit, because I just saw a post with a photo of the internals of the MBP w/ Touch Bar and it looked to have a Samsung SSD on board.
  • Threska - Tuesday, November 15, 2016 - link

    One disadvantage I see of the M.2 form-factor is inadequate cooling on some motherboards, compared to their more traditional SSD brethren.
  • willis936 - Tuesday, November 15, 2016 - link

    There's a quick fix for that: an ugly PCIe adapter with a heatsink, or actually slapping some RAM heatsinks on the drive itself. I've been looking for a 2x M.2 to PCIe x8 adapter. The only ones I've found are expensive server adapters. Considering one of these drives nearly saturates four PCIe 3.0 lanes, it seems that a regular consumer who wants to do RAID 0 should run their GPU in x8 (or go all out on HEDT) and get two PCIe adapters with heatsinks.
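
    (For context on "nearly saturates": PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so a x4 link tops out around 8 GT/s × 4 lanes × 128/130 ÷ 8 bits ≈ 3.9 GB/s before protocol overhead, while these drives are rated for sequential reads in the 3.2-3.5 GB/s range.)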
  • TheinsanegamerN - Tuesday, November 15, 2016 - link

    The issue is that this is only possible on desktops. Laptops are more SOL in this regard.
  • willis936 - Tuesday, November 15, 2016 - link

    More performance = more power. It would be neat if they made different power profiles that could be set by the user through the OS. I don't want 5W pulled from my laptop just for my SSD to read 2 GB/s but I also don't need it to run that quickly.
  • MajGenRelativity - Tuesday, November 15, 2016 - link

    That's a nifty idea! I would like that too :)
  • Billy Tallis - Tuesday, November 15, 2016 - link

    NVMe already has that feature. Drives can define multiple power states, both operational and non-operational idle. The definition of those power states can include information about the relative performance impact on read and write throughput and latency, and how long it takes to enter and leave the different idle power states. For example, the 960 Pro declares a full-power operational power state with maximum power draw of up to 6.9W, and restricted operational power states with limits of 5.5W and 5.1W. It also declares two non-operational idle power states with limits of 0.05W and 0.008W, which my measurements haven't accurately captured.

    Making full use of this capability requires better support on the software side.
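
    A minimal sketch of inspecting and selecting these power states from Linux, using Python to drive the nvme-cli tool (the device path and the chosen state number are assumptions; they vary by drive and system, and the commands need root):

        import subprocess

        def run(cmd):
            print("$ " + " ".join(cmd))
            subprocess.run(cmd, check=True)

        # List the power states the drive declares (the "ps 0" ... entries
        # at the end of the identify-controller output).
        run(["nvme", "id-ctrl", "/dev/nvme0", "-H"])

        # Read the current Power Management feature (feature ID 0x02).
        run(["nvme", "get-feature", "/dev/nvme0", "-f", "0x02", "-H"])

        # Request a restricted operational state, e.g. power state 2
        # (the 5.1W-limited state in the 960 Pro example above).
        run(["nvme", "set-feature", "/dev/nvme0", "-f", "0x02", "-v", "2"])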
