Mixed Random Access

Instead of testing a range of queue depths, our mixed workload tests vary the proportion of reads and writes while using a constant queue depth. The test begins with pure reads, then incrementally shifts toward pure writes, with three minutes for each subtest. As more writes come into the mix, the odds increase that a read request will be held up by a flash chip that is busy with a longer-duration write. Likewise, having lots of reads in the mix can limit the drive's ability to combine writes into larger batches. Thus, the worst performance on these tests usually occurs somewhere around the middle. To approximate client workloads, the mixed random access test uses a queue depth of three and, like the random write test, it is restricted to a 16GB portion of the drive.
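The sweep described above can be sketched as a simple schedule generator. The number of subtests is an assumption for illustration; the article only specifies the fixed queue depth, the three-minute subtest length, and the 16GB test range:

```python
# Hypothetical sketch of the mixed random test schedule described above.
# The number of mix steps is an assumption; the article only states that
# the sweep runs from pure reads to pure writes in three-minute subtests.

QUEUE_DEPTH = 3        # held constant for every subtest
SUBTEST_SECONDS = 180  # three minutes per mix ratio
TEST_RANGE_GB = 16     # restricted span, as in the random write test

def mix_schedule(num_subtests=11):
    """Evenly spaced read percentages from 100 (pure reads) to 0 (pure writes)."""
    step = 100 / (num_subtests - 1)
    return [round(100 - i * step) for i in range(num_subtests)]

schedule = mix_schedule()
# schedule[0] is the pure-read subtest, schedule[-1] the pure-write subtest
```

With the default of eleven subtests, the read percentage drops in 10% increments, which is why the mid-schedule subtests are where reads and writes interfere with each other the most.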

Iometer - Mixed 4KB Random Read/Write
Iometer - Mixed 4KB Random Read/Write (Power)

The mixed workloads were the only tests where the two capacities showed significant performance differences even without the heatsink, indicating that thermal throttling was much less of an issue for the 950 Pro here. The heatsink still helps, but only slightly. Given how random reads were essentially unaffected by the heatsink, it's a bit of a surprise that the writes improved by enough to bring the average up by 12.5% for the 512GB drive.

[Chart: Random Mixed, 256GB and 512GB, with and without heatsink]

Almost all of the performance improvement with the heatsink comes at the very end of the test as it shifts to pure writes. Performance earlier in the test is virtually unaffected by the heatsink, but power efficiency does see a slight improvement from the lower operating temperatures.

Mixed Sequential Access

As compared with the mixed random test described above, the mixed sequential test differs by using a queue depth of one and by requesting larger chunks of data. This test operates across the whole drive, which is pre-filled with data.
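The two mixed-test configurations can be contrasted directly using only the parameters the descriptions above give; everything else about the Iometer setup is outside the scope of this sketch:

```python
# The two mixed-workload configurations as described in the article.
# Only these parameters are stated in the text.

mixed_random = {
    "block_size_bytes": 4 * 1024,    # 4KB requests
    "queue_depth": 3,
    "span": "16GB slice of the drive",
}

mixed_sequential = {
    "block_size_bytes": 128 * 1024,  # 128KB requests
    "queue_depth": 1,
    "span": "entire drive, pre-filled with data",
}

# Each sequential request moves 32x as much data as a random one,
# which is why throughput rather than latency dominates this test.
ratio = mixed_sequential["block_size_bytes"] // mixed_random["block_size_bytes"]
```

The larger transfers and the full-drive span mean the sequential mixed test stresses sustained throughput and the flash translation layer's handling of a filled drive, rather than small-block latency.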

Iometer - Mixed 128KB Sequential Read/Write
Iometer - Mixed 128KB Sequential Read/Write (Power)

Both of the previous sequential performance tests showed huge improvements even at low queue depths, so it's no surprise to see a significant improvement in a mix of the two.

[Chart: Sequential Mixed, 256GB and 512GB, with and without heatsink]

A closer look reveals that the overall performance improvements are once again attributable to the non-mixed segments of the test. Unlike the mixed random test, read speeds are part of the improvement here. But on the sub-tests with a balanced mix of reads and writes, the 950 Pro wasn't throttling even without the heatsink.

69 Comments


  • Haravikk - Monday, December 21, 2015 - link

    I think I could only see this being useful if you were building a system loaded with SSDs in the PCIe slots; in a system with a GPU I'd expect the extra heat from that will easily result in worse performance than keeping the M.2 drive on the motherboard.

    In fact, for a single M.2 SSD system my preference is a motherboard with an M.2 slot on the back; this keeps it away from the worst heat generating components, and even though few cases provide proper airflow on the back of the motherboard, as long as your cooling is adequate it should never get too hot for the drive.

    Even if you are building a system with a ton of SSDs, the main benefit is having the PCIe adapter, IMO; it doesn't seem like the heatsink makes enough of a difference that you're ever really going to notice it.
  • vFunct - Monday, December 21, 2015 - link

    This is going to be mostly useful in servers, where sustained (non-burst) read/write is typical.
  • Ethos Evoss - Saturday, December 26, 2015 - link

    New NVMe M.2 SSDs totally need a heatsink now, because PCIe 3.0 x4 has very, very high bandwidth and the drives can hit 100 degrees Celsius!
  • Ethos Evoss - Saturday, December 26, 2015 - link

    https://www.youtube.com/watch?v=d3GlInzvHr8
  • frowertr - Monday, December 21, 2015 - link

    Really think M.2 is the future. No cables and small size sounds like a winner to me.
  • ImSpartacus - Monday, December 21, 2015 - link

    It's probably the future, but it'll take a while to get there.

    If you need a cheap SSD for a boring boot drive, then 2.5" is the way to go if you have anything resembling a budget.
  • frowertr - Monday, December 21, 2015 - link

    Yeah, I agree. But they will figure out how to pack more capacity at lower cost into the form factor soon enough. I just built a new Skylake system for my living room HTPC/Xbox One look-alike, and I used the Samsung EVO M.2 drive. What a refreshing piece of hardware. Just clipping it onto the motherboard like RAM and not dealing with any cables whatsoever made me feel like I was living in the future. Can't believe how far storage has come since I started building computers in the mid-90s.
  • Lonyo - Monday, December 21, 2015 - link

    The only reason consumer SSDs are 2.5" is that that's the drive bay size computers already have. If there were a 1.8" drive slot and 1.8" drives, then SSDs would be smaller. They are the size they are because 2.5" was around for mechanical drives before SSDs, so it allows drop-in replacement.

    The problem with M.2 is that you end up with a space limitation: you need to free up room on the motherboard for the drive, which means you either skip something else or use a larger motherboard, and then you aren't really saving any space anyway.
  • DanNeely - Monday, December 21, 2015 - link

    Using the 1.8" HDD form factor probably would have hampered higher-end drives in prior years. It has only 60% of the area of a 2.5" model, and until fairly recently most high-performance/high-capacity SSDs used a full-size 2.5" PCB. The only ones using cut-down boards that would fit into a 1.8" housing without shrinking were lower-end budget models. While it doesn't matter much now (Samsung's 2TB models use smaller PCBs that look like they'd almost fit in the smaller form factor unchanged), cropping the largest size off the market a few years ago would probably have hurt adoption.
  • MrSpadge - Monday, December 21, 2015 - link

    I fail to see a good reason why SSDs have to become more expensive if you remove their case. Anything on that M.2 card is also in a 2.5" drive, yet it's no problem to fit the components onto that small PCB (as long as you're not trying to make very large drives).
