Final Words

Measured against Samsung's lofty goals for the drive, the 2TB 960 Pro does not quite live up to every performance specification. But against any other standard it is a very fast drive, improving on its predecessor across the board. It sets new performance records on almost every test while staying within roughly the same power and thermal limits, and so it also sets many new records for efficiency, where previous PCIe SSDs have tended to sacrifice efficiency to reach higher performance.

The 960 Pro's performance even suggests that it may be a suitable enterprise SSD. It lacks the power loss protection capacitors still found on most enterprise SSDs (the reason the longer M.2 22110 size is typically used for enterprise M.2 drives), but its performance on our random write consistency test is clearly enterprise-class, and in the high-airflow environment of a server it should deliver much better sustained performance where it throttled due to high temperatures in our desktop testbed. Samsung probably won't have to change much beyond the write endurance rating to make a good enterprise SSD based on this Polaris controller.

SATA SSDs are doing well to improve performance by a few percent, and power efficiency is for the most part also not improving much. PCIe 3.0 is not fully exploited by any current product, so generational improvements of NVMe SSDs can be much larger. In the SATA market gains this big would be revolutionary whether considered in terms of relative percentage improvement or absolute MB/s and IOPS gained.

On the other hand, this was a comparison of a 2TB drive against PCIe SSDs that were all much smaller; it has four times the capacity and twice the NAND die count of the largest and fastest 950 Pro. Higher capacity almost always enables higher performance, and it appears in many tests that the 512GB 960 Pro may not have much if any advantage over either its predecessor or the current fastest drive of similar capacity.

This review should not be taken as the final word on the Samsung 960 Pro. We still intend to test the smaller and more affordable capacities, and to conduct a more thorough investigation of its thermal throttling behavior. We also need to test against the Windows 10 NVMe driver and will test with any driver Samsung releases. Additionally, we look forward to testing the Samsung 960 EVO, which uses the same Polaris controller but TLC V-NAND with an SLC cache. The 960 EVO has a shorter warranty period and lower endurance rating, but still promises higher performance than the 950 Pro and at a much lower price.

The $1299 MSRP on the 2TB 960 Pro is almost as shocking as the $1499 MSRP for the 4TB 850 EVO was. This drive is not for everyone, though it might be coveted by everyone. But for those who have the money, the price per gigabyte is not outlandish. Aside from Intel's TLC-based SSD 600p, PCIe SSDs currently start around $0.50/GB, and at $0.63/GB the 960 Pro is more expensive than the Plextor M8Pe but cheaper than the Intel SSD 750 or the OCZ RD400A. Samsung is by no means price gouging and they could justify charging even more based on the performance and efficiency advantages the 960 Pro has over the competitors. The 960 Pro and 960 EVO are not yet listed on Amazon and are only listed as "Coming Soon" with no price on Newegg, but they can be pre-ordered direct from Samsung with an estimated ship time of 2-4 weeks.
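The per-gigabyte figures above follow directly from MSRP and capacity. As a quick sketch (prices as quoted in this review, capacities treated as binary gigabytes, the convention that reproduces the cited $0.63/GB):

```python
# Price-per-GB check using the MSRPs quoted in this review.
# Capacities are in binary gigabytes (2 TB = 2048 GB); this convention
# matches the $0.63/GB figure cited for the 960 Pro.
drives = {
    "Samsung 960 Pro 2TB": (1299.00, 2048),
    "Samsung 850 EVO 4TB": (1499.00, 4096),
}
for name, (msrp, gigabytes) in drives.items():
    print(f"{name}: ${msrp / gigabytes:.2f}/GB")
```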

The 960 Pro appears to not offer much cost savings over the 950 Pro despite the switch from 32-layer V-NAND to 48-layer V-NAND. The 48-layer V-NAND has had trouble living up to expectations and was much later to market than Samsung had planned for: the 950 Pro was initially supposed to switch over in the first half of this year and gain a 1TB capacity. This doesn't pose a serious concern for the 960 Pro, but it is clear that Samsung was too optimistic about the ease of scaling up 3D NAND and their projections for the 64-layer generation should be regarded with increased skepticism.

Comments

  • JoeyJoJo123 - Tuesday, October 18, 2016 - link

    Not too surprised that Samsung, once again, achieves another performance crown for another halo SSD product.
  • Eden-K121D - Tuesday, October 18, 2016 - link

    Bring on the competition
  • ibudic1 - Tuesday, October 18, 2016 - link

    Intel 750 is better. The only difference you can really tell is 4K random write at QD1-4. Also, it's really bad when you don't have consistency when you need it. There's nothing worse than a hanging application; it's about consistency, not outright speed. Which reminds me...

    When evaluating graphics cards a MINIMUM frame rate is WAY more important than average or maximum.

    Just like in racing the slowest speed in the corner is what separates great cars from average.

    Hopefully Anandtech can recognize this in future reviews
  • Flying Aardvark - Wednesday, October 19, 2016 - link

    Exactly. Intel 750 is still the king for someone who seriously needs storage performance. 4K randoms and zero throttling.
    I'd stick with the EVO or 600p, the 3D TLC stuff, unless I really needed the performance; then I'd go all the way up to the real professional stuff with the 750. I need a 1TB M.2 NVMe SSD myself and am eager to see street prices on the 960 EVO 1TB and Intel 600p 1TB.
  • iwod - Wednesday, October 19, 2016 - link

    Exactly, when the majority (90%+) of consumer usage is going to be based on QD1. Giving me QD32 numbers is like a megapixel or MHz race. I used to think we had reached the limit of random read/write performance; it turns out we haven't actually improved QD1 random read/write much, hence it is likely still the bottleneck.

    And yes we need consistency in QD1 Random Speed test as well.
  • dsumanik - Wednesday, October 19, 2016 - link

    Nice to see there are still some folks out there who aren't duped by marketing; random write and full-capacity consistency are the only two things I look at. When moving large video files around, sequential speeds can help, but the difference between 500 and 1000 MB/s isn't much: you start the copy and then go do something else. In many cases random write is the bottleneck for the times you are waiting on the computer to "do something", and it dictates whether the computer feels "snappy". Likewise, the performance loss when a drive is getting full makes you notice things slowing down.

    Samsung if you are reading this, go balls out random write performance on the next generation, tyvm.
  • Samus - Wednesday, October 19, 2016 - link

    You can't put an Intel 750 in a laptop though, and it also caps at 1.2TB. But your point is correct, it is a performance monster.
  • edward1987 - Friday, October 28, 2016 - link

    Intel SSD 750 SSDPEDMW400G4X1 (PCIe 3.0 x4, HHHL)
    vs. Samsung SSD 960 PRO MZ-V6P512BW (M.2 2280, NVMe):
    IOPS: 230-430K vs. 330K
    Read speed (max): 2200 MB/s vs. 3500 MB/s

    Much better in comparison http://www.span.com/compare/SSDPEDMW400G4X1-vs-MZ-...
  • shodanshok - Tuesday, October 18, 2016 - link

    Let me issue a BIG WARNING against disabling write-buffer flushing. Any drive without special provisions for power loss (e.g. a supercapacitor) can lose a great deal of data in the event of an unexpected power loss. In the worst case, the entire filesystem can be lost.

    What do the two Windows settings do? In short:
    1) "Enable write cache on the device" enables the controller's private DRAM writeback cache and is *required* for good performance on SSDs. The reason is exactly the one cited in the article: flash memory needs batched writes to perform well. For example, with the DRAM cache disabled I recorded a write speed of 5 MB/s on an otherwise fast Crucial M550 256 GB; with the cache enabled, the very same disk nearly saturated the SATA link (> 400 MB/s).
    However, a writeback cache implies some risk of data loss. For that reason the IDE/SATA standard has special commands to force a full cache flush when the OS needs to be sure about data persistence. This brings us to the second option...

    2) "Turn off write-cache buffer flushing on the device": this option should absolutely NOT be enabled on consumer, non-power-protected disks. With it enabled, Windows will *not* force a full cache flush even for critical tasks (e.g. updates of NTFS metadata). That can have catastrophic consequences if power is lost at the wrong moment: not just simple, limited data loss, but corruption of the entire filesystem. The key reason is that cache-flush commands are used not only to persist critical data but also to properly order its writeout. In other words, with cache flushing disabled, key filesystem metadata can be written out of order, and if power is lost during an incomplete, badly-reordered metadata write, all sorts of problems can happen.
    This option exists for one, and only one, case: your system has power-loss-protected drives or arrays, you trust the battery/capacitor, AND your RAID card/drive behaves poorly when flushing is enabled. In practice, virtually all modern RAID controllers automatically ignore cache flushes while the battery/capacitor is healthy, negating any need to disable flushing on the software side.
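The flush discipline described above can be sketched in a few lines. This is a minimal illustration (the helper name `durable_write` is hypothetical), showing where an application's durability request enters the chain that normally ends in the device cache-flush command — the very command the Windows setting suppresses:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    # Hypothetical helper: write to a temp file, force it to stable media,
    # then atomically rename into place. The rename only "counts" for crash
    # safety if the preceding flush actually reached the drive's media.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()             # drain Python/libc buffers into the OS page cache
        os.fsync(f.fileno())  # ask the OS to flush down to the device
    os.replace(tmp, path)     # atomic rename; correct ordering relies on the fsync
```

With write-cache buffer flushing turned off, the OS-level flush request never reaches the drive's DRAM cache, so the write ordering this pattern depends on silently evaporates.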

    In short, if such a device (the 960 Pro) really needs cache flushing disabled to shine, that is a serious product/firmware flaw which needs to be corrected as soon as possible.
  • Br3ach - Tuesday, October 18, 2016 - link

    Is power loss a problem for M.2 drives though? E.g. my PSU's (Corsair AX1200i) capacitors keep the motherboard alive for probably a minute following power loss - plenty of time for the drive to flush any caches, no?
