Final Words

Samsung set lofty goals for this drive, and the 2TB 960 Pro does not quite live up to every one of its performance specifications. Against any other standard, though, it is a very fast drive. It improves on its predecessor across the board, setting new performance records on almost every test while staying within roughly the same power and thermal limits. As a result it also sets many new records for efficiency, where previous PCIe SSDs have tended to sacrifice efficiency to reach higher performance.

The 960 Pro's performance even suggests that it could serve as an enterprise SSD. It lacks the power loss protection capacitors still found on most enterprise SSDs (which are the reason the longer M.2 22110 size is typically used for enterprise M.2 drives), but its performance on our random write consistency test is clearly enterprise-class, and in the high-airflow environment of a server it should deliver much better sustained performance than in our desktop testbed, where it throttled due to high temperatures. Samsung probably would not have to change much beyond the write endurance rating to make a good enterprise SSD based on this Polaris controller.

SATA SSDs are doing well to improve performance by a few percent per generation, and their power efficiency is for the most part not improving much either. No current product fully exploits PCIe 3.0, so generational improvements for NVMe SSDs can be much larger. In the SATA market, gains this big would be revolutionary whether measured as a relative percentage improvement or as absolute MB/s and IOPS gained.
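
To put that interface headroom in perspective, here is a rough back-of-the-envelope calculation (my own figures and assumptions, not numbers from this review) of the theoretical line rates of SATA 6Gbps and a PCIe 3.0 x4 link of the kind these NVMe drives use, accounting only for line encoding overhead:

    # Theoretical interface bandwidth, ignoring protocol overhead beyond line encoding.
    # Standard interface figures; not measurements from this review.
    sata_mb_s = 6e9 * (8 / 10) / 8 / 1e6          # SATA III line rate, 8b/10b encoding
    pcie_mb_s = 8e9 * 4 * (128 / 130) / 8 / 1e6   # PCIe 3.0 x4, 128b/130b encoding

    print(f"SATA 6Gbps:  ~{sata_mb_s:.0f} MB/s")   # ~600 MB/s
    print(f"PCIe 3.0 x4: ~{pcie_mb_s:.0f} MB/s")   # ~3940 MB/s

Real SATA SSDs already sit close to the ~600 MB/s ceiling, while no current NVMe drive fully saturates the ~3.9 GB/s link, which is why generational gains can still be large.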

On the other hand, this was a comparison of a 2TB drive against PCIe SSDs that were all much smaller; it has four times the capacity and twice the NAND die count of the largest and fastest 950 Pro. Higher capacity almost always enables higher performance, and many of these tests suggest the 512GB 960 Pro may not have much, if any, advantage over either its predecessor or the current fastest drives of similar capacity.

This review should not be taken as the final word on the Samsung 960 Pro. We still intend to test the smaller and more affordable capacities, and to conduct a more thorough investigation of its thermal throttling behavior. We also need to test against the Windows 10 NVMe driver and will test with any driver Samsung releases. Additionally, we look forward to testing the Samsung 960 EVO, which uses the same Polaris controller but TLC V-NAND with an SLC cache. The 960 EVO has a shorter warranty period and lower endurance rating, but still promises higher performance than the 950 Pro and at a much lower price.

The $1299 MSRP of the 2TB 960 Pro is almost as shocking as the $1499 MSRP of the 4TB 850 EVO was. This drive is not for everyone, though it might be coveted by everyone. But for those who have the money, the price per gigabyte is not outlandish. Aside from Intel's TLC-based SSD 600p, PCIe SSDs currently start around $0.50/GB, and at $0.63/GB the 960 Pro is more expensive than the Plextor M8Pe but cheaper than the Intel SSD 750 or the OCZ RD400A. Samsung is by no means price gouging, and they could justify charging even more based on the performance and efficiency advantages the 960 Pro holds over its competitors. The 960 Pro and 960 EVO are not yet listed on Amazon and are only listed as "Coming Soon" with no price on Newegg, but they can be pre-ordered directly from Samsung with an estimated ship time of 2-4 weeks.
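
As a quick sanity check of that price-per-gigabyte figure, the arithmetic below is my own; the 2,048 GB and 4,096 GB capacities are assumptions on my part that reproduce the cited $0.63/GB:

    # My own arithmetic; capacities are assumed, not stated in the review.
    print(f"960 Pro 2TB: ${1299.00 / 2048:.2f}/GB")   # ~$0.63/GB, as cited above
    print(f"850 EVO 4TB: ${1499.00 / 4096:.2f}/GB")   # ~$0.37/GB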

The 960 Pro does not appear to offer much cost savings over the 950 Pro despite the switch from 32-layer V-NAND to 48-layer V-NAND. The 48-layer V-NAND has had trouble living up to expectations and was much later to market than Samsung had planned: the 950 Pro was originally supposed to switch over in the first half of this year and gain a 1TB capacity. This doesn't pose a serious concern for the 960 Pro, but it is clear that Samsung was too optimistic about the ease of scaling up 3D NAND, and their projections for the 64-layer generation should be regarded with increased skepticism.

Comments

  • emn13 - Wednesday, October 19, 2016 - link

    Especially since NAND hasn't magically gotten much faster with the SATA->NVMe transition. If SATA is fast enough to saturate the underlying NAND+controller combo when drives must actually write to flash, then NVMe simply looks unnecessarily expensive (if you look at writes only). And since the fast NVMe drives all have RAM caches, it's hard to detect whether data is properly being written.

    Perhaps Windows is doing something odd here, but it's definitely fishy.
  • jhoff80 - Tuesday, October 18, 2016 - link

    This is probably a stupid question because I've been changing that setting for years on SSDs without even thinking about it, and you clearly know more about this than I do, but does using a drive in a laptop (i.e. battery-powered) or with a UPS for the system negate this risk anyway? That was always my impression, but it could very much be wrong.
  • shodanshok - Tuesday, October 18, 2016 - link

    Having a battery, laptops are inherently safer than desktops against power loss. However, a bad (or missing) battery and/or a failing cable/connector can expose the disk to the very same unprotected power-loss scenario.
  • Dr. Krunk - Sunday, October 23, 2016 - link

    What happens if you accidentally press the battery release button and it pops out just enough to lose connection?
  • woggs - Tuesday, October 18, 2016 - link

    I would love to see AnandTech do a deep dive into this very topic. It's important. I've heard that Windows and other apps do excessive cache flushing when it is enabled, and that's also a problem. I've also heard that Intel SSDs completely ignore the cache flush command and simply implement full power loss protection. Batching writes into ever larger pieces is a fact of SSD life, and it needs to be done right.
  • voicequal - Tuesday, October 18, 2016 - link

    Agreed. Last year I traced slow disk i/o on a new Surface Pro 4 with 256GB Toshiba XG3 NVMe to the write-cache buffer flushing, so I checked the box to turn it off. Then in July, another driver bug caused the Surface Pro 4 to frequently lock up and require a forced power off. Within a few weeks I had a corrupted Windows profile and system file issues that took several DISM runs to clean up. Don't know for sure if my problem resulted from the disabled buffer flushing, but I'm now hesitant to reenable the setting.

    It would be good to understand what this setting does with respect to NVMe driver operation, and interesting to measure the impact / data loss when power loss does occur.
  • Kristian Vättö - Tuesday, October 18, 2016 - link

    I think you are really exaggerating the problem. DRAM caches were used in storage well before SSDs became mainstream. Yes, HDDs have DRAM caches too, and they're used for the same purpose: to cache writes. I would argue that HDDs are even more vulnerable because data sits in the cache for a longer time due to the much higher latency of platter-based storage.

    Because of that, all consumer-friendly file systems have resilience against small data losses. In the end, only a few MB of user data is cached anyway, so it's not like we're talking about a major data loss. It's small enough not to impact the user experience, and the file system can recover itself in case there was metadata in the lost cache.

    If this were a severe issue, there would have been a fix years ago. For client-grade products there is simply no need, because 100% data protection and uptime are not required.
  • shodanshok - Tuesday, October 18, 2016 - link

    The problem is not the cache itself, but rather ignoring cache flush requests. I know DRAM caches have been used for decades, and when disks lied about flushing them (in the good old IDE days), catastrophic filesystem failures were much more common (see the XFS or ZFS FAQs / mailing lists for some references, or even the SATA command specifications).

    I'm not exaggerating anything: it is a real problem, greatly debated in the Linux community in the past. From https://lwn.net/Articles/283161/
    "So the potential for corruption is always there; in fact, Chris Mason has a torture-test program which can make it happen fairly reliably. There can be no doubt that running without barriers is less safe than using them"

    This quote is ext3-specific, but other journaled filesystems behave in very similar manners. And hey - the very same Windows checkbox warns you about the risks of disabling flushes.

    You should really ask Microsoft what this checkbox does in its NVMe driver. Anyway, suggesting that people disable cache flushes is bad advice (unless you don't use your PC for anything important).
  • Samus - Wednesday, October 19, 2016 - link

    I don't think people understand how cache flushing works at the hardware level.

    If the operating system has buffer flushing disabled, it will never tell the drive to dump the cache, for example, when an operation is complete. In this event, a drive will hold onto whatever data is in cache until the cache fills up, then the drive firmware will trigger the controller to write the cache to disk.

    Since OSes randomly write data to disk all the time, bits here and there go into cache to prevent disk thrashing/NAND wear, all determined in hardware. This has nothing to do with pooled or paged data at the OS level or RAM data buffers.

    Long story short, it's moronic to disable write buffer flushing, where the OS will command the drive after IO operations (like a file copy or write) complete, ensuring the cache is clear as the system enters idle. This happens hundreds if not thousands of times per minute, and it's important to fundamentally protect the data in cache. With buffer flushing disabled the cache will ALWAYS have something in it until you shutdown - which is the only time (other than suspend) a buffer flush command will be sent.
  • Billy Tallis - Wednesday, October 19, 2016 - link

    "With buffer flushing disabled the cache will ALWAYS have something in it until you shutdown - which is the only time (other than suspend) a buffer flush command will be sent."

    I expect at least some drives flush their internal caches before entering any power saving mode. I've occasionally seen the power meter spike before a drive actually drops down to its idle power level, and I probably would have seen a lot more such spikes if the meter were sampling more than once per second.
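
To make the mechanism debated in the comments above a bit more concrete, the sketch below (my own illustration, not code from the article or the commenters) shows how an application explicitly asks the operating system to flush buffered writes to stable storage. With write-cache buffer flushing left enabled for the device, a request like this is normally propagated to the drive as a cache-flush command; as the commenters note, with flushing disabled the drive may never be told to empty its DRAM cache.

    import os

    # Minimal illustration of an application-level flush (assumed example, not
    # from the article). f.flush() drains Python's userspace buffer to the OS;
    # os.fsync() then asks the OS to push its buffers - and, normally, the
    # drive's write cache - down to stable storage. If write-cache buffer
    # flushing has been turned off for the device, the drive may never receive
    # that final flush and the data can still be sitting in its DRAM cache.
    with open("important.dat", "wb") as f:
        f.write(b"data that should survive a sudden power loss")
        f.flush()
        os.fsync(f.fileno())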
