Final Words

Samsung set lofty goals for this drive, and the 2TB 960 Pro does not quite live up to every performance specification. Against any other standard, however, it is a very fast drive. It improves on its predecessor across the board, setting new performance records on almost every test while staying within roughly the same power and thermal limits. As a result it also sets many new efficiency records, where previous PCIe SSDs have tended to sacrifice efficiency to reach higher performance.

The 960 Pro's performance even suggests that it could serve as an enterprise SSD. It lacks the power loss protection capacitors still found on most enterprise SSDs (the reason the longer M.2 22110 size is typically used for enterprise M.2 drives), but its performance on our random write consistency test is clearly enterprise-class. In the high-airflow environment of a server it should also deliver much better sustained performance than in our desktop testbed, where it throttled due to high temperatures. Samsung probably won't have to change much beyond the write endurance rating to make a good enterprise SSD based on this Polaris controller.

SATA SSDs are doing well to improve performance by a few percent per generation, and their power efficiency is for the most part not improving much either. No current product fully exploits PCIe 3.0, so generational improvements for NVMe SSDs can be much larger. In the SATA market, gains this big would be revolutionary, whether measured as relative percentage improvements or as absolute MB/s and IOPS gained.

On the other hand, this was a comparison of a 2TB drive against PCIe SSDs that were all much smaller: it has four times the capacity and twice the NAND die count of the largest and fastest 950 Pro. Higher capacity almost always enables higher performance, and in many tests it appears that the 512GB 960 Pro may not have much, if any, advantage over either its predecessor or the current fastest drive of similar capacity.

This review should not be taken as the final word on the Samsung 960 Pro. We still intend to test the smaller and more affordable capacities, and to conduct a more thorough investigation of its thermal throttling behavior. We also need to test with the Windows 10 NVMe driver, and will test with any driver Samsung releases. Additionally, we look forward to testing the Samsung 960 EVO, which uses the same Polaris controller but pairs it with TLC V-NAND and an SLC cache. The 960 EVO has a shorter warranty period and a lower endurance rating, but still promises higher performance than the 950 Pro at a much lower price.

The $1299 MSRP on the 2TB 960 Pro is almost as shocking as the $1499 MSRP for the 4TB 850 EVO was. This drive is not for everyone, though it might be coveted by everyone. But for those who have the money, the price per gigabyte is not outlandish. Aside from Intel's TLC-based SSD 600p, PCIe SSDs currently start around $0.50/GB, and at $0.63/GB the 960 Pro is more expensive than the Plextor M8Pe but cheaper than the Intel SSD 750 or the OCZ RD400A. Samsung is by no means price gouging, and could justify charging even more based on the performance and efficiency advantages the 960 Pro holds over its competitors. The 960 Pro and 960 EVO are not yet listed on Amazon and are shown only as "Coming Soon" with no price on Newegg, but they can be pre-ordered directly from Samsung with an estimated ship time of 2-4 weeks.
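
(For reference, the per-gigabyte figure appears to be computed against the drive's 2048GB usable capacity: $1299 ÷ 2048GB ≈ $0.63/GB; dividing by a flat 2000GB instead gives roughly $0.65/GB.)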

The 960 Pro does not appear to offer much in the way of cost savings over the 950 Pro despite the switch from 32-layer to 48-layer V-NAND. The 48-layer V-NAND has had trouble living up to expectations and arrived much later than Samsung had planned: the 950 Pro was originally supposed to switch over in the first half of this year and gain a 1TB capacity. This doesn't pose a serious problem for the 960 Pro, but it is clear that Samsung was too optimistic about the ease of scaling up 3D NAND, and their projections for the 64-layer generation should be regarded with increased skepticism.

Comments

  • Gigaplex - Tuesday, October 18, 2016

    "Because of that, all consumer friendly file systems have resilience against small data losses."

    And for that resilience to work, cache flush requests need to be honored so that the journalling behaves correctly. Disabling cache flushing will reintroduce the serious corruption issues.
  • emn13 - Wednesday, October 19, 2016

    "100% data protection is not needed": at some level that's obviously true. But it's nice to have *some* guarantees so you know which risks you need to mitigate and which you can ignore.

    Also, NVMe has the potential to make this problem much worse: it's plausible that the underlying NAND+controller cannot outperform SATA alternatives to the degree they appear to, and that to achieve that (marketable) advantage they need to rely more heavily on buffering and write merging. If so, you may still be losing only milliseconds' worth of data, but that could cause quite a lot of corruption given how much data those milliseconds can represent on NVMe. Even if "100%" safe is unnecessary, that would make the NVMe value proposition much worse: not only are such drives much more expensive, they would also (in this hypothesis) be more likely to cause data corruption - I certainly wouldn't buy one given that tradeoff; the performance gains are simply too slim (in almost any normal workload).

    Also, it's not quite true that "all consumer friendly file systems have resilience against small data losses". Journalled filesystems typically only journal metadata, not data, so you may still end up with a bunch of corrupted files. And, critically, the journalling algorithms rely on proper drive flushing! If a drive can lose data that has already been flushed (writes issued before the fsync), then even a journalled filesystem can (easily!) be corrupted extensively. If anything, journalled filesystems are even more vulnerable to that than plain old FAT, because they rely on clever interactions between multiple (conflicting) sources of truth in the event of a crash, and when the assumptions the FS makes turn out to be invalid, it will (by design) draw incorrect inferences about which data is "real" and which is an artifact of the crash. You can easily lose whole directories (say, user directories) at once this way.
  • HollyDOL - Wednesday, October 19, 2016

    Tbh I consider this whole argument largely moot... if you have close to $1300 to spare on a 2TB SSD monster, you definitely should have $250-350ish to buy a decent UPS.

    Or, if you run a machine worth several thousand USD without one, you more than deserve whatever you get.

    It's the same argument: you wouldn't build a double Titan XP monster and power it with a no-name Chinese PSU. There are things that are simply a no-go.
  • bcronce - Tuesday, October 18, 2016

    As an ex-IT admin who used to manage thousands of computers, I have never seen catastrophic data loss caused by a power outage, and I have seen many of them. What I have seen are hard drives or PSUs dying and recently written data being lost, but never fully committed data.

    That being said, SSDs are a special beast, because writing new data often requires moving existing data around, and that is dangerous.

    Most modern filesystems since the 90s, except FAT32, were designed to handle unexpected power loss. NTFS was the first FS from MS that pretty much got rid of power-loss issues.
  • KAlmquist - Tuesday, October 18, 2016

    The functionality that a file system like NTFS requires to avoid corruption in the case of a power failure is a write barrier. A write barrier is a directive that says that the storage device should perform all writes prior to the write barrier before performing any of the writes issued after the write barrier.

    On a device using flash memory, write barriers should have minimal performance impact. It is not possible to overwrite flash memory in place, so when an SSD gets a write request, it will allocate a new page (or multiple pages) of flash memory to hold the data being written. After it writes the data, it will update the mapping table to point to the newly written page(s). If an SSD gets a whole bunch of writes, it can perform the data write operations in parallel as long as the pages being written all reside on different flash chips.

    If an SSD gets a bunch of writes separated by write barriers, it can write the data to flash just like it would without the write barriers. The only change is that when a write completes, the SSD cannot update the mapping table to point to the new data until the earlier writes have completed.

    This is different from a mechanical hard drive. If you issue a bunch of writes to a mechanical hard drive, the drive will attempt to perform the writes in an order that will minimize seek time and rotational latency. If you place write barriers between the write requests, then the drive will execute the writes in the same order you issued them, resulting in lower throughput.

    Now suppose you are unable to use write barriers for some reason. You can achieve the same effect by issuing commands to flush the disk after every write, but that will prevent the device from executing multiple write commands in parallel. A mechanical hard drive can only execute one write at a time, so cache flushes are a viable alternative to write barriers if you know you are using a mechanical hard drive. But on SSDs, parallel writes are not only possible, they are essential to performance. The write speeds of individual flash chips are slower than hard drive write speeds; the reason that sequential writes on most SSDs are faster than on a hard drive is that the SSD writes to multiple chips in parallel. So if you are talking to an SSD, you do not want to use cache flushes to get the effect of write barriers.

    I take it from what shodanshok wrote that Microsoft Windows doesn't use write barriers on NVMe devices, giving you the choice of either using cache flushes or risking file system corruption on loss of power. A quick look at the NVMe specification suggests that this is the fault of Intel, not Microsoft. Unless I've missed it, Intel inexplicably omitted write barrier functionality from the specification, forcing Microsoft to use cache flushing as a work-around:

    http://www.nvmexpress.org/wp-content/uploads/NVM_E...

    On SSD devices, write barriers are essentially free. There is no need for a separate write barrier command; the write command could contain a field indicating that the write operation should be preceded by a write barrier. Users shouldn't have to choose between data protection and performance when the correct use of a sensibly designed protocol would give them both without them having to worry about it. [A minimal sketch illustrating this flush-versus-barrier discussion appears after the comment thread.]
  • Dorkaman - Monday, November 28, 2016

    So this drive has capacitors to help write out anything in the buffer if the power goes out:

    https://youtu.be/nwCzcFvmbX0 skip to 2:00

    23 power-loss capacitors used to keep the SSD's controller running just long enough, in the event of an outage, to flush all pending writes:

    http://www.tomshardware.com/reviews/samsung-845dc-...

    Will the 960 Evo have that? Would this prevent something like this (RAID 0 lost due to power outage):

    https://youtu.be/-Qddrz1o9AQ
  • Nitas - Tuesday, October 18, 2016

    This may be silly of me but why did they use W8.1 instead of 10?
  • Billy Tallis - Tuesday, October 18, 2016

    I'm still on Windows 8.1 because this is still our 2015 SSD testbed and benchmark suite. I am planning to switch to Windows 10 soon, but that will mean that new benchmark results are not directly comparable to our current catalog of results, so I'll have to re-test all the drives I still have on hand, and I'll probably take the opportunity to make a few other adjustments to the test protocol.

    Switching to Windows 10 hasn't been a priority because of the hassle it entails and the fact that it's something of a moving target, but particularly with the direction the NVMe market is headed, the Windows version is starting to become an important factor.
  • Nitas - Tuesday, October 18, 2016

    I see, thanks for clearing that up!
  • Samus - Wednesday, October 19, 2016

    Windows 8.1 will show virtually no performance difference compared to Windows 10 for the purpose of benchmarking SSDs...
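
To make the flush-versus-barrier discussion in the comments above (Gigaplex, emn13, and KAlmquist) concrete, here is a minimal sketch of a write-ahead-log style commit, the pattern journalled filesystems rely on. This is an illustrative sketch in Python rather than code from this review or from any real filesystem; the file names and record format are invented, and os.fsync() stands in for the "flush the drive cache" request the commenters describe.

import os

def commit(journal_path, data_path, record):
    # Step 1: append the intended change to the journal first.
    jfd = os.open(journal_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(jfd, record)
        # Step 2: the journal entry must be durable *before* the main data is
        # modified. A write barrier would only need to enforce this ordering;
        # lacking one, the OS issues a full cache flush (fsync) here instead.
        os.fsync(jfd)
    finally:
        os.close(jfd)

    # Step 3: only now modify the primary data. If power fails at any point,
    # replaying the journal on the next mount restores a consistent state --
    # but only if the flush in step 2 actually reached non-volatile media.
    dfd = os.open(data_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(dfd, record)
        os.fsync(dfd)
    finally:
        os.close(dfd)

commit("journal.log", "data.db", b"example record\n")

If the drive acknowledges the flush in step 2 but keeps the data in a volatile buffer, or if flushes are disabled, a crash between the steps can leave both copies incomplete, which is the corruption scenario described above. A native write barrier in the command protocol would give the same ordering guarantee without forcing the drive to serialize its internal writes.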
