ATTO

ATTO's Disk Benchmark is a quick and easy freeware tool to measure drive performance across various transfer sizes.

ATTO Performance

The 960 Pro hits full performance at 32kB or larger transfers, while the Intel SSD 750 doesn't reach its highest read speeds until 1MB transfers and the OCZ RD400 needs 512kB transfers to do the same. Unlike the 512GB 950 Pro, the 960 Pro does not run into any severe thermal throttling.

AS-SSD

AS-SSD is another quick and free benchmark tool. It uses incompressible data for all of its tests, making it an easy way to keep an eye on which drives are relying on transparent data compression. The short duration of the test makes it a decent indicator of peak drive performance.

Incompressible Sequential Read Performance
Incompressible Sequential Write Performance

The 960 Pro's read speed breaks away from the pack of other PCIe SSDs but still doesn't come close to the advertised 3.5GB/s. The write speed stands out even more and very slightly exceeds the advertised speed of 2100MB/s.

Idle Power Consumption

Since the ATSB tests are based on real-world usage traces with idle times truncated to a maximum of 25ms, their power consumption scores paint an inaccurate picture of the relative suitability of drives for mobile use. During real-world client use, a solid state drive will spend far more time idle than actively processing commands.

There are two main ways that an NVMe SSD can save power when idle. The first is by suspending the PCIe link through the Active State Power Management (ASPM) mechanism, analogous to SATA Link Power Management. Both define two power saving modes: an intermediate mode with strict wake-up latency requirements (e.g. 10µs for the SATA "Partial" state) and a deeper state with looser wake-up requirements (e.g. 10ms for the SATA "Slumber" state). SATA Link Power Management is supported by almost all SSDs and host systems, though it is commonly off by default for desktops. PCIe ASPM support, on the other hand, is a minefield: it is common to encounter devices that do not implement it or implement it incorrectly. Forcing PCIe ASPM on for a system that defaults to disabling it may lead to the system locking up; this is the case for our current SSD testbed, and thus we are unable to measure the effect of PCIe ASPM on SSD idle power.
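
For readers who want to check their own systems: on Linux, the kernel's global ASPM policy is exposed through sysfs. The following is a minimal sketch, not part of our test procedure; it assumes a reasonably recent kernel, changing the policy requires root, and (per the warning above) forcing ASPM on can lock up machines with broken implementations.

    # Minimal sketch (Linux-only): inspect and optionally change the
    # kernel's global PCIe ASPM policy via sysfs. Changing the policy
    # requires root and can hang systems with broken ASPM support.
    from pathlib import Path

    ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")

    def current_policy() -> str:
        # The file lists every supported policy with the active one in
        # brackets, e.g. "default performance [powersave]".
        text = ASPM_POLICY.read_text().strip()
        return text.split("[")[1].split("]")[0]

    def set_policy(policy: str) -> None:
        # Typical policies: default, performance, powersave.
        ASPM_POLICY.write_text(policy)

    print("Active ASPM policy:", current_policy())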

The NVMe standard also defines a drive power management mechanism that is separate from PCIe link power management. The SSD can define up to 32 different power states and inform the host of the time taken to enter and exit these states. Some of these power states can be operational states where the drive continues to perform I/O with a restricted power budget, while others are non-operational idle states. The host system can either directly set these power states, or it can declare rules for which power states the drive may autonomously transition to after being idle for different lengths of time.
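
As an illustration of direct power state control (this is not how our power measurements are taken), the Linux nvme-cli utility exposes the NVMe Power Management feature, feature ID 0x02, whose value selects the power state. Below is a sketch wrapping it from Python; it assumes nvme-cli is installed and that the target drive is /dev/nvme0.

    # Sketch: read and set an NVMe drive's power state through nvme-cli.
    # Feature ID 0x02 is the NVMe Power Management feature; the low bits
    # of its value select the power state. Requires root; assumes
    # nvme-cli is installed and /dev/nvme0 is the target drive.
    import subprocess

    DEV = "/dev/nvme0"

    def get_power_state() -> str:
        # "nvme get-feature" prints the current feature value.
        out = subprocess.run(["nvme", "get-feature", DEV, "-f", "0x02"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def set_power_state(ps: int) -> None:
        # ps indexes the power states listed by "nvme id-ctrl"; 0 is
        # typically the highest-performance operational state.
        subprocess.run(["nvme", "set-feature", DEV, "-f", "0x02",
                        "-v", str(ps)], check=True)

    print(get_power_state())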

The big caveat to NVMe power management is that while I am able to manually set power states under Linux using low-level tools, I have not yet seen any OS or NVMe driver automatically engage this power saving. Work is underway to add Autonomous Power State Transition (APST) support to the Linux NVMe driver, and it may be possible to configure Windows to use this capability with some SSDs and NVMe drivers. NVMe power management including APST fortunately does not depend on motherboard support the way PCIe ASPM does, so it should eventually reach the same widespread availability that SATA Link Power Management enjoys.
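
Whether a drive advertises APST at all can be read from its identify-controller data. A small sketch, again assuming nvme-cli and /dev/nvme0:

    # Sketch: check whether a drive advertises Autonomous Power State
    # Transitions by reading the APSTA field from "nvme id-ctrl" output.
    # Assumes nvme-cli is installed and /dev/nvme0 is the target drive.
    import subprocess

    def supports_apst(dev: str = "/dev/nvme0") -> bool:
        out = subprocess.run(["nvme", "id-ctrl", dev],
                             capture_output=True, text=True, check=True)
        for line in out.stdout.splitlines():
            # nvme-cli prints fields as "name : value"; APSTA bit 0
            # indicates APST support.
            if line.split(":", 1)[0].strip() == "apsta":
                return int(line.split(":", 1)[1].strip(), 0) & 1 == 1
        return False

    print("APST supported:", supports_apst())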

We report two idle power values for each drive: an active idle measurement taken with none of the above power management states engaged, and an idle power measurement with either SATA LPM Slumber state or the lowest-power NVMe non-operational power state, if supported.

Active Idle Power Consumption (No LPM)

The active idle power consumption of the PCIe SSDs is still far higher than is typical for SATA SSDs, and is enough to keep their temperatures relatively high as well. The 960 Pro 2TB draws only slightly more power than the 950 Pro.

Idle Power Consumption

With power saving modes enabled, the Samsung NVMe SSDs are almost as efficient as a typical SATA SSD, with the 960 Pro unsurprisingly drawing a little more power than the lower-capacity 950 Pros. The OCZ RD400 does benefit somewhat from power management, but still draws far more than it should.


72 Comments

  • JoeyJoJo123 - Tuesday, October 18, 2016 - link

    Not too surprised that Samsung, once again, achieves another performance crown for another halo SSD product.
  • Eden-K121D - Tuesday, October 18, 2016 - link

    Bring on the competition
  • ibudic1 - Tuesday, October 18, 2016 - link

    Intel 750 is better. The only numbers that really tell you anything are 4K random writes at QD1-4. Also, it's really bad when you don't have consistency when you need it. There's nothing worse than a hanging application; it's about consistency, not outright speed. Which reminds me...

    When evaluating graphics cards, a MINIMUM frame rate is WAY more important than the average or maximum.

    Just like in racing, the slowest speed in the corner is what separates great cars from average ones.

    Hopefully AnandTech can recognize this in future reviews.
  • Flying Aardvark - Wednesday, October 19, 2016 - link

    Exactly. The Intel 750 is still the king for someone who seriously needs storage performance: 4K randoms and zero throttling.
    I'd stick with the EVO or 600P, 3D TLC stuff, unless I really needed the performance; then I'd go all the way up to the real professional stuff with the 750. I need a 1TB M.2 NVMe SSD myself and am eager to see street prices on the 960 EVO 1TB and Intel 600P 1TB.
  • iwod - Wednesday, October 19, 2016 - link

    Exactly, when the majority (90%+) of consumer usage is going to be at QD1. Giving me QD32 numbers is like a megapixel or MHz race. I used to think we had reached the limit of random read/write performance. It turns out we haven't actually improved QD1 random read/write much, hence it is likely still the bottleneck.

    And yes, we need consistency in QD1 random speed tests as well.
  • dsumanik - Wednesday, October 19, 2016 - link

    Nice to see there are still some folks out there who aren't duped by marketing; random write and full-capacity consistency are the only 2 things I look at. When moving large video files around, sequential speeds can help, but the difference between 500 and 1000 MB/s isn't much: you start the copy and then go do something else. In many cases random write is the bottleneck for the times you are waiting on the computer to "do something", and it dictates whether the computer feels "snappy". Likewise, the performance loss when a drive is getting full also makes you notice things slowing down.

    Samsung, if you are reading this, go balls out on random write performance in the next generation, tyvm.
  • Samus - Wednesday, October 19, 2016 - link

    You can't put an Intel 750 in a laptop though, and it also caps at 1.2TB. But your point is correct, it is a performance monster.
  • edward1987 - Friday, October 28, 2016 - link

    Intel SSD 750 SSDPEDMW400G4X1 (PCIe 3.0 x4, HHHL) vs. Samsung SSD 960 PRO MZ-V6P512BW (M.2 2280, NVMe):
    IOPS: 230-430K vs 330K
    Read speed (max): 2200 vs 3500 MB/s

    Much better in comparison: http://www.span.com/compare/SSDPEDMW400G4X1-vs-MZ-...
  • shodanshok - Tuesday, October 18, 2016 - link

    Let me issue a BIG WARNING against disabling write-buffer flushing. Any drive without special provisions for power loss (e.g. a supercapacitor) can lose a lot of data in the event of an unexpected power loss. In the worst case, the entire filesystem can be lost.

    What do the two Windows settings do? In short:
    1) "Enable write cache on the device" enables the controller's private DRAM writeback cache and is *required* for good performance on SSDs. The reason is exactly the one cited in the article: for good performance, flash memory requires batched writes. For example, with the DRAM cache disabled I recorded a write speed of 5 MB/s on an otherwise fast Crucial M550 256 GB. With the DRAM cache enabled, the very same disk almost saturated the SATA link (>400 MB/s).
    However, a writeback cache implies some risk of data loss. For that reason, the IDE/SATA standard has special commands to force a full cache flush when the OS needs to be sure about data persistence. This brings us to the second option...

    2) "Turn off write-cache buffer flushing on the device": this option should absolutely NOT be enabled on consumer, non-power-protected disks. With it enabled, Windows will *not* force a full cache flush even for critical tasks (e.g. updates of NTFS metadata). This can have catastrophic consequences if power is lost at the wrong moment. I am not speaking about "simple", limited data loss, but entire filesystem corruption. The key reason for such catastrophic behavior is that cache-flush commands are used not only to persist critical data, but also to properly order its writeout (see the sketch after the comments). In other words, with cache flushing disabled, key filesystem metadata can be written out of order. If power is lost during an incomplete, badly-reordered metadata write, all sorts of problems can happen.
    This option exists for one, and only one, case: when your system has power-loss-protected arrays/drives, you trust your battery/capacitor, AND your RAID card/drive behaves poorly when flushing is enabled. However, basically all modern RAID controllers automatically ignore cache flushes while the battery/capacitor is healthy, negating the need to disable cache flushes software-side.

    In short, if such a device (the 960 Pro) really needs cache flushing disabled to shine, that is a serious product/firmware flaw which needs to be corrected as soon as possible.
  • Br3ach - Tuesday, October 18, 2016 - link

    Is power loss a problem for M.2 drives though? E.g. my PSU's (Corsair AX1200i) capacitors keep the motherboard alive for probably a minute following power loss - plenty of time for the drive to flush any caches, no?
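
A minimal sketch of the flush-ordering point raised in the comments above, assuming a POSIX-like system: fsync() (FlushFileBuffers() on Windows) is what triggers the cache-flush commands that the "turn off write-cache buffer flushing" setting suppresses, and journaled writes depend on those flushes for ordering, not just durability.

    # Minimal sketch (POSIX-like OS assumed): a write-ahead pattern that
    # relies on cache flushes for correctness. Each os.fsync() asks the
    # OS to issue a cache-flush command to the drive, so the journal
    # record is durable *before* the data it describes. Disabling
    # write-cache buffer flushing silently breaks this ordering.
    import os

    def journaled_write(journal_path: str, data_path: str,
                        payload: bytes) -> None:
        # 1) Record the intent in the journal and flush it to media.
        with open(journal_path, "ab") as j:
            j.write(b"BEGIN %d\n" % len(payload))
            j.flush()             # Python buffer -> OS page cache
            os.fsync(j.fileno())  # OS -> drive, including a cache flush

        # 2) Only now write the data itself, and flush again.
        with open(data_path, "ab") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())

        # 3) Mark the transaction complete.
        with open(journal_path, "ab") as j:
            j.write(b"COMMIT\n")
            j.flush()
            os.fsync(j.fileno())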
