ATTO

ATTO's Disk Benchmark is a quick and easy freeware tool to measure drive performance across various transfer sizes.

ATTO Performance

The 960 Pro reaches full performance at transfer sizes of 32kB and larger, while the Intel SSD 750 doesn't reach its highest read speeds until 1MB transfers and the OCZ RD400 needs 512kB transfers to peak. Unlike the 512GB 950 Pro, the 960 Pro shows no sign of severe thermal throttling.

AS-SSD

AS-SSD is another quick and free benchmark tool. It uses incompressible data for all of its tests, making it an easy way to keep an eye on which drives are relying on transparent data compression. The short duration of the test makes it a decent indicator of peak drive performance.

Incompressible Sequential Read Performance
Incompressible Sequential Write Performance

The 960 Pro's read speed breaks away from the pack of other PCIe SSDs but still doesn't come close to the advertised 3.5GB/s. The write speed stands out even more and very slightly exceeds the advertised speed of 2100MB/s.

Idle Power Consumption

Since the ATSB tests based on real-world usage truncate idle times to 25ms, their power consumption scores paint an inaccurate picture of how suitable each drive is for mobile use. During real-world client use, a solid state drive spends far more time idle than actively processing commands.

There are two main ways an NVMe SSD can save power when idle. The first is by suspending the PCIe link through the Active State Power Management (ASPM) mechanism, analogous to the SATA Link Power Management mechanism. Both define two power saving modes: an intermediate mode with strict wake-up latency requirements (e.g. 10µs for the SATA "Partial" state) and a deeper state with looser wake-up requirements (e.g. 10ms for the SATA "Slumber" state). SATA Link Power Management is supported by almost all SSDs and host systems, though it is commonly disabled by default on desktops. PCIe ASPM support, on the other hand, is a minefield, and it is common to encounter devices that do not implement it or implement it incorrectly. Forcing PCIe ASPM on for a system that defaults to disabling it may lock the system up; this is the case for our current SSD testbed, so we are unable to measure the effect of PCIe ASPM on SSD idle power.
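
For reference, on a Linux system both the platform-wide PCIe ASPM policy and the per-port SATA link power management policy are exposed through sysfs. The following is a minimal sketch, assuming a reasonably recent kernel with the standard /sys/module/pcie_aspm and /sys/class/scsi_host paths; it only reads the current settings:

```python
import glob

# PCIe ASPM: kernel-wide policy; the active choice is shown in [brackets].
# The file is absent if the kernel was built without ASPM support.
try:
    with open("/sys/module/pcie_aspm/parameters/policy") as f:
        print("PCIe ASPM policy:", f.read().strip())
except FileNotFoundError:
    print("PCIe ASPM policy not exposed by this kernel")

# SATA Link Power Management: one policy file per AHCI host port.
# Typical values include max_performance, medium_power and min_power;
# min_power permits the Partial/Slumber states described above.
for path in sorted(glob.glob(
        "/sys/class/scsi_host/host*/link_power_management_policy")):
    with open(path) as f:
        print(path, "->", f.read().strip())
```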

The NVMe standard also defines a drive power management mechanism that is separate from PCIe link power management. The SSD can define up to 32 different power states and inform the host of the time taken to enter and exit these states. Some of these power states can be operational states where the drive continues to perform I/O with a restricted power budget, while others are non-operational idle states. The host system can either directly set these power states, or it can declare rules for which power states the drive may autonomously transition to after being idle for different lengths of time.
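
As an illustration, the power states a drive advertises can be listed under Linux with the nvme-cli tool; the sketch below simply shells out to `nvme id-ctrl`, whose human-readable output ends with one descriptor per power state (maximum power, entry/exit latency, and whether the state is operational). The /dev/nvme0 path is an assumption, and the exact output format varies between nvme-cli versions:

```python
import re
import subprocess

DEVICE = "/dev/nvme0"  # assumed device path; adjust for your system

# Dump the controller identify structure; the trailing "ps N : ..." lines
# describe each advertised power state.
out = subprocess.run(["nvme", "id-ctrl", DEVICE],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    if re.match(r"\s*ps\s+\d+\s*:", line):
        print(line.strip())
```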

The big caveat to NVMe power management is that while I am able to manually set power states under Linux using low-level tools, I have not yet seen any OS or NVMe driver automatically engage this power saving. Work is underway to add Autonomous Power State Transition (APST) support to the Linux NVMe driver, and it may be possible to configure Windows to use this capability with some SSDs and NVMe drivers. NVMe power management including APST fortunately does not depend on motherboard support the way PCIe ASPM does, so it should eventually reach the same widespread availability that SATA Link Power Management enjoys.
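
For what it's worth, this is roughly what manually forcing a power state with nvme-cli looks like. Power Management is NVMe Feature ID 0x02, and the value written is simply the index of the desired power state. The device path and the example state index below are assumptions; the state chosen should be one of the non-operational states reported by `nvme id-ctrl`, and the exact flag syntax varies slightly between nvme-cli versions:

```python
import subprocess

DEVICE = "/dev/nvme0"        # assumed device path
POWER_MGMT_FEATURE = "0x02"  # NVMe Power Management feature identifier

def current_power_state(dev: str) -> str:
    """Read back the Power Management feature (current power state index)."""
    result = subprocess.run(
        ["nvme", "get-feature", dev, "-f", POWER_MGMT_FEATURE],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

def request_power_state(dev: str, state: int) -> None:
    """Ask the controller to transition to power state `state`."""
    subprocess.run(
        ["nvme", "set-feature", dev, "-f", POWER_MGMT_FEATURE, "-v", str(state)],
        check=True)

if __name__ == "__main__":
    print(current_power_state(DEVICE))
    request_power_state(DEVICE, 4)  # e.g. a deep non-operational state on many drives
    print(current_power_state(DEVICE))
```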

We report two idle power values for each drive: an active idle measurement taken with none of the above power management states engaged, and an idle power measurement with either SATA LPM Slumber state or the lowest-power NVMe non-operational power state, if supported.

Active Idle Power Consumption (No LPM)

The active idle power consumption of the PCIe SSDs is still far higher than is typical for SATA SSDs, and is enough to keep their temperatures relatively high as well. The 960 Pro 2TB draws only slightly more power than the 950 Pro.

Idle Power Consumption

With power saving modes enabled, the Samsung NVMe SSDs are almost as efficient as typical SATA SSDs, with the 960 Pro unsurprisingly drawing a little more power than the lower-capacity 950 Pros. The OCZ RD400 benefits somewhat from power management, but still draws far more than it should.

Comments

  • Gigaplex - Tuesday, October 18, 2016 - link

    "Because of that, all consumer friendly file systems have resilience against small data losses."

    And for those to work, cache flush requests need to be functional for the journalling to work correctly. Disabling cache flushing will reintroduce the serious corruption issues.
  • emn13 - Wednesday, October 19, 2016 - link

    "100% data protection is not needed": at some level that's obviously true. But it's nice to have *some* guarantees so you know which risks you need to mitigate and which you can ignore.

    Also, NVMe has the potential to make this problem much worse: it's plausible that the underlying NAND+controller cannot outperform SATA alternatives to the degree they appear to, and that to achieve that (marketable) advantage they need to rely more on buffering and write merging. If so, then you may still be losing only milliseconds of data, but that could cause quite a lot of corruption given how much data that can be on an NVMe drive. Even though "100%" safe is possibly unnecessary, that would make the NVMe value proposition much worse: not only are such drives much more expensive, they would also (in this hypothesis) be more likely to cause data corruption - I certainly wouldn't buy one given that tradeoff; the performance gains are simply too slim (in almost any normal workload).

    Also, it's not quite true that "all consumer friendly file systems have resilience against small data losses". Journalled filesystems typically only journal metadata, not data - so you may still end up with a bunch of corrupted files. And, critically, the journaling algorithms rely on proper drive flushing! If a drive can lose data that has been flushed (pre-fsync writes), then even a journalled filesystem can (easily!) be corrupted extensively. If anything, journalled filesystems are even more vulnerable to that than plain old FAT, because they rely on clever interactions of multiple (conflicting) sources of truth in the event of a crash, and when the assumptions the FS makes turn out to be invalid, it will (by design) draw incorrect inferences about which data is "real" and which is an artifact of the crash. You can easily lose whole directories (say, user directories) at once like this.
  • HollyDOL - Wednesday, October 19, 2016 - link

    Tbh I consider this whole argument strongly obsolete... if you have close to $1300 spare to buy a 2TB SSD monster, you definitely should have $250-350ish to buy a decent UPS.

    Or, if you run a several-thousand-USD machine without one, you more than deserve whatever you get.

    It's the same argument as building a double Titan XP monster and powering it with a no-name Chinese PSU. There are things that are simply a no-go.
  • bcronce - Tuesday, October 18, 2016 - link

    As an ex-IT admin who used to manage thousands of computers, I have never seen catastrophic data loss caused by a power outage, and I have seen many of them. What I have seen is hard drives or PSUs dying and recently written data being lost, but never fully committed data.

    That being said, SSDs are a special beast because writing new data often requires moving existing data, and this is dangerous.

    Most modern filesystems since the 90s, except FAT32, were designed to handle unexpected power loss. NTFS was the first FS from MS that pretty much got rid of power-loss issues.
  • KAlmquist - Tuesday, October 18, 2016 - link

    The functionality that a file system like NTFS requires to avoid corruption in the case of a power failure is a write barrier. A write barrier is a directive that says that the storage device should perform all writes prior to the write barrier before performing any of the writes issued after the write barrier.

    On a device using flash memory, write barriers should have minimal performance impact. It is not possible to overwrite flash memory, so when an SSD gets a write request, it will allocate a new page (or multiple pages) of flash memory to hold the data being written. After it writes the data, it will update the mapping table to point to the newly written page(s). If an SSD gets a whole bunch of writes, it can perform the data write operations in parallel as long as the pages being written all reside on different flash chips.

    If an SSD gets a bunch of writes separated by write barriers, it can write the data to flash just like it would without the write barriers. The only change is that when a write completes, the SSD cannot update the mapping table to point to the new data until earlier writes have completed.

    This is different from a mechanical hard drive. If you issue a bunch of writes to a mechanical hard drive, the drive will attempt to perform the writes in an order that will minimize seek time and rotational latency. If you place write barriers between the write requests, then the drive will execute the writes in the same order you issued them, resulting in lower throughput.

    Now suppose you are unable to use write barriers for some reason. You can achieve the same effect by issuing commands to flush the disk cache after every write, but that will prevent the device from executing multiple write commands in parallel. A mechanical hard drive can only execute one write at a time, so cache flushes are a viable alternative to write barriers if you know you are using a mechanical hard drive. But on SSDs, parallel writes are not only possible, they are essential to performance. The write speeds of individual flash chips are slower than hard drive write speeds; the reason that sequential writes on most SSDs are faster than on a hard drive is that the SSD writes to multiple chips in parallel. So if you are talking to an SSD, you do not want to use cache flushes to get the effect of write barriers.

    I take it from what shodanshok wrote that Microsoft Windows doesn't use write barriers on NVMe devices, giving you the choice of either using cache flushes or risking file system corruption on loss of power. A quick look at the NVMe specification suggests that this is the fault of Intel, not Microsoft. Unless I've missed it, Intel inexplicably omitted write barrier functionality from the specification, forcing Microsoft to use cache flushing as a work-around:

    http://www.nvmexpress.org/wp-content/uploads/NVM_E...

    On SSD devices, write barriers are essentially free. There is no need for a separate write barrier command; the write command could contain a field indicating that the write operation should be preceded by a write barrier. Users shouldn't have to choose between data protection and performance when the correct use of a sensibly designed protocol would give them both without them having to worry about it.
  • Dorkaman - Monday, November 28, 2016 - link

    So this drive has capacitors to help write out anything in the buffer if the power goes out:

    https://youtu.be/nwCzcFvmbX0 skip to 2:00

    23 power-loss capacitors used to keep the SSD's controller running just long enough, in the event of an outage, to flush all pending writes:

    http://www.tomshardware.com/reviews/samsung-845dc-...

    Will the 960 Evo have that? Would this prevent something like this (RAID 0 lost due to power outage):

    https://youtu.be/-Qddrz1o9AQ
  • Nitas - Tuesday, October 18, 2016 - link

    This may be silly of me but why did they use W8.1 instead of 10?
  • Billy Tallis - Tuesday, October 18, 2016 - link

    I'm still on Windows 8.1 because this is still our 2015 SSD testbed and benchmark suite. I am planning to switch to Windows 10 soon, but that will mean that new benchmark results are not directly comparable to our current catalog of results, so I'll have to re-test all the drives I still have on hand, and I'll probably take the opportunity to make a few other adjustments to the test protocol.

    Switching to Windows 10 hasn't been a priority because of the hassle it entails and the fact that it's something of a moving target, but particularly with the direction the NVMe market is headed the Windows version is starting to become an important factor.
  • Nitas - Tuesday, October 18, 2016 - link

    I see, thanks for clearing that up!
  • Samus - Wednesday, October 19, 2016 - link

    Windows 8.1 will have virtually no difference in performance compared to Windows 10 for the purpose of benchmarking SSDs...
