Random Read Performance

The random read test requests 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, which is filled before the test starts. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
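
To make the scoring concrete, here is a minimal sketch of the queue depth schedule and how the primary score is derived from it. The IOPS figures are hypothetical placeholders rather than measured data, and this is not the actual Iometer test harness; the same scoring applies to the random write test described below.

    # Minimal sketch of the random I/O test schedule and scoring.
    # The IOPS figures are hypothetical placeholders, not measured results.
    queue_depths = [1, 2, 4, 8, 16, 32]          # queue depth doubles every three minutes
    minutes_per_depth = 3
    total_minutes = minutes_per_depth * len(queue_depths)  # 18-minute test

    iops = {1: 12000, 2: 23000, 4: 44000, 8: 80000, 16: 140000, 32: 200000}

    # The reported primary score is the average of the low queue depth results.
    primary_score = sum(iops[qd] for qd in (1, 2, 4)) / 3

    print(total_minutes, round(primary_score))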

Iometer - 4KB Random Read

The Samsung 960 Pro slightly widens what was already a commanding lead in low queue depth random read performance.

Iometer - 4KB Random Read (Power)

The 960 Pro's power usage rises in proportion to its increased performance. Only a handful of the smallest and lowest-power SATA SSDs are more efficient, and they deliver only about half the overall performance.
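
To illustrate the trade-off with some made-up numbers (assuming the efficiency ranking simply means performance divided by average power draw):

    # Hypothetical figures illustrating the efficiency comparison above.
    # Efficiency is assumed here to mean performance per watt.
    nvme_iops, nvme_watts = 44000, 5.5   # fast NVMe drive (hypothetical)
    sata_iops, sata_watts = 22000, 2.4   # small, low-power SATA drive (hypothetical)

    print(round(nvme_iops / nvme_watts))  # 8000 IOPS per watt
    print(round(sata_iops / sata_watts))  # ~9167 IOPS per watt: slightly more efficient,
                                          # but at roughly half the overall performance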

While they are unmatched at lower queue depths, both the 960 Pro and the 950 Pro fall short of expectations at QD32, though this hardly matters for a consumer SSD, where such high queue depths are rarely sustained.

Random Write Performance

The random write test writes 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test is limited to a 16GB portion of the drive, and the drive is empty save for the 16GB test file. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
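
As a rough illustration of what confining the test to a 16GB portion of the drive means, here is a sketch of the access pattern only; it is an assumption about the general approach, not the actual Iometer configuration.

    import random

    # Sketch of the write test's access pattern: 4KB random writes confined
    # to a 16GB test file, with the rest of the drive left empty.
    BLOCK_SIZE = 4 * 1024
    TEST_SPAN = 16 * 1024 ** 3
    blocks_in_span = TEST_SPAN // BLOCK_SIZE

    def next_write_offset():
        # Random, 4KB-aligned offset within the 16GB test file.
        return random.randrange(blocks_in_span) * BLOCK_SIZE

    print(next_write_offset())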

Iometer - 4KB Random Write

The 960 Pro's random write performance is a big improvement over the 950 Pro, catching up with the OCZ RD400 but still well behind the Intel 750.

Iometer - 4KB Random Write (Power)

In addition to greatly improving random write performance over the 950 Pro, the 960 Pro greatly improves power consumption and jumps to the top of the efficiency ranking, just ahead of the Crucial MX300.

Where thermal throttling prevented the 950 Pro from improving past QD2, the 960 Pro scales up to QD4 and plateaus there for the second half of the test, with somewhat steadier performance than the OCZ RD400, which draws more power and thus has more thermal throttling to contend with. The Intel 750, with its massive heatsink, avoids thermal throttling entirely.

Comments

  • emn13 - Wednesday, October 19, 2016 - link

    Especially since NAND hasn't magically gotten lots faster after the SATA->NVMe transition. If SATA is fast enough to saturate the underlying NAND+controller combo when they must actually write to disk, then NVMe simply looks unnecessarily expensive (if you look at writes only). Since the fast NVMe drives all have RAM caches, it's hard to detect whether data is properly being written.

    Perhaps Windows is doing something odd here, but it's definitely fishy.
  • jhoff80 - Tuesday, October 18, 2016 - link

    This is probably a stupid question because I've been changing that setting for years on SSDs without even thinking about it and you clearly know more about this than I do, but does the use of a drive in a laptop (e.g. battery-powered) or with a UPS for the system negate this risk anyway? That was always my impression, but it could very much be wrong.
  • shodanshok - Tuesday, October 18, 2016 - link

    Having a battery, laptops are inherently safer than desktops against power loss. However, a bad (or missing) battery and/or a failing cable/connector can expose the disks to the very same unprotected power-loss scenario.
  • Dr. Krunk - Sunday, October 23, 2016 - link

    What happens if you accidentally press the battery release button and it pops out just enough to lose connection?
  • woggs - Tuesday, October 18, 2016 - link

    I would love to see AnandTech do a deep dive into this very topic. It's important. I've heard that Windows and other apps do excessive cache flushing when it's enabled, and that's also a problem. I've also heard Intel SSDs completely ignore the cache flush command and simply implement full power loss protection. Batching writes into ever larger pieces is a fact of SSD life and it needs to be done right.
  • voicequal - Tuesday, October 18, 2016 - link

    Agreed. Last year I traced slow disk i/o on a new Surface Pro 4 with 256GB Toshiba XG3 NVMe to the write-cache buffer flushing, so I checked the box to turn it off. Then in July, another driver bug caused the Surface Pro 4 to frequently lock up and require a forced power off. Within a few weeks I had a corrupted Windows profile and system file issues that took several DISM runs to clean up. Don't know for sure if my problem resulted from the disabled buffer flushing, but I'm now hesitant to reenable the setting.

    It would be good to understand what this setting does with respect to NVMe driver operation, and interesting to measure the impact / data loss when power loss does occur.
  • Kristian Vättö - Tuesday, October 18, 2016 - link

    I think you are really exaggerating the problem. DRAM cache has been used in storage well before SSDs became mainstream. Yes, HDDs have DRAM cache too and it's used for the same purpose: to cache writes. I would argue that HDDs are even more vulnerable because data sits in the cache for a longer time due to the much higher latency of platter-based storage.

    Because of that, all consumer-friendly file systems have resilience against small data losses. In the end, only a few MB of user data is cached anyway, so it's not like we're talking about a major data loss. It's small enough not to impact user experience, and the file system can recover itself in case there was metadata in the lost cache.

    If this was a severe issue, there would have been a fix years ago. For client-grade products there is simply no need because 100% data protection and uptime are not needed.
  • shodanshok - Tuesday, October 18, 2016 - link

    The problem is not the cache itself, but rather ignoring cache flush requests. I know DRAM caches have been used for decades, and when disks lied about flushing them (in the good old IDE days), catastrophic filesystem failures were much more common (see the XFS or ZFS FAQs / mailing lists for some references, or even the SATA command specifications).

    I'm not exaggerating anything: it is a real problem, greatly debated in the Linux community in the past. From https://lwn.net/Articles/283161/
    "So the potential for corruption is always there; in fact, Chris Mason has a torture-test program which can make it happen fairly reliably. There can be no doubt that running without barriers is less safe than using them"

    This quote is ext3-specific, but other journaled filesystems behave in a very similar manner. And hey - the very same Windows checkbox warns you about the risks of disabling flushes.

    You should really ask Microsoft about what this checkbox does with its NVMe driver. Anyway, suggesting disabling cache flushes is bad advice (unless you don't use your PC for important things).
  • Samus - Wednesday, October 19, 2016 - link

    I don't think people understand how cache flushing works at the hardware level.

    If the operating system has buffer flushing disabled, it will never tell the drive to dump the cache, for example, when an operation is complete. In this event, a drive will hold onto whatever data is in cache until the cache fills up, then the drive firmware will trigger the controller to write the cache to disk.

    Since OSes randomly write data to disk all the time, bits here and there go into the cache to prevent disk thrashing/NAND wear, all determined in hardware. This has nothing to do with pooled or paged data at the OS level or RAM data buffers.

    Long story short, it's moronic to disable write buffer flushing: with it enabled, the OS will command the drive to flush after IO operations (like a file copy or write) complete, ensuring the cache is clear as the system enters idle. This happens hundreds if not thousands of times per minute, and it's important to fundamentally protect the data in cache. With buffer flushing disabled the cache will ALWAYS have something in it until you shutdown - which is the only time (other than suspend) a buffer flush command will be sent.
  • Billy Tallis - Wednesday, October 19, 2016 - link

    "With buffer flushing disabled the cache will ALWAYS have something in it until you shutdown - which is the only time (other than suspend) a buffer flush command will be sent."

    I expect at least some drives flush their internal caches before entering any power saving mode. I've occasionally seen the power meter spike before a drive actually drops down to its idle power level, and I probably would have seen a lot more such spikes if the meter were sampling more than once per second.
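
For context on the flush requests debated in the comments above, here is a minimal sketch of what a cache flush request looks like from application code. Python is used purely to illustrate the general mechanism; how Windows' NVMe driver maps the "write-cache buffer flushing" setting onto device-level flush commands is exactly the open question raised in the thread.

    import os

    # Illustrative only: an application asking for its data to be made durable.
    with open("journal.log", "ab") as f:
        f.write(b"transaction record\n")
        f.flush()             # move data from the application buffer to the OS
        os.fsync(f.fileno())  # ask the OS to write out its buffers and request that
                              # the drive commit its volatile write cache to the media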
