Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
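
As a rough illustration of the burst pattern (this is only a sketch, not the harness used to produce these results; the device path is a placeholder and the script needs root access on Linux), a minimal QD1 version in Python could look like this:

```python
import os
import time

# Illustrative parameters matching the burst read test described above.
BLOCK_SIZE = 128 * 1024           # 128kB per operation, issued at QD1
BURST_SIZE = 128 * 1024 * 1024    # 128MB per burst
NUM_BURSTS = 8                    # 1GB transferred in total
DUTY_CYCLE = 0.20                 # fraction of wall time spent busy

def burst_sequential_read(path):
    """Average MB/s across NUM_BURSTS sequential read bursts at QD1."""
    fd = os.open(path, os.O_RDONLY)
    speeds = []
    offset = 0
    try:
        for _ in range(NUM_BURSTS):
            # Drop cached pages so the drive, not RAM, services the
            # reads (a simple stand-in for direct I/O).
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            start = time.perf_counter()
            done = 0
            while done < BURST_SIZE:
                done += len(os.pread(fd, BLOCK_SIZE, offset + done))
            busy = time.perf_counter() - start
            offset += BURST_SIZE
            speeds.append(BURST_SIZE / busy / 1e6)
            # Idle long enough to hold the overall duty cycle at 20%.
            time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)
    finally:
        os.close(fd)
    return sum(speeds) / len(speeds)

print(f"{burst_sequential_read('/dev/nvme0n1'):.0f} MB/s")
```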

Burst 128kB Sequential Read (Queue Depth 1)

The Samsung PM981 set new records for burst sequential read performance, but the Samsung 970 EVO fails to live up to that standard. The 970 EVO is a substantial improvement over the 960 EVO, but doesn't manage to beat the last generation's fastest MLC drives.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
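
Again purely as a sketch (a real benchmark uses native asynchronous I/O; here each outstanding request is approximated by a synchronous reader thread, and the device path is a placeholder), the queue depth scaling and the QD1/QD2/QD4 scoring could be structured like this:

```python
import os
import threading
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 128 * 1024      # 128kB operations
DATA_LIMIT = 32 * 2**30      # stop after 32GB transferred...
TIME_LIMIT = 60.0            # ...or one minute, whichever comes first

def sustained_read(path, qd):
    """Approximate queue depth `qd` with `qd` synchronous reader
    threads; returns throughput in MB/s."""
    fd = os.open(path, os.O_RDONLY)
    lock = threading.Lock()
    state = {"next": 0, "done": 0}
    deadline = time.perf_counter() + TIME_LIMIT

    def worker():
        while True:
            with lock:
                if (state["done"] >= DATA_LIMIT
                        or time.perf_counter() >= deadline):
                    return
                off = state["next"]           # claim the next block
                state["next"] += BLOCK_SIZE
                state["done"] += BLOCK_SIZE
            os.pread(fd, BLOCK_SIZE, off)     # sequential 128kB read

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        futures = [pool.submit(worker) for _ in range(qd)]
    elapsed = time.perf_counter() - start
    for f in futures:
        f.result()                            # surface any I/O errors
    os.close(fd)
    return state["done"] / elapsed / 1e6

# The headline score averages the low queue depths, as described above.
results = {qd: sustained_read("/dev/nvme0n1", qd) for qd in (1, 2, 4)}
print(f"{sum(results.values()) / len(results):.0f} MB/s (QD1/2/4 average)")
```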

Sustained 128kB Sequential Read

On the longer sequential read test, the Samsung 970 EVO performs far better than the Samsung PM981, indicating that Samsung has made significant firmware tweaks to improve how the drive handles the internal fragmentation left over from running the random I/O tests. The 970 EVO is the fastest TLC-based drive on this test, and the 1TB model even manages to beat the MLC-based 1TB 960 PRO.

Sustained 128kB Sequential Read (Power Efficiency)
(Power efficiency in MB/s per W; average power in W)

The 1TB 970 EVO draws more power during this sequential read test than any other M.2 drive in this mix, but its performance is high enough to leave it with a good efficiency score. The 500GB 970 EVO ends up with below-average efficiency.

Both capacities of the Samsung 970 EVO have very steady performance and power consumption across the duration of the sequential read test. This is in contrast to drives like the WD Black and Toshiba XG5 that don't reach full performance until the queue depths are rather high.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write performance of the Samsung 970 EVO tops the charts, with the 500GB model almost reaching 2.5GB/s where the last generation of drives couldn't hit 2GB/s. The WD Black is only slightly behind the 970 EVO.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
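
A sketch of the write side, under the same caveats (not the actual harness; O_SYNC stands in for direct I/O, and writing to a raw device destroys its contents, so the path passed in is strictly a placeholder):

```python
import os
import time

BLOCK_SIZE = 128 * 1024      # 128kB writes, QD1 shown here
SPAN = 64 * 2**30            # test confined to a 64GB span
DATA_LIMIT = 32 * 2**30      # per queue depth: up to 32GB...
TIME_LIMIT = 60.0            # ...or one minute of writing
IDLE_TIME = 60.0             # cool-off / garbage-collection window

def sustained_write_pass(path):
    """One QD1 pass of the sustained write test: write, then idle.
    WARNING: writing to a raw device destroys its contents."""
    # O_SYNC makes each write wait for the drive to accept the data,
    # a rough stand-in for the direct I/O a real benchmark would use.
    fd = os.open(path, os.O_WRONLY | os.O_SYNC)
    buf = os.urandom(BLOCK_SIZE)          # incompressible payload
    start = time.perf_counter()
    deadline = start + TIME_LIMIT
    written = 0
    try:
        while written < DATA_LIMIT and time.perf_counter() < deadline:
            # Wrap around so all writes stay inside the 64GB span.
            os.pwrite(fd, buf, written % SPAN)
            written += BLOCK_SIZE
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    time.sleep(IDLE_TIME)    # let the drive cool off and fold its SLC cache
    return written / elapsed / 1e6
```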

Sustained 128kB Sequential Write

On the longer sequential write test, the 1TB 970 EVO takes a clear lead over everything else, even the 1TB PM981. The 500GB model is handicapped by its smaller capacity and smaller SLC cache, but still manages to be significantly faster than the 512GB PM981.

Sustained 128kB Sequential Write (Power Efficiency)
(Power efficiency in MB/s per W; average power in W)

The 970 EVO and PM981 offer almost exactly the same power efficiency on the sequential write test. The 1TB model is slightly less efficient than the WD Black and 960 PRO, while the 500GB model is well behind the MLC-based drives of similar capacity.

The 1TB 970 EVO starts off with a much higher QD1 performance on the sequential write test than the PM981 offers, and at higher queue depths it maintains a slight lead. At 500GB, the 970 EVO's performance oscillates as only some portions of the test are hitting the SLC cache.

Comments

  • qlum - Tuesday, October 16, 2018

    Just a reminder of how the argument against the SSD can change over time:
    Right now the prices, at least here in the Netherlands, are as follows:

    Crucial MX500 (SATA) €160
    HP EX920 (NVMe PCIe x4) €287
    Intel 760p (NVMe PCIe x4) €289
    WD Black (NVMe PCIe x4) €312
    Samsung 970 EVO (NVMe PCIe x4) €269
    Samsung 960 PRO €374

    Suddenly the 970 EVO is the cheapest of the bunch, which makes its value a lot better.

    Of course there are still cheaper NVMe SSDs such as the Intel 660p, but at the 1TB mark it's one of the cheapest NVMe SSDs.
  • modeonoff - Tuesday, April 24, 2018

    Isn't NVMe M.2 SSD performance affected by the Meltdown/Spectre patches?
  • Billy Tallis - Tuesday, April 24, 2018

    Yes, because storage benchmarks make system calls more frequently than almost anything else. Once the updates have been applied to the testbed, I'll be re-testing everything for future reviews. This will take a while, so I've waited until I have several reviews worth of testing completed that can fill the gap before I have new results for a new drive and the older drives it needs to be compared against.

    My preliminary tests of the impact of the patches show that while the scores themselves are affected, the rankings of drives aren't, so the current measurements are still useful for judging which drives are best.
  • Reppiks - Tuesday, April 24, 2018

    Would be nice to see AMD vs Intel post-patches, as it shouldn't affect AMD as much?
  • Infy2 - Tuesday, April 24, 2018

    The NVMe and SATA controllers on AMD's AM4 motherboards are made by ASMedia. Sadly they are somewhat slower than Intel's controllers. Even after the Spectre and Meltdown patches, Intel is still the king of storage performance.
  • Tamz_msc - Tuesday, April 24, 2018

    Ryzen CPUs have a dedicated PCIe x4 link for NVMe drives which bypasses the chipset.
  • bernstein - Tuesday, April 24, 2018

    NVMe is just PCIe x4 + software... that's why a passive PCIe x4 to M.2 adapter works, and why an NVMe M.2 SSD should work at reduced speed over a PCIe x1 or PCIe x2 link. The same goes for PCIe 2.0 links... combine these and you get a working passive mPCIe to M.2 adapter.
  • willis936 - Tuesday, April 24, 2018

    NVMe controllers are made by Intel, AMD, and Microsoft (and whatever analogous set of companies for mobile) because it's just a software stack that runs on CPUs.
  • Kwarkon - Wednesday, April 25, 2018

    Close but not exactly. You mean drivers.
  • HStewart - Tuesday, April 24, 2018

    I personally think people are making a bigger deal of this Meltdown/Spectre stuff than it's worth.

    Yes, performance is one area - but there are other reasons why people purchase a product.

    Especially in heavy graphics or, in this case, storage usage, these patches should have minimal effect.

    To the average customer the effect is not noticeable - how much will they notice a 5% or less slowdown in CPU speed? But changing from a hard drive to one of these SSDs would be a significant improvement.
