Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
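
The duty-cycle bookkeeping above can be sketched in a few lines. This is an illustrative calculation only, not the actual test harness: the function name and the example throughput are invented, but the math follows the description (128MB bursts, idle time sized so the drive is busy 20% of the time).

```python
# Hypothetical sketch of the burst test's idle-time math. Each burst
# transfers 128MB as 128kB operations at QD1; the idle gap between
# bursts is chosen so busy time is 20% of the total cycle.

BURST_BYTES = 128 * 1024 * 1024  # 128MB per burst
DUTY_CYCLE = 0.20                # drive busy 20% of the time

def idle_time_for_burst(throughput_mb_s: float) -> float:
    """Idle seconds needed after one burst to hold the duty cycle."""
    busy_s = (BURST_BYTES / (1024 * 1024)) / throughput_mb_s
    # busy / (busy + idle) = DUTY_CYCLE  =>  idle = busy * (1/DUTY_CYCLE - 1)
    return busy_s * (1 / DUTY_CYCLE - 1)

# A drive reading a burst at 2000 MB/s is busy 0.064s, so it idles 0.256s.
print(round(idle_time_for_burst(2000.0), 3))
```

The faster the drive completes a burst, the longer it idles, so the duty cycle stays constant regardless of drive speed.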

Burst 128kB Sequential Read (Queue Depth 1)

The Samsung PM981 set new records for burst sequential read performance, but the Samsung 970 EVO fails to live up to that standard. The 970 EVO is a substantial improvement over the 960 EVO, but doesn't manage to beat the last generation's fastest MLC drives.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
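
The scoring described above can be sketched as follows; the function and the per-QD numbers are made up for illustration, but the point is that only the low queue depths feed the headline score:

```python
# Illustrative sketch of the sustained test's scoring: queue depths
# from 1 to 32 are all measured, but the reported performance score
# averages only QD1, QD2, and QD4. The sample results are invented.

def sustained_score(results_mb_s: dict[int, float]) -> float:
    """Average throughput over the low queue depths the score uses."""
    return sum(results_mb_s[qd] for qd in (1, 2, 4)) / 3

qd_results = {1: 1800.0, 2: 2400.0, 4: 2700.0,
              8: 3000.0, 16: 3100.0, 32: 3150.0}  # hypothetical MB/s
print(sustained_score(qd_results))  # high-QD results don't move the score
```

Weighting the score toward low queue depths reflects how client workloads actually issue sequential I/O, so a drive that only shines at QD16 and above gets no credit for it here.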

Sustained 128kB Sequential Read

On the longer sequential read test, the Samsung 970 EVO performs far better than the Samsung PM981, indicating that Samsung has made significant firmware tweaks to improve how the drive handles the internal fragmentation left over from running the random I/O tests. The 970 EVO is the fastest TLC-based drive on this test, and the 1TB model even manages to beat the MLC-based 1TB 960 PRO.

Sustained 128kB Sequential Read (Power Efficiency)

The 1TB 970 EVO draws more power during this sequential read test than any other M.2 drive in this mix, but its performance is high enough to leave it with a good efficiency score. The 500GB 970 EVO ends up with below-average efficiency.
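
The efficiency metric behind these charts is simply throughput per watt. A minimal sketch, with invented sample values, shows why a power-hungry drive can still score well:

```python
# Sketch of the efficiency metric used in these charts: average
# throughput divided by average power, in MB/s per watt.
# The sample values below are invented for illustration.

def efficiency_mb_s_per_w(throughput_mb_s: float, power_w: float) -> float:
    """Power efficiency score: higher is better."""
    return throughput_mb_s / power_w

# A drive sustaining 2500 MB/s at 5.0W and a drive sustaining
# 1500 MB/s at 3.0W are equally efficient at 500 MB/s/W.
print(efficiency_mb_s_per_w(2500.0, 5.0))
print(efficiency_mb_s_per_w(1500.0, 3.0))
```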

Both capacities of the Samsung 970 EVO have very steady performance and power consumption across the duration of the sequential read test. This is in contrast to drives like the WD Black and Toshiba XG5 that don't reach full performance until the queue depths are rather high.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write performance of the Samsung 970 EVO tops the charts, with the 500GB model almost reaching 2.5GB/s where the last generation of drives couldn't hit 2GB/s. The WD Black is only slightly behind the 970 EVO.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the 1TB 970 EVO takes a clear lead over everything else, even the 1TB PM981. The 500GB model is handicapped by its smaller capacity and smaller SLC cache, but still manages to be significantly faster than the 512GB PM981.

Sustained 128kB Sequential Write (Power Efficiency)

The 970 EVO and PM981 offer almost exactly the same power efficiency on the sequential write test. The 1TB model is slightly less efficient than the WD Black and 960 PRO, while the 500GB model is well behind the MLC-based drives of similar capacity.

The 1TB 970 EVO starts off with a much higher QD1 performance on the sequential write test than the PM981 offers, and at higher queue depths it maintains a slight lead. At 500GB, the 970 EVO's performance oscillates as only some portions of the test are hitting the SLC cache.

  • bji - Tuesday, April 24, 2018 - link

    You're kind of arguing against benchmarking in general here. Almost no benchmarks are directly relevant to any one person's intended use of the product. Benchmarks are not useful in that they tell me exactly how much performance to expect when running one specific program on one specifically configured hardware setup. They are useful because they allow extrapolation from measured results to expected results on workloads that actually matter to the reader.

    So I don't agree with your sentiment that Meltdown/Spectre are not worth consideration for their effect on system performance.

    However, I am not sure that I would include Meltdown/Spectre considerations in a specific SSD review. I think these considerations deserve to be in a CPU review.
  • bji - Tuesday, April 24, 2018 - link

    Also, may I say that users generally will not notice a 5% slowdown in any particular task. However, we've already established that readers care about even minimal differences in benchmark results, because they routinely call a 5% difference a clear indication of a "winner" and a "loser" for that benchmark. So for the purposes of performance reviews, the 5% difference contributed by Meltdown/Spectre definitely matters.
  • Flying Aardvark - Tuesday, April 24, 2018 - link

    It's up to a 50% reduction in storage performance, not 5%. You'll feel a 50% loss when it happens to you.
  • cmdrdredd - Tuesday, April 24, 2018 - link

    What you are saying is misleading. SATA performance is nearly identical (within 2% difference for me). It's NVMe drives that take the hit, but even still they are faster than everything else. Processor speed is unaffected for me as well. Tested multiple times with various benchmarks both ways and it was within margin of error. I don't see the problem to be honest.
  • LurkingSince97 - Wednesday, April 25, 2018 - link

    Tell that to my I/O intensive servers that suddenly have 30% less throughput.
  • modeonoff - Tuesday, April 24, 2018 - link

    Yes but I am not an average customer. Performance is important for me.
  • Ryun - Tuesday, April 24, 2018 - link

    For everyday tasks do you guys notice an improvement in responsiveness of NVMe SSDs versus SATA SSDs?

    The transfer rates are definitely impressive; I've just never seen a review that made me want to upgrade my 500GB SATA SSD for development/gaming/maintenance tasks on my machine. Boot times and program launches seem to be within a couple of seconds of one another between NVMe and SATA. Nothing like the jump from HDDs to SSDs.
  • HollyDOL - Tuesday, April 24, 2018 - link

    I wonder myself. I've got a Vertex 3 (240GB), and while I'm not permanently watching perf counters, I don't see many cases of 100% load. I wonder if I would be able to see a difference if I moved to some "best enthusiast m.2/pcie ssd available". (The rest of the machine is fully capable.)
  • eek2121 - Tuesday, April 24, 2018 - link

    I notice it in certain tasks. My system can get from cold boot to the login screen in about 3 seconds for instance. Editing video is much faster as well.
  • imaheadcase - Tuesday, April 24, 2018 - link

    I wouldn't say it's a huge performance gain; it really depends on the tasks you work with. If you move big files around in a file manager a lot, sure. But for most people, no. It makes sense if you're upgrading anyway, though.
