Note: All of our previous testing was done on an Intel testbed. With the move to PCIe 4.0, we have upgraded to a Ryzen testbed; devices tested under Ryzen in time for this review are identified in the charts.

Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads from a 16GB span of the drive. The total data read is 1GB.
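For readers who want to reproduce the general shape of this workload, the sketch below is a rough Python approximation rather than our actual test harness: it assumes a Linux system, uses a placeholder device path, relies on O_DIRECT to bypass the page cache, and only mimics the burst timing (QD1 4kB reads in 32MB bursts at roughly a 20% duty cycle over a 16GB span). It does not measure power.

```python
import mmap
import os
import random
import time

DEV = "/dev/nvme0n1"                      # placeholder device path (must be >= 16GB)
BLOCK = 4 * 1024                          # 4kB random reads
BURST_BYTES = 32 * 1024 * 1024            # each burst reads 32MB in total
SPAN_BYTES = 16 * 1024 * 1024 * 1024      # reads come from a 16GB span of the drive
TOTAL_BYTES = 1 * 1024 * 1024 * 1024      # 1GB of data read over the whole test
DUTY_CYCLE = 0.20                         # bursts take ~20% of wall-clock time

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)   # O_DIRECT bypasses the OS page cache
buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer, needed for O_DIRECT

def run_burst() -> float:
    """Issue one 32MB burst of 4kB random reads at queue depth 1; return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(BURST_BYTES // BLOCK):
        offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK
        os.preadv(fd, [buf], offset)           # one outstanding read at a time (QD1)
    return time.perf_counter() - start

done = 0
while done < TOTAL_BYTES:
    busy = run_burst()
    done += BURST_BYTES
    time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)   # idle so bursts are ~20% of the time

os.close(fd)
```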

Burst 4kB Random Read (Queue Depth 1)

The burst random read latency of Samsung's 128L TLC as used in the 980 PRO is faster than their earlier TLC, but still lags behind some of the competition—as does their 64L MLC used in the 970 PRO. Our new Ryzen testbed consistently imposes a bit more overhead on drives than our older Skylake-based testbed, and PCIe Gen4 bandwidth is no help here.

Our sustained random read performance is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
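The structure of the queue depth sweep can be sketched roughly as follows. Again, this is an illustration rather than our harness: queue depth is approximated with a pool of synchronous reader threads, the device path is a placeholder, and power measurement is omitted entirely.

```python
import mmap
import os
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

DEV = "/dev/nvme0n1"                          # placeholder device path (must be >= 64GB)
BLOCK = 4 * 1024                              # 4kB random reads
SPAN_BYTES = 64 * 1024 * 1024 * 1024          # reads cover a 64GB span of the drive
TIME_LIMIT_S = 60                             # one minute per queue depth...
BYTE_LIMIT = 32 * 1024 * 1024 * 1024          # ...or 32GB, whichever comes first
COOLDOWN_S = 60                               # idle time between queue depths

def reader(deadline: float, byte_budget: int) -> int:
    """One synchronous worker: 4kB random reads until its time or byte budget runs out."""
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)                # page-aligned buffer for O_DIRECT
    done = 0
    try:
        while time.perf_counter() < deadline and done < byte_budget:
            offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK
            done += os.preadv(fd, [buf], offset)
    finally:
        os.close(fd)
    return done

def measure_qd(qd: int) -> float:
    """Approximate queue depth `qd` with `qd` synchronous threads; return MB/s."""
    start = time.perf_counter()
    deadline = start + TIME_LIMIT_S
    with ThreadPoolExecutor(max_workers=qd) as pool:
        futures = [pool.submit(reader, deadline, BYTE_LIMIT // qd) for _ in range(qd)]
        total = sum(f.result() for f in futures)
    return total / (time.perf_counter() - start) / 1e6

throughput = {}
for qd in (1, 2, 4, 8, 16, 32):
    throughput[qd] = measure_qd(qd)
    time.sleep(COOLDOWN_S)                    # let the drive cool off before the next step

# Primary score: the average of the low queue depths, as described above.
primary_score = statistics.mean(throughput[qd] for qd in (1, 2, 4))
print(throughput, primary_score)
```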

Sustained 4kB Random Read

On the longer random read test that adds in some slightly higher queue depths, the 980 PRO catches up to the 970 PRO's performance while the Phison-based Seagate drives fall behind. The SK hynix drive and the two drives with the Silicon Motion SM2262EN controller remain the fastest flash-based drives. Our Ryzen testbed has a slight advantage over the old Skylake testbed here, but it's due to the faster CPU rather than the extra PCIe bandwidth: the PCIe Gen3 drives benefit as well.

Sustained 4kB Random Read (Power Efficiency) [chart: power efficiency in MB/s per W; average power in W]

Samsung's power efficiency on the random read test is clearly improved with the new NAND and the 8nm Elpis controller, but that really only brings the 980 PRO's efficiency up to par at best. The SMI-based drives from Kingston and ADATA are a bit more efficient, and the SK hynix Gold P31 with its extremely efficient 4-channel controller is still far beyond the 8-channel competitors.
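For context, the efficiency metric in these charts is simply throughput divided by average power draw. As a purely hypothetical illustration (these numbers are made up, not taken from our charts): a drive sustaining 400 MB/s while drawing 2.5 W works out to 160 MB/s per watt, while one sustaining 450 MB/s at 4.5 W works out to only 100 MB/s per watt despite being faster in absolute terms.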

Compared to the 970 EVO Plus, the 980 PRO's random read performance is increased across the board and power consumption is reduced, but the differences are slight. The 980 PRO's improvement is greatest at the highest queue depths we test, and running on our new Ryzen testbed helps a bit more—but the PCIe 3 SK hynix Gold P31 gets most of the same benefits at high queue depths and matches the QD32 random read throughput of the 980 PRO. We would need to use multiple CPU cores for this test in order to reach the performance levels where the 980 PRO could build a big lead over the Gen3 drives. The 250GB model shows more significant improvement than the 1TB model, but again this is mainly at high queue depths.

The random read performance and power consumption of the 980 PRO start out in mundane territory for the lower queue depths. At the highest queue depths tested, it is largely CPU-limited and stands out only because we haven't tested many drives on our new, faster Ryzen testbed. The PCIe Gen3 SK hynix Gold P31 is still able to keep pace with the 980 PRO under these conditions, and it still uses far less power than the competition.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
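A rough Python approximation of this write burst pattern is shown below, mirroring the read-burst sketch earlier. The same caveats apply, plus one more: random writes to a raw device are destructive, so this should only ever be pointed at a scratch drive. The device path is a placeholder, the duty cycle is our assumption that idle time is handled like the read test, and power is not measured.

```python
import mmap
import os
import random
import time

DEV = "/dev/nvme0n1"                      # placeholder; WRITES ARE DESTRUCTIVE, scratch drive only
BLOCK = 4 * 1024                          # 4kB random writes
BURST_BYTES = 4 * 1024 * 1024             # each burst writes 4MB in total
TOTAL_BYTES = 128 * 1024 * 1024           # 128MB written over the whole test
SPAN_BYTES = 16 * 1024 * 1024 * 1024      # writes land within a 16GB span of the drive
DUTY_CYCLE = 0.20                         # assumption: idle time handled like the read test

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)                # page-aligned buffer, needed for O_DIRECT
buf.write(os.urandom(BLOCK))              # fill the buffer with a non-zero payload

written = 0
while written < TOTAL_BYTES:
    start = time.perf_counter()
    for _ in range(BURST_BYTES // BLOCK):         # one 4MB burst at queue depth 1
        offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK
        os.pwritev(fd, [buf], offset)
    busy = time.perf_counter() - start
    written += BURST_BYTES
    time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)   # idle between bursts

os.close(fd)
```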

Burst 4kB Random Write (Queue Depth 1)

The burst random write performance of the Samsung 980 PRO is an improvement over its predecessors, but Samsung's SLC write cache latency is still significantly slower than many of their competitors. PCIe Gen4 support isn't a factor for the 980 PRO here at QD1, and the two capacities of the 980 PRO seem to disagree about whether the other differences between our old and new testbeds help or hurt. Meanwhile, the Phison E16-based Seagate FireCuda 520 does seem to benefit significantly on our Gen4 testbed, where it takes a clear lead.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive. The drive is given up to one minute of idle time between queue depths to allow write caches to be flushed and the drive to cool down.

Sustained 4kB Random Write

On the longer random write test with some higher queue depths, it's clearer that our new Ryzen testbed performs a bit better than our old Skylake testbed, and that PCIe Gen4 support is only responsible for part of that advantage. Even using PCIe Gen4, the 1TB 980 PRO is not able to establish a clear lead over the PCIe Gen3 drives and is a bit slower than the Phison E16 drive, but the smaller 250GB 980 PRO is a big improvement over the 970 EVO Plus thanks to the larger SLC cache (now up to 49GB, compared to 13GB).

Sustained 4kB Random Write (Power Efficiency) [chart: power efficiency in MB/s per W; average power in W]

The 980 PRO brings significant and much-needed power efficiency improvements over its predecessors, and takes first place among the high-end 8-channel SSDs. But the 4-channel SK hynix Gold P31 still has a wide lead, since its random write performance is very competitive while its power requirements are much lower.

The 1TB 980 PRO offers basically the same performance profile on this test as its predecessors from the 970 generation: performance tops out around QD4, where CPU overhead becomes the limiting factor. (This test is single-threaded, so higher throughput could be achieved on either testbed using a multi-threaded benchmark, but real-world applications need a lot of CPU power left over to actually *do something* with the data they're shuffling around.)

The 250GB 980 PRO briefly reaches the same peak performance as the 1TB model, but in the second half of the test it still overflows the SLC cache. It's a big improvement over the 250GB 970 EVO Plus, but the low capacity still imposes a significant performance handicap when writing a lot of data.

At QD1 the 980 PRO's random write performance is still in SATA territory, but it quickly moves to much higher performance ranges without much increase in power consumption. At the highest queue depths tested, the 1TB 980 PRO's performance is tied with the other CPU-limited drives and its power consumption is about midway between the fairly power-hungry Phison E16 drive and the stunningly efficient SK hynix drive.

Comments

  • 5j3rul3 - Wednesday, September 23, 2020 - link

    980 Pro (X)
    980 EVO (O)
  • DarkMatter69 - Wednesday, September 23, 2020 - link

    Could the comparison include some of the fastest M.2 SSDs already on the market, e.g. Sabrent Rocket 4, Corsair MP600, Aorus NVM, etc.? The comparison drives used in this article are not the fastest ones, so it is difficult to understand how good this M.2 drive is vs. all the other top ones already on the market. Thank you!
  • Slash3 - Wednesday, September 23, 2020 - link

    I'd also like to see a few more models make their way through the Anandtech tests, but the Seagate Firecuda 520 in this review is essentially representative of the models you listed. They're all based on what are effectively reference Phison E16 designs and can even be cross-flashed with the same firmware. Upcoming Phison E18 based drives should shake things up a little bit more and will be the true point of comparison for the 980 Pro.
  • Koenig168 - Wednesday, September 23, 2020 - link

    980 Evo masquerading as 980 Pro.
  • yetanotherhuman - Wednesday, September 23, 2020 - link

    TLC != Pro. Forget it. 2-bit or bust.
  • twtech - Wednesday, September 23, 2020 - link

    The write endurance on this drive is identical to the 970 EVO. Yeah, it may be a bit faster - especially being PCIe 4.0 - but it's not like you can use it in ways that you can't use the (much cheaper) non-Pro drives now.
  • Whiteknight2020 - Wednesday, September 23, 2020 - link

    And you are entirely missing the point. "Pro" is a generic, meaningless marketing term. Just look on Amazon for "pro"-branded items, ranging from cheap tat to quality (for specific use cases) items. You are choosing to interpret the way a marketing/branding term has previously been applied to a product by a manufacturer as having a fixed value and meaning, which it does not; it is merely branding and ascribes no specific technical, functional or physical properties to the product. That you are entitled to be aggrieved at the way the branding is used is not in question; what is in question is your giving the branding a meaning which it does not have.
  • XabanakFanatik - Wednesday, September 23, 2020 - link

    Pro is not a generic, meaningless marketing term. Pro is a branding on Samsung SSDs that Samsung has been cultivating for a decade, which has a very well-defined meaning. Samsung Pro SSDs are 2-bit MLC with sustained write performance and high endurance.

    This drive has none of the three things that have defined a Samsung Pro SSD for a decade.

    They just threw a decade of brand building away with one product.
  • edzieba - Wednesday, September 23, 2020 - link

    The near-zero change in random performance at QD1 for PCIe 3 vs. 4 was expected, but the very small bump in high-QD sequential transfers was not. It's abundantly clear that PCIe 4 bandwidth, at least for desktop use, has no practical applications as of yet.
  • lightningz71 - Wednesday, September 23, 2020 - link

    I wonder how the lower-end NVMe drives will fare when they move to PCIe 4.0? One of the hangups of using host drive map caching was the slower data path between the drive controller and host memory imposed by having to cross the PCIe 3.0 lanes. Eventually, cacheless controllers will move to PCIe 4.0. Will it be cheaper to make a cacheless PCIe 4.0 controller that actually uses all 4 lanes (some of the cheapest PCIe 3.0 cacheless controllers only used 2 lanes) than to stay with a more mature PCIe 3.0 controller that includes a modest amount of cache? Will the performance be close enough to make that decision moot?
