Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
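To make the duty-cycle pacing concrete, here is a minimal sketch of the burst pattern in Python, with fio standing in for our actual test harness. The device path is a placeholder and the flags are ordinary fio options, not our real job files; after each 128MB burst, the script idles four times as long as the burst took, which keeps the active time at 20% of the total.

```python
import json
import subprocess
import time

DEV = "/dev/nvme0n1"  # placeholder; point this at a disposable test drive
DUTY_CYCLE = 0.20

def read_burst(offset_mb: int) -> float:
    """Issue one 128MB burst of 128kB sequential reads at QD1; return active seconds."""
    out = subprocess.run(
        ["fio", "--name=burst", f"--filename={DEV}", "--rw=read",
         "--bs=128k", "--iodepth=1", "--direct=1", "--ioengine=libaio",
         f"--offset={offset_mb}m", "--size=128m", "--output-format=json"],
        capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]["read"]
    return job["runtime"] / 1000.0  # fio reports runtime in milliseconds

for i in range(8):  # eight bursts = 1GB total
    active = read_burst(i * 128)
    # Idle so the burst accounts for 20% of the period: idle = 4x active time.
    time.sleep(active * (1 - DUTY_CYCLE) / DUTY_CYCLE)
```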

Burst 128kB Sequential Read (Queue Depth 1)

The Samsung PM981 set new records for burst sequential read performance, but the Samsung 970 EVO fails to live up to that standard. The 970 EVO is a substantial improvement over the 960 EVO, but doesn't manage to beat the last generation's fastest MLC drives.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
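Note that the headline scores weight the low queue depths most relevant to client workloads; the higher queue depths are tested and graphed but do not factor into the score. A short sketch of the scoring math, with made-up numbers purely for illustration:

```python
# Hypothetical per-queue-depth results; real values come from the test run.
throughput_mbps = {1: 1800, 2: 2400, 4: 2900, 8: 3100, 16: 3150, 32: 3150}
power_watts = {1: 4.2, 2: 4.8, 4: 5.4, 8: 5.6, 16: 5.6, 32: 5.6}

# The reported scores average only QD1, QD2, and QD4.
perf_score = sum(throughput_mbps[qd] for qd in (1, 2, 4)) / 3
power_score = sum(power_watts[qd] for qd in (1, 2, 4)) / 3
efficiency = perf_score / power_score  # MB/s per watt, as in the charts

print(f"performance: {perf_score:.0f} MB/s")
print(f"power: {power_score:.2f} W")
print(f"efficiency: {efficiency:.0f} MB/s/W")
```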

Sustained 128kB Sequential Read

On the longer sequential read test, the Samsung 970 EVO performs far better than the Samsung PM981, indicating that Samsung has made significant firmware tweaks to improve how the drive handles the internal fragmentation left over from running the random I/O tests. The 970 EVO is the fastest TLC-based drive on this test, and the 1TB model even manages to beat the MLC-based 1TB 960 PRO.

Sustained 128kB Sequential Read (Power Efficiency)

The 1TB 970 EVO draws more power during this sequential read test than any other M.2 drive in this mix, but its performance is high enough to leave it with a good efficiency score. The 500GB 970 EVO ends up with below-average efficiency.

Both capacities of the Samsung 970 EVO have very steady performance and power consumption across the duration of the sequential read test. This is in contrast to drives like the WD Black and Toshiba XG5 that don't reach full performance until the queue depths are rather high.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write performance of the Samsung 970 EVO tops the charts, with the 500GB model almost reaching 2.5GB/s where the last generation of drives couldn't hit 2GB/s. The WD Black is only slightly behind the 970 EVO.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
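Structurally, the sustained test is a loop over queue depths with caps on both time and data transferred, plus idle time between steps. A sketch of the control flow in Python, again with fio as a stand-in for our internal tooling and the device path as a placeholder:

```python
import subprocess
import time

DEV = "/dev/nvme0n1"  # placeholder; the real test writes to a dedicated drive

for qd in (1, 2, 4, 8, 16, 32):
    subprocess.run(
        ["fio", f"--name=seqwrite-qd{qd}", f"--filename={DEV}", "--rw=write",
         "--bs=128k", f"--iodepth={qd}", "--direct=1", "--ioengine=libaio",
         "--size=64g",       # confine the test to a 64GB span
         "--io_size=32g",    # stop after 32GB written...
         "--runtime=60"],    # ...or after one minute, whichever comes first
        check=True)
    # Up to a minute of idle lets the drive cool off and run garbage collection.
    time.sleep(60)
```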

Sustained 128kB Sequential Write

On the longer sequential write test, the 1TB 970 EVO takes a clear lead over everything else, even the 1TB PM981. The 500GB model is handicapped by its smaller capacity and smaller SLC cache, but still manages to be significantly faster than the 512GB PM981.

Sustained 128kB Sequential Write (Power Efficiency)

The 970 EVO and PM981 offer almost exactly the same power efficiency on the sequential write test. The 1TB model is slightly less efficient than the WD Black and 960 PRO, while the 500GB model is well behind the MLC-based drives of similar capacity.

The 1TB 970 EVO starts off with a much higher QD1 performance on the sequential write test than the PM981 offers, and at higher queue depths it maintains a slight lead. At 500GB, the 970 EVO's performance oscillates as only some portions of the test are hitting the SLC cache.

Comments

  • cfenton - Tuesday, April 24, 2018 - link

    I've been meaning to ask about this for a while, but why do you order the performance charts based on the 'empty' results? In most of my systems, the SSDs are ~70% full most of the time. Does performance only degrade significantly if they are 100% full? If not, it seems to me that the 'full' results would be more representative of the performance most users will see.
  • Billy Tallis - Tuesday, April 24, 2018 - link

    At 70% full you're generally going to get performance closer to fresh out of the box than to 100% full. Performance drops steeply as the last bits of space are used up. At 70% full, you probably still have the full dynamic SLC cache size usable, and there's plenty of room for garbage collection and wear leveling.

    When it comes to manual overprovisioning to prevent full-drive performance degradation, I don't think I've ever seen someone recommend reserving more than 25% of the drive's usable space unless you're trying to abuse a consumer drive with a very heavy enterprise workload.
  • cfenton - Tuesday, April 24, 2018 - link

    Thanks for the reply. That's really helpful to know. I didn't even think about the dynamic SLC cache.
  • imaheadcase - Tuesday, April 24, 2018 - link

    So I'm wondering: I've got a small 8TB server I use for media/backup. While I know I'm limited by network bandwidth, would replacing the drives with SSDs make any impact at all?
  • Billy Tallis - Tuesday, April 24, 2018 - link

    It would be quieter and use less power. For media archiving over GbE, the sequential performance of mechanical drives is adequate. Incremental backups may make more random accesses, and retrieving a subset of data from your backup archive can definitely benefit from solid state performance, but it's probably not something you do often enough for it to matter.

    Even with the large pile of SSDs I have on hand, my personal machines still back up to a home server with mechanical drives in RAID.
  • gigahertz20 - Tuesday, April 24, 2018 - link

    @Billy Tallis Just out of curiosity, what backup software are you using?
  • enzotiger - Tuesday, April 24, 2018 - link

    With the exception of sequential write, there are some significant gaps between your numbers and Samsung's specs. Any clue?
  • anactoraaron - Tuesday, April 24, 2018 - link

    Honest question here: which of these tests do more than just test the SLC cache? That's a big thing to test, since some of these other drives are MLC and won't slow down once writes spill past any SLC caching.
  • RamGuy239 - Tuesday, April 24, 2018 - link

    So these are sold and marketed with IEEE 1667 / Microsoft eDrive support from the get-go, unlike the Samsung 960 EVO and Pro, which had this promised but only got it at the end of their life cycle (in the latest firmware update).

    That's good and all. But does it really work? The current implementation on the Samsung 960 EVO and Pro has a major issue: it doesn't work when the disk is used as a boot drive. Samsung keeps claiming this is due to an NVMe module bug in most UEFI firmwares, and that a fix will require motherboard manufacturers to provide UEFI firmware updates.

    Whether this is indeed true is hard for me to say, but that's what Samsung themselves claim over at their own support forums.

    All I know is that I can't get either my Samsung 960 EVO 1TB or my Samsung 960 Pro 1TB to use hardware encryption with BitLocker on Windows 10 when used as a boot drive, on either my Asus Maximus IX Apex or my Asus Maximus X Apex, both running the latest BIOS/UEFI firmware update.

    When used as a secondary drive hardware encryption works as intended.

    With this whole mess around BitLocker/IEEE 1667/Microsoft eDrive on the Samsung 960 EVO and Pro, how does it all fare with these new ones? Is it really an issue with NVMe support in most UEFI firmwares, requiring fixed UEFI releases from motherboard manufacturers, or do the 970 EVO and Pro work with BitLocker as a boot drive without new UEFI firmware releases?
  • Palorim12 - Tuesday, April 24, 2018 - link

    Seems to be an issue with BIOS vendors like American Megatrends, Phoenix, etc., and Samsung has stated they are working with them to resolve the issue.
