Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
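As a rough host-side illustration of what a QD1 burst test looks like (not the actual implementation of our suite), the Python sketch below issues short bursts of 4kB reads against a pre-existing test file, sleeps between bursts, and records per-burst throughput. The file path, idle time, and the fact that it reads through the page cache (no O_DIRECT) are simplifying assumptions for illustration only.

```python
import os
import random
import time

# Hypothetical sketch of a QD1 burst random-read test: 32 bursts of up to
# 64MB of 4kB reads, separated by idle time. Parameters and file path are
# illustrative; a real test tool would bypass the page cache (O_DIRECT).
PATH = "testfile.bin"        # assumed pre-existing, multi-GB test file
BLOCK = 4096                 # 4kB random IOs
BURST_BYTES = 64 * 1024**2   # up to 64MB per burst
BURSTS = 32
IDLE_SECONDS = 2.0           # idle time between bursts (assumption)

def burst_random_read(fd, file_size):
    """One burst of synchronous 4kB reads at queue depth 1; returns MB/s."""
    start = time.perf_counter()
    for _ in range(BURST_BYTES // BLOCK):
        offset = random.randrange(0, file_size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)   # one outstanding IO at a time
    return BURST_BYTES / (time.perf_counter() - start) / 1e6

fd = os.open(PATH, os.O_RDONLY)
file_size = os.fstat(fd).st_size
rates = []
for _ in range(BURSTS):
    rates.append(burst_random_read(fd, file_size))
    time.sleep(IDLE_SECONDS)
os.close(fd)
print(f"mean burst throughput: {sum(rates) / len(rates):.1f} MB/s")
```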

[Charts: QD1 Burst IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

Our burst IO tests show little to no performance difference between the Samsung 870 EVO and other top SATA SSDs. The 1MB sequential transfers are already hitting the SATA throughput limit even at QD1, and the 4kB random IOs are at best marginally improved over Samsung's previous generation. Samsung's slight improvement to random read latency is enough to catch up to Micron's, as represented by the Crucial MX500, but a 10% gain hardly matters when NVMe drives can double this performance.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
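To give a sense of what exercising "a range of queue depths" means in practice, here is a hypothetical sketch that approximates higher queue depths with worker threads, each issuing synchronous 4kB random reads for a fixed interval. The depths, duration, and file path are assumptions, and real benchmarking tools (fio, for example) control queue depth more precisely with asynchronous IO.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical queue-depth sweep: approximate QD=N with N threads, each
# doing synchronous 4kB random reads until a deadline. Illustrative only.
PATH = "testfile.bin"        # assumed pre-existing test file
BLOCK = 4096
SECONDS_PER_DEPTH = 10.0

def read_worker(fd, file_size, deadline):
    ios = 0
    while time.perf_counter() < deadline:
        offset = random.randrange(0, file_size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)   # pread releases the GIL, so threads overlap
        ios += 1
    return ios

fd = os.open(PATH, os.O_RDONLY)
file_size = os.fstat(fd).st_size
for qd in (1, 2, 4, 8, 16, 32):
    deadline = time.perf_counter() + SECONDS_PER_DEPTH
    with ThreadPoolExecutor(max_workers=qd) as pool:
        total_ios = sum(pool.map(lambda _: read_worker(fd, file_size, deadline), range(qd)))
    print(f"QD{qd:>2}: {total_ios / SECONDS_PER_DEPTH:,.0f} IOPS")
os.close(fd)
```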

[Charts: Sustained IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

On the longer synthetic tests that bring in some slightly higher queue depths, the improved random read performance of the 870 EVO is a bit clearer. In one sense it is impressive to see Samsung squeeze a bit more performance out of the same SATA bottleneck, but we're still talking about small incremental refinements where NVMe enables drastic improvements. Aside from random reads, the 870 EVO's performance improvements are exceedingly small, and it should be considered essentially tied with most other recent mainstream TLC SATA drives.

[Charts: Sustained IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

Power consumption is one area where Samsung could theoretically offer more significant improvements despite still being constrained by the same SATA interface, but the 870 EVO doesn't deliver any meaningful gains there. The 4TB model is consistently a bit less efficient than the 1TB model on account of having more memory to keep powered up, but when comparing the 1TB model against its predecessor and competing drives, there's nothing particularly noteworthy about the 870 EVO. SK hynix's Gold S31 has a modest efficiency advantage for random IO, while Samsung is technically the most efficient of these SATA drives for sequential IO.

[Charts: Queue depth scaling for Random Read, Random Write, Sequential Read, Sequential Write]

The queue depth scaling behavior of the 870 EVOs is almost identical to that of the 860 EVOs, and still quite typical for mainstream SATA drives. For random reads the 870 EVOs saturate around QD16, while for random writes QD4 suffices. On the sequential IO tests there's only a small performance gain from QD1 to QD16, and the more interesting question is how stable performance is through the rest of the sequential tests. The 1TB 870 EVO seems to run out of SLC cache a bit earlier than the 860 EVO when the sequential write test is run on an 80% full drive, but the 4TB model has plenty of cache to finish out that test at full speed.

Random Read Performance Consistency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
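As a small, self-contained illustration (with made-up numbers, not measured data), the sketch below shows how mean and 99th percentile latency are derived from per-IO latency samples, and why a few slow outliers can leave the average looking fine while blowing up the tail.

```python
import statistics

# Reduce per-IO latency samples (in microseconds) to mean and ~99th percentile.
def latency_stats(samples_us):
    ordered = sorted(samples_us)
    p99_index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return statistics.mean(ordered), ordered[p99_index]

# Made-up example: a drive that mostly answers in ~90us but occasionally stalls.
samples = [90] * 990 + [400] * 9 + [2500]
mean_us, p99_us = latency_stats(samples)
print(f"mean = {mean_us:.0f} us, 99th percentile = {p99_us} us")
# -> the mean stays near 95us while the 99th percentile jumps to 400us
```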

Consistent with most of our other read performance tests, the Samsung 870 EVO shows slightly better average and 99th percentile random read latencies than most of its SATA competition. Even some of the entry-level NVMe drives that can deliver higher random read throughput than is possible for the 870 EVO still have clearly higher latency across most or all of the throughput range that the 870 EVO can cover. A QLC-based or DRAMless TLC NVMe SSD can potentially offer far higher throughput than any SATA SSD, but clearly beating the 870 EVO on both throughput and latency requires stepping up to a more mainstream NVMe design with DRAM and TLC NAND.

Comments

  • hansmuff - Wednesday, February 17, 2021 - link

    Especially the 4TB seems like a fantastic games drive to me: really good performance at a great price.
  • ekon - Wednesday, February 17, 2021 - link

    What (my) world needs is an absolutely rubbish but cheap high capacity SSD. As many cell levels as it takes.
  • jarablue - Wednesday, February 17, 2021 - link

    I have 2 SATA WD Blue 1TB SSDs for game installs. They work totally fine and load games fast as hell. SATA SSDs are still on point for large game storage space.
  • Spunjji - Friday, February 19, 2021 - link

    It'll be interesting to see if this changes along with software being developed for the new generation of consoles.
  • Duncan Macdonald - Thursday, February 18, 2021 - link

    It takes 10 gigabit Ethernet to exceed the speed of SATA - the SATA limit of 600 MBytes/sec is 4800 Mbits/sec - so allowing for TCP/IP overhead, even a 5GbE link cannot carry data as fast as a SATA link.

    SATA is still the only effective way of increasing the internal storage of systems that have no free NVMe slots available.
  • Tomatotech - Thursday, February 18, 2021 - link

    Or a USB3 / USB-C drive taped / velcro’d somewhere in the PC case.

    Could contain either a 2.5” HDD, a 2.5” SSD, or an M.2 NVMe SSD in a USB enclosure. I’ve done that a couple of times with small cases. Works perfectly fine, especially with solid state media.
  • edzieba - Thursday, February 18, 2021 - link

    Cable-attached storage still has the massive advantage that you can connect more drives than you have board area for. A regular motherboard may have one M.2 slot; paying through the nose may net you two slots. Selling off a few organs may buy a halo board with 3 or more slots. Or you may need a bloated riser card and occupy a 16x slot (no go for ITX). Or... you can use the at least 4 SATA ports on even the most bargain basement board (with 8 being hardly uncommon) to stuff more capacity in as needed with ease.
    SATA Express was the right idea at the wrong time: a x1 or x2 PCIe interface to allow NVMe, with PHY fallback to SATA, and at entirely acceptable bandwidth for most uses (stick in an m.2 boot drive for OS and key applications) would be a perfect upgrade path for consumer SATA use. If it integrated power transport too (for SSD only, block mechanical drives with keying at the device end cable) it would simplify cable routing too. But that boat has probably sailed for good, unless enterprise just happens to adopt such a connector and drive architecture, which seems unlikely with density demands ever increasing. I don't think we'll see any new internal high bandwidth cabling standards other than PCIe link rate updates to the persistently high-priced OCuLink.
  • abufrejoval - Thursday, February 18, 2021 - link

    I remember experimenting with Compact Flash cards on PCMCIA and IDE adapters, trying to run Windows XP and Linux on them: sure, there were no seeks, but at the time I didn’t understand the erase block issues yet and was just befuddled by how some I/O seemed so slow it made XP crash.

    When FusionIO came out with their first devices, I jumped on those and they were basically a precursor of NVMe, ouch, is it 13 years already?

    I’ve celebrated SATA SSDs, still have a 160GB Postville under current in a firewall, that may last another 10 years. I lost count, but there may be 30-50 SATA SSD in the house, some still used as „boot stick“ with only 128GB, most 1/4 to 1TB, some with 2TB.

    The only 4TB SSD here is actually a RAID-0 using 4x1TB, because I tend to have plenty of SATA ports left over in all these tower chassis, that used to house 3.5“ HDDs. The last system with 2.4 TB FusionIO card also still sports 4x 200GB Intel enterprise DC3700 MLC drives, just because that X99 board has 10 SATA ports, so why not use them in a RAID so vastly overprovisioned that it will never die?

    Like those ancient HDDs, these SSDs move around between boxes, almost like the „Winchesters“ or removable hard disks in the old days (been around since PDP-11/34). I use carrier-less hot swap bays on all systems, SATA caddies on laptops, or USB3/SATA cases, for their flexibility: milliseconds saved on storage benchmarks don’t compare to productivity, and not having to disassemble a workstation with a 3-slot GPU and giant fans for low noise just to switch to another OS is a real bonus!

    Most of the time SATA-SSD really is quite simply fast enough. If not, it’s the architecture, stupid!

    And some of the more agonizing waits, turn out to be not at all related to SATA vs. NVMe…

    ARK Survival Evolved is one of my favorite games, because I play that with my kids. Its main downside is loading time: it just takes ages and ages to launch! Sure, it has 200GB of data with all these extra maps and extensions, but perhaps more importantly, it’s 100,000 files.

    I got really tired of waiting for those minutes it took to load that from HDDs, so I invested in one of those „giant“ 1TB (SATA) SSDs at the time… it still took a while to load, but the improvement was significant (less than one CMU or coffee mug unit). Now, since we all play the game, I tried to be smart and put it on the network in the 4TB JBOD/RAID0 I mentioned, and then upgraded to a 10Gbit network to match the performance.

    Alas, the load times across the 10Gbit network were horrific! Far worse than the single 2TB HDD I had used in the beginning.

    Then one day I ran ARK on Linux, within a larger experiment on the quality of Linux gaming. I didn’t have a big enough SSD around to store the game data, so instead I used one of those 2TB WD HDD hunks from 15 years back that just refuse to fail.

    And then I almost fell off my chair when ARK launched faster off that HDD than I had ever seen it launch from an NVMe drive (yes, of course, I had to have some of those, too).

    Long story slightly abbreviated: the annoyingly slow ARK launches were never a storage issue, but a Windows file access overhead issue. Linux truly put Windows to shame that day! It managed to load those tens of thousands of files ARK requires much faster from an HDD than Windows managed from NVMe storage, and way, way, WAY faster than Windows (10Gbit) networking from a SATA SSD RAID0.

    Now, Windows and Windows 10Gbit networking isn’t always and by default orders of magnitude slower than Linux. At least not when you’re dealing with a few large files. But when your game (or application) happens to use 100,000 small files instead of 10 big ones, be advised to test the OS before you blame it on the storage!

    The general protocol and latency overhead of SATA vs. (PCIe) NVMe is no doubt significant.
    As are the benefits of a well established form factor with all those ports and enclosures I already own, and the flexibility I’ve learned to rely on. Dogma rarely helps, and I find myself buying SATA SSDs over NVMe once the default system boot storage requirements for every box have already been filled (with NVMe, if capable). Mostly because a) SATA SSDs are really fast enough already (the real bottlenecks are architecture), and b) aggregating those lower capacity NVMe sticks into RAID0 to extend their usability is really, really, really expensive, at least so far, because those PCIe switches are so overpriced by Avago/Broadcom, while SATA multiplexers are cheap and mostly built into the PCH you already own.
  • zodiacfml - Thursday, February 18, 2021 - link

    I don't mind SATA being limited, since only huge file transfers are affected, not random performance or game/OS loading. The form factor needs improving though. A quick, cheap way to do that is to simply cut the 2.5" form factor to half or smaller, since it would still leave a pair of mounting holes for screws.
