Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: Mixed IO Performance - Mixed Random IO / Mixed Sequential IO]

The mixed random IO test provides the Samsung 870 EVO with one of its biggest performance wins yet over the rest of the SATA field and the entry-level NVMe competition. But most of that comes from the capacity advantage the 4TB model has over most of these comparison drives; the 1TB 870 EVO is only about 5% faster overall than the 860 EVO. On the mixed sequential IO test, the SATA bottleneck keeps most of the performance scores within a fairly narrow range, and the 1TB 870 EVO's performance is actually a bit of a regression compared to its predecessor.

Mixed IO Efficiency
[Charts: Mixed Random IO / Mixed Sequential IO efficiency]

As with our separate tests of random reads and writes, the top efficiency scores for mixed random IO go to SK hynix, with Samsung's TLC drives turning in the next best scores and having a clear advantage over other competing brands. Over on the sequential IO side of things, the efficiency scores more closely mirror the performance scores, and the 870 EVO doesn't have any real advantage over other mainstream SATA drives.

[Charts: Mixed Random IO / Mixed Sequential IO performance across workload mixes]

The 1TB 870 EVO's performance during the mixed random IO test is more consistent than the 860 EVO's, but it still has a few unpleasant drops that aren't present for the 4TB model. On the mixed sequential IO test, the 1TB 870 EVO's performance is actually a bit less consistent than the 860 EVO's. But aside from those occasional outliers, the general trend is for the 870 EVO to provide superior random IO performance and link-saturating sequential performance across a wide range of workload mixes.

Idle Power Management

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.
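
To put numbers on that, consider a simple duty-cycle model: a client drive that is busy only a few percent of the time spends the rest at idle, so the idle term dominates average power draw. The sketch below works through that arithmetic with illustrative wattages (hypothetical round numbers, not measurements from this review):

```python
# Back-of-the-envelope average power for a lightly used client SSD.
# All wattages here are illustrative placeholders, not measured results.

def average_power(active_w: float, idle_w: float, duty_cycle: float) -> float:
    """Average draw for a drive that is active `duty_cycle` fraction of the time."""
    return duty_cycle * active_w + (1.0 - duty_cycle) * idle_w

DUTY = 0.05  # busy 5% of the time: a plausible light client workload

# Same active power, very different idle power:
for label, idle_w in [("aggressive idle (~30 mW)", 0.030),
                      ("poor idle (~1 W)", 1.0)]:
    avg = average_power(active_w=2.0, idle_w=idle_w, duty_cycle=DUTY)
    print(f"{label}: average draw {avg:.2f} W")

# At a 5% duty cycle the idle term dominates: cutting idle power from
# 1 W to 30 mW reduces average draw from ~1.05 W to ~0.13 W.
```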

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
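
For reference, on a Linux system the SATA link power management policy can be set per host adapter through sysfs. The sketch below shows that general mechanism as one way to reproduce the enabled/disabled conditions; it is an illustration, not a description of how our testbed is configured:

```python
# Set the SATA link power management (LPM) policy via Linux's AHCI sysfs
# interface. Requires root. Kernel-accepted values include "max_performance"
# (LPM effectively disabled), "medium_power", "min_power", and on newer
# kernels "med_power_with_dipm".
import glob

POLICY = "min_power"  # deepest runtime idle; "max_performance" disables LPM

for path in glob.glob("/sys/class/scsi_host/host*/link_power_management_policy"):
    with open(path, "w") as f:
        f.write(POLICY)
    print(f"{path} -> {POLICY}")
```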

[Charts: Idle Power Consumption - No PM / Idle Power Consumption - Desktop]

The Samsung 870 EVO may feature an updated controller compared to the 860 EVO, but there's no real difference in idle power consumption, in either the active idle or the desktop (non-DevSleep) idle state. Samsung's idle power figures are best in class, with SK hynix offering the only close competition.

Idle Wake-Up Latency

The Samsung SATA drives all take about one millisecond to wake up when idling with SATA link power management enabled. This is higher than several of the other SATA drives, but not really enough to be of much concern for system responsiveness.
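
That wake-up cost can be observed from software by timing single reads issued after progressively longer idle gaps: once a gap is long enough for the link to drop into its low-power state, the first read afterwards pays the wake-up penalty. A minimal Linux sketch, assuming link power management is enabled and using a placeholder device path:

```python
# Time a 4KiB read after varying idle gaps to expose link wake-up latency.
# Linux-only, needs root. /dev/sdX is a placeholder for the drive under test,
# and SATA link power management should be enabled (e.g. the "min_power" policy).
import mmap
import os
import random
import time

DEV = "/dev/sdX"
BS = 4096

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
buf = mmap.mmap(-1, BS)                       # page-aligned buffer, as O_DIRECT requires
blocks = os.lseek(fd, 0, os.SEEK_END) // BS

for idle_s in (0.0, 0.05, 0.5, 2.0):
    samples = []
    for _ in range(20):
        time.sleep(idle_s)                    # give the link time to enter low-power state
        off = random.randrange(blocks) * BS   # block-aligned random offset
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)
        samples.append(time.perf_counter() - t0)
    median_us = sorted(samples)[len(samples) // 2] * 1e6
    print(f"idle {idle_s:4.2f}s -> median read latency {median_us:6.0f} us")

os.close(fd)
```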

136 Comments


  • hansmuff - Wednesday, February 17, 2021 - link

    Especially the 4TB seems like a fantastic Games drive to me, really good performance at a great price.
  • ekon - Wednesday, February 17, 2021 - link

    What (my) world needs is an absolutely rubbish but cheap high capacity SSD. As many cell levels as it takes.
  • jarablue - Wednesday, February 17, 2021 - link

    I have 2 sata WD Blue 1 tb ssds for game installs. They work totally fine and load games fast as hell. SATA ssds are still on point for large game storage space.
  • Spunjji - Friday, February 19, 2021 - link

    It'll be interesting to see if this changes along with software being developed for the new generation of consoles.
  • Duncan Macdonald - Thursday, February 18, 2021 - link

    It takes 10 gig Ethernet to exceed the speed of SATA - the SATA limit of 600 MBytes/sec is 4800 Mbits/sec - so allowing for TCP/IP overhead, even a 5GbE link cannot carry data as fast as a SATA link.

    SATA is still the only effective way of increasing the internal storage of systems that have no free NVMe slots available.
  • Tomatotech - Thursday, February 18, 2021 - link

    Or a USB3 / USB-C drive taped / velcro’d somewhere in the PC case.

    Could contain either a 2.5” HDD, a 2.5” SSD, or an m.2 NVMe SSD in a USB enclosure. I’ve done that a couple of times with small cases. Works perfectly fine especially with solid state media.
  • edzieba - Thursday, February 18, 2021 - link

    Cable-attached storage still has the massive advantage that you can connect more drives than you have board area for. A regular motherboard may have one m.2 slot, paying through the nose may net you two slots. Selling off a few organs may buy a halo board with 3 or more slots. Or you may need a bloated riser card and occupy a 16x slot (no go for ITX). Or... you can use the at least 4 SATA ports on even the most bargain basement board (with 8 being hardly uncommon) to stuff more capacity in as needed with ease.
    SATA Express was the right idea at the wrong time: a x1 or x2 PCIe interface to allow NVMe, with PHY fallback to SATA, and at entirely acceptable bandwidth for most uses (stick in an m.2 boot drive for OS and key applications) would be a perfect upgrade path for consumer SATA use. If it integrated power transport too (for SSD only, block mechanical drives with keying at the device end cable) it would simplify cable routing too. But that boat has probably sailed for good, unless enterprise just happens to adopt such a connector and drive architecture, which seems unlikely with density demands ever increasing. I don't think we'll see any new internal high bandwidth cabling standards other than PCIe link rate updates to the persistently high-priced OCuLink.
  • abufrejoval - Thursday, February 18, 2021 - link

    I remember experimenting with Compact Flash cards on PCMCIA and IDE adapters, trying to run Windows XP and Linux on them: Sure, there were no seeks, but at the time I didn’t understand the erase block issues yet and was just befuddled by how some I/O seemed so slow it made XP crash.

    When FusionIO came out with their first devices, I jumped on those and they were basically a precursor of NVMe, ouch, is it 13 years already?

    I’ve celebrated SATA SSDs; I still have a 160GB Postville in current use in a firewall that may last another 10 years. I lost count, but there may be 30-50 SATA SSDs in the house, some still used as „boot stick“ with only 128GB, most 1/4 to 1TB, some with 2TB.

    The only 4TB SSD here is actually a RAID-0 using 4x1TB, because I tend to have plenty of SATA ports left over in all these tower chassis, that used to house 3.5“ HDDs. The last system with 2.4 TB FusionIO card also still sports 4x 200GB Intel enterprise DC3700 MLC drives, just because that X99 board has 10 SATA ports, so why not use them in a RAID so vastly overprovisioned that it will never die?

    Like those ancient HDDs, these SSDs move around between boxes, almost like the „Winchesters“ or removable hard disks in the old days (been around since the PDP-11/34). I use carrier-less hot swap bays on all systems, SATA caddies on laptops or USB3/SATA cases, for their flexibility: milliseconds saved on storage benchmarks don’t compare to productivity, and not having to disassemble a workstation with a 3-slot GPU and giant low-noise fans just to switch to another OS is a real bonus!

    Most of the time SATA-SSD really is quite simply fast enough. If not, it’s the architecture, stupid!

    And some of the more agonizing waits, turn out to be not at all related to SATA vs. NVMe…

    ARK Survival Evolved is one of my favorite games, because I play it with my kids. Its main downside is loading time: it just takes ages and ages to launch! Sure, it has 200GB of data with all these extra maps and extensions, but perhaps more importantly, it’s 100,000 files.

    I got really tired of waiting for those minutes it took to load that from HDDs, so I invested in one of those „giant“ 1TB (SATA) SSDs at the time… still took a while to load, but the improvement was significant (less than one CMU or coffee mug unit). Now, since we all play the game, I tried to be smart and put it on the network in the 4TB JBOD/RAID0 I mentioned, and then upgraded to a 10Gbit network to match the performance.

    Alas, the load times across the 10Gbit network were horrific! Far worse than the single 2TB HDD I had used in the beginning.

    Then one day I ran ARK on Linux, within a larger experiment on the quality of Linux gaming. I didn’t have a big enough SSD around to store the game data, so instead I used one of those 2TB WD HDD hunks from 15 years back that just refuse to fail.

    And then I almost fell off my chair when ARK launched faster off that HDD than I had ever seen it launch from an NVMe drive (yes, of course, I had to have some of those, too).

    Long story slightly abbreviated, the annoyingly slow ARK launches were never a storage issue, but a Windows file access overhead issue. Linux truly put Windows to shame that day! It managed to load those tens of thousands of files ARK required much faster from an HDD than Windows managed on NVMe storage, and way, way, WAY faster than Windows (10Gbit) networking from a SATA-SSD RAID0.

    Now, Windows and Windows 10Gbit networking isn’t always and by default orders of magnitude slower than Linux. At least not when you’re dealing with a few large files. But when your game (or application) happens to use 100,000 small files instead of 10 big ones, be advised to test the OS before you blame it on the storage!

    The general protocol and latency overhead of SATA vs. (PCIe)NVMe is no doubt significant.
    As are the benefits of a well established form factor with all those ports and enclosures I already own, and the flexibility I learned to rely on. Dogma rarely helps, and I find myself buying SATA SSDs over NVMe once the default system boot storage requirements for every box have already been filled (with NVMe, if capable). Mostly because a) SATA SSDs are really fast enough already (real bottlenecks are architecture) and b) because aggregating those lower capacity NVMe sticks into RAID0 to extend their usability is really, really, really expensive, at least so far, because those PCIe switches are so overpriced by Avago/Broadcom, while SATA multiplexers are cheap and mostly built into the PCH you already own.
  • zodiacfml - Thursday, February 18, 2021 - link

    I don't mind SATA being limited, since only huge file transfers are limited, not random performance or game/OS loading. The form factor needs improving though. A quick, cheap way to do that is to simply cut the 2.5" form factor in half or smaller, since it will still leave a pair of mounting holes for screws.
