Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: Mixed IO Performance; Mixed Random IO Performance & Efficiency; Mixed Sequential IO Performance & Efficiency]

The mixed random IO test is still a significant weakness for the Intel SSD 670p; it's clearly faster than the 660p, but still far slower than either of the Phison E12-based QLC SSDs shown here (Corsair MP400, Sabrent Rocket Q). Power efficiency is consequently also poor, and the 670p falls behind even the slower Samsung 870 QVO; at least when Samsung's SATA QLC drive is being so slow, it's not using much power.

The mixed sequential IO test is a very different story: the 670p's overall performance is competitive with mainstream TLC SSDs, and even slightly higher than the HP EX950 with the SM2262EN controller. Power efficiency is also decent in this case.

[Charts: Mixed Random IO; Mixed Sequential IO]

The Intel 670p's performance across the mixed random IO test isn't quite as steady as the 660p's, but there's still not much variation and only a slight overall downward trend as the workload shifts to be more write-heavy. On the mixed sequential IO test the 670p shows a few drops where SLC cache space apparently started running low, but through most of the test it maintains higher throughput than the 660p could deliver for any workload, even under ideal conditions.


Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Intel SSD 670p 2TB
NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2265G
Firmware: 002C

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.1           Number of non-operational (idle) power states  2
              Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            77 °C
              Critical Temperature                           80 °C
1.3           Host Controlled Thermal Management             Supported
              Non-Operational Power State Permissive Mode    Supported

The Intel 670p supports the usual range of power and thermal management features. The only oddity is the exit latency listed for waking up from the deepest idle power state: 11.999 milliseconds sounds like the drive is trying to stay under some arbitrary threshold. This might be an attempt to work around the behavior of some operating system's NVMe driver and its default latency tolerance settings.

Intel SSD 670p 2TB
NVMe Power States
Controller: Silicon Motion SM2265
Firmware: 002C

Power State  Maximum Power  Active/Idle  Entry Latency  Exit Latency
PS 0         5.5 W          Active       -              -
PS 1         3.6 W          Active       -              -
PS 2         2.6 W          Active       -              -
PS 3         25 mW          Idle         5 ms           5 ms
PS 4         4 mW           Idle         3 ms           11.999 ms (?!)

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
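As a rough illustration of how a host could use that table (a simplified model, not the actual policy of any specific NVMe driver), the OS can pick the deepest non-operational state whose advertised exit latency fits within its latency tolerance. Under such a rule, an exit latency of 11.999 ms just squeaks under a hypothetical 12 ms threshold:

```python
# Hypothetical sketch of idle-state selection from a drive's advertised
# power state table. Values are the Intel 670p's, as reported above.
# (state, max_power_mW, operational, entry_us, exit_us)
POWER_STATES = [
    ("PS0", 5500, True,     0,     0),
    ("PS1", 3600, True,     0,     0),
    ("PS2", 2600, True,     0,     0),
    ("PS3",   25, False, 5000,  5000),
    ("PS4",    4, False, 3000, 11999),
]

def deepest_idle_state(latency_tolerance_us):
    """Return the lowest-power non-operational state whose exit latency
    does not exceed the host's latency tolerance (an assumed policy)."""
    candidates = [s for s in POWER_STATES
                  if not s[2] and s[4] <= latency_tolerance_us]
    return min(candidates, key=lambda s: s[1])[0] if candidates else None

print(deepest_idle_state(12_000))  # PS4: 11.999 ms just fits under 12 ms
print(deepest_idle_state(10_000))  # PS3: PS4's wake-up is too slow
```

With a tolerance just under 12 ms, this model would fall back to PS3, which is consistent with the drive advertising 11.999 ms rather than a round 12 ms.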

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The active idle power of the 670p is clearly lower than the 660p with the SM2263 controller, but not quite as low as the Mushkin Helix-L with the DRAMless SM2263XT. So Silicon Motion has made some power optimizations with the SM2265, but it's still not in the same league as the controller SK hynix built for the Gold P31.

The desktop and laptop idle states we test have appropriately low power draw. However, when activating the laptop idle configuration (PCIe ASPM L1.2) the 670p would crash and not wake up from idle. This kind of bug is not unheard-of (especially with other Silicon Motion NVMe controllers), and the Linux NVMe driver has a list of drives that can't be trusted to work properly with their deepest idle power state enabled. Sometimes this can be narrowed down to a particular host system configuration or specific SSD firmware versions. But until now, this particular machine hasn't run into crashes with idle power modes on any of the drives we've tested, which is why we've trusted it as a good proxy for the power management behavior that can be expected from a properly-configured laptop. It's disappointing to see this problem show up once again with a new controller where the host system is almost certainly not at fault. Hopefully Intel can quickly fix this with a new firmware version.

[Chart: Idle Wake-Up Latency]

72 Comments

  • bananaforscale - Wednesday, March 3, 2021 - link

    Fascinating. I have a Netgear GS810EMX connected to an Aquantia AQC-108, and the NIC has issues in Linux when it's receiving lots of data, but works fine in Windows. This requires further research.
  • justaviking - Monday, March 1, 2021 - link

    A MATTER OF PERSPECTIVE...

    Billy wrote: "More importantly, at 0.2 DWPD Intel's QLC SSDs aren't that far behind the 0.3 DWPD that most consumer TLC SSDs are rated for."

    A 0.1 DWPD difference might not sound like it is "that far behind," but on the other hand that is 33% behind, and 33% *is* significant.
  • Billy Tallis - Monday, March 1, 2021 - link

    My thinking is that the 33% difference on paper is a lot less significant than it looks at first glance, because most consumers won't come close to crossing either limit. If 0.1 DWPD is probably sufficient for your usage and 0.2 DWPD definitely is, then 0.3 DWPD doesn't really have much added benefit.
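The DWPD figures being debated above translate to endurance totals as follows (a quick sketch assuming a typical five-year warranty period, which is what most of these consumer drives carry):

```python
def rated_tbw(capacity_tb, dwpd, warranty_years=5):
    """Total terabytes written implied by a DWPD (drive writes per day)
    rating over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# 2TB drive at the two ratings under discussion:
print(rated_tbw(2, 0.2))  # 730.0 TBW
print(rated_tbw(2, 0.3))  # roughly 1095 TBW
```

So the 33% gap on paper is real, but either figure works out to hundreds of terabytes, far beyond what most client workloads will write in five years.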
  • frbeckenbauer - Monday, March 1, 2021 - link

    I bought a Samsung PM9A1 for 115€. What is Intel doing with these prices? A 1TB QLC SSD should be the price they're offering here for the 512GB version.
  • Machinus - Monday, March 1, 2021 - link

    You can still run linux on an X-25E RAID for the next 100 years.
  • MDD1963 - Tuesday, March 2, 2021 - link

    Intel does not Eff around when you have used up your allotted writes....; good or bad still, you are damn well done writing once you've used them up!
  • Hifihedgehog - Monday, March 1, 2021 - link

    Hey Billy. What is the best 240-256GB NVMe today? I am looking for something under $50 that is the fastest there is currently for system boot times and mixed I/O.
  • Tomatotech - Tuesday, March 2, 2021 - link

    To start with I wouldn't buy a 256GB NVME. Speed scales with size quite well for NVME, and the difference from 256 -> 512 -> 1TB is astounding. Go for a 1TB. This is going to be the fastest drive on your system by far, and more fast space is always useful.

    The next thing is make sure you get a drive that folds *all* (or almost all) unused space into SLC space. This means that with an empty 1TB TLC drive, you get 330GB of high-speed SLC space. Smaller drives give you far less cache space. My 1TB is about 500GB full, which means I still have around 150GB of SLC storage left. (it's a 2018 Adata SX8200 1TB, non-pro).

    Beyond that, eh, from a user perspective they're all roughly equal, look at the table on the last page of the article. Used 1TB NVMe drives are a good buy too, there's not much that can go wrong with them, and if there is, you'll find out on first boot. The only things I would check for in a used working NVME drive are a) total writes, but it's extremely rare for that to be excessively high; and b) run a speed test - if that seems slow, then do a full secure erase and the SSD should be back to full performance, but even that is rarely needed with modern OSes.
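The scaling described above follows from how dynamic SLC caching works: each cell holds one bit instead of three (TLC) or four (QLC), so free space converts to SLC cache at roughly a 3:1 or 4:1 ratio. A rough sketch of that arithmetic (the exact cache policy and reserved space vary by drive, so treat this as an upper bound):

```python
def approx_slc_cache_gb(free_space_gb, bits_per_cell=3):
    """Approximate dynamic SLC cache when all free space is usable as cache.
    bits_per_cell: 3 for TLC, 4 for QLC. Real drives reserve some space,
    so actual cache sizes come in somewhat below this."""
    return free_space_gb / bits_per_cell

print(round(approx_slc_cache_gb(1000)))     # empty 1TB TLC drive: ~333 GB
print(round(approx_slc_cache_gb(500)))      # half-full 1TB TLC: ~167 GB
print(round(approx_slc_cache_gb(2000, 4)))  # empty 2TB QLC drive: ~500 GB
```

This matches the commenter's numbers: ~330GB of SLC on an empty 1TB TLC drive, and roughly 150-170GB with 500GB of free space remaining.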
  • Hifihedgehog - Tuesday, March 2, 2021 - link

    My price point was $50 and under so you ignored a key point from the very beginning.
  • abufrejoval - Tuesday, March 2, 2021 - link

    USB sticks are used all over the place for booting.

    And you get relatively fast µSD-cards which you could combine with a USB reader-stick.
    A "class 10/A2" rating card can be had at many capacity points where NVMe no longer goes.
