Conclusion: Less Lakes, More Coves Please

One thing Intel has learned through successive years of reiterating the Skylake microarchitecture on the same process, but with more cores, has been optimization – the ability to squeeze as much as physically possible out of a given manufacturing node and architecture, and still come out with a high-performing product when the main competitor offers similar performance at much lower power.

Intel has pushed Comet Lake and its 14nm process to new heights, achieving top results in a lot of our benchmarks, but at the expense of power. There’s something to be said for having the best gaming CPU on the market, something Intel seems to have achieved here when considering gaming in isolation, though now Intel has to deal with the messaging around power consumption, similar to how AMD did in the Vishera days.

One of the common phrases that companies in this position like to use is that ‘when people use a system, they don’t care about how much power it’s using at any given time’. It’s the same argument borne out of ‘some people will pay for the best, regardless of price or power consumption’. Both of these arguments are ones we’ve taken on board over the years, and sometimes we agree with them – if all you want is the best, then yes, these other metrics do not matter.

In this review we tested the Core i9-10900K with ten cores, the Core i7-10700K with eight cores, and the Core i5-10600K with six cores. On the face of it, the Core i9-10900K, with its ability to boost all ten cores to a sustained 4.9 GHz (on the right motherboard) as well as offering a 5.3 GHz turbo, will be a welcome recommendation for some users. It offers some of the best frame rates in our more CPU-limited gaming tests, and it competes at a similar price against an AMD processor that offers two more cores at a lower frequency. This is really going to be a ‘how many things do you do at once’ type of recommendation, with the caveat of power.

The ability of Intel to pull 254 W on a regular 10-core retail processor is impressive. After we take out the system agent and DRAM, this works out to around 20 W per core. Intel has often quoted its goal of scaling its Core designs from fractions of a watt to dozens of watts. The only downside of championing something in the 250 W range is that there are competitive offerings with 24, 32, and 64 cores in this power envelope. Sure, those processors are more expensive, but they do a lot more work, especially in large multi-core scenarios.
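
As a rough sanity check on that per-core figure, here is a minimal back-of-the-envelope sketch in Python. The ~54 W allowance for the system agent, uncore, and DRAM controller is an assumption for illustration, not a value measured in our testing:

    # Back-of-the-envelope estimate of per-core power on the Core i9-10900K.
    # The uncore/system agent figure below is an assumption for illustration,
    # not a measured value.
    package_power_w = 254   # peak package power observed in our testing
    uncore_power_w = 54     # assumed system agent + DRAM controller share
    core_count = 10

    per_core_w = (package_power_w - uncore_power_w) / core_count
    print(f"Roughly {per_core_w:.0f} W per core")  # -> Roughly 20 W per core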

If we remember back to AMD’s Vishera days, the company launched a product with eight cores, 5.0 GHz, and a 220 W TDP. At peak power consumption, the AMD FX-9590 was nearer 270 W, and while AMD offered the caveats listed above (‘people don’t really care about the power of the system while in use’), Intel chastised the processor for being super power hungry and not high performance enough to be competitive. Fast forward to 2020, and we have the reverse situation, where Intel is pumping out the high-power processor, but this time, there’s a lot of performance.

The one issue that Intel won’t escape from is that all this extra power requires extra money to be put into cooling the chip. While the Core i9 processor is around the same price as the Ryzen 9 3900X, the AMD processor comes with a 125 W cooler which will do the job – Intel customers will have to go out and source more expensive cooling of their own. One colleague had issues cooling his 10900K test chip with a Corsair H115i, suggesting that users should look to spend $150+ on a cooling setup. That’s going to be a critical balancing element here when it comes to recommendations.

For recommendations, Intel’s Core i9 currently performs the best in a lot of tests, and that’s hard to ignore. But will the end user want that extra percent of performance if it means spending more on cooling and more on power? Even in the $500 CPU market, that’s a hard sell. Then add in the fact that the Core i9 doesn’t have PCIe 4.0 support, and we get to a situation where it’s the best offering if you want Intel and you want the best processor, but AMD has almost the same ST performance, better MT performance in a lot of cases, much better power efficiency, and future PCIe support – it comes across as the better package overall.

Throughout this piece I’ve mainly focused on the Core i9, as it is the flagship part. The Core i7 and the Core i5 are also in our benchmark results, and they present a slightly more mixed picture.

The Core i5 doesn’t actually draw too much power, but it falls behind in multi-threaded scenarios. At $262 it comes in more expensive than the $200 Ryzen 5 3600, and the trade-off here is the Ryzen’s better IPC against the Core i5’s frequency. Most of the time the Core i5-10600K wins, as it should given that it costs an extra $60, but again it doesn’t have the PCIe 4.0 support that the AMD chip offers. When it comes to building that $1500 PC, this will be an interesting trade-off to consider.

For the Core i7-10700K, at 8C/16T, we really have a repeat of the Core i9-9900K from the previous generation. It offers a similar sort of performance and power, except with Turbo Boost Max 3.0 giving it a bit more frequency. There’s also the price – at $374 it is certainly a lot more attractive than the 9900K was at $488. Anand once said that there are no bad products, only bad prices, and a $114 price drop on an eight-core overclockable part is highly welcome. At $374 it fits between the Ryzen 9 3900X ($432) and the Ryzen 7 3800X ($340), so we’re still going to see some tradeoffs against the Ryzen 7 in performance vs power vs cost.

Final Thoughts

Overall, I think the market is getting a little fatigued from the continuous recycling of the Skylake microarchitecture on the desktop. The upside is that a lot of programs are optimized for it, and Intel has optimized the process and the platform so much that we’re getting much higher frequencies out of it, at the expense of power. Intel is still missing key elements in its desktop portfolio, such as PCIe 4.0 and DDR4-3200, as well as anything resembling a decent integrated graphics processor, but ultimately one could argue that the product team is at the whim of the manufacturing arm, and 10nm isn’t ready yet for desktop prime time.

When 10nm will be ready for the desktop, we don’t know – Intel is set to showcase 10nm Ice Lake for servers at Hot Chips in August, as well as 10nm Tiger Lake for notebooks. If we’re stuck on 14nm on the desktop for another generation, then we really need a new microarchitecture that scales just as well. Intel might be digging itself a hole by optimizing Skylake so much, especially if whatever comes next can’t at least match it – assuming Intel even has enough space on the manufacturing line to build it. Please Intel, bring a Cove our way soon.

In the meantime, we get a power-hungry Comet Lake, but at least the best chip in the stack performs well enough to top a lot of our charts for its price point.

These processors should be available from today. As far as we understand, the overclockable parts will be coming to market first, with the rest coming to shelves depending on region and Intel's manufacturing plans.

Comments

  • DrKlahn - Wednesday, May 20, 2020

    My biggest issue with gaming is that these reviews rarely show anything other than low resolution scenarios. I realize a sizable slice of the gaming community uses 1080p and that some of them are trying to hit very high frame rates. But there are also a lot of us with 1440p+ or ultrawides, and I think it gets overlooked that Intel's gaming "lead" largely evaporates for anyone not trying to hit very high frame rates at 1080p.
  • ElvenLemming - Wednesday, May 20, 2020

    Honestly, I think it's ignored because it's well understood that at 1440p+ the CPU just doesn't matter very much. There's not much value in anything above 1080p for a CPU review since the vast majority of games are going to be GPU limited. That said, plenty of other outlets include them in their reviews if you want to see a bunch of charts where the top is all within 1% of each other.
  • DrKlahn - Wednesday, May 20, 2020

    I do agree with you that a lot of us understand that as resolution and detail increase, CPUs become almost irrelevant to gaming performance. However, you do see a fair few posters parroting "Intel is better for gaming" when in reality, for their use case, it really isn't any better. That's why I feel like these reviews (here and elsewhere) should spotlight where this difference matters. If you are a competitive CS:GO player that wants 1080p or lower with the most frames you can get, then Intel is undoubtedly better. But a person who isn't as tech savvy, who games and does some productivity tasks on a 1440p+ monitor, is only spending more money for a less efficient architecture that won't benefit them if they simply see "Intel better for gaming" and believe it applies to them.
  • shing3232 - Thursday, May 21, 2020

    The 3900X or 3800X can beat Intel's 9900KF in CS:GO with PBO on, if I remember correctly.
  • silencer12 - Saturday, May 23, 2020

    CS:GO is not a demanding game.
  • vanilla_gorilla - Monday, June 15, 2020

    >If you are a competitive CS:GO player that wants 1080p or lower with the most frames you can get, then Intel is undoubtedly better.

    It's actually more complicated than that. Even a midrange Zen 2 CPU can hit well over 200 fps in CS:GO. So unless you have a 240 Hz monitor, it won't make any difference whether you buy Intel or AMD in that case.
  • Irata - Wednesday, May 20, 2020

    Techspot shows a seven-game average, and there the avg fps / min 1% difference to the Ryzen 3 3300X is less than 10% using a 2080 Ti.
  • CrimsonKnight - Thursday, May 21, 2020

    This review's benchmarks go up to 4K/8K resolution. You have to click the thumbnails under the graphs.
  • Meteor2 - Wednesday, July 15, 2020

    To be clear: Anandtech tests at low resolutions so the bottleneck is the CPU, not the GPU. A Ryzen 5 won’t bottleneck a 2080 Ti at 4K.
  • kmmatney - Wednesday, May 20, 2020

    Those of us who live near a Microcenter can get the 3900X for $389, along with a $20 discount on a motherboard (and a serviceable heatsink). The Ryzen 5 (what I bought) is $159, also with a $20 motherboard discount and a decent cooler. So my effective motherboard cost was $79, and the total cost was $240 + tax, with a motherboard that can (most likely) be upgraded to Zen 3.
