Performance

I’m not big on posting first-party benchmark results, but the high-level overview from Intel was this:

  • At a fixed 3.3 GHz, the 12900K delivers +19% single-thread performance over the 11900K
  • Over the 11900K, the 12900K is +19% better at 1080p High gaming with an RTX 3090
  • Over the 11900K, the 12900K gets +84% better fps when gaming and streaming concurrently
  • Over the 11900K, the 12900K is +22-100% better in content creation (Adobe)
  • Over the 11900K, the 12900K is +50% faster in multi-threaded Blender at 241 W (vs 250 W)
  • Over the 11900K, the 12900K performs the same in multi-threaded Blender at only 65 W (vs 250 W)

All of Intel’s tests were run on Windows 11, with DDR5-4400 against DDR4-3200. Intel did include a single slide of gaming comparisons against AMD with an RTX 3090, however it stated these were done without AMD’s latest L3 cache patch for Windows 11, and admitted that it would have preferred to show us full results. By the time this article goes live, we may have seen those results at Intel’s event.

This is a reasonable set of data, very focused on the Core i9, but when the reviews come out we’ll be able to see where it sits compared to the other parts, as well as the competition. The only thing that concerns me leading up to the launch is the behavior of demoting workloads to E-cores when their window is not in focus under the Balanced power plan (mentioned on the Thread Director page). Whether I see that as an issue or not will have to wait until I get hands-on with the hardware.
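
Once hardware is in hand, one way to observe that demotion directly is to ask the CPU itself. The following is a minimal sketch of my own, not anything Intel supplies: CPUID leaf 0x1A is Intel’s documented hybrid-information leaf, and its EAX[31:24] field reports the type of core the calling thread is currently running on, so polling it from a backgrounded window should reveal whether the thread has been moved to an E-core.

    // Sketch: report which core type the current thread is scheduled on.
    // CPUID leaf 0x1A (Hybrid Information) returns the core type in EAX bits 31:24:
    // 0x40 = Core (P-core), 0x20 = Atom (E-core). Non-hybrid parts return 0.
    #include <cstdint>
    #include <cstdio>
    #if defined(_MSC_VER)
    #include <intrin.h>
    #else
    #include <cpuid.h>
    #endif

    static uint32_t hybrid_leaf_eax() {
    #if defined(_MSC_VER)
        int regs[4] = {0, 0, 0, 0};
        __cpuidex(regs, 0x1A, 0);                      // leaf 0x1A, sub-leaf 0
        return static_cast<uint32_t>(regs[0]);
    #else
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        __get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx);
        return eax;                                    // stays 0 if leaf unsupported
    #endif
    }

    int main() {
        const uint32_t core_type = hybrid_leaf_eax() >> 24;
        if (core_type == 0x40)      puts("This thread is on a P-core");
        else if (core_type == 0x20) puts("This thread is on an E-core");
        else                        puts("Not a hybrid CPU (or leaf 0x1A unsupported)");
        return 0;  // note: the scheduler may migrate the thread at any moment
    }

Run in a loop while the window is foregrounded and then backgrounded, something like this would show whether the Balanced plan demotes the process as described.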

Another factor to mention is DRM. Intel has made statements on this: there is an issue with Denuvo, which uses part of the CPU configuration to identify systems and deter piracy. Due to the hybrid design, Denuvo might register a game starting on a different core type (P vs E) as a new system, and eventually lock you out of the game either temporarily or permanently. Out of the top 200 games, around 20 are affected, and Intel says it still has a couple more to fix. It is working with Denuvo on a high-level fix from their side, and with developers to fix it from their end as well. Intel says it’s a bit harder with older titles, especially when there’s no active development, or the IP is far removed from its original source. A workaround would be to launch those games only on specific cores, but look out for more updates as time marches on.
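
As a sketch of what that core-pinning workaround could look like from the user side (this is my illustration, not Intel’s or Denuvo’s fix; the game.exe name and the affinity mask are placeholders), a launcher can start the game suspended and restrict it to a fixed set of logical processors before any DRM code runs:

    // Sketch: launch a game restricted to a fixed set of cores, so the DRM
    // always sees the same core configuration. The mask below assumes the
    // first 16 logical processors are the hyper-threaded P-cores (as on a
    // 12900K), but the topology should be verified per system.
    #include <windows.h>
    #include <cstdio>

    int main() {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        wchar_t cmdline[] = L"game.exe";            // placeholder title

        // Start suspended so the affinity applies before the DRM initializes.
        if (!CreateProcessW(nullptr, cmdline, nullptr, nullptr, FALSE,
                            CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        const DWORD_PTR pCoreMask = 0xFFFF;         // logical CPUs 0-15 only
        SetProcessAffinityMask(pi.hProcess, pCoreMask);
        ResumeThread(pi.hThread);

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }

The same effect is available without any code via "start /affinity FFFF game.exe" from a command prompt, at the cost of the E-cores sitting idle while the game runs.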

Conclusions

Well, it’s almost here. It looks like Intel will take the single-thread crown, although multi-thread is a different story, and may depend on the software being used and on whether the difference in performance is worth the price. The hybrid architecture might be an early pain point, and it will be interesting to see whether Thread Director remains resilient to the issues. The bump up to Windows 11 is another potential rock in the stream, and we’re already seeing some teething issues from users, although users looking to early-adopt a new CPU are likely more than ready to adopt a new version of Windows at the same time.

The discourse on DDR4 vs DDR5 is one I’ve been having for almost a year now. Memory vendors seem ready to start seeding kits to retailers, however the premium over DDR4 is somewhat eye-watering. The general expectation is that DDR5 won’t offer much performance uplift over a good kit of DDR4, and might even be worse. The benefit of DDR5 at this point is more about getting onto the DDR5 ladder, where the only way to go is up. It seems this will be Intel’s last DDR4 platform on the desktop.

On the processors themselves, the Core i5 and Core i7 parts look very competitive and in line with their respective popular AMD counterparts. Both the Core i5 and Core i7 have extra E-cores, so we’ll see whether those come in handy for extra performance, or whether they just end up burning power, in which case performance per watt will need re-examining. The Core i9 battle probably favors Intel in single thread, but all the questions will be over proper multi-threaded performance.

Intel 12th Gen Core, Alder Lake
AnandTech     Cores    E-Core       E-Core        P-Core       P-Core        IGP   Base   Turbo   Price
              P+E/T    Base (MHz)   Turbo (MHz)   Base (MHz)   Turbo (MHz)         (W)    (W)     ($1ku)
i9-12900K     8+8/24   2400         3900          3200         5200          770   125    241     $589
i9-12900KF    8+8/24   2400         3900          3200         5200          -     125    241     $564
i7-12700K     8+4/20   2700         3800          3600         5000          770   125    190     $409
i7-12700KF    8+4/20   2700         3800          3600         5000          -     125    190     $384
i5-12600K     6+4/16   2800         3600          3700         4900          770   125    150     $289
i5-12600KF    6+4/16   2800         3600          3700         4900          -     125    150     $264

After not much CPU news for a while, it’s time to get in gear and find out what Intel has been cooking. Come back on November 4th for our review.

Comments

  • yeeeeman - Friday, October 29, 2021 - link

    Your comparisons at various price points are a good idea, but wrong, because the 12600K will compete with the 5800X performance-wise; hence its efficiency will be judged against the 5800X, not the 5600X, which is obviously more efficient than the 5800X.
  • Spunjji - Friday, October 29, 2021 - link

    This remains to be seen - Intel rarely go low on price and high on performance. I'll be happy if they've changed that pattern!
  • Carmen00 - Friday, October 29, 2021 - link

    Surprised that nobody's talking about the inherent scheduling problems with efficiency+performance cores and desktop workloads. This is NOT a solved problem and is, in fact, very far away from good general-purpose solutions. Your phone/tablet shows you one app at a time, which goes some way towards masking the issues. On a general-purpose desktop, the efficiency+performance split has never been successfully solved, as far as I am aware. You read it right - NEVER! (I welcome links to any peer-reviewed theoretical CS research that shows the opposite.)

    In the interim, it seems to me that Intel has hitched its wagon to a problem that is theoretically unsolvable and generally unapproachable at the chip level. Scheduling with homogeneous cores is unsolvable too, but it's an easier problem to attack. Heterogeneous cores add another layer of difficulty, and if Intel's approach is really just breaking things into priority classes ... oh, my. Good luck to them on real-world performance. They'll need it.

    I can see why they've done it. They have the tech and they're trying to make the most of their existing tech investment. That doesn't mean it's a good idea. It is relatively easy to make schedulers behave pathologically (e.g. flipping between E/P constantly, or burning power for nothing) during normal app execution and we see this on phones already. Bringing that mess to desktops ... yeah. Not a great idea.
  • kwohlt - Friday, October 29, 2021 - link

    "I welcome links to any peer-reviewed theoretical CS research that shows the opposite"

    That's not how this works. YOU are making the claim that heterogeneous scheduling has never been solved on a general-purpose desktop - the onus would be on you to provide proof of this.

    "Scheduling with homogenous cores is unsolvable too"
    So if scheduling with heterogenous and homogenous cores is both unsolvable, then what point are you trying to make?

    Has Apple not demonstrated functional scheduling across a heterogeneous architecture with the M1?
    And what does "solved" look like? Because if your definition of solved is "no inefficiencies or loss", then that's not a realistic expectation. A heterogeneous architecture simply needs to provide more benefit than not - a goal of zero overhead or inefficiency is unrealistic.

    As long as the efficiency and performance gain outpaces the inefficiencies, it's a success. Consider a scenario of 4 efficiency cores occupying the same physical die space, thermal constraints, and power consumption as one performance core - surely an 8+8 design would offer better performance than a 10+0 design, when the E-cores can offer performance greater than 50% of a P-core.

    Consider this: a 12600 will be 6+0. A 12600K will be 6+4. If we downclock the 12600K to match the P-core frequency of the 12600, we can directly measure the benefit of the 4 E-cores.
    If we disable the 4 E-cores in the 12700K so it is only 8 P-cores, and compare that to the 6+4 of the 12600K, then if the 12600K is more performant, we can directly show that 6+4 was better than 8+0 in this scenario.
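
To make the die-area arithmetic in that comment concrete: under the assumption above that four E-cores fit in the area of one P-core (so 8+8 occupies the same area as 10+0), the hybrid layout wins whenever an E-core delivers more than 25% of a P-core's throughput, which puts the 50% figure cited comfortably past break-even. A trivial sketch:

    // Back-of-the-envelope: throughput of 8P+8E vs an equal-area 10P+0E,
    // where e = per-core E-core throughput as a fraction of a P-core.
    #include <cstdio>

    int main() {
        for (int i = 2; i <= 6; ++i) {
            double e = i / 10.0;
            double hybrid  = 8.0 + 8.0 * e;  // 8 P-cores + 8 E-cores
            double bigOnly = 10.0;           // same die area as 10 P-cores
            printf("e=%.1f: hybrid=%4.1f vs big-only=%4.1f -> %s\n",
                   e, hybrid, bigOnly, hybrid > bigOnly ? "8+8 wins" : "10+0 wins");
        }
        return 0;  // break-even sits at e = 0.25
    }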
  • Carmen00 - Wednesday, November 3, 2021 - link

    Sure, take a look at this for a good recent overview: https://dl.acm.org/doi/pdf/10.1145/3387110 . (honestly, though, I think you could have found that one for yourself... the research is not hidden!)

    Again, best of luck to Intel on it; deep learning models or no, they have a risk appetite that I don't share. Apple's success is due, in no small part, to the fact that it controls the entirety of the hardware and software stack. Intel has no such ability. My prediction is that you will have users whose computers sometimes simply start burning power for "no reason", and this can't be replicated in a lab environment; and you will have users whose computers are sometimes just slow, and again, this can't be replicated easily. The result will likely be an intermittently poor user experience. I wouldn't risk buying such a machine myself, and I certainly wouldn't get it for grandma or the kids.
  • mode_13h - Saturday, October 30, 2021 - link

    Good scheduling requires an element of prediction. In the article, they mention Intel took many measurements of many different workloads, in order to train a deep learning model to recognize and classify different usage patterns.
  • PedroCBC - Saturday, October 30, 2021 - link

    In the WAN Show yesterday, Linus said that some DRMs will not work with Alder Lake, most of them older ones, but Denuvo also said that it will have some problems
  • iranterres - Friday, October 29, 2021 - link

    Having "efficiency" cores in modern desktop CPU are irrelevant. For laptops is another story.
  • iranterres - Friday, October 29, 2021 - link

    *is irrelevant
  • nandnandnand - Friday, October 29, 2021 - link

    False. It boosts multicore performance per die area.

    Intel could have given you 10 big cores instead of 8 big, 8 small. And that would have been a worse chip.
