Intel's Core i9-9900K: Technically The Highest Performing Gaming CPU

When Intel announced the new processor lineup, it billed the Core i9-9900K as the ‘world’s best gaming processor’. Intel’s Anand Srivatsa showcased the new packaging for this eight-core, sixteen-thread, 5.0 GHz giant.

In actual fact, the packaging is very small. Intel didn’t supply us with this upgraded retail version of the box, but we were sampled a toasty Core i9-9900K all the same. We sourced the i7-9700K and i5-9600K from Intel’s partners for this review.

With the claim of ‘world’s best gaming processor’ on the table, it clearly needed to be put to the test. Intel commissioned (read: paid for) a third-party report into the processor’s performance in order to obtain data; unfortunately that report had numerous issues, particularly in how the competing chips were configured and benchmarked. Here at AnandTech, we’ll give you the right numbers.

For our gaming tests this time around, we put each game through four different resolutions and scenarios, labelled IGP (for 720p), Low (for 1080p), Medium (for 1440p to 4K), and High (for 4K and above). Here’s a brief summary of results:

  • World of Tanks: Best CPU at IGP, Low, Medium, and top class in High
  • Final Fantasy XV: Best CPU or near top in all
  • Shadow of War: Best CPU or near top in all
  • Civilization VI: Best CPU at IGP, a bit behind at 4K, top class at 8K/16K
  • Ashes Classic: Best CPU at IGP, Low, top class at Medium, mid-pack at 4K
  • Strange Brigade DX12/Vulkan: Best CPU or near top in all
  • Grand Theft Auto V: Best CPU or near top in all
  • Far Cry 5: Best CPU or near top in all
  • Shadow of the Tomb Raider: Near top in all
  • F1 2018: Best CPU or near top in all

There’s no way around it: in almost every scenario, the Core i9-9900K was either the fastest processor or within variance of the fastest (the sole exception being Ashes at 4K). Intel has built the world’s best gaming processor (again).

On our CPU tests, the i9-9900K scored higher in many of the synthetics than any other mainstream processor. In some of our real-world tests, such as application loading or web performance, it lost out from time to time to the i7 and i5 because it has hyper-threading enabled: those tests tend to prefer threads that have exclusive access to a full core’s resources. For memory-limited tests, the high-end desktop platforms provide a better alternative.
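For readers who want to probe that hyper-threading effect on their own machines, here is a minimal sketch of the idea (ours, not part of our test suite): pin a process to one logical CPU per physical core so that no two threads share a core. It assumes Linux, and that logical CPUs 0-7 map to the eight physical cores with 8-15 as their hyper-threaded siblings; verify your topology with lscpu before relying on it.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);

    /* One logical CPU per physical core; skip the SMT siblings.
     * Assumes CPUs 0-7 are the physical cores -- check with lscpu. */
    for (int cpu = 0; cpu < 8; cpu++)
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... run the latency-sensitive workload from here ... */
    printf("Pinned to one logical CPU per physical core\n");
    return 0;
}
```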

While there’s no specific microarchitectural innovation driving the performance, Intel re-checked the box for soldered thermal interface material (STIM), last used on the mainstream platform in Sandy Bridge. The STIM implementation has enabled Intel to push the frequency of these parts. It was always one of the tools the company had in its back pocket, and many will speculate as to why it chose to use that tool at this point in time.

But overall, thanks to the pushes in both frequency and core count, the three new 9th Generation processors sit at the top of most of our mixed-workload tests and set a new standard in Intel’s portfolio for being a jack of all trades. If a user has a variable workload and wants to squeeze out performance, these new processors should get them there.

So now, if you are the money-no-object kind of gamer, this is the processor for you. But it’s not a processor for everyone, and that comes down to cost and competition.

At $488 SEP, plus a bit more for the on-shelf price, plus another $80-$120 for a decent cooler (or $200 for a custom loop), it’s going to be out of range for almost all builds south of $1500, where the GPU matters most. When Intel’s own i5-9600K is under half the cost with only two fewer cores, and AMD’s Ryzen 7 2700X is very competitive in almost every test, those chips might not be the best, but they’re far more cost-effective.

The outlandish flash of cash goes on the Core i9-9900K; the smart money ends up on the i7-9700K, the i5-9600K, or the Ryzen 7 2700X. For the select few, money is no object. For the rest of us, especially when gaming at 1440p and higher settings where the GPU is the bigger bottleneck, there are plenty of processors that do just fine, and are a bit lighter on the power bill in the process.

Edit: We initially posted this review with data taken on an ASRock Z370 motherboard. After inspection, we discovered that this motherboard intentionally over-volts 9th Generation Core processors, which skewed our power testing. While the benchmark results appear unaffected, we have redone the power numbers using an MSI MPG Z390 Gaming Edge AC motherboard and updated the review accordingly.
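For anyone who wants to sanity-check package power on their own Linux system, here is a minimal sketch using the intel_rapl energy counter; this is an illustration, not our testing methodology. The sysfs path below is the common one but can vary by kernel, reading it may require root, and the counter wraps periodically.

```c
#include <stdio.h>
#include <unistd.h>

/* Read the cumulative package energy counter in microjoules. */
static long long read_energy_uj(void) {
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    long long uj;
    if (!f)
        return -1;
    if (fscanf(f, "%lld", &uj) != 1)
        uj = -1;
    fclose(f);
    return uj;
}

int main(void) {
    long long e0 = read_energy_uj();
    sleep(5);                              /* 5-second sampling window */
    long long e1 = read_energy_uj();

    if (e0 < 0 || e1 < 0 || e1 < e0) {     /* e1 < e0: counter wrapped */
        fprintf(stderr, "RAPL read failed or counter wrapped\n");
        return 1;
    }
    printf("Average package power: %.2f W\n", (e1 - e0) / 5e6);
    return 0;
}
```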

Comments

  • Total Meltdowner - Sunday, October 21, 2018 - link

    Those typos...

    "Good, F U foreigners who want our superior tech."
  • muziqaz - Monday, October 22, 2018 - link

    Same to you, who still thinks that Intel CPUs are made purely in the USA :D
  • Hifihedgehog - Friday, October 19, 2018 - link

    What do I think? That it is a deliberate act of desperation. It looks like it may draw more power than a 32-Core ThreadRipper per your own charts.

    https://i.redd.it/iq1mz5bfi5t11.jpg
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised
    https://www.anandtech.com/show/13400/intel-9th-gen...

    The motherboard in question was using an insane 1.47v
    https://twitter.com/IanCutress/status/105342741705...
    https://twitter.com/IanCutress/status/105339755111...
  • edzieba - Friday, October 19, 2018 - link

    For the last decade, you've had the choice between "I want really fast cores!" and "I want lots of cores!". This is the 'now you can have both' CPU, and it's surprisingly not in the HEDT realm.
  • evernessince - Saturday, October 20, 2018 - link

    It's priced like HEDT though; well into HEDT, in fact. FYI, you could have had both of those when the 1800X dropped.
  • mapesdhs - Sunday, October 21, 2018 - link

    I noticed initially in the UK the pricing of the 9900K was very close to the 7820X, but now pricing for the latter has often been replaced on retail sites with CALL. Coincidence? It's almost as if Intel is trying to hide that even Intel has better options at this price level.
  • iwod - Friday, October 19, 2018 - link

    Nothing unexpected really: 5 GHz on a "better" node that is tuned for higher frequency. The TDP was the real surprise though. I knew the TDP numbers were fake, but 95 W becoming 220 W? I am pretty sure in some countries (um... the EU) people could start suing Intel for misleading customers.

    For the AVX test, did the program really use AMD's AVX unit? Or was it not optimised for AMD's AVX, given AMD has a slightly different (I'd say saner) implementation? And if it did, the difference shouldn't be that big.

    I continue to believe there is a huge market for iGPUs, and I think AMD has the biggest chance to capture it, just looking at those totally playable 1080p frame rates. If they could double the iGPU die size budget with 7nm Ryzen, it would be all good.

    Now we are just waiting for Zen 2.
  • GreenReaper - Friday, October 19, 2018 - link

    It's using it. You can see points increased in both cases. But AMD implemented AVX on the cheap: it takes twice the cycles to execute AVX operations involving 256-bit data, because (AFAIK) it's implemented using 128-bit registers, with pairs of units that can only do multiplies or adds, not both. (A sketch illustrating the width difference appears after these comments.)

    That may be the smart choice; it probably saves significant space and power. It might also work faster with SSE[2/3/4] code, still heavily used (in part because Intel has disabled AVX support on its lower-end chips). But some workloads just won't perform as well vs. Intel's flexible, wider units. The same is true for AVX-512, where the workstation chips run away with it.

    It's like the difference between using a short bus, a full-sized school bus, and a double decker - or a train. If you can actually fill the train on a regular basis, are going to go a long way on it, and are willing to pay for the track, it works best. Oh, and if developers are optimizing AVX code for *any* CPU, it's almost certainly Intel, at least first. This might change in the future, but don't count on it.
  • emn13 - Saturday, October 20, 2018 - link

    Those AVX numbers look like they're measuring something else, not just AVX-512. You'd expect performance to increase (compared to 256-bit AVX) by around 50%, give or take quite a margin of error; it should *never* be more than a factor of 2 faster. So ignore AMD; their AVX implementation is wonky, sure - but those Intel numbers almost have to be wrong. I think the baseline isn't vectorized at all, or something like that - that would explain the huge jump.

    Of course, AVX-512 is fairly complicated, and it's more than just wider - but these results seem extraordinary, and there's just not enough evidence the effect is real and not just some quirk of how the variations were compiled.
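To make the width argument in this thread concrete, here is a minimal sketch (ours, for illustration) of a single 256-bit fused multiply-add. On cores with native 256-bit units, such as Intel's, this is one operation per instruction; on first-generation Zen, the same instruction is cracked into two 128-bit halves, which is the "twice the cycles" behaviour described above. Compile with gcc -mavx2 -mfma.

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8] = {0};

    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_loadu_ps(c);

    /* c = a*b + c across eight float lanes in one instruction.
     * Hardware with 128-bit units must issue two halves internally. */
    vc = _mm256_fmadd_ps(va, vb, vc);
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);             /* 8 14 18 20 20 18 14 8 */
    printf("\n");
    return 0;
}
```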
