Overclocking

With the thermal interface between the processor die and the heatspreader upgraded from paste to solder, Intel is promoting these processors as better overclockers than previous generations. We’ve only had time to test the Core i9-9900K and Core i7-9700K on this front, so we took them for a spin.

Our overclocking methodology is simple. We set the Load Line Calibration to static (level 1 on this ASRock Z370 motherboard), set the frequency to 4.5 GHz and the voltage to 1.000 volts, and run our tests. If the system is stable, we record the power and performance, then increase the CPU multiplier. If the system fails, we increase the voltage by +0.025 volts and retry. Overclocking ends when temperatures get too high (85°C+).
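The stepping procedure above can be sketched as a simple loop. This is a hypothetical illustration, not our actual tooling: `run_stress_test` and `read_peak_temp_c` are stand-ins for the real stability workloads and temperature monitoring software.

```python
# Hypothetical sketch of the stepping methodology described above.
# run_stress_test and read_peak_temp_c are assumed stand-ins for the
# real stability workloads (POV-Ray/Blender) and temperature monitoring.

TEMP_LIMIT_C = 85.0     # stop once peak temperatures reach this ceiling
VOLTAGE_STEP = 0.025    # voltage bump applied after a failed run
FREQ_STEP_GHZ = 0.1     # one CPU multiplier step

def find_max_stable_clock(run_stress_test, read_peak_temp_c,
                          start_ghz=4.5, start_volts=1.000):
    """Raise the multiplier while stable; on failure, add voltage and
    retry. Return the last known-good (frequency, voltage) pair once
    the temperature ceiling is hit."""
    freq, volts = start_ghz, start_volts
    best = None
    while True:
        stable = run_stress_test(freq, volts)
        if read_peak_temp_c() >= TEMP_LIMIT_C:
            break                      # thermally limited: stop here
        if stable:
            best = (freq, volts)       # record power/performance here
            freq = round(freq + FREQ_STEP_GHZ, 1)
        else:
            volts = round(volts + VOLTAGE_STEP, 3)
    return best
```

In practice each iteration is a full Blender/POV-Ray run, so the loop terminates either on instability that voltage can no longer fix or, as here, on the thermal ceiling.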

Our new test suite brings new overclocking checks. As mentioned on the previous page, our software load for power measurement is POV-Ray, which can thrash a processor quite harshly. POV-Ray also does a reasonable job on stability, but is not a substantial enough test – for that we use our Blender workload, which pushes both the cores and the memory, and lasts about five minutes on an eight-core processor.

The results are as follows:

For the Core i7-9700K, we hit 5.3 GHz very easily, for only a small bump in power and temperature. At 5.4 GHz we could boot into the operating system, but it was in no way stable – we were ultimately voltage/temperature limited in this case. But an eight-core, eight-thread 5.3 GHz CPU at 180 W for $374? Almost unimaginable a year ago.

Overclocking the Core i9-9900K was not as fruitful. The best part of this overclock is the 4.7 GHz result: by using our own voltage settings, we reduced power consumption by 41 W, almost 25% of the total, and lowered temperatures by 24°C. That alone makes dialing in manual voltages worthwhile. Even 4.8 GHz and 4.9 GHz were reasonable, but the temperatures at 5.0 GHz might not be for everyone. When all cores and threads are loaded, this is one warm chip.


274 Comments


  • 3dGfx - Friday, October 19, 2018 - link

    game developers like to build and test on the same machine
  • mr_tawan - Saturday, October 20, 2018 - link

    > game developers like to build and test on the same machine

    Oh, I thought they used remote debugging.
  • 12345 - Wednesday, March 27, 2019 - link

    Only thing I can think of as a gaming use for those would be to pass through a gpu each to several VMs.
  • close - Saturday, October 20, 2018 - link

    @Ryan, "There’s no way around it, in almost every scenario it was either top or within variance of being the best processor in every test (except Ashes at 4K). Intel has built the world’s best gaming processor (again)."

    Am I reading the iGPU page wrong? The occasional 100+% handicap does not seem to be "within variance".
  • daxpax - Saturday, October 20, 2018 - link

    If you noticed, the 2700X is faster in half the gaming benchmarks, but they didn't include it.
  • nathanddrews - Friday, October 19, 2018 - link

    That wasn't a negative critique of the review – just the opposite, in fact: from the selection of benchmarks you provided, it is EASY to see that given more GPU power, the new Intel chips will clearly outperform AMD most of the time – generally in average frame rates, but especially in minimums. From where I'm sitting – 3570K+1080Ti – I think I could save a lot of money by getting a 2600X/2700X OC setup and not miss out on too many fpses.
  • philehidiot - Friday, October 19, 2018 - link

    I think anyone with any sense (and the constraints of a budget / missus) would be stupid to buy this CPU for gaming. The sensible thing to do is to buy the AMD chip that provides 99% of the gaming performance for half the price (even better value when you factor in the mobo) and then to plough that money into a better GPU, more RAM and / or a better SSD. The savings from the CPU alone will allow you to invest a useful amount more into ALL of those areas. There are people who do need a chip like this, but they are not gamers. Intel are pushing hard against both the limitations of their tech (see: stupid temperatures) and with their marketing BS (see: outright lies) because they know they're currently being held by the short and curlies. My 4-year-old i5 may well score within 90% of these gaming benchmarks because the limitation in gaming these days is the GPU. Sorry, Intel, wrong market to aim at.
  • imaheadcase - Saturday, October 20, 2018 - link

    I like how you said limitations in tech and point to temps, like any gamer cares about that. Every gamer wants raw performance, and the fact remains Intel systems are still the easier way to get it. The reason is simple: most gamers will upgrade from another Intel system and reuse lots of parts from it that work with current-generation stuff.

    It's like the whole G-Sync vs non-G-Sync debate. It's a stupid argument; it's not a tax on G-Sync when you're buying the best monitor anyway.
  • philehidiot - Saturday, October 20, 2018 - link

    Those limitations affect overclocking and therefore available performance. Which is hardly different to much cheaper chips. You're right about upgrading though.
  • emn13 - Saturday, October 20, 2018 - link

    The AVX-512 numbers look suspicious. Both common sense and other examples online suggest that AVX-512 should improve performance by much less than a factor of 2. Additionally, AVX-512 causes varying amounts of frequency throttling, so you're not going to get the full factor of 2 anyway.

    This suggests to me that your baseline is somehow misleading. Are you comparing AVX-512 to ancient SSE? To no vectorization at all? Something's not right there.
