A Few Words on Power Consumption

When we tested the first wave of Skylake-X processors, one of the takeaway points was that Intel was starting to blur the line between thermal design power (TDP) and power consumption. Technically, the TDP is a value, in watts, describing how much heat energy a CPU cooler should be designed to dissipate: a processor with a 140W TDP should be paired with a CPU cooler that can dissipate at least 140W in order to avoid temperature spikes and ‘thermal runaway’. Failure to do so will cause the processor to hit its thermal limits and reduce performance to compensate. Normally the TDP is, on average, also a good guide to power consumption: a processor with a TDP of 140W should, in general, consume around 140W of power (plus some efficiency losses).

In the past, particularly with mainstream processors, and even with the latest batch of mainstream parts, Intel has typically kept power consumption well under the rated TDP value. The Core i5-7600K, for example, has a TDP of 95W, yet we measured a power consumption of ~61W, of which ~53W was from the CPU cores. So when we say that Intel has historically been conservative with its TDP values, this is the sort of margin we mean.

With the initial Skylake-X launch, things were a little different. Due to the high all-core frequencies, the new mesh topology, the advent of AVX-512, and the sheer number of cores in play, power consumption was matching the TDP and even exceeding it in some cases. The Core i9-7900X is rated at a 140W TDP, yet we measured 149W, a 6.4% difference. The previous-generation 10-core part, the Core i7-6950X, was also rated at 140W but only drew 111W at load. Intel’s power strategy has changed with Skylake-X, particularly as the core counts ramp up.

Even though we didn’t perform the testing ourselves, our colleagues over at Tom’s Hardware, Paul Alcorn and Igor Wallossek, did extensive power testing on the Skylake-X processors. Along with showing that the power delivery system of the new motherboards works best with substantial heatsinks and active cooling (such as a VRM fan), they showed that with the right overclock, a user can draw over 330W without too much fuss.

The same high values ring true for the two processors in this review, almost alarmingly so. Both the Core i9-7980XE and the Core i9-7960X have a TDP rating of 165W, and we start with the peak headline numbers. Our power testing uses a Prime95 stress test, with the data taken from the internal power management registers that the hardware itself uses to manage power delivery and frequency response. This method is not as accurate as a physical measurement, but it is more universal: it removes the need to tool up every single product, and the hardware uses these same values to make decisions about its performance response.
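
For readers who want to reproduce this sort of measurement at home, below is a minimal sketch of how package power can be polled from the CPU's energy counters. It assumes a Linux system exposing the Intel RAPL package domain through the powercap sysfs interface; this is not necessarily the exact tooling used for this review, just one accessible way to read the same class of registers.

```python
# Minimal sketch: estimate package power by sampling the RAPL energy counter
# exposed by Linux's powercap interface. Paths and the sampling interval are
# illustrative assumptions, not the review's actual test harness.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"               # package energy, microjoules
MAX_RANGE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"  # counter wrap-around point

def read_counter(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def package_power(interval_s: float = 1.0) -> float:
    """Average package power in watts over one sampling interval."""
    wrap = read_counter(MAX_RANGE)
    start = read_counter(ENERGY)
    time.sleep(interval_s)
    end = read_counter(ENERGY)
    delta_uj = (end - start) % wrap       # handle counter wrap-around
    return delta_uj / 1e6 / interval_s    # microjoules -> joules -> watts

if __name__ == "__main__":
    print(f"Package power: {package_power():.1f} W")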

Power: Total Package (Full Load)

At full load, the total package power consumption for the Core i9-7960X is almost on the money, drawing 163W.

However, the Core i9-7980XE goes above and beyond (and not necessarily in a good way). At full load, running an all-core frequency of 3.4 GHz, we recorded a total package power consumption of 190.36W. That is a 25W increase over the TDP value, or a 15.4% gain. Assuming our single CPU is ‘representative’, I’d hazard a guess that the TDP value of this processor should be nearer 190W, or 205W to be on the safe side. Unfortunately, when Intel designed the Basin Falls platform, it was only rated for 165W. This is a case of Intel pushing the margins, perhaps a little too far for some. It will be interesting to get the Xeon-W processors in for equivalent testing.

Our power testing program can also pull out a breakdown of the power consumption, depending on whether the relevant registers are preconfigured in the software. In this case we were also able to pull out values for the DRAM controller(s), although looking at the numbers this reading likely includes the uncore/mesh as well. For both CPUs at load, this DRAM and mesh combination draws ~42W. Removing it from the load power numbers leaves 121W for the 16-core chip (7.5W per core) and 140W for the 18-core chip (7.8W per core).
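
As a quick sanity check, the arithmetic behind those figures works out as below. The wattages are the measured values quoted in the text, and per-core rounding may differ slightly from the numbers above.

```python
# Quick arithmetic on the measured figures quoted in the text above
# (per-core rounding may differ slightly from the article's quoted values).
chips = {
    # name: (rated TDP in W, measured package W, cores-only W, core count)
    "Core i9-7960X":  (165, 163.00, 121.0, 16),
    "Core i9-7980XE": (165, 190.36, 140.0, 18),
}

for name, (tdp, package_w, cores_w, n_cores) in chips.items():
    over_tdp = 100.0 * (package_w - tdp) / tdp
    print(f"{name}: package {package_w:.1f} W ({over_tdp:+.1f}% vs {tdp} W TDP), "
          f"~{cores_w / n_cores:.1f} W per core")
```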

Power: Cores Only (Full Load)

Most of the rise in power consumption, for both the cores and the DRAM, happens by the time the processor is loaded with four threads - the Core i9-7980XE is already drawing 100W+ with four threads active. This is what we expect to see: when the processor is lightly loaded and in turbo mode, a core can consume upwards of 20W, while at full load each core migrates down to a smaller value. We saw the same with Ryzen, which drew 17W per core when lightly threaded and down to 6W per core when fully loaded. Clearly the peak efficiency point for these cores is nearer the 6-8W range than the 15-20W range.

Unfortunately, due to timing, we did not perform any overclocking to see the effect it has on power. There was one number in the review materials we received that will likely be checked with our other Purch colleagues: one motherboard vendor quoted that the Core i9-7980XE, when overclocked to 4.4 GHz, will draw over 500W. I think someone wants IBM’s record. It also means that the choice of CPU cooler is an important factor in all of this: very few off-the-shelf solutions will happily deal with 300W properly, let alone 500W. These processors are unlikely to bring about a boom in custom liquid cooling loops, but professionals who want all the cores as well as peak single-thread performance should start looking at pre-built overclocked systems that emphasize a massive amount of cooling capability.

A Quick Run on Efficiency

Some of our readers have requested a look into efficiency numbers. We’re still working on a good way to represent this data, including taking power readings directly during each benchmark for a fully accurate picture. In the meantime, we’re going to take a benchmark we know hammers every thread of every CPU and put it up against our load power readings.

First up is Corona. We take the benchmark result and divide it by the load power to get an efficiency value, which is then reduced by a constant factor to give a single-digit number.
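
As a rough illustration of the calculation, a sketch is below. The scores, wattages and scaling constant are hypothetical placeholders, not the review's measured data.

```python
# Illustrative efficiency metric: benchmark score divided by load power,
# then scaled by a constant to give a single-digit figure. All numbers here
# are hypothetical placeholders, not the review's measured data.
SCALE = 10_000  # arbitrary constant chosen so results land in single digits

results = {
    # name: (Corona score in rays/sec, load power in W) -- hypothetical values
    "18-core CPU": (5_100_000, 190.0),
    "16-core CPU": (4_700_000, 163.0),
    "8-core CPU":  (2_600_000, 95.0),
}

for name, (score, load_w) in results.items():
    efficiency = score / load_w / SCALE
    print(f"{name}: {efficiency:.1f} (score per watt, scaled)")
```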

In a rendering task like Corona, where all the threads are hammered all the time, both Skylake-X parts out-perform Threadripper for power efficiency, although not by a factor of two. Interestingly, the results show that as clock speeds come down, the Ryzen 7 1700 comes out on top for pure efficiency in this test.

HandBrake’s HEVC efficiency with large frames actually peaks with the Core i5 here, with the 1700 not far behind. All the Skylake-X processors out-perform Threadripper on efficiency.

Comments

  • mapesdhs - Monday, September 25, 2017 - link

    Ian, thanks for the great review! Very much appreciate the initial focus on productivity tasks, encoding, rendering, etc., instead of games. One thing though, something that's almost always missing from reviews like this (ditto here), how do these CPUs behave for platform stability with max RAM, especially when oc'd?

    When I started building oc'd X79 systems for prosumers on a budget, they often wanted the max 64GB. This turned out to be more complicated than I'd expected, as reviews and certainly most oc forum "clubs" achieved their wonderful results with only modest amounts of RAM, in the case of X79 typically 16GB. Mbd vendors told me published expectations were never with max RAM in mind, and it was "normal" for a mbd to launch without stable BIOS support for a max RAM config at all (blimey).

    With 64GB installed (I used two GSkill TridentX/2400 4x8GB kits), it was much harder to achieve what was normally considered a typical oc for a 3930K (mbd was the ASUS P9X79 WS, basically an R4E but with PLEX chips and some pro features), especially if one wanted the RAM running at 2133 or 2400. Talking to ASUS, they were very helpful and advised on some BIOS tweaks not mentioned in their usual oc guides, specifically for cases where all RAM slots were occupied and the density was high, especially a max RAM config. Eventually I was able to get 4.8GHz with 64GB @ 2133.

    However, with the help of an AE expert (this relates to the lack of ECC I reckon), I was also able to determine that although the system could pass every benchmark I could throw at it (all of toms' CPU tests for that era, all 3DMark, CB, etc.), a large AE render (gobbles 40GB RAM) would result in pixel artefacts in the final render which someone like myself (not an AE user) would never notice, but the AE guy spotted them instantly. This was very interesting to me and not something I've ever seen mentioned in any article, ie. an oc'd consumer PC can be "stable" (benchmarks, Prime95 and all the rest of it) but not correct, ie. the memory is sending back incorrect data, but not in a manner that causes a crash. Dropping the clock to 4.7 resolved the issue. Tests like P95 and 3DMark only test parts of a system; a large AE render hammered the whole lot (storage, CPU, RAM and three GTX 580s).

    Thus, could you or will you be able at some point to test how these CPUs/mbds behave with the max 128GB fitted? I suspect you'd find it a very different experience compared to just having 32GB installed, especially under oc'd conditions. It stresses the IMCs so much more.

    I note the Gigabyte specs page says the mbd supports up to 512GB with Registered DIMMs; any chance a memory corp could help you test that? Mind you, I suspect that without ECC, the kind of user who would want that much RAM would probably not be interested in such a system anyway (XEON or EPYC much more sensible).

    Ian.
  • peevee - Monday, September 25, 2017 - link

    "256 KB per core to 1 MB per core. To compensate for the increase in die area, Intel reduced the size of the size of the L3 from 2.5 MB per core to 1.375 MB per core, keeping the overall L2+L3 constant"

    You might want to check your calculator.
  • tygrus - Monday, September 25, 2017 - link

    Maybe Intel saw the AMD TR numbers and had to add 10-15% to their expected freqs. Sure, some of the power that goes to the CPU ends up in RAM et al., but these are expensive room heaters. Intel marketing bunnies thought 165W looked better than 180W to fool the customers.
  • eddieobscurant - Monday, September 25, 2017 - link

    Wow! Another pro-Intel review. I was expecting this, but having graphs displaying Intel's perf/$ advantage, just wow, you've really outdone yourselves this time.

    Of course, I wanted to see how long you're gonna keep delaying the gaming benchmarks of Intel's Core i9 due to the mesh arrangement's horrid performance. I guess you're expecting game developers to fix what can be fixed. It's already been several months, but on Ryzen you were highlighting issues since day 1.

    You tested AMD with 2400 MHz RAM, when you know that performance is affected by anything below 3200 MHz.

    Several different Intel CPUs come and go in your graphs, only to show that a different Intel CPU is better wherever the Core i9 lacks performance and an AMD CPU would come out ahead.

    You didn't even mention the negligible performance difference between the 7960X and 7980XE. Just take a look at the Phoronix review.

    Can this site even get any lower? Anand's name is the only thing keeping it afloat.
  • mkaibear - Tuesday, September 26, 2017 - link

    Erm, there are five graphs on the performance/$ page, and three of them show AMD with a clear perf/$ advantage in everything except the very top end and the very bottom end (and one of the other two is pretty much a tie).

    ...how can you possibly call that a pro-Intel review?
  • wolfemane - Tuesday, September 26, 2017 - link

    And why the heck would you want game reviews on these CPUs anyway? By now we KNOW what the results are gonna be, and they won't be astonishing. More than likely they will be under a 7700K. Game benchmarks are utterly worthless for these CPUs, and any kind of surprise on the reader's part at their lack of overall gaming performance is the reader's fault for not paying attention to previous reviews.
  • Notmyusualid - Tuesday, September 26, 2017 - link

    Sorry to distract gents (and ladies?), and even though I am not a fan of liquid nitrogen, here:

    http://www.pcgamer.com/overclocked-core-i9-7980xe-...
  • gagegfg - Tuesday, September 26, 2017 - link

    EPYC 7551P vs core i9 7980XE

    That is the true comparison, or not?
    $2000 vs $2000
  • IGTrading - Tuesday, September 26, 2017 - link

    That's a perfectly valid comparison with the exception of the fact that Intel's X299 platform will look completely handicapped next to AMD's EPYC based solution and it will have just half of the computational power.
