Power Consumption

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there may be other tradeoffs, such as design complexity or die area, but no one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors rate them with a Thermal Design Power, which has caused some arguments recently based on presentations broadcast about upcoming hardware.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal solution is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also rate TDP differently, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially when I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.

So for our power analysis, we do the following: in a system using one reasonably sized memory stick per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we measure the long-idle in-Windows power draw, and a mixed AVX power draw given by OCCT (a tool used for stability testing). The difference between the two, with a good power supply that is nice and efficient in the intended range (85%+ from 50W and up), gives a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA numbers, reflecting the power you are charged for rather than what the components actually consume. I am working with our PSU reviewer, E.Fylladikatis, in order to find the best way to do the latter, especially when working at scale.
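As a rough illustration of the arithmetic involved, the sketch below computes the idle-to-load delta and an estimated DC-side figure; the wattmeter readings and the 85% efficiency value are hypothetical placeholders, not measurements from this review:

```python
# Minimal sketch of the power-delta methodology described above.
# The at-wall readings and efficiency figure are hypothetical
# placeholders, not measured values from this review.

def power_delta(idle_w: float, load_w: float) -> float:
    """At-wall delta between long idle and OCCT load."""
    return load_w - idle_w

def estimate_dc_draw(wall_w: float, psu_efficiency: float = 0.85) -> float:
    """Rough DC-side estimate: at-wall draw scaled by PSU efficiency."""
    return wall_w * psu_efficiency

idle, load = 38.0, 98.0           # hypothetical at-wall readings (W)
delta = power_delta(idle, load)   # 60 W at the wall
print(f"At-wall delta: {delta:.0f} W")
print(f"Estimated DC-side delta: {estimate_dc_draw(delta):.0f} W")
```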

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

Power Delta (Long Idle to OCCT)

The Core i3-7350K, by virtue of its higher frequency, seems to require a fairly high voltage to reach its rated speed. This is more than enough to push its power draw above that of the Core i5, which, despite having more cores, sits in the nicer (more efficient) part of the voltage/frequency curve. As is perhaps to be expected, the Core i7-2600K uses more power, having four cores with hyperthreading and a much higher TDP.

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
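As a rough sketch of how the offset works, the final AVX multiplier is simply the core ratio minus the offset, applied to the base clock; the ratio, offset, and BCLK values below are hypothetical examples, not our tested settings:

```python
# Minimal sketch of how an AVX offset maps to clock speed.
# Multiplier, offset, and BCLK values are hypothetical examples.

def effective_freq_ghz(core_ratio: int, avx_offset: int, bclk_mhz: float = 100.0) -> float:
    """Final multiplier is the core ratio minus the AVX offset."""
    return (core_ratio - avx_offset) * bclk_mhz / 1000.0

core_ratio = 48   # hypothetical 4.8 GHz all-core overclock
avx_offset = 3    # drop 3 bins when AVX code is running

print(f"Non-AVX: {effective_freq_ghz(core_ratio, 0):.1f} GHz")
print(f"AVX:     {effective_freq_ghz(core_ratio, avx_offset):.1f} GHz")
```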

For our testing, we overclock all cores under all conditions:

The overclocking experience with the Core i3-7350K matched that of our other overclockable processors - around 4.8-5.0 GHz. The stock voltage was particularly high, given that we saw 1.100 volts being fine at 4.2 GHz. But at higher frequencies, depending on the quality of the CPU, it becomes a lot tougher to maintain a stable system. With the Core i3, temperature wasn't really a factor here with our cooler, and even hitting 4.8 GHz was not much of a strain on power consumption either - only +12W over stock. The critical thing here is voltage and stability, and it would seem that these chips hit the voltage limit first (and our 1.4 V limit is really a bit much for a 24/7 daily system anyway).
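To first order, CMOS dynamic power scales with the square of voltage and linearly with frequency (P ≈ C·V²·f), which is why voltage is the critical knob here. A minimal sketch, using the 1.100 V / 4.2 GHz point mentioned above and a hypothetical overclocked voltage (the review only establishes a 1.4 V ceiling, not the actual voltage needed at 4.8 GHz):

```python
# First-order CMOS dynamic power scaling: P ~ C * V^2 * f.
# Illustrative only: the 1.35 V figure for 4.8 GHz is hypothetical;
# the text only states a 1.4 V ceiling for 24/7 use.

def relative_dynamic_power(v1: float, f1: float, v2: float, f2: float) -> float:
    """Ratio of dynamic power at (v2, f2) versus (v1, f1)."""
    return (v2 / v1) ** 2 * (f2 / f1)

base_v, base_f = 1.100, 4.2   # stable point noted above
oc_v, oc_f = 1.35, 4.8        # hypothetical overclocked point

ratio = relative_dynamic_power(base_v, base_f, oc_v, oc_f)
print(f"~{ratio:.2f}x dynamic power vs the 4.2 GHz / 1.100 V point")
```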

A quick browse online shows a wide array of Core i3-7350K results, from 4.7 GHz to 5.1 GHz. Kaby Lake, much like previous generations, is all about the luck of the draw if you want to push it to the absolute limit.

Comments

  • Flunk - Monday, February 6, 2017 - link

    Yeah, that is funny. I'm using a massively overpowered PSU myself. I have an 850W unit running a system with a moderately-overclocked i7-6700k and a GeForce 1070. Had it left over from my previous massively overclocked i5-2500k and dual Radeon 7970s, and even if it's aged badly (which it probably hasn't; it's only a few years old) it should still be good for ages, especially as under-stressed as it is now.
  • fanofanand - Friday, February 3, 2017 - link

    Or you could just get Ryzen with the wraith cooler :)
  • BrokenCrayons - Friday, February 3, 2017 - link

    Perhaps when they're available for purchase I'll look into it. I'm interested in seeing what AMD does with mobile Ryzen, integrated graphics, and HBM for CPUs (unlikely) and how it changes laptop computing.
  • fanofanand - Friday, February 3, 2017 - link

    The rumor mill has been churning and the consensus is that APUs will be available in 2018 with HBM. That will be a game changer not just for mobile computing, but for small form factors as well. At least theoretically; experience tells me we should wait for reviews before deciding how profound the impact will be.
  • Flunk - Monday, February 6, 2017 - link

    The Wraith cooler is both marginal and loud compared to quality aftermarket coolers that cost as little as $35. Sure it's better than the last AMD stock cooler, but that's more a case of the last AMD stock cooler being total garbage.
  • bananaforscale - Wednesday, February 8, 2017 - link

    Hey, no dissing huge air coolers! :D (Yeah, I have one and it's so big it largely dictated the case selection. Does keep a hexcore Bulldozer at 52 degrees at 4 GHz tho.) There's also the niggle on the Intel side that their enthusiast line has only made it to Broadwell-E, so that's what I'll be upgrading to. A huge upgrade in IPC (which probably won't rise much in the next few years), more cores and lower power use per core. I figure I'll be upgrading next around 2025. :D I'm pondering whether I should go AIO liquid or custom...
  • MonkeyPaw - Friday, February 3, 2017 - link

    More emphasis is going into the IGP.
  • CaedenV - Friday, February 3, 2017 - link

    I doubt it is competition. I mean, lack of competition certainly explains the price per performance not coming down even though the manufacturing costs are getting cheaper, but I think that we have hit a performance wall.
    With every die shrink we can get more performance per watt... but the die is also more heat-sensitive, which kills stability at higher clocks. The idea that you can hit 5GHz on the new chips is nothing short of a miracle! But without a major increase in clock speed, your performance is limited by the instruction sets and execution model... and that is much harder to change.
    And that isn't hard to change because of competition. That is hard to change because PCs live and die by legacy applications. If I can't go back and play my 20 year old games every 3-4 years then I am going to get rather annoyed and not upgrade. If businesses can't run their 20 year old software every day, then they will get annoyed and not upgrade.
    I think we are rather stuck with today's performance until we can get a new CPU architecture on the market that is as good as ARM on the minimum power consumption side, but as efficient as x86 on the performance per watt side... but whatever chip comes out will have to be able to emulate today's x86 technology fast enough to feel like it isn't a huge step backwards... and that is going to be hard to do!
  • xenol - Friday, February 3, 2017 - link

    AnandTech, please do frame-time tests for games as well. Average frame rate is good and all, but if the processor causes dips in games, that could lead to an unpleasant experience.
  • Mr Perfect - Friday, February 3, 2017 - link

    I would also be interested in seeing this.

    The site slips my mind, but somewhere tested multiple generations of i7s, i5s and i3s for minimum framerate, and even the oldest i7s had a more consistent framerate than the newest i3s. It would be interesting to get AT's take on this.
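For reference, the frame-time consistency metric these comments are asking for can be sketched from a per-frame log; the frame times below are hypothetical samples (real data would come from a capture tool such as FRAPS or PresentMon):

```python
# Hypothetical per-frame render times in milliseconds; a few slow
# frames drag down the experience far more than the average suggests.
frame_times_ms = [16.7, 16.9, 17.1, 16.8, 33.4, 16.7, 16.9, 41.2, 16.8, 17.0]

def percentile(data, pct):
    """Nearest-rank percentile, no external dependencies."""
    ordered = sorted(data)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
p99 = percentile(frame_times_ms, 99)  # worst ~1% of frames
print(f"Average FPS: {avg_fps:.1f}")
print(f"99th percentile frame time: {p99:.1f} ms ({1000.0/p99:.1f} FPS)")
```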
