Power Consumption

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there may be other tradeoffs, such as design complexity or die area, but no-one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors quote a Thermal Design Power (TDP), a figure that has caused some arguments recently, based on presentations broadcast about upcoming hardware.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal solution is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also rate TDP differently, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially as I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.

So for our power analysis, we do the following: in a system using one reasonably sized memory stick per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we look at the long-idle in-Windows power draw and a mixed AVX power draw given by OCCT (a tool used for stability testing). With a good power supply that is nice and efficient in the intended range (85%+ from 50W and up), the difference between the two gives a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA numbers based on the power you are charged for, rather than what the components actually consume. I am working with our PSU reviewer, E.Fylladikatis, to find the best way to do the latter, especially when working at scale.
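To put that methodology in concrete terms, here is a minimal sketch of the idle-to-load delta calculation described above; the readings, the helper names, and the flat 87% efficiency figure are illustrative assumptions for the example, not measurements from this testbed.

```python
# Sketch of the idle-to-load delta we chart, and why the at-wall figures
# are qualitative. All readings and the flat 87% efficiency value are
# illustrative assumptions, not numbers from this review.

ASSUMED_PSU_EFFICIENCY = 0.87  # an "85%+ from 50W and up" class unit

def power_delta(idle_wall_w: float, occt_wall_w: float) -> float:
    """At-wall delta between long idle and OCCT load (what the chart reports)."""
    return occt_wall_w - idle_wall_w

def estimated_dc_delta(idle_wall_w: float, occt_wall_w: float) -> float:
    """Rough DC-side delta, assuming a flat efficiency curve.

    Real supplies are not flat across the load range, and the meter reads
    VA rather than true watts, which is why these numbers are comparative
    rather than absolute.
    """
    return power_delta(idle_wall_w, occt_wall_w) * ASSUMED_PSU_EFFICIENCY

# Example with made-up readings:
print(power_delta(38.0, 92.0))        # 54.0 VA at the wall
print(estimated_dc_delta(38.0, 92.0)) # ~47.0 W estimated on the DC side
```

Since efficiency and power factor both vary with load, the chart is best read as a comparison between chips on the same testbed rather than as absolute CPU power.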

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

Power Delta (Long Idle to OCCT)

The Core i3-7350K, by virtue of its higher frequency, seems to require a good amount of voltage to get up to speed. This is more than enough to push it above and beyond the Core i5, which, despite having more cores, sits in the nicer part (efficiency-wise) of the voltage/frequency curve. As is perhaps to be expected, the Core i7-2600K uses more power, having four cores with hyperthreading and a much higher TDP.

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
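As a rough sketch of the arithmetic involved, assuming the usual ratio-times-BCLK relationship and a stock 100 MHz base clock (the function and values below are illustrative, not settings from this review):

```python
# Minimal sketch of how an AVX offset changes effective frequency, assuming
# the usual ratio * BCLK arithmetic; the numbers are illustrative only.

BCLK_MHZ = 100.0  # base clock assumed at the stock 100 MHz

def effective_frequencies(core_ratio: int, avx_offset: int, bclk: float = BCLK_MHZ):
    """Return (non-AVX, AVX) core frequencies in MHz.

    The offset is applied as a negative multiplier only while AVX code is
    executing, so regular workloads keep the full overclock.
    """
    non_avx = core_ratio * bclk
    avx = (core_ratio - avx_offset) * bclk
    return non_avx, avx

# e.g. a 49x all-core overclock with a 3x AVX offset:
print(effective_frequencies(49, 3))  # (4900.0, 4600.0) MHz
```

The upshot is that the headline overclock applies to regular code, while AVX-heavy workloads automatically drop by the offset rather than crashing the system at the full multiplier.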

For our testing, we overclock all cores under all conditions:

The overclocking experience with the Core i3-7350K matched that of our other overclockable processors - around 4.8-5.0 GHz. The stock voltage was particularly high, given that we saw 1.100 volts being fine at 4.2 GHz. But at higher frequencies, depending on the quality of the CPU, it becomes a lot tougher to maintain a stable system. With the Core i3, temperature wasn't really a factor here with our cooler, and even hitting 4.8 GHz was not much of a strain on power consumption either - only +12W over stock. The critical thing here is voltage and stability, and it would seem that these chips would rather hit the voltage limit first (and our 1.4 V limit is really a bit much for a 24/7 daily system anyway).

A quick browse online shows a wide array of Core i3-7350K results, from 4.7 GHz to 5.1 GHz. Kaby Lake, much like previous generations, is all about the luck of the draw - if you want to push it to the absolute limit.

Comments

  • BillBear - Friday, February 3, 2017 - link

    For consumers who intend to purchase a discrete GPU card, it's interesting to see it confirmed that Intel could include four additional CPU cores instead of the (for you) unnecessary integrated GPU within pretty much the exact same die size.

    It wouldn't cost them more to manufacture than an i7. They just want to be able to charge more money by forcing you into a different price range of product if you need many cores.

    For instance: Intel’s new 10-core Core i7 Extreme Edition costs a whopping $1,723

    http://www.geek.com/tech/intels-new-10-core-core-i...
  • fanofanand - Friday, February 3, 2017 - link

    I'm not sure I agree with your assessment. Intel has a vested interest in pushing people like you into the HEDT platform which is far more profitable. If you have a powerful dGPU then you are not "mainstream" by Intel's thinking. Based on the number of computers that have no dGPU maybe they are right.
  • BillBear - Friday, February 3, 2017 - link

    You're just defending price gouging now.
  • fanofanand - Sunday, February 5, 2017 - link

    I am not defending anything, I am neither an Intel shareowner nor have they seen a penny of mine in a decade. I am saying that what they are doing makes business sense even if it doesn't suit you as well as you'd like.
  • BillBear - Sunday, February 5, 2017 - link

    For someone who frequently responds to other people's posts with silly BS like "This post brought to you by CompanyName" why are you suddenly defending price gouging on Intel's part?

    Price gouging makes business sense for any company. Tacking on an additional thousand dollars per part? Not really defensible.
  • block2 - Friday, February 3, 2017 - link

    The CPU I want is one that costs me the least electricity for the 95% of the time when I'm on Facebook and surfing, yet supports Photoshop well. I bought a gold-rated Seasonic PSU a couple years ago and my overall power usage is very low (40W?). Really, a low-end i3 ought to suffice for me. I'm still using a 2.8GHz AMD Phenom II with CPU throttling enabled (800MHz).
  • Scipio Africanus - Friday, February 3, 2017 - link

    Have you considered one of the Intel T suffix cores? They don't get a lot of coverage but give you the newest architecture with a very low TDP. The current Kaby Lake T cores are 35W TDP.

    The newest review I could find was for Haswell T cores:
    http://www.anandtech.com/show/8774/intel-haswell-l...
  • Scipio Africanus - Friday, February 3, 2017 - link

    It takes Intel 6 years to post a 25% increase in single-threaded performance? Yeesh. Competition (read: Ryzen) can't come fast enough. My Sandy Bridge Dell Precision is staying put for a bit more.
  • StrangerGuy - Friday, February 3, 2017 - link

    And AMD did what exactly during the same 6 years? Sheesh.
  • silverblue - Saturday, February 4, 2017 - link

    Well, they've transformed a power-hungry server architecture into something that the majority of users could use in a mobile device without complaining about power or performance. I'm also pretty sure that if they had commissioned Zen even before Bulldozer's release, we still wouldn't have seen anything until recently. I'm not going to defend them for Bulldozer, but it was either trash it and work on a replacement with no money coming in or at least try to fix its problems with power, L1 and decoder, and redesign the FPU. A 25%+ IPC boost from Bulldozer to Excavator, despite the loss of L3, would have been much better received had Bulldozer been better to begin with. That's what AMD have been doing.
