Power Consumption

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there may be other tradeoffs, such as design complexity or die area, but no one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors rate them with a Thermal Design Power, which has caused some arguments recently based on presentations broadcast about upcoming hardware.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal cooling solution is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also have different ways of rating TDP, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially when I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.
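To make that distinction concrete, here is a minimal sketch of TDP as a cooling budget rather than a power-draw guarantee. The numbers are illustrative assumptions, not vendor specifications:

```python
# A minimal sketch: TDP is a thermal spec for the cooler, not a
# promise about power draw. Figures below are illustrative only.

def cooler_meets_tdp(cpu_tdp_w: float, cooler_rating_w: float) -> bool:
    """A cooler is adequate when its sustained dissipation rating
    covers the chip's TDP; otherwise heat soak becomes a risk."""
    return cooler_rating_w >= cpu_tdp_w

# A 60W TDP part under a 65W-rated cooler is fine thermally, even
# though a power-virus workload could momentarily draw more than 60W.
print(cooler_meets_tdp(cpu_tdp_w=60.0, cooler_rating_w=65.0))  # True
print(cooler_meets_tdp(cpu_tdp_w=91.0, cooler_rating_w=65.0))  # False
```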

So for our power analysis, we do the following: in a system using one reasonably sized memory module per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we look at the long-idle in-Windows power draw, and a mixed AVX power draw given by OCCT (a tool used for stability testing). The difference between the two, with a good power supply that is nice and efficient in the intended range (85%+ from 50W and up), gives us a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA numbers based on the power you are charged for, rather than what the components actually consume. I am working with our PSU reviewer, E.Fylladikatis, to find the best way to do the latter, especially when working at scale.
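For readers who want the arithmetic spelled out, here is a rough sketch of that delta methodology. The readings and the flat efficiency figure are hypothetical, and real PSU efficiency varies with load, which is exactly why we treat the result as qualitative:

```python
# A rough sketch of the long-idle-to-OCCT power-delta methodology.
# All readings and the flat efficiency figure are hypothetical.

ASSUMED_PSU_EFFICIENCY = 0.87  # "85%+ from 50W and up", simplified to a constant

def power_delta(idle_wall_w: float, load_wall_w: float) -> float:
    """Difference between long-idle and OCCT load, measured at the wall."""
    return load_wall_w - idle_wall_w

def estimate_dc_side(wall_w: float, efficiency: float = ASSUMED_PSU_EFFICIENCY) -> float:
    """Very rough conversion from at-wall draw (what you are charged for)
    to what the components consume on the DC side of the supply."""
    return wall_w * efficiency

idle, load = 45.0, 105.0         # hypothetical at-wall readings
delta = power_delta(idle, load)  # 60.0 W at the wall
print(f"At-wall delta: {delta:.0f} W")
print(f"Approx. DC-side delta: {estimate_dc_side(delta):.0f} W")  # ~52 W
```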

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

[Graph: Power Delta (Long Idle to OCCT)]

The Core i3-7350K, by virtue of its higher frequency, seems to require a fair amount of voltage to get up to speed. This is more than enough to push it above and beyond the Core i5, which despite having more cores sits in the nicer part (efficiency-wise) of the voltage/frequency curve. As is perhaps to be expected, the Core i7-2600K uses more power, having four cores with Hyper-Threading and a much higher TDP.

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
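As a ballpark illustration of how the offset works, here is a small sketch; the 49x multiplier and 3x offset are hypothetical values, not BIOS defaults:

```python
# A minimal sketch of how an AVX offset shapes effective frequency.
# The multiplier and offset values are hypothetical, not BIOS defaults.

BCLK_MHZ = 100  # Kaby Lake base clock

def effective_freq_mhz(core_mult: int, avx_offset: int, running_avx: bool) -> int:
    """The AVX offset is a negative multiplier applied only while AVX
    instructions are executing; otherwise the full multiplier applies."""
    mult = core_mult - (avx_offset if running_avx else 0)
    return mult * BCLK_MHZ

# e.g. a 49x core overclock with a 3x AVX offset:
print(effective_freq_mhz(49, 3, running_avx=False))  # 4900 MHz normally
print(effective_freq_mhz(49, 3, running_avx=True))   # 4600 MHz under AVX load
```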

For our testing, we overclocked all cores under all conditions:

The overclocking experience with the Core i3-7350K matched that of our other overclockable processors: around 4.8-5.0 GHz. The stock voltage was particularly high, given that we saw 1.100 volts being fine at 4.2 GHz, but at higher frequencies, depending on the quality of the CPU, it becomes a lot tougher to maintain a stable system. With the Core i3, temperature wasn't really a factor here with our cooler, and even hitting 4.8 GHz was not much of a strain on power consumption either, at only +12W over stock. The critical thing here is voltage and stability, and it would seem that these chips would rather hit the voltage limit first (and our 1.4 V limit is really a bit much for a 24/7 daily system anyway).
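That modest +12W is roughly what the classic dynamic-power relation P ≈ C·V²·f would predict. A back-of-the-envelope sketch, using the quoted 1.100 V at 4.2 GHz and an assumed (not measured) 1.25 V at 4.8 GHz:

```python
# Back-of-the-envelope check on the ~+12W overclocking delta.
# Dynamic power scales roughly as P = C * V^2 * f; the switched
# capacitance C is unknown, so we compare ratios between two points.
# The 4.8 GHz voltage below is an assumption for illustration.

def power_ratio(v1: float, f1: float, v2: float, f2: float) -> float:
    """Ratio of dynamic power between two voltage/frequency points,
    assuming switched capacitance stays constant."""
    return (v2 ** 2 * f2) / (v1 ** 2 * f1)

stock = (1.100, 4.2)      # quoted in the text: 1.100 V held 4.2 GHz
overclock = (1.250, 4.8)  # hypothetical voltage needed for 4.8 GHz

print(f"Dynamic power scaling: ~{power_ratio(*stock, *overclock):.2f}x")  # ~1.48x
# On a chip whose dynamic power is a few tens of watts at stock,
# ~1.5x scaling is consistent with a measured delta on the order of +12W.
```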

A quick browse online shows a wide array of Core i3-7350K results, from 4.7 GHz to 5.1 GHz. Kaby Lake, much like previous generations, is all about the luck of the draw if you want to push the chip to its absolute limit.

Comments

  • realneil - Wednesday, February 8, 2017 - link

    ^^This^^
    Intel is ~finally~ facing some upcoming opposition in the CPU arena and they're trying to fill in some perceived gaps in their CPU lineup.
    After Ryzen is released, expect to see multiple product changes from team blue right away to combat AMD's offerings.
  • CaedenV - Friday, February 3, 2017 - link

    I think it will make more sense with next gen parts. I suspect we are watching a shift in the lineup that is slowly rolling out.
    celeron - dual core
    pentium - entry dual core with HT (limited cache/clock/iGPU)
    i3 - high-end dual core with HT (essentially unchanged)
    i5 - quad core with HT (today's i7)
    i7 - 6-12 core with HT (today's LGA2011 line)

    So why no straight quad core part? Well, 2 reasons.
    1) it probably isn't needed. The original i5 parts were just i7s with broken HT cores that were disabled. I imagine most chips coming out now have perfectly fine HT cores, so they are artificially disabled. This increases the cost of the binning process, and reduces profit on a per-chip basis... especially if they can sell the same part somewhere between today's i5 and i7 price.
    2) Right now I would wager that most home builders buy either an i3 because they are budget conscious, or an i7 because their pride will not let them get anything less than the best. But the i7 they buy is the lower-margin 'consumer' i7 chip rather than the premium-laden LGA2011 i7 chips that make beaucoup bucks on both CPU and chipset sales. Moving the i7 lineup to start at ~$500 instead of ~$280 would more than offset the number of people willing to step down to an i5 chip; even if the step down is in name only and really the i5 would be more affordable while offering traditional i7 performance levels.
    3) Bonus reason: Ryzen chips are expected to land near today's i5/i7 chips in performance, and Intel does not want AMD to be able to say 'our chips are as fast as an i7 but only cost what an i5 does'. Instead, Intel wants its smug users (like myself) to say 'ya, that Ryzen is not a bad chip, but it doesn't hold a candle to my i7'. Real-world benchmarks be damned, it is what people are going to say.
  • Alexvrb - Friday, February 3, 2017 - link

    I wouldn't necessarily bet that more home users buy i7s than i5s. I personally know two gamers that recently built i5 systems because they wanted more oomph than a 2C/4T i3, but didn't want to spend money on an i7. Why? So they could spend more money where it makes the biggest difference... the graphics card. An i5 provides plenty of CPU horsepower for games, and gives you another $100 or so to spend on better graphics.

    I think their judgement was sound. I doubt they are alone in this kind of assessment. I think you're letting your admitted i7 smugness cloud your judgement a little bit.
  • Tunnah - Saturday, February 4, 2017 - link

    I build PCs for my friends, and advise people on what to buy, and I don't know a single person apart from myself who has an i7 (and only know of 1 person who has an i3 but he uses his box for media). i5 is a perfect chip for casual users who use the PC mostly to game.

    Hell the only reason I have an i7 is for Civ VI ha.
  • Alexvrb - Saturday, February 4, 2017 - link

    Bingo!
  • Meteor2 - Sunday, February 5, 2017 - link

    i7s (or Xeons) are nice if you're encoding a lot of x265 video (x265 gives better quality per bitrate than hardware encoders). That's the only desktop use case I can think of.
  • Meteor2 - Sunday, February 5, 2017 - link

    ...or apparently not, according to a comment way below, where a 12C/24T Xeon barely does double digit x265 FPS.
  • bak0n - Monday, February 6, 2017 - link

    Exactly why I have an i5-3570K. My next build will either be a Ryzen (depending on whether it hits expectations), or the i5 of whatever generation is out when I'm ready to buy. Too big of a price jump to the i7 for a hard-core 1080p maxed-settings gamer, but not too much of a price jump over the i3s. That is, until now with the unlocked i3, which I may actually give a second look.
  • DiHydro - Monday, February 6, 2017 - link

    I fit into that category with one caveat: I also do some 3D modeling and rendering. This pushed me to the i7-2600K about four years ago, and I still don't feel that my CPU is the limiting factor in my PC.
  • Byte - Saturday, February 4, 2017 - link

    Very true, most customers want the best or the cheapest. Changing the lineup like that would make it easier.
