Power Consumption

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there may be other tradeoffs, such as design complexity or die area, but no one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors do so with a Thermal Design Power, which has caused some arguments recently based on presentations broadcast about upcoming hardware.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that the appropriate thermal cooling mechanism is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Both Intel and AMD also have different ways of rating TDP, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially when I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.
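To put that cooling requirement into context, the back-of-the-envelope sketch below works out the maximum case-to-ambient thermal resistance a cooler can have for a given TDP and temperature budget. The 70°C case limit and 25°C ambient are illustrative assumptions, not figures from Intel or AMD.

```python
# Rough cooler sizing against a TDP rating.
# The temperature figures below are illustrative assumptions only.

def max_thermal_resistance(tdp_w: float, t_case_max_c: float, t_ambient_c: float) -> float:
    """Highest case-to-ambient thermal resistance (degrees C per watt) a cooler
    can have while still dissipating the full TDP within the temperature budget."""
    return (t_case_max_c - t_ambient_c) / tdp_w

# Example: a 65W TDP part with an assumed 70C case limit in a 25C room.
print(round(max_thermal_resistance(65, 70, 25), 2))  # ~0.69 C/W required of the cooler
```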

So for our power analysis, we do the following: in a system using one reasonably sized memory stick per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we look at the long idle in-Windows power draw, and a mixed AVX power draw given by OCCT (a tool used for stability testing). The difference between the two, measured with a good power supply that is nice and efficient in the intended range (85%+ from 50W and up), gives us a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA numbers based on the power you are charged for, rather than what the components actually consume. I am working with our PSU reviewer, E.Fylladikatis, to find the best way to do the latter, especially when working at scale.
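In other words, the comparison boils down to simple arithmetic: subtract the long-idle at-wall reading from the OCCT at-wall reading, and optionally scale by the supply's efficiency for a rough DC-side estimate. A minimal sketch of that calculation, using hypothetical readings rather than our measured data:

```python
# Power-delta comparison as described above.
# The 45W/105W readings and the 87% efficiency figure are hypothetical examples.

def power_delta(idle_wall_w: float, load_wall_w: float, psu_efficiency: float = 1.0) -> float:
    """At-wall delta between long idle and OCCT load, optionally scaled by a
    flat PSU efficiency factor to approximate the DC-side difference."""
    return (load_wall_w - idle_wall_w) * psu_efficiency

print(power_delta(45.0, 105.0))        # 60.0 W delta at the wall
print(power_delta(45.0, 105.0, 0.87))  # ~52.2 W rough DC-side estimate
```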

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

Power Delta (Long Idle to OCCT)

The Core i3-7350K, by virtue of its higher frequency, seems to require a good voltage to get up to speed. This is more than enough to go above and beyond the Core i5, which, despite having more cores, sits in the nicer part (efficiency-wise) of the voltage/frequency curve. As is perhaps to be expected, the Core i7-2600K uses more power, having four cores with hyperthreading and a much higher TDP.

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
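The arithmetic is straightforward: the offset is a multiplier subtracted from the core ratio whenever AVX code is executing. The sketch below uses hypothetical BIOS settings, not values we dialed in for this review:

```python
# Effective clocks with an AVX offset. All settings here are hypothetical.

BCLK_MHZ = 100      # base clock
CORE_RATIO = 48     # main overclock multiplier (4.8 GHz)
AVX_OFFSET = 3      # negative offset applied while AVX instructions are running

normal_mhz = BCLK_MHZ * CORE_RATIO
avx_mhz = BCLK_MHZ * (CORE_RATIO - AVX_OFFSET)

print(normal_mhz)  # 4800 MHz for non-AVX code
print(avx_mhz)     # 4500 MHz when AVX work comes along
```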

For our testing, we overclock all cores under all conditions:

The overclocking experience with the Core i3-7350K matched that of our other overclockable processors - around 4.8-5.0 GHz. The stock voltage was particularly high, given that we saw 1.100 volts being fine at 4.2 GHz. But at the higher frequencies, depending on the quality of the CPU, it becomes a lot tougher to maintain a stable system. With the Core i3, temperature wasn't really a factor here with our cooler, and even hitting 4.8 GHz was not much of a strain on power consumption either - only +12W over stock. The critical thing here is voltage and stability, and it would seem that these chips would rather hit the voltage limit first (and our 1.4 V limit is really a bit much for a 24/7 daily system anyway).

A quick browse online shows a wide array of Core i3-7350K results, from 4.7 GHz to 5.1 GHz. Kaby Lake, much like previous generations, is all about the luck of the draw if you want to push it to the absolute limit.

Comments

  • CaedenV - Friday, February 3, 2017 - link

    Seems to me you only hit the CPU wall when dealing with multiple GPUs. For most games, with a single GPU, an i3 is plenty. Considering an i3 does not have enough PCIe lanes to support multiple GPUs, this is a rather moot point.
  • Aerodrifting - Saturday, February 4, 2017 - link

    Like I said, You don't play any CPU demanding games so you have no right to make those ridiculous comments. Take battlefield 1 for example, Good luck in a 64 player map with i3.
  • Michael Bay - Saturday, February 4, 2017 - link

    >plays bf1
    >blabs about rights

    You're like a perfect example.
  • Aerodrifting - Saturday, February 4, 2017 - link

    Nice trolling, loser.
    I am simply making a point: there are tons of games that can bottleneck an i3, and Battlefield 1 is just one example. Stop lying to others that "an i3 can game just fine like an i7"; it's very misleading and misinformed.
  • fanofanand - Sunday, February 5, 2017 - link

    Considering the length of time Dr Cutress has been reviewing CPUs and gaming, any notion that he is not fit to be reviewing gaming CPUs is absurd.
  • Aerodrifting - Sunday, February 5, 2017 - link

    The notion that someone who is good at theorycraft reviews must be an expert on gaming PCs is absurd. One minute of a benchmark run in single-player mode suddenly makes you an expert on gaming computers? Give me a break.
  • BrokenCrayons - Thursday, February 9, 2017 - link

    If you doubt the validity of the claims made in these articles in spite of the years of experience the writers have AND the supporting evidence of their work, then it's rather odd you'd read any of these reviews at all. We can infer from your responses that you feel insecure about your purchase decisions, feel compelled to defend them aggressively, and that you're dismissing the evidence at hand even though you personally find it useful so you can justify the defensiveness -- more to yourself than others here because honestly, why should the opinions ruffle your feathers if there's genuinely no doubt in your mind that you feel you're as correct as you claim?
  • Aerodrifting - Saturday, February 11, 2017 - link

    Nice job coming up with such rhetoric yet offering no concrete evidence in your argument. I do NOT doubt the validity of the claims, I KNOW they are WRONG for a fact. Your reviewers draw conclusions from incomplete tests and benchmarks that cannot represent real-life results, therefore the conclusion is wrong. I have been playing video games and playing around with hardware since you guys were playing with sand, before this website was even established, yet you want to talk about "years of experience"? I am not defending anything, I am simply pointing out that you are wrong and you are misleading people. You are just butthurt because finally someone has a different opinion and they are actually right.
    You want evidence? Let's save the rhetoric and go straight to facts: there are games that can severely bottleneck an i3 or even an i5, and Battlefield is just one of them. It doesn't matter what video card you use, join a 64-player game and you can see your CPU usage go over 90% and the game start stuttering on an i3 / i5.
  • Ratman6161 - Friday, February 3, 2017 - link

    "there will be a time where a Core i3 based CPU will match the performance of the older Core i7-2600K"

    Maybe, but not today! I'm still patting myself on the back for my purchase of the i7-2600K back in the spring of 2011. My first computer ran an 8088, and every computer I owned from then until 2011, whether off the shelf or self-built, left me constantly on the upgrade treadmill looking for more speed. But with the 2600K I finally hit good enough (other than adding more RAM and replacing spinning disks with SSDs along the way). While it's fun to read about the new stuff, I don't really see myself upgrading again until either the CPU or (more likely) the motherboard gives out.
  • CaedenV - Friday, February 3, 2017 - link

    Yep! I have upgraded the RAM several times, the GPU a few times, and moved from HDD to SSD, all of which have kept my 2600 very happy.
    What is going to get me to change over is the motherboard-level improvements. I can't get a much faster GPU without hitting a PCIe 2.0 bottleneck. NVMe, while impractical (real-world tests show little to no improvement), has me drooling for an upgrade. 4-10 gigabit Ethernet is going to be more and more popular going forward. DDR4 is finally maturing to a point where it is actually better than DDR3. All new devices run USB-C, and my piddly 4 USB 3.0 ports can't do them all, and certainly not at full speed.

    It is literally everything around the CPU that is slowly making me want to upgrade, not the CPU itself. But even these improvements need another year or two to bake before I am really thinking about an upgrade. I am just unfortunately very happy with what I have, and I have one more GPU upgrade (looking forward to something like a GTX1170) before I will really need a new system.
    Who knows, this might be my last 'real' PC. In a few years it may make more sense to have a central gaming server that runs all the games, and then stream them to an end-user PC that is no more powerful than a cell phone.... or just dock my cell phone.
