Power Consumption

As with all the major processor launches of the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there might be other tradeoffs, such as design complexity or die area, but no one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors do so with a Thermal Design Power, which has caused some arguments recently based on presentations broadcast about upcoming hardware.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal cooling mechanism is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also rate TDP differently, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially as I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.
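As a loose illustration of how a TDP rating feeds into cooler selection (the helper function, the headroom parameter, and the wattages below are hypothetical, not any vendor's formula), the check a system designer performs amounts to:

```python
# Illustrative sketch only: a thermal solution must sustain at least the
# part's TDP rating to avoid heat soak under prolonged load.
def cooler_is_sufficient(tdp_watts: float, cooler_rating_watts: float,
                         headroom: float = 1.0) -> bool:
    """Return True if the cooler's rated dissipation covers the part's TDP.

    headroom > 1.0 adds margin, e.g. for overclocking or for vendors
    whose TDP reflects a 'real-world peak' rather than a power virus.
    """
    return cooler_rating_watts >= tdp_watts * headroom

print(cooler_is_sufficient(65, 95))        # 95W cooler on a 65W part -> True
print(cooler_is_sufficient(65, 65, 1.25))  # same part with 25% margin -> False
```

The headroom factor is where the Intel/AMD rating differences bite: a cooler sized exactly to a "real-world peak" TDP may still fall short under an all-AVX load.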

So for our power analysis, we do the following: in a system using one reasonably sized memory module per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we measure the long-idle in-Windows power draw and a mixed AVX power draw given by OCCT (a tool used for stability testing). The difference between the two, with a good power supply that is nice and efficient in the intended range (85%+ from 50W and up), gives a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA numbers based on the power you are charged for, rather than actual consumption. I am working with our PSU reviewer, E.Fylladikatis, to find the best way to do the latter, especially when working at scale.
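The arithmetic behind the charts can be sketched as follows; the readings and the 85% efficiency figure here are illustrative assumptions, not measured data:

```python
# Hedged sketch of the methodology described above: the reported figure is
# the delta between long-idle and OCCT load, both measured at the wall.
def power_delta(idle_watts: float, load_watts: float) -> float:
    """Power delta (long idle to OCCT), as plotted in the charts."""
    return load_watts - idle_watts

def estimate_dc_draw(at_wall_watts: float, psu_efficiency: float = 0.85) -> float:
    """Very rough DC-side estimate from an at-wall reading.

    At-wall numbers reflect what you are billed for; multiplying by an
    assumed PSU efficiency only approximates component consumption.
    """
    return at_wall_watts * psu_efficiency

delta = power_delta(idle_watts=52.0, load_watts=118.0)  # hypothetical readings
print(delta)                    # 66.0 W at the wall
print(estimate_dc_draw(delta))  # ~56.1 W estimated on the DC side
```

This is also why the comparison is qualitative: the PSU's efficiency varies with load, so the at-wall delta overstates the silicon's own consumption by a load-dependent amount.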

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

Power Delta (Long Idle to OCCT)

The Core i3-7350K, by virtue of its higher frequency, requires a fair amount of voltage to get up to speed. This is enough to push its power draw above and beyond that of the Core i5, which despite having more cores sits in the nicer part (efficiency-wise) of the voltage/frequency curve. As is perhaps to be expected, the Core i7-2600K uses more power still, having four cores with Hyper-Threading and a much higher TDP.

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
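The mechanics of the offset are simple to sketch (the 100 MHz base clock and the multipliers below are typical values chosen for illustration, not readings from our test system):

```python
# Sketch of an AVX offset as exposed in Kaby Lake BIOS options: a negative
# multiplier applied only while AVX instructions are executing.
BCLK_MHZ = 100.0  # typical base clock

def core_frequency(multiplier: int, avx_offset: int = 0,
                   running_avx: bool = False) -> float:
    """Effective core frequency in MHz, with the AVX offset in bins."""
    effective = multiplier - (avx_offset if running_avx else 0)
    return effective * BCLK_MHZ

print(core_frequency(49))                                  # 4900.0 MHz normally
print(core_frequency(49, avx_offset=3, running_avx=True))  # 4600.0 MHz under AVX
```

So a 49x overclock with a 3-bin AVX offset runs at 4.9 GHz for ordinary code but drops to 4.6 GHz the moment heavy AVX work arrives, trading a little throughput for stability.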

For our testing, we overclocked all cores under all conditions:

The overclocking experience with the Core i3-7350K matched that of our other overclockable processors - around 4.8-5.0 GHz. The stock voltage was particularly high, given that we saw 1.100 volts being fine at 4.2 GHz. But at the higher frequencies, depending on the quality of the CPU, it becomes a lot tougher to maintain a stable system. With the Core i3, temperature wasn't really a factor here with our cooler, and even hitting 4.8 GHz was not much of a strain on power consumption either - only +12W over stock. The critical thing here is voltage and stability, and it would seem that these chips hit the voltage limit first (and our 1.4 V limit is really a bit much for a 24/7 daily system anyway).

A quick browse online shows a wide array of Core i3-7350K results, from 4.7 GHz to 5.1 GHz. Kaby Lake, much like previous generations, is all about the luck of the draw - if you want to push it to the absolute limit.

Comments

  • Ian Cutress - Friday, February 3, 2017 - link

    Next test bed update will be on W10. I keep getting mixed reactions recently from W10/W7/Linux users on this front - some want to see W10 poweeeeeer, others want default. But for DX12 it'll have to change over.
  • CaedenV - Friday, February 3, 2017 - link

    Benchmarking in Win10 is... well... difficult. The OS has too many automatic features, so it is hard to get consistent results. You still get better overall performance... but not consistent performance. Win7 is gloriously dumb and gives very clear numbers to make very easy comparisons.
  • Flunk - Friday, February 3, 2017 - link

    It's a bit sad that you can compare any CPU from 2011 to one from 2017 and have them match up like this. In the 90's a CPU that was 6 years newer was many times faster than the older one. Is it lack of competition? Or have we just hit the wall with silicon chip technology?
  • Ro_Ja - Friday, February 3, 2017 - link

    Back in the days it was all about higher clock speed = faster. Nowadays it's a bit complex for me :\
  • BrokenCrayons - Friday, February 3, 2017 - link

    It's probably a combination of both, but I'd go out on a limb and say it's mostly due to technology and not so much market forces. Intel's primary competition for new processor models really ends up being its own prior generations. If the company wants to land sales, it needs to offer a compelling incentive to upgrade.

    There's also Intel's efforts to reduce TDP over successive generations (something the company would probably not do were there more credible competitive forces in the market). Those reductions are probably a side effect of a mobile-first perspective in modern CPU design, but there's something nice about buying a reasonably powerful 35W desktop processor and not having to worry about copper-pipe festooned tower coolers with 120mm fans strapped on them just to keep your chip happy. If I were to build a new desktop, I'd entertain a T-series part before exploring any other option.
  • StrangerGuy - Friday, February 3, 2017 - link

    It's funny that we got big perf/watt increases over the past few years in CPUs and GPUs, yet somehow everyone is still buying massive overkill 650W+ PSUs where most systems would struggle to even draw 1/3 of the PSU's rated wattage at load.

    I'm pretty confident that an undervolted i5 7400 and GTX 1060 (60W @ 1600MHz according to THG) would be able to draw <100W at the wall in a normal gaming load with an efficient enough PSU...
  • fanofanand - Friday, February 3, 2017 - link

    Because MOAR POWER and marketing. Seriously, they sell the high power PSUs for a LOT more than the lower powered PSUs; it's going to take consumers buying 300-450W PSUs en masse before the manufacturers adjust. Your theory operates under false assumptions, however. The 1060 boosts up well beyond 1600 and will consume far more than 60 watts, and there are efficiency losses in the PSU and throughout your system. Go ahead and try to run a 1060 and an undervolted i5, see what happens.
  • t.s - Friday, February 3, 2017 - link

    He said normal gaming. His number is quite possible - with a good mobo, SSD, and no optical drive.
  • fanofanand - Friday, February 3, 2017 - link

    No, it's not. For typical gaming the 1060 consumes between 90-120 watts. So please do tell me how his system with a 100 watt GPU is going to consume less than 100 watts with a CPU, mobo, RAM, etc.?
  • hybrid2d4x4 - Friday, February 3, 2017 - link

    As a point of reference, I have a 1060 in a i5 4670 system running a 400W Platinum PSU. All stock clocks, 1 SSD, 1 HDD. Peak power in games measured at the wall is ~200W (180-200 depending on which AAA game), so I doubt <100W is doable.
    But agree with the commentary about how overkill most PSUs are.
