In the Words of Jeremy Clarkson: POWEEEERRRRR

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there may be other tradeoffs, such as design complexity or die area, but no-one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors rate them with a Thermal Design Power (TDP), a figure that has caused some arguments recently off the back of certain performance presentations.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal solution is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also have different ways of rating TDP, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially when I’m going to say that while TDP isn’t power, it’s still a pretty good metric of what you should expect to see in terms of power draw in prosumer-style scenarios.

So for our power analysis, we do the following: in a system using one reasonably sized memory stick per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we look at the long-idle in-Windows power draw, and a mixed AVX power draw given by OCCT (a tool used for stability testing). The difference between the two, measured with a good power supply that is nicely efficient in the intended range (85%+ from 50W and up), gives us a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA readings based on the power you are charged for, rather than actual consumption. I am working with our PSU reviewer, E.Fylladikatis, in order to find the best way to do the latter, especially when working at scale.
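For clarity, here is a minimal sketch of how the quoted delta figure comes about, assuming hypothetical at-wall readings (the wattages below are made-up examples for illustration, not measured values):

```python
# Hypothetical illustration of the 'power delta' metric used in this review.
# The readings below are invented examples, not measured data.

def power_delta(idle_watts: float, occt_watts: float) -> float:
    """Difference between the long-idle and OCCT mixed-AVX at-wall readings."""
    return occt_watts - idle_watts

long_idle = 45.0   # system sitting at the Windows desktop (at-wall VA)
occt_load = 135.0  # OCCT mixed-AVX stability load (at-wall VA)

print(f"Power delta: {power_delta(long_idle, occt_load):.0f} W")  # Power delta: 90 W

# Both readings come from the same meter and PSU (85%+ efficient from 50W up),
# so the delta is a useful qualitative comparison between CPUs, even though it
# is not an absolute measure of CPU package power.
```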

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

[Chart: Power Delta (Long Idle to OCCT)]

What amazes me, if anything, is how close the Core i7 and Core i3 parts are to their TDP in our measurements. Previously, such as with the Core i7-6700K and Core i7-4790K, we saw +20W on our system compared to TDP, but the Core i7-7700K is pretty much bang on at 90W (for a 91W rated part). Similarly, the Core i3-7350K is rated at 60W and we measured it at 55W. The Core i5-7600K is a bit different: with no hyperthreading, the AVX units aren’t loaded as much, but more on that in its own review.

To clarify, our tests were performed on retail units. No engineering sample trickery here.

With power on the money, this perhaps means that Intel is getting the voltage of each CPU to where it should be based on the quality of the silicon. In previous generations, Intel would overestimate the voltage needed in order to capture more CPUs within a given yield; however, AMD has been demonstrating of late that it is possible to tailor the voltage to the silicon based on internal metrics. Either our samples are flukes, or Intel is doing something similar here.

With power consumption in mind, let’s move on to Overclocking, and watch some sand burn a hole in a PCB (hopefully not).

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

Despite that, the i7-7700K has somewhat of an uphill battle. As a part with a 4.5 GHz turbo frequency, if users are expecting a 20-30% increase for a daily system, we would be pushing 5.4-5.8 GHz, which has not happened for daily use with recent processors.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can stick on a high main overclock with a reduced AVX frequency for when the odd instruction comes along that would have previously caused the system to crash.
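As a back-of-the-envelope sketch of how the offset arithmetic works out (assuming the standard 100 MHz base clock, which is an assumption for illustration rather than anything specific to our test boards):

```python
# Illustrative sketch of the AVX offset arithmetic; assumes a 100 MHz BCLK.

BCLK_MHZ = 100

def effective_frequencies(core_ratio: int, avx_offset: int) -> tuple:
    """Return (core frequency, AVX frequency) in MHz for a given negative offset."""
    core = core_ratio * BCLK_MHZ
    avx = (core_ratio - avx_offset) * BCLK_MHZ
    return core, avx

# e.g. a 45x core ratio with a -10 AVX offset:
core_mhz, avx_mhz = effective_frequencies(core_ratio=45, avx_offset=10)
print(core_mhz, avx_mhz)  # 4500 3500 -> 4.5 GHz normally, 3.5 GHz when AVX code runs
```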

Because of this, we approached our overclocking in two ways. First, we left the AVX Offset alone, meaning our OCCT mixed-AVX stability test got the full brunt of AVX power and the increased temperature/power readings therein. We then applied a second set of overclocks with a -10 offset, meaning that at 4.5 GHz the AVX instructions were at 3.5 GHz. This did screw up some of our usual numbers that rely on the AVX part to measure power, but here are our results:

At stock, our Core i7-7700K ran at 1.248 volts at load, drawing 90 watts (the column marked ‘delta’), and saw a temperature of 79ºC using our 2kg copper cooler.

After this, we put the CPU on a 20x multiplier, set it to 1.000 volt (which didn’t work, so 1.100 volts instead), set the load-line calibration to Level 1 (constant voltage on ASRock boards), and slowly went up in 100 MHz jumps. Every time the POV-Ray/OCCT stability tests failed, the voltage was raised by 0.025V.
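In pseudocode, that manual process looks roughly like the sketch below. This is illustrative only: run_stability_tests is a stand-in for the POV-Ray and OCCT runs we do by hand, and the default start/stop points and step sizes are assumptions that mirror the text rather than any automation we actually used.

```python
# A simplified sketch of the manual overclocking loop described above.

def run_stability_tests(freq_mhz: int, vcore: float) -> bool:
    """Stand-in for a manual POV-Ray + OCCT pass; always 'passes' in this sketch."""
    return True

def find_stable_settings(start_mhz: int = 2000, stop_mhz: int = 4800,
                         start_vcore: float = 1.100, max_vcore: float = 1.425,
                         vcore_step: float = 0.025) -> list:
    """Walk up in 100 MHz jumps, bumping the voltage whenever stability fails."""
    vcore = start_vcore
    stable = []
    for freq in range(start_mhz, stop_mhz + 1, 100):
        while not run_stability_tests(freq, vcore):
            vcore += vcore_step
            if vcore > max_vcore:
                return stable  # voltage/thermal ceiling reached, stop here
        stable.append((freq, round(vcore, 3)))
    return stable

print(find_stable_settings()[-1])  # e.g. (4800, 1.1) with this always-passing stub
```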

This gives a few interesting metrics. For a long time, we did not need a voltage increase: 1.100 volts worked as a setting all the way up to 4.2 GHz, which is what we have come to expect from a 14nm processor at such a frequency. From there the voltage starts increasing, but at 4.5 GHz we needed more voltage in a manual overclock to achieve stability than the CPU gave itself at stock frequency. So much for overclocking! It wasn’t until that 4.3-4.5 GHz set of results that the CPU started to get warm, as shown by the OCCT temperature values.

At 4.8 GHz, the Core i7-7700K passed POV-Ray with ease; however, the 1.400 volts needed at that point were pushing the processor up to 95ºC during OCCT and its mixed AVX workload. At that point I decided to call an end to it, with the CPU now drawing 122W from idle to load. The fact that it is only 122W is surprisingly low: I would have thought we would be nearing 160W at this point, given where other overclockable i7 processors have landed in the past.

The second set of results is with the AVX offset. This afforded stability at 4.8 GHz and 4.9 GHz, however at 5.0 GHz and 1.425 volts the CPU was clearly going into thermal recovery modes, as given by the lower scores in POV-Ray.

Based on what we’ve heard out in the ether, our CPU sample is somewhat average to poor in terms of overclocking performance. Some colleagues at the motherboard manufacturers are seeing 5.0 GHz at 1.3 volts (with AVX offset), although I’m sure they’re not talking in terms of serious, everyday stability.

Comments

  • Ian Cutress - Wednesday, January 4, 2017 - link

    The boards will default to DDR4-2133 as a base memory frequency, regardless of processor. JEDEC has profiles for 2133 and 2400, and Kaby Lake is compatible with the JEDEC DDR4-2400 profile. So in order to achieve this, we use kits that offer DDR4-2400 JEDEC memory profiles via XMP. Enable XMP, and you're at the frequency that's officially supported by the processor, which is JEDEC. Out of the box usually refers to the BIOS, as we tend to eschew special 'media' BIOSes that might adjust certain performance parameters.
  • ccdrop - Tuesday, January 3, 2017 - link

    I just wanted to give you guys a super big THANK YOU! for testing under Windows 7 64-bit SP1, now I can be excited about the 7700k again!

    My big worry was that the 7700k was going to be a useless upgrade from my 2600K due to the whole "not officially supported" drama, as I flat out have no interest in Windows 10. (Please don't reply with pro-10 comments, I will never read them as I will never check these comments again; I am just here to say thank you. I have a laundry list about a mile long as to why I despise 10; I have thoroughly tested it for my use cases and it is a very solid downgrade. I am not a gamer, so "do it for the games" is meaningless. As for security, my main workstation isn't attached to any networks, and if you have local access to the system, 10 is no better than 7. Finally, as for doing it for the "new features", just because you know new features are new... I will wait and see if the 7700k really runs 10~20% better on Windows 10 than Windows 7 WITH MY SOFTWARE, not games or things I don't use, then I'll switch. However, as of now on my current hardware, Windows 10 runs about 10~20% slower than Windows 7 with my software, and is vastly more prone to errors and workflow interruptions.)
  • negusp - Thursday, January 5, 2017 - link

    stfu, it is a pretty useless upgrade. 10-20% over a 2600k is nothing to be excited about.

    wait for Ryzen or Cannonlake. if you think your 2600k is anywhere near obsolete you have to be kidding me.
  • fm13 - Thursday, January 5, 2017 - link

    I'm still using my i7 860 which is still OK at stock frequencies.
  • The_Assimilator - Tuesday, January 3, 2017 - link

    AnandTech reviews that are on time, what sorcery is this? I sincerely hope to see more of it this year!
  • just4U - Wednesday, January 4, 2017 - link

    I do not recall Ian ever being late to the party on his reviews...
  • Thatguy97 - Wednesday, January 4, 2017 - link

    Ryan is always late

    Remember Fiji? And the "on the way" gtx 950 review?
  • Toss3 - Tuesday, January 3, 2017 - link

    "In most of our benchmarks, the results are clear: a stock Core i7-7700K beat our overclocked Core i7-4790K in practically every CPU-based test (Our GPU tests showed little change)."

    Wait the 4790K was overclocked? You didn't mention the clockspeed anywhere. And how can a 5820K be faster than a 6800K (Grid: Autosport on MSI R9 290X)? You really need to let your readers know what speeds these CPUs are running at.
  • Thatguy97 - Tuesday, January 3, 2017 - link

    No fucking increase in IPC

    Damn we need some competition bad and shame on anandtech for not ragging on Intel for lack of innovation
  • ThomasS31 - Tuesday, January 3, 2017 - link

    Thanks... though it would be time to upgrade the GPU side to at least a GTX 1080, or more likely a TXP. I see on other tests that those cards, especially the TXP, show bigger differences between high-end CPUs. The GTX 980 is too much of a limit these days.
