In the Words of Jeremy Clarkson: POWEEEERRRRR

As with all the major processor launches in the past few years, performance is nothing without good efficiency to go with it. Doing more work for less power is a design mantra across all semiconductor firms, and teaching silicon designers to build for power has been a tough job (they all want performance first, naturally). Of course there can be other tradeoffs, such as design complexity or die area, but no one ever said designing a CPU through to silicon was easy. Most semiconductor companies that ship processors do so with a Thermal Design Power (TDP) rating, a number that has caused some arguments recently when it shows up in performance presentations.

Yes, technically the TDP rating is not the power draw. It’s a number given by the manufacturer to the OEM/system designer to ensure that an appropriate thermal solution is employed: if you have a 65W TDP piece of silicon, the thermal solution must support at least 65W without going into heat soak. Intel and AMD also rate TDP differently, either as a function of peak output running all the instructions at once, or as an indication of a ‘real-world peak’ rather than a power virus. This is a contentious issue, especially as I’m going to say that while TDP isn’t power, it’s still a pretty good metric of the power draw you should expect to see in prosumer-style scenarios.

So for our power analysis, we do the following: in a system using one reasonably sized memory module per channel at JEDEC specifications, a good cooler with a single fan, and a GTX 770 installed, we measure the long-idle in-Windows power draw and a mixed-AVX power draw given by OCCT (a tool used for stability testing). Taking the difference between the two, with a good power supply that is efficient in the intended range (85%+ from 50W and up), gives us a good qualitative comparison between processors. I say qualitative because these numbers aren’t absolute: they are at-wall VA figures based on the power you are charged for, rather than what the components actually consume. I am working with our PSU reviewer, E.Fylladikatis, to find the best way to do the latter, especially when working at scale.
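For those who like to see the arithmetic written down, the sketch below shows how the delta is taken and how an at-wall figure might be converted to a rough DC-side estimate. The wattmeter readings and the flat 85% efficiency are placeholder assumptions for illustration, not measured data, and the proper conversion is exactly the part we are still working out.

```python
# Sketch of the power-delta arithmetic. The readings and the flat 85%
# PSU efficiency below are illustrative assumptions, not measured data;
# a real conversion also depends on power factor and the PSU load curve.

def power_delta(idle_wall_w, occt_wall_w):
    """Delta between long-idle and OCCT load, both measured at the wall."""
    return occt_wall_w - idle_wall_w

def estimate_dc_draw(wall_w, psu_efficiency=0.85):
    """Very rough DC-side estimate: at-wall power scaled by PSU efficiency."""
    return wall_w * psu_efficiency

if __name__ == "__main__":
    idle, load = 50.0, 140.0              # hypothetical at-wall readings (W)
    delta = power_delta(idle, load)       # 90 W at the wall
    print(f"At-wall delta: {delta:.0f} W")
    print(f"Rough DC-side delta: {estimate_dc_draw(delta):.0f} W")
```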

Nonetheless, here are our recent results for Kaby Lake at stock frequencies:

Power Delta (Long Idle to OCCT)

What amazes me, if anything, is how close the Core i7 and Core i3 parts are to their TDP in our measurements. Previously, such as with the Core i7-6700K and Core i7-4790K, we saw +20W on our system compared to TDP, but the Core i7-7700K is pretty much bang on at 90W (for a 91W rated part). Similarly, the Core i3-7350K is rated at 60W and we measured it at 55W. The Core i5-7600K is a bit different: without Hyper-Threading the AVX units aren’t loaded as heavily, but more on that in its own review.

To clarify, our tests were performed on retail units. No engineering sample trickery here.

With power on the money, this perhaps means that Intel is setting the voltage of each CPU to where it should be based on the quality of the silicon. In previous generations, Intel would overestimate the voltage needed in order to capture more CPUs within a given yield; however, AMD has been demonstrating of late that it is possible to tailor the silicon more closely based on internal metrics. Either our samples are flukes, or Intel is doing something similar here.

With power consumption in mind, let’s move on to Overclocking, and watch some sand burn a hole in a PCB (hopefully not).

Overclocking

At this point I’ll assume that as an AnandTech reader, you are au fait with the core concepts of overclocking, the reason why people do it, and potentially how to do it yourself. The core enthusiast community always loves something for nothing, so Intel has put its high-end SKUs up as unlocked for people to play with. As a result, we still see a lot of users running a Sandy Bridge i7-2600K heavily overclocked for a daily system, as the performance they get from it is still highly competitive.

Despite that, the i7-7700K has something of an uphill battle. With a 4.5 GHz turbo frequency, users expecting a 20-30% increase for a daily system would be pushing 5.4-5.8 GHz, which for daily use with recent processors has not happened.

There’s also a new feature worth mentioning before we get into the meat: AVX Offset. We go into this more in our bigger overclocking piece, but the crux is that AVX instructions are power-hungry and hurt stability when overclocked. The new Kaby Lake processors come with BIOS options to implement an offset for these instructions in the form of a negative multiplier. As a result, a user can set a high main overclock with a reduced AVX frequency, for when the odd instruction comes along that would previously have caused the system to crash.
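To make the arithmetic concrete, here is a tiny sketch of how the offset translates into clock speeds; the multiplier values are purely illustrative, and the exact behaviour depends on the motherboard firmware.

```python
# Sketch of the AVX Offset arithmetic: the AVX multiplier is simply the
# core multiplier minus the offset. Values are illustrative; real BIOS
# behaviour varies by board vendor.

BCLK_MHZ = 100  # base clock assumed at the stock 100 MHz

def effective_frequencies(core_multiplier, avx_offset):
    """Return (core GHz, AVX GHz) for a given multiplier and AVX offset."""
    core_ghz = core_multiplier * BCLK_MHZ / 1000
    avx_ghz = (core_multiplier - avx_offset) * BCLK_MHZ / 1000
    return core_ghz, avx_ghz

# A 45x multiplier with a -10 AVX offset: 4.5 GHz normally, 3.5 GHz under AVX load.
print(effective_frequencies(45, 10))  # (4.5, 3.5)
```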

Because of this, we approached our overclocking in two ways. First, we left the AVX Offset alone, meaning our OCCT mixed-AVX stability test got the full brunt of AVX power and the increased temperature/power readings therein. We then applied a second set of overclocks with a -10 offset, meaning that at 4.5 GHz the AVX instructions ran at 3.5 GHz. This did skew some of our usual numbers that rely on the AVX component to measure power, but here are our results:

At stock, our Core i7-7700K ran at 1.248 volts at load, drew 90 watts (the column marked ‘delta’), and saw a temperature of 79°C using our 2kg copper cooler.

After this, we put the CPU on a 20x multiplier, set it to 1.000 volts (which didn’t work, so 1.100 volts instead), set the load-line calibration to Level 1 (constant voltage on ASRock boards), and slowly went up in 100 MHz jumps. Every time the POV-Ray/OCCT stability tests failed, the voltage was raised by 0.025 V.
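For readers who want the stepping logic spelled out, the sketch below mirrors that procedure; the stability check is a stand-in for the manual POV-Ray/OCCT pass, and the starting values are only illustrative.

```python
# Sketch of the manual overclocking procedure described above: move up in
# 100 MHz jumps and raise the voltage by 0.025 V whenever a run fails.
# run_stability_test() is a stand-in for a manual POV-Ray/OCCT pass.

def run_stability_test(freq_mhz, vcore):
    """Placeholder: ask the operator whether the manual run passed."""
    answer = input(f"Stable at {freq_mhz} MHz / {vcore:.3f} V? [y/n] ")
    return answer.strip().lower() == "y"

def step_overclock(start_mhz=2000, target_mhz=4800, vcore=1.100, vmax=1.400):
    """Step frequency up in 100 MHz increments, bumping voltage on failure."""
    stable = []
    freq = start_mhz
    while freq <= target_mhz and vcore <= vmax:
        if run_stability_test(freq, vcore):
            stable.append((freq, vcore))     # record the stable setting
            freq += 100                      # next 100 MHz jump
        else:
            vcore = round(vcore + 0.025, 3)  # failed: raise voltage 0.025 V
    return stable

if __name__ == "__main__":
    print(step_overclock())
```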

This gives a few interesting metrics. For a long time, we did not need a voltage increase: 1.100 volts worked as a setting all the way up to 4.2 GHz, which is about what we would expect from a 14nm processor at that frequency. From there the voltage starts increasing, but at 4.5 GHz we needed more voltage in a manual overclock to achieve stability than the CPU gave itself at stock frequency. So much for overclocking! It wasn’t until the 4.3-4.5 GHz set of results that the CPU started to get warm, as shown by the OCCT temperature values.

At 4.8 GHz, the Core i7-7700K passed POV-Ray with ease; however, the 1.400 volts needed at that point pushed the processor up to 95°C during OCCT and its mixed-AVX workload. At that point I decided to call an end to it, with the CPU now drawing 122W from idle to load. That 122W figure is surprisingly low: I would have thought we would be nearing 160W at this point, based on other overclockable i7 processors at this level in the past.

The second set of results is with the AVX offset. This afforded stability at 4.8 GHz and 4.9 GHz; however, at 5.0 GHz and 1.425 volts the CPU was clearly going into thermal recovery modes, as shown by the lower scores in POV-Ray.

Based on what we’ve heard out in the ether, our CPU sample is average to poor in terms of overclocking headroom. Some colleagues at the motherboard manufacturers are seeing 5.0 GHz at 1.3 volts (with AVX offset), although I’m sure they’re not talking about seriously rigorous stability.

Comments

  • 1PYTHON1 - Saturday, January 21, 2017 - link

    u do realize the 6700k only clocks to 4.5 or 4.6 if u get a good one...this will do 5ghz. so saying theres 0 improvement is crap.
  • Gasaraki88 - Tuesday, January 3, 2017 - link

    Why are you testing with Win7 when the CPUs have more functionality under Windows 10?
  • ltcommanderdata - Wednesday, January 4, 2017 - link

    I thought Intel wasn't going to release Windows 7/8.1 drivers for 200-series chipsets and Kaby Lake in accordance with Microsoft's policy that Skylake was the last new CPU family to be officially supported by those OS. If Anandtech tested Z270 motherboards and Kaby Lake with Windows 7 did Intel end up releasing Windows 7 drivers for 200-series chipsets after-all or do existing 100-series drivers work with the 200-series or is some other workaround being done?
  • jimbo2779 - Wednesday, January 4, 2017 - link

    I dont think it was intel saying they wouldn't release drivers for win 7, that would be them shooting themselves in the foot big time. Microsoft were saying they would not be supporting new features in CPUs.

    I believe this means things like a new sse instruction set would not have native support in windows prior to 8. However this does not stop a CPU manufacturer from implementing support via drivers which is what intel would likely do at some point if not at launch.
  • Shadow7037932 - Wednesday, January 4, 2017 - link

    Probably because they don't want to re-test the old systems under Windows 10 just for this review. But yeah, I do think it's about time AnandTech move on to Windows 10 as the baseline OS.
  • Iketh - Tuesday, January 3, 2017 - link

    Identical IPC yet AVX Offset support? Can clarify plz?
  • Iketh - Tuesday, January 3, 2017 - link

    nevermind, you clarified in overclocking section
  • Iketh - Tuesday, January 3, 2017 - link

    for anyone else wondering, AVX Offset is not an additional instruction set, it's a bios setting
  • User.Name - Tuesday, January 3, 2017 - link

    It's really time for a new suite of gaming tests if they aren't showing any difference between the CPUs.

    For one thing, average framerates are meaningless when doing CPU tests. You need to be looking at minimum framerates.

    Just look at the difference between CPUs in Techspot's Gears of War 4 performance review: http://www.techspot.com/review/1263-gears-of-war-4...

    Or GameGPU's Watch Dogs 2 CPU test: http://gamegpu.com/images/stories/Test_GPU/Action/...

    So many people keep repeating that CPUs don't matter for gaming these days, but that's absolutely wrong. The problem is that many of the hardware review sites that have been around for a long time seem to have forgotten how to properly benchmark games.
  • takeshi7 - Tuesday, January 3, 2017 - link

    I agree that AnandTech should improve their gaming benchmarks. Some frame time variance measurements would be nice, and also some runs with lower graphics settings so that the CPU is the bottleneck rather than the GPU.
