Performance Per Watt

The ASUS Zephyrus G14 comes with ASUS's Armoury Crate software. Inside is the usual array of options for a modern laptop: performance profiles, fan controls, special RGB effects and lighting, plus readouts for voltages, frequencies, and fan speeds. More importantly for our purposes, the software also provides an interface that allows the user to cap how much power can be put through the APU/SoC or the whole platform.

With this option, we took advantage of the fact that after we select a given SoC wattage, the system will automatically migrate to the required voltage and frequency under load, only ever going up to the power limit (or as far as the system allows). Using this tool, we ran a spectrum of performance data against power settings to see how the POV-Ray benchmark would scale, as it is one of the benchmarks that drives core utilization very high and very hard.
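
As a rough sketch of how such a sweep could be automated, the hypothetical script below applies each power cap with the community RyzenAdj tool (Armoury Crate itself exposes no command line), runs the POV-Ray benchmark, and parses the render time. The RyzenAdj flag names follow its documented options, but the POV-Ray invocation and the log format parsed here are assumptions for illustration.

```python
import re
import subprocess

# Hypothetical sweep: apply each SoC power cap with the community RyzenAdj
# tool, run the POV-Ray benchmark, and record the render time. RyzenAdj
# limits are set in milliwatts; the POV-Ray command line and the log format
# parsed below are assumptions for illustration.

def set_power_cap(watts: int) -> list[str]:
    """Build the RyzenAdj command that caps sustained/slow/fast limits."""
    mw = watts * 1000
    return ["ryzenadj", f"--stapm-limit={mw}",
            f"--slow-limit={mw}", f"--fast-limit={mw}"]

def parse_render_seconds(log: str) -> float:
    """Pull a 'Render Time: X seconds' figure out of the log (assumed format)."""
    match = re.search(r"Render Time:\s*([\d.]+) seconds", log)
    if match is None:
        raise ValueError("no render time found in log")
    return float(match.group(1))

def sweep(caps_watts=range(15, 85, 5)) -> dict[int, float]:
    """Run the benchmark once per power cap; fewer seconds = higher score."""
    results = {}
    for watts in caps_watts:
        subprocess.run(set_power_cap(watts), check=True)
        proc = subprocess.run(["povray", "-benchmark"],
                              capture_output=True, text=True)
        results[watts] = parse_render_seconds(proc.stderr)
    return results
```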

In this first graph, we monitor how the CPU voltage increases as we raise the power, along with the at-load temperature of the processor. The voltage increments start off at around 60-65 mV per 5 W of SoC power, eventually shrinking to 15-25 mV due to the way that voltage and power scale. The temperature rose very steadily, reaching 96ºC with the full 80 W selected.
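
The shrinking voltage steps fall out of the roughly cubic relationship between voltage and power. The toy model below (with made-up constants, not measured Rembrandt values) assumes dynamic power P ≈ k·V²·f with sustainable frequency rising linearly in voltage, and shows the millivolts bought by each extra 5 W falling off as total power climbs:

```python
# Toy model (made-up constants, not measured Rembrandt values): dynamic power
# scales as P ≈ k * V^2 * f, and sustainable frequency rises roughly linearly
# with voltage, so power grows like V^3. Inverting that shows why each extra
# 5 W buys fewer and fewer millivolts as the total power climbs.

K = 10.0        # arbitrary scale constant (assumption)
F_PER_V = 4.0   # GHz of sustainable clock per volt (assumption)

def power_w(volts: float) -> float:
    """Dynamic power with frequency tied linearly to voltage: ~V^3 overall."""
    return K * volts ** 2 * (F_PER_V * volts)

def volts_for_power(target_w: float) -> float:
    """Invert the model: V = (P / (K * F_PER_V)) ** (1/3)."""
    return (target_w / (K * F_PER_V)) ** (1 / 3)

for watts in range(10, 80, 10):
    dv_mv = (volts_for_power(watts + 5) - volts_for_power(watts)) * 1000
    print(f"{watts:>2} -> {watts + 5} W: +{dv_mv:.0f} mV")
```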

Now if we transition this to the benchmark results, plotting the score alongside the all-core frequency:

These two lines follow a similar pattern, as the score doesn't increase unless the frequency does. The biggest jumps come in the 15-35 W range, which is where most modern processors are at their most efficient. As more power is added, however, the processor moves away from that ideal efficiency point: going from 50 W to 80 W is a 60% power increase for only +375 MHz and a +7.7% higher benchmark score.
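
Working through that last data point (a quick check using only the figures quoted above, with the scores treated as approximate):

```python
# Checking the diminishing-returns arithmetic for the 50 W -> 80 W jump,
# using only the figures quoted in the text (treat them as approximate).

p_low, p_high = 50.0, 80.0
power_gain = (p_high - p_low) / p_low   # +60% power
score_gain = 0.077                      # +7.7% benchmark score

# Performance per watt at 80 W relative to 50 W
perf_per_watt_ratio = (1 + score_gain) / (1 + power_gain)
print(f"power +{power_gain:.0%}, score +{score_gain:.1%}, "
      f"perf/W drops to {perf_per_watt_ratio:.0%} of the 50 W figure")
```

So those last 30 W cost roughly a third of the performance per watt achieved at 50 W.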

We can pivot this data into something a bit more familiar:

Here we can see the voltage required for all-core frequencies and how the voltage scales up. With all this data, we can actually do a performance per watt graph for Rembrandt:

In this graph we're plotting score per watt against frequency, and it showcases that beyond 2.5 GHz, the Rembrandt CPU design becomes progressively less efficient. Most modern processors end up being most efficient around this frequency, so this is perhaps not all that surprising.
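
For illustration, the snippet below derives such a curve the same way: divide each score by the power that produced it and take the maximum. The (GHz, watts, score) triples are invented placeholders, shaped so the knee lands near 2.5 GHz as in our data; they are not the article's measurements.

```python
# Deriving a score-per-watt curve: the (GHz, watts, score) triples below are
# invented placeholders, shaped so the efficiency knee lands near 2.5 GHz as
# in the Rembrandt data; they are not the article's measurements.

samples = [
    (1.8, 15, 2800),
    (2.2, 20, 4000),
    (2.5, 25, 5100),
    (3.0, 35, 5900),
    (3.5, 50, 6300),
    (4.0, 80, 6700),
]

score_per_watt = [(ghz, score / watts) for ghz, watts, score in samples]
best_ghz, best_spw = max(score_per_watt, key=lambda t: t[1])
print(f"efficiency peaks at {best_ghz} GHz ({best_spw:.0f} points/W)")
```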

Now all of this is also subject to binning: not only are chips binned by designation (6900HS vs 6800H, for example), but within an individual SKU some chips will be better bins than others. We see this in some mobile processors that can have 10+ bins with different voltage/frequency characteristics, all sold under the same name because they meet a shared guaranteed minimum. With smartphones, this testing is a lot easier, as the voltage/frequency table is often part of the hardware mechanism. But for notebooks and desktops, we're often at the mercy of the motherboard manufacturer or OEM, who can apply their own settings, overriding anything that Intel or AMD suggest. Hopefully in the future we will get more control and be able to determine what comes from the chip manufacturer and what comes from the motherboard.


92 Comments


  • DannyH246 - Wednesday, March 2, 2022 - link

    For a laugh.
  • Speedfriend - Wednesday, March 2, 2022 - link

    Seriously, how old are you?
  • abufrejoval - Friday, March 4, 2022 - link

    It's a slow season (for computers) so they have to spread it out some. The other pieces evidently have been prepared already as parting gifts by Ian.
  • vegemeister - Tuesday, March 1, 2022 - link

    >Per-Thread Power/Clock Control: Rather than being per core, each thread can carry requirements

    Does that imply the core can change its voltage and clocking on the same timescale as switching SMT thread? I thought modern SMT was fine-grained enough that there are instructions from both threads in-flight at once.

    Or is it just for simplifying the OS's cpufreq driver?

    >For example, if a core is idle for a few seconds, would it be better to put in a sleep state?

    A few hundred microseconds, surely?
  • Arnulf - Tuesday, March 1, 2022 - link

    "... following AMD’s cadence of naming its mobile processors after painters"

    As opposed to what, their desktop lineup (also named after painters)? Consumer processors are named after painters.
  • syxbit - Tuesday, March 1, 2022 - link

    >>While we haven’t touched battery life or graphics in this article

    That's pretty critical for a laptop review.
    I'm pretty tired of Intel reviews constantly touting their 12th-gen superiority without talking about power. It's easy to beat a competitor if you just double the power budget. It's laughable that Intel is pretending they've caught up to Apple.
  • Oxford Guy - Tuesday, March 1, 2022 - link

    I am sure those producing the Steam handheld would like reviewers to not test battery life.
  • ninjaquick - Tuesday, March 1, 2022 - link

    How fast do these chips handle VP9 4K decode? A major use case moving forward will be game streaming, and I'm struggling to find hardware acceleration numbers.
  • dwillmore - Tuesday, March 1, 2022 - link

    Error on page 3: "yCrundher" is a misspelling
  • YukaKun - Tuesday, March 1, 2022 - link

    Writing this from a 5900HX (Asus G17 Strix), having upgraded from an i7 7700HQ that, I have to say, is really efficient for what it is, the AMD laptop is just in another league of its own. Both have a 90 Wh battery, and the Intel one wouldn't break the 4-hour mark even when new. This thing gets as much runtime as my tablets under normal usage. It's really impressive and, for on-the-go stuff, it's so, SO nice. Then you need to game and it just works. The 6800M is quite the beast in its own right. Sad this thing doesn't have a MUX switch, but it still works amazingly well.

    This preamble was just to say: I'm surprised the 6000HK isn't a lot better, but I guess it's to be expected. On paper, the 6000 mobile series has a lot of potential with PCIe 4.0 and a slightly better process. DDR5 is too new, IMO, to show a definitive advantage on mobile, but maybe next gen will leap ahead. I have DDR4L-3200 with my 5900HX and I put DDR4L-2666 in the i7 7700HQ, so DDR5L needs to be way faster than the crappy 4800 MT/s JEDEC spec we have currently.

    Regards.
