Platform Power

Performance aside, the other side of the coin is battery life. AMD made big gains in battery life with the Ryzen 3000 series, somewhat addressing the power requirements of the platform and getting rid of some of the excessive idle power draw, but they are still using DDR4 on their mobile platform, which puts them at a disadvantage right out of the gate. Intel has made very good gains in battery life over the last several generations, and the move to 10 nm for Ice Lake also brought along LPDDR4X support. Most previous-generation laptops stuck with LPDDR3, unless the manufacturer needed more than 16 GB of RAM, in which case they'd be forced to switch to DDR4. Finally adding LPDDR4X support is something Intel has needed to do for a while, and ironically, Intel's flagship Core product line lagged behind its low-cost Atom lineup, which did support LPDDR4.

Web Battery Life

[Chart: Battery Life 2016 - Web]

The Ryzen 7 3780U-powered Surface Laptop 3 came in slightly under the Ryzen 5 device we tested at launch, but still in the same range. The AMD system isn't helped by Microsoft offering only a 46 Wh nominal battery capacity, which is rather undersized for a 15-inch laptop. The Ice Lake device, as we've seen before, was much more efficient under load, giving it a sizeable battery life lead.

Idle Power

One of AMD's biggest challenges was to get their laptop SoC into a premium device, and with the Surface Laptop 3 they have succeeded. Microsoft has shown itself to be adept at squeezing battery life out of its devices, pairing low-power displays with well-chosen internal components to minimize power draw. Here Intel has held a considerable advantage over the last couple of years, and the move to 10 nm should, in theory, help as well.

To test the idle power draw of both systems, the battery discharge rate was monitored with the screens fixed at 5.35 nits, to minimize the display's contribution to the result. Normally we'd prefer to have the display completely off for this test, but Microsoft's power plan on the Surface Laptop actively turns the laptop off when the display times out.
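
To illustrate the methodology, here is a minimal sketch of how a discharge rate like this can be logged on Windows. This is an illustration rather than our exact tooling: it assumes the third-party `wmi` Python package and reads the `BatteryStatus` class in the `root\wmi` namespace, which reports the instantaneous discharge rate in milliwatts.

```python
import time
import wmi  # third-party package: pip install wmi (Windows only)

# BatteryStatus in the root\wmi namespace reports DischargeRate in
# milliwatts while the machine is running on battery power.
c = wmi.WMI(namespace="root\\wmi")

samples = []
for _ in range(300):  # one sample per second for five minutes
    for batt in c.BatteryStatus():
        if batt.Discharging:
            samples.append(batt.DischargeRate / 1000.0)  # mW -> W
    time.sleep(1)

if samples:
    print(f"min: {min(samples):.2f} W  avg: {sum(samples) / len(samples):.2f} W")
```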

[Chart: Minimum Idle Power Draw]

The Ice Lake system was able to get down to right around 2 Watts of power draw – and sometimes slightly under, with as low as 1.7 Watts observed. We've seen under 1 Watt of draw on an 8th generation Core Y-series processor, and around 1.5 Watts on the same generation's U-series, so considering the display is not completely off on the Surface Laptop, the 2-Watt draw is quite reasonable.

The Picasso system was not quite as efficient, drawing 3 Watts at idle. This is in line with the results we've seen on other Picasso systems, and it explains the lower battery life results on the AMD system. AMD made big gains moving from Raven Ridge to Picasso, but I'm sure the team is looking forward to 7 nm Zen 2 coming to their laptops, which we hope will address this further.
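
Those idle floors translate directly into an upper bound on runtime. As a quick back-of-the-envelope check against the 46 Wh battery (a ceiling only; any real workload draws more):

```python
battery_wh = 46.0  # Surface Laptop 3 nominal battery capacity

# The idle power floor caps the achievable runtime: hours = watt-hours / watts
for platform, idle_w in [("Ice Lake", 2.0), ("Picasso", 3.0)]:
    print(f"{platform}: {battery_wh / idle_w:.1f} h ceiling at idle")
# Ice Lake: 23.0 h ceiling at idle
# Picasso: 15.3 h ceiling at idle
```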

Comments

  • TheinsanegamerN - Friday, December 13, 2019 - link

    It isn't just speed; the Intel chip uses LPDDR4X. That's an entirely different beast from LPDDR4, let alone normal DDR4.

    AMD would need to redesign their memory controller, and they have just...not done it. The writing was on the wall, and I have no idea why AMD didn't put LPDDR4X compatibility in their chips; hell, I don't know why Intel waited so long. The sheer voltage difference makes a huge impact in the mobile space.

    You are correct: pushing those speeds at normal DDR4 voltage levels would have tanked battery life.
  • ikjadoon - Friday, December 13, 2019 - link

    Sigh, it is just speed. DDR4-2400 to DDR4-3200 is simply speed: there is no "entirely new controller" needed. The Zen+ desktop counterpart is rated between DDR4-2666 and DDR4-2933.

    LPDDR4X is almost identical to LPDDR4: "LPDDR4X is identical to LPDDR4 except additional power is saved by reducing the I/O voltage (Vddq) to 0.6 V from 1.1 V." Whoever confused you into thinking LPDDR4X is "an entirely different beast" from LPDDR4 is talking out of their ass, and I'd caution you against believing anything else they say.

    And, no: DDR4-3200 vs DDR4-2400 wouldn't have tanked battery life, just made it somewhat worse. DDR4-3200 can still run on the stock 1.2 V that SO-DIMM DDR4 relies on, but it's pricier and you'd still pay the MHz power penalty.
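
    Back-of-the-envelope on the Vddq point above (a rough sketch assuming simple CV^2*f switching-power scaling; it ignores static and termination power):

    ```python
    # I/O switching power scales roughly with Vddq^2 at the same C and f
    vddq_lpddr4, vddq_lpddr4x = 1.1, 0.6
    ratio = (vddq_lpddr4x / vddq_lpddr4) ** 2
    print(f"LPDDR4X I/O switching power ~= {ratio:.0%} of LPDDR4")
    # ~= 30%: a big cut on the interface, not the whole memory subsystem
    ```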

    I don't think RAM speed/voltage has ever "tanked" a laptop's battery life: shaking my head here...
  • mczak - Friday, December 13, 2019 - link

    I'm quite sure you're wrong here. The problem isn't the memory itself (as long as you get standard 1.2 V modules, which exist up to DDR4-3200), but the CPU. Zen(+) CPUs require a higher SoC voltage for higher memory speeds (memory frequency is tied to the on-die interconnect frequency). And as far as I know, this makes quite a sizeable difference - not enough to really matter on the desktop, but enough to matter on mobile. (Although I thought Zen+ could use the default SoC voltage up to DDR4-2666, but I could be wrong on that.)
  • Byte - Friday, December 13, 2019 - link

    Ryzen had huge problems with memory speed and even compatibility at launch. No doubt they had to play it safe on laptops. They should have it mostly sorted out with the Zen 2 laptop parts; it's why AMD notebooks are a gen behind, whereas Intel notebooks are usually a gen ahead.
  • ikjadoon - Saturday, December 14, 2019 - link

    We both agree it would be bad for battery life and a clear AMD failure. But, the details...more errors:

    1. Zen+ is rated up to DDR4-2933; 3200 is a short jump. Even then, AMD couldn't even rate this custom SKU to 2666 (the bare minimum of Zen+). AMD put zero work into this custom SKU (whose only saving grace is graphics, and even that was neutered). It's obviously a low-volume part (relative to what AMD sells otherwise) for such a high-profile design win.

    2. If AMD can't rate (= bin) *any* of its mobile SoC batches to support even DDR4-2666 at normal voltages, I'd be shocked.

    For any random Zen+ silicon, sure, it'd need more voltage. The whole impetus for my comments is that AMD created an entire SKU for Microsoft and seemed to take it out of the oven half-baked.

    Or perhaps they had binned the GPU side so hard that very few of those 11-CU parts could've survived a second binning on the memory controller.
  • azazel1024 - Monday, December 16, 2019 - link

    So all that being said, yes, it had a huge impact. GPU-based workloads are heavily memory-speed dependent. Going from 2400 to 3200 MT/s likely would have seen a 10-25% increase in the various GPU benchmarks (on the lower end for those that are a bit more CPU biased). That would change AMD from being slightly better overall in GPU performance to holding a commanding lead.
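
    The raw bandwidth math behind that estimate (theoretical peaks only, as a sketch):

    ```python
    # Dual-channel DDR4 peak bandwidth: 2 channels x 64 bits x MT/s / 8 bits per byte
    def ddr4_dual_gbs(mt_s):
        return 2 * 64 * mt_s / 8 / 1000  # GB/s

    bw_2400, bw_3200 = ddr4_dual_gbs(2400), ddr4_dual_gbs(3200)
    print(f"{bw_2400:.1f} -> {bw_3200:.1f} GB/s (+{bw_3200 / bw_2400 - 1:.0%})")
    # 38.4 -> 51.2 GB/s (+33%)
    ```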

    On the CPU side of things, many of the Intel wins were in workloads that need a lot of memory performance. Going from 2400 to 3200 would probably have moved the AMD chip up only 3-5% in many workloads (20-40% in the more memory-subsystem-dependent SPECint tests), but that would still have evened the playing field a lot more.

    Going to 3733 like the Intel chip would have just been more of the same.

    Zen 2 and much higher memory bandwidth can't come soon enough for AMD.
  • Zoolook - Saturday, December 21, 2019 - link

    It's not about binning; they couldn't support that memory and stay within their desired TDP, because they would have had to run Infinity Fabric at a higher speed.
    They could have used faster memory and lower CPU and/or GPU clocks, but this is the compromise they settled on.
  • Dragonstongue - Friday, December 13, 2019 - link

    AMD makes/designs what a client wants; in this case, MSFT is well known for making sure to get (and hopefully pay plenty for) exactly what they want, for reasons only they understand.

    In this case, AMD really cannot say "we are not doing that," as that would mean a loss likely running into the millions (or more), versus just saying "not a problem, what would you like?"

    MSFT is very well known for catering to INTC and NVDA whims (they have, and still do, even when it costs everyone).

    Still, AMD and MSFT should have made sure not to hold back the chip's potential performance by using "min spec" memory speed, instead choosing the highest speed they know (through testing) it will support.

    I imagine AMD (or others) could have gone with LP memory; I call BS on claims that AMD would have had no choice but to re-architect their design to use LP over standard-power memory, seeing as LP likely requires very few changes (if any) compared to a ground-up redesign for an entirely different memory type.

    They should have "upped" to the next speed level, though: instead of the 2400 baseline, gone with 2666, 2933, 3000, or 3200, as the power draw difference is "negligible" with proper tuning (which MSFT likely would have made sure to do... but then again, this is MSFT, who pull stupid-as-heck moves all the time, so long as it keeps their "buddies" happy, never mind the consumers themselves).
  • mikeztm - Friday, December 13, 2019 - link

    LPDDR4/LPDDR4X is not related to DDR4.
    It's an upgraded LPDDR3, which likewise is not related to DDR3.

    The LPDDR family is like the GDDR family: a totally different type of DRAM standard.
    LPDDR draws almost zero watts when not in use, but during active RAM access it does not draw significantly less power than DDR4.

    LPDDR4 first shipped with the iPhone 6s in 2015, and it took Intel four years to finally catch up.
    BTW, this article has an intentional typo: LPDDR4-3733 on Intel is actually quad-channel, because each channel is half-width (32-bit) instead of DDR4's 64-bit.
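
    For the two configurations in this review, the theoretical peak-bandwidth math works out like this (a sketch; real-world efficiency is lower):

    ```python
    # Peak bandwidth in GB/s: channels x bus width (bits) x MT/s / 8 bits per byte
    def bw_gbs(channels, width_bits, mt_s):
        return channels * width_bits * mt_s / 8 / 1000

    print(f"Ice Lake LPDDR4X-3733 (4 x 32-bit): {bw_gbs(4, 32, 3733):.1f} GB/s")
    print(f"Picasso DDR4-2400 (2 x 64-bit):     {bw_gbs(2, 64, 2400):.1f} GB/s")
    # 59.7 GB/s vs 38.4 GB/s -- same 128-bit total width, faster transfer rate
    ```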
