Efficiency and Die Area Savings

AMD’s take-home message in all of this is efficiency. We are being quoted a performance-per-watt increase of 2.4x, coming from typical power draw savings of 2x and a performance increase of almost 1.5x, in 23% less die area, all in one go.
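As a quick sanity check on those figures (my own arithmetic, not AMD's), performance per watt scales as performance divided by power. Naively multiplying the rounded factors overshoots the headline number, which suggests the individual figures are "typical" values taken at different operating points rather than one workload:

```python
def perf_per_watt_gain(perf_factor, power_factor):
    """Relative perf/W gain: (new perf / new power) vs. the old design."""
    return perf_factor / power_factor

# Combining AMD's rounded figures directly (1.5x performance, 2x power
# saving, i.e. a 0.5x power factor) gives 3x, above the quoted 2.4x,
# so the two factors evidently do not apply to the same workload.
print(perf_per_watt_gain(1.5, 0.5))  # 3.0
```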

Ultimately this all helps AMD’s plan to make its APUs 25x more efficient by 2020, and the cumulative bar chart on the right shows how mobile improvements from all sides are being realized. Migrating the southbridge onto the die reduces its idle power consumption to almost zero and can help efficiencies elsewhere in the system. The APU general use and memory controllers are the next targets, but the common constant here is the display. Using a low-power display can buy battery life at the expense of quality, and there is only so much power you can save at the SoC level. In time, the display will be the main focus of power saving for these devices.

A big part of the reduction in die area comes from the set of high-density libraries AMD is using. Three examples were provided above where gains of more than 33% were made in silicon area. Using a high-density library design is typically a double-edged sword: it reduces die area, potentially leaving more room for other things, but the caveat is that it may be more prone to manufacturing defects, require additional latency, or have a different frequency/voltage profile. AMD assures us that these changes are at least like-for-like, and most of them contain other improvements as well.

It’s worth noting here that AMD has described the high-density library project internally as the equivalent of a moonshot: the developers were essentially part of a ‘skunkworks’ division attempting drastic changes in order to improve performance, and the high-density library is one successful result of that effort.

With the new libraries in place, comparing Excavator to Steamroller shows the effect of the move. Below 20W per module, the power/frequency curve shifts to higher frequency at lower power, whereas losses are observed above 20W. At 15W per module, this means either a 10%+ power reduction at the same frequency or a 5% increase in frequency at the same power. Should AMD release dual-thread/single-core APUs in the 7.5W region, this is where most of the gains are (as noted in the comments, the dual-module designs are at 7.5W per module, meaning that what we should see in devices is already at the peak for gains and benefits, such as 25% more frequency or 33% less power). As also seen in the inset, the silicon stack has been adjusted to a more general-purpose orientation. I could comment that this makes the CPU and GPU work better together, but I have no way of verifying this. AMD states that the change in the silicon stack makes production slightly easier and also helps achieve the higher density Excavator exhibits.
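The trade-off between spending a curve shift on power versus frequency can be sketched with a simple model. Assuming power scales roughly as the square of frequency in this regime (my assumption for illustration, a crude fit to the figures above, not AMD data), a power saving at fixed clocks converts to frequency headroom at fixed power like so:

```python
def freq_gain_at_iso_power(power_factor, alpha=2.0):
    """Frequency headroom if a curve shift is spent on clock speed
    instead of power, assuming P is proportional to f**alpha.
    alpha=2 is a rough illustrative fit, not a measured exponent."""
    return (1.0 / power_factor) ** (1.0 / alpha)

# 10% power saving at 15W/module -> ~5% frequency at the same power
print(freq_gain_at_iso_power(0.90))  # ≈ 1.054
# 33% power saving at 7.5W/module -> roughly the 25% frequency quoted
print(freq_gain_at_iso_power(0.67))  # ≈ 1.22
```

The fact that the same shift is worth either 5% frequency or 10% power at 15W, but 25% frequency or 33% power at 7.5W, is why the low-power region is where Excavator's library changes pay off most.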

The Platform IPC Increases: Double L1 Data Cache, Better Branch Prediction

  • renegade800x - Thursday, June 4, 2015 - link

    Although viewable, it's far from being "perfectly" fine. A 15.6" panel should be FHD.
  • albert89 - Tuesday, June 23, 2015 - link

    You don't need a strong CPU since Win8, because most laptops use Atom, Celeron or Pentium processors. AMD APUs are the natural choice!
  • mabsark - Wednesday, June 3, 2015 - link

    AMD should make Steam Boxes. They already do APUs, chipsets (which are going on die) and memory. It would be pretty simple for AMD to partner with a motherboard maker. Imagine a Steam Box about the size of a router, with a nano-ITX motherboard, a 14 nm APU with HBM, wifi, a few USB ports and an HDMI port to connect to a TV.

    An AMD/Valve partnership could potentially revolutionise the console market, providing cheap yet powerful and efficient console-type PCs.
  • Refuge - Wednesday, June 3, 2015 - link

    HBM isn't coming to APUs anytime soon.
  • Cryio - Saturday, June 6, 2015 - link

    Probably the first APU after Carrizo
  • coder111 - Wednesday, June 3, 2015 - link

    Aren't Steamboxes supposed to run Linux?

    AMD drivers for Linux are a bit weird. Catalyst is the official supported driver but it's buggy.

    Open source drivers are quite good but they are slower than Catalyst and don't support the latest OpenGL spec. There is no Mantle/Vulkan/HSA/Crossfire support with the open-source drivers either. OpenCL is in alpha stage.

    So AMD would need to man up and do the Linux drivers properly. They are working on it and making good progress but I doubt it is ready to be used at the moment as it is...

    Besides, lots of games these days get developed with Nvidia's "help" to ensure they run well on Nvidia GPUs and run like crap on AMD GPUs. And if the games are built using Intel Compiler, they'll run like crap on AMD CPUs as well. All of these tactics are anticompetitive and should be illegal IMO but who said the world is fair...

    And don't get me wrong, I love AMD, I use Linux + AMD dGPU + APU, but I don't think it's ready for the masses yet.
  • AS118 - Wednesday, June 3, 2015 - link

    I agree. I'm a double AMD Linux gamer and I've run into the exact same problems as you have, and I wish they'd be more serious about Linux. Sure they have Microsoft's support, but I feel they should take Linux more seriously outside of the enterprise (where they already do take it seriously).
  • yankeeDDL - Wednesday, June 3, 2015 - link

    I disagree.
    For casual gaming on laptops, 1366x768 is just fine. You'll need a lot more horsepower to drive a fullHD screen and battery life will suffer.
    I won't say there's no benefit to gaming at fullHD vs 1366x768: obviously the visuals are better, but if you want an "all-rounder" laptop that does not weigh a ton (like "real" gaming laptops) and is below $500, it's not bad at all.
  • BrokenCrayons - Wednesday, June 3, 2015 - link

    I personally would rather have a cheap 1366x768 panel. I don't care about color accuracy much, light bleed, panel responsiveness or much of anything else and haven't since we transitioned from passive to active matrix screens in the 486 to original Pentium era of notebook computers. In fact, I see higher resolutions as an unnecessary (because I have to scale things anyway to easily read text and interact with UI elements and because native resolution gaming on higher res screens demands more otherwise unnecessary GPU power) drain on battery life that invariably drives up the cost of the system to get otherwise identical performance. The drive for progressively smaller, higher pixel density displays is a pointless struggle to fill in comparable checkboxes between competitors to appease a consumer audience that has been swept up in the artificially fabricated frenzy over an irrelevant device specification.
  • yankeeDDL - Wednesday, June 3, 2015 - link

    I think it depends on the use, ultimately.
    For office work (i.e. much reading/writing of emails), a reasonably high resolution helps make the text sharp and easier on the eyes.
    For home use (web browsing, watching videos, casual gaming) though, I find it a lot less relevant.
    Personally, at home, I'd rather have a <$400 laptop always ready to be used for anything and moved around, even in the kitchen, than a $1000 laptop I would need to treat with kid gloves for fear of damaging it. Since Kaveri I've also started recommending AMD again to my friends and family: much cheaper than Intel, and the decent GPU makes the machines a lot more versatile. Again, my opinion, based on my use. As they say: to each his own...
