Low Power, FinFET and Clock Gating

When AMD launched Carrizo and Bristol Ridge for notebooks, one of the big stories was the number of techniques AMD had implemented to reduce power consumption and thereby increase efficiency. A number of those lessons have come through with Zen, along with a few new aspects in play due to the lithography.

First up is the FinFET effect. Regular readers of AnandTech, and those that follow the industry, will already be bored to death with FinFET, but the design allows for a lower-power transistor at a given frequency. Every company using FinFET can of course tune its implementation for specific power/performance characteristics, but the 14nm FinFET process at GlobalFoundries is already a known quantity for AMD, as its Polaris GPUs are built on it. AMD has also confirmed that Zen uses the density-optimised variant of 14nm FinFET (which allows for smaller die sizes and more reasonable efficiency points), and the combination contributes to a shift of either higher performance at the same power or the same performance at lower power.
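To first order, switching power in CMOS follows P ≈ α·C·V²·f, so a process that can sustain the same frequency at a lower supply voltage saves power quadratically. The sketch below illustrates that relationship only; the activity factor, capacitance and voltages are made-up placeholders, not AMD or GlobalFoundries figures.

```python
# Illustrative only: first-order CMOS dynamic power model, P = a * C * V^2 * f.
# All numbers below are hypothetical, chosen just to show the V^2 effect.

def dynamic_power(activity, cap_farads, volts, freq_hz):
    """First-order switching power of a CMOS block, in watts."""
    return activity * cap_farads * volts ** 2 * freq_hz

planar = dynamic_power(0.1, 1e-9, 1.20, 3.0e9)   # hypothetical planar design point
finfet = dynamic_power(0.1, 1e-9, 1.05, 3.0e9)   # same frequency, lower voltage

# (1.05 / 1.20)^2 ~= 0.77: roughly a quarter of the switching power saved
print(f"power ratio at equal frequency: {finfet / planar:.2f}")
```

The same model read the other way gives the article's alternative: hold power constant and spend the voltage headroom on frequency instead.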

AMD stated in the brief that power consumption and efficiency were constantly drilled into the engineers, and as explained in previous briefings, for a number of elements of the core there ends up being a tradeoff between performance and efficiency (e.g. 1% more performance might cost 2% in efficiency). For Zen, the micro-op cache will save power by avoiding trips further out into the memory hierarchy for instruction data, and improved prefetch plus a couple of other features such as move elimination will also reduce the work done. AMD also states that the cores will be aggressively clock gated to improve efficiency.
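The 1%-performance-for-2%-efficiency example can be put into numbers: measured as performance per watt, a feature that adds 1% performance for 2% more power is a net loss. A minimal sketch, using only the article's example percentages:

```python
# Toy illustration of the perf-vs-efficiency tradeoff described above.
# The 1% / 2% figures are the article's example numbers, not measured Zen data.

def efficiency_delta(perf_gain, power_cost):
    """Relative change in performance-per-watt for a candidate core feature."""
    return (1 + perf_gain) / (1 + power_cost) - 1

delta = efficiency_delta(0.01, 0.02)
print(f"perf/watt change: {delta:+.2%}")  # slightly negative: the feature hurts efficiency
```

Under a strict efficiency-first policy, a feature like this gets rejected even though it makes the core faster.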

We saw with AMD’s 7th Gen APUs that power gating was also a target of that design, especially as remaining at the best efficiency point for a given level of performance is usually the best policy. The way the diagram above is laid out would seem to suggest that different parts of the core can be clock gated independently depending on use (e.g. decode vs FP ports), although we were not able to confirm if this is the case. It also relies on having very quick (1-2 cycle) clock gating implementations; note that clock gating is different from power gating, which is harder to implement.
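As a rough mental model (not AMD's actual implementation), per-unit clock gating means an idle block stops paying dynamic power on gated cycles but still leaks, which is why power gating saves more yet cannot toggle in 1-2 cycles. The unit names and wattages below are invented purely for illustration:

```python
# Conceptual sketch of per-unit clock gating. Unit names and watt figures
# are made up; a gated unit keeps an assumed static (leakage) share.

UNIT_POWER = {"decode": 2.0, "int_ports": 3.0, "fp_ports": 4.0}  # hypothetical watts
LEAKAGE_FRACTION = 0.1  # assumed static share that clock gating cannot remove

def core_power(active_units):
    """Total power for one cycle, given the set of units whose clocks run."""
    total = 0.0
    for unit, dyn in UNIT_POWER.items():
        if unit in active_units:
            total += dyn                      # clock running: full power
        else:
            total += dyn * LEAKAGE_FRACTION   # clock gated: leakage only
    return total

print(core_power({"decode", "int_ports"}))  # integer-only code: FP ports gated
```

Power gating would drive that leakage term toward zero as well, at the cost of a wake-up latency far longer than the 1-2 cycles quoted for clock gating.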


216 Comments


  • FMinus - Thursday, August 18, 2016 - link

    He's right though. AMD was a licensee of Intel, producing Intel products in bulk because Intel couldn't keep up with demand. Then AMD reverse engineered Intel's products and brought out their own line, and Intel didn't like that, so they broke the agreement, which in the end didn't help much since AMD already had all they needed.

    That being said, what AMD did anyone would have done, so it's just business as usual. Then they actually stepped up and made great CPUs of their own to combat Intel, including a great dual core and AMD64. AMD did a lot for computing, but in the early days they were pretty much a contractor and a pirate.

    I wish them all the best with Zen and the future, and I hope they get Vega right. By that I mean don't gimp the chip on power delivery because you can't get that under control; everyone knows Nvidia is ahead in that game. Just put a great performing GPU on the market and let it eat 250W if need be.
  • Nagorak - Thursday, August 18, 2016 - link

    Yeah, they matched Intel on the CPU front, and Intel responded by abusing their stronger market position to limit AMD's gains. I'll be happy to get an AMD processor back in my machine just based on principle.
  • Klimax - Saturday, August 20, 2016 - link

    Correction: IBM forced Intel to license a number of CPU manufacturers (at least two suppliers, similar to the second-source rule used by militaries). And there was a lawsuit or two. Fun stuff.
  • looncraz - Thursday, August 18, 2016 - link

    Maybe you are too young to remember, but AMD has historically been a primary driver in processor innovation.

    They created the first native multi-core dies, broke the GHz barrier, were first to debut dynamic clock speeds, invented the seamless x64 transition and the AMD64 instruction set, created CMT, created HSA, created the APU, and so much more. And I'm only focusing on CPUs here.

    Intel uses a great deal of AMD tech, and vice-versa.
  • smilingcrow - Thursday, August 18, 2016 - link

    I used to buy AMD exclusively but they have been second rate for 10 years now.
    I don't buy innovations, I buy products, and AMD have really struggled for a decade to offer decent products unless your main criterion is value.
    Value is fine, but for mobile products, where power consumption is very important, and for workstations, where performance is king, AMD have had nothing to compete with.
    I'm very glad that Zen is looking as if it will compete at the higher end although I think they will find it harder to compete with Core M.
    Just because I don't view AMD through ten year old rose tinted glasses doesn't mean I don't want them to succeed.
    I have been feeling confident about Zen as an 8c/16t chip for ages, but how it does as a 4c/8t chip may well be more important in the consumer space, unless the 8c/16t chip is unusually cheap for its performance level, which it could even be.

    Some people here can't tell the difference between someone who is critical of AMD's failings and an Intel fanboy. Intel have their issues but they have delivered decent chips in the decade that AMD fell into disarray. I'm not loyal to incompetent companies.
  • Nagorak - Thursday, August 18, 2016 - link

    It's been hard for anyone to stick with AMD for the last decade. Phenom and Phenom II came up short, and then Bulldozer turned out to be a total disaster. In retrospect AMD should have tossed Bulldozer in the trash and started work on a new processor design immediately. Trying to iterate on that failed design is what almost killed AMD.
  • Gigaplex - Thursday, August 18, 2016 - link

    "Created the APU".

    That's not entirely accurate. Intel was actually first to market with their "APU" type CPUs, even though AMD announced theirs first.
  • KPOM - Friday, August 19, 2016 - link

    These days ARM (soon SoftBank) is the company that keeps Intel management up at night. Intel missed the boat on mobile.
  • Kevin G - Saturday, August 20, 2016 - link

    The first dual core chip was POWER4 from IBM.

    Dynamic clock speeds existed in mobile (think ARM/MIPS) designs back in the 90's.

    Seamless x86 transition could be credited to Transmeta for their VLIW based Crusoe line of chips running x86 code. Runner up could be the FX!32 emulator that ran unmodified x86 Windows binaries on Alpha based hardware back in the 90's.

    CMT was done beforehand in Sun's Niagara chip. There were designs even before that which did unit sharing for CMT.

    Elements of HSA came from 3Dlabs and their cards supporting a unified virtual address space.

    Integrating a CPU and GPU was first done by Intel, though they never shipped it due to relying on a flawed RDRAM to SDRAM buffer chip:
    http://m.theregister.co.uk/2007/02/06/forgotten_te...

    Thus the only innovation on your list is the 1 GHz clock rate for a CPU, which isn't that innovative.
  • Klimax - Saturday, August 20, 2016 - link

    Sorry, wrong. Multicores weren't AMD's invention, dynamic clock speeds were a parallel development, the x64 transition is an AMD win only thanks to Microsoft, who killed Intel's own development, CMT is not AMD's invention (and I would say it is nothing to be proud of), HSA is just a label for preexisting technologies, and the APU was done before AMD's own (in fact, Intel had an APU-like chip in the late 80s). AMD didn't invent much, as most of the technologies were bought in previous acquisitions, like HyperTransport (see DEC Alpha).

    Sorry to tell you, but what you posit is pure fantasy. AMD invented very few things, and fewer of them were of much importance or use.
