Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next-generation memory standard: High Bandwidth Memory (HBM). Essentially pitched as the successor to GDDR, HBM would make some very significant changes to how memory operates in order to further improve memory bandwidth and turn back the dial on memory power consumption.

AMD (and graphics predecessor ATI) have for their part been on the cutting edge of adopting new memory technologies in the graphics space over the last decade, being the first to deploy products based on the last two GDDR standards, GDDR4 and GDDR5. Consequently, AMD and Hynix’s announcement, though not a big deal at the time, was a logical extension of AMD’s past behavior in continuing to explore new memory technologies for future products. Assuming everything were to go well for the AMD and Hynix coalition – something that was likely, but not necessarily a given – in a few years the two companies would be able to bring the technology to market.


AMD Financial Analyst Day 2015

It’s now 4 years later, and successful experimentation has given way to productization. Earlier this month at AMD’s 2015 Financial Analyst Day, the company announced that they would be releasing their first HBM-equipped GPU – the world’s first HBM-equipped GPU, in fact – to the retail market this quarter. Since then there have been a number of questions about just what AMD intends to do with HBM and just what it means for their products (is it as big of a deal as it seems?), and while AMD is not yet ready to reveal the details of their forthcoming HBM-equipped GPU, the company is looking to hit the ground running on HBM, explaining what the technology is and what it can do for their products ahead of the GPU launch later this quarter.

To date there have been a number of presentations released on HBM, including by memory manufacturers, the JEDEC groups responsible for shaping HBM, AMD, and even NVIDIA. So although the first HBM products have yet to hit retail shelves, the underpinnings of HBM are well understood, at least inside of engineering circles. In fact, it’s precisely because HBM is only well understood within those technical circles that AMD is making today’s disclosure. AMD sees HBM as a significant competitive advantage over the next year, and with existing HBM presentations having been geared towards engineers, academia, and investors, AMD is looking to take the next step and reach out to end-users about HBM technology.

This brings us to the topic of today’s article: AMD’s deep dive disclosure on High Bandwidth Memory. Looking to set the stage ahead of their next GPU launch, AMD is reaching out to technical and gaming press to get the word out about HBM and what it means for AMD’s products. Ideally for AMD, an early disclosure on HBM can help to drum up interest in their forthcoming GPU before it launches later this quarter, but if nothing else it can help answer some burning questions about what to expect ahead of the launch. So with that in mind, let’s dive in.

I'd also like to throw out a quick thank you to AMD Product CTO and Corporate Fellow Joe Macri, who fielded far too many questions about HBM.

History: Where GDDR5 Reaches Its Limits
163 Comments

  • phoenix_rizzen - Wednesday, May 20, 2015 - link

    I see you missed the part of the article that discusses how the entire package, RAM and GPU cores, will be covered by a heat spreader (most likely along with some heat transferring goo underneath to make everything level) that will make it easier to dissipate heat from all the chips together.

    Similar to how Intel CPU packages (where there's multiple chips) used heat spreaders in the past.
  • vortmax2 - Wednesday, May 20, 2015 - link

    Is this something that AMD will be able to license? Wondering if this could be a potential significant revenue stream for AMD.
  • Michael Bay - Wednesday, May 20, 2015 - link

    So what's the best course of action?
    Wait for the second generation of technology?
  • wnordyke - Wednesday, May 20, 2015 - link

    You will wait 1 year for the second generation. The second generation chips will be a big improvement over the current chips. (Pascal = 8 X Maxwell). (R400 = 4 X R390)
  • Michael Bay - Wednesday, May 20, 2015 - link

    4 times Maxwell seems nice.
    I'm in no hurry to upgrade.
  • akamateau - Thursday, May 28, 2015 - link

    No not at all.

    HBM doesn't need the depth of memory that DDR4 or DDR5 does. DX12 performance is going to go through the roof.

    HBM was designed to solve the GPU bottleneck. The electrical path latency improvement is at least one clock, not to mention the width of the pipes. The latency improvement will likely be 50% better. In and out. HBM will outperform ANYTHING that nVidia has out using DX12.

    Use DX11 and you cripple your GPU anyway. You can only get 10% of the DX12 performance out of your system.

    So get Windows 10, enable DX12, and buy this card; by Christmas ALL games will be DX12 capable, as Microsoft is supporting DX12 with XBOX.
  • Mat3 - Thursday, May 21, 2015 - link

    What if you put the memory controllers and the ROPs on the memory stack's base layer? You'd save more area for the GPU and have less data traffic going from the memory to the GPU.
  • akamateau - Thursday, May 28, 2015 - link

    That is what the interposer is for.
  • akamateau - Monday, June 8, 2015 - link

    AMD is already doing that. Here is their Patent.

    Interposer having embedded memory controller circuitry
    US 20140089609 A1
    " For high-performance computing systems, it is desirable for the processor and memory modules to be located within close proximity for faster communication (high bandwidth). Packaging chips in closer proximity not only improves performance, but can also reduce the energy expended when communicating between the processor and memory. It would be desirable to utilize the large amount of "empty" silicon that is available in an interposer. "

    And a little light reading:

    “NoC Architectures for Silicon Interposer Systems: Why pay for more wires when you can get them (from your interposer) for free?” – Natalie Enright Jerger, Ajaykumar Kannan, Zimo Li (Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto) and Gabriel H. Loh (AMD Research, Advanced Micro Devices, Inc.)
    http://www.eecg.toronto.edu/~enright/micro14-inter...
  • TheJian - Friday, May 22, 2015 - link

    It's only an advantage if your product actually WINS the gaming benchmarks. Until then, there is NO advantage. And I'm not talking about winning benchmarks that are purely used to show bandwidth (like 4K when running single-digit fps, etc.), when games are still well below 30fps min anyway. That is USELESS. If the game isn't playable at whatever settings you're benchmarking, that is NOT a victory. It's like saying something akin to this: "well if we magically had a gpu today that COULD run 30fps at this massively stupid resolution for today's cards, company X would win"...LOL. You need to win at resolutions 95% of us are using and in games we are playing (based on sales hopefully in most cases).

    Nvidia's response to anything good that comes of rev1 HBM will be "we have more memory" (a perception after years of built-up more mem=better), and adding up to a 512-bit bus (from the current 384-bit on their cards) if memory bandwidth is any kind of issue for next-gen top cards. Yields are great on GDDR5, speeds can go up as shrinks occur, and as noted Nvidia is on 384-bit, leaving a lot of room for even more memory bandwidth if desired. AMD should have gone one more rev (at least) on GDDR5 to be cost competitive, as more memory bandwidth (when GCN 1.2+ brings more bandwidth anyway) won't gain enough to make a difference vs. the price increase it will cause. They already have zero pricing power on apu, cpu and gpu. This will make it worse for a brand new gpu.

    What they needed to do was chop off the compute crap that most don't use (saving die size or committing that size to more stuff we DO use in games), and improve drivers. Their last drivers were December 2014 (for apu or gpu, I know, I'm running them!). The latest beta on their site is 4/12. Are they so broke they can't even afford a new WHQL driver every 6 months?

    No day 1 drivers for witcher3. Instead we get complaints that gameworks hair cheats AMD and complaints CDPR rejected tressfx two months ago. Ummm, should have asked for that the day you saw witcher3 wolves with gameworks hair 2yrs ago, not 8 weeks before game launch. Nvidia spent 2yrs working with them on getting the hair right (and it only works great on the latest cards, screws kepler too until possible driver fixes even for those), while AMD made the call to CDPR 2 months ago...LOL. How did they think that call would go this late into development which is basically just spit and polish in the last months? Hairworks in this game is clearly optimized for maxwell at the moment but should improve over time for others. Turn down the aliasing on the hair if you really want a temp fix (edit the config file). I think the game was kind of unfinished at release TBH. There are lots of issues looking around (not just hairworks). Either way, AMD clearly needs to do the WORK necessary to keep up, instead of complaining about the other guy.
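
As a rough sanity check on the bandwidth figures being thrown around in the comments above, here is a quick back-of-the-envelope sketch in Python. The per-pin rates are assumptions based on commonly cited figures for the era (7 Gbps GDDR5, and first-generation HBM at 500 MHz DDR, i.e. 1 Gbps per pin across a 1024-bit stack); actual shipping configurations may differ.

```python
# Rough peak-bandwidth arithmetic: GB/s = bus_width_bits * Gbps_per_pin / 8.
# The per-pin rates below are illustrative assumptions, not product specs.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical peak bandwidth of a memory interface, in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

# Per-device comparison
print(peak_bandwidth_gbs(32, 7.0))       # single 32-bit GDDR5 chip @ 7 Gbps  -> 28.0 GB/s
print(peak_bandwidth_gbs(1024, 1.0))     # single 1024-bit HBM stack @ 1 Gbps -> 128.0 GB/s

# Card-level comparison
print(peak_bandwidth_gbs(384, 7.0))      # 384-bit GDDR5 bus @ 7 Gbps         -> 336.0 GB/s
print(peak_bandwidth_gbs(512, 7.0))      # 512-bit GDDR5 bus @ 7 Gbps         -> 448.0 GB/s
print(4 * peak_bandwidth_gbs(1024, 1.0)) # four 1024-bit HBM stacks @ 1 Gbps  -> 512.0 GB/s
```

None of this speaks to latency, power, or real-world efficiency; it is just the raw interface math behind HBM's "wider but slower" trade-off versus GDDR5.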
