Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next-generation memory standard: High Bandwidth Memory (HBM). Essentially pitched as the successor to GDDR, HBM would make some very significant changes to how memory works in order to further improve memory bandwidth and turn back the dial on memory power consumption.

AMD (and graphics predecessor ATI) have for their part been on the cutting edge of adopting new memory technologies in the graphics space over the last decade, being the first to deploy products based on the last two graphics DDR standards, GDDR4 and GDDR5. Consequently, AMD and Hynix’s announcement, though not a big deal at the time, was a logical extension of AMD’s past behavior of continuing to explore new memory technologies for future products. Assuming everything went well for the AMD and Hynix coalition – something that was likely, but not necessarily a given – in a few years the two companies would be able to bring the technology to market.


AMD Financial Analyst Day 2015

It’s now 4 years later, and successful experimentation has given way to productization. Earlier this month at AMD’s 2015 Financial Analyst Day, the company announced that they would be releasing their first HBM-equipped GPU – the world’s first HBM-equipped GPU, in fact – to the retail market this quarter. Since then there have been a number of questions about just what AMD intends to do with HBM and what it means for their products (is it as big a deal as it seems?), and while AMD is not yet ready to reveal the details of their forthcoming HBM-equipped GPU, the company is looking to hit the ground running on HBM in order to explain what the technology is and what it can do for their products ahead of the GPU launch later this quarter.

To date there have been a number of presentations released on HBM, including by memory manufacturers, the JEDEC groups responsible for shaping HBM, AMD, and even NVIDIA. So although the first HBM products have yet to hit retail shelves, the underpinnings of HBM are already well understood, at least inside engineering circles. In fact, it’s precisely because HBM is only well understood within those technical circles that AMD is making today’s disclosure. AMD sees HBM as a significant competitive advantage over the next year, and with existing HBM presentations having been geared towards engineers, academia, and investors, AMD is looking to take the next step and reach out to end-users about HBM technology.

This brings us to the topic of today’s article: AMD’s deep dive disclosure on High Bandwidth Memory. Looking to set the stage ahead of their next GPU launch, AMD is reaching out to technical and gaming press to get the word out about HBM and what it means for AMD’s products. Ideally for AMD, an early disclosure on HBM can help to drum up interest in their forthcoming GPU before it launches later this quarter, but if nothing else it can help answer some burning questions about what to expect ahead of the launch. So with that in mind, let’s dive in.

I'd also like to throw out a quick thank you to AMD Product CTO and Corporate Fellow Joe Macri, who fielded far too many questions about HBM.

Comments

  • chizow - Tuesday, May 19, 2015

    Nvidia has already confirmed HBM2 support with Pascal (see the ref PCB on the last page). I guess they weighed the pros/cons of low supply/high costs and limited VRAM on HBM1 and decided to wait until the tech matured. HBM1 also has significantly less bandwidth than what HBM2 claims (1+TB/s).
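
    For context, here is a quick back-of-the-envelope sketch in Python of the bandwidth math behind that claim. The 1024-bit stack width and per-pin data rates are the publicly quoted HBM1/HBM2 figures; the four-stack totals and the 512-bit GDDR5 comparison card are assumed configurations for illustration, not a specific product:

        # Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 (bits per byte)
        def bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
            return bus_width_bits * pin_rate_gbps / 8

        hbm1_stack = bandwidth_gb_s(1024, 1.0)  # 128 GB/s per HBM1 stack
        hbm2_stack = bandwidth_gb_s(1024, 2.0)  # 256 GB/s per HBM2 stack
        gddr5_chip = bandwidth_gb_s(32, 7.0)    # 28 GB/s per 7Gbps GDDR5 chip

        print(f"HBM1, 4 stacks: {4 * hbm1_stack:.0f} GB/s")              # 512 GB/s
        print(f"HBM2, 4 stacks: {4 * hbm2_stack:.0f} GB/s")              # 1024 GB/s, i.e. the '1+TB/s'
        print(f"GDDR5, 16 chips (512-bit): {16 * gddr5_chip:.0f} GB/s")  # 448 GB/s
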
  • DanNeely - Tuesday, May 19, 2015

    Probably part of it; but I suspect passing on HBM1 is part of the same more conservative engineering approach that's led to nVidia launching on new processes a bit later than ATI over the last few generations. Going for the next big thing early on potentially gives a performance advantage, but it comes at a cost: manufacturing is generally more expensive because early adopters end up funding more of the upfront expense of building capacity, and being closer to the bleeding edge generally makes the engineering needed to make it all work harder. A dollar spent fighting bleeding-edge problems either contributes to higher device costs or leaves less engineering available to optimize other parts of the design.

    There's no right answer here. In some generations ATI got a decent boost from either a newer GDDR standard or GPU process. At other times, nVidia's gotten big wins from refining existing products; the 7xx/9xx series major performance/watt wins being the most recent example.
  • chizow - Wednesday, May 20, 2015

    Idk, I think AMD's early moves have been pretty negligible. GDDR4, for example, was a complete flop: it made no impact on the market, Nvidia skipped it entirely, and AMD moved off of it even within the same generation, with the 4770. GDDR5 was obviously more important, and AMD did have an advantage from their experience with the 4770. Nvidia obviously took longer to get their memory controller sorted out, but since then they've been able to extract higher performance from it.

    And that's not even getting into AMD's proclivity for going to a leading-edge process node sooner than Nvidia. Negligible performance benefit, certainly more efficiency (except when we're all stuck on 28nm), but not much in the way of increased sales, profits, margins, etc.
  • testbug00 - Tuesday, May 19, 2015

    They probably also didn't have the engineering set up for it. *rollseyes* For all of Nvidia's software superiority in the majority of cases, it is commonly accepted that AMD has far better physical design.

    And, they also co-developed HBM. That probably doesn't hurt!

    Nvidia probably wouldn't have gone with it anyway, but I don't think they even had the option.
  • chizow - Tuesday, May 19, 2015

    No, the article covers it quite well: AMD tends to move to next-gen commodity processes as soon as possible in an attempt to generate a competitive advantage, but unfortunately for them this gamble seldom pays off, typically increasing their risk and exposure without any significant return. This is just another example, as HBM1 clearly has limitations and trade-offs related to capacity, cost, and supply.

    As for not having the option, lol, yeah, I'm sure SK Hynix developed the process to pander to only AMD and their measly $300M/quarter in GPU revenue.
  • testbug00 - Tuesday, May 19, 2015

    Next-gen process? What does that have to do with HBM again? There you lose me, even with that slight explanation.

    Now, HBM has issues, but supply isn't one of them. Capacity is the real one: whether AMD can really make an 8GB card (a 6GB card would be enough, really). Cost is a lesser issue; it can be partially offset, so the extra cost of HBM won't all be eaten by AMD or added to the card, but the total will still be higher than if the card had 4GB of GDDR5.

    AMD *worked with* SK Hynix to develop this technology, and it is going to be widely adopted. At least, SK Hynix believed that enough to be willing to push forward with it while having only AMD as a partner (it appears to me). There's obviously some merit to it.
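
    On the capacity question, the arithmetic is straightforward; a minimal sketch, assuming the publicly quoted first-generation HBM figures of 2Gb dies stacked 4-Hi, with the four-stacks-per-card layout an assumed flagship-style configuration:

        # Card capacity = die density (Gb) x dies per stack x stacks per card / 8 (Gb -> GB)
        GBIT_PER_DIE    = 2  # HBM1 DRAM die density
        DIES_PER_STACK  = 4  # a "4-Hi" stack
        STACKS_PER_CARD = 4  # assumed flagship-style configuration

        capacity_gb = GBIT_PER_DIE * DIES_PER_STACK * STACKS_PER_CARD / 8
        print(f"HBM1 card capacity: {capacity_gb:.0f} GB")  # 4 GB, hence the 8GB debate
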
  • chizow - Tuesday, May 19, 2015

    HBM is that next-gen, commodity process....

    How can you say HBM doesn't have supply/yield issues? You really can't say that. In fact, if it follows the rest of the DRAM industry's historical pricing, prices are going to be exponentially higher until production ramps for the mainstream.

    This article already lists a number of additional costs that HBM carries, including the interposer itself, which adds complexity, cost, and another point of failure to a fledgling process.
  • testbug00 - Tuesday, May 19, 2015

    Because HBM doesn't bring any areas where you get to reduce cost, right? Currently it does and will add a net cost, but it can also reduce some costs. *yawn*
  • chizow - Thursday, May 21, 2015

    What? Again, do you think it will cost more, or not? lol.
  • Ranger101 - Wednesday, May 20, 2015

    Lol @ Chizowshill doing what he does best, Nvidia troll carrot still visibly protruding,
    stenching out the Anandtech forums...thanks for the smiles dude.
