Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next-generation memory standard: High Bandwidth Memory (HBM). Pitched as the successor to GDDR, HBM would make some very significant changes to how memory operates in order to further improve memory bandwidth and rein in memory power consumption.

AMD (and graphics predecessor ATI) have for their part spent the last decade on the cutting edge of adopting new memory technologies in the graphics space, being the first to deploy products based on the last two graphics DDR standards, GDDR4 and GDDR5. Consequently, AMD and Hynix’s announcement, though not a big deal at the time, was a logical extension of AMD’s past behavior in continuing to explore new memory technologies for future products. Assuming everything went well for the AMD and Hynix coalition – something that was likely, but not necessarily a given – in a few years the two companies would be able to bring the technology to market.


AMD Financial Analyst Day 2015

It’s now 4 years later, and successful experimentation has given way to productization. Earlier this month at AMD’s 2015 Financial Analyst Day, the company announced that they would be releasing their first HBM-equipped GPU – the world’s first HBM-equipped GPU, in fact – to the retail market this quarter. Since then there have been a number of questions about just what AMD intends to do with HBM and just what it means for their products (is it as big of a deal as it seems?), and while AMD is not yet ready to reveal the details of their forthcoming HBM-equipped GPU, the company is looking to hit the ground running on HBM, explaining what the technology is and what it can do for their products ahead of the GPU’s launch later this quarter.

To date there have been a number of presentations released on HBM, including from memory manufacturers, the JEDEC groups responsible for shaping HBM, AMD, and even NVIDIA. So although the first HBM products have yet to hit retail shelves, the underpinnings of HBM are well understood, at least within engineering circles. And it’s precisely because HBM is only well understood within those technical circles that AMD is making its latest disclosure today. AMD sees HBM as a significant competitive advantage over the next year, and with existing HBM presentations having been geared towards engineers, academia, and investors, AMD is looking to take the next step and reach out to end-users about HBM technology.

This brings us to the topic of today’s article: AMD’s deep dive disclosure on High Bandwidth Memory. Looking to set the stage ahead of their next GPU launch, AMD is reaching out to technical and gaming press to get the word out about HBM and what it means for AMD’s products. Ideally for AMD, an early disclosure on HBM can help to drum up interest in their forthcoming GPU before it launches later this quarter, but if nothing else it can help answer some burning questions about what to expect ahead of the launch. So with that in mind, let’s dive in.

I'd also like to throw out a quick thank you to AMD Product CTO and Corporate Fellow Joe Macri, who fielded far too many questions about HBM.

History: Where GDDR5 Reaches Its Limits
163 Comments

  • chizow - Wednesday, May 20, 2015 - link

    Wouldn't be the first time David Kanter was wrong, certainly won't be the last. Still waiting for him to recant his nonsense article about PhysX lacking SSE and only supporting x87. But I guess that's why he's David Kanter and not David ReKanter.
  • Poisoner - Friday, June 12, 2015 - link

You're just making up stuff. There's no way Fiji is just two Tonga chips stuck together. My guess is your identity is wrapped up in nVidia, so you need to spread FUD.
  • close - Tuesday, May 19, 2015 - link

    That will be motivation enough to really improve on the chip for the next generation(s), not just rebrand it. Because to be honest very, very few people need 6 or 8GB on a consumer card today. It's so prohibitively expensive that you'd just have an experiment like the $3000 (now just $1600) 12GB Titan Z.

    The fact that a select few can or would buy such a graphics card doesn't justify the costs that go into building such a chip, costs that would trickle down into the mainstream. No point in asking 99% of potential buyers to pay more to cover the development of features they'd never use. Like a wider bus, a denser interposer, or whatever else is involved in doubling the possible amount of memory.
  • chizow - Tuesday, May 19, 2015 - link

    Idk, I do think 6 and 8GB will be the sweet spot for any "high-end" card. 4GB will certainly be good for 1080p, but if you want to run 1440p or higher and have the GPU grunt to push it, that will feel restrictive, imo.

As for the expense, I agree it's a little bit crazy how much RAM they are packing on these parts. 4GB on the 970 I thought was pretty crazy at $330 when it launched, but now AMD is forced to sell their custom 8GB 290X for only around $350-360, and there are more recent rumors that Hawaii is going to be rebranded again for the R9 300 desktop line with a standard 8GB. How much they are going to ask for it is the question, because that's a lot of RAM to put on a card that sells for maybe $400 tops.
  • silverblue - Tuesday, May 19, 2015 - link

    ...plus the extra 30-ish watts of power just for having that extra 4GB. I can see why higher capacity cards had slightly nerfed clock speeds.
  • przemo_li - Thursday, May 21, 2015 - link

    VR.

    It requires 90Hz at 1080p x2, assuming graphics on par with current-gen non-VR titles!

    That is a lot of data to push to and from the GPU.
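    For scale, here's a quick back-of-the-envelope sketch of that claim (my numbers, not the commenter's: assuming 1080p per eye, 32-bit color, and ignoring overdraw, anti-aliasing, and intermediate render targets, all of which multiply the real figure many times over):

    ```python
    # Rough final-framebuffer traffic for VR: two 1080p eyes at 90 Hz.
    # Assumes 4 bytes per pixel (32-bit color); real rendering touches
    # each pixel many times, so this is only the scan-out floor.
    width, height = 1920, 1080
    bytes_per_pixel = 4
    eyes = 2
    refresh_hz = 90

    bytes_per_second = width * height * bytes_per_pixel * eyes * refresh_hz
    print(f"{bytes_per_second / 1e9:.2f} GB/s")  # ~1.49 GB/s for scan-out alone
    ```

    Even this lower bound, before any actual shading work, shows why memory bandwidth is front and center for VR-class workloads.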
  • robinspi - Tuesday, May 19, 2015 - link

    Wrong. They will be using a dual-link interposer, making it 8GB (hi-hi) instead of 4GB (hi). Read more on WCCFTech:

    http://wccftech.com/amd-radeon-r9-390x-fiji-xt-8-h...
  • HighTech4US - Tuesday, May 19, 2015 - link

    Wrong.

    4GB first, 8GB to follow (on dual GPU card)

    http://www.fudzilla.com/news/graphics/37790-amd-fi...
  • chizow - Tuesday, May 19, 2015 - link

    Wow lol. That 4GB rumor again. And that X2 rumor again. And $849 price tag for just the single GPU version???! I guess AMD is looking to be rewarded for their efforts with HBM and hitting that ultra-premium tier? I wonder if the market will respond at that asking price if the single-GPU card does only have 4GB.
  • przemo_li - Thursday, May 21, 2015 - link

    An artificial number like X GB won't matter.

    An artificial number like YZW FPS in games S, X, and E will ;)

    Do note that Nvidia needs to pack in lots of GB just as a side effect of its wide bus!
    It works for them, but games don't require 12GB now, nor in the short-term future (the consoles certainly don't!)
