Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next-generation memory standard: High Bandwidth Memory (HBM). Essentially pitched as the successor to GDDR, HBM would make some very significant changes to how memory operates in order to further improve memory bandwidth and turn back the dial on memory power consumption.

AMD (and graphics predecessor ATI) has for its part spent the last decade on the cutting edge of adopting new memory technologies in the graphics space, being the first to deploy products based on the last two graphics DDR standards, GDDR4 and GDDR5. Consequently, AMD and Hynix’s announcement, though not a big deal at the time, was a logical extension of AMD’s ongoing efforts to explore new memory technologies for future products. Assuming everything went well for the AMD and Hynix partnership – likely, but not necessarily a given – the two companies would be able to bring the technology to market within a few years.


AMD Financial Analyst Day 2015

It’s now four years later, and successful experimentation has given way to productization. Earlier this month at AMD’s 2015 Financial Analyst Day, the company announced that they would be releasing their first HBM-equipped GPU – the world’s first HBM-equipped GPU, in fact – to the retail market this quarter. Since then there have been a number of questions about just what AMD intends to do with HBM and just what it means for their products (is it as big of a deal as it seems?). While AMD is not yet ready to reveal the details of their forthcoming HBM-equipped GPU, the company is looking to hit the ground running on HBM, explaining what the technology is and what it can do for their products ahead of the GPU launch later this quarter.

To date there have been a number of presentations released on HBM, including from memory manufacturers, the JEDEC groups responsible for shaping HBM, AMD, and even NVIDIA. So although the first HBM products have yet to hit retail shelves, the underpinnings of HBM are well understood, at least inside engineering circles. In fact, it’s precisely because HBM is only well understood within those technical circles that AMD is making its latest disclosure today. AMD sees HBM as a significant competitive advantage over the next year, and with existing HBM presentations having been geared towards engineers, academia, and investors, AMD is looking to take the next step and reach out to end users about HBM technology.

This brings us to the topic of today’s article: AMD’s deep dive disclosure on High Bandwidth Memory. Looking to set the stage ahead of their next GPU launch, AMD is reaching out to technical and gaming press to get the word out about HBM and what it means for AMD’s products. Ideally for AMD, an early disclosure on HBM can help to drum up interest in their forthcoming GPU before it launches later this quarter, but if nothing else it can help answer some burning questions about what to expect ahead of the launch. So with that in mind, let’s dive in.

I'd also like to throw out a quick thank you to AMD Product CTO and Corporate Fellow Joe Macri, who fielded far too many questions about HBM.


  • LukaP - Wednesday, May 20, 2015 - link

    It's only developed by them. It's a technology that is on the market now (or will be in six months, after it stops being AMD-exclusive). It's the same as with GDDR3/5: ATI did a lot of the work developing it, but NV still had the option of using it.
  • chizow - Wednesday, May 20, 2015 - link

    http://en.wikipedia.org/wiki/JEDEC

    Like any standards board or working group, you have a few heavy-lifters and everyone else leeches/contributes as they see fit, but all members have access to the technology in the hopes it drives adoption for the entire industry. Obviously the ones who do the most heavy-lifting are going to be the most eager to implement it. See: FreeSync and now HBM.
  • Laststop311 - Wednesday, May 20, 2015 - link

    I do not agree with this article saying GPUs are memory-bandwidth bottlenecked. If you don't believe me, test it yourself: keep the GPU core clock at stock and maximize your memory OC, and you'll see very little gain, if any. Now put the memory at stock and maximize your GPU core OC, and you'll see noticeable, decent gains.

    HBM is still a very necessary step in the right direction. Being able to dedicate an extra 25-30 watts to the GPU core power budget is always a good thing. As 4K becomes the new standard and games upgrade their assets to take advantage of it, we should start to see GDDR5's bandwidth eclipsed, especially with multi-monitor 4K setups. It's better to be ahead of the curve than playing catch-up, but the benefits you get from using HBM right now, today, are actually pretty minor.

    In some ways it hurts AMD, as it forces us to pay more money for a feature we won't get much use out of. Would you rather pay 850 for an HBM 390X or 700 for a GDDR5 390X with basically identical performance, since memory bandwidth is still good enough for the next few years with GDDR5?
  • chizow - Wednesday, May 20, 2015 - link

    I agree, bandwidth is not going to be the game-changer that many seem to think, at least not for gaming/graphics. For compute, bandwidth to the GPU is much more important as applications are constantly reading/writing new data. For graphics, the main thing you are looking at is reducing valleys and any associated stutters or drops in framerate as new textures are accessed by the GPU.
  • akamateau - Monday, June 8, 2015 - link

    High bandwidth is absolutely essential for the increased demands that DX12 is going to bring. With DX11, GPUs did not work very hard. Massive numbers of draw calls are going to require massive amounts of rendering. That is where HBM is the only solution.

    With DX11 the API overhead limited a dGPU to around 2 million draw calls. With DX12 that changes radically, to 15-20 million draw calls. All those extra polygons need rendering! How do you propose to do that with minuscule DDR4-5 pipes?
  • nofumble62 - Wednesday, May 20, 2015 - link

    Won't be cheap. How many of you have pockets deep enough for this card?
  • junky77 - Wednesday, May 20, 2015 - link

    Just a note - the HBM solution seems to be more effective for high memory bandwidth loads. For low loads, the slower-clocked memory with higher parallelism might not be as effective as the faster-clocked GDDR5.
  • asmian - Wednesday, May 20, 2015 - link

    I understand that the article is primarily focussed on AMD as the innovator and GPU as the platform because of that. But once this is an open tech, and given the aggressive power budgeting now standard practice in motherboard/CPU/system design, won't there come a point at which the halving of power required means this *must* challenge standard CPU memory as well?

    I just feel I'm missing a roadmap here (or even a single sidenote, really) about how this will play into the non-GPU memory market. If bandwidth and power are both so much better than standard memory, and assuming there isn't some other exotic game-changing technology in the wings (RRAM?), what is the timescale for the switchover generally? Or is HBM's focus on bandwidth rather than pure speed the limiting factor for use with CPUs? But then, Intel forced us onto DDR4, which hasn't much improved speeds while increasing cost dramatically, because of the lower operating voltage and therefore power efficiency... so there's definitely form in transitioning to lower-power memory solutions. Or is GDDR so much more power-hungry than standard DDR that the power saving won't materialise with CPU memory?
  • Ryan Smith - Friday, May 22, 2015 - link

    The non-GPU memory market is best described as TBD.

    For APUs it makes a ton of sense, again due to the GPU component. But for pure CPUs? The cost/benefit ratio isn't nearly as high. CPUs aren't nearly as bandwidth starved, thanks in part to some very well engineered caches.
  • PPalmgren - Wednesday, May 20, 2015 - link

    There's something that concerns me with this: Heat!

    They push the benefits of a more compact card, but that also moves all the heat from the RAM right up next to the main core. Stacking the RAM also concentrates its heat, making it harder to dissipate.

    The significant power reduction results in a significant heat reduction, but it still concerns me. Current coolers are designed to cover the RAM for a reason, and the GPUs currently get hot as hell. Will they be able to cool this combined setup reasonably?
