Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next generation memory standard: High Bandwidth Memory (HBM). Essentially pitched as the successor to GDDR, HBM would make some very significant changes to how memory is built and interfaced in order to further improve memory bandwidth and rein in memory power consumption.

AMD (and graphics predecessor ATI) have for their part spent the last decade on the cutting edge of adopting new memory technologies in the graphics space, being the first to deploy products based on the last two graphics DDR standards, GDDR4 and GDDR5. Consequently, AMD and Hynix’s announcement, though not a big deal at the time, was a logical extension of AMD’s past behavior in continuing to explore new memory technologies for future products. Assuming everything went well for the AMD and Hynix partnership – something that was likely, but not necessarily a given – in a few years the two companies would be able to bring the technology to market.


AMD Financial Analyst Day 2015

It’s now four years later, and successful experimentation has given way to productization. Earlier this month at AMD’s 2015 Financial Analyst Day, the company announced that it will be releasing its first HBM-equipped GPU – the world’s first HBM-equipped GPU, in fact – to the retail market this quarter. Since then there have been a number of questions about just what AMD intends to do with HBM and just what it means for their products (is it as big of a deal as it seems?). And while AMD is not yet ready to reveal the details of their forthcoming HBM-equipped GPU, the company is looking to hit the ground running on HBM, explaining what the technology is and what it can do for their products ahead of the GPU’s launch later this quarter.

To date there have been a number of presentations released on HBM, including from memory manufacturers, the JEDEC groups responsible for shaping HBM, AMD, and even NVIDIA. So although the first HBM products have yet to hit retail shelves, the underpinnings of HBM are well understood, at least inside engineering circles. Indeed, it’s precisely because HBM is only well understood within those technical circles that AMD is making today’s disclosure. AMD sees HBM as a significant competitive advantage over the next year, and with existing HBM presentations having been geared towards engineers, academia, and investors, AMD is looking to take the next step and reach out to end-users about HBM technology.

This brings us to the topic of today’s article: AMD’s deep dive disclosure on High Bandwidth Memory. Looking to set the stage ahead of their next GPU launch, AMD is reaching out to technical and gaming press to get the word out about HBM and what it means for AMD’s products. Ideally for AMD, an early disclosure on HBM can help to drum up interest in their forthcoming GPU before it launches later this quarter, but if nothing else it can help answer some burning questions about what to expect ahead of the launch. So with that in mind, let’s dive in.

I'd also like to throw out a quick thank you to AMD Product CTO and Corporate Fellow Joe Macri, who fielded far too many questions about HBM.

Comments

  • Horza - Tuesday, May 19, 2015

    TW3 doesn't even get close; the highest VRAM usage I've seen is ~2.3GB @1440p with everything ultra, AA on, etc. In fact, of all the games you mentioned, Shadow of Mordor is the only one that really pushes past 4GB @1440p in my experience (without unplayable levels of MSAA). Whether that makes much difference to playability is another thing entirely; I've played Shadow on a 4GB card @1440p and it wasn't a stuttery mess or anything. It's hard to know without framerate/frametime testing if a specific game is using VRAM because it can or because it really requires it.

    We've been through a period of rapid VRAM requirement expansion but I think things are going to plateau soon like they did with the ports from previous console generation.
  • chizow - Wednesday, May 20, 2015

    I just got TW3 free with Nvidia's Titan X promotion and it doesn't seem to be pushing upwards of 3GB, but the rest of the games absolutely do. Are you enabling AA? GTA5, Mordor, Ryse (with AA/SSAA), and Unity all push over 4GB at 1440p. Also, any game that has heavy texture modding, like Skyrim, appreciates the extra VRAM.

    Honestly I don't think we've hit the ceiling yet; the consoles are the best indication of this, as they have 8GB of RAM, generally allocated as 2GB/6GB for CPU/GPU, so you are looking at ~6GB to really be safe, and we still haven't seen what DX12 will offer. Given many games are moving to single large resources like megatextures, being able to load the entire texture into local VRAM would obviously be better than having to stream it in using advanced methods like bindless textures.
  • przemo_li - Thursday, May 21, 2015

    False.

    It would be better to ONLY stream what will be needed!

    And that is why DX12/Vulkan will allow just that: apps will tell DX which parts to stream.

    Wholesale streaming is only good if the whole resource will be consumed.

    This is the benefit of bindless: only transfer what you will use.
  • chizow - Thursday, May 21, 2015

    False, streaming from system RAM or slower resources is suboptimal compared to keeping everything in local VRAM. If you can eliminate streaming, you're going to get a better experience and more timely data accesses, plain and simple.
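
To put the tradeoff being debated in this thread in concrete terms, below is a minimal sketch of a residency policy, with entirely hypothetical names and sizes rather than any real D3D or Vulkan API: keep a resource fully resident when the VRAM budget allows (every access is then a local read), and fall back to streaming only the working set when it doesn’t.

```cpp
#include <cstdint>
#include <cstdio>

// Toy residency policy illustrating the tradeoff debated above: keep a
// resource fully resident in VRAM when the budget allows (no streaming
// hitches at draw time), otherwise stream only the subset the frame
// actually needs (no wasted transfers). Hypothetical names and sizes.
struct Texture {
    const char* name;
    uint64_t fullSizeBytes;    // the entire mip chain / megatexture
    uint64_t workingSetBytes;  // the tiles actually visible this frame
};

enum class Plan { FullyResident, StreamTiles };

Plan choosePlan(const Texture& t, uint64_t vramBudgetBytes) {
    // If the whole resource fits, full residency wins: every access is a
    // local VRAM read with no PCIe transfer at draw time.
    if (t.fullSizeBytes <= vramBudgetBytes)
        return Plan::FullyResident;
    // Otherwise upload only the needed tiles and accept the streaming
    // machinery, plus its potential hitches.
    return Plan::StreamTiles;
}

int main() {
    const uint64_t budget = 4ull << 30;  // a 4GB card
    const Texture megatexture{"terrain", 6ull << 30, 512ull << 20};
    const Plan plan = choosePlan(megatexture, budget);
    std::printf("%s: %s, uploading %llu MB of %llu MB\n", megatexture.name,
                plan == Plan::FullyResident ? "fully resident" : "stream tiles",
                (unsigned long long)((plan == Plan::FullyResident
                                          ? megatexture.fullSizeBytes
                                          : megatexture.workingSetBytes) >> 20),
                (unsigned long long)(megatexture.fullSizeBytes >> 20));
    return 0;
}
```

On a 4GB card, a 6GB megatexture necessarily takes the streaming path, which is precisely where the concern over timely data access comes in.
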
  • testbug00 - Tuesday, May 19, 2015

    A quick check on HardOCP shows max settings on a Titan X using just under 4GB of VRAM at 1440p. To keep it playable, you had to turn down the settings slightly.
    http://www.hardocp.com/article/2015/05/04/grand_th...

    You certainly can push VRAM usage over 4GB at 1440/1600p, but generally speaking it appears that doing so pushes the game into not being fluid.

    Having at least 6GB is 100% the safe spot. 4GB is pushing it.
  • chizow - Tuesday, May 19, 2015

    Those aren't max settings, not even close. FXAA is being used; turn it up to just 2xMSAA or MFAA with Nvidia and that breaks 4GB easily.

    Source: I own a Titan X and play GTA5 at 1440p.

    Also, the longer you play, the more you load, and the bigger your RAM and VRAM footprint gets. And this is a game that launched on last-gen consoles in 2013, so thinking 4GB is going to hold up for the life of this card with DX12 on the horizon is not a safe bet, imo.
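
As a rough illustration of why MSAA inflates VRAM consumption so quickly, here is a back-of-the-envelope calculation of render target sizes at 1440p. The formats are assumptions for the sake of the math (4 bytes per color sample, 4 bytes per depth sample); real games allocate many more buffers than this single pair.

```cpp
#include <cstdio>

int main() {
    // Rough VRAM cost of a single color+depth render target pair at 1440p.
    // Toy formats: RGBA8 color (4 bytes/sample) and D24S8 depth
    // (4 bytes/sample). Real games keep many more buffers than this.
    const long long width = 2560, height = 1440;
    const long long bytesPerColorSample = 4;
    const long long bytesPerDepthSample = 4;
    const int sampleCounts[] = {1, 2, 4, 8};  // 1x (no MSAA) through 8xMSAA
    for (int samples : sampleCounts) {
        const long long bytes = width * height * samples *
                                (bytesPerColorSample + bytesPerDepthSample);
        std::printf("%dx: ~%.1f MB for one color+depth target\n", samples,
                    bytes / (1024.0 * 1024.0));
    }
    return 0;
}
```

One color+depth pair grows from roughly 28MB at 1x to roughly 225MB at 8xMSAA, and a modern renderer keeps many such buffers alive at once, on top of its texture pool.
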
  • Mark_gb - Sunday, May 24, 2015

    Do not forget the color compression that AMD designed into their chips; it's in Fiji. In addition, AMD assigned some engineers to work on ways to use the 4GB of memory more efficiently. In the past AMD viewed memory as free, since capacities kept expanding and efficiency wasn't really needed, so they had never bothered to assign anyone to make memory usage efficient. Now, with a team having worked on that issue, which only requires driver changes to make memory usage and allocation more efficient, 4GB will be enough.
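
For some intuition on the color compression mentioned above: such schemes exploit the fact that neighboring pixels are usually similar, so a tile can be stored as an anchor value plus small per-pixel deltas. The toy encoder below is purely illustrative and is not AMD’s actual (undisclosed) algorithm.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy delta encoding over one tile of pixel values: store the first pixel
// verbatim as an anchor, then each remaining pixel as a delta against its
// predecessor. Smooth regions yield tiny deltas that fit in very few bits,
// so the tile can be moved over the bus in less space. Purely illustrative;
// real GPU color compression is considerably more elaborate.
struct EncodedTile {
    uint8_t anchor;               // first pixel, stored verbatim
    std::vector<int16_t> deltas;  // remaining pixels vs their predecessor
};

EncodedTile deltaEncode(const std::vector<uint8_t>& tile) {
    EncodedTile enc{tile.empty() ? uint8_t(0) : tile[0], {}};
    for (size_t i = 1; i < tile.size(); ++i)
        enc.deltas.push_back(int16_t(tile[i]) - int16_t(tile[i - 1]));
    return enc;
}

int main() {
    // A smooth 8x8 gradient tile: every delta is 0 or 1.
    std::vector<uint8_t> tile;
    for (int i = 0; i < 64; ++i)
        tile.push_back(uint8_t(100 + i / 4));
    const EncodedTile enc = deltaEncode(tile);
    int tiny = 0;
    for (int16_t d : enc.deltas)
        if (d >= -1 && d <= 1) ++tiny;
    std::printf("%d of %zu deltas fit in 2 bits\n", tiny, enc.deltas.size());
    return 0;
}
```

The usual caveat is that this style of lossless compression primarily saves bandwidth when reading and writing the buffer; how much, if any, allocated footprint it saves depends on the implementation.
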
  • xthetenth - Tuesday, May 19, 2015

    The Tonga XT x2 with HBM rumor is insane if you're suggesting the one I think you are. First off, the chip has a GDDR memory controller, and second, if the CF profile doesn't work out, a 290X is a better card.
  • chizow - Tuesday, May 19, 2015

    I do think it's crazy, but the more I read the more credibility there is to that rumor lol. Btw, memory controllers can often support more than one standard; it's not uncommon at all. In fact, most of AMD's APUs can support HBM per their own whitepapers, and I believe there was a similar leak last year that was the basis of the rumors that Tonga would launch with HBM.
  • tuxRoller - Wednesday, May 20, 2015

    David Kanter really seemed certain that AMD was going to bring 8GB of HBM.
