HBM: The 4GB Question

Having taken a look at HBM from a technical perspective, there's one final matter to address with Fiji's implementation of HBM: capacity.

For HBM1, the maximum capacity of an HBM stack is 1GB, achieved through the use of four 256MB (2Gb) memory dies. With a 1GB/stack limit, this means that AMD can only equip the R9 Fury X and its siblings with 4GB of VRAM when using 4 stacks. Larger stacks are not possible, and while in principle an 8-stack HBM1 design would be feasible, it would double the width of the memory bus and invite a whole slew of issues along with it. Ultimately, for reasons ranging from interposer size to where to place the stacks, the most AMD can get out of HBM1 is 4GB of VRAM.
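To put the capacity and bus-width tradeoff in concrete terms, here is a quick back-of-the-envelope sketch in Python. The 1024-bit-per-stack interface width comes from the HBM1 spec covered earlier; everything else follows from the 2Gb dies.

```python
# Back-of-the-envelope HBM1 capacity and bus-width math for Fiji.
DIE_CAPACITY_GB = 2 / 8          # 2Gb per die = 0.25GB
DIES_PER_STACK = 4               # HBM1 stacks top out at 4 dies (4-Hi)
STACK_BUS_WIDTH = 1024           # each HBM1 stack has a 1024-bit interface

def hbm1_config(stacks):
    capacity = stacks * DIES_PER_STACK * DIE_CAPACITY_GB
    bus_width = stacks * STACK_BUS_WIDTH
    return capacity, bus_width

for stacks in (4, 8):
    cap, bus = hbm1_config(stacks)
    print(f"{stacks} stacks: {cap:.0f}GB VRAM, {bus}-bit bus")

# 4 stacks: 4GB VRAM, 4096-bit bus  (Fiji as shipped)
# 8 stacks: 8GB VRAM, 8192-bit bus  (doubles the bus width, hence impractical)
```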

To address the elephant in the room, then: is 4GB going to be enough VRAM? 4GB is as much VRAM as was on the R9 290X in 2013, and as much as was on the GTX 980 in 2014. But it's less than the 6GB on the GTX 980 Ti in 2015 (never mind the 12GB GTX Titan X), and less than the 8GB on the just-launched R9 390X. Even ignoring NVIDIA for a moment, the R9 Fury X offers less VRAM than AMD's next-lower tier of video cards.

This is quite a role reversal for the video card industry, as traditionally AMD has offered more VRAM than the competition. Thanks in large part to their favoring of wider memory buses (which mean more memory chips), AMD has offered greater memory capacities at similar prices than the traditionally stingy NVIDIA. Now, however, the shoe is on the other foot, and the timing is not all that great.

Console Memory Capacity
                    Capacity
Xbox 360            512MB (Shared)
PlayStation 3       256MB + 256MB
Xbox One            8GB (Shared)
PlayStation 4       8GB (Shared)
Fiji                4GB (Dedicated VRAM)

Perhaps the single biggest influence on VRAM requirements right now is the current generation of consoles, which launched back in 2013 with 8GB of RAM each. To be fair to AMD, and to be technically correct, these are shared memory devices, so that 8GB is split between GPU and CPU resources, and even that split comes after Microsoft and Sony set aside a significant amount of memory for their OSes and background tasks. Still, using the current-gen consoles as a baseline, it is now possible to build a game that requires over 4GB of VRAM (if only just over), and if that game is naïvely ported over to the PC, there could be issues.
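To illustrate how a console port could brush up against a 4GB card, here is a rough budget sketch. The 3GB OS reservation and the 80/20 GPU/CPU split are illustrative assumptions for the sake of the arithmetic, not published figures.

```python
# Rough console-to-PC budget sketch. The 3GB OS reservation and the
# 80/20 GPU/CPU split are illustrative assumptions, not published figures.
total_ram_gb = 8.0       # Xbox One / PS4 shared memory pool
os_reserved_gb = 3.0     # assumed OS + background task reservation
game_budget_gb = total_ram_gb - os_reserved_gb   # ~5GB left for the game

gpu_share = 0.8          # assumed split for a graphics-heavy title
gpu_resources_gb = game_budget_gb * gpu_share
print(f"GPU-side working set: {gpu_resources_gb:.1f}GB")   # -> 4.0GB

# A naive port that mirrors this working set straight into dedicated VRAM
# lands right at (or just over) a 4GB card's capacity.
```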

Throwing an extra wrench into things is the fact that PCs have more going on than just console games. PC gamers buying high-end cards like the R9 Fury X will be running at resolutions greater than 1080p, and likely with higher quality settings than the console equivalent, driving up VRAM requirements. The Windows Desktop Window Manager, which is responsible for rendering and compositing the different windows together in 3D space, consumes its own VRAM as well. So the PC as a platform pushes VRAM requirements higher still.

The reality of the situation is that AMD knows where they stand. 4GB is the most they can equip Fiji with, so it's what they will have to make do with until HBM2 arrives with greater densities. In the meantime the marketing side of AMD needs to convince potential buyers that 4GB is enough, while the technical side needs to come up with other solutions to help mitigate the problem.

On the latter point, while AMD can't do anything about the amount of VRAM they have, they can and are working on doing a better job of using it. AMD has been rather straightforward in admitting that up until now they've never seriously dedicated resources to VRAM management on their cards, as there was never a point where it was an issue; until Fiji there was always enough VRAM.

Which is why for Fiji, AMD tells us they have dedicated two engineers to the task of VRAM optimizations. To be clear, there's little AMD can do to reduce VRAM consumption, but what they can do is better manage which resources are placed in VRAM and which are paged out to system RAM. Even this optimization can't completely resolve the 4GB issue, but it can help up to a point. So long as a game isn't actively trying to use more than 4GB of resources at once, intelligent paging can help ensure that only the resources actively in use reside in VRAM, and are therefore immediately available to the GPU when requested.
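As an illustration of the kind of paging policy being described, here is a minimal LRU residency sketch in Python. To be clear, this is our own toy model of the concept, not AMD's driver code; real driver heuristics would weigh far more than recency.

```python
from collections import OrderedDict

class ResidencyManager:
    """Toy LRU residency manager: keeps recently used resources in VRAM,
    demotes the least recently used ones to system RAM when over budget."""

    def __init__(self, vram_budget_mb):
        self.vram_budget = vram_budget_mb
        self.in_vram = OrderedDict()   # resource id -> size (MB), LRU order
        self.in_sysram = {}            # paged-out resources

    def touch(self, res_id, size_mb):
        """Called when the GPU needs a resource this frame."""
        if res_id in self.in_vram:
            self.in_vram.move_to_end(res_id)     # mark as recently used
            return
        self.in_sysram.pop(res_id, None)         # promote from system RAM
        self.in_vram[res_id] = size_mb
        self._evict_until_fits()

    def _evict_until_fits(self):
        while sum(self.in_vram.values()) > self.vram_budget:
            victim, size = self.in_vram.popitem(last=False)  # LRU victim
            self.in_sysram[victim] = size

mgr = ResidencyManager(vram_budget_mb=4096)
for res in [("terrain", 1500), ("textures_a", 2000),
            ("textures_b", 1800), ("terrain", 1500)]:
    mgr.touch(*res)
print(sorted(mgr.in_vram))    # ['terrain', 'textures_b'] stay resident
```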

As for the overall utility of this kind of optimization, it's going to depend on a number of factors, including the OS, the game's own resource management, and ultimately the real working set of a game. The situation AMD faces is one where they must simultaneously fight an OS/driver paradigm that wastes memory and games that traditionally treat VRAM like it's going out of style. The limitations of DirectX 11/WDDM 1.x prevent developers from fully reusing certain types of assets, and all the while it's extremely common for games to claim much (if not all) of the available VRAM for their own use, both to ensure they have enough VRAM for future needs and to cache as many resources as possible for better performance.

The good news is that the current situation leaves overhead that AMD can optimize around. AMD has been creating both generic and game-specific memory optimizations to better manage VRAM usage, controlling what is held in local VRAM versus paged out to system memory. By consolidating duplicate resources and clamping down on overzealous caching by games, it is possible to get more mileage out of the 4GB of VRAM AMD has to work with.
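Duplicate-resource control in particular lends itself to a simple illustration: if two game-visible handles point at identical asset data, only one copy needs to live in VRAM. The sketch below is our own illustration of that idea, not AMD's implementation.

```python
import hashlib

# Toy duplicate-resource detection: identical asset data uploaded under
# different handles can be backed by a single shared VRAM allocation.
vram_pool = {}       # content hash -> single shared allocation
handle_table = {}    # game-visible handle -> content hash

def upload(handle, data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in vram_pool:
        vram_pool[digest] = data       # first copy actually lands in VRAM
    handle_table[handle] = digest      # later copies just alias it

upload("brick_wall_lod0", b"\x00" * 1024)
upload("brick_wall_copy", b"\x00" * 1024)   # duplicate asset, no new VRAM
print(len(vram_pool), "allocation(s) for", len(handle_table), "handles")
# -> 1 allocation(s) for 2 handles
```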

Longer term, AMD is looking to the launch of Windows 10 and DirectX 12 to change the situation for the better. The low-level API will allow careful developers to avoid duplicate assets in the first place, and WDDM 2.0 overall is said to be a bit smarter about how it handles VRAM consumption. Nonetheless, the first DirectX 12 games won't launch for a few more months, and it will be longer still until such games are in the majority. As a result, AMD needs to do well with Windows 8.1 and DirectX 11 games, as those games aren't going anywhere right away, and they will be the games that stress Fiji the most.

So with that in mind, let’s attempt to answer the question at hand: is 4GB enough VRAM for R9 Fury X? Is it enough for a $650 card?

The short answer is yes, at the moment it’s enough, if just barely.

To be clear, we can without fail “break” the R9 Fury X and place it in situations where performance nosedives because it has run out of VRAM. However, among the tests we've put together, those cases are essentially edge cases; any scenario we come up with that breaks the R9 Fury X also results in average framerates that are too low to be playable in the first place. So it is very difficult (though I do not believe impossible) to come up with a scenario where the R9 Fury X would produce playable framerates if only it had more VRAM.

Case in point: in our current gaming test suite, Shadow of Mordor and Grand Theft Auto V are the two most VRAM-hungry games. Attempting to break the R9 Fury X with Shadow of Mordor is ineffective at best; even with the HD texture pack installed (which is not the default for our test suite) the game's built-in benchmark hardly registers a difference, with both the average and minimum framerates virtually unchanged from our results without the texture pack. Actually playing the game is much the same, though it's entirely possible there are scenarios not covered by our playtime or the benchmark where more than 4GB of VRAM is truly required.

Breaking Fiji: VRAM Usage Testing
                                    R9 Fury X    GTX 980 Ti
Shadow of Mordor Ultra, Avg         47.7 fps     49 fps
Shadow of Mordor Ultra, Min         31.6 fps     38 fps
GTA V, "Breaker", Avg               21.7 fps     26.2 fps
GTA V, "Breaker", 99th Perc.        6 fps        17.8 fps

Meanwhile with GTA V we can break the R9 Fury X, but only at unplayable settings. The card already teeters on the brink with our standard 4K “Very High” settings, which include 4x MSAA but no “advanced” draw distance enhancements, with minimum framerates well below the GTX 980 Ti's. Turning up the draw distance in turn further halves those minimums, driving the 99th percentile framerate down to 6fps as the R9 Fury X is forced to swap between VRAM and system RAM over the comparatively slow PCIe bus.
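Some quick math shows why spilling over PCIe is so punishing. The 512GB/sec figure is Fiji's local HBM bandwidth; the PCIe figure is the theoretical maximum for a 3.0 x16 slot.

```python
# Why paging over PCIe hurts: rough bandwidth comparison.
hbm_bandwidth_gbs = 512.0      # Fiji's local HBM bandwidth
pcie3_x16_gbs = 15.75          # theoretical PCIe 3.0 x16 throughput

print(f"HBM is ~{hbm_bandwidth_gbs / pcie3_x16_gbs:.0f}x faster "
      f"than PCIe 3.0 x16")    # -> HBM is ~33x faster than PCIe 3.0 x16

# Any frame that must pull even a few hundred MB of overflow resources
# across PCIe adds tens of milliseconds of stall time -- exactly the sort
# of hitch that drags 99th percentile framerates into the single digits.
```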

But in both of these cases the average framerate is below 30fps (never mind 60fps), and not just for the R9 Fury X, but for the GTX 980 Ti as well. No scenario we've tried that breaks the R9 Fury X leaves either card running at 30fps or better, typically because breaking the R9 Fury X requires running with MSAA, which is itself a performance killer.

Unfortunately for AMD, they are pushing the R9 Fury X as a 4K gaming card, and for good reason: AMD's performance traditionally scales better with resolution (i.e. deteriorates more slowly), so AMD's best chance of catching up to NVIDIA is at 4K. However 4K also stresses the R9 Fury X's 4GB of VRAM all the more, putting the card in VRAM-limited situations all the sooner. It's not quite a catch-22, but it's not a situation AMD wants to be in either.

Ultimately, even at 4K, AMD is okay for the moment, but only just. If VRAM requirements increase any further than they already have – if games start requiring 6-8GB at the very high end – then the R9 Fury X (and every other 4GB card, for that matter) is going to be in trouble. And in the meantime anything more demanding than 4K, be it multi-monitor setups or 5K displays, is only going to exacerbate the problem.

AMD believes their situation will improve with Windows 10 and DirectX 12, but until DX12 games actually arrive in large numbers, all we can do is judge by the games we have today. And right now we're seeing signs that the 4GB era is coming to an end. 4GB is enough right now, but I suspect 4GB cards have less than 2 years to go before they're undersized, which is a difficult position for a $650 video card.
