HBM: The 4GB Question

Having taken a look at HBM from a technical perspective, there's one final aspect of Fiji's HBM implementation to address: capacity.

For HBM1, the maximum capacity of an HBM stack is 1GB, achieved by stacking 4 256MB (2Gb) memory dies. With a 1GB/stack limit, this means that AMD can only equip the R9 Fury X and its siblings with 4GB of VRAM when using 4 stacks. Larger stacks are not possible, and while an 8 stack HBM1 design would in principle be possible, it would double the width of the memory bus and invite a whole slew of issues along with it. Ultimately, for reasons ranging from interposer size to where to place the stacks, the most AMD can get out of HBM1 is 4GB of VRAM.
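
As a quick sanity check of that math, here is the capacity arithmetic in a few lines of Python (the 1024-bit interface per stack is HBM1's standard width, which is also why an 8 stack design would double the bus):

```python
# Quick sanity check of the HBM1 capacity math described above.
# 2Gb (256MB) dies, 4 dies per stack, 4 stacks on Fiji; the 1024-bit
# per-stack interface is HBM1's standard width.

DIE_CAPACITY_GBIT = 2      # one HBM1 memory die: 2Gb = 256MB
DIES_PER_STACK    = 4
STACKS            = 4
BITS_PER_STACK    = 1024   # HBM1 interface width per stack

stack_gb  = DIE_CAPACITY_GBIT * DIES_PER_STACK / 8   # 1.0 GB per stack
total_gb  = stack_gb * STACKS                        # 4.0 GB of VRAM
bus_width = BITS_PER_STACK * STACKS                  # 4096-bit bus

print(f"{stack_gb:.0f}GB/stack, {total_gb:.0f}GB total, {bus_width}-bit bus")
# An 8-stack design would reach 8GB, but only by doubling the bus to 8192 bits.
```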

To address the elephant in the room then: is 4GB going to be enough VRAM? 4GB is as much VRAM as the R9 290X shipped with in 2013, and as much as the GTX 980 shipped with in 2014. But it's less than the 6GB on the 2015 GTX 980 Ti (never mind the 12GB GTX Titan X), and less than the 8GB on the just-launched R9 390X. Even ignoring NVIDIA for a moment, the R9 Fury X offers less VRAM than AMD's next-lower tier of video cards.

This is quite a role reversal for the video card industry, as traditionally AMD has offered more VRAM than the competition. Thanks in large part to their preference for wider memory buses (which require more memory chips), AMD has offered greater memory capacities than the traditionally stingy NVIDIA at similar prices. Now, however, the shoe is on the other foot, and the timing is not great.

Console Memory Capacity
                    Capacity
  Xbox 360          512MB (Shared)
  Playstation 3     256MB + 256MB
  Xbox One          8GB (Shared)
  Playstation 4     8GB (Shared)
  Fiji              4GB (Dedicated VRAM)

Perhaps the single biggest influence on VRAM requirements right now is the current generation of consoles, which launched back in 2013 with 8GB of RAM each. To be fair to AMD, and to be technically correct, these are shared memory devices, so that 8GB gets split between GPU resources and CPU resources, and even that comes after Microsoft and Sony set aside a significant amount of memory for their OSes and background tasks. Still, when using the current-gen consoles as a baseline, the current situation makes it possible to build a game that requires over 4GB of VRAM (if only just over), and if that game is naïvely ported over to the PC, there could be issues.
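
To put rough numbers on that reasoning (a hedged sketch: the ~3GB OS reservation approximates what Microsoft and Sony set aside, and the 85/15 graphics-heavy split is entirely hypothetical):

```python
# Rough numbers behind the console baseline argument. The ~3GB OS
# reservation approximates what Microsoft and Sony set aside; the
# 85/15 graphics/CPU split is a hypothetical, graphics-heavy game.

total_ram_gb   = 8.0                              # Xbox One / PS4 shared RAM
os_reserved_gb = 3.0                              # approximate OS reservation
game_ram_gb    = total_ram_gb - os_reserved_gb    # ~5GB left for the game
gpu_share      = 0.85                             # hypothetical split

gpu_resources_gb = game_ram_gb * gpu_share        # ~4.25GB of GPU resources
print(f"~{gpu_resources_gb:.2f}GB of graphics resources")
# Just over 4GB: enough to overflow a naive port onto a 4GB card.
```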

Throwing an extra wrench into things is that PCs have more going on than just console ports. PC gamers buying high-end cards like the R9 Fury X will be running at resolutions greater than 1080p, and likely with higher quality settings than the console equivalents, driving up VRAM requirements. The Windows Desktop Window Manager, responsible for rendering and compositing the different windows together in 3D space, consumes its own VRAM as well. So the PC environment pushes VRAM requirements higher still.

The reality of the situation is that AMD knows where they stand. 4GB is the most they can equip Fiji with, so it's what they will have to make do with until HBM2 arrives with greater densities. In the meantime, the marketing side of AMD needs to convince potential buyers that 4GB is enough, while the technical side needs to come up with solutions to help mitigate the problem.

On the latter point, while AMD can't do anything about the amount of VRAM they have, they can and are working on doing a better job of using it. AMD has been rather straightforward in admitting that until now they've never seriously dedicated resources to VRAM management on their cards; before Fiji there was always enough VRAM that it was never an issue.

Which is why for Fiji, AMD tells us they have dedicated two engineers to the task of VRAM optimization. To be clear, there's little AMD can do to reduce VRAM consumption outright, but what they can do is better manage which resources are placed in VRAM and which are paged out to system RAM. Even this optimization can't completely resolve the 4GB issue, but it can help up to a point. So long as a game isn't actively trying to use more than 4GB of resources at once, intelligent paging can help ensure that only the resources actively in use reside in VRAM, and are therefore immediately available to the GPU when requested.
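
AMD hasn't detailed how its paging heuristics actually work, so the sketch below is purely illustrative: a toy LRU-based residency manager in Python that keeps recently used resources in VRAM and pages the least-recently-used ones out to system RAM once a 4GB budget is exceeded. All resource names and sizes are made up.

```python
from collections import OrderedDict

class ResidencyManager:
    """Toy LRU-based VRAM residency manager: keeps recently used
    resources in VRAM and pages the least-recently-used ones out to
    system RAM when the budget is exceeded. Illustrative only."""

    def __init__(self, vram_budget_mb):
        self.vram_budget = vram_budget_mb
        self.in_vram = OrderedDict()   # name -> size (MB), ordered by last use
        self.in_sysram = {}            # resources paged out to system RAM
        self.used = 0

    def touch(self, name, size_mb):
        """Called when the GPU needs a resource; makes it VRAM-resident."""
        if name in self.in_vram:                  # already resident: mark as MRU
            self.in_vram.move_to_end(name)
            return
        self.in_sysram.pop(name, None)            # page in (or first allocation)
        while self.used + size_mb > self.vram_budget and self.in_vram:
            lru_name, lru_size = self.in_vram.popitem(last=False)
            self.in_sysram[lru_name] = lru_size   # evict the LRU resource
            self.used -= lru_size
        self.in_vram[name] = size_mb
        self.used += size_mb

mgr = ResidencyManager(vram_budget_mb=4096)
mgr.touch("terrain_textures", 1500)
mgr.touch("character_textures", 1500)
mgr.touch("shadow_maps", 800)
mgr.touch("hd_texture_pack", 1200)   # forces terrain_textures out to system RAM
print(sorted(mgr.in_sysram))          # ['terrain_textures']
```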

As for the overall utility of this kind of optimization, it's going to depend on a number of factors, including the OS, the game's own resource management, and ultimately the game's real working set. The situation AMD faces is one where they must simultaneously fight an OS/driver paradigm that wastes memory and games that traditionally treat VRAM like it's going out of style. The limitations of DirectX 11/WDDM 1.x prevent developers from fully reusing certain types of assets, and meanwhile it's extremely common for games to claim much (if not all) of the available VRAM for their own use, both to ensure they have enough VRAM for future needs and to cache as many resources as possible for better performance.

The good news here is that the current situation leaves overhead that AMD can optimize around. AMD has been creating both generic and game-specific memory optimizations to better manage which resources are held in local VRAM and which are paged out to system memory. By controlling duplicate resources and clamping down on overzealous caching by games, it's possible to get more mileage out of the 4GB AMD has to work with.

Longer term, AMD is looking to the launch of Windows 10 and DirectX 12 to change the situation for the better. The low-level API will allow careful developers to avoid duplicate assets in the first place, and WDDM 2.0 is said to be a bit smarter about how it handles VRAM consumption overall. Nonetheless, the first DirectX 12 games won't launch for a few more months, and it will be longer still until they're in the majority. As a result, AMD needs to do well with today's Windows 8.1 and DirectX 11 games, as those games aren't going anywhere right away and they will be the ones stressing Fiji the most.

So with that in mind, let’s attempt to answer the question at hand: is 4GB enough VRAM for R9 Fury X? Is it enough for a $650 card?

The short answer is yes, at the moment it’s enough, if just barely.

To be clear, we can without fail "break" the R9 Fury X, placing it in situations where performance nosedives because it has run out of VRAM. However, in the tests we've put together, those cases are essentially edge cases: every scenario we've come up with that breaks the R9 Fury X also produces average framerates too low to be playable in the first place. So it is very difficult (though I don't believe impossible) to construct a scenario where the R9 Fury X would deliver playable framerates if only it had more VRAM.

Case in point: in our current gaming test suite, Shadow of Mordor and Grand Theft Auto V are the two most VRAM-hungry games. Attempting to break the R9 Fury X with Shadow of Mordor is ineffective at best; even with the HD texture pack installed (which is not the default for our test suite), the game's built-in benchmark hardly registers a difference. Both the average and minimum framerates are virtually unchanged from our results without the HD texture pack. Playing the game is much the same, though it's entirely possible there are scenarios not covered by the benchmark or our playtesting where more than 4GB of VRAM is truly required.

Breaking Fiji: VRAM Usage Testing
                                    R9 Fury X    GTX 980 Ti
  Shadow of Mordor Ultra, Avg       47.7 fps     49 fps
  Shadow of Mordor Ultra, Min       31.6 fps     38 fps
  GTA V, "Breaker", Avg             21.7 fps     26.2 fps
  GTA V, "Breaker", 99th Perc.      6 fps        17.8 fps

Meanwhile with GTA V we can break the R9 Fury X, but only at unplayable settings. The card already teeters on the brink at our standard 4K "Very High" settings, which include 4x MSAA but no "advanced" draw distance enhancements, with minimum framerates well below the GTX 980 Ti's. Turning up the draw distance further halves those minimums, driving the minimum framerate down to 6fps as the R9 Fury X is forced to swap between VRAM and system RAM over the comparatively slow PCIe bus.
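
A back-of-the-envelope calculation shows why that swapping is so punishing. Fiji's HBM offers 512GB/s of local bandwidth versus roughly 16GB/s for a PCIe 3.0 x16 link; the sketch below assumes a hypothetical 512MB working-set overflow to put numbers on it:

```python
# Why overflowing into system RAM is so punishing: the spill traffic
# moves at PCIe speed rather than HBM speed. 512GB/s is Fiji's HBM
# bandwidth; ~16GB/s is the practical ceiling of a PCIe 3.0 x16 link.
# The 512MB overflow is a hypothetical working-set overshoot.

hbm_bw_gbs  = 512.0
pcie_bw_gbs = 16.0
overflow_gb = 0.5    # hypothetical: working set exceeds VRAM by 512MB

ms_local = overflow_gb / hbm_bw_gbs * 1000    # ~1ms if it all fit in VRAM
ms_pcie  = overflow_gb / pcie_bw_gbs * 1000   # ~31ms fetched over the bus

print(f"{ms_local:.1f}ms vs {ms_pcie:.1f}ms of transfer time per frame")
# ~31ms of transfers alone caps the card near 30fps before any rendering,
# which is how minimum framerates collapse into the single digits.
```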

But in both of these cases the average framerate is below 30fps (never mind 60fps), and not just for the R9 Fury X, but for the GTX 980 Ti as well. No scenario we’ve tried that breaks the R9 Fury X leaves it or the GTX 980 Ti running a game at 30fps or better, typically because in order to break the R9 Fury X we have to run with MSAA, which is itself a performance killer.

Unfortunately for AMD, they are pushing the R9 Fury X as a 4K gaming card, and for good reason: AMD's performance traditionally scales better with resolution (i.e. deteriorates more slowly), so AMD's best chance of catching up to NVIDIA is at 4K. However, 4K also stresses the R9 Fury X's 4GB of VRAM all the more, putting the card into VRAM-limited situations all the sooner. It's not quite a catch-22, but it's not a situation AMD wants to be in either.

Ultimately, even at 4K AMD is okay for the moment, but only just. If VRAM requirements increase any further – if games start requiring 6-8GB at the very high end – then the R9 Fury X (and every other 4GB card, for that matter) is going to be in trouble. In the meantime, anything more demanding than 4K, be it multi-monitor setups or 5K displays, will only exacerbate the problem.

AMD believes their situation will get better with Windows 10 and DirectX 12, but until DX12 games actually come out in large numbers, all we can do is look at the kind of games we have today. And right now what we’re seeing are signs that the 4GB era is soon to come to an end. 4GB is enough right now, but I suspect 4GB cards now have less than 2 years to go until they’re undersized, which is a difficult situation to be in for a $650 video card.

Comments

  • Samus - Saturday, July 4, 2015 - link

    Being an NVidia user for 3 generations, I'm finding it hard to ignore this card's value, especially since I've invested $100 each on my last two NVidia cards (including my SLI setup) adding liquid cooling. The brackets alone are $30.

    Even if this card is less efficient per watt than NVidia's, the difference is negligible when considering kw/$. It's like comparing different brands of LED bulbs: some use 10-20% less energy, but the overall value isn't as good because the more efficient ones cost more, don't dim, have a slight buzz, etc.

    After reading this review I find the Fury X more impressive than I otherwise would have.
  • Alexvrb - Sunday, July 5, 2015 - link

    Yeah a lot of reviews painted doom and gloom but the watercooler has to be factored into that price. Noise and system heat removal of the closed loop cooler are really nice. I still think they should launch the vanilla Fury at $499 - if it gets close to the performance of the Fury X they'll have a decent card on their hands. To me though the one I'll be keeping an eye out for is Nano. If they can get something like 80% of the performance at roughly half the power, that would make a lot of sense for more moderately spec'd systems. Regardless of what flavor, I'll be interested to see if third parties will soon launch tools to bump the voltage up and tinker with HBM clocks.
  • chizow - Monday, July 6, 2015 - link

    Water cooling, if anything, has proven to be a negative so far for the Fury X, with all the concerns of pump whine. And in the end, where is the actual benefit of water cooling when it still ends up slower than the 980 Ti, with virtually no overclocking headroom?

    Based on Ryan's review, with Fury Air we'll most likely see the downsides of leakage on TDP, and it's also expected to be 7/8th SP/TMU. Fury Nano also appears to be poised as a niche part that will cost as much if not more than the Fury X, which is amazing because at 80-85% of the Fury X it won't be any faster than the GTX 980 at 1440p and below, and right in that same TDP range too. It will have the benefit of form factor, but will that be enough to justify a massive premium?
  • Alexvrb - Monday, July 6, 2015 - link

    You can get a bad batch of pumps in any CLC. Cooler Master screwed up (and not for the first time!) but the fixed units seem to be fine and for the units out there with a whine just RMA them. I'm certainly not going to buy one, but I know people that love water cooled components and like the simplicity and warranty of a CL system.

    Nobody knows the price of the Nano, nor final performance. I think they'd be crazy to price it over $550 even factoring in the form factor - unless someone releases a low-profile model, then they can charge whatever they want for it. We also don't know final performance of Fury compared to Fury X, though I already said they should price it more aggressively. I don't think leakage will be that big of an issue as they'll probably cap thermals. Clocks will vary depending on load but they do on Maxwell too - it's the new norm for stock aircooled graphics cards.

    As for overclocking, yeah that was really terrible. Until people are able to tinker with voltage controls and the memory, there's little point. Even then, set some good fan profiles.
  • Refuge - Thursday, July 23, 2015 - link

    To be honest, the whine I've heard on these isn't anything more than any other CLC I've ever seen in the wild.

    I feel like this was blown a bit out of proportion. Maybe I'm going deaf, maybe I didn't see a real example. I'm not sure.
  • tritiumosu3 - Thursday, July 2, 2015 - link

    "AMD Is nothing if not the perineal underdog"
    ...
    perineal =/= perennial! You should probably fix that...
  • Ryan Smith - Thursday, July 2, 2015 - link

    Thanks. Fixed. It was right, and then the spell-checker undid things on me...
  • ddriver - Thursday, July 2, 2015 - link

    I'd say after the Hecktor RuiNz fiasco, "perpetual underdog" might be more appropriate.
  • testbug00 - Sunday, July 5, 2015 - link

    Er, what fiasco did Hector Ruiz create for AMD?
  • Samus - Monday, July 6, 2015 - link

    I'm wondering the same thing. When Hector Ruiz left Motorola, they fell apart, and when he joined AMD, they out-engineered and out-manufactured Intel with quality control parity. I guess the fiasco would be when Hector Ruiz left AMD, because then they fell apart.
