The 4GB Question, Mantle’s Teething Issues, & the Test

Though not strictly a feature of R9 285 or Tonga, before diving into our benchmark breakdown we would like to spend a moment pondering VRAM capacity and how it impacts the R9 285.

When it comes to VRAM capacity, the R9 285 is in a particularly odd position for a few different reasons. First and foremost, the R9 285 is a 2GB standard card that is replacing the 3GB standard R9 280. Though the R9 285 is in most other ways a lateral move from the R9 280 (including price), this is the one area where it is a clear downgrade, losing 33% of its predecessor’s VRAM capacity.

Second of all, midrange and high-end cards in general are in a bit of an odd spot due to the combination of a ready supply of 4Gb GDDR5 chips and the current-generation consoles. 4Gb chips allow a standard 256-bit memory bus card to accommodate 4GB of VRAM, and in the PlayStation 4’s case these chips are run in 16-bit (clamshell) mode to give the console a full 8GB of memory. So not only is a 2GB card somewhat behind the times as far as cutting-edge memory configurations go, but it also has just ¼ of the memory capacity of the current-generation consoles, which is a potential problem for playing console ports on the PC (at least without sacrificing asset quality).
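
To put rough numbers to that capacity math, here is a quick back-of-the-envelope sketch (illustrative only; the function name and configurations are ours, reflecting standard GDDR5 chip widths rather than any specific board):

```python
# Back-of-the-envelope GDDR5 capacity math (illustrative sketch).
def vram_capacity_gb(bus_width_bits, chip_density_gbit, chip_io_width_bits=32):
    """Capacity when the bus is fully populated with identical GDDR5 chips."""
    chips = bus_width_bits // chip_io_width_bits
    return chips * chip_density_gbit / 8  # Gbit -> GB

# Standard 256-bit card, 4Gb chips in their normal 32-bit mode: 8 chips -> 4GB
print(vram_capacity_gb(256, 4))                          # 4.0
# PlayStation 4 style: same 256-bit bus, chips in 16-bit clamshell mode,
# so twice as many chips fit on the bus: 16 chips -> 8GB
print(vram_capacity_gb(256, 4, chip_io_width_bits=16))   # 8.0
```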

Finally, midrange cards have been stuck at 2GB for some time now. In AMD’s ecosystem this has been the case informally since the 2GB Radeon HD 6950 fell to $250 in the middle of 2011, and formally since the Radeon HD 7850 launched with 2GB back in 2012. So depending on your starting point, 2GB of VRAM has been the midrange standard for 2-3 years, which is about as long as we’d expect to go before outgrowing any given memory capacity.

The question in our minds then is this: is 2GB enough VRAM for a $250 video card? All things considered we’ll always take more VRAM; there’s no performance penalty for having it. However, there’s also no benefit unless you can put it to good use, and at least in our current benchmarks that’s generally not the case.

While we don’t have a 4GB card to use as a control at this time, the only Direct3D benchmarks in our suite that show any signs of being impacted by the 2GB of VRAM are Battlefield 4 and Thief. Even then, these signs only appear at 2560x1440 with MSAA and SSAA respectively, both of which chew up memory to store the necessary anti-aliasing buffers. Otherwise, if we drop down to 1920x1080, even with the aforementioned MSAA/SSAA the 2GB R9 285 seems perfectly content.
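
To illustrate why anti-aliasing at 2560x1440 is where the 2GB limit first shows up, the sketch below runs a deliberately simplified estimate of render target memory (illustrative only; real engines use many more intermediate buffers, different formats, and compression, so treat these figures as a floor rather than what Battlefield 4 or Thief actually allocate):

```python
# Rough estimate of MSAA render target memory (simplified, lower-bound figures).
def target_mb(width, height, samples, bytes_per_sample=4):
    return width * height * samples * bytes_per_sample / (1024 ** 2)

for width, height in [(1920, 1080), (2560, 1440)]:
    color = target_mb(width, height, 4)   # 4x MSAA color buffer
    depth = target_mb(width, height, 4)   # matching depth/stencil buffer
    print(f"{width}x{height} 4xMSAA color+depth: ~{color + depth:.0f} MB")
# -> 1920x1080: ~63 MB, 2560x1440: ~112 MB. A deferred renderer repeats this
#    across several G-buffer targets, which is where a 2GB card starts to hurt.
```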

The one global exception to this is Mantle, which throws a wrench into matters since it gives developers direct control over memory access. The Mantle renderers in both Thief and BF4 are far more at home with 3GB+ of VRAM, and even ignoring the present issues with Mantle on Tonga (more on this later), 2GB just isn’t cutting it when Mantle is involved, something we’ve already seen on other 2GB cards such as the R9 270 series.

The short answer to our question then is that whether 2GB is enough VRAM is going to depend on the resolution and API used. For AMD’s stated goal of being a 2560x1440 gaming card the R9 285 is already at risk of coming up short, and this is only going to get worse as more graphically advanced games come down the pipeline, especially console ports that aren’t being held back by last-generation consoles. On the other hand 1920x1080 is solid for the moment, and it may continue to be that way for some time.

Ultimately due to overall performance the R9 285 is not our first choice for a 2560x1440 gaming card – we’d suggest a minimum of the R9 280X – but the lack of VRAM isn’t doing it any favors here. Otherwise 1920x1080 should fare better, but whether that holds true for what’s increasingly becoming a 3+ year upgrade cycle for video cards remains to be seen. With 2GB cards having been the $250 standard for so long, a 4GB card is looking like the safer bet right now, which is all the more reason we’re interested in seeing just what the premium for the 4GB R9 285 will be. Very rarely do we suggest the higher capacity version of a video card, but the R9 285 may prove to be the exception.

Mantle: Teething Problems

Shifting gears, for the launch of the R9 285 AMD is advising reviewers and users alike that Mantle performance on Thief and Battlefield 4 is not going to be up to snuff right now. The reason for this is simple, but the potential ramifications are a bit more complex.

Because Tonga is based on a new GPU – and a newer version of GCN no less – the developers of Thief and Battlefield 4 have not had the opportunity to optimize their games for Tonga products. If you have ever used some of the lower-end GCN products (e.g. Cape Verde) then you’ve seen first-hand that these games are already hit & miss depending on the GPU in use, and Tonga extends that limitation. Meanwhile, though AMD’s admission doesn’t extend to drivers, we would expect that the company has some work of its own to do to better account for the minor architectural differences, even if Mantle is a thin driver API.

The complexity then stems from the fact that this is basically the first litmus test for how well Mantle (and potentially other low level APIs) will handle new hardware in the future, and at this time AMD is close to failing that test. On the one hand Mantle is up and running; both Thief and Battlefield 4’s Mantle rendering paths work on the R9 285 despite neither game having seen the GPU before, and as far as we can tell there are no immediate rendering errors. However, the fact that Mantle performance has significantly regressed – to the point where it now trails Direct3D performance – is not what we’d like to see.

Radeon R9 285 Mantle Performance

In explaining the situation, AMD tells us that this is an application-level issue stemming from these games not being familiar with Tonga, and that it can be fixed through further patches. And ultimately, if nothing else, these Tonga teething issues should be limited to these two games, since they’re the only Mantle games to have been released before Tonga.

The bind this puts AMD in, and why this is a bad omen for Mantle, is that if low level APIs are to take off then these kinds of forward compatibility issues cannot occur. Though even high level APIs aren’t perfect – we’ve seen OS and driver updates break very old D3D and OpenGL games over time – high level APIs are forward compatible enough that virtually all games will work on newer hardware. And in the cases where they don’t, due to the abstraction-heavy nature of these APIs the problem and the solution usually lie at the driver level. Mantle’s current state, on the other hand, puts the resolution in the hands of game developers, who unlike hardware vendors cannot necessarily be counted on to update their games to account for new hardware, especially given the front-loaded nature of video game sales.
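
To make that distinction concrete, here is a conceptual sketch (purely illustrative Python, not actual Mantle or Direct3D code; every name here is made up) of why the burden lands on the game developer under a low level API:

```python
# Conceptual sketch: under a high level API the driver hides new hardware from
# the game, while under a low level API the game itself must recognize the GPU.
KNOWN_GPUS = {"Tahiti": "tuned_path_tahiti", "Hawaii": "tuned_path_hawaii"}

def pick_render_path(gpu_name, api):
    if api == "high_level":
        # Driver abstracts the hardware; the game just works, at whatever
        # performance the vendor's driver optimizations deliver.
        return "driver_managed_path"
    # Low level API: unknown GPUs fall back to whatever the game shipped with.
    return KNOWN_GPUS.get(gpu_name, "untuned_generic_path")

print(pick_render_path("Tonga", "high_level"))  # driver_managed_path
print(pick_render_path("Tonga", "low_level"))   # untuned_generic_path
```

Until the developer patches the game, that generic path is as good as it gets, which is essentially the situation Thief and Battlefield 4 are in today.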

For the moment Mantle is still in beta and very clearly so, with Thief and Battlefield 4 serving as proofs of concept for the API. For that reason AMD still has time to contemplate the issue and ensure Mantle is more readily forward-compatible. But it’s going to be very hard to justify using Mantle if we see these kinds of regressions on non-beta drivers with games that were built against the non-beta SDK. AMD needs to ensure the shipping version of Mantle doesn’t suffer from these teething issues.

On a tangential note, this does raise the question of how well Direct3D 12 may handle the issue. By its vendor-limited nature Mantle has the opportunity to work even lower than a cross-vendor low level API like Direct3D 12, but D3D12 is still going to be low level and exposed to some of these hazards. For that reason it will be interesting to keep an eye on Direct3D development over the next year to see how Microsoft and its partners handle the issue. We would expect to see Microsoft have a better handle on forward-compatibility – in their position they pretty much have to – but if nothing else we’re curious just what it will take from game developers, API developers, and hardware developers alike to ensure that necessary level of forward-compatibility.

The Test

For the launch of the R9 285 AMD has released beta driver version 14.300.1005, which identifies itself as Catalyst 14.7 (though we suspect this will not be the final Catalyst version number). As is to be expected for a launch involving a new GPU architecture, this launch driver comes from a new driver branch (14.300) to account for the new hardware. With that said, based on our examination of this driver’s performance it does not appear to be significantly different from Catalyst 14.7 (14.200) for existing Radeon products.

Our R9 285 sample meanwhile is Sapphire’s R9 285 Dual-X OC. As this is a factory overclocked model, for the purposes of our testing we will be testing this card at both its factory clockspeeds (965MHz/5.6GHz) and the R9 285 reference clockspeeds (918MHz/5.5GHz), the latter by underclocking our card appropriately. The bulk of our comparisons will in turn be drawn from the reference clockspeeds, but we do want to note that of the 5 R9 285 cards currently available for sale at Newegg, only a single (non-Sapphire) model ships without some kind of factory overclock. Consequently, while we are looking to establish a reliable performance baseline, retail cards should perform a bit closer to our card’s factory overclocked numbers.
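
For reference, the gap between the factory and reference clocks is small; a quick calculation (illustrative only, and actual in-game gains will generally be smaller than the raw clockspeed difference):

```python
# How far Sapphire's factory overclock sits above the R9 285 reference clocks.
core_ref, core_oc = 918, 965      # core clock, MHz
mem_ref, mem_oc = 5.5, 5.6        # effective memory clock, GHz
print(f"Core clock:   +{(core_oc / core_ref - 1) * 100:.1f}%")   # ~+5.1%
print(f"Memory clock: +{(mem_oc / mem_ref - 1) * 100:.1f}%")     # ~+1.8%
```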

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 290
Sapphire R9 285 Dual-X OC
AMD Radeon R9 280X
AMD Radeon R9 280
AMD Radeon R9 270
AMD Radeon HD 7850
AMD Radeon HD 6870
NVIDIA GeForce GTX 770
NVIDIA GeForce GTX 760
NVIDIA GeForce GTX 660
NVIDIA GeForce GTX 560 Ti
Video Drivers: NVIDIA Release 340.52 WHQL
AMD Catalyst 14.300.1005 Beta
OS: Windows 8.1 Pro

 

Comments

  • CrazyElf - Wednesday, September 10, 2014 - link

    All in all, this doesn't really change the market all that much.

    I still very firmly feel that the R9 290 right now (Q3 2014) remains the best price:performance of the mid to high end cards. That, and the 4GB of VRAM may make it more future proof.

    What’s really interesting at this point is how AMD will respond to Nvidia's Maxwell.
  • MrSpadge - Wednesday, September 10, 2014 - link

    I agree - Tonga is not bad, but on the other hand it does not change anything substantially compared to Tahiti. This would have been a nice result 1 - 1.5 years after the introduction of Tahiti. But that was almost 3 years ago! The last time a GPU company showed no real progress after 3 years, they went out of business shortly afterwards...

    And seeing how AMD brags about beating the GTX 760 almost makes me cry. That's the double cut-down version of a 2.5 year old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at far too high a price, because there was no competitive pressure from AMD.

    If this is all they have their next generation will get stomped by Maxwell.
  • iLovefloss - Wednesday, September 10, 2014 - link

    So all you got from this review is that Tonga is a cut down version of Tahiti? After reading this review, this is the impression you were left with?
  • MrSpadge - Thursday, September 11, 2014 - link

    Nope. But in the end the result performs just the same at almost the same power consumption. Sure, there are some new features... but so far - and I expect for the foreseeable future - they don't matter.
  • Demiurge - Wednesday, September 10, 2014 - link

    This is the first mid-range card to have all the value-add features of the high-end cards. I wish AMD would leverage TrueAudio better, but the other features and the nice TDP drop are welcome.

    The color compression enhancement is a very interesting feature. I think that in itself deserves a little applause because of its significance in the design compared to the 280s. I think this is more significant not as a performance feature, but similar to what Maxwell represented for NV in terms of efficiency. Both are respectable design improvements, in different areas. It's a shame they don't cross-license... seems like such a waste.
  • MrSpadge - Thursday, September 11, 2014 - link

    Well, the TDP drop is real, but mostly saves virtual power. By this I mean that the 280 / 7950 never come close to using 250 W, and hence the savings from Tonga are far less than the TDP difference makes it seem. The average across different articles seems to be ~20 W saved at the wall, which puts it at roughly power-efficiency parity with cards like the GTX 670.

    The color compression could be Tonga's best feature. But I still wonder: if Pitcairn in the 270X comes so close to 285 and 280 performance with a 256-bit memory bus and without color compression... how much does it really matter (for the 285)? To me it seems that Tahiti most often simply didn't need that large a bus, rather than color compression working wonders for Tonga. Besides, the GTX 770 and GTX 680 also hold up fine at that performance level with a 256-bit bus.
  • Demiurge - Thursday, September 11, 2014 - link

    The TDP drop being partly a paper value is something I hadn't thought about. You make a good point about the color compression too. It will be interesting to see how both fare. That may be an interesting topic to follow up on during the driver refresh.

    As an owner of a GTX 260 with a 448-bit bus, I can tell you that with anti-aliasing it matters quite a bit, as that becomes the limiter. The shader count is usually not the limiter with the low-end and mid-range displays that these cards will typically be paired with. My GTX 260 and 1280x1024 monitor kind of illustrate that with 216 shaders/896MB. :-)

    It isn't pretty, but I don't see anything that forces me to upgrade yet. Think I've got two more generations or so to wait on before performance is significant enough, or a groundbreaking feature would do it. I'm actually considering upgrading out of boredom and interest in gimmicky features more than anything else at this point.
  • TiGr1982 - Thursday, September 11, 2014 - link

    The GTX 260 is like 6 years old now. It lacks DX11, has less than 1 GB of (relatively slow) GDDR3 VRAM, and overall should be 3-4 times slower than an R9 285 or R9 290, I guess.

    I really didn't think anybody still uses these old gen cards (e.g. I have HD 7950 Boost Dual-X which is essentially identical to R9 280).
  • P39Airacobra - Friday, January 9, 2015 - link

    Because they would lose money! LOL. And they are both about the same anyway, except AMD goes for brute force to get performance (like using a V8) and Nvidia uses efficiency (like a turbocharged 4cyl or 6cyl).
  • bwat47 - Thursday, September 11, 2014 - link

    "And seing how AMD brags to beat GTX760 almost makes cry. That's the double cut-down version of a 2.5 years old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at a far too high price because there was no competitive pressure from AMD."

    You are being pretty silly here. Both AMD and Nvidia were rebranding a lot of cards these last few gens. You can't go after AMD for rebranding a 2-3 year old chip, and then say it's fine if nvidia does it and blame AMD's 'lack of competitive pressure'. If lack of competitive pressure was the reason for rebranding, then there was a lack of competitive pressure on both sides.

    And I highly doubt the 285 is 'all AMD has'. This was just a small update to their product line, to bring some missing features (FreeSync, TrueAudio, etc.) and reduced power consumption to the 28x series. I'm sure there is a 3xx series coming down the road (or whatever they will call it). Both AMD and nvidia have been squeezing all they can out of older architectures for the past few years, so you can't really put the blame on one or the other without being hypocritical.
