The State of Mantle, The Drivers, & The Test

Before diving into our long-awaited benchmark results, I wanted to quickly touch upon the state of Mantle now that AMD has given us a bit more insight into what’s going on.

With the Vulkan project having inherited and extended Mantle, Mantle’s external development is at an end for AMD. AMD has already told us in the past that they are essentially taking it back in-house and will be using it as a platform for testing future API developments. Externally, then, AMD has now thrown all of their weight behind Vulkan and DirectX 12, telling developers that future games should use those APIs and not Mantle.

In the meantime there is the question of what happens to existing Mantle games. So far about half a dozen games support the API, and for these games Mantle is the only low-level API available. Should Mantle disappear, these games would no longer be able to render at such a low level.

The situation, then, is that in discussing the R9 Fury X’s performance with Mantle, AMD has confirmed that while they are not outright dropping Mantle support, they have ceased all further Mantle optimization. Of particular note, the Mantle driver has not been optimized at all for GCN 1.2, which covers not just the R9 Fury X, but the R9 285, R9 380, and the Carrizo APU as well. Mantle titles will probably still work on these products – though for the record we could not get Civilization: Beyond Earth to play nicely with the R9 285 via Mantle – but performance is another matter. Mantle is essentially deprecated at this point, and while AMD isn’t going out of their way to break backwards compatibility, they aren’t going to put resources into maintaining it either. The experiment that is Mantle has come to an end.

This will in turn impact our testing somewhat. For our 2015 benchmark suite we began using low-level APIs whenever they were available (which in the current game suite means Battlefield 4, Dragon Age: Inquisition, and Civilization: Beyond Earth), not counting on AMD to cease optimizing Mantle quite so soon. As a result we’re in the uncomfortable position of having to walk back that policy somewhat so that we don’t base our recommendations on settings that no longer make sense.

Starting with this review we’re going to use low-level APIs when available, and when using them makes performance sense. That means we’re not going to use Mantle in the cases where performance has clearly regressed due to a lack of optimizations, but will use it for games where it still works as expected (which essentially comes down to Civ: BE). Ultimately everything will move to Vulkan and DirectX 12, but in the meantime we will need to be more selective about where we use Mantle.
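To make that policy concrete, below is a minimal sketch of how a benchmark harness might encode per-title API selection. This is purely illustrative: the game list mirrors our current suite, but the data structure, function name, and "regressed" flags are hypothetical constructions for the sake of the example, not part of any actual test scripts.

```python
# Illustrative sketch only: pick the rendering API per title, falling back to
# Direct3D 11 when the low-level (Mantle) path has regressed on newer GPUs.
# The "regressed" flags below are hypothetical placeholders reflecting the
# policy described above, not measured results.
MANTLE_TITLES = {
    "Battlefield 4":              {"regressed": True},
    "Dragon Age: Inquisition":    {"regressed": True},
    "Civilization: Beyond Earth": {"regressed": False},
}

def pick_api(title: str) -> str:
    """Use Mantle only where it still makes performance sense."""
    entry = MANTLE_TITLES.get(title)
    if entry is not None and not entry["regressed"]:
        return "Mantle"
    return "Direct3D 11"

if __name__ == "__main__":
    for game in MANTLE_TITLES:
        print(f"{game}: {pick_api(game)}")
```

Under this policy only Civilization: Beyond Earth keeps its Mantle path, while the other titles fall back to Direct3D 11.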

The Drivers

For the launch of the 300/Fury series, AMD has taken an unexpected direction with their drivers. The launch driver for these parts is Catalyst 15.15, AMD’s next major driver branch, which includes everything from Fiji support to WDDM 2.0 support. However, in launching these parts AMD has bifurcated their drivers: the new cards get Catalyst 15.15, while the old cards get Catalyst 15.6 (driver version 14.502).

Eventually AMD will bring the two driver branches back together in a later release, after they have done more extensive QA against their older cards. In the meantime it’s possible to use a modified version of Catalyst 15.15 to enable support for some of these older cards, but unsigned drivers and Windows do not get along well, and it introduces other potential issues. Considering that these new drivers do include performance improvements for existing cards, we are not especially happy with the current situation. Existing Radeon owners are essentially having performance held back from them, if only temporarily. Small tomes could be written on AMD’s driver situation – they clearly don’t have the resources to do everything they’d like to at once – but this is perhaps the most difficult situation they’ve put Radeon owners in yet.

The Test

Finally, let’s talk testing. For our benchmarking we have used AMD’s Catalyst 15.15 beta drivers for the R9 Fury X, and their Catalyst 15.5 beta drivers for all other AMD cards. Meanwhile for NVIDIA cards we are on release 352.90.

From a build standpoint we’d like to remind everyone that installing a GPU radiator in our closed-case test bed does require reconfiguring the test bed slightly; a 120mm rear exhaust fan must be removed to make room for the radiator.

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 Fury X
AMD Radeon R9 295X2
AMD Radeon R9 290X
AMD Radeon R9 285
AMD Radeon HD 7970
NVIDIA GeForce GTX Titan X
NVIDIA GeForce GTX 980 Ti
NVIDIA GeForce GTX 980
NVIDIA GeForce GTX 780 Ti
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 580
Video Drivers: NVIDIA Release 352.90 Beta
AMD Catalyst 15.5 Beta (All Other AMD Cards)
AMD Catalyst 15.15 Beta (R9 Fury X)
OS: Windows 8.1 Pro
Comments

  • looncraz - Friday, July 3, 2015 - link

    75MHz on a factory low-volted GPU is actually to be expected. If the voltage scaled automatically, like nVidia's, there is no telling where it would go. Hopefully someone cracks the voltage lock and gets to cranking up the hertz.
  • chizow - Friday, July 3, 2015 - link

    North of 400W is probably where we'll go, but I look forward to AMD exposing these voltage controls. It makes you wonder why they didn't release them from the outset, given they claimed the card was an "Overclocker's Dream" despite the fact it is anything but.
  • Refuge - Friday, July 3, 2015 - link

    It isn't unlocked yet, so nobody has overclocked it yet.
  • chizow - Monday, July 6, 2015 - link

    But but...AMD claimed it was an Overclocker's Dream??? Just another good example of the gap between what AMD says and reality.
  • Thatguy97 - Thursday, July 2, 2015 - link

    would you say amd is now the "geforce fx 5800"
  • sabrewings - Thursday, July 2, 2015 - link

    That wasn't so much due to ATI's excellence. It had a lot to do with NVIDIA dropping the ball horribly, off a cliff, into a black hole.

    They learned their lessons and turned it around. I don't think either company "lost" necessarily, but I will say NVIDIA won. They do more with less: more performance with less power, fewer transistors, fewer SPs, and less bandwidth. Both cards perform admirably, but we all know the Fury X would've been more expensive had the 980 Ti not launched where it did. So, to perform arguably on par, AMD is living with smaller margins on probably smaller volume, while NVIDIA has plenty of volume with the 980 Ti and their base cost is lower since they're essentially using Titan X throw-away chips.
  • looncraz - Thursday, July 2, 2015 - link

    They still had to pay for those "Titan X throw-away chips," and they cost more per chip to produce than AMD's Fiji GPU. Also, nVidia apparently cut the GPU down less than they had planned, in response to AMD's expected performance. Consumers win, of course, but it isn't like nVidia did something magical; they simply bit the bullet and undercut their own offerings by barely cutting down the Titan X to make the 980 Ti.

    That said, it is very telling that the AMD GCN architecture is less balanced for modern games than the nVidia architecture; however, GCN has far more features that are going unused. That is one long-standing habit ATi and, now, AMD engineers have had: planning for the future in their current chips. It's actually a bad habit, as it leaves silicon and transistors just sitting around sucking up power and wasting space, usually for years, before the features finally become useful... and then, by that time, the performance level delivered by those dormant bits is intentionally outdone by the competition to make AMD look inferior.

    AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology was used against it, simply because it was released so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them.

    AMD's one possible upcoming saving grace is that they might be on Samsung's 14nm LPP FinFET tech at GloFo while nVidia will be on TSMC's 16nm FinFET tech. If AMD plays it right they can keep this advantage for a couple of generations and maximize the benefits that could bring.
  • vladx - Thursday, July 2, 2015 - link

    Afaik, even though TSMC's FinFET will be 16nm, it's a superior process overall compared to GloFo's 14nm FF, so I doubt AMD will gain any advantage.
  • testbug00 - Sunday, July 5, 2015 - link

    TSMC's FinFET 16nm process might be better than GloFo's own canceled 14XM or whatever they called it.

    Better than Samsung's 14nm? Dubious. Very unlikely.
  • chizow - Sunday, July 5, 2015 - link

    Why is it dubious? What's the biggest chip Samsung has fabbed? If they start producing chips bigger than the 100mm^2 chips for Apple, then we can talk, but for all the flak TSMC gets over delays/problems, they still produce what are arguably the world's most advanced semiconductors, right there next to Intel's biggest chips in size and complexity.
