GRID Autosport

For the racing game in our benchmark suite we have Codemasters’ GRID Autosport. Codemasters continues to set the bar for graphical fidelity in racing games, delivering realistic-looking environments layered with additional graphical effects. Based on the company’s in-house EGO engine, GRID Autosport includes a DirectCompute-based advanced lighting system in its highest quality settings, which incurs a significant performance penalty on lower-end cards but does a good job of emulating more realistic lighting within the game world.

GRID Autosport - 3840x2160 - Ultra Quality

GRID Autosport - 2560x1440 - Ultra Quality

Unfortunately, after a streak of wins and ties for AMD, things go badly off the rails with GRID.

At 4K Ultra the R9 Fury X posts its single biggest 4K performance deficit, trailing the GTX 980 Ti by 14%. The good news is that the card still cracks 60fps, so framerates are solid on an absolute basis, though racing purists will still have some frames below 60fps to contend with.

Where things get really bad is at 1440p, in a situation we have never seen before in a high-end AMD video card review. The R9 Fury X gets pummeled here, trailing the GTX 980 Ti by 30% and even falling behind the GTX 980 and GTX 780 Ti. The reason is that the R9 Fury X is CPU bottlenecked: no matter what resolution we pick, the card can’t spit out more than about 82fps at Ultra quality.
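The telltale sign of a CPU bottleneck is exactly what we see here: average framerates barely move as resolution drops. A minimal sketch of that check, using hypothetical numbers that approximate the Fury X results in this test (the threshold and the 1080p figure are assumptions for illustration):

```python
# Identify a CPU bottleneck from multi-resolution results:
# if fps stays roughly flat as resolution (i.e. GPU load) drops,
# the CPU, not the GPU, is the limiting factor.
# Numbers are hypothetical, approximating the Fury X results here.
results = {
    "3840x2160": 61.0,
    "2560x1440": 82.3,
    "1920x1080": 82.6,  # barely faster than 1440p -> CPU-bound
}

def cpu_bound(fps_by_res, threshold=0.05):
    """Flag a CPU bottleneck when the two least demanding
    resolutions land within `threshold` of each other."""
    fps = sorted(fps_by_res.values())
    lower, higher = fps[-2], fps[-1]
    return (higher - lower) / higher < threshold

print(cpu_bound(results))  # True: an ~82fps ceiling regardless of resolution
```

A GPU-bound card would instead show framerates scaling up steadily as the resolution comes down.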

With GPU performance outgrowing CPU performance year after year, this is something that was due to happen sooner or later, and is a big reason that low-level APIs are about to come into the fold. And if it was going to happen anywhere, it would happen with a flagship level video card. Still, with an overclocked Core i7-4960X driving our testbed, this is also one of the most powerful systems available with respect to CPU performance, so AMD’s drivers are burning an incredible amount of CPU time here.

Ultimately GRID serves to cement our concerns about AMD’s performance at 1440p, as it’s very possible that this is the tip of the iceberg. DirectX 11 will go away eventually, but that will take some time, and in the meantime there are a number of 1440p gamers out there, especially with the R9 Fury X otherwise being such a good fit for high frame rate 1440p gaming. Perhaps the biggest issue is that this makes it very hard to justify pairing AMD’s GPUs with 1440p 144Hz monitors: although 82.6fps is fine for a 60Hz monitor, these CPU issues make it hard for AMD to deliver the framerates those high-performance monitors are meant for.
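The arithmetic behind that 144Hz concern is straightforward: an ~82.6fps CPU ceiling works out to roughly 12.1ms per frame, while a 144Hz panel refreshes about every 6.9ms, so the card fills barely more than half the available refreshes. A quick sketch:

```python
# Frame-time arithmetic behind the 144 Hz concern: a CPU-imposed
# framerate ceiling translated into milliseconds per frame, compared
# against the refresh interval of a 144 Hz panel.
fps_ceiling = 82.6                                # observed CPU-bound ceiling
frame_time_ms = 1000.0 / fps_ceiling              # ~12.1 ms per frame
refresh_144_ms = 1000.0 / 144                     # ~6.94 ms per refresh

print(round(frame_time_ms, 1))                    # 12.1
print(round(frame_time_ms / refresh_144_ms, 2))   # 1.74 refreshes per frame
```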


  • mikato - Tuesday, July 7, 2015 - link

    Wow very interesting, thanks bugsy. I hope those guys at the various forums can work out the details and maybe a reputable tech reviewer will take a look.
  • OrphanageExplosion - Saturday, July 4, 2015 - link

    I'm still a bit perplexed about how AMD gets an absolute roasting for CrossFire frame-pacing - which only impacted a tiny amount of users - while the sub-optimal DirectX 11 driver (which will affect everyone to varying extents in CPU-bound scenarios) doesn't get anything like the same level of attention.

    I mean, AMD commands a niche when it comes to the value end of the market, but if you're combining a budget CPU with one of their value GPUs, chances are that in many games you're not going to see the same kind of performance you see from benchmarks carried out on mammoth i7 systems.

    And here, we've reached a situation where not even the i7 benchmarking scenario can hide the impact of the driver on a $650 part, hence the poor 1440p performance (which is even worse at 1080p). Why invest all that R&D, time, effort and money into this mammoth piece of hardware and not improve the driver so we can actually see what it's capable of? Is AMD just sitting it out until DX12?
  • harrydr - Saturday, July 4, 2015 - link

    With the black screen problem on R9 graphics cards, it's not easy to support AMD.
  • Oxford Guy - Saturday, July 4, 2015 - link

    Because lying to customers about VRAM performance, ROP count, and cache size is a far better way to conduct business.

    Oh, and the 970's specs are still false on Nvidia's website. It claims 224 GB/s, but that's impossible because of the 28 GB/s partition and the XOR contention: the more the slow partition is used, the closer the combined figure can get to the theoretical 224 GB/s, but at the same time the faster partition gets dragged down by the 28 GB/s sloth. A catch-22.

    It's pretty amazing that Anandtech came out with a "Correcting the Specs" article but Nvidia is still claiming false numbers on their website.
  • Peichen - Monday, July 6, 2015 - link

    And yet the 970 is still faster. Nvidia is more efficient with resources than they let on.
  • Oxford Guy - Thursday, July 9, 2015 - link

    The XOR contention and 28 GB/s sure is efficiency. If only the 8800 GT could have had VRAM that slow back in 2007.
  • Gunbuster - Saturday, July 4, 2015 - link

    Came for the chizow, was not disappointed.
  • chizow - Monday, July 6, 2015 - link

    :)
  • madwolfa - Saturday, July 4, 2015 - link

    "Throw a couple of these into a Micro-ATX SFF PC, and it will be the PSU, not the video cards, that become your biggest concern".

    I think the biggest concern here would be to fit a couple of 120mm radiators.
  • TheinsanegamerN - Saturday, July 4, 2015 - link

    My current Micro-ATX case has room for dual 120mm rads and a 240mm rad. Plenty of room there.
