Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, instead opening up the options to users, and it can push even the hardiest systems to their limits using Rockstar's Advanced Game Engine under DirectX 11. Whether the player is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game cranked up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet with an inner-city drive-by through several intersections, ending with the ramming of a tanker that explodes and sets off other cars in turn. This mixes distance rendering with a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
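As a minimal sketch of how such frame time data turns into the figures we chart, assuming a hypothetical log with one frame time in milliseconds per line (the file name and format here are illustrative, not Rockstar's actual output):

```c
/* Sketch: derive average FPS and a 95th-percentile figure from a frame
 * time log. Assumes one frame time in milliseconds per line; the file
 * name "frametimes.csv" is a placeholder. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static double times[100000];   /* frame times in ms */
    size_t n = 0;
    double sum = 0.0, t;

    FILE *f = fopen("frametimes.csv", "r");
    if (!f) { perror("frametimes.csv"); return 1; }
    while (n < 100000 && fscanf(f, "%lf", &t) == 1) {
        times[n++] = t;
        sum += t;
    }
    fclose(f);
    if (n == 0) return 1;

    /* Average FPS: total frames over total seconds. */
    printf("Average FPS: %.1f\n", 1000.0 * (double)n / sum);

    /* Sort ascending; the 95th-percentile frame time bounds the slowest
     * 5% of frames, reported here as an FPS figure. */
    qsort(times, n, sizeof times[0], cmp_double);
    double p95_ms = times[(size_t)(0.95 * (double)(n - 1))];
    printf("95th percentile: %.1f FPS\n", 1000.0 / p95_ms);
    return 0;
}
```

The 95th-percentile number is the more telling of the two: an average can hide stutter that a percentile figure exposes.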

AnandTech CPU Gaming 2019 Game List

Game | Genre | Release Date | API | IGP | Low | Medium | High
Grand Theft Auto V | Open World | Apr 2015 | DX11 | 720p Low | 1080p High | 1440p Very High | 4K Ultra

There are no presets for the graphics options in GTA: the user adjusts options such as population density and distance scaling on sliders, while others, such as texture/shadow/shader/water quality, are set from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy readout at the top which shows how much video memory the selected options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: GTA V Average FPS and 95th Percentile at IGP, Low, Medium, and High settings]

GTA V is always an amusing game, and not just for its criminal hijinks. Originally released for the last-generation consoles years ago – hardware built on the best CPUs and GPUs of 2005/2006 – it still sells well. More importantly, it can still punish a modern GPU, and CPUs don't get off too easily either, especially at our 1080p High settings. In this case the CFL-R chips take a 1-2-3 win, all of them pushing past even the 8700K. The performance gain is nothing to write home about, but the 9900K has improved on its predecessor by 9%.

However these CPU differences quickly become irrelevant at higher, more GPU-demanding settings. At 1440p Very High we're looking at a tie among the top seven CPUs, and nothing manages more than 23 fps at 4K.

Comments (274)

  • Total Meltdowner - Sunday, October 21, 2018 - link

    Those typos...

    "Good, F U foreigners who want our superior tech."
  • muziqaz - Monday, October 22, 2018 - link

    Same to you, who still thinks that Intel CPUs are made purely in the USA :D
  • Hifihedgehog - Friday, October 19, 2018 - link

    What do I think? That it is a deliberate act of desperation. It looks like it may draw more power than a 32-core Threadripper, per your own charts.

    https://i.redd.it/iq1mz5bfi5t11.jpg
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised
    https://www.anandtech.com/show/13400/intel-9th-gen...

    The motherboard in question was using an insane 1.47 V
    https://twitter.com/IanCutress/status/105342741705...
    https://twitter.com/IanCutress/status/105339755111...
  • edzieba - Friday, October 19, 2018 - link

    For the last decade, you've had the choice between "I want really fast cores!" and "I want lots of cores!". This is the 'now you can have both' CPU, and it's surprisingly not in the HEDT realm.
  • evernessince - Saturday, October 20, 2018 - link

    It's priced like HEDT though; well into HEDT territory, in fact. FYI, you could have had both of those when the 1800X dropped.
  • mapesdhs - Sunday, October 21, 2018 - link

    I noticed that initially in the UK the pricing of the 9900K was very close to the 7820X, but now pricing for the latter has often been replaced on retail sites with CALL. Coincidence? It's almost as if Intel is trying to hide the fact that even Intel has better options at this price level.
  • iwod - Friday, October 19, 2018 - link

    Nothing unexpected really: 5 GHz on a "better" node that is tuned for higher frequency. The TDP was the real surprise though. I knew the TDPs were fake, but 95 W turning into 220 W? I am pretty sure in some countries (um... the EU) people can start suing Intel for misleading customers.

    For the AVX test, did the program really use AMD's AVX units, or was it not optimised for AMD's AVX, given that AMD has a slightly different (I'd say saner) implementation? And if it did, the difference shouldn't be that big.

    I continue to believe there is a huge market for iGPUs, and I think AMD has the biggest chance to capture it, just looking at those totally playable 1080p frame rates. If they could double the iGPU die size budget with 7 nm Ryzen, it would be all good.

    Now we are just waiting for Zen 2.
  • GreenReaper - Friday, October 19, 2018 - link

    It's using it; you can see the points increased in both cases. But AMD implemented AVX on the cheap. It takes twice the cycles to execute AVX operations involving 256-bit data, because (AFAIK) it's implemented using 128-bit registers, with pairs of units that can only do multiplies or adds, not both.

    That may be the smart choice; it probably saves significant space and power. It might also work faster with SSE[2/3/4] code, which is still heavily used (in part because Intel has disabled AVX support on its lower-end chips). But some workloads just won't perform as well as on Intel's flexible, wider units. The same is true for AVX-512, where the workstation chips run away with it.

    It's like the difference between using a short bus, a full-sized school bus, and a double decker - or a train. If you can actually fill the train on a regular basis, are going to go a long way on it, and are willing to pay for the track, it works best. Oh, and if developers are optimizing AVX code for *any* CPU, it's almost certainly Intel, at least first. This might change in the future, but don't count on it.
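    A minimal sketch of the mechanism being described, using a generic AVX2 FMA loop rather than anything from the actual benchmark (the function and its name are illustrative only). The same binary runs on both vendors' chips; the difference is that Zen 1 internally cracks each 256-bit operation into two 128-bit halves, halving peak throughput:

    ```c
    /* Illustrative AVX2 kernel: y[i] += a * x[i], eight floats per step.
     * On Intel client cores each _mm256_fmadd_ps is one 256-bit FMA op;
     * on Zen 1 it is executed as two 128-bit halves (per the comment
     * above), so peak throughput is halved.
     * Build (GCC/Clang): cc -O2 -mavx2 -mfma axpy.c */
    #include <immintrin.h>
    #include <stddef.h>

    void axpy_avx2(float a, const float *x, float *y, size_t n)
    {
        __m256 va = _mm256_set1_ps(a);
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);
            __m256 vy = _mm256_loadu_ps(y + i);
            vy = _mm256_fmadd_ps(va, vx, vy);  /* one 256-bit FMA issued */
            _mm256_storeu_ps(y + i, vy);
        }
        for (; i < n; i++)                      /* scalar tail */
            y[i] += a * x[i];
    }
    ```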
  • emn13 - Saturday, October 20, 2018 - link

    Those AVX numbers look like they're measuring something else, not just AVX-512. You'd expect performance to increase (compared to 256-bit AVX) by around 50%, give or take quite a margin of error; it should *never* be more than a factor of 2 faster. So ignore AMD; their AVX implementation is wonky, sure, but those Intel numbers almost have to be wrong. I think the baseline isn't vectorized at all, or something like that; that would explain the huge jump.

    Of course, AVX-512 is fairly complicated, and it's more than just wider, but these results seem extraordinary, and there's just not enough evidence the effect is real and not just some quirk of how the variations were compiled.
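    The bound being appealed to here, as a back-of-the-envelope sketch assuming the kernel is limited purely by vector ALU throughput (a simplification, not a model of any particular chip): doubling the vector width at fixed clocks can at most double the work per cycle,

    ```latex
    \[
      \text{speedup}_{512\text{ vs. }256} \;\le\; \frac{512\ \text{bits}}{256\ \text{bits}} \;=\; 2
    \]
    ```

    and AVX-512 frequency offsets typically pull the realised ratio below 2, which is why a measured gain well above 2x points at an unvectorized baseline rather than at AVX-512 itself.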
