Gaming: Grand Theft Auto V

The highly anticipated latest entry in the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, but instead opens up all the options to users, and pushes even the strongest systems to their limits with Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game at maximum settings creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, then an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
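Frame time data of the kind the title exports reduces to the two metrics charted below with only a few lines of script. The sketch here is illustrative (the sample frame times are made up, not taken from the benchmark): it computes average FPS and a nearest-rank 95th-percentile frame time from a list of per-frame render times in milliseconds.

```python
# Sketch: reduce a frame time log (milliseconds per frame) to
# average FPS and a 95th-percentile frame time. Sample data is
# illustrative only, not output from the GTA V benchmark.

def summarize(frame_times_ms):
    """Return (average FPS, 95th-percentile frame time in ms)."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    ordered = sorted(frame_times_ms)
    # Nearest-rank percentile: the frame time that 95% of frames beat.
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return avg_fps, ordered[idx]

# A mostly-smooth run with two hitches (illustrative numbers).
times = [16.7, 16.9, 17.1, 16.5, 33.4, 16.8, 17.0, 16.6, 16.9, 40.2]
fps, p95 = summarize(times)
print(f"Average FPS: {fps:.1f}, 95th percentile frame time: {p95:.1f} ms")
```

Note how the two hitch frames barely move the average but dominate the 95th percentile, which is why both numbers are reported.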

There are no presets for the graphics options in GTA: the user adjusts options such as population density and distance scaling via sliders, while others, such as texture/shadow/shader/water quality, are set on a scale from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance. There is a handy readout at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious warning for the opposite case of a low-end GPU with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Average FPS and 95th Percentile results at IGP, Low, Medium, and High settings]

79 Comments

  • Santoval - Monday, November 25, 2019 - link

    Wait for the prices of both to adjust first.
  • Drumsticks - Monday, November 25, 2019 - link

    I don't care about process nodes, as long as they're delivering competitive prices, core counts, and performance per core. Intel's not quite out of the game yet since AMD's HEDT goes higher than Intel's, but they've gotten smashed at the halo spot, and they won't be able to deliver on price and performance if they can't get something in order.
  • Braincruser - Tuesday, November 26, 2019 - link

    No they haven't been "smashed at the halo spot". The 3900X and 3950X are both beasts and both shred in most of the important benchmarks. For video rendering both the 3900X and 3950X hold their own against both the Threadrippers and the Intel chips. You get 90% of the performance for 1/4th the price. 12-16 cores is also a very important number for programmers, since you have enough CPUs for compiling and running 2-3 VMs comfortably.
  • Dolda2000 - Monday, November 25, 2019 - link

    Why is it that Intel gains so incredibly much more from AVX512 than AMD gains from AVX2?

    In the 3DPM2 test, the AMD CPUs gain roughly a factor of two in performance, which is exactly what I'd expect given that AVX2 is twice as wide as standard SSE. The Intel CPUs, on the other hand, gain almost a factor of 9, which is more than twice what I'd expect given that AVX512 is four times as wide as SSE.

    What causes this? Does AVX512 have some other kind of tricks up its sleeves? Does opmasking benefit 3DPM2?
  • Xyler94 - Monday, November 25, 2019 - link

    Basically, AVX-512 is double the performance of AVX2 (or another way to see it, 256 bits vs 512 bits, and 512 is double 256). So anything optimized for 512 will be about double the speed of 256, even on the exact same processor.
  • Xyler94 - Monday, November 25, 2019 - link

    To note: That's a highly simplified view of it, there's a lot more under the hood.
  • eek2121 - Monday, November 25, 2019 - link

    Well that and the obvious point that AMD CPUs do not support AVX-512.
  • DanNeely - Monday, November 25, 2019 - link

    AVX2 is 256 bits wide, and thus only does half as much per instruction as AVX-512.
  • JayNor - Monday, November 25, 2019 - link

    I believe for 10 cores and up there are dual avx512 units per core. You can see the dual avx512 units in the Execution Engine diagram at this link.
    https://en.wikichip.org/wiki/intel/microarchitectu...

    Also, Cascade Lake added DL Boost 8-bit operations in AVX-512 to support AI inference convolutions.
  • Dolda2000 - Monday, November 25, 2019 - link

    But Zen 2 also has two 256-bit FMAs per core. And Intel has two SSE units per core as well, so I don't see how that would explain the ratios.
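The width arithmetic being debated in the thread can be made concrete with a back-of-the-envelope sketch. The unit counts below (two vector units per core in each configuration) follow the claims in the comments above; they are assumptions for illustration, not measurements. Note that width alone predicts only a 2x (AVX2) and 4x (AVX-512) gain over SSE, which is exactly why the ~9x figure in the original question needs some other explanation, such as masking or new instructions.

```python
# Sketch: theoretical single-precision SIMD lanes retired per clock
# for one core, as (vector width / element width) * number of units.
# Two vector units per core is assumed per the comment thread.

def lanes_per_clock(vector_bits, units, element_bits=32):
    """Peak lanes per clock from width and unit count alone."""
    return (vector_bits // element_bits) * units

sse    = lanes_per_clock(128, 2)  # SSE:     4 floats x 2 units
avx2   = lanes_per_clock(256, 2)  # AVX2:    8 floats x 2 units
avx512 = lanes_per_clock(512, 2)  # AVX-512: 16 floats x 2 units

print(f"AVX2 vs SSE: {avx2 // sse}x, AVX-512 vs SSE: {avx512 // sse}x")
```

On this naive model the expected speedups are 2x and 4x, so a ~9x observed gain implies per-instruction work beyond raw width.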
