Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and pushes even the most capable systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum produces stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
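As a rough illustration of how the two numbers reported in our charts are derived from that frame time data, the short Python sketch below converts a list of per-frame times (in milliseconds) into an average FPS and a 95th percentile FPS figure. The file name and one-value-per-line layout are assumptions made purely for illustration; the benchmark's actual output format is not described here.

```python
# Minimal sketch: turn per-frame times (ms) into average FPS and 95th percentile FPS.
# The input file name and layout (one frame time per line) are assumptions.

def load_frame_times_ms(path):
    """Read one frame time in milliseconds per line, skipping blanks."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frame_times_ms):
    # Average FPS comes from total frames over total elapsed time,
    # not from averaging per-frame FPS values.
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds

    # 95th percentile FPS: take the 95th percentile (i.e. near-slowest) frame
    # time and convert it to a frame rate that 95% of frames meet or exceed.
    ordered = sorted(frame_times_ms)
    slow_frame_ms = ordered[int(0.95 * (len(ordered) - 1))]
    p95_fps = 1000.0 / slow_frame_ms
    return avg_fps, p95_fps

if __name__ == "__main__":
    avg, p95 = summarize(load_frame_times_ms("gta5_frametimes.txt"))
    print(f"Average FPS: {avg:.1f}, 95th percentile: {p95:.1f}")
```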


There are no presets for the graphics options on GTA: the user adjusts options such as population density and distance scaling on sliders, while others such as texture/shadow/shader/water quality are set from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance. There is a handy readout at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).
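To illustrate the kind of bookkeeping that video memory readout implies, here is a minimal Python sketch that sums a per-option cost estimate and compares it against the card's capacity. The cost table, base allocation and function names are hypothetical placeholders for illustration only, not Rockstar's actual accounting.

```python
# Minimal sketch of an "estimated video memory" check. All numbers below are
# made-up placeholder values, not the game's real per-setting costs.

VRAM_COST_MB = {
    "texture_quality": {"Low": 300, "High": 900, "Very High": 1800},
    "shadow_quality":  {"Low": 100, "High": 250, "Very High": 450},
    "msaa":            {"Off": 0, "2x": 250, "4x": 500, "8x": 1000},
}

def estimated_vram_mb(settings, base_mb=700):
    """Base allocation plus the cost of each selected option (hypothetical)."""
    return base_mb + sum(VRAM_COST_MB[option][choice]
                         for option, choice in settings.items())

def check_fit(settings, card_vram_mb):
    needed = estimated_vram_mb(settings)
    fits = needed <= card_vram_mb
    print(f"Estimated {needed} MB of {card_vram_mb} MB video memory "
          f"({'fits' if fits else 'over budget'})")
    return fits

# A 2 GB card at maxed textures/shadows with 4x MSAA gets flagged as over budget,
# while a slow 4 GB card (the R7 240 4GB case) passes the check yet may still be GPU-bound.
check_fit({"texture_quality": "Very High",
           "shadow_quality": "Very High",
           "msaa": "4x"},
          card_vram_mb=2048)
```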


All of our benchmark results can also be found in our benchmark engine, Bench.

[Chart: GTA V at IGP Low settings, showing Average FPS and 95th Percentile results]
Comments (220)

  • ByteMag - Wednesday, May 20, 2020 - link

    I'm wondering why the 3300X wasn't in the DigiCortex benchmark? This $120 4c/8t banger lays waste to the selected lineup. Or is it too much of a foreshadowing of how Zen 3 may perform? I guess benchmarks can sometimes be like a box of chocolates.
  • ozzuneoj86 - Wednesday, May 20, 2020 - link

    Just a request, but can you guys consider renaming the "IGP" quality level something different? The site has been doing it for a while and it kind of seems like they may not even know why at this point. Just change it to "Lowest" or something. Listing "IGP" as a test, when running a 2080 Ti on a CPU that doesn't have integrated graphics is extremely confusing to readers, to say the least.

    Also, I know the main reason for not changing testing methods is so that comparisons can be done (and charts can be made) without having to test all of the other hardware configs, but I have one small request for the next suite of tests (I'm sure they'll be revised soon). I'd request that testing levels for CPU benchmarks should be:

    Low Settings at 720P
    Max Settings at 1080P
    Max Settings at 1440P
    Max Settings at 4K

    (Maybe a High Settings at 1080P thrown in for games where the CPU load is greatly affected by graphics settings)

    Drop 8K testing unless we're dealing with flagship GPU releases. It just seems like 8K has very little bearing on what people are realistically going to need to know. A benchmark that shows a range from 6fps for the slowest to 9fps for the fastest is completely pointless, especially for CPU testing. In the future, replacing that with a more common or more requested resolution would surely be more useful to your readers.

    Oftentimes the visual settings in games do have a significant impact on CPU load, so tying the graphical settings to the resolution for each benchmark really muddies the waters. Why not just assume worst-case performance (max settings) for each resolution and go from there? Obviously anti-aliasing would need to be selected based on the game and resolution, with the focus being on higher frame rates (maybe no or low AA) for faster paced games and higher fidelity for slower paced games.

    Just my 2 cents. I greatly appreciate the work you guys do and it's nice to see a tech site that is still doing written reviews rather than forcing people to spend half an hour watching a video. Yeah, I'm old school.
  • Spunjji - Tuesday, May 26, 2020 - link

    Agreed 99% with this (especially that last part, all hail the written review) - but I'd personally say it makes more sense for the CPU reviews to be limited to 720p Low, 1080P High and 1440P Max.

    My theory behind that:
    720p Low gives you that entirely academic CPU-limited comparison that some people still seem to love. I don't get it, but w/e.
    1080p High is the kind of setting people with high-refresh-rate monitors are likely to run - having things look good, but not burning frames for near-invisible changes. CPU limiting is likely to be in play at higher frame rates. We can see whether a given CPU will get you all the way to your refresh-rate limit.
    1440p Max *should* take you to GPU-limited territory. Any setting above this ought to be equally limited, so that should cover you for everything, and if a given CPU and/or game doesn't behave that way then it's a point of interest.
  • dickeywang - Wednesday, May 20, 2020 - link

    With more and more cores being added to the CPU, it would've been nice to see some benchmarks under Linux.
  • MDD1963 - Wednesday, May 20, 2020 - link

    Darn near a full 2% gain in FPS in some games! Quite ...uhhh..... impressive! :/
  • MDD1963 - Wednesday, May 20, 2020 - link

    Doing these CPU gaming comparisons at 720P is just as silly as when HardOCP used to include 640x480 CPU scaling; 1080P is low enough, go medium details if needed.
  • Spunjji - Tuesday, May 26, 2020 - link

    Personally agreed here. It just gives more fodder to the "15% advantage in gaming" trolls.
  • croc - Wednesday, May 20, 2020 - link

    It would be 'nice' if the author could use results from the exact same stack of chips for each test. If the same results cannot be obtained from the same stack, then whittle the stack down to those chips for which the full set of tests can be obtained. I could understand the lack of results on newly added tests...

    For a peer review exercise it would be imperative, and here at Anandtech I am sure that there are many peers....
  • 69369369 - Thursday, May 21, 2020 - link

    Overheating and very high power bills happen with Intel.
  • Atom2 - Thursday, May 21, 2020 - link

    Dear Ian, you must be the only person on the planet who goes to such lengths not to use AVX, that you even compare Intel's AVX-512 instructions to GPU-based OpenCL just to have a reason not to use it. Consequently you only have AMD win the synthetic benchmarks, but all real-world math is held by Intel. Additionally, all those synthetics are "not" compiled with Intel C++. Forget it... GCC is only used by universities. The level of bias towards AMD is becoming surreal.
