Gaming: Grand Theft Auto V

The highly anticipated PC iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets, instead opening up the options to the user, and Rockstar's Advanced Game Engine under DirectX 11 can push even the hardiest systems to their limit. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally the ramming of a tanker that explodes, setting off other cars in turn. This gives us a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
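
Because the benchmark logs per-frame timing, the two metrics we report (average FPS and 95th percentile) can be computed directly from that data. Below is a minimal sketch of that calculation in Python; the log file name and its format (one frame time in milliseconds per line) are assumptions for illustration, not Rockstar's actual output format.

```python
import statistics

def summarize(frame_times_ms):
    """Return (average FPS, 95th percentile FPS) for a list of frame times."""
    # Average FPS over the whole run: total frames / total seconds.
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds
    # 95th percentile frame time (the slow tail that 95% of frames beat),
    # converted back into an instantaneous FPS figure.
    p95_ms = statistics.quantiles(frame_times_ms, n=100)[94]
    return avg_fps, 1000.0 / p95_ms

# Hypothetical log: one frame time in milliseconds per line.
with open("gtav_frametimes.txt") as f:
    times = [float(line) for line in f if line.strip()]
print("Average FPS: %.1f, 95th percentile: %.1f" % summarize(times))
```

Note that the percentile is taken on frame times rather than on FPS, which is why the 95th percentile figure sits below the average: it captures how slow the worst 5% of frames are.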


There are no presets for the graphics options in GTA V. Some options, such as population density and distance scaling, are adjusted on sliders, while others, such as texture/shadow/shader/water quality, are set on a scale from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance. There is a handy readout at the top showing how much video memory the selected options are expected to consume, with obvious repercussions if a user requests more video memory than the card has (although there is no warning for the opposite problem: a low-end GPU with lots of video memory, like an R7 240 4GB).
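
To illustrate the logic behind that video memory readout, here is a hedged sketch: the per-setting costs and setting names below are invented for the example (the game's real estimates are internal to the settings menu), but the budget check works the same way.

```python
# Invented per-setting VRAM estimates, in MB, keyed by (option, level).
ESTIMATED_COST_MB = {
    ("textures", "Very High"): 1536,
    ("shadows", "Very High"): 512,
    ("msaa", "4x"): 768,
}

def vram_check(selected, card_vram_mb):
    """Sum estimated costs of the chosen settings against the card's VRAM."""
    needed = sum(ESTIMATED_COST_MB.get(item, 0) for item in selected.items())
    if needed > card_vram_mb:
        return "Over budget: %d MB requested, %d MB available" % (needed, card_vram_mb)
    # Passing this check only means the settings fit in memory; a slow GPU
    # with lots of VRAM (the R7 240 4GB case) will still run poorly.
    return "OK: %d of %d MB" % (needed, card_vram_mb)

print(vram_check({"textures": "Very High", "shadows": "Very High", "msaa": "4x"}, 2048))
```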


All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: GTA V Average FPS and 95th Percentile results at the AnandTech IGP and Low test settings]
Comments (220)

  • Gastec - Friday, May 22, 2020 - link

    "pairing a high-end GPU with a mid-range CPU" should already be a meme, so many times I've seen it copy-pasted.
  • dotjaz - Thursday, May 21, 2020 - link

    What funny stuff are you smoking? In all actual configurations, AMD doesn't lose by any meaningful margin, and it offers much better value.
    Anandtech is running CPU tests where you set the quality low and get 150+ fps or even 400+ fps; nobody actually plays like that.
  • deepblue08 - Thursday, May 21, 2020 - link

    Intel may not be a great value chip all around, but a 10 FPS lead at 1440p is a lead nevertheless: https://hexus.net/tech/reviews/cpu/141577-intel-co...
  • DrKlahn - Thursday, May 21, 2020 - link

    If that's worth the more expensive motherboard, beefier (and more costly) cooling, and the increased heat, then go for it. If you put 120 fps next to 130 fps without a counter up, how many people could tell? Personally I don't see it as worth it at all. Nor do I consider it a dominating lead. But I'm sure there are people out there who will buy Intel for a negligible lead.
  • Spunjji - Friday, May 22, 2020 - link

    An entirely unnoticeable lead that you get by sacrificing any sort of power consumption / cooling sanity and spending measurably larger amounts of cash on the hardware to achieve the boost clocks required to get that lead.

    The difference was meaningful back when AMD had lower minimum framerates, less consistency and -30fps or so off the average. Now it's just silly.
  • babadivad - Thursday, May 21, 2020 - link

    Do you need a new motherboard with these? If so, they make even less sense than they already did.
  • MDD1963 - Friday, May 22, 2020 - link

    As for Intel owners, I don't think too many 8700K, 9600K or above owners would seriously feel they are CPU limited or in dire/imminent need of a CPU upgrade as they sit now, anyway. Users of prior generations (I'm still on a 7700K) will make their choices at a time of their own choosing, of course, and not simply because 'a new generation is out'. (I mean, look at the 8700K vs. 10600K results... it looks almost like a rebadging operation.)
  • khanikun - Wednesday, May 27, 2020 - link

    I was on a 7700K and didn't feel CPU limited at all, but decided to get an 8086K for the two extra cores, and just because it was an 8086. For my normal workloads or gaming, I don't notice a difference. I do re-encode videos maybe a couple of times a year, which is about the only time I'll see the difference.

    I'll probably just be sitting on this 8086K for the next few years, unless something on my machine breaks or Intel does something crazy ridiculous, like making an 8-core i7 on 10nm at 5 GHz all-core, in a new socket, then making dual-socket consumer boards for it at a relatively decent price. I'd upgrade for that, just because I'd like to try making a dual-processor system that isn't some expensive workstation/server system.
  • Spunjji - Friday, May 22, 2020 - link

    Yes, you do. So no, they don't make sense xD
  • Gastec - Friday, May 22, 2020 - link

    Games... framerate is pointless in video games; all that matters now is the "surprise mechanics".
