Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets; instead it opens the options up to the user, and it can push even the hardest systems to their limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game when cranked up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars in turn. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
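
A frame-time log like the one this benchmark produces reduces directly to the two metrics charted below: average FPS and the 95th percentile. The following is a minimal sketch of that reduction, not Rockstar's tooling or our actual test harness; the file name and the one-frame-time-in-milliseconds-per-line format are assumptions for illustration.

```python
# Sketch: reduce a frame-time log (assumed: one value in ms per line)
# to average FPS and the 95th-percentile frame time, reported as FPS.

def load_frame_times(path):
    """Read frame times in milliseconds, skipping blank lines."""
    times = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                times.append(float(line))
    return times

def summarize(frame_times_ms):
    # Average FPS: total frames divided by total elapsed seconds.
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s

    # 95th percentile: the frame time that 95% of frames beat,
    # i.e. the slow tail; invert it to quote an FPS figure.
    ordered = sorted(frame_times_ms)
    idx = min(int(len(ordered) * 0.95), len(ordered) - 1)
    p95_ms = ordered[idx]
    return avg_fps, 1000.0 / p95_ms

if __name__ == "__main__":
    avg, p95 = summarize(load_frame_times("gta5_frametimes.txt"))
    print(f"Average FPS: {avg:.1f}")
    print(f"95th percentile: {p95:.1f} FPS")
```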

There are no presets for the graphics options in GTA V. Options such as population density and distance scaling are adjusted on sliders, while others such as texture/shadow/shader/water quality are set on a scale from Low to Very High. Further options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy readout at the top of the menu showing how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).
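
To make that VRAM readout concrete, here is a hypothetical sketch of how such a budget estimate could work. The setting names, per-option costs, and base cost are all invented for illustration; this is not Rockstar's actual accounting.

```python
# Hypothetical sketch of a VRAM budget readout like the one at the top
# of GTA V's graphics menu. All per-option costs in MB are invented.
TEXTURE_COST = {"Normal": 600, "High": 1200, "Very High": 2400}
SHADOW_COST = {"Normal": 150, "High": 300, "Very High": 600}
MSAA_COST = {0: 0, 2: 250, 4: 500, 8: 1000}
BASE_COST_MB = 500  # assumed framebuffer and fixed allocations at 1080p

def estimated_vram_mb(texture, shadows, msaa):
    return (BASE_COST_MB + TEXTURE_COST[texture]
            + SHADOW_COST[shadows] + MSAA_COST[msaa])

card_vram_mb = 4096  # e.g. an R7 240 4GB: lots of memory, little GPU
needed = estimated_vram_mb("Very High", "Very High", 4)
if needed > card_vram_mb:
    print(f"Warning: settings want {needed} MB, card has {card_vram_mb} MB")
else:
    # Note: fitting in VRAM says nothing about the GPU being fast enough.
    print(f"OK: {needed} of {card_vram_mb} MB budgeted")
```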

AnandTech CPU Gaming 2019 Game List

Game                Genre       Release   API    IGP       Low         Medium           High
Grand Theft Auto V  Open World  Apr 2015  DX11   720p Low  1080p High  1440p Very High  4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Grand Theft Auto V Average FPS and 95th Percentile results at the IGP, Low, Medium, and High test settings]

We see performance parity between the chips at 4K, but at every other resolution and setting the overclocked Core i7-2600K still can't reach the level of the Core i7-7700K, often sitting midway between the 7700K at stock and the 2600K at stock.

213 Comments

  • kgardas - Friday, May 10, 2019

    Indeed, it's sad that it took ~8 years to roughly double performance, while in the '90s we got that every 2-3 years. And look at the office tests: we're not there yet, and probably never will be, since single-thread performance increases are basically dead. The Chromium compile suggests it makes sense to upgrade at all -- for developers; for office users it's nonsense if you consider just the CPU itself.
  • chekk - Friday, May 10, 2019

    Thanks for the article, Ian. I like your summation: impressive and depressing.
    I'll be waiting to see what Zen 2 offers before upgrading my 2500K.
  • AshlayW - Friday, May 10, 2019

    Such great innovation and progress and cost-effectiveness advances from Intel between 2011 and 2017. /s

    Yes, AMD didn't do much here either, but it wasn't for lack of trying. Intel deliberately stagnated the market to bleed consumers of every single cent, and then Ryzen turned up and you got 6 and now 8 core mainstream CPUs.

    Would have liked to see the 2600K versus Ryzen, honestly. Ryzen 1st gen is around Ivy Bridge/Haswell performance per core in most games, and second gen is Haswell/Broadwell. But as more games get more threaded, Ryzen's advantage will only increase.

    I owned a 2600K and it was the last product from Intel I ever owned that I truly felt was worth its price. Even now I just can't justify spending £350-400 on a hexa core, or an octa core with HT disabled, when the competition has unlocked 16 threads for less money.
  • 29a - Friday, May 10, 2019

    "Yes AMD didn't do much here either"

    I really don't understand that statement at all.
  • thesavvymage - Friday, May 10, 2019

    They're saying AMD didn't do much to push the price/performance envelope between 2011 and 2017. Which they didn't, since their architecture until Zen was terrible.
  • eva02langley - Friday, May 10, 2019

    Yeah, you are right... it is AMD's fault, and not Intel's, who wanted to make a dime on your back by selling you quad cores for life.
  • wilsonkf - Friday, May 10, 2019

    Would be more interesting to add the 8150/8350 to the benchmark. I ran my 8350 at 4.7GHz for five years. It's a great room heater.
  • MDD1963 - Saturday, May 11, 2019

    I don't think AMD would have sold as many of the 8350s and 9590s as they did had people known that i3s and i5s outperformed them in pretty much all games, and at lower clock speeds, no less. Many people probably bought the FX-8350 because it 'sounded faster' at 4.7 GHz than the 2600K at 'only' 3.8 GHz, or so I speculate, anyway... (sort of like the Florida Broward County votes in 2000!)
  • Targon - Tuesday, May 14, 2019

    Not everyone looks at games as the primary use of a computer. The AMD FX chips were not great when it came to IPC, in the same way that the Pentium 4 was terrible on an IPC basis. Still, the 8350 was a lot faster than the Phenom II processors, that's for sure.
  • artk2219 - Wednesday, May 15, 2019

    I got my FX 8320 because I preferred threads over single-core performance. I was much more likely to notice a lack of computing resources and multitasking ability than how long something took to open or run. The funny part is that even though people shit all over them, they were, and honestly still are, valid chips for certain use cases. They'll still game, they can be small cheap vhosts, NAS servers, you name it. The biggest problem recently is finding a decent AM3+ board to put them in.
