Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, but instead opens up the options to users and pushes even the hardiest systems to the limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet followed by an inner city drive-by through several intersections followed by ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
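As a hypothetical illustration (not AnandTech's actual scripts), frame time data of this kind is typically reduced to the two figures reported in the charts: an average FPS and a 95th-percentile FPS.

```python
# Hypothetical sketch (not AnandTech's tooling): reducing a list of
# per-frame frame times, in milliseconds, to average and percentile FPS.
def summarize_frame_times(frame_times_ms):
    n = len(frame_times_ms)
    # Average FPS over the whole run: total frames / total seconds
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    # 95th-percentile frame time: the frame time that 95% of frames beat,
    # expressed as FPS -- a proxy for the worst sustained stutter
    ordered = sorted(frame_times_ms)
    p95_time_ms = ordered[min(n - 1, int(0.95 * n))]
    p95_fps = 1000.0 / p95_time_ms
    return avg_fps, p95_fps
```

A run that renders mostly at 10 ms per frame but stutters to 20 ms on 5% of frames reports an average near 95 FPS but a 95th percentile of only 50 FPS, which is why both numbers are worth charting.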

There are no presets for the graphics options in GTA; the user adjusts some settings, such as population density and distance scaling, on sliders, while others, such as texture/shadow/shader/water quality, run from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance. There is a handy readout at the top which shows how much video memory the selected options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: GTA V Average FPS and 95th Percentile results at IGP, Low, Medium, and High settings]

79 Comments

  • Korguz - Thursday, November 28, 2019 - link

    yep.. i knew gondalf wouldn't answer my question...
  • 0ldman79 - Thursday, December 5, 2019 - link

    That is ignorant.

    Adding L3 cannot increase processing. The L3 can only improve the feeding of data; further, the L3 is a victim cache, so the data has to be expelled from the L2 first.

    It doesn't matter how big the fuel line is on your 4 cylinder, it's only going to burn so much gas. Same for the L2 and L3. If the size of the cache increases the IPC, that is *only* because the cache was too small for the design in the first place.
  • Korguz - Sunday, December 8, 2019 - link

    keep in mind, the comment is from gondalf, he will say anything to make his beloved intel look better, as you can see, he DIDN'T answer my question to him as well...
  • airdrifting - Monday, November 25, 2019 - link

    You are delusional. 2011 is the year of the 2500K/2600K release, and since then Intel had been charging $300+ for quad cores until the 2017 Ryzen release. Those were also the six darkest years in CPU history, where we saw maybe a 5% increase in IPC every year. I kept my 4.5GHz overclocked 2600K for 6 years because there was no reason to upgrade.
  • eek2121 - Monday, November 25, 2019 - link

    Yeah, that was part of the issue. Sandy Bridge had so much overclocking headroom that you could put a good AiO on it, crank it up to 4.8-5.0 GHz, and generations later the competition would just barely catch up. The percentage difference between the two was very small, and Bulldozer was chasing Core i3s.
  • rahvin - Monday, November 25, 2019 - link

    You're not alone buddy. I held on to my Ivy Bridge 3770K until the Ryzen 39**X because Intel was offering no innovation to the market.

    I distinctly remember the AnandTech article for, IIRC, the Kaby Lake Intel processors, where they basically said this was the first generation to be 20% better than Sandy Bridge/Ivy Bridge, which made it worth upgrading. That was 6 years without any performance increases.

    Make no mistake, without AMD competition we wouldn't have moved beyond 8 cores on the desktop or 12 cores in the HEDT. Intel was happy to sit on their hands and rake in the money with 2-5% improvements per year. In fact, 3 solid years of AMD competition have doubled core counts on both the desktop and the server, and at the same time lowered prices across the board. Without AMD there is no innovation at Intel, because they don't have competition. Thank god for Lisa Su.
  • Santoval - Monday, November 25, 2019 - link

    Bollocks. Pulling arbitrary dollar values of nameless CPUs out of your behind, and linking 2011 CPUs even more arbitrarily to 2019 CPUs, is an extremely poor tactic. You suck at this (-->Intel apologetics). Be better so we can have meaningful arguments :)
  • milkywayer - Monday, November 25, 2019 - link

    Read the article you're posting spam at. The author mentions the 1900 and 900 numbers. I'll let you guess which page. You might actually read the review then.
  • milkywayer - Monday, November 25, 2019 - link

    Whups. Meant it for RegsEx.
  • milkywayer - Monday, November 25, 2019 - link

    Out of the kindness of their heart. How generous and kind of them.
    /s
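As an aside on 0ldman79's victim-cache point above: the "data has to be expelled from the L2 first" behavior can be sketched with a toy model. The cache sizes and the LRU policy here are illustrative assumptions, not Intel's actual implementation.

```python
from collections import OrderedDict

# Toy model of a victim-cache L3: lines enter the L3 only when they are
# evicted from the L2. Sizes and LRU policy are illustrative assumptions.
class VictimCacheModel:
    def __init__(self, l2_lines=4, l3_lines=8):
        self.l2 = OrderedDict()   # least-recently-used line first
        self.l3 = OrderedDict()   # holds only lines expelled from L2
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, addr):
        if addr in self.l2:                   # L2 hit
            self.l2.move_to_end(addr)
            return "L2 hit"
        if addr in self.l3:                   # L3 hit: promote back to L2
            del self.l3[addr]
            self._fill_l2(addr)
            return "L3 hit"
        self._fill_l2(addr)                   # miss: fill L2 from memory
        return "miss"

    def _fill_l2(self, addr):
        if len(self.l2) >= self.l2_lines:
            victim, _ = self.l2.popitem(last=False)  # expel LRU line from L2...
            self.l3[victim] = True                   # ...into the victim L3
            if len(self.l3) > self.l3_lines:
                self.l3.popitem(last=False)
        self.l2[addr] = True
```

The key property is that nothing ever enters the L3 except through an L2 eviction, so a bigger L3 only helps when lines expelled from the L2 are re-requested before they age out of the L3.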
