Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, the game at maximum settings creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally the ramming of a tanker that explodes, causing other cars to explode as well. This mixes long-distance rendering with a detailed near-field action sequence, and the title thankfully spits out frame time data.
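
Because the game logs per-frame times, turning a run into the two numbers we chart, average FPS and 95th percentile, is straightforward. Below is a minimal sketch of that reduction; the log file name and the one-frame-time-per-line format are assumptions for illustration, not GTA V's actual output format.

```python
# Minimal sketch: reduce a frame time log to average FPS and the
# 95th-percentile frame rate (the floor that 95% of frames meet).
# The file name and one-value-per-line (milliseconds) format are
# assumptions for illustration, not GTA V's actual log layout.
import statistics

def load_frame_times(path):
    """Read frame times in milliseconds, one per line."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frame_times_ms):
    """Return (average FPS, 95th-percentile FPS)."""
    avg_ms = statistics.fmean(frame_times_ms)
    # quantiles(n=100) yields 99 cut points; index 94 is the 95th percentile
    p95_ms = statistics.quantiles(frame_times_ms, n=100)[94]
    return 1000.0 / avg_ms, 1000.0 / p95_ms

times = load_frame_times("gtav_frametimes.txt")  # hypothetical log name
avg_fps, p95_fps = summarize(times)
print(f"Average FPS: {avg_fps:.1f} | 95th percentile: {p95_fps:.1f}")
```

Reporting the 95th percentile as a frame rate rather than a raw frame time keeps it directly comparable with the average FPS figure on the same chart.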

There are no presets for the graphics options in GTA: the user adjusts some options, such as population density and distance scaling, on sliders, while others, such as texture/shadow/shader/water quality, range from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy readout at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).
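
As a toy illustration of how that video memory readout behaves, the sketch below sums per-option cost estimates and flags an overcommit. Every option name and megabyte figure here is invented for the example; the real accounting is internal to Rockstar's engine.

```python
# Toy model of a settings-screen VRAM estimator. All option names and
# costs below are invented for illustration, not Rockstar's numbers.
VRAM_COST_MB = {
    "texture_quality_very_high": 2048,
    "shadow_quality_very_high": 512,
    "msaa_4x": 600,
    "extended_draw_distance": 350,
}

def check_vram_budget(selected_options, card_vram_mb):
    """Sum estimated costs and warn when the card is overcommitted."""
    total = sum(VRAM_COST_MB[opt] for opt in selected_options)
    if total > card_vram_mb:
        print(f"Warning: {total} MB requested, only {card_vram_mb} MB on card")
    return total

check_vram_budget(["texture_quality_very_high", "msaa_4x"], 2048)
```

Note the asymmetry the readout has: a check like this can warn about exceeding the card's memory, but a card with plenty of VRAM and a weak GPU core (the R7 240 4GB case) passes the check while still being far too slow.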

AnandTech CPU Gaming 2019 Game List

Game                 Genre        Release Date   API    IGP        Low          Med               High
Grand Theft Auto V   Open World   Apr 2015       DX11   720p Low   1080p High   1440p Very High   4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Grand Theft Auto V Average FPS and 95th Percentile results at the IGP, Low, Medium, and High test settings]

We see performance parity between the chips at 4K, but at every other resolution and setting the overclocked chip still can't reach the level of the 7700K, often sitting midway between the 7700K at stock and the 2600K at stock.

Comments

  • Midwayman - Monday, May 13, 2019 - link

    I think the biggest thing I noticed moving to an 8700K from a 2600K was the same thing I noticed moving from a Core 2 Duo to a 2600K: fewer weird pauses. The 2600K would get weird hitches in games. System processes would pop up and tank the frame rate for an instant, or an explosion would trigger a physics event that would make it stutter. I see that a lot less with a couple of extra cores and some performance overhead.
  • tmanini - Monday, May 13, 2019 - link

    I agree, the user experience is definitely improved in those ways. Granted, many of us think our time is a bit more important than it really is. (Does waiting 3 seconds really ruin my day?)
  • ochadd - Monday, May 13, 2019 - link

    Enjoyed the article very much.
  • Magnus101 - Monday, May 13, 2019 - link

    You get about 3X the performance when going from an overclocked 2600K@4.5GHz to an 8700K@4.5GHz when working in DAWs (Digital Audio Workstations), i.e. running dozens and dozens of virtual instruments and plugins when making music.
    The thing is that it is a combination of applications that:
    1. Use SSE/AVX or whatever streaming extensions make parallel floating-point calculations go much faster. DAW work is all about floating-point calculations.
    2. Are extremely real-time dependent, needing ultra-low latency (single-digit milliseconds).

    This makes even the 7700K about double the performance in some scenarios when compared to an equally clocked 2600K.
  • mikato - Monday, May 13, 2019 - link

    "and Intel’s final quad-core with HyperThreading chip for desktop, the 7700K"
    "the Core i7-7700K, Intel’s final quad-core with HyperThreading processor"

    Did I miss some big news?
  • mapesdhs - Monday, May 13, 2019 - link

    "... the best chips managed 5.0 GHz or 5.1 GHz in a daily system."

    Worth noting that with the refined 2700K, *all* of them run fine at 5GHz in a daily system with sensible temps; a TRUE and one fan is plenty for cooling. Threaded performance is identical to a stock 6700K, and IPC is identical to a stock 2700X (880 and 177 for CB R15 nT/1T resp.)

    Also, various P67/Z68 boards support NVMe boot via modded BIOS files. The ROG forum has a selection for ASUS; search for "ASUS bolts4breakfast". He's added support for the M4E and M4EZ, and I think others asked the same for the Pro Gen3, etc. I'm sure there are equivalent BIOS mod threads for Gigabyte, MSI, etc. My 5GHz 2700K on an M4E has a 1TB SM961 and a 1TB 970 EVO Plus (photo/video archive), though the C-drive is still a venerable Vector 256GB which holds up well even today.

    Also, RAM support runs fine with 2133 CL9 on the M4E, which is pretty good (16GB GSkill TridentX, two modules).

    However, after using this for a great many years, I do find myself wanting better performance for processing images & video, so I'll likely be stepping up to a Ryzen 3000 system, at least 8 cores.
  • mapesdhs - Monday, May 13, 2019 - link

    Forgot to mention, something else interesting about SB is the low cost of its sibling, SB-E. Would be a laugh to see how all those tests pan out with a 3930K, stock and OC'd, thrown into the mix. It's a pity good X79 boards are hard to find now given how cheaply one can get 3930Ks these days. If stock performance is OK though, there are some cheap Chinese boards which work pretty well, and some of them do support NVMe boot.
  • tezcan - Monday, May 13, 2019 - link

    I am still running a 3930K; prices for it are still very high, ~$500, not much cheaper than what I paid for it in 2011. I have yet to really test my GTX 680s in SLI. Kind of a waste, but they are driving many displays throughout my house. There was an article where an Australian bloke ran an 8-core Sandy Bridge-E (server chip) against all the modern Intel 8-core chips. It actually had the lowest latency, so it was best for pro gamers; it lagged a little behind on everything else, but was definitely good enough.
  • dad_at - Tuesday, May 14, 2019 - link

    I run a 3960X at ~4 GHz on an ASUS P9X79 (X79) and have an NVMe boot drive with a modified BIOS. So it is really interesting to compare a 2011/2012 6c/12t to an 8700K or 9900K. I guess it lands at about stock 7700K level, so a modern 4c/8t is like an old 6c/12t. Per-core perf is about 20-30% up on average, and that includes the higher frequency... so IPC is only about 15% up: not impressive. Of course, in some loads like AVX2-heavy apps the IPC gain could be 50%, but such a case is not common.
  • martixy - Monday, May 13, 2019 - link

    Oh man... I just upgraded my 2600K to a 9900K and a couple days later this article drops...
    The timing is impeccable!

    If I ever had a shred of buyer's remorse, the article conclusion eradicated it thoroughly. Give me more FPS.

    I saw a screenshot of StarCraft 2, on a mission which I, again, coincidentally (this is uncanny) played today. I can now report that the 9900K can FINALLY feed my graphics card in SC2 properly. With the 2600K I'd be around 20-60 FPS depending on load and the intensity of the action. With the new processor, it barely ever drops below 60 and usually hovers around 90 FPS. In-game cinematics also finally run above the "cinematic" 30 FPS I saw on my trusty old 2600K.
