Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit PC shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets, but instead opens up the options to users and pushes even the toughest systems to their limits using Rockstar's Advanced Game Engine under DirectX 11. Whether the player is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game delivers stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, causing other cars to explode as well. This mixes long-distance rendering with a detailed close-up action sequence, and the title thankfully spits out frame time data.
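
As a rough sketch of what that post-processing looks like (a minimal example, assuming the log is a flat list of per-frame render times in milliseconds; the function name and input format here are illustrative, not Rockstar's actual output format), the frame time data reduces to the average and 95th percentile figures we chart along these lines:

```python
import numpy as np

def summarize_frame_times(frame_times_ms):
    """Reduce a per-frame render time log (in ms) to chartable metrics."""
    ft = np.asarray(frame_times_ms, dtype=float)
    # Average FPS over the run: total frames divided by total seconds.
    avg_fps = len(ft) / (ft.sum() / 1000.0)
    # The 95th percentile frame time bounds the slowest 5% of frames;
    # inverting it expresses that worst-case behavior as a frame rate.
    p95_fps = 1000.0 / np.percentile(ft, 95)
    return avg_fps, p95_fps

# Example with a fabricated three-frame log (times in ms):
print(summarize_frame_times([16.7, 33.3, 16.7]))  # ~45 avg FPS, ~32 p95 FPS
```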

There are no presets for the graphics options in GTA V: some settings, such as population density and distance scaling, are adjusted on sliders, while others, such as texture/shadow/shader/water quality, step from Low to Very High. Further options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. A handy readout at the top shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).

AnandTech CPU Gaming 2019 Game List

| Game               | Genre      | Release Date | API  | IGP      | Low        | Med             | High     |
|--------------------|------------|--------------|------|----------|------------|-----------------|----------|
| Grand Theft Auto V | Open World | Apr 2015     | DX11 | 720p Low | 1080p High | 1440p Very High | 4K Ultra |
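
To make the sweep across those four tiers concrete, here is a minimal harness sketch under stated assumptions: the install path, the -benchmark launch argument, and the settings-writing helper are all placeholders for illustration, not GTA V's documented interface.

```python
import subprocess
from pathlib import Path

# The four test tiers from the table above: (label, resolution, quality).
CONFIGS = [
    ("IGP",  (1280, 720),  "Low"),
    ("Low",  (1920, 1080), "High"),
    ("Med",  (2560, 1440), "Very High"),
    ("High", (3840, 2160), "Ultra"),
]

GAME_EXE = Path(r"C:\Games\GTAV\GTA5.exe")  # assumed install path

def apply_settings(width, height, quality):
    """Stub: write the desired resolution and quality tier into the
    game's settings file before launch (format omitted here)."""
    print(f"configuring {width}x{height} at {quality}")

def run_sweep():
    for label, (width, height), quality in CONFIGS:
        apply_settings(width, height, quality)
        # "-benchmark" is a placeholder for however the harness
        # triggers the scripted in-game benchmark run.
        subprocess.run([str(GAME_EXE), "-benchmark"], check=True)
        print(f"{label} tier done; frame time log ready for analysis")

if __name__ == "__main__":
    run_sweep()
```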

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Grand Theft Auto V Average FPS and 95th Percentile results at the IGP, Low, Medium, and High test settings]

We see performance parity between the chips at 4K, but at all other resolutions and settings the overclocked chip still can't reach the level of the Core i7-7700K, often sitting midway between the stock 7700K and the stock 2600K.

Comments

  • Targon - Monday, May 13, 2019 - link

    I made a similar comment; Civ6 added a new benchmark with Gathering Storm as well that is even more resource intensive. Turn length will show what your CPU can do, without GPU issues getting in the way.
  • Zoomer - Friday, June 14, 2019 - link

    The article says that benchmark is being developed.
  • nonoverclock - Friday, May 10, 2019 - link

    Interesting article! I'm still sitting on an i7 4770 and am debating an upgrade; it would also be interesting to see a Haswell i7 in the mix.
  • HomerrK - Friday, May 10, 2019 - link

    I'm one of those who bought the 2600K back in the day. A few months ago I made the move to the 9900K. Cores and price don't matter so much as the sense that it's a chip that will offer great bang for the buck for years. I think it is the spiritual successor to the 2600K and that it was a mistake to omit it.
  • RSAUser - Saturday, May 11, 2019 - link

    Not even close, it's near double the price.
    The Ryzen 2700 at $300 would be a way better "successor" as it's within a lot of people's budgets, offers good gaming performance and with 8 cores is probably going to last quite a while as we move to higher threading.

    The Ryzen 2 chips moving to 7nm will probably have the largest leap in a while, so whichever one comes in around the $300 mark will probably be the "true" successor of the 2600K.
  • Targon - Monday, May 13, 2019 - link

    The issue that some will have with the 2700X is that the clock speeds are not up there at the 5GHz mark, which is what many Intel systems have been able to hit for over four years now. Third generation Ryzen should get to the 5GHz mark or possibly beyond, so there wouldn't be any compromises. Remember, extra cores will only result in better performance in some areas, but single threaded and many older programs benefit more from higher clock speeds (with similar IPC).

    Don't get me wrong, I have a Ryzen 7 1800X in this machine and wouldn't step down to a quad-core chip again on the desktop, but I do appreciate that some things just want higher clock speeds. I expect a 40 percent boost in overall performance by switching from this 1800X to the 16-core Ryzen if it hits 5GHz, and that doesn't even count the increase in core count. I may end up paying $600 or more for the CPU, but that will keep me happy for at least another five years.
  • crimson117 - Friday, May 10, 2019 - link

    Finally retired my i5-2500K last spring for a Ryzen 2700X.

    But boy what a good run that CPU had.
  • jayfang - Friday, May 10, 2019 - link

    Likewise, I only recently "demoted" my i5-2500K; it still has tons of grunt as a family PC / HTPC.
  • gijames1225 - Friday, May 10, 2019 - link

    Same boat. I used a 2400k and 2500k for my two main PCs for years and years. Just replaced the 2500k with a Ryzen 5 1600 (they were $80 at Microcenter for some blessed reason). Tripling the thread count has done wonders for my compile times, but it's just amazing how strong and long-lasting the IPC was on the 2nd generation Core i processors.
  • qap - Friday, May 10, 2019 - link

    You've convinced me. Staying with my Sandy Bridge for another year. At 1600p the difference in CPU is not that high (definitely not worth $1000+ for a completely new system), and for day-to-day work it is plenty fast. Up to four threads there's very little to gain, and only when more threads are in play is there a large enough difference (the same goes for Ryzen, only there I would gain almost nothing up to four threads).
    Perhaps Zen 2 will change that, or maybe 10nm CPUs from Intel when they finally arrive with a new CPU architecture and not a rehash of the four-year-old Skylake.
