Gaming: Grand Theft Auto V

The highly anticipated Grand Theft Auto V hit PC shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets; instead it opens up the individual options to users, and it can push even the most capable systems to their limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the player is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game produces stunning visuals when cranked up to maximum, along with plenty of hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, causing other cars to explode as well. This mixes long-distance rendering with a detailed up-close action sequence, and the title thankfully spits out frame time data.

There are no presets for the graphics options in GTA V: settings such as population density and distance scaling are adjusted on sliders, while others such as texture/shadow/shader/water quality step from Low to Very High. Further options cover MSAA, soft shadows, post effects, shadow resolution, and extended draw distances. A handy readout at the top shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no warning in the opposite case, where a low-end GPU ships with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

Grand Theft Auto V results (charts): Average FPS and 95th Percentile, each measured at the IGP, Low, Medium, and High test settings.
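For readers curious how figures like these fall out of raw frame time data, below is a minimal sketch in C. It assumes a hypothetical plain-text log, frametimes.txt, with one frame time in milliseconds per line; the actual format GTA V emits may differ.

```c
/* Minimal sketch: compute Average FPS and a 95th percentile figure from a
 * frame time log. Assumes a hypothetical "frametimes.txt" with one frame
 * time in milliseconds per line. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_FRAMES 100000

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static double times_ms[MAX_FRAMES];
    size_t n = 0;
    double sum_ms = 0.0, t;

    FILE *f = fopen("frametimes.txt", "r");
    if (!f) { perror("frametimes.txt"); return 1; }
    while (n < MAX_FRAMES && fscanf(f, "%lf", &t) == 1) {
        times_ms[n++] = t;
        sum_ms += t;
    }
    fclose(f);
    if (n == 0) return 1;

    /* Average FPS: total frames divided by total seconds. */
    double avg_fps = (double)n / (sum_ms / 1000.0);

    /* 95th percentile frame time: sort ascending and take the value that
     * 95% of frames come in under, then express it as an FPS figure. */
    qsort(times_ms, n, sizeof times_ms[0], cmp_double);
    double p95_ms = times_ms[(size_t)(0.95 * (double)(n - 1))];

    printf("Average FPS:     %.1f\n", avg_fps);
    printf("95th percentile: %.1f FPS (%.2f ms)\n", 1000.0 / p95_ms, p95_ms);
    return 0;
}
```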

Comments

  • Drazick - Sunday, November 17, 2019 - link

    The DDR technology is orthogonal to the channel count.
    I want quad-channel and the latest memory available.
  • guyr - Friday, December 20, 2019 - link

    Anything is possible, of course. 5 years ago, who would have predicted 16 cores in a consumer-oriented CPU? However, neither Intel nor AMD has made any moves beyond 2 memory channels in the consumer space. The demand is simply not there to justify the increase in complexity and price. In the professional space, more channels are easily justified and the target market doesn't hesitate to pay the higher prices. So, it's all driven by what the market will bear.
  • alufan - Saturday, November 16, 2019 - link

    Weird: Intel launches its chip a couple of weeks ago and it stayed up front as the main story for over a week. AMD launches what is in effect the best CPU ever tested by this site, and it lasts a few days before being pushed aside for another Intel article. I'm sure the intention of the reporters is to be fair and unbiased, but I can see how the commercial motives of the site could be manipulated; looks like Intel's up to its old tricks again. The Threadripper article lasted even less time, but no chips have been tested (or at least released) yet, which I guess makes sense.
  • penev91 - Sunday, November 17, 2019 - link

    Just ignore everything Intel/AMD related on Anandtech. There's been an obvious bias for years.
  • Atom2 - Saturday, November 16, 2019 - link

    There has never been a situation as big as this one, where the benchmark software was scrutinized more than the hardware. Comprehensive overview of historic software development? Whatever the reason, restricting AVX-512 to only a select few CPUs seems to have been an unfortunate decision by Intel, which only contributed to the situation. Yes, you know, if you compile your code with a compiler from 1998 and ignore all the guidelines on how to write fast code... voila. For some reason, though, nobody tries to run 20-year-old CPU code on a GPU.
  • chrkv - Monday, November 18, 2019 - link

    Second page "On the Ryzen High Performance power plan, our sustained single core frequency dropped to 4450 MHz" - I believe just "the High Performance" should be here.
    Page 4 "Despite 5.0 GHz all-core turbo being on the 9900K" - should be "9900KS".
  • Irata - Tuesday, November 19, 2019 - link

    Quick question: Are any of your benchmarks affected by the MATLAB issue (Ryzen CPUs are crippled because a poor code path is chosen due to a vendor ID check for "GenuineIntel")?
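For context on the check being described here: a library can read the CPUID vendor string at startup and gate its optimized code paths on the result. Below is a minimal sketch of such a check using the __get_cpuid helper from GCC/Clang's <cpuid.h>; the dispatch policy and messages are illustrative, not MATLAB's actual code.

```c
/* Illustrative vendor ID check: read the CPUID vendor string and branch
 * on it. The dispatch policy shown here is a sketch, not MATLAB's code. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    /* The vendor string is the 12 bytes of EBX, EDX, ECX, in that order. */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    if (strcmp(vendor, "GenuineIntel") == 0) {
        puts("fast SIMD code path selected");
    } else {
        /* An "AuthenticAMD" CPU lands here even when it supports the
         * same ISA extensions, which is the crux of the complaint. */
        puts("slower generic code path selected");
    }
    return 0;
}
```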
  • twotwotwo - Tuesday, November 19, 2019 - link

    Intel's had these consumer-platform-based "entry-level Xeons" (once E3, now E) for a while. Despite some obvious limits, and that there are other low-end server options, enough folks want 'em to seed an ecosystem of rackmount and blade servers from Supermicro, Dell, etc.

    Anyway, the "pro" (ECC/management enabled) variant of Ryzen seems like a great fit for that. 16 cores and 24 PCIe 4 lanes are probably more useful for little servers than for most desktop users. It's also more balanced than the 8/16C EPYCs; it's cool they have 128 lanes and tons of memory channels, but it takes very specific applications to use them all with that few cores (caching?). Ideally the lesser I/O and lower TDPs also help make denser/cheaper boxes, and the consumer-ish clocks pay off for some things.

    The biggest argument against is that the entry-level server market is probably shrinking anyway as users rent tiny slices of huge boxes from cloud providers instead. It also probably doesn't have the best margins. So maybe you could release a competitive product there and still not make all that much off it.
  • halfflat - Thursday, November 21, 2019 - link

    Very curious about the AVX512 vs AVX2 results for 3dPM. It's really unusual to see even a 2x performance increase going from AVX2 to AVX512 on the same architecture, given that running AVX512 instructions will lower the clock.

    The non-AVX versions, I'm presuming, are utilizing SSE2.

    The i9-9900K gets a factor of 2 increase going from SSE2 to AVX2, which is pretty much what one would expect with twice as many fp operations per instruction. But the i9-7960X performance with AVX512 is *ten times* improved over SSE2, when the vector is only four times as wide and the cores will be running at a lower clock speed.

    Is there some particular AVX512-only operation that is determining this huge performance gap? Some further analysis of these results would be very interesting.
  • AIV - Wednesday, November 27, 2019 - link

    Somebody posted that it's caused by 64-bit integer multiplies, which are supported in AVX-512 but not in AVX2, and thus fall back to scalar operations.
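To illustrate that explanation: AVX-512DQ provides a packed 64-bit integer multiply, _mm512_mullo_epi64, while plain AVX2 has no 256-bit equivalent, so an AVX2 build has to do those multiplies one at a time. The kernel below is a hypothetical sketch of the two paths, not 3DPM's actual code; build the AVX-512 version with -mavx512dq.

```c
/* Sketch: packed 64-bit multiply with AVX-512DQ vs. the scalar fallback
 * an AVX2-only target is left with. Hypothetical kernel, not 3DPM's code. */
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>

/* Scalar fallback: one 64-bit multiply per iteration. */
void mul64_scalar(uint64_t *dst, const uint64_t *a, const uint64_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] * b[i];
}

#ifdef __AVX512DQ__
/* AVX-512DQ: eight 64-bit multiplies per instruction. */
void mul64_avx512(uint64_t *dst, const uint64_t *a, const uint64_t *b, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m512i va = _mm512_loadu_si512(a + i);
        __m512i vb = _mm512_loadu_si512(b + i);
        _mm512_storeu_si512(dst + i, _mm512_mullo_epi64(va, vb));
    }
    for (; i < n; i++)  /* handle the remainder scalar-wise */
        dst[i] = a[i] * b[i];
}
#endif
```

An eight-wide multiply plus the removal of per-element loop overhead can more than offset the AVX-512 clock penalty, which is consistent with the large gap halfflat observed.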
