Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, instead opening all the options up to the user, and it can push even the hardest systems to the limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game produces stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
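
As a rough illustration of how that frame time output can be consumed, here is a minimal Python sketch that loads one pass's dump into a list of per-frame times. The file name and the one-value-per-line layout are assumptions for illustration only, not the actual format the game writes.

    # Minimal sketch: load one benchmark pass's frame time dump.
    # Assumption: a plain-text file with one frame time (in milliseconds) per line;
    # the game's real output format may differ.
    from pathlib import Path

    def load_frame_times_ms(path: str) -> list[float]:
        """Return per-frame times in milliseconds from a plain-text dump."""
        return [float(line) for line in Path(path).read_text().splitlines() if line.strip()]

    frame_times = load_frame_times_ms("gtav_run1_frametimes.txt")
    print(f"{len(frame_times)} frames over {sum(frame_times) / 1000.0:.1f} s of the action sequence")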

There are no presets for the graphics options in GTA: the user adjusts options such as population density and distance scaling on sliders, while others such as texture/shadow/shader/water quality are set on a scale from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance. There is a handy readout at the top which shows how much video memory the selected options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although it gives no obvious warning in the opposite case, where a low-end GPU has plenty of video memory, such as an R7 240 4GB).

To that end, we run the benchmark at 1920x1080 with settings averaging Very High, and also at 4K with most settings at High. We take the average results of four runs, reporting frame rate averages, 99th percentiles, and our 'time under' analysis.
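
For clarity, the sketch below shows how such numbers could be derived from the captured frame times and averaged over four passes. The 60 FPS floor for the 'time under' figure and the file naming are illustrative assumptions, not necessarily the thresholds or formats used in our testing.

    # Minimal sketch: average FPS, 99th percentile frame time, and time spent
    # under a chosen FPS floor, averaged across four benchmark passes.
    # The 60 FPS floor and the file names are assumptions for illustration.
    import statistics
    from pathlib import Path

    def load_frame_times_ms(path: str) -> list[float]:
        # One frame time (ms) per line, as in the earlier loading sketch.
        return [float(line) for line in Path(path).read_text().splitlines() if line.strip()]

    def summarize_run(frame_times_ms: list[float], fps_floor: float = 60.0):
        total_s = sum(frame_times_ms) / 1000.0
        avg_fps = len(frame_times_ms) / total_s
        p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th percentile frame time
        floor_ms = 1000.0 / fps_floor
        time_under_s = sum(t for t in frame_times_ms if t > floor_ms) / 1000.0
        return avg_fps, p99_ms, time_under_s

    # Average the per-run results over four passes.
    summaries = [summarize_run(load_frame_times_ms(f"gtav_run{i}_frametimes.txt")) for i in range(1, 5)]
    avg_fps, p99_ms, time_under_s = (statistics.mean(col) for col in zip(*summaries))
    print(f"Avg: {avg_fps:.1f} FPS | 99th percentile frame time: {p99_ms:.1f} ms "
          f"| Time under 60 FPS: {time_under_s:.2f} s")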

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance

[Charts: GTA V results for the MSI GTX 1080 Gaming 8G at 1080p and at 4K]

222 Comments

  • Chaser - Thursday, October 5, 2017

    Thank you AMD.
  • vanilla_gorilla - Thursday, October 5, 2017

    Exactly! No matter what side you're on, you gotta love the fact that competition is back in the x86 desktop space! And it looks like AMD 1700X is now under $300 on Amazon. Works both ways!
  • TEAMSWITCHER - Friday, October 6, 2017

    I just don't see it this way. Since the release of Haswell-E in 2014 we've had sub-$400 six-core processors. While some like to compartmentalize the industry into mainstream and HEDT, the fact is, I built a machine with similar performance three years ago, for a similar price. Today's full-featured Z370 motherboards (like the ROG Maximus X) cost nearly as much as X99 motherboards from 2014. To say that Intel was pushed by AMD is simply not true.
  • watzupken - Friday, October 6, 2017

    I feel the fact that Intel had to rush a 6-core mainstream processor out in the same year they introduced Kaby Lake is a sign that AMD is putting pressure on them. You may have found a Haswell-E chip for sub-400 bucks in 2014, but you need to be mindful that Intel has historically only increased prices due to the lack of competition. Now you are seeing a 6-core mainstream chip from both AMD and Intel below 200 bucks. Motherboard prices are difficult to compare since there are lots of motherboards out there that are over-engineered and cost significantly more. Assuming you pick the cheapest Z370 motherboard out there, I don't believe it's more expensive than an X99 board.
  • mapesdhs - Friday, October 6, 2017

    KL-X is dead, that's for sure. Some sites claim CFL was not rushed, in which case Intel knew KL-X would be pointless when it was launched. People claiming Intel was not affected by AMD have to choose: either CFL was rushed because of pressure from AMD, or Intel released a CPU for a mismatched platform they knew would be irrelevant within months.

    There's plenty of evidence Intel was in a hurry here, especially the way X299 was handled, and the horrible heat issues, etc. with SL-X.
  • mapesdhs - Friday, October 6, 2017

    PS. Is it just me or are we almost back to the days of the P4, where Intel tried to maintain a lead by doing little more than raising clocks? It wasn't that long ago that there was much fanfare when Intel released its first minimum-4GHz part (the 4790K IIRC), even though we all knew they could run their CPUs way quicker than that if need be (stock-voltage overclocking has been very productive for a long time). Now all of a sudden Intel is nearing 5GHz speeds, but it's kinda weird there's no accompanying fanfare given the reaction to their finally reaching 4GHz with the 4790K. At least in the mainstream, has Intel really just reverted to a MHz race to keep its performance up? Seems like it, but OS issues, etc. are preventing those higher bins from kicking in.
  • KAlmquist - Friday, October 6, 2017

    Intel has been pushing up clock speeds, but (unlike the P4), not at the expense of IPC. The biggest thing that Intel has done to improve performance in this iteration is to increase the number of cores.
  • mapesdhs - Tuesday, October 10, 2017

    Except in reality it's often not that much of a boost at all, and in some cases it's slower because of how the OS is affecting turbo levels.

    Remember, Intel could have released a CPU like this a very long time ago. As I keep having to remind people, the 3930K was an 8-core chip with two cores disabled. Back then, AMD couldn't even compete with SB, never mind SB-E, so Intel held back, and indeed X79 never saw a consumer 8-core part, even though the initial 3930K was a Xeon-sourced, crippled 8-core.

    The same applies to the mainstream: we could have had 6-core models ages ago. All they've really done to counter the lack of IPC improvements is boost the clocks way up. Standard bins are now approaching speeds that years ago were considered top-notch overclocks, achievable only with giant air coolers, decent AIOs or better.
  • wr3zzz - Thursday, October 5, 2017

    I hope Anandtech solves the Civ6 AI benchmark soon. It's almost as important as the compression and encoding benchmarks for me in deciding CPU price-performance, as I am almost always GPU-constrained in games.
  • Ian Cutress - Saturday, October 7, 2017

    We finally got in contact with the Civ6 dev team to integrate the AI benchmark into our suite better. You should see it moving forward.
