Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets, but instead opens the options up to users and pushes even the hardest systems to the limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
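
The two numbers reported in the charts, average FPS and the 95th percentile, can both be derived from that frame time log. As a rough sketch (the function names and frame-time values below are hypothetical illustrations, not Rockstar's or AnandTech's actual tooling):

```python
# Sketch: deriving average FPS and a percentile FPS figure from a
# per-frame frame-time log (milliseconds). All data here is made up.

def average_fps(frame_times_ms):
    """Average FPS over the run: frames rendered divided by total time."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_fps(frame_times_ms, pct=95):
    """FPS at the given percentile of frame times; the 95th percentile
    means 95% of frames rendered at least this fast."""
    ordered = sorted(frame_times_ms)  # slowest frames end up at the back
    idx = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return 1000.0 / ordered[idx]

# Hypothetical frame-time log with one stutter frame (25.1 ms):
times = [10.2, 11.0, 9.8, 10.5, 25.1, 10.1, 10.4, 9.9, 10.3, 10.0]
print(round(average_fps(times), 1))     # → 85.3
print(round(percentile_fps(times), 1))  # → 39.8
```

The gap between the two numbers is the point: a single stutter frame barely moves the average, but it dominates the percentile figure, which is why percentile data is reported alongside averages.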

AnandTech CPU Gaming 2019 Game List
Game                Genre       Release Date  API   IGP       Low         Medium           High
Grand Theft Auto V  Open World  Apr 2015      DX11  720p Low  1080p High  1440p Very High  4K Ultra

There are no presets for the graphics options in GTA V: the user adjusts some options, such as population density and distance scaling, on sliders, and others, such as texture/shadow/shader/water quality, from Low to Very High. Further options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy readout at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).
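
That readout can be thought of as a simple running sum over per-option cost estimates compared against the card's VRAM. The sketch below is purely illustrative; the option names and megabyte costs are invented for the example, not Rockstar's real figures:

```python
# Illustration (all numbers hypothetical): sum a per-option memory
# estimate and warn when the total exceeds the card's video memory.

VRAM_COST_MB = {  # hypothetical per-setting costs in megabytes
    ("texture_quality", "Very High"): 1536,
    ("shadow_quality", "Very High"): 512,
    ("msaa", "4x"): 768,
    ("extended_draw_distance", "On"): 384,
}

def estimated_usage_mb(settings):
    """Total estimated video memory for the chosen settings."""
    return sum(VRAM_COST_MB.get(s, 0) for s in settings)

def over_budget(settings, card_vram_mb):
    """True when the estimate exceeds what the card actually has."""
    return estimated_usage_mb(settings) > card_vram_mb

chosen = [("texture_quality", "Very High"), ("shadow_quality", "Very High"),
          ("msaa", "4x"), ("extended_draw_distance", "On")]
print(estimated_usage_mb(chosen))  # → 3200
print(over_budget(chosen, 2048))   # → True (e.g. a 2 GB card)
```

Note that, as the article points out, an estimate like this only flags exceeding the budget; it says nothing about whether the GPU itself is fast enough to use that memory.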

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: GTA V Average FPS and 95th Percentile results at IGP, Low, Medium, and High settings]

133 Comments


  • mapesdhs - Saturday, February 2, 2019 - link

    Is that the same issue as the one referring to running on core zero? I watched a video about it recently but I can't recall if it was L1T or elsewhere.
  • jospoortvliet - Sunday, February 3, 2019 - link

    It is that issue, yes. Blocking use of core zero is a workaround that kind of works.
  • jospoortvliet - Sunday, February 3, 2019 - link

    (in some workloads, not all)
  • Coolmike980 - Monday, February 4, 2019 - link

    So here's my thing: why can't we have good benchmarks? Nothing here on Linux, and nothing in a VM. I'd be willing to bet good money I could take a 2990, run Linux, run 5 VMs of 6 cores each, run these benchmarks (the non-GPU-dependent ones), and collectively beat the pants off of this CPU under any condition you want to run it. Also, this Civ 6 thing: the only benchmark that would be of any value would be the CPU one, and they've been claiming to want to make this work for 2 years now. Either get it working, or drop it altogether. Rant over. Thanks.
  • FlanK3r - Wednesday, January 30, 2019 - link

    Where are the Cinebench R15 results? They're in the testing methodology, but I can't find them in the results :)
  • MattsMechanicalSSI - Wednesday, January 30, 2019 - link

    der8auer did a delid video, and a number of CB runs. https://www.youtube.com/watch?v=aD9B-uu8At8 Also, Steve at GN has had a good look at it. https://www.youtube.com/watch?v=N29jTOjBZrw
  • MattZN - Wednesday, January 30, 2019 - link

    @MattsMechanicalSSI Yup... both are very telling.

    I give the 3175X a pass on DDR connectivity (from the der8auer video) since he's constantly having to socket and unsocket the chip, but I agree with him that there should be a carrier for a chip that large. Depending on the user to guess the proper pressure is a bad idea.

    But note particularly the GN review around 16:00 or so, where we see the 3175X pulling 672W at the wall (OC) for a tiny improvement in time over the 2990WX. Both AMD and Intel goose these CPUs, even at stock, but the Intel numbers are horrendous. They aren't even trying to keep wattages under control.

    The game tests are more likely an issue with the Windows scheduler (a la Wendell's work). And the fact that nobody in their right mind runs games on these CPUs.

    The Xeon is certainly a faster CPU, but the price and the wattage cost kinda make it a non-starter. There's really no point to it, not even for professional work. Steve (GN) kinda thinks that there might be a use case with Premiere but... I don't really. At least not for the ~5 months or so before we get the next node on AMD (and ~11 months for Intel).

    -Matt
  • mapesdhs - Saturday, February 2, 2019 - link

    Cinebench is badly broken at this level of cores; it's not scaling properly anymore. See:

    https://www.servethehome.com/cinebench-r15-is-now-...
  • Kevin G - Wednesday, January 30, 2019 - link

    For $3000 USD, a 28 core unlocked Xeon chip isn't terribly bad. The real issue is its incredibly low volume nature and that in effect only two motherboards are going to be supporting it. LGA 3647 is a widespread platform, but the high 255W TDP keeps it isolated.

    Oddly, I think Intel would have had better success if they had also simultaneously launched an unlocked 18 core part with even higher base/turbo clocks. This would have threaded the needle better in terms of per-thread performance and overall throughput. The six-channel memory configuration would have assisted in performance to distinguish itself from the high-end Core i9 Extreme chips.

    The other aspect is that there is no clear upgrade path from the current chips: pretty much a one chip to board ratio for the lifetime of the product. There is a lot on the Xeon side Intel has planned, like on-package FPGAs, Omni-Path fabric, and Nervana accelerators, which could stretch their wings with a 255 W TDP. The Xeon Gold 6138P is an example of this as it comes with an Arria 10 FPGA inside but a slightly reduced clock 6138 die as well at a 195 W TDP. At 255 W, that chip wouldn't have needed to compromise the CPU side. For the niche market Intel is targeting, an FPGA solution would be interesting if they pushed ideas like OpenCL and DirectCompute to run on the FPGA alongside the CPU. Doing something really bold like accelerating PhysX on the FPGA would have been an interesting demo of what that technology could do. Or leverage the FPGA for DSP audio effects in a full 3D environment. That'd give something for these users to look forward to.

    Well, there is the opportunity to put other LGA 3647 parts into these boards, but starting off with a 28 core unlocked chip means that other offerings are a downgrade. With luck, Ice Lake-SP would be an upgrade, but Intel hasn't committed to it on LGA 3647.

    Ultimately this looks like AMD's old 4x4/QuadFX efforts that'll be quickly forgotten by history.

    Speaking of AMD, Intel missing the launch window by a few months places it closer to the imminent launch of new Threadripper designs leveraging Zen 2 and AMD's chiplet strategy. I wouldn't expect AMD to go beyond 32 cores for Threadripper, but the common IO die should improve performance overall on top of the Zen 2 improvements. Intel has some serious competition coming.
  • twtech - Wednesday, January 30, 2019 - link

    Nobody really upgrades workstation CPUs, but it sounds like getting a replacement in the event of failure could be difficult if the stock will be so limited.

    If Dell and HP started offering this chip in their workstation lineup - which I don't expect to happen given the low-volume CPU production and needing a custom motherboard - then I think it would have been a popular product.
