GRID: Autosport

No graphics test suite is complete without some input from Codemasters and the EGO engine, which for this round of testing means GRID: Autosport, the latest iteration in the GRID racing series. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making 'authenticity' a main focal point for this version.

GRID’s benchmark mode is very flexible, so we created a test race using a shortened version of the Red Bull Ring with twelve cars completing two laps. The car in focus starts last and is quite fast, usually finishing second or third. Both the average and minimum frame rates are recorded.
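As a rough illustration of how the two recorded metrics relate to per-frame render times, the average and minimum frame rates can be derived as follows (a minimal sketch; the sample frame times are made up for illustration and are not the benchmark's actual logging format):

```python
# Hypothetical per-frame render times in milliseconds, as a benchmark
# run might log them (these sample values are illustrative only).
frame_times_ms = [16.1, 17.0, 15.8, 33.5, 16.4, 16.2, 18.9, 16.0]

# Average FPS: total frames divided by total elapsed time in seconds.
average_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# Minimum FPS: the single slowest frame sets the floor.
minimum_fps = 1000.0 / max(frame_times_ms)

print(f"Average: {average_fps:.1f} FPS, Minimum: {minimum_fps:.1f} FPS")
```

Note that the minimum is far more sensitive to a single stutter than the average, which is why both numbers are reported.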

For this test we used the following settings with our graphics cards:

GRID: Autosport Settings

             GPU                          Resolution   Quality
Low GPU      Integrated Graphics          1920x1080    Medium
             ASUS R7 240 1GB DDR3
Medium GPU   MSI GTX 770 Lightning 2GB    1920x1080    Maximum
             MSI R9 285 Gaming 2G
High GPU     ASUS GTX 980 Strix 4GB       1920x1080    Maximum
             MSI R9 290X Gaming 4G

Integrated Graphics

[Graphs: GRID: Autosport on Integrated Graphics, average and minimum FPS]

The difference between the AMD APUs and the Intel CPUs again amounts to a 33-50% gap in frame rates, to the point where at 1080p Medium the integrated graphics solutions fail to hold a 30 FPS minimum. The higher GPU frequency and larger L3 cache of the i3-6300 again show in its lead over the i3-6100.
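The percentage gap quoted here is simple arithmetic on the measured frame rates; as a sketch (with made-up frame-rate values, not the actual chart data):

```python
def fps_gap_percent(faster_fps, slower_fps):
    """Percentage by which the faster result leads the slower one."""
    return (faster_fps - slower_fps) / slower_fps * 100.0

# Illustrative values only, not the measured results from the charts:
intel_fps, apu_fps = 40.0, 28.0
print(f"Gap: {fps_gap_percent(intel_fps, apu_fps):.0f}%")  # roughly 43%
```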

Discrete Graphics

[Graphs: GRID: Autosport on ASUS R7 240 DDR3 2GB ($70), average and minimum FPS]

[Graphs: GRID: Autosport on MSI R9 285 Gaming 2GB ($240), average and minimum FPS]

[Graphs: GRID: Autosport on MSI GTX 770 Lightning 2GB ($245), average and minimum FPS]

[Graphs: GRID: Autosport on MSI R9 290X Gaming LE 4GB ($380), average and minimum FPS]

[Graphs: GRID: Autosport on ASUS GTX 980 Strix 4GB ($560), average and minimum FPS]

With the discrete GPUs, there are multiple avenues to take with this analysis.

On the low-end cards, the choice of CPU makes little difference in our tests.

On the mid-range and high-end cards, the power of the CPU has more of an effect with AMD discrete cards than with NVIDIA discrete cards, except when the AMD Athlon X4 845 is in play. When paired with a mid-range AMD discrete card, the X4 845 competes well enough with the i3 parts for its price, but falls further behind on the high-end AMD discrete GPU. With NVIDIA GPUs, the Athlon X4 845 sits at the bottom and the main challengers are the FX parts.

So for the EGO engine, the rules of thumb would seem to be:

AMD Carrizo CPU + AMD discrete GPU is OK; the lower-powered the GPU, the better.
AMD FX CPU + NVIDIA discrete GPU is OK.
Intel CPU + any discrete GPU works well.

One could attribute the differences between the discrete GPU choices to driver implementation, IPC, or how each GPU vendor focuses its per-game optimization (frequency vs. threads vs. caches).

94 Comments

  • Andr3w - Friday, September 16, 2016 - link

    Hello guys! I currently own an 860K overclocked to 4.2 GHz paired with a Sapphire R7 370 2GB. After reading this review, am I right to understand that an i3-6100 paired with the same R7 370 would perform better in gaming? Correct me if I am wrong!

    Note: I currently think the 860K bottlenecks my R7 370 in Tom Clancy's The Division. I am saying this because the readings from MSI Afterburner show the following stats at medium settings, 1920x1080 resolution, V-Sync off:

    GPU usage: 65-70% with 1800 MB VRAM usage
    CPU usage on all 4 cores: 98-100%

    On the other hand, in Star Wars Battlefront, at high settings, 1920x1080 resolution, V-Sync off, I've read the following stats:

    GPU usage: 100% with about 1700 MB VRAM usage
    CPU usage: 65-70% on all 4 cores.

    So would it be worth changing to an i3-6100?
  • KosOR - Tuesday, October 30, 2018 - link

    Recently, I had an opportunity to purchase a cheaper new Haswell or Skylake motherboard together with a cheaper second-hand i3 processor, and I searched and found this article comparing the i3-6100 and i3-4330 processors. I also compared both CPUs against the PassMark and UserBenchmark results (cpubenchmark.net, userbenchmark.com). The cumulative performance difference there was no greater than 15%, a figure that also seems consistent with the architecture improvement and slightly higher clock speeds. That's why I was very surprised to see a much larger performance difference in this article for almost all real-world tests (Dolphin Benchmark, 3D Particle Movement v2, Mozilla Kraken and Google Octane v2). Personally, I could not find any logical reason explaining those elevated performance numbers for the Skylake i3 CPUs. Can anybody explain to me why we see such big (greater than 30%) real-world performance differences in all of those tests?
  • KosOR - Tuesday, October 30, 2018 - link

    Those higher-than-15% performance numbers are also observable in two other test results: HandBrake v0.9.9 2x4K and Hybrid x265. Can anybody find an explanation in any of the architecture improvements from the Haswell to the Skylake generation of CPUs?
  • KosOR - Tuesday, October 30, 2018 - link

    Yet another absurd result is that of the Pentium G3420, which beats the i3-4330 in the Dolphin benchmark by more than 15%. It looks absurd that a 2-thread, 3.2 GHz, 3 MB cache Haswell processor achieves a 15% better result than a 4-thread, 3.5 GHz, 4 MB cache Haswell processor. Such results make me suspicious of all the other results on the charts, sorry.
