Discrete Graphics Performance

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed; however, when it comes to gaming, it can be unpredictable where those gains are going to come from.
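As a quick sanity check on the arithmetic, the sweep and the quoted 14.3% figure can be reproduced in a few lines of Python (the step values are taken from the text above; this is purely illustrative):

```python
# Clock-speed sweep from the article: 3.5 GHz to 4.0 GHz in 100 MHz steps.
steps = [3.5 + 0.1 * i for i in range(6)]

# Relative increase from the lowest to the highest step.
increase = (steps[-1] / steps[0] - 1) * 100

print([round(s, 1) for s in steps])  # [3.5, 3.6, 3.7, 3.8, 3.9, 4.0]
print(f"{increase:.1f}%")            # 14.3%
```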

For our gaming tests, we are only concerned with real-world resolutions and settings for these games. It would be fairly easy to adjust the settings in each game to create a CPU-limited scenario; however, the results from such a test are mostly pointless and non-transferable to the real world in our view. Scaling takes many forms, based on GPU, resolution, detail levels, and settings, so we want to make sure the results correlate to what users will see day-to-day.

Civilization 6

First up in our CPU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter spent trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.

[Graphs: Civilization 6 on ASUS GTX 1060 Strix 6GB - Average Frames Per Second / 99th Percentile]

Despite not showing any great scaling with integrated graphics, the moment we bump up to our discrete GPU we can see that Civilization gets a good boost from frequency scaling. The average frame rates climb by +7.0% for the 2400G and +9.7% for the 2200G. The percentile numbers vary on the 2400G, but the 2200G gets a distinct +10.6% gain.

Ashes of the Singularity (DX12)

Seen as the holy child of DX12, Ashes of the Singularity (AoTS, or just Ashes) has been the first title to actively go and explore as many of the DX12 features as it possibly can. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

[Graphs: Ashes of the Singularity on ASUS GTX 1060 Strix 6GB - Average Frames Per Second / 99th Percentile]

AoTS is again a little all over the place: technically there is an 8% gain in frame rates for the 2400G, however the 2200G seems to fluctuate a bit more. The better performance on the 2200G seems a bit startling too: with only four threads, each thread has more memory bandwidth and more core resources than when eight threads run together. This might improve certain latencies in the instruction stream, although it is surprising to see such a big change.

Rise Of The Tomb Raider (DX12)

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics, and the sequel to the popular Tomb Raider reboot, which was loved for its automated benchmark mode. But don't let that fool you: the benchmark mode in RoTR is very different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of requirements in hardware: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

[Graphs: Rise of the Tomb Raider on ASUS GTX 1060 Strix 6GB - Average Frames Per Second / 99th Percentile]

RoTR sees a small +3% gain in average frame rates going up to 4.0 GHz, however the percentiles get the biggest boost, showing +17.9% on the 2400G.
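One way to put the results above in context is to ask how much of the 14.3% clock-speed increase actually shows up as average frame rate. A minimal Python sketch, using only the average-FPS gains quoted in the text (the "efficiency" ratio is purely illustrative, not a figure from the article):

```python
# How much of the 3.5 -> 4.0 GHz clock gain is realised as average FPS gain.
clock_gain = (4.0 / 3.5 - 1) * 100  # 14.3%

# Average-FPS gains (%) quoted in the text above.
avg_fps_gains = {
    "Civilization 6 (2400G)": 7.0,
    "Civilization 6 (2200G)": 9.7,
    "Ashes (2400G)": 8.0,
    "RoTR (average)": 3.0,
}

for title, gain in avg_fps_gains.items():
    print(f"{title}: {gain / clock_gain:.0%} of the clock gain realised")
```

By this rough measure, Civilization 6 on the 2200G converts roughly two-thirds of the extra clock speed into frame rate, while RoTR's averages barely move, with the gains showing up in the percentiles instead.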


31 Comments


  • eastcoast_pete - Thursday, June 21, 2018 - link

    I hear your point. What worries me about buying a second-hand GPU, especially nowadays, is that there is no way to know whether it was used to mine crypto 24/7 for the last 2-3 years. Semiconductors can wear out after thousands of hours of running both overvolted and at above-normal temperatures; both can really affect not just the GPU, but especially also the memory.
    The downside of a 980 or 970 (which wasn't as much at risk for cryptomining) is the now-outdated HDMI standard. But yes, just for gaming, they'll do.
  • Lolimaster - Friday, June 22, 2018 - link

    A CM Hyper 212X is cheap and it's one of the best bang-for-buck coolers. 16GB of RAM is expensive if you want 2400 or 3000 CL15. 8GB is just too low: the iGPU needs some of it, and many games (2015 onwards) already need 6GB+ of system memory.
  • eastcoast_pete - Thursday, June 21, 2018 - link

    Thanks for the link! Yes, those results are REALLY interesting. They used stock 2200G and 2400G chips, no delidding, no undervolting of the CPU, and stock heatsinks, and got quite an increase, especially when they also used faster (overclocked) memory. The downside was a notable increase in power draw and the stock cooler's fan running at full tilt.
    So, Gavin's delidded APUs with their better heatsinks should do even better. The most notable thing in that German article was that the way to the overclock mountain (stable at 1600 MHz on the stock cooler, etc.) led through a valley of tears, i.e. the APUs crashed reliably when the iGPU was mildly overclocked, but then became stable again at higher iGPU clock speeds and voltages. They actually got a statement from AMD that AMD knows about that strange behavior, but apparently has no explanation for it. But then - if it runs more stable when I run it even faster - bring it!
  • 808Hilo - Friday, June 22, 2018 - link

    An R3 is not really an amazing feat. It's a defective R7 with core, lane, fabric, or pinout defects. The rest of the chip is run at low speed because the integrity is affected. Not sure anyone is getting their money's worth here.
  • Lolimaster - Friday, June 22, 2018 - link

    I don't get these nonsense articles on an APU where the MAIN STAR IS THE IGPU. On some builds there are mixed results when the GPU frequency jumps around between 200-1200 MHz (hence some funny low 0.1-1% lows in benchmarks).

    It's all about OCing the iGPU, forgetting about the CPU part, and addressing/fixing the iGPU clock rubber-band effect: sometimes disabling boost for the CPU, increasing SoC voltage, etc.
  • Galatian - Friday, June 22, 2018 - link

    I'm going to question the results a little bit. To me it looks like the only "jump" in performance you get in games occurs whenever you hit an OC over the standard boost clock, e.g. 3700 MHz on the 2400G. I would suspect that you are simply preventing some core parking or some other aggressive power management feature while applying the OC. That would explain the odd numbers when you increase the OC.

    That being said, I would say a CPU OC doesn't really make sense. An undervolting test to see where the sweet spot lies would be nice though.
  • melgross - Monday, June 25, 2018 - link

    Frankly, the result of all these tests seems to be that overclocking isn’t doing much of anything useful, at least, not the small amounts we see here with AMD.

    5% is never going to be noticed. Several studies done a number of years ago showed that you need at least an overall 10% improvement in speed for it to even be noticeable. 15% would be barely noticeable.

    For heavy database workloads that take place over hours, or long rendering tasks, it will make a difference, but for gaming, which this article is overly interested in, nada!
  • John T. Pritchett - Wednesday, June 27, 2018 - link

    What about selective overclocking of the iGPU, especially iGPU-only? The key attraction of delidding the 2200G and 2400G for me is the potential to boost the iGPU performance. The stock speed of the CPU is fine IMO.
  • Allan_Hundeboll - Monday, July 02, 2018 - link

    Benchmarks @ 46W cTDP would be interesting.
  • leonette - Wednesday, July 04, 2018 - link

    To answer your questions directly:

    1. Power wasn't much of a factor in this piece, as the focus is primarily on CPU frequency scaling, with power consumption having been touched upon in the previous articles.
    2. The iGPU scaling piece is currently being worked on, with the test bench being set up tomorrow or Friday (just working on some motherboard reviews today, as well as managing a severe hand injury which has hindered me for the last few weeks).
    3. I didn't have a follow-up planned with a stock cooler, as my aim was essentially to show scaling without many limitations on things like cooling; the stock cooler would obviously run hotter and I didn't want that to be a limiting factor at any stage.
    4. I made a post on my personal Facebook page about Ian's just-published Best Gaming CPU Q2 article. My first reaction was that the $500 system with the Ryzen 5 2400G made me smile, as I genuinely think the 2400G is a stunner for the price, especially for gamers on a budget with certain limitations.

