APU Core Frequency Scaling

One of the holy grails in processor performance is higher clock speed. All other things being equal, and leaving power consumption aside, most performance models prefer, in order, a high IPC, then a high frequency, and finally more cores. Building a magical processor with double the IPC or double the frequency (at the same power) would be a magnificent thing, and would usually be preferred to simply adding cores: core counts can be scaled up, whereas IPC and frequency often cannot.

The main issues with driving frequency higher are power consumption and process. We all remember the days of the Pentium 4, when chasing frequency led to disastrous heat output and power consumption, and Intel went back to the drawing board to work more on IPC. Processors today are often built around the notion of a peak efficiency point, and the design is geared towards that specific level of performance and power. Moving the frequency outside of that peak area can lead to drastic increases in power consumption, so a balance is struck. The result is cores down at 1W each, or large CPUs consuming 300-500W, depending on the industry the chip is designed for.
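As a rough illustration of why frequency outside the efficiency point is so expensive, dynamic CPU power is commonly modeled as proportional to capacitance × voltage² × frequency. Because higher clocks usually require higher voltage, power grows super-linearly with frequency. The voltage/frequency points below are hypothetical, not measurements from this article:

```python
def dynamic_power(capacitance, voltage, freq_ghz):
    """Relative dynamic power, using the common P ~ C * V^2 * f model."""
    return capacitance * voltage ** 2 * freq_ghz

# Hypothetical points on a voltage/frequency curve (illustrative only):
baseline = dynamic_power(1.0, 1.10, 3.5)   # stock: 1.10 V at 3.5 GHz
overclock = dynamic_power(1.0, 1.30, 4.0)  # OC: 1.30 V at 4.0 GHz

print(f"Frequency gain: {4.0 / 3.5 - 1:+.1%}")            # +14.3%
print(f"Power increase: {overclock / baseline - 1:+.1%}")  # ~+59.6%
```

A 14.3% clock bump, paired with the voltage needed to sustain it, can plausibly cost around 60% more dynamic power in this simple model, which is why designers aim for the peak efficiency point in the first place.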

Once a CPU is designed and built, the main dial left for adjusting performance is frequency. For the whole chip, this might be the frequency of the cores, the frequency of the memory/memory controller, or the frequency of the graphics. For this article, we are concerned purely with core frequency and performance. Predicting how well core performance scales with frequency requires an intimate knowledge of how the software processes instructions: if the program is all about raw compute throughput, then as long as the memory bandwidth can keep up, a direct scaling factor can usually be observed. When the memory cannot keep up, or the storage is lacking, or other pathways are blocked, CPU performance looks identical no matter what the frequency.
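One way to sketch this (an Amdahl-style simplification, not a method from the article) is to split runtime into a frequency-sensitive fraction (compute) and a frequency-insensitive remainder (memory and storage stalls), then see how overall speedup depends on that split:

```python
def expected_speedup(freq_fraction, clock_ratio):
    """Overall speedup when only `freq_fraction` of runtime scales with clock.

    The remaining (1 - freq_fraction) of runtime is assumed fixed
    (memory/storage bound), Amdahl's-law style.
    """
    return 1.0 / ((1.0 - freq_fraction) + freq_fraction / clock_ratio)

ratio = 4.0 / 3.5  # the +14.3% clock uplift used in this article
for f in (1.0, 0.5, 0.0):
    gain = expected_speedup(f, ratio) - 1
    print(f"frequency-sensitive fraction {f:.0%}: {gain:+.1%} faster")
```

A fully compute-bound workload gets the full +14.3%; a half-and-half workload gets only about +6.7%; a fully memory-bound one gets nothing, which matches the behavior described above.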

In our CPU tests, we adjusted the frequency from 3.5 GHz to 4.0 GHz, testing at each 100 MHz step, for a total frequency gain of +14.3%. In the results, the raw throughput tests such as Blender, POV-Ray, 3DPM, and Handbrake all saw performance gains in the 13-14% range, as you would expect with a direct frequency uplift, showing that these tests scale well. Our memory-limited benchmarks, such as WinRAR and DigiCortex, saw smaller gains of 5% at best, and often none at all.
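Under the same simple two-component model of runtime (an assumption, not the article's methodology), the observed gains can be inverted to estimate how much of each workload actually scaled with the clock:

```python
def frequency_bound_fraction(observed_speedup, clock_ratio):
    """Estimate the fraction of runtime that scaled with frequency.

    Solves S = 1 / ((1 - f) + f / r) for f, given observed speedup S
    and clock ratio r.
    """
    return (1.0 / observed_speedup - 1.0) / (1.0 / clock_ratio - 1.0)

r = 4.0 / 3.5  # the +14.3% uplift tested here
print(f"13.5% gain (Blender-like): {frequency_bound_fraction(1.135, r):.0%}")
print(f" 5.0% gain (WinRAR-like):  {frequency_bound_fraction(1.05, r):.0%}")
```

This puts the throughput tests at roughly 95% frequency-bound, while a 5% gain corresponds to only about 38% of runtime scaling with the clock, consistent with those benchmarks being memory-limited.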

For gaming, predicting how the performance will change is quite difficult. Moving data over the PCIe bus is one thing, but it will come down to draw calls and the ability of the CPU to move textures and perform in-scene adjustments. We ran tests on both the integrated graphics and a discrete graphics card, the GTX 1060, which is around the price point at which someone buying an APU is likely to invest in a discrete graphics card.

For our integrated graphics testing, where we would normally expect to see improvements from overclocking the CPU, almost nothing happened (or the changes were within testing variance). Shadow of Mordor saw an uptick on the Ryzen 3 2200G, but that was more of an anomaly than a rule. The 99th percentile results fared a bit better.

Thief benefited the most from the increased core clock in the percentile results, with Ashes not far behind. Shadow of Mordor saw the same 7% gain or so with the 2200G.

On our discrete GPU, the improvements were more obvious:

Most combinations saw at least a 3% increase, although only Civilization 6 and Ashes on the 2200G approached a 13% increase commensurate with the frequency uplift.

The scaling worked best with the 99th percentile frame rates, with almost every game seeing at least a 5% gain in performance, and many showing 10-13% gains. Rise of the Tomb Raider, which always loves being the oddball in these cases, actually saw a benefit above and beyond the 14.3% frequency increase.

Overall, it would seem that overclocking an APU on core frequency alone works best for pure CPU tests, and for percentile frame rates when using a discrete graphics card. Users more focused on integrated graphics should look at the IGP frequency and the memory instead.

Comments

  • eastcoast_pete - Thursday, June 21, 2018 - link

    I hear your point. What worries me about buying a second-hand GPU, especially nowadays, is that there is no way to know whether it was used to mine crypto 24/7 for the last 2-3 years. Semiconductors can wear out after thousands of hours of use, both overvolted and at above-normal temps; both can really affect not just the GPU, but especially also the memory.
    The downside of a 980 or 970 (which wasn't as much at risk for cryptomining) is the now-outdated HDMI standard. But yes, just for gaming, they'll do.
  • Lolimaster - Friday, June 22, 2018 - link

    A CM Hyper 212X is cheap and it's one of the best bang-for-buck coolers. 16GB of RAM is expensive if you want 2400 or 3000 CL15. 8GB is just too low; the iGPU needs some of it, and many games (2015 onwards) already need 6GB+ of system memory.
  • eastcoast_pete - Thursday, June 21, 2018 - link

    Thanks for the link! Yes, those results are REALLY interesting. They used stock 2200G and 2400G, no delidding, no undervolting of the CPU, and on stock heatsinks, and got quite an increase, especially when they also used faster memory (overclocking the memory speed as well). The downside was a notable increase in power draw and the stock cooler's fan running at full tilt.
    So, Gavin's delidded APUs with their better heatsinks should do even better. The most notable thing in that German article was that the way to the overclock mountain (stable at 1600 MHz on the stock cooler, etc.) led through a valley of tears, i.e. the APUs crashed reliably when the iGPU was mildly overclocked, but then became stable again at higher iGPU clock speeds and voltages. They actually got a statement from AMD that AMD knows about that strange behavior, but apparently has no explanation for it. But then - running more stable if I run it even faster - bring it!
  • 808Hilo - Friday, June 22, 2018 - link

    An R3 is not really an amazing feat. It's a defective R7 with core, lane, fabric, and pinout defects. The rest of the chip runs at low speed because the integrity is affected. Not sure anyone is getting their money's worth here.
  • Lolimaster - Friday, June 22, 2018 - link

    I don't get these nonsense articles on an APU where the MAIN STAR IS THE IGPU. On some builds there were mixed results when the GPU frequency jumped around between 200-1200 MHz (hence some funny low 0.1-1% lows in benchmarks).

    It's all about OCing the iGPU, forgetting about the CPU part, and addressing/fixing the iGPU clock rubber-band effect: sometimes disabling boost for the CPU, increasing SoC voltage, etc.
  • Galatian - Friday, June 22, 2018 - link

    I'm going to question the results a little bit. To me it looks like the only "jump" in performance you get in games occurs whenever you hit an OC over the standard boost clock, e.g. 3700 MHz on the 2400G. I would suspect that you are simply preventing some core parking or some other aggressive power management feature by applying the OC. That would explain the odd numbers when you increase the OC.

    That being said I would say a CPU OC doesn't really make sense. An undervolting test to see where the sweet spot lies would be nice though.
  • melgross - Monday, June 25, 2018 - link

    Frankly, the result of all these tests seems to be that overclocking isn’t doing much of anything useful, at least, not the small amounts we see here with AMD.

    5% is never going to be noticed. Several studies done a number of years ago showed that you need at least an overall 10% improvement in speed for it to even be noticeable. 15% would be barely noticeable.

    For heavy database workloads that take place over hours, or long rendering tasks, it will make a difference, but for gaming, which this article is overly interested in, nada!
  • Allan_Hundeboll - Monday, July 2, 2018 - link

    Benchmarks @46W cTDP would be interesting
  • V900 - Friday, September 28, 2018 - link

    The 2200G makes sense for an absolute budget system.

    (Though if you're starting from rock bottom and also need to buy a case, motherboard, RAM, etc., you'll probably be better off taking that money and buying a used computer. You can get some really good deals for less than $500.)

    The 2400G however? Not so much. The price is too high and the performance too low to compete with an Intel Pentium/Nvidia 1030 solution.

    Or if you want to spend a few dollars more and find a good deal: An Intel Pentium/Nvidia 1050.
