AMD Ryzen 5 2400G and Ryzen 3 2200G Core Frequency Scaling: An Analysis
by Gavin Bonshor on June 20, 2018 10:05 AM EST
Integrated Graphics Performance
As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed; when it comes to gaming, however, it can be unpredictable where those gains will come from.
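For readers following along, the scaling arithmetic above can be sanity-checked with a few lines of Python; this is just a sketch of the test ladder described in the text, not part of our benchmark harness.

```python
# Reproduce the clock ladder used in this test: 3.5 GHz to 4.0 GHz
# in 100 MHz (0.1 GHz) increments, plus the overall percentage gain.
base_ghz = 3.5
top_ghz = 4.0
step_ghz = 0.1

# Number of steps, rounded to dodge floating-point division quirks
n_steps = int(round((top_ghz - base_ghz) / step_ghz))
steps = [round(base_ghz + i * step_ghz, 1) for i in range(n_steps + 1)]
print(steps)  # [3.5, 3.6, 3.7, 3.8, 3.9, 4.0]

overall_gain = (top_ghz - base_ghz) / base_ghz
print(f"{overall_gain:.1%}")  # 14.3%
```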
Thief
Thief has held a long-standing place in the hearts of PC gamers since the very first iteration back in 1998 (Thief: The Dark Project). The 2014 Thief is the latest reboot in the long-running series, with publisher Square Enix picking up the task where Eidos left off back in 2004. The game itself uses the UE3 engine and is known for its optimised and improved destructible environments, large crowd simulation, and soft-body dynamics.
Increasing the core frequency does little for the average frame rates in Thief on integrated graphics; however, the 99th percentile figures clearly improve for both processors. The gains are not particularly linear, making the overall result hard to predict.
Shadow of Mordor
The next title in our testing is a battle of system performance with the open-world action-adventure title, Middle Earth: Shadow of Mordor (SoM for short). Produced by Monolith using the LithTech Jupiter EX engine with numerous detail add-ons, SoM goes for detail and complexity. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation's Game of the Year in 2014.
With the Ryzen 3 2200G, we see a clear gain in frame rates as the frequency is increased, around +7.6%, with a similar rise in the 99th percentile numbers. The 2400G is not affected in the same way.
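As a rough way to frame that result, the frame-rate gain can be compared against the clock-speed gain to get a scaling-efficiency figure; the numbers below are just the two percentages quoted in this article, so treat this as an illustrative back-of-the-envelope calculation.

```python
# How much of the +14.3% clock increase showed up as frame rate
# on the Ryzen 3 2200G in Shadow of Mordor (+7.6% per the text)?
clock_gain = (4.0 - 3.5) / 3.5   # ~14.3% core-clock increase
fps_gain = 0.076                 # +7.6% average frame-rate gain

efficiency = fps_gain / clock_gain
print(f"{efficiency:.0%}")  # ~53% of the clock gain realized as FPS
```

A figure well under 100% like this suggests the iGPU, not the CPU cores, remains the dominant bottleneck at these settings.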
F1 2017
Released in the year its title suggests, F1 2017 is the ninth installment of the franchise to be published and developed by Codemasters. The game is based on the 2017 F1 season and has been fully licensed by the sport's official governing body, the Fédération Internationale de l'Automobile (FIA). F1 2017 features all twenty racing circuits and all twenty drivers across ten teams, and allows F1 fans to immerse themselves in the world of Formula One with a rather comprehensive world championship season mode.
The Codemasters EGO engine has historically been an engine that has benefited from an increase in anything: CPU, memory, graphics, the lot. F1 2017 is using EGO 4.0, which seems to have removed some of the CPU bottleneck, as we're getting no difference in our integrated gaming results.
Comments
eastcoast_pete - Thursday, June 21, 2018 - link
I hear your point. What worries me about buying a second-hand GPU, especially nowadays, is that there is no way to know whether it was used to mine crypto 24/7 for the last 2-3 years. Semiconductors can wear out if used for thousands of hours both overvolted and at above-normal temps; both can really affect not just the GPU, but especially also the memory. The downside of a 980 or 970 (which wasn't as much at risk for cryptomining) is the now-outdated HDMI standard. But yes, just for gaming, they can do the job.
Lolimaster - Friday, June 22, 2018 - link
A CM Hyper 212X is cheap and it's one of the best bang-for-buck coolers. 16GB of RAM is expensive if you want 2400 or 3000 CL15. 8GB is just too low; the iGPU needs some of it, and many games (2015 and later) already need 6GB+ of system memory.
eastcoast_pete - Thursday, June 21, 2018 - link
Thanks for the link! Yes, those results are REALLY interesting. They used stock 2200G and 2400G, no delidding, no undervolting of the CPU, and on stock heatsinks, and got quite an increase, especially when they also used faster memory (to OC memory speed also). The downside was a notable increase in power draw and the stock cooler's fan running at full tilt. So, Gavin's delidded APUs with their better heatsinks should do even better. The most notable thing in that German article was that the way to the overclock mountain (stable at 1600 MHz, stock cooler etc.) led through a valley of tears, i.e. the APUs crashed reliably when the iGPU was mildly overclocked, but then became stable again at higher iGPU clock speeds and voltage. They actually got a statement from AMD that AMD knows about that strange behavior, but apparently has no explanation for it. But then - running more stable if I run it even faster - bring it!
808Hilo - Friday, June 22, 2018 - link
An R3 is not really an amazing feat. It's a defective R7 with core, lane, fabric, and pinout defects. The rest of the chip is run at low speed because the integrity is affected. Not sure anyone is getting their money's worth here.
Lolimaster - Friday, June 22, 2018 - link
I don't get these nonsense articles on an APU where the MAIN STAR IS THE IGPU. On some builds there are mixed results when the GPU frequency jumped around 200-1200MHz (hence some funny low 0.1-1% lows in benchmarks). It's all about OCing the iGPU, forgetting about the CPU part, and addressing/fixing the iGPU clock rubber-band effect, sometimes disabling boost for the CPU, increasing SoC voltage, etc.
Galatian - Friday, June 22, 2018 - link
I'm going to question the results a little bit. For me it looks like the only "jump" in performance you get in games occurs whenever you hit an OC over the standard boost clock, e.g. 3700 MHz on the 2400G. I would suspect that you are simply preventing some core parking or some other aggressive power-management feature while applying the OC. That would explain the odd numbers when you increase the OC. That being said, I would say a CPU OC doesn't really make sense. An undervolting test to see where the sweet spot lies would be nice though.
melgross - Monday, June 25, 2018 - link
Frankly, the result of all these tests seems to be that overclocking isn't doing much of anything useful, at least not the small amounts we see here with AMD. 5% is never going to be noticed. Several studies done a number of years ago showed that you need at least an overall 10% improvement in speed for it to even be noticeable; 15% would be barely noticeable.
For heavy database workloads that take place over hours, or long rendering tasks, it will make a difference, but for gaming, which this article is overly interested in, nada!
Allan_Hundeboll - Monday, July 2, 2018 - link
Benchmarks @ 46W cTDP would be interesting
V900 - Friday, September 28, 2018 - link
The 2200G makes sense for an absolute budget system. (Though if you're starting from rock bottom and also need to buy a cabinet, motherboard, RAM, etc., you'll probably be better off taking that money and buying a used computer. You can get some really good deals for less than $500.)
The 2400G however? Not so much. The price is too high and the performance too low to compete with an Intel Pentium/Nvidia 1030 solution.
Or if you want to spend a few dollars more and find a good deal: An Intel Pentium/Nvidia 1050.