AMD Ryzen 7 8700G and Ryzen 5 8600G Review: Zen 4 APUs with RDNA3 Graphics
by Gavin Bonshor on January 29, 2024 9:00 AM EST - Posted in
- CPUs
- AMD
- APUs
- Phoenix
- 4nm
- Zen 4
- RDNA3
- AM5
- Ryzen 8000G
- Ryzen 7 8700G
- Ryzen 5 8600G
CPU Benchmark Performance: AI and Inferencing
As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.
Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.
With chip makers such as AMD (via Ryzen AI) and Intel (via its Meteor Lake mobile platform) now building AI-driven hardware into their silicon, 2024 looks set to bring many applications using AI-based technologies to market.
We are using DDR5-5200 memory as per the JEDEC specifications on the Ryzen 7 8700G and Ryzen 5 8600G, as well as DDR4-3200 on the Ryzen 7 5700G and Ryzen 5 5600G. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 14th, 13th, and 12th Gen processors. Below are the settings we have used for each platform:
- DDR5-5200 CL44 - Ryzen 8000G
- DDR4-3200 CL22 - Ryzen 5000G
- DDR5-5600B CL46 - Intel 14th & 13th Gen
- DDR5-5200 CL44 - Ryzen 7000
- DDR5-4800 (B) CL40 - Intel 12th Gen
A major focal point of AMD's Ryzen 8000G series is the inclusion of the Xilinx-based Ryzen AI NPU. While AI benchmarks, including those measuring capabilities using large language models (LLMs), are thin on the ground, none of our benchmarks utilize the NPU itself. The Ryzen AI NPU is primarily focused on enabling software features, such as the generative AI capabilities within Microsoft's Windows Studio Effects and applications from Adobe and DaVinci Resolve.
In ONNX Runtime, using the INT8 model, we can see that the Ryzen 7 8700G and Ryzen 5 8600G don't offer world-beating AI performance, but we intend to investigate this more deeply.
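For readers curious how CPU-only inference numbers like these are typically gathered, the sketch below shows a generic warm-up-then-average timing loop in Python. The workload here is a stand-in INT8 matrix multiply (all sizes and names are hypothetical, not AnandTech's actual harness), since a quantized model's runtime is usually dominated by integer matrix math.

```python
import time
import numpy as np

def time_inference(fn, warmup: int = 3, runs: int = 10) -> float:
    """Return mean wall-clock seconds per call, after warm-up runs."""
    # Warm-up calls let caches, thread pools, and JITs settle first.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Stand-in workload: an INT8 matrix multiply as a rough proxy for a
# quantized model's dominant operation (sizes are arbitrary).
rng = np.random.default_rng(0)
a = rng.integers(-128, 127, size=(256, 256), dtype=np.int8)
b = rng.integers(-128, 127, size=(256, 256), dtype=np.int8)

# Accumulate in int32 to avoid int8 overflow, as INT8 kernels do.
mean_s = time_inference(lambda: a.astype(np.int32) @ b.astype(np.int32))
print(f"{mean_s * 1e3:.3f} ms per run")
```

Averaging over several runs after a warm-up phase is what makes scores comparable across chips; a single cold run would mostly measure first-touch overheads rather than sustained throughput.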
Using the latest firmware, which removes the STAPM limitations, we can see that the Ryzen 5 8600G shows the most gains, especially in DeepSpeech 0.6, where we saw a 12% bump in performance. The Ryzen 7 8700G also posted some very impressive gains in the UL Procyon Windows AI Inferencing benchmark, with a 34% jump in performance in our charts, though this could be a case of the chip having underperformed in the MobileNet V3 test in the first place.
111 Comments
TheinsanegamerN - Tuesday, January 30, 2024 - link
guys WHY would you use a CPU only test in a CPU REVIEW??!?!?!?
t.s - Tuesday, January 30, 2024 - link
Cause it have best iGPU in its class. If you won't test that, why bother testing, as it's almost certain that 7700x or 7700 will be better.
AndrewJacksonZA - Tuesday, January 30, 2024 - link
👍
TheinsanegamerN - Wednesday, January 31, 2024 - link
Ok, and in the CPU bench section they use a CPU test. How would you know if a 7700 would be better in CPU load if you dont test it?
AndrewJacksonZA - Tuesday, January 30, 2024 - link
guys WHY would you only test PART of a CPU in a CPU REVIEW??!?!?!?
TheinsanegamerN - Wednesday, January 31, 2024 - link
How DARE we want to see what a CPU does in a CPU review. WAAAH I NEED IGPU OR ILL CRY WAAAAH
FWhitTrampoline - Tuesday, January 30, 2024 - link
Because AMD does not support ROCm/HIP for its iGPUs and its ROCm/HIP for consumer dGPUs is lacking as well on Linux. And the Blender Foundation starting with Blender 3.0/Later editions has dropped supporting OpenCL as the GPU compute API. And so since Blender 3.0/later the Blender Foundation only supports Nvidia's CUDA for non Apple PCs/Laptops and Apple's Metal for Apple silicon for Blender 3D 3.0/later editions.
So without any Ryzen iGPU support for ROCm/HIP there's nothing to take the CUDA Intermediate Language Representation(ILR) and convert that to a form that can be executed on Radeon iGPU/dGPU hardware. And for Intel's iGPUs and dGPUs it's Intel's OneAPI/Level-0 that does the translating of the CUDA ILR to a form than can be executed on Intel's iGPU/dGPU hardware and for Intel that OneAPI/Level-0 works for Windows and Linux!
Blender 3D generates CUDA PTX ILR and All GPU Makers us Intermediate Languages for GPUs so GPU makers/others ship no pre-compiled binaries where software gets directly compiled into the GPUs Native Instruction Set in advance. And that's so the ILR code remains portable across OS/Ecosystems and GPU makers are free to modify their GPU ISA and still maintain comparability with software that only gets compiled into a portable Intermediate language Representation(ILR)
FWhitTrampoline - Tuesday, January 30, 2024 - link
Edit: maintain comparabilityto: maintain compatibility
I hate Firefox's Spell Checker its a Train Wreck as always!
thestryker - Monday, January 29, 2024 - link
Feels like these APUs deserve a DRAM scaling article comparing the IGP performance.
GeoffreyA - Tuesday, January 30, 2024 - link
Yes, that would be a nice one. Always necessary for APUs.