CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers such as AMD (Ryzen AI) and Intel (the Meteor Lake mobile platform) now building AI-driven hardware into their silicon, 2024 looks set to see many applications using AI-based technologies come to market.

We are using DDR5-5200 memory as per the JEDEC specifications on the Ryzen 7 8700G and Ryzen 5 8600G, as well as DDR4-3200 on the Ryzen 7 5700G and Ryzen 5 5600G. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 14th, 13th, and 12th Gen processors. Below are the settings we have used for each platform:

  • DDR5-5200 CL44 - Ryzen 8000G
  • DDR4-3200 CL22 - Ryzen 5000G
  • DDR5-5600 (B) CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800 (B) CL40 - Intel 12th Gen
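
For context, those memory ratings translate directly into first-word CAS latency: latency in nanoseconds is 2000 × CL ÷ data rate in MT/s, so DDR5-5200 CL44 works out to roughly 16.9 ns against 13.75 ns for DDR4-3200 CL22. Below is a minimal Python sketch of that arithmetic for the configurations listed above; the figures it prints are simple math on the JEDEC timings, not additional measurements.

```python
# First-word CAS latency (ns) = 2000 * CL / data rate (MT/s).
# These are the JEDEC configurations listed above, not new measurements.
configs = {
    "Ryzen 8000G (DDR5-5200 CL44)":         (5200, 44),
    "Ryzen 5000G (DDR4-3200 CL22)":         (3200, 22),
    "Intel 14th/13th Gen (DDR5-5600 CL46)": (5600, 46),
    "Ryzen 7000 (DDR5-5200 CL44)":          (5200, 44),
    "Intel 12th Gen (DDR5-4800 CL40)":      (4800, 40),
}

for platform, (rate_mts, cl) in configs.items():
    latency_ns = 2000 * cl / rate_mts
    print(f"{platform}: {latency_ns:.1f} ns")
```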

(6-2) DeepSpeech 0.6: Acceleration CPU

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
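
The TensorFlow results above come from our standard benchmark harness, but for readers who want a feel for what a CPU inference run at a fixed batch size involves, here is a minimal Python sketch using the stock Keras VGG-16 with random weights and synthetic data. The warm-up and iteration counts are illustrative assumptions, not the settings behind our charts.

```python
import time
import numpy as np
import tensorflow as tf

# Force CPU execution even if a GPU is present.
tf.config.set_visible_devices([], "GPU")

BATCH_SIZE = 16                                    # illustrative; the charts also use 64/256
model = tf.keras.applications.VGG16(weights=None)  # stock architecture, random weights
images = np.random.rand(BATCH_SIZE, 224, 224, 3).astype(np.float32)

model.predict(images, verbose=0)                   # warm-up pass

runs = 10
start = time.perf_counter()
for _ in range(runs):
    model.predict(images, verbose=0)
elapsed = time.perf_counter() - start

print(f"VGG-16, batch {BATCH_SIZE}: {runs * BATCH_SIZE / elapsed:.1f} images/sec (CPU)")
```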

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)
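
The ONNX Runtime results likewise use a quantized (INT8) CaffeNet model run purely on the CPU execution provider. Here is a minimal sketch of such a run; the model filename and the 1x3x224x224 input shape are assumptions based on the ONNX model zoo export rather than the exact files or settings behind our charts, so check session.get_inputs()[0].shape against your local copy.

```python
import time
import numpy as np
import onnxruntime as ort

# CPU-only inference sketch; path and input shape are assumptions based on
# the ONNX model zoo CaffeNet INT8 export; verify against your local file.
MODEL_PATH = "caffenet-12-int8.onnx"

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

session.run(None, {input_name: batch})             # warm-up

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: batch})
elapsed = time.perf_counter() - start

print(f"CaffeNet INT8 (CPU): {runs / elapsed:.1f} inferences/sec")
```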

A major focal point of AMD's Ryzen 8000G series is the inclusion of the Xilinx-based Ryzen AI NPU. While AI benchmarks, including those measuring capabilities using large language models (LLMs), are still thin on the ground, it should be noted that none of our benchmarks utilize the NPU itself. At this stage, the Ryzen AI NPU is primarily focused on enabling software features, such as the generative AI capabilities within Microsoft Studio Effects and in applications from Adobe and DaVinci Resolve.
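
As a quick sanity check of whether an ONNX Runtime installation can even see an NPU, you can list the registered execution providers. The provider name below, "VitisAIExecutionProvider", is our assumption of the identifier used by AMD's Ryzen AI software stack, and it requires AMD's Ryzen AI build of ONNX Runtime rather than the stock pip package.

```python
import onnxruntime as ort

# List the execution providers this ONNX Runtime build exposes.
# Stock pip builds typically report only CPU (and sometimes CUDA/DirectML);
# the Ryzen AI NPU is exposed through AMD's own stack, assumed here to
# register as "VitisAIExecutionProvider".
providers = ort.get_available_providers()
print("Available providers:", providers)

if "VitisAIExecutionProvider" in providers:
    print("NPU-capable provider detected")
else:
    print("No NPU provider; sessions will fall back to CPUExecutionProvider")
```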

In ONNX Runtime with the INT8 CaffeNet model, we can see that the Ryzen 7 8700G and Ryzen 5 8600G don't offer world-beating AI performance, but we intend to investigate this area more deeply.

Using the latest firmware, which removes the STAPM limitations, we can see that the Ryzen 5 8600G shows the most gains, especially in DeepSpeech 0.6, where we saw a 12% bump in performance. The Ryzen 7 8700G also posted some very impressive gains in the UL Procyon Windows AI Inferencing benchmark, with a 34% jump in performance in our charts, although this could simply be a case where it underperformed in the MobileNet V3 test in the first place.

Comments

  • nandnandnand - Tuesday, January 30, 2024 - link

    I could have sworn that was in here, but I was probably thinking of other reviews.

    Short story long, AnandTech is dying and you should go to other sites for reviews now. And the 780M iGPU in the 8700G is not going to do well against a 2060/3060. It seems to be around a GTX 1650 in performance.
  • is4u2p - Wednesday, January 31, 2024 - link

    Uh, the M1 had dedicated NPUs in it and it is a desktop processor.

    As for these, they're rebranded Z1 and Z1 Extreme APUs; you can get them in handheld gaming machines.
  • Stu7nm3dflash - Thursday, February 1, 2024 - link

    Certainly Apple has a lot of machine learning built in. This iPad Mini 6 has 16 TFLOPS of ML, more than the M1, but little access to large amounts of short-term memory. My Ryzen 5 8600G only has 16 TFLOPS, but I've given it 64GB of DDR5 and PCIe 4. Fingers crossed for my creative work. ARM is more power efficient, and the M1/2/3, Pro, Max, Ultra, iPhone, Apple TV, and iPads all have ML plus unified memory and PCIe 4. But the amount of memory is also important: 64GB of DDR5 only cost me A$200, PCIe 4 $100, motherboard $125, chip A$375. My Mac mini M1 only has 8GB of unified memory, and it crashed immediately under AI model load.
  • Stu7nm3dflash - Thursday, February 1, 2024 - link

    My use case is a bit different: turning 2 stories (11 pages) into 80 pages. At first, online, I got 3 paragraphs. Then on an 8-core 4000-series Ryzen 7 with 32GB of DDR4 and PCIe 3, I got 4 pages. Now, with 64GB of DDR5, PCIe 4, and a Ryzen 5 8600G with 16 TFLOPS of AI, I'm hoping for more pages before it crashes. That's double the short-term memory at double the speed, double-speed long-term memory, a specific AI architecture, and 4nm with nearly 3 times the transistor density of 7nm. Last time I used Jan.ai plus a model; easiest build yet. Fingers crossed: with more specific processing and 4 times the short-term memory power, I hope I'm getting closer. Memory, processing, and software continue to advance.
  • GeoffreyA - Thursday, February 1, 2024 - link

    Perhaps you'd get better results with a GPU upgrade?
  • peevee - Thursday, February 1, 2024 - link

    Too bad 8700G is hamstrung by only 65W and PCIe4 (and only 2 RAM channels).

    Maybe their graphics department insisted on these things to preserve sales? APUs would be so nice at 4 DDR5 channels and 200W...
  • meacupla - Friday, February 2, 2024 - link

    Socket AM5, with its 1718 pins, doesn't have enough pins for quad channel DDR5.
    While I don't think it would require 4844 pins like sTR5 for Threadripper, it would require a new socket.
  • vertigoz - Saturday, February 3, 2024 - link

    I would love to see AI/3D benchmarks using the GPU; being able to give it much more RAM can be a major plus.
  • blackie333 - Wednesday, February 7, 2024 - link

    I really care more about IDLE power consumption than maximum power because I use my PC mostly for reading/music listening. Gaming or crunching videos is not my daily routine.
    These integrated AMD CPUs have been chosen by many because of their much better IDLE power efficiency compared to normal desktop models.
    I haven't found an idle power consumption comparison with older-gen models in the article.
    I'm getting old and my eyes don't serve me as well as before; maybe it's there somewhere but I can't find it.
  • masb - Friday, February 9, 2024 - link

    Excellent comparison, especially regarding the OpenFOAM topic. Where can I access the complete specifications for the Intel Core i5-14600K system?
