CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers building AI-driven hardware into their silicon, such as AMD with Ryzen AI and Intel with its Meteor Lake mobile platform, 2024 looks set to bring many applications using AI-based technologies to market.

We are using DDR5 memory on the Core i9-14900K, Core i7-14700K, Core i5-14600K, and Intel's 13th Gen processors at their respective JEDEC settings. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800B CL40 - Intel 12th Gen
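As a rough illustration of what these settings imply, peak theoretical memory bandwidth follows directly from the transfer rate and bus width. The sketch below assumes a standard dual-channel configuration with a 64-bit (8-byte) data path per channel; sustained real-world bandwidth will be lower than these peaks.

```python
# Peak theoretical DDR5 bandwidth per platform (sketch).
# Assumes dual-channel operation with an 8-byte data path per channel;
# real-world sustained bandwidth is lower.
def peak_bandwidth_gbs(mega_transfers_per_sec, channels=2, bytes_per_transfer=8):
    """Return peak bandwidth in GB/s (decimal gigabytes)."""
    return mega_transfers_per_sec * 1e6 * channels * bytes_per_transfer / 1e9

platforms = {
    "DDR5-5600 (Intel 14th/13th Gen)": 5600,
    "DDR5-5200 (Ryzen 7000)": 5200,
    "DDR5-4800 (Intel 12th Gen)": 4800,
}

for name, rate in platforms.items():
    print(f"{name}: {peak_bandwidth_gbs(rate):.1f} GB/s")  # e.g. 89.6 GB/s for DDR5-5600
```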

Note: As we are running a completely refreshed CPU test suite, we are currently re-testing other processors for our data sets. These will be added to the graphs below as soon as we have more results, and they will also be added to our Bench database. Thanks for your understanding.

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
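The "int8" in the CaffeNet model name refers to quantized inference: weights and activations are stored as 8-bit integers rather than 32-bit floats, shrinking the model and letting CPUs lean on wide SIMD integer paths. As a rough illustration, here is a minimal sketch of symmetric int8 quantization in pure Python; this is a conceptual toy, not ONNX Runtime's actual quantization implementation.

```python
# Symmetric int8 quantization sketch: map floats into [-127, 127]
# with a single scale factor, then dequantize to recover approximations.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.51, -1.27, 0.003, 0.98, -0.64]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each int8 value occupies 1 byte instead of 4, and the reconstruction
# error is bounded by half the quantization step (scale / 2).
for w, a in zip(weights, approx):
    assert abs(w - a) <= scale / 2 + 1e-9
print(q)
```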

(6-2) DeepSpeech 0.6: Acceleration CPU

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
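The TensorFlow results above are reported at several batch sizes because batching amortizes per-call overhead across more images, generally raising throughput. The toy timing harness below shows the shape of that measurement; the `fake_inference` workload is a stand-in of our own invention, not the actual test suite, which runs TensorFlow 2.12 with VGG-16 and GoogLeNet.

```python
import time

def fake_inference(batch_size):
    # Stand-in for a model's forward pass: fixed per-call overhead
    # plus per-image work, mimicking why larger batches raise throughput.
    time.sleep(0.002 + 0.0005 * batch_size)

def images_per_second(batch_size, iterations=5):
    start = time.perf_counter()
    for _ in range(iterations):
        fake_inference(batch_size)
    elapsed = time.perf_counter() - start
    return batch_size * iterations / elapsed

for bs in (16, 64, 256):
    print(f"batch {bs:>3}: {images_per_second(bs):.0f} images/s")
```

With this cost model, throughput rises with batch size because the fixed 2 ms per-call overhead is split across more images, which mirrors the pattern in the charts above.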

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

Digesting performance in the latest addition to our CPU test suite for 2024, the extra L3 cache on the Ryzen 7000X3D processors delivers a clear benefit in ONNX when using the INT8 model. Even in ONNX with Super-Res-10, the AMD Ryzen CPUs seem to do a better job of inferencing than Intel's chips. More L3 cache does appear to benefit performance in this area, and in TensorFlow with VGG-16, the AMD Ryzen 9 7950X performs best. Outside of that, the Core i9-14900K is next, just marginally ahead of the Core i9-13900K.

The Core i7-14700K performs respectably, and while the Core i5-14600K can certainly handle AI and inferencing, it doesn't offer quite the grunt of the Core i9 and Core i7 series chips.


57 Comments


  • cmdrdredd - Wednesday, October 18, 2023 - link

    You aren't even the target market for any of this so your comment is useless and pointless. You are not an enthusiast, gamer, or need the power for work.
  • ItsAdam - Wednesday, October 18, 2023 - link

    I own a 4090 and I wouldn't want a CPU with the same power as it. I was waiting for Meteor Lake to upgrade from my 5800X3D, but when I heard it was a refresh I was like, oh no.

    Looks like I'm going to be a beta tester for AMD's 6000 series, and I'm quite frankly bored of AMD and its crash-test consumer development.

    I know times are changing, I know, and I loved to tinker, but it's getting too long in the tooth with AMD ATM with all the AGESA "fixes", which usually mean a big performance loss.

    I really wanted INTEL to come out brawling, but all they're doing is digging their own grave.

    They shouldn't have released ANOTHER refresh, especially one as bad as this.
  • lilo777 - Wednesday, October 18, 2023 - link

    You do not pay the utility for peak power consumption. You pay for actual consumption which is much lower because power peaks are rare and short.
  • mode_13h - Wednesday, October 18, 2023 - link

    > power peaks are rare and short.

    Depends on what you're doing. If rendering, video encoding, or lots of software compilation, then not necessarily.
  • mode_13h - Wednesday, October 18, 2023 - link

    Thanks for continuing to run SPEC2017, but I'm really missing the cumulative scores. Also, I wish we could get cumulative scores on E-cores only and P-cores only, as well as populating that graph with some other popular CPUs, as was done up to the i9-12900K review.

    For reference, please see the chart titled "SPEC2017 Rate-N Estimated Total", at the bottom of this page:

    https://www.anandtech.com/show/17047/the-intel-12t...

    The following page of that review goes on to explore the P & E cores.

    Perhaps this would be good material for a follow-on article?
  • eloyard - Thursday, October 19, 2023 - link

    The 2000s called, they want their NetBurst back.
  • Reinforcer - Saturday, October 28, 2023 - link

    Then don't let the motherboard run away with power, lol. Honestly, what is wrong with you reviewers fixated on how much power it can draw? Set it to Intel's 253 W limit and enjoy almost the same performance as one that is consuming stupid amounts of power. It's not rocket science, or do we not know how to set a motherboard up these days?
