CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers such as AMD (Ryzen AI) and Intel (the Meteor Lake mobile platform) building AI-driven hardware into their silicon, 2024 looks set to bring many applications using AI-based technologies to market.

We are using DDR5 memory on the Core i9-14900K, Core i7-14700K, Core i5-14600K, and Intel's 13th Gen at the respective JEDEC settings. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800B CL40 - Intel 12th Gen

Note: As we are running a completely refreshed CPU test suite, we are currently re-testing other processors for our data sets. These will be added to the graphs below as soon as we have more results, and they will also be added to our Bench database. Thanks for your understanding.

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-2) DeepSpeech 0.6: Acceleration CPU
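The ONNX Runtime and DeepSpeech entries above are CPU-only throughput tests: the model is run repeatedly and the score reflects inferences completed per second. As a rough sketch of the timing pattern such a run boils down to (this helper is our own illustration, not the actual harness used by the suite), warm-up passes precede the measured loop so cold caches don't skew the result:

```python
import time

def benchmark_cpu_inference(infer, warmup=5, iters=50):
    """Time a CPU inference callable; return inferences per second."""
    for _ in range(warmup):            # warm-up passes are not timed
        infer()
    start = time.perf_counter()
    for _ in range(iters):             # measured loop
        infer()
    elapsed = time.perf_counter() - start
    return iters / elapsed
```

With onnxruntime, `infer` would wrap a call such as `session.run(...)` on a session restricted to the `CPUExecutionProvider`, which is what keeps these results CPU-only.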

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
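The TensorFlow runs sweep batch size (16, 64, and 256) because larger batches amortize per-batch framework overhead across more images, typically raising throughput until memory bandwidth becomes the limiting factor. The figure graphs like these report is images per second, and the conversion is simple arithmetic (the helper below is our own illustration of that arithmetic, not the suite's code):

```python
def images_per_second(batch_size, batches, total_seconds):
    """Convert a timed run of `batches` batches into an images/sec score."""
    return (batch_size * batches) / total_seconds

# e.g. 64 batches of 16 images completed in 8 seconds:
print(images_per_second(16, 64, 8.0))  # → 128.0
```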

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

Digesting performance in the latest additions to our CPU test suite for 2024, the extra L3 cache on the Ryzen 7000X3D processors delivers a clear benefit in ONNX when using the INT8 model. Even in ONNX with Super-Res-10, the AMD Ryzen CPUs seem to do a better job of inferencing than Intel's chips, so more L3 cache does appear to help performance in this area. In TensorFlow with VGG-16, the AMD Ryzen 9 7950X again performs best; outside of that, the Core i9-14900K is next, just marginally ahead of the Core i9-13900K.

The Core i7-14700K performs respectably, and while the Core i5-14600K can certainly handle AI and inferencing, it doesn't offer quite the grunt of the Core i9 and Core i7 series chips.


57 Comments


  • mga318 - Tuesday, October 17, 2023 - link

    Well, I just built a new system with a Ryzen 9 7900X that I got on sale for $380 a couple of weeks ago and have set at a 105W TDP. Looks like I have no regrets here, either in performance or efficiency.
  • Farfolomew - Tuesday, October 17, 2023 - link

    The new Pentium 5!
  • Gradius2 - Tuesday, October 17, 2023 - link

    So 13900k is better as you can get one for $450
  • charlesg - Tuesday, October 17, 2023 - link

    Re all the justified comments about excessive power draw, is this not only when using it at peak capacity?

    If you're using it at peak capacity, all the time, then I agree, you've got the wrong CPU. It's like driving your vehicle at or over 6000rpm all the time.

    For everyone else who's using a compatible MB and prior gen intel cpu, who wants a drop in upgrade, this may be useful?

    (I'm using an amd 5950x here, with no regrets. When I need the cores (and I do use them), it's there. The rest of the time, it just idles..)
  • rUmX - Tuesday, October 17, 2023 - link

    I have four 7950X machines where I encode using Handbrake SVT-AV1 almost 24/7. AT shows that the Intel is faster, but at literally 2x the power consumption, so AMD is still better. Besides, my ambient rises at least 6-7C with the machines going 100%; I can't imagine how the 13900K/14900K would behave. Insane. With all the machines at 100%, even with a 5000BTU AC it blows my circuit breaker, so I run the AC power with an extension from another room.

    PS: Before anyone says I should have gone for a 64+ core EPYC, it was still cheaper to build these four systems than a 64c EPYC, taking into consideration 12-channel memory, a server board, etc., and these run at least 5.1GHz all-core versus an EPYC at ~3.5GHz.
  • flgt - Wednesday, October 18, 2023 - link

    The insane part of what you are doing is encoding in software. Give up on some quality and run Quick Sync on an Intel processor along with Arc GPUs for AV1.
  • rUmX - Monday, October 23, 2023 - link

    I did mess around with HW AV1 encoding on an Intel Arc A380. Quality was pretty good, but the file sizes are at least double (for GPU) for very similar or even better quality (for SVT-AV1). I'm not doing live streaming, more like encoding for VOD, so in this case file size and bitrate are important, as well as storage use. I'm using SSDs, so smaller file size = better. On top of that, the smaller the size, the more users I can serve at port speed.
  • meacupla - Wednesday, October 18, 2023 - link

    You can certainly drop it in, but it's not an upgrade going from 13th to 14th gen. It's a sidegrade at best. For the price, you would be better off upgrading to watercooling with a 280mm or 360mm radiator.
  • SanX - Wednesday, October 18, 2023 - link

    Ideally comparisons have to include previous gen competition too like AMD 5950x to convince people to upgrade
  • SanX - Wednesday, October 18, 2023 - link

    AMD has to start selling 32-core consumer chips based on their new 16-core chiplets versus the older 8-core ones
