CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.
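At their core, these inferencing benchmarks all measure the same thing: how many forward passes a processor can complete per second. The sketch below shows a simplified version of such a timing harness; the `toy_model` stand-in and iteration counts are illustrative placeholders, not our actual test configuration, and a real benchmark would wrap an ONNX Runtime or TensorFlow session in place of the callable.

```python
import time

def inference_throughput(model, sample, warmup=10, iters=100):
    # Warm-up passes let caches fill and clocks settle before timing
    for _ in range(warmup):
        model(sample)
    start = time.perf_counter()
    for _ in range(iters):
        model(sample)
    elapsed = time.perf_counter() - start
    return iters / elapsed  # inferences per second

# A trivial stand-in "model": a sum of squares in pure Python.
# A real harness would call an ONNX Runtime or TensorFlow session here.
def toy_model(vec):
    return sum(x * x for x in vec)

ips = inference_throughput(toy_model, list(range(1024)))
print(f"{ips:,.0f} inferences/sec")
```

The warm-up loop matters: without it, first-run effects such as cold caches and frequency ramp-up would skew the measured throughput.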

With chip makers such as AMD (Ryzen AI) and Intel (the Meteor Lake mobile platform) building AI-driven hardware into their silicon, 2024 looks set to bring many applications using AI-based technologies to market.

We are using DDR5 memory on the Core i9-14900K, Core i7-14700K, Core i5-14600K, and Intel's 13th Gen at their respective JEDEC settings. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800B CL40 - Intel 12th Gen

Note: As we are running with a completely refreshed CPU test suite, we are currently re-testing other processors for our data sets. These will be added to the graphs below as soon as we have more results, and they will also be added to our Bench database. Thanks for your understanding.

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-2) DeepSpeech 0.6: Acceleration CPU

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

Digesting the results from the latest addition to our 2024 CPU test suite, the extra L3 cache on the Ryzen 7000X3D processors delivers a clear benefit in ONNX Runtime when using the INT8 model. Even in ONNX with Super-Res-10, the AMD Ryzen CPUs do a better job of inferencing than Intel's. More L3 cache does appear to benefit performance in this area, and in TensorFlow with VGG-16 the AMD Ryzen 9 7950X performs best. Outside of that, the Core i9-14900K is next best, just marginally ahead of the Core i9-13900K.
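The cache-sensitivity suggested by the X3D results can be illustrated with a toy experiment: perform the same number of memory reads over working sets that do and do not fit in cache. This is a simplified sketch rather than part of our test suite, and the sizes chosen are arbitrary illustrations; on real hardware the gap is far larger than pure-Python interpreter overhead lets this toy show.

```python
import array
import time

def time_reads(working_set_elems, total_reads=1_000_000):
    # Cycle through a working set summing elements. Larger sets
    # spill out of cache, so the same read count costs more on
    # real hardware (interpreter overhead mutes the effect in
    # pure Python, but the shape of the experiment is the same).
    data = array.array("q", range(working_set_elems))
    n = len(data)
    total = 0
    start = time.perf_counter()
    for i in range(total_reads):
        total += data[i % n]
    return time.perf_counter() - start

small = time_reads(4_096)       # ~32 KiB: fits in L1/L2
large = time_reads(4_194_304)   # ~32 MiB: exceeds many L3 caches
print(f"small working set: {small:.3f}s, large: {large:.3f}s")
```

Quantized INT8 models shrink both weights and activations, so more of the working set stays resident in the X3D parts' stacked L3, which is consistent with the advantage we measured.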

The Core i7-14700K performs respectably, and while the Core i5-14600K can certainly handle AI and inferencing, it doesn't offer quite the grunt of the Core i9 and Core i7 series chips.

57 Comments

  • DabuXian - Tuesday, October 17, 2023 - link

    so basically a mere 6% better Cinebench MT score at the cost of almost 100 extra watts. I dunno in what universe would anyone want this instead of a 7950x.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    At platform level it is over 200W difference. Impressive.
    And I agree, nobody in the right mind should get Intel over AMD, unless they have a very specific workload in which that 6% makes a difference worth hundreds/thousands of dollars in electricity per year.
  • schujj07 - Tuesday, October 17, 2023 - link

    If you have a workload like that then you run Epyc or Threadripper as the task is probably VERY threaded.
  • shaolin95 - Thursday, December 21, 2023 - link

    😆😆😆😆😆😆 AMDrip fanboys are hilarious and delusional
    And what a bullshit comment about the electricity bill per year... thousands.. really???? Dang kid, you are hilariously sad
  • lemurbutton - Tuesday, October 17, 2023 - link

    Who cares about Cinebench MT? It's a benchmark for a niche software in a niche.
  • powerarmour - Wednesday, October 18, 2023 - link

    Wouldn't buy the 7950X either, not interested in any CPU that draws >200W unless I'm building a HEDT workstation.
  • shabby - Tuesday, October 17, 2023 - link

    Lol @ the power usage, this will make a nice heater this winter.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    I find it amazing. It takes more than 200W MORE to beat the 7950.
    The difference in efficiency is unbelievable.
    Buying Intel today still makes no sense unless that extra 5-10% in some specific benchmark really make a huge difference. Otherwise it'll cost you dearly in electricity.
  • bug77 - Thursday, October 19, 2023 - link

    While Anand has a policy of testing things out-of-the-box, which is fine, it is well known ADL and RPL can be power constrained to something like 125W max, while losing performance in the single digits range.
    It would be really useful if we had a follow up article looking into that.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    So, 6% faster than previous gen, a bit (10%?) faster than AMD's 7950.
    Consuming over 200W *more* than the Ryzen 7950.
    I'd say Intel's power efficiency is still almost half that of the ryzen. It's amazing how far behind they are.