CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

As chip makers such as AMD (with Ryzen AI) and Intel (with its Meteor Lake mobile platform) build AI-driven hardware into their silicon, 2024 looks set to see many more applications using AI-based technologies come to market.

We are using DDR5 memory on the Core i9-14900K, Core i7-14700K, Core i5-14600K, and Intel's 13th Gen processors at their relevant JEDEC settings. The same methodology is used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800 (B) CL40 - Intel 12th Gen

Note: As we are running with a completely refreshed CPU test suite, we are currently re-testing other processors for our data sets. These will be added to the graphs below as soon as we have more results, and they will also be added to our Bench database. Thanks for your understanding.

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-2) DeepSpeech 0.6: Acceleration CPU

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)
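
For readers curious what the ONNX Runtime tests above involve under the hood, the sketch below shows a minimal CPU-only inference loop in Python. It is an illustration only: the model filename, input shape, and iteration counts are assumptions based on the quantized CaffeNet model from the ONNX Model Zoo, not our exact test harness.

    import time
    import numpy as np
    import onnxruntime as ort

    # Illustrative filename; the ONNX Model Zoo distributes a quantized CaffeNet as an .onnx file
    MODEL_PATH = "caffenet-12-int8.onnx"

    # Restrict execution to the CPU provider, mirroring the "CPU Only" runs above
    session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name

    # CaffeNet takes a single RGB image in NCHW layout (224x224 assumed here)
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Warm up, then time a fixed number of inferences
    for _ in range(10):
        session.run(None, {input_name: dummy})

    runs = 200
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: dummy})
    elapsed = time.perf_counter() - start

    print(f"{runs / elapsed:.1f} inferences per second")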

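The TensorFlow runs exercise the same idea with full networks such as VGG-16 and GoogLeNet at several batch sizes. A rough CPU-only throughput sketch using the stock Keras VGG-16 follows; again, the parameters here are illustrative assumptions rather than our actual test configuration.

    import time
    import numpy as np
    import tensorflow as tf

    # Hide any GPU so the measurement stays on the CPU
    tf.config.set_visible_devices([], "GPU")

    BATCH_SIZE = 16  # the suite also uses 64 (and 256 for GoogLeNet)

    # Stock Keras VGG-16; random weights are fine for measuring throughput
    model = tf.keras.applications.VGG16(weights=None)

    batch = np.random.rand(BATCH_SIZE, 224, 224, 3).astype(np.float32)

    # Warm-up pass, then time a handful of batches
    model.predict(batch, batch_size=BATCH_SIZE, verbose=0)

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch, batch_size=BATCH_SIZE, verbose=0)
    elapsed = time.perf_counter() - start

    print(f"{runs * BATCH_SIZE / elapsed:.1f} images per second")
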
Digesting performance in this latest addition to our CPU test suite for 2024, the extra L3 cache on the Ryzen 7000X3D processors delivers a clear benefit in ONNX Runtime when using the INT8 CaffeNet model. Even with Super-Res-10, the AMD Ryzen CPUs seem to do a better job of inferencing than their Intel counterparts, which suggests that additional L3 cache can benefit performance in this area. In TensorFlow with VGG-16, the AMD Ryzen 9 7950X seems to perform the best; beyond that, the Core i9-14900K is next, just marginally ahead of the Core i9-13900K.

The Core i7-14700K performs respectably, and while the Core i5-14600K can certainly handle AI and inferencing workloads, it doesn't offer quite the grunt of the Core i9 and Core i7 series chips.

57 Comments

  • colinstu - Tuesday, October 17, 2023 - link

    This power consumption / heat output is insane… this is putting their 90nm Netburst Prescott / Pentium D Smithfield days to shame. Remember when Apple left the IBM/Motorola alliance? Power architecture power consumption was going through the roof, and Intel had JUST pivoted back to the PIII/Pentium M-based Core arch. No wonder Apple dumped Intel; they called what they were seeing really early on. ARM for Windows/Linux desktop needs to get more serious, Apple's desktop ARM is proving nearly as powerful using a fraction of the power draw. Windows is ready, and can even run non-ARM code too.
  • herozeros - Tuesday, October 17, 2023 - link

    My AMD AM5 would like a word with you …
  • FLEXOBENDER - Tuesday, October 17, 2023 - link

    What point are you trying to make, that you have no clue how thermodynamics work?
    This 14900K manages to pull 430 watts peak. 430. 0.43 kilowatts. One CPU.
    It is still beaten by an 80-watt-peak 7800X3D. What is your point?
  • boozed - Wednesday, October 18, 2023 - link

    I think the point was that you don't have to abandon x86 for ARM to achieve good efficiency, just Intel.
  • The Von Matrices - Thursday, October 19, 2023 - link

    People remember Netburst CPUs as being absurdly power hungry, but they forget that even the most power-hungry Netburst CPUs still only had a TDP of 130W. Today that would be considered a normal or even a low TDP for a flagship CPU. And if anything, comparing TDP figures understates the gap, since today's flagship chips draw well beyond their rated TDP.
  • GeoffreyA - Friday, October 20, 2023 - link

    And didn't Cedar Mill further drop that to a 65W TDP?
  • GeoffreyA - Friday, October 20, 2023 - link

    Possibly, ISA is just a small piece of the power puzzle, and the rest of the design is what's carrying the weight.

    An interesting article:
    https://chipsandcheese.com/2021/07/13/arm-or-x86-i...
  • Azjaran - Tuesday, October 17, 2023 - link

    Did I miss something, or are there no temperatures shown? Because 428W isn't exactly on the low side and demands a good cooling solution.
  • Gastec - Tuesday, October 17, 2023 - link

    Just one question: do these AI "tools" connect to the Internet, after they "measure specific system characteristics, including telemetry from integrated sensors", to send that data to those Intel servers that are in the "cloud"?
  • TheinsanegamerN - Tuesday, October 17, 2023 - link

    Of course they do. Even if they say they don't.
