ASUS Zenbook 14 OLED UX3405MA: AI Performance

As technology progresses at a breakneck pace, so do the demands of modern applications and workloads. As artificial intelligence (AI) and machine learning (ML) become increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. To this end, we have added AI and inferencing benchmarks to our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

Given that chip makers such as AMD, with Ryzen AI, and Intel, with the aptly named AI Boost engine in their Meteor Lake mobile platform, now build AI-driven hardware into their silicon, AI and inferencing benchmarks will be a mainstay in our test suite as we go further into 2024 and beyond.

The Intel Core Ultra 7 155H includes a dedicated Neural Processing Unit (NPU) embedded within the SoC tile, which is capable of providing up to 11 TeraOPS (TOPS) of matrix math computational throughput. You can find more architectural information on Intel's NPU in our Meteor Lake architectural deep dive. While AMD's and Intel's implementations of AI engines within their Phoenix and Meteor Lake architectures are much simpler than dedicated AI inferencing hardware, these NPUs are designed to provide a high-efficiency processor for handling light-to-modest AI workloads, rather than a boost to overall inferencing performance. For all of these mobile chips, the GPU is typically the next step up for truly heavy AI workloads that need maximum performance.
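To put that 11 TOPS figure in perspective, a quick back-of-envelope calculation shows the theoretical ceiling it implies for a model like VGG-16. The per-inference cost below is our own approximation (roughly 15.5 billion multiply-accumulates per 224x224 image, counted as ~31 billion ops), not a number from Intel:

```python
# Back-of-envelope: theoretical peak VGG-16 throughput at 11 TOPS.
# Assumption (not from the article): one VGG-16 forward pass on a
# 224x224 image costs ~15.5 billion multiply-accumulates, counted
# here as ~31 billion ops (1 MAC = 2 ops).
NPU_TOPS = 11e12          # Intel AI Boost peak throughput, ops/second
OPS_PER_INFERENCE = 31e9  # approximate VGG-16 cost, ops/image

peak_images_per_sec = NPU_TOPS / OPS_PER_INFERENCE
print(f"Theoretical ceiling: ~{peak_images_per_sec:.0f} images/sec")
```

Real-world throughput lands well below a peak-TOPS ceiling, of course, since it ignores memory bandwidth, utilization, and precision details; it is only useful for a sense of scale.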

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)

Looking at performance in our typical TensorFlow 2.12 inferencing benchmarks from our CPU suite, using both the VGG-16 and GoogLeNet models, we can see that the Intel Core Ultra 7 155H is no match for any of the AMD Phoenix-based chips.

(6-4) UL Procyon AI Computer Vision: MobileNet V3 (int)

(6-4) UL Procyon AI Computer Vision: Inception V4 (int)

(6-4) UL Procyon AI Computer Vision: ResNet 50 (int)

Meanwhile, looking at inference performance on the hardware actually optimized for it – NPUs, and to a lesser extent, GPUs – UL's Procyon Computer Vision benchmark collection offers support for multiple execution backends, allowing it to be run on CPUs, GPUs, or NPUs. For Intel chips we're using the Intel OpenVINO backend, which enables access to Intel's NPU. AMD, however, does not offer a custom execution backend for this test, so while Windows ML is available as a fallback option to access the CPU and the GPU, it does not have access to AMD's NPU.
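The backend situation described above boils down to a device-preference fallback: use the NPU where a backend exposes it, otherwise drop to the GPU, then the CPU. The sketch below illustrates that logic in plain Python; the device names and the helper function are our own illustration, not Procyon's or OpenVINO's actual API:

```python
# Sketch of the execution-backend fallback described above: prefer
# the NPU, then the GPU, then the CPU. The helper and device names
# are hypothetical, for illustration only.
PREFERRED_ORDER = ["NPU", "GPU", "CPU"]

def pick_device(available):
    """Return the most preferred execution device that is present."""
    for device in PREFERRED_ORDER:
        if device in available:
            return device
    raise RuntimeError("no supported execution device found")

# Intel via OpenVINO exposes all three backends; AMD via the
# Windows ML fallback exposes only the CPU and GPU.
print(pick_device(["CPU", "GPU", "NPU"]))  # NPU
print(pick_device(["CPU", "GPU"]))         # GPU
```

This is why the same benchmark can exercise Intel's NPU while leaving AMD's XDNA engine idle: the preference order only matters for devices a backend actually exposes.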

With Meteor Lake's NPU active and running the INT8 version of the Procyon Computer Vision benchmarks, in Inception V4 and ResNet 50 we saw good gains in AI inferencing performance compared to using the CPU only. The Meteor Lake Arc Xe LPG graphics also did well, although the NPU is designed to be more power efficient with these workloads, and more often than not it significantly outperforms the GPU at the same time.

This is just one test in a growing universe of NPU-accelerated applications. But it helps to illustrate why hardware manufacturers are so interested in NPUs: they deliver a lot of performance for the power, at least as long as a workload is restricted enough that it can be run on an NPU.
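The performance-per-watt argument is easy to see with a worked example. The figures below are hypothetical, chosen only to illustrate the shape of the trade-off, not measured values from this review:

```python
# Illustration of the perf-per-watt argument with hypothetical
# numbers (not measurements from this review): an NPU that is
# slower than the GPU in absolute terms can still win decisively
# on efficiency.
npu = {"images_per_sec": 300, "watts": 3}
gpu = {"images_per_sec": 450, "watts": 15}

npu_eff = npu["images_per_sec"] / npu["watts"]  # 100 images/sec/W
gpu_eff = gpu["images_per_sec"] / gpu["watts"]  # 30 images/sec/W
print(f"NPU: {npu_eff:.0f} img/s/W vs GPU: {gpu_eff:.0f} img/s/W")
```

Under these assumed numbers, the GPU finishes a fixed batch 1.5x faster, but the NPU does the same work on a fraction of the energy, which is exactly the trade-off that matters for battery-powered laptops.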

That all being said, even with the dedicated Intel AI Boost NPU within the SoC tile, the use cases are still very specific. Even when trying generative AI within Adobe Photoshop using Neural Filters, Adobe was relying much more on the CPU than on the GPU or the NPU, which shows that just because a task is related to generative AI or inferencing, the NPU isn't always guaranteed to be used. These are still the very early days of NPUs, and even just benchmarking them for an active task remains an interesting challenge.


69 Comments


  • Ryan Smith - Thursday, April 11, 2024 - link

    Hmm, interesting. We're going to have to double-check with Intel on that.

    Ark has it down as 8 GPU cores. However the official presentation from the December "launch" had it down as 7: https://images.anandtech.com/galleries/9400/Intel%...

    It's not uncommon for Intel presentations to have errors. But they also crop up in Ark from time to time as well. So 8 is probably correct, but we should get confirmation just in case.
  • meacupla - Friday, April 12, 2024 - link

    Yeah, I only noticed the difference when I had to look up why the MSI Claw was using a 135H.
  • Ryan Smith - Friday, April 12, 2024 - link

    And Intel has since confirmed it to be 8. The article (and past articles with the bad info) have been updated. Thanks for pointing that out, meacupla!
  • The_Assimilator - Thursday, April 11, 2024 - link

    An entire page, and an entire paragraph in the conclusion, wasted on pointless AI nonsense. Nobody reading this site cares about that rubbish, stop wasting our time.
  • The_Assimilator - Thursday, April 11, 2024 - link

    Also good to see Intel's "revolutionary" new chip pulling the exact same BS as their past chips have regarding power. "28W" that actually uses 65W, incredible.
  • mode_13h - Monday, April 15, 2024 - link

    The only thing your comment tells me is that the AI benchmarks should be updated to include more typical real-world AI usage examples. Aside from things like Stable Diffusion, where most readers are likely to encounter AI in their everyday computing is in apps like video conferencing (background replacement, noise suppression) and photo editing.
  • NextGen_Gamer - Thursday, April 11, 2024 - link

    Those iGPU numbers are way off - the Socket AM5 8700G is obviously going to outperform the mobile variant across the board, based on TDP alone. Even if the mobile version was using LPDDR5X memory (which in this case it wasn't), it wouldn't be enough to close the gap.

    AnandTech: ever given any thought to getting all these chips in mini PC form? Most of those allow much more control over the TDP, along with memory/SSD being your own picks. And then that all but eliminates the cooling issues of a laptop.
  • meacupla - Thursday, April 11, 2024 - link

    While you can get a 7940HS miniPC now, Core Ultra miniPCs aren't out yet.
    Core Ultra laptops are showing up at reviewers just now, but they've barely left the gate in terms of availability to regular consumers.

    Asus announced NUC 14 two weeks ago, on Mar 27. They are just now being listed at various retailers, so I would expect them to show up in stock a month or two from now.
  • ricebunny - Friday, April 12, 2024 - link

    My retailer indicates that the Asus NUC 14 Pro+ is available. I would be curious to see how the 155H performs in that chassis with a lot more thermal headroom.
  • haplo602 - Thursday, April 11, 2024 - link

    "On the flip side, as we established with our look at power consumption, the Ryzen 9 7940HS laptop is able to sustain a much higher TDP overall – so that memory bandwidth deficit is counterbalanced by a sizable TDP advantage."

    Are you really a tech review site? In this case memory bandwidth is king, with TDP a distant second (given comparable compute power). And Intel has a huge advantage with LPDDR5X here that it somehow manages to waste.

    Also, the official title is CPU review and there are graphs for BATTERY CHARGE TIME??? Really? What does that have to do with the CPU at all?
