CPU Benchmark Performance: Power And Office

Our previous sets of ‘office’ benchmarks have often been a mix of science and synthetics, so this time we wanted to keep our office section focused purely on real-world performance. We've also incorporated our power testing into this section.

The biggest update to our Office-focused tests for 2023 and beyond is UL's Procyon software, the successor to PCMark. Procyon benchmarks office performance using Microsoft Office applications, as well as Adobe's Photoshop/Lightroom photo editing software and Adobe Premiere Pro's video editing capabilities. Due to issues between UL Procyon and the video editing test, we haven't been able to run that workload properly, but once we identify a fix with UL, we will re-test each chip.

We are using DDR5-4800 memory on the Intel Core i3-13100F as per the JEDEC specifications. Other recent chips, such as Intel's 13th/12th Gen Core series and Ryzen 7000 processors, are also tested at the rated JEDEC specifications. We tested the aforementioned platforms with the following settings:

  • DDR5-5600B CL46 - Intel 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800 (B) CL40 - Intel 12th Gen
  • DDR5-4800 (B) CL40 - Intel 13th Gen Core i3 series

All other CPUs, such as Ryzen 5000 and 3000, were tested at the relevant JEDEC settings per each processor's individual DDR4 memory support.

Power

The nature of reporting processor power consumption has become, in part, a bit of a nightmare. Historically, the peak power consumption of a processor, as purchased, was given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard-coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.

AMD and Intel define TDP differently, though broadly speaking it is applied in the same way. The differences come from turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we've got a few articles worth reading on the subject.
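To make the interplay between PL1, PL2, and the turbo time window a little more concrete, below is a small, heavily simplified Python sketch of how a time-limited power budget behaves. It is not Intel's or AMD's actual firmware algorithm; real implementations track a weighted moving average of package power, and the 65 W PL1, 89 W PL2, and 28-second time constant used here are illustrative placeholders rather than values measured in this review.

    # A toy model of Intel-style power limits (PL1/PL2/Tau). Real silicon
    # tracks an exponentially weighted moving average of package power;
    # this sketch only approximates that behaviour for illustration.

    def simulate_power(requested_w, pl1=65.0, pl2=89.0, tau=28.0, dt=1.0, duration=120.0):
        """Yield (time_s, delivered_w) for a constant requested load."""
        ewma = 0.0               # running average of delivered package power
        alpha = dt / tau         # smoothing factor derived from the time constant
        t = 0.0
        while t < duration:
            # The chip may draw up to PL2 while the running average sits at or
            # below PL1; once that budget is exhausted it is clamped back to PL1.
            cap = pl2 if ewma <= pl1 else pl1
            delivered = min(requested_w, cap)
            ewma += alpha * (delivered - ewma)
            yield t, delivered
            t += dt

    # Example: a sustained 120 W request boosts to PL2 for a while, then
    # settles at PL1 once the averaged power catches up.
    for t, w in simulate_power(120.0):
        if t % 20 == 0:
            print(f"t={t:5.0f}s  delivered={w:5.1f} W")

With these placeholder numbers, the model boosts at 89 W for roughly half a minute before settling back to 65 W, which is the general shape of behaviour the PL1/PL2/Tau scheme is designed to produce.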

(0-0) Peak Power

Looking at the peak power values for all of the sub-$350 CPUs we've tested with the latest CPU suite, the Core i3-13100F, as expected, draws the least power overall. The difference between the Core i3-13100F and the Core i3-12300 is around 7 W, which isn't massive, especially given they use the same Intel Golden Cove performance cores.

Diving into the power consumption of the Core i3-13100F during a sustained Prime95 load, we found that power was delivered consistently between 72 and 75 W, with no drop in frequency or power unless the compute load itself dropped off. This means that peak power is sustained for the duration of a full load. It also shows that Intel's Core i3-13100F stays well within its associated turbo power limit (PL2) of 89 W.
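For readers who want to check this kind of behaviour on their own systems, the sketch below shows one way to log package power on Linux through the RAPL powercap interface while a stress test such as Prime95 runs. This is not the instrumentation used for our testing; it assumes an Intel platform with the intel_rapl driver exposing the package domain at /sys/class/powercap/intel-rapl:0, and reading the energy counter typically requires root privileges.

    # Log average CPU package power once per second via Linux's RAPL
    # powercap sysfs interface. Assumes the intel_rapl driver exposes the
    # package domain at /sys/class/powercap/intel-rapl:0.

    import time

    ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"                # cumulative energy, microjoules
    MAX_RANGE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"   # counter wrap-around point

    def read_uj(path):
        with open(path) as f:
            return int(f.read().strip())

    def log_package_power(interval_s=1.0, samples=60):
        """Print the average package power over each sampling interval."""
        max_range = read_uj(MAX_RANGE)
        prev = read_uj(ENERGY)
        for _ in range(samples):
            time.sleep(interval_s)
            cur = read_uj(ENERGY)
            # Handle the energy counter wrapping back to zero mid-run.
            delta_uj = cur - prev if cur >= prev else (max_range - prev) + cur
            prev = cur
            print(f"package power: {delta_uj / 1e6 / interval_s:6.2f} W")

    if __name__ == "__main__":
        log_package_power()

On the Core i3-13100F, a logger like this would be expected to hover in the low-to-mid 70 W range under Prime95, mirroring the 72-75 W we recorded.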

Office/Web

(1-1) Google Octane 2.0 Web Test

(1-2) UL Procyon Office: Word

(1-3) UL Procyon Office: Excel

(1-4) UL Procyon Office: PowerPoint

(1-5) UL Procyon Office: Outlook

(1-6) UL Procyon Photo Editing: Image Retouching

(1-7) UL Procyon Photo Editing: Batch Processing

(1-8) Kraken 1.1 Web Test

In office and productivity-based benchmarks, the Core i3-13100F does very well considering it's a quad-core chip pitted against six- and eight-core parts, and even the Core i5-13600K, which is a 14C/20T part. As expected, the Core i3-13100F is slightly ahead of the Core i3-12300, as its cores are clocked slightly higher.

Comparing Intel's latest quad-core to AMD's most recent quad-core, Intel has a distinct advantage in both IPC, thanks to its newer core design (Raptor/Alder Lake versus Zen 3), and clock speeds.

Comments

  • nandnandnand - Friday, April 21, 2023 - link

    Now that AMD is offering an iGPU on all current AM5 CPUs, it will be interesting to see if Intel changes its 'F' strategy in any way, which has been the norm for several generations in a row. Intel already cuts down EU count from 32 to 24/16 (UHD 730/710). Might as well go to 16 (50%) instead of zero graphics.

    It would also be nice to see Intel compete with AMD APUs. Put 96-128 EUs on a desktop chip.
  • mode_13h - Friday, April 21, 2023 - link

    Thanks for the review, guys. Unfortunately, it didn't answer a key question I had: WHAT IS THIS CHIP?

    Was it made on Intel 7 or Raptor Lake's improved version? Does it have any silicon-level improvements or tweaks? Is it exactly the same die as the small Alder Lake desktop CPUs, but with the GPU disabled and maybe some microcode tweaks?
  • nandnandnand - Saturday, April 22, 2023 - link

    I don't know if it was ever officially confirmed anywhere, but it should be identical to the i3-12100F, based on the same 6+0 die used to make the 12400/12500/12600 (non-K) on the same version of Intel 7. Just with 100 MHz higher base and 200 MHz higher boost clocks.

    Intel will get another chance to do something interesting at the low-end with Raptor Lake Refresh later this year. For example, a 4+4 based on a different die.

    https://videocardz.com/newz/upcoming-intel-raptor-...
  • Otritus - Saturday, April 22, 2023 - link

    All 13th generation CPUs below the 13600K are made using Alder Lake dies. The i5s are made with Alder Lake's 8+8 design. The i3s are most likely made with Alder Lake's 6+0 design, as cutting from 8+8 down to 4+0 is probably less profitable compared to using 6+0. There is a Raptor Lake 6+0 die I believe, but Intel did not release it due to having a glut of Alder Lake dies.
  • ads295 - Saturday, April 22, 2023 - link

    1. What's the most powerful GPU you could pair with this and some fast RAM?
    2. What games could that run?

    Trying to understand how this fits into a budget gaming build.
  • Otritus - Saturday, April 22, 2023 - link

    The interesting thing about bottlenecks is that there is almost never a pure bottleneck on any single component. This means you could arguably pair this processor with a 7900 XTX to get maximum raster performance. The problem with this processor having 4 cores is that it is going to be a bottleneck in any modern game that isn't some simple indie title. The processor is, however, fast enough to hit 60 FPS in all older games and most if not all newer ones. AMD drivers have less CPU overhead than Nvidia's, so AMD cards will be faster on this processor. You can easily get away with DDR4-3200 on this processor, and it can run all games. I probably wouldn't pair this CPU with anything faster than a 3070, but a 3060 or 6600 XT seems pretty reasonable.
  • valtteris_big_batteries - Saturday, April 22, 2023 - link

    Great review, something I've been thinking about for a long time. Does seem like going with an AM4 5600G achieves better points overall, but for situations like mine where I have a lot of spare legacy dGPUs from a longtime tech addiction, the 13100F makes a case in price.
  • kkilobyte - Sunday, April 23, 2023 - link

    I really wonder what the point is of comparing such low-end CPUs coupled with a high-end GPU?

    On amazon where I live (Belgium), the lowest price for the RX 6950XT is about 750€, and most references available are priced above 800€.

    So how is it a "good value for money for entry-level users" ? It doesn't have an iGPU, so you must factor that in the total cost. For reference, a GT710 (which is really scrapping the bottom of the barrel) is priced at around 50€ on amazon. The 13100F is listed there at 125€, while the Ryzen 5600G is at 132€. So, if you factor the GPU price in, that becomes a 132 vs 175€ comparaison, adding that the Ryzen iGPU is better than a GT710. Even if you add the motherboard in the equation, the Ryzen will still cost less: there are several B550 motherboards listed at around 110€. So that would put the price of the AMD-based platform at 242€ vs 270€, and you'd get a more capable platform for most tasks, including gaming.

    So, what exactly is the point of that test, if not showing that picking the -F series is not economically sound? I don't understand your conclusions at all, and think it is edging a bit on the dishonest side of things.
  • abufrejoval - Monday, April 24, 2023 - link

    It's *your* fault, that I keep buying these new computers.

    But I couldn't do it if it wasn't for the kids and in-laws that I could push the older devices to.

    And in both camps there are still Ivy Bridge i7-3770 (not even K), which for some reason run quite happily at up to 4.2GHz using 16GB of DDR3. There are also some Kaby Lake i7-7700K (@4.8GHz), generally with DDR3-2400, because DDR4 was what DDR5 is now.

    In terms of GPU it's GTX980ti, GTX1070 typically and they are completely happy!

    Mostly because they have 1920x1080 screens.

    When I chance to look at them playing, I actually often feel a bit jealous, because my RTX3090 with the Ryzen 5950X somehow doesn't seem to get close.... at 4k.

    Of course, most of it is simply that they really know how to game, while I'm just a 10 minute dabbler (well, actually it's 10 seconds until I'm dead and 10 minutes until I give up trying).

    This discussion reminds me of the i3-7350K, a dual-core Kaby Lake with Hyper-Threading, which was hotly debated here versus a "true quad" like the i5-7600K. I got one of those at the time and tried it, and it was really rather capable; the main reason I eventually replaced it with a true quad was that those became rather cheap and I wanted to retain the value of the rest of the system, basically until today.

    Since I'm not a competent gamer, I might simply not be sensitive enough to notice the slight slow-downs that might be caused by "temporary core shortages" these days. But when I look at CPU graphs on my really big machines (16-18 cores), I don't see even recent games using lots of cores.

    Some, like even the most recent version of Microsoft's Flight Simulator, remain essentially single-threaded; others might use a little more, but since truly balancing a gaming workload across 4, 8, 16 or more cores is very difficult to do, it's still rarely done.

    So if you are tight on money, I'd also argue that the extra €50-100 is better spent on the GPU, DRAM, or SSD. At the same time, extra cores will sleep, saving thermal and electrical budget, and give you the peace of mind that if you need the extra oomph, you can have it.

    I'd second the request to provide a bit of an overview of the current dies that Intel produces, to help navigate the near-infinite number of SKUs they build from them.

    And it seems that this really is a cut-down version of what might already be the smallest (non-mobile) Alder Lake die, because for a Raptor Lake die, such a chip sounds like 30% active surface area, which is hard to imagine as a real yield result.

    I realize that Intel itself isn't keen on having that information out in the public, but that's why we turn to Anandtech: to learn more than vendors preach already.
  • Otritus - Monday, April 24, 2023 - link

    Modern games only need 6 cores to run properly. Consoles just got 8 proper high performance cores with SMT in 2020. In the future you will probably need 8 SMT cores or 6-7 SMT cores with little cores for background tasks. It's why Intel only maxes out with 8 P-cores before adding in little cores, and why the i5 still has 6 P-cores.

    The die used was Alder Lake's 6+0 die cut down to a 4+0 configuration. Raptor Lake also has its own 6+0 die, I believe, that Intel never used due to a glut of Alder Lake dies.
