Conclusion

Over the last couple of years, AMD and Intel have been battling it out in the 'core wars.' The aim is to give users the highest core count – with many applications and games now benefiting from multi-core optimizations – all for the best possible price. The other element of this fight comes down to IPC, with Intel and AMD flip-flopping for the top spot with their flagship desktop processors each time a new generation launches. Due to AMD's Zen architectures, Intel has been on the ropes in both performance and value for a while, and it has been forced to push its innovation to try and maintain what it previously held: a majority market share.

As we highlighted in our launch day Core i9-12900K review, Intel's new Alder Lake architecture brings new improvements and refinements to the table. This includes two types of core: P-cores for performance and E-cores for efficiency and multi-threading. The P-cores are based on Intel's Golden Cove architecture, with the aim of giving applications peak performance when they need it. The E-cores, or efficiency cores as Intel calls them, are lower power, with one E-core equating to roughly 50-60% of the performance of one P-core.

Focusing on the Core i3-12300, it is the top SKU in Intel's Core i3 lineup, with four P-cores, a base frequency of 3.5 GHz, and a turbo frequency of 4.4 GHz. In terms of power consumption, Intel rates the i3-12300 at a 60 W TDP at base frequencies and an 89 W TDP at turbo clock speeds.

Intel's interpretation of turbo on Alder Lake, which is essential to understand from a power draw perspective, is 'infinite turbo': the processor will sustain turbo clock speeds whenever thermal and power conditions allow. From a performance perspective this is good, as the chip hits turbo clock speeds more frequently, but it also makes power consumption particularly variable, depending on the workload.

Intel Core i3-12300: Performance Analysis

It's all well and good having 16-core processors on a desktop platform, but ultimately, these typically cost upwards of $600. So what about CPUs for the more budget-conscious? Well, up until the launch of Intel's 12th Gen Core processors, AMD's Ryzen 5000 series had been on top in IPC, multi-threaded performance, and in a lot of scenarios, game performance and overall value too. One of AMD's most cost-effective processors remains the Ryzen 5 5600X, with six cores and competitive clock speeds at a reasonable price of $229.

In the budget CPU space, things have now changed, with both Intel's Core i3 and i5 series slotting into that sub-$300 market segment, and the Core i3 series significantly undercutting the 5600X on price. In practice, Intel has the sub-$200 market entirely to itself right now, since AMD doesn't have the manufacturing capacity needed to address that market (and thus compete directly with the i3-12300).

Another factor that comes into play is Alder Lake's ability to support both DDR5 and DDR4 memory. We tested the Core i3-12300 with Windows 11 and DDR5-4800 CL40 memory for our Bench database for comparative purposes. We did run DDR5-4800 CL40 against DDR4-3200 CL22 on the Core i3-12300 on page 3, but the overall performance swing isn't as big as users might think, at least not in the grand scheme of things.

Let's take a look at some of the results and digest how the Intel Core i3-12300 stacks up against the competition:

(4-7a) CineBench R23 Single Thread

Looking at single-threaded performance in Cinebench R23, the Core i3-12300 and its four Golden Cove P-cores show dominance over previous generations of Intel's architecture, as well as AMD's Zen 3 cores. The lead typically changes hands each time AMD or Intel releases a new desktop architecture, and right now, Intel's 12th Gen Core holds the single-threaded crown.

(4-7b) CineBench R23 Multi-Thread

The multi-threaded section of Cinebench R23 is where core count, or the lack thereof, comes into play. Both AMD's Zen 3 six-core and eight-core processors have a considerable advantage in applications that can utilize all of the cores and threads available. Comparing DDR5 to DDR4 on the i3-12300, DDR5-4800 CL40 performed around 8% better than DDR4-3200 CL22 in Cinebench R23's multi-threaded test.

For a more classic comparison, we also have the Core i7-2600K which, released over 10 years ago, was one of the most popular quad-core chips ever produced. Compared directly, the (much) newer Core i3-12300 delivers 2.3x the multi-threaded performance.

The only real competitor in the 4C/8T segment from AMD with Zen 3 cores is the Ryzen 3 5300G APU. In our computational benchmarks, the Core i3-12300 has a consistent and distinct advantage in both single-threaded and multi-threaded performance. So even if AMD could produce those chips in retail volumes, at this point they'd come in behind Intel's top quad-core.

Looking at generational differences against AMD's Ryzen 3000 and 1000 series chips, the Core i3-12300 comfortably beats the Zen-based Ryzen 5 1600 (6C/12T), and is only 12.4% slower than the Zen 2 based Ryzen 5 3600X (6C/12T) in Cinebench R23 multi-threaded performance.

(k-7) Grand Theft Auto V - 1080p Max - Average FPS

In our GTA V testing at 1080p max settings with an NVIDIA RTX 2080 Ti, the Core i3-12300 is highly competitive with AMD's Ryzen 5000 six and eight-core models, and it also keeps pace with Intel's 11th Generation processors despite a lower core and thread count. Comparing DDR5 and DDR4 on the Core i3-12300, the DDR5 configuration was around 6% better in average frame rates when using Alder Lake's JEDEC memory settings.

While gaming performance varies from title to title, as different game engines are optimized for different CPU core counts, performance in GPU-intensive titles shows the Core i3-12300 in a favorable light, making it competitive for 1080p gaming.

(b-7) Civilization VI - 1080p Max - Average FPS

Looking at our Civilization VI benchmark at 1080p max settings, we see where a higher core count makes a substantial difference. While the 6C/12T and 8C/16T processors dominate here, it's worth noting that the Core i3-12300 still comfortably averages above 60 fps, with 95th percentile frame rates also hovering close to 60 fps.

Touching on DDR5 versus DDR4 with the i3-12300 in a more CPU-bound test such as Civilization VI, the DDR5 configuration performed 3.7% better in average frame rates. That's not a lot of difference considering the price premium DDR5 currently commands, but it's an improvement nonetheless.

A broader selection of our CPU game testing results, including at other resolutions, can be found in our benchmark database: www.anandtech.com/bench. All gaming tests were with an RTX 2080 Ti for comparative purposes.

Intel Core i3-12300: Is Four Cores Enough in 2022?

The primary use case for Intel's Core i3 series is lower-powered desktops, where tasks such as video rendering, encoding, and other content creation either aren't the target market, or aren't the biggest use case for that market. As we've seen throughout our testing, the Intel Core i3-12300 is a fantastic processor for gamers on a budget, even with fewer cores than the competition, such as the 6C/12T Ryzen 5 5600X ($229). You could say that the R5 5600X isn't really a direct competitor due to its price, but Intel is slotting its Core i3 series well below AMD Ryzen 5000's entry point, and that makes things interesting.

Intel Core i5 & i3 Processor Specifications (12th Gen Alder Lake)

AnandTech    Cores   P-Core     P-Core      E-Core     E-Core      L3     IGP    Base   Turbo   Price
             (P+E)   Base MHz   Turbo MHz   Base MHz   Turbo MHz   (MB)          (W)    (W)     ($1ku)
i5-12600K    6+4     3700       4900        2800       3600        20     770    125    150     $289
i5-12600KF   6+4     3700       4900        2800       3600        20     -      125    150     $264
i5-12600     6+0     3300       4800        -          -           18     770    65     117     $223
i5-12600T    6+0     2100       4600        -          -           18     770    35     74      $223
i5-12500     6+0     3000       4600        -          -           18     770    65     117     $202
i5-12500T    6+0     2000       4400        -          -           18     770    35     74      $202
i5-12400     6+0     2500       4400        -          -           18     730    65     117     $192
i5-12400F    6+0     2500       4400        -          -           18     -      65     117     $167
i5-12400T    6+0     1800       4200        -          -           18     730    35     74      $192

i3-12300     4+0     3500       4400        -          -           12     730    60     89      $143
i3-12300T    4+0     2300       4200        -          -           12     730    35     69      $143
i3-12100     4+0     3300       4300        -          -           12     730    60     89      $122
i3-12100F    4+0     3300       4300        -          -           12     -      58     89      $97
i3-12100T    4+0     2200       4100        -          -           12     730    35     69      $122

Looking at Intel's 12th generation Core i5 and i3 processor stack above, only the top two Core i5 models, the Core i5-12600K and Core i5-12600KF, feature E-cores. Everything below them in the stack relies solely on fully-fledged P-cores (Golden Cove), with six cores for the Core i5s and four cores for the Core i3 models.
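As a rough illustration of how the pricing in the table above shakes out, here is a back-of-the-envelope Python sketch that ranks a handful of these SKUs by "turbo core-MHz per dollar." It deliberately ignores E-cores, IPC, the IGP, and TDP, so treat it as a toy metric rather than a benchmark:

```python
# Back-of-the-envelope value metric from the spec table: P-core count times
# peak turbo (MHz), divided by 1k-unit price. Ignores E-cores, IPC, IGP, TDP.
skus = {
    # name: (p_cores, turbo_mhz, price_usd) -- values from the table above
    "i5-12600K": (6, 4900, 289),
    "i5-12400F": (6, 4400, 167),
    "i3-12300":  (4, 4400, 143),
    "i3-12100":  (4, 4300, 122),
    "i3-12100F": (4, 4300, 97),
}

def value_score(spec):
    p_cores, turbo_mhz, price = spec
    return p_cores * turbo_mhz / price  # "turbo core-MHz" per dollar

for name, spec in sorted(skus.items(), key=lambda kv: -value_score(kv[1])):
    print(f"{name:10s} {value_score(spec):6.1f} core-MHz/$")
```

By this crude metric the Core i3-12100F comes out on top of the group, with the Core i3-12300 still well ahead of the Core i5-12600K.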

Comparing the Core i3-12300 (4C/8T) to other 4C/8T and 6C/12T SKUs such as the AMD Ryzen 5 5600X (6C/12T), the Core i3 is very competitive overall, but it does show its limitations in multi-threaded applications and scenarios. As users would generally expect, the Core i3-12300 is Intel's most potent quad-core processor thus far. This isn't a surprise, as the i3-12300 benefits from the Intel 7 manufacturing process, as well as generational increases in IPC, base frequency, and turbo frequency.

The decision on whether it's worth buying the Core i3-12300 over six-core chips like the Ryzen 5 3600X or the Core i5-12600K comes down to the use case. Users planning any form of rendering, encoding, or multi-core optimized workloads will benefit from the extra cores and threads. Still, there's not much difference between the Core i3-12300 and Ryzen 5 3600X in gaming, or in other workloads that can't fill more than four cores.

In gaming at 1080p and single-threaded applications, the Core i3-12300 excels and stakes its claim as a fantastic option for users on a budget. Looking further down the Alder Lake Core i3 stack, the Core i3-12100 ($122) could be the best value of the bunch, with a base frequency of 3.3 GHz and a turbo frequency of 4.3 GHz; just 100 MHz lower on turbo than the i3-12300. The onus is now on AMD to bring something competitive to the sub-$200 segment of the market, as right now, Intel holds all the marbles.

With an MSRP of $143, it's clear that the Intel Core i3-12300 represents excellent value for money in a market where users are struggling to find value in components. Alder Lake has clearly been a success for Intel, and the Core i3-12300 offers leading-edge quad-core desktop performance for an equally great price.

Comments (140)

  • CiccioB - Friday, March 4, 2022 - link

    You may be surprised by how many applications still use a single thread or, even if multi-threaded, are bottlenecked on one thread.

    All office suites, for example, use just a main thread. Use a slow 32-thread-capable CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but certainly not to the level of using 32 cores, even for complex tables.
    Compilers are not multi-threaded. They just spawn many instances to compile more files in parallel, and if you have many cores it just ends up being I/O limited. At the end of the compiling process, however, you'll have the linker, which is a single-threaded task. Run it on a slow 64-core CPU, and you'll wait much longer for the final binary than on a fast Celeron CPU.
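    [The file-level parallelism described here (many single-threaded compiler instances, then one serial link, much like `make -jN`) can be sketched as follows; `compile_unit` and `link` are hypothetical stand-ins, not a real toolchain:]

```python
# Sketch of build-system parallelism: each "compiler" invocation is
# single-threaded, but independent source files are processed concurrently
# (like `make -jN`), followed by a serial "link" step at the end.
# compile_unit and link are illustrative stand-ins for real tools.
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source: str) -> str:
    # Stand-in for one single-threaded compiler invocation on one file.
    return source.replace(".c", ".o")

def link(objects: list) -> str:
    # The linker consumes every object at once; classically a serial task.
    return " ".join(sorted(objects)) + " -> a.out"

def build(sources, jobs=4):
    # Compile phase runs per-file in parallel; link phase runs once.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        objects = list(pool.map(compile_unit, sources))
    return link(objects)

print(build(["main.c", "util.c", "io.c"]))  # io.o main.o util.o -> a.out
```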

    All graphics retouching applications are single-threaded; only some of the effects you can apply are multi-threaded. The interface and the general data management run on a single thread. That's why Photoshop layer management can be so slow even on a Threadripper.

    Printing apps and format converters are single-threaded. CADs are too.
    Browsers as well, though they mask it as much as possible. To my surprise, I found that JavaScript runs on a single thread for all open windows: if I hit a problem on a heavy JavaScript page, other pages are slowed down as well, despite there being spare cores.

    In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than having a myriad of slower cores.
    Yet there are some (and only some) applications that take advantage of a swarm of small cores, like 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market.
    BTW, video conversion can be done easily and more efficiently using HW converters like those present in GPUs, so you are left with 3D renderers as the only workload able to saturate however many cores you have.
  • mode_13h - Saturday, March 5, 2022 - link

    > Compilers are not multi-threaded.

    There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.

    > if you have many cores it just ends up being I/O limited.

    I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.

    > At the end of the compiling process, however,
    > you'll have the linker, which is a single-threaded task.

    There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.

    > In the end, there are many, many tasks that cannot be parallelized.

    There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
  • CiccioB - Saturday, March 5, 2022 - link

    You may be aware that there are lots of compiler toolchains that are not "Google-based" and are not based on experimental code either.

    "You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes."
    Try compiling something that is not "Hello world" and you'll see that there's no way to keep the files in RAM unless you have put your entire project on a RAM disk.

    "There are more that could. They just aren't because... reasons."
    Yes, the fact that making them multi-threaded costs a lot of work for a marginal benefit.
    Most algorithms ARE NOT PARALLELIZABLE; they run as a contiguous stream of code where each result depends on the previous instruction.

    Parallelizable algorithms are a minority, and most of them require a lot of work to do better than a single-threaded version.
    You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a small number of applications, mostly renderers and video transcoders, really take advantage of many cores. Others do not, and mostly benefit from single-threaded performance (either via improved IPC or faster clocks).
  • mode_13h - Tuesday, March 8, 2022 - link

    > Try compiling something that is not "Hello world" and you'll see

    My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with a HDD RAID, the build is probably > 90% CPU-bound.

    As for the toolchain, we're using vanilla gcc and ld. Oh and ccache, if you know what that is. It *should* make the build even more I/O bound, but I've not seen evidence of that.

    I get that nobody likes to be contradicted, but you could try fact-checking yourself instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and still found it to be mostly compute-bound.

    > You can easily see this in the fact that multi-core CPUs have existed in the consumer
    > market for more than 15 years, and still only a small number of applications, mostly
    > renderers and video transcoders, really take advantage of many cores.

    Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.

    Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.

    Oh, and don't forget games. Most games are pretty well-threaded.
  • GeoffreyA - Tuesday, March 8, 2022 - link

    "analyzing web browser performance and revealing they're quite heavily multi-threaded"

    I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
  • GeoffreyA - Tuesday, March 8, 2022 - link

    As for compilers being heavy on CPU, amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more---obviously in line with the growing language specification---and take a surprising amount of time to compile. Till recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just duplicates compiler instances, caused it to cut through the project like butter again.
  • mode_13h - Wednesday, March 9, 2022 - link

    Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.

    Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
  • GeoffreyA - Wednesday, March 9, 2022 - link

    On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6. In VS2022, round about 6.4-7.8 MB.
  • mode_13h - Thursday, March 10, 2022 - link

    So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.

    Also worth a look is the CppCoreGuidelines on isocpp's GitHub. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.

    Finally, here you'll find some fantastic C++ infographics:

    https://hackingcpp.com/cpp/cheat_sheets.html

    Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!

    China's Institute of Software Chinese Academy of Sciences also ran one, last year. Presumably, they'll do it again, this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students, as well.

    https://summer.iscas.ac.cn/#/org/projectlist?lang=...
  • GeoffreyA - Thursday, March 10, 2022 - link

    I doubt I'll participate in any of those programs (the lazy bone in me talking), but many, many thanks for pointing out those programmes, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful. It's been a big step forward.

    That cppreference is excellent; I looked at it when I was trying to find a lock to replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, I think it was std::mutex, just dropped in and worked. But I left the old version in because there's other Win32 code in that module, and using std::mutex would have meant no more compiling on the older VS, which still works on the project, surprisingly.
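    [For anyone curious, the pattern being described (a lock guarding lazy singleton creation) looks roughly like the following; sketched here in Python, with threading.Lock standing in for std::mutex / CRITICAL_SECTION, and an illustrative class name:]

```python
# Double-checked locking around lazy singleton creation. threading.Lock
# stands in for std::mutex / CRITICAL_SECTION; the class name is made up.
import threading

class Settings:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        if cls._instance is None:            # cheap unsynchronized check
            with cls._lock:                  # serialize first-time creation
                if cls._instance is None:    # re-check under the lock
                    cls._instance = cls()
        return cls._instance
```

    [Worth noting that in C++11 and later, a function-local static (the "Meyers singleton") gets this thread-safe initialization for free, so the explicit mutex is often unnecessary there.]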

    Again, much obliged for the leads and references.
