CPU Tests: Simulation

Simulation and Science have a lot of overlap in the benchmarking world; for this distinction we separate them into two segments based mostly on the utility of the resulting data. The benchmarks that fall under Science have a distinct use for the data they output – the tests in our Simulation section act more like synthetics, but at some level they are still trying to simulate a given environment.

DigiCortex v1.35: Link

DigiCortex is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation, similar to a small slug.

Results are given as a multiple of real-time, so anything above a value of one means the system can run the simulation in real time. The benchmark offers a 'no firing synapse' mode, which in essence tests DRAM and bus speed, however we take the firing mode, which adds CPU work with every firing.

The software originally shipped with a benchmark that recorded the first few cycles and output a result. So while on fast multi-threaded processors this made the benchmark last less than a few seconds, slow dual-core processors could be running for almost an hour. There is also the issue of DigiCortex starting with a base neuron/synapse map in ‘off mode’, giving a high result in the first few cycles as none of the nodes are yet active. We found that performance settles into a steady state after a while (when the model is actively in use), so we asked the author to allow for a ‘warm-up’ phase and for the benchmark to report the average over a second sample period.

For our test, we give the benchmark 20000 cycles to warm up and then take the data over the next 10000 cycles for the test – on a modern processor these take 30 seconds and 150 seconds respectively. This is then repeated a minimum of 10 times, with the first three results rejected. Results are shown as a multiple of real-time calculation.
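As a rough illustration of how the runs are aggregated, here is a minimal Python sketch; the `digicortex_bench` command, its flags, and the output format are placeholders for illustration, not DigiCortex's real interface.

```python
import statistics
import subprocess

def run_digicortex(warmup_cycles: int = 20000, sample_cycles: int = 10000) -> float:
    """Run one benchmark pass and return the real-time multiple it reports.
    Binary name, flags, and output format are assumptions for illustration."""
    out = subprocess.run(
        ["digicortex_bench", f"--warmup={warmup_cycles}", f"--sample={sample_cycles}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(out.strip().splitlines()[-1])  # assume the last line is the multiple, e.g. "1.37"

# Repeat a minimum of 10 times, reject the first three results, average the rest.
results = [run_digicortex() for _ in range(10)]
score = statistics.mean(results[3:])
print(f"DigiCortex real-time multiple: {score:.2f}x")
```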

(3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)

AMD's single chiplet design seems to get a big win here, but DigiCortex can use AVX-512 so RKL gets a healthy boost over the previous generation.

Dwarf Fortress 0.44.12: Link

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game, first launched in 2006 and still being regularly updated today, with a Steam launch planned for sometime in the future.

Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes as it has to simulate more famous people, more world events, and the natural way that humanoid creatures take over an environment. Like some kind of virus.

For our test we’re using DFMark. DFMark is a benchmark built by vorsgren on the Bay12Forums that gives two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from 3 minutes to several hours. After analyzing the test, we ended up going for three different world generation sizes:

  • Small, a 65x65 world with 250 years, 10 civilizations and 4 megabeasts
  • Medium, a 129x129 world with 550 years, 10 civilizations and 4 megabeasts
  • Large, a 257x257 world with 550 years, 40 civilizations and 10 megabeasts

DFMark outputs the time to run any given test, so this is what we use for the output. We loop the small test as many times as possible in 10 minutes, the medium test as many times as possible in 30 minutes, and the large test as many times as possible in an hour.
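The time-budgeted looping can be sketched as below; `run_dfmark` is a hypothetical wrapper (the real benchmark is driven through DFHack/DFMark), and only the world-size labels and time budgets mirror the description above.

```python
import statistics
import subprocess
import time

def run_dfmark(size: str) -> float:
    """Hypothetical wrapper: run one DFMark world-gen pass and return the time it reports (seconds)."""
    out = subprocess.run(["dfmark", "--worldgen", size], capture_output=True, text=True, check=True).stdout
    return float(out.strip().splitlines()[-1])  # assume the last line is the elapsed time

def loop_for(size: str, budget_s: float) -> float:
    """Repeat the test until the wall-clock budget is spent; return the mean reported time."""
    times, start = [], time.monotonic()
    while time.monotonic() - start < budget_s:
        times.append(run_dfmark(size))
    return statistics.mean(times)

small  = loop_for("small",  10 * 60)   # 65x65 world,   10-minute budget
medium = loop_for("medium", 30 * 60)   # 129x129 world, 30-minute budget
large  = loop_for("large",  60 * 60)   # 257x257 world, 60-minute budget
```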

(3-2a) Dwarf Fortress 0.44.12 World Gen 65x65, 250 Yr
(3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr
(3-2c) Dwarf Fortress 0.44.12 World Gen 257x257, 550 Yr

Dolphin v5.0 Emulation: Link

Many emulators are bound by single-thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin's CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in seconds, where the Wii itself scores 1051 seconds.

(3-3) Dolphin 5.0 Render Test

Intel regains the lead in our Dolphin test, with the Core i9 holding a sizable advantage over the Core i7.

Factorio v1.1.26: Link

One of the most requested simulation game tests we’ve had recently is Factorio, a construction and management title where the user builds endless automated factories of increasing complexity. Factorio falls under the same banner as other simulation games where users can lose hundreds of hours of sleepless nights configuring the minutiae of their production line.

Our new benchmark here takes the v1.1.26 version of the game and a fixed map, and uses the automated benchmark mode to measure how long it takes to run 1000 updates. This is repeated for 5 minutes, the best time to complete is used, and the result is reported in updates per second. The benchmark is single-threaded and said to be reliant on cache size and memory.

Details for the benchmark can be found at this link.
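To show the arithmetic, here is a minimal sketch of the best-of loop. The command line and output format shown follow Factorio's headless benchmark mode as we understand it, but the exact flags, the map filename, and the parsing should be treated as assumptions.

```python
import re
import subprocess
import time

def run_factorio_1000_updates(map_path: str) -> float:
    """Run one 1000-update pass and return the time taken in seconds (flags/output format assumed)."""
    out = subprocess.run(
        ["factorio", "--benchmark", map_path, "--benchmark-ticks", "1000"],
        capture_output=True, text=True, check=True,
    ).stdout
    ms = float(re.search(r"(\d+) updates in ([\d.]+) ms", out).group(2))  # assumed output format
    return ms / 1000.0

# Repeat for five minutes, keep the fastest pass, and report it as updates per second.
best, start = float("inf"), time.monotonic()
while time.monotonic() - start < 5 * 60:
    best = min(best, run_factorio_1000_updates("benchmark_map.zip"))  # placeholder map file
print(f"Factorio v1.1.26: {1000.0 / best:.1f} updates per second")
```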

(3-4) Factorio v1.1.26 Test

This is still a new test, so as we run through more systems, we will get more data.

CPU Tests: Rendering

Rendering tests, compared to others, are often a little simpler to digest and automate. All the tests put out some sort of score or time, usually in a way that makes it fairly easy to extract. These tests are some of the most strenuous on our list, due to the highly threaded nature of rendering and ray-tracing, and can draw a lot of power. If a system is not properly configured to deal with the thermal requirements of the processor, the rendering benchmarks are where it would show most easily as the frequency drops over a sustained period of time. Most benchmarks in this case are re-run several times, and the key to this is having an appropriate idle/wait time between benchmarks to allow temperatures to normalize from the previous test.

Blender 2.83 LTS: Link

One of the most popular tools for rendering is Blender, a public open-source project that anyone in the animation industry can get involved in. This extends to conferences, use in films and VR, a dedicated Blender Institute, and everything you might expect from a professional software package (except perhaps a professional-grade support package). Being open source, studios can customize it in as many ways as they need to get the results they require. It ends up being a big optimization target for both Intel and AMD in this regard.

For benchmarking purposes, we fell back to rendering one frame from a detailed project. Most reviews, as we have done in the past, focus on one of the classic Blender renders, known as BMW_27. It can take anywhere from a few minutes to almost an hour on a regular system. However, now that Blender has moved to a Long Term Support (LTS) model with the latest 2.83 release, we decided to go for something different.

We use a scene called PartyTug at 6AM by Ian Hubert, which is the official splash image of Blender 2.83. It is 44.3 MB in size and uses some of the more modern compute features of Blender. As it is more complex than the BMW scene but exercises different aspects of the compute model, the time to process is roughly similar to before. We loop the scene for at least 10 minutes, taking the average of the completion times. Blender offers a command-line tool for batch commands, and we redirect the output into a text file.
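A minimal harness for that loop could look like the sketch below; `blender -b <file> -f <frame>` is Blender's standard background-render invocation, while the scene filename and log handling are placeholders.

```python
import statistics
import subprocess
import time

SCENE = "partytug_6am.blend"   # placeholder filename for the PartyTug at 6AM project file

def render_once() -> float:
    """Render frame 1 in background mode and return the wall-clock time taken."""
    start = time.monotonic()
    with open("blender_log.txt", "a") as log:          # redirect Blender's console output to a text file
        subprocess.run(["blender", "-b", SCENE, "-f", "1"], stdout=log, stderr=log, check=True)
    return time.monotonic() - start

# Loop the scene for at least 10 minutes and average the completion times.
times, start = [], time.monotonic()
while time.monotonic() - start < 10 * 60:
    times.append(render_once())
print(f"Blender 2.83: mean frame time {statistics.mean(times):.1f} s over {len(times)} runs")
```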

(4-1) Blender 2.83 Custom Render Test

At 8 cores, Intel gets a lead over AMD, but 10 cores in Comet Lake is better than Rocket Lake.

Corona 1.3: Link

Corona is billed as a popular high-performance photorealistic rendering engine for 3ds Max, with development for Cinema 4D support as well. In order to promote the software, the developers produced a downloadable benchmark on the 1.3 version of the software, with a ray-traced scene involving a military vehicle and a lot of foliage. The software does multiple passes, calculating the scene, geometry, preconditioning and rendering, with performance measured in the time to finish the benchmark (the official metric used on their website) or in rays per second (the metric we use to offer a more linear scale).

The standard benchmark provided by Corona is interface driven: the scene is calculated and displayed in front of the user, with the ability to upload the result to their online database. We got in contact with the developers, who provided us with a non-interface version that allowed for command-line entry and retrieval of the results very easily. We loop the benchmark five times, waiting 60 seconds between each run, and take an overall average. The time to run this benchmark can be around 10 minutes on a Core i9, up to over an hour on a quad-core 2014 AMD processor or a dual-core Pentium.
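As an illustration of that cadence (five passes with a 60-second idle between each so temperatures can normalize), here is a sketch; the `corona_benchmark_cli` binary name and the rays-per-second output line are assumptions standing in for the non-interface build the developers supplied.

```python
import re
import statistics
import subprocess
import time

def run_corona() -> float:
    """Run one pass of the command-line benchmark and return rays per second (output format assumed)."""
    out = subprocess.run(["corona_benchmark_cli"], capture_output=True, text=True, check=True).stdout
    return float(re.search(r"([\d,]+)\s*rays/s", out).group(1).replace(",", ""))

results = []
for i in range(5):
    results.append(run_corona())
    if i < 4:
        time.sleep(60)   # idle so temperatures normalize before the next pass
print(f"Corona 1.3: {statistics.mean(results):,.0f} rays per second")
```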

(4-2) Corona 1.3 Benchmark

Crysis CPU-Only Gameplay

One of the most oft-used memes in computer gaming is ‘Can It Run Crysis?’. The original 2007 game, built by Crytek on its CryEngine, was heralded as a computationally complex title for the hardware of the time and several years after, suggesting that a user needed graphics hardware from the future in order to run it. Fast forward over a decade, and the game runs fairly easily on modern GPUs.

But can we also apply the same concept to pure CPU rendering? Can a CPU, on its own, render Crysis? Since 64 core processors entered the market, one can dream. So we built a benchmark to see whether the hardware can.

For this test, we’re running Crysis’ own GPU benchmark, but in CPU render mode. This is a 2000 frame test, with medium and low settings.

(4-3b) Crysis CPU Render at 1080p Low

POV-Ray 3.7.1: Link

A long-time benchmark staple, POV-Ray is another rendering program that is well known for loading up every single thread in a system, regardless of cache and memory levels. After a long period of POV-Ray 3.7 being the latest official release, when AMD launched Ryzen the POV-Ray codebase suddenly saw a range of activity from both AMD and Intel, both knowing that the software (with its built-in benchmark) would be used as an optimization target for their hardware.

We had to stick a flag in the sand when it came to selecting a version that was fair to both AMD and Intel, and still relevant to end-users. Version 3.7.1 fixes a significant bug in the early 2017 code relating to write-after-read behavior that both Intel and AMD manuals advise against, leading to a nice performance boost.

The benchmark can take over 20 minutes on a slow system with few cores, around a minute or two on a fast system, or mere seconds on a dual-socket high-core-count EPYC system. Because POV-Ray draws a large amount of power and current, it is important to make sure the cooling is sufficient and the system stays in its high-power state. Using a motherboard with poor power delivery and low airflow could skew the relative CPU positioning in ways that are not obvious, for example if the power limit only causes a 100 MHz drop as the processor changes P-states.

(4-4) POV-Ray 3.7.1

V-Ray: Link

We have a couple of renderers and ray tracers in our suite already, however V-Ray’s benchmark was requested often enough for us to roll it into the suite as well. Built by ChaosGroup, V-Ray is a 3D rendering package compatible with a number of popular commercial imaging applications, such as 3ds Max, Maya, Unreal, Cinema 4D, and Blender.

We run the standard standalone benchmark application, but in an automated fashion to pull out the result in the form of kilosamples/second. We run the test six times and take an average of the valid results.

(4-5) V-Ray Renderer

Cinebench R20: Link

Another common staple of a benchmark suite is Cinebench. Based on Cinema 4D, Cinebench is a purpose-built benchmark that renders a scene with both single-threaded and multi-threaded options. The scene is identical in both cases. The R20 version targets Cinema 4D R20, a slightly older version of the software, which is currently on R21. Cinebench R20 was launched given that the R15 version had been out for a long time, and despite the difference between the benchmark and the latest version of the software on which it is based, Cinebench results are quoted a lot in marketing materials.

Results for Cinebench R20 are not comparable to R15 or older, because both the scene being used and the code path have changed. The results are output as a score from the software, which is inversely proportional to the time taken. Using the benchmark flags for single-CPU and multi-CPU workloads, we run the software from the command line, which opens the test, runs it, and dumps the result into the console, which is redirected to a text file. The test is repeated for a minimum of 10 minutes for both ST and MT, and then the runs are averaged.
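A sketch of that harness is below. The `g_CinebenchCpu1Test`/`g_CinebenchCpuXTest` switches follow Maxon's command-line options for R20 as we understand them, but both the flags and the score parsing should be treated as assumptions rather than a verified interface.

```python
import re
import statistics
import subprocess
import time

def run_cinebench(multi_thread: bool) -> float:
    """Run one pass from the command line, keep the console dump, and return the reported score."""
    flag = "g_CinebenchCpuXTest=true" if multi_thread else "g_CinebenchCpu1Test=true"
    out = subprocess.run(["Cinebench.exe", flag], capture_output=True, text=True, check=True).stdout
    with open("cinebench_log.txt", "a") as log:   # redirect the console output to a text file
        log.write(out)
    return float(re.search(r"CB ([\d.]+)", out).group(1))  # assumed score-line format

def repeat_for(minutes: int, multi_thread: bool) -> float:
    """Repeat for at least the given number of minutes and average the scores."""
    scores, start = [], time.monotonic()
    while time.monotonic() - start < minutes * 60:
        scores.append(run_cinebench(multi_thread))
    return statistics.mean(scores)

print("Cinebench R20 1T:", repeat_for(10, multi_thread=False))
print("Cinebench R20 nT:", repeat_for(10, multi_thread=True))
```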

(4-6a) CineBench R20 Single Thread
(4-6b) CineBench R20 Multi-Thread

The improvement in Cinebench R20 over previous generations of Intel is a healthy one. However, mobile Tiger Lake scores 593 at 28 W, still ahead of the 11700K.

Comments

  • Makste - Tuesday, April 6, 2021 - link

    I again have to agree with you on this. Especially with the cooler scenario, it is not easy to spot the detail, but you have managed to bring it to the surface. Rocket Lake is not a good upgrade option now that I look at it.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    (Sorry I messed up and forgot quotation marks in the previous post. 1st, 3rd, and 5th paragraphs are quotes from the article.)

    you wrote:
    ‘Rocket Lake on 14nm: The Best of a Bad Situation’

    I fixed it:
    Rocket Lake on 14nm: Intel's Obsolete Node Produces Inferior CPU'

    ‘Intel is promoting that the new Cypress Cove core offers ‘up to a +19%’ instruction per clock (IPC) generational improvement over the cores used in Comet Lake, which are higher frequency variants of Skylake from 2015.’

    What is the performance per watt? What is the performance per decibel? How do those compare with AMD? Performance includes performance per watt and per decibel, whether Intel likes that or not.

    ‘Designing a mass-production silicon layout requires balancing overall die size with expected yields, expected retail costs, required profit margins, and final product performance. Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical, and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control. To that end, Intel finalised its design on eight cores.’

    Translation: Intel wanted to maximize margin by feeding us the ‘overclocked few cores’ design paradigm, the same thing AMD did with Radeon VII. It’s a cynical strategy when one has an inferior design. Just like Radeon VII, these run hot, loud, and underperform. AMD banked on enough people irrationally wanting to buy from ‘team red’ to sell those, while its real focus was on peddling Polaris forever™ + consoles in the GPU space. Plus, AMD sells to miners with designs like that one.

    ‘Intel has stated that in the future it will have cores designed for multiple process nodes at the same time, and so given Rocket Lake’s efficiency at the high frequencies, doesn’t this mean the experiment has failed? I say no, because it teaches Intel a lot in how it designs its silicon’

    This is bad spin. This is not an experimental project. This is product being massed produced to be sold to consumers.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    One thing many are missing, with all the debate about AVX-512, is the AVX-2 performance per watt/decibel problem:

    'The rated TDP is 125 W, although we saw 160 W during a regular load, 225 W peaks with an AVX2 rendering load, and 292 W peak power with an AVX-512 compute load'

    Only 225 watts? How much power does AMD's stuff use with equivalent work completion speed?
  • Hifihedgehog - Thursday, April 1, 2021 - link

    "The spin also includes the testing, using a really loud high-CFM CPU cooler in the Intel and a different quieter one on the AMD."

    Keep whining... You'll eventually tire out.

    https://i.imgur.com/HZVC03T.png

    https://i.imgflip.com/53vqce.jpg
  • Makste - Tuesday, April 6, 2021 - link

    Isn't it too much for you to keep posting the same thing over and over?
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Overclocking support page still doesn’t mention that Intel recently discontinued the overclocking warranty, something that was available since Sandy Bridge or something. Why the continued silence on this?

    ‘On the Overclocking Enhancement side of things, this is perhaps where it gets a bit nuanced.’

    How is it an ‘enhancement’ when the chips are already system-melting hot? There isn't much that's nuanced about Intel’s sudden elimination of the overclocking warranty.

    ‘Overall, it’s a performance plus. It makes sense for the users that can also manage the thermals. AMD caught a wind with the feature when it moved to TSMC’s 7nm. I have a feeling that Intel will have to shift to a new manufacturing node to get the best out of ABT’

    It also helps when people use extremely loud very high CFM coolers for their tests. Intel pioneered the giant hidden fridge but deafness-inducing air cooling is another option.

    How much performance will buyers find in the various hearing aids they'll be in the market for? There aren't any good treatments for tinnitus, btw. That's a benefit one gets for life.

    ‘Intel uses one published value for sustained performance, and an unpublished ‘recommended’ value for turbo performance, the latter of which is routinely ignored by motherboard manufacturers.’

    It’s also routinely ignored by Intel since it peddles its deceptive TDP.

    ‘This is showing the full test, and we can see that the higher performance Intel processors do get the job done quicker. However, the AMD Ryzen 7 processor is still the lowest power of them all, and finishes the quickest. By our estimates, the AMD processor is twice as efficient as the Core i9 in this test.’

    Is that with the super-loud very high CFM cooler on the Intel and the smaller weaker Noctua on the AMD? If so, how about a noise comparison? Performance per decibel?

    ‘The cooler we’re using on this test is arguably the best air cooling on the market – a 1.8 kilogram full copper ThermalRight Ultra Extreme, paired with a 170 CFM high static pressure fan from Silverstone.’

    The same publication that kneecapped AMD’s Zen 1 and Zen 2 by refusing to enable XMP for RAM on the very dubious claim that most enthusiasts don’t enter the BIOS to switch it on. Are most people going to have that big loud cooler? Does Intel bundle it? Does it provide a coupon? Does the manual say you need a cooler from a specific list?
  • BushLin - Wednesday, March 31, 2021 - link

    I won't argue with the rest of your assessment but given these CPUs are essentially factory overclocked close to their limits, the only people who'd benefit from an overclocking warranty are probably a handful of benchmark freaks doing suicide runs on LN2.
  • Oxford Guy - Thursday, April 1, 2021 - link

    That’s why I said the word ‘enhancement’ seems questionable.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    ‘Anyone wanting a new GPU has to actively pay attention to stock levels, or drive to a local store for when a delivery arrives.’

    You forgot the ‘pay the scalper price at retail’ part. MSI, for instance, was the first to raise its prices across the board to Ebay scalper prices and is now threatening to raise them again.

    ‘In a time where we have limited GPUs available, I can very much see users going all out on the CPU/memory side of the equation, perhaps spending a bit extra on the CPU, while they wait for the graphics market to come back into play. After all, who really wants to pay $1300 for an RTX 3070 right now?’

    That is the worst possible way to deal with planned obsolescence.

    14nm is already obsolete. Now, you’re adding in waiting for a very long time to get a GPU, making your already obsolete CPU really obsolete by the time you can get one. If you’re waiting for reasonable prices for GPUs you’re looking at, what, more than a year of waiting?

    ‘Intel’s Rocket Lake as a backported processor design has worked’

    No. It’s a failure. The only reasons Intel will be able to sell it is because AMD is production-constrained and because there isn’t enough competition in the x86 space to force AMD to cut the pricing of the 5000 line.

    Intel also cynically hobbled the CPU by starving it of cores to increase profit for itself, banking that people will buy it anyway. It’s the desktop equivalent of Radeon VII. Small die + way too high clock to ‘compensate’ + too-high price = banking on consumer foolishness to sell them (or mining, in the case of AMD). AVX-512 isn’t really going to sell these like mining sold the Radeon VII.

    ‘However, with the GPU market being so terrible, users could jump an extra $100 and get 50% more AMD cores.’

    No mention of power consumption, heat, and noise. Just ‘cores’ and price tag.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical'

    Citation needed.

    And, economical for Intel or the customer?

    Besides, going from 8 cores to 20+ is using hyperbole to distract from the facts.

    'and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control.'

    The few cores + excessive clocks to 'compensate' strategy is a purely cynical one. It always causes inferior performance per watt. It always causes more noise.

    So, Intel is not only trying to feed us its very obsolete 14nm node, it's trying to do it in the most cynical manner it can: by trying to use 8 cores as the equivalent of what it used to peddle exclusively for the desktop market: quads.

    It thinks it can keep its big margins up by segmenting this much, hoping people will be fooled into thinking the bad performance per watt from too-high clocks is just because of 14nm — not because it's cranking too few cores too high to save itself a few bucks.

    Intel could offer more cores and implement as turbo with a gaming mode that would keep power under control for gaming while maximizing performance. The extra cores would presumably be able to do more work for the watts by keeping clocks/voltage more within the optimal range.

    But no... it would rather give people the illusion of a gaming-optimized part ('8 cores ought to be enough for anyone') when it's only optimized for its margin.
