CPU Tests: Simulation

Simulation and Science have a lot of overlap in the benchmarking world; however, for this distinction we are separating them into two segments, mostly based on the utility of the resulting data. The benchmarks that fall under Science have a distinct use for the data they output. The benchmarks in our Simulation section act more like synthetics, but at some level are still trying to simulate a given environment.

DigiCortex v1.35: Link

DigiCortex is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation, similar to a small slug.

The results are given as a ratio to real-time simulation speed, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence tests DRAM and bus speed; however, we take the firing mode, which adds CPU work with every firing.

The software originally shipped with a benchmark that recorded the first few cycles and output a result. While this meant the benchmark lasted only a few seconds on fast multi-threaded processors, slow dual-core processors could be running for almost an hour. There is also the issue of DigiCortex starting with a base neuron/synapse map in ‘off mode’, giving a high result in the first few cycles as none of the nodes are yet active. We found that the performance settles into a steady state after a while (when the model is actively in use), so we asked the author to allow for a ‘warm-up’ phase and for the benchmark to report the average over a second sample period.

For our test, we give the benchmark 20000 cycles to warm up and then take the data over the next 10000 cycles for the test – on a modern processor these phases take 30 seconds and 150 seconds respectively. This is then repeated a minimum of 10 times, with the first three results rejected. Results are shown as a multiple of real-time calculation.
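The warm-up-then-average scheme described above can be sketched as follows. This is an illustrative Python sketch, not DigiCortex's actual code; the function name and the per-cycle speed values are hypothetical.

```python
def realtime_multiple(cycle_speeds, warmup_cycles=20000, sample_cycles=10000):
    """Average the per-cycle real-time multiples after a warm-up phase.

    cycle_speeds: list of per-cycle speed ratios (simulated time / wall time).
    Cycles inside the warm-up window are discarded; the result is the mean
    over the following sample window.
    """
    sample = cycle_speeds[warmup_cycles:warmup_cycles + sample_cycles]
    return sum(sample) / len(sample)

# Example: a run whose inflated 'off mode' start settles to a steady state.
speeds = [5.0] * 20000 + [1.2] * 10000
print(realtime_multiple(speeds))  # steady-state value, above 1 = real-time capable
```

Discarding the warm-up window is what prevents the inactive-node phase from inflating the reported multiple.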

(3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)

For users wondering why the 5800X wins, it seems that DigiCortex prefers single-chiplet designs, and the more cores the better. On the Intel side, the 10700 pulls a slight lead.

Dwarf Fortress 0.44.12: Link

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game, first launched in 2006 and still being regularly updated today, aiming for a Steam launch sometime in the future.

Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes as it has to simulate more famous people, more world events, and the natural way that humanoid creatures take over an environment. Like some kind of virus.

For our test we’re using DFMark. DFMark is a benchmark built by vorsgren on the Bay12Forums that gives two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from 3 minutes to several hours. After analyzing the test, we ended up going for three different world generation sizes:

  • Small, a 65x65 world with 250 years, 10 civilizations and 4 megabeasts
  • Medium, a 129x129 world with 550 years, 10 civilizations and 4 megabeasts
  • Large, a 257x257 world with 550 years, 40 civilizations and 10 megabeasts

DFMark outputs the time to run any given test, so this is what we use for the output. We loop the small test as many times as possible in 10 minutes, the medium test as many times as possible in 30 minutes, and the large test as many times as possible in an hour.
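The time-budgeted looping described above can be sketched as a small harness. This is a hypothetical illustration of the methodology, not DFMark itself; `run_test` stands in for one world-generation pass.

```python
import time

def loop_benchmark(run_test, budget_seconds):
    """Repeat run_test() until the time budget is exhausted.

    Returns (completed runs, average seconds per run). At least one run
    always completes, even if it alone exceeds the budget.
    """
    times = []
    start = time.perf_counter()
    while time.perf_counter() - start < budget_seconds:
        t0 = time.perf_counter()
        run_test()                       # one benchmark pass
        times.append(time.perf_counter() - t0)
    return len(times), sum(times) / len(times)

# Example with a stand-in workload and a tiny budget:
runs, avg = loop_benchmark(lambda: time.sleep(0.01), 0.05)
print(runs, round(avg, 3))
```

Looping within a fixed budget rather than a fixed run count keeps total test time predictable across CPUs of very different speeds.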

(3-2a) Dwarf Fortress 0.44.12 World Gen 65x65, 250 Yr

(3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr

(3-2c) Dwarf Fortress 0.44.12 World Gen 257x257, 550 Yr

Dolphin v5.0 Emulation: Link

Many emulators are often bound by single thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy of the speed of Dolphin CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in seconds, where the Wii itself scores 1051 seconds.
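Since the Dolphin test reports time to completion (lower is better), one way to contextualize a score is as a speedup over the real Wii's 1051 seconds, the figure given above. A minimal sketch; the 262.75 s sample score is hypothetical.

```python
WII_SECONDS = 1051  # time the real Wii hardware takes, per the text

def speedup_vs_wii(cpu_seconds):
    """How many times faster than a real Wii the emulated run completed."""
    return WII_SECONDS / cpu_seconds

# Example: a hypothetical CPU finishing the render test in 262.75 seconds
# ran the workload at exactly 4x real Wii speed.
print(speedup_vs_wii(262.75))
```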

(3-3) Dolphin 5.0 Render Test

210 Comments

  • schujj07 - Friday, January 22, 2021 - link

    A stock 3700X has a total package power of 88W and the 212 EVO is a 150W TDP cooler. Whereas the included Wraith Prism cooler with the 3700X is a 125W TDP cooler. One would expect that the larger capacity cooler with the larger fan would be quieter.
  • vegemeister - Friday, January 22, 2021 - link

    Heat transfer does not work that way.

    ΔT = P * R

    where T is temperature (K), P is power (W), and R is thermal resistance (K/W).

    Unless the temperature rise is known the only thing "150W cooler" tells you is that the heat pipes won't dry out at 150W with reasonable ambient temperature. (That's a thing that can happen. It's not permanent damage, but it does mean R gets a lot bigger.)

    The fact is the Wraith Prism is the same 92mm downdraft cooler AMD has been shipping with their CPUs since the Phenom II 965.
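The ΔT = P × R relation in the comment above is easy to work through numerically. A minimal sketch; the thermal resistance value is illustrative, not a manufacturer specification.

```python
def temp_rise(power_w, resistance_k_per_w):
    """Steady-state temperature rise above ambient, in kelvin.

    Implements dT = P * R: power dissipated (W) times the cooler's
    thermal resistance (K/W).
    """
    return power_w * resistance_k_per_w

# An 88 W package (the 3700X figure cited above) through a hypothetical
# 0.4 K/W cooler rises roughly 35 K above ambient.
print(temp_rise(88, 0.4))
```

This is why a wattage label alone says little about temperatures or noise: without the cooler's actual K/W figure, the temperature rise cannot be computed.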
  • Spunjji - Friday, January 22, 2021 - link

    The Wraith Prisms are fine - the ones that come with the low-end Ryzens (and I think now the 5600) aren't so great for noise, but they do let the CPU come within 95% of its peak performance, so not bad for a freebie.
  • alufan - Thursday, January 21, 2021 - link

    I'm not seeing the point of this article. Is 65W an option, and then you blatantly ignore the actual TDP stated and produce a test? For the test to be a fair comparison, all the chips should be limited to the actual power stated and then run through any benchmarks. It's like saying we are testing CPUs at 125W, including the LN2 FX AMD chip, and seeing how much power you can actually run through it. Running these chips like this constantly will degrade them and eat up a considerable amount of power that you don't need to use.
    Then again, I shouldn't be surprised: yet again, 12 articles on the front page regarding Intel, 3 regarding AMD. Guess Intel's media budget is bigger, hmm.
  • DominionSeraph - Thursday, January 21, 2021 - link

    It's AMD CPUs that degrade at stock clocks. Intel will run for decades even with moderate overclocks.
  • bji - Thursday, January 21, 2021 - link

    AMD CPUs do not "degrade" at stock clocks or overclocks.
  • DominionSeraph - Thursday, January 21, 2021 - link

    Their Turbo is literally built around it. It will lower clocks as the chip degrades. The degradation is all over Reddit. I'm surprised no tech site has followed up on the scandal.
  • bigboxes - Thursday, January 21, 2021 - link

    I'm surprised there aren't more trolls like you
  • Spunjji - Friday, January 22, 2021 - link

    I'm not.

    I just spent a bit of time on Google and the majority of the results are people saying "I heard this, is it true?" - the rest are people talking about how they ran their chip way outside spec (significant overvoltage, overclock *and* high temperatures) and can no longer get the same overclock out of it.

    Take your FUD and cram it. 🥰
  • Spunjji - Friday, January 22, 2021 - link

    It took me less than 15 minutes to confirm that this is a lie.

    Incidentally, the only CPUs I've ever had "degradation" problems with were all Sandy Bridge - 2 i3s, one i5 and one i7. Only one of them was ever overclocked. They started to show strange issues after 3-5 years - stuff like frame-rate inconsistency in games, graphics artefacts, random crashes.

    I've never gone around slamming Intel, though, because sometimes you just get a bad chip. It happens.
