CPU Tests: Simulation

Simulation and Science have a lot of overlap in the benchmarking world; for this distinction, however, we are separating them into two segments based mostly on the utility of the resulting data. The benchmarks that fall under Science have a distinct use for the data they output. The benchmarks in our Simulation section act more like synthetics, but at some level they are still trying to simulate a given environment.

DigiCortex v1.35: link

DigiCortex is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation, similar to a small slug.

The results on the output are given as a fraction of whether the system can simulate in real-time, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence detects DRAM and bus speed, however we take the firing mode which adds CPU work with every firing.

The software originally shipped with a benchmark that recorded the first few cycles and output a result. While this made the benchmark last only a few seconds on fast multi-threaded processors, slow dual-core processors could be running for almost an hour. There is also the issue of DigiCortex starting with a base neuron/synapse map in ‘off mode’, giving a high result in the first few cycles as none of the nodes are yet active. We found that performance settles into a steady state after a while (when the model is actively in use), so we asked the author to allow for a ‘warm-up’ phase and for the benchmark to report the average over a subsequent sample period.

For our test, we give the benchmark 20,000 cycles to warm up and then take the data over the next 10,000 cycles for the test – on a modern processor these phases take 30 seconds and 150 seconds respectively. The test is then repeated a minimum of 10 times, with the first three results rejected. Results are shown as a multiple of real-time calculation.
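As a rough illustration of the scoring methodology described above (this is not DigiCortex's actual code), the final score reduces to discarding the early runs and averaging the rest; the run counts follow the text, while the per-run values below are made up:

```python
def digicortex_score(run_results, rejected=3):
    """Average a list of per-run real-time multiples, discarding the
    first few runs (the methodology rejects the first three of at
    least ten). Values above 1.0 mean faster than real time."""
    if len(run_results) <= rejected:
        raise ValueError("need more runs than the number rejected")
    kept = run_results[rejected:]
    return sum(kept) / len(kept)

# Hypothetical example: ten runs; the first three (still settling
# toward steady state) are dropped before averaging.
runs = [1.9, 1.7, 1.5, 1.2, 1.2, 1.3, 1.2, 1.2, 1.3, 1.2]
score = digicortex_score(runs)  # averages the last seven values
```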

(3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)

The wide variation on AMD suggests that DigiCortex prefers high-core-count, single-chiplet processors. Intel takes a back seat here, as it is also using slower memory.

Dwarf Fortress 0.44.12: Link

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game, first launched in 2006 and still being regularly updated today, with a Steam launch aimed for sometime in the future.

Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, with famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes, as it has to simulate more famous people, more world events, and the natural way that humanoid creatures take over an environment. Like some kind of virus.

For our test we’re using DFMark. DFMark is a benchmark built by vorsgren on the Bay12Forums that gives two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from 3 minutes to several hours. After analyzing the test, we ended up going for three different world generation sizes:

  • Small, a 65x65 world with 250 years, 10 civilizations and 4 megabeasts
  • Medium, a 129x129 world with 550 years, 10 civilizations and 4 megabeasts
  • Large, a 257x257 world with 550 years, 40 civilizations and 10 megabeasts

DFMark outputs the time to run any given test, so this is what we use for the output. We loop the small test as many times as possible in 10 minutes, the medium test as many times as possible in 30 minutes, and the large test as many times as possible in an hour.
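A time-budgeted loop like the one described can be sketched as follows; `run_dfmark_small` is a hypothetical stand-in for launching one DFMark pass and returning its reported runtime:

```python
import time

def loop_for_budget(run_test, budget_seconds):
    """Run `run_test` repeatedly until the wall-clock budget is
    spent, collecting the per-run times it reports. Always
    completes at least one run, and does not start a new run
    once the budget is exhausted."""
    times = []
    deadline = time.monotonic() + budget_seconds
    while not times or time.monotonic() < deadline:
        times.append(run_test())
    return times

# Hypothetical usage: the small world-gen test looped within a
# 10-minute budget, as in the methodology above.
# results = loop_for_budget(run_dfmark_small, budget_seconds=600)
```

Using a monotonic clock for the deadline avoids surprises if the system clock is adjusted mid-run.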

(3-2a) Dwarf Fortress 0.44.12 World Gen 65x65, 250 Yr

(3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr

(3-2c) Dwarf Fortress 0.44.12 World Gen 257x257, 550 Yr

Dolphin v5.0 Emulation: Link

Emulators are often bound by single-thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in seconds, where the Wii itself scores 1051 seconds.
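Since the result is a completion time with the real Wii's 1051 seconds as the baseline, a relative speedup is simply the ratio of the two; a quick sketch (the CPU time in the example is made up):

```python
WII_BASELINE_S = 1051  # the Wii itself completes the render test in 1051 s

def speedup_vs_wii(cpu_seconds):
    """How many times faster than the real Wii a CPU completes the
    Dolphin render test; lower completion times give higher speedups."""
    return WII_BASELINE_S / cpu_seconds

# Hypothetical example: a CPU finishing in 200 s runs the test at
# 5.255x the speed of the real console.
ratio = speedup_vs_wii(200)
```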

(3-3) Dolphin 5.0 Render Test

Comments (126)

  • Deicidium369 - Monday, January 4, 2021 - link

    And TSMC is really killing the fabrication front with the inability to ship anything in meaningful numbers - due to an extremely fragile supply chain - other than Apple - everything else is still on some variation of TSMC's 10nm class process - they call "7nm"
  • sadick - Monday, January 4, 2021 - link

    You are right, but Intel desktop CPUs are manufactured on the 14nm process since 2014!!! Ok, it's 14++++ now, but what an evolution, I'm very impressed ;-)

    I'm not an AMD fan boy, actually using a i7-9700k!
  • regsEx - Thursday, January 7, 2021 - link

    At least they are much cheaper. The 10-core 10850K costs the same as the 6-core 5600X.
  • Impostors - Monday, January 4, 2021 - link

    So is Apple? Lmfao you thought they were making the chips? TSMC isn't behind on production, they are the production for literally everyone, from PC to mobile.
  • name99 - Monday, January 4, 2021 - link

    "you could argue that was the right call given the state of the market"

    Only if you drank your own koolaid about the end of Moore's Law...

    Remember a book called _Only the Paranoid Survive_? About how in High Tech there are *constant* upsets and changes, nothing ever stays the same?
    Hmm, if only someone at Intel had read that book and thought "Gee, this seems to describe an industry very much like the one in which we operate"...
  • 0ldman79 - Saturday, January 9, 2021 - link

    Playing it safe would have been fine if they had a product to release afterwards.

    Thing is they didn't. They got so cocky they screwed up their fabs, reached too far while physics are only getting tougher to overcome.

    TSMC made 7nm work, whether it hit their target density and speed goals or not it works. Intel had a goal and rather than back off as needed to release a product they kept fighting to hit an ego check-mark. When 10nm didn't work they should have backed off the density and tried again in order to release a product. Ultimately that's what they had to do but they did it 3 years too late.
  • WaltC - Monday, January 4, 2021 - link

    M1 has very little in software and hardware compatibility to recommend it, however. Those are the #1 reasons people buy computer systems--raw performance is merely icing on the cake. AMD blows the M1, and Intel CPUs, away, imo. As it sits today, the M1 is not competitive with AMD (or even Intel, actually) in terms of multithreaded performance in desktop & enterprise-level offerings. I very much doubt Apple will be going there--but we shall see...M1 as it sits is a good beginner's start...let's see where it goes from there.
  • Great_Scott - Monday, January 4, 2021 - link

    The techie rant from the early 2000's is coming to pass, finally.

    So many programs are either mobile or browser-based that the M1 is going to get a pass on compatibility.

    Apple got lucky on the timing, in other words.
  • name99 - Monday, January 4, 2021 - link

    Geniuses (and genius companies) make their own timing...

    Seems kinda bizarre to consider the rise of mobile computing as an exogenous factor when discussing Apple!
  • Calin - Tuesday, January 5, 2021 - link

    Just read an article about Flash no longer being supported... and it was instead replaced by HTML5 and the like...
    Guess that genius companies really are lucky indeed ;)
