CPU Performance: Simulation Tests

A number of our benchmarks fall into the category of simulations, in which we either try to emulate the real world or recreate one system inside another. This set of tests covers molecular modelling, non-x86 video game console emulation, a neuron-and-synapse simulation roughly equivalent to a sea slug brain, and finally a popular video game that simulates the growth of a fictional land, complete with historical events and important characters within that world.

NAMD ApoA1

One frequent request over the years has been for some form of molecular dynamics simulation. Molecular dynamics forms the basis of a lot of computational biology and chemistry when modeling specific molecules, enabling researchers to find low-energy configurations or potential active binding sites, especially when looking at larger proteins. We’re using NAMD (Nanoscale Molecular Dynamics) here, a package often cited for its parallel efficiency. Unfortunately the version we’re using is limited to 64 threads on Windows, but we can still use it to analyze our processors. We simulate the ApoA1 protein for 10 minutes and report back the ‘nanoseconds per day’ that our processor can simulate. Molecular dynamics is so complex that yes, you can spend a day simply calculating a nanosecond of molecular movement.

This is one of our new tests, so we will be filling in more data as we start regression testing for older CPUs.
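
As a rough illustration of how the ‘nanoseconds per day’ figure relates to wall-clock time, the sketch below converts a measured seconds-per-step value into ns/day. The seconds-per-step and timestep numbers used here are illustrative placeholders, not figures from our runs.

```python
# Rough sketch (not our test harness): converting molecular dynamics timing
# into a 'nanoseconds per day' score. Both example values below are
# illustrative placeholders, not measurements from this review.

SECONDS_PER_DAY = 86_400

def ns_per_day(seconds_per_step: float, timestep_fs: float) -> float:
    """Convert wall-clock seconds per MD step into simulated nanoseconds per day."""
    timestep_ns = timestep_fs * 1e-6               # 1 fs = 1e-6 ns
    steps_per_day = SECONDS_PER_DAY / seconds_per_step
    return steps_per_day * timestep_ns

if __name__ == "__main__":
    # e.g. 0.045 s/step at a 2 fs timestep works out to roughly 3.8 ns/day
    print(f"{ns_per_day(seconds_per_step=0.045, timestep_fs=2.0):.2f} ns/day")
```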

NAMD 2.31 Molecular Dynamics (ApoA1)

 

Dolphin 5.0: Console Emulation

One of the most frequently requested tests for our suite has to do with console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to accurately emulate an older non-x86 console, especially if code for that console was written to abuse certain physical bugs in the hardware.

For our test, we use the popular Dolphin emulation software and run a compute project through it to see how our processors stack up against standard console hardware. For reference, a Nintendo Wii would take around 1050 seconds to finish this test.

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/
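
Since the result is simply a wall-clock time for a fixed workload, a run like this can in principle be timed externally. The snippet below is a minimal sketch of such a wrapper; the executable name and arguments are hypothetical placeholders, not Dolphin’s actual command line, and the real test scripts the emulator directly.

```python
# Minimal sketch of timing a fixed emulation workload from the outside.
# The command below is a hypothetical placeholder, not Dolphin's real CLI;
# the actual test scripts the emulator and reports its own completion time.

import subprocess
import time

def time_benchmark(cmd: list[str]) -> float:
    """Run a benchmark command to completion and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = time_benchmark(["./dolphin_render_test_placeholder"])
    # For context, the article notes a real Nintendo Wii takes around 1050 s.
    print(f"Completed in {elapsed:.1f} seconds")
```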

Dolphin 5.0 Render Test

 

DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for the simulation and visualization of neuron and synapse activity, as commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k neuron / 1.8B synapse simulation, roughly equivalent to the brain of a sea slug.

Example of a 2.1B neuron simulation

We report the results as the ability to simulate the data as a fraction of real time, so anything above ‘one’ is suitable for real-time work. Of the two modes, a ‘non-firing’ mode which is DRAM-heavy and a ‘firing’ mode which adds CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.

DigiCortex can be downloaded from http://www.digicortex.net/
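
For clarity on the scoring, the fraction-of-real-time figure is simply simulated time divided by wall-clock time; the sketch below shows the arithmetic with made-up numbers rather than DigiCortex output.

```python
# Sketch of the 'fraction of real-time' metric: simulated seconds divided by
# wall-clock seconds. The example values are made up for illustration.

def realtime_fraction(simulated_seconds: float, wall_seconds: float) -> float:
    """Return how much simulated time elapses per second of wall-clock time."""
    return simulated_seconds / wall_seconds

if __name__ == "__main__":
    score = realtime_fraction(simulated_seconds=30.0, wall_seconds=45.0)
    # A score at or above 1.0 means the simulation keeps up with real time.
    print(f"{score:.2f}x real-time")   # -> 0.67x real-time
```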

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

The additional bandwidth of the HEDT platforms puts them higher up the chart here - DigiCortex always ends up as an odd mix of bottlenecks, mostly around memory, but it can also be limited by localized internal bandwidth.

Dwarf Fortress

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game first launched in 2006. Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes.

DFMark is a benchmark built by vorsgren on the Bay12Forums that offers two different modes built on DFHack: world generation and embark. These tests can be configured, and depending on the settings they range anywhere from 3 minutes to several hours. I’ve barely scratched the surface here, but after analyzing the test, we ended up going with three different world generation sizes.

This is another of our new tests.

Dwarf Fortress (Small) 65x65 World, 250 Years

Dwarf Fortress (Medium) 129x129 World, 550 Years

Dwarf Fortress (Big) 257x257 World, 550 Years

220 Comments

  • Gastec - Friday, May 22, 2020 - link

    "pairing a high-end GPU with a mid-range CPU" should already be a meme, so many times I've seen it copy-pasted.
  • dotjaz - Thursday, May 21, 2020 - link

    What funny stuff are you smoking? In all actual configurations, AMD doesn't lose by any meaningful margin, at a much better value.
    Anandtech is running CPU tests where you set the quality low and get 150+ fps or even 400+ fps; nobody actually does that.
  • deepblue08 - Thursday, May 21, 2020 - link

    Intel may not be a great value chip all around. But a 10 FPS lead in 1440p is a lead nevertheless: https://hexus.net/tech/reviews/cpu/141577-intel-co...
  • DrKlahn - Thursday, May 21, 2020 - link

    If that's worth the more expensive motherboard, beefier (and more costly) cooling, and increased heat, then go for it. If you put 120fps next to 130fps without a counter up, how many people could tell? Personally I don't see it as worth it at all. Nor do I consider it a dominating lead. But I'm sure there are people out there that will buy Intel for a negligible lead.
  • Spunjji - Friday, May 22, 2020 - link

    An entirely unnoticeable lead that you get by sacrificing any sort of power consumption / cooling sanity and spending measurably larger amounts of cash on the hardware to achieve the boost clocks required to get that lead.

    The difference was meaningful back when AMD had lower minimum framerates, less consistency and -30fps or so off the average. Now it's just silly.
  • babadivad - Thursday, May 21, 2020 - link

    Do you need a new motherboard with these? If so, they make even less sense than they already did.
  • MDD1963 - Friday, May 22, 2020 - link

    As for Intel owners, I don't think too many 8700K, 9600K or above owners would seriously feel they are CPU limited and in dire/imminent need of a CPU upgrade as they sit now, anyway. Users of prior generations (I'm still on a 7700K) will make their choices at a time of their own choosing, of course, and not simply because 'a new generation is out'. (I mean, look at the 8700K vs. 10600K results... it looks almost like a rebadging operation.)
  • khanikun - Wednesday, May 27, 2020 - link

    I was on a 7700k and didn't feel CPU limited at all, but decided to get an 8086k for the 2 more cores, and just cause it was an 8086. For my normal workloads or gaming, I don't notice a difference. I do re-encode videos maybe a couple of times a year, which are the only times I'll see the difference.

    I'll probably just be sitting on this 8086k for the next few years, unless something on my machine breaks or Intel does something crazy ridiculous, like making some 8-core i7 on 10nm at 5 GHz all-core, in a new socket, then making dual-socket consumer boards for it at a relatively decent price. I'd upgrade for that, just cause I'd like to try making a dual-processor system that isn't some expensive workstation/server system.
  • Spunjji - Friday, May 22, 2020 - link

    Yes, you do. So no, they don't make sense xD
  • Gastec - Friday, May 22, 2020 - link

    Games...framerate is pointless in video games, all that matters now are the "surprise mechanics".
