CPU Performance: Simulation Tests

A number of our benchmarks fall into the category of simulations, where we either try to emulate the real world or re-create known systems. This set covers a variety of tests: molecular modeling, non-x86 video game console emulation, a simulation of the equivalent of a sea slug brain with neurons and synapses firing, and finally a popular video game that simulates the growth of a fictional land, including historical events and the important characters within that world.

NAMD ApoA1

One frequent request over the years has been for some form of molecular dynamics simulation. Molecular dynamics forms the basis of a lot of computational biology and chemistry when modeling specific molecules, enabling researchers to find low energy configurations or potential active binding sites, especially when looking at larger proteins. We’re using the NAMD software here, or Nanoscale Molecular Dynamics, often cited for its parallel efficiency. Unfortunately the version we’re using is limited to 64 threads on Windows, but we can still use it to analyze our processors. We’re simulating the ApoA1 protein for 10 minutes, and reporting back the ‘nanoseconds per day’ that our processor can simulate. Molecular dynamics is so complex that yes, you can spend a day simply calculating a nanosecond of molecular movement.
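The 'nanoseconds per day' metric can be derived from how many integration timesteps the machine completes in a given wall-clock window. As a rough illustration (this is not NAMD's code, and the 2 fs timestep is a common molecular dynamics default assumed here, not stated in the article):

```python
# Hedged sketch: converting measured timestep throughput into the
# 'nanoseconds per day' figure that NAMD-style benchmarks report.
# Assumption: a 2 fs integration timestep (a typical MD default).
SECONDS_PER_DAY = 86_400

def ns_per_day(timesteps_completed: int, wall_seconds: float,
               timestep_fs: float = 2.0) -> float:
    """Simulated nanoseconds the machine could produce in 24 hours."""
    simulated_ns = timesteps_completed * timestep_fs * 1e-6  # fs -> ns
    return simulated_ns / wall_seconds * SECONDS_PER_DAY

# e.g. 300,000 steps of 2 fs completed in a 10-minute (600 s) run:
print(ns_per_day(300_000, 600.0))  # 86.4 ns/day
```

The arithmetic makes the scale of the problem clear: even hundreds of thousands of timesteps per ten-minute run only amount to double-digit nanoseconds of simulated movement per day.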

This is one of our new tests, so we will be filling in more data as we start regression testing for older CPUs.

NAMD 2.13 Molecular Dynamics (ApoA1)


Dolphin 5.0: Console Emulation

One of the most frequently requested tests for our suite is console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to accurately emulate an older non-x86 console, especially if code for that console was written to exploit specific quirks or bugs in the hardware.

For our test, we use the popular Dolphin emulation software and run a compute project through it to determine how quickly our processors can complete a workload relative to a real console. In this test, a Nintendo Wii would take around 1050 seconds.
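Since the benchmark reports time-to-complete, lower scores are better, and results are easy to normalize against the real hardware. A small sketch (the helper name is ours; only the 1050 s Wii baseline comes from the article):

```python
# Hedged sketch: normalizing a Dolphin render-test result against the
# ~1050 s a real Nintendo Wii takes, per the figure quoted above.
WII_BASELINE_SECONDS = 1050.0

def speedup_vs_wii(result_seconds: float) -> float:
    """How many times faster than a real Wii the CPU completed the run."""
    return WII_BASELINE_SECONDS / result_seconds

print(speedup_vs_wii(210.0))  # a 210 s result is 5.0x real-console speed
```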

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/

Dolphin 5.0 Render Test


DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for simulation and visualization of neuron and synapse activity, as is commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k neuron / 1.8B synapse simulation, equivalent in scale to a sea slug's brain.

Example of a 2.1B neuron simulation

We report the results as the ability to simulate the data as a fraction of real-time, so anything above a ‘one’ is suitable for real-time work. Out of the two modes, a ‘non-firing’ mode which is DRAM heavy and a ‘firing’ mode which has CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.
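The score is therefore a simple ratio of simulated time to wall-clock time. A minimal sketch of that metric (not DigiCortex's actual code; the function name is ours):

```python
# Hedged sketch: DigiCortex reports performance as a fraction of
# real time. A score above 1.0 means the processor can run the
# 32k neuron / 1.8B synapse model in real time.
def realtime_fraction(simulated_seconds: float, wall_seconds: float) -> float:
    """Fraction of real time the simulation achieves."""
    return simulated_seconds / wall_seconds

# A CPU that simulates 0.5 s of neural activity in 2.0 s of wall time
# scores 0.25x -- a quarter of real-time speed.
print(realtime_fraction(0.5, 2.0))  # 0.25
```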

DigiCortex can be downloaded from http://www.digicortex.net/

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

The additional memory bandwidth of the HEDT platforms puts them higher up the chart here. DigiCortex always ends up as an odd mix of bottlenecks, mostly around memory, but it can also be limited by localized internal bandwidth.

Dwarf Fortress

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game, first launched in 2006. Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes.

DFMark is a benchmark built by vorsgren on the Bay12Forums that gives two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from 3 minutes to several hours. I’ve barely scratched the surface here, but after analyzing the test, we ended up going for three different world generation sizes.

This is another of our new tests.

Dwarf Fortress (Small) 65x65 World, 250 Years

Dwarf Fortress (Medium) 129x129 World, 550 Years

Dwarf Fortress (Big) 257x257 World, 550 Years


220 Comments


  • Khenglish - Wednesday, May 20, 2020 - link

    Ian, for the Crysis CPU render test you'd probably get higher FPS disabling the GPU in the device manager and set Crysis to use hardware rendering. Disabling the GPU driver enables software rendering by default on Windows 10. The Win10 rendering does stutter worse than the reported FPS though, so take from it what you want.
  • shaolin95 - Wednesday, May 20, 2020 - link

    "But will the end-user want that extra percent of performance, for the sake of spending more on cooling and more in power?"

    Such retarded comment. More power...do you actually know who little difference this makes in a year. Wow this place is going down hill fast.
    Oh and a cooler you know we don't have to change our cooler with every CPU purchase so don't make it seem like this HUGE issue...your AMD fanboy colors are showing VERY clearly.
  • schujj07 - Wednesday, May 20, 2020 - link

    If you think you can use the 212 EVO you have from a 6700k or 7700k to keep the 10900k cool you are absolutely nuts. "Speaking with a colleague, he had issues cooling his 10900K test chip with a Corsair H115i, indicating that users should look to spending $150+ on a cooling setup. That’s going to be a critical balancing element here when it comes to recommendations." This isn't any form of fanboyism. This is stating a fact that to squeeze out the last remaining bits of performance in Skylake & 14nm Intel had to sacrifice massive amounts of heat/power to do so.
  • Maxiking - Wednesday, May 20, 2020 - link

    If you have issues cooling 10900k with H115i, the problem is always between the monitor and chair.

    They were able to cool OC 10900k with 240m AIO just lol

    Incompetency of some reviewers is just astonishing
  • schujj07 - Wednesday, May 20, 2020 - link

    All depends on the instructions that you are running. From Tomshardware: "We tested with the beefier Noctua NH-D15 and could mostly satisfy cooling requirements in standard desktop PC applications, but you will lose out on performance in workloads that push the boundaries with AVX instructions. As such, you'll need a greater-than-280mm AIO cooler or a custom loop to unlock the best of the 10900K. You'll also need an enthusiast-class motherboard with beefy power circuitry, and also plan on some form of active cooling for the motherboard's power delivery subsystem." https://www.tomshardware.com/reviews/intel-core-i9...
    "While Intel designed its 250W limit to keep thermals 'manageable' with a wide variety of cooling solutions, most motherboard vendors feed the chip up to ~330W of power at stock settings, leading to hideous power consumption metrics during AVX stress tests. Feeding 330W to a stock processor on a mainstream motherboard is a bit nuts, but it enables higher all-core frequencies for longer durations, provided the motherboard and power supply can feed the chip enough current, and your cooler can extract enough heat.

    To find the power limit associated with our chip paired with the Gigabyte Aorus Z490 Master motherboard, we ran a few Prime95 tests with AVX enabled (small FFT). During those tests, we recorded up to 332W of power consumption when paired with either the Corsair H115i 280mm AIO watercooler or a Noctua NH-D15S air cooler. Yes, that's with the processor configured at stock settings. For perspective, our 18-core Core i9-10980XE drew 'only' 256W during an identical Prime95 test." https://www.tomshardware.com/reviews/intel-core-i9...

    Think it is still a pebkac error?
  • alufan - Thursday, May 21, 2020 - link

    try this he doesn't slate the intel or amd just a proper review with live power draw at the socket OMG lol you need your won power plant when you run these let alone over clock it

    https://www.kitguru.net/components/leo-waldock/int...
  • Spunjji - Tuesday, May 26, 2020 - link

    "They were able to cool OC 10900k with 240m AIO just lol"
    Who were? Everyone I've read indicates that with a 240mm AIO, CPU temps hit 90+

    Pathetic comment troll is pathetic.
  • Retycint - Wednesday, May 20, 2020 - link

    It is, in fact, a huge issue because most people won't have high end coolers necessary to keep the thermals under control. Personal attacks such as accusing people of being a "fanboy" just degrades your argument (if there was any in the first place) and make you look dumb
  • Spunjji - Tuesday, May 26, 2020 - link

    "Such retarded comment."
    The pure, dripping irony of using a slur to mock someone else's intelligence, but screwing up the grammar of the sentence in which you do it...

    Some people build from scratch. Some people have uses for their old system. larger PSUs and suitable cooling to get optimal performance from this CPU don't come cheap. Go home, troll.
  • watzupken - Wednesday, May 20, 2020 - link

    Not surprising, Intel managed to keep their advantage in games by pushing for higher frequency. However the end result is a power hungry chip that requires some high end AIO or custom water cooler to keep cool. I agree that Intel is digging themselves deeper and deeper into a hole that they will not be able to get out so easily. In fact I don't think they can get out of it until their 7nm is ready and mature enough to maintain a high frequency, or they come out with a brand new architecture that allows them to improve on Comet Lake's performance without the crazy clockspeed. Indeed, they will not be able to pull another generation with their Skylake + 14nm combination looking at the power consumption and heat generation issue. Intel should consider bundling that industrial chiller they used to cool their 20 core chip during the demo.
