CPU Performance: New Tests!

As part of our ongoing march towards a more rounded view of the performance of these processors, we have a few new tests that we’ve been cooking up in the lab. Some of these new benchmarks provide obvious talking points; others are just a bit of fun. Most of them are so new that we’ve only run them on a few processors so far. It will be interesting to hear your feedback!

As far as this review goes, we still need to perform regression testing of our new benchmarks on the older hardware, so these results are here more for completeness.

NAMD ApoA1

One frequent request over the years has been for some form of molecular dynamics simulation. Molecular dynamics forms the basis of a lot of computational biology and chemistry when modeling specific molecules, enabling researchers to find low-energy configurations or potential active binding sites, especially when looking at larger proteins. We’re using NAMD here, short for Nanoscale Molecular Dynamics, a package often cited for its parallel efficiency. Unfortunately the version we’re using is limited to 64 threads on Windows, but we can still use it to analyze our processors. We’re simulating the ApoA1 protein for 10 minutes, and reporting back the ‘nanoseconds per day’ that our processor can simulate. Molecular dynamics is so complex that, yes, you can spend a day simply calculating a nanosecond of molecular movement.
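
To make the ‘nanoseconds per day’ metric concrete, here is a minimal sketch of a fixed-wall-time run in Python. This is an illustration rather than our exact harness: the namd2 binary name, the apoa1.namd input deck, and the 2 fs timestep are assumptions.

```python
import re
import subprocess
import time

# Minimal sketch, not our exact harness. Assumes namd2 is on PATH and
# apoa1.namd is the standard ApoA1 input deck with a 2 fs timestep.
TIMESTEP_FS = 2.0
WALL_SECONDS = 600  # fixed 10-minute run, as in the suite

with open("namd_log.txt", "w") as log:
    proc = subprocess.Popen(
        ["namd2", "+p64", "apoa1.namd"],  # Windows build caps at 64 threads
        stdout=log, stderr=subprocess.STDOUT,
    )
    time.sleep(WALL_SECONDS)
    proc.terminate()  # stop the simulation after the fixed wall time
    proc.wait()

# NAMD logs "ENERGY:" lines whose first numeric field is the step
# number; the last occurrence tells us how far the run progressed.
with open("namd_log.txt") as log:
    steps = re.findall(r"^ENERGY:\s+(\d+)", log.read(), re.MULTILINE)
last_step = int(steps[-1]) if steps else 0

simulated_ns = last_step * TIMESTEP_FS * 1e-6  # femtoseconds -> nanoseconds
wall_days = WALL_SECONDS / 86400.0             # seconds -> days
print(f"{simulated_ns / wall_days:.3f} ns/day")
```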

NAMD 2.31 Molecular Dynamics (ApoA1)


Crysis CPU Render

One of the most oft-used memes in computer gaming is ‘Can It Run Crysis?’. The original 2007 game, built on Crytek’s CryEngine, was heralded as a computationally complex title for the hardware of the time and for several years afterwards, suggesting that a user needed graphics hardware from the future in order to run it. Fast forward over a decade, and the game runs fairly easily on modern GPUs, but we can also apply the same concept to pure CPU rendering: can the CPU render Crysis? Since 64-core processors entered the market, one can dream. We built a benchmark to see whether the hardware can.

For this test, we’re running Crysis’ own GPU benchmark, but in CPU render mode. This is a 2000-frame test, which we run over a series of resolutions from 800x600 up to 1920x1080. Here we have the 1920x1080 results, with the rest available in our Benchmark database.
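
As a rough sketch of the harness logic only: the executable name and every command-line switch below are hypothetical stand-ins, since we drive the game through our own automation rather than a public CLI.

```python
import subprocess

# Sketch of the harness logic only. "crysis_bench.exe" and all of its
# flags are hypothetical stand-ins, not the game's real interface.
RESOLUTIONS = [(800, 600), (1024, 768), (1280, 720),
               (1600, 900), (1920, 1080)]
FRAMES = 2000  # the built-in benchmark pass is 2000 frames long

for width, height in RESOLUTIONS:
    result = subprocess.run(
        ["crysis_bench.exe",
         "--cpu-render",                        # software rendering path
         f"--width={width}", f"--height={height}",
         f"--frames={FRAMES}"],
        capture_output=True, text=True, check=True,
    )
    # Assume the wrapper prints total wall time (seconds) on stdout.
    seconds = float(result.stdout.strip())
    print(f"{width}x{height}: {FRAMES / seconds:.2f} FPS")
```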

Crysis CPU Render: 1920x1080

Dwarf Fortress

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game first launched in 2006. Emulating the ASCII interfaces of old, this title is a rather complex beast: it can generate whole worlds subject to millennia of simulated history, complete with famous figures, peasants, and key historical events. The further you get into the game, depending on the size of the world, the slower it becomes.

DFMark is a benchmark built by vorsgren on the Bay12Forums that offers two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from 3 minutes to several hours. I’ve barely scratched the surface here, but after analyzing the test, we ended up going with three different world generation sizes.
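
For illustration, here is a minimal sketch of how the three world generation runs are parameterized and timed. The dfmark.exe invocation and its flags are hypothetical stand-ins for however the tool is driven locally.

```python
import subprocess
import time

# Sketch only: "dfmark.exe" and its flags are hypothetical stand-ins.
# The three configurations match the charts below.
WORLDS = [
    ("Small",  65,  65,  250),   # 65x65 world, 250 years of history
    ("Medium", 129, 129, 550),   # 129x129 world, 550 years
    ("Big",    257, 257, 550),   # 257x257 world, 550 years
]

for name, width, height, years in WORLDS:
    start = time.perf_counter()
    subprocess.run(
        ["dfmark.exe", "--mode=worldgen",
         f"--size={width}x{height}", f"--years={years}"],
        check=True,
    )
    elapsed = time.perf_counter() - start
    print(f"{name} ({width}x{height}, {years} years): {elapsed:.1f} s")
```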

Dwarf Fortress (Small) 65x65 World, 250 Years
Dwarf Fortress (Medium) 129x129 World, 550 Years
Dwarf Fortress (Big) 257x257 World, 550 Years

AI Benchmark

One of the longest-standing requests for our benchmark suite has been an AI-related benchmark, and the folks over at ETH have moved their popular AI Benchmark from mobile over to PC. Using Intel’s MKL and TensorFlow 2.1.0, we run version 0.1.2 of the benchmark, which tests both training and inference over a variety of different models. You can read the full scope of the benchmark here.
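
For anyone who wants to try it at home, the benchmark ships as a Python package, and a minimal run looks something like the sketch below. The score attribute names follow the package’s published example; treat them as assumptions rather than gospel.

```python
# pip install ai-benchmark  (we pair it with TensorFlow 2.1.0 and MKL)
from ai_benchmark import AIBenchmark

# Runs every model in the suite, both training and inference passes.
benchmark = AIBenchmark()
results = benchmark.run()

# The result object carries the three scores we chart. Attribute names
# are taken from the package's example code; treat them as assumptions.
print("Inference:", results.inference_score)
print("Training: ", results.training_score)
print("Combined: ", results.ai_score)
```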

AI Benchmark (ETH) Combined
AI Benchmark (ETH) Inference
AI Benchmark (ETH) Training


V-Ray

We already have a couple of renderers and ray tracers in our suite, but V-Ray’s benchmark was requested often enough for us to roll it into the suite as well. We run the standard standalone benchmark application, but in an automated fashion, pulling out the result in the form of kilosamples per second. We run the test six times and take an average of the valid results.
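
A minimal sketch of that averaging logic follows. The binary name, the flag, and the output pattern are assumptions rather than the benchmark’s documented interface.

```python
import re
import statistics
import subprocess

# Sketch of the automation: "vray_benchmark.exe", its flag, and the
# "ksamples" output pattern are assumptions, not a documented CLI.
RUNS = 6
scores = []

for _ in range(RUNS):
    proc = subprocess.run(
        ["vray_benchmark.exe", "--mode=cpu"],
        capture_output=True, text=True,
    )
    match = re.search(r"([\d.]+)\s*ksamples", proc.stdout, re.IGNORECASE)
    if match:  # keep only runs that produced a parsable score
        scores.append(float(match.group(1)))

if scores:
    print(f"V-Ray: {statistics.mean(scores):.1f} ksamples/s "
          f"({len(scores)}/{RUNS} valid runs)")
```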

V-Ray Renderer


114 Comments


  • eastcoast_pete - Monday, May 18, 2020 - link

    Thanks Ian!
    While this is not important for many (most?) readers here, I would like to see AMD or anyone else putting out a more basic GPU (under $50 retail) that has HDMI 2.0a or better, DisplayPort out, and ASICs for H.264/H.265 and VP9 decoding; AV1 would be a plus. This could be a PCIe dGPU or something soldered directly onto a motherboard. Am I the only one who finds that interesting? I don't like always having to plug a high-powered dGPU into each build that has more than just an entry-level CPU, so this would help.
  • Spunjji - Tuesday, May 19, 2020 - link

    You'll likely be waiting a while. You'd need to wait for the next generation of GPUs with new display controllers and video decoders. There's a rumour that Nvidia will be producing an Ampere "MX550" for mobile, which could mean a dGPU based on the same chip being released for ~$100. Give that a couple more years to drop in price and, well, by then you'll probably want new standards. :D
  • Pgndu - Monday, May 18, 2020 - link

    I come here for a clearer perspective more than for benchmarks, but the timing of this article is weird, especially with 10th Gen at the door. I get the market, or at least the PC-builder cause and effect, but the market has just been blown out of proportion with options; what actually transfers to the general populace won't be clear until OEMs embrace the reality, like Nvidia.
  • Arbie - Monday, May 18, 2020 - link

    "The Core i5-10500 ... is 65 W, the same as AMD".

    Anandtech knows very well that Intel TDP is not the same as AMD TDP. Please stop falling into the noob-journo trap of simply repeating the Intel BS just because it's official BS.
  • GreenReaper - Monday, May 18, 2020 - link

    In fairness, AMD is also turboing to 88W, with cores plus uncore measured as taking significantly more than 65W.
  • Spunjji - Tuesday, May 19, 2020 - link

    Absolutely right, but also in fairness, Intel's sole enhancement for the 10 series appears to be enabling higher clock speeds - and they're made on the same process with the same architecture as the 9 series, which inevitably means more power will be required to reach those higher clocks.

    So, it's likely to be either a CPU with similar real power use to the AMD processor that never really hits its rated turbo clocks, or a CPU that does hit its rated turbo and never drops below ~100W under sustained load. It's likely to be power and speed competitive on an either/or basis, but not both at the same time.
  • watzupken - Tuesday, May 19, 2020 - link

    It's true that it goes above its TDP to provide the boost speed. However, this is a practice Intel has followed since its Kaby Lake/Coffee Lake series. Unfortunately, they are the worst offender when it comes to exceeding the stated TDP, considering how much power is pulled to sustain boost (PL2) speeds. Given the boost speeds of Comet Lake, even the supposedly 65W i5 10xxx series is not going to keep to 65W with boost speeds of up to 4.8 GHz; nothing is mentioned about the all-core turbo, but it should be somewhere close, i.e. 4.2 to 4.6 GHz is my guess.
  • lakedude - Monday, May 18, 2020 - link

    I assume no one has mentioned the typo since it is still there.

    "Competition

    With six cores and twelve threads, the comparative Intel options vary between something like the Core i7-9600KF with six cores and no hyperthreading..." 

    Gotta be i5, right?
  • Kalelovil - Tuesday, May 19, 2020 - link

    @Ian Cutress
    There appears to be a mistake in the AI Benchmark results, the Ryzen 5 3600 Combined result is less than the sum of its Inference and Training results.
  • xSneak - Tuesday, May 19, 2020 - link

    Disappointed to see the continual CPU reviews using a GTX 1080 as the GPU. We would be better able to evaluate CPU performance if a 2080 Ti were used, given that it is CPU-bottlenecked at 1080p in some games. Hard to believe one of the biggest tech sites is using such underpowered hardware.
