CPU Performance: New Tests!

As part of our ongoing march towards a better-rounded view of the performance of these processors, we have a few new tests for you that we've been cooking up in the lab. Some of these new benchmarks provide obvious talking points; others are just a bit of fun. Most of them are so new that we've only run them on a few processors so far. It will be interesting to hear your feedback!

As far as this review goes, we still need to perform regression testing of our new benchmarks on the older hardware, so these results are here more for completeness.

NAMD ApoA1

One frequent request over the years has been for some form of molecular dynamics simulation. Molecular dynamics forms the basis of a lot of computational biology and chemistry when modeling specific molecules, enabling researchers to find low-energy configurations or potential active binding sites, especially when looking at larger proteins. We're using NAMD here, or Nanoscale Molecular Dynamics, which is often cited for its parallel efficiency. Unfortunately the version we're using is limited to 64 threads on Windows, but we can still use it to analyze our processors. We simulate the ApoA1 protein for 10 minutes, and report back the 'nanoseconds per day' that our processor can simulate. Molecular dynamics is so computationally complex that, yes, you can spend a whole day simply calculating a nanosecond of molecular movement.
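If you're curious how NAMD's log output turns into that 'nanoseconds per day' figure, the sketch below shows the idea. It is a minimal illustration, assuming a stock namd2 binary, the standard apoa1.namd config, and NAMD's usual 'days/ns' benchmark lines; it is not our exact harness.

```python
import re
import subprocess

# Assumed invocation: the stock NAMD binary, 64 threads, standard ApoA1 config.
NAMD_CMD = ["namd2", "+p64", "apoa1.namd"]

def apoa1_ns_per_day() -> float:
    """Run NAMD on ApoA1 and return simulated nanoseconds per day of compute."""
    log = subprocess.run(NAMD_CMD, capture_output=True, text=True).stdout

    # NAMD periodically prints lines of the form:
    #   Info: Benchmark time: 64 CPUs 0.0123 s/step 0.142 days/ns ...
    # We grab every days/ns figure and invert the last (most settled) one.
    days_per_ns = [float(x) for x in re.findall(r"([\d.]+) days/ns", log)]
    if not days_per_ns:
        raise RuntimeError("no benchmark timing lines found in NAMD output")
    return 1.0 / days_per_ns[-1]

if __name__ == "__main__":
    print(f"ApoA1 throughput: {apoa1_ns_per_day():.3f} ns/day")
```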

NAMD 2.13 Molecular Dynamics (ApoA1)


Crysis CPU Render

One of the most oft-used memes in computer gaming is 'Can it run Crysis?'. The original 2007 game, built on Crytek's CryEngine, was heralded as a computationally complex title for the hardware of the time and for several years afterwards, suggesting that a user needed graphics hardware from the future in order to run it. Fast forward over a decade, and the game runs fairly easily on modern GPUs, but we can also apply the same concept to pure CPU rendering: can the CPU render Crysis? Since 64-core processors entered the market, one can dream. So we built a benchmark to see whether the hardware can.

For this test, we're running Crysis' own GPU benchmark, but in CPU render mode. This is a 2000-frame test, which we run over a series of resolutions from 800x600 up to 1920x1080. Here we have the 1920x1080 results, with the rest available in our benchmark database.
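The reduction from per-frame times to the average frame rates charted below is simple enough to show in a few lines. This is a toy sketch; the frame-time data here is illustrative, not Crysis' actual log format.

```python
import statistics

def average_fps(frame_times_ms):
    """Average FPS over one pass = 1000 / mean frame time in milliseconds."""
    return 1000.0 / statistics.mean(frame_times_ms)

# Illustrative data only: a 2000-frame CPU-rendered 1080p pass hovering
# around 150 ms per frame (i.e. single-digit FPS, as CPU rendering goes).
pass_1080p = [150.0 + (i % 7) for i in range(2000)]
print(f"1920x1080: {average_fps(pass_1080p):.2f} FPS")
```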

Crysis CPU Render: 1920x1080

Dwarf Fortress

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game first launched in 2006. Emulating the ASCII interfaces of old, this title is a rather complex beast, as it can generate environments subject to millennia of rule, with famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes.

DFMark is a benchmark built by vorsgren on the Bay12 Forums that offers two different modes built on DFHack: world generation and embark. These tests can be configured, but range anywhere from three minutes to several hours. We've barely scratched the surface here, but after analyzing the test, we ended up going for three different world generation sizes.
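For a sense of how the runs are organized, here is an entirely hypothetical timing harness around those three configurations. DFMark's real entry point is a DFHack-driven script, and the command shown is a placeholder, not its actual invocation.

```python
import subprocess
import time

# The three world-generation configurations we settled on.
WORLD_CONFIGS = {
    "Small":  {"size": "65x65",   "years": 250},
    "Medium": {"size": "129x129", "years": 550},
    "Big":    {"size": "257x257", "years": 550},
}

for name, cfg in WORLD_CONFIGS.items():
    start = time.perf_counter()
    # Placeholder command: substitute the actual DFMark/DFHack invocation.
    subprocess.run(["dfmark", "--worldgen", cfg["size"], "--years", str(cfg["years"])])
    print(f"{name} ({cfg['size']}, {cfg['years']} years): {time.perf_counter() - start:.1f} s")
```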

Dwarf Fortress (Small) 65x65 World, 250 Years
Dwarf Fortress (Medium) 129x129 World, 550 Years
Dwarf Fortress (Big) 257x257 World, 550 Years

AI Benchmark

One of the longest-standing requests for our benchmark suite has been an AI-related benchmark, and the folks over at ETH have moved their popular AI Benchmark from mobile over to PC. Using Intel's MKL and TensorFlow 2.1.0, we use version 0.1.2 of the benchmark, which tests both training and inference over a variety of different models. You can read the full scope of the benchmark here.
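For those who want to try it at home, the benchmark is driven from Python. Below is a minimal sketch assuming the public ai-benchmark pip package; the result attribute names are assumptions based on the package's documentation, not a confirmation of our exact harness.

```python
# pip install ai_benchmark  (requires a working TensorFlow install)
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()

# run() exercises both inference and training across the model suite;
# run_inference() / run_training() exist to test each half on its own.
results = benchmark.run()

# Attribute names assumed from the package documentation.
print("Inference:", results.inference_score)
print("Training: ", results.training_score)
print("Combined: ", results.ai_score)
```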

AI Benchmark (ETH) Combined
AI Benchmark (ETH) Inference
AI Benchmark (ETH) Training


V-Ray

We already have a couple of renderers and ray tracers in our suite, however V-Ray's benchmark was requested often enough for us to roll it into the suite as well. We run the standard standalone benchmark application, but in an automated fashion in order to pull out the result in the form of kilosamples per second. We run the test six times and take an average of the valid results.
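The run-six-and-average-the-valid-results logic looks something like the sketch below. The binary name, flags, and output pattern are placeholders (the real standalone benchmark's CLI differs), but the aggregation mirrors what our automation does.

```python
import re
import statistics
import subprocess

# Placeholder command and output pattern; substitute the real benchmark CLI.
VRAY_CMD = ["vray_benchmark", "--mode", "cpu"]
RESULT_RE = re.compile(r"([\d.]+)\s*ksamples", re.IGNORECASE)

def run_once():
    """One benchmark pass; returns ksamples/s, or None if the run is invalid."""
    out = subprocess.run(VRAY_CMD, capture_output=True, text=True).stdout
    match = RESULT_RE.search(out)
    return float(match.group(1)) if match else None

# Six runs, averaging only those that produced a parsable score.
scores = [s for s in (run_once() for _ in range(6)) if s is not None]
if not scores:
    raise RuntimeError("no valid benchmark runs")
print(f"V-Ray: {statistics.mean(scores):.1f} ksamples/s over {len(scores)} valid runs")
```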

V-Ray Renderer

114 Comments

  • PeachNCream - Monday, May 18, 2020 - link

    Anandtech spends a lot of time on gaming and on desktop PCs that are not representative of where and how people now accomplish compute tasks. They do spend a little time on mobile phones and that nets part of the market, but only at the pricey end of cellular handsets. Lower cost mobile for the masses and work-a-day PCs and laptops generally get a cursory acknowledgement once in a great while which is disappointing because there is a big chunk of the market that gets disregarded. IIRC, AT didn't even get around to reviewing the lower tiers of discrete GPUs in the past, effectively ignoring that chunk of the market until long after release and only if said lower end hardware happened to be in a system they ended up getting. They do not seem to actively seek out such components, sadly enough.
  • whatthe123 - Monday, May 18, 2020 - link

    AI/tensorflow runs so much faster even on mid tier GPUs that trying to argue CPUs are relevant is completely out of touch. No academic in their right mind is looking for a bang-for-buck CPU to train models, it would be an absurd waste of time.
  • wolfesteinabhi - Tuesday, May 19, 2020 - link

    well ..games also run on GPU ...so why bother benchmarking CPUs with them? ... same reason why anyone would want to look at other workflows .. I said TensorFlow as just one of the examples (maybe not the best example) ..but more of such "work" or "development" oriented benchmarks.
  • pashhtk27 - Thursday, May 21, 2020 - link

    Or there should be proper support libraries for the integrated graphics to run tensor calculations. That would make GPU-less AI development machines a lot more cost effective. AMD and Intel are both working on this but it'll be hard to get around Nvidia's monopoly of AI computing. Free cloud compute services like colab have several problems and others are very cost prohibitive for students. And sometimes you just need to have a local system capable of loading and predicting. As a student, I think it would significantly lower the entry threshold if their cost effective laptops could run simple models and get output.

    We can talk about AI benchmarks then.
  • Gigaplex - Monday, May 18, 2020 - link

    As a developer I just use whatever my company gives me. I wouldn't be shopping for consumer CPUs for work purposes.
  • wolfesteinabhi - Tuesday, May 19, 2020 - link

    not all developers are paid by their companies or make money with what they develop ... some are hobbyists and some do it as their "side" activities with their own money at home, apart from what they do at work with the big guns!
  • mikato - Sunday, May 24, 2020 - link

    As a developer, I built my own new computer at work and got to pick everything within budget.
  • Achaios - Monday, May 18, 2020 - link

    "Every so often there comes a processor that captures the market. "

    This used to be the Sandy Bridge i5-2500K, an all-time best seller.

    Oh, how the Mighty Chipzilla has fallen.
  • mikelward - Monday, May 18, 2020 - link

    My current PC is a 2500K. My next one will be a 3600.
  • Spunjji - Tuesday, May 19, 2020 - link

    Sandy was an absolute knockout. Most of the development thereafter was aimed at sticking similarly powerful CPUs in sleeker packages rather than increasing desktop performance, and while I feel like Intel deserve more credit for some things than they get (e.g. the leap in mobile power/performance that came from Haswell), they really shit the bed on 10nm and responding to Ryzen.
