CPU Tests: Office and Science

Our previous set of ‘office’ benchmarks has often been a mix of science and synthetics, so this time we wanted to keep our office section purely about real-world performance.

Agisoft Photoscan 1.3.3: link

The concept of Photoscan is translating many 2D images into a 3D model - the more detailed the images, and the more of them you have, the better the final 3D model, in both spatial accuracy and texturing accuracy. The algorithm has four stages, with some parts of each stage being single-threaded and others multi-threaded, along with some cache/memory dependency in there as well. For the more variably threaded parts of the workload, features such as Speed Shift and XFR can take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

For the update to version 1.3.3, the Agisoft software now supports command line operation. Agisoft provided us with a set of new images for this version of the test, and a python script to run it. We’ve modified the script slightly by changing some quality settings for the sake of the benchmark suite length, as well as adjusting how the final timing data is recorded. The python script dumps the results file in the format of our choosing. For our test we obtain the time for each stage of the benchmark, as well as the overall time.
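For the curious, a minimal sketch of how such a script can time each stage using PhotoScan’s bundled Python module looks something like the below - the project path and the exact stage list are illustrative, and our actual script uses different quality settings and output handling:

```python
# Minimal sketch of a per-stage PhotoScan timing script (illustrative).
# Assumes the PhotoScan Python module that ships with the application;
# the project path and stage list are placeholders, not our exact script.
import time
import PhotoScan

doc = PhotoScan.Document()
doc.open("benchmark_project.psx")   # hypothetical benchmark project
chunk = doc.chunk

stages = [
    ("Align",   lambda: (chunk.matchPhotos(), chunk.alignCameras())),
    ("Dense",   lambda: chunk.buildDenseCloud()),
    ("Model",   lambda: chunk.buildModel()),
    ("Texture", lambda: chunk.buildTexture()),
]

timings = []
overall = time.time()
for name, stage in stages:
    start = time.time()
    stage()
    timings.append((name, time.time() - start))
timings.append(("Total", time.time() - overall))

# Dump the results file in a simple, parseable format
with open("photoscan_results.txt", "w") as out:
    for name, seconds in timings:
        out.write("%s: %.1f s\n" % (name, seconds))
```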

(1-1) Agisoft Photoscan 1.3, Complex Test

 

Application Opening: GIMP 2.10.18

First up is a test that loads a monstrous multi-layered xcf file into GIMP. While the file is only a single ‘image’, it has so many high-quality layers embedded that it took north of 15 seconds to open and hand back control on the mid-range notebook I was using at the time.

What we test here is the first run - normally the first time a user loads GIMP from a fresh install, the system has to configure a few dozen files that remain optimized on subsequent openings. For our test we delete those configured optimized files in order to force a ‘fresh load’ each time the software is run. As it turns out, GIMP does these optimizations for every CPU thread in the system, which means higher thread-count processors take a lot longer to run the test.

We measure the time taken from calling the software to open until the software hands control back to the OS for the user. The test is repeated until ten minutes have elapsed or 15 loops have completed, whichever comes first, with the first three results discarded.
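AppTimer does the real measurement for us, but the logic of the loop can be sketched as follows. The install and config paths are assumptions for a default Windows setup, and we approximate ‘time to user control’ by having GIMP load in batch mode and quit immediately:

```python
# Illustrative cold-start loop in the spirit of our AppTimer test.
# Paths are assumptions for a default Windows install of GIMP 2.10.
import shutil
import subprocess
import time
from pathlib import Path

GIMP = r"C:\Program Files\GIMP 2\bin\gimp-2.10.exe"
CONFIG = Path.home() / "AppData" / "Roaming" / "GIMP" / "2.10"  # cached optimized files

def cold_start_seconds():
    shutil.rmtree(CONFIG, ignore_errors=True)       # force a 'fresh load'
    start = time.perf_counter()
    # -i runs without the UI, -b executes a Script-Fu command:
    # here GIMP quits as soon as it has finished loading.
    subprocess.run([GIMP, "-i", "-b", "(gimp-quit 0)"], check=True)
    return time.perf_counter() - start

times = [cold_start_seconds() for _ in range(15)]
print(sum(times[3:]) / len(times[3:]))              # first three results discarded
```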

(1-2) AppTimer: GIMP 2.10.18

 

Science

In this version of our test suite, all the science-focused tests that aren’t ‘simulation’ work are now in our science section. This includes Brownian motion, calculating digits of Pi, molecular dynamics, and, for the first time, an artificial intelligence benchmark we’re trialing, covering both inference and training, which works under Windows using Python and TensorFlow. Where possible these benchmarks have been optimized with the latest vector instructions, except for the AI test - we were told that while it uses Intel’s Math Kernel Library (MKL), the library is optimized more for Linux than for Windows, so it gives an interesting result when unoptimized software is used.

3D Particle Movement v2.1: Non-AVX and AVX2/AVX512

This is the latest version of the benchmark designed to simulate semi-optimized scientific algorithms taken directly from my doctoral thesis. It involves randomly moving particles in a 3D space using a set of algorithms that define random movement. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and by decreasing the number of double->float->double recasts the compiler was adding in.

The initial version of v2.1 is a custom C++ binary of my own code, and flags are in place to allow for multiple loops of the code with a custom benchmark length. By default this version runs six times and outputs the average score to the console, which we capture with a redirection operator that writes to file.

For v2.1, we also have a fully optimized AVX2/AVX512 version, which uses intrinsics to get the best performance out of the software. This was done by a former Intel AVX-512 engineer who now works elsewhere. According to Jim Keller, there are only a couple dozen or so people who understand how to extract the best performance out of a CPU, and this guy is one of them. To keep things honest, AMD also has a copy of the code, but has not proposed any changes.

The 3DPM test is set to output millions of movements per second, rather than time to complete a fixed number of movements.
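The benchmark’s C++ source isn’t public, but the core idea is simple enough to sketch. The NumPy rendition below is purely illustrative - the particle count, step rule, and random generator are stand-ins rather than the code we actually run - and shows how a movements-per-second figure falls out:

```python
# Simplified, illustrative take on the 3DPM idea: move particles in random
# directions each step and report millions of movements per second.
import time
import numpy as np

def particle_benchmark(n_particles=100_000, steps=1_000, seed=42):
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 3))
    start = time.perf_counter()
    for _ in range(steps):
        # Random unit direction per particle (normalized Gaussian vector)
        step = rng.normal(size=(n_particles, 3))
        step /= np.linalg.norm(step, axis=1, keepdims=True)
        pos += step
    elapsed = time.perf_counter() - start
    return n_particles * steps / elapsed / 1e6

print(f"{particle_benchmark():.1f} M movements/s")
```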

(2-1) 3D Particle Movement v2.1 (non-AVX)

(2-2) 3D Particle Movement v2.1 (Peak AVX)

 

y-Cruncher 0.78.9506: www.numberworld.org/y-cruncher

If you ask anyone what sort of computer holds the world record for calculating the most digits of Pi, I can guarantee that a good portion of the answers would point to some colossal supercomputer built into a mountain by a super-villain. Fortunately, nothing could be further from the truth - the machine that holds the record is a quad-socket Ivy Bridge server with 300 TB of storage. The software used to set that record was y-cruncher.

Built by Alex Yee over the better part of a decade and then some, y-cruncher is the software of choice for calculating billions and trillions of digits of the most popular mathematical constants. The software has held the world record for Pi since August 2010, and has broken that record a total of 7 times since. It also holds records for e, the Golden Ratio, and others. According to Alex, the program is around 500,000 lines of code, and he maintains multiple binaries, each optimized for a different family of processors - Zen, Ice Lake, Skylake, all the way back to Nehalem - using the latest SSE/AVX2/AVX512 instructions where they fit in, and further tuned for how each core is built.

For our purposes, we’re calculating Pi, as it is more compute bound than memory bound. In multithreaded mode we go for 2.5 billion digits. That 2.5 billion digit value requires ~12 GB of DRAM, and so is limited to systems with at least 16 GB.
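y-cruncher’s internals go far beyond a snippet, but the Chudnovsky series it uses for Pi gives a flavor of why the workload is compute bound. Here is a toy Decimal-based version; a real record run replaces this with binary splitting and heavily vectorized big-integer arithmetic:

```python
# Toy Chudnovsky-series Pi calculator. y-cruncher uses this same series,
# but with far more sophisticated big-number machinery; this naive version
# only illustrates the shape of the computation.
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    getcontext().prec = digits + 10
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):    # each term adds ~14 digits
        M = M * (K**3 - 16 * K) // i**3     # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[:digits + 2]

print(chudnovsky_pi(100))   # 3.14159265358979...
```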

(2-4) yCruncher 0.78.9506 MT (2.5b Pi)

 

NAMD 2.13 (ApoA1): Molecular Dynamics

One of the popular science fields is modeling the dynamics of proteins. By looking at how the energy of active sites within a large protein structure changes over time, scientists can calculate the activation energies required for potential interactions. This becomes very important in drug discovery. Molecular dynamics also plays a large role in protein folding, in understanding what happens when proteins misfold, and in what can be done to prevent it. Two of the most popular molecular dynamics packages in use today are NAMD and GROMACS.

NAMD, or Nanoscale Molecular Dynamics, has already been used in extensive coronavirus research on the Frontera supercomputer. Typical simulations using the package are measured in how many nanoseconds per day can be calculated with the given hardware, and the ApoA1 protein (92,224 atoms) has been the standard model for molecular dynamics simulation.

Luckily the compute can home in on a typical ‘nanoseconds-per-day’ rate after only 60 seconds of simulation; however, we stretch that out to 10 minutes to get a more sustained value, as by that time most turbo limits should have expired. The simulation itself works with 2 femtosecond timesteps. We use version 2.13 as this was the recommended version at the time we integrated the benchmark into our suite. The latest nightly builds we’re aware of have started to enable AVX-512 support, but for consistency in our benchmark suite we are sticking with 2.13. Other software we test with does have AVX-512 acceleration.
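In practice the test boils down to launching namd2 against the ApoA1 configuration and scraping the performance figure from its log output. A rough sketch of that harness logic, with the thread count and paths as placeholders:

```python
# Rough sketch: run namd2 on the ApoA1 benchmark and convert the reported
# 'days/ns' figure into nanoseconds per day. Thread count and paths are
# placeholders for whatever system is under test.
import re
import subprocess

proc = subprocess.run(
    ["namd2", "+p16", "apoa1/apoa1.namd"],   # +pN sets the worker thread count
    capture_output=True, text=True, check=True,
)

# NAMD periodically prints lines such as:
#   Info: Benchmark time: 16 CPUs 0.08 s/step 0.9 days/ns ...
days_per_ns = [float(m) for m in re.findall(r"([\d.]+) days/ns", proc.stdout)]
if days_per_ns:
    print("ns/day: %.3f" % (1.0 / days_per_ns[-1]))
```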

(2-5) NAMD ApoA1 Simulation

AI Benchmark 0.1.2 using TensorFlow: Link

Finding an appropriate artificial intelligence benchmark for Windows has been a holy grail of mine for quite a while. The problem is that AI is such a fast-moving, fast-paced world that whatever I measure this quarter may no longer be relevant in the next, and one of the key requirements of this benchmark suite is being able to keep data over a long period of time. We’ve had AI benchmarks on smartphones for a while, given that smartphones are a good target for AI workloads, but on the PC almost everything is geared towards Linux.

Thankfully, the good folks over at ETH Zurich in Switzerland have converted their smartphone AI benchmark into something that’s usable on Windows. It uses TensorFlow, and for our benchmark purposes we’ve locked our testing down to TensorFlow 2.10 and AI Benchmark 0.1.2, using Python 3.7.6.

The benchmark runs through 19 different networks, including MobileNet-V2, ResNet-V2, VGG-19 Super-Res, NVIDIA-SPADE, PSPNet, DeepLab, Pixel-RNN, and GNMT-Translation. All the tests probe both inference and training at various input and batch sizes, except the translation test, which only does inference. It measures the time taken to do a given amount of work, and spits out a value at the end.
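Driving the benchmark itself is refreshingly simple: the ai-benchmark Python package wraps all 19 networks behind a small API. A minimal sketch, with score attribute names as documented for version 0.1.2:

```python
# Minimal sketch of running AI Benchmark 0.1.2 under Python/TensorFlow.
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()   # runs inference and training across all networks

print("Inference score:", results.inference_score)
print("Training score: ", results.training_score)
print("Overall AI score:", results.ai_score)
```

The package also documents run_inference() and run_training() for anyone who wants the two halves separately.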

There is one big caveat to all of this, however. Speaking with the folks over at ETH, they use Intel’s Math Kernel Library (MKL) on Windows, and they’re seeing some incredible drawbacks. I was told that MKL for Windows doesn’t play well with multiple threads, and as a result any Windows results are going to be a lot worse than Linux results. On top of that, after a given number of threads (~16), MKL kind of gives up and performance drops off quite substantially.

So why test it at all? Firstly, because we need an AI benchmark, and a bad one is still better than not having one at all. Secondly, if MKL on Windows is the problem, then by publicizing the test, it might just put a boot somewhere for MKL to get fixed. To that end, we’ll stay with the benchmark as long as it remains feasible.

(2-6) AI Benchmark 0.1.2 Total

 

Comments

  • halcyon - Tuesday, November 10, 2020 - link

    1. Ryzen 9 5xxx series dominate most gaming benchmarks in CPU-bound games up to 720p.
    2. However, at 1440p/4K Intel, esp. the 10850K, pulls ahead.

    Can somebody explain this anomaly? As games become more GPU-bound at higher resolutions, why does Intel pull ahead (with worse single/multi-thread CPU perf)? Is it a bandwidth/latency issue? If so, where exactly (RAM? L3? somewhere else)? Can't be PCIe, can it?
  • feka1ity - Saturday, November 14, 2020 - link

    RAM. anandtech uses shitty ram for intel systems
  • Makste - Monday, November 16, 2020 - link

    I think the game optimizations for Intel processors become clear at those resolutions. AMD has been a non-factor in gaming for so long, and these games have been developed on and mostly optimised to work better on Intel machines.
  • Silma - Wednesday, November 11, 2020 - link

    At 4K, the 3700X beats the 5600X quite often.
  • Samus - Friday, November 13, 2020 - link

    Considering Intel just released a new generation of CPUs, it's astonishing that, at their current generation-over-generation IPC trajectory, it will take them two more generations to surpass Zen 3. That's almost 2 years.

    Wow.
  • ssshenoy - Tuesday, December 15, 2020 - link

    I don't think this article compares the latest generation from Intel - the Willow Cove core in Tiger Lake, which launched only for notebooks. The comparison here seems to be with the ancient Skylake generation on 14 nm.
  • abufrejoval - Friday, November 13, 2020 - link

    Got my Ryzen 7 5800X on a new Aorus X570 mainboard and finally working, too.

    It turbos to 4850MHz without any overclocking, so I'd hazard the 150MHz "bonus" is pretty much the default across the line.

    At the wall plug, 210 Watts was the biggest load I observed for pure CPU loads, with HWiNFO never reporting anything in excess of 120 Watts for the CPU from internal sensors.

    "finally working": I want ECC with this rig, because I am aiming for 64GB or even 128GB RAM and 24x7 operation. Ordered DDR4-3200 ECC modules from Kingston to go with the board. Those seem a little slow coming so I tried to make do with pilfering some DIMMs from other systems, that could be shut down for a moment. DDR4-2133 ECC and DDR4-2400 ECC modules where candidates, but wouldn't boot...

    Both were 2Rx4, dual-rank, nibble- rather than byte-organized modules, unbuffered and unregistered, but not the byte-organized DIMMs that the Gigabyte documentation seemed to prescribe... Asus, MSI and ASRock don't list such constraints, but I had to go with availability...

    I like to think of RAM as RAM, it may be slower or faster, but it shouldn't be tied to one specific system, right?

    So while I wait for the DDR4-3200 ECC 32GB modules to arrive, I got myself some DDR4-4000 1Rx8 (no ECC, 8GB) DIMMs to fill the gap. But would that X570 mainboard, which might have been lying on shelves for months, actually boot a Ryzen 5000?

    No, it wouldn't.

    But yes, it would update the BIOS via Q-Flash Plus-what-shall-we-call-it, and then, yes, it did indeed recognize both the CPU and those 1Rx8 DIMMs just fine after the update.

    I haven't yet tried those 2Rx4 modules again, because I am still exploring the bandwidth high-end, but I want to report just how impressed I am by the compatibility of the AM4 platform, fully aware that Zen 3 will be the last generation in this "sprint".

    I vividly remember how I had to get Skylake CPUs in order to get various mainboards ready for Kaby Lake...

    I have been using AMD x86 CPUs since the 80486DX4. I owned every iteration of the K6-II and K6-III, skipped all Slot-A variants, got back in with Socket A, 754, and 939, went single, quad, and hexa-core (Phenom II X4+X6), skipped Bulldozer, and did almost every APU, but between Kaveri and Zen 3 AMD simply wasn't compelling enough.

    I would have gotten a Ryzen 9 5950X if it had been available. But I count myself lucky for the moment to have snatched a Ryzen 7 5800X: it sure doesn't disappoint.

    A toast to AMD! You have done very well indeed, and you can count me impressed!

    Of course I'll nag about missing SVE/MKTME support the day after tomorrow, but in the meantime, please accept my gratitude.
  • feka1ity - Saturday, November 14, 2020 - link

    Interesting, my default 9700k with 1080ti does 225fps avg - Borderlands 3, 360p, very low settings and anantech testers poop 175fps avg with 10900k and 2080ti?!? And this favoritize amede products. Fake stuff, sorry.
  • Spunjji - Monday, November 16, 2020 - link

    "Fake stuff"

    Thanks for labelling your post
  • feka1ity - Monday, November 16, 2020 - link

    Fake stuff is not a label, it's an epicrisis. Go render stuff, spunji
