CPU System Tests

Our first set of tests covers general system performance. These tests are meant to emulate what people usually do on a system, like opening large files or processing small stacks of data. This is a bit different from our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

All of our benchmark results can also be found in our benchmark engine, Bench.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it took north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC and disable all of its update functionality. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times and the average taken. Results are in milliseconds.
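
For anyone curious about the mechanics, the sketch below shows the shape of the repeat-and-average loop. It is illustrative only: the executable name, file name, and the 'fully displayed' check are placeholders, not the actual harness.

    // Minimal sketch of the open-time loop (not the actual harness).
    // Paths and the readiness check are placeholders for illustration.
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <numeric>
    #include <vector>

    static bool pdf_fully_displayed() {
        // Placeholder: the real test waits until the page is rendered and
        // Adobe Reader accepts input again (e.g. by polling the UI).
        return true;
    }

    int main() {
        std::vector<double> times_ms;
        for (int run = 0; run < 10; ++run) {                      // ten repeats
            auto start = std::chrono::steady_clock::now();
            std::system("start \"\" AcroRd32.exe monster.pdf");   // non-blocking launch via cmd
            while (!pdf_fully_displayed()) { /* poll */ }
            auto stop = std::chrono::steady_clock::now();
            times_ms.push_back(
                std::chrono::duration<double, std::milli>(stop - start).count());
            // Reader is closed again between runs (omitted here).
        }
        double avg = std::accumulate(times_ms.begin(), times_ms.end(), 0.0)
                     / times_ms.size();
        std::printf("Average open time: %.0f ms\n", avg);
        return 0;
    }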

System: PDF Opening with Adobe Reader DC

There's not much between the Threadripper CPUs here, but frequency wins the day.

FCAT Processing: link

One of the more interesting workloads to cross our desks in recent quarters is FCAT - the tool we use to measure and visually analyze stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay onto a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded; however, because the video is basically in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.
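
For a rough idea of what the analysis pass involves, the sketch below walks a capture frame by frame and measures how many scan lines each overlay color occupies - short or repeated runs are what flag runt and dropped frames. This is a conceptual illustration using OpenCV, not the real FCAT extractor; the file name and overlay position are assumptions.

    // Simplified illustration of an FCAT-style extraction pass (not the real tool).
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Hypothetical file name for the ~21 GB raw capture.
        cv::VideoCapture cap("rotr_1440p_gtx980ti.avi");
        if (!cap.isOpened()) return 1;

        cv::Mat frame;
        std::vector<int> colour_runs;                        // scan lines per overlay colour
        while (cap.read(frame)) {                            // sequential, single-threaded read
            cv::Vec3b prev = frame.at<cv::Vec3b>(0, 0);
            int run = 1;
            for (int row = 1; row < frame.rows; ++row) {
                cv::Vec3b c = frame.at<cv::Vec3b>(row, 0);   // overlay bar assumed in column 0
                if (c[0] == prev[0] && c[1] == prev[1] && c[2] == prev[2]) {
                    ++run;
                } else {
                    colour_runs.push_back(run);              // very short runs indicate runt frames
                    prev = c;
                    run = 1;
                }
            }
            colour_runs.push_back(run);
        }
        std::cout << "overlay colour segments: " << colour_runs.size() << "\n";
        return 0;
    }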

System: FCAT Processing ROTR 1440p GTX980Ti Data

Similar to PDF opening, single-threaded performance wins out.

Dolphin Benchmark: link

Many emulators are bound by single-threaded CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin's CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

System: Dolphin 5.0 Render Test

Dolphin likes single-threaded performance as well, although interpreting this graph is giving me somewhat of a headache. Game Mode seems to give a small improvement here.

3D Movement Algorithm Test v2.1: link

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctorate thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and by decreasing the number of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.
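
To illustrate the kind of change involved (the particle struct and function names here are hypothetical, not the actual 3DPM source):

    #include <cmath>

    // Hypothetical particle type for illustration; not the real 3DPM structs.
    struct Particle {
        double x, y, z;
    };

    // v2.0-style: the struct is copied in and out on every call, and mixing
    // float math functions with double fields adds double -> float -> double
    // conversions on every step.
    Particle step_v20(Particle p, double theta, double phi) {
        p.x += sinf(theta) * cosf(phi);   // double args demoted to float, result promoted back
        p.y += sinf(theta) * sinf(phi);
        p.z += cosf(theta);
        return p;                         // another copy on return
    }

    // v2.1-style: pass by reference and stay in double precision throughout,
    // so the compiler emits no extra copies or recasts.
    void step_v21(Particle& p, double theta, double phi) {
        p.x += std::sin(theta) * std::cos(phi);
        p.y += std::sin(theta) * std::sin(phi);
        p.z += std::cos(theta);
    }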

System: 3D Particle Movement v2.1

Our first pure multithreaded test, and the 1950X wins with its 32 threads. The 1920X beats the 1950X in Game Mode, as its 24 threads beat the 16 threads left enabled. The 1800X edges out the 1950X-GM due to frequency.

DigiCortex v1.20: link

Despite being a couple of years old, the DigiCortex software remains a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k-neuron/1.8B-synapse simulation. The results are given as a fraction of real-time, so anything above a value of one means the system is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence tests DRAM and bus speed; however, we take the firing mode, which adds CPU work with every firing.
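
Read literally, that score boils down to simulated time divided by wall-clock time; a trivial illustration with made-up numbers:

    #include <cstdio>

    int main() {
        // Illustration only: the values are made up to show how the score reads.
        double simulated_seconds  = 1.00;   // span of neural activity simulated
        double wall_clock_seconds = 1.25;   // time the host took to compute it
        double realtime_factor = simulated_seconds / wall_clock_seconds;   // 0.80x
        std::printf("real-time factor: %.2fx (>= 1.0 means real-time capable)\n",
                    realtime_factor);
        return 0;
    }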

Unfortunately we had issues with the 1920X posting a result.

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex requires a mix of CPU frequency and DRAM performance to get a good result, although the 1950X regresses the result in any mode, even Game Mode, suggesting it is more sensitive to overall DRAM latency.

Agisoft Photoscan 1.0: link

Photoscan stays in our benchmark suite from the previous version; however, we are now running it on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model - so the more detailed the images, and the more of them you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, along with some cache/memory dependency as well. For the more variably threaded parts of the workload, features such as Speed Shift and XFR are able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

System: Agisoft Photoscan 1.0 Total Time

The variable threaded nature of Agisoft shows that in our workflow it takes a mix of cores, IPC, and frequency to win. The quad-channel memory and lower crosstalk of the 1950X in Game Mode seem to give it a marginal improvement over the standard 1950X.

Comments

  • peevee - Friday, August 18, 2017 - link

    Compilation scales even on multi-CPU machines. With much higher communication latencies.
    In general, compilers running in parallel on MSVC (with MSBuild) run in different processes, they don't write into each other's address spaces and so do not need to communicate at all.

    Quit making excuses. You are doing something wrong. I am doing development for multi-CPU machines and ON multi-CPU machines for a very long time. YOU are doing something wrong.
  • peevee - Friday, August 18, 2017 - link

    BTW, when you enable NUMA on TR, does Windows 10 recognize it as one CPU group or 2?
  • gzunk - Saturday, August 19, 2017 - link

    It recognizes it as two NUMA nodes.
  • Alexey291 - Saturday, September 2, 2017 - link

    They aren't going to do anything.

    All their 'scientific benchmarking' is running the same macro again and again on different hardware setups.

    What you are suggesting requires actual work and thought.
  • Arbie - Thursday, August 17, 2017 - link

    As noted by edzieba, the correct phrase (and I'm sure it has a very British heritage) is "The proof of the pudding is in the eating".

    Another phrase needing repair: "multithreaded tests were almost halved to the 1950X". Was this meant to be something like "multithreaded tests were almost half of those in Creator mode" (?).

    Technically, of course, your articles are really well-done; thanks for all of them.
  • fanofanand - Thursday, August 17, 2017 - link

    Thank you for listening to the readers and re-testing this, Ian!
  • ddriver - Thursday, August 17, 2017 - link

    To sum it up - "game mode" is moronic. It is moronic for amd to push it, and to push TR as a gaming platform, which is clearly neither its peak, nor even its strong point. It is even more moronic for people to spend more than double the money just to have half of the CPU disabled, and still get worse performance than a ryzen chip.

    TR is great for prosumers, and represents a tremendous value and performance at a whole new level of affordability. It will do for games if you are a prosumer who occasionally games, but if you are a gamer it makes zero sense. Having AMD push it as a gaming platform only gives "people" the excuse to whine how bad it is at it.

    Also, I cannot shake the feeling there should be a better way to limit scheduling to half the chip for games without having to disable the rest, so it is still usable to the rest of the system.
  • Gothmoth - Thursday, August 17, 2017 - link

    first coders should do their job.. that is the main problem today. lazy and uncompetent coders.
  • eriohl - Thursday, August 17, 2017 - link

    Of course you could limit thread scheduling on software level. But it seems to me that there is a perfectly reasonable explanation why Microsoft and the game developers haven't been spending much time optimizing for running games on systems with NUMA.
  • HomeworldFound - Thursday, August 17, 2017 - link

    You can't call a coder that doesn't anticipate a 16 core 32 thread CPU lazy. The word is incompetent btw. I'd like to see you make a game worth millions of dollars and account for this processor, heck any processor with more than six cores.
