Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. This set of tests is meant to emulate what people usually do on a system, such as opening large files or processing small stacks of data. This differs from our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it was taking north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC, and disable all the update functionality within. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times, and the average time is reported. Results are in milliseconds.

System: PDF Opening with Adobe Reader DC
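The open-and-average methodology above can be sketched as a small timing harness. This is a minimal sketch, not our actual test script: the callable passed in is a placeholder, and a real run would need to launch Adobe Reader DC and detect when the document is rendered and interactive, not just when the process starts.

```python
import time

def average_open_time_ms(open_document, runs=10):
    """Time a document-open callable `runs` times; return the mean in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        open_document()  # placeholder: launch Reader, wait until the PDF is interactive
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Hypothetical stand-in for the real open-the-obnoxious-PDF step.
    print(average_open_time_ms(lambda: time.sleep(0.01)))
```

The averaging over ten runs smooths out run-to-run variance from disk caching and background tasks.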

FCAT Processing

One of the more interesting workloads that has crossed our desks in recent quarters is FCAT, the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay on a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded; however, because the video is essentially in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.

System: FCAT Processing ROTR 1440p GTX1080 Data
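The clip size and length quoted above imply a hefty sustained data rate for the analysis pass; a quick back-of-the-envelope check, using only the figures from the text:

```python
clip_seconds = 90
clip_bytes = 21 * 1024**3  # ~21 GB of near-raw 1440p capture

# Sustained rate the mostly single-threaded analysis pass must stream.
rate_mb_s = clip_bytes / clip_seconds / 1024**2
print(f"{rate_mb_s:.0f} MB/s")  # → 239 MB/s
```

This is why storage and memory bandwidth matter here as much as raw CPU speed.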

3D Particle Movement v2.1 

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctoral thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and by reducing the number of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.

System: 3D Particle Movement v2.1
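A minimal sketch of the kind of kernel 3DPM exercises: a random-walk step applied to a set of 3D particles, mutated in place (the Python analogue of passing the particle structs by reference rather than copying them), with every value kept in double precision so no float round-trips creep in. The particle and step counts here are arbitrary, not the benchmark's real parameters.

```python
import random

def move_particles(particles, steps=100):
    """Advance each [x, y, z] particle by a unit random walk, in place."""
    for _ in range(steps):
        for p in particles:  # p is a reference into the list: no struct copies
            p[0] += random.uniform(-1.0, 1.0)  # uniform() returns a double,
            p[1] += random.uniform(-1.0, 1.0)  # so precision stays consistent
            p[2] += random.uniform(-1.0, 1.0)
    return particles

swarm = [[0.0, 0.0, 0.0] for _ in range(1000)]
move_particles(swarm, steps=10)
```

Mutating in place rather than rebuilding the particle list each step mirrors the by-reference change that drove the v2.0 to v2.1 speed-up.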

DigiCortex 1.16

Despite being a couple of years old, the DigiCortex software remains a pet project for visualizing neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k-neuron/1.8B-synapse simulation. The results are given as a fraction of real-time, so anything above a value of one means the system is suitable for real-time work. The benchmark offers a 'no firing synapses' mode, which in essence tests DRAM and bus speed; however, we take the firing mode, which adds CPU work with every firing.

System: DigiCortex 1.16 (32k Neuron, 1.8B Synapse)
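The real-time fraction described above is simply simulated time divided by the wall-clock time it took to compute. A hypothetical helper (the function name is ours, not DigiCortex's):

```python
def realtime_fraction(simulated_seconds, wall_seconds):
    """DigiCortex-style score: >= 1.0 means the run kept pace with real time."""
    return simulated_seconds / wall_seconds

print(realtime_fraction(10.0, 8.0))   # → 1.25 (faster than real time)
print(realtime_fraction(10.0, 40.0))  # → 0.25 (a quarter of real-time speed)
```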

Agisoft Photoscan 1.0

Photoscan stays in our benchmark suite from the previous version; however, we are now running on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model, so the more detailed the images, and the more of them you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, with some cache/memory dependency in there as well. For the more variably threaded workloads, features such as Speed Shift and XFR will be able to take advantage of CPU stalls or downtime, giving sizeable speed-ups on newer microarchitectures.

System: Agisoft Photoscan 1.0 Stage 1

System: Agisoft Photoscan 1.0 Stage 2

System: Agisoft Photoscan 1.0 Stage 3

System: Agisoft Photoscan 1.0 Stage 4

System: Agisoft Photoscan 1.0 Total Time
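Since the results above report each of the four stages plus a total, the harness can be sketched as timing a sequence of stage callables. The stage functions here are empty placeholders for Photoscan's actual alignment, point-cloud, mesh, and texture passes.

```python
import time

def time_stages(stages):
    """Run pipeline stages in order; return ({name: seconds}, total_seconds)."""
    per_stage = {}
    for name, stage in stages:
        start = time.perf_counter()
        stage()
        per_stage[name] = time.perf_counter() - start
    return per_stage, sum(per_stage.values())

# Placeholders standing in for the four Photoscan stages.
pipeline = [(f"stage {i}", lambda: None) for i in range(1, 5)]
timings, total = time_stages(pipeline)
```

Reporting per-stage times is what exposes the single-threaded versus multi-threaded split between stages.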


574 Comments


  • mapesdhs - Sunday, March 5, 2017 - link

    Yet another example of manipulation which wouldn't be tolerated in other areas of commercial product. I keep coming across examples in the tech world where products are deliberately crippled, prices get hiked, etc., but because it's tech stuff, nobody cares. Media never mentions it.

    Last week I asked a seller site about why a particular 32GB 3200MHz DDR4 kit they had listed (awaiting an ETA) was so much cheaper than the official kits for Ryzen (same brand of RAM please note). Overnight, the seller site changed the ETA to next week but also increased the price by a whopping 80%, making it completely irrelevant. I've seen this happen three times with different products in the last 2 weeks.

    Ian.
  • HomeworldFound - Sunday, March 5, 2017 - link

    If they were pretty cheap then use your logic, placeholder prices happen. If they had no ETA the chances is that they had no prices. I don't see a shortage of decent DDR4 so it definitely isn't a supply and demand problem. Perhaps you need to talk to the manufacturer to get their guideline prices.
  • HomeworldFound - Sunday, March 5, 2017 - link

    Not really. If developers wanted to enhance AMD platforms, or it was actually worth it they'd have done it by now. It's now just an excuse to explain either underperformance or an inability to work with the industry.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    @ sedra

    It certainly should not be forgotten, that is for sure.
  • Rene23 - Monday, March 6, 2017 - link

    yet people here mentioned multiple times "settled in 2009"; pretending it is not happening anymore, sick :-/
  • GeoffreyA - Monday, March 6, 2017 - link

    I kind of vaguely knew that benchmarks were often unfairly optimised for Intel CPUs; but I never knew this detailed information before, and from such a reputable source: Agner Fog. I know that he's an authority on CPU microarchitectures and things like that. Intel is evil. Even now with Ryzen, it seems the whole software ecosystem is somewhat suboptimal on it, because of software being tuned over the last decade for the Core microarchitecture. Yet, despite all that, Ryzen is still smashing Intel in many of the benchmarks.
  • Outlander_04 - Monday, March 6, 2017 - link

    Settled in 2009 .
    Not relevant to optimisation for Ryzen in any way
  • Rene23 - Monday, March 6, 2017 - link

    settled in 2009 does not mean their current compiler and libraries are not doing it anymore, e.g. it could simply not run the best SSE/AVX code path disguised as simply not matching new AMD cpus properly.
  • cocochanel - Saturday, March 4, 2017 - link

    One thing that is not being mentioned by many is the increase in savings when you buy a CPU + mobo. Intel knows how to milk the consumer. On their 6-8 core flagships, a mobo with a top chipset will set you back 300-400 $ or even more. That's a lot for a mobo. Add the overpriced CPU. I expect AMD mobos to offer better value. Historically, they always did.
    On top of that, a VEGA GPU will probably be a better match for Ryzen than an Nvidia card, but I say probably and not certainly.
    If I were to replace my aging gaming rig for Christmas, this would be my first choice.
  • mapesdhs - Sunday, March 5, 2017 - link

    Bang goes the saving when one asks about a RAM kit awaiting an ETA and the seller hikes the price by 80% overnight (see my comment above).
