Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. This set of tests is meant to emulate more of what people usually do on a system, like opening large files or processing small stacks of data. This is a bit different to our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it took north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC and disable all of its update functionality. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times and the average time is reported. Results are in milliseconds.
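For readers who want to approximate this at home, the sketch below shows one way such a timing harness could be put together on Windows: launch Reader, wait until its UI will accept input, and record the elapsed time. The Reader path and PDF location are placeholders, WaitForInputIdle only approximates 'fully displayed', and this is not the script used to generate our numbers.

    // Minimal sketch of a PDF-open timing harness on Windows (not our actual test script).
    // Assumes Adobe Reader DC at the path below; adjust for the local install and test PDF.
    #include <windows.h>
    #include <chrono>
    #include <iostream>
    #include <string>

    int main() {
        std::string cmd = "\"C:\\Program Files (x86)\\Adobe\\Acrobat Reader DC\\Reader\\AcroRd32.exe\" "
                          "\"C:\\bench\\monster.pdf\"";   // placeholder paths

        const int runs = 10;
        double totalMs = 0.0;
        for (int i = 0; i < runs; ++i) {
            STARTUPINFOA si{};  si.cb = sizeof(si);
            PROCESS_INFORMATION pi{};

            auto t0 = std::chrono::steady_clock::now();
            if (!CreateProcessA(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                                0, nullptr, nullptr, &si, &pi)) {
                std::cerr << "failed to launch Reader\n";
                return 1;
            }
            // Blocks until the process is idle and ready for user input
            // (an approximation of "fully displayed").
            WaitForInputIdle(pi.hProcess, 60000);
            double ms = std::chrono::duration<double, std::milli>(
                            std::chrono::steady_clock::now() - t0).count();
            totalMs += ms;
            std::cout << "run " << i + 1 << ": " << ms << " ms\n";

            TerminateProcess(pi.hProcess, 0);   // close Reader between runs
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            Sleep(2000);                        // let the system settle
        }
        std::cout << "average: " << totalMs / runs << " ms\n";
    }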

System: PDF Opening with Adobe Reader DC

FCAT Processing

One of the more interesting workloads to cross our desks in recent quarters is FCAT - the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay in a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded; however, because the video is essentially in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in at around 21 GB, and measure the time it takes to process through the visual analysis tool.
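The analysis step itself boils down to walking the captured frames and counting how many scanlines each overlay colour occupies, flagging colours that appear for too few lines (runts) or not at all (drops). The toy example below illustrates that idea only; it is not FCAT's actual algorithm, and the 21-scanline runt threshold is an assumption.

    // Toy illustration of FCAT-style per-scanline colour analysis (not FCAT's real code).
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct FrameStats { int colourId; int scanlines; bool runt; };

    // 'capture' holds one overlay colour ID per scanline of the recorded video.
    // The 21-scanline runt threshold is an assumption, not FCAT's real criterion.
    std::vector<FrameStats> analyse(const std::vector<int>& capture, int runtThreshold = 21) {
        std::vector<FrameStats> out;
        for (std::size_t i = 0; i < capture.size();) {
            std::size_t j = i;
            while (j < capture.size() && capture[j] == capture[i]) ++j;   // run of one colour
            int lines = static_cast<int>(j - i);
            out.push_back(FrameStats{capture[i], lines, lines < runtThreshold});
            i = j;
        }
        return out;
    }

    int main() {
        // Toy capture: the frame carrying colour 2 only lasted 4 scanlines -> runt.
        std::vector<int> capture;
        for (int c : {0, 1, 2, 3}) {
            std::size_t lines = (c == 2) ? 4 : 60;
            capture.insert(capture.end(), lines, c);
        }
        for (const FrameStats& f : analyse(capture))
            std::cout << "colour " << f.colourId << ": " << f.scanlines
                      << " scanlines" << (f.runt ? " (runt)" : "") << "\n";
    }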

System: FCAT Processing ROTR 1440p GTX1080 Data

3D Particle Movement v2.1 

This is the latest version of our self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctoral thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and by reducing the number of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.
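The two changes are easiest to see in code. The hypothetical particle kernel below is not 3DPM's actual source; it simply contrasts a v2.0-style step, where the struct is copied and a float temporary forces a double->float->double round trip, with a v2.1-style step that works on a reference and stays in double throughout.

    // Hypothetical particle kernel illustrating the v2.0 -> v2.1 changes (not 3DPM's real code).
    #include <cmath>
    #include <vector>

    struct Particle { double x, y, z; };

    // v2.0-style: the particle is copied in and out, and the float temporary
    // forces a double -> float -> double conversion on every update.
    Particle stepByValue(Particle p, double dt) {
        float tmp = static_cast<float>(std::sin(p.y) * dt);  // narrows to float
        p.x += tmp;                                          // widens back to double
        return p;
    }

    // v2.1-style: pass by reference, keep everything in double.
    void stepByReference(Particle& p, double dt) {
        p.x += std::sin(p.y) * dt;
    }

    int main() {
        std::vector<Particle> particles(100000, Particle{0.1, 0.2, 0.3});
        for (Particle& p : particles)
            stepByReference(p, 0.01);   // no struct copies, no float/double recasts
    }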

System: 3D Particle Movement v2.1

DigiCortex 1.16

Despite being a couple of years old, the DigiCortex software is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k-neuron/1.8B-synapse simulation. The results are given as a multiple of real-time simulation speed, so anything above a value of one means the system is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence measures DRAM and bus speed; however, we take the firing mode, which adds CPU work with every firing.
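To make the metric concrete, the score is effectively the amount of simulated time covered divided by the wall-clock time the run took, so a value above 1.0 means the simulation keeps pace with real time. A minimal sketch of that calculation is below, with a sleep standing in for the solver; DigiCortex computes its score internally, so this is illustration only.

    // Illustration of a real-time-factor score (DigiCortex reports this internally).
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        const double simulatedSeconds = 1.0;   // biological time covered by the run

        auto start = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(800));  // stand-in for the solver
        double wallSeconds = std::chrono::duration<double>(clock::now() - start).count();

        // Score above 1.0 means the simulation keeps up with real time.
        std::cout << "real-time factor: " << simulatedSeconds / wallSeconds << "\n";
    }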

System: DigiCortex 1.16 (32k Neuron, 1.8B Synapse)

Agisoft Photoscan 1.0

Photoscan stays in our benchmark suite from the previous version; however, we are now running on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is to translate many 2D images into a 3D model - so the more detailed the images, and the more of them you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, along with some cache/memory dependency as well. For the more variably threaded workloads, features such as Speed Shift and XFR will be able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

System: Agisoft Photoscan 1.0 Stage 1

System: Agisoft Photoscan 1.0 Stage 2

System: Agisoft Photoscan 1.0 Stage 3

System: Agisoft Photoscan 1.0 Stage 4

System: Agisoft Photoscan 1.0 Total Time
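For reference, the total-time chart combines the four stage times above. The sketch below shows a generic way to time a staged pipeline like this; the stage names (align photos, dense cloud, mesh, texture) are our reading of the typical Photoscan workflow rather than an official mapping of Stages 1-4, and the sleeps merely stand in for the real work.

    // Generic per-stage timing harness (stage names and workloads are placeholders).
    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <utility>
    #include <vector>

    int main() {
        using clock = std::chrono::steady_clock;

        std::vector<std::pair<std::string, std::function<void()>>> stages = {
            {"Stage 1 (align photos)",      [] { std::this_thread::sleep_for(std::chrono::milliseconds(50)); }},
            {"Stage 2 (build dense cloud)", [] { std::this_thread::sleep_for(std::chrono::milliseconds(120)); }},
            {"Stage 3 (build mesh)",        [] { std::this_thread::sleep_for(std::chrono::milliseconds(80)); }},
            {"Stage 4 (build texture)",     [] { std::this_thread::sleep_for(std::chrono::milliseconds(30)); }},
        };

        double total = 0.0;
        for (auto& [name, run] : stages) {
            auto t0 = clock::now();
            run();                       // stand-in for the real Photoscan stage
            double s = std::chrono::duration<double>(clock::now() - t0).count();
            total += s;
            std::cout << name << ": " << s << " s\n";
        }
        std::cout << "Total: " << total << " s\n";
    }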

Comments

  • bobsta22 - Saturday, March 4, 2017 - link

    Office with 20 PCs - all developers - loads of VMs and containers.

    All the PCs are due a CPU/Gfx refresh, but ITX mobos required.

    Can't wait tbh. This is a game changer.
  • prisonerX - Saturday, March 4, 2017 - link

    What if they come out with a 16 core line next year!
  • bobsta22 - Saturday, March 4, 2017 - link

    What?
  • lilmoe - Tuesday, March 7, 2017 - link

    It really is. As a freelance developer, I can't wait.
  • ericgl21 - Saturday, March 4, 2017 - link

    For me, the more important thing to see from AMD is if they can come up with a chip that can beat the mobile Core i7-7820HQ (4c/8t no ECC) & the Xeon E3-1575M v5 (4c/8t with ECC), for less money.
    And the number of PCIe gen3 lanes is very important, especially with the rise of M.2 NVMe storage sticks.
  • cmagic - Sunday, March 5, 2017 - link

    Will anandtech review Ryzen in gaming? I would really like Anandtech view, since I don't really trust other sites especially those "entertainment" sites. Want to see how Anandtech dive into its main cause.
  • Tchamber - Sunday, March 5, 2017 - link

    @cmagic
    Page 15
    2017 GPU
    The bad news for our Ryzen review is that our new 2017 GPU testing stack is not yet complete. We received our Ryzen CPU samples on February 21st, and tested in the hotel at the event for 6hr before flying back to Europe.

    I just ordered my 1700X, I plan to keep it for at least 5 years, as my needs don't change much. My current Intel 6 core is coming up on 7 years old now. I like to buy high end and use it a long time.
  • Lazlo Panaflex - Monday, March 6, 2017 - link

    Same here...probably gonna grab a 1700 at some point and put this here i5-2500 non-k in the kids computer.
  • asH98 - Sunday, March 5, 2017 - link

    The BIG QUESTION is WHY are the HEDT benchmarks (professional ie Blender) fairer than gaming benchmarks??

    Bottom line is that CUTTING-EDGE CODING is happening NOW in AI, HPC, data, and AV/AR, game coders because of $$$ are the last to change or learn unless forced (great for NVidia Intel) so most of the game coding is stuck in yesteryear- Bethesda will be the test bed for game coders to move forward
    Hence the difference in game benchmarks vs 'professional' (HEDT) benchmarks. Game coders can get stuck using yesterday's code without repercussions and consequences as long as old hardware dominates and there are no incentives to change or learn new skills. The same can't happen in the Professional area where speed is tantamount to performance and $$$
  • TheJian - Sunday, March 5, 2017 - link

    I hope you're going to test a dozen games at 1080p where most of us run for article #2 and the GAMING article should come in a week not 1/2 year later like 1080/1070 gtx reviews...LOL. As this article just seems like AMD told you "guys, please don't run any games so we can sell some chips to suckers before they figure out games suck". And you listened. No point in testing 1440p or 4k for CPU, and 95% of us run 1920x1200 and BELOW so you should be testing your games there for a CPU test.

    The fact they are talking Zen2 instead of fixing Zen1 kind of makes me think most of the gaming is NOT going to be fixable.
    http://www.legitreviews.com/amd-ryzen-7-1800x-1700...
    149fps for 7700 in Thief vs. 108 for 1800x? JEEZ. GTA5 again, 163 to 138. Deus ex MD 127 to 103. These are massive losses to Intel's chip and Deus was clearly gpu bound as many of Intel's chips hit the same 127fps including my old 4790k :( OUCH AMD.

    https://www.guru3d.com/articles_pages/amd_ryzen_7_...
    Tombraider same 7700k vs. 1800x 132fps to 114 (never mind 6850 scoring 140fps). This will probably get worse as we move to 1080ti, vega, nvidia refresh for xmas, Volta, 10nm etc. If you were using a faster gpu the cpus will separate even more especially if people are mostly gaming at 1080p. Even if many move to 1440p, that maybe fixes some games (tombraider is one with 1080 regular that hits a wall at 90fps), but again goes back to major losses as we move to 10nm etc. We get 10nm chips for mobile now and gpus probably next year at 12nm (real? fake 12nm? Either way) and might squeak into 2017 (volta, TSMC). 10nm gpus will likely come 2018 at the latest. Those gpus will make 1440p look like 1080p today surely and cpus will again spread out (and no, we won't all be running 4k then...LOL). You could see cpus smaller than 10nm BEFORE you upgrade your cpu again if you buy this year. That could get pretty ugly if the benchmarks around the web for gaming are not going to improve. One more point you'll likely be looking at GDDR6 (16Gbps probably) for vid cards allowing them to possibly stretch their legs even more if needed. Again, all not good for a gamer here IMHO.

    “But Senior Engineer Mike Clark says he knows where the easy gains are for Zen 2, and they're already working through the list”

    So maybe no fix in sight for Zen1? Just excuses like "run higher res, and code right guys"...I hope that isn't the best they've got. I could go on about games, but most should get the point. I was going to buy ryzen purely for Handbrake, but I'll need to see motherboard improvements and at least some movement on gaming VERY soon.

    One more ouch statement from pcper.
    https://www.pcper.com/news/Processors/AMD-responds...
    "For buyers today that are gaming at 1080p, the situation is likely to remain as we have presented it going forward."
    So they don't think a fix is coming based on AMD info and as noted as gpus get much faster (along with their memory speeds) expect 1440p to look like today's 1080p benchmarks at least to some extent.

    The board part is of major interest to me, so I can wait a bit and also see Intel's response. So AMD has me hanging for a bit here, but not for too long. I do like the pro side of these though (handbrake especially, just not quite enough).
