Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. This set of tests is meant to emulate what people usually do on a system, such as opening large files or processing small stacks of data. This is a bit different from our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it was taking north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC, and disable all the update functionality within. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times, and the average time is taken. Results are in milliseconds.
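The measurement loop itself is simple to sketch. Below is a minimal, hypothetical harness in Python (not the actual tool): it times an arbitrary open action over several runs and averages the result in milliseconds. The `open_fn` callable stands in for launching the viewer and blocking until it is responsive.

```python
import statistics
import time

def time_open(open_fn, runs=10):
    """Time `open_fn` over several runs; return the mean wall time in ms.

    `open_fn` is a placeholder for launching the viewer and blocking
    until the document is displayed and responsive.
    """
    samples_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        open_fn()
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples_ms)
```

Detecting "the user can take control again" is the tricky part in practice; the real test watches for the application to start accepting input, not merely for the process to launch.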

System: PDF Opening with Adobe Reader DC

The extra frequency of the new processors is helping when it comes to opening our monster PDF, but also the extra L2 cache is likely having an effect as well.

FCAT Processing

One of the more interesting workloads that has crossed our desks in recent quarters is FCAT - the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay onto a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded, however because the video is basically in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.
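Because the capture is near-raw video, sustained data throughput matters alongside single-thread speed. As a rough illustration (not part of the FCAT tool itself), here is the rate a single analysis thread would need to sustain to chew through our ~21 GB, 90-second capture in clip-length time:

```python
def required_throughput_mb_s(file_bytes, clip_seconds):
    """MB/s a single thread must sustain to parse the capture in real time."""
    return file_bytes / (1024 ** 2) / clip_seconds

# A ~21 GB capture of a 90-second clip works out to roughly 239 MB/s.
```

In practice the analysis takes longer than the clip itself, which is exactly what the benchmark measures, but the figure shows why "moving a lot of data around" dominates the workload.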

System: FCAT Processing ROTR 1440p GTX1080 Data

FCAT takes in a frame, processes it and dumps it, all on a single thread. The quicker you get through the workload the better, and frequency is supreme, hence we get the 7820X followed by the 7800X and then the 7900X. Even though the 7900X has the higher turbo here, the results are within the margin expected.

3D Particle Movement v2.1

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctorate thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and decreasing the amount of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.
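To give a flavour of the kind of update 3DPM performs, here is a hypothetical Python sketch (the benchmark itself is compiled code, and these helper names are mine): each particle takes a fixed-length step in a random 3D direction, and the update mutates each particle in place, the analogue of v2.1's pass-by-reference change rather than copying structs per call.

```python
import math
import random

def step_in_place(particles, step=1.0, rng=random):
    """Move each particle one `step` in a random 3D direction, in place.

    Each particle is a [x, y, z] list (a stand-in for the C struct).
    Mutating in place avoids the per-call copies that v2.0 incurred
    by passing structs by value.
    """
    for p in particles:
        cos_t = rng.uniform(-1.0, 1.0)          # uniform direction on the sphere
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        p[0] += step * sin_t * math.cos(phi)
        p[1] += step * sin_t * math.sin(phi)
        p[2] += step * cos_t
```

The other v2.1 fix, keeping values in double throughout rather than bouncing through float, has no direct Python analogue, but the principle is the same: avoid needless conversions and copies in the hot loop.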

System: 3D Particle Movement v2.1

Give 3DPMv2.1 some cores, and it will show you the world / some numbers. The 1800X and 6950X were gunning for top spot, but the extra frequency of the 7900X wins here.

DigiCortex 1.20

Despite being a couple of years old, the DigiCortex software is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation. The results on the output are given as a fraction of whether the system can simulate in real-time, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence detects DRAM and bus speed, however we take the firing mode which adds CPU work with every firing.
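For readers interpreting the chart, the score is simply the ratio of simulated time to wall-clock time; a minimal sketch (hypothetical helper name, not DigiCortex's API):

```python
def realtime_factor(simulated_seconds, wall_seconds):
    """DigiCortex-style score: simulated time over wall-clock time.

    A value above 1.0 means the system simulates faster than real time.
    """
    return simulated_seconds / wall_seconds
```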

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex loves a bit of memory, although when speaking with the developer, there can be some instances where the beast needs to be fed. Losing the inclusive L3 might be a factor here, especially with the 7800X all the way down.

Agisoft Photoscan 1.0

Photoscan stays in our benchmark suite from the previous version, however now we are running on Windows 10 so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model - so the more detailed the images, and the more you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, along with some cache/memory dependency in there as well. For some of the more variably threaded workloads, features such as Speed Shift and XFR will be able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.
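The effect of mixed threading can be approximated with a toy Amdahl's-law model (an illustration with made-up stage numbers, not Photoscan's actual profile): the serial portion of each stage is unaffected by core count, while the parallel portion scales with it.

```python
def total_time(stages, cores):
    """Toy estimate of total runtime for a mixed pipeline (Amdahl's law).

    `stages` is a list of (seconds_on_one_core, parallel_fraction) pairs;
    the parallel fraction of each stage scales with core count,
    the serial remainder does not.
    """
    return sum(t * ((1.0 - f) + f / cores) for t, f in stages)

# Example: one fully serial 100 s stage plus one fully parallel 100 s stage.
# On 4 cores the parallel stage drops to 25 s, so the total is 125 s.
```

This is why extra cores show diminishing returns here, and why single-thread frequency still matters for the serial stages.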

System: Agisoft Photoscan 1.0 Total Time

Photoscan is a mixed bag of operations, going through single-threaded sections to multi-threaded ones and a range of cache/memory bandwidth requirements. There's not much difference between the 10-core and the 8-core, but the frequency helps against Broadwell-E.


264 Comments


  • rascalion - Monday, June 19, 2017 - link

    Are the Ryzen numbers in the charts retests using the last round of bios and software updates?
  • Ian Cutress - Monday, June 19, 2017 - link

    As much as possible, the latest BIOSes are used.
    Our CPU testing suite is locked in for software versions as of March 2017. This is because testing 30/50/100+ CPUs can't be done overnight, we have to have rigid points where versions are locked in. My cycle is usually 12-18 months. (Note I'm only one person doing all this data.)
  • FreckledTrout - Monday, June 19, 2017 - link

    Ian, any chance once there are a few BIOS tweaks you could do a mini updated review on the 7820X vs the Ryzen 1800X, with Ryzen having the latest BIOS as well plus 3200 MHz memory? I'm just curious how the 8-core chips line up when some of the dust settles, and I think a lot of people will be.
  • Ian Cutress - Monday, June 19, 2017 - link

    Any reason why 3200? I'll have Intel people saying it is pushing the Ryzen out of spec
  • jjj - Monday, June 19, 2017 - link

    You could do a memory subsystem scaling review for all platforms: Skylake-X, Threadripper, Ryzen (Summit Ridge) and Coffee Lake. Cache, interconnect, DRAM. See where they are, how they scale, where the bottlenecks are, single-rank vs dual-rank modules and the performance impact in practice. Maybe even the impact on power and efficiency.

    In any case, you'll need to update the Ryzen 5 and 7 results when Ryzen 3 arrives, right?

    For DRAM at 3200 it might be out of spec - overclocking the core is out of spec too but that has never stopped anyone from overclocking the memory. Right now 3200 is what a lot of folks buy, at least for higher-end mainstream. Of course some will argue that Ryzen scales better with memory and that's why it is unfair, but it's a hell of a lot more reasonable than testing 1080p gaming with a 1080 Ti, since it's a popular real-world scenario.

    At the end of the day the goal should be to inform, not to watch out for Intel's or AMD's feelings.
  • vanilla_gorilla - Monday, June 19, 2017 - link

    >For DRAM at 3200 it might be out of spec - overclocking the core is out of spec too but that has never stopped anyone from overclocking the memory.

    This. Exactly. We're enthusiasts and we always push the envelope. No one cares what the specs are; all we care about is what these processors are capable of in the right hands.

    And Ian I think you guys do an awesome job, there's no other place I look for CPU benchmarks. Keep up what you do, we all appreciate it, as well as your willingness to have a dialog with us about the process. Really cannot say how impressed I am by how open and engaged you are, it's really commendable.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    Thanks for the comments :)

    Though on your comments about pushing things out of spec: we have a good deal of readers who want plain stock for their businesses - AT isn't solely a consumer-focused site. Otherwise I'd just jack up all the CPUs and just post OC results :D Our base testing will always be at stock, and for comparison testing there has to be an element of consistency - testing an OC'ed part against a stock part in a direct A vs B comparison is only going to end up with a barrage of emails being rammed down my throat. There has to be some planning involved.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    I've been planning a memory scaling article, I just haven't had the time (this article was around 6 weeks of prep with all the events going on that I had to attend).

    Note we don't retest stuff every review. With our new 2017 test suite, I've been going through regression testing. Usually regression testing is done once for the full segment until the benchmarks are changed again. I'll look at my next few months (still stupidly busy) and look at the priorities here.
  • FreckledTrout - Monday, June 19, 2017 - link

    Most people can easily buy a 3200 kit for not a lot of extra money. It doesn't take a lot of tweaking (well, not anymore on AGESA 1.0.0.6) or silicon lottery luck like an OC, just a bit more cash. From what I have seen with Ryzen it is the sweet spot on price and performance. I would assume it's likely the most chosen configuration on the R7s. To make it fair, use 3200 on the 7820X as well. I only ask because Ryzen did way better than I would have thought, and I would like to see it with 3200 MHz memory and the latest updates to see how close Intel and AMD really are on 8-core systems. Then I'm going to build :)
  • tipoo - Monday, June 19, 2017 - link

    Launch review! Nice work dude(s).
