Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. This set of tests is meant to emulate what people usually do on a system, such as opening large files or processing small stacks of data. This is a bit different from our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

All of our benchmark results can also be found in our benchmark engine, Bench.

Strategic AI

One of the hot-button topics this year (and for the next few years, no doubt) is how technology is shifting to using artificial intelligence and purpose-built AI hardware to perform better analysis in low-power environments. AI is not a new concept; we have had it for over 50 years. What is new is the movement to neural-network-based training and inference: moving from 'if this then that' sorts of AI to convolutional networks that can perform fractional analysis of all the parameters.

Unfortunately the neural-network ecosystem is fast-moving right now, especially in software. Every few months, announcements are made about new software frameworks, improvements in accuracy, or fundamental paradigm shifts in how these networks should be calculated for accuracy, power, and performance, and in what the underlying hardware should support in order to do so. There are no AI benchmarking tools using network topologies that will remain relevant in 2-4 months, let alone across an 18-24 month processor benchmark cycle. So to that end our AI test becomes the best of the rest: strategic AI in the latest video games.

For our test we use the in-game Civilization 6 AI benchmark with a few custom modifications. Civilization is one of the most popular strategy video games on the market, heralded for its extended gameplay and for the way users suddenly lose 8 hours in a day because they want to play 'one more turn'. A strenuous setting would involve a large map with 20 AI players on the most difficult settings, leading to a turn time (waiting for all of the AI players to move in one turn) exceeding several minutes on a mid-range system. Note that a Civilization game can easily run for over 500 turns and be played over several months due to the level of engagement and complexity.

Before the benchmark is run, we change the game settings for medium visual complexity at a 1920x1080 resolution while using a GTX 1080 graphics card, such that any rendered graphics are not interfering with the benchmark measurements. Our benchmark run uses a command line method to call the built-in AI benchmark, which features 8 AI players on a medium size map but in a late game scenario with most of the map discovered, each civilization in the throes of modern warfare. We set the benchmark to play for 15 turns, and output the per-turn time, which is then read into the script with the geometric mean calculated. This benchmark is newer than most of the others, so we only have a few data points so far:
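The per-turn aggregation described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual test script; the function name and the sample values are ours.

```python
import math

def geomean(turn_times):
    """Geometric mean of per-turn times, in seconds.

    The geometric mean is less skewed by a single slow turn than the
    arithmetic mean, which suits a 15-turn sample.
    """
    if not turn_times:
        raise ValueError("no samples")
    return math.exp(sum(math.log(t) for t in turn_times) / len(turn_times))

# Hypothetical per-turn times parsed from the benchmark's output log
times = [21.4, 22.1, 20.8, 23.0, 21.7]
print(f"Geometric mean turn time: {geomean(times):.2f} s")
```

Python 3.8+ also ships `statistics.geometric_mean`, which computes the same quantity.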

System: Civilization 6 AI (1080p Medium + GTX 1080)

Our Strategic AI test is new to the scene, and it looks like there is at least an asymptotic result when you have a 'good enough' processor.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it was taking north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC, and disable all the update functionality within. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times, and the average time is taken. Results are in milliseconds.
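The measurement loop amounts to timing a launch ten times and averaging. A minimal sketch, with a stand-in workload in place of the real "launch Adobe Reader DC and wait for control" step (our own function names, not the actual harness):

```python
import statistics
import time

def average_open_time(launch, repeats=10):
    """Average wall-clock time in milliseconds for `launch` over `repeats` runs.

    `launch` is a callable that opens the document and blocks until the
    application is responsive again.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution timer
        launch()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples)

# Stand-in workload; the real test opens the obnoxious PDF in Reader DC.
print(f"{average_open_time(lambda: sum(range(100_000))):.1f} ms")
```

`time.perf_counter` is preferred over `time.time` here because it is monotonic and unaffected by system clock adjustments mid-run.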

System: PDF Opening with Adobe Reader DC

Single-thread frequency usually works well for PDF opening, although as we add more high-performance cores it becomes more difficult for the system to pin that individual thread to a single core and get the full turbo boost - if anything flares up on any other core, it brings the frequencies down. I suspect that is what is happening here and in the next couple of tests, where the i7-8700K sits behind the i7-7700K and i7-7740X.

FCAT Processing: link

One of the more interesting workloads that has crossed our desks in recent quarters is FCAT - the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay onto a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded, however because the video is basically in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.

System: FCAT Processing ROTR 1440p GTX980Ti Data

Dolphin Benchmark: link

Many emulators are bound by single-thread CPU performance, and general reports suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

System: Dolphin 5.0 Render Test

3D Movement Algorithm Test v2.1: link

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctorate thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and by decreasing the number of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.

System: 3D Particle Movement v2.1

DigiCortex v1.20: link

Despite being a couple of years old, the DigiCortex software is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation. The results on the output are given as a fraction of whether the system can simulate in real-time, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence detects DRAM and bus speed, however we take the firing mode which adds CPU work with every firing.
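Reading the DigiCortex score is straightforward: it is a ratio of simulated time to wall-clock time. A small sketch of that interpretation (our own helper, not part of DigiCortex itself):

```python
def realtime_factor(simulated_seconds, wall_seconds):
    """DigiCortex-style score: the fraction of real time the simulation achieves.

    A value above 1.0 means the system simulates the 32k-neuron network
    faster than real time, so it is suitable for real-time work.
    """
    if wall_seconds <= 0:
        raise ValueError("wall-clock time must be positive")
    return simulated_seconds / wall_seconds

# A run that simulates 1.5 s of neural activity in 1.0 s of wall time
# scores 1.5x, i.e. comfortably real-time capable.
print(realtime_factor(1.5, 1.0))
```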

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex can take advantage of the extra cores, paired with the faster DDR4-2666 memory. The Ryzen 7 chips still sit at the top here, however.

Agisoft Photoscan 1.3.3: link

Photoscan stays in our benchmark suite from the previous version, however now we are running on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model - so the more detailed the images, and the more you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, along with some cache/memory dependency in there as well. For the more variably threaded workloads, features such as Speed Shift and XFR will be able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures. The 1.3.3 test is relatively new, so it has only been run on a few parts so far.

System: Agisoft Photoscan 1.3.3 (Large) Total Time

222 Comments


  • Koenig168 - Friday, October 6, 2017 - link

    Hmm ... rather disappointing that Anandtech did not include Ryzen 1600/X until called out by astute readers.
  • mkaibear - Friday, October 6, 2017 - link

    ...apart from including all the data in their benchmark tool, which they make freely available, you mean? They put in the CPUs they felt were most relevant. The readership disagreed, so they changed it using their benchmark database. That level of service is almost unheard of in the industry and all you can do is complain. Bravo.
  • Koenig168 - Friday, October 6, 2017 - link

    Irrelevant. While I agree with most of what you said, that does not change the fact that Anandtech did not include Ryzen 1600/X until called out by astute readers. To make things a little clearer for you, the i7-8700 is a 6C/12T processor. The Ryzen 1600 is a 6C/12T processor. Therefore, a comparison with the Ryzen 1600 is relevant.

    You should have addressed the point I made. Instead all you can do is complain about my post. Bravo. (In case this goes over your head again, that last bit is added just to illustrate how pointless such comments are.)
  • mkaibear - Saturday, October 7, 2017 - link

    So your point is, in essence, "they didn't do what I wanted them to do so they're damned for all time".

    They put up the comparison they felt was relevant, then someone asked them to include something different - so they did it. They listened to their readers and made changes to an article to fix it.

    Should they have put the R5 in the original comparison? Possibly. I can see arguments either way but if pushed I'd have said they should have done - but since even the 1600X gets beaten by the 8400 in virtually every benchmark on their list (as per https://www.anandtech.com/bench/product/2018?vs=20... they would then have been accused by the lurking AMD fanboys of having picked comparisons to make AMD look bad (like on every other article where AMD gets beaten in performance).

    So what are you actually upset about? That they made an editorial decision you disagree with? You can't accuse them of hiding data since they make it publicly accessible. You can't accuse them of not listening to the readers because they made the change when asked to. Where's the issue here?
  • mkaibear - Saturday, October 7, 2017 - link

    OK on further reading it's not "virtually every" benchmark on the list, just more than half. It's 50% i5 win, 37% R5 win, 12% tied. So not exactly a resounding triumph for the Ryzen but not as bad as I made it out to be.

    In the UK the price differential is about £12 in favour of the i5, although the motherboard is about £30 more expensive (though of course Z370 is a lot more fully featured than B350) so I think pricing wise it's probably a wash - but if you want gaming performance on anything except Civ VI then you'd be better off getting the i5.

    ...oh and if you don't want gaming performance then you'll need to buy a discrete graphics card with the R5 which probably means the platform costs are skewed in favour of Intel a bit (£25 for a GF210, £32 for a R5 230...)
  • watzupken - Saturday, October 7, 2017 - link

    As mentioned when I first called out this omission, I would think comparing a 6 vs 4 core irrelevant. This is what AnandTech recommended to lookout for on page 4 "Core Wars": Core i5-8400 vs Ryzen 5 1500X.
    You be the judge if this makes sense when there is a far better competition/comparison between the i5 8400 and the R5 1600. Only when you go reading around do you realize that, hey, the i5 8400 seems to be losing in some areas to the 1600. I give AnandTech the benefit of the doubt, so I am done debating what is relevant or not.
  • KAlmquist - Friday, October 6, 2017 - link

    The Anandtech benchmark tool confirms what Ryan indicated in the introduction: the i7-8700k wins against the 1600X across the board, due to faster clocks and better IPC. The comparison to the i5-8400 is more interesting. It either beats the 1600X by a hair, or loses rather badly. I think the issue is the lack of hyperthreading on the i5-8400 makes the 1600X the better all-around performer. But if you mostly run software that can't take advantage of more than 6 threads, then the i5-8400 looks very good.

    Personally, I wouldn't buy i5-8400 just because of the socket issue. Coffee Lake is basically just a port of Skylake to a new process, but Intel still came out with a new socket for it. Since I don't want to dump my motherboard in a landfill every time I upgrade my CPU, Intel needs a significantly superior processor (like they had when they were competing against AMD's bulldozer derivatives) to convince me to buy from them.
  • GreenMeters - Friday, October 6, 2017 - link

    So Intel still isn't getting their head out of their rear and offering the option of a CPU that trades all the integrated GPU space for additional cores? Moronic.
  • mkaibear - Friday, October 6, 2017 - link

    Integrated graphics make up more than 70% of the desktop market. It's even greater than that for laptops. Why would they sacrifice their huge share of that 70% in order to gain a small share of the 30%? *that* would be moronic.

    In the meantime you can know that if you buy a desktop CPU from Intel it will have an integrated GPU which works even with no discrete graphics card, and if you need one without the integrated graphics you can go HEDT.

    Besides, the limit for Intel isn't remotely "additional space", they've got more than enough space for 8/10/12 CPU cores - it's thermal. Having an integrated GPU which is unused doesn't affect that at all - or arguably it gives more of a thermal sink but I suspect in truth that's a wash.
  • Zingam - Saturday, October 7, 2017 - link

    We need a completely new PC architecture - you need more CPU cores - add more CPU cores, you need more GPU cores add more GPU cores, all of them connected via some sort of Infinity fabric like bus and sharing a single RAM. That should be possible to implement. Instead of innovating Intel is stuck in the current 80s architecture introduced by IBM.
