2017 CPU Benchmarks

For this review, we are rolling out our fresh CPU benchmark suite, using new scripts developed specifically for this testing. This means that with a fresh OS install, we can configure the OS for consistency, install the new benchmarks, maintain version consistency without random updates, and start running tests in under five minutes. After that, a single button press starts an 8-10 hour test (with a high-performance core) that yields nearly 100 relevant data points from the benchmarks given below. The tests cover a wide range of segments; some will be familiar, and some are new to benchmarking in general but still highly relevant for the markets they come from.
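
As a hedged illustration of that one-button automation (not our actual scripts), a minimal harness of this kind boils down to running each benchmark in sequence and logging results. The benchmark names and command lines below are placeholders:

```python
# A minimal sketch of the one-button harness idea, assuming a fixed list of
# benchmark executables. The labels and command lines are placeholders for
# illustration, not our actual scripts.
import csv
import subprocess
import time

BENCHMARKS = [
    # (label, command line) - hypothetical entries
    ("cinebench_r15_mt", ["CinebenchR15.exe", "-cb_cpux"]),
    ("povray_benchmark", ["povray.exe", "/RENDER", "benchmark.pov"]),
]

def run_suite(outfile="results.csv"):
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["benchmark", "elapsed_s", "returncode"])
        for name, cmd in BENCHMARKS:
            start = time.perf_counter()
            proc = subprocess.run(cmd, capture_output=True)
            writer.writerow([name, f"{time.perf_counter() - start:.2f}",
                             proc.returncode])

if __name__ == "__main__":
    run_suite()  # one button press; come back when it's done
```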

Our new CPU tests go through six main areas. We cover the Web (using an un-updateable version of Chrome 56), general system tests (opening tricky PDFs, emulation, brain simulation, AI, 2D image to 3D model conversion), rendering (ray tracing, modeling), encoding (compression, AES, h264 and HEVC), office-based tests (PCMark and others), and our legacy tests, throwbacks from another generation of bad code but still interesting for comparison.

All of our benchmark results can also be found in our benchmark engine, Bench.

A side note on OS preparation: as we are using Windows 10, there is ample opportunity for something to come in and disrupt our testing. Our default strategy is therefore several-fold: disable updates as much as possible, disable Windows Defender, uninstall OneDrive, disable Cortana as much as possible, enable the high-performance mode in the power options, and disable the internal platform clock, which can drift away from accurate timing if the base frequency drifts (leaving the timing inaccurate).
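
A minimal sketch of how these steps can be scripted, assuming an elevated prompt on Windows 10; the commands are standard Windows tools, though exact behavior can shift between builds:

```python
# A sketch of the OS preparation steps, assuming an elevated prompt on
# Windows 10. These are standard Windows tools (sc, powercfg, PowerShell,
# bcdedit); behavior can vary between Windows builds.
import subprocess

STEPS = [
    # Stop Windows Update from pulling in surprise patches mid-run
    ["sc", "config", "wuauserv", "start=", "disabled"],
    # Switch to the High Performance power plan (SCHEME_MIN alias)
    ["powercfg", "/setactive", "SCHEME_MIN"],
    # Turn off Defender real-time scanning for the duration of testing
    ["powershell", "-Command",
     "Set-MpPreference -DisableRealtimeMonitoring $true"],
    # Remove any forced platform clock (HPET) setting so the OS falls back
    # to its default timer source
    ["bcdedit", "/deletevalue", "useplatformclock"],
]

for cmd in STEPS:
    # check=False: a step may legitimately fail if it was already applied
    subprocess.run(cmd, check=False)
```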

New Tests

PCMark 10 – We had several requests to include PCMark 10 in our new testing suite. Normally we wait until a new benchmark has had most of its problems ironed out, but our initial performance-scaling metrics show that PCMark 10 is basically there already. The extended suite covers ‘Essential, Productivity and Creativity’ benchmarks such as GIMP, Blender, video editing, conferencing, complex spreadsheets and other tests. We use the subtest values as well as the gaming physics result.

Agisoft PhotoScan 1.3.3 – Again, requests to use a more up-to-date version of PhotoScan had been coming through the inbox. Compared to the older version, PhotoScan 1.3.3 includes various throughput enhancements at each of the core stages of the algorithm. Agisoft also gave us a new, larger set of more detailed test images from which to generate our 3D models, giving a longer benchmark (though results are not comparable to the old data). We’ve run this benchmark on about a dozen CPUs ready for this review.

Strategic AI - For our test we use the in-game Civilization 6 AI benchmark with a few custom modifications. Civilization is one of the most popular strategy video games on the market, heralded for its extended gameplay and its knack for making users suddenly lose eight hours in a day because they want to play ‘one more turn’. A strenuous setting would involve a large map with 20 AI players on the most difficult settings, causing the turn time (the wait for all AI players to move in one turn) to exceed several minutes on a mid-range system. Note that a Civilization game can easily run for over 500 turns and be played over several months, given the level of engagement and complexity.
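
As a rough illustration of how per-turn timings reduce to a headline figure (the helper and the numbers below are ours, fabricated for illustration; the real benchmark's logs differ):

```python
# A sketch of how per-turn AI timings reduce to a headline number. The turn
# times below are fabricated; the real benchmark's log format and values
# differ.
def summarize_turn_times(turn_times_s):
    """Return the average and worst-case AI turn time in seconds."""
    avg = sum(turn_times_s) / len(turn_times_s)
    return avg, max(turn_times_s)

# Five hypothetical turns on a mid-range system
avg, worst = summarize_turn_times([41.2, 43.8, 40.9, 45.1, 42.6])
print(f"average turn: {avg:.1f}s, worst turn: {worst:.1f}s")
```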

Web Tests on Chrome 56

Sunspider 1.0.2
Mozilla Kraken 1.1
Google Octane 2.0
WebXPRT 2015

System Tests

Strategic AI
PDF Opening
FCAT
3DPM v2.1
Dolphin v5.0
DigiCortex v1.20
Agisoft PhotoScan v1.3.3

Rendering Tests

Corona 1.3
Blender 2.78
LuxMark v3.1 CPU C++
LuxMark v3.1 CPU OpenCL
POV-Ray 3.7.1b4
Cinebench R15 ST
Cinebench R15 MT

Encoding Tests

7-Zip 9.2
WinRAR 5.40
AES Encoding (TrueCrypt 7.2)
HandBrake v1.0.2 x264 LQ
HandBrake v1.0.2 x264-HQ
HandBrake v1.0.2 HEVC-4K

Office / Professional

PCMark 8
PCMark 10
Chromium Compile (v56)
SYSmark 2014 SE (not in this early review)

Legacy Tests

3DPM v1 ST / MT
x264 HD 3 Pass 1, Pass 2
Cinebench R11.5 ST / MT
Cinebench R10 ST / MT

2017 GPU

For our new set of GPU tests, we wanted to think big. There are a lot of users in the ecosystem that prioritize gaming above all else, especially when it comes to choosing the correct CPU. If there's a chance to save $50 and get a better graphics card for no loss in performance, then this is the route that gamers would prefer to tread. The angle here though is tough - lots of games have different requirements and cause different stresses on a system, with various graphics cards having different reactions to the code flow of a game. Then users also have different resolutions and different perceptions of what feels 'normal'. This all amounts to more degrees of freedom than we could hope to test in a lifetime, only for the data to become irrelevant in a few months when a new game or new GPU comes into the mix. Just for good measure, let us add in DirectX 12 titles that make it easier to use more CPU cores in a game to enhance fidelity.

Our original list of nine games planned in February quickly became six, due to the lack of professional-grade controls on Ubisoft titles. If you want to see For Honor, Steep or Ghost Recon: Wildlands benchmarked on AnandTech, point Ubisoft Annecy or Ubisoft Montreal in my direction. While these games have in-game benchmarks worth using, they unfortunately do not expose enough frame-by-frame detail to the end user, despite using it internally to produce the data the user eventually sees (and it typically ends up obfuscated by another layer as well). I would perhaps automate these benchmarks via scripted inputs instead; however, the extremely variable loading times are a strong barrier to this.

So we have the following benchmarks as part of our 4/2 script, automated to the point of a one-button run that pops out the results four hours later, per GPU. Also listed are the resolutions and settings used.

  • Civilization 6 (1080p Ultra, 4K Ultra)
  • Ashes of the Singularity: Escalation* (1080p Extreme, 4K Extreme)
  • Shadow of Mordor (1080p Ultra, 4K Ultra)
  • Rise of the Tomb Raider #1 - GeoValley (1080p High, 4K Medium)
  • Rise of the Tomb Raider #2 - Prophets (1080p High, 4K Medium)
  • Rise of the Tomb Raider #3 - Mountain (1080p High, 4K Medium)
  • Rocket League (1080p Ultra, 4K Ultra)
  • Grand Theft Auto V (1080p Very High, 4K High)

For each of the GPUs in our testing, these games (at each resolution/setting combination) are run four times each, with outliers discarded. Average frame rates, 99th percentiles and 'Time Under x FPS' data are computed and sorted, and the raw data is archived; a sketch of this data reduction is shown below.
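
A minimal sketch of that reduction, assuming a raw list of per-frame render times in milliseconds; the example frame times are fabricated:

```python
# A sketch of the per-game data reduction, assuming a raw list of per-frame
# render times in milliseconds (FRAPS/OCAT-style dumps). Example data is
# fabricated for illustration.
def frame_stats(frame_times_ms, fps_threshold=60.0):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    # 99th percentile: the frame time that only 1% of frames exceed,
    # expressed as an instantaneous FPS figure
    slowest_1pct = sorted(frame_times_ms)[int(0.99 * n)]
    p99_fps = 1000.0 / slowest_1pct

    # 'Time Under x FPS': wall-clock time spent on frames slower than the
    # threshold, i.e. frames taking longer than 1000/x milliseconds
    cutoff_ms = 1000.0 / fps_threshold
    time_under_ms = sum(t for t in frame_times_ms if t > cutoff_ms)
    return avg_fps, p99_fps, time_under_ms

stats = frame_stats([16.2, 17.1, 15.9, 33.4, 16.5,
                     18.0, 16.1, 40.2, 16.8, 16.3])
print("avg FPS: %.1f | 99th pct FPS: %.1f | ms under 60 FPS: %.1f" % stats)
```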

The four GPUs we've managed to obtain for these tests are:

  • MSI GTX 1080 Gaming X 8G*
  • ASUS GTX 1060 Strix 6G
  • Sapphire Nitro R9 Fury 4GB
  • Sapphire Nitro RX 480 8GB

In our testing script, we save a couple of special tests for the GTX 1080. The following are also added:

  • Civilization 6 (8K Ultra, 16K Lowest)

This benchmark, with a little coercion, can be run beyond the specifications of the monitor in use, allowing for 'future' testing of GPUs at 8K and 16K with some amusing results. We only run these tests on the GTX 1080, because there's no point watching a slideshow more than once.

*Due to the timing of this review, we have only had a chance to run some CPU Gaming tests on the GTX 1080.

222 Comments

  • boeush - Friday, October 6, 2017

    To expand on this a bit more, with the "core wars" now in effect, I wonder if hyperthreading might be an unnecessary holdover feature that could actually be reducing the performance of many-core (8+) chips in all but the most extremely threaded scenarios. Might it not be better to have many simple/efficient cores, rather than fewer cores loaded with the hyperthreading overhead, both in terms of die area and energy density, as well as cache thrashing?
  • Zingam - Saturday, October 7, 2017

    Hyperthreading was invented to optimize the use of CPU logic that would otherwise remain unutilized during high loads. There is no way of reducing performance with current architectures. There are "hyperthreading-less" CPUs and you can compare them to hyperthreaded CPUs.
  • boeush - Monday, October 9, 2017

    Hyperthreading was particularly useful in the context of not having a lot of cores to work with - allowing you to squeeze extra multi-threaded performance from your dual- or quad-core CPU. It comes at the cost of extra silicon and complexity in the CPU pipeline, but allows better utilization of CPU resources, as you mention. At runtime, it has two detrimental effects on single-thread performance: (1) splitting/sharing the on-CPU cache among more threads, raising the frequency of cache misses for any given thread as the threads trample over each other's cached data, and (2) maximizing CPU resource utilization, thereby maximizing dissipated energy per unit area - and driving the CPU into a performance-throttling regime.

    With more cores starting to become available per CPU in this age of "core wars", it's no longer as important to squeeze every last ounce of resource utilization from each core. Most workloads/applications are not very parallelizable in practice, so you end up hitting the limits of Amdahl's law - at which point single-thread performance becomes the main bottleneck. And to maximize single-thread performance on any given core, you need two things: (a) maximum attainable clock frequency (resource utilization be damned), and (b) as much uncontested, dedicated on-CPU cache as you can get. Hyperthreading is an impediment to both of those goals.

    So, it seems to me that if we're going toward the future where we routinely have CPUs with 8 or more cores, then it would be beneficial for each of those cores to be simpler, more compact, more streamlined and optimized for single-thread performance (while foregoing hyperthreading support), while spending any resulting die space savings on more cores and/or more cache.
  • boeush - Monday, October 9, 2017

    To add to the above: 'more cores and/or more cache' - and/or better branch predictor, and/or faster/wider ALU and/or FPU, and/or more pipeline stages to support a faster clock, and/or...
  • alinypd - Saturday, October 7, 2017

    Slowest GAMING CPU Ever, Garbage!
  • yhselp - Saturday, October 7, 2017

    The i3-8100 is made utterly redundant by the necessity to buy a Z370 motherboard along with it; it'd be cheaper to get an i5-7400 with a lower-end motherboard. Intel...
  • watzupken - Saturday, October 7, 2017

    This applies to all the non-overclocking chips, particularly the i5 and below. The high cost of Z370 boards currently wipes out any price benefit. For example, an i5-8400 is good value for money, but once you factor in the price of a motherboard with a Z370 chipset, it may not be such good value anymore.
  • FourEyedGeek - Saturday, October 7, 2017

    Enjoyed the article, thanks. An overclocked Ryzen 1700 looks appealing.
  • nierd - Saturday, October 7, 2017

    "The problem here is *snip* Windows 10, *snip* All it takes is for a minor internal OS blip and single-threaded performance begins to diminish. Windows 10 famously kicks in a few unwanted instruction streams when you are not looking,"

    This is why single-threaded performance is a silly benchmark in today's market, unless you happen to boot to DOS to run something. Your OS is designed to use threads. There are no systems in use today as a desktop (in any market these processors will compete in - even if used as a server) where they will ever run a single thread. The only processors that run single threads today are ... single-core processors (without even hyperthreading).

    Open your task manager - click the Performance tab - look at the number of threads - when you have enough cores to match that number, then single-threaded performance is important. In the real world, how the processor handles multiple tasks and thread switching matters more. Even hardcore gamers seem to miss this mark, forgetting that behind the game the OS has threads for memory management, disk management, kernel routines, checking every piece of hardware in your system, antivirus, anti-malware (perhaps), network stack management, etc. That's not even counting whether you run more than one monitor with web browsing or videos playing on another screen - or anything else you are running in the background.

    The myth that you never need more than 4 cores is finally coming to rest - let's start seeing benchmarks that stress a system with 10 programs going in the background. My system will frequently be playing a movie, playing a game, and running Handbrake in the background while it also serves as a Plex server, runs antivirus, has 32 tabs open in 2 different browsers, and frequently has something else playing at the same time. A true benchmark would be multiple programs all tying up as many resources as possible - while a single app can give a data point, I want to see how these new multi-core beasts handle real-world scenarios and response times.
  • coolhardware - Sunday, October 8, 2017

    Your comment has merit. It is crazy the number of tasks running on a modern OS. I sometimes miss the olden days where a clean system truly was clean and had minimal tasks upon bootup. ;-)
