Benchmarking Suite 2017

2017 CPU

For our review, we are implementing our fresh CPU testing benchmark suite, using new scripts developed specifically for this testing. This means that with a fresh OS install, we can configure the OS to be more consistent, install the new benchmarks, maintain version consistency without random updates, and start running the tests in under 5 minutes. After that it is a one button press to start an 8-10 hour test (with a high-performance core) that produces nearly 100 relevant data points across the benchmarks given below. The tests cover a wide range of segments; some will be familiar, while others are new to benchmarking in general but still highly relevant for the markets they come from.
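For the curious, a one-button driver of this sort does not need to be complicated. The sketch below is a hypothetical, minimal version in Python: the benchmark list, batch-file paths and CSV layout are placeholders for illustration, not our production scripts.

    # Hypothetical sketch of a one-button benchmark driver, not our production script.
    # Each entry pairs a test name with the command used to launch it; paths are placeholders.
    import csv
    import subprocess
    import time

    BENCHMARKS = [
        ("Cinebench R15 MT", r"C:\bench\cb15\run_mt.bat"),
        ("POV-Ray",          r"C:\bench\povray\run.bat"),
        ("HandBrake 264-LQ", r"C:\bench\handbrake\h264_lq.bat"),
    ]

    def run_all(output_csv="results.csv"):
        with open(output_csv, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["benchmark", "elapsed_seconds", "return_code"])
            for name, cmd in BENCHMARKS:
                start = time.time()
                proc = subprocess.run(cmd, shell=True)  # block until the test finishes
                writer.writerow([name, round(time.time() - start, 1), proc.returncode])

    if __name__ == "__main__":
        run_all()  # one button press, then walk away for 8-10 hours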

Our new CPU tests go through six main areas. We cover the Web (we've got an un-updateable version of Chrome 56), general system tests (opening tricky PDFs, emulation, brain simulation, AI, 2D image to 3D model conversion), rendering (ray tracing, modeling), encoding (compression, AES, h264 and HEVC), office based tests (PCMark and others), and our legacy tests, throwbacks from another generation of bad code but interesting to compare.

A side note on OS preparation. As we're using Windows 10, there's a large opportunity for something to come in and disrupt our testing. So our approach is multi-pronged: disable the ability to update as much as possible, disable Windows Defender, uninstall OneDrive, disable Cortana as much as possible, set the high-performance mode in the power options, and disable the internal platform clock, which can drift away from being accurate if the base frequency drifts (and thus make our timing data inaccurate).
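To give an idea of what that preparation looks like in practice, the sketch below drives a few of those tweaks from Python. These are examples of the kind of commands involved and should be treated as assumptions rather than our exact preparation script; fully disabling Defender and Cortana needs Group Policy or registry changes that are not shown here.

    # Illustrative only: a few of the Windows 10 tweaks described above, driven from Python.
    # Run from an elevated prompt; this is not our exact preparation script.
    import subprocess

    COMMANDS = [
        "powercfg /setactive SCHEME_MIN",        # switch to the High Performance power plan
        "sc config wuauserv start= disabled",    # stop the Windows Update service from auto-starting
        "bcdedit /deletevalue useplatformclock", # stop forcing the platform clock (HPET) as the timer source
        r"%SystemRoot%\SysWOW64\OneDriveSetup.exe /uninstall",  # remove OneDrive on 64-bit installs
    ]

    for cmd in COMMANDS:
        result = subprocess.run(cmd, shell=True)  # failures are reported, not fatal
        print(cmd, "->", result.returncode)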

Web Tests on Chrome 56

Sunspider
Kraken
Octane
Web13
Web15

System Tests

PDF Opening
FCAT
3DPM v2.1
Dolphin v5.0
DigiCortex v1.20
Agisoft PS v1.0

Rendering Tests

Corona 1.3
Blender 2.78
LuxMark CPU C++
LuxMark CPU OpenCL
POV-Ray
CB15 ST
CB15 MT

Encoding Tests

7-Zip
WinRAR 5.4
TrueCrypt
HandBrake 264-LQ
HandBrake 264-HQ
HandBrake 265-4K

Office / Professional

PCMark8
Chromium Compile
SYSmark 2014 SE

Legacy Tests

3DPM v1 ST / MT
x264 HD 3 Pass 1, Pass 2
CB 11.5 ST / MT
CB 10 ST / MT

A side note - a couple of benchmarks (the LuxMark tests) were not giving consistently good data during testing, and we need to go back and re-work this part of our suite.

2017 GPU

For our 2017 set of GPU tests, we wanted to think big. There are a lot of users in the ecosystem that prioritize gaming above all else, especially when it comes to choosing the correct CPU. If there's a chance to save $50 and get a better graphics card for no loss in performance, then this is the route that gamers would prefer to tread. The angle here though is tough - lots of games have different requirements and cause different stresses on a system, with various graphics cards having different reactions to the code flow of a game. Then users also have different resolutions and different perceptions of what feels 'normal'. This all amounts to more degrees of freedom than we could hope to test in a lifetime, only for the data to become irrelevant in a few months when a new game or new GPU comes into the mix. Just for good measure, let us add in DirectX 12 titles that make it easier to use more CPU cores in a game to enhance fidelity.

Our original list of nine games planned in February quickly became six, due to the lack of professional-grade controls on Ubisoft titles. If you want to see For Honor, Steep or Ghost Recon: Wildlands benchmarked on AnandTech, point Ubisoft Annecy or Ubisoft Montreal in my direction. While these games have in-game benchmarks worth using, unfortunately they do not provide enough frame-by-frame detail to the end user, despite using it internally to produce the data the user eventually sees (and it typically ends up obfuscated by another layer as well). I would perhaps instead choose to automate these benchmarks via inputs; however, the extremely variable loading times are a strong barrier to this.

So we have the following benchmarks as part of our 4/2 script, automated to the point of a one-button run, with results popping out around four hours later per GPU. Also listed are the resolutions and settings used.

  • Civilization 6 (1080p Ultra, 4K Ultra)
  • Ashes of the Singularity: Escalation* (1080p Extreme, 4K Extreme)
  • Shadow of Mordor (1080p Ultra, 4K Ultra)
  • Rise of the Tomb Raider #1 - GeoValley (1080p High, 4K Medium)
  • Rise of the Tomb Raider #2 - Prophets (1080p High, 4K Medium)
  • Rise of the Tomb Raider #3 - Mountain (1080p High, 4K Medium)
  • Rocket League (1080p Ultra, 4K Ultra)
  • Grand Theft Auto V (1080p Very High, 4K High)

For each of the GPUs in our testing, these games (at each resolution/setting combination) are run four times each, with outliers discarded. Average frame rates, 99th percentiles, and 'Time Under x FPS' data are collated, and the raw data is archived.
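For those wondering how those three metrics relate to the raw data, the short Python sketch below works through them on a made-up list of frame times; the numbers and the 30 FPS threshold are purely illustrative.

    # Minimal sketch: turning per-frame render times (ms) into average FPS,
    # 99th percentile FPS, and 'Time Under x FPS'. Data here is made up.
    frame_times_ms = [16.2, 15.9, 17.1, 34.8, 16.4, 16.0, 41.2, 16.3, 15.8, 16.1]
    threshold_fps = 30.0

    total_time_s = sum(frame_times_ms) / 1000.0
    average_fps = len(frame_times_ms) / total_time_s

    # 99th percentile frame time, expressed as an FPS figure
    sorted_times = sorted(frame_times_ms)
    idx = min(len(sorted_times) - 1, round(0.99 * (len(sorted_times) - 1)))
    p99_fps = 1000.0 / sorted_times[idx]

    # 'Time Under x FPS': total time spent on frames slower than the threshold
    slow_ms = sum(t for t in frame_times_ms if (1000.0 / t) < threshold_fps)

    print(f"Average FPS:         {average_fps:.1f}")
    print(f"99th percentile FPS: {p99_fps:.1f}")
    print(f"Time under {threshold_fps:.0f} FPS:   {slow_ms:.0f} ms of {total_time_s * 1000:.0f} ms")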

The four GPUs we've managed to obtain for these tests are:

  • MSI GTX 1080 Gaming X 8G
  • ASUS GTX 1060 Strix 6G
  • Sapphire Nitro R9 Fury 4GB
  • Sapphire Nitro RX 480 8GB

In our testing script, we save something special for the GTX 1080. The following tests are also added for that card:

  • Civilization 6 (8K Ultra, 16K Lowest)

These two configurations, with a little coercion, can be run beyond the specifications of the monitor being used, allowing for 'future' testing of GPUs at 8K and 16K with some amusing results. We are only running these tests on the GTX 1080, because there's no point watching a slideshow more than once.

*A note on Ashes. During our initial testing, 8K and 16K testing were working great. But a game update around v2.2 broke our scripting methods due to a new splashscreen/popup. We found a workaround that let us start testing, though it remains a kludge, and the new resolution testing stopped working in the process. If it ever works again, we might add 8K/16K testing back in.
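As a general aside, the usual way to cope with a new splash screen in an otherwise scriptable title is to inject an input once the game window has had time to appear. The sketch below uses the third-party pyautogui package, and the 20-second wait and Enter keypress are assumptions for illustration; it is not the exact workaround in our script, which as noted remains fragile.

    # Rough sketch of dismissing an unexpected splash screen/popup during an automated run.
    # Uses the third-party pyautogui package; timings and keys are illustrative assumptions.
    import time
    import pyautogui

    def dismiss_popup(wait_seconds=20):
        time.sleep(wait_seconds)         # give the splash screen time to appear
        pyautogui.press("enter")         # most launch popups accept Enter
        w, h = pyautogui.size()
        pyautogui.click(w // 2, h // 2)  # click the centre of the screen as a fallback

    dismiss_popup()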

Comments

  • mat9v - Tuesday, June 20, 2017 - link

    To play it safe, invest in the Core i9-7900X today.
    To play it safe and get a big GPU, save $400 and invest in the Core i7-7820X today.

    Then the conclusion should have been: wait for a fixed platform. I'm not even suggesting choosing Ryzen, as it performs slower, but why encourage buying a (for now) flawed platform?
  • mat9v - Tuesday, June 20, 2017 - link

    Please then correct the tables on the 1st page comparing Ryzen with the 7820X and 7800X to state that Intel has 24 lanes, as they leave 24 for PCIe slots and 4 are reserved for DMI 3.0.
    If you strip Ryzen's lanes to only show those available for PCIe, do so for Intel too.
  • Ryan Smith - Wednesday, June 21, 2017 - link

    The tables are correct. The i7 7800 series have 28 PCIe lanes from the CPU for general use, and another 4 DMI lanes for the chipset.
  • PeterCordes - Tuesday, June 20, 2017 - link

    Nice article, thanks for the details on the microarchitectural changes, especially to execution units and cache. This explains memory bandwidth vs. working-set size results I observed a couple months ago on Google Compute Engine's Skylake-Xeon VMs with ~55MB of L3: The L2-L3 transition was well beyond 256kB. I had assumed Intel wouldn't use a different L3 cache design for SKX vs. SKL, but large L2 doesn't make much sense with an inclusive L3 of 2 or 2.5MB per core.

    Anyway, some corrections for page 3: The allocation queue (IDQ) in Skylake-S is always 64 uops, with or without HT. For example, I looked at the `lsd.uops` performance counter in a loop with 97 uops on my i7-6700k. For 97 billion counts of uops_issued.any, I got exactly 0 counts of lsd.uops, with the system otherwise idle. (And I looked at cpu_clk_unhalted.one_thread_active to make sure it was really operating in non-HT mode the majority of the time it was executing.) Also, IIRC, Intel's optimization manual explicitly states that the IDQ is always 64 entries in Skylake.

    The scheduler (aka RS or Reservation Station) is 97 unfused-domain uops in Skylake, up from 60 in Haswell. The 180int / 168fp numbers you give are the int / fp register-file sizes. They are sized more like the ROB (224 fused-domain uops, up from 192 in Haswell), not the scheduler, since like the ROB, they have to hold onto values until retirement, not just until execution. See also http://blog.stuffedcow.net/2013/05/measuring-rob-c... for when the PRF size vs. the ROB is the limit on the out-of-order window. See also http://www.realworldtech.com/haswell-cpu/6/ for a nice block diagram of the whole pipeline.

    SKL-S DIVPS *latency* is 11 cycles, not 3. The *throughput* is one per 3 cycles for 128-bit vectors, or one per 5 cycles for 256b vectors, according to Agner Fog's table. I forget if I've tested that myself. So are you saying that SKL-SP has one per 5 cycle throughput for 128-bit vectors? What's the throughput for 256b and 512b vectors?

    -----

    It's really confusing the way you keep saying "AVX unit" or "AVX-512 unit" when I think you mean "512b FMA unit". It sounds like vector-integer, shuffle, and pretty much everything other than FMA will have true 512b execution units. If that's correct, then video codecs like x264/x265 should run the same on LCC vs. HCC silicon (other than differences in mesh interconnect latency), because they're integer-only, not using any vector-FP multiply/add/FMA.

    -------

    > This should allow programmers to separate control flow from data flow...

    SIMD conditional operations without AVX512 are already done branchlessly (I think that's what you mean by separate from control-flow) by masking the input and/or output. e.g. to conditionally add some elements of a vector, AND the input with a vector of all-one or all-zero elements (as produced by CMPPS or PCMPEQD, for example). Adding all-zeros is a no-op (the additive identity).

    Mask registers and support for doing it as part of another operation make it much more efficient, potentially making it a win to vectorize things that otherwise wouldn't be. But it's not a new capability; you can do the same thing with boolean vectors and SSE/AVX VBLENDVPS.
  • PeterCordes - Tuesday, June 20, 2017 - link

    Speed Shift / Hardware P-State is not Windows-specific, but this article kind of reads as if it is.

    Your article doesn't mention any other OSes, so nothing it says is actually wrong: I'm sure it did require Intel's collaboration with MS to get support into Win10. The bullet-point in the image that says "Collaboration between Intel and Microsoft specifically for W10 + Skylake" may be going too far, though. That definitely implies that it only works on Win10, which is incorrect.

    Linux has supported it for a while. "HWP enabled" in your kernel log means the kernel has handed off P-state selection to the hardware. (Since Linux is open-source, Intel contributed most of the code for this through the regular channels, like they do for lots of other drivers.)

    dmesg | grep intel_pstate
    [ 1.040265] intel_pstate: Intel P-state driver initializing
    [ 1.040924] intel_pstate: HWP enabled

    The hardware exposes a knob that controls the tradeoff between power and performance, called Energy Performance Preference or EPP. Len Brown@Intel's Linux patch notes give a pretty good description of it (and how it's different from a similar knob for controlling turbo usage in previous uarches), as well as describing how to use it from Linux. https://patchwork.kernel.org/patch/9723427/.

    # CPU features related to HWP, on an i7-6700k running Linux 4.11 on bare metal
    fgrep -m1 flags /proc/cpuinfo | grep -o 'hwp[_a-z]*'
    hwp
    hwp_notify
    hwp_act_window
    hwp_epp

    I find the simplest way to see what speed your cores are running at is to just `grep MHz /proc/cpuinfo`. (It does accurately reflect the current situation; Linux finds out what the hardware is actually doing).

    IDK about OS X support, but I assume Apple has got it sorted out by now, almost 2 years after SKL launch.
  • Arbie - Wednesday, June 21, 2017 - link

    There are folks for whom every last compute cycle really matters to their job. They have to buy the technical best. If that's Intel, so be it.

    For those dealing more with 'want' than 'need', a lot of this debate misses an important fact. The only reason Intel is suddenly vomiting cores, defecating feature sizes, and pre-announcing more lakes than Wisconsin is... AMD. Despite its chronic financial weakness that company has, incredibly, come from waaaay behind and given us real competition again. In this ultra-high stakes investment game, can they do that twice? Maybe not. And Intel has shown us what to expect if they have no competitor. In this limited-supplier market it's not just about who has the hottest product - it's also about whom we should reward with our money, and about keeping vital players in the game.

    I suggest - if you can, buy AMD. They have earned our support and it's in our best interests to do so. I've always gone with Intel but have lately come to see this bigger picture. It motivated me to buy an 1800X and I will also buy Vega.
  • Rabnor - Wednesday, June 21, 2017 - link

    To play it safe and get a big GPU, save $400 and invest in the Core i7-7820X today.
    You have to spend that $400+ on a good motherboard & AIO cooler.
    Are you sold by Intel, AnandTech?
  • Synviks - Thursday, June 22, 2017 - link

    For some extra comparison: running Cinebench R15 on my 14-core 2.7 GHz Haswell Xeon, with turbo to 3 GHz on all cores, my score is 2010.

    Pretty impressive performance gain if they can shave off 4 cores and end up with higher performance.
  • Pri - Thursday, June 22, 2017 - link

    On the first page you wrote this:
    Similarly, the 6-core Core i7-7820X at $599 goes up against the 8-core $499 Ryzen 7 1800X.

    The Core i7 7820X was mistakenly written as a 6-core processor when it is in fact an 8-core processor.

    Kind Regards.
  • Gigabytes - Thursday, June 22, 2017 - link

    Okay, here is what I learned from this article. Gaming performance sucks and you will be able to cook a pizza inside your case. Did I miss anything?

    Oh, one thing missing.

    Play it SMART and wait to see the Ripper in action before buying your new Intel toaster oven.
