Our New Testing Suite for 2018 and 2019

Spectre and Meltdown Hardened

In order to keep our testing up to date, we have to update our software every so often to stay relevant. In our updates we typically implement the latest operating system, the latest patches, the latest software revisions, and the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. Last time we did a full re-write, it took the best part of a month, including regression testing (re-testing older processors).

One of the key elements of our testing update for 2018 (and 2019) is that our scripts and systems are designed to be hardened for Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and that our operating system has all the relevant updates in place. In this case we are using Windows 10 x64 Enterprise 1709 with the April security updates, which enforce the Smeltdown (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update – this is due to some new features that are giving uneven results. Rather than spend a few weeks learning how to disable them, we are going ahead with RS3, which has been widely used.
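
For anyone checking their own system against the same baseline, one way to verify that the OS-side mitigations are actually active is Microsoft's SpeculationControl PowerShell module and its Get-SpeculationControlSettings cmdlet. The minimal wrapper below is just an illustration (it assumes the module is already installed), not part of our suite:

```python
import subprocess

# Query the OS-level Spectre/Meltdown mitigation status using Microsoft's
# SpeculationControl PowerShell module (install once with:
#   Install-Module SpeculationControl).
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Get-SpeculationControlSettings"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # reports branch target injection / rogue data cache load status
```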

Our previous benchmark suite was split into several segments depending on how the test is usually perceived. Our new test suite follows similar lines, and we run the tests based on:

  1. Power
  2. Memory
  3. Office
  4. System
  5. Render
  6. Encoding
  7. Web
  8. Legacy
  9. Linux
  10. Integrated Gaming
  11. CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our benchmark database, Bench, which has a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. How much of this we can report depends on how much information is exposed by the manufacturer of the chip: sometimes a lot, sometimes none at all.

We are currently running Prime95 as our main power test, though we have recently been experimenting with POV-Ray as well.
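
Our own power logging runs under Windows and uses whatever counters each vendor exposes, but as an illustration of the general idea, the sketch below reads the package-level RAPL energy counter through the standard Linux powercap interface and converts it to an average power figure. Treat it as a minimal example of the technique, not our actual tooling:

```python
import time

# Package-level RAPL energy counter exposed by the intel_rapl powercap driver
# (Linux; reading the file requires sufficient permissions).
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read())

interval = 1.0  # seconds between samples
e0, t0 = read_energy_uj(), time.time()
time.sleep(interval)
e1, t1 = read_energy_uj(), time.time()

# The counter wraps at max_energy_range_uj; wrap handling is omitted in this sketch.
watts = (e1 - e0) / 1e6 / (t1 - t0)
print(f"Package power: {watts:.1f} W")
```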

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then implementing both a memory latency checker (Intel’s Memory Latency Checker works equally well on both platforms) and AIDA64 to probe cache bandwidth.
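
As a rough sketch of how the latency side is driven, Intel's MLC is a command-line tool, so automation is mostly a matter of launching it and capturing the output. The example below runs the idle latency mode and logs the raw text; exact flags and output format vary by MLC version, so check --help before relying on it:

```python
import subprocess
from pathlib import Path

# Run Intel Memory Latency Checker's idle latency test and keep the raw output.
# mlc.exe must be on the PATH (or give a full path); --idle_latency is one of
# the standard MLC modes, but confirm against your version's --help.
result = subprocess.run(
    ["mlc.exe", "--idle_latency"],
    capture_output=True, text=True, check=True,
)
Path("mlc_idle_latency.log").write_text(result.stdout)
print(result.stdout)
```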

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new)

System

  • Application Load: Time to load GIMP 2.10.4 (new; a timing sketch follows this list)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)
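
For the Application Load test above, the idea is simply to launch the binary and time how long it takes for a usable window to appear. The sketch below is a simplified proxy using pywin32: it polls for the first visible top-level window owned by the launched process. The GIMP install path is an assumption, and because GIMP shows a splash screen first, this under-reports the full load time compared with our actual harness:

```python
import subprocess
import time

import win32gui
import win32process

GIMP_EXE = r"C:\Program Files\GIMP 2\bin\gimp-2.10.exe"  # assumed install path

def has_visible_window(pid):
    """Return True if the process owns any visible top-level window."""
    found = []
    def callback(hwnd, _):
        if win32gui.IsWindowVisible(hwnd):
            _, window_pid = win32process.GetWindowThreadProcessId(hwnd)
            if window_pid == pid:
                found.append(hwnd)
        return True  # continue enumeration
    win32gui.EnumWindows(callback, None)
    return bool(found)

start = time.time()
proc = subprocess.Popen([GIMP_EXE])
while not has_visible_window(proc.pid):
    time.sleep(0.05)
print(f"First window visible after {time.time() - start:.2f} s")
```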

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
     
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage (an example command line follows this list):
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile
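
As an illustration of the first of those encodes, a roughly equivalent HandBrakeCLI invocation is shown below, driven through the same kind of Python wrapper we use elsewhere. The file names are placeholders, and HandBrake's -b flag targets an average bitrate rather than strict CBR, so take the flags as an approximation of the settings listed above:

```python
import subprocess

# Approximation of the 720p60, x264, 6000 kbps, Fast preset, High profile encode.
# Input/output names are placeholders; HandBrakeCLI must be on the PATH.
subprocess.run(
    [
        "HandBrakeCLI",
        "-i", "c920_1080p60_source.mp4",
        "-o", "out_720p60_x264.mp4",
        "-e", "x264",
        "-b", "6000",           # average bitrate in kbps (not true CBR)
        "-r", "60", "--cfr",    # constant 60 fps output
        "-w", "1280", "-l", "720",
        "--encoder-preset", "fast",
        "--encoder-profile", "high",
    ],
    check=True,
)
```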

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux

When the suite is in full swing, we want to return to running LinuxBench 1.0. This was in our 2016 test, but was ditched in 2017 as it added an extra layer of complication to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We are in the process of automating around a dozen games at four different performance levels. A good number of games will have frame time data; however, due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing. So far we have the following games automated:

  • World of Tanks enCore (standalone benchmark)
  • Final Fantasy XV (standalone benchmark, standard detail to avoid overdraw)
  • Far Cry 5
  • Shadow of War
  • GTA5
  • F1 2017
  • Civilization 6
  • Car Mechanic Simulator 2018

We are also in the process of testing the following for automation, with varying success:

  • Ashes of the Singularity: Classic (is having issues with command line)
  • Total War: Thrones of Britannia (will not accept mouse input when loaded)
  • Deus Ex: Mankind Divided (current test not portable, might be Denuvo limited)
  • Steep
  • For Honor
  • Ghost Recon

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the rest of the CPU benchmarks, we use an RX 460, as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I ran that came out exactly 50:50, we are doing it the other way: more games, fewer GPUs.

Scale Up vs Scale Out: Benefits of Automation

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’
  2. It allows us to test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and space), which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily

Our benchmark suite collates all the results and writes data to a central storage platform as the tests are running, which I can probe mid-run to update data as it comes through. This also acts as a sanity check in case any of the data looks abnormal.
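
In spirit, the runner is not much more complicated than the sketch below: iterate over the tests, enforce a fixed cooldown between them, and append each result to a file on shared storage as soon as it lands. The test list, cooldown value, and share path here are all illustrative rather than our actual configuration:

```python
import csv
import time
from datetime import datetime

COOLDOWN_SECONDS = 120                          # fixed pause between tests (illustrative)
RESULTS_CSV = r"\\storage\bench\results.csv"    # central share (placeholder path)

def run_test(name):
    """Placeholder for launching a benchmark and parsing its score."""
    # In a real suite this launches the test binary and parses its output;
    # here it returns a dummy value so the loop structure is clear.
    return 0.0

TESTS = ["pov-ray", "cinebench_r15", "7zip_b"]  # illustrative test list

for test in TESTS:
    score = run_test(test)
    # Append immediately so partial runs are visible from the central store mid-run.
    with open(RESULTS_CSV, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), test, score])
    time.sleep(COOLDOWN_SECONDS)                # consistent cooldown before the next test
```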

We do have one major limitation, and that rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at once, our gaming script probes Steam’s own APIs to determine whether we are ‘online’ or not, and runs offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
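
One way to approximate that check is the public GetPlayerSummaries endpoint of the Steam Web API, which reports a persona state for the account (0 means offline). The sketch below assumes you have a Web API key and the account's 64-bit SteamID; it shows the shape of the probe rather than our exact script:

```python
import requests

STEAM_API_KEY = "YOUR_WEB_API_KEY"   # placeholder
STEAM_ID64 = "76561198000000000"     # placeholder 64-bit SteamID

def account_is_online():
    """Return True if the Steam account currently shows as online."""
    r = requests.get(
        "https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v2/",
        params={"key": STEAM_API_KEY, "steamids": STEAM_ID64},
        timeout=10,
    )
    r.raise_for_status()
    player = r.json()["response"]["players"][0]
    return player.get("personastate", 0) != 0   # 0 = offline

if account_is_online():
    print("Account busy elsewhere - run offline-capable tests first")
else:
    print("Account free - safe to log in and run online-only titles")
```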

Benchmark Suite Rollout

This will be the first review to use our new benchmark suite, at least the CPU portion of it. We are still working on the new gaming suite. So far for this review we have tested 8-9 processors, and I expect to iron out any remaining inconsistencies further into September, after several key industry events over the next few weeks.

As always, we do take requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state that it is not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works and is relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.
