Concluding Remarks

While the primary purpose of this exercise was simply to update our datasets for future system reviews, it nonetheless proved to be an enlightening one, and something worth sharing. We already had an idea of what to expect going into refreshing our benchmark data for Meltdown and Spectre, yet we still managed to find a surprise or two while looking at Intel's NUC7i7BNH. The table below summarizes the extent of the performance loss in the various benchmarks.

Meltdown & Spectre Patches - Impact on the Intel NUC7i7BNH Benchmarks
Benchmark Performance Notes (Fully Patched vs. Unpatched)
BAPCo SYSmark 2014 SE - Overall -5.47%
BAPCo SYSmark 2014 SE - Office -5.17%
BAPCo SYSmark 2014 SE - Media -4.11%
BAPCo SYSmark 2014 SE - Data & Financial Analysis -2.05%
BAPCo SYSmark 2014 SE - Responsiveness -10.48%
   
Futuremark PCMark 10 Extended -2.31%
Futuremark PCMark 10 Essentials -6.56%
Futuremark PCMark 10 Productivity -8.03%
Futuremark PCMark 10 Gaming +5.56%
Futuremark PCMark 10 Digital Content Creation -0.33%
   
Futuremark PCMark 8 - Home -1.9%
Futuremark PCMark 8 - Creative -2.32%
Futuremark PCMark 8 - Work -0.83%
Futuremark PCMark 8 - Storage -1.34%
Futuremark PCMark 8 - Storage Bandwidth -29.15%
   
Futuremark PCMark 7 - PCMark Suite Score -4.03%
   
Futuremark 3DMark 11 - Entry Preset +2.44%
   
Futuremark 3DMark 13 - Cloud Gate +1.14%
Futuremark 3DMark 13 - Ice Storm -13.73%
   
Agisoft Photoscan - Stage 1 -2.09%
Agisoft Photoscan - Stage 2 -12.82%
Agisoft Photoscan - Stage 3 -6.70%
Agisoft Photoscan - Stage 4 -2.84%
Agisoft Photoscan - Stage 1 (with GPU) +1.1%
Agisoft Photoscan - Stage 2 (with GPU) +1.46%
   
Cinebench R15 - Single Threaded +3.58%
Cinebench R15 - Multi-Threaded -0.32%
Cinebench R15 - Open GL +3.78%
   
x264 v5.0 - Pass I -1.1%
x264 v5.0 - Pass II -0.75%
   
7z - Compression -0.16%
7z - Decompression -0.38%
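The deltas in the table are simple percent changes of the fully patched score against the unpatched baseline. As a quick sketch (the score values here are hypothetical, chosen to reproduce the SYSmark overall figure above):

```python
def percent_delta(patched: float, unpatched: float) -> float:
    """Percent change of the patched score versus the unpatched baseline."""
    return (patched - unpatched) / unpatched * 100.0

# Hypothetical scores: an unpatched baseline of 100 falling to 94.53
# corresponds to the -5.47% SYSmark 2014 SE overall delta above.
delta = percent_delta(94.53, 100.0)
print(f"{delta:+.2f}%")  # → -5.47%
```

Note that for time-based benchmarks (Photoscan stage times, x264 passes) the comparison is done on the derived scores/rates, so a negative delta consistently means "slower after patching."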

Looking at the NUC – and this should be broadly representative of most SSD-equipped Haswell+ systems – there isn't a significant universal trend. The standard for system tests such as these is +/- 3% performance variability, which covers a good chunk of the sub-benchmarks. What's left are the more meaningful performance impacts in select workloads of the BAPCo SYSmark 2014 SE and Futuremark PCMark 10 suites, particularly the storage-centric benchmarks. Beyond those, certain compute workloads (such as the 2nd stage of the Agisoft Photoscan benchmark) experience a performance loss of more than 10%.

On the whole, we see that the patches for Meltdown and Spectre affect real-world application benchmarks, while synthetic ones are largely unaffected. The common factor among most of the affected benchmarks is storage and I/O: the greater the number of I/O operations, the more likely a program is to feel the impact of the patches. Conversely, a compute-intensive workload that does little in the way of I/O is more or less unfazed by the changes. There is a certain irony in the fact that, taken to its logical conclusion, patching the CPU ends up slowing storage performance, with the most impacted systems being those with the fastest storage.
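This split is easy to reproduce with a toy microbenchmark: a loop that makes a system call on every iteration pays the kernel page-table isolation (KPTI) cost on a patched machine, while a pure user-space compute loop does not. A minimal Python sketch (the loop bodies and iteration count are arbitrary; running the same script on patched and unpatched systems is what would expose the delta):

```python
import os
import time

def time_loop(fn, n=100_000):
    """Wall-clock time for n calls of fn."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# Syscall-heavy loop: every os.stat() crosses the user/kernel boundary,
# so on a KPTI-patched kernel each iteration pays the extra
# page-table switch.
syscall_time = time_loop(lambda: os.stat("."))

# Compute-heavy loop: stays entirely in user space, so the Meltdown
# and Spectre patches have little effect on it.
compute_time = time_loop(lambda: sum(range(50)))

print(f"syscalls: {syscall_time:.3f}s  compute: {compute_time:.3f}s")
```

The syscall loop is the one whose timing moves between patched and unpatched runs, which mirrors what the storage- and responsiveness-heavy sub-benchmarks above are seeing.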

As for what this means for future system reviews, the studies done as part of this article give us a way forward without completely invalidating all of the benchmarks we have processed over the last few years. While we can't reevaluate every last system – and so old data will need to stick around for a while longer still – these results mean that the data from unimpacted benchmarks remains valid and relevant even after the release of the Meltdown and Spectre patches. To be sure, we will be marking these results with an asterisk to denote this, but ultimately this will allow us to continue comparing new systems to older ones in at least a subset of our traditional benchmarks. Combined with back-filling benchmarks for those older systems we still have on hand, this lets us retain a good degree of review and benchmark continuity going forward.

84 Comments

  • Drazick - Saturday, March 24, 2018 - link

    This is perfect!

    Thank You.
  • nocturne - Friday, March 23, 2018 - link

    I'm wondering why there were different builds of windows tested, when the patches can be disabled via a simple powershell command. Performance can vary wildly for synthetic tests across subsequent builds, especially with insider builds.

    I can understand how this comparison gives you the /before and after/, but testing across different builds doesn't show you anything about the performance impact of the patches themselves.
  • ganeshts - Saturday, March 24, 2018 - link

    BIOS patches (CPU microcode) can't be turned off from within the OS. But I did use the InSpectre utility to do quick testing of the extensively affected benchmarks across all the builds (as applicable). The performance loss in those benchmarks was consistent with what we got with the final build (309) in a fully patched state (BIOS v0062).

    By the way, none of these builds are insider builds.

    The reason we have listed these versions is just to indicate the build used to collect comprehensive data for the configuration.

    The builds vary because the testing was done over the course of two months, as Intel kept revising their fix and MS also had to modify some of their patches.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Saturday, March 24, 2018 - link

    Futuremark Storage Bench>
    Why are you getting 312 MB/s (unpatched) bandwidth for a drive that has an average read speed of 1000 MB/s?

    Please clarify why this synthetic test has any basis in fact
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Saturday, March 24, 2018 - link

    I'm only asking this because I have been getting real world results that have little relationship to synthetic tests

    For example, simply swapping a CPU from a 2.6 GHz dual-core to a 3.3 GHz quad-core while keeping all other hardware and software the same will add a couple of seconds to my boot times (same O.S.)

    Now, I never expected a faster quadcore to take longer to boot but it does

    Is there more overhead as you add cores and could this be measured with a synthetic test?

    Do you believe the synthetic test is actually measuring the bandwidth of the SSD, or how fast the CPU can process the data coming from the SSD?

    How would this differ from a real world test?
  • hyno111 - Sunday, March 25, 2018 - link

    The Futuremark storage benchmark uses a real-world load to test the overall disk throughput. The official sequential r/w speed does not represent actual use cases and is mainly used for advertising.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Sunday, March 25, 2018 - link

    "Futuremark Storage Benchmark used real world load to test the overall disk throughput."
    ----------------------------------------------------------------------------------------------------------------------
    O.K., except my point was you are not measuring the disk throughput which would stay the same regardless of slowdowns in the processor

    You are testing how fast the processor can handle the data coming from the disk "sorta"

    The synthetic test would still not tell me that my faster quadcore would boot slower than my dualcore in the example given, therefore it also does not directly relate to a real world test

    The disk hasn't changed and neither has its actual throughput
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Sunday, March 25, 2018 - link

    "The synthetic test would still not tell me that my faster quadcore would boot slower than my dualcore in the example given, therefore it also does not directly relate to a real world test"
    -----------------------------------------------------------------------------------------------------------
    Before you answer, I admit that the example above does not tell me the actual throughput of the disk.
    It is used to show that the synthetic test does not directly relate to the results you might get in a real world test, yet both my example and AnandTech's example do not show the actual disk throughput which stays the same
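The question raised in this thread – what a trace-replay "storage bandwidth" number actually measures – can be made concrete with some back-of-the-envelope arithmetic. All figures below are hypothetical, picked only to mirror the 312 MB/s vs. 1000 MB/s observation above: the benchmark divides bytes moved by total elapsed time, and that total includes host-side CPU and syscall work, so the reported figure falls when the CPU slows down even though the drive itself is unchanged.

```python
# Hypothetical trace-replay accounting: reported bandwidth is
# bytes moved over *total* time, which includes host-side overhead.
bytes_moved = 10 * 1024**3   # 10 GiB of I/O replayed from the trace
device_time = 10.0           # seconds the SSD alone needs (~1024 MiB/s)
host_time = 22.8             # seconds of host-side (CPU/syscall) work

effective_mib_s = bytes_moved / (device_time + host_time) / 1024**2
print(f"{effective_mib_s:.0f} MiB/s")  # ~312, from a ~1024 MiB/s drive
```

Under this accounting, patches that lengthen host_time lower the reported "storage bandwidth" without touching the drive, which is consistent with the -29.15% PCMark 8 Storage Bandwidth result in the table.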
  • akula2 - Saturday, March 24, 2018 - link

    I do not have an iota of doubt that all these so-called vulnerabilities were well thought out and deliberately pre-planned by the Deep State during the CPU architecture design stage. The result is a huge loss of trust in brands like Intel, who were/are part of this epic shamelessness! I'm pretty sure some of the tech media houses are part of this syndicate, willingly or not. Now, I do not give any benefit of the doubt to AMD either.

    The gigantic problem: what is the alternative? The answer lies in nations taking the lead to set up companies away from the influence of the Deep State, ideally in Asia.
  • FullmetalTitan - Saturday, March 24, 2018 - link

    I thought I only had to deal with everyone's favorite Anandtech loony, but now we have the conspiracy nuts in here too?
    Can we get some forum moderation please?
