Concluding Remarks

While the primary purpose of this exercise was simply to update our datasets for future system reviews, it nonetheless proved to be an enlightening one, and something worth sharing. We already had an idea of what to expect going into refreshing our benchmark data for Meltdown and Spectre, and yet we still managed to find a surprise or two while looking at Intel's NUC7i7BNH. The table below summarizes the extent of the performance loss in the various benchmarks.

Meltdown & Spectre Patches - Impact on the Intel NUC7i7BNH Benchmarks
Benchmark Performance Notes (Fully Patched vs. Unpatched)
BAPCo SYSmark 2014 SE - Overall -5.47%
BAPCo SYSmark 2014 SE - Office -5.17%
BAPCo SYSmark 2014 SE - Media -4.11%
BAPCo SYSmark 2014 SE - Data & Financial Analysis -2.05%
BAPCo SYSmark 2014 SE - Responsiveness -10.48%
Futuremark PCMark 10 Extended -2.31%
Futuremark PCMark 10 Essentials -6.56%
Futuremark PCMark 10 Productivity -8.03%
Futuremark PCMark 10 Gaming +5.56%
Futuremark PCMark 10 Digital Content Creation -0.33%
Futuremark PCMark 8 - Home -1.9%
Futuremark PCMark 8 - Creative -2.32%
Futuremark PCMark 8 - Work -0.83%
Futuremark PCMark 8 - Storage -1.34%
Futuremark PCMark 8 - Storage Bandwidth -29.15%
Futuremark PCMark 7 - PCMark Suite Score -4.03%
Futuremark 3DMark 11 - Entry Preset +2.44%
Futuremark 3DMark 13 - Cloud Gate +1.14%
Futuremark 3DMark 13 - Ice Storm -13.73%
Agisoft Photoscan - Stage 1 -2.09%
Agisoft Photoscan - Stage 2 -12.82%
Agisoft Photoscan - Stage 3 -6.70%
Agisoft Photoscan - Stage 4 -2.84%
Agisoft Photoscan - Stage 1 (with GPU) +1.1%
Agisoft Photoscan - Stage 2 (with GPU) +1.46%
Cinebench R15 - Single Threaded +3.58%
Cinebench R15 - Multi-Threaded -0.32%
Cinebench R15 - Open GL +3.78%
x264 v5.0 - Pass I -1.1%
x264 v5.0 - Pass II -0.75%
7z - Compression -0.16%
7z - Decompression -0.38%

Looking at the NUC – and this should be representative of most SSD-equipped Haswell and later systems – there isn't a significant universal trend. Run-to-run variability for system tests such as these is around +/- 3%, which covers a good chunk of the sub-benchmarks. What's left are more meaningful performance impacts in select workloads of the BAPCo SYSmark 2014 SE and Futuremark PCMark 10 suites, particularly the storage-centric benchmarks. Beyond those, certain compute workloads (such as the second stage of the Agisoft Photoscan benchmark) lose more than 10% of their performance.
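Separating real regressions from run-to-run noise is a simple filter over the table above. As a minimal sketch (using a handful of the deltas from our table, and assuming the +/- 3% variability figure as the cutoff), flagging which results fall outside the noise band looks like this:

```python
# Flag benchmark deltas that fall outside typical +/-3% run-to-run variance.
# Values are fully-patched vs. unpatched deltas taken from the table above.
RUN_VARIANCE_PCT = 3.0

deltas = {
    "SYSmark 2014 SE - Responsiveness": -10.48,
    "PCMark 8 - Storage Bandwidth": -29.15,
    "Agisoft Photoscan - Stage 2": -12.82,
    "Cinebench R15 - Single Threaded": +3.58,
    "7z - Compression": -0.16,
}

# Keep only results whose magnitude exceeds the noise threshold.
significant = {name: d for name, d in deltas.items()
               if abs(d) > RUN_VARIANCE_PCT}

# Worst regressions first.
for name, d in sorted(significant.items(), key=lambda kv: kv[1]):
    print(f"{name}: {d:+.2f}%")
```

By this yardstick the 7z result is indistinguishable from noise, while the storage and responsiveness results clearly are not.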

On the whole, we see that the patches for Meltdown and Spectre affect real-world application benchmarks, while synthetic ones are largely unaffected. The common factor among most of the impacted benchmarks is storage and I/O: the greater the number of I/O operations, the more likely a program is to feel the impact of the patches. Conversely, a compute-intensive workload that does little in the way of I/O is more or less unfazed by the changes. There is a certain irony in the fact that, taken to its logical conclusion, patching the CPU instead renders storage performance slower, with the most impacted systems being those with the fastest storage.
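The mechanism behind this is that the Meltdown mitigation (kernel page-table isolation) adds a fixed cost to every user/kernel transition, so workloads that issue many syscalls pay repeatedly while pure compute pays nothing. A minimal sketch of that contrast, using `os.getpid()` as a cheap stand-in for an I/O request (the exact timings are system-dependent and are illustrative only):

```python
import os
import time

def time_loop(fn, n=200_000):
    """Time n repeated calls of fn, returning elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# Syscall-heavy loop: each os.getpid() crosses the user/kernel boundary,
# paying the page-table swap cost on a KPTI-patched kernel.
syscall_time = time_loop(os.getpid)

# Compute-only loop: pure user-space arithmetic, no kernel transitions,
# so it is essentially unaffected by the patches.
compute_time = time_loop(lambda: 12345 * 6789)

print(f"syscall loop: {syscall_time:.3f}s, compute loop: {compute_time:.3f}s")
```

Run on the same machine before and after patching, the syscall loop is where the gap opens up; the compute loop should be flat, which mirrors what the benchmark table shows.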

As for what this means for future system reviews, the studies done as part of this article give us a way forward without completely invalidating all the benchmarks that we have processed in the last few years. While we can't reevaluate every last system – and so old data will need to stick around for a while longer still – these results mean that the data from unimpacted benchmarks is still valid and relevant even after the release of the Meltdown and Spectre patches. To be sure, we will be marking these results with an asterisk to denote this, but ultimately this will allow us to continue comparing new systems to older systems in at least a subset of our traditional benchmarks. Combined with back-filling benchmarks on those older systems that we still have, this lets us retain a good degree of review and benchmark continuity going forward.


  • chuychopsuey - Friday, March 23, 2018 - link

    Wow! That's pretty significant. Looks like it takes you a generation or two backwards.
  • Samus - Saturday, March 24, 2018 - link

    The real damage comes for people with NVMe SSDs or other CPU-centric PCIe hardware. 29% performance loss.
  • III-V - Saturday, March 24, 2018 - link

    On a single benchmark.

    You might need to see an optometrist.
  • Samus - Sunday, March 25, 2018 - link

    Optometrist, no. Optimist, yes.
  • umano - Sunday, March 25, 2018 - link

    I want to be an optimist, but this is not a very powerful CPU. "This significant performance loss is partly due to the NVMe drive performance now being CPU bound" – so maybe on more powerful CPUs this percentage will be lower... I hope so.
  • linuxgeex - Monday, March 26, 2018 - link

    Yes, but people with high-end CPUs will tend to have even faster drives, or RAID, or Optane as a disk cache, and I've seen benchmarks where Optane performance dropped >40% on i9. Desktop users won't feel this very much, but others who have been paying 300% more for 25% more performance for scale-up business logic are crying.
  • iter - Monday, March 26, 2018 - link

    Ultra high performance is only required in workstations. Workstations are used for important work. It is best to keep such systems offline. No need for antivirus, firewall, updating, patching and all that stuff that impedes performance and introduces downtime.

    The maxim among professionals is "if it works, don't touch it". We are well past the point where it is worth keeping everything up to date with the latest versions. It's been a while since version updates were about making things better for the user; they are mostly about making things better for the big software corporations.

    Use a separate low end system for internet. With nothing important on it.
  • PeachNCream - Monday, March 26, 2018 - link

    I can't think of any situations where I've seen professional workstations that are intentionally kept offline and deprived of software updates. Perhaps that's something that would happen in a small business or SOHO where the end user is also her own IT support and might make such an odd decision, but as a client to a company using that sort of policy, I'd be concerned they were making a larger error of judgement in adhering to automotive or mechanical engineering wisdom of the 20th Century.
  • iter - Monday, March 26, 2018 - link

    That same wisdom that has resulted in countless security breaches and the privacy of billions of people violated.

    Thanks but no thanks. The "standards" are too low.

    The lack of internet connectivity doesn't in any way impede dedicated support personnel from supporting. Those people are supposed to know their business, not google it.

    The only error in judgement you should be concerned about is your own. A system that is dedicated to doing work has no business being connected to the internet. There is absolutely no good reason to update it as long as it operates properly. The update will add no value, will only introduce downtime, and is likely to break things.
  • PeachNCream - Monday, March 26, 2018 - link

    Despite those strongly expressed thoughts, there are very few workstations running as stand-alone systems that don't receive vendor software updates.
