Benchmarking Performance: CPU Office Tests

The office programs we use for benchmarking aren't specific programs per se, but industry-standard tests that hold weight with professionals. The goal of these tests is to use an array of software and techniques that a typical office user might encounter, such as video conferencing, document editing, architectural modeling, and so on.

All of our benchmark results can also be found in our benchmark engine, Bench.

Chromium Compile (v56)

Our new compilation test uses Windows 10 Pro and Visual Studio Community 2015.3 with the Windows 10 SDK to compile a nightly build of Chromium. We've fixed the test to a build from late March 2017, and each run is a fresh, full compile. Compilation is the typical example given of a variable-threaded workload: some of the compiling and linking is serial, whereas other parts are multithreaded.
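For readers who want to script a similar measurement, a minimal timing harness might look like the sketch below. The checkout path, output directory, and gn/ninja invocations are assumptions about a typical Chromium checkout with depot_tools on the PATH, not the exact scripts used for this test.

```python
import shutil
import subprocess
import time

# Illustrative paths for a local Chromium checkout; adjust to your own setup.
SRC_DIR = r"C:\chromium\src"
OUT_DIR = r"C:\chromium\src\out\Bench"

def run(cmd):
    """Run a build command inside the Chromium source tree, failing loudly on error."""
    subprocess.run(cmd, cwd=SRC_DIR, shell=True, check=True)

def timed_full_build():
    # Start from a clean output directory so every run is a fresh, full compile.
    shutil.rmtree(OUT_DIR, ignore_errors=True)
    run("gn gen out\\Bench")           # generate ninja build files
    start = time.perf_counter()
    run("ninja -C out\\Bench chrome")  # full build of the 'chrome' target
    return time.perf_counter() - start

if __name__ == "__main__":
    minutes = timed_full_build() / 60
    print(f"Full compile took {minutes:.1f} minutes")
```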

Office: Chromium Compile (v56)

For our compile test, it would appear that the extra memory bandwidth afforded by the quad-channel configuration of Skylake-X can have a direct benefit on compile performance.

PCMark 10

PCMark 10 is the 2017 update to the family favorite, PCMark 8. PCMark 8 has been part of our test bed since our latest suite update in Q1. For the most part it runs well, although on processors it doesn't recognize, some tests will not complete, leaving holes in our benchmark data (there's also an odd directory quirk in one test that causes issues). The newest version, PCMark 10, is the answer.

The new test is adapted to workflows more typical of 2016/2017. With the advent of office applications that perform deeper compute tasks, and the wave of online gamers and streamers, the idea behind PCMark 10 is to give a better 'single number' result that can provide a comparable metric between systems. Single metrics never tell the whole story, so we're glad that Futuremark provides a very detailed breakdown of what goes on.

Ganesh's article on PCMark 10 goes into more detail than I will here, but the 'Extended Benchmark' runs through four different sets of tests: Essential, Productivity, Creation and Gaming. Each of these has sub-test results as well, including startup performance, web performance, video conferencing, photo/video editing, spreadsheets, rendering, and physics, which you can find in Bench.
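To illustrate how a set of sub-test scores can be rolled up into a single comparable number, the sketch below combines group scores with a weighted geometric mean. The scores and weights are placeholder values for illustration, not Futuremark's published scoring coefficients.

```python
import math

def weighted_geomean(scores, weights):
    """Weighted geometric mean: exp(sum(w_i * ln(s_i)) / sum(w_i))."""
    total_weight = sum(weights)
    return math.exp(sum(w * math.log(s) for s, w in zip(scores, weights)) / total_weight)

# Hypothetical group scores with equal weights, purely for illustration.
group_scores = {"Essential": 9000, "Productivity": 7500, "Creation": 6200}
overall = weighted_geomean(list(group_scores.values()), weights=[1, 1, 1])
print(f"Composite score: {overall:.0f}")
```

A geometric mean is a natural fit for this kind of composite, since a weak result in any one group pulls the overall number down proportionally rather than being averaged away.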

Office: PCMark10-1 Essential Set Score
Office: PCMark10-2 Productivity Set Score
Office: PCMark10-3 Creation Set Score
Office: PCMark10-4 Physics Score

PCMark 8

Despite originally coming out in 2013, Futuremark has maintained PCMark 8 to keep it relevant in 2017. On the scale of complicated tasks, PCMark focuses more on the low-to-mid range of professional workloads, making it a good indicator for what people consider 'office' work. We run the benchmark from the command line in 'conventional' mode, meaning C++ over OpenCL, to remove the graphics card from the equation and focus purely on the CPU. PCMark 8 offers Home, Work and Creative workloads, with some software tests shared and others unique to each benchmark set.

Office: PCMark8 Creative (non-OpenCL)
Office: PCMark8 Home (non-OpenCL)
Office: PCMark8 Work (non-OpenCL)

Comments

  • boeush - Friday, October 6, 2017

    To expand on this a bit more: with the "core wars" now in effect, I wonder if hyperthreading might be an unnecessary holdover feature that could actually be reducing the performance of many-core (8+) chips in all but the most extremely threaded scenarios. Might it not be better to have many simple/efficient cores, rather than fewer cores loaded with hyperthreading overhead in terms of both die area and energy density, as well as cache thrashing?
  • Zingam - Saturday, October 7, 2017

    Hyperthreading was invented to optimize the use of CPU logic that would otherwise remain unutilized during high loads. There is no way it reduces performance on current architectures. There are "hyperthreading-less" CPUs, and you can compare them to hyperthreaded CPUs.
  • boeush - Monday, October 9, 2017

    Hyperthreading was particularly useful in the context of not having a lot of cores to work with - allowing you to squeeze extra multi-threaded performance out of your dual- or quad-core CPU. It comes at the cost of extra silicon and complexity in the CPU pipeline, but allows better utilization of CPU resources, as you mention. At runtime, it has two detrimental effects on single-thread performance: (1) splitting/sharing the on-CPU cache among more threads, raising the frequency of cache misses for any given thread as the threads trample over each other's cached data, and (2) maximizing CPU resource utilization, thereby maximizing dissipated energy per unit area - and thereby driving the CPU into a performance-throttling regime.

    With more cores becoming available per CPU in this age of "core wars", it's no longer as important to squeeze every last ounce of resource utilization from each core. Most workloads/applications are not very parallelizable in practice, so you end up hitting the limits of Amdahl's law (illustrated in the sketch after this comment) - at which point single-thread performance becomes the main bottleneck. And to maximize single-thread performance on any given core, you need two things: (a) the maximum attainable clock frequency (resource utilization be damned), and (b) as much uncontested, dedicated on-CPU cache as you can get. Hyperthreading is an impediment to both of those goals.

    So, it seems to me that if we're going toward the future where we routinely have CPUs with 8 or more cores, then it would be beneficial for each of those cores to be simpler, more compact, more streamlined and optimized for single-thread performance (while foregoing hyperthreading support), while spending any resulting die space savings on more cores and/or more cache.
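To put a number on the Amdahl's law ceiling referenced above, here is a minimal sketch; the parallel fractions are assumed values for illustration, not measurements from any CPU in this review.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only a fraction of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assumed parallel fractions, for illustration only.
for p in (0.50, 0.90, 0.95):
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

Even at a 95% parallel fraction, going from 8 to 16 cores yields well under a 2x gain, which is the diminishing-returns argument being made here.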
  • boeush - Monday, October 9, 2017

    To add to the above: 'more cores and/or more cache' - and/or better branch predictor, and/or faster/wider ALU and/or FPU, and/or more pipeline stages to support a faster clock, and/or...
  • alinypd - Saturday, October 7, 2017

    Slowest GAMING CPU Ever, Garbage!
  • yhselp - Saturday, October 7, 2017

    The i3-8100 is made utterly redundant by the necessity to buy a Z370 motherboard along with it; it'd be cheaper to get an i5-7400 with a lower-end motherboard. Intel...
  • watzupken - Saturday, October 7, 2017

    This applies to all the non-overclocking chips, particularly i5 and below. The high cost of the Z370 boards currently simply wipes out any price benefits. For example, an i5-8400 is good value for money, but once you factor in the price of a motherboard with a Z370 chipset, it may not be that good value for money anymore.
  • FourEyedGeek - Saturday, October 7, 2017

    Enjoyed the article, thanks. An overclocked Ryzen 1700 looks appealing.
  • nierd - Saturday, October 7, 2017

    "The problem here is *snip* Windows 10, *snip* All it takes is for a minor internal OS blip and single-threaded performance begins to diminish. Windows 10 famously kicks in a few unwanted instruction streams when you are not looking,"

    This is why single-threaded performance is a silly benchmark in today's market, unless you happen to boot to DOS to run something. Your OS is designed to use threads. There are no desktop systems in use today (in any market these processors will compete in - even if used as a server) that will ever run a single thread. The only processors that run single threads today are ... single-core processors (without hyperthreading, even).

    Open your task manager, click the performance tab, and look at the number of threads - when you have enough cores to match that number, then single-threaded performance is important. In the real world, how the processor handles multiple tasks and thread switching is more important. Even hardcore gamers seem to miss this mark, forgetting that behind the game the OS has threads for memory management, disk management, kernel routines, checking every piece of hardware in your system, antivirus, anti-malware (perhaps), network stack management, etc. That's not even counting if you run more than one monitor and happen to have web browsing or videos playing on another screen - plus anything else you are running in the background.

    The myth that you never need more than 4 cores is finally coming to rest - let's start seeing benchmarks that stress a system with 10 programs going in the background. My system will frequently be playing a movie, playing a game, and running Handbrake in the background, while it also serves as a Plex server, runs antivirus, has 32 tabs open in 2 different browsers, and frequently has something else playing at the same time. A true benchmark would be multiple programs all tying up as many resources as possible - while a single app can give a data point, I want to see how these new multi-core beasts handle real-world scenarios and response times.
  • coolhardware - Sunday, October 8, 2017

    Your comment has merit. It is crazy the number of tasks running on a modern OS. I sometimes miss the olden days where a clean system truly was clean and had minimal tasks upon bootup. ;-)
