CPU Performance: Office Tests

The Office test suite is designed around more industry-standard tests that focus on office workflows, video conferencing, and some synthetics, but we also bundle compiler performance into this section. For users who have to evaluate hardware in general, these are usually the benchmarks that most will consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards over around two decades. The latest complete system test suite is PCMark 10, which upgrades over PCMark 8 with updated tests and more OpenCL applied to use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench; however, the key metric for the review is the overall score.
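
For readers wondering how the overall number relates to the individual areas, below is a minimal sketch of a weighted geometric-mean aggregation, the general approach UL uses for PCMark 10's scoring. The group names and weights here are illustrative assumptions, not UL's published coefficients.

```python
# Illustrative sketch only: combine PCMark-style test group scores into a
# single overall number using a weighted geometric mean. The group names and
# weights below are assumptions for demonstration, not UL's actual scoring.
import math

group_scores = {                      # hypothetical group scores from one run
    "Essentials": 9800,
    "Productivity": 8200,
    "Digital Content Creation": 7400,
    "Gaming": 15000,                  # included in the Extended run
}
weights = {                           # assumed weights, summing to 1.0
    "Essentials": 0.25,
    "Productivity": 0.25,
    "Digital Content Creation": 0.25,
    "Gaming": 0.25,
}

overall = math.exp(sum(w * math.log(group_scores[g]) for g, w in weights.items()))
print(f"Overall (Extended-style) score: {overall:.0f}")
```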

PCMark10 Extended Score

Across this general mix of tests, the new processors from Intel take the top three spots, in order. Even the Core i5-9600K finishes ahead of the Core i7-8086K.

Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, looking at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for reviewers who compile often, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile on Windows, along with an approximately 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
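
As a rough illustration of how the metric is produced (not the exact harness used for the review), the sketch below times a ninja-driven build and converts the elapsed time into compiles per day; the output directory and build target are assumptions based on Google's generic Windows build instructions.

```python
# Minimal sketch, not the review harness: time one full Chromium build with
# ninja and report the result as compiles per day. Assumes the checkout is
# already synced and gn has generated the output directory.
import subprocess
import time

OUT_DIR = "out/Default"   # assumed gn output directory
TARGET = "chrome"         # assumed build target

start = time.time()
subprocess.run(["ninja", "-C", OUT_DIR, TARGET], check=True)
elapsed = time.time() - start        # seconds for a single full compile

compiles_per_day = 86400 / elapsed   # 86,400 seconds in a day
print(f"Compile time: {elapsed / 60:.1f} min -> {compiles_per_day:.2f} compiles/day")
```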

Compile Chromium (Rate)

Pushing the raw frequency of the all-core turbo seems to work well in our compile test.

3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overarching tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen render, or Fire Strike Ultra, which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

For our tests, we report in Bench the results from every physics test, but for the sake of the review we keep it to the most demanding of each scene: Ice Storm Unlimited, Cloud Gate, Sky Diver, Fire Strike Ultra, and Time Spy.

3DMark Physics - Ice Storm Unlimited
3DMark Physics - Cloud Gate
3DMark Physics - Sky Diver
3DMark Physics - Fire Strike Ultra
3DMark Physics - Time Spy

The older Ice Storm test didn't much like the Core i9-9900K, pushing it back behind the Ryzen 7 1800X. For the more modern tests focused on PCs, the 9900K wins out. The lack of Hyper-Threading is hurting the other two parts.

GeekBench4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.
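
To give a flavour of the kind of peak-throughput kernel involved (purely illustrative, not GeekBench's actual code), a small timed FFT loop in Python looks like this:

```python
# Illustrative only - not GeekBench's code. A tiny FFT throughput kernel of
# the sort a synthetic suite uses to look for peak single-thread throughput.
import time
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(1 << 20) + 1j * rng.standard_normal(1 << 20)

iterations = 50
start = time.time()
for _ in range(iterations):
    np.fft.fft(data)
elapsed = time.time() - start

print(f"{iterations} x 1M-point FFTs in {elapsed:.2f} s "
      f"({iterations / elapsed:.1f} FFTs per second)")
```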

I’m including this test due to popular demand. The results do come across as overly synthetic, yet many users put a lot of weight behind it because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST Overall

Geekbench 4 - MT Overall

Comments

  • vext - Friday, October 19, 2018 - link

    Very good article, but here are my beefs.

    Why is there no mention of temperatures?

    According to Techspot the 9900k runs ridiculously hot under heavy loads. At stock clocks under a heavy Blender load it reaches 85°C with a Corsair H100i Pro, or Noctua NH-D15. Pushed to 5 GHz, it hits 100°C. At 5.1 GHz it FAILS. I suggest that Anandtech has failed by not discussing this.

    Techspot says:

    "There’s simply no way you’re going to avoid thermal throttling without spending around $100 on the cooler, at least without your PC sounding like a jet about to take off. Throw in the Corsair H100i Pro and the 9900K now costs $700 and you still can’t overclock, at least not without running at dangerously high temperatures."

    Why the focus on single threaded benchmarks? For the most part they are irrelevant. Yet they are posted in their own graph, at the front of each testing section, as though they were the most important data point. Just include them as a separate bar with the multi-thread benchmarks. Good Grief!

    Why post MSRP prices in every single benchmark? You can't even buy them for MSRP. There should be a single chart at the front of the article with a rough retail estimate for each processor, and links to the retailers. If the MSRP is necessary, then just add a column to the chart. Sheesh.

    Why no in depth cost/benefit comparison? A Ryzen 2600 with included cooler at $160 costs only one quarter of a 9900k with an aio cooler at $700. The $540 difference would buy a new RTX 2070 video card. Or three more Ryzen 2600's. For crying out loud.

    I like the 9900k, it's a good processor. It's intended for hobbyists that can play with custom loop cooling. But it's not realistic for most people.
  • mapesdhs - Sunday, October 21, 2018 - link

    All good questions... the silence is deafening. Thankfully, there's plenty of commentary on the value equation to be found. A small channel atm, but I like this guy's vids:

    https://www.youtube.com/watch?v=EWO5A9VMcyY
  • abufrejoval - Friday, October 19, 2018 - link

    I needed something a little bigger for my lab two or three years ago and came across an E5-2696v3 on eBay from China, a Haswell generation 18-core at $700.

    That chip didn't officially exist, but after digging a little deeper I found it's basically an E5-2699v3 which clocks a little higher (3.8 instead of 3.6 GHz) with 1-2 cores active. So it's basically a better chip for a fraction of the going price of the lesser one (the E5-2699v3 is still listed at €4649 by my favorite e-tailer). And yes, it's a perfect chip; Prime95'd it for hours, POVray'd and Blender'd for days until I was absolutely sure it was a prime quality chip.

    Officially it has a 145 Watt TDP, but I've only ever seen it go to 110 Watts on HWiNFO with Prime95 in its meanest settings: it must be a perfect bin. With the particle pusher it's never more than 93 Watts, while no part of the CPU exceeds 54°C with a Noctua 140mm fan, practically inaudible at 1000 rpm, cooling it. That's because the 18 cores and 36 threads never run faster than 2.8 GHz fully loaded. They also don't drop below it (except for idle, 1.855 Watts minimum btw.), so you can pretty much forget about the 2.3 GHz 'nominal' speed.

    It gets 2968.245803 on that benchmark, slightly above the i9-9900k, somewhat below the ThreadRipper. That's 22nm Haswell against 14++/12nm current and 18 vs 8/12 cores.

    This is rather typical for highly-threaded workloads: It's either cores or clocks and when the power ceiling is fixed you get higher throughput and energy efficiency when you can throw cores instead of clocks at the problem.

    I think it's a data point worth highlighting in this crazy clock race somewhat reminiscent of Pentium 4 days, heat vs. efficiency, a four year old chip beating the newcomer in performance and almost 3:1 in efficiency at far too similar prices.

    Yet, this specific chip will clock pretty high for a server chip, easily doing 3.6 GHz with eight cores seeing action from your game engine, while the remaining ten are often ignored: Perhaps that's a Ryzen effect, it used to be 4:14 earlier.

    I've done a BCLK overclock of 1.08 to have it reach the magic 4 GHz at maximum turbo, but it's not noticeable in real life, neck-and-neck with an E3-1276v3 which also turbos to 4 GHz on three cores out of four available, 3.9 at 4/4 with HT.
  • abufrejoval - Friday, October 19, 2018 - link

    2968.245803 on the particle pusher benchmark... need edit
  • icoreaudience - Friday, October 19, 2018 - link

    Move away from rar/lzma: the new darling of data compression is called Zstandard:
    https://www.zstd.net

    It comes with a nice integrated benchmark, which can easily ramp up with multithreading:
    zstd -b -1 -T8 fileToTest # benchmark level one on fileToTest using 8 threads

    Windows users can even download a pre-compiled binary directly from the release notice:
    https://github.com/facebook/zstd/releases/latest

    It would be great to see some numbers using this compressor on the latest Intel cores!
  • Kaihekoa - Friday, October 19, 2018 - link

    Looks like all your gaming benchmarks are GPU bound and there pointless. Why not use a 2080 Ti to eliminate/reduce GPU bottleneck?
  • Kaihekoa - Friday, October 19, 2018 - link

    therefore*
  • palladium - Friday, October 19, 2018 - link

    Can you please run some SPEC2006 benchmarks and see if Apple's SoC really has caught up to Intel's performance (per core), as mentioned by Andrei in his iPhone XS review? Thanks
  • VirpZ - Friday, October 19, 2018 - link

    Apart from Blender, your review is full of Intel-biased software for rendering.
  • Hifihedgehog - Friday, October 19, 2018 - link

    Hey Ian. I see your updated full-load power consumption results. Question: Why is it that the six-core i7-8086K is drawing so little power in comparison to everything else, including the quad-cores? Is this due to its better binning, or is this simply an error that crept in?
