CPU Performance: Office Tests

The Office test suite is designed around industry-standard tests that focus on office workflows, system meetings, and some synthetics, but we also bundle compiler performance into this section. For users who have to evaluate hardware in general, these are usually the benchmarks that most consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU still runs at the same frequency but is not throttled. Despite Intel saying that it recommends 'Intel Spec', the system it sent us to test was actually set up with the power limits opened up, and the results Intel provided for us to compare against internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards for around two decades. The latest complete system test suite is PCMark 10, upgrading over PCMark 8 with updated tests and more OpenCL invested into use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench; however, the key metric for the review is the overall score.

PCMark 10 Extended Score


Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, looking at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for the reviewers who often compile, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile on Windows, along with a 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
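The conversion from the measured wall-clock time to the compiles-per-day rate we plot is straightforward; a minimal sketch (the 4-hour compile time below is illustrative, not a measured result):

```python
def compiles_per_day(compile_seconds: float) -> float:
    """Convert one compile's wall-clock time into a daily throughput rate."""
    seconds_per_day = 24 * 60 * 60  # 86400
    return seconds_per_day / compile_seconds

# A hypothetical 4-hour (14400 s) compile works out to 6 compiles per day.
print(compiles_per_day(4 * 3600))  # 6.0
```

Expressing the result as a rate rather than a duration makes "bigger is better" hold across all our CPU graphs.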

Compile Chromium (Rate)


3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overriding tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

3DMark Physics - Time Spy


GeekBench 4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand. The results do come across as overly synthetic, but many users put a lot of weight behind it because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST Overall
Geekbench 4 - MT Overall

 


136 Comments


  • SaturnusDK - Wednesday, January 30, 2019 - link

    The price is the only big surprise here. At $3000 for the CPU alone, and three times that in system price, it's actually pretty decently priced. The performance is as expected, but it will soon be eclipsed. The only question is what price AMD will charge for its coming Zen 2 based processors in the same performance bracket; we won't know until then if the W3175X is a worthwhile investment.
  • HStewart - Wednesday, January 30, 2019 - link

    I thought the rumors were that this chip was going to be $8000. I am curious how the Cove version of this chip will perform and when it comes out.

    But let's be honest: unless you are extremely rich or crazy, buying any processor with a large number of cores is crazy. To me it seems like the high-end gaming market is being taken for a ride with all this core war: buy high-end cores now just to say you have the highest performance, then purchase a new one next year. Of course there is all the ridiculous process stuff. It's just interesting to see a 28-core Intel on Skylake and 14nm beat a 32-core AMD.

    As for the server side, I would think it more cost-effective to blade multiple lower-core units than fewer higher-core units.
  • jakmak - Wednesday, January 30, 2019 - link

    It's not really surprising to see a 28-core Intel beating a 32-core AMD. After all, it is no hidden mystery that the Intel chips not only have a small IPC advantage, but are also able to run at a higher clock rate (whatever the power draw). In this case, the Xeon-W excels where these two advantages combined work across 28 cores, so the 2 extra cores on the AMD side won't cut it.
    It is also obvious that the massive advantage works mostly in those cases where clock rate is the most important part.
  • MattZN - Wednesday, January 30, 2019 - link

    Well, it depends on whether you care about power consumption or not, jakmak. Traditionally the consumer space hasn't cared so much, but it's a bit of a different story when whole-system power consumption starts reaching for the sky. And it's definitely reaching for the sky with this part.

    The stock intel part burns 312W on the Blender benchmark while the stock threadripper 2990WX burns 190W. The OC'd Intel part burns 672W (that's right, 672W without a GPU) while the OCd 2990WX burns 432W.

    Now I don't know about you guys, but that kind of power dissipation in such a small area is not something I'm willing to put inside my house unless I'm physically there watching over it the whole time. Hell, I don't even trust my TR system's 330W consumption (at the wall) for continuous operation when some of the batches take several days to run. I run it capped at 250W.

    And... I pay for the electricity I use. It's not cheap to run machines far away from their maximally efficient point on the curve. Commercial machines have lower clocks for good reason.

    -Matt
  • joelypolly - Wednesday, January 30, 2019 - link

    Do you not have a hair dryer or vacuum or oil heater? They can all push up to 1800W or more
  • evolucion8 - Wednesday, January 30, 2019 - link

    That is a terrible example if you ask me.
  • ddelrio - Wednesday, January 30, 2019 - link

    lol How long do you keep your hair dryer going for?
  • philehidiot - Thursday, January 31, 2019 - link

    Anything up to one hour. I need to look pretty for my processor.
  • MattZN - Wednesday, January 30, 2019 - link

    Heh. That's a pretty bad example. People don't leave their hair dryers turned on 24x7, nor floor heaters (I suppose, unless it's winter). Big, big difference.

    Regardless, a home user is not likely to see a large bill unless they are doing something really stupid like crypto-mining. There is a fairly large distinction between the typical home-use of a computer vs a beefy server like the one being reviewed here, let alone a big difference between a home user, a small business environment (such as popular youtube tech channels), and a commercial setting.

    If we just use an average electricity cost of around $0.20/kWh (actual cost depends on where you live and the time of day, and can range from $0.08/kWh to $0.40/kWh or so)... but let's just use $0.20/kWh.

    For a gamer who is spending 4 hours a day burning 300W, the cost of operation winds up being around $7/month. Not too bad. Your average gamer isn't going to break the bank, so to speak. Mom and Dad probably won't even notice the additional cost. If you live in a cold environment, your floor heater will indeed cost more money to operate.

    If you are a solo content creator you might be spending 8 to 12 hours a day in front of the computer, for the sake of argument running Blender or encoding jobs in the background. 12 hours of computer use a day @ 300W costs around $22/month.

    If you are GN or Linus or some other popular YouTube site and you are running half a dozen servers 24x7 plus workstations for employees plus running numerous batch encoding jobs on top of that, the cost will begin to become very noticeable. Now you are burning, say, 2000W 24x7 (pie in the sky rough average), costing around $290/month ($3480/year). That content needs to be making you money.
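    The arithmetic behind those three figures is easy to check; a quick sketch, assuming the $0.20/kWh rate and a 30-day month as above:

```python
def monthly_cost(watts: float, hours_per_day: float,
                 usd_per_kwh: float = 0.20, days: int = 30) -> float:
    """Rough monthly electricity cost in USD for a constant load."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * usd_per_kwh

print(monthly_cost(300, 4))     # gamer: 7.2, i.e. ~$7/month
print(monthly_cost(300, 12))    # solo content creator: 21.6, ~$22/month
print(monthly_cost(2000, 24))   # small studio running 24x7: 288.0/month
```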

    A small business or commercial setting can wind up spending a lot of money on energy if no care at all is taken with regards to power consumption. There are numerous knock-on costs, such as A/C in the summer which has to take away all the equipment heat on top of everything else. If A/C is needed (in addition to human A/C needs), the cost is doubled. If you are renting colocation space then energy is the #1 cost and network bandwidth is the #2 cost. If you are using the cloud then everything has bloated costs (cpu, network, storage, and power).

    In any case, this runs the gamut. You start to notice these things when you are the one paying the bills. So, yes, Intel is kinda playing with fire here trying to promote this monster. Gaming rigs that aren't used 24x7 can get away with high burns, but once you are no longer a kid in a room playing a game these costs can start to matter. As machine requirements grow, running the machines closer to their maximum point of efficiency (which is at far lower frequencies) begins to trump other considerations.

    If that weren't enough, there is also the lifespan of the equipment to consider. A $7000 machine that remains relevant for only one year and has a $3000/year electricity bill is a big cost compared to a $3000 machine that is almost as fast and only has a $1500/year electricity bill. Or a $2000 machine. Or a $1000 machine. One has to weigh convenience of use against the total cost of ownership.
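    That total-cost-of-ownership comparison can be sketched in one line (the machine prices are the hypothetical figures above, not quotes):

```python
def tco(machine_usd: float, energy_usd_per_year: float, years: float = 1) -> float:
    """Total cost of ownership: purchase price plus cumulative energy cost."""
    return machine_usd + energy_usd_per_year * years

# First-year cost of the two hypothetical machines:
print(tco(7000, 3000))  # 10000.0
print(tco(3000, 1500))  # 4500.0
```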

    When a person is cognizant of the costs then there is much less of an incentive to O.C. the machines, or even run them at stock. One starts to run them like real servers... at lower frequencies to hit the maximum efficiency sweet spot. Once a person begins to think in these terms, buying something like this Xeon is an obvious and egregious waste of money.

    -Matt
  • 808Hilo - Thursday, January 31, 2019 - link

    Most servers run at idle speed. That is a sad fact. The sadder fact is that they have no discernible effect on business processes, because they are in fact projected and run by people in a corp that have a negative cost-to-benefit ratio. Most important apps still run on legacy mainframes or minicomputers. You know, the ones that keep the electricity flowing, planes up, ticketing, aisles restocked, power plants from exploding, ICBM tracking. Only social constructivists need an overclocked server. Porn, youtubers, traders, data collectors come to mind. Not making much sense.
