Conclusion: Price Makes Perfect

When you buy a system, ask yourself – what matters most to you?

Is it gaming performance?
Is it bang-for-buck?
Is it all-out peak performance?
Is it power consumption?
Is it performance per watt?

I can guarantee that out of the AnandTech audience, we will have some readers in each of these categories. Some will be price sensitive, while others will not. Some will be performance sensitive, others will be power (or noise) sensitive. The point here is that the Xeon W-3175X only caters to one market: high performance.

We tested the Xeon W-3175X in our regular suite of tests, and it performs much as we would expect – it is a 28-core version of the Core i9-9980XE, so in single threaded tests it is about the same, but in raw multi-threaded tests it performs up to 50% better. For rendering, that’s great. For our variable threaded tests, the gains are smaller, ranging from no gain at all to around 20% or so. This is the nature of increasing thread counts – at some point, software hits Amdahl’s law of scaling and more threads add nothing. However, for software that isn’t at that point, the W-3175X comes in like a wrecking ball.
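As a rough sketch of where that scaling wall sits – the parallel fractions below are illustrative, not measured from any specific benchmark:

```python
def amdahl_speedup(parallel_fraction: float, n_threads: int) -> float:
    """Upper bound on speedup when only part of a workload parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_threads)

# A renderer that is ~99% parallel keeps scaling from 18 to 28 cores:
render_gain = amdahl_speedup(0.99, 28) / amdahl_speedup(0.99, 18)  # ~1.43x

# A workload that is only ~80% parallel barely moves:
mixed_gain = amdahl_speedup(0.80, 28) / amdahl_speedup(0.80, 18)   # ~1.07x
```

This is roughly the spread we see: near-perfectly-parallel rendering gains a lot from the extra cores, while variable-threaded workloads land anywhere from flat to modest gains.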

[Graph: Corona 1.3 Benchmark]

For our graphs, some tests have two values: a regular value in orange, and one in red labelled 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU still runs at the same frequencies but is never throttled. Despite Intel saying that it recommends 'Intel Spec', the system it sent us to test was actually set up with the power limits opened up, and the results Intel provided for us to compare against internally also correlated with that setting. As a result, we provide both sets of results for our CPU tests.

For the most part, the 'opened up' results scored better, especially in multi-threaded tests; however, 'Intel Spec' did excel in memory-bound tests. This is likely because with the limits opened up, nothing stops the cores from holding their high turbo, which can introduce additional stalls in memory-bound workloads. In the slower 'Intel Spec' environment, there is plenty of power budget left for the mesh, and the memory controllers can deal with requests as they come.

Power, Overclockability, and Availability

Two-and-a-half questions hung over Intel during the announcement and launch of the W-3175X. The first was power, the second was overclockability, and question two-point-five was availability.

On the power side of the equation, again the W-3175X comes in like a wrecking ball, and this baby is on fire. While this chip has a 255W TDP, the turbo max power value is 510W – we don’t hit that at ‘stock’ frequency, which is more around the 300W mark, but we can really crank out the power when we start overclocking.

This processor has a regular all-core frequency of 3.8 GHz, with AVX2 at 3.2 GHz and AVX-512 at 2.8 GHz. In our testing, just by adjusting multipliers, we achieved an all-core turbo of 4.4 GHz and an AVX2 turbo of 4.0 GHz, with the system drawing 520W and 450W respectively. At these frequencies, our CPU was reporting temperatures in excess of 110ºC! This processor is actually rated with a thermal shutoff at 120ºC, well above the 105ºC we see with regular desktop processors, which suggests that Intel's binning for this part required the higher temperature ceiling.
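A first-order sketch shows why the power jumps so much when overclocking: dynamic CMOS power scales roughly with frequency times voltage squared. The ~20% voltage bump below is an assumption for illustration, not a value we measured:

```python
def dynamic_power(base_power_w: float, base_freq_ghz: float,
                  new_freq_ghz: float, v_scale: float = 1.0) -> float:
    """First-order CMOS estimate: dynamic power scales with f * V^2."""
    return base_power_w * (new_freq_ghz / base_freq_ghz) * v_scale ** 2

# ~300 W at the stock 3.8 GHz all-core, with an assumed ~20% voltage
# increase to hold 4.4 GHz, lands close to the ~520 W we observed:
estimate = dynamic_power(300, 3.8, 4.4, v_scale=1.2)  # ~500 W
```

The voltage-squared term is why a ~16% frequency bump costs far more than 16% extra power.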

On the question of availability, this is where the road is not so clear. For now, Intel intends to sell these processors only through OEMs and system integrators as part of pre-built systems. We’ve heard some numbers about how many chips will be made (it’s a low four-digit number), and we can approximately corroborate those numbers given that one motherboard vendor also qualified how many boards it was building.

One of Anand’s comments I will always remember during our time together at AnandTech was this:

“There are no bad products, only bad prices.”

According to OEMs we spoke to, this processor was initially going to be $8k. The idea was that, being 28-core and unlocked, Intel did not want it to cannibalize its $10k Xeon market. Since then, distributors told us that the latest guidance they were getting was around $4500, and now Intel is saying that the recommended consumer price is $3000. That’s not Intel’s usual ‘per-1000 units’ definition; that’s the actual end-user price. Intel isn’t even quoting a per-1000 unit price, which goes some way to substantiate the numbers we heard about volume.

At $8000, this CPU would be dead in the water, suitable only for high-frequency traders who could recoup the cost within a few hours of trading. At $4500, it would be a stretch, given that Intel’s 18-core chip is only $2099, and AMD offers the 32-core 2990WX for $1799, which surpasses it in performance per dollar on any rendering task.

At $2999, Intel has probably priced this one just right.

At $2999, it's not the hideous monstrosity that some worried it would be, but instead becomes a very believable progression from the Core i9-9980XE. Just don’t ask about the rest of the system, as an OEM is probably looking at a $7k minimum build, or a $10k end-user shelf price.

133 Comments

  • SaturnusDK - Wednesday, January 30, 2019 - link

    The price is the only big surprise here. At $3000 for the CPU alone, and three times that in system price, it's actually pretty decently priced. The performance is as expected, but it will soon be eclipsed. The only question is what price AMD will charge for its coming Zen 2 based processors in the same performance bracket; we won't know until then whether the W-3175X is a worthwhile investment.
  • HStewart - Wednesday, January 30, 2019 - link

    I thought the rumors were that this chip was going to be $8000. I am curious how the 'Covey' version of this chip will perform and when it comes out.

    But let's be honest: unless you are extremely rich or crazy, buying any processor with this many cores is crazy - to me it seems like the high-end gaming market is being taken for a ride with all this core war - buy a high-end chip now just to say you have the highest performance, then purchase a new one next year. Of course there is all the ridiculous process stuff. It is just interesting to see a 28-core Intel on 14nm Skylake beat a 32-core AMD.

    As for the server side, I would think it more cost effective to blade together multiple lower-core units than fewer higher-core units.
  • jakmak - Wednesday, January 30, 2019 - link

    It's not really surprising to see a 28-core Intel beating a 32-core AMD. After all, it is no hidden mystery that the Intel chips not only have a small IPC advantage, but are also able to run at a higher clock rate (never mind the power draw). In this case, the Xeon-W excels where those two advantages combined are working 28x over, so the two extra cores on the AMD side won't cut it.
    It is also obvious that the massive advantage shows mostly in those cases where clock rate is the most important part.
  • MattZN - Wednesday, January 30, 2019 - link

    Well, it depends on whether you care about power consumption or not, jakmak. Traditionally the consumer space hasn't cared so much, but it's a bit of a different story when whole-system power consumption starts reaching for the sky. And it's definitely reaching for the sky with this part.

    The stock intel part burns 312W on the Blender benchmark while the stock threadripper 2990WX burns 190W. The OC'd Intel part burns 672W (that's right, 672W without a GPU) while the OCd 2990WX burns 432W.

    Now I don't know about you guys, but that kind of power dissipation in such a small area is not something I'm willing to put inside my house unless I'm physically there watching over it the whole time. Hell, I don't even trust my TR system's 330W consumption (at the wall) for continuous operation when some of the batches take several days to run. I run it capped at 250W.

    And... I pay for the electricity I use. It's not cheap to run machines far from their maximally efficient point on the curve. Commercial machines have lower clocks for good reason.

    -Matt
  • joelypolly - Wednesday, January 30, 2019 - link

    Do you not have a hair dryer or vacuum or oil heater? They can all push 1800W or more.
  • evolucion8 - Wednesday, January 30, 2019 - link

    That is a terrible example if you ask me.
  • ddelrio - Wednesday, January 30, 2019 - link

    lol How long do you keep your hair dryer going for?
  • philehidiot - Thursday, January 31, 2019 - link

    Anything up to one hour. I need to look pretty for my processor.
  • MattZN - Wednesday, January 30, 2019 - link

    Heh. That is a pretty bad example. People don't leave their hair dryers turned on 24x7, nor their floor heaters (I suppose, unless it's winter). Big, big difference.

    Regardless, a home user is not likely to see a large bill unless they are doing something really stupid like crypto-mining. There is a fairly large distinction between the typical home-use of a computer vs a beefy server like the one being reviewed here, let alone a big difference between a home user, a small business environment (such as popular youtube tech channels), and a commercial setting.

    If we use an average electricity cost of around $0.20/kWh (actual cost depends on where you live and the time of day, and can range from $0.08/kWh to $0.40/kWh or so)... let's just say $0.20/kWh.

    For a gamer who is spending 4 hours a day burning 300W the cost of operation winds up being around $7/month. Not too bad. Your average gamer isn't going to break the bank, so to speak. Mom and Dad probably won't even notice the additional cost. If you live in cold environment, your floor heater will indeed cost more money to operate.

    If you are a solo content creator you might be spending 8 to 12 hours a day in front of the computer. For the sake of argument, running blender or encoding jobs in the background. 12 hours of computer use a day @ 300W costs around $22/month.

    If you are GN or Linus or some other popular YouTube site and you are running half a dozen servers 24x7, plus workstations for employees, plus numerous batch encoding jobs on top of that, the cost begins to become very noticeable. Now you are burning, say, 2000W 24x7 (a pie-in-the-sky rough average), costing around $290/month ($3480/year). That content needs to be making you money.

    A small business or commercial setting can wind up spending a lot of money on energy if no care at all is taken with regards to power consumption. There are numerous knock-on costs, such as A/C in the summer which has to take away all the equipment heat on top of everything else. If A/C is needed (in addition to human A/C needs), the cost is doubled. If you are renting colocation space then energy is the #1 cost and network bandwidth is the #2 cost. If you are using the cloud then everything has bloated costs (cpu, network, storage, and power).

    In any case, this runs the gamut. You start to notice these things when you are the one paying the bills. So, yes, Intel is kinda playing with fire here trying to promote this monster. Gaming rigs that aren't used 24x7 can get away with high burn, but once you are no longer a kid in a room playing games, these costs start to matter. As machine requirements grow, running the machines closer to their maximum point of efficiency (which is at far lower frequencies) begins to trump other considerations.

    If that weren't enough, there is also the lifespan of the equipment to consider. A $7000 machine that remains relevant for only one year and has a $3000/year electricity bill is a big cost compared to a $3000 machine that is almost as fast and only has a $1500/year electricity bill. Or a $2000 machine. Or a $1000 machine. One has to weigh convenience of use against the total cost of ownership.

    When a person is cognizant of the costs then there is much less of an incentive to O.C. the machines, or even run them at stock. One starts to run them like real servers... at lower frequencies to hit the maximum efficiency sweet spot. Once a person begins to think in these terms, buying something like this Xeon is an obvious and egregious waste of money.

    -Matt
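As a sanity check on the monthly figures in the comment above, a minimal sketch of the arithmetic, using the same assumed $0.20/kWh rate and a 30-day month:

```python
def monthly_cost_usd(watts: float, hours_per_day: float,
                     usd_per_kwh: float = 0.20, days: int = 30) -> float:
    """Electricity cost of a sustained load over a month."""
    return watts / 1000.0 * hours_per_day * days * usd_per_kwh

monthly_cost_usd(300, 4)    # gamer: ~$7/month
monthly_cost_usd(300, 12)   # solo content creator: ~$22/month
monthly_cost_usd(2000, 24)  # small studio, 24x7: ~$288/month
```

The figures scale linearly, so doubling the rate (or the load) doubles the bill.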
  • 808Hilo - Thursday, January 31, 2019 - link

    Most servers run at idle speed. That is a sad fact. The sadder fact is that they have no discernible effect on business processes, because they are in fact planned and run by people in a corp with a negative cost-to-benefit ratio. The most important apps still run on legacy mainframes or minicomputers - you know, the ones that keep the electricity flowing, planes up, tickets issued, aisles restocked, power plants from exploding, ICBMs tracked. Only social constructivists need an overclocked server. Porn, youtubers, traders, data collectors come to mind. Not making much sense.
