Conclusion: Is Intel Serious About Xeon W? 

In this review, we have covered the performance of three of the more popular Xeon W processors, as well as two off-roadmap parts, and discussed how Intel's decision to bifurcate its workstation and consumer platforms has put more questions on the table for prospective buyers.

This ultimately comes down to the question: is Intel serious about Xeon W? If we ask Intel, the answer is of course yes – the company wants to have target markets and a product portfolio that it feels will fit that user base. However, I am not so sure.

Xeon W was launched a lot later than both the Xeon Scalable platform and the equivalent Skylake-X consumer platform. The messaging behind Xeon W is unclear to a large degree, with only a limited amount of PR invested in it, unlike Xeon Scalable or Skylake-X. The decision to split the market between consumer and workstation, despite a common socket, has minimized the accessibility of the workstation platform: fewer discussions are being had about the hardware, because there is little room for a truly mix-and-match scenario as with previous generations. At no point were we offered review samples, for example, which is usually an indication that the product line is not one the product managers are looking to promote. Only Intel's latest Xeon E designs, released 10 months after the first equivalent consumer parts, beat Xeon W in terms of how un-exciting a platform can be to discuss. Intel does not want to sample Xeon E, either.

So will Intel lose workstation market share to AMD? And if I am being so pessimistic, what are the financial ramifications for this market? AMD's Threadripper certainly looks like an appealing platform for workstation users, but AMD is not without its own issues. Intel is the incumbent, and has embedded itself with a large number of OEMs and end-users for years, making it difficult for AMD to break into that market. AMD's chiplet design will take a few generations to get used to, so users might stick with 'what they know', regardless of any cost/benefit analysis. There is also the discussion of ECC support on Threadripper, for which the messaging has been somewhat unclear: technically it should support up to ECC LRDIMMs, but this depends a lot on whether the motherboard vendor has qualified its product for RDIMMs or LRDIMMs – most have not, complicating the issue. If AMD wants to tackle this space, it needs an ASUS or a GIGABYTE to build a 'workstation focused' motherboard, with confirmed ECC and co-processor support. GIGABYTE's Designare line and MSI's upcoming X399 Creation might be aimed at this, but it really does require a razor-sharp message to get through.

All this confusion means that while AMD can be competitive in most tests, Intel is expected to remain the market leader for the foreseeable future. 

I’m Sold on Xeon W: Tell Me About Performance

If our benchmarks are anything to go by, there is a lot of parity in performance between Intel's Xeon W and Skylake-X product lines. Xeon W takes a hit in memory workloads because of its memory support: ECC RDIMMs typically run at base JEDEC sub-timings, and so our DDR4-2666 memory ran at 19-19-19, compared to the more typical 16-16-17 on the consumer platform.
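As a rough back-of-the-envelope sketch of what those looser sub-timings mean in absolute terms, first-word latency can be computed from the CAS latency and transfer rate (illustrative numbers only, not measured figures from our testing):

```python
def cas_latency_ns(cl: int, mt_s: int) -> float:
    """First-word latency in nanoseconds: CAS cycles divided by the
    memory clock, which is half the transfer rate for DDR memory."""
    return cl * 2000.0 / mt_s

# JEDEC ECC RDIMM at DDR4-2666 CL19 vs a consumer kit at CL16
ecc = cas_latency_ns(19, 2666)       # ~14.3 ns
consumer = cas_latency_ns(16, 2666)  # ~12.0 ns
print(f"ECC latency penalty: {100 * (ecc / consumer - 1):.0f}%")
```

An ~19% first-word latency penalty at the same DDR4-2666 data rate is consistent with the memory-sensitive results trailing the consumer platform.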

Our Xeon W results are skewed a little towards the low-end processors, mostly because three of the five units we managed to acquire were quad-core parts. At this level, Intel's now-EOL Kaby Lake-X processors fared better, or the consumer Coffee Lake-S looks like the better option, unless the user needs ECC or more PCIe lanes than the consumer products provide. The obvious counterpoint here is that if a user needs ECC, and is happy with a 64 GB memory ceiling, then Intel's own Xeon E is also an option; however, we have not tested those parts yet (if any OEM can sample them to us, please let us know).

On the high end, we see the W-2195 sit behind the Core i9-7980XE in almost all benchmarks, which also means that for embarrassingly parallel workloads it sits behind the Threadripper 1950X as well. It still holds that Xeon W's single-threaded performance, despite the lack of Turbo Boost Max 3.0, gives it a significant advantage over AMD in single-threaded workloads.

For users worried about the Spectre and Meltdown patches affecting performance: in our SYSMark tests we saw a 2-6% decrease across the board, with the hardest-hit tests seeing a 12% decrease due to their reliance on storage.
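For readers who want to check which of these mitigations their own workstation is running, modern Linux kernels expose a per-vulnerability status under sysfs. A minimal sketch (the sysfs path is the standard kernel interface; the function simply returns an empty dict on systems where it is absent):

```python
from pathlib import Path

def mitigation_status() -> dict:
    """Return {vulnerability: kernel status string} from sysfs,
    e.g. {'meltdown': 'Mitigation: PTI', ...}.
    Returns an empty dict on systems without this sysfs directory."""
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vuln_dir.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in vuln_dir.iterdir()}

for name, status in sorted(mitigation_status().items()):
    print(f"{name}: {status}")
```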

Why Buy Xeon W?

The obvious reasons to buy Xeon W processors are tick boxes: ECC memory, PCIe lanes, co-processor verification. If these are needed, the number of options for the rest of the system (particularly the motherboard) becomes slim, especially when factoring in price and total cost of ownership. A lot of the workstation market works on development cycles and high-throughput compute: the faster the compute, the quicker the prototyping. If that work is CPU bound, a lot of it is won by the consumer Core i9 or Threadripper; however, if the above boxes need ticking, then Xeon W would be needed. Or Xeon Scalable, depending on budget.


A small side note to end: If anyone has access to any of the Apple-only Xeons (like the W-2150B) and would kindly let us borrow it for a review, please let me know over email. 

74 Comments

  • HStewart - Monday, July 30, 2018 - link

I am curious why Xeon W at the same core count is typically slower than Core X – I also notice the Scalable CPUs have much more functionality, especially related to reliability, in essence to keep the system running 24/7. The Scalable CPUs also appear to have 6-channel memory instead of 4-channel. I wonder when 6-channel memory comes to consumer-level CPUs.

    One test that would be interesting is the same core count on Xeon W vs a Scalable CPU, with only one CPU installed.

    Another test that could be interesting is a dual-CPU Scalable setup, with say two 12-core chips versus one 24-core chip of the same tier.

    Just tests to see what happens with more cores vs more CPUs.
  • duploxxx - Monday, July 30, 2018 - link

    one threadripper 2.0 and you can throw all intel configs here into the bin
  • tricomp - Monday, July 30, 2018 - link

    YeaH
  • HStewart - Monday, July 30, 2018 - link

    I wish people keep the topic to the subject and not blab about competitor products
  • duploxxx - Tuesday, July 31, 2018 - link

    if you would know anything about cpu scalable systems you would not ask these questions. a 2*12 vs 1*24 will be roughly 20% slower if your application scales cross the total core count due to in between socket communication. Even Intel provides data sheets on that. No need to test.

    as long as intel can screw consumers they will not invest anything, you wont get 6 mem lanes in xeon W or consumer unless competition does it and they get nailed. btw why on earth would you need that on a consumer platform?
  • BurntMyBacon - Tuesday, July 31, 2018 - link

    If all things are equal, then what you say is true. There is a known performance drop due to intersocket communications. However, you may have more TDP headroom (depends on the chips you are using) and most likely more effective cooling with two sockets, allowing for higher frequencies with the same number of active cores. If the workload doesn't require an abundance of socket-to-socket communication, then it is conceivable that the two-socket solution may have merit in such circumstances.
  • SanX - Tuesday, July 31, 2018 - link

    Why is ARM just twiddling its thumbs, watching a game where it can beat Intel? Where are the ARM server and supercomputer chips? ARM processors will soon surpass Intel in transistor count. And for the same number of transistors ARM is 50-100x cheaper than the Intel/AMD duopoly. As an additional advantage for ARM, these two segments will soon completely abandon Microsoft.
  • beggerking@yahoo.com - Thursday, August 2, 2018 - link

    ARM is RISC, which is completely different from CISC, so applications and OS support are limited. Microsoft's server OS has really evolved in every aspect in the last few years, and it may take RISC years to catch up on the software side.
  • JoJ - Saturday, August 4, 2018 - link

    ARM is Fujitsu's choice of successor to SPARC64+, an architecture Fujitsu invested decades of research, development, and testing in, offered both commercially and at the national-laboratory supercomputing level. ARM is therefore not a knee-jerk choice of direction for a very interesting super builder.

    Obviously you exaggerated a little bit, saying ARM is "50 - 100 times cheaper than AMD/Intel".

    I wish I could shake my belief that pedantic literalism in Internet forums is preventing broad discussion – we exaggerate in real life without any socially degrading effects, why not online?

    Or are your conversation partners sniffing that, obviously – treating any person who inadvertently speaks technically inaccurately, despite forming a perfectly understandable inquiry, as if they are unwashed know-nothings, and turning on their heels to end the discussion... a bit like HN's "we don't tolerate humor here" reactions to innocent attempts at lightening the thread...

    but I digress. My point here is that your comment above raised a couple of interesting questions that I feel haven't been answered, only because I think readers first over-react to the hyperbole, then fill in the accepted wisdom to answer your questions, despite you asking about pertinent, value-critical concerns. I feel that by supplying the answer and dismissing the comment as uninformed, the most important thing happening is the reader voluntarily self-reinforcing given marketing positions, and not engaging with the subject at all. I work in advertising and am actually studying this, because advertising buyers adore this kind of "mind share", but we think that is at odds with those same buyers wanting "open minded, engaging, adaptable, innovative" customers.

    1. Have a look at Serve The Home's reviews of the Cavium ARM server generations. This architecture is definitely viable and competitive now in an increasing number of application areas.

    2. Microsoft Azure has ARM deployed, by my estimation, at a scale second only to Baidu. I am tempted to think it's actually politics that prevents an ARM Azure server offering for commercial users, little else. The problem for Microsoft is user expectation of all-round performance consistency, and Intel and Microsoft have been working on that smooth delivery for decades.

    3. ARM is bit cheaper if you need to do more than a quick recompile with a few architecture options selected.
    Re: when we will see an Azure ARM instance, I think it could even be waiting on Cavium's ability to actually deliver hardware, because unmet demand is as fatal a blow to a new technology as unsuccessful realisation.
    All my "quality time" with our server fleet is spent hands-on with the thermal and power profiles of our applications.
    We will rewrite to gain fractions of a percentage point where it's a consistent number across runs. Ever since I crashed a colo cage twenty-five years ago by not considering the power-on surge of a huge half-terabyte RAID array, power loads have obsessed me. Power usage on Cavium ARM looks like a winner for us.

    4. BUT I said that based on data mapping dense thermal sensor arrays to the functional code paths of the actual application logic in flight across the fleet at the time. Since we're able to calculate the cost/benefit of routing a new application function to a specific server, depending on the thermal load and core behaviour at the time of dispatch, I admit we're not very typical for a small-scale customer. I think small is a server count below 10,000 here, including any peak on-demand usage in case you're in consumer retail and sell half-price Gucci shoes on Black Monday.
    (We got surprised by the reliability of gains from very crude information. Originally we just wanted to see if we could balance the flows in the hot aisle, and even throttle hotspot buildup if we lost some cooling locally. For Intel, we got lots of gains by sending jobs so as not to exceed the optimal max turbo clock of a processor, and immediately filling out the slower cores with background chores. AMD and Cavium ARM are not as sophisticated about thermal management, where Intel has been keen on overkill recently, e.g. four nigh-identical Xeon Gold SKUs. Do really read that STH review about this redundancy of Xeon processor parts – I came away with a purchase order for the reviewed SKU, because we're so excited about the power management system's role in production deployment as a competitive advantage.)

    5. REAL COST ADVANTAGE DEPENDS ON CHANNEL PENETRATION. With AMD at 2% – yes, TWO percent is considered healthy for them today – AMD needs to be shipping in far greater volume to move the money dial and realise the kind of cost advantage SanX is excited about.
    Certification of countless applications is hardly begun...
    I want to use an ARM workstation, to eat my own dog food. That necessitates NVIDIA Quadro card support. Yes, I write for a living. I target CUDA for an ever-increasing proportion of customer needs. SURE, I can just remote into machines at will. BUT IF YOU DON'T GIVE CRITICAL DEVELOPERS TRULY GREAT HARDWARE, YOU'RE ABANDONING THE PLATFORM FOR ANY IDEA OF GENERAL DEPLOYMENT.

    6. Probably the last sentence should have been standalone here.
    I'll just say that we need a workstation as cool as the Silicon Graphics Indy of '93 to have a chance of getting a new GENERAL-purpose platform into the mainstream soon.

    7. I am constantly both astounded by the simple fact that we have a chip good enough to compete at all, yet scared because I am starting to wonder if we'll ever see sales above "bargaining power level" and platform insurance – the niche market of companies able to extract whole value chains from controlling their entire software ecosystem, something almost nobody in the real world can do.
  • JoJ - Saturday, August 4, 2018 - link

    typo, mea culpa,

    in point 3, I mean to say, "ARM is NOT cheaper, if you need to do more than a quick recompile.."
