Intel Xeon E Six-Core Review: E-2186G, E-2176G, E-2146G, and E-2136 Tested
by Ian Cutress on November 5, 2018 12:00 PM EST - Posted in
- CPUs
- Intel
- Xeon
- Enterprise CPUs
- Xeon E
CPU Performance: Web and Legacy Tests
While more the focus of low-end and small form factor systems, web-based benchmarks are notoriously difficult to standardize. Modern web browsers update frequently, often with no way to disable those updates, which makes maintaining a common test platform difficult. The fast-paced nature of browser development means that version numbers (and performance) can change from week to week. Despite this, web tests are often a good measure of user experience: much of today's office work revolves around web applications, particularly email and office apps, but also interfaces and development environments. Our web tests include some of the industry standard tests, as well as a few popular but older tests.
We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.
All of our benchmark results can also be found in our benchmark engine, Bench.
WebXPRT 3: Modern Real-World Web Tasks, including AI
The company behind the XPRT test suites, Principled Technologies, has recently released its latest web test and, rather than attaching a year to the name, has simply called it ‘3’. This latest test (current as of when we started this suite) builds upon and develops the ethos of previous versions: user interaction, office compute, graph generation, list sorting, HTML5, image manipulation, and even some AI testing.
For our benchmark, we run the standard test which goes through the benchmark list seven times and provides a final result. We run this standard test four times, and take an average.
Users can access the WebXPRT test at http://principledtechnologies.com/benchmarkxprt/webxprt/
WebXPRT 2015: HTML5 and JavaScript Web UX Testing
The older version of WebXPRT is the 2015 edition, which focuses on a slightly different set of web technologies and frameworks still in use today. It remains a relevant test, especially for users of older web applications, of which there are many. Web framework development is often very quick but has high turnover: frameworks are rapidly developed, built upon, and used, then developers move on to the next one, and porting an application to a new framework is an arduous task, especially under rapid development cycles. This leaves a lot of applications ‘fixed in time’, yet relevant to user experience for many years.
Similar to WebXPRT 3, the main benchmark is a sectional run repeated seven times, producing a final score. We repeat the whole test four times and average those final scores.
Speedometer 2: JavaScript Frameworks
Our newest web test is Speedometer 2, which aggregates results across a series of JavaScript frameworks performing three simple tasks: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously approach them from different coding angles.
Our test goes through the list of frameworks and produces a final score indicative of ‘rpm’, one of the benchmark's internal metrics. We report this final score.
Google Octane 2.0: Core Web Compute
A popular web test for several years, though now no longer updated, is Octane, developed by Google. Version 2.0 of the test performs the best part of two dozen compute-related tasks, such as regular expressions, cryptography, ray tracing, emulation, and Navier-Stokes physics calculations.
The test gives each sub-test a score and produces a geometric mean of the set as a final result. We run the full benchmark four times, and average the final results.
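To make the scoring scheme concrete, the geometric mean can be sketched in a few lines of Python; the sub-test scores below are made up purely for illustration and are not actual Octane results.

```python
from math import prod

def overall_score(subtest_scores):
    """Combine per-subtest scores into a single result via the
    geometric mean, the same aggregation Octane uses."""
    return prod(subtest_scores) ** (1.0 / len(subtest_scores))

# Hypothetical sub-test scores, for illustration only:
print(overall_score([30000, 45000, 25000, 52000]))
```

Unlike an arithmetic mean, a geometric mean keeps a single outlier sub-test from dominating the overall result, which is why scoring suites tend to favor it.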
Mozilla Kraken 1.1: Core Web Compute
Even older than Octane is Kraken, this time developed by Mozilla. This older test performs similar computational work, such as audio processing and image filtering. Kraken can produce highly variable results depending on the browser version, as it is a test that browser vendors have keenly optimized for.
The main benchmark runs through each of the sub-tests ten times and produces an average time to completion for each loop, given in milliseconds. We run the full benchmark four times and take an average of the time taken.
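The average-time-per-loop reporting style can be sketched with a minimal Python timing harness; the function name and the stand-in workload below are our own illustration, not Kraken's actual code.

```python
import time

def average_loop_ms(workload, loops=10):
    """Run `workload` `loops` times and return the mean wall-clock
    time per loop in milliseconds, mirroring Kraken's reporting."""
    total = 0.0
    for _ in range(loops):
        start = time.perf_counter()
        workload()
        total += time.perf_counter() - start
    return (total / loops) * 1000.0

# Illustrative stand-in workload (not an actual Kraken sub-test):
avg = average_loop_ms(lambda: sum(i * i for i in range(100_000)))
```

Because the metric is time rather than a score, lower is better, which is worth keeping in mind when comparing Kraken results against score-based tests like Octane.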
3DPM v1: Naïve Code Variant of 3DPM v2.1
The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it were written by a scientist with no knowledge of how computer hardware, compilers, or optimization works (which, in fact, it was at the start). It represents a large body of scientific simulation code out in the wild, where getting the answer is more important than getting it fast (a correct result in 4 days is acceptable, rather than sending someone away for a year to learn to code and getting the result in 5 minutes).
In this version, the only real optimizations were the compiler flags (-O2, -fp:fast), compiling in release mode, and enabling OpenMP in the main compute loops. The loops were not tuned for size, and one of the key slowdowns is false sharing in the cache. The code also has long dependency chains in its random number generation, which leads to relatively poor performance on specific compute microarchitectures.
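To illustrate the dependency-chain point, here is a minimal Python sketch (not the actual 3DPM source, which is compiled code) of the pattern: a linear congruential generator whose next value depends on its last, so each particle step must wait for the previous RNG update to finish.

```python
def lcg_stream(seed):
    """Minimal linear congruential generator. Each output depends on
    the previous state, forming a serial dependency chain that the
    hardware cannot vectorize or reorder."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (state * 1664525 + 1013904223) & 0xFFFFFFFF
        yield state

def naive_walk(steps, seed=1):
    """Naive 3DPM-style loop: each step consumes the next RNG value,
    so iteration N cannot begin until iteration N-1's RNG update is done."""
    rng = lcg_stream(seed)
    x = 0.0
    for _ in range(steps):
        x += next(rng) / 0xFFFFFFFF - 0.5  # map to roughly [-0.5, 0.5]
    return x
```

A common fix for this pattern is to give each thread (or each particle) its own independent RNG stream, which removes the serial dependency between iterations.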
3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)
x264 HD 3.0: Older Transcode Test
This transcoding test is very old, and was used by Anand back in the days of Pentium 4 and Athlon II processors. A standardized 720p video is transcoded with a two-pass conversion, with the benchmark showing the frames per second of each pass. This benchmark is single-threaded, and between some microarchitectures we seem to actually hit an instructions-per-clock wall.
48 Comments
bolkhov - Friday, November 9, 2018 - link
Yes it does. 72 for ECC vs. 64 for non-ECC.
notb - Wednesday, November 7, 2018 - link
Go ask your IT department if Ryzen's ECC "support" passes their security policy. :-)
twtech - Sunday, November 18, 2018 - link
Most IT departments don't build their own systems. They'll standardize with something like Dell or HP, and maybe do things like RAM/HDD upgrades, etc. The cost of the CPU in a system is usually a relatively small cost compared to the overall total. Support costs can dwarf initial purchase costs. The more that a company can standardize their hardware, the easier it is to maintain it.
For example, IT may want to test patches before letting them be applied across the organization. If, in an ideal scenario, every system was exactly the same, then potentially only one test would be needed to make sure it works properly.
Of course that often won't be possible, but the closer they can remain to that ideal, the better.
HStewart - Monday, November 5, 2018 - link
To me it looks like AVX 512 gets a whopping 44% ( ~ 5200 / 3600 ) increase on similar equipment with a similar CPU. It would be nice to see the same CPU tested with and without AVX 512 enabled.
osrk - Monday, November 5, 2018 - link
FYI Provantage told me they were getting in their shipment Nov 12. I have one on pre-order.
Dr_b_ - Monday, November 5, 2018 - link
Have my 2186G on pre-order with them as well, for over a month now. Already have the 64GB ECC UDIMMs and the Asus C-246 Pro board just waiting. IMHO, the biggest letdown with the Asus board is that it doesn't have a USB 3.1 Gen 2 header on the motherboard. Have an inquiry in with Asus about support for 9900K CFL-R CPUs; haven't heard back yet.
mooninite - Tuesday, November 13, 2018 - link
Did you get yours shipped out yet? Websites are still showing either "out of stock" or "special order" for these Xeons.osrk - Friday, November 16, 2018 - link
No. They've had them in and out of stock. If you pre-ordered and the item comes in stock for even a penny more, your item will not be shipped. They will only fulfill your order when the item comes into stock for the amount you paid. This has left a very bad taste in my mouth when ordering with Provantage and I'm likely not going to order from them again.
For example, the E-2176G was in stock with them for $1 more than I paid. Did they ship it or notify me that I could pay the extra $1 and have it ship? Nope. I'm writing up my experiences with them on the ratings site.
Dragonstongue - Monday, November 5, 2018 - link
Wonder how much of these things tested are "biased" towards making Intel look as "rosy" as possible to steer away from any shortcomings. I would imagine they have a very specific test methodology to ensure "weaknesses" are not shown. Probably will never happen, but it would be real nice if "everyone" raced on the same track, so to speak, so the best "car" wins, not because other "cars" are hobbled by illegitimate tactics.
AnnoyedGrunt - Tuesday, November 6, 2018 - link
Yes, I suspect that Anandtech opened the "hood" on the AMD "cars" and drained their "blinker fluid" and damaged their "muffler bearings" so they couldn't "race" as well. I'd go on, but I ran out of quotation marks.
-AG