Selecting the Competition

In setting up our benchmarks, we chose four different Intel SKUs to compete with the Cavium ThunderX. Our choices are not ideal (we only had a limited number of SKUs available), but there is still some logic behind them.

The Xeon E5-2640 v4 (10 cores @2.4 GHz, $939) has Intel's latest server core (Broadwell EP) and features a price tag in the ballpark of the ThunderX ($800) along with a low 90W TDP.

The Xeon E5-2690 v3 (12 cores @2.6 GHz, $2090) is a less obvious choice, but we wanted an SKU with a higher TDP in case the actual power consumption of the ThunderX turned out to be higher than its official 120W TDP suggests. To be frank, it was also the only SKU faster than the E5-2640 v4 that we had on hand. The Xeon E5-2699 v4 ($4115, 145W TDP) did not make much sense to us in this comparison... so we settled for the Xeon E5-2690 v3.

And then we added all the Xeon Ds we had available. At first sight it is not fair to compare a 45W TDP SoC to our 120W ThunderX. But the Xeon D-1557 is in the same price range as the Cavium ThunderX and is aimed at more or less the same market. And although the Xeon Ds offer fewer network and SATA interfaces, Cavium has to beat these kinds of Xeon Ds performance-wise, otherwise Intel's performance-per-watt advantage will steal Cavium's thunder.

The Xeon D-1581 is the most expensive Xeon D, but it is Intel's current server SoC flagship. If the ARM server SoCs start beating the competitively priced Xeon Ds, Intel can always throw this one into the fray at a lower price. It is the SoC that the ARM server vendors have to watch.

Configuration

Most of our testing was conducted on Ubuntu Server 14.04 LTS. We did upgrade this distribution to the latest point release (14.04.4), which gives us more extensive hardware support. To ensure support for the ThunderX, the gcc compiler was also upgraded to 5.2. In the case of the ThunderX the kernel was 4.2.0, while the Intel systems still used kernel 3.19.

The reason why we did not upgrade the kernel is simply that we know from experience that this can generate all kinds of problems. In the case of the ThunderX a newer kernel was necessary, while for the Intel CPUs we simply checked that there were no big differences with the new Ubuntu 16.04. The only difference we could see is that some of our software no longer compiles on 16.04 (Sysbench, Perlbench). As we had already wasted a lot of time debugging all kinds of dependency trouble, we kept it simple.
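As a sanity check, the software stack described above can be verified on each machine with a few standard commands (a minimal sketch; the exact steps used to obtain gcc 5.2 on 14.04 are not shown here):

    # Verify distribution, kernel and compiler versions on a test system
    lsb_release -d     # expect: Ubuntu 14.04.4 LTS
    uname -r           # 4.2.x on the ThunderX, 3.19.x on the Intel machines
    gcc --version      # expect: gcc 5.2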

Gigabyte R120-T30 (1U)

The full specs of the server can be found here.

CPU: One ThunderX CN8890
RAM: 128GB (4x32GB) DDR4-2133
Internal Disks: 2x SanDisk CloudSpeed Ultra 800GB
Motherboard: Gigabyte MT30-GS0
BIOS version: 1/28/2016
PSU: Delta Electronics 400W 80 Plus Gold

Supermicro X10SDV-7TP8F and X10SDV-12C-TLN4F (2U case)

CPU: Xeon D-1557 (1.5 GHz, 12 cores, 45 W TDP) or Xeon D-1581 (1.8 GHz, 16 cores, 65 W TDP)
RAM: 64 GB (4x16 GB) DDR4-2133
Internal Disks: 2x Intel SSD3500 400GB
Motherboard: Supermicro X10SDV-7TP8F or Supermicro X10SDV-12C-TLN4F
BIOS version: 5/5/2016
PSU: Delta Electronics 400W 80 Plus Gold

Hyperthreading, Turbo Boost, C1 and C6 were enabled in the BIOS.

Intel's Xeon E5 Server – S2600WT (2U Chassis)

This is the same server that we used in our latest Xeon v4 review.

CPU: Xeon E5-2640 v4 (2.4 GHz, 10 cores, 90 W TDP) or Xeon E5-2690 v3 (2.6 GHz, 12 cores, 135 W TDP)
RAM: 128GB (8x16GB) Kingston DDR4-2400
Internal Disks: 2x Intel SSD3500 400GB
Motherboard: Intel Server Board Wildcat Pass
BIOS version: 1/28/2016
PSU: Delta Electronics 750W DPS-750XB A (80+ Platinum)

Hyperthreading, Turbo Boost, C1 and C6 were enabled in the BIOS.

Other Notes

All servers are fed by a standard European 230V (16 Amps max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs in our Sizing Servers Lab.

Comments

  • TheinsanegamerN - Thursday, June 16, 2016 - link

    While you are right on the actual age of the chip, if you don't compare efficiency on different nodes, how on earth would you know if you made any progress?

    Unless you are suggesting that one should never compare one generation of chips to another, which is simply ludicrous. Where is this "you can't compare two different nodes" mindset coming from? I've seen it in the GPU forums as well, and it makes no sense.
  • shelbystripes - Wednesday, June 15, 2016 - link

    The E5-2600 v3 is a Haswell part, meaning it's Intel's second ("tock") core design on 22nm. So not only is this a smaller process, it's a second-gen optimization on a smaller process.

    For a first-gen 28nm part that includes power-hungry features like multiple 10GbE, these are some very promising initial results. A 14nm die shrink should create some real improvements off the bat in terms of performance per watt, and further optimizations from there should make this thing really shine.

    Given that Intel hasn't cracked 10nm at all yet, and it'll take a while for 10nm Xeons to show up once they do, Cavium has room to play catch-up. I mean, hell, they're keeping up/surpassing Xeon D in some use cases NOW, and that's a 14nm part. What Cavium needs most is power optimization at this point, and I'm sure they'll get there in time.
  • Michael Bay - Thursday, June 16, 2016 - link

    Good to know Intel is keeping you up to date with what's happening in their UV labs.
  • rahvin - Thursday, June 16, 2016 - link

    Last I saw, Intel is already running their test fabs at 10nm. Once they perfect it in the test fabs, it only takes them about 6 months to roll it into a full-scale fab. Maybe you can point to the source that indicates Intel has failed at 10nm.
  • kgardas - Wednesday, June 15, 2016 - link

    Nice article, but really looking to see testing of ThunderX2 and X-Gene 3. Will be interesting as Intel seems to be kind of struggling with single-threaded performance recently...
  • Drazick - Wednesday, June 15, 2016 - link

    Just a question.
    You emphasized that the performance is 3x instead of 5x, but I bet Intel used Intel ICC for those tests.

    Intel works hard on their Compilers and anyone who wants to extract the best of Intel CPU uses them as well.

    Since getting the most out of a CPU means using the right compilers, if Intel has an advantage in that department you should show that as well.

    Namely give us some results using Intel ICC.

    Thank You.
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Of course, if Anandtech uses ICC, they should use better flags in gcc for ARM/ThunderX as well (core-specific flags, NEON, etc.). Both ICC and targeted flags give improvements, often large ones. This was a generic test.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    For integer workloads, ICC is not that much faster than gcc (see Andreas Stiller's work). And there is the fact that ICC requires licensing and other time-consuming stuff. From a Linux developer/administrator perspective, it is much easier to just use gcc: you simply install it from the repositories, with no licensing headaches and very decent performance (about 90% of ICC). So the vast majority of the **NON HPC** software is compiled with gcc. Our added value is that we show how the processors compare with the most popular compiler on Linux. That is the big difference between benchmarking to put a CPU in the best light and benchmarking to show what most people will probably experience.

    Until Intel makes ICC part of the typical linux ecosystem, it is not an advantage at all in most non-HPC software.
  • patrickjp93 - Friday, June 17, 2016 - link

    His work is woefully incomplete, lacking any analysis of vectorized integer workloads, where ICC destroys GCC to the tune of a 40% lead.
  • phoenix_rizzen - Wednesday, June 15, 2016 - link

    "The one disadvantage of all Supermicro boards remains their Java-based remote management system. It is a hassle to get it working securely (Java security is a user unfriendly mess), and it lacks some features like booting into the BIOS configuration system, which saves time."

    It's IPMI, you can use any IPMI client to connect to it. Once you give it an IP and password in the BIOS, you can connect to it using your IPMI client of choice. There's also a web interface that provides most of the features of their Java client (I think that uses Java as well, but just for the console).

    For our SuperMicro servers, I just use ipmitool from my Linux station and have full access to the console over the network, including booting it into the BIOS, managing the power states, and even connecting to the serial console over the network.

    Not sure why you'd consider a full IPMI 2.0 implementation a downside just because the default client sucks.
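Two hedged sketches to illustrate the points raised in the comments above. First, the IPMI workflow phoenix_rizzen describes maps to a handful of ipmitool invocations (the hostname and credentials below are placeholders):

    # Query and control a Supermicro BMC over the network (bmc-host/admin/secret are placeholders)
    ipmitool -I lanplus -H bmc-host -U admin -P secret chassis power status   # current power state
    ipmitool -I lanplus -H bmc-host -U admin -P secret chassis bootdev bios   # enter BIOS setup on next boot
    ipmitool -I lanplus -H bmc-host -U admin -P secret sol activate           # serial console over LAN

Second, the compiler-flag point amounts to adding target-specific options on top of a generic build; an illustrative example (assuming a gcc recent enough to know the thunderx target, and a hypothetical bench.c):

    # Generic build versus target-tuned builds (illustrative flags only)
    gcc -O2 bench.c -o bench                   # generic build
    gcc -O2 -mcpu=thunderx bench.c -o bench    # tuned for the ThunderX core (AArch64)
    gcc -O2 -march=broadwell bench.c -o bench  # tuned for the Broadwell Xeons (x86)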
