Selecting the Competition

In setting up our benchmarks, we chose four different Intel SKUs to compete with the Cavium ThunderX. Our choices are not ideal, as we only had a limited number of SKUs available, but there is still some logic behind them.

The Xeon E5-2640 v4 (10 cores @2.4 GHz, $939) has Intel's latest server core (Broadwell EP) and features a price tag in the ballpark of the ThunderX ($800) along with a low 90W TDP.

The Xeon E5-2690 v3 (12 cores @2.6 GHz, $2090) is a less obvious choice, but we wanted an SKU with a higher TDP in case the actual power consumption of the ThunderX turns out to be higher than its official 120W TDP suggests. To be frank, it was the only SKU faster than the E5-2640 v4 that we had. The Xeon E5-2699 v4 ($4115, 145W TDP) did not make much sense in this comparison... so we settled for the Xeon E5-2690 v3.

And then we added all the Xeon Ds we had available. At first sight it is not fair to compare a 45W TDP SoC to the 120W ThunderX. But the Xeon D-1557 is in the same price range as the Cavium ThunderX and is targeted at more or less the same market. And although the Xeon Ds offer fewer network and SATA interfaces, Cavium has to beat these kinds of Xeon Ds performance-wise; otherwise Intel's performance-per-watt advantage will steal Cavium's thunder.

The Xeon D-1581 is the most expensive Xeon D, but it is Intel's current server SoC flagship. If the ARM server SoCs start beating competitively priced Xeon Ds, Intel can always throw this one into the fray at a lower price. It is the SoC the ARM server vendors have to watch.

Configuration

Most of our testing was conducted on Ubuntu Server 14.04 LTS, which we upgraded to the latest point release (14.04.4) for more extensive hardware support. To ensure support for the ThunderX, the gcc compiler was upgraded to 5.2. The ThunderX also ran kernel 4.2.0, while the Intel systems still used kernel 3.19.

The reason we did not upgrade the kernel on the Intel systems is simply that we know from experience that this can generate all kinds of problems. In the case of the ThunderX a newer kernel was necessary, while for the Intel CPUs we simply checked that there were no big differences with the new Ubuntu 16.04. The only difference we could see there is that some of our software no longer compiles on 16.04 (Sysbench, Perlbench). As we had already wasted a lot of time debugging all kinds of dependency trouble, we kept it simple.
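For reference, a minimal sketch (Python; hypothetical, not part of our actual benchmark harness) of the kind of sanity check one can run before testing to confirm each system reports the expected compiler and kernel:

    import platform
    import subprocess

    def toolchain_report():
        """Record the versions that matter when comparing systems."""
        gcc_line = subprocess.check_output(
            ["gcc", "--version"], text=True).splitlines()[0]
        return {
            "kernel": platform.release(),   # 4.2.0 on the ThunderX, 3.19 on the Xeons
            "machine": platform.machine(),  # aarch64 vs. x86_64
            "gcc": gcc_line,                # everything was compiled with gcc 5.2
        }

    print(toolchain_report())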

Gigabyte R120-T30 (1U)

The full specs of the server can be found here.

CPU             One ThunderX CN8890
RAM             128GB (4x32GB) DDR4-2133
Internal Disks  2x SanDisk CloudSpeed Ultra 800GB
Motherboard     Gigabyte MT30-GS0
BIOS version    1/28/2016
PSU             Delta Electronics 400W 80 Plus Gold

Supermicro X10SDV-7TP8F and X10SDV-12C-TLN4F (2U case)

CPU             Xeon D-1557 (1.5 GHz, 12 cores, 45 W TDP)
                Xeon D-1581 (1.8 GHz, 16 cores, 65 W TDP)
RAM             64 GB (4x16 GB) DDR4-2133
Internal Disks  2x Intel SSD3500 400GB
Motherboard     Supermicro X10SDV-7TP8F
                Supermicro X10SDV-12C-TLN4F
BIOS version    5/5/2016
PSU             Delta Electronics 400W 80 Plus Gold

Hyperthreading, Turbo Boost, C1 and C6 were enabled in the BIOS.

Intel's Xeon E5 Server – S2600WT (2U Chassis)

This is the same server that we used in our latest Xeon v4 review.

CPU             Xeon E5-2640 v4 (2.4 GHz, 10 cores, 90 W TDP)
                Xeon E5-2690 v3 (2.6 GHz, 12 cores, 135 W TDP)
RAM             128GB (8x16GB) Kingston DDR4-2400
Internal Disks  2x Intel SSD3500 400GB
Motherboard     Intel Server Board Wildcat Pass
BIOS version    1/28/2016
PSU             Delta Electronics 750W DPS-750XB A (80+ Platinum)

Hyperthreading, Turbo Boost, C1 and C6 were enabled in the BIOS.

Other Notes

All servers are fed by a standard European 230V (16 Amps max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs in our Sizing Servers Lab.

Comments

  • vivs26 - Wednesday, June 15, 2016 - link

    Not necessarily - (read up on Amdahl's law of diminishing returns). The performance actually depends on the workload. Having a million cores guarantees nothing in terms of performance unless the workload is parallelizable, which in the real world is not as often the case as we think. I'm curious to see how a Xeon merged with Altera programmable fabric performs compared to ARM on a server.
  • maxxbot - Wednesday, June 22, 2016 - link

    Technically true, but every generation that millstone gets a little smaller; the die area and power needed to translate x86 into uops isn't huge and shrinks every generation.
  • jardows2 - Wednesday, June 15, 2016 - link

    Interesting. Faster in a few workloads where heavy use of multi-threading is important, but significantly slower in more single-threaded workloads. For server use, you don't always want parallelized tasks. The results are pretty consistent across all the processors tested: if the ThunderX was slower, it was slower than all the Intel chips; if it was faster, it was faster than all but the highest-end Intel chips. With the price only slightly lower than the cheapest Intel chip on sale, I don't think this is going to be a Xeon competitor at all, but it will take a few niche applications where it can do better.

    With no significant energy savings, we should be looking forward to the ThunderX2 to see if it will become a better alternative.
  • ddriver - Wednesday, June 15, 2016 - link

    There is hardly a server workload where you don't get better throughput by throwing more cores and servers at it. Servers are NOT about parallelized tasks, but about concurrent tasks. That's why, while desktops are still stuck at 8 cores, server chips come with 20 and more... Server workloads are usually very simple; it is just that there are a lot of them. They are so simple and take so little time that it literally makes no sense to parallelize them.
  • jardows2 - Wednesday, June 15, 2016 - link

    In the scenario you described, single-threaded performance takes on even more importance, thus highlighting the advantage the Xeons currently have in most server configurations.
  • niva - Wednesday, June 15, 2016 - link

    Not if the Xeon doesn't have enough cores to actually process 40+ single-threaded tasks concurrently.
  • hechacker1 - Wednesday, June 15, 2016 - link

    But kernels and VMware know how to schedule multiple threads on 1 core if it's not being fully utilized. Single-threaded IPC can make up for not having as many cores. See the iPhone SoCs for another example.
  • ddriver - Wednesday, June 15, 2016 - link

    Not if you have thousands of concurrent workloads and only like 8 cores. As fast as each core might be, the overhead from workload context switching will eat it up.
  • willis936 - Thursday, June 16, 2016 - link

    Yeah, if each task is not significantly longer than a context switch. Context switches are very fast, especially with processors with many sets of SMT registers per core.
  • ddriver - Thursday, June 16, 2016 - link

    If what you suggest were correct, then Intel would not be investing chip TDP in more cores, but in higher clocks and better single-threaded performance. Clearly this is not the case, as they are pushing 20 cores at a fairly modest 2.4 GHz.
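Two of the arguments in the thread above lend themselves to quick back-of-the-envelope checks. First, the ceiling vivs26 alludes to: Amdahl's law caps the speedup of a partially parallel workload no matter how many cores you add. A minimal sketch in Python, with an assumed 95% parallel fraction chosen purely for illustration:

    def amdahl_speedup(cores, parallel_fraction):
        """Upper bound on speedup when only part of a workload parallelizes."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    # A 95%-parallel workload tops out at 20x, even with a million cores.
    for n in (8, 48, 1_000_000):
        print(n, round(amdahl_speedup(n, 0.95), 2))
    # 8 -> 5.93, 48 -> 14.33, 1000000 -> 20.0

Second, the context-switch question raised by ddriver and willis936: whether switching overhead matters depends entirely on how long each request runs relative to the switch cost. A toy model, with an assumed cost of 5 microseconds per switch (real costs vary by CPU and OS):

    def useful_fraction(task_us, switch_us=5.0):
        """Fraction of CPU time spent on real work if every task pays one switch."""
        return task_us / (task_us + switch_us)

    # Assumed task lengths: long tasks barely notice; very short ones suffer.
    for task_us in (10_000.0, 100.0, 10.0):
        print(task_us, round(useful_fraction(task_us), 4))
    # 10000 -> 0.9995, 100 -> 0.9524, 10 -> 0.6667

Both sketches use made-up inputs; they only illustrate why the honest answer is "it depends on the workload".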
