Testing Notes

For the EPYC launch, AMD sent us their best SKU: the EPYC 7601. Meanwhile Intel gave us a choice between the top-bin Xeon Platinum 8180 and the Xeon Platinum 8176. Considering that the latter's 165W TDP is similar to that of AMD's best EPYC (180W), we felt that the Xeon 8176 was the better choice. 

Unfortunately, our time testing the two platforms has been limited. In particular, we only received AMD's EPYC system last week, and the company did not put an embargo on the results. This means that we can release the data now, in time to compare it against the new Skylake-SP Xeons; however, it also means that we've had only a handful of days to work with the platform before writing all of this up for today's embargo. We're confident in the data, but we haven't yet had a chance to tease out the nuances of EPYC, and that will have to be something we get to in a future article.

Meanwhile, we should note that we've had to retire the bulk of our historical benchmark data, as we upgraded both our compiler and OS (see below). Because of this, we only had a very limited amount of time to run additional systems, and for that reason we've opted to include Intel's Xeon E5-2690. That Sandy Bridge-EP processor is about five years old, and for customers who aren't upgrading their servers every single generation, it's these servers that we believe are most likely to be replaced in this round. So server managers looking at finally buying into new hardware can get an idea of how much return on investment to expect. 

Benchmark Configuration and Methodology

All of our testing was conducted on Ubuntu Server "Xenial" 16.04.2 LTS (Linux kernel 4.4.0, 64-bit). The compiler that ships with this distribution is GCC 5.4.0. 
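For readers who want to reproduce our setup, the snippet below is a minimal sketch (our illustration, not part of the actual test harness) of how the OS, kernel, and compiler versions can be logged alongside each benchmark run, so results from different software stacks never get mixed:

    # Minimal sketch: record the software environment next to each benchmark run.
    # Assumes a Linux system with GCC on the PATH (as on Ubuntu 16.04.2 LTS).
    import json
    import platform
    import subprocess

    def distro_name():
        # PRETTY_NAME in /etc/os-release, e.g. 'Ubuntu 16.04.2 LTS'
        with open("/etc/os-release") as f:
            for line in f:
                if line.startswith("PRETTY_NAME="):
                    return line.split("=", 1)[1].strip().strip('"')
        return "unknown"

    def software_environment():
        # 'gcc --version' prints the compiler banner on its first line
        gcc_banner = subprocess.check_output(
            ["gcc", "--version"], universal_newlines=True).splitlines()[0]
        return {
            "distribution": distro_name(),
            "kernel": platform.release(),   # e.g. '4.4.0-83-generic'
            "arch": platform.machine(),     # e.g. 'x86_64'
            "compiler": gcc_banner,         # e.g. 'gcc (Ubuntu 5.4.0-...) 5.4.0'
        }

    if __name__ == "__main__":
        print(json.dumps(software_environment(), indent=2))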

You will notice that the DRAM capacity varies among our server configurations. The reason is that we had little time left before today's launch embargo. Removing hardware is always a risk, so we decided to run our tests without significantly changing the internal hardware of the systems we received from AMD and Intel (SSDs were still replaced). As far as we know, all of our tests fit in 128 GB, so DRAM capacity should not have much influence on performance. But it will have an impact on total energy consumption, which we will discuss. 
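To make the 128 GB claim easy to check on any configuration, the sketch below (again our illustration, not the lab's tooling) samples /proc/meminfo while a benchmark runs and reports the peak system-wide memory footprint:

    # Minimal sketch: sample /proc/meminfo during a benchmark run and report the
    # peak system-wide memory footprint, to confirm the working set fits in 128 GB.
    import time

    def meminfo_kib():
        # Parse /proc/meminfo into a dict of field -> value (reported in kB).
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                values[key] = int(rest.strip().split()[0])
        return values

    def peak_usage_gib(duration_s=60, interval_s=1.0):
        total = meminfo_kib()["MemTotal"]
        peak = 0
        end = time.time() + duration_s
        while time.time() < end:
            used = total - meminfo_kib()["MemAvailable"]
            peak = max(peak, used)
            time.sleep(interval_s)
        return peak / (1024 ** 2)  # KiB -> GiB

    if __name__ == "__main__":
        # Run this alongside the benchmark under test.
        print("Peak memory footprint: %.1f GiB" % peak_usage_gib(duration_s=600))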

Last but not least, we want to note how the performance graphs have been color-coded. Orange is AMD's EPYC, dark blue is Intel's best (Skylake-SP), and light blue is the previous generation of Xeons (Xeon E5 v4). Gray has been used for the soon-to-be-replaced Xeon v1. 

Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)

CPU: Two Intel Xeon Platinum 8176 (2.1 GHz, 28 cores, 38.5 MB L3, 165W)
RAM: 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks: Samsung MZ7LM240 (boot disk); Intel SSD3710 800 GB (data)
Motherboard: Intel S2600WF (Wolf Pass baseboard)
Chipset: Intel Wellsburg B0
BIOS version: 9/02/2017
PSU: 1100W PSU (80+ Platinum)

The BIOS was left at its typical settings; we enabled Hyper-Threading and Intel virtualization (VT-x). 
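As a quick sanity check from within the OS (a sketch we add for illustration, not part of the original test procedure), the logical sibling count per socket can be compared with the physical core count in /proc/cpuinfo to confirm that Hyper-Threading is indeed active:

    # Minimal sketch: verify from /proc/cpuinfo that Hyper-Threading is active
    # (each physical core should expose two logical siblings).
    def cpuinfo_int(field):
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith(field):
                    return int(line.split(":")[1])
        raise KeyError(field)

    if __name__ == "__main__":
        siblings = cpuinfo_int("siblings")   # logical CPUs per socket
        cores = cpuinfo_int("cpu cores")     # physical cores per socket
        state = "enabled" if siblings == 2 * cores else "disabled"
        print("Hyper-Threading appears to be %s (%d siblings, %d cores)"
              % (state, siblings, cores))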

AMD EPYC 7601 Server (2U Chassis)

Five years after our "Piledriver review", a new AMD server arrives in the Sizing Servers Lab

CPU: Two AMD EPYC 7601 (2.2 GHz, 32 cores, 8x8 MB L3, 180W)
RAM: 512 GB (16x32 GB) Samsung DDR4-2666 @ 2400
Internal Disks: Samsung MZ7LM240 (boot disk); Intel SSD3710 800 GB (data)
Motherboard: AMD Speedway
BIOS version: To check.
PSU: 1100W PSU (80+ Platinum)

 

Intel's Xeon E5 Server – S2600WT (2U Chassis)

CPU: Two Intel Xeon E5-2699 v4 (2.2 GHz, 22 cores, 55 MB L3, 145W)
     Two Intel Xeon E5-2690 v3 (2.3 GHz, 14 cores, 35 MB L3, 120W)
RAM: 256 GB (16x16 GB) Kingston DDR4-2400
Internal Disks: Samsung MZ7LM240 (boot disk); Intel SSD3700 800 GB (data)
Motherboard: Intel S2600WT (Wildcat Pass baseboard)
BIOS version: 1/28/2016
PSU: Delta Electronics 750W DPS-750XB A (80+ Platinum)

The BIOS was likewise kept at its typical settings. 

HP-G8 (2U Chassis) - Xeon E5-2690

CPU: Two Intel Xeon E5-2690 (2.9 GHz, 8 cores, 20 MB L3, 135W)
RAM: 512 GB (16x32 GB) Samsung DDR3 LR-DIMM 1866 MHz @ 1333 MHz
Internal Disks: Samsung MZ7LM240 (boot disk); Intel SSD3700 800 GB (data)
Motherboard: HP G8
BIOS version: 9/23/2016
PSU: HP 750W (Gold)

 

Other Notes

All servers are fed by a standard European 230V (16 Amps max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs.


Comments (219)

  • ddriver - Wednesday, July 12, 2017 - link

    LOL, butthurt Intel fanboy claims that the only unbiased benchmark in the review is THE MOST biased benchmark in the review, the one that was done entirely for the purpose of helping Intel save face.

    Because many-core servers running 128 gigs of RAM are primarily used to run 16 megabyte databases in the real world. That's right!
  • Beany2013 - Tuesday, July 11, 2017 - link

    Sure, test against Ubuntu 17.04 if you only plan to have your server running till January, when it goes end of life. That's not a joke - non-LTS Ubuntu releases get nine months of patches and that's it.

    https://wiki.ubuntu.com/Releases

    16.04 is supported till 2021; it's what will be used in production by people who actually *buy* and *use* servers, and as such it's a perfectly representative benchmark for people like me who are looking at dropping six figures on this level of hardware soon and want to see how it performs on...goodness, realistic workloads.
  • rahvin - Wednesday, July 12, 2017 - link

    This is a silly argument. No one running these is going to be running bleeding-edge software, compiling special kernels, or putting aggressive compiler flags on anything. Enterprise runs on stable, verified software and OSes. Your typical enterprise Linux install is something like RHEL 6 or 7 or its variants (some are still running RHEL 5 with a 2.6 kernel!). Both RHEL 6 and 7 have kernels that are 5+ years old, and if you go with 6 it's closer to 10 years old.

    Enterprises don't run bleeding-edge software or compile with aggressive flags; these things create regressions and difficult-to-trace bugs that cost time and lots of money. Your average enterprise is going to care about one thing: performance/watt running something like a LAMP stack or a database on a standard vanilla distribution like RHEL. Any large enterprise is going to take a review like this and use it as a data point when they buy a server, put a standard image on it, and test their own workloads' perf/watt.

    Some of the enterprises who are more fault tolerant might run something as bleeding edge as an Ubuntu Server LTS release. This review is a fair review for the expected audience. Yes, every writer has a little bias, but I dare you to find it in this article; the fact that fanbois on both sides are complaining indicates how fair the review is.
  • jjj - Tuesday, July 11, 2017 - link

    Do remember that the future is chiplets, even for Intel.
    The two are approaching that a bit differently, as AMD had more cost constraints, so they went with a 4-core CCX that can be reused in many different products.

    Highly doubt that AMD ever goes back to a very large die, and it's not like Intel could do a monolithic 48 cores on 10nm this year or even next year; that would be even harder in a competitive market. Sure, if they had a Cortex-A75-like core and a lot less cache, that's another matter, but they are so far behind in perf/mm2 that it's hard to even imagine that they can ever be that efficient.
  • coder543 - Tuesday, July 11, 2017 - link

    Never heard the term "chiplet" before. I think AMD has adequately demonstrated the advantages (much higher yield -> lower cost, more than adequate performance), but I haven't heard Intel ever announce that they're planning to take this approach. After the embarrassment that they're experiencing now, maybe they will.
  • Ian Cutress - Tuesday, July 11, 2017 - link

    Look up Intel's EMIB. It's an obvious future for that route to take as process nodes get smaller.
  • Threska - Saturday, July 22, 2017 - link

    We may see their interposer technology (like that used with their GPUs) being put to use.
  • jeffsci - Tuesday, July 11, 2017 - link

    Benchmarking NAMD with pre-compiled binaries is pretty silly. If you can't figure out how to compile it for each and every processor of interest, you shouldn't be benchmarking it.
  • CajunArson - Tuesday, July 11, 2017 - link

    On top of all that, they couldn't even be bothered to download and install a (completely free) vanilla version that was released this year. Their version of NAMD 2.10 is from *2014*!

    http://www.ks.uiuc.edu/Development/Download/downlo...
  • tamalero - Tuesday, July 11, 2017 - link

    Do high-level servers get their software updated constantly?
    I know that for most of the critical stuff, admins only patch serious vulnerabilities and don't constantly update to newer versions just because they are available.
