First Impressions

Due to bad luck and timing issues, we have not been able to test the latest Intel and AMD server CPUs in our most demanding workloads. However, the benchmarks we were able to run show that AMD is offering a product that outpaces Intel on performance and steals the show on performance per dollar.

For those with little time: at the high end with socketed x86 CPUs, AMD offers up to 50 to 100% higher performance at a roughly 40% lower price. Unless you go for the low-end server CPUs, there is no contest: AMD delivers much better performance for much less money than Intel, with more memory channels and over 2x the number of PCIe lanes. Those are also PCIe 4.0 lanes. What if you want more than 2 TB of RAM in your dual-socket server? The discount in favor of AMD just became 50%.
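Putting those headline numbers together gives a quick back-of-the-envelope on performance per dollar. The 2x and 0.6x multipliers below are the best-case relative figures quoted above, not new measurements:

```python
# Best-case perf-per-dollar sketch from the figures above:
# up to 2x Intel's performance at ~60% of the price (a 40% discount).
intel_perf, intel_price = 1.0, 1.0   # Intel as the normalized baseline
amd_perf, amd_price = 2.0, 0.60     # "up to 50 to 100% higher performance", "40% lower price"

intel_value = intel_perf / intel_price
amd_value = amd_perf / amd_price

advantage = amd_value / intel_value
print(f"Best-case AMD perf-per-dollar advantage: {advantage:.1f}x")  # prints 3.3x
```

Even at the conservative end of the range (1.5x the performance at the same discount), the ratio still works out to 2.5x.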

We can only applaud this with enthusiasm, as it empowers all the professionals who do not enjoy the same negotiating power as the Amazons, Azures, and other large-scale players of this world. Spend about $4k and you get 64 second-generation EPYC cores. The single-socket (1P) parts are even better deals for those on a tight budget.

So has AMD done the unthinkable? Beaten Intel by such a large margin that there is no contest? For now, based on our preliminary testing, that is the case. The launch of AMD's second generation EPYC processors is nothing short of historic, beating the competition by a large margin in almost every metric: performance, performance per watt and performance per dollar.  

Industry analysts have stated that AMD expects to double its share of the server market by Q2 2020, and there is every reason to believe it will succeed. The AMD EPYC is an extremely attractive server platform with an unbeatable performance-per-dollar ratio.

Intel's most likely immediate defense will be lowering its prices for a select number of important customers, which won't be made public. The company is also likely to showcase its 56-core Xeon Platinum 9200 series processors, which aren't socketed, are available only from a limited number of vendors, and are listed without pricing, so there's no firm way to gauge their value. Ultimately, if Intel wanted a core-for-core comparison here, we would have expected them to reach out and offer a Xeon 9200 system to test. That didn't happen. But keep an eye on Intel's messaging in the next few months.

As you know, Ice Lake is Intel's most promising response, and that chip should arrive around mid-2020. Ice Lake promises 18% higher IPC, eight instead of six memory channels, and should be able to offer 56 or more cores in a reasonable power envelope, as it will use Intel's most advanced 10 nm process. The big questions concern the implementation of the design: whether it uses chiplets, how the memory subsystem works, and the frequencies it can reach.

Overall, AMD has done a stellar job. The city may be built on seven hills, but Rome's 8x8-core chiplet design is a true landmark for the semiconductor industry.

We'll be revisiting more big data benchmarks through August and September, and hopefully have individual chip benchmark reviews coming soon. Stay tuned for those as and when we're able to acquire the other hardware.

Can't wait? Then read our interview with AMD's SVP and GM of the Datacenter and Embedded Solutions Group, Forrest Norrod, where we talk about Naples, Rome, Milan, and Genoa. It's all coming up EPYC.

An Interview with AMD’s Forrest Norrod: Naples, Rome, Milan, & Genoa

Comments

  • quorm - Thursday, August 8, 2019 - link

    Is it possible you haven't heard of Docker?
  • abufrejoval - Sunday, August 11, 2019 - link

    or OpenVZ/Virtuozzo or quite simply cgroups. Can even nest them, including with VMs.
  • DillholeMcRib - Thursday, August 8, 2019 - link

    destruction … Intel sat on their proverbial hands too long. It's over.
  • crotach - Friday, August 9, 2019 - link

    Bye Intel!!
  • AnonCPU - Friday, August 9, 2019 - link

    The gain in hmmer on EPYC with GCC8 is not due to the TAGE predictor.
    Hmmer gains a lot on EPYC only because of GCC8: the vectorizer was improved in GCC8, and hmmer now gets heavily vectorized where it was not under GCC7. The same run on an Intel machine would have shown a similar improvement.
  • JohanAnandtech - Sunday, August 11, 2019 - link

    Thanks, do you have a source for that? Interested in learning more!
  • AnonCPU - Monday, August 12, 2019 - link

    That should be due to the improvements on loop distribution:

    "The classic loop nest optimization pass -ftree-loop-distribution has been improved and enabled by default at -O3 and above. It supports loop nest distribution in some restricted scenarios;"

    There are also some references here in what was missing for hmmer vectorization in GCC some years ago:

    And a page where you can see that LLVM was missing (at least in 2015) a good loop distribution algo useful for hmmer:
  • AnonCPU - Monday, August 12, 2019 - link

    And more:
  • just4U - Friday, August 9, 2019 - link

    I guess the question to ask now is: can they churn these puppies out like no tomorrow? Is the demand there? What about other hardware? Motherboards and the like...

    Do they have 100,000 of these ready to go? The window of opportunity for AMD is always fleeting, and if they're going to capitalize on this they need to be able to put the product out there.
  • name99 - Friday, August 9, 2019 - link

    No obvious reason why not. The chiplets are not large and TSMC ships 200 million Apple chips a year on essentially the same process. So yields should be there.
    Manufacturing the chiplet assembly also doesn't look any different from the Naples assembly (details differ, yes, but no new envelopes being pushed: no much higher frequency signals or denser traces -- the flip side to that is that there's scope there for some optimization come Milan...)

    So it seems like there is nothing to obviously hold them back...
