First Impressions

Due to bad luck and timing issues we have not been able to test the latest Intel and AMD server CPUs in our most demanding workloads. However, the tests we were able to run show that AMD is offering a product that pushes past Intel on performance and steals the show on performance-per-dollar.

For those with little time: at the high end with socketed x86 CPUs, AMD offers you 50 to 100% higher performance while asking a 40% lower price. Unless you go for the low-end server CPUs, there is no contest: AMD offers much better performance at a much lower price than Intel, with more memory channels and over 2x the number of PCIe lanes. These are also PCIe 4.0 lanes. What if you want more than 2 TB of RAM in your dual-socket server? The discount in favor of AMD just became 50%.

We can only applaud this with enthusiasm, as it empowers all the professionals who do not enjoy the same negotiating power as the Amazons, Azures, and other large-scale players of this world. Spend about $4k and you get 64 second-generation EPYC cores. The single-socket (1P) offerings are even better deals for those on a tight budget.
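To put some rough numbers on that value argument, here is a minimal back-of-the-envelope sketch of the performance-per-dollar math. The performance uplift and both prices below are illustrative assumptions drawn from the ranges quoted above, not measured results or actual list prices.

    # Back-of-the-envelope performance-per-dollar comparison.
    # All numbers are illustrative assumptions based on the rough ranges
    # quoted above (50-100% higher performance, ~40% lower price); they are
    # not measured results or actual list prices.

    def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
        """Return a simple performance-per-dollar score."""
        return relative_perf / price_usd

    # Treat Intel's flagship as the 1.0x performance baseline.
    intel_perf, intel_price = 1.0, 10_000   # hypothetical list price
    amd_perf, amd_price = 1.75, 6_000       # assume ~75% faster at a ~40% lower price

    intel_ppd = perf_per_dollar(intel_perf, intel_price)
    amd_ppd = perf_per_dollar(amd_perf, amd_price)

    print(f"AMD advantage in perf-per-dollar: {amd_ppd / intel_ppd:.2f}x")
    # With these assumptions the gap works out to roughly 2.9x.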

So has AMD done the unthinkable? Beaten Intel by such a large margin that there is no contest? For now, based on our preliminary testing, that is the case. The launch of AMD's second-generation EPYC processors is nothing short of historic, beating the competition by a large margin in almost every metric: performance, performance per watt, and performance per dollar.

Industry analysts have stated that AMD expects to double its share of the server market by Q2 2020, and there is every reason to believe that AMD will succeed. The AMD EPYC is an extremely attractive server platform with an unbeatable performance-per-dollar ratio.

Intel's most likely immediate defense will be to lower prices for a select number of important customers, in deals that won't be made public. The company is also likely to showcase its 56-core Xeon Platinum 9200 series processors, which aren't socketed, are only available from a limited number of vendors, and are listed without pricing, so there's no firm way to determine their value. Ultimately, if Intel wanted a core-for-core comparison here, we would have expected them to reach out and offer a Xeon 9200 system to test. That didn't happen. But keep an eye on Intel's messaging over the next few months.

As you know, Ice Lake is Intel's most promising response, and that chip should be available around the middle of 2020. Ice Lake promises 18% higher IPC, eight instead of six memory channels, and should be able to offer 56 or more cores in a reasonable power envelope, as it will use Intel's most advanced 10 nm process. The big questions will be around the implementation of the design: whether it uses chiplets, how the memory subsystem works, and the frequencies it can reach.

Overall, AMD has done a stellar job. The city may be built on seven hills, but Rome's 8x8-core chiplet design is a true cultural phenomenon of the semiconductor industry.

We'll be revisiting more big data benchmarks through August and September, and hopefully have individual chip benchmark reviews coming soon. Stay tuned for those as and when we're able to acquire the other hardware.

Can't wait? Then read our interview with AMD's SVP and GM of the Datacenter and Embedded Solutions Group, Forrest Norrod, where we talk about Naples, Rome, Milan, and Genoa. It's all coming up EPYC.

184 Comments

  • fallaha56 - Saturday, August 10, 2019 - link

    Perhaps Hyperthreading should be off on the Intel systems to better reflect e.g. Google's reality / proper security standards, now that we know Intel isn't secure?
  • Targon - Monday, August 12, 2019 - link

    That is why Google is going to be buying many EPYC-based servers going forward. Mitigations do not mean a problem has been fixed.
  • imaskar - Wednesday, August 14, 2019 - link

    Why do you think AWS, GCP, Azure, etc. mitigated the vulnerabilities? They only patched Meltdown at most. All the other mitigations are too costly and hard to execute. They just don't care that much about your data. Lose 2x cloud capacity for that? No way. And for security-conscious serious customers they offer private clusters, so your workloads run on separate servers.
  • ballsystemlord - Saturday, August 10, 2019 - link

    Spelling and grammar errors:

    "This happened in almost every OS, and in some cases we saw reports that system administrators and others had to do quite a bit optimization work to get the best performance out of the EPYC 7001 series."
    Missing "of":
    "This happened in almost every OS, and in some cases we saw reports that system administrators and others had to do quite a bit of optimization work to get the best performance out of the EPYC 7001 series."

    "...to us it is simply is ridiculous that Intel expect enterprise users to cough up another few thousand dollars per CPU for a model that supports 2 TB,..."
    Excess "is" and missing "s":
    "...to us it is simply ridiculous that Intel expects enterprise users to cough up another few thousand dollars per CPU for a model that supports 2 TB,..."

    "Although the 225W TDP CPUs needs extra heatspipes and heatsinks, there are still running on air cooling..."
    Excess "s" and incorrect "there",
    "Although the 225W TDP CPUs need extra heatspipes and heatsinks, they're still running on air cooling..."

    "The Intel L3-cache keeps latency consistingy low as long as you stay within the L3-cache."
    "consistently" not "consistingy":
    "The Intel L3-cache keeps latency consistently low as long as you stay within the L3-cache."

    "For example keeping a large part of the index in the cache improve performance..."
    Missing comma and missing "s" (you might also consider making cache plural, but you seem to be talking strictly about the L3):
    "For example, keeping a large part of the index in the cache improves performance..."

    "That is a real thing is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."
    Missing "it":
    "That it is a real thing is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."
    In general, the beginning of the sentence appears quite poorly worded; how about:
    "That L3 cache latency is a matter for concern is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."

    "In NPS4, the NUMA domains are reported to software in such a way as it chiplets always access the near (2 channels) DRAM."
    Missing "s":
    "In NPS4, the NUMA domains are reported to software in such a way as its chiplets always access the near (2 channels) DRAM."

    "The fact that the EPYC 7002 has higher DRAM bandwidth is clearly visible."
    Wrong numbers (maybe you meant the series?):
    "The fact that the EPYC 7742 has higher DRAM bandwidth is clearly visible."

    "...but show very significant improvements on EPYC 7002."
    Wrong numbers (maybe you meant the series?):
    "...but show very significant improvements on EPYC 7742."

    "Using older garbage collector because they happen to better at Specjbb"
    Badly worded.
    "Using an older garbage collector because it happens to be better at Specjbb"

    "For those with little time: at the high end with socketed x86 CPUs, AMD offers you up to 50 to 100% higher performance while offering a 40% lower price."
    "Up to" requires 1 metric, not 2. Try:
    "For those with little time: at the high end with socketed x86 CPUs, AMD offers you from 50 up to 100% higher performance while offering a 40% lower price."
  • wrkingclass_hero - Sunday, August 11, 2019 - link

    What does AMD have to do to get a Gold or Platinum recommendation?
  • oRAirwolf - Thursday, August 15, 2019 - link

    This is a good question.
  • imaskar - Sunday, August 11, 2019 - link

    Single-thread performance is very important for those who live in the cloud. A quick example: suppose I provision a 2-core/4 GB VM (these are of course hyperthreads). On AWS I have a choice between m5 and m5a, where AMD is cheaper. What do I sacrifice? Not really throughput, because you don't run your prod workloads at 100% CPU. But there is the latency. If those cores are clocked lower, I would get the same number of responses, just slower. And since in the microservice world you have a chain of calls, you pay that penalty ten times over. Is it worth it?
    That was the case for 1st gen EPYC. Will 2nd gen have latency parity?
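    A minimal sketch of that compounding effect: the chain depth, the base per-call latency, and the 10% clock penalty below are assumed, illustrative numbers, not measurements of m5 versus m5a instances.

    # How a per-hop slowdown compounds across a chain of microservice calls.
    # The 10-hop chain, the 20 ms base service time, and the 10% clock-speed
    # penalty are illustrative assumptions, not m5/m5a measurements.

    CHAIN_DEPTH = 10          # number of chained service calls
    BASE_LATENCY_MS = 20.0    # CPU-bound service time per call on the faster cores
    CLOCK_PENALTY = 0.10      # assumed 10% lower clocks on the cheaper instance

    fast_total = CHAIN_DEPTH * BASE_LATENCY_MS
    slow_total = CHAIN_DEPTH * BASE_LATENCY_MS * (1 + CLOCK_PENALTY)

    print(f"end-to-end latency, faster cores: {fast_total:.0f} ms")
    print(f"end-to-end latency, slower cores: {slow_total:.0f} ms")
    print(f"extra latency across the chain:   {slow_total - fast_total:.0f} ms")
    # The relative slowdown per hop is constant, but the absolute extra
    # milliseconds add up because every hop in the chain pays the penalty.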
  • notashill - Sunday, August 11, 2019 - link

    It's hard to say until the cloud instances actually launch.

    The current m5a instances are using a custom SKU which is clocked at 2.5GHz max boost.

    Rome's IPC is ~15% higher and clock speeds are higher across the board, so single-threaded performance should be quite a bit better, but ultimately the exact numbers will depend on which SKUs the cloud vendors decide to use and how high they clock.
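    A rough sketch of that arithmetic: the ~15% IPC figure is taken from the comment above, while the clock speeds are hypothetical placeholders rather than actual cloud SKU specifications.

    # Rough single-thread estimate: relative performance ~= IPC gain x clock ratio.
    # The ~15% IPC uplift comes from the comment above; the clock speeds are
    # hypothetical placeholders, not actual cloud SKU specifications.

    ipc_gain = 1.15           # Zen 2 vs Zen 1, approximate
    old_boost_ghz = 2.5       # current m5a custom SKU max boost (per the comment)
    new_boost_ghz = 3.2       # assumed boost for a hypothetical Rome cloud SKU

    single_thread_uplift = ipc_gain * (new_boost_ghz / old_boost_ghz)
    print(f"estimated single-thread uplift: {single_thread_uplift:.2f}x")
    # With these assumptions, roughly 1.47x the single-thread performance.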
  • duploxxx - Tuesday, August 13, 2019 - link

    Did you actually ever work with hypervisors?

    There are other things than raw clock speed... it's all about scheduling, and when there are more cores per socket available the scheduling is more relaxed, with less ready time. EPYC generation 1 is already awesome for hypervisors, a way better choice than most Intel counterparts, for sure if you look at socket cost... but then again I am probably talking to a typical retard ****
  • JoeBraga - Wednesday, August 14, 2019 - link

    Can you explain better? Isn't the license bought by the quantity of cores or per socket?
