HPC Benchmarks

Discussing HPC benchmarks always feels like opening a can of worms to me. Each benchmark requires a thorough understanding of the software, and performance can be tuned massively with the right compiler settings. To make matters worse, many of these workloads run much faster on a GPU or a MIC, which makes CPU benchmarking irrelevant in some situations.

NAMD (NAnoscale Molecular Dynamics) is a molecular dynamics application designed for high-performance simulation of large biomolecular systems. It is rather memory-bandwidth limited: even with the advantage of an AVX-512 binary, the Xeon 8160 does not defeat the AVX2-equipped AMD EPYC 7601.

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. GROMACS (GROningen MAchine for Chemical Simulations) primarily simulates biochemical molecules with bonded interactions. Intel compiled the AMD version with the Intel compiler and AVX2, while the Intel machines ran AVX-512 binaries.

For these three tests, the CPU benchmark results do not really matter. NAMD runs about 8 times faster on an NVIDIA P100, while LAMMPS and GROMACS run about 3 times faster on a GPU and also scale out across multiple GPUs.

Monte Carlo is a numerical method that uses statistical sampling techniques to approximate solutions to quantitative problems. In finance, Monte Carlo algorithms are used to evaluate complex instruments, portfolios, and investments. This is a compute-bound, double-precision workload that does not run faster on a GPU than on Intel's AVX-512-capable Xeons. In fact, as far as we know, the best dual-socket Xeons are quite a bit faster than the P100-based Tesla. Some of these tests are also sensitive to FP latency.
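
To make the nature of this workload concrete, here is a minimal Monte Carlo sketch that prices a European call option under geometric Brownian motion. This is not Intel's benchmark kernel; the function name and all inputs are hypothetical, but it illustrates the kind of double-precision math these tests hammer on:

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n_paths=1_000_000, seed=42):
    """Estimate a European call price by sampling terminal prices under GBM."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)  # one standard normal draw per path
    # Risk-neutral terminal price: S_T = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)  # call payoff at expiry
    return np.exp(-r * T) * payoff.mean()  # discount the average payoff

# Hypothetical inputs: spot 100, strike 105, 2% rate, 20% vol, 1 year
print(mc_european_call(100.0, 105.0, 0.02, 0.20, 1.0))
```

The hot loop is millions of independent double-precision exp/sqrt evaluations, which is exactly what wide AVX-512 FP units chew through, and why a GPU holds no automatic advantage once the workload is double precision.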

Black-Scholes is another popular mathematical model used in finance. As this benchmark is also double precision, the dual-socket Xeons should be quite competitive with GPUs.
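
For comparison, here is a sketch of the closed-form Black-Scholes price that such benchmarks evaluate in bulk. Again, this is an illustration under our own assumptions, not the benchmark's actual code:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    N = NormalDist().cdf  # standard normal CDF
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

# Same hypothetical inputs as the Monte Carlo sketch above
print(bs_call(100.0, 105.0, 0.02, 0.20, 1.0))
```

A Black-Scholes benchmark typically reprices millions of independent contracts, an embarrassingly parallel stream of double-precision log/exp/CDF evaluations, which again favors wide vector FP units.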

So only Monte Carlo and Black-Scholes are really relevant, and they show that AVX-512 binaries give the Intel Xeons the edge in a limited number of HPC applications. In most HPC cases, it is probably better to buy a much more affordable CPU and add a GPU or even a MIC.

The Caveats

Intel drops three big caveats when reporting these numbers, as shown in the bullet points at the bottom of the slide.

The first is that these are single-node measurements: one 32-core EPYC versus 20/24-core Intel processors. Both of those CPUs, the Gold 6148 and the Platinum 8160, are in the same pricing ballpark as the EPYC. This is different from the 8160/8180 numbers that Intel has provided throughout the rest of its benchmarking.

The second is the compiler situation: in each benchmark, Intel used the Intel compiler for its own CPUs, but compiled the AMD code with GCC, LLVM, and the Intel compiler, choosing the best result. Because Intel is going for peak hardware performance, there is no obvious need for Intel to ensure compiler parity here. Compiler choice, as always, can have a substantial effect on real-world HPC performance.

The third caveat is that Intel itself admits that for some of these tests it has different products oriented toward these workloads, because they offer faster memory. But as we point out for most of these tests, GPUs also work well here.

Comments

  • CajunArson - Tuesday, November 28, 2017 - link

    The whole "pricetag" thing is not really an issue when you start to look at what *really* costs money in many servers in the real world. Especially when you consider that there's really no need to pay for the highest-end Xeon Platinum parts to compete with Epyc in most real-world benchmarks that matter. In general even the best Epyc 7601 is roughly equivalent to similarly priced Xeon Gold parts there.

    If you seriously think that even $10000 for a Xeon Platinum CPU is "omg expensive"... try speccing out a full load of RAM for a two or four socket server sometime and get back to me with what actually drives up the price.

    In addition to providing genuinely new features like AVX-512, Intel has already shown us very exciting technologies like Optane that will allow for lower overall prices by reducing the need to buy gobs and gobs of expensive RAM just to keep the system running.
  • eek2121 - Tuesday, November 28, 2017 - link

    One thing is clear: you've never touched enterprise hardware in your life. You don't build servers; you typically buy them from companies like Dell, etc. Also, RAM prices are typically not a big issue. Outfitting any of these systems with 128GB of ECC memory costs around $1,500 tops, and that's before any volume discounts that the company in question may get. Altogether a server with a 32 core AMD EPYC, 128 GB of RAM, and an array of 2-4 TB drives will cost under $10,000 and may typically be less than $8,000 depending on the drive configuration, so YES price is a factor when the CPU makes up half the machine's budget.
  • CajunArson - Tuesday, November 28, 2017 - link

    "One thing is clear, you've never touched enterprise hardware in your life. You don't build servers, you typically buy them from companies like Dell, etc"

    Well, maybe *you* don't build your own servers. But my point is 100% valid for pre-bought servers too, just check Dell's prices on RAM if you are such an expert.

    Incidentally, you also fell into the trap of assuming that the MSRPs of Intel CPUs are actually what those big companies like Dell are paying & charging. That clearly shows that *you* don't really know much about how the enterprise hardware market actually works.
  • eek2121 - Tuesday, November 28, 2017 - link

    As someone who has purchased servers from Dell for 'enterprise', I know exactly how much large enterprise users pay. I was using the prices quoted here for comparison. I don't know of a single large enterprise company that builds its own servers. I have a very long history of working with multiple companies.
  • sor - Tuesday, November 28, 2017 - link

    There are plenty. I’ve been in data centers filled with thousands of Supermicro or Chenbro home-built and maintained servers. The cost savings are immense, even if you partner with a systems integrator to piece together and warranty the builds you design. Anyone doing scalable cloud/web/VM probably has thousands of cheap servers rather than HP/Dell.
  • IGTrading - Tuesday, November 28, 2017 - link

    This is one thing I HATE :)

    When AMD had total IPC and power consumption superiority people said "yeah buh software's optimized for Intel so better buy Xeon" .

    When AMD had complete superiority with higher IPC and a platform so mature they were building Supercomputers out of it, people said "yeah buh Intel has a tiny bit better power consumption and over the long term..."

    Over the long term "nothing" would be the truth. :)

    Then Bulldozer came and AMD's IPC took a hit, power consumption as well, but they were still doing ok in general and had some applications where they did excel, such as encryption and INT calculations, plus they had the cost advantage.

    People didn't even listen ... and went Xeon.

    Now Intel comes and says that they've lost the power consumption crown, the core count crown, the PCIe I/O crown, the RAID crown, the RAM capacity crown, the FPU crown and the platform cost crown.

    But they come with this compilation of particular cases where Xeon has a good showing and people say: "uh oh you see ?! EPYC is still immature, we go Xeon" .

    What ?!

    Is this even happening ?! How many crowns does AMD need to win to be accepted as the better choice overall ?! :)

    Really ?! Intel writes a mostly marketing compilation of particular use cases and people take it as gospel ?!

    Honestly ... how many crowns does AMD need to win ?! :)

    In the end, I would like to point out that in an AMD EPYC article there was no mention of AMD's EPYC 2 SPEC World Records .....

    Not that it does anything to affect Intel's particular benchmarking, but really ?!

    You write an AMD EPYC article less than a week after HP announces 2 World Records with EPYC and you don't even put it in the article ?!?

    This is the nth AMD-bashing piece from AnandTech :(

    I still remember how they said that AMD's R290 for 550 USD was "not recommended" despite beating nVIDIA's 1000 USD Titan because "the fan was loud" :)

    This coming after nVIDIA's driver update resulting in dead GPUs ....

    But AMD's fan was "loud" :)

    WTF ?! ...

    For 21 years I've been in this industry, and I'm really tired of having to stomach these ...

    And yeah .. I'm not bashing AnandTech at all. I'll keep reading it like I have since back in 1998, when I first found it.

    But I really see the difference between an independent Anandtech article and this one where I'm VERY sure some Intel PR put in "a good word" and politely asked for a "positive spin" .
  • 0ldman79 - Tuesday, November 28, 2017 - link

    I agree with a lot of your points in general, though not really directed toward Anandtech.

    The article sounded pretty technical and unbiased, and the final page was listing facts: the server CPUs are similar, and Intel showed a lot of benches that highlight the similarities while ignoring the fact that their CPU costs twice as much.

    In all things, which CPU works best comes down to the actual app used. I was browsing the benches the other day, and the FX six-core actually beats the Ryzen quad- and six-core in a couple of benches (like, literally one or two), so if one of those is your be-all end-all program, Ryzen isn't worth it.

    So far it looks like AMD has a good server product. Much like the FX line, it looks like EPYC is going to be better at load bearing than at maximum speed, and honestly I'm okay with that.
  • IGTrading - Friday, December 1, 2017 - link

    I've just realized something even more despicable in this marketing compilation of particular use cases :

    1) Intel built a 2S Intel-based server that is comparable in price with the AMD build.

    2) That Intel build gets squashed in almost all benchmarks, or barely overtakes AMD in some.

    3) But then Intel added to all the graphs a build that is 11,000 USD more expensive and also performed better, without clearly stating just how much more expensive that system is.

    4) Also, Intel says that its per-core performance in some special use cases is 38% better, without saying that AMD offers 33% more cores that, overall, overtake the Intel build.

    In conclusion, the more you look at it, the more this reveals itself as an elaborate marketing trick that has been treated like "technical" information by websites like AnandTech.

    It is not. It is an elaborate marketing trick that doesn't clearly state that the system that looks so good in these particular benchmarks is 11,000 USD more expensive. That's over 60% extra money.

    Like I've said, AnandTech needs to be more critical of these marketing ploys.

    They should be informing and educating us, their readers, not dishing out marketing info and making it look technical and objective when it clearly is not.
  • Johan Steyn - Monday, December 18, 2017 - link

    The only reason I still sometimes read Anandtech's articles is that some of the readers, like you, are not stupid and don't fall for this rubbish. I get more info from the comments than from the articles themselves. WCCF has great news posts, but the comments are like from 12 year-olds.

    Anandtech used to be a top rated review site and therefore some of the old die hard readers are still commenting on these articles.
  • submux - Thursday, November 30, 2017 - link

    I think that the performance crown which AMD has just won has some catches unfortunately.

    First of all, I'll probably start working with ARM and Intel as opposed to AMD and Intel... not because AMD is not a good source, but because from a business infrastructure perspective, Intel is better positioned to provide support. In addition, I'm looking into FPGA + CPU solutions, which are not offered or even on the road map for AMD.

    Where AMD really missed the mark this time is that if AMD delivers better performance with 64 cores than Intel does with 48 on current storage technologies, both CPUs are probably facing starvation anyway. The performance difference doesn't count anymore: if there's no data to process, there's no point in upping the core performance.

    The other huge problem is that software is licensed on core count, not on sockets. As such, requiring 33% more cores to accomplish the same thing... even if it's faster, can cost A LOT more money. I suppose if AMD can get Microsoft, Oracle, SAP, etc... to license on users or flops... AMD would be better here. But with software costs far outweighing hardware costs, fake cores (hyperthreading) are far more interesting than real cores from a licensing angle.

    We already have virtualization sorted out. We have our Windows Servers running as VMs, and unless you have far too many VMs or are simply wasting resources for no reason, you are probably far over-provisioned anyway. I know very few enterprises who really need more than two half-racks of servers to handle their entire enterprise workload... and I work with 120,000+ employee enterprises regularly.

    So, then there's the other workload. There's Cloud. Meaning business systems running on a platform which developers can code against and reliably expect it to run long term. For these systems, all the code is written in interpreted or JITed languages. These platforms look distinctly different than servers. You would never for example use a 64-core dual socket server in a cloud environment. You would instead use extremely low cost, 8-16 core ARM servers with low cost SSD drives. You would have a lot of them. These systems run "Lambdas" as Amazon would call them and they scale storage out across many nodes and are designed for massive failure and therefore don't need redundancy like a virtualized data center. A 256 node cluster would be enough to run a massive operation and cost less than 2 enterprise servers and a switch. It would have 64TB aggregate storage or about 21TB effective storage with incredible response times that no VM environment could ever dream of providing. You'd buy nodes with long term support so that you can keep buying them for 10-20 years and provide a reliable platform that a business could run on for decades.

    So, again, AMD doesn't really have a home here... not unless they will provide a sub-$100 cloud node (storage included).

    I'm a huge fan of competition, but outside of HPC I just don't see where AMD really fits right now. I don't know of any businesses looking to increase their Windows/VMWare licensing costs based on core count. (Each dollar spent on another core will cost $3 on software licensed for it.) It's a terrible fit for OpenStack, which really performs best on $100 ARM devices. I suppose it could be used in Workstations, but they are generally GPU beasts, not CPU. If you need CPU, you'd prefer MIC, which is much faster. There are too many storage and RAM bottlenecks to run 64-core AMD or 48-core Intel systems as well.

    Maybe it would be suitable for a VDI environment. But we've learned that VDI doesn't really fit anywhere in enterprise. And to be fair, this is another place where we've learned that GPU is far more valuable than CPU as most CPU devoted to VDI goes to waste when there is GPU present.

    You could have a point... but I wonder if it's just too little too late.

    I also question the wisdom of investing tens of thousands of dollars in an untested platform. Consider that even if AMD is a better choice, running even a basic test would require an investment in 12 sockets' worth of these chips. To test it properly would require a minimum of $500,000 worth of hardware, and let's assume about $200,000 in human resources to get a lab up and running. If software is licensed (not trial), consider another $900,000 for VMware, Windows and maybe Oracle or SQL Server. That's a $700,000 to $1.6 million investment to see if you can save a few thousand on CPUs.

    While Fortune 500s could probably waste that kind of money on an experiment, I don't see it making sense to smaller organizations who can go with a proven platform instead.

    I think these processors will probably find a good home in Microsoft's Azure, Google's Cloud and Amazon AWS and I wish them well and really hope AMD and the cloud beasts profit from it.

    In the mean time, I'll focus on moving our platforms to Cloud systems which generally work best on Raspberry Pi style systems.
