OpenFOAM

Several of our readers have already suggested that we look into OpenFOAM. That's easier said than done, as good benchmarking requires mastering the software to some degree. Luckily, my lab was able to work with the professionals of Actiflow, a company that specialises in combining aerodynamics and product design. Calculating aerodynamics requires CFD software, and Actiflow uses OpenFOAM for this. To give you an idea of what these skilled engineers can do: they worked with Ferrari to improve the underbody airflow of the Ferrari 599 and increase its downforce.

The Ferrari 599: an improved product thanks to OpenFOAM.

We were allowed to use one of their test cases as a benchmark, but we are not allowed to discuss the specific solver. All tests were done with OpenFOAM 2.2.1 and OpenMPI 1.6.3.
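
For those who want to try something similar: an OpenFOAM case is typically spread across nodes by decomposing the mesh with decomposePar and then launching the solver under MPI with the -parallel flag. The sketch below is a generic illustration only; the solver name is a placeholder, and the subdomain count and hostfile are our assumptions, not details of Actiflow's case.

    // system/decomposeParDict (abbreviated; FoamFile header omitted)
    numberOfSubdomains  32;       // total MPI ranks, e.g. 4 nodes x 8 cores
    method              scotch;   // graph-based partitioner, needs no manual hints

    # decompose the mesh, then run the solver across the nodes in the hostfile
    decomposePar
    mpirun -np 32 --hostfile hosts someFoamSolver -parallel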

Many CFD calculations do not scale well on clusters unless you use InfiniBand, and InfiniBand switches are quite expensive; even then there are limits to scaling. Unfortunately, we do not have an InfiniBand switch in the lab. What we do have is a good 10G Ethernet infrastructure, which, although its latency is higher than InfiniBand's, performs rather well.
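
How much the interconnect's latency hurts is easy to probe with a classic MPI ping-pong test. The C program below is a minimal sketch of our own (not part of the benchmark); it measures the round-trip time of small messages between two ranks, which is the traffic pattern that dominates tightly coupled CFD solvers.

    /* pingpong.c - minimal MPI latency probe (illustrative sketch)
       build: mpicc -O2 pingpong.c -o pingpong
       run:   mpirun -np 2 --hostfile hosts ./pingpong             */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        const int iters = 10000;
        char buf[8];   /* tiny message: latency-bound, not bandwidth-bound */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {        /* send, then wait for the echo */
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) { /* echo everything straight back */
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            printf("avg round trip: %.2f us\n",
                   (MPI_Wtime() - t0) / iters * 1e6);

        MPI_Finalize();
        return 0;
    }

On 10G Ethernet such a probe typically reports tens of microseconds per round trip versus a few microseconds on InfiniBand, which is why the latter is the default choice for scaling CFD across many nodes.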

So we added a fifth configuration to our testing: the quad-node Intel Server System H2200JF. The only CPU we currently have eight of is the Xeon E5-2650L at 1.8GHz. That is not a perfect match, but it is the start of our first clustered HPC benchmark. This way we can get an idea of whether the Xeon E7 v2 platform can replace a complete quad-node cluster while also offering much higher RAM capacity.

OpenFOAM test

The results are pretty amazing: the quad Xeon E7-4890 v2 runs circles around our quad-node HPC cluster. Even if we were to outfit the cluster with 50% higher clocked Xeons, the quad Xeon E7 v2 would still be the winner. Of course, there is no denying that our quad-node cluster is a lot cheaper to buy; even with an InfiniBand switch, an HPC cluster of dual-socket servers costs far less than a quad-socket Intel Xeon E7 v2 system.

However, this bodes well for the soon-to-be-released Xeon E5-46xx v2 parts: QPI links offer even lower latency than InfiniBand. But since we do not have a lot of HPC testing experience, we'll leave it to our readers to discuss this in more detail.

Another interesting detail is that the Xeon E5-2650L at 1.8GHz is about twice as fast as the Xeon L5650. We found AVX code inside OpenFOAM 2.2.1, so we assume this is one of the cases where AVX improves floating-point performance tremendously. Seasoned OpenFOAM users, let us know whether this is an accurate assessment.
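
To illustrate why AVX can have such an impact: one 256-bit AVX instruction processes four doubles where SSE handles two and scalar code one. The fragment below is our own minimal example of vectorizing a daxpy-style update, a loop shape common in the linear solvers CFD codes rely on; it is not taken from the OpenFOAM sources.

    /* y[i] += a * x[i], four doubles at a time with AVX intrinsics
       build: gcc -O2 -mavx daxpy_avx.c -c                          */
    #include <immintrin.h>

    void daxpy_avx(double a, const double *x, double *y, int n) {
        __m256d va = _mm256_set1_pd(a);      /* broadcast a to all 4 lanes */
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m256d vx = _mm256_loadu_pd(x + i);
            __m256d vy = _mm256_loadu_pd(y + i);
            /* no FMA here: Sandy/Ivy Bridge AVX has separate mul and add */
            vy = _mm256_add_pd(vy, _mm256_mul_pd(va, vx));
            _mm256_storeu_pd(y + i, vy);
        }
        for (; i < n; i++)                   /* scalar tail */
            y[i] += a * x[i];
    }

Note that the Westmere-based L5650 cannot execute an AVX path at all, while the Sandy Bridge-based E5-2650L can, which would fit the factor-of-two gap we measured.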

Comments

  • JohanAnandtech - Friday, February 21, 2014 - link

    I don't see the error. "Beckton" (Nehalem-EX, X7560) is at 2.4 GHz
  • mslasm - Sunday, February 23, 2014 - link

    > I don't see the error.

    The article says "The Opteron core is also better than most people think: at 2.4GHz it would deliver about 2481 MIPs." - but, according to the graph, the Opteron already delivers 2723 MIPS @ 2.3GHz. So it is puzzling to see that it "would" deliver fewer MIPS (2481 vs 2723) at a higher frequency (2.4 vs 2.3GHz), regardless of any Intel results/frequencies.
  • silverblue - Saturday, February 22, 2014 - link

    It's entirely possible that the score is down to the 6376's 3.2GHz turbo mode.
  • plext0r - Friday, February 21, 2014 - link

    Would be nice to run benchmarks against a Quad E5-4650 system for comparison.
  • blaktron - Friday, February 21, 2014 - link

    ... you know you can't, right?
  • blaktron - Friday, February 21, 2014 - link

    Nevermind, read v2 there where you didn't write it. Too much coffee....
  • usernametaken76 - Friday, February 21, 2014 - link

    For the more typo-sensitive reader (perhaps both technically astute and typo-sensitive):

    "A question like "Does the SPARC T5 also support both single-threaded and multi-threaded applications?" must sound particularly hilarious to the our technically astute readers."

    ...to the our...
  • JohanAnandtech - Friday, February 21, 2014 - link

    Fixed. Thx!
  • TiGr1982 - Friday, February 21, 2014 - link

    From the conclusion:
    "The Xeon E7 v2 chips are slated to remain in data centers for the next several years as the most robust—and most expensive—offerings from Intel."

    I don't think it will really be "several" years - in maybe 1-2 years this Ivy Bridge-EX-based E7 v2 will probably be superseded by a Haswell-EX-based E7 v3, with Haswell cores offering AVX2/FMA (which should make a difference in professional floating-point calculations and data processing) and DDR4 support.
  • Kevin G - Friday, February 21, 2014 - link

    The Ivy Bridge-EX -> Haswell-EX transition will mimic the Nehalem-EX -> Westmere-EX transition in that the core systems provided by the big OEMs will stay the same. The OEMs will offer Haswell-EX as a drop-in replacement in their existing socket 2011v1 systems. Haswell-EX -> Broadwell-EX will again use the same socket and follow a similarly quick transition. Skylake-EX will bring a new socket design (perhaps with some optical interconnects?).

    At some point Intel will offer new memory buffer chips to support DDR4. This will likely require swapping out all the memory daughter cards, but the motherboards from the big OEMs shouldn't change. There may also be a period where these large systems can be initially configured with either DDR3 or DDR4, based on customer requests.
