OpenFOAM

Computational Fluid Dynamics (CFD) is a very important part of the HPC world. Several readers told us we should look into OpenFOAM, the open-source CFD package, and the kind of aerodynamics calculations that rely on such software.

We use a real-world test case as our benchmark. All tests were done with OpenFOAM 2.2.1 and openmpi-1.6.3.

We also found AVX code inside OpenFOAM 2.2.1, so we assume this is one of the cases where AVX improves floating-point performance.
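
If you are curious how a case like this is actually run, the sketch below shows the usual decompose-and-run sequence for an OpenFOAM case under Open MPI, driven from Python. The case directory name and the simpleFoam solver are placeholders of our own; the Actiflow case we used and its solver settings are not public.

```python
import subprocess

# Placeholder case directory and core count; system/decomposeParDict inside
# the case must request the same number of subdomains.
case_dir = "actiflow_case"
n_cores = 8

# Split the mesh and fields into n_cores subdomains.
subprocess.run(["decomposePar", "-case", case_dir], check=True)

# Launch the solver in parallel under Open MPI (simpleFoam is a stand-in;
# the real solver depends on the case).
subprocess.run(
    ["mpirun", "-np", str(n_cores), "simpleFoam", "-case", case_dir, "-parallel"],
    check=True,
)

# Merge the per-processor results back into a single dataset.
subprocess.run(["reconstructPar", "-case", case_dir], check=True)
```

Scaling the benchmark from one to eight cores is then simply a matter of changing n_cores and the matching decomposition.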

[Chart: Actiflow OpenFOAM Benchmark]

HPC code is where the Xeon E5 makes a lot more sense than the cheaper Xeons: with 50% more cores, the Xeon E5 is no less than 80% faster than the Xeon D. In this case, the Xeon D does not make the older Xeon E3s look ridiculous: it runs the job only about 33% faster. Let us zoom in.

[Chart: OpenFOAM scaling, 1-8 threads]

OpenFOAM scales much better on the Xeon E5: as we saw previously, a second core boosts performance by 90%, offering near-linear scalability. Double the number of cores again and you get another very respectable 60%. Eight cores are 34% faster than four, and 4.1 times faster than one.

Compare this to the horrible scaling of the Xeon E3 v2: four cores are slower than one. The Xeon E3 v3 fixed that somewhat and doubles performance over the same range. The eight cores of the Xeon D are about 2.8 times faster than one; that is decent scaling, but nowhere near the Xeon E5's. There are several reasons for this, but the most obvious one is that the Xeon E5 really benefits from having almost twice the memory bandwidth available. To be fair, Intel does not list HPC as a target market for the Xeon D. If the improved AVX2 capabilities and the pricing have tempted you to consider the Xeon D for your next workstation/HPC server, know that it cannot always deliver the full potential of its eight Broadwell cores, despite having access to DDR4-2133.
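
As a rough back-of-the-envelope illustration (our own simplification, not a measurement from the review), the reported eight-core speedups can be plugged into Amdahl's law to estimate an "effective serial fraction" for each platform; bandwidth saturation and every other bottleneck gets lumped into that single number.

```latex
% Amdahl's law with effective serial fraction f on N cores:
%   S(N) = 1 / ( f + (1 - f)/N )
% Solving for f given a measured speedup S(N):
\[
  f = \frac{\frac{1}{S(N)} - \frac{1}{N}}{1 - \frac{1}{N}}
\]
% Xeon E5, S(8) ~ 4.1:  f ~ (1/4.1 - 1/8) / (1 - 1/8) ~ 0.14
% Xeon D,  S(8) ~ 2.8:  f ~ (1/2.8 - 1/8) / (1 - 1/8) ~ 0.27
```

Under this simplified model, roughly a quarter of the run behaves as serialized (or bandwidth-bound) work on the Xeon D versus about 14% on the Xeon E5, which fits the bandwidth explanation above.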

Comments

  • JohanAnandtech - Wednesday, June 24, 2015

    Hi Patrick, the base clock of our chip is 2 GHz, not 1.9 GHz like the pre-production version that we got from Intel. I still have to check the turbo clocks, but I do believe we measured 2.6 GHz. I'll double-check.
  • pjkenned - Wednesday, June 24, 2015

    Awesome! Our ES ones were 1.9GHz.
  • Chrisrodinis1 - Tuesday, June 23, 2015

    For comparison, this server uses Xeons. It is the HP ProLiant BL460c G9 blade server: https://www.youtube.com/watch?v=0s_w8JVmvf0
  • MrDiSante - Wednesday, June 24, 2015

    Why use only -O2 when compiling the benchmarks? I would imagine that in order to squeeze out every last bit of performance, all production software is compiled with all optimizations turned up to 11. I noticed that their GitHub uses -O2 as an example - is it that TinyMemBenchmark just doesn't play nice with -O3?
  • JohanAnandtech - Wednesday, June 24, 2015

    The standard makefile had no optimization whatsoever. If you want to measure latency, you do not want maximum performance but rather accuracy, so I played it safe and used -O2. I am not convinced that all production software is compiled with all optimizations turned on.
  • diediealldie - Wednesday, June 24, 2015

    Intel seems to be disARMing them... X-Gene 2 doesn't look so promising, as they'll have to fight mighty Skylake-based Xeons, not Broadwell ones.

    Thanks for great article again.
  • jfallen - Wednesday, June 24, 2015

    Thanks Johan for the great article. I'm a tech enthusiast, and will never buy or use one of these. But it makes great reading and I appreciate the time you take to research and write the article.

    Regards
    Jordan
  • JohanAnandtech - Wednesday, June 24, 2015

    Happy to read this! :-)
  • TomWomack - Wednesday, June 24, 2015

    This looks very much consistent with my experience; the disconcertingly high idle power (I looked at the board with a thermal camera; the hot chips were the gigabit PHY, the inductors for the power supply, and the AST2400 management chip), the surprisingly good memory performance, the fairly hot SoC (running sixteen threads of number-crunching I get a power draw of 83W at the plug) and the generally pretty good computation.

    I'm not entirely sure it was a better buy for my use case than a significantly cheaper 6-core Haswell E - Haswell E is not that hot, electricity not that expensive, and from my supplier the X10SDV-F board and memory were £929 whilst Scan get me an i7-5820K board, CPU and memory for £702. And four-channel DDR4 probably is usefully faster than two-channel for what I do.

    I quite strongly don't believe in server mystique - the outbuilding is big enough that I run out of power before I run out of space for micro-ATX cases, and I am lucky enough to be doing calculations which are self-checking to the point that ECC is a waste of money.
  • JohanAnandtech - Wednesday, June 24, 2015

    Hi Tom, I believe we saw up to 90 W at the wall when running OpenFOAM (10 Gbit enabled). It is, however, less relevant for such a chip, which is not meant to be an HPC chip, as we have shown in the article. HPC really screams for an E5.
