HPC: Fluid Dynamics with OpenFOAM

Computational Fluid Dynamics (CFD) is an important part of the HPC world. Several readers told us that we should look into OpenFOAM, and my lab was able to work with the professionals at Actiflow. Actiflow specializes in combining aerodynamics and product design; calculating aerodynamics requires CFD software, and Actiflow uses OpenFOAM to accomplish this. To give you an idea of what these engineers can do: they worked with Ferrari to improve the underbody airflow of the Ferrari 599 and increase its downforce.

We were allowed to use one of their test cases as a benchmark; however, we are not allowed to discuss the specific solver. All tests were done with OpenFOAM 2.2.1 and openmpi-1.6.3. The reason we still run OpenFOAM 2.2.1 is that our current test case does not work well with newer versions.

We also found AVX code inside OpenFOAM 2.2.1, so we assume that this is one of the cases where AVX improves floating point performance.
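To illustrate what that means in practice, here is a minimal sketch (a generic example, not code taken from OpenFOAM's source) of the kind of 256-bit AVX loop a compiler or intrinsics programmer produces for simple field arithmetic: one instruction operates on four doubles at a time, which boosts FP throughput but also triggers the lower AVX clocks discussed below.

    #include <immintrin.h>
    #include <stddef.h>

    /* Generic AVX illustration (not taken from OpenFOAM): add field b into
     * field a, four doubles per 256-bit instruction. */
    void add_fields(double *restrict a, const double *restrict b, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m256d va = _mm256_loadu_pd(&a[i]);            /* load 4 doubles    */
            __m256d vb = _mm256_loadu_pd(&b[i]);
            _mm256_storeu_pd(&a[i], _mm256_add_pd(va, vb)); /* 4 FP adds at once */
        }
        for (; i < n; i++)  /* scalar tail for the leftover elements */
            a[i] += b[i];
    }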

[Chart: OpenFOAM benchmark results]

As this is AVX code, the clock speed of our Xeon processors can drop below Intel's official (non-AVX) specifications, and Turbo Boost speeds are also lower. Despite the fact that on Broadwell only the cores actively running AVX code reduce their clock (the others can continue at higher speeds), OpenFOAM does not run appreciably faster on the top-of-the-line Xeon E5 v4 than it did on the E5 v3.

It is not as if OpenFOAM does not scale: 22% more cores deliver 13% higher performance (E5-2699 v4 vs. E5-2695 v4). Rather, our first impression is that the new Xeon E5 v4 has to lower its clock speed more than the old one. The official specifications tell us that both the Xeon E5-2699 v4 and v3 should run AVX code at up to 2.6 GHz with all cores active. In reality, however, Broadwell runs at a lower clock on average.
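As a quick sanity check on that scaling number (the E5-2699 v4 has 22 cores versus 18 for the E5-2695 v4):

    core ratio:  22 / 18     ≈ 1.22   (22% more cores)
    speedup:     1.13                 (13% higher performance)
    efficiency:  1.13 / 1.22 ≈ 0.93

In other words, the extra cores are used at roughly 93% of ideal scaling, which is why we point the finger at clock speed rather than at OpenFOAM's scalability.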

Comments

  • patrickjp93 - Friday, April 1, 2016 - link

    Knights Landing: 730 mm^2, also on the 14nm platform
  • extide - Friday, April 1, 2016 - link

    Is it really that big..? Wow, I knew it was big, but didn't know it was that big. Got a source on that?
  • Kevin G - Friday, April 8, 2016 - link

    I'll second a link for a source. I knew it'd be big but that big?
  • extide - Friday, April 1, 2016 - link

    I know you meant Reticle, but that was a pretty funny typo, heh.
  • Kevin G - Friday, April 8, 2016 - link

    Autocorrect has gotten the best of me yet again.
  • extide - Friday, April 1, 2016 - link

    And, I know how big GM200 and Fiji are, but I am talking about big GPUs on 14/16nm. All signs are currently pointing to <300mm^2 for the first round of 14/16nm GPUs.
  • lorribot - Thursday, March 31, 2016 - link

    Given the way Microsoft and others are now licensing by the core and in large, non-splittable packages (Windows Server 2016 Datacenter is sold in blocks of 16 cores, so a dual-socket server with 44 cores would need 48 core licences), the increasing core count has limited appeal over smaller numbers of faster cores when looking at virtualised environments.
    Those still in the physical world will still have to pay per core, but may have to buy 4 standard Windows licenses.
    When it comes to doing your testing, it should reflect these costs and compare total bang per buck, not just raw performance.
    Red Hat still licenses per socket, but don't be surprised if they go per core too.
  • JohanAnandtech - Friday, April 1, 2016 - link

    Back in 2008, I had a salesperson explain Microsoft's license models to me in our lab. From that point on, we have invested most of our time and resources in Linux server software. :-D
  • extide - Friday, April 1, 2016 - link

    Enterprise Linux isn't free either, ya know.
  • rahvin - Friday, April 1, 2016 - link

    Support isn't free on the FOSS side, but the software is. Red Hat is never going to charge more per core for support; that's ridiculous and would result in rivals stealing their support contracts. If licensing costs are so bad that you are dumping hardware, you really should be looking at moving services to Linux and virtualizing the Windows servers so you can limit the core count and provide more horsepower.

    Anyone putting Microsoft on bare hardware these days is nuts, although the consolation is that they get to pay MS's exorbitant software tax. Linux should be the core component of any IT infrastructure, with virtualized servers where you need proprietary server software.
