OpenFOAM

Several of our readers have already suggested that we look into OpenFOAM. That's easier said than done, as good benchmarking requires that you master the software to some degree. Luckily, my lab was able to work with the professionals of Actiflow. Actiflow specializes in combining aerodynamics and product design; calculating aerodynamics involves the use of CFD software, and Actiflow uses OpenFOAM for this. To give you an idea of what these skilled engineers can do: they worked with Ferrari to improve the underbody airflow of the Ferrari 599 and increase its downforce.

The Ferrari 599: an improved product thanks to OpenFOAM.

We were allowed to use one of their test cases as a benchmark, but we are not allowed to discuss the specific solver. All tests were done with OpenFOAM 2.2.1 and Open MPI 1.6.3.

Many CFD calculations do not scale well on clusters unless you use InfiniBand, and InfiniBand switches are quite expensive; even then there are limits to scaling. Unfortunately, we do not have an InfiniBand switch in the lab. We do, however, have a good 10G Ethernet infrastructure, which, although it cannot match InfiniBand's low latency, performs rather well.
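
To make the latency point concrete, here is a toy scaling model. This is a minimal sketch with made-up solver numbers, not our measured data: the per-iteration compute time and message count below are hypothetical placeholders, and only the two latency figures are in the range of typically published values for each interconnect.

```python
# Toy model: one solver iteration splits its compute time perfectly
# across nodes, then each node waits on a fixed number of MPI halo
# exchanges. Latency that does not shrink with node count caps speedup.

def speedup(nodes: int, t_compute: float, t_latency: float, msgs: int) -> float:
    t_parallel = t_compute / nodes + msgs * t_latency
    return t_compute / t_parallel

T_COMPUTE = 0.1     # seconds per iteration on one node (hypothetical)
MSGS = 200          # halo-exchange messages per iteration (hypothetical)
LAT_10GBE = 50e-6   # ~50 microseconds MPI latency over 10G Ethernet
LAT_IB = 2e-6       # ~2 microseconds MPI latency over InfiniBand

for n in (2, 4, 8):
    print(f"{n} nodes: {speedup(n, T_COMPUTE, LAT_10GBE, MSGS):.1f}x on 10GbE, "
          f"{speedup(n, T_COMPUTE, LAT_IB, MSGS):.1f}x on InfiniBand")
```

Even in this crude model the Ethernet cluster falls further behind with every node added, which is exactly why CFD shops pay for InfiniBand.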

So we added a fifth configuration to our testing: the quad-node Intel Server System H2200JF. The only CPU we have eight of right now is the Xeon E5-2650L at 1.8GHz. That is not a perfect match, but it is the start of our first clustered HPC benchmark. This way we can get an idea of whether the Xeon E7 v2 platform can replace a complete quad-node cluster while at the same time offering much higher RAM capacity.

OpenFOAM test

The results are pretty amazing: the quad Xeon E7-4890 v2 runs circles around our quad-node HPC cluster. Even if we were to outfit the cluster with 50% higher clocked Xeons, the quad Xeon E7 v2 would still be the winner. Of course, there is no denying that our quad-node cluster is a lot cheaper to buy; even with an InfiniBand switch, an HPC cluster of dual-socket servers costs a lot less than a quad-socket Intel Xeon E7 v2 system.

However, this bodes well for the soon-to-be-released Xeon E5-46xx v2 parts, as QPI links offer even lower latency than InfiniBand. But since we do not have much HPC testing experience, we'll leave it to our readers to discuss this in more detail.

Another interesting detail is that the Xeon E5-2650L at 1.8GHz is about twice as fast as the Xeon L5650. We found AVX code inside OpenFOAM 2.2.1, so we assume this is one of the cases where AVX improves floating-point performance tremendously. Seasoned OpenFOAM users, let us know whether this is an accurate assessment.
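
As a rough sanity check on that theory, you can compare the theoretical peak double-precision throughput of the two chips: a Sandy Bridge core with AVX can retire up to 8 DP FLOPs per cycle (256-bit add plus multiply), while a Westmere core peaks at 4 with 128-bit SSE. A minimal back-of-the-envelope sketch:

```python
# Theoretical peak DP GFLOPS = cores x GHz x FLOPs per cycle.
# Peak figures only; sustained CFD throughput is far lower.

def peak_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    return cores * ghz * flops_per_cycle

e5_2650l = peak_gflops(cores=8, ghz=1.8, flops_per_cycle=8)  # Sandy Bridge, AVX
l5650 = peak_gflops(cores=6, ghz=2.26, flops_per_cycle=4)    # Westmere, SSE

print(f"Xeon E5-2650L peak: {e5_2650l:.0f} GFLOPS")  # ~115
print(f"Xeon L5650 peak: {l5650:.0f} GFLOPS")        # ~54
print(f"ratio: {e5_2650l / l5650:.1f}x")             # ~2.1x
```

The peak ratio of roughly 2.1x lines up with what we measured, though peak FLOPS rarely translate one-to-one into sustained solver performance.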

Comments

  • colonelclaw - Monday, February 24, 2014

    I would like to see V-Ray benchmarked. It's fast becoming an industry standard across a number of 3D industries (it started in ArchVis and is now moving into feature-film animation and FX).
  • PowerTrumps - Sunday, February 23, 2014

    The author is misleading with his statements and data, not to mention that @Brutalizer comes across as very knowledgeable but only backs up his claims of Oracle server performance with platitudes and boasts.

    Starting with the article: comparing CPUs with different core counts, even if you adjust for frequency, is misleading. You need to normalize the values to show the per-core improvement. Staying at the socket level is useless and lazy. Yes, Intel customers buy servers by the socket, but per-core performance is a much better metric for understanding what they are really gaining. To say there is a 20 or 30% gain when there might be 50% more cores tells me the per-core performance is actually lower than Westmere. This matters when running software like Oracle, which would price a 15-core socket at 7.5, or rather 8, Oracle licenses. For software licensed by the core, customers should demand the highest performance available; otherwise all you do is subsidize Uncle Larry's island.

    As for the Power comparisons in the SAP benchmarks: you compare a 60-core system against a 32-core, N-1 generation Power7 server. Power servers scale almost linearly with frequency, and the 8-core @ 4.22GHz scores 54,700, so extrapolating to 4 sockets or 32 cores puts us around 200K SAPS (see the sketch after this thread). That is quite a bit more than the 60-core Dell. Also, you could deploy a Power server as a standalone server; nobody would deploy a mission-critical workload on a standalone x86 server. Yes, I'm sure somebody will argue with me and say they do and have done it for years. OK, but by and large we know x86 servers are always clustered and used to scale out.

    Secondly, you claim the Power servers are expensive. When was the last time you priced one, Mr. De Gelas? You can get a Power7+ 7R1, 7R2, or 7R4 at price parity with a typical, comparably equipped x86 server including Linux and VMware. The 710 and 730 servers would be just a bit more, but definitely competitive. Factor in the software savings and the reduction in the number of servers required, and the TCA and TCO quickly favor Power. I do it all of the time and can back it up with hard data. You can run Power servers up to 90% utilization, but you rarely run x86 over 30%, maybe 35% tops.

    With regard to @Brutalizer: big claims of big servers, up to 96TB of RAM. Who needs that? Who needs a server with hundreds or thousands of cores? The Oracle M6-32 has 1,000 DIMMs to reach 32TB of memory; tell us how that influences the MTBF of the server, since the number of components is a major factor in that calculation. Next, you scoff at IBM for comparing against older servers. That is because they are talking to customers who are running older servers: consolidate those older servers onto a few, or just one, inherently reliable server, and nothing is more reliable than an IBM mainframe, followed by IBM Power servers. Oracle's M6-32 and M5-32 are just cabled-together T5 servers scaled back from 16 to 12 cores. They have little RAS and are built for marketing hype and to drive Oracle software licensing revenue.

    You say the Oracle M processor pricing is X and then try to paint a picture that Power servers are more expensive by comparing a 32-socket machine to an 8-socket one. Really? A V8 luxury car is more expensive than a 4-cylinder econobox. The server price is moot when the real cost is the software you run on it; with Oracle EE + RAC at $70,500 per core plus 22% annual maintenance, it matters. On Power I only have to license the cores I need: if I need 2 cores for Oracle, then I license 2 cores. On x86, a 15-core socket is 8 licenses (15 x 0.5 = 7.5, which rounds up to 8). The Oracle M series is also 0.5, so your 128 cores on SAP S&D against my 64-core Power7 at 1.0 puts us about equal. However, most customers don't run their servers with one workload, and you may say your LDOMs are efficient, but compared to the Power Hypervisor they won't hold a candle at efficiently using all of the cores and threads in true multi-threaded fashion.

    With Power8 coming out soon, both Intel and Oracle will go back to smelling the fumes of Power servers. To the customers out there: it isn't about being Ford or Chevy. This isn't college; don't root for your team even when they are no good. Your business has to not only survive but hopefully thrive. Do that on the platform that controls the largest costs, which are software and full-time equivalents: that is Power servers.
  • Phil_Oracle - Monday, February 24, 2014

    Well, I must say that this article is clearly Intel-biased, with a lot of misleading and downright wrong statements about Oracle and SPARC. Here are some accurate and substantiated counters:

    "Sun/Oracle's server CPUs have been lagging severely in performance"
    This is wrong. Since the SPARC T4 release, and now the SPARC T5 and SPARC M6 announcements, Oracle has announced 20+ world-record results across *all* of the public, audited benchmarks, from TPC-C and TPC-H @ 1TB, 3TB, and 10TB to SPECjEnterprise2010 and SPECjbb2013. Many of them are still valid today, almost a year later.

    What I'd like to ask is: where are the 8-socket Xeon E7 v2 benchmarks to compare against SPARC? There's only one today, SAP, and it demonstrates neither database performance nor Java application performance.
    There are also no 4-socket or 8-socket results on TPC-C, TPC-H, or SPECjEnterprise2010.

    Even on SPECjbb2013 there's just a 4-socket result, and if you compare performance per core, the SPARC T5-2 @ 114,492 max-jOPS (just 32 cores) has a 1.3x performance/core advantage over the NEC Express5800/A040b with 60 Intel E7-4890 v2 2.8GHz cores @ 177,753 max-jOPS.

    "As usual, the published benchmarks are very vague and are only available for the top models "
    As of today, there is not a single real-world application/database benchmark that shows Xeon having superior throughput, response times, or even price/performance when comparing systems with the same number of CPUs against the SPARC T5. You can go here to see all the comparisons, with full transparency: https://blogs.oracle.com/BestPerf/

    "and the best performing systems come with astronomic price tags ($950,000 for two servers, some networking, and storage... really?)."
    You do realize you are linking to Oracle Exadata, which isn't a server but an engineered system with many servers, storage, and networking all built in, and based on Xeon?

    Why are you not linking to SPARC T5 server pricing, since that's what you are trying to discredit? Here's the SPARC T5-2 pricing, which is very aggressive compared to x86 and IBM Power7+ systems:
    https://shop.oracle.com/pls/ostore/f?p=dstore:5:90...

    Or better yet, look at a public benchmark where full HW and SW pricing is disclosed.

    A SPARC T5-4 is 2.4x faster than the 8-socket Xeon E7-4870 based HP DL980 G7 on TPC-H at 10TB.
    The SPARC T5-4 server HW, fully configured, costs $268,853; the HP DL980 costs $268,431.

    At basically the same cost, the SPARC T5 is 2.4x faster than Westmere-EX. Where's the Xeon E7 v2 result to showcase that it's 2x faster?

    Details of pricing and results are here.
    http://www.tpc.org/tpch/results/tpch_result_detail...
    http://www.tpc.org/results/individual_results/orac...
    http://c970058.r58.cf2.rackcdn.com/individual_resu...

    On the TPC-C OLTP benchmark, a SPARC T5-8 has a price/performance of $0.55/tpmC, versus $0.89/tpmC for the fastest Oracle X2-8 and $0.59/tpmC for the IBM x3850. The SPARC T5-8 is 70% faster per CPU than the Westmere-EX based Oracle X2-8. http://www.tpc.org/tpcc/results/tpcc_results.asp?o...
  • Haravikk - Tuesday, February 25, 2014

    I wouldn't mind a 60-core Xeon in the next version of Apple's Mac Pro ;)
  • Desert Dude - Thursday, April 3, 2014

    Interesting discussions. Just for clarification, there is an x86 server that goes beyond 8 sockets: bullion (Xeon E7 48xx, up to 16 sockets with near-linear scaling). Bull (legacy GE and Honeywell mainframe) has leveraged technology from its mainframe and HPC lines to build bullion, the world's FASTEST x86 server. bull.us/bullion
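
For readers who want to redo the back-of-the-envelope arithmetic from this thread, here is a minimal sketch using only the figures the commenters quote above; the 0.5 and 1.0 core factors are Oracle's published values for x86 and Power7, and everything else is taken verbatim from the comments.

```python
import math

# SAPS extrapolation (@PowerTrumps): an 8-core Power7 @ 4.22GHz scores
# 54,700 SAPS; "almost linear" scaling to 32 cores lands somewhat below
# the perfectly linear 4x figure printed here, near the ~200K cited.
print(f"32-core linear estimate: {54_700 * 4:,} SAPS")  # 218,800

# Per-core normalization (@Phil_Oracle's SPECjbb2013 comparison).
t5_per_core = 114_492 / 32   # SPARC T5-2, 32 cores
nec_per_core = 177_753 / 60  # NEC Express5800/A040b, 60 E7-4890 v2 cores
print(f"SPARC T5-2: {t5_per_core:,.0f} max-jOPS per core")
print(f"E7-4890 v2: {nec_per_core:,.0f} max-jOPS per core")
print(f"per-core ratio: {t5_per_core / nec_per_core:.2f}x")

# Oracle license counting (@PowerTrumps): licenses = cores x core
# factor, rounded up to a whole license.
def oracle_licenses(cores: int, core_factor: float) -> int:
    return math.ceil(cores * core_factor)

print(oracle_licenses(15, 0.5))  # one 15-core E7 v2 socket -> 8 licenses
print(oracle_licenses(2, 1.0))   # 2 Power7 cores -> 2 licenses
```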
