LS-DYNA

LS-DYNA is a "general purpose structural and fluid analysis simulation software package capable of simulating complex real world problems", developed by the Livermore Software Technology Corporation (LSTC). It is used in the automobile, aerospace, construction, military, manufacturing, and bioengineering industries. Even simple simulations take hours to complete, so even a small performance increase results in tangible savings. On top of that, many of our readers have been asking us to benchmark HPC workloads, so there were reasons enough to include our own LS-DYNA benchmarking.

These numbers are not directly comparable with AMD's and Intel's own benchmarks, as we did not perform any special tuning beyond using the message passing interface (MPI) version of LS-DYNA (ls971_mpp_hpmpi) to run the solver for maximum scalability. This is the HP-MPI build of LS-DYNA 971.
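
For readers unfamiliar with how an MPP LS-DYNA run is typically kicked off, here is a minimal sketch of launching the solver under HP-MPI. The paths, input deck name, and process count are hypothetical placeholders for illustration, not the exact command line we used:

```python
import subprocess

# Minimal sketch of launching the MPP (MPI) build of LS-DYNA under HP-MPI.
# The solver path, input deck, and process count below are hypothetical
# placeholders, not the exact settings used for this article.
NP = 32                                  # e.g. one MPI rank per physical core
SOLVER = "/opt/lsdyna/ls971_mpp_hpmpi"   # MPP solver binary built against HP-MPI
DECK = "neon_refined_revised.k"          # crash-test input deck

cmd = [
    "mpirun",        # HP-MPI launcher
    "-np", str(NP),  # number of MPI processes
    SOLVER,
    f"i={DECK}",     # LS-DYNA keyword input file
]

# Launch the solver and raise an error if it exits with a non-zero status.
subprocess.run(cmd, check=True)
```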

Our first test is the refined, revised Neon crash test simulation.

LS-Dyna Neon-Refined Revised

This is one of the few benchmarks (besides SAP) where the Opteron 6276 outperforms the older Opteron 6174 by a tangible margin (about 20% faster), and it is significantly faster than the Xeon 5600, by 40% to be precise. However, the direct competitor of the 6276, the Xeon E5-2630, will do a bit better (see the E5-2660 6C score). And if you are aiming for the best performance, the top Xeons are out of reach: the Xeon E5-2660 offers 26% better performance, and the 2690 is 46% faster. It is interesting to note that LS-DYNA does not scale well with clockspeed: the 32% higher clockspeed of the Xeon E5-2690 results in only a 15% speed increase.

A few other interesting things to note: we saw only a very small performance increase (+5%) from Hyper-Threading. Memory bandwidth does not seem to be critical either, as performance increased by only 6% when we replaced DDR3-1333 with DDR3-1600. If LS-DYNA were severely bottlenecked by memory speed, we should have seen a performance increase close to 20% (1600 vs. 1333).
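
To make the scaling argument concrete, here is a quick sketch of the back-of-the-envelope arithmetic. The speedup figures are the ones quoted above; the helper function and its name are ours, purely for illustration:

```python
# Back-of-the-envelope check of how far LS-DYNA is from perfect scaling.
# The observed speedups are the figures quoted in the text above.

def scaling_efficiency(resource_ratio: float, observed_speedup: float) -> float:
    """Fraction of the theoretical gain that actually materialized."""
    return (observed_speedup - 1.0) / (resource_ratio - 1.0)

# Memory: DDR3-1600 vs. DDR3-1333 is a ~20% bandwidth increase (1600/1333),
# yet we measured only about a 6% performance gain.
mem_eff = scaling_efficiency(1600 / 1333, 1.06)

# Clock: the Xeon E5-2690 has a 32% higher clockspeed than the E5-2660,
# yet it is only about 15% faster in this test.
clk_eff = scaling_efficiency(1.32, 1.15)

print(f"memory-bandwidth scaling efficiency: {mem_eff:.0%}")  # ~30%
print(f"clock-speed scaling efficiency:      {clk_eff:.0%}")  # ~47%
```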

CMT boosted the Opteron 6276's performance by up to 33%, which seems odd at first since LS-DYNA is a typical floating point intensive application. As the shared floating point unit "outsources" loads and stores to the integer cores, the most logical explanation is that LS-DYNA is limited by load/store bandwidth. This is in sharp contrast with, for example, 3DS Max, where the additional overhead of 16 extra threads slowed the shared FP down instead of speeding it up.

Also, both CPUs seem to have made good use of their turbo capabilities. The AMD Opteron was running at 2.6 GHz most of the time, the Xeon 2690 at 3.3 GHz and the Xeon 2660 at 2.6 GHz.

The second test is the "Three Vehicle Collision Test" simulation, which runs a lot longer.

LS-Dyna Three Vehicle Collision Test

The three vehicle collision test does not change the benchmarking picture; it confirms our earlier findings. The Opteron Interlagos does well, but the Xeon E5 is the new HPC champion.

Comments

  • Shuxclams - Tuesday, March 06, 2012 - link

    Looking at a complete visualization transformation in our server room, looks like the decision was made for us as far as architecture. Wow....
  • TeXWiller - Tuesday, March 06, 2012 - link

    "The new Xeon also supports faster DDR-3 1600. Contrary to the Interlagos Opteron which can only support this memory speed with one DIMM per channel." Interlagos supports memory up to DDR3-1600 using two single rank memory modules, or one single rank and one dual rank module, if using registered memory, and two single rank modules if using unbuffered memory. DDR3-1866 is supported on a single load-reduced registered module, or on a single unbuffered module, per channel. It depends on the board manufacturer and, more importantly, it can all be read in the manual, so to speak.
  • davegraham - Tuesday, March 06, 2012 - link

    AMD Interlagos can support more than 1 DDR3-1600 ECC/REG DIMM per channel. I run 8 on a single socket 6276 and it works at the rated speed.
  • TeXWiller - Wednesday, March 07, 2012 - link

    Too bad these kinds of errors in the articles are not usually fixed.
  • JohanAnandtech - Wednesday, March 07, 2012 - link

    I will double check.
  • meloz - Tuesday, March 06, 2012 - link

    Just wanted to congratulate Johan on a job well done. Very thorough analysis, Intel have achieved a very dominant position with this new platform and this is reflected in pricing of their processors as well!

    AMD was already a sub 10% niche (with a market share to mirror) in the data center, now even that niche has evaporated.

    New Opterons (based on Piledriver) might decrease the performance gap to Intel under certain benchmarks, but I doubt they will beat Intel. Intel has plenty of SKUs above the quickest AMD Opterons to adjust prices and kill any new challenge from AMD, instantly.
  • JohanAnandtech - Wednesday, March 07, 2012 - link

    Thanks! I hope Intel gets a bit more competition though.
  • alpha754293 - Tuesday, March 06, 2012 - link

    Thanks for running those.

    Are those results with HTT or without?

    If you can write a little more about the run settings that you used (with/without HTT, number of processes), that would be great.

    Very interesting results though.

    It would have been interesting to see what the power consumption and total energy consumption numbers would be for these runs (to see if having the faster processor would really be that beneficial).

    Thanks!
  • alpha754293 - Tuesday, March 06, 2012 - link

    I should work with you more to get you running some Fluent benchmarks as well.

    But, yes, HPC simulations DO take a VERY long time. And we beat the crap out of our systems on a regular basis.
  • jhh - Tuesday, March 06, 2012 - link

    This is the most interesting part to me, as someone interested in high network I/O. With the packets going directly into cache, as long as they get processed before they get pushed out by subsequent packets, the packet processing code doesn't have to stall waiting for the packet to be pulled from RAM into cache. Potentially, the packet never needs to be written to RAM at all, avoiding using that memory capacity. In the other direction, web servers and the like can produce their output without ever putting the results into RAM.
