STARS Euler3D CFD

The STARS Euler3D CFD benchmark owes much of its popularity to Scott at TechReport.com. It is a computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab, the Computational AeroServoElasticity Laboratory at Oklahoma State University. Since Scott has used the benchmark for years, we felt it was a good place to start our HPC benchmarking adventure: we could check whether our results are in the right ballpark.

The benchmark is downloadable and described in great detail here. The benchmark score is reported as a CFD cycle frequency in Hertz, with higher results being better.

STARS Euler3D CFD: maximum score

The Xeon E7 scales quite nicely, provided you disable Hyper-Threading. The benchmark can take advantage of Hyper-Threading, as the dual Xeon system shows. However, all threads work on the same data grid, so the more threads you run, the more lock contention rears its ugly head. Here's a more detailed look at scaling with the number of threads:

With Hyper-Threading enabled, the Xeon X5670 performs worse than the non-HT setup until we run more than 12 threads; beyond that point, HT offers a decent performance boost (17%). The benchmark, however, does not scale well enough to take advantage of 80 threads. Hyper-Threading improves resource utilization, but that does not offset the overhead of running 80 threads. Once we pass 40 threads on the E7-4870, performance levels off and even starts to drop.
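
To make the contention point more concrete, here is a minimal OpenMP sketch in C. It is not taken from the Euler3D source; the grid, edge list, and sizes are invented for illustration. It shows why scattering updates from many threads into one shared array stops scaling: every update must be synchronized, and collisions grow with the thread count.

    /* contention_sketch.c -- illustrative only, not the Euler3D code.
     * Build (GCC): gcc -O2 -fopenmp contention_sketch.c -o contention_sketch
     * Run with OMP_NUM_THREADS=4, 8, 16, ... and watch the scaling flatten. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NODES (1 << 20)          /* hypothetical grid size   */
    #define EDGES (NODES * 4)        /* edges scatter into nodes */

    int main(void)
    {
        double *residual = calloc(NODES, sizeof *residual);
        int    *dst      = malloc(EDGES * sizeof *dst);
        for (long e = 0; e < EDGES; e++)
            dst[e] = rand() % NODES;

        double t0 = omp_get_wtime();

        /* All threads scatter into the same shared array, so every update
         * has to be synchronized; more threads means more collisions on
         * the same node (or the same cache line). */
        #pragma omp parallel for
        for (long e = 0; e < EDGES; e++) {
            #pragma omp atomic
            residual[dst[e]] += 1.0;           /* stand-in for a flux update */
        }

        printf("%d threads: %.3f s\n", omp_get_max_threads(), omp_get_wtime() - t0);
        free(residual);
        free(dst);
        return 0;
    }

The usual fix, accumulating into per-thread buffers and merging them afterwards, is exactly the kind of restructuring a pre-built benchmark binary cannot do for you.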

Of course, you are probably more interested in the other server result. What happened to the Opteron scores? Why is the 48-core Opteron five times slower than the 40-core Xeon E7? Let's investigate further.

Comments

  • mino - Saturday, October 1, 2011 - link

    Memory channel count has nothing to do with coherency traffic.
  • mino - Saturday, October 1, 2011 - link

    Exactly. Actually, the optimized approach would normally be to split the workload into 12-thread chunks on Opterons and 20-thread chunks on Xeons. That is also a reason why 4S machines are rarely seen in HPC.

    They just do not make sense for 99% of the workloads there.
  • lelliott73181 - Friday, September 30, 2011 - link

    For those of us out there that are seriously into doing distributed computing projects, it'd be cool to see a bit of information on how these systems scale in terms of programs like BOINC, Folding@home, etc.
  • MrSpadge - Friday, September 30, 2011 - link

    Scaling is pretty much perfect there, so it's not very interesting. It may have been different back in the day, when these big-iron systems were starved for memory bandwidth.

    MrS
  • fic2 - Friday, September 30, 2011 - link

    Was hoping for some Bulldozer server benchmarks since the server chips are "released". ;o)
    Didn't really think that I would see them though.
  • rahvin - Friday, September 30, 2011 - link

    Have you considered that the Opteron problem could be because the software is compiled with the Intel compiler, which disables advanced features if it doesn't detect an Intel processor? This is a common problem: code built with ICC checks the CPU at runtime, and if it doesn't find an Intel processor it turns off SSE and the other extensions and runs in a baseline x86 compatibility mode (very slow). Any time I see results that are this drastically off, it reads to me like the software in question was built with the Intel compiler.
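
For reference, the vendor check rahvin describes boils down to reading the CPUID vendor string. The following is a minimal C sketch (using GCC's <cpuid.h>); it only illustrates the mechanism and is not ICC's actual dispatcher.

    /* vendor_check.c -- minimal CPUID vendor-string check.
     * Build: gcc -O2 vendor_check.c -o vendor_check */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* CPUID leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        /* A dispatcher that keys on the vendor string instead of the CPUID
         * feature bits is what sends non-Intel CPUs down the slow path. */
        if (strcmp(vendor, "GenuineIntel") == 0)
            puts("GenuineIntel: optimized code path");
        else
            printf("%s: generic/baseline code path\n", vendor);
        return 0;
    }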
  • Chibimyk - Friday, September 30, 2011 - link

    Ifort 10 is from 2007 and is not aware of the architecture of any of these machines. It doesn't support the latest SSE instructions and likely doesn't know which levels of SSE the CPUs support. You have no idea which math libraries it is linked to. It won't be using the latest Intel MKL, which supports the newest chips, and it isn't using the AMD-optimized ACML libraries either.

    What you are comparing using these compiled binaries is the performance of both systems when running intel optimized code.

    You also have no idea what level of optimization was used when compiling. Some of the most aggressive optimization settings in the Intel compilers sacrifice ANSI floating-point accuracy, or at least they used to. Whether this impacts the results is application specific.

    Generally speaking:
    Intel chips are fastest with Intel compilers and Intel MKL.
    AMD chips are fastest with the Portland Group compilers and AMD ACML.
    Some code runs faster with the Goto BLAS libraries.

    Ideally, you want to compare benchmarks with each system running under its own best-case conditions.
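
The library point is easy to demonstrate on the dense math side: the same source, relinked against a different BLAS, reports a very different score. The C timing sketch below is not part of the Euler3D benchmark; it assumes a library that exports the standard CBLAS interface (MKL, GotoBLAS, and OpenBLAS do), and the exact link line depends on which library you test.

    /* dgemm_timing.c -- which BLAS you link against sets this number.
     * Build example: gcc -O2 dgemm_timing.c -o dgemm_timing -lopenblas
     * (swap the link line for MKL, GotoBLAS, etc.; add -lrt on older glibc) */
    #include <cblas.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const int n = 2048;                    /* matrix dimension */
        double *a = malloc((size_t)n * n * sizeof *a);
        double *b = malloc((size_t)n * n * sizeof *b);
        double *c = malloc((size_t)n * n * sizeof *c);
        for (long i = 0; i < (long)n * n; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        /* DGEMM performs roughly 2*n^3 floating-point operations. */
        printf("%.1f GFLOPS in %.2f s\n", 2.0 * n * n * n / secs * 1e-9, secs);

        free(a); free(b); free(c);
        return 0;
    }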
  • eachus - Saturday, October 1, 2011 - link

    Definitely true about AMD chips and the Portland Group. I get slightly better results with GCC than the Intel compiler, partly because I know how to get it to do what I want. ;-) But Portland is still better for Fortran.

    Second, there is a way to solve the NUMA problem that all HPC programmers know. Any (relatively static) data should be replicated to all processors. Arrays that will be written to by multiple threads can be duplicated with a "fire and forget" strategy, assuming that only one processor is writing to a particular element (well cache line)* in the array between checkpoints. In this particular case, you would use (all that extra) memory to have eight copies of the (frequently modified) data.

    Next, if your compiler doesn't use non-temporal memory references for random-access floating-point data, you are going to get clobbered just like in this benchmark. (I'm fairly sure that the Portland Group compilers use PrefetchNTA instructions by default. I tend to do my innermost loops by hand on the GCC back end, which is how I get such good results. You can too--but you really need to understand the compiler internals to write--and use--your own intrinsic routines.) PrefetchNTA does two things. First, it prefetches the data if it is not already in a local cache. This can be a big win: what kills you with Opteron NUMA fetches is not the HyperTransport bandwidth getting clogged, it is the latency. AMD CPUs hate memory latency. ;-)

    The other thing that PrefetchNTA does is to tell the caches not to cache this data. This prevents cache pollution, especially in the L1 data cache. Oh, and don't forget to use PrefetchNTA before writing to part of a cache line. This is where you can really get hit. The processor has to keep the data to be stored around until the cache line is in a local cache. (Or in the magic zeroth level cache AMD keeps in the floating point register file.) Running out of space in the register file can stall the floating point unit when no more registers are available for renaming purposes.

    Oh, and one of those "interesting" features of Bulldozer for compiler gurus is that it strongly prefers to have only one NT write stream at a time. (Reading from multiple data streams is apparently not an issue.) Just another reason we have to teach programmers to use cache-line-aligned records for their data, rather than many different arrays with the same dimensions. ;-)

    * This is another of those multi-processor gotchas that eats up address space--but there is plenty to go around now that everyone is using 64-bit (actually 48-bit) addresses. You really don't want code on two different CPU chips writing to the same cache line at about the same time, even if the memory hardware can (and will) go to extremes to make it work.

    It used to be that AMD CPUs used 64-byte cache lines and Intel always used 256-byte lines. When the hardware engineers got together for, I think, the DDR memory standard, they found that AMD fetched the "partner" 64-byte line if no other request was waiting, while Intel cut fetches at 128 bytes if there was a waiting memory request. So the width of the cache line inside the CPUs was different, but in practice most main memory accesses were 128 bytes wide no matter whose CPU you had. ;-) Anyway, a data point in fluid-flow software tends to be 48 bytes or so (six DP values: x, y, z and x', y', z'). Aligning to 64-byte boundaries is good, 128 bytes is better, and you may want to try 256 bytes on some Intel hardware...
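
To make the last two points concrete, here is a minimal C sketch of a 64-byte-aligned node record combined with PrefetchNTA and non-temporal stores. It targets GCC or Clang on x86-64; the field names and the toy update are invented, and whether this actually helps depends entirely on the access pattern.

    /* nta_sketch.c -- cache-line-aligned records plus non-temporal prefetch/stores.
     * Build: gcc -O2 nta_sketch.c -o nta_sketch */
    #include <emmintrin.h>   /* _mm_stream_pd, _mm_set_pd, _mm_setzero_pd */
    #include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_NTA, _mm_sfence */
    #include <stdio.h>
    #include <stdlib.h>

    /* Six doubles (48 bytes) padded to a full 64-byte cache line, so two
     * sockets never write to the same line when they own different nodes. */
    typedef struct __attribute__((aligned(64))) {
        double x, y, z;      /* position */
        double u, v, w;      /* velocity */
        double pad[2];       /* 48 -> 64 bytes */
    } node_t;

    static void update_nodes(const node_t *in, node_t *out, long n)
    {
        for (long i = 0; i < n; i++) {
            const node_t *p = &in[i];
            /* Pull the (possibly remote) line in without polluting the caches. */
            _mm_prefetch((const char *)p, _MM_HINT_NTA);

            double nx = p->x + p->u, ny = p->y + p->v, nz = p->z + p->w;

            /* Stream the whole 64-byte record out without caching it;
             * note that this is a single NT write stream. */
            _mm_stream_pd(&out[i].x, _mm_set_pd(ny, nx));
            _mm_stream_pd(&out[i].z, _mm_set_pd(p->u, nz));
            _mm_stream_pd(&out[i].v, _mm_set_pd(p->w, p->v));
            _mm_stream_pd(out[i].pad, _mm_setzero_pd());
        }
        _mm_sfence();        /* make the streamed stores globally visible */
    }

    int main(void)
    {
        long n = 1 << 16;
        node_t *in  = aligned_alloc(64, (size_t)n * sizeof *in);
        node_t *out = aligned_alloc(64, (size_t)n * sizeof *out);
        for (long i = 0; i < n; i++)
            in[i] = (node_t){ .x = 1.0, .u = 0.5 };
        update_nodes(in, out, n);
        printf("out[0].x = %f\n", out[0].x);
        free(in); free(out);
        return 0;
    }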
  • mino - Saturday, October 1, 2011 - link

    You deserve the paycheck for this article!

    Howgh.
  • UrQuan3 - Monday, October 3, 2011 - link

    I'd like to add my vote to the request for a compiler benchmark. It might go well with the HPC study. The hardest part would, of course, be finding an unbiased way to conduct it. There are just so many compiler flags that add their own variables. Then you need source code.

    If you do decide to give it a try, Visual Studio, GCC, Intel, and Portland would be a must. I don't know how AnandTech would do it, but I've been impressed before.
