Rendering and HPC Benchmark Session Using Our Best Servers
by Johan De Gelas on September 30, 2011 12:00 AM EST
Quad Xeon: the Quanta QSCC-4R Benchmark Configuration
|CPU||Quad Intel Xeon "Westmere-EX" E7-4870 (10 cores/20 threads at 2.4GHz, 2.8GHz Turbo, 30MB L3, 32nm)|
|RAM||32 x 4GB (128GB) Samsung Registered DDR3-1333 at 1066MHz|
|Motherboard||QCI QSSC-S4R 31S4RMB00B0|
|PSU||4 x Delta DPS-850FB A S3F E62433-004 850W|
The quad Xeon configuration is equipped with 128GB RAM to make sure that all memory channels are filled.
Dual Xeon: ASUS RS700-E6/RS4 Configuration
|CPU||Dual Intel Xeon "Westmere" X5670 (6 cores/12 threads at 2.93GHz, 3.33GHz Turbo, 12MB L3, 32nm)|
|RAM||12 x 4GB (48GB) ECC Registered DDR3-1333|
|BIOS version||Version 1.003|
|PSU||Delta Electronics DPS-770 AB 770W|
The dual Xeon server in contrast "only" has 48GB. This has no influence on the benchmark results, as the benchmarks use considerably less RAM.
Quad Opteron: Dell PowerEdge R815 Benchmarked Configuration
|CPU||Quad AMD Opteron "Magny-Cours" 6174 (12 cores at 2.2GHz, 12MB L3, 45nm)|
|RAM||16x4GB (64GB) Samsung Registered DDR3-1333 at 1333MHz|
|Motherboard||Dell Inc 06JC9T|
|PSU||2 x Dell L1100A-S0 1100W|
We reviewed the powerful but compact Dell R815 here. This time we're running 64GB; again, the amount of RAM was chosen to optimize memory performance rather than to meet the benchmarks' usage requirements.
mino - Saturday, October 1, 2011
Memory channel count has nothing to do with coherency traffic.
mino - Saturday, October 1, 2011
Exactly. Actually the optimal approach would normally be to split the workload into 12-thread chunks on the Opterons and 20-thread chunks on the Xeons. That is also a reason why 4S machines are rarely seen in HPC.
They just do not make sense for 99% of the workloads there.
lelliott73181 - Friday, September 30, 2011
For those of us out there that are seriously into doing distributed computing projects, it'd be cool to see a bit of information on how these systems scale in terms of programs like BOINC, Folding@home, etc.
MrSpadge - Friday, September 30, 2011
Scaling is pretty much perfect there, so it's not very interesting. It may have been different back in the days when these big iron systems were starved for memory bandwidth.
fic2 - Friday, September 30, 2011
Was hoping for some Bulldozer server benchmarks since the server chips are "released". ;o)
Didn't really think that I would see them though.
rahvin - Friday, September 30, 2011
Have you considered that the Opteron problem could be because the software was compiled with the Intel compiler, which disables advanced features if it doesn't detect an Intel processor? This is a common problem: the ICC compiler emits dispatch code that, if it doesn't find an Intel processor, turns off SSE and all the processor extensions and runs the code in x86 compatibility mode (very slow). Any time I see results that are drastically off, it reads to me like the software in question was built with the Intel compiler.
Chibimyk - Friday, September 30, 2011
Ifort 10 is from 2007 and is not aware of the architectures of any of these machines. It doesn't support the latest SSE instructions and likely doesn't know the levels of SSE supported by the CPUs. You have no idea which math libraries it is linked to. It won't be using the latest Intel MKL, which supports the newest chips. It isn't using the AMD-optimized ACML libraries either.
What you are comparing using these compiled binaries is the performance of both systems when running intel optimized code.
You also have no idea of the levels of optimization used when compiling. Some of the highest optimization speed increases with the Intel compilers drop ANSI accuracy, or at least used to. Whether this impacts results is application specific.
Intel chips are fastest with Intel compilers and Intel MKL.
AMD chips are fastest with the Portland Group compilers and AMD ACML.
Some code runs faster with the Goto BLAS libraries.
Ideally you want to compare benchmarks with each system under ideal conditions.
eachus - Saturday, October 1, 2011
Definitely true about AMD chips and the Portland Group. I get slightly better results with GCC than the Intel compiler, partly because I know how to get it to do what I want. ;-) But Portland is still better for Fortran.
Second, there is a way to solve the NUMA problem that all HPC programmers know. Any (relatively static) data should be replicated to all processors. Arrays that will be written to by multiple threads can be duplicated with a "fire and forget" strategy, assuming that only one processor is writing to a particular element (well cache line)* in the array between checkpoints. In this particular case, you would use (all that extra) memory to have eight copies of the (frequently modified) data.
Next, if your compiler doesn't use non-temporal memory references for random-access floating-point data, you are going to get clobbered just like in the benchmark. (I'm fairly sure that the Portland Group compilers use PrefetchNTA instructions by default. I tend to do my innermost loops by hand on the GCC back end, which is how I get such good results. You can too--but you really need to understand the compiler internals to write--and use--your own intrinsic routines.) PrefetchNTA does two things. First, it prefetches the data if it is not already in a local cache. This can be a big win. What kills you with Opteron NUMA fetches is not the Hypertransport bandwidth getting clogged, it is the latency. AMD CPUs hate memory latency. ;-)
The other thing that PrefetchNTA does is to tell the caches not to cache this data. This prevents cache pollution, especially in the L1 data cache. Oh, and don't forget to use PrefetchNTA before writing to part of a cache line. This is where you can really get hit. The processor has to keep the data to be stored around until the cache line is in a local cache. (Or in the magic zeroth level cache AMD keeps in the floating point register file.) Running out of space in the register file can stall the floating point unit when no more registers are available for renaming purposes.
Oh, and one of those "interesting" features of Bulldozer for compiler gurus is that it strongly prefers to have only one NT write stream at a time. (Reading from multiple data streams is apparently not an issue.) Just another reason we have to teach programmers to use cache-line-aligned records for data, rather than many different arrays with the same dimensions. ;-)
* This is another of those multi-processor gotchas that eats up address space--but there is plenty to go around now that everyone is using 64-bit (actually 48-bit) addresses. You really don't want code on two different CPU chips writing to the same cache line at about the same time, even if the memory hardware can (and will) go to extremes to make it work.
It used to be that AMD CPUs used 64-byte cache lines and Intel always used 256-byte lines. When the hardware engineers got together for, I think, the DDR memory standard, they found that AMD fetched the "partner" 64-byte line if there was no other request waiting, and Intel cut fetches at 128 bytes if there was a waiting memory request. So it turned out that the width of the cache line inside the CPUs was different, but in practice most of the main memory accesses were 128 bytes wide no matter whose CPU you had. ;-) Anyway, fluid-flow software tends to have 48 bytes or so per data point (six DP values: x, y, and z, and x', y', and z'). Aligning to 64-byte boundaries is good, 128 bytes is better, and you may want to try 256 bytes on some Intel hardware...
mino - Saturday, October 1, 2011
You deserve the paycheck for this article!
UrQuan3 - Monday, October 3, 2011
I'd like to add one to the request for a compiler benchmark. It might go well with the HPC study. The hardest part would, of course, be finding an unbiased way to conduct it. There are just so many compiler flags that add their own variables. Then you need source code.
If you do decide to give it a try, Visual Studio, GCC, Intel, and Portland would be a must. I don't know how Anandtech would do it, but I've been impressed before.