Understanding the Performance Numbers

As Intel and AMD add more and more cores to their CPUs, two main challenges arise in keeping those CPUs scaling: cache coherency traffic adds latency and absorbs bandwidth, while at the same time all those cores demand more and more bandwidth. The memory subsystem therefore plays an important role. We still use our older Stream binary, compiled by Alf Birger Rustad with v2.4 of Pathscale's C compiler. It is a multi-threaded, 64-bit Linux Stream binary, built with the following compiler switches:

-Ofast -lm -static -mp

We ran the Stream benchmark on SUSE SLES 11. Stream produces four numbers: copy, scale, add, and triad. Triad is the most relevant in our opinion, as it combines elements of the other three.
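
For readers unfamiliar with Stream: each kernel is a simple loop over arrays far larger than the caches. A minimal sketch of the four kernels in C with OpenMP (array size and scalar are picked here for illustration; the real benchmark also times each loop over several repetitions and reports MB/s):

    #include <stdio.h>

    #define N 20000000               /* arrays must dwarf the caches */

    static double a[N], b[N], c[N];

    int main(void)
    {
        const double q = 3.0;        /* scalar used by scale and triad */

        for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.5; }

        #pragma omp parallel for
        for (long i = 0; i < N; i++) c[i] = a[i];            /* copy  */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) b[i] = q * c[i];        /* scale */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) c[i] = a[i] + b[i];     /* add   */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) a[i] = b[i] + q * c[i]; /* triad */

        printf("%f\n", a[0]);        /* keep the compiler from eliminating the loops */
        return 0;
    }

Compiled with something like "gcc -O2 -fopenmp", each kernel's bandwidth follows from the bytes moved (two or three arrays of 8-byte doubles per iteration) divided by the measured loop time.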

[Chart: Stream TRIAD on 64-bit Linux, maximum threads]

The new DDR3 memory controller gives the Opteron 6100 series wings. Compared to the Opteron 2435, which uses DDR2-800, bandwidth has increased by 130%. Each core gets more bandwidth, which should help a lot of HPC applications. It is a pity, of course, that the 1.8 GHz northbridge limits the memory subsystem; it would be interesting to see 8-core versions with higher-clocked northbridges for the HPC market.

Also notice that the new Xeon 5600 handles DDR3-1333 much more efficiently: we measured 15% higher bandwidth from exactly the same DDR3-1333 DIMMs compared to the older Xeon X5570.

The other important metric for the memory subsystem is latency. Most of our older latency benchmarks (such as the latency test of CPUID) are no longer valid, so we turned to the latency test of SiSoft Sandra 2010.
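
A latency test of this kind typically boils down to pointer chasing: walking a chain of dependent loads in a random order that the hardware prefetchers cannot predict. A minimal sketch in C (this is not Sandra's actual code; buffer size and iteration count are arbitrary choices):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024 / sizeof(void *))   /* 64 MB, far beyond any L3 */
    #define ITERS 20000000L

    int main(void)
    {
        void **chain = malloc(N * sizeof(void *));
        size_t *order = malloc(N * sizeof(size_t));

        /* shuffle the visit order so the prefetchers cannot follow it */
        for (size_t i = 0; i < N; i++) order[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }
        for (size_t i = 0; i < N - 1; i++) chain[order[i]] = &chain[order[i + 1]];
        chain[order[N - 1]] = &chain[order[0]];      /* close the cycle */

        /* every load depends on the previous one, so total time / ITERS
           approximates the average load-to-use latency */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        void **p = &chain[order[0]];
        for (long i = 0; i < ITERS; i++) p = *p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.1f ns per load (%p)\n", ns / ITERS, (void *)p);
        free(chain); free(order);
        return 0;
    }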

CPU                 Speed      L1 (clocks)   L2 (clocks)   L3 (clocks)   Memory (ns)
Intel Xeon X5670    2.93 GHz   4             10            56            87
Intel Xeon X5570    2.80 GHz   4             9             47            81
AMD Opteron 6174    2.20 GHz   3             16            57            98
AMD Opteron 2435    2.60 GHz   3             16            56            113

With Nehalem, Intel increased the latency of the L1 cache from 3 cycles to 4, a tradeoff meant to allow for future scaling as the basic architecture evolves. The Xeons have the smallest (256 KB) but fastest L2 cache. The L3 cache of the Xeon X5570 is the fastest, but that latency advantage has disappeared on the Xeon X5670 as the cache size increased from 8 to 12 MB. Note that clock counts do not compare directly across clock speeds: 57 cycles at 2.2 GHz is about 26 ns, while 56 cycles at 2.93 GHz is about 19 ns.

It is also interesting that the move from DDR2-800 to DDR3-1333 has decreased the latency to the memory system by about 15%. There is nothing but good news for the 12-core Opteron here: more bandwidth and lower-latency access per core.

Comments

  • wolfman3k5 - Monday, March 29, 2010 - link

    Great review! When will you guys be reviewing the AMD Phenom II X6 for us mere mortals? I wonder how the Phenom II X6 will stack up against the Core i7 920/930.

    Keep up the good work!
  • ash9 - Tuesday, March 30, 2010 - link

    Since SSE4.1 and SSE4.2 are not in AMD's CPUs, it's AnandTech's way of getting an easy benchmark win, seeing as some of these benchmark tests probably use them:

    http://blogs.zdnet.com/Ou/?p=719
    August 31st, 2007
    SSE extension wars heat up between Intel and AMD

    "Microprocessors take approximately five years to go from concept to product and there is no way Intel can add SSE5 to their Nehalem product and AMD can’t add SSE4 to their first-generation 45nm CPU “Shanghai” or their second-generation 45nm “Bulldozer” CPU even if they wanted to. AMD has stated that they will implement SSE4 following the introduction of SSE5 but declined to give a timeline for when this will happen."

    asH
  • mariush - Tuesday, March 30, 2010 - link

    One of the best-optimized and most thoroughly multi-threaded applications out there is the open-source video encoder x264.

    Would it be possible to test how well the 2 x 8 and 2 x 12 AMD configurations work at encoding 1080p video at some very high quality settings?

    A workstation with 24 AMD cores would cost almost as much as a single-socket six-core system from Intel, so it would be interesting to see whether Intel's higher frequency and additional SSE instructions are a bigger advantage than AMD's core count.
  • Aclough - Tuesday, March 30, 2010 - link

    I wonder if the difference between the Windows and Linux test results is related to the recent-ish changes in the scheduler. From what I understand, the introduction of the CFS in 2.6.23 was supposed to be really good for large numbers of cores, and before that the Linux scheduler worked similarly to the recent Windows one. It would be interesting to try running that benchmark with a 2.6.22 kernel, or one with the old O(1) scheduler patched in.

    Or it could just be that Linux tends to be tuned for throughput whereas Windows tends to be tuned for low latency. Or both.
  • Aclough - Tuesday, March 30, 2010 - link

    In any event, the place I work for is a Linux shop and our workload is probably most similar to Blender, so we're probably going to continue to buy AMD.
  • ash9 - Tuesday, March 30, 2010 - link

    http://www.egenera.com/pdf/oracle_benchmarks.pdf


    "Performance testing on the Egenera BladeFrame system has demonstrated that the platform
    is capable of delivering high throughput from multiple servers using Oracle Real Application
    Clusters (RAC) database software. Analysis using Oracle’s Swingbench demonstration tool
    and the Calling Circle schema has shown very high transactions-per-minute performance
    from single-node implementations with dual-core, 4-socket SMP servers based on Intel and
    AMD architectures running a 64-bit-extension Linux operating system. Furthermore, results
    demonstrated 92 percent scalability on either server type up to at least 10 servers.
    The BladeFrame’s architecture naturally provides a host of benefits over other platforms
    in terms of manageability, server consolidation and high availability for Oracle RAC."
  • nexox - Tuesday, March 30, 2010 - link

    It could also be that Linux has a NUMA-aware scheduler, so it tries to keep data in the RAM that is connected to the core running the thread which needs it. That cuts down on memory latency because it minimizes trips over the HT links to fetch data. I doubt that Windows does this, given that Intel hasn't had NUMA systems for very long yet.

    I'd also like to see more Linux benchmarks, since that's really all I'd ever consider running on data center-class hardware like this, and since apparently Linux performance has very little to do with Windows performance, based on that one test.
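
    To make that concrete, here is a minimal sketch with libnuma on Linux (node numbers are arbitrary, it assumes at least two nodes, and you link with -lnuma); allocating on the thread's own node keeps loads off the HT links:

        #include <numa.h>
        #include <stdio.h>

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this system\n");
                return 1;
            }

            size_t sz = 4096;
            numa_run_on_node(0);                       /* pin this thread to node 0  */
            double *local  = numa_alloc_onnode(sz, 0); /* memory on the same node    */
            double *remote = numa_alloc_onnode(sz, 1); /* memory across the link
                                                          (assumes node 1 exists)    */

            local[0] = remote[0] = 1.0;                /* touch so the pages get placed */
            printf("highest node: %d\n", numa_max_node());

            numa_free(local, sz);
            numa_free(remote, sz);
            return 0;
        }

    A NUMA-aware scheduler effectively gives you the "local" case automatically by keeping threads near the memory they first touched.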
  • yasbane - Wednesday, May 19, 2010 - link

    Agreed. I find it disappointing that they ran so few benchmarks on Linux for servers, and so many on Windows.

    -C
  • jbsturgeon - Tuesday, March 30, 2010 - link

    I liked the review and enjoyed reading it. I can't help but feel the benchmarks are less a comparison of CPUs and more a study of how well the apps can be threaded, as well as of the quality of that threading's implementation: higher-clocked CPUs will be better for serial code, and more cores will win for apps that are well threaded. In scientific number crunching (the code I write), more cores always wins (AMD). We use Fluent too, so thanks for including those benchmarks!
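
    That rule of thumb is Amdahl's law; a quick sketch in C (the parallel fractions below are just illustrative):

        #include <stdio.h>

        /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
           where p is the fraction of runtime that parallelizes */
        static double speedup(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main(void)
        {
            double fractions[] = { 0.75, 0.95, 0.99 };
            int cores[] = { 6, 12, 24 };

            for (int f = 0; f < 3; f++) {
                for (int c = 0; c < 3; c++)
                    printf("p=%.2f %2d cores: %5.2fx   ",
                           fractions[f], cores[c], speedup(fractions[f], cores[c]));
                printf("\n");
            }
            return 0;
        }

    With p = 0.75, 24 cores deliver only about 3.6x, while with p = 0.99 they deliver roughly 19.5x, which is exactly why well-threaded number crunching favors core count over clock speed.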
  • jbsturgeon - Tuesday, March 30, 2010 - link

    Obviously that rule can be altered by a killer memory bus :-).
