Final Words

The beancounters will probably point out that AMD's strategy of bolting together two 346 mm² CPU dies is quite costly. But this is the server CPU market, where margins are quite a bit higher; let AMD worry about margins. If AMD is willing to sell us - IT professionals - two CPUs for the price of one, we will not complain: it means the fiercely competitive market is favoring the customer. The bottom line: is this twelve-core Opteron a good deal? For users waiting to use it in a workstation, we have our doubts. You'll benefit from the extra cores when rendering complex scenes, but in all other scenarios (quick simple rendering, modeling) the higher-clocked, higher-IPC Xeon X5600 series is simply the better choice.

Applications based on transactional databases (OLTP and ERP) are also better off with the new Xeon. The SAP benchmark and our own Oracle Calling Circle benchmark both point in the same direction: Intel has a tangible performance advantage in both.

Data mining applications clearly benefit from having "real" instead of "logical" cores. For data mining, we believe the twelve-core Opteron is the clear winner: it offers about 20% better performance at a 20% lower price, a good deal if you ask us. Intel's relatively high prices for its six-core parts are being challenged, and the increased competition turns this into a buyer's market again.
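The price/performance claim above is easy to sanity-check with a quick back-of-the-envelope calculation; the 20% figures are this article's estimates, normalized against the Xeon as the baseline:

```python
# Rough performance-per-dollar comparison for the data-mining case,
# using the article's estimates: ~20% more performance at ~20% lower price.
xeon_perf, xeon_price = 1.00, 1.00      # normalized baseline
opteron_perf = xeon_perf * 1.20         # ~20% faster
opteron_price = xeon_price * 0.80       # ~20% cheaper

xeon_value = xeon_perf / xeon_price
opteron_value = opteron_perf / opteron_price

advantage = opteron_value / xeon_value
print(f"Opteron perf/$ advantage: {advantage:.2f}x")  # 1.50x
```

In other words, the two 20% figures compound: the Opteron delivers roughly 1.5x the performance per dollar in this workload, which is why we call it the clear winner here.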

And then there is the most important segment: the virtualization market. We estimate that the new Opteron 6174 is about 20% slower than the Xeon X5670 in virtualized servers with very high VM counts. The difference is a lot smaller in the opposite scenario, a virtualized server with a few very heavy VMs, so here the choice is less clear. At this point we believe both server CPUs consume about the same power, so that does not help us make up our minds either. It will depend on how the OEMs price their servers. The Opteron 6100 series offers up to 24 DIMM slots, while the Xeon is "limited" to 18. In many cases this allows the server buyer to reach a higher memory capacity at lower cost: you can go for 96 GB of memory with affordable 4 GB DIMMs, while the Intel server is limited to 72 GB there. That is a small bonus for the AMD server.
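The memory arithmetic above can be sketched in a few lines; the slot counts and DIMM size are from the text, but the per-DIMM price is a made-up placeholder for illustration, not a quoted figure:

```python
# Maximum memory when populating every slot with affordable 4 GB DIMMs.
# Slot counts are from the article; the price is purely hypothetical.
DIMM_GB = 4
opteron_slots = 24   # Opteron 6100 series
xeon_slots = 18      # Xeon 5600 series

opteron_max = opteron_slots * DIMM_GB   # 96 GB
xeon_max = xeon_slots * DIMM_GB         # 72 GB

price_per_dimm = 100  # hypothetical price in dollars, for illustration only
print(f"Opteron: {opteron_max} GB using ${opteron_slots * price_per_dimm} of DIMMs")
print(f"Xeon:    {xeon_max} GB using ${xeon_slots * price_per_dimm} of DIMMs")
```

The point is that the AMD platform reaches 96 GB while staying on the cheapest capacity point, whereas the Intel server tops out at 72 GB before you have to move to pricier 8 GB DIMMs.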

The HPC market seems to favor AMD once again. AMD holds only a small performance advantage, but this market is very cost sensitive, and the lower price will probably convince HPC buyers to go for the AMD-based servers.

All in all, this is good news for the IT professional who is also a hardware enthusiast. Profiling your application and matching it to the right server CPU pays off, and that is exactly what sets us apart from the average IT professional.


58 Comments


  • wolfman3k5 - Monday, March 29, 2010 - link

    Great review, thanks! When will you guys be reviewing the AMD Phenom II X6 for us mere mortals? I wonder how the Phenom II X6 will stack up against the Core i7 920/930.

    Keep up the good work!
  • ash9 - Tuesday, March 30, 2010 - link

    Since SSE4.1 and SSE4.2 are not in AMD's CPUs, it's AnandTech's way of getting an easy benchmark win, seeing as some of these benchmark tests probably use them -

    http://blogs.zdnet.com/Ou/?p=719
    August 31st, 2007
    SSE extension wars heat up between Intel and AMD

    "Microprocessors take approximately five years to go from concept to product and there is no way Intel can add SSE5 to their Nehalem product and AMD can’t add SSE4 to their first-generation 45nm CPU “Shanghai” or their second-generation 45nm “Bulldozer” CPU even if they wanted to. AMD has stated that they will implement SSE4 following the introduction of SSE5 but declined to give a timeline for when this will happen."

    asH
  • mariush - Tuesday, March 30, 2010 - link

    One of the best optimized and multi threaded applications out there is the open source video encoder x264.

    Would it be possible to test how well 2 x 8 and 2 x 12 AMD configurations work at encoding 1080p video at some very high quality settings?

    A 24-core workstation from AMD would cost almost as much as a single-socket six-core system from Intel, so it would be interesting to see whether Intel's higher frequency and additional SSE instructions are a bigger advantage than AMD's core count.
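For what it's worth, the kind of high-quality encode mariush describes could be scripted roughly as follows. `--preset`, `--crf`, and `--threads` are real x264 command-line options, but these particular values and file names are illustrative choices, not settings from the article:

```python
import subprocess

# Illustrative high-quality 1080p x264 encode on a 24-core box.
# The preset/CRF values are our own picks, not benchmark settings.
cmd = [
    "x264",
    "--preset", "veryslow",   # slowest, highest-quality preset
    "--crf", "18",            # near-transparent quality target
    "--threads", "24",        # let x264 saturate all 24 cores
    "-o", "output_1080p.mkv",
    "input_1080p.y4m",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the encode
```

At settings like these, x264 scales well with core count, which is exactly why it would make an interesting 24-core-versus-high-clock test case.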
  • Aclough - Tuesday, March 30, 2010 - link

    I wonder if the difference between the Windows and Linux test results is related to the recent-ish changes in the scheduler? From what I understand, the introduction of CFS in 2.6.23 was supposed to be really good for large numbers of cores, and before that the Linux scheduler worked similarly to the current Windows one. It would be interesting to try running that benchmark with a 2.6.22 kernel, or one with the old O(1) scheduler patched in.

    Or it could just be that Linux tends to be more tuned for throughput whereas Windows tends to be more tuned for low latency. Or both.
  • Aclough - Tuesday, March 30, 2010 - link

    In any event, the place I work for is a Linux shop and our workload is probably most similar to Blender, so we're probably going to continue to buy AMD.
  • ash9 - Tuesday, March 30, 2010 - link

    http://www.egenera.com/pdf/oracle_benchmarks.pdf


    "Performance testing on the Egenera BladeFrame system has demonstrated that the platform
    is capable of delivering high throughput from multiple servers using Oracle Real Application
    Clusters (RAC) database software. Analysis using Oracle’s Swingbench demonstration tool
    and the Calling Circle schema has shown very high transactions-per-minute performance
    from single-node implementations with dual-core, 4-socket SMP servers based on Intel and
    AMD architectures running a 64-bit-extension Linux operating system. Furthermore, results
    demonstrated 92 percent scalability on either server type up to at least 10 servers.
    The BladeFrame’s architecture naturally provides a host of benefits over other platforms
    in terms of manageability, server consolidation and high availability for Oracle RAC."
  • nexox - Tuesday, March 30, 2010 - link

    It could also be that Linux has a NUMA-aware scheduler, so it tries to keep data in the RAM attached to the core that's running the thread which needs it. I probably didn't explain that too well, but it cuts down on memory latency by minimizing trips over the HT links to fetch data. I doubt that Windows does this, given that Intel hasn't had NUMA systems for very long yet.

    I'd sort of like to see more Linux benchmarks, since that's really all I'd ever consider running on data center-class hardware like this, and since apparently Linux performance has very little to do with Windows performance, based on that one test.
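One way to observe the locality effect nexox describes is to pin a process to a specific core, so that the kernel's first-touch policy keeps its allocations on that core's local NUMA node. A minimal Linux-only sketch using only the standard library (the core number here is an arbitrary example):

```python
import os

# Pin the current process to CPU 0 (Linux-only API).  On a NUMA system
# the kernel will then prefer to satisfy this process's allocations from
# node-local RAM, avoiding extra hops over the HT/QPI links.
os.sched_setaffinity(0, {0})          # pid 0 means "the calling process"
print(os.sched_getaffinity(0))        # confirm the new CPU mask
```

Tools like `numactl` offer the same control (plus explicit memory-node binding) from the command line; the point is simply that keeping a thread and its memory on one node trims remote-access latency.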
  • yasbane - Wednesday, May 19, 2010 - link

    Agreed. I do find it disappointing that they ran so few Linux server benchmarks, and so many for Windows.

    -C
  • jbsturgeon - Tuesday, March 30, 2010 - link

    I like the review and enjoyed reading it. I can't help but feel the benchmarks are less a comparison of CPUs and more a study of how well the apps can be threaded, and of how well that threading is implemented: higher-clocked CPUs will be better for serial code, and more cores will win for apps that are well threaded. In scientific number crunching (the code I write), more cores always wins (AMD). We use Fluent too, so thanks for including those benchmarks!!
  • jbsturgeon - Tuesday, March 30, 2010 - link

    Obviously that rule can be altered by a killer memory bus :-).
