Conclusion

When the Xeon X7460 "Dunnington" launched in September 2008, our first impression was that the 503mm² chip was a brute force attempt to crush AMD out of its last stronghold, the quad socket server. In hindsight, the primary reason this server CPU impressed was the poor execution of the AMD "Barcelona" chip. Still stuck at 2.3GHz and backed by a very meager 2MB L3 cache, the AMD server platform was performing well below its true capabilities. The advantage AMD still held was that its fast NUMA interconnect platform was capable of much more; it was just a matter of improving the CPUs. Intel, by contrast, has pushed the multiple FSB platform past its limits and needs to roll out a completely new server platform: a "QuickPath" quad socket platform. AMD has already improved its quad socket CPU twice in one year, while Intel's updated quad platform will not be available before the beginning of 2010.

The end result is that servers based on a quad hex-core Opteron are about 20% to 50% faster, and at the same time consume 20% less power than the Intel hex-core machines. The E7450 has a slightly better performance/watt ratio than the X7460, but simple math shows that no matter which hex-core Xeon you choose, it is going to look bad for the Intel six-core. The X7460 and its brothers are toast. The Intel quad platform will not be attractive until Nehalem EX arrives.
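To make the "simple math" explicit, here is a short sketch of the performance/watt comparison using only the relative figures above; the normalized baseline numbers are illustrative placeholders, not measurements from this review:

```python
# Rough performance-per-watt comparison from the article's relative
# figures: the quad Opteron is 20-50% faster while drawing ~20% less
# power. The baseline of 100/100 is an arbitrary normalization.
xeon_perf, xeon_power = 100.0, 100.0           # normalized Xeon hex-core box

for speedup in (1.20, 1.50):                   # 20% and 50% faster
    opteron_perf = xeon_perf * speedup
    opteron_power = xeon_power * 0.80          # 20% less power
    ratio = (opteron_perf / opteron_power) / (xeon_perf / xeon_power)
    print(f"{speedup - 1:.0%} faster -> {ratio:.2f}x better perf/watt")

# -> 20% faster gives 1.50x, 50% faster gives 1.88x better perf/watt
```

Even at the low end of the performance advantage, the Opteron setup ends up roughly 50% ahead in performance per watt, which is why the comparison looks so lopsided.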

Until then, we have a landslide victory for the AMD quad Opteron platform, if only the pesky dual Xeon X5570 wouldn't spoil the party. Servers based on the X55xx series are the most expensive in the dual socket market, but still cost about half (or even less than half) as much as servers based on the quad hex-core Opterons. The memory slot advantage is also shrinking: an X55xx based server can realistically use 18 x 4GB, or 72GB (maximum: 144GB). A quad Opteron based server typically has 32 slots and can house up to 128GB of RAM using affordable 4GB DIMMs (maximum: 256GB).

Before you go for quad sockets, make sure your application scales beyond 16 cores. Most applications don't, and we picked only applications (large databases, ERP, and virtualization) that typically scale well and that are the target applications for quad socket servers.

So who wins? Intel's dual socket, AMD's dual socket, or AMD's quad socket platform? The answer is that it depends on your performance/RAM ratio. The more performance you require per GB, the more interesting the dual Nehalem platform gets. The more RAM you need to obtain a certain level of performance, the more interesting the AMD quad platform gets.


For example, a small but intensively used database will probably sway you towards the dual Xeon X55xx server, as it is quite a bit cheaper to acquire and the performance/watt and performance/$ ratios are better. A very large database or virtualization consolidation scenario requiring more than 72GB of RAM will probably push you towards the quad Istanbul: once you need more than 64-72GB, memory gets really expensive on the Intel dual socket platform. There are two reasons for this: 8GB DIMMs are five times more expensive than 4GB DIMMs, and DDR3 is still more costly than DDR2 (especially in large DIMMs).
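The slot arithmetic behind that tipping point can be sketched quickly. The dollar figures below are assumptions for illustration (only the "8GB costs five times a 4GB DIMM" ratio comes from the text), and the slot counts are the 18 and 32 mentioned above:

```python
# Why >72GB gets expensive on the dual socket Intel platform: with only
# 18 slots, anything past 18 x 4GB forces 8GB DIMMs, which the article
# pegs at ~5x the price of a 4GB module. Prices are assumed placeholders.
PRICE_4GB = 100              # assumed price of one 4GB DIMM
PRICE_8GB = 5 * PRICE_4GB    # "five times more expensive"

def cheapest_fill(target_gb, slots):
    """Cheapest mix of 4GB/8GB DIMMs reaching target_gb within `slots`."""
    best = None
    for n8 in range(slots + 1):
        for n4 in range(slots + 1 - n8):
            if 8 * n8 + 4 * n4 >= target_gb:
                cost = n8 * PRICE_8GB + n4 * PRICE_4GB
                if best is None or cost < best:
                    best = cost
    return best

print(cheapest_fill(128, slots=18))  # dual Intel: 7400 (needs 8GB DIMMs)
print(cheapest_fill(128, slots=32))  # quad Opteron: 3200 (32 x 4GB)
```

With these assumed prices, reaching 128GB costs more than twice as much on the 18-slot dual socket platform, which narrows the price gap to the quad socket server considerably.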

So there you have it: the latest quad socket Opteron hex-core scales and performs so well that it beats its "natural" enemy, the Xeon X7460, by a large margin, especially from a performance/watt point of view. At the same time, it has to sweat very hard to shake off the dual socket Intel Xeon in quite a few applications. Servers with 24 of those fast cores can only really justify their higher price by offering more and, ironically, cheaper memory. Choosing between a dual socket and a quad socket server is mostly a matter of knowing the memory footprint of the applications you will run on it… and your own personal vision of the datacenter.

I would like to thank my colleague Tijl Deneut for his assistance.

  • rbbot - Tuesday, October 6, 2009 - link

    Surely the high price of 8GB DIMMs isn't going to last very long, especially with Samsung about to launch 16GB parts soon.
  • Calin - Wednesday, October 7, 2009 - link

    8GB DIMMs have two markets: one would be upgrade from 4GB or 2GB parts in older servers, the other would be more memory in cheaper servers. As the demand can be high, it all depends on the supply - and if the supply is low, prices are high.
    So, don't count on the price of 8GB DIMMs to decrease soon
  • Candide08 - Tuesday, October 6, 2009 - link

    One performance factor that has not improved much over the years is the decrease in percentage of performance gains for additional cores.

    A second core adds about 60% performance to the system.
    Third, fourth, fifth and sixth cores all add lower (decreasing) percentages of real performance gains - due to multi-core overhead.

    A dual socket dual core system (4 processors) seems like the sweet spot to our organization.
  • Calin - Wednesday, October 7, 2009 - link

    If your load is enough to fit into four processors, then this is great. However, for some, this level of performance is not enough, and more performance is needed - even if paying four times as much for twice as much performance
  • hifiaudio2 - Tuesday, October 6, 2009 - link

    FYI the R710 can have up to 192gb of ram...

    12x16GB

    not cheap :) but possible

  • JohanAnandtech - Tuesday, October 6, 2009 - link

    at $300 per GB, or the price of 2 times 4 GB DIMMs, I don't think 16 GB DIMMs are going to be a big success right now. :-)
  • wifiwolf - Wednesday, October 7, 2009 - link

    for at least 5 years you mean
  • mamisano - Tuesday, October 6, 2009 - link

    Great article, just have a question about the power supplies. Why do the quad-core servers need a 1200W PSU if the highest measured load was 512W? I know you would like to have some head-room but it looks to me that a more efficient 750 - 900W PSU may have provided better power consumption results... or am I totally wrong? :)
  • JarredWalton - Tuesday, October 6, 2009 - link

    Maximum efficiency for most PSUs is obtained at a load of around 40-60% (give or take), so if you have a server running mostly under load you would want a PSU rated at roughly twice the load power. (Plus a bit of headroom, of course.)
  • JohanAnandtech - Wednesday, October 7, 2009 - link

    Actually, the best server PSUs are now at maximum efficiency (+/- 3%) between 30 and 95% load.

    For example:
    http://www.supermicro.com/products/powersupply/80P...

    And the reason why our quads are using 1000W PSUs (not 1200) is indeed that you need some headroom. We do not test the server with all DIMM slots filled, and you also need to take into account that you need a lot more power when starting up.
