
The Quad Opteron Alternative

Servers with the newest Intel six-core Xeons hit the market in April. The fastest six-core Xeons were able to offer up to twice the performance of the six-core Opteron "Istanbul". The reason for this was that the age of the integer core in AMD's Opteron was starting to show. While the floating point part got a significant overhaul in 2007 with the AMD "Barcelona" quad-core chip, the integer part was still a tuned version of the K8, launched back in 2003. This was partly compensated by large improvements in the multi-core performance scaling department: HT Assist, faster CPU interconnects, larger L3 caches, and so on.

To counter this lower per-core performance, AMD's efforts focused on the "Magny-Cours" MCMs, which scaled even better thanks to HT 3.0 and four DDR3 memory controllers. AMD's twelve-core processors were launched at the end of March 2010, but servers based on these "Magny-Cours" Opterons were hard to find. So for a few months, Intel dominated the midrange and high-end server market. HP and Dell informed us that they would launch their "Magny-Cours" servers in June 2010. That is history now, and server buyers once again have an alternative to the ubiquitous Xeon servers.

AMD’s strategy to make their newest platform attractive is pretty simple: be very generous with cores. For example, you get 12 Opteron cores at 2.1GHz for the price of a six-core Xeon at 2.66GHz (see our overview of SKUs). In our previous article, we measured that on average, a dual socket twelve-core Opteron is competitive with a similar Xeon server. It is a pretty muddy picture though: the Opteron wins in some applications, the Xeon wins in others. The extra DDR3 memory channel and the resulting higher bandwidth make the Opteron the choice for most HPC applications. The Opteron has a small advantage in OLAP databases, and the virtualization benchmarks are a neck-and-neck race. The Xeon wins in applications like rendering, OLTP, and ERP, although again by a small margin.

But if the AMD platform really wants to lure away significant numbers of customers, AMD will have to do better than being slightly faster or slightly slower. There are many more Xeon-based servers out there, so AMD Opteron-based servers have to rise above the crowd. And they do: the "core generosity" doesn't end with offering more cores per socket. All 6100 Opterons are quad socket capable: the price per core stays the same whether you want 12, 24, or 48 cores in your machine. AMD says they have "shattered the 4P tax, making 2P and 4P processors the same price."
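The "no 4P tax" claim above boils down to simple arithmetic: because every Opteron 6100 SKU carries one list price regardless of socket count, CPU cost per core is constant across 1P, 2P, and 4P configurations. A minimal sketch, using a hypothetical placeholder chip price (not a real list price):

```python
# Illustrative sketch of the "no 4P tax" pricing claim: one list price
# per Opteron 6100 chip means a constant CPU cost per core, whether the
# machine has 12, 24, or 48 cores.
CORES_PER_CHIP = 12
CHIP_PRICE = 1000.0  # hypothetical placeholder, not a real list price

def price_per_core(sockets: int) -> float:
    """CPU cost per core for a system with the given socket count."""
    total_cores = sockets * CORES_PER_CHIP
    total_price = sockets * CHIP_PRICE
    return total_price / total_cores

for sockets in (1, 2, 4):  # 12, 24, or 48 cores
    print(sockets * CORES_PER_CHIP, "cores:", price_per_core(sockets), "per core")
```

With traditional 4P-capable parts, the per-chip price (and therefore the per-core price) jumped when moving from 2P to 4P SKUs; here the function is flat in `sockets` by construction.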

So dual socket Opteron servers are OK, offering competitive performance at a slightly lower price most of the time. Nice, but not a head turner. The really interesting servers of the AMD platform should be the quad socket ones. For a small price premium you get twice as many DIMM slots and processors as a dual socket Xeon server. That means a quad socket Opteron 6100 positions itself as a high-end alternative to a dual Xeon 5600 server. If we take a quick look at the actual pricing of the large OEMs, the picture becomes very clear.

Compared to the DL380 G7 (72GB) specced above, the Dell R815 offers twice the amount of RAM while, theoretically, offering twice as much performance. The extra DIMM slots pay off: if you want 128GB, the dual Xeon servers have to use the more expensive 8GB DIMMs.
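The DIMM argument above can be sketched with a few lines of arithmetic. The slot counts here are illustrative assumptions, not vendor specs: say a dual-socket board exposes 16 usable slots and a quad-socket board 32.

```python
# Sketch of the DIMM-slot arithmetic behind the memory-cost argument.
# Slot counts are assumptions for illustration, not vendor specs.

def min_dimm_size_gb(target_gb: int, slots: int) -> int:
    """Smallest power-of-two DIMM size (GB) that reaches target_gb
    when populating at most the given number of slots."""
    size = 1
    while size * slots < target_gb:
        size *= 2
    return size

print(min_dimm_size_gb(128, 16))  # dual-socket board: needs 8GB DIMMs
print(min_dimm_size_gb(128, 32))  # quad-socket board: 4GB DIMMs suffice
```

Since the price per GB of the largest DIMMs is typically well above that of mainstream sizes, doubling the slot count lowers the cost of a given memory capacity even before the CPU pricing is considered.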

Quad Opteron Style Dell
51 Comments

  • jdavenport608 - Thursday, September 09, 2010 - link

Appears that the pros and cons on the last page are not correct for the SGI server.
  • Photubias - Thursday, September 09, 2010 - link

    If you view the article in 'Print Format' then it shows correctly.
    Seems to be an AnandTech issue ... :p
  • Ryan Smith - Thursday, September 09, 2010 - link

Fixed. Thanks for the notice.
  • yyrkoon - Friday, September 10, 2010 - link

    Hey guys, you've got to do better than this. The only thing that drew me to this article was the name "SGI", and your explanation of their system is nothing.

    Why not just come out and say, "Hey, look what I've got pictures of"? That's about all the use I have for the "article". Sorry if you do not like that, Johan, but the truth hurts.
  • JohanAnandtech - Friday, September 10, 2010 - link

It is clear that we do not focus on the typical SGI market, but you may have noticed from the other competitors that HPC is not our main expertise; virtualization is. It is not really clear what your complaint is, so I assume it is the lack of HPC benchmarks. Care to make your complaint a little more constructive?
  • davegraham - Monday, September 13, 2010 - link

    I'll defend Johan here... SGI has basically cornered itself into the cloud-scale marketplace, where its BTO style of engagement has really allowed it to prosper. If you wanted a competitive story there, the Dell DCS series of servers (the C6100, for example) would be a better comparison.

    cheers,

    Dave
  • tech6 - Thursday, September 09, 2010 - link

    While the R815 is great value where the host is CPU bound, most VM workloads seem to be memory limited rather than limited by processing power. Another consideration is server (in particular memory) longevity, which is where the R810 inherits the R910's RAS features while the R815 misses out.

    I am not disagreeing with your conclusion that the R815 is great value, but only if your workload is CPU bound and if you are willing to take the risk of not having RAS features in a data center application.
  • JFAMD - Thursday, September 09, 2010 - link

    True that there is a RAS difference, but you do have to weigh the budget differences and power differences to determine whether the RAS levels of the R815 (or even a Xeon 5600 system) are sufficient for your application. Keep in mind that the Xeon 7400 series did not have these RAS features, so if you were comfortable with the RAS levels of the 7400 series for these apps, then you have to question whether the new RAS features are a "must have". I am not saying that people shouldn't want more RAS (everyone should), but it is more a question of whether it is worth paying the extra price up front and the extra price every hour at the wall socket.

    For virtualization, the last time I talked to the VM vendors about attach rates, they said that their attach rate per platform matched the market (i.e. ~75% of their software was landing on 2P systems). So in the case of virtualization you can move to the R815 and still enjoy the economics of the 2P world while getting the scalability of the 4P products.
  • tech6 - Thursday, September 09, 2010 - link

I don't disagree, but the RAS issue also dictates the longevity of the platform. I have been in the hosting business for a while, and we see memory errors bring down 2-year-old and older HP blades in alarming numbers. If you budget for a 4 year life cycle, then RAS has to be high on your list of features to make that happen.
  • mino - Thursday, September 09, 2010 - link

    Generally I would agree, except that 2-year-old HP blades (G5) are the worst way to ascertain commodity x86 platform reliability.
    Reasons:
    1) inadequate cooling setup (you'd better keep the c7000 input air well below 20C at all costs)
    2) FB-DIMMs love to overheat
    3) G5 blade mobos are a BIG MESS when it comes to memory compatibility => they clearly underestimated the tolerances needed
    4) All the points above hold true at least compared to the HS21*, and except for 1) also against the bl465*

    Speaking from 3 years of operating all three boxen in similar conditions. This became most clear to us when the building power got cut off and all our BladeSystems died within minutes (well before running out of UPS), while our 5-year-old BladeCenter (hosting all infrastructure services) remained online even at 35C (where the temperature plateaued thanks to the dead HPs).
    Ironically, thanks to the dead production systems we did not have to shut down the infrastructure at all, as the UPSes easily lasted the 3 hours needed ...
