Conclusions

To summarize the current situation in the server CPU market, we have drawn up a comparison table of the performance we have measured so far. We'll compare the new Interlagos Opteron 6276 against the outgoing Opteron 6174 as well as the Xeon X5650.

                                          Opteron 6276 vs.    Opteron 6276 vs.
                                          Opteron 6174        Xeon X5650
ESXi + Linux                              -1%                 -2%
ESXi + Windows                            =                   +3%
Cinebench                                 +2%                 +9%
3DS Max 2012 (iRay)                       -9% to +4%          -10% to +3%
Maxwell Render                            +4%                 +6%
Blender                                   -4%                 -24%
Encryption/Decryption AES                 +265% / +275%       +2% / +7%
Encryption/Decryption Twofish/Serpent     +25% / +25%         +31% / +46%
Compression/Decompression                 +10% / +10%         -33% / +22%

Let us first discuss the virtualization scene, the most important market. Unfortunately, with the current power management in ESXi, we are not satisfied with the performance/watt ratio of the Opteron 6276. The Xeon consumes up to 25% less energy and performs slightly better. So if performance/watt is your first priority, we think the current Xeons are your best option.

The Opteron 6276 offers a better performance per dollar ratio. It delivers the performance of a $1000 Xeon (X5650) for $800. Add to this that G34-based servers are typically less expensive than their Intel LGA 1366 counterparts, and the price advantage of the new Opteron grows. If performance per dollar is your first priority, we think the Opteron 6276 is an attractive alternative.
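As a rough back-of-the-envelope illustration of that performance-per-dollar argument (a sketch only, using the list prices quoted above and treating the two chips as roughly equal performers):

```python
# Back-of-the-envelope perf/$ comparison, assuming the list prices above and
# roughly equal overall performance for the two chips (an approximation).
xeon_x5650_price = 1000   # USD, approximate list price
opteron_6276_price = 800  # USD, approximate list price

relative_performance = 1.0  # treat the two chips as equals for this sketch

perf_per_dollar_gain = (relative_performance / opteron_6276_price) / \
                       (relative_performance / xeon_x5650_price) - 1
print(f"Opteron 6276 perf/$ advantage: {perf_per_dollar_gain:.0%}")  # ~25%
```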

And then there is Windows Server 2008 R2. Typically we found that under heavy load (benchmarking at 85-100% CPU load) power consumption was between 3% (integer) and 7% (FP) higher on the Opteron 6276 than on the Xeons and Opteron 6100, a lot better than under ESXi. Add to this the fact that the new Opteron's energy usage at low load is excellent, and you understand why we feel there is no reason to go for the Opteron 6100 anymore. Again, AMD understands that it should price its CPUs more attractively than the competition, so from a price/performance/watt point of view the Opteron 6276 is a cost-effective alternative to the Xeon...on the condition that you enable the "high performance" policy and that AMD keeps the price delta the same in the coming months.
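For reference, the "high performance" policy mentioned above can also be enabled from the command line. The sketch below is a minimal illustration (not part of our test scripts) that simply calls powercfg with the GUID of Windows' built-in High performance plan:

```python
# Minimal sketch: activate the built-in "High performance" power plan on
# Windows Server 2008 R2 via powercfg. Requires administrator rights.
import subprocess

HIGH_PERFORMANCE_GUID = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"  # built-in plan

subprocess.run(["powercfg", "/getactivescheme"], check=True)                   # show current plan
subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE_GUID], check=True)  # switch plan
```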

That is the good news. We cannot help but feel a bit disappointed too. AMD promised us (in 2009/2010) that the Opteron 6200 would be significantly faster than the 6100: "unprecedented server performance gains". That is somewhat the case if you recompile your software with the latest and greatest optimizing compiler, as AMD's own SPEC CINT 2006 (+19%), CFP 2006 (+11%), and Linpack (+32%) benchmarks show.

One of the real advantages of a new processor architecture (prime examples were the K7 and K8) is that it performs well in older software too, without requiring a recompile. For some people in the HPC world, recompiling is acceptable and common, but for everybody else (probably >95% of the market!), it's best if existing binaries run faster. Administrators are generally not going to upgrade and recompile their software just to make better use of a new server CPU. Hopefully AMD's engineers have been looking into improving the legacy software performance of their latest chip over the last few months, because it could use some help.

On the other side of the coin, it is clear that some of the excellent features of the new Opteron are not leveraged by the current software base. The deeper sleep states and more advanced core gating are not working to their full potential, and current operating systems frequently don't appear to know how to get the best out of Turbo Core. The clock can be boosted by 39% when half of the cores are active, but an 18% boost was the best we saw (in a single-threaded app!). Simply turning the right knobs gave some tangible power savings (see ESXi) and some impressive performance improvements (see Windows Server 2008).
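To put those percentages into clock speeds (a quick arithmetic sketch, assuming the Opteron 6276's 2.3 GHz base clock):

```python
# Quick arithmetic sketch of the Turbo Core headroom discussed above,
# assuming the Opteron 6276's 2.3 GHz base clock.
base_clock_ghz = 2.3

advertised_half_core_turbo = base_clock_ghz * 1.39  # +39% -> ~3.2 GHz
best_observed_boost = base_clock_ghz * 1.18         # +18% -> ~2.7 GHz

print(f"Advertised half-core turbo: {advertised_half_core_turbo:.1f} GHz")
print(f"Best boost we measured:     {best_observed_boost:.1f} GHz")
```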

In short, we're going to need to do some additional testing and take this server out for another test drive, and we will. Stay tuned for a follow-up article as we investigate other options for improving performance.

Comments

  • mino - Wednesday, November 16, 2011 - link

    More workload ... also you need at least 3 servers for any meaningful redundancy ... even when only needing the power of 1/4 of either of them.

    BTW, most CPUs sold in the SMB space are a far cry from the 16-core monsters reviewed here ...
  • JohanAnandtech - Thursday, November 17, 2011 - link

    Don't forget the big "Cloud" buyers. Facebook has increased its server count from 10,000 somewhere in 2008 to 10 times that in 2011. That is one of the reasons why the number of units is still growing.
  • roberto.tomas - Wednesday, November 16, 2011 - link

    seems like the front page write-up and this article are from different versions:

    from the write up: "Each of the 16 integer threads gets their own integer cluster, complete with integer executions units, a load/store unit, and an L1-data cache"

    from the article: "Cores (Modules)/Threads 8/16 [...] L1 Data 8x 64 KB 2-way"

    what is really surprising is calling them threads (I thought, like the write-up on the front page, that they each had their own independent integer "unit"). If they have their own L1 cache, they are cores as far as I'm concerned. Then again, the article itself seems to suggest just that: they are threads without an independent L1 cache.

    ps> I post comments only like once a year -- please don't delete my account. Every time I do, I have to register anew :D
  • mino - Wednesday, November 16, 2011 - link

    It suits Intel better to call them threads ... so writers are ordered ... if only the pesky reality did not pop up here and there.

    BD 4200 series is a 1-chip, 4-module, 8(4*2)-core, 8(4*2)-thread processor
    BD 6200 series is a 2-chip, 8(2*4)-module, 16(2*4*2)-core, 16(2*4*2)-thread processor

    Xeon 5600 series is a 1-chip, (up to) 6-core, 12(6*2)-thread processor.

    Simple as cake. :D
  • rendroid1 - Wednesday, November 16, 2011 - link

    The L1 D-cache should be 1 per thread, 4-way, etc.

    The L1 I-cache is shared by 2 threads per "module", and is 2-way, etc.
  • JohanAnandtech - Thursday, November 17, 2011 - link

    Yep. fixed. :-)
  • Novality77 - Wednesday, November 16, 2011 - link

    One thing that I never see in any reviews is a remark about the fact that more cores with lower IPC add costs when it comes to licensing. For instance, Oracle, IBM, and most other suppliers charge per core. These costs can add up pretty fast; 10,000 per core is not uncommon...
  • fumigator - Wednesday, November 16, 2011 - link

    Great review as usual. I found all the new AMD Opterons very interesting. Pairing two in a dual-socket G34 board would make a multitasking monster on the cheap, and one that's quite future-proof.

    About cores vs. modules vs. Hyper-Threading: people thinking AMD cores aren't true cores should consider the following:

    adding virtual cores via Hyper-Threading on Intel platforms doesn't increase performance by 100% per core, but by less than 50%

    Also, if you look at Intel processor photographs, you won't notice the virtual cores anywhere in the pictures, while in Interlagos/Bulldozer you can clearly spot each core by its shape inside each module. What surprises me is how small they are, but that's an entirely different discussion.
  • MossySF - Wednesday, November 16, 2011 - link

    I'm waiting to see the follow-up Linux article. The hints in this one confirm my own experiences. At our company, we're 99% FOSS, and when using CentOS packages, AMD chips run just as fast as Intel chips since it's all compiled with GCC instead of Intel's "disable faster code when running on AMD processors" compiler. As an example, PostgreSQL on native CentOS is just as fast on Thuban as on Sandy Bridge at the same GHz. And when you then virtualize CentOS under CentOS+KVM, Thuban is 35% faster. (Nehalem goes from 10% slower natively to 50% slower under KVM!)

    The compiler issue might be something to look at in virtualization tests. If you fake an Intel identifier in your VM, optimizations for new instruction sets might kick in.

    http://www.agner.org/optimize/blog/read.php?i=49#1...
  • UberApfel - Wednesday, November 16, 2011 - link

    Amazingly biased review from Anandtech.

    A fairer comparison would be between the Opteron 6272 ($539 / 8-module) and Xeon E5645 ($579 / 6-core); both common and recent processors.

    Yet handpicking the higher-clocked Opteron 6276 (for what good reason?) seems to be nothing but an attempt to make the new 6200 series look unremarkable in both power consumption and performance. The 6272 is cheaper, more common, and would beat the Xeon X5670 in power consumption, on which half of this review is weighted. Otherwise you should have used the 6282 SE, which would compete in performance as well as being the appropriate processor according to your own chart.

    Even the chart on Page 1 is designed to make Intel look superior all-around. For what reason would you exclude the Opteron 4274 HE (65W TDP) or the Opteron 4256 EE (35W TDP) from the 'Power Optimized' section?

    The ignorance of processor tiers is forgivable, even if you're likely paid to write this... but the benchmarks themselves are completely irrelevant. Where's the IIS/Apache/Nginx benchmark? PostgreSQL/SQLite? Facebook's HipHop? Node.js? Java? Something relevant to servers, and not something obscure enough to sound professional?
