Benchmark Configuration

Since AMD sent us a 1U Supermicro server, we had to test with our 1U servers again, which is why we went back to the ASUS RS700 for the Xeon. That is a bit unfortunate, as 1U servers on average have a worse performance/watt ratio than other form factors such as 2U and blades. Of course, 1U still makes sense in low cost, high density HPC environments.

Supermicro A+ Server 1022G-URG (1U Chassis)

CPU: two AMD Opteron "Bulldozer" 6276 at 2.3GHz, or
     two AMD Opteron "Magny-Cours" 6174 at 2.2GHz
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: Supermicro H8DGU-F
Internal Disks: 2x Intel SLC X25-E 32GB, or 1x Intel MLC SSD 510 120GB
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2011)
PSU: Supermicro PWS-704P-1R 750W

The AMD CPUs have four memory channels per socket. The new Interlagos Bulldozer CPU supports DDR3-1600, so our dual CPU configuration gets eight DIMMs, one per channel, for maximum bandwidth.
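
As a quick sanity check, the theoretical peak bandwidth works out as follows. This is a back-of-the-envelope sketch in Python; sustained bandwidth will of course be considerably lower than the theoretical peak.

```python
# Theoretical peak memory bandwidth of the Opteron test system.
# DDR3-1600 performs 1600 million transfers/s over a 64-bit (8-byte) channel.
channels_per_socket = 4
transfers_per_sec = 1600e6
bytes_per_transfer = 8

per_socket = channels_per_socket * transfers_per_sec * bytes_per_transfer
print(f"peak per socket: {per_socket / 1e9:.1f} GB/s")      # 51.2 GB/s
print(f"dual socket:     {2 * per_socket / 1e9:.1f} GB/s")  # 102.4 GB/s
```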

ASUS RS700-E6/RS4 (1U Chassis)

CPU: two Intel Xeon X5670 at 2.93GHz (six cores), or
     two Intel Xeon X5650 at 2.66GHz (six cores)
RAM: 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard: ASUS Z8PS-D12-1U
Chipset: Intel 5520
BIOS version: 1102 (08/25/2011)
PSU: Delta Electronics DPS-770AB 770W

To speed up testing, we tested the Intel Xeon and AMD Opteron systems in parallel. Since we didn't have more than eight 8GB DIMMs, the Xeon system was outfitted with our 4GB DDR3-1333 DIMMs. It thus gets only 48GB, but this is no disadvantage: our benchmark with the highest memory footprint (vApus FOS, 5 tiles) uses no more than 36GB of RAM.

We measured the power difference between 12x4GB and 8x8GB of RAM and recalculated our power measurements accordingly (the differences were very small). There was no alternative: our Xeon has three memory channels per socket and cannot be outfitted with the same number of DIMMs as the four-channel Opteron system.
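
As an illustration of that recalculation, the sketch below subtracts a per-DIMM power delta from the measured wall power. The 3W-per-DIMM figure is a placeholder, not our measured number; we derived the real delta by measuring the same system with both memory configurations.

```python
# Sketch: normalize measured wall power to a reference DIMM count.
def normalize_power(measured_watts, dimms_installed, dimms_reference,
                    watts_per_dimm):
    """Estimate power as if the system held the reference number of DIMMs."""
    extra_dimms = dimms_installed - dimms_reference
    return measured_watts - extra_dimms * watts_per_dimm

# Hypothetical example: a 12-DIMM reading normalized to an 8-DIMM config,
# assuming ~3W per DDR3 DIMM under load (placeholder value).
print(normalize_power(280.0, 12, 8, 3.0))  # 268.0
```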

We chose the Xeons based on AMD's positioning. The Xeon X5649 is priced at the same level as the Opteron 6276, but we didn't have an X5649 in the labs. As we suggested earlier, the Opteron 6276 should reach the performance of the X5650 to be attractive, so we tested with the X5670 and X5650. Due to time constraints, some tests were run only on the X5670.

Common Storage System

For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 RAID controller (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15,000 RPM SAS disks (RAID-0) inside a Promise JBOD J300s. The virtualization testing requires more storage IOPS than our standard Promise JBOD with six SAS drives can provide. To counter this, we added internal SSDs (a rough IOPS budget follows the list below):

  • We installed the Oracle Swingbench VMs (vApus Mark II) on two internal X25-E SSDs (no RAID). The Oracle database is only 6GB in size. We test with two tiles; on each SSD, each OLTP VM accesses its own database data. All other VMs (web, SQL Server OLAP) are stored on the Promise JBOD (see above).
  • With vApus FOS, Zimbra is the I/O intensive VM. We spread the Zimbra data over the two Intel X25-E SSDs (no RAID). All other VMs (web, MySQL OLAP) get their data from the Promise JBOD (see above).
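
The rough IOPS budget below shows why the SSDs were necessary. The per-device figures are ballpark numbers of the kind vendors quote, not our own measurements.

```python
# Rough IOPS budget for the storage setup (ballpark figures, not measured).
sas_disks = 6
iops_per_sas = 180        # typical random IOPS for a 15,000 RPM SAS disk
ssds = 2
iops_per_x25e = 35_000    # Intel's rated 4KB random read IOPS for the X25-E

print(f"Promise JBOD (6x SAS): ~{sas_disks * iops_per_sas:,} IOPS")
print(f"Two X25-E SSDs:        ~{ssds * iops_per_x25e:,} IOPS")
```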

We monitored disk activity; the physical disk adapter latency (as reported by VMware vSphere) stayed between 0.5 and 2.5 ms.
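
If you want to extract the same counters yourself, below is a minimal sketch that averages the adapter latency columns from an esxtop batch capture (esxtop -b -n 60 > esxtop.csv). The substrings used to match the column headers are assumptions based on esxtop's CSV naming convention; adjust them to your host.

```python
# Sketch: average physical disk adapter latency from an esxtop batch capture.
import csv
import statistics

with open("esxtop.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Match latency counters loosely; exact header strings vary per host.
    latency_cols = [i for i, name in enumerate(header)
                    if "Physical Disk Adapter" in name
                    and "MilliSec/Command" in name]
    samples = [float(row[i]) for row in reader
               for i in latency_cols if row[i]]

print(f"mean {statistics.mean(samples):.2f} ms, max {max(samples):.2f} ms")
```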

Software configuration

All vApus testing was done on vSphere 5, or more specifically VMware ESXi 5.0.0 (build 469512, VMkernel SMP build 348481, Jan-12-2011, x86_64). All VMDKs use thick provisioning and are independent and persistent. The power policy is "Balanced Power" unless indicated otherwise. All other testing was done on Windows 2008 R2 SP1.

Other notes

Both servers were fed by a standard European 230V (16 Amps max.) power line. The room temperature was monitored and kept at 23°C by our Airwell CRACs.

We used the Racktivity ES1008 Energy Switch PDU to measure power. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
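
To illustrate why true RMS matters, the sketch below compares true RMS with the peak divided by sqrt(2) shortcut that sine-assuming meters effectively use. The waveform is synthetic; the harmonic content is made up for illustration, though a peaky current draw is typical of switch-mode PSUs.

```python
import math

# One second of a 50Hz waveform with a 3rd harmonic, giving the peaky
# shape typical of switch-mode PSU current draw (synthetic example).
rate = 20_000  # samples per second, matching the Racktivity's maximum
wave = [math.sin(2 * math.pi * 50 * t / rate)
        - 0.3 * math.sin(2 * math.pi * 150 * t / rate)
        for t in range(rate)]

true_rms = math.sqrt(sum(v * v for v in wave) / len(wave))
naive_rms = max(abs(v) for v in wave) / math.sqrt(2)  # assumes a pure sine

print(f"true RMS:     {true_rms:.3f}")   # ~0.738
print(f"peak/sqrt(2): {naive_rms:.3f}")  # ~0.919, a large overestimate
```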

Comments

  • mino - Wednesday, November 16, 2011

    More workload ... also you need at least 3 servers for any meaningful redundancy ... even when only needing the power of 1/4 of either of them.

    BTW, most CPUs sold in the SMB space are a far cry from the 16-core monsters reviewed here ...
  • JohanAnandtech - Thursday, November 17, 2011

    Don't forget the big "Cloud" buyers. Facebook increased its server count from 10,000 somewhere in 2008 to ten times that in 2011. That is one of the reasons why the number of units is still growing.
  • roberto.tomas - Wednesday, November 16, 2011

    seems like the front page write-up and this article are from different versions:

    from the write-up: "Each of the 16 integer threads gets their own integer cluster, complete with integer executions units, a load/store unit, and an L1-data cache"

    from the article: "Cores (Modules)/Threads 8/16 [...] L1 Data 8x 64 KB 2-way"

    what is really surprising is calling them threads (I thought, like the write-up on the front page, that they each had their own independent integer "unit"). If they have their own L1 cache, they are cores as far as I'm concerned. Then again, the article itself seems to suggest just that: they are threads without independent L1 caches.

    ps> I post comments only like once a year -- please don't delete my account. every time I do, I have to register anew :D
  • mino - Wednesday, November 16, 2011

    It suits Intel better to call them threads ... so writers are ordered ... only if the pesky reality did not pop up here and there.

    BD 4200 series is a 1-chip, 4-module, 8(4*2)-core, 8(4*2)-thread processor
    BD 6200 series is a 2-chip, 8(2*4)-module, 16(2*4*2)-core, 16(2*4*2)-thread processor

    Xeon 5600 series is a 1-chip, (up to) 6-core, 12(6*2)-thread processor.

    Simple as cake. :D
  • rendroid1 - Wednesday, November 16, 2011

    The L1 D-cache should be 1 per thread, 4-way, etc.

    The L1 I-cache is shared by 2 threads per "module", and is 2-way, etc.
  • JohanAnandtech - Thursday, November 17, 2011

    Yep. fixed. :-)
  • Novality77 - Wednesday, November 16, 2011

    One thing that I never see in any review is a remark about the fact that more cores with lower IPC carry added costs when it comes to licensing. For instance, Oracle, IBM and most other suppliers charge per core. These costs can add up pretty fast; 10,000 per core is not uncommon.....
  • fumigator - Wednesday, November 16, 2011

    Great review as usual. I found all the new AMD Opterons very interesting. Pairing two in a dual socket G34 board would make a multitasking monster on the cheap, and quite future proof.

    About cores vs modules vs hyperthreading, people thinking AMD cores aren't true cores should consider the following:

    adding virtual cores through Hyper-Threading on Intel platforms doesn't increase performance by 100% per core, but by less than 50%

    Also, if you look at Intel processor photographs, you won't notice the virtual cores anywhere in the pictures, while in Interlagos/Bulldozer you can clearly spot each core by its shape inside each module. What surprises me is how small they are, but that's for an entirely different discussion.
  • MossySF - Wednesday, November 16, 2011

    I'm waiting to see the follow-up Linux article. The hints in this one confirm my own experiences. At our company, we're 99% FOSS, and when using CentOS packages, AMD chips run just as fast as Intel chips since it's all compiled with GCC instead of Intel's "disable faster code when running on AMD processors" compiler. As an example, PostgreSQL on native CentOS is just as fast on Thuban as on Sandy Bridge at the same GHz. And when you then virtualize CentOS under CentOS+KVM, Thuban is 35% faster. (Nehalem goes from 10% slower natively to 50% slower under KVM!)

    The compiler issue might be something to look at in virtualization tests. If you fake an Intel identifier in your VM, optimizations for new instruction sets might kick in.

    http://www.agner.org/optimize/blog/read.php?i=49#1...
  • UberApfel - Wednesday, November 16, 2011

    Amazingly biased review from Anandtech.

    A fairer comparison would be between the Opteron 6272 ($539 / 8-module) and Xeon E5645 ($579 / 6-core); both common and recent processors.

    Yet handpicking the higher clocked Opteron 6276 (for what good reason?) seems to be nothing but an attempt to make the new 6200 series seem unremarkable in both power consumption and performance. The 6272 is cheaper, more common, and would beat the Xeon X5670 in power consumption, which half this review is weighted on. Otherwise you should've used the 6282 SE, which would compete in performance as well as being the appropriate processor according to your own chart.

    Even the chart on Page 1 is designed to make Intel look superior all-around. For what reason would you exclude the Opteron 4274 HE (65W TDP) or the Opteron 4256 EE (35W TDP) from the 'Power Optimized' section?

    The ignorance of processor tiers is forgivable, even if you're likely paid to write this... but the benchmarks themselves are completely irrelevant. Where's the IIS/Apache/Nginx benchmark? PostgreSQL/SQLite? Facebook's HipHop? Node.js? Java? Something relevant to servers, and not something obscure enough to sound professional?
