Benchmark Configuration

Unfortunately, the Intel R2208GZ4GSSPP is a 2U server, which makes it hard to compare with the 1U Opteron "Interlagos" and 1U "Westmere-EP" servers we have tested in the past. We will still show a few power consumption numbers, but since a direct comparison isn't possible, please take them with a grain of salt.

Intel's Xeon E5 server R2208GZ4GSSPP (2U Chassis)

CPU: Two Intel Xeon E5-2660 processors (2.2GHz, 8 cores, 20MB L3, 95W)
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: Intel Server Board S2600GZ "Grizzly Pass"
Chipset: Intel C600
BIOS version: SE5C600.86B (01/06/2012)
PSU: Intel 750W DPS-750XB A (80+ Platinum)

The Xeon E5 CPUs have four memory channels per CPU and support DDR3-1600, and thus our dual CPU configuration gets eight DIMMs for maximum bandwidth. The typical BIOS settings can be found below.

Not visible in the BIOS settings above is that all prefetchers are enabled in all of the tests.
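
Since populating every channel is about memory bandwidth, here is a rough back-of-the-envelope sketch (a Python illustration, not part of the test setup) of the theoretical peak bandwidth per socket at the DDR3 speeds used in this review:

    # Back-of-the-envelope sketch (illustration only): theoretical peak DDR3
    # bandwidth per socket, assuming 8 bytes per transfer per channel.
    def peak_bw_gbs(megatransfers_per_s, channels):
        bytes_per_transfer = 8
        return megatransfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

    print(peak_bw_gbs(1600, 4))  # Xeon E5 / Opteron 6300: ~51.2 GB/s per socket
    print(peak_bw_gbs(1333, 3))  # Xeon X5670 (Westmere-EP): ~32 GB/s per socket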

Supermicro A+ Opteron server 1022G-URG (1U Chassis)

CPU: Two AMD Opteron "Abu Dhabi" 6380 at 2.5GHz
Two AMD Opteron "Abu Dhabi" 6376 at 2.3GHz
Two AMD Opteron "Bulldozer" 6276 at 2.3GHz
Two AMD Opteron "Magny-Cours" 6174 at 2.2GHz
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: SuperMicro H8DGU-F
Internal Disks: 2x Intel SSD 710 (MLC) 200GB
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2012)
PSU: SuperMicro PWS-704P-1R 750W

The same is true for the latest AMD Opterons: eight DDR3-1600 DIMMs for maximum bandwidth. You can check out the BIOS settings of our Opteron server below.

C6 is enabled and TurboCore (CPB mode) is on.

ASUS RS700-E6/RS4 1U Server

CPU: Two Intel Xeon X5670 at 2.93GHz (six cores)
Two Intel Xeon X5650 at 2.66GHz (six cores)
RAM: 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard: ASUS Z8PS-D12-1U
Chipset: Intel 5520
BIOS version: 1102 (08/25/2011)
PSU: 770W Delta Electronics DPS-770AB

To speed up benchmarking, we tested the Intel Xeon and AMD Opteron systems in parallel. As we didn't have more than eight 8GB DIMMs, the Westmere-EP Xeon system got our 4GB DDR3-1333 DIMMs instead. That system only gets 48GB, but this isn't a disadvantage, as the benchmark with our highest memory footprint (vApus FOS, 5 tiles) uses no more than 40GB of RAM. There is no real alternative: with three memory channels per CPU, the Westmere-EP Xeon cannot be outfitted with the same amount of RAM as our Opteron 6300 and Xeon E5 systems (four channels per CPU).
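
For reference, the sketch below spells out the DIMM-per-channel arithmetic behind these memory configurations (an illustration of the layout described above, assuming two sockets per system):

    # Illustration of the DIMM population per memory channel in each dual-socket
    # test system; the aim is simply to keep every channel populated.
    systems = {
        "Xeon E5-2660 (2P)": {"channels": 2 * 4, "dimms": 8},
        "Opteron 6300 (2P)": {"channels": 2 * 4, "dimms": 8},
        "Xeon X5670 (2P)":   {"channels": 2 * 3, "dimms": 12},
    }
    for name, cfg in systems.items():
        print(name, cfg["dimms"] // cfg["channels"], "DIMM(s) per channel")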

Common Storage System

For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 card (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15000 RPM SAS disks (RAID-0) inside a Promise JBOD J300.
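
To put the six-spindle stripe in rough perspective, the sketch below estimates its aggregate random I/O capability using the generic rule of thumb of about 180 IOPS per 15,000 RPM spindle; this is an assumption for illustration, not a measured figure for these Cheetah drives:

    # Very rough estimate: aggregate random IOPS of the six-disk RAID-0 set.
    # 180 IOPS per 15K RPM spindle is a generic rule of thumb, not a measurement.
    spindles = 6
    iops_per_15k_disk = 180
    print(spindles * iops_per_15k_disk)  # ~1080 random IOPS for the stripe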

Software Configuration

All vApus testing is done on VMware vSphere 5.1 (ESXi 5.1). All VMDKs use thick provisioning and are set to independent and persistent. The power policy is "Balanced Power" unless otherwise indicated. All other testing is done on Windows Server 2008 R2 SP1 Enterprise; unless noted otherwise, we use the "High Performance" power plan there.
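
Below is a minimal sketch of how the Windows side of that setting can be applied and verified from a script, assuming it runs on the Windows Server 2008 R2 SP1 test systems (SCHEME_MIN is powercfg's alias for the built-in High Performance plan):

    # Minimal sketch: switch the Windows host to the "High performance" power
    # plan before benchmarking and print the active plan for the test log.
    # Assumes this runs on the Windows Server 2008 R2 SP1 systems described above.
    import subprocess

    subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)
    subprocess.run(["powercfg", "/getactivescheme"], check=True)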

Other Notes

Both servers are fed by a standard European 230V (16A max) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs. We use a Racktivity ES1008 Energy Switch PDU to measure power consumption. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
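
To illustrate why true RMS matters, the short sketch below compares the RMS value computed directly from waveform samples with the "peak divided by the square root of two" shortcut that only holds for a perfect sine wave; the clipped waveform is purely illustrative, not Racktivity data:

    # Illustrative only: true RMS from samples vs. the sine-wave shortcut.
    import math

    def true_rms(samples):
        # Root-mean-square of instantaneous samples.
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # A clipped (non-sinusoidal) current waveform, "sampled" 200 times per cycle.
    samples = [max(-1.0, min(1.0, 1.2 * math.sin(2 * math.pi * n / 200)))
               for n in range(200)]

    print(true_rms(samples))            # correct value, computed from the samples
    print(max(samples) / math.sqrt(2))  # sine-wave assumption gives a different value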

Comments

  • coder543 - Wednesday, February 20, 2013 - link

    99%? I love your highly scientific numbers. And yes, of course only Intel can design a perfect processor. I'm glad you were here to let everyone know.

    To quote Abraham Lincoln, (no, not really) "All of our servers run Intel. Everything AMD makes is no better than British tea."
  • Tams80 - Wednesday, February 20, 2013 - link

    How much are Intel paying you? XD

    Seriously though, you've gone through the entire comments section* posting walls of text that add little to the discussion. Not only that, but your posts are a little offensive.

    *I realise I'm being hypocritical here.
  • JKflipflop98 - Wednesday, February 20, 2013 - link

    Well, Intel does pay me and I'll be the first to say these chips are lookin pretty good in comparison with their previous generation counterparts. Good value for the money for sure.

    As Anand says, however, HPC users are usually after the "extreme" ends of the scale. They're either after max performance or max performance to fit into a certain power/heat envelope. In either case, we win.
  • Tams80 - Wednesday, February 20, 2013 - link

    I'm sure you know what I mean. It wasn't exactly high brow humour.

    They certainly do look good, especially for a company that has already invested in AMD chips. Intel may well be better in both use cases, but at least AMD are providing competent competition.
  • tech6 - Wednesday, February 20, 2013 - link

    The AMD 6x00 series has always looked nearly competitive on paper, but it is nowhere near Intel performance and efficiency. We have 3 data centers: one is running a mix of 6100 and 6200 Opterons while the others run older Xeon 7300s and new E5 Xeons. In terms of single-threaded and total performance, the 6x00 series cannot keep up with even the old 7300 Xeons and can't touch the E5s. What AMD needs is a 30-40% boost in real-world performance before they could be considered competitive. AMD also needs better relations with VMware to optimize memory management on that platform.

    The price difference won't help them, as the cost for a data center host is mostly software; it can be $15K for vCloud versus $10K for hardware. That reduces the cost advantage to 5% but delivers worse performance and uses more power.

    Most data centers are looking to get the most from their VMware investments while reducing power consumption, and these AMDs do neither.
  • duploxxx - Wednesday, February 20, 2013 - link

    Interesting information, but hard to judge if you don't add some figures and real data.

    First of all, the 7300 series had huge disadvantages with their FSB, so claiming that these are way faster than the 6100-6200 Opteron series is debatable. I tend to disagree 100%, and we had severe VMware performance issues on those machines with our high-end applications.

    I'll just use AnandTech as a reference:
    http://www.anandtech.com/show/2851/8
    http://www.it.anandtech.com/show/2978/amd-s-12-cor...

    Even the 7400 series is a dog against the Opteron 8000 series, and it is far older and slower than the 6000 series.

    For the E5 you have a point: the E5 series often makes for a more responsive platform. But once you load real-life applications inside the hypervisor and they start hitting the HT cores, we have seen degraded performance in our datacenters. This doesn't really show up in the AnandTech vApus scores, because some software in the benchmark (the web servers) produces code-optimized results for the Intels, hence the higher score.

    The 6200 series did show some responsiveness disadvantages, but a lot comes down to BIOS and power profile configuration in both the server and the hypervisor; you might want to blame the setup rather than the servers. For the 6200 series we actually bought a 10% higher clock speed version to cover that, but we scaled that back again with the 6300 series.
  • silverblue - Wednesday, February 20, 2013 - link

    I'm going to go trawl the internet (note I said trawl, not troll - very important to bear in mind) for articles on FX CPUs resulting in PCs dying... nope, no matches. Funny, huh?

    I've also run a search concerning AMD CPUs producing incorrect results and crashing; any such occurrences would be the result of design bugs which, I must point out, are not limited to AMD. Nehalem had a bug causing spurious interrupts that locked up the hypervisor on Windows Server 2008 R2, for example. Core 2 had a huge list of bugs.
  • Shadowmaster625 - Wednesday, February 20, 2013 - link

    It is hard to disagree with the statement, knowing how overpaid US IT professionals are. But I just want to point out that this mentality is one of the reasons IT is being outsourced at a furious rate. Keep that in mind before you go blaming someone else for US jobs being lost.

    This meager cost savings may not matter here, but what about some company in Asia? They might actually bite on a few hundred dollars of savings, especially if they are ordering in quantities of hundreds. In that case, $300 becomes $30,000, which might be more than they spend on the people who deploy those servers.
  • ExarKun333 - Wednesday, February 20, 2013 - link

    Outsourced work isn't much cheaper these days, and the workers are of much lower quality on the whole.
  • sherlockwing - Wednesday, February 20, 2013 - link

    Except in Asia (especially the developing countries), the cost of electricity is a lot higher due to rapidly expanding industry and population and a lack of power plants.
