Benchmark Configuration

Unfortunately, the Intel R2208GZ4GSSPP is a 2U server, which makes it hard to compare it with the 1U Opteron "Interlagos" and 1U "Westmere EP" servers we have tested in the past. We will be showing a few power consumption numbers, but since a direct comparison isn't possible, please take them with a grain of salt.

Intel's Xeon E5 server R2208GZ4GSSPP (2U Chassis)

CPU: Two Intel Xeon E5-2660 (2.2GHz, 8 cores, 20MB L3, 95W)
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: Intel Server Board S2600GZ "Grizzly Pass"
Chipset: Intel C600
BIOS version: SE5C600.86B (01/06/2012)
PSU: Intel 750W DPS-750XB A (80+ Platinum)

The Xeon E5 CPUs have four memory channels per CPU and support DDR3-1600, and thus our dual CPU configuration gets eight DIMMs for maximum bandwidth. The typical BIOS settings can be found below.

Not visible in the above image is that all prefetchers are enabled in all of the tests.

Supermicro A+ Opteron server 1022G-URG (1U Chassis)

CPU: Two AMD Opteron "Abu Dhabi" 6380 at 2.5GHz
     Two AMD Opteron "Abu Dhabi" 6376 at 2.3GHz
     Two AMD Opteron "Bulldozer" 6276 at 2.3GHz
     Two AMD Opteron "Magny-Cours" 6174 at 2.2GHz
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: SuperMicro H8DGU-F
Internal Disks: 2x Intel SSD 710 200GB (MLC)
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2012)
PSU: SuperMicro PWS-704P-1R 750W

The same is true for the latest AMD Opterons: eight DDR3-1600 DIMMs for maximum bandwidth. You can check out the BIOS settings of our Opteron server below.

C6 is enabled and TurboCore (CPB mode) is on.

ASUS RS700-E6/RS4 1U Server

CPU: Two Intel Xeon X5670 at 2.93GHz (six cores)
     Two Intel Xeon X5650 at 2.66GHz (six cores)
RAM: 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard: ASUS Z8PS-D12-1U
Chipset: Intel 5520
BIOS version: 1102 (08/25/2011)
PSU: Delta Electronics DPS-770AB 770W

To speed up benchmarking, we tested the Intel Xeon and AMD Opteron systems in parallel. As we didn't have more than eight 8GB DIMMs, we used our 4GB DDR3-1333 DIMMs for the Xeon X5600 system. That system only gets 48GB, but this isn't a disadvantage, as our benchmark with the highest memory footprint (vApus FOS, 5 tiles) uses no more than 40GB of RAM. There is no real alternative: this Xeon has three memory channels and cannot be outfitted with the same amount of RAM as our Opteron 6300 and Xeon E5 systems (four channels each).

Common Storage System

For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 card (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15000 RPM SAS disks (RAID-0) inside a Promise JBOD J300.

Software Configuration

All vApus testing is done on VMware vSphere 5.1 (ESXi 5.1). All VMDKs use thick provisioning and are set to independent and persistent. The power policy is "Balanced Power" unless otherwise indicated. All other testing is done on Windows 2008 R2 SP1 Enterprise. Unless noted otherwise, we use the "High Performance" power setting on Windows 2008 R2 SP1.

Other Notes

Both servers are fed by a standard European 230V (16 Amps max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs. We use the Racktivity ES1008 Energy Switch PDU to measure power consumption. Using a PDU for accurate power measurements might seem odd, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU instead measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
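To illustrate why true RMS matters, here is a small sketch (our own illustration, not Racktivity's actual firmware) that computes RMS directly from voltage samples. For a distorted (clipped) wave, the usual peak-divided-by-√2 sine assumption would underestimate the real value:

```python
import math

def true_rms(samples):
    """True RMS: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One 50 Hz cycle of a 230V (325V peak) sine, sampled 400 times.
# (The Racktivity samples far faster than this.)
N = 400
sine = [325.0 * math.sin(2 * math.pi * n / N) for n in range(N)]
print(round(true_rms(sine), 1))  # 229.8, i.e. 325 / sqrt(2)

# A clipped, non-sinusoidal wave: assuming a perfect sine from its
# 300V peak would give 300/sqrt(2) ~ 212V, but the true RMS is higher.
clipped = [max(-300.0, min(300.0, s)) for s in sine]
print(round(true_rms(clipped), 1))
```

The point is simply that RMS must be computed sample by sample; inferring it from the peak only works when the waveform really is a sine.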

Comments

  • coder543 - Wednesday, February 20, 2013 - link

    You realize that we have no trouble recognizing that you've posted about fifty comments that are essentially incompetent racism against AMD, right?

    AMD's processors aren't perfect, but neither are Intel's. Also, AMD, much to your dismay, never announced they were planning to get out of the x86 server market. They'll be joining the ARM server market, but not exclusively. I'm honestly just ready for x86 as a whole to be gone, completely and utterly. It's a horrible CPU architecture, but so much money has been poured into it that it has good performance for now.
  • Duwelon - Thursday, February 21, 2013 - link

    x86 is fine, just fine.
  • coder543 - Wednesday, February 20, 2013 - link

    totes, ain't nobody got time for AMD. they is teh failzor.

    (yeah, that's what I heard when I read your highly misinformed argument.)
  • quiksilvr - Wednesday, February 20, 2013 - link

    Obvious trolling aside, looking at the numbers it's pretty grim. Keep in mind that these are SERVER CPUs. Not only is Intel doing the job faster, it's using less energy, and paying a mere $100-$300 more per CPU to cut an average of 20 watts is a no-brainer. These are expected to run 24 hours a day, 7 days a week, with no stopping. That power adds up, and if AMD has any chance of making a dent in high-end enterprise datacenters, they need to push even harder.
  • Beenthere - Wednesday, February 20, 2013 - link

    You must be kidding. TCO is what enterprise looks at and $100-$300 more per CPU in addition to the increased cost of Intel based hardware is precisely why AMD is recovering server market share.

    If you do the math you'll find that most servers get upgraded long before the difference in power consumption between an Intel and AMD CPU would pay for itself. The total wattage per CPU is not the actual wattage used under normal operations and AMD has as good or better power saving options in their FX based CPUs as Intel has in IB. The bottom line is those who write the checks are buying AMD again and that's what really counts, in spite of the trolling.

    Rory Read has actually done a decent job so far even though it's not over and it has been painful, especially to see some talent and loyal AMD engineers and execs part ways with the company. This happens in most large company reorganizations and it's unfortunate but unavoidable. Those remaining at AMD seem up for the challenge and some of the fruits of their labor are starting to show with the Jaguar cores. When the Steamroller cores debut later this year, AMD will take another step forward in servers and desktops.
  • Cotita - Wednesday, February 20, 2013 - link

    Most servers have a long life. You'll probably upgrade memory and storage, but the CPU is rarely upgraded.
  • Guspaz - Wednesday, February 20, 2013 - link

    Let's assume $0.10 per kilowatt-hour. A $100 price difference corresponds to 1,000 kWh, which a 20W saving takes 50,000 hours to accumulate. So the price difference would pay for itself (at $100) in about six years of continuous operation.

    So yes, the power savings aren't really enough to justify the cost increase. The higher IPC on the Intel chips, however, might.
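Guspaz's payback arithmetic can be sketched in a few lines (a rough model using the comment's figures; it ignores cooling overhead and electricity price variation):

```python
def payback_hours(price_delta_usd, extra_watts, usd_per_kwh):
    """Hours of continuous operation before the energy saving
    covers the CPU price difference."""
    extra_kw = extra_watts / 1000.0
    return price_delta_usd / (extra_kw * usd_per_kwh)

# $100 price delta, 20W extra draw, $0.10/kWh, as in the comment above.
hours = payback_hours(100, 20, 0.10)
print(round(hours))            # about 50,000 hours
print(round(hours / 8760, 1))  # about 5.7 years of 24/7 operation
```

At 16-17 cents/kWh (the Oakland figure cited below) the same formula gives roughly 3.5 years, which is why local electricity prices change the conclusion.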
  • bsd228 - Wednesday, February 20, 2013 - link

    You're only getting part of the equation here. That extra 20W of power consumed mostly turns into heat, which must then be cooled (requiring more power and more AC infrastructure). Each rack can hold over 20 2U servers with two processors each, which means nearly an extra kilowatt per rack, plus the corresponding extra heat.

    Also, power costs can vary considerably. I was at a company paying 16-17cents in Oakland, CA. 11 cents in Sacramento, but only 2 cents in Central Washington (hydropower).
  • JonnyDough - Wednesday, February 20, 2013 - link

    +as many as I could give. Best post!
  • Tams80 - Wednesday, February 20, 2013 - link

    I wouldn't even ask the NYSE for the time of day.
