Benchmark Configuration

As far as reliability is concerned, while we have little reason to doubt that the quad Xeon OEM systems on the market are the pinnacle of reliability, our initial experience with the Xeon E7 v3 has not been as rosy. Our updated and upgraded quad Xeon "Brickland" system only became stable after many firmware updates, with its issues sorted out just a few hours before the launch of the Xeon E7 v3. Unfortunately, this means our time testing a stable Xeon E7 v3 was more limited than we would have liked.

Meanwhile, to make the comparison more interesting, we decided to include both the quad Xeon "Westmere-EX" and the "Nehalem-EX". Remember that these heavy duty, high-RAS servers remain in service far longer in the data center than their dual socket counterparts; lifespans of five years or more are no exception. Of course, the comparison would not be complete without the latest dual Xeon E5-2699 v3 server.

All testing was done on 64-bit Ubuntu Linux 14.04 (kernel 3.13.0-51, gcc version 4.8.2).
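For readers who want to double-check their own environment against ours, a minimal Python sketch (our illustration, not part of the actual test harness) that records the relevant version strings:

```python
# Minimal sketch (illustrative only): log the software environment a
# benchmark ran under, for reproducibility.
import platform
import subprocess

def environment_report():
    """Collect kernel and compiler versions, e.g. for a benchmark log."""
    return {
        "kernel": platform.release(),    # e.g. "3.13.0-51-generic"
        "machine": platform.machine(),   # e.g. "x86_64"
        "gcc": subprocess.check_output(
            ["gcc", "--version"]).splitlines()[0].decode(),
    }

if __name__ == "__main__":
    for key, value in sorted(environment_report().items()):
        print("%s: %s" % (key, value))
```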

Intel S4TR1SY3Q "Brickland" IVT-EX 4U server

The latest and greatest from Intel consists of the following components:

CPU: 4x Xeon E7-8890 v3 (2.5GHz, 18 cores, 45MB L3, 165W TDP)
  or 4x Xeon E7-4890 v2 (D1 stepping, 2.8GHz, 15 cores, 37.5MB L3, 155W TDP)
RAM: 256GB (32x 8GB) Micron DDR4-2133, running at 1600MHz
  or 256GB (32x 8GB) Samsung DDR3 M393B1K70DH0-YK0, running at 1333MHz
Motherboard: Intel CRB Baseboard "Thunder Ridge"
Chipset: Intel C602J
PSU: 2x 1200W (2+0)

The server has a total of 96 DIMM slots. When using 64GB LRDIMMs, it can offer up to 6TB of RAM.
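The arithmetic behind those figures is simple enough to sanity check; a quick sketch using the numbers above:

```python
# Sanity check of the memory figures quoted above.
def total_memory_gb(dimm_slots, dimm_size_gb):
    # Capacity is simply slots x DIMM size.
    return dimm_slots * dimm_size_gb

print(total_memory_gb(32, 8))            # as tested: 256 (GB)
print(total_memory_gb(96, 64) / 1024.0)  # 96x 64GB LRDIMMs: 6.0 (TB)
```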

If only two cores are active, the E7-8890 v3 can boost its clock speed to 3.3GHz (3.2GHz for AVX code); the E7-4890 v2 reaches 3.4GHz in the same situation. Even with all cores active, the E7-8890 v3 can sustain 2.9GHz (2.6GHz for AVX code).
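Those clocks can be read as a table mapping active core count to a turbo bin. The sketch below encodes only the two E7-8890 v3 endpoints quoted above; Intel defines bins for every intermediate core count as well, which we do not list here:

```python
# Illustrative only: the E7-8890 v3 turbo figures quoted above, as a
# lookup from active core count to maximum clock. Only the two-core and
# all-core bins come from the text; intermediate bins exist but are omitted.
E7_8890_V3_TURBO_GHZ = {
    2:  {"non-AVX": 3.3, "AVX": 3.2},   # two active cores
    18: {"non-AVX": 2.9, "AVX": 2.6},   # all 18 cores active
}

def max_clock_ghz(active_cores, workload="non-AVX"):
    """Return the highest documented turbo clock covering this core count."""
    for cores in sorted(E7_8890_V3_TURBO_GHZ):
        if active_cores <= cores:
            return E7_8890_V3_TURBO_GHZ[cores][workload]
    raise ValueError("core count exceeds the 18 cores of the chip")

print(max_clock_ghz(2))          # 3.3
print(max_clock_ghz(18, "AVX"))  # 2.6
```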

Intel/Quanta QSSC-S4R Benchmark Configuration

The previous quad Xeon E7 server, as reviewed here.

CPU: 4x Xeon X7560 at 2.26GHz
  or 4x Xeon E7-4870 at 2.4GHz
RAM: 128GB (16x 8GB) Samsung DDR3 M393B1K70DH0-YK0 at 1066MHz
Motherboard: QCI QSSC-S4R 31S4RMB00B0
Chipset: Intel 7500
BIOS version: QSSC-S4R.QCI.01.00.S012,031420111618
PSU: 4x 850W Delta DPS-850FB A S3F E62433-004

The server can accept up to 64 32GB Load Reduced DIMMs (LRDIMMs), for a total of 2TB of RAM.

Intel's Xeon E5 Server – "Wildcat Pass" (2U Chassis)

Finally, we have our Xeon E5 v3 server:

CPU: 2x Intel Xeon E5-2699 v3 (2.3GHz, 18 cores, 45MB L3, 145W TDP)
RAM: 128GB (8x 16GB) Samsung M393A2G40DB0 RDIMMs
Internal Disks: 2x Intel SSD 710 200GB (MLC)
Motherboard: Intel Server Board "Wildcat Pass"
Chipset: Intel Wellsburg B0
BIOS version: August 9th, 2014
PSU: Delta Electronics DPS-750XB A, 750W (80 Plus Platinum)

Every server was outfitted with two 200GB Intel S3700 SSDs.

Comments

  • DanNeely - Saturday, May 9, 2015 - link

    The workloads that you'd be buying racks of servers for are better handled with individually less expensive systems. These 4/8-way leviathans are for the one or two core business functions that only scale up, not out; so the typical customer would only be buying a handful of these at most.

    The other half is that even a thousand or two per year in increased operating costs for the server is dwarfed not only by the price of the server, but by the price of the software, which makes the server look cheap. The best server for those applications isn't the server that costs the least to run, nor the one with the cheapest hardware price. It's the one that lets you get away with the cheapest licensing fee for the application you're running.

    One extreme example from the better part of a decade ago: prior to being acquired by Oracle, Sun was making extremely wide processors that were very competitive on a per-socket basis but used a huge number of really slow cores/threads to get their throughput. At that time, Oracle licensed its DB on a per core (per thread?) basis, not per socket. As a result, an $80-100k HP/IBM server was a cheaper way to run a massive Oracle database than a $30k Sun box, even if your workload was such that the cheap Sun hardware performed equally well, because Oracle's licensing ate several times the difference in hardware prices.
  • KateH - Saturday, May 9, 2015 - link

    I think the Intel transition was almost entirely dictated by the lack of mobile options for PowerPC. 125W each for 970MPs sounds like a lot, but keep in mind that the Mac Pro has been using a pair of 100-130W Xeons since the beginning in 2008. Workstations and HPC are much, much less constrained by TDP. The direction that Power and SPARC have been taking for the past decade, cramming loads of SMT-enabled, high-clocked cores into a single chip, somewhat negates the power concerns: if a POWER8 is pulling a couple hundred watts for a 12C/96T chip, that's probably going to be worth it for the users that need that much grunt. Even Intel's E7-8890 v3 is a 165W chip!
  • melgross - Saturday, May 9, 2015 - link

    Actually, the G5 was moving faster than Netburst was. In a bit over a year, it would have caught up, then moved past. Intel's unexpected move to the older "M" series for the Yonah series surprised everyone (particularly AMD), and allowed Apple to make that move. It never would have happened with Netburst.

    Apple switched for two reasons. One was that IBM failed to deliver a mobile G5 chip right at the time when laptop sales were increasing faster than desktop sales, and Apple was forced into using two G4s instead, which wasn't a good alternative. IBM delivered the chip after Apple switched over, but it was too late.

    The second reason was that Apple wanted better Windows compatibility, which could only occur using x86 chips.
  • Kevin G - Saturday, May 9, 2015 - link

    IBM did fail to make a G5 chip for laptops which significantly hurt Apple. Though Apple did have a plan B: PowerPC chips from PA-Semi. Also Apple never shipped a laptop with two G4 chips.

    And Apple didn't care about Windows software compatibility. Apple did care about hardware support, as many chips couldn't be used in big endian mode, or made writing firmware for them complicated.

    And the real second reason why Apple ditched PowerPC was chipsets. The PCIe-based G5s actually had a chipset that was more expensive than the CPUs they used. It was composed of a DDR2/HyperTransport north bridge, two memory buffers, a HyperTransport-to-PCIe bridge chip from Broadcom/ServerWorks, a south bridge chip to handle SATA/USB IO, a FireWire 800 chip, and a pair of Broadcom ethernet chips. The dual core 2.5GHz PowerPC 970MP at the time was going for between $200 and $250 apiece. Not only was the hardware complex for the motherboards, but so was the software side. PowerPC 970s cannot boot themselves, as they need a service processor to initialize the FSB. The PowerPC 970 chipsets Apple used have an embedded PowerPC 400 series core in them that initializes and calibrates the PowerPC's high speed FSB before handing off the rest of the boot process.
  • SnowCat00 - Friday, May 8, 2015 - link

    I would question how accurate that chart is...
    Mainframe sales are up: http://www.businessinsider.com/mainframe-saves-ibm...

    Also, as someone who works with mainframes: if one wanted to, they could consolidate an entire data center into one big z13.
  • ats - Friday, May 8, 2015 - link

    Um, I'm not sure you quite comprehend the scale of some of the datacenters out here. While the z13 is very nice, it's hardly a replacement for 10 racks of 8-socket Xeons.
  • usernametaken76 - Friday, May 8, 2015 - link

    That depends entirely on what those 10 racks' worth of systems are doing, what type of applications they are running, and at what utilization.

    Mainframes are built to run at up to 100% utilization. Real-world x86 systems at or above 80% are either rendering video, doing HPC, or suffering from process control issues.

    Real-world enterprise applications running in a virtualized environment are a more appropriate comparison. Everywhere I look it's VMware at the moment.

    Compare a PowerVM DLPAR to a VMware VM running Linux x64 for a fairer, real-world comparison.
  • melgross - Saturday, May 9, 2015 - link

    It isn't the same thing. Mainframes excel in I/O, which often trumps pure processing power. It's a very different environment.
  • ats - Saturday, May 9, 2015 - link

    Um, the days of mainframes having any real advantage in I/O are long gone, fyi.
  • Kevin G - Saturday, May 9, 2015 - link

    Sort of. Mainframes still farm off most I/O commands to dedicated coprocessors so that they don't eat away CPU cycles from actually running applications.

    Mainframes also have dedicated hardware for encryption and compression. This is becoming more common in the x86 world on a per-drive basis, but the mainframe implements it at the system level so that any drive's data can be encrypted and compressed.

    It is also because of these coprocessors that IBM's mainframe virtualization is so robust: even the hypervisor itself can be virtualized on top of another hypervisor without any slowdown in I/O or reduction in functionality.
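The offload idea in the comment above is easy to illustrate in miniature: when blocking I/O is handed to a separate engine, the compute path never stalls. A toy Python sketch of the concept only; real mainframe channel I/O works nothing like this, and the workload below is invented:

```python
# Toy model of I/O offload: a pool of worker threads stands in for
# dedicated I/O coprocessors, so the "CPU" keeps computing while
# writes are in flight. Conceptual illustration only.
from concurrent.futures import ThreadPoolExecutor
import time

io_channel = ThreadPoolExecutor(max_workers=4)  # stand-in for I/O engines

def slow_write(record):
    time.sleep(0.1)       # pretend this is a disk write
    return len(record)

def main():
    # Queue eight "writes" on the I/O channel...
    pending = [io_channel.submit(slow_write, b"x" * 4096) for _ in range(8)]
    # ...and keep doing useful computation while they complete.
    compute_result = sum(i * i for i in range(10 ** 6))
    bytes_written = sum(f.result() for f in pending)
    print("compute:", compute_result, "| bytes written:", bytes_written)

if __name__ == "__main__":
    main()
```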
