Benchmark Methods and Systems

Our methods and configurations were identical to those in our previous review. The only new system is the Dell R810:

Dell R810 Configuration:
Dual Intel Xeon X7560 2.26GHz
Dell 05W7DG motherboard with Intel ICH10R southbridge (BIOS version 0.3.2)
128GB (32x4GB) Hynix HMT151R7BFR8C DDR3-1066
NIC: quad-port Broadcom BCM5709C NetXtreme II Gigabit Ethernet

Xeon Server 1: ASUS RS700-E6/RS4 barebone
Dual Intel Xeon "Gainestown" X5570 2.93GHz or Dual Intel Xeon "Westmere" X5670 2.93GHz
ASUS Z8PS-D12-1U
6x4GB (24GB) ECC Registered DDR3-1333
NIC: Intel 82574L PCI-E Gigabit LAN
PSU: Delta Electronics DPS-770 AB 770W

Opteron Server 1 (Dual CPU): AMD Magny-Cours Reference system (desktop case)

Dual AMD Opteron 6174 2.2 GHz
AMD "Dinar" motherboard with AMD SR5690 chipset and SB750 southbridge
8x 4 GB (32 GB) ECC Registered DDR3-1333
NIC: Broadcom Corporation NetXtreme II BCM5709 Gigabit
PSU: 1200W

Opteron Server 2 (Dual CPU): Supermicro A+ Server 1021M-UR+V
Dual Opteron 2435 "Istanbul" 2.6GHz or
Dual Opteron 2389 "Shanghai" 2.9GHz
Supermicro H8DMU+
32GB (8x4GB) DDR2-800
PSU: 650W Cold Watt HE Power Solutions CWA2-0650-10-SM01-1

vApus/Oracle Calling Circle Client Configuration

First client (Tile one)
Intel Core 2 Quad Q9550 2.83 GHz
Foxconn P35AX-S
4GB (2x2GB) Kingston DDR2-667
NIC: Intel PRO/1000

Second client (Tile two)
Single Xeon X3470 2.93GHz
Intel S3420GPLC motherboard
Intel 3420 chipset
8GB (4x2GB) DDR3-1066

Our own benchmarking is still relatively limited. In less than a year, typical server systems have gone from 12 to 16 threads to 48 and 64 threads, and that sharp increase calls for a more in-depth analysis of how our benchmarks scale. We are currently improving Oracle Calling Circle and vApus Mark I to measure the full potential of these high-thread-count servers, which is why the number of benchmarks performed in our own lab is smaller than usual. This situation should improve soon. A rough sketch of the kind of scaling check this involves follows below.
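As a minimal illustration (not our actual test harness), the following Python sketch measures how throughput of a placeholder CPU-bound task scales as the worker count climbs toward the 48-64 range; the task, task counts, and sizes are all assumptions chosen for illustration:

```python
# Minimal sketch: measure how a CPU-bound workload scales with parallelism.
# Uses multiprocessing (not threading) so the GIL does not serialize the work;
# the burn() workload stands in for one benchmark request, nothing more.
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    """Placeholder CPU-bound task standing in for one benchmark request."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def throughput(workers: int, tasks: int = 256, size: int = 200_000) -> float:
    """Tasks completed per second at a given level of parallelism."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [size] * tasks)
    return tasks / (time.perf_counter() - start)

if __name__ == "__main__":
    base = throughput(1)
    for w in (1, 2, 4, 8, 16, 32, 48, 64):
        t = throughput(w)
        print(f"{w:3d} workers: {t:8.1f} tasks/s ({t / base:4.1f}x over 1 worker)")
```

A benchmark that stops scaling well before the worker count reaches the machine's thread count is measuring software bottlenecks, not the hardware, which is exactly the risk with older 12-16 thread workloads on these new systems.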

Comments

  • dastruch - Monday, April 12, 2010 - link

    Thanks AnandTech! I've been waiting a year for this very moment, and if only those 25nm Lyndonville SSDs were here too.. :)
  • thunng8 - Monday, April 12, 2010 - link

    For reference, IBM just released their octal-chip POWER7 3.8GHz result for the SAP 2-tier benchmark. The result is 202,180 SAPS, approximately 2.32x faster than the octal-chip Nehalem-EX.
  • Jammrock - Monday, April 12, 2010 - link

    The article cover on the front page mentions 1 TB maximum on the R810 and then 512 GB on page one. The R910 is the 1TB version, the R810 is "only" 512GB. You can also do a single processor in the R810. Though why you would drop the cash on an R810 and a single proc I don't know.
  • vol7ron - Tuesday, April 13, 2010 - link

    I wish I could afford something like this!

    I'm also curious how good it would be at gaming :) I know in many cases these server setups under-perform high end gaming machines, but I'd settle :) Still, something like this would be nice for my side business.
  • whatever1951 - Tuesday, April 13, 2010 - link

    None of the Nehalem-EX numbers are accurate, because the Nehalem-EX kernel optimizations aren't in Windows 2008 Enterprise. There are only 3 commercial OSes right now that have Nehalem-EX optimizations: Windows Server 2008 R2 (with SQL Server 2008 R2), RHEL 5.5, and SLES 11, plus the soon-to-be-released CentOS 5.5 based on RHEL 5.5. Windows 2008 R1 has trouble scaling to 64 threads, and SQL Server 2008 R1 absolutely hates Nehalem-EX. You are cutting the Nehalem-EX benchmarks short by 20% or so by using Windows 2008 R1.

    The problem isn't as severe for Magny-Cours, because the OS sees 4 or 8 sockets of 6 cores each via the enumerator and thus treats it with the same optimizations as an 8-socket 8400-series system.

    So, please rerun all the benchmarks.
  • JohanAnandtech - Tuesday, April 13, 2010 - link

    It is a small mistake in our table. We have been using R2 for months now. We do use Windows 2008 R2 Enterprise.
  • whatever1951 - Tuesday, April 13, 2010 - link

    Ok. Change the table to reflect Windows Server 2008 R2 and SQL Server 2008 R2 information please.

    Any explanation for such poor memory bandwidth? Damn, those SMBs must really slow things down or there must be a software error.
  • whatever1951 - Tuesday, April 13, 2010 - link

    It is hard to imagine 4 channels of DDR3-1066 being 1/3 slower than even the Westmere-EPs. Can you remove half of the memory DIMMs to make sure it isn't Dell's flex memory technology that's slowing things down intentionally to push sales toward the R910?
  • whatever1951 - Tuesday, April 13, 2010 - link

    As far as I know, when you only populate two sockets on the R810, Dell's flex memory technology routes the 16 DIMMs that would have been connected to the 2 empty sockets over to the 2 center CPUs; that could induce significant memory bandwidth penalties.
  • whatever1951 - Tuesday, April 13, 2010 - link

    "This should add a little bit of latency, but more importantly it means that in a four-CPU configuration, the R810 uses only one memory controller per CPU. The same is true for the M910, the blade server version. The result is that the quad-CPU configuration has only half the bandwidth of a server like the Dell R910 which gives each CPU two memory controllers."

    Sorry, should have read a little slower. Damn, Dell cut half the memory channels from the R810!!!! That's a retarded design, no wonder the memory bandwidth is so low!!!!!
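For readers who want to sanity-check sustained memory bandwidth figures like those discussed above, here is a minimal STREAM-style copy sketch. It assumes Python with numpy and is only a rough stand-in for the compiled STREAM benchmark, which remains the rigorous tool:

```python
# Minimal STREAM-style "copy" kernel to estimate sustained memory bandwidth.
# Assumptions: Python 3 with numpy installed, and roughly 3.2GB of free RAM;
# the arrays must dwarf the CPU caches or you measure cache, not memory.
import time
import numpy as np

N = 200_000_000                  # two ~1.6GB float64 arrays, far beyond any L3
src = np.random.rand(N)
dst = np.empty_like(src)

best = 0.0
for _ in range(5):               # keep the best of several runs
    start = time.perf_counter()
    np.copyto(dst, src)          # streams one full array read and one write
    elapsed = time.perf_counter() - start
    # STREAM convention: copy moves 16 bytes per element (8 read + 8 written);
    # write-allocate traffic means the true figure is somewhat higher.
    best = max(best, 16 * N / elapsed / 1e9)

print(f"Best copy bandwidth: {best:.1f} GB/s")
```

A two-socket result far below half of a comparable four-channel-per-socket system would be consistent with the halved-memory-controller explanation quoted above.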
