Dell R810 and Intel Nehalem EX Platform

Dell no longer seems to focus on cost alone, but also on offering innovative features. In the new Dell servers we find two very interesting features that set them apart from the pack. The first is a new redundant SD card module for embedded hypervisors. This module is similar to previous embedded hypervisor SD card solutions, but adds mirroring to the feature set. You cannot install any form of Windows on it, as Windows refuses to be installed on a "removable USB device". 1GB might be enough for some Linux installations, but enterprise versions usually require more than that, so the module really only fits an ESXi hypervisor.

The server has no fewer than 32 DIMM slots, all of which are available even if you install only two CPUs. This second innovation is called "FlexMem Bridge" technology. In the picture below you can see that only two aluminum heatsinks with copper heatpipes are installed; the other two are rather simple black heatsinks.

When we remove those black heatsinks, we find the pass-through chip.

And below is the pass-through chip seen from the underside.

Having as many DIMM slots available with two CPUs as with four is pretty cool, though there are some limitations you should be aware of. The FlexMem Bridge is in fact a pass-through for the second memory controller, as you can see below.

This should add a little latency, but more importantly it means that in a four-CPU configuration the R810 uses only one memory controller per CPU. The same is true for the M910, the blade server version. The result is that the quad-CPU configuration has only half the memory bandwidth of a server like the Dell R910, which gives each CPU two memory controllers.
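To put rough numbers on that halving, here is a back-of-the-envelope sketch. It assumes the topology described above and in the comments below (each memory controller feeding two SMBs, each SMB driving two DDR3-1066 channels), and it ignores the overhead of the SMB's parallel-to-serial transition, so these are theoretical peaks rather than measured figures:

\[
\begin{aligned}
\text{one DDR3-1066 channel:} \quad & 1066\,\mathrm{MT/s} \times 8\,\mathrm{B} \approx 8.5\,\mathrm{GB/s} \\
\text{two IMCs per CPU (dual config):} \quad & 8 \times 8.5 \approx 68\,\mathrm{GB/s} \\
\text{one IMC per CPU (quad config):} \quad & 4 \times 8.5 \approx 34\,\mathrm{GB/s}
\end{aligned}
\]

Real-world throughput will land well below these peaks, but the two-to-one ratio between the configurations is the point.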

The Dell R810 is simply not meant to be the highest-performing Xeon 7500 server out there. The reality is that a significant number of buyers will shrug their shoulders when reading that 32 Nehalem cores are not fed at full memory bandwidth. Those buyers view the two extra CPUs as an unnecessary cost on the way to their real goal: getting a server with a copious amount of memory. If you are consolidating lightly loaded but critical web servers, firewalls, software routers, LDAP, DNS, and other infrastructure applications, chances are you will never even need 16 cores to power them.

With the R810, Dell created the "entry-level" server for the Xeon 7500 market. It offers the reliability features of Intel's newest Xeon, 32 DIMM slots, and excellent expansion options with six PCIe slots.

So the natural processor configuration for the Dell R810 is the dual Xeon 6500 series. When we specced a system with two 2GHz E6540s, 128GB of RAM (32x4GB), and redundant PSUs, we ended up with a price tag of $14,400. For reference, a similar R710 with two Xeon E5540s and 128GB arrived at $11,400; that system has to use sixteen 8GB DIMMs, which raises the price quite a bit. Still, the $3,000 difference is acceptable, as the R810 delivers more expansion possibilities and is in a different class when it comes to reliability. For RISC buyers, a fully equipped system with these reliability features in the $14,000-$20,000 range must sound cheap.

Quad Opteron 6100 systems will offer up to 48 DIMM slots. At the time of writing, we could not find server systems based on quad Opterons. It is clear that these systems will be cheaper, but an in-depth analysis of how reliability features influence these "massive memory" systems is necessary before drawing any reality-based conclusions. For now, we can state that the Dell R810 is making the Xeon 7500 market more accessible.

Comments

  • JohanAnandtech - Tuesday, April 13, 2010 - link

    "Damn, Dell cut half the memory channels from the R810!"

    You read too fast again :-). Only in the quad-CPU config. In the dual-CPU config you get four memory controllers, each of which connects to two SMBs. So in a dual config, you get the same bandwidth as you would in any other server.

    The R810 targets those who are not after the highest CPU processing power, but want the RAS features and 32 DIMM slots. AFAIK,
  • whatever1951 - Tuesday, April 13, 2010 - link

    2 channels of DDR3-1066 per socket in a fully populated R810, and if you populate 2 sockets you get the FlexMem routing penalty... damn! The R810 sucks.
  • Sindarin - Tuesday, April 13, 2010 - link

    whatever1951, you lost me @ Hello... and I thought Sauron was tough!! lol
  • JohanAnandtech - Tuesday, April 13, 2010 - link

    "It is hard to imagine 4 channels of DDR3-1066 to be 1/3 slower than even the westmere-eps."

    On one side you have a parallel, half-duplex DDR3 DIMM. On the other side of the SMB you have a serial, full-duplex SMI link. The buffers might not perform this transition fast enough, and there has to be some overhead. I am also still searching for the clock speed of the IMC. The SMIs are on a different (I/O) clock domain than the L3 cache.

    We will test with Intel's/QSSC quad-CPU system to see whether the FlexMem bridge has any influence, but I don't think it will do much. It might add a bit of latency, but essentially the R810 works like a dual-CPU system with four IMCs, just like any other (dual-CPU) Nehalem EX server would.
  • whatever1951 - Tuesday, April 13, 2010 - link

    Thanks for the useful info. The R810 then doesn't meet my standards.

    Johan, is there any way you can get your hands on an R910 four-processor system from Dell and bench the memory bandwidth to see how much that FlexMem chip costs in terms of bandwidth?
  • IntelUser2000 - Tuesday, April 13, 2010 - link

    The Uncore of the X7560 runs at 2.4GHz.
  • JohanAnandtech - Wednesday, April 14, 2010 - link

    Do you have a source for that? Must have missed it.
  • Etern205 - Thursday, April 15, 2010 - link

    I think AT needs to fix this "RE:RE:RE...:" problem.
  • amalinov - Wednesday, April 14, 2010 - link

    Great article! I like the way in which you describe the memory subsystem - I have read the Intel datasheets and many news articles about the Xeon 7500, but your description is the best so far.

    You say "So each CPU has two memory interfaces that connect to two SMBs that can each drive two channels with two DIMMS. Thus, each CPU supports eight registered DDR3 DIMMs ...", but if I do the math it seems: 2 SMIs x 2 SMBs x 2 channels x 2 DIMMs = 16 DDR3 DIMMs, not 8 as written in the second sentence. Later in the article I think you mention 16 at different places, so it seems it is realy 16 and not 8.

    What about an Itanium 9300 review (including general background on the OEMs'/Intel's plans for the IA-64 platform)? A comparison of the scalability (HT/QPI), memory, and RAS features of the Xeon 7500, Itanium 9300, and Opteron 6000 would be welcome. I would also like to see a performance comparison, with applications appropriate for the RISC mainframe market (HPC?), across 4- and 8-socket AMD Opteron, Intel Xeon, Intel Itanium, POWER7, and the newest SPARC systems.
  • jeha - Thursday, April 15, 2010 - link

    You really should review the IBM x3850 X5, I think.

    They have some interesting solutions when it comes to handling memory expansion, etc.
