Bandwidth Monster

Previous versions of Intel's flagship Xeon always came with very conservative memory configurations, as RAM capacity and reliability were the priorities. Typically, these systems came with memory extension buffers for increased capacity, but those buffers also increase memory latency. As a result, these quad- and octal-socket monsters had a hard time competing with the best dual-Xeon setups in memory-intensive applications.

The new Xeon E7 v2 still has plenty of memory buffers (code-named "Jordan Creek"), and it now supports three instead of two DIMMs per channel. The memory riser cards with two buffers now support 12 DIMMs instead of the eight of the previous Xeon (Westmere-EX). Using relatively affordable 32GB DIMMs, this allows you to load a system with up to 3TB of RAM. If you break the bank and use 64GB LRDIMMs, 6TB of RAM is possible.
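
To put the capacity claim in perspective, here is a minimal sketch of the DIMM-slot arithmetic. It assumes a quad-socket system with four memory buffers per socket, two DDR3 channels per buffer, and three DIMMs per channel; the per-socket buffer and channel counts are our assumptions, consistent with the SMI layout described below.

    # Assumed quad Xeon E7 v2 memory topology (counts are assumptions, not spec):
    # 4 sockets x 4 buffers x 2 DDR3 channels x 3 DIMMs per channel = 96 DIMM slots
    SOCKETS = 4
    BUFFERS_PER_SOCKET = 4
    CHANNELS_PER_BUFFER = 2
    DIMMS_PER_CHANNEL = 3

    dimm_slots = SOCKETS * BUFFERS_PER_SOCKET * CHANNELS_PER_BUFFER * DIMMS_PER_CHANNEL

    for dimm_gb in (32, 64):          # 32GB DIMMs vs 64GB LRDIMMs
        total_tb = dimm_slots * dimm_gb / 1024
        print(f"{dimm_gb}GB DIMMs in {dimm_slots} slots -> {total_tb:.0f}TB")
    # 32GB DIMMs in 96 slots -> 3TB
    # 64GB DIMMs in 96 slots -> 6TB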

With the previous platform, having eight memory channels only increased capacity, not bandwidth, as the channels ran in lockstep. Each channel delivers half a cache line, and the Jordan Creek buffer combines those halves and sends the result off to the requesting memory controller. In lockstep mode the high-speed serial interface or scalable memory interconnect (SMI) channels must run at the same speed as the DDR3 channels; with Westmere-EX, this resulted in an SMI running at a maximum of 1066 MT/s. With the Xeon E7 v2, we get four SMI interconnects per socket running at speeds up to 1600 MT/s. In lockstep, the system can survive a dual-device error, so RAS (Reliability, Availability, Serviceability) is best in lockstep mode.

With the Ivy Bridge-EX version of the Xeon E7, the channels can also run independently. This mode is called performance mode, and each channel can deliver a full cache line. To cope with twice the amount of bandwidth, the SMI interconnect must run twice as fast as the memory channels: the SMI channel runs at 2667 MT/s while the two DDR3 channels behind each buffer work at 1333 MT/s. That means in theory the E7 v2 chip could deliver as much as 85GB/s (1333 MT/s * 8 channels * 8 bytes per channel) of bandwidth, roughly 2.5x more than what the previous platform delivered. The disadvantage is that only a single device error can be corrected: more speed, less RAS.
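
As a sanity check on those figures, here is a small sketch of the per-socket peak-bandwidth arithmetic for both modes. The transfer rates come from the text; the assumption that lockstep leaves only four "effective" channels (because paired channels each deliver half of the same cache line) is ours, and it reproduces the 2.5x figure.

    # Theoretical peak bandwidth per socket; transfer rates taken from the text.
    BYTES_PER_TRANSFER = 8            # 64-bit DDR3 channel
    CHANNELS = 8                      # DDR3 channels per socket

    # Lockstep (Westmere-EX): channel pairs deliver one cache line together,
    # so we assume only 4 effective channels, each at 1066 MT/s.
    lockstep_gbs = 1066e6 * BYTES_PER_TRANSFER * (CHANNELS // 2) / 1e9

    # Performance mode (E7 v2): all 8 channels run independently at 1333 MT/s.
    performance_gbs = 1333e6 * BYTES_PER_TRANSFER * CHANNELS / 1e9

    print(f"lockstep:    {lockstep_gbs:.0f} GB/s")    # ~34 GB/s
    print(f"performance: {performance_gbs:.0f} GB/s, "
          f"{performance_gbs / lockstep_gbs:.1f}x")   # ~85 GB/s, 2.5x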

According to Intel, both latency and bandwidth are improved tremendously compared to the Westmere-EX platform. As a result, the new quad Xeon E7 v2 platform should perform a lot better in memory-intensive HPC applications.

Comments

  • JohanAnandtech - Friday, February 21, 2014

    I don't see the error. "Beckton" (Nehalem-EX, X7560) is at 2.4 GHz
  • mslasm - Sunday, February 23, 2014

    > I don't see the error.

    The article says "The Opteron core is also better than most people think: at 2.4GHz it would deliver about 2481 MIPs." - but, according to the graph, the Opteron already delivers 2723 MIPS @ 2.3GHz. So it is puzzling to see that it "would" deliver fewer MIPS (2481 vs 2723) at a higher frequency (2.4 vs 2.3GHz), regardless of any Intel results/frequencies.
  • silverblue - Saturday, February 22, 2014

    It's entirely possible that the score is down to the 6376's 3.2GHz turbo mode.
  • plext0r - Friday, February 21, 2014

    Would be nice to run benchmarks against a Quad E5-4650 system for comparison.
  • blaktron - Friday, February 21, 2014

    ... you know you can't, right?
  • blaktron - Friday, February 21, 2014

    Nevermind, read v2 there where you didn't write it. Too much coffee....
  • usernametaken76 - Friday, February 21, 2014

    For the more typo-sensitive reader (perhaps both technically astute and typo-sensitive):

    "A question like "Does the SPARC T5 also support both single-threaded and multi-threaded applications?" must sound particularly hilarious to the our technically astute readers."

    ...to the our...
  • JohanAnandtech - Friday, February 21, 2014

    Fixed. Thx!
  • TiGr1982 - Friday, February 21, 2014

    From the conclusion:
    "The Xeon E7 v2 chips are slated to remain in data centers for the next several years as the most robust—and most expensive—offerings from Intel."

    I don't think it will really be "several" years - maybe 1-2 years from now this Ivy Bridge-EX-based E7 v2 will be superseded by a Haswell-EX-based E7 v3 with Haswell cores featuring AVX2/FMA, which should make a difference in professional floating point calculations and data processing, along with DDR4 support.
  • Kevin G - Friday, February 21, 2014

    The Ivy Bridge-EX -> Haswell-EX transition will mimic the Nehalem-EX -> Westmere-EX transition in that the core systems provided by the big OEMs will stay the same. The OEMs will offer Haswell-EX as a drop-in replacement in their existing socket 2011v1 systems. Haswell-EX -> Broadwell-EX will again use the same socket and follow a similarly quick transition. SkyLake-EX will bring a new socket design (perhaps with some optical interconnects?).

    At some point Intel will offer new memory buffer chips to support DDR4. This will likely require swapping out all the memory daughter cards, but the motherboards from the big OEMs shouldn't change. There may also be a period where these large systems can be initially configured with either DDR3 or DDR4 based on customer requests.
