Understanding FB-DIMMs

Since Apple built the Mac Pro out of Intel workstation components, it unfortunately has to use more expensive Intel workstation memory. In other words, cheap unbuffered DDR2 isn't an option; it's time to welcome ECC-enabled Fully Buffered DIMM (FBD) memory to your Mac.

Years ago, Intel saw two problems happening with most mainstream memory technologies: 1) as we pushed for higher speed memory, the number of memory slots per channel went down, and 2) the rest of the world was going serial (USB, SATA and more recently, HyperTransport, PCI Express, etc.) yet we were still using fairly antiquated parallel memory buses.

The number of memory slots per channel isn't really an issue on the desktop; currently, with unbuffered DDR2-800 we're limited to two slots per 64-bit channel, giving us a total of four slots on a motherboard with a dual channel memory controller. With four slots, just about any desktop user's needs can be met with the right DRAM density. It's in the high-end workstation and server space that this limitation becomes an issue, as memory capacity can be far more important, often requiring 8, 16, 32 or more memory sockets on a single motherboard. At the same time, memory bandwidth also matters, as these workstations and servers will most likely be built around multi-socket, multi-core architectures with high memory bandwidth demands, so simply limiting memory frequency in order to support more memory isn't an ideal solution. You could always add more channels; however, parallel interfaces by nature require more signaling pins than faster serial buses, so adding four or eight channels of DDR2 to get around the DIMMs-per-channel limitation isn't exactly easy.

Intel's first solution was to totally revamp PC memory technology: instead of going down the path of DDR and eventually DDR2, Intel wanted to move the market to a serial memory technology, RDRAM. RDRAM offered significantly narrower buses (16 bits per channel vs. 64 bits per channel for DDR), much higher bandwidth per pin (at the time, a 64-bit wide RDRAM memory controller could offer 6.4GB/s of memory bandwidth, compared to the 2.1GB/s of a 64-bit DDR266 interface) and of course the layout benefits that come with a narrow serial bus.
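
Those bandwidth figures are easy to verify yourself: peak theoretical bandwidth is just bus width (in bytes) times the effective transfer rate. A quick sketch, using the standard PC800 RDRAM (800M transfers/s) and DDR266 (266M transfers/s) ratings:

```python
def peak_bandwidth_gbs(bus_width_bits, transfers_per_sec):
    """Peak theoretical bandwidth in GB/s: bus width in bytes x transfer rate."""
    return (bus_width_bits / 8) * transfers_per_sec / 1e9

# One 16-bit PC800 RDRAM channel at 800M transfers/s
rdram_channel = peak_bandwidth_gbs(16, 800e6)   # 1.6 GB/s
# Four channels make up a "64-bit wide" RDRAM controller
rdram_total = 4 * rdram_channel                 # 6.4 GB/s
# One 64-bit DDR266 channel at 266M transfers/s
ddr266 = peak_bandwidth_gbs(64, 266e6)          # ~2.1 GB/s

print(rdram_total, ddr266)
```

Note how much more RDRAM delivers per data pin: 1.6GB/s over 16 pins versus 2.1GB/s over 64.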

Unfortunately, RDRAM offered no tangible performance increase, as the demands of processors at the time were nowhere near what the high bandwidth RDRAM solutions could deliver. To make matters worse, RDRAM implementations were plagued by higher latency than their SDRAM and DDR SDRAM counterparts; with no use for the added bandwidth and with higher latency, RDRAM systems were no faster, and sometimes slower, than their SDR/DDR counterparts. The final nail in the RDRAM coffin on the PC was pricing: at the time you could either spend $1000 on a 128MB stick of RDRAM, or $100 on a stick of equally performing PC133 SDRAM. The market spoke and RDRAM went the way of the dodo.

Intel quietly shied away from attempting to change the natural evolution of memory technologies on the desktop for a while. It eventually transitioned away from RDRAM entirely, even after its price dropped significantly, embracing DDR and more recently DDR2 as the memory standards supported by its chipsets. Over the past couple of years, however, Intel got back into the game of shaping the memory market of the future with the idea of Fully Buffered DIMMs.

The approach is quite simple in theory: what caused RDRAM to fail was the high cost of using a non-mass-produced memory device, so why not develop a serial memory interface that uses mass-produced commodity DRAMs such as DDR and DDR2? In a nutshell, that's what FB-DIMMs are: regular DDR2 chips on a module with a special chip that communicates with the memory controller over a serial bus.

The memory controller in the system no longer has a wide parallel interface to the memory modules; instead it has a narrow 69-pin interface to a device known as an Advanced Memory Buffer (AMB) on the first FB-DIMM in each channel. The memory controller sends all memory requests to that first AMB, and the AMBs take care of the rest. By fully buffering all requests (data, command and address), the memory controller no longer sees a load that increases significantly with each additional DIMM, so the number of memory modules supported per channel goes up significantly. The FB-DIMM spec says that each channel can support up to 8 FB-DIMMs, although current Intel chipsets can only address 4 FB-DIMMs per channel. With a significantly lower pin count, you can cram more channels onto your chipset, which is why the Intel 5000 series of chipsets features four FBD channels.
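
To put the capacity win in concrete terms, here is the slot arithmetic from the figures quoted above as a trivial sketch (spec maximum vs. what current chipsets address, not a claim about any specific motherboard):

```python
# Desktop: dual-channel unbuffered DDR2, two DIMM slots per channel
desktop_slots = 2 * 2       # 4 slots total
# FBD spec maximum: 8 FB-DIMMs per channel on a 4-channel controller
fbd_spec_slots = 4 * 8      # 32 slots
# What current Intel 5000-series chipsets can address: 4 per channel
fbd_5000_slots = 4 * 4      # 16 slots

print(desktop_slots, fbd_5000_slots, fbd_spec_slots)
```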

Bandwidth is a little more difficult to determine with FBD than it is with conventional DDR or DDR2 memory buses. During Steve Jobs' keynote, he put up a slide that listed the Mac Pro as having a 256-bit wide DDR2-667 memory controller with 21.3GB/s of memory bandwidth. Unfortunately, that claim isn't totally honest, as the 256-bit wide interface does not exist between the memory controller and the FB-DIMMs. The memory controller in the Intel 5000X MCH communicates directly with the first AMB it finds on each channel, and that interface is actually only 24 bits wide per channel, for a total bus width of 96 bits (24 bits per channel x 4 channels). The bandwidth part of the equation is a bit more complicated, but we'll get to that in a moment.
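
The keynote's 21.3GB/s is the DRAM-side number: behind the AMBs, each of the four FBD channels is still populated with 64-bit DDR2-667 DIMMs. A quick sanity check of that math (667MT/s is the nominal DDR2-667 transfer rate; this ignores any limits of the serial links themselves):

```python
def ddr2_channel_gbs(transfers_per_sec, width_bits=64):
    """Peak DRAM-side bandwidth of one 64-bit DDR2 channel, in GB/s."""
    return (width_bits / 8) * transfers_per_sec / 1e9

per_channel = ddr2_channel_gbs(667e6)   # ~5.3 GB/s for one DDR2-667 channel
total = 4 * per_channel                 # ~21.3 GB/s across four FBD channels
print(round(total, 1))
```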

Below we've got the anatomy of an AMB chip:

The AMB has two major roles: communicating with the chipset's memory controller (or with other AMBs), and communicating with the memory devices on its own module.

When a memory request is made, the first AMB in the chain figures out whether the request targets its own module or another one. If it's the former, the AMB parallelizes the request and sends it off to the DDR2 chips on its module; if the request isn't for this specific module, it passes the request on to the next AMB and the process repeats.
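
That daisy-chain behavior can be sketched as a toy model. This is purely illustrative (real AMBs resample and pipeline the serial stream in hardware; the class and method names here are made up), but it captures the routing logic just described:

```python
class AMB:
    """Toy model of an Advanced Memory Buffer in an FBD daisy chain."""

    def __init__(self, module_id, next_amb=None):
        self.module_id = module_id
        self.next_amb = next_amb   # next AMB down the chain, if any
        self.dram = {}             # stands in for the module's DDR2 chips

    def handle(self, target_module, op, addr, data=None):
        if target_module == self.module_id:
            # Request is for this module: "parallelize" it out to local DRAM
            if op == "write":
                self.dram[addr] = data
                return None
            return self.dram.get(addr)
        if self.next_amb is None:
            raise ValueError("no module %d on this channel" % target_module)
        # Not ours: forward down the chain and relay the reply back up
        return self.next_amb.handle(target_module, op, addr, data)

# One channel with three FB-DIMMs; the controller talks only to amb0
amb2 = AMB(2)
amb1 = AMB(1, amb2)
amb0 = AMB(0, amb1)
amb0.handle(2, "write", 0x40, 0xCAFE)
print(amb0.handle(2, "read", 0x40))   # served by the last DIMM in the chain
```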


33 Comments


  • saneproductions - Sunday, August 27, 2006 - link

    I just picked up a 2.66 MP 2GB and got some SATA-eSATA PCI plates to route the 2 hidden SATA ports to my eSATA drive and it was a no go. I tried both having the drive powered up then booting (system hung at the gray screen) and powering on the drive after the MP was up and running (nothing happened). any ideas?

    Mike
  • blwest - Monday, August 14, 2006 - link

    I received my Mac Pro last Friday afternoon. It's absolutely wonderful. It's also absolutely silent.

    The 7300 card also isn't that bad either. I could play World of Warcraft at 1600x1200 at reasonably high settings. Expose worked very smoothly, overall the system's performance screams in comparison to Windows XP. Running stock setup like on Anand's review.
  • mycatsnameis - Monday, August 14, 2006 - link

    I see that Crucial is shipping 4 gig FB PC5400 DIMMs. I wonder if these can be used in a Mac Pro? In the past the max memory capacity that Apple has quoted (for pro or consumer machines) has generally been conservative and related more to the size of DIMMs that are generally available than any actual h/w limit.
  • nitromullet - Friday, August 11, 2006 - link

    With boot camp and a Windows XP install, is the Mac Pro Crossfire capable? I don't imagine that OS X has drivers for that, but that wouldn't be the point anyway - use the Windows install for gaming and the OS X install for everything else...
  • dcalfine - Saturday, August 12, 2006 - link

    I imagine that getting crossfire to work is a matter of simple firmware flashing. With SLI, the motherboard supports it, but the Mac OS doesn't. But because crossfire depends mostly on the crossfire card, flashing the card with Mac firmware, which often works with other cards (see Strange Dog Forums, http://strangedogs.proboards40.com/index.cgi?board...), should allow it to work. I'd be interested in trying this, if I had the funding.

    Apple should be doing something to get dual- or even quad-gpu solutions on macs, since now each mac pro is a quad-processor.
  • tshen83 - Friday, August 11, 2006 - link

    Hey anandtech, the more interesting option for GPU is actually the QUAD 7300GT powering over 8 screens. I was wondering if Apple's OSX is able to push 3D or overlay stuff on all 8 screens like Linux could.
  • michael2k - Friday, August 11, 2006 - link

    As far as I know, Apple's been able to do this for far longer than Linux could :)
  • OddTSi - Thursday, August 10, 2006 - link

    Are there any plans for non-ad hoc, fast serial RAM or is Rambus the only one even attempting something like that with their new XDR memory?
  • kobymu - Friday, August 11, 2006 - link

    There is QDR....
