Is RDRAM the Solution?

Earlier we illustrated a situation in which we would need approximately 3.7GB/s of available memory bandwidth for the memory bus not to be a limiting factor in the performance of a system.  However, we didn't say that RDRAM as it currently exists solves that problem, because honestly, it doesn't.  As we said at the beginning of this article, keep an open mind; this isn't designed to bash one company or another, just an attempt to clear up some misconceptions.

The most popular form of RDRAM that we are familiar with now is what is known as PC800 RDRAM.  This name is unfortunately a bit misleading: the maximum operating frequency of current RDRAM is actually 400MHz, but, as we mentioned earlier, it operates in a double pumped fashion, meaning that twice as much data is transferred every clock cycle (à la DDR), which is where the PC800 name comes from (400MHz x 2 = 800MHz).  And since the Rambus channel is 2 bytes wide, we get an effective 1.6GB/s transfer rate for a single RDRAM channel.
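As a quick sanity check on those numbers, here is a minimal sketch in Python (our own illustration, not anything from Rambus) of where the PC800 name and the 1.6GB/s figure come from:

```python
# Peak bandwidth of a single PC800 RDRAM channel (illustrative arithmetic only)
clock_mhz = 400            # actual clock of current RDRAM
transfers_per_clock = 2    # double pumped: data moves on both clock edges
channel_width_bytes = 2    # a Rambus channel is 2 bytes (16 bits) wide

effective_mhz = clock_mhz * transfers_per_clock              # 800 -> "PC800"
bandwidth_gbs = effective_mhz * channel_width_bytes / 1000   # MB/s -> GB/s

print(f"Effective data rate: {effective_mhz}MHz")    # 800MHz
print(f"Peak bandwidth:      {bandwidth_gbs}GB/s")   # 1.6GB/s
```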

Unfortunately, this is only 43% of the 3.7GB/s we calculated earlier, so RDRAM isn't the solution, right?  Not exactly.

Single channel RDRAM (1.6GB/s) offers less bandwidth than DDR SDRAM running at 133MHz DDR (2.1GB/s), and neither of those solutions offers the 3.7GB/s of memory bandwidth we decided was necessary to run the next-generation PC platforms. 
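To put those numbers side by side, here is a small back-of-the-envelope sketch (our own illustration, using the figures above) of how each option stacks up against the 3.7GB/s target:

```python
# Peak bandwidth of each option vs. the ~3.7GB/s target (back-of-the-envelope)
target_gbs = 3.7

options = {
    "Single-channel PC800 RDRAM": 0.400 * 2 * 2,  # 400MHz x 2 transfers x 2 bytes  = 1.6GB/s
    "DDR SDRAM at 133MHz DDR":    0.133 * 2 * 8,  # 133MHz x 2 transfers x 8 bytes ~= 2.1GB/s
}

for name, gbs in options.items():
    print(f"{name}: {gbs:.1f}GB/s ({gbs / target_gbs:.0%} of target)")
```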

Since developing a brand-new memory technology between now and the end of the year is virtually impossible (at least if you plan on shipping it anytime soon), there has to be a way to take one of the currently available memory technologies and manipulate it so that it provides more memory bandwidth. 

You can do this in one of two ways: either by increasing the operating frequency (or effective operating frequency) of the devices, or by increasing the width of the memory bus. 
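Put another way, peak bandwidth is simply the clock rate times the number of transfers per clock times the width of the bus, so either factor is a lever that can be pulled.  Here is a rough sketch of the two approaches, using a single RDRAM channel as the starting point (the specific numbers are purely illustrative; the two levers are the ones we discuss below):

```python
def peak_bandwidth_gbs(clock_mhz, transfers_per_clock, width_bytes):
    """Peak bandwidth in GB/s = clock x transfers per clock x bus width."""
    return clock_mhz * transfers_per_clock * width_bytes / 1000

baseline     = peak_bandwidth_gbs(400, 2, 2)  # single PC800 RDRAM channel: 1.6GB/s

# Lever 1: raise the effective operating frequency (e.g. quad pumping the interface)
quad_pumped  = peak_bandwidth_gbs(400, 4, 2)  # 3.2GB/s

# Lever 2: widen the bus (e.g. run a second 2-byte channel in parallel)
dual_channel = peak_bandwidth_gbs(400, 2, 4)  # 3.2GB/s

print(baseline, quad_pumped, dual_channel)
```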

For example, in order to adapt SDRAM to the needs of the future, AMD and VIA will adopt DDR SDRAM in their designs for the upcoming Mustang; otherwise, the performance of that platform would be severely hindered by a lack of memory bandwidth.  Since the Mustang will potentially have a large on-die L2 cache, the penalty for an L2 cache miss will be much greater in a system with a slow memory subsystem or a memory bus that is saturated with data requests from other memory masters.  The current Athlon can get around this problem and survive pretty easily because of its off-die, high latency L2 cache running no faster than 350MHz, but once Thunderbird hits, we'll begin to see a definite need for a faster memory bus. 

Since both RDRAM and DDR SDRAM are already double pumped, the technology for quad pumping (or QDR) the two memory technologies would have to be developed and implemented in order for the effective operating frequency of the DRAM types to be increased (simply increasing the actual clock speed isn't a viable option until manufacturing processes improve). 

This leaves the latter option, increasing the width of the memory bus, which also brings us to the reason why our current SDRAM implementation can't grow much further than DDR SDRAM.
