Just One Small Problem: We Need a New Memory Technology

The R600 GPU had an incredibly wide 512-bit memory interface. The problem with such a wide interface is that it artificially inflates your die size, since you have to route all of those interface pads out to the memory devices on the board. For RV770 to hit the die size ATI wanted, it needed a 256-bit memory interface, but with the memory technology available at the time, a 256-bit bus wouldn't give the GPU enough bandwidth to reach ATI's performance targets.

When the options were either make the chip too big or make the performance too low, ATI looked elsewhere: let's use a new memory technology. Again, put yourself in ATI's shoes. The year was 2005, ATI had just decided to throw away the past few years of how-to-win-the-GPU-race thinking, and on top of that, even if the strategy succeeded it would depend on a memory technology that hadn't even been prototyped yet.

The GDDR5 spec wasn't finalized at the time; there were no test devices, no interface design, nothing. There was just the idea that at some point there would be memory offering twice the bandwidth per pin of GDDR3, which would give ATI the bandwidth of a 512-bit bus on a physical 256-bit bus. It was exactly what ATI needed, so it's exactly what ATI decided to go with.
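To make the bandwidth arithmetic concrete, here's a minimal sketch of the peak-bandwidth math. The per-pin data rates used below (2.0 Gbps for GDDR3, 4.0 Gbps for GDDR5) are illustrative round numbers chosen only to show the twice-the-bandwidth-per-pin relationship, not ATI's actual shipping memory clocks:

```python
# Illustrative peak-bandwidth arithmetic: why a 256-bit GDDR5 bus can match a 512-bit GDDR3 bus.
# The per-pin data rates below are hypothetical round numbers, not actual R600/RV770 specs.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return (bus_width_bits / 8) * data_rate_gbps_per_pin

# R600-style configuration: very wide bus, GDDR3-class data rate (assumed 2.0 Gbps/pin).
wide_gddr3 = peak_bandwidth_gbs(bus_width_bits=512, data_rate_gbps_per_pin=2.0)

# RV770-style configuration: half the bus width, GDDR5 doubling the per-pin rate (assumed 4.0 Gbps/pin).
narrow_gddr5 = peak_bandwidth_gbs(bus_width_bits=256, data_rate_gbps_per_pin=4.0)

print(f"512-bit @ 2.0 Gbps/pin: {wide_gddr3:.0f} GB/s")    # 128 GB/s
print(f"256-bit @ 4.0 Gbps/pin: {narrow_gddr5:.0f} GB/s")  # 128 GB/s, with half the interface pads
```

The point of the sketch is simply that doubling the per-pin data rate offsets halving the bus width, which is why a 256-bit GDDR5 interface could deliver 512-bit-class bandwidth from a much smaller die.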

Unfortunately, whether or not GDDR5 shipped by the summer of 2008 wasn't entirely up to ATI; the memory manufacturers themselves had a lot of work to do. ATI committed a lot of money and engineering resources to working with its memory partners to make sure that not only was the spec ready, but that the memory itself was ready, performing well and available by the summer of 2008. Note that RV770 was going to be the only GPU using GDDR5, meaning it was ATI and ATI alone driving the accelerated roadmap for this memory technology. It's akin to trying to single-handedly bring 100Mbps internet to your city; it'll happen eventually, but if you want it done on your timetable you're going to have to pick up a shovel and start burying a lot of your own cable.

ATI did much of the heavy lifting in the move to GDDR5, and it was risky: even if RV770 itself worked out perfectly, the GPU would be delayed if the memory wasn't ready in time. RV770 was married to GDDR5 memory; there was no other option. If, three years down the road, GDDR5 didn't ship or had problems, ATI would not only have no high-end GPU, it wouldn't even have a performance GPU to sell into the market.

If GDDR5 did work out, RV770 could succeed, and the new memory would be one more thing NVIDIA didn't have at launch. That is, of course, assuming that ATI's smaller-die strategy would actually work...
