The Bet, Would NVIDIA Take It?

In the spring of 2005, ATI had R480 on the market (the Radeon X850 series), a 130nm chip that was a mild improvement over R420, another 130nm chip (the Radeon X800 series). The R420-to-R480 transition is an important one, because these are the sorts of trends NVIDIA would look at to predict ATI's future actions.

ATI was still trying to work through execution on the R520, which became the Radeon X1800, but as you may remember that part was delayed. ATI was having a problem with the chip at the time, traced to a particular piece of IP. The R520 delay ended up causing a ripple that affected everything in the pipeline, including the R600, which was itself delayed for other reasons as well.

When ATI looked at the R520 in particular, it saw a big chip that didn't deliver good bang for the buck, so ATI made an unexpected architectural change going from the R520 to the R580: it broke the 1:1:1:1 ratio.

The R520 had a 1:1:1:1 ratio of ALUs to texture units to color units to z units, but in the R580 ATI changed this relationship to 3:1:1:1, increasing arithmetic power without increasing texture/memory capabilities. ATI had noticed that the shading complexity of applications was going up while their bandwidth requirements weren't, which justified the architectural shift.

This made the R520-to-R580 transition a much larger one than anyone would've expected, including NVIDIA. While the Radeon X1800 wasn't really competitive (partly due to its delay, but also due to how good G70 was), the Radeon X1900 put ATI on top for a while. It was an unexpected move that undoubtedly ruffled feathers at NVIDIA; used to being on top, NVIDIA doesn't exactly like it when ATI takes its place.

Inside ATI, Carrell made a bet. He bet that NVIDIA would underestimate R580, that it would look at what ATI did with R480 and expect R580 to follow in a similar vein. He bet that NVIDIA would be surprised by R580, and that the chip to follow G70 would be huge; NVIDIA wouldn't want to lose again, so G80 would be a monster.

ATI had hoped to ship the R520 in the early summer of 2005; it ended up shipping in October, almost six months later, and as I already mentioned, it delayed the whole stack. The negative ripple effect made it all the way into the R600 family. ATI speculated that NVIDIA would design its next part (G71, the GeForce 7900 GTX) to be around 20% faster than R520, and wouldn't expect much out of R580.


A comparison of die sizes for ATI and NVIDIA GPUs over the years; the boxes are to scale. Red is ATI, green is NVIDIA.

ATI was planning the R600 at the time and knew it was going to be big; it started at 18mm x 18mm, then grew to 19, then 20. Engineers kept asking Carrell, "do you think their chip is going to be bigger than this?" His answer: "Definitely! They aren't going to lose; after the 580 they aren't going to lose." Whether G80's size and power were a direct result of ATI getting too good with R580 is up for debate. I'm sure NVIDIA will argue that it was by design and had nothing to do with ATI, and obviously we know where ATI stands, but the fact of the matter is that Carrell's prediction was correct: the generation after G70 was going to be a huge chip.

If ATI was responsible, even in part, for NVIDIA's G80 (GeForce 8800 GTX) being as good as it was, then ATI ensured its own demise. Not only was G80 good, but R600 was late, very late. Still impacted by the R520 delay, R600 had a serious problem with its AA resolve hardware that took a while to work through, and it ended up being a part that wasn't very competitive. Without working AA resolve hardware, R600 had an even tougher time against an already excellent G80. ATI had lost the halo: its biggest chip ever couldn't compete with NVIDIA's big chip, and for the next year ATI's revenues and market share would suffer. While this was going on, Carrell was still trying to convince everyone working on the RV770 that they were doing the right thing, that winning the halo didn't matter...just as ATI was suffering from not winning the halo. He must've sounded like a lunatic at the time.

When Carrell and crew were spec'ing the RV770, the prediction was that it wouldn't just be good against similarly sized chips; it would be competitive at the high end because NVIDIA would still be in overshoot mode after G80. Carrell believed that whatever followed G80 would be huge, and that RV770 would have an advantage because NVIDIA would have to charge a lot for that chip.

Carrell and the rest of ATI were in for the surprise of their lives...

Comments

  • Spivonious - Wednesday, December 3, 2008 - link

    I totally agree! Articles like this one are what separates Anandtech from the multitude of other tech websites.
  • goinginstyle - Wednesday, December 3, 2008 - link

    I have to admit this is one of the best articles I have read anywhere on the web in a long time. It is very insightful, interesting, and even compelling at times. Can you do a follow-up, only from an NVIDIA perspective?
  • Jorgisven - Wednesday, December 3, 2008 - link

    I totally agree. This article is superbly written. One of the best tech articles I've read in a long long time, out of any source, magazine or online. I highly doubt nVidia will be as willing to expose their faults as easily as ATI was to expose their success; but I could be entirely mistaken on that.

    In either case, well done Anand. And well done ATI! Snagged the HD4850 two days after release during the 25% off Visiontek blunder from Best Buy during release week. I've been happy with it since and can still kick around the 8800GT performance like yesterday's news.
  • JonnyDough - Wednesday, December 3, 2008 - link

    I agree about the insight especially. Gave us a real look at the decision making behind the chips.

    This got me excited about graphics again, and it leaves me eager to see what will happen in the coming years. This kind of article is what will draw readers back. Thank you Anandtech and the red team for this amazing back stage pass.
  • magreen - Wednesday, December 3, 2008 - link

    Great article! Really compelling story, too.
    Thanks AMD/ATI for making this possible!
    And thanks Anand for continually being the best on the web.
  • JPForums - Wednesday, December 3, 2008 - link

    Like others have said, this is probably the best article I've read in recent memory. It was IMHO well written and interesting. Kudos to ATI as well for divulging the information.

    I second the notion that similar articles from nVidia and Intel would also be interesting. Any chance of AMD's CPU division doing something similar? I always find the architectural articles interesting, but they gain more significance when you understand the reasoning behind the design.
  • jordanclock - Wednesday, December 3, 2008 - link

    This is easily one of my favorite articles on this website. It really puts a lot of aspects of the GPU design process into perspective, such as the sheer amount of time it takes to design one.

    I also think this article really adds a great deal of humanity to GPU design. The designers of these marvels of technology are often forgotten (if ever known by most) and to hear the story of one of the most successful architectures to date, from the people that fought for this radical departure... It's amazing, to say the least.

    I really envy you, Anand. You get to meet the geek world's superheroes.
  • pattycake0147 - Wednesday, December 3, 2008 - link

    I couldn't agree more! This could be the best article I've read here at anandtech period. The performance reviews are great, but once in a while you need something different or refreshing and this is just precisely that.
  • LordanSS - Wednesday, December 3, 2008 - link

    Yep, I agree with that. This is simply one of the best articles I've read here.

    Awesome work, Anand.
  • Clauzii - Wednesday, December 3, 2008 - link

    I totally agree.
