What’s Next?

Much like in the R300 days, the success of the RV770 was partially ensured by NVIDIA’s failure. Unlike NV30, however, GT200 wasn’t delayed, nor did it underperform terribly - it was simply overpriced. ATI got very lucky with RV770: NVIDIA was tied up making a huge chip and avoided two major risks, 55nm and GDDR5, both of which ATI capitalized on.

The next round won’t be as easy: NVIDIA will be at 55nm, and it will eventually transition to GDDR5 as well. ATI can’t pull off another Radeon HD 4800 launch every year, so chances are that 2010 will be a closer fight. Even today NVIDIA has managed to close the gap quite a bit by aggressively pricing the GeForce GTX 260 Core 216, but there’s still the problem that no mainstream GT200 derivative exists, nor will one until sometime in 2010. Not to mention the impact that selling a 576 mm^2 die at the same price as ATI’s 260 mm^2 die will have on NVIDIA’s financials.
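To put that last point in rough perspective, here’s a back-of-the-envelope sketch of my own (the die sizes are the ones quoted above; the 300mm wafer and everything else are deliberately naive assumptions, not numbers from either company): simply dividing a wafer’s area by each die size shows how many more candidate chips ATI can cut from every wafer it buys while selling the resulting card at the same price.

```cpp
#include <cstdio>

int main() {
    // Back-of-the-envelope only: this ignores edge loss, scribe lines,
    // and defect yield, all of which favor the smaller die even further.
    const double kPi = 3.14159265358979;
    const double wafer_diameter_mm = 300.0;
    const double wafer_area_mm2 =
        kPi * (wafer_diameter_mm / 2.0) * (wafer_diameter_mm / 2.0); // ~70,686 mm^2

    const double gt200_area_mm2 = 576.0; // die size quoted in this article
    const double rv770_area_mm2 = 260.0; // die size quoted in this article

    std::printf("GT200 dies per wafer (naive): ~%.0f\n", wafer_area_mm2 / gt200_area_mm2); // ~123
    std::printf("RV770 dies per wafer (naive): ~%.0f\n", wafer_area_mm2 / rv770_area_mm2); // ~272
    std::printf("RV770 advantage: ~%.1fx more candidate dies\n",
                gt200_area_mm2 / rv770_area_mm2); // ~2.2x
    return 0;
}
```

Even this crude math suggests ATI gets on the order of twice as many chips out of every wafer, before yield is even considered, which is why selling the two parts at the same price hurts NVIDIA far more.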

Carrell was very upfront about the follow-on to RV770; he told me frankly that it’s impossible to have the perfect product every time. He’d love to, but the reality is that they’re not going to. There are many factors in this business that are out of ATI’s (or NVIDIA’s) control, but sometimes the stars align and you get a launch like the Radeon HD 4800 (or the Radeon 9700 Pro).

Carrell did add, however, that within the limits imposed by those outside factors, ATI can still do things of compelling value. It’s possible to do the best you can within constraints, and while that may not result in one of these perfect products, it can still be something good.

When I asked specifically what would make the RV8xx series special, all he could tell me was that ATI does have some things that are very interesting, very novel, and very useful in store for the next product. I wanted more, but given what Carrell and the rest of ATI had just given me, I wasn’t about to get greedy.

A Little About Larrabee

The big unknown in all of this is Larrabee, Intel’s first fully programmable GPU. Naturally, I talked to Carrell and crew about Larrabee during my final 30 minutes in the room with them.

First we’ve got to get the “let’s all be friends” talk out of the way. ATI, Intel, and NVIDIA all agree that data parallelism is incredibly important; it’s the next frontier of compute performance. We don’t know exactly in what form we’ll see data parallel computing used on desktops, but when it happens, it’ll be big. Every single person in that room also expressed the highest respect and regard for ATI’s competitors. That being said, they did have some criticisms.

Like NVIDIA, ATI views Larrabee as a very CPU-like approach to designing a GPU. The challenge in attacking data parallel acceleration from the GPU side is getting the programming model to be as easy as it is on the CPU. ATI admitted that Intel does have an advantage here, given that Larrabee is x86 and the whole environment is familiar to existing developers. ATI believes it will still hold a significant performance advantage, but that Larrabee comes out of the gate with a programming advantage.

The thing worth mentioning, however, is that regardless of who makes the GPU (ATI, NVIDIA, or Intel), you still need to rewrite your code to be data parallel. ATI believes that writing efficient parallel code requires a level of skill an order of magnitude beyond what your typical programmer possesses. If you can harness a GPU, though, you get access to a tremendous amount of compute: roughly 1 TFLOP of performance for $170. If you’re a brilliant programmer, you know exactly what you should view as your next frontier...
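To make the “rewrite your code to be data parallel” point a bit more concrete, here’s a minimal sketch of my own (not anything ATI showed me, and deliberately simplified): a straightforward sum carries a dependency from one loop iteration to the next, so it can’t simply be handed to thousands of GPU threads; the data parallel version restructures the same arithmetic into independent pairwise steps, each of which could map to its own thread.

```cpp
#include <vector>
#include <cstddef>

// Serial sum: every iteration depends on the previous one, so the loop
// cannot simply be handed to thousands of GPU threads as written.
float sum_serial(const std::vector<float>& data) {
    float total = 0.0f;
    for (float x : data) {
        total += x;
    }
    return total;
}

// Data parallel sum: the same work restructured as a tree reduction.
// At each step, pairs of elements are combined independently, which is
// the shape a data parallel machine wants: many identical operations
// with no dependencies inside a step.
float sum_parallel(std::vector<float> data) {
    if (data.empty()) return 0.0f;
    std::size_t n = data.size();
    while (n > 1) {
        std::size_t half = (n + 1) / 2;
        // Each iteration of this inner loop is independent of the others;
        // on a GPU each one would run as its own thread.
        for (std::size_t i = 0; i < n / 2; ++i) {
            data[i] = data[i] + data[i + half];
        }
        n = half;
    }
    return data[0];
}
```

Even for something this trivial, the parallel shape is less obvious than the serial loop, and that gap only widens with real algorithms - which is exactly the skill barrier ATI is talking about.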

Comments

  • Spivonious - Wednesday, December 3, 2008 - link

    I totally agree! Articles like this one are what separates Anandtech from the multitude of other tech websites.
  • goinginstyle - Wednesday, December 3, 2008 - link

    I have to admit this is one of the best articles I have read anywhere on the web in a long time. It is very insightful, interesting, and even compelling at times. Can you do a follow-up, only from an NVIDIA perspective?
  • Jorgisven - Wednesday, December 3, 2008 - link

    I totally agree. This article is superbly written. One of the best tech articles I've read in a long long time, out of any source, magazine or online. I highly doubt nVidia will be as willing to expose their faults as easily as ATI was to expose their success; but I could be entirely mistaken on that.

    In either case, well done Anand. And well done ATI! Snagged the HD4850 two days after release during the 25% off Visiontek blunder from Best Buy during release week. I've been happy with it since and can still kick around the 8800GT performance like yesterday's news.
  • JonnyDough - Wednesday, December 3, 2008 - link

    I agree about the insight especially. Gave us a real look at the decision making behind the chips.

    This got me excited about graphics again, and it leaves me eager to see what will happen in the coming years. This kind of article is what will draw readers back. Thank you Anandtech and the red team for this amazing back stage pass.
  • magreen - Wednesday, December 3, 2008 - link

    Great article! Really compelling story, too.
    Thanks AMD/ATI for making this possible!
    And thanks Anand for continually being the best on the web.
  • JPForums - Wednesday, December 3, 2008 - link

    Like others have said, this is probably the best article I've read in recent memory. It was IMHO well written and interesting. Kudos to ATI as well for divulging the information.

    I second the notion that similar articles from nVidia and Intel would also be interesting. Any chance of AMD's CPU division doing something similar? I always find the architectural articles interesting, but they gain more significance when you understand the reasoning behind the design.
  • jordanclock - Wednesday, December 3, 2008 - link

    This is easily one of my favorite articles on this website. It really puts a lot of aspects of the GPU design process into perspective, such as the sheer amount of time it takes to design one.

    I also think this article really adds a great deal of humanity to GPU design. The designers of these marvels of technology are often forgotten (if ever known by most) and to hear the story of one of the most successful architectures to date, from the people that fought for this radical departure... It's amazing, to say the least.

    I really envy you, Anand. You get to meet the geek world's superheroes.
  • pattycake0147 - Wednesday, December 3, 2008 - link

    I couldn't agree more! This could be the best article I've read here at anandtech period. The performance reviews are great, but once in a while you need something different or refreshing and this is just precisely that.
  • LordanSS - Wednesday, December 3, 2008 - link

    Yep, I agree with that. This is simply one of the best articles I've read here.

    Awesome work, Anand.
  • Clauzii - Wednesday, December 3, 2008 - link

    I totally agree.
