What’s Next?

Much like in the R300 days, the success of RV770 was partially ensured by NVIDIA’s failure. Unlike NV30, however, GT200 wasn’t delayed, nor did it underperform terribly - it was simply overpriced. ATI also got very lucky with RV770: NVIDIA, tied up building a huge chip, avoided two major risks - 55nm and GDDR5 - and ATI capitalized on both.

The next round won’t be as easy. NVIDIA will be at 55nm, and it will eventually transition to GDDR5 as well. ATI can’t pull off another Radeon HD 4800 launch every year, so chances are 2010 will be closer. Even today NVIDIA has managed to close the gap quite a bit by aggressively pricing the GeForce GTX 260 Core 216, but there’s still the problem that no mainstream GT200 derivative exists, nor will one until sometime in 2010. Not to mention the impact on NVIDIA’s financials of selling a 576mm^2 die at the same price at which ATI sells a 260mm^2 die - a 300mm wafer simply yields far fewer of the bigger chips.
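To put rough numbers on that, here’s a quick back-of-the-envelope sketch (my own illustration, not something from ATI or NVIDIA) using the standard gross dies-per-wafer approximation. It deliberately ignores wafer cost and yield, neither of which is public, but the raw die count alone tells the story - and defect-driven yield loss hits a 576mm^2 die harder still.

#include <cmath>
#include <cstdio>

// Gross (pre-yield) dies per wafer, a standard approximation:
//   DPW ~ pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
// where d = wafer diameter in mm and A = die area in mm^2.
// The second term estimates partial dies lost around the wafer edge.
int grossDiesPerWafer(double waferDiameterMm, double dieAreaMm2) {
    const double pi = 3.14159265358979;
    const double radius = waferDiameterMm / 2.0;
    const double dpw = (pi * radius * radius) / dieAreaMm2
                     - (pi * waferDiameterMm) / std::sqrt(2.0 * dieAreaMm2);
    return static_cast<int>(dpw);
}

int main() {
    const double wafer = 300.0;  // both GPUs are built on 300mm wafers

    // Die areas from the article: GT200 at 576mm^2, RV770 at 260mm^2.
    const int gt200 = grossDiesPerWafer(wafer, 576.0);
    const int rv770 = grossDiesPerWafer(wafer, 260.0);

    std::printf("GT200 (576 mm^2): ~%d gross dies per wafer\n", gt200);  // ~94
    std::printf("RV770 (260 mm^2): ~%d gross dies per wafer\n", rv770);  // ~230
    return 0;
}

Roughly 2.4x as many RV770 candidates per wafer, before yield even enters the picture. Selling both chips at the same board price is exactly the margin squeeze described above.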

Carrell was very upfront about the follow-on to RV770; he told me frankly that it’s impossible to have the perfect product every time. He’d love to, but the reality is that they won’t. Many factors in this business are out of ATI’s (or NVIDIA’s) control, but sometimes the stars align and you get a launch like the Radeon HD 4800 (or the Radeon 9700 Pro).

Carrell did add, however, that within the limits imposed by those outside factors, ATI can still do things of compelling value. It’s possible to do the best you can within constraints, and while that may not result in one of these perfect products, it can still be something good.

When I asked specifically what would make the RV8xx series special, all he could tell me was that ATI has some things that are very interesting, very novel and very useful lined up for the next product. I wanted more, but given what Carrell and the rest of ATI had just given me, I wasn’t about to get greedy.

A Little About Larrabee

The big unknown in all of this is Larrabee, Intel’s first fully programmable GPU. Naturally, I talked to Carrell and crew about Larrabee during my final 30 minutes in the room with them.

First we’ve got to get the “let’s all be friends” speech out of the way. ATI, Intel and NVIDIA all agree that data parallelism is incredibly important; it’s the next frontier of compute performance. We don’t know exactly what form data parallel computing will take on the desktop, but when it happens, it’ll be big. Every single person in that room also expressed the highest respect and regard for ATI’s competitors. That being said, they did have some criticisms.

Like NVIDIA, ATI views Larrabee as a very CPU-like approach to designing a GPU. The challenge in attacking data parallel algorithms from the GPU side is making the programming model as easy as it is on the CPU. ATI admitted that Intel has an advantage here: Larrabee is x86, so the whole environment is already familiar to developers. ATI believes it will still hold the performance advantage (a significant one), but that Larrabee comes out of the gate with a programming advantage.

The thing worth mentioning, however, is that regardless of who makes the GPU - ATI, NVIDIA or Intel - you still need to rewrite your code to be data parallel. ATI believes that writing efficient parallel code requires a level of skill an order of magnitude beyond what the typical programmer can manage. If you can harness the power of a GPU, however, you get access to a tremendous amount of compute: ~1 TFLOP of performance for $170. If you’re a brilliant programmer, you know exactly what to view as your next frontier...
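To make the “rewrite your code” point concrete, here’s a minimal sketch of the same trivial operation (SAXPY: y = a*x + y) written serially and then as a data parallel GPU kernel. I’m using NVIDIA’s CUDA only because it’s the most familiar notation for this; ATI’s equivalent at the time was Brook+ and the Stream SDK, and the same restructuring applies. All names and sizes here are illustrative.

#include <cstdio>
#include <cuda_runtime.h>

// Serial version: the CPU walks the array one element at a time.
void saxpy_serial(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Data parallel version: the loop is gone. Each GPU thread owns
// exactly one element, and the hardware runs thousands of threads
// concurrently.
__global__ void saxpy_parallel(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // the last block may be only partially full
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data, plus a reference array for the serial version.
    float* hx = new float[n];
    float* hy = new float[n];
    float* hyRef = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; hyRef[i] = 2.0f; }
    saxpy_serial(n, 2.0f, hx, hyRef);

    // Device copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy_parallel<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    std::printf("GPU y[0] = %.1f, CPU y[0] = %.1f\n", hy[0], hyRef[0]);  // both 4.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy; delete[] hyRef;
    return 0;
}

Even in this toy case the mental shift is visible: you stop describing an order of operations and start describing what each element independently needs. Scale that up to real algorithms with genuine data dependencies and you get the order-of-magnitude skill gap ATI is talking about.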

Comments

  • Chainlink - Saturday, December 6, 2008 - link

    I've followed Anandtech for many years but never felt the need to respond to posts or reviews. I've always used anandtech as THE source of information for tech reviews and I just wanted to show my appreciation for this article.

    Following the graphics industry is certainly a challenge; I think I've owned most of the major cards mentioned in this insightful article. But to learn some of the background of why AMD/ATI made some of the decisions they did is just AWESOME.

    I've always been AMD for CPU (won an XP1800+ at the Philly zoo!!!) and a mix of the red and green for GPUs. But I'm glad to see AMD back on track in both CPU and especially GPU (I actually have stock in them :/).

    Thanks Anand for the best article I've read anywhere, it actually made me sign up to post this!
  • pyrosity - Saturday, December 6, 2008 - link

    Anand & Co., AMD & Co.,

    Thank you. I'm not too much into following hardware these days but this article was interesting, informative, and insightful. You all have my appreciation for what amounts to a unique, humanizing story that feels like a diamond in the rough (not to say AT is "the rough," but perhaps the sea of reviews, charts, benchmarking--things that are so temporal).
  • Flyboy27 - Friday, December 5, 2008 - link

    Amazing that you got to sit down with these folks. Great article. This is why I visit anandtech.com!
  • BenSkywalker - Friday, December 5, 2008 - link

    Is the ~$550 price point seen on ATi's current high end part evidence of them making their GPUs for the masses? If this entire strategy is as exceptional as this article makes it out to be, and this was an effort to honestly give high end performance to the masses, then why no lengthy conversation of how ATi currently offers, by a hefty margin, the most expensive graphics cards on the market? You even present the slide that demonstrates the key to obtaining the high end was scalability, yet you fail to discuss how their pricing structure is the same one nVidia was using; they simply chose to use two smaller GPUs in the place of one monolithic part. Not saying there is anything wrong with their approach at all- but your implication that it was a choice made around a populist mindset is quite out of place, and by a wide margin. They have the fastest part out, and they are charging a hefty premium for it. Wrong in any way? Absolutely not. An overall approach that has the same impact that nV or 3dfx before them had on consumers? Absolutely. Nothing remotely populist about it.

    From an engineering angle, it is very interesting how you gloss over the impact that 55nm had for ATi versus nVidia and in turn how this current direction will hold up when they are not dealing with a build process advantage. It also was interesting that quite a bit of time was given to the advantages that ATi's approach had over nV's in terms of costs, yet ATi's margins remain well behind nVidia's (not included in the article). All of these factors could have easily been left out of the article altogether and you could have left it as an article about the development of the RV770 from a human interest perspective.

    This article could have been a lot better as a straight human interest fluff piece. By half bringing in some elements that are favorable to the direction of the article while leaving out any objective analysis from an engineering or business perspective, it reads a lot more like a press release than journalism.
  • Garson007 - Friday, December 5, 2008 - link

    Never in the article did it say anything about ATI turning socialistic. All it did mention was that they designed a performance card instead of an enthusiast one. How they finally chose to reach the enthusiast block, and how much it is priced at, is completely irrelevant to the fact that they designed a performance card. This also allowed ATI to bring better graphics to lower priced segments, because the relative scaling was much less than what nVidia -still- has to undertake.

    The build process was mentioned. It is completely nVidia's prerogative to ignore a certain process until they create the architecture that works on one they already know; you are bringing up a coulda/woulda/shoulda situation around nVidia's strategy - when it means nothing to the current end-user. The future, after all, is the future.

    I'd respectfully disagree about the journalism statement, as I believe this to be a much higher form of journalism than a lot of what happens on the internet these days.

    I'd also disagree with the people who say that AMD is any less secretive or anything. Looking in the article there is no real information in it which could disadvantage them in any way; all this article revealed about AMD is a more human side to the inner workings.

    Thank you AMD for making this article possible, hopefully others will follow suit.
  • travbrad - Friday, December 5, 2008 - link

    This was a really cool and interesting article, thanks for writing it. :)

    However there was one glaring flaw I noticed: "The Radeon 8500 wasn’t good at all; there was just no beating NVIDIA’s GeForce4, the Ti 4200 did well in the mainstream market and the Ti 4600 was king of the high end. "

    That is a very misleading and flat-out false statement. The Radeon 8500 was launched in October 2001, and the Geforce 4 was launched in April 2002 (that's a 7 month difference). I would certainly hope a card launched more than half a year later was faster.

    The Radeon 8500 was up against the Geforce3 when it was launched. It was generally as fast/faster than the similarly priced Ti200, and only a bit slower than the more expensive Ti500. Hardly what I would call "not good at all". Admittedly it wasn't nearly as popular as the Geforce3, but popularity != performance.
  • 7Enigma - Friday, December 5, 2008 - link

    That's all I have to say. As near to perfection as you can get in an article.
  • hanstollo - Friday, December 5, 2008 - link

    Hello, I've been visiting your site for about a year now and just wanted to let you know I'm really impressed with all of the work you guys do. Thank you so much for this article, as I feel I really learned a whole lot from it. It was well written and kept me engaged. I had never heard of concepts like harvesting and repairability. I had no idea that three years went into designing this GPU. I love keeping up with hardware and really trust and admire your site. Thank you for taking the time to write this article.
  • dvinnen - Friday, December 5, 2008 - link

    Been reading this site for going on 8 years now and this article ranks up there with your best ever. As I've grown older and games have taken a back seat I find articles like this much more interesting. When a new product comes out I find myself reading the forewords and architectural bits of the articles and skipping over all the graphs to the conclusions.

    Anyways, just wish I was one of those brilliant programmers who was skilled enough to do massively parallelized programming.
  • quanta - Friday, December 5, 2008 - link

    While the RV770 engineers may not have had GDDR5 SDRAM to play with during its development, ATI could already use GDDR4 SDRAM, which already has memory bandwidth doubling that of GDDR3 SDRAM, AND it was already used in Radeon X1950 XTX (R580+) cores. If there was any bandwidth superiority over NVIDIA, it was because of NVIDIA's refusal to switch to GDDR4, not lack of technology.
