The RV770 Lesson (or The GT200 Story)

It took NVIDIA a while to give us an honest response to the RV770. At first it was all about CUDA and PhysX. RV770 didn't have them, so we shouldn't be recommending it; that was NVIDIA's stance.

Today, it's much more humble.

Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

Ujesh doesn't believe NVIDIA will make the same mistake with Fermi.

Jonah, unwilling to let Ujesh take all of the blame, admitted that engineering was partially at fault as well. GT200 was the last chip NVIDIA ever built at 65nm - there's no excuse for that. The chip needed to be at 55nm from the get-go, but NVIDIA had been extremely conservative about moving to new manufacturing processes too early.

It all dates back to NV30, the GeForce FX. It was a brand new architecture on a bleeding-edge manufacturing process, 130nm at the time, which ultimately led to its delay. ATI pulled ahead with the 150nm Radeon 9700 Pro and NVIDIA vowed never to make that mistake again.

With NV30, NVIDIA was too eager to move to new processes. Jonah believes that GT200 was an example of NVIDIA swinging too far in the other direction; NVIDIA was too conservative.

The biggest lesson RV770 taught NVIDIA was to be quicker to migrate to new manufacturing processes. Not NV30 quick, but definitely not as slow as GT200. Internal policies are now in place to ensure this.

Architecturally, there aren't huge lessons to be learned from RV770. It was a good chip in NVIDIA's eyes, but NVIDIA isn't adjusting its architecture in response. NVIDIA will continue to build beefy GPUs and AMD appears committed to building more affordable ones. Both companies are focused on building more efficiently.

Of Die Sizes and Transitions

Fermi and Cypress are both built on the same 40nm TSMC process, yet they differ by nearly 1 billion transistors. Even the first generation Larrabee will be closer in size to Cypress than to Fermi, and it's made at Intel's state-of-the-art 45nm facilities.

What you're seeing is a significant divergence between the graphics companies, one that I expect will continue to grow in the near term.

NVIDIA's architecture is designed to address its primary deficiency: the company's lack of a general purpose microprocessor. As such, Fermi's enhancements over GT200 address that issue. While Fermi will play games, and NVIDIA claims it will do so better than the Radeon HD 5870, it is designed to be a general purpose compute machine.

ATI's approach is much more cautious. While Cypress can run DirectX Compute and OpenCL applications (the former faster than any NVIDIA GPU on the market today), ATI's use of transistors was specifically targeted to run the GPU's killer app today: 3D games.

Intel's take is unique. Both ATI and NVIDIA have to support their existing businesses, so they can't simply introduce a revolutionary product that sacrifices performance on existing applications for some lofty, longer-term goal. Intel, however, has no discrete GPU business today, so it can.

Larrabee is in rough shape right now. The chip is buggy; the first time we met it, it wasn't healthy enough to even run a 3D game. Intel has 6 - 9 months to get it ready for launch. By then, the Radeon HD 5870 will be priced between $299 and $349, and Larrabee will most likely slot in $100 - $150 cheaper. Fermi is going to be aiming for the top of the price brackets.

The motivation behind AMD's "sweet spot" strategy wasn't just die size, it was price. AMD believed that by building large, $600+ GPUs, it didn't service the needs of the majority of its customers quickly enough. It took far too long to make a $199 GPU from a $600 one - quickly approaching a year.

Clearly Fermi is going to be huge. NVIDIA isn't disclosing die sizes, but if we estimate that a 40% higher transistor count results in a 40% larger die area, then we're looking at over 467mm^2 for Fermi. That's smaller than GT200 and about the size of G80; it's still big.
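
For the curious, here's that back-of-the-envelope arithmetic as a quick Python sketch. The baseline figures are our assumptions from published specs, not numbers NVIDIA has disclosed: roughly 2.15 billion transistors on a ~334mm^2 die for Cypress, roughly 3 billion transistors for Fermi, and perfectly linear area scaling with transistor count on the same 40nm process.

    # Back-of-the-envelope Fermi die size estimate.
    # Assumed baselines (from published specs, not NVIDIA disclosures):
    # Cypress: ~2.15B transistors on a ~334 mm^2 die; Fermi: ~3.0B transistors.
    cypress_transistors = 2.15e9
    cypress_die_mm2 = 334.0
    fermi_transistors = 3.0e9

    # Naive assumption: die area scales linearly with transistor count
    # on the same process node.
    ratio = fermi_transistors / cypress_transistors   # ~1.40, i.e. ~40% more
    estimate = cypress_die_mm2 * 1.40                 # flat 40%, as in the text above

    print(f"transistor ratio: {ratio:.2f}x")              # -> 1.40x
    print(f"estimated Fermi die: ~{estimate:.0f} mm^2")   # -> ~468 mm^2 ("over 467")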

I asked Jonah if that meant Fermi would take a while to move down to more mainstream pricepoints. Ujesh stepped in and said that he thought I'd be pleasantly surprised once NVIDIA is ready to announce Fermi configurations and price points. If you were NVIDIA, would you say anything else?

Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction. Perhaps Fermi will be different and it'll scale down to $199 and $299 price points with little effort? It seems doubtful, but we'll find out next year.

Comments

  • Kougar - Friday, October 2, 2009

    Hey Anand:

    Just wanted to say thanks for the article. Love the quotes and behind-the-scenes views, and in general the ever so informative articles like this that just can't be found elsewhere. So, thank you!
  • bobvodka - Friday, October 2, 2009

    Someone earlier asked if supporting doubles was going to waste silicon; I don't think it will.

    If you look at the throughput numbers and the fact that FP64 is half that of FP32 with the SFU disabled, I suspect what is going on is that the FP64 calculations are being done by 2 cores at once, with the SFU being involved in some way (given how it is decoupled from the cores, there is no apparent good reason why the SFU should be disabled during FP64 operation).

    A comment was also made re: ECC memory.
    I suspect this won't make it to the consumer board; there is no good reason to do so and it would just cost silicon and power for a feature users don't need.

  • Zool - Friday, October 2, 2009

    Maybe the consumer board won't have ECC, but it will still be in the silicon (disabled). I don't think that they will produce two different silicons just because of ECC.
  • bobvodka - Friday, October 2, 2009

    hmmm, you are probably right on that score, and that might aid yield if they can turn it off, as any faults in the ECC areas could be safely ignored.

    The chances of them using ECC RAM on the boards themselves, I would have said, are zero simply due to cost :)
  • halcyon - Friday, October 2, 2009

    Same foundry, same process, many more transistors....

    Based on roughly extrapolating scaling from the RV870, how much bigger would this baby's power draw be?

    The dollar draw from my wallet is going to be really powerful, that's for sure, but how about power?



  • deeper - Friday, October 2, 2009

    Well, not only is the GT300 months away, but it looks like the card they showed off is a fake anyhoo; check it out at Charlie Demerjian's www.semiaccurate.com
  • Zool - Friday, October 2, 2009

    Could you pls delete the majority of SiliconDoc's replies, and then this one after them? It's embarrassing to read them.
  • Pirks - Friday, October 2, 2009

    I call BS. How many people have 2560x1600 30-inchers? Two? Three? Main point - resolutions are _VERY_ far from being stagnated, they have SOOOOOOOOO _MUCH_ room for growth until 2560x1600, which right now covers maybe 1% of the PC gaming market. 90% of PC gamers still use low-res 1680x1050 if not less (I for one have 1400x1050, yeah shame on me, I don't want to spend $800 on a hi-end SLI setup just to play Crysis in all its hi-res beauty, for.get.it.)

    Shame Anand, real shame.

    Otherwise top notch quality stuff, as always with Anand.
  • bigboxes - Friday, October 2, 2009

    1680x1050 = low res??? Seriously? That's hi-def, bro. I understand you can do better, but for my 20" widescreen it is definitely hi-def.
  • JarredWalton - Friday, October 2, 2009

    I believe what you describe is exactly what is meant by stagnation. From Merriam-Webster: "To become stagnant." Stagnant: "Not advancing or developing." So yeah, I'd say that pretty much sums up display resolutions: they're not advancing.

    Is that bad? Not necessarily, especially when we have so many applications that do things based purely on the wonderful pixel instead of on points or DPI. I use a 30" LCD, and I love the extra resolution for working with images, but the text by default tends to be too small. I have to zoom to 150% in a lot of apps (including Firefox/IE) to get what I consider comfortably readable text. I would say that 2560x1600 on a 30" LCD is about as much as I see myself needing for a good, looooong time.
