The RV770 Lesson (or The GT200 Story)

It took NVIDIA a while to give us an honest response to the RV770. At first it was all about CUDA and PhysX. RV770 didn't have them, so we shouldn't be recommending it; that was NVIDIA's stance.

Today, it's much more humble.

Ujesh is willing to take total blame for GT200. As the manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

Ujesh doesn't believe NVIDIA will make the same mistake with Fermi.

Jonah, unwilling to let Ujesh take all of the blame, admitted that engineering was partially at fault as well. GT200 was the last chip NVIDIA ever built at 65nm - there's no excuse for that. The chip needed to be at 55nm from the get-go, but NVIDIA had been extremely conservative about moving to new manufacturing processes too early.

It all dates back to NV30, the GeForce FX. It was a brand new architecture on a bleeding-edge manufacturing process, 130nm at the time, which ultimately led to its delay. ATI pulled ahead with the 150nm Radeon 9700 Pro, and NVIDIA vowed never to make that mistake again.

With NV30, NVIDIA was too eager to move to new processes. Jonah believes that GT200 was an example of NVIDIA swinging too far in the other direction; NVIDIA was too conservative.

The biggest lesson RV770 taught NVIDIA was to be quicker to migrate to new manufacturing processes. Not NV30 quick, but definitely not as slow as GT200. Internal policies are now in place to ensure this.

Architecturally, there aren't huge lessons to be learned from RV770. It was a good chip in NVIDIA's eyes, but NVIDIA isn't adjusting its architecture in response to it. NVIDIA will continue to build beefy GPUs and AMD appears committed to building more affordable ones. Both companies are focused on building more efficiently.

Of Die Sizes and Transitions

Fermi and Cypress are both built on the same 40nm TSMC process, yet they differ by nearly 1 billion transistors. Even the first-generation Larrabee will be closer in size to Cypress than to Fermi, and it's made at Intel's state-of-the-art 45nm facilities.

What you're seeing is a significant divergence between the graphics companies, one that I expect will continue to grow in the near term.

NVIDIA's architecture is designed to address its primary deficiency: the company's lack of a general purpose microprocessor. As such, Fermi's enhancements over GT200 address that issue. While Fermi will play games, and NVIDIA claims it will do so better than the Radeon HD 5870, it is designed to be a general purpose compute machine.

ATI's approach is much more cautious. While Cypress can run DirectX Compute and OpenCL applications (the former faster than any NVIDIA GPU on the market today), ATI spent its transistors specifically on the GPU's killer app today: 3D games.
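
For readers unfamiliar with what GPU compute code actually looks like, here is a minimal, illustrative OpenCL C kernel (a hypothetical example of ours, not code from either vendor) that adds two vectors; this is the kind of data-parallel work both DirectX Compute and OpenCL express:

```c
// Minimal OpenCL C kernel: each work-item handles one element pair.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out,
                      const unsigned int n)
{
    size_t i = get_global_id(0);   // this work-item's global index
    if (i < n)                     // guard in case the global size was rounded up
        out[i] = a[i] + b[i];
}
```

The same kernel runs on Cypress or on NVIDIA hardware; what differs is how efficiently each architecture schedules the thousands of work-items it spawns.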

Intel's take is the most unusual. Both ATI and NVIDIA have to support their existing businesses, so they can't simply introduce a revolutionary product that sacrifices performance on existing applications for some lofty, longer term goal. Intel, however, has no discrete GPU business today, so it can.

Larrabee is in rough shape right now. The chip is buggy; the first time we met it, it wasn't healthy enough to even run a 3D game. Intel has 6-9 months to get it ready for launch. By then, the Radeon HD 5870 will be priced between $299 and $349, and Larrabee will most likely slot in $100-$150 cheaper. Fermi is going to be aiming for the top of the price brackets.

The motivation behind AMD's "sweet spot" strategy wasn't just die size, it was price. AMD believed that by building large, $600+ GPUs, it didn't service the needs of the majority of its customers quickly enough. It took far too long to make a $199 GPU from a $600 one - quickly approaching a year.

Clearly Fermi is going to be huge. NVIDIA isn't disclosing die sizes, but if we estimate that a 40% higher transistor count results in a 40% larger die area, then we're looking at over 467mm^2 for Fermi. That's smaller than GT200 and about the size of G80; it's still big.
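
As a back-of-the-envelope check (assuming Cypress's commonly cited figures of roughly 2.15 billion transistors on a ~334mm^2 die, numbers not quoted in this article, and Fermi's 3.0 billion transistors), scaling die area linearly with transistor count gives:

$$
A_{\text{Fermi}} \approx A_{\text{Cypress}} \cdot \frac{N_{\text{Fermi}}}{N_{\text{Cypress}}} = 334\,\text{mm}^2 \cdot \frac{3.0\text{B}}{2.15\text{B}} \approx 466\,\text{mm}^2
$$

Treat this strictly as an estimate: linear transistor-to-area scaling ignores differences in SRAM density, I/O pads, and analog blocks, any of which can shift the real number.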

I asked Jonah if that meant Fermi would take a while to move down to more mainstream price points. Ujesh stepped in and said that he thought I'd be pleasantly surprised once NVIDIA is ready to announce Fermi configurations and price points. If you were NVIDIA, would you say anything else?

Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction. Perhaps Fermi will be different and it'll scale down to $199 and $299 price points with little effort? It seems doubtful, but we'll find out next year.

Comments

  • Pirks - Monday, October 5, 2009 - link

    No, Jarred, they are advancing towards 2560x1600 on every PC gamer's desk. Since they are moving towards that (it used to be 800x600 everywhere, now it's more like 1280x1024 everywhere, and in a couple of years it'll be 1680x1050 everywhere, and so on) they cannot be described as stagnant, hence your statement is BS, Jarred.
  • mejobloggs - Tuesday, October 6, 2009 - link

    I think I'll agree with Jarred on this one

    LCD tech isn't advancing enough to get decent high quality large screens at a decent price. 22" seems to be about the sweet spot, which is usually 1680x1050
  • Pirks - Wednesday, October 7, 2009 - link

    This sweet spot used to be 19" 1280x1024 a while ago, with 17" 1024x768 before that. In a couple of years the sweet spot will move to 24" 1920x1200, and so on. Hence monitor resolution does progress, it does NOT stagnate, and you do listen to Jarred's fairy tales too much :P
  • JarredWalton - Friday, October 9, 2009 - link

    What we have here is a failure to communicate. My point, which you are ignoring, is that maximum resolutions are "stagnating" in the sense that they are remaining static. It's not "BS" or a "fairy tale", unless you can provide detail that shows otherwise. I purchased a 24" LCD with a native 1920x1200 resolution six years ago, and then got a 30" 2560x1600 LCD two years later. Outside of ultra-expensive solutions, nothing is higher than 2560x1600 right now, is it?

    1280x1024 was mainstream from about 7-11 years ago, and 1024x768 hasn't been the norm since around 1995 (unless you bought a crappy 14/15" LCD). We have not had a serious move to anything higher than 1920x1080 in the mainstream for a while now, but even then 1080p (or 1200p really) has been available for well over 15 years if you count the non-widescreen 1600x1200. I was running 1600x1200 back in 1995 on my 75 pound 21" CRT, for instance, and I know people that were using it in the early 90s. 2048x1536 and 2560x1600 are basically the next bump up from 1600x1200 and 1920x1200, and that's where we've stopped.

    Now, Anand seems to want even higher resolutions; personally, I'd be happier if we first found a way to make those resolutions work well for every application (i.e. DPI that extends to everything, not just text). Vista was supposed to make that a lot better than XP, but really it's still a crap shoot. Some apps work well if you set your DPI to 120 instead of the default 96; others don't change at all.
  • Pirks - Friday, October 9, 2009 - link

    I agree that maximum resolution has stagnated at 2560x1600; my point was that the average resolution of PC gamers is still moving from a pretty low 1280x1024 towards this holy grail of 2560x1600, and who knows how many years will pass before every PC gamer has such a "stagnated" resolution on his/her desk.

    So yeah, max resolution stagnated, but average resolution did not and will not.
  • I am as mad as hell - Friday, October 2, 2009 - link

    Hi everyone,

    First off, I am not mad as hell! I just registered this acct after being an Anandtech reader for 10+ years. That's right. It's also my #1 tech website. I read several others, but this WAS always my favorite.

    I don't know what happened here lately, but it's becoming more and more of a circus in here.

    I am going to make a few points and suggestions:

    1) In the old days reviews were reviews; these days there are a lot more PRE-views, picture shows, and blog (chit-chatting) entries.

    2) In the old days a bunch of hardware was rounded up and compared (mobos, memory, PSUs, etc.). I don't see that here much anymore. It's kind of worthless to me to review just one PSU or one motherboard at a time. Round 'em all up and then let's have a good ole fashioned shootout.

    3) I miss the monthly buyer's guides. What happened to them? I like to see CPU + Mobo + Mem + PSU combo recommendations in the MOST BANG FOR THE BUCK category (something most people can afford to buy/build).

    4) Time to moderate the comments section, but not censorship. My concern is not with f-words, but with trolls and comment abusers. I can't stand it. I remember the old days when a famous site, which at the time I think had maybe more readers than Anand (hint: it had the English version of "Tiburon" as part of the domain name), totally self-destructed when its forum went out of control because it wasn't moderated.

    5) Also, time to upgrade the comments software here at Anandtech. It needs an up/down ratings feature that even many newspaper websites offer these days.

  • shotage - Saturday, October 3, 2009 - link

    I agree with the idea of a comments rating system (thumbs up or down). It's a democratic way of moderating. It also saves the need for those short replies when all you want to convey is agreement or disagreement.

    Maybe also an abuse button that people can click should things get really out of control?
  • Skiprudder - Friday, October 2, 2009 - link

    The up/down idea is perfect for the site. Now why didn't I think of that! =)
  • AnnonymousCoward - Friday, October 2, 2009 - link

    I don't like commenter ratings, since they give unequal representation/visibility of comments, they affect your perception of a message before you read it, and it's one more thing to look at while skimming comments.
  • Skiprudder - Friday, October 2, 2009 - link

    I'm sorry, but I'm going to ask for a ban of SiliconDoc as well. One person has single-handedly taken over the reply section to this article. I was actually interested in reading what other folks thought of what I feel is a rather remarkable new direction that NVIDIA is taking in terms of GPU design and marketing, and instead there are literally 30 pages of a single person waging a shouting match with the world.

    Right now there isn't free speech in this discussion because an individual is shouting everyone down. If the admins don't act, people are likely to stop using the reply section altogether.
