Process vs. Architecture: The Difference Between ATI and NVIDIA

Ever since NV30 (GeForce FX), NVIDIA hasn’t been first to transition to any new manufacturing process. Instead of dedicating engineers to process technology, NVIDIA chooses to put more of its resources into architecture design. The flipside is true at ATI: ATI is much less afraid of new process nodes and thus devotes more engineering resources to manufacturing. Neither approach is the right one; both have their tradeoffs.

NVIDIA’s approach means that on a mature process, it can execute frustratingly well. It also means that across major process boundaries (e.g. 55nm to 40nm), NVIDIA won’t be as competitive, so it needs to spend more time making its architecture more competitive. And you can do a lot with architecture alone: most of the effort put into RV770 went into architecture, and look at what it gave ATI compared to the RV670.

NVIDIA has historically believed it should let ATI take all of the risk of jumping to a new process. Once the process is mature, NVIDIA would switch over. That’s great for NVIDIA, but it does mean that when it comes to jumping to a brand new process, ATI has more experience. Because ATI puts itself in the situation of having to jump to an unproven process earlier than its competitor, it has to dedicate more engineers to process technology in order to mitigate the risk.

In talking to me, Carrell was quick to point out that moving between manufacturing processes is not a transition. A transition implies a smooth gradient from one technology to another. Moving between major transistor nodes (e.g. 55nm to 45nm, not 90nm to 80nm) is less of a transition and more of a jump. You try to prepare for the jump, you try your best to land exactly where you want to, but once your feet leave the ground there’s very little you can do to control where you end up.

Any process node jump involves a great deal of risk. The trick as a semiconductor manufacturer is how you minimize that risk.

At some point, both manufacturers have to build chips on a new process node otherwise they run the risk of becoming obsolete. If you’re more than one process generation behind, it’s game over for you. The question is, what type of chip do you build on a brand new process?

There are two schools of thought here: big jump or little jump. The size refers to the size of the chip you’re using in the jump.

Proponents of the little jump believe the following: on a new process, the defect density (number of defects per unit area on the wafer) isn’t very good. You’ll have a high number of defects spread out all over the wafer. In order to minimize the impact of high defect density, you should use a little die.

If we have a wafer with 100 defects across its surface and can fit 1000 die on it, the chance that any one die will be hit with a defect is only 10%.


A hypothetical wafer with 7 defects and a small die. Individual die are less likely to be impacted by defects.

The big jump is naturally the opposite: you use a big die on the new process. Now instead of 1000 die sharing 100 defects, you might only have 200 die sharing 100 defects. If the defects were evenly distributed (which isn’t how it works), the chance of a die being hit with a defect is now 50%.


A hypothetical wafer with 7 defects and a large die.
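The back-of-envelope math above can be sketched in a few lines. A minimal sketch: the first function is the article's even-distribution model (defects divided by die count), and the second is a slightly more careful assumption of my own, not from the article, in which each defect lands on a uniformly random die so a die that catches two defects is only counted as bad once.

```python
def p_hit_naive(defects: int, dies: int) -> float:
    """Article's back-of-envelope model: defects are spread evenly,
    so each die's chance of catching one is defects / dies."""
    return defects / dies

def p_hit_uniform(defects: int, dies: int) -> float:
    """Refinement (an assumption, not from the article): each defect
    lands on a uniformly random die, so a die is bad unless every
    defect misses it. Multi-hit die are counted once."""
    return 1.0 - (1.0 - 1.0 / dies) ** defects

# Little jump: 1000 small die sharing 100 defects
print(p_hit_naive(100, 1000))             # 0.1 -> the article's 10%
# Big jump: only 200 large die sharing the same 100 defects
print(p_hit_naive(100, 200))              # 0.5 -> the article's 50%
print(round(p_hit_uniform(100, 200), 3))  # ~0.394 under the uniform-landing model
```

Either way the direction is the same: for a fixed number of defects per wafer, bigger die mean a larger fraction of bad die, which is why yields alone argue for the little jump.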

Based on yields alone, there’s no reason you’d ever want to do a big jump. But the big jump approach has its benefits.

The obvious reason to do a big jump is if the things you’ll be able to do by making huge chips (e.g. outperform the competition) will net you more revenue than shipping more of a smaller chip would.

The not-so-obvious, but even more important, reason to do a big jump is actually the reason most don’t like the big jump philosophy: larger die are more likely to expose process problems because they fail more often. With more opportunities to fail, you get more opportunities to see shortcomings in the process early on.

This is risky to your product, but it gives you a lot of learning that you can then use for future products based on the same process.

Comments

  • Spoelie - Thursday, February 18, 2010 - link

    phoronix.com
    for all things ATi + Linux
  • SeanHollister - Monday, February 15, 2010 - link

    Fantastic work, Anand. It's so difficult to make pieces like this work without coming across as puffery, but everything here feels genuine and evenhanded. Here's hoping for similar articles featuring individuals at NVIDIA, Intel and beyond in the not-too-distant future.
  • boslink - Monday, February 15, 2010 - link

    Just like many others I've been reading/visiting AnandTech for years, but this article made me register just to say: damn good job.

    Also, it's been a long time since I read an article from cover to cover. Usually I read the first page and maybe the second (enough to guess what's in the other pages) and then skip to the conclusions.

    But this article reminds us that a graphics card/chip is not only silicon. The real-people story is what makes this article great.

    Thanks Anand
  • AmdInside - Monday, February 15, 2010 - link

    Great article as usual. Sunspot seems like the biggest non-factor in the 5x00 series. Except for hardware review sites, which have lots of monitors lying around, I just don't see a need for it. It is like NVIDIA's 3D Vision: the concept sounds good, but in general practice it is not very realistic that a user will use it. Just another check box that a company can point to with an OEM and say we have it and they don't. NVIDIA has had an Eyefinity-like feature for a while (SLI Mosaic). It is just very expensive, since it is targeted at businesses rather than consumers, and it offers some features Eyefinity doesn't. I think NVIDIA just didn't believe consumers really wanted it, but added it afterwards so that ATI wouldn't have a checkbox to brag about. But NVIDIA probably still believes this is mainly a business feature.

    It is always interesting to learn how businesses make product decisions internally. I always hate reading interviews of PR people. I learn zero. Talk to engineers if you really want to learn something.
  • BelardA - Tuesday, February 16, 2010 - link

    I think the point of Eyefinity is that it's more hardware-based and natural... not requiring so much work from the game publisher. A way of having higher screen detail over a span of monitors.

    A few games will actually span 2 or 3 monitors. Or some will use the 2nd display as a control panel. With Eyefinity, it tells the game "I have #### x #### pixels" and auto-divides the signal onto 3 or 6 screens while staying playable. That is quite cool.

    But as you say, it's a bit of a non-factor. Most users will still only have one display to work with. Hmmm, there was a monitor that was almost seamless, 3 monitors built together - where is that?

    Also, I think the TOP-SECRET aspect of Sun-Spots was a way of testing security. Eyefinity isn't a major thing... but the hiding of it was.

    While employees do move about in the business, the sharing of trade-secrets could still get them in trouble - if caught. It does happen, but how much?
  • gomakeit - Monday, February 15, 2010 - link

    I love these insightful articles! This is why Anandtech is one of my favorite tech sites ever!
  • Smell This - Monday, February 15, 2010 - link

    Probably could have done without the snide reference to the CPU division at the end of the article - it added nothing and was a detraction from the overall piece.

    It also implies a symbiotic relationship between AMDs 40+ year battle with Chipzilla and the GPU Wars with nV. Not really an accurate correlation. The CPU division has their own headaches.

    It is appropriate to note, however, that both divisions must bring their 'A' Game to the table with the upcoming convergence on-die of the CPU-GPU.
  • mrwilton - Monday, February 15, 2010 - link

    Thank you, Anand, for this great and fun-to-read article. It really has been some time where I have read an article cover to cover.

    Keep up the excellent work.

    Best wishes, wt
  • Ananke - Monday, February 15, 2010 - link

    I have a 5850; it is a great card. However, what people are saying about PC gaming is true: gaming on the PC is slowly fading towards consoles. You cannot justify a several-thousand-dollar PC versus a $200-300 multimedia console.

    So a powerful GPU is a supercomputer by itself. Please, ATI, make a better Avivo transcoder and push open software development using Stream further. We need many applications, not just Photoshop and Cyberlink. We need hundreds, many of them free, to utilize this computational power. Then it will make sense to use these cards.
  • erple2 - Tuesday, February 16, 2010 - link

    Perhaps. However, this "PC Gaming is being killed off by the 2-300 multimedia console" war has been going on since the Playstation 1 came out. PC gaming is still doing very well.

    I think that there will always be some sort of market (even if only 10% - that's significant enough to make companies take notice) for PC Gaming. While I still have to use the PC for something, I'll continue to use it for gaming, as well.

    Reading the article, I find it poignant that the focus is on //execution// rather than //ideas//. It reminds me of a blog post written by Jeff Atwood (http://www.codinghorror.com/blog/2010/01/cultivate... if you're interested) about the exact same thing. Focus on what you //do//. Execution (i.e. "what do we have an 80%+ chance of getting done on time") is more important than the idea (i.e. features you can claim on a spec sheet).

    As a hardware developer (goes the same for any software developer), your job is to release the product. That means following a schedule. That means focusing on what you can do, not on what you want to do. It sounds to me like ATI has been following that paradigm, which is why they seem to be doing so well these days.

    What's particularly encouraging about the story written was that Management had the foresight to actually listen to the technical side when coming up with the schedules and requirements. That, in and of itself, is something that a significant number of companies just don't do well.

    It's nice to hear from the internal wing of the company from time to time, and not just the glossy presentation of hardware releases.

    I for one thoroughly enjoyed the read. I liked the perspective that the RV5-- err Evergreen gave on the process of developing hardware. What works, and what doesn't.

    Great article. Goes down in my book with the SSD and RV770 articles as some of the best IT reads I've done.
