Introduction

Today marks the launch of NVIDIA's newest graphics cards: the 7900 GTX, 7900 GT, and 7600 GT. All three are based on an updated version of the original G70 design and offer higher performance per dollar. We will see just how much faster the new NVIDIA flagship part is, but first, let's take a look at what makes it different.

At the heart of this graphics launch is a die shrink. The functionality of the new parts NVIDIA is introducing is identical to that of the original G70-based lineup. Of course, to say that this is "just a die shrink" would be selling NVIDIA a little short. If either NVIDIA or ATI later decides to move to TSMC's newly introduced 80nm half-node process, all that would be involved is a simple lithographic shrink: things might get tweaked a little here and there, but the move from 90nm to 80nm doesn't involve any major change in the design rules. Moving from 110nm to 90nm, by contrast, required NVIDIA to change quite a bit about its register transfer level (RTL) design, update the layout of the IC, and verify that the new hardware works as intended.
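
To put rough numbers on what a full shrink buys, here is a minimal back-of-the-envelope sketch. It assumes idealized, purely optical scaling of linear dimensions; real designs rarely scale this cleanly, so treat the output as an upper bound on the benefit:

```python
# Back-of-the-envelope scaling for an idealized lithographic shrink.
# Real chips rarely hit these numbers, but they show why a shrink is
# worth the engineering effort described above.

def ideal_shrink(old_nm: float, new_nm: float) -> None:
    linear = new_nm / old_nm   # linear dimensions scale by this factor
    area = linear ** 2         # die area scales with the square
    print(f"{old_nm:.0f}nm -> {new_nm:.0f}nm: "
          f"linear x{linear:.2f}, area x{area:.2f} "
          f"(~{(1 - area) * 100:.0f}% smaller die, more dice per wafer)")

ideal_shrink(110, 90)  # the full-node move NVIDIA just made
ideal_shrink(90, 80)   # the half-node move discussed above
```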

The basic design rules used to build ICs must be updated between major process shrinks because the characteristics of silicon circuits change as feature sizes fall. As transistors and wires get smaller, power density and leakage increase. Design tools often employ standard components tailored to a specific fab process, and it isn't always possible to drop in a simple replacement that fits the new design rules. These and other issues mean that parts of the design and layout must change to ensure signals get from one part of the chip to another intact, without interfering with anything else. Clock routing, power management, avoiding hot spots, and many other details must be painstakingly reworked.
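
As a rough illustration of the power-density point, the first-order dynamic power of a switching gate is P = aCV^2f. The sketch below plugs in purely illustrative capacitance, voltage, and clock values (not measured G70/G71 figures) to show how watts per square millimetre can rise even when per-gate power barely moves:

```python
# Hedged, first-order sketch of why power density climbs at smaller nodes.
# Dynamic power per gate: P = alpha * C * V^2 * f. All numbers below are
# illustrative assumptions, not measured values for any real chip.

def dynamic_power(alpha: float, cap_farads: float, volts: float, freq_hz: float) -> float:
    """First-order switching power of a gate: alpha * C * V^2 * f."""
    return alpha * cap_farads * volts ** 2 * freq_hz

old = dynamic_power(alpha=0.2, cap_farads=1.0e-15, volts=1.4, freq_hz=430e6)  # "110nm-ish"
new = dynamic_power(alpha=0.2, cap_farads=0.8e-15, volts=1.3, freq_hz=650e6)  # "90nm-ish"

area_ratio = (90 / 110) ** 2  # ideal area scaling of the shrink
print(f"per-gate power ratio: {new / old:.2f}")
print(f"power density ratio:  {new / old / area_ratio:.2f}")  # > 1 means hotter per mm^2
```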

In reworking the hardware for a new process, a company must balance what it wants from the chip with what it can afford. At smaller and smaller geometries, yield is increasingly affected by the RTL of a circuit, and even its high-level design can play a part. Decisions that favor speed and performance can hurt yield, die size, and power consumption; conversely, maximizing yield, minimizing die size, and keeping power consumption low can limit performance. It isn't enough to come up with a circuit that just works: an IC design must work efficiently. Not only has NVIDIA had the opportunity to rebalance these characteristics in any way it sees fit, but the rules for how this must be done have changed from the way it was done at 110nm.
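
One way to make the yield/die-size tension concrete is the classic Poisson defect model, which estimates yield as Y = e^(-A*D) for die area A and defect density D. The sketch below uses die areas roughly in the class of NVIDIA's large GPUs and entirely made-up defect densities; it is an illustration of the tradeoff, not TSMC data:

```python
# Hedged sketch: how die area trades against yield under a Poisson
# defect model, Y = exp(-A * D). Defect densities here are invented
# purely for illustration; real fab numbers are closely guarded.

import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of dice expected to be defect-free under a Poisson model."""
    return math.exp(-area_cm2 * defects_per_cm2)

big_die = 3.3    # ~330 mm^2, roughly a large 110nm GPU (illustrative)
small_die = 2.0  # ~200 mm^2, roughly its 90nm successor (illustrative)

for d in (0.3, 0.5, 1.0):  # assumed defects per cm^2 as a process matures
    print(f"D={d}: big die {poisson_yield(big_die, d):.0%} "
          f"vs small die {poisson_yield(small_die, d):.0%} good dice")
```

Even at the same defect density, the smaller die yields noticeably more good parts, which is part of why a shrink can lower cost even before any clock-speed gains.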

After the design of the IC is updated, it still takes quite a bit of time to get from the engineers' desks to a desktop computer. After the first spin of the hardware comes back from the fab, it must be thoroughly tested. If any performance, power, or yield issues are noted from this first run, NVIDIA must tweak the design further until they get what they need. Throughout this entire process, NVIDIA must work very closely with TSMC in order to ensure that everything they are doing will work well with the new fab process. As microelectronic manufacturing technology progresses, fabless design houses will have to continue to work more and more closely with the manufacturers that produce their hardware in order to get the best balance of performance and yield.

We have made quite a case for the difficulty involved in making the switch to 90nm. So why go through all of this trouble? Let's take a look at the benefits NVIDIA is able to enjoy.

Comments

  • redlotus - Thursday, March 9, 2006

    Where the heck is the X3: Reunion rolling demo benchmark? I was all geeked when AT reviewed it and said "it will make a fine addition to our round of benchmarks." Well then when the heck are you going to start using it? I have yet to see it being used for any of the articles posted since the review.
  • DerekWilson - Thursday, March 9, 2006

    We really will be including X3 in our benchmarks ^_^;;

    The benchmark does take quite a long time and we needed to optimize our performance testing in order to make sure we could get the article up for the launch.

    As I have mentioned, we will be doing a follow up article, and I will look into including the X3 demo.

    Thanks,
    Derek Wilson
  • 5150Joker - Thursday, March 9, 2006

    Check out these discrepancies with Anandtech's review, boy has this site been going downhill lately:

    From your older review:

    http://images.anandtech.com/graphs/ati%20radeon%20...

    Then today's review:

    http://images.anandtech.com/graphs/7900%20and%2076...


    How did the XTX CrossFire lose 11 FPS with a very mild bump in resolution? Worse yet, their editors didn't even mention which drivers they used for their review.
  • Cygni - Friday, March 10, 2006

    quote:

    Check out these discrepancies with Anandtech's review, boy has this site been going downhill lately:

    Wow, it's like numbers change with different motherboards, chipsets, and driver revisions. ALERT THE PRESS!
  • Spinne - Thursday, March 9, 2006

    That is really odd. I'd expect the numbers to swing a little, but 11 fps is 25% of 44 fps. Could they be using different benchmarks? At least they aren't simply using the numbers from the X1900 review and are actually retesting stuff.
  • DerekWilson - Thursday, March 9, 2006

    We retested with an updated motherboard (RD580) and an updated driver (CAT 6.2).

    We used the same test for F.E.A.R. (the built in performance test).

    I'm not sure why performance would drop in this case.
  • DerekWilson - Thursday, March 9, 2006

    I've been looking into this, and we are also now using F.E.A.R. 1.03 rather than the 1.02 we used last time.

    I retested the X1900 XTX CrossFire and got the same results. I'm really not sure what happened with this, but I'll keep poking around.
  • munky - Thursday, March 9, 2006

    FEAR is one game where the X1900s have had a big lead over the 7800s, and your results from today just don't make sense. How does an X1900 XTX get 59 fps at 1280x1024 when the GTX 512 also gets 59 and the 7900 GTX gets 63? Compare it to the results from another site: http://www.techreport.com/reviews/2006q1/geforce-7... At 1280x960 they place the XTX at 57 fps, the 7900 GTX at 46, and the GTX 512 at 44, which is more in line with the results I have seen before.
  • DigitalFreak - Thursday, March 9, 2006

    There is a known bug in the current drivers that causes a performance drop with the 7900 GTX in FEAR. Check out HardOCP's preview, where they use the updated driver from NVIDIA. FEAR scores are the same as or higher than the X1900 XT(X)'s.
  • DerekWilson - Thursday, March 9, 2006

    We went back and updated our performance numbers with the aforementioned driver fix.

    NVIDIA released it to the press late in the weekend, but we felt the performance increase was important enough to retest with the new driver.

    I haven't read Scott's article at the Tech Report yet, so I don't know what driver he used.
