Today marks the launch of NVIDIA's newest graphics cards: the 7900 GTX, 7900 GT and the 7600 GT. These cards are all based on an updated version of the original G70 design and offer higher performance for the dollar. Today we will see just how much faster the new NVIDIA flagship part is. But first let's take a look at what makes it different.

At the heart of this graphics launch is a die shrink. The functionality of the new parts NVIDIA is introducing is identical to that of the original G70 based lineup. Of course, to say that this is "just a die shrink" would be selling NVIDIA a little short. In the future, if either NVIDIA or ATI decides to move to TSMC's newly introduced 80nm half-node process, all that would be involved is a simple lithographic shrink. Sure, things might get tweaked a little here and there, but the move from 90nm to 80nm doesn't involve any major change in the design rules. Moving from 110nm to 90nm, by contrast, requires NVIDIA to change quite a bit about their register transfer logic (RTL), update the layout of the IC, and verify that the new hardware works as intended.

The basic design rules used to build ICs must be updated between major process shrinks because the characteristics of silicon circuits change at smaller and smaller sizes. As transistors and wires shrink, things like power density and leakage increase. Design tools often employ standard components tailored to a specific fab process, and it isn't always possible to drop in a simple replacement that fits the new design rules. These and other issues mean that parts of the design and layout must change to ensure signals get from one part of the chip to another intact, without interfering with anything else. Things like clock routing, power management, and avoiding hot spots, among many other details, must be painstakingly reworked.

In the process of reworking the hardware for a new process, a company must balance what they want from the chip with what they can afford. At smaller and smaller geometries, yield increasingly depends on a circuit's RTL, and even its high-level design plays a part. Decisions that boost speed and performance can hurt yield, die size, and power consumption; conversely, maximizing yield, minimizing die size, and keeping power consumption low can hurt performance. It isn't enough to come up with a circuit that just works: an IC design must work efficiently. Not only has NVIDIA had the opportunity to further balance these characteristics in any way they see fit, but the rules for how this must be done have changed from the way it was done on 110nm.

After the design of the IC is updated, it still takes quite a bit of time to get from the engineers' desks to a desktop computer. After the first spin of the hardware comes back from the fab, it must be thoroughly tested. If any performance, power, or yield issues are noted from this first run, NVIDIA must tweak the design further until they get what they need. Throughout this entire process, NVIDIA must work very closely with TSMC in order to ensure that everything they are doing will work well with the new fab process. As microelectronic manufacturing technology progresses, fabless design houses will have to continue to work more and more closely with the manufacturers that produce their hardware in order to get the best balance of performance and yield.

We have made quite a case for the difficulty involved in making the switch to 90nm. So why go through all of this trouble? Let's take a look at the benefits NVIDIA is able to enjoy.

NVIDIA's Die Shrink: The 7900 and 7600

  • yacoub - Friday, March 10, 2006 - link

Any idea when we'll see a comparo showing the 7900GT against the following cards?


    It is important for people running cards like those right now to know how much gain they will see going with a 7900GT versus going with a 7900GTX. Clearly they can see the difference between the 7900GT and 7900GTX in this review, but no one knows what improvement the 7900GT would offer today (with today's drivers and games) over the cards many people are still using, such as the X800XL or 7800GT.

    It's important to know if the 7900GT offers enough gain for such users over their current cards, or whether they should step all the way up to the 7900GTX.

  • spinportal - Friday, March 10, 2006 - link

    I definitely like the review and the presentation, Derek. There are definite tradeoffs for price, power load and performance.
    With all this talk of HDCP, DX10 (with Vista) and HD (1080p) PureVideo vs. AVIVO (shame on you nVidia for asking more from the consumer when ATI bundles such goodies) just around the corner, I'm still on the wait-and-see list before making the plunge to PCIe (besides AMD's M2 chipset, and Intel's Core Duo refresh to spank the Pentium D).
    What I don't get is how drastically ATI's ability to do AA with HDR (how many games truly support this? FarCry? but Splinter Cell can't? Half-Life 2 engine?) shines above nVidia's lack of it. Is this the only feature where ATI has an exclusive win over nVidia?
    Also, there was a preface of the 7900GT being marginally faster than the 7800 GTX-256, with a nice price advantage going to the 7900GT (as well as lower power load), killing off the 7800 line for newcomers. So where is ATI's high-mid or low-high $300 competing part? Along with the 1900XTX being a gratuitous, weak "ultra" offering, since a Crossfire setup only paces the 1900XT, "wasting" the XTX's 5% extra power, except to prove the King-of-the-Hill mentality. More power to ATI's customers paying a cost premium.
    The 7900GT SLI might be penny-wise & pound-foolish, as two of these cards cost substantially more than a single 7900GTX ($350 x 2 = $700 vs. $550, ~$150 extra) and draw more power (hence the hidden cost of a beefier PSU upgrade) for roughly ~15% gain (YMMV with oc'ing, or manufacturer tweaks).
    And sure, the ATI X1800GTO squeaks out a victory over the nV 7600GT, with an estimated end-of-March '06 MSRP of $249 vs. ~$180-200. For the non-graphics-fanatic, cost-conscious WoW player, the 7600GT is a nice target for a newer PCIe build. For the gung-ho FPS shooter, for a bit more, why not aim for the 7900GT instead of the X1800GTO?
    For curiosity's sake, Derek, could you downclock a 7900GTX to GT core/mem clock speeds and see how much of a difference the extra 256MB makes? If the GT has good OC headroom, it could be the better bargain. With the same basic core, how far can the clock be pushed on the GT? As for memory, how much is 256MB more of GDDR3 RAM worth (over the same bus), and how far can the GT's bandwidth be pushed to help the benchmarks? Is nVidia able to push a GT to GTX levels thanks to better active cooling? This might be what tweakers look for to justify their purchase. We saw that the 7800GTX-512 had definitive victories over a 7800GTX-256/7900GT of about ~20%, which brings nearly playable frame rates up to the 60 fps mark. Maybe an enterprising third party might offer a $400 7900GT-512 with a slightly higher mem clock; there is room for opportunistic pricing there.
  • yacoub - Friday, March 10, 2006 - link

    I like what XFX and eVGA are offering.
  • yacoub - Friday, March 10, 2006 - link

    Good article, good conclusion.

    7900GTX @ $475 is perfect competition for the X1900XT/XTX.
    7900GT @ $300 is a great price for 7800GTX performance.

    A 7900GT with a quieter, better cooling solution and accompanying overclock for around ~$350 will be my next purchase as soon as Asus or Gigabyte or someone releases such a card at such a price.
  • Leper Messiah - Friday, March 10, 2006 - link

    too bad the only 7900GTX I've seen is $559.00. Man, I thought I read somewhere in Video that these things were supposed to be uber cheap. Guess that was just rumor. :(
  • yacoub - Friday, March 10, 2006 - link

    Really? I saw ones as low as $499 in the RealTime Pricing results...
  • KHysiek - Friday, March 10, 2006 - link

    I know it can be used to beat the world record for fastest graphics card, but who are these tests targeted at? I think they are useless for 99.99999% of readers. Athlon FXs and SLI all over the place, and almost no non-SLI setups of currently available cards. How is a typical user, whose card is 6-18 months old, supposed to evaluate the speed of new cards and the value of upgrading? I think that's the main task of such a test - convincing people to upgrade. Who has a system like your testbed, and who uses SLI in real life - 0.00001% of these readers? Maybe even less.
  • yacoub - Friday, March 10, 2006 - link

    some of us have been making this request for months now but it routinely falls on deaf ears. it appears most anandtech readers would prefer to read what is essentially technical advertising for GPU performance as tested in ubersystems 99% of us will never own.
  • Egglick - Friday, March 10, 2006 - link

    Trying to compare cards with all those SLI and Crossfire scores everywhere can get really irritating.
  • bigboxes - Thursday, March 9, 2006 - link

    Can we please have separate charts for SLI/Crossfire setups and the single cards that normal people will actually end up using? That way we can easily compare apples to apples. I'm sure the 1% of you who use an SLI/Crossfire setup will like the articles, but the rest of us normal people would appreciate a direct comparison between the various single cards.
