Final Words

There's no question that NVIDIA has built a very impressive chip with the GT200. It is the largest microprocessor we've ever reviewed, and NVIDIA has packed an unreal amount of computational horsepower into it. What's even more impressive is that we can fully expect NVIDIA to double the transistor count once again in about 18 months, and once more we'll be in complete awe of what can be done. We're a little over a decade away from being able to render and display images that would be nearly indistinguishable from reality, and it's going to take massive GPUs like the GT200 to get us there.

Interestingly, though, AMD has made public its decision to go in the opposite direction. No longer will ATI push as many transistors as possible into giant packages to do battle with NVIDIA for the coveted "halo" product that convinces the masses an entire company is better because it built the fastest possible thing, regardless of value. ATI's new direction is one it stumbled into somewhat inadvertently: providing midrange cards that offer the highest possible performance per dollar.

With AMD dropping out of the high-end single-GPU space (it will still compete there with multi-GPU solutions), NVIDIA will be left alone at the top of the performance charts for the foreseeable future. But as we saw from our benchmarks, that doesn't always work out quite like we would expect.

There's another very important aspect of GT200 that's worth considering: a die-shrunk, higher-clocked version of GT200 will eventually compete with Intel's Larrabee GPU. The GT200 is big enough that it could easily smuggle a Penryn into your system without you noticing, which despite being hilarious also highlights a very important point: NVIDIA could easily toss a high-performance, general-purpose sequential microprocessor onto its GPUs if it wanted to. At the same time, if NVIDIA can build a 1.4 billion transistor chip that's nearly 6x the size of Penryn, so can Intel - the difference being that Intel already has the high-performance, general-purpose, sequential microprocessor that it could integrate alongside a highly parallel GPU workhorse. While Intel has remained relatively quiet on Larrabee as of late, NVIDIA's increased aggressiveness towards its Santa Clara neighbors is making more sense every day.

We already know that Larrabee will be built on Intel's 45nm process, but given the level of performance it will have to compete with, it wouldn't be too far-fetched for Larrabee to be Intel's first 1-2 billion transistor microprocessor for use in a desktop machine (Nehalem is only 781M transistors).

Intel had better keep an eye on NVIDIA as the GT200 cements its leadership position in the GPU market. NVIDIA hand-designed much of the logic that went into the GT200 and managed to produce the chip without investing in a single fab, which is a scary combination for Intel to go up against. That's not to say Intel couldn't out-engineer NVIDIA here, but it's going to be a challenging competition.

NVIDIA has entered a new realm with the GT200, producing a world-class microprocessor that is powerful enough to appear on even Intel's radar. If NVIDIA could enable GPU acceleration in more applications, more quickly, it would actually be able to give Intel a tough time even before Larrabee arrives. Fortunately for Intel, NVIDIA is still just getting started on moving into the compute space.

But then we have the question of whether or not you should buy one of these things. As impressive as the GT200 is, the GeForce GTX 280 is simply overpriced for the performance it delivers. It is NVIDIA's fastest single-card, single-GPU solution, but for $150 less than a GTX 280 you get a faster graphics card with NVIDIA's own GeForce 9800 GX2. The obvious downside to the GX2 over the GTX 280 is that it is a multi-GPU card and there are going to be some situations where it doesn't scale well, but overall it is a far better buy than the GTX 280.

Even looking at the comparison of four- and two-card SLI, the GTX 280 doesn't deliver $300 more in value today. NVIDIA's position is that future games will have higher compute and bandwidth requirements and that the GTX 280 will therefore have more longevity. While that may or may not prove true depending on what actually happens in the industry, we can't recommend something based on possible future performance. It just doesn't make sense to buy something today that won't give you better performance in the software that's currently available, especially when it costs so much more than a faster solution.

The GeForce GTX 260 is a bit more reasonable. At $400 it is generally equal to, if not faster than, the Radeon HD 3870 X2, and with no other NVIDIA cards occupying the $400 price point it is without a competitor within its own family. Unfortunately, 8800 GT SLI is much cheaper, and many people already have an 8800 GT they could augment.

The availability of cheaper faster alternatives to GT200 hardware is quite dangerous for NVIDIA, as value does count for quite a lot even at the high end. And an overpriced high end card is only really attractive if it's actually the fastest thing out there.

But maybe, with the high-end threat from AMD diminished, NVIDIA has decided to make a gutsy move by positioning its hardware such that multi-GPU solutions do have higher value than single-GPU solutions. Maybe this is all just a really good way to sell more SLI motherboards.



View All Comments

  • elchanan - Monday, June 30, 2008 - link

    VERY eye-opening discussion on TMT. Thank you for it.
    I've been trying to understand how GPUs can be competitive for scientific applications which require lots of inter-process communication, and "local" memory, and this appears to be an elegant solution for both.

    I can identify the weak points of it being hard to program for, as well as requiring many parallel threads to make it practical.

    But are there other weak points?
    Is there some memory-usage profile, or inter-process data bandwidth, where the trick doesn't work?
    Perhaps some other algorithm characteristic which GPUs can't address well?

  • Think - Friday, June 20, 2008 - link

    This card is a junk bond when taking into consideration cost/performance/power consumption.

    Reminds me of a 1976 Cadillac with a 7.7-liter V8 making only 210 horsepower at 3,600 rpm.

    It's a PIG.
  • Margalus - Tuesday, June 24, 2008 - link

    this shows how many people don't run a dual monitor setup. I would snatch up one of these 260/280's over the GX2's any day, gladly!!

    The performance may not be quite as good as an SLI setup, but it will be much better than a single card, which is what a lot of us are stuck with, since you CANNOT run a dual monitor setup with SLI!!!
  • iamgud - Wednesday, June 18, 2008 - link

    "I can has vertex data"


    These look fine, but need to be moved to 55nm. By the time I save up for one, they will be.
  • calyth - Tuesday, June 17, 2008 - link

    Well, what the heck are they doing with 1.4B transistors? That's the largest die TSMC has produced so far.
    The larger the core, the more likely a blemish will take out the chip. As far as I know, didn't Phenom (four cores on one die) suffer from low-yield problems?
  • gochichi - Tuesday, June 17, 2008 - link

    You know, when you consider the price and you look at the benchmarks, you start looking for features and NVIDIA just doesn't have the features going on at all.

    COD4 -- Ran perfectly at 1920x1200 with last-gen stuff (the HD3870 and 8800GT(S)), so now the benchmarks have to be run at outrageous resolutions that only a handful of monitors can handle (and those customers already bought SLI or XFIRE, or a GX2, etc.)

    Crysis is a pig of a game, but it's not that great (it is a good technical preview, though, I admit), and I don't think even these new cards really satisfy this system hog... so maybe this is a win, but I doubt too many people care... if you had an 8800GT or whatever, you've already played this game "well enough" on medium settings and are plenty tired of it. Though we'll surely fire it up in the future once our video cards "happen to be able to run it on high," very few people are going to go out of their way to spend $500+ on this silly title.

    In any case, then you look at ATI, and they have the HDMI audio and the DX 10.1 support, and all they have to do at this point is A) get a good price out the door, B) make a good profit (make them cheap, unlike these NVIDIA cards, which are no doubt expensive to make), and C) handily beat the 8800GTS, and many of us are going to be sold.

    These cards are what I would call a next-gen preview. Some overheated prototypes of things to come. I doubt AMD will be as fast, and in fact I hope they aren't, just as long as they keep the power consumption in check, the price, and the value (HDMI, DX10.1, etc).

    Today's release reminded me that NVIDIA is the underdog; they are the company that released the FX series (desperate technology, like these are). ATI was around well before 3DFX made 3D accelerators. They were down for a bit, and we all said it was over for ATI, but this desperate release from NVIDIA makes me think that ATI is going to be quite tough to beat.

  • Brazofuerte - Tuesday, June 17, 2008 - link

    Can I go somewhere to find the exact settings used for these benchmarks? I appreciate the tech side of the write-up, but when it comes to determining whether I want one of these for my gaming machine (I ordered mine at midnight), I find HardOCP's numbers much more useful.
  • woofermazing - Tuesday, June 17, 2008 - link

    AMD/ATI isn't going to abandon the high end like your article implies. Their plan is to make a really good midrange chip and duct-tape two cores together a la the X2's. NVIDIA goes from the high end down, ATI from the midrange up. From the look of it, ATI might have the right idea, at least this time around. I seriously doubt we'll see a two-core version of this monster anytime soon.
  • DerekWilson - Tuesday, June 17, 2008 - link

    they are abandoning the high-end single GPU ...

    we did state that they are planning on competing in the high-end space with multi-GPU cards, but that there are drawbacks to that.

    we'll certainly have another article coming out sometime soon that looks a little more closely at AMD's strategy.
