Final Words

There's no question that NVIDIA has built a very impressive chip with the GT200. It's the largest microprocessor we've ever reviewed, and NVIDIA has packed an unreal amount of computational horsepower into it. What's even more impressive is that we can fully expect NVIDIA to double its transistor count again in about 18 months, and once more we'll be left in complete awe of what can be done. We're a little over a decade away from being able to render and display images that are nearly indistinguishable from reality, and it's going to take massive GPUs like the GT200 to get us there.
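
To put that doubling cadence in perspective, here's a quick back-of-the-envelope projection. The steady 18-month doubling period is our extrapolation of the trend described above, not anything from NVIDIA's roadmap:

```python
# Back-of-the-envelope projection, assuming the historical ~18-month
# doubling cadence holds (our assumption, not NVIDIA's roadmap).
GT200_TRANSISTORS = 1.4e9     # GT200's transistor count
DOUBLING_PERIOD_YEARS = 1.5   # roughly 18 months per doubling

def projected_transistors(years_out: float) -> float:
    """Transistor budget after years_out years of steady doubling."""
    return GT200_TRANSISTORS * 2 ** (years_out / DOUBLING_PERIOD_YEARS)

for years in (1.5, 3.0, 5.0, 10.0):
    print(f"{years:4.1f} years out: ~{projected_transistors(years) / 1e9:.0f}B transistors")
# Ten years of this cadence is a factor of 2^(10/1.5), roughly 100x.
```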

Interestingly, though, AMD has made public its decision to go in the opposite direction. No longer will ATI push as many transistors as possible into giant packages to do battle with NVIDIA for the coveted "halo" product, the one that inspires the masses to think an entire company is better because it made the fastest possible thing, regardless of value. ATI's new direction is one it stumbled into somewhat inadvertently: providing midrange cards that offer the highest performance per dollar possible.

With AMD dropping out of the high-end single-GPU space (it will still compete with multi-GPU solutions), NVIDIA will be left all alone at the top of the performance charts for the foreseeable future. But as we saw in our benchmarks, that doesn't always work out quite the way we would expect.

There's another very important aspect of GT200 worth considering: a die-shrunk, higher-clocked version of GT200 will eventually compete with Intel's Larrabee GPU. The GT200 is big enough that it could easily smuggle a Penryn into your system without you noticing, which, despite being hilarious, also highlights a very important point: NVIDIA could easily toss a high-performance, general-purpose, sequential microprocessor onto its GPUs if it wanted to. At the same time, if NVIDIA can build a 1.4 billion transistor chip that's nearly 6x the size of Penryn, so can Intel - the difference being that Intel already has the high-performance, general-purpose, sequential microprocessor it could integrate alongside a highly parallel GPU workhorse. While Intel has remained relatively quiet on Larrabee as of late, NVIDIA's increased aggressiveness towards its Santa Clara neighbors is making more sense every day.

We already know that Larrabee will be built on Intel's 45nm process, but given the level of performance it will have to compete with, it wouldn't be too far-fetched for Larrabee to be Intel's first 1 to 2 billion transistor microprocessor for use in a desktop machine (Nehalem is only 781M transistors).
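
For a rough sense of scale, the sketch below compares the GT200 and Nehalem transistor counts quoted above with dual-core Penryn's roughly 410 million. The die areas are commonly cited approximate figures (about 576 mm² for GT200 and 107 mm² for dual-core Penryn) that we're assuming for illustration, and keep in mind Penryn is a 45nm chip while GT200 is built at 65nm:

```python
# Rough scale comparison. Transistor counts are the figures quoted above;
# die areas are commonly cited approximations, assumed here for illustration.
chips = {
    # name: (transistors, approximate die area in mm^2)
    "GT200 (65nm)":     (1.4e9, 576),
    "Penryn dual-core": (410e6, 107),
    "Nehalem":          (781e6, None),   # die area intentionally omitted
}

gt200_t, gt200_a = chips["GT200 (65nm)"]
penryn_t, penryn_a = chips["Penryn dual-core"]
nehalem_t, _ = chips["Nehalem"]

print(f"GT200 vs. Penryn transistors:  {gt200_t / penryn_t:.1f}x")
print(f"GT200 vs. Penryn die area:     {gt200_a / penryn_a:.1f}x")
print(f"GT200 vs. Nehalem transistors: {gt200_t / nehalem_t:.1f}x")
```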

Intel had better keep an eye on NVIDIA as the GT200 cements its leadership position in the GPU market. NVIDIA hand-designed much of the logic that went into the GT200 and managed to produce the chip without investing in a single fab; that's a scary combination for Intel to go up against. That's not to say Intel couldn't out-engineer NVIDIA here, but it's going to be a challenging competition.

NVIDIA has entered a new realm with the GT200, producing a world-class microprocessor that is powerful enough to appear even on Intel's radar. If NVIDIA could enable GPU acceleration in more applications, and do it faster, it would actually be able to give Intel a tough time before Larrabee arrives. Fortunately for Intel, NVIDIA is still just getting started in the compute space.

But then there's the question of whether or not you should buy one of these things. As impressive as the GT200 is, the GeForce GTX 280 is simply overpriced for the performance it delivers. It is NVIDIA's fastest single-card, single-GPU solution, but for $150 less than a GTX 280 you can get a faster graphics card in NVIDIA's own GeForce 9800 GX2. The obvious downside to the GX2 is that it's a multi-GPU card and there will be some situations where it doesn't scale well, but overall it is a far better buy than the GTX 280.

Even looking at the comparison of four- and two-card SLI, the GTX 280 doesn't deliver $300 more in value today. NVIDIA's position is that future games will have higher compute and bandwidth requirements and that the GTX 280 will therefore have more longevity. While that may or may not be true depending on what actually happens in the industry, we can't recommend something based on possible future performance. It just doesn't make sense to buy something today that won't give you better performance in the software that's currently available, especially when it costs so much more than a faster solution.
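
The value argument here really boils down to performance-per-dollar arithmetic. Below is a minimal sketch of that comparison using the GTX 260's $400 price, the GX2's roughly $500 street price, and the GTX 280's $650 launch price; the frame rates are illustrative placeholders only, chosen to match the qualitative ranking above, so substitute real numbers from the benchmark pages for the game you actually care about:

```python
# Performance-per-dollar sketch. Prices reflect the launch pricing discussed
# in the text; the fps values are placeholders for illustration only and
# should be replaced with real benchmark numbers for a given game/resolution.
cards = {
    # name: (price in USD, average fps -- placeholder)
    "GeForce GTX 280":  (650, 45),
    "GeForce 9800 GX2": (500, 50),
    "GeForce GTX 260":  (400, 38),
}

ranked = sorted(cards.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
for name, (price, fps) in ranked:
    print(f"{name:17s} {fps / price * 100:.1f} fps per $100")
```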

The GeForce GTX 260 is a bit more reasonable. At $400 it is generally equal to, if not faster than, the Radeon HD 3870 X2, and with no other NVIDIA cards occupying the $400 price point it has no competitor within its own family. Unfortunately, 8800 GT SLI is much cheaper, and many people already have an 8800 GT they could simply add a second card to.

The availability of cheaper, faster alternatives to GT200 hardware is quite dangerous for NVIDIA, as value counts for quite a lot even at the high end. An overpriced high-end card is only really attractive if it's actually the fastest thing out there.

But maybe, with the reduced high-end threat from AMD, NVIDIA has decided to make a gutsy move by positioning its hardware such that multi-GPU solutions offer higher value than single-GPU solutions. Or maybe this is all just a really good way to sell more SLI motherboards.

Comments

  • epobirs - Monday, June 16, 2008

    There is an important precedent that gives Nvidia good reason not to rush to a new, smaller process node. Recall when ATI first became a serious player in gaming GPUs with the 9700. It was, for its time, a big chip pushing the limits of its process node, while Nvidia at the time was concentrating on bleeding-edge process technology. Nvidia's chips got stomped by ATI's that generation, in large part because the ATI chip made far better use of its transistors.
  • anartik - Monday, June 16, 2008

    We can agree the pricing sucks. But the point that seems to be missing is that Nvidia promised a 50% performance improvement and they delivered. The 280 delivers 45 FPS vs. 32 FPS for the 9800 GT in Assassin's Creed. That's just shy of 50% (48 FPS), which is a huge performance increase compared to what we have been getting the past couple of years from a new card. Slap two 280s on a card and it vaporizes the 9800 GX2 or any SLI/Crossfire solution. The 9800 GX2 scales ~63% over the 9800 GTX, so if you do the same for a 280 GX2 (or SLI) you get roughly 73 frames per second. Plus the new cards have more memory to deal with bandwidth and large textures, versus the neutered 512MB on the 9600/9800 cards and the 8800 GT... the reason I have held onto my 8800 GTX with 768MB. Granted, I won't be rushing out and buying one tomorrow, but the 280 is the fastest GPU, and an X2 version will be faster than any other X2 card. It's a little ridiculous to think the single 280 sucks because it's not faster than multi-GPU cards like the 9800 GX2 (although when memory counts, it is).
  • araczynski - Monday, June 16, 2008

    this is all they could spit out? all that noise and all those transistors and it gets its ass handed to it by the gx2 more often than not?

    talk about fizzle, perhaps at best it'll lower the price of the gx2 a bit.

    not to mention no built-in hdmi, good lord, they must've had two separate teams working on the gtx and gx2 at the same time and just wanted to see what they could come up with separately.
  • Ananke - Monday, June 16, 2008

    I don't think many people in this forum thread understand that nVidia's target is the supercomputer market. I was totally impressed by one post a month ago, where a software engineer managed to put together and use a 3-way SLI system for magnetic resonance rendering. Nvidia and AMD (that's why they acquired ATI) already have significant experience in multiprocessor and parallel computation. nVidia is ahead, though, since CUDA is becoming more popular for complex calculation. A year ago Intel realized parallel processing from Sun was its biggest danger; now nVidia and ATI are coming too. Imagine supercomputers built with thousands of G200 chips, with only some Intel chips used for mapping, instead of thousands of Xeons. nVidia is thinking way beyond the mere visual/gaming market. I am very, very impressed, and very eager to see what ATI can do. Also, I hope ATI and Havok will be able to offer competition to CUDA, or uniformity? Anyway, from a scientific point of view, recent developments in the graphics market make fundamental science more affordable than ever before.
  • Reflex - Monday, June 16, 2008

    Anand/Derek -

    I am not sure why you are comparing this chip to a Penryn or other general-purpose CPU, as the comparisons are meaningless. GPUs are designed very differently from CPUs: a high-level description language is used and the design is then generated by a program, which is then hand-tweaked by engineers. By contrast, a CPU may use a high-level language, but the actual design is almost entirely done by hand, with large teams working on each sub-component and literally years of tweaking. It takes Intel between five and ten years to bring a design to market, which is why there is such a push to keep adjusting and optimizing the design, stretching its usefulness out as long as possible to maximize the initial investment. This simply does not happen with a GPU.

    GPUs are designed to last 18-24 months as a competitive solution. nVidia and ATI cannot afford to spend even five years designing them. As a result the level of hand optimization is greatly reduced, and inefficiencies in transistor use are tolerated. Typically they are produced on equipment that has already been paid for by previous, more optimized products, or contracted out to third parties (TSMC). Since the products are sold at a premium, the wasted die space is not very relevant. It is a diametrically opposed process to what you see in CPU development.

    Despite how impressive it may seem to go on about 1.4 billion transistors, truthfully a modern CPU does more with far less than a modern GPU, and honestly neither nVidia nor ATI is in the same league as Intel and AMD, either at the engineering level or when comparing the products they put out. To an Intel engineer, this GPU is at least four times larger than it needs to be for the performance you get out of it.

    The maturation of the industry, whether due to reaching a point where GPUs can do 90% of what anyone needs or simply because power budgets get more restrictive, will come when the level of optimization required for a CPU is required for a GPU and product cycles stretch out to 3-5 years. Then you will have a more direct comparison between the two, since the design parameters will be much more similar.

    I am not knocking nV here, btw; I'm simply questioning why one would even compare a Penryn to a GPU. It makes no sense at all when they were designed from the ground up for different purposes, different lifespans, and different transistor budgets.
  • 7Enigma - Tuesday, June 17, 2008

    I think what this shows is that there is a brute-force way of doing something that, while not necessarily pretty, can get you to a goal. Yes, compared to Intel's latest and greatest it is a grotesque abomination of wasted energy/transistors/die size, but the bottom line is that it is pretty darn impressive from a CPU/GPU standpoint.

    I think many of us long for the days of more than 2 major competitors in each race (CPU/GPU). We've been stuck in a rut with ATI and Nvidia, AMD and Intel. Yes, you have some niche products from other companies, and budget parts made by a host of has-beens, but the real tier-1 stuff is just not being fought over by more than 2 companies.

    What I want to see (complete dreamland here) is a start-up from some very savvy, disgruntled employees of, say, AMD/ATI, Intel, IBM, etc. (and don't forget possibly the most important segment, the marketing team) with some clout and a LOT of dough to say, "Screw this, we're going balls to the wall and throwing the kitchen sink at the market."

    I mean, let's be honest here: what's another 100 watts or a billion transistors anymore? I can guarantee you every geek out there would shell out more money for a product that devastates the current competition. I don't care if it's not as frugal with power, or as small, or as pretty; I want the speed, man, gimme the speed!
  • Anand Lal Shimpi - Monday, June 16, 2008

    While I'd normally agree with you, GPUs have been getting pretty complex to design. Much of the shader multiprocessor logic in G80 and GT200 was designed by hand, and remember that G80 (the original predecessor to GT200) was in development for four years before its launch.

    The transistor comparison is a valid one; while Penryn is a very impressive design, it is so for different reasons than GT200. The size of GT200 also helps illustrate fundamental differences in the approach to CPU vs. GPU design, and really highlights why Intel is building Larrabee.

    -A
  • crimson117 - Monday, June 16, 2008

    Because to non-engineers, they're two silicon computer chips, and 1.4 billion of anything is a lot!

    It also helps me to visually understand why this thing gets so hot, since it's got so much more surface area packed with transistors.

    You're right that CPUs and GPUs are designed for different tasks and shouldn't be considered a pure apples-to-apples comparison, but then you go against your own advice and start saying how CPUs are so much more advanced, and how Intel engineers could build this in a quarter of the die size. So which is it: should they be compared, or should they not be compared?

    And the authors did mention how simple it could be for either company to slap the other type of chip right in with their usual type: make an Intel CPU with added GPU capabilities, or an nVidia GPU with CPU capabilities. So there's another point where they recognize the differences but do try to illustrate the sameness.

    So I'm not really sure your criticisms hold water.
  • Reflex - Monday, June 16, 2008

    You are looking for contradictions where there are none. A chip is a chip, but that does not mean they are all designed with the same goals, budgets, and time constraints. *IF* Intel devoted the resources to a GPU that it devotes to a CPU, yes, it could produce a product like this with a fraction of the transistors. That said, the product would take 5-10 years to design, would cost hundreds of millions of dollars to develop, and would need a lifespan of at least 5 years in the market to be worth the effort. Obviously this is not a reasonable approach in a market with such fast product turnover.

    My post was not an attempt to diss nV or this product; it was pointing out that comparing a GPU to a CPU is inane, as they have completely different design constraints. You may as well compare a CPU to cache memory, or RAM, or a sound processor. They all have transistors, right?

    It especially bothered me when they implied that nVidia has the transistor budget to toss a general-purpose CPU on the die. They may have the transistor budget, but they do not have the time or money available to do so, and the product would be obsolete before it ever hit the market as a result of such an attempt. It would be marrying two completely different design philosophies, and this is why the upcoming combined CPU/GPU products are not likely to be the strongest performers.
  • paydirt - Monday, June 16, 2008

    You all seem to be assuming that GPUs will only be used for games. If that's all you care about, then why whine when a GPU is also made to perform well as a number cruncher (for science, for modeling/simulations)?

    It's the best single-GPU gaming card.
    It's the best widely (?) available GPU number cruncher.
    As a whole-system gaming solution, it isn't the most cost-effective.

    If you're all about the numbers, then why are you assigning emotions to it? It simply is what it is.
