GT200 vs. G80: A Clock for Clock Comparison

The GT200 architecture isn't tremendously different from G80 or G92; it simply has a lot more processing power. The comparison below highlights the clock-for-clock difference between GT200 and its true predecessor, NVIDIA's G80. We clocked both GPUs at 575MHz core, 900MHz memory and 1350MHz shader, so this is a look at the hardware's architectural enhancements combined with the pipeline and bus width increases. The graph below shows the performance advantage of GT200 over G80 at the same clock speeds.

Clock for clock, on the width increases alone, GT200 should at the very worst be 25% faster than G80; that is the case where we are texture bound. It is unlikely that an entire game will be blend-rate bound to the point where we see greater than 2x speedups; synthetic test cases could show this, but real-world apps just aren't blend bound. More realistically, the 87.5% increase in SPs will be the upper limit on performance improvement at the same clock rate. Our tests behave within these predicted ranges.
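
As a rough sanity check on those bounds, here is a minimal sketch in Python, assuming the published unit counts (128 SPs and 64 texture units on G80 versus 240 and 80 on GT200); the blend figure is our own back-of-the-envelope assumption, based on GT200's 32 ROPs against G80's 24 with doubled per-ROP blend throughput:

    # Clock-for-clock scaling bounds for GT200 vs. G80, from unit counts.
    # SP and texture counts are the published figures; the blend multiplier
    # is an assumption (32 ROPs vs. 24, each blending twice as fast).
    g80_sps, g80_tex, g80_rops = 128, 64, 24
    gt200_sps, gt200_tex, gt200_rops = 240, 80, 32

    texture_bound = gt200_tex / g80_tex        # 1.25x  -> the +25% floor
    compute_bound = gt200_sps / g80_sps        # 1.875x -> the +87.5% ceiling
    blend_bound = gt200_rops * 2 / g80_rops    # ~2.67x -> synthetic cases only

    print(f"texture bound: {texture_bound:.3f}x")
    print(f"compute bound: {compute_bound:.3f}x")
    print(f"blend bound:   {blend_bound:.2f}x")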

Based on this, it appears that BioShock is quite compute bound and doesn't run into many other bottlenecks once that burden is eased. Crysis, on the other hand, seems to be limited by more than just compute, as it didn't benefit quite as much.

The way compute has been rebalanced does affect the conditions under which performance will benefit from the additional units. More performance will be available in the case where a game didn't just need more compute, but needed more compute per texture. The converse is true when a game could benefit from more compute, but only if there were more texture hardware to feed it.
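
To put rough numbers on that, using the same counts as above: G80 pairs 128 SPs with 64 texture units, a 2:1 ALU-to-texture ratio, while GT200 pairs 240 with 80, a 3:1 ratio. Shaders that average three or more math operations per texture fetch are the ones positioned to approach the 87.5% ceiling, while fetch-heavy shaders will sit closer to the 25% floor.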

Comments

  • tkrushing - Wednesday, June 18, 2008 - link

    Say what you want about this guy, but this is partially true, which is why AMD/ATI is in the position they've been in. They are slowly climbing out of that hole, though. It would have been nice to see the 4870 X2 hit the market first. As we know, competition = lower prices for everyone!
  • hk690 - Tuesday, June 17, 2008 - link

    I would love to kick you hard in the face, breaking it. Then I'd cut your stomach open with a chainsaw, exposing your intestines. Then I'd cut your windpipe in two with a boxcutter. Then I'd tie you to the back of a pickup truck, and drag you, until your useless fucking corpse was torn to a million fucking useless, bloody, and gory pieces.

    Hopefully you'll get what's coming to you. Fucking bitch

    http://www.youtube.com/watch?v=XNAFUpDTy3M

    I wish you a truly painful, bloody, gory and agonizing death, cunt
  • 7Enigma - Wednesday, June 18, 2008 - link

    Anand, I'm all for free speech and such, but this guy is going a bit far. I read these articles at work frequently and once the dreaded C-word is used I'm paranoid I'm being watched.
  • Mr Roboto - Thursday, June 19, 2008 - link

    I thought those comments would be deleted already. I'm sure no one cares if they are. I don't know what that person is so mad about.
  • hk690 - Tuesday, June 17, 2008 - link

    Die painfully okay? Preferably by getting crushed to death in a garbage compactor, by getting your face cut to ribbons with a pocketknife, your head cracked open with a baseball bat, your stomach sliced open and your entrails spilled out, and your eyeballs ripped out of their sockets. Fucking bitch
  • Mr Roboto - Wednesday, June 18, 2008 - link

    Ouch... Looks like you hit a nerve with AMD/ATI's marketing team!
  • bobsmith1492 - Monday, June 16, 2008 - link

    The main benefit of the 280 is the reduced power at idle! If I read the graph right, the 9800 draws ~150W more than the 280 at idle. Since that's where computers spend the majority of their time, depending on how much you game, that can be a significant cost.
  • kilkennycat - Monday, June 16, 2008 - link

    Maybe you should look at the GT200 series from the point of view of NVIDIA's GPGPU customers - the academic researchers, technology companies requiring fast number-crunching available on the desktop, the professionals in graphics effects and computer animation - not necessarily real-time, but as quick as possible... The CUDA-using crew. The Tesla initiative. This is an explosively expanding and highly profitable business for NVIDIA - far more profitable per unit than any home desktop graphics application. An in-depth analysis by AnandTech of what the GT200 architecture brings to these markets over and above the current G8xx/G9xx architecture would be highly appreciated. I have a very strong suspicion that sales of the GT2xx series to the (ultra-rich) home user who has to have the latest and greatest graphics card are just another way of paying the development bills and not the true focus for this particular architecture or product line.

    NVIDIA is strongly rumored to be working on the true 2nd-gen DX10.x product family, to be introduced early next year. Considering the size of the GTX 280 silicon, I would expect them to transition the 65nm GTX 280 GPU to either TSMC's 45nm or 55nm process before the end of 2008 to prove out the process with this size of device, then in 2009 introduce their true 2nd-gen GPU/GPGPU family on this latter process. A variant on Intel's "tick-tock" process strategy.
  • strikeback03 - Tuesday, June 17, 2008 - link

    But look at the primary audience of this site. Whatever NVIDIA's intentions are for the GTX 280, I'm guessing more people here are interested in gaming than in subsidizing research.
  • Wirmish - Tuesday, June 17, 2008 - link

    "...requiring fast number-cruching available on the desktop..."

    GTX 260 = 715 GFLOPS
    GTX 280 = 933 GFLOPS
    HD 4850 = 1000 GFLOPS
    HD 4870 = 1200 GFLOPS
    4870 X2 = 2400 GFLOPS

    Take a look here: http://tinyurl.com/5jwym5
