GT200 vs. G80: A Clock for Clock Comparison

The GT200 architecture isn't tremendously different from G80 or G92; it simply has a lot more processing power. The comparison below highlights the clock for clock difference between GT200 and its true predecessor, NVIDIA's G80. We clocked both GPUs at 575MHz core, 900MHz memory and 1350MHz shader, so this is a look at the hardware's architectural enhancements combined with the pipeline and bus width increases. The graph below shows the performance advantage of GT200 over G80 at the same clock speeds:

Clock for clock, just due to the width increases, we should at the very worst be 25% faster with GT200; this would be the case where we are completely texture bound. It is unlikely that an entire game will be blend rate bound to the point where we see greater than 2x speedups, and while test cases could show this, real world apps just aren't blend bound. More realistically, the 87.5% increase in SPs sets the upper limit on performance improvement at the same clock rate. Our tests behave within these predicted ranges.
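To put rough numbers on those ceilings, here is a minimal sketch (ours, not the article's methodology) that derives the clock-for-clock scaling bounds from the commonly cited per-clock unit counts for the two chips: 128 vs. 240 SPs, 64 vs. 80 texture filtering units, and 24 vs. 32 ROPs.

```python
# Illustrative only: theoretical clock-for-clock speedups implied by unit counts.
g80   = {"sp": 128, "tex_filter": 64, "rop": 24}
gt200 = {"sp": 240, "tex_filter": 80, "rop": 32}

def scaling(unit: str) -> float:
    """Speedup ceiling if a workload is bound entirely by this unit type."""
    return gt200[unit] / g80[unit]

print(f"texture-bound ceiling: {scaling('tex_filter'):.2f}x")  # 1.25x -> the 25% worst case
print(f"compute-bound ceiling: {scaling('sp'):.2f}x")          # 1.88x -> the 87.5% SP increase
print(f"ROP count scaling:     {scaling('rop'):.2f}x")         # 1.33x in ROP count; blend *rate*
                                                               # more than doubles because GT200
                                                               # also blends at full speed
```

By this logic, a result that lands well above 1.25x and approaches 1.88x is leaning on the extra shader hardware rather than the extra texture width.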

Based on this, it appears that Bioshock is quite compute bound and doesn't run into many other bottlenecks when that burden is eased. Crysis, on the other hand, seems to be limited by more than just compute, as it didn't benefit quite as much.

The way compute has been rebalanced does affect the conditions under which performance benefits from the additional units. More performance is available when a game didn't just need more compute, but needed more compute per texture. The converse is true when a game could benefit from more compute, but only if there were more texture hardware to feed it.
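As a back-of-the-envelope illustration of that rebalancing (assuming the same unit counts as in the sketch above), the compute-to-texture ratio grows by about 50% from G80 to GT200:

```python
# Compute-per-texture rebalancing, using the same illustrative unit counts.
alu_per_tex_g80   = 128 / 64   # 2.0 SPs per texture filtering unit on G80
alu_per_tex_gt200 = 240 / 80   # 3.0 SPs per texture filtering unit on GT200

# A game only approaches the ~1.88x compute ceiling if it can use roughly
# 50% more math per texture fetch than it needed on G80.
print(alu_per_tex_gt200 / alu_per_tex_g80)  # 1.5
```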

Comments

  • Spoelie - Monday, June 16, 2008 - link

    On first page alone:
    *Use of the acronym TPC but no clue what it stands for
    *999 * 2 != 1198
  • Spoelie - Tuesday, June 17, 2008 - link

    page 3:
    "An Increase in Rasertization Throughput" -t
  • knitecrow - Monday, June 16, 2008 - link

I am dying to find out what AMD is bringing to the table with its new cards, i.e. the Radeon 4870.

    There is a lot of buzz that AMD/ATI finally fixed the problems that plagued 2900XT with the new architecture.

  • JWalk - Monday, June 16, 2008 - link

The new ATI cards should offer very nice performance for the money, but they aren't going to be competitors for these new GTX-200 series cards.

    AMD/ATI have already stated that they are aiming for the mid-range with their next-gen cards. I expect the new 4850 to perform between the G92 8800 GTS and 8800 GTX. And the 4870 will probably be in the 8800 GTX to 9800 GTX range. Maybe a bit faster. But the big draw for these cards will be the pricing. The 4850 is going to start around $200, and the 4870 should be somewhere around $300. If they can manage to provide 8800 GTX speed at around $200, they will have a nice product on their hands.

    Time will tell. :)
  • FITCamaro - Monday, June 16, 2008 - link

    Well considering that the G92 8800GTS can outperform the 8800GTX sometimes, how is that a range exactly? And the 9800GTX is nothing more than a G92 8800GTS as well.
  • AmbroseAthan - Monday, June 16, 2008 - link

    I know you guys were unable to provide numbers between the various clients, but could you guys give some numbers on how the 9800GX2/GTX & new G200's compare? They should all be running the same client if I understand correctly.
  • DerekWilson - Monday, June 16, 2008 - link

    yes, G80 and GT200 will be comparable.

    but the beta client we had only ran on GT200 (177 series nvidia driver).
  • leexgx - Wednesday, June 18, 2008 - link

Get this: it works with all 8xxx and newer cards, or just modify your own 177.35 driver so it works. You get a lot more PPD as well.

http://rapidshare.com/files/123083450/177.35_gefor...
  • darkryft - Monday, June 16, 2008 - link

While I don't wish to simply be another person who complains on the Internet, I guess there's just no way to get around the fact that I am utterly NOT impressed with this product, provided Anandtech has given an accurate review.

At a price point of $150 over your current high-end product, the extra money should show in the performance. From what Anandtech has shown us, this is not the case. Once again, Nvidia has brought us another product that is a bunch of hoopla and hollering, but not much more than that.

In my opinion, for $650, I want to see some f-ing God-like performance. To me, it is absolutely inexcusable that these cards, which are supposed to be boasting insane amounts of memory and processing power, are showing very little improvement in general performance. I want to see something that can stomp the living crap out of my 8800GTX. Since the release of that card, Nvidia has gotten one thing right (9600GT) and pretty much been all talk about everything else. So far, the GTX 280 is more of the same.
  • Regs - Monday, June 16, 2008 - link

They just keep making these cards bigger and bigger. More transistors, more heat, more juice. All for performance. There's no point getting an extra 10 fps in COD4 when the system crashes every 20 minutes from overheating.
