The Card, The Test, and Power

There are a few key changes from the original 7800 GTX silicon that allow the 7800 GTX 512 to perform so highly. The 7800 GTX 512 still uses a 110nm process like the original 7800 GTX, but NVIDIA has tuned its fab process to speed up key data paths in the chip. These enhancements, while not altering the feature set in any way, allow the chip to reach clock speeds of 550MHz (120MHz faster than the original 7800 GTX). On top of the changes in the silicon, the 7800 GTX 512 has also received a PCB revision. And in case anyone is wondering, the huge HSF solution is actually very quiet. NVIDIA is using a fan with larger blades that moves a good volume of air without needing to run at very high RPMs. While it may look like an industrial-sized leaf blower, its bark is certainly nothing compared to the bite this thing takes out of our performance tests.

Current 7800 GTX cards feature 8Mx32 GDDR3 with four chips on each side of the PCB. Most cards have a heat spreader on the back of the board, while some vendors have attached heatsinks. NVIDIA needed a better way to cool the RAM in order to hit the memory clock speeds it wanted. To this end, the 7800 GTX 512 has all of its RAM on the front of the PCB, cooled by the very large heatsink previously employed on the Quadro FX 4500. Moving all the RAM to one side of the PCB may also have improved the routing to certain memory modules, which would further help increase attainable stable memory clock speeds. There are still only eight modules in total, as NVIDIA has moved to higher density 16Mx32 GDDR3. The RAM is also rated at 900MHz (1800MHz data rate), giving the stock memory clock speed of 1700MHz a little headroom for vendors who like to overclock the cards they sell.

Here is a quick comparison of NVIDIA's 7800 series lineup with the new GTX 512:

We can expect some applications to scale with either core clock speed or memory clock speed depending on where they are limited, in which case we could see anywhere from a 25% to 40% boost in performance. Of course, we will run into things like CPU and architectural limitations that could decrease the impact of the improved clock speed. As Intel found out via the Pentium 4, it doesn't matter how fast your clock spins if the chip spends a significant amount of time waiting on other hardware. This is where HyperThreading came into play, and is likely also the reason ATI put so much development time into keeping a huge number of contexts open at a time.
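The 25% to 40% range falls directly out of the clock ratios. As a rough sanity check, assuming the original 7800 GTX's stock clocks of 430MHz core (consistent with the 120MHz delta noted earlier) and 1200MHz effective memory:

```python
# Rough upper-bound performance scaling from clock ratios alone.
# Assumes original 7800 GTX stock clocks: 430MHz core, 1200MHz memory (data rate).
gtx_core, gtx_mem = 430, 1200        # original 7800 GTX
gtx512_core, gtx512_mem = 550, 1700  # 7800 GTX 512

core_gain = (gtx512_core / gtx_core - 1) * 100
mem_gain = (gtx512_mem / gtx_mem - 1) * 100

print(f"core-limited ceiling:   +{core_gain:.0f}%")  # ~28%
print(f"memory-limited ceiling: +{mem_gain:.0f}%")   # ~42%
```

In practice, CPU and architectural bottlenecks will keep real-world gains below these ceilings.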

Slower architectures have the advantage of being less affected by latency, as a longer clock cycle allows data to travel further per cycle. At the same time, to compete with high-frequency processing, much more work has to get done per clock cycle than on a faster chip. Graphics workloads tend to lend themselves to this type of architecture, which is partly why we haven't seen multi-GHz graphics chips.
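The trade-off can be made concrete: raw throughput is clock rate times work completed per clock, so a slow, wide design can match a fast, narrow one while enjoying a much more forgiving clock period. A minimal sketch (the unit counts below are illustrative, not actual chip specifications):

```python
# Raw throughput = clock (MHz) * units of work completed per clock.
# Unit counts are illustrative only, not real chip specifications.
def throughput(clock_mhz, work_per_clock):
    return clock_mhz * work_per_clock

wide_slow = throughput(550, 24)    # wide, GPU-style design
narrow_fast = throughput(3300, 4)  # narrow, high-frequency design

assert wide_slow == narrow_fast  # identical raw throughput...

# ...but the 550MHz part has a 6x longer clock period, so every stage
# has far more time per cycle to absorb wire and memory latency.
print(f"{1000 / 550:.2f}ns per cycle vs {1000 / 3300:.2f}ns per cycle")
```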

In any case, increasing core and memory clocks, framebuffer size, and adding a gigantic HSF will certainly require a little more power than the standard 7800 GTX. NVIDIA currently still recommends the same types of power supplies for the 7800 GTX 512 as it does for the 7800 GTX, but, as we can see from our tests, the 7800 GTX 512 does result in a much higher power draw from the wall. In fact, the outlets in our lab had some trouble getting consistent power to our PSU during SLI testing. Most people won't run into a problem like this unless they run quite a few PCs off the same circuit breaker at home. We actually had to solve our problem by running one of the 7800 GTX 512 cards off of a second power supply plugged into an extension cord running off of a different circuit. If nothing else, this setup could help people test for wiring problems in their homes.

These power numbers are measured at the wall before the PSU.
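Because the readings are taken at the wall, they include the PSU's conversion losses, so the actual DC load on the components is lower. A quick sketch of the conversion (the 75% efficiency figure is an assumption typical of power supplies of this era, not a measured value, and the 400W reading is hypothetical):

```python
# Estimate the DC load on components from a wall (AC) power reading.
# The 0.75 efficiency figure is an assumption, not a measured spec.
def dc_load(wall_watts, psu_efficiency=0.75):
    return wall_watts * psu_efficiency

# A hypothetical 400W wall reading would imply roughly 300W of DC load.
print(dc_load(400))  # 300.0
```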

[Power Consumption charts]


Certainly the 7800 GTX 512 is a power hog to say the least. Unfortunately, we didn't have a reliable way to test power draw for the 7800 GTX 512 SLI setup, but if we ever get around to rewiring the lab ...

Let's take a look at the hardware we will use.

Test Hardware

CPU: AMD Athlon 64 FX-57 (2.8GHz)
Motherboard: ASUS A8N32-SLI Deluxe
Chipset: NVIDIA nForce4 SLI X16
Chipset Drivers: nForce4 6.82
Memory: OCZ PC3500 DDR 2-2-2-7
Video Card: ATI Radeon X800 XL
ATI Radeon X1800 XT
ATI Radeon X850 XT
NVIDIA GeForce 6800 GS
NVIDIA GeForce 7800 GT
NVIDIA GeForce 7800 GTX
NVIDIA GeForce 7800 GTX 512
Video Drivers: ATI Catalyst 5.11 (WHQL)
NVIDIA ForceWare 81.89 (Beta)
Desktop Resolution: 1280x960 - 32-bit @ 60Hz
OS: Windows XP Professional SP2
Power Supply: OCZ PowerStream 600W PSU


This is definitely a powerful system that we will be examining today. For our benchmarks, we test with sound disabled. Most of these tests are timedemos played back using in-game functionality, but the Black & White 2 benchmark is a FRAPS test using an in-game cut scene. We will provide bar graphs for the very popular 1600x1200 resolution, while reporting data from 1280x960 to 2048x1536 in resolution scaling graphs.
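For perspective on that resolution range, the per-frame pixel load, which fill-rate and memory-bandwidth demands roughly track, grows quickly from the low end to the high end:

```python
# Pixel counts for the tested resolutions; fill-rate and memory-bandwidth
# demands scale roughly with these per-frame totals.
resolutions = [(1280, 960), (1600, 1200), (2048, 1536)]
base = 1280 * 960

for w, h in resolutions:
    pixels = w * h
    print(f"{w}x{h}: {pixels:,} pixels ({pixels / base:.2f}x the lowest setting)")

# 2048x1536 pushes 2.56x as many pixels per frame as 1280x960.
```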


97 Comments


  • Ryan Smith - Monday, November 14, 2005 - link

    Actually, we were hoping to bring you CoD2 benchmarks for this review, but it didn't pan out. We do not equip our video testbeds with sound cards, so that we can more accurately compare cards; the problem with this is that we could not get CoD2 to run without sound, and we ran out of time unable to find a solution. It's still something we'd like to benchmark in the future if we get the chance though.
  • ElFenix - Monday, November 14, 2005 - link

    then benchmark it with sound and disclose that fact...
  • yacoub - Monday, November 14, 2005 - link

    Ditch DoD:Source for CoD2.

    Ditch DOOM3 for Quake4.

    Rename FEAR.EXE to anything else .exe (PHEAR.EXE, TEST.EXE, whatever) when benchmarking ATI cards if you're running any of the latest ATI driver sets, since they have yet to fix a faulty "IF" check left over from the FEAR demo that is hindering performance in the full version of the game. (The fix did not make the latest driver release earlier this week.) It has been shown to improve performance by as much as 15fps.
  • xbdestroya - Monday, November 14, 2005 - link

    I don't know about that FEAR 'fix' though. I mean, how many card owners/PC users will actually know to do that? I think it's more legit to leave the bug in the testing - it is a legitimate bug after all - and wait for the new Catalyst release where it will be 'fixed' and show the increased performance. Or if that's too strong against ATI, publish an article with benchmarks in FEAR highlighting that bug. But for standard comparison benchmarks, I think it's best if they're done in as much of an 'out-of-the-box,' load it and play situation as possible.
  • tfranzese - Monday, November 14, 2005 - link

    I disagree with the 'out-of-box' notion. A product can't ship as a turd, but this is an enthusiast site. Enthusiasts should have the knowledge to use the proper drivers (not always the latest, which is why I say proper).
  • xbdestroya - Monday, November 14, 2005 - link

    Well, but has this site even published anything on that fix? Not to my knowledge. I only know about it because I'm on the B3D forums where it originated. I imagine that whoever knows about it here knows about it from the AT forums. But the fact is that if you're going to include the 'fix' in benchmarks, you might as well have an article preceding it announcing that this fix even exists, don't you think? Not everyone's a forum-goer; I know there was a time not too long ago when I just went to tech sites and read the articles, not the forums.

    First the article describing this fix to the masses - *then* the benchmarks incorporating it. Don't you think that makes sense?
  • xbdestroya - Monday, November 14, 2005 - link

    I wish these posts could be edited after the fact, but alas they can not. Anyway sorry for the bad spelling above.

    Basically though, if we're talking about 'enthusiast' sites, the sites should be publishing 'enthusiast' news like the fear.exe fix, right? Then after that article I could agree with its inclusion in benchmarks, because a precedent has been established.
  • ElFenix - Monday, November 14, 2005 - link

    or they could just write a blurb in the article, when they do the fear benches, that you can rename fear to anything else and fix the problem. and then bench it both ways.
  • xbdestroya - Monday, November 14, 2005 - link

    Seriously though, it deserves its own article. If it doesn't deserve that, it doesn't deserve benches mixed in with a 'general' comparison. The vast majority of people don't even read the text accompanying benchmarks anyway, so it would probably go unnoticed by quite a few if it just had a short explanation on the FEAR page of a benchmark round-up.
  • yacoub - Monday, November 14, 2005 - link

    quote:

    For our benchmarks, we test with sound disabled.


    LAAAAAAAAAAAAAAME. Start doing REAL tests. Okay fine, this is your last PEAK FPS test, right? Right?

    From now on show us average fps, sound on, etc. What we'll ACTUALLY GET using the card to PLAY the game, not dick-measure it.
