The Card, The Test, and Power

There are a few key changes from the original 7800 GTX silicon that allow the 7800 GTX 512 to perform so well. The 7800 GTX 512 still uses a 110nm process like the original 7800 GTX, but NVIDIA has tuned their fab process to speed up key data paths in the chip. These enhancements, while not altering the feature set in any way, allow the chip to reach clock speeds of 550MHz (120MHz faster than the original 7800 GTX). On top of the changes in the silicon, the 7800 GTX 512 has received a PCB revision. And in case anyone is wondering, the huge HSF solution is actually very quiet: NVIDIA is using a fan with larger blades that moves a good volume of air without needing to run at very high RPMs. While it may look like an industrial-sized leaf blower, its bark is certainly nothing compared to the bite this thing takes out of our performance tests.

Current 7800 GTX cards feature 8Mx32 GDDR3 with four chips on each side of the PCB. Most cards have a heat spreader on the back of the board, while some vendors have attached heatsinks. NVIDIA needed a better way to cool the RAM in order to hit the memory clock speeds they wanted. To this end, the 7800 GTX 512 has all of its RAM on the front of the PCB, cooled by the very large heatsink previously employed on the Quadro FX 4500. Moving all the RAM to one side of the PCB may also have improved the routing to certain memory modules, which would further help increase attainable stable memory clock speeds. There are still only eight modules in total, as NVIDIA has also moved to higher density 16Mx32 GDDR3. The RAM is rated at 900MHz (1800MHz data rate), giving the stock memory clock speed of 1700MHz a little headroom for vendors who like to overclock the cards they sell.
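
For those keeping score, the new memory configuration is easy to sanity-check. The following sketch derives the 512MB framebuffer and peak bandwidth from the figures quoted above; the 256-bit bus width is an assumption based on the 7800 GTX's published specifications rather than something stated here:

```python
# A sanity-check sketch (our own arithmetic, not from NVIDIA): deriving the
# 512MB framebuffer and peak bandwidth from the figures quoted above. The
# 256-bit bus width is assumed from the 7800 GTX's published specifications.

CHIPS = 8             # eight GDDR3 modules, all on the front of the PCB
DEPTH_M = 16          # 16M addresses per chip (16Mx32 density)
WIDTH_BITS = 32       # 32 bits per address
BUS_WIDTH_BITS = 256  # assumed memory bus width
DATA_RATE_MHZ = 1700  # stock effective (DDR) memory clock

capacity_mb = CHIPS * DEPTH_M * WIDTH_BITS / 8               # MB per card
bandwidth_gbs = (BUS_WIDTH_BITS / 8) * DATA_RATE_MHZ / 1000  # GB/s

print(f"Framebuffer:    {capacity_mb:.0f} MB")      # 512 MB
print(f"Peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # 54.4 GB/s
```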

Here is a quick comparison of NVIDIA's 7800 series lineup with the new GTX 512:

[Table: 7800 series comparison]

We can expect some applications to scale with either core clock speed or memory clock speed, depending on where they are limited, in which case we could see anywhere from a 25% to 40% boost in performance, as the quick calculation below shows. Of course, we will run into things like CPU and architectural limitations that can blunt the impact of the improved clock speeds. As Intel found out with the Pentium 4, it doesn't matter how fast your clock spins if the chip spends a significant amount of time waiting on other hardware. This is where Hyper-Threading came into play, and it is likely also the reason ATI put so much development time into keeping a huge number of contexts open at a time.
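
As a back-of-the-envelope check on that range, consider the clock ratios themselves. The sketch below assumes the original 7800 GTX's stock clocks of 430MHz core and 1200MHz memory; the core figure follows from the 120MHz delta mentioned earlier, while the memory figure is that card's published spec:

```python
# A minimal sketch of the scaling ceilings implied by the clock bumps. The
# original GTX's 430MHz core follows from the 120MHz delta mentioned earlier;
# the 1200MHz memory clock is that card's published spec, assumed here.

GTX_CORE_MHZ, GTX_MEM_MHZ = 430, 1200        # original 7800 GTX
GTX512_CORE_MHZ, GTX512_MEM_MHZ = 550, 1700  # 7800 GTX 512

core_gain = GTX512_CORE_MHZ / GTX_CORE_MHZ - 1  # ~28% if purely core-bound
mem_gain = GTX512_MEM_MHZ / GTX_MEM_MHZ - 1     # ~42% if purely memory-bound

print(f"Core-limited ceiling:   {core_gain:.0%}")
print(f"Memory-limited ceiling: {mem_gain:.0%}")
```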

Slower architectures have the advantage of being less affected by latency, as a long clock cycle allows data to move further per cycle. At the same time, to compete with high frequency processing, much more work has to get done per clock cycle than on a faster-clocked chip. Graphics workloads tend to lend themselves to this type of architecture, which is partly why we haven't seen multi-GHz graphics chips.

In any case, increasing the core and memory clocks, doubling the framebuffer, and adding a gigantic HSF will certainly require a little more power than the standard 7800 GTX. NVIDIA still recommends the same types of power supplies for the 7800 GTX 512 as it does for the 7800 GTX, but, as we can see from our tests, the 7800 GTX 512 does result in a much higher power draw from the wall. In fact, the outlets in our lab had some trouble delivering consistent power to our PSU during SLI testing. Most people won't run into a problem like this unless they run quite a few PCs off the same circuit breaker at home. We actually had to solve our problem by running one of the 7800 GTX 512 cards off of a second power supply plugged into an extension cord on a different circuit. If nothing else, this setup could help people test for wiring problems in their homes.
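
To put the circuit breaker anecdote in perspective, here is a rough sketch of the math. The 15A breaker rating, 120V mains, and 80% continuous-load derating are our own illustrative assumptions, not figures from our testing:

```python
# A rough sketch of the circuit math. The 15A breaker, 120V mains, and 80%
# continuous-load derating are illustrative assumptions on our part, not
# measurements from the lab.

BREAKER_AMPS = 15
MAINS_VOLTS = 120
DERATING = 0.8  # common guideline for sustained loads

peak_watts = BREAKER_AMPS * MAINS_VOLTS  # 1800W absolute ceiling
sustained_watts = peak_watts * DERATING  # 1440W sustained guideline

print(f"Circuit ceiling: {peak_watts}W peak, {sustained_watts:.0f}W sustained")
# An SLI test rig plus monitors and a few other lab PCs on one circuit can
# plausibly crowd that sustained budget, matching the behavior we saw.
```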

These power numbers are measured at the wall before the PSU.

[Graph: Power Consumption]

[Graph: Power Consumption]
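
For readers who want to translate these wall-side numbers into the DC load the PSU actually delivers, here is a minimal sketch. The 75% conversion efficiency is an assumed, era-typical figure, not something we measured:

```python
# A hedged sketch for converting wall readings to DC load. The 75% PSU
# efficiency is an assumed, era-typical figure, not something we measured.

def dc_load_watts(wall_watts: float, psu_efficiency: float = 0.75) -> float:
    """Estimate DC power delivered to components from a wall-side reading."""
    return wall_watts * psu_efficiency

for wall in (250, 300, 350):
    print(f"{wall}W at the wall -> ~{dc_load_watts(wall):.0f}W DC load")
```
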
The 7800 GTX 512 is a power hog, to say the least. Unfortunately, we didn't have a reliable way to test power draw for the 7800 GTX 512 SLI setup, but if we ever get around to rewiring the lab ...

Let's take a look at the hardware we will use.

Test Hardware

CPU: AMD Athlon 64 FX-57 (2.8GHz)
Motherboard: ASUS A8N32-SLI Deluxe
Chipset: NVIDIA nForce4 SLI X16
Chipset Drivers: nForce4 6.82
Memory: OCZ PC3500 DDR 2-2-2-7
Video Cards: ATI Radeon X800 XL, ATI Radeon X1800 XT, ATI Radeon X850 XT, NVIDIA GeForce 6800 GS, NVIDIA GeForce 7800 GT, NVIDIA GeForce 7800 GTX, NVIDIA GeForce 7800 GTX 512
Video Drivers: ATI Catalyst 5.11 (WHQL), NVIDIA ForceWare 81.89 (Beta)
Desktop Resolution: 1280x960 - 32-bit @ 60Hz
OS: Windows XP Professional SP2
Power Supply: OCZ PowerStream 600W PSU


This is definitely a powerful system that we will be examining today. For our benchmarks, we test with sound disabled. Most of these tests are timedemos played back using in-game functionality, but the Black & White 2 benchmark is a FRAPS test using an in-game cut scene. We will provide bar graphs for the very popular 1600x1200 resolution while reporting data from 1280x960 to 2048x1536 in resolution scaling graphs.

Comments

  • steelmartin - Monday, November 14, 2005 - link

    I guess if you buy this card you're doing so partly because you're interested in running games at the highest quality settings. But afaik it can't do OpenEXR HDR and AA together like in Far Cry, so I think this card is somewhat of a contradiction. Surely it depends on how the application uses HDR, like Valve showed with HDR and AA for everyone in Lost Coast. But I would say it's not a very future-proof card then, as everyone predicts HDR will be big in games, and I guess a lot of them will use OpenEXR. Still, it will top the charts, for what that's worth.

    And about the extra memory, how about taking the card for a spin with Call of Duty 2? Seems that game takes advantage of 512 MiB.

    /m
  • DerekWilson - Monday, November 14, 2005 - link

    The advantage ATI offers is MSAA with floating point HDR. We've already seen a game (Black and White 2) that employs AA and HDR by using supersample FSAA, and as you pointed out, Valve's Source engine avoids full float render targets and still gets good results.

    The performance hit is larger with SSAA, but it is certainly possible to have HDR and AA without the ability to do MSAA on floating point/multiple render targets. And the sheer brute strength the 7800 GTX 512 has can easily be spent on SSAA, as shown (again) by Black and White 2.
  • quasarsky - Monday, November 14, 2005 - link

    i'm an ati fan but this is ridiculous. ati just gets crushed and crushed. even the regular 7800 gtx gets crushed. but i knew something like this would happen if the 7800 was cranked up to a clockspeed close to the x1800xt. those extra 8 pipes and the extra memory bandwidth just lead to the same thing: crushing all opponents lol. man. is ati the new intel? i hope not :(. but thats how its looking currently :'(.

    ha ha i guess my x800xt aiw isn't looking so hot right now :-D.
  • George Powell - Monday, November 14, 2005 - link

    But quite useless for most people who don't run games at stratospheric resolutions.

    I would really like to see this running at 2560x1600 on the Apple 30".

  • Ozenmacher - Monday, November 14, 2005 - link

    That is some pretty amazing performance. It makes my ATi X800XL look rather pathetic...sighs
  • KaPolski - Monday, November 14, 2005 - link

    GoGo geforce 3 ti500 Woohoo!!!!! trust me it spanks the 7800 gtx 512 down to a carefully squeezed lemon :D
  • Xenoterranos - Monday, November 14, 2005 - link

    w00t! I traded my matching-numbers first-run GeForce 3 (before they were Ti'd) in for a 5900. I'm not upgrading till Socket M2 comes a-rolling into the bargain bin.
  • LoneWolf15 - Monday, November 14, 2005 - link

    quote:

    That is some pretty amazing performance. It makes my ATi X800XL look rather pathetic...sighs
    It's called "marketing". Don't succumb to it.

    Is it a fast card? Heck yeah. Is it necessary? Far from it. I have an ATI X800XL as well, and I don't plan on switching until I have to. Game developers will continue to make games compatible with our cards for some time to come, and the only thing we'll be missing is Shader Model 3.0. So far, what I have seen of it hasn't been a big enough improvement to encourage me to go out and plunk cash down on a new card. And seeing as my gaming is now measured in hours per week (as opposed to hours per day, like when I worked in a computer store), I couldn't justify spending that kind of bread on something that isn't constantly in use.

    I think the 7800GTX 512 is a neat looking toy. But that's just it: it's a toy. I'd rather cover two car payments or two-thirds of a mortgage payment, things I NEED to spend money on.
  • Pythias - Monday, November 14, 2005 - link

    quote:

    At $700 we are a little weary of recommending this part to anyone but the professional gamers and incredibly wealthy. The extra performance just isn't necessary in most cases.


    I agree. I also think the $600 price tag on the x1800xt is a bit much as well.
  • phusg - Monday, November 14, 2005 - link

    quote:

    At $700 we are a little weary of recommending this part to anyone but the professional gamers and incredibly wealthy.


    LOL. Weary != wary and in fact reads as the opposite to what I think you mean in this sentence!
