The Test

For this test, we used the same setup as in our 6800 and X800 launch articles. This time around, we are using newer drivers, a beta Windows service pack, DX9.0c, and version 1.2 of FarCry. The numbers we originally ran are quite different (in a good way) from the numbers we will see here for the SM2.0 path on both cards.

In order to test image quality, we couldn't use Windows' built-in screen capture or HyperSnap 5 (which we usually use to accommodate DX9 captures with special requirements). We had to use FarCry's built-in screen capture (default key: F12), which only saves images in .jpg format rather than any of the uncompressed formats we would prefer for IQ comparisons. As such, pixel-perfect comparisons (though not technically possible in the first place) aren't even a distant hope. The small versions of the images have only been cropped, not resized or resampled, and the full 1600x1200 images will be linked.
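
For those making their own comparisons, a quick diff makes the JPEG problem concrete: even two captures of an identical frame will pick up nonzero differences once block compression has run over them. A minimal sketch, assuming Pillow and NumPy and hypothetical filenames:

    # A minimal sketch of why JPEG captures rule out numerical IQ comparison.
    # Filenames are hypothetical; requires Pillow and NumPy.
    import numpy as np
    from PIL import Image

    # int16 avoids uint8 wraparound when subtracting
    a = np.asarray(Image.open("shot_sm20.jpg").convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open("shot_sm30.jpg").convert("RGB"), dtype=np.int16)

    diff = np.abs(a - b)
    print(f"mean per-channel difference: {diff.mean():.2f}")
    print(f"pixels differing at all:     {(diff.max(axis=2) > 0).mean():.1%}")

Even visually identical captures will typically show a small nonzero mean difference here, which is why we stick to visual rather than numerical comparisons.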

Performance Test Configuration

Processor(s):                  AMD Athlon 64 3400+
RAM:                           2 x 512MB OCZ PC3200 (2:2:3:6)
Hard Drive(s):                 Seagate Barracuda 7200.7
AGP & IDE Bus Master Drivers:  VIA Hyperion 4in1 4.51
Video Card(s):                 NVIDIA GeForce 6800 Ultra Extreme
                               NVIDIA GeForce 6800 Ultra
                               NVIDIA GeForce 6800 GT
                               NVIDIA GeForce 6800
                               ATI Radeon X800 XT Platinum Edition
                               ATI Radeon X800 XT
                               ATI Radeon X800 Pro
Video Drivers:                 NVIDIA 61.45 SM3 Beta Graphics Drivers
                               ATI Catalyst 4.6
Operating System(s):           Windows XP Professional SP2 RC2 with DX9.0c
                               and the Summer 2004 DirectX SDK Update
Power Supply:                  PC Power & Cooling Turbo Cool 510
Motherboard(s):                FIC K8T800 (754 pin)

As is apparent from the table, we are introducing a couple of new cards this time around. For easy reference, here are the pixel pipeline count, core clock speed, and memory data rate of all the parts included (a quick theoretical-throughput calculation follows the list):

NVIDIA GeForce 6800: 12 pipes, 325MHz core, 700MHz mem
NVIDIA GeForce 6800 GT: 16 pipes, 350MHz core, 1000MHz mem
NVIDIA GeForce 6800 Ultra: 16 pipes, 400MHz core, 1100MHz mem
NVIDIA GeForce 6800 Ultra Extreme: 16 pipes, 460MHz core, 1200MHz mem

ATI Radeon X800 Pro: 12 pipes, 475MHz core, 900MHz mem
ATI Radeon X800 XT: 16 pipes, 500MHz core, 1000MHz mem
ATI Radeon X800 XT Platinum Edition: 16 pipes, 520MHz core, 1120MHz mem
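
Since raw clocks don't compare directly across 12- and 16-pipe parts, theoretical peak fill rate (pipes x core clock) and memory bandwidth (effective data rate x bus width) are handier yardsticks. A quick sketch, assuming the 256-bit memory bus these cards share; treat the results as upper bounds, not real-world throughput:

    # Theoretical peak fill rate and memory bandwidth from the specs above.
    # Assumes a 256-bit memory bus for every card listed; architecture and
    # efficiency differences mean real throughput will be lower.
    cards = {
        "GeForce 6800":                (12, 325,  700),
        "GeForce 6800 GT":             (16, 350, 1000),
        "GeForce 6800 Ultra":          (16, 400, 1100),
        "GeForce 6800 Ultra Extreme":  (16, 460, 1200),
        "Radeon X800 Pro":             (12, 475,  900),
        "Radeon X800 XT":              (16, 500, 1000),
        "Radeon X800 XT PE":           (16, 520, 1120),
    }

    for name, (pipes, core_mhz, mem_mhz) in cards.items():
        fill = pipes * core_mhz / 1000          # GPixels/s, theoretical peak
        bw = mem_mhz * 1e6 * (256 / 8) / 1e9    # GB/s over the assumed 256-bit bus
        print(f"{name:28s} {fill:5.2f} GPix/s  {bw:5.1f} GB/s")

Running this puts the 6800 Ultra at 6.40 GPix/s and 35.2 GB/s against the X800 XT PE's 8.32 GPix/s and 35.8 GB/s, which is a useful frame for the numbers that follow.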

ATI's cards are always run in SM2.0 mode (as they don't support SM3.0), so the labels on the graphs reflect only the code path that NVIDIA's cards take. Each level's analysis will include an SM2.0 comparison (both NVIDIA and ATI on the same path) and an SM3.0 comparison (NVIDIA running SM3.0 against ATI running SM2.0).

Also, keep in mind that this test is an analysis of two different rendering paths, not of the performance difference between SM2.0 and SM3.0 code. If this were really a test of SM2.0 versus SM3.0, we would be running the same rendering techniques compiled to different instruction sets (in which case, the lower complexity of SM2.0 has the potential to be faster in many cases). What we are looking at here are two different rendering methods.
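
One of the things SM3.0's longer shaders and flow control make practical is handling several lights in a single pass, where an SM2.0-class path may need one pass per light. The toy model below (illustrative Python with assumed cost constants, not anything from CryTek's code) shows why that kind of restructuring, rather than the shader model itself, is what moves performance:

    # Rough cost model: multi-pass (one pass per light) vs. single-pass
    # looping over lights. The constants are illustrative assumptions,
    # not measurements of CryTek's engine.
    PASS_OVERHEAD = 40  # assumed cost of resubmitting/rasterizing geometry per pass
    LIGHT_COST = 25     # assumed per-light shading cost

    def multipass(num_lights):
        # One rendering pass per light: the geometry cost is paid every pass.
        return num_lights * (PASS_OVERHEAD + LIGHT_COST)

    def single_pass(num_lights):
        # One pass whose shader loops over the lights: geometry cost paid once.
        return PASS_OVERHEAD + num_lights * LIGHT_COST

    for n in (1, 2, 4):
        print(f"{n} light(s): multi-pass {multipass(n):4d}, single-pass {single_pass(n):4d}")

With one light, the two methods cost the same in this model; the single-pass approach only pulls ahead as the light count climbs, which is why scene content matters so much in the results.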

In other words, this is the performance difference between two different implementations of CryTek's engine, not a generalization of SM2.0 versus SM3.0 performance. In this case, CryTek determined that SM3.0 provided functionality that made changes to the rendering path worth the cost of implementation. Let's take a look at the end result.

Comments (36)

  • DerekWilson - Friday, July 2, 2004 - link

    Thanks Pete, we'll be setting AA and AF in the benchmark batch file from now on ... We've updated the site to reflect the fact that the first run of numbers had NV 4xAA set in the control panel (which means it was off in the game).

    We apologize for the problem, and these new numbers show an accurate picture of the NV vs. ATI playing field.

    Again, we are very sorry for the mistake.
  • Bonesdad - Friday, July 2, 2004 - link

    Wait till you see the numbers for NV's 6800 Ultra Extreme with Cheese!!!
  • Pete - Friday, July 2, 2004 - link

    Derek, was AA on for the nV cards? Apparently nV's latest drivers change behavior once again, to require AA to be set in-game, rather than via CP (which does nothing).

    Perhaps you could avoid this mess of ever-changing AA settings by using AA+AF for comparison screens? It'd also have the added benefit of showing the games in a more positive light. :)
  • gordon151 - Friday, July 2, 2004 - link

    pio!pio!: x-bit labs tested the difference in performance between the 1.2 and 1.1 patches on the NV3x (5900 Ultra) and, well, it wasn't pretty. The NV3x actually saw a rather big performance drop using the new patch. I dunno if nVidia is gonna do anything about this, since they seem to be turning a blind eye to the NV3x line with respect to future optimizations.
  • DerekWilson - Friday, July 2, 2004 - link

    trilinear optimizations are on
    anisotropic filtering optimizations are off

    AA has less noticeable benefit as resolution increases, but aliasing on nearly vertical and nearly horizontal lines is still obvious in games with high-contrast scenes.
  • kmmatney - Friday, July 2, 2004 - link

    Do you really need AA on when running at 1600 x 1200, as in these benchmarks? Just wondering if it's much of a benefit at this high a resolution. I never go past 1024 x 768, so I wouldn't know.
  • pio!pio! - Friday, July 2, 2004 - link

    So how about just the performance jump from FarCry 1.1 to 1.2 without using these high-end shaders? (i.e., for the previous-generation GeForce 5900 crowd and lower)
  • AnnoyedGrunt - Friday, July 2, 2004 - link

    Does that mean trilinear optimizations on, or trilinear filtering on?

    Thanks,
    D'oh!
  • DerekWilson - Friday, July 2, 2004 - link

    we used driver default:

    trilinear on
    anisotropic off
