Quake 4 Performance

There has always been a lot of debate in the community surrounding pure timedemo benchmarking. We have opted to stick with the timedemo test rather than the nettimedemo option for benchmarking Quake 4. To be clear, this means our test results focus mostly on the capability of each graphics card to render frames generated by Quake 4. The frame rates we see here don't directly translate into what one would experience during game play.

Additionally, Quake 4 caps frame rate at 60 fps during gameplay whether or not VSync is enabled, so the performance characteristics of a timedemo do not reflect actual gameplay. Why run them, then? Because the questions we are trying to answer concern only the graphics subsystem: we want to know which graphics card is better at rendering Quake 4 frames. A card that renders Quake 4 frames faster will play Quake 4 better than slower cards. While that doesn't mean the end user will see higher performance in the game, it does mean that the potential for more performance is there -- for instance, if the user upgrades the CPU before the next graphics card upgrade.

Timedemos do walk a fine line between synthetic benchmarks and real world benchmarks. While we tend to favor real world data here at AnandTech, this type of benchmark is very capable of using a real world data set to test the maximum capabilities of the graphics cards under a particular workload without bottlenecking at other points in the system. To be sure, even timedemos can see memory and CPU bottlenecks, as data must be transferred to the graphics card somehow. But this impact is much lower than the impact of running AI, physics, script management, I/O, and other game code at the same time.

What this means to the end user is that in-game performance will almost always be lower than timedemo performance. It also means that graphics cards that do slightly better than other graphics cards will not always show a tangible performance increase on an end user's system. As long as we keep these things in mind, we can make informed conclusions based on the data we collect.

Our benchmark consists of the first few minutes of the first level, including both indoor and outdoor sections and the initial few firefights. We test the game at Ultra Quality settings, and we enable all the advanced graphics options except VSync and antialiasing. Anisotropic filtering is manually set to 8x. id Software does a pretty good job of keeping framerate very consistent, and in-game framerates of 25 are acceptable. While we don't have the ability to map that directly onto the timedemo test, our experience indicates that a timedemo result of about 35 fps translates into an enjoyable experience on our system. This will certainly vary on other systems, so take it with a grain of salt. The important thing to remember is that this is a test of the relative performance of graphics cards when it comes to rendering Quake 4 frames -- it doesn't directly translate into the Quake 4 experience.

[Graph: Quake 4]

The Doom 3 engine was once NVIDIA's stomping grounds, but Quake 4 performance is now dominated by ATI's Radeon X1900 and X1950 series. The X1950 CrossFire manages a 26% performance advantage over the GeForce 7900 GTX SLI, while the X1900 CF setup pulls ahead by just under 16%.

Among single cards, the X1950 XTX manages about an 11% performance advantage over the 7900 GTX, meaning that CrossFire actually scales much better than SLI in Quake 4 for some reason. The new 256MB X1900 XT is outperformed by its 512MB sibling by a solid 16%. Honestly, we were not expecting to see such a big difference between the 256MB and 512MB cards, especially with AA disabled. It's good to see that games are actually making use of all the framebuffer memory being thrown at them.
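The percentages above are simple relative comparisons of average frame rates. As a quick illustration (the frame rate figures below are hypothetical, not the article's actual data), the advantage of one card over another works out as:

```python
# Hypothetical fps values used only to show how a relative
# performance advantage is computed -- not the article's data.
def advantage_pct(card_a_fps: float, card_b_fps: float) -> float:
    """Percentage by which card A outperforms card B."""
    return (card_a_fps / card_b_fps - 1.0) * 100.0

# If card A averaged 111 fps and card B averaged 100 fps,
# card A holds an 11% advantage:
print(round(advantage_pct(111.0, 100.0), 1))  # -> 11.0
```
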

[Graph: Quake 4, AA enabled]

With AA enabled, ATI does even better, with the X1950 CrossFire outperforming the 7900 GTX SLI by over 57%. ATI has done a lot of work on its OpenGL performance lately and we're currently investigating to see if that's the cause for such a stellar showing in Quake 4 here today.


74 Comments


  • DerekWilson - Saturday, August 26, 2006 - link

    yeah ... i didn't test power with crossfire -- which is a whole lot higher. also, i have a minimal set of components to make it work -- one hdd, one cdrom drive, and no add-in cards other than graphics.

    we'll do multi-gpu power when we look at quadsli
  • ElFenix - Thursday, August 24, 2006 - link

    the review states that power consumption was measured at the wall with a kill-a-watt, during a 3DMark run.

    in addition to the water cooling, it could be he's running a more efficient PSU. in a powerful system, drawing 220 watts from the power supply would draw 275 watts from the wall with an 80% efficient PSU (like a good seasonic) and 314 watts with a 70% efficient PSU. that's a pretty decent difference right there.

    ... still waiting for nvidia's HQ driver run...
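    The PSU arithmetic in the comment above is straightforward, assuming a simple constant-efficiency model (real PSU efficiency varies with load): wall draw is DC load divided by efficiency.

    ```python
    # Sketch of the wall-draw arithmetic, assuming constant efficiency.
    # wall_watts = dc_load_watts / efficiency
    def wall_draw(dc_load_watts: float, efficiency: float) -> float:
        return dc_load_watts / efficiency

    print(round(wall_draw(220, 0.80)))  # 275 W at the wall with an 80% efficient PSU
    print(round(wall_draw(220, 0.70)))  # 314 W with a 70% efficient PSU
    ```
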
  • poohbear - Thursday, August 24, 2006 - link

    thanks
  • Rock Hydra - Wednesday, August 23, 2006 - link

    With those competitively priced parts, hopefully nVIDIA will respond with lower prices.
  • CreepieDeCrapper - Wednesday, August 23, 2006 - link

    I'm not familiar with 1920x1440, did you mean 1920x1200? At what resolution were these tests performed? Thank you!

  • JarredWalton - Wednesday, August 23, 2006 - link

    1920x1440 is a standard 4:3 aspect ratio resolution used on many CRTs. It is often included because its performance is fairly close to 1920x1200 performance.
  • CreepieDeCrapper - Wednesday, August 23, 2006 - link

    Thanks, I've been using my LCD for so long I forgot about the vintage CRT res's out there ;) Plus I never ran that particular res on my CRT when I had one, so I just wasn't familiar.
  • cgaspar - Wednesday, August 23, 2006 - link

    While average frame rates are interesting, I _really_ care about minimum frame rates - 300fps average is useless if at a critical moment in a twitch game the frame rate drops to 10fps for 3 seconds - this is especially true in Oblivion. Of course it's possible that the minimums would be the same for all cards (if the game is CPU bound in some portion), but they might not be.
  • JarredWalton - Wednesday, August 23, 2006 - link

    A lot of games have instantaneous minimums that are very low due to HDD accesses and such; Oblivion is a good example. Short benchmarks also overemphasize minimum frame rates, since those moments occur less frequently in regular play. Basically, you run around an area for a longer period of time in actual gaming, as opposed to a 30-90 second benchmark. If there are a couple of seconds at the start of a level where frame rates are low because the engine is caching textures, that doesn't mean as much as continuous low frame rates.

    More information is useful, of course, but it's important to keep things in perspective. :)
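    The average-versus-minimum distinction in this exchange can be sketched with hypothetical frame times (the numbers below are invented for illustration): one disk-access hitch barely moves the average but produces a very low instantaneous minimum.

    ```python
    # Hypothetical per-frame render times (ms) for a 10-frame run,
    # including one disk-access hitch -- illustrative only.
    frame_times_ms = [10, 10, 10, 10, 200, 10, 10, 10, 10, 10]

    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s   # average over the whole run
    min_fps = 1000.0 / max(frame_times_ms)    # worst single frame

    print(round(avg_fps, 1))  # -> 34.5: the average looks playable...
    print(round(min_fps, 1))  # -> 5.0: ...but the hitch frame is a visible stall
    ```
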
  • kmmatney - Wednesday, August 23, 2006 - link

    The charts show that the 7900 GT gets a huge boost from being factory overclocked. It would be nice to see whether the X1900 XT 256MB can also be overclocked at all, or if there is any headroom.
