It's almost ironic that the one industry we deal with that is directly related to entertainment has been the least exciting for the longest time. The graphics world has been littered with controversies over fairly trivial things as of late; the majority of articles you'll see relating to graphics these days don't have anything to do with how fast the latest $500 card will run. Instead, we're left to argue about the definition of the word "cheating". We pick at pixels in hopes of differentiating two of the fiercest competitors the GPU world has ever seen, and we debate over 3DMark.

What's interesting is that all of the things we have occupied ourselves with in recent times have been present throughout history. Graphics companies have always had questionable optimizations in their drivers; they have almost always differed in how they render a scene; and yes, 3DMark has been around for quite some time now (only recently has it become "cool" to take issue with it).

So why is it that, in the age of incredibly fast, absurdly powerful DirectX 9 hardware, we find it necessary to bicker about everything but the hardware? Because, for the most part, we've had absolutely nothing better to do with it. Our last set of GPU reviews focused on two cards - ATI's Radeon 9800 Pro (256MB) and NVIDIA's GeForce FX 5900 Ultra - both of which carried a hefty $499 price tag. What were we able to do with this kind of hardware? Run Unreal Tournament 2003 at 1600x1200 with 4X AA enabled and still have power to spare, or run Quake III Arena at fairytale frame rates. Both ATI and NVIDIA have spent millions of transistors and plenty of expensive die space, and have even sacrificed current-generation game performance, in order to bring us some very powerful pixel shader units in their GPUs. Yet we have been using these cards while letting their pixel shading muscles atrophy.

Honestly, since the Radeon 9700 Pro, we haven't needed any more performance to handle today's games. Take the most popular game in recent history, the Frozen Throne expansion to Warcraft III: it runs just fine on a GeForce4 MX - a $500 GeForce FX 5900 Ultra is in no way, shape or form necessary.

The argument we heard from both GPU camps was that you were buying for the future; a card you bought today could not only run all of your current games extremely well, but would also guarantee you good performance in the next generation of games. The problem with this argument was that there was no guarantee of when that "next generation" of games would actually arrive - and by the time it does, prices on these wonderfully expensive graphics cards may have fallen significantly. Then there's the fact that how well cards perform in today's pixel-shaderless games honestly says nothing about how DirectX 9 games will perform. And that brought us to the joyful issue of using 3DMark as a benchmark.

If you haven't noticed, we've never relied on 3DMark as a performance tool in our 3D graphics benchmark suites. The only times we've included it, we've either used it in the context of a CPU comparison or to make sure fill rates were in line with what we were expecting. With 3DMark 03, the fine folks at Futuremark had a very ambitious goal in mind - to predict the performance of future DirectX 9 titles using their own shader code, designed to mimic what various developers were working on. The goal was admirable; however, if we're going to make a recommendation to millions of readers, we're not going to base it solely on one synthetic benchmark that may or may not be indicative of the performance of future games. The difference between the next generation of games and what we've seen in the past is that the performance of one title is much less indicative of the performance of the rest of the market; as you'll see, we're no longer memory bandwidth bound - we're finally going to start dealing with games where the pixel shader programs, and how the GPU's execution units handle them, determine performance.
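
To put the shader-bound versus bandwidth-bound distinction in concrete terms, here is a minimal back-of-envelope sketch in Python. Every figure in it - resolution, overdraw, memory bandwidth, shader rate, bytes and instructions per pixel - is an assumed, illustrative number rather than a measured spec of any card discussed here; the point is simply that once a long pixel shader program runs on every pixel, arithmetic throughput rather than memory bandwidth sets the frame time.

# Back-of-envelope model: is a rendering pass limited by memory bandwidth
# or by pixel shader arithmetic? All figures are hypothetical, illustrative
# values - not measured specs of any particular GPU.

RESOLUTION = 1024 * 768   # pixels per frame (assumed)
OVERDRAW = 2.5            # average times each pixel gets shaded (assumed)

MEM_BANDWIDTH = 20e9      # bytes per second of memory bandwidth (assumed)
SHADER_RATE = 8 * 400e6   # shader ops per second: 8 pipes x 400 MHz (assumed)

def frame_time(bytes_per_pixel, ops_per_pixel):
    """Return (bandwidth-limited, shader-limited) seconds for one frame."""
    pixels = RESOLUTION * OVERDRAW
    bandwidth_time = pixels * bytes_per_pixel / MEM_BANDWIDTH
    shader_time = pixels * ops_per_pixel / SHADER_RATE
    return bandwidth_time, shader_time

# A DX7/DX8-style pass: heavy texture/framebuffer traffic, very little math.
legacy = frame_time(bytes_per_pixel=24, ops_per_pixel=2)
# A DX9-style pass: similar traffic, but a long PS 2.0 program on every pixel.
shader_heavy = frame_time(bytes_per_pixel=24, ops_per_pixel=60)

for name, (bw, sh) in (("legacy-style", legacy), ("shader-heavy", shader_heavy)):
    bound = "memory bandwidth" if bw > sh else "pixel shader"
    print(f"{name}: bandwidth {bw * 1000:.2f} ms, shader {sh * 1000:.2f} ms -> {bound} bound")

With these made-up numbers, the legacy-style pass comes out bandwidth bound while the shader-heavy pass is limited almost entirely by pixel shader throughput - which is exactly the shift described above, and exactly why older benchmarks tell us so little about DirectX 9 titles.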

All of this discussion isn't for naught, as it brings us to why today is so very important. Not too long ago, we were able to benchmark Doom3 and show you a preview of its performance; but with the game being delayed until next year, we have to turn to yet another title to finally take advantage of this hardware - Half-Life 2. With the game almost done and a benchmarkable demo due out on September 30th, it isn't a surprise that we were given the opportunity to benchmark the demos shown off by Valve at E3 this year.

Unfortunately, the story here isn't as simple as how fast your card will perform under Half-Life 2; of course, given the history of the 3D graphics industry, would you really expect something like this to be without controversy?


  • Anonymous User - Sunday, September 14, 2003 - link

    Umm.. could you PLEASE not use shockwave for those tables? Our firewalls & browser configs also won't let it through, so these reviews become pretty much useless to read.
  • Anonymous User - Sunday, September 14, 2003 - link

    Where are the benchmarks comparing HL2 on different CPUs? I mean, I obviously know I'm gonna have to upgrade my gf3 to a new card (first game to make me even think of that... didn't care for ut2k3), but what about my venerable Athlon XP 1800+? :(
  • Anonymous User - Saturday, September 13, 2003 - link

    #98, look at the 9700 Pro numbers, subtract 4-5%.

    Still, if I were to see another set of benchmarks, I'd DEFINITELY want these:

    GeForce4 MX440 OR GeForce2 Ti - As an example of how well GF2/GF4MX cards perform on low detail settings, being DX7 parts.
    GeForce3 Ti200 OR GeForce3 Ti500 - It's a DX8 part, and still respectably fast; lots of people have Ti200s, anyway.
    GeForce4 Ti4200 - This is an incredibly common and respectably fast card, tons of people would be interested in seeing the numbers for these.
    GeForce FX 5600 Ultra - Obvious.
    GeForce FX 5900 Ultra - Obvious.
    Radeon 8500 - It's still a good card, you know.
    Radeon 9500 Pro - Admit it, you're all interested.
    Radeon 9600 Pro - Obvious.
    Radeon 9700 vanilla - Because it would show how clock speed scales, and besides, these (and softmodded 9500s) are quite common.
    Radeon 9700 Pro - Obvious.
    Radeon 9800 Pro - Obvious.

    The GeForce FX 5200 and GeForce4 Ti4600 might be nice too, but the Radeons 9000 through 9200 would be irrelevant (R200-based).

    Also, obviously, I'd like to see them on two or three different detail levels (preferably three), to show how well some of the slower ones run at low detail and see how scalable Source really is. Speaking of scalability, a CPU scaling test would be extremely useful as well, like AnandTech's UT2003 CPU scaling test.

    This sort of thing would probably take a lot of time, but I'd love to see it, and I bet I'm not alone there. I think something like what AnandTech did with UT2003 would be great.

    Just my ~$0.11.
  • clarkmo - Saturday, September 13, 2003 - link

    I can't believe the Radeon 9500 hacked to a 9700 wasn't included in the benchmarks. What was he thinking? I guess Anand didn't have any luck scoring the right card. There are some still available, you know.
  • Anonymous User - Saturday, September 13, 2003 - link

    Quote from the Iraqi information minister:
    "Nvidia is kicking ATI's butt. Their hardware is producing vastly superior numbers."
  • Anonymous User - Saturday, September 13, 2003 - link

    Nvidia quote: "Part of this is understanding that in many cases promoting PS 1.4 (DirectX 8) to PS 2.0 (DirectX 9) provides no image quality benefit."

    3Dfx said some years ago that no one would ever use or notice the benefits of 32-bit textures. Nvidia did, and 3Dfx is gone. Will Nvidia follow the 3Dfx path?
  • Anonymous User - Saturday, September 13, 2003 - link

    Anyone remember when ati.com sold rubber dog crap?
  • Pete - Saturday, September 13, 2003 - link

    #74, straight from the horse's mouth:

    http://www.nvnews.net/#1063313306
    "The GeForce FX is currently the fastest card we've benchmarked the Doom technology on and that's largely due to NVIDIA's close cooperation with us during the development of the algorithms that were used in Doom. They knew that the shadow rendering techniques we're using were going to be very important in a wide variety of games and they made some particular optimizations in their hardware strategy to take advantage of this and that served them well. --John Carmack"

    Of course those D3 numbers were early (as are these HL2 ones), so things can change with updated drivers.
  • Anonymous User - Saturday, September 13, 2003 - link

    I don't know if it has already been asked, but even if it has I ask again for emphasis.

    Anand, it would be nice if you could add a 9600 non-pro bench to the results. You mention raw GPU power being the determining factor now, and as the 9600 Pro's memory clock advantage is more significant than its engine clock advantage, it would be interesting and informative to the budget/performance crowd to note the 9600 non-pro's performance in HL2.

    Thanks for all your informative, insightful, accurate, in-depth articles.
  • Anonymous User - Friday, September 12, 2003 - link

    I always find it interesting how people say ATI is the "little guy" in this situation.

    ATI has been a major player in the video card market since the eighties (I had a 16-bit VGA Wonder stuck in an 8-bit ISA slot in an 8088/2 system) and that didn't change much even when 3dfx came onto the scene. A lot of those voodoo pass through cards got their video from an ATI 3d expression or some other cheap 2d card (Cirrus Logic or Trident anyone?).

    Nvidia and ATI have been at each other's throats ever since Nvidia sold its first video card on the OEM market. 3dfx was just a little blip to ATI; Nvidia stealing away a bunch of its OEM sales with a bad 2d/good 3d video card, on the other hand - well, that was personal.

    I imagine someone at ATi saying something like this:

    "All of you guys working on making faster DACs and better signal quality are being transferred to our new 3d department. Its sort of like 2d cept its got one more d, thats all we know for right now.".

    ATI knows how to engineer and build a video card; they have been doing it for long enough. Same with Matrox (Matrox builds the Rolls-Royces of video cards for broadcast and video editing use). Nvidia, on the other hand, knew how to build 3d accelerators and not much else. The 2d on any early Rage card slaughtered the early Nvidia cards.

    'Course, the 3d sucked balls; that's what a Canopus Pure 3d was for, though.

    Now ATI has the whole "3d" part of the chip figured out. The driver guys have their heads wrapped around the things as well (before 3d cards came around, ATI's drivers were the envy of the industry). It's had many years of experience dealing with game companies, OS companies, standards, and customers. And its maturity is really starting to show after a few minor bumps and bruises.

    ATI wants its market back, and after getting ArtX it has the means to do it. Of course, Nvidia is going to come out of this whole situation a lot more grown up as well. Both companies are going to have to fight tooth and nail to stay on top now. If they don't, Matrox might just step up to the plate and bloody both of their noses. Or any of those "other" long-forgotten video card companies that have some engineers stashed away working on DX 7 chips.

    God knows what next month is going to bring.

    Anyways, sorry for the rant..
