It's almost ironic that the one industry we deal with that is directly related to entertainment has been the least exciting for the longest time. The graphics world has been littered with controversies surrounding very fickle things as of late; the majority of articles you'll see relating to graphics these days don't have anything to do with how fast the latest $500 card will run. Instead, we're left to argue about the definition of the word "cheating". We pick at pixels with hopes of differentiating two of the fiercest competitors the GPU world has ever seen, and we debate over 3DMark.

What's interesting is that all of the things we have occupied ourselves with in recent times have been present throughout history. Graphics companies have always had questionable optimizations in their drivers, they have almost always differed in how they render a scene, and yes, 3DMark has been around for quite some time now (only recently has it become "cool" to take issue with it).

So why is it that, in the age of incredibly fast, absurdly powerful DirectX 9 hardware, we find it necessary to bicker about everything but the hardware? Because, for the most part, we've had absolutely nothing better to do with this hardware. Our last set of GPU reviews was focused on two cards - ATI's Radeon 9800 Pro (256MB) and NVIDIA's GeForce FX 5900 Ultra, both of which carried a hefty $499 price tag. What were we able to do with this kind of hardware? Run Unreal Tournament 2003 at 1600x1200 with 4X AA enabled and still have power to spare, or run Quake III Arena at fairytale frame rates. Both ATI and NVIDIA have spent countless millions of transistors and plenty of expensive die space, and have even sacrificed current-generation game performance, in order to bring us some very powerful pixel shader units with their GPUs. Yet we have been using their cards while letting those pixel shading muscles atrophy.

Honestly, since the Radeon 9700 Pro, we haven't needed any more performance to satisfy the needs of today's games. Take the most popular game in recent history, the Frozen Throne expansion to Warcraft III: it runs just fine on a GeForce4 MX, so a $500 GeForce FX 5900 Ultra was in no way, shape or form necessary.

The argument we heard from both GPU camps was that you were buying for the future; a card you bought today could not only run all of your current games extremely well, but would also guarantee good performance in the next generation of games. The problem with this argument was that there was no guarantee of when that next generation of games would arrive, and by the time it did, prices on these wonderfully expensive graphics cards might have fallen significantly. Then there's the fact that how well cards perform in today's pixel-shader-less games honestly says nothing about how they will perform in DirectX 9 games. And that brought us to the joyful issue of using 3DMark as a benchmark.

If you haven't noticed, we've never relied on 3DMark as a performance tool in our 3D graphics benchmark suites. The only times we've included it, we've used it either in the context of a CPU comparison or to make sure fill rates were in line with what we were expecting. With 3DMark 03, the fine folks at Futuremark had a very ambitious goal in mind - to predict the performance of future DirectX 9 titles using their own shader code, designed to mimic what various developers were working on. The goal was admirable; however, if we're going to make a recommendation to millions of readers, we're not going to base it solely on one synthetic benchmark that may or may not be indicative of the performance of future games. The difference between the next generation of games and what we've seen in the past is that the performance of one game is much less indicative of the performance of the rest of the market; as you'll see, we're no longer memory bandwidth bound - performance will instead be determined by each game's pixel shader programs and how well the GPU's execution units handle them.
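
To see why the bottleneck moves, consider a rough back-of-the-envelope model. The C++ sketch below uses assumed, not measured, throughput figures for an early DirectX 9-class card; the only point it makes is that once per-pixel arithmetic grows beyond a handful of operations, the shader units, rather than memory bandwidth, dictate the frame rate.

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

int main() {
    // All figures below are assumptions for illustration, not measurements.
    const double pixels_per_frame = 1600.0 * 1200.0 * 2.5;  // resolution times assumed overdraw
    const double bytes_per_pixel  = 16.0;                    // assumed color + Z + texture traffic
    const double bandwidth_bps    = 20.0e9;                  // ~20 GB/s of memory bandwidth (assumed)
    const double shader_ops_rate  = 5.0e9;                   // ~5 billion pixel shader ops/s (assumed)

    for (int ops_per_pixel : {2, 8, 32}) {                   // short vs. long pixel shader programs
        const double mem_time    = pixels_per_frame * bytes_per_pixel / bandwidth_bps;
        const double shader_time = pixels_per_frame * ops_per_pixel / shader_ops_rate;
        const double frame_time  = std::max(mem_time, shader_time);  // whichever unit saturates first
        std::printf("%2d ops/pixel: %s-bound, roughly %.0f fps\n",
                    ops_per_pixel,
                    shader_time > mem_time ? "shader" : "bandwidth",
                    1.0 / frame_time);
    }
    return 0;
}
```

With only a couple of operations per pixel the model is bandwidth-bound, which is the world today's games live in; at DirectX 9-style shader lengths the same model becomes entirely shader-bound.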

All of this discussion isn't for naught, as it brings us to why today is so very important. Not too long ago, we were able to benchmark Doom 3 and show you a preview of its performance; but with that game delayed until next year, we have to turn to yet another title to finally take advantage of this hardware - Half-Life 2. With the game almost done and a benchmarkable demo due out on September 30th, it isn't surprising that we were given the opportunity to benchmark the demos Valve showed off at E3 this year.

Unfortunately, the story here isn't as simple as how fast your card will perform under Half-Life 2; of course, given the history of the 3D graphics industry, would you really expect something like this to be without controversy?

  • Anonymous User - Friday, September 12, 2003 - link

    #61... I take it YOU have the money to shell out for top-of-the-line hardware????????? I sure as hell don't, but like #42 said, "a more widely used comp".

    I myself am running a 1700+ at 2400+ speeds. No way in hell am I going to spend 930 bucks (in CDN funds) on a 3.2C P4, and that's NOT including the mobo and RAM; I'm also not going to spend 700 CDN on a Barton 3200+ either. For the price of that P4 chip I can get a whole decent comp. It may not be able to run Half-Life at its fullest, but still. I'm not even interested in HL2, it's just not the kind of game I play, but if I was, what I typed above would still be valid..


    Anand... RUN THESE HL2 BENCHES ON HARDWARE THE AVERAGE PERSON CAN AFFORD!!!!!!!!!!!!!!!!!!!!!!!! Not the spoiled rich kid crap.....
  • Anonymous User - Friday, September 12, 2003 - link

    #42 "...should have benchmarked on a more widely used computer like a 2400 or 2500+ AMD...":

    The use of 'outdated' hardware such as your 2400 AMD would have increased the possibility of CPU limitations taking over the benchmark. Historically, all video card benchmarks have used the fastest (or near-fastest) CPU available to ensure the GPU is able to operate in the best possible scenario (see the short sketch below this comment). If you want to know how your 2400 will work with HL2, wait and buy it when it comes out.

    In reference to the 16/32-bit floating point shaders and how that applies to ATI's 24-bit shaders:

    It was my understanding that this quote was referencing the need for NVIDIA to use its 32-bit shaders, as future support for its 16-bit shaders would not exist. I don't see this quote pertaining to ATI's 24-bit shaders, as they meet the DX9 spec. The chance of future HL2-engine-based games leaving ATI users out in the cold is somewhere between slim and none. For an example of how software vendors react to leaving out support for a particular line of video cards, simply look at how much work Valve put into making NVIDIA's cards work. If it were feasible for a software vendor to leave out support for an entire line like you are referring to (ATI in your inference), we would have had HL2 shipping by now (for ATI only, though...).
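
A minimal sketch of the CPU-limitation point above, using assumed per-frame timings rather than anything measured: each frame costs roughly the longer of the CPU's work and the GPU's work, so pairing very different cards with a slow CPU flattens them all to the same frame rate, while a fast CPU lets the differences between the GPUs show.

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

int main() {
    const double gpu_ms[] = {5.0, 8.0, 14.0};   // hypothetical high/mid/low-end cards (assumed)
    for (double cpu_ms : {20.0, 6.0}) {         // slow CPU vs. fast CPU cost per frame (assumed)
        std::printf("CPU needs %.0f ms per frame:\n", cpu_ms);
        for (double g : gpu_ms)
            // Frame time is roughly whichever of the two takes longer.
            std::printf("  GPU at %4.1f ms -> ~%.0f fps\n", g, 1000.0 / std::max(cpu_ms, g));
    }
    return 0;
}
```
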
  • Anonymous User - Friday, September 12, 2003 - link

    #58: http://myweb.cableone.net/jrose/Jeremy/HL2.jpg
  • Anonymous User - Friday, September 12, 2003 - link

    Are pixel shader operations anti-aliased on current-generation video cards? I ask because in the latest Half-Life 2 technology demo movie, anti-aliasing is enabled. Everything looks smooth except for the specular highlights on the roof and other areas, which are still full of shimmering effects. It just seems a little sore on the eyes.
  • Anonymous User - Friday, September 12, 2003 - link

    An observation:

    Brian Burke = Iraqi Information Officer

    I mean, this guy rode 3dfx into its dirt nap, and now he's providing the same great service to NVIDIA.

    Note to self: Never buy anything from a company that has this guy spewing lies.
  • Anonymous User - Friday, September 12, 2003 - link

    OK, this article was great.

    For us freaks, can you do a supplemental article? Do 1600x1200 benchmarks!!!

    Things will probably crawl, but it would be nice to know that this is the worst case at this resolution once ATI and NVIDIA come out with next-gen cards.

    Also, was any testing done to see whether the benchmarks were CPU- or GPU-limited? Maybe use the CPU utilization monitor in Windows to see what the CPU thought. Maybe a 5.0GHz processor down the road will solve some headaches. Doubtful, but maybe....
  • Anonymous User - Friday, September 12, 2003 - link

    What's really funny is that Maximum PC magazine built an $11,000 "Dream Machine" using a GeForce FX 5900, and I can build a machine for less than $2,000 that beats it using a 9800 Pro.

    Long live my 9500 Pro!
  • Anonymous User - Friday, September 12, 2003 - link

    I can play Frozen Throne, and I am doing so on a GeForce2 MX, LOL (on a P2 @ 400MHz).
  • Anonymous User - Friday, September 12, 2003 - link

    Look at my #46 posting - I know it's different engines, different APIs, different driver revisions, etc...
    but still, it's interesting..

    enigma
  • Anonymous User - Friday, September 12, 2003 - link

    #52: different engines, different results. HL2 is probably more shader-limited than Doom 3. The 9600 Pro has strong shader performance, which narrows the gap in shader-limited situations such as HL2.

    BTW, where did you get those Doom 3 results? The only Doom 3 benches I know about are based on the old alpha or that invalid test from back when the NV35 was launched...
