Introduction

When it was drafted, DirectX 10 promised to once again change the way developers approach real-time 3D graphics programming. Not only would graphics hardware be capable of executing short custom programs (called shaders) on vertices and fragments (pixels), but developers would be able to move much more high-level polygon work to the GPU through geometry shaders. Pulling polygon-level manipulation off the CPU opens up a whole host of possibilities for the developer.

With adequate performance, many of the geometric details now simulated through other techniques could be rendered in simple, straightforward ways involving less overhead. Techniques like normal mapping, parallax occlusion mapping, and many others exist solely to generate the illusion of additional geometry. Ever wonder why a face can be incredibly detailed while the silhouette of the same head looks more like a stop sign than a melon? This is because modern real-time 3D relies on low-polygon models augmented with pixel-level "tricks" to make up for the missing geometry.
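To make the "trick" concrete, here is a minimal sketch in plain Python (not shader code; the normals, light direction, and function names are invented for illustration) of the idea behind normal mapping: lighting is computed with a per-pixel normal sampled from a texture rather than the flat geometric normal, so a flat triangle shades as if it were bumpy while its silhouette stays flat.

```python
# Illustrative sketch of normal mapping, not real shader code.
# A flat surface shaded with a perturbed per-pixel normal looks
# bumpy, even though no extra geometry exists.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    # Diffuse term: max(0, N . L) with both vectors normalized
    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

flat_normal = (0.0, 0.0, 1.0)      # geometric normal of the flat surface
mapped_normal = (0.3, -0.2, 0.93)  # perturbed normal "sampled" from a normal map
light = (0.5, 0.5, 1.0)            # arbitrary light direction

flat_shade = lambert(flat_normal, light)
bump_shade = lambert(mapped_normal, light)
# The two shading results differ, but the silhouette is unchanged.
```

The silhouette problem follows directly: the trick only changes how pixels are lit, so the outline of the mesh still reveals the underlying low-polygon geometry.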

There are lots of cool things we can do with the ability to process geometry on the GPU. We could see particle systems run entirely on the GPU, fine-grained model details like fur that can be affected by the physical characteristics of the world, procedural geometry for highly dynamic environments, "real" displacement mapping, and geometry amplification that can add detail to models. Some of these things may show up sooner than others in games, as we will still be limited by the performance of the hardware when it comes to implementing these features.
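As a rough sketch of what geometry amplification means, the following Python snippet (a hypothetical CPU-side illustration; a DX10 geometry shader would do the equivalent work on the GPU in HLSL) splits one triangle into four by inserting edge midpoints, turning one input primitive into four output primitives:

```python
# Hypothetical illustration of geometry amplification: one input
# triangle becomes four output triangles via edge midpoints.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri):
    a, b, c = tri
    ab = midpoint(a, b)
    bc = midpoint(b, c)
    ca = midpoint(c, a)
    # Three corner triangles plus the center triangle
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
amplified = subdivide(tri)
# One primitive in, four primitives out.
```

Doing this on the GPU is the point: the CPU sends one coarse triangle, and the extra vertices never cross the bus at all.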

There are, of course, other benefits to DX10. We explored these in previous articles for those who are interested, but here's a quick rundown. Object and state change overhead has been decreased, allowing for less CPU involvement when sending data to the GPU. This should improve performance and give developers more headroom in building larger, more complex scenes. We have more rigidly defined specifications, which means developers can focus less on how individual hardware will handle their game and more on the features they want to implement. With a larger focus on data types and accuracy, the results of calculations will be more consistent across hardware, and developers will have more flexibility in choosing how their data is processed.

In general, DX10 also offers a more generic computing model with lots of flexibility. This will be very important going forward, but right now developers still have de facto limitations on shader length and complexity based on the performance of the hardware that currently exists. As developers better learn how to use the flexibility they have, and as hardware designers continue to deliver higher performance year after year, we will see DirectX 10 applications slowly start to blossom into what everyone has dreamed they could be.

For now, before we get into features and performance, we would like to temper your expectations. Many of the features currently implemented in DirectX 10 could also be done using DirectX 9. Additionally, those features that are truly DX10 only either don't add much beyond what we would get otherwise, or require quite a bit of processing power to handle. Thus, we either get something that was already possible or something that requires expensive hardware.

The Test
59 Comments

  • DerekWilson - Thursday, July 05, 2007 - link

    this is true -- our current information shows that AMD does relatively worse compared to NVIDIA under DX10 than under DX9.
  • rADo2 - Thursday, July 05, 2007 - link

    "there are applications where the 2900 xt does outperform its competition" - where? The 2900XT has 22FPS, the 8800GTX 24FPS, and the 8800ULTRA 26FPS. Despite "crippled for NVIDIA" / "paid by ATI", I still see the green camp outperforming ATI.

    And in Lost Planet NVIDIA has 2x (!) better performance.

    It is not even worth considering ATI for purchase.
  • smitty3268 - Thursday, July 05, 2007 - link

    Read the comment again: The point is that the 2900XT does not compete against the 8800GTX, it competes against the 8800GTS, which it did outperform in that test.

    It certainly isn't the fastest card available, but I could also make a statement like "The GeForce7300Go outperforms its competition" without saying it's the fastest thing available. I'm just saying it beats the cards in a similar price range.
  • KeithTalent - Thursday, July 05, 2007 - link

    Um, what about price? Last time I checked the 8800GTX still costs about $150 more than the 2900XT. I will not even bring up the Ultra which is still way overpriced.

    So for $150 less you get a card that competes with the GTX some of the time and is more than capable of playing most games maxed out at high resolution. That is why the 2900XT is worth considering for purchase.

    KT

  • rADo2 - Thursday, July 05, 2007 - link

    Problem is, the 2900XT is many times NOT playable. Its performance is sometimes close to NVIDIA's, sometimes 2x lower. And this applies to both DX9 and DX10. Of the 60+ games I own, I can assume ATI would "suck" on 30 of them.

    Look at Lost Planet with AA, 22FPS versus 40-50FPS is a huge difference in playability. On 8800GTX/ULTRA you can play even at 1920x1200 (30FPS), with 2900XT even 1280x1024 is giving you problems.
  • KeithTalent - Thursday, July 05, 2007 - link

    Well I have two of them and they work more than fine on every single game I have tried so far, including demos, betas, and a couple of older games (using Catalyst 7.6).

    Lost Planet is a POS port anyway, but when I ran the test benchmark in DX9 with Crossfired 2900XTs I had frames well above 40 with everything maxed at 1920x1200 so I am somewhat confused by the numbers here. I will have to wait until I am home to see my exact numbers, but they were much higher than what was presented here. Maybe there is something wonky with the beta drivers?

    I'll post back tonight once I have verified my numbers.

    KT

  • DerekWilson - Thursday, July 05, 2007 - link

    We didn't use the demo benchmark, and the release version (afaik) does not include an updated version of the benchmark either.

    For the Lost Planet test, we had to use FRAPS running through the snow. This will absolutely give lower performance, as the full benchmark also runs through a couple of indoor scenes with higher framerates.

    I mentioned FRAPS on the test page, but I'll add to the Lost Planet section the method we used for testing.
  • defter - Thursday, July 05, 2007 - link

    Do Intel CPUs currently outperform AMD or not? After all, a $200 AMD CPU is about as fast as a $200 Intel CPU....

    It's natural that slower parts have a good price/performance ratio compared to the competition, since otherwise nobody would buy them. However, this has nothing to do with which one is fastest...

  • KeithTalent - Thursday, July 05, 2007 - link

    Not sure what you are getting at, I was responding to this ridiculous statement:

    quote: "It is not even worth considering ATI for purchase."

    Which is completely untrue, because price can be a big consideration.

    With respect to CPUs, if you spend an extra $50 - $100 for the better Intel processor, you are getting exponentially better performance (I know this from experience), while if you spend $150 more for a GTX, you are getting only marginally better performance.

    KT
