Introduction

When it was drafted, DirectX 10 promised to once again change the way developers approach real-time 3D graphics programming. Not only would graphics hardware be capable of executing short custom programs (called shaders) on vertices and fragments (pixels), but developers would be able to move much more high-level polygon work to the GPU through geometry shaders. Pulling polygon level manipulation off the CPU opens up a whole host of possibilities to the developer.

With adequate performance, many of the geometric details simulated through other techniques could be applied in simple, straightforward ways involving less overhead. Techniques like normal mapping, parallax occlusion mapping, and many others exist solely for generating the illusion of additional geometry. Ever wonder why a face can be incredibly detailed while the silhouette of the same head looks more like a stop sign than a melon? This is because modern real-time 3D relies on low polygon models augmented with pixel level "tricks" to make up for it.

There are lots of cool things we can do with the ability to process geometry on the GPU. We could see particle systems on the GPU, fine grained model details like fur that can be affected by the physical characteristics of the world, procedural geometry for highly dynamic environments, "real" displacement mapping, and geometry amplification that can add detail to models. Some of these things may show up sooner than others in games, as we will still be limited by the performance of the hardware when it comes to implementing these features.

There are, of course, other benefits to DX10. We explored these in previous articles for those who are interested, but here's a quick rundown. Object and state change overhead has been decreased, allowing for less CPU involvement when sending data to the GPU. This should improve performance and give developers more headroom in building larger, more complex scenes. We have more rigidly defined specifications, which means developers can focus less on how individual hardware will handle their game and more on the features they want to implement. With a larger focus on data types and accuracy, the results of calculations will be more consistent between hardware, and developers will have more flexibility in choosing how their data is processed.

In general, DX10 also offers a more generic computing model with lots of flexibility. This will be very important going forward, but right now developers still have de facto limitations on shader length and complexity based on the performance of the hardware that currently exists. As developers better learn how to use the flexibility they have, and as hardware designers continue to deliver higher performance year after year, we will see DirectX 10 applications slowly start to blossom into what everyone has dreamed they could be.

For now, before we get into features and performance, we would like to temper your expectations. Many of the features currently implemented in DirectX 10 could also be done using DirectX 9. Additionally, those features that are truly DX10 only either don't add much beyond what we would get otherwise, or require quite a bit of processing power to handle. Thus, we either get something that was already possible or something that requires expensive hardware.

The Test
59 Comments

  • jay401 - Friday, July 06, 2007 - link

    They're right though, the charts need work. They are not intuitive, and there are multiple better ways to present 'percent change' data that would make sense at first glance, without the reader having to decipher an unintuitive method that detracts from the readability of the article. Reply
  • DerekWilson - Friday, July 06, 2007 - link

    The dx9 vs dx10 scaling graphs have been altered to present the data in a different way.

    Please let me know if this is still not adequate.
    Reply
  • Andyvan - Thursday, July 05, 2007 - link

    I had the exact same reaction to the charts. For the Lost Planet chart with the two colors, either pick better (more standard) colors, or make the performance drop bars grow to the left (or down), and the performance increase bars grow to the right (or up).

    -- Andyvan
    Reply
  • PrinceGaz - Thursday, July 05, 2007 - link

    The best thing to do with those charts is change them to show relative performance in DX10 compared to DX9, with 100% meaning no change (same performance in DX10 as DX9). Improvements with DX10 give scores above 100%, reduced performance gives a result below 100%.

    Doing that would make the graphs much easier to understand than the current mess.
    Reply
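    The normalization PrinceGaz describes is simple to compute; here is a minimal sketch, with the caveat that the card names and frame rates below are made-up placeholders for illustration, not the article's actual benchmark data:

    ```python
    # Relative DX10 performance, normalized so 100% means "same as DX9".
    # Values above 100% indicate an improvement under DX10; below 100%, a drop.
    results = {
        "Card A": {"dx9": 60.0, "dx10": 45.0},  # hypothetical fps numbers
        "Card B": {"dx9": 50.0, "dx10": 55.0},
    }

    for card, fps in results.items():
        relative = fps["dx10"] / fps["dx9"] * 100  # 100% = no change
        print(f"{card}: {relative:.0f}% of DX9 performance")
    ```

    Charted this way, every bar grows in the same direction from a common 100% baseline, which avoids the two-color confusion in the original graphs.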
  • sterlinglittle - Thursday, July 05, 2007 - link

    This might be a silly question as I can't recall the current status of MultiGPU performance with Vista drivers. Will it be possible to test these games with SLI/CrossFire configurations soon? Reply
  • gigahertz20 - Thursday, July 05, 2007 - link

    The results show exactly why I am waiting to buy a DX10 video card. All these people who rushed out to buy a Geforce 8800GTX or AMD 2900XT..hah..especially all those 2900XT fanboys who said the R600 would destroy the 8800GTX in DX10 benchmarks because it has 320 stream processors and a 512-bit memory interface....well guess what, the benchmarks are in, and they show the R600 is still the power hungry POS video card it is. Reply
  • KeithTalent - Thursday, July 05, 2007 - link

    I'm not sure how it is a 'hah' to the people that purchased these cards as they still blow everything else out of the water in DX9, I mean it is not even close.

    So for those of us running at higher resolutions (1920x1200 or higher), an 8800/2900 or two made perfect sense (and still does). I doubt very many people were expecting great DX10 performance right away anyway, particularly as the games available barely make use of it.

    KT
    Reply
  • Sceptor - Thursday, July 05, 2007 - link

    I agree with your idea, I've always skipped over one generation of hardware to another.

    Especially when users are still "testing" Vista gaming for Microsoft, Nvidia and AMD, I see no need to part with my money until performance is at least on par with DX9.

    Good article...Nice to see some real numbers on DX10 vs DX9
    Reply
  • DerekWilson - Thursday, July 05, 2007 - link

    there are applications where the 2900 xt does outperform its competition, as is shown by call of juarez.

    it really depends on how developers go forward. we'll have to wait and see what happens. judging the future of AMD/NVIDIA competition under dx10 isn't really feasible with only 3 apps to go by.

    one thing's for sure though, we'd love to see better performance out of the mainstream parts from both camps. And having some parts to fill in the gap between the lower and higher end hardware would be nice too.
    Reply
  • defter - Thursday, July 05, 2007 - link

    I think the point here is that many claimed that "R6xx is designed for DX10, don't judge it based on DX9 performance blah blah blah". Those claims gave the impression that relative DX10 performance of the R6xx series would be much better than their DX9 performance.

    Your tests show that on average, R6xx takes a HIGHER performance hit from moving to DX10. Thus, relative to the competition, R6xx is even SLOWER under DX10 than it was under DX9.
    Reply
