Introduction

When it was drafted, DirectX 10 promised to once again change the way developers approach real-time 3D graphics programming. Not only would graphics hardware be capable of executing short custom programs (called shaders) on vertices and fragments (pixels), but developers would also be able to move much of the higher-level polygon work to the GPU through geometry shaders. Pulling polygon-level manipulation off the CPU opens up a whole host of possibilities for the developer.

With adequate performance, many of the geometric details now simulated through other techniques could be implemented in simpler, more straightforward ways with less overhead. Techniques like normal mapping, parallax occlusion mapping, and many others exist solely to generate the illusion of additional geometry. Ever wonder why a face can be incredibly detailed while the silhouette of the same head looks more like a stop sign than a melon? This is because modern real-time 3D relies on low-polygon models augmented with pixel-level "tricks" to make up for the missing geometry.
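
To make "pixel-level tricks" concrete, here is a minimal sketch of tangent-space normal mapping in HLSL (shader model 4.0). The texture bindings, structure names, and the simple Lambertian lighting are our illustrative assumptions, not code from any shipping game:

```hlsl
Texture2D    gNormalMap : register(t0);   // illustrative: RGB-encoded tangent-space normals
SamplerState gSampler   : register(s0);

struct PSIn
{
    float4 pos      : SV_POSITION;
    float2 uv       : TEXCOORD0;
    float3 normal   : NORMAL;       // interpolated geometric normal (world space)
    float3 tangent  : TANGENT;      // interpolated tangent (world space)
    float3 lightDir : TEXCOORD1;    // direction to the light (world space)
};

// The mesh stays low-poly; only the per-pixel normal is perturbed,
// so lighting responds to detail the geometry doesn't actually have.
float4 NormalMapPS(PSIn input) : SV_Target
{
    // Unpack the stored normal from [0,1] into [-1,1].
    float3 n = gNormalMap.Sample(gSampler, input.uv).xyz * 2.0f - 1.0f;

    // Rebuild an orthonormal tangent frame and rotate the sample into it.
    float3 N = normalize(input.normal);
    float3 T = normalize(input.tangent - dot(input.tangent, N) * N);
    float3 B = cross(N, T);
    n = normalize(n.x * T + n.y * B + n.z * N);

    // Simple Lambertian term using the perturbed normal.
    float diffuse = saturate(dot(n, normalize(input.lightDir)));
    return float4(diffuse.xxx, 1.0f);
}
```

Note that nothing here touches the silhouette: the triangles are exactly as coarse as before, which is precisely the limitation described above.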

There are lots of cool things we can do with the ability to process geometry on the GPU. We could see particle systems run entirely on the GPU, fine-grained model details like fur that respond to the physical characteristics of the world, procedural geometry for highly dynamic environments, "real" displacement mapping, and geometry amplification that adds detail to models. Some of these things may show up sooner than others in games, as we will still be limited by the performance of the hardware when it comes to implementing these features.
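
As a rough sketch of what this looks like in practice (our illustration, not code from any shipping title), here is a Direct3D 10 geometry shader in HLSL (shader model 4.0) that expands each particle point into a screen-facing quad -- geometry amplification happening entirely on the GPU. The structure names, the ParticleGS entry point, and the hard-coded sprite size are assumptions for illustration:

```hlsl
struct PointIn { float4 pos : SV_POSITION; float4 color : COLOR0; };
struct QuadOut { float4 pos : SV_POSITION; float4 color : COLOR0; float2 uv : TEXCOORD0; };

// One input vertex per particle comes in; four vertices (one quad as a
// triangle strip) go out. The CPU never sees the expanded geometry.
[maxvertexcount(4)]
void ParticleGS(point PointIn input[1], inout TriangleStream<QuadOut> stream)
{
    // Corner offsets applied in clip space for simplicity; a real shader
    // would scale by input[0].pos.w to keep sprites a fixed screen size.
    static const float2 offsets[4] =
    {
        float2(-0.05f,  0.05f), float2(0.05f,  0.05f),
        float2(-0.05f, -0.05f), float2(0.05f, -0.05f)
    };
    static const float2 uvs[4] =
    {
        float2(0, 0), float2(1, 0),
        float2(0, 1), float2(1, 1)
    };

    for (int i = 0; i < 4; ++i)
    {
        QuadOut v;
        v.pos   = input[0].pos + float4(offsets[i], 0.0f, 0.0f);
        v.color = input[0].color;
        v.uv    = uvs[i];
        stream.Append(v);   // four appends emit one two-triangle strip
    }
}
```

Under DirectX 9, the same expansion would have to happen on the CPU (or through instancing tricks), with four vertices uploaded per particle instead of one.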

There are, of course, other benefits to DX10. We explored these in previous articles for those who are interested, but here's a quick rundown. Object and state change overhead has been decreased, allowing for less CPU involvement when sending data to the GPU. This should improve performance and give developers more headroom in building larger, more complex scenes. The specification is more rigidly defined, which means developers can focus less on how individual hardware will handle their game and more on the features they want to implement. With a larger focus on data types and accuracy, the results of calculations will be more consistent across hardware, and developers will have more flexibility in choosing how their data is processed.

In general, DX10 also offers a more generic computing model with lots of flexibility. This will be very important going forward, but right now developers still face de facto limits on shader length and complexity based on the performance of current hardware. As developers learn to better use the flexibility they have, and as hardware designers continue to deliver higher performance year after year, we will see DirectX 10 applications slowly start to blossom into what everyone has dreamed they could be.

For now, before we get into features and performance, we would like to temper your expectations. Many of the features currently implemented in DirectX 10 could also be achieved using DirectX 9. Additionally, the features that are truly DX10-only either don't add much beyond what we would get otherwise or require quite a bit of processing power to handle. Thus, we either get something that was already possible or something that requires expensive hardware.

The Test

59 Comments


  • SniperWulf - Thursday, July 05, 2007 - link

    It would have been nice if you guys could have included numbers with the latest publicly available drivers (beta or not) from ATI and NV, just to get an idea of what type of performance we can expect in the future.
  • DerekWilson - Thursday, July 05, 2007 - link

    Actually, the beta drivers will give you a better idea of what to expect than the current WHQL drivers, which is why we used them.
  • TA152H - Thursday, July 05, 2007 - link

    First of all, the whole name of the article is a poor choice of words. DX10 is pretty, but it's not fast. At least I think that was your point.

    Next, the choice of processors is too limited. I don't know where you guys get your sales figures from, but Intel's extreme processors aren't their best sellers. Since part of the point of DX10 is to move work onto the video card that was done on the CPU before, you might want another data point with a relatively inexpensive processor to see how DX9 relates to DX10.

    Saying there will not be ANY DX10-only games for the next two years is strange. You will have an installed base to overcome, but for gamers that's not so important because they upgrade often, and some games aren't designed for old hardware anyway. Aren't there games now that don't play well on two-year-old hardware? I would guess someone will decide well before then that designing for obsolete software isn't worth the effort and cost, and will ignore the installed base to create a better product that takes less time and costs less. Going a little further, if you had an exceptionally good DX10 game, there would be enough people to make it profitable if you were the only one who did it. You always have the dorks that like using words like "eye-candy" (which probably means they have insufficient testosterone in their bloodstream) who will go out and buy whatever is prettiest. No DX9 support means no wasted developer effort, which means faster DX10 development and lower cost. So, before two years are up, it's entirely plausible that someone making a cutting-edge game will decide it's not worth the effort, or cost, of supporting an obsolete software base.

    OK, those charts are horrible. Put a little effort into them, instead of letting everyone know they are an afterthought that you hate doing. The whole purpose of charts is to disseminate information quickly and intuitively; your charts totally fail at this. Top chart, you have the ambiguous "% change from DX9 performance". Now most people, just viewing the chart, would assume something to the right is an enhancement. But no! Everything is less. The next chart uses the same words, but this time green denotes a gain in performance. Yes, blue is typically what people associate with negatives, green with positives. Ever hear of "in the red", or "in the black"? I have. Most people have. Red for negatives and black for positives would have been a little more understandable if you simply refuse to put in a guide.

    Next you have "perf drop with xxx", which is what the first chart should have read, but you didn't want to go back and fix it. Even then, red would have been a better color for a negative; it's more intuitive, and that's what charts are about.

    Last, maybe Nvidia and AMD are right that features are important. I was saying this earlier: you're comparing apples and oranges when you say performance went down. Did DX10 performance go down? No. So, broadly, performance did not. DX10 performance didn't even exist before, nor did some of the other features. I'm not saying everyone should buy these cards at all; people should measure what they need, and certainly some of the older cards with their obsolete feature set are fine for many, many people. At least today. But, by the same token, someone running Vista is probably going to want DX10 hardware, and they may not have massive amounts of money to spend on it either. So, the low-end cards make sense too. Someone wanting the best possible visual experience WILL buy a DX10 card, and an expensive one. They have reasons for existing; they aren't broadly failures. But I think what irritates you is that they aren't broadly successful either, like previous generations were and you are used to. It's understandable, but I think you're taking this so far that you aren't seeing the value in them either. Obviously, since everyone (Intel, Nvidia, and AMD) has disappointed you on DX10 performance, shouldn't that show it's not an easy implementation, and that maybe they aren't all screwing up? If you give an exam and everyone gets a 10, maybe you need to look at the nature of the grading or the test.
  • NT78stonewobble - Thursday, July 05, 2007 - link

    I think the point regarding the low-end DX10 hardware was the following:

    "Performance is so low that you will, in reality, not be able to use DX10."

    So everyone who can't afford high-end DX10 hardware would be better off with a DX9 card that is cheaper and performs the same (or maybe even better).

  • TA152H - Thursday, July 05, 2007 - link

    Not true; super-demanding video games will not be the only software that runs DX10, and you will not be forced to run them at very high settings. For gaming, yes, but gaming isn't the whole picture.
  • DerekWilson - Friday, July 06, 2007 - link

    In these cases, DX10 wouldn't necessarily be a better fit than DX9 -- or (more probably) OpenGL ...
  • Martimus - Thursday, July 05, 2007 - link

    While you got voted down because your comment was set in an attacking tone, I felt that the comment was very well written and that you had some very valid points. As for the charts, I thought that those were performance increases until I read your comment. I doubt most casual readers will take the time to understand the counterintuitive charts. The charts are what most people look at, so they are really the most important part of the article to get right. Maybe the author will learn from this article and do a better job on the charts in the future.
  • TA152H - Thursday, July 05, 2007 - link

    You know, when I see people like him writing things without thinking like that, it really irritates me because it's so uninformed, and I get angry. I wish I didn't react like that, but let him who is without sin cast the first stone. I guess it's better than being passionless. I really don't mind negative votes; I'd be more worried if I got positive ones.

    The charts really got under my skin, because he made no effort. It's just half-assed garbage. When you consider how many people read them, it's unsupportable. If I did work like that, I'd be ashamed.

    Instead of taking a step back and saying, well, all the DX10 hardware hasn't been what we expected, maybe there is a reason for it, they quickly damn every company that makes it. It's got to be comparative, because obviously these authors really don't know anything about designing GPUs (nor do I, for that matter, so I'm not saying it to be vicious). But when Intel, AMD, and Nvidia all have disappointing DX9 performance with their DX10 cards, and DX10 performance broadly isn't great (although it seems better unless you add features), then maybe you should take a step back and say, "Hmmmm, maybe we need to adjust our expectations."

    It's kind of funny, because they do this with microprocessors already, because there was a fundamental change that made everyone reevaluate things. The Core 2 would be a complete piece of crap if you judged it by 1980s and 1990s standards. It was, by those standards, an extremely small improvement over the P7 core, and even smaller over the Yonah. But the way processors are graded is different now, because our expectations were lowered somewhat by the P6 (it was a great processor, but again, it wasn't as big a jump over the P5 as the P5 was over the P4, or the P4 over the P3), and greatly by the K7 and P7. So, maybe GPUs are hitting that point in maturity where the incredible improvements in performance are a thing of the past, and the pace will slow down. Now, someone will correctly say, well, DX9 performance has decreased in some cases. That's fine, but we also have a precedent in the processor world. Let's go back to the P6. It ran the majority of existing code (Real Mode) WORSE than the P5, because it was designed for 386 protected mode and didn't care much about Real Mode. Or how about the 386? It wasn't any better on 16-bit code in terms of performance (it did add virtual 8086 mode, though that wasn't about performance), but it added two new modes, the most important of which (although not at the time) was 386 protected mode.

    Articles like this irritate me because they are so simplistic and have so little thought put into them. They lack perspective.
  • titan7 - Thursday, July 12, 2007 - link

    Here's a bit of info on the cards. Back in the D3D8 era, Matrox introduced the first 256-bit memory bus with the Parhelia. All things being equal, that provides twice the bandwidth of a 128-bit bus, but it is more expensive to manufacture.
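
    (Back-of-the-envelope illustration, with made-up but representative clocks: 128 bits is 16 bytes per transfer, so at an effective 1,000 MT/s that's 16 GB/s; a 256-bit bus at the same data rate moves 32 GB/s.)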

    NVIDIA and ATI still had 128-bit buses at the time, but switched to 256-bit buses for their high-end D3D9 cards (Radeon 9700 and GeForce FX 5900) because 128 bits just couldn't keep up.

    We're four generations ahead now, and both IHVs have used 128-bit buses for their mid-range D3D10 cards, even though 128 bits was becoming a bottleneck during the D3D8 era five generations ago!

    This article was right on the money: NVIDIA focused on making their 8600 pin-compatible with their old mid-range 6600 card, which is now three generations old! Intel didn't even keep the P4 compatible with itself! Yay, NVIDIA allowed Asus, etc., to save on R&D costs. Too bad it meant customers got a handicapped chip. This article called them on it.

    256-bit and 512 MB should be the standard for mid-range D3D10. We'll need to wait a generation to get there.
  • DerekWilson - Thursday, July 05, 2007 - link

    First, the data presented in the graphs does make sense if you take a second to think about what it means.

    It shows basically that under DX10 NVIDIA generally performs better relative to AMD than under DX9.

    It also shows that under DX10 AMD handles AA better relative to NVIDIA than under DX9.

    I will work on altering the graphs to show +/- 100% if people think that would present better. Honestly, I don't think it will be much easier to read with the exception that people tend to think higher is better.

    But ... as for the rest of your post, I completely disagree.

    When we step back and stop pushing AMD and NVIDIA to live up to specific expectations, we've stopped doing our job.

    Justifying poor design choices by looking at the past is no way to advance the industry. A poor design choice is a poor design choice, no matter how you slice it. And it's the customer, not the industry or the company, who is qualified to decide whether something was a poor choice.

    The fundamental problem with engineering is that you are building a device to fit within specific constraints. It is a very difficult job that consists of a great deal of cost/benefit analysis and hard choices. But the bottom line with any engineering project is that it must satisfy the customer's needs or it will not sell, no matter how much careful planning went into it.

    It's when consumers (and hardware review sites who represent the consumers) stop demanding the fundamental characteristics that absolutely must be present in the devices we purchase that we subject ourselves to subpar hardware.

    Having studied computer engineering, with a focus on microprocessor architecture and 3D graphics, I certainly do know a bit about designing GPUs. And, honestly, there are reasons that DX10 hardware hasn't been what people wanted. This is a first generation of hardware that supports a new API using a very new hardware model based on general-purpose unified shaders. It was a lot to do in one generation, and no one is damning anyone else for it.

    But that doesn't mean we have to pretend that we're happy about it. And our expectations have always been more subdued than those of the general public, by the way. We've said for a while not to expect heavily DX10-dependent games for years. It's the same situation we saw with DX9.

    And honestly, the problems we are seeing are similar to what we saw with the original GeForce FX -- only not as extreme, especially because both NVIDIA and AMD do well in the one thing they must do well at: DirectX 9 rendering.

    Honestly, this supports what we've been saying all along: the most important factor in a 3d graphics purchase today is DirectX 9 performance.
