NVIDIA's Dirty Dealing with DX10.1 and How GT200 Doesn't Support it

I know many people were hoping to see DX10.1 implemented in GT200 hardware, but that is not the case. NVIDIA has opted to skip including some of the features of DX10.1 in this generation of its architecture. We are in a situation similar to the one we saw with DX9, where SM2.0 hardware could do the same things as SM3.0 hardware, albeit at reduced performance or efficiency. DX10.1 does not enable a new class of graphics quality or performance, but it does give developers more options to simplify their code, and it does enhance performance when coding certain effects and features.

It's worth pointing out that, even though NVIDIA doesn't support DX10.1 and DX10 offers no caps bits, NVIDIA does enable developers to query its driver for support of individual features. This is how it can support multisample readback and any other DX10.1 feature it chooses to expose in this manner. Sure, part of the point of DX10 was to eliminate the need for developers to worry about varying capabilities, but that doesn't mean hardware vendors can't expose those features in other ways. Supporting DX10.1 is all or nothing, but enabling features beyond DX10 that happen to be part of DX10.1 is possible, and NVIDIA has done this for multisample readback and can do it for other things.
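To make that concrete, here is a minimal sketch (our own illustration, with our own function name, not NVIDIA's or Microsoft's prescribed path) of how a DX10 application can ask the runtime whether a given format supports reading individual samples of a multisampled surface in a shader. Vendor-specific extension libraries offer additional queries, but the standard CheckFormatSupport call is the simplest way to show the idea:

// A hedged example: querying the D3D10 runtime/driver for multisample
// readback support on a given surface format. Under plain DX10 this
// capability is optional and must be checked per format; DX10.1 simply
// makes it mandatory.
#include <d3d10.h>

bool SupportsMultisampleLoad(ID3D10Device* device, DXGI_FORMAT format)
{
    UINT support = 0;
    if (FAILED(device->CheckFormatSupport(format, &support)))
        return false;
    // The flag is set when a shader can Load() individual samples
    // from a multisampled texture of this format.
    return (support & D3D10_FORMAT_SUPPORT_MULTISAMPLE_LOAD) != 0;
}

A title that wants DX10.1-style behavior on DX10 hardware can branch on a check like this at startup, rather than relying on caps bits that DX10 no longer provides.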

While we would love to see NVIDIA and AMD both adopt the same feature set, just as we wish AMD had picked up SM3.0 in its R4xx hardware, we can understand the decision to exclude support for the features DX10.1 requires. NVIDIA is well within reason to decide that the ROI on implementing hardware for DX10.1 is not high enough to warrant it. That's all fine and good.

But then PR, marketing and developer relations get involved and what was a simple engineering decision gets turned into something ridiculous.

We know that both G80 and R600 supported some of the DX10.1 feature set. Our goal, at the very least, has been to determine which of these features, if any, were added to GT200. We would ideally like to know which DX10.1-specific features GT200 does and does not support, but we'll take what we can get. After asking our question, this is the response we got from NVIDIA Technical Marketing:

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."

The policy decision that has led us to run into this type of response at every turn is reprehensible. Aside from being blatantly untrue at any level, it leaves us wondering why we even find ourselves having to respond to this sort of statement. Let's start with why NVIDIA's official position holds no water, and then we'll get to what it could mean.

The claim that multisample readback is the only DX10.1 feature developers are interested in is untrue: cube map arrays come in quite handy for simplifying and accelerating a number of applications. Necessary? No. Useful? Yes. Separate per-MRT blend modes could become useful as deferred shading continues to evolve, and part of what would be great about supporting these features is that they allow developers and researchers to experiment. I get that not many devs will get up in arms about int16 blends, but some DX10.1 features are interesting and, more to the point, would be even more compelling if both AMD and NVIDIA supported them.
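As an illustration of what per-MRT blend modes buy you, here is a rough sketch (our own, using the DX10.1 API, not any shipping engine's code) of a blend state for a two-target pass where one render target accumulates light additively while the other writes its data straight through; under plain DX10, every target has to share a single blend function:

// Illustrative only: independent per-render-target blending, a DX10.1 feature.
#include <d3d10_1.h>

ID3D10BlendState1* CreatePerTargetBlend(ID3D10Device1* device)
{
    D3D10_BLEND_DESC1 desc = {};
    desc.IndependentBlendEnable = TRUE;   // the DX10.1 addition

    // RT0: additive blending (e.g. light accumulation)
    desc.RenderTarget[0].BlendEnable    = TRUE;
    desc.RenderTarget[0].SrcBlend       = D3D10_BLEND_ONE;
    desc.RenderTarget[0].DestBlend      = D3D10_BLEND_ONE;
    desc.RenderTarget[0].BlendOp        = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha  = D3D10_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha = D3D10_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha   = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // RT1: no blending (e.g. material data written straight through)
    desc.RenderTarget[1].BlendEnable    = FALSE;
    desc.RenderTarget[1].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState1* state = NULL;
    if (FAILED(device->CreateBlendState1(&desc, &state)))
        return NULL;
    return state;
}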

Next, the idea that developers, in collusion with ATI, would actively try to harm PC gaming and frustrate gamers is false (and reeks of paranoia). Developers are interested in doing the fastest, most efficient thing to get their desired result with as little trouble to themselves as possible. If a technique makes sense, they will take it; if not, they will leave it. The goal of a developer is to make the game as enjoyable as possible for as many gamers as possible, and enabling the same experience on both AMD and NVIDIA hardware is vital. Games won't ship with either of the two major GPU vendors unable to run them properly, because that is bad for the game and bad for the developer.

Just as NVIDIA made an engineering decision about support for DX10.1 features, every game developer must weigh the ROI of implementing a specific feature or using a certain technique. With NVIDIA not supporting DX10.1, doing anything DX10.1-specific becomes less attractive to a developer, because they need to write a DX10 code path anyway. Unless a DX10.1 code path is trivial to implement, produces the same result as DX10, and provides some benefit on hardware that supports DX10.1, there is no way it will ever make it into games. The exception is some sort of marketing deal with a publisher that unbalances things, which is a fundamental problem with going beyond developer relations and tech support into marketing campaigns built around how many games display a particular hardware vendor's logo.
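The duplicated effort is easy to see in code. The sketch below (an illustration under our own naming, not taken from any particular game) shows just the device-creation side of the problem: a developer who wants a DX10.1 path still has to fall back to, and therefore fully implement and test, a DX10 path:

// Try to create a DX10.1 device, falling back to plain DX10.
#include <d3d10_1.h>

ID3D10Device1* CreateBestDevice()
{
    ID3D10Device1* device = NULL;
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,   // preferred: DX10.1 path
        D3D10_FEATURE_LEVEL_10_0    // fallback: DX10 path, always required
    };

    for (int i = 0; i < 2; ++i)
    {
        HRESULT hr = D3D10CreateDevice1(
            NULL,                         // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,
            NULL,                         // no software rasterizer
            0,                            // no creation flags
            levels[i],
            D3D10_1_SDK_VERSION,
            &device);
        if (SUCCEEDED(hr))
            return device;                // caller checks GetFeatureLevel()
    }
    return NULL;
}

Everything past device creation then has to branch on the feature level actually obtained, which is exactly the extra work a developer has to justify.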

The idea that NVIDIA is somehow going to hide the capabilities of its hardware from AMD is also naive. The competition, armed with x-rays, electron microscopes, and other tools of reverse engineering, is going to be the first to discover all the ins and outs of how a piece of silicon works once it hits the market. NVIDIA knows AMD will study GT200, because NVIDIA knows it would be foolish not to have an RV670 core on its own chopping block. AMD will know how best to program GT200 before developers do, and independently of any blanket list of features we happen to publish on launch day.

So who really suffers from NVIDIA's flawed policy of silence and deception? The first to feel it are the enthusiasts who love learning about hardware. Next in line are the developers, who don't even know what features NVIDIA is capable of offering. Then there is AMD, which won't be able to sell developers on supporting features that could make its hardware perform better, because NVIDIA claims its hardware doesn't support them (even when it does). Finally, there are the gamers, who can and will never know what could have been if a developer had had easy access to just one more tool.

So why would NVIDIA take this less than honorable path? The possibilities are endless, but we're happy to help with a few suggestions. It could be as simple as preventing AMD from getting code into games that runs well on AMD hardware (as may have happened with Assassin's Creed). It could be that the features NVIDIA does support perform incredibly poorly: just because you can do something doesn't mean you can do it well, and admitting support might make NVIDIA look worse than denying it. It could be that the fundamental architecture is incapable of performing certain basic functions and that reengineering from the ground up would be required to support DX10.1.

NVIDIA insists that if it reveals its true feature set, AMD will buy off a bunch of developers with its vast hoards of cash to enable support for DX10.1 code NVIDIA can't run. Oh wait, I'm sorry: NVIDIA is worth twice as much as AMD, which is billions in debt and struggling to keep up with its competitors on both the CPU and GPU sides. So we ask: who do you think is more likely to start buying off developers to the detriment of the industry?

Comments

  • Chaser - Monday, June 16, 2008

    Maybe I'm behind the loop here. The only competition this article refers to is some upcoming new Intel product, in contrast to an announced hard release of the next AMD GPU series a week from now?

  • BPB - Monday, June 16, 2008

    Well, nVidia is starting with the high-end, high-priced items. Now we wait to see what ATI has and decide. I'm very much looking forward to the ATI release this week.
  • FITCamaro - Monday, June 16, 2008

    Yeah, but for the performance of these cards, the price isn't quite right. I mean, you can get two 8800 GTs for under $400 and they typically outperform both the 260 and the 280. Yes, if you want a single card, these aren't too bad a deal. But even the 9800 GX2 outperforms the 280 normally.

    So really I have to question the pricing on them. High end for a single-GPU card, yes. Better price/performance than last generation's cards, no. I just bought two G92 8800 GTSs, and now I don't feel dumb about it, because the two cards I paid $170 each for will still outperform the latest and greatest, which costs more.
  • Rev1 - Monday, June 16, 2008

    Maybe lack of any real competition from ATI?
  • hadifa - Monday, June 16, 2008

    No, the reason is the high cost to produce: over a billion transistors, low yields, a 512-bit bus...

    Unfortunately, the high cost and advanced tech don't translate into equally impressive performance at this stage. For example, if the card had much lower power usage under load, it would still have been considered a good step forward for offering performance comparable to a dual-GPU solution while running much cooler and being less demanding on the system.

    As the review mentions, this card begs for a die shrink. That would make it use less power, cost less, run cooler, and even reach higher clocks.
  • Warren21 - Monday, June 16, 2008

    That competition won't come for another two weeks, but when it does, rumour has it NV plans to lower their prices. Most preliminary info has the HD 4870 at $299-329 and pretty much at GTX 260 performance or, if not, biting at its heels.
  • smn198 - Tuesday, June 17, 2008

    You haven't seen anything yet. Check out this picture of the GTX2 290!! http://tinypic.com/view.php?pic=350t4rt&s=3
  • Mr Roboto - Wednesday, June 18, 2008

    Soon it will be that way if Nvidia has their way.
