All GPUs are Created Equal: Say Goodbye to Cap Bits

DX9 allows quite a bit of flexibility in implementation, and ATI and NVIDIA are free to do things a little differently as they see fit. So that software can determine how fully a given piece of hardware supports the required and optional features of DX9, the hardware exposes capability bits ("cap bits") describing what it can do. Microsoft has eliminated cap bits from DX10: because the specification is much more explicit about the features required for compliance, software written for DX10 never has to check them. There will still be differences in implementation, optimization, and performance characteristics, but all DX10 hardware will offer the same basic feature set. On the down side, hardware vendors who want to expose custom features will have to rely on OpenGL, which allows vendor-specific extensions to the API.
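To make the contrast concrete, below is a minimal sketch of the kind of cap-bit check a DX9 application performs before relying on an optional feature. The GetDeviceCaps call and the D3DCAPS9 structure are the real D3D9 API; the Pixel Shader 3.0 test is just an illustrative choice of feature to probe.

```cpp
#include <d3d9.h>

// DX9: the application must query capability bits before using a feature.
// A hypothetical helper that checks for Pixel Shader 3.0 support.
bool SupportsPS30(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;

    // Cap bits in action: reject hardware below Pixel Shader 3.0.
    return caps.PixelShaderVersion >= D3DPS_VERSION(3, 0);
}
```

Under DX10 this entire step disappears: if the device was created at all, the full DX10 feature set is available.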

This will make things much easier for game developers: they won't have to worry about a specific feature being unavailable when building an effect or rendering technique. It is also another step toward eliminating the need for multiple GPU-specific rendering paths. We can't say that developers won't write different code for different hardware, because we don't know anything about the performance characteristics of competing DX10 parts at this point. We do know from past experience (with NV30) that even something as simple as the order in which code is executed can make a significant difference in performance. We would like to think that such issues won't present themselves again, but we'll have to wait and see as more hardware and software arrives.

In order to avoid programming issues like the initial NV30 + SM2.0 problems, Microsoft will only allow HLSL (High Level Shader Language) to be used with DX10. This means no low-level shader ASM optimization, but it also means that each graphics hardware maker retains full control over how shaders are compiled for its architecture. There is certainly a trade-off here, but it should help keep developers from inadvertently doing something that severely hampers performance on any given architecture.
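As a rough sketch of what this means in practice, a DX10 shader can only enter the pipeline as HLSL source (or bytecode compiled from it). The compile call below is the real D3D10 runtime entry point; the pass-through vertex shader is a made-up minimal example.

```cpp
#include <d3d10.h>
#include <string.h>

// A trivial pass-through vertex shader. With DX10, HLSL like this is the
// only way to author shaders; there is no hand-written ASM path.
const char* kShaderSource =
    "float4 main(float4 pos : POSITION) : SV_POSITION\n"
    "{\n"
    "    return pos;\n"
    "}\n";

bool CompileVertexShader(ID3D10Blob** bytecode)
{
    ID3D10Blob* errors = NULL;
    HRESULT hr = D3D10CompileShader(
        kShaderSource, strlen(kShaderSource), "example.hlsl",
        NULL, NULL,         // no preprocessor macros, no include handler
        "main", "vs_4_0",   // entry point and Shader Model 4.0 profile
        0, bytecode, &errors);
    if (errors) errors->Release();   // compiler error text, if any
    return SUCCEEDED(hr);
}
```

The vendor's driver then translates the compiled bytecode for its particular architecture, which is where each hardware maker's control over final code generation comes in.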

If DirectX 10 sounds like a great boon to software developers, the fact that it will only be supported under Windows Vista is certain to curb enthusiasm. Outside of Vista-only titles, developers will still be required to support DX9 in order to keep the installed Windows XP user base as part of their target market. Some developers have actually commented that DX10 is more of a headache than a help right now, and that this won't change until they are able to abandon support for older hardware. Hopefully the performance and feature benefits of DX10 will be enough to encourage people to upgrade sooner rather than later, but if the past is any indication, it could be several years before DX9 is abandoned by the majority of users and developers.
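As a hedged sketch of that dual-path burden, an engine shipping today would probe for a DX10 device first and fall back to DX9 when it isn't available (as on XP). The creation calls are the real D3D entry points; the surrounding selection logic is hypothetical.

```cpp
#include <d3d10.h>
#include <d3d9.h>

enum RenderPath { PATH_DX10, PATH_DX9, PATH_NONE };

// Hypothetical startup logic: try DX10 (Vista only), then fall back
// to DX9 to cover the Windows XP install base.
RenderPath PickRenderPath(ID3D10Device** dev10, IDirect3D9** d3d9)
{
    if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                    NULL, 0, D3D10_SDK_VERSION, dev10)))
        return PATH_DX10;

    *d3d9 = Direct3DCreate9(D3D_SDK_VERSION);
    return *d3d9 ? PATH_DX9 : PATH_NONE;
}
```

Every effect and resource then needs an implementation on both paths, which is exactly the extra work developers are complaining about.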

Unified Shaders

Unified shaders aren't actually a feature of DX10 so much as a result of it. This is a small point that seems to get lost in the shuffle: Microsoft doesn't require a specific implementation for DX10 compliance; it has simply made a better implementation more feasible. Until now, building a GPU with unified shaders would not have been desirable, let alone practical, but Shader Model 4.0 lends itself well to this approach.

We haven't seen unified shaders until now because we didn't need or want them. Up through SM2.0, vertex shaders had a higher precision requirement than pixel shaders: 32-bit floating point was required for compliance at the vertex level, while 24-bit was all that was needed for full precision in pixel shaders, and partial precision hints were added to accommodate 16-bit pixel shader math on NVIDIA hardware. It wouldn't have been practical at the launch of DX9 to require that all shader units be 32-bit. The same goes for including pixel-oriented features in the vertex shader hardware: the API didn't support it, so there was no need to build it. The R300 GPU is 218mm^2 with only 107 million transistors, and adding any more complexity than necessary would certainly have produced a larger chip than could have been manufactured on the 150nm process employed at the time. These days we can do much more in the same space: ATI's latest chip, the RV570, is about 230mm^2 and has 330 million transistors.
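For reference, this is roughly what that precision split looked like in SM2.0-era HLSL. The half type is the partial precision hint, letting hardware like NV3x evaluate pixel shader math at 16-bit while vertex shaders always computed at full 32-bit. The snippet is a made-up minimal example; under SM4.0 the distinction disappears entirely.

```cpp
// An SM2.0-era pixel shader, as it would be compiled with the ps_2_0
// profile. 'half' requests 16-bit partial precision where the hardware
// offers it; under SM4.0 every stage runs at full 32-bit float instead.
const char* kSM2PixelShader =
    "sampler2D baseMap : register(s0);\n"
    "half4 main(half2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    return tex2D(baseMap, uv);  // texture fetch at partial precision\n"
    "}\n";
```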

It is much cheaper, easier, and more efficient to build hardware that fits exactly what is required of each step in the rendering pipeline. This is as true for older hardware as it is for G80. But now that DX10 calls for full 32-bit precision in every shader stage and nearly identical functionality in vertex and pixel shader units, it no longer makes sense to duplicate and segregate the hardware. With no functionality excluded from either vertex or pixel processing, hardware designers are optimizing their parts to make the most efficient use of die space, and it just so happens that the best way to do this while meeting the requirements of DX10 is with unified shaders.

Comments

  • DerekWilson - Thursday, November 9, 2006 - link

    I'm sure there was a lot buried in there... sorry if it wasn't easy to find.

    The 8800 GTX and GTS are both no louder than the 7900 GTX. The X1950 XTX still takes the cake as the loudest graphics card around by a long shot -- especially after it heats up in a game.
  • crystal clear - Thursday, November 9, 2006 - link

    My comments in Daily Tech on this subject:

    More "G80" Derivatives in February
    RE: More info would be nice
    By crystal clear on 11/8/2006 8:03:43 AM, Rating: 2

    If you link Vista, the Santa Rosa platform, and the Core 2 Duo (Merom) CPU lineup (T7300, T7500, T7700 models), then you need a matching graphics card to complete the link.

    So a G80 for laptops/notebooks?

    The pairing of Intel's Santa Rosa platform with Vista in Q2 '07 is the next big thing for the first-tier notebook manufacturers, and all they need is a matching G80 for this setup.

    Unquote. NVIDIA currently caters to desktop needs with the new G80 releases; I wonder what the notebook/server versions will be like, with Vista of course.
  • yyrkoon - Thursday, November 9, 2006 - link

    Virtual memory is probably a good thing in most cases, but in the graphics arena this *could* potentially make for sloppy/bad coding practices. Knowing a lot of game devs (some of whom actually work for well-known companies), I've heard them complain from time to time about maxing out a 16x PCI-E link. What I'm trying to say is that while never running out of texture memory would be a good thing, system memory, and definitely the swap disk, can't hold a candle to the memory bandwidth most video cards are capable of. The end result is that you definitely *will* take a performance hit. And we already know the memory bandwidth capabilities of modern PCs; suffice it to say, the most we'll see from current systems is what, 12-13GB/s? Even a 7800GS can do roughly 35GB/s on card. A 7600GT? 22GB/s?

    Still, I think DirectX 10 is a very good thing, and as I didn't read the whole article, perhaps I missed a little? Reason being, I've been reading about DirectX 10 since April, and a friend of mine was privy to some of this information after an interview with ATI.

    http://www.gamedev.net/reference/programming/featu...
  • saratoga - Thursday, November 9, 2006 - link

    I don't know how they threading really works, but it's quite possible VM support is required in order to allow multiple threads to run without stepping all over each other.
  • saratoga - Thursday, November 9, 2006 - link

    Sorry, should read "I don't know how THEIR threading works"
  • falc0ne - Thursday, November 9, 2006 - link

    I don't know what the problem is, but I'm really unable to see the images in the latest articles from Anand... Can anyone give me a suggestion? What might be the cause of that?
    The thing is, I'm really, really interested in these articles and I need to see those images. Thanks
  • yyrkoon - Thursday, November 9, 2006 - link

    Oh, er, then in the Firefox options (Tools->Options->Content), check the "Load images" checkbox ;)
  • falc0ne - Thursday, November 9, 2006 - link

    Well... it would've been that simple, but I'm afraid it's not that... It might be the AdBlock extension for Firefox; other than that I have nooo ideeea... Well, I will use the IE Tab option instead and load the pages using IE 7. Thanks anyway :)
  • yyrkoon - Thursday, November 9, 2006 - link

    Checked the exceptions list? I know that Firefox makes it really simple to block images from a site (to the point of being too easy).
  • JarredWalton - Thursday, November 9, 2006 - link

    If you've got AdBlock on Firefox, press Ctrl+Shift+A and you can see what it's blocking. If it blocks the images.anandtech.com stuff, you can then see which RegEx isn't working right and edit that.
