What's new in DX 9.0c


This year, the latest revision of the DirectX API is getting a bit of a face lift. The big new feature in DirectX 9.0c is the inclusion of Pixel Shader 3.0 and Vertex Shader 3.0. Rather than calling this DirectX 9.1, Microsoft opted for a more "incremental"-looking update. That can be a little misleading: whereas the 'a' and 'b' revisions mostly extended and tweaked existing functionality, the 'c' revision adds abilities that are absent from its predecessors.

Pixel Shader 3.0 (PS3.0) allows shader programs of over 65,000 instructions and includes dynamic flow control (branching). This revision also requires that compliant hardware offer four Multiple Render Targets (MRTs allow shaders to draw to more than one location in memory at a time), full 32-bit floating point precision, shader antialiasing, and a total of ten texture coordinate inputs per pixel.
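
As a rough illustration, a developer can see how much of this a given card actually exposes through the standard Direct3D 9 caps structure. The sketch below is not from the article; it simply prints the shader versions, simultaneous render target count, and Shader Model 3.0 instruction slot limits that the driver reports.

```cpp
// Minimal sketch: querying SM3.0-related capabilities via the Direct3D 9 caps structure.
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        bool ps30 = caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0);
        bool vs30 = caps.VertexShaderVersion >= D3DVS_VERSION(3, 0);

        std::printf("PS3.0 support: %d   VS3.0 support: %d\n", ps30, vs30);
        std::printf("Simultaneous render targets: %lu\n", caps.NumSimultaneousRTs);
        std::printf("PS3.0 instruction slots: %lu\n", caps.MaxPixelShader30InstructionSlots);
        std::printf("VS3.0 instruction slots: %lu\n", caps.MaxVertexShader30InstructionSlots);
    }

    d3d->Release();
    return 0;
}
```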

The main advantage here is the ability for developers to write longer, more complex shader programs that run more efficiently. Flow control gives developers the freedom to write more intuitive code without sacrificing efficiency: branching lets a shader program make decisions based on its current state and inputs. Rather than running multiple shaders that do different things on different groups of pixels, developers can have a single shader handle an entire object and take care of all its shading needs. Our example of choice will be shading a tree: one shader can handle the dynamics of each leaf, smooth new branches near the top, rugged old bark on the trunk, and dirty roots protruding from the soil.
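
To make the tree example a little more concrete, here is a hedged sketch of how such a shader might be written and compiled: a single ps_3_0 pixel shader that branches per pixel on an interpolated "height" input instead of switching between several specialized shaders. The shader source, semantics, colors, and thresholds are purely illustrative (and the compiler may still flatten branches it considers cheap); the compile step uses the D3DX helper library.

```cpp
// Hedged sketch: one ps_3_0 pixel shader covering a whole tree via dynamic branching.
// All names, colors, and thresholds are illustrative, not from the article.
#include <d3dx9.h>

static const char g_treePS[] =
    "float4 main(float2 uv : TEXCOORD0, float height : TEXCOORD1) : COLOR   \n"
    "{                                                                      \n"
    "    float4 leaves = float4(0.10, 0.45, 0.10, 1.0);                     \n"
    "    float4 bark   = float4(0.35, 0.25, 0.15, 1.0);                     \n"
    "    float4 roots  = float4(0.25, 0.18, 0.12, 1.0);                     \n"
    "    // One dynamic branch per pixel instead of three separate shaders. \n"
    "    if (height > 0.7)      return leaves;                              \n"
    "    else if (height > 0.1) return bark;                                \n"
    "    else                   return roots;                               \n"
    "}                                                                      \n";

bool CompileTreeShader(ID3DXBuffer** outBytecode)
{
    ID3DXBuffer* errors = NULL;
    // The ps_3_0 profile is what enables dynamic flow control in the first place.
    HRESULT hr = D3DXCompileShader(g_treePS, sizeof(g_treePS) - 1,
                                   NULL, NULL,        // no macros, no includes
                                   "main", "ps_3_0",
                                   0, outBytecode, &errors, NULL);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```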

Vertex Shader 3.0 (VS3.0) extends flow control on the vertex side as well, with if/then/else statements and the ability to call subroutines in shader programs. The instruction limit on VS3.0 is also extended to over 65,000. Vertex textures are supported too, allowing a vertex shader to read from textures and manipulate vertices more dynamically. This will get even more exciting when we make our way into the next DirectX revision, which will allow for dynamic creation of vertices (think very cool particle systems and hardware morphing of geometry).
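
Vertex texture fetch is the easiest of these to picture. The hypothetical vs_3_0 shader below reads a height map with tex2Dlod and pushes each vertex along its normal, a simple form of displacement mapping. All names and constants are illustrative; the compile call mirrors the pixel shader sketch above, just with the "vs_3_0" profile.

```cpp
// Hedged sketch: a vs_3_0 vertex shader using vertex texture fetch for displacement.
// Names, registers, and the 0.1 displacement scale are illustrative assumptions.
#include <d3dx9.h>

static const char g_displaceVS[] =
    "sampler2D HeightMap     : register(s0);                                     \n"
    "float4x4  WorldViewProj : register(c0);                                     \n"
    "struct VSOut { float4 pos : POSITION; float2 uv : TEXCOORD0; };             \n"
    "VSOut main(float4 pos : POSITION, float3 n : NORMAL, float2 uv : TEXCOORD0) \n"
    "{                                                                           \n"
    "    // tex2Dlod is required here: vertex shaders have no pixel derivatives. \n"
    "    float h = tex2Dlod(HeightMap, float4(uv, 0, 0)).r;                      \n"
    "    pos.xyz += n * h * 0.1;          // push the vertex along its normal    \n"
    "    VSOut o;                                                                \n"
    "    o.pos = mul(pos, WorldViewProj);                                        \n"
    "    o.uv  = uv;                                                             \n"
    "    return o;                                                               \n"
    "}                                                                           \n";

bool CompileDisplaceShader(ID3DXBuffer** outBytecode)
{
    ID3DXBuffer* errors = NULL;
    HRESULT hr = D3DXCompileShader(g_displaceVS, sizeof(g_displaceVS) - 1,
                                   NULL, NULL, "main", "vs_3_0",
                                   0, outBytecode, &errors, NULL);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```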

One of the coolest things that VS3.0 offers is something called instancing. This functionality can remove a lot of the overhead created by including multiple objects based on the same 3D model (these objects are called instances). Currently, the geometry for every model in the scene needs to be set up and sent to the GPU for rendering; with instancing, developers can create as many instances of one model as they want from a single vertex stream. These instances can be translated and manipulated by the vertex shader in order to add "individuality" to each instance of the model. To continue with our previous example, a developer can create a whole forest of trees from the vertex stream of one model. This takes pressure off of the CPU and the bus (less data is processed and sent to the GPU).
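
In Direct3D 9 terms, instancing is exposed through stream frequency divisors. The sketch below is illustrative only (the buffers, vertex declaration, and struct layouts are assumed to exist): it binds the tree geometry on stream 0, a per-tree transform buffer on stream 1, and issues a single draw call that the hardware replays once per instance.

```cpp
// Hedged sketch: drawing many trees from one model with Direct3D 9 geometry instancing.
// Buffer names, struct layouts, and counts are assumptions for illustration.
#include <d3d9.h>

struct TreeVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct TreeInstance { float world[16]; };   // per-tree transform ("individuality")

void DrawForest(IDirect3DDevice9*            device,
                IDirect3DVertexBuffer9*      treeVB,      // geometry of ONE tree
                IDirect3DIndexBuffer9*       treeIB,
                IDirect3DVertexBuffer9*      instanceVB,  // one entry per tree
                IDirect3DVertexDeclaration9* decl,
                UINT vertexCount, UINT triCount, UINT treeCount)
{
    device->SetVertexDeclaration(decl);

    // Stream 0: the shared tree model, replayed once per instance.
    device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | treeCount);
    device->SetStreamSource(0, treeVB, 0, sizeof(TreeVertex));

    // Stream 1: per-instance data, advanced once per tree.
    device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    device->SetStreamSource(1, instanceVB, 0, sizeof(TreeInstance));

    device->SetIndices(treeIB);

    // One draw call renders the whole forest; the vertex shader applies the
    // per-instance transform so each tree gets its own position and scale.
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore the default (non-instanced) stream frequencies.
    device->SetStreamSourceFreq(0, 1);
    device->SetStreamSourceFreq(1, 1);
}
```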

Now that we've seen what developers are looking at with DirectX 9.0c, let's take a look at how NVIDIA plans to bring these features to the world.
77 Comments

  • Reliant - Wednesday, April 14, 2004 - link

    Any ideas how the non-Ultra version will perform?
  • segagenesis - Wednesday, April 14, 2004 - link

    I can't agree with #45 more. People rush to judgement when it's no secret that ATI will be coming out with their goods very soon also. "Wow look this card is really fast!!! I can't believe it!" Well, this sounds like almost every other graphics card release from ATI or nVidia in the past. To me nVidia had better have come out with something good after their lackluster GeForce FX 5800, which wasn't anything terribly special. I used to like nVidia a lot (heh, my ti4600 still runs fine) but when it comes to looking for a new card, I'll pick whichever one is faster *and* has the features I want. If it wasn't for turnoffs like the 2-slot design and now even 2 power connections required, I'm not sure I am ready to spend $500 just yet...

    Sorry if I'm obtuse, but if ATI comes out with a part that's either equal (note the key term there) in performance or maybe even slightly slower... I'd go for ATI and their better IQ that the Radeon 9700 series so impressed me with and made me wish for more out of my ti4600. That and a single-slot/single-power type design would probably put me in their boat.

    Fanboy ATI opinion? I've owned nVidia from the Riva TNT to the ti4600 and many in-between.
  • Lonyo - Wednesday, April 14, 2004 - link

    #42, the jump from the Ti4600 to the 9700Pro wasn't good for you? I would have thought finally playable AA/AF was quite a jump.
    Personally, it seems less of a jump than the 4600 -> 9700.


    And I will reserve judgement on how much of an accomplishment nVidia have made until I see what ATi release.
    If it's of similar power, but maybe has 1 molex, or is a single slot solution, they will have accomplished more.
    It's not just raw performance, we'll have to see how it all stacks up, and how long it takes to release the things!
  • ChronoReverse - Wednesday, April 14, 2004 - link

    Some site tested the 6800U on a 350W supply and it worked just fine.


    Myself, I think my Enermax 350W with its enhanced 12V rail will take it just fine as well.
  • Regs - Wednesday, April 14, 2004 - link

    Yeah, Nvidia did make one hell of an accomplishment. They just earned a lot of respect back from both fan clubs. You have to respect the development and research that went into this card and the end result turns out to be just as we anticipated if not more.

    I really don't know how anybody could pick a "club" when seeing hardware like this perform so well.

    I'm hoping to see the same results from ATI.

    Just too bad they are some costly pieces of hardware ;)
  • araczynski - Wednesday, April 14, 2004 - link

    nice to FINALLY see a universally quantifiable performance increase from one generation to the next.

    but the important thing is how it competes with the x800 from ati, not against older cards.

    as for the power supply, i think the hardcore crowd that these are geared at already has more than enough power, and quite frankly i would be surprised if these wouldn't work fine on a solid 350W from a reputable source (i.e. not your 350W ps for $10 from some 'special' sale).

    They're being conservative knowing that many of the people have crappy power supplies and don't know better.
  • klah - Wednesday, April 14, 2004 - link

    "Anyone know when it ships to retail stores?"

    http://www.eetimes.com/semi/news/showArticle.jhtml...

    "GeForce 6800 Ultra and GeForce 6800 models, are currently shipping to add-in-card partners, OEMs, system builders and game developers. Retail graphics boards based on the GeForce 6800 models are scheduled for release in the next 45 days."
  • Jeff7181 - Wednesday, April 14, 2004 - link

    This has me a bit curious... maybe I didn't read close enough... but is this the 6800 or the 6800 Ultra?
  • saechaka - Wednesday, April 14, 2004 - link

    wow, impressive. i really want one. wonder if it will run ok with my 380w power supply
  • Cygni - Wednesday, April 14, 2004 - link

    Personally, I'm very impressed, and I haven't had an Nvidia product in my main gaming rig since my GeForce256. The card may be huge, power hungry, hot, and loud (maybe), but that is some SERIOUS performance.

    How long has it been since Nvidia has had a top end card that DOUBLED the performance of the last top end card? Pretty awesome, I think. I don't have the money to pick one up, but hopefully the mid/low end gets some love from both ATI and Nvidia as well. The 9200/9600/5200/5600 don't really appeal to me... not enough of a performance leap over a $20 8500!
