Tessellation and the Future

It's no secret that R600 is AMD's second-generation unified shader architecture. The Xbox 360 houses their first attempt at a unified architecture, and R600 evolved from it. It isn't surprising, then, that some of the non-traditional hardware from Xenos (the Xbox 360's GPU) found its way into R600.

AMD has included a tessellator in their hardware, which can take input geometry and amplify it before passing it on to the vertex shader. Microsoft plans to add tessellation to future versions of DirectX as well, but in the meantime developers will need to take special steps to utilize the hardware.

The basic idea behind tessellation is in the subdivision of geometry. There are multiple algorithms for handling this process, and the R600 tessellator is capable of adapting to a developer's specific needs. The tessellator can take a polygon as input and break it up into smaller triangles, creating more vertices for a specific object. Taken on its own, this isn't particularly useful, but this concept can be paired with displacement mapping in order to reshape the tessellated polygon into something more like the actual surface a developer wants to represent (this is called the limit surface).
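The subdivision idea can be sketched in a few lines of code. This is a generic midpoint-subdivision scheme for illustration only, not R600's actual algorithm (the hardware supports multiple tessellation modes, and the data layout here is made up):

```python
def midpoint(a, b):
    """Average two 3D points component-wise."""
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    """Split one triangle (3 vertices) into 4 by connecting edge midpoints."""
    v0, v1, v2 = tri
    m01 = midpoint(v0, v1)
    m12 = midpoint(v1, v2)
    m20 = midpoint(v2, v0)
    return [
        (v0, m01, m20),
        (m01, v1, m12),
        (m20, m12, v2),
        (m01, m12, m20),  # the center triangle
    ]

def tessellate(tris, levels):
    """Apply `levels` rounds of subdivision; triangle count grows 4x per round."""
    for _ in range(levels):
        tris = [t for tri in tris for t in subdivide(tri)]
    return tris

base = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(tessellate(base, 3)))  # 4**3 = 64 triangles from one input triangle
```

The geometry amplification is what matters here: a handful of input triangles becomes a dense mesh without the CPU ever touching the extra vertices.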

With low-polygon models and lots of pixel shader work, normal maps and textures can approximate the look of more complex geometry, but we're always stuck with the very rough silhouette edge around the object. With more geometry, we could also use pixel shaders to enhance the geometry that's actually present rather than trying to create the illusion of geometry itself.

We can't simply send millions of polygons per character to the graphics card. This isn't because the card can't handle the processing requirements, but because the bandwidth and latency overhead of sending all that data to the hardware is too high. Tessellation and displacement give us a way of really using the vertex shading power of unified architectures while removing the polygon count limit that this overhead imposes.
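Continuing the toy sketch above, displacement mapping amounts to pushing each tessellated vertex along its surface normal by a height sampled from a map. The height function below is a made-up stand-in for a real displacement texture, and the flat grid with a constant up-facing normal is the simplest possible case:

```python
import math

def sample_height(u, v):
    """Hypothetical stand-in for sampling a displacement map at (u, v)."""
    return 0.1 * math.sin(2 * math.pi * u) * math.sin(2 * math.pi * v)

def displace(vertex, normal, u, v):
    """Push a vertex along its normal by the sampled height."""
    h = sample_height(u, v)
    return tuple(vertex[i] + normal[i] * h for i in range(3))

# Displace the vertices of a flat 5x5 grid; the normal of a flat
# plane lying in the xy-plane points straight up the z-axis.
normal = (0.0, 0.0, 1.0)
grid = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]
displaced = [displace((u, v, 0.0), normal, u, v) for u, v in grid]
print(len(displaced))  # 25 vertices now approximate a bumpy limit surface
```

The key point is that only the coarse mesh and a small height map cross the bus; the detailed surface is reconstructed on the GPU.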

While geometry shaders can be used for amplification, and tessellators can be written as geometry shaders, the process is still far too slow on current programmable hardware. AMD's dedicated tessellator can handle up to 15x more data, and it works much faster and more efficiently than a geometry shader set to the same task. With the next version of DirectX bringing tessellator hardware to all GPUs, developers should be able to focus on more interesting uses for the geometry shader as well.

Having this unit makes porting Xbox 360 games even easier for developers targeting AMD hardware. Since most hardware still doesn't support the feature, a more general-purpose path will still have to be written, but there's no reason to remove tessellation code that's already there. In these cases, R600 could benefit from greater performance than other hardware.

The downside is that it might be difficult to entice developers not already working with the Xbox 360 to touch the tessellator. It is definitely capable of high performance and terrific detail, but spending time on a feature only a small subset of gamers will be able to experience (for this generation) takes away from time spent making the game better for everyone.

We are always happy to see either hardware or software take a leap and create the first chicken or egg, but we just don't see the tessellator as a big selling point of R600. The technology is great, and we're glad it's there, but we will really have to wait and see just how much (if any) real value it adds to the product. We'll close this section with one final note: a tessellated landscape demo that really brings home what this unit can do.


86 Comments


  • Roy2001 - Tuesday, May 15, 2007 - link

    The reason is, you have to pay extra $ for a power supply. No, most probably your old PSU won't have enough milk for this baby. I will stick with nVidia in the future. My 2 cents.
  • Chaser - Tuesday, May 15, 2007 - link

    quote:

    While AMD will tell us that R600 is not late and hasn't been delayed, this is simply because they never actually set a public date from which to be delayed. We all know that AMD would rather have seen their hardware hit the streets at or around the time Vista launched, or better yet, alongside G80.

    First, they refuse to call a spade a spade: this part was absolutely delayed, and it works better to admit this rather than making excuses.

    Such a revealing tech article. Thanks for other sources Tom.
  • archcommus - Tuesday, May 15, 2007 - link

    $300 is the exact price point I shoot for when buying a video card, so that pretty much eliminates AMD right off the bat for me right now. I want to spend more than $200 but $400 is too much. I'm sure they'll fill this void eventually, and how that card will stack up against an 8800 GTS 320 MB is what I'm interested in.
  • H4n53n - Tuesday, May 15, 2007 - link

    Interestingly enough, on some other websites it wins against the 8800 GTX in most games, especially the newer ones, and comparing the price I would say it's a good deal? I think it's just driver problems; ATI has been known for not having very good drivers compared to nvidia, but once they fix them it'll win.
  • dragonsqrrl - Thursday, August 25, 2011 - link

    lol...fail. In retrospect it's really easy to pick out the EPIC ATI fanboys now.
  • Affectionate-Bed-980 - Tuesday, May 15, 2007 - link

    I skimmed this article because I have a final. ATI can't hold a candle to NV at the moment, it seems. Now while the 2900XT might have good value, am I correct in saying that ATI has lost the performance crown by a buttload (not even like X1800 vs. 7800), like they're totally slaughtered, right?

    Now I won't go and comment about how the 2900 stacks up against competition in the same price range, but it seems that GTSes can be acquired for cheap.

    Did ATI flop big here?
  • vailr - Monday, May 14, 2007 - link

    I'd rather use a mid-range older card that "only" uses ~100 Watts (or less) than pay ~$400 for a card that requires 300 Watts to run. Doesn't AMD care about "Global Warming"?
    Al Gore would be amazed, alarmed, and astounded !!
  • Deusfaux - Monday, May 14, 2007 - link

    No they don't, and that's why the 2600 and 2400 don't exist
  • ochentay4 - Monday, May 14, 2007 - link

    Let me start with this: I always had a nvidia card. ALWAYS.

    Faster is NOT ALWAYS better. For the most part this is true; for me, it was. One year ago I bought an MSI 7600GT. It seemed the best bang for the buck. Since I bought it, I've had problems with TV-out detection, TV-out wrong aspect ratios, broken LCD scaling, lots of game problems, nonexistent support (the nv forum is a joke) and the UNIFIED DRIVER ARCHITECTURE. What a terrible lie! The latest official driver is 6 months old!!!

    I'm really demanding, but I paid enough to demand a 100% working product. Now ATi's latest offering has: AVIVO, FULL VIDEO ACC, MONTHLY DRIVER UPDATES, ALL BUGS I NOTICED WITH THE NVIDIA CARD FIXED, HDMI AND PRICE. I prefer that over a simpler product, especially for the money these cards cost!

    I will never buy a nvidia card again. I'm definitely looking forward to ATi's offering (after the joke that is/was the 8600GT/GTS).

    Enough rant.
    Am I wrong?
  • Roy2001 - Tuesday, May 15, 2007 - link

    Yeah, you are wrong. Spend $400 on a 2900XT and then $150 on a PSU.
