So What's a Tessellator?

This has been covered before in other articles about DirectX 11, but we first touched on the subject with the R600 launch. Both R6xx and R7xx hardware have tessellators, but since these are proprietary implementations, they won't be directly compatible with DirectX 11, which uses a much more sophisticated setup. While neither AMD's tessellator nor the DX11 tessellator is itself programmable, DX11 includes programmable input to and output from the tessellator (TS) through two additional pipeline stages called the Hull Shader (HS) and the Domain Shader (DS).
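
For readers who want to see where the new stages plug in on the API side, here is a minimal C++ sketch (not from the article) of creating the two new shader-stage objects with the D3D11 device. The function name and the bytecode blobs are hypothetical placeholders for shaders the developer has already compiled against the hs_5_0 and ds_5_0 profiles.

```cpp
#include <d3d11.h>

// Sketch only: assumes 'device' is a valid ID3D11Device* and the two blobs
// hold HLSL bytecode compiled for the hs_5_0 and ds_5_0 targets.
bool CreateTessellationStages(ID3D11Device* device,
                              ID3DBlob* hsBytecode, ID3DBlob* dsBytecode,
                              ID3D11HullShader** outHS,
                              ID3D11DomainShader** outDS)
{
    // Hull Shader: runs per patch, before the fixed-function tessellator.
    HRESULT hr = device->CreateHullShader(hsBytecode->GetBufferPointer(),
                                          hsBytecode->GetBufferSize(),
                                          nullptr, outHS);
    if (FAILED(hr)) return false;

    // Domain Shader: runs per generated point, after the tessellator.
    hr = device->CreateDomainShader(dsBytecode->GetBufferPointer(),
                                    dsBytecode->GetBufferSize(),
                                    nullptr, outDS);
    return SUCCEEDED(hr);
}
```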

The tessellator can take coarse shapes and break them up into smaller parts. It can also take these smaller parts and reshape them to form geometry that is much more complex and that more closely approximates reality. It can take a cube and turn it into a sphere with very little overhead and much smaller storage requirements. Quality, performance, and manageability all benefit.

The Hull Shader takes in patches and control points and outputs data on how to configure the tessellator. Patches are a new primitive (like vertices and pixels) that define a segment of a plane to be tessellated. Control points are used to define the parametric shape of the desired surface (a curve or curved surface, for example). If you've ever used the pen tool in Photoshop, then you know what control points are: these just apply to surfaces (patches) instead of lines. The Hull Shader uses the control points to determine how to set up the tessellator and then passes them forward to the Domain Shader.
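
On the application side, patches show up as a new input-assembler topology: the pipeline is told how many control points make up each patch, and every group of that many vertices becomes one patch for the Hull Shader. The following hedged sketch assumes a hypothetical helper drawing 16-control-point (bicubic) patches from an already-bound control-point buffer:

```cpp
#include <d3d11.h>

// Sketch: draws 'patchCount' bicubic patches of 16 control points each.
// Assumes 'context' is a valid ID3D11DeviceContext* and that a vertex buffer
// of control points plus a matching input layout are already bound.
void DrawBicubicPatches(ID3D11DeviceContext* context, UINT patchCount)
{
    // Control-point patch lists are the new IA primitive that feeds the
    // Hull Shader; patches of 1 to 32 control points are supported.
    context->IASetPrimitiveTopology(
        D3D11_PRIMITIVE_TOPOLOGY_16_CONTROL_POINT_PATCHLIST);

    // Each run of 16 consecutive control points forms one patch.
    context->Draw(16 * patchCount, 0);
}
```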

The tessellator just tessellates: it breaks up the patches fed to it by the Hull Shader based on the per-patch parameters the Hull Shader sets. It outputs a stream of points to the Domain Shader, which then finishes the process. While programmers must write HS programs for their code, there isn't any programming required for the TS itself. It's just a fixed-function block that processes input based on parameters.

The Domain Shader takes points generated by the tessellator and manipulates them to form the appropriate geometry based on control points and/or displacement maps. It performs this manipulation by running developer-designed DS programs, which determine how the newly generated points are further shifted or displaced based on control points and textures. After processing a point, the Domain Shader outputs a vertex. These vertices can be further processed by a Geometry Shader, which can also feed them back up to the Vertex Shader using stream out functionality. Rather than heading back up for a second pass, though, most Domain Shader output will likely head straight on to rasterization so that its geometry can be broken down into screen space fragments for Pixel Shader processing.
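
As a rough illustration of that flow, the hedged C++ sketch below binds a full tessellation pipeline with no Geometry Shader, so Domain Shader output goes straight to the rasterizer and on to the Pixel Shader; the function and parameter names are hypothetical, and the shader objects are assumed to have been created already.

```cpp
#include <d3d11.h>

// Sketch: binds the programmable stages around the fixed-function tessellator.
// Passing nullptr for the Geometry Shader leaves that stage disabled.
void BindTessellationPipeline(ID3D11DeviceContext* context,
                              ID3D11VertexShader* vs,
                              ID3D11HullShader* hs,
                              ID3D11DomainShader* ds,
                              ID3D11PixelShader* ps)
{
    context->VSSetShader(vs, nullptr, 0);       // per-control-point work
    context->HSSetShader(hs, nullptr, 0);       // configures the tessellator per patch
    context->DSSetShader(ds, nullptr, 0);       // turns generated points into vertices
    context->GSSetShader(nullptr, nullptr, 0);  // no GS: straight to the rasterizer
    context->PSSetShader(ps, nullptr, 0);       // shades the rasterized fragments
}
```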

That covers the basics of what the tessellator can do and how it does it. But do you find yourself wondering: "self, can't the Geometry Shader just be used to create tessellated surfaces and move the resulting vertices around?" Well, you would be right. That is technically possible, but it isn't practical at this point. Let's dive into that a bit more.

  • DerekWilson - Saturday, January 31, 2009 - link

    Hi, thanks for the feedback ... I've already talked the Vista issue to death elsewhere in these comments, so I'll skip that, but ...

    3) You are right that to get the most out of DX10 you need a renderer designed for DX10, not ported from DX9. At the same time, there are things that can be done to make DX9 stuff faster by using DX10 capabilities that don't require an engine rewrite. Yes, this also requires development time, but it seems developers have opted to put time into adding effects with DX10 rather than increasing framerate. I'm not disappointed with that direction, but performance was an option.

    I agree with what you said about OGL.

    4) I know it's not going to happen, but it didn't need to be impossible. MS could have designed the API to expose new hardware features without requiring the driver model change. DX9 can run on XP or Vista's new driver model, for example. They chose to make it impossible to back-port by tying DX10 (and subsequent versions) to the new driver model. OpenGL exposes most of the features of DX10 to WinXP (and all of the things that are interesting to graphics programmers).

    5) You are right -- it's not required to support multithreading but it is required if you want a performance benefit from that multithreading.
  • bobvodka - Saturday, January 31, 2009 - link

    Well, to be fair, many of my comments were directed at others in the thread :)

    4) The thing is, OpenGL and DX9, specifically D3D9, live in different places. D3D9 had a lot more kernel-side code, which caused expensive switches when issuing certain commands; this is why D3D9 got all that instanced draw stuff and OpenGL didn't, because on small batch sizes OpenGL could be between 1.4x and 2.3x quicker at executing the draw call than D3D9. OpenGL sits the other side of the kernel calls and gives the implementers more control over when that switch occurs. D3D10 also sits the other side, again not wasting that time.

    There were also changes in the resource model, the driver model and various other areas, some of which were to make the Vista windowing system possible. This resource model and everything about it was very much tied to D3D10 and how it does things. OpenGL's resource model was also fundamentally different to the D3D9 model, with the implementer again having a lot more control; you never suffered a 'lost device' in OpenGL, for example, and the runtime automatically controlled allocated memory, unlike in D3D9. There are also some things OpenGL could do which D3D9 couldn't, such as on-card async memory copies and render-to-vertex-buffer (well, ATI had a hack for it, but the OpenGL method was cross-hardware).

    Could they have done it? Well, of course they could have, but unlike previous DX updates it would have taken a lot more effort, time and money. All for what was, at the time, a 5-year-old OS they were hoping to begin getting rid of.

    Personally, I think this was a good move all in all; the problem was with how it was presented to the general masses, who suddenly saw they had to pay a few hundred USD for a new DX version.
  • Dribble - Saturday, January 31, 2009 - link

    The reason all our games are DX9c is because that's what the consoles support (yes, I know the PS3 uses OpenGL, but it has 9c feature support - it basically has a 7800GTX in it).
    Most games are made for console as well as PC; the majority use some cross-platform renderer (e.g. the Unreal 3 engine) and that will support DX9c. Cross-platform DX support won't change until the Xbox 720 and PS4 arrive.
    I don't see how DX11 will change this.
  • DerekWilson - Saturday, January 31, 2009 - link

    That's a really good point and something I should have considered.

    I think the gap in performance between the PC and consoles will so heavily favor PCs that it will inspire developers to once again shift their focus to the PC. I could be wrong though.
  • ssj4Gogeta - Sunday, February 1, 2009 - link

    That will change if Microsoft chooses Larrabee for the Xbox 720, because then it will support all the DirectX versions (even unreleased ones) and all the OpenGL versions. If MS chooses Larrabee, and it also becomes popular for PC gaming, we PC gamers may see a huge benefit, because then games will be built for the latest DX version.
  • bobvodka - Sunday, February 1, 2009 - link

    It's all well and good saying that, but Larrabee is currently utterly unproven technology.

    Don't get me wrong, I'd like to see it do well, if only because 3 players in the GPU race will be better than 2 from a technology and consumer standpoint. However, everyone seems to be pinning their hopes on this technology when there hasn't even been a working demo of it running DX9 at the same speed as NV/AMD, never mind DX10 or DX11.
  • ssj4Gogeta - Sunday, February 1, 2009 - link

    You're right. But I'm really excited. It would be so nice if Intel could really pull off such a feat. We'll have 2 TFLOPS of general-purpose parallel processing power!
  • bobvodka - Saturday, January 31, 2009 - link

    The problem is, the consoles are where the money is. Combine that with a fixed hardware platform (well, hard drives and different screen sizes notwithstanding) and it makes for a much easier time for devs.
  • piroroadkill - Saturday, January 31, 2009 - link

    That's definitely a good point. Any game engine these days has to support the major consoles for it to be successful - even if that only means the 360, you're still hamstrung by DirectX9, regardless of whether XP has it or not.

    From a personal point of view I'm still running XP because I run a clusterfuck of graphics cards that vista would shit the bed thinking about.
  • William Gaatjes - Saturday, January 31, 2009 - link

    "Many under-the-hood enhancements mean higher performance for features available but less used under DX10. "


    I must be remembering it wrong, but the same thing was said about DX10. However, it turned out to be nothing more than getting people to buy Vista. I wonder how much will come true this time.
