Better Image Quality: Jittered Sampling & Faster Anti-Aliasing

As we’ve stated before, the DX11 specification generally leaves NVIDIA’s hands tied. Without cap bits they can’t easily expose additional hardware features beyond what DX11 calls for, and even if they could, there’s always the risk of building hardware that almost never gets used, such as AMD’s tessellator on the Radeon HD 2000-4000 series.

So the bulk of the innovation has to come from something other than offering non-DX11 functionality to developers, and that starts with image quality.

We bring up DX11 here because while it strongly defines what features need to be offered, it says very little about how things work in the backend. The PolyMorph Engine is of course one example of this, but there is another case where NVIDIA has done something interesting in the backend: jittered sampling.

Jittered sampling is a long-standing technique used in shadow mapping and various post-processing effects. In shadow mapping it is usually used to create soft shadows: take a set of randomly offset samples of the neighboring texels in the shadow map, and from those compute a softer shadow edge. The biggest problem with jittered sampling is that it’s computationally expensive, so its use is limited to situations where there is enough performance to pay for it.
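
To make this concrete, below is a minimal CPU-side sketch of jittered shadow-map sampling. It assumes a shadow map stored as a plain depth array; the C++ structure and names (ShadowMap, jitteredShadow) are ours for illustration only - a real implementation would live in a pixel shader.

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // Illustrative shadow map: a 2D grid of depths as seen from the light.
    struct ShadowMap {
        int width = 0, height = 0;
        std::vector<float> depth;                  // one stored depth per texel

        float fetch(int x, int y) const {          // clamped single-texel read
            x = std::max(0, std::min(width - 1, x));
            y = std::max(0, std::min(height - 1, y));
            return depth[y * width + x];
        }
    };

    // Jittered sampling for soft shadows: compare the receiver's depth against
    // numSamples texels at small random offsets around (u, v). The fraction of
    // samples that pass the depth test yields a soft 0..1 shadow factor
    // instead of a hard binary edge.
    float jitteredShadow(const ShadowMap& sm, float u, float v,
                         float receiverDepth, int numSamples, float radius) {
        float lit = 0.0f;
        for (int i = 0; i < numSamples; ++i) {
            // Random offset in [-radius, +radius] texels. A real shader would
            // use a precomputed jitter table rather than rand().
            float jx = radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
            float jy = radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
            int x = static_cast<int>(u * sm.width + jx);
            int y = static_cast<int>(v * sm.height + jy);
            if (receiverDepth <= sm.fetch(x, y))   // texel lit by the light?
                lit += 1.0f;
        }
        return lit / static_cast<float>(numSamples);
    }

Note how every additional jitter sample is another texture fetch - exactly the per-sample cost that makes the technique expensive, and that Gather4 (below) attacks.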

In DX10.1 and beyond, jittered sampling can be accelerated via the Gather4 instruction, which, as the name implies, gathers the four neighboring texels in one operation. Since DX does not specify how this must be implemented, NVIDIA has implemented it in hardware as a single vector instruction. The alternative is to fetch each texel separately, which is how this has to be done manually under DX10 and DX9.
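
As a sketch of the difference (reusing the illustrative ShadowMap type from above, and again with names of our own invention), a Gather4-style operation returns the 2x2 block of texels around a coordinate in one go, whereas DX9/DX10-era code must issue four independent fetches for the same data:

    #include <array>

    // Gather4-style read: return the 2x2 footprint of texels around (u, v) as
    // one operation. On DX10.1+ hardware this maps to a single (vector)
    // instruction; the exact component ordering of the real Gather4 is defined
    // by the API and is not modeled here.
    std::array<float, 4> gather4(const ShadowMap& sm, float u, float v) {
        int x = static_cast<int>(u * sm.width - 0.5f);   // top-left texel of
        int y = static_cast<int>(v * sm.height - 0.5f);  // the 2x2 footprint
        return { sm.fetch(x, y),     sm.fetch(x + 1, y),
                 sm.fetch(x, y + 1), sm.fetch(x + 1, y + 1) };
    }

    // The DX9/DX10 fallback: the same four texels, but issued as four separate
    // fetches that the hardware cannot combine into one instruction.
    std::array<float, 4> gather4Manual(const ShadowMap& sm, float u, float v) {
        int x = static_cast<int>(u * sm.width - 0.5f);
        int y = static_cast<int>(v * sm.height - 0.5f);
        std::array<float, 4> texels{};
        texels[0] = sm.fetch(x, y);                      // fetch #1
        texels[1] = sm.fetch(x + 1, y);                  // fetch #2
        texels[2] = sm.fetch(x, y + 1);                  // fetch #3
        texels[3] = sm.fetch(x + 1, y + 1);              // fetch #4
        return texels;
    }

Functionally the two return the same data; the point of the comparison is the instruction count, since only the first form can be serviced by a single vector fetch on DX10.1-class hardware.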

NVIDIA’s own benchmarks put the performance advantage of the vectorized implementation at roughly 2x over the non-vectorized implementation on the same hardware. The benefit for developers is that those who implement jittered sampling (or any other technique that can use Gather4) will find it to be a much less expensive technique here than it was on NVIDIA’s previous-generation hardware. For gamers, this will mean better image quality through the greater use of jittered sampling.

Meanwhile, anti-aliasing performance overall has received a significant speed boost. As with AMD, NVIDIA has tweaked its ROPs to reduce the performance hit of 8x MSAA, which on previous-generation GPUs could result in a massive performance drop. In this case NVIDIA has improved the compression efficiency in the ROPs to reduce the cost of 8x MSAA, and the company also cites the fact that having additional ROPs improves performance by allowing the hardware to better digest smaller primitives that can’t be compressed well.


[Chart: NVIDIA's HAWX 8x MSAA performance data - not independently verified]

This is something we’re certainly going to test once we have the hardware, although we’re still not sold on the idea that the quality improvement from 8x MSAA is worth any performance hit in most situations. There is one situation, however, where additional MSAA samples do make a stark difference, which we’ll get to next.

Comments (115)

  • dentatus - Monday, January 18, 2010 - link

    " Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world"- they have, its called the radeon HD5970.

    Really, here in Australia, the ATI DX11 hardware represents nothing close to value. The "biggest, most expensive, hottest and fastest card in the world" a.k.a. the HD 5970 weighs in at a ridiculous AUD 1150. In the meantime the HD 5850 jumped from AUD 350 to AUD 450 on average here.

    The "smaller, more affordable, better value" line I was used to associating with ATI went out the window the minute their hardware didn't have to compete with nVidia DX11 hardware.

    Really, I'm not buying any new hardware until there's some viable alternatives at the top and some competition to burst ATI's pricing bubble. That's why it'd be good to see GF100 make a "G80" impression.
  • mcnabney - Monday, January 18, 2010 - link

    You have no idea what a market economy is.

    If demand outstrips supply, prices WILL go up. They have to.
  • nafhan - Monday, January 18, 2010 - link

    It's mentioned in the article, but NVIDIA being late to market is why prices on ATI's cards are high. Based on transistor count, etc., there's plenty of room for ATI to drop prices once they have some competition.
  • Griswold - Wednesday, January 20, 2010 - link

    And that's where the article is dead wrong. For the most part, the ridiculous prices were dictated by low supply vs. high demand. Now we've finally arrived at decent supply vs. high demand, and prices are dropping. The next stage may be good supply vs. normal demand. That, and not a second earlier, is when AMD themselves could willingly start price gouging due to no competition.

    However, the situation will be like this long after Thermi has launched, for the simple reason that there is no reason to believe Thermi won't have yield issues for quite some time after they have been sorted out for AMD - it's the sheer size of chipzilla that will give it a rough time for the first couple of months, regardless of its capabilities.
  • chizow - Monday, January 18, 2010 - link

    I'm sure ATI would've if they could've instead of settling for 2nd place most of the past 3 years, but GF100 isn't just about the performance crown; it's clearly setting the table for future variants based on its design changes, aimed at a broader target audience (think G92).
  • bupkus - Monday, January 18, 2010 - link

    "So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry."

    Are you saying that ATI's viability and funding resources for R&D are not supported by the majority of sales, which traditionally fall into lower-priced hardware, which btw requires smaller and cheaper GPUs?
  • Targon - Wednesday, January 20, 2010 - link

    Why do people not understand that with a six-month lead in the DX11 arena, AMD/ATI will be able to come out with a refresh card that could easily exceed what Fermi ends up being? Remember, AMD has been dealing with the TSMC issues for longer, and by the time Fermi comes out, the production problems SHOULD be done. Now, how long do you think it will take to work the kinks out of Fermi? How about product availability (something AMD has been dealing with for the past few months)? Just because a product is released does NOT mean you will be able to find it for sale.

    The refresh from AMD could also mean that in addition to a faster part, it will also be cheaper. So while the 5870 is selling for $400 today, it may be down to $300 by the time Fermi is finally available for sale, with the refresh part (same performance as Fermi) available for $400. Hmmm, same performance for $100 less, and with no games available to take advantage of any improved image quality of Fermi, you see a better deal with the AMD part. We also don't know what the performance of the refresh from AMD will be, so a lot of this needs a wait-and-see approach.

    We have also seen that Fermi is CLEARLY not even far enough along for leaked performance information, which implies that it may be six MORE months before the card is really ready. Showing a demo isn't the same as letting reviewers tinker with the part themselves. Really, if it will be available for purchase in March, then shouldn't it be ready NOW, since it will take weeks to go from ready to shipping (packaging and such)?

    AMD is winning this round, and they will be in the position where developers will have been using their cards for development, since NVIDIA clearly can't supply theirs. AMD will also be able to make SURE that their cards are the dominant DX11 cards as a result.

  • chizow - Monday, January 18, 2010 - link

    @bupkus, no, but I can see a monster strawman coming from a mile away.
  • Calin - Monday, January 18, 2010 - link

    "Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better"

    No it won't.
    If the game ships with "high resolution" displacement maps, NVidia could make use of them (and AMD might not, because of the geometry power involved). If the game doesn't ship with "high resolution" displacement maps to use for tessellation, then NVidia will just have a lot of geometry power going to waste, and the same graphical quality as AMD.

    Remember that in big game engines, there are multiple "video paths" for multiple GPUs - DirectX 8, DirectX 9, DirectX 10 - and NVidia and AMD both have optimised execution paths.
