Closing Thoughts

With a tagline like ‘Graphics Reinvented’, NVIDIA is certainly not shying away from casting Turing in the most revolutionary light possible. In that vein, NVIDIA has chosen to compare Turing to Pascal rather than Volta at every opportunity, especially for gaming. This decision is not unfounded: for consumers, the Turing-based GeForce 20-series succeeds the Pascal-based GeForce 10-series. However, it can also give the impression that because Turing is so different from Pascal, it warrants unconventional comparisons, such as RTX-OPS metrics or gaming performance uplifts measured with DLSS or raytracing enabled.

The situation becomes a little more muddled for several reasons:

  • The pricing and availability of the RTX 20-series mean that, on a purely market segmentation level, it does not directly replace Pascal gaming products
  • As gaming-focused cards, the major new features of the RTX 20-series (RT cores, tensor cores, advanced shading) do not work out-of-the-box in games and are limited to select titles, and
  • The burden of communication falls on developers to educate consumers on the details of specific raytracing effects or the use of AI-accelerated denoisers

These aren’t points that necessarily need to define Turing, except that NVIDIA has pushed the envelope by going all-in with its marketing and branding. For their part, NVIDIA will maintain a continuously updated list of games with RTX platform support.

On one hand, Turing seems like a possible solution to the gaming/compute architecture divergence. It seems less likely now that NVIDIA would backtrack into a more standard design for maximum rasterization performance, though that obviously remains to be seen as the product fares in the market. In any case, as most silicon design firms have leapfrogging design teams, the major decisions are unlikely to move too far toward the fixed-function side, if only because the greatest strength of GPUs in compute is their programmability and versatility.

Looking back at ray tracing, it seems that even if it isn't immediately practical, there would still be a seeding effect to be gained via enthusiasts and certain gamers, which would work well with higher-profile AAA games. As we move into next week, it appears that the GeForce RTX 20-series is definitely one of the more nuanced graphics products, with both caveats and potential.


  • xXx][Zenith - Friday, September 14, 2018 - link

    Nice write-up! With 25% of the chip area dedicated to Tensor Cores and another 25% to RT Cores, NVIDIA is betting big on DLSS and RTX for gaming use cases. With all the architectural improvements, better memory compression ... AMD is out of the game, quite frankly.

    Btw, for the fans of GigaRays/Sec, aka the CEO metric, here is an OptiX-RTX benchmark for Volta, Pascal and Maxwell GPUs: https://www.youtube.com/watch?v=ULtXYzjGogg

    OptiX 5.1 API will work with Turing GPUs but will not take advantage of the new RT Cores, so older-gen NV GPUs can be directly compared to Turing. Application devs need to rebuild with OptiX 5.2 to get access to HW-accelerated ray tracing. Imho, RTX cards will see a nice speedup with Turing RT Cores in GPU renderers (Octane, VRay, ...), but the available RAM will limit scene complexity big time.
  • blode - Friday, September 14, 2018 - link

    hate when i lose a game before my opponent arrives
  • eddman - Friday, September 14, 2018 - link

    As far as I can tell, GigaRays/Sec is not an nvidia-made term. It's been used before by others too, like Imagination Tech.

    The nvidia-made-up term is RTX-OPS.
  • xXx][Zenith - Friday, September 14, 2018 - link

    Ray tracing metrics without scene complexity and viewport placement are just smoke and mirrors ...

    From whitepaper: "Turing ray tracing performance with RT Cores is significantly faster than ray tracing in Pascal GPUs. Turing can deliver far more Giga Rays/Sec than Pascal on different workloads, as shown in Figure 19. Pascal is spending approximately 1.1 Giga Rays/Sec, or 10 TFLOPS / Giga Ray to do ray tracing in software, whereas Turing can do 10+ Giga Rays/Sec using RT Cores, and run ray tracing 10 times faster."
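Taken at face value, the whitepaper's numbers are internally consistent; a quick back-of-the-envelope check (all figures copied from the quote above, not independently measured) shows how the claimed ~10x factor falls out of them:

```python
# Sanity check of the Turing whitepaper's quoted ray tracing figures.
# All inputs are NVIDIA's stated numbers, not measurements.

pascal_tflops = 10.0         # shader throughput spent on software ray tracing
tflops_per_gigaray = 10.0    # whitepaper's stated software cost per GigaRay

# Pascal's implied software throughput: ~1 GigaRay/sec (whitepaper says ~1.1)
pascal_gigarays = pascal_tflops / tflops_per_gigaray

turing_gigarays = 10.0       # claimed RT core throughput, "10+ Giga Rays/Sec"

speedup = turing_gigarays / pascal_gigarays
print(pascal_gigarays, speedup)  # → 1.0 10.0
```

So the "10 times faster" claim is just the ratio of the two quoted throughput numbers; it says nothing about how either number varies with scene complexity or viewport placement, which is the point raised in the comment above.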
  • eddman - Saturday, September 15, 2018 - link

    I don't know the technical details of ray tracing, but how is that nvidia statement related to scene complexity, etc?

    From my understanding of that statement, they are simply saying that turing is able to deliver far more rays/sec than pascal, because it is basically hardware-accelerating the operations through RT cores but pascal has to do all those operations in software through regular shader cores.
  • niva - Wednesday, September 19, 2018 - link

    If you hold scene complexity constant (take the same environment/angle) and run a ray tracing experiment, the new hardware will be that much faster at cranking out frames. At least that's how I'm interpreting the article and the statement above, I'm not really sure if that's accurate though...
  • Yojimbo - Saturday, September 15, 2018 - link

    When did Imgtec use gigarays? As far as I remember they didn't have hardware capable of a gigaray. They measured in hundreds of millions of rays per second, and I don't remember them using the term "megaray", either. Just something along the lines of "200 million rays/sec".
  • eddman - Saturday, September 15, 2018 - link

    Why fixate on the "giga" part? I obviously meant rays/sec; forgot to remove the giga part after copy/paste. A measuring method doesn't change with quantity.
  • Yojimbo - Saturday, September 15, 2018 - link

    Because the prefix is the whole point. The point is the terminology, not the measuring method. Rays per second is pretty obvious and has probably been around since the 70s. The "CEO metric" the OP was talking about was specifically about "gigarays per second". Jensen Huang said it during his SIGGRAPH presentation. Something like "Here's something you probably haven't heard before. Gigarays. We have gigarays."
  • eddman - Saturday, September 15, 2018 - link

    What are you on about? He meant "giga" as in "billion". It's so simple. He said it that way to make it sound impressive.
