It’s been roughly a month since NVIDIA's Turing architecture was revealed, and if the GeForce RTX 20-series announcement a few weeks ago has clued us in on anything, it’s that real time raytracing was important enough for NVIDIA to drop “GeForce GTX” for “GeForce RTX” and completely change the tenor of how they talk about gaming video cards. Since then, it’s become clear that Turing and the GeForce RTX 20-series have a lot of moving parts: RT Cores, real time raytracing, Tensor Cores, AI features (i.e. DLSS), and raytracing APIs. All of it comes together to chart a future direction for both game development and GeForce cards.

In a significant departure from past launches, NVIDIA has broken up the embargos around the unveiling of their latest cards into two parts: architecture and performance. For the first part, today NVIDIA has finally lifted the veil on much of the Turing architecture details, and there are many of them. So many, in fact, that some interesting aspects have yet to be explained, and some that we’ll need to dig into alongside objective data. But it also gives us an opportunity to pick apart the namesake of GeForce RTX: raytracing.

While we can't discuss real-world performance until next week, for real time ray tracing it is almost a moot point. In short, there's no software to use with it right now. Accessing Turing's ray tracing features requires using the DirectX Raytracing (DXR) API, NVIDIA's OptiX engine, or the unreleased Vulkan ray tracing extensions. For use in video games, it essentially narrows down to just DXR, which has yet to be released to end-users.
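To ground what these APIs (and Turing's RT Cores) are actually accelerating: at its heart, ray tracing boils down to intersecting rays against scene geometry, over and over, millions to billions of times per frame. As a purely illustrative sketch of our own (not tied to DXR, OptiX, or Vulkan in any way), here is the classic ray-sphere intersection test in Python, solving the quadratic for where a ray meets a sphere's surface:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance t to the nearest hit along the ray, or None on a miss.

    The ray is origin + t * direction; direction is assumed to be normalized,
    so the quadratic's 'a' coefficient is 1.
    """
    # Vector from the sphere center to the ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t >= 0.0 else None

# A camera ray looking down -z at a unit sphere 5 units away hits at t = 4
hit = ray_sphere_intersect((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
```

A naive renderer runs a test like this against every object for every ray; what dedicated hardware buys you is doing this (plus the acceleration-structure traversal that avoids testing every object) in fixed function silicon instead of shader code.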

The timing, however, is better than it seems. Launching a year or so later could mean facing products that are competitive in traditional rasterization. And given NVIDIA's traditionally strong ecosystem with developers and middleware (e.g. GameWorks), they would want to leverage high-profile games to drum up consumer support for hybrid rendering, where both ray tracing and rasterization are used.
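The structure of hybrid rendering can be sketched in a few lines. The following is a toy illustration of our own, with stand-in stub functions rather than any real graphics API: rasterization supplies cheap primary visibility for every pixel, and rays are spent only on the effects rasterization handles poorly, such as reflections.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    base_color: float
    reflective: bool

def rasterize(pixel):
    # Stand-in for the raster pipeline: primary visibility per pixel.
    # (Toy rule: every other pixel lands on a reflective surface.)
    return Surface(base_color=0.5, reflective=(pixel % 2 == 0))

def trace_reflection_ray(surface):
    # Stand-in for a hardware-accelerated reflection ray; returns added light.
    return 0.25

def render_row(width):
    row = []
    for x in range(width):
        surface = rasterize(x)        # cheap: rasterize everything
        color = surface.base_color
        if surface.reflective:        # expensive: trace rays selectively
            color += trace_reflection_ray(surface)
        row.append(color)
    return row
```

The key point of the hybrid approach is in the conditional: the ray budget is spent only where it visibly matters, which is what makes real time frame rates plausible at all.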

So as we've said before, with hybrid rendering NVIDIA is gunning for nothing less than a complete paradigm shift in consumer graphics and gaming GPUs. And insofar as real time ray tracing is the 'holy grail' of computer graphics, NVIDIA has plenty of other potential motivations beyond graphical purism. Like all high-performance silicon design firms, NVIDIA is feeling the pressure of the slow death of Moore's Law, for which fixed function, yet still versatile, hardware provides a solution. And where NVIDIA compares the Turing 20-series to the Pascal 10-series, Turing actually has much more in common with Volta, being in the same generational compute family (sm_75 and sm_70). That's an interesting development, given that both NVIDIA and AMD have stated that GPU architectures will soon diverge into separate designs for gaming and compute. Not to mention that making a new standard out of hybrid rendering would hamper competitors from either catching up or joining the market.

But real time ray tracing being what it is, it was always a matter of time before it became feasible, either through NVIDIA or another company. DXR, for its part, doesn't specify the implementations for running its hardware accelerated layer. What adds to the complexity is the branding and marketing of the Turing-related GeForce RTX ecosystem, as well as the inclusion of Tensor Core accelerated features that are not inherently part of hybrid rendering, but are part of a GPU architecture that has now made its way to consumer GeForce.

For the time being though, the GeForce RTX cards have not yet been released, and we can’t talk about any real-world data. Nevertheless, the context of hybrid rendering and real time ray tracing is central to Turing and to GeForce RTX, and it will remain so as DXR is eventually released and consumer-relevant testing methodology is established for it. In light of these factors, as well as Turing information we’ve yet to fully analyze, today we’ll focus on the Turing architecture and how it relates to real-time raytracing. And be sure to stay tuned for the performance review next week!

Ray Tracing 101: What It Is & Why NVIDIA Is Betting On It

  • gglaw - Saturday, September 15, 2018 - link

    Why bother to make up statements claiming the prices are completely as expected with inflation added without even having a slight clue what the inflation rate has been in recent history? Outside of the very young readers here, most of us were around for 700 series, 8800, etc. and know first hand what type of changes inflation has had in the last 10-20 years. Especially comparing to the 980 Ti, and 1080 Ti, inflation has barely moved since those releases.
  • Spunjji - Monday, September 17, 2018 - link

    This. Most people here aren't stupid.
  • notashill - Saturday, September 15, 2018 - link

    700 series wasn't even close. 780 was $650->adjusted ~$700, 780Ti was $700->adjusted ~$760. And the 780 MSRP dropped to $500 after 6 months when the Ti launched.
  • Santoval - Monday, September 17, 2018 - link

    Yes, Navi will be midrange, at around a GTX 1080 performance level, or at best a bit faster. They initially planned a dual Navi package for the high end, linked by Infinity Fabric, but they canned (or postponed) it, due to the reluctance of game developers to support dual-die consumer graphics cards (according to AMD). They might release dual Navi professional graphics cards though.
    Tensor and RT cores should not be expected either. These will have to wait for the post-Navi (and post-GCN) generation.
  • TropicMike - Friday, September 14, 2018 - link

    Good article. Lots of complicated stuff to try to explain.

    Just a quick typo on page 2: "It’s in pixel shaders that the various forms of lighting (shadows, reflection, reflection, etc) " I'm guessing you meant 'refraction' for one of those.
  • Smell This - Wednesday, July 03, 2019 - link

    Super **Duper** Turbo Hyper Championship Edition
  • Yaldabaoth - Friday, September 14, 2018 - link

    For the "eye diagram" on page 8, the text says, "In this case we’re looking at a fairly clean eye diagram, illustrating the very tight 70ns transitions between data transfers." However, the image is labeled as "70 ps".
  • Ryan Smith - Friday, September 14, 2018 - link

    Nano. Pico. Really, it's a small difference... =P

    Thanks!
  • Bulat Ziganshin - Friday, September 14, 2018 - link

    It's not "Volta in spirit". It's Volta for the masses. The only differences:
    - reduced FP64 cores
    - reduced sharedmem/cache from 128 KB to 96 KB
    - added RT cores

    Now let's check what you would want to change to produce a "scientific" Turing GPU. Yes, exactly these things. So, despite the name, it's the same architecture, tuned for the gaming market.
  • Yojimbo - Saturday, September 15, 2018 - link

    You don't really know that. This article, as explained in the beginning, focuses only on the RT core improvements. There are other Turing features that were left out. I think we have no idea if Volta has variable rate shading, mesh shading, or multi-view rendering. I'm guessing it does not.

    Besides, what you said isn't true even limiting the discussion to what was covered in this article. The Turing Tensor cores allow for a greater range of precisions.
