It’s been roughly a month since NVIDIA's Turing architecture was revealed, and if the GeForce RTX 20-series announcement a few weeks ago has clued us in on anything, it’s that real time ray tracing was important enough for NVIDIA to drop “GeForce GTX” for “GeForce RTX” and completely change the tenor of how they talk about gaming video cards. Since then, it’s become clear that Turing and the GeForce RTX 20-series have a lot of moving parts: RT Cores, real time ray tracing, Tensor Cores, AI features (i.e. DLSS), and ray tracing APIs. All of it comes together to chart a future direction for both game development and GeForce cards.

In a significant departure from past launches, NVIDIA has broken up the embargoes around the unveiling of their latest cards into two parts: architecture and performance. For the first part, today NVIDIA has finally lifted the veil on much of the Turing architecture's details, and there are many. So many, in fact, that some interesting aspects have yet to be explained, and others will need to be dug into alongside objective data. But it also gives us an opportunity to pick apart the namesake of GeForce RTX: ray tracing.

While we can't discuss real-world performance until next week, for real time ray tracing it is almost a moot point. In short, there's no software to use with it right now. Accessing Turing's ray tracing features requires using the DirectX Raytracing (DXR) API, NVIDIA's OptiX engine, or the unreleased Vulkan ray tracing extensions. For use in video games, it essentially narrows down to just DXR, which has yet to be released to end-users.
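
To make that dependency concrete, here is a minimal sketch of how an application can test whether DXR is exposed at all. This is an illustrative example of ours, assuming a Windows SDK recent enough to ship the DXR headers; it is not code taken from NVIDIA or from any shipping game.

```cpp
// Minimal DXR capability check via Direct3D 12 (link against d3d12.lib).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter; feature level 12.0 is assumed here.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
        std::printf("Could not create a D3D12 device.\n");
        return 1;
    }

    // OPTIONS5 carries the RaytracingTier capability field introduced for DXR.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR tier 1.0 or higher is exposed on this device.\n");
    } else {
        std::printf("DXR is not available on this device/OS build.\n");
    }
    return 0;
}
```

On systems without the DXR runtime, the query simply reports D3D12_RAYTRACING_TIER_NOT_SUPPORTED, which is exactly the situation end-users are in until the API ships.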

The timing, however, is better than it seems. Launching a year or so later could mean facing competitor products that are competitive in traditional rasterization. And given NVIDIA's traditionally strong ecosystem with developers and middleware (e.g. GameWorks), they would want to leverage high-profile games to drum up consumer support for hybrid rendering, where both ray tracing and rasterization are used.

So as we've said before, with hybrid rendering, NVIDIA is gunning for nothing less than a complete paradigm shift in consumer graphics and gaming GPUs. And insofar as real time ray tracing is the 'holy grail' of computer graphics, NVIDIA has plenty of other potential motivations beyond graphical purism. Like all high-performance silicon design firms, NVIDIA is feeling the pressure of the slow death of Moore's Law, a problem to which fixed-function yet versatile hardware offers one answer. And while NVIDIA compares the Turing 20-series to the Pascal 10-series, Turing has much more in common with Volta, sitting in the same generational compute family (sm_75 and sm_70). That is an interesting development, as both NVIDIA and AMD have stated that GPU architectures will soon diverge into separate designs for gaming and compute. Not to mention that making a new standard out of hybrid rendering would make it harder for competitors to either catch up or join the market.
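
As an aside on the sm_75/sm_70 point: compute capability is something developers can query directly through the CUDA runtime. The short sketch below is our own illustration rather than NVIDIA sample code; the 7.5 and 7.0 it reports on Turing and Volta respectively correspond to the sm_75 and sm_70 targets mentioned above.

```cpp
// Report each GPU's compute capability; Turing reports 7.5 (sm_75), Volta 7.0 (sm_70).
// Build with: nvcc -o smquery smquery.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```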

But real time ray tracing being what it is, it was always a matter of time before it became feasible, either through NVIDIA or another company. DXR, for its part, doesn't specify the implementation of its hardware-accelerated layer. What adds to the complexity is the branding and marketing of the Turing-related GeForce RTX ecosystem, as well as the inclusion of Tensor Core-accelerated features that are not inherently part of hybrid rendering, but are part of a GPU architecture that has now made its way to consumer GeForce.

For the time being, though, the GeForce RTX cards have not been released, and we can’t talk about any real-world data. Nevertheless, the context of hybrid rendering and real time ray tracing is central to Turing and to GeForce RTX, and it will remain so as DXR is eventually released and consumer-relevant testing methodology is established for it. In light of these factors, as well as Turing information we’ve yet to fully analyze, today we’ll focus on the Turing architecture and how it relates to real time ray tracing. And be sure to stay tuned for the performance review next week!

Ray Tracing 101: What It Is & Why NVIDIA Is Betting On It
111 Comments

  • StormyParis - Friday, September 14, 2018 - link

    Fascinating subject and excellent treatment. I feel informed and intelligent, so thank you.
  • Gc - Friday, September 14, 2018 - link

    Nice introductory article. I wonder if the ray tracing hardware might have other uses, such as path finding in space, or collision detection in explosions.

    The copy editing was a letdown.

    Copy editor: please review the "amount vs. number" categorical distinction in English grammar. Parts of this article that incorrectly use "amount", such as "amount of rays" instead of "number of rays", are comprehensible but jarring to read, in the way that a computer translation can be comprehensible but annoying to read.

    (yes: "amount of noise". no: "amount of rays, usually 1 or 2 per pixel". yes: "number of rays, usually 1 or 2 per pixel".) (Recall that "number" is for countable items, that can be singular or plural, such as 1 ray or 2 rays. "Amount" is for an unspecified quantity such as liquid or money, "amount of water in the tank" or "amount of money in the bank". But if pluralizable units are specified, then those units are countable, so "number of liters in the tank", or "number of dollars in the bank". [In this article, "amount of noise" does not refer to an event as in 1 noise, 2 noises, but rather to an unspecified quantity or ratio.] A web search for "amount vs. number" will turn up other explanations.)
  • Gc - Friday, September 14, 2018 - link

    (Hope you're all staying dry if you're in Florence's storm path.)
  • edzieba - Saturday, September 15, 2018 - link

    " I wonder if the ray tracing hardware might have other uses, such as path finding in space, or collision detection in explosions."

    Yes, these were called out (as well as gun hitscan and AI direct visibility checks) in their developer-focused GDC presentation.
  • edzieba - Saturday, September 15, 2018 - link

    One thing that might be worth highlighting (or exploring further) is that raytraced reflections and lighting/shadowing are necessary for VR, where screen-space reflections produce very obviously incorrect results.
  • Achaios - Saturday, September 15, 2018 - link

    This is epic. It should be taught as a special lesson in Marketing classes. NVIDIA is selling fanboys technology for which there is presently no practical use, and the cards are already sold out. Might as well give NVIDIA a license to print money.
  • iwod - Saturday, September 15, 2018 - link

    Aren't we fast running into a memory bandwidth bottleneck?

    Assuming we get 7nm next year with 8192 CUDA cores, that will need at least 80% more bandwidth, or ~1 TB/s. Neither 512-bit memory nor HBM2 could offer that.
  • HStewart - Saturday, September 15, 2018 - link

    I'm wondering when professional rendering packages will support RTX - I personally have Lightwave 3D 2018, and because of Newtek's excellent upgrade process, I could see it supporting RTX in the future. I could see this technology doing wonders for movie and game creation, reducing the dependency on CPU cores.
  • YaleZhang - Saturday, September 15, 2018 - link

    Increased power use is disappointing. Is the 225W TDP for 2080 the power used or the heat dissipated? If it's power used, then that would include the 27W power used by VirtualLink. So then the real power usage would be 198 W.
  • willis936 - Sunday, September 16, 2018 - link

    I've been in signal integrity for five years. I write automation scripts for half-million-dollar oscilloscopes. I love it. It's my jam. Why on god's green earth does NVIDIA think their audience cares about eye diagrams? They mean literally nothing to the target audience. They're not talking to system integrators or chip manufacturers. Even if they were, a single eye diagram with an eye width measurement means next to nothing beyond demonstrating that they have an image of what a signal at a given baud rate should look like (it's unclear if it's simulated or taken from one of their test monkeys). If they really wanted to blow us away, they could say something like they've verified with 97% confidence that their memory interface/channel BER <= 1E-15 when the spec demands BER <= 1E-12, or something. It's just a jargon image to show off how much they must really know their stuff. It just strikes me as tacky.
