Closing Thoughts

With a tagline like ‘Graphics Reinvented’, NVIDIA is certainly not shying away from casting Turing in as revolutionary a light as possible. In that vein, NVIDIA is choosing to compare Turing to Pascal rather than Volta in every possible circumstance, especially for gaming. This decision is certainly not unfounded, because for consumers the Turing-based GeForce 20-series succeeds the Pascal-based GeForce 10-series. However, it can also give the impression that because Turing is so different from Pascal, it warrants dissimilar comparisons, such as RTX-OPS metrics or gaming performance uplifts with DLSS or raytracing enabled.

The situation becomes a little more muddled for several reasons:

  • The pricing and availability of the RTX 20-series mean that, on a purely market segmentation level, it does not directly replace Pascal gaming products
  • As gaming-focused cards, the major new features of the RTX 20-series (RT cores, tensor cores, advanced shading) do not work out-of-the-box in games and are limited to select titles
  • The burden of communication falls on developers to educate consumers on the details of specific raytracing effects or the use of AI-accelerated denoisers

These aren’t points that necessarily need to define Turing, except that NVIDIA has pushed the envelope by going all-in with marketing and branding. For their part, NVIDIA will maintain a continuously updated list of games with RTX platform support.

On one hand, Turing seems like a possible solution to the gaming/compute architecture divergence. It seems less likely now that NVIDIA would backtrack into a more standard design for maximum rasterization performance, though obviously that remains to be seen with how the product fares. In any case, as most silicon design firms have leapfrogging design teams, the major decisions are unlikely to move too far toward the fixed-function side, if only because the greatest strength of GPUs in compute is their programmability and versatility.

Looking back at ray tracing, it seems that even if it isn't immediately practical, there is still a seeding effect to be gained via enthusiasts and certain gamers, one that would work well with higher-profile AAA games. As we move into next week, it appears that the GeForce RTX 20-series is certainly one of the more nuanced graphics products, with both caveats and potential.

Comments

  • BurntMyBacon - Monday, September 17, 2018 - link

    Good article. It would have been nice to get more information as to exactly what nVidia is doing with the RT cores to optimize ray tracing, but I can understand why they would want to keep that a secret at this point. One oversight in an otherwise excellent article:

    @Nate Oh (article): "The net result is that with nearly every generation, the amount of memory bandwidth available per FLOP, per texture lookup, and per pixel blend has continued to drop. ... Turing, in turn, is a bit of an interesting swerve in this pattern thanks to its heavy focus on ray tracing and neural network inferencing. If we're looking at memory bandwidth merely per CUDA core FLOP, then bandwidth per FLOP has actually gone up, since RTX 2080 doesn't deliver a significant increase in (on-paper) CUDA core throughput relative to GTX 1080."

    The trend has certainly been downward, but I was curious as to why the GTX 780 wasn't listed. When I checked it out, I found that it is another "swerve" in the pattern, similar to the RTX 2080. The specifications for the NVIDIA Memory Bandwidth per FLOP (In Bits) chart are:
    GTX 780 - 0.58 bits | 3.977 TFLOPS | 288GB/sec

    This is easily found information and its omission is pretty noticeable (at least to me), so I assume it got overlooked (easy to do in an article this large). While it doesn't match your initial always-downward observation, it also clearly doesn't change the trend; it just means the trend is not strictly monotonic. (A worked example of the bits-per-FLOP arithmetic appears after the comments.)
  • nboelter - Tuesday, September 18, 2018 - link

    I had to solve the problem of “random memory accesses from the graphics card memory are the main bottleneck for the performance of the molecular dynamics simulation” when I did some physics on CUDA, and got great results with Hilbert space-filling curves (there is a fabulous German paper from 1891 about this newfangled technology) to, essentially, construct BVHs. The only real difference is that I had grains of sand instead of photons. Now I really wonder if these RT cores could be used for physics simulations! (A sketch of the Hilbert index mapping appears after the comments.)
  • webdoctors - Tuesday, September 18, 2018 - link

    This will likely get lost in the 100 comments, but this is really huge and is getting overshadowed by the pricing.

    I've often wondered and complained for years to my friends about why we keep going to higher resolutions, from 720p to 4K, rather than actually improving the graphics. Look at a movie on DVD from 20 years ago at 480p resolution, and the visuals are so much more REALISTIC than the 4K stuff you see in games today, because it's either real people on film or, if CG, raytraced offline with full lighting. Imagine getting REAL TIME renders that look like real-life video; that's a huge breakthrough. Sure, we've had raytracing for decades, but never in real time outside of datacenter-size clusters.

    Rasterized 4K or 8K content will never look as REAL as 1080p raytraced content. It might look nicer, but it won't look REAL. It's great that we'll have hardware where we can choose whether we want the fake, cartoony rasterization path or the REAL path.

    A 2080 Ti that costs $1200 will be $120 in 10 years, but it won't change the fact that now you're getting REAL vs fake. Two years ago you didn't have the option; you couldn't say "I'll pay you $5k to give me the raytraced option in the game." Now we'll (hopefully) get developer support and see this go mainstream. You could probably also use AWS to game-stream this instead of buying a video card and get raytracing that way too.

    If you're happy without ray tracing, just buy a 1070 and stick to playing games at 1080p. You'll never be perf limited in any game, and you can move on.
  • eddman - Wednesday, September 19, 2018 - link

    You are not getting REAL with the 20 series, not even close.
  • MadManMark - Wednesday, September 19, 2018 - link

    His point is that we are getting CLOSER to "real," not that it is CLOSE or IS real. I would have thought that was obvious, but I guess it isn't to everyone.
  • eddman - Thursday, September 20, 2018 - link

    It seems you are the one who misread. From his comment: "it won't change the fact that now you're getting REAL vs fake"

    So, yes, he does think that with the 20 series you get the REAL thing.
  • sudz - Wednesday, September 19, 2018 - link

    "as opposed Pascal’s 2 partition setup with two dispatch ports per sub-core warp scheduler."

    So in conclusion: RTX has more warp cores.

    Engage!
  • ajp_anton - Friday, September 21, 2018 - link

    This comment is a bit late, but your math for memory efficiency is wrong.

    If bandwidth+compression gives a 50% increase, and bandwidth alone is a 27% increase, you can't just subtract them to get the compression increase. In this example, the compression increase is 1.5/1.27 = 1.18, or 18%, not the 23% that you get by subtracting. (See the arithmetic sketch after the comments.)

    This also means you have to rewrite the text where you note that it's weird this is higher than the last generation's increase, because it no longer is higher.
  • Overmind - Thursday, September 27, 2018 - link

    There are many inconsistencies in the article.
  • Overmind - Thursday, September 27, 2018 - link

    If the 102, with 12 complete functional modules, has 72 RTCs (RTX-ops), how can the 2080 Ti, with 11 functional modules, have 78 RTCs? The correct value is clearly 68.
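
For readers who want to check BurntMyBacon's bits-per-FLOP figure above, here is a minimal sketch of the arithmetic only. The GTX 780 numbers (288 GB/sec, 3.977 TFLOPS) come straight from that comment; the helper function name is ours, not anything from the article.

```python
def bits_per_flop(bandwidth_gb_s, tflops):
    """Memory bandwidth available per FP32 FLOP, expressed in bits."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8    # GB/s -> bits/s
    flops_per_second = tflops * 1e12              # TFLOPS -> FLOPS
    return bits_per_second / flops_per_second

# GTX 780 figures as quoted in the comment above
print(round(bits_per_flop(288, 3.977), 2))        # 0.58 bits per FLOP
```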
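On nboelter's Hilbert-curve point, below is a sketch of the standard 2D coordinate-to-Hilbert-index mapping; sorting particles (or BVH leaves) by this key keeps spatially close items close in memory. This is a generic textbook routine offered for illustration, not code from the article or from NVIDIA's RT core implementation.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) on an n x n grid (n a power of two) to its
    position along the Hilbert space-filling curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting by Hilbert index makes spatial neighbors (approximately)
# memory neighbors, which helps coalescing and caching on the GPU.
particles = [(3, 5), (0, 0), (7, 7), (4, 4)]
particles.sort(key=lambda p: hilbert_index(8, p[0], p[1]))
```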
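And the multiplicative decomposition ajp_anton describes can be checked in a few lines; the 50% and 27% figures are the ones quoted in that comment, used here purely to illustrate the arithmetic.

```python
total_gain = 1.50      # raw bandwidth + compression, per the comment
bandwidth_gain = 1.27  # raw bandwidth increase alone

# The gains compose multiplicatively, so the compression share is a ratio,
# not a difference: 1.50 / 1.27 ~= 1.18, i.e. roughly an 18% gain.
compression_gain = total_gain / bandwidth_gain
print(f"{(compression_gain - 1) * 100:.0f}%")  # -> 18%
```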
