Meet The New Future of Gaming: Different Than The Old One

Up until last month, NVIDIA had been pushing a different, more conventional future for gaming and video cards, perhaps best exemplified by their recent launch of 27-inch 4K G-Sync HDR monitors, courtesy of Asus and Acer. Those displays represented – and still represent – the aspirational capabilities of PC gaming graphics: 4K resolution, 144 Hz refresh rate with G-Sync variable refresh, and high-quality HDR. The future was maxing out graphics settings in a visually demanding game, enabling HDR, and rendering at 4K with triple-digit average framerates on a large screen. That target was not achievable with current performance – certainly not by single-GPU cards. In the past, multi-GPU configurations were a stronger option provided that stuttering was not an issue, but recent years have seen both AMD and NVIDIA step back from CrossFireX and SLI, respectively.

Particularly with HDR, NVIDIA was pitching a qualitative rather than quantitative enhancement to the gaming experience. Faster framerates and higher resolutions were better-known quantities, easily demoed and with more intuitive benefits – though in the past there was the perception of 30fps as cinematic, and 1080p still remains stubbornly popular today – where higher resolution means more room for detail, and higher, more consistent framerates mean smoother gameplay and video. Variable refresh rate technology soon followed, resolving the screen-tearing/V-Sync input lag dilemma, though again it took time to catch on to where it is now – nigh mandatory for a higher-end gaming monitor.

For gaming displays, HDR was substantively different from adding graphical detail or allowing smoother gameplay and playback, because it brought a new dimension – ‘more possible colors’ and ‘brighter whites and darker blacks’ – to gaming. And because HDR required support from the entire graphics chain, plus a high-quality HDR monitor and HDR content to take full advantage of it, it was harder to showcase. Added to the other aspects of high-end gaming graphics, and pending the further development of VR, this was the future on the horizon for GPUs.

But today NVIDIA is switching gears, going after the fundamental way computer graphics are modelled in games today. In the more physically realistic rendering processes, light is emulated as rays emitted from their respective sources, but computing even a subset of those rays and their interactions (reflection, refraction, etc.) in a bounded space is so intensive that real time rendering has been impossible. To get the performance needed to render in real time, rasterization instead boils 3D objects down into 2D representations to simplify the computations, significantly faking the behavior of light.
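
To make the contrast concrete, the toy C++ sketch below is purely illustrative – it is not drawn from NVIDIA's implementation or any shipping engine – but it shows the kind of per-ray intersection work that ray tracing implies. Multiply this by millions of rays and multiple bounces every frame and the cost becomes clear; a rasterizer sidesteps it entirely by projecting triangles straight to screen space.

    // Toy example: the innermost operation of a ray tracer is testing a ray
    // against scene geometry. Here, one ray against one sphere.
    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };
    Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Ray    { Vec3 origin, dir; };          // dir assumed normalized
    struct Sphere { Vec3 center; float radius; };

    // Returns the distance along the ray to the nearest hit, if any.
    std::optional<float> intersect(const Ray& ray, const Sphere& s) {
        Vec3  oc   = ray.origin - s.center;
        float b    = dot(oc, ray.dir);
        float c    = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) return std::nullopt;     // ray misses the sphere
        float t = -b - std::sqrt(disc);
        if (t < 0.0f) return std::nullopt;        // nearest hit is behind the ray origin
        return t;
    }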

It’s on real time ray tracing that NVIDIA is staking its claim with GeForce RTX and Turing’s RT Cores. Covered more in-depth in our architecture article, NVIDIA’s real time ray tracing implementation takes all the shortcuts it can get, incorporating select ray traced effects with significant denoising while keeping rasterization for everything else. Unfortunately, this hybrid rendering isn’t orthogonal to the previous goals; it competes with them for the same performance. Now the ultimate experience would be hybrid-rendered 4K with HDR at high, steady, and variable framerates – even though GPUs didn’t have enough performance to reach that point under traditional rasterization alone.
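
As a rough illustration of what hybrid rendering means in practice, the sketch below lays out a hypothetical frame loop: rasterization still produces most of the image, a small ray budget is spent on a couple of effects, and the noisy results are denoised before compositing. Every type and function name here is a placeholder of our own, not a real engine or driver API.

    // Hypothetical stubs standing in for real passes; the names are illustrative only.
    #include <vector>

    struct Scene   {};
    struct Camera  {};
    struct GBuffer {};                         // rasterized depth/normals/albedo
    struct Image   { std::vector<float> px; };

    GBuffer RasterizeGeometry(const Scene&, const Camera&)                 { return {}; }
    Image   TraceReflectionRays(const Scene&, const GBuffer&, int /*rpp*/) { return {}; }
    Image   TraceShadowRays(const Scene&, const GBuffer&, int /*rpp*/)     { return {}; }
    Image   Denoise(const Image& noisy, const GBuffer&)                    { return noisy; }
    void    CompositeAndShade(Image&, const GBuffer&, const Image&, const Image&) {}

    // The shape of a hybrid frame: rasterize most of the image, spend the small
    // ray budget on select effects, denoise the sparse results, then composite.
    void RenderHybridFrame(const Scene& scene, const Camera& cam, Image& frame) {
        GBuffer gbuf = RasterizeGeometry(scene, cam);
        Image reflections = Denoise(TraceReflectionRays(scene, gbuf, /*raysPerPixel=*/1), gbuf);
        Image shadows     = Denoise(TraceShadowRays(scene, gbuf, /*raysPerPixel=*/1), gbuf);
        CompositeAndShade(frame, gbuf, reflections, shadows);
    }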

There’s still a performance cost incurred with real time ray tracing effects, except right now only NVIDIA and developers have a clear idea of what it is. What we can say is that utilizing real time ray tracing effects in games may require sacrificing some or all of the following: high resolution, ultra-high framerates, and HDR. HDR is limited by game support more than anything else. But the first two arguably have minimum standards when it comes to modern high-end gaming on PC – anything under 1080p is completely unpalatable, and anything under 30fps, or more realistically 45 to 60fps, hurts playability. Variable refresh rate can mitigate the latter and framedrops are temporary, but low resolution is forever.
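
The arithmetic behind those standards is simple: a target framerate fixes a per-frame time budget, and every ray traced effect has to fit inside it alongside everything else. The numbers below are hypothetical rather than measured, but they show how quickly a few milliseconds of ray tracing eat into a 60fps budget.

    // Back-of-the-envelope frame budget math with made-up costs, not measurements.
    #include <cstdio>

    int main() {
        const double targetFps    = 60.0;
        const double frameBudget  = 1000.0 / targetFps;   // ~16.7 ms per frame
        const double rtEffectCost = 5.0;                   // hypothetical ray traced pass, in ms

        std::printf("Frame budget at %.0f fps: %.1f ms\n", targetFps, frameBudget);
        std::printf("Left for everything else after a %.1f ms RT pass: %.1f ms (%.0f%% of budget)\n",
                    rtEffectCost, frameBudget - rtEffectCost,
                    100.0 * (frameBudget - rtEffectCost) / frameBudget);
        return 0;
    }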

Ultimately, the real time ray tracing support needs to be implemented by developers via a supporting API like DXR – and many have been working hard on doing so – but currently there is no public timeline of application support for real time ray tracing, Tensor Core accelerated AI features, and Turing advanced shading. The list of games with support for Turing features - collectively called the RTX platform - will be available and updated on NVIDIA's site.
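
For developers, the starting point is the API itself. As a minimal sketch – assuming a Windows SDK recent enough to ship the DXR headers – an application can ask Direct3D 12 whether the runtime and driver expose ray tracing at all before building any work on top of it:

    // Query Direct3D 12 for DXR support before creating any ray tracing state.
    #include <windows.h>
    #include <d3d12.h>

    bool SupportsDXR(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &options5, sizeof(options5)))) {
            return false;   // older runtime: the feature query itself is not recognized
        }
        return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }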

337 Comments

  • eddman - Thursday, September 20, 2018

    It still doesn't justify their prices. Great cards, finally ray-tracing for games, horribly cutthroat prices.
  • Yojimbo - Saturday, September 22, 2018

    So don't buy it, eddman. In the end the only real justification for prices is what people are willing to pay. If one isn't able to make a product cheaply enough for it to be sold for what people are willing to pay then the product is a bad product.

    I don't understand why you are so worried about the price. Or why you think they are "cut-throat". A cut-throat price is a very low price, not a high one.
  • eddman - Sunday, September 23, 2018

    There is a wealthy minority who'd pay that much, and? It's only "justified" if you are an nvidia shareholder.

    The cards are overpriced compared to last gen and that's an absolute fact. Your constant defending of nvidia's pricing is certainly not a normal consumer behavior.
  • mapesdhs - Wednesday, September 26, 2018

    Yojimbo is right that an item is only ever worth what someone is willing to pay, so in that sense NVIDIA can do what it likes; in the end it's up to the market, to consumers, whether the prices "make sense", i.e. whether people actually buy them. In this regard the situation we have atm is largely one made by gamers themselves, because even when AMD released competitive products (whether by performance, value, or both), people didn't buy them. There are even people saying atm they hope AMD can release something to compete with Turing just so NVIDIA will drop its prices and thus they can buy a cheaper NVIDIA card; that's completely crazy, AMD would be mad to make something if that's how the market is going to respond.

    What's interesting this time though is that even those who in the past have been happy to buy the more expensive cards are saying they're having major hesitation about buying Turing, and the street cred which used to be perceived as coming with buying the latest & greatest has this time largely gone, people are more likely to react like someone is a gullible money pumped moron for buying these products ("More money than sense!", as my parents used to say). By contrast, when the 8800 GTX came out, that was a huge leap over the 7800 and people were very keen to get one, those who could afford it. Having one was cool. Ditto the later series right through to Maxwell (though a bit of a dip with the GTX 480 due to heat/power). The GTX 460 was a particularly good release (though the endless rebranding later was annoying). Even Pascal was a good bump over what had come before.

    Not this time though, it's a massive price increase for little gain, while the headline features provide sub-60Hz performance at a resolution far below what NVIDIA themselves have been pushing as desirable for the last 5 years (the focus has been on high frequency monitors, 4K and VR); now NVIDIA is trying to roll back the clock, which won't work, especially since those who've gotten used to high frequency monitors physically cannot go back (ref New Scientist, changes in the brain's vision system).

    Thus, eddman is right that the cards are overpriced in a general sense, as they don't remotely match what the market has come to expect from NVIDIA based on previous releases. However, if gamers don't vote with their wallets then nothing will change. Likewise, if AMD releases something just as good, or better value, but gamers don't buy it, then again nothing will change; we'll be stuck with this new expensive normal.

    I miss the Fermi days, buy two GTX 460s to have better performance than a GTX 580, didn't cost much, games ran great, and the lesser VRAM didn't bother me anyway as I wasn't using an uber monitor. Now we have cards that cost many hundreds that don't even support multi-GPU. It's as daft as Intel making the cost entry point to >= 40 PCIe lanes much higher than it was with X79 (today it's almost 1000 UKP); an old cheapo 4820K can literally do things a 7820X can't. :D

    Alas though, again it boils down to individual choice. Some want the fastest possible and if they can afford it then that's up to them, it's their free choice, we don't have the right to tell people they shouldn't buy these cards. It's their money after all (anything else is communism). It is though an unfortunate reality that if the cards do sell well then NVIDIA will know they can maintain this higher-priced and more feature-restricted strategy, while selling the premium parts to Enterprise. Btw, it amazes me how people keep comparing the 2080 to the 1080 Ti even though the former has less RAM; how is that an upgrade in the product stack? (People will respond with ray tracing! Ray tracing! A feature which can't be used yet and runs too slow to be useful anyway, and with an initial implementation that's a pretty crippled version of the idea as well.) And why doesn't the 2080 Ti have more than 11GB? It really should, unless NVIDIA figures that if they can indeed push people back to 1080p then 11GB is enough anyway, which would be ironic.

    I'm just going to look for a used 1080 Ti, more than enough for my needs. For those with much older cards, a used 980 Ti or 1070, or various AMD cards, are good options.

    Ian.
  • Yojimbo - Wednesday, September 19, 2018

    Yes, exactly. A very appropriate quote.
  • Skiddywinks - Thursday, September 20, 2018

    No reason Ford couldn't have done both though. There is no technological reason nVidia could not have released a GTX 2080 Ti as well. But they know they couldn't charge as much, and the vast majority of people would not buy the RTX version. Instead, it makes their 1080 Ti stock look much more appealing to value-oriented gamers, helping them shift that stock as well as charge a huge price for the new cards.

    It's really great business, but as a gamer and not a stockholder, I'm salty.
  • Spunjji - Friday, September 21, 2018

    Ford didn't invent the car, though. Ford invented a way to make them cheaper.

    Ford's strategy was not to make a new car that might do something different one day and then charge through the effing nose for it.
  • Gastec - Thursday, September 27, 2018

    That quote applies perfectly to our digital electronic world: we want to go faster from point A to point B. To do that, Henry Ford gave us a car (a faster "horse"). We want the same from GPUs and CPUs, to be faster. Prettier sure, pink even. But first just make it fast.
  • Writer's Block - Monday, October 1, 2018

    Except there is no evidence he said that - it is a great statement though, and conveys the intended message well.
  • Hxx - Wednesday, September 19, 2018

    Overall disappointing performance. The RTX 2080 is a flat out bad buy at $800+ when 1080 Ti custom boards are as low as $600, and the RTX 2080 Ti is a straight up ripoff when consumers can easily surpass its performance with 2 x 1080 Tis. I agree with the conclusion though that you are buying hardware that you won't take advantage of yet, but still, if Nvidia wants to push this hardware to all gamers, they need to drop the pricing in line with performance, otherwise not many will buy into the hype.
