Previewing GeForce RTX 2080 Ti

Turing and NVIDIA’s focus on hybrid rendering aside, let’s take a look at the individual GeForce RTX cards.

Before getting too far here, it’s important to point out that NVIDIA has offered little in the way of information on the cards’ performance besides their formal specifications. Essentially the entirety of the NVIDIA Gamescom presentation – and even most of the SIGGRAPH presentation – was focused on ray tracing/hybrid rendering and the Turing architecture’s unique hardware capabilities to support those features. As a result we don’t have a good frame of reference for how these specifications will translate into real-world performance. Which is also why we’re disappointed that NVIDIA has already started pre-orders, as it pushes consumers into blindly buying cards.

At any rate, with NVIDIA having changed the SM for Turing as much as they have versus Pascal, I don’t believe FLOPS alone is an accurate proxy for performance in current games. It’s almost certain that NVIDIA has been able to improve their SM efficiency, especially judging from what we’ve seen thus far with the Titan V. So in that respect this launch is similar to the Maxwell launch in that the raw specifications can be deceiving, and that it’s possible to lose FLOPS and still gain performance.

In any case, at the top of the GeForce RTX 20 series stack will be the GeForce RTX 2080 Ti. In a major departure from the GeForce 700/900/10 series, NVIDIA is not holding the Ti card back as a mid-generation kicker; instead they’re launching with it right away. This means that the high-end of the RTX family is a 3 card stack from the start, rather than the 2 card stack that has previously been the case.

NVIDIA has not commented on this change in particular, and this is one of those things that I expect we’ll know more about once we reach the actual hardware launch. But there’s good reason to suspect that, since NVIDIA is using the relatively mature TSMC 12nm “FFN” process – itself an optimized version of 16nm – yields are in a better place than they would usually be this early on. Normally NVIDIA would be using a more bleeding-edge process, in which case it would make sense to hold back the largest chip for another year or so to let yields improve.

NVIDIA GeForce x80 Ti Specification Comparison

| | RTX 2080 Ti Founders Edition | RTX 2080 Ti | GTX 1080 Ti | GTX 980 Ti |
|---|---|---|---|---|
| CUDA Cores | 4352 | 4352 | 3584 | 2816 |
| ROPs | 88? | 88? | 88 | 96 |
| Core Clock | 1350MHz | 1350MHz | 1481MHz | 1000MHz |
| Boost Clock | 1635MHz | 1545MHz | 1582MHz | 1075MHz |
| Memory Clock | 14Gbps GDDR6 | 14Gbps GDDR6 | 11Gbps GDDR5X | 7Gbps GDDR5 |
| Memory Bus Width | 352-bit | 352-bit | 352-bit | 384-bit |
| VRAM | 11GB | 11GB | 11GB | 6GB |
| Single Precision Perf. | 14.2 TFLOPS | 13.4 TFLOPS | 11.3 TFLOPS | 6.1 TFLOPS |
| "RTX-OPS" | 78T | 78T | N/A | N/A |
| TDP | 260W | 250W | 250W | 250W |
| GPU | Big Turing | Big Turing | GP102 | GM200 |
| Architecture | Turing | Turing | Pascal | Maxwell |
| Manufacturing Process | TSMC 12nm "FFN" | TSMC 12nm "FFN" | TSMC 16nm | TSMC 28nm |
| Launch Date | 09/20/2018 | 09/20/2018 | 03/10/2017 | 06/01/2015 |
| Launch Price | $1199 | $999 | MSRP: $699 / Founders: $699 | $649 |

The king of NVIDIA’s new product stack, the GeForce RTX 2080 Ti is without a doubt an interesting card. And if we’re being honest, it’s not a card I was expecting. Based on these specifications, it’s clearly built around a cut-down version of NVIDIA’s “Big Turing” GPU, which the company just unveiled last week at SIGGRAPH. And as the name suggests, Big Turing is big: 18.6B transistors, measuring 754mm² in die size. This is closer in size to GV100 (Volta/Titan V) than it is to any past x80 Ti card, so I am surprised that, even as a cut-down chip, NVIDIA can economically offer it for sale. Nonetheless here we are, with Big Turing coming to consumer cards.

Even though it’s a cut-down part, RTX 2080 Ti is still a beast, with 4352 Turing CUDA cores and what I estimate to be 544 tensor cores. Like its Quadro counterpart, this card is rated for 10 GigaRays/second, and for traditional compute we’re looking at 13.4 TFLOPS based on these specifications. Note that this is only 19% higher than GTX 1080 Ti, which is all the more reason why I want to learn more about Turing’s architectural changes before predicting what this means for performance in current-generation rasterization games.
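
For what it’s worth, the spec-sheet TFLOPS figures are simple arithmetic – CUDA cores, times two FLOPs per clock (one fused multiply-add), times the official boost clock – so the ~19% figure is easy to verify from the table above. A quick sketch:

```python
# Back-of-the-envelope FP32 throughput from the spec sheet:
# FLOPS = CUDA cores x 2 FLOPs per clock (one fused multiply-add) x boost clock
def fp32_tflops(cuda_cores, boost_clock_mhz):
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

rtx_2080_ti = fp32_tflops(4352, 1545)   # ~13.4 TFLOPS (reference-spec boost clock)
gtx_1080_ti = fp32_tflops(3584, 1582)   # ~11.3 TFLOPS

print(f"{rtx_2080_ti:.1f} vs {gtx_1080_ti:.1f} TFLOPS: +{rtx_2080_ti / gtx_1080_ti - 1:.0%}")  # +19%
```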

Clockspeeds have actually dropped from generation to generation here. Whereas the GTX 1080 Ti started at 1.48GHz and had an official boost clock rating of 1.58GHz (and in practice boosted higher still), RTX 2080 Ti starts at 1.35GHz and boosts to 1.55GHz, and we don’t yet know anything about the practical boost limits. So assuming NVIDIA is being just as conservative as they were last generation, this means the average clockspeeds have dropped slightly. Which in turn means that whatever performance gains we see from the RTX 2080 Ti are going to ride entirely on the increased CUDA core count and any architectural efficiency improvements.

Meanwhile the ROP count is unknown, but as it needs to match the memory bus width, we’re almost certainly looking at 88 ROPs. Even more so than with the core compute architecture, I’m curious as to whether there are any architectural improvements here; otherwise, with an identical ROP count and slightly lower clockspeeds, the maximum pixel throughput (on paper) is actually ever so slightly lower than it was on the GTX 1080 Ti.
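
For the curious, the 88 ROP estimate simply assumes that Turing, like GP102 before it, ties 8 ROPs to each 32-bit memory controller; that’s an assumption on our part rather than a confirmed spec, but written out it looks like this:

```python
# Sketch of the ROP estimate, assuming Turing keeps Pascal's (GP102) ratio of
# 8 ROPs per 32-bit memory controller -- an assumption, not a confirmed spec
bus_width_bits = 352
memory_controllers = bus_width_bits // 32   # 11 x 32-bit controllers
estimated_rops = memory_controllers * 8     # 88 ROPs
print(memory_controllers, estimated_rops)   # 11 88
```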

Speaking of the memory bus, this is another area that is seeing a significant improvement. NVIDIA has moved from GDDR5X to GDDR6, so memory clockspeeds have increased accordingly, from 11Gbps to 14Gbps, a 27% increase. And since the memory bus width itself remains identical at 352 bits wide, the final memory bandwidth increase is also 27%. Memory bandwidth has long been the Achilles’ heel of GPUs, so even if NVIDIA’s theoretical ROP throughput has not changed this generation, the fact of the matter is that having more memory bandwidth is going to remove bottlenecks and improve performance throughout the rendering pipeline, from the texture units and CUDA cores straight out to the ROPs. Of course, the tensor cores and RT cores are going to be prolific bandwidth consumers as well, so in workloads where they’re in play, NVIDIA is once again going to have to do more with (relatively) less.
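
To put numbers on that, peak memory bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate; a quick sketch using the spec-table figures:

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate)
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps   # GB/sec

gddr6  = bandwidth_gb_s(352, 14)   # 616 GB/sec (RTX 2080 Ti)
gddr5x = bandwidth_gb_s(352, 11)   # 484 GB/sec (GTX 1080 Ti)
print(f"{gddr6:.0f} vs {gddr5x:.0f} GB/sec: +{gddr6 / gddr5x - 1:.0%}")   # +27%
```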

Past this, things start diverging a bit. NVIDIA is once again offering their reference-grade Founders Edition cards, and unlike with the GeForce 10 series, the 20 series FE cards have slightly different specifications than their base specification compatriots. Specifically, NVIDIA has cranked up the boost clock a bit, giving the 2080 Ti FE an on-paper 6% performance advantage at the cost of a 10W higher TDP. For the standard cards then, the TDP is the x80 Ti-traditional 250W, while the FE card moves to 260W.
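
Since the core count is identical between the two, that on-paper figure is simply the ratio of the two boost clocks from the spec table:

```python
# The FE card's on-paper advantage comes entirely from its higher boost clock
fe_boost, ref_boost = 1635, 1545            # MHz, from the spec table above
print(f"{fe_boost / ref_boost - 1:.1%}")    # ~5.8%, i.e. the ~6% on-paper advantage
```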

Meanwhile, starting with the GeForce 20 series cards, NVIDIA is rolling out a new design for their reference/Founders Edition cards, the first such redesign since the original GeForce GTX Titan back in 2013. Up until now NVIDIA has focused on a conservative but highly effective blower design, pairing the best blower in the industry with a grey & black metal shroud. The end result is that these reference/FE cards could be dropped into virtually any system and work, thanks to the self-exhausting nature of blowers.

However for the GeForce 20 series, NVIDIA has blown off the blower, instead opting to design their cards around the industry’s other favorite cooler design: the dual-fan open air cooler. Combined with NVIDIA’s metallic aesthetics, which they have retained, the resulting product looks pretty much exactly like you’d expect a high-end open air cooled NVIDIA card to look: two fans buried inside a meticulous metal shroud. And while we’ll see where performance stands once we review the card, it’s clear that NVIDIA is at the very least aiming to lead the pack in industrial design once again.

The switch to an open air cooler has three particular ramifications versus NVIDIA’s traditional blower, which regular AnandTech readers will know we’ve discussed before.

  1. Cooling capacity goes up
  2. Noise levels go down
  3. A card can no longer guarantee that it can cool itself

In an open air design, hot air is circulated back into the chassis via the fans, as the shroud is not fully closed and the design doesn’t force hot air out of the back of the case. Essentially in an open air design a card will push the hottest air away from itself, but it’s up to the chassis to actually get rid of that hot air. Which a well-designed case will do, but not without first circulating it through the CPU cooler, which is typically located above the GPU.

GPU cooler design is such that there is no one right answer. Because open air designs can rely on large axial fans with little air resistance, they can be very quiet, but overall cooling becomes the chassis’ job. Blowers, by contrast, are fully exhausting and work in practically any chassis – no matter how bad the chassis cooling is – but they are noisier thanks to their high-RPM radial fans. NVIDIA for their part has long favored blowers, but this appears to be at an end. It does make me wonder what this means for their OEM customers (whose designs often count on the video card being a blower), but that’s a deeper discussion for another time.

At any rate, from NVIDIA’s press release we know that each fan features 13 blades, and that the shroud itself is once again made out of die-cast aluminum. Also buried in the press release is word that NVIDIA is once again using a vapor chamber to transfer heat between the GPU and the heatsink, and that it’s being called a “full length” vapor chamber, which would mean it’s notably larger than the vapor chambers in NVIDIA’s past cards. Unfortunately this is the limit of what we know right now about the cooler, and I expect there’s more to find out in the coming days and weeks. In the meantime NVIDIA has disclosed that the resulting card is the standard size for a high-end NVIDIA reference card: dual slot width, 10.5 inches long.

Diving down, we also have a few tidbits about the reference PCB, including the power delivery system. NVIDIA’s press release specifically calls out a 13 phase power delivery system, which matches the low-resolution PCB render they’ve posted to their site. NVIDIA has always been somewhat frugal on VRMs – their cards have more than enough capacity for stock operation, but not much excess capacity for power-intensive overclocking – so it sounds like they are trying to meet overclockers half-way here. Though once we get to fully custom partner cards, I still expect the MSIs and ASUSes of the world to go nuts and try to outdo NVIDIA.

NVIDIA’s photos also make it clear that in order to meet that 250W+ TDP, we’re looking at an 8pin + 8pin configuration for PCIe power connectors. On paper such a setup is good for 375W, and while I don’t expect NVIDIA to go quite that far, a 250W card would typically get by with a 300W 6pin + 8pin setup instead. So NVIDIA is clearly planning on drawing more power, and they’re using the connectors to match. Thankfully 8pin power connectors are fairly common on 500W+ PSUs these days, though it’s possible that owners of older PSUs may get pinched by the need for dual 8pin cables.
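
As a quick refresher on where those figures come from, the PCIe slot itself is good for 75W, a 6pin connector for another 75W, and an 8pin connector for 150W:

```python
# On-paper power budgets for the connector configurations in question
# (75W from the PCIe slot, 75W per 6pin connector, 150W per 8pin connector)
slot, six_pin, eight_pin = 75, 75, 150
print(slot + 2 * eight_pin)          # 375W: the 8pin + 8pin setup on RTX 2080 Ti
print(slot + six_pin + eight_pin)    # 300W: the more typical 6pin + 8pin setup
```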

Finally, for display outputs, NVIDIA has confirmed that their latest generation flagship once again supports up to 4 displays. However there are actually 5 display outputs on the card: the traditional 3 DisplayPorts and a sole HDMI port, plus a single USB Type-C port offering VirtualLink support for VR headsets. As a result, users can pick any 4 of the 5 ports, with the Type-C port serving as a DisplayPort when not hooked up to a VR headset. Though this does mean that the final DisplayPort has been somewhat oddly shoved into the second row, in order to make room for the USB Type-C port.

Wrapping up the GeForce RTX 2080 Ti, NVIDIA’s new flagship has been priced to match. In fact it is seeing the greatest price hike of them all. Stock cards will start at $999, $300 above the GTX 1080 Ti. Meanwhile NVIDIA’s own Founders Edition card carries a $200 premium on top of that, retailing for $1199, the same price as the last-generation Titan Xp. The Ti/Titan dichotomy has always been a bit odd in recent years, so it would seem that NVIDIA has simply replaced the Titan with the Ti, and priced it to match.
