Meet the GeForce RTX 2080 Super Founders Edition

Taking a closer look at the RTX 2080 Super, there aren’t too many surprises to be found. Since we’re dealing with a mid-generation kicker here, NVIDIA has opted to stick with their original RTX 2080 reference designs for the new card, rather than design wholly new boards. This has allowed them to get the new card out relatively quickly, and to be honest there’s not a whole lot NVIDIA could do here that wouldn’t be superficial. As a result, the RTX 2080 Super is more or less identical to the RTX 2080 it replaces.

GeForce RTX 20 Series Card Comparison

                RTX 2080 Super             RTX 2080 Super
                Founders Edition           (Reference Specs)
Base Clock      1650MHz                    1650MHz
Boost Clock     1815MHz                    1815MHz
Memory Clock    15.5Gbps GDDR6             15.5Gbps GDDR6
TDP             250W                       250W
Length          10.5 inches                N/A
Width           Dual Slot                  N/A
Cooler Type     Open Air (2x Axial Fans)   N/A
Price           $699                       $699

As I noted earlier, the Founders Edition cards themselves are now purely reference cards. NVIDIA isn't doing factory overclocks this time around (the high reference clock speeds make that process a bit harder), so the RTX 2080 Super Founders Edition is a very straightforward example of what reference-clocked RTX 2080 Super cards can deliver in terms of performance. It also means that the card no longer carries a price premium, with NVIDIA selling it at $699.

Externally, then, possibly the only material change is quite literally in the materials. NVIDIA has taken the RTX 2080 reference design and given the center segment of the shroud a reflective coating. This, along with the Super branding, makes for the only two visually distinctive changes from the RTX 2080 reference design. For better or worse, the reflective section is every bit the fingerprint magnet you'd probably expect; thankfully, most people don't handle their video cards as much as hardware reviewers do.

In terms of cooling, this means the RTX 2080 Super gets the RTX 2080's cooler as well. At a high level this is a dual axial open air cooler, with NVIDIA sticking to this design after first introducing it last year. The open air cooler helps NVIDIA keep their load noise levels down, though idle noise levels on all of the RTX 20 series reference cards have been mediocre, and the new Super cards are no different. Because this reference design isn't a blower, the RTX 2080 Super isn't fully self-exhausting, relying on the computer chassis itself to help move hot air away from the card. For most builders this isn't an issue, but if you're building a compact system or one with limited airflow, you'll want to make sure your system can handle the heat from a 250W video card.

Under the hood, the RTX 2080 Super inherits the RTX 2080's heatsink design, with a large aluminum heatsink running the full length of the card. Deeper still, the heatsink is connected to the TU104 GPU with a vapor chamber, to help move heat away from the GPU more efficiently. Overall, the amount of heat that needs to be moved has increased thanks to the higher TDP. However, as this is the same cooler design that NVIDIA uses on the 250W RTX 2080 Ti, it's more than up to the task of cooling a 250W RTX 2080 Super.

According to NVIDIA, the PCB is the same as on the regular RTX 2080. As I need this card for further testing, I haven't shucked it down to its PCB to take inventory of its components. But as the RTX 2080 was already a "fully populated" PCB as far as VRM circuitry goes, the same will definitely be true for the RTX 2080 Super as well. I have to assume NVIDIA is just driving their VRMs a bit harder, which shouldn't be an issue given what their cooler can do. It is noteworthy, though, that as a result the card's maximum power target is just +12%, or 280W. So while the card has a good bit of TDP headroom at stock, there isn't much more that can be added to it. Factoring in pass-through power for the VirtualLink port, NVIDIA is right at the limit of what they can do with the 8-pin + 6-pin + slot power delivery configuration.
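To put those numbers in one place, here's a quick back-of-the-envelope sketch of the power budget. The TDP, +12% power target, and 30W VirtualLink figure are from above; the per-connector limits are the standard PCIe figures (8-pin: 150W, 6-pin: 75W, slot: 75W), and the variable names are mine:

```python
# Back-of-the-envelope power budget for the RTX 2080 Super.
tdp_w = 250              # stock TDP, in watts
max_power_target = 1.12  # the card's +12% maximum power target

max_board_power = tdp_w * max_power_target  # 280W ceiling when fully dialed up
delivery_limit = 150 + 75 + 75              # 8-pin + 6-pin + PCIe slot = 300W input capacity
virtuallink_w = 30                          # pass-through power reserved for VirtualLink

# What's left over once the card is at its maximum power target
# and the VirtualLink port is fully loaded:
margin = delivery_limit - virtuallink_w - max_board_power
print(f"Max board power: {max_board_power:.0f}W, remaining margin: {margin:.0f}W")
```

The margin actually comes out slightly negative with VirtualLink fully loaded, which is consistent with NVIDIA being right at the limit of this power delivery configuration.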

Finally, for display I/O, the card gets the continuing NVIDIA high-end standard of 3x DisplayPort 1.4, 1x HDMI 2.0b, and 1x VirtualLink port (DP video + USB data + 30W USB power).

Comments

  • Zoolook - Thursday, July 25, 2019 - link

    Essentially the GPU division tided them over when the CPU division didn't deliver; the new CPU cores were prioritized over developing the new GPU architecture for several years, so it's no wonder they are behind on the GPU side. Hopefully they will catch up for real in the next couple of years with the increased revenue flow put to good use.
  • rocky12345 - Tuesday, July 23, 2019 - link

    LOL, too funny. I guess when a mid-range card like the 5700 XT was able to almost match the performance of AMD's top card, which cost a lot more, AMD had a real problem there for sure. The only course of action for AMD was to remove the Radeon VII from the picture, as it served no more purpose other than triggering reviewers after the 5700 cards came out.

    Now if I was Nvidia I would be somewhat worried as to what AMD has on the books and what the next move is going to be. If mid-range cards like the 5700s can topple AMD's top card and hang in there and beat Nvidia's upper mid-range cards, what will the Navi 20 chip be able to do? You have to know AMD is planning on doing to Nvidia what they have been doing to Intel, right?

    AMD themselves have said that they played Nvidia by showing higher launch prices as well as downplaying the performance a bit to see how Nvidia would respond. I guess they had a good laugh after seeing Nvidia start the Super marketing and a really rushed launch to beat AMD to the punch, just to find out AMD had played them in every way: they lowered the prices just before launch, and the performance was better than what they had shown in their slides by about 5%-7%.

    Have you even looked at the reviews? The 5700 creams the 2060 and the 5700 XT creams the 2070. The 5700 is slightly faster than the 2060S, and the 5700 XT almost catches the 2070S; with driver tweaks it probably will get faster than a 2070S, since Navi is a totally new card lineup. Like I said, if small Navi is this fast, you know Nvidia is worried about big Navi, which is coming very soon as well. Rest assured though, Nvidia will have something to compete with big Navi; they will never settle for being second best or not having the fastest card. Their CEO's ego could not live with that.
  • michael2k - Wednesday, July 24, 2019 - link

    You are way too emotionally invested if you think the CEO's ego has anything to do with product design.

    Paying for their second new HQ? That's NVIDIA's reason for never settling for second best. That's a lot of money to pay off, and you don't get there by releasing second-best cards for several generations.

    But don't worry, NVIDIA has three tricks they can play:
    1) A smaller process (either 10nm or 7nm) will give them the ability to drop voltage, power consumption, and die size, all of which reduce the cost of the part and improve profits at the low end of the scale.
    2) A smaller process also gives them the headroom to boost clocks and add more functional units without driving up power consumption, making their mid-range competitive.
    3) A smaller process also gives them more room to design bigger GPUs, which means they can keep releasing their Ti parts.

    All three mean, of course, that even if they did nothing but die-shrink, boost clocks, and increase the number of units, they would remain competitive for another two years, on top of any architectural changes.
  • Silma - Tuesday, July 23, 2019 - link

    Why hasn't NVIDIA begun to sell 7nm cards?
    Does AMD have a time-limited exclusivity contract with TSMC?
    Or will it wait until AMD launches cards faster than its own?
  • eastcoast_pete - Tuesday, July 23, 2019 - link

    Largely because they don't have to. My guess is that NVIDIA has quite a number of 12nm FF dies in stock (probably a lot, thanks to the crypto craze) and is now selling them off before starting the next generation.
  • michael2k - Tuesday, July 23, 2019 - link

    I'm sure part of it is their existing contract, as opposed to an exclusivity contract. 12nm is a much more mature process, and in that regard it makes sense to use a proven process when trying to make a large part. 7nm wasn't available when NVIDIA was designing their RTX parts, so there was no way to estimate yield or improvements over time in 2018, when they released the RTX parts.

    Now that the process is over a year old I'm sure they are working on a refreshed design to reduce power, increase clock, and add more or new functional units next year.
  • haukionkannel - Tuesday, July 23, 2019 - link

    Nvidia's boss also said in one interview that they can make 12nm chips much cheaper than 7nm chips... and that is a good reason not to go for the newest production technology.
  • rocky12345 - Tuesday, July 23, 2019 - link

    Translation: 12nm is good enough right now, and we do not want to invest in 7nm as it's not worth it because our power usage numbers are good enough.

    AMD on the other hand: OMG, our power numbers are through the roof, we need a new node ASAP. And that has worked out very well for them this time around. If Navi was on GlobalFoundries 14nm or 12nm, the power usage would be insanely high for sure. TSMC's process is just better for GPUs than GloFo's. On the other hand, TSMC's process is not so good for CPUs, as seen in the Ryzen 3000 series and its lackluster clock speeds. Good thing those CPUs have a lot in them to make up for the lack of clock speed; they still perform like they are running at a higher speed than they are.
  • Rudde - Thursday, July 25, 2019 - link

    TSMC 7nm HPC node is more optimized for power usage than GF 12nm. The same can be said about Intel 10nm compared to Intel 14nm. That said, TSMC 7nm is not far behind GF 12nm in performance.
  • DanNeely - Wednesday, July 24, 2019 - link

    IIRC this is a fairly traditional pattern, with AMD/ATI being more aggressive about moving to new processes early on while NVidia waits until they're more mature.
