Compute & Synthetics

Shifting gears, we'll look at the compute and synthetic aspects of the RTX 2070. Though it has its own GPU in TU106, its hardware resources scale down in the same progression we've seen from TU102 to TU104.

Starting off with GEMM tests, the RTX 2070's tensor cores are pulled into action with half-precision matrix multiplication, though using binaries originally compiled for Volta. Because Turing is backwards compatible and in the same compute capability family as Volta (sm_75 compared to Volta's sm_70), the benchmark continues to work out-of-the-box, though without any Turing optimizations.

Compute: General Matrix Multiply Half Precision (HGEMM)
Compute: General Matrix Multiply Single Precision (SGEMM)

At reference specifications, peak theoretical tensor throughput is around 107.6 TFLOPS for the RTX 2080 Ti, 80.5 TFLOPS for the RTX 2080, and 59.7 TFLOPS for the RTX 2070. Where the Titan V achieved 97.5 TFLOPS, around 89% of its theoretical peak, the RTX cards land at roughly half that efficiency: around 47%, 48%, and 45% for the RTX 2080 Ti, 2080, and 2070 respectively. A Turing-optimized binary should bring that up, though it is possible that the GeForce RTX cards are simply not designed for efficient FP16 tensor operations, favoring INT dot-product acceleration instead. After all, the GeForce RTX cards are for consumers and ostensibly intended for inferencing rather than training, which is the rationale behind the new INT support in Turing's tensor cores.
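Those theoretical figures fall directly out of the tensor-core arithmetic. As a back-of-the-envelope sketch, assuming the reference configurations (68, 46, and 36 SMs at 1545, 1710, and 1620 MHz reference boost for the 2080 Ti, 2080, and 2070, with 8 tensor cores per SM, each performing a 4x4x4 FP16 matrix FMA, i.e. 64 FMAs or 128 FLOPS per clock):

```python
def peak_tensor_tflops(sms, boost_mhz, tc_per_sm=8, fma_per_tc=64):
    """Peak FP16 tensor throughput: SMs x tensor cores x FMAs x 2 ops x clock."""
    return sms * tc_per_sm * fma_per_tc * 2 * boost_mhz * 1e6 / 1e12

# Reference SM counts and boost clocks (MHz)
cards = {
    "RTX 2080 Ti": (68, 1545),
    "RTX 2080":    (46, 1710),
    "RTX 2070":    (36, 1620),
}

for name, (sms, mhz) in cards.items():
    # Prints 107.6, 80.5, and 59.7 TFLOPS respectively
    print(f"{name}: {peak_tensor_tflops(sms, mhz):.1f} TFLOPS peak")
```

Multiplying those peaks by the measured efficiency percentages above gives the achieved HGEMM rates.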

In terms of SGEMM efficiency though, the RTX 2070 is hitting a ridiculous 97% of its touted 7.5 TFLOPS, though to be fair the reference specifications here are set manually rather than with a reference vBIOS. The other two GeForce RTX cards are at similar 90+% levels of efficiency, though a GEMM test like this is specifically designed for maximum utilization.
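The FP32 figure works the same way, with each CUDA core executing one FMA (two FLOPS) per clock. A quick sketch, assuming the RTX 2070's 2304 CUDA cores at the 1620 MHz reference boost:

```python
def peak_fp32_tflops(cuda_cores, boost_mhz):
    # One FMA (= 2 FLOPS) per CUDA core per clock
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

peak = peak_fp32_tflops(2304, 1620)   # ~7.5 TFLOPS
measured = 0.97 * peak                # the ~97% SGEMM efficiency noted above
print(f"peak {peak:.2f} TFLOPS, measured ~{measured:.2f} TFLOPS")
```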

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K
Compute: CompuBench 2.0 - Optical Flow


Compute: Folding @ Home Single Precision

Compute: Geekbench 4 - GPU Compute - Total Score

The breakdown of the GB4 subscores seems to reveal a similar uplift to the one we spotted with the Titan V, which had scored in excess of 509,000 points. We'll have to investigate further, but Turing and Volta are clearly accelerating some of these workloads beyond what was possible on Pascal and Maxwell.

Synthetic: TessMark, Image Set 4, 64x Tessellation

Given that TU106 has 75% of the hardware resources of TU104, the tessellation performance is in line with expectations. For reference, we noted earlier that the Titan V scored 703 while the Titan Xp scored 604.

Synthetic: Beyond3D Suite - Pixel Fillrate

Synthetic: Beyond3D Suite - Integer Texture Fillrate (INT8)

121 Comments

  • thestryker - Tuesday, October 16, 2018 - link

    I feel much the same as you, and honestly I'd bet most people who buy the upper-mid range feel the same way. I also have a GTX 970 and as I told a couple of my friends while laughing at the new RTX pricing "this makes it so much easier to wait for 2020 to see if Intel can compete". I stick by that statement and barring a pricing revolution or my 970 dying here's to 2020.
  • Lazlo Panaflex - Friday, October 19, 2018 - link

    @thestryker, same here. I got a 970 a couple years ago, and won't be upgrading any time soon. I'm sure it'll run Doom Eternal just fine...thanks Vulkan ;-)
  • Targon - Tuesday, October 16, 2018 - link

    New consoles have been hitting $600 at release, and then come down after a year or two. So, $600 for a new card is still in that range of being the price of an entire console. When I see $700+, that is when I really question how much faster the card is to justify the higher price.
  • cfenton - Tuesday, October 16, 2018 - link

    The most expensive console launch recently was the Xbox One X at $500. The PS4 and PS4 Pro were $400 at launch.
  • eva02langley - Tuesday, October 16, 2018 - link

    The thing is that MS, Sony or Nintendo can sell their consoles at a loss because they are going to get it back on software... a GPU doesn't work this way.

    @cfenton, $599? https://www.youtube.com/watch?v=BOHqG1nc_tw
  • wr3zzz - Wednesday, October 17, 2018 - link

    Count me the same as well. With AAA developers no longer pushing technology beyond the console envelope, instead of a new GPU every other gen I am likely going with just one GPU (980) for this entire console cycle.
  • colonelclaw - Thursday, October 18, 2018 - link

    Completely agree. For the cost of the most expensive games console you should at least get the most powerful gfx card. Have Nvidia forgotten that you basically need to spend the same amount again to get a working computer? $500 for a 'mid-range' card is utter lunacy.
  • adlep - Tuesday, October 16, 2018 - link

    Used, 2nd-hand market prices for both the 1070 Ti and 1080 are going to be a major headache for Nvidia. I bought my MSI GTX 1080 Gaming X for the "buy it now price" of $320.00, and 1070 Ti cards go for less than $300.00 on the 2nd-hand market such as eBay, Facebook Marketplace, and the FS/FT sections of the AT Forums.
  • The_Assimilator - Tuesday, October 16, 2018 - link

    1080 Ti as well - the fastest cards from the previous gen usually get the largest % discount.
  • brunis.dk - Tuesday, October 16, 2018 - link

    i get dizzy from turning my head to read the labels. i loved that you made the AMD bar in the compute benches red, helps me identify red team. maybe make a repeating bg with barely discernible logos, just so i don't get dizzy. help an old man out :) If you need help with the web dev, let me know.
