Compute & Synthetics

Moving on to the low-level guts of the cards, we take a look at compute and synthetic results, starting with tensor core accelerated GEMM.

Compute: General Matrix Multiply Single Precision (SGEMM)

While we are still using binaries compiled for Volta, Turing is backwards compatible in that respect, as it belongs to the same compute capability family (sm_75, versus Volta's sm_70). In terms of compute resources, the RTX 2080 Ti's 544 tensor cores and 1545MHz boost clock are not far off the Titan V's 640 tensor cores and 1455MHz boost clock, so the latest Turing-optimized binaries should better reflect the RTX 2080 Ti's raw GEMM acceleration capabilities. The same goes for the 368-tensor-core RTX 2080, whose tensor-accelerated HGEMM performance in TFLOPS comes in somewhere around 20% less than the RTX 2080 Ti's.
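
As a rough sanity check on those figures, peak tensor throughput scales with tensor core count times clock speed: each Volta/Turing tensor core performs 64 FP16 FMAs (128 FLOPs) per clock, so the RTX 2080 Ti works out to about 544 x 128 x 1.545GHz, or roughly 108 TFLOPS, against the Titan V's 640 x 128 x 1.455GHz, or roughly 119 TFLOPS. For readers who want to measure delivered HGEMM throughput themselves, the sketch below uses cuBLAS's cublasGemmEx with FP16 inputs and FP32 accumulation, the standard way to engage the tensor cores; the matrix size, iteration count, and build line are illustrative assumptions on our part, not the harness behind the charts here.

    // Minimal tensor-core HGEMM throughput sketch (illustrative; not the
    // benchmark binary used for the charts). Build: nvcc -arch=sm_75 -lcublas
    #include <cstdio>
    #include <cublas_v2.h>
    #include <cuda_fp16.h>
    #include <cuda_runtime.h>

    int main() {
        const int n = 8192;                     // assumed square problem size
        const size_t bytes = (size_t)n * n * sizeof(__half);
        __half *A, *B, *C;
        cudaMalloc(&A, bytes);
        cudaMalloc(&B, bytes);
        cudaMalloc(&C, bytes);
        cudaMemset(A, 0, bytes);                // contents don't affect timing
        cudaMemset(B, 0, bytes);

        cublasHandle_t handle;
        cublasCreate(&handle);
        // Allow cuBLAS to dispatch to tensor-core kernels.
        cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

        const float alpha = 1.0f, beta = 0.0f;  // FP32 compute type
        // Warm-up call so kernel selection isn't timed.
        cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                     &alpha, A, CUDA_R_16F, n, B, CUDA_R_16F, n,
                     &beta,  C, CUDA_R_16F, n,
                     CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        const int iters = 10;
        cudaEventRecord(start);
        for (int i = 0; i < iters; ++i)
            cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                         &alpha, A, CUDA_R_16F, n, B, CUDA_R_16F, n,
                         &beta,  C, CUDA_R_16F, n,
                         CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // One GEMM costs 2*n^3 FLOPs.
        double tflops = 2.0 * n * n * n * iters / (ms * 1e-3) / 1e12;
        printf("HGEMM %dx%d: %.1f TFLOPS\n", n, n, tflops);

        cublasDestroy(handle);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Expect delivered numbers well below those theoretical peaks; real GEMMs are limited by memory bandwidth and scheduling overhead as well as raw tensor math.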

[Benchmark charts: CompuBench 2.0 - Level Set Segmentation 256; CompuBench 2.0 - N-Body Simulation 1024K; CompuBench 2.0 - Optical Flow; Folding @ Home Single Precision; Geekbench 4 - GPU Compute - Total Score; Synthetic: TessMark, Image Set 4, 64x Tessellation]


Comments


  • Holliday75 - Friday, September 21, 2018 - link

    Good thing there are cops around to keep me honest. If they weren't, I'd go on a murder spree and blame them for it.
  • Yojimbo - Wednesday, September 19, 2018 - link

    It's NVIDIA making a conscious decision to spend its engineering resources on innovating and implementing new technologies that will shift the future of gaming, instead of spending that energy and die space on increasing performance as much as it can in today's titles. If NVIDIA had left out the RT cores and other new technologies, they could have easily increased performance by 50 or 60% in legacy technologies by building chips bigger than Pascal but smaller than Turing, while increasing prices only moderately. Then everyone would be happy getting a card that would be leading them into a gaming torpor. In a few years, when everyone is capable of running at 4K and over 60 fps, they'd get bored and wonder why the industry was going nowhere.
  • NikosD - Wednesday, September 19, 2018 - link

    nVidia has done the same thing in the past, introducing new technologies and platforms like tessellation, PhysX, HairWorks, GameWorks, GPP, etc.
    All of these proved to be just tricks to kill the competition, as always, which nowadays means killing AMD.
    Pseudo-raytracing is not an innovation or something mandatory for gaming.
    It's just another premature technology that the opponent doesn't have, so that nVidia can be unique again, at huge cost to the consumer and with performance regression.

    I repeat.

    Skip that Turing fraud.
  • maximumGPU - Thursday, September 20, 2018 - link

    I don't think it's fair to compare ray tracing to HairWorks...
    Ray tracing is a superior way to render graphics compared to rasterisation; there's no question about this.
  • Lolimaster - Saturday, September 22, 2018 - link

    But with what? Nvidia RTX only does it on a small part of a FRAME, in selected scenes, on tensor cores repurposed for that.

    You will need tensor cores in the 100s to make Nvidia's implementation more "wowish", and in the 1000s to actually talk about ray tracing being a thing.

    Consoles dictate gaming progress, and AMD holds that market.
  • Lolimaster - Saturday, September 22, 2018 - link

    Exactly. To start talking about actual ray tracing, or at least ray tracing most parts of a scene, we need 10-100x the current GPU performance.
  • Yojimbo - Saturday, September 22, 2018 - link

    GPP was a partner promotion program. HairWorks is part of GameWorks. PhysX is part of GameWorks. GameWorks is not a trick, and neither is the PhysX part of it. But neither of them compares to ray tracing. Maybe you should look up what the word "pseudo" means, because you're using it wrong.

    In 1 year or a year and a half AMD will have their own ray tracing acceleration hardware and then you'll be all in on it.

    As for killing AMD, NVIDIA are not interested in it. It wouldn't be good for them, anyway. NVIDIA are, however, interested in building their platform and market dominance.
  • Eris_Floralia - Thursday, September 20, 2018 - link

    I've read all your comments and still struggle to find any consistent logic.
  • eva02langley - Thursday, September 20, 2018 - link

    Nvidia is shoving ray tracing development down gamers' throats. We are paying for something that we didn't even want in the first place.

    You didn't even know about ray tracing and DLSS before they were announced. You are just drinking the Kool-Aid, unlike many of us who stand up and rage against these INDECENT prices.
