Compute & Synthetics

Moving on to the low-level compute guts of the cards, we take a look at compute and synthetic results, starting with tensor core accelerated GEMM.

Compute: General Matrix Multiply Single Precision (SGEMM)

While we are using binaries compiled for Volta, Turing is backwards compatible in that respect, as it belongs to the same compute capability family (sm_75, versus Volta's sm_70). In terms of compute resources, the RTX 2080 Ti's 544 tensor cores and 1545MHz boost clock are not far off the Titan V's 640 tensor cores and 1455MHz boost clock, so the latest Turing-optimized binaries should better reflect the RTX 2080 Ti's raw GEMM acceleration capabilities. Likewise for the 368 tensor core RTX 2080, whose tensor-accelerated HGEMM performance in TFLOPS is somewhere around 20% lower than the RTX 2080 Ti's.
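For context on the raw numbers: each Volta/Turing tensor core can execute 64 FP16 FMAs per clock, so the RTX 2080 Ti's theoretical tensor FP16 throughput works out to roughly 544 cores × 64 FMAs × 2 FLOPs × 1.545 GHz ≈ 108 TFLOPS, versus about 119 TFLOPS for the Titan V. To illustrate the kind of workload these GEMM tests exercise, below is a minimal sketch of a tensor-core-accelerated HGEMM call through cuBLAS using the CUDA 10-era API. This is our own illustrative sketch, not the actual test binary: the matrix size, scaling factors, and the omitted initialization and timing code are arbitrary choices.

// Minimal sketch of tensor-core-accelerated HGEMM via cuBLAS (CUDA 10-era API).
// Matrices are left uninitialized, since only the dispatch path is illustrated.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main(void) {
    const int N = 4096;  // dimensions that are multiples of 8 keep tensor cores engaged
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);

    __half *A, *B, *C;
    cudaMalloc((void**)&A, (size_t)N * N * sizeof(__half));
    cudaMalloc((void**)&B, (size_t)N * N * sizeof(__half));
    cudaMalloc((void**)&C, (size_t)N * N * sizeof(__half));

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);  // allow tensor core kernels

    // C = alpha*A*B + beta*C, FP16 in/out with FP16 accumulation (column-major).
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                 &alpha, A, CUDA_R_16F, N,
                         B, CUDA_R_16F, N,
                 &beta,  C, CUDA_R_16F, N,
                 CUDA_R_16F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);

    cudaDeviceSynchronize();
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Built with something like nvcc -arch=sm_70 hgemm.cu -lcublas, the resulting binary also runs on Turing (sm_75), which is the binary compatibility within the 7.x compute capability family mentioned above.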

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow

Compute: Folding @ Home Single Precision

Compute: Geekbench 4 - GPU Compute - Total Score

Synthetic: TessMark, Image Set 4, 64x Tessellation


337 Comments

  • Fritzkier - Wednesday, September 19, 2018 - link

    Blame both. Why the f would you blame AMD for NVIDIA's own fault?
    And yes, AMD had competitive offerings in the mid-range, but not the high end. But that's before 7mm. Let's see what we get with 7mm. 7mm will be released next year anyway; it's not that far off.
  • PopinFRESH007 - Wednesday, September 19, 2018 - link

    Yep, let's wait for those 7mm processes. Those chips should only be the size of my computer, with a couple hundred thousand transistors.
  • Holliday75 - Friday, September 21, 2018 - link

    Haha, I was about to question your statement until I paid more attention to the process size he mentioned.
  • Fritzkier - Saturday, September 22, 2018 - link

    We seriously need an edit button. Thanks, autocorrect.
  • Yojimbo - Wednesday, September 19, 2018 - link

    So you are saying that if AMD were competitive, then NVIDIA could never have implemented such major innovations in games technology... So, competition is bad?
  • dagnamit - Thursday, September 20, 2018 - link

    Competition can stifle innovation when the market is locked in a race to see who can leverage current technology most efficiently. The consumer GPU market has been about the core-count/core-efficiency race for a very long time.

    Because Nvidia has a commanding lead in that department, they are able to add in other technology without falling behind AMD. In fact, they’ve been given the opportunity to start an entirely new market with ray-tracing tech.

    There are a great many more companies developing ray-tracing hardware than rasterization-focused hardware at the moment. With Nvidia throwing their hat in now, it could mean other companies start to bring hardware solutions to the fore that don’t have a Radeon badge. It won’t be Red v. Green anymore, and that’s very exciting.
  • Spunjji - Friday, September 21, 2018 - link

    Your Brave New World would involve someone else magically catching up with AMD and Nvidia's lead in conventional rasterization tech. Spoiler alert: nobody has in the past two decades, and the best potential competition, Intel, isn't entering the fray until ~2020.
  • dagnamit - Sunday, September 23, 2018 - link

    No. I’m saying that companies that specialize in ray-tracing technology may have an opportunity to get into the consumer discrete GPU market. They don’t need to catch up with anything.
  • eva02langley - Thursday, September 20, 2018 - link

    It's not AMD's fault if Nvidia is asking $1,200 US. Stop blaming AMD because you want to purchase Nvidia cards at a better price. BLAME Nvidia!

    It is not AMD who forces ray tracing on us. It is not AMD who provides GameWorks tools to sabotage the competition and gamers at the same time. It is not AMD charging us the G-Sync tax. It is not AMD that screws gamers to fatten investors' wallets.

    It is all Nvidia's fault! Stop defending them! There is no excuse.
  • BurntMyBacon - Thursday, September 20, 2018 - link

    I accept that nVidia's choices are their own and not the "fault" of any third party. On the other hand, nVidia is a business, and their primary objective is to make money. Manufacturing GPUs with features and performance that customers find valuable is a tool to meet that objective. So while their decisions are their own responsibility, they are not unexpected. Competition from a third party with the same money-making objective limits their ability to make money, as they now have to provide at least the perception of more value to the customer. Previous-generation hardware also limits their ability to make money, as the relative increase in features and performance (and consequently value) is less than it would be if the previous generation didn't exist. If the value isn't perceived to be high enough, customers won't upgrade from existing offerings. However, if nVidia simply stops offering previous-generation hardware, new builds may still be a significant source of sales to those without an existing viable product.

    Long story short: since there is no viable competition from AMD or another third party to limit nVidia's prices, it falls to us as consumers to keep prices in check by waiting or by buying previous-gen hardware. If, however, consumers in general decide these cards are worth the cost, then those who are discontent simply need to accept that they fit into a lower price category of the market than they previously did. It is unlikely that nVidia will bring prices back down without reason.

    Note: I tend to believe that nVidia got a good idea of how much more the market was willing to pay for their products during the mining push. Though I don't like it (and won't pay it), I can't really blame them for wanting the extra profits in their own coffers rather than letting them go to retailers.
