Professional Visualization and Rendering

With AMD strongly pushing the Radeon VII as a prosumer content creation card, it behooves us to look at rendering, CAD, and professional visualization performance. However, accurate and applicable benchmarks for this field are not so easy to find, especially since performance is highly dependent on workflow and on proprietary, licensed ISV software. Given AnandTech’s audience, which often includes engineers using these applications in critical production environments, our goal is to provide the most relevant metrics. However, as Ian has discussed previously, the route to the most accurate workstation benchmarking for professional applications runs through the ISVs themselves, who are at best blasé and more typically unreceptive about providing access, even at the prospect of lending limited software licenses in return for ongoing discussion and third-party benchmark data of their software.

Those caveats in mind, the next best thing for evaluating overall GPU workstation performance is the venerable SPECviewperf, recently updated to version 13. Separated into ‘viewsets’, groups of application-specific workloads derived from real-world datasets, SPECviewperf has been a longstanding suite for generalized workstation/CAD GPU performance. For SPECviewperf 13, the viewsets are based on the following applications (see the scoring sketch after this list):

  • Autodesk 3ds Max 2016 (Nitrous DX11 driver)
  • Dassault Systèmes CATIA V6 R2012
  • PTC Creo 3 & Creo 4
  • Geosurvey software, with workloads based on rendering techniques utilized by the open-source OpendTect seismic visualization application
  • Autodesk Maya 2017
  • Radiological (i.e. CT, MRI scans) rendering, with workloads using the Tuvok rendering core of the ImageVis3D volume visualization application
  • Autodesk Showcase 2013
  • Siemens NX 8.0
  • Dassault Systèmes Solidworks 2013 SP1
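
As a reference for how the numbers below are produced: SPEC documents each viewset score as a weighted geometric mean of the per-subtest frame rates. A minimal Python sketch of that calculation, using made-up frame rates and equal weights rather than any measured data:

    import math

    def viewset_composite(fps, weights):
        # Weighted geometric mean of per-subtest frame rates, the
        # scheme SPEC documents for SPECviewperf viewset composites.
        if len(fps) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
            raise ValueError("weights must match subtests and sum to 1")
        return math.exp(sum(w * math.log(f) for f, w in zip(fps, weights)))

    # Hypothetical subtest frame rates for one viewset (not measured data)
    print(f"Composite: {viewset_composite([48.2, 61.7, 39.5, 55.0], [0.25] * 4):.2f}")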

While we didn’t have time for complete benchmarking of video editing/production software such as Adobe Premiere Pro CC, we will be looking to include that in the future.

[Charts: SPECviewperf 13 results for 3dsmax-06, catia-05, creo-02, energy-02, maya-05, medical-02, showcase-02, snx-03 (Siemens NX), and sw-04 (Solidworks)]

Looking over the results, it's clear that certain viewsets tend to perform better on one vendor's hardware than the other's. The Radeon VII doesn't buck those trends, though in Siemens NX its lower performance is more likely than not related to driver maturity. In the reverse scenarios, such as creo-02 or maya-05, the Radeon VII lands in a similar spot: naturally ahead of the RX Vega 64, but behind the competing RTX and GTX cards. If anything, the results highlight the importance of software maturity for newer hardware, and there are definite signs of Vega 20 being a powerful workstation card. The caveat is that it doesn't seem to change the overall landscape for viewsets that traditionally perform well on NVIDIA hardware.

Our next set of benchmarks looks at rendering performance. To be clear, given the nature of the ‘render wars’ as well as the widespread adoption of CUDA, the featured render engines are not necessarily representative of the overall GPU renderer landscape. Because we are looking at the Radeon VII, it’s not applicable to include some of the more popular renderers, such as Redshift and OctaneRender, which are CUDA-only; conversely, Indigo Renderer, while less popular, supports OpenCL and so provides another useful datapoint.

[Charts: LuxMark 3.1 (LuxBall and Hotel), Blender Benchmark 1.0b2 (Cycles), V-Ray Benchmark 1.0.8, and IndigoBench 4.0.64 (Indigo Renderer 4)]

Of note, official Blender releases have yet to incorporate CUDA 10, and so RTX 20 series cards are not officially supported.
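
For readers scripting their own Cycles runs, the compute backend is selectable from Blender's bundled Python. A hedged sketch against the Blender 2.80-series API (the same settings live under user_preferences in 2.79, so the property paths are an assumption for older builds):

    import bpy  # only available inside Blender's bundled Python

    # Point Cycles at the OpenCL backend and enable every reported GPU;
    # swap 'OPENCL' for 'CUDA' on NVIDIA cards with a supported toolkit.
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'OPENCL'
    prefs.get_devices()  # refresh the device list
    for device in prefs.devices:
        device.use = True
    bpy.context.scene.cycles.device = 'GPU'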

V-Ray is the only test here that utilizes CUDA on NVIDIA cards, while the rest all use OpenCL. The results are broadly similar to SPECviewperf's, with the Radeon VII continuing to excel at workloads where AMD hardware generally fares well.
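
The practical difference: a CUDA renderer links against NVIDIA's toolkit and only ever sees NVIDIA GPUs, while an OpenCL renderer discovers whatever conformant devices the ICD loader exposes at runtime. A minimal sketch of that discovery step, assuming the pyopencl package and at least one OpenCL driver are installed:

    import pyopencl as cl

    # Enumerate every OpenCL platform and GPU the ICD loader exposes;
    # cross-vendor renderers such as LuxMark perform this discovery
    # step before compiling their kernels for the selected device.
    for platform in cl.get_platforms():
        for device in platform.get_devices(device_type=cl.device_type.GPU):
            print(f"{platform.name}: {device.name} | "
                  f"{device.global_mem_size // 2**30} GiB | "
                  f"{device.max_compute_units} compute units")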

Comments

  • peevee - Tuesday, February 12, 2019 - link

    "that the card operates at a less-than-native FP64 rate"

    The chip is capable of 2 times higher FP64 performance. Marketoids must die.
  • FreckledTrout - Thursday, February 7, 2019 - link

    Performance-wise it did better than I expected. This card is pretty loud and runs a bit hot for my tastes. Nice review. Where are the 8K and 16K tests :)-
  • IGTrading - Thursday, February 7, 2019 - link

    When drivers mature, AMD Radeon VII will beat the GF 2080.

    Just like the Radeon Fury X beats the GF 980 and the Radeon Vega 64 beats the GF 1080.

    When drivers mature and nVIDIA's blatant sabotage against its older cards (and AMD's cards) gets mitigated, the long-time owner of the card will enjoy better performance.

    Unfortunately, on the power side, nVIDIA still has the edge, but I'm confident that those 16 GB of VRAM will really show their worth in the following year.
  • cfenton - Thursday, February 7, 2019 - link

    I'd rather have a card that performs better today than one that might perform better in two or three years. By that point, I'll already be looking at new cards.

    This card is very impressive for anyone who needs FP64 compute and lots of VRAM, but it's a tough sell if you primarily want it for games.
  • Benjiwenji - Thursday, February 7, 2019 - link

    AMD cards have traditionally aged much better than Nvidia's. GamersNexus just re-benchmarked the 290X from 2013 in modern games and found it comparable to the 980, 1060, and 580.

    The GTX 980 came in late 2014 with a $550 USD tag, and it now struggles at 1440p.

    Not to mention that you can get a lot out of AMD cards if you're willing to tinker. My Vega 56, which I got from Microcenter in Nov 2017 for $330 (a total steal), now performs at GTX 1080 level after a BIOS flash + OC.
  • eddman - Friday, February 8, 2019 - link

    What are you talking about? GTX 980 still performs as it should at 1440.

    https://www.anandtech.com/bench/product/2142?vs=22...
  • Icehawk - Friday, February 8, 2019 - link

    My 970 does just fine too; I can play 1440p maxed or near-maxed in everything, and 4K in older/simpler games too (e.g. Overwatch). I was planning on a new card this gen for 4K, but pricing is just too high for the gains, going to hold off one more round...
  • Gastec - Tuesday, February 12, 2019 - link

    That's because, as the legend has it, Nvidia is or was in the past gimping their older generation cards via drivers.
  • kostaaspyrkas - Sunday, February 10, 2019 - link

    At the same frame rates, Nvidia gameplay gives me a sense of choppiness... AMD Radeon feels more fluid...
  • yasamoka - Thursday, February 7, 2019 - link

    This wishful in-denial conjecture needs to stop.

    1) The AMD Radeon VII is based on the Vega architecture, which has been on the market since June 2017; that's about 20 months. The drivers have had more than enough time to mature. It's obvious that in certain cases there are clear bottlenecks (e.g. GTA V), but this seems to be the fundamental nature of AMD's drivers when it comes to DX11 performance in games that issue a lot of draw calls. Holding out for improvements here isn't going to please you much.

    2) The Radeon Fury X was meant to go against the GTX 980 Ti, not the GTX 980. The Fury, sitting slightly below the Fury X, easily covered the GTX 980 performance bracket. The Fury X still doesn't beat the GTX 980 Ti, particularly due to its limited VRAM, where it even falls behind the RX 480 8GB and its siblings (RX 580, RX 590).

    3) There is no evidence of Nvidia sabotaging any of its older cards' performance, and frankly your dig about GameWorks "sabotaging" AMD cards' performance is laughable when the same features, when enabled, also kill performance on Nvidia's own cards. PhysX has been open-source for 3 years and has now moved on to its 4th iteration, being used almost universally in game engines. How's that for vendor lockdown?

    4) 16GB of VRAM will not even begin to show their worth in the next year. That's wishful thinking, or more like excusing the bad decisions AMD tends to make when it comes to product differentiation between its compute and gaming cards. It's baffling at this point that they still haven't learned to diverge their product lines and establish separate architectures, optimizing power draw and bill of materials on the gaming card by cutting architectural features that gaming doesn't need. 16GB is unneeded, 1TB/s of bandwidth is unneeded, HBM is expensive and unneeded. The RTX 2080 averages higher scores with half the bandwidth, half the VRAM capacity, and GDDR6.

    The money is in the gaming market and the professional market; the prosumer market is a sliver in comparison. Look at what Nvidia does: it releases a mere handful of mascots every generation, all similar to one another (the Titan series), to take care of that sliver. You'd think they'd have a bigger portfolio if it were such a lucrative market? Meanwhile, on the gaming end, entire lineups. On the professional end, entire lineups (Quadro, Tesla).

    Get real.
