Professional Visualization and Rendering

With AMD strongly pushing the Radeon VII as a prosumer content creation card, it behooves us to look at rendering, CAD, and professional visualization performance. However, accurate and applicable benchmarks for this field are not so easy to find, especially since performance is highly dependent on workflow and proprietary licensed ISV software. Given AnandTech’s audience, which often includes engineers using these applications in critical production environments, our goal is to provide the most relevant metrics. That said, as Ian has discussed previously, the route to the most accurate workstation benchmarking for professional applications lies with the ISVs themselves, who are at best blasé and more typically unwilling to provide access, even when offered ongoing discussion and third-party benchmark data of their software in return for lending limited licenses.

Those caveats in mind, the next best thing for evaluating overall GPU workstation performance is the venerable SPECviewperf, recently updated to version 13. Separated into ‘viewsets,’ which are groups of application-specific workloads derived from real-world datasets, SPECviewperf has been a longstanding suite for generalized workstation/CAD GPU performance. For SPECviewperf 13, the viewsets are based on the following applications; a sketch of how a viewset's subtests roll up into a composite score follows the list:

  • Autodesk 3ds Max 2016 (Nitrous DX11 driver)
  • Dassault Systèmes CATIA V6 R2012
  • PTC Creo 3 & Creo 4
  • Geosurvey software, with workloads based on rendering techniques utilized by the open-source OpendTect seismic visualization application
  • Autodesk Maya 2017
  • Radiological (i.e. CT, MRI scans) rendering, with workloads using the Tuvok rendering core of the ImageVis3D volume visualization application
  • Autodesk Showcase 2013
  • Siemens NX 8.0
  • Dassault Systèmes Solidworks 2013 SP1
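Each viewset reports a composite score, which per SPEC's methodology is a weighted geometric mean of the frame rates of its individual subtests. Below is a minimal sketch of that rollup in Python; the subtest numbers and weights are purely illustrative placeholders, not values from any real viewset.

```python
import math

# Hypothetical subtest results for one viewset as (frames per second, weight).
# Real viewsets define their own subtests and weights; these values are
# placeholders for illustration only.
subtests = [(42.7, 0.25), (95.1, 0.25), (61.3, 0.50)]

def composite_score(results):
    """Weighted geometric mean of subtest frame rates, the general form
    SPECviewperf uses to roll a viewset's subtests into one score."""
    total_weight = sum(w for _, w in results)
    log_sum = sum(w * math.log(fps) for fps, w in results)
    return math.exp(log_sum / total_weight)

print(f"Viewset composite: {composite_score(subtests):.2f}")
```

The geometric mean keeps one unusually fast subtest from dominating the composite, which is why a viewset score tends to track the slower rendering paths within each application.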

While we didn’t have time for complete benchmarking of video editing/production software such as Adobe Premiere Pro CC, we will be looking to include that in the future.

[SPECviewperf 13 charts: 3dsmax-06, catia-05, creo-02, energy-02, maya-05, medical-02, showcase-02, snx-03 (Siemens NX), sw-04 (Solidworks)]

Looking over the results, it's clear that certain viewsets tend to perform better on one vendor's hardware than the other's. In those cases, the Radeon VII doesn't buck the trend, though in Siemens NX the lower performance is more likely than not related to driver maturity. In the reverse scenarios, such as creo-02 or maya-05, the Radeon VII is in a similar spot, naturally ahead of the RX Vega 64 but behind the competing RTX and GTX cards. If anything, the results highlight the importance of software maturity for newer hardware, but there are definite signs that Vega 20 is a powerful workstation card. The caveat is that it doesn't seem to change the overall landscape for viewsets that traditionally perform well on NVIDIA hardware.

Our next set of benchmarks looks at rendering performance. To be clear, given the nature of the ‘render wars’ as well as the adoption of CUDA, the featured render engines are not necessarily indicative of the overall GPU renderer landscape. Because we are looking at the Radeon VII, some of the more popular renderers, such as Redshift and Octane, can't be included, as they are CUDA-only; conversely, Indigo Renderer supports OpenCL and helps as another datapoint, even though it is less popular.
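The split is ultimately a driver-stack one: an OpenCL renderer can only use devices the OpenCL runtime exposes. As a quick illustration, here is a minimal sketch using the pyopencl package (our use of it here is illustrative rather than part of the test suite) that enumerates the GPUs an OpenCL-based engine like LuxMark or IndigoBench would see:

```python
import pyopencl as cl

# Walk every installed OpenCL platform (AMD, NVIDIA, and Intel drivers each
# register their own) and list the GPU devices each one exposes.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print(f"  Device: {device.name}")
        print(f"    Compute units: {device.max_compute_units}")
        print(f"    Global memory: {device.global_mem_size // 2**20} MiB")
```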

[GPU rendering charts: LuxMark 3.1 (LuxBall and Hotel), Blender Benchmark 1.0b2 (Cycles), V-Ray Benchmark 1.0.8, IndigoBench 4.0.64 (Indigo Renderer 4)]

To note, official Blender releases have yet to incorporate CUDA 10, and so RTX 20 series cards are not officially supported.

V-Ray is the only test here that utilizes CUDA for NVIDIA cards, while the rest all use OpenCL. The results are broadly similar to SPECviewperf's: the Radeon VII continues to excel in workloads where AMD hardware generally fares well.
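For those curious how a renderer toggles between the two APIs, below is a minimal sketch of selecting the Cycles compute backend through Blender's Python API as it stood in the 2.79-era builds that Blender Benchmark 1.0b2 wraps; the property paths are from that API generation and moved in the later 2.8x series, so treat the exact names as assumptions for other builds.

```python
# Run from Blender's Python console (2.79-era API; paths differ in 2.8x+).
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'  # or 'OPENCL' for Radeon cards

# Populate the device list for the chosen backend, then enable every device.
prefs.get_devices()
for device in prefs.devices:
    device.use = True
    print(device.name, device.type, device.use)

# Render on the GPU rather than the CPU for the active scene.
bpy.context.scene.cycles.device = 'GPU'
```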

Comments

  • zodiacfml - Friday, February 8, 2019 - link

    The first part of your conclusion describes what this product is. It is surprising to see this card's existence at 7nm, a Vega with 16GB of HBM2.
    It appears to me that AMD/TSMC is learning the 7nm process for GPUs/CPUs, and the few chips they produce are sold as high-end parts while volume and yields improve.
    AMD really shot high with its power consumption (clocks) and memory to reach the pricing of the RTX 2080.

    However, I haven't seen a publisher show undervolting results. Most Vegas perform better with this tweak.
  • Samus - Saturday, February 9, 2019 - link

    I think you are being a little too critical of this card. Considering it’s an older architecture, it’s impressive it’s in the 2080’s ballpark.

    And for those like me who only care about Frostbite Engine-based games, this card is obviously the better option between the two at the same price.

    You also ignored the overclocking potential of the headroom given by moving to 7nm.
  • D. Lister - Saturday, February 9, 2019 - link

    "You also ignored the overclockong potential of the headroom given by moving to 7nm"

    Unfortunately, it seems to be already overclocked to the max on the core. VRAM has some headroom, but another couple of hundred MHz isn't going to do wonders considering the already exorbitant amount of bandwidth available.
  • Oxford Guy - Saturday, February 9, 2019 - link

    "I think you are being a little too critical of this card."

    Unless someone can take advantage of the non-gaming aspects of it, it is dead in the water at the current price point. There is zero reason to purchase a card, for gaming only, that uses more power and creates vastly more noise at the same price point as one that is much more efficient for gaming purposes. And the only way to tame the noise problem is to either massively undervolt it or give it water. Proponents of this GPU are going to have to show that it's possible to buy a 3-slot model and massively undervolt it to get noise under control with air. Otherwise, the claim is vaporware.

    Remember this information? Fiji: 596 mm2 for $650; Vega 10: 495 mm2 for $500; Vega 20: 331 mm2 for $700.

    Yes, the 16 GB of RAM costs AMD money, but it's irrelevant for gaming. AMD not only gave the community nearly 600 mm2 of chip, it paired it with an AIO to tame the noise. All the talk from Su about improving AMD's margins seems to be something that gamers need to stop lauding AMD for and start thinking critically about. If a company only has an inferior product to offer and wants to improve margins, that's going to require that buyers be particularly foolish.
  • Samus - Sunday, February 10, 2019 - link

    I wouldn't call the 16GB irrelevant. It trumps the 2080 in the two most demanding 4K titles, and comes relatively close in other ultra high resolution benchmarks.

    It could be assumed that's a sign of things to come as resolutions continue to increase.
  • Oxford Guy - Sunday, February 10, 2019 - link

    "It could be assumed that's a sign of things to come as resolutions continue to increase."

    Developers adapt to Nvidia, not to AMD. That appears to be why, for instance, the visuals in Witcher 3 were watered-down at the last minute — to fit the VRAM of the then standard 970. Particularly in the context of VRAMgate there was an incentive on the part of Nvidia to be certain that the 970's VRAM would be able to handle a game like that one.

    AMD could switch all of its discrete cards to 32 GB tomorrow and no developers would bite unless AMD paid them to, which would leave that 32 GB of little use.
  • BenSkywalker - Saturday, February 9, 2019 - link

    This offering is truly a milestone in engineering.

    The Radeon VII has none of the RTX or tensor cores of the competition, uses markedly more power *and* is built with a half-node process advantage, and still, inexplicably, is slower than its direct competitor?

    I've gone back and looked, I can't find another example that's close to this.

    Either TSMC has *massive* problems with 7 nm or AMD has redefined terrible engineering in this segment. One of those, at least, has to be at play here.
  • Oxford Guy - Saturday, February 9, 2019 - link

    The RTX and Tensor die area may help with power dissipation when it's shut down, in terms of hot spot reduction for instance. Vega 20 is only 331 mm2. However, it does seem clear enough that Fiji/Vega is only to be considered a gaming-centric architecture in the context of developers creating engines that take advantage of it, à la DOOM.

    Since developers don't have an incentive to do that (even DOOM's engine is apparently a one-off), here we are with what looks like a card designed for compute and given to gamers as an expensive and excessively loud afterthought.
  • Oxford Guy - Saturday, February 9, 2019 - link

    There is also the issue of blasting clocks to compensate for the small die. Rip out all of the irrelevant bits and add more gaming hardware. Drop the VRAM to 8 GB. Make a few small tweaks to improve efficiency rather than just shrink Vega. With those things done I wonder how much better the efficiency/performance would be.
