Professional Visualization and Rendering

With AMD strongly pushing the Radeon VII as a prosumer content creation card, it behooves us to look at rendering, CAD, and professional visualization performance. However, accurate and applicable benchmarks for this field are not so easy to find, especially since performance is highly dependent on workflow and proprietary licensed ISV software. Given AnandTech’s audience, which often includes engineers using these applications in critical production environments, our goal is to provide the most relevant metrics. However, as Ian has discussed previously, the route to the most accurate workstation benchmarking for professional applications runs through the ISVs themselves, who are at best blasé and more typically unreceptive about providing access, even at the prospect of lending limited software licenses in return for ongoing coverage and third-party benchmark data of their software.

Those caveats in mind, the next best thing for evaluating overall GPU workstation performance is the venerable SPECviewperf, recently updated to version 13. Separated into ‘viewsets,’ which are a group of application-specific workloads derived from real-world datasets, SPECviewperf has been a longstanding suite for generalized workstation/CAD GPU performance. For SPECviewperf 13, the viewsets are based on:

  • Autodesk 3ds Max 2016 (Nitrous DX11 driver)
  • Dassault Systèmes CATIA V6 R2012
  • PTC Creo 3 & Creo 4
  • Geosurvey software, with workloads based on rendering techniques utilized by the open-source OpendTect seismic visualization application
  • Autodesk Maya 2017
  • Radiological (i.e. CT, MRI scans) rendering, with workloads using the Tuvok rendering core of the ImageVis3D volume visualization application
  • Autodesk Showcase 2013
  • Siemens NX 8.0
  • Dassault Systèmes Solidworks 2013 SP1

While we didn’t have time for complete benchmarking of video editing/production software such as Adobe Premiere Pro CC, we will be looking to include that in the future.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

Compute/ProViz: SPECviewperf 13 - sw-04 (Solidworks)

Looking over the results, it's clear that certain viewsets tend to perform better on one vendor's hardware than the other's. In those cases, the Radeon VII doesn't buck the trend, though in Siemens NX the lower performance is more likely than not related to driver maturity. In the reverse scenarios, like creo-02 or maya-05, the Radeon VII is in a similar spot: naturally ahead of the RX Vega 64, but behind the competing RTX and GTX cards. If anything, the results highlight the importance of software maturity for newer hardware, but there are definite signs of Vega 20 being a powerful workstation card. The caveat is that it doesn't seem to change the overall landscape for viewsets that traditionally perform well on NVIDIA hardware.
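Since SPECviewperf reports a separate composite score for each viewset, cross-vendor comparisons like the one above are often summarized as a geometric mean of per-viewset ratios against a baseline card, which keeps any single viewset from dominating the average. A minimal sketch of that calculation, using hypothetical scores rather than our measured data:

```python
import math

# Summarize one card's SPECviewperf results relative to a baseline card
# using the geometric mean of per-viewset speedup ratios.
# The scores below are illustrative placeholders, not measured results.
def geomean_speedup(card_scores, baseline_scores):
    ratios = [card_scores[v] / baseline_scores[v] for v in baseline_scores]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

baseline = {"3dsmax-06": 100.0, "catia-05": 200.0, "snx-03": 50.0}
card = {"3dsmax-06": 120.0, "catia-05": 180.0, "snx-03": 60.0}

print(round(geomean_speedup(card, baseline), 3))  # prints 1.09
```

Note that a card can post a geometric-mean "win" overall while still losing badly in the one viewset that matches a given user's actual workflow, which is why we present the viewsets individually above.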

Our next set of benchmarks looks at rendering performance. To be clear, given the nature of the ‘render wars’ as well as the widespread adoption of CUDA, the featured render engines are not necessarily indicative of the overall GPU renderer landscape. Because we are looking at the Radeon VII, we can’t include some of the more popular renderers, such as Redshift and Octane, which are CUDA-only; conversely, Indigo Renderer, while less popular, serves as another useful datapoint.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

Of note, official Blender releases have yet to incorporate CUDA 10, and so RTX 20 series cards are not officially supported.

V-Ray here is the only test that utilizes CUDA for NVIDIA cards, while the rest all use OpenCL. The results are broadly similar to SPECviewperf's: the Radeon VII continues to excel in workloads where AMD hardware generally fares well.
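Because all but one of these renderers run on OpenCL, a useful first sanity check when reproducing such numbers is confirming the card is actually visible to the OpenCL runtime at all (missing or stale drivers will silently drop a GPU from a renderer's device list). A minimal sketch, assuming the third-party pyopencl bindings are installed and degrading to an empty list when they aren't:

```python
# Sketch: enumerate OpenCL platforms and devices, as an OpenCL-based
# renderer (LuxMark, IndigoBench, etc.) would see them.
# Assumes the optional pyopencl package; returns [] if it is unavailable.
def list_opencl_devices():
    try:
        import pyopencl as cl
    except ImportError:
        return []  # OpenCL bindings not present on this system
    devices = []
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            devices.append((platform.name, dev.name))
    return devices

print(list_opencl_devices())
```

On a system with working drivers, a Radeon VII should appear under AMD's platform entry; an empty or NVIDIA-only list would point to a driver or ICD installation issue rather than a renderer problem.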

Comments

  • KateH - Friday, February 8, 2019 - link

    thirded on still enjoying SupCom! i have however long ago given up on attempting to find the ultimate system to run it. i7 920 @ 4.2Ghz, nope. FX-8150 @ 4.5Ghz, nope. The engine still demands more CPU for late-game AI swarms! (and i like playing on 81x81 maps which makes it much worse)
  • Korguz - Friday, February 8, 2019 - link

    Holliday75 and KateH

    ive run supcom on a i7 930 OC'd to 4.2 on a 7970, slow as molasses late in the game VS the AI, and on my current i7 5930k and strix 1060 and.. same thing.. very slow late in the game.... the later patches supposedly helped the game use more then 1 or 2 cores, i think Gas Powered games called it " multi core aware "

    makes me wonder how it would run on something newer like a threadripper, top en Ryzen or top end i7 and an i9 with a 1080 + vid card though, compared to my current comp....
  • eva02langley - Friday, February 8, 2019 - link

    Metal Gear Solid V, Street Fighter 5, Soulcalibur 6, Tekken 7, Senua Sacrifice...

    Basically, nothing from EA or Ubisoft or Activision or Epic.
  • ballsystemlord - Thursday, February 7, 2019 - link

    Oh oh! Would you be willing to post some FLOSS benchmarks? Xonotic, 0AD, Openclonk and Supertuxkart?
  • Manch - Friday, February 8, 2019 - link

    I would like to see a mixture of games that are dedicated to a singular API, and ones that support all three or at least two of them. I think that would make for a good spread.
  • Manch - Thursday, February 7, 2019 - link

    Not sure that I expected more. The clock for clock against the V64 is telling. @$400 for the V64 vs $700 for the VII, ummm....if you need a compute card as well sure, otherwise, Nvidia got the juice you want at better temps for the same price. Not a bad card, but it's not a great card either. I think a full 64CU's may have improved things a bit more and even put it over the top.

    Could you do a clock for clock compare against the 56 since they have the same CU count?? I'd be curious to see this and extrapolate what a VII with 64CU's would perform like just for shits and giggles.
  • mapesdhs - Friday, February 8, 2019 - link

    Are you really suggesting that, given two products which are basically the same, you automatically default to NVIDIA because of temperatures?? This really is the NPC mindset at work. At least AMD isn't ripping you off with the price, Radeon VII is expensive to make, whereas NVIDIA's margin is enormous. Remember the 2080 is what should actually have been the 2070, the entire stack is a level higher than it should be, confirmed by die code numbers and the ludicrous fact that the 2070 doesn't support SLI.

    Otoh, Radeon II is generally too expensive anyway; I get why AMD have done it, but really it's not the right way to tackle this market. They need to hit the midrange first and spread outwards. Stay out of it for a while, come back with a real hammer blow like they did with CPUs.
  • Manch - Friday, February 8, 2019 - link

    Well, they're not basically the same. Who's the NPC LOL? I have a V64 in my gaming rig. It's loud but I do like it for the price. The 2080 is a bit faster than the VII for the same price. It does run cooler and quieter. For some that is more important. If games is all you care about, get it. If you need compute, live with the noise and get the VII.

    I don't care how expensive it is to make. If AMD could put out a card at this level of performance they would and they would sell it at this price.
    Barely anyone uses SLI/Crossfire. It's not worth it. I previously had 2 290X 8GB in Crossfire. I needed a better card for VR, V64 was the answer. It's louder but it was far cheaper than competitors. The game bundle helped. Before that, I had bought a 1070 for the wife's computer. It was a good deal at the time. Some of yall get too attached to your brands get all frenzied at any criticism. I buy what suits my needs at the best price/perf.
  • AdhesiveTeflon - Friday, February 8, 2019 - link

    Not our fault AMD decided to make a video card with more expensive components and not beat the competition,
