Professional Visualization and Rendering

With AMD strongly pushing the Radeon VII as a prosumer content creation card, it behooves us to look at rendering, CAD, and professional visualization performance. Accurate and applicable benchmarks for this field are not easy to come by, however, especially since performance depends heavily on workflow and on proprietary, licensed ISV software. Given AnandTech’s audience, which often includes engineers using these applications in critical production environments, our goal is to provide the most relevant metrics. But as Ian has discussed previously, the route to the most accurate workstation benchmarking for professional applications runs through the ISVs themselves, who are at best blasé and more typically unreceptive about providing access, even at the prospect of lending limited software licenses in return for ongoing discussion and third-party benchmark data of their software.

With those caveats in mind, the next best thing for evaluating overall GPU workstation performance is the venerable SPECviewperf, recently updated to version 13. Organized into ‘viewsets’ – groups of application-specific workloads derived from real-world datasets – SPECviewperf has been a longstanding suite for generalized workstation/CAD GPU performance. For SPECviewperf 13, the viewsets are based on:

  • Autodesk 3ds Max 2016 (Nitrous DX11 driver)
  • Dassault Systèmes CATIA V6 R2012
  • PTC Creo 3 & Creo 4
  • Geosurvey software, with workloads based on rendering techniques utilized by the open-source OpendTect seismic visualization application
  • Autodesk Maya 2017
  • Radiological (i.e. CT, MRI scans) rendering, with workloads using the Tuvok rendering core of the ImageVis3D volume visualization application
  • Autodesk Showcase 2013
  • Siemens NX 8.0
  • Dassault Systèmes Solidworks 2013 SP1

While we didn’t have time for complete benchmarking of video editing/production software such as Adobe Premiere Pro CC, we will be looking to include that in the future.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

Compute/ProViz: SPECviewperf 13 - sw-04 (Solidworks)

Looking over the results, it's clear that certain viewsets tend to perform better on one vendor's hardware than the other's. In those cases, the Radeon VII doesn't buck the trend, though in Siemens NX the lower performance is more likely than not related to driver maturity. In the reverse scenarios like creo-02 or maya-05, the Radeon VII is in a similar spot, naturally ahead of the RX Vega 64 but behind the competing RTX and GTX cards. If anything, the results highlight the importance of software maturity for newer hardware, but there are definite signs of Vega 20 being a powerful workstation card. The caveat is that it doesn't seem to change the overall landscape for viewsets that traditionally perform well on NVIDIA hardware.

Our next set of benchmarks looks at rendering performance. To be clear, given the nature of the ‘render wars’ as well as the widespread adoption of CUDA, the featured render engines are not necessarily indicative of the overall GPU renderer landscape. Because we are looking at the Radeon VII, some of the more popular renderers, such as Redshift and Octane, can't be included as they are CUDA-only; conversely, Indigo Renderer, while less popular, is useful as another cross-vendor datapoint.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

Of note, official Blender releases have yet to incorporate CUDA 10, and so RTX 20 series cards are not officially supported.

V-Ray is the only test here that utilizes CUDA on NVIDIA cards; the rest all use OpenCL. The results are broadly similar to SPECviewperf, with the Radeon VII continuing to excel in workloads where AMD hardware generally fares well.

Comments

  • repoman27 - Thursday, February 7, 2019 - link

    The Radeon Pro WX 7100 is Polaris 10, which does not do DSC. DSC requires fixed function encoding blocks that are not present in any of the Polaris or Vega variants. They do support DisplayPort 1.3 / 1.4 and HBR3, but DSC is an optional feature of the DP spec. AFAIK, the only GPUs currently shipping that have DSC support are NVIDIA's Turing chips.

    The CPU in the iMac Pro is a normal, socketed Xeon W, and you can max the RAM out at 512 GB using LRDIMMs if you're willing to crack the sucker open and shell out the cash. So making those things user accessible would be the only benefit to a modular Mac Pro. CPU upgrades are highly unlikely for that platform though, and I doubt Apple will even provide two DIMM slots per channel in the new Mac Pro. However, if they have to go LGA3647 to get an XCC based Xeon W, then they'd go with six slots to populate all of the memory channels. And the back of a display that is also 440 square inches of aluminum radiator is not necessarily a bad place to be, thermally. Nothing is open about Thunderbolt yet, by the way, but of course Apple could still add existing Intel TB3 controllers to an AMD design if they wanted to.

    So yeah, in order to have a product, they need to beat the iMac Pro in some meaningful way. And simply offering user accessible RAM and PCIe slots in a box that's separate from the display isn't really that, in the eyes of Apple at least. Especially since PCIe slots are far from guaranteed, if not unlikely.
  • halcyon - Friday, February 8, 2019 - link

    Apple cannot ship a Mac Pro with a vacuum cleaner. That 43 dBA is insane. Even if Apple downclocked and undervolted it in the BIOS, I doubt they could make it very quiet.

    Also, I doubt AMD is willing to sell tons of them at a loss.
  • dark_light - Thursday, February 7, 2019 - link

    Well written, balanced and comprehensive review that covers all the bases with just the right
    amount of detail.

    Thanks Nate Oh.

    Anandtech is still arguably the best site for this content. Kudos guys.
  • drgigolo - Thursday, February 7, 2019 - link

    So I got a 1080Ti at launch, because there was no other alternative at 4K. Finally we have an answer from AMD, unfortunately it's no faster than my almost 2 year old GPU, priced the same no less.

    I really think this would've benefitted from 128 ROPs, or 96.

    If they had priced this at 500 dollars, it would've been a much better bargain.

    I can't think of anyone who I would recommend this to.
  • sing_electric - Thursday, February 7, 2019 - link

    To be fair, you could almost say the same thing about the 2080, "I got a 1080 Ti at launch and 2 years later, Nvidia released a GPU that barely performs better if you don't care about gimmicks like ray tracing."

    People who do gaming and compute might be very well tempted, people who don't like Nvidia (or just do like AMD) might be tempted.

    Unfortunately, the cost of the RAM in this thing alone is probably nearly $350, so there's no way AMD could sell this thing for $500 (but it wouldn't surprise me if we see it selling a little under MSRP if there is plentiful supply and Nvidia can crank out enough 2080s).
  • eva02langley - Thursday, February 7, 2019 - link

    That was the whole point of RTX. Besides the 2080 Ti, there was nothing new. You were getting the same performance for around the same price as the last generation. There was no price disruption.
  • Oxford Guy - Thursday, February 7, 2019 - link

    Poor AMD.

    We're supposed to buy a clearly inferior product (look at that noise) just so they can sell leftover and defective Instincts?

    We're supposed to buy an inferior product because AMD's bad business moves have resulted in Nvidia being able to devalue the GPU market with Turing?

    Nope. We're supposed to either buy the best product for the money or sit out and wait for something better. Personally, I would jump for joy if everyone would put their money into a crowdfunded company, with management that refuses to become eaten alive by a megacorp, to take on Nvidia and AMD in the pure gaming space. There was once space for three players and there is space today. I am not holding my breath for Intel to do anything particularly valuable.

    Wouldn't it be nice to have a return to pure no-nonsense gaming designs, instead of this "you can buy our defective parts for high prices and feel like you're giving to charity" and "you can buy our white elephant feature well before its time has come and pay through the nose for it" situation?

    Capitalism has had a bad showing for some time now in the tech space. Monopolies and duopolies reign supreme.
  • eva02langley - Friday, February 8, 2019 - link

    Honestly, beside a RX570/580, no GPUs make sense right now.

    Funny that Polaris is still the best bang for the buck today.
  • drgigolo - Saturday, February 9, 2019 - link

    Well, at least you can buy a 2080 Ti, even though the 2080 is of course at the same price point as the 1080 Ti. But I won't buy a 2080 Ti either; it's too expensive and the performance increase is too small.

    The last decent AMD card I had was the R9 290X. Had that for a few years until the 1080 came out, and then replaced that with a 1080 Ti when I got an Acer Predator XB321HK.

    I will wait until something better comes along. Would really like HDMI 2.1 output, so that I can use VRR on the upcoming LG OLED C9.
  • sing_electric - Thursday, February 7, 2019 - link

    Oh, also, FWIW: The other way of looking at it is "damn, that 1080 Ti was a good buy. Here I am 2 years later and there's very little reason for me to upgrade."
