Radeon VII & Radeon RX Vega 64 Clock-for-Clock Performance

With the variety of changes from the Vega 10-powered RX Vega 64 to the new Radeon VII and its Vega 20 GPU, we wanted to take a look at gaming and compute performance while controlling for clockspeeds. In this way, we can peek at any substantial improvements or differences in pseudo-IPC. There are a couple of caveats here; obviously, because the RX Vega 64 has 64 CUs while the Radeon VII has only 60 CUs, the comparison is already not exact. The other thing is that "IPC" is not the exact metric measured here, but rather how much graphics/compute work is done per clock cycle and how that might translate to performance. Isoclock GPU comparisons tend to be less useful when comparing across generations and architectures, as, like in Vega, designers often add pipeline stages to enable higher clockspeeds, at the cost of reducing work done per cycle and usually increasing latency as well.

For our purposes, the incremental nature of 2nd generation Vega allays some of those concerns, though unfortunately, Wattman was unable to downclock memory at this time, so we couldn't get a set of datapoints for when both cards are configured for comparable memory bandwidth. And while Vega's GPU boost mechanics mean there's no static pinned clockspeed, both cards were set to 1500MHz, and both fluctuated between 1490 and 1500MHz depending on workload. All combined, this means that these results should be taken as approximations lacking granularity, but they are useful for spotting significant increases or decreases. This also means that interpreting the results is trickier, but at a high level, if the Radeon VII outperforms the RX Vega 64 in a given non-memory-bound workload, then we can assume meaningful 'work per cycle' enhancements relatively decoupled from CU count.
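As a rough illustration of the normalization involved, the sketch below divides a benchmark score by CU count at the shared ~1500MHz clock, so that any remaining delta approximates the change in per-CU work per cycle. The helper names and scores here are hypothetical placeholders, not our measured results, and the doubled memory bandwidth caveat still applies:

```python
# Hypothetical isoclock comparison helpers: with both cards pinned to the
# same clock, normalizing a score by CU count isolates an approximate
# per-CU, per-clock throughput figure. Placeholder numbers only.

def per_cu_score(score: float, cu_count: int) -> float:
    """Benchmark score per compute unit at a fixed, shared clockspeed."""
    return score / cu_count

def isoclock_delta(score_v64: float, score_vii: float,
                   cus_v64: int = 64, cus_vii: int = 60) -> float:
    """Percent change in per-CU throughput from RX Vega 64 to Radeon VII.

    Note: any delta also folds in memory bandwidth and driver differences,
    which this simple normalization cannot separate out.
    """
    ratio = per_cu_score(score_vii, cus_vii) / per_cu_score(score_v64, cus_v64)
    return (ratio - 1.0) * 100.0

# Example with made-up FPS values (60 fps vs. 63 fps):
print(f"{isoclock_delta(60.0, 63.0):+.1f}% per-CU work per clock")  # +12.0%
```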

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Grand Theft Auto V - 3840x2160 - Very High Quality

F1 2018 - 3840x2160 - Ultra Quality

Shadow of War - 4K and 1440p - Ultra Quality

Wolfenstein II - 3840x2160

As mentioned above, we were not able to control for the doubled memory bandwidth. But in terms of gaming, the only unexpected result is with GTA V. As an outlier, it's less likely to be an indication of increased gaming 'work per cycle,' and more likely to be related to driver optimization and memory bandwidth increases. GTA V has historically been a title where AMD hardware doesn't reach the expected level of performance, so regardless there has been room for driver improvement.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

SPECviewperf is a slightly different story, though.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

289 Comments


  • schizoide - Thursday, February 7, 2019 - link

    Sure it does, at the bottom end. It basically IS an Instinct MI50 on the cheap.
  • GreenReaper - Thursday, February 7, 2019 - link

    Maybe they weren't selling so well so they decided to repurpose before Navi comes out and makes it largely redundant.
  • schizoide - Thursday, February 7, 2019 - link

    IMO, what happened is pretty simple. Nvidia's extremely high prices allowed AMD to compete with a workstation-class card. So they took a swing.
  • eva02langley - Friday, February 8, 2019 - link

    My take too. This card was never intended to be released. It just happened because the RTX 2080 is at $700+.

    In Canada, the RVII is $40 less than the cheapest RTX 2080, making it the better deal.
  • Manch - Thursday, February 7, 2019 - link

    It is, but it's slightly gimped perf-wise to justify the price difference.
  • sing_electric - Thursday, February 7, 2019 - link

    Anyone else think that the Mac Pro is lurking behind the Radeon VII release? Apple traditionally does a March 2019 event where they launch new products, so the timing fits (especially since there's little reason to think the Pro would need to be launched in time for the Q4 holiday season).

    -If Navi is "gamer-focused" as Su has hinted, that may well mean GDDR6 (and rays?), so wouldn't be of much/any benefit to a "pro" workload
    -This way Apple can release the Pro with the GPU as a known quantity (though it may well come in a "Pro" variant w/say, ECC and other features enabled)
    -Maybe the timing was moved up, and separated from the Apple launch, in part to "strike back" at the 2080 and insert AMD into the GPU conversation more for 2019.

    The timeline and available facts seem to fit pretty well here...
  • tipoo - Thursday, February 7, 2019 - link

    I was thinking a better binned die like VII for the iMac Pro.

    Tbh the Mac Pro really needs to support CUDA/Nvidia if it's going to be a serious contender for scientific compute.
  • sing_electric - Thursday, February 7, 2019 - link

    I mean, sure? but I'm not sure WHAT market Apple is going after with the Mac Pro anyways... I mean, would YOU switch platforms (since anyone who seriously needs the performance necessary to justify the price tag in a compute-heavy workload has almost certainly moved on from their 2013 Mac Pro) with the risk that Apple might leave the Pro to languish again?

    There's certainly A market for it, I'm just not sure what the market is.
  • repoman27 - Thursday, February 7, 2019 - link

    The Radeon VII does seem to be one piece of the puzzle, as far as the new Mac Pro goes. On the CPU side Apple still needs to wait for Cascade Lake Xeon W if they want to do anything more than release a modular iMac Pro though. I can't imagine Apple will ever release another dual-socket Mac, and I'd be very surprised if they switched to AMD Threadripper at this point. But even still, they would need XCC based Xeon W chips to beat the iMac Pro in terms of core count. Intel did release just such a thing with the Xeon W 3175X, but I'm seriously hoping for Cascade Lake over Skylake Refresh for the new Mac Pro. That would push the release timeline out to Q3 or Q4 though.

    The Radeon VII also appears to lack DisplayPort DSC, which means single cable 8K external displays would be a no-go. A new Mac Pro that could only support Thunderbolt 3 displays up to 5120 x 2880, 10 bits per color, at 60 Hz would almost seem like a bit of a letdown at this point. Apple is in a bit of an awkward position here anyway, as ICL-U will have integrated Thunderbolt 3 and an iGPU that supports DP 1.4a with HBR 3 and DSC when it arrives, also around the Q3 2019 timeframe. I'm not sure Intel even has any plans for discrete Thunderbolt controllers after Titan Ridge, but with no PCIe 4.0 on Cascade Lake, there's not much they can even do to improve on it anyway.

    So maybe the new Mac Pro is a Q4 2019 product and will have Cascade Lake Xeon W and a more pro-oriented yet Navi-based GPU?
  • sing_electric - Thursday, February 7, 2019 - link

    Possibly, but I'm not 100% sure that they need to be at the iMac Pro on core count to have a product. More RAM (with a lot of slots that a user can get to) and a socketed CPU with better thermals than you can get on the back of a display might do it. I'd tend to think that moving to Threadripper (or EPYC) is a pipe dream, partly because of Thunderbolt support (which I guess, now that it's open, Apple could THEORETICALLY add, but it just seems unlikely at this point, particularly since there'd be things where an Intel-based iMac Pro might beat a TR-based Mac Pro, and Apple doesn't generally like complexities like that).

    Also, I'd assumed that stuff like DSC support would be one of the changes between the consumer and Pro versions (and AMD's Radeon Pro WX 7100 already does DSC, so it's not like they don't have the ability to add it to pro GPUs).
