Radeon VII & Radeon RX Vega 64 Clock-for-Clock Performance

With the variety of changes from the Vega 10 powered RX Vega 64 to the new Radeon VII and its Vega 20 GPU, we wanted to take a look at performance and compute while controlling for clockspeeds. In this way, we can peek at any substantial improvements or differences in pseudo-IPC. There are a couple of caveats here; obviously, because the RX Vega 64 has 64 CUs while the Radeon VII has only 60 CUs, the comparison is already not exact. The other thing is that "IPC" is not the exact metric measured here, but rather how much graphics/compute work is done per clock cycle and how that might translate to performance. Isoclock GPU comparisons tend to be less useful when comparing across generations and architectures because, as was the case with Vega, designers often add pipeline stages to enable higher clockspeeds, at the cost of reducing work done per cycle and usually increasing latency as well.
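
To make the 'work per cycle' idea concrete, below is a minimal sketch of the normalization involved. It is our own illustration rather than a formal methodology, and the fps figures are hypothetical placeholders, not benchmark results from this review.

```python
# Minimal pseudo-IPC sketch. The fps values are hypothetical
# placeholders, not measured data from this review.

def work_per_clock(fps: float, clock_mhz: float, compute_units: int) -> float:
    """Frames delivered per MHz per CU: a rough 'work per cycle' proxy."""
    return fps / (clock_mhz * compute_units)

# Both cards pinned to (roughly) the same clockspeed.
vega64_score = work_per_clock(fps=40.0, clock_mhz=1500, compute_units=64)
vii_score = work_per_clock(fps=42.0, clock_mhz=1500, compute_units=60)

# A ratio above 1.0 hints at per-cycle gains, though memory bandwidth
# and driver differences can still skew the result.
print(f"Radeon VII per-CU work per clock: {vii_score / vega64_score:.2f}x")
```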

For our purposes, the incremental nature of 2nd generation Vega allays some of those concerns, though unfortunately, Wattman was unable to downclock memory at this time, so we couldn't get a set of datapoints with both cards configured for comparable memory bandwidth. And while Vega's GPU boost mechanics mean there's no static pinned clockspeed, both cards were set to 1500MHz, and both fluctuated between 1490 and 1500MHz depending on workload. Taken together, this means that these results should be taken as approximations lacking granularity, but they are useful for spotting significant increases or decreases. It also means that interpreting the results is trickier, but at a high level, if the Radeon VII outperforms the RX Vega 64 in a given non-memory-bound workload, then we can assume meaningful 'work per cycle' enhancements relatively decoupled from CU count.
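
As a worked example of why the CU deficit matters when reading these charts, here is a rough sketch; the classification rule and the example figures are our own simplifying assumptions, not part of the test methodology.

```python
# At identical clocks, the Radeon VII fields fewer CUs, so merely tying
# the RX Vega 64 already implies roughly a 64/60 - 1 = ~6.7% gain in
# per-CU work per cycle.
CU_VEGA64, CU_VII = 64, 60
break_even = CU_VEGA64 / CU_VII - 1
print(f"Per-CU gain needed just to tie: {break_even:.1%}")

def verdict(fps_vii: float, fps_v64: float, memory_bound: bool) -> str:
    """Crude read on one isoclock result pair (assumed heuristic)."""
    if memory_bound:
        return "inconclusive: the doubled memory bandwidth may dominate"
    if fps_vii > fps_v64:
        return "likely 'work per cycle' improvement"
    return "no clear per-cycle gain"

print(verdict(fps_vii=42.0, fps_v64=40.0, memory_bound=False))  # placeholder fps
```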

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Grand Theft Auto V - 3840x2160 - Very High Quality

F1 2018 - 3840x2160 - Ultra Quality

Shadow of War - 4K and 1440p - Ultra Quality

Wolfenstein II - 3840x2160

As mentioned above, we were not able to control for the doubled memory bandwidth. But in terms of gaming, the only unexpected result is with GTA V. As an outlier, it's less likely to be an indication of increased gaming 'work per cycle,' and more likely to be related to driver optimization and memory bandwidth increases. GTA V has historically been a title where AMD hardware doesn't reach the expected level of performance, so there has been room for driver improvement regardless.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

SPECviewperf is a slightly different story, though.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

Comments

  • repoman27 - Thursday, February 7, 2019 - link

    The Radeon Pro WX 7100 is Polaris 10, which does not do DSC. DSC requires fixed function encoding blocks that are not present in any of the Polaris or Vega variants. They do support DisplayPort 1.3 / 1.4 and HBR3, but DSC is an optional feature of the DP spec. AFAIK, the only GPUs currently shipping that have DSC support are NVIDIA's Turing chips.

    The CPU in the iMac Pro is a normal, socketed Xeon W, and you can max the RAM out at 512 GB using LRDIMMs if you're willing to crack the sucker open and shell out the cash. So making those things user accessible would be the only benefit to a modular Mac Pro. CPU upgrades are highly unlikely for that platform though, and I doubt Apple will even provide two DIMM slots per channel in the new Mac Pro. However, if they have to go LGA3647 to get an XCC based Xeon W, then they'd go with six slots to populate all of the memory channels. And the back of a display that is also 440 square inches of aluminum radiator is not necessarily a bad place to be, thermally. Nothing is open about Thunderbolt yet, by the way, but of course Apple could still add existing Intel TB3 controllers to an AMD design if they wanted to.

    So yeah, in order to have a product, they need to beat the iMac Pro in some meaningful way. And simply offering user accessible RAM and PCIe slots in a box that's separate from the display isn't really that, in the eyes of Apple at least. Especially since PCIe slots are far from guaranteed, if not unlikely.
  • halcyon - Friday, February 8, 2019 - link

    Apple cannot ship a Mac Pro with a vacuum cleaner. That 43 dBA is insane. Even if Apple downclocked and undervolted it via the BIOS, I doubt they could make it very quiet.

    Also, I doubt AMD is willing to sell tons of them at a loss.
  • dark_light - Thursday, February 7, 2019 - link

    Well written, balanced and comprehensive review that covers all the bases with just the right
    amount of detail.

    Thanks Nate Oh.

    Anandtech is still arguably the best site for this content. Kudos guys.
  • drgigolo - Thursday, February 7, 2019 - link

    So I got a 1080Ti at launch, because there was no other alternative at 4K. Finally we have an answer from AMD; unfortunately, it's no faster than my almost 2-year-old GPU, priced the same no less.

    I really think this would've benefitted from 128 ROPs, or 96.

    If they had priced this at 500 dollars, it would've been a much better bargain.

    I can't think of anyone who I would recommend this to.
  • sing_electric - Thursday, February 7, 2019 - link

    To be fair, you could almost say the same thing about the 2080, "I got a 1080 Ti at launch and 2 years later, Nvidia released a GPU that barely performs better if you don't care about gimmicks like ray tracing."

    People who do gaming and compute might very well be tempted; people who don't like Nvidia (or just do like AMD) might be tempted too.

    Unfortunately, the cost of the RAM in this thing alone is probably nearly $350, so there's no way AMD could sell this thing for $500 (but it wouldn't surprise me if we see it selling a little under MSRP if there is plentiful supply and Nvidia can crank out enough 2080s).
  • eva02langley - Thursday, February 7, 2019 - link

    That was the whole point of RTX. Besides the 2080 Ti, there was nothing new. You were getting the same performance for around the same price as the last generation. There was no price disruption.
  • Oxford Guy - Thursday, February 7, 2019 - link

    Poor AMD.

    We're supposed to buy a clearly inferior product (look at that noise) just so they can sell leftover and defective Instincts?

    We're supposed to buy an inferior product because AMD's bad business moves have resulted in Nvidia being able to devalue the GPU market with Turing?

    Nope. We're supposed to either buy the best product for the money or sit out and wait for something better. Personally, I would jump for joy if everyone would put their money into a crowdfunded company, with management that refuses to be eaten alive by a megacorp, to take on Nvidia and AMD in the pure gaming space. There was once space for three players and there is space today. I am not holding my breath for Intel to do anything particularly valuable.

    Wouldn't it be nice to have a return to pure no-nonsense gaming designs, instead of this "you can buy our defective parts for high prices and feel like you're giving to charity" and "you can buy our white elephant feature well before its time has come and pay through the nose for it" situation?

    Capitalism has had a bad showing for some time now in the tech space. Monopolies and duopolies reign supreme.
  • eva02langley - Friday, February 8, 2019 - link

    Honestly, besides an RX 570/580, no GPUs make sense right now.

    Funny that Polaris is still the best bang for the $ today.
  • drgigolo - Saturday, February 9, 2019 - link

    Well, at least you can buy a 2080Ti, even though the 2080 is of course at the same price point as the 1080Ti. But I won't buy a 2080Ti either; it's too expensive and the performance increase is too small.

    The last decent AMD card I had was the R9 290X. Had that for a few years until the 1080 came out, and then replaced it with a 1080Ti when I got an Acer Predator XB321HK.

    I will wait until something better comes along. Would really like HDMI 2.1 output, so that I can use VRR on the upcoming LG OLED C9.
  • sing_electric - Thursday, February 7, 2019 - link

    Oh, also, FWIW: The other way of looking at it is "damn, that 1080 Ti was a good buy. Here I am 2 years later and there's very little reason for me to upgrade."
