Radeon VII & Radeon RX Vega 64 Clock-for-Clock Performance

With the variety of changes from the Vega 10 powered RX Vega 64 to the new Radeon VII and its Vega 20 GPU, we wanted to take a look at performance and compute while controlling for clockspeeds. In this way, we can peek at any substantial improvements or differences in pseudo-IPC. There are a couple of caveats here: because the RX Vega 64 has 64 CUs while the Radeon VII has only 60, the comparison is already inexact. The other thing is that "IPC" is not precisely the metric measured here; it is more a question of how much graphics/compute work is done per clock cycle, and how that might translate to performance. Isoclock GPU comparisons also tend to be less useful when comparing across generations and architectures, as designers often add pipeline stages to enable higher clockspeeds, as was the case with Vega, at the cost of reducing work done per cycle and usually increasing latency as well.

For our purposes, the incremental nature of 2nd generation Vega allays some of those concerns. Unfortunately, Wattman was unable to downclock memory at this time, so we couldn't get a set of datapoints with both cards configured for comparable memory bandwidth. And while Vega's GPU boost mechanics mean there's no statically pinned clockspeed, both cards were set to 1500MHz, and both fluctuated between 1490 and 1500MHz depending on workload. Taken together, this means these results should be treated as approximations lacking granularity, but they are useful for spotting significant increases or decreases. This also makes interpreting the results trickier, but at a high level, if the Radeon VII outperforms the RX Vega 64 in a given non-memory-bound workload, then we can assume meaningful 'work per cycle' enhancements relatively decoupled from CU count.
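To make that normalization concrete, here is a minimal sketch of the arithmetic involved: dividing framerate by clockspeed and CU count to get a rough per-CU, per-clock throughput figure. This is our own illustrative math rather than a tool used in testing, and the FPS values below are placeholders, not measured results.

```python
# Sketch: CU- and clock-normalized throughput as a rough 'pseudo-IPC' proxy.
# FPS figures are hypothetical placeholders, not benchmark data.

CLOCK_MHZ = 1500  # both cards pinned to ~1500 MHz for this comparison

cards = {
    "RX Vega 64": {"cus": 64, "fps": 40.0},  # placeholder FPS
    "Radeon VII": {"cus": 60, "fps": 42.0},  # placeholder FPS
}

def work_per_cycle_per_cu(card):
    """Frames rendered per GPU cycle, per CU."""
    cycles_per_second = CLOCK_MHZ * 1e6
    return card["fps"] / cycles_per_second / card["cus"]

base = work_per_cycle_per_cu(cards["RX Vega 64"])
new = work_per_cycle_per_cu(cards["Radeon VII"])
print(f"Relative per-CU, per-clock work: {new / base:.2f}x")
# With these placeholder numbers: (42/60) / (40/64) = 1.12x, i.e. ~12% more
# work per cycle per CU. Note this can't isolate the effect of Radeon VII's
# doubled memory bandwidth, which is the caveat discussed above.
```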

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Grand Theft Auto V - 3840x2160 - Very High Quality

F1 2018 - 3840x2160 - Ultra Quality

Shadow of War - 4K and 1440p - Ultra Quality

Wolfenstein II - 3840x2160

As mentioned above, we were not able to control for the doubled memory bandwidth. But in terms of gaming, the only unexpected result is with GTA V. As an outlier, it's less likely to be an indication of increased gaming 'work per cycle,' and more likely to be related to driver optimization and memory bandwidth increases. GTA V has historically been a title where AMD hardware doesn't reach the expected level of performance, so there has been room for driver improvement regardless.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

SPECviewperf is a slightly different story, though.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

 
