Radeon VII & Radeon RX Vega 64 Clock-for-Clock Performance

With the variety of changes from the Vega 10-powered RX Vega 64 to the new Radeon VII and its Vega 20 GPU, we wanted to take a look at gaming and compute performance while controlling for clockspeeds. In this way, we can peek at any substantial improvements or differences in pseudo-IPC. There are a couple of caveats here. The obvious one is that because the RX Vega 64 has 64 CUs while the Radeon VII has only 60, the comparison is already inexact. The other is that "IPC" is not quite the metric being measured; it's more a question of how much graphics/compute work is done per clock cycle, and how that might translate to performance. Isoclock GPU comparisons also tend to be less useful across generations and architectures, as designers often add pipeline stages to enable higher clockspeeds, as was the case with Vega, at the cost of reducing the work done per cycle and usually increasing latency as well.
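To put that CU deficit in concrete numbers, here is a quick back-of-the-envelope calculation, written as a Python sketch. It assumes only AMD's published GCN figures: each CU contains 64 stream processors, and each lane can retire one FMA (two FLOPs) per cycle.

    def fp32_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
        # Peak FP32 throughput in TFLOPS: CUs x lanes x FLOPs/lane x clock (GHz)
        return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

    vega64  = fp32_tflops(64, 1.5)   # ~12.3 TFLOPS
    radeon7 = fp32_tflops(60, 1.5)   # ~11.5 TFLOPS
    print(f"Radeon VII / RX Vega 64 at isoclock: {radeon7 / vega64:.3f}")  # ~0.938

In other words, at matched clocks the Radeon VII brings roughly 6% less peak shader throughput to the table, so merely matching the RX Vega 64's performance would already imply doing more work per CU per cycle.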

For our purposes, the incremental nature of 2nd generation Vega allays some of those concerns. Unfortunately, Wattman was unable to downclock the memory at this time, so we couldn't get a set of datapoints with both cards configured for comparable memory bandwidth. And while Vega's GPU boost mechanics mean there's no statically pinned clockspeed, both cards were set to 1500MHz, and both fluctuated between 1490 and 1500MHz depending on the workload. All combined, this means these results should be taken as approximations lacking granularity, but they are useful for spotting significant increases or decreases. It also means that interpreting the results is trickier; at a high level, though, if the Radeon VII outperforms the RX Vega 64 in a given non-memory-bound workload, then we can assume meaningful 'work per cycle' enhancements relatively decoupled from CU count, as sketched below.
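As a minimal sketch of that interpretation, assuming a workload that is neither memory-bandwidth-bound nor otherwise bottlenecked outside the CUs, the normalization boils down to dividing each card's result by its CU count. The FPS values below are hypothetical placeholders for illustration, not our measurements.

    def work_per_cycle_ratio(fps_radeon7, fps_vega64, cus_radeon7=60, cus_vega64=64):
        # Relative 'work per CU per cycle' of Radeon VII vs. RX Vega 64,
        # valid only when both cards are pinned to the same core clock.
        return (fps_radeon7 / cus_radeon7) / (fps_vega64 / cus_vega64)

    # Hypothetical example: a dead heat at matched clocks would imply
    # roughly 6.7% more work per CU per cycle for the Radeon VII,
    # since it gets there with four fewer CUs (64/60 ~= 1.067).
    print(work_per_cycle_ratio(fps_radeon7=100.0, fps_vega64=100.0))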

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Grand Theft Auto V - 3840x2160 - Very High Quality

F1 2018 - 3840x2160 - Ultra Quality

Shadow of War - 4K and 1440p - Ultra Quality

Wolfenstein II - 3840x2160

As mentioned above, we were not able to control for the Radeon VII's doubled memory bandwidth. But in terms of gaming, the only unexpected result is in GTA V. As an outlier, it's less likely to be an indication of increased gaming 'work per cycle,' and more likely related to driver optimizations and the memory bandwidth increase. GTA V has historically been a title where AMD hardware doesn't reach the expected level of performance, so regardless there has been room for driver improvement.

Compute/ProViz: SPECviewperf 13 - 3dsmax-06

Compute/ProViz: SPECviewperf 13 - catia-05

Compute/ProViz: SPECviewperf 13 - creo-02

Compute/ProViz: SPECviewperf 13 - energy-02

Compute/ProViz: SPECviewperf 13 - maya-05

Compute/ProViz: SPECviewperf 13 - medical-02

Compute/ProViz: SPECviewperf 13 - showcase-02

Compute/ProViz: SPECviewperf 13 - snx-03 (Siemens NX)

SPECviewperf is a slightly different story, though.

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: Cycles - Blender Benchmark 1.0b2

Compute/ProViz: V-Ray Benchmark 1.0.8

Compute/ProViz: Indigo Renderer 4 - IndigoBench 4.0.64

 

289 Comments

  • 29a - Thursday, February 7, 2019 - link

    As usual in these garbage articles, the prices given are nowhere near reality. The Vega 64 is $100 cheaper than what is listed.
  • RSAUser - Thursday, February 7, 2019 - link

    Anandtech doesn't ever seem to update reviews or prices.
    They'll compare a device from their history even if there have been months of driver updates that fixed performance issues, so they'll be using non-current info and everyone will assume it's current.
  • Ryan Smith - Thursday, February 7, 2019 - link

    "Anandtech doesn't ever seem to update reviews or prices."

    On the contrary, quite a bit was updated for this review. Though as driver performance has been rather stable as of late, performance hasn't exactly gone anywhere for most cards on most games.

    If you see anything that seems wrong, please let us know. But we go out of our way to try to avoid using any card/driver combinations that result in performance issues.
  • Korguz - Thursday, February 7, 2019 - link

    29a
    if you think AT does nothing but garbage articles.. then let's see YOU do better...

    as for prices.. meh.. that's something hard to account for, as there are things called exchange rates, and other variables that no one can predict.....
  • Phil85 - Thursday, February 7, 2019 - link

    So when will prices of GPUs decrease? Is this the new normal?
  • eva02langley - Thursday, February 7, 2019 - link

    Navi should bring value back to mid-range.

    It is still a nice card for professional/compute/rendering. But for gaming, the price is maybe $50 too high, and AMD really needs to get some better quality fans.
  • TEAMSWITCHER - Thursday, February 7, 2019 - link

    If Navi is missing next generation features like ray tracing and tensor cores, there will be ZERO value to it.
  • eva02langley - Thursday, February 7, 2019 - link

    AHAHAHAHA... Ray Tracing... you know the real problem of Ray Tracing? It was never on the table until Jensen brainwashed shills into thinking it was important. By defending it, you obviously prove that you have no critical judgement.

    By the way, the problem with RT/DLSS is that it will never be implemented because AMD owns consoles, and devs develop on consoles. There is no monetary benefit to implementing gimmicky proprietary GameWorks features for 1% of the PC user base, unless Nvidia is paying you to do so.

    It will never be a thing for the upcoming console generation. See you in 7 years, when it might be remotely relevant to the industry. As of now, unless you are rendering a CGI movie, it is worthless.
  • Dribble - Thursday, February 7, 2019 - link

    Both the next-gen consoles are going to have ray tracing. Microsoft - who wrote and owns the spec for the DX12 ray tracing extension currently used by PCs, and hence is a strong backer of ray tracing - will make one of them.
  • eva02langley - Thursday, February 7, 2019 - link

    Not going to happen, because RTX is proprietary, it is a closed environment, and it requires hardware acceleration that AMD is not going to pursue in the short term. Nvidia shot themselves in the foot by pushing it. Open source is the only way a new standard can be adopted. The whole G-Sync fiasco should have been enough to prove it.

    Hardware could still run it, but the impact on performance is just too great. At that point, developers like Sony have incredible talent in creating new effects that look far more realistic.

    Just looking at The Last of Us Part 2 is a good example.
