Compute Performance

Shifting gears, we'll look at the compute aspects of the Radeon VII. Though it is fundamentally similar to first-generation Vega, Vega 20 places a particular emphasis on improved compute performance, and we may see that reflected here.

Beginning with CompuBench 2.0, the latest iteration of Kishonti's GPU compute benchmark suite offers a wide array of practical compute workloads. Here we've decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulation.

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow
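
Of the three, the N-Body test is the easiest to characterize: it is an all-pairs gravitational simulation, an O(n²) force accumulation that parallelizes naturally across a GPU's ALUs. As a rough illustration of the arithmetic involved, a single timestep might look like the minimal CPU-side sketch below; this is not Kishonti's actual OpenCL kernel, just the general shape of the workload.

    /* Minimal all-pairs N-body step (CPU sketch, illustrative only --
     * CompuBench's actual OpenCL kernel is Kishonti's own). */
    #include <math.h>
    #include <stdio.h>

    #define N 1024

    typedef struct { float x, y, z; } vec3;

    /* Advance velocities and positions by one timestep dt, accumulating
     * gravitational acceleration from every other body (unit masses). */
    static void nbody_step(vec3 *pos, vec3 *vel, int n, float dt)
    {
        const float soft = 1e-4f;                  /* softening avoids divide-by-zero */
        for (int i = 0; i < n; i++) {
            vec3 acc = {0.0f, 0.0f, 0.0f};
            for (int j = 0; j < n; j++) {
                float dx = pos[j].x - pos[i].x;
                float dy = pos[j].y - pos[i].y;
                float dz = pos[j].z - pos[i].z;
                float r2 = dx*dx + dy*dy + dz*dz + soft;
                float inv_r3 = 1.0f / (r2 * sqrtf(r2));   /* ~ 1/r^3 */
                acc.x += dx * inv_r3;
                acc.y += dy * inv_r3;
                acc.z += dz * inv_r3;
            }
            vel[i].x += acc.x * dt;
            vel[i].y += acc.y * dt;
            vel[i].z += acc.z * dt;
        }
        for (int i = 0; i < n; i++) {
            pos[i].x += vel[i].x * dt;
            pos[i].y += vel[i].y * dt;
            pos[i].z += vel[i].z * dt;
        }
    }

    int main(void)
    {
        static vec3 pos[N], vel[N];
        for (int i = 0; i < N; i++)                /* scatter bodies along a line */
            pos[i].x = (float)i * 0.01f;
        nbody_step(pos, vel, N, 0.001f);
        printf("body 0 now at x = %f\n", pos[0].x);
        return 0;
    }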

Moving on, we'll also look at floating point performance with FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance.

Compute: Folding @ Home (Single and Double Precision)
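
To put "low double precision performance" in concrete terms, peak throughput scales directly with whatever FP64-to-FP32 rate a vendor enables on a given product. A back-of-the-envelope sketch of that arithmetic is below; the shader count, clock, and ratio are placeholder values rather than the specifications of any card in our charts.

    /* Back-of-the-envelope peak-FLOPS arithmetic. All inputs are placeholder
     * values, not the specifications of any particular card. */
    #include <stdio.h>

    int main(void)
    {
        double shaders    = 3840.0;    /* stream processors (hypothetical) */
        double clock_ghz  = 1.75;      /* sustained clock in GHz (hypothetical) */
        double fp64_ratio = 1.0/16.0;  /* FP64 rate relative to FP32; varies by product */

        /* Each ALU can retire one FMA per clock, i.e. 2 floating point ops. */
        double fp32_tflops = shaders * clock_ghz * 2.0 / 1000.0;
        double fp64_tflops = fp32_tflops * fp64_ratio;

        printf("FP32: %.2f TFLOPS   FP64: %.2f TFLOPS\n", fp32_tflops, fp64_tflops);
        return 0;
    }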

Next is Geekbench 4's GPU compute suite. A multi-faceted test suite, Geekbench 4 runs seven different GPU sub-tests, ranging from face detection to FFTs, and then combines their scores via a geometric mean. As a result, Geekbench 4 isn't testing any one workload, but rather is an average of many different basic workloads.

Compute: Geekbench 4 - GPU Compute - Total Score
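
Because the total is a geometric mean rather than an arithmetic mean, a single outlier sub-test shifts the overall score far less than it otherwise would. A minimal sketch of that aggregation is below; the seven sub-scores are invented for illustration, not actual Geekbench output.

    /* Geometric mean of N sub-test scores: the Nth root of their product,
     * computed in log space to avoid overflow. Scores below are invented. */
    #include <math.h>
    #include <stdio.h>

    static double geometric_mean(const double *scores, int n)
    {
        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(scores[i]);
        return exp(log_sum / n);
    }

    int main(void)
    {
        /* Seven hypothetical sub-test scores (face detection, FFT, etc.). */
        double scores[7] = { 95000, 120000, 80000, 110000, 70000, 150000, 90000 };
        printf("Aggregate score: %.0f\n", geometric_mean(scores, 7));
        return 0;
    }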

Lastly, we have SiSoftware Sandra 2018, whose general-purpose compute benchmarks are run at different precisions and through both OpenCL and DirectX 11 compute paths.

Compute: SiSoftware Sandra 2018 - GP Processing (OpenCL)

Compute: SiSoftware Sandra 2018 - GP Processing (DX11)

Compute: SiSoftware Sandra 2018 - Pixel Shader Compute (DX11)

 


289 Comments


  • schizoide - Thursday, February 7, 2019 - link

    Sure it does, at the bottom-end. It basically IS an Instinct MI50 on the cheap.
  • GreenReaper - Thursday, February 7, 2019 - link

    Maybe they weren't selling so well, so they decided to repurpose them before Navi comes out and makes it largely redundant.
  • schizoide - Thursday, February 7, 2019 - link

    IMO, what happened is pretty simple. Nvidia's extremely high prices allowed AMD to compete with a workstation-class card. So they took a swing.
  • eva02langley - Friday, February 8, 2019 - link

    My take too. This card was never intended to be released. It just happened because the RTX 2080 is at $700+.

    In Canada, the RVII is $40 less than the cheapest RTX 2080, making it the better deal.
  • Manch - Thursday, February 7, 2019 - link

    It is, but it's slightly gimped perf-wise to justify the price diff.
  • sing_electric - Thursday, February 7, 2019 - link

    Anyone else think that the Mac Pro is lurking behind the Radeon VII release? Apple traditionally does a March 2019 event where they launch new products, so the timing fits (especially since there's little reason to think the Pro would need to be launched in time for the Q4 holiday season).

    -If Navi is "gamer-focused" as Su has hinted, that may well mean GDDR6 (and rays?), so it wouldn't be of much/any benefit to a "pro" workload
    -This way Apple can release the Pro with the GPU as a known quantity (though it may well come in a "Pro" variant w/say, ECC and other features enabled)
    -Maybe the timing was moved up, and separated from the Apple launch, in part to "strike back" at the 2080 and insert AMD into the GPU conversation more for 2019.

    The timeline and available facts seem to fit pretty well here...
  • tipoo - Thursday, February 7, 2019 - link

    I was thinking a better-binned die like the VII for the iMac Pro.

    Tbh the Mac Pro really needs to support CUDA/Nvidia if it's going to be a serious contender for scientific compute.
  • sing_electric - Thursday, February 7, 2019 - link

    I mean, sure? But I'm not sure WHAT market Apple is going after with the Mac Pro anyways... I mean, would YOU switch platforms (since anyone who seriously needs the performance necessary to justify the price tag in a compute-heavy workload has almost certainly moved on from their 2013 Mac Pro) with the risk that Apple might leave the Pro to languish again?

    There's certainly A market for it, I'm just not sure what the market is.
  • repoman27 - Thursday, February 7, 2019 - link

    The Radeon VII does seem to be one piece of the puzzle, as far as the new Mac Pro goes. On the CPU side Apple still needs to wait for Cascade Lake Xeon W if they want to do anything more than release a modular iMac Pro though. I can't imagine Apple will ever release another dual-socket Mac, and I'd be very surprised if they switched to AMD Threadripper at this point. But even still, they would need XCC based Xeon W chips to beat the iMac Pro in terms of core count. Intel did release just such a thing with the Xeon W 3175X, but I'm seriously hoping for Cascade Lake over Skylake Refresh for the new Mac Pro. That would push the release timeline out to Q3 or Q4 though.

    The Radeon VII also appears to lack DisplayPort DSC, which means single cable 8K external displays would be a no-go. A new Mac Pro that could only support Thunderbolt 3 displays up to 5120 x 2880, 10 bits per color, at 60 Hz would almost seem like a bit of a letdown at this point. Apple is in a bit of an awkward position here anyway, as ICL-U will have integrated Thunderbolt 3 and an iGPU that supports DP 1.4a with HBR 3 and DSC when it arrives, also around the Q3 2019 timeframe. I'm not sure Intel even has any plans for discrete Thunderbolt controllers after Titan Ridge, but with no PCIe 4.0 on Cascade Lake, there's not much they can even do to improve on it anyway.

    So maybe the new Mac Pro is a Q4 2019 product and will have Cascade Lake Xeon W and a more pro-oriented yet Navi-based GPU?
  • sing_electric - Thursday, February 7, 2019 - link

    Possibly, but I'm not 100% sure that they need to beat the iMac Pro on core count to have a product. More RAM (with a lot of slots that a user can get to) and a socketed CPU with better thermals than you can get on the back of a display might do it. I'd tend to think that moving to Threadripper (or EPYC) is a pipe dream, partly because of Thunderbolt support (which I guess, now that it's open, Apple could THEORETICALLY add, but it just seems unlikely at this point, particularly since there'd be things where an Intel-based iMac Pro might beat a TR-based Mac Pro, and Apple doesn't generally like complexities like that).

    Also, I'd assumed that stuff like DSC support would be one of the changes between the consumer and Pro versions (and AMD's Radeon Pro WX 7100 already does DSC, so it's not like they don't have the ability to add it to pro GPUs).
