The 2017 GPU Benchmark Suite & the Test

Paired with our RX Vega 64 and 56 review is a new benchmark suite and new testbed. The 2017 GPU suite features new games, as well as new compute and synthetic benchmarks.

On the games side, we have kept Grand Theft Auto V and Ashes of the Singularity: Escalation. Joining them are Battlefield 1, DOOM, Tom Clancy’s Ghost Recon Wildlands, Warhammer 40,000: Dawn of War III, Deus Ex: Mankind Divided, F1 2016, and Total War: Warhammer. All in all, these games span multiple genres, differing graphics workloads, and contemporary APIs, with a nod towards modern and relatively intensive games. Additionally, we have retired the venerable Crysis 3 as our mainline power-testing game in favor of Battlefield 1.

AnandTech GPU Bench 2017 Game List
Game | Genre | Release Date | API(s)
Battlefield 1 | FPS | Oct. 2016 | DX11 (DX12)
Ashes of the Singularity: Escalation | RTS | Mar. 2016 | DX12 (DX11)
DOOM (2016) | FPS | May 2016 | Vulkan (OpenGL 4.5)
Ghost Recon Wildlands | FPS/3PS | Mar. 2017 | DX11
Dawn of War III | RTS | Apr. 2017 | DX11
Deus Ex: Mankind Divided | RPG/Action/Stealth | Aug. 2016 | DX11 (DX12)
Grand Theft Auto V | Action/Open world | Apr. 2015 | DX11
F1 2016 | Racing | Aug. 2016 | DX11
Total War: Warhammer | TBS/Real-time tactics | May 2016 | DX11 + DX12

In terms of data collection, measurements were gathered either using built-in benchmark tools or with AMD's open-source Open Capture and Analytics Tool (OCAT), which is itself powered by Intel's PresentMon. 99th percentiles were obtained or calculated in a similar fashion: OCAT natively reports 99th percentiles, GTA V's built-in benchmark includes them, and the built-in benchmarks of both Ashes: Escalation and Total War: Warhammer output raw frame time data. Dawn of War III continues to suffer from its misconfigured built-in benchmark calculations, so its native data cannot be used. In general, we prefer 99th percentiles over minimums, as they more accurately represent the gaming experience and filter out outliers that may not even be true results of the graphics card.
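Where only raw frame time data is available, the 99th percentile figure has to be derived by hand. The sketch below is illustrative only: the frame time values are made up, and the `percentile` helper is a hypothetical stand-in for whatever analysis script is actually used. It shows how a 99th percentile frame time is interpolated from the samples and converted into the framerate figure used in charts:

```python
# Illustrative sketch: derive a 99th-percentile framerate from raw
# frame time data (in milliseconds) as exported by a benchmark tool.
# The sample values below are made up for demonstration.

def percentile(data, pct):
    """Linear-interpolation percentile of a list of samples."""
    s = sorted(data)
    k = (len(s) - 1) * pct / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

frame_times_ms = [16.1, 16.4, 15.9, 17.2, 33.5, 16.0, 16.3, 18.1, 16.2, 16.5]

# 99% of frames rendered at least this fast. The single slow outlier
# frame dominates this figure, which is why it tracks perceived
# smoothness better than an average or a raw minimum does.
p99_ms = percentile(frame_times_ms, 99)

# Convert frame times to the framerate figures used in charts.
p99_fps = 1000.0 / p99_ms
avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

print(f"Average: {avg_fps:.1f} fps, 99th percentile: {p99_fps:.1f} fps")
```

Note how the single 33.5 ms hitch barely moves the average but pulls the 99th percentile figure far below it, which is the behavior the paragraph above describes.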

We are continuing to use the best API for a given card when given a choice. As before, we use DirectX 12 for Ashes of the Singularity: Escalation, which was designed natively for that API. For DOOM (2016), Vulkan is an improvement over OpenGL 4.5 across the board, and for those not in the know, Vulkan is roughly comparable to OpenGL in the same way DX12 is to DX11. We also stick to DX11 for Battlefield 1, with its persistent DX12 performance issues in mind, and similar reasoning applies to Deus Ex: Mankind Divided, where DX12 did not appear to give the best performance on RX Vega.

In the same vein, we have used DX12 for Total War: Warhammer when testing AMD cards, but we are still sussing out the exact effects on the Vega cards. With Vega running Total War: Warhammer, neither API seems to be absolutely better performing than the other, and we are continuing to investigate.

2017 GPU Compute and Synthetics

We have also updated our compute and synthetics suites, which are now as follows:

  • Compute: Blender 2.79 - BlenchMark
  • Compute: CompuBench 2.0 – Level Set Segmentation 256
  • Compute: CompuBench 2.0 – N-Body Simulation 1024K
  • Compute: CompuBench 2.0 – Optical Flow
  • Compute: Folding @ Home Single Precision
  • Compute: Geekbench 4 – GPU Compute – Total Score
  • Synthetics: TessMark, Image Set 4, 64x Tessellation
  • Synthetics: VRMark Orange
  • Synthetics: Beyond3D Suite – Pixel Fillrate
  • Synthetics: Beyond3D Suite – Integer Texture Fillrate (INT8)
  • Synthetics: Beyond3D Suite – Floating Point Texture Fillrate (FP32)

Testing with Vega

Testing was done with default configurations with respect to the High-Bandwidth Cache Controller (HBCC) and BIOS/power profiles. By default, HBCC is disabled in Radeon Software. As for power profiles, both Vega 64 and 56 come with primary and secondary VBIOS modes, each having three profiles in WattMan: Power Saver, Balanced, and Turbo. By default, both cards use the primary VBIOS' Balanced power profile.

GPU Power Limits for RX Vega Power Profiles
Profile | RX Vega 64 Air (Primary / Secondary VBIOS) | RX Vega 56 (Primary / Secondary VBIOS)
Power Saver | 165W / 150W | 150W / 135W
Balanced | 220W / 200W | 165W / 150W
Turbo | 253W / 230W | 190W / 173W

A small switch on the cards can be toggled away from the PCIe bracket for the lower-power secondary VBIOS. In Radeon WattMan, a slider permits switching between the Power Saver, Balanced, Turbo, and Custom performance profiles. In total, each card has six different power profiles to choose from. RX Vega 64 Liquid has its own set of six profiles as well, ranging from 165W to 303W. We don't expect Turbo mode to significantly change results: for Turbo vs. Balanced, AMD themselves cited a performance increase of about 2% at 4K.

The New 2017 GPU Skylake-X Testbed

Last, but certainly not least, we have a new testbed running these benchmarks and games. For that reason, historical results cannot be directly compared with the results in this review.

CPU: Intel Core i7-7820X @ 4.3GHz
Motherboard: Gigabyte X299 AORUS Gaming 7
Power Supply: Corsair AX860i
Hard Disk: OCZ Toshiba RD400 (1TB)
Memory: G.Skill TridentZ DDR4-3200 4 x 8GB (16-18-18-38)
Case: NZXT Phantom 630 Windowed Edition
Monitor: LG 27UD68P-B
Video Cards: AMD Radeon RX Vega 64 (Air Cooled)
AMD Radeon RX Vega 56
AMD Radeon RX 580
AMD Radeon R9 Fury X
NVIDIA GeForce GTX 1080 Ti Founders Edition
NVIDIA GeForce GTX 1080 Founders Edition
NVIDIA GeForce GTX 1070 Founders Edition
Video Drivers: NVIDIA Release 384.65
AMD Radeon Software Crimson ReLive Edition 17.7.2 (for non-Vega cards)
AMD Radeon Software Crimson Press Beta 17.30.1051
OS: Windows 10 Pro (Creators Update)

  • Ryan Smith - Tuesday, August 15, 2017 - link

    3 CUs per array is a maximum, not a fixed amount. Each Hawaii shader engine had a 4/4/3 configuration, for example.

    http://images.anandtech.com/doci/7457/HawaiiDiagra...

    So in the case of Vega 10, it should be a 3/3/3/3/2/2 configuration.
  • watzupken - Tuesday, August 15, 2017 - link

    I think the performance is in line with recent rumors and my expectations. The fact that AMD beat around the bush to release Vega was a telltale sign. Unlike Ryzen, where they marketed how well it runs in the likes of Cinebench, beating the gong and such, AMD revealed nothing on benchmarks for Vega throughout the year, just as they did when they first released Polaris.
    The hardware is no doubt forward-looking, but where it needs to matter most, I feel AMD may have fallen short. It seems like the way around this is probably to design a new GPU from scratch.
  • Yojimbo - Wednesday, August 16, 2017 - link

    "It seems like the way around is probably to design a new GPU from scratch. "

    Well, perhaps, but I do think with more money they could be doing better with what they've got. They made the decision to focus on reviving their CPU business with their resources, however.

    They probably have been laying the groundwork for an entirely new architecture for some time, though. My belief is that APUs were of primary concern when originally designing GCN. They were hoping to enable heterogeneous computing, but it didn't work out. If that strategy did tie them down somewhat, their next gen architecture should free them from those tethers.
  • Glock24 - Tuesday, August 15, 2017 - link

    Nice review, I'll say the outcome was expected given the Vega FE reviews.

    Other reviews state that the Vega 64 has a switch that sets the power limits, and you have "power saving", "normal" and "turbo" modes. From what I've read, the difference between the lowest and highest power limit is as high as 100W for about 8% more performance.

    It seems AMD did not reach the expected performance levels so they just boosted the clocks and voltage. Vega is like Skylake-X in that sense :P

    As others have mentioned, it would be great to have a comparison of Vega using Ryzen CPUs vs. Intel's CPUs.
  • Vertexgaming - Wednesday, August 16, 2017 - link

    It sucks so much that price drops on GPUs aren't a thing anymore because of miners. I have been upgrading my GPU every year and getting an awesome deal on the newest generation GPU, but now the situation has changed so much, that I will have to skip a generation to justify a $600-$800 (higher than MSRP) price tag for a new graphics card. :-(
  • prateekprakash - Wednesday, August 16, 2017 - link

    In my opinion, it would have been great if Vega 64 had a 16GB VRAM version at $100 more... That would be $599 apiece for the air-cooled version... That would future-proof it to run future 4K games (CF would benefit too)...

    It's too bad we still don't have 16GB consumer gaming cards, the Vega pro not being strictly for gamers...
  • Dosi - Wednesday, August 16, 2017 - link

    So the system does consume 91W more with Vega 64; can't imagine with the LC V64... it could be 140W more? Actually, what you saved on the GPU (V64 instead of a 1080) you already spent on the electricity bill...
  • versesuvius - Wednesday, August 16, 2017 - link

    NVIDIA obviously knows how to break GPU tasks down into chunks, process those chunks, and send them out the door better than AMD does. And more ROPs can certainly help AMD cards a lot.
  • peevee - Thursday, August 17, 2017 - link

    "as electrons can only move so far on a single (ever shortening) clock cycle"

    Seriously? Electrons? You think that how far electrons move matters? Sheesh.
  • FourEyedGeek - Tuesday, August 22, 2017 - link

    You being serious or sarcastic? If serious then you are ignorant.
