The 2017 GPU Benchmark Suite & the Test

Paired with our RX Vega 64 and 56 review is a new benchmark suite and new testbed. The 2017 GPU suite features new games, as well as new compute and synthetic benchmarks.

Games-wise, we have kept Grand Theft Auto V and Ashes of the Singularity: Escalation. Joining them is Battlefield 1, DOOM, Tom Clancy’s Ghost Recon Wildlands, Warhammer 40,000: Dawn of War III, Deus Ex: Mankind Divided, F1 2016, and Total War: Warhammer. All in all, these games span multiple genres, differing graphics workloads, and contemporary APIs, with a nod towards modern and relatively intensive games. Additionally, we have retired the venerable Crysis 3 as our mainline power-testing game in favor of Battlefield 1.

AnandTech GPU Bench 2017 Game List
Game                                 | Genre                 | Release Date | API(s)
Battlefield 1                        | FPS                   | Oct. 2016    | DX11 (DX12)
Ashes of the Singularity: Escalation | RTS                   | Mar. 2016    | DX12 (DX11)
DOOM (2016)                          | FPS                   | May 2016     | Vulkan (OpenGL 4.5)
Ghost Recon Wildlands                | FPS/3PS               | Mar. 2017    | DX11
Dawn of War III                      | RTS                   | Apr. 2017    | DX11
Deus Ex: Mankind Divided             | RPG/Action/Stealth    | Aug. 2016    | DX11 (DX12)
Grand Theft Auto V                   | Action/Open world     | Apr. 2015    | DX11
F1 2016                              | Racing                | Aug. 2016    | DX11
Total War: Warhammer                 | TBS/Real-time tactics | May 2016    | DX11 + DX12

In terms of data collection, measurements were gathered either using built-in benchmark tools or with AMD's open-source Open Capture and Analytics Tool (OCAT), which is itself powered by Intel's PresentMon. 99th percentiles were obtained or calculated in a similar fashion: OCAT natively reports 99th percentiles, GTA V's built-in benchmark includes 99th percentiles, and both Ashes: Escalation and Total War: Warhammer's built-in benchmarks output raw frame time data from which we calculate them. Dawn of War III continues to suffer from its misconfigured built-in benchmark calculations, so its native data cannot be used. In general, we prefer 99th percentiles over minimums, as they more accurately represent the gaming experience and filter out outliers that may not even be true results of the graphics card.
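
For the titles that only provide raw frame time logs, the 99th percentile figure has to be derived from the recorded frame times. The sketch below shows one way to do that in Python; it assumes a PresentMon/OCAT-style CSV with an "MsBetweenPresents" column, and the file name and exact post-processing are illustrative rather than the review's actual pipeline.

```python
# Minimal sketch: deriving average and 99th-percentile framerates from a raw
# frame time log (e.g. an OCAT capture). Column name and CSV layout follow
# PresentMon conventions and are assumptions for this example.
import csv

def percentile_99_fps(csv_path, column="MsBetweenPresents"):
    """Return (average FPS, 99th-percentile FPS) from a frame time log."""
    with open(csv_path, newline="") as f:
        frame_times_ms = [float(row[column]) for row in csv.DictReader(f)]

    frame_times_ms.sort()
    # The 99th-percentile frame time is the value that 99% of frames beat;
    # converting it to FPS gives the "99th percentile" figure used in charts.
    idx = min(int(len(frame_times_ms) * 0.99), len(frame_times_ms) - 1)
    p99_frame_time = frame_times_ms[idx]

    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    p99_fps = 1000.0 / p99_frame_time
    return avg_fps, p99_fps

if __name__ == "__main__":
    avg, p99 = percentile_99_fps("ocat_run.csv")  # hypothetical capture file
    print(f"Average: {avg:.1f} fps, 99th percentile: {p99:.1f} fps")
```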

We are continuing to use the best API for a given card when given a choice. As before, we use DirectX 12 for Ashes of the Singularity: Escalation, as it was natively designed for that API. For DOOM (2016), Vulkan is an improvement over OpenGL 4.5 across the board; for those not in the know, Vulkan is to OpenGL roughly what DX12 is to DX11. We stick to DX11 for Battlefield 1, with its persistent DX12 performance issues in mind, and similar reasoning applies to Deus Ex: Mankind Divided, where DX12 did not appear to give the best performance for RX Vega.

In the same vein, we have used DX12 for Total War: Warhammer when testing AMD cards, but we are still sussing out its exact effects on the Vega cards. With Vega running Total War: Warhammer, neither API is consistently faster than the other, and we are continuing to investigate.
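
Purely as an illustration, the per-game API choices described above can be expressed as a simple per-vendor lookup. The DX12-on-AMD split for Total War: Warhammer follows the text; treating NVIDIA cards as DX11 in that title, and the remaining titles as DX11-only, are assumptions for this sketch.

```python
# Illustrative lookup of the API used per game and GPU vendor in this suite.
# Entries not listed fall back to DX11 (an assumption for the sketch).
API_CHOICE = {
    "Ashes of the Singularity: Escalation": {"AMD": "DX12",   "NVIDIA": "DX12"},
    "DOOM (2016)":                          {"AMD": "Vulkan", "NVIDIA": "Vulkan"},
    "Battlefield 1":                        {"AMD": "DX11",   "NVIDIA": "DX11"},
    "Deus Ex: Mankind Divided":             {"AMD": "DX11",   "NVIDIA": "DX11"},
    "Total War: Warhammer":                 {"AMD": "DX12",   "NVIDIA": "DX11"},
}

def api_for(game: str, vendor: str) -> str:
    """Return the API used for a given game/vendor pair, defaulting to DX11."""
    return API_CHOICE.get(game, {}).get(vendor, "DX11")
```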

2017 GPU Compute and Synthetics

We have also updated our compute and synthetics suites, which are now as follows:

  • Compute: Blender 2.79 – BlenchMark
  • Compute: CompuBench 2.0 – Level Set Segmentation 256
  • Compute: CompuBench 2.0 – N-Body Simulation 1024K
  • Compute: CompuBench 2.0 – Optical Flow
  • Compute: Folding @ Home Single Precision
  • Compute: Geekbench 4 – GPU Compute – Total Score
  • Synthetics: TessMark, Image Set 4, 64x Tessellation
  • Synthetics: VRMark Orange
  • Synthetics: Beyond3D Suite – Pixel Fillrate
  • Synthetics: Beyond3D Suite – Integer Texture Fillrate (INT8)
  • Synthetics: Beyond3D Suite – Floating Point Texture Fillrate (FP32)

Testing with Vega

Testing was done with default configurations with respect to the High-Bandwidth Cache Controller (HBCC) and BIOS/power profiles. By default, HBCC is disabled in Radeon Software. As for power profiles, both Vega 64 and 56 come with primary and secondary VBIOS modes, each having three profiles in WattMan: Power Saver, Balanced, and Turbo. By default, both cards use the primary VBIOS' Balanced power profile.

GPU Power Limits for RX Vega Power Profiles
            | Radeon RX Vega 64 Air           | Radeon RX Vega 56
            | Primary VBIOS | Secondary VBIOS | Primary VBIOS | Secondary VBIOS
Power Saver | 165W          | 150W            | 150W          | 135W
Balanced    | 220W          | 200W            | 165W          | 150W
Turbo       | 253W          | 230W            | 190W          | 173W

Toggling a small switch on the card away from the PCIe bracket selects the lower-power secondary VBIOS, while a slider in Radeon WattMan switches between the Power Saver, Balanced, Turbo, and Custom performance profiles. In total, each card has six different power profiles to choose from; the RX Vega 64 Liquid has its own set of six profiles as well, ranging from 165W to 303W. We don't expect Turbo mode to significantly change results: for Turbo vs. Balanced, AMD themselves cited a performance increase of about 2% at 4K.
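
For reference, those power limits lend themselves to a simple lookup keyed on card, VBIOS, and profile. The sketch below encodes the table's values; the card-name keys are shorthand rather than official product strings.

```python
# Minimal sketch: GPU power limits (watts) from the table above, keyed on
# (card, VBIOS position) and WattMan profile. Key names are informal shorthand.
POWER_LIMITS_W = {
    ("RX Vega 64 Air", "primary"):   {"Power Saver": 165, "Balanced": 220, "Turbo": 253},
    ("RX Vega 64 Air", "secondary"): {"Power Saver": 150, "Balanced": 200, "Turbo": 230},
    ("RX Vega 56",     "primary"):   {"Power Saver": 150, "Balanced": 165, "Turbo": 190},
    ("RX Vega 56",     "secondary"): {"Power Saver": 135, "Balanced": 150, "Turbo": 173},
}

def power_limit(card: str, vbios: str = "primary", profile: str = "Balanced") -> int:
    """GPU power limit in watts; defaults match the as-tested configuration."""
    return POWER_LIMITS_W[(card, vbios)][profile]
```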

The New 2017 GPU Skylake-X Testbed

Last, but certainly not least, we have a new testbed running these benchmarks and games. For that reason, historical results cannot be directly compared with the results in this review.

CPU: Intel Core i7-7820X @ 4.3GHz
Motherboard: Gigabyte X299 AORUS Gaming 7
Power Supply: Corsair AX860i
Hard Disk: OCZ Toshiba RD400 (1TB)
Memory: G.Skill TridentZ DDR4-3200, 4 x 8GB (16-18-18-38)
Case: NZXT Phantom 630 Windowed Edition
Monitor: LG 27UD68P-B
Video Cards: AMD Radeon RX Vega 64 (Air Cooled)
             AMD Radeon RX Vega 56
             AMD Radeon RX 580
             AMD Radeon R9 Fury X
             NVIDIA GeForce GTX 1080 Ti Founders Edition
             NVIDIA GeForce GTX 1080 Founders Edition
             NVIDIA GeForce GTX 1070 Founders Edition
Video Drivers: NVIDIA Release 384.65
               AMD Radeon Software Crimson ReLive Edition 17.7.2 (non-Vega cards)
               AMD Radeon Software Crimson Press Beta 17.30.1051 (RX Vega cards)
OS: Windows 10 Pro (Creators Update)

Comments

  • tipoo - Monday, August 14, 2017 - link

    For a while those APUs were floating them while their standalone CPUs and GPUs struggled. Maybe they've gotten too slim for three strong tentpoles, alas, and one will always suffer.
  • Da W - Tuesday, August 15, 2017 - link

    The Bulldozer core sucked next to the "ok" GCN iGPU. It was very bandwidth dependent and the iGPU was starving for data most of the time. There was no point in pushing another 'dozer APU. They were waiting for the Ryzen core and Infinity Fabric to feed the GPU. Vega is just launching now, and if you noticed, AMD is only making one 8-core monolithic die sold in multiple packages (1 for Ryzen, 2 for Threadripper, 4 for Epyc). They have yet to cut that 8-core die in half and integrate their new Vega core in there, which is, I believe, what R&D is doing as of this morning....
  • Yaldabaoth - Monday, August 14, 2017 - link

    So, the TL;DR is that the Vega 64 competes on (relatively) cheap computing power and perhaps 4K gaming, and the Vega 56 competes on (relatively) very cheap computing power and being a value for 1440p gaming? Neither seems to compete on efficiency.
  • tipoo - Monday, August 14, 2017 - link

    Vega 56 seems well positioned for now. 1070 performance at a decently lower price. Question is if Nvidia can/will drop that price on a whim with enough margin (with a smaller die in theory they could, but AMD is probably getting low margins on these). Vega 64 is a far less clear value prospect, in one way it's similar to the 1070 vs 1080, but with Nvidia you're actually getting the best, which 64 can't claim.
  • Jumangi - Monday, August 14, 2017 - link

    That's the big unknown. I suspect Nvidia is playing with much better margins than AMD when you look at the competing chips here. Nvidia can lower prices on the 1070 to squeeze AMD if they want and still make a good profit.
  • webdoctors - Monday, August 14, 2017 - link

    The Vega 56 is so cheap for the hardware you get that I wonder if it's being sold at a loss. I commented earlier that I thought these chips would be selling for double what they released at, and if they're profitable at this price point AMD might have some secret low-cost manufacturing technology that is worth more than their entire company right now.

    As a consumer I'm practically getting paid to take it LOL.
  • tipoo - Monday, August 14, 2017 - link

    I doubt it's at a loss, but it's probably at a very slim margin. Nvidia could potentially split the difference with a 50 dollar drop and still have the smaller cheaper die (presumably, if TSMC/Glofo cost similar).
  • Drumsticks - Monday, August 14, 2017 - link

    Great review, Ryan and Nate. I totally agree with your comment at the end about where Vega was designed. Relative to Nvidia, it's a further step back in almost every metric you can measure - perf/W, perf/mm^2, absolute perf of the high-end flagship...

    You really have to hope AMD can find one more rabbit in their hat a year or two from now. Nevertheless, the Vega 56 looks like an impressive product, but you can't be happy about getting 8% more performance out of something >50% larger in silicon.
  • Morawka - Monday, August 14, 2017 - link

    Yup, and next-generation memory to boot. AMD needs better GPU designers. If not for crypto, AMD would be in serious trouble.
  • Threska - Thursday, April 4, 2019 - link

    Hello. I'm writing from the future and I bring important news about Google Stadia.

    " To make it possible on its servers, Google has combined an x86 processor (likely an Intel one) with hyperthreading that runs at 2.7GHz, with 16GB of RAM, and a custom AMD graphics chip. It’s said to uses HBM 2 and has 56 compute units, delivering enough raw horsepower for 10.7 TFlops.

    That sounds like a modified Vega 56, although it’s equally possible that it’s one of AMD’s upcoming Navi line of graphics cards."

    https://www.digitaltrends.com/gaming/google-stadia...
