Integrated GPU Testing

Switching gears from high-performance discrete GPUs, we move on to integrated GPUs. At a high level, the gains from DirectX 12 are not going to be quite as large here as they are with dGPUs due to the much lower GPU performance, but there is still ample opportunity to benefit from increased draw call performance.

Here we have Intel’s Haswell CPUs and AMD’s Kaveri APUs. We'll start off with the higher-end processors: the Intel Core i3/i5/i7 and the AMD A10/A8.

3DMark API Overhead iGPU Scaling (High-End)

As expected, at the high end the performance gains from DirectX 12 are not quite as great as they were with the dGPUs, but we’re still seeing significant improvements. The largest gains are of course found with the AMD processors, thanks to their much stronger iGPUs. From DX11ST to DX12 we’re seeing a surprisingly large 6.8x increase in draw call performance, from 655K to 4,470K draw calls per second.

As is to be expected with a relatively weak CPU, AMD’s DX11 draw call performance isn’t very strong here, particularly relative to its strong iGPU and of course our more powerful dGPU setups. Still, it ends up being better than Intel’s (despite Intel otherwise having the stronger CPU), so we see AMD offering better draw call throughput at all levels. Ultimately, what this amounts to is that AMD has quite a bit more potential under DX12.

Mantle meanwhile delivers a very slight edge over DX12 here, although for all practical purposes the two should be considered tied.

Meanwhile for the Intel CPUs, the gains from DX12 aren’t quite as large as with the AMD processors, but they’re still significant, and this is why Intel is happily backing DX12. All three processors share the same GT2 GPU and see similar gains. Starting from a baseline of 625K draw calls under DX11 – almost identical to AMD – the i7-4790K jumps up by 3.2x to 2,033K draw calls under DX12. The i5 and i3 processors reach 1,977K and 1,874K respectively, and after adjusting for clockspeeds it’s clear that we’re GPU command processor limited at all times here, which is why even a dual-core i3 can deliver similar gains.
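The speedup multipliers quoted above follow directly from the raw 3DMark figures; as a quick sanity check, a short sketch of the arithmetic (only the explicitly quoted draw-call numbers are used here):

```python
# Peak draw calls per second (in thousands), taken from the results above.
results = {
    "AMD A10 (DX11ST -> DX12)": (655, 4470),
    "Intel i7-4790K (DX11 -> DX12)": (625, 2033),
}

for name, (before, after) in results.items():
    # Ratio of DX12 throughput to the DX11 baseline.
    print(f"{name}: {after / before:.2f}x increase")
```

Running this reproduces the roughly 6.8x (AMD) and 3.2x (Intel) figures discussed above.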

Intel does end up seeing the smallest gains here, but even in this sort of worst-case scenario of a powerful CPU paired with a weak GPU, DX12 still improves draw call performance by over 3x. This means that in the long run, even games targeting lower-performance PCs stand to see a major increase in the number of draw calls they can use thanks to DirectX 12.
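To put these throughput numbers in per-frame terms – a hypothetical back-of-the-envelope calculation, not part of the 3DMark test itself – dividing draw calls per second by a target frame rate gives the draw call budget available to a game each frame:

```python
def per_frame_budget(draw_calls_per_sec: int, fps: int) -> int:
    """Draw calls available per frame at a given target frame rate."""
    return draw_calls_per_sec // fps

# i7-4790K iGPU figures from above: ~625K calls/s under DX11,
# ~2,033K calls/s under DX12. 60 fps is an assumed target.
print(per_frame_budget(625_000, 60))    # DX11 budget per frame
print(per_frame_budget(2_033_000, 60))  # DX12 budget per frame
```

At an assumed 60 fps, that works out to roughly 10K draw calls per frame under DX11 versus roughly 34K under DX12 on the same hardware.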

3DMark API Overhead iGPU Scaling (Low-End)

The story is much the same with our lower performance processors. AMD continues to see the largest gains and largest absolute performance under DirectX 12. With a 7x performance increase for the A8, even this weaker processor benefits greatly from the use of a low-level API.

The Intel processors again see smaller gains, but they are still significant. Even the Pentium, with its basic GT1 GPU and its pair of relatively low-clocked CPU cores, sees a 2.7x increase in draw call performance from DirectX 12.

113 Comments

  • Barilla - Friday, March 27, 2015 - link

    Yeah, the 285 might outperform the 980, but keep in mind this is a very specific test focusing on only one aspect of rendering a frame. I mean, a man can accelerate faster than an F1 car over a very short distance of a few meters, but that doesn't really mean much in the real world.
    Keeping my fingers crossed though, since I've always been an AMD fan and I hope they can regain some market share.
  • AndrewJacksonZA - Friday, March 27, 2015 - link

    What @Barilla said.
  • akamateau - Thursday, April 30, 2015 - link

    ALL Radeon will outperform nVidia if the Radeon dGPU is fed by AMD silicon. Intel degrades AMD Radeon silicon.
  • lowlymarine - Friday, March 27, 2015 - link

    The Wii U is based on PowerPC 7xx/G3 and RV770, not ARM or GCN. Unless you're referring to the recently-announced "NX", which for all we know may not even be a traditional home console.
  • eanazag - Friday, March 27, 2015 - link

    I did some math on what available information there is for the 390 versus the Titan and it seems to go toe-to-toe. If it has a lead, it won't be huge. I compared some leaked slides with the numbers Anandtech had for the Titan review. I suspect it will use a lot more electricity though and create more heat.

    We can likely expect it to have much more compute built-in.
  • Refuge - Friday, March 27, 2015 - link

    It doesn't really say anything about the performance of the 285x or the 980, or any of the others for that matter.

    Just because they can make a couple more million draw calls a second doesn't mean you will ever see anything.

    Just means the video card is really good at screaming for more work, not doing it. Hell, these draw call numbers are way beyond anything realistic anyhow; you will NEVER have one of these GPUs make half as many draw calls as shown in this test in any real-world usage scenario.
  • Vayra - Saturday, March 28, 2015 - link

    If anything I would say that the Nvidia cards are more refined and more balanced, based on these draw call results. Nvidia has optimized more to get the most out of Dx11 while AMD shows a lead on actual hardware capacity through the greater gains both relative and absolute on draw call numbers. It is the very same trend you also see in the amount of hardware both companies use in their top tiered cards to achieve similar performance - AMD uses more, Nvidia uses less and wins on efficiency gains.
  • Crunchy005 - Monday, March 30, 2015 - link

    Well AMD does win at double precision, even over the Titan X. Nvidia pulled a lot of the double precision hardware to save on power, one of the ways Maxwell is more efficient. This isn't a bad thing in the gaming community, but it ruins the Titan X for a lot of compute scenarios. So Nvidia really did lose out a lot in one area to beat AMD at efficiency.

    http://anandtech.com/show/9059/the-nvidia-geforce-...
  • akamateau - Thursday, April 30, 2015 - link

    If Anandtech benched Radeon silicon being fed by AMD FX or A10 then NO INTEL/nVidia silicon would even come close to AMD GCN enabled Asynchronous Shader hardware. Intel and nVidia are now second rate silicon in a DX12 world.

    Why do you think so many folks trashed MANTLE? FUD!!!!
  • xenol - Friday, March 27, 2015 - link

    Even if there were no consoles, games wouldn't be targeted for high end PCs. They will be targeted for lower end PCs to increase the amount of market share they can reach. Maybe once in a blue moon, some developer who doesn't care about that will make the next Crysis.
