Integrated GPU Testing

Switching gears from high performance discrete GPUs, we have our integrated GPUs. At a high level, the gains from DirectX 12 are not going to be quite as large here as they are with dGPUs due to the much lower GPU performance, but there is still ample opportunity to benefit from increased draw call performance.

Here we have Intel’s Haswell CPUs and AMD’s Kaveri APUs. We'll start off with the higher-end processors, the Intel Core i3/i5/i7 and AMD A10/A8.

3DMark API Overhead iGPU Scaling (High-End)

As expected, at the high-end the performance gains from DirectX 12 are not quite as great as they were with the dGPUs, but we’re still seeing significant gains. The largest gains of course are found with the AMD processors, thanks to their much stronger iGPUs. From DX11ST to DX12 we’re seeing a surprisingly large 6.8x increase in draw call performance, from 655K to 4,470K.

As is to be expected with a relatively weak CPU, AMD’s DX11 draw call performance isn’t very strong here relative to its strong iGPU, and of course relative to our more powerful dGPU setups. Still, it ends up being better than Intel (who otherwise has the stronger CPU), so we see AMD offering better draw call throughput at all levels. Ultimately what this amounts to is that AMD has quite a bit more potential under DX12.

Mantle meanwhile delivers a very slight edge over DX12 here, although for all practical purposes the two should be considered tied.

Meanwhile for the Intel CPUs, the gains from DX12 aren’t quite as large as with the AMD processors, but they’re still significant, and this is why Intel is happily backing DX12. All three processors share the same GT2 GPU and see similar gains. Starting from a baseline of 625K draw calls under DX11 – almost identical to AMD – the i7-4790K jumps up by 3.2x to 2,033K draw calls under DX12. The i5 and i3 processors reach 1,977K and 1,874K respectively, and after adjusting for clockspeeds it’s clear that we’re GPU command processor limited at all times here, which is why even a 2-core i3 can deliver similar gains.
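The scaling factors quoted above can be reproduced directly from the raw throughput figures. A minimal sketch (using only the draw call numbers reported in this article):

```python
# Draw call throughput reported by 3DMark's API Overhead test (calls per second).
# "DX11ST" is the single-threaded Direct3D 11 path.
results = {
    "A10 DX11ST": 655_000,
    "A10 DX12": 4_470_000,
    "i7-4790K DX11ST": 625_000,
    "i7-4790K DX12": 2_033_000,
}

def scaling(before: float, after: float) -> float:
    """Throughput multiplier going from the old API to the new one."""
    return after / before

amd_gain = scaling(results["A10 DX11ST"], results["A10 DX12"])
intel_gain = scaling(results["i7-4790K DX11ST"], results["i7-4790K DX12"])
print(f"AMD A10:  {amd_gain:.1f}x")    # ~6.8x
print(f"i7-4790K: {intel_gain:.2f}x")  # ~3.25x, rounded to 3.2x in the text
```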

Intel does end up seeing the smallest gains here, but again, even in this sort of worst case scenario of a powerful CPU paired with a weak GPU, DX12 still improves draw call performance by over 3.2x. This means that in the long run, even games targeting lower-performance PCs still stand to see a major increase in the number of draw calls they can use thanks to DirectX 12.

3DMark API Overhead iGPU Scaling (Low-End)

The story is much the same with our lower performance processors. AMD continues to see the largest gains and largest absolute performance under DirectX 12. With a 7x performance increase for the A8, even this weaker processor benefits greatly from the use of a low-level API.

The Intel processors once again see smaller gains, but those gains are still significant. Even the Pentium, with its basic GT1 GPU and pair of relatively low clocked CPU cores, sees a 2.7x increase in draw call performance from DirectX 12.

Comments

  • Ryan Smith - Sunday, March 29, 2015 - link

    DX12 brings two benefits in this context:

    1) Much, much less CPU overhead in submitting draw calls

    2) Better scaling out with core count

    Even though we can't take advantage of #2, we take advantage of #1. DX11ST means you have 1 relatively inefficient thread going, whereas DX12 means you have 2 (or 4 depending on HT) highly efficient threads going.
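Ryan's two benefits can be sketched with a toy throughput model. All of the numbers below are hypothetical, chosen only to illustrate the shape of the argument, not measured values:

```python
# Toy model of the two DX12 benefits described above:
#   1) lower per-call CPU overhead -> fewer CPU cycles burned per draw call
#   2) scaling out with core count -> more threads submitting in parallel

def throughput(threads: int, cycles_per_call: int,
               cycles_per_sec: float = 3.5e9) -> float:
    """Draw calls/sec if each thread spends cycles_per_call CPU cycles per call."""
    return threads * cycles_per_sec / cycles_per_call

# One relatively inefficient DX11ST thread vs. several efficient DX12 threads.
dx11st = throughput(threads=1, cycles_per_call=5000)
dx12 = throughput(threads=4, cycles_per_call=1500)
print(f"DX11ST: {dx11st / 1e6:.2f}M calls/s")
print(f"DX12:   {dx12 / 1e6:.2f}M calls/s")
```

Note that even with `threads=1`, the lower per-call cost (benefit #1) alone produces a large gain, which matches the observation that DX12 helps even when the workload is ultimately GPU command processor limited.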
  • LoccOtHaN - Saturday, March 28, 2015 - link

    Hmm, Where are FX 8350 + and FX x4 x6 or Phenom x4 & x6 Tests?
    Lot of people have those CPU's, and i mean LOT of People ;-)
  • flabber - Saturday, March 28, 2015 - link

    It's too bad that AMD is at the end of the road. They were putting out some good technology. Or at least, pushing for technology to improve.
  • Michael Bay - Sunday, March 29, 2015 - link

    Intel will never let them die.
  • deruberhanyok - Saturday, March 28, 2015 - link

    Does this mean we could see games developed to similar levels of graphical fidelity as current ones, but performance significantly higher?

    In which case, current graphics hardware could, in theory, run a game in a 4k resolution at much higher framerates today, all other things being equal? Or run at a lower resolution at much higher sustained framerates (making a 120hz display suddenly a useful thing to have)?

    Or, put another way: does the increased CPU overhead, which allows for significantly more draw calls, mean that developers will only see a benefit with more detail/objects on the screen, or could someone, for instance, take a current game with a D3D11 renderer, add a D3D12 renderer to it, and get huge performance increases? I don't think we've seen that with Mantle, so I'm assuming it isn't the case?
  • Michael Bay - Sunday, March 29, 2015 - link

You probably won't get 4K out of middle to low-end cards of today, as it is also a memory size and bandwidth issue, but framerates could improve I think.
  • Gigaplex - Monday, March 30, 2015 - link

    4k performance is generally ROP limited, not draw call limited. This won't help a whole lot.
  • Uplink10 - Saturday, March 28, 2015 - link

Too bad publishers won't have developers "remaster" older video games with DX12. Only new games will benefit from this.
  • lukeiscool10 - Saturday, March 28, 2015 - link

Why do AMD and Nvidia fanboys continue to bitch at each other? Take a moment to realise we're both going to be getting great looking games, but one thing holds us back: consoles. So direct your hate towards them, as they are holding the PC back.
  • jabber - Monday, March 30, 2015 - link

Maybe because we are entering a new age when cards are not worth measuring on FPS alone in most cases, and that's going to take a lot of the fun out of the fanboy wars.

    To be honest unless you are running multi monitor/ultra high res just save up $200 and choose the card that looks best in your case.
