GPU Performance - Vega vs Iris

After many tests, it is very clear that Intel’s Ice Lake platform offers a significantly faster CPU, and the results were unsurprising. Although the Ryzen Mobile 3000 platform did launch in 2019, it already struggled on CPU tests against the older Skylake-core processors. But on the GPU side, Intel is the one that needs to play catch-up. Prior to Ice Lake, Intel’s standard GT2 GPU configuration, found on almost all 15-Watt U-series processors, offered 24 execution units of their Gen 9.5 graphics architecture. AMD squeezed their Vega GPU architecture into the Ryzen SoC, which could easily double the performance of that Gen 9.5 GT2 GPU.

Ice Lake is Intel’s first real attempt to make a powerful iGPU a standard feature of their CPUs, although it is only a first step. The new Gen 11 architecture brings improvements such as more advanced tile-based rendering and variable rate shading, and the LPDDR4X-3733 memory adds significant bandwidth, which greatly helps the GPU. The biggest change, though, is just how much die space Intel has dedicated to graphics, jumping from 24 EUs on a full GT2 design to 64 EUs on a full G7 part such as the Core i7-1065G7. And, following in AMD’s footsteps again, Intel is offering cut-down GPUs on lower-spec processors. It has muddled their already confusing processor naming, but the lowest-spec part announced so far still has 32 EUs, meaning even the “G1” level is better than the previous generation.
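To put the bandwidth point in perspective, here is a minimal sketch of the peak theoretical numbers, assuming the Ice Lake configuration runs LPDDR4X-3733 over four 32-bit channels and the Ryzen configuration runs dual-channel 64-bit DDR4-2400; the rates and widths are assumptions for illustration rather than measured figures.

```python
# Hedged sketch: peak theoretical memory bandwidth of the two assumed configurations.
# Transfer rates and bus widths are assumptions for illustration, not measured values.

def peak_bandwidth_gbps(transfer_rate_mts: float, channels: int, channel_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfers per second * total bus width in bytes."""
    bus_bytes = channels * channel_width_bits / 8
    return transfer_rate_mts * 1e6 * bus_bytes / 1e9

# Ice Lake: LPDDR4X-3733 on four 32-bit channels (128-bit total, assumed)
ice_lake = peak_bandwidth_gbps(3733, channels=4, channel_width_bits=32)

# Ryzen Surface Edition: DDR4-2400 on two 64-bit channels (128-bit total, assumed)
ryzen = peak_bandwidth_gbps(2400, channels=2, channel_width_bits=64)

print(f"Ice Lake LPDDR4X-3733: {ice_lake:.1f} GB/s")  # ~59.7 GB/s
print(f"Ryzen DDR4-2400:       {ryzen:.1f} GB/s")     # ~38.4 GB/s
print(f"Ratio:                 {ice_lake / ryzen:.2f}x")  # ~1.56x
```

Since both iGPUs feed entirely from system memory, peak bandwidth differences of this size tend to show up directly in graphics results.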

AMD has some tricks up their sleeves as well. For the Surface Laptop 3, Microsoft requested a slightly more powerful configuration for their Surface-branded processors. While the CPU side matches the specifications of the non-Surface parts, Microsoft's processor SKUs add an extra GPU Compute Unit to both the Ryzen 5 and Ryzen 7, bringing them to 9 and 11 CUs respectively. So the Surface Laptop 3 should be the best possible showcase for GPU performance on the Ryzen Mobile 3000 series APUs.
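As a rough way to frame those unit counts, the sketch below estimates peak FP32 throughput for the two iGPUs. The per-clock figures (128 FLOPs per Vega CU, 16 per Gen 11 EU) follow from each architecture's FMA width, but the clock speeds used (roughly 1.4 GHz for Vega 11 and 1.1 GHz for the 64 EU Iris Plus) are assumed boost clocks for illustration only.

```python
# Hedged sketch: peak theoretical FP32 throughput of the two iGPUs.
# Boost clocks are assumptions for illustration, not measured or guaranteed values.

def peak_fp32_gflops(units: int, flops_per_unit_per_clock: int, clock_ghz: float) -> float:
    """Peak GFLOPS = units * FLOPs per unit per clock * clock in GHz."""
    return units * flops_per_unit_per_clock * clock_ghz

# Vega CU: 64 stream processors x 2 FLOPs (FMA) = 128 FLOPs per clock
vega_11 = peak_fp32_gflops(11, 128, 1.4)    # Ryzen 7 Surface Edition, ~1.4 GHz assumed

# Gen 11 EU: 8 FP32 lanes x 2 FLOPs (FMA) = 16 FLOPs per clock
iris_g7 = peak_fp32_gflops(64, 16, 1.1)     # Core i7-1065G7 Iris Plus, ~1.1 GHz assumed

print(f"Vega 11:           {vega_11:.0f} GFLOPS")  # ~1971
print(f"Iris Plus (64 EU): {iris_g7:.0f} GFLOPS")  # ~1126
```

On paper Vega 11 holds the compute advantage, which is why memory bandwidth and drivers end up deciding how close the real-world results below actually are.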

Before the results, let’s go over the driver situation. The Intel system ships with a newer driver than the one we used on the Dell XPS 13 2-in-1, which resolves the 3DMark issues we saw on that laptop. The driver is from 2019-11-06 and is version 26.20.100.7463. The AMD platform’s driver is from 2019-10-07 and is version 26.20.12027.5004. Unfortunately, the AMD driver can’t be updated from AMD directly, and will instead be released by Microsoft. The current driver has some quirks that an update will need to address, but none of them prevented any GPU workloads from being run. However, the AMD system would only output at 1280x720 where we normally test at 1366x768, and attempts to output to an external monitor were thwarted by the buggy driver, so be aware that in most of the gaming tests the AMD system was rendering at a slightly lower resolution.

Let’s see how they do, starting with some synthetics and then moving on to some real-world games.

3DMark

Futuremark 3DMark Fire Strike

Futuremark 3DMark Sky Diver

Futuremark 3DMark Cloud Gate

Futuremark 3DMark Ice Storm Unlimited

Futuremark 3DMark Ice Storm Unlimited - Graphics

Futuremark 3DMark Ice Storm Unlimited - Physics

3DMark offers several tests of varying complexity, from Fire Strike as the most demanding, to Ice Storm Unlimited, which can be run on tablets. Here the Ice Lake platform pulls ahead, with better CPU performance helping quite a bit, although Ice Lake’s Iris Plus graphics is able to outperform Vega 11 as well.

GFXBench

GFXBench 5.0 Aztec Ruins Normal 1080p Offscreen

GFXBench 5.0 Aztec Ruins High 1440p Offscreen

Kishonti’s latest GFXBench suite added DirectX 12 tests to the fold, making it far more relevant than the older OpenGL versions previously available on the desktop. AMD’s earlier work on low-level APIs when they developed Mantle provided the groundwork for DX12, and Vega 11 offers slightly better results than Iris Plus in this test.

Tomb Raider

Tomb Raider - Value

Running at our value settings, Tomb Raider was easily playable on both systems, with framerates approaching 100 FPS. The Ice Lake platform performed better on this test.

Rise of the Tomb Raider

Rise of the Tomb Raider - Value

The second installment in the rebooted Tomb Raider series offers much more demanding visuals, and both systems struggle to run it at our value settings. The DirectX 12 title performs slightly better on Vega, and with some additional settings tweaks the game would be playable, which is not something you could have said of an integrated GPU prior to Ryzen and Ice Lake.

Strange Brigade

Strange Brigade - Value

A new title we’re bringing to our laptop suite is Strange Brigade, which scales down nicely on integrated graphics. This game also supports DirectX 12, and as tends to be the pattern, performs very well on Vega 11.

F1 2017

F1 2017 - Value

Back with a DirectX 11 title, we see that Intel has again closed the gap. This game tends to be somewhat CPU bottlenecked as well, so the Sunny Cove cores likely help out here, but once again Vega 11 wins, if only by a nose.

F1 2019

F1 2019 - Value

Codemasters updated the underlying EGO engine to support DirectX 12, which was utilized on this test. Despite that, the Vega 11 GPU is a bit slower than the Iris Plus in this test.

Far Cry 5

Far Cry 5 - Value

Both systems are within striking distance of being playable, which is somewhat remarkable since the Far Cry series is one of the most popular AAA first-person shooters. The Vega 11 GPU was slightly ahead, which is somewhat surprising as this game tends to be CPU bound, but clearly at GPU performance levels this low, the CPU bottleneck hasn’t come into play yet.

174 Comments

  • TheinsanegamerN - Friday, December 13, 2019 - link

    It isn't just speed, the Intel chip uses LPDDR4X. That's an entirely different beast from LPDDR4, let alone normal DDR4.

    AMD would need to redesign their memory controller, and they have just...not done it. The writing was on the wall, and I have no idea why AMD didn't put LPDDR4X compatibility in their chips; hell, I don't know why Intel waited so long. The sheer voltage difference makes a huge impact in the mobile space.

    You are correct, pushing those speeds at normal DDR4 voltage levels would have tanked battery life.
  • ikjadoon - Friday, December 13, 2019 - link

    Sigh, it is just speed. DDR4-2400 to DDR4-3200 is simply speed: there is no "entirely new controller" needed. The Zen+ desktop counterpart is rated for DDR4-2666 to 2933.

    LPDDR4X is almost identical to LPDDR4: "LPDDR4X is identical to LPDDR4 except additional power is saved by reducing the I/O voltage (Vddq) to 0.6 V from 1.1 V." Whoever convinced you that LPDDR4X is "an entirely different beast" from LPDDR4 is talking out of their ass, and I'd be cautious about believing anything else they ever say.

    And, no: DDR4-3200 vs DDR4-2400 wouldn't have tanked battery life, but simply made it somewhat worse. DDR4-3200 can still run on the stock 1.2V that SO-DIMM DDR4 relies on, but it's pricier and you'd still pay the MHz power penalty.

    I don't think RAM speed/voltage has ever "tanked" a laptop's battery life: shaking my head here...
  • mczak - Friday, December 13, 2019 - link

    I'm quite sure you're wrong here. The problem isn't the memory itself (as long as you get default 1.2V modules, which exist up to DDR4-3200), but the CPU. Zen(+) CPUs require higher SoC voltage for higher memory speeds (memory frequency is tied to the on-die interconnect frequency). And as far as I know, this makes quite a sizeable difference - not enough to really matter on the desktop, but enough to matter on mobile. (Although I thought Zen+ could use the default SoC voltage up to DDR4-2666, but I could be wrong on that.)
  • Byte - Friday, December 13, 2019 - link

    Ryzen had huge problems with memory speed and even compatibility at launch. No doubt they had to play it safe on laptops. They should have it mostly sorted out with Zen 2 laptops; it is why AMD notebooks are a generation behind, whereas Intel notebooks are usually a generation ahead.
  • ikjadoon - Saturday, December 14, 2019 - link

    We both agree it would be bad for battery life and a clear AMD failure. But, the details...more errors:

    1. Zen+ is rated up to DDR4-2933. 3200 is a short jump. Even then, AMD couldn't even rate this custom SKU to 2666 (the bare minimum of Zen+). AMD put zero work into this custom SKU (whose only saving grace is graphics, and even that was neutered). It's obviously a low-volume part (relative to what AMD sells otherwise), even for such a high-profile design win.

    2. If AMD can't rate (= bin) *any* of its mobile SoC batches to support even 2666MHz at normal voltages, I'd be shocked.

    For any random Zen+ silicon, sure, it'd need more voltage. The whole impetus for my comments is that AMD created an entire SKU for Microsoft and seemed to take it out of the oven half-baked.

    Or, perhaps they had binned the GPU side so hard that very few of those 11 CU units could've survived a second binning on the memory controller.
  • azazel1024 - Monday, December 16, 2019 - link

    So all that being said, yes it had a huge impact. GPU based workloads are heavily memory speed dependent. Going from 2400 to 3200MHz likely would have seen a 10-25% increase in the various GPU benchmarks (on the lower end for those that are a bit more CPU biased). That changes AMD from being slightly better overall in GPU performance to a commanding lead.

    On the CPU side of things, many of the Intel wins were on workloads with a lot of memory performance needed. Going from 2400 to 3200 would probably have only resulted in the AMD chip moving up 3-5% in many workloads (20-40% in the more memory subsystem dependent SPEC INT tests), but that would have still evened the playing field a lot more.

    Going to 3733 like the Intel chip would have just been even more of the same.

    Zen 2 and much higher memory bandwidth can't come soon enough for AMD.
  • Zoolook - Saturday, December 21, 2019 - link

    It's not about binning; they couldn't support that memory and keep within their desired TDP, because they would have had to run Infinity Fabric at a higher speed.
    They could have used faster memory and lower CPU and/or GPU speeds, but this is the compromise they settled on.
  • Dragonstongue - Friday, December 13, 2019 - link

    AMD makes/designs for a client what that client wants; in this case MSFT is "well known" for making sure to get (and hopefully paying well for) what they want, for reasons only they can understand.

    In this case, AMD really cannot say "we are not doing that," as this would mean a loss likely running into the millions (or more), vs just saying "not a problem, what would you like?"

    MSFT is very well known for catering to INTC and NVDA whims (they have, and still do, even if it costs everyone many things).

    Still, AMD and MSFT should have "made sure" not to hold back its potential performance by using "min spec" memory speed, instead choosing the highest speed they know (through testing) it will support.

    I imagine AMD (or others) could have chosen to use LP memory, as I call BS on others saying AMD would have had no choice but to re-architect their design to use LP over standard-power memory, seeing as LP likely needs very few changes (if any, compared to ground-up work for an entirely different memory type).

    They should have "upped" it to the next speed level instead of the 2400 baseline, however: 2666, 2933, 3000, 3200, as the power draw difference is "negligible" with proper tuning (which MSFT likely would have made sure to do...but then again this is MSFT, who pull stupid as heck all the time, so long as it keeps their "buddies" happy, whoever cares about the consumers themselves).
  • mikeztm - Friday, December 13, 2019 - link

    LPDDR4/LPDDR4X is not related to DDR4.
    It's an upgraded LPDDR3, which is also not related to DDR3.

    The LPDDR family is just like the GDDR family: a totally different type of DRAM standard.
    They draw almost 0 watts when not in use, and during active RAM access they do not draw significantly less power compared to DDR4.

    LPDDR4 first shipped with the iPhone 6s in 2015, and it took Intel 4 years to finally catch up.
    BTW, this article has an intentional typo: LPDDR4X-3733 on Intel is actually quad-channel, because each channel is half-width (32-bit) instead of DDR4's 64-bit.
