IGP: 720p Gaming Tests

Testing our Cezanne sample's integrated graphics is a double-edged sword: AMD fully expects this CPU to be paired with a discrete solution in almost all notebook environments, whereas mini-PC designs might be a mix of integrated and discrete. The integrated graphics on this silicon are geared more towards the 15 W U-series processors, and so that is where the optimizations lie. We encountered a similar situation when we tested Renoir at 35 W last year.

In order to enable the integrated graphics on our ASUS ROG Flow X13 system, we disable the GTX 1650 through Device Manager. This forces the system to run on the Vega 8 graphics inside, which on this processor runs at 2100 MHz, a +350 MHz jump over the previous generation thanks to improved power management and minor manufacturing improvements. We did the same on the other systems in our test suite.
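As an aside for anyone replicating this, the same toggle can be scripted rather than clicked through the Device Manager UI. A minimal sketch, assuming a recent Windows 10 build (2004+) where pnputil supports device enumeration and enable/disable; the device instance ID below is a placeholder, not the real ID from our unit:

```python
# Sketch: force games onto the iGPU by disabling the discrete GPU.
# Run from an elevated prompt; assumes Windows 10 2004+ where pnputil
# supports /enum-devices and /disable-device.
import subprocess

# Step 1: list display adapters to find the dGPU's device instance ID.
subprocess.run(["pnputil", "/enum-devices", "/class", "Display"], check=True)

# Step 2: disable the dGPU by its instance ID (placeholder below; substitute
# the real ID from step 1). Equivalent to right-click -> Disable device.
DGPU_ID = r"PCI\VEN_10DE&DEV_XXXX\..."  # hypothetical GTX 1650 instance ID
subprocess.run(["pnputil", "/disable-device", DGPU_ID], check=True)

# Re-enable afterwards with: pnputil /enable-device <instance-id>
```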

Integrated graphics has, over the years, been built up from something barely usable in a 2D desktop environment to hardware that can competitively run the most popular eSports titles at good resolutions, medium settings, and playable framerates. In our recent review of AMD's Ryzen 4000G Desktop APUs, we noted that these were the best desktop APUs that money could buy, held back at this point mostly by memory bandwidth, but still enabling some good performance. Ultimately, modern integrated graphics has cannibalized the sub-$100 GPU market, and these sorts of processors work great in budget builds. There is still a way to go on performance, and mobile processors at least help in that regard, as more systems move to LPDDR4X memory that affords better bandwidth.

For our integrated graphics testing, we use the lowest configuration in our game comparisons. This typically means the lowest resolution and graphics fidelity settings we can get away with, which to be honest is still a lot better visually than when I used to play Counter-Strike 1.5 on my dual-core netbook in the late 2000s. From there, the goal is to showcase graphics performance alongside CPU performance and see where the limits are: even at 720p on Low settings, some of these processors are still graphics limited.

Integrated Graphics Benchmark Results
AnandTech                    Ryzen 9     Ryzen 9     Ryzen 7     Core i7
                             5980HS      4900HS      4800U       1185G7
Power Mode                   35 W        35 W        15 W        28-35 W
Graphics                     Vega 8      Vega 8      Vega 8      Iris Xe
Memory                       LP4-4267    D4-3200     LP4-4267    LP4-4267

Frames Per Second Averages
Civilization 6 480p Min      101.7       98.9        68.4        66.2
Deus Ex: MD 600p Min         80.7        76.5        61.2        69.1
Final Fantasy XV 720p Med    31.4        31.3        29.1        36.5
Strange Brigade 720p Low     93.2        85.2        75.7        89.3
Borderlands 3 360p VLow      89.8        93.6        -           64.9
Far Cry 5 360p Low           68.0        69.5        60.0        61.3
GTA 5 720p Low               98.9        80.7        80.0        81.9
Gears Tactics 720p Low       86.8        -           87.8        118.2

95th Percentile Frame Times (shown as FPS)
Civilization 6 480p Min      69.0        67.4        45.7        43.8
Deus Ex: MD 600p Min         45.6        57.3        38.1        44.1
Final Fantasy XV 720p Med    -           26.6        24.6        26.5
Strange Brigade 768p Min     84.2        77.0        68.6        73.0
Borderlands 3 360p VLow      63.6        73.8        -           48.9
Far Cry 5 360p Low           50.3        62.3        43.8        49.8
GTA 5 720p Low               66.8        52.8        56.0        55.7
Gears Tactics 720p Low       67.5        -           78.3        104.5

Despite the Ryzen 9 5980HS having LPDDR4X memory and extra graphics frequency, the performance uplift over the Ryzen 9 4900HS is relatively mediocre: a few FPS gained at best, a few FPS lost at worst. The exception is GTA V, where the uplift is more like ~20%, with the Zen 3 cores helping most here. In most tests it is an easy win over Intel's top Xe solution, except in Gears Tactics, which sides very heavily with the Intel part.
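To put rough numbers on that, a quick sketch that derives the per-title uplift from the average FPS table above (the data is copied from our table; the script itself is purely illustrative):

```python
# Sketch: per-title uplift of the 5980HS over the 4900HS, using the
# average-FPS table above (Gears Tactics omitted: no 4900HS result).
avg_fps = {
    # title: (Ryzen 9 5980HS, Ryzen 9 4900HS)
    "Civilization 6":   (101.7, 98.9),
    "Deus Ex: MD":      (80.7, 76.5),
    "Final Fantasy XV": (31.4, 31.3),
    "Strange Brigade":  (93.2, 85.2),
    "Borderlands 3":    (89.8, 93.6),
    "Far Cry 5":        (68.0, 69.5),
    "GTA 5":            (98.9, 80.7),
}

for title, (new, old) in avg_fps.items():
    print(f"{title:>16}: {100 * (new / old - 1):+5.1f}%")
# GTA 5 comes out at roughly +22%; everything else lands within ±10%.
```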

With all that said, the Ryzen 9 parts here are more likely to be paired with discrete graphics solutions. The ASUS ROG Flow X13 we are using today has a GTX 1650, whereas the ASUS Zephyrus G14 with the 4900HS has an RTX 2060. These discrete GPUs are what really dictate the cooling solutions in these systems, as well as how the systems behave in workloads that require both CPU and GPU performance.

For any users confused as to why we run at these settings: these are the low 'IGP'-class settings in our CPU Gaming test format. As mentioned in our new CPU Suite article in the middle of last year, our CPU Gaming tests have four sets of settings: 720p Low (or lower), 1440p Low, 4K Low, and 1080p Maximum. The tier above our lowest is 1440p Low, which for a lot of these integrated GPUs would put numbers into the low double digits, if not lower; we have published numbers like that in the past, to massive complaints asking why we even bother with such low framerates. The point here is to work from a maximum frame rate, see if the game is even playable to begin with, and then detect where in a game the bottleneck lies; in some of these tests we are still dealing with GPU/DRAM bottlenecks. I played Counter-Strike 1.5 and other games at LAN parties on dual-core AMD netbooks in the late 2000s, having to use low-resolution texture packs to get even 20 FPS, and I still had a massive amount of fun. From these numbers you can see the best possible frame rates for a given title and engine, and work down from there; it provides a starting point for further testing. These processors are more often paired with discrete solutions anyway, which makes discussions about IGP performance somewhat trivial compared to the rest of the data.
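As a footnote on methodology, for anyone curious how the 95th percentile rows in the table are derived and why they are shown as FPS: a minimal sketch, assuming a per-run log of frame times in milliseconds (the source of the log is whatever the game's built-in benchmark or capture tool provides):

```python
# Sketch: turn a per-run log of frame times (in ms) into the two metrics
# reported in the table above: average FPS, and the 95th-percentile frame
# time expressed as an FPS figure (the "slow tail" of the run).
from statistics import quantiles

def summarize(frame_times_ms):
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds
    p95_ms = quantiles(frame_times_ms, n=100)[94]  # 95th percentile frame time
    p95_as_fps = 1000.0 / p95_ms                   # report the tail as FPS
    return avg_fps, p95_as_fps

# Example: a mostly 16.7 ms (60 FPS) run with a few 30 ms stutters.
times = [16.7] * 95 + [30.0] * 5
print(summarize(times))  # avg ~57.6 FPS; p95 ~29.3 ms -> ~34 FPS
```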

Comments

  • Meteor2 - Thursday, February 4, 2021 - link

    Great point.
  • ikjadoon - Tuesday, January 26, 2021 - link

    It's great to see AMD kicking Intel's butt in a much larger market (i.e., laptops vastly outsell desktops): AMD really should be alongside, or simply replacing, Intel in most premium notebooks. Gaming notebooks are not my cup of tea, but this bodes well for the upcoming 15W Zen3 parts.

    Will we see actual, high-end Zen3 notebooks? Lenovo, HP, ASUS, Dell: for shame if you keep ramming toasty Tiger Lake down customers' throats. Lenovo has made some great offerings with both AMD & Intel; that means some compromises in notebook design (just go all AMD, man; if/when Intel is on top, switch back!), but beefier cooling built for Intel will also help AMD.

    Still, overall, I don't see anything convincing me that x86 is really right for notebooks, either. So much waste heat...for what? The M1 has rightly rejiggered expectations: 20 hours at 150 nits should be ordinary, not miraculous. Little to no fan spin-up at max CPU load should yield a chassis maximum of 40°C (slightly warmer than body temperature). And all the while with class-leading 1T performance.

    As this is a gaming laptop, it's not too relevant to compare web benchmarks (what most laptops do), but this is peak Zen3 mobile and it still falls quite short:

    Speedometer 2.0
    35W Ryzen 5980HS: 102 points (-57%)
    125W i9-10900K: 119 points (-49%)
    35W i7-1185G7: 128 points (-46%)
    105W Ryzen 5950X: 140 points (-40%)
    30W Apple M1: 234 points

    You can double / triple x86 wattage and still be miles behind M1. I almost feel silly buying an x86 laptop again: just kilowatt-hours of waste heat over time. Why? Electrons that never get used, just exhausted and thrown out as soon as possible because it'll throttle even worse otherwise.
  • undervolted_dc - Tuesday, January 26, 2021 - link

    Because here you are benchmarking the JavaScript engine in the browser.
    Not only that, you are comparing them single-threaded, so you are comparing 1/16 of the 5980HS vs 1/4 of the M1.
    A 128-core EPYC or a 64-core Threadripper would probably be even worse in this single-threaded benchmark (because those lean on threads and are less efficient in single-threaded apps).
    If you like wrong calculations: one core of the 15 W version uses less than 1 W, for what result? ~100 points? So who is wasting electrons here?
    (BTW, one core doesn't draw 1/16 of the package power because of boost, but that's still less wrong than your comparison.)
  • ZoZo - Tuesday, January 26, 2021 - link

    128-core EPYC? Where?
    His comparison is indeed misleading in terms of energy efficiency, but it's sad that no x86 is able to come even close to that single-threaded performance.
  • WaltC - Tuesday, January 26, 2021 - link

    Doubly sad for the M1 that we are living in the multicore/multithread era...;)
  • ikjadoon - Tuesday, January 26, 2021 - link

    The energy efficiency comparisons are pretty clear: the best x86 (Zen3) has stunningly lower IPC than the M1, which barely cracks 3 GHz. The only way to make up for such a gulf in IPC is faster clocks, and faster clocks require the 100+W TDPs so common in high-performance desktop CPUs. It's why Zen3 mobile clocks so much lower than Zen3 desktop (3-4 GHz instead of 4-5 GHz).

    A CPU that needs 3x power to do the same work (and do it slower in most cases) must exhaust an enormous amount of heat, when considering nT or 1T benchmarks (Zen3 requires ~20W for 5 GHz boost on a *single* core). Look at those boost power consumption measurements.

    Specifically in desktops (noted in my comparison about tripling TDP...), the CPU *alone* eats up an extra 60 to 90 watts during peak usage. Call it +20W average continuously, so we can do the math.

    20W x 8 hours x 7 days a week = +1.1 kWh excess exhaust heat per week. x86 had two corporate giants to do better. It's been severely litigated, but that's Intel's comeuppance. If Intel can't put out high-perf, high-efficiency x86 architectures, then people will start to feel less attached to x86 as an ISA. x86 had billions and billions and billions of R&D.
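    A quick sanity check of that figure, sketched with the same assumed numbers:

    ```python
    # Excess energy from an assumed +20 W continuous CPU draw over a work week.
    extra_watts = 20
    hours_per_day = 8
    days_per_week = 7
    kwh_per_week = extra_watts * hours_per_day * days_per_week / 1000
    print(f"{kwh_per_week:.2f} kWh/week")  # -> 1.12 kWh/week
    ```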

    I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from? Even if Apple *had flat 1T* for the next three years, I'd still feel more optimistic about M1-based CPUs in the long-term than x86.
  • Dug - Tuesday, January 26, 2021 - link

    "I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from?"

    Software, and getting work done. M1 is great and all, but I just need to convince the boss that Apple or a 3rd party has software available for our company... Nope, oh well.
    Other negatives:
    For personal use, people aren't going to spend thousands of dollars to get new software on new platform.
    They can't play games (or should I say they can't play a majority), which is probably the largest market.
    They can't change anything about their software
    They can't customize anything.
    They can't upgrade any piece of their hardware.
    They don't have options for same accessories.

    So I'll go ahead and spend the extra $15 a year on energy to keep Windows.
  • Spunjji - Thursday, January 28, 2021 - link

    "A CPU that needs 3x power to do the same work"
    It doesn't. It's been demonstrated a few times now that if you scale back Zen 3 cores to similar performance levels to M1, M1's perf/watt advantage drops to about 30%. It's still better than the node advantage alone, but it's not crippling, and M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads.

    They're different core designs matched to different purposes (ultra-mobile first vs. server first) and show different strengths as a result.

    M1 is a significant achievement - no doubt about it - but you're *massively* overstating the case in its favour.
  • GeoffreyA - Friday, January 29, 2021 - link

    Thank you for this.
  • Meteor2 - Thursday, February 4, 2021 - link

    "M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads" ...Yet. In a couple of years x86 will be behind ARM across the board.

    Fastest HPC in the world is ARM *right now*. Only the fifth fastest is x86.
