GPU Performance

Ever since the original Raven Ridge systems, AMD’s Zen-based APUs have set the bar for GPU performance in a 15-Watt package. Coupling AMD’s Vega GPU cores with its Zen CPU cores changed the game for integrated GPU performance, and the second-generation Picasso platform even bumped the compute unit count up to 11 with the Ryzen 7 3780U Microsoft Surface Edition processor, which was once again based on Vega.

AMD’s philosophy has changed slightly for its third-generation Zen-based APU, Renoir. Thanks to the move to TSMC’s 7 nm process, AMD decided to cut back on GPU compute units in exchange for more frequency headroom, and coupled with the CPU performance gains of Zen 2, the company promised faster GPU performance despite the reduction. The Ryzen 7 models now offer either seven compute units, in the Ryzen 7 4700U we have in our review unit, or eight compute units, in the Ryzen 7 4800U, with maximum frequencies of 1600 MHz and 1750 MHz respectively. That means the unit we are testing today is down four compute units compared to the Ryzen 7 3780U found in the Surface Laptop 3, and although the maximum boost frequency is higher, it is only 200 MHz higher than the best Picasso APU available. We’ve seen AMD ramp up frequency before to get more performance out of its GPUs, and it has served the company well, but in a laptop there is far less thermal headroom to work with, so we shall see how the new philosophy works out.
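
To put that tradeoff in rough numbers: each Vega compute unit contains 64 stream processors, each capable of two FP32 operations per clock via fused multiply-add. A quick sketch of peak theoretical throughput (a naive upper bound at maximum boost, not sustained performance) shows that on paper Renoir actually gives up peak compute versus the 11-CU Picasso part:

```python
# Peak theoretical FP32 throughput for a Vega-based APU iGPU:
# CUs x 64 stream processors x 2 FLOPs/clock (FMA) x boost clock.
def peak_gflops(compute_units: int, boost_mhz: int) -> float:
    return compute_units * 64 * 2 * boost_mhz / 1000.0

parts = {
    "Ryzen 7 3780U (Picasso, 11 CU @ 1400 MHz)": (11, 1400),
    "Ryzen 7 4700U (Renoir,   7 CU @ 1600 MHz)": (7, 1600),
    "Ryzen 7 4800U (Renoir,   8 CU @ 1750 MHz)": (8, 1750),
}
for name, (cus, mhz) in parts.items():
    print(f"{name}: {peak_gflops(cus, mhz):.0f} GFLOPS peak")
```

By this back-of-envelope math the 4700U's peak is well below the 3780U's, which makes its benchmark wins below all the more notable: the gains have to come from higher sustained clocks, the faster Zen 2 CPU cores, and improved memory bandwidth rather than raw shader count.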

Our testing philosophy for laptops with integrated GPUs has had to evolve very quickly. Laptops based on Intel’s Gen 9.5 graphics had such poor GPU performance that it was not even worth testing them against most 3D games, but AMD’s Vega GPU on Ryzen and Intel’s Iris graphics in the Ice Lake SoC have both moved the bar upwards, so more rigorous testing is now required. As with any new testing we bring, please give us some time to fill in the data as we test new devices.

3DMark

Futuremark 3DMark Fire Strike

Futuremark 3DMark Sky Diver

Futuremark 3DMark Cloud Gate

Futuremark 3DMark Ice Storm Unlimited

Futuremark 3DMark Ice Storm Unlimited - Graphics

Futuremark 3DMark Ice Storm Unlimited - Physics

UL’s 3DMark suite offers a range of tests of varying complexity, from the high-end Fire Strike down to Ice Storm Unlimited, which can be run on smartphones. The Acer Swift 3 is off to a great start with these DX11-based tests, easily outperforming the previous Picasso scores as well as Intel’s Ice Lake platform. The most interesting result is in Ice Storm Unlimited Physics, where Renoir scores 2.7 times higher than Picasso. The Physics test is really a CPU-bound test, which shows clearly how CPU-limited the previous AMD APUs had been.

GFXBench

GFXBench 5.0 Aztec Ruins Normal 1080p Offscreen

GFXBench 5.0 Aztec Ruins High 1440p Offscreen

Thanks to the new DX12 tests Kishonti introduced with version 5.0 of its GFXBench suite, this cross-platform benchmark is again relevant on Windows. The results from the Ryzen 7 4700U are interesting. AMD has often done well on DX12, likely thanks to its early efforts with low-level drivers, and the previous Picasso GPU did very well on this benchmark. Renoir does surpass Picasso, but the margin of victory is small.

Tomb Raider

Tomb Raider - Value

Although a bit long in the tooth, the original Tomb Raider reboot can still be taxing on notebooks, but the game is certainly playable on modern integrated GPUs if you turn down the settings. On our value settings the Acer Swift 3 was able to achieve over 100 frames per second, meaning there is quite a bit of room to increase the quality as long as you keep the resolution down. On both Ice Lake and Renoir, the game is unplayable with our Enthusiast settings at 1920x1080, despite its age.

Rise of the Tomb Raider

Rise of the Tomb Raider - Value

The first sequel to the Tomb Raider reboot added DX12 support, and as usual that means our AMD APUs do well. Although the game is barely playable on Picasso or Ice Lake, the Acer Swift 3 with its Ryzen 7 4700U just squeezes past the 30 FPS minimum you’d really want. This game is still at the limits of integrated GPUs.

Strange Brigade

Strange Brigade - Value

The co-operative third-person shooter Strange Brigade is a DX12 title that does well even on integrated GPUs. It also appears to be more CPU-bound than some of the other DX12 titles in our list, so the extra performance offered by the Zen 2 cores in the Acer Swift 3 helps propel it to the top, with far more performance than the previous-generation AMD APU.

F1 2019

F1 2019 - Value

Codemasters’ F1 series is yet another franchise that can do well even on integrated graphics if you are willing to turn the settings down. The Acer Swift 3 does very well here, almost achieving an average of 60 frames per second.

Far Cry 5

Far Cry 5 - Value

Ubisoft’s Far Cry franchise is well known, but it has also been impossible to play on most integrated GPUs. That is still the case with the Acer Swift 3, although it just manages over 30 frames per second in this test, outclassing all other integrated-GPU notebooks we have tested. We will get into this a bit later, but this game also showed some very severe thermal throttling, with major dropouts in framerate during benchmarking.

GPU Results

Although AMD cut back on the size of the GPU in Renoir, peak performance does not seem to have suffered, and the new Zen 2 cores really show how CPU-bound the previous Raven Ridge and Picasso APUs were in some titles. With 36% fewer compute units than the Ryzen 7 3780U found in the Surface Laptop 3, the Acer Swift 3 still manages to outperform it in every case.

However, the Acer did run into thermal issues when running at maximum performance. Some of this will come down to the cooling system in this particular notebook, but the power management of the Renoir APU also did not seem to cope very well. Rather than finding a sweet spot where it could perform well without throttling, the APU let the frame rate in Far Cry 5 ride a roller coaster the longer the game ran. AMD’s choice to cut back on the number of compute units and then increase frequency certainly won’t help here either, and it makes you wonder how well the GPU would have performed on the new 7 nm process with more CUs but a lower maximum boost frequency. We will dig more into the thermals later in the review.
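
Quantifying that roller-coaster behavior is straightforward if you log the benchmark's frame rate over time. As a minimal illustration (the trace below is synthetic, not our actual Far Cry 5 log), a throttling dip can be flagged whenever a sample falls well below its recent trailing average:

```python
# Flag throttling dips in a frame-rate trace: any sample that falls
# more than 30% below the trailing average suggests a thermal dip.
def find_dips(fps_trace, window=5, threshold=0.70):
    dips = []
    for i in range(window, len(fps_trace)):
        trailing = sum(fps_trace[i - window:i]) / window
        if fps_trace[i] < trailing * threshold:
            dips.append(i)
    return dips

# Synthetic trace: steady ~35 FPS with two periodic throttling dips.
trace = [35, 36, 34, 35, 36, 22, 35, 34, 36, 35, 20, 34]
print(find_dips(trace))  # -> [5, 10]
```

The spacing between flagged dips is the telling part: evenly spaced dips point to a power-management cycle (boost, overheat, throttle, cool, repeat) rather than scene complexity.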

191 Comments

  • Spunjji - Wednesday, May 6, 2020 - link

    Deicidium doesn't care that the benchmarks invalidate his talking points. As long as Intel only have 4 cores at 15W, 4 cores will be enough. Once Intel get 8 cores at 15W, he'll find a reason why it suddenly makes sense.
  • SolarBear28 - Tuesday, May 5, 2020 - link

    Regardless of whether or not 8 cores is necessary in a budget device, I think it's a good policy to get as much CPU as you can for the money, especially in a non-upgradable laptop. I have a usable 10-year-old laptop because I got more CPU power than I needed at the time of purchase.
  • Spunjji - Wednesday, May 6, 2020 - link

    This. If you're keeping a device for more than 4 years, you're buying for the software made then, not now.
  • 0iron - Thursday, May 7, 2020 - link

    I wish I could give 👍 to your comment!
  • sonny73n - Tuesday, May 5, 2020 - link

    @deici

    “ The extra cores are useless and nothing more than a marketing exercise - no one using this laptop will be doing anything that even requires 4 cores. ”

    I open 15 tabs on average with web browser(s). I also transcode movies very frequently. So the more cores the better. We know you’re full of BS, Intel shill.
  • Dribble - Thursday, May 7, 2020 - link

    Having millions of tabs open doesn't require lots of cores, just enough memory. You aren't going to be transcoding movies on a cheap laptop with a little SSD.
  • sonny73n - Thursday, May 7, 2020 - link

    @Dribble
    Obviously you’ve never worked with browsers before. More cores are better for multitasking, but someone on here said otherwise.
    Yes, I’m transcoding a few Disney movies every week for my 4-year-old son so he can watch them on his iPad.
    I’m not going to explain it to you in technical detail. If you think “the extra cores are useless...” like Deici does, you’re in a totally different league.
  • Spunjji - Wednesday, May 6, 2020 - link

    "The extra cores are useless and nothing more than a marketing exercise"
    Maybe, maybe not. Not really your place to say how other people are supposed to use their laptops, is it?

    Bear in mind that we only got 4 cores at 15W from Intel *after* AMD announced the original Ryzen APU. Your logic now sounds like theirs when they designed Kaby Lake.
  • GreenReaper - Thursday, May 7, 2020 - link

    Browser tabs, virus checkers, malware, that stream you have going in the corner on your secondary monitor - all of those things take up cores. A lot of software has been redeveloped to use all the cores available to it nowadays.
  • fmcjw - Tuesday, May 5, 2020 - link

    Is it because the 8-core throttles more to an even lower power state, spending more time to cool down?

    I'm also curious to see evaluations of how much of that 25% battery time is consumed by wasted cycles in actual application performance. Probably not a lot in benchmarks, since the SoC is designed to shut down idle cores, but in real life the OS probably fires them up for no good reason, just because they're there. I mean, this is a mix of apps with legacy UIs and libraries we're talking about, not server-optimized cloud applications.

    Accordingly, from a software and sustained performance perspective, is it better just to focus on single core performance?

    I have a tendency to get a more predictable Core i3-8145U sans Turbo Boost, or a Ryzen 3 3200U with fewer cores, despite them looking poor on benchmarks. I have no idea what I'm missing, but hopefully it's a worthy trade-off. At least there's less throttling (quad/hex/octa-core chips heat up faster), longer battery life with fewer cores to power, and fewer wasted cycles from application/OS inefficiencies.

    Hope this is something AnandTech can investigate. Too many brilliant minds are trained on phone topics, but not on the state of affairs in PC land.
