Scheduler mechanisms: WALT & PELT

Over the years, it seems Arm noticed the slow progress and now appears to be working more closely with Google in developing the Android common kernel, utilizing out-of-tree modifications (meaning outside of the official Linux kernel) that benefit the performance and battery life of mobile devices. Qualcomm has also been a major contributor, as WALT is now integrated into the Android common kernel, and there's a lot of ongoing work from these parties as well as other SoC manufacturers to advance the platform in ways that benefit commercial devices a lot more.

Samsung LSI’s situation here seems very puzzling. The Exynos 9810 is the first flagship SoC to actually make use of EAS, and they are basing the BSP (board support package) kernel off of the Android common kernel. The issue here is that instead of choosing to optimise the SoC through WALT, they chose to fall back to fully PELT-dictated task utilisation. That’s still fine in terms of core migrations, however they also chose to use a very vanilla schedutil CPU frequency driver. This meant that the frequency ramp-up of the Exynos 9810's CPUs follows PELT's characteristics, which also brings with it one of PELT's existing disadvantages: a relatively slow ramp-up.
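To see why PELT ramps slowly, consider a toy model of its per-millisecond geometric decay (illustrative only, not the kernel code; the signal scale of 1024 matches the kernel's `SCHED_CAPACITY_SCALE`):

```python
# Toy model of PELT's per-millisecond geometric averaging.
# The decay factor y satisfies y^32 = 0.5, i.e. contributions
# halve every 32 ms (the stock half-life).

HALF_LIFE_MS = 32
y = 0.5 ** (1 / HALF_LIFE_MS)

def pelt_util(running_ms, util=0.0, max_util=1024):
    """Utilisation signal for a task that runs continuously for running_ms."""
    for _ in range(running_ms):
        util = util * y + (1 - y) * max_util
    return util

# A task that suddenly becomes 100% busy takes a long time to look "big"
# to schedutil, which is what delays the frequency ramp-up:
for t in (16, 32, 64, 100):
    print(f"{t:3d} ms busy -> util {pelt_util(t):.0f} / 1024")
```

With a 32 ms half-life the signal only reaches half scale after a full 32 ms of continuous execution, which is an eternity for an interactive workload.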

Source: BKK16-208: EAS

Source: WALT vs PELT : Redux – SFO17-307

One of the best resources on the issue actually comes from Qualcomm, as they had spearheaded the topic years ago. In the presentation above, given at Linaro Connect 2016 in Bangkok, we see a visual representation of the behaviour of PELT versus WinLT (as WALT was called at the time). The metrics to note here in the context of the Exynos 9810 are util_avg (the default behaviour on the Galaxy S9) and the contrast to WALT’s ravg.demand and actual task execution. So out of all the possible BSP configurations, Samsung seems to have chosen the worst one for performance. And I do think this was a conscious choice, as Samsung added additional mechanisms to both the scheduler (eHMP) and schedutil (freqvar) to counteract this very slow behaviour caused by PELT.
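The contrast between the two signals can be sketched as follows. This is a simplification under assumed parameters (window size, history depth, and the `walt_demand` helper are illustrative, not kernel code): WALT accounts busy time in fixed windows and derives demand from the recent window history, so it reacts within roughly one window rather than over many half-lives.

```python
# Toy contrast to PELT: WALT-style window accounting.
# WALT tracks how long a task ran in each fixed-size window and derives
# "demand" (ravg.demand in the slides) from a short history of windows.
# Real WALT can use the max or average of recent windows; max is shown here.

WINDOW_MS = 20      # assumed window size for illustration
N_WINDOWS = 5       # assumed history depth

def walt_demand(busy_ms_per_window):
    """Demand = busiest recent window, scaled to the 0..1024 capacity range."""
    recent = busy_ms_per_window[-N_WINDOWS:]
    return max(recent) * 1024 // WINDOW_MS

# A task that was idle and then ran flat-out for two 20 ms windows:
history = [0, 0, 0, 20, 20]
print(walt_demand(history))
```

One fully busy window is enough to drive the demand signal to full scale, which is why WALT-based frequency selection feels so much more responsive than a decaying average.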

In trying to resolve this whole issue, instead of adding yet more logic on top of everything, I looked into fixing the issue at the source.

The first thing I tried is perhaps the most obvious route: enabling WALT and seeing where that goes. While using WALT as a CPU utilisation signal for the Exynos Galaxy S9 gave outstandingly good performance, it also very badly degraded battery life. I had a look at the Snapdragon 845 Galaxy S9’s scheduler, but here it seems Qualcomm diverges significantly from the Google common kernel on which the Exynos is based. This being far too much work to port, I had another look at the Pixel 2’s kernel – which luckily was a lot nearer to Samsung’s. I ported all relevant patches which were also applied to the Pixel 2 devices, along with porting EAS to a January state of the 4.9-eas-dev branch. This improved WALT’s behaviour while keeping performance, however there was still significant battery life degradation compared to the previous configuration. I didn’t want to spend more time on this, so I looked through other avenues.

Source: LKML Estimate_Utilization (With UtilEst)

Looking through Arm's resources, it looks very much like the company is aware of the performance issues and is actively trying to improve the behaviour of PELT to more closely match that of WALT. One significant change is a new utilisation signal called util_est (utilisation estimation), which is added on top of PELT and is meant to be used for CPU frequency selection. I backported the patch and immediately saw a significant improvement in responsiveness due to higher CPU frequency states being selected. Another simple way of improving PELT was reducing the ramp/decay timings, which incidentally also got an upstream patch very recently. I backported this as well to the kernel, and after testing an 8ms half-life setting for a bit and judging it to not be good for battery life, I settled on a 16ms setting, which is an improvement over the stock kernel's 32ms and gives the best compromise between performance and battery.
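The effect of the half-life on responsiveness falls straight out of the geometric-decay maths. As a rough sketch (assuming the idealised model `util(t) = 1 - 0.5^(t / half_life)` for a continuously running task), the time to reach a given fraction of full utilisation scales linearly with the half-life:

```python
import math

# Time (ms) for an idealised PELT signal to reach fraction f of full
# utilisation, given util(t) = 1 - 0.5**(t / half_life_ms).

def ramp_time_ms(half_life_ms, f=0.8):
    return half_life_ms * math.log(1 - f, 0.5)

for hl in (8, 16, 32):
    print(f"half-life {hl:2d} ms -> ~{ramp_time_ms(hl):.1f} ms to 80% util")
```

Halving the half-life from 32ms to 16ms thus halves the ramp-up delay before schedutil sees a high enough utilisation to pick a fast frequency state, at the cost of the signal also decaying (and re-ramping) twice as quickly, which is where the battery trade-off comes from.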

Because of these significant changes in the way the scheduler is fed utilisation statistics, Samsung's existing tunings were obviously no longer valid. I adapted most of them as best I could, which basically involved disabling most of them, as they were no longer needed. I also significantly changed the EAS capacity and cost tables, as I do not think that the way Samsung populated the tables is correct or representative of actual power usage, which is very unfortunate. Incidentally, this last bit was one of the reasons that performance changed when I limited the CPU frequency in part 1, as it shifted the whole capacity table and changed the scheduler heuristics.
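Why capping the maximum frequency shifts the whole capacity table can be sketched as follows. This is an illustration under assumed numbers (the frequency list and the `capacity_table` helper are hypothetical): EAS capacity values are typically normalised so that the fastest available operating point maps to the full scale of 1024, so removing the top frequency re-normalises every remaining entry.

```python
# Illustrative sketch: how capping the max frequency re-normalises an
# EAS-style capacity table (assumed values, not Samsung's actual tables).

def capacity_table(freqs_mhz, max_freq_mhz):
    """Map each usable frequency to a capacity on the 0..1024 scale,
    normalised against the highest frequency still allowed."""
    usable = [f for f in freqs_mhz if f <= max_freq_mhz]
    top = max(usable)
    return {f: f * 1024 // top for f in usable}

big_opps = [1170, 1794, 2704]           # hypothetical OPPs for a big core

print(capacity_table(big_opps, 2704))   # 1794 MHz sits well below capacity 1024
print(capacity_table(big_opps, 1794))   # capped: 1794 MHz now *is* capacity 1024
```

Once 1794MHz becomes the top entry it is treated as full capacity, so every task-placement and frequency-selection heuristic that compares utilisation against capacity behaves differently, which is consistent with the performance shift observed in part 1.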

But of course, what most of you are here for is not how this was done but rather the hard data on the effects of my experimenting, so let's dive into the results.



Comments

  • hansmuff - Friday, April 20, 2018 - link

    Fantastic article, thank you so much!

    I'd be REALLY pissed if I had an Exynos S9+. Seems to me like that would feel like the S6 in terms of battery life, that phone was terrible. Incidentally, I had the S6, and I hated the battery life even as a light user. And IIRC that phone had an Exynos in it even in the US version. Hmmmmmmm.
  • Toss3 - Saturday, April 21, 2018 - link

    As an owner I wouldn't say I'm pissed, as the battery life overall is pretty decent (getting around 6h+ of SOT, which is similar to what most SD845 users are getting). You shouldn't base battery life on just web browsing, as people tend to do a lot of other stuff on their phones besides that (check other YouTube comparisons and you'll see that they are pretty much on par). Definitely sucks that Samsung hasn't optimized the performance, and they can't really change the clockspeed now after they've released it.
  • lucam - Friday, April 20, 2018 - link

    When iPhone X Review?
  • MrCommunistGen - Friday, April 20, 2018 - link

    WOW. Excellent work, great results, and awesome writeup! Bravo Andrei. I love all the detail about what works and what doesn't.

    I'd be interested in seeing even rough numbers for what performance and battery looked like when using WALT and when you tried using 8ms half-life with PELT. Like: "Using WALT only gave ~5% performance improvement over Config 2 at the cost of cutting battery life down to ~3 hours..." of course using your figures instead of the ones I made up.
  • Andrei Frumusanu - Friday, April 20, 2018 - link

    If I remember correctly 6h with WALT and 6.5h with 8ms PELT at 1794MHz, performance was great but just murder on the battery. Obviously something wasn't right with the WALT config so that's why I didn't post the result as it wasn't representative.
  • Dizoja86 - Friday, April 20, 2018 - link

    I love these Anandtech articles that read like a school textbook. They're challenging, and I genuinely feel like I better understand technology by the end of them. Well done.
  • jospoortvliet - Friday, April 20, 2018 - link

    I can only agree. I was looking forward to this article and it is beyond expectation - fantastic work. This is why I come to this site...
  • Lau_Tech - Friday, April 20, 2018 - link

    Good job Andrei! Very interesting and unique article
  • lilmoe - Friday, April 20, 2018 - link

    Just looking at the power curves (finally!), this is totally a laptop chip, or a hybrid at least.

    Dear Microsoft, use this chip for Windows on ARM.
  • stepz - Saturday, April 21, 2018 - link

    Love that you replaced the confusing double barchart thing with a scatterplot of power curves. Much much clearer. I would suggest showing the same data as an energy/performance plot - this way one can see from the same plot if the performance to power trade-off is worth it.
