Play Store Open & Scroll

Google's Play Store is used by almost every Android user. It often seems, though, as if Google's own app is quite performance-hungry or poorly optimized, so let's take a closer look at what happens when opening the app and browsing its home activity.

The little cores all have significant load placed on them. The app appears to multi-thread well in this scenario, and the little cores are well suited to accommodate the load that lands on them.

Surprisingly, we also see some continuous load on the big CPUs. The app launch itself sends the big cluster to its full 2.1GHz and migrates threads onto all four CPUs. Scrolling through the page also keeps at least one significant thread on the big cores. The cluster's frequency nevertheless remains quite moderate, as we only see small bursts of up to 1GHz while the rest of the time the big cores idle at their minimum frequency of 800MHz.
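
As a rough way to reproduce this kind of observation, the minimal sketch below polls the kernel's standard cpufreq sysfs files to watch per-core frequencies while an app runs. This is not the instrumentation used for the article, and it assumes cpu0-cpu3 form the little cluster and cpu4-cpu7 the big cluster; the kernel reports the values in kHz.

```kotlin
import java.io.File

// Reads the current frequency (in kHz) of one CPU from the standard Linux
// cpufreq sysfs interface; returns null if the file is missing or unreadable.
fun currentFreqKhz(cpu: Int): Int? = runCatching {
    File("/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_cur_freq")
        .readText().trim().toInt()
}.getOrNull()

fun main() {
    // Assumption: cpu0-cpu3 are the little cluster, cpu4-cpu7 the big cluster.
    repeat(20) {
        val freqs = (0 until 8).map { currentFreqKhz(it) }
        println(freqs.joinToString("  ") { khz ->
            if (khz == null) "----" else "${khz / 1000}MHz"
        })
        Thread.sleep(100)  // sample roughly ten times per second
    }
}
```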

Overall, the Play Store app also seems to be optimized for and aimed at 4-core designs. big.LITTLE works well here, as we see a mix of small and big threads running concurrently on both clusters.

157 Comments

  • yankeeDDL - Tuesday, September 1, 2015

    Just wanted to say that it's a great article. Well done and very interesting: the use of 4+4 cores on a mobile platform, while on PCs we still have plenty of 2-core CPUs, seemed quite ridiculous. But no, clearly, it makes sense.
  • Tolwyns - Tuesday, September 1, 2015

    Very interesting article. These tests were done on Android 5, I take it. I know that this analysis is geared toward current hardware, but most of the "4 cores are only marketing" discussion was quite a while back, when most devices had some version of Android 4. I wonder if the benefits of more cores showed up then. The second thing I'm interested in is "how much of this is applicable to other SoCs". Not much, I gather. And related to that, "how much of this is limited to Samsung devices", because they made both the CPU and the firmware/software layer of the tested device.
  • SunLord - Tuesday, September 1, 2015

    I'm kinda curious how an 8-core version of the X20, with 2 lower-power, 4 mid-power, and 2 high-power cores, would perform.
  • Shadowmaster625 - Tuesday, September 1, 2015

    It is kind of a misleading analysis. A single Haswell core could juggle all of these processes and still have plenty of time to sleep. So you're not really telling us anything here. Is a wider, fatter core better than all these narrow, underpowered cores? Given the performance and power consumption of the Apple SoCs, I would still have to say yes.
  • IanHagen - Tuesday, September 1, 2015

    This! When developing for iOS I usually have to spawn several threads (queues, in Apple's world) for things that would otherwise block the main queue and cause the UI to "freeze", and the dual-core SoCs inside the devices I'm targeting munch through my threads absolutely fine. Just by saying that the several extra cores found in Android phones aren't sleeping, you're not coming to any definitive conclusion about any clear advantage of having them.
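
The queues mentioned here are Grand Central Dispatch on iOS; for the Android-centric context of this article, a rough Kotlin analogue of the same keep-the-main-thread-free pattern might look like the sketch below, where loadComments() and showComments() are hypothetical placeholders for a blocking call and a UI update.

```kotlin
import android.os.Handler
import android.os.Looper
import java.util.concurrent.Executors

// Hypothetical placeholders: a blocking fetch and a UI update.
fun loadComments(): List<String> = TODO("blocking network/disk call")
fun showComments(comments: List<String>) { /* update views on the main thread */ }

private val background = Executors.newSingleThreadExecutor()
private val mainHandler = Handler(Looper.getMainLooper())

// Run the blocking work off the main (UI) thread, then hop back to it for the
// UI update, roughly what dispatching onto a background queue achieves in GCD.
fun refreshComments() {
    background.execute {
        val comments = loadComments()
        mainHandler.post { showComments(comments) }
    }
}
```
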
  • nightbringer57 - Tuesday, September 1, 2015

    The thing is that when you have 4 threads, 4 cores can potentially do the job more efficiently, with performance equal to a single core running at 4 times the speed.
  • nightbringer57 - Tuesday, September 1, 2015

    *by efficiently, I mean using less power*
  • metafor - Tuesday, September 1, 2015

    Potentially, but not necessarily. Threading and thread migration aren't free. It depends on how much performance you really need. The A57(R3), for instance, at very low frequencies is actually slightly more power efficient than the A53 at its peak frequency (surprising, I know).

    If you have 4 threads that need only bare-minimum performance, little enough that a single core at minimum frequency could handle them, then waking up 4 cores (even if they're smaller) and loading the code/data into the caches of each of those cores isn't necessarily a clear win. Especially if they share the same code.
  • lilmoe - Tuesday, September 1, 2015

    "The A57(R3), for instance, at very low frequencies is actually slightly more power efficient than the A53 at its peak frequency (surprising, I know)."

    Cool story. Except that, in most of the smaller multithreaded workload cases, the little cores usually aren't near their saturation levels. Also, in most cases, when they _do_ get saturated, the workload is transferred to and dealt with by a big core or two in short bursts.

    Even if it isn't a "clear win" (in *some* workloads, mind you), saying that there isn't any apparent merit in these configurations is really irresponsible.
  • metafor - Tuesday, September 1, 2015

    I don't think I said there's no merit to such configurations. I simply said parallelizing a workload isn't always a clear win over using a single core. It depends on the required performance level and the efficiency curve of the small core and big core.

    If 4 threads running on 4 small cores at 50% FMax can instead be handled by one big core at FMin without wasting any cycles, the advantage actually goes to the big-core configuration. The small-core configuration works if there's a thread that requires so little performance that it'd be wasteful to run it on the big core even at FMin.

    Deciding which configuration is best for a given workload isn't as clear-cut as saying "look, the small cores are being used by a lot of threads!"; it comes down to measuring power and perf with the two configurations.
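
As a back-of-the-envelope illustration of the exchange above, the sketch below plugs made-up voltage and frequency points into the usual dynamic-power approximation P ≈ C·V²·f. It is only meant to show why four slower cores can draw less power than one core running four times as fast for the same idealized throughput; the actual numbers are invented.

```kotlin
// Toy numbers only, not measurements: the classic dynamic-power approximation
// P ≈ C * V^2 * f, where a higher frequency generally also needs a higher voltage.
fun dynamicPower(c: Double, volts: Double, freqGhz: Double) = c * volts * volts * freqGhz

fun main() {
    val c = 1.0  // arbitrary switched-capacitance unit, same core design assumed

    // Four copies of the core at a quarter of the frequency and a lower voltage...
    val fourSlowCores = 4 * dynamicPower(c, volts = 0.8, freqGhz = 0.5)
    // ...versus one copy at four times the frequency, which needs a higher voltage.
    val oneFastCore = dynamicPower(c, volts = 1.1, freqGhz = 2.0)

    // Same idealized aggregate throughput, very different power draw:
    println("4 cores @ 0.5GHz/0.8V: %.2f units".format(fourSlowCores))  // 1.28
    println("1 core  @ 2.0GHz/1.1V: %.2f units".format(oneFastCore))    // 2.42
}
```

metafor's caveat still stands: with two different core designs the efficiency curves can cross, so the real answer comes from measuring power and performance on both configurations, exactly as the last comment says.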
