While we already mentioned it in our Galaxy Alpha launch article, Samsung is now officially announcing the launch of its new Exynos 5430 SoC.

The critical upgrade that the new chip revolves around is the manufacturing process: the Exynos 5430 is Samsung's first 20nm SoC product, and it also makes Samsung the first manufacturer to deliver one.

On the CPU side of the 5430, things don’t change much at all from the 5420 or 5422, with only a slight frequency change to 1.8GHz for the A15 cores and 1.3GHz for the A7 cores. We expect these frequencies to actually be reached in consumer devices, unlike the 5422’s announced frequencies, which were never reached in the end and were limited to 1.9GHz/1.3GHz in the G900H version of the Galaxy S5. As with the 5422, the 5430 comes fully HMP-enabled.
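
If you want to verify what a shipping device actually advertises, the per-core maximums are exposed through the standard Linux cpufreq sysfs interface. Below is a minimal sketch under the assumption of a device/shell with read access to sysfs; the CPU numbering and which cluster owns which cores vary by kernel.

```python
#!/usr/bin/env python3
# Read the advertised maximum frequency of each CPU core via the standard
# Linux cpufreq sysfs interface; on a big.LITTLE part the values should
# split into two groups, one per cluster (expected here: ~1,300,000 kHz
# for the A7s and ~1,800,000 kHz for the A15s).
import glob

def cluster_max_freqs():
    freqs = {}
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
        cpu = path.split("/")[5]                 # e.g. "cpu4"
        with open(path) as f:
            freqs[cpu] = int(f.read().strip())   # reported in kHz
    return freqs

if __name__ == "__main__":
    for cpu, khz in sorted(cluster_max_freqs().items()):
        print(f"{cpu}: {khz / 1_000_000:.1f} GHz")
```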

A bigger change is that the CPU IP has been updated from the r2p4 revision found in previous 542X incarnations to an r3p3 core revision. This change, as discussed by Nvidia earlier in the year, should provide better clock gating and better power characteristics on the CPU side of the SoC.
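
The rNpM revision is something that can be read back from a running device: on Linux, /proc/cpuinfo exposes it as the "CPU variant" (rN) and "CPU revision" (pM) fields. A rough sketch, assuming a typical ARM /proc/cpuinfo layout (only online cores are listed, and the exact fields depend on the kernel):

```python
#!/usr/bin/env python3
# Reconstruct the ARM core revision string (rNpM) from /proc/cpuinfo:
# the "CPU variant" field maps to the major revision (rN) and
# "CPU revision" to the minor one (pM), so an r3p3 Cortex A15 should
# report variant 0x3 and revision 3.

def core_revisions(path="/proc/cpuinfo"):
    revisions, variant = [], None
    with open(path) as f:
        for line in f:
            if line.startswith("CPU variant"):
                variant = int(line.split(":")[1], 0)    # e.g. " 0x3"
            elif line.startswith("CPU revision"):
                minor = int(line.split(":")[1])
                revisions.append(f"r{variant}p{minor}")
    return revisions

if __name__ == "__main__":
    print(core_revisions())
```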

On the GPU side, the 5430 offers little difference from the 5422 or 5420 beyond a small frequency boost to 600MHz for the Mali T628MP6.

While this is still a planar transistor process, a few critical changes make 20nm HKMG a significant leap forward from 28nm HKMG. First, instead of a gate-first approach to forming the high-K metal gate, the gate is now the last part of the transistor to be formed. This improves performance because the characteristics of the gate are no longer affected by the temperature extremes of the subsequent manufacturing steps. In addition, lower-k dielectrics in the interconnect layers reduce capacitance between the metal layers, which increases maximum clock speed/performance and reduces power consumption. Finally, improved silicon strain techniques should also improve drive current in the transistors, again enabling higher performance and lower power consumption. The end effect is that we should expect an average voltage drop of about 125mV and, quoting Samsung, a 25% reduction in power.
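
As a rough sanity check on that claim, dynamic power scales to first order with C·V²·f, so the voltage drop alone accounts for most of the quoted savings. A quick back-of-the-envelope calculation (the ~1.0V 28nm starting voltage is an assumption for illustration, not a disclosed figure):

```python
# Rough first-order check of Samsung's 25% figure, using the usual
# dynamic power relation P ~ C * V^2 * f at constant frequency.
# Only the ~125mV average drop is quoted; the starting voltage is assumed.
v_28nm = 1.000           # assumed nominal 28nm operating voltage (V)
v_20nm = v_28nm - 0.125  # quoted ~125mV average drop at 20nm
ratio = (v_20nm / v_28nm) ** 2
print(f"dynamic power ratio: {ratio:.2f} "
      f"(~{(1 - ratio) * 100:.0f}% reduction from voltage alone)")
```

That works out to roughly a 23% reduction before counting the capacitance savings of the smaller process, so the quoted 25% is plausible.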

In terms of auxiliary IP blocks and accelerators, the Exynos 5430 offers a new HEVC (H.265) hardware decoder block, bringing its decoding capabilities on par with Qualcomm’s Snapdragon 805.

Also added is a new Cortex A5 co-processor dedicated to audio decoding, called “Seiren”. Previously Samsung used a custom reprogrammable block called the Samsung Reprogrammable Processor (SRP) for audio tasks, which now seems to have been retired. The new subsystem handles all audio-related tasks, ranging from decoding simple MP3 streams to DTS or Dolby DS1 audio codecs, sample rate conversion, and band equalization. It also provides the chip with voice capabilities, such as voice recognition and voice-triggered device wakeup, without external DSPs. Samsung actually published a whitepaper on this feature back in January, but until now we didn’t know which SoC it was addressing.

The ISP is similar to the one offered in the 5422, which included a clocking redesign and a new dedicated voltage plane.

The memory subsystem remains the same, maintaining the 2x32-bit LPDDR3 interface, able to sustain data rates of up to 2133MT/s, or 17GB/s of bandwidth. We don’t expect any changes in the L2 cache sizes, and as such they remain at 2MB for the A15 cluster and 512KB for the A7 cluster.
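
The 17GB/s number follows directly from the bus width and transfer rate; a one-line check:

```python
# Peak theoretical bandwidth of a 2x32-bit LPDDR3-2133 interface:
# 64 bits per transfer * 2133e6 transfers/s / 8 bits per byte.
bus_width_bits = 2 * 32
transfers_per_second = 2133e6
bandwidth_bytes = bus_width_bits * transfers_per_second / 8
print(f"{bandwidth_bytes / 1e9:.2f} GB/s")   # ~17.06 GB/s
```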

The Galaxy Alpha will be the first device to ship with this new SoC, in early September of this year.

Comments

  • name99 - Thursday, August 14, 2014

    You're saying this with more authority than I think is justified.
    Consider tasks like "handling background notifications" or "geofencing" or "polling a mail server".
    Apple is certainly capable of writing bespoke code to handle these tasks on the M7 or its successor. I've no idea whether they do or not, but I don't see why they COULDN'T.

    I'm not suggesting that they run third party code on the M7 --- that's part of my point that it's difficult to know what code is and is not time sensitive.
  • lilmoe - Thursday, August 14, 2014

    "What makes you say this?"
    Optimum efficiency of a particular SoC != perceived overall platform efficiency (the combination of hardware/software). I'd say even previous Exynos chips like the 5422 are likely to be more efficient under "ideal" use cases than Apple's A7. Apple's platform seems to be more efficient because their chip isn't strained nearly as much as platforms like the GS5. Exynos chips are burdened with higher resolutions, more software overhead, and lots of gimmicky features that aren't ideal for achieving their maximum efficiency levels. True, they have managed all of that well, but not at optimum efficiency.

    There have been lots of design mistakes with past big.LITTLE implementations, both on the hardware and software levels (for understandable reasons). First, the whole point of the dual-cluster system is to have more of the "normal/usual" load handled by the smaller cluster, because it was assumed that most of the load is "low" (UI navigation/scrolling, typing/messaging, making calls, playing videos, etc.), and the higher load should only amount to 10-20% of actual usage (page rendering during browsing, playing games, converting videos, etc.).
    Here are some of the problems observed on the chip design side:
    1) The first big.LITTLE chip (5410) didn't even have HMP enabled, nor a working interconnect (CCI-400), forcing it to run cluster migration only, which isn't the most efficient big.LITTLE implementation.
    2) Even when that was presumably fixed in the 5420/5422, power/clock gating the Cortex A15 cores was still an issue (due to overloading the chips as described above), and Samsung even admitted that while those chips fully supported HMP (all cores being online), their kernels were still implementing cluster migration (a rough way to tell the two modes apart from userspace is sketched after this list). They did that because having all cores online proved less efficient in the long run on current platforms, again for the reasons above. The Cortex A15 cluster was online more often than not, and its clock gating wasn't at its best.
    3) Having that many cores online on a 28nm chip meant that thermal throttling would kick in more often than not (this also resulted in some implementations not reaching their max clock rates).
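
    A rough way to see the difference from userspace is to compare which CPUs the kernel declares present with which are currently online. This is only a sketch assuming the standard Linux CPU sysfs layout, and the exact behaviour depends on the specific kernel and its hotplug policy, but a cluster-migration setup generally never shows both 4-core clusters online at once, while an HMP/GTS kernel under load can show all eight cores online:

    ```python
    # Compare the CPUs the kernel declares present with those currently online.
    # On an HMP (GTS) kernel under load both clusters (e.g. "0-7") can be
    # online simultaneously; a cluster-migration setup won't show more than
    # one cluster's worth of cores online at any given time.
    def read(path):
        with open(path) as f:
            return f.read().strip()

    present = read("/sys/devices/system/cpu/present")   # e.g. "0-7"
    online  = read("/sys/devices/system/cpu/online")    # e.g. "0-3" or "0-7"
    print(f"present: {present}, online: {online}")
    ```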

    This led many to dismiss the whole efficiency argument of big.LITTLE. It was a bit late, but the issue was partially addressed by ARM with the r3p3 revision of the Cortex A15, and is presumably handled better in the Cortex A57. Competition (Krait and Swift/Cyclone) wasn't helping either, and OEMs like Samsung were rushing chips out to compete before further optimizations. ARM was VERY late to provide an alternative, the Cortex A12, which therefore was never used.

    Samsung has obviously tried using lots of hardware tricks and hacks to solve these issues, and it kind of worked. But now, with better know-how/experience, the newer revision of the Cortex A15, the smaller and more efficient 20nm process, and a plethora of more efficient co-processors and ISPs/DSPs, the power/efficiency curve should be better than ever and HMP (GTS) should be working as intended. Samsung claims 25% efficiency from the process shrink alone, but if you factor in all the other improvements, one would conclude that it would be closer to 50-60% more efficient (which is a LOT more than you'd get from the *current* competition). Apple's M7 (as described by extide below) is overhyped; it can only do so much and isn't nearly as feature-rich as the co-processors added by Samsung to their newer chips.

    To reiterate: the efficiency of the chip is only one side of the story. On the software and device side, well, you have Android's overhead and other gimmicks contributing to the overall inefficiencies of the platform. Your "normal" load patterns shouldn't exceed a certain threshold for these chips to perform at optimum efficiency, and the small cores shouldn't be limited to running background tasks alone. Most of the foreground UI tasks should also be handled by the little cores. It isn't as hard to detect as you imply. There are lots of other factors kicking in:
    1) Screen resolutions were shooting through the roof. 1080p isn't helping as is, and some were even pushing all the way up to QHD. This isn't efficient at all, and while 1080p is 2.25x the pixels of 720p, the extra power required was well above 200% compared to only pushing 720p (a quick pixel-count comparison is sketched after this list). You notice that right away because most Exynos chips got really warm during presumably "trivial" tasks.
    2) Since the Dalvik VM wasn't the most efficient implementation, Cortex A15s were going online for trivial tasks such as GC (!!!). True, Android was getting better and better at standby and background task handling, but once that screen is turned on, watch your battery drain and the back of the phone get HOT!
    3) Android gives too much freedom to app devs. Most Android apps aren't as efficient as they need to be. One app could virtually keep the big cluster on for a significant amount of time.
    4) OEM skins weren't helping either. Sure, they're getting better, but they're nowhere near as lightweight and efficient as they should be. There's absolutely NO NEED for the big cores to turn on just to handle trivial UI tasks!
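
    For a sense of scale on point 1, here is a quick pixel-count comparison (raw panel resolutions only; actual GPU and display power obviously depend on far more than pixel count):

    ```python
    # Raw pixel counts of common phone resolutions relative to 720p:
    # 1080p already pushes 2.25x the pixels, QHD 4x.
    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "QHD": (2560, 1440)}
    base = 1280 * 720
    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 720p)")
    ```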

    The solution? Well, we have Android L now, and according to Google, efficiency and performance should improve significantly (1.5x all the way up to 2x). GC, memory management, and UI rendering have been significantly improved as well. This should be enough to allow the smaller cores to run more of the tasks, and therefore shoot efficiency through the roof as it should be with big.LITTLE.

    Devices with the new Exynos 5430/5433 should see significant improvements, especially devices like the Galaxy Alpha since, for starters, the chip doesn't need to push as many pixels, so most of the strain is gone and the power draw should stay close to optimum levels. We'll still have to see real-world numbers to prove all that, and I'm watching closely here and there to see how things unfold (XDA mostly, since that's where the kernel devs are. These guys tell you exactly what's going on).
  • name99 - Friday, August 15, 2014

    Thanks for the long (and unlike so many on the internet, polite) reply. However, I'm afraid I'm not convinced; I gave my reasoning before, no point in typing it again.

    Presumably by the end of September we'll have both phones reviewed on AnandTech and we shall see. My personal expectation is for the WiFi web browsing battery life benchmark to reveal higher efficiency (ie longer lifetime per battery capacity) for iPhone 6 than for Alpha. That's, I think, the closest benchmark to what we all ACTUALLY care about in this context, which is some sort of generalized "system efficiency under lowish load".

    Six weeks to go...
  • lilmoe - Friday, August 15, 2014

    I'm not going to debunk the entire WiFi web browsing battery benchmark here at AnandTech. But normal usage doesn't always conform to the numbers presented. The iPhone 5S scores higher than the Galaxy Note 3 in that test, and the GS5 scored better than both, yet we all know which of the three lasts longer during normal/heavy usage, and that's definitely the Note 3. Battery life benchmarks aren't supposed to be about one specific aspect of usage, but about the package as a whole. That's what I believe matters most, and you can't test that with a single simple benchmark. You'd have to own both devices, use them both under equal conditions (as equal as you can get), and then record your findings.
    Browser and video playback benchmarks give you a small glimpse of what you might expect, but aren't nearly half the story.

    That said, Apple has done a remarkable job optimizing their software, and one would assume that the A7/A8 are performing at their maximum efficiency levels. However, the Galaxy Alpha, or any other Android phone for that matter, will not show its maximum efficiency (battery life) potential until it's loaded with Android L. Check out how Android L improved battery life on the Nexus 5; it's quite remarkable for Android. There are also concerns about the actual size of the battery inside the Alpha relative to other Android smartphones, but if it's closer in size to the one inside the iPhone 6 then we'll probably have a good comparison, especially if the A8 is also 20nm.

    Also, the Note 4 and the Exynos 5433 will be entering the comparison as well ;)
  • Andrei Frumusanu - Friday, August 15, 2014

    Your second point on the 5420/5422's power management is very incorrect.
  • lilmoe - Friday, August 15, 2014

    Maybe, but it's something I read in several articles a little while back. Some of them stated that the Note 3 fully supports HMP but that cluster migration was the switching method actually implemented.
  • kron123456789 - Thursday, August 14, 2014

    The problem with the Tegra K1 is its CPU, not its GPU. The GPU is really more efficient, but the CPU... not so much. That's why I wanna see how Denver will work. BTW, the K1 still has lower power consumption than Tegra 4.
  • saliti - Thursday, August 14, 2014

    I guess Note 4 will have 20nm Exynos 5433 with Mali T760.
  • Laststop311 - Thursday, August 14, 2014

    No, the Note 4 will have the Snapdragon 805 (same CPU architecture as the 801 but new Adreno 4xx graphics).
  • lilmoe - Thursday, August 14, 2014

    You're crazy if you want the SD805 over the newer 20nm Exynos... They already support LTE now, and the Note 4's Exynos will most probably be significantly faster.
