
  • twotwotwo - Wednesday, August 13, 2014 - link

    The perils of following this stuff too closely--if I get that nice-looking ARM Acer now, I'll have to sit and watch as stuff like this, the Denver-based K1, and A5x-based SoCs all roll out.
  • TheJian - Thursday, August 14, 2014 - link

    Not really, not if you're after the GPU. This T628 won't be much of a challenge for the K1. Not sure why they didn't go with a better GPU; maybe they're just testing the waters with a less complicated design on 20nm? Or maybe it really fills some need they had, so they made it anyway.
    "Its metal construction suggests a flagship smartphone position, but its weaker specifications relative to the Galaxy S5 place it closer to the mid-tier category.

    "It feels very much like an experiment to me, like so many other Samsung devices in the past," said Jackdaw Research analyst Jan Dawson."

    They chose the wrong GPU, or maybe it could have been called a flagship. That said, it's their first device to go metal (fixing the plastic complaints), and it should have great battery life considering the modest GPU and the new process. Maybe this will be in the $400 range or something. Either way, you're OK buying a Chromebook, but yeah, I'm waiting for the 20nm K1/M1...LOL. FAR too big of a jump to ignore for my uses. Meaning whatever device I pick gets hooked up to the TV for gaming (either streamed from the PC, or just Android games output to the TV with a gamepad, etc.); the rest is just bonus uses in or out of the house. Like couch browsing, or hooking it to the TV for movie viewing (my Blu-ray player chokes on some files, Android with all its players won't, and it's a portable player out of the house too).

    No harm buying now if you have a need. But 20nm is going to be big for mobile across the board on the high end. I don't know if the Denver K1 will get a die shrink or they'll just save that for the July M1 (Maxwell-based GPU). But I'll be waiting for the 20nm version either way. Again, it's only because my main use is gaming when I get off the PC (I can only sit in a chair so long these days, so I need a couch/bed break...LOL).
  • galanoth - Thursday, August 14, 2014 - link

    You said "They chose the wrong gpu or maybe it could have been called flagship."
    GSMArena's benchmarks show the Exynos 5430 easily has flagship-SoC power,
    with the Mali GPU even besting Snapdragon 801 devices in offscreen tests.
  • kron123456789 - Thursday, August 14, 2014 - link

    How is this possible? It's the same GPU as in the Exynos 5420/5422. Anyway, it should be a competitor to the Snapdragon 805, not the 801.
  • galanoth - Thursday, August 14, 2014 - link

    Maybe being built on 20nm instead of 28nm gives it a performance boost.
    I think the 5430, with its Mali T628, is a competitor to the Snapdragon 800/801.
    There's another chip, the 5433(?), with Mali T760 graphics.
    That's probably the Snapdragon 805 competitor.
  • kron123456789 - Thursday, August 14, 2014 - link

    Or maybe GSMArena is mistaken and it's actually something like a Mali T760MP4? Because 20nm instead of 28nm may give a performance boost, but not 2x.
  • lilmoe - Thursday, August 14, 2014 - link

    Probably did some driver optimization to run OpenGL ES 3.0 as it should. Mali GPUs are usually more optimized for OpenGL ES 2.0 and are almost always the fastest there.
  • Andrei Frumusanu - Thursday, August 14, 2014 - link

    The 5422 was just a few frames away from the 801. Given the small frequency boost and the memory bandwidth boost, the scores are exactly where you would expect them to be; no big surprise there.
  • kron123456789 - Thursday, August 14, 2014 - link

    Not in the Manhattan offscreen test. The Galaxy S5 with the Exynos gets 8.6 fps in that test, and the Galaxy S5 with the Snapdragon gets 11.8 fps. The Galaxy Alpha gets 13.4 fps, which is over a 50% performance increase (not 2x, but still a lot). The T-Rex offscreen test shows a much smaller difference, though.
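    For reference, here is the arithmetic behind those percentages, using the fps figures quoted above (as reported in the comment, not independently verified):

    ```python
    # Manhattan Offscreen fps figures quoted in the comment above (unverified).
    exynos_s5 = 8.6       # Galaxy S5, Exynos 5422
    snapdragon_s5 = 11.8  # Galaxy S5, Snapdragon 801
    alpha = 13.4          # Galaxy Alpha, Exynos 5430

    def pct_gain(new, old):
        """Percentage increase of `new` over `old`."""
        return (new - old) / old * 100

    print(f"Alpha vs Exynos S5:     {pct_gain(alpha, exynos_s5):.1f}%")     # ~55.8%
    print(f"Alpha vs Snapdragon S5: {pct_gain(alpha, snapdragon_s5):.1f}%")  # ~13.6%
    ```

    So "over 50%" against the Exynos-based S5 checks out, but the gap over the Snapdragon 801 version is much smaller.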
  • lilmoe - Thursday, August 14, 2014 - link

    As I said before, Samsung and ARM most probably fixed their OpenGL ES 3.0 drivers (which was overdue). That, together with the new process, produces the numbers you see now.
    The newer drivers should work with older Mali T628 GPUs as well, probably arriving in future firmware updates.
  • name99 - Thursday, August 14, 2014 - link

    "Not sure why they didn't go with a better gpu, maybe just testing the waters with a less complicated version for 20nm?"

    The whole product seems very much like an attempt to call FIRST for 20nm, i.e. a short-sighted marketing "triumph" (headlines today, forgotten tomorrow). There's no attempt to exploit the additional headroom 20nm gives you (i.e. run faster, use more transistors); it looks like all they did was "recompile" an existing CPU, make the absolute minimum changes needed to get the damn thing to work, and rush it out the door.
  • lilmoe - Thursday, August 14, 2014 - link

    I don't know why you say that; Samsung never claimed this was their flagship chip for the new process. I suggest you hold your assumptions until the 5433 is out...
  • GC2:CS - Friday, August 15, 2014 - link

    Yeah, agreed. They needed to ramp up 20nm production, so they put the Exynos 5422 onto a 20nm die and made a new phone that will be built in limited quantities (I heard just a million units). Then, once 20nm has ramped up somewhat, they quickly put in a 64-bit octa-core that will power some small fraction of Galaxy Note 4s and claim FIRST for a 64-bit octa-core as well.
    I have a strong feeling that Android OEMs would kill to prevent Apple from introducing the world's second 64-bit phone....
  • jameskatt - Thursday, August 14, 2014 - link

    No need to suffer envy.
    If the new one is better than what you have, buy it.
    Then sell the old one.
    There is always something new coming around.
    The rush of being king of the hill is always fleeting.
    So keep buying to keep that rush going.
  • danielfranklin - Wednesday, August 13, 2014 - link

    So I guess this pretty much confirms that Apple's new SoC next month will be on 20nm...
    I was wondering how they would achieve their 2x performance increase this year if it wasn't ready in time; looks like it is, though.
  • dakishimesan - Thursday, August 14, 2014 - link

    But Apple has been in volume production for months already.
  • Spunjji - Thursday, August 14, 2014 - link

    Given that Samsung has just released a phone based on this chip, it's safe to say it must have been in production for some time as well. Granted, there's animosity between Samsung and Apple, but it would be odd for Samsung to keep their biggest foundry customer from helping recoup costs on their shiny new 20nm process by withholding early access.
  • melgross - Thursday, August 14, 2014 - link

    Apple isn't using Samsung this year. They are using TSMC, which is why Samsung announced that due to the loss of a major customer, their fab sales would be significantly lower than expected.
  • sigmatau - Friday, August 15, 2014 - link

    Actually they are using Samsung in addition to TSMC. Neither one can output the amount of 20nm silicon that they require so they will be using both.
  • dylan522p - Monday, August 18, 2014 - link

    Pretty much everyone states it's all TSMC. TSMC's and Samsung's 20nm processes are different; no way in hell they would do that.
  • Gondalf - Thursday, August 14, 2014 - link

    No 2x this year, obviously :).
    My bet is the A8 is pretty similar to the A7 with some sort of turbo (one core) up to 2GHz; there isn't space for other things at 20nm.
    The main Apple news will be the form factor and the display, which is why, IMO, the iPhone 6 will not sell much more than the iPhone 5.
  • kwrzesien - Thursday, August 14, 2014 - link

    "The main Apple news will be the form factor and the display, this is the reason IMO iPhone 6 will not sell much more than iPhone 5."

    That is *exactly* why it will sell massively well, as much as they can deliver, and with as much or more excitement for it as the iPhone 5.
  • GC2:CS - Friday, August 15, 2014 - link

    I think you are rather underestimating their "magic"; they aren't newbies in this space anymore, and they don't fear making drastic changes in chip design every year.
    But we will see.
  • jjj - Wednesday, August 13, 2014 - link

    Failed to notice this one. I wonder how far it can clock in a bigger phone; maybe we'll see it in the Note 4.
  • HillBeast - Wednesday, August 13, 2014 - link

    "While details are somewhat sparse, this new SoC is a big.LITTLE design with four Cortex A15s running at 1.8 GHz and four Cortex A7s running at 1.3 GHz for the CPU side"

    Seriously Samsung... ARMv7 is getting old now. Switch to ARMv8.
  • JoshHo - Wednesday, August 13, 2014 - link

    ARMv7 will probably be around until 2015 for Android phones.
  • Alexvrb - Thursday, August 14, 2014 - link

    Outside of Apple, does anyone even have a phone on the market with a 64-bit ARMv8-A chip and full OS support? Not that I really think there's a huge rush; without software support it won't help performance, and on mobile we're not quite at the memory limit anyway.
  • madmilk - Thursday, August 14, 2014 - link

    Most Android code is in Java, so only ART has to support ARMv8 to reap most of the benefits.
  • melgross - Thursday, August 14, 2014 - link

    Samsung can't innovate too much with their designs because unlike Apple and Qualcomm, they don't have an architecture license, just a processor license. That limits what they can do to process innovations and minor modifications to ARM's original designs.
  • ArthurG - Wednesday, August 13, 2014 - link

    and still no article on Nvidia Denver, the most interesting CPU in a long time...
  • twotwotwo - Wednesday, August 13, 2014 - link

    Not all AT's fault--I think if NV mailed them a Denver gadget to play with, we'd hear about it. :)
  • Ryan Smith - Thursday, August 14, 2014 - link

    We're hard at work on Denver. It hasn't been ignored; it just takes time to go through it all.
  • GC2:CS - Thursday, August 14, 2014 - link

    Well, if it's the same story as the GPU, I'm not interested at all.
  • kron123456789 - Thursday, August 14, 2014 - link

    What story?
  • GC2:CS - Thursday, August 14, 2014 - link

    Big words about "PC-class performance" and "incredible efficiency", all those incredible technologies they implemented and how it blows away the competition; then the products arrive after all those months and we see some nice performance numbers, but the chip is as power hungry as a kitchen sink.
    Compared to the A7, they claimed 3 times the performance and 50% higher efficiency back in January, and today we got 2.4 times the performance at 7W, which as far as I know is 2 to 3 times as much power as the A7. And that's a year-old SoC, built on a potentially less efficient process, that matches Nvidia's best GPU offering in terms of efficiency.... I'm just asking myself: where is the revolution I'm supposed to be interested in? I'd be more interested in a detailed comparison on this site between last year's iPad mini with Retina display and the Nvidia Shield tablet, which might prove me wrong, but nothing has so far.
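    A quick sanity check of that reasoning, using only the claims above (the A7's GPU power is an assumption here: 2.5x less than the K1, splitting the "2 to 3 times" estimate):

    ```python
    # All figures are claims from the discussion above, not measurements.
    perf_ratio = 2.4               # claimed K1 GPU performance vs A7
    k1_power_w = 7.0               # claimed K1 GPU power draw
    a7_power_w = k1_power_w / 2.5  # assumed: K1 draws "2 to 3 times" the A7; take 2.5x

    # Performance per watt, K1 relative to A7; >1.0 would mean the K1 is more efficient.
    efficiency_ratio = perf_ratio / (k1_power_w / a7_power_w)
    print(f"{efficiency_ratio:.2f}")  # ~0.96
    ```

    Under these assumed numbers, the two chips land at roughly equal perf/W, nowhere near the claimed 50% efficiency advantage, which is the commenter's point.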
  • lilmoe - Thursday, August 14, 2014 - link

    The overheating and short battery life of the iPhone 5S totally contradict your "facts"...
  • GC2:CS - Thursday, August 14, 2014 - link

    And still, the iPhone with its tiny 5.92 Wh battery can withstand ~114 minutes at full GPU load, which is not so much less than a K1 tablet with over 3x the battery size running close to its full load for ~135 minutes. The iPhone is merely warm under load, so either its chip doesn't run at 85 degrees like Tegra, or I should get rid of the thick skin on my hands.

    This actually supports my "facts", as you just can't put a Tegra into a very thin 4" phone and run it off a downright feature-phone-sized battery like the A7.
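    The back-of-the-envelope average power draw implied by those runtimes (battery figures from the comment; the Shield's capacity is an assumption of exactly 3x the iPhone's, per the "over 3x" estimate):

    ```python
    # Figures quoted in the comment above; Shield capacity is an assumed 3x multiple.
    iphone_wh, iphone_min = 5.92, 114
    shield_wh, shield_min = 5.92 * 3, 135

    def avg_watts(wh, minutes):
        """Average power draw implied by draining `wh` watt-hours in `minutes`."""
        return wh / (minutes / 60)

    print(f"iPhone 5s under GPU load: {avg_watts(iphone_wh, iphone_min):.1f} W")    # ~3.1 W
    print(f"Shield tablet under GPU load: {avg_watts(shield_wh, shield_min):.1f} W")  # ~7.9 W
    ```

    Whole-device draw, not just the SoC, but it illustrates the roughly 2.5x gap the comment is arguing about.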
  • lilmoe - Thursday, August 14, 2014 - link

    1) Tegra's GPU is HUGE, and the performance difference is night and day (the Tegra K1 being 2x-3x faster). Nvidia is also relaxing the thermal headroom to showcase its power.
    2) Tegra is driving more pixels on the Shield tablet.
    3) Tegra is still 28nm, which obviously isn't efficient enough for the initially intended power envelope of the GPU. Nvidia has been pretty vocal about how displeased they are with TSMC's 20nm process.
    4) This chip isn't intended for smartphones in the first place.

    Back on topic: I'm reading lots of articles about the new Exynos and Samsung's 20nm process here and there. This seems to be the most power-efficient high-performance ARM chip to date. It will most likely be more efficient than Apple's A8 if Apple sticks with their dual-core Cyclone setup and a faster GPU. big.LITTLE is fully and *truly* functional this time around, and the Linux kernel is getting better and better at supporting it. I seriously won't be surprised if the Galaxy Alpha's smaller battery lasts just as long as the GS5's. Samsung has loaded that chip with efficient co-processing, signaling, and encoding/decoding features. The screen also appears to be built on a newer, better process.
  • name99 - Thursday, August 14, 2014 - link

    "It will most likely be more efficient than Apple's A8 if Apple decides to stick with their dual core Cyclone setup and a faster GPU. big.LITTLE is fully and *truly* functional this time around, "

    What makes you say this? I understand your claim to be that this SoC will be more efficient than A8 because of big.LITTLE.
    That's THEORETICALLY possible, in the sense that big.LITTLE can steer work that it knows can be performed slowly to the (slower, more power efficient) core. But there's a whole lot of gap between that theory and reality.

    The first problem is the question of how much work fits this category of "known to be able to run slowly". Android and iOS obviously both have various categories of background work of this sort, but we won't know, until the phone is released, how well the combination of the latest Android plus Samsung's additions actually performs. You're asserting "big.LITTLE is fully and *truly* functional this time around" based on hope, not evidence; AND you're assuming that this type of background work makes up a substantial portion of the energy load. I'm not sure this second assumption is true. It obviously is if you use your phone primarily as a phone, ie you look at it a few times a day to scan notifications and do nothing else. Well, when I use my iPhone 5 that way, it already gets about 4 days of battery life. Improving that is nice, but it's not going to excite anyone, and it's not going to show up in reviews.
    The problem is that the code that runs based on user interaction is a lot harder to segregate into "can run slow" vs "must run as fast as possible", and that's what's needed to reduce energy consumption for common usage patterns. THIS depends as much on the quality of the APIs, the quality of their implementation, and the quality of the developers exploiting those APIs as it does on the hardware. Samsung has no control over this.

    The second problem is that you assume Apple doesn't have the equivalent of a low-power core. But they do, in the 5S (the M7 chip), and presumably will going forward. This is described as a sensor fusion chip, but I expect that Apple actually considers it the "slow, low power" chip, and within their OS they move as much as possible background work off to it. So we're further reducing the real-world gap between big.LITTLE and iOS. At the high end, as I said in part 1, the work can't easily be segregated into fast vs slow. At the low end, the work that can be segregated as slow has its equivalent of a companion core to run on.

    The third problem, for the mid-range, is that we have no idea how much dynamic range control Apple has for Cyclone. For example, it certainly appears to be a clustered design, based on two 3-wide clusters (basically take the execution guts of Swift and duplicate them). If this is so, an obvious way to run the CPU slower and cooler (if you know that this is acceptable) is to shut down one of the clusters. There are many other such tricks available to CPUs; for example, you can shut down, say, 3/4 of the branch prediction storage, or some of the ways in the I or D cache. It's all a balancing act as to what gives you the best bang for the buck.

    The point is:
    (a) power-proportional computing (as opposed to "hurry up and sleep") is a legitimate strategy BUT it relies on knowing that code can run slowly. This knowledge is hard to come by outside some specialized areas.
    (b) there are multiple ways to implement power-proportional computing. ONE is big.LITTLE; another is a companion core (which Apple definitely has); and a third is to shut down micro-architectural features on the CPU (which Apple could do, and for all we know, already does).
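    The "known to be able to run slowly" distinction above can be sketched as a toy placement policy (purely illustrative; a real scheduler infers load dynamically rather than reading a static flag):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        latency_sensitive: bool  # will the user notice if this runs slowly?

    def place(task: Task) -> str:
        """Toy big.LITTLE steering: only work *known* to tolerate running slowly
        goes to the little (power-efficient) cluster. Everything else must default
        to the big cluster, which is exactly the hard part described above:
        knowing which bucket a given piece of work belongs in."""
        return "little" if not task.latency_sensitive else "big"

    print(place(Task("mail poll", latency_sensitive=False)))  # little
    print(place(Task("UI scroll", latency_sensitive=True)))   # big
    ```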
  • extide - Thursday, August 14, 2014 - link

    Uh dude, the M7 is a Cortex-M; definitely not a companion core, man. It is absolutely not doing any OS background tasks; it's just a sensor fusion hub. That's about all it has the capability to do anyway. Those little processors are quite slow... I mean typically in the 40-80 MHz range, and that is not a typo. Plus they run the Thumb instruction set, which is different from the regular ARM instruction set, so you would need to convert the instructions (using the Cyclone cores) first. It would not save any power and would be pointless.
  • name99 - Thursday, August 14, 2014 - link

    You're saying this with more authority than I think is justified.
    Consider tasks like "handling background notifications" or "geofencing" or "polling a mail server".
    Apple is certainly capable of writing bespoke code to handle these tasks on the M7 or its successor. I've no idea whether they do or not, but I don't see why they COULDN'T.

    I'm not suggesting that they run third party code on the M7 --- that's part of my point that it's difficult to know what code is and is not time sensitive.
  • lilmoe - Thursday, August 14, 2014 - link

    "What makes you say this?"
    Optimum efficiency of a particular SoC != perceived overall platform efficiency (the combination of hardware and software). I'd say even previous Exynos chips like the 5422 are likely more efficient under "ideal" use cases than Apple's A7. Apple's platform seems more efficient because their chip isn't strained nearly as much as on platforms like the GS5. Exynos chips are burdened with higher resolutions, more software overhead, and lots of gimmicky features that aren't ideal for achieving their maximum efficiency levels. True, they have managed all of that well, but not at optimum efficiency.

    There have been lots of design mistakes in past big.LITTLE implementations, both at the hardware and software levels (for understandable reasons). First, the whole point of the dual-cluster system is to have more of the "normal/usual" load handled by the smaller cluster, because it was assumed that most of the load is "low" (UI navigation/scrolling, typing/messaging, making calls, playing videos, etc.), and the higher load should only amount to 10-20% of actual usage (page rendering during browsing, playing games, converting videos, etc.).
    Here are some of the problems observed on the chip design side:
    1) The first big.LITTLE chip (5410) didn't even have HMP enabled or a working interconnect (CCI-400), forcing it to run cluster migration only. That isn't the most efficient big.LITTLE implementation.
    2) Even when that was presumably fixed in the 5420/5422, power/clock gating the Cortex A15 cores was still an issue (due to overloading the chips as described above), and Samsung even admitted that while those chips fully supported HMP (all cores being online), their kernels still implemented cluster migration. They did that because having all cores online proved less efficient in the long run on current platforms, again for the reasons above. The Cortex A15 cluster was online more often than not, and its clock gating wasn't at its best.
    3) Having that many cores online on a 28nm chip meant that thermal throttling kicked in more often than not (this also resulted in some implementations not reaching their max clock rate).
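    The difference between cluster migration and HMP/GTS described in points 1 and 2 can be sketched roughly like this (thresholds and loads are made-up illustrative numbers, not Samsung's actual tunables):

    ```python
    BIG, LITTLE = "A15", "A7"
    UP_THRESHOLD = 0.7  # load above which work is treated as "big-core" work

    def cluster_migration(total_load):
        """5410-style switching: the SoC exposes one cluster at a time, so a
        single heavy task drags *all* work onto the power-hungry A15 cluster."""
        return BIG if total_load > UP_THRESHOLD else LITTLE

    def global_task_scheduling(task_loads):
        """HMP/GTS-style: all eight cores are visible and each task is placed
        individually, so light tasks can stay on the A7s."""
        return {name: (BIG if load > UP_THRESHOLD else LITTLE)
                for name, load in task_loads.items()}

    loads = {"render": 0.9, "audio": 0.1, "sync": 0.05}
    print(cluster_migration(sum(loads.values())))  # A15: everything migrates
    print(global_task_scheduling(loads))           # only 'render' lands on an A15
    ```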

    This led many to debunk the whole efficiency argument of big.LITTLE. It was a bit late, but the issue was partially addressed by ARM with the r3p3 revision of the Cortex A15 and presumably handled better with the Cortex A57. Competition (Krait and Swift/Cyclone) wasn't helping either, and OEMs like Samsung were rushing chips out to compete before further optimizations. ARM was VERY late to provide an alternative, the Cortex A12, which therefore was never used.

    Samsung has obviously tried lots of hardware tricks and hacks to solve these issues, and it kind of worked. But now, with better know-how and experience, the newer revision of the Cortex A15, the smaller, more efficient 20nm process, and a plethora of more efficient co-processors and ISPs/DSPs, the power/efficiency curve should be better than ever and HMP (GTS) should be working as intended. Samsung claims 25% better efficiency from the process shrink alone, but if you factor in all the other improvements, one would conclude it's closer to 50-60% more efficient (which is a LOT more than you'd get from the *current* competition). Apple's M7 (as described by extide below) is overhyped; it can only do so much and isn't nearly as feature-rich as the co-processors Samsung added to their newer chips.

    To reiterate: the efficiency of the chip is only one side of the story. On the software and device side, you have Android's overhead and other gimmicks contributing to the overall inefficiencies of the platform. Your "normal" load patterns shouldn't exceed a certain threshold in order for these chips to perform at optimum efficiency, and the small cores shouldn't be limited to running background tasks alone; most foreground UI tasks should also be handled by the little cores. It isn't as hard to detect as you imply. There are lots of other factors kicking in:
    1) Screen resolutions were shooting through the roof. 1080p isn't helping as it is, and some were even pushing well above QHD. This isn't efficient at all; 1080p has 2.25x the pixels of 720p, and the extra power required scaled even worse than that versus pushing only 720p. You notice it right away because most Exynos chips got really warm during presumably "trivial" tasks.
    2) Since the Dalvik VM wasn't the most efficient implementation, Cortex A15s were coming online for trivial tasks such as GC(!!!). True, Android was getting better and better at standby and background task handling, but once that screen turned on, watch your battery drain and the back of the phone get HOT!
    3) Android gives app devs too much freedom. Most Android apps aren't as efficient as they need to be; one app can virtually keep the big cluster on for a significant amount of time.
    4) OEM skins weren't helping either. Sure, they're getting better, but they're nowhere near as lightweight and efficient as they should be. There's absolutely NO NEED for the big cores to turn on just to handle trivial UI tasks!
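    On the resolution point, the pixel-count arithmetic looks like this:

    ```python
    # Pixel counts for the common phone resolutions mentioned above.
    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "QHD": (2560, 1440)}
    pixels = {name: w * h for name, (w, h) in resolutions.items()}

    base = pixels["720p"]
    for name, count in pixels.items():
        print(f"{name}: {count:>8} px, {count / base:.2f}x of 720p")
    # 1080p pushes 2.25x the pixels of 720p; QHD pushes 4x.
    ```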

    The solution? Well, we have Android L now, and according to Google, efficiency and performance should improve significantly (1.5x all the way up to 2x). GC, memory management, and UI rendering have been significantly improved as well. This should be enough to allow the smaller cores to run more of the tasks, and therefore push efficiency through the roof, as it should be with big.LITTLE.

    Devices with the new Exynos 5430/5433 should see significant improvements, especially the Galaxy Alpha since, for starters, the chip doesn't need to push as many pixels, so most of the strain is gone and the power draw should stay close to optimum levels. We'll still have to see real-world numbers to prove all that, and I'm watching closely here and there to see how things unfold (XDA mostly, since that's where the kernel devs are; those guys tell you exactly what's going on).
  • name99 - Friday, August 15, 2014 - link

    Thanks for the long (and unlike so many on the internet, polite) reply. However, I'm afraid I'm not convinced; I gave my reasoning before, no point in typing it again.

    Presumably by the end of September we'll have both phones reviewed on AnandTech and we shall see. My personal expectation is for the WiFi web browsing battery life benchmark to reveal higher efficiency (ie longer lifetime per battery capacity) for iPhone 6 than for Alpha. That's, I think, the closest benchmark to what we all ACTUALLY care about in this context, which is some sort of generalized "system efficiency under lowish load".

    Six weeks to go...
  • lilmoe - Friday, August 15, 2014 - link

    I'm not going to debunk the entire WiFi web browsing battery benchmark here at AnandTech, but normal usage doesn't always conform to the numbers presented. The iPhone 5S scores higher than the Galaxy Note 3 in that test, and the GS5 scored better than both, yet we all know which of the three lasts longest during normal/heavy usage, and that's definitely the Note 3. Battery life benchmarks aren't supposed to be specific to one aspect of usage, but to the package as a whole. That's what I believe matters most, and you can't test that with a single simple benchmark. You'd have to own both devices, use them both under equal conditions (as equal as you can get), then record your findings.
    Browser and video playback benchmarks give you a small glimpse of what to expect, but aren't nearly half the story.

    That said, Apple has done a remarkable job optimizing their software, and one would assume the A7/A8 perform at their maximum efficiency levels. However, the Galaxy Alpha, or any other Android phone for that matter, won't show its maximum efficiency (battery life) potential until it's loaded with Android L. Check out how Android L improved battery life on the Nexus 5; it's quite remarkable for Android. There are also concerns about the actual size of the battery inside the Alpha relative to other Android smartphones, but if it's closer in size to the one inside the iPhone 6, then we'll probably have a good comparison, especially if the A8 is also 20nm.

    Also, the Note 4 and the Exynos 5433 will be entering the comparison as well ;)
  • Andrei Frumusanu - Friday, August 15, 2014 - link

    Your second point on the 5420/5422's power management is very incorrect.
  • lilmoe - Friday, August 15, 2014 - link

    Maybe, but it's something I've read in several articles a little while back. Some were stating that the Note 3 fully supports HMP, but that cluster migration was the switching method implemented.
  • kron123456789 - Thursday, August 14, 2014 - link

    The problem with the Tegra K1 is its CPU, not the GPU. The GPU really is more efficient, but the CPU... not so much. That's why I want to see how Denver does. BTW, the K1 still draws less power than the Tegra 4.
  • saliti - Thursday, August 14, 2014 - link

    I guess the Note 4 will have the 20nm Exynos 5433 with a Mali T760.
  • Laststop311 - Thursday, August 14, 2014 - link

    No, the Note 4 will have the Snapdragon 805 (same CPU architecture as the 801 but with new Adreno 4xx graphics).
  • lilmoe - Thursday, August 14, 2014 - link

    You're crazy if you want the SD805 over the newer 20nm Exynos... They support LTE now, and the Note 4's will most probably be a good bit faster.
  • RussianSensation - Friday, August 15, 2014 - link

    Actually, it will have a choice of two processors depending on the market/region: the Snapdragon 805 and the Exynos 5433.
  • darkich - Thursday, August 14, 2014 - link

    Well, something doesn't make sense about that combination.
    If it's 20nm, why is Samsung clocking the cores at only 1.3GHz?

    Presumably the answer could be that the cores are Cortex A57s and A53s, but the Snapdragon 810 is reported to be clocked at 2GHz with the same basic core design and 20nm node.
  • darkich - Thursday, August 14, 2014 - link

    ...anyway, my bet is that the 5433 is a 28nm HPM endeavor with A5x big.LITTLE clocked at 1.3GHz.
  • Laststop311 - Thursday, August 14, 2014 - link

    Samsung's SoCs are never the best performing. Snapdragon has routinely beaten them over and over in performance and efficiency. I can almost guarantee the Snapdragon 810 will beat this too.
  • saliti - Thursday, August 14, 2014 - link

    I don't think this is their high-end SoC. This is certainly not meant to compete with the Snapdragon 810.
  • kron123456789 - Thursday, August 14, 2014 - link

    Devices with the S810 will only show up in 2015, and by then there will be a new Exynos chip.
  • hung2900 - Thursday, August 14, 2014 - link

    You don't know anything.
    The Samsung Hummingbird (first-gen Galaxy S) vs the Snapdragon S2: same CPU but a much better GPU.
    The Samsung Exynos 4210 (Galaxy S2, later the Note 1): miles ahead of the Crapdragon S3.
    The Samsung Exynos 4412 (Galaxy S3, Korean version with 2GB RAM): better than the dual-core Snapdragon of that time, and Tegra 3 of course.
  • kron123456789 - Thursday, August 14, 2014 - link

    But at the end of 2012 there was the Snapdragon S4 Pro...
  • lilmoe - Thursday, August 14, 2014 - link

    And you probably haven't the slightest idea what you're talking about...
  • lilmoe - Thursday, August 14, 2014 - link

    OK, now the 1860mAh battery in Samsung's new metal-framed phone makes more sense. I can't wait to see real-world numbers.
  • GC2:CS - Thursday, August 14, 2014 - link

    Yeah, the Alpha is an experiment. I heard it's going to be a limited special-production device (who will even buy that?) with just a million units produced by the end of the year. The price is so high because of the metal, and I've heard about very bad yield rates on the 20nm process for now. Samsung had to ramp the production up on some real-world devices, so they created the Alpha to kick it off. They will possibly continue this with the octa version of the Note 4, the Exynos 5433, which I think is just a 5430 but heavily overclocked to fit the battery size (the 28nm Exynos 5422 has 2.1/1.5 GHz clock speeds).
    I would not expect Apple to use this process; last year Samsung started their 28nm experimental ramp-up with the introduction of the first Exynos Octa in January, and they needed almost 8 months from that announcement to reach big mass production of the Apple A7 comfortably. Also, considering the strange delays of TSMC's 20nm process for Nvidia, AMD, and Qualcomm, it could mean one of two things: they have a delay, or a very lucrative new customer that can afford to buy the entire 20nm production capacity for quite some time (they want to make ~70 million new iPhones before the end of the year, plus who knows how many new iPads). Also, sales at Samsung's semiconductor division are falling, while in previous years they were always up this quarter because of the new iPhone ramp-up.
  • przemo_li - Thursday, August 14, 2014 - link

    Good analysis.

    Also, Intel/AMD/Nvidia are all behind the "expected" node shrink.

    Meaning nobody is really ready for the next move.
    (Which may serve Samsung well if they open their fabs to somebody else.)
  • extide - Thursday, August 14, 2014 - link

    Uhh, what are you talking about? Intel's 14nm is a full generation ahead of everyone else's 20nm, and was also a true shrink from 22nm, as opposed to TSMC's/Samsung's "16nm", which isn't really a shrink at all...
  • ruzveh - Thursday, August 14, 2014 - link

    The world is moving on to 14nm and they are introducing 20nm... wow.
  • kron123456789 - Thursday, August 14, 2014 - link

    Not the world, only Intel. Or IS Intel your world?
  • R3MF - Thursday, August 14, 2014 - link


    Intel is about to move to 14nm; the rest of the world is transitioning to 20nm.
  • devashish90 - Thursday, August 14, 2014 - link

    Samsung has already started mass-producing 10nm NAND chips, so it's not far off for them to achieve the same for APs and modems.
  • devashish90 - Thursday, August 14, 2014 - link

    Rather, 10nm-class (10-20nm)
  • extide - Thursday, August 14, 2014 - link

    Flash memory fabrication is totally different, and not comparable at all.
  • iwod - Thursday, August 14, 2014 - link

    Wait a minute, is this fabbed by TSMC? I'm pretty sure Samsung skipped 20nm and went straight to 14nm.
  • lilmoe - Thursday, August 14, 2014 - link

    Those were rumors, not facts. This 20nm chip is definitely fabbed by Samsung.
  • extide - Thursday, August 14, 2014 - link

    No, Samsung and TSMC are on the same "path" -- 28nm ---> 20nm ---> "16nm" (not really a true shrink, though)
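    To see why "16nm" isn't a true shrink, here's a quick geometry-only sketch (my own illustrative numbers, not anything from the commenters): in an ideal shrink, area scales with the square of the linear ratio, so 28nm -> 20nm halves the area, while the "16nm" FinFET nodes largely reused 20nm metal pitches and delivered far less density gain than the name implies.

    ```python
    # Ideal (geometry-only) scaling between process nodes.
    # Illustrative sketch: real "16nm" FinFET reused 20nm back-end pitches,
    # so actual density gains were much smaller than this ideal math suggests.
    def ideal_scaling(old_nm: float, new_nm: float) -> tuple[float, float]:
        linear = new_nm / old_nm
        return linear, linear ** 2  # area scales with the square of the linear shrink

    print(ideal_scaling(28, 20))  # -> roughly (0.714, 0.510): a true ~2x density jump
    print(ideal_scaling(20, 16))  # -> roughly (0.8, 0.64) on paper, but not in practice
    ```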
  • bleh0 - Thursday, August 14, 2014 - link

    I would be more excited, but Samsung needs more adoption and more open documentation for me to actually care about anything Exynos-related.
  • lilmoe - Thursday, August 14, 2014 - link

    Yea? I used to care too back in the custom ROM days, then gave up and flashed back to stock on my GS2...
    What seriously PISSED me off was the non-functional GTS on my Exynos 5410 GS4. Samsung took their sweet time, but I'm glad they're back on track with their Exynos line of chips.
  • Frenetic Pony - Thursday, August 14, 2014 - link

    Wait... what manufacturing process is this? This is the first I've heard of it, and it sounds like A) a Samsung-created process that B) has nothing at all to do with a process shrink; they're just improving their 28nm process somewhat and calling it something different to bullshit some PR for themselves.
  • name99 - Thursday, August 14, 2014 - link

    It does seem very strange. I don't know if they've been keeping it (VERY) secret, or if it is some sort of "optimistic reinterpretation" of a modified 28nm process.
    If you go to their foundry website all evidence is that claims of 20nm support have been added at the last minute. The main page:
    lists 20nm in two places (but left out of a list where it would make sense) and the link to "more info" about 20nm is broken.
    If you go to
    there is likewise nothing about 20nm.

    Previous stories mention Samsung 20nm RAM and NAND production, which isn't really relevant.
    There IS a collection of stories from June 2012 talking about Samsung beginning construction of a 20/14nm plant. Assuming we can trust those, they seem to have been able to get the plant up and running with extraordinary stealth.
  • Achtung_BG - Friday, August 15, 2014 - link

    8 May 2014, Taiwan Semi:
    In our April 28th AAPL Update, we noted that 20nm production levels at Samsung Austin were in the 3000-4000 wpm range. These volumes were sufficient to debug/improve their yields as they vied for second source position for the AAPL designs. But our latest checks indicate a surprising twist to the 20nm development story. We are getting indications that Samsung Austin is planning to ramp their 20nm technology designs to 12,000 wpm by July, but the upside is for QCOM, not AAPL. It is our understanding that QCOM is not happy with the 20nm development/yield progress at TSM and thus have been qualifying their latest technology node designs at Samsung. Obviously, the potential loss of business from AAPL and QCOM would be bad news for TSM after recently losing the AMD (AMD) GPU business. And while 20nm demand will continue to be strong for TSM, we expect Samsung to be a viable threat to TSM for the advanced process nodes going forward.

    S1 fab, Giheung, South Korea: upgrade to 14nm FinFETs
    S3 fab, Hwaseong, South Korea: upgrade to 14nm FinFETs
    S2 fab, Austin, USA: upgrade to 20nm, 30,000 wafers per month
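    To put those wafer volumes in perspective, here's a back-of-the-envelope sketch of how many SoCs 12,000 300mm wafers per month could yield. The die size and yield rate are my own illustrative assumptions, not figures from the report quoted above.

    ```python
    # Rough dies-per-wafer estimate for a 300mm wafer at an assumed die size
    # and yield. Uses a standard gross-die approximation with an edge-loss term.
    import math

    wafer_diameter_mm = 300
    die_area_mm2 = 100.0      # assumed mobile-SoC die size
    yield_rate = 0.7          # assumed yield, purely illustrative
    wafers_per_month = 12_000 # figure quoted in the analyst note above

    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Gross dies per wafer, discounting partial dies lost at the wafer edge
    gross_dies = (wafer_area / die_area_mm2
                  - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2))
    good_dies_per_month = int(gross_dies * yield_rate * wafers_per_month)
    print(f"~{gross_dies:.0f} gross dies/wafer, ~{good_dies_per_month:,} good dies/month")
    ```

    Under these assumptions that ramp works out to roughly five million chips a month, which is why a run rate of ~70 million iPhones in a few months would plausibly soak up most of a fab's 20nm capacity.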
