60 Comments


  • purerice - Tuesday, May 27, 2014 - link

    Huge? Perhaps. Long-awaited? Not at all. Intel has had an ARM license for a long time and done nothing with it. The question was not "if" but "when", and the answer is "about time".

    This is ironic in a way. Lots of development happens in the "west" and gets fabbed back "east". Now we have "eastern" developers asking "western" fabs to make their chips.
    Reply
  • Guspaz - Tuesday, May 27, 2014 - link

    This agreement has to do with producing x86 SoCs, so this has nothing to do with ARM or Intel's ARM license. Reply
  • purerice - Tuesday, May 27, 2014 - link

    You're right. I had read another article first that had the details wrong. Then I messed up by not reading this one carefully enough. The other point stands. Reply
  • chubbypanda - Tuesday, May 27, 2014 - link

    The other point, about '...asking "western" fabs to make their chips'? According to Anand, it'll be TSMC making those chips, which is "east" in your division. Reply
  • The Hardcard - Tuesday, May 27, 2014 - link

    Unless there is something more than I am seeing in this story, it is not about ARM. It appears that Intel wants to drive into mobile through China, and Rockchip has more pull there, mainly in setting up SOCs the way OEMs and ODMs want them. Reply
  • Homeles - Tuesday, May 27, 2014 - link

    RTFA Reply
  • purerice - Tuesday, May 27, 2014 - link

    EYFP(runes) <3 Reply
  • DanNeely - Tuesday, May 27, 2014 - link

    Isn't that license just a residual asset from before they sold their in-house ARM line (XScale) to Marvell? Reply
  • ilt24 - Tuesday, May 27, 2014 - link

    Intel only sold the XScale application processor line to Marvell. They also had network, storage and maybe multimedia lines, which they kept and continued to sell until they were replaced with Atom-based versions. Reply
  • jjj - Tuesday, May 27, 2014 - link

    SoFIA is made at TSMC, and it should be safe to assume this version is TSMC too. Intel seems unable to integrate its 3G on 22nm, and I doubt they'll do low end on 14nm in early 2015.
    Too bad their timing sucks, with the LG G3 launching at the same time.
    Reply
  • Flunk - Tuesday, May 27, 2014 - link

    That doesn't sound too likely. Intel hasn't engineered a version of Atom to be fabbed on TSMC's process. Plus, why partner with Intel if you don't get access to their leading fab technology? Reply
  • ilt24 - Tuesday, May 27, 2014 - link

    A couple of things:

    Back in 2009 Intel and TSMC announced a deal that would allow a third-party design house to use an Atom core along with their own or TSMC IP to make custom SoCs. Due to lack of interest, the effort was put on hold in 2010.

    Earlier this year it was reported that the first version of SoFIA would be made at TSMC on 28nm and that an in-house version would come later on 14nm.
    Reply
  • toyotabedzrock - Tuesday, May 27, 2014 - link

    Smaller nodes for the radio is a universal problem. I think this deal is about volume and reducing risk since rockchip can reuse their current end user product designs. Reply
  • extide - Tuesday, May 27, 2014 - link

    The most exciting part about this for me is the possibility of seeing an Atom on an Intel process vs an Atom on a TSMC process, head to head. Should be interesting! Reply
  • Krysto - Tuesday, May 27, 2014 - link

    It would finally show that the "x86 myth" is NOT busted. Intel is barely competing in the mobile CPU space even with a 22nm vs 28nm process advantage AND Tri-Gate (which gives the advantage of a full node shrink).

    If Intel made Atom at 28nm without Tri-Gate/FinFET, everyone would see how far behind Intel's architecture really is compared to ARM in the mobile market.
    Reply
  • IntelUser2000 - Tuesday, May 27, 2014 - link

    Actually, TSMC's 28nm process isn't far from Intel's 22nm process in power usage and density. I assume 28nm TSMC is about 30% larger. What Intel had was a transistor performance advantage (and that it came a year earlier), which wasn't much use if the design couldn't take advantage of it. Reply
  • extide - Tuesday, May 27, 2014 - link

    If you are talking about the old Atom (Bonnell core), then yeah, it was very outdated even when it first came out. The new Silvermont arch is much better, but I agree that it is not amazing. Intel will need to keep iterating on this design, and hopefully, with Atom, they will stick with the 2-year cadence (like Core) that they have said they will.

    There are a lot of people speculating about how Intel's process nodes stack up against TSMC's, but AFAIK there has never before been a single product built at both foundries with which we could do an accurate comparison that doesn't include other variables. I think people may be surprised by the outcome, but one thing is for sure: it will be fun to watch!
    Reply
  • factual - Wednesday, May 28, 2014 - link

    The Silvermont core is not "barely competing" in mobile, it's best in class. It has superior perf/watt compared to all ARM-designed cores. It's also superior to the Qualcomm-designed Krait core (ARM-compatible).

    Intel's problems in mobile have nothing to do with ARM. They have to do with GPU, integration and, most importantly, the cellular modem.

    Stop spreading misinformation!
    Reply
  • darkich - Wednesday, May 28, 2014 - link

    The Cyclone core is ARM-based and absolutely trumps Silvermont in every regard, even while made on an inferior process.

    YOU stop spreading misinformation
    Reply
  • factual - Wednesday, May 28, 2014 - link

    No, I did not! I clearly said ARM-designed cores. Cyclone is NOT ARM-designed, it's ARM ISA-compatible; there is a huge difference between the two. Intel is not competing against Apple, since it will probably never be able to win a socket in an iOS device (neither will Qualcomm, for that matter). Reply
  • Wilco1 - Wednesday, May 28, 2014 - link

    Even if you just consider ARM-designed cores, Silvermont does not come close to e.g. Cortex-A7 perf/Watt, especially on the same process. Reply
  • MarcusMo - Friday, May 30, 2014 - link

    OoO architectures such as Silvermont will always lag behind in-order architectures such as the A7 in perf/watt, simply because of all the additional circuitry that needs to be on chip. Thus the comparison with any in-order core is flawed: they have different optimization points. Reply
  • Wilco1 - Saturday, May 31, 2014 - link

    Exactly, which is why it is so ridiculous to claim Silvermont has the best perf/Watt of all ARM cores. There are many ARM CPUs out there optimized for different uses, so one generic CPU can never beat more specialized ones in every aspect.

    Note this is not a flawed comparison as A7 is actually fast enough for most tasks in a phone/tablet (many ship with a dual or quad A7). With big.LITTLE you can switch to an even faster OoO core when you do need more performance without reducing the overall power efficiency.
    Reply
  • Krysto - Tuesday, May 27, 2014 - link

    OEMs considering buying Atom chips for their mobile devices should consider the fact that Intel is only HEAVILY subsidizing them right now, to the point of losing $1 billion every quarter, and they'll throw them under the bus as soon as they get decent market share. Reply
  • mrdude - Tuesday, May 27, 2014 - link

    Asus is an example of the above. Intel subsidized Merrifield last year when Asus was entering the smartphone market, but this year Asus has backtracked and pulled the Intel models in favor of Qualcomm due to the pricing and integrated modem.

    I'm not sure adding a middle party between Intel's IP/designs and the end consumer is going to make Intel more competitive or ensure any significant design wins. To me this seems like Intel admitting two significant problems with their mobile approach -- 1) Intel's too damn expensive, and 2) they're too slow. I'm not sure how binding themselves to Rockchip at the low end is going to cure either of these ailments.
    Reply
  • Impulses - Tuesday, May 27, 2014 - link

    It's all about gaining market share in the short term I imagine... Reply
  • darkich - Wednesday, May 28, 2014 - link

    That's the sole point here.
    Intel is dumping on the market at all costs... that's the only way they can "compete".
    Reply
  • name99 - Tuesday, May 27, 2014 - link

    "More than anything this is a sign that Intel is willing to try something new/different, and that's absolutely what the company needs."

    Except that it does nothing to solve Intel's fundamental problem.
    The problem is that Intel wants to sell chips at $100 to $3000 --- its entire operation is set up around that assumption. All this does is allow them to sell some chips at $1 to $10.

    The problem longterm remains --- how do they sell Atoms that are competitive enough with ARM, but not so competitive that they steal the desktop business? This hasn't been an obvious problem yet, while ARM was taking its sweet time getting to 64 bit and a standard server platform, but all that changes starting this year.
    This does nothing to help the situation; if anything it makes it worse, because the more visible x86 is (with a non-Windows infrastructure et al) in non-desktop environments, the more it becomes just one CPU among many --- and the more it becomes an eminently reasonable assumption for desktop users to buy a non-x86 laptop. (Chromebooks today, and not very interesting. But 64-bit ARM Android laptops in two years... Or maybe someone puts together a laptop or AIO based on WinRT just to see how it flies.)

    Intel may think it can compete on performance, and continue to charge its traditional prices for that --- good luck. The story of a dozen RISC companies is that, yes, there are a few people who will pay 3x as much for 20% more speed, but not enough to sustain a market.
    Reply
  • easp - Tuesday, May 27, 2014 - link

    I think you misunderstand the larger context. Cannibalization of their desktop and server business is an issue, sure, but the larger issue is that their desktop and server business would be plateauing whether or not ARM was a factor, and that in order to keep it from collapsing completely, they still have to invest in new process technology. The thing is, each generation yields more and more transistors that they have to sell in order to amortize their investment. They can't sell them in their traditional market, because it is mature, so they can't sell more chips, and there isn't much point in selling bigger chips, because of diminishing returns.

    So the only way they'll have any chips they can sell for $100-3000 is to find ways to keep their fabs utilized.
    Reply
  • name99 - Tuesday, May 27, 2014 - link

    I understand the point just fine. This is the mess Intel got itself into in the first place by insisting that its mobile strategy was to make EXACT replicas of x86. Now they have to pay the x86 design costs for each CPU, and they are cannibalizing themselves.

    What they SHOULD have done is launch the mobile chips using a completely new ISA (ideally one designed by someone who has never set foot in Intel, because if there is one thing Intel sucks at, it is ISA design). This would have prevented all the cannibalization issues and allowed them to run at the speed of ARM, without being slowed down by x86 legacy.

    A second best alternative would have been to launch Atom as purely the clean x86-64 subset of x86, ditching everything else from MMX and x87 to SMM and 32 bit mode. This would have made the design surface a lot smaller and cleaner (not as nice as ARM, but still a lot less painful) and would have avoided the problems of cannibalization for at least a few years.

    The point is, having made this huge blunder five or so years ago, dicking around with these sorts of deals today is not going to solve the problem. When you're in a hole, stop digging! At the very least they should be working on moving to the second strategy I suggest RIGHT NOW. Put everyone using Atoms on notice that, unlike with legacy x86, they're going to be ripping out all the crud very soon --- and if that means Atoms can't run legacy Windows, well, we have this nice i7 we could sell you instead...
    Reply
  • darkich - Wednesday, May 28, 2014 - link

    You sir, are absolutely right.

    I just hope Intel doesn't kill the market with this unfair subsidized dumping, preventing the natural evolution from passing it by.
    Good thing is, the odds for that happening are pretty low.
    Reply
  • BPM - Wednesday, May 28, 2014 - link

    Agreed. What they should have done is launch a new ISA. That's what I've been telling my friends since I got into University. Very good points. Reply
  • factual - Wednesday, May 28, 2014 - link

    As I have mentioned in another post, Silvermont (Atom's core) has been superior in terms of perf/watt to ARM-designed cores since it came out at the end of 2013. Intel has already won the CPU perf/watt game against ARM vendors, and I expect it to continue widening the perf/watt gap.

    Intel's problems are related to the SoC, not the CPU. Intel's SoCs have inferior GPUs and inferior image signal processors, they don't have integrated connectivity and cellular, and they have a lot of other issues.

    All this talk about x86 vs ARM is irrelevant. Intel's CPUs are superior to ARM's, and their superiority will only grow over time, but that doesn't mean Intel will succeed in mobile, because in mobile it's the SoC product that matters, and for now Qualcomm is king when it comes to designing well-integrated, well-rounded, cost-effective SoCs.

    Nvidia has the best mobile GPU and Intel has the best mobile CPU, but it's Qualcomm that dominates the mobile market because they have the best mobile SoC.
    Reply
  • darkich - Wednesday, May 28, 2014 - link

    Technically, Apple has the best mobile CPU.
    And I am willing to bet that Qualcomm will have the best one, in years to come.

    Atom simply has way too much x86 baggage to compete in a non-x86 environment... the only thing keeping it afloat is the more advanced manufacturing process.
    Reply
  • Wilco1 - Wednesday, May 28, 2014 - link

    Exactly, and that manufacturing advantage is being lost for the Sofia line of SoCs. I agree it's big like Anand says, in the sense of becoming a big flop. Reply
  • darkich - Wednesday, May 28, 2014 - link

    Funny thing is, SoFIA looked like a flop even a year ago.
    https://www.semiwiki.com/forum/content/2962-intel-...
    Reply
  • darkich - Wednesday, May 28, 2014 - link

    Correction, yes.. half a year ago Reply
  • factual - Wednesday, May 28, 2014 - link

    Apple's mobile chip performance is irrelevant to Intel, Qualcomm, Nvidia and the rest of the merchant chip vendors. None of them can win Apple's mobile socket, and none of them will be directly competing against Apple in the Android space.

    I disagree with your claim that the x86 ISA is somehow inferior to the ARM ISA. ISA has minimal effect on CPU performance anyway. What determines the performance of a CPU is microarchitecture and manufacturing technology.
    Reply
  • Wilco1 - Wednesday, May 28, 2014 - link

    Agreed on the first paragraph. As to ISA, yes, ISA still matters. Not just in terms of design time (compare how many cores ARM designs each year vs Intel), but the additional complexity of x86 does require a lot more transistors, increasing power consumption and die area. I'd argue that an ISA carries quite a big penalty if a 2-way in-order core is 3.5 times larger than a 3-way OoO core:

    http://chip-architect.com/news/2013_core_sizes_768...
    Reply
  • Wilco1 - Wednesday, May 28, 2014 - link

    Actually Silvermont's perf/Watt is far worse than e.g. Cortex-A7's. It's slower than Cortex-A15, Apple A7 and high-end Krait. Its die area is several times larger than a typical ARM core's, despite using a better process.

    The hard fact is that Silvermont is by no means the best core in any market it tries to compete in. So it is no surprise there are no Silvermont phones yet, despite it being on the market for quite a while. 28nm Silvermont will be interesting to see, as it will be even harder for it to compete against far smaller and more efficient ARM cores.
    Reply
  • factual - Wednesday, May 28, 2014 - link

    Not really! A7 is a different story but Intel has better performance compared to both A15 and the best Krait at the same or lower TDP levels:

    http://www.phoronix.com/scan.php?page=news_item&am...
    http://openbenchmarking.org/result/1405010-PL-1405...
    http://www.fool.com/investing/general/2014/05/25/w...

    And your die size claims are false as well:

    http://www.intel.com/content/dam/www/public/us/en/...
    http://tabshowdown.blogspot.ca/2013/11/apple-a7-vs...

    Do some research before making false claims!
    Reply
  • darkich - Wednesday, May 28, 2014 - link

    So what's the Bay Trail die size?
    Forgive me for missing the exact number among 373 pages of material
    Reply
  • factual - Wednesday, May 28, 2014 - link

    Look at page 355 of the datasheet. The die size is about 100mm^2 (9.7 x 10.4). Reply
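The area figure quoted above follows directly from those dimensions (the 9.7 x 10.4 numbers are taken from the comment and assumed to be millimetres); a quick sketch to check the arithmetic:

```python
# Die area from the dimensions quoted in the comment (assumed to be in mm).
width_mm, height_mm = 9.7, 10.4
area_mm2 = width_mm * height_mm
print(f"{area_mm2:.1f} mm^2")  # ~100.9 mm^2, i.e. "about 100mm^2" as stated
```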
  • darkich - Thursday, May 29, 2014 - link

    Well I looked and can't see it.
    Please enlighten me!
    Reply
  • Wilco1 - Wednesday, May 28, 2014 - link

    I was explicitly talking about core die size, not SoC die size. Do you actually have a die size estimate for a Silvermont *core*? Here is a link with several ARM CPU die sizes vs previous Atom: http://chip-architect.com/news/2013_core_sizes_768...

    Do you think a Silvermont core will be smaller than say a Cortex-A15 core?

    As for the Phoronix scores, these don't seem to be just a comparison of CPU performance. For example one is using an SSD, the other slow eMMC flash. Also one is a phone/tablet SoC, the other a desktop part - you won't find the J1900 in phones or tablets...

    I haven't seen K1 scores yet, but this is how the lower clocked NVidia Shield does vs Z3770 on Android:

    http://browser.primatelabs.com/geekbench3/compare/...

    Based on that it's safe to say the K1 will beat Silvermont by a good margin on integer and FP benchmarks.
    Reply
  • factual - Thursday, May 29, 2014 - link

    I don't know the die size of Silvermont itself, but given that the die sizes of Silvermont-based SoCs are more or less the same as A15-based SoCs, I would say the die sizes of the two are comparable. Saltwell is a 5-year-old design on 4-year-old silicon technology; I'd say its die size is pretty good for what it is!

    The Phoronix comparison is actually a pretty fair comparison, if you actually understood the numbers! The J1900's TDP is 10W at maximum load, while the Jetson's Tegra K1 uses 11W.

    http://forums.anandtech.com/showthread.php?t=23769...

    So, the J1900 has superior performance while consuming less power.

    I tried to stay away from the poorly written mobile benchmarks and reference the accurate Phoronix comparison. I can play the Primate Labs game too... Comparing the power-hungry Nvidia Shield to a low-power tablet is not a real comparison! The Shield has a 30Wh battery; now let's compare it to a device with the same power consumption/battery:

    http://browser.primatelabs.com/geekbench3/compare/...

    Here, the Z3770 completely annihilates the Cortex-A15. But let's stick with the accurate Phoronix benchmark.

    Silvermont is superior to Cortex-A15 by a considerable margin. Now the question is why? Some claim it's only due to Intel's superior manufacturing technology; I guess we'll know when Silvermont is manufactured on TSMC 28nm!
    Reply
  • Wilco1 - Thursday, May 29, 2014 - link

    Silvermont SoCs are far behind in GPU performance, so if the SoC die size is comparable then it's a given that the GPU is smaller while the CPUs are larger. This is also a reason why the TDP for K1 appears high - it has an amazingly fast GPU which takes up most of the die and power.

    Phoronix comparisons have been quite wrong in the past (using old compilers, incorrect options, using different codepaths for x86 and ARM, this has been widely discussed on sites like RWT), and appear to be system benchmarks rather than CPU benchmarks, so the HD performance matters. Calling them accurate is wishful thinking. Or did you really believe that Silvermont is 8x faster doing VP8?

    As for your Geekbench link, you deliberately chose a slow Shield score - very unprofessional. I could also compare against the slowest Z3770 score one could find; however, let's be fair and compare your Z3770 score with my Shield score:

    http://browser.primatelabs.com/geekbench3/compare/...

    So a Z3770 is barely faster than the Shield on integer (due to using hardware encryption instructions, which improve the average score by 14.5%) and loses on FP - despite running at a 26% higher frequency. So that's a huge loss for the Z3770. At similar frequencies the A15 wins by a big margin. Can you imagine how it will compare with next-generation 64-bit ARM CPUs?

    When manufactured on 28nm TSMC, Silvermont will obviously lose clock frequency. The question is how much, but it is obvious that it won't be able to compete at all given how much trouble 22nm Silvermont has just keeping up with previous generation SoCs.
    Reply
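The frequency-normalization argument above (comparing scores per GHz rather than raw scores) can be sketched as follows; the scores and clock speeds here are hypothetical placeholders for illustration, not the actual Geekbench results behind the links:

```python
# Normalize benchmark scores by clock frequency to estimate per-cycle performance.
# All numbers below are invented stand-ins, not the real linked Geekbench results.
def per_ghz(score, freq_ghz):
    """Score per GHz: a rough proxy for work done per clock cycle."""
    return score / freq_ghz

z3770 = {"int": 1000, "fp": 800, "freq": 2.39}  # hypothetical Silvermont figures
shield = {"int": 950, "fp": 850, "freq": 1.9}   # hypothetical Cortex-A15 figures

for metric in ("int", "fp"):
    a = per_ghz(z3770[metric], z3770["freq"])
    b = per_ghz(shield[metric], shield["freq"])
    print(f"{metric}: Z3770 {a:.0f}/GHz vs Shield {b:.0f}/GHz")
```

With these placeholder numbers the lower-clocked part comes out ahead per GHz, which is the shape of the argument being made; plugging in the real linked scores would settle the actual comparison.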
  • factual - Thursday, May 29, 2014 - link

    There is discussion about the die size comparison picture you posted. The A15 size comes from a doctored Nvidia die shot, and the real A15 die size is actually 3.1mm^2:

    http://forums.anandtech.com/showthread.php?t=22943...

    Bay Trail's die area is comparable not just to Tegra's but to Exynos's and Allwinner's as well. Silvermont's die area would not be any larger than A15's, if not smaller, judging by the SoC die areas.

    Phoronix is the gold standard of benchmarks; it's really laughable when you try to question Phoronix's accuracy and post results from Geekbench instead, which is a closed-source, poorly written benchmark, just like most mobile benchmarks. Phoronix is the open-source benchmark that is trusted by engineers and professionals. And yes, Silvermont's performance really is that much better than Cortex-A15's.

    I intentionally posted that Primate Labs comparison to show how inconsistent and questionable it is, at best! Here's another one comparing another Z3770 to your Nvidia Shield version:

    http://browser.primatelabs.com/geekbench3/compare/...

    Silvermont annihilates Cortex-A15 again!!

    Silvermont's perf/watt is without a doubt superior to A15's, by a large margin. Next-generation 64-bit ARM will probably catch up to Bay Trail, but by that time the next-generation Atom (Broxton) will be out, which will widen the perf/watt gap even further. But CPU performance alone is not that important in mobile, and that's where Intel has been lacking, i.e. in the overall SoC's capabilities, and that problem has nothing to do with ARM!
    Reply
  • Wilco1 - Friday, May 30, 2014 - link

    Different processes result in different die sizes, and one can synthesize a core for high performance or low power and get different area. So even if Samsung's A15 happens to be larger on their process, that doesn't invalidate the 1.62mm^2 Hans reported for the A15 in Tegra 4 at all.

    Again, comparing SoC area and concluding anything about CPU die area is ridiculous fanboi-ism. The hard fact is that Atom is several times larger than ARM cores.

    "Phoronix is the gold standard of benchmarks"!!! LOL, thanks, you made my day - you clearly don't have a clue!

    Here is yet another benchmark where 1.9GHz A15 beats 2.4GHz Z3770: http://www.7-cpu.com/

    What's the excuse this time?

    Nobody has shown Silvermont to be superior to A15. After all, A15 is much faster while running at a lower frequency, as all the benchmarks show. I haven't seen detailed perf/W comparisons, but it looks like Tegra K1 should beat Silvermont in perf/W despite being effectively 2 process nodes behind. A 28nm Silvermont will be far slower, with significantly worse perf/W, and yet it has to compete against much faster 64-bit ARM cores like Denver, Cortex-A57 and A53 in the second half of this year.
    Reply
  • factual - Sunday, June 01, 2014 - link

    Of course different processes result in different sizes! Obviously you can synthesize a soft ARM core IP for high performance or low power and get different die areas! That's why your initial claim that Cortex A15 had some kind of dramatic area advantage over Silvermont, without any evidence, was so ridiculous!

    The so-called Cortex A15 die shot from Nvidia's Tegra 4 marketing picture looks identical to the Cortex A9 die shot from the Tegra 4i, proving it to be fake! Nvidia has been routinely using doctored/fake die shots as pre-release marketing material:

    http://www.xtremesystems.org/forums/showthread.php...
    http://www.brightsideofnews.com/2011/03/09/nvidia-...

    Nvidia using fake die shots for pre-release marketing purposes is completely understandable, but what is not understandable is using these marketing shots to make technical claims about die areas!! Also given the lack of any hard facts regarding Silvermont or Cortex A15 die areas (ones synthesized for similar power specs), comparing SoC die areas, which are actually specified in the datasheet, is the only logical thing to do.

    You are the fanboi here, who, despite all the evidence, continues your baseless claims regarding the non-existent "ARM advantage"! All the evidence points to the absolute perf/watt superiority of Silvermont over Cortex A15, and the claim (often espoused by ARM fanbois who are generally ignorant when it comes to technical knowledge) that Intel's woes in the mobile market have to do with "power efficient" ARM cores is patently false!

    I guess it's expected that a fanboi, with very little technical understanding, would "lol" at Phoronix; but developers, engineers and professionals consider Phoronix the gold standard of benchmarks:

    https://packages.debian.org/stable/utils/phoronix-...

    The Phoronix benchmark is comparing a Silvermont core clocked at a lower frequency than the Tegra's Cortex A15, but the clock speed of the cores doesn't matter at all! Cyclone clocked at 1.3GHz outperforms both Silvermont and A15. What matters is perf/watt, and that's it!

    The compression benchmark you posted only shows Silvermont widely outperforming the Exynos Cortex A15 while equaling the performance (outperforming in compression, underperforming in decompression) of an unknown Cortex A15 with unknown power consumption... So what's the point! At least the not-so-reliable Geekbench benchmark you posted earlier tested all aspects of the CPU!

    While I prefer the reliable Phoronix benchmark due to its known power specs and the reliability of the benchmark suite, all the other not-so-reliable mobile benchmarks show Silvermont widely outperforming Cortex A15 as well:

    http://openbenchmarking.org/result/1405010-PL-1405...
    http://browser.primatelabs.com/geekbench3/compare/...
    http://us.hardware.info/reviews/4792/11/intel-atom...

    At this point the huge superiority of Silvermont over Cortex A15 is undeniable. But educating a fanboi who refuses to learn and is in denial is really impossible.
    Reply
  • Wilco1 - Sunday, June 01, 2014 - link

    You're just clutching at straws by trying to claim Phoronix is the "gold" standard. Show me a single site that uses Phoronix benchmarks in their reviews of ARM SoCs. I can point out many sites that use Geekbench.

    Bringing up AnTuTu really helps your case - it's well known that Intel cheated on those. All that is even more proof that you don't have any evidence on your side and have to resort to cheating to make your point.

    Clockspeed certainly matters, especially when different SoCs run at different frequencies. To get a fair comparison you have to consider the frequency. Given that a 1.9GHz A15 already matches or beats a 2.4GHz Silvermont on Geekbench and 7-Zip, we can safely conclude that a 2.3GHz A15 will outperform it by a huge margin. Also consider the fact that not all Silvermonts will be clocked at 2.4GHz; there are many slower parts as well (those are the ones typically used for phones and tablets).

    It's undeniable that you're just a fanboi with no technical knowledge at all. Do you even understand why A15 has a superior microarchitecture?
    Reply
  • factual - Sunday, June 01, 2014 - link

    ARM cores are inferior for enterprise use; that's why you don't often see Phoronix used for ARM. Phoronix and SPEC are used for enterprise Linux benchmarking, and ARM is virtually non-existent in the enterprise space.

    Clock frequency does not matter when comparing different microarchitectures. CPUs can increase performance in several ways, including increasing clock speed or increasing issue width (instructions computed per cycle). Depending on the microarchitecture and process technology, a CPU can consume less power than another design while running at a similar or higher clock frequency. So what needs to be measured when comparing different microarchitectures is not clock frequency but performance per watt. If a CPU can achieve the same power consumption at higher clock frequencies than other CPUs, that's a design advantage. All that said, all the comprehensive benchmarks so far have shown Silvermont performing better than Cortex A15 while running at lower clock speeds, not the other way around!

    However, no matter how many times technical reasoning and benchmarks show Silvermont to be wildly superior to Cortex A15, a fanboi in denial will remain in denial! As I said before, educating someone who insists on remaining ignorant is impossible. But just for the record, all the comprehensive benchmarks (even your favorite, Geekbench) show Silvermont handily beating Cortex A15 when it comes to performance/watt:

    http://openbenchmarking.org/result/1405010-PL-1405...
    http://browser.primatelabs.com/geekbench3/compare/...
    http://us.hardware.info/reviews/4792/11/intel-atom...
    http://www.anandtech.com/show/7314/intel-baytrail-...
    Reply
  • darkich - Friday, May 30, 2014 - link

    Tegra K1 is much better than you think.

    http://wccftech.com/nvidia-tegra-k1-performance-po...
    Reply
  • name99 - Wednesday, May 28, 2014 - link

    I don't believe we have any information proving that Silvermont is superior to Cyclone in operations/watt.
    What we have is scattered information that doesn't really tell us what we need. Consider (because I have the numbers easily available) the MacBook Air (Haswell) vs the iPad Air (Cyclone).
    For Geekbench-type benchmarks (and a range of other things, like SunSpider) the MBA is about 2.5x the iPad. That's one piece of info. The iPad has a 32Whr battery, the MBA has 38Whr; the iPad gets about 10hrs of battery life on some sort of "real world tasks" benchmark, the MBA about 12hrs.
    That's a second piece of info. BUT the important point is that the second benchmark mainly measures the speed and efficiency with which the CPU (and the system as a whole) goes into deep sleep. Going into deep sleep fast is important for battery life, and CPU performance is important for snappiness --- both are good benchmarks for the user experience. But neither measures what we want.

    What we want is a benchmark that operates like this:
    - starting with a fully charged device (screen off and wireless off), run Geekbench over and over. At the end of each run, Geekbench records an incremented count and a timestamp to permanent storage. The process runs until the battery dies.
    From this one can know how many Geekbench runs completed and how long they took, which allows one to calculate a "Geekbenches per Watt-hour" metric. THAT is as close to "CPU efficiency" as one is likely to get easily, given the realities of different devices, OSs, limited control over the device, etc.
    It's not a perfect measure, in particular it's very misleading if one of the CPUs has AES and SHA instructions and the other doesn't, but assuming that's not the case it's about the best we can do today.

    THIS would be the interesting metric in terms of quantifying which of Cyclone vs Haswell vs Silvermont is most absolutely efficient. I honestly don't know how it would turn out. Intel have a process and circuit advantage, and Haswell has the advantage of having had vastly more man-hours and transistors thrown at the problem (for which you pay in the dollar price of the part...). Cyclone has the advantage of a substantially simpler architecture, which allows you to spend more time optimizing and less time just getting the damn thing to work in verification and debugging --- but of course Apple is not nearly as experienced as Intel in managing projects of this size optimally.

    (If one had this information, there are other interesting things one could also learn. In particular, the graph of Geekbench runtimes as the run count changes could be interesting in terms of seeing which devices are forced to thermally throttle, and how rapidly this kicks in. Of course that effect also confuses the actual number we are trying to calculate, so one may want to do one run with natural cooling, and one run with the device having a heavy-duty fan continually blowing chilled air over the device.)
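    The metric described above reduces to simple arithmetic once the run log exists. A minimal sketch (the device names, run counts, and battery capacities below are the illustrative figures from this thread, not real measurements):

    ```python
    # Hypothetical "Geekbench runs per watt-hour" calculation, assuming the
    # benchmark loop described above logged a total run count before the
    # battery died on each device.

    def runs_per_watt_hour(total_runs, battery_wh):
        """Benchmark runs completed on one full charge, per watt-hour of battery."""
        return total_runs / battery_wh

    # Made-up run counts; battery capacities are the 32/38 Whr figures above.
    ipad_air = runs_per_watt_hour(total_runs=480, battery_wh=32.0)
    macbook_air = runs_per_watt_hour(total_runs=900, battery_wh=38.0)

    print(f"iPad Air:    {ipad_air:.1f} runs/Wh")
    print(f"MacBook Air: {macbook_air:.1f} runs/Wh")
    ```

    Whichever device completes more runs per watt-hour is the more efficient at full CPU load, independent of its battery size.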
    Reply
  • Wilco1 - Saturday, May 31, 2014 - link

    Perf/Watt is not a fixed number. So you can indeed test perf/Watt at maximum performance like you state, but that result is not actually that useful. If you ran at, say, 100%, 75%, 50% or 25% of maximum performance, the perf/Watt results would be very different (and one CPU may win at 100% but another may win at 25%). It gets even more complicated for multithreaded workloads, as you can choose to run one CPU at 100% or 4 CPUs at 25% - these will get different perf/Watt results depending on the process and microarchitecture.

    For phones/tablets the issue is that the average use case is more like 1% of maximum performance, and so testing at 100% performance does not produce a meaningful result.
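    The crossover effect described here falls out of a toy power model. A sketch with made-up coefficients (nothing below corresponds to a real core; it only illustrates why the winner can flip with load):

    ```python
    # Illustrative model: power = static leakage + dynamic power (~f^3 for a
    # fixed microarchitecture, folding voltage scaling into frequency), and
    # performance ~ frequency. All coefficients are hypothetical.

    def perf_per_watt(freq_ghz, static_w, dyn_coeff):
        perf = freq_ghz
        power = static_w + dyn_coeff * freq_ghz ** 3
        return perf / power

    big_core = dict(static_w=0.05, dyn_coeff=0.12)    # fast, leakier design
    small_core = dict(static_w=0.01, dyn_coeff=0.30)  # slower, frugal design

    for freq in (2.0, 0.5):  # full load vs. light load on a 2 GHz ceiling
        print(f"{freq} GHz: big {perf_per_watt(freq, **big_core):.2f}, "
              f"small {perf_per_watt(freq, **small_core):.2f} perf/W")
    ```

    With these numbers the "big" core wins perf/Watt at 2.0 GHz but loses at 0.5 GHz, which is exactly why a single full-load measurement doesn't characterize efficiency for near-idle phone workloads.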
    Reply
  • pauliek - Tuesday, May 27, 2014 - link

    Three things:
    1. You did not even mention Intel's modems. Rockchip has no modems but will soon need them, and Intel's are actually pretty good. My guess is that the modem in this deal is just as important as the CPU.
    2. Intel is famously paying customers to use its tablet chips because its design is too expensive. When they stress that this is "strategic", that probably means it will teach them to design for low cost.
    Reply
  • pauliek - Tuesday, May 27, 2014 - link

    3. I deleted the third before posting and can't edit now, sorry. Reply
  • abufrejoval - Tuesday, May 27, 2014 - link

    While this is a strategic deal, I'm wondering whether for the first product this is a fab deal or an IP block deal: I guess Intel's mobile IP blocks currently aren't actually fabbed at Intel, so this would be an IP block deal.

    I'm also guessing Intel doesn't seem to get any significant ROI on their radio stuff, it being all discrete outside Intel SoCs and therefore not easily competitive with Qualcomm on power terms.

    It must be good, or at least on par with discrete Qualcomm modems, because it's showing up even in Samsung devices paired with Exynos (or Samsung dislikes Qualcomm's LTE dominance enough to partner with Intel wherever they can).

    But it would certainly be better fully integrated into the SoC, and that may be what this deal is primarily about.

    But if this were to be a fab deal, with Rockchip SoCs done on 22nm or 14nm Intel processes, that could make even Apple pale. I don't see how this could happen inside 2 years, though, because Intel's fabs simply aren't open to outside SoC designs.

    On the other hand, they are probably running underutilized and that's hurting...

    Please, more information!
    Reply
  • mrdude - Tuesday, May 27, 2014 - link

    According to eetimes, the Rockchip SoCs will still be fabbed at TSMC and any sort of transition won't happen until 2016. These SoCs are low cost (only 3G connectivity), so utilizing a bleeding edge fab process is out of the question.
    http://www.eetimes.com/document.asp?doc_id=1322516

    This seems like a case of eating your toes in order to stave off starvation. Sure, Intel is penetrating the mobile market and gaining market share, but just how much money are they burning doing it? And what about the stagnating PC space and decreasing laptop ASPs? All of these moves are made toward mobile, yet Intel seems less and less likely to ever turn a profit there.
    Reply
  • toyotabedzrock - Tuesday, May 27, 2014 - link

    They are risking a bunch of jobs if those CPU designs are stolen. Reply
