In a story posted today on EE Times, Altera announced at the ARM Developers Conference that it has entered into a partnership with Intel to have its next-generation 64-bit ARM chips produced at Intel's fabs. According to the report, Altera will be using Intel's upcoming 14nm FinFET process technology to manufacture a quad-core Cortex-A53 SoC, which will be surrounded by FPGA logic.

The Intel/Altera partnership was first announced back in February 2013, and it's worth noting that FPGAs are not an area where Intel currently competes. Even though ARM logic will be on the new chips, this likely won't lead to direct competition with Intel's own processors. The bigger deal, of course, is the process node: while access to Intel's 22nm process would already give anyone willing to pay Intel's price a leg up, 14nm puts Altera a full node ahead of the competition.

Intel has apparently inked deals with other companies as well. The Inquirer has this quote from an Intel spokesperson: “We have several design wins thus far and the announcement with Altera in February is an important step towards Intel's overall foundry strategy. Intel will continue to be selective on customers we will enable on our leading edge manufacturing process.”

The key there is the part about being “selective”, but I would guess it's more a question of whether a company has the volume (and the money) to make a deal worth Intel's while, rather than whether Intel would be willing to work with them at all. There are plenty of possibilities: NVIDIA GPUs on Intel silicon would certainly be interesting, and since AMD has gone fabless as well, we could even see their future CPUs/GPUs fabbed by Intel. Then there are the other ARM licensees such as Qualcomm, not to mention Apple. All of those are more or less in direct competition with Intel's own processors, though, so unless we're talking about potential x86 or Quark licensees, it's tough to predict where this will lead.

If we take a step back, the reality of the semiconductor business is that fabs are expensive to build and maintain. They also need to be updated to the latest process technology every couple of years, or new fabs need to be built, just to stay competitive. If you can't run your fabs more or less at capacity, you start to fall behind on all fronts. As long as Intel could keep its fabrication assets busy with its own products this wasn't a concern, but that era appears to be coming to a close.

The reason for this is pretty simple: we're seeing a major plateau in the computing performance most people need on a regular basis. Give me an SSD and I am perfectly fine running most of my everyday tasks on an old Core 2 Duo or Core 2 Quad. Likewise, the difference between Bloomfield, Sandy Bridge, Ivy Bridge, and Haswell processors shrinks with each generation; the i7-965X that I'm typing this on continues to run very well, thank you very much! If people and businesses aren't upgrading as frequently, you need to find other ways to keep your fabs busy, and selling production capacity to other companies is the low hanging fruit.

Regardless of the reasons behind the move, this potentially marks a new era in Intel fabrication history. It will be interesting to see what other chips end up being fabbed at Intel over the next year or two. Will we see chips from real competitors, and not just FPGAs? Perhaps some day, but probably not in the short term.

Source: EE Times

Comments

  • Krysto - Wednesday, October 30, 2013

    Hold your horses. Intel is only allowing this for a company that isn't really their competitor.

    They wouldn't give this to Nvidia or Qualcomm, who are direct competitors.
  • JarredWalton - Wednesday, October 30, 2013

    Of course they're not "giving" anything to anyone. I'm sure Altera is paying Intel a nice price for the chips produced there, enough so that Intel is willing to talk. If NVIDIA were willing to pay enough, Intel would likely talk to them as well. Of course, the costs for NVIDIA to do something at Intel are likely high enough that Intel simply buying NVIDIA would be more likely. ;-)
  • Dentons - Wednesday, October 30, 2013

    An Intel purchase of Nvidia could have a tough time winning regulatory approval. They might have to divest Nvidia's ARM division, and then what's the point?
  • JarredWalton - Wednesday, October 30, 2013

    The winky-face was supposed to let people know that I'm not at all serious about Intel buying NVIDIA. Ten years ago, it could have happened maybe, but not today. NVIDIA of course seems more interested in becoming more like Intel and building their own CPU designs, so we may see some interesting stuff down the road from the green team.
  • easp - Wednesday, October 30, 2013

    A few years ago, when I first started asking the question of whether Intel could, in the long run, compete with the Merchant Fab + Fabless Semiconductor + IP Developer ecosystem, I never really considered that Intel would become a merchant fab.
  • sherlockwing - Wednesday, October 30, 2013

    A53 is the key word in that announcement. Anand & Brian have said a few times that Intel currently doesn't have a Silvermont design that can compete with A7/A53 class chips on price, so that's a market Intel can't get into without making ARM chips.
  • iwod - Wednesday, October 30, 2013

    My exact feeling on today's PC performance: a Core 2 Duo combined with a PCIe-based SSD and 8GB of memory. While the geeks may not agree, more than 90% of people won't need anything more than that, and it has been this way for longer than I can remember.

    That is why the emergence of tablets and ARM is seriously threatening Intel. Apple's A7 is on 28nm now, a quad-core Ax on 20nm comes in 2014, and double that in 2015.

    So a decade after Apple switched the Mac over to Intel, Apple will have created a chip capable of replacing it. Sometimes when you look back you are simply amazed at how much technology has leaped and evolved.
  • code65536 - Thursday, October 31, 2013

    Not really. Only on the low end are dGPUs rubbing up against Iris. iGPUs will not match high-end dGPUs for the foreseeable future, not when high-end dGPUs are currently much more complex than the CPU itself.

    And it would synergize with Intel's high-end CPU products.
  • Krysto - Thursday, October 31, 2013

    FYI, the 20nm process is already good to go, and we'll probably see 20nm ARM chips in smartphones early next year, BEFORE the 22nm Merrifield shows up in smartphones.

    Also, 16nm FinFET seems to be on track for early 2015 as expected, and even this Altera chip won't be made at 14nm until late 2014 at the earliest. So Intel really doesn't have any real process advantage anymore, especially in mobile. By 2015, the fabs building ARM chips will have pretty much caught up with them.

    http://semiaccurate.com/2013/10/30/tsmc-shows-prod...

    Seeing how the 22nm tablet version of Atom is barely competitive with LAST year's 28nm ARM CPUs and GPUs, I can't wait to see how far ahead 14nm/16nm FinFET ARM chips will be of Intel's 14nm Atom in 2015. They'll probably be close to two generations ahead in performance, given that Intel is already a generation behind now even with a node advantage over ARM.
  • azazel1024 - Thursday, October 31, 2013

    I have to agree on the performance bit.

    I am maybe a bit more demanding than the average user, but my Core 2 Duo (E7500) was fast enough for all the basics I wanted to do. It fell behind in video transcoding, but that was one of the few "really demanding" tasks I threw at the thing where I felt it came up short. Oh, and I hadn't played it at the time, but KSP probably wouldn't have been nearly as fun on it. 18 months ago, I upgraded to an i5-3570 overclocked to 4.0/4.2GHz. I can't imagine upgrading the thing for a number of years now. It tears through pretty much anything I throw at it with aplomb.

    My laptop, an HP Envy 4t with an i5-3317U, is the first "fast enough" laptop I've ever owned. About the only things in it that make me want to upgrade are the graphics (just HD 4000)...okay, and the screen, but that isn't a "processing power" issue. Depending on what Broadwell/Skylake deliver, whenever I upgrade the laptop, it just might be "fast enough" for a lot of years of use. Even now, if it wasn't for some of the games I play on the laptop, I'd probably be happy enough with how fast it is on the whole for years and years.

    I am looking at getting an Asus T100 for a tablet and occasional laptop use. The Z3740 sounds like it'll probably be fast enough for everything I'd want a tablet to do and most things I'd want effectively a netbook to do. That sucker I can DEFINITELY see wanting to upgrade after another generation or two of Atom processors though, for faster graphics, a faster CPU, and more RAM.

    After a couple of generations, dunno. It might have hit "more than fast enough".

    Both because processors tend not to get that significantly better between generations and, IMHO, because computing tasks aren't getting significantly harder these days, CPU churn is getting a lot lower. Go back a bit in time and I would have been upgrading a laptop every 12-24 months because the newest thing really was that much better, enough to be worth the upgrade. The desktop has always stagnated a bit for me, but before the Core 2 Duo I was also upgrading every 12-24 months. Then it was almost 4 years between desktop upgrades on my latest cycle, and it just might be again (I am eyeing Skylake with its SATA Express, DDR4, and PCIe 4.0 support, plus hopefully some real measurable gains in CPU performance between Ivy and Sky, not a few single digit percentage points here and there). It's been basically 12 months on the laptop, but I really don't have an itch to upgrade it (other than the screen; it isn't the worst TN panel I've ever seen, but I really need an IPS in my next laptop), maybe in another year or two.
