Brooke Crothers broke a very important story today - he published the name Silvermont. Atom's first incarnation came to us in 2008 as a Pentium-like dual-issue in-order microprocessor. The CPU core was named Bonnell, after the tallest point in Austin at around 750 feet. Small mountain, small core. Get it?

Bonnell and the original Atom were developed on a 5-year cadence, similar to how Intel ran things prior to the Core 2 revolution (the P6 to NetBurst/Pentium 4 move took 5 years). With the original chip out in 2008, five more years puts the next major architecture shift at 2013, which happens to be exactly when the CNET report says Silvermont will be introduced.

When I first met with the Atom design team they mentioned that given the power budget and manufacturing process, the Bonnell design would be in-order. You get a huge performance boost from going to an out-of-order architecture, but with it comes a pretty significant die area and power penalty. I argued that eventually Intel would have to consider taking Atom out of order, but the architects responded that Atom was married to its in-order design for 5 years.


Intel's Moorestown - same Atom core, just more integrated

Since 2008, Atom hasn't had any core architecture changes. Sure, Intel integrated the GPU and memory controller, but the CPU still communicates with both of them over an aging FSB. The CPU itself remains mostly unchanged from what we first saw in 2008. Even Intel's 32nm Atom, due out by the end of this year, doesn't change the architecture: it's the same dual-issue in-order core that we've been covering since day 1. The 32nm version just runs a bit quicker and is paired with a beefier GPU.

Intel Atom "Diamondville" Platform 2008
Intel Atom "Pine Trail" Platform 2009-2010

Silvermont, however, changes everything. It is the first redesign of the Atom architecture, and it marks the beginning of Atom moving to a tick-tock cadence. Say goodbye to 5-year updates, say hello to a new architecture every 2 years.

Given what Intel said about Atom being in-order for 5 years, I think it's safe to say that Silvermont is an out-of-order microprocessor architecture. The other big news is that Silvermont will be built using Intel's 22nm transistors. What may not have been possible at 45nm gets a lot easier at 22nm. Assuming perfect scaling, a chip built on Intel's 22nm process would be a quarter the size of the same chip built at 45nm. With Apple paving the way for 120mm2+ SoCs, Silvermont can be much more complex than any Atom we've seen thus far.
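That quarter-size figure is just the square of the linear shrink: (22/45)² ≈ 0.24. A quick back-of-the-envelope sketch of the arithmetic (illustrative only; `scaled_area` is a made-up helper, and perfect scaling is an idealization real processes never quite hit):

```python
# Ideal die-area scaling: under perfect scaling, area shrinks with the
# square of the linear feature size.

def scaled_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Estimate die area after a process shrink, assuming perfect scaling."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

# A hypothetical 100 mm^2 die at 45nm shrunk to 22nm:
print(scaled_area(100, 45, 22))  # ~23.9 mm^2, roughly a quarter the size
```

The same rule of thumb is why a 120mm²-class SoC budget leaves so much room at 22nm: transistor count can roughly quadruple within the same footprint.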


Intel's 22nm transistors offer huge gains at low voltages, perfect for Silvermont

By 2013 Intel's 22nm process should be very mature, which maintains Intel's sensible design policy of moving to either a new architecture or a new process, but never both at the same time, in order to minimize risk. With 22nm debuting in Ivy Bridge at the end of this year (with availability sometime in 1H 2012), this puts Silvermont a full year behind the process' debut.

Intel isn't talking core counts at this point, but for 2013 I'd expect both monolithic dual and quad-core variants. If we use history as any indicator, Intel will likely drop the FSB in favor of a point-to-point bus interface between Silvermont and its cohorts.

The big question is about GPU technology. Intel has historically used GPUs licensed from Imagination Technologies in its smartphone/tablet/MID line, while opting for its own in-house GPU solutions in nettop/netbook versions of Atom. At 32nm the rumor is that this may change to an all-Imagination lineup, but at 22nm I do wonder if Intel will keep licensing 3rd party IP or switch to its own.

Intel is expected to announce more details about its Atom roadmap at an analyst event next week. While the expectation is that we'll see Atom based Android smartphones this year, I'm personally quite interested in Silvermont.

Single and dual-core 32nm Atom designs should be able to hold their own in a world dominated by dual-core ARM Cortex A9s, but an out-of-order Atom on an aggressive roadmap is something to be excited about.

By 2013 we should be seeing smartphones based on Tegra 3 and 4 (codename Wayne and Logan) and ARM's Cortex A15. GPU performance by then should be higher than both the PS3 and Xbox 360 (also implying that Silvermont needs Sandy Bridge level graphics performance to be competitive, which is insane to think about).

53 Comments

  • L. - Monday, May 16, 2011 - link

    Not going to happen.
    For Ivy Bridge to reach Llano levels, it would need a real GPU.
    So either one coming from AMD or one coming from nVidia.

    And besides, the comparison between Intel HD 2000/3000 and the HD 5450 has been done by Anand and it's really clear both are tied, with the AMD poor-man's board leading on the good games.

    The main thing to remember is the following:

    Brazos is CPU/GPU balanced for modern needs.
    Intel Core processors, even with the 3000 IGP, are a joke in this respect; anyone buying those CPUs couldn't care less about the on-die GPU.

    Llano is going to be balanced, just as Brazos is, and will therefore not be in the same class as SB.

    Both not being in the same market segment means neither will kill the other, but there is an extremely high chance that cheap all-round systems built with Llano will do some damage to the market segments above and below it, because of the unmatched performance/price.
  • JasonInofuentes - Thursday, May 12, 2011 - link

    And naming chips for mountains in a punny way is hilarious. But can I just say that I really love that NVIDIA is naming their chips after superhero aliases. I can't wait to see which Flash they choose. And Green Lanterns! The possibilities are endless!
  • jjj - Thursday, May 12, 2011 - link

    "By 2013 we should be seeing smartphones based on Tegra 3 and 4 (codename Wayne and Logan)"
    Kal-El is most likely Tegra 3 even if they avoid naming it that (for now). Logan, I guess, might be Project Denver since it's in 2013.
    The big question is what 2013 means for Atom: sampling, or actual phones hitting retail? By 2013-2014 the ARM bunch might as well be on 20nm, so Intel had better hurry. I hope they get it out fast, but I do wonder what is up with Intel lately since they keep messing up: Larrabee, the G3 SSD, the always far-from-perfect GPUs, Atom and its significant faults, P67-Z68. Maybe Intel is getting a bit too cocky (so much so that we need a mainstream socket and a high-end socket - had to say it, it just pisses me off).
    Anyway, competition is fun, and as Android, QNX, WebOS and maybe others (Apple has to unify its OS sooner or later too) scale up, things will be even more fun.
  • KMJ1111 - Thursday, May 12, 2011 - link

    The problem I see for Intel in the future is that as low-end performance increases substantially, at a lower cost, it will cut into their more powerful chips. I mean, the bulk of everyday users and emerging-market users will benefit, in that ARM chips and low-end chips should in theory cover daily functional use with little strain on power grids. At the same time, Intel's mid-range chips will lose some of their value in these areas because low-end performance will be good enough. I mean, wasn't part of the problem with the original Atom that Intel had to limit its performance so it wouldn't kill other parts of their market?

    If they have to compete with ARM's increasing performance and improving power consumption, I see it as quite a predicament for Intel. Server-wise they will probably remain dominant unless other aspects of the market change, but if they want to retain a market or gain brand identity in a market, they cannot limit the power of the lower-end chips. I mean, sticking with Atom they do retain the ability to run x86 instructions, but my phone can now do many of my day-to-day functions, and how useful the extra processing power is for average end users seems to be diminishing to me. Some of this probably comes from reading all about Google's presentations and their thoughts on how future computing will be run and distributed, but with Microsoft also building an ARM OS that will probably be able to strip out many of the legacy supports to offer a speedy experience, it seems like there is going to be a big change in the computing power that end users need.
  • mckirkus - Friday, May 13, 2011 - link

    Yes. This. I've heard it called the "good enough" factor. Once it's good enough for most day-to-day tasks, battery life becomes a major factor in mobile/tablet devices. Even if Atom is clock-for-clock twice as fast (due to OoO execution, cache, etc.), if it eats 4x the power it's a non-starter.

    Now we could see some interesting distributed alternatives to HTTP in the next ten years that require cpu/bandwidth to buy currency in a distributed social world (think bit-torrent share ratio), that works like folding@home with fancier networking, and that might swing things back in Intel's favor.

    In my mind, Intel needs to hire some math majors from CalTech to focus on killer apps for their soon to be overkill processors.
  • dagamer34 - Friday, May 13, 2011 - link

    While Intel can get away with what they call a GPU on its desktop and mobile chips, I doubt anyone's that interested in using their GPUs when you get down to the scale of smartphones and such, as OEMs really want a proven solution in their hardware. When you're competing against GPUs like the Adreno 220 (or 300 by that point) from Qualcomm, or Tegra 3, or the PowerVR Series 5 and eventually Series 6, I don't think Intel can afford to be a slouch in that market. They may have the best die process, but Intel has definitely shown they have a lot of difficulty using it effectively when it comes to non-CPU components (i.e. Larrabee and the Intel GMA series).
  • Mike1111 - Friday, May 13, 2011 - link

    I'm not sure that Silvermont will really matter by then. Sure, if it's as good as ARM SoCs or even a bit better, then Intel will get a few design wins. But I don't really see the incentives for most device manufacturers to switch, unless Silvermont is at least 50% better overall than the ARM competition (and not worse in any category) for a cheaper price. And don't forget that with ARM you have the choice between at least three high-end SoCs from different manufacturers, or you can even design your own chips. And more and more stuff gets shifted to the cloud, too.

    By the end of 2013 we'll probably have devices with the second gen of Cortex-A15 SoCs @22nm/20nm, with a quad-core Cortex-A15 and the second iteration of IMG's Series 6. And high-end tablets will probably have >=1080p resolutions, 2GB RAM and 3rd/4th gen LTE chips. What I mean by that is that we'll be with tablets where we are now with PCs: current hardware is good enough for most people and most tasks. Of course software and the whole ecosystem will play a huge role: e.g. even though last year's(!) MacBook Air is vastly underpowered in terms of specs, most people using it for most tasks will find it to be a snappy machine and good enough.
  • Shadowmaster625 - Friday, May 13, 2011 - link

    Can you get a review of one of these chips? (And/or the B810) I cannot be sure, but I think the die size on this chip is 140mm^2. So at 22nm we're talking 100mm^2. Talk about an incredibly powerful chip for 100mm^2. If they took one of those chips and cut the cache in half, they could easily make it 70mm^2. With the ultra-low-voltage operation improvements, as well as the reduced cache, it should be easy enough to get this under 5W. This chip would be a monster in a netbook. I'm more interested in this than in Atom.
  • L. - Monday, May 16, 2011 - link

    At 22nm, you will have Brazos-style chips from anyone in the ARM cloud, plus AMD, that will have the performance of a Llano in a 10-watt TDP, more or less (and yes, Llano is not out yet, but you can imagine it beats my config: C2D @ 3.89 + HD 4850).

    And thus... what would be the point of a shrunk Celeron, a shrink of the worst kind of CPU ever made, one that is already so low on cache you can feel it?
  • iwod - Friday, May 13, 2011 - link

    Let's just say the GPU will be the same and most ARM and Atom chips will be based on PowerVR Series 6.
    Then the only difference would be the CPU.

    I keep wondering why we have to keep MMX and other useless instructions on a mobile/smartphone CPU when 99.9% of the apps will have to be rewritten anyway. Why not take those out to spare the transistors?

    A 22nm 2nd-gen Atom with out-of-order execution WILL surely be enough for a smartphone, but at what power level?

    I still think ARM will have the advantage in terms of power/performance and flexibility.
