
  • runner50783 - Monday, December 24, 2012 - link

    This is why I don't care if you post your podcast on time : P, you guys are the best when it comes to technical analysis and reviews, you are the reference for the rest of the review sites, keep it up!

    Well, I guess graphics power may be the only advantage of Tegra 3 over Clover Trail right now, and that may change with proper drivers too... Now it really makes little sense to have an RT-based system. I feel bad for pulling the trigger on a Surface RT after playing with a Clover Trail tablet; it was much faster at almost everything except games.
  • designerfx - Tuesday, December 25, 2012 - link

    This is the first time in a long while that I've seen Anandtech declare that something which isn't accurately measured is actually accurately measured and compared.

    I'm quite disappointed, honestly. 5th core on the Tegra is a significant part of what defines its power savings, and yet they're saying "Welp. It's about the same!".

    I'm really questioning this a lot, Anand.
  • Deo Domuique - Thursday, December 27, 2012 - link

    This is his yearly fat bonus from Intel :P

    As for us, just ignore it and move on.
  • MrSpadge - Saturday, December 29, 2012 - link

    What the ...?! If there's currently no way to enable the 5th core in Win 8, what exactly are they supposed to do? I'm sure Anand would love to repeat these tests if any way to fix this became available... but right now that's how both devices perform. Call Tegra 3 crippled on Win 8 if you want, as that's what it appears to be.
  • Activate: AMD - Saturday, December 29, 2012 - link

    What's the problem? They're comparing performance under the Windows RT environment, and Anand mentions on the very first page:

    "One last note: unlike under Android, NVIDIA doesn't use its 5th/companion core under Windows RT. Microsoft still doesn't support heterogeneous computing environments, so NVIDIA had to disable its companion core under Windows RT."

    The only thing that's disappointing here is your inability to comprehend what's being tested.
  • Wolfpup - Thursday, January 03, 2013 - link

    That would improve idle power draw, but this is a real-world test for the time being, and even still, Atom is more powerful yet uses less power, AND of course, because it's more powerful, it finishes doing work faster.

    Even if that fifth core were supported, this is still quite nifty.

    The fact that I can, today, buy an actual x86 PC that can easily double as an iPad class ebook reader/light web browsing tablet is just amazing. (eInk's better, but you know what I mean).
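
    The "finishes doing work faster" point above is the classic race-to-idle argument; a minimal sketch with made-up numbers (all figures hypothetical, not measurements from the article):

```python
# Race-to-idle sketch: all power and timing figures below are made up
# for illustration; they are not measurements from the article.

def energy_joules(active_w, idle_w, task_s, window_s):
    """Total energy over a fixed window: run the task, then sit idle."""
    return active_w * task_s + idle_w * (window_s - task_s)

# A "fast" chip that draws more power while active but finishes sooner
# can use less total energy than a "slow" chip over the same window.
fast = energy_joules(active_w=2.0, idle_w=0.1, task_s=4, window_s=10)
slow = energy_joules(active_w=1.5, idle_w=0.1, task_s=8, window_s=10)

print(f"fast chip: {fast:.1f} J, slow chip: {slow:.1f} J")
```

    With these toy numbers the faster chip comes out ahead (8.6 J vs. 12.2 J over the 10-second window) despite its higher active draw, which is the shape of the argument being made.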
  • Alexvrb - Thursday, December 27, 2012 - link

    I'm not a huge fan of Tegra 3, but I wouldn't regret your Surface purchase too much. My dad has one; it's built better than any other tablet I've held. Plus Intel's solution is almost entirely unsatisfactory for 3D gaming. Intel is only using a single SGX 545 core. They clock it pretty high, but it isn't enough. The latest iPad uses a 4-core SGX 554 solution, and clocked right it seems to do pretty darn well on power too. Clocks aren't as high, but it blows both the Tegra 3 and any ULV Atom out of the water.

    I'm not a huge fan of Apple either, but they generally use the most cutting-edge Imagination Technologies IP available. That might include Rogue (Series 6) in the next go around... Intel needs to get Series 6 in their chips ASAP.

    Oh, and graphics drivers? Intel? Good luck! If this was Nvidia or AMD I might have agreed with you.
  • ssj3gohan - Monday, December 24, 2012 - link

    This is really pornographic power electronics stuff. I do this stuff every day with my ultra-power efficient computer projects, but I didn't know that Intel walked around with this kind of a setup to show transparently what they are doing with power consumption.

    Good work, Intel. Now go and apply your knowledge to desktop platforms as well, because I'm kicking your ass here. I have yet to come across a desktop platform that cannot be made at least 50% more power efficient just through power electronics hacks.
  • madmilk - Monday, December 24, 2012 - link

    Did anyone honestly think Intel, with a hundred times the revenue of ARM, would let themselves be devoured?

    Intel will still have trouble entering the mobile device market because of the ISA lock-in ARM enjoys. On the other hand, ARM at this moment does not stand a chance in the (micro)server market either, especially as Intel expands its server Atom line.
  • tipoo - Monday, December 24, 2012 - link

    Yeah, I think Intel will do fine and remain the 900lb gorilla in the mobile arena as well. ARM has the lead now, but once Intel has its sights set on something vital to its interests it tends to destroy the competition.
  • snoozemode - Monday, December 24, 2012 - link

    The difference this time is huge compared to the competitors Intel has faced in its history. It's not just about specs anymore, whether performance or performance per watt; it's a lot more about economics and margins. Anyone with an ARM license can build a SoC, and a lot of new players are coming to the game, especially from China. And with players like Samsung producing their own SoCs for their own devices, soon LG and possibly Apple as well, the days of Intel dictating the terms and dominating the industry are just not going to happen this time.
  • KitsuneKnight - Monday, December 24, 2012 - link

    You mean it's much closer to Intel's early days, when it faced dozens of competing companies, instead of just a couple tiny ones.
  • snoozemode - Tuesday, December 25, 2012 - link

    Still very different because of ARM's business model, which is nothing like the past with Intel, AMD, Cyrix, IBM, etc. Intel definitely didn't become the giant it is because of products with good value. They are what they are because of continuously raising the bar above everyone else from a performance standpoint, at a time when performance was everything and energy efficiency counted for almost nothing. And when they have sometimes failed to compete from a performance standpoint (e.g. against AMD around 2000), they have resorted to some serious foul play.
  • ay@m - Thursday, December 27, 2012 - link

    well, what if Intel is now measuring the energy efficiency as the new performance benchmark?

    that's exactly what this article is pointing out, that Intel's Atom chip is starting to focus on energy efficiency and not focusing on the performance anymore...right?

    so I think Intel understands now what it takes to enter the mobile market and what the end user will perceive as the new performance benchmark.
  • Kidster3001 - Friday, January 04, 2013 - link

    You make Intel's point!

    Intel will continue to have the performance advantage. They are now going after the power part of the equation. When they are best at both (or at least significantly better at one and the same at the other), what do you think the result will be?
  • Kevin G - Tuesday, December 25, 2012 - link

    The problem for Intel is that they'll design the SoC that they want to manufacture and not necessarily what the OEM wants. Intel could over-spec an SoC to cover a broader market, but at an increased die size and power consumption. Intel's process advantage mitigates those factors, but they'll still be at a disadvantage compared to another SoC designer that'll make a product with only the bare necessary functional blocks. For Intel to become a major mobile player, they'll have to start listening to OEMs and designing chips specific to them.
  • yyrkoon - Tuesday, December 25, 2012 - link

    Intel already is a major player in the mobile market. They have been for far longer than anyone producing ARM-based processors.

    Granted, according to an article I read a few months back, tablet / smartphone sales are supposedly eclipsing the sales of laptops and desktops combined. Whether true or not, I can see it being possible soon, if not already.
  • p3ngwin1 - Tuesday, December 25, 2012 - link

    yep, it's also about PRICE.

    if you've seen the price of the cheap Android tablets and devices, you know Intel will have to either convince us we need to pay a premium for the extra battery-life and performance, or they're going to have to lower their prices to compete aggressively.

    you can buy decent SoCs from Allwinner, Rockchip, Nvidia, Qualcomm, etc. that deliver "good enough" specs at a decent price, while Intel charges a premium for its processors.

    those cheap China SoCs like Rockchip and Allwinner, etc. are ridiculously cheap, and you can get Android 4.1 tablets with 1.6GHz dual-core ARM A9 processors with Mali-400 GPUs and 7" 1280x720 screens for less than $150.

    Intel charges way more for their processors compared to the ARM equivalents.
  • name99 - Tuesday, December 25, 2012 - link

    The price issue is a good point.

    A second point is to ask WHY Intel does better in this comparison. I'd love to see a more detailed exploration of this, but maybe the tools don't exist. However, one datapoint we do know is this:
    (a) SMT on Atom is substantially more effective than on Nehalem/SB/IB. Whereas on the desktop processors it gets you maybe a 25% boost (on IB, worse on older models), on Atom it gets you about 50%. (This isn't surprising because Atom's in-order'ness means there are just that many more execution slots going vacant.)
    (b) SMT on Atom (and IB and SB, not on Nehalem) is extremely power efficient.

    So one way to run an Atom and get better energy performance than CURRENT ARM devices is to be careful about using only one core, dual threaded, for as long as you can (which is probably almost all the time --- there is very little code that makes use of four cores, or even two cores both at max speed).

    I bring this up because this sort of Intel advantage is easily copied. (I'm actually surprised it hasn't already been done --- in my eyes it's a toss up whether Apple's A7 will be a 64-bit Swift or a Swift with SMT. There are valid arguments either way --- it's in Apple's interests to get to 64-bit and a clean switchover as fast as possible, but the power advantages in SMT are substantial, since you can keep your second core asleep so much more often.)

    Once you accept the value of companion cores it becomes even more interesting. One could imagine (especially for an Apple with so much control over the OS and CPU) a three-way graduated SoC, with dual-core OoO high-performance cores (to give a snappiness to the UI), a single in-order SMT core (for work most of the time), and a low-end in-order core (for those long periods of idle punctuated by waking up to examine the world before going back to sleep). The area costs would be low (because each core could be a quarter or less of the area of its larger sibling); the real pain would be in writing and refining the OS smarts to use such a device well. But the payoff would be immense...

    Point is --- I wouldn't write off ARM yet. Intel has process advantages, and some µArch advantages. But the µArch advantages can be copied; and is the process advantage enough to offset the extra cost (in dollars) that it imposes?
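
    The one-SMT-core-vs-two-cores trade-off above can be put into a toy perf-per-watt model (the ~50% Atom SMT uplift is from the comment; the power figures are pure assumptions for illustration):

```python
# Toy model of "one core, two SMT threads" vs. "two cores, one thread each".
# The ~1.5x Atom SMT uplift is quoted above; the power numbers are assumed.

base_perf = 1.0    # throughput of one core running one thread
base_power = 1.0   # active power of one core (arbitrary units)

# SMT: ~50% more throughput for a modest power premium (assume ~15%).
smt_perf, smt_power = 1.5 * base_perf, 1.15 * base_power

# Waking a second core: ~2x throughput, but roughly 2x active power.
dual_perf, dual_power = 2.0 * base_perf, 2.0 * base_power

smt_ppw = smt_perf / smt_power     # perf per watt with SMT
dual_ppw = dual_perf / dual_power  # perf per watt with two cores

print(f"SMT: {smt_ppw:.2f} perf/W, dual-core: {dual_ppw:.2f} perf/W")
```

    Under these assumed numbers the single SMT core wins on efficiency (about 1.30 vs. 1.00 perf/W), which is why keeping the second core asleep for as long as possible pays off.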
  • Kidster3001 - Friday, January 04, 2013 - link

    big.LITTLE is a waste of time. It is a stop-gap ARM manufacturers are using to try to keep power down. As they increase performance they are not able to keep power in the envelopes they thought they could. It is far more efficient to have cores that can fill all the roles you suggest by dynamically changing how they operate. This is where Intel will succeed; just look at what Haswell can do.
  • yyrkoon - Tuesday, December 25, 2012 - link

    First, no one claimed (at least seriously / sanely) that "ARM" would devour Intel. There *are* far more ARM-based processors out there in the world than Intel processors. That is a simple fact. ARM has also existed for a long time, albeit not quite as long as Intel, if I remember right.

    ARM sells far more processors. ARM has processors in just about any processor-controlled device you can think of (and many you probably have not). However, what is considered "ARM" may be a simple Cortex-M0 processor that costs only a few US dollars, used as a simple keyboard controller, or for countless other possible uses. These devices are also RISC based, and are made to do specific-purpose compute tasks while using very little power. Even less while in low-power mode (sleep), with the ability to sometimes wake from an interrupt in as little as 1-2 cycles (we're talking microseconds here).

    Lastly, if you wanted to compare processor revenue, you would have to compare profits from ARM and ARM's partners who sell ARM-based processors. You see, this is not an ARM vs. Intel thing. This is Intel vs. the horde that is ARM, while also keeping in mind that costs can be considerably lower for ARM processors.
  • name99 - Tuesday, December 25, 2012 - link

    Yes and no.
    You are right about Intel's disadvantages. But it's also worth remembering that there are huge numbers of embedded devices out there based on some flavor of PPC (lots of game consoles, lots of network devices, lots of auto entertainment systems) or MIPS. And neither IBM nor MIPS was able to take that embedded advantage into the "branded" CPU space.

    ARM is obviously different. They've managed to make their brand matter, and they are working hard on improvements (whereas both IBM and MIPS seem to be content to sell ever smaller dies of a design from the mid-nineties). But it would be unwise, IMHO, to assume too much advantage in ARM's plentiful very low-end sales.
  • yyrkoon - Tuesday, December 25, 2012 - link

    "ARM" is winning in the context that more Android devices have sold in the last few years than x86 PCs have ever sold. At least according to an article I read a few months back.

    However, that was not meant to be my point. My point was that ARM has been around a while and will continue to be around a lot longer, while their profits are multi-entity, not single.

    Personally, I like the fact that *now* Intel is paying attention. It is good for the industry.
  • stimudent - Tuesday, December 25, 2012 - link

    Also consider Intel's masterful use of kickbacks to manufacturers and suppliers, as well as threats to those who won't conform. That should help them too against ARM.
  • mrdude - Tuesday, December 25, 2012 - link

    and convincing these same OEMs to use Intel is going to be a tougher sell now. Apple and Samsung have no desire nor need for an Atom in a tablet when they've got their own SoCs. And superior SoCs, might I add.

    Mobile applications don't care what ISA they're running on, thus Intel loses the x86 compatibility point here. You're not going to run Photoshop on your smartphone, and while it might work on your tablet, you're going to be pulling your hair out due to its lackluster performance for productivity apps (see Anand's Clover Trail review). If Intel restricts their x86 Atoms to Win8 devices, they're going to have a hard time selling these in any large quantity.

    Then there's the issue of on-die GPU, which for tablets and smartphones is even more important than the CPU performance. That's one area where Intel still lags way behind the others. For mobile devices, gaming apps are the most popular. If Intel has great perf-per-watt and good CPU throughput but still lags woefully behind in GPU performance, the OEMs and consumers won't buy it. Asking a Clover Trail to game on a full HD display or even retina quality isn't going to work out well.

    Price is what matters here above all else. In order for Intel to maintain their fab advantage they also require selling loads of processors at high margins. With huge competitors in the mobile space (Qualcomm just surpassed Intel in market cap) who sell these by the boatloads, Intel's going to have a very tough time of it.

    It is great to see that the x86 power myth is busted, though. That 2-3% of die space dedicated to the x86 decoder doesn't seem to make too much of a difference. Now for the price and GPU portion...
  • Sahrin - Tuesday, December 25, 2012 - link

    Intel doesn't compete with ARM; they compete with Qualcomm, Samsung, nVidia, etc. There is no ISA-level competition; ISA is irrelevant... that's the thesis of the article.
  • Kidster3001 - Friday, January 04, 2013 - link

    The ISA argument is exactly what people said about desktop PCs in the early '80s. "Everything is owned by IBM and Motorola. Intel can't win with this new x86 architecture thing." 10 years later?

    Same thing happened in Servers. "It's all owned by RISC now, x86 will never succeed in servers." 10 years later?

    Then super-computers (HPC) "Intel doesn't stand a chance!" 10 years later?

    Let's wait and see what mobile looks like in 5 or 10 years. History tells us that once Intel decides to seriously play the game, they figure out a way to win.
  • mavere - Monday, December 24, 2012 - link

    I don't expect power efficiency here to compete well with the iPad, but I hope Intel gets there soon, if only to preempt any ideas Cupertino might have about moving the MBA line to ARM.

    Also, maybe Google's new Motorola subsidiary will do something with this. Samsung and Apple have their own chip designs, and MS doesn't really have room in its Surface + Surface Pro dichotomy for a slow(er) x86 part.
  • tipoo - Monday, December 24, 2012 - link

    I'll be curious to see if Intel designs get such a power/performance lead that it gets to a point where Apple would be foolish not to switch over. Reply
  • Kevin G - Tuesday, December 25, 2012 - link

    From a hardware standpoint, Intel could get there. The problem with transitioning from ARM to x86 would be one of application compatibility. Apple has to maintain their app ecosystem, and a platform change would be very costly. On the other hand, there would be a benefit to Apple solidifying their operating systems on one platform (and possibly merging iOS and OS X themselves).

    The other factor is that Apple is designing not only their own SoCs but also their own CPU cores. That is a major investment, and Intel would have to have a seemingly overwhelming product for Apple to write those investments off.
  • Exodite - Tuesday, December 25, 2012 - link

    I think it's probably a fair guess that Apple has planned to converge their mobile and traditional computer business to the same hardware platform for some time.

    It's just not going to be x86.
  • wsw1982 - Tuesday, December 25, 2012 - link

    However, the fact is the Atom can emulate ARM with similar performance, but not the other way around. It will be interesting to see Apple fully commit to netbook-level performance.
  • StevoLincolnite - Tuesday, December 25, 2012 - link

    Software compatibility isn't that big of a deal either, as Intel showed us binary translation a while ago, allowing Medfield to run both x86 and ARM instructions.
  • krumme - Monday, December 24, 2012 - link

    I think this just proves Intel's business is not tailored for the new low-cost mobile market. The A9 on low-cost 40nm eats Atom for breakfast each and every day on the market -fact- and the A15 will do exactly the same on dirt-cheap 28nm.
  • tipoo - Monday, December 24, 2012 - link

    I think we read different articles. Atom is rather competitive. It is not eaten for breakfast by 40nm ARM parts.
  • yyrkoon - Tuesday, December 25, 2012 - link

    Depends on how you look at it. Find me an Atom-based 7" tablet for $200, such as the Nexus 7 (which many regard as the best tablet in its class).
  • p3ngwin1 - Tuesday, December 25, 2012 - link

    like I said in another comment here, you can get Chinese tablets running Android 4.1 with 1.6GHz dual-core ARM processors with Mali-400 GPUs (good enough for many) and 7" 1280x720 screens, etc. all for less than $150.

    the chips are usually Allwinner or Rockchip, etc and the performance is good enough for most people at an incredible price that Intel simply can't match.

    There's a reason in the early Netbook days why Intel wouldn't let Atom processors inside anything larger than 10" and 1024x768 screens, etc

    It's because Intel didn't want people being happy with Atom performance in larger computers, so if you wanted larger screens, etc you were artificially forced to pay for more processing potential than you needed.

    Intel have the performance, and lately they're getting the power efficiency too, but they are still a premium processor option, and that's where ARM still has the advantage.

    I don't see Intel willing to drop their prices any time soon to compete with cheap Android tablets. Intel would rather create bullcrap categories like "Ultrabook" (it's still a laptop, for Christ's sake!) and convince people they NEED expensive computers that cost $800+.

    Meanwhile other ISA's like ARM and MIPS are lowering the price barrier to products with more than enough processing power and battery-life for most people.

    Intel are left to convince people they need a desktop/laptop in a world increasingly going mobile and always-connected.
  • yyrkoon - Tuesday, December 25, 2012 - link

    You can even buy ICS tablets for as little as $50 if you keep an eye out.

    Personally though, I would not settle for anything less than the Nexus 7. Sometimes, peace of mind means more than money.

    The point was, however, that there is more than just power/watt efficiency to consider here. Especially when enjoying those numbers comes at a huge price premium.

    Along the lines of what Intel cannot match price-wise: I am fairly confident that Intel cannot even match prices with Texas Instruments in most cases. But I also believe that Intel does not need to convince buyers that desktops and laptops are still necessary, mainly because they mostly are (and will continue to be). At minimum, high-performance workstations and servers will still need to exist for various applications.

    I think that x86 and ARM both will continue to be around for a very long time. Which is a good thing.
  • wsw1982 - Tuesday, December 25, 2012 - link

    So Intel cannot make the Atom cheaper? Even though it doesn't need to pay TSMC for manufacturing, doesn't need to pay ARM for a license, does have a more mature 32nm process than the rest of the industry, does have a Medfield die smaller than both Tegra 3 and Krait, and does have production capacity sitting idle for nothing?

    Desktop and server alone could let Intel maintain its manufacturing advantage and R&D spending, so how much would it hurt to put that idle production capacity to work producing mobile chips as a bonus? Anyway, the PC market will still be growing according to all the professional predictions, and the Atom can quite safely reuse the R&D spent on the Core processors.

    For $100 you can also get a netbook from Chinese manufacturers which, despite the cheap feel and bad build quality, is as responsive and useful as the netbooks from big companies. But the $80 Android tablet is nearly unusable.
  • talzara - Thursday, December 27, 2012 - link

    You do realize that Texas Instruments has exited the ARM processor business for mobile devices, right? The margins were too thin. They're still making ARMs for embedded devices, but they've given up on mobile.

    ARM is the classic disruptive innovation. It reduces prices for consumers, and cuts a swathe of destruction through industry margins. There are just too many players in ARM -- they're interchangeable enough that they have a hard time charging any kind of premium for their products.

    Nvidia has shipped tens of millions of Tegras, so much so that it now accounts for 20% of Nvidia's revenues. Great business, right? Think again. Tegra accounts for -16% of Nvidia's net income. That's not a typo -- it really is a negative number. Nvidia makes all of its money from GPUs -- gaming GPUs, consumer GPUs, and GPUs sold for massively-parallel computing. (Source: Nvidia 10-Q for Q3 of fiscal 2013 -- segment breakdown at the bottom of page 27.)

    So now we've got one major ARM vendor quitting, and another major ARM vendor bleeding cash. The ones that are doing well are the ones that don't actually care about the CPU. Qualcomm is horizontally integrated -- they make money on the LTE chipset. Apple and Samsung are vertically integrated -- they make their money on the whole device, not on the CPU.

    In such a crazy market, Intel may well prefer to sell a premium product to 5% of the market, rather than a price-competitive product to 30%.
  • felixyang - Friday, December 28, 2012 - link

    You get the key point.
  • st.bone - Tuesday, December 25, 2012 - link

    Now your comment is just outright ignorant; that's if you even took the time to read the article.
  • jeffkibuule - Monday, December 24, 2012 - link

    This kind of makes me shake my head as to why a year-old SoC was used when Samsung was shipping the Exynos 5250 and Qualcomm had the APQ8064; heck, nVidia has Tegra 4 just waiting in the wings for a likely CES announcement (I know why: bad timing).

    My only hope is that in 2013, ARM and Atom SoCs can support 1080p displays, I don't think I can use anything less without wanting to poke my eyes out.
  • kyuu - Monday, December 24, 2012 - link

    Agreed. This article mostly highlights what an out-of-date SoC Tegra 3 is, and how bad it is without its companion core. Which is why it's so perplexing that Microsoft went with the Tegra 3 for the Surface. I can only guess that Nvidia must be practically giving away Tegra 3 nowadays; otherwise I have no idea why anyone would use it.

    I'm not sure it's a huge win for Intel that Clover Trail beats a mostly obsolete ARM SoC in power use with such an incredibly mediocre GPU.

    A more interesting comparison would be between a current-gen Qualcomm chipset and/or Samsung's current-gen Exynos.
  • kyuu - Monday, December 24, 2012 - link

    To clarify the second "paragraph", Clover Trail is the one with the mediocre GPU.
  • lmcd - Tuesday, December 25, 2012 - link

    They're both mediocre GPUs at this point. Didn't the Adreno 225 keep pace with the T3 GPU?
  • jeffkibuule - Tuesday, December 25, 2012 - link

    The reasoning for going with Tegra 3 was pretty obvious. They needed a more finalized chip to develop Windows RT against, and Tegra 3 was the only thing available. Relying on a faster class of SoCs like the Tegra 4, Snapdragon S4 Pro, or Exynos 5250 would have meant delaying the Windows RT version of tablets by several months at least, since I doubt development would have been done in time.
  • lmcd - Tuesday, December 25, 2012 - link

    The standard S4 would more likely have done better. Clover Trail demoed how viable two cores are for W8, let alone Windows RT.
  • wsw1982 - Tuesday, December 25, 2012 - link

    There are already comparisons between Medfield, Krait, Swift, and the Exynos 5250 on the Android platform, aren't there? I think Microsoft is the one to blame that you can only compare Clover Trail and Tegra 3 on the Windows platform, LOL
  • Krysto - Monday, December 24, 2012 - link

    Intel will not have anything that even approaches next year's ARM SoCs' GPUs anytime soon. And they can't match the Cortex A15 in performance anytime soon either.
  • karasaj - Monday, December 24, 2012 - link

    All they need to do is either put in Intel HD graphics (Haswell) or license a better GPU from Imagination, I imagine. Although ARM (Samsung?) have really been developing better GPUs lately; they seem to be catching up.
  • jeffkibuule - Tuesday, December 25, 2012 - link

    They've already stated they will be integrating a variant of the Intel HD 4000 GPU in their next-generation Atom SoC; the only questions are how many Execution Units and what kind of power profile they will be targeting.

    With Intel, the question isn't so much about performance, but maximizing profits. If they build an Atom SoC that's so great and also cost-competitive with other ARM chips, who would buy their more expensive Core CPUs? This is one reason why I believe that the Atom and Core lines will eventually have to merge, just like how the Pentium and Pentium M lines had to converge into the original Core series back in 2006 (oh, the irony of history repeating itself).
  • lmcd - Tuesday, December 25, 2012 - link

    It's more likely a variant of the HD 2500, which won't be enough. The HD 4000 doesn't even beat the 543MP3, does it?
  • jeffkibuule - Tuesday, December 25, 2012 - link

    There haven't really been any comparisons of mobile and smartphone GPUs yet. We'll have to wait for 3DMark for Windows 8 to get our first reliable comparison.
  • wsw1982 - Tuesday, December 25, 2012 - link

    I think it depends: 16 543MP3 cores should beat the 4K :) A single-core 543MP3 is not better than a 545.
  • mrdude - Wednesday, December 26, 2012 - link

    It's also a matter of TDP, though. The ARM SoCs pack a lot of punch on the CPU side, often with better GPU performance, on an equal footing with respect to TDP (sub-2W for smartphones and roughly sub-5W for tablets).

    As much as Intel wants to pound home the point that x86 is power efficient, it's an SoC and therefore a package deal. Intel still suffers from the lopsided design approach, dedicating far too much die space to the CPU with the GPU an afterthought. If you look at the more successful and popular/powerful ARM SoCs, it tends to be the other way around. A balanced approach with great efficiency is what makes the Snapdragon S4's such fantastic SoCs and why Qualcomm has now surpassed Intel in total market cap. The GPU is only going to become more and more important going forward due to PPI increasing drastically. At least for Apple, they've already reached a point where they're required to spend a huge portion of the die to the GPU with smaller, incremental bumps in CPU performance.

    This really seems like Intel is shoehorning their old Atom architecture into a lower TDP, saying: "Look! It's efficient! Just don't pay any attention to the fact that we're comparing it to a 40nm Tegra 3 and don't you dare do any GPU benchmarks." These things are meant for tablets, are Intel not aware just how much MORE the GPU matters? Great perf-per-watt (maybe), but that's all for nothing if the SoC sucks.
  • somata - Monday, December 31, 2012 - link

    As others have said, it'll be nice once we can do proper comparisons between tablet/notebook/desktop GPUs, but in the meantime just consider the peak shader performance of each:
    Intel HD 4000 - 16(x4x2) @ 1.3GHz - 333 GFLOPS
    Intel HD 3000 - 12(x4) @ 1.3GHz - 125 GFLOPS
    PowerVR SGX 543MP4 - 16(x4) @ 300MHz - 38.4 GFLOPS
    PowerVR SGX 554MP4 - 32(x4) @ 300MHz - 76.8 GFLOPS
    The PowerVR numbers are based off of Anand's analysis. Obviously not exactly a fair comparison, but clearly Intel's mainstream integrated GPUs are substantially more powerful than any current PowerVR design. Of course that shouldn't be a surprise given the TDP of each platform.
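
    The figures above are consistent with a simple peak-FLOPS formula: execution units x SIMD lanes per unit x 2 ops per cycle (multiply-add) x clock. A quick sanity check, treating the parenthesized figures in the list as lane counts (e.g. "16(x4x2)" as 16 EUs of 2x4-wide units):

```python
# Sanity-check the peak shader GFLOPS quoted above:
# GFLOPS = units x SIMD lanes per unit x 2 ops/cycle (multiply-add) x GHz.

def peak_gflops(units, lanes, clock_ghz, ops_per_cycle=2):
    return units * lanes * ops_per_cycle * clock_ghz

hd4000 = peak_gflops(16, 8, 1.3)     # Intel HD 4000: 16 EUs, 2x4-wide
hd3000 = peak_gflops(12, 4, 1.3)     # Intel HD 3000: 12 EUs, 4-wide
sgx543mp4 = peak_gflops(16, 4, 0.3)  # SGX 543MP4: 16 pipes, 4-wide
sgx554mp4 = peak_gflops(32, 4, 0.3)  # SGX 554MP4: 32 pipes, 4-wide

print(hd4000, hd3000, sgx543mp4, sgx554mp4)
```

    This reproduces ~333, ~125, 38.4, and 76.8 GFLOPS, matching the list above (the Intel figures round up from 332.8 and 124.8).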
  • p3ngwin1 - Tuesday, December 25, 2012 - link

    there are already smartphones with 1080p displays, and Android tablets with even higher resolutions :)
  • coolhund - Tuesday, December 25, 2012 - link

    Plus the Atom is not OoO, and in-order is known to use much less power. Plus the OS is not the same.
    Sorry, but for me this comparison is nonsense.
  • tipoo - Monday, December 24, 2012 - link

    I'll be very interested to read the Cortex A15 follow up. From what I gather, if compared on the same lithography the A15 core is much larger than the A9, which likely means more power, all else being equal. It brings performance up to and sometimes over the prior generation Atom, but I wonder what power requirement sacrifices were made, if any.

    I'm thinking in the coming years, Intel vs ARM will become a more interesting battle than Intel vs AMD.
  • teiglin - Monday, December 24, 2012 - link

    You don't have to wait for the coming years for Intel vs. ARM to replace Intel vs. AMD. The latter stopped being interesting when Bulldozer fell so short of Sandy Bridge. I was a long-time AMD fan, but they haven't released a chip I'd consider buying for myself since Deneb.
  • kyuu - Tuesday, December 25, 2012 - link

    I have to disagree. I'm far more interested in what AMD is going to bring to the x86 tablet space with Hondo than what Intel's doing, ATM.
  • aspartame - Monday, December 24, 2012 - link

    Intel cannot compete with ARM despite having the most advanced fabrication technology. Surely the new Atom is somewhat more power efficient than the old Tegra 3, but it costs 3 times more.
  • KitsuneKnight - Monday, December 24, 2012 - link

    The 'new' Atom is also just a tweaked 5 year old Atom. What will be interesting is seeing how the next generation of Atoms compares against ARM's latest and greatest. Intel has proven that they can go blow for blow with ARM SoCs, despite people claiming just a couple of years ago that x86 would never even be within several watts of any ARM.
  • yyrkoon - Tuesday, December 25, 2012 - link

    Technically speaking, Intel still cannot approach ARM in power usage. It is all in the definition of "ARM". So it is a matter of context.

    I think, more appropriately, people were saying that Atom could never hope to approach ARM in the embedded market, where they would be completely right, unless you think an Atom-based SoC could run under 100 mW at full load.

    Again . . .context.
  • KitsuneKnight - Tuesday, December 25, 2012 - link

    People were talking about the Smartphone and Tablet spaces, not the ultra-tiny processors embedded into devices like SSDs. Intel doesn't really seem to want to compete in that space with their CPUs, as there's no profit to be made and no threat to their core business (they do occasionally compete with other products, but those aren't core products).

    The 'context' most people were talking about is the context that Intel is actually shown to be competitive in (at the very least against last gen devices). We'll see if they can appropriately pull the rest of their ecosystem together to lower the power consumption of the rest of the system, along with further reducing the power consumption of Atom while upping the performance.
  • yyrkoon - Tuesday, December 25, 2012 - link

    This discussion started years ago in the embedded space, where it should have stayed, and where ARM is still truly RISC in nature.

    However, no less than a year ago (and probably more like two years ago), several low-power ARM desktop systems were demonstrated using only 1-2 W under full load, on YouTube no less, while Intel (with Atom) was still fumbling around above 10 W.

    Having said that, "competitive" is still a subjective term in this case.

    At some point one has to realize, <this company> has <this> advantage over <another company>. But at what cost ? Which is partly why partners of ARM still exist in this market space.
  • jjj - Monday, December 24, 2012 - link

    Funny how you compare 2 chips running 2 different OSes and you deem the results conclusive. How low can you go?
  • karasaj - Monday, December 24, 2012 - link

    Except if anything Windows RT will draw less power than Windows 8.

    Also, if you measure at the CPU itself with a shunt resistor and voltmeter, the OS isn't going to change the reading much.
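    For what it's worth, the rail-level measurement the article describes (a DAQ sampling the voltage drop across a series shunt resistor) reduces to Ohm's law, which is why it is OS-agnostic. A hypothetical sketch; the shunt value and rail voltage below are invented for illustration, not the review's actual setup:

```python
def rail_power_watts(v_shunt, r_shunt_ohms, v_rail):
    """Power drawn on a supply rail, measured via a series shunt resistor.

    Current through the rail equals the voltage drop across the shunt
    divided by its resistance (Ohm's law); power is current * rail voltage.
    """
    current = v_shunt / r_shunt_ohms
    return current * v_rail

# Hypothetical sample: a 20 mV drop across a 20 mOhm shunt on a 5 V rail
# means 1 A flowing, i.e. 5 W drawn.
print(rail_power_watts(0.020, 0.020, 5.0))
```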
  • Reikon - Monday, December 24, 2012 - link

    Windows 8 and RT are essentially the same OS for different instruction sets with a few arbitrary feature differences unrelated to performance.
  • yyrkoon - Tuesday, December 25, 2012 - link

    Same OS, different HAL.
  • Krysto - Monday, December 24, 2012 - link

    Good job comparing a last-gen chip like Tegra 3 with Intel's latest Atom chip, Anand. Just because Microsoft is too slow to adopt a cutting edge ARM chip for Windows doesn't mean Intel is now "toe to toe" with ARM.

    Compare Nexus 10 and iPad 4 with Clover Trail in GPU performance and price, and then let me know how it went.
  • croc - Monday, December 24, 2012 - link

    People that tend to use terms like 'fact' also seem to be such complete tools...
  • Barnassey - Monday, December 24, 2012 - link

    My main thing is: why is Anand comparing a QUAD core CPU to a DUAL core CPU with Hyper-Threading? Of course the Hyper-Threading CPU will use less power.
  • Reikon - Monday, December 24, 2012 - link

    Because the dual core CPU still outperforms the quad core one?
  • Barnassey - Monday, December 24, 2012 - link

    No, because the power consumption will be different. That's the point I'm trying to make. Plus Anand has shown he's a little biased over the years.
  • r1cky4n - Monday, December 24, 2012 - link

    A single Atom core is not equivalent to a single A9 core.

    The important metric is performance/watt, not watt/core
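    To put numbers on that distinction (all values below are invented for illustration, not measurements from the article):

```python
# Performance-per-watt is the metric that matters: a chip with fewer,
# stronger cores can beat a chip with more, weaker ones.
def perf_per_watt(benchmark_score, avg_watts):
    return benchmark_score / avg_watts

# Hypothetical dual-core vs. quad-core comparison (made-up numbers)
dual = perf_per_watt(benchmark_score=1000, avg_watts=2.0)  # 500 points/W
quad = perf_per_watt(benchmark_score=900, avg_watts=3.0)   # 300 points/W
print(dual > quad)  # the dual core wins despite having half the cores
```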
  • sseemaku - Tuesday, December 25, 2012 - link

    So you want them to disable one of the channels in memory controller and reduce the cache size to match tegra3 for a fair comparison?
  • tempestglen - Monday, December 24, 2012 - link

    It seems that the "GPU Workload" and "Photo Enhance" CPU consumption figures in the summary table are inconsistent with the diagrams.
  • tempestglen - Monday, December 24, 2012 - link

    It also seems that the Kraken total platform power diagram is the same as the CPU-only diagram. I guess AnandTech posted the wrong picture.

    Merry Christmas!
  • flashbacck - Monday, December 24, 2012 - link

    "... Intel adopted a "no more surprises" policy... "

    What the F does that even mean? Don't build shitty products? Only upper management could come up with this genius policy.
  • magreen - Monday, December 24, 2012 - link

    Now we had a chance to hear your comment, and boy you're just a straight shooter with upper management written all over you.
  • lmcd - Tuesday, December 25, 2012 - link

    Intel's philosophy pays off in Windows, for sure.

    Now, they've got Qualcomm to beat. That's going to take a bit more doing than beating a year-old chip.
  • lunarx3dfx - Tuesday, December 25, 2012 - link

    I think several people misunderstood the purpose of this article. The purpose was to make the point clear that x86 can be performance/watt comparable to ARM. Yes, Clover Trail is only beating a Tegra 3, but considering where Intel was only a year ago, this looks promising. I don't see Anand being biased here; I see him making a point about power efficiency.
  • coolhund - Tuesday, December 25, 2012 - link

    Really? Was the x86 processor an out-of-order architecture?
    Nope. Apples and oranges that way.
  • Gigaplex - Tuesday, December 25, 2012 - link

    So what if it's not out of order? That's got nothing to do with the ISA. ARM could build an out of order chip if they wanted.
  • coolhund - Tuesday, December 25, 2012 - link

    The A9 is out of order, and so is the A15. That's not what I meant.
  • yyrkoon - Tuesday, December 25, 2012 - link

    The myth that ARM performed more power efficiently than x86 before Intel started caring?

    The simple fact is that the A9 does not encompass what ARM *is* (neither does Tegra 3, for that matter), and there are far more ARM-based processors out in the world than Intel ones. This will likely continue into the foreseeable future, simply because "ARM" is not locked into a specific market in the compute space.

    Personally I am all for seeing Intel improve the power efficiency of their products. However, my own opinion is that Intel should either ditch Atom and improve their server, desktop, and mobile processors, and/or create another processor that can decide what it really wants to be, e.g. an embedded application processor or not.

    One thing is for sure: Intel has their work cut out for them if they want to compete with ARM in the embedded market. One thing worth mentioning is kind of ironic: x86 is supposed to be the general-purpose processor, yet the usage of the various ARM-based processors seems more diverse.
  • wsw1982 - Tuesday, December 25, 2012 - link

    Atom is aimed at the smartphone and cheap tablet market; at least now, I think, Intel is quite clear about it. And it's very competitive now with the ARM solutions, so I don't see why Intel should abandon it along with the smartphone market. The Core line has actually improved a lot and, to me, has always been Intel's main focus.
  • beginner99 - Tuesday, December 25, 2012 - link

    ... to compare to Tegra 3. I think most would agree that compared to a dual-core A15 the outcome would be vastly different. But then, I think we should not get fooled by Intel. As was mentioned, this is still only a derivative of the original Atom on a smaller node. For Intel this is IMHO just a current placeholder. The real deal will come with the completely new uArch for Atom, and if they manage to "pull off a Core 2" again (which I believe they will), it won't look pretty for Team ARM.
  • yyrkoon - Tuesday, December 25, 2012 - link

    "The real deal will come with the complete new uArch for Atom and if they manage to "pull off a Core 2" again (which I believe) it won't look pretty for Team ARM."

    Maybe, but also keep in mind that Android device sales have already eclipsed the x86 PC market (in total numbers sold), which means things are not looking pretty for Intel already. Granted, it is a low cost market, which has less revenue potential.

    A "Core 2 based Atom" would be pretty cool, if they managed to keep the performance up. We'll see how that works out in a few years (maybe).
  • beginner99 - Tuesday, December 25, 2012 - link

    I would say phones have always outsold PCs in the last decade (I have no proof), but then they were just that: phones with pathetic ARM cores. Now they are more like computers.

    I did not mean Core 2 based, but just shocking your competitors and basically making them irrelevant overnight. It's a new uArch that will be way better than the current design.
  • coolhund - Tuesday, December 25, 2012 - link

    There have been reports before that stated the super duper uArch (OoO) would become reality with 32 nm Atoms, but it didn't. Now the same crap is assumed (!) again with the 22 nm ones.

    I don't believe it.
  • wsw1982 - Tuesday, December 25, 2012 - link

    The difference is assumed vs. official...
  • coolhund - Wednesday, December 26, 2012 - link

    Oh really?
    Show me that official statement then.
  • coolhund - Thursday, December 27, 2012 - link

    Thought so.
  • yyrkoon - Tuesday, December 25, 2012 - link

    My conclusion that Android device sales have eclipsed total x86 PC sales came from an article in an embedded trade magazine. It also makes sense on its own.

    Personally, I feel that Intel is going about the idea of Atom all wrong. Lower powered versions of the latest processors they have now would make more sense, say another tier under their current mobile line, while perhaps re-tasking Atom to embedded duties.

    As it stands now, Atom is a processor that cannot make up its mind what it wants to be. In a few ways it is competitive, but in terms of cost it has a ways to go yet.
  • coolhund - Wednesday, December 26, 2012 - link

    Well, at least they are now trying ULV versions of Sandy and Ivy Bridge.
    IMO that's the much better way to go.
  • InsGadget - Sunday, January 06, 2013 - link

    Your conclusion concerning Android vs PC sales is wrong. While Android is approaching (and perhaps, this year, surpassing) total PC sales per year, more PCs are sold right now than Androids. Source:
  • Exodite - Tuesday, December 25, 2012 - link

    The iPhone 5 review contains some nice tables that include both the Motorola RAZR M and Motorola RAZR i, and even there the general performance category favors the RAZR i.

    Battery life is a more difficult proposition, as the RAZR M has LTE while the RAZR i does not.

    Still, the A15 will make for an interesting comparison when available, both for Intel and for Qualcomm's Krait-based SoCs.
  • dangerjaison - Tuesday, December 25, 2012 - link

    The only reason ARM could make this progress is because of Android and iOS. They are built mainly to run on the ARM architecture. There are lots of issues, mainly hardware acceleration, on Intel's architecture, because developers build their games and apps to perform well on ARM. The recently launched Intel device with Atom running Android had good battery life and performance but couldn't succeed because of compatibility. Intel still can make a big comeback and take over ARM in the mobile market.
  • SilentLennie - Tuesday, December 25, 2012 - link

    Uh... actually, everyone knew the NVIDIA product sucked on the power efficiency front.

    There is still a lot of work to do for Intel.
  • Blaster1618 - Tuesday, December 25, 2012 - link

    One would think that NVIDIA would have spent a couple of dollars to work on their GPU efficiency. lol
    The ULP GeForce at 520 MHz on a 40 nm process easily beat a PowerVR SGX545 (65 nm).
    Even when NVIDIA moves to 28 nm technology next year, it will move from a pig to a Pig-lite.
    Another thought: it is so Microsoft to make an ARM-specific OS that does not support the 5th core on the Tegra 3.
  • CeriseCogburn - Friday, January 25, 2013 - link

    Tegra4 is looking mighty fine, so whatever.

    Tegra3 was great when it came to gaming - it kept making Apple's best look just equal.

    Microsoft may actually be the bloated pig syndrome company. I find it likely that the LP 5th Tegra core wasn't enough to keep the fat msft pig OS running.
    Of course, it could just be their anti-competitive practice in full swing.
  • GillyBillyDilly - Tuesday, December 25, 2012 - link

    but when I watch the power eaters on a Nexus 7, up to 90 percent is used by the display alone, which makes CPU power efficiency seem somewhat irrelevant. Isn't it time to talk about display efficiency?
  • CeriseCogburn - Friday, January 25, 2013 - link

    Yep. Good point. No, great point, although it looks to be more like 40% for display power use on the new large screen mobile phones and phablets.
  • shadi_h - Tuesday, December 25, 2012 - link

    I really believe Microsoft missed a great opportunity by not going forward with an Intel-only CPU strategy (they already have the best development kits for x86). An Intel-powered cellphone is what I really want! Maybe the RT version should have been Clover Trail with 32-bit and Pro with 64-bit. Their decision makes me believe they put too much emphasis on getting easy app conversions from the iOS/Android communities and not on creating the best hardware.
  • jeffkibuule - Tuesday, December 25, 2012 - link

    It's not such a great idea to hitch all of your hopes on Intel; they seem to only do their best work when they have a strong competitor.
  • shadi_h - Tuesday, December 25, 2012 - link

    True, but that's no one's fault but AMD's. It seems they have no clue how to even enter this space. That's puzzling, since it can be argued they potentially have the best overall SoC tech (I thought that was the whole reason they bought ATI in the first place).
  • Powerlurker - Wednesday, January 02, 2013 - link

    AMD dumped their mobile lineup in 2008 and sold it to Qualcomm (now known as Adreno) and sold their STB lineup (Xilleon) to Broadcom. Anything they could have used is gone at this point.
  • powerarmour - Tuesday, December 25, 2012 - link

    This is apples to oranges in some respects: the 5th companion core is disabled in Windows RT for Tegra 3, plus its GPU is faster than the PowerVR core in Clover Trail.
  • sonelone - Tuesday, December 25, 2012 - link

    Once an A15 gets put in a Windows tablet, I would like to see an updated comparison.
  • skiboysteve - Tuesday, December 25, 2012 - link

    I'm a long-time reader and an engineer at National Instruments. I have used that USB-6259 and SignalExpress many times. Very cool to see it on my favorite site.
  • Beenthere - Tuesday, December 25, 2012 - link

    AMD is the one who has been preaching the "User Experience" for years, because their products in fact have been delivering a better User Experience when you run real apps and cut through all the marketing B.S. and tainted benches that Intel spends fortunes on each year to manipulate hacks and consumers.

    It's no surprise that Intel is now trying to mislead the sheep on power consumption also, seeing as AMD has had better power saving features in their CPUs for years. Naturally Intel and certain hacks will proclaim this as an "Intel breakthrough" when in fact AMD has been leading the way in reducing power consumption in actual use - for years.

    It's pretty easy to dupe the naive and gullible and Intel is really good at buying the media reporting they desire be it with ad dollars, invites to "special" events or other "perks" to gain a psychologically favorable interpretation of their latest marketing ruse-of-the-week.

    The sheeple will buy into it all hook, line and sinker. If you're that technically challenged, you deserve to get fleeced.
  • puppies - Tuesday, December 25, 2012 - link

    Don't hold back. Tell us how you really feel!

    Claiming AMD "leads the way" on power consumption and Intel just follows is stupid.

    Have you been in a coma since the "i" series CPUs were released? They beat AMD's CPUs on performance, power usage and performance per watt. I'm surprised you can see to type properly with your head that far up AMD's butt.
  • B3an - Friday, December 28, 2012 - link

    It's Beenthere. He posts nothing but THE most stupid sh*t. He takes stupid to another level.

    I just hope he's mentally retarded in some way because if he's not, and is actually this stupid, then i feel embarrassed for him.
  • CeriseCogburn - Friday, January 25, 2013 - link

    He is just the average AMD fanboy. He is what all of you (you know who you are) have been for years here; he has just kept hanging on these past few months or half a year instead of moving with the masses' politically correct mindset change.

    Thus, we should all treat him as any other recently former, and now scared-to-remain-the-same, AMD fanboy should be treated.

    He was the majority here, he still is the majority here; he's just the last one with the guts to keep the charade going... perhaps forever. A nice landmark outlining the years of abuse the AMD fanboys have delivered. Hopefully he will never change: a solid reminder and future melding point for the large AMD fanboy base that will re-emerge ASAP when some opportunity presents.
  • yyrkoon - Tuesday, December 25, 2012 - link

    Maybe on some level what you're saying is true, but I find that both brands have their uses, depending on what a user wants/needs.

    Mostly the distinction I find is value vs reliability. Intel being slightly more reliable, while AMD offers more value. In this day and age, I think it is a mixed bag on which is more power efficient between the two.

    It is hard to claim that AMD is more power efficient than Intel when AMD is offering 60 W, 90 W, and 120 W desktop TDPs, where Intel is offering 35 W, 45 W, and 90 W variants. In the mobile arena, there is less distinction.
  • skiboysteve - Tuesday, December 25, 2012 - link

    I'm a long-time reader and I'm an engineer at National Instruments. Very cool to see our stuff show up in an article!
  • Tunnah - Tuesday, December 25, 2012 - link

    I don't normally post comments, as I'm not smart enough to join the usual conversations here, but I just had to post this to say this is an absolutely amazing article.

    I'm a high school dropout from the age of 15 with no further education; all my learning has been done of my own accord, so what I know isn't in-depth, it's just the broad strokes.

    I just wanted to let you know that your articles help me so much in my quest to educate myself, they're absolutely AMAZING, so easy to understand, and although I can only grasp the concepts of the things you're talking about, you propose them in a way where it doesn't go over my head.

    I suppose this is just a drunken thank you message from a very grateful reader, who, without you, would be a lot more clueless about the things he really wants to understand :)
  • dc77gti - Tuesday, December 25, 2012 - link

    Looking forward to 22nm Bay Trail-T. Hopefully Intel can get this out before 2014.
    That's half the battle. The other half lies with Google Android. We'll have to wait and see. With Intel's hardware might and Android's open source project, things will get more exciting.
  • kyuu - Wednesday, December 26, 2012 - link

    *sigh* Everyone seems to have forgotten about AMD's Temash, which should be out by mid-2013... hopefully. Hell, I'd even take a Hondo tablet over Clover Trail, if anyone actually made one.
  • thebeastie - Tuesday, December 25, 2012 - link

    I don't know about this... all of a sudden the tables have turned against ARM?
    Mythbusters do a pretty thorough job on testing; I want them to do the same round of testing on these chips and see what THEY come up with.

    Apple's A6 still reams the Atom, so it's not really that great.

    Anyway, ARM's real kryptonite has been price. Intel might sell these chips cheap as a last-ditch stand, one-off or two-off, but they can't do it forever before what matters to them most falls apart... and that's making a lot of money.
  • puppies - Wednesday, December 26, 2012 - link

    I think most of us realise that Intel's biggest problem right now is that in the desktop environment (their main source of income from the public) a 3 or 4 year old chip is more than adequate for 99% of tasks that the average PC user wants to perform. Software just isn't being developed (nor is there much need for it) that requires a quad core 3.5 GHz CPU with turbo and HT when a 2.5 GHz dual core will more than suffice.

    If, however, they can push the performance envelope of these ultra low voltage parts to a point where software starts being developed that can utilise those chips to their potential, then ARM will not be a viable option for anyone who needs that performance.

    Most tablet reviews state something along the lines of "it is ok for a few last minute corrections to a presentation, but you might want something more powerful for when you aren't on the train/plane etc." If Intel changes that to "this ultra portable tablet has enough grunt for all your Word/PowerPoint/Excel creation needs" and ARM can't keep up, then Intel becomes the required CPU for business users. Intel really doesn't care about $200 tablet sales; they are generally bought as presents for kids, and there just isn't the profit available that Intel seems to desire.

    Combine this with the fact that no company is going to want to deal with the headache of trying to sync workloads between x86 office PCs and ARM-based ultra portables, and Intel suddenly has a reason to charge the big bucks again.
  • FunBunny2 - Friday, December 28, 2012 - link

    I expect the reality is: for consumers, the need for anything much more than a 486 is, well, past. Not much computation outside of Excel. Pretty pixels, on the other hand...

    If M$ could write a yet more bloated OS, then the old Wintel symbiotic monopoly might return. Fact is, we're still where Xerox PARC put us 3 decades ago. The hardware isn't much different, save for touch, either. There was a time when PCs shipped with monochrome tubes, by default. And the OS was a command line ark.
  • war59312 - Wednesday, December 26, 2012 - link


    The image linked on page 5 (the very last image) appears to be broken.
  • Veteranv2 - Wednesday, December 26, 2012 - link

    Reading this review makes me realize how websites are abused as marketing gimmicks.

    Keep this in mind:
    - It is all tested on Win8.
    - Win8 is primarily Win7 but with ARM support.
    - Windows has been optimized for x86 in every way; they only recently added ARM support.
    - Who says ARM support in Win8 is any good for ARM performance? It is compatible, but x86 has enjoyed 20+ years of optimization.

    If anandtech would have wanted to do this right they would have used this:
    - A6X or A15 ARM cores

    What has anandtech proven:
    - Win8 is bad for ARM.
    - Tegra 3 on 40nm has worse power consumption than a 32nm part.
    - It is a marketing tool for Intel, which is struggling in the tablet market and needs positive things like this.
    - It cannot objectively make differences clear between architectures, because this review has nothing to do with architectures...

    A sad day for anandtech...
  • thebeastie - Wednesday, December 26, 2012 - link

    I guess I could agree with this. Anandtech is my absolute favorite tech site for the truth, but sometimes he just seems to be a little bit too much of an Intel fan. But alternatively, I do see that he is ready to hand out credit where it is due, and Intel is all too often the company to beat.

    Ultimately I think his headline was a poor choice of words and appeared to be skewed as much towards headline grabbing as towards being an accurately balanced review.

    I guess that will always be part of the game with "the press"
  • CeriseCogburn - Friday, January 25, 2013 - link

    You guys missed the bucket of fudge, and forgot the NVIDIA hate.

    Thus the Tegra 3. You've missed the bias boat, bros.

    Anand got screwed on nVidia gpu's a long while back, and they AND Tom's have never forgotten the slight.

    It was the (made notorious) "rebranded" NVIDIA 9800x, where Anand just reused another of the NVIDIA GPUs they had in house, adjusted the clocks, and claimed they had thus tested the "new" NVIDIA release.

    The deep hatred has been seething here ever since, in every article, pumping up the AMD fanboys, and only recently has that latter group somewhat receded, due to continual epic fails by AMD.
  • powerarmour - Friday, December 28, 2012 - link

    Agreed.
  • dealcorn - Wednesday, December 26, 2012 - link

    I applaud the mention of Intel's internal performance modeling team at the start of the article but where was the picture? This is a disturbing article and a picture helps humanize the story and soften the blow. Large portions of the readership have bought into the ARM mythology regarding efficiency so the factual content of the article is disturbing and will trigger a denial response. However, there are some rather obvious conclusions that should have been stated to assist readers in assessing the mythology. I may overshoot slightly.

    5 years ago the idea that Intel could compete effectively in the non-laptop mobility market was laughable because Intel was "clueless" about all that SOC stuff. Intel's process advantage gave it a big competitive advantage, but its intimate knowledge of how to tweak x86 to achieve varying performance targets was worthless as long as ARM wielded substantial advantages in efficiency and cost. Clover Trail is proof that Intel has learned a lot in the last 5 years. Today, even without exploiting Intel's advantages in process technology and x86 tuning, Clover Trail is roughly comparable to ARM in efficiency. This gain in efficiency is solely the result of Intel being smarter today than it was 5 years ago. Clover Trail is built on a nearly obsolete (by Intel standards) process geometry using a 5 year old core designed during Intel's "clueless" era. This should be profoundly disturbing to ARM supporters, because ARM has had a near mono-maniacal focus on efficiency for 22 years. Constant, incremental improvement is the name of the game, and ARM is a well funded, adequately staffed old hand at this game with some of the finest talent and best IP in the industry. That the newcomer (Chipzilla) can reach rough parity with ARM in the space of 5 years based solely on getting smarter, rather than some fancy process advantage, means that Intel is on a steeper learning curve than the gang over at ARM and the rest of the eco-sludge system. This is scary because Intel is not going to hit its stride until 22 nm, when it gets to combine what it has learned with some of its process advantage and a long overdue redesign of the Atom core (i.e., OoO execution). The full process advantage does not hit until 14 nm, which Intel should achieve about a year after hitting 22 nm. Today all the ARM fabs can talk a good game about reaching process parity with Intel, because talk is cheap.
Let's count how many ARM fabs actually achieve mass SOC production at 14 nm within 4 years of Intel hitting this milestone.

    People like to talk about eco-sludge but it is understood that 14 nm fabs are expensive toys that not everyone can afford. The market is not big enough to buy 14 nm fab’s for every ARM player. There is a mass extinction event coming and the stench of rotting fabs will soon permeate the ecosystem. Basically everyone other than ARM and its fabs will seek other opportunities (i.e. Chipzilla) as soon as the stench gets unbearable. Other than the surviving fabs and ARM, the remaining eco-sludge system should transition to Intel fairly easily.

    Intel's goal in addressing this market is domination, not rough parity. ARM is likely to clearly lose its efficiency advantage at either 22 nm or 14 nm, so its only playable card is that it is cheap, which is a credible strategy; just ask Rory. Intel plays in a different league, so this is a transitional strategy at best. By that time, Rory should be a free agent, so if ARM wants the benefit of his perspective, he may be available.

    It is wired deep in the Intel mojo that each process geometry step should achieve perfect economic scaling, which means that the cost to produce one transistor in the new fab should be half the cost of producing that transistor in the old fab. You never achieve perfect economic scaling, but it is a point of pride at Intel that its newest transistors are always the cheapest to produce, and every time you move production from an old fab to a new, fully utilized fab, your production costs drop. Recall all the jabber when FinFET was introduced about how cool it was and that the incremental cost to achieve FinFET was basically a rounding error. Now contrast that Intel jabber with the jabber coming from assorted ARM fabs that the newest process technologies will be more expensive. Some of that jabber is just warming up for the impending death moans associated with any mass extinction event. However, non-Intel fabs are making a sincere attempt to reach rough parity with Intel's process technology, and it is expensive, which will be reflected in the cost to produce contemporary ARM chips using a contemporary process technology. It will work, but the ARM chips will not be as relatively cheap as they were before, and they will be less power efficient than Atom.

    That is the environment in which Intel plans to play several trump cards which are already known. If Intel is able to incorporate the radio technology it has already demo'ed into Atom at 14 nm, and it does favorably affect BOM and efficiency, is there any doubt that Intel can ride that to a 50% market share or more in segments that demand radio? If Intel is able to incorporate vPro into devices for the corporate market, which already values vPro as a known good technology, is there any doubt that Intel will be able to ride that technology to a 50% market share in the corporate device market? Of course ARM is working on a "me too" technology that is unproven. However, there is a joke waiting to be told, and the punch line is: "Nobody ever got fired for buying vPro." IT managers do not want their firing to be the butt of a vPro joke. Better to buy what is known good, and preserve your ERISA vesting. I did not pay attention to whatever Intel is cooking up with VISA, but somehow, between Intel, McAfee, VISA and the Bunnies, I expect Intel should be able to figure out something to say that will help market share in the consumer segment. Never underestimate the Bunnies, because when they get properly motivated, they are a nearly unstoppable force. Based on what I see in my local market, the best Samsung can come up with to compete is hot, nameless Asian chicks, and while it pains me to say so, in the consumer space they cannot stand up to the Bunnies. ARM itself has no branding in the consumer space.

    Intel's newfound enthusiasm for efficiency and low cost has not yet peaked. ARM peaked some time ago and is struggling to maintain its momentum on a slower learning curve. Now is a time when execution matters. While Intel's execution record is far from flawless, a case can be made that it has the best execution in the industry. Time will tell how it works out, but insofar as ARM's target market ambitions go beyond the uncontested garage-door-opener market, it is going to be an uphill battle every step of the way, even though ARM starts with a dominant market share.
  • ET - Wednesday, December 26, 2012 - link

    I love my Nexus 7, and I think 7" tablets are a great form factor. However, I'd appreciate full Windows compatibility in a tiny tablet like this. It looks like Intel's chip might have what it takes to provide a good solution for this form factor.
  • lchen66666 - Wednesday, December 26, 2012 - link

    Google did a good job with its Nexus 7, securing a large market share for Android to compete with the iPad. At the same time, the ecosystem for ARM tablets became well established: low-cost components are everywhere for the ARM platform, and apps for Android and iOS are everywhere. The game is completely different from the AMD vs. Intel 64-bit CPU battle 10 years back, because those two were pretty much in the same ecosystem.

    In order to gain tablet market share for the Surface (x86 version), the devices have to be priced very competitively, or even lower than corresponding Android devices, because Windows is behind in tablet apps. Many web sites don't even have a corresponding version for Windows. This means a device with that chip has to be around $300 (definitely not $500) with at least full HD (1080p) resolution, not 1366x768-class resolution. This certainly cannot be done without Intel and Microsoft cutting their part prices.
    Intel's latest Atom chip has to be priced lower than $30 each, and the Microsoft Windows 8 tablet license has to be less than $10 each. Otherwise, forget about the consumer market; the mass of consumers won't switch to Windows. However, there is still a market for the Surface Pro (with an Intel Core i5), which many corporate users may buy for the sake of having a single device.

  • rburnham - Wednesday, December 26, 2012 - link

    After spending a few days with a new Acer W510 tablet, I see no reason to own an ARM tablet anymore. Well done Intel.
  • GeorgeH - Wednesday, December 26, 2012 - link

    A very nice article overall, but the average power efficiency numbers aren't a very good measurement to make. Grab a P4 system off the shelf, run a few benchmarks, then turn the PC off. Grab a TI-89, make it do the same tasks, then turn it off. Compare power as in this article and you could end up with a statement like "the P4 only consumed 0.1mW; we can't wait to see it in calculators!"

    I understand why the measurements were made in the way that they were and I don't think the example above really applies here, but it's irksome to see "A is much better than B" statements using a methodology that could easily be used (intentionally or not) to reach ridiculous conclusions.
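    The pitfall GeorgeH describes can be made concrete with a back-of-the-envelope calculation (all figures here are hypothetical, chosen only to illustrate the point): ranking devices by average power over a fixed measurement window, which includes idle time, can come out opposite to ranking them by energy consumed to complete the task.

    ```python
    # Hypothetical numbers illustrating average-power vs. energy-to-completion.
    # "Fast" device: power-hungry but quick; "slow" device: frugal but slow.
    def task_energy(active_power_w, task_seconds):
        """Energy in joules to complete the task."""
        return active_power_w * task_seconds

    def window_avg_power(active_power_w, task_seconds, idle_power_w, window_seconds):
        """Average power over a fixed measurement window that includes idle time."""
        active_j = active_power_w * task_seconds
        idle_j = idle_power_w * (window_seconds - task_seconds)
        return (active_j + idle_j) / window_seconds

    # "P4-like": 60 W for 2 s, then 1 W idle. "Calculator-like": 0.5 W for 600 s.
    fast = dict(active_power_w=60.0, task_seconds=2.0, idle_power_w=1.0)
    slow = dict(active_power_w=0.5, task_seconds=600.0, idle_power_w=0.1)

    window = 600.0  # both measured over the same 10-minute window
    print(task_energy(fast["active_power_w"], fast["task_seconds"]))  # 120 J on the task
    print(task_energy(slow["active_power_w"], slow["task_seconds"]))  # 300 J on the task
    print(window_avg_power(**fast, window_seconds=window))            # ~1.2 W average
    print(window_avg_power(**slow, window_seconds=window))            # 0.5 W average
    # The slow device "wins" on average power yet burns 2.5x the energy on the task.
    ```
    
    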

    However, given that both test systems were supplied by Intel, I do think it could be argued that Intel pulled the wool over your eyes a little bit to yield numbers that put Atom in the best possible light.
  • lancedal - Wednesday, December 26, 2012 - link

    Intel enjoys big margins on its CPUs. I don't know how they will compete in the sub-$30 SoC market. Their biggest goal is probably to prevent an ARM invasion of the server market, but in that market the priorities are probably performance, pricing, then power, in that order.

    As another poster mentioned, ARM is not only competing effectively against Intel on power, performance, and pricing; their business model also lets customers customize the SoC to meet their targets. For example, Apple focuses on performance and power at the cost of silicon area, while Amazon focuses mainly on cost; Samsung is probably somewhere in between. ARM's business model makes that possible because it allows customers to assemble their SoCs the way they want.

    As a consumer, I really like the idea of an x86-compatible device, with the promise of total compatibility between my mobile devices and my computing devices. However, that seems to matter less and less, as even MS is using the cloud to connect computing devices and mobile devices. I was really disappointed to realize, when I bought my first Windows Phone 8 device, that it has absolutely no direct-sync ability with my laptop. Not even Outlook.
  • Exophase - Thursday, December 27, 2012 - link


    It'd be great if you could provide graphs showing what the utilization is for each core, as well as what frequency the cores are being run at. I'm assuming MS has kept this monitoring capability in Windows 8 and Windows RT.

    This way we could see how well Tegra 3's synchronous cores are being managed, including how eager the kernel is to keep cores enabled (at the same frequency/voltage as all the others, only clock gated) and how aggressive it is with frequency, and where, if anywhere, the companion core could have been viably used. It'd also highlight how much the Atom is being scheduled to use HT instead of separate cores and how much of a role Turbo Boost is playing. And it'd clear up the question (or at least my question) of whether or not Clover Trail can clock its two cores asynchronously; I'd assume it can.
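    As a rough illustration of the kind of per-core monitoring being asked for, here is a Linux-side sketch computing per-core utilization from two /proc/stat snapshots (Windows RT would need its own performance counters; on a live system you would read /proc/stat twice with a delay in between, the strings below are synthetic):

    ```python
    # Sketch: per-core utilization from two /proc/stat snapshots (Linux format).
    def parse_proc_stat(text):
        """Return {cpu_name: (busy_jiffies, total_jiffies)} for each per-core line."""
        stats = {}
        for line in text.splitlines():
            fields = line.split()
            # Per-core lines are "cpu0", "cpu1", ...; skip the aggregate "cpu" line.
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                vals = [int(v) for v in fields[1:]]
                idle = vals[3] + (vals[4] if len(vals) > 4 else 0)  # idle + iowait
                stats[fields[0]] = (sum(vals) - idle, sum(vals))
        return stats

    def utilization(before, after):
        """Per-core utilization in [0, 1] between two snapshots."""
        util = {}
        for cpu in before:
            busy = after[cpu][0] - before[cpu][0]
            total = after[cpu][1] - before[cpu][1]
            util[cpu] = busy / total if total else 0.0
        return util

    # Synthetic snapshots: cpu0 is 50% busy over the interval, cpu1 is 10% busy.
    before = parse_proc_stat("cpu0 100 0 100 800 0 0 0 0 0 0\ncpu1 50 0 50 900 0 0 0 0 0 0")
    after = parse_proc_stat("cpu0 150 0 150 900 0 0 0 0 0 0\ncpu1 60 0 60 1080 0 0 0 0 0 0")
    print(utilization(before, after))
    ```
    
    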

    Right now it's popular for people to assume that one Atom core beats two Cortex-A9 cores, because they assume the workloads are heavily threaded. It doesn't help when your articles claim that JS benchmarks like Kraken are well threaded when they're still strictly single threaded.
  • 68k - Saturday, December 29, 2012 - link

    The metrics you are asking for would be interesting to see, but I would say it is far more "popular" for people to assume that x86 does not stand a chance of getting close to ARM in work done per joule than to assume that a single-core Atom is faster than a dual-core A9.

    And looking at the numbers, I would say a single Atom core would beat a dual-core A9 at the clocks in this comparison. HT on Atom yields far more than HT on the "Core" CPUs, which is not too hard to understand since Atom is in-order: while HT gives 20-30% on "Core" CPUs, it usually results in >50% more throughput on Atom.

    On top of that, one can be far more naive when programming non-embarrassingly-parallel things across hyperthreads than across physical CPU cores, because both hyperthreads share the L1 cache, which significantly lowers the cost of locks; between separate cores, a lock causes cache-line ping-pong even when it is non-contended. So acquiring a lock is at least an order of magnitude cheaper between hyperthreads than between two cores that share an L2 but have separate L1 caches.

    But most importantly: how many programs used on a phone or tablet actually benefit from multiple CPU cores? Having the best and most power-efficient single-core performance is far more important than having the best performance across 4 cores.
  • raysmith1971 - Thursday, December 27, 2012 - link

    I shouldn't be bothered by this article, but it is pretty much the same on the three tech sites that I read, namely Tom's Hardware, X-bit labs, and AnandTech. It seems that either Intel is suddenly amazing compared to ARM, or Intel is smooching with the tech sites and they in turn are fawning over Intel. Don't get me wrong, I like Intel and have their chips in my systems rather than AMD, but this article, like those on the other sites, reads more like a product-placement ad than an actual review, and because of this I may read these sites' "reviews" with a little more scepticism from now on.
  • Braumin - Thursday, December 27, 2012 - link

    What's amazing is how people have just been ignoring Intel, holding to the untrue belief that ARM cores are inherently more power efficient than Intel's thanks to some magical pixie dust and RISC vs. CISC.

    Medfield was released almost a year ago, and even then a single-core Atom showed itself to be faster than current-gen ARM processors while offering competitive battery life.

    Why then do you think Intel is "suddenly amazing" compared to ARM?

    The reviews of all of the clover trail tablets have shown that they are a significantly better experience than the ARM ones.
  • Veteranv2 - Friday, December 28, 2012 - link

    You mean on Win8, right?
    You take a bloated, x86-optimized OS with crappy ARM support and you compare it to the best x86 chip maker.
    Great... happy world you live in.
  • Braumin - Friday, December 28, 2012 - link

    Um, smart ass. Medfield was running Android.

    And Windows 8/RT is far from bloated. Good try though.
  • jwcalla - Saturday, December 29, 2012 - link

    Unfortunately we can't draw too many conclusions about Win8 vs. RT since we can't look at the code base. Especially for benchmarks like "cold boot". The manner of bringing up devices is likely rather distinct.

    And, unfortunately, we don't know anything about the compilers used for the software either.

    Now a compiler like GCC can give us some broad ideas... we can test and see that, in general, it compiles code to an x86 target more efficiently than to ARM architectures... and this isn't particularly surprising, since x86 has years upon years of optimizations built into GCC. Not to mention that some software will have better x86 optimizations as well (this is typical; developing for ARM does require different software implementation methods for optimizations).

    But we don't know much of anything about Windows code or compilers since it's kind of closed up. But, in theory, 8 vs. RT shouldn't be line-by-line identical... or that would be a job poorly done.

    We can only draw some non-specific conclusions that the two are in the same field of range, and that's all that matters from a user experience perspective anyway. (Or, at least, this particular x86 implementation vs. that particular ARM implementation are relatively close in terms of performance and power efficiency.)
  • mrtanner70 - Thursday, December 27, 2012 - link

    A little more journalistic pushback would go a long way. In Intel's eyes they have co-opted you, Anand; it's a clear strategy on their part, because they know you are influential.

    A brand-new Atom should not be compared solely with an aging Tegra. I have no problem with the companion-core issue given that RT does not use it, but a Krait RT product, or an iPad (despite the OS issue), would have been a much more apples-to-apples comparison in terms of chip generation. If this article had included one or both of those in addition to the Tegra 3, it would have been vastly more credible.

    You have the power to push back for the best comparison YOU, not Intel, can think of.
  • Braumin - Thursday, December 27, 2012 - link

    Brand new Atom and aging Tegra 3? The Atom CPU is the same CPU with minor tweaks that was first introduced 5 YEARS ago.

    The Tegra 3 is a chip that is barely a year old... and still on sale today.

    Anand will test a Krait tablet when they are available. Right now there are none. That alone makes this a fair comparison.

    And there will be a new Atom next year. When the Krait comes out on a Windows RT tablet, should we then hold off until the new Atom comes out before we do comparisons, or should we compare PRODUCTS YOU CAN BUY RIGHT NOW? I mean both of these devices are new products in the last two months. How is this not relevant?

    I mean seriously. These comments are disgusting. I used to think most AnandTech readers were tech savvy, but that's obviously not the case anymore.
  • jwcalla - Thursday, December 27, 2012 - link

    The problem with most hardware sites, this one not excluded, is that they seem to be unconcerned about the software influences on these comparisons.

    Now I love Anand, but he's just as guilty on this point in many of the reviews he posts. He'll have a SunSpider benchmark of this platform vs. that platform and then draw conclusions about the hardware, which you simply cannot do. Such a benchmark lets you draw conclusions about the entire system only. Now that's definitely important from the perspective of a product review where, ultimately, only the user experience matters. But you can't draw conclusions about A6X vs. Samsung's dual-core or whatever based on a benchmark run on completely different software stacks.

    Likewise, measuring differences in the cold boot process of Surface RT vs. Surface Pro doesn't really tell us... anything... about the hardware at least. Just a cursory understanding of computers explains to us why that is.

    However, the goal of this article is to show that Intel can make a chip that plays in the same ballpark as ARM's Cortex-A9s. I'm not sure that actually establishes anything significant, though, since Intel is clearly the challenger in this market... and showing up isn't enough to get wins, as Anand points out. But it's also what we already knew: when you scale back performance and simplify the instruction set, you get lower power usage.
  • kyuu - Thursday, December 27, 2012 - link

    I think Win8 vs. WinRT is as close as you're going to get software-wise between an ARM system and an x86 system.
  • mrtanner70 - Thursday, December 27, 2012 - link

    While the underlying architecture is not new, Clover Trail absolutely is; it just started showing up in shipping devices. It's a comparison designed to be the most favorable one possible for Intel, and a good journalist would push for the best one, period. You have an odd definition of disgusting.
  • jwcalla - Thursday, December 27, 2012 - link

    To me these results don't even begin to provide a compelling motivation for Apple or Samsung (e.g.) to drop their own designs (and, in the latter case, fabs) for Intel in their products. Why introduce a third party to your supply chain for what amounts to differences that are almost entirely in the noise (and can easily be caused by measurement errors, software differences, compiler advantages, etc.)?

    The only real plus that Intel brings to the table is for the Windows folks who are enamored with the idea of running XP apps on their phones. But there aren't enough of these people to turn the tide in a market where ARM clearly has a stronghold.

    However, Intel should be successful at stemming any losses in the ultrabook or server markets.
  • kyuu - Thursday, December 27, 2012 - link

    There aren't a whole lot of people who care about running x86 apps on their phone, true. However, there are a lot of people who would like to run x86 apps on their tablets.
  • mrdude - Friday, December 28, 2012 - link

    On the same Clover Trail Atom that has trouble keeping a fluid scrolling motion in Metro? The same Atom that's only 32-bit with 2GB RAM on the devices? The same Atom that can't run your legacy productivity software any better than the old netbooks?

    The x86 legacy requirements are going to be significantly higher than the ARM parts due to the software the x86 chips are required to run. If you can't run CAD or Photoshop or Blender, or any other useful productivity application on your tablet/notebook, then you likely don't need x86 in the first place. All of the other applications that don't require that much horsepower already have alternatives, and often better alternatives, in the iOS and Android app stores.

    If I can't run the same games and demanding software on an x86 tablet, then do I really need an x86 tablet?

    That's the dilemma Intel and Microsoft both face. Currently, the sales figures of these x86 tablets are less than 1% of all Win8 devices thus far sold. People aren't going to pay a higher price just because it says Microsoft or Intel on it. Given the robustness of Google Play and iOS App stores and the market penetration of those respective devices, the majority of the same crowd that's buying tablets is likely to be just as familiar with Android and iOS as they are with Windows. In fact, perhaps even more so given the dramatic UI changes to Win8. If people need Office they won't have to wait long because Microsoft announced it was offering an Android and iOS version of Microsoft Office in the Spring of 2013.

    x86 compatibility looks great until you realize just how much you're missing out on with respect to the Android and iOS app stores. And if I'm going to buy a tablet for work then I'm sure as hell going to demand that it actually has enough power to run my software. Otherwise what's the point? A weak x86 tablet that can't run productivity software and games is just a tablet that's missing out on both ends.
  • CeriseCogburn - Friday, January 25, 2013 - link

    Worse than that, who wants to switch to the awful faded out rectal-boxed sick pastels of msft 8 with their crappy harder to identify the "icon box" from heck UI?
  • pugster - Thursday, December 27, 2012 - link

    Intel probably put their best CPU against ARM's worst CPU, in an operating system that might not be optimized for ARM. I personally would like to see Intel start outfitting Android tablets with these CPUs, up against more optimized ARM CPUs like the Cortex-A7/A15.
  • ddriver - Friday, December 28, 2012 - link

    I bet Intel ordered and paid for this article, potentially providing guidelines on how, and against which product, it should be tested.

    It is shameful for Intel to miss out on mobile platforms. Surely, low profit margins are not Intel's territory to begin with, but still, the rising popularity of ARM devices represents a substantial blow to the reputation and monopoly of Intel: the world has seen there is more to the market than Intel (and poor perpetual runner-up AMD). It is a long-term threat, as indirect as it may be.

    The A15 is pretty much here, and a dual-core A15 is about twice as fast as a quad-core A9. The Atom is competitive with current, aging ARM designs, but it will slip back as soon as A15 designs flood the market.

    x86 and CISC in general are something that should have died out a long time ago, but its life was artificially prolonged, because no one in the industry really gives a damn about efficiency. Efficiency is the enemy of profit, and profit is the sole motivation for everything being done nowadays.

    Don't get me wrong, the overheads of CISC are insignificant, and Intel will probably be able to make up for them with its better in-house manufacturing process. And with such a hefty profit margin on their middle to high end, they can afford to give low-power CPUs away for free to big players, just to kill a potential long-term threat to their business. It won't be the first time Intel has paid to cripple its competition.

    x86 is crippled by licensing, while anyone is free to license ARM and do his own spin; big companies like Samsung have the capacity to design and manufacture an ARM-based CPU according to their needs, which is something Intel won't be bothered with. Selling entire early batches exclusively to Apple to put in their useless toys first is an entirely different matter from doing customizations to the actual hardware and production lines just to please a client.

    The sooner the world moves away from x86, the sooner we will see some actual competition, instead of this MONOPOLY in the disguise of a duopoly we have today. I do run an Intel rig, because I need the performance, but I am no fan of Intel, or of the amount of money I had to pay for it because Intel has no competition and does whatever it wants. I'd happily watch the company burn to the ground; the world doesn't need a greedy monopolistic entity. It is that entity that needs the world, to suck the wealth out of it.
  • nofumble62 - Saturday, December 29, 2012 - link

    skip that software incompatibility nonsense.
  • ddriver - Saturday, December 29, 2012 - link

    Due to the horrendous (at least compared to ARM cores) power consumption, this is only possible with a significantly bigger battery, which would make the tablet experience almost the same as if it were a stone tablet.

    Software incompatibility: you can thank Microsoft for this; they have pushed their vendor- and platform-limited development tools for so long. That is why I ditched any MS-related technology. I still use Windows, but I don't code against the MS APIs; instead I use Qt, and the same code runs on Windows, Mac, and Linux, with iOS and Android to be officially supported very soon (right now those are only unofficially supported).

    Big companies like Adobe and Autodesk have already embraced Qt in their workflow, but it will still take some time to shake off the MS legacy crap from their code base.

    Sure, you can go for something lame and inefficient like Java or HTML+JavaScript, but nothing beats the performance and memory efficiency of native machine code.
  • Pazz - Saturday, December 29, 2012 - link

    The fundamental point that should be highlighted throughout this entire analysis is that Clover Trail is significantly newer tech than Tegra 3.

    The Tegra 3 inside the MS Surface was available as early as Q4 2011.

    Microsoft's choice to implement the Tegra 3 SoC was a matter of timing: it was the best available to the Surface team at the time. The extensive development of a new product, particularly one as important as Surface given the current market and the timing with Win8, always tends to result in older tech being included; more time is invested in other, non-SoC-specific areas.
  • theSuede - Sunday, December 30, 2012 - link

    A very well executed run-through, but:
    Wouldn't it be possible to do a floating average over five samples or so? The charts are close to unreadable, and the graphical presentation fools the eye into averaging sample points incorrectly.
    Spikes are only averageable by human vision in point charts; in line charts the eye puts far too much weight on local deviations.
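    The smoothing suggested here is cheap to compute; a sketch of a centered five-sample moving average, assuming the measurements are just a plain list of power samples:

    ```python
    def moving_average(samples, window=5):
        """Centered moving average; edges use however many samples fall in the window."""
        n = len(samples)
        half = window // 2
        out = []
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            out.append(sum(samples[lo:hi]) / (hi - lo))
        return out

    # A spiky trace gets smoothed: the spike's contribution is spread over the
    # window instead of dominating the eye the way it does in a raw line chart.
    trace = [1.0, 1.0, 9.0, 1.0, 1.0, 1.0, 1.0]
    print(moving_average(trace))
    ```
    
    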

    The same goes for most storage/HD performance graphs at AnandTech. Just my 5c on the presentation of statistics...
  • casper.bang - Tuesday, January 01, 2013 - link

    I'm a little confused about what this article is trying to say; it looks to me as if Anand is comparing a next-gen Intel Atom with a current-gen ARM A9, and by doing so arrives at the *shocking* conclusion that the former is more performance/power efficient... and adds a sensational headline to top it off?!

    I honestly expected a bit more; say, a focus on Cortex-A15 vs. Haswell. I'm also surprised at the lack of a discussion of how Intel is lowering wattage, considering they have been infamous in the past decade for solving problems by throwing on clunky, power-hungry caches.

    Clearly Intel has some brilliant engineers and world-class manufacturing, but shouldn't we wait to compare apples to apples before declaring that "x86's high power consumption is a myth"? Come again when routers, picture frames, media centers, tablets, phones etc. based on x86 are mainstream rather than a wet dream in Otellini's head.
  • Kidster3001 - Friday, January 04, 2013 - link

    You mean the 5-year-old Clover Trail core design... designed before Tegra 3? THAT next-gen Atom?
  • vvv - Wednesday, January 02, 2013 - link

    Idle power comparisons on a sample size of one should not count. You would think that binning makes all samples look alike, but you would be surprised.
  • Wolfpup - Thursday, January 03, 2013 - link

    This is just so impressive... since at least the late 90's I've been wanting exactly this: an actual x86 PC in a mobile form factor. It's actually here.

    Heck, this Atom could go in today's cell phones no problem (the bigger, higher-end ones anyway). It's better than the Vita's CPU (although, granted, it wasn't available at the time).

    An actual tablet that can be SMALLER than ARM tablets even, with real Windows and real x86... at the same price. Soooo impressive!

    Granted, the hardware will keep getting better and the software kinks will get worked out, but it's actually happening!
  • shriganesh - Saturday, February 09, 2013 - link

    Very nice comparison! x86 does seem to be very power efficient, except that the Tegra 3's companion core isn't working. Compare an x86 Android tablet/phone with an ARM tablet/phone; that would be more of an apples-to-apples comparison.

    To all the guys rooting for Intel: beware! Intel crushed AMD with the Core CPUs, but once AMD lost the battle, there were no more moderately priced Intel CPUs. Intel is like Apple: they always charge a premium.

    I hope Intel loses to the armada of ARM CPU designers (Samsung, NVIDIA, Qualcomm), just for the sake of cheap CPUs. With the world heading towards the internet of everything, the last thing we need is COSTLY Intel CPUs!
