
  • sciwizam - Tuesday, February 21, 2012 - link

    Any thoughts on how Krait will compare against the A15 chips, and when's the earliest those will be on the market?
  • wapz - Tuesday, February 21, 2012 - link

    Look in the architecture article on page 1 here:

    "ARM hasn't published DMIPS/MHz numbers for the Cortex A15, although rumors place its performance around 3.5 DMIPS/MHz."

    Krait has 3.3 DMIPS/MHz, so if a dual Cortex A15 would run at the same frequency they would be fairly comparable I would imagine (obviously ignoring all other elements that could help performance on either of them).
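As a rough sanity check, the quoted throughput figures can be multiplied out. A quick sketch, treating the A15's 3.5 DMIPS/MHz as the rumor it is:

```python
# Back-of-the-envelope comparison using the figures quoted above.
# The A15 number is only a rumor, so this is a rough estimate, not a measurement.
def dmips(dmips_per_mhz, mhz):
    return dmips_per_mhz * mhz

krait = dmips(3.3, 1500)  # Krait at 1.5GHz
a15 = dmips(3.5, 1500)    # rumored Cortex A15 figure at the same clock
print(f"Krait ~{krait:.0f} DMIPS, A15 ~{a15:.0f} DMIPS, "
      f"~{(a15 / krait - 1) * 100:.0f}% apart")
```

At equal clocks the rumored gap works out to only about 6%, which matches the "fairly comparable" expectation above.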
  • wapz - Tuesday, February 21, 2012 - link

    And if that's the case, HTC will have an interesting problem with their new lineup. If the rumours are correct, it would mean their new flagship One X, using the Tegra 3 AP33 chipset at 1.5GHz with a 4.7-inch 720p screen, might be slower than the One S, sporting the Snapdragon S4 chipset and a 4.3-inch qHD screen.
  • Lucian Armasu - Tuesday, February 21, 2012 - link

    In that case, even if the GPUs were equal in performance, the One S would be faster because it renders at a lower resolution.
  • zorxd - Tuesday, February 21, 2012 - link

    If by "faster", you mean more FPS in a 3D game, then yes.
    The resulting image quality would be lower however.
  • metafor - Tuesday, February 21, 2012 - link

    Yes. FPS is only one factor in the overall equation of user experience. Higher resolution rendering is definitely preferable assuming one could maintain ~60fps.
  • zorxd - Tuesday, February 21, 2012 - link

    I'd rather have 34 fps at 1280x720 than 60 fps at 960x540.
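For what it's worth, those two preferences push almost the same number of pixels per second. A quick sketch, ignoring per-frame fixed costs:

```python
# Pixel throughput of the two options above; per-frame fixed costs are ignored,
# so this is only a rough equivalence, not a rendering-cost model.
def pixels_per_second(fps, width, height):
    return fps * width * height

hd = pixels_per_second(34, 1280, 720)   # 31,334,400 px/s
qhd = pixels_per_second(60, 960, 540)   # 31,104,000 px/s
print(hd, qhd)
```

The GPU pushes nearly the same pixel rate either way; the trade-off is fluidity versus image detail.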
  • sosrandom - Tuesday, February 21, 2012 - link

    Or better graphics at 30fps @ 960x540 than at 30fps @ 1280x720.
  • trob6969 - Wednesday, February 22, 2012 - link

    I agree, I would take quality over speed any day, as long as the difference in speed is measured in mere seconds.
  • vol7ron - Thursday, March 01, 2012 - link

    I can't wait to see the power savings, especially since the modem is a huge power draw. One of the benefits is that Qualcomm is the manufacturer, which means an integrated chip and less power consumption (as well as a thinner device).
  • nbenedetto1998 - Tuesday, January 01, 2013 - link

    Krait is A15-class, just built on the A9-era process, which is 32nm I believe. The real A15 will be built on a 28nm process.
  • sjael - Tuesday, February 21, 2012 - link

    Would be great to put some Transformer Prime numbers next to it. You know, just for kicks...
  • infra_red_dude - Tuesday, February 21, 2012 - link

    Some numbers where it beats Tegra 3 almost everywhere:
  • Loki726 - Tuesday, February 21, 2012 - link

    I'd like to see a clock-for-clock comparison. Tegra 3's A9 is at 1.3GHz and Krait is at 1.5GHz here. I'd be interested to see what happens when the A9 gets scaled down to 28nm, and whether or not Krait will still have an advantage.
  • Wishmaster89 - Wednesday, February 22, 2012 - link

    And what would you need a 1.5GHz A9 for at 28nm? The A15 is much better than any A9, and will come on lower process nodes.
    Besides, you can see that the 1.2GHz Exynos is much slower than the 1.5GHz Krait, so I don't think a higher clock would change that.
  • LordConrad - Tuesday, February 21, 2012 - link

    I want one of these in my next phone. My HTC Thunderbolt works great, but it's getting a bit dated.
  • TedKord - Tuesday, February 21, 2012 - link

    Being an AnandTech reader, you've gotta be rooted with a custom ROM slapped on. What do you have flashed on yours?
  • LordConrad - Wednesday, February 22, 2012 - link

    Right now I'm using the BAMF Forever ROM; nice features and very stable. I'll stick with it until I find something better (or get a new phone).
  • ShieTar - Tuesday, February 21, 2012 - link

    "At a lower resolution Apple's A5 is unable to outperform the Adreno 225 here."

    Actually, 960x640 and 1024x600 both have exactly the same number of total pixels.

    Not that it changes the conclusion.
  • lowlymarine - Tuesday, February 21, 2012 - link

    "At a lower resolution Apple's A5 is unable to outperform the Adreno 225 here."

    Perhaps I'm misreading this line, but aren't 1024x600 and 960x640 actually exactly the same resolution in different aspect ratios?
  • ssj4Gogeta - Tuesday, February 21, 2012 - link

    I think he meant lower compared to the 720p GLBenchmark where the A5 wins.
  • zanon - Tuesday, February 21, 2012 - link

    I agree the wording is a bit awkward there, since both are driving identical numbers of pixels. If he meant to compare it to the earlier 720p results, it'd probably be better to make that explicit.
  • jjj - Tuesday, February 21, 2012 - link

    Looks like it's faster than Tegra 3, and with single-threaded perf certainly much better; the only remaining big question is power consumption.
  • Malih - Tuesday, February 21, 2012 - link

    I've been using my old Android device that came with Android 1.6 and has been CyanogenMod-ded to Gingerbread (it's not so responsive when running more than one app), because I need the new version of the Gmail app.
  • Malih - Tuesday, February 21, 2012 - link

    correction: I've been *using* my old...

    In short: it looks like I'll be waiting in line for a smartphone with this SoC
  • Zingam - Tuesday, February 21, 2012 - link

    I haven't been impressed by a CPU/GPU for years, but this thing looks amazing! If they keep going like this, we'll soon have a true ARM desktop experience.

    Great job! Now I just wish they'd support the latest DirectX/OpenGL/OpenCL/OpenVG etc. and we'll have it!!! It's hard to imagine what ARM-based SoCs will deliver when the time for 14nm comes.
  • Torrijos - Tuesday, February 21, 2012 - link

    Since both devices actually render the same number of pixels but with different aspect ratios, could the performance hit seen for the iPhone 4S be the result of graphics being rendered in a standard aspect ratio (16:9 or something else) and then transformed to fit the particular screen?
  • cosminmcm - Tuesday, February 21, 2012 - link

    Maybe it's because at the lower resolution the faster CPU on the Krait (newer architecture with higher clocks) matters more than the faster GPU in the A5. When the resolution grows, the difference between the GPUs becomes more apparent.
  • LetsGo - Tuesday, February 21, 2012 - link

    What difference?
  • metafor - Tuesday, February 21, 2012 - link

    Considering Apple controls the entire software stack and the A5 silicon, it'd be pretty stupid of them to do that. And if you look at how performance scales between the iPad (4:3) and iPhone (16:9), there's no slowdown due to aspect ratio.
  • k1ng617 - Tuesday, February 21, 2012 - link

    Honestly, I don't trust Linpack; it's probably one of the most outdated Android benchmarks and doesn't represent what a person will see in real-world use.

    Can you try out Antutu & CFBench and post the scores please?
  • juicytuna - Tuesday, February 21, 2012 - link

    Indeed. Linpack is a test of software as much as hardware; who knows what kind of optimizations they could have done to the VM to get these headline-grabbing scores.

    The GPU is distinctly meh for a 2012 SoC, and single-threaded performance doesn't seem that impressive to me. SunSpider and BrowserMark seem to be on a par with what you'd expect from an A9 at 1.5GHz.

    And how much of that 'faster feel' can be attributed to NAND performance?
  • metafor - Wednesday, February 22, 2012 - link

    There are some hiccups in Android that have to do with the UI thread hitting storage, but for the most part it's a CPU thing. The thing to keep in mind is that UI fluidity is an entirely different type of code than Javascript parsing. And looking at the Basemark results, Krait is quite capable in that department.
  • arm.svenska - Tuesday, February 21, 2012 - link

    Why is the phone so long? I get that it's a reference design, but could someone tell me why it is like that?
  • douglaswilliams - Tuesday, February 21, 2012 - link

    I don't know for sure, not a definitive answer here, just adding to the discussion.

    Like you said, it's a reference design (Mobile Development Platform). They put as little time as possible into making this pretty.

    When I was in college we had some old development platforms for some Motorola chips that were essentially a large circuit card with ports on all the sides for all the I/O and buttons to push for different operating modes like programming mode. It in no way looked like what an actual product would look like - because that wasn't its purpose.
  • peevee - Tuesday, February 21, 2012 - link

    The out-of-order Krait core at 1.5GHz consumes only 750mW. An Atom core at the same frequency consumes as much as 10x that, while being in-order and no faster, if not slower! What a fail for Intel!
  • Khato - Tuesday, February 21, 2012 - link

    You might consider reading Anandtech's article covering the Intel Atom Z2460 launch in January -

    Granted, we're only given SunSpider and BrowserMark benchmarks for the Atom Z2460 reference platform, but they're both actually ahead of the numbers for the Krait MDP - 1331.5 versus 1532 on SunSpider and 116425 vs 110345 on BrowserMark. While I expected Atom to be competitive, I'd thought it likely for Krait to be slightly ahead on the single-threaded benchmarks, so I'm somewhat surprised that it's not. (I'm also surprised there was no mention of how Krait compares to the Atom Z2460 in the article.)

    As for power, that same article states that the Atom Z2460 SoC consumes ~750 mW at 1.6GHz - that's for the entire SoC, not just the CPU core. It'll be quite interesting to see how actual battery life compares between products once released.
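Since SunSpider is lower-is-better and BrowserMark is higher-is-better, the two deltas read in opposite directions. A small sketch with the scores quoted above:

```python
# Relative gaps between the quoted Atom Z2460 and Krait MDP scores.
# Note: SunSpider measures time (lower is better); BrowserMark is a score
# (higher is better), so the two percentages are read differently.
def pct_gap(a, b):
    return (b - a) / a * 100

sunspider = pct_gap(1331.5, 1532)       # Krait's time is ~15% longer
browsermark = pct_gap(110345, 116425)   # Atom's score is ~5.5% higher
print(f"SunSpider: Krait ~{sunspider:.0f}% slower; "
      f"BrowserMark: Atom ~{browsermark:.1f}% ahead")
```

Either way the quoted gaps are modest, which is why real-device results matter so much here.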
  • metafor - Tuesday, February 21, 2012 - link

    The difference is, one is Intel's numbers and the other is a 3rd party reviewer's on an actual device.

    So yes, I agree. We'll have to see what actual phones using Atom will be like. Note that Sunspider isn't the end-all of "single-threaded performance" either. The JIT for Javascript on x86 is far more mature -- having been developed for a decade now -- than it is for ARM.
  • Khato - Tuesday, February 21, 2012 - link

    Well, I tend to trust Intel's numbers when they're actual hard numbers rather than percentages or normalized figures - they can't exactly get away with making up figures.

    And no question about the fact that SunSpider/Browsermark aren't indicative of all too much... but I wouldn't claim that Intel's advantages on those benchmarks are due to a superior JIT/software advantage. Remember the performance figures from that Oak Trail Tablet prototype running an early Android port from June of 2011? That was a prime example of the sort of software disadvantage that Intel had to overcome in order to get Android running well on x86. While a bit dated, here's an excellent example of the performance differences on x86 java implementations between OS (note that linux had a slightly newer version, but they were both using the latest available) -
  • metafor - Tuesday, February 21, 2012 - link

    No, but you'd be surprised how much a bit of pick-and-choose can help. Most comprehensive reviews are pretty rigorous with how many times they repeat a test, how much warm-up they give a device and whether or not they pick the median, average, etc.

    One could easily pick the best number, which can vary quite a bit especially for a JIT benchmark.

    I've also seen that comparison before. There was a rather thorough discussion of it and its relative lack of merits at RWT. I'd link, but it's being marked as spam :/
  • Exophase - Wednesday, February 22, 2012 - link

    That 750mW is not for the entire SoC; it's for the CPU core with Hyper-Threading disabled, plus L2 cache. Intel said enabling Hyper-Threading adds some further draw, something between 10-20% as far as I can remember.
  • hova2012 - Tuesday, February 21, 2012 - link

    I'm pretty sure I'm not the only one disappointed that the Krait dual-core wasn't compared to Intel's upcoming Atom SoCs. Isn't this the generation that was supposed to bring ARM up to parity with Intel's chips? All in all this was a great article; it sets the standard for tech reviews.
  • Exophase - Wednesday, February 22, 2012 - link

    Hard to give a good comparison when you don't have direct access to Intel's reference hardware, or, if you do, you aren't at liberty to publish benchmark results.
  • Lucian Armasu - Tuesday, February 21, 2012 - link

    Why are the MDP versions of Qualcomm's chips significantly higher performance than the ones in the market, like the one in the HTC Rezound? Doesn't that seem strange to you?

    Could it be that Qualcomm's MDP chips are meant to run significantly faster to show good benchmarks, but then they weaken them in shipping products? Or is there something I'm missing?
  • Death666Angel - Tuesday, February 21, 2012 - link

    How would they weaken them in shipping products?
    Do you have more comparisons between the MDP and shipping phones apart from the Rezound?
    I would look to different governor settings, different software builds, and other configuration differences as an explanation before jumping into conspiracy territory. :D
  • Lucian Armasu - Tuesday, February 21, 2012 - link

    I was just looking at the score charts Anandtech provided. Look at these 2 for example:

    Why does the MDP MSM8660 have significantly higher (double) performance than the presumably "same" MSM8660 chip in HTC Rezound? Isn't the MDP MSM8660 supposed to showcase the same MSM8660 that will go into the market?

    I'd love for Brian or Anand to prove me wrong here with some kind of technical explanation, but until then I'll just assume it's Qualcomm being sneaky and trying to manipulate the public's opinion about their chips.
  • ndk - Tuesday, February 21, 2012 - link

    The article clearly mentions that the MDP 8660 was running with its governor set to "performance". I can't imagine HTC or any other company shipping their products with this setting on.
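On Linux-based systems like Android, the governor lives in the standard cpufreq sysfs interface. A minimal sketch of how one might inspect it (root is needed to change the setting on a device):

```python
# Read the current cpufreq governor the way one would on a rooted Android
# device; falls back gracefully where the sysfs node doesn't exist.
from pathlib import Path

GOVERNOR_NODE = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")

def current_governor():
    # "ondemand"/"interactive" scale the clock with load;
    # "performance" (the MDP setting) pins the core at its maximum frequency.
    if GOVERNOR_NODE.exists():
        return GOVERNOR_NODE.read_text().strip()
    return "(cpufreq sysfs unavailable)"

print("current governor:", current_governor())
# Setting it on a rooted device (as root):
#   echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```

This is why MDP-vs-retail comparisons can diverge so much: the same silicon under a conservative governor will clock down between bursts of work.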
  • Brian Klug - Tuesday, February 21, 2012 - link

    NDK is correct about governor being the reason, and I found that result interesting as well.

    It's clear to me at least that the governor settings on the Rezound are fairly conservative, and that even with a workload that's supposed to completely load both cores (mashing the multi-threaded test button in Linpack Pro), you never really get better than single-threaded performance.

  • infra_red_dude - Tuesday, February 21, 2012 - link

    All the overlay customization by the OEM and the power management code in the device kernel do have a detrimental effect on the device's performance.
  • phoenix_rizzen - Tuesday, February 21, 2012 - link

    Not to mention versions of Android. The MDP is running 4.0.3. What's the Rezound running?
  • metafor - Tuesday, February 21, 2012 - link

    It's Sense. If you look at some of the phones with a less bloated version of Android (like the Xiaomi phone used in the Vellamo benchmark article that runs the same processor as the Rezound but with MIUI), they score pretty close to the MDP scores.
  • Wishmaster89 - Tuesday, February 21, 2012 - link

    That is why I will never buy a device that bears the mark 'with HTC Sense'.
    They went one step too far with their customisations when they impacted the performance of the device.
  • monoik - Tuesday, February 21, 2012 - link

    Did you run your tests on a Gingerbread SGS2? I'm getting very different results on the Ice Cream Sandwich CyanogenMod alpha:

    For worse:
    Linpack single 47.257 MFLOPS
    Linpack multi 71.987 MFLOPS

    For better:
    BrowserMark 105937
    Vellamo 1596
    SunSpider-0.9.1 1762.2ms
    Non-cached: about 4.5 seconds from touching "Go" to the progress bar disappearing.
    Cached: less than 2 seconds.

    Stock browser. No overclocking. GT-I9100 Exynos version.
  • Kaboose - Tuesday, February 21, 2012 - link

    I am going to assume the latest OFFICIAL OS released by Samsung. AnandTech is not in the business of benchmarking every different ROM or OS on every phone. You would most probably get different results running the ICS CyanogenMod build. As far as I know, ICS is only official on the Nexus.
  • monoik - Tuesday, February 21, 2012 - link

    I assume you're right, so we're comparing apples and oranges here. No real value there, don't you think?
  • rahvin - Tuesday, February 21, 2012 - link

    CyanogenMod is typically crippled by the fact that they are restricted to the open-source components. Especially in early releases they don't have access to many of the customizations and binary code in release builds, let alone pre-release ones.

    It's my experience that CyanogenMod doesn't even come close to release performance or power use until about a year later. This is because it takes the manufacturers about 6 months to post their kernel source, then another 6 months to port and modify it for the CyanogenMod system.

    So comparing a CyanogenMod alpha to a developer preview isn't even relevant, as was said.
  • tomhoward - Tuesday, February 21, 2012 - link

    The original SGS2 results were incorrect in the S2 vs 4S post a while back. There was a pretty big flame war in the comments from people with stock phones getting around 2000ms in SunSpider, but AnandTech just ignored them.
  • B3an - Tuesday, February 21, 2012 - link

    @Brian or Anand...

    Do you think that Win 8 tablets using ARM SoCs will likely have a SoC based on many of the components inside Krait? I know there will have to be certain changes for WOA, but will the CPU and possibly the GPU (now that it supports Direct3D 9_3) be used for these tablets?

    And the same goes for ARM's A15; will WOA likely be running on SoCs based on that too?
  • kyuu - Tuesday, February 21, 2012 - link

    If I can get this SoC in a Lumia-ish Windows 8 phone with a decent screen and removable micro-SD storage, whoever makes that phone will have my money.
  • kyuu - Tuesday, February 21, 2012 - link

    I mean Windows *Phone* 8 phone, of course.
  • BaronMatrix - Tuesday, February 21, 2012 - link

    We can look at the perf of Cedar Trail or Ivy Blossom or whatever, since Intel has said they are competing more with Qualcomm. And this is only at 1.5GHz. When the 2.5GHz chips come out with the new Adreno (the former ATI GPU), everyone will have to pack up and go home.
  • iwod - Tuesday, February 21, 2012 - link

    The rumors of an Apple A5X lead some to suggest the next iPad would not have a 28nm SoC. So this proves we may still have a chance of a 28nm SoC coming in the next iPad.

    Krait is bringing A15-level performance while being on an A9-class core?? Sorry, I must be missing something. Or since Krait is designed by Qualcomm, does the A9 and A15 naming not matter? (o.O)

    No mention of a comparison to Intel's newest Atom?
  • infra_red_dude - Tuesday, February 21, 2012 - link

    Correct, Krait cannot be directly compared to the A9 or A15 architecture. I think calling Krait a contemporary of the A15 is more accurate than calling it an "A15/A9-class" CPU.
  • snoozemode - Tuesday, February 21, 2012 - link

    It's really about time you could plug your mobile into a computer screen and run the tablet UI, preferably at native resolution. I don't know what I would need this processing power for otherwise.
  • tipoo - Tuesday, February 21, 2012 - link

    This is very obviously faster than something like Tegra 3 in single- or dual-threaded performance. I wonder how many apps take advantage of more than two threads on Android or iOS? I'm guessing for the foreseeable future faster duals will win out.
  • remixfa - Tuesday, February 21, 2012 - link

    Can Brian Klug & Anand Lal Shimpi please clarify which version of the SGS2 is being used? It's a very pertinent question. Is it the i9100 with the 1.2GHz Exynos chip, or the American Hercules T989/Skyrocket variants that have the lesser 1.5GHz Snapdragon chips in them?

    Judging from the benchmarks, it really makes me think it's the Hercules/Skyrocket. That really needs to be clarified, since unfortunately not all SGS2s are created equal.
  • Brian Klug - Tuesday, February 21, 2012 - link

    The SGS2 used in the article is the UK SGS2 with Exynos 4210 inside.

  • larry6hi5 - Tuesday, February 21, 2012 - link

    On page 1 of the article, the first table gives the MSM8660 as running at 1.5 GHz. Shouldn't this be 1.0 GHz?
  • Brian Klug - Tuesday, February 21, 2012 - link

    That's because the MSM8660 is indeed at 1.5 GHz :)

    If you go back to our original MDP article we note it there:

    And also the official MDP MSM8660 page:

  • ncb1010 - Tuesday, February 21, 2012 - link

    "Even at its lower native resolution, Apple's iPhone 4S is unable to outperform the MSM8960 based MDP here"

    1024 x 600 = 614,400 pixels
    960 x 640 = 614,400 pixels

    There is no basis for saying the iPhone 4S has a lower resolution than this MDP being evaluated.
  • bhspencer - Tuesday, February 21, 2012 - link

    Does anyone know if Linpack is using hardware or software floating point for the MFLOPS number?
  • metafor - Wednesday, February 22, 2012 - link

    Hardware, but it's run through the JIT instead of as native code. According to CF-Bench, Java FP performance is around 1/3 of native. Neither actually uses NEON; both use the older VFP instructions.
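A minimal sketch of the kind of multiply-accumulate loop a Linpack-style benchmark times. Run through an interpreter or JIT, the reported MFLOPS reflect the runtime as much as the FPU, which is the caveat discussed above:

```python
import time

# Times a simple multiply-add loop and reports MFLOPS, Linpack-style.
# In CPython the interpreter overhead dominates, just as Dalvik's JIT
# overhead colors the Android Linpack numbers.
def mflops(iterations=200_000):
    x, acc = 1.000001, 0.0
    start = time.perf_counter()
    for _ in range(iterations):
        acc += x * x  # one multiply + one add = 2 floating-point ops
    elapsed = time.perf_counter() - start
    return (2 * iterations) / elapsed / 1e6

print(f"~{mflops():.1f} MFLOPS")
```

The same loop compiled natively with vectorized math would score far higher on identical hardware, which is why Linpack numbers say as much about the software stack as the chip.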
  • vision33r - Tuesday, February 21, 2012 - link

    Tegra 3 is actually a big disappointment from a performance standpoint. It has 5 CPU cores, yet the GPU performance isn't much better than Tegra 2's. The Adreno 225 is a much bigger upgrade, but I'm afraid it's still another marginal one.

    The A5 in the iPad 2/iPhone 4S will be over a year old by March. In that time, Nvidia's Tegra 2/3 has not dominated, and the MSM8960 is finally a true contender for the fastest SoC on the market. By the time this thing is out in volume, Apple will have the A6 ready, most likely with another 4-8x performance increase over the A5.

    This SoC will probably be forgotten when the A6 is out.
  • LetsGo - Wednesday, February 22, 2012 - link

    Yeah, you're right; just look at my Asus Transformer Prime running GTA 3. /S

    A lot of graphical optimisations can be done on the CPU cores before data is offloaded to the GPU.

    The moral of the story is that Benchmarks are only a rough guide at best.
  • tipoo - Wednesday, February 22, 2012 - link

    Unless the rumors are true and it's the A5X, not the A6, with just faster dual cores rather than quads on a newer architecture. I would not be surprised; it's like how the 3G-to-3GS was an architecture change, then the 4 was just a faster chip on a similar architecture. The iPad 2 was an architecture change; the 3 might just be a faster version of the same thing, hopefully with improvements in the GPU. I'd be fine with that, as long as the GPU kept up with the new resolution.
  • Stormkroe - Tuesday, February 21, 2012 - link

    I was just plotting out what little resolution-scaling info there is here and noticed something very odd. Both the iPhone 4S and Galaxy S2 actually score MUCH higher when the resolution is raised to 720p offscreen. I can see that in the 4S's case it could be explained by fps caps, but the S2 is definitely not hitting a cap at 34.6 fps @ 800x480, yet it hits 42.5 fps @ 1280x720. All other phones predictably step down in speed. Anyone else notice this?
  • Alexstarfire - Tuesday, February 21, 2012 - link

    Yes I did. It was actually the reason I was going to post. I was curious to know if the iPhone had VSync or not because it made no sense that it would get better performance at a higher resolution. Neither of the results make any sense to me since they are counter-intuitive.

    If the "offscreen" tests force VSync off then that could explain it for the iPhone but not really for the SGSII unless some parts of the test go way past the 60FPS cap with VSync turned on.
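One rough way to flag the anomaly being discussed: if a measured score sits at or just under the panel's refresh rate, it is probably vsync-limited rather than GPU-bound. A sketch (the 5% tolerance is an arbitrary assumption):

```python
# Heuristic: scores at or just below the refresh rate are likely vsync-capped,
# hiding the GPU's real headroom. The tolerance value is an assumption.
def vsync_limited(fps, refresh=60.0, tolerance=0.05):
    return fps >= refresh * (1 - tolerance)

print(vsync_limited(59.8))  # near the 60Hz cap: likely limited
print(vsync_limited(34.6))  # well below: plausibly genuinely GPU-bound
```

It wouldn't explain the S2's 34.6 → 42.5 fps jump, though, since neither figure is anywhere near a 60Hz cap.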
  • alter.eg00 - Wednesday, February 22, 2012 - link

    Shut up & take my money
  • Denithor - Wednesday, February 22, 2012 - link


    I'm still carrying a first generation HTC Incredible (yep, one of the original ones!), been out of contract for a few months, was waiting to hear more about the 28nm SoC update. These look really, really good, seriously looking forward to them hitting the market now!
  • tipoo - Wednesday, February 22, 2012 - link

    I wonder how many apps scale beyond two cores. For the time being, I doubt it's many, and since you're still not doing any true multitasking, I think a faster dual-core like this will trump a slower quad like Tegra 3 most of the time.
  • Denithor - Wednesday, February 22, 2012 - link

    Probably very much like desktop performance here: going from 1 to 2 cores is a huge upgrade, even for single-threaded apps, because it off-loads background chores to the second core and you get a full core dedicated to your running app. Going from there to 3/4/6/8 cores is really only helpful if your apps are truly multi-threaded or you're heavily multitasking.

    Now, the increase in IPC is definitely going to help everything go faster.
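The diminishing returns described above are essentially Amdahl's law. A quick sketch with an illustrative 50% parallel fraction:

```python
# Amdahl's law: speedup from extra cores is capped by the serial fraction
# of the workload. The 0.5 parallel fraction here is purely illustrative.
def speedup(cores, parallel_fraction):
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

for cores in (1, 2, 4, 8):
    print(cores, round(speedup(cores, 0.5), 2))
```

With half the work serial, doubling from 2 to 4 cores buys far less than the first doubling did, which is the point about 3/4/6/8 cores only paying off for truly multi-threaded workloads.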
  • vision33r - Wednesday, February 22, 2012 - link

    Then what's the point of Nvidia doing quad core, and even planning for more than 4 cores?
  • Wishmaster89 - Wednesday, February 22, 2012 - link

    IMHO it's only pure marketing. Eventually we'll get to a point where we have more cores in our phones than in most desktop workstations, only because it'll sell better. Sad but true.
  • Smart hero - Friday, February 24, 2012 - link

    I think optimized performance for a smartphone wins over an outright high-end one (I believe pads need the high-end performance).

    1. Fewer games can be played on a small screen; a phone is not a cloud computer.
    3. Traditionally, we still use 32-bit Windows more than 64-bit... are we changing that along with the CPU?
  • R1V4L - Tuesday, April 03, 2012 - link

    How can you compare these video cards when the hardware is running different versions of software?
    Let me tell you that I've tested my Samsung Galaxy S2 running official Android 4.0.3 with VSync on, and on the Egypt test I got 60fps!!! Evidently this result is influenced by the frame limit imposed by Samsung's drivers.
    So these benchmarks you did are not showing us the truth. Sorry to raise this problem, but I dare you to do these tests again :)))
  • R1V4L - Tuesday, April 03, 2012 - link

    Galaxy S2, official Android 4.0.3: 45.67 FPS
    Funny, huh?!
    Redo the tests, please; otherwise we will think this is a crappy commercial presentation for new products, not a professional benchmark.
