49 Comments

  • wimbet - Thursday, February 23, 2012 - link

    Thanks for posting comparisons with Tegra 3. It will be really interesting to see how the OMAP4470 and Exynos 4412 match up. I have a feeling we will see a lot more of OMAP5 and Exynos 5250 at MWC as well. Reply
  • AmdInside - Thursday, February 23, 2012 - link

    When are Krait 4 phones due? Still a while before my plan expires but just curious. Reply
  • infra_red_dude - Thursday, February 23, 2012 - link

    Apparently MWC will see the launch of MSM8960 consumer devices. Reply
  • douglaswilliams - Thursday, February 23, 2012 - link

    "I do hope the device vendors do these SoCs justice."

    "Will Moore's Law, and the 28nm LP process in particular, be enough to offset the power consumption of a higher performance Krait core under full load? Depending on how conservative device makers choose to build their power profiles we may get varying answers to this question."

    Anand, perhaps some justice could lie in allowing user selectable power profiles, as on laptops. Let the user jump to a performance profile while playing a game or plugged in. Is that a possibility? Or will they just attempt to do that automatically in their stock power profile?
    Reply
  • Wishmaster89 - Thursday, February 23, 2012 - link

    Asus already did something like that with the Transformer Prime, so there's a possibility they could do the same thing with a Krait-powered PadFone. And don't forget that up till now Asus has been quite good when it comes to optimized software. So I have high hopes for a Krait-based Asus PadFone with an LTE radio.
    Perfect when connected to the tablet docking station.
    Reply
  • Pipperox - Thursday, February 23, 2012 - link

    It already happens, sort of.
    Any CPU governor will lower the CPU clock for light workloads, and max it out for games.
    It's for the intermediate situations where you can see a big difference.

    Anyway, this will be used on Android phones.
    Hence, it'll be rooted in the blink of an eye, and custom kernels will offer users multiple governor choices, such as battery- or performance-optimized profiles.
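    [Ed.: for illustration, the governor behavior described above can be sketched as a toy model. The frequency table and thresholds here are illustrative only, not real MSM8960 or kernel values.]

```javascript
// Toy sketch of an "ondemand"-style governor: jump to the top clock
// when load crosses an up-threshold, step down one notch when idle.
const FREQS = [384, 702, 1026, 1512]; // MHz steps (illustrative values)

function nextFreq(currentMHz, loadPct) {
  if (loadPct > 80) return FREQS[FREQS.length - 1]; // burst to max (games)
  if (loadPct < 20) {                               // step down when idle
    const i = FREQS.indexOf(currentMHz);
    return FREQS[Math.max(0, i - 1)];
  }
  return currentMHz;                                // hold in between
}

console.log(nextFreq(384, 95));  // → 1512: a game pegs the CPU
console.log(nextFreq(1512, 5));  // → 1026: light load steps back down
```

    A battery-optimized custom profile would simply cap the top entry of the table or raise the up-threshold; a performance profile does the opposite.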
    Reply
  • ratn9ne - Thursday, February 23, 2012 - link

    AT&T doesn't even sell the Galaxy Nexus yet... so I expect this to be out sometime in 2015. Reply
  • Loki726 - Thursday, February 23, 2012 - link

    Anand,

    Can you comment on the Tegra 3 perf difference on sunspider in this review compared to your previous transformer prime review? This figure shows a score of 2300, and the previous figure in the transformer prime review shows a score of 1600. That's a pretty significant difference. Is there some change in the configuration that can explain this?

    I saw that you mentioned a regression going from Honeycomb to Ice Cream Sandwich, but then you say that you included the faster Honeycomb results.

    Thanks
    Reply
  • rahvin - Friday, February 24, 2012 - link

    Without looking at the previous review, this review was clear that the Transformer had recently been upgraded to ICS. For those of you who haven't used ICS yet, it's significantly faster than Gingerbread on the same hardware. Reply
  • Gideon - Thursday, February 23, 2012 - link

    "Oh the things I would do for an Unreal Engine 3 benchmark on Android..."

    Second that!

    I don't think I have seen a single review/preview of a phone on Anandtech for the last year that doesn't include the same message. Hopefully the devs will finally notice.
    Reply
  • Alaa - Thursday, February 23, 2012 - link

    How about running more than one of these processes at once? Wouldn't 4 cores get more benefit? Reply
  • MonkeyPaw - Thursday, February 23, 2012 - link

    The thing is, Tegra 3 is still on a single channel of memory. This was one of the design decisions that made me really question how valuable going to 4+1 cores was. I think this is probably a big reason it can't best Krait's 2 cores in any multi-threaded tests. I think they would have been better off going with a 2-core, dual-channel setup with improved graphics, and saving the move to quad core for Tegra 4. Reply
  • tipoo - Thursday, February 23, 2012 - link

    The single channel memory was my first thought also. Four cores active at once plus a GPU all sharing one channel sounds like a terrible idea, that's probably a significant factor. Reply
  • piroroadkill - Thursday, February 23, 2012 - link

    Your two cores crapped all over Tegra 3's 5.
    My god, Tegra 3 looks bloated compared to MSM8960.
    I eagerly await devices of all types with this chipset! Bravo!
    Reply
  • rahvin - Friday, February 24, 2012 - link

    Just like with Tegra and Tegra 2, NVIDIA made design decisions that push paper specs over real performance and power savings. Personally, other than a few token devices, I expect Tegra 3 to be a no-show once Krait and OMAP5 hit the scene. Reply
  • LetsGo - Thursday, February 23, 2012 - link

    Looking at the numbers in the Linpack benchmark, I would have expected the results to scale less than linearly, but the MDP MSM8960 scores 107 MFLOPS single-threaded and 218 MFLOPS multi-threaded!

    That chip has the smarts. Impressive!
    Reply
  • metafor - Thursday, February 23, 2012 - link

    Since two cores share an L2 cache, it's very possible that one core's prefetch also feeds another. I don't think the Linpack program is very intelligent in this regard :/ Reply
  • tipoo - Thursday, February 23, 2012 - link

    The SGX 543MP2 in the iPad 2 pushes 148 in GLBenchmark Pro offscreen. A year after its debut, why is no one else using it yet? I know the A5 die is much larger than comparable SoCs because of it, but this shows everyone else is focusing on CPU performance over GPU, and I think after a certain point the GPU will matter more for rich content.

    And the iPad 3 will be launching within months no doubt, some saying with a 543MP4 like the Vita has, which would make a large lead into an enormous one. Seriously, what's going on with everyone else?
    Reply
  • Death666Angel - Thursday, February 23, 2012 - link

    iOS unfortunately is pretty much alone in the mobile gamer department. Android is not really pushing mobile games as much. So big beefy GPUs are not that marketable. :-( Reply
  • Dribble - Thursday, February 23, 2012 - link

    I always thought of Tegra 3 as a stepping stone between last year's standard of dual A9s @ 40nm and tomorrow's standard of quad A15s @ 28nm.

    It looks like we have most of tomorrow's chip today, since we have a dual A15 equivalent @ 28nm. However, is it really shipping soon? 28nm production isn't exactly where it needs to be yet for cheap mobile phone chips. Are the final released products going to be that far ahead of the 28nm A15s?
    Reply
  • Arnulf - Thursday, February 23, 2012 - link

    Quad-core Krait in a tablet/notebook should do just fine competing not only with other ARM implementations but with other architectures as well. I think this is going to be the Core 2 of the ARM world, the first truly "good enough" CPU that can take all reasonable loads and last for 5+ years without becoming totally useless due to ubiquitous software bloat. Reply
  • Death666Angel - Thursday, February 23, 2012 - link

    I want your crystal ball! Nao! :D Reply
  • peter123 - Thursday, February 23, 2012 - link

    So can we conclude that a dual-core Krait will outrun any quad-core A9 implementation?
    What about the OMAP4470?
    Reply
  • metafor - Thursday, February 23, 2012 - link

    OMAP4470's dual memory controller may help it. But in many workloads, Krait will obviously still be ahead. The 4470's SGX 544 would likely -- given TI's history of high clockspeeds on the GPU -- outperform the Adreno 225 and possibly by a wide margin. Reply
  • peter123 - Thursday, February 23, 2012 - link

    But besides a new GPU and a higher clock, the 4470 is a standard dual-core A9 chip, isn't it? Or are there some other improvements? Reply
  • metafor - Friday, February 24, 2012 - link

    Well, it runs the A9s at 1.8GHz IIRC, so that'll close the margins between it and the 8960. There's no getting around the inherent architectural deficiency, of course, but judging by the few benchmarks in the article, they don't seem to rely all that much on architecture and are more a function of just clockspeed.

    High-clocked A9 devices will likely benchmark well. We need better benchmarks.
    Reply
  • mutil0r - Thursday, February 23, 2012 - link

    It all depends on the workload. Given most smartphone use cases, it wouldn't be difficult to imagine Krait being more useful. But it wouldn't be correct to write off the A9, especially compared to Krait, which isn't a full-blown A15. The GPU seems to be another story altogether... as can be seen, the MDP device, which I assume is developer-only and hence all-out, is bested by the Transformer Prime. Until the Adreno 305 comes out, Krait will bring up the rear in GPU benchmarks. Reply
  • peter123 - Thursday, February 23, 2012 - link

    OK, it's clear that until Qualcomm uses the Adreno 3xx, the S4's GPU will be the not-so-good side of this SoC. What I wonder is how the S4 will stand against a quad-core A9 Exynos 4412 or other implementations. This, I think, will be the main contest for 2012 SoC supremacy. Reply
  • metafor - Friday, February 24, 2012 - link

    Hrm? I understand Adreno 225 isn't exactly blowing anyone's mind but Tegra 3's GPU was ahead in 1 out of 5 benchmarks. And looking at the Basemark scenes, Adreno can be significantly faster. Reply
  • Wishmaster89 - Saturday, February 25, 2012 - link

    And there is still probably some room left for newer and more optimized drivers, so I wouldn't call the 225 worse than the T3 GPU.

    We need more demanding benchmarks to truly test those new high performance GPU's.
    Reply
  • mutil0r - Saturday, February 25, 2012 - link

    While true, outside of rare exceptions (Xperia Play) where the OEM specifically asks the manufacturer for optimized drivers, OEMs rarely call for anything beyond baseline drivers because of massive carrier testing and validation cycles.

    We haven't reached desktop-GPU levels of driver maturity and cadence, where driver updates bump up performance, yet.
    Reply
  • Wishmaster89 - Saturday, February 25, 2012 - link

    It would depend on the relationship between Qualcomm and the ODM, but I'd suspect that after last year's fiasco with the MSM8x60 they'll try their best to ensure that final devices are as good as they can get, and that would mean updating drivers for their chipsets.

    In the worst case we'll have to put our faith in custom ROMs to always use the most recent drivers from newer devices, because it has been shown that both the Adreno 205 and 220 got faster with more mature drivers.
    Reply
  • mutil0r - Saturday, February 25, 2012 - link

    I think it is important to remind ourselves that we are still comparing a development platform (the MDP) to a shipping device (Transformer Prime). To see what I mean, have a look at the earlier MDP8660 numbers vs. those of shipping 8x60 devices. I understand manufacturers are trying to close this gap, but I would be wary of simply taking their word for it.

    Next, I would not give Electopia much weight because it is a Qualcomm developed benchmark. I'm surprised AT even published those numbers.

    IMHO, the only benchmark in the list above where the 225 has an advantage, on paper, is Basemark.
    Reply
  • mutil0r - Saturday, February 25, 2012 - link

    Based on what I understand, Basemark tests have unrealistically long shader calls. While it is good to know that the Qualcomm architecture is better equipped to handle this, the real-world implications are far less impressive, given the current state of mobile graphics in the industry.

    Simply put, the comparison is not correct and therefore to draw conclusions based on this would also not be right.

    As an aside, I'm interested in knowing what sort of memory type/clocks the MDP is running. I'm willing to make a calculated guess that this is probably not what we'll be seeing on shipping devices because of BoM, packaging and thermal concerns.

    Also, I read (I don't remember where exactly, though) that the Tegra 3 CPU clocks have been bumped from 1.3/1.4 to 1.4/1.5GHz. Again, I'll believe it only when I see it, but I'm curious whether this supposed new revision also includes a GPU clock bump.
    Reply
  • Eneq - Tuesday, March 13, 2012 - link

    Regarding Electopia...

    What you say is not quite true: it's developed under contract from Qualcomm, but the engine itself is a commercial engine that's been used in multiple titles.

    That said, it is slightly skewed by not focusing too much on things that are known to be a slight problem for Adreno (FBOs and pixel shaders, for instance); however, that's not a big concern for modern games.

    You can compare the results from an Adreno run with Imagination's, which are comparable; Tegra 2, however, has always had issues. But Tegra 2 has other issues as well, so that's unlikely to be due to this specific app (the Tegra 2 devices I have been working on show problems with either fillrate or bus bandwidth, and that doesn't seem to be changing...)
    Reply
  • ChronoReverse - Thursday, February 23, 2012 - link

    It seems to me that there's some serious problem with this benchmark.

    For instance, with Exynos you get 34.6 fps @ 800x480 but somehow you get 42.5 fps @ 1280x720 (offscreen).

    This really doesn't make a lick of sense and cannot be explained by vsync either.
    Reply
  • dcollins - Thursday, February 23, 2012 - link

    "Today we're focused on the SoC comparisons however the first MSM8960 devices will also benefit from having integrated 28nm LTE baseband as well."

    This to me is the most important factor. Tegra 3 SoCs will be forced to use a discrete baseband chip, while the MSM8960 has an integrated baseband. I think this fact alone will be sufficient to give Krait the lead in battery life while allowing for slimmer devices.

    I have an upgrade coming in March and I cannot wait to get my hands on a new Krait based phone. I have been itching to own an HTC Android phone for some time now; these new devices cannot come soon enough!
    Reply
  • jwcalla - Thursday, February 23, 2012 - link

    It's pretty clear -- and exciting -- to see where the future is going with all of this. The consistent improvements being made in these chips are both impressive and rapid.

    Somehow -- and I'm still scratching my head a bit on this one -- the announcement of Ubuntu for Android didn't make it to the front page of AT. But that concept kind of ties into where these higher-performing chips are really going to shine. It might be an instance where a quad-core could offer benefits over a higher-clocked dual-core.
    Reply
  • Kidster3001 - Thursday, February 23, 2012 - link

    SunSpider performance will go down on all devices with the switch to ICS. The Crankshaft engine has some startup overhead that cannot be overcome during the extremely short test times of SunSpider. It will, however, do much better than the old V8 engine in longer-running JavaScript such as the V8 benchmark or Kraken. SunSpider has been good for a long time, but it runs too quickly on modern hardware/JavaScript engines to be meaningful any more. I suggest you retire it gracefully and move to either V8 or Kraken for pure JavaScript performance benchmarking. Reply
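    [Ed.: the startup-overhead point can be illustrated with a toy microbenchmark. The kernel and iteration counts below are made up for illustration, not SunSpider code: a JIT engine compiles hot functions only after some warm-up, so a test that finishes in milliseconds mostly measures interpreter and startup cost rather than peak throughput.]

```javascript
// A run that executes once (SunSpider-style, over in milliseconds)
// versus one long enough for the JIT to warm up and amortize startup.
function kernel(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) sum += i * i;
  return sum;
}

function time(fn, arg, iters) {
  const t0 = Date.now();
  let result = 0;
  for (let i = 0; i < iters; i++) result = fn(arg);
  return { msPerIter: (Date.now() - t0) / iters, result };
}

const short = time(kernel, 100000, 1);    // one short pass: dominated by warm-up
const long = time(kernel, 100000, 2000);  // long run: measures steady-state speed
console.log(short.result === long.result); // → true (same work, different timing)
```

    On a modern engine the per-iteration time of the long run is typically far lower, which is why longer-running suites like Kraken or the V8 benchmark can rank engines differently than SunSpider.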
  • Lucian Armasu - Friday, February 24, 2012 - link

    I think we should stop using the SunSpider benchmark. Google said last year that they aren't focusing on it much because they don't find it relevant anymore, and they even used a "50x SunSpider" test to get a better idea of where browsers are today. Either way, their point was that the SunSpider benchmark is obsolete and doesn't really reflect real browser performance anymore. Reply
  • lancedal - Thursday, February 23, 2012 - link

    Hi Anand,
    What is the CPU voltage for the 1.5GHz?
    Reply
  • boostern - Thursday, February 23, 2012 - link

    I've been following you for almost 10 years now.
    It's always a joy to read one of your articles.
    Thank you Anand, really.
    Reply
  • Black1969ta - Thursday, February 23, 2012 - link

    Tegra 2 was out before other dual cores and fell short of those later designs, so it is not surprising that Tegra 3 is in the same position performance-wise.

    Anyone have a link to more news on Kal-El+, other than just the name?
    Is Kal-El+ a "tock" to Kal-El, out of Intel's playbook?

    If it is then I could see not only a process shrink down to 28nm or 32nm, but "tweaks" also.

    Perhaps with the smaller process they could add a second memory channel.

    The HTC One X is rumored to have a 1.5GHz Tegra 3, instead of the 1.3GHz part in the Prime.
    Reply
  • Lucian Armasu - Saturday, February 25, 2012 - link

    If they actually release the Tegra 3+ this year (they were supposed to release the Tegra 2 3D last year too, but didn't because they were late), it will probably be a quad core with at least the first core at 1.8GHz or even 2GHz, and the others a little lower.

    It should be at least 2GHz if they want to compete with Krait. The problem is, although they were already very late to market with Tegra 3, they also only released the tablet version first, with the phone version months later. So Krait phones will be available in Q2 this year, Tegra 3+ tablets probably in Q3, and Tegra 3+ phones in late Q3 or early Q4.

    If NVIDIA actually managed to deliver their chips when they promised to, I think they would be in a much better position today. For example, it would be understandable for Krait to be more powerful than Tegra 3 in Q2 2012 if phones with Tegra 3 had started appearing two quarters earlier, like they promised. But that didn't happen, so once again Tegra 3 is late to market, just like Tegra 2 was, and the competition is already better by the time it starts to get a foothold in the market.
    Reply
  • fteoath64 - Friday, February 24, 2012 - link

    NVIDIA had better get to A15 very quickly! They are getting creamed by the strong competition. Here is my suggestion: stop with the quad-core nonsense. Do a 1+1 (big.LITTLE at 2GHz, dual-channel RAM), a 2+1, and a 3+1 for good measure. If they can make the 3+1 turbo to 2.2GHz on a single core, that would be great.
    Also, find a way to retask the small core as an I/O processor when it is otherwise inactive.
    Reply
  • curtisas - Saturday, February 25, 2012 - link

    Can Qualcomm just make that device with a little less of a lip on the bottom and sell it to me? It's running stock Android, which is awesome, and reference devices always have top-of-the-line hardware! Reply
  • gamoniac - Saturday, February 25, 2012 - link

    JavaScript performance can be multithreaded at times...

    I am reposting my comment on an earlier article by Brian --

    Browsers are multi-threaded, but JavaScript did not support multi-threading until the advent of Web Workers in HTML5. Although the browser can load images/files with multiple threads, JavaScript snippets on a pre-HTML5 page run in only one thread. Was SunSpider's benchmark written in HTML5?
    Reply
  • thebeastie - Monday, February 27, 2012 - link

    Now that I've got a Sony HMZ-T1, all I really care about is how well it can handle a 13GB Blu-ray MKV rip of Avatar 3D, whether from my local file server or from the device itself, so I can go into my bedroom and watch a 3D movie at the highest quality possible with none of the mess or fuss of a PC.

    ATM the iPad 2 can't handle my really high-end rips, and I don't want to buy a laptop for my bedroom; I want something simple like a tablet.
    Reply
  • superg05 - Saturday, March 10, 2012 - link

    These so-called standard benchmarks only show so much. Let me see a benchmark that shows how many cores are running; Tegra 3, for example, is only supposed to use the lower-power companion core for browsing. How many CPU and GPU cores is each system running during the test? Reply
