Final Words

As I mentioned at the start of this comparison, we're trying to compare two SoCs in two platforms that may offer wildly different experiences than shipping devices based on these SoCs. The hope (on both sides) is that we'll see similar, but likely slightly lower performance in phones. The reality will have to wait until we have final hardware in hand.

Qualcomm's strengths are clearly in single/lightly threaded CPU performance, as Krait offers some significant steps forward in that department. Tegra 3 can hold onto an advantage in heavily threaded apps, but I'm not entirely convinced we'll see many of those workloads on phones.

The bigger question is power efficiency, and this one isn't as easily answered based on what we know today. Qualcomm gains a lot by being on a 28nm LP process; however, it also puts more power-hungry cores on that process. Device-level power efficiency for a given workload may genuinely improve as a result of having faster cores on a lower-power process (race to sleep, lower idle power). Generally speaking, however, single-threaded performance often comes at the expense of core-level power efficiency. That's the reason it's taken this long for a 3-wide out-of-order core to make it into a smartphone. Will Moore's Law, and the 28nm LP process in particular, be enough to offset the power consumption of a higher-performance Krait core under full load? Depending on how conservative device makers choose to be with their power profiles, we may get varying answers to this question.
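As a toy illustration of the race-to-sleep argument (all numbers here are hypothetical, not measurements of Krait or any shipping core), a faster core can consume less total energy despite a higher active power draw, because it finishes the work sooner and spends the rest of the window at low-power idle:

```python
# Race-to-sleep sketch with made-up numbers: energy over a fixed 1 s window
# for a slower/leaner core vs. a faster/hungrier one running the same task.
TASK_WORK = 1.0  # normalized units of work to complete

def energy(perf, active_w, idle_w, window_s=1.0):
    """Total energy (J) = active power * busy time + idle power * idle time."""
    active_s = min(TASK_WORK / perf, window_s)
    return active_w * active_s + idle_w * (window_s - active_s)

slow = energy(perf=1.0, active_w=1.0, idle_w=0.05)  # busy the full second
fast = energy(perf=2.0, active_w=1.6, idle_w=0.05)  # 60% more power, done in 0.5 s
print(slow, fast)  # 1.0 vs 0.825: the hungrier core still wins on energy
```

Whether that arithmetic actually works out in Krait's favor under full load is exactly the open question: it depends on how much extra active power the wider core draws relative to the speedup it delivers.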

Tegra 3 on the other hand should be a known quantity from a power consumption standpoint. All of the A9s in Tegra 3 are power gated (unlike in Tegra 2) and there's the fifth core for light workloads. For typical usage models I would expect better battery life out of Tegra 3 phones compared to Tegra 2 counterparts since the extra cores will likely be power gated, and idle power consumption should be lower. It's only for the heavier workloads where all cores are engaged that the impact of Tegra 3 remains to be seen.

There's also the LTE component. Today we're focused on the SoC comparison, but the first MSM8960 devices will also benefit from an integrated 28nm LTE baseband. Qualcomm will offer discrete 28nm LTE baseband solutions as well (e.g. MDM9615) for device makers who choose not to use Qualcomm application processors.

We'll obviously figure all of this out in due time, but my final concern remains with the device vendors. Far too often we review great platforms that are burdened with horrible software sold under the guise of differentiation. We're finally on the cusp of getting some really powerful smartphone hardware; I do hope the device vendors do these SoCs justice.

Comments
  • mutil0r - Saturday, February 25, 2012 - link

    While true, outside of rare exceptions (Xperia Play) where the OEM specifically asks the manufacturer for optimized drivers, OEMs rarely call for anything beyond baseline drivers because of massive carrier testing and validation cycles.

    We haven't reached desktop-GPU levels of maturity and cadence, where driver updates bump up performance, yet.
  • Wishmaster89 - Saturday, February 25, 2012 - link

    It would depend on the relationship between Qualcomm and the ODM, but I'd suspect that after last year's fiasco with the msm8x60 they'll try their best to ensure that final devices are as good as they can get, and that would mean updating the drivers for their chipsets.

    In the worst-case scenario we'll have to put our faith in custom ROMs to always use the most recent drivers from newer devices, because it's been shown that both Adreno 205 and 220 got faster with more mature drivers.
  • mutil0r - Saturday, February 25, 2012 - link

    I think it is important to remind ourselves that we are still comparing a development platform (the MDP) to a shipping device (Transformer Prime). To see what I'm trying to say here, please have a look at the earlier MDP8660 numbers vs. those of shipping 8x60 devices. I understand manufacturers are trying to close this gap, but I would be wary of simply taking their word for it.

    Next, I would not give Electopia much weight because it is a Qualcomm developed benchmark. I'm surprised AT even published those numbers.

    IMHO the only benchmark in the above list where the 225 has an advantage, on paper, is Basemark.
  • mutil0r - Saturday, February 25, 2012 - link

    Based on what i understand, Basemark tests have unrealistically long shader calls. While it is good to know that the Qualcomm architecture is better equipped to handle this, real world implications are far less impressive, given the current state of mobile graphics in the industry.

    Simply put, the comparison isn't valid, so drawing conclusions from it wouldn't be right either.

    As an aside, I'm interested in knowing what sort of memory type/clocks the MDP is running. I'm willing to make a calculated guess that this is probably not what we'll be seeing on shipping devices because of BoM, packaging and thermal concerns.

    Also, I read (I don't remember where exactly) that the Tegra 3 CPU clocks have been bumped from 1.3/1.4 to 1.4/1.5. Again, I'll believe it only when I see it, but I'm curious whether this supposed new revision also includes a GPU clock bump.
  • Eneq - Tuesday, March 13, 2012 - link

    Regarding Electopia...

    What you say is not quite true: it was developed under contract from Qualcomm, but the engine itself is a commercial engine that's been used in multiple titles.

    That said, it is slightly skewed by not focusing too much on things that are known to be a slight problem for Adreno (FBOs and pixel shaders, for instance); however, that's not a big concern for modern games.

    You can just compare the results from an Adreno run with Imagination hardware, which are comparable; Tegra 2, however, has always had issues. But Tegra 2 has other issues as well, so that's unlikely to be down to this specific app (the Tegra 2 devices I have been working on show problems with either fillrate or bus bandwidth, and that doesn't seem to be changing...)
  • ChronoReverse - Thursday, February 23, 2012 - link

    It seems to me that there's some serious problem with this benchmark.

    For instance, with Exynos you get 34.6 fps @ 800x480 but somehow you get 42.5 fps @ 1280x720 (offscreen).

    This really doesn't make a lick of sense and cannot be explained by vsync either.
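The mismatch the commenter describes is easy to quantify. Plugging the figures quoted above into a quick pixel-throughput calculation shows the off-screen run implies nearly three times the fill throughput of the on-screen run, which vsync alone cannot explain:

```python
# Sanity check on the benchmark figures quoted in the comment above:
# pixels rendered per second implied by each result.
def pixel_rate(width, height, fps):
    return width * height * fps

onscreen = pixel_rate(800, 480, 34.6)    # 34.6 fps at 800x480
offscreen = pixel_rate(1280, 720, 42.5)  # 42.5 fps at 1280x720 (offscreen)
print(f"implied throughput ratio: {offscreen / onscreen:.2f}x")  # ~2.95x
```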
  • dcollins - Thursday, February 23, 2012 - link

    "Today we're focused on the SoC comparisons however the first MSM8960 devices will also benefit from having integrated 28nm LTE baseband as well."

    This to me is the most important factor. Tegra 3 SoCs will be forced to use a discrete baseband chip while the MSM8960 has an integrated baseband. I think this fact alone will be sufficient to give Krait the lead in terms of battery life while allowing for slimmer devices.

    I have an upgrade coming in March and I cannot wait to get my hands on a new Krait based phone. I have been itching to own an HTC Android phone for some time now; these new devices cannot come soon enough!
  • jwcalla - Thursday, February 23, 2012 - link

    It's pretty clear -- and exciting -- to see where the future is going with all of this. The consistent improvements being made in these chips are both impressive and rapid.

    Somehow -- and I'm still scratching my head a bit on this one -- the announcement of Ubuntu for Android didn't make it to the front page of AT. But that concept kind of ties into where these higher-performing chips are really going to shine. It might be an instance where a quad-core could offer benefits over a higher-clocked dual-core.
  • Kidster3001 - Thursday, February 23, 2012 - link

    SunSpider performance will go down on all devices with the switch to ICS. The Crankshaft engine has some startup overhead that cannot be overcome during the extremely short test times of SunSpider. It will, however, do much better than the old V8 engine in longer-running JavaScript such as the V8 benchmark or Kraken. SunSpider has been good for a long time, but it runs too quickly on modern hardware/JavaScript engines to be meaningful any more. I suggest you retire it gracefully and move to either V8 or Kraken for pure JavaScript performance benchmarking.
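The amortization point can be sketched with hypothetical numbers (the warm-up cost below is illustrative, not a measured Crankshaft figure): a fixed engine start-up cost dominates a SunSpider-length run but becomes negligible over a Kraken-length one:

```python
# Toy illustration: how a fixed JIT warm-up cost skews short benchmarks.
WARMUP_MS = 50.0  # hypothetical one-time compiler/engine startup cost

def reported_time(pure_work_ms):
    """Benchmark-visible time = warm-up overhead + actual script work."""
    return WARMUP_MS + pure_work_ms

short = reported_time(200.0)    # SunSpider-scale run, a few hundred ms
long = reported_time(20000.0)   # Kraken-scale run, tens of seconds
print(f"overhead share, short run: {WARMUP_MS / short:.0%}")  # 20%
print(f"overhead share, long run:  {WARMUP_MS / long:.1%}")   # 0.2%
```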
  • Lucian Armasu - Friday, February 24, 2012 - link

    I think we should stop using the SunSpider benchmark. Google said last year that they aren't focusing on it much anymore because they don't find it relevant, and they even used a "50x SunSpider" test to get a better idea of where browsers are today. Either way, their point was that the SunSpider benchmark is obsolete and no longer gives a real feel for browser performance.
