CPU Performance

For our cross-platform CPU performance tests we turn to the usual collection of JavaScript and HTML5 based browser tests. Most of our comparison targets here are smartphones, with two exceptions: Intel's Bay Trail FFRD and Qualcomm's MSM8974 Snapdragon 800 MDP/T. Both of those platforms are test tablets, leveraging higher-TDP silicon in a tablet form factor. The gap between the TDP of Apple's A7 and those two SoCs isn't huge, but there is a gap. I include those platforms only as reference points. As you're about to see, the work Apple has put into the A7 makes the iPhone 5s competitive with both, and in many cases the A7 delivers better performance than one or both of them. A truly competitive A7 here also gives an early indication of the baseline to expect from the next-generation iPad.

We start with SunSpider's latest iteration, which measures the performance of the browser's js engine as well as the underlying hardware. It's possible to realize good gains here by exploiting advantages in both hardware and software. SunSpider has lately become a serious optimization target for all browser and hardware vendors, but it can still be a good measure of an improving memory subsystem, assuming the software doesn't get in the way of the hardware.
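
To give a sense of what SunSpider-style tests actually exercise, here's a minimal sketch of the timed-microbenchmark pattern: run a small, self-contained piece of js and report wall-clock time. The workload below is hypothetical, not one of SunSpider's actual tests.

```typescript
// Minimal sketch of a SunSpider-style timed microbenchmark.
// The workload (naive string building) is hypothetical; the real
// suite covers crypto, 3D math, regexps, string processing, etc.
function workload(): string {
  let s = "";
  for (let i = 0; i < 10000; i++) {
    s = String.fromCharCode(65 + (i % 26)) + s; // deliberately allocation-heavy
  }
  return s;
}

const start = Date.now();
for (let run = 0; run < 10; run++) {
  workload();
}
const elapsedMs = Date.now() - start;
console.log(`elapsed: ${elapsedMs} ms`); // lower is better
```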

SunSpider Javascript Benchmark 1.0 - Stock Browser

Bay Trail's performance crown lasted all of a week, and even less than that if you count when we actually ran this benchmark. The dual-core A7 is now the fastest SoC we've tested under SunSpider, outpacing even Qualcomm's Snapdragon 800 and ARM's Cortex A15. Apple doesn't quite hit a 2x increase in CPU performance here, but it comes close with a 75% performance increase compared to the iPhone 5. Update: Intel responded with a Bay Trail run under IE11, which comes in at 329.6 ms.
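
Because SunSpider reports completion time rather than a score, percentage gains map inversely onto the raw numbers. A quick worked example with hypothetical times shows how a drop in completion time translates into a 75% performance increase:

```typescript
// Hypothetical SunSpider-style results (ms, lower is better),
// chosen only to illustrate the arithmetic behind "75% faster".
const oldTimeMs = 700; // stand-in for an iPhone 5-class run
const newTimeMs = 400; // stand-in for an iPhone 5s-class run

// Performance is work per unit time, so the speedup is the
// inverse ratio of the two completion times.
const speedup = oldTimeMs / newTimeMs; // 1.75x, i.e. a 75% increase
console.log(`${((speedup - 1) * 100).toFixed(0)}% faster`);
```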

Next up is Kraken, a heavier js benchmark designed to stress more forward-looking algorithms. Once again we run the risk of the benchmark becoming an optimization target, but in Kraken's case I haven't seen too much attention paid to it. I hope it continues to fly under the radar, as I've liked it as a benchmark thus far.
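
Kraken's workloads lean heavily on typed arrays and signal-processing style inner loops (audio filtering, image convolution, pathfinding and the like). Here's a rough sketch of that pattern; it's illustrative only, not code from the actual suite:

```typescript
// Rough sketch of a Kraken-style typed-array workload: a simple
// 3-tap FIR filter over a synthetic Float32Array "audio" buffer.
const samples = new Float32Array(1 << 20);
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(i * 0.01); // synthetic input signal
}

const taps = new Float32Array([0.25, 0.5, 0.25]); // low-pass kernel
const out = new Float32Array(samples.length);

const start = Date.now();
for (let i = taps.length - 1; i < samples.length; i++) {
  let acc = 0;
  for (let k = 0; k < taps.length; k++) {
    acc += samples[i - k] * taps[k]; // multiply-accumulate inner loop
  }
  out[i] = acc;
}
console.log(`filtered ${samples.length} samples in ${Date.now() - start} ms`);
```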

Mozilla Kraken Benchmark - 1.1

The A7 comes in second only to Intel's Atom Z3770. Although I haven't yet published these results, the 5s performs very similarly to an Atom Z3740, a more modestly clocked Bay Trail SKU from Intel. Given the A7's relatively low CPU frequency, I'm not at all surprised that it can't compete with the fastest Bay Trail but is instead better matched against a middle-of-the-road SKU. Either way, the A7's performance here is downright amazing. Once again there's a performance advantage over Snapdragon 800 and Cortex A15, both running at much higher peak frequencies (and likely higher power levels too, although that's speculation until we can tear down an S800 platform and a 5s to compare).

Compared to the iPhone 5, the 5s delivers over 2.3x the performance of last year's flagship.

Next up is Google's Octane benchmark, yet another js test, but this time one used as a design target for Google's own V8 js engine. Devices that can run Chrome tend to do the best here, potentially putting the 5s at a disadvantage.
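
Unlike SunSpider, Octane produces a score where higher is better, combining its subtests into a single number; as far as I know the overall result is a geometric mean of the per-test scores, roughly like this (the score values below are made-up placeholders):

```typescript
// Sketch of Octane-style score aggregation: the overall score is
// the geometric mean of the subtest scores (higher is better).
// Subtest scores here are hypothetical.
const subScores: Record<string, number> = {
  Richards: 9000,
  DeltaBlue: 11000,
  Crypto: 13000,
  RayTrace: 8000,
};

const values = Object.values(subScores);
const geoMean = Math.exp(
  values.reduce((sum, s) => sum + Math.log(s), 0) / values.length
);
console.log(`overall score: ${Math.round(geoMean)}`);
```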

Google Octane Benchmark v1

Bay Trail takes the lead here once again, but again I expect the Z3740 to be a closer match for the A7 in the 5s at least (it remains to be seen how high the iPad 5 version of Cyclone will be clocked). The performance advantage over the iPhone 5 is a staggering 92%, and obviously there are big gains over all of the competing ARM-based CPU architectures. Apple is benefiting slightly from Mobile Safari being a 64-bit binary; however, I don't know whether it's actually getting any benefit beyond access to the increased register space.

Our final browser test is arguably the most interesting. Rather than focusing on js code snippets, Browsermark 2.0 attempts to be a more holistic browser benchmark. The result is much less peaky performance and a better view of the sort of moderate gains you'd see in actual usage.

Browsermark 2.0

There's a fair amount of clustering around 2500, with very little differentiation between a lot of the devices. The standouts are the Snapdragon 800-based G2 from LG and, of course, the iPhone 5s. Here we see the most modest example of the A7's performance superiority, at roughly 25% better than the iPhone 5. That's not to understate the iPhone 5s' performance: depending on workload, you'll see a wide range of improvements.

Comments

  • Wilco1 - Wednesday, September 18, 2013 - link

    If all you can do is name calling then you clearly haven't got a clue or any evidence to prove your point. Either come up with real evidence or leave the debate to the experts. Do you even understand what IPC means?

    For example, in your link a low-clocked Jaguar is keeping up with a much higher-clocked Bay Trail (yes, it boosts to 2.4GHz during the benchmark run), so the obvious conclusion is that Jaguar has far higher IPC than Bay Trail. In the 7-zip test, for instance, Jaguar has 28% higher IPC than BT. Just like I said (see the worked example below).

    Now show me a single benchmark where BT gets better IPC than Jaguar. Put up or shut up.
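
To make the clock-for-clock arithmetic in this exchange concrete, here is a minimal sketch; the scores and clocks are hypothetical, picked only to reproduce the 28% figure under debate:

```typescript
// Worked version of the clock-normalized (IPC-style) comparison
// being argued above. All scores and clocks are hypothetical,
// chosen only to show how a slower-clocked chip can lead per clock.
function scorePerGHz(score: number, clockGHz: number): number {
  return score / clockGHz;
}

const jaguar = scorePerGHz(2400, 1.5);   // 1600 points per GHz
const bayTrail = scorePerGHz(3000, 2.4); // 1250 points per GHz

// Bay Trail posts the higher absolute score, but normalized per
// clock Jaguar comes out ahead: 1600 / 1250 = 1.28, i.e. 28%.
const advantage = (jaguar / bayTrail - 1) * 100;
console.log(`Jaguar per-clock advantage: ${advantage.toFixed(0)}%`);
```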
  • zeo - Wednesday, September 18, 2013 - link

    The point that BT beats Jaguar, especially at performance per watt, clearly proves the point that was made!

    And insisting as you are on your original assessment is characteristic of acting like a Troll... So you're not going to convince anyone by simply insisting on being right... especially when we can point to Anandtech highlighting multiple benchmarks in this article that showed the Kabini performing lower than both BT and the A7!

    So either learn to read what these reviews actually post or accept getting labeled a Troll... either way, you're not winning this argument!
  • Wilco1 - Wednesday, September 18, 2013 - link

    No, Bob's claim was that Bay Trail was faster clock for clock than Jaguar, when the link he gave to prove it clearly showed that to be false. BT may well beat Jaguar on perf/watt, but that's not at all what we were discussing.

    So next time try to understand what people are discussing before jumping in and calling people a Troll. And yes I stand by my characterization of various microarchitectures, precisely because it's based on actual benchmark results.
  • Bob Todd - Wednesday, September 18, 2013 - link

    IPC as a comparison point made a lot of sense when we were arguing about which 130 watt desktop processor had the better architecture. It seems largely irrelevant for mobile, where we care about performance per watt. Your argument is continually that the ARM/AMD designs are 'faster' based on Geekbench. If Jaguar has 28% higher IPC than Bay Trail, do you honestly think it matters if Bay Trail is still the faster chip @ 1/3 (or less) of the power requirements? If someone came up with a crazy design that needed 5x the clocks to have a 2x performance advantage over their competitor, but did so with half the power budget, they'd still be racking up design wins (assuming parity for all other aspects like price). That's a two-way street. If ARM designs a desktop/server focused chip that needs higher clocks than Intel's to reach performance parity or be faster than Haswell, but does so with significantly less power, it's still a huge win for them.
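
To spell out the arithmetic in that hypothetical: perf/W is just performance divided by power, so 2x the performance at half the power is a 4x efficiency win no matter how many clocks it took to get there. A minimal sketch:

```typescript
// Sketch of the perf/W arithmetic behind the hypothetical above.
// The numbers are made up; only the ratios matter.
const perfPerWatt = (perf: number, watts: number): number => perf / watts;

const incumbent = perfPerWatt(1.0, 1.0);   // baseline design
const crazyDesign = perfPerWatt(2.0, 0.5); // 2x perf at half the power

// Clock speed never enters the equation: efficiency only cares
// about delivered performance and consumed power.
console.log(`efficiency advantage: ${crazyDesign / incumbent}x`); // 4x
```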
  • Wilco1 - Wednesday, September 18, 2013 - link

    IPC matters because it lets you compare different microarchitectures and make predictions about performance at different clock speeds. I'm sure you know many CPUs come in a confusing variety of clockspeeds (and even different base/turbo frequencies for Intel parts), but the underlying microarchitecture always remains the same. You can't make claims like "Bay Trail is faster than Jaguar" when such a claim would only be valid at very specific frequencies. However, we can say that Jaguar has better IPC than BT, and that will remain true irrespective of frequency. That is the purpose of the list of microarchitectures I posted.

    I was originally talking about the performance of Apple A7 and Bay Trail in Geekbench. You may not like Geekbench, but it represents close to actual CPU performance (not rubbish JavaScript, tuned benchmarks, cheating - remember AnTuTu? - or unfair compiler tricks).

    Now you're right that besides absolute performance, perf/W is also important. Unfortunately there is almost no detailed info on power consumption, let alone the energy required for a given task, across various CPUs. While TDP (in the rare cases it is known!) can give some indication, different feature sets, methodologies, "dial-a-TDP" and turbo features make TDPs hard to compare. What we can say in general is that high-frequency designs tend to be less efficient and use more power than lower-frequency, higher-IPC designs. In that sense I would not be surprised if the A7 also shows very good perf/Watt. How it compares with BT won't be clear until BT phones appear.
  • Bob Todd - Wednesday, September 18, 2013 - link

    Your point about benchmarks is actually what surprises me the most nowadays. The biggest thing every in-depth review of a new ARM design brings to light is how freaking piss poor the state of mobile benchmarking is from a software standpoint. I didn't expect magic by the time we got to A9 designs, but it's a little ridiculous that we're still in a state of infancy for mobile benchmarking tools over half a decade after the market really started heating up.
  • Bob Todd - Wednesday, September 18, 2013 - link

    And by "ARM design" I mean both their cores or others building to their ISA.
  • Wilco1 - Thursday, September 19, 2013 - link

    Yes, mobile benchmarking is an absolute disgrace. And that's why I'm always pointing out how screwed up Anand's benchmarking is - I'm hoping he'll understand one day. How anyone can conclude anything from JS benchmarks is a total mystery to me. Anand might as well just show AnTuTu results and be done with it, that may actually be more accurate!

    Mobile benchmarks like EEMBC, CoreMark etc. are far worse than the benchmarks they try to replace (eg. Dhrystone). And SPEC is useless as well. Ignoring the fact that it is really a server benchmark, the main issue is that it ended up being more of a compiler-trick contest than a fair CPU benchmark. Of course Geekbench isn't perfect either, but at the moment it's the best and fairest CPU bench: because it uses precompiled binaries you can't use compiler tricks to pretend your CPU is faster.
  • akdj - Thursday, September 19, 2013 - link

    SO.....what is it the 'crew' is supposed to 'do'? NOT provide ANY benchmarks? Anand and team are utilizing the benchmarks available right now. They're not building the software to bench these devices...they're reviewing them...with the tools available, currently, NOW---on the market. If you're so interested in better mobile benchmarking (still in its infancy---it's only really been 5 years since we've had multiple devices to even test), why not pursue and build your own benchmarking software? Seems like it may be a lucrative project. Sounds like you know a bit about CPU/GPU and SoC architecture---put something together. Sunspider is ubiquitous, used on any and all platforms from desktops to laptops---tablets to phones, people 'get it'. As well, GeekBench is re-inventing their benchmarking software---as well, the Google Octane tests are fairly new...and many of the folks using these devices ARE interested in how fast their browser populates, how quick a single core is---speed of apps opening and launching, opening a PDF, FPS playing games, et al.
    Again---if you're not 'happy' with how Anand is reviewing gear (the best on the web IMHO), open your own site---build your own tools, and let's see how things turn out for ya!
    Give credit where credit is due....I'd much rather see the way Anand is approaching reviews in the mobile sector than a 1500 word essay without benchmarking results because current "mobile benchmarking is an absolute disgrace"
    YMMV as always
    J

    PS---Thanks for the review guys....again, GREAT Job!
  • Bob Todd - Thursday, September 19, 2013 - link

    Umm...I think you missed my point. I love the reviews here. That doesn't change the fact that mobile benchmarking software sucks compared to what we have available on the desktop. That isn't a slam against this site or any of the reviewers, and I fully expect them to use the (relatively crappy) software tools that are available. And they've even gone above and beyond and written some tools themselves to test specific performance aspects. I'm just surprised that with mobile being the fastest growing market, nobody has really stepped up to the plate to offer a good holistic benchmarking suite to measure cpu/gpu/memory/io performance across at least iOS/Android. And no, I don't expect anyone at Anandtech to write or pay someone to write such a tool.
