Custom Code to Understand a Custom Core

Section by Anand Shimpi

All Computer Engineering students at NCSU had to take mandatory programming courses. Given that my dad is a Computer Science professor, I always had exposure to programming, but I never considered it my strong suit - perhaps my gravitating towards hardware was a bit of passive rebellion. Either way, I knew that in order to really understand Swift I'd have to do some coding on my own. The only problem? I had zero experience writing Objective-C code for iOS, and not enough time to go through a crash course.

I had code that I wanted to time/execute in C, but I needed it ported to a format that I could easily run/monitor on an iPhone. I enlisted the help of a talented developer friend who graduated from NCSU around the same time I did, Nirdhar Khazanie. Nirdhar has been doing mobile development for years now, and he quickly turned the garbled C code I wanted to run into something that executed beautifully on the iPhone. He gave me a framework where I could vary instructions as well as data set sizes, which made this next set of experiments possible. It's always helpful to know a good programmer.
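I can't publish Nirdhar's framework here, but for the curious, a minimal sketch of the kind of timing harness this implies - the names and structure are my own invention, not his code - would look something like this on iOS, where mach_absolute_time() is the standard high resolution timer:

```c
#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

/* Run a kernel 'iters' times and return the average time per
   iteration in nanoseconds. */
static double time_kernel(void (*kernel)(void), uint64_t iters) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    uint64_t start = mach_absolute_time();
    for (uint64_t i = 0; i < iters; i++)
        kernel();
    uint64_t end = mach_absolute_time();

    /* mach_absolute_time() ticks in CPU-specific units; the timebase
       ratio converts them to nanoseconds. */
    double ns = (double)(end - start) * tb.numer / tb.denom;
    return ns / (double)iters;
}

static void empty_kernel(void) {
    __asm__ volatile("");  /* placeholder; a real test kernel goes here */
}

int main(void) {
    printf("%.2f ns per iteration\n", time_kernel(empty_kernel, 10000000));
    return 0;
}
```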

So what did Nirdhar's app let me do? Let's start at the beginning. ARM's Cortex A9 has two independent integer ALUs; does Swift have more? To find out, I created a loop of independent integer adds. The variables are all independent of one another, which should allow for some great instruction level parallelism. The code loops many times, which should make for easily predictable branches. My code is hardly optimal, but I did keep track of how many millions of adds were executed per second. I also reported how long each iteration of the loop took, on average.
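A minimal sketch of what such a kernel looks like is below - this is illustrative, not the exact code I ran. Within one iteration no add depends on another add's result, so a core with idle integer ALUs is free to execute them in parallel; the empty asm keeps the compiler from folding the loop away while leaving the accumulators in registers:

```c
#include <stdio.h>

int main(void) {
    long a = 0, b = 1, c = 2, d = 3;

    /* Four independent add chains: plenty of instruction level
       parallelism for any spare integer ALUs. */
    for (long i = 0; i < 100000000L; i++) {
        a += 7;
        b += 7;
        c += 7;
        d += 7;
        /* Prevents constant folding without forcing memory traffic. */
        __asm__ volatile("" : "+r"(a), "+r"(b), "+r"(c), "+r"(d));
    }

    printf("%ld %ld %ld %ld\n", a, b, c, d);  /* keep results live */
    return 0;
}
```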

Integer Add Code

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Integer Add Test | 207 MIPS | 336 MIPS | 369 MIPS | 9.8% |
| Integer Add Latency in Clocks | 23 clocks | | 21 clocks | |

The code here should be fairly bound by the integer execution path. We're seeing a 9.8% increase in performance. Average latency improves slightly, by 2 clocks, but we're not seeing the sort of ILP increase that would come from having a third ALU that could easily be kept fed. The slight improvement in performance here could be due to a number of things. A quick look at some of Apple's own documentation confirms what we've seen here: Swift has two integer ALUs and can issue 3 operations per cycle (implying a 3-wide decoder as well). I don't know whether the third decoder is responsible for the slight gains in performance here or not.

What about floating point performance? ARM's Cortex A9 only has a single issue port for FP operations which seriously hampers FP performance. Here I modified the code from earlier to do a bunch of single and double precision FP multiplies:
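Again, a hedged sketch of the kind of kernel this implies (not the actual test code); switching the type from float to double gives the double precision variant:

```c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    /* Iteration count comes from argv so the compiler can't
       pre-compute the loop at build time. */
    long iters = (argc > 1) ? atol(argv[1]) : 100000000L;

    /* Independent multiply chains; change 'float' to 'double'
       (and the literals to match) for the double precision variant. */
    float a = 1.0001f, b = 1.0002f, c = 1.0003f, d = 1.0004f;
    const float m = 1.0000001f;

    for (long i = 0; i < iters; i++) {
        a *= m;
        b *= m;
        c *= m;
        d *= m;
    }

    /* Print the results so the multiplies stay live. */
    printf("%f %f %f %f\n", a, b, c, d);
    return 0;
}
```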

FP Multiply Code

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| FP Mul Test (single precision) | 94 MFLOPS | 153 MFLOPS | 143 MFLOPS | -7% |
| FP Mul Test (double precision) | 87 MFLOPS | 141 MFLOPS | 315 MFLOPS | 123% |

There's actually a slight regression in single precision FP multiply performance, likely because performance wouldn't scale perfectly linearly from 800MHz to 1.3GHz in the first place. Notice what happens when we move from single to double precision though: performance goes up on Swift but remains effectively unchanged on the Cortex A9. Given the support for ARM's VFPv4 extensions, Apple likely has a second FP unit in Swift that can help with FMAs or improve double precision FP performance. It's also possible that Swift is a 128-bit wide NEON machine and my DP test compiles down to NEON code, which enjoys the benefits of a wider engine. I ran the same test with FP adds and didn't notice any changes to the data above.
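To make the NEON width point concrete: a quad-word (128-bit) NEON multiply produces four single precision products per instruction. A tiny fragment using the standard arm_neon.h intrinsics is below - this illustrates the instruction width only, and is not a claim about what my compiled test actually emitted:

```c
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    const float xs[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    const float ys[4] = {0.5f, 0.5f, 0.5f, 0.5f};

    /* One 128-bit NEON multiply: four single precision results
       from a single instruction. */
    float32x4_t x = vld1q_f32(xs);
    float32x4_t y = vld1q_f32(ys);
    float32x4_t p = vmulq_f32(x, y);

    float out[4];
    vst1q_f32(out, p);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```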

Sanity Check with Linpack & Passmark

Section by Anand Shimpi

Not completely trusting my own code, I wanted some additional data points to help understand the Swift architecture. I first turned to the iOS port of Linpack and graphed FP performance vs. problem size:

Even though I ran the benchmark for hundreds of iterations at each data point, the curves didn't come out as smooth as I would've liked. Regardless, there's a clear trend. Swift maintains a huge performance advantage even at small problem sizes, which supports the theory that there are two ports to dedicated FP hardware. There's also a much smaller relative drop in performance when going out to main memory. If you do the math on the original, unscaled 4S scores you get the following data:

Linpack Throughput: Cycles per Operation

| | Apple Swift @ 1300MHz (iPhone 5) | ARM Cortex A9 @ 800MHz (iPhone 4S) |
|---|---|---|
| ~300KB Problem Size | 1.45 cycles | 3.55 cycles |
| ~8MB Problem Size | 2.08 cycles | 6.75 cycles |
| Increase | 43% | 90% |
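The math here is just clock rate divided by measured throughput. A quick sketch of the conversion - note the MFLOPS figures below are back-derived from the table above, not separate measurements:

```c
#include <stdio.h>

int main(void) {
    /* cycles per op = clock (MHz) / throughput (MFLOPS).
       Working backwards: 1.45 cycles/op at 1300MHz implies Swift was
       sustaining roughly 1300 / 1.45 ~= 897 MFLOPS at the ~300KB size. */
    double swift_mhz = 1300.0, a9_mhz = 800.0;

    printf("Swift ~300KB: %.0f MFLOPS\n", swift_mhz / 1.45);
    printf("Swift ~8MB:   %.0f MFLOPS\n", swift_mhz / 2.08);
    printf("A9    ~300KB: %.0f MFLOPS\n", a9_mhz / 3.55);
    printf("A9    ~8MB:   %.0f MFLOPS\n", a9_mhz / 6.75);
    return 0;
}
```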

Swift is simply able to hide memory latency better than the Cortex A9. Concurrent FP/memory operations seem to do very well on Swift...

As a last sanity check I turned to Passmark, another general purpose iOS microbenchmark.

Passmark CPU Performance

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Integer | 257 | 418 | 614 | 47.0% |
| FP | 230 | 374 | 813 | 118% |
| Primality | 54 | 87 | 183 | 109% |
| String qsort | 1065 | 1730 | 2126 | 22.8% |
| Encryption | 38.1 | 61.9 | 93.5 | 51.0% |
| Compression | 1.18 | 1.92 | 2.26 | 17.9% |

The integer math test uses a large dataset and performs a number of add, subtract, multiply and divide operations on the values. The dataset measures 240KB per core, which is enough to stress the L2 cache of these processors. Note the 47% increase in performance over a scaled Cortex A9.
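Passmark doesn't publish its source, so purely as a rough illustration, a test in the same spirit - a ~240KB working set (too large for a 32KB L1 data cache) hit with simple integer math - might look like:

```c
#include <stdio.h>
#include <stdlib.h>

#define WORKING_SET (240 * 1024 / sizeof(int))  /* ~240KB of ints */

int main(void) {
    int *data = malloc(WORKING_SET * sizeof(int));
    for (size_t i = 0; i < WORKING_SET; i++)
        data[i] = (int)i + 1;

    /* Add/subtract/multiply/divide over a working set that spills
       out of L1 into L2, in the spirit of the test described above. */
    long long acc = 0;
    for (int pass = 0; pass < 100; pass++) {
        for (size_t i = 0; i < WORKING_SET; i++) {
            acc += data[i] * 3;
            acc -= data[i] / 7;
        }
    }

    printf("%lld\n", acc);
    free(data);
    return 0;
}
```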

The FP test is identical in structure to the integer test (including dataset size), but it works on 32 and 64-bit floating point values. The much larger gain on an otherwise identical workload lends credibility to the theory that there are multiple FP pipelines in Swift.

The Primality benchmark is branch heavy and features a lot of FP math and comparisons. Once again we see huge scaling compared to the Cortex A9.
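Again as an illustration of the workload type only (my construction, not Passmark's code), a primality search built around FP math and data-dependent branches:

```c
#include <stdio.h>
#include <math.h>

/* Trial division done with FP operations: the loop bound is an FP
   sqrt, and whether each branch is taken depends on the data, which
   is hard for a branch predictor to get right. */
static int is_prime(double n) {
    double limit = sqrt(n);
    for (double d = 2.0; d <= limit; d += 1.0)
        if (fmod(n, d) == 0.0)
            return 0;
    return 1;
}

int main(void) {
    int count = 0;
    for (double n = 2.0; n < 100000.0; n += 1.0)
        count += is_prime(n);
    printf("%d primes below 100000\n", count);
    return 0;
}
```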

The qsort test features integer math and is very branch heavy. The memory footprint of the test is around 5MB, but the gains here aren't as large as we've seen elsewhere. It's possible that Swift features a much larger branch mispredict penalty than the A9.
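Passmark's variant sorts strings, but the branchy character of the test is easy to see even in an integer version of the same idea (the sizing below is my guess at a ~5MB footprint, not Passmark's actual code):

```c
#include <stdio.h>
#include <stdlib.h>

#define COUNT (5 * 1024 * 1024 / sizeof(int))  /* ~5MB of ints */

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    /* Each comparison is a data-dependent branch the predictor
       can't do much with on random input. */
    return (x > y) - (x < y);
}

int main(void) {
    int *data = malloc(COUNT * sizeof(int));
    for (size_t i = 0; i < COUNT; i++)
        data[i] = rand();

    qsort(data, COUNT, sizeof(int), cmp_int);

    printf("min=%d max=%d\n", data[0], data[COUNT - 1]);
    free(data);
    return 0;
}
```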

The Encryption test works on a very small dataset that can easily fit in the L1 cache but is very heavy on the math. Performance scales very well here, almost mirroring the integer benchmark results.

Finally, the compression test shows the smallest gains once you take into account Swift's higher operating frequency. There's not much more to conclude here other than that we won't always see greater-than-generational scaling from Swift over the previous Cortex A9.

Comments

  • themossie - Tuesday, October 16, 2012 - link

    The manufacturer's charger uses a set of pull-up resistors connected between the various USB lines to indicate that the phone can pull maximum current. Unfortunately, every manufacturer (and sometimes every phone) uses different resistances.

    See http://electronics.stackexchange.com/questions/144... for a brief writeup.

    For what it's worth, I've only had this problem with iDevices and the HP Touchpad. I own circa-2011+ HTC, Motorola and Samsung phones, and they all work fine with every charger. My Droid 2 Global was my primary work phone until a few months ago, and works great with every charger. Not sure why your wife is having problems there.
  • crankerchick - Tuesday, October 16, 2012 - link

    "The non-LTE phones see a sharp drop in battery life. At least at 28nm the slower air interfaces simply have to remain active (and drawing power) for longer, which results in measurably worse battery life. Again, the thing to be careful of here is there's usually a correlation between network speed and how aggressive you use the device. With a workload that scaled with network speed you might see closer numbers between 3G and 4G LTE."

    Perhaps you all could devise a test for this? Something like, change your LTE and 3G tests, where you decrease the time between page loads for the LTE test, to simulate doing more browsing since the pages load faster? One data point on this, with a reasonably selected change in page load duration, would be very helpful now that we have this very interesting dynamic clearly visible.

    That said, as always, I appreciate the reviews presented here. Always thorough with lots of information to chew on beyond specs and "user opinion on user experience."

    Just wish the reviews didn't take so long, but they are always worth it in the end.
  • TofDriver - Tuesday, October 16, 2012 - link

    Thanks for this awesome article. Gigantic work, well worth the wait.
    I've learnt so much.
    Would still appreciate it as an ebook, even after the web reading!
    Seems like you're perfectionists who love to push limits... To me it does resonate with the team who designed the reviewed product.
  • name99 - Tuesday, October 16, 2012 - link

    "Another potential explanation is that the 3-wide front end allowed for better utilization of the existing two ALUs, although it's also unlikely that we see better than perfect scaling simply due to the addition of an extra decoder."

    Remember the standard numbers. On this type of integer code:
    1/6 instructions are branch
    1/6 instructions are store
    1/3 instructions are load
    1/3 instructions are ALU
    This means the usual first throttling point is cache access, if you can only do one load/store per cycle.
    If you limit your cache to one op/cycle, it's generally not worth going beyond 2-wide --- too often you're waiting on the cache.
    Once you widen your cache (usually, at this stage, by allowing simultaneous read and write per cycle) three-wide makes sense.
    Each cycle now (on ideal and some sort of "average" idealized code) you can do some sort of combination of half a branch, 1.5 load/stores, and 1 ALU op. Meaning that 2 ALUs (as long as they are not overloaded and also handling some aspect of the load/store) is enough for now.
    [Of course things never work out quite this ideal --- you have burstiness in operation types, not to mention delays. But the compiler should try to schedule instructions to get this sort of average, and likewise the re-order queues will do what they can to shuffle things to this sort of average. 2ALUs helps with the bursts, 3ALUs is overkill.]

    So I would say the primary important change made to go to three-wide in a way that is not a waste of time was to convert the L1 cache to dual-ported, supporting simultaneous load & store per cycle.
  • jiffylube1024 - Tuesday, October 16, 2012 - link

    I have to commend the Anandtech team for the great review! It was a long wait, but well worth it. The info on anodizing, the "Swift" CPU @ 1.3 GHz, camera performance, etc. was worth waiting for. This article, in my eyes, is a culmination of the Anandtech team's knowledge in the tech industry - deconstructing A6 to figure out what it's made of, discussing Apple's manufacturing capabilities, etc. Very informative and well written!

    I am always amazed at how many complaints (and petty platform wars) get exposed on the comment board. I certainly appreciate them when an article is poorly written, contains false information or outright lies, but with an article like this, the comments section seems shy of the effusive praise it deserves!
    ------

    On a slight tangent, I've enjoyed the first 8 Anandtech podcasts as well, and I have to say that I look forward to more non-iPhone related discussion on future podcasts. The information was much appreciated, but for a tech site as broad as Anandtech, the first 8 podcasts have been VERY iPhone heavy in their content! Keep up the good work.
  • jamyryals - Thursday, October 18, 2012 - link

    I think you're right, it has been iPhone heavy, but the start of the podcast kind of lined up with the launch/review process. Let's be honest, it is a huge selling high quality device and it's treated as such. I have a feeling Brian and Anand will have a lot to say about all of the impending Nexii/WP8 when they come out this quarter.
  • krumme - Tuesday, October 16, 2012 - link

    Good to see the reviewers' appreciation of the BSI sensor's low light capabilities, reflecting real world usage. Opposite to a lot of uninformed, stupid reviews on the net.

    It's exactly the same practical difference between the S2 and S3 cameras. Big difference for real usage.

    The megapixel race must stop now. 8MP is way too much for the quality anyway.
  • Zanegray - Tuesday, October 16, 2012 - link

    I love the level of analysis and attention to detail. Keep it up!
  • mrdude - Tuesday, October 16, 2012 - link

    Wow, what an article. Really fantastic read. The lengths you guys have gone to in this review is stunning, frankly. Well done. Although I'm no Apple fanatic, I must say that this is one of the better articles I've read on AT :)
  • Dennis Travis - Tuesday, October 16, 2012 - link

    Totally outstanding review. You guys covered everything. Thanks so much!
