Custom Code to Understand a Custom Core

Section by Anand Shimpi

All Computer Engineering students at NCSU had to take programming courses. Given that my dad is a Computer Science professor, I always had exposure to programming, but I never considered it my strong suit - perhaps my gravitating towards hardware was some passive rebellious thing. Either way, I knew that in order to really understand Swift, I'd have to do some coding on my own. The only problem? I have zero experience writing Objective-C code for iOS, and not enough time to go through a crash course.

I had code that I wanted to time/execute in C, but I needed it ported to a format that I could easily run and monitor on an iPhone. I enlisted the help of a talented developer friend who graduated from NCSU around the same time I did, Nirdhar Khazanie. Nirdhar has been working on mobile development for years now, and he quickly turned the garbled C code I wanted to run into something that executed beautifully on the iPhone. He gave me a framework in which I could vary both the instruction mix and data set sizes, which made this next set of experiments possible. It's always helpful to know a good programmer.

So what did Nirdhar's app let me do? Let's start at the beginning. ARM's Cortex A9 has two independent integer ALUs; does Swift have more? To find out, I created a loop of independent integer adds. The variables are all independent of one another, which should allow for plenty of instruction level parallelism. The code loops many times, which should make for easily predictable branches. My code is hardly optimal, but I did keep track of how many millions of adds were executed per second. I also reported how long each iteration of the loop took, on average.
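
To make this concrete, here's a minimal sketch of that kind of loop in plain C - not the actual code Nirdhar wrapped up for the iPhone, and the iteration count, constants and timing method are all illustrative assumptions:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical sketch of the kind of loop described above - not the
   actual test code. Four independent add chains give the core plenty
   of instruction level parallelism to extract; the iteration count is
   an arbitrary illustrative choice. Seeding the values from argc keeps
   an optimizing compiler from folding the whole loop into a constant. */
int main(int argc, char *argv[])
{
    (void)argv;
    const long iterations = 100000000L;
    int a = argc, b = argc + 1, c = argc + 2, d = argc + 3;

    clock_t start = clock();
    for (long i = 0; i < iterations; i++) {
        a += 7;   /* each add depends only on its own chain, */
        b += 11;  /* so a wider machine can retire more of   */
        c += 13;  /* them per cycle                          */
        d += 17;
    }
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%.1f million adds/sec (checksum %ld)\n",
           4.0 * iterations / seconds / 1e6, (long)a + b + c + d);
    return 0;
}
```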

Integer Add Code
  | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz
Integer Add Test | 207 MIPS | 336 MIPS | 369 MIPS | 9.8%
Integer Add Latency in Clocks | 23 clocks | - | 21 clocks | -

The code here should be fairly bound by the integer execution path. We're showing a 9.8% increase in performance. Average latency improves slightly, by 2 clocks, but we're not seeing the sort of ILP increase that would come from a third ALU that could easily be kept fed. The slight improvement in performance here could be due to a number of things. A quick look at some of Apple's own documentation confirms what we've seen here: Swift has two integer ALUs and can issue 3 operations per cycle (implying a 3-wide decoder as well). I don't know whether the third decoder is responsible for the slight gains in performance here or not.

What about floating point performance? ARM's Cortex A9 only has a single issue port for FP operations, which seriously hampers FP performance. Here I modified the code from earlier to do a bunch of single and double precision FP multiplies:

FP Multiply Code
  | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz
FP Mul Test (single precision) | 94 MFLOPS | 153 MFLOPS | 143 MFLOPS | -7%
FP Mul Test (double precision) | 87 MFLOPS | 141 MFLOPS | 315 MFLOPS | 123%
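
For reference, a similarly hypothetical sketch of the FP variant is below. It mirrors the integer loop with independent multiplies, shown here in double precision (the single precision version just swaps double for float):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical sketch of the FP variant - same structure as the integer
   loop, just with independent multiplies. The multipliers sit close to
   1.0 so the values stay finite over many iterations. Not the actual
   test code.                                                           */
int main(void)
{
    const long iterations = 50000000L;
    double a = 1.0001, b = 1.0002, c = 1.0003, d = 1.0004;

    clock_t start = clock();
    for (long i = 0; i < iterations; i++) {
        a *= 1.0000001;
        b *= 1.0000002;
        c *= 1.0000003;
        d *= 1.0000004;
    }
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%.1f MFLOPS (checksum %g)\n",
           4.0 * iterations / seconds / 1e6, a + b + c + d);
    return 0;
}
```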

There's actually a slight regression in single precision FP multiply performance, likely because performance wouldn't scale perfectly linearly from 800MHz to 1.3GHz. Notice what happens when we move up to double precision multiplies though: performance goes up on Swift but remains essentially unchanged on the Cortex A9. Given Swift's support for ARM's VFPv4 extensions, Apple likely has a second FP unit in Swift that can help with FMAs or improve double precision FP performance. It's also possible that Swift is a 128-bit wide NEON machine and my DP test compiles down to NEON code that enjoys the benefits of a wider engine. I ran the same test with FP adds and didn't notice any changes to the data above.

Sanity Check with Linpack & Passmark

Section by Anand Shimpi

Not completely trusting my own code, I wanted some additional data points to help understand the Swift architecture. I first turned to the iOS port of Linpack and graphed FP performance vs. problem size.

Even though I ran the benchmark for hundreds of iterations at each data point, the curves didn't come out as smooth as I would've liked. Regardless, there's a clear trend. Swift maintains a huge performance advantage, even at small problem sizes, which supports the theory of two ports to dedicated FP hardware. There's also a much smaller relative drop in performance when going out to main memory. If you do the math on the original unscaled 4S scores you get the following data:

Linpack Throughput: Cycles per Operation
  | Apple Swift @ 1300MHz (iPhone 5) | ARM Cortex A9 @ 800MHz (iPhone 4S)
~300KB Problem Size | 1.45 cycles | 3.55 cycles
~8MB Problem Size | 2.08 cycles | 6.75 cycles
Increase | 43% | 90%
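
If you're wondering where the cycles-per-operation figures come from, the conversion is simply core clock divided by sustained throughput. The quick sketch below back-calculates the MFLOPS inputs from the table purely for illustration; they aren't separately measured values:

```c
#include <stdio.h>

/* Cycles per operation = core clock (MHz) / sustained throughput (MFLOPS).
   The MFLOPS values below are back-calculated from the table above purely
   for illustration: 1300 / 897 ~= 1.45 and 800 / 225 ~= 3.55.             */
static double cycles_per_op(double clock_mhz, double mflops)
{
    return clock_mhz / mflops;
}

int main(void)
{
    printf("Swift @ 1300MHz, ~300KB: %.2f cycles/op\n", cycles_per_op(1300.0, 897.0));
    printf("A9    @  800MHz, ~300KB: %.2f cycles/op\n", cycles_per_op(800.0, 225.0));
    return 0;
}
```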

Swift is simply able to hide memory latency better than the Cortex A9. Concurrent FP/memory operations seem to do very well on Swift...

As a last sanity check, I used Passmark, another general purpose iOS microbenchmark.

Passmark CPU Performance
  | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz
Integer | 257 | 418 | 614 | 47.0%
FP | 230 | 374 | 813 | 118%
Primality | 54 | 87 | 183 | 109%
String qsort | 1065 | 1730 | 2126 | 22.8%
Encryption | 38.1 | 61.9 | 93.5 | 51.0%
Compression | 1.18 | 1.92 | 2.26 | 17.9%

The integer math test uses a large dataset and performs a number of add, subtract, multiply and divide operations on the values. The dataset measures 240KB per core, which is enough to stress the L2 cache of these processors. Note the 47% increase in performance over a scaled Cortex A9.
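
Passmark doesn't publish its source, so the snippet below is only a rough, hypothetical sketch of the kind of workload described: simple integer arithmetic swept repeatedly over a working set too large to stay in the L1 data cache:

```c
#include <stdio.h>
#include <stdlib.h>

/* Purely illustrative sketch of the kind of workload described above -
   not Passmark's actual code. A 240KB working set of 32-bit integers
   overflows the 32KB L1 data caches on these cores, so the loop ends
   up exercising the L2 as well as the integer units.                  */
#define WORKING_SET_BYTES (240 * 1024)
#define COUNT (WORKING_SET_BYTES / sizeof(int))

int main(void)
{
    int *data = malloc(WORKING_SET_BYTES);
    if (!data)
        return 1;

    for (size_t i = 0; i < COUNT; i++)
        data[i] = (int)i;

    long long checksum = 0;
    for (int pass = 0; pass < 100; pass++) {
        for (size_t i = 0; i < COUNT; i++) {
            int v = data[i];
            v = (v * 7 + 3) % 65521;  /* multiply and add, kept bounded */
            v = v - (v / 5);          /* divide and subtract            */
            data[i] = v;
            checksum += v;
        }
    }

    printf("checksum: %lld\n", checksum);
    free(data);
    return 0;
}
```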

The FP test is identical to the integer test (including dataset size) but works on 32 and 64-bit floating point values. The much larger gain here, on an otherwise similar workload, lends credibility to the theory that there are multiple FP pipelines in Swift.

The Primality benchmark is branch heavy and features a lot of FP math and compares. Once again we see huge scaling compared to the Cortex A9.

The qsort test features integer math and is very branch heavy. The memory footprint of the test is around 5MB, but the gains here aren't as large as we've seen elsewhere. It's possible that Swift features a much larger branch mispredict penalty than the A9.

The Encryption test works on a very small dataset that can easily fit in the L1 cache but is very heavy on the math. Performance scales very well here, almost mirroring the integer benchmark results.

Finally, the compression test shows the smallest gains once you take Swift's higher operating frequency into account. There's not much more to conclude here other than that we won't always see greater-than-generational scaling from Swift over the previous Cortex A9.
