Apple's Swift: Pipeline Depth & Memory Latency

Section by Anand Shimpi

For the first time since the iPhone's introduction in 2007, Apple is shipping a smartphone with a CPU clock frequency greater than 1GHz. The Cortex A8 in the iPhone 3GS hit 600MHz, while the iPhone 4 took it to 800MHz. With the iPhone 4S, Apple chose to maintain the same 800MHz operating frequency as it moved to dual-Cortex A9s. Staying true to its namesake, Swift runs at a maximum frequency of 1.3GHz as implemented in the iPhone 5's A6 SoC. Note that it's quite likely the 4th generation iPad will implement an even higher clocked version (1.5GHz being an obvious target).

Clock speed alone doesn't tell us everything we need to know about performance. Deeper pipelines can easily boost clock speed but come with steep penalties for mispredicted branches. ARM's Cortex A8 featured a 13 stage pipeline, while the Cortex A9 cut that down to only 8 stages yet maintained similar clock speeds. Reducing pipeline depth without sacrificing clock speed contributed greatly to the Cortex A9's tangible increase in performance. The Cortex A15 moves to a fairly deep 15 stage pipeline, while Krait is a bit more conservative at 11 stages. Intel's Atom has the deepest pipeline (ironically enough) at 16 stages.

To find out where Swift falls in all of this I wrote two different codepaths. The first featured an easily predictable branch that should almost always be taken. The second codepath featured a fairly unpredictable branch. Branch predictors work by looking at branch history - branches with predictable history should be, well, easy to predict while the opposite is true for branches with a more varied past. This time I measured latency in clocks for the main code loop:

Branch Prediction Code
              Apple A3 (Cortex A8 @ 600MHz)   Apple A5 (2 x Cortex A9 @ 800MHz)   Apple A6 (2 x Swift @ 1300MHz)
Easy Branch   14 clocks                       9 clocks                            12 clocks
Hard Branch   70 clocks                       48 clocks                           73 clocks

The hard branch involves more compares and some division (I'm basically branching on odd vs. even values of an incremented variable) so the loop takes much longer to execute, but note the dramatic increase in cycle count between the Cortex A9 and Swift/Cortex A8. If I'm understanding this data correctly it looks like the mispredict penalty for Swift is around 50% longer than for ARM's Cortex A9, and very close to the Cortex A8. Based on this data I would peg Swift's pipeline depth at around 12 stages, very similar to Qualcomm's Krait and just shy of ARM's Cortex A8.

Note that despite the significant increase in pipeline depth Apple appears to have been able to keep IPC, at worst, constant (remember back to our scaled Geekbench scores - Swift never lost to a 1.3GHz Cortex A9). The obvious explanation there is a significant improvement in branch prediction accuracy, which any good chip designer would focus on when increasing pipeline depth like this. Very good work on Apple's part.

The remaining aspect of Swift that we have yet to quantify is memory latency. From our iPhone 5 performance preview we already know there's a tremendous increase in memory bandwidth to the CPU cores, but as the external memory interface remains 64 bits wide all of the changes must be internal to the cache and memory controllers. I went back to Nirdhar's iOS test vehicle and wrote some new code, this time to access a large data array whose size I could vary. I created an array of a finite size and added the numbers stored in it, then increased the array size and measured the relationship between array size and code latency. With enough data points I should get a good idea of cache and memory latency for Swift compared to Apple's implementation of the Cortex A8 and A9.

At relatively small data structure sizes Swift appears to be a bit quicker than the Cortex A8/A9, but there's near convergence around 4 - 16KB. Take a look at what happens once we grow beyond the 32KB L1 data cache of these chips. Swift manages around half the latency of the Cortex A9 when running this code (the Cortex A8 has a 256KB L2 cache so its latency shoots up much sooner). Even at very large array sizes Swift's latency is improved substantially. Note that this data is substantiated by all of the other iOS memory benchmarks we've seen. A quick look at Geekbench's memory and stream tests shows huge improvements in bandwidth utilization:

Couple the dedicated load/store port with a much lower latency memory subsystem and you get 2.5 - 3.2x the memory performance of the iPhone 4S. It's the changes to the memory subsystem that really enable Swift's performance.

276 Comments

  • medi01 - Wednesday, October 17, 2012 - link

    1) Compare ipad2's gamut, cough
    2) Check values on toms
    http://media.bestofmicro.com/3/4/331888/original/g...
    http://www.tomshardware.com/reviews/ipad-3-benchma...

Unlike Anand, Tom's moved beyond primitive contrast/brightness benchmarking quite a while ago.
  • thunng8 - Thursday, October 18, 2012 - link

Not sure if I should trust Tom's figures compared to Anand's.

In any case, both show the iPad 3 has a higher gamut, especially in sRGB.
  • steven75 - Wednesday, October 17, 2012 - link

    I think what you meant to say is that AMOLEDs win on black levels and that's about it. LCDs still win in accuracy and most importantly ability to see them in outdoor settings.
  • KoolAidMan1 - Tuesday, October 16, 2012 - link

Not even close. Even the better Android displays like the Galaxy S3 have PenTile displays. Despite having more "pixels" the S3 actually has fewer subpixels than the iPhone does. Unless you have bad eyesight the S3 display looks really bad in comparison, and this is before we get to even worse smartphone displays out there by HTC, etc.
  • Sufo - Tuesday, October 16, 2012 - link

    Old pentile displays were visibly jaggy on vertical lines - even my old lumia 800 exhibited this to some extent. On the GS3 tho, it is not noticeable and it has nothing to do with eyesight.

Your comment makes it sound (to someone who has seen many different smartphone displays in person) as though you haven't spent much time with the GS3 (read: many smartphones) at all. Simply mentioning that it uses a PenTile subpix config, from you, sounds like regurgitated information. Not only that, but you seem to gloss over the many benefits that amoled panels bring. It's arguable that these benefits are more important than an accurate colourspace on (specifically) a mobile phone - although it is ofc entirely subjective.

    This brings me to the last tell of ignorance I noted; your mention of HTC. Have you used a One X? For those who do not like amoled panels, the display on the one x is perhaps nicer than both the gs3 and the ip5. Ofc you may say Android is not your cup of tea, and that's a perfectly justifiable stance, however it has nothing to do with display tech.

    tl;dr You sound like you don't know what you're talking about
  • KoolAidMan1 - Tuesday, October 16, 2012 - link

    I do know what I'm talking about given that I've seen many smartphones, and I've calibrated my share of desktop displays to sRGB.

    Differences in display tech aside, Android phones have never gotten color profiles right, EVER. They're almost always oversaturated, have too much contrast, and are inaccurate. Anand even posted a difference in color accuracy between several devices, and the profile for the S3 is totally what I expected.

    The S3 really doesn't look good, period, but then again there are people who argue that TN panels are just fine against IPS. I'm used to hearing nonsense on forums when it comes to display from people who don't know what to look for.
  • KoolAidMan1 - Tuesday, October 16, 2012 - link

    BTW, apologies if that came out harsh, but the difference in color and contrast accuracy between something like the S3 and a properly calibrated device is a night and day difference to me. I'm pretty sensitive to display quality though; my main desktop display at home is still an NEC and my plasma is a Pioneer Elite (RIP)
  • rocketbuddha - Tuesday, October 16, 2012 - link

    For Android you have the following 720p HD Displays

    SLCD - HTC Rezound (2011 tech)
    SLCD 2 - HTC One X, Sony HD
    HD SAMOLED Pentile - GS3, Galaxy Nexus, Moto Razr HD
    HD SAMOLED RGB - Galaxy Note II
    True IPS LCD - LG Optimus 4X, Optimus G
    Super IPS LCD -Asus Padphone, Sharp phones etc

    So you have big set of choices. If dark contrasts are important then SAMOLED is the way to go. SAMOLED RGB over SAMOLED Pentile.
    If overall color and whites are important go with SLCD2.
IPS LCDs are the closest to the Retina Display and you have choices there too. You can pick and choose what is good for you and have alternatives.
  • Spunjji - Thursday, October 18, 2012 - link

The HTC One X has what is hailed as one of the best LCD smartphone displays out there. Your claim is invalid.

    Similarly, the Galaxy Note 2 has an AMOLED display without PenTile. Sure, it's lower density, but one does not hold a 5.5" screen so close to one's face.
  • medi01 - Wednesday, October 17, 2012 - link

    ""The iPhone 5 display is better than any current Android display.""
    Why don't you go hit your dumb head with something heavy, ipad would do?
