Final Words

Putting out a new chassis design, whether large or small, requires a ton of resources and effort. There are up-front design, tooling, prototyping and manufacturing costs that have to be recouped over the life of the product. The newer the product, the less likely Apple is to re-use its design. We saw this with the first-generation iPhone and Apple TV, both of which saw completely new designs in their second incarnations. Have a look at Apple’s more mature product lines and you’ll see a much longer design lifespan. The MacBook Pro is going on three years since a major redesign and the Mac Pro has gone even longer at four (six if you count the Power Mac G5 as an early rev of the design). Apple uses design as a competitive advantage. In markets where it feels more confident or less driven to compete, designs are allowed to live on for longer - improving the bottom line but removing one reason to upgrade. In the most competitive markets, however, Apple definitely leans on a rapidly evolving design as a strength. The iPhone is no exception to this rule.


The evolution of iPhone (Left to right: iPhone 4S, iPhone 4, iPhone 3GS, iPhone 1)

Thus far Apple has shown that it’s willing to commit to a 2-year design cycle with the iPhone. I would go as far as to say that from a design standpoint, Apple isn’t terribly pressured to evolve any quicker. There are physical limits to device thickness if you’re concerned with increasing performance and functionality. Remember, the MacBook Air only happened once Moore’s Law gave us fast-enough CPUs at the high-end that we could begin to scale back TDP for the mainstream. Smartphones are nowhere near that point yet. The iPhone 4S, as a result, is another stop along the journey to greater performance. So how does it fare? 

The original iPhone 4 design was flawed. Although Apple downplayed the issue publicly, it solved the death grip antenna problem with the CDMA iPhone 4. The iPhone 4S brings that fix to everyone. Unless you remain stationary with your phone in an area with good coverage, the dual-chain antenna diversity introduced with the iPhone 4S is a tangible and significant improvement over the previous GSM iPhone 4. In North Raleigh, AT&T’s coverage is a bit on the sparse side. I get signal pretty much everywhere, but the quality of that signal isn’t all that great. The RSSI at my desk is never any better than -87dBm, and is more consistently around -94dBm. Go down to my basement and the best you’ll see is -112dBm, and you’re more likely to see numbers as low as -130dBm thanks to some concrete walls and iron beams. The iPhone 4’s more sensitive cellular stack made it possible to receive phone calls and text messages down there, although I couldn’t really carry on a conversation - particularly if I held the phone the wrong way. By comparison, the iPhone 3GS couldn’t do any of that. The iPhone 4S’ antenna diversity makes it so that I can actually hold a conversation down there or pull ~1Mbps downstream despite the poor signal strength. This is a definite improvement in the one area that is rarely discussed in phone reviews: the ability to receive and transmit a cellular signal. The iPhone 4 already had one of the most sensitive cellular stacks of any smartphone we’d reviewed; the 4S simply makes it better.
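
Because signal strength in dBm is logarithmic, the swings quoted above are larger than they might look at first glance. The sketch below is purely illustrative - the conversion formula is standard, and the readings are simply the ones from the paragraph above.

```python
# Convert the RSSI readings quoted above from dBm to absolute power in mW.
# dBm is logarithmic: power_mW = 10 ** (dBm / 10), so every 10dB drop is a
# 10x reduction in received power.

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

readings = {
    "desk, best":      -87,
    "desk, typical":   -94,
    "basement, best":  -112,
    "basement, worst": -130,
}

for label, dbm in readings.items():
    print(f"{label:16s} {dbm:5d} dBm = {dbm_to_mw(dbm):.3e} mW")

# The ~25dB gap between the desk (-87dBm) and the basement (-112dBm) works
# out to roughly 300x less power at the antenna, which is why receive
# sensitivity and antenna diversity matter so much at the edge of coverage.
```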

Performance at the edge of reception isn’t the only thing that’s improved. If you’re on an HSPA+ network (e.g. AT&T), overall data speeds have shifted upwards. As our Speedtest histograms showed, the iPhone 4S is about 20% faster than the 4 in downstream tests. Best-case performance went up significantly as a result of the move to support HSPA+ 14.4. While the iPhone 4 would top out at around 6Mbps, the 4S is good for nearly 10Mbps. We’re still not near LTE speeds, but the 4S does make things better across the spectrum, regardless of cellular conditions.

The improvements don’t stop at the radio: Apple significantly upgraded the camera on the 4S. It’s not just about pixel count; although the move to 8MP does bring Apple up to speed there, overall quality is improved as well. The auto white balance is much better than the 4’s, equaling the Samsung Galaxy S 2 and setting another benchmark for the rest of the competition to live up to. Sharpness remains unmatched by any of the other phones we’ve reviewed thus far, whether in the iOS or Android camp. Performance outside of image quality has also seen a boost. The camera launches and fires off shots much more quickly than its predecessor.

Our only complaint about the camera has to do with video. Apple is using bitrate, rather than more complex encoding schemes, to deliver better overall image quality. The result is good, but file sizes are larger than they would need to be had Apple implemented hardware support for High Profile H.264 encoding.
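
To put the file size point in rough perspective, here is a back-of-the-envelope sketch of how clip size scales with bitrate. Neither bitrate below comes from this review - both are illustrative assumptions, with the High Profile figure simply standing in for "similar quality at a noticeably lower bitrate."

```python
# Illustrative only: how recorded clip size scales with video bitrate.
# The ~24 Mbps and ~14 Mbps figures are assumptions for the sake of the
# example, not measured values from the iPhone 4S.

def clip_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes for a roughly constant-bitrate clip."""
    return bitrate_mbps * seconds / 8  # megabits/s -> megabytes/s, times duration

one_minute = 60
print(f"1 min @ ~24 Mbps (assumed high-bitrate encode):      {clip_size_mb(24, one_minute):.0f} MB")
print(f"1 min @ ~14 Mbps (hypothetical High Profile encode): {clip_size_mb(14, one_minute):.0f} MB")
```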

Then there’s the A5 SoC. When we first met the A5 in the iPad 2 it was almost impossible to imagine that level of performance, particularly on the GPU side, in a smartphone. As I hope we’ve proven through our analysis of both the solution and its lineage, Apple is very committed to the performance race in its iOS devices. Apple more than doubled the die size going from the A4 to the A5 (~53mm^2 to ~122mm^2) on the same manufacturing process. Note that in the process Apple didn’t integrate any new functionality onto the SoC; the additional transistors were purely for performance. To be honest, I don’t expect the pursuit to slow down anytime soon.

The gains in CPU and GPU speed aren’t simply academic. The 4S is noticeably faster than its predecessor and, in its weakest areas, finally comparable to modern-day Android smartphones. In the past, iOS could guarantee a smooth user experience, but application response and web page loading times were quickly falling behind the latest wave of dual-core Android phones. The 4S brings the iPhone back up to speed.

On the software side, there’s Siri. The technology is a nod to decades of science fiction where users talk to an omnipotent computer that carries out complex calculations and offers impartial, well-educated advice when needed. In practice, Siri is nowhere near that. Through an admittedly expansive database of patterns, Siri is able to give the appearance of understanding and depth. That alone is enough to convince many mainstream consumers. The abstraction of Wolfram Alpha alone is a significant feature, as I’m not sure how many out-of-the-loop smartphone users would have ever used it as a tool had it not been for Siri. But what about power users - is Siri a game changer?

There are a few areas where Siri does improve the user experience. Making appointments and setting alarms are very natural and quite convenient thanks to Siri. There’s still the awkwardness of giving your phone verbal commands, but if no one is looking I find that it’s quicker to deal with calendar items via Siri than by manually typing them in. Setting alarms via Siri actually offers an accuracy benefit as well. Whereas I’ve all too frequently set an alarm for 7PM instead of 7AM because I didn’t definitively swipe the day/night roller, Siri doesn’t let me make that mistake. Searching for restaurants or figuring out how much to tip are nice additions as well.

Text dictation is a neat feature for sure, but to be honest I’m still not likely to rely on it for sending or replying to messages. It’s convenient while driving but the accuracy isn’t high enough to trust it with sending messages to important contacts.

Siri is a welcome addition, but not a life changer. As Apple continues to expand Siri’s database and throws more compute at the problem (both locally on the phone and remotely in iCloud), we’ll hopefully see the technology mature into something closer to what years of science fiction movies have promised us.

From a hardware perspective, the iPhone 4S is a great upgrade to the iPhone 4. If the 4 is your daily driver, the 4S is a noticeable upgrade despite the lack of physical differences. While not quite the speed improvement we saw going from the iPhone 3G to the 3GS, the 4S addresses almost every weakness of the iPhone 4.

The biggest issue is timing one’s upgrade. History (and common sense) tells us that in about 12 months we’ll see another iPhone. If you own an iPhone 4 and typically upgrade yearly, the 4S is a no-brainer. If you want to keep your next phone for two years, I’d wait until next year, when it’s possible you’ll see a Cortex A15-based iPhone from Apple with Qualcomm’s MDM9615 (or similar) LTE modem. The move to 28/32nm should keep power in check while allowing for much better performance.

If you own anything older than an iPhone 4 (e.g. the original iPhone, 3G or 3GS), upgrading to the 4S today is a much more tempting option. The slower Cortex A8 is pretty long in the tooth by now, and anything older than that is ARM11-based, which I was ready to abandon two years ago.

Comments

  • robco - Monday, October 31, 2011

    I've been using the 4S from launch day and agree that Siri needs some work. That being said, it's pretty good for beta software. I would imagine Apple released it as a bonus for 4S buyers, but also to keep the load on their servers small while they get some real-world data before the final version comes in an update.

    The new camera is great. As for me, I'm glad Apple is resisting the urge to make the screen larger. The Galaxy Nexus looks nice, but the screen will be 4.65". I want a smartphone, not a tablet that makes phone calls. I honestly wouldn't want to carry something much larger than the iPhone and I would imagine I'm not the only one.

    Great review as always.
  • TrackSmart - Monday, October 31, 2011

    I'm torn on screen size myself. Pocketable is nice. But I'm intrigued by the idea of a "mini-tablet" form factor, like the Samsung Galaxy Note with its 5.3" screen (1280x800 resolution) and almost no bezel. That's HUGE for a phone, but if it replaces a tablet and a phone, and fits my normal pants pockets, it would be an interesting alternative. The pen/stylus is also intriguing. I will be torn between small form factor vs. mini-tablet when I make my phone upgrade in the near future.

    To Anand and Brian: I'd love to see a review of the Samsung Galaxy Note. Maybe Samsung can send you a demo unit. It looks like a refined Dell Streak with a super-high resolution display and Wacom digitizer built in. Intriguing.
  • Rick83 - Wednesday, November 2, 2011

    That's why I got an Archos 5 two years ago. And what can I say? It works.

    Sadly the Note is A) three times as expensive as the Archos
    and B) not yet on Android 4

    there's also C) Codec support will suck compared to the Archos, and I'm pretty sure Samsung won't release an open bootloader, like Archos does.

    I'm hoping that Archos will soon release a refresh of their smaller tablets based on OMAP 4 and Android 4.
    Alternatively, and equally as expensive as the Note, is the Sony dual-screen tablet. Looks interesting, but same caveats apply....
  • kylecronin - Monday, October 31, 2011

    > It’s going to be a case by case basis to determine which 4 cases that cover the front of the display work with the 4S.

    Clever
  • metafor - Monday, October 31, 2011

    "Here we have two hypothetical CPUs, one with a max power draw of 1W and another with a max power draw of 1.3W. The 1.3W chip is faster under load but it draws 30% more power. Running this completely made-up workload, the 1.3W chip completes the task in 4 seconds vs. 6 for its lower power predecessor and thus overall power consumed is lower. Another way of quantifying this is to say that in the example above, CPU A does 5.5 Joules of work vs. 6.2J for CPU B."

    The numbers are off. 4 seconds vs 6 seconds isn't 30% faster. Time-to-complete is the inverse of clockspeed.

    Say a task takes 100 cycles. It would take 1 second on a 100Hz, 1 IPC CPU and 0.77 seconds on a 130Hz, 1 IPC CPU. This translates to 4.62 sec if given a task that takes 600 cycles of work (6 sec on the 100Hz, 1 IPC CPU).

    Or 1W * 6s = 6J = 1.3W * 4.62s

    Exactly the same amount of energy used for the task.
  • Anand Lal Shimpi - Monday, October 31, 2011

    Err sorry, I should've clarified. For the energy calculations I was looking at the entire period of time (10 seconds) and assumed CPU A & B have the same 0.05W idle power consumption.

    Doing the math that way you get 1W * 6s + 0.05W * 4s = 6.2J (CPU B)

    and

    1.3W * 4s + 0.05W * 6s = 5.5J (CPU A)
  • metafor - Monday, October 31, 2011

    Erm, that still presents the same problem. That is, a processor running at 130% the clockspeed will not finish in 4 seconds, it will finish in 4.62s.

    So the result is:

    1W * 6s + 0.05W * 4s = 6.2J (CPU B)
    1.3W * 4.62s + 0.05 * 5.38s = 6.275J (CPU A)

    There's some rounding error there. If you use whole numbers, say 200Hz vs 100Hz:

    1W * 10s + 0.05W * 10s = 10.5J (CPU B running for 20s with a task that takes 1000 cycles)

    2W * 5s + 0.05W * 15s = 10.75J (CPU A running for 10s with a task that takes 1000 cycles)
  • Anand Lal Shimpi - Monday, October 31, 2011

    I wasn't comparing clock speeds, you have two separate processors - architectures unknown, 100% hypothetical. One draws 1.3W and completes the task in 4s, the other draws 1W and completes in 6s. For the sake of drawing a parallel to the 4S vs 4 you could assume that both chips run at the same clock. The improvements are entirely architectural, similar to A5 vs. A4.

    Take care,
    Anand
  • metafor - Tuesday, November 1, 2011

    In that case, the CPU that draws 1.3W is more power efficient, as it managed to gain a 30% power draw for *more* than a 30% performance increase.

    I absolutely agree that this is the situation with the A5 compared to the A4, but that has nothing to do with the "race to sleep" problem.

    That is to say, if CPU A finishes a task in 4s and CPU B finishes a task in 6s. CPU A is more than 30% faster than CPU B; it has higher perf/W.
  • Anand Lal Shimpi - Tuesday, November 1, 2011

    It is race to sleep though. The more power efficient CPU can get to sleep quicker (hurry up and wait is what Intel used to call it), which offsets any increases in peak power consumption. However, given the right workload, the more power efficient CPU can still use more power.

    Take care,
    Anand
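
A minimal sketch of the arithmetic in the exchange above, under the assumptions stated there: a 10-second window, 0.05W of idle power for both hypothetical chips, and an active burst followed by sleep. Anand's scenario assumes CPU A is architecturally faster at the same clock and finishes in 4 seconds at 1.3W; metafor's scenario assumes the 30% extra power only buys a 30% higher clock, so the task takes 6s / 1.3 ≈ 4.62s.

```python
# Sketch of the race-to-sleep arithmetic from the comment thread above.
# Assumptions (from the thread): a fixed 10s window, 0.05W idle power for
# both chips, full active power while the task runs, then sleep.

def energy_joules(active_w, active_s, idle_w=0.05, window_s=10.0):
    """Total energy over the window: active burst, then idle until the window ends."""
    return active_w * active_s + idle_w * (window_s - active_s)

cpu_b = energy_joules(1.0, 6.0)          # 1W * 6s + 0.05W * 4s = 6.2J

# Anand's scenario: CPU A is architecturally faster at the same clock and
# finishes the task in 4s while drawing 1.3W.
cpu_a_arch = energy_joules(1.3, 4.0)     # 1.3W * 4s + 0.05W * 6s = 5.5J

# metafor's scenario: the 30% extra power only buys a 30% higher clock,
# so the task takes 6s / 1.3 ~= 4.62s and the energy advantage vanishes.
cpu_a_clock = energy_joules(1.3, 6.0 / 1.3)

print(f"CPU B:                  {cpu_b:.2f} J")
print(f"CPU A (faster uarch):   {cpu_a_arch:.2f} J")
print(f"CPU A (30% clock only): {cpu_a_clock:.2f} J")
```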
