Final Words

Putting out a new chassis design, whether large or small, requires a ton of resources and effort. There are up-front design, tooling, prototyping and manufacturing costs that have to be recouped over the life of the product. The newer the product, the less likely Apple is to re-use its design. We saw this with the first generation iPhone and Apple TV, both of which saw completely new designs in their second incarnations. Have a look at Apple’s more mature product lines and you’ll see a much longer design lifespan. The MacBook Pro is going on three years since its last major redesign and the Mac Pro is even longer at four (six if you count the Power Mac G5 as an early rev of the design). Apple uses design as a competitive advantage. In markets where it feels more confident or less driven to compete, designs are allowed to live on for longer - improving the bottom line but removing one reason to upgrade. In the most competitive markets, however, Apple definitely leans on a rapidly evolving design as a strength. The iPhone is no exception to this rule.


The evolution of the iPhone (left to right: iPhone 4S, iPhone 4, iPhone 3GS, original iPhone)

Thus far Apple has shown that it’s willing to commit to a 2-year design cycle with the iPhone. I would go so far as to say that, from a design standpoint, Apple isn’t terribly pressured to evolve any quicker. There are physical limits to device thickness if you’re concerned with increasing performance and functionality. Remember, the MacBook Air only happened once Moore’s Law gave us fast enough CPUs at the high end that we could begin to scale back TDP for the mainstream. Smartphones are nowhere near that point yet. The iPhone 4S, as a result, is another stop along the journey to greater performance. So how does it fare?

The original iPhone 4 design was flawed. Although Apple downplayed the issue publicly, it solved the death grip antenna problem with the CDMA iPhone 4. The iPhone 4S brings that fix to everyone. Unless you remain stationary with your phone in an area with good coverage, the dual-chain antenna diversity introduced with the iPhone 4S is a tangible and significant improvement over the previous GSM iPhone 4.

In North Raleigh, AT&T’s coverage is a bit on the sparse side. I get signal pretty much everywhere, but the quality of that signal isn’t all that great. The RSSI at my desk is never any better than -87dBm, and is more consistently around -94dBm. Go down to my basement and the best you’ll see is -112dBm, and you’re more likely to see numbers as low as -130dBm thanks to some concrete walls and iron beams. The iPhone 4’s more sensitive cellular stack made it possible to receive phone calls and text messages down there, although I couldn’t really carry on a conversation - particularly if I held the phone the wrong way. By comparison, the iPhone 3GS could not do any of that. The iPhone 4S’ antenna diversity makes it so that I can actually hold a conversation down there or pull ~1Mbps downstream despite the poor signal strength. This is a definite improvement in the one area that is rarely discussed in phone reviews: the ability to receive and transmit a cellular signal. The iPhone 4 already had one of the most sensitive cellular stacks of any smartphone we’d reviewed; the 4S simply makes it better.
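To put those RSSI figures in perspective, dBm is a logarithmic scale referenced to 1mW, so each 10dB drop is another factor of ten in received power. A quick illustrative sketch of the conversion (the readings are simply the ones quoted above):

```python
# Convert the RSSI readings above from dBm to absolute power in milliwatts.
# dBm is logarithmic and referenced to 1mW: P(mW) = 10^(dBm / 10).
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10.0)

readings = {
    "desk, best case": -87,
    "desk, typical": -94,
    "basement, best case": -112,
    "basement, worst case": -130,
}

for label, rssi in readings.items():
    print(f"{label:>20}: {rssi:4d} dBm ~= {dbm_to_mw(rssi):.1e} mW")
```

Going from -94dBm at the desk to -130dBm in the basement is a 36dB drop, nearly four orders of magnitude less received power, which is why antenna diversity matters so much at the edge of coverage.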

Performance at the edge of reception is not the only thing that’s improved. If you’re on an HSPA+ network (e.g. AT&T), overall data speeds have shifted upwards. As our Speedtest histograms showed, the iPhone 4S is about 20% faster than the 4 in downstream tests. Best case performance went up significantly as a result of the move to support HSPA+ 14.4. While the iPhone 4 would top out at around 6Mbps, the 4S is good for nearly 10Mbps. We’re still not near LTE speeds, but the 4S does make things better across the spectrum regardless of cellular conditions.

The improvements don’t stop at the radio; Apple significantly upgraded the camera on the 4S. It’s not just about pixel count, although the move to 8MP does bring Apple up to speed there; overall quality is improved as well. The auto white balance is much better than the 4’s, equaling the Samsung Galaxy S 2 and setting another benchmark for the rest of the competition to live up to. Sharpness remains unmatched by any of the other phones we’ve reviewed thus far, whether in the iOS or Android camp. Performance outside of image quality has also seen a boost: the camera launches and fires off shots much quicker than its predecessor.

Our only complaint about the camera has to do with video. Apple is using bitrate, rather than more complex encoding features, to deliver better overall image quality when it comes to video. The overall result is good, but file sizes are larger than they would need to be had Apple implemented hardware support for High Profile H.264.
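If you’re curious what your own clips look like, you can check the encoded profile and bitrate yourself. The sketch below shells out to ffprobe (part of the FFmpeg project) and assumes it’s installed; the file name is only a placeholder:

```python
# Print the H.264 profile and bitrate of a clip's first video stream.
# Requires ffprobe (FFmpeg) on the PATH; the path below is just an example.
import subprocess

def video_stream_info(path: str) -> str:
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",  # first video stream only
        "-show_entries", "stream=codec_name,profile,bit_rate",
        "-of", "default=noprint_wrappers=1",
        path,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(video_stream_info("IMG_0001.MOV"))
# Expect lines like codec_name=h264, profile=..., bit_rate=...;
# the complaint above is that the profile won't read "High".
```

High Profile adds more efficient coding tools, so the same visual quality can be reached at a lower bitrate; without it, the encoder simply spends more bits (and storage) to get there.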

Then there’s the A5 SoC. When we first met the A5 in the iPad 2 it was almost impossible to imagine that level of performance, particularly on the GPU side, in a smartphone. As I hope we’ve proven through our analysis of both the solution and its lineage, Apple is very committed to the performance race in its iOS devices. Apple more than doubled the die size going from the A4 to the A5 (~53mm^2 to ~122mm^2) on the same manufacturing process. Note that in the process Apple didn’t integrate any new functionality onto the SoC; the additional transistors were purely for performance. To be honest, I don’t expect the pursuit to slow down anytime soon.

The gains in CPU and GPU speed aren’t simply academic. The 4S is noticeably faster than its predecessor and finally comparable in its weakest areas to modern-day Android smartphones. In the past, iOS could guarantee a smooth user experience, but application response and web page loading times were quickly falling behind the latest wave of dual-core Android phones. The 4S brings the iPhone back up to speed.

On the software side, there’s Siri. The technology is a nod to decades of science fiction in which users talk to an omnipotent computer that carries out complex calculations and offers impartial, well educated advice when needed. In practice, Siri is nowhere close to that. Through an admittedly expansive database of patterns, Siri is able to give the appearance of understanding and depth. That alone is enough to convince many mainstream consumers. The abstraction of Wolfram Alpha alone is a significant feature, as I’m not sure how many out-of-the-loop smartphone users would have begun to use it as a tool had it not been for Siri. But what about power users? Is Siri a game changer?

There are a few areas where Siri does improve the user experience. Making appointments and setting alarms are very natural and quite convenient thanks to Siri. There’s still the awkwardness of giving your phone verbal commands, but if no one is looking I find that it’s quicker to deal with calendar stuff via Siri than by manually typing it in. Setting alarms via Siri actually offers an accuracy benefit as well. Whereas I’ve all too frequently set an alarm for 7PM instead of 7AM because I didn’t definitively swipe the day/night roller, Siri doesn’t let me make that mistake. Searching for restaurants and figuring out how much to tip are nice additions as well.

Text dictation is a neat feature for sure, but to be honest I’m still not likely to rely on it for sending or replying to messages. It’s convenient while driving but the accuracy isn’t high enough to trust it with sending messages to important contacts.

Siri is a welcome addition, but not a life changer. As Apple continues to expand Siri’s database and throws more compute at the problem (both locally on the phone and remotely in iCloud), we’ll hopefully see the technology mature into something more like what years of science fiction movies have promised us.

From a hardware perspective, the iPhone 4S is a great upgrade to the iPhone 4. If the 4 was your daily driver, the 4S is a noticeable upgrade despite the lack of physical differences. While not quite the speed improvement we saw when going from the iPhone 3G to the 3GS, the 4S addresses almost every weakness of the iPhone 4.

The biggest issue is timing one’s upgrade. History (and common sense) alone tells us that in about 12 months we’ll see another iPhone. If you own an iPhone 4 and typically upgrade yearly, the 4S is a no-brainer. If you want to keep your next phone for two years, I’d wait until next year, when it’s possible you’ll see a Cortex A15-based iPhone from Apple with Qualcomm’s MDM9615 (or similar) LTE modem. The move to 28/32nm should keep power in check while allowing for much better performance.

If you own anything older than an iPhone 4 (e.g. 2G/3G/3GS), upgrading to the 4S today is a much more tempting option. The slower Cortex A8 is pretty long in the tooth by now, and anything older than that is ARM11-based, which I was ready to abandon two years ago.

Comments

  • metafor - Tuesday, November 1, 2011 - link

    When you say power efficiency, don't you mean perf/W?

    I agree that perf/W varies depending on the workload, exactly as you explained in the article. However, the perf/W is what makes the difference in terms of total energy used.

    It has nothing to do with race-to-sleep.

    That is to say, if CPU B takes longer to go to sleep but has better perf/W, it will still use less energy. In fact, I think this is what you demonstrated with your second example :)

    The total energy consumption is directly related to how power-efficient a CPU is. Whether it's a slow processor that runs for a long time or a fast processor that runs for a short amount of time, whichever one can process more instructions per second relative to joules per second wins.

    Or, when you take seconds out of the equation, whichever can process more instructions/joule wins.

    Now, I assume you got this idea from one of Intel's people. The thing their marketing team usually forgets to mention is that when they say race-to-sleep is more power efficient, they're not talking about the processor, they're talking about the *system*.

    Take the example of a high-performance server. The DRAM array and storage can easily make up 40-50% of the total system power consumption.
    Let's then say we had two hypothetical CPUs with different efficiencies: CPU A being faster but less power-efficient and CPU B being slower but more power-efficient.

    The total power draw of DRAM and the rest of the system remains the same. And on top of that, the DRAM and storage can be shut down once the CPU is done with its processing job but must remain active (DRAM refreshed, storage controllers powered) while the CPU is active.

    In this scenario, even if CPU A draws more power processing the job compared to CPU B, the system with CPU B has to keep the DRAM and storage systems powered for longer. Thus, under the right circumstances, the system containing CPU A actually uses less overall power because it keeps those power-hungry subsystems active for a shorter amount of time.

    However, how well this scenario translates into a smartphone system, I can't say. I suspect not as well.
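A rough model of the server scenario described in the comment above makes the trade-off concrete. All of the numbers below are invented purely for illustration; nothing here is measured from real hardware:

```python
# Toy model: a faster but less power-efficient CPU can still win on *system*
# energy, because DRAM/storage only stay powered while the job is running.
JOB_WORK = 100.0                         # arbitrary units of work to complete

cpu_a = {"rate": 20.0, "power": 30.0}    # faster, less efficient (units/s, W)
cpu_b = {"rate": 10.0, "power": 12.0}    # slower, more efficient

SYSTEM_POWER = 40.0                      # DRAM + storage + rest of platform (W)

def system_energy(cpu):
    runtime = JOB_WORK / cpu["rate"]     # seconds the system stays awake
    return runtime, runtime * (cpu["power"] + SYSTEM_POWER)

for name, cpu in (("CPU A (fast)", cpu_a), ("CPU B (efficient)", cpu_b)):
    t, joules = system_energy(cpu)
    print(f"{name}: {t:4.1f}s active, {joules:6.1f} J total system energy")
```

With these made-up figures CPU B does more work per joule in isolation, yet the system built around CPU A keeps the 40W of platform power alive for only half as long and comes out ahead on total energy, which is exactly the server-versus-smartphone caveat raised above.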
  • Anand Lal Shimpi - Tuesday, November 1, 2011 - link

    I believe we're talking about the same thing here :)

    The basic premise is that you're able to guarantee similar battery life, even if you double core count and move to a power hungry OoO architecture without a die shrink. If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases.

    You are also right about platform power consumption as a whole coming into play, although with the shift from LPDDR1 to LPDDR2, an increase in effective bandwidth and a number of other changes, it's difficult to treat these factors independently.

    Take care,
    Anand
  • metafor - Tuesday, November 1, 2011 - link

    "If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases."

    Not exactly :) The OoOE architecture has to perform more tasks per joule. That is, it has to have better perf/W. If it had worse perf/W, it doesn't matter how much longer it remains idle compared to the slower processor. It will still use more net energy.

    It's total platform power that may see savings, despite a less power-efficient and more power-hungry CPU. That's why I suspect that this "race to sleep" situation won't translate to the smartphone system.

    The entire crux relies on the fact that although the CPU itself uses more power per task, it saves power by allowing the rest of the system to go to sleep faster.

    But smartphone subsystems aren't that power hungry, and CPU power consumption generally increases with the *square* of performance. (Granted, this wasn't the case going from the A8 to the A9, but you can bet it will be going from the A9 to the A15.)

    If the increase in CPU power per task is greater than the savings of having the rest of the system active for shorter amounts of time, it will still be a net loss in power efficiency.

    Put it another way: the A9 may be a net power win over the A8, but don't expect the A15 to be one compared to the A9, no matter how fast it finishes a task :)
  • doobydoo - Tuesday, November 1, 2011 - link

    You are both correct, and you are also both wrong.

    Metafor is correct because any chip, given a set number of tasks to do over a fixed number of seconds, regardless of how much faster it can perform, will consume more energy than an equally power efficient but slower chip. In other words, being able to go to sleep quicker never means a chip becomes more power efficient than it was before. It actually becomes less.

    This is easily provable by splitting the energy into two parts. If 2 chips are both equally power efficient (as in they can both perform the same number of 'tasks' per watt) and one is twice as fast, it will draw twice the power while active but complete in half the time, so the active energy will ALWAYS be equal for both chips. However, the chip which finished sooner will then sit idle for LONGER because it finished quicker, so the idle energy will always be higher for the faster chip. This assumes, as I said, that the idle power draw of both chips is equal.

    Anand is correct because, if you DO have a more power-efficient chip with a higher maximum power draw, race-to-sleep is OFTEN (assuming reasonable idle times) the reason it can actually use less energy. Consider 2 chips: one which consumes 1.3W (max) and can carry out '2' tasks per second, and a second chip which consumes 1W (max) and can carry out '1' task per second (so it is less power efficient). Now consider a world without race-to-sleep. To carry out '10' tasks over a 10-second period, chip one would take 5 seconds but would remain at full power for the full 10 seconds, thereby using 13 joules. Chip two would take 10 seconds and would use a total of 10 joules over that period. Thus, the more power-efficient chip actually ended up using more energy.

    Now if we factor in race-to-sleep, the first chip can run at 1.3W for the first 5 seconds, then drop to 0.05W for the last 5, consuming 6.75 joules. The second chip would still consume the same 10 joules.

    Conclusion:

    If the chip is not more power efficient, it can never consume less energy, with or without race-to-sleep. If the chip IS more power efficient but doesn't have the sleep facility, it may not use less energy in all scenarios.

    In other words, for a higher-powered chip to reduce energy use in ALL situations, it needs to a) be fundamentally more power efficient, and b) be able to sleep (race-to-sleep).
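The arithmetic in the example above is easy to verify. A minimal sketch using the same made-up figures (1.3W at 2 tasks/s vs 1W at 1 task/s, 10 tasks in a 10-second window, 0.05W at idle):

```python
# Re-run the numbers from the comment above: 10 tasks within a 10-second window.
WINDOW, TASKS, IDLE_POWER = 10.0, 10.0, 0.05   # seconds, task count, idle watts

def energy(active_power, tasks_per_sec, race_to_sleep):
    active_time = TASKS / tasks_per_sec        # seconds spent computing
    if race_to_sleep:
        return active_power * active_time + IDLE_POWER * (WINDOW - active_time)
    return active_power * WINDOW               # stays at full power all window

for name, power, rate in (("Chip 1 (1.3W, 2 tasks/s)", 1.3, 2.0),
                          ("Chip 2 (1.0W, 1 task/s)", 1.0, 1.0)):
    print(f"{name}: {energy(power, rate, False):5.2f} J without sleep, "
          f"{energy(power, rate, True):5.2f} J with race-to-sleep")
```

This reproduces the 13J vs 10J and 6.75J vs 10J figures: the faster chip only comes out ahead once it is allowed to race to sleep, and only because it is also the more power-efficient one.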
  • djboxbaba - Monday, October 31, 2011 - link

    Well done on the review Brian and Anand, excellent job as always. I was resisting the urge to tweet you about the ETA of the review, and of course I end up doing it the same day as you release the review :).
  • Mitch89 - Monday, October 31, 2011 - link

    "This same confidence continues with the 4S, which is in practice completely usable without a case, unlike the GSM/UMTS iPhone 4. "

    Every time I read something like this, I can't help but compare it to my experience with iPhone 4 reception, which was never a problem. I'm on a very good network here in Australia (Telstra), and never did I have any issues with reception when using the phone naked. Calls in lifts? No problem. Way outside the suburbs and cities? Signal all the way.

    I never found the iPhone 4 to be any worse than other phones when I used it on a crappy network either.

    Worth noting, battery life is noticeably better on a strong network too...
  • wonderfield - Tuesday, November 1, 2011 - link

    Same here. It's certainly possible to "death grip" the GSM iPhone 4 to the point where it's rendered unusable, but this certainly isn't the typical use case. For Brian to make the (sideways) claim that the 4 is unusable without a case is fairly disingenuous. Certainly handedness has an impact here, but considering 70-90% of the world is right-handed, it's safe to assume that 70-90% of the world's population will have few to no issues with the iPhone 4, given it's being used in an area with ample wireless coverage.
  • doobydoo - Tuesday, November 1, 2011 - link

    I agree with both of these. I am in a major capital city which may make a difference, but no amount or technique of gripping my iPhone 4 ever caused dropped calls or stopped it working.

    Very much an over-stated issue in the press, I think
  • ados_cz - Tuesday, November 1, 2011 - link

    It was not over-stated at all, and the argument that most people are right-handed does not hold water. I live in a small town in Scotland and my usual signal strength is around 2-3 bars. If I'm browsing the net on 3G without a case and holding the iPhone 4 naturally with my left hand (using the right hand for touch commands), I lose signal completely.
  • doobydoo - Tuesday, November 1, 2011 - link

    Well the majority of people don't lose signal.

    I have hundreds of friends who have iPhone 4's who've never had any issue with signal loss at all.

    The point is you DON'T have to be 'right handed' for them to work, I have left handed friends who also have no issues.

    You're the exception, rather than the rule - which is why the issue was overstated.

    For what it's worth, I don't believe you anyway.
