Camera

The iPhone 5s continues Apple’s tradition of sensible improvements to camera performance with each generation. I was pleased to hear Phil Schiller deliver a line about how bigger pixels are a better route to improving image quality than simply throwing more pixels at the problem. I remember hearing our own Brian Klug deliver almost that exact same message a year earlier when speaking to engineers at another phone company.

The iPhone 5s increases sensor size compared to the iPhone 5. Last week Brian dug around and concluded that the 5s’ iSight camera likely uses a sensor format very similar to that of the HTC One. The difference is that while HTC opted for even larger pixels (arriving at 4MP), Apple chose a different balance between spatial resolution and light sensitivity with its 8MP sensor.

One thing ingrained in my mind from listening to Brian talk about optics is that there is no perfect solution; everything ultimately boils down to a selection of tradeoffs. Looking at Apple/HTC vs. the rest of the industry we see one set of tradeoffs, with Apple and HTC optimizing for low light performance while the rest of the industry chases smaller pixels. Even between Apple and HTC, however, the tradeoffs differ: HTC went more extreme on pixel size, while Apple opted for more spatial resolution.

iPhone 4, 4S, 5, 5s Cameras

| Property | iPhone 4 | iPhone 4S | iPhone 5 | iPhone 5s |
| --- | --- | --- | --- | --- |
| CMOS Sensor | OV5650 | IMX145 | IMX145-Derivative | ? |
| Sensor Format | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" (4.54 x 3.42 mm) | ~1/3.0" (4.89 x 3.67 mm) |
| Optical Elements | 4 Plastic | 5 Plastic | 5 Plastic | 5 Plastic |
| Pixel Size | 1.75 µm | 1.4 µm | 1.4 µm | 1.5 µm |
| Focal Length | 3.85 mm | 4.28 mm | 4.10 mm | 4.12 mm |
| Aperture | F/2.8 | F/2.4 | F/2.4 | F/2.2 |
| Image Capture Size | 2592 x 1936 (5 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP) |
| Average File Size | ~2.03 MB | ~2.77 MB | ~2.3 MB | 2.5 MB |

From Brian's excellent iPhone 5s Camera Analysis post

Apple moved to 1.5µm pixels, up from 1.4µm in the iPhone 5. Remember that pixel size is quoted in a single dimension, so the overall increase in pixel area amounts to around 15%. Apple also moved to a faster aperture (F/2.2 vs. F/2.4 on the iPhone 5) to increase light throughput. The combination can result in significantly better photos than the outgoing 5 when shooting in low light.
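
As a quick sanity check on that claim (my own back-of-the-envelope math, not Apple's numbers), light gathered per pixel scales with pixel area and with the square of the inverse f-number, so the two changes compound to roughly 1.4x:

```python
# Back-of-the-envelope light-per-pixel gain: iPhone 5 -> iPhone 5s
pixel_area_gain = (1.5 / 1.4) ** 2      # larger pixels: ~1.15x more area each
aperture_gain = (2.4 / 2.2) ** 2        # faster aperture: ~1.19x more light through
print(pixel_area_gain * aperture_gain)  # ~1.37x more light per pixel
```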

iPhone 5/5c Low Light

iPhone 5s Low Light

With the move to larger pixels, Apple has done away with its 2x2 binning mode in low light settings. Once scene brightness dropped below a certain threshold, the iPhone 5 would average groups of four adjacent pixels to improve low light sensitivity, then upscale the binned image back to the full 8MP, trading spatial resolution for low light performance. The iPhone 5s doesn’t have to make this tradeoff. In practice I didn’t find any situations where the 5s’ low light performance suffered as a result; it always seemed to produce better shots than the iPhone 5.
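
For the curious, here's a minimal sketch of what a 2x2 binning mode does to sensor data. This is purely illustrative; a real ISP bins same-color pixels in the Bayer mosaic and the details are far more involved:

```python
import numpy as np

def bin_2x2(raw):
    # Average each 2x2 block: four small pixels act as one big pixel,
    # quartering spatial resolution in exchange for better effective sensitivity.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.random.poisson(4.0, (2448, 3264))  # a noisy, dim 8MP scene
binned = bin_2x2(frame)                       # 1224 x 1632, then upscaled back to 8MP
```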

iPhone 5/5c

iPhone 5s

Unlike some of the larger flagships we’ve reviewed lately, the iPhone 5s doesn’t ship with optical image stabilization (OIS). We’ve seen devices from HTC, LG and Nokia all ship with OIS, and have generally been pleased with the results. It’s not a surprise that the 5s doesn’t come with OIS, as it’s largely the same physical platform as the outgoing 5. Still, it would be great to see an Apple device ship with OIS. Perhaps on a larger iPhone.

As is always the case in space-constrained camera systems, what Apple couldn’t achieve physically it hopes to make up for computationally. The 5s leverages electronic image stabilization as well as automatically combining multiple frames from the capture buffer to deliver the sharpest shot it can each time.
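
Apple hasn't detailed how frames are scored or merged, but conceptually it's akin to rating each frame in the capture buffer with a sharpness metric and keeping the winner. A toy sketch of that idea (my illustration, not Apple's algorithm):

```python
import numpy as np

def sharpness(gray):
    # Variance of a Laplacian response: a common proxy for how sharp a frame is
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return lap.var()

def pick_sharpest(frames):
    # frames: grayscale float arrays captured around the moment the shutter fires
    return max(frames, key=sharpness)
```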

Apple’s cameras have traditionally been quite good, thanks not just to sensor selection but to the entire stack, from its own custom ISP (Image Signal Processor) through software. With the A7, Apple introduces a brand new ISP. Although we know very little about it, you can find references to Apple’s H6 ISP if you dig around.

Apple continues to ship one of the better auto modes among smartphone cameras I've used. I still want the option of full manual controls, but for most users Apple's default experience should be a very good one.

Capturing shots under iOS 7 is incredibly quick. Shot-to-shot latency is basically instantaneous now, thanks to a very fast ISP and the A7’s ability to quickly move data in and out of main memory. It’s impossible to write shots to NAND this quickly, so Apple is likely buffering shots in DRAM before bursting them out to non-volatile storage.
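
That's speculation on my part, but it matches the classic buffer-then-flush pattern: capture into fast DRAM on the critical path, then drain to slow NAND in the background. A toy model:

```python
from collections import deque

class BurstBuffer:
    def __init__(self):
        self.pending = deque()          # frames held in DRAM

    def capture(self, frame):
        self.pending.append(frame)      # fast path: just a memory write

    def drain(self, write_to_nand):
        while self.pending:             # slow path: runs after the burst ends
            write_to_nand(self.pending.popleft())
```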


The new ISP enables a burst capture mode of up to 10 fps. To activate burst mode, simply hold down the shutter button and fire away. The iPhone 5s will maintain a 10 fps capture rate until the burst counter hits 999 images (which I most definitely tested). Although it took a while to write all 999 images, all of them were eventually committed to NAND.

Photos captured in burst mode are intelligently grouped so as not to clutter your photo gallery. The camera app will automatically flag what it thinks are the important photos, but you’re free to choose as many (or as few) as you’d like to include in your normal browsing view. Since all of the photos captured in burst mode are physically saved, regardless of whether or not you select them to appear among your photos, you can always just pull them off the 5s via USB.

The rear facing camera is paired with a new dual-LED True Tone flash. Rather than a single white LED, Apple equips the iPhone 5s with two LEDs of different color tones (one cool, one warm). When set to fire, the 5s’ ISP and camera system will evaluate the color temperature of the scene, pre-fire the flash and determine the right combination of the two LEDs to produce the most natural illumination of the subject.
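
Apple hasn't published how it weights the two LEDs; conceptually it amounts to blending two fixed-color light sources toward the scene's measured color temperature. A deliberately naive sketch (the linear blend and the LED color temperatures are my assumptions, not Apple's specs):

```python
def led_mix(scene_cct, cool_cct=5500.0, warm_cct=2700.0):
    # Linearly blend a cool and a warm LED toward the scene's color
    # temperature (in kelvin). Real white-point math happens in a proper
    # chromaticity space, not on raw CCT values.
    t = (scene_cct - warm_cct) / (cool_cct - warm_cct)
    t = min(max(t, 0.0), 1.0)            # clamp to what the LED pair can reach
    return {"cool": t, "warm": 1.0 - t}  # relative drive for each LED

print(led_mix(3200.0))  # incandescent-lit room -> mostly the warm LED fires
```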

I’m not a huge fan of flashes, but I have to say that in a pinch the True Tone flash is appreciably better than the single LED unit on the iPhone 5. Taking photos of people with the new True Tone flash enabled produces much warmer, more natural-looking results:

True Tone Flash Enabled

Even if your subject happens to be something other than a person, I’ve seen really good results from Apple’s True Tone flash.

I still believe the best option is to grab your photo using natural/available light, but as portable as a smartphone is, that’s not always going to be an option.

I have to say I appreciate the vector along which Apple improved the camera experience with the iPhone 5s. Improving low light performance (and quality in low light situations where you’re forced to use a flash) is a great message to carry forward.

Front Facing Camera

The iPhone 5s and iPhone 5c share the same upgraded front-facing FaceTime HD camera. The front-facing camera gets a sensor upgrade, also with a move to larger pixels (1.9µm, up from 1.75µm), while resolution and aperture remain the same at 720p and F/2.4. The larger pixels once again improve the low light performance of the FaceTime HD camera (iPhone 5 left vs. iPhone 5s right):

Comments

  • MatthiasP - Tuesday, September 17, 2013

    Wow, first real review on the web AND deep as always, a very nice job from Anand. :)
  • sfaerew - Wednesday, September 18, 2013

    Are the benchmarks (GFXBench 2.7, 3DMark, Basemark X, etc.) AArch64 versions?
    There's a 30~40% performance gap between the 32-bit and 64-bit Geekbench results:
    INT (ST): 1471 vs. 1065
    FP (ST): 1339 vs. 983
  • Wilco1 - Wednesday, September 18, 2013

    And Bay Trail Geekbench at 2.4GHz: 1063 (INT), 866 (FP)

    So A7 has beaten BT already by a huge margin despite BT not even being for sale yet...
  • TraderHorn - Wednesday, September 18, 2013

    You're comparing the 64-bit A7 vs. the 32-bit BT. The 32-bit #s are dead even. It'll be interesting to see if BT gets a similar performance boost when Win8 64-bit versions are released in 1H 2014.
  • Wilco1 - Wednesday, September 18, 2013

    BT's 32-bit result includes hardware accelerated AES, which skews its score (without it, its score is ~936). The 64-bit A7 result does also use hardware acceleration, so it is more comparable.

    Yes, BT will get a speedup from 64-bit as well, but it won't be nearly as much as A7 gets: its 32-bit result already has the AES acceleration, and x64 isn't nearly as different from x86 as A64 is from A32.

    However, the interesting thing is not just that the A7 wins by a good margin even in 32-bit, but that it wins despite running at almost half the frequency of Bay Trail... Forget about Bay Trail, this is Haswell territory: the MacBook Air with the 15W 3.3GHz i7-4650U scores 3024 INT and 3003 FP.

    Now imagine a quad core tablet/laptop version of the A7 running at 2GHz on TSMC 20nm next year.
  • smartypnt4 - Wednesday, September 18, 2013

    Why does the frequency matter? If the TDP of the chips are similar (Bay Trail was tested and verified by Anand as using 2.5W at the SoC level under load), who gives a flip about the frequency?

    If Apple wanted to double the frequency of the chip, they'd need something on the order of 4x the power it already consumes (assuming a back-of-the-napkin quadratic relationship, which is approximately correct), putting it at ~6-8W or so at full load. That's assuming such scaling could even be done, which is unlikely given that Apple built the thing to run at 1.3GHz max. You can't just say "oh, I want these to switch faster, so let's up the voltage." There's more that goes into the ability to scale voltage than just the process node you're on.

    Now, I will agree that this does prove that if Apple really wanted to, they could build something to compete with Haswell in terms of raw throughput. Next year's A8 or whatever probably will compete directly with Haswell in raw theoretical integer and FP throughput, if Apple manages to double performance again. That's not a given since they had to use ~50% more transistors to get a performance doubling from the A6 to the A7, and building a 1.5B transistor chip is nontrivial since yields are inversely proportional to the number of transistors you're using.

    Next year will be really interesting, though. What with Apple's next stuff, Broadwell, the first A57 designs, Airmont, and whatever Qualcomm puts out (haven't seen anything on that, which is odd for Qualcomm.)
  • Wilco1 - Wednesday, September 18, 2013

    Frequency & process matter. Current phones use about 2W at max load without the screen (see the recent Nexus 7 test), so the claimed 2.5W just for BT is way too much for a phone. That means (as you explained) it must run at a lower frequency and voltage to get into phones - my guess is we won't see anything faster than the Z3740 with a max clock of 1.8GHz. Therefore the A7 will extend its lead even further.

    According to TSMC 20nm will give a 30% frequency boost at the same power. So I'd expect that a 2GHz A7 would be possible on 20nm using only 35% more power. That means the A7 would get 75% more performance at a small cost in power consumption. This is without adding any extra transistors.

    Add some tweaks (like faster memory) and such a 2GHz A7 would be similar in performance to the 15W Haswell in the MacBook Air. So my point is that with a die shrink and a slight increase in power they already have a Haswell competitor.
  • smartypnt4 - Wednesday, September 18, 2013

    Frequency and process matter in that they affect power consumption. If Intel can get Bay Trail to do 2.4GHz on something like 1.0V, then the power should be fine. Current Haswell stuff tops out its voltage around 1.1V or so in laptops (if memory serves), so that's not unreasonable.

    All of this assumes Geekbench is valid for comparing HSW on Win8 to ARMv8/Cyclone on iOS, which I have serious reservations about attempting to do.

    The other issue I have is this: you're talking about a 50% clock boost giving a 100% increase in performance if we look at the Geekbench scores. That's simply not possible. Had you said "raise the clock to 1.6-1.7GHz and give it 4 cores," I'd be right behind you in a 2x theoretical performance increase. But a 50% clock boost will never yield a 100% increase with the same core, even if you change the memory controller.

    Also, somehow your math doesn't add up for power... Are you hypothesizing that a 2GHz A7 (with 75% of the performance of Haswell 15W, not the same - as per Geekbench) can pull 2.6W while Haswell needs 15W to run that test? Granted, Haswell integrates things that the A7 doesn't. Namely, more advanced I/O (PCIe, SATA, USB, etc.), and the PCH. Using very fuzzy math, you can claim all of that uses 1/2 the power of the chip.

    That brings Haswell's power for compute down to 7-8W, more or less. And you're going to tell me that Apple has figured out how to get 75% of the performance of a 7W part in 2.6W, and Intel hasn't? Both companies have ~100k employees. One is working on a ton of different stuff, and one makes processors, basically exclusively (SSDs and WiFi stuff too, but processors is their main drive). You're telling me that a (relatively) small cadre of guys at Apple have figured out how to do it, and Intel hasn't done it yet on a part that costs ~6x as much after trying to get deep into the mobile space for years. I find that very hard to believe.

    Even with the 14nm shrink next year, you're talking about a 30% power savings for Intel's stuff. That brings the 15W total down to 10.5W, and the (again, super, ridiculously fuzzy) computing power to ~5-6W. On a full node smaller than what Apple has access to. And you're saying they'd hypothetically compete in throughput with a 2.6W part. I'm not sure I believe that.

    Then again, I suppose theoretical bandwidth could be competitive. That's simply a factor of your peak IPC, not your average IPC while the device is running. I don't know enough about the low level architecture of the A7 (no one does), so I'll just leave it here I guess.

    I'm gonna go now... I'm starting to reason in circles.
  • Wilco1 - Wednesday, September 18, 2013

    The sort of "simple" tweaks I was thinking of are: an improved memory controller and prefetcher, doubling of L2, larger branch predictor tables. Assuming a 30% gain due to those tweaks, the result is a 100% speedup at 2GHz (1.3 to 2.0 GHz is a 54% speedup, so you get 1.54 * 1.3 = 2.0x perf). The 30% gain due to tweaks is pure speculation of course, however NVidia claims 15-30% IPC gain for similar tweaks in Tegra 4i, so it's not entirely implausible. As you say a much simpler alternative would be just to double the cores, but then your single threaded performance is still well below that of Haswell.

    You can certainly argue some reduction in the 15W TDP of Haswell due to IO, however with Turbo it will try to use most of that 15W if it can (the Air goes up to 3.3GHz after all).

    Yes I am saying that a relative newcomer like Apple can compete with Intel. Intel may be large, but they are not infallible, after all they made the P4, Itanium and Atom. A key reason AMD cited for moving into ARM servers was that designing an ARM CPU takes far less effort than an equivalent performing x86 one. So the ISA does still matter despite some claiming it no longer does.
  • smartypnt4 - Wednesday, September 18, 2013

    My point wasn't that Apple can't compete; far from it. If anything, the A7 shows they can compete for the most part. However, what you suggest is that Apple could theoretically match Intel's performance on a process a full node larger at half the power.

    I have no illusions that Intel is infallible. Stuff like Larrabee and the underwhelming GPU in Bay Trail prove that they aren't. I just seriously doubt that Apple could beat Intel at its own game: specifically, CPU performance, an area Intel has dominated for years. It's possible, but I find it relatively unlikely, especially this early in Apple's lifetime as a chip designer.

    On a different note, after looking at the Geekbench results more, I feel like it's improperly weighted. The massive performance improvement in AES and SHA encryption may be skewing the overall result... I need to dig more into Geekbench before coming to an actual conclusion. I'm also still not convinced that comparing cross-platform results is actually valid. I'd like to believe it is, but I've always had reservations about it.
