Video Capture Quality

The iPhone 4 shot excellent 720p30 video and was arguably the best in that category for a considerable run. Recently, though, it has been outclassed by smartphones that shoot 1080p30 with impressive quality and record 720p30 just as well. On paper the 4S catches back up and can likewise capture video at 1080p30. As with every prior iDevice, there are no toggles to change video capture size - recording always happens at the device’s maximum quality, 1080p30. Apple also made note of its own gyro-augmented electronic stabilization, which the 4S brings. Practically every other smartphone we’ve seen likewise includes some electronic stabilization that leverages the spare pixels around the target 1080p or 720p capture area.
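
As a rough illustration of how gyro-augmented electronic stabilization of this kind works, here's a minimal sketch that shifts a 1080p crop window inside a slightly larger sensor readout to counter gyro-reported motion. The readout size, margin, and pixels-per-radian gain are made-up values for illustration, not Apple's actual parameters.

```python
# Toy electronic image stabilization: shift a 1080p crop window inside a
# slightly larger sensor readout to cancel gyro-reported camera motion.
# All numbers here are illustrative, not Apple's actual parameters.

READOUT_W, READOUT_H = 2112, 1188   # hypothetical readout with spare margin
CROP_W, CROP_H = 1920, 1080         # the 1080p frame that actually gets encoded
PIXELS_PER_RADIAN = 1800.0          # hypothetical rotation-to-pixels gain

def crop_offset(gyro_yaw_rad, gyro_pitch_rad):
    """Return the top-left corner of the crop window for this frame."""
    # Start centered, then shift opposite to the measured rotation.
    center_x = (READOUT_W - CROP_W) // 2
    center_y = (READOUT_H - CROP_H) // 2
    dx = int(-gyro_yaw_rad * PIXELS_PER_RADIAN)
    dy = int(-gyro_pitch_rad * PIXELS_PER_RADIAN)
    # Clamp so the crop never leaves the readout area.
    x = max(0, min(READOUT_W - CROP_W, center_x + dx))
    y = max(0, min(READOUT_H - CROP_H, center_y + dy))
    return x, y

# Example: a small 0.01 rad yaw jitter shifts the crop 18 pixels horizontally.
print(crop_offset(0.01, -0.005))   # (78, 63)
```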

We’ve captured videos from the 4S in the dual camera mount alongside the 4, an SGS2, and a reference Canon Vixia HF11 for comparison. I also shot a low light comparison between the 4 and 4S. Showing the differences in video between all of those is something of a challenge, so I’ve done a few different things. First, you can grab the native format 4S versus 4 videos here (442 MB) and the 4S versus SGS2 video here (289 MB).

It’s hard to compare those unless you have multiple instances of VLC open and hit play at the same time, so I also combined and synchronized the comparison videos side by side. The combined frame is 4096x2048, which lets the actual 1080p frames sit next to each other at native size. I realize 4K displays are hard to come by, but you can at least inspect the full size, synchronized frames.
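
If you want to build a similar side-by-side comparison yourself, one way to do it is with ffmpeg's hstack filter; the sketch below (with placeholder filenames, and not necessarily the exact workflow used here) writes both clips into one wide frame.

```python
# Stack two 1080p clips side by side into a single wide comparison video
# using ffmpeg's hstack filter. Filenames are placeholders; audio is taken
# from the first input only (if it has any). Synchronizing the start points
# is left to a trim/offset step beforehand.
import subprocess

def side_by_side(left_clip, right_clip, output):
    cmd = [
        "ffmpeg",
        "-i", left_clip,
        "-i", right_clip,
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",
        "-map", "0:a?",      # keep the first clip's audio track if present
        "-c:v", "libx264",
        "-crf", "18",        # visually near-lossless for comparison purposes
        output,
    ]
    subprocess.run(cmd, check=True)

side_by_side("iphone4s.mov", "iphone4.mov", "comparison_side_by_side.mp4")
```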

It’s readily apparent just how much more dynamic range the 4S has than the 4 when you look at the highlights and dark regions. In addition, the 4S does indeed have better white balance; the 4 changes its white balance a few times as we pan left and right through different levels of brightness and ends up looking blue at the very end of the first clip.

Then comes the SGS2 comparison, and I start out with some unintentional shake where you can really see the 4S’ anti-shake kick in. I considered the SGS2’s electronic anti-shake pretty good; however, its narrower field of view in 1080p capture exacerbates the shaking. Subjectively the two are pretty closely matched in terms of video quality, but the SGS2 runs its continuous autofocus a lot and has a few entirely unfocused moments. The 4S’ continuous autofocus is much more conservative and often requires a tap to refocus.

The Vixia HF11 comparison gives you an idea of how the 4S compares to a consumer level camcorder shooting in its own maximum quality mode. I’d say the 4S actually gives it a run for its money, surprisingly enough, though the 4S (like every smartphone) still exhibits rolling shutter during movement. Finally, I shot a low light side by side with the 4S and 4; again the 4S’ white balance is better, but its video in this mode looks a bit noisier than the 4’s. In addition, the 4S exhibits more lens flare (something I noticed while shooting stills as well) than the 4.

Subjectively, video quality from the 4S is very good, but it falls short in other ways. The 4S shoots 1080p30 video encoded with H.264 baseline profile and 1 reference frame at 24 Mbps, with single channel 64 Kbps AAC audio. If you’ve been following our smartphone reviews, you’ll know that although this is the highest video bitrate of any smartphone thus far (we’ve seen the Droid 3 at 15 Mbps and the SGS2 at 17 Mbps), it’s only baseline profile, not the high profile encoding we’ve seen from Exynos 4210 or OMAP4. In addition, two channel audio is becoming the new norm.

Media Info from video shot on the iPhone 4S
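
If you’d rather pull these parameters from the command line than from MediaInfo, a minimal sketch using ffprobe's JSON output (the filename is a placeholder) looks like this:

```python
# Print the codec profile, resolution, bitrate, and audio channel count of a
# clip using ffprobe's JSON output. The filename below is a placeholder.
import json
import subprocess

def stream_summary(path):
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for s in json.loads(out)["streams"]:
        if s["codec_type"] == "video":
            print("video:", s.get("codec_name"), s.get("profile"),
                  f'{s.get("width")}x{s.get("height")}',
                  s.get("bit_rate", "n/a"), "bps")
        elif s["codec_type"] == "audio":
            print("audio:", s.get("codec_name"), s.get("channels"),
                  "channel(s),", s.get("bit_rate", "n/a"), "bps")

stream_summary("IMG_0001.MOV")
```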

The result is that Apple is compensating for lower encoder efficiency (quality per bit) by encoding its 1080p video at a higher bitrate, while other players get the same quality at lower bitrates by using better high profile encoders. We dug a little deeper with some stream analysis software, and it appears that Apple’s A5 SoC uses the same encoder as the A4, complete with the same CAVLC entropy coding (as opposed to the CABAC used by the encoders in OMAP4 and Exynos 4210) and the same efficiency per frame. It’s just a bit unfortunate, since the result is that video shot on the 4S uses roughly 40% more space per minute than 1080p30 video from the SGS2 and nearly 60% more than OMAP4 (about 180 MB per minute on the 4S, versus 128 MB on the SGS2 and 113 MB on OMAP4).
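
Those per-minute figures fall straight out of the video bitrates; a quick back-of-the-envelope check (counting only the video stream, with 1 MB taken as 10^6 bytes, and ignoring audio and container overhead) lines up with them:

```python
# Back-of-the-envelope storage per minute from the video bitrate alone.
# 1 MB = 10^6 bytes; audio and container overhead are ignored.
def mb_per_minute(mbps):
    return mbps * 1_000_000 * 60 / 8 / 1_000_000   # bits/s -> bytes -> MB over 60 s

for name, mbps in [("iPhone 4S", 24), ("SGS2", 17), ("Droid 3 / OMAP4", 15)]:
    print(f"{name}: ~{mb_per_minute(mbps):.1f} MB per minute")

# iPhone 4S: ~180.0 MB per minute
# SGS2: ~127.5 MB per minute
# Droid 3 / OMAP4: ~112.5 MB per minute
```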

iPhone 4S vs. iPhone 4 video field of view (rollover comparison)

One last thing to note is that the 4S keeps roughly the same cropped field of view as the 4 when shooting video, as you can see in the rollover above; the 4S field of view is just slightly narrower than the 4’s. Note that the area actually read out from the sensor in video capture mode is almost always a crop (sometimes with 2x2 binning) of the full sensor, with some extra pixels around the frame reserved for image stabilization.
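
To make the field of view point concrete, here's a small sketch of the geometry; the sensor width, pixel pitch, and focal length below are example values chosen for illustration, not measured 4S figures.

```python
# How a centered video crop narrows the horizontal field of view relative to
# the full sensor readout. All dimensions below are example values.
import math

FULL_SENSOR_W_PX = 3264       # full-resolution still width (example)
VIDEO_READOUT_W_PX = 2112     # 1080p crop plus stabilization margin (example)
PIXEL_PITCH_MM = 0.0014       # 1.4 micron pixels (example)
FOCAL_LENGTH_MM = 4.28        # example focal length

def horizontal_fov_deg(width_px):
    width_mm = width_px * PIXEL_PITCH_MM
    return math.degrees(2 * math.atan(width_mm / (2 * FOCAL_LENGTH_MM)))

print(f"full sensor: {horizontal_fov_deg(FULL_SENSOR_W_PX):.1f} deg")
print(f"video crop:  {horizontal_fov_deg(VIDEO_READOUT_W_PX):.1f} deg")
# The cropped readout sees a noticeably narrower slice of the scene.
```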

199 Comments

  • doobydoo - Friday, December 2, 2011 - link

    It's still absolute nonsense to claim that the iPhone 4S can only use '2x' the power when it has 7x the power available.

    Not only does the iPhone 4S support wireless streaming to TVs, making performance very important, there are also games ALREADY out which require this kind of GPU in order to run fast on the superior resolution of the iPhone 4S.

    Not only that, but you failed to take into account the typical life-cycle of iPhones - this phone has to be capable of performing well for around a year.

    The bottom line is that Apple really got one over all Android manufacturers with the GPU in the iPhone 4S - it's the best there is, in any phone, full stop. Trying to turn that into a criticism is outrageous.
  • PeteH - Tuesday, November 1, 2011 - link

    Actually it is about the architecture. How GPU performance scales with size is in large part dictated by the GPU architecture, and Imagination's architecture scales better than the other solutions.
  • loganin - Tuesday, November 1, 2011 - link

    And I showed above that Apple's chip isn't larger than Samsung's.
  • PeteH - Tuesday, November 1, 2011 - link

    But chip size isn't relevant, only GPU size is.

    All I'm pointing out is that not all GPU architectures scale equivalently with size.
  • loganin - Tuesday, November 1, 2011 - link

    But you're comparing two different architectures here, not two carrying the same architecture, so the scalability doesn't really matter. Also, is Samsung's GPU significantly smaller than the A5's?

    Now that we've discussed back and forth about nothing, you can see the problem with Lucian's argument. It was simply an attempt to make Apple look bad, and the technical correctness didn't really matter.
  • PeteH - Tuesday, November 1, 2011 - link

    What I'm saying is that Lucian's assertion, that the A5's GPU is faster because it's bigger, ignores the fact that not all GPU architectures scale the same way with size. A GPU of the same size but with a different architecture would have worse performance because of this.

    Put simply, architecture matters. You can't just throw silicon at a performance problem to fix it.
  • metafor - Tuesday, November 1, 2011 - link

    Well, you can. But it might be more efficient not to. At least with GPUs, putting two in there will pretty much double your performance on GPU-limited tasks.

    This is true of desktops (SLI) as well as mobile.

    Certain architectures are more area-efficient. But the point is, if all you care about is performance and can eat the die-area, you can just shove another GPU in there.

    The same can't be said of CPU tasks, for example.
  • PeteH - Tuesday, November 1, 2011 - link

    I should have been clearer. You can always throw area at the problem, but the architecture dictates how much area is needed to add the desired performance, even on GPUs.

    Compare the GeForce and the SGX architectures. The GeForce provides an equal number of vertex and pixel shader cores, and thus can only achieve theoretical maximum performance if it gets an even mix of vertex and pixel shader operations. The SGX on the other hand provides general purpose cores that can do either vertex or pixel shader operations.

    This means that as the SGX adds cores its performance scales linearly under all scenarios, while the GeForce (which adds a vertex and a pixel shader core as a pair) gains only half the benefit under some conditions. Put simply, if a GeForce is limited by the number of pixel shader cores available, the addition of a vertex shader core adds no benefit.

    Throwing enough core pairs onto silicon will give you the performance you need, but not as efficiently as general purpose cores would. Of course a general purpose core architecture will be bigger, but that's a separate discussion.
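
    To put rough numbers on that utilization argument, here's a toy sketch; the core counts and the workload mix are invented, and real GPUs are far more complicated than this.

    ```python
    # Toy model: fixed vertex/pixel core pairs vs. general purpose (unified)
    # cores. Core counts and workload mix are made up purely for illustration.

    def paired_throughput(vertex_cores, pixel_cores, vertex_work, pixel_work):
        # Fixed-function split: the scarcer shader type is the bottleneck.
        return min(vertex_cores / vertex_work, pixel_cores / pixel_work)

    def unified_throughput(total_cores, vertex_work, pixel_work):
        # General purpose cores can be spent on whatever work is waiting.
        return total_cores / (vertex_work + pixel_work)

    # A pixel-shader-heavy scene: 1 unit of vertex work, 4 units of pixel work.
    V, P = 1.0, 4.0

    print("paired,  4 cores:", paired_throughput(2, 2, V, P))   # 0.5
    print("paired,  6 cores:", paired_throughput(3, 3, V, P))   # 0.75
    print("unified, 4 cores:", unified_throughput(4, V, P))     # 0.8
    print("unified, 6 cores:", unified_throughput(6, V, P))     # 1.2
    ```

    The same two extra cores buy more throughput in the unified case because none of them sit idle on the under-used vertex side.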
  • metafor - Tuesday, November 1, 2011 - link

    I think you need to check your math. If you double the number of cores in a Geforce, you'll still gain 2x the relative performance.

    Double is a multiplier, not an adder.

    If a task was vertex-shader bound before, doubling the number of vertex-shaders (which comes with doubling the number of cores) will improve performance by 100%.

    Of course, in the case of 543MP2, we're not just talking about doubling computational cores.

    It's literally 2 GPUs (I don't think much is shared, maybe the various caches).

    Think SLI but on silicon.

    If you put 2 GeForce GPUs on a single die, the effect will be the same: double the performance for double the area.

    Architecture dictates the perf/GPU. That doesn't mean you can't simply double it at any time to get double the performance.
  • PeteH - Tuesday, November 1, 2011 - link

    But I'm not talking about relative performance, I'm talking about performance per unit of area added. When bound by one operation, adding a core that supports a different operation is wasted space.

    So yes, doubling space always doubles relative performance, but adding 20 square millimeters means different things to the performance of different architectures.
