Video Capture Quality

The iPhone 4 shot excellent quality 720p30 video and remained arguably the best in that category for a considerable run. Recently, though, it has been outclassed by smartphones shooting 1080p30 with impressive quality that record 720p30 just as well. The 4S catches back up, at least on paper, and can likewise capture video at 1080p30. Like every prior iDevice, there are no toggles to change video capture size; recording always happens at the device’s maximum quality, 1080p30. Apple also made note of its own gyro-augmented electronic stabilization, which the 4S brings. Practically every other smartphone we’ve seen likewise includes some electronic stabilization that leverages the pixels around the target 1080p or 720p area.

We’ve captured videos from the 4S in the dual camera mount alongside the 4, an SGS2, and a reference Canon Vixia HF11 for comparison. I also shot a low light comparison between the 4 and 4S. Showing the differences in video between all of those is something of a challenge, so I’ve done a few different things. First, you can grab the native format 4S versus 4 videos here (442 MB) and the 4S versus SGS2 video here (289 MB).

It’s hard to compare those side by side unless you open multiple instances of VLC and hit play at the same time, so I’ve also combined the comparison videos into a single synchronized side-by-side clip. The frame is 4096x2048, so we can see actual 1080p frames next to each other. Though I realize 4K displays are hard to come by, you can at least see the full size, synchronized frames.

It’s readily apparent just how much more dynamic range the 4S has over the 4 when you look at the highlights and dark regions. In addition, the 4S does indeed have better white balance, whereas the 4 changes its white balance a few times as we pan left and right through different levels of brightness and ends up looking blue at the very end of the first clip.

Then comes the SGS2 comparison, and I start out with some unintentional shake where you can really see the 4S’ anti-shake kick in. I considered the SGS2’s electronic anti-shake pretty good, but its narrower field of view in 1080p capture exacerbates the shaking. Subjectively the two are pretty closely matched in terms of video quality, but the SGS2 runs its continual autofocus a lot and has a few entirely unfocused moments. The 4S’ continual autofocus is much more conservative and often requires a tap to refocus.

The Vixia HF11 comparison gives you an idea of how the 4S compares to a consumer level camcorder shooting in its own maximum quality mode. I’d say the 4S actually gives it a run for its money, surprisingly enough, though the 4S (like every smartphone) still exhibits rolling shutter with movement. Finally, I shot a low light side by side with the 4S and 4; again, white balance is better on the 4S, but its video in this mode looks a bit noisier than the 4’s. In addition, the 4S exhibits more lens flaring (something I noticed while shooting stills as well) than the 4.

Subjectively, video quality from the 4S is very good, but it falls short in other ways. The 4S shoots 1080p30 video as H.264 baseline profile with 1 reference frame at 24 Mbps, with single-channel 64 Kbps AAC audio. If you’ve been following our smartphone reviews, you’ll know that although this is the highest bitrate of any smartphone thus far (we’ve seen the Droid 3 at 15 Mbps and the SGS2 at 17 Mbps), it’s just baseline profile, not the high profile encoding we’ve seen from Exynos 4210 or OMAP4. In addition, two-channel audio is becoming the new norm.

Media Info from video shot on the iPhone 4S

The result is that Apple is compensating for lower encoder efficiency (quality per bit) by encoding its 1080p video at a higher bitrate. Other players are getting the same quality at lower bitrates by using better high profile encoders. We dug a little deeper with some stream analysis software, and it appears that Apple’s A5 SoC is using the same encoder as the A4, complete with the same CAVLC entropy coding (as opposed to the CABAC used by the encoders in OMAP4 and Exynos 4210) and the same efficiency per frame size. It’s just a bit unfortunate, since the result is that video shot on the 4S will use roughly 40–60% more space per minute than 1080p30 video shot on other platforms (180 MB per minute on the 4S, versus 128 MB on the SGS2 and 113 MB on OMAP4).
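Those per-minute storage figures follow directly from the stream bitrates. As a rough sanity check (using the 4S’ video plus audio bitrate, video bitrate only for the other two since their audio rates aren’t listed here, and ignoring container overhead), a few lines of Python reproduce them:

```python
def mb_per_minute(video_mbps, audio_kbps=0):
    """Approximate storage per minute in MB (10^6 bytes), ignoring container overhead."""
    bits_per_minute = (video_mbps * 1e6 + audio_kbps * 1e3) * 60
    return bits_per_minute / 8 / 1e6

# iPhone 4S: 24 Mbps baseline video + 64 Kbps mono AAC -> ~180 MB/min
print(f"4S:    {mb_per_minute(24, 64):.1f} MB/min")
# SGS2 at ~17 Mbps -> ~128 MB/min; OMAP4 at ~15 Mbps -> ~113 MB/min
print(f"SGS2:  {mb_per_minute(17):.1f} MB/min")
print(f"OMAP4: {mb_per_minute(15):.1f} MB/min")
```

The small remaining gap between these estimates and the published numbers is plausibly audio and container overhead on the non-Apple devices.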

iPhone 4S iPhone 4

One last thing to note is that Apple keeps roughly the same cropped field of view on the 4S as on the 4 when shooting video. You can see this behavior in the rollover above; the 4S field of view is just slightly narrower than the 4’s. Note that the area actually read from the sensor in video capture mode is almost always a crop (sometimes with 2x2 binning) of the full sensor, with some extra pixels around the frame reserved for image stabilization.

Comments

  • robco - Monday, October 31, 2011 - link

    I've been using the 4S from launch day and agree that Siri needs some work. That being said, it's pretty good for beta software. I would imagine Apple released it as a bonus for 4S buyers, but also to keep the load on their servers small while they get some real-world data before the final version comes in an update.

    The new camera is great. As for me, I'm glad Apple is resisting the urge to make the screen larger. The Galaxy Nexus looks nice, but the screen will be 4.65". I want a smartphone, not a tablet that makes phone calls. I honestly wouldn't want to carry something much larger than the iPhone and I would imagine I'm not the only one.

    Great review as always.
  • TrackSmart - Monday, October 31, 2011 - link

    I'm torn on screen size myself. Pocketable is nice. But I'm intrigued by the idea of a "mini-tablet" form factor, like the Samsung Galaxy Note with its 5.3" screen (1280x800 resolution) and almost no bezel. That's HUGE for a phone, but if it replaces a tablet and a phone, and fits my normal pants pockets, it would be an interesting alternative. The pen/stylus is also intriguing. I will be torn between small form factor vs mini-tablet when I make my phone upgrade in the near future.

    To Anand and Brian: I'd love to see a review of the Samsung Galaxy Note. Maybe Samsung can send you a demo unit. It looks like a refined Dell Streak with a super-high resolution display and Wacom digitizer built in. Intriguing.
  • Rick83 - Wednesday, November 2, 2011 - link

    That's why I got an Archos 5 two years ago. And what can I say? It works.

    Sadly the Note is A) three times as expensive as the Archos
    and B) not yet on Android 4

    there's also C) Codec support will suck compared to the Archos, and I'm pretty sure Samsung won't release an open bootloader, like Archos does.

    I'm hoping that Archos will soon release a refresh of their smaller tablets based on OMAP 4 and Android 4.
    Alternatively, and just as expensive as the Note, there's the Sony dual-screen tablet. Looks interesting, but the same caveats apply....
  • kylecronin - Monday, October 31, 2011 - link

    > It’s going to be a case by case basis to determine which 4 cases that cover the front of the display work with the 4S.

    Clever
  • metafor - Monday, October 31, 2011 - link

    "Here we have two hypothetical CPUs, one with a max power draw of 1W and another with a max power draw of 1.3W. The 1.3W chip is faster under load but it draws 30% more power. Running this completely made-up workload, the 1.3W chip completes the task in 4 seconds vs. 6 for its lower power predecessor and thus overall power consumed is lower. Another way of quantifying this is to say that in the example above, CPU A does 5.5 Joules of work vs. 6.2J for CPU B."

    The numbers are off. 4 seconds vs 6 seconds isn't 30% faster. Time-to-complete is the inverse of clockspeed.

    Say a task takes 100 cycles. It would take 1 second on a 100Hz, 1 IPC CPU and 0.77 seconds on a 130Hz, 1 IPC CPU. This translates to 4.62 sec if given a task that takes 600 cycles of work (6 sec on the 100Hz, 1 IPC CPU).

    Or 1W * 6s = 6J = 1.3W * 4.62s

    Exactly the same amount of energy used for the task.
  • Anand Lal Shimpi - Monday, October 31, 2011 - link

    Err sorry, I should've clarified. For the energy calculations I was looking at the entire period of time (10 seconds) and assumed CPU A & B have the same 0.05W idle power consumption.

    Doing the math that way you get 1W * 6s + 0.05W * 4s = 6.2J (CPU B)

    and

    1.3W * 4s + 0.05W * 6s = 5.5J (CPU A)
  • metafor - Monday, October 31, 2011 - link

    Erm, that still presents the same problem. That is, a processor running at 130% of the clockspeed will not finish in 4 seconds; it will finish in 4.62s.

    So the result is:

    1W * 6s + 0.05W * 4s = 6.2J (CPU B)
    1.3W * 4.62s + 0.05W * 5.38s = 6.275J (CPU A)

    There's some rounding error there. If you use whole numbers, say 200Hz vs 100Hz:

    1W * 10s + 0.05W * 10s = 10.5J (CPU B running for 20s with a task that takes 1000 cycles)

    2W * 5s + 0.05W * 15s = 10.75J (CPU A running for 20s with the same 1000-cycle task)
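    For what it's worth, both versions of the arithmetic in this exchange are easy to sanity-check in a few lines (hypothetical CPUs, with the 0.05W idle figure from the posts above):

    ```python
    def energy_j(active_w, active_s, idle_w, idle_s):
        """Energy in joules: active draw * active time + idle draw * idle time."""
        return active_w * active_s + idle_w * idle_s

    # Anand's scenario: fixed 10s window, 0.05W idle draw
    cpu_b = energy_j(1.0, 6.0, 0.05, 4.0)   # slower 1W chip -> 6.2 J
    cpu_a = energy_j(1.3, 4.0, 0.05, 6.0)   # faster 1.3W chip -> 5.5 J

    # Pure clock scaling instead: at 130% clock the 6s task takes 6/1.3 s,
    # so over the same 10s window the faster chip saves essentially nothing
    cpu_a_clock = energy_j(1.3, 6 / 1.3, 0.05, 10 - 6 / 1.3)  # ~6.27 J

    print(cpu_b, cpu_a, cpu_a_clock)
    ```

    So the two of us are describing different scenarios: the 4s finish time only comes from an architectural win, not from clocking the same core higher.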
  • Anand Lal Shimpi - Monday, October 31, 2011 - link

    I wasn't comparing clock speeds, you have two separate processors - architectures unknown, 100% hypothetical. One draws 1.3W and completes the task in 4s, the other draws 1W and completes in 6s. For the sake of drawing a parallel to the 4S vs 4 you could assume that both chips run at the same clock. The improvements are entirely architectural, similar to A5 vs. A4.

    Take care,
    Anand
  • metafor - Tuesday, November 1, 2011 - link

    In that case, the CPU that draws 1.3W is more power efficient, as it pays a 30% higher power draw for *more* than a 30% performance increase.

    I absolutely agree that this is the situation with the A5 compared to the A4, but that has nothing to do with the "race to sleep" problem.

    That is to say, if CPU A finishes a task in 4s and CPU B finishes it in 6s, CPU A is more than 30% faster than CPU B; it has higher perf/W.
  • Anand Lal Shimpi - Tuesday, November 1, 2011 - link

    It is race to sleep though. The more power efficient CPU can get to sleep quicker (hurry up and wait is what Intel used to call it), which offsets any increases in peak power consumption. However, given the right workload, the more power efficient CPU can still use more power.

    Take care,
    Anand
