Camera Video Recording

For video recording, the iPhone XS promises improved dynamic range in modes up to 30fps. This likely means the phone is able to capture HDR in the 30fps modes, applying the same kind of processing we also see in Smart HDR still pictures.

Also addressing something I’ve lamented on iPhones for years: the new iPhone XS finally introduces stereo audio recording. Why it took Apple such a long time to introduce stereo recording boggles the mind, but let’s not complain, as we now finally have it on the new generation.


iPhone XS: [video sample]      iPhone X: [video sample]

Comparing video from the iPhone XS to the iPhone X, one thing is immediately evident: the new XS produces much better image stabilisation than last year’s flagship. Indeed, it looks like Apple vastly improved the OIS/EIS on the new phones, as the wobble that happens when walking is gone on the XS.

Audio recording is finally up to par, and we can hear the wind and the rustling leaves of the trees around us. I think Apple still has to work a bit on wind noise cancellation, as in some parts the audio sounded as if it were recorded inside a tube.

In terms of image quality, Apple’s claims of improved dynamic range are very much verified. The phone brings down highlights in the scene a lot more effectively, and shows better shadow detail in darker areas. It’s unfortunate that this is limited to the 30fps modes, but it’s understandable.

Switching over from the main lens to the telephoto lens happens relatively fast, although with a short exposure flash and a slight delay on the first zoom. 4K60 recording doesn’t allow use of the telephoto lens at all.

All in all, the video recording quality of the new iPhone XS is massively improved in all areas: stabilisation, picture quality, and audio. 4K30 recording on the XS is probably the best I’ve seen on any smartphone, and Apple deserves definite applause for the improvements here.

Speaker Evaluation

Apple claimed to have improved the speaker audio quality on the new iPhone XS, allowing for more stereo separation and a more filling sound. I introduced a new speaker evaluation methodology a few months ago because this year’s efforts by smartphone vendors to improve speaker quality have been very pronounced, and I wanted a way to objectively convey these improvements.

Starting off with speaker loudness, we measure the phones at maximum volume, both in one-handed portrait mode and in a two-handed mode where the palms are cupped towards the user. These are the two use-cases in which I most often find myself using a phone’s speakers, so hopefully they also represent how most users listen; please let me know otherwise!

Speaker Loudness

Using a pink noise signal, the iPhone XS pretty much falls in line with the results of the iPhone X, coming in at a very loud 82.8dBA in portrait mode and 87.6dBA in two-handed mode. Apple’s sound directionality on the iPhone X and XS is among the best, most likely because the stereo earpiece is among the loudest of current-generation smartphones.
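For context, the dBA figures above are A-weighted sound pressure levels: the measurement discounts the very low and very high frequencies the ear is less sensitive to. Below is a minimal sketch of the standard IEC 61672 A-weighting curve; the function name and usage are my own illustration, not part of any particular measurement suite.

```python
import math

def a_weight_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672.

    The curve is ~0 dB at 1 kHz and rolls off towards low and high
    frequencies, roughly mimicking the ear's loudness sensitivity.
    """
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.00 dB offset normalises the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.00

# The weighting is applied per frequency band before the band powers
# are summed into a single dBA figure.
print(round(a_weight_db(1000.0), 2))  # ~0 dB at 1 kHz
print(round(a_weight_db(100.0), 1))   # low frequencies are strongly attenuated
```

This is why a phone with strong bass output doesn’t automatically post a higher dBA number: energy below a few hundred hertz is heavily discounted by the weighting.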

Measuring the frequency response of the speakers, we see the iPhone XS closely following the measurements of the iPhone X; however, there’s a major difference in the mid-range, where the XS is around 5dB louder, lifting instrumental frequencies and voices. This difference is what I think Apple is referring to when talking about better “fullness”, as it is evident when playing back media.
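As a refresher on what that 5dB gap means in practice, the per-frequency difference between two measured responses is simply the ratio of their amplitudes expressed in decibels; a 5dB lift corresponds to roughly 1.78x the amplitude. A quick sketch, with made-up amplitude values purely for illustration:

```python
import math

def db_difference(ref_amps, test_amps):
    """Per-frequency level difference in dB between two measured
    amplitude responses (positive = test device is louder)."""
    return [20.0 * math.log10(t / r) for r, t in zip(ref_amps, test_amps)]

# Hypothetical mid-range amplitudes: the second device measures about
# 1.78x the amplitude of the first at each band, i.e. roughly +5 dB.
ref_device  = [1.00, 1.00, 1.00]
test_device = [1.78, 1.78, 1.78]
print([round(d, 1) for d in db_difference(ref_device, test_device)])
```

Because the decibel scale is logarithmic, a 5dB mid-range advantage is clearly audible on voices even though the overall dBA loudness figures of the two phones are nearly identical.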

To better demonstrate the difference between the phones, I’ve attempted to capture them with a binaural microphone setup. I know my environment isn’t perfect, as I don’t have the necessary sound dampening equipment, but I hope it serves as an overall adequate A/B comparison between the phones. I’ve tried to calibrate the sound recorded by the setup to as flat a frequency response as possible, although I’m sure there are improvements to be made. As a baseline, I also included calibrated speakers to give an idea of the microphone setup.

The audio is meant to be listened to with headphones, or even better with IEMs, as this will give the intended playback of the binaural recording.

The iPhone XS’ improvements in the mid-range are quite evident as voices sound deeper and more pronounced on the new phone. Stereo separation is also quite good – resulting in a filling audio experience.

I included the S9+ and G7 as comparison devices. Samsung still does a significantly better job in the low-mid ranges, which gives the phone more overall presence than the iPhones, and it also has an advantage in the very high frequencies, giving more clarity. However, the new iPhone XS’ strong point in the mid-ranges is the S9’s weakness, and vocals sound a lot less present than on the XS.

As for the G7, I wanted to showcase a mono-speaker device and just how huge the audio difference is. Unfortunately the G7, even though it promises a good speaker, fails in practice.

253 Comments

  • zepi - Saturday, October 6, 2018 - link

Otherwise a nice idea, but the datacenter CPU market is too small to be interesting for Apple, as crazy as that is.

    Intel makes about $5b/quarter selling Xeons and other Datacenter stuff.

Apple makes some $50B. I don't think they would waste chip-development resources designing something for such a small "niche".
  • tipoo - Thursday, October 18, 2018 - link


Well, it would largely reuse the R&D they already do for iOS chips; making the high-performance cores is the hardest part, and scaling them up to more cores would be a fraction of the work.
  • varase - Tuesday, October 23, 2018 - link

    The Enterprise server business is already a crowded field, and it's not really something Apple has any expertise with.

    In Apple terms, it's not like there's a huge profit potential there, even if they were successful.

    Why put all that effort into learning, when most of their income comes from a portable consumer device they first released in 2007?
  • iwod - Saturday, October 6, 2018 - link

What is the other die area used for? The labels only cover ~half of the die. I could add image signal processing, and video encode and decode if that is not included in the GPU. We know Apple has included some FPGA in their SoC. But all of that accounted for is likely less than 25% of the die space. What about the other 25%?
  • Glaurung - Sunday, October 7, 2018 - link

    Hardware accelerators for anything and everything that can be hardware accelerated.

    Plus the "secure enclave" is also on there somewhere - a fenced off, cut down SOC within the SOC for handling logins/unlocking and other security stuff.
  • Antony Newman - Sunday, October 7, 2018 - link

    Andrei - This is an awesome review. Do you think Apple could roll out a low end laptop with 6 Vortex cores - or are there still SoC design areas that Apple still needs to address?

    AJ
  • Constructor - Sunday, October 7, 2018 - link

    I'm not Andrei, but my speculation on this would be:

    • It would make no sense to start with the weakest Macs because that would put the transition to Apple's own CPUs in a bad light from the start. As in the Intel transition 12 years ago they would need to start with the middle of their lineup (with iMacs and MacBook Pros) in order to demonstrate the strength of the new CPU platform and to motivate software developers to jump on board, including actually working on the new machines full time if possible.

    • They would need to have an emulation infrastructure for Intel legacy code in place like they did with Rosetta back then (also for Windows/Linux VMs!). And even in emulation that legacy code cannot be much slower than natively on then-current Intel machines, so their own CPUs already need to be a good bit faster than the corresponding Intel ones at the time in order to compensate for most of the emulation cost.

• As in 2006, this would have a significant impact on macOS, so at the announcement they would need to push at least developer versions of the new macOS to developers. Back in 2006 they had Intel-based developer systems ready before the actual Intel Macs came out – this time they could actually provide a macOS developer version for the then top-of-the-line iPads until the first ARM-based Macs were available (which already support Bluetooth keyboards now and could then just support Bluetooth mice and trackpads as well). But this also means that, as back then, they would need to announce the transition at WWDC to explain it all and to get developers on board.

    • Of course Apple would need to build desktop/notebook capable versions of their CPUs with all the necessary infrastructure (PCIe, multiple USB, Thunderbolt) but on the other hand they'd have more power and active cooling to work with, so they could go to more big cores and to higher clock speeds.

Again: this is sheer speculation, but the signs are accumulating that something like this may indeed be in the cards, with Intel stagnating and Apple still plowing ahead.

    I just don't think that it would be practical to put the current level of Apple CPUs into a Mac just like that even though from sheer CPU performance it looks feasible. These transitions have always been a massive undertaking and can't just be shot from the hip, even though the nominal performance seems almost there right now.
  • Constructor - Sunday, October 7, 2018 - link

    Oops – this forum insists on putting italics into separate lines. Oh well.
  • ex2bot - Sunday, October 7, 2018 - link

    Not to mention they’d have to maintain two processor architectures for an extended period. By that, I mean, I doubt they’d transition high-end Macs for a long, long time to avoid angering pros... again.
  • serendip - Monday, October 8, 2018 - link

    A real left field move would be for Apple to release a MacOS tablet running ARM, like a Qualcomm Windows tablet. I wouldn't rule it out considering how Apple went from a single product for the iPhone and iPad to making multiple sizes.
