The Pixel 3 Camera - Computational Photography

The Pixel 3 makes big promises with regard to its camera: Google staunchly claims it has the best camera on a smartphone, without a doubt. On the hardware side of things, the Pixel 3 doesn’t seem to bring all that many upgrades, as we’re still seeing a 12.2MP sensor in the form of a Sony IMX363, a seemingly generational update to last year’s IMX362. The sensor sits behind an f/1.8 aperture lens and the module supports OIS. One difference to last year’s Pixels is that we no longer find a laser-autofocus mechanism on the new phone, so the new sensor must have improved its phase-detection autofocus capabilities.

Google promises two big new features that aim to actually improve the picture quality of still pictures: Super Res Zoom, and Night Sight.

Super Res Zoom is an application of geometric super-resolution image processing: in this mode Google captures multiple frames in quick succession and infers a higher resolution image from them. Google doesn’t explain exactly where this processing is done, but it’s possible it’s a neural network algorithm running on the Pixel Visual Core (which on the Pixel 3 is the same silicon as on the Pixel 2). Google actually isn’t the first to introduce such a zoom method, as Huawei uses similar algorithms to achieve its hybrid 2x and 5x zoom modes, with the difference being that Huawei combines image inputs from its different sensors, while Google uses temporally separated images from the same sensor. The resulting image should be superior to a simple digital zoom – however there are diminishing returns on the amount of spatial resolution that can be reconstructed using this method.
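Google doesn’t disclose how the merge is actually implemented, but the underlying shift-and-add idea behind geometric super resolution can be sketched in a few lines of Python. The following is only a rough illustration that assumes the per-frame sub-pixel offsets are already known: the super_res_merge helper, the synthetic frames, and the offsets are made up for the example, and a real pipeline would perform robust tile-based alignment and a far more sophisticated merge.

# Minimal shift-and-add super resolution sketch, assuming the sub-pixel
# offset of every frame relative to the first is already known.
import numpy as np

def super_res_merge(frames, offsets, scale=2):
    """frames: list of HxW arrays; offsets: list of (dy, dx) in source pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))   # high-resolution accumulator
    weight = np.zeros_like(acc)              # how many samples landed in each cell
    for frame, (dy, dx) in zip(frames, offsets):
        # Map every low-res sample onto its nearest position in the finer grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    weight[weight == 0] = 1                   # leave unfilled cells at zero instead of dividing by zero
    return acc / weight

# Four frames offset by half a pixel are enough to populate a 2x denser grid.
rng = np.random.default_rng(0)
frames = [rng.random((100, 100)) for _ in range(4)]
offsets = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]
print(super_res_merge(frames, offsets).shape)  # (200, 200)

On the phone it is the natural hand shake between captures that supplies such sub-pixel offsets, which is why the technique can work from a single sensor rather than requiring multiple camera modules.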

Night Sight uses an algorithm very similar to super resolution, with the difference being that instead of using multiple captures to achieve higher spatial resolution, it vastly increases the resulting image exposure while applying significant noise reduction. Again, Huawei was the first to employ a multi-frame stacking algorithm some generations ago, but only this year with the P20s was it also first to employ the critical combination of multi-frame stacking along with the ability to stack the frames correctly despite spatial offsets of objects between frames, in order to avoid blurring. Google’s mechanism inherently doesn’t seem to differ much from Huawei’s in what it does, but the implementation itself and the results may obviously differ. I expect we’ll be seeing a lot more vendors introduce similar techniques in upcoming devices, as they can bring greatly improved low-light imaging.
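To make the stacking mechanism more concrete, below is a minimal sketch of aligned multi-frame averaging. It is emphatically not Google’s or Huawei’s actual pipeline, just the basic principle: it assumes OpenCV and NumPy are available and aligns each frame globally against a reference, whereas a production implementation would align small tiles independently and reject tiles with too much motion in order to avoid ghosting.

# Rough sketch of multi-frame stacking for low light: align each capture to a
# reference frame, then average the stack to suppress noise.
import cv2
import numpy as np

def stack_low_light(frames):
    """frames: list of same-sized single-channel uint8 captures of one scene."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    # Stop the ECC alignment after 50 iterations or once it converges.
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    for frame in frames[1:]:
        cur = frame.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)  # initial guess: identity translation
        # Estimate the global shift that maps this frame onto the reference.
        _, warp = cv2.findTransformECC(ref, cur, warp, cv2.MOTION_TRANSLATION, criteria)
        aligned = cv2.warpAffine(cur, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
    # Averaging N aligned frames cuts random noise by roughly sqrt(N), which is
    # what lets the merged image be brightened aggressively afterwards.
    return np.clip(acc / len(frames), 0, 255).astype(np.uint8)

The noise-versus-exposure trade-off is the whole point: each individual frame is short enough to avoid motion blur, while the merged stack reaches a signal-to-noise level that a single much longer exposure would have needed.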

Night Sight pre-release APK credits

Officially, Google has to date not released Night Sight for public usage on the Pixel 3 family – this was a source of great annoyance to me, as camera testing is a major part of our reviews that requires a significant time investment. Luckily, XDA member cstark27 was able to figure out that the module could be enabled in the camera APK by simply flipping a settings flag. The timing was perfect, as I was heading out with 18 devices in my pockets the very next day – and for this review I used the supplied modified camera APK alongside the stock camera, which spares me a lot of headaches in having to revisit the camera in the near future.

It’s also important to note that the resulting pictures might not be representative of the final camera software that Google plans to publicly release – however, in testing I found only minor issues with it, and the resulting pictures should be very close to the final product.

Camera - Daylight Evaluation - Superzoom and Scenic

I’m taking advantage of this review to essentially review all major cameras released this year along with some of their predecessors from last year. This involves a major 18-device shootout in various capturing modes. Included alongside the new Pixel 3 units are also the new Huawei Mate 20 and Mate 20 Pro, for which we’ll do a follow-up review shortly after this piece. Unfortunately the LG V40 didn’t make it in time for the camera shootout, so we’ll revisit that one in a smaller scope in a few weeks’ time.

In the first round of scenarios I’m focusing on the Pixel 3’s new “Super Res Zoom” and how that stacks up to past generation Pixels, as well as the nearest competitors with optical zoom modules.

Click for full image
[ Pixel 3 ]
[ Pixel 2 ] - [ Pixel XL ]
[ Mate 20Pro ] - [ Mate 20 ]
[ P20Pro ] - [ P20 ] - [ Mate 10Pro ]
[ iPhone XS ] - [ iPhone X ] - [ Note9 ] - [ S9+ ]
[ S8 ] - [ LG G7 ] - [ LG V30 ] - [ OnePlus 6 ]
[ OPPO FindX ] - [ MIX2S ]

In this first set we see the Pixel 3’s zoom notably improve detail compared to just a digital crop of the full-frame image. The edges are more defined and the phone even manages to resolve some details that weren’t visible in the full-frame crop.

The comparison to the Pixel 2 reveals the crucial difference between a simple crop of the full-resolution frame and the Super Res Zoom algorithm: the 5x zoom shot is the easiest to dissect in this regard – the details on the clock face look visibly improved and we see new edges in the Pixel 3 image that weren’t present on the Pixel 2.

Comparing the zoomed-in results to any of the optical zoom module competition, we however see that there’s still a major difference in quality: even though the Pixel 3 improves on its spatial resolution, I’d say it only manages to do so up to the equivalent of a 1.5x zoom. Here the actual gains will depend on the granularity of the sub-pixel image localisation that Google uses. If it’s actually just on a sub-pixel level, then a 1.5x / 50% increase in spatial resolution is the logical limit of what we should expect of such an implementation, and further “sub-sub-pixel” increases would require more complex algorithms and more frame captures.

In the wide-angle shot, the Pixel 3 doesn’t differ too much from the Pixel 2. The only notable change is a slight difference in colour temperature, producing a colder image than the Pixel 2 – a characteristic of the new camera that we’ll see prevail throughout all of the upcoming scenes.

Click for full image
[ Pixel 3 ]
[ Pixel 2 ] - [ Pixel XL ]
[ Mate 20Pro ] - [ Mate 20 ]
[ P20Pro ] - [ P20 ] - [ Mate 10Pro ]
[ iPhone XS ] - [ iPhone X ] - [ Note9 ] - [ S9+ ]
[ S8 ] - [ LG G7 ] - [ LG V30 ] - [ OnePlus 6 ]
[ OPPO FindX ] - [ MIX2S ]

In the next scene, we can apply the same zoom conclusions as on the first shot. The Pixel 3’s Super Res Zoom is a definite improvement over just a digital zoom, however again the spatial resolution increases are limited and cannot compete with optical zoom modules.

In the wide shot, the Pixel 3 again produces a colder image than the Pixel 2, but overall the processing and image are pretty much identical. A characteristic of Google’s phones that we’ll see throughout these pictures is that the processing likes to darken the shadows more than what the sensor actually sees. This is most visible in the trees here, as the pines in the middle picture lose a lot of detail compared to any other phone – something that also happens throughout the darker objects of the whole scene.

Here the Pixel 3 achieves nice contrast in the picture, but it’s just a tad darker than the scene was in reality; the iPhone XS’s result is much closer to an accurate representation of the actual scene, with many of the competing phones falling in between these two in terms of their HDR results.

Click for full image
[ Pixel 3 ]
[ Pixel 2 ] - [ Pixel XL ]
[ Mate 20Pro ] - [ Mate 20 ]
[ P20Pro ] - [ P20 ] - [ Mate 10Pro ]
[ Note9 ] - [ S9+ ] - [ S8 ] - [ LG G7 ] - [ LG V30 ]
[ OnePlus 6 ] - [ OPPO FindX ] - [ MIX2S ]

In the next scene we again have a good opportunity to compare the Pixel 3’s super zoom to the results of the Pixel 2: we see again a definite improvement, but one that is again limited to roughly a 50% gain in spatial resolution. Beyond this, the optical zoom competition again manages to vastly outpace the Pixel 3 in terms of clarity.

The wide shot here is actually quite tough, as half of the scene lies in deep shadow while the top half is sun-lit. For fun, I also tried the Night Sight mode in many of the daylight pictures to see if there was any major difference in processing. The one difference I can see is a change in colour temperature, with the Pixel 3’s auto mode again producing ever so slightly colder images.

The P20 Pro in its 10MP mode clearly has the best dynamic range in the resulting shot, followed by the MIX2S, OP6, and FindX. The Pixel phones’ shots offer very good contrast and do very well in bringing out highlights in the dark areas, but this comes at a great cost, with very dark shadows nearly clipping into black in many parts of the scene.

Detail-wise, while the Pixel 3 isn’t doing well in the shadows, it is able to outclass many other devices in terms of overall detail in the rest of the scene, and has absolutely no issues with loss of detail at the frame edges, pointing to a high quality lens.

Click for full image
[ Pixel 3 ]
[ Pixel 2 ] - [ Pixel XL ]
[ Mate 20Pro ] - [ Mate 20 ]
[ P20Pro ] - [ P20 ] - [ Mate 10Pro ]
[ iPhone XS ] - [ iPhone X ] - [ Note9 ] - [ S9+ ]
[ S8 ] - [ LG G7 ] - [ LG V30 ] - [ OnePlus 6 ]
[ OPPO FindX ] - [ MIX2S ]

The Pixel 3’s tendency to clip shadows to black and to simply be under-exposed is again visible in this scene, where part of the valley lies in shadow. Google does excellently in the foreground trees and maintains a great amount of contrast, but other devices simply have significantly better dynamic range in large parts of the picture.

Click for full image
[ Pixel 3 ] - [ Pixel 2 ] - [ Pixel XL ]
[ Mate 20Pro ] - [ Mate 20 ] - [ P20Pro ]
[ P20 ] - [ Mate 10Pro ] - [ iPhone XS ] - [ iPhone X ]
[ Note9 ] - [ S9+ ] - [ S8 ] - [ LG G7 ] - [ LG V30 ]
[ OnePlus 6 ] - [ OPPO FindX ] - [ MIX2S ]

In less demanding lighting conditions, the Pixel phones fare significantly better with shadows, and actually perform very well. A tendency that is continuously present is that Google likes to bring down the sky’s brightness – this could be a reason why dynamic range at the lower end isn’t quite as good as on other phones, which tend to have brighter or even blown-out sky exposures.

In terms of detail, the Pixel 3 is among the top performers – only outclassed by Samsung’s recent Note9 and S9 sensors, whose improved deep trench isolation is a step above other phones and which manage to retain much more of the stone’s grain as well as the fine details of the statues. Of course, when the lighting conditions allow it, Huawei’s 40MP cameras always win detail-wise because of their enormous resolution advantage.

135 Comments

  • Impulses - Saturday, November 3, 2018 - link

    They're not using bad sensors, I mean, they often manufacture everyone else's; the post processing is often horrid for Sony though. You'd think they could get someone from their dedicated camera division to better tune that (they don't have the greatest JPEG engine either, but it's gotten better every year).
  • Edwardmcardle - Friday, November 2, 2018 - link

    Awesome review as always. Will there be a mate 20 pro review? Have one and am considering returning because of odd screen issue. Also the new performance mode seems to suck battery, but animations seem laggy when not engaged...would be great to have a professional insight!
  • luikiedook - Friday, November 2, 2018 - link

    Excellent review, and comparison photos galore! I think a lot of the daylight photos are a matter of opinion. Personally I find the iPhone Xs and Samsung photos over-exposed and less pleasing than the Pixel photos. The outdoor seating area, for example: the black table tops look reflective and almost white in the iPhone Xs and Samsung photos.

    Most of the time the Pixel photos are darker, but I'm not convinced there is less detail, most of the time.

    The p20 pro seems to crush everything in 5x zoom.
  • melgross - Sunday, November 4, 2018 - link

    The shadows on the Pixel are all blocked up. It’s pretty obvious. Some people mistakenly equate black shadows with better contrast, as Google apparently does, but that’s wrong. You can always darken the shadows later in a quick edit. But if the detail is killed on the photo, you can never retrieve it.
  • Dr. Swag - Friday, November 2, 2018 - link

    Hey Andrei, you got the displays mixed up. The 3XL uses a Samsung amoled panel whereas the 3 uses a p-oled from LG. The table on the first page says the opposite.
  • warrenk81 - Friday, November 2, 2018 - link

    haven't even read the article yet, just want to say i'm so happy to see smartphones review return to Anandtech!!
  • spooh - Friday, November 2, 2018 - link

    The Pixel XL used in the review has an optics issue affecting corner sharpness and light falloff. I think it's also slightly less sharp than a good unit.
    I've had one with the same issue, but returned it.
  • id4andrei - Friday, November 2, 2018 - link

    The Verge reviewer Vlad Savov is a big fan of Google's computational photography. He makes it seem like the Pixel is clearly above the latest flagships. Your expansive review paints a different picture, that of a phone that tries to keep up with a single camera module.

    On a personal level I have a dilemma. Isn't computational photography basically post-processing? Even if it produces a subjectively better outcome out of stitching several shots, isn't it "fake" somehow as it is not an accurate representation of a frame?
  • Andrei Frumusanu - Friday, November 2, 2018 - link

    > Even if it produces a subjectively better outcome out of stitching several shots, isn't it "fake" somehow as it is not an accurate representation of a frame?

    Not really. If a sensor fails to have sufficient dynamic range by itself, then even with no processing that's also going to lead to an "inaccurate representation".
  • Impulses - Friday, November 2, 2018 - link

    It's a little fancier than the post processing you could (easily) manage yourself, mostly cause of the way they chop up frames in tiles to then stack them intelligently... You could say it's "fake" in instances where their algorithm just decided to drop a tile to avoid artefacts or movement etc., but wouldn't you just clone those out yourself if you were anal about the overall end result?

    It's an interesting question without a straightforward answer IMO. It's just gonna vary by shot and usage case, if you're getting consistently better DR then you're consistently closer to "what you see", but all photography is ultimately an interpretation.
