Evaluating Image Quality

How do we evaluate the quality of an image? Tomes have been written on the subject, and there are many things to look for in a good image. Chief among them is sharpness, usually characterized by MTF, the modulation transfer function. That's a discussion in and of itself, but in short, an MTF plot shows how much of a target's contrast survives at a given spatial frequency. MTF also tells us the highest spatial frequency (spatial resolution) that will make it through the system at all: the cutoff frequency. There are other things to look for too, like the third-order aberrations.
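
To make the cutoff idea concrete, here is a minimal sketch of the MTF of an ideal, diffraction-limited lens. This is my own illustration rather than anything from the test data above, and the f/2.0 aperture and 550 nm wavelength are assumed values typical of a smartphone module:

```python
import numpy as np

def diffraction_mtf(spatial_freq, f_number, wavelength_um):
    """Approximate MTF of an aberration-free (diffraction-limited) circular aperture.

    spatial_freq: line pairs per mm (scalar or array)
    wavelength_um: wavelength in micrometers
    Returns 0 at and beyond the cutoff frequency.
    """
    # Cutoff frequency in cycles/mm: f_c = 1 / (wavelength * N)
    cutoff = 1.0 / (wavelength_um * 1e-3 * f_number)
    nu = np.clip(np.asarray(spatial_freq, dtype=float) / cutoff, 0.0, 1.0)
    # Classic diffraction MTF for an ideal circular pupil
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

# Example: an f/2.0 lens at 550 nm (green light)
freqs = np.array([50, 100, 200, 400, 800])          # lp/mm
print("cutoff (lp/mm):", 1.0 / (0.55e-3 * 2.0))     # roughly 909 lp/mm
print(diffraction_mtf(freqs, f_number=2.0, wavelength_um=0.55))
```

Past roughly 900 lp/mm nothing makes it through an f/2.0 aperture at all, no matter how finely the sensor samples the image.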

No camera system is perfect, and good design balances one aberration against another. If we look at field dependence, the most difficult part of the image for a designer is the edge of the field, where aberrations grow quickly.

The aberrations above are monochromatic; there are also aberrations that vary with wavelength, or color. Axial chromatic aberration can be corrected to some extent with an achromatic doublet, or at least minimized. Transverse (lateral) chromatic aberration is what we sometimes see as color fringing toward the edges of the frame, although in most commercial systems purple fringing is often an artifact of the ISP.
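
As a rough illustration of why a doublet helps with axial color, here is a sketch of the classic thin-lens achromat condition. The Abbe numbers and the 10 mm focal length are made-up example values, not data for any real module:

```python
def achromat_powers(total_power, v_crown, v_flint):
    """Thin, cemented achromatic doublet: split the total power between a crown
    and a flint element so the axial chromatic focal shift cancels to first order.

    Conditions: p1 + p2 = total_power and p1/V1 + p2/V2 = 0.
    """
    p1 = total_power * v_crown / (v_crown - v_flint)    # crown (positive) element
    p2 = -total_power * v_flint / (v_crown - v_flint)   # flint (negative) element
    return p1, p2

# Example with hypothetical Abbe numbers (crown ~60, flint ~36) and a 10 mm focal length
p1, p2 = achromat_powers(total_power=1 / 10.0, v_crown=60.0, v_flint=36.0)
print(p1, p2, p1 + p2)   # powers in 1/mm; the pair sums back to the 0.1 total
```

The crown element comes out stronger than the combination and the flint element negative, which is why achromats pair a positive and a negative lens.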

So what can we look for? Distortion shows up quickly, since smartphone systems are so wide angle. Chromatic fringing, since it is annoying and easy to notice on silhouetted subjects. Sharpness, obviously, is a big deal: does the image look blurry? Finally, any residual vignetting and lens color shading that survives despite much gnashing of teeth from the optical designers and lots of ISP tweaking (which, if you're like my ex-girlfriend, you're going to add right back in with Instagram or Twitter filters to look "vintage," you hipster). Test charts tell us a lot, and there are many good choices, but good test scenes sometimes tell even more.

I hate pictures of keyboards in smartphone reviews, since they're the laziest possible subject to photograph, but here's one I couldn't resist. The image is so noisy I can't read the keys, and the completely homogeneous desk looks awash with luminance noise. There isn't much chroma (color) noise, though.
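
A quick way to back up that kind of eyeball judgment is to split the noise in a flat patch into luminance and chroma components. This is just a sketch using a BT.601-style transform, and the crop coordinates in the usage comment are hypothetical:

```python
import numpy as np

def noise_breakdown(flat_patch_rgb):
    """Rough split of the noise in a flat (featureless) patch into luminance
    and chroma components using a BT.601-style Y'CbCr transform.

    flat_patch_rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    r, g, b = flat_patch_rgb[..., 0], flat_patch_rgb[..., 1], flat_patch_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return {"luma_std": float(y.std()),
            "chroma_std": float(np.hypot(cb.std(), cr.std()))}

# Usage (hypothetical crop of the desk surface):
# patch = image[500:600, 800:900, :] / 255.0
# print(noise_breakdown(patch))
```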

Here's one I complain about a lot: huge halos around high-contrast regions, thanks to the sharpening kernel or unsharp mask applied to the image. This is an attempt by the OEM to add back spatial resolution and contrast after killing it with noise reduction, and once you've seen halos you won't un-see them. We can also see some serious moiré at the bottom left, which is partly why I love that scene.
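
For reference, the unsharp mask operation itself is simple. The sketch below is my own toy example rather than any OEM's actual pipeline, with made-up radius and amount values; it shows how pushing the filter hard overshoots on both sides of a step edge, which is exactly where the halos come from:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.5, clip=True):
    """Classic unsharp mask: add back the difference between the image and a
    blurred copy of it. Aggressive radius/amount values overshoot on both
    sides of a hard edge, producing the halo artifact described above."""
    blurred = gaussian_filter(image, sigma=radius)
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0) if clip else sharpened

# A black-to-white step edge makes the overshoot easy to see
edge = np.repeat(np.array([0.0] * 16 + [1.0] * 16)[None, :], 32, axis=0)
raw = unsharp_mask(edge, clip=False)
print(raw.min(), raw.max())   # dips below 0 and rises above 1 near the edge: the halo
```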

This is a photo from a recently released device that clearly has some strong field curvature. Again, the center of the image is easy to get nice and sharp, but toward the edges it gets dramatically blurry. The center is easy; the edge of the field is hard.
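
One way to quantify that center-versus-edge falloff, assuming the scene has comparable detail across the frame, is to compare a simple sharpness metric in the center against the corners. This is a rough sketch of my own, and the 200-pixel patch size is arbitrary:

```python
import numpy as np

def gradient_energy(gray_patch):
    """Simple sharpness proxy: mean squared gradient magnitude of a patch."""
    gy, gx = np.gradient(gray_patch.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def center_vs_corners(gray, patch=200):
    """Compare sharpness at the image center against the four corners."""
    h, w = gray.shape
    center = gray[h // 2 - patch // 2: h // 2 + patch // 2,
                  w // 2 - patch // 2: w // 2 + patch // 2]
    corners = [gray[:patch, :patch], gray[:patch, -patch:],
               gray[-patch:, :patch], gray[-patch:, -patch:]]
    return gradient_energy(center), [gradient_energy(c) for c in corners]

# Usage (hypothetical): a large drop from the center score to the corner scores
# suggests field curvature or other off-axis softness.
# center_score, corner_scores = center_vs_corners(gray_image)
```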

There was a very popular phone that was criticized for purple-tinted stray light showing up in the image when a light source sat just outside the field of view. It turns out stray light is a big issue for everyone, since nobody wants a huge lens hood sticking out of their phone, or at least the industrial designers don't. This isn't an isolated problem for one vendor; everyone has it. I believe the purple tint gets picked up from a magnesium fluoride antireflection coating or some other AR coating.

The image on the left is from a very popular device, and the image on the right is from the next generation of that same device. The left image has a very pronounced green spot in the center and a definite red ring around the outside. Once you see this pattern, it's unlikely you'll be able to un-see it. I used to play a game on Reddit: look for the green circle in people's images, then go check the EXIF, and about 90 percent of the time I could nail which smartphone the photo came from, just from the green spot. This is a classic failure to correct for lens color shading: either the ISP couldn't do it or the vendor didn't characterize it well enough, but it was fixed in the next generation. These lens shading errors are especially annoying when photographing a subject that is a flat, monochromatic field, like a book, a whiteboard, and so forth.
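
For what it's worth, the correction itself is conceptually simple once the module has been characterized. Here is a minimal sketch of flat-field-based shading gains; real ISPs typically store a coarse per-channel gain grid and interpolate it, and the function names here are my own:

```python
import numpy as np

def shading_gain_maps(flat_field_rgb, eps=1e-6):
    """Build per-channel gain maps from a flat-field capture (a uniformly lit,
    featureless target). Dividing later images by the flat field's normalized
    response flattens both vignetting and the green-center/red-ring color cast."""
    gains = np.empty_like(flat_field_rgb, dtype=float)
    for c in range(3):
        ch = flat_field_rgb[..., c].astype(float)
        gains[..., c] = ch.max() / (ch + eps)   # the brightest point gets gain 1.0
    return gains

def correct_shading(image_rgb, gains):
    """Apply the stored gains to a new capture."""
    return np.clip(image_rgb.astype(float) * gains, 0, 255).astype(np.uint8)

# Usage (hypothetical): characterize once per module with a flat-field shot,
# then apply the stored gain maps during processing.
# gains = shading_gain_maps(flat_field)
# fixed = correct_shading(photo, gains)
```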

There are other things I look for as well: aggressive noise reduction, moiré again, and bad auto white balance are all pretty easy to spot. Another annoyance is cameras that completely miss focus, even on very contrasty scenes that should be easy for contrast-based autofocus.
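
Contrast-detect AF boils down to maximizing a sharpness score over lens positions. Here is a bare-bones sketch; the metric choice and the exhaustive sweep are simplifications of my own, since real implementations hill-climb from a coarse search:

```python
import numpy as np

def focus_measure(gray):
    """Contrast-detect AF metric: variance of a simple Laplacian response.
    Higher means more high-frequency detail, i.e. better focus."""
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return float(lap.var())

def pick_focus(frames):
    """Given frames captured while sweeping the lens, return the index of the
    sharpest one along with all the scores."""
    scores = [focus_measure(f.astype(float)) for f in frames]
    return int(np.argmax(scores)), scores

# Usage (hypothetical): frames = [capture(pos) for pos in lens_positions]
# best_index, scores = pick_focus(frames)
```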

Comments

  • ssj3gohan - Sunday, February 24, 2013 - link

    Couple of comments on this and your rant in the podcast :)

    First of all, you're lauding HTC for their larger pixel size and lamenting the move towards smaller pixels. But isn't it true that effective resolution, especially when your pixels are significantly smaller than the Airy disk, is basically a function of integration area? The only downside to using smaller pixels is that you increase the effect of read noise and decrease fill factor. In an ideal world, a 100MP phone camera with the same sensor size as a 10MP one would make pictures that are just as good. With read noise being essentially absent nowadays, I don't see the reason to particularly bash on 13MP phone cameras compared to larger-pixel but same-integration-area sensors. They make the same pictures, just take up a little less space on the SD card.

    Of course, you could make the argument that it's wrong to give in to the 'moar megapixels!' consumer side of things and try to educate people that sometimes less is more.

    Next, you say that refractive index and focal length is essentially what limits the focal length for very thin cameras, but this can be alleviated by using diffractive optics (not yet now, but in the future). We may very well see 3mm-thickness 35mm focal length equivalent camera modules with large sensors someday. It's technically possible. Especially with, as you said, nanodiamonds and other very high refractive index synthetic lens materials in the making.

    Next, about the resolving power. There's the Airy disk and Rayleigh's criterion, but this is not the end of resolving power. It does make sense to oversample beyond this point, you will get extra image information. It becomes exponentially less as you increase the megapixel count, but you can still get about 150% extra image information by oversampling beyond the size of the Airy disk. Again, in an ideal world without drawbacks to doing so, this does make sense.
  • tuxRoller - Sunday, February 24, 2013 - link

    Especially with the use of metamaterials that make use of negative indices of refraction to allow you to resolve detail beyond the diffraction limit?
  • ssj3gohan - Monday, February 25, 2013 - link

    Well, keep in mind that the reason you can resolve beyond the diffraction limit is the fact that the geometrical properties of the sensor and optics differ. Optics will by definition cause gaussian blur as their defect mode, while the sensor has square and offset pixels. These areas do not overlap perfectly, so in order to perfectly image that blurry optical image you need pixels that are smaller than the fundamental size of the diffraction pattern (airy disk).

    These optical effects don't go away when you're using metamaterials/quantum optics/etc. Light will still be a wave that will not necessarily enter the sensor perfectly perpendicular.
  • UltraTech79 - Monday, February 25, 2013 - link

    I have seen many, many reviews of lenses and the technical details of digital imaging etc., and almost every time the article would have really shitty JPG images. I found it highly ironic. Kudos to you for using PNG throughout this quality article.
  • AnnihilatorX - Monday, February 25, 2013 - link

    I was reading the review of Sony's Xperia Z at TechRadar, and I was astonished at how poorly the 13MP Exmor RS sensor performs. Frankly, the image looks blurry, more like it was taken at 5MP and scaled up, with heavy noise even in a well-lit scene:

    http://mos.futurenet.com/techradar/art/mobile_phon...

    While I don't really care too much about smartphone cameras, and I use my budget DSLR (cheaper than a smartphone) for my photography pleasure, I was wondering if the MP race and new-gen smartphones could eliminate the need for me to lug a DSLR around. If this article is correct on the physical limitations of smartphone camera technology, it looks like there is still a future for DSLRs.
  • danacee - Monday, February 25, 2013 - link

    Traditional, aka -crap- P&S cameras are clearly at a disadvantage now, with only the still very useful optical zoom keeping them alive. However, high-end 'big sensor' P&S cameras such as the not-too-young Sony RX100 are still many, many generations ahead of smartphone cameras; even the Nokia PureView has terrible image quality next to it.
  • pandemonium - Tuesday, February 26, 2013 - link

    I am surprised at the lack of mention of Carl Zeiss lenses in here. If you're going to write an article about lens quality and cameraphone technology, why wouldn't you include the best in the market for such? Or are we disputing that fact?

    Also, not all cameraphones suffer as much from dramatic lens flare discoloration issues as said "very popular phone."
  • ShieTar - Tuesday, February 26, 2013 - link

    Sure, you get a 3µm diffraction spot on your camera, and with 1.1µm pixels it gets oversampled. But that does not have to be a waste. As long as the diffraction pattern is well characterised, you can remove the diffraction effect through a deconvolution as part of your ISP. This even remains true for near-field optical effects that occur once your pixel size gets close to or below the imaging wavelength. As long as such corrections are implemented, and as long as your per-pixel noise is small enough for these algorithms to work, decreasing the pixel size does make a certain sense.

    Once noise becomes a larger problem than resolution, the smaller pixels do hurt, by wasting light through the larger crop factor and also by increasing the overall read-out noise. When exactly that point is reached depends on the light conditions you want to use your camera in, so it would be interesting to understand for which kind of conditions smartphone cameras are being optimised.
  • rwei - Wednesday, February 27, 2013 - link

    hurr hurr
  • theSuede - Wednesday, February 27, 2013 - link

    I don't know where your Rayleigh limit comes from, but in real-world optics, Rayleigh is:
    [1.22 x F# x wavelength], giving 1.3µm for green (550nm) light in an F2.0 lens.
    But maybe it's your interpretation of Rayleigh that is wrong, and that's where the error stems from. In the graphs, you show the spot resolution limit as 2x Rayleigh, and it isn't. Spot resolution is 1x Rayleigh, giving an F2.0 lens a maximum resolution of the aforementioned 1.3µm, NOT 2.6µm.

    The definition of Rayleigh:
    -"Two point sources are regarded as just resolved when the principal diffraction maximum of one image coincides with the first minimum of the other.

    "Just resolved" in this case means a resulting MTF of about 7% - i.e The minimum distance between two peaks where you can still resolve that they are two, not one large is equal to the RADIUS of the first null on the Airy disk. Not the diameter. This is quite a common error made by people from the "E" side of ElectrOptics.
