The Image Signal Processor (ISP)

So what purpose does the ISP serve? Well, pixels are sensitive to light across a broad range of wavelengths; essentially, they're color agnostic. The way to get a color image out is to put a filter on top, usually a Bayer pattern color filter, then interpolate each pixel's color from its neighbors. Your 8 MP CMOS doesn't sense red, green, and blue at every pixel; it senses one color per pixel, and the ISP estimates the missing two based on what's adjacent. This is called demosaicing, it's probably the primary job of the ISP, and there are many secret-sauce methods for computing this interpolated image. In addition, the ISP does all the other housekeeping: it controls autofocus, exposure, and white balance for the camera system. More recently, correcting for lens imperfections like vignetting or color shading imparted by the imperfect lens system (which you'll add right back in with Instagram, you heathen) has joined the list, along with HDR recombination, noise reduction, other filtering, face or object detection, and conversion between color spaces. The exact feature set varies from ISP to ISP, but this is really the controller responsible for turning that Bayer data into a workable image array.
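
To make the demosaicing idea concrete, below is a minimal sketch of the simplest possible approach: plain bilinear interpolation over an RGGB Bayer mosaic, in Python with NumPy/SciPy. This is only an illustration of the concept; real ISPs use far more sophisticated, proprietary edge-aware algorithms, and the layout and function name here are assumptions rather than anyone's actual implementation.

```python
# Minimal bilinear demosaic of an RGGB Bayer mosaic (illustrative only;
# real ISPs use proprietary edge-aware algorithms, not plain averaging).
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2-D array of Bayer samples laid out RGGB. Returns an H x W x 3 RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True  # R on even rows, even cols
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True  # B on odd rows, odd cols
    g_mask = ~(r_mask | b_mask)                                 # G everywhere else

    # Standard bilinear kernels: each missing sample becomes the mean of its
    # nearest same-color neighbors. Zero padding leaves border pixels dim,
    # which is fine for a sketch.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = convolve2d(raw * r_mask, k_rb, mode='same')
    rgb[..., 1] = convolve2d(raw * g_mask, k_g,  mode='same')
    rgb[..., 2] = convolve2d(raw * b_mask, k_rb, mode='same')
    return rgb

bayer = np.random.randint(0, 1024, (8, 8))   # stand-in for 10-bit sensor data
print(demosaic_bilinear(bayer).shape)        # (8, 8, 3)
```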

Obviously the last part of the equation is the human interface, which is an ongoing pain point for many OEMs. There are two divergent camps in smartphone camera UX: deliver almost no options and let the ISP and software configure everything automatically (Apple), or offer nearly every option and toggle that makes sense to the user (Samsung), with other OEMs sitting somewhere in between (HTC, others). The ideal is probably opt-in exposure control for users who want it, with safe defaults for everyone else. There are still many players making horrible, almost unthinkable mistakes in this area, too. I wrote about how the iPhone 5 crops the preview to 16:9 yet captures a 4:3 image, and later was amazed to see the AOSP camera UI on the Nexus 4 deliver an arbitrarily shaped crop in the preview (not even 16:9 or something logical) while also capturing a 4:3 image. Composition unsurprisingly matters when taking a photograph, and it's mind-blowing to see established players blow off something as fundamental as the preview. In addition, preview framerate and resolution can be an issue on some platforms, to say nothing of outright broken or unstable camera UIs on some devices. Many OEMs with limited to no camera experience have been thrust into crafting a camera UI; previously, having a camera at all was the feature, much less offering controls for it. As the smartphone evolves from a camera of convenience into the primary imaging device for most people, robust controls for when the ISP and auto exposure fail will only become more important. Right now camera UI and UX are changing rapidly from generation to generation, with more and more serious toggles being added, and I don't think any one player has a perfect solution yet.
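
To put rough numbers on why that preview crop matters, here is a quick back-of-the-envelope sketch. The 8 MP resolution below is an assumed example rather than a figure for any particular phone, but the proportion holds for any 4:3 sensor shown through a 16:9 preview.

```python
# Rough sketch: how much of a 4:3 capture a 16:9 preview crop never shows.
# The resolution is an assumed 8 MP example, not a specific device.
capture_w, capture_h = 3264, 2448        # 4:3 capture frame
preview_h = capture_w * 9 // 16          # height of a 16:9 crop at the same width -> 1836
hidden_rows = capture_h - preview_h      # 612 rows never shown in the preview
print(f"Preview hides {hidden_rows} of {capture_h} rows "
      f"({hidden_rows / capture_h:.0%} of the captured field of view)")
# -> Preview hides 612 of 2448 rows (25% of the captured field of view)
```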

For video we also need to consider the encoder. The pipeline is much the same, though the ISP will usually request a center crop or a subsampled readout from the CMOS, depending on the capabilities of the sensor. The encoder takes these frames and compresses them into a format and at a bitrate of the OEM's or user's choosing, which at present basically means H.264. Not every encoder is the same, as Ganesh will tell you; there are a number of players supplying encoder IP blocks, and others using what they have built in-house. Many OEMs make the interesting choice to err on the side of not using too much storage and don't encode at the full capabilities of the encoder. In this latest generation of phones, we saw bitrates settle somewhere between 15 and 20 Mbps H.264 High Profile for 1080p30 video.
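
For a sense of what those bitrates mean in practice, here is a quick back-of-the-envelope calculation of the storage a minute of 1080p30 video consumes (video track only; audio and container overhead are ignored):

```python
# Back-of-the-envelope storage cost of 1080p30 video at typical bitrates
# (video track only; audio and container overhead ignored).
for mbps in (15, 17, 20):
    bytes_per_min = mbps * 1_000_000 / 8 * 60   # bits/s -> bytes/s, times 60 s
    print(f"{mbps} Mbps H.264 is roughly {bytes_per_min / 1e6:.0f} MB per minute")
# -> 15 Mbps ~112 MB/min, 17 Mbps ~128 MB/min, 20 Mbps ~150 MB/min
```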

Comments

  • Sea Shadow - Friday, February 22, 2013

    I am still trying to digest all of the information in this article, and I love it!

    It is because of articles like this that I check Anandtech multiple times per day. Thank you for continuing to provide such insightful and detailed articles. In a day and age where other "tech" sites are regurgitating the same press releases, it is nice to see anandtech continues to post detailed and informative pieces.

    Thank you!
  • arsena1 - Friday, February 22, 2013

    Yep, exactly this.

    Thanks Brian, AT rocks.
  • ratte - Friday, February 22, 2013

    Yeah, got to echo the posts above, great article.
  • vol7ron - Wednesday, February 27, 2013

    Optics are certainly an area the average consumer knows little about, myself included.

    For some reason it seems like consumers look at a camera's MP like how they used to view a processor's Hz; as if the higher number equates to a better quality, or more efficient device - that's why we can appreciate articles like these, which clarify and inform.

    The more the average consumer understands, the more they can demand better products from manufacturers and make better educated decisions. In addition to being an interesting read!
  • tvdang7 - Friday, February 22, 2013

    Same here they have THE BEST detail in every article.
  • Wolfpup - Wednesday, March 6, 2013

    Yeah, I just love in depth stuff like this! May end up beyond my capabilities but none the less I love it, and love that Brian is so passionate about it. It's so great to hear on the podcast when he's ranting about terrible cameras! And I mean that, I'm not making fun, I think it's awesome.
  • Guspaz - Friday, February 22, 2013

    Is there any feasibility (anything on the horizon) to directly measure the wavelength of light hitting a sensor element, rather than relying on filters? Or perhaps to use a layer on top of the sensor to split the light rather than filter the light? You would think that would give a substantial boost in light sensitivity, since a colour filter based system by necessity blocks most of the light that enters your optical system, much in the way that a 3LCD projector produces a substantially brighter image than a single-chip DLP projector given the same lightbulb, because one splits the white light and the other filters the white light.
  • HibyPrime1 - Friday, February 22, 2013

    I'm not an expert on the subject so take what I'm saying here with a grain of salt.

    As I understand it you would have to make sure that no more than one photon is hitting the pixel at any given time, and then you can measure the energy (basically energy = wavelength) of that photon. I would imagine if multiple photons are hitting the sensor at the same time, you wouldn't be able to distinguish how much energy came from each photon.

    Since we're dealing with single photons, weird quantum stuff might come into play. Even if you could manage to get a single photon to hit each pixel, there may be an effect where the photons will hit multiple pixels at the same time, so measuring the energy at one pixel will give you a number that includes the energy from some of the other photons. (I'm inferring this idea from the double-slit experiment.)

    I think the only way this would be possible is if only one photon hits the entire sensor at any given time, then you would be able to work out its colour. Of course, that wouldn't be very useful as a camera.
  • DominicG - Saturday, February 23, 2013

    Hi Hiby
    photodetection does not quite work like that. A photon hitting a photodiode junction either has enough energy to excite an electron across the junction or it does not. So one way you could make a multi-colour pixel would be to have several photodiode junctions one on top of the other, each with a different "energy gap", so that each one responds to a different wavelength. This idea is now being used in the highest efficiency solar cells to allow all the different wavelengths in sunlight to be absorbed efficiently. However for a colour-sensitive photodiode, there are some big complexities to be overcome - I have no idea if anyone has succeeded or even tried.
  • HibyPrime1 - Saturday, February 23, 2013

    Interesting. I've read about band-gaps/energy gaps before, but never understood what they mean in any real-world sense. Thanks for that :)
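
To tie a rough number to DominicG's point about energy gaps, the sketch below works out the photon energies across the visible band that stacked junctions would have to discriminate. The wavelengths and the silicon band-gap figure are textbook values used purely for illustration.

```python
# Photon energy E = h*c/lambda for representative visible wavelengths,
# compared against silicon's band gap (~1.12 eV). Any visible photon clears
# the gap, which is why a single junction can't distinguish colors on its
# own; stacked junctions with different gaps (as in multi-junction solar
# cells) are one way around that.
h = 6.626e-34       # Planck constant, J*s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt
si_gap_eV = 1.12    # silicon band gap, eV

for name, wavelength_nm in (("blue", 450), ("green", 550), ("red", 650)):
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:5s} {wavelength_nm} nm -> {energy_eV:.2f} eV "
          f"({'above' if energy_eV > si_gap_eV else 'below'} the Si band gap)")
# -> blue ~2.76 eV, green ~2.25 eV, red ~1.91 eV; all above 1.12 eV
```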
