The Camera Module & CMOS Sensor Trends

So after we have the lenses, what do they go into? It turns out there is some standardization, and that standardized packaging is called a module. The module consists of the lens system, an IR cut filter, a voice coil motor (VCM) for focusing, and finally the CMOS sensor and its fanout ribbon cable. Fancier systems with OIS contain a more complicated VCM and also a MEMS gyro somewhere in the module.

Onto the CMOS, which is of course the image sensor itself. Most smartphone CMOS sensors end up being between 1/4" and 1/3" in optical format, which is pretty small. There are some outliers for sure, but at the high end this is by far the prevailing trend. Optical format is again something we need to look up in a table or consult the manufacturer about. Front facing sensors are way smaller, unsurprisingly. The size of the CMOS in most smartphones has stayed relatively fixed, because going to a larger sensor would necessitate a thicker optical system, so the real trend behind increasing megapixel counts has been smaller pixels.
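As a rough rule of thumb (my own approximation here; exact dimensions vary slightly between manufacturers), the "inch" in an optical format designation maps to roughly 16 mm of actual sensor diagonal, a ratio inherited from the old vidicon tubes:

```python
def sensor_diagonal_mm(optical_format_inches):
    """Approximate sensor diagonal from a nominal optical format.

    The 'inch' in optical format traces back to vidicon tube
    diameters; the usable diagonal is roughly 2/3 of the nominal
    inch, i.e. about 16 mm per nominal inch.
    """
    return optical_format_inches * 16.0

# A 1/3" rear camera sensor works out to a diagonal of ~5.3 mm.
print(round(sensor_diagonal_mm(1 / 3), 1))  # ~5.3
```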

The trend in pixel size has been pretty easy to follow, with each generation moving to a smaller pixel to drive megapixel counts up. The current generation of pixels is around 1.1 microns square: basically any 13 MP smartphone is shipping 1.1 micron pixels, like the Optimus G, and interestingly enough others are using 1.1 micron pixels at 8 MP to drive thinner modules, like the thinner Optimus G option or the Nexus 4. The previous generation of 8 MP sensors used 1.4 micron pixels, and before that, at 5 MP, we were talking 1.65 or 1.75 micron pixels. Those are pretty tiny pixels: if you stop and think about a wave of very red light at around 700 nm, we're talking about 1.5 waves across a 1.1 micron pixel, around 2 waves at 1.4 microns, and so forth. There's really not much smaller you can go; it doesn't make sense to shrink below one wavelength.
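The arithmetic behind those "waves per pixel" figures is just the pixel pitch divided by the wavelength. A quick sketch, using 700 nm red light as in the text:

```python
# How many wavelengths of light span one pixel?
# All values in microns; 0.70 um is very red light (~700 nm).
RED_WAVELENGTH_UM = 0.70

def waves_across_pixel(pixel_pitch_um, wavelength_um=RED_WAVELENGTH_UM):
    """Express a pixel pitch in wavelengths of the given light."""
    return pixel_pitch_um / wavelength_um

for pitch in (1.75, 1.4, 1.1):
    print(f"{pitch} um pixel ~ {waves_across_pixel(pitch):.2f} waves of red light")
```

Running this gives roughly 2.5, 2.0, and 1.6 waves for the three pixel generations, which is why shrinking much further stops making sense.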

There was also a lot of talk about the difference between backside illumination (BSI) and front side illumination (FSI). BSI images directly through the silicon into the active region of the pixel, whereas FSI images through the metal layers, which incur reflections and shrink the collection area, and thus lose light. BSI has been around for a while in industrial and scientific fields for applications wanting the highest quantum efficiency (conversion of photons to electrons), and while it was adopted in smartphones to increase the sensitivity (quantum efficiency) of these pixels, there's an even more important reason. With pixels this small in 2D profile (e.g. 1.4 x 1.4 microns), the actual geometry of an FSI pixel began to look something like a long hallway, or a very tall cylinder. The result would be quantum blur: a photon imaged onto the surface of the pixel and converted to an electron might not necessarily map to the appropriate active region underneath, as it takes an almost random walk for some distance. In addition, the numerical aperture of these pixels wouldn't be nearly good enough for the optical systems they would be paired with.

Around the time I received the One X and One S last year, I finally became curious about whether we could ever see nice bokeh (blurry background) with an F/2.0 system and small pixels. While trapped on some flight somewhere, I finally got bored enough to go quantify what this would be, and a side effect was the question of whether an ideal, diffraction limited system (no aberrations; in other words, perfect optics) could even resolve a spot the size of the pixels on these sensors.

It turns out that we can't, really. If we look at the Airy disk diameter formed by a perfect, diffraction limited HTC One X or S camera system (the parameters I chose since at the time this was, and still is, the best system on paper), we get a spot size of around 3.0 microns. There's some fudge factor here, since interpolation takes place thanks to the Bayer grid atop the CMOS that then gets demosaiced (more on that later), so we're close to being at around the right size, but 1.1 micron pixels are clearly oversampling that spot.
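For the curious, the Airy disk diameter for a diffraction limited lens is d = 2.44 λN. The f-number and wavelengths below are my assumptions (roughly F/2.0 for the One X/S optics); green light lands a bit under the 3.0 micron figure and red light a bit over it:

```python
def airy_disk_diameter_um(wavelength_um, f_number):
    """Airy disk diameter (first null to first null) for a
    diffraction limited lens: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

# Assumed parameters: F/2.0 optics.
print(round(airy_disk_diameter_um(0.55, 2.0), 2))  # green light: ~2.68 um
print(round(airy_disk_diameter_um(0.70, 2.0), 2))  # red light:   ~3.42 um
```

Either way, the diffraction limited spot is roughly three 1.1 micron pixels across.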

Oh, and also here are some hyperfocal distance plots as a function of pixel size and F/# for the same system. It turns out that everything is in focus from pretty close on out with your average smartphone, so you have to be pretty close to the subject to get a nice bokeh effect.
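The hyperfocal distance behind those plots is H = f²/(Nc) + f. A minimal sketch, where the focal length and the circle of confusion are my assumptions (a typical ~3.6 mm smartphone focal length, and a circle of confusion of two pixel pitches):

```python
def hyperfocal_distance_m(focal_length_mm, f_number, coc_um):
    """Hyperfocal distance H = f^2 / (N * c) + f.

    Focusing at H puts everything from H/2 to infinity in
    acceptably sharp focus.
    """
    f = focal_length_mm * 1e-3   # meters
    c = coc_um * 1e-6            # meters
    return f * f / (f_number * c) + f

# Assumed parameters: ~3.6 mm focal length, F/2.0, and a circle
# of confusion of two 1.1 um pixel pitches.
h = hyperfocal_distance_m(3.6, 2.0, 2 * 1.1)
print(round(h, 2))  # ~2.95 m
```

With numbers like these, focusing anywhere past about 3 m renders everything out to infinity sharp, which is why smartphone backgrounds rarely blur unless the subject is very close.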

Comments

  • mdar - Thursday, February 28, 2013 - link

    You say "This is the standard format for giving a sensor size, but it doesn’t have anything to do with the actual size of the image circle, and rather traces its roots back to the diameter of a vidicon glass tube"
    The above statement, though partially true, is misleading. The dimension DOES track the sensor size, off by a factor of roughly 1.5. For example, if someone says 1/1.8" sensor, the sensor diagonal is ~1/(1.8*1.5) inches. The 1.5 factor probably comes from the vidicon glass tube.

    In fact, if someone wants just one parameter for image quality, it should be sensor size. Pixel technologies do improve (like using BSI), but even now the 1/3" sensor of the iPhone, Samsung, or Lumia 920 camera can just barely match the quality of the 1/1.8" sensor in the 4-year-old Nokia N8.
  • frakkel - Thursday, February 28, 2013 - link

    I am curious if you can elaborate a little regarding lens material.

    You say that today most lens elements are made of plastic. Is this both for front and rear facing camera lenses?

    I was under the impression that lens elements in phones were still made of glass, and that the industry is looking to change to plastic but this change has not happened yet. Please correct me if I am wrong; a link or two would not hurt :)
  • vlad0 - Friday, March 1, 2013 - link

    I suggest reading this white paper as well:

    it deals with a lot of the limitations of a smartphone camera in a very elegant way, and the results are sublime.

    Nokia solved several important issues the industry has been dealing with for a long time...
  • wally626 - Monday, March 4, 2013 - link

    Although the term bokeh is commonly used to refer to the effect of low depth-of-field techniques in pictures, it should only be used to refer to the quality of the out-of-focus regions of such photographs. It is much more an aesthetic term than a technical one. Camera phones usually have such deep depth of field that little is out of focus in normal use. However, with the newer f/2, f/2.4 phone cameras, when doing close focus you can get out-of-focus regions from the low depth of field.

    $file/cln35_bokeh_en.pdf

    is a very good discussion of this by Dr. Nasse of Zeiss.
  • wally626 - Monday, March 4, 2013 - link

    Someone fixed the Zeiss link to the Nasse article for me a while back, but I forgot the exact fix. In any case, a search on the terms Zeiss, Nasse, and bokeh should bring up the article.
  • mikeb_nz - Sunday, December 22, 2013 - link

    How do I calculate, or where do I find, the field of view (angle of view) for smartphone and tablet cameras?
  • oanta_william - Monday, July 20, 2015 - link

    Your insight in Smartphone Cameras is awesome! Thanks for everything!

    From your experience, would it be possible to have only the camera module on a device, with a micro-controller/SoC that has sufficient power ONLY for transmitting the unprocessed 'RAW' data via Bluetooth to another device, on which the ISP and the rest of the image processing would sit?

    I have a homework assignment regarding this. Do you know any reference material/books that could help me?

  • solarkraft - Monday, January 9, 2017 - link

    What an amazing article! Finally something serious about smartphone imaging (the processor/phone makers don't tell us ****)! Just an updated version might be cool.
  • albertjohn - Tuesday, November 20, 2018 - link

    I like this concept. I visited your blog for the first time and became your fan. Keep posting as I am going to read it everyday.
