Recently I was asked to give a presentation about smartphone imaging and optics at a small industry event, and given my background I was more than willing to comply. There was no particular product or announcement that I crafted this presentation for, but I thought it worth sharing beyond just the event itself, especially in the recent context of the HTC One. The idea was to provide a high level primer on camera optics and general smartphone imaging trends, and to catalyze some discussion.

For readers here I think this is a great primer on the state of smartphone cameras if you're not paying super close attention to them, and on the imaging chain of a mobile device at a high level.

Some figures are from the incredibly useful Field Guide to Geometrical Optics by John Greivenkamp (which never leaves my side in book or PDF form); a few others are my own or from OmniVision or Wikipedia. I've put the slides into a gallery and gone through them pretty much individually, but if you want the PDF version, you can find it here.

Smartphone Imaging

The first two slides are just background about myself and the site. I did my undergrad at the University of Arizona, where I obtained a bachelor's in Optical Sciences and Engineering on the Optoelectronics track. I worked at a few relevant places as an undergrad intern for a few years, and made some THz gradient index lenses at the end. I think it's a reasonable expectation that readers are already familiar with AnandTech.

Next up are some definitions of optical terms. I think any discussion about cameras is impossible to have without at least introducing the index of refraction, wavelength, and optical power. I'm sticking very high level here. The index of refraction refers of course to how much the speed of light is slowed down in a medium compared to vacuum; this is important for understanding refraction. Wavelength is easiest to explain by mentioning color, and optical power refers to how quickly a system converges or diverges an incoming ray of light. I'm also playing fast and loose when talking about magnification here, but again in the camera context it's easier to explain this way.
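To make those definitions concrete, here's a quick Python sketch (my own illustration, not from the slides) showing how the index of refraction slows light and shortens wavelength inside a medium, and how Snell's law describes a ray bending at an interface. The specific glass and angle values are just example numbers.

```python
import math

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def speed_in_medium(n):
    """Phase velocity of light in a medium of refractive index n."""
    return C_VACUUM / n

def wavelength_in_medium(wavelength_vacuum_nm, n):
    """Wavelength inside the medium is compressed by the same factor n."""
    return wavelength_vacuum_nm / n

def snell_refracted_angle_deg(n1, n2, incident_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# Green light (550 nm in vacuum) entering crown glass (n ~ 1.52) at 30 degrees:
print(wavelength_in_medium(550, 1.52))           # ~361.8 nm inside the glass
print(snell_refracted_angle_deg(1.0, 1.52, 30))  # ~19.2 degrees, bent toward normal
```

Nothing surprising here, but it captures why a higher-index medium bends light more strongly, which is ultimately what gives a lens its optical power.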

Other good terms are F-number, the so-called F-word of optics. Most of the time in the context of cameras we're talking about working F-number, and the simplest explanation is that this refers to the light collection ability of an optical system. F-number is defined as the ratio of the focal length to the diameter of the entrance pupil. The normal progression for people who think about cameras is in square root two steps (full stops), each of which changes the light collection by a factor of two. Finally we have optical format or image sensor format, generally given in the notation 1/x" in units of inches. This is the standard format for stating a sensor size, but it doesn't have anything to do with the actual size of the image circle; rather it traces its roots back to the diameter of a vidicon glass tube. It should be thought of as analogous to the size class of a TV or monitor: it changes from manufacturer to manufacturer, but sensors of the same class are roughly the same size. Also, a 1/2" sensor is bigger than a 1/7" one.
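The F-number arithmetic above is easy to sketch in a few lines of Python (again my own illustration, with a made-up 4 mm smartphone lens as the example). Light collection scales with aperture area, so it goes as 1/N², which is why a √2 step in F-number is a factor of two in light.

```python
import math

def f_number(focal_length_mm, entrance_pupil_mm):
    """N = f / D, the ratio of focal length to entrance pupil diameter."""
    return focal_length_mm / entrance_pupil_mm

def relative_light(n_new, n_old):
    """Light collected at f/n_new relative to f/n_old (area scales as 1/N^2)."""
    return (n_old / n_new) ** 2

# A hypothetical 4 mm lens with a 2 mm entrance pupil is f/2.0:
print(f_number(4.0, 2.0))                        # 2.0

# Stopping down one full stop to f/2.8 collects about half the light:
print(relative_light(2.8, 2.0))                  # ~0.51

# With the exact sqrt(2) step the factor is exactly one half:
print(relative_light(2.0 * math.sqrt(2), 2.0))   # 0.5
```

The marked F-stops (2.8, 4, 5.6, ...) are rounded, which is why the real-world ratio comes out near, rather than exactly, one half.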

There are many different kinds of optical systems, and since I was originally asked just to talk about optics I wanted to underscore the broad variety of systems. Generally you can fit them into two different groups — those designed to be used with the eye, and those that aren’t. From there you get different categories based on application — projection, imaging, science, and so forth.

We're talking about camera systems however, and thus objective systems. A camera is roughly an approximation of the human eye, except that instead of the retina the image is formed on a sensor of some kind. Cameras usually implement features similar to the eye's as well – a focusing system, an iris, then an imaging plane.


Smartphone Imaging Constraints
60 Comments

  • ltcommanderdata - Saturday, February 23, 2013 - link

    http://www.sony.net/SonyInfo/News/Press/201208/12-...

    Sony's new Exmor RS sensors use a stacked structure which places the circuit section underneath the pixel section instead of beside it which should free up more room for more pixels or larger pixels within a given sensor module area or enable smaller modules. They also add a dedicated white channel for a RGBW coding which they claim improves low light performance. Any comment on the efficacy of these techniques?

    Apple has been using Sony image sensors for both the iPhone 4S and iPhone 5 so that new Sony IMX135 with 13.13MP, 1/3.06", 1.12 μm pixel sensor looks like a prime candidate for the iPhone 5S.
  • slatanek - Saturday, February 23, 2013 - link

Well done, Brian. At last someone trying to explain and stress how much more there is to tiny smartphone cameras than just the megapixels. A few years ago I felt like we got over it and the race was over, but now a few years later the race just continues in the smartphone realm (previously compact cameras had the same issue, where ultimate picture quality was compromised just to get a higher number of megapixels on the box). Hopefully this will change in the near future, but somehow I'm afraid that we're stuck with it just as with the horrible quality displays in notebooks. And it's not that people don't want good quality, it's just that the behemoth companies are not willing to take the risk.

    Anyways, thanks for a good read.
  • Shftup - Saturday, February 23, 2013 - link

Brian - Well done!

This is a fantastic article. It's well written, relatively easy to understand for any layman, and most importantly keeps the reader coming back for more.
  • jabber - Saturday, February 23, 2013 - link

    ....for a smartphone to give me as good a picture quality as I got from my 3.2MP Nikon from 2004.

    When I find one I'll be happy.

I think a lot of the phone companies need to start poaching the optics and software specialists from the camera companies. It's all very well looking at the spec lists for components, but it's another thing entirely to make them all work together to produce a decent picture.

In the past, for me, near-decent cams have been ruined by overzealous compression settings (Palm Pre2) or what appears to be zero configuration of the imaging processors (Nexus4).

    No one yet has tweaked every part of the chain to provide a truly viable alternative to taking a $250+ camera along instead.

    Never had anything to do with megapixels IMO. It's other factors that let them down.

    I reckon in another 2 years we'll have it pretty much there.
  • MrSpadge - Sunday, February 24, 2013 - link

    Same here, my Sony DSC-W5 from 2005 or so is still way better than my smartphone. If I could get at least similar quality in a new phone that would easily make it worth 50 - 100€ more for me.
  • Tarwin - Saturday, February 23, 2013 - link

    I enjoyed the article, it cleared up some doubts I had and taught me a few new things.

In light of this article I hope you go into more detail in the One review when you eventually do it. I assume you will give special attention to the camera due to how it goes against the trend and HTC's focus on it, but I also hope you mention how it fits into all this a bit for those of us who read both articles. Thanks again.
  • DominicG - Saturday, February 23, 2013 - link

    Hi Brian,

    much enjoyed your intro to camera optics. However you state:
    "If we look at the airy disk diameter formed from a perfect diffraction limited ... we get a spot size around 3.0 microns"

You mentioned this is a back illuminated chip, so the light is focusing in a medium of refractive index ~3.5. Therefore the wavelength of red light inside the medium is ~700/3.5 = 200 nm - still much smaller than the pixel. It is a bit more complicated than this, since optical resolution is of course determined not only by the wavelength but also the numerical aperture (NA) - the light refracts at the air-silicon interface, so a beam in air at 37 deg (2ω = 75 deg - your example lens) becomes a beam in silicon of only 10 deg. 10 deg in silicon gives a theoretical resolution (Rayleigh criterion) of 0.7 microns. Anyway, did you take all these factors into account?
  • fokka - Sunday, February 24, 2013 - link

thanks for this insightful article, brian, that's the sort of read i visit this site for regularly!

    i have to say that i expected a little more (side-) content, though, like a more thorough look at the htc one initially pictured, one of the two reasons i clicked on the article. also a comparison with nokias pureview-approach would've been nice, since it's on the complete other side of the spectrum.

    but this is just to nitpick, as i enjoyed the read none the less.
  • StormyParis - Sunday, February 24, 2013 - link

    Great article
  • StormyParis - Sunday, February 24, 2013 - link

Seeing the interest everyone has in good pictures, and the relatively small size and cost of the lens apparatus, wouldn't it make sense to have 2 distinct cameras on a smartphone, and coordinate them to get better image quality?
