Samsung this week has introduced the industry’s first 108 megapixel image sensor for smartphones. The Samsung ISOCELL Bright HMX promises to allow smartphones to take photos comparable to those from DSLR cameras. One of the first handsets to use the new sensor will be a phone from Xiaomi.

Samsung’s flagship 108 MP ISOCELL Bright HMX is a fairly large 1/1.33-inch image sensor for smartphones. Thanks to the combination of its sheer size and the underlying sensor technology, the Bright HMX is designed to absorb more light than existing image sensors, allowing it to offer better photos even in low-light conditions. The sensor also supports the company’s 2x2 pixel-binning Tetracell technology, which merges four pixels into one to produce brighter 27 MP photos. In addition, the ISOCELL Bright HMX features Samsung's Smart-ISO capability, which uses high ISOs in darker settings to reduce noise and low ISOs in brighter settings to improve pixel saturation.
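The 2x2 merge is straightforward to model; below is a minimal NumPy sketch that treats the frame as a single luminance channel for simplicity (on the real sensor the merge happens before readout, across same-color Tetracell pixels):

```python
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Merge each 2x2 block of pixels into one by summing.

    Summing four pixels quadruples the collected signal per
    output pixel, which is why binned 27 MP shots are brighter.
    """
    h, w = frame.shape
    return (frame.reshape(h // 2, 2, w // 2, 2)
                 .sum(axis=(1, 3)))

# A 4x4 toy "sensor" readout becomes a 2x2 binned image.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
binned = bin_2x2(raw)
print(binned.shape)  # (2, 2)
```

The same reshape trick scales to the full 12032 x 9024 frame, producing the 27 MP output Samsung quotes.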

When it comes to video, the HMX supports recording at up to 6K (6016 x 3384) resolutions at 30 frames per second, with the same field-of-view as photo mode.
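As a back-of-the-envelope check on what the host SoC must sustain in that mode, a quick sketch (the 10-bit raw readout depth is an assumption for illustration, not a published Samsung spec):

```python
width, height, fps = 6016, 3384, 30
pixels_per_frame = width * height          # pixels in one 6K frame
pixel_rate = pixels_per_frame * fps        # pixels per second
bits_per_pixel = 10                        # assumed raw readout depth
raw_bandwidth_gbps = pixel_rate * bits_per_pixel / 1e9

print(f"{pixels_per_frame / 1e6:.1f} MP/frame")  # 20.4 MP/frame
print(f"{pixel_rate / 1e6:.0f} MP/s")            # 611 MP/s
print(f"{raw_bandwidth_gbps:.1f} Gbit/s raw")    # 6.1 Gbit/s raw
```

Roughly 611 MP/s of raw pixels is well beyond what older mobile ISPs were designed for, which underlines why sensor and SoC have to be matched.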

One thing to note is that while the 108 MP ISOCELL Bright HMX sensor can offer a rather high image resolution, it's not a complete system in and of itself. Smartphone manufacturers will need to use SoCs that can handle such a large sensor, both in terms of consuming the sensor's 108 MP output and in terms of offering the processing power (ISP and otherwise) for all the post-processing expected of a modern smartphone camera.

Samsung plans to begin mass production of the 108 MP ISOCELL Bright HMX image sensor later this month. Meanwhile, Samsung has confirmed that partner Xiaomi will be the first to use the sensor, though the companies are not saying what that first device will be.


Source: Samsung


39 Comments


  • JanW1 - Wednesday, August 14, 2019 - link

    It's about signal-to-noise ratio. Increasing signal by a factor of four may be unrealistic; reducing noise by that much, maybe not. In practice it will be a combination of both.
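For the shot-noise-limited case, the scaling is easy to check; a minimal sketch assuming photon shot noise (square root of the signal) plus an optional fixed read-noise floor, added in quadrature:

```python
import math

def shot_limited_snr(signal_electrons: float, read_noise: float = 0.0) -> float:
    """SNR under a simple model: photon shot noise (sqrt of signal)
    plus an optional read-noise floor, combined in quadrature."""
    noise = math.sqrt(signal_electrons + read_noise ** 2)
    return signal_electrons / noise

# Quadrupling the collected signal only doubles the shot-limited SNR.
print(shot_limited_snr(1000))   # ~31.6
print(shot_limited_snr(4000))   # ~63.2
```

Quadrupling the signal doubles SNR here, which is one reason "4x" light-gathering claims rarely translate into 4x cleaner images.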
  • Zizy - Wednesday, August 14, 2019 - link

    I agree that the 9-year-old sensor has been greatly surpassed. Heck, the Lumia 1020 with its 1/1.5" sensor was about the level of 6-year-old DSLRs at the time of its release, if you check reviews. (Compared to a kit lens; grab, say, a telephoto lens and phones will take another decade to catch up.)
    But 5-10 years ahead is BS. When a new APS-C/FF/… sensor gets released, it uses approximately the same tech (slightly behind) as in phones. It is updated much less often, though, and the image-processing stuff is way behind the phones.
  • zodiacfml - Tuesday, August 13, 2019 - link

    No, even a recent 24 MP DSLR with a kit lens will find it hard to compete with current high-end smartphones because of HDR, the kind available in Google's camera app.
    An APS-C kit lens only resolves roughly 4-6 MP of pixel-sharp detail, and smartphones are really close. Dynamic range and noise go to smartphones with HDR.
    Video capability and device usability also go to smartphones. It is sad that I haven't touched my conventional cameras for at least two years now.
    The latest Sony cameras are nice to have, but I couldn't justify the cost.
  • Fritzkier - Wednesday, August 14, 2019 - link

    Actually, video capabilities on smartphones are pretty bad. Too much over-sharpening, which gives you "a smartphone look".
  • FunBunny2 - Wednesday, August 14, 2019 - link

    "Sensor size matters and even APS-C is over 4x the size of a 1/1.33" sensor. "

    I wonder, though I don't know, whether capture accuracy scales down with pixel size. Most cameras have APS-C, except a Sony and the multi-thousand-dollar pro cameras. IOW, if real cameras used sensor pixels from smartphone sensors, how many would there be in a 24×36 mm sensor? Billions?
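The arithmetic behind that question is quick to run; a sketch assuming the HMX's 0.8 µm pixel pitch tiled across a 24 × 36 mm full-frame area:

```python
pixel_pitch_um = 0.8                     # ISOCELL Bright HMX pixel size
sensor_w_mm, sensor_h_mm = 36.0, 24.0    # full-frame ("24x36") dimensions

pixels_w = sensor_w_mm * 1000 / pixel_pitch_um   # 45,000 across
pixels_h = sensor_h_mm * 1000 / pixel_pitch_um   # 30,000 down
total = pixels_w * pixels_h

print(f"{total / 1e9:.2f} gigapixels")  # 1.35 gigapixels
```

So "billions" is about right, if only just: roughly 1.35 gigapixels.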
  • p1esk - Tuesday, August 13, 2019 - link

    Here's a crazy idea: why not just have a huge sensor with no lens, and train a neural network to map its raw unfocused input to what a sensor with a lens would receive? Combine this with another crazy idea: use the entire OLED display of a smartphone as a sensor. To train the neural net, walk around recording video on your smartphone with both its screen and its front camera.
  • saratoga4 - Tuesday, August 13, 2019 - link

    >Here's a crazy idea: why not just have a huge sensor with no lens, and train a neural network to map its raw unfocused input to what a sensor with a lens would receive.

    While lensless imaging does exist, if you just have a bare sensor with no optics and point it at a scene, you record a featureless, solid grey frame. No amount of post-processing can turn an image in which every pixel is identical into anything meaningful. You have to have actual data to feed into the processing.

    >use the entire OLED display of a smartphone as a sensor

    OLEDs cannot sense light, only produce it.
  • p1esk - Thursday, August 15, 2019 - link

    > OLEDs cannot sense light, only produce it.

    They can; you just need to reverse the polarity, which can be done when pressing the camera button. But the efficiency will probably be very bad. Still, if you make the sensor 100x larger and the efficiency degrades 20x, you still end up with a 5x better sensor.

    As others said, the main problem is you still need some kind of lens.
  • Skeptical123 - Wednesday, August 14, 2019 - link

    I've thought of this too; the issue is that it's impossible without more data, the same way it's not possible to just keep zooming in on a photo/video even if you have a supercomputer. You would need something like a Lytro camera system on a single die for this to work, which is about one or two "levels" above an idea like this, to be frank. But hey, no harm no foul. I'm just waiting to see smartphone companies cut holes in the screen/bezel to be able to put the sensor as far back as possible, with "bezel-less" phones. More likely now would be to just integrate them on the same substrate in manufacturing, where companies are willing to increase the hardware cost to save that 0.5 mm on overall camera bump and/or device thickness.
  • saratoga4 - Tuesday, August 13, 2019 - link

    >The sensor also supports the company’s 2x2 pixel-binning Tetracell technology, which merges four pixels into one to produce brighter 27 MP photos.

    FYI, CMOS sensors like this cannot perform binning, so what they mean by "2x2 pixel-binning" is really "resizing the image in software after it is acquired".

    >the ISOCELL Bright HMX features Samsung's Smart-ISO capability, which uses high ISOs in darker settings to reduce noise and low ISOs in brighter settings to improve pixel saturation.

    FYI, this is called gain control, and it is implemented on all smart phone cameras ever made. It also doesn't help very much on modern sensors, since the SNR is almost independent of "ISO" (really amplifier gain).

    So essentially, that marketing fluff is saying that they have designed a normal CIS, except larger.
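The point about gain can be illustrated with a toy model; a sketch assuming shot noise is amplified along with the signal while a fixed read-noise floor is added after the amplifier (the noise figures are illustrative, not HMX specs):

```python
import math

def snr_after_gain(photons: float, gain: float, read_noise_e: float = 1.5) -> float:
    """SNR at the ADC for a simple sensor model: photon shot noise
    (sqrt of photons) is amplified along with the signal, while a
    fixed read-noise floor is added after the amplifier."""
    signal = photons * gain
    noise = math.sqrt((math.sqrt(photons) * gain) ** 2 + read_noise_e ** 2)
    return signal / noise

# With a small post-amplifier read-noise floor, raising gain ("ISO")
# barely moves the SNR once shot noise dominates.
for g in (1, 4, 16):
    print(g, round(snr_after_gain(100, g), 2))
```

Higher gain only helps to the extent that it shrinks the post-amplifier noise relative to the amplified signal, which is the grain of truth behind Smart-ISO's high-ISO-in-the-dark behavior.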
