Samsung shaves 0.1μm off pixels to make new ISOCELL sensor lineup 15% slimmer

The latest crop of ISOCELL sensors from Samsung continues the trend of chasing ever-smaller pixels, resulting in a dramatically reduced footprint and height. For rear-facing cameras, Samsung now has three configurations in 108MP, 64MP, and 48MP flavours. There's also a front-facing ISOCELL option rated at 32MP, which Samsung …

  1. Neil Barnes Silver badge

    Pixel binning

    The more pixels, the less sensitive and the more noise... and what you do have is further compressed before it's stored. Remind me again, what's the point of a tiny sensor with ridiculous pixel counts? Apart from willy-waving, of course...

    1. juice

      Re: Pixel binning

      > Remind me again, what's the point of a tiny sensor with ridiculous pixel counts?

      Because software can take that large lump of mediocre data, and combine it all to produce something smaller and better. Or at least that's the theory...

      Personally, I'm not convinced, and so far, Google doesn't appear convinced either, despite allegedly having the best Android photography software going!

      Still, I'm probably not going to buy another new phone for at least one more generation. Time yet to see how this race towards uber-pixel count goes...

      1. Vincent Ballard

        Re: Pixel binning

        Or you could just combine it in hardware by having larger pixels for a cheaper, better sensor. It's like the megahertz race in personal computers when anything off the shelf had a CPU which supported a really high clock speed and so little RAM that it spent all of its time swapping.

    2. bob42

      Re: Pixel binning

      The point is that any pixel sensor is imperfect; the sensor has to be calibrated to compensate for the differences between individual pixels. If you are deriving each final pixel from four individual pixels, in theory the overall sensor will be more consistent.

      1. Neil Barnes Silver badge

        Re: Pixel binning

        Yabbut... don't these sensors filter square blocks to RGBG combined pixels anyway? Presumably they now just have four cells under each filter block? And after that it's going to get compressed to a jpeg.

        Lies, damn lies, and statistics pixel counts...

        1. Martin an gof Silver badge

          Re: Pixel binning

          As you know, the individual sensor "pixels" are only sensitive to luminance so in order to have a colour image you need to apply colour filters.

          In a "traditional" sensor, as I understand it, one pixel in a group of 2x2 has a red filter, one a blue filter and two have green filters. Other combinations are possible, of course.

          In the final result, each image pixel corresponds to one sensor pixel, and the full RGB value of the image pixel is calculated by interpolating from adjacent pixels so that - in effect - what you end up with is an image with full resolution luminance but quarter resolution colour. It's actually a bit more complex than that because what the sensor pixels are measuring doesn't give them a true luminance measurement, and because with the RGBG system you get quarter resolution for red and blue, but half resolution for green - our eyes are more sensitive to green anyway.
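
          If it helps to see that concretely, here's a toy version of the interpolation step in Python. It's purely a sketch - real demosaicing algorithms are edge-aware and far cleverer than averaging neighbours - but the principle of borrowing colour from nearby sensor pixels is the same:

              # Toy bilinear demosaic of an RGGB Bayer mosaic (illustration only;
              # real pipelines use much more sophisticated interpolation).
              import numpy as np

              def bayer_colour(y, x):
                  """Which filter sits over sensor pixel (y, x) in an RGGB layout."""
                  return [['R', 'G'], ['G', 'B']][y % 2][x % 2]

              def demosaic(mosaic):
                  """Estimate full RGB at every pixel by averaging same-coloured neighbours."""
                  h, w = mosaic.shape
                  rgb = np.zeros((h, w, 3))
                  for y in range(h):
                      for x in range(w):
                          groups = {'R': [], 'G': [], 'B': []}
                          for dy in (-1, 0, 1):
                              for dx in (-1, 0, 1):
                                  ny, nx = y + dy, x + dx
                                  if 0 <= ny < h and 0 <= nx < w:
                                      groups[bayer_colour(ny, nx)].append(mosaic[ny, nx])
                          rgb[y, x] = [np.mean(groups[c]) for c in 'RGB']
                  return rgb

              mosaic = np.random.randint(0, 256, (4, 4)).astype(float)  # made-up raw data
              print(demosaic(mosaic).shape)  # (4, 4, 3): full-resolution output, interpolated colour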

          And then, as you say, you perform yet more blurring if the image is stored as a JPEG so the "true" resolution of the final image is actually lower again.

          With "pixel binned" sensors the four pixels in the group are output as just one final image pixel. Effectively you end up with full resolution colour, but you "throw away" a lot of luminance and some green-channel resolution. What you gain (in theory) is "accuracy" - by combining the luminance values of four sensor pixels you should end up with a "cleaner" (less noisy) result. In theory you also gain some sensitivity because although each sensor pixel is smaller than in a traditional sensor and so can "collect" fewer photons during the period when it is doing so, there are four of them all collecting at the same time.

          These sorts of techniques have been used for many years in various forms. "HDR" image recording is one common example - in this case the "sensor pixels" which are combined are separated in time rather than space, as the camera takes three (usually) images in rapid succession, each with slightly different settings. The difference is that while a 12Mpixel camera which uses HDR is marketed as a 12Mpixel camera - not a 36Mpixel camera - a 12Mpixel camera which uses pixel binning is marketed as having 48Mpixels. Of course, it really does have 48Mpixels, but you only get that resolution if you treat the sensor as a traditional sensor, which sort of defeats the object and probably leads to worse images than you would have got from an actual 12Mpixel sensor with larger pixels.

          There is an additional thing at play here, of course, and that is that very few people view the images they take at 1:1 on their screens. If you are viewing a 12Mpixel image on a 1920x1200 desktop monitor, each monitor pixel is combining the values of four or five image pixels, so is effectively performing "pixel binning" on display! With very high resolution displays, we rarely watch them from a distance where individual display pixels are discernible to the eye, so even if we did have the image at 1:1 scale on screen, the Mk I eyeball performs the pixel binning.
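
          (A quick sanity check on that "four or five" figure, as a one-liner sketch using the numbers above:)

              # A 12 Mpixel image shown on a 1920x1200 monitor: how many image
              # pixels contribute to each display pixel, on average?
              image_pixels = 12_000_000
              display_pixels = 1920 * 1200
              print(round(image_pixels / display_pixels, 1))   # ≈ 5.2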

          It also applies to other fields; I was once involved in the creation of a device for measuring the thickness of materials (metals, mainly) by firing an ultrasonic pulse into the material and timing the reflection(s). To create a cheaper device we tried to do away with the extremely accurate high-speed timers usually used in these circumstances and instead took multiple measurements using a less accurate timer which was not synchronised to the measurement process, theorising that the natural "dither" thus introduced could be averaged out to give a more accurate result. It did seem to work, but I left the project before it was commercialised.
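
          For anyone curious, the idea is easy to simulate (Python, purely to show the principle - the numbers are invented and this is nothing like the actual implementation): because the coarse timer isn't synchronised to the measurement, its quantisation error behaves like random dither and averages away over many shots.

              # Rough simulation: recovering sub-tick timing precision by averaging
              # many measurements from a coarse timer that is not synchronised to
              # the event being timed (the random phase acts as natural dither).
              import numpy as np

              rng = np.random.default_rng(1)
              true_delay = 12.34                 # "real" echo time in timer ticks (invented)
              phase = rng.uniform(0, 1, 10_000)  # random timer phase for each measurement

              measured = np.floor(true_delay + phase)   # the coarse timer reports whole ticks

              print(round(measured.mean(), 2))   # ≈ 12.34, despite 1-tick resolution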

          M.

  2. MJI Silver badge

    I prefer Sony sensors anyway

    They do seem to be good, and fitted to most things which take pictures.

  3. Anonymous Coward

    What about the wavelength of light

    The visible spectrum has wavelengths of about 0.35 to 0.75 micrometres, so these pixels will be about the same size as the wavelength of the red end of the spectrum. This should cause diffraction effects and other distortions.
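
    As a rough check (the f/1.8 aperture and 0.7μm pixel pitch below are my assumptions for illustration, not figures from the article): the diameter of the diffraction-limited Airy disk is roughly 2.44 × wavelength × f-number, which already spans several pixels at this scale.

        # Back-of-the-envelope diffraction check (illustrative numbers only;
        # the f/1.8 aperture and 0.7 micrometre pixel pitch are assumptions).
        wavelength_um = 0.55     # green light, micrometres
        f_number = 1.8           # typical bright phone camera lens
        pixel_um = 0.7           # roughly the pixel size being discussed

        airy_diameter_um = 2.44 * wavelength_um * f_number
        print(round(airy_diameter_um, 2))             # ≈ 2.42 micrometres
        print(round(airy_diameter_um / pixel_um, 1))  # spread across ≈ 3.5 pixels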

  4. JDPower Bronze badge

    This is stupid; was nothing learned from the megapixel race of digital cameras? The worst things you can do to a sensor when it comes to image quality: smaller sensor, smaller pixels, massive megapixel count. They are literally spending money on making the product worse! And the entire tech industry has known this for years. But hey, it'll make the phone a few μm thinner, and we all know thinner phones are better, right? Doesn't matter if the camera is worse, the battery is tiny, there's no room for headphone jacks or SD slots. All unimportant, just bow down at the altar of thinness and keep buying the shiny shiny.

    1. ThatOne Silver badge
      Devil

      > Doesn't matter if the camera is worse

      Indeed, it doesn't matter. What will it be used for anyway?

      Taking pictures of your "hip" food to post on "Social" media...

  5. Giles C Silver badge

    The megapixel wars seem to be going on in the phone world.

    As a comparison, I shoot using a Canon EOS R, which has a pixel pitch of 5.3 micrometres and can take photos in virtual darkness. The flagship 1DX II has a pitch of 7 micrometres.

    The smaller the photosite, the less light it can receive and the worse the picture quality. And as the other comment above says, they are soon getting down to the physical wavelength of light being the same size as the photosite, which means they have literally run out of light to work with.

  6. RobbieM

    Does It Matter

    Given that the vast majority of photos taken on phones are displayed on phones via Facebook etc (or fool tube in the case of video), which usually compress the file anyway before displaying, does the resolution really matter? If you are in the business of taking high-quality images, you are going to use a good-quality camera setup. As was said earlier, it's all about willy-waving.

    (Icon was nearest I could get to a willy)
