Removing Reflections from RAW Photos

(arxiv.org)

135 points | by zerojames 12 days ago

8 comments

  • kloch 10 days ago
    What we need is sensors that can scan polarization on a per-pixel basis (like 256 orientations per pixel per image). Then it would be much easier to detect and remove consistently polarized components of the image (as specular reflections from glass are).

    This would just be a fully electronic/computational version of a mechanical polarizing filter.

    • planede 10 days ago
      > like 256 orientations per pixel per image

      You only need 4 parameters to describe the polarization at a single wavelength[1]. Naively this would be 4 parameters per color channel, so 12 channels overall. You could potentially need more color channels to capture the full spectrum, though. But 12 channels at least looks feasible for a camera.

      [1] https://en.wikipedia.org/wiki/Stokes_parameters

      edit:

      On second thought, for dealing with reflections you might get away with not capturing the "V" Stokes parameter, as you might not care about circular polarization.

      edit2:

      The I,Q and U parameters can be captured fully by a single polarization filter at three different rotations. This could be feasible with existing cameras with a tripod and a static subject. I wonder if this has been done before.
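      As a sketch of that three-rotation idea (assuming an ideal linear polarizer, so a measurement at angle t sees 0.5*(I + Q*cos 2t + U*sin 2t); the function name is mine):

```python
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Recover the linear Stokes parameters I, Q, U from intensities
    measured through an ideal linear polarizer at 0, 45 and 90 degrees.
    Model: measurement at angle t is 0.5 * (I + Q*cos(2t) + U*sin(2t))."""
    I = i0 + i90         # the Q terms cancel: cos(0) = 1, cos(pi) = -1
    Q = i0 - i90
    U = 2.0 * i45 - I    # at 45 degrees the measurement is 0.5 * (I + U)
    return I, Q, U

# Round trip for a partially polarized example pixel
I_true, Q_true, U_true = 1.0, 0.3, -0.2
meas = lambda t: 0.5 * (I_true + Q_true * np.cos(2*t) + U_true * np.sin(2*t))
I, Q, U = stokes_from_three(meas(0.0), meas(np.pi/4), meas(np.pi/2))
print(I, Q, U)  # recovers 1.0, 0.3, -0.2 up to float error
```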

      • cornellwright 10 days ago
        You can buy [1] polarization cameras, both mono and with a Bayer filter. They're expensive right now, but I agree it would be really cool to see what could be done with a consumer grade version in a smart phone.

        [1] https://thinklucid.com/product/phoenix-5-0-mp-polarized-mode... (among many others)

        • planede 10 days ago
          Interesting. From what I can find, the pixel format is 4 polarization directions per pixel, 45 degrees apart. Even though there are 4 channels, this doesn't allow you to deduce the V Stokes parameter (this camera can't capture circular polarization). Technically one channel is redundant here, but I guess it can be useful for reducing error.

          I wonder if an alternative pixel format, with 3 polarization directions 60 degrees apart and a circular polarization channel would be desirable for some applications.
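          For reference, the standard reduction from those 4 directions to the linear Stokes parameters, plus degree and angle of linear polarization (a sketch assuming an ideal sensor, not this camera's actual SDK):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarizer directions
    (0/45/90/135 degrees) of a division-of-focal-plane sensor."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # redundant 4th sample averaged in
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0        # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)     # angle of linear polarization
    return s0, s1, s2, dolp, aolp

# Fully linearly polarized light at 0 degrees: i0 = 1, i90 = 0, i45 = i135 = 0.5
s0, s1, s2, dolp, aolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(dolp, aolp)  # DoLP is 1.0, AoLP is 0.0
```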

        • c0nfused 10 days ago
          What is the advantage here vs a polarization filter and a standard camera?
          • londons_explore 10 days ago
            You can capture the whole scene in a single exposure. Handy for moving images.
      • im3w1l 10 days ago
        I'm pretty sure he means a single byte-valued parameter. As you mention, a single parameter is not enough to fully describe the polarization, but maybe it's good enough. I guess you would average across colors, and circular polarization would lead to a basically random value.
        • kloch 10 days ago
          I did indeed mean a single, byte-valued parameter indicating angle (similar to the single angle parameter of a mechanical polarizing filter).

          Full polarization and phase info would be great to have also, but probably not necessary for reflection suppression. And yes, purely circular polarization would be undefined in this scenario, but again not common (possible?) with reflections.
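          Computationally, the "best rotation" of a mechanical polarizer corresponds to taking the per-pixel minimum over all analyzer angles, which has a closed form in the linear Stokes parameters. A toy sketch (ideal Malus-law model; the function name is mine):

```python
import numpy as np

def suppress_polarized(s0, s1, s2):
    """A polarizer at angle t transmits 0.5 * (S0 + S1*cos(2t) + S2*sin(2t)).
    The minimum over t, 0.5 * (S0 - sqrt(S1^2 + S2^2)), removes the fully
    polarized component and keeps half the unpolarized light."""
    return 0.5 * (s0 - np.hypot(s1, s2))

# Example pixel: unpolarized scene light (0.8) plus a fully polarized
# reflection (0.2) at some arbitrary angle.
scene, reflection = 0.8, 0.2
s0 = scene + reflection
s1, s2 = reflection * np.cos(2 * 0.6), reflection * np.sin(2 * 0.6)
print(suppress_polarized(s0, s1, s2))  # 0.4: half the unpolarized light
```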

    • HALtheWise 10 days ago
      Due to quantum physics, there are actually only two degrees of freedom in the ways light can be polarized, referred to as the "Jones vector". In other words, it's impossible, even in theory, to distinguish light that has exactly two perpendicular polarizations mixed together from light that is fully unpolarized, with thousands of polarizations spread all around the circle. That makes it surprisingly possible to build a camera that captures _everything_ there is to know about light at some particular frequency.

      https://en.wikipedia.org/wiki/Jones_calculus

      • amluto 10 days ago
        Not quite — that’s for polarized light. For general light that may be unpolarized, you need four parameters. You can use the Stokes parameters, or, if you’re feeling very quantum, you can describe the full polarization state of a photon by a 2x2 density matrix. (I have never personally calculated this, but I’m pretty sure you can straightforwardly translate one formulation to the other — the density matrix captures the polarization distribution of a photon sampled, by whatever means, from any source of incoming light.)
    • johnmaguire 10 days ago
      > [...] remove consistently polarized components of the image (as specular reflections from glass are).

      It was my understanding that reflections in glass can be either polarized or non-polarized, or a mix of both.

      If you use a polarizing filter on a camera (e.g. when taking photos of artwork through glass, or shooting over water that you want to see into), you will often find that it does not remove all reflections.

      https://en.wikipedia.org/wiki/Brewster%27s_angle

    • buildbot 10 days ago
      • dylan604 10 days ago
        Yes, because my mom* is not going to carry that around to take pics of the grandkids.

        Just because something exists does not mean it is practical. I can totally see a software solution like this being something Apple could include in its fakeypics app; then my mom would be able to take advantage of it.

        *Avoiding the use of the phrase "your mom"!

        • buildbot 10 days ago
          …Nobody said they were going to…

          Apple could request a sensor with the polarsens mask. It’s just not worth it, from a resolution & light gathering perspective. Big tradeoffs for improvements in specific scenarios are not a path Apple has typically taken with their cameras.

          • dylan604 10 days ago
            > …Nobody said they were going to…

            talking about missing the point...

            Apple is not going to make a hardware change like your suggestion, but they would be much more likely to use the software concept from TFA. I'm assuming that Googs, Samsung, CCPhardware would be similar. They need to do something compelling with all of the specific compute they are including in their devices.

    • ikari_pl 10 days ago
      i used to have a Lytro camera

      Very interesting device; it captured about 8 angles of every photo and built a spatial interpretation, though not too advanced.

  • CharlesW 10 days ago
    > "RAW inputs improve prior methods, but our system outperforms them."

    I understand why RAW is useful in general and why all methods would benefit (i.e. higher dynamic range, >8bpc color depth), but I don't understand how this system disproportionately benefits from that.

    Is it because the models used in this system are trained from RAW, where they're not in other systems?

    • GrantMoyer 10 days ago
      My guess: raw inputs preserve the linearity of radiance at each pixel. In other words, for a linear function f, f(Total Radiance) = f(Base Radiance + Reflected Radiance) = f(Base Radiance) + f(Reflected Radiance). Conversion from raw to another format may introduce a non-linear map on total radiance to compress the range to 8 bits while preserving contrast in most of the image (particularly for parts of the image washed out by a bright reflection).

      So with raw images, the value you need to find is f(Reflected Radiance), which is probably why having a reference photo in the reflected direction helps. On the other hand, for other formats the reflection component of the image isn't a simple linear transform of what's being reflected, so even with a reference image, the reflection component would be hard to determine.
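      A toy illustration of why the linear space matters (using a plain power-law gamma as a stand-in for whatever nonlinear processing the camera pipeline applies; no claim this is the paper's actual model):

```python
# Raw pixel values are linear in radiance, so base + reflection adds exactly.
# After a tone curve (plain gamma 1/2.2 here), the components no longer add.
def gamma(x, g=1/2.2):
    return x ** g

base, reflection = 0.5, 0.3
linear_sum = base + reflection                 # exact in raw space: 0.8
encoded_sum = gamma(base + reflection)         # encode the true total
sum_of_encoded = gamma(base) + gamma(reflection)
print(encoded_sum, sum_of_encoded)  # the two disagree: additivity is broken
```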

    • Derbasti 10 days ago
      Maybe in this case, because these are phone pictures, which are quite heavily processed (sharpening, denoising, tone mapping, local white balance, local contrast). The raw image may contain a bit less of that stuff.
  • _ache_ 10 days ago
    They use a context picture to help with removing the reflections. It's the first time I've seen something like that.

    But without the context picture it doesn't seem that good; the S24's AI reflection removal seems better.

  • jlas 10 days ago
    If you're a photographer, the low-tech way of doing this is to just use a polarizing filter.
    • user_7832 10 days ago
      Is there a reason that light reflected off a vertical plane has a particular polarization? I know that light reflected off the ground gets polarized (which is why polarized sunglasses help so much), but that reflection is at a steep angle, not near/at 90 degrees.
      • yorwba 10 days ago
        https://farside.ph.utexas.edu/teaching/em/lectures/node104.h... has a detailed derivation of dielectric reflection, but you can also skip it and just look at figure 57 at the bottom showing the predicted reflectances for the two directions of polarization depending on the incidence angle.

        You're right that for perfectly vertical reflection, the polarization doesn't matter, but you're unlikely to exactly hit that. For angles between 0 and 90 degrees, light polarized parallel to the surface is always reflected better. If you perfectly hit Brewster's angle https://en.wikipedia.org/wiki/Brewster%27s_angle the light will be completely linearly polarized, but that is equally unlikely. So in general you're going to get mixed polarization that's slightly biased in one direction.
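        A small sketch of those curves (the standard Fresnel power reflectances for an assumed air-to-glass interface, n = 1.5):

```python
import numpy as np

def fresnel_reflectance(theta_i, n1=1.0, n2=1.5):
    """Fresnel power reflectances for s- and p-polarized light hitting an
    n1 -> n2 dielectric interface at incidence angle theta_i (radians)."""
    sin_t = n1 * np.sin(theta_i) / n2          # Snell's law
    cos_i, cos_t = np.cos(theta_i), np.sqrt(1.0 - sin_t**2)
    rs = ((n1*cos_i - n2*cos_t) / (n1*cos_i + n2*cos_t)) ** 2
    rp = ((n1*cos_t - n2*cos_i) / (n1*cos_t + n2*cos_i)) ** 2
    return rs, rp

brewster = np.arctan2(1.5, 1.0)                # about 56.3 degrees for glass
rs, rp = fresnel_reflectance(brewster)
print(rs, rp)  # rp is 0 at Brewster's angle: only s-polarized light reflects
```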

        • user_7832 9 days ago
          Thank you for your detailed comment!
      • drivers99 10 days ago
        It depends on the material as well, I recently learned. Specifically, metal does not polarize light, but glass, water, etc. do.
        • user_7832 9 days ago
          Thanks! Fun fact: did you know that in reflections off a mirror, the original photon is apparently destroyed and replaced by an "identical" one?
  • tedunangst 10 days ago
    I'm still amazed there isn't a simpler popular method that uses another shot at an oblique angle to resolve and remove reflections. Google's PhotoScan does this, but it's kind of awkward to use. I feel like we have the technology: you should be able to dump a few photos into an app, pick one to refine, and have it use the extras to fill in obscured areas. There was another project that removed chain-link fences using a similar approach; I forget the link.

    At least for me, it's really easy to take a few steps to the side and take another photo. But I haven't found a program that can use that photo.

    https://research.google/blog/photoscan-taking-glare-free-pic...

  • wittjeff 10 days ago
    I assume this could really help with in-the-wild OCR for blind people.
  • Wistar 10 days ago
    I spend a few hours a day manually editing photos of shiny stuff to remove certain reflections—mostly reflections of the photographer.

    This is tech I could use.

  • ImHereToVote 10 days ago
    Awesome. Is there code?