Add Depth of Field to Screenshots

(blurmatic.com)

259 points | by s16h 13 days ago

21 comments

  • probabletrain 13 days ago
    Oh hey, I made this a couple of days ago!

    I initially made this to experiment with 'faking' depth of field in CSS (check out my penultimate tweet for the demo vid and inspiration from shuding_, link at the bottom of the site).

    But last night I remembered that ThreeJS exists, so rewrote it using react-three-fiber. This was my first time playing around with it and I'm super impressed, it's incredibly ergonomic.

    Edit: not documented, but right-click drag to pan

    • javierbyte 13 days ago
      Very cool! I also made a CSS based fake depth of field inspired by Shuding :)

      https://depth-of-field.vercel.app/

    • mistersquid 13 days ago
      Love the interactivity of the UI. Nicely done!

      > Edit: not documented, but right-click drag to pan

      Confirming undocumented feature. Scratching my head why ctrl-left-click on macOS doesn't enable panning, too.

      • lupusreal 12 days ago
        Right-click to pan is how a lot of video games do it, when left-click isn't used for dragging. Needing to also use a keyboard key at the same time wouldn't be very ergonomic.
        • mistersquid 12 days ago
          > Needing to also use a keyboard key at the same time wouldn't be very ergonomic.

          Agreed. But maybe better awkward than non-existent? Otherwise, users with no right-click can’t pan.

          Didn’t realize ctrl-click and right-click are not always functional equivalents.

          • smegsicle 12 days ago
            > users with no right-click

            "here's a nickel, kid. buy yourself a better computer."

  • derefr 13 days ago
    Here I was hoping this would be something that works with the OS to take pre-window-compositor “3D screenshots” of a desktop, and then assigns the windows Z depths so that they’re floating above the desktop + each other. Looking at the rendering of such a “3D screenshot” (in orthographic projection) would look exactly like a regular screenshot… until you added a depth-of-field effect.

    But, of course, you could also look at it in other projections; tilting it around in 3D space (as done here); applying fog to shadow “distant” windows; lighting the scene from a point-source so as to make the windows cast real shadows on one another (with more light let through translucent areas!); etc. I would imagine that the (ideally HTML5 embeddable) viewer for this “3D screenshot” format would do all those things.

    (I do hope someone does try creating such a “3D screenshot” format and viewer, as IMHO it would have a fairly-obvious use-case: reproducing static “look-around-able” snapshots of the already-depth-mapped AR window layouts in visionOS. Being able to tack arbitrary depth-maps onto windows from a 2D desktop OS would just be a bonus.)

    • Pulcinella 12 days ago
      The closest thing I have seen to this (which isn't that close) is the UI debug viewer in Xcode. You can get an exploded-view diagram of all the UI elements that you can rotate in 3D space. No lighting or shadows though, and it's limited to apps you have debug access to.

      I think the Amazon Fire phone also tried something similar in real time, using several front-facing eye-tracking cameras plus the gyro and accelerometer to shift the phone UI and simulate a 3D view with parallax. The old mobile Safari tab view also used to shift the tabs based on the phone's orientation.

      I would love to see a "globally illuminated" UI someday, even if it's impractical. Something like all those Windows renders the Microsoft Design team puts out, but in real time. A poor use of electricity, but it would be cool to have path-traced, soft drop shadows.

      • whywhywhywhy 12 days ago
        >I would love to see a "globally illuminated" UI someday, even if it's impractical

        Apple patented a ton of stuff for this probably a decade ago. It seemed at some point they were going to start procedurally rendering aqua materials and the like using recipes for lighting that could all be dynamic.

        Presumably some of it made it into VisionOS.

    • jhaenchen 12 days ago
      You're getting very close to the motivation behind VisionOS, aka a 3D spatially oriented operating system.
  • tempaway3345751 13 days ago
    That's great, I've often thought that screenshots aren't blurry enough these days
    • jprete 13 days ago
      Limiting depth of field is a very useful way to emphasize specific elements of a photograph and for that reason it's almost always used for portraits.

      It also compensates for portrait photos being best taken from a distance with a telephoto lens. Those are best because they capture the face people remember instead of the face they actually see up close. Compensation is needed because the same lens configuration gives a much shallower depth of field up close but a much deeper one at a distance.
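
      The thin-lens math behind this is easy to sketch. As an illustration (the numbers and helper names below are my own, not from the site or the parent comment), the standard hyperfocal-distance approximation shows depth of field growing with subject distance for a fixed lens:

      ```javascript
      // Thin-lens depth-of-field approximation (all distances in mm).
      // H is the hyperfocal distance; near/far are the limits of acceptable
      // sharpness around subject distance s, for focal length f, f-number N,
      // and circle of confusion c.
      function hyperfocal(f, N, c) {
        return (f * f) / (N * c) + f;
      }

      function depthOfField(f, N, c, s) {
        const H = hyperfocal(f, N, c);
        const near = (s * (H - f)) / (H + s - 2 * f);
        const far = s < H ? (s * (H - f)) / (H - s) : Infinity;
        return far - near;
      }

      // An 85mm portrait lens at f/1.8, 0.03mm circle of confusion:
      const f = 85, N = 1.8, c = 0.03;
      console.log(depthOfField(f, N, c, 2000));  // ~57mm of sharpness at 2m
      console.log(depthOfField(f, N, c, 10000)); // ~1.5m of sharpness at 10m
      ```

      With the same lens, the in-focus band is a few centimetres at 2m but well over a metre at 10m, which is why a distant telephoto portrait needs a wide aperture to keep the background blurred.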

      • refulgentis 13 days ago
        Difference being, the face is the content, the background is noise. Here, the screenshot is the content. Ex. The only physical feature of my face i like is my eyes. Yet, it'd be weird if I depth of field'd everything but my eyes.

        I'm sure this has uses but it's hard to argue it does from fundamentals of photography

        • probabletrain 13 days ago
          Its real use is the few hours of fun I had making it, this is really a toy
        • stetrain 12 days ago
          There are lots of cases, especially in say marketing images, where the entire screenshot is not the content. You may want to highlight a specific bit of UI while still keeping the general background context instead of cropping into a small image.

          This lets you click a portion of the screenshot to bring into focus.

          At the end of the day it's a fun toy web app, but I don't think the general concept is useless.

          • refulgentis 12 days ago
            Agreed, for the crowd, c.f. "I'm sure this has uses"
        • AlecSchueler 13 days ago
          You can highlight certain elements of a screenshot.
        • jprete 13 days ago
          I hadn't actually seen the demo when I commented! I'd assumed it was using an AI model to estimate distance from camera of the elements of a photograph and then reproducing a shallower depth-of-field. So my comment isn't all that relevant.
      • 7734128 12 days ago
        I hate it in any context. Blur is not natural to human vision and it gives me a headache.

        Things like the modern Link's Awakening would have me projectile vomiting if I tried to play it.

        There should never be blur, ever.

        • SAI_Peregrinus 12 days ago
          Depth of Field being visible is an artifact of 2D screens & prints. Our eyes do have depth of field, but when we look straight at something they very quickly refocus onto that thing, so it's often very hard to perceive. With a static photograph or picture on a screen we can't refocus on what we're looking at within the picture automatically.

          It's not that our eyes don't have depth of field, it's that they operate differently than fixed photos or pre-set DoF effects in a game.

        • jdiff 12 days ago
          Blur is completely natural to human vision, to any kind of vision. Look far away while bringing your hand close. Blur. Now look at your hand. Everything else goes blurry. You can even unfocus your eyes while looking at something. Now everything's blurry.
        • 2024throwaway 12 days ago
          Damon Albarn would disagree.
          • lencastre 12 days ago
            Tatatata tatá feel good
    • hn_throwaway_99 12 days ago
      This is unnecessarily snarky. I would hate to use one of these depth-of-field screenshots for, say, an attachment to a bug report.

      But it looks like a great visual effect to use on a marketing site, especially to highlight a specific part of a screenshot to get across whatever you want to emphasize.

      • kleiba 12 days ago
        > This is unnecessarily snarky.

        You think? I thought it was just a funny quip.

      • mparnisari 12 days ago
        > This is unnecessarily snarky.

        It's a joke, chill

    • glitchc 13 days ago
      Yes, let's blur the screenshot first and then use a hidpi display to unblur it /s
  • mprovost 13 days ago
    Looks like Screenstab [0] which was a previous Show HN [1].

    [0] https://www.screenstab.com/editor/ [1] https://news.ycombinator.com/item?id=34729849

    • btown 12 days ago
      Are there any solutions like this that generate video montages, with depth of field and a slow rotation, from a set of screenshots and captions?
  • daxaxelrod 13 days ago
    Just to answer some questions I'm seeing in other comments: this is built with three.js. Think of it as a 3D scene with the image rendered on a 2D plane, where the camera applies a gaussian blur to everything outside the focal region. He's using OrbitControls for the zoom in/out and for the ability to pivot the image.
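
    The core idea of such a depth-of-field pass can be sketched in a few lines (the helper below is hypothetical, for illustration only; the site itself relies on three.js post-processing internally): each pixel's blur radius grows with its distance from the focal plane, clamped at some maximum.

    ```javascript
    // Illustrative: map a pixel's scene depth to a blur radius.
    // Pixels at the focal plane stay sharp; blur grows linearly with
    // distance from it, up to maxRadius. Clicking the image would just
    // move focusDepth to the depth under the cursor.
    function blurRadius(depth, focusDepth, strength, maxRadius) {
      const r = strength * Math.abs(depth - focusDepth);
      return Math.min(r, maxRadius);
    }

    console.log(blurRadius(5.0, 5.0, 2.0, 8)); // 0 — in focus
    console.log(blurRadius(9.0, 5.0, 2.0, 8)); // 8 — clamped to max blur
    ```

    A real implementation would feed a radius like this into a separable gaussian (or bokeh) kernel per pixel, but the focus-plane-to-radius mapping is the whole trick.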
  • jandrese 13 days ago
    This is just a fancy way of saying "blur out everything else" in the screenshot? Personally I prefer to crop the unnecessary content out of the screenshot.
    • JadeNB 12 days ago
      > This is just a fancy way of saying "blur out everything else" in the screenshot? Personally I prefer to crop the unnecessary content out of the screenshot.

      Other threads have pointed out that it might be useful for a screenshot to be able to show somebody where to navigate in complicated UI, without cropping out the rest of the UI (and so removing the navigational cues) but without making the screenshot as hard to navigate as the UI itself.

  • mastermedo 13 days ago
    Looks interesting. At first I thought it would blur the part of the screenshot farthest away from you and keep the closest thing in focus. But that's not how it works.

    I guess it could be useful for focusing on a particular part of the screenshot, if one could mark what the important part is.

    • streb-lo 13 days ago
      Click on part of the image to focus it.
    • jprete 13 days ago
      Shallow depth of field is used in photography (and movies, sometimes) for exactly the purpose of focusing viewer attention on a specific element or elements.

      EDIT: But now that I've looked at the demo...I am not sure what I would want this for.

  • ei23 12 days ago
    You could add some RGB pixels for even more realism, like I did in Blender some years ago: https://blenderartists.org/t/rgb-display-shader/684533
  • mondobe 13 days ago
    This would be fun to use in the background of the thumbnail for a YouTube video (or something like that), although I certainly wouldn't want to see it used to convey actual information. It gets enough across, I think, to figure out what application is being used.
  • nightpool 12 days ago
    Please please please support pasting images! Any form field where I have to select a file from my filesystem instead of using my paste buffer increases the friction of using it like, 5x. (This applies to every image upload button I've ever used)
  • splatzone 13 days ago
    Very cool! Did you create this? What was the motivation? I'm just wondering if this is a common enough task that it's easier than doing it in Photoshop
    • probabletrain 13 days ago
      I made this a couple of days ago, mainly as a fun excuse to try out some cool frontend stuff
  • lacoolj 12 days ago
    This is absolutely awesome. I wonder if this is the same concept that phones use to focus/blur elements of the photo in post
  • mdrzn 13 days ago
    Fun to use, not sure where I would post a tilted screenshot with a grey background. Maybe if you can export it as .png?
  • aquir 13 days ago
    I need this to be integrated w/ Shottr! That would be very cool! Is there an API for this?
  • DHPersonal 13 days ago
    Oh, and zooming works, too!
    • dmitshur 13 days ago
      Indeed. Panning as well.
      • DHPersonal 13 days ago
        You typed this just as I was posting my own response. Yes, it does work! I found out that using the right mouse button does the panning. Is there another method that you found?
        • dmitshur 11 days ago
          Yes, I was trying it on a device with a touchscreen (an iPhone) and found that it was possible to pan with the usual touch gesture (moving two fingers together).
    • DHPersonal 13 days ago
      Another thing I found out is that holding the right mouse button drags the image around the screen.
  • StephTsimis 13 days ago
    Very cool!
  • artur_makly 13 days ago
    [flagged]
  • poulpy123 13 days ago
    But why ?
  • Shindi 13 days ago
    This is super cool! Huge upgrade to my product screenshots. Wondering if you're offering this as a react component - something I can embed with a lead magnet or on a site.
    • candiddevmike 13 days ago
      Why would this be an upgrade to your product screenshots? As a buyer, seeing this would annoy the shit out of me...
    • dotancohen 13 days ago
      Please don't use this. It might look nice, but as a user and potential client or customer, this is reducing the utility of your screenshots.

      _You_ know what your screens look like, so you might enjoy seeing them blurred and tilted. But _I_ don't know, that's the information that I would be trying to get.

  • siadhal 13 days ago
    Holy f*ck! Is this AI?
    • hcaz 13 days ago
      Why would this be AI?
  • account42 13 days ago
    > Application error: a client-side exception has occurred (see the browser console for more information).

    And this is why you write your websites in HTML instead of JavaScript.

    • arcatech 13 days ago
      But you can’t do this in HTML…
      • thih9 13 days ago
        I hope someone takes this as a challenge.

        Technically html + css + user interaction can be turing complete: https://github.com/brandondong/css-turing-machine

        • JadeNB 12 days ago
          > Technically html + css + user interaction can be turing complete: https://github.com/brandondong/css-turing-machine

          Turing completeness is about what computations can be expressed, not what user interactions can be performed. The lambda calculus is Turing complete, but, if I whip up a lambda calculus interpreter and don't give it a print statement, then you'll never know anything about the computations it's performing.