28 comments

  • mitchelhall 1525 days ago
    Hi all, really cool that you have taken an interest in this project, a lot of your comments below are very insightful and interesting. I work on the team that deploys this tech on set. We focus on how the video signals get from the video output servers to the LED surfaces, coordinating how we deal with tracking methods, photogrammetry, signal delivery, operator controls and the infrastructure that supports all of this. As noted in some of the comments, the VFX industry solved tracking with mocap suits a long time ago for the post-production process. What we are exploring now is how we can leverage new technology, hardware, and workflows to move some of the post-production processes into the pre-production and production (main/secondary unit photography) stages.

    If you are interested in checking out another LED volume setup, my team also worked on First Man last year. This clip shows a little more of how we assemble these LED surfaces as well as a bit of how we use custom integrations to interface with mechanical automation systems. [https://vimeo.com/309382367]

    • kragen 1525 days ago
      Thanks! This work is really inspiring! Are these OLED screens, matrices of separate LEDs of the usual AlGaInP and InGaN type, or LCDs backlit with LEDs? The 2.84 mm pixel pitch makes it sound like it's separate inorganic LEDs.

      Are there situations, short of sunshine, where you need more directionality in the lighting than the screens can provide? The screens can emit from any direction, but not toward any particular direction, being quasi-Lambertian emitters.

      • marcan_42 1525 days ago
        Discrete LED screens go all the way down to 2mm or even 1.9mm pitch (I have a few 2mm tiles at home). These are certainly discrete LED panels, not OLEDs. I'm not aware of OLEDs being used in "wall" applications like this.

        When you hear LED walls, think millions of discrete LEDs mounted on PCBs (and thank China for making this low cost enough to be viable!)

        • jweather 1525 days ago
          LED video walls are down in the 0.7mm dot pitch vicinity. Not quite the ~0.3mm of a 4K 55" LCD, but we're getting there. And the brightness and contrast ratio can't be beat.
        • kragen 1525 days ago
          In this case, 106 168 320 discrete LEDs, if we assume RGB and believe the 12288×2160 + 4096×2160 figure in the article; or 35 389 440 discrete LEDs if we count each (presumably RGB) pixel as a single unit, or if they're using a Bayer-pattern screen like PenTile AMOLEDs.
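
          A quick sanity check of that arithmetic, taking the article's resolutions as givens:

```python
# Sanity-check the LED counts above, assuming the article's 12288x2160
# wall plus 4096x2160 ceiling, with 3 discrete emitters per RGB pixel.
pixels = 12288 * 2160 + 4096 * 2160
assert pixels == 35_389_440       # each RGB pixel counted as a single unit
assert pixels * 3 == 106_168_320  # R, G and B emitters counted separately
```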
        • canada_dry 1525 days ago
          > low cost enough

          The article doesn't mention cost, but I'm expecting it was in the $5-10M range. Anyone have actual figures for this type of hi-res, hi-lumen, hi-refresh LED screen?

    • m3at 1525 days ago
      Great work! It must be a tremendously interesting job.

      You might be able to answer my question, which is: why use LEDs exclusively and no projectors? I imagine it's mainly because projection is too dim and the main goal is to get good reflections. Were projectors something that was considered?

      (I'm wondering as I found the work of teamLab very impressive, which rely heavily on projectors: https://www.teamlab.art/)

      • akiselev 1525 days ago
        How fast are the refresh rates on projector lighting? A really cheap consumer LED can get anywhere from a few hundred Hz to tens of kHz depending on the LED and connection topology, enough to synchronize with high speed cameras with short exposure times and other equipment.
    • devindotcom 1525 days ago
      Would love to talk to you! Are you at ILM or a partner like Fuse or ROE? You can contact me at devin at techcrunch dot com, I'm working on more pieces on this tech.
    • werber 1525 days ago
      That article is mind-blowing. How do the tech side and the creative side work together on this kind of a project? How much does the technology shape the storytelling?
  • oseibonsu 1526 days ago
    Here is the Unreal Engine tech they are using: https://www.unrealengine.com/en-US/spotlights/unreal-engine-... . This is a video of it in action: https://www.youtube.com/watch?v=bErPsq5kPzE&feature=emb_logo .
    • jahlove 1526 days ago
      Here's a video of it in action on The Mandalorian set:

      https://www.youtube.com/watch?v=gUnxzVOs3rk

      • sgc 1525 days ago
        That explains a lot to me. The actors need those visual cues, and it shows through in the final product. It was a great result. I look forward to improved acting as this tech makes its way into other works.
    • KineticLensman 1526 days ago
      Unreal Engine is also used by the BBC to create virtual studios for football punditry programmes. This uses a simpler green screen technology, but it demonstrates how Epic are moving away from their gaming roots.
      • ChuckNorris89 1525 days ago
        Don't think they're moving away as much as they're expanding in growing markets where there's money to be made.
    • foota 1526 days ago
      That's really cool!
  • devindotcom 1526 days ago
    This is super interesting stuff and I've been following it for some time. I just wrote it up with a bit more context:

    https://techcrunch.com/2020/02/20/how-the-mandalorian-and-il...

    It's not just ILM and Disney either, this is going to be everywhere. It's complex to set up and run in some ways but the benefits are enormous for pretty much everyone involved. I doubt there will be a major TV or film production that doesn't use LED walls 5 years from now.

    • dd36 1525 days ago
      I wonder how much this reduces the environmental footprint. The explosion of shows and movies chasing raw nature or awesome settings has me wondering how much destruction it causes, and how much waste it produces.
      • ishtanbul 1525 days ago
        Well, maybe you aren't trucking a huge crew out into the woods for weeks, but you are creating a ton of disposable e-waste instead. You save on jet fuel too.
      • BubRoss 1525 days ago
        Environmental footprint? A local trade show, movie theater or small office building probably uses more electricity. Let's try to keep things in perspective.
        • folli 1525 days ago
          I think he meant flying large crews, including actors, caterers etc., to Iceland to film a movie vs a small photogrammetry team.
    • tobr 1525 days ago
      Is there any way to read your article if I can’t figure out how to navigate through the legalese and dark patterns of Verizon’s privacy extortion wall?
      • dredmorbius 1525 days ago
        • tobr 1525 days ago
          Some of the behind the scenes photos are really uncanny. Your eyes tell you you’re looking at a photo of an outdoor scene, but the illusion of reality breaks down in subtle ways at the edges of the image.
          • devindotcom 1525 days ago
            Interesting, right? I imagine it may actually be kind of distracting for the actors, though it has to be better than working in a giant bright green cave. You should watch the behind the scenes video, seeing it in motion is really cool.
      • mnw21cam 1525 days ago
        I don't see a privacy extortion wall, but I do see pictures with completely the wrong aspect ratio.
  • cbhl 1526 days ago
    "The solution was ... a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling ... rendered with correct camera positional data."

    Gee, that sounds a lot like a holodeck. We've come a long way from using Wii Sensor Bars[0] for position tracking.

    [0] https://www.youtube.com/watch?v=LC_KKxAuLQw

    • modeless 1526 days ago
      The "holodeck" version of this is called a CAVE and the first one was built in 1992: https://www.youtube.com/watch?v=aKL0urEdtPU https://en.wikipedia.org/wiki/Cave_automatic_virtual_environ...
      • ci5er 1526 days ago
        The haptics on CAVE were pretty clumsy, but boy howdy! I was pretty damn impressed when I got to interact with it at SIGGRAPH that following year.

        I've got to say that the UNC (Chapel Hill) pixel machine may have been a more supreme technical achievement, but it was harder to "get a feel for". As a young semiconductor geek at Motorola (Tokyo Design Ops) at the time, I did push for getting 1~4 bit CPUs on every column of 128-~192-bit-deep framebuffers for a time. I almost got Sega to sponsor the project, but then something distracted me (I don't know what - but I suspect it was the Motorola 96K and a bank of AMD bit-slice chips) and I wandered off to do something else.

    • ragebol 1526 days ago
      For the VFX industry, the tracking had already been solved for ages, with those reflective little balls on suits etc. in a mocap system. The Wii sensor bar's thing was that it was really cheap.

      But yes, damn close to a holodeck. But you can't see depth in this setup, right?

      • nocut12 1526 days ago
        If it's perspective corrected for the camera, it would probably look very distorted for anyone else on set -- whether there's depth or not

        And that's certainly not the goal with this. Something along those lines has been around for a while (https://en.wikipedia.org/wiki/Cave_automatic_virtual_environ...). This system seems specifically targeted for solving challenges for film production, as it probably should be.

        I am pretty impressed that real time rendering has gotten good enough to use for these purposes. I certainly wouldn't have expected that those backgrounds in the show were coming out of a video game engine.
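
        The perspective correction described above boils down to an off-axis (asymmetric-frustum) projection. A minimal sketch, assuming an axis-aligned wall in the z = 0 plane and a tracked camera at positive z (the function name and numbers are illustrative, not from the article):

```python
def offaxis_frustum(cam, wall_min, wall_max, near):
    """Asymmetric near-plane frustum bounds (glFrustum-style) for a camera
    at cam = (x, y, z), z > 0, looking at a wall rectangle spanning
    wall_min..wall_max in the z = 0 plane."""
    cx, cy, cz = cam
    scale = near / cz  # similar triangles: project wall edges onto near plane
    left = (wall_min[0] - cx) * scale
    right = (wall_max[0] - cx) * scale
    bottom = (wall_min[1] - cy) * scale
    top = (wall_max[1] - cy) * scale
    return left, right, bottom, top

# A camera centred on a 4x2 wall, 5 units back, gives a symmetric frustum.
l, r, b, t = offaxis_frustum((0, 0, 5), (-2, -1), (2, 1), 0.1)
```

        Any off-centre camera move makes the frustum asymmetric, which is exactly what keeps the background correct for the camera and distorted for everyone else on set.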

        • miohtama 1526 days ago
          They mention they cannot push enough GPU juice to the screens, so they only render the camera's focus area in full resolution. There is also a 12-frame lag, which prevents moving the camera too fast.
          • Cthulhu_ 1525 days ago
            One solution to this would be to put the camera on a fixture that replays the same motions every time, so they can do a 'dry run' and correct the rendering. (Putting a camera on a fixture is not new, IIRC they did it in Back to the Future - https://www.youtube.com/watch?v=AtPA6nIBs5g is a really good yet succinct documentary on it)
            • fhars 1525 days ago
              IIRC they did it in Star Wars for the space battles (but it's been forty years since I read about it, so my memory may be playing tricks on me).
              • ygra 1525 days ago
                They did. They only had so many models of the spaceships, so multiple takes with a programmed camera path and compositing was used to increase the number of vessels in the scene.
          • toyg 1526 days ago
            > they cannot push enough GPU juice to the screens

            ...yet. That's just a matter of waiting a few more years.

            I wonder if most of the next Star Wars movies will be shot with this tech.

          • Aeolun 1525 days ago
            I don’t fully get this. They could just employ different computers to render different parts of their cave. It seems more like a cost savings thing than a technical limitation.

            And I’m not sure why you’d skimp on a few PCs if you’re already building a humongous LED wall, so maybe there’s something I’m missing.

            • GonzaloQuero 1525 days ago
              I worked on a somewhat similar project in 2015, though not as complex as this, building background videos for DotParty, using UE4 for panoramas and then stitching them. One of the biggest issues we found was that, because it's a game engine, a lot of things are not deterministic, so if we used multiple cards or computers, particles and other environmental effects would not be in sync, and the stitches were glaring.
              • Aeolun 1522 days ago
                Yeah, that’s a good point. And taking out the particles and doing those separately is probably near impossible.

                For the non-visible screens it wouldn’t matter that much, but they’d still end up with the moving frustum for the main engine.

                • GonzaloQuero 1520 days ago
                  I believe it's been improved in later versions, as they've focused on these use cases, and there might even be deterministic particles now, but I'm not sure, because I've been out of the VFX market for quite a while.
            • gmueckl 1525 days ago
              The part you are missing is the insane complexity involved in keeping perfect frame sync with low latency across GPUs and machines, especially when some final compositing of partial outputs is involved. The stuff sounds simple on paper, and it looks like you can just go buy the tech, unpack it and switch it on. The reality is nothing like that. The off-the-shelf tech is fiddly and barely stable, because it is always a low-priority feature added with the least possible effort.
              • Aeolun 1522 days ago
                If you have a 12 frame delay regardless you have an awful lot of time to get your clocks in sync.

                Obviously that tech is not simply unpackable, because they’re on the cutting edge. But that’s also why you could expect some customization.

                • gmueckl 1520 days ago
                  The overall latency says nothing about the sync precision required. The displays need to be synced and the graphics cards need to have their vsync synchronized between them (usually via dedicated hardware). If the displays are out of sync, you immediately get visible tearing at the seams.

                  If your parallel renderer divides the image along a grid that does not correspond to display boundaries, you need to gather and composite the partial framebuffers after rendering them. This means that you're now sending frames across the network, and you need to take care that you aren't compositing frames from different timesteps, for example because the part of the rendered framebuffer that goes to compositor/display node A arrived in time, but the part going to compositor/display node B somehow didn't.
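
                  That timestep hazard can be made concrete with a tiny guard in the compositor. A hypothetical sketch (the tile format and the 'step' tag are invented here, not part of any real system):

```python
def composite_guard(tiles):
    """Refuse to composite partial framebuffers from different timesteps.
    `tiles` is a list of dicts, each tagged with an integer 'step'."""
    steps = {tile["step"] for tile in tiles}
    if len(steps) != 1:
        raise ValueError(f"mixed timesteps: {sorted(steps)}")
    # ...stitch the tiles' pixel rectangles here...
    return steps.pop()  # the single timestep that is safe to display
```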

            • rangibaby 1525 days ago
              “They could just...”

              Pretty much every time I’ve thought this, it’s turned out I was underestimating the difficulty of doing “just” that.

              If it were just that easy, wouldn’t they have done that already?

              • Aeolun 1522 days ago
                Who knows? Sometimes people make things a lot more difficult than they have to be.

                That isn’t always the case, but asking the question is better than the alternative.

        • RaptorJ 1526 days ago
          From the video posted upthread, around 3:40 it looks like they're doing just that: https://youtu.be/gUnxzVOs3rk?t=220
          • vernie 1526 days ago
            The most interesting aspect to me is that the system is pulling double duty by displaying both a dynamic, perspective-correct backdrop for the camera's POV and a static view for environment lighting and reflections outside of the camera's view frustum.

            I wonder if they had to take care to mitigate artifacts caused by the dynamic view bouncing off of surfaces facing away from the camera.

            • BubRoss 1525 days ago
              What artifacts?
              • rangibaby 1525 days ago
                I guess he is thinking about situations where you would use a negative fill
                • BubRoss 1525 days ago
                  What does that mean?
          • mlrtime 1526 days ago
            Look at the ceiling, you can see it moving.
      • rebuilder 1526 days ago
        Mocap systems haven't been able to produce deliverable results without human intervention for very long. I'd argue they're still not there; some filtering and cleanup of the data is usually required. A lot of VFX is still about throwing human labour at problems.

        Edit: I should note I'm talking about motion capture for characters etc. Capturing the motion of a rigid object like a camera in a controlled environment is very doable.

        • ChuckNorris89 1525 days ago
          You're right. The mocap data for the actor who played Thanos in Avengers had to be post-processed by artists by hand, since the motion of a regular man doesn't correspond to the motion of a larger and heavier Titan. I guess in a few years all that could be automated with ML.
      • whatshisface 1526 days ago
        They have polarized 3D screens, and with head tracking, you have it.
        • ragebol 1525 days ago
          Was thinking the same, but that'll still only work for one PoV at a time
          • andybak 1525 days ago
            You could multiplex more images if the glasses were synced and actively polarized. Every person gets a timeslice of the total image.
          • jacobush 1525 days ago
            Or VERY high framerates.
      • babypuncher 1526 days ago
        It won't trick anyone with stereo vision, but the perspective correction provides all the depth cues needed to create a convincing 3D visual for pirates and non-stereo cameras.
        • lsaferite 1524 days ago
          Interestingly I would imagine pulling the non-CG part out of the frame would be possible as they have the ability to generate the exact image without the real-world aspects. Basically a virtual green-screen. Combine that with a stereo camera and the fact that the source CG is actually 3D and you could get very convincing 3D movies I'd think. Yeah, that simplified a lot, but I'd say it's possible. And they still get the benefits of the lighting aspects as well as the immersion for the actors.
        • ragebol 1525 days ago
          pirates? What arrrr you talking about?
    • overcast 1526 days ago
      Speaking of the Wii, does anyone else wish that motion controls could just be removed from its existing library? Not only are they a major annoyance in most games, but they're basically locked into that hardware now.
      • mercer 1525 days ago
        Metroid Prime is one of the few games I've played all the way through multiple times. When it became increasingly difficult to do so on a GameCube, I relented and tried the Wii U version. Much as I've tried, I can't get comfortable with the controls.

        Thankfully I finally have a laptop that can properly emulate the GC version and allow me to use the GC controller that I dusted off.

        But I can't wait for a GC-inspired version on the Switch!

        • overcast 1525 days ago
          I've been procrastinating playing Metroid Prime 3 because I don't want to waggle around the whole time. Pumped for the endlessly delayed Switch version though, I've had it preordered FOREVER.
  • flashman 1526 days ago
    I wonder how the photogrammetry aspect will intersect with intellectual property laws. The example used - scanning in a Santa Monica bar so that you can do reshoots without revisiting the location - would be an obvious example that might raise someone's hackles ("because it's our bar you're using to make your money" for instance). If you add that bar to your digital library, do you have to pay them royalties each time you use it? Is it any different to constructing a practical replica of a real-life location?

    Can someone wearing a few cameras walk through a building and digitise it completely without getting the owner's permission? Here in Australia, "copyright in a building or a model of a building is not infringed by the making of a painting, drawing, engraving or photograph of the building or model or by the inclusion of the building or model in a cinematograph film or in a television broadcast," for instance. (Copyright Act 1968 §66)

    • kragen 1526 days ago
      If you have to pay royalties, they won't be to the bar; they'll be to the bar's architect. Copyright law generally only covers expressive, rather than functional, aspects of a copyrighted work, so things like doors and walls might be okay, but architectural design is recognized as copyrightable.

      In general I strongly recommend avoiding the term "intellectual property" because it conflates several different areas of law with almost totally unrelated traditions, statutes, and (in common-law countries) precedents — copyrights, patents, design patents, mask works, trademarks, trade secrets, and most alarmingly, in the EU, database rights. (Moreover, it's an extremely loaded term, like "pro-life" and "pro-choice".)

      • flashman 1525 days ago
        That's why I chose the term "intellectual property": it's a useful term for the issues around "creating a thing that's somewhat like another thing," even if it's not a super helpful legal term.

        I would not be surprised to see someone argue for a new class of property rights for owners, surrounding the reproduction of a scene or location where the reproduction has commercial value. What happens if Facebook bootlegs the Mall of America for Oculus? Does that lower the value of a similar venture by the mall's owners?

    • paulmd 1526 days ago
      A potential analogy might be something like using Carrie Fisher's image in the new Star Wars movies. I would assume the estate got paid for that. Or holo-tupac.

      Practically speaking I think it will come down to what you negotiate. If you negotiate usage of the bar for your series then you can use it, otherwise not. If you negotiate resale of that model then that's legal, otherwise not. Most large productions will probably want to stay far on the right side of the law and get a written/financial agreement until things are hammered out, then you'll have amateur filmmakers who have to do vigilante shoots.

      And again, probably something that will have to be legislated out for the long term.

      In France, the appearance of buildings can be copyrighted; famously, the Eiffel Tower's operators are very aggressive about suing photographers.

      • icebraining 1525 days ago
        It's not the Eiffel Tower that is copyrighted, but the light installation that is turned on at night. Taking pictures during the day is free from such restrictions: https://www.toureiffel.paris/en/business/use-image-of-eiffel...
      • stormbrew 1526 days ago
        Royalties for use of an actor's likeness were fought for by the actors' guild after the studios did various tricky things to put Crispin Glover into Back to the Future 2 without his consent. I dunno that that really applies to locations in the same way.
    • BubRoss 1525 days ago
      When shooting at a location, the owner is paid a location fee. Detailed and specialized photography has been used at locations for decades at this point. This is a refinement of what is already happening, not something completely new.
  • huebomont 1526 days ago
    Fascinating, but this article needs a proofread, damn...

    "The virtual world on the LED screen is fantastic for many uses, but obviously an actor cannot walk through the screen, so an open doorway doesn't work when it's virtual. Doors are an aspect of production design that have to be physical. If a character walks through a door, it can’t be virtual, it must be real as the actor can’t walk through the LED screen."

    Not to mention the multiple paragraphs that are basically re-stated immediately afterwards. It's like they hit publish in the middle of editing.

  • rebuilder 1526 days ago
    The Mandalorian was a natural candidate for this kind of approach, since it's essentially a western, meaning a lot of wide landscape shots.

    The LED screen approach works nicely for fairly uncomplicated background geometry, like a plain. Try shooting Spiderman climbing up walls on that, and things will get tricky fast.

    As the article notes, slow camera moves are a plus as well. The reason given is technical, but I also wonder how far you could really push the camera motion even if tracking lag wasn't an issue. The background is calculated to match the camera's viewpoint, so I expect it would be very disorienting for the actors if the camera was moving at high speeds.

    • wbl 1525 days ago
      Spiderman climbing up a wall can be done via forced perspective. It's also an action scene, reducing the need for a background to help the actor. And some brave souls will Harold Lloyd it.
  • DonHopkins 1526 days ago
    "Once Upon a Time" (2011-2018, with Robert Carlyle as Rumplestiltskin!) was shot on virtual chroma-keyed sets with real time integrated pipeline tools to preview how it would look.

    https://en.wikipedia.org/wiki/Once_Upon_a_Time_(TV_series)

    The tech behind Once Upon a Time’s Frozen adventures

    https://www.fxguide.com/fxfeatured/the-tech-behind-once-upon...

    “Once Upon a Time” TV Series VFX Breakdown

    https://web.archive.org/web/20180623020817/http://www.animat...

  • mdorazio 1526 days ago
    For those wondering, this appears to be not nearly as expensive as I thought. The 20" square panels used are available for around $1000 each if you buy a lot of them used. Compared to a typical production budget for a high-quality franchise, it's surprisingly cheap to build one of these walls. The equipment to run it, on the other hand, is likely not cheap at all.
    • ishtanbul 1525 days ago
      If The Mandalorian had been filmed entirely on location with VFX in post, it would have cost hundreds and hundreds of millions. The sets were incredibly detailed. So I think they saved a ton of money for the output quality. I also doubt they bought second-hand gear.
      • BubRoss 1525 days ago
        Multiple hundreds of millions to shoot in the desert with red cameras and add buildings behind armored people with their hair hidden? Plenty of movies have been done like that and they didn't become the most expensive movies ever made (which is actually Pirates 3).
  • russellbeattie 1525 days ago
    Sony had a big demo of this in their CES booth a few weeks ago. They showed the camera moving around the Ghostbusters Ecto-1 car and the background moving as well. You could see from the overhead screens what the final composite looked like. It was pretty awesome, given it was all set up in a booth at a trade fair. [1]

    As expensive as all this is now, I think this is really going to make an impact in lower-budget movies. Not having to fly all over the world or having massive sets to film convincing scenes might be a really good thing.

    1. https://www.youtube.com/watch?v=kh2Q9pCxuJw

  • severak_cz 1525 days ago
    Funny that this is practically the same concept as shooting in a studio with the exterior background painted on the walls, as was done in old movies. The progress is only in the technology: now it's created by a game engine and displayed on giant LEDs; back in the 1930s it was done by hand by painters.
    • estebank 1525 days ago
      I think the innovation is the perspective correction of the background depending on the camera. That could have been accomplished with rear projection in film if it had been necessary by having the camera follow a preset path, but I don't think even BTTF attempted that.
  • csours 1526 days ago
    This reminds me of The Mill BLACKBIRD - https://www.youtube.com/watch?v=OnBC5bwV5y0
  • treblig 1525 days ago
    "Postproduction was mostly refining creative choices that we were not able to finalize on the set in a way that we deemed photo-real."

    Does anyone know how they were able to swap out the in-camera version of the background originally shown on the LED wall with something more convincing later? Seems like it'd be tough since it's not a green screen!

    • janekm 1525 days ago
      While currently they are using green screens in those instances, the camera positions are already being tracked and the image displayed on the screens is known, so it would be possible to re-render what the camera should have seen if the foreground elements hadn't been present, and use the difference from the recorded image as a mask for further post-processing.

      (which would be very cool as it would also allow using a low-resolution version of the background during production that could then be re-rendered with a higher resolution image after the fact)
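
      A minimal sketch of that difference-matte idea, assuming float frames in [0, 1] and an invented threshold:

```python
import numpy as np

def difference_matte(recorded, rerendered, threshold=0.05):
    """Re-render what the camera should have seen with no foreground
    (possible because camera pose and wall content are known), then treat
    pixels that differ from the recorded frame as foreground."""
    diff = np.abs(recorded - rerendered)
    return (diff.max(axis=-1) > threshold).astype(np.float32)  # 1 = foreground
```

      In practice this naive version would struggle with sensor noise, motion blur and moiré from the LED grid, which is presumably part of why they still fall back to projected green patches today.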

      • estebank 1525 days ago
        It seems to me that having the system perform the masking itself (instead of projecting a green screen) would reduce the ability to completely replace the scene easily, but the reflections already give you that problem. The advantage you would gain is that the real elements in the scene would no longer need any feathering around the edges (which makes them look hazy), because you could take a few pixels of the projected background as a transition to the higher-resolution render.
    • czr 1525 days ago
      IIRC they project a small green region around the actors and real props, so that ambient light and reflections are still mostly correct but they can also get a clean matte out.
  • en4bz 1526 days ago
    I think the demise of Lytro was a huge missed opportunity for the film industry. They had this and a number of other features in their cinema camera before they became defunct a few years ago.

    https://www.youtube.com/watch?v=4qXE4sA-hLQ

    • anchpop 1526 days ago
      I watched that video; it doesn't really seem like the same thing as in this article (although it's very cool). This is a real screen behind the actors, rendering the scene from the perspective of the camera.
  • asmosoinio 1525 days ago
    Why is this one person always referred to with "ASC, ACS" after their name?

    > Greig Fraser, ASC, ACS

    • emmsii 1525 days ago
      It means they are a member of the American Society of Cinematographers (ASC) and I believe the Australian Cinematographers Society (ACS).
    • numpad0 1525 days ago
      I believe many film-industry unions demand that their members' names always be followed by the union's initials, with zero exceptions, to protect the union and its members' rights.

      Show business was, and is, one of the industries where unions worked.

  • jpmattia 1525 days ago
    I've had a peripheral interest in virtual sets and real-time compositing by way of a colleague from grad school.

    A quick visual summary of this tech: http://www.lightcrafttech.com/portfolio/beauty-beast/

    This video was from a pilot several years ago, and it didn't make it to air, but it was visually wonderful.

    • russellbeattie 1525 days ago
      With F. Murray Abraham as well! Nice. He's looked the same since 1984's Amadeus. Crazy!
  • resoluteteeth 1525 days ago
    This seems slightly limited in its current form: they have to choose either to shoot the rendered background as-is (making it harder to modify in post-production) or to turn the areas around the actors into green screens (sort of defeating the purpose).

    I wonder if they could use some sort of trick like projectors synced with high fps cameras to make the real time rendering invisible to the cameras instead?

  • pupdogg 1525 days ago
    The highest-resolution LED panel pixel pitch I've seen to date is 0.7mm... wouldn't this result in a lower-resolution capture of the projected background, specifically when they're trying to shoot movies at or above the 4K range? Also, how do they sync the scan rate of the background video being played back with the camera recording the footage?
    • thegoleffect 1525 days ago
      In some photos, you can see that from the camera's POV, the area around the actors is displayed on LED as green screen so the actors can be masked out. Then, a higher resolution background is composited in. Thus, the original LED serves to accurately light the scene to reflect the background but not always to actually be the background.
      • pupdogg 1525 days ago
        This makes more sense!
    • snowwrestler 1525 days ago
      One of the details from the article is that using anamorphic lenses essentially treats the camera sensor as if it is larger than it actually is, which reduces the effective depth of field.

      If you look carefully at the backgrounds in Mandalorian scenes, a lot of the time, they are slightly soft (out of focus)--which hides the pitch of the LED wall by expanding each LED point into larger, overlapping circles of confusion. To be clear, that softness is a physical effect of the camera lens, not a digital effect on the wall, so it can be captured by the camera sensor up to any resolution you want.
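
      The "expanding each LED point into a circle of confusion" effect can be estimated with the thin-lens blur formula. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not from the article):

```python
def coc_diameter_mm(f_mm, n_stop, focus_mm, subject_mm):
    """Thin-lens circle-of-confusion diameter on the sensor for a point
    at subject_mm when the lens is focused at focus_mm."""
    aperture = f_mm / n_stop
    return aperture * (f_mm / (focus_mm - f_mm)) * abs(focus_mm - subject_mm) / subject_mm

# Assumed setup: 50mm lens at f/2, actor in focus at 4m, LED wall 6m away.
blur = coc_diameter_mm(50, 2.0, 4000, 6000)
# The 2.84mm pitch, imaged onto the sensor at the wall's magnification:
pitch_on_sensor = 2.84 * 50 / (6000 - 50)
assert blur > pitch_on_sensor  # the wall's blur circle swamps the LED grid
```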

      > Also, how do they cope with the scan rate of the background video being played back to sync with the camera recording the footage?

      In the article they say the latency is about half a frame, which they handled by using slow camera moves--which conveniently is similar to how the original Star Wars films were shot.

      If you're talking about the refresh rate of the LEDs, I believe those can be cranked up way higher than the frame rate of the camera, which was likely 24 or maybe 30 frames per second to give that cinematic feel.
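      Rough numbers (only the half-frame latency comes from the article; the frame rate, shutter, and LED refresh rate are typical assumptions):

```python
fps = 24                             # assumed cinematic frame rate
latency_ms = 0.5 / fps * 1000        # half a frame of lag, per the article

shutter_s = 1 / 48                   # assumed 180-degree shutter at 24fps
led_refresh_hz = 3840                # assumed high-refresh LED processing
cycles = led_refresh_hz * shutter_s  # LED refresh cycles per exposure

print(f"tracking latency ~{latency_ms:.1f}ms, "
      f"{cycles:.0f} LED refresh cycles per exposure")
```

      With dozens of refresh cycles integrated into every exposure there is no banding to speak of; the half-frame latency only shows up under fast camera moves, hence the slow ones.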

    • jxy 1525 days ago
      It depends on your viewing distance: a pixel pitch of 2.84mm is practically a retina display if you look at it from 10m away.
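      The arithmetic holds up - a quick check against the ~1 arcminute resolution usually quoted for 20/20 vision:

```python
from math import atan, degrees

pitch_m = 0.00284   # 2.84mm pixel pitch
distance_m = 10.0   # viewing distance

# Angle subtended by one pixel pitch at the viewing distance, in arcminutes.
arcmin = degrees(atan(pitch_m / distance_m)) * 60
print(f"angular pitch: {arcmin:.2f} arcmin")  # ≈ 0.98, right at the ~1 arcmin limit
```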
  • petermcneeley 1526 days ago
    This technique will produce potentially significant rendering artifacts in the final image. The backdrop is correct only from the position of the camera. A reflection from any surface will not be geometrically correct (as seen in the image from the article). I think that even ambient lighting would contain noticeable deformations.
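    A sketch of the geometry (the distances are made-up illustrative numbers): a virtual point is drawn on the wall along the tracked camera's line of sight, so any other viewpoint - a reflective prop, say - sees it displaced by similar triangles:

```python
def wall_error_m(offset_m, wall_dist_m, virtual_depth_m):
    """Displacement on the wall between where a virtual point is drawn
    (correct for the on-axis camera) and where it should be drawn for a
    viewpoint offset_m to the side at the same distance from the wall."""
    return offset_m * virtual_depth_m / (wall_dist_m + virtual_depth_m)

# Assumed: camera 5m from the wall, a shiny prop 2m to the side,
# a virtual mountain 50m behind the wall plane.
err = wall_error_m(offset_m=2.0, wall_dist_m=5.0, virtual_depth_m=50.0)
print(f"the reflection sees the mountain ~{err:.1f}m off on the wall")
```

    The deeper the virtual geometry, the closer the error gets to the full viewpoint offset, which is why distant backdrops are the worst case for reflections.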
    • virtualritz 1526 days ago
      It's much better than the reflection of a green/blue screen or an empty studio with some cameras and people.

      Glossy surfaces are usually not a problem unless they're (near) perfect mirrors. Even then -- lights are usually what you see in most reflections because they're orders of magnitude brighter than the rest of the set.

      If reflections are a problem with this new technique in certain settings, they would be an even bigger problem with the current state of the art.

      In those cases you replace them digitally. No way around this, either way.

      Related trivia: The chrome starships in EP1 were actually rendered with environment mapping and reflection occlusion[1]. Even most games do better stuff today. Did you notice? :]

      [1] http://www.spherevfx.com/downloads/ProductionReadyGI.pdf, section 5.3 Reflection Occlusion, p. 89

  • vsareto 1526 days ago
    Can you get a decent job just by knowing Unreal Engine well? Maybe by just doing small POC projects?
    • mattigames 1525 days ago
      If by "well" you mean including Blueprints and physics-based shaders, then probably yes; although skills like 3D rigging and modeling, which are done in separate third-party tools, are a must for a lot of related jobs.
    • vernie 1526 days ago
      Libreri and Sweeney are trying their damnedest to make that true.
  • vagab0nd 1524 days ago
    This is so cool. How do they connect the virtual scene with the ground? Do they have screens on the floor as well?
  • web-cowboy 1526 days ago
    So we'll be able to play video game adaptations of the locations in the episodes really easily/soon, right? ;)
    • cgrealy 1525 days ago
      I'd say it wouldn't help much. You're building one small scene that's designed to be viewed from a relatively small area or path. I highly doubt they're building anything that doesn't appear on screen. If you were to walk about that scene in Unreal... I'd imagine it's the digital equivalent of a fake old west town.
    • ghostbrainalpha 1526 days ago
      That has to be a consideration. But I don't know how much it would really speed up production of a AAA Mandalorian game. Some... maybe a 6-month head start on a 4-5 year game.

      It would definitely help make the game environments higher quality and be a cost saving to the studio.

      • marzell 1525 days ago
        It could be really cool to have in-game art using the exact same assets from the actual film. Even whole scene cinematography could be taken from scene data used in a movie.

        I can imagine a consumer experience using high-end VR (say an attraction at Disneyland)... take the assets and cinematography from a scene in a movie, digitally recreate everything that WASN'T already digital from the scene, and then you could relive a scene in VR, with the bonus that you can navigate it in realtime and see it from different perspectives. This would be especially adaptable for franchises like Avatar where everything in the film is already composited in 3D.

        On a darker note, you could combine all this with things like the Body Transfer Illusion, take some kinda psychedelic and star in your own horror flick where gruesome things are done to your own body in VR. Good times for the whole family!

  • jedberg 1525 days ago
    How cool would it be to hook your Xbox up to the wall. Or a TV receiver.
  • ghostbrainalpha 1526 days ago
    So I guess the next step will just be movies made entirely inside virtual reality environments, with an actor in a mocap suit playing his virtual avatar, right?
  • fuzzfactor 1524 days ago
    The car, the man, DeLorean!

    Oh wait, never mind . . .

  • lmilcin 1526 days ago
    I don't care how "groundbreaking" the graphics pipeline is. I watched a couple of episodes and I had to force myself to keep watching to, I don't know, give it a chance?

    I wonder when The Industry figures out that the story is more important than the graphics. You don't buy books for a beautiful cover and typesetting... at least most of you don't, and not most of the time.

    • anigbrowl 1526 days ago
      I enjoyed the story and apparently many other people did too. It's fanservice for sure, hence all the callbacks to characters and aliens that had background or very brief appearances in the original movies and left people wanting more. Cheesy, perhaps, but the entire franchise is pineapple pizza in space.
    • cgrealy 1525 days ago
      Whether or not you liked the story is utterly irrelevant to the article.
  • tigershark 1525 days ago
    Please send me a link to someone who watched the original Star Wars, the later trilogy, and finally the last “attempt”, and really appreciated it. I even watched “Rogue One” on the biggest screen available around me, with high expectations, and I'm feeling really sad because of that.
    • UI_at_80x24 1525 days ago
      I watched the original trilogy in the theaters when they were released. (Admittedly I was a bit young for the first one.)

      I've seen all the follow-ups/addons/sequels/rewrites that exist. Rogue One is the movie I waited 30 years for: another story in the SW universe. It wasn't perfect, but it was damn good enough.

      The Mandalorian gives me hope that 'Grownups' are in charge and can create something worth looking at.

      I am holding out hope that a story and plot will emerge. I really hate "baby yoda", but if that's what it takes to move a real story along I am willing to tolerate it.

      #1 It looks incredible. Must win Emmy for best cinematography! #2 It feels real. It feels right.

      I'm sorry you didn't like Rogue One.

    • aurizon 1525 days ago
      I am over 80, I have watched them all, and enjoyed them all. What few criticisms I had were lost in the overall enjoyment of all that good work. What we do now makes those early shows look crude - which they are by modern standards, but in the day OOOOHHH, AAAHHHH. I still recall the first Star Wars crawl and it makes me shiver - I guess that's why they still use it...
    • vidarh 1525 days ago
      I've seen all of them, and loved all of them.

      To me, most of the criticism feels like it comes from people who have had time to rationalize the old plots and settings, but who see the new ones with a more jaded mind or a set idea of what they "should" be like instead of approaching them with an open mind. They have flaws, but so does the original trilogy.

      Star Wars was, from the beginning, a series of silly westerns set in space with all kinds of ridiculous aliens thrown in. Taking them too seriously and applying that as a constraint on the following trilogies would never end well.

      When people complained about Jar Jar, for example, all I could think about was how people could take issue with Jar Jar but take no issue with Chewbacca, or the Ewoks, or R2D2 and C3PO. The originals are also incredibly cheesy in ways that were common in the '80s but are really dated today (e.g. the Ewok celebration scene).

      I think seeing e.g. Spaceballs is a good way of having it driven home just how ridiculous Star Wars really is, and looked at the time, if you don't let yourself be immersed in it. Spaceballs crosses the line from "serious" space opera to comedy very clearly, but to me it also illustrates what ridiculous lengths they had to go to in order to clearly be a send-up instead of a cheap Star Wars knockoff. Had they cut a few more jokes and toned back a few more things, it would have seemed like an attempt at being serious.

      Star Wars is fun in part because it manages to "sell" a setting that is on the face of it so crazy, without taking itself too seriously. But it seems a lot of the fans of the original trilogy bought into that and then decided to take it all very seriously, instead of seeing the films as light adventure movies and "space westerns".

      The challenge is also in no small part a question of changing tastes in other ways as well - my son finds the original trilogy horribly slow-moving to the point of boredom, for example, and I can understand that. Tastes have changed. Pacing has changed. Composition and cinematography have changed. But that also means that the modern trilogies had to be very different or fall flat with younger audiences, in ways that would always annoy the die-hard fans; they're trying to reflect how we remember the originals more than how they are, but different people remember them in different ways.

      [I tend to treat criticism of the last Indiana Jones in much the same way; people venerate the original movies, but they were extremely cheesy and contrived, involving literal deus ex machina, yet somehow surviving a test explosion in a fridge and interdimensional aliens are suddenly over the top.]

      As for Rogue One, to me it's one of the most enjoyable movies of the franchise. In no small part because they were allowed to explore the setting with much more freedom (it was OK to let characters more central to the plot die, for example).