Ray Tracing Is No New Thing

(bytecellar.com)

58 points | by fallingbinary 2039 days ago

8 comments

  • jdmoreira 2038 days ago
    Sure, but real-time raytracing at home is quite a new thing and that's what the hype is about. Right?
    • blakespot 2038 days ago
      Yes, as I explain in the article. I kept coming across media calling ray tracing a new technique in articles about the RTX boards. I responded to a few and thought I'd talk about it on the blog. Enjoying ray traced animations (pre-rendered) on the Amiga was a thrill for me back in the old days, and so it felt like a good fit for my vintage computing blog.
    • deburo 2038 days ago
      The title could've been: Ray Tracing, an old technique finally viable in games.
    • syn0byte 2038 days ago
      http://www.q3rt.de/

      if by "new" you mean well over a decade old...

    • georgeecollins 2038 days ago
      Doom was real time ray traced.
  • vvilliam0 2038 days ago
    Obviously. Good article but clickbait title. "A History of Ray Tracing" would have done the job.
    • blakespot 2038 days ago
      What prompted me to write the article was running across several tech news articles declaring ray tracing to be a new technique in pieces about the RTX hardware -- real-time aside. My purpose in writing the piece was to clarify that ray tracing is not new, but real-time ray tracing (generally) is.
    • neetdavid 2038 days ago
      Can we retire calling things clickbait?

      The post does cover the history but it is a response to coverage like this: "...a new graphics rendering technique called ray tracing."

    • peterashford 2036 days ago
      How can a factually correct statement be click bait?
  • rollulus 2038 days ago
    I'd say that it is rather obvious that ray tracing is not a new thing, since it simulates how light physically behaves.

    I think of 3D rendering as a spectrum: rasterization requires little computation but has little to do with physics; ray tracing requires a lot of computation and has everything to do with physics. Somewhere in between are hybrid methods: rasterization with ray tracing components added to it, or ray tracing with approximations.

    For instance, pure rasterization cannot do shadows. They are approximated by rendering the scene from the viewpoint of a light and testing the rasterized scene for occlusions that cast shadows. And the other way around: real-time ray tracing cannot compute all indirect lighting paths; only a subset is considered, at the cost of e.g. variance.
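
    A toy sketch of that two-pass shadow-map idea in Python, with the scene reduced to a couple of sample points for brevity (my own illustration, not production code):

      # Toy setup: an orthographic light looking along -z; larger z is
      # nearer to the light. Geometry is reduced to sample points.
      SIZE = 64
      points = [(0.5, 0.5, 0.8),   # occluder, nearer to the light
                (0.5, 0.5, 0.2)]   # receiver, behind the occluder

      def light_pixel(p):
          # Project x, y in [0, 1) onto the shadow-map grid.
          return int(p[0] * SIZE), int(p[1] * SIZE)

      # Pass 1: render depth from the light's viewpoint, keeping the
      # nearest hit per pixel -- that image is the shadow map.
      shadow_map = {}
      for p in points:
          px = light_pixel(p)
          shadow_map[px] = max(shadow_map.get(px, float("-inf")), p[2])

      # Pass 2: while shading, re-project each point into light space and
      # compare against the stored depth (a small bias avoids self-shadowing).
      def in_shadow(p, bias=1e-4):
          px = light_pixel(p)
          return p[2] + bias < shadow_map.get(px, float("-inf"))

      print(in_shadow(points[0]))  # False: the occluder sees the light
      print(in_shadow(points[1]))  # True: something nearer blocks it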

    • swerner 2038 days ago
      It's much simpler than that: both rasterisation and ray tracing are methods to solve visibility. The main difference is that one answers the question "given a primitive, what pixels does it overlap?", and the other "given a pixel, what primitives does it overlap?"

      Light transport, shading, and shadowing are all just implemented on top, and are not a direct result of the visibility calculation.
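
      A toy version of that duality on a one-dimensional "screen", in Python; the two functions run the same overlap test with the loops swapped and produce the same image:

        # A primitive is (name, first_pixel, last_pixel, depth);
        # a pixel is an integer x.
        prims = [("A", 0, 4, 2.0), ("B", 3, 7, 1.0)]
        pixels = range(8)

        def rasterize(prims):
            # Outer loop over primitives: "given a primitive, what pixels?"
            zbuf, fb = {}, {}
            for name, x0, x1, z in prims:
                for x in range(x0, x1 + 1):
                    if z < zbuf.get(x, float("inf")):  # z-buffer keeps nearest
                        zbuf[x], fb[x] = z, name
            return fb

        def raytrace(prims, pixels):
            # Outer loop over pixels: "given a pixel (ray), what primitives?"
            fb = {}
            for x in pixels:
                hits = [(z, name) for name, x0, x1, z in prims if x0 <= x <= x1]
                if hits:
                    fb[x] = min(hits)[1]  # nearest intersection wins
            return fb

        assert rasterize(prims) == raytrace(prims, pixels)  # same image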

    • mbel 2038 days ago
      > For instance, pure rasterization cannot do shadows.

      Well... the technique that you describe (shadow mapping) is actually pure rasterization; it just requires more than one rasterization pass. This also ignores the fact that there are other techniques for getting shadows in rasterizing renderers (stencil volumes and other approaches that are considered historical today).

      I get your point that rasterization doesn't support shadows "naturally" the way ray tracing does, but in my opinion your wording and the example are rather unfortunate. The same goes for reflections; I would say SSS (subsurface scattering) or caustics are probably better examples, since they are really only done with techniques based on ray tracing.

      • swerner 2038 days ago
        SSS has been done with point clouds for a long time. Only recently have films switched to ray tracing for SSS.

        Games these days implement SSS in screen space using rasterisation, with no ray tracing either.

    • GuB-42 2038 days ago
      Rasterization has just as much to do with physics as raytracing. They can both get you a correct solution to the rendering equation (i.e. be physically correct) if you wait long enough, or get you an approximation of it in a reasonable time scale.

      For example if all you need are shadows cast from a point light source, a rasterization technique like stencil shadows will give you the same exact result as simple raytracing, and neither will be physically correct.

      It is just that depending on your rendering time budget, some things are better done with rasterization and others are better done with raytracing. Real-time engines, with simplified models, typically work better with rasterization, while precalculated scenes, with more realistic models, work better with raytracing.

    • pcwalton 2038 days ago
      Rasterization and raytracing are formally equivalent in a sense. You should be able to algebraically rearrange ray/triangle intersection tests performed in raytracing to get Pineda rasterization. So I don't really see one as more physical than the other. Rather the difference is that rasterization starts with each triangle and determines which rays intersect it, while raytracing starts with each ray and determines which triangles intersect it.
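
      A minimal sketch of the edge-function test at the heart of Pineda rasterization (a toy Python example, not from Pineda's paper):

        # Each edge function is a signed-area test; a pixel center is
        # inside a counter-clockwise triangle iff all three are >= 0.
        def edge(a, b, p):
            return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

        def covered_pixels(a, b, c, width, height):
            for y in range(height):
                for x in range(width):
                    p = (x + 0.5, y + 0.5)  # sample at the pixel center
                    if min(edge(a, b, p), edge(b, c, p), edge(c, a, p)) >= 0:
                        yield x, y

        # Pixels covered by the triangle (0,0)-(8,0)-(0,8) on an 8x8 grid.
        print(sorted(covered_pixels((0, 0), (8, 0), (0, 8), 8, 8)))
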
      • theoh 2037 days ago
        Historically, rasterization has been a way of putting triangles onto the screen, maybe with a Z-buffer to determine visibility. It's basically an image-space idea, with things like Gouraud shading happening in image space, and though you could extend it and put it to use in calculating shadow volumes or shadow maps, it doesn't implicitly deal with light transport. That's the first difference.

        Ray tracing, on the other hand, has always been about (forward or backward) rays of light propagating through object space. It wasn't about light transport in the early days (just visibility and shading, based on simple models like Phong) but it is very well-suited to modelling transport, because it addresses the notion of fully-spatial rays in object space.

        Writing a physically based (light transport) renderer which was internally based purely on rasterization to rectangular images would be an odd choice, partly because many of the intermediate images would somehow have to be parameterized to represent locations on a hemisphere, etc.

        I'm open to correction on this, but rasterization algorithms are really tied to projections onto a rectilinear grid, orthographic or perspective. Ray-tracing doesn't need to assume/know about this raster grid idea and as a result can be used with other geometries. This makes it strictly more powerful than rasterization. This kind of thing, for example: https://www.glassner.com/computer-graphics/graphics-research... is a very bad fit for polygon rasterizers because each triangle is going to be warped in image space.

        • pcwalton 2037 days ago
          You can actually extend the rasterization concept to 3D, as shown in this paper: http://cg.ivd.kit.edu/publications/p2012/3dr/gi2012.pdf
          • theoh 2037 days ago
            Hmmm, yes, though they do say "In this paper we focus on primary (camera) rays, i.e. rays with a common origin or parallel rays, because only these are also covered by rasterization. We consider secondary rays and efficient global illumination algorithms, such as path tracing or photon mapping, as orthogonal to our approach."

            Just what that "orthogonal" means is a bit mysterious, but their project seems to be to generalize rasterization further than they've got in this paper: "we aim for further generalization, in particular, a parameterization which allows for incremental computation, not only for the ray direction, but also the ray origin"

    • octachron 2038 days ago
      Even ray tracing does not capture all the physics of electromagnetism and only works at the level of geometrical optics: any effect, like diffraction or iridescence, that arises due to the wave-like nature of light still needs to be implemented in an ad-hoc way in a ray tracing algorithm. But fully simulating Maxwell's equations (or QFT) to keep track of those minor effects would be insanely expensive.
    • toolslive 2038 days ago
      > since it simulates how light physically behaves.

      I think ray tracing is at best an ad-hoc model for light that produces nice results, but physically based it isn't.

      • mattnewport 2038 days ago
        It's physically based; it's just not a complete simulation of all the physics involved. It's at least as physically based as most rigid body, cloth or fluid physics simulation.
        • toolslive 2038 days ago
          Radiosity would qualify better as `based on physics` than ray tracing, but can't do things like mirrors. I could live with classifying ray tracing as `geometry based`, in the way Newton studied mirrors.

          (You could do something that uses ray tracing to determine what's visible and what isn't and radiosity to determine the colour, but that's an entirely different story).

          • mattnewport 2038 days ago
            All rendering is an attempt to solve the rendering equation which is very much physically based. Whitted ray tracing is doing a lot of simplification but path tracing (which is closer to what people are talking about doing in real time with new hardware) is a pretty principled attempt to solve the rendering equation via Monte Carlo methods. I don't see how that's not physically based. The definitive book on this stuff is literally called "Physically Based Rendering" https://pbrt.org/
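
            A toy sketch of that Monte Carlo idea for the one case with a trivial closed form, a Lambertian surface under a uniform sky, where the reflection integral evaluates to exactly the albedo:

              import math, random

              # Estimate L_o = integral of f_r * L_i * cos(theta) over the
              # hemisphere by uniform hemisphere sampling. Lambertian BRDF:
              # f_r = albedo / pi; uniform sky: L_i = 1. Exact answer: albedo.
              def estimate_lo(albedo, n_samples=200_000):
                  pdf = 1.0 / (2.0 * math.pi)      # uniform over the hemisphere
                  total = 0.0
                  for _ in range(n_samples):
                      cos_theta = random.random()  # cos(theta) is uniform on [0, 1)
                      f_r = albedo / math.pi
                      total += f_r * 1.0 * cos_theta / pdf
                  return total / n_samples

              print(estimate_lo(0.7))  # converges to ~0.7 as samples grow
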
          • ctrl-j 2038 days ago
            What is your qualm with calling ray-tracing physically based? Don't most models even include transmissivity and refraction?

            I mean, unless you're expecting them to calculate absorption/re-emission at every bounce...

            It still seems pretty "physical" to me.

            • toolslive 2038 days ago
              Here you're using geometrical optics, which models light as a narrow beam (ray) idealized as a line. It all becomes simple vector math from there onwards.

              However, physics has known since the early 19th century that light is a wave.

              https://en.wikipedia.org/wiki/Young%27s_interference_experim...

              It's not that the model breaks down only in extreme conditions (like Newton's laws of mechanics), but in day-to-day situations as well.

              I think that's the essence of my qualm.

              • wnkrshm 2038 days ago
                Where in day-to-day situations, like walking around outside or in a building, do you see diffractive phenomena where a ray approximation breaks down far enough for you to notice?

                Usually all of that is smoothed over by light sources being extended sources, not points, so the interference contrast is lost by infinitely many interference patterns being overlaid incoherently. Also, almost all light sources (except for lasers) have microseconds of coherent emission, so the pattern changes so fast it blurs into a regular blurry edge of shadow.

                I can only think of some very special situations where some blinds select a very narrow angular range of sunlight and then you see interference fringes in the shadow.

                Or when you look into a puddle with an oil film or at some sort of diffraction grating or holographic film (which can be predicted with ray-based methods, like Wigner-distribution based ray-tracing, though that still comes with some error at large angles).

                Even in laser optics, 95% of the optics design is done with geometrical optics methods, because the rays you use can be related to the phase profile of the radiation in the system. You can then integrate (with rays) the diffraction pattern (but not as well in the shadow of apertures ofc).

              • ctrl-j 2038 days ago
                The wave-like nature of photons does not exclude the particle behavior. Light is both a particle and a wave.

                Physicists still rely on Snell's law. Optics courses still include path tracing when studying refraction and dielectrics.

                Excluding the particle behavior of light just because the wave nature exists is not something a physicist would do.

                • toolslive 2038 days ago
                  True, but the concept of ray used here is neither particle nor wave. The whole thing is way more geometry than physics.
                  • ctrl-j 2038 days ago
                    I mean, you're arguing a term that's used in physics isn't physical. Raycasting is often used when solving EM equations.

                    Are you purporting that photon streams don't follow a ray? And when they interact with a surface they don't obey Snell's law? And when you look at the interface between mediums, a percentage of the intensity is transmitted and the remainder is reflected (the ratio of which is determined by the angle of incidence)?
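
                    Concretely, a toy sketch of both behaviors in Python, using Schlick's approximation for the reflected/transmitted split (an approximation to the full Fresnel equations):

                      import math

                      def snell(theta_i, n1, n2):
                          # Snell's law: n1 * sin(theta_i) = n2 * sin(theta_t).
                          s = n1 / n2 * math.sin(theta_i)
                          if s > 1.0:
                              return None  # total internal reflection
                          return math.asin(s)

                      def schlick_reflectance(theta_i, n1, n2):
                          # Fraction of intensity reflected at the interface.
                          r0 = ((n1 - n2) / (n1 + n2)) ** 2
                          return r0 + (1.0 - r0) * (1.0 - math.cos(theta_i)) ** 5

                      theta_i = math.radians(45.0)
                      print(math.degrees(snell(theta_i, 1.0, 1.5)))  # air->glass: ~28.1 deg
                      print(schlick_reflectance(theta_i, 1.0, 1.5))  # ~4% reflected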

                  • mattnewport 2038 days ago
                    Would you say the same thing about rigid body physics simulations? There's no such thing as an idealized rigid body in reality, but such simulations use lots of useful approximations that are also used in "real" physics.
  • bitL 2038 days ago
    Can someone from AMD/NVidia please jump in and explain the difference between Radeon Rays (2016?) and RTX (2018)? Thanks!
    • twtw 2038 days ago
      Radeon Rays is more comparable to OptiX (2009), in that both are GPU accelerated ray tracing libraries. RTX (2018) differs in that it uses special purpose ray tracing units on the GPU, rather than running only on the SMs/CUs.
    • mattnewport 2038 days ago
      Dedicated hardware to accelerate BVH traversal and ray/triangle intersections. Previous GPU-accelerated ray tracing implementations have just used the existing programmable shader hardware for these, but that hardware isn't ideally suited to the task.
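
      For reference, the core test being accelerated, ray/triangle intersection, in the classic Möller-Trumbore formulation (a toy Python sketch; the actual hardware algorithm isn't public):

        def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
            # Returns the distance t along the ray, or None on a miss.
            sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
            dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
            cross = lambda a, b: (a[1]*b[2]-a[2]*b[1],
                                  a[2]*b[0]-a[0]*b[2],
                                  a[0]*b[1]-a[1]*b[0])
            e1, e2 = sub(v1, v0), sub(v2, v0)
            pvec = cross(d, e2)
            det = dot(e1, pvec)
            if abs(det) < eps:         # ray parallel to the triangle plane
                return None
            tvec = sub(orig, v0)
            u = dot(tvec, pvec) / det  # first barycentric coordinate
            if u < 0.0 or u > 1.0:
                return None
            qvec = cross(tvec, e1)
            v = dot(d, qvec) / det     # second barycentric coordinate
            if v < 0.0 or u + v > 1.0:
                return None
            t = dot(e2, qvec) / det    # distance along the ray
            return t if t > eps else None

        # A ray straight down +z hits the triangle in the z=1 plane at t=1.
        print(ray_triangle((0.2, 0.2, 0), (0, 0, 1),
                           (0, 0, 1), (1, 0, 1), (0, 1, 1)))  # 1.0
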
      • SketchySeaBeast 2038 days ago
        So in that case is this going to be a "G-Sync"/"Freesync" or "Gameworks"/"No, I like my framerates" thing? If developers begin supporting the RTX hardware, is there a way AMD can get on board, or is this another one of NVidia's patented industry-hurting moves?
        • twtw 2038 days ago
          I can't answer the question, but I'm curious how you think NVIDIA should have introduced real time raytracing to the industry? They can make it proprietary at the API level, or proprietary at the ISA level, but at some point you're stuck with the fact that their hardware can do something AMD's cannot.

          Is it industry hurting to make the GPU they sell have any exposed new capability, other than making the same old thing slightly faster?

          • SketchySeaBeast 2038 days ago
            NVidia actively denies others the ability to interact with their "Gsync" platform - an AMD card will never be able to take advantage of a Gsync monitor. I'm getting the answer that this is a Vulkan/DirectX API, and assuming that's the case, that's great. But if it were a proprietary API (like Gameworks) that could only possibly run on proprietary hardware, I'd have a problem. As long as everyone can leverage those API calls at the same time, I'm perfectly happy.
            • pjmlp 2037 days ago
              It's an OpenGL/Vulkan/DX12 API.

              It is up to the other vendors to produce their own hardware and respective OpenGL/Vulkan extensions.

              And maybe Khronos eventually gets to standardize them.

        • maeln 2038 days ago
          As far as I know, both Vulkan (with extensions; for NVIDIA, VK_NV_ray_tracing) and DirectX (with DXR) have ray-tracing capabilities in their standards. There shouldn't be any patent that prevents AMD from providing an implementation in their drivers.
        • Asooka 2038 days ago
          Developers support RTX via DirectX's DXR raytracing API. AMD can put whatever hardware they want on their cards to accelerate those calls and implement the appropriate drivers.
        • mattnewport 2038 days ago
          For games most will access the functionality via the new DirectX ray tracing APIs (or perhaps the Vulkan equivalent in the future) so there's nothing stopping AMD or Intel from adding hardware to accelerate those APIs too.
          • SketchySeaBeast 2038 days ago
            Ah, excellent. Then I support this future. I thought this would be a gameworks version 2.0 problem.
    • ksec 2038 days ago
      And could someone from the game development community explain how hard it is to do RTX or DirectX RT in current games? Would companies now have to do two sets of graphics, design, and code paths? One that does it with DirectX RT, and the other doing it with all the rasterisation techniques such as shadow mapping etc.

      Basically I am interested in the cost of such a DirectX RT implementation, and in what sort of time frame we could see these on the market.

      • annywhey 2037 days ago
        It's a big step; both assets and code paths have to change to make full use of it. It's possible to change some effects to work in a forwards-compatible way, but this isn't a free lunch. As a traditional GPU, the RTX is only slightly better than the existing 10-series.

        Since AMD continues to hold the console space with its semi-custom chips, this is only likely to impact a limited set of titles in the near future.

      • pjmlp 2037 days ago
        Naturally multiple code paths are required, as always.
        • ksec 2037 days ago
          Yes, but it means another code path to test and optimise, and another set of graphics design assets for RT. As if current AAA title budgets were not large enough. Instead of making quality games cheaper, we are making graphics-intensive games even more expensive to build, and I don't think that is healthy at all.
  • arayh 2038 days ago
    Gamers have long been clamoring about how they can't wait until in-game graphics match those of pre-rendered cinematics (that said, a lot of cinematics these days are no longer pre-rendered). Ray tracing is such an expensive operation. According to this Quora answer, it took 29 hours to render a single frame of "Monsters University": https://www.quora.com/How-long-does-it-take-to-render-a-Pixa...

    We're probably nowhere close to getting real-time Pixar-quality rendering in our games right now, but we've definitely made leaps and bounds over the last few decades.
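
    For scale, a quick back-of-envelope in Python (taking the 29-hour figure at face value):

      # How far is 29 hours per frame from a real-time 60 fps budget?
      seconds_per_frame = 29 * 3600               # offline: 104,400 s per frame
      realtime_budget = 1 / 60                    # real time: ~0.0167 s per frame
      print(seconds_per_frame / realtime_budget)  # ~6.3 million times too slow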

    • learc83 2038 days ago
      That's because it's a moving goalpost. Pixar keeps increasing the quality to take advantage of faster hardware and better algorithms.

      We'll never be close to getting today's Pixar quality in real time until Pixar's rendering is good enough that they stop meaningfully improving it. What we can have is yesterday's Pixar-quality rendering in-game.

      • MisterTea 2038 days ago
        That should be the new goalpost: can it render a Pixar film in real time? E.g., do we have a Toy Story-capable card yet?

        (Note: I know offline rendering and the real-time raster rendering we use on GPUs are completely different methods. But there is a point where the raster trickery can catch up and match the offline stuff.)

        • gmueckl 2038 days ago
          Today's GPUs are probably much closer to what Renderman was doing back when Toy Story was made than you think. They used an algorithm called REYES, which has nothing to do with raytracing and in fact can only barely be made to combine with ray tracing at all [1]. It was completely thrown out of Renderman only in the last couple of years for that reason.

          REYES really is an early take on rasterization with tessellation, designed for hardware with extreme memory constraints. Although the actual tessellation algorithm works differently from GPU hardware tessellation, the basic idea of tessellating dynamically to the required level of detail for the current frame carried over into the hardware.

          [1] https://en.m.wikipedia.org/wiki/Reyes_rendering

      • minikites 2038 days ago
        Exactly, look at the first Toy Story and compare that to something like The Witcher 3.
    • maaark 2038 days ago
      We've also long since passed the point where games have pre-rendered scenes that could have been rendered in-engine on high-end hardware.

      The only tip-off I had that (some of) Nier Automata's cutscenes are pre-rendered was the drop to 30fps...

    • kabes 2038 days ago
      Actually, even into the late 2000s most Pixar movies were based on rasterization (REYES). So you don't necessarily need ray tracing for that.
    • EpicEng 2038 days ago
      >Gamers have long been clammering about how they can't wait until in-game graphics match those of pre-rendered cinematics

      Well, they do; you just have to look back X years depending on your criteria. Look at modern in-game graphics vs. PSX cinematics, for instance. It's not even close.

  • bni 2038 days ago
    I'm an Amiga fan, but in what way is the Juggler demo relevant here at all? Ray tracing and storing the 2D result was surely done much earlier on workstations from SGI, Sun, etc.
    • blakespot 2038 days ago
      I wrote this article. The Juggler was the first time I'd seen ray tracing on a computer, and watching these pre-rendered animations on the Amiga was one of the thrills of the system, as it surpassed all consumer micros of the day graphically as far as on-screen colors. The Amiga's Hold-And-Modify (HAM) mode could render the full 4096-color palette on-screen and was very well suited to displaying ray traced scenes, with their realistic coloring and shading. It could do so at a resolution of 320x400 (4:3 aspect) and at a sufficient framerate, given the flexibility and power of the Amiga's blitter and memory architecture. As such, it seemed worth a mention here.
    • mgkimsal 2038 days ago
      I'd say it's somewhat relevant in that it popularized the idea to a new generation of folks (like me) who'd never heard of it before. I spent hours with POV-Ray as a kid (well... minutes, then hours waiting). I didn't know at the time it wasn't 'new' (it was 'new' to me), but also AFAIK there were no other moderately affordable home computing systems where this was possible. Maybe it was a thing on MS-DOS clones of the late '80s and I just missed it there?
  • Coffeewine 2038 days ago
    The article concludes with a succinct TL;DR, but it's worth a read if you're at all interested.

    > So, ray tracing. It’s a rendering technique that has been around for over 45 years. It’s nothing new. Finally seeing the benefits of this technology enhance the environments in our games and VR worlds — in real time — thanks to a new API and dedicated consumer hardware, that’s the New Thing.

    • geforce 2038 days ago
      We did learn POV-Ray in high school computer class. We rendered on our faithful Sun Microsystems workstations. Good times.