Yes, as I explain in the article. I kept coming across media calling ray tracing a new technique in articles about the RTX boards. I responded to a few and thought I'd talk about it on the blog. Enjoying ray traced animations (pre-rendered) on the Amiga was a thrill for me back in the old days, so it felt like a good fit for my vintage computing blog.
I think the term is 'ray casting', though a quick bit of googling shows that Wolfenstein 3D was ray cast but Doom was not. (Instead, Doom used a BSP-based technique that I can't find a pithy name for.)
What prompted me to write the article was running across several tech news articles about the RTX hardware declaring ray tracing to be a new technique, the real-time part aside. My purpose in writing the piece was to clarify that ray tracing is not new, but real-time ray tracing (generally) is.
I'd say that it is rather obvious that ray tracing is not a new thing, since it simulates how light physically behaves.
I consider 3D rendering a spectrum: rasterization requires little computation but has little to do with physics; ray tracing requires a lot of computation and has everything to do with physics. Somewhere in between are hybrid methods: rasterization with ray tracing components added to it, or ray tracing with approximations.
For instance, pure rasterization cannot do shadows. They are approximated by rendering the scene from the viewpoint of a light and testing the rasterized scene for occlusions casting shadows. And the other way around: real-time ray tracing cannot compute all indirect lighting paths; only a subset is considered, at the cost of e.g. variance.
It's much simpler than that: Both rasterisation and ray tracing are methods to solve visibility. The main difference is that one answers the question "given a primitive, what pixels does it overlap?", the other "given a pixel, what primitives does it overlap?"
Light transport, shading, shadowing are all just implemented on top and not a direct result of the visibility calculation.
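A toy sketch of that duality (my own illustration, with made-up names, not anyone's actual renderer): the same coverage test answers both questions, and only the loop nesting differs. The per-pixel "rays" here are just 2D pixel centers standing in for camera rays.

```python
# Illustrative only: one shared point-in-triangle test, driven two ways.

def point_in_triangle(px, py, tri):
    """Edge-function coverage test: inside if all three edge functions agree in sign."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = (px - x0) * (y1 - y0) - (py - y0) * (x1 - x0)
    e1 = (px - x1) * (y2 - y1) - (py - y1) * (x2 - x1)
    e2 = (px - x2) * (y0 - y2) - (py - y2) * (x0 - x2)
    return (e0 <= 0 and e1 <= 0 and e2 <= 0) or (e0 >= 0 and e1 >= 0 and e2 >= 0)

def rasterize(tris, w, h):
    """Rasterization order: given a primitive, which pixels does it overlap?"""
    covered = set()
    for tri in tris:                      # outer loop over primitives
        for y in range(h):
            for x in range(w):
                if point_in_triangle(x + 0.5, y + 0.5, tri):
                    covered.add((x, y))
    return covered

def raycast(tris, w, h):
    """Ray-casting order: given a pixel, which primitives does it overlap?"""
    covered = set()
    for y in range(h):                    # outer loop over pixels
        for x in range(w):
            if any(point_in_triangle(x + 0.5, y + 0.5, t) for t in tris):
                covered.add((x, y))
    return covered
```

Both produce the identical set of covered pixels; shading, shadows and the rest are layered on top of that visibility answer.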
> For instance, pure rasterization cannot do shadows.
Well... the technique that you describe (shadow mapping) is actually pure rasterization; it just requires more than one rasterization pass. This also ignores the fact that there are other techniques for getting shadows in rasterizing renderers (stencil volumes and other stuff that is rather considered historical today).
I get your point that rasterization doesn't support shadows "naturally" like ray tracing does, but in my opinion your wording and the example are rather unfortunate. The same goes for reflections. I would say SSS or caustics are probably better examples, since they are really only done with techniques based on ray tracing.
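For readers who haven't met the technique being discussed, here is a deliberately tiny 1D sketch of the two-pass shadow-mapping idea (illustrative names and a hand-picked bias, not production code): pass one "rasterizes" nearest depth from the light's point of view, pass two compares each shaded point's light-space depth against it.

```python
def build_shadow_map(occluders, n_texels):
    """Pass 1: store the nearest occluder depth per texel, as seen from the light."""
    depth = [float("inf")] * n_texels
    for texel, d in occluders:          # (texel index, depth from the light)
        depth[texel] = min(depth[texel], d)
    return depth

def in_shadow(shadow_map, texel, point_depth, bias=1e-3):
    """Pass 2: a point is shadowed if something sits closer to the light than it does.
    The small bias avoids self-shadowing from depth quantization ("shadow acne")."""
    return point_depth > shadow_map[texel] + bias
```

Both passes are ordinary rasterization-style loops, which is the point being made above: the shadow comes from an extra pass, not from the visibility algorithm itself.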
Rasterization has just as much to do with physics as raytracing. They can both get you a correct solution to the rendering equation (i.e. be physically correct) if you wait long enough, or get you an approximation of it in a reasonable time scale.
For example if all you need are shadows cast from a point light source, a rasterization technique like stencil shadows will give you the same exact result as simple raytracing, and neither will be physically correct.
It is just that depending on your rendering time budget, some things are better done with rasterization and others are better done with raytracing. Real-time engines with simplified models typically work better with rasterization, while precalculated scenes with more realistic models work better with raytracing.
Rasterization and raytracing are formally equivalent in a sense. You should be able to algebraically rearrange ray/triangle intersection tests performed in raytracing to get Pineda rasterization. So I don't really see one as more physical than the other. Rather the difference is that rasterization starts with each triangle and determines which rays intersect it, while raytracing starts with each ray and determines which triangles intersect it.
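For the per-ray half of that comparison, here is a minimal Moller-Trumbore ray/triangle intersection sketch (my own tuple-math version, illustrative only): the signed determinants it computes play the same role as Pineda's per-pixel edge functions, which is one way to see the formal equivalence.

```python
def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance t, or None on a miss."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle's plane
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                 # distance along the ray
    return t if t > eps else None
```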
Historically, rasterization has been a way of putting triangles onto the screen, maybe with a Z-buffer to determine visibility. It's basically an image space idea, with things like Gouraud shading happening in image space, and though you could put it to use in calculating shadow volumes or shadow maps, it doesn't implicitly deal with light transport. That's the first difference.
Ray tracing, on the other hand, has always been about (forward or backward) rays of light propagating through object space. It wasn't about light transport in the early days (just visibility and shading, based on simple models like Phong) but it is very well-suited to modelling transport, because it addresses the notion of fully-spatial rays in object space.
Writing a physically based (light transport) renderer which was internally based purely on rasterization to rectangular images would be an odd choice, partly because many of the intermediate images would have to somehow be parameterized to represent locations on a hemisphere, etc.
I'm open to correction on this, but rasterization algorithms are really tied to projections onto a rectilinear grid, orthographic or perspective. Ray-tracing doesn't need to assume/know about this raster grid idea and as a result can be used with other geometries. This makes it strictly more powerful than rasterization. This kind of thing, for example: https://www.glassner.com/computer-graphics/graphics-research... is a very bad fit for polygon rasterizers because each triangle is going to be warped in image space.
Hmmm, yes, though they do say "In this paper we focus on primary (camera) rays, i.e. rays with a common origin or parallel rays, because only these are also covered by rasterization. We consider secondary rays and efficient global illumination algorithms, such as path tracing or photon mapping, as orthogonal to our approach."
Just what that "orthogonal" means is a bit mysterious, but their project seems to be to generalize rasterization further than they've got in this paper: "we aim for further generalization, in particular, a parameterization which allows for incremental computation, not only for the ray direction, but also the ray origin"
Even ray tracing does not capture all the physics of electromagnetism and only works at the level of geometrical optics: any effect, like diffraction or iridescence, that arises due to the wave-like nature of light still needs to be implemented in an ad-hoc way in a ray tracing algorithm. But fully simulating Maxwell's equations (or QFT) to keep track of those minor effects would be insanely expensive.
It's physically based, it's just not a complete simulation of all the physics involved. It's at least as physically based as most rigid body, cloth or fluid physics simulation.
Radiosity would qualify better as `based on physics` than ray tracing, but can't do things like mirrors. I could live with classifying ray tracing as `geometry based`, like Newton studied mirrors.
(You could do something that uses ray tracing to determine what's visible and what isn't and radiosity to determine the colour, but that's an entirely different story).
All rendering is an attempt to solve the rendering equation which is very much physically based. Whitted ray tracing is doing a lot of simplification but path tracing (which is closer to what people are talking about doing in real time with new hardware) is a pretty principled attempt to solve the rendering equation via Monte Carlo methods. I don't see how that's not physically based. The definitive book on this stuff is literally called "Physically Based Rendering" https://pbrt.org/
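As a tiny taste of that Monte Carlo approach, here is the classic "white furnace" sanity check (my own sketch, assuming uniform hemisphere sampling with pdf 1/(2&pi;)): it estimates the hemisphere integral of a perfectly white Lambertian BRDF, cos(&theta;)/&pi;, whose exact value is 1, so the estimator should converge to 1.

```python
import random

def estimate_white_furnace(n=200_000, seed=1):
    """Monte Carlo estimate of the hemisphere integral of cos(theta)/pi (exact: 1).

    With uniform hemisphere sampling the pdf is 1/(2*pi), so each sample
    contributes f/pdf = (cos_theta/pi) / (1/(2*pi)) = 2*cos_theta. For a
    uniformly sampled direction on the unit hemisphere, cos_theta is simply
    the z coordinate, which is uniform in [0, 1].
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()        # z of a uniformly sampled hemisphere direction
        total += 2.0 * cos_theta        # f(omega) / pdf(omega)
    return total / n
```

Path tracing is this same estimator applied recursively at every bounce, with the sampling strategy (importance sampling, next-event estimation) chosen to shrink the variance.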
Here you're using geometrical optics, which models light as a narrow beam (ray) idealized as a line. It all becomes simple vector math from there onwards.
However, physics has known since the end of the 18th century that light is a wave.
Where in day-to-day situations, like walking around outside or in a building do you see diffractive phenomena where a ray approximation breaks down far enough for you to notice?
Usually all of that is smoothed over by light sources being extended sources, not points, so the interference contrast is lost by infinitely many interference patterns being overlaid incoherently. Also, almost all light sources (except for lasers) have microseconds of coherent emission, so the pattern changes so fast it blurs into a regular blurry edge of shadow.
I can only think of some very special situations where some blinds select a very narrow angular range of sunlight and then you see interference fringes in the shadow.
Or when you look into a puddle with an oil film or at some sort of diffraction grating or holographic film (which can be predicted with ray-based methods, like Wigner-distribution based ray-tracing, though that still comes with some error at large angles).
Even in laser optics, 95% of the optics design is done with geometrical optics methods, because the rays you use can be related to the phase profile of the radiation in the system. You can then integrate (with rays) the diffraction pattern (but not as well in the shadow of apertures ofc).
I mean, you're arguing a term that's used in physics isn't physical. Raycasting is often used when solving EM equations.
Are you purporting that photon streams don't follow a ray? And when they interact with a surface they don't obey Snell's law? And when you look at the interface between two media, a percentage of the intensity is transmitted and the remainder is reflected (the ratio of which is determined by the angle of incidence)?
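Those two statements are directly computable; here is a small sketch (function names are mine) of Snell's law plus the unpolarized Fresnel equations, which give exactly that transmitted/reflected split as a function of incidence angle:

```python
import math

def snell(theta_i, n1, n2):
    """Refraction angle from Snell's law: n1*sin(theta_i) = n2*sin(theta_t).
    Returns None on total internal reflection."""
    s = n1 / n2 * math.sin(theta_i)
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def fresnel_reflectance(theta_i, n1, n2):
    """Unpolarized Fresnel reflectance: the fraction of intensity reflected
    at the interface; the rest (1 - R) is transmitted."""
    theta_t = snell(theta_i, n1, n2)
    if theta_t is None:
        return 1.0                       # total internal reflection
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)   # s-polarized amplitude
    rp = (n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)   # p-polarized amplitude
    return 0.5 * (rs * rs + rp * rp)
```

At normal incidence on glass (n = 1.5) this gives the familiar ~4% reflectance, and past the critical angle going glass-to-air it reports total internal reflection; these are the same formulas a Whitted-style ray tracer evaluates at every dielectric hit.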
Would you say the same thing about rigid body physics simulations? There's no such thing as an idealized rigid body in reality, but such simulations use lots of useful approximations that are also used in "real" physics.
Radeon Rays is more comparable to OptiX (2009), in that both are GPU accelerated ray tracing libraries. RTX (2018) differs in that it uses special purpose ray tracing units on the GPU, rather than running only on the SMs/CUs.
Dedicated hardware to accelerate BVH traversal and ray-triangle intersections. Previous GPU accelerated ray tracing implementations have just used the existing programmable shader hardware for these, but that hardware isn't ideally suited to the task.
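For context, the BVH traversal being accelerated is dominated by ray-versus-box tests like the classic "slab" test; a hedged sketch (illustrative only, assuming precomputed reciprocal direction components and ignoring the axis-parallel divide-by-zero case):

```python
def ray_hits_aabb(orig, inv_dir, lo, hi):
    """Slab test: intersect the ray against each axis-aligned pair of planes
    ("slabs") and check that the three parameter intervals overlap.
    inv_dir holds 1/d per axis, precomputed once per ray."""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (lo[axis] - orig[axis]) * inv_dir[axis]
        t2 = (hi[axis] - orig[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))    # latest entry across the slabs
        tmax = min(tmax, max(t1, t2))    # earliest exit across the slabs
    return tmin <= tmax
```

A BVH traversal runs millions of these tests plus triangle intersections per frame, which is why fixed-function units beat running the same arithmetic on the general-purpose shader cores.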
So in that case is this going to be a "G-Sync"/"Freesync" or "Gameworks"/"No, I like my framerates" thing? If developers begin supporting the RTX hardware is there a way AMD can get on board, or is this another one of NVidia's patented industry hurting moves?
I can't answer the question, but I'm curious how you think NVIDIA should have introduced real time raytracing to the industry? They can make it proprietary at the API level, or proprietary at the ISA level, but at some point you're stuck with the fact that their hardware can do something AMD's cannot.
Is it industry hurting to make the GPU they sell have any exposed new capability, other than making the same old thing slightly faster?
NVidia actively denies others the ability to interact with their "G-Sync" platform - an AMD card will never be able to take advantage of a G-Sync monitor. I'm getting the answer that this is a Vulkan/DirectX API, and assuming that's the case, that's great. But if it were a proprietary API (like Gameworks) that could only possibly run on proprietary hardware, I'd have a problem. As long as everyone can leverage those API calls at the same time, I'm perfectly happy.
As far as I know, both Vulkan (with extensions, for NVIDIA VK_NV_ray_tracing) and DirectX (with DXR) have ray tracing capabilities in their standard.
There shouldn't be any patent that denies AMD the ability to provide an implementation in their drivers.
Developers support RTX via DirectX's DXR raytracing API. AMD can put whatever hardware they want on their cards to accelerate those calls and implement the appropriate drivers.
For games most will access the functionality via the new DirectX ray tracing APIs (or perhaps the Vulkan equivalent in the future) so there's nothing stopping AMD or Intel from adding hardware to accelerate those APIs too.
And if someone from the game development community could explain: how hard is it to do RTX or DirectX RT on current games? Would companies now have to do two sets of graphics, design, and code paths? One that does it with DirectX RT, the other doing it with all the rasterisation techniques such as shadow mapping, etc.
Basically I am interested in the cost of such a DirectX RT implementation. And what sort of time frame could we see these on the market?
It's a big step; both assets and code paths have to change to make full use of it. It's possible to change some effects to work in a forwards-compatible way, but this isn't a free lunch. As a traditional GPU, the RTX is only slightly better than the existing 10-series.
Since AMD continues to hold console developers with their semi-custom chips, this is only likely to impact a limited set of titles in the near future.
Yes, but it means another set of paths to test and optimise. Another set of graphics design assets for RT. As if current AAA title budgets are not expensive enough. Instead of making quality games cheaper, we are making graphics-intensive games even more expensive to build. And I don't think it is healthy at all.
Gamers have long been clamoring about how they can't wait until in-game graphics match those of pre-rendered cinematics (that said, a lot of cinematics these days are no longer pre-rendered). Ray tracing is such an expensive operation. According to this Quora answer, it took 29 hours to render a single frame of "Monsters University": https://www.quora.com/How-long-does-it-take-to-render-a-Pixa...
We're probably nowhere close to getting real-time Pixar-quality rendering in our games right now, but we've definitely made leaps and bounds over the last few decades.
That's because it's a moving goalpost. Pixar keeps increasing the quality to take advantage of faster hardware and better algorithms.
We'll never be close to getting today's real-time Pixar-quality until Pixar rendering is good enough that they stop meaningfully improving it. What we can have is yesterday's Pixar quality rendering in-game.
That should be a new goalpost: can it render a Pixar film in real time? E.g. do we have a Toy Story capable card yet?
(Note: I know offline rendering and the real-time raster rendering we use on GPUs are completely different methods. But there is a point where the raster trickery can catch up and match the offline stuff.)
Today's GPUs are probably much closer to what Renderman was doing back when Toy Story was made than you think. They used an algorithm called REYES, which has nothing to do with raytracing and in fact can only barely be made to combine with ray tracing at all [1]. It was completely thrown out of Renderman only in the last couple of years for that reason.
REYES really is an early take on rasterization with tessellation, designed for hardware with extreme memory constraints. Although the actual tessellation algorithm works differently from GPU hardware tessellation, the basic idea of tessellating dynamically to the required level of detail for the current frame carried over into the hardware.
> Gamers have long been clamoring about how they can't wait until in-game graphics match those of pre-rendered cinematics
Well, they do, you just have to look back X years depending on your criteria. Look at modern in-game graphics vs. PSX cinematics, for instance. It's not even close.
I'm an Amiga fan, but in what way is the Juggler demo relevant here at all? Ray tracing and storing the 2D result was surely done much earlier on workstations from SGI, Sun, etc.
I wrote this article. The Juggler was the first time I'd seen raytracing on a computer, and watching these pre-rendered animations on the Amiga was one of the thrills of the system, as it surpassed all consumer micros of the day in on-screen colors. The Amiga's Hold-And-Modify (HAM) mode could render the full 4096-color palette on-screen and was very well suited for displaying ray traced scenes with their realistic coloring and shading. It could do so at a resolution of 320x400 (4:3 aspect) and at a sufficient framerate, given the flexibility and power of the Amiga's blitter and memory architecture. As such, it seemed worth a mention here.
i'd say it's somewhat relevant in that it popularized the idea to a new generation of folks (like me) who'd never heard of it before. spent hours with pov-ray as a kid (well... minutes, then hours waiting). didn't know at the time it wasn't 'new' (it was 'new' to me), but also AFAIK there were no other moderately affordable home computing systems where this was possible. maybe it was a thing on ms-dos clones of the late 80s and I just missed it there?
The article concludes with a succinct TL;DR, but it's worth a read if you're at all interested.
So, ray tracing. It’s a rendering technique that has been around for over 45 years. It’s nothing new. Finally seeing the benefits of this technology enhance the environments in our games and VR worlds — in real time — thanks to a new API and dedicated consumer hardware, that’s the New Thing.
if by "new" you mean well over a decade old...
The post does cover the history but it is a response to coverage like this: "...a new graphics rendering technique called ray tracing."
> I would say SSS or caustics probably are better examples since they are really only done with techniques based on ray tracing.
Games these days implement SSS in screen space using rasterisation, no rays tracing either.
I think ray tracing is at best an ad-hoc model for light that produces nice results, but physically based it isn't.
I mean, unless you're expecting them to calculate absorption/re-emission at every bounce...
It still seems pretty "physical" to me.
> However, physics has known since the end of the 18th century that light is a wave.
https://en.wikipedia.org/wiki/Young%27s_interference_experim...
It's not that the model breaks down only in extreme conditions (like Newton's laws of mechanics), but in day-to-day situations as well.
I think that's the essence of my qualm.
Physicists still rely on Snell's law. Optics courses still include path tracing when studying refraction and dielectrics.
Excluding the particle behavior of light just because the wave nature exists is not something a physicist would do.
> Is it industry hurting to make the GPU they sell have any exposed new capability, other than making the same old thing slightly faster?
It is up to the other vendors to produce their own hardware and respective OpenGL/Vulkan extensions.
And maybe Khronos eventually gets to standardize them.
[1] https://en.m.wikipedia.org/wiki/Reyes_rendering
The only tip-off I had that (some of) Nier Automata's cutscenes are pre-rendered was the drop to 30fps...