How does a ray tracer work?

Ray tracing is the generic name for a family of algorithms designed to generate photorealistic renderings of geometric scenes. Realism is prioritized over raw speed whenever the two conflict. This goal is achieved by carefully emulating the behavior of light and by offering a rich palette of geometric primitives for describing scenes.

It's useful to compare ray tracing with the hardware-accelerated rasterization techniques used in games: a rasterizer projects triangles onto the screen and approximates lighting per pixel, while a ray tracer simulates the paths that light rays follow through the scene.

Visual rays, light rays, who cares?

The basic idea behind ray tracing is very simple. Ideally, we would simulate the behavior of light by tracing photons emitted from the light sources, but that is computationally infeasible: most light rays never reach the eye, or the camera, so there is no point in tracking them. Instead, ray tracing follows visual rays from the eye into the scene, inverting what happens in the physical world.
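This inversion starts at the camera: one visual ray is generated per pixel and sent into the scene. A minimal sketch of that generation step, assuming a pinhole camera at the origin looking down the negative z axis (these names are illustrative, not XSight's actual API):

```python
def primary_ray(x, y, width, height, fov_scale=1.0):
    """Return (origin, direction) for the visual ray through pixel (x, y).

    The eye sits at the origin looking down -z; the image plane lies at
    z = -1.  `fov_scale` widens or narrows the field of view.
    """
    # Map pixel coordinates to [-1, 1], flipping y so the image is upright,
    # and correct the horizontal axis for the aspect ratio.
    u = (2.0 * (x + 0.5) / width - 1.0) * fov_scale * width / height
    v = (1.0 - 2.0 * (y + 0.5) / height) * fov_scale
    d = (u, v, -1.0)
    length = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    direction = (d[0] / length, d[1] / length, d[2] / length)
    return (0.0, 0.0, 0.0), direction
```

The ray for the central pixel points straight down the viewing axis; rays for the border pixels fan out according to the field of view.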

The image below shows the simplest interaction in ray tracing:

First, a visual ray is emitted to probe the scene. If this ray doesn't hit any object, the returned color is taken from the scene background. When the visual ray hits a surface, more rays are sent, this time from the hit point to each light in the scene, to find out whether any object obstructs the path between the hit point and the light source. If so, the hit point lies in a shadow zone. The color returned by the algorithm is computed with a formula that depends on the total illumination at the hit point, the color of the object at that point, an ambient light factor, and so on.
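The steps above can be sketched as follows, assuming a scene made only of spheres and point lights, with grey-scale colors; the structures and function names are illustrative, not XSight's actual types:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the smallest positive ray parameter t, or None on a miss."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed to be unit length
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def shade(origin, direction, spheres, lights, background, ambient=0.1):
    """Trace one visual ray and return a grey-scale radiance value."""
    # Find the closest intersection along the visual ray.
    best = None
    for center, radius, albedo in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, radius, albedo)
    if best is None:
        return background               # the ray escaped the scene
    t, center, radius, albedo = best
    point = [origin[i] + t * direction[i] for i in range(3)]
    normal = [(point[i] - center[i]) / radius for i in range(3)]
    radiance = ambient * albedo
    for light in lights:
        to_light = [light[i] - point[i] for i in range(3)]
        dist = math.sqrt(sum(v * v for v in to_light))
        to_light = [v / dist for v in to_light]
        # Shadow ray: is any sphere between the hit point and the light?
        blocked = any(
            (s := hit_sphere(point, to_light, c, r)) is not None and s < dist
            for c, r, _ in spheres)
        if not blocked:
            cos_term = sum(normal[i] * to_light[i] for i in range(3))
            radiance += albedo * max(0.0, cos_term)
    return radiance
```

A ray that misses everything returns the background value; a ray that hits a sphere accumulates the ambient term plus one diffuse term per unoccluded light.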

Reflection and refraction

Reality is not so simple: we must also consider reflection, transparency and refraction. In our first step, we treated every surface as a perfect Lambertian surface: one that scatters the light it receives in all directions, so that its apparent brightness does not depend on the viewing angle. Now we must add support for reflective surfaces:

When a visual ray hits the surface of a reflective material, the ray is modified: its new origin is the hit point, and its new direction is chosen so that the angle of reflection relative to the surface normal equals the angle of incidence. This new ray is traced recursively, and it returns a color value. The reflected value is multiplied by an attenuation factor and added to the radiance already computed at the hit point. In the image above, the pixel corresponding to the visual ray carries color information from the blue hit point on the sphere, plus some slightly attenuated information from the green rod above the sphere.
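The mirror bounce described above is usually computed with the standard formula r = d - 2(d·n)n, where d is the incoming direction and n the surface normal. A minimal sketch (the helper name is illustrative, not XSight's API):

```python
def reflect(d, n):
    """Reflect unit direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

# The recursive combination then looks like:
#   radiance = local_radiance + attenuation * trace(hit_point, reflect(d, n))
# where `trace` is the recursive ray tracing routine.
```

A ray arriving head-on bounces straight back, and a grazing ray keeps its tangential component while its normal component flips sign.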

Finally, we have transparent materials, such as the prism in the following image:

In this case, the visual ray branches in two directions, and the contributions from both branches must be added to obtain the final radiance. Note that, in this general case, we need a recursive implementation... or an equivalent iterative implementation using an explicit stack.
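The refracted branch is governed by Snell's law; when the ray tries to leave a dense medium at a steep enough angle, no refracted direction exists and only the reflected branch survives (total internal reflection). A hedged sketch, with `eta` standing for the ratio of refractive indices n1/n2:

```python
import math

def refract(d, n, eta):
    """Refract unit direction d through unit normal n using Snell's law.

    Returns the refracted unit direction, or None on total internal
    reflection.  `eta` is the ratio of refractive indices n1/n2.
    """
    cos_i = -sum(d[i] * n[i] for i in range(3))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                     # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * d[i] + (eta * cos_i - cos_t) * n[i] for i in range(3))

# The branching step then adds both recursive contributions:
#   radiance = kr * trace(point, reflected) + kt * trace(point, refracted)
# where kr and kt are the material's reflection and transmission factors.
```

At normal incidence the ray passes straight through unchanged, whatever the index ratio; past the critical angle the function reports total internal reflection instead of a direction.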

Attitudes regarding ray tracing

There are at least two equally valid attitudes regarding ray tracing. On one hand, there's the never-ending quest for the ultimate photorealism. On the other hand, you could simply regard ray tracers as another useful tool for creating stunning graphics and animations.

This may sound obvious, but it's important to keep in mind when evaluating how useful the addition of a given technique to the ray tracing toolbox could prove. Even when a feature lacks physical realism, you can still use it to create interesting visual effects.

See also

Home | An overview of XSight Ray Tracer | The limits of ray tracing | Why XSight RT? | Small Instantiation Language Reference