Ray tracing is so time-consuming because of its intersection calculations. Since each ray must be checked against all objects, the running time of a naive ray tracer (one with no speedup techniques) is proportional to the number of rays times the number of objects in the scene. Each intersection test requires from a few (5-7) to many (15-20) floating point (fp) operations. Thus for a scene with 100 objects, rendered at a spatial resolution of 512 x 512 (about 250,000 pixels) and assuming 10 fp operations per object test, there are about 250,000 x 100 x 10 = 250,000,000 fp operations. This is just for the primary rays (from the eye through the image plane), with no anti-aliasing. Clearly this poses a computational problem.
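This back-of-the-envelope arithmetic can be checked in a few lines (a sketch; the resolution, object count, and per-test cost are the figures assumed above):

```python
# Cost of naive ray tracing, primary rays only, every ray tested
# against every object, using the figures from the text.
width, height = 512, 512       # image resolution
num_objects = 100              # objects in the scene
ops_per_test = 10              # assumed fp operations per ray-object test

primary_rays = width * height                      # 262,144 rays
fp_ops = primary_rays * num_objects * ops_per_test
print(f"{fp_ops:,} fp operations")                 # 262,144,000
```

The exact count is 262,144,000, which the text rounds to 250 million.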

There are several approaches to speeding up the computations:

1. Use faster machines.
2. Use specialized hardware, especially parallel processors.
3. Speed up computations by using more efficient algorithms.
4. Reduce the number of ray-object intersection computations.

Adaptive depth control means that we stop generating reflected/transmitted rays when the computed intensity contribution falls below a certain threshold. You must always set a maximum depth, or else the program could generate an infinite number of rays; but it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this, the ray tracer must compute and carry along the product of the global and reflection coefficients as the rays are traced.

Example: let Kr = 0.5 for a set of surfaces. Then the maximum contribution from the first surface is 0.5; from the reflection off the second, 0.5 * 0.5 = 0.25; the third, 0.25 * 0.5 = 0.125; the fourth, 0.125 * 0.5 = 0.0625; the fifth, 0.0625 * 0.5 = 0.03125; and so on. In addition we might implement a distance attenuation factor such as 1/D^2, which would further decrease the intensity contribution.

For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of how effective this is, Hall & Greenberg found that even for a very reflective scene, using this technique with a maximum depth of 15 resulted in an average ray-tree depth of 1.7.
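The idea can be sketched as a small recursive function (illustrative only: the cutoff value, the per-surface reflection coefficients, and the simplified interface are assumptions, not from any particular ray tracer):

```python
# Sketch of adaptive depth control: spawn a reflected ray only while the
# running product of reflection coefficients stays above a cutoff, with a
# hard maximum depth as a safety net against infinite recursion.
MAX_DEPTH = 15      # absolute limit on recursion depth
CUTOFF = 0.01       # minimum contribution still worth tracing (assumed)

def trace(depth, weight, kr_along_path):
    """Return the ray-tree depth reached.  kr_along_path lists the
    reflection coefficient of each surface hit along the ray path."""
    kr = kr_along_path[depth] if depth < len(kr_along_path) else 0.0
    new_weight = weight * kr            # product of Kr values so far
    if depth >= MAX_DEPTH or new_weight <= CUTOFF:
        return depth                    # prune: stop spawning rays here
    return trace(depth + 1, new_weight, kr_along_path)

# Dull surfaces: the contribution product prunes the tree after 2 bounces.
print(trace(0, 1.0, [0.2] * 20))   # 2
# Mirror-like surfaces: the hard depth limit takes over.
print(trace(0, 1.0, [0.99] * 20))  # 15
```

Note that only the depth bookkeeping is shown; a real tracer would also accumulate shading at each hit.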

With hierarchical bounding volumes, we enclose groups of objects in nested bounding volumes and first test a ray for intersection with the bounding volume; only if there is an intersection do we test it against the objects enclosed by the volume.

Bounding volumes should be easy to test for intersection, for example a sphere or a box (slab). The best bounding volume is determined by the shape of the underlying object or objects. For example, if the objects are long and thin, a sphere encloses mainly empty space and a box is much better. Boxes are also easier to combine into hierarchical bounding volumes.
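The standard "slab" test for an axis-aligned box illustrates how cheap such a test can be: intersect the ray with each pair of parallel planes and keep the overlap of the three parameter intervals. This is a minimal sketch; the function name and tuple-based interface are assumptions for illustration.

```python
# Slab test: a ray hits an axis-aligned box iff the three per-axis
# parameter intervals [t0, t1] have a non-empty, non-negative overlap.
def hits_box(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:          # parallel to and outside the slab
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0               # order the slab entry/exit points
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                # intervals no longer overlap
            return False
    return t_far >= 0.0                   # box not entirely behind the ray

print(hits_box((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(hits_box((0, 0, -5), (0, 1, 0), (-1, -1, -1), (1, 1, 1)))  # False
```

Each axis costs only a couple of fp operations, far less than a typical object intersection test.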

Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computation time from a linear dependence on the number of objects to something between a linear and a logarithmic dependence. This is because, in a perfect case, each intersection test would divide the possibilities by two, giving a binary-tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
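A toy traversal shows where the pruning comes from (a sketch: real trees hold 3-D volumes, whereas here 1-D intervals stand in for bounding volumes, and the node layout is an assumption):

```python
# Hierarchical traversal: test the bounding volume first and descend only
# on a hit, so a miss prunes an entire subtree of object tests.
class Node:
    def __init__(self, lo, hi, left=None, right=None, objects=None):
        self.lo, self.hi = lo, hi          # 1-D "bounding volume" interval
        self.left, self.right = left, right
        self.objects = objects             # non-None only at leaf nodes

def intersect(node, x, tests):
    """Collect objects whose interval contains x; count volume tests."""
    tests[0] += 1
    if not (node.lo <= x <= node.hi):      # miss: prune the whole subtree
        return []
    if node.objects is not None:           # leaf: test the actual objects
        return [o for o in node.objects if o[0] <= x <= o[1]]
    return intersect(node.left, x, tests) + intersect(node.right, x, tests)

# Four objects as intervals, grouped into a two-level hierarchy.
leaf_a = Node(0, 2, objects=[(0, 1), (1, 2)])
leaf_b = Node(8, 10, objects=[(8, 9), (9, 10)])
root = Node(0, 10, left=leaf_a, right=leaf_b)

tests = [0]
print(intersect(root, 1.5, tests))   # [(1, 2)]
print(tests[0])                      # 3 volume tests; leaf_b's objects never tested
```

With a balanced tree, the number of volume tests grows roughly with the logarithm of the object count rather than linearly.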

Kay & Kajiya give a list of properties for hierarchical bounding volumes:

1. Subtrees should contain objects that are near each other, and the further down the tree, the closer together the objects should be.

2. The volume of each node should be minimal.

3. The sum of the volumes of all bounding volumes should be minimal.

4. Greater attention should be placed on the nodes near the root, since pruning a branch near the root removes more potential objects than pruning one farther down the tree.

5. The time spent constructing the hierarchy should be much less than the time saved by using it.

As mentioned in the section on adaptive depth control, with that technique the average depth of the ray tree may be less than two. This means that a large percentage of the work lies in finding the first intersection. Weghorst has suggested using a modified z-buffer algorithm to determine the first hit: the scene is pre-processed, with the resulting z-buffer storing pointers to the objects intersected at each pixel, and the ray tracing then proceeds from that point.
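A minimal sketch of such an "item buffer" pass follows. The object records (an id, a rectangular pixel span, and a single depth) are illustrative stand-ins for real rasterization, not Weghorst's actual method:

```python
# First-hit speedup sketch: a z-buffer style pass stores, per pixel,
# which object is visible there, so each primary ray starts at a known
# hit instead of searching every object in the scene.
def build_item_buffer(width, height, objects):
    buffer = [[None] * width for _ in range(height)]          # object ids
    zbuf = [[float("inf")] * width for _ in range(height)]    # depths
    for oid, (x0, x1, y0, y1), z in objects:   # rectangle span at depth z
        for y in range(y0, y1):
            for x in range(x0, x1):
                if z < zbuf[y][x]:             # keep the nearest object
                    zbuf[y][x] = z
                    buffer[y][x] = oid
    return buffer

objects = [("far", (0, 4, 0, 4), 10.0), ("near", (1, 3, 1, 3), 5.0)]
buf = build_item_buffer(4, 4, objects)
print(buf[2][2])   # 'near' - the primary ray at this pixel skips the search
print(buf[0][0])   # 'far'
```

Secondary (reflected/transmitted) rays still require ordinary intersection tests; only the primary rays benefit.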

He showed that incorporating the above three techniques (adaptive depth control, hierarchical bounding volumes, and first-hit speedup) approximately halved the intersection computation time for complex scenes. Note that he used spheres as the bounding volumes. In general the computational improvement was inversely dependent on scene complexity.
