Motion Blur

Motion blur simulates what we see when we view a rapidly moving object: because of persistence of vision, the object appears blurred. Adding motion blur makes computer animation appear more realistic. It can be thought of as adding back some of the time dependence expressed in the Rendering Equation.

If we compute an image, e.g. a single frame of an animation, without motion blur, the process is as follows:

  1. Compute the updated positions of all objects in the scene, including the camera, at time t1 = t0 + dt
  2. Render the scene at time t1
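
In code, this amounts to advancing the simulation once and rendering a single instantaneous exposure. The sketch below is only illustrative; `update_scene` and `render` are hypothetical stand-ins for the animation system and the renderer, not a real API.

    # Minimal sketch of rendering one frame without motion blur.
    # update_scene and render are hypothetical stand-ins for the
    # animation system and the renderer, not a real API.
    def render_frame(scene, t0, dt, update_scene, render):
        t1 = t0 + dt              # advance all objects (and the camera) to t1
        update_scene(scene, t1)
        return render(scene)      # a single instantaneous exposure at t1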

When we include motion blur, this process is approximated as follows:

  1. Compute the updated positions of all objects in the scene at time t1 = t0 + dt/2
  2. Render the scene
  3. Compute the updated positions of all objects in the scene at time t1 = t0 + dt
  4. Render the scene
  5. Compute the updated positions of all objects in the scene at time t1 = t0 + 3*dt/2
  6. Render the scene
  7. Average the rendered scenes into one

Note that the number and size of the time increments are variable, and the system may render only those parts of the image that are changing. Motion blur requires extra computation time.
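
The following is a minimal sketch of the averaging approach described above, using the same hypothetical `update_scene` and `render` helpers as before and assuming `render` returns an image (e.g. an array of pixel values) that can be summed and divided.

    # Sketch of the motion-blur approximation: render the scene at several
    # times spaced across the frame interval and average the results.
    # update_scene and render are hypothetical stand-ins; render is assumed
    # to return an array of pixel values (e.g. a NumPy array).
    def render_frame_motion_blur(scene, t0, dt, update_scene, render):
        # Sample times as in the steps above: dt/2, dt, and 3*dt/2 past t0.
        # As noted, the number and spacing of the samples can vary.
        sample_times = [t0 + dt / 2, t0 + dt, t0 + 3 * dt / 2]
        images = []
        for t in sample_times:
            update_scene(scene, t)
            images.append(render(scene))
        return sum(images) / len(images)   # average into one blurred frame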

Below are three images rendered in the Pixar RenderMan system: one without motion blur for reference, and two with motion blur.

The first image is of a partially closed sphere, without motion blur.

The second image is of the same sphere closing, with motion blur.

The third image is of the sphere moving, with motion blur.


Last changed January 8, 1996, G. Scott Owen, owen@siggraph.org