This section addresses the representation of object positions and orientations and how these representations can be manipulated to produce animation. The main intent of this section is to define terminology and review the graphics display procedure.
The basic rendering pipeline is usually diagrammed as a series of transformations through several identifiable spaces. An object is defined in its own space, its object space. Objects are transformed into positions and orientations in world space. This is the same space in which light sources are placed as well as the observer or camera. The observer parameters include position, view direction (sometimes specified with a center of interest in which case the view direction is the vector from the observer to the center of interest) and tilt.
Object space and world space are typically right-handed coordinate systems. For the purposes of this discussion, the world space coordinate system has the positive x-axis pointing to the right, the positive y-axis pointing up, and the positive z-axis pointing away from the observer, representing depth. This differs from some application areas in which the x-y plane is the ground plane and z is altitude. It makes no difference which convention is adopted as long as the animator and programmer agree on it.
In preparation for the perspective transformation, objects are usually transformed from world space to eye space, in which the eye is at the origin looking down the positive z-axis in right-handed space. The perspective transformation transforms object definitions from eye space to image space. Visible extents in image space are usually standardized into the minus one to plus one range in x and y, and from zero to one in z. Image space is then scaled and translated into screen space by making the visible ranges in x and y coincide with the display coordinates; z can be left alone. Thus we have:
object space -> world space -> eye space -> image space -> screen space
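The final step of the pipeline above, mapping image space to screen space, can be sketched as a simple scale and translation. This is an illustrative fragment, not taken from any particular system; it assumes the common display convention that pixel y increases downward:

```python
def image_to_screen(x, y, width, height):
    """Map image-space x, y in [-1, 1] to pixel coordinates.

    Assumes pixel y increases downward on the display, a common
    screen convention; z is left unchanged, as noted above.
    """
    sx = (x + 1.0) * 0.5 * width
    sy = (1.0 - y) * 0.5 * height
    return sx, sy
```

For example, on a 640x480 display the image-space origin maps to the pixel at the center of the screen.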
Ray tracing differs from the above sequence of transformations in that the act of tracing rays implicitly accomplishes the perspective transformation. If the rays are constructed in world space based on pixel coordinates, which is not an unreasonable approach, then the progression through spaces for ray tracing reduces to:
object space -> world space -> screen space
In either case, animation is typically produced by modifying the object space to world space transformation of objects over time or, in the case of walk-throughs or fly-bys, by transforming the observer in world space. Object space to world space transformations can be kept in 4x4 transformation matrices. The 4x4 matrix is the smallest matrix that can represent all of the relevant transformations. Because it is square, it has a good chance of having a computable inverse (for the transformations considered here, it always does), and it can be concatenated with other transformation matrices to produce compound transformations that are themselves still 4x4. The 4x4 identity matrix has ones along its diagonal and zeros everywhere else.
As a review, computer graphics typically post-multiplies a 1x4 row matrix, which represents a point, by a 4x4 transformation matrix to produce a transformed 1x4 matrix. Except when performing the perspective transformation, the fourth element of the row point vector is unity and the first three elements are the x, y, and z coordinates of the point. The translation matrix has the x, y, and z translation values as the first three elements of its fourth row. The uniform scale matrix that scales by S is the identity matrix with 1/S as the fourth element of the fourth row. The non-uniform scale matrix is the identity matrix with Sx, Sy, and Sz as the first three elements along the diagonal. Matrices that represent rotations about the principal axes are shown in the following diagram, as is a general rotation represented by a direction cosine matrix. Shearing is merely a rotation followed by a non-uniform scale.
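The row-vector convention just described can be sketched in a few lines of Python. This is a minimal illustration, with matrices stored as row-major lists of lists; the function names are my own:

```python
import math

def mat_mult(A, B):
    """Concatenate two 4x4 matrices; a point sees A's transform first."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(p, M):
    """Post-multiply the 1x4 row point p = [x, y, z, 1] by the 4x4 matrix M."""
    return [sum(p[k] * M[k][j] for k in range(4)) for j in range(4)]

def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    """Translation values are the first three elements of the fourth row."""
    M = identity()
    M[3][0], M[3][1], M[3][2] = tx, ty, tz
    return M

def rotation_y(theta):
    """Rotation about the y-axis, written for the row-vector convention
    (the transpose of the column-vector form)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[  c, 0.0,  -s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [  s, 0.0,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]
```

For example, `transform([1, 2, 3, 1], translation(10, 0, 0))` yields `[11, 2, 3, 1]`.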
Alternatively, a 3x3 rotation matrix and a translation vector can be used to represent the object space to world space transformation. The 4x4 matrix referred to above is simply the 3x3 matrix in the upper left corner of the 4x4, with the transpose of the translation vector as the row beneath it and the column [0 0 0 1] completing the right side of the matrix.
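This embedding can be written directly. A hypothetical helper, where R3 is the 3x3 rotation as a row-major list of lists and t is the translation vector:

```python
def compose_4x4(R3, t):
    """Embed a 3x3 rotation and a translation vector into the 4x4
    row-vector form: R3 in the upper left, t as the fourth row,
    and the column [0 0 0 1] completing the right side."""
    M = [[R3[i][0], R3[i][1], R3[i][2], 0.0] for i in range(3)]
    M.append([float(t[0]), float(t[1]), float(t[2]), 1.0])
    return M
```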
In robotics and some computer graphics texts, most notably the 2nd edition of Foley et al. [Foley], column point matrices are premultiplied by 4x4 transformation matrices. Everything mentioned above still applies; all matrices in this convention are merely the transposes of those in the previous convention, including the vectors.
For the non-ray tracing case, the transformations are of the form:
ObjectTransform = Scale * Rotationx * Rotationy * Rotationz * Translation
World2EyeTransform = Translation-obs * Rotationy * Rotationx * Rotationz * Left2Right
Perspective = perspective divide
Image2ScreenTransform = scale and translate to display coordinates
Once a transformation matrix has been formed for an object, the object is transformed by simply multiplying all of the object's object-space points by the object-to-world-space transformation matrix. When doing animation, an object's points will have to be iteratively transformed over time. However, incremental transformation of world-space points usually leads to the accumulation of roundoff errors. For this reason, it is almost always better to modify the transformation from object to world space and reapply the transformation to the object-space points rather than repeatedly transform the world-space coordinates. To further transform an object which already has a transformation matrix associated with it, one simply forms the new transformation matrix and premultiplies it by the existing transformation matrix to produce a new one. However, roundoff errors can also accumulate when repeatedly modifying a transformation matrix.
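This update pattern can be sketched as follows, assuming a hypothetical per-frame delta rotation about the y-axis; the small helper routines are repeated here so the fragment stands alone:

```python
import math

def mmul(A, B):
    """Concatenate two 4x4 row-major matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def xform(p, M):
    """Post-multiply the row point p = [x, y, z, 1] by M."""
    return [sum(p[k] * M[k][j] for k in range(4)) for j in range(4)]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, -s, 0.0], [0.0, 1.0, 0.0, 0.0],
            [s, 0.0, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

object_points = [[1.0, 0.0, 0.0, 1.0], [0.0, 0.0, 1.0, 1.0]]
delta = rot_y(math.pi / 2.0)          # hypothetical per-frame rotation
M = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
for frame in range(4):
    M = mmul(M, delta)                # existing matrix premultiplies the delta
    # reapply the full transform to the object-space points each frame,
    # rather than incrementally transforming world-space points
    world_points = [xform(p, M) for p in object_points]
```

After four 90-degree steps the composite matrix is back to (nearly) the identity, with only matrix roundoff, never accumulated point roundoff, separating it from it.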
Consider the case of the moon orbiting the earth. For the sake of simplicity, let's assume that the center of the earth is at the origin and that the moon data is initially defined with its center at the origin. There are three approaches that could be taken, and they illustrate various effects of roundoff error.
First, the moon data could be transformed out to its orbit position, say (r, 0, 0). For each frame of animation, we could apply a delta y-axis rotation matrix to the moon's points, where each delta represents the angle it moves in one frame. Roundoff errors will accumulate in the world-space object points. Points which began as coplanar will no longer be coplanar. This can have undesirable effects, especially in display algorithms which linearly interpolate values along a surface.
The second approach is to build a y-axis transformation matrix that will take the object-space points into their current world-space positions. For each frame, we concatenate a delta y-axis transformation matrix with the current transformation matrix and then apply the resultant matrix to the moon's points. Roundoff error will accumulate in the transformation matrix. Over time, the matrix will deviate from representing a rigid transformation. Shearing effects will begin to creep into the transformation and angles will cease to be preserved.
The third approach is to add the delta value to an accumulating angle variable and then build the y-axis rotation matrix from that angle parameter. This is then concatenated with the x-axis translation matrix, and the resultant matrix is applied to the original moon points in object space. In this case, roundoff error will accumulate in the angle variable, and the angle of rotation may begin to deviate from what is desired. This may have undesirable effects when trying to coordinate motions, but the transformation matrix, which is built anew every frame, will not accumulate any errors itself. The transformation will always represent a valid rigid transformation, with planarity and angles being preserved.
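The third approach can be sketched as follows. This is a minimal illustration of the moon example; the orbit radius, frame count, and per-frame delta are arbitrary:

```python
import math

def mmul(A, B):
    """Concatenate two 4x4 row-major matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def xform(p, M):
    """Post-multiply the row point p = [x, y, z, 1] by M."""
    return [sum(p[k] * M[k][j] for k in range(4)) for j in range(4)]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, -s, 0.0], [0.0, 1.0, 0.0, 0.0],
            [s, 0.0, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

def translate_x(r):
    M = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    M[3][0] = r
    return M

r = 10.0                              # orbit radius
delta = math.radians(1.0)             # hypothetical per-frame angle
angle = 0.0
moon_center = [0.0, 0.0, 0.0, 1.0]    # moon data defined about the origin
for frame in range(90):
    angle += delta                    # roundoff accumulates only here
    M = mmul(translate_x(r), rot_y(angle))   # rebuilt fresh every frame
    world = xform(moon_center, M)     # always a valid rigid transform
```

Because the matrix is rebuilt from the angle each frame, the moon's distance from the origin stays exactly r (up to a single evaluation's rounding), no matter how many frames elapse.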
Other considerations for specifying motion, especially representation schemes for orientation, are given in Section 3.x.
The observer is specified by a position, a view direction, and a head tilt. The view direction can be specified either by a specific vector in world space or by a point in world space that represents the observer's center of interest; in the latter case, the view direction is the vector from the observer to the center of interest. The magnitude of the view direction, and therefore the distance of the center of interest from the observer, is of no consequence.
Usually the default orientation of the observer is 'head up'. That is, the observer's up vector lies in the plane formed by the view vector and the y-axis. This, of course, is undefined for straight-up and straight-down views and must be dealt with as a special case or simply avoided. This default orientation means that if the observer has a fixed center of interest and the observer's position arcs directly over the center of interest, then just before and just after being directly overhead, the observer's up vector will instantaneously rotate by almost one hundred eighty degrees.
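Deriving the default 'head up' observer frame from a position and center of interest can be sketched as follows, assuming the right-handed, y-up world space used above; the function names are illustrative:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return [c / m for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def observer_frame(eye, coi):
    """View direction is the vector from the observer to the center of
    interest; its magnitude is of no consequence, so it is normalized away."""
    view = normalize([c - e for c, e in zip(coi, eye)])
    # 'head up': the up vector lies in the plane of the view vector and
    # the y-axis; this is undefined for straight-up or straight-down views
    right = normalize(cross([0.0, 1.0, 0.0], view))
    up = cross(view, right)
    return view, right, up
```

Note the degenerate case called out above: when the view direction is parallel to the y-axis, the cross product is the zero vector and the normalization fails, so such views must be special-cased or avoided.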
In addition to the observer position and orientation, the field of view has to be specified. This includes a viewing angle, hither clipping distance, and yon clipping distance.