The SIGGRAPH Papers program is the world's premier forum for disseminating new results and research in computer graphics and interactive techniques. SIGGRAPH 2000 continues to set the standard for research and forge new trends with the presentation of 59 papers selected from 315 submissions. Kurt Akeley, SGI, is the SIGGRAPH 2000 Papers chair.
Four papers are of special interest to the real-time graphics community; they display a progression from working purely with video, to synthesizing images with or without the help of digital geometry, to a more geometric approach to drawing. The last two papers discussed help solve a number of animation problems by introducing methods that produce impressively realistic physical simulations and intelligent digital characters that can improve upon "poor man's" motion capture.
As an example of one application domain, game developers, who need both fast graphics and fast computing, will find research approaches of interest in the following papers.
On the purely video end of the scale, the paper entitled Video Textures introduces a handy technique that solves the problem of jerkiness when a video clip is looped, and then goes much further to convincingly produce a continuously varying stream of images that can play for as long as you like. This is accomplished by locating a number of plausible "loop-back frames" within the video sequence and then randomly taking or bypassing each loop-back point as it is encountered during playback. Thus, while the individual frames of a video texture may be repeated from time to time, the video sequence as a whole is never repeated exactly. By varying the number of loops (and frames) in the video sequence, you can produce very compact video textures, perhaps for Web page download, that are convincingly realistic.
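Under the hood, playback amounts to a biased random walk over the frames: at each precomputed loop-back point the player either jumps back or continues forward. A minimal sketch of that playback loop follows; the frame counts, jump probability, and `loop_backs` table are hypothetical illustrations, not values from the paper:

```python
import random

def play_video_texture(num_frames, loop_backs, steps, p_jump=0.5, seed=None):
    """Play back a video texture by randomly taking or skipping
    precomputed loop-back points (illustrative sketch only).

    loop_backs maps a frame index to the earlier frame it can
    plausibly jump back to."""
    rng = random.Random(seed)
    sequence = []
    frame = 0
    for _ in range(steps):
        sequence.append(frame)
        # At a loop-back point, jump back with some probability;
        # otherwise advance to the next frame as usual.
        if frame in loop_backs and rng.random() < p_jump:
            frame = loop_backs[frame]
        else:
            frame = (frame + 1) % num_frames  # wrap at the end as a fallback
    return sequence
```

Because each loop-back decision is random, two playbacks with different seeds yield different, never-exactly-repeating frame orderings from the same short clip.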
The Video Texturing technique can also analyze and maintain plausible internal loops for multiple separate portions of a video. For example, two children playing on a swing set can be considered independently of one another, multiplying the variations possible for video texture playback.
You can even produce video sprites and "put together" elements such as fish, air bubbles, and swaying plants that create a virtually realistic fish tank without the bother of real fish. One could easily imagine in the near future such video textures used in television, game, and Web page backgrounds, or even replacing the desktop family photo with a continuous motion image.
The WarpEngine is a new architecture designed to render natural scenes that can be traversed in real time. The system uses image-based rendering to produce realistic 3D objects and environments using high-quality images rather than the typical approach of many thousands of polygons. The WarpEngine system will be embedded in hardware (e.g. a PC video graphics card) for fast, reliable performance.
Relief Texture Mapping is a clever technique to increase realism by enhancing a texture map's implied surface details to give a 3D surface relief effect, such as would be displayed by a brick wall. This is accomplished by recording "depth" information for points (or "texels") on the 2D texture map image as it would relate to the viewer's position in space, and then mapping the extended texture onto a simple 3D object (e.g., cube). The authors display dramatic results when their Relief Texture Mapping technique is applied to a virtual Volkswagen Beetle and sculptured statue.
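The core idea, storing a depth value per texel and shifting lookups accordingly, can be illustrated with a simple parallax-style offset. This is a simplified stand-in for the paper's technique, which factors the full 3D image warp into 1D pre-warps followed by conventional texture mapping; the function and parameters below are invented for illustration:

```python
def parallax_offset(u, v, depth, view_dir):
    """Shift a texture lookup (u, v) by the texel's stored depth
    along the view direction.  An illustrative, simplified relative
    of relief texture mapping's pre-warp, not the paper's algorithm."""
    vx, vy, vz = view_dir
    # Texels with greater depth slide farther across the surface,
    # producing the parallax that suggests 3D relief.
    return u + depth * vx / vz, v + depth * vy / vz
```

Viewed head-on the lookup is unchanged, while oblique view directions shift deeper texels farther, which is what makes the bricks of a flat wall texture appear to protrude.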
Conservative Visibility Preprocessing Using Extended Projections demonstrates how an "occlusion culling" technique allows much faster real-time processing of an extremely large dataset of polygons in a virtual environment. This approach pre-processes a scene's information to determine which objects will not be visible from a user's given viewpoint, thus saving much processing at run time. The authors give impressive demonstrations of pre-processed occlusion culling in a side-by-side comparison with common view-frustum culling while traversing a digital city of more than 6 million polygons (including 3,000 moving cars) and a simulated forest of more than 7.8 million polygons. Such complexity quickly overburdens computer processing when only view-frustum culling is used to determine which objects are within the user's view (and therefore need to be rendered). The video response rate is quite noticeably faster using the occlusion culling technique.
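The division of labor can be sketched in a few lines: an expensive offline pass computes a potentially visible set (PVS) per view cell, and the run-time loop only frustum-tests that set. The `is_occluded` predicate below is a hypothetical stand-in for the paper's extended-projection occlusion test:

```python
def precompute_pvs(view_cells, objects, is_occluded):
    """Offline pass: for each view cell, conservatively keep every
    object that cannot be proven hidden from all viewpoints in the
    cell.  is_occluded(cell, obj) stands in for the paper's
    extended-projection test and is assumed here."""
    return {cell: {o for o in objects if not is_occluded(cell, o)}
            for cell in view_cells}

def objects_to_render(pvs, cell, in_frustum):
    """Run-time pass: frustum-test only the precomputed potentially
    visible set for the viewer's current cell."""
    return {o for o in pvs[cell] if in_frustum(o)}
```

The payoff is that the run-time cost scales with the (small) potentially visible set rather than with the millions of polygons in the full scene.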
The occlusion culling demonstrations were run on a four-processor SGI Onyx, so these particular scenes could not be run on a typical PC; nonetheless, the algorithm is shown to be impressively superior and should interest anyone working with large datasets in real time.
Timewarp Rigid Body Simulation demonstrates a series of impressive (and fast, though not quite yet real-time) physical simulations that solve many problems of realistic interaction and collision detection in scenes involving multiple moving objects. From simulations of molecular gas behavior under pressure or in mixture, to an avalanche of variously shaped objects that bounce and roll realistically down a hillside, the authors demonstrate the effectiveness of their technique when applied to interactions of hundreds of rigid bodies.
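The rollback idea at the heart of the method can be illustrated in one dimension: step optimistically, and when a collision is discovered mid-step, rewind to the impact time and resume from there. A toy sketch follows; the ball-and-wall setup and bisection tolerance are invented for illustration and are not the paper's algorithm:

```python
def advance_with_rollback(x, v, dt, wall=0.0, tol=1e-6):
    """Take one optimistic step for a 1D ball; if it penetrates the
    wall, roll back and bisect in time to the moment of impact, then
    reflect and finish the step.  A toy stand-in for timewarp's
    checkpoint-and-rollback idea."""
    x_new = x + v * dt
    if x_new >= wall - tol or v >= 0:
        return x_new, v          # no collision: the step stands
    # Rollback: bisect for the time at which x crosses the wall.
    lo, hi = 0.0, dt
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x + v * mid > wall:
            lo = mid             # still in front of the wall
        else:
            hi = mid             # already past the wall
    t_hit = lo
    x_hit = x + v * t_hit
    v = -v                       # elastic reflection at the wall
    return x_hit + v * (dt - t_hit), v
```

Only the bodies involved in the missed collision need to be rewound, which is what lets the full method scale to hundreds of interacting rigid bodies.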
The Style Machines paper tackles the problem of inexpensively and effectively using motion capture to animate digital characters. Style Machines extends a 3D digital model such that it can "learn" styles of motion; an animator can then use "training" videos of professional performers, such as dancers, to instill motion ability within a digital character's "memory." The digital model can then capture and extrapolate its own motion from a given consumer-grade video sequence, like an intelligent "auto-rotoscoping." Such a system allows efficient animation through a quick and easy motion capture technique that does not require a professional actor, expensive equipment, or multiple cameras.
Video Textures
Arno Schodl, Georgia Institute of Technology
Richard Szeliski, Microsoft Research
David Salesin, Microsoft Research and the
University of Washington
Irfan Essa, Georgia Institute of Technology
The WarpEngine: An Architecture for the Post-Polygonal Age
Voicu Popescu, John Eyles, Anselmo Lastra,
Josh Steinhurst, Nick England, and Lars Nyland,
University of North Carolina at Chapel Hill
Relief Texture Mapping
Manuel M. Oliveira, Gary Bishop, and David McAllister, University of North Carolina at Chapel Hill
Conservative Visibility Preprocessing using Extended Projections
Fredo Durand, iMAGIS-GRAVIR / MIT LCS
George Drettakis, Joelle Thollot, and Claude Puech, iMAGIS-GRAVIR
Timewarp Rigid Body Simulation
Brian Mirtich, MERL - Mitsubishi Electric Research Laboratories
Style Machines
Matthew Brand, MERL - Mitsubishi Electric Research Laboratories
Aaron Hertzmann, New York University Media Research Lab