Innovations in Rendering
SIGGRAPH is the major annual conference for innovation in graphics and interaction for academia and industry. This year's SIGGRAPH conference saw a wide range of novel techniques presented and a few of the older ones consolidated. The paper acceptance rate was 18% from nearly 500 submissions, underlining both the strictness of the review process and the importance of SIGGRAPH within the graphics community.
In the field of rendering, novel contributions were presented in the papers, sketches and posters sessions. In addition, the courses highlighted the current trends and techniques used in this field of graphics.
Rendering techniques are the methods and algorithms used in computer graphics to generate images from virtual worlds, which are usually composed of mathematical representations of geometric objects and materials. Two major families of techniques are used to render images of virtual environments. The first is based on rasterisation: three-dimensional geometric models, usually represented as polygons, are projected onto an image plane and visual effects are then applied. These methods dominate three-dimensional games and interactive simulations because of their speed, possible since they can exploit fast graphics hardware. The second family attempts to simulate the way light travels in nature, either by shooting photons around a virtual scene or, more commonly, by inverting the process and shooting rays. Using this approach, appropriately termed ray tracing, rays are shot out of a virtual camera and capture the physical properties of the light interacting with the geometry and media they encounter. These methods are noted for their highly realistic look, since they rely on physical simulation, but because of the complexity of that simulation the resulting images take much longer to compute than with rasterisation.
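The core of ray tracing can be sketched in a few lines. The following minimal, illustrative Python example (not taken from any SIGGRAPH paper; the scene, camera position and resolution are arbitrary assumptions) shoots one ray per pixel from a pinhole camera and tests it against a single sphere:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is normalised (so the quadratic's a = 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # only hits in front of the camera count

# Shoot one ray per pixel from a pinhole camera at the origin,
# looking down -z at a unit sphere centred at (0, 0, -3).
WIDTH, HEIGHT = 8, 8
image = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map the pixel centre to a point on the image plane at z = -1.
        px = (x + 0.5) / WIDTH * 2 - 1
        py = 1 - (y + 0.5) / HEIGHT * 2
        length = math.sqrt(px * px + py * py + 1)
        d = (px / length, py / length, -1 / length)
        hit = intersect_sphere((0, 0, 0), d, (0, 0, -3), 1.0)
        row += "#" if hit else "."
    image.append(row)
print("\n".join(image))
```

A full ray tracer extends this loop with shading at the hit point and recursive rays for reflection, refraction and shadows, which is where the computational cost grows.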
The rasterisation methods demonstrated at SIGGRAPH showed a convergence towards the more realistic techniques. The course on physically based reflectance for games showed how physical reflections, more commonly associated with ray-traced renderers, can be used in interactive environments to improve the realism of the images produced. The course on advanced rendering in 3D graphics and games further showed how advanced techniques, including interreflections and high dynamic range, can be used in games. This is particularly valuable there, since the result need not be physically accurate as long as it is close enough to realism for an enjoyable experience. In terms of systems, Microsoft introduced the Direct3D 10 rendering system, set to become one of the leading rasterisation rendering libraries, in one of the papers sessions.
Pre-computed radiance transfer, a family of methods bridging the gap between rasterisation and high-fidelity rendering, had its own papers session. Methods were presented that further extend this approach to real-time editing and all-frequency lighting. A method was also shown that successfully used pre-computed transfer for sound rather than light.
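To make the idea behind pre-computed radiance transfer concrete, here is a deliberately simplified sketch. Real PRT projects visibility and lighting onto a spherical harmonics basis; this toy version (all names and the six-direction basis are illustrative assumptions, and the occlusion query is a stub standing in for an offline ray cast) keeps only the essential structure: an expensive per-vertex precomputation, then a cheap dot product at runtime:

```python
# A fixed set of sample directions standing in for a lighting basis.
# Real PRT would use spherical harmonics coefficients instead.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def precompute_transfer(normal, occluded):
    """Offline step: for each basis direction, store cosine-weighted
    visibility at a surface point. `occluded(d)` stands in for casting
    a shadow ray into the scene along direction d."""
    transfer = []
    for d in DIRECTIONS:
        cos_term = max(0.0, sum(n * x for n, x in zip(normal, d)))
        transfer.append(0.0 if occluded(d) else cos_term)
    return transfer

def shade(transfer, light_coeffs):
    """Runtime step: outgoing radiance is just a dot product between the
    stored transfer vector and the current lighting coefficients, cheap
    enough to evaluate per vertex per frame even when the lighting changes."""
    return sum(t * l for t, l in zip(transfer, light_coeffs))

# An unoccluded point facing +z, lit only from the +z basis direction.
t = precompute_transfer((0, 0, 1), lambda d: False)
print(shade(t, [0, 0, 0, 0, 1, 0]))
```

The appeal for interactive applications is that the expensive visibility integration happens once, offline, while relighting at runtime costs only a small dot product per shading point.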
Realistic rendering also began to make inroads into the domain of interactivity. The course on interactive ray tracing showed attendees how to construct ray tracers that obtain improved performance from the CPU by taking advantage of instruction-level parallelism and cache coherence. The presenters also stressed the importance of constructing and using appropriate acceleration data structures, and demonstrated their methods with a number of ray tracers that are interactive, dynamic and produce realistic images. Another course took an alternative approach that relies less on engineering fast ray tracers: it leverages aspects of the human visual system, exploiting perception for realistic rendering and interaction in virtual environments. That is to say, the parts of the image a viewer is not attending to are rendered at lower quality, without the viewer noticing.
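At the heart of most acceleration data structures, such as bounding volume hierarchies, sits a very cheap ray/axis-aligned-box intersection test: whole groups of triangles can be skipped whenever the ray misses their bounding box. The sketch below shows the standard "slab" test; the function name and the precomputed reciprocal direction are generic conventions, not specifics from the course:

```python
def ray_box_hit(origin, inv_dir, box_min, box_max):
    """Slab test: a ray hits an axis-aligned box iff the parameter
    intervals where it lies between each pair of parallel planes overlap.
    inv_dir holds 1/direction per axis, precomputed once per ray so the
    inner loop uses multiplications instead of divisions (a common trick
    in fast tracers; axes with a zero direction component need care)."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0      # order the slab interval
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False          # slab intervals do not overlap: miss
    return True

# A ray along (1,1,1) from the origin pierces the box [1,2]^3.
print(ray_box_hit((0, 0, 0), (1.0, 1.0, 1.0), (1, 1, 1), (2, 2, 2)))
```

Because this test runs at every node of a hierarchy traversal, its branch behaviour and memory layout are exactly where the instruction-level parallelism and cache coherence mentioned by the course presenters pay off.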
The papers session on realistic rendering presented techniques for speeding up ray tracing of dynamic scenes through fast acceleration data structures, as well as efficient methods of sampling the image plane, ideal for reducing noise and aliasing. The light transport session proved particularly interesting, with novel methods for rendering hair using techniques commonly associated with other media. Pixar presented work on rendering animations by first computing coarse representations of the animation and then refining the results using statistical methods. Many rendering techniques allow the viewer to interact with an environment but not to change the lighting; the direct-to-indirect transfer technique turns this on its head, allowing rapid modification of the lights within an image. Because it also accounts for global illumination, it is particularly useful when animators want to identify ideal lighting conditions for cinematic animations. Perceptual methods were further highlighted in the multidimensional lightcuts paper, in many ways a sequel to last year's paper, in which high-fidelity effects such as participating media, motion blur and depth of field are rendered adaptively based on perceptual criteria.
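Stratified, or jittered, sampling is the classic example of the kind of image-plane sampling scheme used to trade structured aliasing for less objectionable noise. The sketch below is a generic illustration of the idea, not a reconstruction of any particular paper from the session:

```python
import random

def jittered_samples(n, rng=random):
    """Stratified sampling of a unit pixel: divide it into an n x n grid
    and place one random sample inside each cell. Compared with purely
    random sampling this lowers variance (noise), while the per-cell
    jitter breaks up the regular grid that causes aliasing artefacts."""
    samples = []
    for i in range(n):
        for j in range(n):
            samples.append(((i + rng.random()) / n, (j + rng.random()) / n))
    return samples

# 16 sub-pixel positions, one per cell of a 4x4 grid; a renderer would
# average the radiance of the rays shot through these positions.
samples = jittered_samples(4)
```

Each sample stays inside its own grid cell, so no region of the pixel is left unsampled, yet no two frames or pixels share the same regular pattern.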
The sketches sessions also showed promise for the years ahead. One example was the temporal radiance cache, a method for rendering realistic animations of dynamic environments by storing and re-using parts of the computation, a form of caching that had not previously been applied successfully to dynamic scenes.
For another year the SIGGRAPH technical sessions lived up to expectations. Producing ever more realistic images, computed in real time, appears to be the major goal of the rendering community. We look forward to next year's conference in San Diego for further innovations in rendering.