Vol.34 No.1 February 2000
Report of the Workshop on Rendering, Perception, and Measurement
The Workshop on Rendering, Perception, and Measurement was held at Cornell University in Ithaca, NY, April 8-10, 1999. The event was attended by approximately 125 people, including 20 speakers and 45 graduate students. Five attendees came from government agencies and labs, 16 from industry and the balance from academia. Nine countries were represented, with 19 attendees in all from outside the U.S.
The workshop was co-sponsored by the Division of Advanced Computational Infrastructure and Research of the National Science Foundation (through grant number ASC-9523483), by ACM SIGGRAPH and by the NSF Graphics and Visualization Center (ASC-8920219). This joint sponsorship enabled considerable subsidy of the costs, making the event affordable for students and international attendees.
The format of the workshop provided six primary sessions on topics central to current research in rendering and the closely related fields of perception and measurement. The slide presentations for each session of the workshop are available in PDF format at the workshop web page. The following brief summary of the research sessions indicates the range of research discussed. At the final session, several more general recommendations for the future were also presented.
Photo: Turner Whitted, Don Greenberg and others.
The Rendering Problem
The opening session on Thursday included talks by Andrew Glassner (Microsoft Research), Alain Fournier (University of British Columbia) and Turner Whitted (Microsoft Research). Peter Shirley (University of Utah) delivered the talk for Andrew Glassner, who was unable to attend at the last minute.
In "Rendering 201," Glassner described the rendering problem as a quest for control in working toward the goals of realism, analysis and aesthetics: creating a visual experience indistinguishable from a natural one, matching synthesis with visual perception, and expressing widely diverse visual points of view. In the future we will want to freely mix synthetic and real images with a fully natural feeling, for enhanced visual experiences.
Fournier described and illustrated how far we are from automatically modeling complex natural objects and scenes in "The Tiger Experience." Current limitations in modeling and rendering can be seen both as a failure and an opportunity for new approaches.
Whitted’s talk, "The Rendering Problem Part II: Architectures," discussed the extent to which increased processing power would or would not solve current rendering problems. He recommended that more research effort be devoted to exploring new representations beyond the common alternatives of polygons or pixels, especially representations that map better to human visual perception.
Dealing with Complexity
Anselmo Lastra (University of North Carolina, Chapel Hill), Matt Pharr (Stanford), Richard Bukowski (Berkeley) and Francois Sillion (iMAGIS/INRIA, Grenoble) delivered the talks for the Thursday afternoon session.
Lastra discussed hardware and software-based approaches for incorporating depth information with images in a talk entitled, "All the Triangles in the World." Recent research at UNC has been exploring depth information for image-based geometry acquisition, image warping and hierarchical image-based rendering. UNC researchers are also exploring what advantages can be gained from implementing image-based rendering in hardware.
Pharr presented a talk entitled "Rendering Natural Scenes with Generalized Object Instancing," describing a new approach to rendering outdoor scenes with geometric, material and lighting complexity beyond the scope of current algorithms.
Bukowski presented the Citywalk collaboration between Berkeley and MIT in his talk, "A Perspective on Managing Complexity in Large-Scale Architectural Environments." The Citywalk project is developing a networked, interactive city-sized model that includes both exterior and interior models of buildings. The project is working to integrate a large number of existing techniques, such as seamlessly combining dense occlusion culling with level-of-detail algorithms, in order to scale a walkthrough to such large environments.
In the final talk, entitled "Rendering a Complex World," Sillion addressed how techniques including culling, geometric simplification and image-based models can successfully support greater complexity. His research group is exploring the use of hierarchical models, generalized textures, pre-computed illumination and abstractions of light reflectance and scattering behavior. Finding acceptable levels of approximation for shadows, dealing with time-dependency and interactive rendering remain challenges.
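A distance-based level-of-detail switch is one of the techniques these walkthrough systems combine with culling. The sketch below is a hypothetical illustration of the idea; the thresholds, scale factor and function name are illustrative, not taken from the talks.

```python
# Hypothetical sketch of distance-based level-of-detail selection of the
# kind combined with occlusion culling in large walkthroughs. Thresholds
# and the screen-size approximation are illustrative assumptions.

def select_lod(object_radius: float, distance: float, fov_scale: float = 1000.0) -> int:
    """Return an LOD index (0 = finest) from an object's approximate projected size."""
    projected_px = fov_scale * object_radius / max(distance, 1e-6)
    if projected_px > 200.0:
        return 0  # full-detail mesh
    if projected_px > 50.0:
        return 1  # simplified mesh
    if projected_px > 10.0:
        return 2  # coarse mesh
    return 3      # impostor or other image-based stand-in

# A 1 m object gets full detail up close but an impostor far away:
near_lod = select_lod(1.0, 4.0)    # 0
far_lod = select_lod(1.0, 500.0)   # 3
```

The per-frame cost of such a test is trivial, which is why it scales to city-sized scenes: only objects that survive culling are tested, and most of those resolve to the cheap coarse levels.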
Geometry Acquisition
On Friday the workshop resumed with a session on geometry acquisition, and Dan Huttenlocher (Cornell), Jean-Yves Bouguet (Caltech) and Paul Debevec (Berkeley) each presented recent research.
In the opening talk, "Computer Vision for Recovering Information about Scene Geometry," Huttenlocher presented techniques for recovering scene geometry from multiple images, including depth or disparity maps from stereopsis and 3D surfaces or point sets from motion analysis. Both motion parallax between views and motion that does not obey parallax can be used to extract and replace objects from video independently of the background. Geometric model-based matching techniques facilitate identifying people and their motions.
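The stereopsis step rests on the standard pinhole relation between disparity and depth. A minimal sketch of that relation, with illustrative numbers rather than values from the talk:

```python
# Depth from stereo disparity: the standard pinhole relation
# Z = f * B / d, where f is focal length in pixels, B is the camera
# baseline, and d is the disparity between the two views.
# The example values below are illustrative, not from the talk.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return scene depth in meters for one pixel correspondence."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point seen 20 px apart by two cameras 0.1 m apart with f = 800 px:
z = depth_from_disparity(800.0, 0.1, 20.0)  # 4.0 m
```

The inverse relationship also explains why depth estimates degrade for distant points: as depth grows, disparity shrinks toward the matching error of the correspondence step.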
In his talk, "3D Acquisition Using Shadows," Bouguet described joint research with Pietro Perona at Caltech on 3D reconstruction using shadows. Spatial and temporal processing are combined in a structured lighting system that has been implemented for real-time scanning. Future work will include the registration of multiple scans into complete models.
For the final talk in the session, "Geometry Acquisition with Applications to Image-Based Modeling, Rendering, and Lighting," Debevec presented his research on geometry acquisition to enhance image-based modeling, rendering and lighting. A sequence of 40 radiance maps of a room has been used to recover both geometry and reflectance information to permit simulating new views of the environment.
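Radiance maps of this kind are typically assembled from several photographs at different exposures. The sketch below is a hedged illustration of that merging step under a simplifying linear-response assumption; the triangle weighting and function are illustrative, not the method described in the talk.

```python
# Hedged sketch of merging differently exposed photographs into a
# per-pixel relative radiance estimate, in the spirit of radiance-map
# recovery. A linear sensor response and the triangle weighting below
# are simplifying assumptions for illustration.

def merge_exposures(pixel_values, exposure_times):
    """Estimate relative radiance from linear pixel values in [0, 1]."""
    def weight(z):
        # Trust mid-range values; distrust under- and over-exposed ones.
        return min(z, 1.0 - z)
    num = sum(weight(z) * (z / t) for z, t in zip(pixel_values, exposure_times))
    den = sum(weight(z) for z in pixel_values)
    return num / den if den > 0.0 else 0.0

# Two exposures of the same pixel agree on a relative radiance of 0.5:
radiance = merge_exposures([0.5, 0.25], [1.0, 0.5])  # 0.5
```

Because clipped values get zero weight, each pixel's estimate is dominated by whichever exposures captured it in their usable range, which is what lets a short exposure sequence span a room's full dynamic range.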
Photo: Greg Larson and Holly Rushmeier with Jim Ferwerda.
Photo: the measurement panel (Westin, Marschner, Nadal, Dana).
Measurement
Friday afternoon’s session covered three distinct techniques for reflectance measurement. Steve Marschner (Microsoft Research), Steve Westin (Cornell), Maria Nadal (National Institute of Standards and Technology) and Kristin Dana (Columbia University) delivered talks on recent research.
In the first talk, "Image-Based Reflectometry," Marschner and Westin presented new image-based techniques for capturing bidirectional reflectance distribution functions, or BRDFs. Their techniques reduce the need for expensive gonioreflectometer and 3D scanning equipment in acquiring accurate reflectance data.
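A measured BRDF is in effect a table of reflectance values indexed by pairs of incoming and outgoing directions, which is what a gonioreflectometer samples angle by angle. As a stand-in for such data, the sketch below evaluates a simple normalized Lambertian-plus-Phong model; the model and its parameters are illustrative, not the speakers' method.

```python
import math

# A BRDF relates each (incoming, outgoing) direction pair to a reflectance
# value; measurement rigs sample it over many such pairs. As a stand-in for
# measured data, this sketch evaluates a simple normalized
# Lambertian-plus-Phong model with illustrative parameters.

def phong_brdf(wi, wo, n, kd=0.8, ks=0.2, shininess=32.0):
    """Evaluate the model BRDF for unit vectors wi (in), wo (out), n (normal)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d = dot(wi, n)
    r = tuple(2.0 * d * nc - wc for nc, wc in zip(n, wi))  # mirror direction of wi
    diffuse = kd / math.pi
    specular = ks * (shininess + 2.0) / (2.0 * math.pi) * max(dot(r, wo), 0.0) ** shininess
    return diffuse + specular

# The specular lobe peaks in the mirror direction and vanishes at grazing exit:
normal = (0.0, 0.0, 1.0)
peak = phong_brdf(normal, normal, normal)
grazing = phong_brdf(normal, (1.0, 0.0, 0.0), normal)
```

Image-based approaches gain their efficiency here: one photograph of a curved sample constrains many entries of this table at once, where a gonioreflectometer must visit each angle pair mechanically.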
Dr. Nadal presented a talk entitled, "Bidirectional Reflectance Distribution Function (BRDF) Measurements at NIST." The NIST Advanced Reflectance Project includes research on coating formulation and microstructure characterization, surface characterization and reflectance modeling, appearance and color measurement and appearance rendering.
Kristin Dana presented joint work with Shree Nayar at Columbia entitled, "Models and Measurement of 3D Textures." 3D textures include information on surface height variation as well as color variation. Columbia and Utrecht Universities have placed sample measurements on the Web.
Perception and Rendering
Saturday morning’s first session included talks by Jim Ferwerda (Cornell), Gary Meyer (University of Oregon) and Dan Kersten (University of Minnesota).
In the opening talk, "Varieties of Realism," Ferwerda compared three approaches to evaluating realism in computer graphics – in terms of physical realism, photorealism and functional realism. The concept of fidelity should incorporate the accuracy of a visual stimulus, the visual response by the observer and task-appropriate visual information transfer.
In his talk, "Perceptually Based Approaches to Improved Rendering Efficiency," Meyer reviewed the history of perception research in computer graphics, and discussed how visual models can be used in redesigning rendering algorithms so that they provide the most important visual information, for both individual images and image sequences.
Kersten’s talk, "What is the Visual System Like? The Role of Computer Graphics," discussed the role of computer graphics in research on quantitative studies of human vision, especially for improving our understanding of the relationship between visual information and visual performance. Computer graphics can be used to help characterize the limits to visual inference given the complexity of natural images, and to test quantitative theories of visual behavior.
A Dose of Reality
The conference closing session had been designed as an opportunity for speakers from industry to address functional disconnects between computer graphics research and the needs of production systems. Larry Gritz (Pixar), Rob Shakespeare (Indiana University) and Eugene Fiume (University of Toronto and Alias|Wavefront) each presented a distinct industry perspective.
In "Who’s Afraid of ‘Advanced’ Rendering," Larry Gritz from Pixar provided a perspective from the entertainment industry. Deadlines cannot be broken, and images must portray and accent story points so they may need to be "bigger than life" rather than true to life. Production rendering systems must have extreme flexibility and robustness, and skilled lighters can usually outperform ray tracing in achieving a specific effect, even at the cost of changing the laws of physics. At the same time, the film industry is looking to solve problems with ray tracing and global illumination for production rendering in order to enhance specific scenes and revisit the tradeoff between automation and custom control.
In his role as a lighting designer, Rob Shakespeare uses computer graphics to preview the impact of design ideas, explore visual perception issues and help engineer illumination. In his talk, "A Dose of Reality...," Shakespeare described how global illumination today is used by only a fraction of the professionals who could find it useful as a primary tool for advanced engineering and scientific visualizations, illumination engineering and lighting design. The current capabilities and benefits of advanced rendering are not well known, and software interfaces need to be familiar and in the language of the intended users, perhaps as plugins to interfaces already in use. Improved photometry representations (currently limited to point sources) and better material attribute standards, archives, translators and sensors will also be required.
For the closing talk, "On Realistic Industrial Rendering: Let it Go," Eugene Fiume provided a comparison of different rendering requirements by major functions. These functions include film and video effects, games, scientific visualization, conceptual design, industrial design and manufacturing design. To achieve desired performance it will be critical to take advantage of time coherence, overcome platform-based limitations on using hardware rendering and improve the efficiency of complex shading and texturing. Physically-based special effects and selective application of physically-based rendering (e.g., smoke, cloth, caustics) may find more application than full light transport solutions. Image-based rendering is already useful in games and films, but suffers from quality and scaling concerns, as well as sampling issues. Partnerships with industry toward solving challenging long-term research problems are likely to be more fruitful than just addressing the problems industry has with rendering today.
Discussions and Activities
Many additional opportunities were provided for small group interaction over the three days of the workshop, including breakout sessions on image-based rendering, perception, global illumination and measurement.
The workshop also included a tour of the Program of Computer Graphics at Cornell. Photos from this event and others at the workshop can be viewed on the workshop web pages.
The copyright of articles and images printed remains with the author unless otherwise indicated.