Course 15
Rendering
Image-Based Modeling and Rendering

This course explained and demonstrated a variety of methods for turning photographs into models and then back into renderings, including movie maps, panoramas, image warping, photogrammetry, light fields, and 3D scanning. It also reviewed relevant topics in computer vision to show how these methods relate to image-based rendering techniques and how to apply the techniques to animation and 3D navigation.

Prerequisites
A solid understanding of the standard 3D graphics pipeline, including perspective projection, depth-buffering, visibility, lighting, and texture mapping, is recommended. Knowledge of basic image processing, especially image resampling, and familiarity with the basic mechanisms of global illumination are helpful.

Topics Covered
Methods of deriving geometric information from photographs, including stereo, structure from motion, interactive techniques, and structured light; image-based data structures and rendering methods that use images and/or geometry for novel view generation.
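
To illustrate the kind of novel-view-generation step listed above, the following sketch (not taken from the course notes) forward-warps a photograph with a per-pixel depth map into a new camera position, resolving visibility with a simple z-buffer. The array and parameter names (image, depth, K, R, t) are assumptions for this example only.

import numpy as np

def reproject(image, depth, K, R, t):
    """Forward-warp `image` (H x W x 3) with per-pixel `depth` (H x W)
    from the reference camera into a novel view given by rotation R and
    translation t (reference-to-novel). K is the shared 3x3 intrinsic
    matrix. Returns the warped image, using a z-buffer for visibility."""
    H, W = depth.shape

    # Pixel grid of the reference image in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project each pixel to a 3D point in the reference camera frame.
    rays = np.linalg.inv(K) @ pix
    pts = rays * depth.reshape(1, -1)

    # Transform the points into the novel camera frame and project them.
    pts_new = R @ pts + t.reshape(3, 1)
    proj = K @ pts_new
    z = proj[2]
    valid = z > 1e-6
    z_safe = np.where(valid, z, 1.0)
    x = np.round(proj[0] / z_safe).astype(int)
    y = np.round(proj[1] / z_safe).astype(int)

    # Splat colors into the novel view; the nearest surface wins.
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    colors = image.reshape(-1, 3)
    inside = valid & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    for i in np.flatnonzero(inside):
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = colors[i]
    return out

Such a forward warp leaves holes where surfaces become newly visible; the methods surveyed in the course (multiple reference images, geometric proxies, light-field sampling) address exactly those gaps.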

Organizers
Paul Debevec
University of California, Berkeley

Steven Gortler
Harvard University

Lecturers
Chris Bregler

Paul Debevec
University of California, Berkeley

Steven Gortler
Harvard University

Leonard McMillan
Massachusetts Institute of Technology

Richard Szeliski
Microsoft Corporation
