Image Based Modeling and Rendering

Show your vacation snapshots of the White House or the Arc de Triomphe to Paul Debevec and he sees something most of us might overlook: vertices, edges and planes. Show your pictures to his computer and it sees even more: three-dimensional architecture. Debevec, a research scientist at the University of California at Berkeley, is at the forefront of Image Based Modeling and Rendering (IBMR), which creates three-dimensional models and renderings from 2D images. Unlike conventional modeling techniques, which require the user to assemble detailed geometry and to specify complex material properties and lighting, IBMR uses scanned photographs to achieve a photorealistic look directly and to minimize the work involved. "It's fascinating; we can model the geometry of a scene from a relatively small number of photographs, and reconstruct its appearance by projecting the photographs back onto the geometry. This technique can generate surprisingly realistic views of a scene," comments Debevec.

One of the processes used to acquire geometry from photographs is called "stereo correspondence." "Stereo correspondence is similar to how depth perception works in the human brain," Debevec explains. "Your visual cortex receives two images from slightly different angles -- one from each eye -- and compares them. The items that shift the most are evaluated to be closest; the items that shift the least are assumed to be distant, and everything else is assigned intermediate values. Using a series of photographs and certain algorithms we can do essentially the same thing."

Debevec also spearheaded the animation "The Campanile," presented at SIGGRAPH last year, in which live-action film and photorealistic 3D models of the bell tower on the Berkeley campus were composited into a three-minute film that kept audience members guessing what was real and what was CG. The models were generated from twenty photographs of the campus, captured with kite aerial photography and from viewpoints at the top of the Campanile. Architectural features marked in the various viewpoints were correlated to produce 3D geometry using custom software developed at Berkeley. The original images were then projected back onto the model to create photorealistic renderings, and live action was edited in to create an unparalleled effect.
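To make the stereo-correspondence idea concrete, here is a minimal Python sketch of the disparity-to-depth relationship it relies on: match small windows between two views, measure how far each feature shifts, and treat large shifts as near and small shifts as far. The brute-force block matching, the rectified-pair assumption, and the names focal_px and baseline_m are illustrative assumptions, not Debevec's actual algorithm.

    import numpy as np

    def disparity_map(left, right, window=5, max_disp=64):
        """Brute-force block matching on a rectified grayscale pair: for each
        pixel in the left image, find the horizontal shift of the
        best-matching window in the right image."""
        left = np.asarray(left, dtype=np.float32)
        right = np.asarray(right, dtype=np.float32)
        h, w = left.shape
        half = window // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best_cost, best_d = np.inf, 0
                for d in range(min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    cost = np.sum((patch - cand) ** 2)  # sum of squared differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d          # how far this feature shifted
        return disp

    def depth_from_disparity(disp, focal_px, baseline_m):
        """Large disparity means near, small disparity means far: Z = f*B/d."""
        return np.where(disp > 0,
                        focal_px * baseline_m / np.maximum(disp, 1e-6),
                        np.inf)

Given the camera geometry, each correspondence yields a depth estimate, and many such points (or many marked features, as in the Campanile project) combine into a 3D model.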
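The step of "projecting the photographs back onto the geometry" can likewise be sketched as a simple projective texture lookup: each point on the recovered model is projected into one of the original photographs using that camera's calibration, and the photograph's color at that pixel textures the model. The intrinsic matrix K and pose (R, t) are assumed to come from the photogrammetric reconstruction; the function names below are hypothetical.

    import numpy as np

    def project_point(X, K, R, t):
        """Project a 3D world point X into pixel coordinates (u, v) of one photo."""
        x_cam = R @ np.asarray(X, dtype=float) + t   # world -> camera coordinates
        if x_cam[2] <= 0:                            # behind the camera: no color
            return None
        u, v, w = K @ x_cam                          # perspective projection
        return u / w, v / w

    def sample_color(photo, X, K, R, t):
        """Texture the model point X with the pixel it projects to in photo."""
        uv = project_point(X, K, R, t)
        if uv is None:
            return None
        u, v = int(round(uv[0])), int(round(uv[1]))
        h, w, _ = photo.shape
        if 0 <= u < w and 0 <= v < h:
            return photo[v, u]                       # nearest-neighbour lookup
        return None

A full image-based renderer would blend contributions from several photographs and account for visibility, but this projection step is the core of the texturing idea.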
