Using Photos Speeds Up Process

One way to work around the difficulty of rendering outdoor scenes and other highly detailed environments is to lean more heavily on what Larson terms "captured environments and imagery," rather than creating models from whole cloth. This involves enhancing existing imagery with computer graphics. Called image-based rendering, the technique uses flat or holographic images to reduce the number of three-dimensional geometric elements required to render a scene. But this form of rendering brings its own challenge: "needing enormous amounts of memory or being limited to relatively simple reflection models," Larson says.

Put simply, image-based rendering derives its model from a photo. "Using traditional techniques, you construct each model by building it, polygon by polygon, then you assign a texture map to everything," says Paul Debevec, a professor of computer science at the University of California, Berkeley. "That takes too long. Traditional rendering with computer graphics is labor intensive. It takes a lot of effort to generate the model, and then it takes a lot of computing to do it photorealistically. Using a real scene with a photo is a lot faster." But photos don't tell the artist whether the surface the light is bouncing off "is grass or a shiny surface, or that it's green," he says. "It only tells you what it looks like with that lighting at that angle."
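As a rough illustration of the idea, here is a minimal sketch in Python: a hypothetical pinhole camera projects a 3-D surface point into a photograph and reads off the pixel color there, instead of assigning a hand-authored texture map. Every name and value below is illustrative, not Debevec's actual system.

    # Sketch: use a photograph as the texture source for recovered geometry.
    # A pinhole camera maps a camera-space 3-D point to a pixel; the color
    # found there becomes the surface color, lighting baked in.

    def project(point, focal, width, height):
        """Project a camera-space point (x, y, z), z > 0, to pixel coordinates."""
        x, y, z = point
        u = width / 2 + focal * x / z
        v = height / 2 - focal * y / z
        return int(u), int(v)

    def sample_photo(photo, point, focal):
        """Look up the photo color seen along the ray through `point`."""
        h, w = len(photo), len(photo[0])
        u, v = project(point, focal, w, h)
        if 0 <= u < w and 0 <= v < h:
            return photo[v][u]      # color arrives with its lighting baked in
        return None                 # the point falls outside the photograph

    # A toy 2x2 "photo" of RGB triples, and a surface point one unit away.
    photo = [[(10, 120, 30), (200, 200, 200)],
             [(90, 60, 20), (40, 40, 40)]]
    print(sample_photo(photo, (0.1, 0.1, 1.0), focal=1.0))   # (200, 200, 200)

The lookup also shows Debevec's caveat in code: the returned triple is simply what the film recorded, with no way to separate "green grass" from "what it looks like with that lighting at that angle."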

[Photo: The Flying Chevette, where it all started with image-based rendering.]

Another drawback to image-based rendering, Debevec says, is that the scene is static. "You can fly around it, but you can't change the lighting, so it's hard to add a new object to the scene." Debevec is working on a new method whereby synthetic objects can be added to an existing scene with the correct lighting, correct shadows, and correct reflections. This would be helpful for movies, where a synthetic Godzilla, for instance, could be added to existing photos of a particular scene and illuminated convincingly. The method accounts for the available natural light as well as computer-simulated light falling onto the synthetic object, he says.
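A common formulation of this kind of insertion, often called differential rendering, is to simulate the scene twice, once with and once without the synthetic object, and add the difference back to the original photograph. The sketch below shows that arithmetic for a single gray-scale pixel; the function and values are illustrative, not Debevec's published method verbatim.

    # Sketch of differential-rendering compositing for one pixel in [0, 1]:
    # final = photo + (rendered WITH object - rendered WITHOUT object),
    # so the object, its shadows, and its reflections are added while
    # untouched parts of the photo pass through unchanged.

    def composite(photo_px, with_obj_px, without_obj_px):
        delta = with_obj_px - without_obj_px   # light the object adds or blocks
        return max(0.0, min(1.0, photo_px + delta))

    # A pixel in the object's shadow: the simulation darkens from 0.6 to 0.3,
    # so the photographed value 0.55 is darkened by the same amount.
    print(round(composite(0.55, 0.30, 0.60), 3))   # -> 0.25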

Image-based rendering holds a great deal of potential for some applications, particularly education. "What I'm excited about is getting renderings to look like stuff people have never seen before, and having people think it's real," Debevec says. "I hope it will be used in education, such as walk-arounds of Monticello or the Pyramids."

Rendering time is also a major consideration in volume rendering, which Gokhan Kisacikoglu defines as the accumulation, or summation, of the colors and densities a ray encounters as it travels away from the eye. Kisacikoglu is a civil engineer who works as a technical director at Cinesite, a Kodak subsidiary based in Hollywood that offers digital scanning and recording services to filmmakers. With volume rendering, you can see an object and see through it at the same time. The advantage is that you can fly through, say, layers of fog, fire, or smoke, which you cannot do realistically with surface rendering techniques.
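The accumulation Kisacikoglu describes can be sketched as a ray march: step along the ray away from the eye and, at each sample, add that sample's color weighted by how much light still survives the densities in front of it. The minimal front-to-back version below uses made-up sample data and step size; it is a sketch of the technique, not Cinesite's renderer.

    # Sketch: front-to-back accumulation of (color, density) samples on a ray.
    import math

    def march(samples, step):
        """samples: (color, density) pairs along the ray, nearest the eye first."""
        color, transmittance = 0.0, 1.0
        for c, d in samples:
            alpha = 1.0 - math.exp(-d * step)    # opacity of this slab
            color += transmittance * alpha * c   # light this slab contributes
            transmittance *= 1.0 - alpha         # light that still gets through
            if transmittance < 1e-3:             # ray is effectively opaque
                break
        return color

    # A thin veil of fog in front of a brighter core: the eye sees the core
    # *and* sees through the fog, which is the point of volume rendering.
    ray = [(0.8, 0.2)] * 5 + [(1.0, 2.0)] * 3
    print(round(march(ray, step=0.1), 3))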

In particular, Kisacikoglu explains, rendering transparent objects is time-consuming because they are so geometrically complex. Volume rendering therefore requires a special function definition for each mathematically represented object, "simply because we do not have simple surfaces anymore, but rather solids that have properties such as inside and outside. The mathematical definition of the objects should be extended, and function representation of the solids is much more appropriate than the surface primitives that enclose regular objects, because it is hard to determine fast enough during the rendering which part of the object is inside and which part is outside."
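A minimal sketch of what a function representation buys: the solid is a function of (x, y, z) whose sign answers the inside/outside question directly, so the renderer can query any point without sifting through surface primitives. The sphere and the constant density field below are invented examples, not Cinesite's object definitions.

    # Sketch: an implicit solid is a function with f < 0 inside,
    # f = 0 on the surface, and f > 0 outside.

    def sphere(cx, cy, cz, r):
        return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 - r * r

    def density(f, x, y, z):
        """A trivial density field: constant inside the solid, zero outside."""
        return 1.0 if f(x, y, z) < 0.0 else 0.0

    blob = sphere(0.0, 0.0, 0.0, 1.0)
    print(density(blob, 0.2, 0.1, 0.0))   # inside  -> 1.0
    print(density(blob, 2.0, 0.0, 0.0))   # outside -> 0.0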

"The concept of volume rendering using function representation of objects will be used more frequently as computational resources improve and become cheaper. We developed a totally new interface to model and render smoke and fire, clouds, and nebulae. I developed shading techniques to illuminate and give colors to the shapes procedurally. We can build models that you can totally see and fly through," such as in the movie "Sphere."

Kisacikoglu and his team at Cinesite developed several procedures for improving the speed and complexity of their renderings. For instance, they subdivided the volume space into smaller chunks, distributed the renderings over the Internet to multiple machines, and found better shading algorithms to accelerate rendering.
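The chunk-and-distribute strategy can be sketched in a few lines: split the volume into bricks, render each independently, and combine the results. In the illustrative snippet below, a pool of local processes stands in for Cinesite's networked machines, and the per-chunk "render" is a placeholder for a real ray march.

    # Sketch: subdivide the volume into bricks and render them in parallel.
    from multiprocessing import Pool

    def render_chunk(bounds):
        (x0, x1), (y0, y1), (z0, z1) = bounds
        # placeholder for ray marching this brick; returns its "result"
        return (x1 - x0) * (y1 - y0) * (z1 - z0)

    def chunks(size, n):
        """Split a size^3 volume into n^3 equal bricks."""
        step = size // n
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    yield ((i * step, (i + 1) * step),
                           (j * step, (j + 1) * step),
                           (k * step, (k + 1) * step))

    if __name__ == "__main__":
        with Pool() as pool:
            parts = pool.map(render_chunk, list(chunks(64, 4)))
        print(sum(parts))   # combine the per-brick results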

 

