Guenter and Pighin's papers are part of the session on Facial Modeling & Animation.
Contents © 1998 ACM SIGGRAPH All Rights Reserved.


Let's Face It

Although the term synthetic actors usually implies realistically rendered human visages, sticking with cartoon-like characters gives animators a huge practical advantage: simple characters are considerably easier to get viewers to accept as real. Fred Parke, the father of 3-D facial animation, first noticed a decade or so ago that viewers demand much greater accuracy from photorealistic human characters than from cartoon-like characters (human or not). "Modeling Godzilla, you don't need the same level of detail," says facial animator Frederic Pighin. "You're trying to fool the eye with your model." Because of this more exacting scrutiny, even state-of-the-art synthetic actors often don't fool people completely, unless they're in the background like the "Titanic" extras.

Traditional computer-generated character animation is featured in "The Smell of Horror," by Mitch Butler.

 

Software engineers have been hard at work on several new methods of rendering more realistic faces. Brian Guenter's group at Microsoft Research built a 3-D camera system that records a face's image and maps its geometry at the same time. To do that, operators first glue 200 fluorescent markers to an actor's face. Then they position five video cameras at different angles around the 180-degree arc from ear to nose to other ear. When the five resulting video streams are processed simultaneously, stereo matching techniques triangulate the 3-D position of every marker. The shape of the face is then modeled with 5,000 polygons.

The markers used to capture facial expressions.
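The article doesn't give Guenter's math, but the core of any such multi-camera setup is triangulation: each camera that sees a marker contributes two linear constraints on its 3-D position. A minimal sketch in Python with NumPy, assuming just two calibrated cameras with known 3x4 projection matrices (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its projections in two calibrated cameras.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same marker in each view
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])
x1 = (point / point[2])[:2]                    # projection in camera 1
shifted = point + np.array([-1.0, 0.0, 0.0])
x2 = (shifted / shifted[2])[:2]                # projection in camera 2
print(triangulate(P1, P2, x1, x2))             # recovers [0.5, 0.2, 4.0]
```

With five cameras, the matrix A simply gains two extra rows per additional view that sees the marker, and the same least-squares solve applies.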

 

"The results are fairly realistic," Guenter claims. "Most people are struck that it looks like video of a real person." And compression techniques are shrinking the required bandwidth. The newest version allows scaling the number of polygons representing the facial geometry, and the texture information is directly proportional to the image size. That makes a low-res Web display close to doable at today's dialup speeds.

But as with any new technology, there are problems. "Some expressions cover up some markers," Guenter explains, "and you have to establish dot correspondence from frame to frame. There's a low tolerance for error--only 0.1 percent or 0.01 percent--since you can't fix any frames by hand."
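The dot-correspondence problem Guenter mentions is, in its simplest form, matching each marker in one frame to the nearest reconstructed marker in the next frame, flagging occluded markers that find no match. A toy sketch of that baseline idea (the production system is surely more robust; the names and distance threshold here are invented for illustration):

```python
import numpy as np

def match_markers(prev_pts, curr_pts, max_dist=5.0):
    """Greedy nearest-neighbour correspondence between consecutive frames.

    prev_pts, curr_pts : (N, 3) arrays of reconstructed marker positions.
    Returns, for each previous marker, the index of its match in the
    current frame, or None when the marker is occluded (e.g. covered
    by an expression).
    """
    dists = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    matches = [None] * len(prev_pts)
    used = set()
    # Assign closest pairs first so an occluded marker can't steal a neighbour.
    for i, j in sorted(np.ndindex(*dists.shape), key=lambda ij: dists[ij]):
        if matches[i] is None and j not in used and dists[i, j] <= max_dist:
            matches[i] = j
            used.add(j)
    return matches

prev_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
curr_pts = np.array([[10.5, 0.0, 0.0], [0.2, 0.1, 0.0]])
print(match_markers(prev_pts, curr_pts))  # → [1, 0]
```

The 0.1 to 0.01 percent error tolerance Guenter cites is why a greedy matcher alone wouldn't suffice in practice: a single swapped pair of dots corrupts the reconstructed geometry for that frame, and no frames are fixed by hand.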

Frederic Pighin, a graduate student at the University of Washington, is taking a less data-intensive approach with a photography-based system the University and Microsoft are jointly exploring. He starts with six still photographs taken around the semicircle from ear to ear. Then he manually overlays them on a generic 3-D facial model, exploiting the common rendering trick of hiding coarseness and inaccuracy in the geometry by covering it with a realistic texture, which includes facial wrinkles.

To animate these faces, he selects a sequence of key frames, each with a different expression. The software then creates the in-betweens by morphing from one gesture to the other. To create even more expressions, the user can take a part of one expression, like a raised eyebrow, and paint it on a different face, say a smiley one instead of the pursed-lip face it came from.
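Geometrically, the in-betweens amount to interpolating vertex positions between key-frame expressions, and painting a region like a raised eyebrow onto another face amounts to a masked copy of vertices. A toy vertex-only sketch of both operations (Pighin's system also blends the textures, and all names here are illustrative):

```python
import numpy as np

def inbetween(key_a, key_b, t):
    """Linear vertex morph between two key-frame expressions, 0 <= t <= 1."""
    return (1.0 - t) * key_a + t * key_b

def paint_region(base, donor, mask):
    """Copy a facial region (e.g. a raised eyebrow) from one expression
    onto another: mask is 1 for donor vertices, 0 for base vertices."""
    return np.where(mask[:, None].astype(bool), donor, base)

neutral = np.zeros((4, 3))                      # toy 4-vertex "mesh"
smile = np.array([[0, 1, 0], [0, 0, 0],
                  [0, 0, 0], [0, -1, 0]], float)
print(inbetween(neutral, smile, 0.5))           # halfway expression
brow_mask = np.array([1, 0, 0, 0])              # vertex 0 belongs to the brow
print(paint_region(neutral, smile, brow_mask))  # smile's brow on neutral face
```

A real system would ease in and out of each key frame rather than interpolate linearly, but the principle is the same.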

There are some limitations. "One thing that is difficult is modeling hair," he explains. "Hair is hard because its real geometry is very complex and rendering it is difficult because it reflects light in complex ways--it's translucent."

