Papers: Character Animation

by Wendy Ju
July 23, 2002

 

How do computer graphics characters get their moves? Should they practice modeling? Study the masters? Work the mechanics out for themselves? Train with real people? The Papers session on Character Animation explored these alternative approaches to giving simulated characters life.

Advertised at the Papers Fast Forward as a "dangerous technology," Tony Ezzat's talk on Trainable Videorealistic Speech Animation did not disappoint. Using eight videotaped minutes of a subject reading, Ezzat and his fellow researchers at MIT's Center for Biological and Computational Learning were able to build a speech model correlating facial images with specific phonemes, then rearrange those images to make it look like their subject was saying something she never said. The use of multidimensional morphable models kept the transitions between data points smooth and very realistic. Even with knowledge of how the system worked, the video was compelling--even paranoia-inducing. Ezzat reports that in visual "Turing tests" subjects were able to guess which clips were synthesized only 54.3% of the time--roughly equal to blind guessing.
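
To give a feel for what the system is doing, here is a minimal Python sketch, not Ezzat's actual multidimensional morphable model: associate each phoneme with a prototype face image and synthesize a new utterance by blending smoothly between those prototypes over time. The phoneme labels, prototype images, and synthesize_mouth_frames function are hypothetical stand-ins.

import numpy as np

def synthesize_mouth_frames(phoneme_track, prototypes, frames_per_phoneme=4):
    """phoneme_track: phoneme labels in spoken order.
    prototypes: dict mapping phoneme label -> HxW float image.
    Returns frames that cross-dissolve between consecutive prototypes."""
    frames = []
    for cur, nxt in zip(phoneme_track, phoneme_track[1:]):
        a, b = prototypes[cur], prototypes[nxt]
        for t in np.linspace(0.0, 1.0, frames_per_phoneme, endpoint=False):
            # Plain pixel cross-dissolve; the real system morphs shape and
            # texture separately, which is what keeps transitions realistic.
            frames.append((1.0 - t) * a + t * b)
    frames.append(prototypes[phoneme_track[-1]])
    return frames

# Toy usage with random 64x64 images standing in for real face prototypes.
rng = np.random.default_rng(0)
protos = {p: rng.random((64, 64)) for p in ["M", "AA", "T", "IY"]}
clip = synthesize_mouth_frames(["M", "AA", "T", "IY"], protos)
print(len(clip), clip[0].shape)  # 13 blended frames, each 64x64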

Stanford animator-in-residence Lorie Loeb presented work she performed with Chris Bregler's group to enable motion capture of cartoons. Loeb argued that while motion capture certainly lets animators achieve realistic human motion, such methods do not necessarily lead to expressive animation. "The goal of animation is not always to be realistic," she pointed out. By reverse engineering the motion of existing cartoons and mapping those actions onto new animated models, animators can capture the spirit and dynamism of the old animation masters.
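
As a rough illustration of the retargeting idea, and not Bregler and Loeb's actual pipeline, the sketch below takes a motion curve tracked from a source cartoon and replays it, timing and exaggeration intact, on a new character's corresponding parameter. The retarget_curve function and its target_rest and target_scale parameters are invented for this example.

import numpy as np

def retarget_curve(source_curve, target_rest, target_scale):
    """source_curve: tracked parameter over time (e.g. a squash factor)
    from the original animation. target_rest / target_scale: where the new
    character's parameter sits at rest and how far it may travel."""
    rest = source_curve.mean()
    deviation = source_curve - rest
    # Preserve the shape of the motion (timing, overshoot) while rescaling
    # its amplitude to suit the new character.
    return target_rest + target_scale * deviation / (np.abs(deviation).max() + 1e-8)

# Toy usage: a decaying squash-and-stretch bounce mapped onto a stiffer character.
t = np.linspace(0, 2 * np.pi, 60)
squash = 1.0 + 0.4 * np.sin(3 * t) * np.exp(-0.3 * t)
stiff = retarget_curve(squash, target_rest=1.0, target_scale=0.15)
print(round(stiff.min(), 3), round(stiff.max(), 3))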

Karen Liu from the University of Washington was also interested in prototyping dynamic character animations, but she approached the challenge from a physical model that uses environmental constraint inference and momentum-based constraints to let her three-dimensional characters take on high-energy tasks such as jumping, hopscotch, and ice skating. The system is designed to animate these actions from very minimal specifications on the user's part, so it must infer what constraints are implied when a user moves a model from one spot to another or poses its joints.
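
The "minimal specification, physics fills in the rest" flavor can be shown with a small ballistic example; this is a hedged sketch, not Liu's actual momentum-constraint solver. Given only where the character leaves the ground, where it should land, and how long the flight takes, gravity determines the whole jump, so the animator never has to pose the airborne frames.

import numpy as np

G = np.array([0.0, -9.81])  # gravity in m/s^2

def jump_trajectory(takeoff, landing, flight_time, steps=30):
    """takeoff / landing: 2D positions the user specifies.
    flight_time: seconds in the air.
    Returns the implied center-of-mass path as a (steps, 2) array."""
    takeoff = np.asarray(takeoff, float)
    landing = np.asarray(landing, float)
    # Solve p(t) = p0 + v0*t + 0.5*g*t^2 for the takeoff velocity v0.
    v0 = (landing - takeoff - 0.5 * G * flight_time**2) / flight_time
    ts = np.linspace(0.0, flight_time, steps)[:, None]
    return takeoff + v0 * ts + 0.5 * G * ts**2

path = jump_trajectory(takeoff=(0.0, 0.0), landing=(1.5, 0.0), flight_time=0.6)
print(path[0], path[-1].round(3), round(path[:, 1].max(), 3))  # start, landing, apex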

The session closed with a presentation from the MIT Media Lab's Bruce Blumberg and Marc Downie. Their work, AlphaWolf, applies machine learning to the realm of virtual characters; the wolves in the AlphaWolf world act and react to one another and to their environment based on what they have learned through state-action learning. The wolves are trained the way real-world dogs are trained: rewarded for doing things correctly and scolded for doing things wrong, they take on new behaviors over time.
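
A tiny tabular learning loop makes the praise-and-scold idea concrete; this is a generic sketch, not the Media Lab's actual AlphaWolf architecture, and the states, actions, and trainer_feedback function are invented for illustration. Reward and punishment alone are enough to push the learned values toward the desired behavior.

import random

states = ["owner_says_sit", "owner_says_speak"]
actions = ["sit", "bark"]
q = {(s, a): 0.0 for s in states for a in actions}
alpha, epsilon = 0.3, 0.2  # learning rate and exploration rate (assumed values)

def trainer_feedback(state, action):
    # +1 for the behavior the trainer wants in this state, -1 otherwise.
    wanted = "sit" if state == "owner_says_sit" else "bark"
    return 1.0 if action == wanted else -1.0

random.seed(0)
for episode in range(500):
    s = random.choice(states)
    if random.random() < epsilon:
        a = random.choice(actions)                   # explore a new action
    else:
        a = max(actions, key=lambda x: q[(s, x)])    # exploit what it has learned
    r = trainer_feedback(s, a)
    q[(s, a)] += alpha * (r - q[(s, a)])             # one-step update; each trial is a single decision

for s in states:
    print(s, "->", max(actions, key=lambda x: q[(s, x)]))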

Considered as a whole, this session staked out numerous new areas for the field of character animation. The highlight, however, was still Ezzat's opening talk. His demos showed the subject speaking one- and two-syllable words, then had her sing "Hunter" by Dido and "Automatic" by Hikaru Utada. The best moment of the talk--and indeed of the session overall--was Marilyn Monroe doing her own karaoke rendition of "Hunter."

Official Paper Session Description

Details on Authors, etc