Aaron Hertzman's paper, "Painterly Styles for Expressive Rendering," is part of the session Art, Illustration, Expression.

Radek Grzeszczuk's paper, "NeuroAnimator: Fast Neural Network Emulation and Control of Physics-Based Models," belongs to the session Animation & Simulation.

Tools and Innovations

Sid and the Penguins features actors with their own personalities

Meanwhile, the steadfast refusal of the computer animation field to forsake the gospel of artistic talent and worship the idol of technology has only inspired the engineers further. Ken Perlin just wants to build better tools. He's currently taking the concepts of procedural texturing he previously developed for image rendering and applying them to interactive character animation. Whereas procedural texturing uses tunable algorithms to model materials like hair, smoke, or fur, Perlin's new project will control behavior and movement instead of appearance. This will give "people the ability to tune unpredictable things in predictably unpredictable ways," Perlin says. For example, there's a lot about how a character's arm moves that is uncontrolled and depends on the context of the action. But, Perlin adds, there are lots of arm motions that are anatomically impossible, and his new algorithms will limit the character to what's realistic while also helping to define the character's unique kinesthetic personality.
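The flavor of that idea can be sketched in a few lines of code. The snippet below is only an illustration assembled for this article, not Perlin's system: it layers smooth, tunable noise (a simple 1-D value noise standing in for true Perlin noise) onto a base arm swing and clamps the result to a plausible joint range, so the "unpredictability" stays within anatomical limits. The parameter names and ranges are made up for the sketch.

```python
import math, random

# Illustrative sketch only, not Perlin's actual system: layer smooth, tunable
# noise onto a base arm swing and clamp the result to an anatomically
# plausible joint range. The noise is simple 1-D value noise, a stand-in
# for true Perlin noise.

random.seed(7)
_LATTICE = [random.uniform(-1.0, 1.0) for _ in range(256)]

def value_noise(t):
    """Smoothly interpolate between random values at integer lattice points."""
    i = int(math.floor(t))
    frac = t - i
    a = _LATTICE[i % 256]
    b = _LATTICE[(i + 1) % 256]
    w = 0.5 - 0.5 * math.cos(math.pi * frac)   # cosine ease between lattice values
    return a * (1 - w) + b * w

def elbow_angle(time_s, nervousness=0.3):
    """Base swing plus noise; 'nervousness' tunes how jittery the motion looks."""
    base = 60.0 + 25.0 * math.sin(2.0 * math.pi * 0.5 * time_s)  # degrees
    jitter = 15.0 * nervousness * value_noise(1.7 * time_s)
    return max(0.0, min(145.0, base + jitter))   # clamp to a plausible elbow range

if __name__ == "__main__":
    for frame in range(5):
        t = frame / 30.0
        print(f"t={t:.2f}s  elbow={elbow_angle(t):.1f} deg")
```

Tuning a single knob like the hypothetical "nervousness" parameter is the kind of predictable unpredictability Perlin describes: the motion never repeats exactly, but it stays in character and within the limits of a real arm.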

Imagine visiting a virtual world drawn not in straight lines, but with the deft ambiguity of Cezanne's brushstrokes. The people you meet there aren't a crude mass of polygons, but glow in the sympathetic softness of a Monet portrait. That's the vision NYU grad student Aaron Hertzman has for his "painterly rendering" algorithms. They offer an alternative to photorealism by painting over a source photo. "A photo is not expressive because it only captures the way light bounces off objects in the world," Hertzman explains. "A painting will represent just the elements of a scene that are important. It will exaggerate and manipulate to clarify the scene."

To do this, Hertzman's software makes multiple passes much as a human painter would, first using broad brushstrokes to establish large areas of color and then using a smaller brush to refine the detail. "It's more appealing," Hertzman says, "instead of just going over the whole image with a small brush." He's not trying to replace the human artistic eye, but he is trying to teach the computer to render more like people paint. "It's more expressive to paint in ways that match what your eye picks up. Your eye picks up edges and colors. A small brush on everything flattens the image."
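The coarse-to-fine layering can be captured in a short sketch. The code below is a deliberately simplified illustration written for this article, not Hertzman's published algorithm (which paints curved strokes that follow color edges): it works on a grayscale image, blurs the source to match each brush size, and dabs round strokes only where the canvas still differs noticeably from that reference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def paint(source, brush_radii=(16, 8, 4), threshold=25.0):
    """Coarse-to-fine painterly pass over a grayscale image (H x W floats, 0-255).

    For each brush size (largest first), blur the source to that scale, then dab
    circular strokes of the blurred color wherever the canvas still differs from
    the reference by more than `threshold`.
    """
    h, w = source.shape
    canvas = np.full((h, w), 128.0)                     # start from a neutral gray canvas
    yy, xx = np.mgrid[0:h, 0:w]
    for r in sorted(brush_radii, reverse=True):
        reference = gaussian_filter(source, sigma=r)    # scale-matched reference
        for cy in range(0, h, r):                       # visit a grid spaced by brush radius
            for cx in range(0, w, r):
                if abs(canvas[cy, cx] - reference[cy, cx]) > threshold:
                    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
                    canvas[mask] = reference[cy, cx]    # dab one round stroke
    return canvas

if __name__ == "__main__":
    img = np.tile(np.linspace(0, 255, 128), (128, 1))   # synthetic gradient test image
    out = paint(img)
    print(out.shape, out.min(), out.max())
```

Because later layers only touch regions that still look wrong, fine strokes cluster around edges and detail while broad strokes survive in flat areas, which is the effect Hertzman describes.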

A double major in computer science and art history as an undergraduate, Hertzman picked up this definition of expressiveness from a painting teacher, who used the term to refer not just to a picture's emotional content but also to how well the painting conveyed the light and other visual qualities.

If Hertzman's automated painter is bringing the visual ambiguity of traditional art to the science of computer graphics, Radek Grzeszczuk is bringing the hard mathematics of physical science to the art of computer animation. Physical simulations are valuable because they model the process underlying a movement, such as the contraction of a dolphin's tail fluke muscles, or the pivoting of a pendulum pulled by gravity. An animation produced by such a simulation is not a guess based on the animator's observation or intuition; it is objectively realistic and it is generated by an automated process. But until now, the worlds of physical simulation and real-time animation have been held apart by the vast computational resources required to mathematically model real-world processes.
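For a concrete sense of what such a simulation does, here is a minimal pendulum integrator, a toy example written for this article rather than code from Grzeszczuk's work: it steps the equation of motion theta'' = -(g/L) * sin(theta) forward in time, so every frame of the swing follows from the physics rather than from an animator's guess.

```python
import math

# Toy physical simulation: integrate a pendulum's equation of motion.
g, L = 9.81, 1.0          # gravity (m/s^2), pendulum length (m)
theta, omega = 0.5, 0.0   # initial angle (rad) and angular velocity (rad/s)
dt = 0.01                 # time step (s)

for step in range(300):   # simulate 3 seconds
    alpha = -(g / L) * math.sin(theta)   # angular acceleration from gravity
    omega += alpha * dt                  # semi-implicit Euler integration
    theta += omega * dt
    if step % 100 == 0:
        print(f"t={step * dt:.1f}s  theta={theta:.3f} rad")
```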

Working at Intel, Grzeszczuk (pronounced "Jesh TOOK") bridges the gap with neural networks, which learn via feedback loops between layers of individual software modules. Grzeszczuk's method involves training a neural network to emulate the physical simulation, and then giving the neural net a performance goal so that it can iteratively reach the desired output (such as having the dolphin swim as far as it can). "NeuroAnimator," as Grzeszczuk has dubbed his creation, both cuts calculation overhead by a factor of as much as a hundred and gives control over the animation that running a traditional simulation doesn't allow. So far NeuroAnimator does well with systems with continuous forces acting on them. It will take some more work, Grzeszczuk admits, to deal with the constantly changing forces involved in a person walking or in collisions between objects.
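As described, the method has two phases: train a network to mimic the simulator one step at a time, then exploit the fact that the trained network is cheap and differentiable to search for controls that achieve a goal. The PyTorch sketch below shows that two-phase loop with made-up dimensions, hyperparameters, and a toy stand-in for the physics; it illustrates the idea rather than reproducing Grzeszczuk's code.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the two-phase idea: (1) train a small network to emulate
# one step of a physics simulator, (2) backpropagate through the cheap,
# differentiable emulator to find controls that meet a performance goal.

STATE_DIM, CTRL_DIM = 6, 2
_W = torch.randn(CTRL_DIM, STATE_DIM)           # fixed weights for the toy dynamics

def true_simulator(state, ctrl):
    """Stand-in for an expensive physics step (cheap toy dynamics here)."""
    return state + 0.1 * torch.tanh(ctrl @ _W)

emulator = nn.Sequential(                        # maps (state, control) -> next state
    nn.Linear(STATE_DIM + CTRL_DIM, 64), nn.Tanh(),
    nn.Linear(64, STATE_DIM),
)

# Phase 1: train the emulator on input/output pairs sampled from the simulator.
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
for _ in range(500):
    s, u = torch.randn(32, STATE_DIM), torch.randn(32, CTRL_DIM)
    loss = ((emulator(torch.cat([s, u], dim=1)) - true_simulator(s, u)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: optimize a control sequence through the emulator toward a goal,
# e.g. "travel as far as possible along the first state coordinate."
controls = torch.zeros(10, CTRL_DIM, requires_grad=True)   # 10 animation steps
ctrl_opt = torch.optim.Adam([controls], lr=0.05)
for _ in range(200):
    state = torch.zeros(1, STATE_DIM)
    for u in controls:                           # roll the emulator forward in time
        state = emulator(torch.cat([state, u.unsqueeze(0)], dim=1))
    goal_loss = -state[0, 0]                     # maximize distance traveled
    ctrl_opt.zero_grad(); goal_loss.backward(); ctrl_opt.step()
```

The speedup comes from phase 2 running entirely inside the emulator, and the controllability comes from being able to differentiate the goal with respect to the controls, something a black-box simulation run doesn't offer.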

According to Grzeszczuk, other developers could implement NeuroAnimator fairly easily, since it uses an off-the-shelf neural network tool.

 

