Virtual Humans or Animating Synthetic Actors

Definitions:



Motion Control Methods (MCM)

We have previously looked at examples of the different types of MCMs.

We can characterize each type of MCM by the kind of information that is of primary importance in animating an object, especially an articulated figure. For a keyframe system this is the angle of each of the joints. In a forward kinematics system the motion of all the joints is explicitly set by the animator; for a human, for example, the animator would move the shoulder, upper arm, elbow, forearm, and hand. In inverse kinematics the animator moves only the end effector, and the system computes the corresponding positions of the rest of the chain of links.
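To make the contrast concrete, here is a minimal sketch (not from the original notes) of analytic inverse kinematics for a planar two-link arm, together with the forward kinematics that a keyframe or forward-kinematics system would evaluate directly. The function names and default link lengths are illustrative assumptions.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar two-link arm.

    Given a target (x, y) for the end effector, return joint angles
    (shoulder, elbow) in radians. Assumes the target is reachable
    (distance to target <= l1 + l2).
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to target minus the offset caused by the bend.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward_kinematics(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles -> end-effector position."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

Running the two in sequence round-trips a reachable target, which is a quick sanity check that the IK solution is consistent with the FK chain.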

For geometric MCMs the primary information is geometric, e.g. coordinate positions, angles, etc. For physically based MCMs, which are driven by physical laws, the primary information is the set of physical characteristics of the system, e.g. mass, moments of inertia, stiffness (spring force constants), etc. For behavioral systems the primary information is the set of behaviors that motivate the system. We also need to consider how the actor interfaces with the rest of the scene.
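As a sketch of the physically based case, the following simulates a single damped spring: the motion comes from integrating physical parameters (mass, stiffness, damping) rather than from keyframed positions. The function name and the semi-implicit Euler integrator are illustrative choices of mine, not something specified in these notes.

```python
def simulate_spring(mass, stiffness, damping, x0, steps, dt=0.01):
    """Physically based MCM sketch: a damped spring integrated with
    semi-implicit Euler. The trajectory is entirely determined by the
    physical parameters, not by explicit animator-set positions."""
    x, v = x0, 0.0
    trajectory = [x]
    for _ in range(steps):
        force = -stiffness * x - damping * v  # Hooke's law + viscous damping
        v += dt * force / mass                # update velocity first...
        x += dt * v                           # ...then position (semi-implicit)
        trajectory.append(x)
    return trajectory
```

Displacing the spring and letting it go produces a decaying oscillation toward rest, with the decay rate set by the damping coefficient.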

Actor Interfaces

There are four basic cases:

  1. Single actor: the synthetic actor is alone and does not interact with other objects
  2. Actor-environment interface: the actor is alone but is moving in and interacting with an environment
  3. Actor-actor interface: actions performed by one actor are known to other actors and may change their behavior
  4. Actor-animator interface: The animator can communicate information to the actor and the actor might be able to respond and communicate information to the animator

We now look at MCMs for each of the above actor-interface categories.

Single Actor Situation

The animator has full control of the actor, and the actor is unaware of its environment. There is no real-time control, only batch-level control, e.g. keyframes. Currently, this is the dominant form of computer animation.

Geometric MCMs: Kinematics, both forward and inverse, falls into this category. A good method for obtaining realistic motion is rotoscopy (motion capture), where sensors attached to real actors provide coordinates for input to the synthetic actors.

Vision Based Navigation for Synthetic Actors

Synthetic vision is used for navigation by a synthetic actor, with the vision being the only communication channel between the actor and its environment. The actor can learn about its environment and have the environment affect its behavior. In this sense it is related to the work by Reynolds on Flocks, etc.

The specific goal is for the actor to explore an unknown environment and to build mental models from this exploration. The actor needs a navigation system. The task of a navigation system is to plan a path to a specified goal and execute this plan. There are two parts to this:

A global navigation system uses a prelearned model of the domain, which may be simplified and may not reflect recent changes. This model is used by a path-planning algorithm. The local navigation system uses direct input from the environment to reach goals and sub-goals given by the global navigation system and to avoid unexpected obstacles. The local navigation system has no domain model.

This approach uses a synthetic vision system, where the actor sees a 2D image with each pixel having the object that would be projected to that pixel plus the distance to that object.
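The per-pixel (object-id, distance) image can be sketched as a toy ray-casting scan over a 2D occupancy grid. This is a deliberate simplification (the actual system renders a 2D image of a 3D scene), and all names and parameters below are assumptions for illustration.

```python
import math

def synthetic_vision(grid, pos, heading, fov=math.pi / 2, rays=9, max_dist=20.0):
    """Toy synthetic-vision scan: cast `rays` rays through a 2D occupancy
    grid (nonzero cell value = object-id) and record, per 'pixel', the id
    of the first object hit and the distance to it. Object-id 0 means
    nothing was seen within max_dist. Assumes rays >= 2."""
    image = []
    for i in range(rays):
        angle = heading - fov / 2 + fov * i / (rays - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        hit = (0, max_dist)
        t, step = 0.0, 0.1
        while t < max_dist:
            x, y = pos[0] + t * dx, pos[1] + t * dy
            cx, cy = int(x), int(y)
            if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx]:
                hit = (grid[cy][cx], t)   # first object along this ray
                break
            t += step
        image.append(hit)
    return image
```

The key property, as in the notes, is that each "pixel" carries both an object identity and a depth, so the actor can reason about what it sees and how far away it is.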

Global Nav. System

The internal representation of the environment is an octree. It is constructed by taking the 2D image and mapping that information to the appropriate voxel in the octree (since we know the distance and the object for each pixel of the 2D image). The octree is dynamic: as new information is received, new nodes are created (an insert operation), and nodes can be deleted if the corresponding volumes have moved or disappeared. Note that the octree is only a rough picture of the environment.
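A minimal dynamic octree along these lines might look as follows. This is an illustrative sketch only (fixed maximum depth, point insertions, the delete operation omitted), not the original implementation, and all names are mine.

```python
class Octree:
    """Minimal dynamic octree sketch: points observed by the vision
    system are inserted into the voxel containing them, creating child
    nodes on demand (the 'insert' operation from the notes)."""
    def __init__(self, center, half, depth):
        self.center, self.half, self.depth = center, half, depth
        self.children = {}      # octant index -> Octree, created lazily
        self.object_id = None   # set at leaf voxels

    def _octant(self, p):
        # 3-bit octant index: one bit per axis on which p >= center.
        return sum(1 << i for i in range(3) if p[i] >= self.center[i])

    def insert(self, p, object_id):
        if self.depth == 0:     # leaf voxel: record what occupies it
            self.object_id = object_id
            return
        idx = self._octant(p)
        if idx not in self.children:
            offset = self.half / 2
            child_center = tuple(
                self.center[i] + (offset if p[i] >= self.center[i] else -offset)
                for i in range(3))
            self.children[idx] = Octree(child_center, offset, self.depth - 1)
        self.children[idx].insert(p, object_id)

    def query(self, p):
        """Return the object-id recorded at p's voxel, or None if that
        region of space has never been observed."""
        if self.depth == 0:
            return self.object_id
        idx = self._octant(p)
        return self.children[idx].query(p) if idx in self.children else None
```

Because children are created lazily, the tree stays small where nothing has been observed, which fits the "rough picture of the environment" role described above.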

Then we can consider the set of empty voxels in the octree as a graph and use heuristic path searching algorithms to find a path through these voxels to our goal.
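Treating the empty voxels as a graph, a heuristic search such as A* can find such a path. The sketch below simplifies to a 2D grid of free cells with a Manhattan-distance heuristic; the choice of A* and all names are assumptions, since the notes say only "heuristic path searching algorithms."

```python
import heapq

def astar(free, start, goal):
    """A*-style heuristic search over empty voxels, simplified here to a
    set of free 2D grid cells. Returns a shortest path as a list of
    cells, or None if the goal is unreachable."""
    def h(c):  # Manhattan distance: admissible on a 4-connected grid
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in seen:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1,
                                          nxt, path + [nxt]))
    return None  # no path through the free voxels
```

With an admissible heuristic the first path popped at the goal is optimal, so the actor gets a shortest route through the known free space.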

Local Nav. System

There are three basic modules:

Vision Module

This is the synthetic vision described earlier, which provides a 2D image containing, for each pixel, the projected object (as an object-id) and its distance.

Controller Module

This uses DLAs (Displacement Local Automata) that create goals and sub-goals to accomplish the main goal. There are two kinds of goals to consider. The global or final goal is the goal the actor must reach. The local or temporary goal is to avoid obstacles encountered on the path to the final goal. Goal generation and actor displacement are performed by the DLAs. The controller chooses the appropriate DLAs. There may be an external guide, in which case the choice is the DLA follow-the-guide, or the choice may be hard-coded to correspond to certain object-ids.

Performer Module

This module actually contains the DLAs. There are three types:

  1. DLAs creating the global goal: follow-the-corridor, follow-the-wall, follow-the-visual-guide
  2. DLAs creating local goals: avoid-obstacle, closest-to-goal
  3. The DLA that actually moves the actor: go-to-global-goal

The DLAs react to the vision system. For example, follow-the-corridor computes a goal by finding the center point between the ends of the two corridor walls. In follow-the-guide, the vision system is used to locate the guide.
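Two of these DLAs can be sketched in a few lines, assuming the vision module has already been reduced to 2D floor coordinates; the signatures and step size are hypothetical.

```python
import math

def follow_the_corridor(left_wall_end, right_wall_end):
    """DLA sketch: the goal is the midpoint between the visible ends of
    the two corridor walls (2D floor coordinates assumed to have been
    extracted from the vision image)."""
    return tuple((a + b) / 2.0 for a, b in zip(left_wall_end, right_wall_end))

def go_to_global_goal(position, goal, step=0.5):
    """DLA sketch: displace the actor a fixed step toward the current
    goal, snapping to the goal when it is within one step."""
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return goal
    return (position[0] + step * dx / dist, position[1] + step * dy / dist)
```

In use, the controller would call a goal-creating DLA such as follow-the-corridor each frame and then feed its result to go-to-global-goal to displace the actor.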

 


Last changed December 31, 1998, G. Scott Owen, owen@siggraph.org