A SIGGRAPH AUDIO GALLERY


Artists give the computer a face and a body. Composers give the computer a voice, and give substance in sound when bodies make contact. What we cannot touch in the computer we can feel with our ears.

Frame by frame, time is brought to play on the image. The voice articulates time, and this articulation guides the design of movements before the motion can be seen. The interplay of real time in sound and imagined time in animation keyframes can be called composition.

In the old days sound was added after the image was finished. Today sound still comes after the image in most "post-production" approaches: an after-image, an afterthought. Sound added so late is too late to help decide a picture or its movement.

The animations represented on this page fly against the tradition of picture-first-sound-later. Musicians were involved in each of these projects from the beginning, contributing to storyboards, shaping narrative rhythms, making time for sounds to have their say.

	
Multi-track Music and FX:
Venus and Milo, Cox et al., 1990.
Voices, environmental sounds, music, and flying chocolates remind us that modern art is not so hard to swallow.
	

SFX Synchronized to Motion Paths:
Garbage, Cox et al., 1991.
Impact sounds are coordinated with animation data. When a bottle bounces on pavement, our software automatically obtains the appropriate sound and aligns it to the visual bounce.
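
As a sketch of how such alignment might work (the original NCSA software's interface is not documented here), the following assumes per-frame object heights sampled at a fixed frame rate; the function name, tolerance, and sound-file name are illustrative:

FRAME_RATE = 30.0  # animation frames per second (assumed)

def find_bounces(heights, ground=0.0, tolerance=0.01):
    """Return times (in seconds) at which the object strikes the ground.

    A bounce is a frame at or near ground level where vertical motion
    turns from falling to rising.
    """
    events = []
    for i in range(1, len(heights) - 1):
        falling = heights[i] < heights[i - 1]
        rising = heights[i + 1] > heights[i]
        near_ground = heights[i] <= ground + tolerance
        if falling and rising and near_ground:
            events.append(i / FRAME_RATE)
    return events

# Usage: cue a pre-recorded impact sample at each detected bounce.
heights = [1.0, 0.6, 0.2, 0.0, 0.3, 0.5, 0.2, 0.0, 0.15, 0.25]
for t in find_bounces(heights):
    print(f"cue bottle_impact.wav at {t:.3f} s")

Because the events come from the animation data itself, the cues stay aligned even when the motion path is re-timed.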
			
Facial Expression Data Drives Sound Synthesis:
The Listener, Landreth and Bargar, 1991.

Data from muscles in an animated face modulate rhythm and timbre, creating abstract electro-acoustic prose that echoes the facial expressions. Like an animated imagination, the Listener's face generates the sounds to which the Listener appears to be reacting.
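
A minimal sketch of such a mapping, assuming the face is stored as per-frame muscle activations in [0, 1]; the muscle names and the particular rhythm and timbre mappings are hypothetical, not the Listener's actual ones:

def muscles_to_synth_params(frame):
    """Map one frame of muscle activations to synthesis controls.

    frame: dict of muscle name -> activation in [0, 1] (names hypothetical).
    Returns (events_per_second, cutoff_hz) controlling rhythm and timbre.
    """
    brow = frame.get("brow_raise", 0.0)
    jaw = frame.get("jaw_open", 0.0)
    smile = frame.get("smile", 0.0)

    events_per_second = 1.0 + 7.0 * jaw                # rhythm follows the jaw
    cutoff_hz = 200.0 + 4000.0 * (brow + smile) / 2.0  # timbre follows brow and mouth
    return events_per_second, cutoff_hz

# Usage over a short animation sequence:
sequence = [
    {"brow_raise": 0.1, "jaw_open": 0.0, "smile": 0.2},
    {"brow_raise": 0.8, "jaw_open": 0.5, "smile": 0.1},
]
for i, frame in enumerate(sequence):
    rate, cutoff = muscles_to_synth_params(frame)
    print(f"frame {i}: {rate:.1f} events/s, cutoff {cutoff:.0f} Hz")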
	     


Granular Sound Synthesis from Particle Systems; Sound Control Signals Applied to Animated Facial Expressions:
Data-Driven: The Story of Franz K., Landreth, 1993.

Particle systems create swirls of sound from Franz's cigarette smoke, forming a 3D volumetric head. In return, the head pours out musical data applied to the animation channels controlling Franz's face. His ecstasy results from a literal and numerical interpretation of the information content of music, transformed by a listener into an emotive response. Reversing the data control flow used in The Listener, this scene brings new meaning to the phrase "face the music."
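
A minimal granular-synthesis sketch, assuming each smoke particle carries a 3D position and a speed per frame; the mapping of height to pitch and speed to loudness is illustrative, not the film's actual one:

import math
import random

SAMPLE_RATE = 44100

def grain(freq_hz, dur_s, amp):
    """One Hann-windowed sine grain, returned as a list of samples."""
    n = int(dur_s * SAMPLE_RATE)
    return [
        amp * 0.5 * (1.0 - math.cos(2 * math.pi * i / n))   # Hann window
        * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        for i in range(n)
    ]

def particles_to_grains(particles):
    """Spawn one grain per particle: height sets pitch, speed sets loudness."""
    grains = []
    for p in particles:
        x, y, z = p["pos"]         # x and z could drive pan and onset time
        freq = 200.0 + 800.0 * y   # higher particles sound higher
        amp = min(1.0, 0.1 + p["speed"])
        grains.append(grain(freq, 0.05, amp))
    return grains

# Usage with a random puff of "smoke":
puff = [{"pos": (random.random(), random.random(), random.random()),
         "speed": random.random()}
        for _ in range(16)]
grains = particles_to_grains(puff)
print(f"{len(grains)} grains of {len(grains[0])} samples each")

Summing many such short grains, each parameterized by one particle, yields the swirling textures the description suggests.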


Composition on a Global Scale; Music and Motion Capture; Voice Morphing:

The End, Landreth and Bargar, 1995.

A new tool, the NCSA Sound Server, gives us great flexibility and rapid prototyping of audio-visual algorithms. Sound synthesis happens in real time on the same platform that renders the graphics. Creating The End in only six months was possible because we could test post-production concepts in production, while the frames were still in wireframe. No need to "fix it in the mix."
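
The pattern is a render loop sending control updates to a synthesis process; a minimal sketch, assuming a hypothetical UDP text protocol (the Sound Server's real interface and message names are not reproduced here):

import socket

SERVER_ADDR = ("localhost", 7770)  # hypothetical sound-server address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_control(name, value):
    """Send one named synthesis-control update to the sound server."""
    sock.sendto(f"{name} {value:.4f}".encode(), SERVER_ADDR)

# Inside the render loop: drive the audio from the same scene data
# that drives the graphics, so sounds can be auditioned while the
# frames are still in wireframe.
for frame in range(3):
    camera_height = 0.5 + 0.1 * frame  # stand-in for real scene data
    send_control("drone_pitch", 100.0 + 50.0 * camera_height)
    send_control("drone_amp", 0.8)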

For more information about our music compositions, please listen to the Audio Group home page.
Images courtesy of Donna Cox, Chris Landreth, NCSA, NCSC, and Alias.

