NCSA Sound Server
There are a formidable number of numerical models that may be valuable for sound synthesis or composition. Since these models are already implemented in software by experts in their fields, it makes little sense to duplicate that effort in order to generalize their code. We have therefore adopted a client-server architecture, so that existing software models can serve as control programs for synthesis software with a minimum of re-programming.
The server can process messages sent to high-level composition routines or to low-level primitives. Our class hierarchy comprises three subsystems: at the low end, the scheduler, sample buffers, and basic synthesis algorithms; a middle layer that defines note events and synthesis instrument configurations; and a top level for describing complex musical events.
HTM [Freed, 1992] is a system for real-time interactive sound creation. It is based on the client-server model. In this model, the application program that needs sound is called the client. The client sends requests to the server,
which is another program, usually running on a different computer. The server then fulfills the client's requests to the best of its ability. The HTM server is a program that accepts commands from a client application program and schedules message processing and sound-sample generation. On top of this, a number of synthesis algorithms are implemented that HTM uses to generate its samples, such as FM, additive synthesis, sample playback, and MIDI. We call this the Vanilla Sound Server (VSS). New additions to the sound server's set of synthesis algorithms include a more efficient additive synthesis engine (Oscillator Bank), IRCAM's Chant libraries, and granular synthesis.
VSS is based on the concept of a note event: a continuing auditory event with a unique identity. When the client starts a note playing on the server, a note handle is returned. This handle is a floating-point number that can be used to refer to the note in the future, so that the client can, for instance, change the note's pitch or turn the note off.
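The note-handle mechanism can be sketched as follows. This is a minimal illustrative model of the idea, not the actual VSS client API; all names (`NoteServer`, `start_note`, and so on) are hypothetical:

```python
# Illustrative sketch of VSS-style note handles (hypothetical names,
# not the real VSS client API).

class NoteServer:
    """Tracks live note events, each identified by a unique float handle."""

    def __init__(self):
        self._next_handle = 1.0
        self._notes = {}            # handle -> parameter dict

    def start_note(self, **params):
        """Start a note playing and return its float handle."""
        handle = self._next_handle
        self._next_handle += 1.0
        self._notes[handle] = dict(params)
        return handle

    def set_param(self, handle, name, value):
        """Change a parameter (e.g. pitch) of a still-sounding note."""
        self._notes[handle][name] = value

    def stop_note(self, handle):
        """End the note event and invalidate its handle."""
        del self._notes[handle]

server = NoteServer()
h = server.start_note(pitch=440.0, amp=0.5)   # client receives a handle
server.set_param(h, "pitch", 660.0)           # later: change its pitch
server.stop_note(h)                           # finally: turn the note off
```

The point of the handle is that the client never manipulates server state directly; it only names a note event and sends messages about it.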
Functions that control groups of parameter changes are implemented above VSS to provide higher-level control of its existing functionality. Groups are composed of dynamic objects that lie just above the level of VSS. The client program can communicate with these objects to control VSS, taking advantage of the objects' built-in rules and knowledge and making the interaction simpler and higher-level. The objects that make up the complex models provide access to all the functionality in VSS and preserve the concept of the note handle. In addition, each object also has a unique
handle, so that the client can send multiple messages to the same object. As
with notes, the object handle is a floating-point number returned to the client when the object is created or retrieved. For each VSS synthesis algorithm, there is a corresponding object that functions as an interface wrapper for that algorithm. Many instances of each object can be created, and they can act either
independently or in tandem. For every command applicable to a VSS algorithm, there is a corresponding message you can send to its higher-level object, so you
don't lose access to lower-level functionality by using these objects. The messages that objects send to each other are in the same form that the client
uses to send messages to the server. The result of this is that an object does not know or care whether a message comes from a client or from another object.
This is useful in building up a network, as the client can test different subsets of the network independently.
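This sender-agnostic message passing can be sketched as follows. The class and message names here are hypothetical, invented only to illustrate that an object receives the same message form whether it comes from the client or from a peer object:

```python
# Illustrative sketch of VSS-style higher-level objects (hypothetical
# names).  A message is a plain (selector, args) pair, identical in form
# whether it originates from the client or from another object, so the
# receiver never needs to know who sent it.

class SoundObject:
    def __init__(self, handle):
        self.handle = handle        # unique float handle, as with notes
        self.received = []          # log of messages, for inspection
        self.downstream = []        # objects this one forwards to

    def receive(self, selector, *args):
        """Handle a message; the sender's identity is irrelevant."""
        self.received.append((selector, args))
        for obj in self.downstream:
            obj.receive(selector, *args)   # object-to-object, same form

osc = SoundObject(handle=1.0)
filt = SoundObject(handle=2.0)
osc.downstream.append(filt)        # build a small network

osc.receive("SetFreq", 440.0)      # a "client" message to osc is
                                   # forwarded to filt unchanged
```

Because both objects log the identical `("SetFreq", (440.0,))` message, a client can drive any subset of such a network directly when testing it, exactly as the text describes.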
The Generic Interface
The generic interface is intended to simplify the task of adding and modifying sound in an application. It is designed so that although the flow of control and structure is defined in the application code, the types of sounds that are
actually played are defined externally, in an input file. This allows an application's sound to be modified without changing or recompiling the application itself. To use the interface, the client must tell the server which objects it wants to use and how it wants those objects configured. Then, the
client will send data to the objects, either at regular intervals or whenever a state in the application changes.
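The decoupling the generic interface provides can be sketched as follows. The file format and function names here are hypothetical, not the actual VSS input-file syntax; the sketch only shows the principle that the application reports named events while an external file decides what sound each event drives:

```python
# Illustrative sketch of the generic-interface idea (hypothetical file
# format and names).  The application only reports named events with
# data; which synthesis algorithm and parameter each event controls is
# defined externally, so sounds can change without recompiling the app.

CONFIG = """
collision  fm      pitch
heartbeat  sample  rate
"""

def load_config(text):
    """Map event name -> (synthesis algorithm, controlled parameter)."""
    table = {}
    for line in text.strip().splitlines():
        event, algorithm, param = line.split()
        table[event] = (algorithm, param)
    return table

def handle_event(table, event, value):
    """Turn an application event into an (algorithm, param, value) message."""
    algorithm, param = table[event]
    return (algorithm, param, value)

table = load_config(CONFIG)
msg = handle_event(table, "collision", 0.8)   # -> ("fm", "pitch", 0.8)
```

Editing `CONFIG` (in a real system, the external input file) rebinds "collision" to a different algorithm or parameter with no change to the application code that emits the event.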
If you are using an SGI with IRIX 5.0 or higher, these examples will actually run VSS on your local machine, and synthesize sound from your choice of the following simple clients.
No SGI? I understand. Here's what they would sound like!
Chant. To ensure real-time play, this client tells VSS to play a pre-existing samples-file of this Chant sound.
Additive Synthesis. * This samples-file was analyzed using an analysis/synthesis package developed by the CERL Sound Group at the University of Illinois. The result? A beautiful picture: time is shown horizontally, frequency vertically, and amplitude as varying shades of grey. *
(* Note to SGI users: if you insist on playing these 8 kHz soundfiles instead of freshly computing them, reset the audio output rate to 22 kHz.)
R. Bargar. Model-based interactive sound for an immersive virtual environment. Proc. International Computer Music Conference, Aarhus, Denmark, October 1994.
A. Freed. Tools for rapid prototyping of music sound synthesis algorithms. Proc. International Computer Music Conference, San Jose, October 1992.