Notes for an ACM SIGCOMM 97 Tutorial

The notes shown below are for a tutorial on "Internetworked 3D Computer Graphics: Beyond the Bottlenecks and Roadblocks" that will be presented at ACM SIGCOMM '97. The instructors in this tutorial are Don Brutzman, Mike Macedonia, Theresa-Marie Rhyne and Steve McCanne. SIGCOMM is ACM's Special Interest Group on Data Communications.

This tutorial is a cooperative effort between ACM SIGCOMM and ACM SIGGRAPH to examine the inter-relationships between Data Communications and Computer Graphics. This Special Project is being funded by the SIGCOMM organization, the SIGCOMM 97 conference and the SIGGRAPH Special Projects Committee. The material presented below consists of notes for the lecture on "Overview of 3D Interactive Graphics".

Internetworked 3D Computer Graphics: Beyond Bottlenecks and Roadblocks

II. Overview of 3D Interactive Graphics

Theresa-Marie Rhyne
Director-at-Large, ACM SIGGRAPH


Defining Interactive 3D Computer Graphics

Computer graphics includes the processes and outcomes associated with using computer technology to convert created or collected data into visual representations. The computer graphics field is motivated by the general need for interactive graphical user interfaces that support mouse, window and widget functions. Other sources of inspiration include digital media technologies, scientific visualization, virtual reality, arts and entertainment. Computer graphics encompasses scientific, artistic and engineering functions. Mathematical relationships or Geometry define many of the components in a particular computer graphics "scene" or composition. Physics fundamentals are the basis for lighting a scene. The layout, color, texture and harmony of a particular composition are established with Art and Perception principles. Computer graphics hardware and software are rooted in the Engineering fields. The successful integration of these concepts allows for the effective implementation of interactive, three-dimensional (3D) computer graphics.

Interactive 3D graphics provides the capability to produce moving pictures or animation. This is especially helpful when exploring time-varying phenomena such as weather changes in the atmosphere, the deflection of an airplane wing in flight, or telecommunications usage patterns. Interaction gives individual users the ability to control parameters such as the speed of an animation and the geometric relationships of the objects in a scene to one another.

The Building Blocks of Interactive 3D Computer Graphics

Defining Primitives:

Primitives are the basic geometrical shapes used to construct computer graphics scenes and the resulting final images. Each primitive has attributes such as size, color, line style and line width. In two dimensions, examples of primitives include the line, circle, ellipse, arc, text, polyline, polygon, and spline.

Figure #1: Examples of 2D primitives, image by Theresa-Marie Rhyne, 1997.

For 3D space, examples of primitives include the cylinder, sphere, cube and cone. 3D viewing is complicated by the fact that computer display devices are only 2D. Projections resolve this issue by transforming 3D objects onto 2D displays.
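The simplest such transformation, a perspective projection through a pinhole camera at the origin, can be sketched in a few lines of Python. The function name and the focal distance parameter `d` are illustrative assumptions, not part of any graphics API:

```python
def project(point, d=1.0):
    """Perspective-project a 3D point onto the z = d image plane.

    Assumes a pinhole camera at the origin looking down the +z axis;
    the point must have z > 0. Returns 2D screen coordinates.
    """
    x, y, z = point
    return (d * x / z, d * y / z)

# A point twice as far from the camera projects to half the size:
print(project((2.0, 2.0, 2.0)))  # (1.0, 1.0)
print(project((2.0, 2.0, 4.0)))  # (0.5, 0.5)
```

The division by depth `z` is what makes distant objects appear smaller, which is the essence of a perspective projection.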

Figure #2: Examples of 3D primitives, image provided courtesy of Mark Pesce. This is a snapshot from a 3D VRML world entitled "Zero Circle".

Comprehending Lighting Models:

Lighting models also assist with viewing 3D volumes that are transformed onto 2D displays. For example, a 3D red ball looks like a 2D red circle if there are no highlights or shading to indicate depth. The easiest type of light to simulate is "Ambient" light. This lighting model produces a constant illumination on all surfaces of a 3D object, regardless of their orientation.

"Directional" light refers to the use of a point light source that is located at infinity in the user's world coordinates. Rays for this light source are parallel so the direction of the light is the same for all objects regardless of their position in the scene.

"Positional" light involves locating a light source relatively close to the viewing area. Rays from these light sources are not parallel. Thus, the position of an object with respect to a point light source affects the angle at which this light strikes the object.

Figures #3 & 4: Examples of Directional (Figure #3) and Positional (Figure #4) lighting. These images are courtesy of Thomas Fowler and Theresa-Marie Rhyne (developed by Lockheed Martin for the U.S. Environmental Protection Agency).

Understanding Color:

Color is an important part of creating 3D computer graphics images. The color space or model for computer (CRT) screens is "additive color" and is based on red, green and blue (RGB) lights as the primary colors. Other colors are produced by combining the primary colors together. Color maps and palettes are created to assist with color selection. When layering images on top of one another, the luminance equation helps determine good contrast.

Luminance (L) = .30*Red + .59*Green + .11*Blue

Here are the luminance values for a few colors: Black = 0.00; White = 1.00; Red = 0.30; Green = 0.59; Blue = 0.11; Cyan = 0.70; Magenta = 0.41; Orange = 0.60; and Yellow = 0.89.

So, a yellow image on a black background has very high contrast while a yellow image on a white background has very little contrast.
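The luminance equation and the contrast rule can be checked directly in a few lines of Python (the helper names here are illustrative):

```python
def luminance(r, g, b):
    """Luminance of an RGB color, each channel in [0, 1]."""
    return 0.30 * r + 0.59 * g + 0.11 * b

def contrast(color_a, color_b):
    """Absolute luminance difference between two RGB colors."""
    return abs(luminance(*color_a) - luminance(*color_b))

yellow, black, white = (1, 1, 0), (0, 0, 0), (1, 1, 1)
print(round(contrast(yellow, black), 2))  # high contrast: 0.89
print(round(contrast(yellow, white), 2))  # low contrast: 0.11
```

Note that yellow is bright (0.30 + 0.59 = 0.89) because it combines the red and green lights; blue contributes the least to perceived brightness.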

Figure #5: Example of the effects of colored lights on a sphere. Image courtesy of Wade Stiell and Philip Sansone (students in Nan Schaller's Computer Graphics course at the Rochester Institute of Technology).

The color space or model for printing images on paper is "subtractive color" and is based on cyan, magenta and yellow (CMY) as the primary colors. Printers frequently include black ink as a fourth component to improve the speed of drying inked images on paper. It is difficult to convert accurately from RGB color space to CMY color space, so images on a color CRT screen can look different when printed on paper. A more detailed discussion on color is listed as a reference below.
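The idealized textbook relationship between the two spaces is a simple complement: each subtractive primary absorbs one additive primary. Real device conversions are far more involved (ink behavior, gamut limits), which is part of why screen and print output differ. A minimal Python sketch of the idealized conversion only:

```python
def rgb_to_cmy(r, g, b):
    """Idealized subtractive conversion: C = 1-R, M = 1-G, Y = 1-B.
    Real printer conversions must also account for ink behavior and
    device gamuts, so this is a teaching sketch, not a device model."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

# Pure red needs no cyan ink, but full magenta and yellow:
print(rgb_to_cmy(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)
```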


Adrian Ford and Alan Roberts, "Colour Space Conversions", September 1996.

Explaining Texture Mapping:

Texture mapping is analogous to applying wrapping paper to real 3D objects. It is a fast and effective computer graphics technique for laying bitmapped images over solid models. For example, a flat cartographic map can be wrapped around a sphere to yield a global view of planet earth. Texture maps can be scanned into a computer or rendered by the computer. Rendering is the process of creating computer generated images on a computer screen.
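Wrapping a flat map around a sphere amounts to assigning each surface point a pair of (u, v) coordinates into the texture image. A common spherical mapping, sketched here in Python (the function name is an assumption), derives u from longitude and v from latitude:

```python
import math

def sphere_uv(x, y, z):
    """Map a point on the unit sphere to (u, v) texture coordinates,
    like wrapping a flat world map around a globe. u follows longitude
    (around the equator), v follows latitude (pole to pole); both lie
    in [0, 1]."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return (u, v)

print(sphere_uv(1.0, 0.0, 0.0))  # a point on the equator: (0.5, 0.5)
```

During rendering, each pixel of the sphere looks up its color from the bitmap at these coordinates, which is what produces the wrapped-map effect.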

Figure #6: Example of a 2D geographic map overlaid onto a 3D surface. Image courtesy of Thomas Fowler and Theresa-Marie Rhyne (developed by Lockheed Martin for the U.S. Environmental Protection Agency).

Focusing on Animation Concepts:

Prior to computers, the process of simulating motion through animation was done with techniques like flip books and hand-painted cels recorded to motion picture film. In computer animation, a virtual camera is placed and oriented at a location relative to the scene to obtain a perspective projection of the 3D objects. As the animation proceeds, the camera's viewpoint changes.

With computers, there are two different ways to show animation of an image. The first method involves creating a series of pre-rendered or pre-scanned images to play back quickly on the computer screen. The second method involves rendering and displaying the frames in real time using double buffering. Double buffering allows for the smooth motion of animated images on a computer screen without flashing or flickering, and is typically built into computer graphics hardware.
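The idea behind double buffering can be sketched without any graphics hardware at all. This toy Python class is purely illustrative (not an actual hardware interface): drawing happens in a hidden back buffer, and a swap makes the finished frame visible all at once, so the viewer never sees a half-drawn image.

```python
class DoubleBuffer:
    """Minimal sketch of double buffering: render into the hidden
    back buffer, then swap, so the screen never shows a partial frame."""

    def __init__(self, size):
        self.front = [0] * size   # what the screen currently shows
        self.back = [0] * size    # where the next frame is drawn

    def draw(self, pixels):
        self.back = list(pixels)  # render off-screen

    def swap(self):
        # Exchange the buffers; in hardware this happens between
        # screen refreshes, which is why no flicker is visible.
        self.front, self.back = self.back, self.front

buf = DoubleBuffer(4)
buf.draw([1, 1, 0, 0])
print(buf.front)  # still the old frame: [0, 0, 0, 0]
buf.swap()
print(buf.front)  # the new frame appears all at once: [1, 1, 0, 0]
```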

Figure #7: Zoom in from space - QuickTime Animation (1.5MB)

Image and animation are courtesy of Thomas Fowler and Theresa-Marie Rhyne (developed by Lockheed Martin for the U.S. Environmental Protection Agency).

Three computer animation methods are highlighted here: keyframe, physically based modeling and motion capture. Keyframe animation involves determining a path or direction for the animation to proceed. The computer then calculates the frames in between each of the key frames, usually via linear interpolation. Physically based modeling uses principles of physics and non-linear motion equations to determine the animation path. Motion capture involves using sensors and hardware attached to a person or physical object in the real world to capture and record precise motion. Once captured, this information can be transferred to the computer to assist with the motion of computer generated and rendered objects.
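The in-between calculation for keyframe animation is easy to sketch. Assuming two keyframe positions and simple linear interpolation (the common case mentioned above; the function name is illustrative):

```python
def interpolate(key_a, key_b, steps):
    """Generate the in-between frames from keyframe key_a to key_b
    by linear interpolation, the simplest keyframe technique."""
    frames = []
    for i in range(steps + 1):
        t = i / steps  # interpolation parameter, 0.0 -> 1.0
        frames.append(tuple(a + t * (b - a)
                            for a, b in zip(key_a, key_b)))
    return frames

# Move an object from the origin to (4, 0, 8) over 4 frame intervals:
for frame in interpolate((0, 0, 0), (4, 0, 8), 4):
    print(frame)
```

A physically based animator would replace the straight-line parameter `t` with motion derived from equations of motion; a motion-capture pipeline would replace the computed positions with recorded sensor data.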

Reference: John Vince, 3-D Computer Animation, Addison-Wesley Publishing, New York, 1993.

Photorealism versus Real-Time Interaction

Photorealism is the process of generating computer graphics images that closely resemble a photograph. Advanced rendering techniques that take into account reflective properties of materials, light sources, illumination, shadows and transparency are used. Two methodologies include "ray tracing" and "radiosity".

Ray tracing calculates both direct and reflected light. This technique approximates a scene by tracing rays from the eye, through each pixel in the image plane, and into the world or environment. Sharp, clear and realistic highlights and reflections are produced. Ray tracing is view dependent, so real-time interaction with a scene is not possible. Animation of ray traced images is accomplished by playing back a series of previously rendered, stored and indexed files.
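At the core of any ray tracer is a ray-object intersection test, performed once per pixel (and again for each reflected ray). A minimal Python sketch for the sphere case, solving the standard quadratic for the ray-sphere intersection (function and variable names are illustrative):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit,
    or None on a miss. Solves |origin + t*direction - center|^2 = r^2
    for the smallest non-negative t."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None

# Eye at the origin, looking down +z at a unit sphere centered at z = 5:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```

Because this test must run for every pixel against every object, and recursively for reflections, the view dependence and cost of ray tracing follow directly from this inner loop.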

Figure #8: Example ray traced image created with a ray tracer designed and implemented by Philip Sansone and Wade Stiell (students in Nan Schaller's Computer Graphics course at the Rochester Institute of Technology).

Radiosity is a second method for producing photorealistic images. This technique models the inter-reflections of diffuse light between surfaces of the world or environment. Radiosity is view independent, so interactions or walk-throughs of scenes are possible. Radiosity techniques tend to produce softer shadows than ray tracing methods.

Figure #9: Example radiosity image created with a real time radiosity system (VRad) under development at the Computer Science Department - University of Manchester. Image courtesy of Alan Murta (modeling) and Simon Gibson (modeling and rendering).

Ray tracing and radiosity methods can be combined to create high quality photorealistic view independent images. Both of these advanced techniques are computationally intensive and require high performance workstations.

Often, real time interaction with visualized data is of higher priority than photorealism. This frequently applies to the development of scientific visualization and virtual reality tools.

Data Visualization and Virtual Environments

Data visualization transforms numerical or symbolic data and information into geometric computer generated images. Scientific visualization is the process of transforming spatial data generated by scientific processes into visual images. Information visualization applies similar techniques to data that has no inherent spatial structure. Challenges of information visualization include establishing the visual metaphors, "geography" and interactive techniques for gaining insight into widespread data sets. 3D virtual environments step beyond 2D computer graphics displays and allow for immersive interaction with visualized data in a 3D physical space.


B.H. McCormick, T.A. DeFanti, and M.D. Brown (editors), "Visualization in Scientific Computing," Computer Graphics, Vol. 21, No. 6, ACM SIGGRAPH, New York, (Special Issue) (November), 1987.

R.S. Gallagher (editor), Computer Visualization: Graphics Techniques for Scientific and Engineering Analysis, CRC Press, Boca Raton, Florida, 1995.

NASA Ames Research Center, "Annotated Bibliography of Scientific Visualization Web Sites Around the World".

Transforming Data to 3-D Pictures:

The production of scientific and information visualizations involves transforming data into visual representations. The two most common approaches are: 1) conversion of mesh geometry and data directly into graphical primitives (e.g. points, lines, polygons) and 2) data sampling.

Converting to Graphics Primitives:

Conversion of scientific data into computer graphics primitives is a three stage process:

filtering -> mapping -> rendering.

"Filtering" takes modeled or collected data and "filters" it into another form which is more informative and less massive. Examples of filtering operations include computing derived quantities such as the gradient of an input scalar field, deriving a flow line from a velocity field, or extracting a portion of a solution.

The next step "maps" the filtered or newly derived data into geometric primitives such as points, lines, or polygons. Once a set of geometric primitives is chosen and calculated, the geometric data are "rendered" into pictures. At this stage, the user chooses coloring, placement, illumination, and surface properties for the visualized image.
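The three stages can be sketched as a chain of small functions. This Python sketch is purely illustrative (the data layout, the threshold parameter and the stage names are assumptions): filtering reduces the raw data, mapping turns it into graphics primitives, and rendering turns primitives into an image (here just text standing in for screen output):

```python
def filter_step(samples, threshold):
    """Filtering: reduce the raw data to the interesting portion."""
    return [s for s in samples if s["value"] >= threshold]

def map_step(samples):
    """Mapping: turn filtered data into colored point primitives,
    here coloring high values red and low values blue."""
    return [{"point": s["pos"],
             "color": (s["value"], 0.0, 1.0 - s["value"])}
            for s in samples]

def render_step(primitives):
    """Rendering: a text stand-in for drawing to the screen."""
    return [f"point at {p['point']} colored {p['color']}"
            for p in primitives]

data = [{"pos": (0, 0), "value": 0.2}, {"pos": (1, 0), "value": 0.9}]
for line in render_step(map_step(filter_step(data, 0.5))):
    print(line)
```

The value of keeping the stages separate is that each can be swapped independently: a different filter, a different choice of primitive, or a different renderer, without disturbing the rest of the pipeline.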

Figure #10: Example visualization of air pollution concentrations. Computational model data is filtered and mapped into geometric primitives. Image courtesy of Theresa-Marie Rhyne, (created with visualization software developed by Numerical Design, Ltd. for the U.S. Environmental Protection Agency).

Data Sampling:

Data Sampling involves moving data into a structured grid of points using interpolation and extrapolation. The user specifies the resolution and position of the sampling grid and attempts to sample at a high enough frequency to capture the details of the solution. Once the data sample is placed on a regular grid, volume rendering techniques are used to create the visual display. Volume rendering is the name given to the process of creating a two-dimensional image directly from three-dimensional volumetric data.
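A one-dimensional sketch of the sampling step in Python (the function name and the clamping behavior at the edges are illustrative assumptions): irregularly spaced samples are interpolated onto a regular grid, with out-of-range grid points clamped to the nearest sample as a crude extrapolation:

```python
def resample(xs, values, grid):
    """Resample scattered 1D samples (xs sorted ascending, with their
    values) onto a regular grid by linear interpolation. Grid points
    outside the data range are clamped to the nearest sample."""
    out = []
    for g in grid:
        if g <= xs[0]:
            out.append(values[0])
        elif g >= xs[-1]:
            out.append(values[-1])
        else:
            # Find the sample interval containing g, then interpolate.
            i = max(j for j in range(len(xs)) if xs[j] <= g)
            t = (g - xs[i]) / (xs[i + 1] - xs[i])
            out.append(values[i] + t * (values[i + 1] - values[i]))
    return out

# Irregular samples at x = 0, 3, 4 resampled onto a unit-spaced grid:
print(resample([0.0, 3.0, 4.0], [0.0, 6.0, 8.0],
               [0.0, 1.0, 2.0, 3.0, 4.0]))
```

In the 3D case the same interpolation happens along each axis of the volume grid; the resulting regular structure is what volume rendering algorithms then traverse.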

Figure #11: Example application of volume rendering techniques to visualize air pollution concentrations. Image courtesy of Theresa-Marie Rhyne, (created with visualization software developed by Numerical Design, Ltd for the U.S. Environmental Protection Agency - volume renderer created by Lee Westover using his splatting algorithm).

Comparing the Two Approaches:

Conversion to graphics primitives is the most frequently used approach, as it produces data that computer graphics hardware can process efficiently. This yields highly interactive scientific visualization systems. Data sampling, however, is ideal for volumetric representations and visualization techniques. An example of volumetric visualization would be reconstructing a sequence of 2D slices obtained from Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) into a volume model for medical visualization and diagnostic purposes.

Figure #12: Example biomedical image (head composite) created by a volume visualization system (VolVis) developed by the Center for Visual Computing (CVC) at the State University of New York (SUNY) at Stony Brook. Image courtesy of Arie E. Kaufman.

Visualization Software:

Over the last ten years, a number of visualization systems have evolved. Some are turnkey systems focused on specific types of visualization problems such as computational fluid dynamics, weather modeling, medical imaging and so forth. There is also a group of software packages targeted more at the end user than at graphics programmers. These tools are called Modular Visualization Environments (MVEs).

Modular Visualization Environments (MVEs):

With MVEs, software components are connected to create a visualization. The components are called modules. MVEs allow the modules for filtering, mapping and rendering to be combined into executable flow networks. To do this, the user selects modules from menus and places the icons representing the modules in a diagram. Each module appears as a box and connections between the modules are drawn as lines. Once the structure of the visualization application has been established, the MVEs execute the network and display the computer generated image.
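The execution model of such a flow network can be sketched in Python. Here the module names, the toy data and the network itself are all illustrative assumptions; a real MVE such as AVS wires compiled modules together graphically, but the idea of feeding each module's output forward along the drawn connections is the same:

```python
# A hypothetical module registry: each module is a function that
# transforms its input into its output.
modules = {
    "read": lambda _: [1.0, 4.0, 2.0, 8.0],          # data source
    "threshold": lambda data: [v for v in data if v > 2.0],
    "colorize": lambda data: [("polygon", v) for v in data],
    "display": lambda prims: f"rendering {len(prims)} primitives",
}

# The user's diagram, flattened to an ordered list of connected modules:
network = ["read", "threshold", "colorize", "display"]

def execute(network, modules):
    """Run the flow network by feeding each module's output into
    the next module's input."""
    result = None
    for name in network:
        result = modules[name](result)
    return result

print(execute(network, modules))  # rendering 2 primitives
```

Real MVE networks are directed graphs rather than simple chains, so one module's output can fan out to several downstream modules, but the execution principle is the same.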

Figure #13: Example modular visualization environment (MVE) tool: the Application Visualization System (AVS) is shown in this image. Image courtesy of Thomas Fowler and Theresa-Marie Rhyne (developed by Lockheed Martin for the U.S. Environmental Protection Agency).

Each image that is produced is a visual representation of the scene defined by the modules. The user can interact with the image by moving or changing lights, by modifying surfaces, by rotating, shifting or resizing objects, or by changing the point of view and angle of view.

Reference: Gordon Cameron (guest editor), "Modular Visualization Environments: Past, Present and Future", Computer Graphics, Vol. 29, No. 2, ACM SIGGRAPH, New York, (Special Issue) (May), 1995.

Web-based 3D Interactive Visualization:

Recent developments in World Wide Web (Web) technology have expanded to include 3D visualizations. Some 3D Web visualizations use the Virtual Reality Modeling Language (VRML). This is a specification originally based on an extended subset of the Silicon Graphics Inc. (SGI) OpenInventor scene description language. Key contributions of the initial VRML 1.0 standard were a core set of object-oriented constructs augmented by hypermedia links. VRML 1.0 allowed for scene generation by Web browsers on Intel and Apple personal computers as well as UNIX workstations.

VRML 1.0 file - A Statistical Surface (38K file)

Figure #14: Example Web visualization of a statistical surface using the VRML 1.0 specification. VRML conversion code written by Thomas Fowler. Image courtesy of Thomas Fowler and Theresa-Marie Rhyne (developed by Lockheed Martin for the U.S. Environmental Protection Agency).

The interaction model of 3D VRML browsers is a client-server approach, similar to most other Web browsers. 3D browsers are usually embedded into 2D browsers (e.g. Netscape or Microsoft Internet Explorer) or launched as helper applications when connecting to a 3D site. The VRML 2.0 standard, released August 1996, expands VRML 1.0 to address real time animation issues on the Web. VRML 2.0 provides local and remote hooks (i.e. an application programming interface or API) to the graphical scene description. Dynamic scene changes are simulated by any combination of scripted actions, message passing, user commands, or behavior protocols (such as Distributed Interactive Simulation (DIS) or Java).

An example of a VRML 2.0 file and its pictorial image can be found by clicking here.

Reference: Mark Pesce, VRML Browsing and Building Cyberspace, New Riders Publishing, 1995.

Chapter 3 is available online.

David R. Nadeau, "VRML 2.0 Glossary - The key terms you need to know to get started with VRML", Netscape World, 1997.

Computer Graphics Standards

Historically, there have been a number of computer graphics standards efforts. For this tutorial, we highlight two of the current standard specifications in use today: OpenGL and VRML.

What is OpenGL ?

OpenGL (Open Graphics Library) is a set of procedures and functions that allows programmers to specify objects and operations associated with producing high quality computer graphics images. The specification assists with drawing computer graphics primitives. Primarily, it is a software interface to graphics hardware. Recently, OpenGL implementations that require no specialized graphics hardware (e.g. Cosmo OpenGL from SGI) have been developed for Windows 95 platforms. OpenGL evolved from SGI's GL (Graphics Library) and has grown to be a fundamental industry standard overseen by an architecture review board. A technical discussion of OpenGL is available online.

More information on OpenGL can be found at the OpenGL Web site.

What is VRML ?

As noted in the previous section of these notes, the Virtual Reality Modeling Language (VRML) is a specification for producing 3D interactive "worlds" on the Web. VRML 1.0 evolved from SGI's OpenInventor scene description language. VRML 2.0 (released in August 1996) expands the specification to address real time animation on the Web. VRML 2.0 has been submitted to the International Organization for Standardization (ISO) with an expected publication in May 1997. Detailed discussions of the components of the VRML specification can be found at the VRML Repository located at the San Diego Supercomputer Center.

The VRML Consortium fosters future developments for VRML.

The next sections of this tutorial highlight detailed aspects of VRML. Large Scale Virtual Environments and Large Scale Multicast Applications are addressed. The integration of computer graphics and networking standards will also be discussed.


Selected examples of general computer graphics references are listed below.

General References on Computer Graphics:

Hewlett Packard, "An Introduction to Computer Graphics".

J. Foley, A. van Dam, S. Feiner, and J. Hughes, Computer Graphics: Principles and Practice (2nd Edition in C), Addison-Wesley, New York, 1990.

D.D. Hearn and M.P. Baker, Computer Graphics, C Edition, Prentice Hall, 1996.

How to find out more on Computer Graphics publications:

ACM SIGGRAPH Online Bibliography Database.


Many of the concepts presented in these tutorial notes were inspired by a set of lecture notes developed by Pat Hanrahan for an Introduction to Computer Graphics course he taught, during Winter 1997, at Stanford University.

Pat Hanrahan, lecture notes, Stanford University, 1997.

Many of the images shown were provided by various computer graphics experts. Their names are respectively noted under their images. Their support in creating these notes is very much appreciated.

Special thanks to Don Brutzman, Mike Macedonia and Steve McCanne for joining this collaboration and for thinking this tutorial was a good idea.