Volume Visualization and Rendering

References:

"A Survey of Algorithms for Volume Visualization", T. Todd Elvins, Computer Graphics 26:3, pp. 194-201 (August, 1992)

T. Todd Elvins San Diego Supercomputer Center, Introduction to Volume Visualization: Imaging Multi-dimensional Scientific Data, SIGGRAPH 94 Course #10 Notes, 25 JULY 1994

Introduction

Challenges

Data Characteristics

Volume Characteristics

Common Steps in Volume Visualization Algorithms

Volume Visualization Methods

Data Classification

Traversals

Viewing and Shading

Volume Visualization Algorithms

IRIS Explorer Example of Volume Visualization


Introduction

Volume visualization is used to create images from scalar and vector datasets defined on multidimensional grids, i.e., it is the process of projecting a multidimensional (usually 3D) dataset onto a 2D image plane to gain an understanding of the structure contained within the data. Most techniques apply to 3D lattice structures; techniques for higher-dimensional systems are rare. It is a relatively new but rapidly growing field in both computer graphics and data visualization. These techniques are used in medicine, geoscience, astrophysics, chemistry, microscopy, mechanical engineering, and other areas.

There are five fundamental algorithms in volume visualization. For each algorithm, the following gives a brief description that includes its advantages and disadvantages, a rough analysis of its space and time requirements, its ease of implementation, and the type of data it handles best.

Implicit in the discussion is the fact that animation is very important for a full understanding of 3D volumetric data, and so it needs to be a straightforward extension of the fundamental techniques.



Challenges

Typical datasets range from megabytes to gigabytes in size, and scientists would like to combine them and use even larger (terabyte) datasets. One reason is that two different methods of acquiring data, for example a CT scan and MRI, may give different information. The user may want to view both datasets, spatially aligned (registered), so that the features of both are visible. The system might also display the most appropriate information from each dataset at each point. The rendering must be fast enough to allow interaction, i.e., the user must be able to change parameters and see the result quickly.

The datasets can have scalar values or vectors at the grid points. For example, atmospheric scientists may record more than thirty parameters at each gridpoint (usually only one parameter at a time is displayed).



Data Characteristics

Data may be acquired from empirical observation or computer simulation. Datasets are often obtained by scanning objects, e.g., with Magnetic Resonance Imaging (MRI), Computed Tomography (CT), etc. Data can also be generated from synthetic objects, e.g., a block with a procedural wood texture.

Sometimes volume datasets are combined. For example, two different techniques (such as MRI and CT) might be used on the same object. Then the datasets are combined to show the most important results from each method. Another example might be to combine simulated data with experimental data.



Common Steps in Volume Visualization Algorithms

Several steps are common across the algorithms, and most algorithms contain some subset of the steps discussed here.

The steps are as follows:

  1. Acquire the data, either via empirical measurement or computer simulation.
  2. Put the data into a format that can be easily manipulated. This may entail scaling the data for a better value distribution, enhancing contrast, filtering out noise, and removing out-of-range data. The same set of operations must be applied to all the data slices.
  3. Map the data onto geometric or display primitives.
  4. Store, manipulate, and display the primitives.
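
The data-preparation step (step 2) can be sketched in a few lines. The following is a minimal illustration, not part of the original text: it assumes the slices arrive as NumPy arrays, and the clamp range (lo, hi) is a made-up parameter standing in for whatever out-of-range criterion a real system would use. Note that the same operations are applied uniformly to every slice.

```python
import numpy as np

def prepare_volume(slices, lo, hi):
    """Apply the same preprocessing to every data slice (step 2):
    clamp out-of-range values, then rescale to [0, 1] for a better
    value distribution."""
    vol = np.stack(slices).astype(np.float64)  # stack 2D slices into a 3D volume
    vol = np.clip(vol, lo, hi)                 # remove out-of-range data
    return (vol - lo) / (hi - lo)              # rescale to [0, 1]

# Example: two 2x2 slices, one containing an out-of-range sample (999.0)
slices = [np.array([[0.0, 50.0], [100.0, 150.0]]),
          np.array([[200.0, 100.0], [50.0, 999.0]])]
vol = prepare_volume(slices, lo=0.0, hi=200.0)
print(vol.shape)   # (2, 2, 2)
print(vol.max())   # 1.0
```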



Volume Visualization Methods

The fundamental algorithms are of two types: direct volume rendering (DVR) algorithms and surface-fitting (SF) algorithms.

DVR methods map elements directly into screen space without using geometric primitives as an intermediate representation. DVR methods are especially good for datasets with amorphous features such as clouds, fluids, and gases. A disadvantage is that the entire dataset must be traversed for each rendered image. Sometimes a low-resolution image is quickly created as a preview and then refined. This process of increasing image resolution and quality is called "progressive refinement".

SF methods, also called feature-extraction or iso-surfacing methods, fit planar polygons or surface patches to constant-value contour surfaces. SF methods are usually faster than DVR methods since they traverse the dataset only once, for a given threshold value, to obtain the surface; conventional rendering methods (which may be in hardware) are then used to produce the images. New views of the surface can be generated quickly, but using a new threshold is time consuming since the original dataset must be traversed again.
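
The single dataset traversal at the heart of an SF method can be illustrated with a sketch that merely locates the cells an iso-surface passes through (a full method such as marching cubes would then fit polygons inside each such cell). The function name and the brute-force triple loop are illustrative, not from the original text:

```python
import numpy as np

def crossing_cells(volume, threshold):
    """One traversal of the dataset for a given threshold value:
    return indices of cells whose eight corner values straddle the
    iso-value, i.e. the cells the constant-value surface passes through."""
    cells = []
    nx, ny, nz = volume.shape
    for i in range(nx - 1):
        for j in range(ny - 1):
            for k in range(nz - 1):
                corners = volume[i:i+2, j:j+2, k:k+2]
                if corners.min() <= threshold <= corners.max():
                    cells.append((i, j, k))
    return cells

# Tiny 2x2x2 volume (a single cell); the iso-surface at 0.5 crosses it
vol = np.zeros((2, 2, 2))
vol[1, 1, 1] = 1.0
print(crossing_cells(vol, 0.5))   # [(0, 0, 0)]
print(crossing_cells(vol, 2.0))   # []
```

Changing the threshold means calling the function again over the whole volume, which is exactly why a new threshold is expensive for SF methods.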

Sometimes an external geometric primitive needs to be added to the dataset. For example, in visualizing the ultrasound heat treatment (or radiation treatment) of a tumor, the ultrasound beam could be represented as a cone while the tumor data is acquired from MRI. This is easy for an SF method, since the geometric primitive representing the external object can simply be added to the SF primitives and rendered. For DVR, the external object can be voxelized by scan-converting its 3D geometry.
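
Scan-converting an external object into the volume can be sketched for the cone example above. Everything here (the axis-aligned orientation, the apex/radius parameters, the inside-test) is an illustrative assumption, not a prescription from the original text: a voxel is marked if its center lies inside the cone.

```python
import numpy as np

def voxelize_cone(shape, apex, axis_len, radius):
    """Scan-convert a simple cone into a voxel grid (the DVR approach to
    adding an external object such as an ultrasound beam). The apex sits
    at `apex` and the cone opens along +z to `radius` at depth `axis_len`.
    A voxel is set to 1.0 if its center lies inside the cone."""
    vol = np.zeros(shape)
    ax, ay, az = apex
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                dz = k - az
                if 0 <= dz <= axis_len:
                    r = radius * dz / axis_len          # cone widens with depth
                    if (i - ax) ** 2 + (j - ay) ** 2 <= r ** 2:
                        vol[i, j, k] = 1.0
    return vol

cone = voxelize_cone((9, 9, 9), apex=(4, 4, 0), axis_len=8, radius=4)
print(cone[4, 4, 0], cone[4, 4, 8])   # voxels on the cone axis are inside
```

The resulting voxel grid can then be merged with (or rendered alongside) the MRI volume by the same DVR pipeline.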



Data Classification

Data classification means choosing parameters such as the threshold value for an SF method or the color and opacity tables for a DVR method.

The DVR color table is used to map data values to meaningful colors. The opacity table is used to expose the part of the volume most interesting to the user and to make the uninteresting parts transparent. Each element in the volume contributes both color and opacity, and the final pixel value is determined by accumulating all the contributions along the viewing direction. Setting up the color and opacity tables can be time consuming. The user may look at a data slice to get the range of data values, set up a preliminary table, do a test rendering, modify the tables, and continue the process until satisfied with the image. An example would be a table for CT data that sets bone density values to white/opaque, muscle density values to red/semi-transparent, and fat density values to beige/mostly transparent.
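
The CT example can be sketched as a piecewise lookup. The density ranges below are invented for illustration (they are not calibrated Hounsfield values), and a real system would use full per-channel tables rather than a handful of intervals:

```python
def classify(value):
    """Map a scalar data value to (R, G, B, opacity), as in the CT example:
    bone -> white/opaque, muscle -> red/semi-transparent,
    fat -> beige/mostly transparent, everything else fully transparent.
    The numeric ranges are illustrative assumptions."""
    table = [
        ((-100, -20), (0.9, 0.8, 0.6, 0.1)),  # fat: beige, mostly transparent
        ((20, 80),    (0.8, 0.1, 0.1, 0.5)),  # muscle: red, semi-transparent
        ((300, 3000), (1.0, 1.0, 1.0, 1.0)),  # bone: white, opaque
    ]
    for (lo, hi), rgba in table:
        if lo <= value <= hi:
            return rgba
    return (0.0, 0.0, 0.0, 0.0)               # uninteresting: fully transparent

print(classify(500))   # (1.0, 1.0, 1.0, 1.0)  -> bone
print(classify(0))     # (0.0, 0.0, 0.0, 0.0)  -> transparent
```

The edit-render-edit loop described above amounts to repeatedly adjusting this table and re-rendering until the interesting structures stand out.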

Sometimes the user wants to make the outer layers transparent even though they have the same value as the inner layers. This can be done by considering both the position and the value of the elements.



Traversals

Traversals occur when the image is created. They can be image-order traversals of the pixels in the image plane or object-order traversals of the elements in the dataset. An object-order traversal computes the projection and contribution of each element to the pixels in the image plane. The elements can be traversed back to front, which allows the user to see the final image being built up and to see all the structures, including those that will eventually be hidden. The advantage of a front-to-back traversal is speed, since elements hidden behind already-opaque pixels need not be processed. Image-order traversals are usually done in scan-line order.
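
Back-to-front accumulation along one viewing ray can be sketched with the standard "over" compositing operator. This is a generic illustration (grayscale only, one ray) rather than any specific algorithm from the text: each nearer sample partially covers whatever has accumulated behind it.

```python
def composite_back_to_front(samples):
    """Composite (color, opacity) samples along one ray, farthest first.
    Each nearer sample covers the accumulated result in proportion to
    its opacity (the 'over' operator)."""
    color = 0.0                              # accumulated grayscale intensity
    for c, alpha in samples:                 # samples ordered back to front
        color = c * alpha + color * (1.0 - alpha)
    return color

# Opaque white sample at the back, half-transparent gray sample in front
ray = [(1.0, 1.0), (0.5, 0.5)]
print(composite_back_to_front(ray))   # 0.75
```

A front-to-back formulation accumulates opacity instead and can stop early once a pixel is effectively opaque, which is the speed advantage mentioned above.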



Viewing and Shading

Most DVR algorithms use orthographic viewing, since perspective viewing can cause ray-divergence problems in which some data are missed. Orthographic viewing also prevents the data from being warped by the perspective transformation, which could be confusing for scientists.

Shading with light sources requires a surface normal. Most SF and DVR methods use the gradient of the values to determine the normal. This gradient can be interpolated across an element since the values are known at the corners. A system might pre-compute the dot product of the light vector and the gradient at each gridpoint and store it. This costs space but is faster for rendering as long as the lights don't change position.
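
Estimating a normal from the value gradient can be sketched with central differences at an interior gridpoint. This is a minimal illustration of the idea described above; the function name and the interior-only indexing are assumptions (boundary gridpoints would need one-sided differences):

```python
import numpy as np

def gradient_normal(volume, i, j, k):
    """Estimate the surface normal at an interior gridpoint as the
    normalized central-difference gradient of the scalar values."""
    g = np.array([
        volume[i + 1, j, k] - volume[i - 1, j, k],
        volume[i, j + 1, k] - volume[i, j - 1, k],
        volume[i, j, k + 1] - volume[i, j, k - 1],
    ]) * 0.5
    n = np.linalg.norm(g)
    return g / n if n > 0 else g             # leave zero gradients as-is

# A volume increasing linearly in x: the normal points along +x
vol = np.fromfunction(lambda i, j, k: i * 1.0, (3, 3, 3))
print(gradient_normal(vol, 1, 1, 1))   # [1. 0. 0.]
```

Precomputing the dot product of this normal with the light vector at every gridpoint, as described above, trades storage for rendering speed while the lights are fixed.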



Volume Visualization Algorithms

Interactive Methods

Surface-fitting Algorithms

DVR Algorithms




Last modified on February 16, 1999, G. Scott Owen, owen@siggraph.org