Computer Graphics (CG) Quarterly, Volume 43, Number 2
Visualizing What Lies Inside



Author:
Carlos D. Correa

University of California, Davis


Medical imaging has given radiologists an ability that photography could not provide: it lets them see inside the human body. With the advent of 3D visualization systems, these images can be combined into crisp and impressive renderings of the human body from perspectives that were only dreamt of before, revolutionizing clinical practice.

Light transport models soon emerged to allow light interactions that, although not realistic in the physical sense, proved more effective for understanding the complex relationships among anatomical structures. For instance, bone could be made semi-transparent to provide visibility of brain tissue. Skin could be removed altogether from an image to show only muscle or internal organs. However, it soon became evident that simply rendering these images in their raw form was not enough, and the clear visualization of internal structures remained elusive.

The depiction of internal parts in the context of the enclosing space is a difficult problem that has occupied the minds of artists, illustrators and visualization practitioners. Despite the advances made in computer graphics for simulating light transport in semi-transparent media, the problem of visualizing internal objects is no longer a rendering problem, but one of classification. Medical imaging technology obtains representations of anatomical structures indirectly, such as through the response of tissue to X-rays or the alignment of hydrogen nuclei in a magnetic field. Therefore, the absence of semantic information prevents visualization practitioners from clearly marking up the regions that must be visualized. Without access to those regions, exploration becomes tedious and time-consuming. The predominant approach has been the use of transfer functions, or opacity mappings, which assign transparency properties to different intervals in the data. This method, however, does not guarantee that internal structures are visible. Other strategies must be used. In this article, I describe some visualization techniques that have emerged to obtain clear views of internal features in 3D volume data.
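As a concrete illustration, a 1D opacity transfer function can be sketched as a piecewise-constant mapping from intensity intervals to opacity. The intervals and values below are hypothetical, chosen only to suggest CT-like intensity ranges:

```python
import numpy as np

def opacity_transfer_function(intensities, intervals):
    """Piecewise-constant 1D transfer function: map each scalar
    intensity to an opacity via (low, high, opacity) intervals.
    Values falling outside every interval stay fully transparent."""
    opacity = np.zeros_like(intensities, dtype=float)
    for low, high, alpha in intervals:
        inside = (intensities >= low) & (intensities < high)
        opacity[inside] = alpha
    return opacity

# Hypothetical CT-like intervals: faint soft tissue, opaque bone.
tf = [(-100, 300, 0.05), (300, 3000, 0.9)]
voxels = np.array([-500.0, 50.0, 700.0])  # air, soft tissue, bone
print(opacity_transfer_function(voxels, tf))
```

Because the mapping looks only at the scalar value, two structures with overlapping intensity ranges cannot be separated, which is exactly the limitation motivating the techniques discussed in this article.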


Direct Volume Manipulation


Complex 3D objects can be seen as virtual counterparts of real physical objects. However, there is little support in contemporary visualization systems for exploiting this direct metaphor. With the fast growth in computational power of graphics hardware, only recently has it become possible to manipulate 3D volume data in ways that were previously possible only for surface meshes and CAD models, where semantic information is often explicit and readily available.
When volume data are understood as explorable objects, we can decompose them into parts that can be disassembled in various ways. One of the first ideas explored in this direction was cutaways, where certain parts can be removed to reveal hidden parts of an object [14]. Exploded views extend this idea to reveal the relationships among the internal parts of a complex volume [1] (Figure 1).
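A minimal sketch of the cutaway idea, assuming the volume has already been segmented into labeled regions (which, as discussed later, is the hard part). The labels, box coordinates and opacity values are illustrative:

```python
import numpy as np

def cutaway(opacity, labels, cut_label, box):
    """Zero out the opacity of voxels carrying `cut_label` inside an
    axis-aligned box (z0, z1, y0, y1, x0, x1), revealing what lies
    behind them. Returns a modified copy of the opacity volume."""
    out = opacity.copy()
    z0, z1, y0, y1, x0, x1 = box
    region = np.zeros(opacity.shape, dtype=bool)
    region[z0:z1, y0:y1, x0:x1] = True
    out[region & (labels == cut_label)] = 0.0
    return out

# Toy volume: label 1 = outer "skin" shell, label 2 = interior.
labels = np.ones((4, 4, 4), dtype=int)
labels[1:3, 1:3, 1:3] = 2
opacity = np.full((4, 4, 4), 0.8)
cut = cutaway(opacity, labels, cut_label=1, box=(0, 2, 0, 4, 0, 4))
```

Only the outer layer inside the box is removed; interior voxels keep their opacity, so the cut exposes them without deleting them.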


Figure 1. Cutaway of the skin of a CT data set.

Another strategy is to assign material properties to different regions or layers of a volume and simulate the physical response to the deformation and cutting of such regions [5,6,10]. Figure 2, for example, shows the result of simulating a peel away on a CT scan of a piggy bank to reveal a number of coins in its interior.


Figure 2. Volume peeling of a CT scan of a piggy bank [3].

Although the deformation “unrealistically” simulates an elastic material for the piggy bank, the metaphor is effective for depicting the internal structures.

More realistic effects are obtained by simulating the response of tissue, such as skin, to incisions and retractions, as used in real surgical procedures. Figure 3 shows the result of peeling the skin and muscle layers of a foot CT scan to reveal the internal vessels (left) or bone (right).


Figure 3. Volumetric cuts on a CT anatomical data set of a foot to reveal superficial vessels (left) or bone (right) [4].

Rigid and deformable cuts, although effective for visualizing the internal structures, work under the premise that the internal and external layers are clearly separated. In a more general sense, this separation is not easy to come by, and, in most cases, there is a degree of uncertainty. For this reason, the effective visualization of internal structure must rely on robust classification.


Advanced Classification


The main challenge when attempting to see internal features remains that of classification. An effective visualization must first decide what it is that we must preserve and what regions are unimportant. Traditional classification systems, found in off-the-shelf visualization systems, only consider a single dimension for classification, without regard to the spatial characteristics or the semantics of the data. However, volume data seldom contain any semantics about the captured structures. Acquisition technology outputs a series of images with intensity values, while simulations of 3D phenomena sample a continuous scalar or vector field on a grid.

In an attempt to extract semantic information, one may analyze the spatial properties of the data, such as the location of boundaries [9], regions of high curvature [7], shape [12] or size [6]. In most of these cases, these properties are just approximations of the local distribution of data in a small neighborhood. Size, for example, can be measured as the extents of regions of a certain homogeneity. Regions of a certain material, such as brain, that occupy a large volume have different properties than regions, such as skull and skin, that are relatively thin.
These observations have enabled us to construct classifications based on size, and assign opacity based on the relative thickness of features. A particular example is the visualization of brain MRI, where the data comprise a series of thin layers (i.e., skin, skull and surrounding tissue) enclosing a large region of a certain material, the brain. Exploiting these properties lets us minimize the effects of occluding tissue, such as skin, to reveal the brain tissue clearly, as shown in Figure 4.
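A crude sketch of this idea, using the voxel count of each connected component as a stand-in for feature size (the actual method of [6] uses a more principled relative-thickness measure). The threshold and opacity values are illustrative assumptions, and SciPy is assumed available for connected-component labeling:

```python
import numpy as np
from scipy import ndimage

def size_based_opacity(mask, alpha_small=0.05, alpha_large=0.9,
                       size_threshold=10):
    """Assign an opacity to each connected component of a binary
    mask: small/thin components (e.g. skin or skull layers) become
    nearly transparent, large ones (e.g. the brain) stay opaque."""
    labels, n = ndimage.label(mask)
    opacity = np.zeros(mask.shape, dtype=float)
    for i in range(1, n + 1):
        component = labels == i
        opacity[component] = (alpha_large
                              if component.sum() >= size_threshold
                              else alpha_small)
    return opacity

# Toy 1D "scan line": a 2-voxel sliver and a 12-voxel bulk region.
mask = np.zeros(20, dtype=bool)
mask[0:2] = True
mask[5:17] = True
```

The sliver is rendered nearly transparent while the bulk region keeps full opacity, mimicking how a thin skin layer can be suppressed in favor of the brain.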


Figure 4. Example of a size-based transfer function to visualize the brain in an MRI data set. On the left, we show the original rendering, where the brain is obscured by the skin [6].

Other approaches do not operate on the data itself but on the rendering process. For example, importance-driven techniques [13] and ghosted views [2,8] assign different opacities in a viewpoint dependent manner, so that the user constantly gets an uninterrupted view of internal structures. A different approach, opacity peeling, automatically finds the layers that compose an image from a given point of view [11].
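The layer-finding step of opacity peeling can be sketched along a single viewing ray: accumulate opacity front to back, and start a new layer whenever the accumulation has saturated and the current sample is nearly transparent. The thresholds below are illustrative choices, not the values of [11]:

```python
import numpy as np

def opacity_peeling(ray_alphas, t_high=0.95, t_low=0.1):
    """Assign a layer index to every sample along one viewing ray, in
    the spirit of opacity peeling [11]: accumulate opacity front to
    back; once the accumulation saturates (> t_high) and the current
    sample is nearly transparent (< t_low), start a new layer."""
    layers = np.zeros(len(ray_alphas), dtype=int)
    accumulated, layer = 0.0, 0
    for i, a in enumerate(ray_alphas):
        if accumulated > t_high and a < t_low:
            layer += 1          # boundary between occluding layers
            accumulated = 0.0   # begin peeling the next layer
        layers[i] = layer
        accumulated += (1.0 - accumulated) * a  # front-to-back compositing
    return layers
```

Rendering only the samples of layer 1, say, shows the second surface the ray encounters, skipping past the outermost occluder.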

To be more effective, these approaches, along with traditional transfer function design, must incorporate a measure of visibility. Visibility can be measured as the contribution of a structure of interest to the final image. One of the limitations of contemporary visualization systems is the inability to quantify how visible a feature of interest is.
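One simple way to quantify this, sketched here for a single ray: weight each sample's opacity by the transmittance of everything in front of it, and report the fraction of the final pixel attributable to the structure of interest. The labels and opacity values are hypothetical:

```python
def structure_visibility(ray_alphas, ray_labels, structure):
    """Fraction of the final pixel's opacity contribution that comes
    from samples of the given structure, along one viewing ray. Each
    sample's opacity is weighted by the transmittance of everything
    in front of it (front-to-back compositing)."""
    transmittance, contribution, total = 1.0, 0.0, 0.0
    for a, label in zip(ray_alphas, ray_labels):
        weight = transmittance * a   # what this sample adds to the pixel
        total += weight
        if label == structure:
            contribution += weight
        transmittance *= (1.0 - a)   # light left after this sample
    return contribution / total if total > 0 else 0.0
```

Lowering the opacity of an occluding layer, for example, measurably raises the visibility of the structure behind it, which is the quantity a visibility-driven transfer function seeks to maximize.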


Figure 5. Visibility-driven rendering of a tomato CT data set. Note how the seeds and inside structures are clearly visible and the overall shape of a tomato is retained.

Visibility-driven classification can be used to alleviate these problems. This process measures the visibility of all structures in a volume to arrive at a good transfer function, either manually (Figure 5) or automatically (Figure 6). In general, a visibility-driven transfer function is constructed in such a way that it guarantees and maximizes the visibility of the structures of interest, in particular of those features lying in the interior of a data set.

The issue of visibility is not exclusive to medical data. Simulations of 3D phenomena often contain structures that evolve and are intertwined in 3D space with other, less interesting structures. Therefore, visualization of internal flow becomes difficult. Figure 6 shows an example of a supernova entropy simulation, where layers of entropy occlude each other. Incorporating visibility gradually provides a clearer view of internal flow.


Outlook


Obtaining clear views of internal structures remains an elusive goal in scientific and medical visualization. Future visualization systems, powered by fast graphics hardware and multi-core architectures, will be able to analyze and extract meaningful structures from volume data, such as size and shape descriptors, to isolate internal regions from external layers. Interactive tools, such as deformation and cutaways, will become more effective with such a classification and with an explicit measure of visibility that guarantees that internal structures are represented in the rendered images.



Figure 6. Visibility-driven rendering of a supernova entropy simulation. Without regard to visibility (left), internal flow is difficult to see. As we incorporate visibility, these structures become more apparent [5].



About the author:


Carlos D. Correa

is a postdoctoral researcher at the University of California at Davis in the VIDI research group. He received the B.Sc. degree in computer science from EAFIT University, Colombia, in 1998, and the M.Sc. and Ph.D. degrees in electrical and computer engineering from Rutgers University, New Jersey, in 2003 and 2007, respectively. His research interests are computer graphics, visualization and user interaction. His current research and activities can be seen at http://vis.cs.ucdavis.edu/~correac.



References

  1. S. Bruckner and M. E. Gröller. Exploded views for volume data. IEEE Transactions on Visualization and Computer Graphics, 12(5):1077-1084, 2006.
  2. S. Bruckner, S. Grimm, A. Kanitsar, and M. E. Gröller. Illustrative context-preserving volume rendering. In Proceedings of EuroVis 2005, pages 69-76, 2005.
  3. C. D. Correa, D. Silver, and M. Chen. Discontinuous displacement mapping for volume graphics. In Proceedings of Volume Graphics 2006, pages 9-16, 2006.
  4. C. D. Correa, D. Silver, and M. Chen. Feature aligned volume manipulation for illustration and visualization. IEEE Transactions on Visualization and Computer Graphics, 12(5):1069-1076, 2006.
  5. C. D. Correa and K.-L. Ma. Visibility-driven transfer functions. In Proceedings of the IEEE Pacific Visualization Symposium 2009, Beijing, China, 2009.
  6. C. D. Correa and K.-L. Ma. Size-based transfer functions: A new volume exploration technique. IEEE Transactions on Visualization and Computer Graphics, 14(6):1380-1387, 2008.
  7. G. Kindlmann, R. Whitaker, T. Tasdizen, and T. Möller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings of IEEE Visualization 2003, 2003.
  8. J. Krüger, J. Schneider, and R. Westermann. ClearView: An interactive context preserving hotspot visualization technique. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Visualization 2006), 2006.
  9. M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29-37, 1988.
  10. M. McGuffin, L. Tancau, and R. Balakrishnan. Using deformations for browsing volumetric data. In Proceedings of IEEE Visualization 2003, pages 401-408, 2003.
  11. C. Rezk-Salama and A. Kolb. Opacity peeling for direct volume rendering. Computer Graphics Forum, 25(3):597-606, 2006.
  12. Y. Sato, C.-F. Westin, A. Bhalerao, S. Nakajima, N. Shiraga, S. Tamura, and R. Kikinis. Tissue classification based on 3D local intensity structure for volume rendering. IEEE Transactions on Visualization and Computer Graphics, 6(2):160-180, 2000.
  13. I. Viola, A. Kanitsar, and M. E. Gröller. Importance-driven feature enhancement in volume visualization. IEEE Transactions on Visualization and Computer Graphics, 11(4):408-418, 2005.
  14. D. Weiskopf, K. Engel, and T. Ertl. Interactive clipping techniques for texture-based volume visualization and volume shading. IEEE Transactions on Visualization and Computer Graphics, 9(3):298-312, 2003.
