Vol.32 No.1 February 1998
Computer Graphics in Medicine: From Visualization to Surgery Simulation
Markus H. Gross
Medicine is an extremely challenging field of research which, more than any other discipline, has been of fundamental importance to human existence. The variety and inherent complexity of its unsolved problems have made it a major driving force for many natural and engineering sciences. Hence, from the early days of computer graphics, the medical field has been one of the most important application areas, with an enduring supply of exciting research challenges. Conversely, individual graphics tools and methods have become increasingly irreplaceable in modern medicine, of which medical imaging systems are only one prominent example.
The purpose of the following article is twofold: without claiming completeness, the first part gives a brief retrospective of the fruitful relationship between computer graphics and individual subareas of the medical field. We start with early imaging and 3D visualization and move via interactive, collaborative data analysis to the emerging field of surgery simulation. The second part of the paper presents a more detailed view of the interdisciplinary field of virtual and simulated surgery, which encompasses knowledge from medicine, computer graphics, computer vision, mechanics, material sciences, robotics and numeric analysis. The author describes the leading role of graphics and VR as core technologies and summarizes his personal view of the current and future research problems that must be pursued to realize the vision of fully interactive and immersive surgery simulation.
The Past: Insight Through Visualization
3D medical imaging systems, such as X-ray computed tomography (CT), magnetic resonance (MR) or nuclear medicine (SPECT) scanners, were revolutionary developments in modern diagnostics that conquered the field starting in the early ‘70s. These methods give insight into almost every individual section of the human body and have saved countless lives through the early diagnosis of tumors, heart diseases and other conditions.
The data sets, usually defined on equally spaced 3D grids, were initially visualized as long sequences of individual slices, sometimes with the help of pseudocolor. The search for more sophisticated ways of visualizing this new type of volume data created an enormous push in a new subfield of computer graphics: volume rendering. Based on landmark works of Blinn and Kajiya, many different algorithms have been developed for the efficient and realistic rendering of volumes. The variety of approaches ranges from simple back-to-front compositing to sophisticated lighting and shading models implemented via ray tracers. The resulting images are highly realistic and have become ubiquitous.
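The core of such back-to-front rendering is a simple compositing recurrence along each viewing ray. The following minimal sketch (an illustration of the general principle, not any specific published algorithm) blends pre-classified color and opacity samples for a single ray:

```python
def composite_back_to_front(samples):
    """samples: list of (color, opacity) pairs along one ray,
    ordered front to back. Returns the accumulated ray color."""
    color = 0.0
    # Walk from the farthest sample toward the viewer,
    # blending each sample over the accumulated background.
    for c, alpha in reversed(samples):
        color = alpha * c + (1.0 - alpha) * color
    return color
```

For example, a half-opaque bright sample in front of a dim one yields `composite_back_to_front([(1.0, 0.5), (0.2, 0.25)])`, i.e. 0.525. Real volume renderers apply the same recurrence per color channel, with a transfer function mapping scalar voxel values to color and opacity.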
Besides direct approaches to volume rendering, the geometric reconstruction of anatomic structures gained increasing attention, since it enabled the further processing and analysis of medically important features. In this context, data segmentation has proven to be a critical preprocessing step. In addition, higher-level volume data analysis algorithms have allowed the identification of individual anatomic substructures.
Generally, volume data feature extraction and interpretation is a prime example of the fruitful relationship and convergence of graphics and vision. Here, the computer vision community has developed many different strategies, some of which belong to the repertoire of any graphics researcher working in the medical field (for instance, John Canny's famous edge detector). Conversely, many graphics methods are integral parts of sophisticated computer vision techniques (for instance, parametric polynomial surfaces).
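To make the vision side concrete, the sketch below shows the gradient-magnitude computation at the heart of edge detectors such as Canny's; the Gaussian smoothing, non-maximum suppression and hysteresis stages of the full detector are omitted here:

```python
def gradient_magnitude(img):
    """img: 2D list of floats (an image slice). Returns a same-sized
    2D list of central-difference gradient magnitudes (zero at the
    border, where the central difference is undefined)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # horizontal gradient
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # vertical gradient
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Thresholding this magnitude map gives a crude edge image; in medical data, such gradients also drive gradient-based shading in volume renderers.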
Many of the graphics and vision methods designed in the early days are presently well established and support advanced applications in medicine, such as radiation and operation planning, prosthesis design, dental treatment, education and training and many others.
The Present: Interactive Exploration and Telecollaboration
Whereas, in the early days, graphics and visualization algorithms were mostly designed as preprocessors producing still images, the emerging high performance computing and graphics hardware swiftly conquered the field and changed the way medical data was analyzed. Nowadays, hardware-assisted real-time rendering of complex shapes and volumes allows for the interactive exploration and analysis of huge amounts of medical data.
In addition, emerging 3D input and output devices allow the development of virtual reality systems for interactive journeys through the human body. Highly immersive interfaces generate compelling imagery and are currently being investigated as enabling tools for the next generation of medical training and education systems. In some other application scenarios, VR systems are envisioned to replace costly and dangerous invasive methods. Yet, in most cases the lack of force feedback turns out to be a critical issue, because it seriously diminishes the degree of realism.
Figure 1: A JAVA applet for distributed compression domain volume rendering.
Figure 2: User interface of the telecooperation system KAMEDIN. Image provided courtesy of the Computer Graphics Center, Darmstadt, Germany.
In times where medical information systems and databases are commonplace in most hospitals, networked compression, visualization and analysis methods gain more and more importance. Typical requirements encompass fast searching and browsing of large CT or MR databases as well as scalability to the network performance and computational power of the clients. As an example, Figure 1 presents a JAVA applet for rendering remotely located volume data sets instantaneously from a highly compressed file format.
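The article does not detail the compression scheme used by the applet; purely to illustrate the client-side decoding idea, here is a toy run-length decoder (real systems use far more powerful transforms), showing how voxels can be shipped in compact form and expanded only on the client:

```python
def rle_decode(runs):
    """runs: list of (value, count) pairs, the compressed stream.
    Returns the flat list of decoded voxel values."""
    voxels = []
    for value, count in runs:
        voxels.extend([value] * count)  # expand each run
    return voxels
```

Medical volumes contain large homogeneous regions (air, background), which is why even such trivial schemes compress well; scalability to client bandwidth then amounts to choosing how much of the stream to decode.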
Another vast subfield of graphics in medicine is collaborative diagnosis and telepresence. In radiology, for instance, physicians often need instantaneous access to a specialist located at a remote site. Whereas in the past images were exchanged as hardcopies via surface mail, emerging high-speed digital networks open the door for real-time distributed and collaborative diagnosis. Medical telecooperation projects and systems are currently supported by various telecommunications companies, and in some cases they are already available as commercial products. Figure 2 shows the user interface layout of a typical telecooperation system for radiologists.
Besides high-end CT and MR data sets, graphics researchers are discovering the lower end of medical imaging systems. Here, ultrasound devices dominate the scene, and various approaches for the rendering and reconstruction of ultrasound data sets can be found in contemporary literature. Although noise and alignment problems make robust segmentation and feature extraction much more difficult, it is expected that some of these methods will be part of next-generation ultrasound systems.
Other research groups design high-end telepresence systems that enable the surgeon to perform telesurgery using haptic and robotic interfaces. Additionally, augmented reality systems have been used for interactive diagnosis and medical check-ups. Images taken from a camera on the physician's head are superimposed with simultaneously recorded ultrasonic scans and composited in a head-mounted display. Real-time projections of 3D reconstructions of individual anatomic features allow the localization and visual inspection of the inner parts of the human body.
Furthermore, 3D hardcopy devices, such as stereolithographs, have been widely and successfully used in medicine. Current research activities include the automatic computation and reproduction of prostheses or missing bones, based on geometric reconstruction. In particular, the symmetry of the human anatomy is exploited to compute missing pieces, which are then reproduced by milling machines and implanted by surgeons. Additionally, 3D hardcopies help in the study of anatomic deviations or malformations and in the planning of complex surgical procedures, such as facial surgery.
Figure 3: Physics-based modeling of facial soft tissue: a) presurgical facial shape profile, b) postsurgical profile, c) postsurgical frontal view. Data source: Visible Human Project, courtesy of the National Library of Medicine.
Besides the mere geometric reconstruction of anatomic structures, graphics researchers have developed models to simulate the physics of the human body. Presently, one of the most challenging and attractive subfields here is the fast and accurate computation of the mechanical behavior of soft tissue, which is an essential prerequisite for the vast and mostly unexplored area of virtual and simulated surgery. Although we are still far from a mature computational model, some existing soft-tissue models perform surprisingly well and can therefore be considered a first step towards more sophisticated approaches. An example from facial surgery simulation is given in Figure 3, where pre- and postsurgical facial shapes of a case study are presented for a lower jaw bone repositioning. In this particular application, highly realistic images of the postsurgical appearance of the patient's face are of enormous importance in alleviating the patient's fears.
The Future: Virtual Operation and Surgery Simulation
Undoubtedly, future key applications of graphics in the medical field include operation planning and surgery simulation. Here, we foresee fully immersive real-time simulation environments in which surgeons can learn, plan and rehearse complex operations on individual patient models, supported by sophisticated input devices such as virtual scalpels. The design of such simulation environments is a highly interdisciplinary task that requires input from a variety of disciplines, including graphics, vision, robotics, numeric analysis, mechanics and material sciences, hardware design and, needless to say, medicine. However, unlike any other discipline, computer graphics will be paramount, since it takes the role of the system architect.
The conceptual components of an advanced surgery simulation environment can be summarized as follows: In a first step, raw data sets have to be acquired by highly accurate 3D medical imaging or vision systems. Subsequent preprocessing steps extract anatomic substructures and create geometric models of the patient, annotated with material parameters drawn from an appropriate material database. A sophisticated modeler allows the surgeon to modify the geometry and topology of individual parts of the derived model while simulating cuts, bone repositioning, transplants and so on. Force feedback, as a function of the underlying material, has to be computed and interpolated to meet the high update rates needed to match the temporal resolution of the tactile sensory channel. Tissue volume forces and deformation fields, as well as collision detection, must be computed in real time, since they supply the parameters for visual and force feedback. In essence, fast approximations of the underlying differential equations have to be found. Although for some applications, such as facial surgery simulation, more expensive and accurate solution strategies could compute the exact deformation fields in batch mode, the design of appropriate real-time engines remains the most challenging part of the simulation. As an option, appropriate rendering algorithms might generate photorealistic still images of the patient.
Data Acquisition and Analysis
Since CT and MR scanners still have limited resolution, highly accurate 3D surface data acquisition is fundamental for some application scenarios. In this context, fast and robust 3D range finding methods are necessary to satisfy the resolution constraints imposed by the simulation environment. Furthermore, color and texture samples must be recorded to supply subsequent rendering methods with realistic surface appearance.
Especially when using both volume and surface data, manifold registration and alignment problems arise, including surface-surface, surface-volume and volume-volume registration. Robust (semi)automatic methods are desirable.
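For the rigid part of such registration problems, a closed-form solution exists once corresponding landmarks are known. The sketch below uses the classical SVD-based Procrustes (Kabsch) method, a standard technique rather than one specific to this article; finding the correspondences themselves, as in surface-surface registration, remains the harder part:

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over corresponding points p in src, q in dst (both (N, 3) arrays)."""
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Iterating this solve with nearest-neighbor correspondence guesses yields the well-known ICP scheme used for surface-surface alignment.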
Since the early days of medical imaging, segmentation and feature extraction have lost none of their importance, and despite decades of research, no robust and fully automatic method is in sight. For many problems, semiautomatic strategies might be an appropriate alternative, and a collaborative effort of computer graphics and computer vision researchers is necessary to accomplish this goal.
The Human-Computer Interface
The design of a sophisticated, natural human-computer interface is certainly one of the keys to successful simulation. The illusion of a full medical working environment can only be created with the help of advanced virtual reality hardware. Contemporary output devices, such as head-mounted displays, CAVEs, VisionDomes and Workbenches, are promising tools for studying the behavior and acceptance of medical users. However, we foresee smart, small, lightweight and very high resolution eyeglass displays as the ultimate devices for mediating visual information to the surgeon. We believe that future display technology will provide a new generation of sophisticated solutions.
Much more important and fundamental for any surgeon than the display itself is the provision of highly accurate tactile and force feedback. We therefore require haptic interfaces that faithfully reproduce the responses of human tissue to mechanical stimuli. One of the major problems, however, is that the haptic device must be thoroughly tailored to the underlying application: facial surgery simulation, for instance, requires a completely different setup than laparoscopy. Moreover, complex surgical procedures usually employ extensive sets of individual mechanical tools. Although various haptic interfaces are available in research labs or on the market and enable first experimental steps in the right direction, we are still far from convenient solutions. All in all, the design of individual force feedback devices is strongly influenced by the application context and by the underlying physics, and optimizing for users' demands requires multiple iterations and design cycles. Therefore, successful solutions can only be developed through tight collaboration among researchers from medicine, robotics and computer graphics.
Mechanics and Numeric Analysis
The differential equations governing volumetric tissue deformation have their roots in mechanics. In the past, linear volumetric strain and deformation models have been extensively investigated and are widely available for various kinds of material simulation. In surgery simulation, however, large strains and deformations occur and the material tensors become non-constant. The resulting phenomena are highly nonlinear and extremely difficult to model. Other effects relate to the compressibility of human soft tissue: its high liquid content makes it almost incompressible, yet local forces generated during surgery squeeze liquid out and alter the mechanical behavior of the material.
Besides mathematical models, appropriate material parameter databases are of great importance in surgery simulation. Moduli of elasticity or nonlinearity are usually functions of age, sex, ethnic group and other factors. Unfortunately, it is extremely difficult to obtain the desired parameters, and extensive experimental research has to be pursued. Here, we need the help of material scientists to provide us with appropriate sets of parameters and interpolation models for individual patients and tissue types.
Strategies for the optimal solution of the partial differential equations depend critically on the complexity of the underlying mechanical model. In particular, in order to achieve real-time response we have to balance mathematical accuracy against computational effort. Numeric analysts have developed various classes of solvers, of which the finite element method (FEM) is only one prominent example. Once the physics has been simplified, appropriate solution strategies have to be designed in close cooperation with numeric analysts. Here, hierarchy, progression and localization might be some of the keywords to success.
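As one concrete example of trading accuracy for speed, many real-time tissue engines replace the continuum model with a mass-spring system advanced by explicit integration. The following sketch (a generic textbook scheme, not a method from this article) advances a damped one-dimensional chain of point masses by a single explicit-Euler time step:

```python
def step(pos, vel, rest_len, k, damping, mass, dt, fixed):
    """Advance a 1D chain of point masses connected by springs.
    pos, vel: lists of floats; fixed: set of pinned node indices.
    Returns the updated (pos, vel)."""
    n = len(pos)
    force = [0.0] * n
    for i in range(n - 1):                 # spring between node i and i+1
        stretch = (pos[i + 1] - pos[i]) - rest_len
        f = k * stretch                    # Hooke's law
        force[i] += f                      # pull node i toward node i+1
        force[i + 1] -= f                  # equal and opposite reaction
    for i in range(n):
        if i in fixed:
            continue                       # pinned nodes do not move
        force[i] -= damping * vel[i]       # simple viscous damping
        vel[i] += dt * force[i] / mass     # explicit Euler update
        pos[i] += dt * vel[i]
    return pos, vel
```

The scheme costs only a few operations per node per frame, which is why it meets haptic update rates, but it is conditionally stable and only a crude approximation of the nonlinear continuum mechanics described above.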
The Paramount Role of Computer Graphics
The particular attractiveness of highly complex surgery simulation environments lies in the versatility and depth of the individual research problems, which touch or cover almost every important subfield of computer graphics. It is this breadth of research challenges that gives graphics its paramount role in surgery simulation.
About the Author
Markus H. Gross is a full Professor in the Computer Science Department of the Swiss Federal Institute of Technology (ETH) in Zürich. He received a degree in Electrical and Computer Engineering in 1986 and a Ph.D. in Computer Graphics and Image Analysis in 1989, both from the University of Saarbrücken, Germany. From 1990 to 1994, he was with the Computer Graphics Center in Darmstadt, where he established and directed the Visual Computing Group. His current research interests include physics-based and multiresolution methods for graphics, with applications in the medical field.
In summary, from the earliest days, the medical field has been one of the most appealing areas for computer graphics research. As the chief engineer of the next generation of medical training and simulation systems, computer graphics will help to improve our present health care system. Thus it will provide a significant contribution to our society and to modern human life.