Computer Graphics (CG) Quarterly, Volume 42, Number 2

Interacting with Three Dimensional Flow Fields

Author - Han-Wei Shen, The Ohio State University


"Han-Wei Shen is known for his outstanding work in 3D flow visualization. In this article, he shows us a new approach to depicting 3D vector fields that results in artistic and revealing pictures." — Kwan-Liu Ma


As teraflop and petaflop computers become increasingly accessible, large-scale simulations that can produce three-dimensional, time-dependent scalar and vector data begin to play an instrumental role in problem solving and scientific discovery. For the past two decades, researchers have developed various visualization techniques to enable effective analysis of scientific data. Among the many research directions, visualization of vector fields has remained one of the most active research topics in the field.

Generally speaking, two popular types of methods are used to visualize 2D/3D vector fields. The first is to synthesize textures that follow the flow directions everywhere in the field. One example of such techniques is the Line Integral Convolution (LIC) algorithm [1]. Although LIC was originally designed for visualizing 2D vector fields, various 3D extensions of LIC have also been proposed [2][3]. Figure 1 shows examples of 2D and 3D flow textures generated by extensions of LIC.

 
Figure 1.  Visualizing 2D and 3D vector fields using flow textures
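To give a feel for the LIC idea, here is a minimal sketch, not the original algorithm of [1]: it uses nearest-neighbor sampling and unit steps along the flow rather than proper streamline integration, and averages a noise texture along a short path through each pixel.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10):
    """Minimal LIC sketch: for each pixel, average a noise texture along
    a short path traced through the vector field in both directions
    (vx, vy, and noise are 2D arrays of the same shape)."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # trace forward and backward
                y, x = float(i), float(j)
                for _ in range(length):
                    iy, ix = int(y), int(x)
                    if not (0 <= iy < h and 0 <= ix < w):
                        break
                    # note: the center pixel is visited by both passes;
                    # that is acceptable for a sketch
                    total += noise[iy, ix]
                    count += 1
                    u, v = vx[iy, ix], vy[iy, ix]
                    mag = np.hypot(u, v)
                    if mag < 1e-9:            # stop at critical points
                        break
                    x += sign * u / mag       # unit step along the flow
                    y += sign * v / mag
            out[i, j] = total / max(count, 1)
    return out
```

Because each output pixel is an average of noise samples taken along the local flow direction, the resulting texture is smeared along streamlines, which is what makes the flow direction visible.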

The other method for visualizing flow data is to insert massless particles into the field and observe their trajectories, computed with numerical integration. This is sometimes referred to as particle tracing. While straightforward, it remains the more popular approach because the output resembles what scientists are used to seeing in traditional experimental flow visualization. From the particle trajectories, geometric objects such as streamlines, stream ribbons, or stream surfaces can be created and visualized. For a time-varying flow field, the particle traces can be used to construct pathlines, streaklines, or timelines. Figure 2 shows an example of numerical flow visualization.


 
Figure 2. Numerical Visualization of a Vector Field. 
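The numerical integration behind particle tracing can be sketched as follows. This is a generic fourth-order Runge-Kutta integrator, not any particular system's implementation; `velocity` stands for whatever procedure evaluates the (typically interpolated) vector field at a point.

```python
import numpy as np

def trace_streamline(velocity, seed, h=0.05, steps=200):
    """Particle-tracing sketch: integrate dx/dt = v(x) from a seed
    point with a fourth-order Runge-Kutta scheme. `velocity` is any
    callable mapping a position array to a velocity array."""
    pts = [np.asarray(seed, dtype=float)]
    x = pts[0]
    for _ in range(steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * h * k1)
        k3 = velocity(x + 0.5 * h * k2)
        k4 = velocity(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(x)
    return np.array(pts)
```

For example, for the rotational field v(x, y, z) = (−y, x, 0), the traced points stay on a circle around the z-axis, which is the exact streamline through the seed.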

Since streamlines are still widely used by application scientists, visualization researchers have put a lot of effort into making streamline visualization easier and more effective. In this article, I focus on an important issue in visualizing vector fields with streamlines: how to interact with 3D vector field data and place streamline seeds so that the final visualization is insightful and free of clutter.

Several seed placement algorithms have been proposed in the past [4,5,6]. The general goal of this research is to generate enough streamlines to show important flow features without creating visual clutter. To date, most of the available methods are two dimensional, with very little work done for 3D data. Among the existing 2D algorithms, one common clutter-reduction strategy is to generate streamlines that are evenly distributed in space, so that no two streamlines come too close to each other. While those algorithms work reasonably well for 2D data, extending them to 3D is nontrivial: when 3D streamlines are projected to the screen, they can arbitrarily intersect or overlap with one another, and the final image can easily become cluttered if too many streamlines are displayed.
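The evenly-spaced strategy of [5] comes down to a distance test on candidate seeds. A minimal sketch of that test follows; this brute-force version is illustrative only (the published algorithm accelerates the query with a spatial grid), and the name `accept_seed` is an assumption, not the paper's API.

```python
import numpy as np

def accept_seed(candidate, placed_points, d_sep):
    """Evenly-spaced-streamlines test: accept a candidate seed only if
    it lies at least d_sep away from every point already covered by an
    existing streamline. Brute-force distance check for illustration."""
    if len(placed_points) == 0:
        return True                      # first streamline is always accepted
    d = np.linalg.norm(np.asarray(placed_points) - np.asarray(candidate), axis=1)
    return bool(d.min() >= d_sep)
```

Applying the same test to the sample points of a streamline during integration, rather than only to its seed, is what keeps the finished lines from converging closer than d_sep.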

To tackle the 3D streamline placement problem, let's first consider how artists draw. Starting from a blank canvas, strokes are placed one at a time in selected local regions, which are often projections of the 3D objects the artist has in mind; for example, the surface of a river or the body of a cloud. Depending on the desired effect, the distance between strokes is carefully controlled. The key observation is that even though the objects being drawn are three dimensional, all the strokes are placed and controlled on the canvas, which is a 2D space. Figure 3 shows a famous sketch of water flow by Leonardo da Vinci.


 
Figure 3. Drawing by Leonardo da Vinci.


Inspired by the drawing process described above, we have recently developed an image-based streamline placement algorithm for visualizing 3D streamlines. The major goal of the method is to let the user directly interact with the 3D vector data on the screen. Another goal is to tightly couple the exploration of flow data with the visualization of other related variables produced by the same simulation, since multiple variables generated from a simulation are often correlated. To define the region of interest, the user can employ standard visualization techniques such as isosurfaces or slicing planes, computed from flow-related quantities such as velocity magnitude, flow density, energy, or pressure. The user can interactively explore the data by changing the isovalue or the orientation of the slicing plane. When a region of interest is found, the user can query the flow directions in that region with the algorithm described below.

To place streamlines, we first read back the depth values stored in the depth buffer, based on what is being visualized on the screen. The user then picks a point on the screen inside the region of interest. From the depth value and the picked screen position, we can infer the corresponding 3D position in object space and drop a seed there. From the seed, we compute a streamline through the 3D flow field and immediately project it back to the screen. From this first streamline, we pick additional seeds on the screen at a preset screen-space distance from the existing streamline. As we integrate those streamlines, we keep monitoring their screen-space distance to the existing ones; if two streamlines come too close to each other, we immediately stop the integration so that the final image does not become cluttered.
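The unprojection step, recovering the object-space seed under a picked pixel from its depth value, might look like the sketch below. It assumes OpenGL-style conventions (depth stored in [0, 1], normalized device coordinates in [−1, 1]); the function and parameter names are illustrative, not the paper's actual code.

```python
import numpy as np

def unproject(px, py, depth, inv_viewproj, width, height):
    """Recover the object-space point under a picked pixel, given the
    depth-buffer value there and the inverse of the combined
    projection-view matrix (inv_viewproj = inverse(projection @ view))."""
    # screen pixel (sampled at its center) -> normalized device coordinates
    ndc = np.array([2.0 * (px + 0.5) / width - 1.0,
                    2.0 * (py + 0.5) / height - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    world = inv_viewproj @ ndc
    return world[:3] / world[3]          # perspective divide
```

The returned point is where the streamline seed is dropped; integrating from it and reprojecting with the forward matrix brings the streamline back into screen space for the distance tests.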

With this algorithm, we can generate streamline visualizations that are free of visual clutter. Figure 4 shows an isosurface of an implicit stream function and the streamlines generated from the surface. Additional run-time control over the appearance of the streamlines can also be added. For example, we can perform level-of-detail rendering of streamlines: with a constant screen-space distance threshold, fewer streamlines are generated as we zoom out of the domain, because the visualized object becomes smaller on screen. Figure 5 shows an example of level-of-detail rendering of streamlines.
         

Figure 4. An isosurface of a stream function and the corresponding streamlines.


We can also display streamlines in different depth layers. Since streamline seeds are dropped on the screen and attached to the 3D positions defined by the depth map, we can easily modify the seed positions by increasing or decreasing the depth values from the original depth map. We can even 'cut a region open' by modifying the depth values in the depth map with a mask and dropping seeds there. Figure 6 shows an example.



Figure 5. Level of detail rendering of streamlines.


Figure 6. Controlling the streamline layers by manipulating the depth map.
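The depth-map manipulation described above can be sketched in a few lines. The name `offset_depth` and the clamping of depth values to [0, 1] are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def offset_depth(depth_map, mask, offset):
    """Push the seeding layer deeper (or pull it nearer) inside a masked
    region, so that seeds dropped there attach to a different flow layer.
    depth_map holds per-pixel depths in [0, 1]; mask is a boolean array."""
    edited = depth_map.copy()
    edited[mask] = np.clip(edited[mask] + offset, 0.0, 1.0)
    return edited
```

Unprojecting seeds through the edited map instead of the original one places them on the shifted layer, which is what produces the 'cut open' effect of Figure 6.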

Although our algorithm computes streamline seeds in image space, that does not mean the final visualization can be generated only from one particular view. The algorithm is designed to let the user choose a particular viewpoint, generate a set of streamlines, and then switch to a different camera view. The user can work from a few different views and then compose all the generated streamlines together to create the final visualization. Figure 7 shows an example of combining streamlines generated from two views.

Because the only inputs to our algorithm are the flow field and a depth map produced by other visualization techniques, we can easily use objects of various shapes to place the streamline seeds. Our seed placement algorithm does not need to know the underlying representation of the object (whether it is a parametric surface, an OpenGL rendering primitive, or a polygonal model) as long as the depth buffer from rendering those objects is available. Once the streamlines are drawn to the screen, we can also map textures of different kinds onto them to create different illustrative effects. Figure 8 shows one example.

The algorithm presented here is only the first step of our effort toward providing scientists with an intuitive user interface for exploring 3D flow fields. We believe that an intuitive interface should allow users to directly interact with the objects they can see, which motivated us to develop this image-space algorithm. In the future, we will develop additional interfaces to let users explore complex flow fields more effectively.

References

[1] B. Cabral and C. Leedom. Imaging vector fields using line integral convolution. In Proceedings of SIGGRAPH 93, pages 263–270, 1993.
[2] C. Rezk-Salama, P. Hastreiter, C. Teitzel, and T. Ertl. Interactive exploration of volume line integral convolution based on 3D-texture mapping. In IEEE Visualization '99, pages 233–240, 1999.
[3] H.-W. Shen, G.-S. Li, and U. Bordoloi. Interactive visualization of three-dimensional vector fields with flexible appearance control. IEEE Transactions on Visualization and Computer Graphics, 10(4):434–445, 2004.
[4] G. Turk and D. Banks. Image-guided streamline placement. In Proceedings of ACM SIGGRAPH '96, pages 453–460, 1996.
[5] B. Jobard and W. Lefer. Creating evenly-spaced streamlines of arbitrary density. In Visualization in Scientific Computing, pages 43–56, 1997.
[6] V. Verma, D. T. Kao, and A. Pang. A flow-guided streamline seeding strategy. In IEEE Visualization, pages 163–170, 2000.


Figure 7. Streamlines generated from two views and combined together.

Figure 8. Streamlines visualized by using two different illustrative styles.

