siggraph.org | Computer Graphics (CG) Quarterly, Volume 44, Number 1

Understanding Earthquakes with Advanced Visualization

"Dr. Hsieh has done extensive work in earthquake data visualization. After what recently happened in Haiti, this article is timely." - Kwan-Liu Ma, VisFiles editor

Author - Tung-Ju Hsieh, Assistant Professor, National Taipei University of Technology


Introduction

An earthquake is one of the most devastating forces in nature. On January 12, 2010, an earthquake of magnitude Mw 7.0 struck Haiti at 21:53:09 UTC. Figure 1 shows the epicenter of the Haiti earthquake, located at 18.457°N, 72.533°W, approximately 25 km west of the capital, Port-au-Prince, with a focal depth of 13 km. It was classified as a very shallow earthquake, and this shallowness was a major factor behind the extent of the devastation. Located at the boundary of the North American and Caribbean plates, this area accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate. A week after the earthquake, the United Nations Office for the Coordination of Humanitarian Affairs announced that there were approximately 75,000 dead, 200,000 injured, and 1 million homeless. An estimated 4,000 buildings collapsed, including the presidential palace, the parliament building, hospitals, and churches [1]. Figure 2 shows the destroyed presidential palace. The U.S. Geological Survey (USGS) recorded a series of 52 aftershocks of magnitude 4.5 or greater [2].


Figure 1. The epicenter of the 2010 Haiti earthquake was located at 18.457°N, 72.533°W, near Port-au-Prince. The Enriquillo-Plantain Garden fault contributed to the earthquake. (Source: BBC)


Figure 2. Haiti's presidential palace was seriously damaged during the earthquake; the second story collapsed completely.

During the past century, millions of earthquakes have been recorded worldwide, and seismologists seek to learn from these events to construct accurate analytical models that better predict future earthquakes. Based on observations collected since 1900, the National Earthquake Information Center (NEIC) estimates that millions of earthquakes occur worldwide each year. Table 1 shows the estimated frequency of earthquake occurrence.

Table 1: Estimated number of earthquakes worldwide each year based on observations since 1900 [3].



In dense urban regions, even a magnitude 4.0 earthquake may result in significant losses over widespread areas. Beyond the tragic loss of life, important civil infrastructure such as buildings, dams, and bridges may be damaged or destroyed, and critical lifeline systems such as power grids and water and gas lines interrupted. A better understanding of earthquake patterns is therefore desirable. Large earthquake events require a critical review of current seismic design guidelines and the development of new approaches. The study of statistical data characterizing historical events can greatly contribute to the development of new earthquake-resistant design guidelines. In addition, careful processing of measured earthquake data sets with interactive tools can assist seismologists in identifying new fault traces. Mapping these historical data sets against urban infrastructure networks such as telecommunication or power grids may assist in pre-event planning and hazard mitigation.

Seismographs record the ground motions of an earthquake and operate as part of a seismographic network. The output of a seismograph is proportional to ground velocity. In contrast, an accelerograph is designed to record strong ground motion, and its output is proportional to ground acceleration. Ground motions sensed by the seismograph are recorded for further analysis to determine the time, location, depth, and magnitude of an earthquake. The USGS provides access to field-measured seismic data. Historical earthquake records can be obtained from the National Earthquake Information Center (NEIC), part of the USGS Earthquake Hazards Program, which provides an extensive seismic database for scientific research compiled from the records of global seismograph networks. Currently, the NEIC registers about 50 earthquakes each day. Table 2 lists the number of recorded worldwide earthquakes.

Table 2: Number of earthquakes worldwide for 2000-2009 located by USGS NEIC [3].




Vast amounts of information are available from the USGS and can be used to explore our changing planet. A common approach to interconnecting multiple field data sets has been the use of geographic information system (GIS) mapping software packages such as ArcView or ArcGIS. Using such software frameworks, earthquake data is often overlaid onto layers of 2D maps with embedded GIS information [4]. However, as desktop computers and graphics hardware have become more powerful, scientists have begun to develop interactive 3D visualization tools for intuitive data manipulation that ease the interpretation of multi-dimensional and time-varying information.

Seismic data sets come mainly from two sources: (i) field measurements and (ii) numerical simulations. Earthquake simulations are important because they help researchers better understand seismic wave propagation characteristics and establish proper earthquake response plans to mitigate seismic hazards. Field-measured data is particularly useful for verifying earthquake simulation results. Without interactive visualization, exploring either kind of data would be difficult.



Visualization of Field-Measured Seismic Data Sets

A conventional approach to the analysis of field-collected data sets is to study earthquake-induced effects as time histories by plotting and comparing two-dimensional (2D) discrete waveforms. The frequency characteristics can then be studied by overlaying different plots. Nayak et al. [5] used 3D glyphs (graphics primitives or symbols with various geometric and color attributes) to represent measured seismic data. These glyphs were rendered in real time and combined with a 3D topographic terrain map. Large field-measured seismic data sets pose significant challenges for processing and analysis. Yuen et al. [6] presented a web-based system for visualizing seismic data. Wolfe et al. [7] presented a visualization system for examining seismic volume data generated from ultrasound reflections, looking for high-amplitude seismic events through a synoptic view of the interior. Tools were also developed for interpreting and illustrating 2D slices of seismic volumetric reflection data [8].



Visualization of Numerical Simulation Seismic Data Sets

Simulating the seismic wave propagation characteristics of a region can further aid in estimating the potential damage caused by hypothetical or actual earthquakes. For example, the ground motion of the 1906 San Francisco earthquake was simulated in [9]; the authors concluded that intense horizontal ground displacements developed in the San Francisco Bay Area along the entire length of the San Andreas fault plane that ruptured during the earthquake. Their simulation results were plotted on a 2D map using vectors at each grid cell, pointing in the direction of displacement. A series of grayscale images, taken at equal time intervals, indicated the evolution of the ground displacement field over time. Similarly, Hirahara et al. [10] conducted simulations of complex fault system ruptures and used a sequence of 2D plots to reveal the spatio-temporal evolution of seismic waves. Large 3D simulations of earthquake ground motions have also been overlaid on maps [11]. Another simulation of wave propagation for the San Francisco Bay Area, using a numerical finite-difference method, was performed in [12]. One result was a 90-second video showing wave propagation overlaid on a 2D contour map. The results of the 3D simulation were displayed on a 2D surface plane, with red representing positive amplitudes and blue representing negative amplitudes. A still larger-scale simulation was conducted in [13] to model wave propagation resulting from large earthquakes. An earth-scale 5.5-billion-grid-point crustal model (average grid surface spacing of 2.9 km) was constructed and simulated on a 1944-processor supercomputer. The results were visualized as seismic waveforms plotted on a 2D world map.

In order to fully explore the results of high-resolution simulation runs in combination with field-collected data, new rendering algorithms are needed that fuse different data sources to provide a geospatially and temporally anchored visual representation of the studied physical phenomenon. Zhang et al. [14] developed parallel rendering algorithms to visualize time-varying 3D volume data from earthquake simulations. Akcelik et al. [15] used a 3000-processor supercomputer to simulate an earthquake in the Los Angeles Basin using one billion grid points. This data was subsequently used to develop a parallel rendering algorithm capable of rendering time-varying, large-scale earthquake simulation results in real time. Ma et al. [16] rendered the earth-crust volume data semi-transparently, allowing viewers to see through the volume and revealing 3D seismic wave propagation originating from the hypocenter of a simulated earthquake event. Yu et al. [17] presented a parallel visualization algorithm for studying a large earthquake simulation modeling the 3D seismic wave propagation of the 1994 Northridge earthquake. Chourasia et al. [18] presented a case study of visualizing large-scale earthquake simulations on a supercomputer, encoding the rendered images into animations. These works processed simulation results after the fact; this decomposition of simulation and visualization became undesirable as large simulations moved into the terascale and petascale. In contrast, Tu et al. [19] presented a system in which the simulation and visualization components were tightly coupled, making real-time rendering of simulation data possible. Chopra et al. [20] presented visualization work on earthquake simulation data in an immersive virtual environment, designed to enhance collaboration between structural engineers, seismologists, and computer scientists.
These simulations provide a means to better understand earthquakes; however, it is also important to verify simulation results against field-collected seismic data. Komatitsch et al. [21] conducted a global-scale simulation of seismic wave propagation and superimposed the simulation data over field-measured waveform data.



Visualization of NEIC Historical Earthquake Records

A set of interactive 3D visualization tools was developed to facilitate the interpretation of multi-component, spatially and temporally varying data sets. When properly geo-referenced, complex historical earthquake data sets can be presented intuitively, enabling detailed evaluation of the relevant parameters. To provide a reference model for the earthquake data, a real-time terrain rendering engine is used to display USGS DEMs, available from the USGS Earth Resources Observation Systems (EROS) Data Center. DEMs are digital representations of elevation information on a raster grid of regularly spaced intervals, derived from the USGS topographic map series. A variety of topographic map images are texture-mapped on top of the terrain to provide a flexible visual representation of the different parameters. Earthquake data sets are then augmented in the form of geometric proxies representing hypocenter, magnitude, and year, and the resulting seismic data sets can be explored in real time.

Figure 3 shows the historical earthquake records for Washington State; a total of 2,583 earthquakes have been recorded in this area. A spherical proxy is rendered at each earthquake hypocenter: its radius represents the magnitude of the event and its color represents the year of occurrence. To keep small earthquake proxies visible, the magnitude also determines the proxy's translucency: the larger the event, the more translucent the proxy becomes. The earthquake hypocenters are displayed from a perspective viewpoint and color-coded by occurrence year. The 3D rendering highlights the hypocenter locations, magnitudes, and occurrence years, while detailed 3D terrain models and above-ground information reveal important terrain features. The user can interactively navigate through the data. Interactive visualization of earthquake records provides a means for scientists to understand geologic processes. In particular, interactive visualization of DEMs combined with field-collected earthquake data sets provides new insights that would be difficult to obtain from the numerical data alone. The following example, from the Washington State area, highlights the use of these data augmentation techniques to identify significant geologic events.
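The proxy encoding just described can be sketched as a small mapping function. The scale factors and the linear magnitude-to-alpha rule below are illustrative assumptions, not the parameters of the actual system:

```python
import numpy as np

def quake_proxy(magnitude, year, year_min=1973, year_max=2009):
    """Map one earthquake record to sphere-proxy attributes.

    Radius grows with magnitude; color interpolates blue (old) to red
    (recent); alpha decreases with magnitude so large events stay
    translucent and do not occlude small ones.
    """
    radius = 0.5 * magnitude                       # arbitrary world-space scale
    t = (year - year_min) / (year_max - year_min)  # 0 = oldest, 1 = newest
    color = (t, 0.0, 1.0 - t)                      # RGB: blue -> red
    # Larger magnitude -> lower alpha, clamped so proxies never vanish.
    alpha = float(np.clip(1.0 - 0.1 * magnitude, 0.2, 1.0))
    return radius, color, alpha
```

The clamp keeps the encoding monotone while guaranteeing every event, however large, remains at least faintly visible.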



Figure 3. Rendering of NEIC historical earthquake records augmented with DEMs for Washington State (46 to 49°N, 122 to 125°W). Red represents recent events (2009) and blue represents old events (1973).


Visualizing and Exploring Geologic Events: The 1980 Eruption of Mount St. Helens

Mount St. Helens has been more active and more violent during the past few thousand years than any other volcano in the contiguous United States, with the last two eruptions occurring in 1857 and 1980. Chronological visualization of earthquake records can reveal historic geologic events. From Figure 3, it immediately becomes clear that Mount St. Helens was subjected to a significant geologic event in 1980 (a cluster of earthquake records below Mount St. Helens). Combined with the knowledge that these events occurred underneath a rather substantial mountain, and associating them with possible volcanic activity, this information motivates further investigation of this unusual historical event. Major features of the 1980 events included a gradual increase in seismicity starting in mid-March and, on the morning of May 18, a magnitude 5.1 earthquake preceding the eruption. The data sets show the 374 earthquakes recorded below Mount St. Helens in 1980. Figure 4 shows the DEMs for the Mount St. Helens volcano before and after the eruption.


Figure 4: DEMs for Mount St. Helens: (a) before and (b) after the eruption in 1980.



Visualization of CSMIP Field-Measured Seismic Data Sets of the 1994 Northridge Earthquake

For the past few decades, the California Strong Motion Instrumentation Program (CSMIP) has installed accelerographs at representative locations throughout California to measure ground motions. Currently, more than 900 stations are in operation, including 650 ground-response, 170 building, 20 dam, and 60 bridge stations. The corresponding digitized acceleration recordings can be obtained through the Consortium of Organizations for Strong Motion Observation Systems (COSMOS) virtual data center.
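Since an accelerograph records acceleration, recovering ground displacement from such a record requires integrating it twice. A minimal sketch using the cumulative trapezoidal rule; real strong-motion processing also detrends and band-pass filters the record, which is omitted here:

```python
import numpy as np

def accel_to_displacement(accel, dt):
    """Double-integrate an accelerogram (m/s^2, sampled every dt seconds)
    to ground displacement (m) via the cumulative trapezoidal rule."""
    def cumtrapz(y):
        # Running trapezoidal integral, anchored at zero.
        return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2.0)))
    vel = cumtrapz(np.asarray(accel, dtype=float))  # acceleration -> velocity
    return cumtrapz(vel)                            # velocity -> displacement
```

For a constant 2 m/s^2 acceleration sampled at 0.1 s, the result matches the closed form d(t) = t^2 exactly, since the trapezoidal rule is exact for the linear velocity trace.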

The CSMIP field-measured tri-directional acceleration records of the 1994 Northridge earthquake can be used to generate an animation of ground motions during the earthquake. A total of 72 station records are available for the 1994 Northridge earthquake. Figure 5(a) shows the locations of these stations, and Figure 5(b) shows the Delaunay triangulation of the accelerometer network. Among existing triangulation schemes, the Delaunay triangulation is optimal in the sense that its smallest angles are at least as large as those of any other triangulation. Color coding is used to represent the amount of ground motion in the vertical direction: red represents the maximum displacement (30 cm), and blue represents zero displacement. During the animation, color-coded triangles are rendered to indicate the vertical ground motion. Figure 5(c) shows the color-coded Delaunay triangulation. Displacement mapping is a computer graphics technique that takes a displacement texture, such as a terrain height field, as input and manipulates the position of the rendered geometry. Effectively, this method adds detail to a terrain surface by perturbing the surface along its normals as directed by the displacement map. The result of the Delaunay triangulation can be used to create the displacement map shown in Figure 5(d), which serves as a reference for changing the vertical positions of the vertices in the terrain mesh. This allows time-varying displacement mapping to recreate the recorded ground movement starting from the epicenter.
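Rasterizing the per-station displacements into a grayscale map can be sketched with SciPy, whose LinearNDInterpolator performs linear interpolation over a Delaunay triangulation. The grid resolution and the 0-30 cm normalization follow the article's description; the function itself is an illustrative sketch, not CSMIP code:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def displacement_map(stations_xy, vertical_disp_cm, width=64, height=64):
    """Rasterize per-station vertical displacements (cm) into a
    normalized [0, 1] grayscale displacement map by linear interpolation
    over the Delaunay triangulation of the station network."""
    tri = Delaunay(stations_xy)                  # triangulate the network
    interp = LinearNDInterpolator(tri, vertical_disp_cm, fill_value=0.0)
    xs = np.linspace(stations_xy[:, 0].min(), stations_xy[:, 0].max(), width)
    ys = np.linspace(stations_xy[:, 1].min(), stations_xy[:, 1].max(), height)
    gx, gy = np.meshgrid(xs, ys)
    disp = interp(gx, gy)                        # cm; zero outside the hull
    return np.clip(disp / 30.0, 0.0, 1.0)        # normalize: 30 cm -> 1.0
```

The normalized map can then be written out as an 8-bit grayscale image, one per animation frame.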


Figure 5. (a) Locations of the 72 accelerograph stations that recorded the 1994 Northridge earthquake in southern California. (b) Delaunay triangulation of the 72 stations. (c) Color-coded vertical ground displacement map; red represents 30 cm and blue represents zero displacement. (d) Grayscale vertical ground displacement map.


Figure 6 shows a sequence of screenshots of color-coded vertical ground-movement maps of the 1994 Northridge earthquake. Figure 7 shows the corresponding 3D rendering, which clearly reveals the spatial relationship between terrain topography and the propagation of seismic waves. During playback of the displacement records, an individual displacement map is used for each frame to alter the 3D terrain surface and recreate the vertical ground movements. From the playback of the field-measured ground displacement, seismic wave propagation was studied for the greater Los Angeles area. It can be clearly seen that, for the first 10 seconds of the earthquake, the ground displacement in the San Fernando Valley near the epicenter is much larger than in the surrounding areas, and that the seismic wave is constrained and amplified by the valley at the beginning of the earthquake. Subsequently, the mountain ranges northwest of the epicenter experience larger ground displacement than the areas southeast of the epicenter, which belong to the Los Angeles Basin. The vertical displacements at various locations are easily observed, and the trigger times of each accelerometer can be visually identified.
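The per-frame playback amounts to adding each frame's displacement, scaled up for visibility, to the base terrain heights. A minimal sketch, where the exaggeration factor of 50 is an illustrative assumption rather than the value used in the actual renderer:

```python
import numpy as np

def apply_displacement(base_heights_m, disp_map, max_disp_cm=30.0, exaggeration=50.0):
    """Perturb a terrain height field (meters) by one frame's normalized
    [0, 1] displacement map. The few-centimeter ground motion is scaled
    by an exaggeration factor so it is visible against kilometer-scale
    terrain."""
    disp_m = disp_map * max_disp_cm / 100.0       # back to meters
    return base_heights_m + exaggeration * disp_m # exaggerated vertical offset
```

Calling this once per frame with the corresponding displacement map yields the exaggerated vertical ground motion seen in Figure 7.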


Figure 6. Color-coded vertical ground displacement maps corresponding to sensor data recorded on January 17th, 1994 at (a) 12:30:56.6, (b) 12:31:04.0, (c) 12:31:04.2, and (d) 12:31:04.4 UTC.


Figure 7. Screenshots of the 3D rendering of terrain models and the corresponding exaggerated vertical ground displacement recorded on January 17th, 1994 at (a) 12:30:56.6, (b) 12:31:04.0, (c) 12:31:04.2, and (d) 12:31:04.4 UTC.


Visualization of TSMIP Field-Measured Seismic Data Sets of the 1999 Taiwan Chi-Chi Earthquake

The magnitude 7.6 Chi-Chi earthquake struck Taiwan in 1999. It was Taiwan's largest earthquake of the twentieth century and caused a surface rupture approximately 100 km long. The Taiwan Strong Motion Instrumentation Program (TSMIP) had completed the installation of strong-motion accelerographs in 1996, before the event occurred. A sequence of ground-motion wave-field maps on a 350x200 regular grid covering the entire island of Taiwan was generated from raw data obtained from the TSMIP sensor network. The result is a total of 1000 ground-motion wave-field maps at 0.1-second intervals, forming a 1000x350x200 volume data set. This allows the temporal information of the field-measured seismic data sets to be examined spatially, so that the time history can be directly observed [22]. Figure 8 shows volume rendering of the field-measured data of Taiwan's Chi-Chi earthquake. The red bubble-shaped regions indicate the locations and durations of major energy releases. The epicenter is located near the first few major energy releases.
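Assembling such a space-time volume is a simple stacking step. A hedged sketch: the gridding of raw TSMIP records onto the 350x200 grid is assumed to have happened upstream, and the amplitude threshold used to flag "bubble" regions is an assumption, not a value from the article:

```python
import numpy as np

def build_wavefield_volume(frames):
    """Stack per-timestep 2D wave-field maps (0.1 s apart in the TSMIP
    data) into a single (time, y, x) volume for volume rendering."""
    return np.stack([np.asarray(f, dtype=np.float32) for f in frames], axis=0)

def energy_release_regions(volume, threshold):
    """Return (t, y, x) indices whose amplitude exceeds `threshold`,
    a crude stand-in for the red bubble regions that the volume
    rendering reveals."""
    return np.argwhere(volume > threshold)
```

Because time becomes the first volume axis, the duration of an energy release shows up directly as the extent of a bubble along that axis.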


Figure 8. Volume rendering of field-measured data of Taiwan's Chi-Chi earthquake. The temporal information is presented spatially. The evolution of these bubbles shows how seismic waves propagated across Taiwan.



Conclusions

During the past century, millions of earthquakes have been recorded worldwide by modern seismometers, and seismologists seek to learn from these events to construct accurate analytical models and thus better predict the future distribution of earthquakes in space, time, and magnitude. The development of these improved models requires a careful and thorough understanding of historical earthquake events and their effects. Field-measured seismic data, in combination with topographic structures and new exploration techniques, is of utmost importance in this process. When properly geo-referenced and processed, seismic data sets can be presented in a natural and intuitive form that facilitates understanding of the governing events and mechanisms. In particular, interactive visualization provides additional visual cues and an intuitive representation of time-varying wave propagation that is difficult to convey with conventional 2D map plotting techniques. Field-collected seismic data sets can be rendered in 3D, geo-referenced with Digital Elevation Models (DEMs), and subsequently displayed. These visual paradigms provide an enhanced understanding of field-measured seismic data sets.



References

[1] United States Agency for International Development (USAID) http://www.usaid.gov/
[2] United States Geological Survey (USGS) http://www.usgs.gov/
[3] USGS Earthquake Facts and Statistics http://earthquake.usgs.gov/
[4] G. Leonard, Z. Somer, Y. Bartal, and B. Horin, 2002, “GIS as a Tool for Seismological Data Processing”, Pure and Applied Geophysics, 159:945-967.
[5] A. M. Nayak, K. Lindquist, R. Newman, D. Kilb, F. L. Vernon, A. Johnson, J. Leigh, and L. Renambot, 2003, “Using 3D glyph visualization to explore real-time seismic data on immersive and high-resolution display systems”, Transactions of the American Geophysical Union, 84(46):ED32C–1208.
[6] D. A. Yuen, B. J. Kadlec, E. F. Bollig, W. Dzwinel, Z. A. Garbow, and C. R. S. da Silva, 2005, “Clustering and Visualization of Earthquake Data in a Grid Environment”, Visual Geosciences, 1–12.
[7] R. H. Wolfe, Jr. and C. N. Liu, 1988, “Interactive Visualization of 3D Seismic Data: A Volumetric Method”, IEEE Computer Graphics and Applications, 8(4):24–30.
[8] D. Patel, C. Giertsen, J. Thurmond, J. Gjelberg, and E. Groller, 2008, “The Seismic Analyzer: Interpreting and Illustrating 2D Seismic Data”. IEEE Transactions on Visualization and Computer Graphics, 14(6):1571–1578.
[9] G. P. Mavroeidis and A.S. Papageorgiou, 2001, “Simulation of long-period near-field ground motion for the great 1906 San Francisco earthquake”, Seismological Research Letters, 72:227.
[10] K. Hirahara, N. Kato, T. Miyatake, T. Hori, M. Hyodo, J. Inn, N. Mitsui, Y. Wada, T. Miyamura, Y. Nakama, T. Kanai, and M. Iizuka, 2004, “Simulation of Earthquake Generation Process in a Complex System of Faults”, Technical report, Annual Report of the Earth Simulator Center.
[11] A. Chourasia, S. Cutchin, and B. Aagaard, 2008, “Visualizing the Ground Motions of the 1906 San Francisco Earthquake”, Computers and Geosciences, 34(12):1798–1805.
[12] M. Antolik, C. Stidham, D. Dreger, S. Larsen, A. Lomax, and B. Romanowicz, 1997, “2-D and 3-D models of broadband wave propagation in the San Francisco Bay region and north Coast Ranges”, Seismological Research Letters, 68:328.
[13] D. Komatitsch, S. Tsuboi, C. Ji, and J. Tromp, 2003, “A 14.6 billion degrees of freedom, 5 teraflops, 2.5 terabyte earthquake simulation on the earth simulator”, In ACM/IEEE Supercomputing 2003, 4.
[14] H. Zhang, S. Chen, S. Chen, S. Chen, H. Jing, D. A. Yuen, and Y. Shi, 2008, “Parallel Visualization of Seismic Wave Propagation”, Visual Geosciences, 13(1):1610–2924.
[15] V. Akcelik, J. Bielak, G. Biros, I. Epanomeritakis, A. Fernandez, O. Ghattas, E.J. Kim, J. Lopez, D. O’Hallaron, T. Tu, and J. Urbanic, 2003, “High resolution forward and inverse earthquake modeling on terascale computers”, In ACM/IEEE Supercomputing 2003, 52.
[16] K.-L. Ma, A. Stompel, J. Bielak, O. Ghattas, and E.J. Kim, 2003, “Visualizing very large-scale earthquake simulations”, In ACM/IEEE Supercomputing 2003, 48.
[17] H. Yu, K.-L. Ma, and J. Welling, 2004, “A Parallel Visualization Pipeline for Terascale Earthquake Simulations”, In ACM/IEEE Supercomputing 2004.
[18] A. Chourasia, S. Cutchin, Y. Cui, R. W. Moore, K. Olsen, S. M. Day, J. B. Minster, P. Maechling, and T. H. Jordan, 2007, “Visual Insights into High-Resolution Earthquake Simulations”, IEEE Computer Graphics and Applications, 27(5):28–34.
[19] T. Tu, H. Yu, L. Ramirez-Guzman, J. Bielak, O. Ghattas, K.-L. Ma, and D. R. O’Hallaron, 2006, “From Mesh Generation to Scientific Visualization: An End-to-End Approach to Parallel Supercomputing”, In Proceedings of the International Conference for High-Performance Computing, Networking, Storage and Analysis.
[20] P. Chopra, J. Meyer, and A. Fernandez, 2002, “Immersive Volume Visualization of Seismic Simulations: A Case Study of Techniques Invented and Lessons Learned”, In IEEE Visualization 2002, 497–500.
[21] D. Komatitsch, S. Tsuboi, C. Ji, and J. Tromp, 2003, “A 14.6 Billion Degrees of Freedom, 5 Teraflops, 2.5 Terabyte Earthquake Simulation on the Earth Simulator”, In ACM/IEEE Supercomputing 2003, 4.
[22] T.-J. Hsieh, C.-K. Chen, K.-L. Ma, 2010, “Visualizing Field-Measured Seismic Data”, In IEEE PacificVis 2010.


About the Author


Tung-Ju Hsieh

is an Assistant Professor in the Department of Computer Science and Information Engineering at the National Taipei University of Technology. Prior to his current role, he was a postdoctoral researcher at the California Institute for Telecommunications and Information Technology (Calit2). He received his Ph.D. in Electrical and Computer Engineering from the University of California, Irvine in 2006. His research areas include scientific visualization and computer graphics, with an emphasis on using real-time visualization technology to explore massive data sets.

