Extending the Spatiotemporal Scales of Physics-Based Seismic Hazard Analysis – Lead: Jordan
Project Description
Earthquake system science presents some of the most challenging computational pathways in
geoscience. Taken from end to end (“rupture-to-rafters”), the earthquake problem comprises the loading
and failure of tectonic faults, the generation and propagation of seismic waves, ground shaking, and—in
its application to seismic risk—the damage caused by earthquakes to the built environment. The complex
cascades of physical processes that lead to the emergence of earthquake phenomena involve a wide
variety of interactions, some highly nonlinear and multiscale.
The Community Modeling Environment (CME) of the Southern California Earthquake Center (SCEC)
is a large collaboration that involves seismologists, computer scientists, structural geologists, and
engineers in the development of high-performance computational platforms. The main science goal of the
CME Collaboration is to transform seismic hazard analysis into a physics-based science through HPC
implementations of four computational pathways (Figure 1).
1. Target Problem: Reduce Epistemic Uncertainty in Physics-Based PSHA by Extending the
Spatiotemporal Scales of Deterministic Earthquake Simulations
Seismic hazard analysis (SHA) is the scientific basis for many engineering and social applications:
performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster
preparation, emergency response, and public education. All of these applications require a probabilistic
form of seismic hazard analysis (PSHA) to express the deep uncertainties in the prediction of future
seismic shaking [Baker and Cornell, 2005]. Earthquake forecasting models comprise two types of
uncertainty: an aleatory variability that describes the randomness of the earthquake system [Budnitz et
al., 1997], and an epistemic uncertainty that characterizes our lack of knowledge about the system.
According to this distinction, epistemic uncertainty can be reduced by increasing relevant knowledge,
whereas the aleatory variability is intrinsic to the system representation and is therefore irreducible within
that representation [Budnitz et al., 1997; Goldstein, 2013].
As currently applied in the United States and other countries, PSHA is largely empirical, based on
parametric representations of fault rupture rates and strong ground motions that are adjusted to fit the
available data. The data are often very limited, especially for earthquakes of large magnitude M. In
California, for instance, no major earthquake (M > 7) has occurred on the San Andreas fault during the
post-1906 instrumental era. Consequently, the forecasting uncertainty of current PSHA models, such as
the U.S. National Seismic Hazard Mapping Project (NSHMP), is very high [Petersen et al., 2008; Stein et
al., 2012]. Reducing this uncertainty has become the “holy grail” of PSHA [Strasser et al., 2009].
In physics-based PSHA, we can reduce the overall uncertainty through an iterative modeling process
that involves two sequential steps: (1) we first introduce new model components that translate aleatory
variability into epistemic uncertainty, and (2) we then assimilate new data into these representations to
reduce the epistemic uncertainty.
As a primary example, consider the effects of three-dimensional (3D) geological heterogeneities in
Earth’s crust, which scatter seismic wavefields and cause local amplifications in strong ground motions
that can exceed an order of magnitude. In empirical PSHA (Pathway 0 of Figure 1), an ergodic
assumption is made that treats most of this variability as aleatory [Anderson and Brune, 1999]. In physics-based PSHA, crustal structure is represented by a 3D seismic velocity model—a community velocity
model (CVM) in SCEC lingo—and the ground motions are modeled for specified earthquake sources
using an anelastic wave propagation (AWP) code, which accounts for 3D effects (Pathway 2 of Figure 1).
As new earthquakes are recorded and other data are gathered (e.g., from oil exploration), the CVM is
modified to fit the observed seismic waveforms (via Pathway 4 of Figure 1), which reduces the epistemic
uncertainties in the Pathway 2 calculations.
The proposed project will strive to reduce the epistemic uncertainty in PSHA through more accurate
deterministic simulations of earthquakes at higher frequencies. A variance-decomposition analysis of
SCEC’s CyberShake simulation-based hazard model for the Los Angeles region (Figure 2) indicates that
accurate earthquake simulation has the potential for reducing the residual variance of the strong-motion
predictions by at least a factor of two, which would lower exceedance probabilities at high hazards by an
order of magnitude [Wang and Jordan, 2014]. The practical ramifications of this probability gain in the
formulation of risk-reduction strategies would be substantial [Strasser et al., 2009].
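To give a sense of the scale of this effect, the sketch below evaluates exceedance probabilities for a lognormal ground-motion model before and after a factor-of-two reduction in residual variance. The median, threshold, and standard deviation are assumed for illustration and are not taken from the CyberShake analysis.

```python
# Illustrative only: how halving the residual variance of a lognormal
# ground-motion model shrinks exceedance probabilities at high thresholds.
# The numbers below (median, sigma, threshold) are assumed for the example.
import math

def exceedance_prob(ln_median, sigma, ln_threshold):
    """P(ln Y > ln_threshold) for ln Y ~ Normal(ln_median, sigma)."""
    z = (ln_threshold - ln_median) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

ln_median = math.log(0.2)                   # assumed median spectral acceleration, 0.2 g
sigma_full = 0.6                            # assumed total sigma (natural-log units)
sigma_half_var = sigma_full / math.sqrt(2)  # variance reduced by a factor of two
ln_threshold = math.log(1.0)                # assumed high hazard level, 1.0 g

p_full = exceedance_prob(ln_median, sigma_full, ln_threshold)
p_reduced = exceedance_prob(ln_median, sigma_half_var, ln_threshold)
print(f"P(exceed) with full variance:   {p_full:.2e}")
print(f"P(exceed) with halved variance: {p_reduced:.2e}")
print(f"probability ratio: {p_full / p_reduced:.1f}x")
```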
2. Intellectual Merit
This project will expand the spatiotemporal scales of physics-based PSHA with the goal of reducing the
epistemic uncertainties in anelastic wavefield modeling (Pathway 2) and dynamic rupture modeling
(Pathway 3). We propose to extend the upper frequency limit of physics-based PSHA from 0.5 Hz, the limit of
our current CyberShake models, to 2 Hz, and we propose to push the computation of realistic scenarios for
large San Andreas earthquakes to 4 Hz and beyond. Our eventual goal is to extend physics-based PSHA
across the full bandwidth needed for seismic building codes; i.e., up to 10 Hz.
Such high frequencies have thus far been inaccessible owing to a series of physical, geological, and
computational limitations. As documented in this proposal, the computational limitations are being
overcome by the optimization of our simulation codes on the largest, most capable CPU and GPU
systems; in particular, we are focusing on making judicious use of hybrid CPU-GPU systems like Blue
Waters. These computational advances have significantly reduced the time-to-solution of our simulations.
In the case of our heterogeneous CyberShake workflows, access to Blue Waters has meant time-to-solution speedups of over one order of magnitude (Table 1).
At the same time, we are conducting basic seismological research to address the scale limitations in
our models of source physics, anelasticity, and geologic heterogeneity:

Source physics. High-frequency simulations require new source models to capture the small-scale
processes of wave excitation. In particular, we must model the fractal roughness observed on fault
surfaces, as well as other geometrical complexities of fault systems [Shi and Day, 2012a,b, 2013a,b].
Slip on non-planar faults generates large stress perturbations that accelerate and decelerate the
rupture, releasing bursts of high-frequency seismic waves (Pathway 3 example in Figure 1). Because
these dynamic stresses can exceed the yield stress of near-fault rocks, we must model the
surrounding medium as a nonlinear (plastic) solid. To incorporate these source complexities into the
CyberShake calculations, which make use of elastic reciprocal relations, we must capture the
nonlinear wave-excitation effects in equivalent kinematic (pseudo-dynamic) rupture models. Near-source plasticity requires that the pseudo-dynamic representation be volumetric, increasing its
dimensionality from two to three.

Anelasticity. As seismic waves propagate, they dissipate kinetic energy into heat. The dependence of
ground motion simulations on the attenuation structure increases with frequency. In low-frequency
simulations (< 0.5 Hz), attenuation structure can be represented in terms of a spatially variable but
frequency-independent attenuation factor for pure shear, 𝑄𝜇⁻¹; attenuation in pure compression, 𝑄𝜅⁻¹,
can be ignored. At frequencies above 1 Hz, however, the attenuation becomes strongly frequency-dependent [Pasyanos, 2013] and the compressional attenuation becomes significant [Hauksson and
Shearer, 2006]. These effects increase the propagation efficiency of the shear-dominated waves and
raise their amplitudes relative to compressional waves.

Small-scale, near-surface heterogeneities. Seismic velocities drop to very low values near the free
surface. The excitation of this near-surface waveguide dictates the amplitudes of the seismic shaking
at surface sites. At low frequencies, the structure of this waveguide can be approximated by 3D
structures that are spatially smooth on scales of hundreds of meters, and appropriate representations
are provided by tomographic refinements of the deterministic CVMs. However, at high frequencies,
seismic wave scattering by small-scale heterogeneities becomes much more important. Stochastic
representations of these small-scale heterogeneities have been calibrated against borehole and
seismic reflection data in the Los Angeles region [Savran et al., 2013].
The proposed research will improve the physical representations of earthquake processes and the
deterministic codes for simulating earthquakes, which will benefit earthquake system science worldwide.
3. Project Objectives
SCEC/CME scientists have developed dynamic fault rupture (DFR) codes that accommodate rough-fault
geometries and near-fault plasticity and anelastic wave propagation (AWP) codes that incorporate
frequency-dependent attenuation and small-scale near-surface heterogeneities. The scalability of these
codes has been verified on Blue Waters. We propose to use these codes to advance physics-based
seismic hazard analysis through a coordinated program of numerical experimentation and large-scale
simulation targeted at three primary objectives:
O1. Validation of high-frequency simulations (up to 8 Hz) against seismic recordings of historical
earthquakes, such as the 1994 Northridge earthquake (M 6.7).
O2. Computation of a 2-Hz CyberShake hazard model for the Los Angeles region as input to the
development of high-resolution urban seismic hazard maps by the USGS and SCEC.
O3. High-frequency (4 Hz) simulation of a M7.8 earthquake on the San Andreas fault that will revise the
2008 Great California ShakeOut scenario and improve the risk analysis developed in detail for this
scenario.
We will achieve the validation objective O1 by increasing the seismic bandwidth and earthquake
magnitude of the simulations beyond our recent studies of small-magnitude events, such as the 2008
Chino Hills earthquake (M 5.4) [Mayhew and Olsen, 2012; Taborda and Bielak, 2013]. Validation against
larger events is necessary to qualify simulation-based models of seismic hazards for practical use. High-frequency source complexity will be obtained through SORD rough-fault simulations [Shi and Day, 2013],
and high-frequency scattering will be modeled by introducing small-scale heterogeneities to the CVM
[Savran et al., 2013]. As described in §5, properly conditioned sets of strong-motion data are already
available from SCEC’s validation of its 1D Broadband Platform (BBP-1D). We can directly apply the
workflows developed for BBP-1D validation, including ensemble averaging of stochastic realizations and
goodness-of-fit (GOF) measurement, to the validation of the 3D simulations.
Objective O2 will extend the CyberShake simulation-based hazard model from 0.5 Hz to 2.0 Hz. The
calculation of hazard maps at higher frequencies will improve the utility of the model for earthquake
engineering, owing to the sensitivity of smaller structures to high-frequency shaking. The simulations will
include frequency-dependent anelastic attenuation, which becomes important for frequencies greater
than 1 Hz [Withers et al, 2013]. Source complexity will be parameterized in a pseudo-dynamic rupture
model derived from dynamic fault ruptures (DFR) simulations of rough faults. High-frequency scattering
will be modeled using small-scale crustal inhomogeneities in the AWP simulations.
Objective O3 will revise a M 7.8 earthquake scenario on the southern San Andreas fault that has
been extensively used for disaster planning in California [Porter et al., 2011]. The proposed simulations
will leverage recent improvements of our AWP codes to calculate expected ground motions across all of
Southern California up to frequencies of 4 Hz. A calculation with these inner-to-outer scale ratios is
feasible only on Blue Waters (see Table 2). The code improvements will include the linear effects of
frequency-dependent attenuation and the nonlinear effects of Drucker-Prager plasticity, which have been
shown to reduce the long-period (< 0.5 Hz) ground motions for the ShakeOut scenario [Roten et al., 2014]
(see Figure 4) and other events [Taborda et al., 2012; Restrepo et al., 2012a,b]. Source complexity at
high frequencies will be modeled as small-scale fault roughness in the DFR simulations, and wavefield
scattering will be modeled using small-scale crustal inhomogeneities in the AWP simulations. The results
of these simulations will be made available for earthquake engineering and disaster planning studies to
revise the risk calculations derived from the original ShakeOut simulation.
The proposed research plan carefully matches SCEC/CME computational requirements to the CPU
and GPU capabilities of Blue Waters. This system is nearly ideal for running SCEC’s large-scale (>1M
node-hours) deterministic wave propagation simulations. In many cases, these simulations require high-performance processors, substantial memory per node (> 32 GB/node), fast I/O rates (> 500 GB/s), linear
I/O scaling (> 1,700 OSTs), and an extreme-scale heterogeneous computing environment (hybrid XE6/XK7),
a combination available on Blue Waters but not on other HPC systems. Our experience with CyberShake
calculations on Blue Waters has also demonstrated how the judicious use of both GPUs and CPUs can
improve research productivity and reduce time-to-solution. The Blue Waters system offers other unique
advantages, such as very large temporary data storage (24 PB file system and 1.2 PB disk cache) and
support for automated workflow tools, including remote job submission, for multi-week calculations.
4. Broader Impacts
California accounts for about three-quarters of the nation's long-term earthquake risk, and Southern
California accounts for nearly one half [FEMA, 2000]. The proposed project will support SCEC’s efforts to
translate basic research into practical products for reducing risk and improving community resilience in
California and elsewhere. One important product will be a revision of the M 7.8 earthquake scenario for
the southern San Andreas fault that drove the first (2008) Great California ShakeOut [Graves et al.,
2008]. This canonical scenario was the basis for a coordinated set of risk-analysis studies [ShakeOut
Special Volume, 2011], which have led to substantial revisions in earthquake preparedness plans
throughout Southern California. We will update the M 7.8 ShakeOut scenario with improved source and
structure models, and we will extend its seismic frequency range to 4 Hz.
Our primary goal is to reduce the epistemic uncertainty in physics-based PSHA by extending the
spatiotemporal scales of deterministic earthquake simulations. The feasibility of this goal is supported by
an analysis of the CyberShake results using the new technique of “averaging-based factorization” (ABF)
[Wang and Jordan, 2014]. By utilizing the large number of ground-motion predictions generated by the
simulation-based hazard models (about 240 million per CyberShake run), ABF can factor any seismic
intensity functional into the aleatory effects of source complexity and directivity and the deterministic
effects of 3D wave propagation. This variance-decomposition analysis demonstrates that improvements
to simulation-based hazard models such as CyberShake can, in principle, reduce the residual variance by
a factor of two relative to current NSHMP standards [Wang and Jordan, 2014]. The consequent decrease
in mean exceedance probabilities could be up to an order of magnitude at high hazard levels. Realizing
this gain in forecasting probability would have a broad impact on the prioritization and economic costs of
risk-reduction strategies [Strasser et al., 2009].
Earthquake system science is a difficult intellectual enterprise. Earthquakes emerge from complex,
multiscale interactions within active fault systems that are opaque and thus difficult to observe. They
cascade as chaotic chain reactions through the natural and built environments, and are thus difficult to
predict. The intellectual challenges faced in earthquake forecasting exemplify those of system-level
prediction in many other areas. Therefore, our research program is likely to inform other integrated
studies of complex geosystems. It will also enhance the computational environment for geosystem
studies by promoting a vertical integration through the various layers of HPC cyberinfrastructure.
5. Results from Prior NSF Support
NSF Award OCI-0832698: Petascale Research in Earthquake System Science on Blue Waters
(PressOn) and NEIS-P2 Supplement ($40,000 - PI: Thomas H. Jordan) from Oct 1, 2009 through
Sept 30, 2014. SCEC established its computational research program on Blue Waters through an
original PRAC award that started in 2009. We also received a NEIS-P2 supplement to this PRAC award
for porting our wave-propagation software onto GPUs. We developed a CPU/GPU version of our forward
wave propagation code (AWP-ODC-GPU), as well as a reciprocity-based version (AWP-ODC-GPU-SGT)
for computing CyberShake strain Green tensors (SGTs). These codes were targeted for use on NCSA
Blue Waters and OLCF Titan. We tested these codes on Blue Waters and developed an innovative co-scheduling approach that lets us run our parallel CyberShake SGT processing on Blue Waters GPUs,
while running the serial post-processing jobs on Blue Waters CPUs. We used the highly efficient AWP-ODC-GPU code on Titan to run a 10 Hz deterministic simulation using two alternative 3D velocity models
(Figure 1). The GPU development efforts have been documented by Cui et al. [2013a,b].
NSF Award OCI-1148493: SI2-SSI: A Sustainable Community Software Framework for Petascale
Earthquake Modeling ($2,522,000 PI: Thomas H. Jordan) from August 1, 2012 through July 30,
2015. This project supports the development of a Software Environment for Integrated Seismic Modeling
(SEISM) that comprises state-of-the-art computational platforms for generating a wide variety of
simulation-based hazard products. One is the SCEC 1D broadband platform (BBP-1D), which has been
designed to serve the needs of earthquake engineers for broadband (0-10 Hz) earthquake simulations.
Under this NSF grant and with additional support from the electrical power industry, BBP-1D has been
extended to include new ground motion simulation methods, and its performance has been validated
against historical earthquakes for use in the seismic hazard analysis of nuclear power plants in Northern
California, Southern California, and Arizona. The BBP-1D software released as open source in June 2013
(v13.6) includes three new seismic models, as well as extensive new goodness-of-fit methods that support the
validation activities. Phase 1 of the ground motion simulation validation included 7 well-recorded historical
earthquakes as well as selected Ground Motion Prediction Equation (GMPE) targets [Goulet, 2014].
An important SEISM effort is to validate 3D deterministic simulations using historical earthquakes at
seismic frequencies exceeding 1 Hz. Using our Hercules finite-element wave propagation software on Blue
Waters, we simulated the well-recorded Chino Hills (M 5.4) earthquake up to 4 Hz using extended fault
source models [Taborda and Bielak, 2013] and using different velocity models [Taborda and Bielak, 2014]
(Figure 3). Comparisons of these simulations with data support the need to incorporate frequency-dependent
anelastic attenuation and small-scale heterogeneity into the simulation models. Sonic logs taken in deep
boreholes in the Los Angeles Basin indicate that the
amplitude of the heterogeneity associated with sedimentary layering is strong at scale lengths of 10-100
m. We have developed stochastic models to represent this heterogeneity and have incorporated it into
preliminary sets of high-frequency simulations [Savran et al., 2013].
We have developed new capabilities for 3D numerical calculations of dynamic fault ruptures on non-planar faults, based on the SORD code, and we have applied these methods to study the effects of fault
roughness on rupture propagation and ground motion excitation. We have implemented off-fault plastic
yielding, rate-and-state fault friction, and grid generation to conform the computational mesh to fractal
fault-surface topography. The Pathway-3 SORD solutions have been linked to the Pathway-2 AWP-ODC
code to enable propagation of simulated ground motion to large distances. We used these new
capabilities to compute ruptures for M 7 and greater strike-slip earthquakes, propagate 0-10 Hz
wavefields up to 100 km distance, and compare the simulated ground motion statistics with standard
ground motion prediction equations (GMPEs). Synthetic response spectra show median distance and
period dependence, absolute level, and intra-event standard deviation that are remarkably similar to
GMPE estimates throughout the period range 0.1-3.0 s. These very high-resolution simulations
allow us to examine how frictional parameters, off-fault yielding, and characteristics of the fault roughness
spectrum combine to set high-frequency limits on earthquake source spectra [Shi and Day, 2012a,b,
2013; Withers et al., 2012, 2013a,b].
An important SEISM development is the CyberShake Platform, which is capable of generating the
very large suites of simulations (>10⁸ synthetic seismograms) needed for physics-based PSHA using
seismic reciprocity. The CyberShake PSHA Study 13.4, begun in April, 2013, used the UCERF2
earthquake rupture forecast and calculated hazard curves for 286 sites in southern California at
frequencies up to 0.5 Hz. The scientific goal of this calculation was to evaluate the impact of (a)
alternative wave propagation codes (AWP-ODC and AWP-RWG) and (b) alternative community velocity
models (CVM-S4 and CVM-H11.9). This study, calculated on NCSA Blue Waters and TACC Stampede,
was four times larger than our 2009 CyberShake study, but it was completed in approximately the same
wall-clock time (~61 days). The CyberShake 13.4 Study results are being used to calculate ground
motions more rapidly and accurately for use in the California Integrated Seismic Network (CISN)
ShakeAlert Earthquake Early Warning system [Böse et al., 2014] as well as for calculations in planned
updates to model building codes.
In addition to the validation efforts using the Chino Hills recordings, we simulated the M6.7 1994
Northridge earthquake at seismic frequencies up to 5 Hz, as part of our prior PetaUrbs project, which was
one component of our previous PRAC allocation on Blue Waters. These simulations allowed us to
characterize the level of scattering due to the presence of large inventories of building foundations and
study the structural response of buildings in dense urban settings under strong shaking conditions
[Isbiliroglu et al., 2014]. This was an important advance because our simulations will ultimately be used for
assessing urban seismic hazards and the risks associated with critical infrastructure systems such as dams
and nuclear power plants, as well as transportation and infrastructure networks.
A particular focus of the SEISM project has been to improve the underlying physics of our computational
platforms. Toward that end, we have incorporated off-fault nonlinear effects into the AWP-ODC platform
and tested their effects on the 2008 M 7.8 ShakeOut earthquake scenario. This scenario shows strong
long-period ground motions in the densely populated Los Angeles basin owing to the channeling of surface waves
through a series of interconnected sedimentary basins. By simulating the ShakeOut earthquake scenario
for a medium governed by Drucker-Prager plasticity, we have shown that nonlinear material behavior
could reduce these predictions by as much as 70% compared to the linear viscoelastic solutions (Figure
4). These reductions are primarily due to yielding near the fault, although yielding may also occur in the
shallow low-velocity deposits of the Los Angeles basin if cohesions are close to zero. Fault-zone plasticity
remains important even for conservative values of cohesions, and we can infer that current simulations
assuming a linear response of rocks are overpredicting ground motions during future large earthquakes
on the southern San Andreas fault [Roten et al., 2014]. Testing this hypothesis will have a high priority in
our proposed simulation program.
At high frequencies, anelastic attenuation needs to be modeled as a frequency-dependent Q.
Frequency dependence has been incorporated into the AWP-ODC code via a power-law formulation,
Q(f) = Q₀fⁿ, where Q is the quality factor, f is frequency, and Q₀ and n are parameters that vary with
position. We have adopted Day's [1998] coarse-grained numerical algorithm, which represents the
attenuation spectrum as a superposition of absorption bands [Withers et al., 2013].
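As a rough illustration of this idea (not the AWP-ODC implementation itself), the sketch below fits the weights of a small set of absorption-band mechanisms so that their summed attenuation follows a target power-law Q(f) over the simulation band; the band count, relaxation frequencies, and fitting band are assumed.

```python
# Sketch of fitting an absorption-band (standard-linear-solid) spectrum to a
# target power-law Q(f) = Q0 * f**n. Illustration only; band count, relaxation
# frequencies, and the fitting band are assumed for the example.
import numpy as np

Q0, n = 100.0, 0.6                      # assumed power-law parameters
freqs = np.logspace(-1, 1, 200)         # fit between 0.1 and 10 Hz
target_inv_q = 1.0 / (Q0 * freqs**n)    # target attenuation 1/Q(f)

# Eight absorption bands with log-spaced relaxation frequencies (cf. the
# coarse-grained method, which uses eight mechanisms).
relax_freqs = np.logspace(-1.5, 1.5, 8)
omega = 2 * np.pi * freqs[:, None]
omega_r = 2 * np.pi * relax_freqs[None, :]

# Each weakly attenuating mechanism contributes ~ y_k * (w/w_k) / (1 + (w/w_k)^2)
# to 1/Q; solve for the weights y_k in a least-squares sense.
basis = (omega / omega_r) / (1.0 + (omega / omega_r) ** 2)
weights, *_ = np.linalg.lstsq(basis, target_inv_q, rcond=None)
weights = np.clip(weights, 0.0, None)   # crude non-negativity safeguard

fitted_inv_q = basis @ weights
misfit = np.max(np.abs(fitted_inv_q - target_inv_q) / target_inv_q)
print(f"worst relative misfit of 1/Q over 0.1-10 Hz: {misfit:.2%}")
```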
NSF Award EAR-1226343: Geoinformatics: Community Computational Platforms for Developing
Three-Dimensional Models of Earth Structure ($1,550,000; P.I. Thomas H. Jordan), which runs
from September 1, 2012, to August 31, 2014. A main objective of SCEC’s Geoinformatics project is to
establish interoperable community computational platforms that facilitate the development and delivery of
seismic velocity models at a variety of scales. The Unified Community Velocity Model (UCVM) platform
developed under this project provides a common framework for comparing and synthesizing Earth
models and delivering model products to a wide community of geoscientists. The UCVM software enables
users to quickly build meshes for earthquake simulations through a standardized, high-speed query
interface. We have registered seven different CVMs into our current UCVM prototype. The UCVM open-source software has been ported to several HPC systems, including USC HPCC, TACC Stampede,
NCSA Blue Waters, and NWSC Yellowstone. UCVM’s interface has been used to perform high-speed,
parallel queries of CVM-S4 and CVM-H11.9 to build simulation meshes with more than 14 billion mesh
points, and its file utilities have been used to export meshes in both eTree and NetCDF formats.
Two different computational platforms are being employed in full-3D waveform tomography (F3DT):
the AWP-ODC finite-difference code [Olsen et al., 1995; Cui et al., 2008, 2010] and the SPECFEM3D
spectral element code [Komatitsch and Tromp, 2002; Peter et al., 2011]. The latter is capable of modeling
wave propagation using fully unstructured hexahedral meshes through aspherical structures of essentially
arbitrary complexity. Capabilities for its application in adjoint tomography have been extensively
developed. SPECFEM3D won the 2003 Gordon Bell Award for Peak Performance. Calculations on
Jaguar have attained seismic frequencies as high as 0.9 Hz on a global scale using 149,784 cores. This
widely used, open-source package has been partly optimized for GPU computing.
6. Description of the Computational Codes To Be Used
At the spatiotemporal scales required for physics-based hazard analysis, individual wave propagation
simulations can require millions of node-hours on Blue Waters (see Table 2). We are continuing to
optimize these simulations through development of GPU-based codes and the use of multi-resolution
meshes. Our PRAC research plan is to employ four primary scientific simulation codes: AWP-ODC, AWP-ODC-GPU, SORD, and Hercules. These codes can be mapped onto our three major work areas: velocity
model improvements (AWP-ODC and Hercules), earthquake source improvements (SORD), and wave
propagation improvements (AWP-ODC, AWP-ODC-GPU, and Hercules). We propose to use two different
core modeling schemes, namely, finite differences, in AWP-ODC [Cui et al, 2010] and AWP-ODC-GPU
[Cui et al., 2013], and finite elements, in SORD [Ely et al., 2010] and Hercules [Tu et al., 2006; Taborda et
al., 2010].
A. Application Platforms
AWP-ODC is a finite difference (FD) anelastic wave propagation Fortran code originally developed by K.
B. Olsen [Olsen et al., 1995], modified by S. Day to include dynamic rupture components and anelastic
attenuation, and optimized for petascale computing by Y. Cui [Cui et al., 2010, 2013]. AWP-ODC solves
the 3D velocity-stress wave equation explicitly by a staggered-grid FD method with 4th-order accuracy in
space and 2nd-order accuracy in time. It has two versions that provide equivalent results. The standard
version can efficiently calculate ground motions at many sites, as required by the SCEC PRAC project.
The SGT version can efficiently calculate ground motions from many ruptures at a single site by
producing SGTs, which serve as inputs to reciprocity-based calculations (e.g., CyberShake).
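The core update of such a scheme can be illustrated with a minimal one-dimensional velocity-stress leapfrog using the standard 4th-order staggered-grid weights; the grid, material values, and source below are assumptions for the sketch and are not taken from AWP-ODC.

```python
# Minimal 1D velocity-stress staggered-grid update (4th-order in space,
# 2nd-order leapfrog in time). Illustration of the numerical scheme only,
# not the 3D AWP-ODC solver; grid and material values are assumed.
import numpy as np

nx, dx, dt, nt = 400, 10.0, 1.0e-3, 500     # assumed grid and time step
rho, mu = 2500.0, 2500.0 * 2000.0**2        # density, shear modulus (Vs = 2 km/s)
c1, c2 = 9.0 / 8.0, -1.0 / 24.0             # 4th-order staggered-grid weights

v = np.zeros(nx)        # particle velocity at integer grid points
s = np.zeros(nx)        # stress at half-integer points
v[nx // 2] = 1.0        # impulsive source (assumed) at the grid center

def d4(f):
    """4th-order staggered first derivative; simple zero-padded edges."""
    g = np.zeros_like(f)
    g[2:-2] = (c1 * (f[3:-1] - f[2:-2]) + c2 * (f[4:] - f[1:-3])) / dx
    return g

for _ in range(nt):                      # leapfrog: stress, then velocity
    s += dt * mu * d4(v)                 # ds/dt = mu * dv/dx
    v += dt / rho * d4(np.roll(s, 1))    # dv/dt = (1/rho) * ds/dx (shifted stagger)

print("peak velocity after propagation:", float(np.abs(v).max()))
```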
In the CPU-based code, Perfectly Matched Layer (PML) absorbing boundary conditions (ABCs) are
implemented on the sides and bottom of the grid, and a zero-stress free surface boundary condition is
implemented at the top. In the GPU-based code, ABCs are implemented as simple ‘sponge layers’, which
damp the full wavefield within a boundary layer and yield an unconditionally stable numerical scheme.
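A minimal sketch of such a sponge taper, with an assumed layer width and damping strength, is shown below.

```python
# Sketch of a 'sponge layer' absorbing boundary: multiply the wavefield by a
# smooth taper that decays toward the grid edges (Cerjan-style damping).
# Layer width and damping strength are assumed for the example.
import numpy as np

nx, width, alpha = 400, 30, 0.015    # grid size, sponge width, damping strength

taper = np.ones(nx)
edge = np.arange(width)
profile = np.exp(-(alpha * (width - edge)) ** 2)   # ~1.0 at the interior side
taper[:width] = profile                            # left edge
taper[-width:] = profile[::-1]                     # right edge

# Inside the time loop the fields are simply multiplied by the taper:
#   v *= taper
#   s *= taper
print("taper at outermost point:", taper[0], "and at interior:", taper[width])
```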
Seismic waves are subjected to anelastic losses in the Earth, which can be quantified by quality
factors for S waves (Qs) and P waves (Qp). Early implementations of attenuation models included Maxwell
solid and standard linear solid models. Day’s efficient coarse-grained methodology significantly improved
the accuracy of earlier stress relaxation schemes [Olsen et al., 2003]. This method closely approximates
frequency-independent Q by incorporating a large number of absorption bands (eight in our calculations)
into the relaxation spectrum without sacrificing computational performance or memory. We have recently
implemented a frequency-dependent Q in AWP-ODC that can model the observed roll-off in anelastic
attenuation with increasing frequency. We modified the coarse-grained approach to accommodate a
relaxation spectrum optimized to fit a target Q(f) function. This method was cross-validated against a 1D
frequency-wavenumber code [Withers et al, 2013]. We have also added to AWP-ODC an implementation
of Drucker-Prager plasticity in the off-fault medium to account for the nonlinear behavior of large, near-fault strains [Roten et al., 2014].
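The sketch below shows the generic form of a Drucker-Prager yield check with a radial return of the deviatoric stress; the cohesion, friction angle, and the convention for matching the cone to Mohr-Coulomb parameters are assumptions of the sketch rather than the AWP-ODC implementation.

```python
# Schematic Drucker-Prager yield check with a radial return of the deviatoric
# stress. Cohesion, friction angle, and the Mohr-Coulomb matching convention
# are assumptions of this sketch, not the AWP-ODC implementation.
import numpy as np

cohesion = 5.0e6            # Pa (assumed)
phi = np.radians(30.0)      # friction angle (assumed)

def drucker_prager_return(sigma):
    """Return the stress tensor mapped back onto the Drucker-Prager yield surface."""
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)
    tau = np.sqrt(0.5 * np.sum(dev * dev))        # sqrt(J2)
    # One common matching of the cone to Mohr-Coulomb cohesion/friction:
    yield_tau = max(0.0, cohesion * np.cos(phi) - mean * np.sin(phi))
    if tau <= yield_tau:                          # elastic: no adjustment
        return sigma
    scale = yield_tau / tau                       # plastic: rescale deviator
    return mean * np.eye(3) + scale * dev

# Example: a stress state with large shear relative to its confinement
sigma_trial = np.array([[-30e6, 25e6, 0.0],
                        [ 25e6, -30e6, 0.0],
                        [ 0.0,   0.0, -30e6]])
print(drucker_prager_return(sigma_trial))
```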
Under development is a discontinuous meshing (DM) scheme that provides variable grid spacing and
is provably stable. DM reduces the AWP-ODC run time, because deeper regions with higher wave
speeds can be discretized with coarser grids than the low-velocity, near-surface layers.
AWP-ODC-GPU is a C and CUDA-based version of AWP-ODC. This code was restructured from the
original Fortran code to maximize performance on GPU-based accelerators. CUDA calls and kernels
were added to the application for single-GPU computation [Zhou et al., 2012]. In the multi-GPU
implementation, each GPU is controlled by an associated single CPU or multiple CPUs [Zhou et al.,
2013]. The code recently achieved sustained petaflops running a 0-10 Hz ground motion simulation on a
mesh comprising 443 billion elements that includes both small-scale fault geometry and media complexity
[Cui et al., 2013].
Hercules is an octree-based parallel finite element (FE) earthquake simulator developed by the Quake
Group at Carnegie Mellon University [Tu et al., 2006; Taborda et al., 2010]. The Hercules forward solver
has second-order accuracy in both time and space. Hercules can solve wave propagation problems in
nonlinear elasto-plastic media using a Drucker-Prager plasticity model, account for surface topography
using special finite elements, and model the built environment using simplified building models [Taborda
and Bielak, 2011; Restrepo et al., 2012a,b; Isbiliroglu et al., 2013]. A C++ and CUDA version of Hercules
is currently being developed for GPU and has undergone initial tests on Blue Waters XK nodes.
Hercules relies on a linear unstructured octree-based mesh generator and solves the elastic wave
equations by approximating the spatial variability of the displacements and the time evolution with trilinear elements and central differences, respectively. The code uses a plane-wave approximation of the
absorbing boundary condition and implements viscoelastic attenuation with a memory-efficient
combination of generalized Maxwell and Voigt models to approximate a frequency-independent Q [Bielak
et al., 2011]. The traction-free boundary conditions at the free-surface are natural in FEM, so no special
treatment is required. The solver computes displacements in an element-by-element fashion, scaling the
stiffness and lumped-mass matrix-templates according to the material properties and octant edge-size.
This approach allows considerable reductions in memory compared to standard FEM implementations
[Taborda et al., 2010].
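The element-by-element idea can be sketched with a toy mesh in which each element scales a reference stiffness template by its material properties and edge size, then gathers and scatters local vectors; the template, scaling rule, and mesh below are illustrative assumptions, not Hercules data structures.

```python
# Sketch of the element-by-element idea: no global stiffness matrix is
# assembled; each element scales a reference stiffness template by its
# material properties and edge size, then gathers/scatters local vectors.
# The template and mesh are toy assumptions, not the Hercules implementation.
import numpy as np

n_nodes = 6
# Toy 1D "elements" with 2 nodes each, standing in for hexahedral octants.
elements = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
mu = np.array([1.0, 1.0, 4.0, 4.0, 1.0])     # per-element shear modulus (assumed)
h = np.array([1.0, 1.0, 0.5, 0.5, 1.0])      # per-element edge size (assumed)

k_ref = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])              # reference stiffness template

u = np.linspace(0.0, 1.0, n_nodes)           # current displacements
f = np.zeros(n_nodes)                        # accumulated internal forces

for e, (a, b) in enumerate(elements):
    k_e = (mu[e] / h[e]) * k_ref             # scale template per element
    f_local = k_e @ np.array([u[a], u[b]])   # gather and apply
    f[a] += f_local[0]                       # scatter-accumulate
    f[b] += f_local[1]

print("internal force vector:", f)
```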
SORD, the Support Operator Rupture Dynamics code, is a structured FE code developed by Geoff Ely
and Steven Day at SDSU [Ely et al., 2010]. SORD models dynamic fault ruptures using logically
rectangular, but curved, hexahedral meshes that can be distorted to accommodate non-planar ruptures
and surface topography, achieving second-order accuracy in space and time. The Kelvin-Voigt model of
viscoelasticity, in which Q is inversely proportional to wave frequency, accounts for anelastic energy
losses and is effective in damping numerical noise that can feed back into the nonlinear rupture
calculations. Wave motions are computed on a logically rectangular hexahedral mesh using the
generalized finite difference method of support operators. Stiffness and viscous hourglass corrections are
employed to suppress zero-energy grid-oscillation modes. The fault surface is modeled by coupled
double nodes, where the strength of the coupling is determined by a linear slip-weakening friction law.
External boundaries may be reflective or absorbing; absorbing boundaries are handled using the PML
method. The hexahedral mesh can accommodate non-planar ruptures and surface topography [Ely et al.,
2010]. New features added to SORD include rate- and state-dependent friction, as well as Drucker-Prager plasticity [Shi, 2013].
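A minimal sketch of a linear slip-weakening strength function, with assumed frictional parameters, is shown below.

```python
# Sketch of the linear slip-weakening friction law used to couple the fault
# double nodes: frictional strength drops linearly from a static to a dynamic
# level over a critical slip distance Dc. Parameter values are assumed.
def slip_weakening_strength(slip, normal_stress, mu_s=0.6, mu_d=0.3,
                            d_c=0.4, cohesion=0.0):
    """Shear strength (Pa) as a function of cumulative slip (m)."""
    if slip < d_c:
        mu = mu_s - (mu_s - mu_d) * slip / d_c
    else:
        mu = mu_d
    # normal_stress taken positive in compression
    return cohesion + mu * normal_stress

for s in (0.0, 0.2, 0.4, 1.0):
    print(s, slip_weakening_strength(s, normal_stress=60e6))
```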
All of these application code sets have been extensively validated for a wide range of problems, from
simple point sources in a half-space to dipping extended faults in 3D crustal models [e.g., Bielak et al.,
2010]. In this project, AWP-ODC will be used to perform both forward wave propagation simulations and
full-3D tomography. Hercules will be used primarily to perform forward wave propagation simulations.
SORD will be used to perform dynamic rupture simulations.
B. Parallel Programming Languages and Libraries
The HPC applications involved in this PRAC project use the standard MPI communications protocol for
message passing. AWP-ODC is written in Fortran 90, AWP-ODC-GPU in C/CUDA, SORD in Fortran 95
with Python scripts, and Hercules in C. Output of these codes is done efficiently using MPI-I/O, and the
velocity or tensor output data are written to a single file. AWP-ODC also uses MPI-IO for input generation
and partitioning, while SORD currently uses serial input. Hercules uses components of the etree library,
developed at Carnegie Mellon as part of SCEC/CME’s Euclid project, for database management [Tu et
al., 2003; Taborda et al., 2010]. The etree library components are primarily used in the mesh generation
process for manipulating the material model. This guarantees fast query/retrieval transactions between
the program and the model, and it helps to order the mesh components (nodes and elements) in tables
with a z-order in memory. A C++/CUDA version of Hercules is in development for use on GPU/CPU
hybrid systems.
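A minimal mpi4py sketch of this single-file MPI-IO output pattern is shown below; the file name, per-rank block size, and layout are assumed for illustration (the SCEC codes implement this in Fortran and C).

```python
# Sketch of writing each rank's velocity output into a single shared file with
# collective MPI-IO writes (mpi4py). File name, block size, and layout are
# assumed for the example.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                                  # values owned by this rank (assumed)
local = np.full(n_local, float(rank), dtype=np.float32)

fh = MPI.File.Open(comm, "velocity_output.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * n_local * local.itemsize        # contiguous per-rank blocks
fh.Write_at_all(offset, local)                  # collective write to one file
fh.Close()

if rank == 0:
    print(f"{size} ranks wrote {size * n_local * 4} bytes to one file")
```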
We will use a community IO library developed as part of the SEISM middleware for managing the I/O
of earthquake simulations on petascale machines. This community IO framework has been tested against
validated modeling results and observed data. HDF5, pNetCDF and ADIOS libraries have been
integrated into the wave propagation engine for efficient volumetric outputs. A prototype version is now
available for the AWP-ODC solver. We plan to apply the library starting with SORD to allow scalable I/O
and efficient checkpointing and to support a PRAC high performance computing environment that
includes message-passing and parallel file access.
C. Computational Readiness
The computational software to be used in our proposed research is based on well-established, self-contained
community parallel solvers with excellent scalability and portability across a variety of systems, including
Blue Waters (BW), OLCF Titan, ALCF Mira, TACC Stampede, and NWSC Yellowstone. The nearest-neighbor
communication pattern of these codes benefits most from a 3D torus network topology due to its memory
access locality. AWP-ODC, for example, has demonstrated optimal topology mapping using the Cray
Topaware technique to match the virtual 3D Cartesian mesh topology of the typical mesh proportions to an
elongated physical subnet prism shape in the Blue Waters torus. By maximizing the faster-connected Blue
Waters XZ-plane allocation, with the longest virtual mesh topology edge aligned to the fastest-communicating
Blue Waters torus Z direction, we were able to obtain a tighter, more compact, cuboidal Blue Waters subnet
allocation with a smaller network diameter that allows the AWP-ODC code to achieve efficient global barrier
synchronization (Figure 5).
AWP-ODC was run on the ORNL XT5 Jaguar
supercomputer at full machine scale to simulate a
magnitude-8 earthquake on the San Andreas fault [Cui et al., 2010]; this project was a finalist for a
Gordon Bell award. AWP-ODC-GPU achieved sustained 2.3 Petaflop/s on Titan [Cui et al., 2013]. We
plan to add PML to this GPU-based code to support the high-resolution F3DT calculations of Pathway 4
(see Figure 1). Hercules is also a highly efficient and scalable code; its solver was the kernel of a Gordon
Bell award in 2003 and an HPC Analytics Challenge award in 2006. SORD is being actively expanded and
optimized by IBM, KAUST and ANL researchers for the best mix of Hybrid MPI/OpenMP, with a goal of
achieving optimal performance on large petascale systems, including Blue Waters.
Figure 6 shows the strong and weak scaling curves for the primary HPC codes used in this project.
AWP-ODC recorded super-linear speedup in strong scaling up to 223K Jaguar XT5 cores. The weak
scaling of AWP-ODC-GPU is nearly perfect with 94% of parallel efficiency on full Titan system scale.
Weak scaling is particularly important for CyberShake calculations, in which thousands of SGT
calculations are involved. SORD, whose scaling curves are not shown here, has demonstrated nearly
perfect weak scaling up to 65,536 cores on the ALCF Mira system.
Recent simulations [Taborda and Bielak, 2013, 2014; Isbiliroglu et al., 2013; Cui et al., 2010] have
demonstrated the capabilities of Hercules and AWP-ODC in simulating earthquakes at frequencies of
4 Hz and above on less capable systems than Blue Waters (Figure 6). Simulations at even higher
frequencies (up to 10 Hz) are clearly possible in the near future. These codes are being shared by a
number of researchers within and outside of the SCEC community. For example, CMU’s Quake Group
has shared Hercules on GitHub to facilitate open collaborations in earthquake simulation.
We are collaborating with other computer scientists to optimize our solvers for simultaneously
computing a single message-passing simulation across a hybrid XE6/XK7 environment with the goal of
running at the full Blue Waters system scale. Target simulations are described in Table 2 (M4-2). The
optimized workload distribution between XE6 and XK7 partitions will allow petascale earthquake
simulations in a truly heterogeneous computing environment.
D. Simulation I/O and Data Analysis
SCEC computational platforms require the execution of complex workflows that challenge the most
advanced HPC systems. SCEC has developed, in collaboration with USC Information Sciences Institute
(ISI), workflow management tools to support high-throughput computational platforms such as
CyberShake. These tools, which include Globus, HTCondor, and Pegasus-WMS, have been successfully
deployed on other NSF HPC systems for managing the CyberShake data and job dependencies. They
can combine the execution of the massively parallel SGT calculations with millions of loosely coupled
post-processing tasks. Our plans to increase the maximum seismic frequency of CyberShake to 2.0 Hz
will increase the mesh points by a factor of 64 relative to the 0.5 Hz calculations and the time steps by a
factor of 4.
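The arithmetic behind this scaling can be checked with a short back-of-the-envelope calculation, assuming the same simulation volume and numerical method:

```python
# Back-of-the-envelope check of cost growth when the maximum frequency of a
# CyberShake SGT calculation rises from 0.5 Hz to 2 Hz: grid spacing shrinks
# with frequency, so mesh points grow as the cube of the frequency ratio and
# the number of time steps grows linearly with it.
f_old, f_new = 0.5, 2.0
ratio = f_new / f_old            # 4x higher frequency

mesh_factor = ratio ** 3         # 4^3 = 64x more mesh points
step_factor = ratio              # 4x more (and smaller) time steps
compute_factor = mesh_factor * step_factor

print(f"mesh points: x{mesh_factor:.0f}, time steps: x{step_factor:.0f}, "
      f"total work: x{compute_factor:.0f}")
```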
Hercules implements an end-to-end approach to numerical simulations that combines into one code
the processing of input data for the earthquake source, the generation of an unstructured hexahedral
mesh, forward explicit finite element solving, and I/O operations. Our data-intensive I/O works particularly
well on the Blue Waters system, where the independent building blocks create a linearly scaling file system.
Hercules does on-the-fly sub-element interpolation. Having 64 GB memory available on CPU nodes
allows effective aggregation of outputs in CPU memory buffers before being flushed. The codes support
run-time parameters to select a subset of the data by skipping mesh points as needed. Our hero rough-fault runs scheduled on Blue Waters will use spontaneous rupture sources converted from dynamic
rupture simulations with many millions of subfaults and hundreds of thousands of time histories. Blue
Waters allows us to keep partitions of these terabyte-sized dynamic inputs in memory for high efficiency
without requiring frequent access to the Lustre file system [Poyraz et al., 2014].
Restart I/O capability is available to most of our wave propagation engines. A single checkpointing
step in the proposed simulations can be as large as 100 terabytes. AWP-ODC has implemented ADIOS
for efficient checkpointing. This improves the efficiency of read/write of files significantly, enabling
frequent checkpointing for long-running jobs. Hercules has a similar capability to suspend an ongoing
triangulation process, flush the data to disk, and reopen later to continue the insertions.
Several of our researchers are working with the ORNL ADIOS group to develop a standardized
platform-independent data format. We have also developed a runtime environment for co-scheduling
across CPUs and GPUs on Blue Waters. Co-scheduling enables us to perform both phases of a
CyberShake calculation simultaneously, reducing our time-to-solution and making efficient use of all
available computational resources. To enable co-scheduling, we launch multiple MPI jobs on XK7 nodes
via calls to aprun, the ALPS utility for launching jobs on compute nodes from a Node Manager (MOM).
We use core specialization when launching the child aprun calls to keep a core available for GPU data
transfer and communication calls, since both the GPU and CPU codes use MPI [Cui et al., 2013].
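The co-scheduling pattern can be sketched as a small launcher that starts the two phases as concurrent aprun commands and waits for both; the node counts, executable names, and handling of core specialization below are placeholders, not the actual CyberShake workflow configuration.

```python
# Rough sketch of the co-scheduling idea: from the job's MOM node, launch a
# GPU solver run and a CPU post-processing run as concurrent aprun commands
# and wait for both. The aprun arguments and executable names are placeholder
# assumptions, not the actual CyberShake workflow configuration.
import subprocess

gpu_job = subprocess.Popen(
    ["aprun", "-n", "64", "./awp_odc_gpu_sgt", "sgt_params.in"])
cpu_job = subprocess.Popen(
    ["aprun", "-n", "512", "./cybershake_postproc", "pp_params.in"])

# In the real setup a core is kept free for GPU communication (core
# specialization) so the child apruns do not starve the MPI progress engine.
for job in (gpu_job, cpu_job):
    job.wait()
print("both phases finished")
```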
SCEC has a history of advancing visualizations of earthquake simulations to effectively communicate
seismic hazards to scientists, educators, and the public. SCEC earthquake animations created in
collaboration with SDSC Visualization Services have been shown worldwide, and these achievements
have garnered a number of scientific visualization awards, including Honorable Mention at International
Science & Engineering Visualization Challenges (2010), SciDAC Office of Advanced Scientific Computing
Research (OASCR) Visualization Awards in 2009 and 2011, and TeraGrid Best Visualization Display
award (2011). We have used ray casting to perform volumetric rendering and GlyphSea [McQuinn et al.,
2010] to encode the vector information of seismic waves. We have recently developed an in-situ
visualization scheme as part of the co-scheduling procedure to allow concurrent visualization calculations
in a heterogeneous computing environment.
7. Development Plan
Our PRAC computational plan is based on the four simulation codes discussed in §6.A. Under SCEC’s
previous PRAC project, we successfully ran each of these codes on Blue Waters and produced useful
scientific results. The research proposed here will help us improve the scientific value and computational
efficiency of our calculations. We propose project milestones that will mark new scientific and technical
capabilities in three main areas: (1) integration of new physics into our simulation codes, (2) increased
use of GPU-based codes, and (3) extension of CyberShake workflows to larger regions and higher
frequencies.
Improving the scientific capabilities of the codes is our first priority. These developments will include
the addition of alternative friction laws in dynamic rupture simulations (SORD), incorporation of frequency-dependent attenuation in wave propagation simulations (Hercules), and adaptation of scientific workflows
for hybrid simultaneous (runtime) use of CPU and GPU nodes (AWP-ODC).
Our second development priority is the continued acceleration of wave-propagation calculations using
GPUs. SCEC personnel have worked with NCSA Blue Waters staff in producing the AWP-ODC-GPU
software, which has demonstrated sustained petaflops on real science problems. The results using this
code for single earthquake simulations and CyberShake ensemble calculations have been excellent;
therefore, we plan to migrate more of our simulation suites to this high-performance code. We have
begun development of a GPU version of Hercules. A pilot version of the code has been tested on Blue
Waters, and we expect it to become available for PRAC research activities before next year.
Our third development priority is the improvement of CyberShake workflows on Blue Waters. We plan
to increase the parallel size and data volume in these workflows. We will improve the CyberShake
computational and scheduling strategies to increase parallelism and reduce the number of jobs submitted
to the Blue Waters queue. We will also closely examine relationships between congestion protection
events and the I/O profile of the many small tasks running in our workflows. Once our GPU-based
workflows are optimized, we will add co-scheduling features that can combine solver and post-processing
calculations into a single job at runtime for more effective use of hybrid CPU-GPU systems.
8. Development Funding and Resources
Our multi-year PRAC research program will be supported by three major NSF research grants:
1. SCEC’s core research program, Southern California Earthquake Center, funded under NSF
Cooperative Agreement 1033462 and USGS Cooperative Agreement G12AC20038 (T. H. Jordan,
P.I.; $21,000,000 for Feb 1, 2012 to Jan 31, 2017). This core program coordinates basic earthquake
research using Southern California as its principal natural laboratory. The Center’s theme of
earthquake system science emphasizes the connections between information gathering by sensor
networks, fieldwork, and laboratory experiments; knowledge formulation through physics-based,
system-level modeling; improved understanding of seismic hazard; and actions to reduce earthquake
risk and promote community resilience.
2. NSF Award EAR-1349180, Community Computational Platforms for Developing Three-Dimensional
Models of Earth Structure, Phase II (T. H. Jordan, P.I.; $1,100,000 for May 1, 2014 to April 30, 2016).
The goal of this project is to develop the cyberinfrastructure needed to progressively and efficiently
refine 3D Earth models. As part of this project, SCEC researchers are developing high-performance
computational systems, including the SCEC Unified Community Velocity Model (UCVM) software for
synthesizing, comparing, and delivering Earth models and the software used for full-3D tomography.
They are improving the interoperability and I/O capabilities of the simulation codes, optimizing them
on heterogeneous CPU-GPU systems to increase the scale of F3DT inversions, and creating
scientific workflows to automate these inversions. The scientific objectives include delivering
improved global and regional structural representations by adding complexity to the model
parameterizations (e.g., anisotropy, attenuation, small-scale stochastic heterogeneities), simplifying
community access to these representations, and evaluating them against standardized sets of
observations.
3. NSF Award Number OCI-1148493: SI2-SSI: A Sustainable Community Software Framework for
Petascale Earthquake Modeling (SEISM) (T. H. Jordan, P.I.; $2,522,000 for August 1, 2012 to July
30, 2015). The goal of the SEISM project is to incorporate better theory and data into computationally
intensive modeling of earthquake processes. The project will integrate scientific software elements
into a Software Environment for Integrated Seismic Modeling (SEISM) that will facilitate physics-based seismic hazard analysis. SEISM will integrate many SHA components: community velocity
models, codes for dynamic and pseudo-dynamic rupture generation, deterministic and stochastic
earthquake simulators, and applications that employ forward simulations in two types of inverse
problems, seismic source imaging and full 3D tomography. On the SEISM project, SCEC researchers
are developing software tools that can manage large, heterogeneous scientific workflows and can be
run efficiently on petascale supercomputers such as Blue Waters. SCEC’s SEISM project focuses on
software lifecycle issues, including model formation, verification, prediction, and validation.
9. Required Resources
We attempt to match our research requirements with the computational capabilities of HPC systems
available through academic, NSF, and DOE resource providers. During the time period of our proposed
PRAC allocation, we will also request and use allocations from other computer facilities.
We regularly receive allocations from the University of Southern California high-performance
computing facility. The USC HPCC system provides both CPU and GPU nodes and can support serial
and parallel jobs of moderate size. We use approximately 1M core-hours each year at USC HPCC for
software development, prototyping of new capabilities, research simulation projects, and software
regression testing.
SCEC’s current XSEDE allocation (T. H. Jordan, P.I.) ends in July 2014. We will request XSEDE
computing time through the April XRAC process. We will request time on TACC Stampede to continue
OpenSHA/UCERF developments, as well as smaller scale code development projects; time on PSC
Blacklight for large shared-memory dynamic rupture calculations; and time on SDSC Gordon for the SCEC
Broadband Platform. We have previously used Stampede in our heterogeneous CyberShake workflow
applications, because TACC has supported our Pegasus-WMS and HTCondor DAGMan workflow system
[Courvares et al., 2007]. Blue Waters now supports these workflow tools, and we have recently shown
that using Blue Waters alone reduces our CyberShake time-to-solution from 8 weeks down to 2 weeks.
SCEC has an INCITE allocation for 2014 (T. H. Jordan, P.I.) under which we will run F3DT inversions
on Mira and high-frequency deterministic wave propagation simulations on Titan GPUs. We hope to use
Titan for 1 Hz CyberShake hazard curves; however, we have not determined whether we can run the
CyberShake workflow system on Titan. The ALCF Mira computing environment has been highly
productive for our full 3D tomography research. We will submit an INCITE 2015 allocation request to
continue F3DT research at ALCF.
We developed our Blue Waters research plan to include the largest and most challenging aspects of
our computational research program. From our perspective, Blue Waters has capabilities well matched to
our needs: support for hybrid CPU/GPU computing, fast interconnect, large memory usage, fast I/O, a
scalable file system, and support for workflow automation. Our resource requirements are based on a
simulation plan designed to achieve the three primary objectives described in §3. This plan
comprises five milestone activities:
M1. Dynamic Rupture Simulations (up to 8 Hz): We will conduct a series of simulations to generate
source models whose wavefields reflect the fractal roughness observed on fault surfaces, as
well as other geometrical complexities of fault systems, including plasticity in the material surrounding the
fault. The outputs from these simulations will be used to generate equivalent kinematic source models to
be used in forward wave propagation simulations.
M2. Validation Simulations (up to 4 Hz): Simulations of multiple historic earthquakes to investigate the
sensitivity of the ground motion to source, velocity, and attenuation models; to test recent improvements
in velocity models when applied to high-frequency simulations; and to quantify the level of accuracy of
synthetics when compared to data. Quantitative validation will be done using goodness-of-fit measures
equivalent to those used in the SCEC BBP.
M3. 1994 Northridge Earthquake (up to 8 Hz): Verification (between simulation codes) and validation
(with data and results from the SCEC BBP) of the 1994 Northridge, California earthquake, using multiple
plausible rough-fault models, velocity models including small-scale heterogeneities, and wave
propagation with and without off-fault plasticity. Source models will be obtained from M1 activities.
Verification and validation insight gained from M2 activities will be used for the interpretation of results.
M4. 2014 ShakeOut (up to 4 Hz): Simulations of a revised 2008 ShakeOut scenario using a rough fault
description (from M1), an improved velocity model (CVM-S4.26), and more realistic physics (attenuation,
plasticity). Tasks within this milestone activity will include verification of results between codes and
analysis of sensitivity of the ground motion to the response of soft sedimentary deposits and off-fault
plasticity.
M5. CyberShake (2 Hz): Composition of seismic hazard maps for the greater Los Angeles region in
southern California based on UCERF2 fault rupture scenarios and using alternative/improved community
velocity models.
The specific problem sizes and resources required for these milestone activities are detailed in
Table 2. Note that moderate-magnitude scenario and historical earthquakes will be used primarily for
calibration of the models and verification of simulation codes. As we progressively scale up the
spatiotemporal resolution of our simulations, we will perform a sequence of verification exercises directed
at ensuring that the capabilities of each code/team (Hercules and AWP-ODC/AWP-ODC-GPU) yield
equivalent results. Large scenario and historic events will be used primarily for validation of velocity
models and ground motion synthetics, and as scientific tools to understand and characterize the
associated seismic hazard.
Validation of historical events will be done through comparisons of synthetics with data using
goodness-of-fit measures that convey physical meaning to both seismologists and engineers [Anderson,
2004; Olsen and Mayhew, 2010]. We have accumulated extensive experience in validation of simulations
(up to 4 Hz) of moderate events such as the 2008 Chino Hills earthquake [Taborda and Bielak, 2013,
2014], and our M2 activities will extend this validation work to other events. Verification at this level is
particularly important, owing to the complexity in high-frequency simulations added by the improved
physics and finer resolution. In the past we have successfully performed cross-verification exercises at
lower frequencies (0.5 Hz) using the 2008 ShakeOut scenario [Bielak et al., 2010]. This cross-verification
will be redone for the higher-frequency 2014 ShakeOut, where the differences intrinsic to the
methodologies implemented in our simulation codes (FE and FD) are likely to be exposed in new ways.
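For readers unfamiliar with the goodness-of-fit measures cited above, the sketch below conveys their general flavor: a synthetic and a recorded trace are reduced to a few scalar metrics, each metric pair is mapped to an agreement score, and the scores are averaged. The metrics and the 0-10 scoring function used here are a simplified illustration patterned after Anderson [2004] and Olsen and Mayhew [2010], not a faithful implementation of either; our validation work uses the published criteria.

```python
# Simplified goodness-of-fit (GOF) illustration for comparing synthetics
# against recorded seismograms. The metrics and scoring below are a
# simplified stand-in patterned after the cited GOF criteria, not the
# published formulas.
import numpy as np

def score(p_syn, p_obs):
    """Map a pair of positive metric values to a 0-10 agreement score."""
    p_syn, p_obs = abs(p_syn), abs(p_obs)
    return 10.0 * np.exp(-((p_syn - p_obs) / min(p_syn, p_obs)) ** 2)

def simple_gof(syn, obs, dt):
    """Average the scores of a few waveform metrics (peak amplitude and
    cumulative energy)."""
    syn, obs = np.asarray(syn, float), np.asarray(obs, float)
    metrics = {
        "peak":   (np.max(np.abs(syn)), np.max(np.abs(obs))),
        "energy": (np.sum(syn**2) * dt, np.sum(obs**2) * dt),
    }
    scores = {name: score(a, b) for name, (a, b) in metrics.items()}
    return scores, np.mean(list(scores.values()))

# Toy usage with synthetic signals standing in for band-limited records
t = np.arange(0.0, 40.0, 0.01)
obs = np.exp(-0.10 * t) * np.sin(2 * np.pi * 1.00 * t)
syn = 0.9 * np.exp(-0.12 * t) * np.sin(2 * np.pi * 1.05 * t)
print(simple_gof(syn, obs, dt=0.01))
```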
Table 2. Milestones of the SCEC Research Plan for Computations on Blue Waters. Problem sizes list the
maximum frequency, minimum shear-wave velocity (where applicable), domain dimensions, and mesh/grid
size; bracketed values refer to GPU runs.

Milestone Activity | Problem Size | Code | WCT (Hours) CPU [GPU] | Nodes per Run CPU [GPU] | Repetitions | Node Hrs (million) | Storage (TB)
M1 Dynamic Rupture | 4 Hz; 350 km x 150 km x 36 km; 30 billion elements, 45K timesteps | SORD | 8.75 | 4,000 | 8 | 0.35 | 10
M2-1 Validation Simulations | 2 Hz; 200 m/s; 200 km x 150 km x 50 km; 2 billion elements, 50K timesteps | Hercules | 2.3 | 750 | 12 | 0.02 | 10
M2-2 Validation Simulations | 4 Hz; 200 m/s; 200 km x 150 km x 50 km; 16 billion elements, 100K timesteps | Hercules | 9.3 | 3,000 | 6 | 0.17 | 10
M3-1 1994 Northridge | 8 Hz; 400 m/s; 150 km x 100 km x 30 km; 450 billion elements, 200K timesteps | AWP-ODC | 24.0 | 6,000 | 4 | 0.58 | 40
M3-2 1994 Northridge | 8 Hz; 400 m/s; 150 km x 100 km x 50 km; 15 billion elements, 100K timesteps | Hercules | 8.7 | 3,000 | 4 | 0.10 | 10
M3-3 1994 Northridge | 8 Hz; 200 m/s; 150 km x 100 km x 50 km; 90 billion elements, 200K timesteps | Hercules | 24.0 | 13,000 | 2 | 0.63 | 20
M4-1 2014 ShakeOut | 2 Hz; 500 m/s; 600 km x 300 km x 100 km; 144 billion elements, 160K timesteps | AWP-ODC | 14.8 | 4,000 | 4 | 0.24 | 20
M4-2 2014 ShakeOut | 4 Hz; 500 m/s; 600 km x 300 km x 87.5 km; 1.91 trillion elements, 320 timesteps | AWP-ODC / AWP-ODC-GPU | 23.5 [13.4] | 22,000 [4,000] | 3 | 1.71 | 170
M4-3 2014 ShakeOut | 4 Hz; 200 m/s; 600 km x 300 km x 80 km; 40 billion elements, 200K timesteps | Hercules | 23.1 | 6,000 | 1 | 0.14 | 10
M5 CyberShake | 2 Hz; 250 m/s; 3 components; 83.3 billion elements, 80K timesteps | CyberShake Framework, AWP-ODC SGT-GPU | 4 [5] | 2,250 [4,300] | 286 | 8.72 | 1,700
Total | | | | | | 12.66 | 2,000
References
Anderson, J. (2004). Quantitative Measure of the Goodness-of-fit of Synthetic Seismograms, in Proc.
13th World Conf. Earthquake Eng., Paper 243, Int. Assoc. Earthquake Eng., Vancouver, British
Columbia, Canada.
Anderson, J. G., and J. N. Brune (1999). Probabilistic seismic hazard analysis without the ergodic
assumption, Seismol. Res. Lett. 70, 19–28.
Baker, J. W., and C. A. Cornell (2008). Uncertainty propagation in probabilistic seismic loss estimation,
Structural Safety, 30, 236–252, doi:10.1016/j.strusafe.2006.11.003.
Bielak, J., H. Karaoglu, and R. Taborda (2011). Memory-efficient displacement-based internal friction for
wave propagation simulation, Geophysics 76, no. 6, T131–T145, doi 10.1190/geo2011-0019.1.
Bielak, J., R. W. Graves, K. B. Olsen, R. Taborda, L. Ramírez-Guzmán, S. M. Day, G. P. Ely, D. Roten,
T. H. Jordan, P. J. Maechling, J. Urbanic, Y. Cui, and G. Juve (2010). The ShakeOut earthquake
scenario: Verification of three simulation sets, Geophys. J. Int. 180, no. 1, 375–404, doi 10.1111/j.1365-246X.2009.04417.x.
Bielak, J., Y. Cui, S. M. Day, R. Graves, T. Jordan, P. Maechling, K. Olsen, and R. Taborda (2012). High
frequency deterministic ground motion simulation (High-F project plan). Technical report, Southern
California Earthquake Center. Available online at: http://scec.usc.edu/scecpedia/High-F (last accessed
March 2014).
Böse, M., R. Graves, D. Gill, S. Callaghan and P. Maechling (2014). CyberShake-Derived Ground-Motion
Prediction Models for the Los Angeles Region with Application to Earthquake Early Warning, Geophys. J.
Int., in revision.
Budnitz, R. J., Apostolakis, G., Boore, D. M., Cluff, L. S., Coppersmith, K. J., Cornell, C. A., Morris, P. A.,
1997. Senior Seismic Hazard Analysis Committee; Recommendations for Probabilistic Seismic Hazard
Analysis: Guidance on Uncertainty and Use of Experts, U.S. Nuclear Regulatory Commission, U.S. Dept.
of Energy, Electric Power Research Institute; NUREG/CR-6372, UCRL-ID-122160, Vol. 1-2.
Callaghan, S., P. Maechling, P. Small, K. Milner, G. Juve, T. H. Jordan, E. Deelman, G. Mehta, K. Vahi,
D. Gunter, K. Beattie and C. Brooks (2011). Metrics for heterogeneous scientific workflows: A case study
of an earthquake science application, International Journal of High Performance Computing Applications
2011, 25: 274, doi: 10.1177/1094342011414743.
Christen, M., O. Schenk, and Y. Cui (2012), PATUS: Parallel Auto-Tuned Stencils for Scalable
Earthquake Simulation Codes, SC12, Salt Lake City, Nov 10-16, 2012.
Computational Science: Ensuring America’s Competitiveness (2005). The President’s Information
Technology Advisory Committee (PITAC)
Couvares, P., T. Kosar, A. Roy, J. Weber and K. Wenger (2007). Workflow in Condor, in Workflows for e-Science, Editors: I. Taylor, E. Deelman, D. Gannon, M. Shields, Springer Press, January 2007, ISBN: 1-84628-519-4.
Cui, Y., R. Moore, K. B. Olsen, A. Chourasia, P. Maechling, B. Minster, S. Day, Y. Hu, J. Zhu, A.
Majumdar & T. H. Jordan. Towards petascale earthquake simulations, Acta Geotechnica, Springer,
doi:10.1007/s11440-008-0055-2.
Cui, Y., K. B. Olsen, T. H. Jordan, K. Lee, J. Zhou, P. Small, D. Roten, G. P. Ely, D. K. Panda, A.
Chourasia, J. Levesque, S. M. Day & P. J. Maechling (2010), Scalable earthquake simulation on
petascale supercomputers, in Proceedings of the 2010 ACM/IEEE International Conference for High
Performance Computing, Networking, Storage, and Analysis, New Orleans, Nov. 13-19,
doi:10.1109/SC.2010.45 (ACM Gordon Bell Finalist).
Cui, Y., E. Poyraz, K.B. Olsen, J. Zhou, K. Withers, S. Callaghan, J. Larkin, C. Guest, D. Choi, A.
Chourasia, Z. Shi, S.M. Day, J.P. Maechling, and T.H. Jordan (2013a). Physics-based seismic hazard
analysis on petascale heterogeneous supercomputers, Proc. Supercomputing Conference (SC13), 2013.
Cui, Y., Poyraz, E., Callaghan, S., Maechling, P., Chen, P. and Jordan, T. (2013b), Accelerating
CyberShake Calculations on XE6/XK7 Platforms of Blue Waters, Extreme Scaling Workshop 2013,
August 15-16, Boulder.
Deelman, E., G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G. B. Berriman, J.
Good, A. Laity, J. C. Jacob and D. S. Katz (2005). Pegasus: a framework for mapping complex scientific
workflows onto distributed systems. Scientific Programming, 13(3), 219-237.
Ely, G. P., S. M. Day, and J.-B. Minster (2010), Dynamic rupture models for the southern San Andreas
fault, Bull. Seism. Soc. Am., 100 (1), 131-150, doi:10.1785/0120090187.
Federal Plan for High-end Computing: Report of the High-end Computing Revitalization Task Force
(2004). Executive Office of the President, Office of Science and Technology Policy.
FEMA (2000). HAZUS 99, estimated annualized earthquake losses for the United States, Federal
Emergency Management Agency Report 366, Washington, D.C., September, 32 pp.
Graves, R., B. Aagaard, K. Hudnut, L. Star, J. Stewart and T. H. Jordan (2008). Broadband simulations
for Mw 7.8 southern San Andreas earthquakes: Ground motion sensitivity to rupture speed, Geophys.
Res. Lett., 35, L22302, doi: 10.1029/2008GL035750.
Graves, R., T. H. Jordan, S. Callaghan, E. Deelman, E. Field, G. Juve, C. Kesselman, P. Maechling, G.
Mehta, K. Milner, D. Okaya, P. Small and K. Vahi (2011). CyberShake: A physics-based seismic hazard
model for Southern California. Pure and Applied Geophysics, 168(3), 367-381.
Goldstein, M. (2013). Observables and models: exchangeability and the inductive argument, in Bayesian
Theory and Its Applications, ed. P. Damien, P. Dellaportas, N. G. Olson, and D. A. Stephens, Oxford, pp.
3-18.
Goulet, C. (2014), Summary of a Large Scale Validation Project Using the SCEC Broadband Strong
Ground Motion Simulation Platform, Seism. Res. Lett., SSA Annual Mtg 2014, Anchorage, AK.
Hauksson, E. and P. M. Shearer (2006), Attenuation models (QP and QS) in three dimensions of the
southern California crust: Inferred fluid saturation at seismogenic depths, J. Geophys. Res., 111, B05302,
doi:10.1029/2005JB003947.
Isbiliroglu, Y., R. Taborda, and J. Bielak (2013). Coupled soil-structure interaction effects of building
clusters during earthquakes, Earthquake Spectra, in press, doi 10.1193/102412EQS315M.
Komatitsch, D., and J. Tromp (2002). Spectral-element simulations of global seismic wave propagation—
I. Validation, Geophys. J. Int., 149, 390-412; Spectral-element simulations of global seismic wave
propagation—II. Three-dimensional models, oceans, rotation and self-gravitation, Geophys. J. Int., 150,
308–318.
McQuinn, E., Chourasia, A., Minster, J. H. & Schulze, J. (2010). GlyphSea: Interactive Exploration of
Seismic Wave Fields Using Shaded Glyphs, Abstract IN23A-1351, presented at the 2010 Fall Meeting,
AGU, San Francisco, CA, 13-17 Dec.
National Research Council (2004). Getting Up to Speed: The Future of Supercomputing. Washington,
DC: The National Academies Press.
Olsen, K. B., R. J. Archuleta & J. R. Matarese (1995). Three-dimensional simulation of a magnitude 7.75
earthquake on the San Andreas fault, Science, 270, 1628-1632.
Olsen, K.B., S. M. Day & C. R. Bradley (2003), Estimation of Q for long period (>2 s) waves in the Los
Angeles Basin, Bull. Seismol. Soc. Am., 93, 627-638.
Olsen, K.B. and J.E. Mayhew (2010). Goodness-of-fit Criteria for Broadband Synthetic Seismograms, with
Application to the 2008 Mw 5.4 Chino Hills, California, Earthquake, Seismol. Res. Lett. 81, no. 5, 715–723,
doi 10.1785/gssrl.81.5.715.
Pasyanos, M. E. (2013). A lithospheric attenuation model of North America, Bull. Seismol. Soc. Am., 103,
1-13, doi:10.1785/0120130122.
Peter, D., D. Komatitsch, Y. Luo, R. Martin, N. Le Goff, E. Casarotti, P. Le Loher, F. Magnoni, Q. Liu, C.
Blitz, T. Nissen-Meyer, P. Basini & J. Tromp (2011). Forward and adjoint simulations of seismic wave
propagation on fully unstructured hexahedral meshes, Geophys. J. Int., 186, 721-739.
Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler, R. L. Wesson,
Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills and K. S. Rukstales (2008).
Documentation for the 2008 Update of the United States National Seismic Hazard Maps, U. S. Geological
Survey Open-File Report 2008-1128.
Porter, K., L. Jones, D. Cox, J. Goltz, K. Hudnut, D. Mileti, S. Perry, D. Ponti, M. Reichle, A.Z. Rose, et al.
(2011), The ShakeOut scenario: A hypothetical Mw7.8 earthquake on the southern San Andreas fault,
Earthquake Spectra, 27(2), 239-261.
Poyraz, E., H. Xu, and Y. Cui (2014). I/O Optimizations for High Performance Scientific
Applications, ICCS'14, Cairns, June 10-12 (accepted).
Restrepo, D., R. Taborda, and J. Bielak (2012a). Effects of soil nonlinearity on ground response in 3D
simulations — An application to the Salt Lake City basin, in Proc. 4th IASPEI/IAEE Int. Symp. Effects of
Surface Geology on Seismic Motion, University of California, Santa Barbara, August 23–26.
Restrepo, D., R. Taborda, and J. Bielak (2012b). Simulation of the 1994 Northridge earthquake including
nonlinear soil behavior, in Proc. SCEC Annu. Meet., Abstract GMP-015, Palm Springs, CA, September 9–
12.
Reynolds, C. J., S. Winter, G. Z. Terstyanszky, and T. Kiss (2011). Scientific workflow makespan reduction
through cloud augmented desktop grids, Third IEEE International Conference on Cloud Computing
Technology and Science.
Roten, D., K.B. Olsen, S.M. Day, Y. Cui, and D. Fah (2014), Expected seismic shaking in Los Angeles
reduced by San Andreas fault zone plasticity, Geophysical Research Letters, in revision.
Rynge, M., G. Juve, K. Vahi, S. Callaghan, G. Mehta, P. J. Maechling, E. Deelman (2012). Enabling
Large-scale Scientific Workflows on Petascale Resources Using MPI Master/Worker, XSEDE12, Article
49.
Savran, W., and K.B. Olsen (2014). Deterministic simulation of the Mw 5.4 Chino Hills event with
frequency-dependent attenuation, heterogeneous velocity structure, and realistic source model, Seism.
Res. Lett., SSA Annual Mtg, Anchorage, AK.
ShakeOut Special Volume (2011), Earthquake Spectra, Vol. 27, No. 2.
Shi, Z., and S.M. Day (2012a). Ground motions from large-scale dynamic rupture simulations (poster
023), SCEC Annual Meeting, Palm Springs.
Shi, Z., and S.M. Day (2012b), Characteristics of ground motions from large-scale dynamic rupture
simulations, Abstract S14A-06, presented at the 2012 Fall Meeting, AGU, San Francisco, California, 3-7
Dec.
Shi, Z., and S.M. Day (2013a). Validation of dynamic rupture simulations for high-frequency ground
motion, Seismological Research Letters, Vol. 84.
Shi, Z., and S. M. Day (2013b). Rupture dynamics and ground motion from 3-D rough-fault simulations,
Journal of Geophysical Research, 118, 1–20, doi:10.1002/jgrb.50094.
Strasser, F. O., N. A. Abrahamson, and J. J. Bommer (2009). Sigma: issues, insights, and challenges,
Seismol. Res. Lett. 80, 40–56.
Stein S., R. Geller and M. Liu (2012). Why earthquake hazard maps often fail and what to do about it,
Tectonophysics, 562-563, 1-25, doi:10.1016/j.tecto.2012.06.047.
Taborda, R. and J. Bielak (2013). Ground-motion simulation and validation of the 2008 Chino Hills,
California, earthquake, Bull. Seismol. Soc. Am. 103, no. 1, 131–156, doi 10.1785/0120110325.
Taborda, R. and J. Bielak (2014). Ground-motion simulation and validation of the 2008 Chino Hills,
California, earthquake using different velocity models, Bull. Seismol. Soc. Am., in revision.
Taborda, R., and J. Bielak (2011). Large-scale earthquake simulation — Computational seismology and
complex engineering systems, Comput. Sci. Eng. 13, 14–26, doi 10.1109/MCSE.2011.19.
Taborda, R., J. Bielak, and D. Restrepo (2012). Earthquake Ground Motion Simulation Including
Nonlinear Soil Effects Under Idealized Conditions with Application to Two Case Studies, Seismol. Res.
Lett. 83, no. 6, 1047–1060, doi 10.1785/0220120079.
Taborda, R., J. López, H. Karaoglu, J. Urbanic, and J. Bielak (2010). Speeding up finite element wave
propagation for large-scale earthquake simulations. Technical Report CMU-PDL-10-109, available at
http://www.pdl.cmu.edu/Publications/pubs-tr.shtml.
The Opportunities and Challenges of Exascale Computing (2010). Summary Report of the Advanced
Scientific Computing Advisory Committee (ASCAC) Subcommittee on Exascale Computing, Department
of Energy, Office of Science, Fall 2010.
Tu, T., H. Yu, L. Ramírez-Guzmán, J. Bielak, O. Ghattas, K.-L. Ma, and D. R. O’Hallaron (2006). From
mesh generation to scientific visualization: An end-to-end approach to parallel supercomputing, in SC’06:
Proc. of the 2006 ACM/IEEE Int. Conf. for High Performance Computing, Networking, Storage and
Analysis, Tampa, Florida, pp. 15, available at http://doi.acm.org/10.1145/1188455.1188551 (last
accessed March 2014), doi 10.1145/1188455.1188551.
Wang, F., and T. H. Jordan (2014), Comparison of probabilistic seismic hazard models using averaging-based factorization, Bull. Seismol. Soc. Am., in press.
Withers, K.B., K.B. Olsen, Z. Shi, S.M. Day, and R. Takedatsu (2013a). Deterministic high-frequency
ground motions from simulations of dynamic rupture along rough faults, Seismol. Res. Lett., 84:2, 334.
Withers, K.B., K.B. Olsen, and S.M. Day (2013b), Deterministic high-frequency ground motion using
dynamic rupture along rough faults, small-scale media heterogeneities, and frequency-dependent
attenuation (Abstract 085), SCEC Annual Meeting, Palm Springs.
Withers, K.B., K.B. Olsen, Z. Shi, R. Takedatsu, and S.M. Day (2012), Deterministic high-frequency
ground motions from simulations of dynamic rupture along rough faults (Poster 019), SCEC Annual
Meeting, Palm Springs.
Zhou, J., D. Unat, D. Choi, C. Guest, and Y. Cui (2012). Hands-on Performance Tuning of 3D Finite
Difference Earthquake Simulation on GPU Fermi Chipset, Proceedings of International Conference on
Computational Science, Vol. 9, 976-985, Elsevier, ICCS 2012, Omaha, Nebraska, June 4-6.
Zhou, J., Y. Cui, E. Poyraz, D. Choi, and C. Guest (2013), Multi-GPU implementation of a 3D finite
difference time domain earthquake code on heterogeneous supercomputers. Proceedings of International
Conference on Computational Science, Vol. 18, 1255-1264, Elsevier, ICCS 2013, Barcelona, June 5-7.