Representation of Global Earth Models (LLNL Seismic Symposium)

Representation of Global
Earth Models
Interoperability, efficiency and
prediction accuracy
Bill Menke
Lamont-Doherty Earth Observatory
Interoperability
Being able to use someone else’s model in a
quantitative way
Goal
providing the right information to
make your model interoperable
even simple, well known issues
can cause substantial
interoperability problems
here’s an example
the world is not a sphere …
[Figure: cross-section of the ellipsoidal earth, showing "down" vs. "to center", the north pole (Z axis), the X axis, geodetic latitude φ, and geocentric latitude θ, in (X,Y,Z) Earth-Centered, Earth-Fixed Cartesian coordinates]
Geographical latitude φ is not equal to geocentric latitude θ
Down is not the same as the direction to the center
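For reference (a standard relation for a point on the surface of the reference ellipsoid, not taken from the slide), the two latitudes are related through the ellipsoid's first eccentricity e:

```latex
% geocentric latitude \theta vs. geodetic latitude \phi on the ellipsoid surface
% (standard relation, not from the slide)
\tan\theta = \left(1 - e^{2}\right)\tan\phi
```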
Recommendation – ECEF Coordinates
You should always specify how to translate from
whatever coordinate system you are using to
earth-centered, earth-fixed (ECEF) Cartesian
coordinates, (X,Y,Z)
(0,0,0) at center of mass
[0,0,1] points geographic north
[1,0,0] points to intersection of equator
and prime meridian
And you should describe how to do it for any models
that you publish or make widely available
Example: formula for converting geodetic coordinates to ECEF
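The formula from the original slide is not reproduced in this text; as an illustrative sketch, the standard WGS-84 geodetic-to-ECEF conversion looks like this (constants are standard WGS-84 values; the function name is mine):

```python
import math

# WGS-84 reference ellipsoid constants (standard values)
A  = 6378137.0            # semi-major axis, m
F  = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m=0.0):
    """Convert geodetic latitude, longitude (degrees) and ellipsoidal
    height (m) to earth-centered, earth-fixed (X, Y, Z) in meters."""
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg)
    # prime-vertical radius of curvature
    n = A / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
    x = (n + height_m) * math.cos(phi) * math.cos(lam)
    y = (n + height_m) * math.cos(phi) * math.sin(lam)
    z = (n * (1.0 - E2) + height_m) * math.sin(phi)
    return x, y, z

# e.g. a station at 40 N, 105 W, 1.6 km elevation
print(geodetic_to_ecef(40.0, -105.0, 1600.0))
```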
ellipticity correction ~0.5s
over continental distances
issues associated with ellipticity
1. Broad consensus to use geodetic lat, lon’s
But what datum (model for earth’s shape)?
GPS satellite system: WGS-84
hundreds of others in use
Might affect:
station locations
event locations
registration of regional tomographic results with
rest of world
positioning errors of a few hundred meters are possible from using wrong datum
significant in earthquake location; somewhat significant in global tomography
issues associated with ellipticity
2. How is depth defined? A problem here …
“geodetic height at a point is the distance
from the reference ellipsoid to the point in a
direction normal to the ellipsoid”
so height is parallel to local up/down, which
makes intuitive sense
Oops! … depth defined this way
misses the center of the earth
[Figure: "down" along the ellipsoid normal vs. the line "to center"]
Yet if you define depth as straight-line distance to the earth’s
center, then geographic (lat, lon) varies along that line.
issues associated with ellipticity
3. How do you “ellipsify” a radial model of earth
structure? You need to know how ellipticity
varies with depth.
Why different?
Kennett, B., Seismological Tables: AK135
“… ellipticity coefficients were made for the iasp91
model using the algorithms presented by Doornbos
(1988) with the density distribution from PEMC
model of Dziewonski et al (1975) and the
assumption that the ellipticity is nearly hydrostatic”
OK: So how do I figure out what ellipticity vs. depth
function Kennett used? I need to know e(r)!
issues associated with ellipticity
4. What is meant by a traveltime observation?
A. Traveltime
B. Traveltime corrected for somebody’s
model of ellipticity
I suspect that the reference to iasp91 in the
AK135 Tables means that traveltime data
were de-ellipsified using it before being used
recommendation
Good
Model prediction + correction = observed data
Bad
Model prediction = observed data – correction
bad because it tempts you to publish “data” that have
been “corrected” in some not-very-obvious way.
corrections: ellipticity, station elevation, anelasticity …
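As a minimal illustration of the "good" arrangement (function and variable names are mine, not from the talk): all corrections are added to the model prediction, and the observed data are left untouched.

```python
def residual(observed_tt, predicted_tt, corrections):
    """'Good' arrangement: model prediction + corrections = observed data.
    The observed traveltime is never modified; all corrections
    (ellipticity, station elevation, anelasticity, ...) are added
    to the model prediction before comparison."""
    return observed_tt - (predicted_tt + sum(corrections))

# hypothetical numbers: observed = 512.3 s, 1D-model prediction = 511.6 s,
# ellipticity correction +0.4 s, station elevation correction +0.1 s
print(residual(512.3, 511.6, [0.4, 0.1]))  # -> 0.2 s misfit
```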
Case Study #1 of Interoperability
comparing continental scale models
of P and S velocity
Interoperability includes having
enough information to understand
the comparison
P velocity: Phillips (2007)
isotropic model
database of traveltimes
0.5x0.5 grid, registered on 0, ½
50 km depth
Here’s the biggest problem …
S velocity: Nettles & Dziewonski (2007)
Voigt average of anisotropic model
Love & Rayleigh waveforms
0.5x0.5 grid, registered on ¼, ¾
50 km depth
My solution: interpolate S to 0, ½
registration using bicubic splines
The irony is that I know the Nettles & Dziewonski
(2007) model was based on spherical
splines, so exact values at the desired
registration were “theoretically” available.
The error is probably small, since the grid appears to
be oversampled, but it’s hard to be sure.
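A minimal sketch of the kind of re-registration described above (the grid layout, placeholder values, and the use of scipy are my assumptions, not the author's code):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# hypothetical 0.5 x 0.5 degree Vs grid registered on (0.25, 0.75)
lat_s = np.arange(-89.75, 90.0, 0.5)
lon_s = np.arange(0.25, 360.0, 0.5)
vs = np.random.rand(lat_s.size, lon_s.size)  # placeholder for the real model

# bicubic spline through the source grid
spline = RectBivariateSpline(lat_s, lon_s, vs, kx=3, ky=3)

# evaluate on the (0, 0.5) registration used by the P model
# (ignoring the wrap-around at 0 longitude for brevity)
lat_t = np.arange(-89.5, 90.0, 0.5)
lon_t = np.arange(0.5, 360.0, 0.5)
vs_reregistered = spline(lat_t, lon_t)
print(vs_reregistered.shape)
```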
Clearly some systematic disagreement in the
location of the western edge of the craton
what would we need to know to understand it?
[Scatter plot of dVp vs. dVs, with a 1:3 reference slope]
There’s rather a lot of scatter around
dVs/dVp ≈ 3, the value expected from the
effect of temperature
what would we need to know to understand it?
Case Study #2 of Interoperability
how well does surface wave derived
shear velocity model predict S-wave
traveltimes?
Shear velocity: Nettles & Dziewonski (2007)
Voigt average of anisotropic model
Love & Rayleigh waveforms
Crust 2.0 crust included in inversion but not given
50 km depth intervals given for the mantle
0.5x0.5 grid, registered on ¼, ¾
Traveltime calculation
3D model with 0, ½  registration, 50 km depth slices
raytracing by shooting in model with tetrahedral splines
Crust2.0 crust on top of Nettles & Dziewonski (2007) mantle
S-wave Traveltimes
Database of North America Regional Earthquake Traveltimes
provided by W-Y Kim (personal communication)
and where did he get the corresponding event locations?
[Figure: map of ray exit points over North America, with annotations for the western craton and for arrivals from the transition zone]
Another Oops …
(but for a different reason)
The surface wave model has
negative lithospheric velocity gradient
over most of North America
So there are no S-wave rays turning in
the lithosphere
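One way to see why (a standard ray-theory fact in its flat-earth simplification, not from the slides): a downgoing ray only turns where the condition below is satisfied, which is impossible in a layer where velocity decreases with depth.

```latex
% Flat-earth form of the turning condition for a ray with ray parameter p:
% the ray becomes horizontal only at a depth z_t where
\sin i(z_t) = p\, v(z_t) = 1 .
% If v decreases with depth, p\,v(z) decreases along the ray, the ray only
% steepens, and no turning depth exists within that layer.
```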
This problem is not limited to Nettles and Dziewonski’s (2007) North
America model. Kustowski’s (2007) Eurasian model has regions with
strong negative velocity gradients, too.
The Nettles and Dziewonski (2007) model and body
wave traveltimes are not inherently interoperable
Why? We don’t know … maybe
Earth looks different at very long periods
or
S-arrivals are not real rays (Thybo & Perchuc, 1997)
About the best that one can do is to use it to
“motivate the construction” of a model that
would be useful for body waves
Example of “motivating the construction”
Nettles & Dziewonski (2007) Vs-voigt at 50 km
depth, scaled to Vp, and used to tweak velocity up
& down uniformly throughout an ak135 lithosphere
[Figure panels: traveltime residuals predicted by the model; traveltime residuals observed for the Gnome explosion]
many different model representations
spherical harmonics
voxels
splines, with varying
arrangement of nodes
interpolant
etc.
How should models be converted
between representations?
Guiding principle
a model is a way of summarizing the data
therefore
the conversion should try to
reproduce an “idealized” version of the data
used to create the model
“idealized” in the sense that one might not know
exactly what data were used to create the model
Example
[Figure: hypothetical “layer cake” crustal model (crust over mantle)]
The figure at the left shows a
hypothetical “layer cake” crustal
model.
Many such models have been
published on the basis of active-source
seismic refraction surveys.
Suppose that we want to switch to
a representation that uses linear
splines.
We use as the guiding principle that
the layer-cake model probably fits
the first-arrival traveltime data well,
in the distance range of a typical
refraction experiment (say 200 km).
[Figure panels: Layer Cake Model (crust over mantle); Traveltime Prediction calculated from a simple formula; Traveltime “observations” and predictions calculated from inversion (solid – traveltimes from the layer-cake model, grey – traveltimes from the linear spline model); Comparison of layer-cake and linear spline models]
The fit to the “data” is excellent, r.m.s. = 100 ms, even with the crust being
represented with only two linear splines. The down-side is that an inverse
problem needed to be solved to compute the new model representation.
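A minimal sketch of the "simple formula" step under my own assumptions (a single crustal layer of thickness h over a mantle half-space; not the author's code): the idealized first-arrival data are the minimum of the direct-wave and head-wave traveltimes, and it is these times that the new linear-spline representation should be fit to.

```python
import numpy as np

def first_arrivals(x_km, v_crust=6.0, v_mantle=8.0, h_km=35.0):
    """Idealized first-arrival traveltimes (s) for a layer-over-halfspace
    'layer cake' model: minimum of the direct and head-wave (Pn) times."""
    x = np.asarray(x_km, dtype=float)
    t_direct = x / v_crust
    # standard intercept-time formula for the head wave:
    # t = x / v_mantle + 2 h sqrt(1/v_crust^2 - 1/v_mantle^2)
    intercept = 2.0 * h_km * np.sqrt(1.0 / v_crust**2 - 1.0 / v_mantle**2)
    t_head = x / v_mantle + intercept
    return np.minimum(t_direct, t_head)

# "idealized data" over the distance range of a typical refraction experiment
x = np.linspace(5.0, 200.0, 40)
t_obs = first_arrivals(x)
# these times, not the layer-cake model itself, are what the linear-spline
# representation is fit to (by solving an inverse problem)
```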
recommendation
Model representations should always be converted so as to
preserve “idealized data”,
even though this requires solving an inverse
problem
However, in hard cases the sense of
idealization can be broadened to include
preserving quantities that are merely “data-like”,
rather than exactly data.
Example – traveltime data
traveltime data: integrals of slowness
along a suite of ray paths, whose shape is
itself determined by the model
traveltime-like data: integrals of slowness
along a suite of prescribed curves that look
something like ray paths
efficiency and prediction accuracy
Some of my experiences with a
ray-based traveltime tomography - earthquake
location algorithm
that uses a 3D model representation based upon
linear tetrahedral splines
Choice of linear splines and tetrahedra motivated by efficiency
Advantages of tetrahedral models
Ray paths are a known function (an arc of a circle) within
each tetrahedron
Important ray integrals can be performed
analytically, e.g. the traveltime T,
where v is velocity and g is the magnitude of its gradient
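The slide's formula is not reproduced in the extracted text. A standard closed form for the traveltime between two points at velocities v1 and v2, separated by straight-line distance d, in a medium with constant velocity gradient of magnitude g (my reconstruction, not necessarily the slide's expression) is:

```latex
T = \frac{1}{g}\,\operatorname{arccosh}\!\left(1 + \frac{g^{2} d^{2}}{2\, v_{1} v_{2}}\right)
```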
Disadvantages of linear tetrahedral
models
Curved surfaces are approximated as surfaces of a
polyhedron (but ½ node spacing gives
reasonably accurate – 100 ms – traveltimes).
Velocity gradient discontinuous between tetrahedra
(makes geometrical amplitudes rather rough)
Not obvious how to generalize method to
anisotropic earth models
A few examples
Low Velocity Zone (LVZ)
Curved interface approximated as
surface of polyhedron
[Figure: slice through a 3D model represented with linear tetrahedral splines, with some sample ray paths (fan array shown in the next slide); annotations: rays loop in the LVZ, Moho triplication, upper-crustal triplication]
[Figure: map view of ray exit points for the source, and a traveltime (s) vs. fan-array Y-distance plot for the fan-array geometry, showing reverberations delayed by the LVZ]
Efficiency issues
Given an irregular tetrahedral grid ..
1. How do you figure out whether
a given (x,y,z) point is in a given tetrahedron ?
2. How do you figure out which
tetrahedron a given (x,y,z) point is in ?
1. Is the point in a given tetrahedron? Yes or No
It is,
if for each of the 4 faces of the tetrahedron
it is on the same side of the face
as the excluded vertex* of that face
* Three of the four vertices
of the tetrahedron define a
face. The fourth vertex is
the “excluded” vertex.
[Figure: one point inside the tetrahedron, on the same side as the excluded vertex of every face, and a second point (“this one’s not”) outside]
The inside/outside test needs to be very
efficient, because it is used so often
Thus information needed to perform it (e.g.
the outward pointing normal of each
face) needs to be pre-computed and
stored.
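A minimal sketch of the inside/outside test as described above, with the per-face information pre-computed (NumPy-based, with names of my own choosing; not the author's implementation):

```python
import numpy as np

def precompute_faces(verts):
    """For a tetrahedron given as a (4, 3) array of vertex coordinates,
    pre-compute, for each of the 4 faces, a point on the face and its
    outward-pointing normal (outward = away from the excluded vertex)."""
    faces = []
    for excluded in range(4):
        tri = np.delete(verts, excluded, axis=0)     # the 3 face vertices
        normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        # orient the normal so it points away from the excluded vertex
        if np.dot(normal, verts[excluded] - tri[0]) > 0.0:
            normal = -normal
        faces.append((tri[0], normal))
    return faces

def point_in_tetrahedron(p, faces, tol=1e-12):
    """True if p is on the same side of every face as the excluded vertex,
    i.e. on the inward side of every outward-pointing normal."""
    return all(np.dot(n, np.asarray(p) - p0) <= tol for p0, n in faces)

# usage with a hypothetical tetrahedron
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
faces = precompute_faces(verts)
print(point_in_tetrahedron([0.2, 0.2, 0.2], faces))  # True
print(point_in_tetrahedron([1.0, 1.0, 1.0], faces))  # False
```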
2. Which tetrahedron is a given (x,y,z) point in ?
The probability is overwhelmingly high that it’s in
the same tetrahedron as the last point that you
focused upon …
… because most operations are spatially
localized
And if not, then it’s probably in a tetrahedron that
is close-by
Finding the tetrahedron by walking toward it
[Figure: walking from the current point toward a new test point through adjacent tetrahedra, with the vector connecting the old and new points]
Always move to the tetrahedron
adjacent to the face whose outward-pointing
normal is most parallel to the line connecting
the test point and the new point
Deciding which tetrahedron is adjacent to
a given face of a given tetrahedron
needs to be very efficient, because it is
used so often
Thus adjacency (nearest 4 neighbors)
information needs to be pre-computed and stored.
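A minimal sketch of the walk, assuming each tetrahedron stores its pre-computed faces (as in the sketch above), a pre-computed interior point, and the indices of its 4 face-adjacent neighbors (this data layout is my assumption). As a slight variation on the slide, the step direction is taken from the tetrahedron's interior point toward the target point:

```python
import numpy as np

def walk_to_enclosing_tetrahedron(point, start_tet, tets, max_steps=10000):
    """Walk through face-adjacent tetrahedra until one encloses `point`.
    Assumed layout (my convention): each tets[i] is a dict holding
      'faces'     - 4 (face_point, outward_normal) pairs,
      'neighbors' - index of the neighbor across each face (None at the hull),
      'centroid'  - a pre-computed interior point of the tetrahedron.
    Returns the index of the enclosing tetrahedron, or None if the walk
    leaves the mesh."""
    point = np.asarray(point, dtype=float)
    current = start_tet
    for _ in range(max_steps):
        faces = tets[current]['faces']
        if all(np.dot(n, point - p0) <= 0.0 for p0, n in faces):
            return current                       # found the enclosing one
        # step across the face whose outward normal is most nearly parallel
        # to the direction from this tetrahedron toward the target point
        direction = point - tets[current]['centroid']
        best = max(range(4),
                   key=lambda k: np.dot(faces[k][1], direction)
                   / np.linalg.norm(faces[k][1]))
        nxt = tets[current]['neighbors'][best]
        if nxt is None:
            return None                          # walked off the model
        current = nxt
    return None
```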
ECEF Cartesian Hash Table can be used to facilitate
finding tetrahedron that encloses a point
[Figure: 2D cartoon of a Cartesian hash grid overlain on a mesh of numbered tetrahedra]
The point is in hash cell (5,5,0), which overlaps 5 tetrahedra: 16, 19, 20, 21, 26
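A minimal sketch of the spatial hash under my own assumptions (a uniform ECEF cell size, bounding-box registration, and a Python dict as the "hash table"; not the author's data structure):

```python
import numpy as np
from collections import defaultdict

CELL_SIZE_KM = 100.0   # hash-cell edge length (illustrative value)

def cell_of(xyz):
    """Map an ECEF point (km) to an integer hash-cell index (i, j, k)."""
    return tuple(np.floor(np.asarray(xyz) / CELL_SIZE_KM).astype(int))

def build_hash_table(tet_vertices):
    """tet_vertices: list of (4, 3) arrays of ECEF vertex coordinates.
    Each tetrahedron is registered in every cell that its axis-aligned
    bounding box overlaps (a conservative over-estimate of true overlap)."""
    table = defaultdict(list)
    for itet, verts in enumerate(tet_vertices):
        lo = cell_of(verts.min(axis=0))
        hi = cell_of(verts.max(axis=0))
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    table[(i, j, k)].append(itet)
    return table

def candidate_tetrahedra(table, xyz):
    """Return the (usually short) list of tetrahedra registered in the cell
    containing xyz; each candidate is then checked with the exact
    inside/outside test."""
    return table.get(cell_of(xyz), [])
```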
Station-specific 3D Traveltime
Tables that allow multiple arrivals
Rays shot at suite of angles from station
Ray tubes with traveltime
increments tabulated
And hashed onto underlying tetrahedral model
Multiple ray tubes allowed
Point → tetrahedron → ray tubes overlapping this tetrahedron →
walk each ray tube to find the segment enclosing the point
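A minimal sketch of that lookup chain, reusing candidate_tetrahedra and the face data from the sketches above (all names and the per-tube data layout are my assumptions, not the author's):

```python
import numpy as np

def traveltimes_at_point(xyz, hash_table, tets, ray_tubes, tubes_by_tet):
    """Station-specific traveltime lookup following the chain above:
    point -> enclosing tetrahedron -> ray tubes registered on that
    tetrahedron -> walk each tube to the segment enclosing the point.
    Assumed layout (my convention):
      tubes_by_tet[i] - indices of ray tubes overlapping tetrahedron i
      ray_tubes[j]    - list of segments, each a dict with callables
                        'contains(xyz)' and 'traveltime(xyz)'
    Multiple tubes may enclose the point, so a list of arrivals is returned."""
    xyz = np.asarray(xyz, dtype=float)
    # 1. candidate tetrahedra from the hash table, verified exactly
    tet = None
    for itet in candidate_tetrahedra(hash_table, xyz):
        if all(np.dot(n, xyz - p0) <= 0.0 for p0, n in tets[itet]['faces']):
            tet = itet
            break
    if tet is None:
        return []                                 # point outside the model
    # 2. walk every ray tube registered on this tetrahedron
    arrivals = []
    for itube in tubes_by_tet.get(tet, []):
        for segment in ray_tubes[itube]:
            if segment['contains'](xyz):
                arrivals.append(segment['traveltime'](xyz))
                break
    return arrivals                               # multiple arrivals allowed
```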
Some Closing Remarks
Interoperability
Growing need to use each others models in
quantitative ways
Other people need to be able to understand your model
representation in considerable detail
Many subtle issues related to model representation
make interoperability difficult
Models are not as interoperable as they might seem,
for reasons that go beyond mere differences in
representation
Conversion between Representations
Conversions should try to preserve the original
predictions of a model, not merely the model itself.
Conversion process should match idealized data, a
process that itself requires inversion, not merely
resampling.
Which means that the authors need to state what data
are being fit – and how well – by their models.
Efficiency and Accuracy
Accuracy includes how well model fits earth
features, not just how well quantities are
computed in that model.
Efficiency trades off with generality.
Computation efficiency can be enhanced by
clever choices of pre-computed information
(e.g. hash tables) but trades off with storage
efficiency.