
Comments on UCERF 3

Art Frankel

USGS

For Workshop on Use of UCERF3 in the National Seismic Hazard Maps

Oct. 17-18, 2012

220,000 rupture scenarios, r; solving for the rate of each one.

Number of equations:

2,600 sub-sections, s

34 paleoseismic rates (recently added slip-per-event constraint for some sites)

About 25 magnitude bins (assuming dmag = 0.1)

Have recently added about 65,000 spatial smoothing equations (2,600 sub-segments x 25 mag bins)

New fault MFD constraint adds about 300 x 25 = 7,500 equations
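As a sanity check on these counts, a quick tally (a minimal Python sketch using only the approximate numbers listed above) makes the imbalance explicit:

```python
# Rough tally of Grand Inversion constraint equations vs. unknowns,
# using the approximate counts listed on this slide.
n_unknowns = 220_000            # rupture scenarios, r

n_slip_rate = 2_600             # sub-section slip-rate equations
n_paleo     = 34                # paleoseismic recurrence-rate equations
n_mfd_bins  = 25                # regional GR (b=1) magnitude bins, dmag = 0.1
n_smoothing = 2_600 * 25        # spatial smoothing: sub-sections x mag bins
n_fault_mfd = 300 * 25          # B-fault MFD constraint: faults x mag bins

n_data_eqs       = n_slip_rate + n_paleo                   # equations with data (~2,634)
n_constraint_eqs = n_mfd_bins + n_smoothing + n_fault_mfd  # regularization (~72,525)

print(f"unknowns:             {n_unknowns:,}")
print(f"data equations:       {n_data_eqs:,}")
print(f"constraint equations: {n_constraint_eqs:,}")
print(f"total equations:      {n_data_eqs + n_constraint_eqs:,}")
```

Only about 2,600 of these equations carry data; the rest are smoothing and MFD regularization.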

• Grand Inversion solves for 220,000 unknowns (rates of every possible combination of sub-segments) using less than 2,200 independent data points (230 geologic slip rates and 34 paleoseismic recurrence rates; plus geodetic data which is used to estimate slip rates on 1,877 fault segments [see spreadsheets], but most of these are not independent).

• You are trying to solve for 220,000 unknowns using about 2,630 equations with data (2,600 sub-segment slip rates + 34 paleoseismic rates).

• You also have m equations for GR b=1 constraint, where m is number of magnitude bins (about 25?).

• Recently-added spatial smoothing constraint would give about 65,000 additional equations (2,600 sub-segments x 25 magnitude bins). The recently added MFD constraint on B-type faults adds about 7,500 more (300 faults x 25 magnitude bins).

• This is a mixed-determined problem (more unknowns than equations) and some rupture rates will have non-unique solutions and cannot be resolved. Which ones? It’s not clear from simulated annealing solutions we’ve seen. There will be correlations between rupture rates on different faults, so treating each rupture rate as independent could lead to a misleading hazard value. Just calculating means from all the runs is not sufficient, because of correlations.

• How much of the solutions we’ve been shown can we believe? How well are the rates resolved? In a typical inverse problem involving inverting a matrix, one can determine a formal resolution matrix. With simulated annealing, you can’t.

• Should invert on simulated test cases. Set up rupture model with multiple fault ruptures, specify slip rates and some recurrence rates, add noise, and try inversion to see how many rupture rates you can resolve.
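A minimal sketch of such a synthetic test, with an invented toy geometry (six sub-sections, contiguous ruptures up to three sub-sections long) and made-up slips and rates; a non-negative least-squares solve stands in here for the simulated annealing used in the GI:

```python
# Synthetic resolution test: invent "true" rupture rates, predict sub-section
# slip rates, add noise, invert, and compare recovered rates to the truth.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_sub = 6
ruptures = [(i, j) for i in range(n_sub) for j in range(i, n_sub) if j - i <= 2]
n_rup = len(ruptures)

# G[s, r] = average slip (m) that rupture r puts on sub-section s (invented scaling).
G = np.zeros((n_sub, n_rup))
for r, (i, j) in enumerate(ruptures):
    G[i:j + 1, r] = 1.0 + 0.5 * (j - i)   # longer ruptures carry more slip

true_rates = rng.uniform(1e-4, 1e-3, n_rup)   # "true" rupture rates (1/yr)
slip_rates = G @ true_rates                   # implied sub-section slip rates (m/yr)

# Add 10% observational noise and invert with a positivity constraint.
d = slip_rates * (1.0 + 0.1 * rng.standard_normal(n_sub))
est_rates, _ = nnls(G, d)

for r, (i, j) in enumerate(ruptures):
    print(f"rupture {i}-{j}: true {true_rates[r]:.2e}  est {est_rates[r]:.2e}")
```

Because there are more candidate ruptures than slip-rate equations, many recovered rates differ badly from the true ones; that is exactly the kind of resolution check being suggested.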

Results depend on choices of spatial smoothing, relative weights of different data sets (slip rates vs. paleoseismic recurrence rates), relative weights of constraints, etc.

How do these subjective choices affect the seismic hazard values?

A question about multi-fault rupture

[Figure: two adjacent faults, each shown with an M7.0 single-fault rupture, and a combined two-fault M7.6 rupture]

If these faults have the same slip rate, does the GI give the same rate for the two-fault M7.6 rupture as for a single M7.6 rupture on one large fault?

Should it?
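As an illustrative aside (not from the slide), simple moment bookkeeping with the standard relation $\log_{10} M_0 = 1.5M + 9.05$ (N m) shows why the answer is not obvious:

$$
M_0(7.0) \approx 3.5\times 10^{19}\ \mathrm{N\,m}, \qquad
2\,M_0(7.0) \approx 7.1\times 10^{19}\ \mathrm{N\,m} \;\Rightarrow\; M \approx 7.2 .
$$

Reaching M7.6 on the combined rupture requires more average slip than the two single-fault events supply, so the rate the GI assigns to the two-fault M7.6 depends on how the slip-rate budget is shared among single-fault and multi-fault scenarios, not just on the individual fault slip rates.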

Beware of highly-complicated results from a simplified model

• Slip distributions for large earthquakes with long rupture lengths do not look like rainbows or boxcars. Some portions of the rupture near its ends have higher slip than the rest of the fault (e.g., the M7.9 Denali earthquake, the M7.6 Chi Chi earthquake). This is not included in the GI model, so complex results from a simplistic model can be misleading. Especially true of multi-fault ruptures.

• Ambiguity of rates for M6.0-6.5 events. Absence of evidence in a trench may mean they did not occur, or could mean they occurred but were not observed. Just multiplying the rate by 2 to account for the probability of observation is not a unique interpretation.
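The factor of 2 corresponds to one particular assumption about detectability. As a simple illustration (the probabilities here are assumed, not from the slide), if each event has probability $p$ of leaving a recognizable record in the trench, then

$$
r_{\text{true}} = \frac{r_{\text{obs}}}{p}, \qquad
p = 0.5 \Rightarrow r_{\text{true}} = 2\,r_{\text{obs}}, \qquad
p = 0.25 \Rightarrow r_{\text{true}} = 4\,r_{\text{obs}},
$$

so the correction is only as unique as the assumed detection probability.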

Figure from UCERF3 TR8: all rupture scenarios involving the northern segment of the Hayward fault (similar to what was shown yesterday). Modal earthquake is M7.5.

Need to close the loop with geologists: is this MFD plausible given geologic observations?

From WGCEP 1999: for the North Hayward segment, the modal rupture scenario is a single-segment M6.6 with a rate of 0.0026/yr. Also: 0.0019/yr rate of NH + SH rupture with M7.1.

From UCERF3 TR8

Implies that 52% of future M6.5+ events will occur off the faults in the model (from cyan line to black line). This becomes 38% if one includes M6.5's in fault polygon boxes.

In UCERF2 we determined that 25% of M6.5+ earthquakes in the past 150 yr occurred off faults in the model.

Note that UCERF3 geodetically-derived deformation models find about 30% of total moment is off the faults in the model (not using fault polygons).

Why should the collective MFD of M5.0-6.5 earthquakes in polygons around faults be continuous with the collective rate of M6.5+ earthquakes on fault traces? (See orange line.)

Issues with determining the target MFD for the Grand Inversion

• Start with GR b=1 for all of CA, then have to decide how much of this MFD is on faults in model to get the target MFD for the GI

• So you have to get rid of seismicity that is not on the faults in the model. They choose polygons around faults and use M5+. This assumes that, collectively, the rate of earthquakes (M5.0-6.5) in boxes around faults is continuous with the rates of M6.5+ earthquakes on fault traces and that they form a continuous GR distribution. Is this true? Is the percentage of M6.5+ earthquakes on faults, relative to all of CA, the same as the percentage of M5's in fault polygons, relative to all of CA? (See the sketch after this list.)

In UCERF2 we estimated the percentage of M6.5+ earthquakes over the past 150 years that were associated with faults in the model, using the catalog.

• It is partially an epistemic judgment to estimate how complete your fault inventory is, that is, what percentage of future M6.5+ earthquakes will occur on faults in your model. You can use off-fault versus on-fault deformation or the earthquake catalog to constrain this, with assumptions.

• It has always been a problem how to handle the seam around M6.5 between the smoothed seismicity and the fault model (the stochastic versus deterministic parts of the model).
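A schematic of the bookkeeping described in the list above, with invented numbers (the regional a-value and the polygon fraction are placeholders, not UCERF3 values):

```python
# Schematic of constructing an on-fault target MFD by splitting a regional
# GR (b = 1) budget using the fraction of M5+ seismicity in fault polygons.
import numpy as np

b = 1.0
mags = np.arange(5.0, 8.05, 0.1)        # magnitude bins, dmag = 0.1
a_total = 4.0                           # assumed regional a-value (illustrative)

# Incremental regional GR rates for all of CA.
rate_total = 10 ** (a_total - b * mags) - 10 ** (a_total - b * (mags + 0.1))

# Assumed fraction of seismicity inside fault polygons, estimated from M5+ counts.
frac_in_polygons = 0.6                  # illustrative, not a UCERF3 value

rate_on_fault_target = rate_total * frac_in_polygons
rate_off_fault       = rate_total * (1.0 - frac_in_polygons)

# The question raised above: is the M5.0-6.5 polygon fraction a valid proxy
# for the fraction of M6.5+ events that occur on the modeled fault traces?
big = mags >= 6.5
print("target M6.5+ on-fault rate: ", rate_on_fault_target[big].sum())
print("implied M6.5+ off-fault rate:", rate_off_fault[big].sum())
```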

Independent Earthquake Rate Issue

• Dependent-independent event problem: we need rates of independent events to do PSHA (aftershock hazard could be included in a separate calculation). The GI provides total rates, with aftershocks.

• An M6.5 earthquake on the Hayward fault is less likely to be an aftershock of the modal earthquake on that fault than an M6.5 earthquake on the San Andreas fault; how do rates of M6.5 earthquakes compare to expected rates of aftershocks of M>=7.5 events?

• For example, say the GI finds approximately equal rates of M6.5 and M7.5 earthquakes on the SAF: if each M7.5 had one M6.5 aftershock, the entire rate of M6.5 earthquakes could represent aftershocks. One should not assume that the ground-shaking probabilities for the M6.5's and M7.5's are independent, which is the assumption for standard PSHA.
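In symbols (with the rates purely illustrative), the worry is:

$$
r_{\text{aft}}(M6.5) \approx n_{\text{aft}} \, r(M7.5), \qquad
r_{\text{indep}}(M6.5) = r_{\text{tot}}(M6.5) - r_{\text{aft}}(M6.5),
$$

so if $r(M7.5) \approx r_{\text{tot}}(M6.5)$ and $n_{\text{aft}} = 1$ per mainshock, then $r_{\text{indep}}(M6.5) \approx 0$: the M6.5 hazard would not add independently to the M7.5 hazard, contrary to the standard PSHA assumption.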

From UCERF3 TR8

Proposed UCERF3 approach to removing aftershocks is not similar to how it was done in UCERF2, which reduced the moment rates by 10%, assuming these were aftershocks or small events on adjacent faults, in effect lowering the rates for all magnitudes by the same factor.

• How do predicted rates of M6.5 earthquakes on individual faults compare to observed rates? e.g., SAF. If inversion predicts many more M6.5’s than observed, then there is a problem. Need to see MFD’s for each fault.

• The forward problem (essentially used in UCERF1 and 2) directly used the measured mean values of slip rates and paleoseismic recurrence rates, not the values from an inversion that approximately fits these observations. It used expert opinion on fault segmentation and multi-segment ruptures (A faults) and used plausible characteristic and GR models for B faults. Now the inversion for 220,000 rupture rates has supplanted the judgment of the geologists who have studied these faults.

• UCERF3 process inherently prefers inversion to expert opinion; now it’s a smaller group of experts who are familiar with GI. Also, fewer geologic experts involved compared to S.F. Bay area study (WGCEP, 1999).

Looks like Central Valley hazard could be much higher from Deformation Model

From UCERF3 TR8

Other issues

 Adding faults with slip rates inferred from recency of activity or slip-rate class. This violates the past principle of the NSHM's that we only use faults that have some direct measurement of slip rate or paleoearthquake rate (full disclosure: in NV we used slip rates based on geomorphic expression).

 Assigning fixed weights to geodetic models that have regionally-varying uncertainties. Weighting of models should vary spatially with the relative quality of the geologic and geodetic information (which is why we used geodetic info in Puget Sound and eastern CA/western NV).

Using geodetic models will make substantial changes to the hazard maps, even for areas with well-determined geologic slip rates (e.g., the north coast portion of the SAF).

GR b=1.0?

[Figure: observed MFD from the UCERF 2 report (Field et al., 2009), independent events: b = 0.62 for M5.0-6.5]

Does GR b=0.8 fit the observed MFD's for northern and southern CA (for independent events)?

[Figure: b = 0.6 for southern CA, M5.0-6.5, 1932-1980; Hutton et al. (2010) for southern CA, 1981-2008]
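For reference, b-values like those quoted in these figures are typically computed with the Aki (1965) maximum-likelihood estimator; a minimal sketch on a made-up, already-declustered catalog (synthetic magnitudes, not the UCERF 2 or Hutton et al. data):

```python
# Aki (1965) maximum-likelihood b-value for a declustered catalog, as a way
# to check whether b = 1.0 (or 0.8, or 0.6) fits observed M5.0-6.5 rates.
import numpy as np

def b_value_ml(mags, m_complete):
    """ML b-value above a completeness magnitude (continuous magnitudes).
    For magnitudes binned at dmag, replace m_complete with m_complete - dmag/2
    (Utsu's correction)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - m_complete)

# Synthetic catalog drawn from a GR distribution with b = 0.7 (illustration only).
rng = np.random.default_rng(1)
catalog = 5.0 + rng.exponential(scale=np.log10(np.e) / 0.7, size=500)

print(f"estimated b: {b_value_ml(catalog, m_complete=5.0):.2f}")
```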

The GR relation with b=1.0

• What is the physical cause? Lots of reasons have been presented. It is especially interesting that the slope does not change as rupture widths go from a fraction of the seismogenic thickness to equal to or greater than that thickness.

• In the inversion, the rates on widely-separated faults are dependent on each other. Why should the earthquake rate of the Death Valley fault be affected by the earthquake rate of the Little Salmon fault in NW CA? There are now correlations between distant faults.

Do we think the map on the right represents a good picture of where the M6 off-fault earthquakes are most likely to occur in the next 50-100 years?

My personal view

• Grand Inversion is a promising research tool. It will take time to evaluate its resolution and non-uniqueness. It will have to be reproducible by others.

• There are important issues on the target MFD for the GI and the aftershock problem.

• Is the GI suitable for policy such as building codes and insurance? Two tracks: policy track and research track that intersect at times. When will the GI be ready for prime time? This update cycle?

• UCERF3 will change the hazard substantially at some locations. Why?

• Justifying changes in hazard maps: "The inversion made me do it" rather than "the geologic experts who have studied these faults made me do it."

• Using the slip rates from the geodetic models will change the hazard. That doesn’t mean we shouldn’t use them.

• Imperative that UCERF3 delivers a set of rupture scenarios and rates that can be input into the FORTRAN codes of the NSHMP. This has to be integrated into the national seismic hazard maps and associated products such as deaggs.

• Is there a way to refine UCERF2 to include new geologic and geodetic data and include multi-fault ruptures? Can the UCERF3 inversion be constrained to be as close as possible to UCERF2?
