
Project Plan for the

Working Group on California Earthquake Probabilities

(WGCEP) development of a

Uniform California Earthquake-Rupture Forecast

(UCERF)

written by the

Executive Committee (EC)

for consideration by the

Management Oversight Committee (MOC)


Table of Contents

Introduction
  Goal
  Management Structure
  Project Costs
  The Meaning of "Consensus"
  Decision Making Process
  Potential Innovations Relative to Previous WGCEPs
  Delivery Schedule
Development Plan
  Basic UCERF Structure & Components
  Data Components in More Detail
    1. California Reference Geologic Fault Parameters
    2. GPS Database
    3. Instrumental Earthquake Catalog
    4. Historical Earthquake Catalog
  Model Components in More Detail
    A. Fault Model(s)
    B. Deformation Model(s)
    C. Earthquake Rate Model(s)
    D. Time-Dependent UCERF(s)
  Implementation Agenda and Priorities
    UCERF Version 1.0 by Nov. 30, 2005
    UCERF Version 2.0 by Sept. 30, 2007
    UCERF 3.0 by ????????
  Unresolved Issues
Action Items
  Tasks
  Workshops
  Meetings
Appendix A – The California Reference Geologic Fault Parameters
Appendix B – A Framework for Developing a Time-Dependent ERF
Appendix C – Deformation Models
Appendix D – Weldon et al. Analysis of So. SAF Paleoseismic Data
Appendix E – UCERF 1.0 Specification


Introduction

Goal

Our goal is to provide the California Earthquake Authority (CEA; http://www.earthquakeauthority.com) with a statewide, time-dependent earthquake-rupture forecast (ERF) that uses best available science and is endorsed by the United States Geological Survey (USGS), the California Geological Survey (CGS), and the Southern California Earthquake Center (SCEC). This model, called the Uniform California Earthquake Rupture Forecast (UCERF), will be delivered to the CEA by September 1, 2007.

All activities of this working group are tightly coordinated with those of the USGS National Seismic Hazard Mapping Project (NSHMP). The UCERF will be formally evaluated by a Scientific Review Panel (SRP), the California Earthquake Prediction Evaluation Council (CEPEC), and possibly the National Earthquake Prediction Evaluation Council (NEPEC). We intend to deploy the model in an adaptable and extensible framework whereby modifications can be made as warranted by scientific developments, the collection of new data, or the occurrence of significant earthquakes (subject to the approval of the review panels). The model will be "living" to the extent that the update process can occur in short order. CEA has made it clear that they would/could use 1-year forecasts updated on a yearly basis. That being said, we do not want to get sidetracked by grand ambitions, so our implementation strategy is to add more advanced capabilities only after achieving more modest goals.

Management Structure

Management is based on two complementary groups, a Management Oversight Committee (MOC) and a Working Group on California Earthquake Probabilities (WGCEP). The MOC is composed of the four leaders who control the resources of the participating organizations:

 Thomas H. Jordan, SCEC Director, University of Southern California (USC)

 William L. Ellsworth, former Chief Scientist, Earthquake Hazards Team, USGS, Menlo Park (Rufus Catchings, the new Chief Scientist, may take over)

 Jill McCarthy, Chief Scientist, Geologic Hazards Team, USGS, Golden

 Michael Reichle, State Seismologist, CGS, Sacramento

The MOC is chaired by the SCEC Director, who serves both as the Principal Investigator, responsible for overall success, and as a single point of contact for the CEA. The other three MOC members act as Co-P.I.'s with responsibilities for the activities of their respective organizations. The MOC approves all project plans, budgets, and schedules.

The WGCEP is responsible for all technical aspects of the project, including model design and implementation and supporting database development. The group is composed of:

 Group Leader (Edward (Ned) Field) - Responsible for overall coordination and serves as a single point of contact with the MOC.


 Executive Committee (EC) - Composed of the leader and 5 co-leaders (Mark Petersen, Chris Wills, Tom Parsons, Ray Weldon, and Ross Stein) - responsible for convening experts, reviewing options, making decisions, and orchestrating implementation of the model and supporting databases.

 Key Scientists – Individuals who contribute in a major way by providing expert opinion and/or specific model elements. These individuals are more likely to receive funding for their activities and are expected to document their contributions via peer-reviewed publications.

 Contributors – Members of the broader community who contribute in some way.

Project Costs

The MOC has estimated a total 32-month project cost of ~$8 million, which by organization breaks down approximately as:

Organization | Cost (32 mo.)
USGS, Golden | $2,206,000
USGS, Menlo Park | $1,840,000
CGS | $764,000
SCEC | $3,300,000
Total Project Cost | $8,110,000

We anticipate that up to $1.75 million of this will come from the CEA to support:

 All project workshops

 Implementation of software framework for merging databases and model components

 Support of ERF model development and database development projects, complementary to ongoing SCEC, USGS, and CGS efforts

The Meaning of “Consensus”

In simplest terms, the CEA wants a model they can defend in court. To this end, they desire a model that represents "consensus" in the sense of having broad support. Unfortunately, it is presently impossible to construct a single forecast model that enjoys unanimous consent among the entire science community. The appropriate approach in this situation is to accommodate some number of alternative forecast models that span the range of credible scientific interpretations. For example, the regional earthquake probabilities determined by the 2002 WGCEP were based on 10,000 different ERFs. Fortunately, probabilistic seismic hazard analysis (PSHA) can easily handle multiple models; indeed, its proper implementation requires that, ideally, all viable models be accommodated.

Our WGCEP will strive to accommodate whatever minimum number of models is needed to span the range of viability and importance (given budgetary and time constraints). Indeed, our adoption of an object-oriented implementation will greatly increase our ability to accommodate multiple alternatives. However, we will not be able to include all viable options, so there will inevitably be some members of the science community whose perspectives are not represented. Fortunately, these individuals have an alternative venue where they can put forth a fully developed and articulated alternative model for evaluation and testing (SCEC's RELM working group; http://www.RELM.org). There is no reason why such an alternative model couldn't be included at a future date if adequately vetted (e.g., by CEPEC), although it is doubtful that any models would be available by our initial delivery date of September 2007. The important point here is that we can't afford an attempt to accommodate all viable options in the UCERF. Furthermore, we believe the outsiders will understand the appropriateness of our approach, be all too happy to be spared the inevitable compromises they'd have to make in being part of this WGCEP, and be more motivated to develop a competing alternative (which will be healthy for SHA).

While the UCERF will not have unanimous consent among the science community, it will represent consensus among the WGCEP participants. Perhaps most important, however, is that it will have been reviewed by the SRP and CEPEC, and will have the endorsement of the USGS, CGS, and SCEC.

Decision Making Process

After convening experts and reviewing options, decisions regarding model components and recommended logic-tree branch weights will be made by the Executive Committee, subject to approval by the MOC, SRP, and CEPEC. In fact, we hope for strong feedback from the SRP and CEPEC, and would be open to an alternative protocol where the latter decide on final model weights.

Potential Innovations Relative to Previous WGCEPs

 Statewide model

 Use of kinematically consistent deformation model(s) that include GPS data directly

 Use of Weldon/Biasi/Fumal paleo-data analysis and interpretation, which should better constrain viable rupture models at least for the San Andreas Fault (SAF).

 Relaxation of strict segmentation assumptions (persistent rupture boundaries)

 Allowance of fault-to-fault ruptures

 Inclusion of stress-change and/or foreshock/aftershock based probability changes (the former were considered by WGCEP-2002; the latter will be considered only at a later stage and if deemed appropriate)

 Accommodation of more alternative models (logic-tree branches) as made feasible by an object-oriented implementation

 The model will be adaptive and extensible (living)

Delivery Schedule

Nov. 30, 2005
  UCERF 1.0
  Southern SAF Assessment

May 31, 2006
  Fault Section Database 2.0
  Earthquake Rate Model 2.0 (preliminary for NSHMP)

June 1, 2007
  Final, reviewed Earthquake Rate Model delivered to the NSHMP for use in their 2007 revisions.

September 30, 2007
  UCERF 2.0 delivered to CEA (reviewed by SRP and CEPEC).

Development Plan

Basic UCERF Structure & Components

Our implementation of UCERF in an object-oriented, modular framework will provide the following advantages: 1) we can more easily accommodate different epistemic uncertainties (branches of a logic tree); and 2) we can implement something relatively simple (yet defensible) in the near term, while simultaneously accommodating more sophisticated components if and when they are available. The important point is that relatively ambitious features will not hold up delivery of a useful model, yet more innovative contributions can be ushered in later.
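To make the logic-tree idea concrete, here is a minimal sketch (in Python) of how weighted epistemic branches can be enumerated and combined; the class names, branch labels, weights, and forecast numbers are all hypothetical placeholders, not the WGCEP implementation or any assigned values:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Branch:
    name: str      # label of one epistemic alternative (e.g., a deformation model version)
    weight: float  # weight assigned to that alternative

# Hypothetical branch sets and weights; placeholders only.
deformation_models = [Branch("DefModel-2.0", 0.7), Branch("DefModel-3.0", 0.3)]
mag_area_relations = [Branch("RelationA", 0.5), Branch("RelationB", 0.5)]

def forecast(def_model: Branch, mag_area: Branch) -> float:
    """Stand-in for a full rate/probability calculation on one logic-tree branch."""
    base = {"DefModel-2.0": 0.10, "DefModel-3.0": 0.14}[def_model.name]
    tweak = {"RelationA": 0.00, "RelationB": 0.02}[mag_area.name]
    return base + tweak  # made-up numbers purely to exercise the bookkeeping

# Enumerate every branch combination; branch weights multiply down the tree.
weighted_mean = 0.0
for dm, ma in product(deformation_models, mag_area_relations):
    weighted_mean += dm.weight * ma.weight * forecast(dm, ma)

print(f"weighted-mean forecast over all branches: {weighted_mean:.3f}")
```

The design point is simply that each model component becomes a swappable object, so adding an alternative branch means adding an element to a list rather than rewriting the calculation.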

The UCERF will be composed of the following model components, each of which will make use of one or more of the data components that follow:

Model Components

A. Fault Model(s) from data components 1, 3, and 4 below
B. Deformation Model(s) from Fault Model(s) and data components 1, 2, and 5 below
C. Earthquake Rate Model(s) from the Deformation Model(s) and data components 1, 3, and 4 below
D. Time-Dependent UCERF(s) from the Earthquake Rate Model(s) and data components 1, 3, and 4 below

Data Components

1. California Reference Geologic Parameter Database
2. GPS Database
3. Instrumental Earthquake Catalog
4. Historical Earthquake Catalog

Some of the model components are actually composed of sub-model components (more details below), and there will likely be alternative versions of each. There are also other useful data and/or model elements that are not listed here (e.g., NUVEL plate-tectonic rates). Because the model components depend on the data components, we will discuss the latter in more detail first.

Data Components in More Detail

1. California Reference Geologic Fault Parameters

This database is the result of much planning and coordination between SCEC, CGS, and the USGS offices in Golden and Menlo Park. It is designed not as a complete archive of fault information, but rather as a set of reference parameters for use by modelers (supporting first and foremost this WGCEP, but other modelers will likely find the data useful as well). This database has two primary components:

Fault-Section Database

Composed of the following information for each fault section:

fault trace
average dip estimate
average upper seismogenic depth estimate
average lower seismogenic depth estimate
average long-term slip-rate estimate
average aseismic-slip-factor estimate
average rake estimate

(note that "estimate" includes formal uncertainties)

Neighboring sections are defined to the extent that any of the parameters (except fault trace) are significantly different. Note that this sectioning is for descriptive purposes only, and does not necessarily imply rupture segmentation. Note also that this Fault-Section Database is nearly (and deliberately) identical to information used in the NSHMP 2002 hazard model. The versions we are likely to consider are:
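For concreteness, a minimal sketch (in Python) of how one fault-section record and its parameter estimates might be represented; the class and field names are illustrative assumptions, not the actual database schema:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Estimate:
    preferred: float
    lower: Optional[float] = None   # formal uncertainty bounds, if available
    upper: Optional[float] = None

@dataclass
class FaultSection:
    name: str
    trace: List[Tuple[float, float]]   # (longitude, latitude) points along the fault trace
    dip_deg: Estimate                  # average dip
    upper_seis_depth_km: Estimate      # average upper seismogenic depth
    lower_seis_depth_km: Estimate      # average lower seismogenic depth
    slip_rate_mm_per_yr: Estimate      # average long-term slip rate
    aseismic_slip_factor: Estimate     # fraction of slip released aseismically
    rake_deg: Estimate                 # average rake

# Example record with made-up numbers (illustrative only, not database values).
section = FaultSection(
    name="Example section",
    trace=[(-118.0, 34.0), (-117.9, 34.1)],
    dip_deg=Estimate(90.0),
    upper_seis_depth_km=Estimate(0.0),
    lower_seis_depth_km=Estimate(12.0, 10.0, 15.0),
    slip_rate_mm_per_yr=Estimate(25.0, 20.0, 30.0),
    aseismic_slip_factor=Estimate(0.0),
    rake_deg=Estimate(180.0),
)
```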


Fault-Section Database 1.0:

Exactly that used by the NSHMP 2002

Fault-Section Database 2.0:

Revision of version 1.0 based on the SCEC CFM (including viable alternative representations – versions 2.1, 2.2, …) and any equivalent efforts in northern California. This will also depend on reevaluating slip rates (which the SCEC CFM group is not focused on, although these could perhaps be considered in conjunction with the formal CFM evaluation).

Notes/Issues:

These will not utilize the triangular surfaces (t-surfs) defined by the CFM because they present challenges (how to float earthquakes down t-surfs) and this level of detail is overkill for hazard. The CFM group (Shaw & Plesch) is providing the simplified "rectilinear" representations needed here.

Should we support depth-dependent dips within fault sections?

Where will we get the seismogenic depth estimates? WGCEP-2002 determined these from the distribution of regional seismicity and heat flow data, with the depths of microearthquakes being the primary consideration.

Do we need to add heat-flow as another data element?

Where do we get aseismicity factors?

Fault representation(s) for Cascadia will be provided by the Cascadia group (led by USGS, Golden).

Paleo Sites Database

The primary purpose of this database is to provide the following at specific fault locations:

slip-rate estimates
event-sequence lists

Each slip-rate estimate is associated with a specified time span, allowing more than one value to distinguish between shorter-term and longer-term estimates. No "preferred" values are specified because there is no objective way of making such designations (we can add different types of "preferred" or "best" estimates if users can define exactly what they mean). This is also not necessarily a complete archive of all previously inferred values. This slip-rate information will be useful both for deformation modeling and for assigning averages to fault sections.

An event sequence is someone's interpretation of the previous events at the site (including the date of and/or amount of slip in each). The database supports more than one event sequence (thus, the event-sequence list) because there may be differing opinions (an epistemic uncertainty, in that there is at most only one correct sequence). This database is designed explicitly to support the Weldon/Biasi/Fumal-type analyses. It will also, of course, support probability calculations that depend on the date or amount of slip in the last event.

We envision the following versions:

Paleo Sites Database 1.0:

Whatever will be needed to support the time dependent calculations in UCERF 1.0 (more below)

Paleo Sites Database 2.0:

Whatever will be needed to support the time dependent calculations in UCERF 2.0 (more below), which may include the Weldon/Biasi/Fumal-type analyses.

Note that this database has a rather sophisticated treatment of uncertainties (e.g., allowing full PDFs if available, and not allowing ambiguous values such as "min" and "max"; see the data model in Appendix A for details).
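To make the structure concrete, here is a minimal Python sketch of a paleo-site record carrying time-span-specific slip-rate estimates and alternative event sequences; the class and field names are hypothetical and do not reproduce the data model in Appendix A:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SlipRateEstimate:
    rate_mm_per_yr: float
    start_year: float      # beginning of the time span the rate applies to
    end_year: float        # end of that time span

@dataclass
class PaleoEvent:
    date_year: Optional[float] = None   # inferred event date, if constrained
    slip_m: Optional[float] = None      # inferred slip, if constrained

@dataclass
class EventSequence:
    interpreter: str                                      # whose interpretation (an epistemic alternative)
    events: List[PaleoEvent] = field(default_factory=list)

@dataclass
class PaleoSite:
    name: str
    lon: float
    lat: float
    slip_rates: List[SlipRateEstimate] = field(default_factory=list)    # shorter- vs longer-term rates
    event_sequences: List[EventSequence] = field(default_factory=list)  # alternative event histories
```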

This database is machine readable, enabling both on-the-fly access during a probability calculation and GUI-based data entry and review (a prototype GUI exists).

Both the continually updated “living” version of this database, as well as any locked-down, official release (e.g., that used in a version of the UCERF) will reside at USGS, Golden (as part of the National database).

Chris Wills is in charge of managing the development of this database. His implementation plan and a pointer to the data model are given in Appendix A.

2. GPS Database

A statewide GPS database is currently being compiled by Duncan Agnew and others (with funding from this project). In Duncan's own words:


“We will merge and reprocess survey-mode GPS data from the SCEC and NCEDC archives along with other survey-mode observations from within California, and combine these with analyses of continuous GPS sites in California and western Nevada. We will also, as appropriate, use velocity fields computed by others, rotating these into a common reference frame. The aim will be to produce velocities that represent the interseismic motion over all of California and northern Baja California, in addition to bounding zones extending east and north in order to facilitate modeling. Velocities will be referenced to stable North America, and will include errors (including a covariance matrix). Sites subject to nontectonic motion will be removed, in consultation with the modeling efforts. The initial compilation … [is available now, and] … an updated second [version will be delivered in] January 2006.”

Tom Parsons is coordinating this activity with the WGCEP as part of his management of the deformation modeling efforts (see Appendix C for the specific plan).

3. Instrumental Earthquake Catalog

This should be in good shape for both northern and southern California?

However, there might be some magnitude-completeness issues if we use seismicity to infer foreshock/aftershock statistics and/or stress changes (although this is definitely not an immediate priority).

There also may be issues related to automatic access to the data, such as: 1) is there a standard interface for both N. and S. California? 2) Is there versioning in terms of magnitude estimate updates?

Do we need a declustered catalog?

Ned Field is working with Matt Gerstenberger and Lucile Jones to address data access issues (since they’re dealing with them with their STEP model).

It is not yet clear exactly how, or if, the WGCEP will use this data.

4. Historical Earthquake Catalog

Version 1.0 of this will be what was used by NSHMP-2002.

Version 2.0 is being assembled by Tianqing Cao under the supervision of Chris Wills. Here is what Tianqing has said regarding his plan:

When we updated the CDMG 1996 statewide catalog (Petersen et al., 1996) for the 2002 hazard model, we used the catalog from Oppenheimer of USGS for northern California, which was a merge of the USGS and UC Berkeley catalogs. For southern California, we used Kagan's catalog (UCLA), which was from a merge of CIT and other catalogs. We plan to follow similar steps in the new round of updating. The following are some of the details planned.

1. I have contacted Drs. Oppenheimer and Kagan to obtain their updated catalogs for northern and southern California. They have agreed to my request. Dr. Kagan's group recently created a new catalog for southern California earthquakes from 1800 to 2005 (M ≥ 4.7), which has been provided to us. We wish to extend the low cutoff magnitude to 4.0 as we did in the past. For eastern California, the UNR catalog will be collected and merged.

2. In the 2002 update, we used the catalog of Toppozada and Branum (2002) for California earthquakes of M ≥ 5.5, which added about 50 pre-1932 events of M ≥ 5.5 to the catalog. But some of the historical events are not well determined and differ in magnitude and location from the estimates of Bakun (USGS). We will revisit those events and discuss them with Toppozada and Bakun.

3. Because we don't need the catalog this year, we plan to produce the merged catalog early next year so that it will extend to the end of 2005.

Both Bill Bakun and Tousson Toppozada have expressed willingness to review the catalog.

Do we need moment tensors?

Shall someone do a Wesson-type Bayesian analysis to try to associate these events with known faults?

What about declustering?

Model Components in More Detail

A. Fault Model(s)

The fault model(s) used for hazard calculations will be the "Fault-Section Database" component listed above under the California Reference Geologic Fault Parameter Database. Version 1.0 is that used in the NSHMP-2002 model, and Versions 2.0, 2.1, etc. will represent an update and alternatives based on the SCEC CFM and any equivalent northern California efforts.

Chris Wills is in charge of this effort. See Appendix A for details.

B. Deformation Model(s)


Ideally, a deformation model would give us crustal stressing and/or straining rates. More specifically (in terms of what we would likely use in the UCERF), we would like estimates of:

a) slip rates on known major faults
b) moment-accumulation rates elsewhere
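For reference, the moment-accumulation (or moment-deficit) rate implied by a slip rate on a locked fault patch follows the standard relation

\[
\dot{M}_0 = \mu \, A \, \dot{s},
\]

where \(\mu\) is the shear modulus (commonly taken as ~3×10^10 Pa), \(A\) is the locked fault area, and \(\dot{s}\) is the slip (or slip-deficit) rate; an analogous (Kostrov-type) expression converts off-fault strain rates into moment-accumulation rates. This is standard background, not a new model element of this plan.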

Following previous WGCEPs, our initial deformation models will be:

Version 1: Fault-Section Database 1.0 described above (fault sections with average slip rates from NSHMP-2002).

Version 2: Fault-Section Database 2.0 described above (an update of version 1.0, which will most likely be what's used by the NSHMP in their next update).

Remember, these Fault-Section Databases include average long-term slip rates for each fault section, which represent consensus and expert judgment using a variety of available constraints. Although these models will presumably be consistent with the total tectonic deformation across the plate boundary, there is no guarantee that they are kinematically consistent. They also do not incorporate GPS data explicitly, nor do they give deformation rates off of known faults.

Therefore, as outlined in Appendix C, we are pursuing a variety of deformation modeling approaches (i.e., Peter Bird’s NeoKinema, MIT’s block model, and Tom Parsons’ 3D finite-element model). The primary goal is to obtain improved slip-rate estimates for each fault section. The current question is whether any of these models will provide results that are deemed worthy of application (i.e., better than doing nothing). If so, they will constitute Deformation Model versions 3.0 and above.

In addition to providing alternative slip rates for each fault section, the deformation models may also provide deformation rates (or moment release rates) elsewhere in the region (off of the modeled faults). How to represent such output has yet to be determined.

Finally, deformation models may also provide the stressing rates on faults so that "clock change"-type calculations, based on stress changes caused by actual events, can be made.

Tom Parsons is in charge of organizing the deformation modeling activities (see Appendix C).

As noted above, both a statewide GPS dataset and geologic slip rates at points on faults (Paleo Sites Database) are being compiled to support these modeling activities.


C. Earthquake Rate Model(s)

An Earthquake Rate Model gives the long-term rate of all possible earthquakes throughout the region (above some magnitude threshold, and at some spatial- and magnitude-discretization level). This model has two components: 1) a Fault-Qk-Rate Model giving the rate of events on explicitly modeled faults; and 2) a Background-Seismicity Model to account for events elsewhere (off the explicitly modeled faults).

More precisely, the Fault-Qk-Rate Model gives:

FR_{f,m,r} = rate of the r-th rupture of the m-th magnitude on the f-th fault (where the possible rupture surfaces will depend on the magnitude)

This model can be constructed by making assumptions on the spatial extent of possible ruptures (e.g., segmentation) and/or the magnitude-frequency distribution for the fault, using magnitude-area relationships, and by satisfying known slip rates. Examples include the WGCEP-2002 long-term SF Bay Area model (e.g., Chapter 4 at http://pubs.usgs.gov/of/2003/of03-214/) and the NSHMP-2002 model (http://pubs.usgs.gov/of/2002/ofr-02-420/OFR-02-420.pdf).

Likewise, the Background-Seismicity Model gives:

BR_{i,j,m} = background rate of the m-th magnitude at the i-th latitude and j-th longitude

This model is usually a Gutenberg-Richter distribution assigned to each grid point, the a-value of which is based on smoothed historical seismicity (e.g., the NSHMP-2002 model), but it could be constrained by a deformation model as well. It's important to note that the Background-Seismicity Model is a hypocenter forecast, and does not give actual rupture surfaces as provided by the Fault-Qk-Rate Model (something SHA calculations need to account for).
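As a small, self-contained illustration of this bookkeeping (placeholder a- and b-values, not NSHMP parameters), the sketch below converts a grid cell's Gutenberg-Richter parameters into incremental rates and computes the implied moment rate of the kind used in moment balancing; it assumes the Hanks and Kanamori (1979) moment-magnitude relation:

```python
import numpy as np

def gr_incremental_rates(a_value, b_value, m_min=5.0, m_max=8.0, dm=0.1):
    """Incremental Gutenberg-Richter rates for one grid cell.

    Cumulative rate: N(>=M) = 10**(a - b*M).  The incremental rate in a
    magnitude bin [M, M+dm) is the difference of cumulative rates.
    """
    mags = np.arange(m_min, m_max, dm)
    cum_lower = 10.0 ** (a_value - b_value * mags)
    cum_upper = 10.0 ** (a_value - b_value * (mags + dm))
    return mags, cum_lower - cum_upper

# Placeholder values for a single background grid cell.
mags, rates = gr_incremental_rates(a_value=2.0, b_value=1.0)

# Implied moment rate, using M0 (N*m) = 10**(1.5*M + 9.05) at bin centers,
# which can then be compared against a deformation-model moment rate.
moment_rate = np.sum(rates * 10.0 ** (1.5 * (mags + 0.05) + 9.05))
print(f"total rate M>=5: {rates.sum():.4f}/yr, implied moment rate: {moment_rate:.3e} N*m/yr")
```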

Combining a Fault-Qk-Rate and a Background-Seismicity model gives us a complete Earthquake-Rate Model. Obviously this needs to be moment balanced with respect to the deformation model, and consistent with observed seismicity rates. Listed below are the versions of each model we plan/hope to use.

Fault-Qk-Rate Models:

Version 1.0: That used by NSHMP-2002 (which includes the WGCEP-2002 Fault-Qk-Rate Model for the Bay Area).

Version 2.0: A slight revision of version 1.0 based on new data (e.g., Fault Section Database 2.0) and whatever revisions to the segmentation/cascade models the advocates of this approach request.


Version 3.0: This is a more experimental model that attempts to relax segmentation and allow fault-to-fault jumps. The idea is to divide all faults into ~5 km sections, define all possible combinations of ruptures (including fault-to-fault jumps), and constrain the rate of each by satisfying fault slip rates, a regional Gutenberg-Richter constraint, historical seismicity, paleoseismic data, and whatever other information exists. The problem will be posed as a formal inversion that can be re-solved whenever new constraints become available. The solution space will define the range of viable models. More details can be found in Appendix B.

Notes

The fault-slip rates in versions 2 or 3 above could be based on one of the more sophisticated deformation models (Versions 3.0 or above) if available.

Do we apply aseismicity factors (from the Fault-Section database) as reduced rupture area or reduced effective slip rate (WGCEP-2002 assumed the former)?

Paul Somerville has volunteered to do a systematic evaluation of existing magnitude-area relationships as part of this WGCEP (a simple example of such a relationship is sketched below).
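For context only (this is not a WGCEP decision about which relationship to adopt), one widely used magnitude-area regression is the Wells and Coppersmith (1994) all-slip-type relation; a minimal sketch:

```python
import math

def wells_coppersmith_mag(area_km2: float) -> float:
    """Wells & Coppersmith (1994) all-slip-type regression: M = 4.07 + 0.98*log10(A)."""
    return 4.07 + 0.98 * math.log10(area_km2)

print(wells_coppersmith_mag(1000.0))  # roughly M 7.0 for a 1000 km^2 rupture
```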

Background-Seismicity Model

Version 1.0 The background seismicity model used by NSHMP-2002

Version 2.0: An update of Version 1.0 based on any revised catalog and/or analysis approach. For example, the background seismicity must not double count events relative to the Fault-Qk-Rate Model, so a modification of the latter will dictate some modification of the former. This version could also utilize the Bayesian approach of Wesson for assigning the probability that previous events occurred on the modeled faults (as in WGCEP-2002).

Version 3.0 The background seismicity model could, at least in part, be constrained by any off-fault deformation predicted by one of the more sophisticated deformation models (Versions 3.0 or above)

Questions/Issues:

Should we consider precarious-rock constraints with respect to off-fault seismicity (Brune thinks their existence implies the smoothing is too large)?


D. Time-Dependent UCERF(s)

Here we want to give time-dependent probabilities for the ruptures defined in the Earthquake Rate Model(s).

CEA wants 1- to 5-year probabilities, although we should be able to support other durations as well.

An important question here is the relative importance of statewide consistency versus legitimate complexity.

As outlined in Appendix E, UCERF 1.0 will be based on simple renewal-model time-dependent probabilities calculated for ruptures on Type A faults in the NSHMP-2002 model.
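For illustration, a minimal sketch of a renewal-model conditional probability using the BPT distribution, which is mathematically an inverse-Gaussian distribution (available as scipy.stats.invgauss); the mean recurrence, aperiodicity, and elapsed-time values below are placeholders, not UCERF parameters:

```python
from scipy.stats import invgauss

def bpt_conditional_prob(mean_recurrence, aperiodicity, time_since_last, duration):
    """P(event in [T, T+dT] | no event since the last one) under a BPT renewal model.

    A BPT distribution with mean mu and aperiodicity alpha corresponds, in scipy's
    parameterization, to invgauss(mu=alpha**2, scale=mu/alpha**2).
    """
    dist = invgauss(mu=aperiodicity**2, scale=mean_recurrence / aperiodicity**2)
    t = time_since_last
    return (dist.cdf(t + duration) - dist.cdf(t)) / (1.0 - dist.cdf(t))

# Placeholder example: 200-yr mean recurrence, aperiodicity 0.5, 150 yrs elapsed, 30-yr window.
print(bpt_conditional_prob(200.0, 0.5, 150.0, 30.0))
```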

The principal remaining difficulty seems to be computing conditional probabilities for both single- and multi-segment ruptures, or, even worse, when segmentation is relaxed altogether.

The ERF Framework outlined by Ned Field in Appendix B is an attempt to define a model architecture that can both solve these problems and accommodate a wide variety of viable approaches for making time-dependent forecasts (from models that invoke elastic rebound theory to those based on foreshock/aftershock statistics). It outlines how the time-dependent model for single versus multi-segment ruptures applied by WGCEP-2002 seems to be logically inconsistent (although it may give the "right" answer). It also outlines a simple solution that utilizes Bayes' theorem, where the Poisson probabilities constitute the priors, which get modified by a likelihood function (as a probability gain or loss). This framework has been presented to many, including the WGCEP ExCom, and so far so good. However, there may be a fatal flaw in the logic, so others are highly encouraged to develop alternative approaches.
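Schematically (as an illustration of the general idea only; the actual formulation is the one given in Appendix B), such an approach modifies a time-independent probability by a gain factor derived from additional information:

\[
P(\text{rupture} \mid \text{info}) \;\propto\; P_{\text{Poisson}}(\text{rupture}) \times L(\text{info} \mid \text{rupture}),
\qquad
P_{\text{updated}} = G \, P_{\text{Poisson}},
\]

where the probability gain \(G\) can be greater or less than one, depending on whether the added information (e.g., renewal-model recurrence behavior or foreshock/aftershock statistics) raises or lowers the forecast relative to the Poisson prior.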

Based on a July 18, 2005 meeting among the ExCom, we tentatively plan to pursue the following types of time-dependent calculations for UCERF ≥2.0:

BPT model (~same as lognormal model), e.g., WGCEP-1988
BPT model with "clock change", e.g., WGCEP-1990
BPT model with "clock change" & Rate&State, e.g., Stein et al. (1997)
BPT-step model (based on Coulomb calcs), e.g., WGCEP-2002
Rate changes from Coulomb and Rate&State
Rate changes from seismicity/aftershock/triggering statistics, e.g., the "Empirical" approach of WGCEP-2002, STEP applied to 1-5 yrs, or what Karen Felzer is working on

All of these should fit into the framework defined by Ned Field in Appendix B (e.g., the latter two of “  N” type models).

The BPT-step and clock-change models will require stressing-rate estimates for each fault. These will be provided as part of the deformation modeling effort (see Appendix C).

Note also that a "clock change" in the sense of modifying the date of the last event is not technically correct (that date didn't actually change). In the sense of modifying the expected date of the next event it is a viable concept, but care must be taken with respect to implied/required changes to the COV. Another way of modeling these effects is via time-dependent loading rates (e.g., effective slip-rate changes).
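For reference, the usual clock-change bookkeeping converts a static (e.g., Coulomb) stress change into an equivalent time shift via the fault's long-term stressing rate:

\[
\Delta t = \frac{\Delta\tau}{\dot{\tau}},
\]

where \(\Delta\tau\) is the coseismic stress change imparted to the receiving fault and \(\dot{\tau}\) is its stressing rate; a positive \(\Delta\tau\) advances the effective position in the renewal cycle by \(\Delta t\), and a negative one retards it. This is stated here only as standard background for the options listed above.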

It may be that ongoing work by Fred Pollitz, which includes viscoelastic effects, will be useful for time-dependent probabilities as well.

Implementation Agenda and Priorities

UCERF Version 1.0 by Nov. 30, 2005

Based on Earthquake Rate Model 1.0 (the existing NSHMP-2002 model) plus time-dependent probabilities (see Appendix E for details).

This model will be published with the RELM special issue of SRL.

UCERF Version 2.0 by Sept. 30, 2007

This will be based on Fault Model 2.#, including the SCEC CFM alternatives and any equivalent representations for northern California faults.

This will be based on Deformation Model 2.# (revised fault-section slip rates including aseismicity factors).

Note that Fault Model 2 and Deformation Model 2 will be one and the same as Fault-Section Database 2.

This will also be based on Earthquake-Rate-Model 2.0, which will be composed of Fault-Qk-Rate-Model 2 and Background-Seismicity-Model 2 (both described above).


Note that Earthquake-Rate-Model 2.0 is what will be delivered to the NSHMP for their 2007 update. They will need a preliminary version by June 2006, and the final version by June 2007.

It remains to be seen what time-dependent probability options will be included in UCERF 2.0, but we'll obviously build on what was done for UCERF 1.0.

Building UCERF 2.0 will constitute the bulk of effort by the WGCEP.

UCERF 3.0 by ????????

This model would represent having gone above and beyond the “call of duty”.

This might include the use of more sophisticated Deformation Models (versions 3.0 or above).

This might also include more sophisticated probability models that account for stress or seismicity-rate changes. This would in turn necessitate the ability to simulate catalogs if we want to go beyond next-event forecasts.

The point here is that we don’t want to guarantee delivery of any of these more speculative capabilities.

We need to focus on delivering UCERF 2.0 before we pursue these advanced capabilities (although we do need to plan ahead, and we should pursue the deformation models if deemed ready for prime time).

Unresolved Issues

(an ad hoc list)

 Fault-to-fault jumps

 1906 stress shadow interpretation (is it real?)

 Determination of seismogenic depths

 Treatment of offshore faults (do we have them covered?)

 Aseismic slip (where from & how to apply?)

 Recurrence-interval COVs

 The need for catalog simulations


Action Items

This section does not necessarily include tasks that have been accomplished or meetings that have occurred, but rather reflects what remains to be done (so elements may get cut out and others added in future versions of this document). Perhaps we need to document accomplished tasks elsewhere?

Tasks

Task 1 - Fault Section Database 2.0

Description: update sectioning, fault traces, slip rates, average dips, upper and lower seismogenic depths, and aseismicity factors. Chris Wills is in charge of most of these activities. See Appendix A for more details.

Subtasks:

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
1.1 | S. California Faults (incl. offshore) | Wills | | W2
1.2 | N. California Faults (incl. offshore) | Wills | | W1
1.3 | E. Cal., W. Ariz., W. Nev., Mexico, & Oregon faults | Wills | | W3 & W4
1.4 | Seismogenic Depth Est. | Wills | | W1 & W2
1.5 | Aseismicity Est. | Wills | | ??
1.6 | Oracle Database | Petersen | Haller, Perry, & Gupta | M4
1.7 | Interoperability | Field | Perry & Gupta | M4

How are we going to get consistent, statewide seismogenic depth estimates (start by finding out how Shaw is doing this)? What about aseismicity estimates?

Task 2 - Cascadia Model

Mark Petersen says that Art Frankel will organize and deliver all aspects of this model. It remains to be seen what, if any, resources he will need.

This relates to Workshop #6 below.


Task 3 - Deformation Models

Tom Parsons is in charge of this activity. See Appendix C for more details.

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
3.1 | Evaluate viable options and data/resources needed for their implementation | Parsons | | M1
3.2 | Begin statewide data compilation | Parsons | Agnew & Murray | M1

Task 4 - Paleo Sites Database (1.0 and 2.0)

Subtasks:

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
4.1 | Finalize Data Model | Field | Perry | M4
4.2 | Oracle implementation | Petersen | ? | M4
4.3 | Interoperability (incl. GUI) | Field | V. Gupta | M4
4.4 | Implement UCERF 1.0 Needs | Wills | Wills | M4
4.5 | Prepare for UCERF 2.0 Needs | Wills | Wills | M4
4.6 | Accommodate Deformation modeling needs | Wills | Wills? | M1

Task 5 - Weldon/Biasi/Fumal SAF Analysis

Continue developing their methodology with the goal of constraining viable rupture models for at least the southern SAF (See Appendix D for details).

Can this be extended to N. Cal (task 5.2)?

Ray Weldon is in charge of this.

This relates to Meeting 2 and Workshop 5 below

Task 6 - Historical Earthquake Catalog 2.0

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
6.1 | Compile statewide historical catalog | Wills | Tianqing Cao | M5
6.2 | Bayesian fault association? | Petersen | Wesson? | M5

Declustering and Bayesian association will depend on the form of the final model (we can't decide right now).

Task 7 - UCERF 1.0

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
7.1 | Decide on appropriate probability models to apply | Petersen & Field | Tianqing Cao | M3 & M4
7.2 | Object-oriented implementation | Field | Vipin |

Task 8 - Earthquake Rate Model 2

This is basically what the NSHMP will use in their next update. It involves constructing the Fault Qk Rate Model 2.0 and Background Seismicity Model 2.0.

Subtasks:

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
8.1 | Segment/Cascade models on type-A faults | Petersen | ? | M2, W5, & W7
8.2 | MFDs on type-B faults & type-C zones | Petersen | ? | W3, W4, W5, & W7
8.3 | Background GR-distribution parameters | Petersen | ? | M5 & W7

Task 9 - Develop Plan for Advanced Time Dependent Probabilities

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
9.1 | Explore & decide on viable options for time-dependent probabilities | Field | Field, Parsons, and Stein? | M3 & W6
9.2 | Design and Implement Object-Oriented Framework | Field | Vipin |
9.3 | CEPEC/NEPEC review | Field | MOC |
9.4 | Final documentation | Field | All EC |


9.1 is a huge task that will obviously involve all EC members to some extent.

Task 10 – MOC review of WGCEP progress

Every two months

Task 11 – Other possible tasks

# | Description | EC Contact | Person(s) In Charge | Related Workshops (W) & meetings (M)
11.1 | Define fault-to-fault rupture probabilities | Field | Harris? | W5 & W8
11.2 | Update M(A) relationships? | Field | Paul Somerville |


Workshops

# | Topic | Purpose | Target Date | Convened By | Related Tasks (T)
1 | Evaluate N. Cal. Faults | Evaluation & update of all Reference Geologic Fault Parameters for N. California | July 26, 2005 | Wills and Graymer | T1.2
2 | Evaluate S. Cal. Faults | Final, formal evaluation of CFM, including alternatives, and plan for assigning slip rates. Paleo Sites Data considered at this meeting? | w/ SCEC annual meeting? | Wills & Petersen | T1.1
3 | Pacific NW (including Cascadia & S. Oregon) fault hazards | NSHMP/CGS workshop, as requested by Mark Petersen | Jan., 2006 | Frankel & Petersen | T1.3, T2, T8.2
4 | Intermountain West Hazards | NSHMP/CGS workshop, as requested by Mark Petersen | March, 2006 | Petersen & ? | T1.3, T8.2
5 | Fault Segmentation & Cascade Models | To present and evaluate viable options for segmenting/cascading faults (or not) | ~Feb. 1, 2006 | Weldon, Field, and Petersen | T5, T8.1
6 | Time Dependent Earthquake Probabilities | To present and evaluate viable options for assigning time-dependent probabilities | Sept., 2006 | Field and ? | T9
7 | Review of Earthquake Rate Model 2.0 | Public presentation and review of the proposed NSHMP 2007 model, w/ CEPEC? | June 2006 | Petersen | T8
8 | Fault-to-fault jump constraints from dynamic rupture modeling? | To explore what dynamic rupture modelers can tell us about through-going rupture probabilities | Day after W5 | Field and/or Ruth Harris | T8


Meetings

# | Topic | Purpose | Target Date | Convened By | Critical Participants | Related Tasks (T)
1 | Deformation Modeling | To explore viable deformation models and needed resources | June 3rd, 2005 | Parsons | Bird, Hager, Field (& Thatcher or Segall or Agnew?) | T3.1, T3.2, & T4.6
2 | Weldon/Biasi/Fumal SAF Analysis | To further explore how this work can contribute to the WGCEP | July 14, 2005 (in Reno & conf call) | Weldon | Weldon, Biasi, & Field | T5, T8.1
3 | Viable Time-Dependent Probabilities for UCERF 1.0 | To explore what probability models could/should be applied | July 18th in Menlo Park | Field | EC members plus Biasi, Wesson, Boyd, Campbell, Tianqing Cao | T7.1, T9.1
4 | California Reference Geologic Parameters Database | To strategize Oracle implementation, interoperability, and data entry | July 12th | Perry & Field | Wills, Perry, Field, and V. Gupta | T1.6, T1.7, T4.1, T4.2, T4.3, T4.4, T4.5, T4.6, & T7.1
5 | Historical Earthquake Catalog | To strategize the development of a statewide catalog | ASAP | Wills? | Cao, Toppozada, & Bakun (& Kagan?) | T6.1, T6.2, T8.2, & T8.3


Appendix A – The California Reference Geologic Fault Parameters

The data model is available at: http://www.relm.org/models/WGCEP/#Anchor-Reference-35882

Project plan for continued development of a Reference Fault Parameter Database in support of WGCEP

Written by Chris Wills

Any SHA for California depends on geologic data on the seismic sources (mainly the major faults) of the State. Compiling data on faults and seismic parameters has been ongoing for over 20 years, with major contributions by Clark (1984) and Ziony and Yerkes (1985). Beginning in 1996, fault data compilation was combined with the National Seismic Hazard Maps Program (NSHMP), resulting in tables of fault parameters compiled by Petersen and others (1996) and modified by Bryant (2002). These tables of fault parameters for the NSHMP represent the best-estimate parameters for slip rates, earthquake recurrence, etc., based on workshops where geologic, geodetic, and seismic data were considered. As such, they could be directly used in calculating seismic shaking hazards.

In support of the NSHMP, the USGS developed a second database, the National Quaternary Fault and Fold Database (NQFFD). In contrast to the NSHMP tables, the NQFFD was intended to hold the entire range of geologic fault data, similar to the earlier work of Clark (1984) and Ziony and Yerkes (1985), rather than parameters that had been discussed in workshops, influenced by geodetic or seismic data, and adopted as "best-estimate" parameters for seismic shaking analysis. The NQFFD effort built upon the Southern California "Fault Activity Database", which attempted to compile in great detail all that was known about the faults in part of the Los Angeles basin, as well as to evaluate those data. More recently, geologists in the San Francisco Bay Area have begun work on a database intended to supplement the NQFFD by including more detail in the mapping of faults and in describing and locating in a Geographic Information System (GIS) the locations where fault features were observed.

Fault locations are also critical for SHA, and more sophisticated SHA can require more detailed mapping or 3-D representations of faults. Currently in California, there are the statewide geometric representation of faults for the NSHMP, maps showing the surface traces of faults (most recently the updated statewide map of Bryant, 2005), and the highly detailed 3-dimensional fault representation in the "Community Fault Model" (CFM) prepared by Shaw and Plesch.

To support the ongoing WGCEP, a new "California Reference Geologic Fault Parameter Database" is being developed. This includes 1) a Fault Section Database and 2) a Paleo Sites Database.

The Fault Section Database includes the following for all fault sections: the fault trace, average upper and lower seismogenic depth estimates, average dip estimate, average rake estimate, average aseismic slip factor estimate, and an average long-term slip rate estimate. A complete fault model is a list of fault sections. There will be alternative fault models to the extent that some fault sections are speculative. The slip-rate estimates represent consensus and expert judgment using a variety of available constraints. Fault Section Database 2.0 will constitute a viable deformation model in UCERF 2.0, with the associated Earthquake Rate Model 2.0 likely being what's used in the next NSHMP model (fault sections with highly uncertain slip rates might be removed for this purpose). The fault geometries in Fault Section Database 2.0 will be provided to the deformation modelers (they will ignore the section slip rates in favor of "pure" geologic slip-rate estimates at points along the faults from the Paleo Sites Database; these deformation models, which will include GPS data, will hopefully provide alternative section slip-rate estimates that could constitute the basis of alternative Earthquake Rate Models).

The Paleo Sites Database contains total fault offset or slip-rate estimates associated with specific time spans at various points on the faults. It also contains inferred dates and amounts of slip for specific events, as well as lists thereof representing viable interpretations of the event history at the site.

The current tasks associated with these databases are as follows:

1. Update all appropriate aspects of the 2002 NSHMP Fault Section Database.

2. Create an alternative set of slip rates in the Fault Section Database that are appropriate for deformation modeling (pure geologic rate estimates uninfluenced by other considerations). Peter Bird will likely be influential in this compilation. An alternative to this is to accommodate such information in the Paleo Sites Database (made feasible by the latter accommodating slip-rate estimates between two points along a fault).

3. Implement the Paleo Sites Database in Oracle with a GUI-based data input/output interface, and begin data collection.

Details associated with the above databases can be found in the documents describing the associated data models, and please note that this activity is tightly coordinated with the NSHMP program in Golden.

Work Plan

Under the leadership of Chris Wills, the following activities are planned:

1) Work with John Shaw and Andreas Plesch to incorporate the SCEC Community Fault Model into the Fault Section Database. Shaw and Plesch have completed a "rectilinear" representation of the CFM (CFM-R). As a first step in creating a fault model for all of California, CGS has acquired that version of the CFM and imported it into a standard GIS. We have also met at the SCEC annual meeting to compare the CFM-R with the detailed CFM created with triangular surfaces and with the NSHMP fault geometry. With the CFM-R in a GIS, we can compare the CFM-R representation with the previous model and can use the coordinates and dip values from the CFM-R directly in updating the Fault Section Database.


Tasks under activity 1 include:

· Acquire the CFM-R from John Shaw and Andreas Plesch in a format that includes the coordinates for the four corners of each rectangular fault section, as well as the upper and lower extent of the seismogenic fault, and the dip of the section. (Task completed as of 10/31/05)

· Acquire alternative models for parts of CFM-R in the same format.

· Translate CFM-R into a standard GIS. (Task completed as of 10/31/05)

· Compare CFM-R to the 2002 CGS/USGS model

· Correlate rectangular sections from CFM-R with sections from the 2002 model.

· Acquire new geometric representation of faults in the San Francisco Bay area.

· Assign slip rates for each section based on the 2002 model, modified by input from workshops in Menlo Park in July 2005, Palm Springs in September 2005, and additional workshops for Cascadia and the Intermountain West (including the eastern California shear zone).

· Deliver Fault-Section Database 2.0

2) Revise the fault geometries in the Fault Section Database for those parts of California outside the current CFM to ensure compatibility with the "rectilinear" CFM, after verifying the level of detail in that version is appropriate.

3) Update other elements of the Fault Section Database (e.g., slip rates, aseismicity estimates). This will develop from a series of workshops with the geologists most knowledgeable about faults in Northern California (held on July 26, 2005), Southern California (held on September 11, 2005), the Basin and Range province (to be organized by the NSHMP as an "Intermountain West" conference), and the Cascadia Subduction Zone and related faults (also to be organized by the NSHMP as a "Pacific Northwest" workshop).

4) Implement the Paleo Sites Database. Vipin Gupta and Ned Field will work with Golden to get the database implemented in Oracle with a GUI-based interface. Data for this database will be discussed at the workshops above, with further refinements of the database structure and user interface likely to follow from the geologists' input. The total displacement (or slip-rate) estimates will be given priority with respect to data entry because they are needed by the deformation modelers ASAP. The dates and amounts of slip in previous events will be entered ASAP. The following individuals have received funding from SCEC to contribute to this data-gathering effort: James Dolan, Tom Rockwell, and Lisa Grant; and the following have received such funding from NEHRP: Sue Perry and Bill Bryant.


Appendix B – A Framework for Developing a Time-Dependent ERF

By Edward (Ned) Field

Introduction

This document outlines an attempt to establish a simple framework that can accommodate a variety of approaches to modeling time-dependent earthquake probabilities. There is an emphasis on modularity and extensibility, so that simple, existing approaches can be applied now, yet more sophisticated approaches can also be added later. A primary goal is to relax the assumption of persistent rupture boundaries (segmentation) and to allow fault-to-fault jumps in a time-dependent framework (no solution to this problem has previously been articulated, at least not as far as the author is aware).

Long-Term Earthquake Rate Model

The purpose of this model is to give the long-term rate of all possible earthquake ruptures (magnitude, rupture surface, and average rake) for the entire region. Of course "all possible" might include an infinite number, so what we mean is all those at some level of discretization (and above some magnitude cutoff) that is sufficient for hazard and loss estimation. Although a tectonic region may evolve (with old faults healing and new faults being created), the concept of a long-term model is legitimate in that there will be some absolute rate of ruptures over any given time span. By "long" we simply mean long enough to capture the statistics of events relevant to the forecast duration, and short enough that the system does not significantly change.

Known faults provide a means of identifying where future ruptures will occur. Given the fault slip rate, seismogenic depths, and knowledge or assumptions about the spatial extent of ruptures and/or magnitude-frequency distributions, it is possible to solve for the relative rate of all ruptures on each fault. We can write the rate of these discrete events as:

FR_{f,m,r} = rate of the r-th rupture for the m-th magnitude on the f-th fault (where the possible rupture surfaces will depend on the magnitude)

Of course known faults do not provide the location of all possible ruptures, so we need to account for off-fault seismicity as well (often referred to as "background" seismicity). Such seismicity is usually modeled with a grid of Gutenberg-Richter (GR) sources, where the a-values and perhaps b-values are spatially variable. Regardless of the magnitude-frequency distribution, we can write the rate of background seismicity as:

BR_{i,j,m} = background rate of the m-th magnitude at the i-th latitude and j-th longitude

An example of background seismicity from the National Seismic Hazard Mapping Program (NSHMP) 1996 model is shown in Figure 1, along with their fault ruptures. The spatial variation in their background rates is determined from smoothed historical seismicity, although stressing rates from a deformation model could be used as well.


One problem with using gridded-seismicity models is that they don't explicitly provide a finite rupture surface for the events, as needed for seismic-hazard analysis (SHA), but rather they provide hypocentral locations (and treating ruptures as point sources underestimates hazard). One must, therefore, either construct and consider all possible finite rupture surfaces, or assign a single surface at random as done in the NSHMP 1996 and 2002 models. This highlights one of the primary advantages of fault-based models, as they provide valuable information about rupture surfaces.

Figure 1 . Fault (red) and background-grid (gray) rupture sources from the 1996 NSHMP model for southern California (the darkness of the grid points is proportional to the a-value of the background GR seismicity).

Relaxing The Assumption of Fixed Rupture Boundaries:

Previous WGCEPs have assumed, at least for time-dependent probabilities, that faults are segmented (an example for the Hayward/Rodgers-Creek from the 2002 WGCEP is given below). This means that ruptures can occur on one or perhaps more segments, but cannot occur on just part of any segment. Previous models have also precluded ruptures that jump from one fault to another, as occurred in both the 1992 M 7.2 Landers and the 2002 M 7.9 Denali earthquakes. For the latter, rupture began on the Susitna Glacier Fault, jumped onto the Denali fault, and then jumped off onto the Totschunda fault. The following outlines a recipe for building a model that relaxes these assumptions. It is basically a hybrid between the models outlined by Field et al. (1999, BSSA, 89, 559-578) and Andrews and Schwerer (2000, BSSA, 90, 1498-1506), although the latter deserves most of the credit.

We start with a model of fault sections and slip rates, such as that shown in Figure 1 (although applied statewide here). Note that fault sectioning has been applied only for descriptive purposes, and should not be interpreted as rupture segmentation. The first task is to subsection these faults into smaller increments (~5 km lengths) such that further sub-sectioning would not influence SHA (from here on we'll refer to these ~5 km subsections as "sections").

Following Andrews and Schwerer (2000), but for a larger region and using smaller fault sections, we then define every possible earthquake rupture involving two or more contiguous sections (at least two because we consider only events that rupture the entire seismogenic thickness). We use an indicator matrix, I_{mi}, containing 1s and 0s to denote whether the i-th section is involved in the m-th rupture ("1" if yes and zero otherwise). The M ruptures so defined can include those that involve fault-to-fault jumps if deemed appropriate (e.g., for faults that are separated by less than 10 km). We then use the slip rate on each section to constrain the rate of each rupture:

Σ_m I_{mi} u_m f_m = r_i

where f_m is the desired long-term rate of the m-th rupture, u_m is the average slip of the m-th rupture, and r_i is the known slip rate of the i-th section.

This system of equations can be solved for all f_m. However, it is underdetermined, so an infinite number of solutions exist. Our task now is simply to add equations in order to achieve a unique solution, or at least to significantly narrow the solution space.

Again, following Andrews and Schwerer (2000), we can add positivity constraints for all f_m, as well as equations representing the constraint that rates for the region as a whole must exhibit a Gutenberg-Richter distribution (taking into consideration uncertainties in the latter at the largest magnitudes). This is the extent to which Andrews and Schwerer (2000) took their model, and they concluded that additional constraints would be needed to get reliable estimates of the rate of any particular event. It is doubtful that our extension of their San Francisco Bay Area model to the entire state (including significantly smaller fault sections) will change that conclusion. Therefore, we need to apply whatever additional constraints we can find to narrow the solution space as much as possible.
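To make this concrete, the following is a minimal sketch (in Python, using NumPy/SciPy) of how such a constrained inversion could be set up. The function and variable names, the magnitude-bin half-width, and the assumption that rupture magnitudes are precomputed are all illustrative, not part of the WGCEP plan, and a real inversion would involve many more constraint types and weights.

    # Minimal sketch of the slip-rate-constrained inversion for long-term
    # rupture rates (in the spirit of Andrews and Schwerer, 2000). Illustrative only.
    import numpy as np
    from scipy.optimize import nnls

    def solve_rupture_rates(I, u, r, mags=None, gr_mags=None, gr_rates=None, w_gr=1.0):
        """
        I        : (M, S) indicator matrix; I[m, i] = 1 if section i is in rupture m
        u        : (M,) average slip of each rupture (m)
        r        : (S,) slip rate of each section (m/yr)
        mags     : (M,) magnitude of each rupture (assumed precomputed from a
                   magnitude-area relation); only needed for the GR constraint
        gr_mags  : centers of regional magnitude bins (optional GR constraint)
        gr_rates : target regional rate in each magnitude bin
        Returns f: (M,) non-negative long-term rate of each rupture (1/yr)
        """
        # Slip-rate constraints: sum_m I[m, i] * u[m] * f[m] = r[i] for each section i
        A = (I * u[:, None]).T            # shape (S, M)
        d = np.asarray(r, dtype=float)

        if gr_mags is not None:
            # Optional regional Gutenberg-Richter constraints: the total rate of
            # ruptures falling in each magnitude bin should match gr_rates
            G = np.array([(np.abs(mags - m0) < 0.05).astype(float) for m0 in gr_mags])
            A = np.vstack([A, w_gr * G])
            d = np.concatenate([d, w_gr * np.asarray(gr_rates, dtype=float)])

        # Non-negative least squares enforces the positivity constraint f >= 0
        f, _ = nnls(A, d)
        return f

Additional constraints (paleoseismic event rates, penalties on particular fault-to-fault jumps, etc.) would simply append further rows to A and d in the same way.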

One approach is to penalize any ruptures that are thought to be less probable. For example, there might be reasons to believe that ruptures do not pass certain points on a fault.

Such segmentation could easily be imposed. In addition, dynamic rupture modeling might be able to support the assertion that certain fault-to-fault rupture geometries are less probable. It remains to be seen how best to apply such constraints in the inversion, but one simple approach would be to force these rates to be some fraction of their otherwise unconstrained values.

We can also add equations representing constraints on rupture rates from historical seismicity (e.g., Parkfield) or from paleoseismic studies at specific fault locations. Furthermore, the systematic interpretation of all southern San Andreas data by Weldon et al. (2004) will be particularly important to include. Finally, assumptions could be made (and associated constraints applied) as to the magnitude-frequency distribution of particular faults (e.g., Field et al. (1999) assumed a percentage of characteristic versus Gutenberg-Richter events on all faults, and tuned this percentage to match the regional rates). However, to the extent that fault-to-fault jumps are allowed, it becomes difficult to define exactly what "the fault" is when it comes to an assumed magnitude-frequency distribution.

Also following Field et al. (1999), off-fault seismicity can be modeled with a Gutenberg-Richter distribution that makes up for the difference between the regional moment rate and the fault-model moment rate, and where the maximum magnitude is uniquely defined by matching the total regional rate of events above the lowest magnitude.

The important point is that we define all possible fault rupture events (with no a priori segment boundaries and allowing fault-to-fault jumps) and solve for the rates via a linear inversion that applies all presently available constraints (including strict segmentation if appropriate). We will thereby have a formal, tractable, mathematical framework for adding additional constraints when they become available in the future.

One might be concerned that the solution space will still be large after applying all available constraints. However, this is exactly what we need to know for SHA, as the solution space will represent all viable models, which can and should be accommodated as epistemic uncertainties in the analysis. The size of this inversion is potentially problematic, so we may need to find ways of simplifying the problem.

In conclusion, the long-term model simply represents an answer to the question: Over a long period of time, what is the rate of every possible discrete rupture? There will, of course, be strong differences of opinion on what such a model should look like, especially when it comes to whether strict fault segmentation is enforced and whether and how different faults can link up to produce even larger events. However, the fact that there is some long-term rate of events cannot be disputed, especially if we specify what that long time span actually is. The question then becomes, given a viable long-term model (of which there will be many for reasons just stated), how do we make a credible time-dependent forecast for a future time span?

Rate Changes in the Long-Term Model

Just as everyone would agree that there is some rate of events over a long period of time (given by the long-term model), so too would they agree that these rates vary to some degree on shorter time scales. Because large events occur infrequently, we are only able to demonstrate statistically significant rate changes for all events above some magnitude threshold, which we will take as M=5 here since this is the threshold of interest to hazard.

Suppose we have a model that can predict the average rate change for M≥5 events, relative to the long-term model, over a specified time span and as a function of space:

ΔN_{i,j}(timeSpan)

where i and j are latitude and longitude indices, and timeSpan includes both a start time and a duration. Then we can simply apply these rate changes to all ruptures in the long-term model in order to make a time-dependent forecast for that specific time span. The only slightly tricky issue is how rate changes are assigned to large fault ruptures. If we assume a uniform probability distribution of hypocenters, then we can simply apply the average ΔN_{i,j} predicted over the entire rupture surface.
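As a concrete (and hedged) illustration of how such a gridded rate-change factor might be applied to the long-term model, the sketch below scales background sources cell by cell and scales each fault rupture by the factor averaged over the cells its surface covers. The data structures are assumptions made for the example, not a prescribed design.

    # Illustrative sketch only: apply a gridded rate-change factor dN[i, j]
    # (for a given time span) to a long-term model.
    import numpy as np

    def apply_rate_change(background_rates, fault_rates, rupture_cells, dN):
        """
        background_rates : dict {(i, j, m): rate} for gridded GR sources
        fault_rates      : dict {rup_id: rate} of long-term fault-rupture rates
        rupture_cells    : dict {rup_id: [(i, j), ...]}, the grid cells covered
                           by each rupture surface (assumed precomputed)
        dN               : 2-D array of rate-change factors for the time span
        """
        new_bg = {(i, j, m): rate * dN[i, j]
                  for (i, j, m), rate in background_rates.items()}

        # Averaging the factor over a rupture's surface corresponds to assuming
        # a uniform probability distribution of hypocenters, as described above.
        new_fault = {rup: rate * np.mean([dN[i, j] for i, j in rupture_cells[rup]])
                     for rup, rate in fault_rates.items()}
        return new_bg, new_fault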

The obvious assumption here is that the change in rate of M≥5 events implies an equivalent change in the rate of larger events. This seems reasonable in that large events must start out as small events, so an increase of smaller events implies an increased probability of large-event nucleation. This is precisely the assumption made in the "empirical" model applied by the 2002 WGCEP (where all long-term rates were scaled down by an amount determined by temporal extrapolation of the post-1906 regional seismicity-rate reduction). This assumption is also implicit in the time-dependent models of Stein and others (e.g., 1997, Geophys. J. Int., 128, 594-604), and in the Short-Term Earthquake Probability (STEP) model of Gerstenberger et al. (2005, Nature, 435, 328-331), which uses foreshock/aftershock statistics. The wide application of this assumption therefore implies that it is certainly a viable approach for time-dependent earthquake forecasts.

Of course the challenge is to come up with credible models of ΔN_{i,j}(timeSpan). Here again, there is likely to be a wide range of options (three having just been mentioned in the previous paragraph). Fortunately we can, in principle, apply any that are available, including alarm-based predictions (where a declaration is made that an earthquake will occur in some polygon by some date), as long as the probability of being wrong is quantified. Care will be needed to make sure the ΔN_{i,j}(timeSpan) model is consistent with the long-term model (e.g., moment rates must balance over the long term, or, stated another way, no double counting).

For models such as STEP that predict changes that are a function of magnitude as well (e.g., by virtue of b-value changes), rates should be modified on a magnitude-by-magnitude basis using:

ΔN_{i,j,m}(timeSpan)

(where a subscript for magnitude has been added). The advantage here, relative to the current STEP implementation, is that the upper magnitude cutoff is not arbitrary and the largest ruptures are not treated as point sources, but rather come from the fault-based ruptures defined in the long-term model.

Some might question the wisdom of applying the time-dependent modifications of the background model described here. The answer simply comes down to whether, after considering the use of the final outcome and potential problems with the model, we are better off applying it than doing nothing. For example, if the California Earthquake Authority (CEA) wants forecasts updated on a yearly basis, should we or should we not include foreshock/aftershock statistics? We obviously have to apply the approach presented here to some extent if we are to model the post-1906 seismicity lull as was done by the 2002 WGCEP.

Time-Dependent Conditional Probabilities

From the long-term model we have the rate of all possible earthquake ruptures (FR_{f,m,r} and BR_{i,j,m}), perhaps modified (or not) by a rate-change model ΔN_{i,j}(timeSpan) as just described. The question here is how to modify the fault-rupture probabilities based on information such as the date of, or amount of slip in, previous events.

As demonstrated by the 1988 WGCEP, computing conditional probabilities is relatively straightforward if strict segmentation is applied (ruptures are confined to specific segments, with no possibility of overlap or multi-segment ruptures). Using the average recurrence interval (perhaps determined by moment balancing), a coefficient of variation, the date of the most recent event, and an assumed distribution (e.g., lognormal), one can easily compute the conditional probability of occurrence for a given time span. Unfortunately the procedure is not so simple if one allows multi-segment ruptures (sometimes referred to as cascade events).

The WGCEP-2002 approach:

The long-term model developed by the 2002 WGCEP resulted in a moment-balanced relative frequency of occurrence for each single- and multi-segment rupture combination on each fault. A simplified example of the possible ruptures and their frequencies of occurrence for the Hayward/Rodgers-Creek fault is shown in Figure 2 (see the caption for details, and note that none of the simplifications influence the conclusions drawn here). Again, the frequency of each rupture has been constrained to honor the long-term moment rate of each fault section.

Figure 2. Example of a long-term, moment-balanced rupture model for the Hayward/Rodgers-Creek fault, obtained from the WGCEP-2002 Fortran code. The image, taken from the WGCEP-2002 report, shows the segments on the left and the possible ruptures on the right. The tables below give information including the rate of each earthquake rupture and the rate at which each segment ruptures. Note that this example represents a single iteration from their code, where modal values in the input file were given exclusive weight, aseismicity parameters were set to 1.0, no GR-tail seismicity was included, sigma of the characteristic magnitude-frequency distribution was set to zero, and the floating ruptures were given zero weight (specifically, the line in the input file that specified the segmentation model given exclusive weight read: "0.11 0.56 0.26 0.07 0.00 0.00 0.00 0.00 0.00 0.00 model-A-modified").

Segment Info

Name   Length (km)   Width (km)   Slip rate (mm/yr)   Rupture rate (/yr)   Date of last event
HS     52.54         12           9                   3.87e-3              1868
HN     34.89         12           9                   3.95e-3              1702
RC     62.55         12           9                   4.08e-3              1740

Rupture Info

Name        Mag     Rate (/yr)
HS          7.00    1.28e-3
HN          6.82    1.02e-3
HS+HN       7.22    2.16e-3
RC          7.07    3.32e-3
HN+RC       7.27    0.32e-3
HS+HN+RC    7.46    0.44e-3
floating    6.9     0


Let's now look at how they computed conditional probabilities for each earthquake rupture. We will focus here on their Brownian Passage Time (BPT) model, but the basic conclusions drawn will apply to their other two conditional probability models as well (the BPT-step and time-predictable models). They used the BPT model to compute the conditional probability that each segment will rupture using the long-term rupture rate for each segment, which is simply the sum of the rates of all ruptures that include that segment, and the date of the last event on the segment. They then partitioned each segment probability among the events that include that segment, along with the probability that the rupture would nucleate in that segment, to give the conditional probability of each rupture. In other words, each segment was treated as a point process. Therefore, if a segment has just ruptured by itself, for example, there should be a near-zero probability that it will rupture again soon after (according to their renewal model).

However, there is nothing in their model stopping a neighboring segment from triggering a multi-segment event that re-ruptures the segment that just ruptured. In other words, one point process can be reset by a different, independent point process, which seems to violate the concept of a point process.

This issue is illustrated by the Monte Carlo simulations shown in Figure 3, where WGCEP-2002 probabilities of rupture were computed for 1-year time intervals, ruptures were allowed to occur at random according to their probabilities for that year, dates of last events were updated on each relevant segment when a rupture occurred, and probabilities were updated for the next year. This process was repeated for 10,000 1-year time steps. The simulation results show that segment ruptures occur more often than they should soon after previous ruptures, which is at odds with the model used to predict the segment probabilities in the first place. Thus, there seems to be a logical inconsistency in the WGCEP-2002 approach. If one does not allow multi-segment ruptures, then the simulated recurrence intervals match the BPT distributions exactly (as expected). However, strict segmentation is exactly what we are trying to get away from. Another way to state the problem with the WGCEP-2002 approach is that the probability of one segment triggering a rupture that extends into a neighboring segment is completely independent of when that neighboring segment last ruptured.
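The following Python sketch reproduces the spirit of this experiment on the simplified Hayward/Rodgers-Creek example of Figure 2. The way segment probabilities are partitioned among ruptures here is a deliberate simplification of the WGCEP-2002 scheme, and the rates, dates, and aperiodicity are adapted from Figure 2 rather than taken from the actual Fortran code.

    # Simplified sketch (not the WGCEP-2002 code) of the Monte Carlo experiment:
    # yearly BPT segment probabilities are shared among single- and multi-segment
    # ruptures, ruptures are sampled, and dates of last event are reset on every
    # segment a sampled rupture touches.
    import numpy as np
    from scipy.stats import invgauss

    def bpt_cond_prob(elapsed, mean, alpha, dt=1.0):
        """P(rupture in next dt | quiet for 'elapsed' yr), for BPT(mean, alpha)."""
        dist = invgauss(alpha**2, scale=mean / alpha**2)   # mean and aperiodicity
        F = dist.cdf
        return (F(elapsed + dt) - F(elapsed)) / max(1.0 - F(elapsed), 1e-12)

    rng = np.random.default_rng(0)
    seg_rate = {"HS": 3.87e-3, "HN": 3.95e-3, "RC": 4.08e-3}       # from Figure 2
    ruptures = {("HS",): 1.28e-3, ("HN",): 1.02e-3, ("HS", "HN"): 2.16e-3,
                ("RC",): 3.32e-3, ("HN", "RC"): 0.32e-3, ("HS", "HN", "RC"): 0.44e-3}
    last_event = {"HS": -132.0, "HN": -298.0, "RC": -260.0}        # yr before a nominal 2000 start
    alpha = 0.5
    intervals = []

    for year in range(10000):
        for rup, rate in ruptures.items():
            # Each rupture gets its share (rate / segment rate) of the BPT segment
            # probability, averaged over the segments it spans (a simplification).
            prob = np.mean([bpt_cond_prob(year - last_event[s], 1.0 / seg_rate[s], alpha)
                            * rate / seg_rate[s] for s in rup])
            if rng.random() < prob:
                for s in rup:
                    intervals.append(year - last_event[s])
                    last_event[s] = float(year)    # reset every segment that ruptured

    # 'intervals' can then be compared with the BPT distribution itself, as in
    # Figure 3; short recurrence intervals are over-represented.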


Figure 3. The BPT distribution of recurrence intervals used to compute segment rupture probabilities (red line), as well as the distribution of segment recurrence intervals obtained by simulating ruptures according to the WGCEP-2002 methodology (gray bins). The BPT probabilities assume a coefficient of variation of 0.5 and the segment rates given in Figure 2. Note the relatively high rate of short recurrence intervals in the simulations.


An Alternative Approach:

What we seem to lack is a logically consistent way to apply time-dependent, conditional probabilities where both single and various multi-segment ruptures are allowed. The first question we should ask is whether a strict segmentation model might be adequate (making conditional probability calculations trivial). Certainly the segmentation model of the 1988 WGCEP is inconsistent with the two most important historical earthquakes, namely the 1857 and 1906 San Andreas Fault (SAF) events. The only salvation for strict segmentation is if those historical events represent the only ruptures that occur on those sections of the SAF (or if other ruptures can be safely ignored). The best test of this hypothesis, at least that this author is aware of, is the paleoseismic data analysis of Weldon and others for the southern SAF (e.g., 2005, Science, 308, 966). They use both dates and amounts of slip inferred for previous events at points along the fault, along with Bayesian inference, to constrain the spatial and temporal distribution of previous events. Uncertainties inevitably allow more than one interpretation, two of which are shown in Figure 4 (the image and caption were provided by Weldon via personal communication). Figure 4A represents an interpretation that is largely consistent with a two-segment model, with 1857-type ruptures to the north and separate ruptures to the south. Figure 4B, which is also consistent with the data, is an alternative interpretation where no systematic rupture boundaries are apparent (not even if multi-segment ruptures are acknowledged). Thus, it appears that one viable interpretation is that no meaningful, systematic rupture boundaries can be identified on the one fault that has been studied the most.


Figure 4. Rupture scenarios for the southern San Andreas fault. Vertical bars represent the age range of paleoseismic events recognized to date, and horizontal bars represent possible ruptures. Gray shows regions/times without data. In (A) all events seen on the northern 2/3 of the fault are constrained to be as much like the 1857 AD rupture as possible, and all other sites are grouped to produce ruptures that span the southern 1/2 of the fault; this model is referred to as the North Bend/South Bend scenario. In (B) ruptures are constructed to be as varied as possible, while still satisfying the existing age data.

The results for the southern SAF imply that we need a logically consistent way of applying conditional probabilities not only where multi-segment ruptures are allowed, but also where no persistent rupture boundaries exist. The most general case would be where a variety of rupture sizes are allowed to occur anywhere along the fault (as outlined in the previous section for the long-term model). Again, the question is how to sensibly apply conditional probabilities in this situation. More specifically, given a viable interpretation of the past history of events on the fault (of which Figure 4 shows only two of perhaps thousands for the southern SAF), how do we compute a conditional probability for each possible future rupture? The challenge arises from the fact that these probabilities are not independent, as the degree to which one rupture is more likely might imply that another, somewhat overlapping rupture is less likely.

The approach outlined below makes use of Bayesian methods. Before introducing the proposed solution, however, it’s probably worth quoting from an excellent manuscript that’s available on-line (D’Agostini, 2003, http://www-zeus.roma1.infn.it/~agostini/rpp/):

‘… two crucial aspects of the Bayesian approach [are]:

1) As it is used in everyday language, the term probability has the intuitive meaning of “the degree of belief that an event will occur.”

2) Probability depends on our state of knowledge, which is usually different for different people. In other words, probability is unavoidably subjective.’


The concept of subjective probabilities is not new to SHA. Indeed, the different branches of a logic tree, which represent epistemic uncertainties, explicitly embody such differences of opinion (where at most only one can be correct).

Our present understanding of what influences the timing and growth of ruptures in complex fault systems suggests that a simple model (e.g., strict segmentation with point renewal processes) will not be reliable. On the other hand, our most sophisticated physics-based models (Ward, Rundle) are both too simplistic and have too many free parameters to be of direct use in forecasting (although they probably constitute our best hope for the future). Society nevertheless has to make informed decisions, so we are obligated to assimilate our present knowledge (however imperfect) and make the best forecast we can, including an honest assessment of the uncertainties therein. It is hoped that the Bayesian approach outlined below constitutes a credible framework for doing just this.

(note that in what follows it is assumed that the reader has a basic understanding of Bayesian methods; the above reference provides an excellent introduction and overview if not).

The long-term fault model given above, FR_{f,m,r}, gives the rate of each fault rupture over a long period of time (perhaps modified by ΔN_{i,j}(timeSpan) if appropriate). Therefore, if we have no additional information, then the consequent Poisson probabilities, which we write as P_pois(FR_{f,m,r}), represent the best forecast we can make. What we would like to do is improve this forecast based on additional information or data, written generically as "D". Fortunately, Bayes' theorem gives us exactly what we need to make such an improvement. Specifically:

P(FR_{f,m,r} | D) = P_pois(FR_{f,m,r}) P(D | FR_{f,m,r}) / Σ_{f,m,r} [ P_pois(FR_{f,m,r}) P(D | FR_{f,m,r}) ]

This says that the relative probability of a given rupture, FR_{f,m,r}, is just the Poisson probability of that rupture multiplied by the probability of the data being associated with that rupture (normalized over all ruptures).

One immediate issue is that the additional data ( D ) might not be available for all possible ruptures. However, if it’s available for a sufficiently large set, such that the collective probability of these ruptures doesn’t change (only their relative values), then I think the above can be applied. For example, we might be confident that the total probability of a rupture on all faults with date-of-last-event information is 0.1; application of Bayes’ theorem would constitute a rational approach for modifying the relative long-term probabilities that each will be the one that occurs.
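A minimal sketch of this update, under the assumption just described (that the collective probability of the ruptures with data D is held fixed while only their relative values change); the names are illustrative.

    # Hedged sketch of the Bayesian update: rescale the relative Poisson
    # probabilities of the ruptures for which extra data D are available,
    # while preserving their collective probability.
    import numpy as np

    def bayes_update(poisson_probs, likelihoods):
        """
        poisson_probs : array of P_pois(FR) for the ruptures with data D
        likelihoods   : array of P(D | FR) for those same ruptures
        Returns updated probabilities with the same total as the input set.
        """
        total = poisson_probs.sum()               # e.g., the 0.1 in the example above
        weighted = poisson_probs * likelihoods
        return total * weighted / weighted.sum()  # Bayes' theorem, renormalized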

If the above approach is legitimate, then the challenge becomes finding appropriate model(s) for the conditional probability (also called the likelihood):

P(D | FR_{f,m,r})

Again, this simply gives the probability that the data is consistent with the occurrence of that rupture (and not that the data is caused by the rupture). This likelihood is essentially used in Bayes' theorem to assign a probability gain relative to the long-term Poisson probabilities.


For example, in the spirit of the time-dependent renewal models applied by previous WGCEPs, we might want to apply a model that captures the notion that events are less probable where one has recently occurred, and more probable where one has not (in the seismic gaps). Looking at Figure 4, this reasoning would identify the southernmost part of the southern SAF as the most likely place for a large rupture.

Recall that in the long-term model we have defined the rupture extent of every possible earthquake (as well as its magnitude, from a magnitude-area relationship, which uniquely defines the average slip as well). Suppose we could look in a crystal ball and know exactly which one would occur next, but we were left with figuring out when it would occur. The following are two approaches for defining a probability distribution for the next occurrence (and therefore could constitute the basis for the likelihood function in Bayes’ theorem above):

Method 1 (Average Slip-Predictable Model):

Here we say the best estimate of the date of the next event is when sufficient moment has accumulated since the last event(s) to match the moment, M_o, of the next event (the latter coming from the long-term model). If v_i, t_o^i, and A_i are the slip rate, date of last event, and area of each fault section comprising the rupture, then this best-estimate time of occurrence, t, satisfies the following:

M_o = μ Σ_{i=0}^{I} v_i A_i ( t − t_o^i )

where μ is the shear modulus. Solving for t we have:

t = [ M_o / μ + Σ_{i=0}^{I} v_i A_i t_o^i ] / [ Σ_{i=0}^{I} v_i A_i ]

or

t = Δt + t̄_o

where

Δt = M_o / ( μ Σ_{i=0}^{I} v_i A_i )    and    t̄_o = [ Σ_{i=0}^{I} v_i A_i t_o^i ] / [ Σ_{i=0}^{I} v_i A_i ]

Δt is the time needed to accumulate the required moment, and t̄_o is a weighted-average time of the last event(s) – where the plural indicates that the date of last event, t_o^i, may vary between sections.

Note that Δt should not be interpreted as the recurrence interval of this same rupture occurring repeatedly (characteristic earthquakes), as we have made no assumptions regarding persistent rupture boundaries. The expression for Δt above shows that the time of the next event depends on the size, or more specifically the amount of slip, of the next event. The implicit assumption is, therefore, that a rupture brings each fault section, on average, back to some base level of stress (as in a slip-predictable model), regardless of the amount of slip in previous event(s). This model will lead us astray if each large event does not, on average, exhaust the stress available to produce subsequent events.
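A direct, illustrative translation of Method 1 into code (assumed array inputs over the sections making up the hypothesized rupture; the shear-modulus value is a typical assumption):

    # Illustrative implementation of Method 1 (average slip-predictable) as
    # defined above; mu is the shear modulus.
    import numpy as np

    def method1_next_event_time(M_o, v, A, t_o, mu=3.0e10):
        """
        M_o : seismic moment of the next (hypothesized) rupture (N·m)
        v   : slip rate of each section (m/yr)
        A   : area of each section (m^2)
        t_o : date of last event on each section (yr)
        Returns (t, dt, t_o_bar) as in the expressions above.
        """
        loading = np.sum(mu * v * A)                    # moment accumulation rate (N·m/yr)
        dt = M_o / loading                              # time to accumulate M_o
        t_o_bar = np.sum(v * A * t_o) / np.sum(v * A)   # weighted date of last event(s)
        return dt + t_o_bar, dt, t_o_bar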

Method 2 (Average Time-Predictable Model):

If we also know the amount of slip produced by the last event in each fault section (D_i), then we could apply the time-predictable model on a section-by-section basis. That is, the best estimate for the time of the next event on each section is:

t_i = t_o^i + D_i / v_i

Then, for a hypothesized rupture including multiple sections, we can define the best estimate of the time of the next event as a weighted average of t_i over the sections:

t̂ = [ Σ_{i=0}^{I} A_i ( D_i / v_i + t_o^i ) ] / [ Σ_{i=0}^{I} A_i ]

or

t̂ = Δt̂ + t̂_o

where

Δt̂ = [ Σ_{i=0}^{I} A_i D_i / v_i ] / [ Σ_{i=0}^{I} A_i ]    and    t̂_o = [ Σ_{i=0}^{I} A_i t_o^i ] / [ Σ_{i=0}^{I} A_i ]

Note that we are using hats over the symbols here to distinguish them from those defined in Method 1. The implicit assumption here is that each fault section ruptures once it has reaccumulated the slip of its previous event (i.e., assuming some triggering threshold for each section, as in the time-predictable model). This model will interpret a relatively low amount of slip in an event as indicating that another event should occur soon (because significant stress remains), rather than indicating that the fault section has already been depleted.
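And the corresponding illustrative translation of Method 2 (same assumed inputs, plus the slip D_i of the last event on each section):

    # Illustrative implementation of Method 2 (average time-predictable) as
    # defined above.
    import numpy as np

    def method2_next_event_time(D, v, A, t_o):
        """
        D   : slip of the last event on each section (m)
        v   : slip rate of each section (m/yr)
        A   : area of each section (m^2)
        t_o : date of last event on each section (yr)
        Returns (t_hat, dt_hat, t_o_hat) as in the expressions above.
        """
        dt_hat = np.sum(A * D / v) / np.sum(A)    # area-weighted time to recover last slip
        t_o_hat = np.sum(A * t_o) / np.sum(A)     # area-weighted date of last event(s)
        return dt_hat + t_o_hat, dt_hat, t_o_hat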

Therefore, the differing assumptions between methods 1 and 2 above are that the time of next event depends either on the size of the next event, or on the size of the last event, respectively (albeit both with some intrinsic variability as modeled by a BPT-type distribution).

We have made no presumptions regarding the persistence of repeated, identical ruptures, but rather have presumed knowledge of the location of the next event. We will return to how this can be used in Bayes' theorem shortly, but let's digress for a moment to examine whether the assumptions in methods 1 and/or 2 above are testable.

The question is whether either of these predicted intervals, Δt or Δt̂, correlates with the actual observed intervals (t − t̄_o or t − t̂_o, respectively, where t is the observed time of occurrence), and whether the normalized differences, (t − t̄_o)/Δt and (t − t̂_o)/Δt̂, exhibit something like a BPT distribution. One way to explore this is with the Virtual California simulations of Rundle et al. (2004, Earth Planets Space, 56, 761-771).

Virtual California is a cellular-automaton-type earthquake simulator comprised of 650 fault sections, each of which is about 10 km in length. Each section is loaded according to its long-term slip rate and is allowed to rupture according to a friction law. Both static and quasi-dynamic stress interactions are accounted for between sections, giving rise to a self-organization of the statistical dynamics. In particular, the distribution of events for the entire region (California) exhibits a Gutenberg-Richter distribution. The interesting question is whether this complex interaction effectively erases any predictability hoped for by elastic-rebound-type considerations. Specifically, do methods 1 or 2 above provide any predictability with respect to Virtual California events? The results shown in Figure 5 are quite encouraging, as the distribution of events for method 1 (the average slip-predictable approach shown in 5a) is fit well by a BPT distribution with a coefficient of variation of less than 0.2. The results for method 2 (the average time-predictable approach) are even better, with a coefficient of variation of less than 0.15. That method 2 would exhibit more predictability is not surprising, however, because Virtual California's stochastic element is applied as a random 10% over- or under-shoot of the amount of slip in each event; therefore, knowing the size of the last event is more helpful than knowing the size of the next event.


[Figure 5, panels (a) and (b): VC distributions of (t − t̄_o)/Δt and (t − t̂_o)/Δt̂, respectively.]

Figure 5. The distribution of observed versus predicted recurrence intervals from the Virtual California (VC) simulation of Rundle et al. (2004). The red bins in (a) and (b) are for prediction methods 1 and 2 (average slip- and time-predictable methods, respectively), and various PDF fits are shown as well. (c) Rupture locations on the northern San Andreas Fault for a 3000-year time slice, showing a lack of persistent rupture boundaries. (d) and (e) show similar plots after assigning a random time of occurrence to each of the VC ruptures, revealing Poissonian behavior as expected.

[Figure 5, panels (c)–(e): VC events on the northern SAF; the VC distribution of (t − t̄_o)/Δt for randomized events; and VC events randomized on the northern SAF.]

These results are encouraging in that one of the most sophisticated physics-based earthquake simulators implies there is some predictability in the system. The relevant question, however, is how robust this conclusion is given all the assumptions and simplifications embodied in Virtual California. What we need is to propagate all the uncertainties in this model, as well as examine the results of other earthquake simulators, to see what predictability remains (e.g., the variability of the coefficients of variation in methods 1 and 2 above). Results obtained using Steve Ward's earthquake simulator (not yet shown here) reveal predictability as well, but with significantly higher coefficients of variation (~0.8).

Also shown in Figure 5 is a space-time diagram for VC ruptures on the northern San Andreas Fault (5c), as well as the distribution of method-1 recurrence intervals after assigning a random time of occurrence to all VC ruptures (5d, which shows an approximately exponential distribution as expected for a Poisson model). Figure 5e shows a space-time diagram for the time-randomized events on the northern SAF. Note that the Poissonian behavior in Figure 5e exhibits much more clustering and gaps in seismicity compared to Figure 5c. This exemplifies how the physics in Virtual California (specifically, the frictional strength of faults) effectively regularizes the occurrence of earthquakes to avoid too much stress buildup or release. An interesting corollary is whether the Poisson model, with its associated random clustering and gaps in event times, can be ruled out on the basis of simple fault-strength arguments.

Returning now to the application of Bayes’ theorem, recall that the long-term model gives the long-term (Poissonian) probability of every possible rupture. We can then use either method 1 or 2 above to compute a time-dependent conditional probability (modified by stress change calculations if desired, either as a clock change or a step in the BPT distribution), and apply this as the likelihood function in Bayes’ theorem. This will effectively increase the probability of ruptures that involve sections that are overdue, and reduce the probability of events on sections that have ruptured recently (with intermediate probabilities for ruptures that overlap both recently ruptured and overdue sections). Note that both methods 1 and 2 will maintain a balance of moment release on the faults.
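Pulling these pieces together, the sketch below illustrates one way this recipe could look in code: a BPT probability of occurrence in the forecast window, computed from each rupture's Method-1 (or Method-2) prediction, is used as the likelihood in the Bayesian update of the long-term Poisson probabilities. The aperiodicity value and the renormalization convention are assumptions made for the example.

    # Illustrative sketch: BPT likelihoods from Method-1/2 predictions feeding
    # the Bayesian update of long-term Poisson rupture probabilities.
    import numpy as np
    from scipy.stats import invgauss

    def bpt_window_prob(now, duration, t_o_bar, dt_pred, alpha=0.5):
        """P(next event in [now, now + duration] | no event since t_o_bar)."""
        dist = invgauss(alpha**2, scale=dt_pred / alpha**2)   # BPT(dt_pred, alpha)
        elapsed = now - t_o_bar
        F = dist.cdf
        return (F(elapsed + duration) - F(elapsed)) / max(1.0 - F(elapsed), 1e-12)

    def conditional_rupture_probs(poisson_probs, t_o_bars, dt_preds, now, duration):
        likelihood = np.array([bpt_window_prob(now, duration, t_o, dt)
                               for t_o, dt in zip(t_o_bars, dt_preds)])
        weighted = poisson_probs * likelihood
        # Preserve the collective probability of this rupture set, as argued above
        return poisson_probs.sum() * weighted / weighted.sum()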

Here’s another way to view this Bayesian solution. Suppose our task is to predict the next event on one of the faults where we have additional information. This can be considered an epistemic uncertainty in that only one next event can actually occur. For every fault rupture defined in the long-term model (now a branch on a logic tree), we can ask the following question

– if we knew for sure that this is the next event to go, how would we define its probability in a given time span. Again, one approach would be to apply methods 1 or 2 above. In this way we can assign a probability for each event in the long-term model (which are, again, on different branches of a logic tree). We now have the task of weighting the logic-tree branches in order to define the relative likelihood that each is the next to go. One approach would be to assign weights based on the relative rate of occurrence from the long-term model, which would lead to the same solution as outlined above using Bayes’ theorem.

Some might take issue with the assumptions implied in methods 1 and 2 above. What's important here, however, is that a wide variety of alternative likelihood models could be defined and accommodated in this framework, such as Bowman-type accelerating-moment-release-based predictions (i.e., ruptures in the long-term model that exhibit accelerating seismicity are more likely).

One issue is whether simulated catalogs obtained using this approach might lead to long-term behavior that differs from the long-term model (e.g., if the likelihood systematically favors some events over others). If so, is this bad, or is it actually good in that the likelihood model effectively pushes us toward more physically consistent results?

As an alternative to the Bayesian approach presented here, one might be tempted to extend the WGCEP-2002 approach to accommodate many more segments (to allow all important discrete rupture possibilities) and to solve the interacting point processes problem discussed above (e.g., with a BPT-step model that tracks stress interactions from occurring events).

However, it seems like any such solution might end up as complicated as a Ward or Rundle type model, which would beg the question of whether our efforts would be better spent on improving the latter model types.

What I’ve attempted to define here is a simple, justifiable framework that can accommodate a range of ideas on how to make credible time-dependent earthquake forecasts.

Again, our most advanced physics-based simulation models (e.g., Ward's or Rundle's) are not yet sophisticated enough to be used directly, and they also present challenges with respect to defining probabilities based on present conditions. Until these problems are solved, we need an interim rational basis for stating the degree of belief that various events might occur, and it looks like Bayes' theorem provides precisely such a framework. Some might dislike this approach because it doesn't provide a model of how the earth actually works and/or because it starts with the Poisson model (as the prior distribution), which many believe is wrong in the first place. However, until someone figures out how the earth does work, the approach outlined here provides a rational basis for incorporating existing, and perhaps physics-based, constraints in defining the probability of various earthquakes. Assuming someone doesn't find a fatal flaw in my reasoning, I think the next step is to build such a model and simulate earthquakes from it. As I hope was demonstrated in Figure 3 above, we can learn a lot by simulating earthquakes from even statistics-based models.


Appendix C – Deformation Models

(Deformation modeling in support of the WGCEP)

Written by Tom Parsons, USGS Menlo Park, (650) 329-5074, tparsons@usgs.gov

Introduction: This outlines one phase of GPS data compilation and two phases of numerical deformation models. After a uniform, state-wide GPS database is assembled, we propose development of a set of fault-slip estimates. Lastly, we propose determination of a set of tectonic stressing estimates for use in interaction probability calculations and potentially for independent recurrence-interval modelling. Modeled fault-slip estimates are needed because direct observations are not continuous, and earthquake probability calculations depend on estimates of past earthquake rates. In the absence of observed discrete earthquake events, we depend on measured or modeled fault slip-rates to infer earthquake rates. Here we propose a collaborative effort to use geodetic observations to augment geologically determined slip rates. A combination of geological and geodetic methods enables us to more fully exploit the available array of crustal strain information than using segment-averaged slip rates from sparse geological observations.

Geologically-observed slip rates are determined from a much longer period than are geodetic measurements. Thus a key challenge will be to develop fault slip models consistent with both time-frames.

Calculated tectonic stressing rates are needed if fault-to-fault stress interactions are going to be a component of the WGCEP likelihood calculations. Currently not measurable, these values must be generated from numerical models. Most simply, stressing rate values can be used to make

"clock-change" corrections to probability calculations from calculated coseismic stress-change perturbations. An array of more sophisticated applications can also be employed, such as rate-and-state friction and/or viscoelastic post-seismic loading models.

What is needed:

Phase 1 - GPS data compilation : A primary step toward comprehensive deformation models will be a uniform compilation of GPS-derived strain-rate observations that covers the state of

California and surroundings. Key features would be assembly of reliable campaign and permanent data edited for spurious values, and with formal uncertainties defined (including covariance). This database, along with geologic fault-slip observations, would represent the most important observational constraints on deformation models.

Phase 2 - Fault-slip models: Required are fault-slip rate estimates on major segments in California. Also necessary are estimates of "off-fault" deformation, since the majority of damaging earthquakes that have occurred in California since 1906 fall into that category. We acknowledge that these rates may vary with differing time scales. Although we will likely be most interested in long-term (Holocene) deformation rates, it is quite possible that short-term variations may be useful as well. Because there are uncertainties with respect to fault structure, slip-rate models must be able to accommodate alternative fault representations. Supported model features would thus include:


1) GPS data explicitly included
2) Kinematically consistent
3) Provides deformation rates off major faults
4) Accommodates all important faults
5) Can handle multiple fault models as input (epistemic uncertainties)
6) Can handle geologic and geodetic data uncertainties
7) Includes significant 3D effects (may not be necessary for all aspects)
8) Statewide

Phase 3 - Tectonic stressing rate models : Regions of California are apparently subject to the legacy of past large earthquakes. Thus it is likely that at least one branch of the earthquake rupture forecast will incorporate stress interactions. In that event, we will require estimates of tectonic stressing rate at each fault segment or point where probability calculations are made.

Transient strain observations following large earthquakes are commonly observed; we could thus also require more sophisticated loading models that can handle 3-D time-dependent stressing.

Strategy and Key Scientists:

We propose supporting development of a uniform GPS compilation and multiple deformation models. This combination will permit us to estimate sensitivity of fault-slip and stressing-rate determinations to input data, modeling methods, and bias/opinion of the modelers. Below we outline extant and proposed modeling and compilation efforts.

1) GPS compilation: Key scientists include Duncan Agnew (IGPP at U.C. San Diego) and Mark

Murray (U.C. Berkeley)

GPS horizontal velocities: Our proposed strategy is to merge and reprocess survey-mode GPS data from the SCEC and NCEDC archives along with other survey-mode observations from within California, and combine these with analyses of continuous GPS sites in California and western Nevada. We will also, as appropriate, use velocity fields computed by others, rotating these into a common reference frame. The aim will be to produce velocities that represent the interseismic motion over all of California and northern Baja California, in addition to bounding zones extending East and North in order to facilitate modeling. Velocities will be referenced to stable North America, and will include errors (including a covariance matrix). Sites subject to nontectonic motion will be removed, in consultation with the modeling efforts. The initial compilation is available now and an update will be available in January 2006.

Details: Presently available GPS observations include ~300 campaign GPS repeated measurements (primarily USGS) and ~300 continuous GPS time series (primarily BARD and

SCIGN networks). In addition, new Earthscope GPS sites in California have been running since early 2005 and may contribute to the compilation.

Progress: An initial version of the GPS velocity field for California and the surrounding region is complete and is called "California CMM 0.02". It is based on:

1. For southern California, an update to the CMM to include additional data, notably more velocities from SCIGN.
2. The BAVU velocities from the San Francisco Bay area.
3. Velocities for the region north of BAVU from an ongoing project of Mark Murray's.
4. Velocities for Cascadia and northern California from an ongoing project of Rob McCaffrey's.

These separate velocity fields have been combined by adjusting reference frames to make velocities match at common stations. Because of the method of producing these data, a full covariance matrix is not provided. Instead 1-sigma errors are given. Only modest quality control of the velocities has been conducted thus far. A more complete set of points for Northern

California is expected in a future version.

2) Fault slip models: Key scientists include Peter Bird (UCLA), Brad Hager (MIT), Robert

McCaffrey (Rensselaer Polytechnic Institute), Brendan Meade (Harvard), and possibly

Zhengkang Shen (UCLA), Yuehua Zeng (USGS), and Tom Parsons (USGS).

Fault-slip rates are determined by breaking the crust into fault-bounded blocks that are displaced at rates consistent with GPS observations and in some cases, geologically-determined slip rates.

Three models are suggested for inclusion: one is already published and two additional models are proposed.

2a) NeoKinema – Bird : NeoKinema is a kinematic finite element program. It is non-dynamic, does not involve rheological variation, and does not calculate stress intensity. The code is 2-D, and has limited consideration of fault dips. The code solves for long-term velocity components, but does not give formal uncertainties. Rather it calculates a set of “best estimate” slip rates.

Stress rates can be calculated in the model continuum by locking faults and assuming elastic behavior. A key advantage represented by NeoKinema is that permanent strain components are calculated. Thus “off-fault” seismicity rates can be estimated.

Details : Modeling begins with a uniform viscous sheet. The sheet contains geodetic benchmarks and faults. Principal strain directions are constrained by observed principal stress directions. GPS velocities are corrected to estimated long-term rates by subtracting dislocation modeling of fault slip that is updated iteratively. Slip rates must be input as Gaussian distributions; however, very large uncertainties can be accommodated. Smoothed stress orientations are used as a constraint.

Expected deliverables to WGCEP: (a) Adjusted long-term fault slip rates, including an alternate geologic slip-rate database. (b) Stress rates for interseismic intervals due to temporary locking on seismogenic faults (assumes deep fault slip is at steady rates); stress rates tend to be focused along fault zones (similar to what is seen in dislocation loading models). (c) Long-term average anelastic strain rates.

The time frame for deliverables depends on the rate of data availability. If creation of the new all-geologic fault slip-rate database occurs during 2005, and it can be delivered to Bird along with the GPS compilation in a way that requires only automated re-formatting, then about one month's time would be required for calculations. However, if the new all-geologic slip-rate database does not get completed, and independent development is necessary, then that sub-task will take until the end of March 2006, and NeoKinema results would then be delivered at the end of April 2006.

2b) Harvard-MIT Block Model – Hager and Meade: This block model consists of elastic blocks bounded by faults [Meade and Hager, 2005], the use of which is justified because long, large faults release most of the seismic moment in California (90%). This block modeling approach varies somewhat from NeoKinema in that it is not a finite-element model and thus handles off-fault deformation a little differently. Faults can be expanding/contracting cracks to deal with overlaps or divergences of blocks. If necessary, off-fault deformation can be further accommodated by "pseudo faults" that allow blocks to deform internally to reduce residuals.

Pseudo-faults do not necessarily correlate with known structures. Model slip rates are linearly related to differential block motions and are therefore implicitly kinematically consistent. The new California block model will integrate geometry from the Community Block model, with kinematic constraints from geology and geodesy, and, where data are lacking, a priori guesses.

Fault-locking depths are solved for prior to inversion for slip rates.

Expected deliverables to WGCEP: (a) Adjusted long-term fault slip rates. (b) An estimate of off-fault strain rate.

2c) McCaffrey [2005] Block Model : A recently-published block model that includes California will be a valuable additional reference model.

2d) Alternative fault slip models:

(i) Zhengkang Shen and Yuehua Zeng propose to extend their southern California analysis statewide. They model secular crustal deformation using a linked-fault-segment approach. This approach interprets surface deformation by dislocation along fault segments beneath the locking depth. The linked-fault-element model can incorporate geological and geodetic data either individually or jointly. Fault-slip continuity is enforced by imposing finite constraints on adjacent fault-segment slip. If strict constraints are imposed, a block-fault model with kinematic consistency is simulated. At the other end member, with no constraints imposed, segment slip rates are independent. Thus, by optimally adjusting the constraints, this approach can simulate non-rigid blocks for regions experiencing significant postseismic deformation. Additionally, the non-block-like behavior allows deformation associated with faults not explicitly defined in the model to be estimated through the residual strain rates.

Schedule:

(1) Model ready by Dec. 2005

(2) Preliminary results based on geodetic and geologic data ready by March 2006

(3) Final results ready by June 2006


(ii) Finite element block model: If WGCEP finds it useful, Parsons can adopt the 3D California finite element model (see item 3c below) for use as an alternative block model/fault slip model.

A key advantage of this is that finite element blocks deform internally, which can highlight inconsistencies in fault models or the need for accommodating structures not identified in fault models. In addition, a fully 3D model can be developed that includes dip-slip faulting. A disadvantage is that meshed blocks are labor intensive to generate and not easily changed to accommodate alternative fault models.

3) Tectonic Stressing Rate Models : Key scientists include Fred Pollitz (USGS), Bridget Smith and David Sandwell (UCSD), Tom Parsons (USGS)

3a) Smith and Sandwell [2003] Coulomb stressing-rate model – This model was published in

2003 and uses a 3-D elastic half-space technique to calculate Coulomb stressing rates on major faults of the San Andreas fault system. Locking depths were solved for with GPS observations and dislocations were slipped beneath the locked faults to simulate tectonic loading. This model will provide reference stressing rate values on some of the key faults needed for the WGCEP effort.

3b) Viscoelastic coupling model for tectonic, coseismic, and post-seismic stressing – Pollitz: This technique combines physical models of coseismic and post-seismic deformation of known or expected earthquakes with background tectonic loading to create a detailed loading history. A known historic earthquake catalog can be incorporated, or, since earthquake rate estimates will be calculated as a WGCEP component, these frequencies could be used. The key advantage of this method is that it explicitly incorporates post-seismic loading in a manner consistent with the overall budget of strain accumulation and release. The method provides quantitative estimates of the differing stress states expected in the early versus late stages of a major fault cycle. It thus encompasses the range of states from stress shadows, to emergence, to subsequent stress focusing.

3c) California finite element model – Parsons: A 3-D model of California and surroundings is subjected to GPS displacements, and the resulting stressing-rate tensors enable tectonic stressing rates to be calculated at any point or on any surface. Since the model continuum is required to strain according to GPS rates, it includes coseismic displacements and post-seismic transients implicitly.

References:

McCaffrey, R. (2005), Block kinematics of the Pacific–North America plate boundary in the southwestern United States from inversion of GPS, seismological, and geologic data, J. Geophys. Res., 110, doi:10.1029/2004JB003307.

Meade, B. J., and B. H. Hager (2005), Block models of crustal motion in southern California constrained by GPS measurements, J. Geophys. Res., 110, doi:10.1029/2004JB003209.

Parsons, T. (2005), Tectonic stressing in California modeled from GPS observations, J. Geophys. Res., 110, in press.

Smith, B., and D. Sandwell (2003), Coulomb stress accumulation along the San Andreas Fault system, J. Geophys. Res., 108, doi:10.1029/2002JB002136.


Appendix D – Weldon et al. Analysis of So. SAF Paleoseismic Data

We will apply two approaches to the SAF (and other well-characterized faults if possible). The first will be a segment-based approach, implemented with the 2002 Working Group methodology but with the latest available data and with some small technical issues corrected. The second will be an "all possible" earthquake approach, with the range of possible earthquake size, location, and frequency informed directly by the paleoseismic dataset. If the two approaches yield significantly different results, an expert-opinion process will be used to apply relative weights to the two approaches, and the weighted combination will be used as the final model.

The problem with a strict segmentation model is that the overlapping ruptures of the past three well-characterized earthquakes on the southern San Andreas fault (1857, 1812, and ~1685) indicate that segments are not hard boundaries that always limit rupture. One approach to solving this problem is simply to make the segments soft, i.e., allow rupture to proceed through a segment boundary some fraction of the time, either by defining the fraction and running models to generate a population of ruptures, or by simply hand-constructing a suite of models that can be combined by a weighting process, as has been done to some extent by past Working Groups. Another approach is to build ruptures from the paleoseismic site data, without making any assumptions about where the rupture boundaries are or how often they are breached by rupture.

We believe that these two methods are complementary, and thus will pursue both approaches in the current Working Group.

Because the segmentation approach is widely applied and described by past Working Groups, here we will describe the "site-based" approach. Every site has a sequence of paleo-earthquakes, each with an age range described by a probability density function (pdf; see Fig 1a) in time. Some paleo-earthquakes have known displacement, and each site has an average slip rate (in some cases over the same time interval as the paleo-earthquakes, but for many over the Holocene to late Pleistocene).

As illustrated in Figure 1a, "all possible" ruptures are constructed by progressively linking paleo-earthquakes that overlap in time between sites into increasingly longer ruptures. Each rupture is assigned an age (determined from the pdfs of all paleo-earthquakes included in the rupture) and an average displacement based on its length (using empirical scaling relationships).

“Rupture scenarios” or possible rupture histories for the past ~1500 yrs are constructed from the

“all possible” rupture pool by sampling. Figure 1b shows three examples. Each scenario is then weighted according to its consistency with the available age control, displacement per event

(where actual offset is available to compare to the assigned values), and slip rate (qualitatively discussed in Figure 1b). It is important to recognize that only one (if any!) of these scenarios represents the actual history of earthquakes during the past 1500 years. However, all viable scenarios should share characteristics of the actual series, which allows us to make useful predictions from the set of all likely scenarios without knowing exactly which scenario is the actual history. For example, if no viable scenario has a particular site rupturing alone, then it is very unlikely that this has occurred (or will occur in the future); similarly, if rupture of the entire southern San Andreas fault rarely (or never) occurs in viable scenarios, it is unlikely to be part of our future. Preliminary results suggest that there are relatively few scenarios consistent with the three constraints we apply (timing, displacement, and cumulative slip rate/moment); so far we have found only about 20 in the first 200,000 samples of possible scenarios. Thus we feel that the data will limit possible scenarios to a relatively small number that will have similar properties.

Figure AppD1 – Left (a) shows a hypothetical set of overlapping event ages from 4 sites. All possible ruptures that include these 4 sites can be constructed by progressively linking all of the sites, beginning with 1 earthquake per site, up to a single earthquake that spans all 4 sites. Right (b) shows three possible scenarios for 4 paleo-earthquakes at each of 4 sites. The most likely fit to the data will contain a mixture of earthquake sizes that maximizes the combined likelihood of age, displacement, and slip rate (equivalently, moment rate).

While it is easy to qualitatively express how these scenarios constrain the possibility of future earthquakes, doing so quantitatively is more challenging. We are working on two approaches.

The first is to recognize patterns or themes in the likely scenarios. For example, many viable scenarios have overlapping north (1857) and south (~1685) events; thus one can look at the frequency of each type and calculate the conditional probability of each recognizable "type" of rupture. This is analogous to the simplifying approach of applying a characteristic earthquake model, but in our approach the "segmentation" is defined by the occurrence of past events, not by a priori segment boundaries. The second approach is to apply Bayesian reweighting to our population of "all possible" ruptures. A particular rupture (such as the entire San Andreas or a single site) has a known frequency in our pool of "all possible" scenarios. By comparing this expected (prior) frequency to the actual frequency in the collection of likely scenarios, we can calculate the most likely frequency of each possible rupture. In our example, single-site ruptures will be greatly downweighted (they are too small to satisfy the data), as will complete fault ruptures (they are too big). This reweighted set of likely ruptures could be sampled to calculate conditional probabilities.
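The following is a hedged sketch of that reweighting, assuming scenarios are represented simply as lists of rupture identifiers; the actual Weldon et al. implementation may differ.

    # Illustrative sketch: Bayesian reweighting of rupture frequencies by
    # comparing the "all possible" scenario pool (the prior) with the subset of
    # scenarios that survive the timing/displacement/slip-rate checks.
    from collections import Counter

    def reweight_ruptures(all_scenarios, likely_scenarios):
        """
        Each scenario is a list of rupture identifiers (e.g., tuples of the
        paleoseismic sites a rupture spans). Returns {rupture: weight}, the
        factor by which each rupture's prior frequency is revised.
        """
        prior = Counter(r for s in all_scenarios for r in s)
        posterior = Counter(r for s in likely_scenarios for r in s)
        n_all, n_like = len(all_scenarios), len(likely_scenarios)
        return {r: (posterior[r] / n_like) / (prior[r] / n_all) for r in prior}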

Data needs:
a) A complete set of site pdfs for all paleoseismic events (done for the Southern San Andreas, but will need to be constructed for the Northern after the basic data is compiled).
b) A complete set of ruptures for the San Andreas (done for the Southern; requires (a) for the Northern).
c) All available displacement-per-event data for the San Andreas (largely compiled for the Southern San Andreas; in progress for the Northern).
d) Slip rates for all paleoseismic sites. While slip rates are available for the entire fault, there are a number of studies in progress (including the San Bernardino portion of the San Andreas by McGill & Weldon and Littlerock on the Mojave portion of the San Andreas by Weldon & Fumal) that we would like to include in the analysis.
e) A complete set of weighted rupture scenarios for the San Andreas (almost done for the Southern, and the methodology will make doing the Northern easy once we've constructed pdfs for the site event data).


Appendix E – UCERF 1.0 Specification

A PRELIMINARY TIME-DEPENDENT EARTHQUAKE RUPTURE FORECAST

This has been extracted from the paper “Time-independent and Time-dependent Seismic Hazard Assessment for the State of California” by Petersen, Cao, Campbell, and Frankel (2005, submitted to SRL). A copy is available at: http://www.relm.org/models/WGCEP/California-hazard-paper.v9.pdf

The time-dependent hazard presented here is based on the time-independent, or Poissonian, 2002 national seismic hazard model (http://pubs.usgs.gov/of/2002/ofr-02-420/OFR-02-420.pdf) and additional recurrence information for A-type faults, which include: San Andreas, San Gregorio, Hayward, Rodgers Creek, Calaveras, Green Valley, Concord, Greenville, Mount Diablo, San Jacinto, Elsinore, Imperial Valley, Laguna Salada, and the Cascadia subduction zone (Figure 1). A-type faults are defined as having geologic evidence for long-term rupture histories and an estimate of the elapsed time since the last earthquake. A simple elastic dislocation model predicts that the probability of an earthquake rupture increases with time as the tectonic loading builds stress on a fault. Thus, the elapsed time is the first-order parameter in calculating time-dependent earthquake probabilities. Other parameters such as static elastic fault interactions, viscoelastic stress transfer, and dynamic stress changes from earthquakes on nearby faults will all influence the short-term probabilities of earthquake occurrence. In this paper we only consider the influence of the elapsed time since the last earthquake for a characteristic-type model.

Over the past 30 years, the USGS and CGS have developed time-dependent source and ground motion models for California using the elapsed time since the last earthquake (Working Group on California Earthquake Probabilities, 1988, 1990, 1995 (led by the Southern California Earthquake Center), 1999, 2003; Cramer et al., 2000; and Petersen et al., 2002). The probabilities of occurrence for the next event were assessed using Poisson, Gaussian, lognormal, and Brownian Passage Time statistical distributions. Past Working Groups applied a value of about 0.5 +/- 0.2 for the ratio of the total sigma to the mean of the recurrence distribution. This ratio, known as the coefficient of variation, accounts for the periodicity in the recurrence times for an earthquake; a coefficient of variation of 1.0 represents irregular behavior (nearly Poissonian) and a coefficient of variation of 0 indicates periodic behavior. For this analysis, we have applied the parameters shown in Table 1 to calculate the time-dependent earthquake probabilities. The basic parameters needed for these simple models are the mean recurrence interval (T-bar), the parametric uncertainty (Sigma-p), the intrinsic variability (Sigma-i), and the year of the last earthquake. The parametric sigma is calculated from the uncertainties in mean displacement and mean slip rate of each fault (Cramer et al., 2000). The intrinsic sigma describes the randomness in the periodicity of the recurrence intervals. The total sigma for the lognormal distribution is the square root of the sum of the squares of the intrinsic and parametric sigmas. For this analysis we assume characteristic earthquake recurrence models with segment boundaries defined by previous working groups.
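As a quick check, the total sigma can be reproduced from the Table 1 values; the short Python snippet below assumes an intrinsic sigma of about 0.5, consistent with the coefficient of variation discussed above.

import math

# Total lognormal sigma as described above: the square root of the sum of the
# squares of the intrinsic and parametric sigmas.
sigma_p = 0.39   # parametric uncertainty (Table 1, Coachella segment)
sigma_i = 0.50   # intrinsic variability assumed here (about 0.5, as noted above)
sigma_total = math.sqrt(sigma_p**2 + sigma_i**2)
print(round(sigma_total, 2))  # about 0.63, matching the Coachella Sigma-T in Table 1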


We calculated the time-dependent hazard using the 2002 Working Group on California Earthquake Probabilities report (WGCEP, 2003) for the San Francisco Bay area, the 2002 National Seismic Hazard model and the Cramer et al. (2000) models for the other faults in northern and southern California, and the Petersen et al. (2002) model for the Cascadia subduction zone. Dr. Ned Field and Dr. Bill Ellsworth reran the computer code that was used to produce the WGCEP (2003) report and provided an update to the time-dependent probabilities for the San Francisco Bay area for a 30-year time period beginning in 2006.

For the Cascadia subduction zone, we applied a time-dependent model for the magnitude 9.0 events using the results of Petersen et al. (2002). Recurrence rates for the magnitude 9 earthquakes in the model were estimated from paleo-tsunami data along the coast of Oregon and Washington. The M 8.3 earthquakes in the subduction plate-interface model were parameterized using Poisson process statistics (Frankel et al., 2002). The M 9.0 and 8.3 models were equally weighted in the 2002 hazard model as well as in this time-dependent model.

The San Andreas (Parkfield segment), San Jacinto, Elsinore, Imperial, and Laguna Salada faults were all modeled using single-segment ruptures following the methodology of Cramer et al. (2000). Multi-segment ruptures were allowed in the WGCEP 1995 model, but these were not incorporated in this preliminary time-dependent model.

The southern San Andreas fault was modeled using three models to account for epistemic uncertainty. The three southern San Andreas models consider various combinations of the five segments of the southern San Andreas fault that were defined by previous working groups (Cholame, Carrizo, Mojave, San Bernardino, and Coachella) and three multiple-segment ruptures. It is easier to define the single-segment time-dependent rupture probabilities because there are published recurrence rates and elapsed times since the last earthquake rupture for these segments based on historical and paleo-earthquake studies (e.g., WGCEP 1995).

The first time-independent model (T.I. Model 1) is based on the 2002 national seismic hazard model (model 1) and assumes single-segment and multiple-segment ruptures with weights that balance the moment rate and that are similar to the observed paleoseismic rupture rates (Frankel et al., 2002; Example A below). Possible ruptures of the southern San Andreas include: (a) ruptures along the five individual segments, (b) rupture of the southern two segments and of the northern three segments (similar to the 1857 earthquake rupture), and (c) rupture of all five segments together. For each of the complete rupture models, the magnitude of the earthquake was determined from the rupture area. Recurrence rates were assessed by dividing the moment rate along the rupture by the moment of an event with this calculated magnitude. The single-segment ruptures were weighted 10% and the multi-segment ruptures were weighted 90% (50% for sub-model b and 40% for sub-model c) to fit the observed paleoseismic data.
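The moment-balancing step can be sketched in a few lines of Python. The moment-rate and magnitude values below are purely illustrative, and the moment-magnitude relation used is the standard Hanks and Kanamori form, not something specified in this text.

# Sketch of recurrence rate from moment balancing (illustrative numbers only).
# Seismic moment from magnitude via the standard Hanks & Kanamori relation,
# M0 = 10**(1.5*M + 9.05) in N*m (an assumption here, not from the text).
def moment_from_magnitude(m):
    return 10 ** (1.5 * m + 9.05)

fault_moment_rate = 1.0e18   # hypothetical moment rate along the rupture, N*m/yr
rupture_magnitude = 7.8      # hypothetical magnitude derived from rupture area

recurrence_rate = fault_moment_rate / moment_from_magnitude(rupture_magnitude)
print(f"events/yr = {recurrence_rate:.2e}, "
      f"recurrence interval = {1 / recurrence_rate:.0f} yr")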

In the first time-dependent model (T.D. Model 1), which is based on T.I. Model 1, probabilities are calculated, as in previous working groups, by using a lognormal distribution for all the segments and the parametric sigmas listed in Table 1. In this model we have adjusted the Poisson probabilities upward to account for the information from the time-dependent probabilities of single-segment events. The southern San Andreas individual fault segments have higher time-dependent probabilities than the corresponding Poissonian probabilities (a probability gain); therefore, the multi-segment ruptures should also have higher time-dependent probabilities than the Poissonian model. Since it is not known in advance which segment might trigger rupture of the cascade, this multi-segment rupture probability is calculated using the weighted average of the probability gains from each of the segments involved in the rupture, where the weights are proportional to the 30-year time-dependent probability of each segment. We show an example containing two segments A and B in Examples B and C below.

The second time-independent model (T.I. Model 2) is also based on the 2002 national seismic hazard model (model 2) and considers characteristic displacements for earthquake ruptures. This model assumes two multiple-segment ruptures that are composed of segments from Cholame through Mojave (1857 type ruptures) and from San Bernardino through Coachella. In addition, single-segment ruptures of the Cholame and Mojave segments are considered. The model assumes that the Carrizo segment only ruptures in 1857 type earthquakes with a rate of 4.7e-3 events/yr, based on paleoseismic observations. Given this rate and a characteristic slip of 4.75 m on the Cholame segment, the 1857 type ruptures account for 22 mm/yr of the total slip rate of 30-34 mm/yr (WGCEP, 1995). The remaining 12 mm/yr is taken up by single-segment ruptures of the Cholame segment. Using a single-segment magnitude of 7.3 and a 12 mm/yr slip rate yields a single-segment recurrence rate for Cholame of 2.5e-3/yr. For the Mojave segment, the slip rate available after the slip from 1857 type ruptures is removed is 9 mm/yr. An earthquake of magnitude 7.4 (4.4 m/event) for single-segment rupture and a slip rate of 9 mm/yr yields a recurrence rate of 2.05e-3/yr for a single-segment Mojave rupture. For the San Bernardino through Coachella rupture, a M 7.7 earthquake with a recurrence rate of 5.5e-3 events/yr is used to be consistent with the paleoseismic data. Inclusion of other ruptures on these segments would lead to estimated recurrence rates that exceed the paleoseismic observations. The total moment rate of this model is 92% of the total predicted moment rate.
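The slip-rate bookkeeping quoted above can be reproduced with a few lines of arithmetic; the Python check below uses only the numbers given in the text.

# Worked check of the T.I. Model 2 slip-rate budget (units: events/yr, m, mm/yr).
rate_1857 = 4.7e-3          # 1857-type rupture rate on the Carrizo segment
slip_per_event = 4.75       # characteristic slip on the Cholame segment, m

slip_from_1857 = rate_1857 * slip_per_event * 1000.0   # mm/yr
print(round(slip_from_1857, 1))          # ~22 mm/yr of the 30-34 mm/yr total

remaining = 34.0 - slip_from_1857        # roughly the 12 mm/yr quoted in the text
single_segment_rate = (remaining / 1000.0) / slip_per_event
print(round(single_segment_rate, 5))     # ~2.5e-3 events/yr for Cholame

# Mojave: 9 mm/yr remaining slip rate, 4.4 m per M 7.4 event
print(round((9.0 / 1000.0) / 4.4, 5))    # ~2.05e-3 events/yr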

The second time-dependent model (T.D. Model 2), which is based on T.I. Model 2, accommodates the difference between the total segment time-dependent rupture rate (the time-dependent rate of all potential ruptures that involve that segment) and the corresponding multiple-segment rupture rate that involves that segment. The segment time-dependent probabilities for all ruptures combined are calculated the same way as for the first model and are shown in Table 1. The Carrizo segment is assumed to rupture only in 1857 type events, and its total segment time-dependent probability is the same as the time-dependent probability for the 1857 type events (following the partial cascade models in Petersen et al. (1996) and the conditional probability model in Cramer et al. (2000)). We first calculate a time-dependent probability P_ctotal for any type of rupture scenario involving the Cholame segment (single segment or 1857 type). Here we use the total recurrence rate derived from the time-independent calculation of Model 2. Next we calculate the time-dependent probability P_1857 for 1857 type ruptures using the paleoseismic recurrence rate. For the Cholame and Mojave segments, the time-dependent rate (converted from the probability) is the total time-dependent segment rate (calculated from P_ctotal) minus the rate of the 1857 type events (converted from P_1857). An example is shown in Example C below.

The remaining rate is applied to update the single-segment time-dependent rupture rate. The time-dependent rate for the Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates. In T.I. Model 2, the San Bernardino segment is not allowed to rupture by itself. Now, when the conditional probability weighting is applied, this rupture has to be allowed in order to accommodate the excess rate on this segment. Its time-dependent rate is the segment rate (converted from probability) minus the event rate of the Coachella and San Bernardino segments rupturing together.

For the third model we have applied two rupture scenarios that are based on new (i.e., post-2002) geologic data and interpretations: (1) the single-segment time-dependent rates that were used in Model 1 above and (2) two multi-segment ruptures, the 1857 type rupture that includes the Carrizo, Cholame, and Mojave segments and the southern multi-segment rupture that includes the San Bernardino and Coachella segments. The five single-segment ruptures were weighted 10% and the two multi-segment ruptures were weighted 90%, similar to the weighting in T.I. Model 1. The recurrence rates and elapsed times since the last earthquake for multi-segment ruptures are based on geologic data shown in Weldon et al. (2004, Figure 125). For these multi-segment earthquakes, we apply a recurrence time of 200 years (5 events in 1,000 years) and an elapsed time of 149 years for the 1857 type event, and a recurrence time of 220 years (4 events in 880 years) and an elapsed time of 310 years for the southern two-segment rupture. Weldon et al. (2005) indicate variability in the southern extent of the 1857 ruptures and the northern extent of the southernmost multi-segment rupture in the vicinity of the 1812 rupture. Therefore, we have also included an aleatory variability for the segment boundary near the southern end of the 1857 rupture and have not included the 1812 rupture as the date of the last event. We have developed time-independent (Poisson) and time-dependent models for these ruptures (T.I. Model 3 and T.D. Model 3). An example calculation is shown in Example C below.

Example A

For this paper we have calculated the time-dependent probabilities for time periods of 5, 10, 20, 30, and 50 years. For these calculations we have assumed a lognormal probability density function. Following the WGCEP 1995 report we find that the density function f(t) has the following form:

f_T(t) = 1 / (t σ_lnTi √(2π)) exp{ -[ln(t/μ^)]^2 / (2 σ_lnTi^2) } ,   (B1)

where μ is the mean, μ^ is the median, σ_lnTi is the intrinsic sigma, and t is the time period of interest. If μ^ and σ_lnTi are known, then the conditional time-dependent probability in the time interval (t_e, t_e + ΔT) is given by:

P(t_e ≤ T ≤ t_e + ΔT | T > t_e) = P(t_e ≤ T ≤ t_e + ΔT) / P(T > t_e) ,   (B2)

where t_e is the elapsed time and ΔT is the time period of interest. A Poisson process follows the rule P = 1 - exp(-rT), where P is the Poisson probability, r is the rate of earthquakes, and T is the time period of interest. If we want to convert between probability P and rate r, then we can use the formula:

r = -ln(1 - P)/t .   (B3)

We calculate the probability and annualize this rate using the above formula.
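As a concrete illustration of equations B1-B3, the following Python sketch evaluates the lognormal conditional probability and the equivalent annual rate using the Model 1 Cholame parameters from Table 1; it is a check of the published numbers, not the code used for the hazard model.

import math

# Conditional lognormal probability (eqs. B1-B2) and rate conversion (eq. B3),
# using the Model 1 Cholame row of Table 1 (median 37 yr, total sigma 0.66,
# 149 yr elapsed, 30-yr window).
median = 37.0
sigma = 0.66
t_e = 149.0
dT = 30.0

def lognormal_cdf(t):
    """CDF of a lognormal distribution with the given median and sigma."""
    return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

# P(t_e <= T <= t_e + dT | T > t_e)
p_cond = (lognormal_cdf(t_e + dT) - lognormal_cdf(t_e)) / (1.0 - lognormal_cdf(t_e))
rate = -math.log(1.0 - p_cond) / dT   # equivalent annual rate, r = -ln(1-P)/t
print(round(p_cond, 3))   # ~0.51, consistent with the Cholame entry in Table 1
print(round(rate, 5))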

Example B

If we denote the calculated time-dependent probabilities and time-independent (Poisson) probabilities for two single-segment rupture events as P_a^t, P_b^t, P_a^p, and P_b^p, the ratios R_a = P_a^t / P_a^p and R_b = P_b^t / P_b^p are sometimes called the probability gain or loss over the average Poisson probabilities. For a multi-segment (cascade) event involving these two segments, we also define the probability gain or loss as R_ab = P_ab^t / P_ab^p, in which the Poisson probability P_ab^p is known. Since P_ab^p already accounts for the conditional probability of multi-segment rupture, we further assume that the cascade event is triggered by independent rupture of one of the segments A or B. So we know that R_ab = R_a if the cascade event starts from A and that R_ab = R_b if it starts from B. Assuming segment A is more likely to rupture in some future time period than segment B, then R_a > R_b, and the chance of a cascade event occurring must be smaller than the chance of A rupturing but larger than the chance of B rupturing. Therefore, R_ab has to be smaller than R_a but larger than R_b if R_a > R_b, and vice versa. Considering that a cascade event can start from A or B with different likelihoods, we approximate R_ab by weighting R_a and R_b by P_a^t and P_b^t, their probabilities of rupture, resulting in the cascade event ratio

R_ab = (P_a^t R_a + P_b^t R_b) / (P_a^t + P_b^t) .

The physical basis for this type of weighting is that a multi-segment rupture has to start on one of the segments, and the segment with the higher probability is more likely to lead to or trigger a multi-segment event.
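A minimal numerical sketch of this weighting, with illustrative probabilities rather than values from the model, is given below.

# Sketch of the Example B weighting: the cascade gain R_ab as the average of the
# single-segment gains weighted by their time-dependent probabilities.
p_a_t, p_a_p = 0.40, 0.30   # segment A: time-dependent and Poisson probabilities
p_b_t, p_b_p = 0.20, 0.18   # segment B (both pairs are made-up numbers)

r_a = p_a_t / p_a_p
r_b = p_b_t / p_b_p
r_ab = (p_a_t * r_a + p_b_t * r_b) / (p_a_t + p_b_t)
print(round(r_a, 3), round(r_b, 3), round(r_ab, 3))  # r_ab lies between r_b and r_a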

Example applications for calculating time-dependent rates

Models 1 and 3:

Example C

In this section we show how the annual occurrence rates for a multi-segment rupture are calculated in Models 1 and 3. For our first example, we calculate the rate of a rupture that involves all five segments. The time-dependent 30-year probabilities for the five segments Coachella, San Bernardino, Mojave, Carrizo, and Cholame are 0.325, 0.358, 0.342, 0.442, and 0.512, assuming a lognormal distribution. The equivalent annual rates are calculated using the formula r = -ln(1-p)/t, where p is the segment time-dependent probability in t (30 years). This rate is divided by the Poissonian rate of the 2002 model and produces the probability gain for each segment. The gains for the five segments are 1.141, 1.918, 1.065, 1.690, and 1.114. The weighted gain for this 5-segment rupture is 1.384 (= (0.325x1.141 + 0.358x1.918 + 0.342x1.065 + 0.442x1.690 + 0.512x1.114)/(0.325 + 0.358 + 0.342 + 0.442 + 0.512)). The final annual rate for this rupture is the Poissonian rate (0.00355) multiplied by this gain and the 2002 model weight (0.4), which is 0.00196 (= 0.00355x1.384x0.4).
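The Example C arithmetic can be verified directly from the quoted probabilities and gains, as in the short Python check below.

# Worked check of the five-segment rupture in T.D. Model 1 (values from the text).
p30 = [0.325, 0.358, 0.342, 0.442, 0.512]    # Coachella ... Cholame, 30-yr probabilities
gain = [1.141, 1.918, 1.065, 1.690, 1.114]   # probability gain for each segment

weighted_gain = sum(p * g for p, g in zip(p30, gain)) / sum(p30)
print(round(weighted_gain, 3))               # ~1.384

poisson_rate = 0.00355                       # 2002-model rate for this rupture
model_weight = 0.4                           # weight of the 2002 sub-model
final_rate = poisson_rate * weighted_gain * model_weight
print(round(final_rate, 5))                  # close to the 0.00196 quoted in the text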

For Model 3, the cascading allows only 1857 and 1690 types of events, and their recurrence times are 200 and 220 years, respectively, which are different from the 2002 model. We follow the same steps as in T.D. Model 1 to calculate the time-dependent annual rates for the multi-segment ruptures with the new Poissonian rates for multi-segment events. After obtaining the time-dependent annual rates for the 1857 and 1690 multi-segment ruptures, we weight each of the Weldon et al. (2004) rupture scenarios included in the model.

Model 2:

In the 2002 model 2, the Poissonian rates for the five segments are different from those in T.D. Model 1. We apply these different mean recurrence times and the same elapsed times and intrinsic and parametric uncertainties, and calculate time-dependent 30-year probabilities and their equivalent annual rates as we did in Model 1. These rates are 0.008260, 0.010336, 0.008908, 0.007396, and 0.011173 for the segments Coachella, San Bernardino, Mojave, Carrizo, and Cholame, respectively. The Carrizo segment in T.D. Model 2 only ruptures in 1857 type events, so the time-dependent annual rate for 1857 type ruptures is defined as the rate for the Carrizo segment (0.007396). The Cholame and Mojave segments are allowed in the 2002 model to rupture independently. The time-dependent rates for these two segments are their segment rates, converted from their 30-year probabilities, minus the rate for 1857 type events, or 0.003777 (= 0.011173 – 0.007396) for Cholame and 0.001512 (= 0.008908 – 0.007396) for Mojave ruptures. The time-dependent rate for the Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates, or 0.008260 (< 0.010336). In the 2002 model, the San Bernardino segment is not allowed to rupture by itself. But now the difference between the San Bernardino segment rate (0.010336) and the rate (0.008260) for the San Bernardino and Coachella segments rupturing together defines the single-segment rupture on the San Bernardino segment, i.e., 0.002076 (= 0.010336 – 0.008260).
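The rate bookkeeping in this example can likewise be checked directly from the quoted annual rates:

# Worked check of the T.D. Model 2 rate subtractions (annual rates from the text).
coachella, san_bernardino, mojave, carrizo, cholame = (
    0.008260, 0.010336, 0.008908, 0.007396, 0.011173)

rate_1857 = carrizo                        # Carrizo ruptures only in 1857-type events
print(round(cholame - rate_1857, 6))       # 0.003777, single-segment Cholame
print(round(mojave - rate_1857, 6))        # 0.001512, single-segment Mojave

together = min(coachella, san_bernardino)  # Coachella + San Bernardino rupture
print(together)                            # 0.00826
print(round(san_bernardino - together, 6)) # 0.002076, San Bernardino alone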

REFERENCES

Please see those in http://www.relm.org/models/WGCEP/California-hazard-paper.v9.pdf


Figure 1: Locations and names of A-faults contained in the source model.


TABLE 1: parameters used in the time-dependent analysis
(T in years; Last Event is the calendar year of the most recent rupture; Elapsed Time is in years; P in 30 yrs is the 30-year time-dependent probability.)

Fault / Segment                        T(mean)  T(median)  Sigma-P  Sigma-T  Last Event  Elapsed Time  P in 30 yrs

SOUTHERN SAN ANDREAS FAULT
Model 1:
SAF - Coachella seg.                      87       71       0.39     0.63      1690          316        0.325496
SAF - San Bernardino seg.                130      112       0.19     0.53      1812          194        0.358353
SAF - Mojave seg.                         76       56       0.20     0.80      1857          149        0.342035
SAF - Carrizo seg.                        87       74       0.29     0.58      1857          149        0.441794
SAF - Cholame seg.                        47       37       0.43     0.66      1857          149        0.512431
Model 2:
SAF - Coachella seg.                     182      149       0.39     0.63      1690          316        0.219495
SAF - San Bernardino seg.                182      158       0.19     0.53      1812          194        0.266615
SAF - Mojave seg.                        148      108       0.20     0.80      1857          149        0.234520
SAF - Carrizo seg.                       212      179       0.29     0.58      1857          149        0.198981
SAF - Cholame seg.                       138      111       0.43     0.66      1857          149        0.284790
Model 3: Same as Model 1 for single segments
SAF - Parkfield seg.                      25       23       0.16     0.38      2004            2        0.808552

ELSINORE FAULT
Whittier                                 641      553       0.21     0.54       650         1356        0.080761
Elsinore - Glen Ivy seg.                 340      292       0.24     0.55      1910           96        0.043533
Elsinore - Temecula seg.                 240      206       0.24     0.55      1818          188        0.188134
Elsinore - Julian seg.                   340      294       0.21     0.54      1892          114        0.056206
Elsinore - Coyote Mtn. seg.              625      532       0.27     0.57      1892          114        0.007411
Laguna Salada                            337      287       0.26     0.56      1892          114        0.062880

SAN JACINTO FAULT
SJF - San Bernardino seg.                100       85       0.28     0.57      1890          116        0.412896
SJF - San Jacinto Valley seg.             83       71       0.27     0.57      1918           88        0.475706
SJF - Anza seg.                          250      212       0.29     0.58      1750          256        0.188205
SJF - Coyote Creek seg.                  175      146       0.33     0.60      1892          114        0.228000
SJF - Borrego seg.                       175      148       0.29     0.58      1968           38        0.080351
SJF - Superstition Mtn. seg.             500      421       0.31     0.59      1430          576        0.098268
SJF - Superstition Hills seg.            250      212       0.29     0.58      1987           19        0.005682
Imperial                                  79       66       0.35     0.61      1979           27        0.359390

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0)        501      452       0.14     0.45      1700          306        0.076498
Cascadia megathrust - top (M 9.0)        501      452       0.14     0.45      1700          306        0.076498
Cascadia megathrust - bottom (M 9.0)     501      452       0.14     0.45      1700          306        0.076498
Cascadia megathrust - old (M 9.0)        501      452       0.14     0.45      1700          306        0.076498


TABLE 2: probabilities calculated for different time periods

Fault / Segment                                     5-years   10-years  20-years  30-years  50-years

NORTHERN SAN ANDREAS FAULT
SAF - Santa Cruz seg.                               0.003315  0.007285  0.017492  0.030944  0.065042
SAF - Peninsula seg.                                0.007566  0.015335  0.031023  0.046477  0.077429
SAF - North Coast seg. (so.)                        0.00135   0.002769  0.005622  0.008434  0.014394
SAF - North Coast seg. (no.)                        0.001522  0.003199  0.006617  0.010385  0.018332
SAF - Santa Cruz & Peninsula                        0.005979  0.012169  0.025033  0.038379  0.066354
SAF - Peninsula & North Coast (so.)                 0         0         0         0         0
SAF - North Coast seg. (so. & no.)                  0.005943  0.012151  0.02475   0.03756   0.06403
SAF - Santa Cruz, Peninsula & North Coast (so.)     0.0001    0.000203  0.000412  0.000624  0.001055
SAF - Peninsula & North Coast (so. & no.)           0.000277  0.000563  0.001151  0.001753  0.002989
SAF - 1906 Rupture                                  0.008707  0.017626  0.035838  0.054407  0.092085
SAF - 1906 Rupture (floating)                       0.011712  0.02413   0.049251  0.074827  0.12933

HAYWARD-RODGERS CREEK
Hayward (southern)                                  0.022994  0.045034  0.086287  0.12393   0.189386
Hayward (northern)                                  0.026388  0.050625  0.093602  0.130553  0.190905
Hayward (so. & no.)                                 0.017676  0.034412  0.065308  0.093136  0.141075
Rodgers Creek                                       0.031004  0.060478  0.115179  0.164745  0.250697
Hayward (no.)-Rodgers Creek                         0.003816  0.007431  0.014118  0.020166  0.030685
Hayward (so. & no.)-Rodgers Creek                   0.002133  0.004167  0.00796   0.011421  0.0175
Hayward-Rodgers Creek (floating)                    0.001321  0.00264   0.005271  0.007894  0.013115

CALAVERAS FAULT
Calaveras (southern)                                0.054532  0.099227  0.169106  0.222278  0.300099
Calaveras (central)                                 0.03191   0.060155  0.108546  0.148665  0.211439
Calaveras (so. & cent.)                             0.01114   0.021149  0.038684  0.053678  0.077973
Calaveras (northern)                                0.025715  0.049964  0.094514  0.134474  0.203198
Calaveras (cent. & no.)                             0.00064   0.00125   0.002389  0.003436  0.005289
Calaveras (so., cent. & no.)                        0.004166  0.008054  0.01517   0.021572  0.032686
Calaveras (entire floating)                         0.013752  0.02721   0.053276  0.078262  0.125224
Calaveras (so. & cent. floating)                    0.052935  0.101711  0.188368  0.262716  0.382761

CONCORD-GREEN VALLEY FAULT
Concord                                             0.009858  0.019288  0.036985  0.053301  0.082437
Green Valley (southern)                             0.004674  0.009117  0.017381  0.024932  0.038312
Concord-Green Valley (so.)                          0.003176  0.006212  0.011913  0.017178  0.026628
Green Valley (northern)                             0.012413  0.024104  0.045597  0.064948  0.098554
Green Valley (so. & no.)                            0.006387  0.012445  0.023686  0.033925  0.051985
Concord-Green Valley (entire)                       0.011831  0.023175  0.044537  0.064319  0.099852
Concord-Green Valley (floating)                     0.012023  0.023583  0.045412  0.065668  0.102079

SAN GREGORIO FAULT
San Gregorio (southern)                             0.004422  0.008785  0.017328  0.025643  0.041629
San Gregorio (northern)                             0.007545  0.014956  0.029378  0.043265  0.069491
San Gregorio (so. & no.)                            0.004749  0.009463  0.018776  0.02793   0.045732
San Gregorio (floating)                             0.003797  0.007577  0.015088  0.022533  0.037228

GREENVILLE FAULT
Greenville (southern)                               0.005894  0.011737  0.023248  0.034542  0.056504
Greenville (northern)                               0.005408  0.010775  0.021385  0.031832  0.052235
Greenville (so. & no.)                              0.002846  0.005672  0.011258  0.016761  0.027522
Greenville (floating)                               0.000791  0.001582  0.003161  0.004738  0.007881
Mt. Diablo Thrust                                   0.014486  0.028646  0.056056  0.082298  0.131579

SOUTHERN SAN ANDREAS FAULT
Model 1:
SAF - Coachella seg.                                0.06465   0.124687  0.23234   0.32550   0.47644
SAF - San Bernardino seg.                           0.07155   0.13791   0.25648   0.35835   0.52097
SAF - Mojave seg.                                   0.06951   0.133372  0.24619   0.34203   0.49377
SAF - Carrizo seg.                                  0.09397   0.178644  0.32375   0.44179   0.61659
SAF - Cholame seg.                                  0.11673   0.218408  0.38475   0.51243   0.68805
Model 2:
SAF - Coachella seg.                                0.04083   0.079849  0.15281   0.21950   0.33629
SAF - San Bernardino seg.                           0.04976   0.097313  0.18603   0.26662   0.40559
SAF - Mojave seg.                                   0.04421   0.086231  0.16413   0.23452   0.35575
SAF - Carrizo seg.                                  0.03490   0.069191  0.13566   0.19898   0.31523
SAF - Cholame seg.                                  0.05454   0.106061  0.20062   0.28479   0.42620
SAF - Parkfield seg.                                0.00105   0.046858  0.45975   0.80855   0.98359

ELSINORE FAULT
Whittier                                            0.01397   0.027726  0.05464   0.08076   0.13073
Elsinore - Glen Ivy seg.                            0.00550   0.011715  0.02626   0.04353   0.08546
Elsinore - Temecula seg.                            0.03315   0.065619  0.12840   0.18813   0.29805
Elsinore - Julian seg.                              0.00770   0.016094  0.03490   0.05621   0.10515
Elsinore - Coyote Mtn. seg.                         0.00085   0.001844  0.00429   0.00741   0.01590
Laguna Salada                                       0.00888   0.018438  0.03950   0.06288   0.11527

SAN JACINTO FAULT
SJF - San Bernardino seg.                           0.08470   0.162433  0.29884   0.41290   0.58724
SJF - San Jacinto Valley seg.                       0.10098   0.192404  0.34915   0.47571   0.65876
SJF - Anza seg.                                     0.03391   0.066783  0.12947   0.18820   0.29452
SJF - Coyote Creek seg.                             0.04016   0.079619  0.15589   0.22800   0.35820
SJF - Borrego seg.                                  0.00694   0.016449  0.04344   0.08035   0.17599
SJF - Superstition Mtn. seg.                        0.01707   0.033859  0.06661   0.09827   0.15845
SJF - Superstition Hills seg.                       0.00007   0.000278  0.00171   0.00568   0.02628
Imperial                                            0.04985   0.107736  0.23435   0.35939   0.56901

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0)                   0.01239   0.024937  0.05048   0.07650   0.12949
Cascadia megathrust - top (M 9.0)                   0.01239   0.024937  0.05048   0.07650   0.12949
Cascadia megathrust - bottom (M 9.0)                0.01239   0.024937  0.05048   0.07650   0.12949
Cascadia megathrust - old (M 9.0)                   0.01239   0.024937  0.05048   0.07650   0.12949

