Key UCERF3 Assumptions

Our inventory of large faults is adequate:
- Events that end up partially on one of these faults and partially in the "background" are negligible.
- Missing links (which might imply more connectivity between faults than we now have) are negligible.
- Fault endpoints are reasonably well constrained for purposes of quantifying multi-fault ruptures.

Deformation Models:
- The amount of "off-fault" deformation and its spatial distribution (i.e., seismic moment in background events versus on faults).
- Slip-rate variations along faults, especially toward the ends.
- Long-term effects of large earthquakes (postseismic relaxation).

Earthquake Rate Models:
- Distribution of average slip along rupture length, especially where there are fault jumps.
- Distribution of average slip with depth (magnitude dependent?).
- Manifestation of surface creep and afterslip (reduction of area or of overall slip rate?).
- That sub-regions of certain sizes should exhibit a GR distribution of nucleations (see the Gutenberg-Richter sketch following this list).
- Probability of seeing events in a paleo trench (as a function of magnitude or average slip below).
- Multi-fault rupture probabilities:
  o Static Coulomb calculations can be used to define relative probabilities.
  o Rules can be extracted from observations.
  o Dynamic rupture modeling can help.
- That any apparent seismicity rate changes are real (or not).
- That smoothed seismicity is a reasonable approximation of the distribution of a-values for future events (both large and small).
- Mmax for off-fault seismicity (based on other faults, deformation rates, and/or precarious rocks).
- That the relative frequency of different focal mechanisms in off-fault seismicity can be predicted from past observations.
- What the aleatory variability in magnitude is for a given rupture area.
- Whether or not aleatory magnitude-area variability is being double counted between ERFs and GMPEs in PSHA.
- Unique Cascadia assumptions (e.g., regarding turbidites?).
- Whether or not precarious rocks provide reliable constraints on smoothed seismicity, maximum slip rate on lesser faults (or the Mmax or aleatory magnitude-area variability mentioned above).
- Whether or not nucleation probabilities are uniform over fault surfaces (or whether they correlate with rates of microseismicity).

Earthquake Probability Models:
- That there is elastic-rebound predictability in the system for large earthquakes (and that simulators are not leading us astray); that slip in the last event correlates with time to the next event.
- COV (coefficient of variation) of renewal models (see the renewal-model sketch following this list).
- That ETAS statistics apply to the largest events (see the ETAS rate sketch following this list).
- That the spatial distribution of long-term seismicity influences the location of aftershocks.
- That aftershocks are sampled from the same magnitude-frequency distribution as the long-term model in a region or lat/lon bin.
- The distance and temporal decay of aftershocks.
- Fraction of mainshocks versus aftershocks (and the magnitude independence thereof).
- Applicability of sequence-specific and/or spatially variable ETAS parameters.
- That elastic rebound and ETAS behavior can be modeled in a separate but complementary fashion (e.g., that aftershock statistics are not adversely affecting the COVs we're assuming).
- That we have a reasonable distinction between multi-fault ruptures and quickly triggered, separate events.
- That our ETAS model will adequately capture any static or dynamic triggering effects by virtue of updates based on ongoing seismicity.
- That other time dependencies can be safely ignored (double-branching ETAS, swarms, mode-switching between the ECSZ and the Los Angeles area).
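The following is a minimal, illustrative sketch (not part of UCERF3) of the Gutenberg-Richter relation underlying the assumption that sub-regions exhibit a GR distribution of nucleations; the a- and b-values are placeholders chosen only for the example.

    import numpy as np

    def gr_cumulative_rate(m, a=4.0, b=1.0):
        # Gutenberg-Richter: log10 N(>=m) = a - b*m, with N the annual rate of
        # events at or above magnitude m.  The a- and b-values are illustrative only.
        return 10.0 ** (a - b * m)

    def gr_incremental_rates(mags, dm=0.1, a=4.0, b=1.0):
        # Annual rate of events in each magnitude bin [m, m + dm).
        return gr_cumulative_rate(mags, a, b) - gr_cumulative_rate(mags + dm, a, b)

    if __name__ == "__main__":
        mags = np.arange(5.0, 8.0, 0.1)
        for m, r in zip(mags, gr_incremental_rates(mags)):
            print(f"M{m:.1f}: {r:.3e} events/yr")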
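Likewise, a minimal sketch of a time-dependent conditional rupture probability from a Brownian Passage Time (inverse Gaussian) renewal model, relevant to the elastic-rebound and renewal-model COV items above; the mean recurrence interval, aperiodicity, open interval, and forecast window are placeholder numbers, not UCERF3 values.

    import math

    def bpt_cdf(t, mean, alpha):
        # CDF of the Brownian Passage Time (inverse Gaussian) distribution.
        # mean  : mean recurrence interval (years)
        # alpha : aperiodicity (coefficient of variation of recurrence times)
        if t <= 0:
            return 0.0
        lam = mean / alpha ** 2  # inverse-Gaussian shape parameter
        phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        u = math.sqrt(lam / t)
        return (phi(u * (t / mean - 1.0))
                + math.exp(2.0 * lam / mean) * phi(-u * (t / mean + 1.0)))

    def conditional_prob(t_since_last, horizon, mean, alpha):
        # P(rupture in the next `horizon` years | no rupture in the
        # `t_since_last` years since the previous event).
        f_now = bpt_cdf(t_since_last, mean, alpha)
        f_later = bpt_cdf(t_since_last + horizon, mean, alpha)
        return (f_later - f_now) / (1.0 - f_now)

    # Illustrative numbers only: 150-yr mean recurrence, aperiodicity 0.5,
    # 100 yr elapsed since the last event, 30-yr forecast window.
    print(conditional_prob(100.0, 30.0, mean=150.0, alpha=0.5))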
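Finally, a sketch of a purely temporal ETAS conditional intensity (spatial decay omitted), illustrating the aftershock productivity and Omori-Utsu decay assumptions listed above; all parameter values are placeholders rather than fitted or UCERF3 values.

    def etas_rate(t, event_times, event_mags, mu=0.1, k=0.02,
                  alpha=1.0, c=0.01, p=1.1, m_min=3.0):
        # Temporal ETAS conditional intensity at time t (events/day):
        #   lambda(t) = mu + sum_i k * 10^(alpha*(m_i - m_min)) / (t - t_i + c)^p
        # mu       : background rate
        # k, alpha : aftershock productivity and its magnitude scaling
        # c, p     : Omori-Utsu temporal decay parameters
        # All parameter values here are illustrative placeholders.
        rate = mu
        for ti, mi in zip(event_times, event_mags):
            if ti < t:
                rate += k * 10.0 ** (alpha * (mi - m_min)) / (t - ti + c) ** p
        return rate

    # Example: triggered-plus-background rate 1 day after an M6 mainshock at t=0.
    print(etas_rate(1.0, [0.0], [6.0]))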