Appendix_I_replyToHardebeck - University of Southern California

Response to review of Appendix I by Jeanne Hardebeck – responses are given in italic
text, below.
First of all, I would like to thank the reviewer for writing such a careful and thorough
review!
(1) My biggest concern is that the b-value calculation is based on a catalog that is not
complete according to the results of the completeness estimates. The b-value was
calculated using the time period 1990-2005 and a minimum magnitude of M3.5. The
results of the magnitude of completeness analysis, shown in Table 5, put the magnitude
of completeness in 1990 much higher than M3.5 throughout most of the state. From
Table 5, the completeness in 1990 is M4.8 for North, M4.1 for San Francisco, M4.3 for
Central Coast, M4.2 for Mid, M5.5 for Northeast, and M5.3 for Rest of State.
(Completeness in 1990 for LA and Mojave is only constrained as ≤M4.0 from this table,
but Figure 5 suggests a completeness of around M3.5 for the southern California catalog,
so these two regions might be complete to M3.5.)
The computed b-value from the M≥3.5 1990-2005 catalog could be fine if the
completeness estimates from the modified Schorlemmer approach over-estimate the
magnitude of completeness. The completeness levels are much higher than others have
estimated for similar catalogs using the more traditional method of looking for a
breakdown in the Gutenberg-Richter relation (e.g. Woessner et al., who find a magnitude of
completeness of M1.2 for the San Francisco area for 1998-2002). However, I’m inclined to
agree that magnitudes found from the roll-off of the G-R relation underestimate the
magnitude of completeness. We compared the NCSN and PG&E catalogs for the Central
Coast and found that although the G-R curves looked complete down to M~2, there were
events in one catalog missing from the other up to M~4. So the completeness levels from
the modified Schorlemmer approach seem reasonable to me, leaving us with the problem
that the b-value is computed from an incomplete catalog.
Assuming that the estimated magnitude of completeness is correct, there are two possible
solutions to the b-value problem. One is to perform a b-value estimate for a shorter time
window and/or a larger minimum magnitude to ensure catalog completeness. The
drawback to this will be fewer earthquakes and hence larger uncertainty in the b-value. I
recommend the second option, to estimate b from a longer time period using the Weichert
method, utilizing more data and hence hopefully reducing the uncertainty. An argument
is made that estimating b-value using the Weichert method would be difficult because the
corrections for magnitude rounding and error need to be performed prior to applying this
method, and these require knowing the b-value. This could be addressed using an
iterative procedure: correct the magnitudes using b=1.0, estimate b-value using Weichert,
put the new b-values into the correction, and repeat this process until convergence.
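For concreteness, the core of the suggested iteration — solving Weichert's (1980) maximum-likelihood equation for β = b·ln(10) when different magnitude bins have different observation periods — can be sketched as follows. The bin values are hypothetical and this is an illustration, not the appendix's actual code:

```python
import math

def weichert_beta(mags, counts, periods):
    """Solve Weichert's (1980) maximum-likelihood equation for
    beta = b * ln(10), given bin-centre magnitudes, observed counts,
    and the observation period (years of completeness) for each bin."""
    # mean magnitude of the observed events
    nbar = sum(n * m for n, m in zip(counts, mags)) / sum(counts)
    lo, hi = 0.1, 10.0
    for _ in range(100):               # bisection on the likelihood equation
        beta = 0.5 * (lo + hi)
        w = [t * math.exp(-beta * m) for t, m in zip(periods, mags)]
        model_mean = sum(wi * m for wi, m in zip(w, mags)) / sum(w)
        if model_mean > nbar:          # model mean too high -> steepen the curve
            lo = beta
        else:
            hi = beta
    return beta

# Hypothetical bins: longer observation periods for larger magnitudes
mags = [4.25, 4.75, 5.25, 5.75, 6.25]
periods = [20.0, 40.0, 60.0, 80.0, 100.0]
# expected counts for a G-R relation with b = 1 (a = 5), so the solver
# should recover beta = ln(10), i.e. b = 1
counts = [t * 10 ** (5.0 - m) for t, m in zip(periods, mags)]
b = weichert_beta(mags, counts, periods) / math.log(10)
```

The outer loop described in the comment would then wrap this estimator: apply the rounding and magnitude-error corrections using the current b, re-solve for b, and repeat until b stops changing.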
(2) In many places in this Appendix, it is assumed that the b-value is known to be exactly
1.0, and no uncertainty in this value is considered. I know that the author of this
appendix strongly believes that b-value is exactly 1.0, but many in the community do not,
and it has not been strongly demonstrated here since the b-value calculation is based on
an apparently incomplete catalog. Therefore, while it appears that b≈1.0, it needs to be
acknowledged that there is uncertainty in the b-value (e.g. ±0.05 in Figure 6, larger if
only the complete part of the catalog is used), and it would be appropriate to propagate
this uncertainty through into the uncertainty in seismicity rate. Additionally, if a revised
b-value estimate following the recommendations above finds a different preferred value,
the use of b=1.0 should be changed to the new estimated b-value for consistency.
Specifically, b=1.0 is assumed in the following places:
- Section 2, in the correction for magnitude rounding. If the synthetics that validate this
correction were generated using b=1.0, they do not test the validity of the assumption that
b=1.0.
- Section 3, in the correction for magnitude error. Again, if the synthetics that validate
this correction were generated using b=1.0, they do not test the validity of the assumption
that b=1.0.
- Section 6.1, in the calculation of earthquake rates using the Weichert method. The
Weichert method can also be used to estimate b, but in this case b is fixed.
- Section 6.2, in the time average estimate of earthquake rates.
This comment is very well taken. In response, I first comprehensively recalculated all of
the instrumental completeness thresholds by switching from an assumed completeness
amplitude threshold at all of the stations to station-specific amplitude thresholds
calculated from the record of which earthquakes each station did or did not record (see
Section 5.1.2, ‘Determining Instrumental magnitude completeness thresholds’, in the
text). Although this is still not as comprehensive as the original Schorlemmer et al.
approach, it is much closer. The result of this analysis was somewhat lower (although in
many cases not by much) instrumental completeness thresholds throughout the state. In
particular, I found the whole state to be complete to M 4 from 1997-2006. Thus I
recalculated the b value using just the M≥4 1997-2006 data. I felt more comfortable
calculating b directly from the uniform and relatively accurate 1997-2006 database than
by taking an iterative Weichert approach. From the recalculation b was found to be 1.02
± 0.11 (at 98% confidence) for the full catalog and 0.85 ± 0.13 for the declustered
catalog. Since these values are very close and very much within error of the b=1.0
observed globally for full catalogs and b=0.8 used in the 2002 NHM for the declustered
catalog, respectively, I continued to use b=1.0 and b=0.8 to get the mean value of the
seismicity rates but I also did full sets of calculations with the 98% confidence limits on b
of b=0.91 and b=1.13 for the full catalog, and b=0.72 and b=0.98 for the declustered
catalog. The results of these calculations are now given in Tables 18-20. For
calculation of model rates (e.g. Weichert and averaged Weichert, which now replaces my
old time averaged method) the b value is used to correct for rounding and magnitude
errors and to correct for different magnitude completeness thresholds in different time
periods. For the direct count seismicity rates, the b value is needed only for rounding
and magnitude error correction. I found that the b value variation of ±0.11 or ±0.13
did significantly change the model seismicity rates but, importantly, did not significantly
impact the direct count seismicity rates. This relative stability of the direct count rates
is important since these are probably the rates that will be given the highest weight by
the Working Group.
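The kind of direct estimate described here — b computed straight from a complete M≥Mc subset of the catalog — is commonly the Aki (1965) maximum-likelihood formula, whose standard error b/√N is what confidence limits like those quoted are typically built from. A minimal sketch (assumptions: continuous magnitudes and a deterministic synthetic catalog; this is not the appendix's code):

```python
import math

def aki_b(mags, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for events with M >= mc.
    dm is the magnitude bin width; using mc - dm/2 is Utsu's correction
    for binned magnitudes (dm=0 for continuous magnitudes)."""
    sel = [m for m in mags if m >= mc]
    mbar = sum(sel) / len(sel)
    b = math.log10(math.e) / (mbar - (mc - dm / 2.0))
    stderr = b / math.sqrt(len(sel))   # Aki's asymptotic standard error
    return b, stderr

# Synthetic continuous magnitudes drawn deterministically (via quantiles)
# from a G-R distribution with b = 1 above mc = 4.0
mc, beta = 4.0, math.log(10)
mags = [mc - math.log(1 - (i + 0.5) / 20000) / beta for i in range(20000)]
b, err = aki_b(mags, mc)
```

If the errors are roughly Gaussian, two-sided 98% limits correspond to about 2.3 of these standard errors.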
(3) There is something perplexing about the computed rate of M≥5 earthquakes and
whether or not this rate is different between the historical (pre-1935) and instrumental
(post-1935) catalogs. Table 6 and Figure 9 are tremendously at odds with each other (it
would help if we knew how the rates in Figure 9 were computed). Table 6 supports the
large difference between historical and instrumental rates referred to in the text (a historical
M≥5 seismicity rate of 11.81 and an instrumental rate of 6.20, adding the columns). In
contrast, Figure 9 shows essentially no difference (a historical average rate of ~5.75 and an
instrumental average rate of ~5, using values estimated from the plot). The text refers to
1995-2005 as being a time of relatively low seismicity rate, but in Figure 9, the rate in
1995-2005 is the median of the values shown and is quite close to the mean. If these
values in Figure 9 are correct, it seems wrong that the time-average seismicity rate
(uniform weighting over the full catalog) should be so much higher than the Weichert
seismicity rate (biased towards 1995-2005). All I can conclude is that there is some
serious error in Figure 9.
Yes, there was a serious error in Figure 9. For Figure 9 I was counting all earthquakes
above the local completeness threshold in each region for each 10 year time period, and
then summing the regional rates together. Theoretically, and had the completeness
magnitudes been relatively low, this should have produced correct results. The problem
is that the completeness thresholds in many regions are simply too high for older periods
in time, leading to a systematic underestimation of older rates when measured over ten
year periods. For example, say that the completeness magnitude threshold for a region
for successive 10 year time periods is M 7.5, M 7.2, and M 6.4, and that the region
produces an average of 10 M≥6 earthquakes every ten years. Even if this rate remains
constant over the three ten year periods in question, with these completeness thresholds
one might easily measure rates of 0, 0, and 4 M≥6 earthquakes for the successive
decades, inaccurately implying an increasing seismicity rate.
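The same bias can be reproduced in a toy calculation (the thresholds are the ones from the example above; the magnitudes are hypothetical):

```python
# Completeness thresholds (Mc) for three successive decades, as in the
# example above, and a hypothetical region producing the same ten M>=6
# events in every decade (i.e. a perfectly constant rate)
thresholds = [7.5, 7.2, 6.4]
decade = [6.0, 6.1, 6.2, 6.3, 6.5, 6.6, 6.8, 7.0, 7.3, 7.6]

# counting only above each decade's Mc gives 1, 2, and 6 events:
# an apparently increasing rate from a perfectly constant one
observed = [sum(m >= mc for m in decade) for mc in thresholds]
```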
I couldn’t find any way to solve this problem and solve for accurate seismicity rates over
ten year periods in the historic record. For longer periods of time, however, rates may
be solved for more accurately because over 50 or 60 years we can expect to observe a
handful of earthquakes, even over a high (M≥7) magnitude threshold; in addition, with
time, within the historic era, the completeness thresholds of most regions dip below M 7.
Therefore I still have Table 13, which compares 1850-1932 and 1932-2006 seismicity
rates for most regions, but I have deleted Figure 9.
(4) The correction for magnitude error was somewhat confusing because of the apparent
contradiction between assuming symmetric Gaussian magnitude errors and computing a
non-zero average change in magnitude. I think I understand what’s being done, but it
could be explained better. Tinti and Mulargia correct for magnitude error by shifting the
frequency-magnitude curve down by changing the a-value. As I understand it, Equation
4 makes an equivalent correction by shifting the frequency-magnitude curve to the left by
changing M, with results matched to those of Tinti and Mulargia for the case of uniform
magnitude uncertainty. (I think ΔM is the average increase in magnitude, rather than the
average decrease as stated, since subtracting ΔM = Mobserved − Mcorrected from the
observed magnitude gives the corrected value.)
I have added to the magnitude error correction section to try to make the method more
clear.
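The equivalence of the two corrections can be checked numerically. For Gaussian magnitude errors of standard deviation σ, the Tinti and Mulargia result inflates the apparent rates by exp(β²σ²/2), with β = b·ln(10), which is the same as shifting the frequency-magnitude curve by ΔM = βσ²/2. A sketch with assumed values b = 1 and σ = 0.2:

```python
import math

A, B, SIGMA = 4.0, 1.0, 0.2          # assumed G-R a, b and magnitude-error sigma
BETA = B * math.log(10)

def true_rate(m):
    """True cumulative G-R rate of events above magnitude m."""
    return 10 ** (A - B * m)

def apparent_rate(m):
    """Rate observed after Gaussian magnitude error (Tinti & Mulargia):
    the true curve inflated by exp(beta^2 sigma^2 / 2)."""
    return true_rate(m) * math.exp(0.5 * BETA ** 2 * SIGMA ** 2)

# the same inflation expressed as a shift of the curve:
dM = 0.5 * BETA * SIGMA ** 2         # ~0.046 magnitude units here
# apparent_rate(m) equals true_rate(m - dM) for every m, so subtracting
# dM from each observed magnitude recovers the true rates
```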
(5) The correction for magnitude rounding could also use some more explanation,
especially how the Monte Carlo trials were carried out. Is there only one corrected
magnitude per event, or are there many values from multiple trials? If there are many
values, how is a single magnitude for that event chosen, or are the results of all of the
trials combined into a single frequency-magnitude distribution? (Instead of having an
appendix to an appendix, I think you could just include the equations implemented by the
matlab code in the text of section 2.)
I moved the appendix of the appendix up into the text and tried to add more text to also
make the magnitude rounding correction section more clear.
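One common way to implement such a Monte Carlo rounding correction, sketched here in Python rather than Matlab (an illustration under assumptions, not necessarily what the appendix's code does): for each event reported at a rounded magnitude m, draw an unrounded magnitude from the rounding interval [m − δ/2, m + δ/2] with weights proportional to the G-R density exp(−βx).

```python
import math
import random

def unround(m, beta=math.log(10), delta=0.1, rng=random):
    """Draw one plausible unrounded magnitude for an event reported as m,
    assuming reports were rounded to the nearest delta and that true
    magnitudes follow an exponential (G-R) density ~ exp(-beta * x)."""
    lo = m - delta / 2.0
    u = rng.random()
    # inverse CDF of the exponential density truncated to [lo, lo + delta]
    return lo - math.log(1.0 - u * (1.0 - math.exp(-beta * delta))) / beta

rng = random.Random(0)                      # seeded for reproducibility
draws = [unround(4.0, rng=rng) for _ in range(20000)]
```

Because the density decreases with magnitude, the draws average slightly below the reported value. Whether one keeps a single draw per event or pools many trials into one frequency-magnitude distribution is exactly the choice the review asks to have spelled out.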
(6) Section 5.1.2. It’s not clear why station corrections at a set of stations should average
to 0; this will depend on the site conditions at the stations.
This is right. On further investigation I found that the stations I could not find
corrections for really did not have corrections – corrections have not yet been calculated
for these stations, and they are not currently being used to calculate magnitudes. So I
simply removed these stations from the analysis. All of the remaining stations have
specific, non-zero corrections.