Luka Grasselli

SEMINAR 2001/2002, 17.4.2002
The aim of this seminar is a brief description of the lively science field on the intersection primarily between
physics and geology. We start with the well-founded knowledge about earthquake occurrence and its distribution
in space, time and magnitude. Some physical models, simulating certain properties of earthquakes, are discussed
in next section. We continue with an overview of the current approaches and achievements in earthquake
prediction. In the last part we finally describe the application of the intermediate-term medium-range CN
(California-Nevada) prediction algorithm to the Italian territory and large parts of the Slovenian territory, in
order to give an example of the current limits of predictability. The fourth and fifth parts (see below) of the paper
are based on the contributions to the »Sixth Workshop on Non-Linear Dynamics and Earthquake Prediction«,
held in October 2001 in Trieste, Italy.
2.1. Earthquakes and their spatial distribution
2.2. The temporal distribution
2.3. The distribution of magnitudes
3.1. The seismic cycle
3.2. Slider-block model
3.3. Fractal properties of earthquakes
4.1. Starting points
4.2. Earthquake precursors
4.3. Premonitory seismicity patterns and prediction algorithms
From its beginnings, the problem of predicting earthquakes has always been a great challenge of seismology.
Within the 20th century, for many years this was considered a task beyond the possibilities of true science. From
about 1970 until 1980, attitudes were more optimistic and the solution of the problem was considered to be
practically at hand. More recently, a more critical view has returned, recognizing the difficulty of the problem
and even hinting at its impossibility. However, since forecasting the occurrence of earthquakes with sufficient
advance warning is a very efficient way of diminishing the number of casualties and preventing, in part, damage,
it remains a key question for seismologists [see Udias, 1999]. The patterns of earthquake occurrence, on the
other hand, are intrinsically connected to the problem of prediction. Classical examples are modern prediction
algorithms, which exploit the data of the past in order to get information on future events.
2.1. Earthquakes and their spatial distribution
The lithosphere of the Earth consists of six major lithospheric plates (Eurasian, Pacific, Australian, Antarctic,
African, American): relatively rigid layers with a thickness of about 100 km (temperature ranges between
270 K and 1600 K), floating in the partially molten upper mantle (temperature is less than about 3000 K). The
thermal convection in the mantle provides the mechanism for plate tectonics, which also includes several smaller
plates, positioned at the boundaries of the major ones. The plates are in relative motion at a velocity of about
5 cm/year (i.e. 500 km/10 Myear), accumulating strain at their boundaries and thus generating stress (typical
values are 10^6 Pa). When the stress exceeds the resistance of the material, a rupture occurs with a sudden release
of energy and a consequent drop of the accumulated stress. This energy release, part of which propagates through
the Earth as seismic waves, is known as an earthquake. However, tectonic plates continue to move, and the
process is repeated. Thus a stationary state may be reached, consisting of deformation, stress accumulation and
earthquake occurrence; this process is known as a seismic cycle. The accumulation of the stress caused by plate
motions takes from a few hundred to a few thousand years, the release of the energy occurs in a fraction of a
minute, and a third time scale has to be added for the duration of aftershock series, lasting up to a year.
Besides, during a seismic cycle, apart from the main shock, foreshocks and aftershocks, there also occur events
of lower energy with an apparently random time and space distribution (in the vast majority they are still limited
to the plate boundaries) [see Correig, 2001]. The described scenario is more or less valid for convergent zones,
whereas at the divergent plate boundaries only small earthquakes are more likely to occur.
Picture 1: The layers of the Earth interior.
Picture 2: The worldwide spatial distribution of stronger earthquakes.
2.2. The temporal distribution
From a statistical point of view, the simplest model of the occurrence of earthquakes with time is a Poisson
distribution. This distribution assumes that earthquakes are independent events, that is, the occurrence of an
earthquake does not affect the occurrence of another event. The global distribution of large earthquakes follows
this distribution quite well, but moderate and small earthquakes, in particular those in a limited region, do not.
On the other hand, the statistical distribution of ultimate crust stress (i.e. the value of stress immediately before
the rupture) may be expressed by a Weibull distribution λ(t) = K t^m, with K > 0 and m > −1, which is also widely
applied in probabilistic quality control with respect to the failure time of buildings and factory products. The
customary Gaussian distribution seems inadequate, because it may result in a »negative« ultimate strain, which
does not exist in reality (Hagiwara, 1974).
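The contrast between the two distributions can be sketched numerically. In the following minimal example the event rate and the Weibull parameters K and m are invented purely for illustration; they are not values from the text:

```python
import random

random.seed(1)

# Poisson model: independent events imply exponential inter-event times.
# The rate below is an assumed, purely illustrative value.
rate = 0.2                       # large events per year
gaps = [random.expovariate(rate) for _ in range(10_000)]
mean_gap = sum(gaps) / len(gaps)
print(f"mean inter-event time: {mean_gap:.1f} yr (theory: {1 / rate:.1f} yr)")

# Weibull hazard lambda(t) = K * t**m with m > 0: rupture becomes ever
# more likely as stress accumulates with time t since the last event.
K, m = 0.001, 1.5
for t in (10, 50, 100):
    print(f"hazard at t = {t:3d} yr: {K * t ** m:.3f} per yr")
```

The growing hazard for m > 0 captures the physical intuition of stress loading, whereas the Poisson model is memoryless by construction.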
2.3. The distribution of magnitudes
The absolute strength of an earthquake is given in terms of its magnitude, a measure of the energy released by
the earthquake and propagated as elastic waves. It is computed from the measurements of the maximum
amplitudes in the local seismic observatories. It would be preferable to measure the earthquake's strength from
the energy released, but this cannot be done directly. An attempt is the definition of the scalar seismic moment

M0 = μ u S ,

μ being the rigidity modulus, u the mean value of the slip or displacement on the fault plane, and S the area of
the rupture zone. For illustration, for the moderately strong (M = 6.0) Bovec-Easter event of April 12, 1998 we
have M0 = 4.5×10^17 Nm, u = 18 cm, and S = 6×10 km² [Bajc, 2001]. The seismic moment is related to the total
released energy E0 by

E0 ≈ (Δσ / 2μ) M0 ,

where Δσ is the stress drop (a typical value is Δσ/μ = 10^−4).
Although it seems that the best solution consists in characterizing the earthquake strength by means of M0,
several definitions of magnitude are in use. Empirically, we are able to observe the amplitude and the
wavelength of the propagated elastic waves, as well as the distance from the source. The characteristic feature of
the relations between magnitudes and the energy of the waves is the logarithmic dependence of magnitude on the
energy. The Richter magnitude M satisfies the equation

log E/[J] = 11.3 + 1.8 M ,

where E is the released elastic energy [Prelovšek, 2000].
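As a consistency check of the relation M0 = μ u S, the Bovec-Easter values quoted above can be solved for the rigidity modulus μ, which should come out as a typical crustal value of a few times 10^10 Pa:

```python
# Bovec-Easter 1998 values from the text; solving M0 = mu * u * S for mu.
M0 = 4.5e17          # scalar seismic moment, N m
u = 0.18             # mean slip on the fault plane, m
S = 6e3 * 10e3       # rupture area 6 km x 10 km, in m^2
mu = M0 / (u * S)
print(f"implied rigidity mu = {mu:.2e} Pa")
```

The result, about 4×10^10 Pa, is indeed a typical rigidity for crustal rock, so the three quoted quantities are mutually consistent.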
Picture 3: Different kinds of faults.
Observations show that in seismic regions, during any period of time, the number of small earthquakes is many
times that of the larger ones (see Table 1). This fact is expressed in the empirical law proposed by Gutenberg and
Richter in 1954:

log N(M) = a − b M ,

where N(M) is the number of earthquakes of magnitude larger than M, a is a constant that represents the
number of earthquakes of magnitude larger than zero, and b gives the proportion between earthquakes of small
and large magnitudes. Although we do not know the exact form in which the elastic energy accumulated in a
region is released by earthquakes, the distribution of their magnitudes follows this universal law. The deviations
correspond to very small and very large earthquakes on the one hand, and to observations on a single fault
or in a limited area on the other hand. A general rule is: the smaller the area under consideration, the narrower
the magnitude interval over which the Gutenberg-Richter law is valid. Values obtained for b are very stable, in
the range 0.6–1.4, and its most common value is close to unity. High values of b indicate a high number of small
earthquakes, which is to be expected in regions of low strength and large heterogeneity, whereas low values
indicate the opposite, namely high resistance and homogeneity. Changes of b with time for the same region are
associated with changes in stress conditions, and hence b has been proposed as a parameter to consider in the
problem of the prediction of earthquakes [see Udias, 1999].
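The stability of b can be illustrated on a synthetic catalogue. Under the Gutenberg-Richter law, magnitudes above a completeness threshold are exponentially distributed, and b can be recovered with the standard Aki (1965) maximum-likelihood estimator; the catalogue below is simulated, not real data:

```python
import math
import random

random.seed(0)

# Under the Gutenberg-Richter law, magnitudes above the completeness
# threshold Mc are exponentially distributed with parameter b * ln(10).
b_true, Mc = 1.0, 3.0
beta = b_true * math.log(10)
mags = [Mc + random.expovariate(beta) for _ in range(50_000)]

# Aki (1965) maximum-likelihood estimate: b = log10(e) / (<M> - Mc).
b_hat = math.log10(math.e) / (sum(mags) / len(mags) - Mc)
print(f"estimated b = {b_hat:.3f}")
```

With fifty thousand events the estimate reproduces the assumed b = 1.0 to within a few per cent, which is why b is a robust quantity to monitor for the temporal changes mentioned above.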
Table 1: Released energy E [J] and the number of earthquakes for different magnitude classes.
Picture 4: The log-number of earthquakes versus their magnitudes for the Azores-Gibraltar region (left),
and the worldwide distribution for 1995 (right).
Observations of the occurrence of earthquakes in space, time, and magnitude have initiated models that simulate
certain properties of their distribution. These models specify the form in which accumulated elastic stress is
released along the fault surface, producing earthquakes of varying magnitude at different times. Models must
satisfy the Gutenberg-Richter relation and must reproduce the universal value b ≈ 1.
3.1. The seismic cycle
As we have seen, the process of stress loading and the subsequent release of stress constitutes a seismic cycle. In
the simplest case, the rate of the accumulation of the stress is spatially uniform and always reaches the same
critical level at which an earthquake is produced with constant drop in stress, reducing the stress to the initial
level, from which it starts to accumulate again. In this case, all earthquakes on a particular fault would have the
same magnitude and would repeat at constant time intervals, resulting in a periodic seismic cycle. A more
realistic model was proposed by Shimazaki & Nakata (1980), whereby stress is released when it reaches the
same maximum value, but the drops in stress for earthquakes are not always the same. This situation, known as
the time-predictable model, allows prediction of the time of occurrence of the next earthquake, but not its size.
The concept of a seismic cycle is very important, if one is to understand the occurrence of earthquakes on the
same fault. Although the accumulation of stress may be approximately uniform, the heterogeneity of the material
does not allow the existence of periodic or regular cycles of occurrence of earthquakes of the same magnitude.
Owing to the heterogeneous conditions on the fault, stress may be released starting at various levels and
producing earthquakes of different sizes, so that the time-predictable model is not adequate. Furthermore, part of
the accumulated stress is released by continuous aseismic slippage of the fault, independently from the
occurrence of earthquakes. The aseismic slippage depends on the nature of each fault and it is difficult to
observe. [see Udias, 1999] The main problem with respect to the regularity of seismic cycles is that observations
of the occurrence of large earthquakes are very limited in availability. Instrumental data started to be recorded at
the beginning of 20th century and reliable global data do not extend more than 50 years back.
3.2. Slider-block model
A more realistic model of what is going on on the fault consists of a series of blocks, connected by springs, that
rest on a horizontal plane and are drawn along it by further springs connecting them to an upper plate that moves
horizontally with a constant velocity (constructed by Burridge and Knopoff in 1967). Once the springs have
accumulated enough energy to overcome the friction between the blocks and the lower plane, the blocks move.
The motion of one or a few adjacent blocks represents a small earthquake, and that of many blocks corresponds
to a large one. Owing to the friction, the blocks do not move uniformly, but rather by sudden increments, in the
form called stick-slip motion. This model mimics the earthquake distribution in space, time, and magnitude
much more realistically than any previous one. A result of such a model is that small events take place in a
random manner, whereas large ones occur in a quasi-periodic way.
Picture 5: Slider-block models.
The equation of motion for the i-th block of the model with N blocks of equal mass m, mutually coupled by
springs with Hooke constant k, and pulled against the friction F by the upper plate, which moves at velocity v
and is coupled to each block through a spring of elastic constant L, is:
m xi'' = k ( xi+1 – 2 xi + xi-1 ) – L ( xi – v t ) – F
where xi is the departure of block i from its equilibrium position. This set of coupled differential equations has to
be solved numerically for the whole system simultaneously. About N=100 blocks are needed to get a distribution
of events similar to the Gutenberg-Richter law. However, N = 2 blocks are enough to produce quasi-chaotic
behaviour (see below).
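The stick-slip character of the system can be sketched without a full ODE integrator by treating the loading as quasi-static: a block sits still until the net spring force exceeds static friction, then slips just enough to relax the force back to the threshold. All parameter values below, including the slightly unequal friction thresholds that break the symmetry of the two blocks, are invented for illustration:

```python
import math

# Two blocks coupled by a spring (constant k), each pulled by a leaf
# spring (constant L) attached to a driver plate moving at velocity v.
# A block slips only when the net spring force exceeds its static
# friction threshold F[i]; the slip relaxes the force back to F[i].
k, L, v, dt = 1.0, 0.5, 0.01, 1.0
F = (1.0, 1.3)            # slightly unequal thresholds break the symmetry
x = [0.0, 0.0]            # block positions
drive = 0.0               # driver-plate position
events = []               # (step, block index, slip size)

for step in range(5000):
    drive += v * dt
    moved = True
    while moved:                          # relax until no block is overloaded
        moved = False
        for i in (0, 1):
            force = k * (x[1 - i] - x[i]) + L * (drive - x[i])
            if abs(force) > F[i] + 1e-9:  # static friction exceeded: slip
                slip = (abs(force) - F[i]) / (k + L)
                x[i] += math.copysign(slip, force)
                events.append((step, i, slip))
                moved = True

print(f"{len(events)} slip events; largest slip = "
      f"{max(e[2] for e in events):.4f}")
```

Even this crude two-block version produces an irregular sequence of many small slips interrupted by larger cascades, in which a slip of one block overloads its neighbour.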
More recent models of this type have been generalized to two and three dimensions, and are more complex with
viscoelastic elements between blocks and other conditions that allow the simulation of series of aftershocks after
large events and other properties of the occurrence of earthquakes. Although these systems are in themselves
deterministic, they behave more and more chaotically as their complexity is increased, due to the increase in
their instability. The result is a quasi-chaotic behavior in which very small changes in initial conditions lead to
very large effects on the evolution of the system that behaves more and more randomly with time. While we
have discussed some important similarities between slider-block models and earthquakes there are also important
differences. Slider-block models would be representative of a distribution of earthquakes on a single fault.
However, the Gutenberg-Richter distribution of earthquakes is not associated with a single fault but rather with a
hierarchy of faults. In addition, the heterogeneity of the crustal material cannot be properly included in such models.
3.3. Fractal properties of earthquakes
Another approach to studying the occurrence of earthquakes is to consider their fractal nature using the fractal
theory developed by Mandelbrot (1977). In addition to some theoretical evidence for fractality of earthquakes
(e.g. slider-block models ), there are several characteristics of the seismic catalogues supporting this choice (e.g.
diverse power-law distributions, 'triggering'). An important characteristic and a practical definition of a fractal is
its scale invariance, which means the absence of a characteristic length. Another basic property of all fractal
distributions of a variable x is that they obey a power law of the type

f(x) = A x^−D ,

where the exponent D represents the fractal dimension, which is a generalization of the Euclidean space
dimensions and might be non-integer. In the following we briefly explain the connection between scale
invariance and power-law distributions. Mathematically, scale invariance of f(x) means that f(x) ∝ f(λx) for all λ.
Therefore, if f(x) is fractal, there exists a function C(λ) such that f(λx) = C(λ) f(x). After differentiation with
respect to x and elimination of C(λ) we obtain the equation f'(x)/f(x) = λ f'(λx)/f(λx). On setting x = 1 and
integrating with respect to λ, it follows that f(λ) = f(1) λ^α, where α = f'(1)/f(1); renaming λ to x gives
f(x) = f(1) x^α. It follows that power functions are the only scale-invariant functions.
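A quick numerical illustration of this result: for a power law, the ratio f(λx)/f(x) depends on λ alone and not on x, so no value of x is distinguished. The values of A, D and λ below are arbitrary:

```python
# For f(x) = A * x**(-D), the ratio f(lam * x) / f(x) = lam**(-D) is the
# same at every x: a power law has no characteristic length.
A, D, lam = 3.0, 2.0, 7.0

def f(x):
    return A * x ** (-D)

ratios = [f(lam * x) / f(x) for x in (0.1, 1.0, 10.0, 100.0)]
print(ratios)   # four (numerically) identical values, equal to lam**(-D)
```

Repeating the same check with, say, an exponential f(x) = exp(−x) gives ratios that depend strongly on x, which is exactly the presence of a characteristic length.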
It follows from equations (2), (3), and (4) that the Gutenberg-Richter law can be transformed into a power law
for the number of earthquakes as a function of the seismic moment,

N(M0) = B M0^−D ,

where B is constant for a constant drop in stress. Since the seismic moment is proportional to the source
dimensions, a similar relation can be written for the fault length or area. According to Aki (1981), D = 2b and,
since b has a value near to unity, the fractal dimension D is approximately equal to two (in our simplified case
we get D = b/1.8). This result can be interpreted as representing the fact that earthquakes are distributed over two
dimensions, which agrees with the observation of their occurrence on fault planes. The fractal nature of the
occurrence of earthquakes implies that they have a self-similar stochastic distribution, that is, they behave in a
similar form, independently of the range of sizes considered. Although this is the general rule, self-similarity
need not be perfectly satisfied. The already mentioned deviation of the global data from the Gutenberg-Richter
law at low magnitudes can be attributed to the resolution limits of the global seismic network, whereas the
deviation of the data at magnitudes greater than M = 7.5 is more controversial. It can be attributed to the small
number of very large earthquakes considered. However, there is also a physical basis for a change in scaling for
large earthquakes. The rupture zone of smaller earthquakes can be approximated by a circle of radius r, so that
r ∝ A^1/2, A being the rupture zone area. On the other hand, the depth of large earthquakes is confined by the
thickness of the seismogenic zone, say about 20 km, whereas the length l can increase virtually without limit.
Thus for large earthquakes we expect l ∝ A.
The best known scale free phenomena in physics are the critical phenomena that occur at the phase transitions
(e.g. the liquid-gas transition at the critical temperature). The two parameters defining the evolution of all
classes of critical phenomena are generically termed the order parameter and the correlation length ξ (the density
step and the length scale of the fluctuations for the liquid-gas transition). The correlation length ξ expresses the
typical distance over which the behavior of a variable is correlated with, or influenced by, the behavior of
another variable, and can be viewed as a measure of the typical linear dimension of the largest piece of
correlated spatial structure. Near the critical point the correlation length behaves as ξ ∝ |p − pC|^−ν, where p is
the parameter that defines the phase transition and pC its critical value. For the liquid-gas transition p is the
temperature, and for earthquakes p can be associated with the stored crustal stress. At the critical point ξ
diverges and all scale lengths are present, following a power-law distribution. In the case of earthquakes, the
order parameter can be associated with the stress drop, and the correlation length with the length of the fault.
Once we have accepted the fractal nature of earthquakes and their self-similarity, we may ask what is the
dynamic process responsible for this behavior. Some authors have answered this question in terms of
self-organized criticality (SOC). This term is applied to dynamic systems that evolve by themselves until they reach a
critical state in which phenomena take place randomly, according to a power law distribution. The concept of
self-organized criticality originally evolved from the 'sandpile' model proposed by Bak et al (1987). In this
model there is a square grid of boxes and at each time step a particle is dropped into a randomly selected box.
When a box accumulates four particles, they are redistributed to the four adjacent boxes, or in the case of edge
boxes they are lost from the grid (Picture 6). Redistributions can lead to further instabilities and avalanches of
particles in which many particles may be lost from the edges of the grid. A measure of the state of the system is
the average number of particles in the boxes. This 'density' fluctuates about a quasi-equilibrium value. The
slider-block models were also proposed to exhibit self-organized critical behaviour [Turcotte, 1999]. In the case of
earthquakes, the system is the material of the Earth's crust, which evolves under tectonic stresses until it reaches
a state of SOC. In this state, earthquakes of all sizes take place with only the limitation of a minimum and a
maximum possible size. This behavior is seen as a consequence of the nonlinear dynamic characteristics of the
Earth's crustal system in response to stresses generated by lithospheric plate motion. Assuming the validity of
this picture, the prediction of large events might appear impossible, because large events differ from small ones
only in their size.
Picture 6: The sandpile model.
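The sandpile rules described above are simple enough to simulate directly. In the sketch below the grid size and the number of dropped grains are arbitrary choices:

```python
import random

random.seed(2)

# Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid: drop grains at
# random; a box holding 4 grains topples, sending one grain to each of
# the four neighbours; grains falling off the edge are lost.
n = 20
grid = [[0] * n for _ in range(n)]
sizes = []                           # number of topplings per dropped grain

for _ in range(20_000):
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        a, b = unstable.pop()
        if grid[a][b] < 4:
            continue
        grid[a][b] -= 4
        topples += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p, q = a + da, b + db
            if 0 <= p < n and 0 <= q < n:    # edge grains are lost
                grid[p][q] += 1
                if grid[p][q] >= 4:
                    unstable.append((p, q))
    sizes.append(topples)

density = sum(map(sum, grid)) / n ** 2
print(f"mean grains per box: {density:.2f}; largest avalanche: {max(sizes)}")
```

After an initial transient the mean number of grains per box fluctuates about a quasi-equilibrium value slightly above 2, while avalanche sizes range from zero to system-spanning events, the hallmark of the self-organized critical state.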
4.1. Starting points
Earthquake prediction is pivotal for the fundamental understanding of the dynamics of the lithosphere as well as
for the reduction of the damage caused by earthquakes. The problem of earthquake prediction is posed as
consecutive, step by step, narrowing of the time interval, space, and magnitude range, in which a strong
earthquake is to be expected. Advance prediction is the only definitive test of any prediction method. Five major
stages of earthquake prediction are usually distinguished; they differ in characteristic time intervals for which an
alarm is declared. These stages are the following: background (100 years), which is the mapping of maximal
possible magnitudes and the evaluation of seismic hazard; long-term (10 years); intermediate-term (years);
short-term (months); and immediate (days). The seismic hazard is given by the probability that, in a particular
region and within a given time interval, the maximum value of the intensity or ground acceleration exceeds a
particular value (see Picture 7). A further division according to the space scale of prediction is possible: exact
(earthquake size (EqS)), narrow-range (some EqS), medium-range (10 EqS), and long-range (100 EqS). Typical
values of EqS for the earthquakes under consideration in the prediction algorithms are tens of kilometers.
Whereas a deterministic prediction seems impossible due to the complex or chaotic systems underlying the
earthquake mechanism, a statistical (probabilistic) prediction is very limited due to the long time scales involved,
and due to the lack of long-lasting systematic observations. In view of these restrictions, the problem of
prediction is based on the identification of precursor phenomena that indicate the impending occurrence of an
earthquake.
Picture 7: Seismic hazard for the Slovenian territory (estimated ground acceleration, return period 500 years).
4.2. Earthquake precursors
Precursors are observables that are related to changes in physical conditions in the focal region. Strictly
speaking, some phenomena, such as the abnormal behavior of animals, cannot be considered true seismological
precursors. It is difficult to verify and quantify such phenomena, let alone establish their relation to a future
earthquake. Some relevant precursory phenomena might be:
- Variations in the seismic activity.
- Changes in the velocity and in the spectral content of seismic waves and earthquake sources.
- Crustal deformations and variations of the stress in the crust.
- Gravitational, geomagnetic and geoelectrical precursors.
- Anomalous changes in the underground water level and chemical components.
- Anomalies in the atmospheric pressure, temperature and heat flow.
A seismic precursor that had wide acceptance at one time is the change in the velocity of seismic waves in the
neighborhood of the focal region, previous to the occurrence of an earthquake, and related to the magnitude of
the future earthquake. However, although this precursor raised great expectations, more controlled experiments
have shown that it has little reliability. The following validation criteria for precursor candidates were
established by IASPEI (International Association of Seismology and Physics of the Earth's Interior):
- The observed anomaly should be related to some mechanism leading to earthquakes.
- The anomaly should be simultaneously observed at more than one site or instrument
- The definition of the anomaly and of the rules for its association with subsequent earthquakes
should be precise.
- Both the anomaly and the rules should be derived from a set of data independent of the one for
which the precursory anomaly is claimed.
In this context, only five possible precursors, out of the roughly forty proposed, currently seem to deserve further
study (Wyss, 1997):
- One based on ground water chemistry.
- One on crustal deformation.
- Three on premonitory seismicity patterns.
4.3. Premonitory seismicity patterns and prediction algorithms
A popular seismic precursor is the observation of patterns of seismicity in a region with variations in space and
time. The analysis of seismicity patterns is useful not only for prediction purposes; it also provides the wide
set of systematic observations without which any physically based model remains merely theoretical
speculation. Some changes that are observed in the earthquake flow (i.e. the temporal succession of earthquakes)
before a large event are interpreted as the response of a non-linear system before a catastrophic event to the
perturbations caused by the small earthquakes.
In an active region, where the seismicity is expected to be distributed more or less homogeneously, the existence
of an area where the level of seismicity is lower than average is called a seismic gap and is considered the most
probable place for a large earthquake in the future. The following single seismicity patterns were formally
defined as premonitory [Panza, 2001] :
- The burst of aftershocks, which is associated with moderate-magnitude events characterised by a large
number of aftershocks.
- The seismic quiescence (seismic gap).
- The relative increase of the b-value for the moderate events with respect to smaller events.
- The increase of the spatial correlation in the earthquake flow.
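The quiescence pattern, for instance, can be quantified as a drop of the recent event rate below the long-term background rate. The sketch below is hypothetical: the function name, the window length and the event times are all invented for illustration and are not part of any published algorithm:

```python
# Hypothetical quiescence measure: the event rate in a recent window
# divided by the long-term background rate. A ratio well below 1 flags
# a candidate seismic quiescence. All values here are invented.
def quiescence_ratio(event_times, now, window):
    span = now - min(event_times)
    background = len(event_times) / span          # events/yr, long term
    recent = sum(1 for t in event_times if t > now - window) / window
    return recent / background

# 29 events at a steady rate of about 1 per year, but none in the
# final year before 'now':
times = [t + 0.5 for t in range(29)]              # 0.5, 1.5, ..., 28.5
r = quiescence_ratio(times, now=30.0, window=1.0)
print(f"recent/background rate ratio: {r:.2f}")   # 0 => candidate quiescence
```

Real quiescence tests must of course also assess the statistical significance of the rate drop, since a short empty window is not rare even in a steady Poissonian flow.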
Many modern programs of prediction have abandoned the idea of a unique precursor and search for a coherent
pattern of convergence of several precursors. Currently, a realistic goal appears to be the medium-range
intermediate-term prediction, which involves a space uncertainty of hundreds of kilometers and a time
uncertainty of a few years. A family of such algorithms has been developed, applying pattern-recognition
techniques based on the identification of premonitory seismicity patterns. Algorithms globally tested for
prediction are:
- The M8 algorithm, applied on a global scale for the prediction of the strongest events (M ≥ 8).
- The CN algorithm.
- The MSc algorithm, which can be applied as a second approximation of M8. It allows a reduction of the
area of the alarm by a factor of 5 to 20.
- Independently, the algorithm NSE (next strong earthquake), applied to predict a strong aftershock
or a next main shock in a sequence.
A high statistical significance of 95% or more (for the estimation of statistical significance, predictions are
compared with a random guess) was established for the M8, MSc and NSE prediction algorithms in a large-scale
experiment in the advance prediction of strong earthquakes in numerous regions worldwide, 1985-1997 (see
Table 2) [Keilis-Borok, 1999].
Table 2 [Panza, 2001]: Results of the in-advance prediction, mostly for the large-scale experiment (see text)
performed from 1985 to 1997. For each set of regions considered (the Circum Pacific in 20 regions; the areas
around 20 strong earthquakes) the table lists the strong earthquakes predicted and the percentage of time-space
occupied by alarms.
Picture 8: Gutenberg-Richter law for Northern Italy. The magnitude M1 (see text) is also shown.
The algorithm CN is based on the formal analysis of the anomalies in the flow of the seismic events that follow
the Gutenberg-Richter law, and predicts the strong events (M > M1) that are anomalous with respect to this law (see
Picture 8). The anomalies are theoretically justified by the so called multiscale seismicity model, which implies
that only the set of earthquakes with dimensions that are small with respect to the elements of the zonation can
be described adequately by the Gutenberg-Richter law [Panza, 2001]. The set of empirical functions considered
allows for a quantitative analysis of the premonitory patterns which can be detected in the seismic flow. The
seismic functions considered are: level of seismic activity, quiescence, space-time clustering, and space
concentration of the events. Although CN has been designed by the retrospective analysis of the seismicity of
California-Nevada, it is currently used in several different areas of the world, without the necessity to adjust the
parameters, because its functions are normalised by the level of the seismic activity of the considered region. The
area selected for predictions using the algorithm CN must satisfy three general rules [Peresan et al, 1999]:
1.) Its linear dimensions must be greater than or equal to 5L – 10L, where L is the length of the expected source.
2.) On average, at least 3 events with magnitude exceeding the completeness threshold MC (the lower limit for
which the catalogue used can be considered complete) should occur inside the region each year.
3.) The border of the region must correspond, as much as possible, to minima in the seismicity.
CN application to a fixed region consists of two steps:
In the first step, referred to as the learning step, the magnitude M1, the magnitudes for the normalisation of the
functions and the thresholds for the discretization of the functions are defined. These parameters of the model are
fixed on the basis of the past data of the catalogue used, in such a way as to achieve the best retrospective
predictions of the past events. Furthermore, the return period for events with M ≥ M1 should not be lower than
6-7 years, and the relation M1 − 3 ≥ MC should be satisfied (the latter is not satisfied in our example of Northern
Italy, as will be seen below).
In the second step the monitoring of seismicity is performed using the parameters fixed in the learning phase.
This is the period of advance prediction. The main result of the CN application is the identification of the Times
of Increased Probability (TIPs) for the occurrence of an earthquake with magnitude greater than or equal to M1.
The quality of prediction can be defined by using two parameters: the rate of failures-to-predict N0 = n/N and
the rate of Times of Increased Probability T0 = t/T, where N is the number of strong earthquakes that occurred
during the time period T covered by the prediction, the TIPs were declared altogether for the time t, and they
missed n strong events.
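These two scores can be computed directly from an alarm timeline. In the sketch below the TIP intervals and the event times are invented toy values, used only to show the bookkeeping:

```python
# Quality scores from the text: failures-to-predict N0 = n/N and the
# alarm fraction T0 = t/T, for a toy alarm timeline.
def prediction_scores(alarms, quakes, T):
    t = sum(b - a for a, b in alarms)                      # total alarm time
    hits = sum(any(a <= q <= b for a, b in alarms) for q in quakes)
    n = len(quakes) - hits                                 # missed events
    return n / len(quakes), t / T

alarms = [(3.0, 5.0), (11.0, 13.5)]      # TIPs, in years from catalogue start
quakes = [4.2, 12.1, 19.7]               # strong-event times
N0, T0 = prediction_scores(alarms, quakes, T=20.0)
print(f"failure rate N0 = {N0:.2f}, alarm fraction T0 = {T0:.2f}")
```

In this toy timeline two of the three strong events fall inside TIPs, so N0 = 1/3, while the alarms cover 4.5 of the 20 years, so T0 ≈ 0.23; a useful algorithm must keep both scores low simultaneously, since declaring a permanent alarm trivially makes N0 = 0.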
After the first application of the CN algorithm to Italy in 1990, it was shown [Costa et al, 1995] that the results
of predictions are sensitive to the choice of the region. In particular, it was observed that a regionalisation,
following the borders of the seismotectonic zoning, improves the stability of the algorithm, while reducing the
percentage of TIPs and the failures to predict [Peresan et al, 1999]. Compared to California-Nevada, the
Italian peninsula and the entire Mediterranean area exhibit a considerable heterogeneity in the tectonic regime,
revealed by the coexistence of fragmented seismogenic structures of greatly differing kinds, where a complex
and apparently paradoxical kinematic evolution can be observed within very narrow regions. The successive
applications of the CN algorithm to the Italian territory led to the identification of three main regions, partially
overlapping and corresponding approximately to the north, centre and south of Italy (see Picture 9).
Picture 9: Three main regions for CN-application (left), and northern region in the regionalisation used by
Peresan et al (1999).
From the geodynamic point of view, Northern Italy is characterised by the Africa-Europe convergence and by the
counterclockwise rotation of the Adria plate, subducting under the Eastern Alps and Northern Apennines. The
catalogue used for the northern region by Peresan et al (1999) is complete for M ≥ MC = 3.0 beginning from
1960, and the threshold for the selection of events to be predicted is fixed at M1 = 5.4. Only events with depth up
to 100 km are considered, and aftershocks are removed with space-time windows depending on the magnitude of
the main event. The latter is a rather delicate affair, which is necessary every time we are to consider mutually
independent earthquakes. The results obtained can be summarised as follows: in the retrospective analysis, both
events with M ≥ M1 (M = 6.5, May 6, 1976 and M = 5.4, February 1, 1988) are predicted, with 20% of the total
time considered occupied by TIPs and 2 false alarms. The improvement with respect to the results obtained by a
previous study (Costa et al, 1996) is a reduction in the percentage of TIPs (from 27% to 20%) and in the spatial
uncertainty (around 38%). The two further strong events, both of which occurred after the end of the learning
period, are correctly preceded by TIPs. In particular, the already mentioned Bovec-Easter event is a real forward
prediction [Peresan et al, 1999].
Picture 10: TIPs obtained with the CN algorithm for the Italian territory (shaded time intervals). The triangles
represent strong events. The period of forward predictions is given in brackets.
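The magnitude-dependent space-time windows used above to remove aftershocks can be illustrated with a short sketch. The window coefficients below follow the commonly used Gardner-Knopoff form, but they are illustrative assumptions, not the values actually employed by Peresan et al. (1999):

```python
import numpy as np

def decluster(times, lats, lons, mags):
    """Window-based declustering sketch: an event is flagged as an
    aftershock if it falls inside the space-time window of a preceding
    larger event. Returns a boolean mask (True = independent event).
    times in days, lats/lons in degrees. Coefficients are illustrative."""
    times, lats, lons, mags = map(np.asarray, (times, lats, lons, mags))
    is_main = np.ones(len(mags), dtype=bool)
    for i in np.argsort(-mags):          # process largest events first
        if not is_main[i]:
            continue
        # window radius (km) and duration (days) grow with magnitude
        r_km = 10 ** (0.1238 * mags[i] + 0.983)
        if mags[i] >= 6.5:
            t_days = 10 ** (0.032 * mags[i] + 2.7389)
        else:
            t_days = 10 ** (0.5409 * mags[i] - 0.547)
        dt = np.abs(times - times[i])
        # rough epicentral distance in km (small-angle approximation)
        dlat = (lats - lats[i]) * 111.0
        dlon = (lons - lons[i]) * 111.0 * np.cos(np.radians(lats[i]))
        dist = np.hypot(dlat, dlon)
        linked = (dt <= t_days) & (dist <= r_km) & (mags < mags[i])
        is_main[linked] = False
    return is_main
```

For instance, a M = 4 event one day after a nearby M = 6 mainshock is flagged as an aftershock, while a distant M = 5 event on the same day is kept as independent.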
We conclude with some comments on the aspects dealt with previously. It was observed early on that
earthquakes occur in narrow zones that surround relatively stable regions. In many cases, these alignments of
earthquakes coincide with the margins of continents. However, not all continental margins correspond to seismic
zones, so they can be divided into active and passive margins, a classification related to plate
tectonics. In searching for regularities in earthquake occurrence, we have seen the central position of the
empirical Gutenberg-Richter law, describing the magnitude distribution of the events. Not only does it play an
important role in physical models of earthquake mechanisms, it also reappears in the CN prediction
algorithm, which exploits the deviations from the law on a local scale. The simple slider-block model for
earthquake dynamics reveals certain chaotic properties, which are confirmed and broadened by concepts like
SOC, based on fractal characteristics of earthquakes. Without doubt, these observations signify important
limitations to the predictability of earthquakes. In the problem of earthquake prediction, at present, none of the
proposed precursors can be considered a clear indication of the occurrence of a future earthquake. An important
achievement is the adequate assessment of the seismic hazard for a region, since it is fundamental for prevention
of the damage caused by earthquakes. Recent earthquakes in countries with strict antiseismic design practice and
in those without such practice produced very different numbers of casualties: the Loma Prieta, California,
earthquake of 1989 (M = 7) caused only 62 deaths, whereas the Latur, India, earthquake of 1993 (M = 6.3)
caused an estimated 11 000 deaths [Udias, 1999]. The CN algorithm applied to Italy is proving its robustness
by being used as a tool to verify the seismotectonic zoning [Panza, 2001]. The future prospects of the field of
earthquake prediction are uncertain. While a prominent author involved regards a five-fold increase of accuracy
and a transition to short-term prediction as realistic [Keilis-Borok, 1999], some non-involved authors point out
the possibility that earthquakes may, in practice, not be predictable [Udias, 1999].
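As an illustration of the Gutenberg-Richter law that recurs throughout the text, log10 N(≥M) = a − bM, the b-value of a catalogue can be estimated with the standard maximum-likelihood formula. The sketch below assumes continuous magnitudes above a completeness threshold m_c; it is a generic estimate, not part of the CN algorithm:

```python
import math

def b_value(mags, m_c):
    """Maximum-likelihood (Aki) estimate of the Gutenberg-Richter b-value:
    b = log10(e) / (mean(M) - m_c), using events at or above the
    completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)
```

For a catalogue whose mean magnitude exceeds m_c by log10(e) ≈ 0.4343, the estimate returns b ≈ 1, the value typically observed for regional seismicity.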
Bajc, J., et al., 2001, The 1998 Bovec-Krn mountain earthquake sequence, Geophysical Research Letters, 28/9, 1839-1842
Correig, A.M., 2001, Earthquake Occurrence, Sixth Workshop on Non-Linear Dynamics and Earthquake
Prediction, International Center for Theoretical Physics, Trieste
Costa, G., et al., 1996, Seismotectonic model and CN algorithm: the case of Italy, Pure Appl. Geophys., 147, 1-12
Goltz, C., 2001, Fractal and Chaotic Properties of Earthquakes and Seismicity Pattern Decomposition, Sixth
Workshop on Non-Linear Dynamics and Earthquake Prediction, International Center for Theoretical
Physics, Trieste
Hagiwara, Y., 1974, Probability of earthquake occurrence as obtained from a Weibull distribution analysis of
crustal strain, Tectonophysics, 23, 313-318
Keilis-Borok, V.I., 1999, What comes next in the dynamics of lithosphere and earthquake prediction?, Physics of
the Earth and Planetary Interiors, 111, 179-185
Panza, G.F., 2001, Experiments with Intermediate-term Medium-Range Earthquake Prediction Algorithms in
Italy and Seismic Hazard, Sixth Workshop on Non-Linear Dynamics and Earthquake Prediction,
International Center for Theoretical Physics, Trieste
Peresan, A., Costa, G., Panza, G.F., 1999, Seismotectonic Model and CN Earthquake Prediction in Italy, Pure
and applied geophysics, 154, 281-306
Prelovšek, P., 2000, Geofizika, Skripta predavanj, FMF, Ljubljana
Turcotte, D.L., 1999, Self-organized criticality, Rep. Prog. Phys., 62, 1377-1429
Udias, A., 1999, Principles of Seismology, Cambridge University Press