INTRODUCTION TO MODERN TIDAL ANALYSIS METHODS

B. Ducarme
(International Centre for Earth Tides)
By modern tidal analysis methods we mean the methods developed after 1965, i.e. in the computer era. After a general review we shall focus on the more recent methods.
1. Preliminary remarks
From the spectral content of the tidal curve we can derive preliminary rules.
We have four main tidal families:
- LP: long period
- D: diurnal
- SD: semi-diurnal
- TD: ter-diurnal
In each of them the structure is complex.
We also have some constituents in the quarter-diurnal (QD) band, typically M4, and in some places non-linear tides, called “shallow water components” in oceanography, go up to degree six (sixth-diurnal tides) or even higher. Instrument non-linearities can produce a similar effect.
We shall essentially discuss the determination of the main diurnal (D) and semi-diurnal (SD) tides as well as the ter-diurnal (TD) ones, leaving the determination of long period (LP) tides for a more specialised approach.
We want to determine the amplitude and the phase of the different tidal waves but, as explained in section 1.1, we have to put the waves in groups assumed to have similar properties and named after the main constituent. As we know beforehand the exciting tidal potential and as we have reliable models of the Earth's response to the tidal force, the goal of the tidal analysis is mainly to determine, for the main wave of each group, the amplitude ratio between the observed tidal amplitude and the modelled one (body tides) as well as the phase difference between the observed tidal vector and the theoretical one. Moreover the tidal loading due to the corresponding oceanic tidal waves cannot be separated from the body tides, as they have practically the same spectrum, and what we get is the superposition of both effects. The only way to determine the true body tides is to compute what is called the tidal loading vector L(L, λ) by integrating the oceanic tidal vector over the whole ocean, using the Farrell algorithm. Let us consider Figure 1, corresponding to a given tidal wave with a known angular speed ω. The x axis corresponds to the in-phase component cos(ωt) and the y axis to the out-of-phase component sin(ωt).
On the x axis we represent the theoretical tidal vector R(dth·Ath, 0), where Ath is the theoretical astronomical amplitude (rigid body approximation) for the corresponding component and dth the ratio between the modelled body tide amplitude R and the astronomical amplitude. For strain, tides in wells and displacements there is no astronomical tidal vector and R is directly computed for a given Earth model.
The observed tidal vector is A(Ao, α), rotating at the same angular speed, but with a phase advance α. We have generally the relation Ao = d·Ath. In the least squares approach d and α are the primary unknowns (section 3.1). When there are no astronomical tides we shall define Ao as d·R.
The discrepancy between the observation and the model is thus the residual vector B(B, β):

B = A − R    (1.1)

If we represent the oceanic tidal influence as a vector L(L, λ), we can construct a modelled tidal vector Am(Am, αm), with generally Am = dm·Ath:

Am = R + L    (1.2)

or a corrected tidal vector Ac(Ac, αc), with generally Ac = dc·Ath:

Ac = A − L    (1.3)

and compute the final residue X(X, χ):

X = B − L = A − R − L    (1.4)
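As an illustration, here is a minimal Python sketch of the vector relations (1.1)-(1.4), representing each tidal vector as a complex number; the numerical values are purely illustrative and not taken from any station.

# A minimal sketch of the vector relations (1.1)-(1.4); each tidal
# vector is a complex number amplitude * exp(i * phase).
# All sample values below are illustrative assumptions.
import cmath

def vec(amplitude, phase_deg):
    """Build a tidal vector from amplitude and phase (degrees)."""
    return cmath.rect(amplitude, cmath.pi * phase_deg / 180.0)

A = vec(368.0, 1.2)    # observed tidal vector A(Ao, alpha)
R = vec(362.5, 0.0)    # modelled body-tide vector R, on the x axis
L = vec(6.1, 48.0)     # computed oceanic load vector L(L, lambda)

B = A - R              # residual vector (1.1)
Am = R + L             # modelled tidal vector (1.2)
Ac = A - L             # corrected tidal vector (1.3)
X = B - L              # final residue (1.4), ideally close to zero

print(abs(X), cmath.phase(X))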
1.1 Separation of the waves
In principle two tidal frequencies can be separated on an interval of n equally spaced observations if their angular speeds differ by at least 360°/n (Rayleigh criterion).
It is equivalent to require that the interval T = n × sampling rate satisfies

T ≥ T1·T2 / |T1 − T2|    (1.5)

where T1 and T2 are the periods of the two constituents.
One can beat this Rayleigh criterion by a large amount if the frequencies are known a priori. The amount depends on the signal to noise ratio. In practice the separation is already numerically possible with only 50% of the basic interval.
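A small Python illustration of criterion (1.5), using the well-known S2 (12.0000 h) and K2 (11.9672 h) periods as an example:

# Record length T needed to separate two constituents of periods
# T1 and T2, following the Rayleigh criterion (1.5).
def rayleigh_length(T1, T2):
    return T1 * T2 / abs(T1 - T2)

T = rayleigh_length(12.0000, 11.9672)   # hours, for S2 vs K2
print(T / 24.0, "days")                  # about 182 days (half a year)
# With a good signal-to-noise ratio the separation is already
# numerically possible with roughly 50% of this interval.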
The main implications of this principle are:
a) There is a minimum length for the filters to be used either for the separation of the
tidal bands or for the elimination of the low frequency spectrum, generally called
“drift”.
b) We must associate the tidal waves in groups for which we suppose constant tidal
parameters for the amplitude and phase.
1.2 Sampling rate
It has been shown that a two-hour interval between samples is quite sufficient for earth tide analysis, but the usual procedure is to use hourly readings, which allow one to reach harmonics of M2 up to 12 cycles per day. Half-hourly readings do not produce any improvement.
However the aliasing problem implies that all the spectral content at frequencies higher than 0.5 cph (cycles per hour), which corresponds to the Nyquist frequency, must be removed from the records before decimating to hourly values. In the past, analog recordings were hand smoothed by experienced operators prior to digitisation, as it was not possible to build electronic filters with such a low cut-off frequency. It is indeed the great advantage of modern digital data acquisition systems that they use high sampling rates which are then numerically decimated down to hourly sampling. It is usual to use electronic filters with a 30 second time lag, and it is then sufficient to use minute-sampled data. This sampling rate has been conventionally adopted by the network of superconducting gravimeters of the “Global Geodynamics Project” (GGP), even if most of the stations use higher sampling rates, from 1 to 10 seconds. A first decimation is then applied to the original data to get one-minute sampled data.
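A minimal sketch, in Python, of such a decimation step; the filter length and cut-off below are illustrative assumptions, not the GGP production filter.

# Decimate a 1-minute series to hourly values: low-pass well below the
# hourly Nyquist frequency (0.5 cph), then keep every 60th sample.
import numpy as np
from scipy import signal

def decimate_to_hourly(x_min):
    """x_min: series sampled once per minute (numpy array)."""
    # Cut-off at 0.2 cph = 0.2/60 cycles per minute; the Nyquist of the
    # input is 0.5 cycles per minute, hence the normalized cut-off.
    taps = signal.firwin(numtaps=481, cutoff=(0.2 / 60.0) / 0.5)
    # filtfilt is zero-phase, i.e. a non-causal filter (see section 1.3).
    x_filtered = signal.filtfilt(taps, [1.0], x_min)
    return x_filtered[::60]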
1.3 Causal or non-causal filtering
The tidal signal being purely harmonic, we can use non-causal filters. It means that to evaluate some parameter at the instant ti we can use observations performed at any instant ti−k as well as ti+k, with k ≠ 0. This is not the case, for example, in seismology: to determine the arrival time of a seismic phase at a moment ti we can only use the preceding data.
For tidal analysis we can thus build even or odd filters, centred around the central epoch, which will be associated with the cosine or sine functions of the harmonic constituents.
2. General introduction to tidal analysis
We shall first define several classes of analysis methods, show how they derive from specific assumptions and see whether they take advantage of the fact that the spectral content of the tidal generating potential can be computed theoretically.
2.1 The transfer function
The goal of any tidal measurement is to determine the response of the Earth to the tidal force F(t) through an instrument, using a modelling system. In the output O(t) we cannot separate what in the physical system response is due to the Earth and what is due to the instrument. We must thus determine independently the transfer function of the instrument, in amplitude as well as in phase. This operation is often called the calibration of the instrument.
The impulse response of the system Earth–instrument, s(t), is the unknown. According to the Love theory we suppose it is linear. We can also reasonably suppose it is stable and time invariant. As we know that the input has a harmonic form, we can consider the global system as non-causal. In these conditions, for a single input F(t), we can write the predicted output p(t) by a convolution integral under the form

p(t) = ∫_{−∞}^{+∞} s(τ)·F(t−τ)·dτ    (2.1)

In the frequency domain the Fourier transform gives

p(ω) = s(ω)·F(ω)    (2.2)
As in our case p(t) and F(t) are pure harmonic functions, their Fourier transforms exist only for a finite data set of length T, often referred to as the data window. In practice this is not a restriction, as we cannot produce infinite series of observations.
The transfer function s(ω) is a complex one:

s(ω) = |s(ω)|·e^{iφ(ω)}    (2.3)

We generally refer to |s(ω)| as the amplitude factor and to φ(ω) as the phase difference between the output and the input. The introduction of a rheological model to describe the transfer function of the instrument,

T(ω) = |T(ω)|·e^{iθ(ω)}    (2.4)

will give the true transfer function of the Earth:

E(ω) = s(ω) / T(ω)    (2.5)

ε(ω) = φ(ω) − θ(ω)    (2.6)
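As a small illustration, the instrumental correction (2.5)-(2.6) amounts to a complex division; the following Python sketch uses assumed names and illustrative values.

# Remove the instrument transfer function T(omega) from the measured
# transfer function s(omega) to get the Earth response E(omega).
import cmath

def earth_response(s_amp, s_phase_deg, T_amp, T_phase_deg):
    s = cmath.rect(s_amp, cmath.pi * s_phase_deg / 180.0)
    T = cmath.rect(T_amp, cmath.pi * T_phase_deg / 180.0)
    E = s / T                      # amplitudes divide, phases subtract
    return abs(E), cmath.phase(E) * 180.0 / cmath.pi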
2.2 Direct solutions of the linear system (spectral analysis)
If we consider that our modelling system can fit the physical system perfectly, i.e. that there are no errors in the observations, we put

O(t) = p(t)    (2.7)

Replacing in equation (2.1), the evaluation of the Fourier transform will give us

s(ω) = O(ω) / F(ω)    (2.8)

where O(ω) is obtained for a data window [−T/2, T/2].
We can also use the autocorrelation functions C_OO(τ) and C_FF(τ), defined as follows:

C_OO(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} O(t)·O(t+τ)·dt    (2.9)

C_FF(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} F(t)·F(t+τ)·dt    (2.10)

The corresponding Fourier transforms C_OO(ω) and C_FF(ω), generally called power spectra, will give us an evaluation of s(ω) as

|s(ω)|² = C_OO(ω) / C_FF(ω)    (2.11)
With these spectral techniques we have no error calculation, as we supposed errorless observations. However we can evaluate the mean noise amplitude σ around the tidal spectral lines and deduce the relative error on the amplitude

ε_A = σ/A    (2.12)

where A is the amplitude of the tidal wave. The error on the phase difference is then

ε_α = (σ/A) rad    (2.13)
These spectral methods are generally time consuming and they do not take advantage of the fact that we know beforehand the spectral content of the input function F(t).
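For illustration, a minimal Python sketch of the direct estimate (2.8) on a finite data window; the masking threshold is an assumption, used to avoid dividing by the near-zero parts of F(ω) outside the tidal lines.

# Direct spectral estimate s(omega) = O(omega)/F(omega), eq. (2.8).
# o_t and f_t are assumed hourly series of equal length.
import numpy as np

def transfer_estimate(o_t, f_t, dt_hours=1.0):
    O = np.fft.rfft(o_t)
    F = np.fft.rfft(f_t)
    freqs = np.fft.rfftfreq(len(o_t), d=dt_hours)   # cycles per hour
    # Only divide where F carries significant energy.
    mask = np.abs(F) > 1e-6 * np.abs(F).max()
    s = np.where(mask, O / np.where(mask, F, 1.0), 0.0)
    return freqs, s   # |s| is the amplitude factor, angle(s) the phase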
2.3 Optimal linear system solution
If we consider the observational errors we can try to optimise our representation by
minimizing the residual power.
Let us define the residuals as

e(t) = O(t) − p(t)    (2.14)

We want to get

‖e‖² = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} e(t)²·dt = min    (2.15)

By introducing (2.1) we get

‖e‖² = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} ( O(t) − ∫_{−∞}^{+∞} s(τ)·F(t−τ)·dτ )²·dt    (2.16)

The only unknown being s(τ), the minimizing condition ∂‖e‖²/∂s(u) = 0, for every lag u, yields directly

lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} 2·( O(t) − ∫_{−∞}^{+∞} s(τ)·F(t−τ)·dτ )·F(t−u)·dt = 0    (2.17)
By using the definitions of the auto- and cross-covariance functions we write

C_OF(u) = ∫_{−∞}^{+∞} s(τ)·C_FF(u−τ)·dτ    (2.18)

This equation is known as the Wiener–Hopf integral equation. Its Fourier transform is expressed in the frequency domain as

C_OF(ω) = s(ω)·C_FF(ω)    (2.19)
Two classes of tidal analysis methods derive directly from the Wiener–Hopf integral equation:
- in the time domain, the so-called response method;
- in the frequency domain, the cross-spectral method.
Both of them start from the time domain representation of the tidal input F(t).
- In the response method we write (2.19) under its matrix form for a finite data length. We must define the sampling interval Δτ of s(τ) and its length n. s(τ) is the “impulse response”, defined as a linear combination of the original readings o(t) under the form

s(ti) = Σ_{j=1}^{n} c_j·o(ti − (j−1)·Δτ)    (2.20)

Increasing its length allows more oscillations of the function s(τ), and increasing the sampling interval Δτ decreases the bandwidth for which s(ω) is defined. In practice equation (2.19) is solved independently for the different tidal families (D, SD, TD). This method was used at the Tidal Institute in Liverpool and also at the Royal Meteorological Institute of Belgium.
The variations of the tidal amplitudes and phases are approximated by smooth functions of the frequency, and the response method is not suitable to model the liquid core resonance effect in the D band.
The cross-spectral method was seldom applied and leads to very similar results.
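As an illustration of the response approach, the following Python sketch regresses the output on lagged values of the input by least squares, i.e. it solves the matrix form of (2.19)-(2.20); the lag step and the number of weights are illustrative assumptions, and this is not the Liverpool or Brussels implementation.

# Fit o(t) ~ sum_j c_j * f(t - (j-1)*lag_step) by least squares.
import numpy as np

def response_weights(o_t, f_t, n=6, lag_step=2):
    start = (n - 1) * lag_step          # first usable sample index
    rows = len(o_t) - start
    # One column per lag; column j holds f shifted by (j-1)*lag_step.
    X = np.column_stack([f_t[start - (j - 1) * lag_step:
                             start - (j - 1) * lag_step + rows]
                         for j in range(1, n + 1)])
    c, *_ = np.linalg.lstsq(X, o_t[start:], rcond=None)
    return c                            # the response weights c_j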
2.4 Harmonic analysis methods
Let us consider the input function F(t) through its spectral representation F(ω), according to the general formula

F(t) = ∫_{−∞}^{+∞} F(ω)·e^{2πiωt}·dω    (2.21)

which becomes, for a discrete spectrum such as the tidal one,

F(t) = (1/2) Σ_{j=−N}^{+N} F(ω_j)·e^{2πiω_j t}    (2.22)
We can introduce this representation into equation (2.19), or directly into (2.17), and obtain for the real and imaginary parts of s(ω) a set of classical normal equations, as in least squares adjustment methods. We shall describe later on the corresponding observation equations. This harmonic formulation, based on a frequency domain representation of the tidal force, is the most widely used.
Differences appear in the formulation of the observation equations, in the prefiltering procedure and in the error evaluation. Some authors treat the different tidal bands separately (Venedikov, 1966; Venedikov, 1975; Venedikov, 199?; Venedikov et al., 2003). Others try a global solution and differ mainly by the design of the filters: direct least squares solution with a minimum filter length (Usandivaras and Ducarme, 1968), Pertsev filter on 51 hours (Chojnicki, 1967) or long filters with steep slope and high rejection (Wenzel, ????).
As a matter of fact the results are very similar, but large differences can appear in the error evaluation through the RMS error on the unit weight s0. Least squares solutions generally underestimate the errors, as they suppose a white noise structure and uncorrelated observations, i.e. a unit variance-covariance matrix.
This second hypothesis cannot be replaced, as we do not know the real variance-covariance matrix.
The first hypothesis is certainly not valid, as we know from spectral analysis that the noise is a so-called coloured noise. Moreover there is often a noise accumulation in the tidal band, a so-called cusp effect, due to the imperfect representation of the tidal potential, non-linearities and leakage effects. This first problem has been solved using more detailed tidal potential developments. The methods dealing separately with the different tidal bands, such as VEN66, lead to more realistic error estimates as they approximate the noise level independently in each band. Another way is to estimate the noise directly by a spectral analysis of the residuals and “colour” the error estimated under the white noise hypothesis (ETERNA 3.4, Wenzel, 1997). The “HYbrid least squares frequency domain CONvolution” method (HYCON, Schüller, 197?) studies the variations of the real and imaginary parts of s(ω) in consecutive partial analyses of the data.
More recently, Ishiguro et al. (1981) developed a general method including both harmonic analysis and response method: BAYTAP-G. It determines not only the tidal constituents but also evaluates the aperiodic “drift” signal. Moreover, additional “hyperparameters” allow one to adjust the degree of “smoothness” of the solution. A final version of the program has been developed at the National Astronomical Observatory in Japan.
3. Least squares analysis of the tidal records
A least squares solution requires a complete mathematical representation of the
phenomenon through the development of the tidal potential.
We can write equations of observation corresponding to each reading li under the form

li + vi = f(x, y, ...)    (3.1)

where x, y, ... are the estimated unknown tidal parameters, e.g. the amplitude and the phase of the wave groups, and the vi are the estimated observation residuals.
The solution is obtained from the normal equations, giving the most probable estimates of the unknowns and the corresponding errors under the assumption that they are normally distributed and uncorrelated.
As a matter of fact the tidal records are disturbed not only by accidental errors vi but also by a non-stationary noise function g(u, v, ...) which is related to the instrumental drift, the meteorological effects and so on. The equations of observation become

li + vi = f(x, y, ...) + g(u, v, ...)    (3.2)

However we can reduce the solution of the system (3.2) to the solution of the system (3.1) by using a filtering process. If we apply analog and numerical filters to the records in such a way that frequencies external to the tidal spectrum are removed, we can write

l'i + v'i = F(li) = f'(x, y, ...)    (3.3)

and then use a classical least squares solution.
3.1 Constitution of the observation equations
We can write equation (3.1) at epoch ti under the form

li + vi = Σ_j dj · Σ_k A^T_{j,k}·cos(α^T_{i,j,k} + φj)    (3.4)

where
i specifies the epoch,
j the tidal group,
k a wave inside a tidal group,
dj = Aj/A^T_j the ratio of the observed to the theoretical amplitude of the group j,
A^T_{j,k} the theoretical amplitude of a tidal constituent k inside the group j,
α^T_{i,j,k} the argument at instant ti of a tidal constituent (j,k),
φj the phase difference, supposed constant inside group j.
We have to solve the system (3.4) for the unknowns dj and φj.
In practice it is usual to take as auxiliary unknowns the odd and even components of the wave groups

ξj = dj·cos(φj)    (3.5)
ηj = dj·sin(φj)    (3.6)

and we get

li + vi = Σ_j (aij·ξj + bij·ηj)    (3.7)

where

aij = Σ_k A^T_{jk}·cos(α^T_{ijk})    (3.8)
bij = −Σ_k A^T_{jk}·sin(α^T_{ijk})    (3.9)

After resolution of system (3.7) by various methods (Venedikov, 1966; Wenzel, 1997) we return to the original unknowns

dj = (ξj² + ηj²)^{1/2},  φj = arctg(ηj/ξj)    (3.10)

The associated errors can be computed from the unit weight error s0 given by the least squares solution and the elements of the weight matrix Q:

s_d = (s0/dj)·(ξj²·Qξξ + 2ξjηj·Qξη + ηj²·Qηη)^{1/2}
s_φ = (s0/dj²)·(ηj²·Qξξ − 2ξjηj·Qξη + ξj²·Qηη)^{1/2}    (3.11)
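The following Python sketch condenses section 3.1: it builds the observation equations (3.7) from an assumed group structure, solves them by least squares, then recovers dj, φj via (3.10) and their errors via (3.11). All input names are illustrative.

# l: filtered readings (n,). For each group j, A_groups[j] holds the
# theoretical amplitudes A_jk (k,) and arg_groups[j] the arguments
# alpha_ijk in radians, shape (n, k).
import numpy as np

def analyse_groups(l, A_groups, arg_groups):
    cols = []
    for A_k, arg in zip(A_groups, arg_groups):
        cols.append((A_k * np.cos(arg)).sum(axis=1))    # a_ij, eq. (3.8)
        cols.append(-(A_k * np.sin(arg)).sum(axis=1))   # b_ij, eq. (3.9)
    X = np.column_stack(cols)
    sol, *_ = np.linalg.lstsq(X, l, rcond=None)
    v = l - X @ sol
    s0 = np.sqrt(v @ v / (len(l) - X.shape[1]))         # unit weight error
    Q = np.linalg.inv(X.T @ X)                          # weight matrix
    results = []
    for j in range(len(A_groups)):
        xi, eta = sol[2 * j], sol[2 * j + 1]
        d = np.hypot(xi, eta)                           # eq. (3.10)
        phi = np.arctan2(eta, xi)
        qxx, qxy, qyy = Q[2*j, 2*j], Q[2*j, 2*j+1], Q[2*j+1, 2*j+1]
        sd = (s0 / d) * np.sqrt(xi**2*qxx + 2*xi*eta*qxy + eta**2*qyy)
        sphi = (s0 / d**2) * np.sqrt(eta**2*qxx - 2*xi*eta*qxy + xi**2*qyy)
        results.append((d, phi, sd, sphi))              # eq. (3.11)
    return results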
3.2 Implicit assumptions of the least squares solution
The following implicit assumptions should be verified by the data set.
3.21 The model is complete
It means that no energy from deterministic signals is left in the data outside the tidal frequencies. It follows that all harmonic constituents coming from the pressure or eventually the oceanic shallow water components should be included in the model. In practice this is never satisfied for the pressure waves such as S2 and its harmonics.
3.22 The unknown parameters are stable
It means that the amplitude ratios dj and phase differences φj must be constant over the whole recording period. The changes of sensitivity or instrumental delays have thus to be carefully monitored.
3.3 Sources of perturbation of the tidal parameters
3.31 Noise and harmonic signals inside the tidal bands
To minimize the noise in the tidal bands, especially the diurnal one, the instruments should be carefully protected against meteorological effects. As the changes of pressure induce significant gravity variations, of the order of −3 nms-2/hPa, they should be recorded as an auxiliary signal and included in the model. For gravity meters this significantly improves the unit weight error s0. Temperature variations can also be introduced in the model.
There is no possibility to eliminate a harmonic signal which has the frequency of a tidal wave. Tidal oceanic loading is the best example of such a superposition. The only possibility to get rid of it is to model these “indirect” effects and subtract them from the tidal vector. A similar problem arises with the planetary pressure wave S2 and its harmonics, which have a lower efficiency (≈ −1 nms-2/hPa) than the pressure noise background and are not perfectly eliminated from the corresponding tidal groups.
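A minimal sketch of such an “efficiency coefficient” correction, fitting a single regression coefficient between the residue and the local pressure; the nominal admittance of −3 nms-2/hPa quoted above is only an order of magnitude, the fitted value being site dependent.

# Fit and remove a single pressure admittance from a gravity residue.
import numpy as np

def remove_pressure(gravity_residue, pressure_hpa):
    p = pressure_hpa - pressure_hpa.mean()
    admittance = (p @ gravity_residue) / (p @ p)   # nms-2 per hPa
    return gravity_residue - admittance * p, admittance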
3.32 Noise and harmonic signals external to the tidal bands
The smoothing and the filtering must really eliminate the noise outside the tidal bands to avoid eventual “leakage” effects. We call leakage the perturbation of the signal at one tidal frequency by energy present in another spectral band. It can happen when the side lobes of the filter transfer function are large. The effect can be very dangerous if a harmonic signal is present in one of the side lobes. Fortunately, in tidal analysis, we know beforehand the existing signals and can build the filters accordingly.
4. General methodology of data preprocessing
Much care should be devoted in tidal analysis to the preparation of the data and the calibration of the instruments (determination of the transfer function in amplitude and in phase), as well as to the detection of the anomalous parts of the records, in order to process only data fitting as well as possible the requirements outlined in section 3.2.
Depending on its frequency, the noise can be eliminated, or at least reduced, at different steps of the method, by filtering or modelling.
It should be pointed out that a spike or a jump produces noise over the entire spectrum and should be eliminated prior to any filtering. The filtering prior to least squares analysis aims at separating the tidal bands, and one thus generally uses band-pass filters.
4.1 Elimination of the noise external to the tidal band
4.11 Energy at frequencies higher than the Nyquist frequency (T < 2h or ν > 0.5 cph)
To avoid aliasing effects this part of the spectrum must be suppressed. With modern Digital Data Acquisition Systems (DDAS) we use a sampling rate of one acquisition per minute as a minimum. The electronic signal has to be filtered accordingly. The following decimation steps up to hourly readings require a correct digital filtering, with an appropriate transfer function, to avoid any aliasing and smooth out the noise.
4.12 Energy at frequencies higher than the main tidal bands (T < 8h or ν > 0.125 cph)
This part of the spectrum is frequently referred to as “noise” but can incorporate harmonic constituents due to non-linearity effects in the instruments or shallow water components, as well as the harmonics of the pressure wave S2. It is easily eliminated by a low-pass filtering prior to analysis.
4.13 Energy at frequencies lower than the diurnal band (T > 30h or ν < 0.03 cph)
This part of the spectrum is conventionally called “drift”, although it contains the long period tides.
Besides the separation of the three main tidal bands, the filtering of the data prior to least squares analysis aims at the elimination of the drift. Of course it is supposed that the drift is a smooth function, and jumps should be eliminated prior to filtering.
4.14 Energy between the tidal bands
It is the most difficult to eliminate in the filtering procedure. Part of it will remain in the filtered data.
4.2 Elimination of the perturbations inside the tidal bands.
As already stressed in 3.2 we should avoid or model any coherent or incoherent energy
besides the true tidal spectrum.
Coherent energy will mix with the tidal spectrum. If it has the same frequency we should correct the results by modelling the effects; this is the case of the oceanic tidal loading effects. If there exist additional waves, such as shallow water components, it is necessary to introduce them in the model. If not, they will corrupt the corresponding tidal group. In the last versions of VAV some options have been introduced to solve these problems.
There exist also steady waves of meteorological origin such as S1 and S2. If these waves are seasonally modulated they will create side waves in the spectrum. For example the annual modulation of S1 corresponds to the frequencies of P1 and K1, and the semi-annual one to PI1 and PSI1. As these modulations are changing with time it is practically impossible to model these effects, which directly affect the determination of the liquid core resonance near the K1 frequency. Here the only solution is to protect the instrument against these thermal influences.
Incoherent energy is mainly produced by the pressure and temperature variations. It is
often possible to determine “efficiency coefficients” for a given source of perturbation and to
eliminate its contribution to the considered frequency band. The noise level is well
characterised by the RMS error on the unit weight s0. The diminution of s0 is generally a
good criterion to evaluate the real impact of the perturbation on the data.
4.3 The leakage effect
We call leakage effect the influence, at one tidal frequency, of energy really present in another spectral band. Due to unfavourable window functions coupled to the side lobes of the transfer function of the filters, perturbations from even distant frequencies can leak into the tidal bands, where they can bias the tidal parameter estimates (Schüller, 1978). For the 48h filters designed by Venedikov in 1966 there is a maximum leakage from each frequency band situated at 7.5°/h in angular speed. A numerical simulation showed that the O1 group gets a maximum effect from the frequencies 21.44°/h (16.8h) and 6.44°/h (55.9h). However we know that there is no coherent energy at these frequencies in the tidal spectrum. Venedikov himself considered, in the VEN98 method, the problem of the leakage of one tidal band into the other, by introducing the diurnal waves as a group in the evaluation of the SD groups and inversely, showing that the effect is negligible.
To avoid the leakage one has to smooth the window function (ideally a Hanning or Hamming window should be applied in place of the usual square box), avoid gaps and remove side lobes by designing better, and thus longer, filters. This is the basic philosophy of Wenzel in ETERNA. However long moving filters spread out spikes and jumps, and the loss of data becomes important in presence of gaps. It is the main reason why data “repair” becomes necessary.
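As a small illustration of the windowing advice above, the following Python sketch tapers the data window with a Hanning window before the Fourier transform, lowering the side lobes at the cost of a wider main lobe.

# Amplitude spectrum of a tapered data window.
import numpy as np

def windowed_spectrum(x, dt_hours=1.0):
    w = np.hanning(len(x))
    X = np.fft.rfft((x - x.mean()) * w)
    freqs = np.fft.rfftfreq(len(x), d=dt_hours)    # cycles per hour
    # Normalise by the window gain so a sinusoid of amplitude A
    # shows a peak close to A.
    return freqs, 2.0 * np.abs(X) / w.sum()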
5. Data repair prior to analysis
In tidal data analysis it is customary to interpolate small gaps (less than one day), to remove spikes and to adjust obvious jumps in the curve.
It is a controversial matter, as we cannot “re-create” missing or spoiled information.
In fact the smoothing of digital data is useless when the applied band-pass filters are short and not overlapping, as in VAV. It is then sufficient to eliminate the corresponding perturbed sections from the filtered numbers in the computation procedure. This can be done automatically.
An intermediate solution is to weight the filters as a function of their internal noise, as done in NSV98.
The problem is worse when one wants to use very long moving filters to get steep slopes and avoid side lobes, as in ETERNA. The number of data lost on each side of a block can become very large in “gappy” data, and gaps affect the transfer function of the data window by creating side lobes. The main concern here is always the possible leakage from adjacent frequencies. It explains why data interpolation is so popular, even at hourly sampling rate.
With high rate sampling of tidal data it became necessary to decimate the original data to minute and even hourly data. Decimation filters are built to avoid aliasing and are generally rather long. A typical filter to decimate from minute to hour covers xxx minutes. A missing ordinate will thus create a gap of xxx minutes. Moreover, as the minimum filter length for tidal analysis is 24 hours, we shall lose a minimum of 24 hours of data. This is why interruptions of a few minutes should be interpolated.
Concerning spikes and tares, the moving filtering will spread out the perturbation over all the adjacent readings. This effect will also increase with the length of the filter. It is a reason why it can be necessary to suppress spikes and jumps. However in some instruments the curve goes slowly back to its previous value after a jump, and the correction of the jump is in fact producing an artificial drift. Correction of jumps can seriously affect the determination of LP waves and long period gravity changes. Spikes and jumps should be corrected on the original data prior to decimation, as the decimation filter will spread out this kind of perturbation.
A last category of perturbation usually “repaired” are the portions of the curve perturbed by large earthquakes. In fact most of the signal is harmonic, with a high frequency compared to the tides; it will be filtered out during the decimation process to the hourly sampling. However some instruments can show an offset during earthquakes and some repair can be necessary.
As a conclusion we should say that each philosophy has its advantages and drawbacks. Either you use short filters, to avoid losing too many data in “gappy” records, and you are exposed to leakage from the side lobes of the filters and window transfer functions, or you use long filters on uninterrupted data series with optimal transfer functions, and you have to repair the data from spikes and tares on one hand and to interpolate the gaps on the other, “creating” artificial data. The decimation procedure requires interpolating gaps smaller than the final sampling rate, to avoid the enlargement of very small gaps.
5.1 Preprocessing techniques
In the early stages of tidal research it was necessary to have uninterrupted records of at least 28 days to perform one tidal analysis with separation of the main tidal groups. The situation changed radically with the use of least squares techniques, which are only limited by the filter length. It became possible to use data with only two consecutive hours (Usandivaras & Ducarme, 1967), two days (Venedikov, 1966), 51 hours with Pertsev filtering (Chojnicki, 19??; Wenzel, 19??) or more. Very general methods have been developed for unevenly spaced data (Venedikov, 2003) and the notion of gap has thus disappeared. However it has always been customary to interpolate a few missing hours.
In the early days smoothing was performed manually, and simple filters, such as the Lecolazet filter, were designed to identify bad portions of the curve. Specific filters allowed one to discriminate spikes and tares (Z½5, DM47).
The new preprocessing techniques are based on the “remove-restore” principle. A model of the tides (tidal prediction) and of the pressure effects is subtracted from the observations (“remove” procedure) and the corrections are directly applied on the residues. Interpolation of missing data becomes a simple linear interpolation between the two edges of a gap, as sketched below.
Corrected observations are then recomputed (“restore” procedure). The quality of the tidal model is essential, as the interpolated data will in fact be replaced by this model during the “restore” procedure.
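A minimal remove-restore sketch in Python, assuming the model and the data share the same sampling and calibration; 'gap' is a boolean mask of the missing samples.

# Remove a tidal (plus pressure) model, interpolate the gap linearly
# on the smooth residue, then add the model back.
import numpy as np

def remove_restore(data, model, gap):
    residue = data - model                             # "remove"
    idx = np.arange(len(data))
    # Gap samples (possibly NaN in 'data') are overwritten here.
    residue[gap] = np.interp(idx[gap], idx[~gap], residue[~gap])
    return residue + model                             # "restore"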
Of course the degree of automation is critical. Completely automated procedures such as PRETERNA can be dangerous. This is why the “T-soft” software (Vauterin, 1999) has been developed with a high degree of interactivity.
Generally there is no a priori model, and the model will be derived from the analysis of a less perturbed part of the data set. As a first approximation it is also possible to get “modelled” tidal factors using a body tides model and oceanic loading computations. The model can be improved in an iterative way.
To apply the remove-restore principle safely it is necessary to remember that:
- the tidal factors of the model should be computed without application of the instrumental time lag;
- the calibration used to compute the model should be the same as the calibration applied on the data in the preprocessing phase.
Here also T-soft can provide useful options. It is possible to compute a linear regression between the tidal prediction and the raw gravity data to determine automatically an apparent sensitivity, without any knowledge of the real calibration. The global fit replaces the tidal prediction and the residues can be directly corrected. The corrected data are obtained by summing up the global fit and the corrected residues.
The modelling can be improved by adding a pressure channel to the regression. It is even possible to take into account an unknown instrumental time lag or timing error by using the time derivative of the theoretical tides as an auxiliary channel. This procedure is only precise for time offsets of a few minutes.
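A sketch of such a regression (the names are assumptions, this is not the T-soft code): the coefficient of the time derivative, divided by the coefficient of the prediction itself, approximates a small time offset, since raw ≈ a·pred(t+τ) ≈ a·pred + a·τ·dpred/dt.

# Multilinear fit of the raw record against the tidal prediction,
# its time derivative and a pressure channel.
import numpy as np

def apparent_fit(raw, prediction, pressure, dt_seconds=60.0):
    dpred = np.gradient(prediction, dt_seconds)        # d(prediction)/dt
    X = np.column_stack([prediction, dpred, pressure,
                         np.ones_like(raw)])
    (a, b, c, const), *_ = np.linalg.lstsq(X, raw, rcond=None)
    time_offset = b / a            # seconds; only valid if small
    return a, c, time_offset       # apparent sensitivity, pressure coeff., lag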
6. Monitoring the sensitivity changes
The calibration factor C is expressed in [physical units] per [recording units] i.e. for a
gravimeter connected to a digital voltmeter in nms-2/V. The instrumental sensitivity s is the
inverse and will thus be expressed in V/nms-2.
The usual way of calibrating an instrument is to apply a perturbation with a known
amplitude K to the instrument and observe the amplitude of its reaction d. The calibration of
the instrument is then
C=K/d
(6.1)
and the original observations should be multiplied by C prior to tidal analysis.
Its sensitivity is then given by
s = d/K
(6.2)
If K is constant, the variation of the sensitivity is directly proportional to the variations of d.
The calibration procedure is generally long and tedious and, what is worse, perturbs the tidal records. This is why during the recording period there are only a few calibrations available, and nobody knows how the sensitivity behaves between two calibrations. The best we can do is to compute an instantaneous calibration value by linear interpolation between two successive calibrations.
However we can follow accurately the changes of sensitivity using the tidal records themselves, as we suppose that the tidal parameters are perfectly stable. If we have a good model for the tidal amplitude factors and phase differences, we can generate a tidal prediction in physical units and fit the observations to the prediction on, let us say, 48-hour blocks. The regression coefficient d′ is thus expressed in [recording units] per [physical units] and is thus an expression of the instrumental sensitivity. This option has been implemented in T-soft under “moving window regression”. Auxiliary channels, such as pressure, can be incorporated in the multilinear regression. We can thus follow the sensitivity variations from block to block. Due to noise and perturbations it is often necessary to smooth the individual values to get a continuous behaviour.
However one has generally, for a block i where a calibration was performed with an amplitude d,
d ≠ d′i.
As we know that the calibration value must be the same, we can write

C = K/d = K′/d′i    (6.3)

with

K′/K = d/d′i = fi    (6.4)

or

K′ = fi·K    (6.5)

To determine a mean value f̄ we have only to compute d′ on the days where a calibration has been performed and average all the d/d′ values. Then all the successive d′j values from each block will provide a continuous series of calibration factors

Cj = f̄·K/d′j    (6.6)
If we are using non-overlapping filters of the same length as the blocks, we can directly multiply the filtered values by the corresponding calibration factor. This possibility has been widely used at ICET with the VEN66 method.
More generally we can interpolate linearly between the smoothly changing values of Cj and multiply each observation by the corresponding value.
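A rough Python sketch of this moving window regression and of the calibration series (6.6); the 48-hour block length is the example used above, and all names are illustrative.

# Track the sensitivity d'_j block by block, then build C_j = f.K/d'_j.
import numpy as np

def sensitivity_series(raw, prediction, block_len=48):
    slopes = []
    for start in range(0, len(raw) - block_len + 1, block_len):
        sl = slice(start, start + block_len)
        p = prediction[sl] - prediction[sl].mean()
        r = raw[sl] - raw[sl].mean()
        slopes.append((p @ r) / (p @ p))   # d'_j, in recording units
    return np.array(slopes)

def calibration_series(slopes, f_mean, K):
    return f_mean * K / slopes             # C_j = f.K/d'_j, eq. (6.6)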
7. Recent tidal analysis methods
As outlined before, in the least squares approach there are two main families of methods:
- moving window filtering and global evaluation of the tidal families, following T. Chojnicki. The most popular tidal analysis method along this line is “ANALYZE” from the ETERNA package by H.-G. Wenzel (1999);
- non-overlapping filtering and separation of the tidal families, following A. P. Venedikov. There is a continuous lineage of methods from VEN66 to VAV03.
Numerous tests on the same data sets have never shown any difference in the evaluation of the tidal parameters at a level exceeding the confidence intervals. However, as outlined in section 2.4, the RMS error determination can be very different, due to the coloured noise characteristics of the tidal data. The separation of the tidal families allows one to approximate the noise in each frequency band, providing directly realistic error estimates.
The BAYTAP-G method (Tamura et al., 1991) proceeds from a different philosophy.
7.1 ETERNA
ETERNA became the most popular tidal analysis package due to its versatility. It is very well documented. The associated data format became the official transfer format adopted by the International Centre for Earth Tides (ICET).
One can use different sampling rates, from one minute to one hour.
It is valid for all tidal components: potential, gravity, tilt, strain, displacements.
It can use any tidal development, from less than 400 waves (Doodson) up to more than 10,000 (Hartmann-Wenzel).
It is possible to include up to five auxiliary channels in order to evaluate a simple regression coefficient between the main channel and the perturbing signals.
It includes a tidal prediction program (PREDICT), a tidal analysis program (ANALYZE) and a preprocessing program (PRETERNA). As auxiliary facilities one can prepare time series from IERS polar motion data to compute polar motion effects on gravity, and evaluate tidal loading vectors from any oceanic tidal model.
The data can be separated into blocks. On each block one can apply a different scale factor and time lag for the main tidal signal. It is also possible to apply a bias parameter on any channel at the beginning or even inside a block.
The most usual way is to apply a band-pass filter for the evaluation of the tidal families from diurnal up to quarter-diurnal. A wide range of filters is available, from the simple Pertsev filter on 51 hourly values up to very long filters.
In the unfiltered option it is also possible to model the long term “drift” by a polynomial
representation on each block. This option should be used to evaluate the LP tides and even the
polar motion signal.
The drawbacks of the program ANALYZE are:
- The least squares adjustment supposes a white noise on the data, a hypothesis certainly not verified, as geophysical noise is coloured. The RMS error on the unit weight is thus a mean value over the tidal bands, and the computed errors on the estimated parameters are thus under-estimated in the D band and over-estimated in the TD one. In the last version, ETERNA 3.4, Wenzel improved the situation by taking into account the noise in the different tidal bands to “colour” the error estimation.
- To avoid leakage effects it is necessary to use very long filters, with the problems already mentioned in sections 4.3 and 5.
- The program computes the residues on the filtered data and produces a list of the residues larger than the statistical 3σ level. However the rejection of the corresponding data would produce gaps, with the problems outlined before. The only solution is the reiteration of the preprocessing to correct the corresponding portions of the curve.
- The only way to take into account the sensitivity changes is to create blocks with constant calibration. This is why it is recommended to multiply the data by their calibration table before the analysis, using at least a linear interpolation between successive calibrations, or even a smoothed calibration table as explained in section 6.
7.2 Evolution from the VEN66 to the VAV03 method
Initially the main advantage of the VEN66 method was that it was very cheap in computer time, due to the non-overlapping filtering on 48-hour blocks. Moreover the separation of the different tidal bands led to more realistic error estimation than the other least squares estimates. This version was until recently in use at ICET, which extended the program to strain computation (MT71) and adopted the Tamura tidal potential (MT71tam). Auxiliary programs were used to detect the bad portions of the data, and the corresponding blocks were easily removed. For instruments with variable sensitivity it was also possible to produce smoothed calibration tables following the procedure described in section 6. The last version takes into account the systematic difference in tidal response between the waves derived from P2m and P3m, and refines the group separation in order to separate the P31 and P32 waves depending on the lunar perigee (8.847-year period) and the nodal waves associated with the 18.6124-year lunar nodal period.
This program is now obsolete, due to the fact that it is not possible to include auxiliary channels to correct for pressure or temperature perturbations.
One important step in the evolution is the VEN98 variant which, besides the improvements of MT71, included a lot of new options:
- the influence of pressure and temperature was evaluated in amplitude and phase, separately for each tidal band;
- it was possible to weight the even and odd filtered numbers according to their variance;
- it was possible to reject bad intervals on the basis of a given confidence interval, e.g. 3σ;
- it was possible to introduce non-tidal frequencies besides the tidal spectrum;
- the Akaike criterion was introduced to judge the real improvement of a solution when the number of parameters increased;
- the P31 and P32 components were treated as specific groups mixed with the normal P21 and P22 waves.
Finally the VAV software, developed since 2000, will be extensively presented here.
7.3 The BAYTAP-G approach
BAYTAP-G (Bayesian Tidal Analysis Program - Grouping Model) is a general analysis program for earth tides and crustal movements, which includes a Bayesian model in the analysis. BAYTAP-G (original version 85-03-26) has the following utilities:
(1) estimation of tidal amplitude factors and phases;
(2) determination of the trend and estimation of its spectrum;
(3) interpolation of missing data and estimation of step values;
(4) rough search for abnormal data;
(5) calculation of ABIC (Akaike's Bayesian Information Criterion), which shows the goodness of the analysis model.
BAYTAP-G determines the constants by minimising the following expression, the hyperparameters D and W being chosen to minimise ABIC:

Σ_{i=1}^{n} [ y_i − Σ_{m=1}^{M} (A_m·C_mi + B_m·S_mi) − d_i − Σ_{k=0}^{K} b_k·x_{i−k} − h·z_i ]²
+ D²·Σ_{i=3}^{n} (d_i − 2d_{i−1} + d_{i−2})²
+ W²·Σ_{m=2}^{M} [ (A_m − A_{m−1})² + (B_m − B_{m−1})² ]
where m is the group number of the tidal constituents, C_mi and S_mi are the sums of the cosine and sine parts of the constituents in the m-th group respectively (theoretical values), A_m and B_m are the tidal constants to be determined, d_i is the drift (trend), x_i are the associated data, b_k are the response weights, h is the step value and z_i is a step function whose value is zero before the step and one after it. The summation order K fixes the degree of the impulse response for the associated data. Setting K = 0 is equivalent to evaluating a single efficiency coefficient.
The hyperparameter D is a coefficient applied to the 2nd order difference of the trend (optionally it is possible to select the 3rd order difference). If the value of D is large, the trend will be nearly linear. Conversely, if D is small, the trend will bend closely towards the original data. We say that the trend is rigid in the former case and soft in the latter. The hyperparameter D is automatically chosen by BAYTAP-G to minimise ABIC. It is assumed in BAYTAP-G that the trend is only changing slowly with time, so we can treat a complex trend which cannot be expressed as a polynomial or a periodic function. This is one of the most interesting features of this analysis program.
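A toy Python sketch of this trend regularisation, keeping only the drift term and the second-difference penalty weighted by D; tides and associated channels are omitted for clarity, so this is an illustration of the principle, not the BAYTAP-G code.

# Penalised least squares: minimise |y - d|^2 + D^2 |P d|^2, where P is
# the second-difference operator. Large D -> rigid (nearly linear)
# trend; small D -> trend bends towards the data.
import numpy as np

def smooth_trend(y, D):
    n = len(y)
    # Rows of P compute d_i - 2 d_{i-1} + d_{i-2}.
    P = np.zeros((n - 2, n))
    for i in range(n - 2):
        P[i, i:i + 3] = (1.0, -2.0, 1.0)
    # Normal equations: (I + D^2 P'P) d = y
    d = np.linalg.solve(np.eye(n) + D**2 * (P.T @ P), y)
    return d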
The hyperparameter W expresses the smoothness of the tidal admittance curve. This condition is not applied at the boundaries between tidal species (D, SD, TD...), where there is a gap in the frequency distribution of the tidal constituents. This condition was omitted from the previous equation to keep the analysis model clear. A special option allows one to represent the liquid core resonance using a theoretical model based on a resonance frequency at 15.0737 deg/h with Q = 1,150.
It is optionally possible to suppress the harmonic expression of the tidal potential, and the program becomes a simple response method: if one prepares associated data sets and does not include the theoretical tides in the analysis, BAYTAP-G works as a response method. Though the standard analysis model of BAYTAP-G is a harmonic one, the use of the response method can actually be useful in the analysis of non-tidal or heavily disturbed data. To determine tidal constants by the response method, it is necessary to prepare a time domain representation of the theoretical tides in each tidal band as associated data sets.