Weakly nonlinear and stochastic properties of ocean
wave fields: Application to an extreme wave event
Karsten Trulsen
Mechanics Division, Department of Mathematics, University of Oslo, Norway
Abstract These notes give a graduate level introduction to the weakly nonlinear
and stochastic theory of sea surface waves, for application to wind-generated wave
fields on deep-water open ocean, with particular application to the Draupner “New
Year Wave” that occurred in the central North Sea on January 1st 1995. The
reader is assumed to be familiar with linear dispersive wave theory. No background
in nonlinear or stochastic wave theory is assumed.
1 Introduction
The material contained here is to a large extent motivated by the so-called Draupner
“New Year Wave”, an extreme wave event that was recorded at the Draupner E platform
in the central North Sea on January 1st 1995 (Haver 2004; Karunakaran et al. 1997).
This location has an essentially uniform depth of 70 m. The platform is of jacket type
and is not expected to modify the wave field in any significant way. The platform had
a foundation of a novel type, and for this reason was instrumented with a large number
of sensors measuring environmental data, structural and foundation response. We are
particularly interested in measurements taken by a down-looking laser-based wave sensor,
recording surface elevation at a sampling rate of 2.1333 Hz during 20 minutes of every hour. The
full 20 minute time series recorded starting at 1520 GMT is shown in Figure 1 and a
close-up of the extreme wave event is shown in Figure 2. To remove any doubt that
the measurements are of good quality, Figure 3 shows an even finer close-up with the
individual measurements indicated. It is clear that the extreme wave is not an isolated
erroneous measurement. The minimum distance between the sensor and the water surface
was 7.4 m.
The significant wave height was Hs = 11.9 m while the maximum wave height was
25.6 m with a crest height of 18.5 m. Haver (2004) states that the wave itself was not
beyond design parameters for the platform, but basic engineering approaches did not
suggest that such a large wave should occur in a sea state with such a small value of Hs .
Some damage was reported to equipment on a temporary deck. The wave has often been
referred to as a freak or rogue wave.
During the past decade, a considerable effort has been undertaken by many people to
understand the Draupner wave, and rogue waves in general. Statoil should be commended
for their policy of releasing this wave data to the public, thus igniting an avalanche of
exciting research. Much of the work reported here has been carried out with this motivation,
Figure 1. Draupner 20 minute time series starting at 1520 GMT.
with a particular philosophy to divert attention away from the immediate neighborhood
of the extreme wave itself, and rather try to understand the general wave conditions of
the entire wave field in which it occurred.
Freak waves are by definition unusual in comparison with the sea state in which
they occur. A possibly surprising conclusion of the following considerations is that the
Draupner “New Year Wave” may not deserve to be called a freak wave, at least not
to the extent first anticipated, taking into account the sea state in which it occurred.
A consequence of second- and third-order nonlinear wave theory is that waves like the
Draupner “New Year Wave” should occur much more frequently, within the given sea
state, than anticipated from linear theory.
2 Preliminary considerations — empirical description
We define the mean water level by its mean position in the N = 2560 discrete measurements of duration 20 minutes. Thus Figures 1, 2 and 3 show the surface elevation relative to the mean water level, ηn ≡ η(tn), where tn = n∆t are the discrete times, and where the time average is by definition

η̄ ≡ (1/N) ∑_{n=0}^{N−1} ηn = 0.
Figure 2. Extract of Draupner time series close to the extreme wave.
The standard deviation, or root-mean-square, of the discrete measurements can be computed as

σ = √(η̄²) = ( (1/N) ∑_{n=0}^{N−1} ηn² )^{1/2} = 2.98 m.
The significant wave height is defined as four times the standard deviation,

Hs = 4σ = 11.9 m.  (2.1)
For reasons that will become clear soon, we define the characteristic amplitude as

ā = √2 σ = 4.2 m.  (2.2)
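As a quick sanity check of definitions (2.1)–(2.2), the sketch below evaluates σ, Hs and ā for a synthetic sinusoidal record with the same sampling as the Draupner series; the amplitude 4.2 m is an illustrative choice (the real measurements are not reproduced here). For a pure sinusoid of amplitude a, the characteristic amplitude ā = √2 σ recovers a itself, which motivates the definition.

```python
import numpy as np

# Synthetic stand-in for the Draupner record: N = 2560 samples at 2.1333 Hz.
# A sinusoid placed on an exact FFT bin has zero mean over the record.
N, fs = 2560, 2.1333
t = np.arange(N) / fs
a = 4.2                                   # amplitude chosen for illustration
f0 = 100 * fs / N                         # ~0.083 Hz, an exact bin
eta = a * np.cos(2 * np.pi * f0 * t)

sigma = np.sqrt(np.mean((eta - eta.mean())**2))   # standard deviation
Hs = 4.0 * sigma                                  # significant wave height, eq. (2.1)
a_bar = np.sqrt(2.0) * sigma                      # characteristic amplitude, eq. (2.2)
# a_bar recovers the sinusoid amplitude a, and Hs comes out close to 11.9 m
```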
The wave height H is defined as the vertical distance between a crest and a neighboring
trough. Care must be taken to distinguish wave heights defined by zero-up-crossing or
zero-down-crossing. For the given time series, there are 106 down-crossing wave heights
and 105 up-crossing wave heights. The maximum down-crossing wave height is 25.6
m and the maximum up-crossing wave height is 25.0 m. After all the individual wave
heights are found, we can sort them in decreasing order. For any positive number α, the
average of the 1/α highest waves is denoted as H1/α . Particularly common is the average
Figure 3. Extract of Draupner time series close to the extreme wave, with discrete
measurements indicated.
of the 1/3 highest waves; the down-crossing H1/3 is 11.6 m and the up-crossing H1/3 is
11.4 m. These values are almost identical to Hs .
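The zero-crossing bookkeeping described above can be sketched as follows; the segmentation rule and the toy sinusoidal record are illustrative assumptions, not the Draupner data:

```python
import numpy as np

def downcrossing_wave_heights(eta):
    """Split the record at zero down-crossings (eta passes from >= 0 to < 0)
    and return the crest-to-trough height of each individual wave."""
    idx = np.where((eta[:-1] >= 0) & (eta[1:] < 0))[0]
    return np.array([eta[i0:i1 + 1].max() - eta[i0:i1 + 1].min()
                     for i0, i1 in zip(idx[:-1], idx[1:])])

def H_one_over(heights, alpha=3):
    """Average of the 1/alpha highest waves, e.g. alpha = 3 gives H_1/3."""
    h = np.sort(heights)[::-1]                  # decreasing order
    return h[:max(1, len(h) // alpha)].mean()

# toy record: a pure sinusoid of amplitude 1 gives wave heights of exactly 2
t = np.linspace(0, 3, 3000, endpoint=False)
heights = downcrossing_wave_heights(np.cos(2 * np.pi * t))
```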
The maximum crest height of 18.5 m is equal to 6.2 standard deviations. The maximum wave height of 25.6 m is equal to 8.6 standard deviations, or equivalently, 2.1
significant wave heights. Some researchers like to define freak waves by some of these
ratios being larger than some thresholds. If the Draupner “New Year Wave” deserves to
be called freak, then the crest height is certainly more freak than the wave height.
The frequency spectrum S(ω) can be estimated by the square magnitude of the Fourier
transform of the time series 2|η̂(ω)|2 , properly normalized such that the integral under
the spectral curve is equal to the variance of the surface elevation. Continuous and
discrete Fourier transforms are reviewed in appendix A. We employ the discrete Fourier
transform of the time series,

η̂j = (1/N) ∑_{n=0}^{N−1} ηn e^{iωj tn},  (2.3)
and use only the positive frequencies ωj = 2πj/T for j = 0, 1, . . . , N/2 for the representation of the frequency spectrum. Figure 4 shows the estimated spectrum with linear
axes, while Figure 5 shows the same with logarithmic axes, in both cases without any
smoothing.
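A minimal sketch of this normalization convention in NumPy: the one-sided spectrum is built from 2|η̂|², with the DC and Nyquist bins treated separately, so that the integral (sum times ∆f) recovers the variance. The synthetic Gaussian record is only a placeholder.

```python
import numpy as np

def frequency_spectrum(eta, fs):
    """One-sided spectrum from 2|eta_hat|^2, normalized so that
    sum(S) * df equals the variance of the surface elevation."""
    N = len(eta)
    eta_hat = np.fft.rfft(eta - eta.mean()) / N   # 1/N convention of eq. (2.3)
    S = 2.0 * np.abs(eta_hat)**2                  # fold negative frequencies in
    S[0] = 0.0                                    # mean has been removed
    if N % 2 == 0:
        S[-1] /= 2.0                              # Nyquist bin has no mirror
    df = fs / N
    return np.fft.rfftfreq(N, d=1.0 / fs), S / df

rng = np.random.default_rng(1)
eta = rng.standard_normal(4096)                   # placeholder record
freq, S = frequency_spectrum(eta, fs=2.1333)
```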
Figure 4. Frequency spectrum estimated from 2|η̂(ω)|2 without smoothing, linear axes.
Based on the Fourier transform of the time series for surface elevation, we may estimate a characteristic angular frequency as the expected value of the spectral distribution
ωc = ∑j |ωj| |η̂j|² / ∑j |η̂j|² = 0.52 s⁻¹,  (2.4)
corresponding to a characteristic frequency of 0.083 Hz and a characteristic period of
Tc = 12 s. Here it is understood that the index j is centered around the origin.
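The spectral-mean estimate (2.4) can be sketched as follows; the test signal is a sinusoid placed on an exact FFT bin near 0.083 Hz:

```python
import numpy as np

def characteristic_frequency(eta, fs):
    """Spectral mean (2.4): sum(|omega_j| |eta_hat_j|^2) / sum(|eta_hat_j|^2),
    with the index j centered around the origin."""
    N = len(eta)
    eta_hat = np.fft.fft(eta - eta.mean()) / N
    omega = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / fs)
    w = np.abs(eta_hat)**2
    return np.sum(np.abs(omega) * w) / np.sum(w)

# a sinusoid on an exact FFT bin near 0.083 Hz returns omega_c ~ 0.52 s^-1
fs, N = 2.1333, 2560
f0 = 100 * fs / N
t = np.arange(N) / fs
wc = characteristic_frequency(np.cos(2 * np.pi * f0 * t), fs)
```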
The characteristic wavenumber can be estimated based on the assumption that the
waves are linear to leading order. Then we simply solve the linear dispersion relation
ω 2 = gk tanh kd
where g = 9.81 m/s2 is the acceleration of gravity and d = 70 m is the relevant depth.
We find the characteristic wavenumber kc = 0.029 m−1 , the characteristic wave length
λc = 217 m and the characteristic non-dimensional depth kc d = 2.0. As far as estimating
the characteristic wavenumber is concerned, no great error is done using infinite depth
in which case we get kc = 0.028 m−1 and λc = 225 m.
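Solving ω² = gk tanh kd for k has no closed form; a simple fixed-point iteration starting from the deep-water limit converges quickly for these parameters. A sketch:

```python
import math

def solve_wavenumber(omega, d, g=9.81):
    """Solve omega^2 = g k tanh(k d) for k by fixed-point iteration,
    starting from the deep-water value k = omega^2 / g."""
    k = omega**2 / g
    for _ in range(100):
        k = omega**2 / (g * math.tanh(k * d))
    return k

omega_c = 0.52                       # characteristic angular frequency, eq. (2.4)
k_c = solve_wavenumber(omega_c, d=70.0)
lambda_c = 2 * math.pi / k_c
# gives k_c ~ 0.029 m^-1 and k_c * d ~ 2.0, in line with the values above
```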
Knowing the characteristic wavenumber, we may proceed to compute the characteristic steepness ǫ = kc ā = 0.12.
Figure 5. Frequency spectrum estimated from 2|η̂(ω)|2 without smoothing, logarithmic
axes. Overlain are two possible power decay laws for the high frequency tail.
It will be useful to have some measure of the characteristic deviation ∆ω of the frequency spectrum around the characteristic frequency ωc. We define the dimensionless bandwidth to be the ratio between the deviation and the mean,

δω = ∆ω / ωc.
The most simple-minded approach to estimate ∆ω is to stare at Figure 4. This leads to
the conclusion that ∆ω/2π ≈ 0.025 Hz, and thus δω ≈ 0.32.
More sophisticated approaches to compute ∆ω, for example by computing the standard deviation of frequency in the frequency spectrum, typically yield unreasonably large and less useful values of ∆ω. This is because the high-frequency tail of the spectrum is largely due to nonlinear bound waves and measurement noise. As far as appraising the desired properties of possible simplified models is concerned, it turns out that the eye-balling approach is more reliable.
In conclusion of these hand-waving considerations, the important observations are given by three non-dimensional numbers: The characteristic steepness is ǫ = kc ā ≈ 0.12, the characteristic bandwidth is δω = ∆ω/ωc ≈ 0.32 and the characteristic non-dimensional depth is h = kc d ≈ 2.0. This means that the wave field is weakly nonlinear, has finite small bandwidth and is essentially on deep water. Notice also that the bandwidth and the steepness have the approximate scaling relationship

δω ∼ √ǫ.
In the next sections we show how to take advantage of these scaling considerations to
construct simplified mathematical models that can describe this wave field.
Finally it is instructive to see how the sea state of the Draupner wave compares to
typical sea states in the northern North Sea. Figure 6 contains approximately 70 000
data points, each representing a 20 minute wave record from the northern North Sea. Tp
denotes the wave period corresponding to the frequency at the spectral peak; we have
approximately Tp ≈ Tc. There is no doubt that the sea state of the Draupner “New
Year Wave” is extreme. This provokes the need to make the following distinction: Is
it only the sea state that is extreme while the Draupner “New Year Wave” should be
anticipated within its sea state, or is the Draupner “New Year Wave” also extreme given
the sea state in which it occurred?
Figure 6. Scatter diagram for Hs and Tp from the northern North Sea. Pooled data
1973–2001, from the platforms Brent, Statfjord, Gullfax and Troll. Curves of constant
steepness are also shown. The figure was prepared by K. Johannessen, Statoil.
3 The governing equations
As our starting point we take the equations for the velocity potential φ(r, z, t) and surface
displacement η(r, t) of an incompressible inviscid fluid with uniform depth d,
∇²φ = 0 for −d < z < η,  (3.1)

∂η/∂t + ∇φ·∇η = ∂φ/∂z at z = η,  (3.2)

∂φ/∂t + gη + (1/2)(∇φ)² = 0 at z = η,  (3.3)

∂φ/∂z = 0 at z = −d.  (3.4)
Here g is the acceleration of gravity, the horizontal position vector is r = (x, y), the
vertical coordinate is z, ∇ = (∂/∂x, ∂/∂y, ∂/∂z) and t is time.
The solution of the linearization of equations (3.1)–(3.4) is a linear superposition of simple harmonic waves such as

η(r, t) = a cos(k·r − ωt),

φ(r, z, t) = (ωa/k) [cosh k(z + d) / sinh kd] sin(k·r − ωt),

subject to the dispersion relation for gravity waves on finite depth,

ω² = gk tanh kd,  (3.5)

where a is the amplitude, ω is the angular frequency, k = (kx, ky) is the wave vector and k = √(kx² + ky²) is the wavenumber.
In section 2 we found characteristic scales ωc, kc and ā, which can be used to introduce properly scaled dimensionless quantities

āη′ = η,  (ωc ā / kc) φ′ = φ,  (r′, z′) = kc (r, z),  t′ = ωc t,  h = kc d.
Dropping the primes, the normalized equations become

∇²φ = 0 for −h < z < ǫη,  (3.6)

∂η/∂t + ǫ∇φ·∇η = ∂φ/∂z at z = ǫη,  (3.7)

∂φ/∂t + η/s + (ǫ/2)(∇φ)² = 0 at z = ǫη,  (3.8)

∂φ/∂z = 0 at z = −h,  (3.9)

where we use the notation ǫ = kc ā and s = tanh h.
With small steepness ǫ ≪ 1 it is reasonable to assume perturbation expansions

η = η1 + ǫη2 + ǫ²η3 + . . . ,
φ = φ1 + ǫφ2 + ǫ²φ3 + . . .  (3.10)
4 Weakly nonlinear narrow-banded equations
4.1 The bandwidth
The empirical considerations in section 2 reveal that the steepness is small, but the
bandwidth is not quite as small. Traditionally there have been two basic approaches
to deal with the bandwidth of weakly nonlinear ocean waves. In the classical literature
on nonlinear Schrödinger equations, the bandwidth is assumed to be as small as the
steepness. Otherwise, with no constraint on bandwidth, a straightforward application
of the perturbation expansions (3.10) leads to the much more complicated Zakharov
integral equations (Zakharov 1968; Krasitskii 1994). The numbers derived in section 2
suggest that some intermediate model may be optimal in terms of mathematical and
computational complexity (Trulsen & Dysthe 1997).
To get a good feeling for the significance of the bandwidth, a good example is the idealized wave packet

f(x) = e^{−(κx)²} cos kc x  (4.1)

of approximate width 1/κ containing oscillations of length 2π/kc. The Fourier transform is

f̂(k) = (1/2π) ∫_{−∞}^{∞} f(x) e^{−ikx} dx = (1/(4κ√π)) ( e^{−((k−kc)/2κ)²} + e^{−((k+kc)/2κ)²} ).  (4.2)
Thus the Fourier transform is centered around k = ±kc with a width κ that is the inverse
of the length of the wave packet. A natural definition of bandwidth could now be
δk = κ/kc.  (4.3)
If the width of the packet is much greater than the length of a basic wave oscillation, δk ≪ 1, it is natural to identify two characteristic scales for x; a fast scale associated with rapid oscillations x0 = kc x and a slow scale associated with the envelope x1 = κx = δk x0.
Thus we may write

f(x) = f(x0, x1) = e^{−x1²} cos x0.  (4.4)
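The transform pair (4.1)–(4.2) is easy to verify numerically; the packet parameters below are arbitrary illustrative choices with kc ≫ κ:

```python
import numpy as np

# Packet parameters: illustrative choices only.
kc, kappa = 10.0, 1.0
x = np.linspace(-40.0, 40.0, 16384)
dx = x[1] - x[0]
f = np.exp(-(kappa * x)**2) * np.cos(kc * x)

# discrete approximation of (1/2pi) * integral f(x) exp(-i k x) dx,
# with a phase factor correcting for the grid not starting at x = 0
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
f_hat = np.fft.fft(f) * dx * np.exp(-1j * k * x[0]) / (2 * np.pi)

# analytic transform (4.2): two Gaussians of width 2*kappa at k = +/- kc
analytic = (np.exp(-((k - kc) / (2 * kappa))**2) +
            np.exp(-((k + kc) / (2 * kappa))**2)) / (4 * kappa * np.sqrt(np.pi))
```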
4.2 Derivation of higher-order nonlinear Schrödinger equations
Based on all previous considerations, we now propose the “optimal” scaling assumptions for the wave field in which the Draupner wave occurred (Trulsen & Dysthe 1996,
1997):
ǫ ≈ 0.1,  δk, δω = O(ǫ^{1/2}),  h⁻¹ = O(1).  (4.5)
Pursuing these “optimal” scaling assumptions leads to messy-looking higher-order equations. Some limiting cases of this exercise have been done, for finite-depth narrow-banded
waves by Brinch–Nielsen & Jonsson (1986), and for wider-banded deep-water waves by
Trulsen & Dysthe (1996).
The much simpler higher-order equations derived from the following simplified assumptions still capture a large portion of the essential physics:
ǫ ≈ 0.1,  δk, δω = O(ǫ),  h⁻¹ = O(ǫ).  (4.6)
This was done for infinite depth by Dysthe (1979) and for deep water by Lo & Mei (1985,
1987).
Pursuing the simplified scaling assumptions (4.6), the following harmonic expansions for the velocity potential and surface displacement are employed:

φ = ǫφ̄ + (1/2)( A′1 e^{iθ} + ǫA′2 e^{2iθ} + ǫ²A′3 e^{3iθ} + · · · + c.c. ),  (4.7)

η = ǫη̄ + (1/2)( B e^{iθ} + ǫB2 e^{2iθ} + ǫ²B3 e^{3iθ} + · · · + c.c. ).  (4.8)

Here c.c. denotes the complex conjugate. The phase is θ = x − t after having oriented the x-axis in the direction of the characteristic wave vector kc. The slow drift φ̄ and set-down η̄ as well as the harmonic amplitudes A′1, A′2, A′3, . . . , B, B2, B3, . . . are functions of the slow modulation variables ǫr and ǫt. In the assumed case of deep water h⁻¹ = O(ǫ) the induced current φ̄ also depends on the slow vertical coordinate z̄ = ǫz while the variables A′1, A′2, A′3, . . . also depend on the basic vertical coordinate z. For finite depth h⁻¹ = O(1) it would be necessary to rescale φ̄ and assume that it too depends on the basic vertical coordinate.

The vertical dependence of the harmonic amplitudes A′n for n ≥ 1 is found from the Laplace equation and the bottom boundary condition,

∂²A′n/∂z² − n²A′n + 2ǫin ∂A′n/∂x + ǫ² ∂²A′n/∂x² + ǫ² ∂²A′n/∂y² = 0 for −h < z,  (4.9)

∂A′n/∂z = 0 at z = −ǫ⁻¹h.  (4.10)

The vertical dependence can then be found by the perturbation expansion

A′n = An,0 + ǫAn,1 + ǫ²An,2 + . . .  (4.11)
For the harmonic amplitudes that decay exponentially on the basic vertical scale, the bottom condition is essentially at infinite depth. Thus the leading-order solution is

An,0 = An e^{nz},  (4.12)

which evaluates to An at z = 0; An is not a function of z. With the surface boundary condition

An,j = 0 at z = 0 for j = 1, 2, . . .

we get solutions at higher orders,

An,1 = −iz ∂An/∂x e^{nz},  (4.13)

An,2 = ( −(z²/2) ∂²An/∂x² − (z/(2n)) ∂²An/∂y² ) e^{nz}.  (4.14)
Note: Throughout the following we use the notation B ≡ B1 and A ≡ A1 .
A review of the literature on higher order nonlinear Schrödinger equations over the past three decades may lead to a certain degree of confusion because authors are sometimes unclear about whether they are expressing the equations in terms of the surface displacement B or the velocity potential A. At the cubic Schrödinger level these equations are quite similar and the distinction is not important. However, at higher order the different versions of the equations (A or B, temporal or spatial evolution) turn out to have different types of terms giving them remarkably different properties. The spatial evolution equations in terms of A have particularly nice properties for analytical considerations, while the spatial evolution equations for B are more convenient for most practical applications. This choice may also have important consequences for the stability and accuracy of numerical schemes.
In the following sections we summarize the higher-order equations for A and B, for
spatial and temporal evolution. In passing between temporal and spatial evolution, we
have made the assumption that the induced flow φ̄ is only due to nonlinearly bound
response to the free first-harmonic waves.
4.3 Deep water time evolution of A

This was the form first derived by Dysthe (1979) for infinite depth, and by Lo & Mei (1985, 1987) for finitely deep water. The evolution equations are

∂A/∂t + (1/2) ∂A/∂x + (i/8) ∂²A/∂x² − (i/4) ∂²A/∂y² + (i/2)|A|²A
  − (1/16) ∂³A/∂x³ + (3/8) ∂³A/∂x∂y² + (7/4)|A|² ∂A/∂x − (1/4) A ∂|A|²/∂x + iA ∂φ̄/∂x = 0 at z = 0,  (4.15)

∂φ̄/∂z = (1/2) ∂|A|²/∂x at z = 0,  (4.16)

∂²φ̄/∂x² + ∂²φ̄/∂y² + ∂²φ̄/∂z² = 0 for −h < z,  (4.17)

∂φ̄/∂z = 0 at z = −h.  (4.18)

The reconstruction formulas are

η̄ = (1/2) ∂φ̄/∂x,  (4.19)

B = iA + (1/2) ∂A/∂x + (i/8) ∂²A/∂x² − (i/4) ∂²A/∂y² + (i/8)|A|²A,  (4.20)

A2 = 0,  (4.21)

B2 = −(1/2) A² + iA ∂A/∂x,  (4.22)

A3 = 0,  (4.23)

B3 = −(3i/8) A³.  (4.24)
4.4 Deep water space evolution of A

This form was employed by Lo & Mei (1985, 1987). The evolution equations are

∂A/∂x + 2 ∂A/∂t + i ∂²A/∂t² − (i/2) ∂²A/∂y² + i|A|²A
  − ∂³A/∂t∂y² − 8|A|² ∂A/∂t − 4iA ∂φ̄/∂t = 0 at z = 0,  (4.25)

∂φ̄/∂z = −∂|A|²/∂t at z = 0,  (4.26)

4 ∂²φ̄/∂t² + ∂²φ̄/∂y² + ∂²φ̄/∂z² = 0 for −h < z,  (4.27)

∂φ̄/∂z = 0 at z = −h.  (4.28)

Equation (4.25) has the exceptional property that there is no term proportional to A ∂|A|²/∂t.

The reconstruction formulas are

η̄ = −∂φ̄/∂t,  (4.29)

B = iA − ∂A/∂t − (3i/8)|A|²A,  (4.30)

A2 = 0,  (4.31)

B2 = −(1/2) A² − 2iA ∂A/∂t,  (4.32)

A3 = 0,  (4.33)

B3 = −(3i/8) A³.  (4.34)
4.5 Deep water time evolution of B

The evolution equations are

∂B/∂t + (1/2) ∂B/∂x + (i/8) ∂²B/∂x² − (i/4) ∂²B/∂y² + (i/2)|B|²B
  − (1/16) ∂³B/∂x³ + (3/8) ∂³B/∂x∂y² + (5/4)|B|² ∂B/∂x + (1/4) B ∂|B|²/∂x + iB ∂φ̄/∂x = 0 at z = 0,  (4.35)

∂φ̄/∂z = (1/2) ∂|B|²/∂x at z = 0,  (4.36)

∂²φ̄/∂x² + ∂²φ̄/∂y² + ∂²φ̄/∂z² = 0 for −h < z,  (4.37)

∂φ̄/∂z = 0 at z = −h.  (4.38)

The reconstruction formulas are

η̄ = (1/2) ∂φ̄/∂x,  (4.39)

A = −iB + (1/2) ∂B/∂x + (3i/8) ∂²B/∂x² − (i/4) ∂²B/∂y² + (i/8)|B|²B,  (4.40)

A2 = 0,  (4.41)

B2 = (1/2) B² − (i/2) B ∂B/∂x,  (4.42)

A3 = 0,  (4.43)

B3 = (3/8) B³.  (4.44)
4.6 Deep water space evolution of B

The evolution equations are

∂B/∂x + 2 ∂B/∂t + i ∂²B/∂t² − (i/2) ∂²B/∂y² + i|B|²B
  − ∂³B/∂t∂y² − 6|B|² ∂B/∂t − 2B ∂|B|²/∂t − 4iB ∂φ̄/∂t = 0 at z = 0,  (4.45)

∂φ̄/∂z = −∂|B|²/∂t at z = 0,  (4.46)

4 ∂²φ̄/∂t² + ∂²φ̄/∂y² + ∂²φ̄/∂z² = 0 for −h < z,  (4.47)

∂φ̄/∂z = 0 at z = −h.  (4.48)

The reconstruction formulas are

η̄ = −∂φ̄/∂t,  (4.49)

A = −iB − ∂B/∂t + i ∂²B/∂t² − (3i/8)|B|²B,  (4.50)

A2 = 0,  (4.51)

B2 = (1/2) B² + iB ∂B/∂t,  (4.52)

A3 = 0,  (4.53)

B3 = (3/8) B³.  (4.54)
4.7 Finite depth
When the depth is finite h ≈ 1, all the numerical coefficients in the above equations
become functions of s = tanh h. The induced flow φ̄ must be rescaled to a lower order.
The equations for the slow response (η̄, φ̄) change qualitative nature such as to support
free long waves. Furthermore, several new types of terms enter the evolution equations
for the short waves. Brinch–Nielsen & Jonsson (1986) derived equations for the temporal
evolution of the velocity potential A, corresponding to those summarized in section 4.3.
Sedletsky (2003) derived fourth-harmonic contributions also for the temporal evolution
of the velocity potential A.
We limit ourselves to just two particularly interesting observations, the following two reconstruction formulas that can be extracted from Brinch–Nielsen & Jonsson (1986):

B2 = ((3 − s²)/(4s³)) B² + . . . ,  (4.55)

B3 = (3(3 − s²)(3 + s⁴)/(64s⁶)) B³ + . . .  (4.56)

In the limit of infinite depth the two coefficients become 1/2 = 0.5 and 3/8 = 0.375, respectively, while for the target depth h = 2.0 for the Draupner wave field the coefficients are 0.58 and 0.47, respectively. It is evident that nonlinear contributions to the reconstruction of the surface profile are more important for smaller depths.
5 Exact linear dispersion
The above equations are based on the “simplified” scaling assumptions, and the perturbation analysis was carried out up to O(ǫ4 ) for the evolution equations.
The “optimal” scaling assumptions for the Draupner wave field require wider bandwidth than that strictly allowed by the above equations. Carrying out the perturbation
analysis to O(ǫ7/2 ) with the “optimal” bandwidth assumption leads to a large number of
additional linearly dispersive terms, but no new nonlinear terms, see Trulsen & Dysthe
(1996).
It is not difficult to account for the exact linear dispersive part of the evolution equations using pseudo-differential operators. This was shown for infinite depth by Trulsen
et al. (2000). Here we make the trivial extension to any depth.
The full linearized solution of (3.6)–(3.9) can be obtained by Fourier transform. The surface displacement can thus be expressed as

η(r, t) = ∫ η̂(k, t) e^{ik·r} dk = (1/2) ∫ b(k) e^{i(k·r − ω(k)t)} dk + c.c.,  (5.1)

where the frequency ω(k) is given by the linear dispersion relation

ω² = k tanh kh.  (5.2)
Writing this in the style of the first-harmonic term of the harmonic perturbation expansion (4.8) we get

η(r, t) = (1/2) B(r, t) e^{i(x−t)} + c.c.  (5.3)
The complex amplitude B(r, t) is defined by

B(r, t) = ∫ B̂(λ, t) e^{iλ·r} dλ = ∫ b(x̂ + λ) e^{i[λ·r − (ω(x̂+λ) − 1)t]} dλ,  (5.4)

where x̂ is the unit vector in the x-direction, λ = (λ, µ) is the modulation wave vector and k = x̂ + λ. The Fourier transform B̂ satisfies the equation

∂B̂/∂t + i [ω(x̂ + λ) − 1] B̂ = 0.  (5.5)
The evolution equation for B can be formally written as

∂B/∂t + L(∂x, ∂y)B = 0  (5.6)

with

L(∂x, ∂y) = i { [ ((1 − i∂x)² − ∂y²)^{1/2} tanh( ((1 − i∂x)² − ∂y²)^{1/2} h ) ]^{1/2} − 1 }.  (5.7)

On infinite depth this reduces to

L(∂x, ∂y) = i { ((1 − i∂x)² − ∂y²)^{1/4} − 1 }.  (5.8)
Linear evolution equations at all orders can now be obtained by expanding (5.6) in powers
of the derivatives. Hence we recover the linear part of the classical cubic NLS equation
up to second order, the linear part of the modified NLS equation of Dysthe (1979) up
to third order, and the linear part of the broader bandwidth modified NLS equation of
Trulsen & Dysthe (1996) up to fifth order.
Alternatively, an evolution equation for space evolution along the x-direction can be written as

∂B/∂x + L(∂t, ∂y)B = 0,  (5.9)
where L results from expressing the linear dispersion relation for the wavenumber as a
function of the frequency. This is difficult to do in closed form for finite depth, but for
infinite depth it reduces to
L(∂t, ∂y) = −i { ((1 + i∂t)⁴ + ∂y²)^{1/2} − 1 }.  (5.10)
Expanding (5.9) in powers of the derivatives we recover the linear part of the broader
bandwidth modified NLS equation for space evolution in Trulsen & Dysthe (1997) up
to fifth order. Equation (5.10) has the interesting property that for long-crested waves
(∂y = 0) the pseudodifferential operator becomes an ordinary differential operator with
only two terms.
The linear constant coefficient equations (5.6) and (5.9) are most easily solved numerically in Fourier transform space so there is no need to approximate the operators
with truncated power series expansions.
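A minimal sketch of this Fourier-space solution strategy for long-crested waves on infinite depth, in the normalized units of (5.6): each Fourier mode of the envelope is advanced exactly by the phase factor from (5.5). The grid size and Gaussian envelope are arbitrary choices.

```python
import numpy as np

def evolve_linear(B0, L_domain, t):
    """Advance the envelope exactly under (5.6): each Fourier mode lambda of B
    is multiplied by exp(-i (omega(1 + lambda) - 1) t), with the normalized
    deep-water dispersion omega(k) = sqrt(k)."""
    N = len(B0)
    lam = 2 * np.pi * np.fft.fftfreq(N, d=L_domain / N)   # modulation wavenumbers
    omega = np.sqrt(np.abs(1.0 + lam))                    # |k| guards the far tail
    B_hat = np.fft.fft(B0)
    return np.fft.ifft(B_hat * np.exp(-1j * (omega - 1.0) * t))

# a Gaussian envelope disperses, but the multiplier is unitary so the
# discrete energy sum(|B|^2) is conserved to machine precision
xgrid = np.linspace(0.0, 50.0, 512, endpoint=False)
B0 = np.exp(-((xgrid - 25.0) / 5.0)**2).astype(complex)
B1 = evolve_linear(B0, 50.0, t=30.0)
```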
After having derived exact linear evolution equations, and noticing that they are the
linear parts of the corresponding nonlinear evolution equations derived earlier, the higher
order nonlinear Schrödinger equations with exact linear dispersion are easily obtained
in an ad-hoc manner. The replacements for equations (4.15), (4.25), (4.35) and (4.45)
become, respectively,
∂A/∂t + L(∂x, ∂y)A + (i/2)|A|²A + (7/4)|A|² ∂A/∂x − (1/4) A ∂|A|²/∂x + iA ∂φ̄/∂x = 0 at z = 0,  (5.11)

∂A/∂x + L(∂t, ∂y)A + i|A|²A − 8|A|² ∂A/∂t − 4iA ∂φ̄/∂t = 0 at z = 0,  (5.12)

∂B/∂t + L(∂x, ∂y)B + (i/2)|B|²B + (5/4)|B|² ∂B/∂x + (1/4) B ∂|B|²/∂x + iB ∂φ̄/∂x = 0 at z = 0,  (5.13)

∂B/∂x + L(∂t, ∂y)B + i|B|²B − 6|B|² ∂B/∂t − 2B ∂|B|²/∂t − 4iB ∂φ̄/∂t = 0 at z = 0,  (5.14)
where we employ the versions of L and L for infinite depth, given by equations (5.8) and
(5.10), respectively.
The Zakharov integral equation (Zakharov 1968) contains exact linear dispersion both
at the linear and the cubic nonlinear orders. The above nonlinear Schrödinger equations
with exact linear dispersion are limiting cases of the Zakharov integral equation after
employing a bandwidth constraint only to the cubic nonlinear part. If a bandwidth
constraint is applied on both the linear and nonlinear parts, the higher order nonlinear
Schrödinger equations in sections 4.3–4.6 can also be derived as special cases of the
Zakharov equation. Stiassnie (1984) first showed how the temporal A equation (4.15)
can be obtained from the Zakharov integral equation in a systematic way. Kit & Shemer
(2002) showed how both the spatial A equation (4.25) and the spatial B equation (4.45)
can be obtained by the same approach.
6 Properties of the higher order nonlinear Schrödinger equations
Let us define the generic evolution equations

∂A/∂t + c1 ∂A/∂x + ic2 ∂²A/∂x² + ic3 ∂²A/∂y² + ic4 |A|²A
  + c5 ∂³A/∂x³ + c6 ∂³A/∂x∂y² + c7 |A|² ∂A/∂x + c8 A² ∂A*/∂x + ic9 A ∂φ̄/∂x = 0 at z = 0,  (6.1)

∂φ̄/∂z = β ∂|A|²/∂x at z = 0,  (6.2)

α² ∂²φ̄/∂x² + ∂²φ̄/∂y² + ∂²φ̄/∂z² = 0 for −h < z,  (6.3)

∂φ̄/∂z = 0 at z = −h.  (6.4)
6.1 Conservation laws
Let the following two quantities be defined,

I = ∫ |A|² dr = (2π)² ∫ |Â|² dk,  (6.5)

J = ∫ [ (i/2) A ∂A*/∂x + c.c. ] dr = (2π)² ∫ kx |Â|² dk,  (6.6)

where we have used the Fourier transform A = ∫ Â exp(ik·r) dk where r = (x, y) and k = (kx, ky), see e.g. Trulsen & Dysthe (1997).

It is readily shown that I is conserved,

dI/dt = 0.  (6.7)

On the other hand, J is conserved when the coefficient c8 vanishes,

dJ/dt = c8 ∫ [ −i ( A ∂A*/∂x )² + c.c. ] dr.  (6.8)
Thus the spatial evolution equations for the velocity potential, section 4.4, are exceptional since they satisfy more conservation laws than all the other forms of the evolution equations.

Recommended exercise: Show that the energy and the linear momentum of the waves are conserved even though J is not conserved.
6.2 Modulational instability of Stokes waves
Equations (6.1)–(6.4) have a particularly simple exact uniform wave solution, known as Stokes waves, given by

A = A0 e^{−ic4 |A0|² t} and φ̄ = 0.  (6.9)
Figure 7 shows the reconstructed surface elevation

η = ǫ cos θ + (ǫ²/2) cos 2θ + (3ǫ³/8) cos 3θ

at three separate orders for the much exaggerated steepness ǫ = 0.3.
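The three-order reconstruction can be evaluated directly; this is a sketch of the curves plotted in Figure 7:

```python
import numpy as np

# Stokes profile at first, second and third order for eps = 0.3,
# i.e. the three curves of Figure 7
eps = 0.3
theta = np.linspace(0.0, 2.0 * np.pi, 400)
eta1 = eps * np.cos(theta)                           # first order
eta2 = eta1 + 0.5 * eps**2 * np.cos(2 * theta)       # + second harmonic
eta3 = eta2 + 0.375 * eps**3 * np.cos(3 * theta)     # + third harmonic
# the bound harmonics sharpen the crest and flatten the trough
```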
Figure 7. Stokes wave of steepness ǫ = 0.3 reconstructed at three orders: —, first order;
– –, second order; · · ·, third order.
The stability of the Stokes wave can be found by perturbing it in amplitude and phase,

A = A0 (1 + a + iθ) e^{−ic4 |A0|² t}.  (6.10)

After linearization in a, θ and φ̄, and assuming plane-wave solutions

(a, θ, φ̄)ᵀ = (â, θ̂, φ̂)ᵀ e^{i(λx + µy − Ωt)} + c.c.,  (6.11)
we find that the perturbation behaves according to the dispersion relation

Ω = P ± √( Q [ Q − 2c4 |A0|² + (2c9 β λ² |A0|² coth Kh)/K ] + c8² |A0|⁴ λ² ),  (6.12)

where

P = c1 λ − c5 λ³ − c6 λµ² + c7 |A0|² λ,  (6.13)

Q = c2 λ² + c3 µ²,  (6.14)

K = √(α² λ² + µ²).  (6.15)
A perturbation is unstable if the modulational “frequency” Ω has positive imaginary
part. The growth rate of the instability is defined as ImΩ. The growth rate is seen to
be symmetric about the λ-axis and the µ-axis, therefore we only need to consider the
growth rates of perturbations in the first quadrant of the (λ, µ)-plane.
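The dispersion relation (6.12)–(6.15) is straightforward to evaluate numerically. The sketch below treats the coefficients c1–c9, α and β as inputs, and checks the classical cubic Schrödinger limit (only c1–c4 nonzero), where the collinear instability band and the maximum growth rate c4|A0|² are known in closed form:

```python
import numpy as np

def growth_rate(lam, mu, A0, c, alpha=1.0, beta=0.0, h=np.inf):
    """Im(Omega) from the perturbation dispersion relation (6.12)-(6.15);
    c maps coefficient names 'c1'..'c9' to values, missing entries are zero."""
    g = lambda name: c.get(name, 0.0)
    Q = g('c2') * lam**2 + g('c3') * mu**2
    K = np.sqrt(alpha**2 * lam**2 + mu**2)
    coth = 1.0 if np.isinf(h) else 1.0 / np.tanh(K * h)
    rad = Q * (Q - 2 * g('c4') * abs(A0)**2
               + 2 * g('c9') * beta * lam**2 * abs(A0)**2 * coth / K) \
        + g('c8')**2 * abs(A0)**4 * lam**2
    # P of eq. (6.13) is real and does not contribute to Im(Omega)
    return np.imag(np.sqrt(rad.astype(complex)))

# cubic NLS limit: collinear (mu = 0) perturbations are unstable for
# 0 < lambda < 2*sqrt(2)*eps, maximum growth rate c4*eps^2 at lambda = 2*eps
eps = 0.12
lam = np.linspace(1e-4, 0.8, 2000)
rate = growth_rate(lam, 0.0, eps, {'c1': 0.5, 'c2': 0.125, 'c3': -0.25, 'c4': 0.5})
```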
Figures 8 and 9 show growth rates for target steepness ǫ = 0.12 on infinite depth,
while Figures 10 and 11 show growth rates for target steepness ǫ = 0.12 and target
depth h = 2.0. For time evolution it makes no difference selecting equations for A or
B. For space evolution it makes a lot of difference. Numerical simulation of the spatial
evolution equation for A is more likely to suffer energy leakage through the unstable
growth of rapidly oscillating modulations. Thus the most attractive model for analytical
work turns out to be the least attractive model for numerical work!
The most unstable perturbation is collinear with the principal evolution direction
for sufficiently deep water. When the depth becomes smaller than a threshold, which
depends on the steepness, the most unstable perturbation bifurcates into an oblique
direction. The criterion for bifurcation was discussed by Trulsen & Dysthe (1996).
As far as the sea state of the Draupner wave is concerned, it follows that an “equivalent” Stokes wave in fact has its most unstable perturbations in an oblique direction,
see Figure 10. This result is however most likely of little practical importance since
the bandwidth of the Draupner wave field is so wide that the waves are modulationally
stable.
7 Application of the higher-order nonlinear Schrödinger model
to the Draupner wave field
The following is a good exercise (see Trulsen 2001):

Initialization of the nonlinear Schrödinger equation with the measured time series. Use the equations in section 4.6 for the spatial evolution of the surface elevation.
Start by assuming that the waves are long-crested, i.e. disregard all y-derivatives.
Extract the complex amplitude B by bandpass filtering the Fourier transform η̂ in a
neighborhood of ωc . Compute B2 (t), B3 (t) and η̄(t).
Plot the measured time series η(t) together with the reconstructed surface displacement, and plot the contributions from each harmonic term in expansion (4.8). How well
do we reconstruct the wave field in general, and the extreme wave in particular?
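A hypothetical sketch of the bandpass-filtering step that extracts the complex amplitude B(t); the band edges (0.5 fc, 1.5 fc) and the synthetic test carrier are assumptions for illustration only:

```python
import numpy as np

def extract_envelope(eta, fs, fc, band=(0.5, 1.5)):
    """Bandpass the positive frequencies around fc, invert one-sidedly to get
    the analytic signal, and demodulate the carrier exp(2*pi*i*fc*t)."""
    N = len(eta)
    eta_hat = np.fft.fft(eta - eta.mean())
    f = np.fft.fftfreq(N, d=1.0 / fs)
    mask = (f >= band[0] * fc) & (f <= band[1] * fc)      # positive band only
    analytic = np.fft.ifft(np.where(mask, eta_hat, 0.0)) * 2.0
    t = np.arange(N) / fs
    return analytic * np.exp(-2j * np.pi * fc * t)        # complex amplitude B(t)

# for a pure carrier a*cos(2*pi*fc*t) on an exact bin, |B| equals a
fs, N, a = 2.1333, 2560, 1.7
fc = 100 * fs / N
t = np.arange(N) / fs
B = extract_envelope(a * np.cos(2 * np.pi * fc * t), fs, fc)
```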
Figure 8. Growth rate for time evolution of MNLS equations for A and B for ǫ = 0.12
and h = ∞. The maximum growth rate 0.0057 is achieved at (0.20,0). Contour interval
0.0005.
Figure 9. Growth rate for space evolution of MNLS equations for A (left) and B (right)
for ǫ = 0.12 and h = ∞. For the A equations the maximum growth rate 0.011 is achieved
at (0.10,0). For the B equations the maximum growth rate 0.011 is achieved at (0.098,
0). Contour interval 0.0005.
Figure 10. Growth rate for time evolution of MNLS equations for A and B for ǫ = 0.12
and h = 2.0. The maximum growth rate 0.0039 is achieved at (0.27,0.14) corresponding
to waves at an angle 6.5◦ with the principal propagation direction. Contour interval
0.0005.
[Two contour plots over (λ, µ) ∈ [0, 0.8] × [0, 0.8].]
Figure 11. Growth rate for space evolution of MNLS equations for A (left) and B (right) for ǫ = 0.12 and h = 2.0. For the A equations the maximum growth rate 0.0081 is achieved at (0.14, 0.16). For the B equations the maximum growth rate 0.0072 is achieved at (0.11, 0.11). The definition of angle is not meaningful since the second axis denotes frequency. Contour interval 0.0005.
Bandpass filter the Fourier transform of the measured time series around the origin
in order to extract the slowly varying mean surface displacement directly from the measurements. Notice that according to long-crested nonlinear Schrödinger theory, the slowly
varying mean surface has a set-down under the extreme wave group. On the other hand,
when we extract the slowly varying mean surface directly from the measurements, we
find that there is a set-up (Walker, Taylor & Eatock Taylor 2004). There appears to be
no good explanation of this discrepancy so far.
8 Stochastic description of surface waves
Our concern is to account for the vertical displacement η(r, t) of the water surface as
a function of horizontal position r = (x, y) and time t. A quick look at any water
surface under the action of wind suggests that the previous deterministic description of
uniform waves is unsatisfactory. Given otherwise identical conditions (wind, etc.) it is our
everyday experience that the waves are going to be random and unpredictable. We may
therefore think of the surface displacement as a stochastic process with several possible
outcomes or realizations. A collection of realizations we denote an ensemble. We may
think of any single realization as the result of tossing dice or drawing a number at random.
The view elaborated here is that once the random number has been drawn, equivalent to
choosing an initial condition, the spatiotemporal evolution is deterministically given as a
solution of nonlinear evolution equations such as those previously derived. Another point
of view, not elaborated here, is that the continued action of random wind etc. requires
the spatiotemporal evolution to be stochastic to a certain level as well. There is an
underlying assumption behind the view employed here that the effect of wind is much
slower than the rapid deterministic modulations described by the above deterministic
evolution equations.
In reality we of course have only one realization of the waves on the water surface.
However, in our minds we can imagine several realizations, and on the computer we can
simulate several realizations. In practice it may also be possible to obtain several essentially
independent realizations even though we do not have several identical copies of our lake.
If measurements during a limited time interval suffice, we may obtain several independent
realizations by measuring waves over several different limited time intervals under
otherwise identical conditions (weather, etc.), provided the intervals are sufficiently
separated in time. Alternatively, if measurements at a group of wave probes suffice, we
may obtain several independent realizations of such time series from several different
groups of wave probes, provided they are sufficiently far apart. The process of obtaining
a realization we denote as an experiment.
We define a stochastic variable X as a rule that assigns a number xo to every outcome
o of an experiment.
We define a stochastic process X(t) as a rule that assigns a function xo (t) to every
outcome o of an experiment. Of fundamental interest to us, we shall describe the vertical
displacement of the water surface as a stochastic process Z(r, t) defined as a rule that
assigns a function ηo (r, t) to every outcome o of an experiment.
There are now four different interpretations of a stochastic process X(t):
1. In the most general case we consider the process X(t) for all times t and all outcomes
o.
2. Given an outcome o, we consider a time series xo (t) for all times t.
3. Given a time t1 , we consider a stochastic variable X(t1 ) for all outcomes o.
4. Given an outcome o and a time t1 , we consider a number xo (t1 ).
9 Theory for stochastic variables

9.1 Theory for a single stochastic variable
With the notation {X ≤ x} we refer to the collection of all outcomes xo of the
stochastic variable X such that xo ≤ x. The probability for this collection of outcomes
defines the cumulative distribution function
F(x) ≡ P{X ≤ x}

where P{·} reads the “probability of {·}”. The cumulative distribution function has the
properties that
1. F (−∞) = 0,
2. F (∞) = 1,
3. F (x) is an increasing function, F (x1 ) ≤ F (x2 ) for x1 < x2 .
The probability that an outcome is between a lower and an upper bound is P{a ≤ X ≤ b} = F(b) − F(a). Similarly, the probability that an outcome is in an interval of infinitesimal width is P{x ≤ X ≤ x + dx} = F(x + dx) − F(x) ≈ (dF/dx) dx = f(x) dx. We define the probability density function as

f(x) ≡ dF/dx.

The probability density function has the properties that
1. f(x) ≥ 0,
2. ∫_{−∞}^{∞} f(x) dx = 1,
3. F(x) = ∫_{−∞}^{x} f(ξ) dξ.
The mode of a stochastic variable X is the value of x such that the probability density
function f (x) achieves its maximum. This is the most probable outcome for the stochastic
variable X. If the probability density has a single maximum, then the variable X is said
to be unimodal.
The median of a stochastic variable X is the value of x such that the cumulative
distribution function F (x) = 0.5. It is equally probable that the stochastic variable X
gives an outcome smaller than or greater than the median.
The expected value µ of a stochastic variable X is the weighted average

µ = E[X] = ∫_{−∞}^{∞} x f(x) dx.

The expected value of a function g(X) of the stochastic variable X is the weighted average

E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.
It is seen that the expected value operator is linear,
E[ag(X) + bh(X)] = aE[g(X)] + bE[h(X)].
The variance σ² of a stochastic variable X is given by

σ² = Var[X] = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx.

By the linearity of the expected value operator, this can be written

σ² = E[(X − µ)²] = E[X² − 2µX + µ²] = E[X²] − 2µE[X] + µ² = E[X²] − µ².
The standard deviation σ of a stochastic variable is the square root of the variance.
The nth moment of a stochastic variable X is

m_n = E[X^n] = ∫_{−∞}^{∞} x^n f(x) dx,

while the nth centered moment is

E[(X − µ)^n] = ∫_{−∞}^{∞} (x − µ)^n f(x) dx.

The variance can be defined as the second centered moment of the stochastic variable.
We see that µ = m_1 and σ² = m_2 − m_1².
The skewness γ of a stochastic variable is the third centered moment normalized by
the cube of the standard deviation,

γ = E[(X − µ)³]/σ³ = (m_3 − 3m_2µ + 3µ³ − µ³)/σ³ = (m_3 − 3σ²µ − µ³)/σ³.

The kurtosis κ of a stochastic variable is the fourth centered moment normalized by
the square of the variance,

κ = E[(X − µ)⁴]/σ⁴ = (m_4 − 4m_3µ + 6m_2µ² − 4µ⁴ + µ⁴)/σ⁴ = (m_4 − 4γσ³µ − 6σ²µ² − µ⁴)/σ⁴.
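The raw-moment expressions for γ and κ above are algebraic identities in the sample moments, which can be checked numerically; a hedged sketch (the variable names and the test distribution are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200_000)   # any skewed sample will do

mu = x.mean()
sigma = x.std()                                # population convention (ddof=0)
m2, m3, m4 = (x**2).mean(), (x**3).mean(), (x**4).mean()

# Centered definitions of skewness and kurtosis.
gamma = ((x - mu)**3).mean() / sigma**3
kappa = ((x - mu)**4).mean() / sigma**4

# Raw-moment forms from the text; these are exact algebraic identities,
# so the two computations must agree to floating-point accuracy.
gamma_raw = (m3 - 3*sigma**2*mu - mu**3) / sigma**3
kappa_raw = (m4 - 4*gamma*sigma**3*mu - 6*sigma**2*mu**2 - mu**4) / sigma**4
```

For the exponential sample chosen here the theoretical values are γ = 2 and κ = 9, which the estimates approach for large sample sizes.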
Given a stochastic variable X we can transform it into a new stochastic variable Y
as a function of X. Particularly useful is the transformation given by

Y = (X − µ)/σ.    (9.1)

The cumulative distribution function of Y is

F_Y(y) = P{Y ≤ y} = P{(X − µ)/σ ≤ y} = P{X ≤ µ + σy} = F_X(µ + σy).

The probability density function of Y is

f_Y(y) = dF_Y(y)/dy = dF_X(µ + σy)/dy = σ f_X(µ + σy).
For the special transformation (9.1) with µ and σ² being the mean and variance of X,
we notice that the mean of Y is

E[Y] = E[(X − µ)/σ] = (1/σ)E[X] − µ/σ = 0,

the variance of Y is

E[Y²] = E[(X − µ)²/σ²] = (1/σ²)E[(X − µ)²] = 1,

while the skewness and kurtosis of Y are the same as the skewness and kurtosis of X,
respectively.
The characteristic function φ(k) of a stochastic variable X is the Fourier transform
of the probability density function, or equivalently, the expected value of e^{ikX},

φ(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx,

with inverse transform

f(x) = (1/2π) ∫_{−∞}^{∞} φ(k) e^{−ikx} dk.

We can formally expand the complex exponential function in a power series

e^{ikx} = 1 + ikx − (1/2)k²x² − (i/6)k³x³ + ...

and thus the characteristic function can be written as a superposition of all the moments

φ(k) = 1 + ikm_1 − (1/2)k²m_2 − (i/6)k³m_3 + ...

In fact, the nth moment can conveniently be obtained by differentiating the characteristic
function n times and evaluating the result at k = 0.
Example: Gaussian or normal distribution. A stochastic variable X is said to be
normally distributed with mean µ and variance σ² when the probability density function is

f(x) = (1/(√(2π)σ)) e^{−(x−µ)²/2σ²}.

The cumulative distribution function is

F(x) = ∫_{−∞}^{x} (1/(√(2π)σ)) e^{−(ξ−µ)²/2σ²} dξ = 1/2 + (1/2) erf((x − µ)/(√2 σ))

where erf(z) = (2/√π) ∫_0^z e^{−t²} dt is the error function. The stochastic variable X is unimodal
with mode x_mode = µ. The median is x_median = µ since erf(0) = 0. The mean is

E[X] = ∫_{−∞}^{∞} (x/(√(2π)σ)) e^{−(x−µ)²/2σ²} dx = µ,
the variance is

E[(X − µ)²] = ∫_{−∞}^{∞} ((x − µ)²/(√(2π)σ)) e^{−(x−µ)²/2σ²} dx = σ²,

the skewness is

γ = E[(X − µ)³]/σ³ = (1/σ³) ∫_{−∞}^{∞} ((x − µ)³/(√(2π)σ)) e^{−(x−µ)²/2σ²} dx = 0,

and the kurtosis is

κ = E[(X − µ)⁴]/σ⁴ = (1/σ⁴) ∫_{−∞}^{∞} ((x − µ)⁴/(√(2π)σ)) e^{−(x−µ)²/2σ²} dx = 3.

The characteristic function is

φ(k) = e^{−σ²k²/2 + iµk}.
In the limit of zero variance, σ → 0, the probability density function converges to a
Dirac delta function f (x) = δ(x − µ) and the characteristic function has unit magnitude
|φ(k)| = 1.
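A quick Monte Carlo check of these Gaussian properties (an illustrative sketch; the chosen µ, σ and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
x = rng.normal(mu, sigma, size=1_000_000)

mean = x.mean()                                # should be near µ
median = np.median(x)                          # should also be near µ
skew = ((x - mean)**3).mean() / x.std()**3     # should be near 0
kurt = ((x - mean)**4).mean() / x.std()**4     # should be near 3
```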
Example: Uniform distribution. A stochastic variable Θ is said to be uniformly
distributed on the interval [0, 2π] with the probability density function given by

f(θ) = 1/2π for 0 ≤ θ ≤ 2π, and f(θ) = 0 otherwise.

The cumulative distribution function is

F(θ) = 0 for θ < 0, F(θ) = θ/2π for 0 ≤ θ ≤ 2π, and F(θ) = 1 for θ > 2π.

The mode is not well defined since f(θ) does not have an isolated extremal point. The
median is θ_median = π because F(π) = 0.5. The mean is µ = E[Θ] = π. The variance
is σ² = E[(Θ − µ)²] = π²/3. The skewness is γ = 0. The kurtosis is κ = 9/5. The
characteristic function is

φ(k) = (e^{2πik} − 1)/2πik.
Example: Rayleigh distribution. A stochastic variable X is said to be Rayleigh
distributed with the probability density function given by

f(x) = αx e^{−αx²/2} for x ≥ 0, and f(x) = 0 for x < 0,

where α is a parameter. Find the mode, median, mean, standard deviation, skewness,
kurtosis and characteristic function!
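One way to check answers to this exercise numerically is to sample the distribution; note that NumPy parameterizes the Rayleigh density by a scale s with α = 1/s² (a sketch under that assumption, with the standard closed-form results quoted for comparison):

```python
import numpy as np

alpha = 2.0                        # parameter in f(x) = α x exp(−α x²/2)
s = 1.0 / np.sqrt(alpha)           # NumPy scale: f(x) = (x/s²) exp(−x²/2s²)

rng = np.random.default_rng(3)
x = rng.rayleigh(scale=s, size=1_000_000)

# Standard Rayleigh results to compare the sample against:
mode = s                               # maximum of f(x)
median = s * np.sqrt(2.0 * np.log(2.0))
mean = s * np.sqrt(np.pi / 2.0)
```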
Example: Exponential distribution. A stochastic variable X is said to be exponentially
distributed with the probability density function given by

f(x) = αe^{−αx} for x ≥ 0, and f(x) = 0 for x < 0,

where α is a parameter. Find the mode, median, mean, standard deviation, skewness,
kurtosis and characteristic function!
9.2 Theory for several stochastic variables
Let X and Y be two stochastic variables. With the notation {X ≤ x and Y ≤ y} we
refer to the collection of all outcomes of the two stochastic variables X and Y such that
X ≤ x and Y ≤ y simultaneously. The probability for this collection of outcomes defines
the joint cumulative distribution function F (x, y) ≡ P {X ≤ x and Y ≤ y}. The joint
cumulative probability function has the properties that
1. F (−∞, −∞) = 0,
2. F (∞, ∞) = 1,
3. F (x1 , y1 ) ≤ F (x2 , y2 ) for x1 ≤ x2 and y1 ≤ y2 .
The marginal cumulative distribution functions are
FX (x) = F (x, ∞)
and
FY (y) = F (∞, y).
The joint probability density function is defined as

f(x, y) ≡ ∂²F(x, y)/∂x∂y

and has the properties that
1. f(x, y) ≥ 0,
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1,
3. F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(ξ, υ) dυ dξ.
The marginal probability density functions are defined as

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy = ∂F_X(x)/∂x  and  f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx = ∂F_Y(y)/∂y.
Two stochastic variables X and Y are said to be statistically independent if the joint
probability density function can be factorized f (x, y) = fX (x)fY (y).
The mean values of X and Y are now

µ_X = E[X] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f(x, y) dx dy = ∫_{−∞}^{∞} x f_X(x) dx,

µ_Y = E[Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dx dy = ∫_{−∞}^{∞} y f_Y(y) dy.
The covariance of two stochastic variables X and Y is given by
Cov[X, Y ] = E[(X − µX )(Y − µY )] = E[XY ] − µX µY .
The correlation coefficient of the two stochastic variables is the ratio

Cov[X, Y] / (σ_X σ_Y).

We show that the absolute value of the correlation coefficient is not greater than one, or
equivalently |Cov[X, Y]| ≤ σ_X σ_Y.
Proof. Look at

E[(a(X − µ_X) + (Y − µ_Y))²] = a²σ_X² + 2a Cov[X, Y] + σ_Y² ≥ 0.

Solving for a when this expression is zero we get

a = (−Cov[X, Y] ± √(Cov[X, Y]² − σ_X²σ_Y²)) / σ_X².

Observing that there cannot be two distinct solutions for a, the radicand must be non-positive,

Cov[X, Y]² ≤ σ_X²σ_Y².
Two stochastic variables X and Y are said to be uncorrelated if Cov[X, Y ] = 0, which
is equivalent to the correlation coefficient being equal to zero, and which is equivalent to
E[XY ] = E[X]E[Y ].
We now observe that if two stochastic variables are statistically independent, then
they are uncorrelated and their covariance is zero. The converse is not always true.
Generalization to an arbitrary number of stochastic variables should now be obvious.
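A classical illustration that the converse fails: with X standard Gaussian and Y = X², the two variables are uncorrelated yet completely dependent (a hedged numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=500_000)
y = x**2                  # y is completely determined by x, hence dependent

# Cov[X, Y] = E[X³] − E[X]E[X²] = 0 for a symmetric distribution, so
# X and Y are uncorrelated despite being strongly dependent.
cov = np.mean(x*y) - x.mean()*y.mean()
```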
Example: Multinormal distribution. The n stochastic variables X_1, X_2, ..., X_n
are said to be jointly normally distributed with the probability density function given by

f(x_1, ..., x_n) = (√|A|/(2π)^{n/2}) exp(−(1/2) Σ_{j,l} (x_j − µ_j) a_{jl} (x_l − µ_l))

where C⁻¹ = A = {a_{jl}} is a symmetric positive definite matrix and |A| denotes the
determinant of A. The mean values are E[X_j] = µ_j, the covariances are Cov[X_j, X_l] =
E[(X_j − µ_j)(X_l − µ_l)] = c_{jl}, where A⁻¹ = C = {c_{jl}} is the covariance matrix. The
characteristic function is given by the n-dimensional Fourier transform

φ(k_1, ..., k_n) = E[exp(i(k_1X_1 + ... + k_nX_n))] = exp(−(1/2) Σ_{j,l} k_j c_{jl} k_l + i Σ_j k_j µ_j).
It is useful to also consider the limit when some of the variances c_{jj} go to zero. In this case
the covariance matrix becomes singular and the probability density function becomes
a generalized function. However, the characteristic function is still a well defined ordinary
function, and it may therefore be more convenient to take the characteristic function as
the definition of the multinormal distribution.
Suppose that the variable X1 is uncorrelated with all the other variables Xj . Then
the covariances cj,1 = c1,j = 0 for all j 6= 1, and thus the n-dimensional characteristic
function can be factored φ(k1 , k2 , . . . , kn ) = φ1 (k1 )φ2,...,n (k2 , . . . , kn ). From the multidimensional inverse Fourier transform, it follows that the probability density function
can be factored likewise. Hence for jointly normally distributed stochastic variables,
statistical independence is equivalent to uncorrelatedness.
9.3 The Central Limit Theorem
Suppose that a stochastic variable Y is a superposition of n statistically independent
variables Xj with mean values µj and variances σj2 , respectively,
Y = X1 + X2 + . . . + Xn .
Due to the assumption of statistical independence we have for the joint probability density
function
f (x1 , x2 , . . . , xn ) = f1 (x1 )f2 (x2 ) . . . fn (xn ).
The expected value of Y is

E[Y] = E[Σ_j X_j] = Σ_j E[X_j] = Σ_j µ_j = µ.
The variance of Y is

Var[Y] = E[(Y − E[Y])²] = E[(Σ_j (X_j − µ_j))²]
= Σ_j E[(X_j − µ_j)²] + Σ_{j≠l} E[(X_j − µ_j)(X_l − µ_l)] = Σ_j σ_j² = σ²

where the sum over all j ≠ l vanishes because of statistical independence. We have
defined µ as the sum of the means and σ² as the sum of the variances.
Now define the transformed variable

Z = (Y − µ)/σ = Σ_{j=1}^{n} (X_j − µ_j)/σ

such that E[Z] = 0 and E[Z²] = 1. The characteristic function of X_j is not known,
however we can write down the first terms in the power series expansion in k for the
centered variable X_j − µ_j,

φ_{X_j−µ_j}(k) = E[e^{ik(X_j−µ_j)}] = 1 + ikE[X_j − µ_j] − (k²/2)E[(X_j − µ_j)²] + ... = 1 − (1/2)k²σ_j² + ....
Similarly, the characteristic function φ_Z(k) has power series expansion in k

φ_Z(k) = E[e^{ikZ}] = E[exp(ik Σ_{j=1}^{n} (X_j − µ_j)/σ)] = Π_{j=1}^{n} E[exp(ik(X_j − µ_j)/σ)]
= Π_{j=1}^{n} (1 − k²σ_j²/2σ² + ...) = (1 − k²/2n)^n + R

where the transition from sum to product depends on statistical independence, and R is
a remainder.
Now we let n → ∞ and recall the limit

(1 − x/n)^n → e^{−x}  as n → ∞.

Provided the remainder term R vanishes as n → ∞ it follows that

φ_Z(k) → e^{−k²/2}

and thus the asymptotic probability density of the transformed variable Z is

f(z) → (1/√(2π)) e^{−z²/2},

which is the Gaussian distribution with mean 0 and variance 1.
If all the variables Xj are equally distributed it becomes particularly simple to demonstrate that R = O(n−1/2 ). If the Xj are not equally distributed, sufficient conditions for
the vanishing of R could depend on certain conditions on the variances σj2 and higher
moments of all Xj (see e.g. Papoulis 1984).
The central limit theorem states that when these conditions are met, the sum of a great
number of statistically independent stochastic variables tends to a Gaussian stochastic
variable with mean equal to the sum of the means and variance equal to the sum of the
variances.
As far as the sea surface is concerned, if the surface elevation is a linear superposition
of a great number of independent wave oscillations, then the surface elevation is expected
to have a Gaussian distribution.
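A short Monte Carlo illustration of the theorem (a sketch; the choice of uniform summands and the sample sizes are arbitrary), reusing the uniform distribution on [0, 2π] from section 9.1:

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 50, 100_000

# Sum n independent uniform variables on [0, 2π]; each has mean π and
# variance π²/3 (as in the uniform-distribution example above).
x = rng.uniform(0.0, 2.0*np.pi, size=(trials, n))
y = x.sum(axis=1)

mu = n * np.pi                           # sum of the means
sigma = np.sqrt(n * np.pi**2 / 3.0)      # square root of the sum of variances
z = (y - mu) / sigma

skew = np.mean(z**3)                     # near 0 for a Gaussian
kurt = np.mean(z**4)                     # near 3 for a Gaussian
```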
We can now understand an important reason why the Gaussian assumption may be
broken for sea surface waves: If the spatiotemporal behavior is governed by a nonlinear
evolution equation, then the individual wave oscillations will not be independent. For the
weakly nonlinear and narrowbanded case described in earlier sections, there are at least
two different reasons why the sea surface displacement should deviate from a Gaussian
distribution: Firstly, the reconstruction of the sea surface involves the contributions from
zeroth, second and higher harmonics, which are nonlinear wave oscillations that depend
on the first harmonic linearly dispersive waves. Secondly, the spatiotemporal evolution
equation for the first harmonic is nonlinear, and will introduce dependencies between the
wave oscillations comprising the first harmonic.
Common practice in the stochastic modelling of weakly nonlinear sea surface waves
is often to assume that the first harmonic contribution to the wave field is Gaussian.
This implies an assumption that the nonlinear Schrödinger equation itself does not introduce deviations from Gaussian statistics, but the nonlinear reconstruction formulas
for the surface displacement do produce such deviations. This assumption has recently
been checked experimentally and by Monte Carlo simulations using the nonlinear evolution equations (Onorato et al. 2004; Socquet–Juglard et al. 2005). It is found that for
unidirectional long-crested waves the nonlinear Schrödinger equation can produce significant deviation from Gaussian statistics, with an increased number of extreme waves.
For more realistic short-crested waves, typical for the sea surface, the assumption that
the first harmonic is Gaussian is surprisingly good, even subject to the nonlinearities of
the nonlinear Schrödinger equation.
10 Theory for stochastic processes
We already mentioned four different ways to interpret a stochastic process X(t). Suppose
t is time. At a fixed time t1 the interpretation is a stochastic variable X(t1 ). At two
fixed times t1 and t2 the interpretation is two stochastic variables X(t1 ) and X(t2 ). At
an arbitrary number of times tj the interpretation is a collection of stochastic variables
X(tj ). We seek a description of the joint distribution of these stochastic variables.
The first order distribution describes the behavior at one fixed time t_1,

F(x_1; t_1) = P{X(t_1) ≤ x_1}  and  f(x_1; t_1) = ∂F(x_1; t_1)/∂x_1.
The second order distribution describes the joint behavior at two fixed times t_1 and t_2,

F(x_1, x_2; t_1, t_2) = P{X(t_1) ≤ x_1 and X(t_2) ≤ x_2}

and

f(x_1, x_2; t_1, t_2) = ∂²F(x_1, x_2; t_1, t_2)/∂x_1∂x_2.

Several compatibility relations follow, e.g. F(x_1; t_1) = F(x_1, ∞; t_1, t_2), etc.
In principle we can proceed to derive the n-th order distribution for the joint behavior at n fixed times F (x1 , x2 , . . . , xn ; t1 , t2 , . . . , tn ), however, the first and second order
distributions will suffice in the following.
Care should be exercised not to be confused by our double use of the word “order”:
The order of a distribution of a stochastic process refers to the number of distinct spatiotemporal locations employed for joint statistics. The order of nonlinearity refers to the
number of wave-wave interactions that produce some effect.
The expected value of the stochastic process is

µ(t) = E[X(t)] = ∫_{−∞}^{∞} x f(x; t) dx.

We define the autocorrelation function as

R(t_1, t_2) = E[X(t_1)X(t_2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_1 x_2 f(x_1, x_2; t_1, t_2) dx_1 dx_2.
The mean power of the process is defined as the second moment R(t, t) = E[|X(t)|2 ].
The autocovariance function is defined as
C(t1 , t2 ) = E[(X(t1 ) − µ(t1 ))(X(t2 ) − µ(t2 ))] = R(t1 , t2 ) − µ(t1 )µ(t2 ).
A process is said to be steady state or stationary if the statistical properties are
independent of translation of the time origin, i.e. X(t) and X(t + τ) have the same distribution.
This invariance must hold for the n-th order distributions at every order n. For the
first order distribution we need f(x_1; t_1) = f(x_1). For the second order distribution we
need f(x_1, x_2; t_1, t_2) = f(x_1, x_2; τ) where τ = t_2 − t_1. Similarly, the distributions at any
higher order should only depend on the time intervals and not the absolute times.
A process is said to be weakly stationary if the expected value is constant with respect
to time E[X(t)] = µ and the autocorrelation function is independent of translation of
the origin R(t1 , t2 ) = E[X(t1 )X(t2 )] = R(τ ) where τ = t2 − t1 .
A process X(t) is said to be ergodic for the computation of some function g(X(t)) if
ensemble-averaging gives the same result as time-averaging, e.g.

E[g(X(t))] ≡ ∫_{−∞}^{∞} g(x) f(x; t) dx = lim_{T→∞} (1/2T) ∫_{−T}^{T} g(x(t)) dt.

It is obvious that ergodicity is only meaningful provided the process has some kind of
stationarity. Ergodicity with respect to second order statistics, such as the mean and the
autocorrelation, is meaningful for a weakly stationary process.
Often we suppose without further justification that ocean waves are both weakly
stationary and ergodic. However, the sea state is known to change in time. It may still
be a good approximation to assume that within a limited time, say a few hours, the sea
state is nearly weakly stationary and ergodic.
For a complex stochastic process X(t), the autocorrelation function is defined as
R(t1 , t2 ) = E[X(t1 )X ∗ (t2 )].
In the following we describe some properties of the autocorrelation function for weakly
stationary processes:
• For a complex process R(−τ) = R*(τ), and for a real process R(−τ) = R(τ):

R(−τ) = E[X(t)X*(t − τ)] = E[X(t + τ)X*(t)] = R*(τ).
• R(0) is real and non-negative
R(0) = E[X(t)X ∗ (t)] = E[|X(t)|2 ] ≥ 0.
• For a complex process R(0) ≥ |ReR(τ )|, and for a real process R(0) ≥ |R(τ )|.
Proof. Let a be a real number and look at
E[|aX(t) + X(t + τ )|2 ]
= a2 E[|X(t)|2 ] + aE[X(t)X ∗ (t + τ )] + aE[X ∗ (t)X(t + τ )] + E[|X(t + τ )|2 ]
= a2 R(0) + 2aReR(τ ) + R(0) ≥ 0.
Solving for a when this expression is zero we get

a = (−Re R(τ) ± √((Re R(τ))² − R²(0))) / R(0).

Since there cannot be two distinct real solutions for a we have R(0) ≥ |Re R(τ)|.
• R(0) is the second moment of X(t). If E[X(t)] = 0, then R(0) is the variance of
X(t).
• If X(t) and Y (t) are statistically independent processes with zero mean, and Z(t) =
X(t) + Y (t), then
RZZ (τ ) = E[(X(t) + Y (t))(X ∗ (t + τ ) + Y ∗ (t + τ ))] = RXX (τ ) + RY Y (τ ).
The cross-correlation between two complex processes X(t) and Y (t) is defined as
RXY (t1 , t2 ) = E[X(t1 )Y ∗ (t2 )].
If the two processes are independent, then the cross-correlation is zero.
Example: Simple harmonic wave with random phase. Consider a simple harmonic
wave with fixed amplitude a and arbitrary phase

η(x, t) = a cos(kx − ωt + θ)

where θ is uniformly distributed on the interval [0, 2π).
The expected value of the surface displacement is

µ(x, t) = E[η(x, t)] = ∫_0^{2π} a cos(kx − ωt + θ) (1/2π) dθ = 0.
The autocorrelation function is

R(x, t, x + ρ, t + τ) = E[η(x, t)η(x + ρ, t + τ)]
= ∫_0^{2π} a² cos(kx − ωt + θ) cos(k(x + ρ) − ω(t + τ) + θ) (1/2π) dθ
= (a²/2) cos(kρ − ωτ).

The mean power of the process is E[η²(x, t)] = R(x, t, x, t) = a²/2. In fact we could
have defined the amplitude of the process as a = √(2E[η²]).
Let us proceed to derive the first-order distribution of the process. We have

F(z; x, t) = P{η(x, t) ≤ z} = 0 for z < −a, 1 − (1/π) arccos(z/a) for |z| ≤ a, and 1 for z > a,

f(z; x, t) = ∂F(z; x, t)/∂z = 1/(πa√(1 − (z/a)²)) for |z| ≤ a, and 0 for |z| > a.
Notice that the first order distribution is independent of x and t. In fact, this can
be seen even without deriving the exact distribution, upon making the substitution ψ =
kx − ωt + θ and noting that ψ is a stochastic variable uniformly distributed on an interval
of length 2π. With this observation it becomes clear that the stochastic distributions
at any order are independent of translation of the spatiotemporal origin, and hence the
process is steady state or stationary.
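The first-order (arcsine) distribution derived above is easy to verify by sampling the random phase (an illustrative sketch):

```python
import numpy as np

a = 2.0
rng = np.random.default_rng(6)
theta = rng.uniform(0.0, 2.0*np.pi, size=1_000_000)
eta = a * np.cos(theta)            # kx − ωt merely shifts the uniform phase

# Compare the empirical CDF with F(z) = 1 − arccos(z/a)/π at a few levels.
zs = np.array([-1.5, 0.0, 1.0])
F_exact = 1.0 - np.arccos(zs / a) / np.pi
F_emp = np.array([np.mean(eta <= z) for z in zs])
max_dev = np.max(np.abs(F_emp - F_exact))
```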
The process is ergodic for computation of the mean and the autocorrelation by time
averaging when ω ≠ 0, and by spatial averaging when k ≠ 0, e.g.

lim_{T→∞} (1/2T) ∫_{−T}^{T} a cos(kx − ωt + θ) dt = lim_{T→∞} (a/2ωT)(−sin(kx − ωT + θ) + sin(kx + ωT + θ)) = 0,

lim_{T→∞} (1/2T) ∫_{−T}^{T} a² cos(kx − ωt + θ) cos(kx − ω(t + τ) + θ) dt
= lim_{T→∞} (a²/2T) ∫_{−T}^{T} [cos²(kx − ωt + θ) cos ωτ + (1/2) sin 2(kx − ωt + θ) sin ωτ] dt = (a²/2) cos ωτ.

However, if k = 0 or ω = 0 the process is not ergodic for computation of these quantities
by spatial or time averaging, respectively.
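The equality of time averages and ensemble averages can be illustrated numerically for a single fixed outcome of θ (a sketch; the parameter values are arbitrary):

```python
import numpy as np

# One fixed realization (one outcome of θ) of η = a cos(kx − ωt + θ).
a, k, omega, theta, x = 1.3, 0.5, 2.0, 0.7, 0.0
t = np.linspace(0.0, 4000.0, 2_000_000)
eta = a * np.cos(k*x - omega*t + theta)

time_mean = eta.mean()                     # ensemble mean is 0

tau = 0.9
eta_lag = a * np.cos(k*x - omega*(t + tau) + theta)
time_corr = np.mean(eta * eta_lag)         # ensemble value is (a²/2) cos ωτ
expected = 0.5 * a**2 * np.cos(omega * tau)
```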
Example: Third order Stokes wave with random phase. The third-order normalized
Stokes wave with arbitrary phase can be written as

η(ψ) = ε cos ψ + γ_2 ε² cos 2ψ + γ_3 ε³ cos 3ψ    (10.1)

where η is now the normalized surface displacement, the phase is ψ = x − t + θ, and
the coefficients γ_2 and γ_3 are found by reference to section 4. The stochastic variable θ
is uniformly distributed on the interval [0, 2π), and thus ψ is also uniformly distributed
on some interval of length 2π. The steepness ε of the first-harmonic term should not be
confused with the steepness ǫ = k_c ā of the wave field.

The expected value of the normalized surface displacement is

E[η(ψ)] = ∫_0^{2π} η(ψ) (1/2π) dψ = 0.
The autocorrelation function is

R(ρ, τ) = E[η(ψ)η(ψ + ρ − τ)] = ∫_0^{2π} η(ψ)η(ψ + ρ − τ) (1/2π) dψ
= (ε²/2) cos(ρ − τ) + (γ_2²ε⁴/2) cos 2(ρ − τ) + (γ_3²ε⁶/2) cos 3(ρ − τ).

The mean power of the process is

Var[η] = E[η²] = R(0, 0) = ε²/2 + γ_2²ε⁴/2 + γ_3²ε⁶/2.
It is meaningful to define the effective steepness of the process as ǫ = k_c ā = √(2E[η²]). It
follows that the relationship between the steepness of the first harmonic and the overall
steepness of the wave field is

ǫ² = ε² + γ_2²ε⁴ + γ_3²ε⁶  and  ε² = ǫ² − γ_2²ǫ⁴ + (2γ_2⁴ − γ_3²)ǫ⁶.    (10.2)

In any case, for the small Draupner overall steepness ǫ = 0.12 and depths not smaller
than the Draupner depth we find ǫ = ε within two digits accuracy.
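The series inversion (10.2) can be checked numerically for the deep-water coefficients (a sketch):

```python
import numpy as np

gamma2, gamma3 = 0.5, 3.0/8.0      # deep-water Stokes coefficients
eps1 = 0.12                        # first-harmonic steepness ε

# Forward relation: overall steepness ǫ in terms of ε.
eps_tot = np.sqrt(eps1**2 + gamma2**2*eps1**4 + gamma3**2*eps1**6)

# Inverted series (10.2): recover ε from ǫ, accurate to sixth order.
eps1_back = np.sqrt(eps_tot**2 - gamma2**2*eps_tot**4
                    + (2*gamma2**4 - gamma3**2)*eps_tot**6)
```

At ǫ = 0.12 the two steepnesses differ only in the fourth decimal, consistent with the remark that ǫ = ε within two digits.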
We now want to derive the distribution of surface elevation, i.e. the first-order
distribution of the stochastic process, accurate to third order in nonlinearity. This can be
achieved by explicitly inverting the expression for the surface elevation, ψ(η). We limit
ourselves to deep water (γ_2 = 1/2, γ_3 = 3/8). The trick is to rewrite the left-hand side as

η = εz + (1/2)ε² + (3/8)ε³ s    (10.3)

where s is the sign of η and |z| ≤ 1. Employing the perturbation expansion

ψ = ψ_0 + εψ_1 + ε²ψ_2 + ...    (10.4)

we find

ψ_0 = arccos z,    (10.5)

ψ_1 = −√(1 − z²)    (10.6)

and

ψ_2 = 3(z − s)/(8√(1 − z²)).    (10.7)

The cumulative probability distribution is therefore

F(η) = 1 − (1/π)[arccos z − ε√(1 − z²) + 3ε²(z − s)/(8√(1 − z²))]    (10.8)

and the probability density function is

f(η) = (1/(επ√(1 − z²)))[1 − εz − 3ε²(1 − sz)/(8(1 − z²))] for |z| < 1, and f(η) = 0 for |z| > 1,    (10.9)

where

z = η/ε − (1/2)ε − (3/8)ε²s.    (10.10)

The probability densities at various nonlinear orders are shown in Figure 12 for an
unrealistically high steepness ε = 0.3.
[Plot of the three probability density functions over −0.3 ≤ η ≤ 0.3, vertical axis 0 to 5.]
Figure 12. Probability density functions for Stokes waves of first-harmonic steepness
ε = 0.3: —, linear; - -, second nonlinear order; · · ·, third nonlinear order.
Example: Simple harmonic wave with random amplitude and phase. Consider
a simple harmonic wave with arbitrary amplitude and phase, more specifically let
η(x, t) = a cos(kx − ωt) + b sin(kx − ωt)
where a and b are statistically independent Gaussian stochastic variables with common
mean 0 and common variance σ 2 .
The expected value of the surface displacement is
µ(x, t) = E[η(x, t)] = E[a] cos(kx − ωt) + E[b] sin(kx − ωt) = 0.
The autocorrelation function is
R(x, t, x + ρ, t + τ ) = E[η(x, t)η(x + ρ, t + τ )] = σ 2 cos(kρ − ωτ ).
The mean power is E[η²(x, t)] = R(x, t, x, t) = σ². It is meaningful to define the
effective amplitude of the process as ā = √(2E[η²]) = √2 σ.
Exercise: Proceed to derive the first-order distribution of the process

F(z; x, t) = P{η(x, t) ≤ z}  and  f(z; x, t) = ∂F(z; x, t)/∂z.
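As a hint for the exercise, η is a linear combination of independent Gaussians with coefficients cos(kx − ωt) and sin(kx − ωt) whose squares sum to one, so η should be Gaussian with mean 0 and variance σ² at every (x, t); a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.8
n = 1_000_000
a = rng.normal(0.0, sigma, size=n)
b = rng.normal(0.0, sigma, size=n)

# Fixed point in space and time; since cos² + sin² = 1, the variance of η
# is σ² regardless of (x, t).
k, omega, x, t = 1.0, 2.0, 0.3, 1.7
eta = a*np.cos(k*x - omega*t) + b*np.sin(k*x - omega*t)

kurt = np.mean((eta - eta.mean())**4) / eta.std()**4   # 3 for a Gaussian
```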
11 The spectrum

11.1 Definition of frequency spectrum
The frequency spectrum S(ω) of a weakly stationary process is defined as the Fourier
transform of the autocorrelation function R(τ). Whereas the Fourier transform is in
principle determined only up to a multiplicative constant, the spectrum becomes uniquely defined
by the constraint that the integral of the spectrum over the domain of the frequency axis
that is used should be equal to the variance of the process. Since our target process (the
surface elevation) is real, the Fourier transform is complex conjugate symmetric about
the origin, and it is enough to use only positive frequencies to represent the spectrum.
The desired Fourier transform pair is then
S(ω) = (1/π) ∫_{−∞}^{∞} R(τ) e^{iωτ} dτ,    (11.1)

R(τ) = (1/2) ∫_{−∞}^{∞} S(ω) e^{−iωτ} dω.    (11.2)
Some authors define the spectrum as the squared absolute value of the Fourier transform of the process. In this case the Fourier transform pair (11.1)–(11.2) is called the
Wiener–Khintchine theorem. On the other hand, we show in section 11.3 that the squared
absolute value of the Fourier transform is an estimator for the spectrum.
For a real process we recall that the autocorrelation function is real and even, R(−τ) =
R(τ), and thus we may write

S(ω) = (1/π) ∫_{−∞}^{∞} R(τ) cos ωτ dτ = (2/π) ∫_0^{∞} R(τ) cos ωτ dτ

and

R(τ) = (1/2) ∫_{−∞}^{∞} S(ω) cos ωτ dω = ∫_0^{∞} S(ω) cos ωτ dω

and thus it follows that S(ω) is real and even. In particular we have

∫_0^{∞} S(ω) dω = R(0)

which shows that the normalization criterion is satisfied.
For application to a complex process we cannot expect any symmetry for the spectrum
and would need to include both positive and negative frequencies, the appropriate
transform pair being

S(ω) = (1/2π) ∫_{−∞}^{∞} R(τ) e^{iωτ} dτ,    (11.3)

R(τ) = ∫_{−∞}^{∞} S(ω) e^{−iωτ} dω.    (11.4)

For a complex process we recall that R(−τ) = R*(τ) and thus

S(ω) = (1/2π) ∫_{−∞}^{∞} R(τ) e^{iωτ} dτ = (1/2π) ∫_0^{∞} [R*(τ) e^{−iωτ} + R(τ) e^{iωτ}] dτ

which shows that S(ω) is real.

It can further be shown that for a real or complex weakly stationary process, the
spectrum is non-negative,

S(ω) ≥ 0.
Example: Periodic oscillation with random amplitude and phase. Take the
real periodic process with period T

η(t) = Σ_{j=1}^{∞} [a_j cos ω_j t + b_j sin ω_j t]    (11.5)

where ω_j = 2πj/T and a_j and b_j are statistically independent Gaussian stochastic
variables with mean 0 and variance σ_j².

The mean is zero, E[η(t)] = 0, the autocorrelation function is

R(τ) = Σ_j σ_j² cos ω_j τ,

and using the discrete Fourier transform the spectrum is

S(ω_j) = (2/T) ∫_0^{T} R(τ) e^{iω_j τ} dτ = σ_j².    (11.6)

The normalization criterion is seen to be satisfied by observing that

Σ_j S(ω_j) = Σ_j σ_j² = R(0) = Var[η(t)].
11.2
j
Definition of wave spectrum
The wave spectrum S(k, ω) of a weakly stationary wave process η(x, t) is defined as the Fourier transform of the autocorrelation function R(ρ, τ). Again the spectrum becomes uniquely defined by the constraint that the integral of the spectrum over the three-dimensional domain of spectral axes employed should equal the variance of the process. For a real process like the surface elevation, the Fourier transform has a complex conjugate symmetry, and we therefore restrict to positive frequencies ω ≥ 0 while the wave vector k is unconstrained. The desired Fourier transform pair is then
    S(k, \omega) = \frac{1}{4\pi^3} \int_{-\infty}^{\infty} d\rho \int_{-\infty}^{\infty} d\tau\, R(\rho, \tau)\, e^{-i(k\cdot\rho - \omega\tau)},    (11.7)

    R(\rho, \tau) = \frac{1}{2} \int_{-\infty}^{\infty} dk \int_{-\infty}^{\infty} d\omega\, S(k, \omega)\, e^{i(k\cdot\rho - \omega\tau)}.    (11.8)
Recall that for a real process R(−ρ, −τ) = R(ρ, τ), and thus it follows that the spectrum is real and has the symmetry S(−k, −ω) = S(k, ω). Then it also follows that the normalization criterion is satisfied:

    \frac{1}{2} \int_{-\infty}^{\infty} dk \int_{-\infty}^{\infty} d\omega\, S(k, \omega) = R(0, 0).
The wave vector spectrum S(k) can now be defined as the projection of the wave spectrum onto the wave vector plane

    S(k) = \int_{0}^{\infty} S(k, \omega)\, d\omega = \frac{1}{2} \int_{-\infty}^{\infty} S(k, \omega)\, d\omega = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} R(\rho, 0)\, e^{-ik\cdot\rho}\, d\rho.
The frequency spectrum S(ω) is recovered by projecting onto the frequency axis

    S(\omega) = \int_{-\infty}^{\infty} S(k, \omega)\, dk = \frac{1}{\pi} \int_{-\infty}^{\infty} R(0, \tau)\, e^{i\omega\tau}\, d\tau.
The wavenumber spectrum S(k) is achieved through the transformation

    k_x = k\cos\theta, \qquad k_y = k\sin\theta \qquad \text{with Jacobian} \qquad \frac{\partial(k_x, k_y)}{\partial(k, \theta)} = k.

The wavenumber k = \sqrt{k_x^2 + k_y^2} \ge 0 is by definition non-negative. The wavenumber spectrum is thus

    S(k) = \int_{0}^{2\pi} d\theta \int_{0}^{\infty} d\omega\, S(k, \omega)\, k = \frac{1}{4\pi^2} \int_{0}^{2\pi} d\theta \int_{-\infty}^{\infty} d\rho\, R(\rho, 0)\, k\, e^{-ik\cdot\rho}.

The directional spectrum S(θ) similarly becomes

    S(\theta) = \int_{0}^{\infty} dk \int_{0}^{\infty} d\omega\, S(k, \omega)\, k.
Example: Linear waves with random phase. Look at the real process

    \eta(x, t) = \sum_j a_j \cos(k_j x - \omega_j t + \theta_j)

where a_j are fixed scalars and θ_j are statistically independent stochastic variables uniformly distributed between 0 and 2π. The mean is zero, E[η(x, t)] = 0, and the autocorrelation function is

    R(\rho, \tau) = \sum_j \frac{1}{2} a_j^2 \cos(k_j\rho - \omega_j\tau).
The variance or mean power of the process is

    R(0, 0) = \sum_j \frac{1}{2} a_j^2 \equiv \frac{1}{2}\bar{a}^2

where \bar{a} = \sqrt{2R(0,0)} is the equivalent characteristic amplitude. The spectrum is

    S(k, \omega) = \sum_j \frac{a_j^2}{2} \left( \delta(k + k_j)\delta(\omega + \omega_j) + \delta(k - k_j)\delta(\omega - \omega_j) \right)

where δ(·) is the Dirac delta function.
Example: Linear waves with random amplitude and phase. Let us consider the real process

    \eta(x, t) = \sum_j \left( a_j \cos(k_j x - \omega_j t) + b_j \sin(k_j x - \omega_j t) \right)

where a_j and b_j are statistically independent Gaussian stochastic variables with mean 0 and variance σ_j².

The mean is zero, E[η(x, t)] = 0, and the autocorrelation function is

    R(\rho, \tau) = \sum_j \sigma_j^2 \cos(k_j\rho - \omega_j\tau).

Notice that

    \mathrm{Var}[\eta(x, t)] = R(0, 0) = \sum_j \sigma_j^2 = \frac{1}{2}\bar{a}^2.

The spectrum is readily found:

    S(k, \omega) = \sum_j \sigma_j^2 \left( \delta(k + k_j)\delta(\omega + \omega_j) + \delta(k - k_j)\delta(\omega - \omega_j) \right).
11.3 An estimator for the spectrum
Using the real periodic process (11.5), let us now compute the Fourier transform of η(t) using the Fourier transform (A.8):

    \hat\eta_j = \frac{1}{T} \int_{0}^{T} \eta(t)\, e^{i\omega_j t}\, dt = \frac{1}{2}(a_j + i b_j).

Let us construct the quantity

    \tilde S_j = 2|\hat\eta_j|^2 = \frac{1}{2}(a_j^2 + b_j^2).
Can S̃_j be used as an estimator for S(ω_j) found in equation (11.6)? To answer this question we compute the expected value

    E[\tilde S_j] = \frac{1}{2} E[a_j^2 + b_j^2] = \sigma_j^2
which shows that it is an unbiased estimator. Then let us compute the variance

    \mathrm{Var}[\tilde S_j] = E[(\tilde S_j - \sigma_j^2)^2] = E[\tilde S_j^2] - \sigma_j^4

where

    E[\tilde S_j^2] = \frac{1}{4} E[a_j^4 + 2a_j^2 b_j^2 + b_j^4] = \frac{1}{2} E[a_j^4] + \frac{\sigma_j^4}{2}.

Invoking the Gaussian assumption we get E[a_j^4] = 3\sigma_j^4, and finally we find that

    \mathrm{Var}[\tilde S_j] = \sigma_j^4,
or in other words, S̃_j has standard deviation equal to its expected value! This implies that while S̃_j is an unbiased estimator for the spectrum, we should expect a noisy-looking result, as we have indeed seen in Figures 4 and 5.
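The chain of identities above can be checked by direct simulation. The following sketch (numpy is assumed available; the record length, component index and variance are arbitrary illustrative choices) draws many realizations of a single component of (11.5), computes η̂_j with the convention (A.8), and confirms that S̃_j = 2|η̂_j|² has mean and standard deviation both equal to σ_j²:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 100.0, 256           # record length and number of samples (arbitrary)
t = np.arange(N) * T / N
sigma2 = 1.5                # variance sigma_j^2 of a_j and b_j (arbitrary)
j = 5                       # spectral component to inspect, omega_j = 2*pi*j/T

trials = 4000
S_tilde = np.empty(trials)
for m in range(trials):
    a, b = rng.normal(0.0, np.sqrt(sigma2), 2)
    eta = a * np.cos(2 * np.pi * j * t / T) + b * np.sin(2 * np.pi * j * t / T)
    # Fourier coefficient (A.8): eta_hat_j = (1/N) sum_n eta_n e^{+i omega_j t_n},
    # which is component j of numpy's inverse FFT; it equals (a_j + i b_j)/2
    eta_hat_j = np.fft.ifft(eta)[j]
    S_tilde[m] = 2.0 * np.abs(eta_hat_j) ** 2   # the estimator 2|eta_hat_j|^2

print(S_tilde.mean())   # approx sigma2 = 1.5: the estimator is unbiased
print(S_tilde.std())    # also approx 1.5: standard deviation equals the mean
```

The sample mean and sample standard deviation both come out near 1.5, illustrating why an unsmoothed periodogram always looks noisy no matter how long the record is.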
11.4 The equilibrium spectrum
Phillips (1958) first argued that the high frequency tail of the spectrum could be expected to obey the power law ω^{-5}. Later, several observations suggested that wind waves are better characterized by the power law ω^{-4}. In Figure 5 the two power laws are superposed on the estimated unsmoothed spectrum.
Dysthe et al. (2003) showed that the 3D shortcrested MNLS equation is the most simplified equation that reproduces the ω^{-4} power law for the frequency spectrum within the range of the first harmonic. Starting with initial conditions that do not obey any power law, they showed that the ω^{-4} power law is established within the range of the first harmonic after a time comparable to the growth time of the modulational instability.

Onorato et al. (2002b) showed that using the full Euler equations, the ω^{-4} power law is established over a much wider range than just the first harmonic, and the establishment of the power law for the higher spectral range happens after a longer time than that of the modulational instability.
12 Probability distributions of surface waves

12.1 Linear waves
Let the surface displacement be a linear superposition of simple-harmonic waves

    \eta(x, t) = \sum_j \left( a_j \cos(k_j\cdot r - \omega_j t) + b_j \sin(k_j\cdot r - \omega_j t) \right) = \sum_j c_j \cos(k_j\cdot r - \omega_j t + \theta_j)

where

    a_j = c_j \cos\theta_j, \qquad b_j = c_j \sin\theta_j

and c_j ≥ 0.
Our standard assumptions have been that a_j and b_j are statistically independent Gaussian variables with mean 0 and variance σ_j², and that the frequencies depend on the wave vectors through the linear dispersion relation ω_j = ω(k_j). The variance of the process is

    \sigma^2 = \sum_j \sigma_j^2.
We now derive the probability distributions for the amplitudes c_j and phases θ_j.

The cumulative distribution function for the amplitude c_j is

    F_{c_j}(z) = P\{\sqrt{a_j^2 + b_j^2} \le z\} = \int_{0}^{z} \frac{r}{\sigma_j^2}\, e^{-\frac{r^2}{2\sigma_j^2}}\, dr

and the probability density is

    f_{c_j}(z) = \begin{cases} \frac{z}{\sigma_j^2}\, e^{-\frac{z^2}{2\sigma_j^2}} & z \ge 0 \\ 0 & z < 0 \end{cases}

which is the Rayleigh distribution.

The cumulative distribution for the phase θ_j is

    F_{\theta_j}(\theta) = P\{\theta_j \le \theta\} = \int_{0}^{\theta} \frac{1}{2\pi}\, d\psi

and the probability density is

    f_{\theta_j}(\theta) = \begin{cases} \frac{1}{2\pi} & 0 \le \theta \le 2\pi \\ 0 & \text{otherwise} \end{cases}

which is the uniform distribution.
The energy density of spectral component (k_j, ω_j) is defined by

    \epsilon_j = \frac{1}{2}(a_j^2 + b_j^2)

and has the probability density

    f_{\epsilon_j}(z) = \begin{cases} \frac{1}{\sigma_j^2}\, e^{-\frac{z}{\sigma_j^2}} & z \ge 0 \\ 0 & z < 0 \end{cases}

which is the exponential distribution.
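These three distributions follow directly from the Gaussian assumption on a_j and b_j, and are easy to verify by sampling. A minimal sketch (numpy assumed; σ_j = 2 is an arbitrary choice) checks low-order moments of the amplitude, phase and energy density against the Rayleigh, uniform and exponential values:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_j = 2.0                 # component standard deviation (arbitrary)
n = 200_000

# a_j, b_j independent Gaussian with mean 0 and variance sigma_j^2
a = rng.normal(0.0, sigma_j, n)
b = rng.normal(0.0, sigma_j, n)

c = np.hypot(a, b)            # amplitude c_j = sqrt(a_j^2 + b_j^2)
theta = np.arctan2(b, a)      # phase, here on (-pi, pi] rather than (0, 2*pi)
eps = 0.5 * (a**2 + b**2)     # energy density eps_j

print(c.mean() / (sigma_j * np.sqrt(np.pi / 2)))  # Rayleigh mean: ratio near 1
print(theta.var() / (np.pi**2 / 3))               # uniform variance: near 1
print(eps.mean() / sigma_j**2)                    # exponential mean: near 1
```

The phase comes out on (−π, π] instead of (0, 2π); the uniform distribution over a full circle has the same variance π²/3 either way.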
12.2 Linear narrowbanded waves
In section 4.2 we introduced harmonic perturbation expansions for weakly nonlinear narrowbanded waves. This can now be related to the superposition of simple harmonic waves in the previous section 12.1:

    \eta(r, t) = \sum_j \left( a_j \cos(k_j\cdot r - \omega_j t) + b_j \sin(k_j\cdot r - \omega_j t) \right) = \sum_j c_j \cos(k_j\cdot r - \omega_j t + \theta_j)

    = \frac{1}{2} B\, e^{i(k_c x - \omega_c t)} + \text{c.c.} = |B| \cos(k_c x - \omega_c t + \arg B).

According to the narrowband assumption, the complex amplitude B has slow dependence on r and t. Necessarily, the narrowband assumption implies that σ_j² rapidly decays to zero for values of k_j not close to k_c = (k_c, 0). The magnitude |B| is called the linear envelope of the process.
In the limit of extremely small bandwidth, we may consider that only one index j contributes to the sum, and thus the limiting distribution for |B| is the Rayleigh distribution

    f_{|B|}(z) = \frac{z}{\sigma^2}\, e^{-\frac{z^2}{2\sigma^2}} \quad \text{for } z \ge 0.    (12.1)

In this limiting case, the probability distribution of crest heights is identical to the probability distribution of the upper envelope.
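The Rayleigh behavior of the envelope of a narrowbanded Gaussian process can be illustrated numerically. The sketch below (numpy and scipy assumed; the carrier frequency, bandwidth and component variances are arbitrary illustrative choices) builds an ensemble of narrowbanded realizations, extracts |B| as the modulus of the analytic signal (η plus i times its Hilbert transform), and checks the first two Rayleigh moments:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
trials, N = 200, 1024
t = np.linspace(0.0, 200.0, N, endpoint=False)
wc = 2.0                                  # carrier frequency in rad/s (arbitrary)
dw = wc * np.linspace(-0.03, 0.03, 21)    # narrow band: 3% relative bandwidth
sig_j = 0.1                               # standard deviation of each component

# eta[m, n] is realization m sampled at time t[n]
a = rng.normal(0.0, sig_j, (trials, dw.size))
b = rng.normal(0.0, sig_j, (trials, dw.size))
phase = np.multiply.outer(wc + dw, t)     # shape (components, times)
eta = a @ np.cos(phase) + b @ np.sin(phase)

envelope = np.abs(hilbert(eta, axis=1))   # |B| from the analytic signal
sigma = sig_j * np.sqrt(dw.size)          # total standard deviation of eta

print(envelope.mean() / (sigma * np.sqrt(np.pi / 2)))  # near 1: E|B| = sigma*sqrt(pi/2)
print((envelope**2).mean() / (2 * sigma**2))           # near 1: E|B|^2 = 2 sigma^2
```

Both moment ratios come out within a few percent of one; the residual discrepancy is statistical and from edge effects of the FFT-based Hilbert transform.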
In the limit that the bandwidth goes to zero, the wave height is twice the crest height, H = 2|B|, and is also Rayleigh distributed:

    f_H(z) = \frac{z}{4\sigma^2}\, e^{-\frac{z^2}{8\sigma^2}} \quad \text{for } z \ge 0.    (12.2)
Let us consider the distribution of the 1/N highest waves. The probability density for wave height (12.2) is shown in Figure 13. The threshold height H_* that divides the 1/N highest waves from the smaller waves is the solution of

    \int_{H_*}^{\infty} \frac{z}{4\sigma^2}\, e^{-\frac{z^2}{8\sigma^2}}\, dz = \frac{1}{N}

which is H_* = \sqrt{8\ln N}\,\sigma. The probability distribution for the 1/N highest waves is

    f_{H \ge H_*}(z) = N \frac{z}{4\sigma^2}\, e^{-\frac{z^2}{8\sigma^2}} \quad \text{for } z \ge H_*
and the mean height of the 1/N highest waves is

    H_{1/N} = N \int_{H_*}^{\infty} \frac{z^2}{4\sigma^2}\, e^{-\frac{z^2}{8\sigma^2}}\, dz = \left[ \sqrt{8\ln N} + \sqrt{2\pi}\, N\, \mathrm{erfc}\!\left(\sqrt{\ln N}\right) \right] \sigma

where erfc is the complementary error function. If we set N = 3 then we get H_{1/3} = 4.0043σ, which should be compared with the definition H_s = 4σ.
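The threshold and the mean of the 1/N highest waves are easy to evaluate with the standard library alone; the sketch below reproduces H_{1/3} ≈ 4.0043σ:

```python
import math

def H_threshold(N, sigma):
    """Threshold H* dividing off the 1/N highest waves: exp(-H*^2/(8 sigma^2)) = 1/N."""
    return math.sqrt(8.0 * math.log(N)) * sigma

def H_mean_highest(N, sigma):
    """Mean height of the 1/N highest waves under the Rayleigh distribution (12.2)."""
    return (math.sqrt(8.0 * math.log(N))
            + math.sqrt(2.0 * math.pi) * N * math.erfc(math.sqrt(math.log(N)))) * sigma

sigma = 1.0
print(H_threshold(3, sigma))      # about 2.9646*sigma
print(H_mean_highest(3, sigma))   # about 4.0043*sigma, compare with Hs = 4*sigma
```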
Figure 13. Highest 1/N waves, threshold height H∗ , definition sketch using Rayleigh
distribution (σ = 0.1, N = 3).
12.3 Second order nonlinear narrowbanded waves
At the second nonlinear order we must account for two types of nonlinear contributions. One is the effect of the cubic nonlinear term in the cubic nonlinear Schrödinger equation, and another is the second harmonic contribution to the reconstruction of the wave profile. If the first type of contribution can be neglected, then it is reasonable to assume that the first harmonic contribution to the harmonic perturbation expansion has a Gaussian distribution. Let us consider the expansion

    \eta(r, t) = \frac{1}{2} \left( B\, e^{i(k_c x - \omega_c t)} + \gamma B^2\, e^{2i(k_c x - \omega_c t)} \right) + \text{c.c.}
where γ is a constant that can be found with reference to section 4.
It is now natural to define the nonlinear upper and lower envelopes

    e_U = |B| + \gamma|B|^2, \qquad e_L = -|B| + \gamma|B|^2.
For small bandwidth the distribution of crest height is the same as the distribution of the
upper envelope eU , while the distribution of trough depth is the same as the distribution
of lower envelope eL . For the limit of vanishing bandwidth the distribution of wave
height is the same as the distribution of the distance between upper and lower envelope,
eU − eL = 2|B|, thus the Rayleigh distribution (12.2) is valid to second nonlinear order.
Assuming that |B| is Rayleigh distributed, we get the distribution for the second order nonlinear upper envelope

    f_{e_U}(z) = \frac{1}{2\gamma\sigma^2} \left( 1 - \frac{1}{\sqrt{1 + 4\gamma z}} \right) \exp\left\{ \frac{\sqrt{1 + 4\gamma z} - 1 - 2\gamma z}{(2\gamma\sigma)^2} \right\} \quad \text{for } z > 0.    (12.3)
A similar distribution can be found for the nonlinear lower envelope. These distributions
were first derived by Tayfun (1980).
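A quick numerical sanity check of (12.3) (numpy assumed; the grid and the small-γ value are arbitrary): the density integrates to one, and it collapses onto the Rayleigh density (12.1) as γ → 0:

```python
import numpy as np

def tayfun_upper_envelope_pdf(z, gamma, sigma):
    """Second-order nonlinear upper-envelope density (12.3)."""
    r = np.sqrt(1.0 + 4.0 * gamma * z)
    return (1.0 / (2.0 * gamma * sigma**2) * (1.0 - 1.0 / r)
            * np.exp((r - 1.0 - 2.0 * gamma * z) / (2.0 * gamma * sigma)**2))

def rayleigh_pdf(z, sigma):
    """Linear envelope density (12.1)."""
    return z / sigma**2 * np.exp(-z**2 / (2.0 * sigma**2))

sigma = 0.078                     # normalized Draupner value from the text
z = np.linspace(0.0, 1.0, 200_001)
dz = z[1] - z[0]

print(np.sum(tayfun_upper_envelope_pdf(z, 0.5, sigma)) * dz)   # close to 1
diff = np.abs(tayfun_upper_envelope_pdf(z, 1e-4, sigma) - rayleigh_pdf(z, sigma))
print(diff.max())                                              # close to 0
```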
For application to the Draupner wave field, we notice that the normalized standard deviation is σ = 0.078, and γ is 0.5 for infinite depth and 0.62 for the target depth. The probability densities for linear and nonlinear crest height are shown in Figure 14.
Figure 14. Probability density functions for crest height for the Draupner wave field.
Linear second axis left, logarithmic second axis right. —, linear Rayleigh distribution; – –,
second-order nonlinear Tayfun distribution for infinite depth; · · · , second-order nonlinear
Tayfun distribution for target depth.
12.4 Higher order nonlinear and broader banded waves
Tayfun distributions for higher-order nonlinear wave envelopes can readily be derived, and will give small corrections to those already found.

Linear crest distributions for broader banded waves are dealt with in a classical theory by Cartwright & Longuet-Higgins (1956).
A more novel problem is to assess the influence of nonlinearity in the spatiotemporal
evolution equations. It should be anticipated that nonlinearity in the evolution equation
will break the Gaussian distribution of the first-harmonic free waves, and therefore the
Rayleigh distribution of the linear crest heights as well.
For long-crested waves, nonlinearity in the spatiotemporal evolution does indeed produce deviation from the Gaussian/Rayleigh distribution for the first-harmonic free waves.
Moreover, there is evidence that increased intensity of extreme waves results when steep narrow-banded waves undergo rapid spectral change, establishing a broader-banded equilibrium spectrum. Numerical evidence has been provided by Onorato et al. (2002a), Janssen (2003) and Socquet–Juglard et al. (2005), in good agreement with experimental observations by Onorato et al. (2004).
For directionally distributed, short-crested waves, numerical evidence found by Onorato et al. (2002a) and Socquet–Juglard et al. (2005) suggests that there is much less
increase in intensity of extreme waves during rapid spectral change, and there is much
less deviation from the Gaussian/Rayleigh distribution for the first-harmonic free waves,
despite nonlinear spatiotemporal evolution. Socquet–Juglard et al. (2005) find evidence
that the distribution of crest height of the reconstructed sea surface is quite well described by the second-order nonlinear Tayfun distribution of the previous section 12.3.
This is indeed remarkable: A directionally distributed nonlinearly evolving sea with nonvanishing bandwidth is impudently well described by weakly non-Gaussian statistics for
vanishing bandwidth!
13 Return periods and return values
The return period is defined as the expected time within which an extreme event, exceeding some threshold, occurs once. The return value is the threshold that is exceeded once within the return period.
For example, for waves with constant period T_0 the 100-year wave height H_{100} is defined by the relationship

    P\{H \ge H_{100}\} = \frac{T_0}{100\ \text{years}}.
For design application to field sites, assessment of return periods and return values
requires the knowledge of the joint distribution of periods and heights over all the sea
states that characterize the site, and requires systematic observations over extended time.
Is the Draupner “New Year Wave” a freak? Let us make the following thought experiment: Imagine that the sea state of the Draupner wave time series is extended to arbitrary duration, and assess the return periods for the “New Year Wave” according to the distributions derived above. The characteristic period was previously found to be T_c = 12.8 s. The standard deviation was found to be σ = 2.98 m.
The exceedance probability for the Draupner wave height H_D = 25.6 m is, according to the Rayleigh distribution (12.2),

    P\{H \ge H_D\} = \exp\left( -\frac{H_D^2}{8\sigma^2} \right) = 9.86 \cdot 10^{-5}.
This is a 36 hour event. This estimate holds at first and second nonlinear order, and
irrespective of the assumed depth.
The exceedance probability for the Draupner crest height η_D = 18.5 m is, according to the Rayleigh distribution (12.1),

    P\{\eta \ge \eta_D\} = \exp\left( -\frac{\eta_D^2}{2\sigma^2} \right) = 4.28 \cdot 10^{-9}.
This is a 95 year event. This estimate holds at linear order irrespective of the assumed
depth.
Using the second-order nonlinear Tayfun distribution (12.3), with η_D and σ normalized as in section 12.3, the exceedance probability for the Draupner crest height is

    P\{\eta \ge \eta_D\} = \exp\left\{ \frac{\sqrt{1 + 4\gamma\eta_D} - 1 - 2\gamma\eta_D}{(2\gamma\sigma)^2} \right\} = \begin{cases} 1.26 \cdot 10^{-6} & \text{for infinite depth,} \\ 3.58 \cdot 10^{-6} & \text{for target depth.} \end{cases}
These are 118 and 41 day events, for infinite and target depth, respectively.
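The Rayleigh-based figures above follow from a few lines of arithmetic; a sketch using only the standard library (the constants are the values quoted in the text):

```python
import math

Tc = 12.8       # characteristic period (s)
sigma = 2.98    # standard deviation of surface elevation (m)
H_D = 25.6      # Draupner wave height (m)
eta_D = 18.5    # Draupner crest height (m)

# Rayleigh exceedance probabilities for height (12.2) and crest (12.1)
P_height = math.exp(-H_D**2 / (8.0 * sigma**2))
P_crest = math.exp(-eta_D**2 / (2.0 * sigma**2))

# Return period = characteristic period divided by the exceedance probability
T_height_hours = Tc / P_height / 3600.0
T_crest_years = Tc / P_crest / (3600.0 * 24.0 * 365.25)

print(f"{P_height:.2e}")            # 9.86e-05, a 36 hour event
print(f"{P_crest:.2e}")             # 4.28e-09, a 95 year event
print(round(T_height_hours), round(T_crest_years))
```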
These estimates are justified by the conclusions of Socquet–Juglard et al. (2005), that
the nonlinear evolution of a weakly nonlinear directional wave field gives an essentially
Gaussian first-harmonic behavior, and that the statistics of the combined first and second
nonlinear orders is essentially described by theory for vanishing bandwidth.
Now recall that Haver (2004) argues that the Draupner “New Year Wave” was not beyond design parameters for the platform, but basic engineering approaches did not suggest that such a large wave should occur in the mild sea state in which it occurred.
If our application of second-order nonlinear distributions for wave and crest heights is
correct, then one should certainly not be flabbergasted by the wave height, and probably
not by the crest height either, given the sea state in which it occurred.
Many people have suggested that extreme waves, such as the Draupner wave, can be
produced by exotic third-order nonlinear behaviour such as “breather solutions” of the
nonlinear Schrödinger equation (Dysthe & Trulsen 1999; Osborne et al. 2000, etc.). If
such effects enter in addition to the above second-order behavior, there should be even
more reason to expect more waves like the Draupner wave.
A Continuous and discrete Fourier transforms

A.1 Continuous Fourier transform of continuous function f(t) on an infinite interval
The Fourier transform pair is

    \hat f(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{i\omega t}\, dt,    (A.1)

    f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat f(\omega)\, e^{-i\omega t}\, d\omega.    (A.2)
Substituting one into the other, it is worthwhile to note the integral form of the Dirac delta function

    \delta(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega t}\, d\omega.    (A.3)
Parseval’s theorem states that

    \int_{-\infty}^{\infty} |f(t)|^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |\hat f(\omega)|^2\, d\omega,    (A.4)

which is derived substituting either (A.1) or (A.2) and then using (A.3).
A.2 Fourier series of continuous function f(t) on an interval of finite length T
We want to write

    f(t) = \sum_{j=-\infty}^{\infty} \hat f_j\, e^{-i\omega_j t}    (A.5)

where ω_j = 2πj/T. Using the L² inner product

    \langle f(t), g(t) \rangle = \int_{0}^{T} f(t)\, g^*(t)\, dt,    (A.6)
looking at the system of complex exponentials, we find that

    \langle e^{i\omega_j t}, e^{i\omega_l t} \rangle = T \delta_{j,l}    (A.7)

where δ_{j,l} is the Kronecker delta function. Thus the complex exponentials are orthogonal. Taking the inner product of (A.5) and a complex exponential and using orthogonality, we get

    \hat f_j = \frac{1}{T} \int_{0}^{T} f(t)\, e^{i\omega_j t}\, dt.    (A.8)
Parseval’s theorem states that

    \int_{0}^{T} |f(t)|^2\, dt = T \sum_{j=-\infty}^{\infty} |\hat f_j|^2,    (A.9)

which is derived substituting (A.8) into the left-hand side of (A.9) and then using (A.7).
A.3 Discrete Fourier Transform (DFT) of series f_n with finite number of points N
We want to write

    f_n = \sum_{j=0}^{N-1} \tilde f_j\, e^{-i\omega_j t_n}    (A.10)

where ω_j = 2πj/T and t_n = nT/N. The t_n are known as collocation points. Using the l² inner product

    \langle f_n, g_n \rangle = \sum_{n=0}^{N-1} f_n g_n^*,    (A.11)
looking at the system of complex exponentials, we find that

    \langle e^{i\omega_j t_n}, e^{i\omega_l t_n} \rangle = N \delta_{j,l}    (A.12)

where δ_{j,l} is the Kronecker delta function. Thus the complex exponentials are orthogonal. Taking the inner product of (A.10) and a complex exponential and using orthogonality, we get

    \tilde f_j = \frac{1}{N} \sum_{n=0}^{N-1} f_n\, e^{i\omega_j t_n}.    (A.13)
Note: The FFT (Fast Fourier Transform) is an algorithm that computes the DFT in O(N log N) rather than O(N²) steps. Matlab/octave/etc. have routines fft and ifft that compute the DFT transform pair (A.10) and (A.13); care needs to be taken to keep track of the sign of the exponent and where one divides by N!
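As an illustration of this convention bookkeeping (numpy assumed): with numpy's sign and normalization choices, the analysis formula (A.13) is the inverse FFT and the reconstruction (A.10) is the forward FFT, and Parseval's relation (A.14) then holds exactly:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
f = rng.normal(size=N) + 1j * rng.normal(size=N)

# In numpy's conventions the pair (A.13)/(A.10) reads:
#   f_tilde_j = (1/N) sum_n f_n e^{+i omega_j t_n}  ->  np.fft.ifft(f)
#   f_n       = sum_j f_tilde_j e^{-i omega_j t_n}  ->  np.fft.fft(f_tilde)
f_tilde = np.fft.ifft(f)       # forward transform in the sense of (A.13)
f_back = np.fft.fft(f_tilde)   # reconstruction in the sense of (A.10)

print(np.allclose(f_back, f))                          # True: exact round trip
# Parseval (A.14): sum |f_n|^2 = N * sum |f_tilde_j|^2
print(np.allclose(np.sum(np.abs(f)**2),
                  N * np.sum(np.abs(f_tilde)**2)))     # True
```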
Parseval’s theorem states that

    \sum_{n=0}^{N-1} |f_n|^2 = N \sum_{j=0}^{N-1} |\tilde f_j|^2    (A.14)
which is derived substituting either (A.13) or (A.10) and then using (A.12).
In the DFT inverse transform (A.10) we may rotate the sum indices cyclically, after making the periodic extension f̃_{j+N} = f̃_j. For reconstruction at the collocation points we may start at an arbitrary first index α:

    \sum_{j=\alpha}^{\alpha+N-1} \tilde f_j\, e^{-i\omega_j t_n} = \sum_{j=0}^{N-1} \tilde f_j\, e^{-i\omega_j t_n}.    (A.15)
Thus cyclic permutation gives the same answer for reconstruction at the collocation points
tn , but what about interpolation between the collocation points? It is tempting to use
(A.10) with arbitrary t. In that case we usually desire an interpolation that oscillates as
little as possible, so the frequencies employed in the reconstruction formula should be as
small as possible. This is achieved by setting α ≈ N/2.
A.4 Relationship between Fourier series and DFT
It is readily computed that

    \tilde f_j = \sum_{n=-\infty}^{\infty} \hat f_{j+nN}.    (A.16)

This superposition of Fourier components is known as aliasing. Note that if f(t) has sufficiently small bandwidth, or alternatively N is chosen sufficiently large for constant interval length T, such that the entire bandwidth is represented within the lowest N Fourier components, then f̃_j ≈ f̂_j and the reconstruction formula

    f(t) \approx \sum_{j=-N/2}^{N/2} \tilde f_j\, e^{-i\omega_j t}    (A.17)

gives an extremely good interpolation between the collocation points.
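The effect of the starting index α can be demonstrated directly (numpy assumed; the test function and the evaluation point are arbitrary illustrative choices): interpolating a smooth periodic function with the raw indices j = 0, …, N−1 oscillates wildly between collocation points, while the centered choice α = −N/2 reproduces the function to near machine precision:

```python
import numpy as np

N, T = 32, 2 * np.pi
t_n = np.arange(N) * T / N
f = np.exp(np.cos(t_n))              # a smooth T-periodic test function
f_tilde = np.fft.ifft(f)             # DFT coefficients in the convention (A.13)

t = 0.37                             # an arbitrary point between collocation points
w1 = 2 * np.pi / T                   # fundamental frequency
j_raw = np.arange(N)                 # alpha = 0: frequencies 0, ..., N-1
j_ctr = np.arange(-N // 2, N // 2)   # alpha = -N/2: centered frequencies

interp_raw = np.sum(f_tilde[j_raw % N] * np.exp(-1j * w1 * j_raw * t))
interp_ctr = np.sum(f_tilde[j_ctr % N] * np.exp(-1j * w1 * j_ctr * t))

exact = np.exp(np.cos(t))
print(abs(interp_raw - exact))       # order one: wildly oscillatory choice
print(abs(interp_ctr - exact))       # near machine precision
```

Both sums agree at every collocation point, by (A.15); only off the grid does the choice of α matter.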
Bibliography
U. Brinch-Nielsen and I. G. Jonsson. Fourth order evolution equations and stability
analysis for Stokes waves on arbitrary water depth. Wave Motion, 8:455–472, 1986.
D. E. Cartwright and M. S. Longuet-Higgins. The statistical distribution of the maxima
of a random function. Proc. R. Soc. Lond. A, 237:212–232, 1956.
K. B. Dysthe. Note on a modification to the nonlinear Schrödinger equation for application to deep water waves. Proc. R. Soc. Lond. A, 369:105–114, 1979.
K. B. Dysthe and K. Trulsen. Note on breather type solutions of the NLS as models for
freak-waves. Physica Scripta, T82:48–52, 1999.
K. B. Dysthe, K. Trulsen, H. E. Krogstad, and H. Socquet-Juglard. Evolution of a narrow
band spectrum of random surface gravity waves. J. Fluid Mech., 478:1–10, 2003.
S. Haver. A possible freak wave event measured at the Draupner jacket January 1 1995.
http://www.ifremer.fr/web-com/stw2004/rw/fullpapers/walk_on_haver.pdf.
In Rogue Waves 2004, pages 1–8, 2004.
P. A. E. M. Janssen. Nonlinear four-wave interactions and freak waves. J. Phys. Ocean.,
33:863–884, 2003.
D. Karunakaran, M. Bærheim, and B. J. Leira. Measured and simulated dynamic response of a jacket platform. In C. Guedes-Soares, M. Arai, A. Naess, and N. Shetty,
editors, Proceedings of the 16th International Conference on Offshore Mechanics and
Arctic Engineering, volume II, pages 157–164. ASME, 1997.
E. Kit and L. Shemer. Spatial versions of the Zakharov and Dysthe evolution equations
for deep-water gravity waves. J. Fluid Mech., 450:201–205, 2002.
V. P. Krasitskii. On reduced equations in the Hamiltonian theory of weakly nonlinear
surface-waves. J. Fluid Mech., 272:1–20, 1994.
E. Lo and C. C. Mei. A numerical study of water-wave modulation based on a higher-order nonlinear Schrödinger equation. J. Fluid Mech., 150:395–416, 1985.
E. Y. Lo and C. C. Mei. Slow evolution of nonlinear deep water waves in two horizontal
directions: A numerical study. Wave Motion, 9:245–259, 1987.
M. Onorato, A. R. Osborne, and M. Serio. Extreme wave events in directional, random
oceanic sea states. Phys. Fluids, 14:L25–L28, 2002a.
M. Onorato, A. R. Osborne, M. Serio, L. Cavaleri, C. Brandini, and C. T. Stansberg.
Observation of strongly non-Gaussian statistics for random sea surface gravity waves
in wave flume experiments. Phys. Rev. E, 70:1–4, 2004.
M. Onorato, A. R. Osborne, M. Serio, D. Resio, A. Pushkarev, V. E. Zakharov, and
C. Brandini. Freely decaying weak turbulence for sea surface gravity waves. Phys.
Rev. Lett., 89:144501–1–144501–4, 2002b.
A. R. Osborne, M. Onorato, and M. Serio. The nonlinear dynamics of rogue waves and
holes in deep-water gravity wave trains. Physics Letters A, 275:386–393, 2000.
A. Papoulis. Probability, random variables, and stochastic processes. McGraw-Hill, 1984.
O. M. Phillips. The equilibrium range in the spectrum of wind-generated waves. J. Fluid
Mech., 4:426–34, 1958.
Y. V. Sedletsky. The fourth-order nonlinear Schrödinger equation for the envelope of
Stokes waves on the surface of a finite-depth fluid. J. Exp. Theor. Phys., 97:180–193,
2003.
H. Socquet-Juglard, K. Dysthe, K. Trulsen, H. E. Krogstad, and J. Liu. Probability
distributions of surface gravity waves during spectral changes. J. Fluid Mech., 2005
in press.
M. Stiassnie. Note on the modified nonlinear Schrödinger equation for deep water waves.
Wave Motion, 6:431–433, 1984.
M. A. Tayfun. Narrow-band nonlinear sea waves. J. Geophys. Res., 85:1548–1552, 1980.
K. Trulsen. Simulating the spatial evolution of a measured time series of a freak wave.
In M. Olagnon and G. Athanassoulis, editors, Rogue Waves 2000, pages 265–273.
Ifremer, 2001.
K. Trulsen and K. B. Dysthe. A modified nonlinear Schrödinger equation for broader
bandwidth gravity waves on deep water. Wave Motion, 24:281–289, 1996.
K. Trulsen and K. B. Dysthe. Freak waves — a three-dimensional wave simulation. In
Proceedings of the 21st Symposium on Naval Hydrodynamics, pages 550–560. National
Academy Press, 1997a.
K. Trulsen and K. B. Dysthe. Frequency downshift in three-dimensional wave trains in
a deep basin. J. Fluid Mech., 352:359–373, 1997b.
K. Trulsen, I. Kliakhandler, K. B. Dysthe, and M. G. Velarde. On weakly nonlinear
modulation of waves on deep water. Phys. Fluids, 12:2432–2437, 2000.
D. A. G. Walker, P. H. Taylor, and R. Eatock Taylor. The shape of large surface waves
on the open sea and the Draupner New Year wave. Appl. Ocean Res., 26:73–83, 2004.
V. E. Zakharov. Stability of periodic waves of finite amplitude on the surface of a deep
fluid. J. Appl. Mech. Tech. Phys., 9:190–194, 1968.
52