MASSACHUSETTS INSTITUTE OF TECHNOLOGY
RESEARCH LABORATORY OF ELECTRONICS
September 1, 1950
Technical Report No. 181
APPLICATION OF STATISTICAL METHODS TO COMMUNICATION PROBLEMS
Y. W. Lee
Abstract
The basic ideas and tools in the statistical theory of communication are discussed. The methods of the theory in the formulation of some important communication problems are outlined. Some techniques which have been developed for application to practical problems, and certain laboratory results in support of the theory, are described. To indicate the possibility of practical application, primarily with respect to improvement of signal-to-noise ratio, experimental results are presented.
The research reported in this document was made possible
through support extended the Massachusetts Institute of Technology, Research Laboratory of Electronics, jointly by the Army
Signal Corps, the Navy Department (Office of Naval Research)
and the Air Force (Air Materiel Command), under Signal Corps
Contract No. W36-039-sc-32037, Project No. 102B; Department
of the Army Project No. 3-99-10-022.
CONTENTS

Introduction
1. Time Functions in a Communication System
2. Generalized Harmonic Analysis
   (a) Periodic Functions
   (b) Aperiodic Functions
   (c) Random Functions
   (d) Summary
3. Measurement of Correlation Functions
4. Characterization of a Random Process
5. Separation of Periodic and Random Components
6. Filtering and Prediction
7. Use of Random Noise in System Characteristic Determination
8. Detection of Repetitive Signal in Random Noise
APPLICATION OF STATISTICAL METHODS TO COMMUNICATION PROBLEMS
Introduction
A significant advance in communication engineering in recent years is the development
of a theory of communication in which the methods and techniques of the statistician have
augmented those of the communication engineer. The basis for the new development is
the concept that the flow of information, which is the primary concern of a communication system, is a statistical phenomenon. In addition to providing effective and practical
methods for the solution of a number of problems which have presented considerable difficulty
under classical theory, statistical theory in the present state of development has already
indicated the need and the method for recasting certain accepted theories. It has also
indicated the possibility of new and effective systems of transmission, reception, and
detection.
Among the many contributors in the new field of development is Norbert Wiener. His
work on generalized harmonic analysis (ref. 1), his theory of statistical filtering and prediction (ref. 2), and his theory of cybernetics (ref. 3) have had a measure of influence in
most of the recent work of other contributors.
In this paper the basic ideas and tools in the statistical theory are discussed. The
methods of the theory in the formulation of some important communication problems are
outlined. Some techniques which have been developed for application to practical problems,
and certain laboratory results in support of the theory are described. To indicate the
possibility of practical application, primarily with respect to improvement of signal-to-noise ratio, experimental results are presented.
1. Time Functions in a Communication System
In a communication system, varying quantities such as currents and voltages, distributed in time, are processed during their passage through the system in a multitude
of ways for the purpose of producing the desired results. Time functions also serve as
processing agents within the system. These functions, which are usually continuous,
may be classified as periodic, aperiodic, and random.
Among the great variety of functions which a periodic wave performs in a communication system is the carrying of information, as it does for instance in a pulse-type radar set when the target is stationary. However, it is important to note that a periodic wave does not maintain a continuous flow of information. The reason is that a periodic wave is completely specified by its amplitude and phase spectrums or by one complete period and knowledge of its periodic behavior. Once this information is given to the receiver, continuation of its transmission does not add further information.
Aperiodic functions of time are usually associated with transient phenomena. For analysis, an aperiodic function is represented by an expression which determines all its values for all time. Since its complete specification at the outset is possible, it may represent information in a restricted manner only. A code number instead of the entire time function does just as well in identifying the particular function involved. Obviously an aperiodic function cannot represent a flow of information, although it is essential for many other purposes in a communication system.
As long as information is kept in a steady flow, the receiver is uncertain of forthcoming events, so that what he or the machine receives is a series of selections made by the sender from a finite set of all possible choices. When the receiver has full knowledge of future events, then whatever "message" he continues to receive actually contains no information. Clearly a function which represents a message should be of the random type. Of course it is necessary to be more specific when the word "random" is used, but for the moment let the word be used to imply that the fluctuating instantaneous variation of the phenomenon is not subject to precise prediction.
The term "message" may be generalized to mean an information-carrying function
which may take any one of a great variety of forms.
Aside from the obvious ones con-
sisting of spoken or written words, wind gusts on an airplane equipped with automatic
control, temperature fluctuations in an industrial process,
electrical impulses in the
nervous system of an animal and barometric changes in weather forecast are examples
of messages.
Disturbances such as shot noise and thermal noise are random functions. The term "noise" may also be generalized to mean a fluctuating disturbance while a message is being transmitted in a system. The disturbance need not be a noise in the ordinary sense but a message belonging to a nearby channel of transmission. Thus a message in one channel is a noise in another where it is not wanted.
2. Generalized Harmonic Analysis
The harmonic analysis of periodic and aperiodic functions is well developed, and engineers have made use of the mathematical development in its many extended forms in the solution of their problems. However, for many years communication engineers have employed the tools of analysis for periodic and aperiodic functions in the attack on problems having to do with random phenomena. A basic reason for adopting this procedure is that classical theory fails to realize that a communication problem is in essence a statistical one. With the development of a statistical communication theory it is necessary that the basic tools available for analysis be well understood. In order to show the extension of harmonic analysis for the inclusion of random phenomena, the present discussion begins with periodic functions.
a. Periodic functions
The Fourier representation of a periodic function of time t of general form is
F(n) e j n'lt
f(t) = E
n=-
-2-
(1)
.
where
is the fundamental angular frequency and F(n) is the complex line spectrum
given by the expression
1
0
in which T 1 is a complete period.
By application of the multiplication theorem for periodic functions, which states that if f_1(t) and f_2(t) are two periodic functions of the same fundamental frequency and F_1(n) and F_2(n) are their respective complex spectrums, then

\sum_{n=-\infty}^{\infty} \bar{F}_1(n) F_2(n) e^{j n \omega_1 \tau} = \frac{1}{T_1} \int_0^{T_1} f_1(t) f_2(t + \tau) \, dt ,    (3)

it is possible to extend harmonic analysis from that of amplitude and phase to that of power. For this purpose let f_1(t) = f_2(t), so that (3) becomes

\sum_{n=-\infty}^{\infty} |F_1(n)|^2 e^{j n \omega_1 \tau} = \frac{1}{T_1} \int_0^{T_1} f_1(t) f_1(t + \tau) \, dt .    (4)

The expression on the right side of (4) involves the following operations: 1) the given periodic function f_1(t) is displaced by the time \tau, 2) the product of the given function and its displaced form is taken, 3) the average of the product over a complete period is obtained. This process generates a function with the displacement \tau as the independent variable and is called an autocorrelation function. The multiplication theorem in the form (4) states that the same function is obtainable by the following procedure: 1) the power spectrum of f_1(t) is formed by multiplying the complex spectrum by its conjugate, thus squaring the amplitudes and canceling phase angles; 2) the harmonic components are synthesized according to the power spectrum.
If the autocorrelation function is represented by \phi_{11}(\tau), that is

\phi_{11}(\tau) = \frac{1}{T_1} \int_0^{T_1} f_1(t) f_1(t + \tau) \, dt ,    (5)

and the power spectrum by \Phi_{11}(n), that is

\Phi_{11}(n) = |F_1(n)|^2 ,    (6)

then (4) reads

\phi_{11}(\tau) = \sum_{n=-\infty}^{\infty} \Phi_{11}(n) e^{j n \omega_1 \tau} .    (7)

Physical reasoning or examination of (4) shows that \phi_{11}(\tau) is an even periodic function which retains the fundamental and all harmonics of the given function but which drops all phase angles.
If f_1(t) has the general form

f_1(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos n\omega_1 t + b_n \sin n\omega_1 t) ,    (8)

then

\phi_{11}(\tau) = \frac{a_0^2}{4} + \frac{1}{2} \sum_{n=1}^{\infty} (a_n^2 + b_n^2) \cos n\omega_1 \tau .    (9)

For example, if f_1(t) has the form shown in Fig. 1, then the autocorrelation function of f_1(t) is the function given in Fig. 2. The autocorrelation function in this instance is easily obtained by the graphical interpretation of (5) already given. For the simple case of

f_1(t) = a_1 \cos(\omega_1 t + \theta_1)    (10)

the autocorrelation function is

\phi_{11}(\tau) = \frac{a_1^2}{2} \cos \omega_1 \tau .    (11)
Fig. 1 Periodic rectangular wave

Fig. 2 Autocorrelation function of rectangular wave of Fig. 1
Since \phi_{11}(\tau) is a periodic function and has the form (7) as a Fourier series, it is permissible to write the power spectrum \Phi_{11}(n) as

\Phi_{11}(n) = \frac{1}{T_1} \int_0^{T_1} \phi_{11}(\tau) e^{-j n \omega_1 \tau} \, d\tau .    (12)

In other words, harmonic analysis of the autocorrelation function of a periodic function f_1(t) yields the power spectrum of f_1(t), and synthesis of the harmonics regains the autocorrelation function.
If calculation of the power spectrum of a periodic function is the objective, then obviously it is unnecessary to consider correlation. However, the purpose of this discussion is to bring out the fact that a reciprocal relationship between power spectrum and autocorrelation exists, and that it will help make clear a similar relationship for random functions. When the phenomenon is random and is of the type to be defined, autocorrelation becomes an effective and indispensable process in analysis.
The crosscorrelation between two periodic functions of the same fundamental frequency brings out some interesting facts. In the multiplication theorem (3) let

\phi_{12}(\tau) = \frac{1}{T_1} \int_0^{T_1} f_1(t) f_2(t + \tau) \, dt    (13)

be the crosscorrelation function between f_1(t) and f_2(t) and

\Phi_{12}(n) = \bar{F}_1(n) F_2(n)    (14)

be the cross power spectrum. With these symbols the theorem takes the form

\phi_{12}(\tau) = \sum_{n=-\infty}^{\infty} \Phi_{12}(n) e^{j n \omega_1 \tau} .    (15)
The cross power spectrum is in general a complex function as (14) indicates. Consequently the crosscorrelation function (15) retains phase information in contrast with autocorrelation, which discards it. It is also noted that

\phi_{21}(\tau) = \frac{1}{T_1} \int_0^{T_1} f_2(t) f_1(t + \tau) \, dt    (16)

and

\Phi_{21}(n) = \bar{\Phi}_{12}(n)    (17)

so that

\phi_{12}(\tau) = \phi_{21}(-\tau) .    (18)

Since \phi_{12}(\tau) is a periodic function with the Fourier expansion (15), the cross power spectrum is

\Phi_{12}(n) = \frac{1}{T_1} \int_0^{T_1} \phi_{12}(\tau) e^{-j n \omega_1 \tau} \, d\tau .    (19)

This expression and (15) form a Fourier transform pair for the crosscorrelation of two periodic functions of the same fundamental frequency.
In terms of the explicit expansions

f_1(t) = \frac{a_{10}}{2} + \sum_{n=1}^{\infty} C_{1n} \cos(n\omega_1 t + \theta_{1n})    (20)

and

f_2(t) = \frac{a_{20}}{2} + \sum_{n=1}^{\infty} C_{2n} \cos(n\omega_1 t + \theta_{2n}) ,    (21)

the crosscorrelation function according to (13) is

\phi_{12}(\tau) = \frac{a_{10} a_{20}}{4} + \frac{1}{2} \sum_{n=1}^{\infty} C_{1n} C_{2n} \cos(n\omega_1 \tau + \theta_{2n} - \theta_{1n}) ,    (22)

from which it is clear that crosscorrelation retains only those harmonics which are present in both f_1(t) and f_2(t), and the phase differences of these harmonics. Hence if f_1(t) is a known function with all harmonics present and f_2(t) is an unknown function, crosscorrelation is capable of extracting all amplitude and phase information from the unknown function, provided that both functions have the same fundamental frequency.
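The content of (22) is readily checked numerically. The following minimal Python sketch, added here as an illustration, crosscorrelates two sampled periodic functions and compares the result with the closed form of (22); the amplitudes, phases, and frequency are arbitrary assumptions, not values from the report.

```python
import numpy as np

# Two periodic functions with the same fundamental frequency w1.
# f1 carries harmonics 1 and 2; f2 carries harmonics 2 and 3. By (22)
# the crosscorrelation should retain only the shared harmonic n = 2.
w1 = 2 * np.pi * 50.0          # fundamental angular frequency (arbitrary)
T1 = 2 * np.pi / w1            # one complete period
N = 4096                       # samples per period
t = np.arange(N) * (T1 / N)

f1 = 1.0 * np.cos(1 * w1 * t + 0.3) + 0.8 * np.cos(2 * w1 * t + 1.1)
f2 = 0.5 * np.cos(2 * w1 * t - 0.4) + 0.7 * np.cos(3 * w1 * t + 0.9)

# phi_12(tau) = (1/T1) * integral over a period of f1(t) f2(t+tau) dt,
# evaluated as a discrete average with a circular shift for t + tau.
m = 100                        # displacement tau = m * (T1 / N)
tau = m * (T1 / N)
measured = np.mean(f1 * np.roll(f2, -m))

# Closed form from (22): only the common harmonic n = 2 survives, with
# amplitude C_12 C_22 / 2 and phase difference theta_22 - theta_12.
expected = 0.5 * 0.8 * 0.5 * np.cos(2 * w1 * tau + (-0.4 - 1.1))
print(measured, expected)      # the two values agree to roundoff
```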
b. Aperiodic functions
The aperiodic functions to be considered here are confined to those whose integral squares are finite, that is

\int_{-\infty}^{\infty} f^2(t) \, dt \quad \text{finite} .    (23)

Analogous to (3), the multiplication theorem for aperiodic functions is

2\pi \int_{-\infty}^{\infty} \bar{F}_1(\omega) F_2(\omega) e^{j\omega\tau} \, d\omega = \int_{-\infty}^{\infty} f_1(t) f_2(t + \tau) \, dt ,    (24)

in which F_1(\omega) and F_2(\omega) are the Fourier transforms of f_1(t) and f_2(t) respectively. If f_1(t) = f_2(t), the theorem is

\int_{-\infty}^{\infty} f_1(t) f_1(t + \tau) \, dt = 2\pi \int_{-\infty}^{\infty} |F_1(\omega)|^2 e^{j\omega\tau} \, d\omega .    (25)
In a manner similar to the autocorrelation of periodic functions, let the left-hand member of (25) be represented by \phi_{11}(\tau), that is

\phi_{11}(\tau) = \int_{-\infty}^{\infty} f_1(t) f_1(t + \tau) \, dt ,    (26)

which is the autocorrelation function of the aperiodic function f_1(t). Unlike the autocorrelation of periodic functions, (26) does not involve averaging over an interval because of condition (23). Let

\Phi_{11}(\omega) = 2\pi |F_1(\omega)|^2 .    (27)

If f_1(t) represents a voltage or current and the load is normalized to one ohm of resistance, then \Phi_{11}(\omega) is the energy density spectrum of f_1(t) in watt-seconds per unit of angular frequency. This statement follows from the fact that when \tau = 0, (25) becomes

\int_{-\infty}^{\infty} f_1^2(t) \, dt = \int_{-\infty}^{\infty} \Phi_{11}(\omega) \, d\omega ,    (28)

whose physical meaning is evident. In mathematical literature (28) is the well-known Parseval theorem.
The multiplication theorem (25) may now be written as

\phi_{11}(\tau) = \int_{-\infty}^{\infty} \Phi_{11}(\omega) e^{j\omega\tau} \, d\omega ,    (29)

and application of Fourier transform theory yields

\Phi_{11}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{11}(\tau) e^{-j\omega\tau} \, d\tau .    (30)

Because \phi_{11}(\tau) and \Phi_{11}(\omega) are even functions, (29) and (30) are simplified to read

\phi_{11}(\tau) = \int_{-\infty}^{\infty} \Phi_{11}(\omega) \cos \omega\tau \, d\omega    (31)

and

\Phi_{11}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{11}(\tau) \cos \omega\tau \, d\tau .    (32)

Equations (31) and (32) state that the autocorrelation function and the energy density spectrum of a function of finite total energy are Fourier cosine transforms of each other. It is not difficult to see that a similar relationship holds for the crosscorrelation of two functions of finite total energy.
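As a quick numerical check of the energy relations (27) and (28), the following sketch computes the energy of a pulse in both domains; the Gaussian pulse and the grids are arbitrary assumptions chosen for this added illustration.

```python
import numpy as np

# Numerical check of the Parseval relation (28) for a square-integrable
# pulse, using the report's transform convention (a 1/2pi factor in F).
dt = 1e-3
t = np.arange(-1.0, 1.0, dt)
f = np.exp(-40.0 * t**2)          # an aperiodic function of finite energy

energy_time = np.sum(f**2) * dt   # left side of (28)

# F(w) = (1/2pi) * integral of f(t) e^{-jwt} dt, as a Riemann sum.
w = np.linspace(-150.0, 150.0, 1501)
F = np.exp(-1j * np.outer(w, t)) @ f * dt / (2 * np.pi)

# Energy density spectrum (27) and its integral, the right side of (28).
Phi = 2 * np.pi * np.abs(F) ** 2
dw = w[1] - w[0]
energy_freq = np.sum(Phi) * dw

print(energy_time, energy_freq)   # both approximate sqrt(pi/80) ~ 0.198
```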
c. Random functions
Random functions of time are generated by complex mechanisms. Repeated experiments performed under similar conditions produce results of the same type but not identical in instantaneous variation. If experiments of long duration are performed with a large number of systems of like character under essentially the same circumstances, the statistical characteristics of the set of functions thus obtained are invariant under a shift in time. The ensemble of random functions then represents a random process which is stationary in time. The statistical characteristics concerned are discussed in a later section.

In a stationary random process, none of the member functions has a waveform the same as another in the process. When a random process is analyzed, the objective is not the reproduction of a particular waveform, for no repeated experiment in a similar process can reproduce a waveform obtained in another experiment in the past. To characterize the random process, functions expressing its regularity must be used. These characteristic functions are naturally the statistical distributions. But before this matter is discussed let it be noted that one useful characteristic may be assumed to be common to all member functions of a stationary random process. This characteristic is the power density spectrum.
The power density spectrum of a random function cannot be obtained by direct application of Fourier series or integral theory because a random function does not come under the restrictions of either theory as it stands. However, an extension of the Fourier integral theory to the harmonic analysis of random functions is possible (ref. 1). In Fig. 3 let the random wave f_1(t) represent a member function of a stationary random process which does not have a hidden periodic component. Because of the stationary character of the process an arbitrary origin may be chosen as shown. Let a section of duration 2T be taken from the wave and let it be

f_{1T}(t) = \begin{cases} f_1(t) , & -T < t < T \\ 0 , & \text{elsewhere} \end{cases}    (33)

Fig. 3 A member function of a stationary random process
Since f_{1T}(t) is of integrable square, Fourier integral theory may be applied for obtaining its complex amplitude and phase spectrum as

F_{1T}(\omega) = \frac{1}{2\pi} \int_{-T}^{T} f_{1T}(t) e^{-j\omega t} \, dt .    (34)

Now in view of the preceding discussion, an analytic process which discards phase information but which retains amplitude information is desirable in the study of random phenomena. Such a process has already been used here in the analysis of periodic and aperiodic functions and is found in the multiplication theorem. For f_{1T}(t) the theorem reads, according to (25),

2\pi \int_{-\infty}^{\infty} |F_{1T}(\omega)|^2 e^{j\omega\tau} \, d\omega = \int_{-T}^{T} f_{1T}(t) f_{1T}(t + \tau) \, dt .    (35)
It is recalled that 2\pi |F_{1T}(\omega)|^2 is the energy density spectrum of f_{1T}(t). As the interval T is extended to infinity it is necessary to change energy considerations to those of power, because the mean power of a random function is assumed to be finite in this analysis although the total energy is infinite. Accordingly, in the limiting form of the theorem (35) for T \to \infty, that is, in the expression

\int_{-\infty}^{\infty} \left[ \lim_{T\to\infty} \frac{\pi}{T} |F_{1T}(\omega)|^2 \right] e^{j\omega\tau} \, d\omega = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_{1T}(t) f_{1T}(t + \tau) \, dt ,    (36)

the term in brackets on the left is the power density spectrum of the random wave in watts per unit angular frequency if f_1(t) is a current or voltage and the load is one ohm of resistance. This fact is clear from (36) for \tau = 0. Let the power density spectrum be

\Phi_{11}(\omega) = \lim_{T\to\infty} \frac{\pi}{T} |F_{1T}(\omega)|^2    (37)

and let

\phi_{11}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_1(t) f_1(t + \tau) \, dt ,    (38)

which is the autocorrelation function of the random function f_1(t). The expression (36) now reads

\phi_{11}(\tau) = \int_{-\infty}^{\infty} \Phi_{11}(\omega) e^{j\omega\tau} \, d\omega .    (39)
To assume that the power of f_1(t) is finite is to say that

\phi_{11}(0) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_1^2(t) \, dt    (40)

is finite. It follows from (39) that

\int_{-\infty}^{\infty} \Phi_{11}(\omega) \, d\omega    (41)

is finite. Furthermore, because of the absence of periodic components in f_1(t), \Phi_{11}(\omega) contains no points of infinite discontinuity. Fourier transformation theory is applicable to (39) and it yields the result that

\Phi_{11}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{11}(\tau) e^{-j\omega\tau} \, d\tau .    (42)
Although f_1(t) is a random function with fluctuating amplitudes, its autocorrelation function given by (38) is a well-defined function for all values of \tau, and has a Fourier transform. On the assumption that the power density spectrum is the same for every member function of the random process, the autocorrelation function becomes another characteristic of the process. In spite of the fact that experimentally \phi_{11}(\tau) can be obtained only from the past history of f_1(t), it remains unchanged for the future since the process is stationary. The same statement may be made for the power density spectrum. Whereas the mean power in a periodic wave is completely determined by its behavior in one complete cycle, the mean power in a random wave theoretically can only be approached by knowledge of its past history as the time of observation is indefinitely prolonged. Furthermore, the limit is approached in a random manner, unlike the smooth course which a nonrandom variable takes when it tends to a limit.
Since \phi_{11}(\tau) and \Phi_{11}(\omega) are even functions, the integrals (39) and (42) may be put into the simpler forms

\phi_{11}(\tau) = \int_{-\infty}^{\infty} \Phi_{11}(\omega) \cos \omega\tau \, d\omega    (43)

and

\Phi_{11}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{11}(\tau) \cos \omega\tau \, d\tau .    (44)

These equations express the fact that the autocorrelation function and the power density spectrum of a stationary random process are determinable one from the other by a Fourier cosine transformation. Wiener has given this theorem a rigorous proof (ref. 1) and it is generally known as Wiener's theorem.
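The discrete-time counterpart of (43) and (44) can be exercised directly on sampled data. The sketch below, added as an illustration, estimates the autocorrelation of a simple random process by time averaging and checks that its cosine transform reproduces the known power density spectrum; the first-order shaping filter and every parameter are arbitrary assumptions.

```python
import numpy as np

# Wiener's theorem on sampled data: the cosine transform of a
# time-averaged autocorrelation estimate approximates the power
# density spectrum of the process.
rng = np.random.default_rng(0)
N = 200_000
a = 0.95                                   # pole of a simple shaping filter
white = rng.standard_normal(N)
x = np.empty(N)
x[0] = white[0]
for n in range(1, N):                      # x[n] = a x[n-1] + w[n]
    x[n] = a * x[n - 1] + white[n]

# Time-average autocorrelation phi[m] ~ <x(t) x(t+m)>, cf. eq. (38).
M = 200
phi = np.array([np.mean(x[: N - m] * x[m:]) for m in range(M)])

# Exact autocorrelation of this process: a^|m| / (1 - a^2).
phi_theory = a ** np.arange(M) / (1 - a**2)
print(np.max(np.abs(phi - phi_theory)))    # small estimation error

# Cosine transform of phi, cf. eq. (44); it agrees closely with the
# exact spectrum 1 / |1 - a e^{-jw}|^2 of the shaping filter.
w = np.linspace(0, np.pi, 512)
spectrum = phi[0] + 2 * np.cos(np.outer(w, np.arange(1, M))) @ phi[1:]
spectrum_theory = 1.0 / np.abs(1 - a * np.exp(-1j * w)) ** 2
print(np.max(np.abs(spectrum - spectrum_theory) / spectrum_theory))
```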
A similar theorem may be established for two stationary random processes which are in some manner related to each other. The theorem reads: if \phi_{12}(\tau) is the crosscorrelation function of two coherent stationary random processes defined by the expression

\phi_{12}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_1(t) f_2(t + \tau) \, dt ,    (45)

where f_1(t) and f_2(t) are member functions of the two processes, then

\phi_{12}(\tau) = \int_{-\infty}^{\infty} \Phi_{12}(\omega) e^{j\omega\tau} \, d\omega    (46)

and

\Phi_{12}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{12}(\tau) e^{-j\omega\tau} \, d\tau .    (47)

The function \Phi_{12}(\omega) is the complex cross power density spectrum. The physical significance of crosscorrelation and the corresponding spectrum is not clear at this moment but will become evident when specific systems are considered in conjunction with these functions.

While autocorrelation discards all information in regard to phase relationships of the harmonics, crosscorrelation retains them, as in the case of periodic functions. For this reason the cross power density spectrum is a complex expression giving amplitude as well as phase characteristics.
A stationary random process is described by a set of distribution functions (refs. 6, 7, 8). Shown in Fig. 4 is an ensemble of random functions which represents a stationary random process. The member functions are generated by similar mechanisms. In a practical situation it is not necessary that different mechanisms be used for making these records. If a particular mechanism is representative of a large class of similar mechanisms, such as a specific type of gas tube for generating noise, sections of sufficient length made from a single long record from an experiment with that mechanism may be considered as members of an ensemble. Theoretically, an ensemble has an infinite number of member functions.
Fig. 4 An ensemble of random functions

At an arbitrary time t_1 observations are made on the ensemble, giving a set of values of the random variable y_1, that is, y_1 = y_1', y_1'', y_1''', ... . A distribution function p(y_1) is formed so that p(y_1) dy_1 is the probability that y_1 lies in the range (y_1, y_1 + dy_1). Because the process is stationary, this distribution and all other subsequent distributions are independent of the time t_1. A step further in gaining information on the ensemble is to consider the observations at two different times separated by the time \tau_1. Let the second set of values be y_2 = y_2', y_2'', y_2''', ... as indicated in the figure. The relationship between y_1 and y_2 is given in the form of a joint probability p(y_1, y_2; \tau_1) dy_1 dy_2 which is the probability of the joint occurrence of y_1 in the range (y_1, y_1 + dy_1) and y_2 in (y_2, y_2 + dy_2) at time \tau_1 after the occurrence of y_1. Pursuing further, the joint probability p(y_1, y_2, y_3; \tau_1, \tau_2) dy_1 dy_2 dy_3 is defined in a similar manner. An infinite set of the probability distribution functions, each successive one adding information in greater detail, completes the description of the random process.
While it is simple to state that a stationary random process is described by a set of probability distribution functions, the actual determination of these functions either analytically or experimentally is extremely difficult, if at all possible, when physical situations are considered. Of course exceptions should be made for such simpler cases as the purely random process, the Gaussian process and the Markoff process. In general the experimental determination of the first distribution, which is the amplitude distribution, is simple. The second distribution requires elaborate equipment and considerable time. It takes four variables (p, y_1, y_2, \tau) to represent it, so that a complete determination incurs incalculable labor. A partial determination of this distribution for speech has recently been made (ref. 12). Needless to say, the third distribution, which involves six variables, and the higher order distributions are well beyond practical considerations for measurement.
Fortunately, these difficulties do not substantially hinder the application of statistical methods to a number of communication problems. In these problems only the second distribution is required. Obviously in general not all of the information concerning a random process is utilized when only one distribution is used. But because of the manner in which the problems are formulated and because of the design criterion adopted, this distribution is sufficient. Of greater importance in the simplification of the methods and techniques in practical application is the fact that, instead of the second distribution, a function dependent upon it actually characterizes the random process in a problem. This function is expressed as

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y_1 y_2 \, p(y_1, y_2; \tau) \, dy_1 dy_2 ,    (48)

which gives the mean product of the random variables y_1 and y_2 as a function of their separation time \tau. (The subscript 1 for \tau is dropped since only one \tau is involved.) This function is frequently much simpler to evaluate analytically from given conditions than the joint distribution itself. Experimentally the situation is similar. Electronic and mechanical machines are now in use for the evaluation of (48) with ease, whereas no comparable facility is available in handling the joint distribution.
An important hypothesis in the theory of stationary random processes known as the ergodic hypothesis furnishes the tie between this discussion and the preceding one on autocorrelation. The hypothesis states that the function (48) is equivalent to the autocorrelation function (38) which is obtained from a member function of a stationary random process. Thus it is permissible to write

\phi_{11}(\tau) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y_1 y_2 \, p(y_1, y_2; \tau) \, dy_1 dy_2 .    (49)

The expression (38) is known as the time average, (49) as the ensemble average for the autocorrelation function.
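The equivalence of the two averages is easy to observe numerically. The following sketch, added as an illustration, compares the time average (38) over one long member function with the ensemble average (48) over many member functions; the first-order noise mechanism and every parameter are arbitrary assumptions, not quantities from the report.

```python
import numpy as np

# Numerical illustration of the ergodic hypothesis, eq. (49).
rng = np.random.default_rng(1)
a, m = 0.9, 5                     # process constant and lag (tau = m steps)

def member(length):
    """One member function x[n] = a x[n-1] + w[n], started in steady state."""
    w = rng.standard_normal(length)
    x = np.empty(length)
    x[0] = w[0] / np.sqrt(1 - a**2)        # stationary initial condition
    for n in range(1, length):
        x[n] = a * x[n - 1] + w[n]
    return x

# Time average over a single long record, cf. eq. (38).
N = 100_000
x = member(N)
phi_time = np.mean(x[:-m] * x[m:])

# Ensemble average of y1*y2 over many members at two fixed instants
# separated by m steps, cf. eq. (48).
t1, K = 100, 10_000
products = np.empty(K)
for i in range(K):
    s = member(t1 + m + 1)
    products[i] = s[t1] * s[t1 + m]
phi_ensemble = np.mean(products)

# Both estimates approach the exact value a^m / (1 - a^2).
print(phi_time, phi_ensemble, a**m / (1 - a**2))
```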
Some of the properties of the autocorrelation function are as follows:

1. The value at \tau = 0 is the mean square of the random process; no other value exceeds it in magnitude.
2. The function is even.
3. When no hidden periodic component exists in the random process, the function tends to the square of the mean of the process as \tau tends to infinity.
4. The Fourier transform of the function is positive for all angular frequencies.
5. It contains no waveform information.
The ensemble average for a crosscorrelation function may be written in a manner similar to (49). Thus

\phi_{12}(\tau) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y z \, p(y, z; \tau) \, dy \, dz ,    (50)

in which y and z are the variables belonging to two random processes which are in some way related to each other. For two independent processes, p(y, z; \tau) is simply p(y) p(z), so that

\phi_{12}(\tau) = \int_{-\infty}^{\infty} y \, p(y) \, dy \int_{-\infty}^{\infty} z \, p(z) \, dz = \bar{y} \, \bar{z} .    (51)

In other words the crosscorrelation between two independent processes is simply the product of their individual mean values. These processes are said to be incoherent. In general, for two coherent processes without hidden periodic components, p(y, z; \tau) approaches p(y) p(z) as \tau becomes large, so that \phi_{12}(\tau) is asymptotic to \bar{y} \, \bar{z}.

d. Summary
The analysis of the three principal types of functions in communication theory may be summarized in the table of Fig. 5. Whereas periodic and aperiodic functions are definitely known for all values of the independent variable and are subject to analysis and synthesis according to well-known Fourier theories, random functions are not precisely predictable functions of time, so that the same theories are not applicable to them. A stationary random process is completely described by a set of probability distribution functions. To completely specify a periodic or an aperiodic function we may use the amplitude and phase spectrums resulting from analysis. But while the latter functions are totally regained by the reverse operation of synthesis, no corresponding procedure holds for a random process.
Fig. 5 Table summarizing the analysis of periodic, aperiodic, and random functions
In fact, to ask for this procedure is to contradict the nature of the process. To generalize the principles of harmonic analysis for the inclusion of random functions, autocorrelation is introduced. By application of the Wiener theorem the power density spectrum and the autocorrelation function are determinable one from the other. These functions possess physical meaning and are experimentally measurable. The reciprocal operations of analysis and synthesis so important in analytical work are now available for random phenomena. Autocorrelation of a random process with a hidden periodic function yields distinguishable periodic and aperiodic components. The harmonic analysis of these separate components may then be carried out according to their behavior. Furthermore, the idea of correlation, when extended to periodic and aperiodic functions, leads to a better and more comprehensive understanding of harmonic analysis for communication work.
Generalized harmonic analysis is a broad subject of which the present discussion has given but a limited view. However, it is sufficient to show that statistical theory has made a substantial and major addition to the set of basic tools for communication problems. Classical theory has not shared the advantages of those tools which have been developed primarily for handling information-carrying functions. Along with the tools of statistics have come a statistical concept of information, new methods in the formulation of filtering, prediction, and detection problems, and the development of statistical techniques for communication work.
3. Measurement of Correlation Functions
Correlation curves may be computed from a given record by interpreting the time average expression (38) graphically. Thus, in Fig. 6, if f_1(t) is a record of long duration, it may be represented by N equally spaced discrete points. The displacement \tau is represented by a shift of m points as indicated in the lower curve, which is f_1(t + \tau_m). The approximate expression of (38) is therefore

\phi_{11}(\tau_m) = \frac{1}{N - m + 1} \sum_{n=0}^{N-m} y_n y_{n+m} .    (52)

Fig. 6 Graphical method for computing a correlation function
Repeated application of this formula for different values of m results in a graph such as the one shown in Fig. 7. In general the record should be considerably longer than the period of the apparent lowest frequency of the random function. Just how long it should be is largely a matter of experiment, although approximate calculations may be made.

Fig. 7 Autocorrelation curve obtained by graphical method (ref. 15)
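The estimate (52) translates directly into a few lines of code. The sketch below, added as an illustration, computes the correlation curve of a sampled record for a range of shifts m; the test signal is an arbitrary assumption chosen for the example.

```python
import numpy as np

def autocorrelation_curve(y, max_shift):
    """Approximate autocorrelation of a sampled record by eq. (52):
    phi[m] = (1/(N-m+1)) * sum over n = 0..N-m of y[n] * y[n+m]."""
    N = len(y) - 1                      # points indexed 0..N as in (52)
    return np.array([np.sum(y[: N - m + 1] * y[m:]) / (N - m + 1)
                     for m in range(max_shift + 1)])

# Example record: a sine wave buried in noise. Its correlation curve
# shows the periodic component persisting at large shifts (cf. Sec. 8).
rng = np.random.default_rng(2)
n = np.arange(10_000)
record = np.sin(2 * np.pi * n / 50) + rng.standard_normal(n.size)

phi = autocorrelation_curve(record, 200)
print(phi[0])    # ~ mean square: 0.5 (sine) + 1.0 (noise variance)
print(phi[50])   # ~ 0.5: the sine's autocorrelation at one full period
```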
Several mechanical methods have been in use for the correlation of low-frequency random functions. At the Research Laboratory of Electronics, M.I.T., an electronic method has been developed which operates on sampling principles (refs. 14, 15). In Fig. 8 is shown a portion of a random function f_1(t) whose autocorrelation curve is desired. If the function is divided into sections each of length L which is of sufficient duration so that the values a_1, a_2, a_3, ... are independent of one another, the sections may be considered as parts of member functions of a stationary random process. For the moment assume that the random function contains no periodic component.

Fig. 8 Portion of a random function showing representation of equation 53 (ref. 15)
Under these conditions the autocorrelation function on the basis of an ensemble average (49) becomes

\phi_{11}(\tau_k) = \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} a_j b_j ,    (53)

where b_1, b_2, b_3, ... is a set of values of the random function, each taken at a selected time \tau_k after the correspondingly numbered a-value. For a limited but large number of observations N, the approximate expression is obviously

\phi_{11}(\tau_k) = \frac{1}{N} \sum_{j=1}^{N} a_j b_j .    (54)
When L is not sufficiently long to insure the independence of the sections, the expression (53) may be shown to hold for the autocorrelation of a member function of a stationary random process. The autocorrelation function is then obtained not as an ensemble average but as a time average from sets of sampled values of the random function.

In the presence of an additive periodic component in the random function, the values of the periodic component in the different sections of Fig. 8 are no longer independent because of the definite ratio between L and the period of the periodic component. However, consideration of the random and periodic components separately leads to the conclusion that (53) still holds, provided that the ratio of the two periods is not such that the values a_1, a_2, a_3, ... cover but a small portion of the periodic wave or a few restricted locations on the wave.
Figure 9 shows the characteristic waveforms of the electronic correlator. For the computation of the correlation curve for the random wave (A), two sets of timing pulses (B) of period L are derived from a master oscillator, with the second set delayed by an adjustable time \tau_k. By means of the first set of timing pulses the values a_1, a_2, a_3, ... are selected and a boxcar wave (C) is generated so that the discretely varying heights are proportional to these values. A second boxcar wave (D) is similarly generated for the values b_1, b_2, b_3, .... A sawtooth sampling wave converts the b-values as shown in (D) into a train of pulse-duration modulated pulses (E). A gating circuit multiplies the waveforms (C) and (E), resulting in the waveform (F), which is a series of pulses whose heights are proportional to the a-values and whose durations are proportional to the b-values. The summation of a large number N of these pulses by an integrating circuit represents the value \phi_{11}(\tau_k) as expressed by (54). The correlator has been designed for completely automatic operation. The displacement \tau_k is varied in discrete steps and the correlation curve is presented by an automatic recorder. The machine performs crosscorrelation as well as autocorrelation. A sample autocorrelation curve for filtered random noise from a Type 884 gas tube is shown in Fig. 10.
4. Characterization of a Random Process
A principal use of statistical methods in communication problems is the characterization of a random process by its autocorrelation function. This characteristic is equivalent to its power spectrum because of the reciprocal relationship through a Fourier transformation as discussed under Section 2, Part c. Both the autocorrelation function and the spectrum are essential in the characterization of a random process. The situation is much like the characterization of a linear system.
Fig. 9 Characteristic waveforms of electronic correlator (ref. 15)

Fig. 10 Autocorrelation curve for filtered noise from Type 884 gas tube

Fig. 11 Random flat-top wave with zero-crossings obeying the Poisson distribution
It is well known that the system behavior may be expressed either as a transfer characteristic, which is a function of frequency, or as a response to a unit impulse excitation, which is a function of time. While they are determinable one from the other by a Fourier transformation, each characteristic places certain important information in much better light than the other.

The spectrum of a random wave which is specified in terms of statistics and probability should be determined through autocorrelation; Fourier series and integral methods which have been used for approximate solutions have not led to satisfactory results. As a simple illustration consider the random wave of Fig. 11.
The wave takes only the values +E and -E volts alternately, with zero-crossings obeying the Poisson distribution, which states that if k is the average number of crossings per second, the probability p(n, \tau) that the number of crossings in any duration \tau of the wave is n, is

p(n, \tau) = \frac{(k\tau)^n}{n!} e^{-k\tau} .    (55)
Random noise which has been strongly clipped may have this waveform. Clearly Fourier theories for periodic and aperiodic functions offer no satisfactory solution to the problem of spectrum determination. Detailed calculations are not presented here, but it is possible to show that the random wave has the autocorrelation function (ref. 4)

\phi_{11}(\tau) = E^2 e^{-2k|\tau|} .    (56)

This is a simple expression in spite of the apparently complex instantaneous behavior of the wave. The spectrum of the random wave is readily computed by use of (44). Hence the power density spectrum of the wave in volt^2 per unit of angular frequency is

\Phi_{11}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} E^2 e^{-2k|\tau|} \cos \omega\tau \, d\tau = \frac{E^2}{\pi} \frac{2k}{4k^2 + \omega^2} .    (57)

A large number of random functions under simplifying conditions may be analyzed in a similar manner for their autocorrelation functions and spectrums. A number of nonlinear problems, such as random noise through a rectifier, have been solved (refs. 5, 9).
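The result (56) lends itself to a simple simulation. The sketch below, added as an illustration, generates a flat-top wave with approximately Poisson zero-crossings and compares its measured autocorrelation with the closed form; all parameters are arbitrary assumptions.

```python
import numpy as np

# Simulation of the random flat-top wave of Fig. 11: a wave switching
# between +E and -E with Poisson-distributed zero-crossings, checked
# against the closed-form autocorrelation (56).
rng = np.random.default_rng(3)
E, k, dt, N = 1.0, 5.0, 1e-3, 2_000_000   # k crossings per second on average

# In each step of length dt a crossing occurs with probability ~ k*dt;
# the cumulative crossing count gives the sign of the wave.
crossings = rng.random(N) < k * dt
wave = E * (-1.0) ** np.cumsum(crossings)

# Time-average autocorrelation (38) versus the prediction E^2 e^{-2k|tau|}.
for m in (0, 50, 100, 200, 400):
    tau = m * dt
    measured = np.mean(wave[: N - m] * wave[m:])
    print(tau, measured, E**2 * np.exp(-2 * k * tau))   # close agreement
```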
The experimental determination of the spectrum of a random process through the medium of autocorrelation is often more accurate and convenient than the method of filtering. In measuring the autocorrelation function, data obtained from a long duration enter into the computations, thus ensuring an accurate measurement. For low-frequency phenomena, mechanical correlators may be used to cover frequencies as low as a fraction of a cycle per second.

Because a random phenomenon reveals its statistical characteristics only after a considerable duration, correlation involves a large amount of data for accurate determination. The problem differs from the experimental spectrum analysis of a periodic wave, which yields the same data every cycle, and that of a transient, which may be made to take a short duration and to repeat periodically. Whereas a filter or a selective circuit will suffice in the harmonic analysis of periodic and aperiodic functions, its memory in the general case is not sufficiently long for the harmonic analysis of a random phenomenon.
After an autocorrelation function is obtained, the transformation required for producing the power spectrum may be performed by machine. For illustration, an autocorrelation curve of filtered noise from a Type 884 gas tube which has been computed by the electronic correlator is shown in Fig. 12a (ref. 11). The power density spectrum of the noise given in Fig. 12b is obtained by a cosine transformation by means of the high-speed differential analyzer developed at the Research Laboratory of Electronics, M.I.T.

Fig. 12a Autocorrelation curve of filtered random noise from Type 884 gas tube measured by correlator

Fig. 12b Power density spectrum of the noise determined by differential analyzer
5. Separation of Periodic and Random Components
A random process may have a hidden periodic component, and in many communication problems it is important to separate the random and periodic components. Since random and periodic phenomena behave differently under autocorrelation, each in a characteristic manner, autocorrelation is an effective tool in the problem of separation. Autocorrelation of a periodic function produces another periodic function of the same fundamental frequency, while that of a random function results in a maximum value when the displacement is zero, which represents the function's mean square value, and the square of the mean value as the displacement tends to infinity. These different properties make it possible to resolve the autocorrelation function into two components.
By way of illustration, consider the pulse-duration modulated wave of Fig. 13. This wave has a periodic component, although it is not apparent, by examination of its waveform, as far as the exact amount is concerned.

Fig. 13 A pulse-duration modulated wave
The leading edges of the pulses occur at regular intervals. For reasons of simplicity, let the intervals be 2A, let the pulse duration x vary randomly with a uniform distribution between the limits (0, A), and let the durations be independent of each other. Based upon these assumptions, it is possible to show that the autocorrelation function of the pulse-duration modulated wave has the form shown in Fig. 14. The curve has the highest peak at the center and peaks of the same shape at periodic intervals of 2A.

Fig. 14 Autocorrelation function of the pulse-duration modulated wave of Fig. 13
Clearly this curve may be resolved into two parts, one periodic and one aperiodic, as Fig. 15 shows. The aperiodic component has the expression

[\phi_{11}(\tau)]_{\text{random}} = \begin{cases} \dfrac{E^2}{12 A^3} (A - |\tau|)^3 , & -A < \tau < A \\ 0 , & \text{elsewhere} \end{cases}    (58)

which is the autocorrelation function of the random component of the pulse-duration modulated wave. By a Fourier transformation the power density spectrum of the random component wave is found to be

[\Phi_{11}(\omega)]_{\text{random}} = \frac{E^2 A}{4\pi (A\omega)^2} \left[ 1 - \left( \frac{\sin(A\omega/2)}{A\omega/2} \right)^2 \right] .    (59)

The autocorrelation function of the hidden periodic component wave is determined completely when the expression for a half cycle of the function is known. The expression is

[\phi_{11}(\tau)]_{\text{periodic}} = \frac{E^2}{4 A^2} (A - \tau)^2 - \frac{E^2}{12 A^3} (A - \tau)^3 , \quad 0 \le \tau \le A ,    (60)

from which the power spectrum of the periodic component wave is readily computed by application of (12).
Fig. 15 Periodic and aperiodic components of the autocorrelation function of Fig. 14
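The decomposition can be checked by simulation. The sketch below, added as an illustration, builds the pulse-duration modulated wave under the stated assumptions, measures its total autocorrelation at a lag inside (-A, A), and verifies that subtracting the periodic part (60) leaves the random part (58); the sampling grid and pulse count are arbitrary assumptions.

```python
import numpy as np

# Simulation of the pulse-duration modulated wave of Fig. 13.
rng = np.random.default_rng(4)
E, A = 1.0, 1.0
steps_per_A = 100
dt = A / steps_per_A
n_pulses = 20_000

# Pulses of height E start every 2A; durations are uniform on (0, A).
durations = rng.uniform(0.0, A, n_pulses)
wave = np.zeros(n_pulses * 2 * steps_per_A)
for i, x in enumerate(durations):
    start = i * 2 * steps_per_A
    wave[start : start + int(x / dt)] = E

# Total autocorrelation at a lag tau inside (-A, A)...
m = 40                           # tau = 0.4 A
tau = m * dt
total = np.mean(wave[:-m] * wave[m:])

# ...minus the periodic component (60) leaves the random component (58).
periodic = E**2 * (A - tau) ** 2 / (4 * A**2) - E**2 * (A - tau) ** 3 / (12 * A**3)
random_part = total - periodic
print(random_part, E**2 * (A - tau) ** 3 / (12 * A**3))   # nearly equal
```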
The method of analysis is applicable to other types of pulse-modulated waves (ref. 10). When the degree of dependence between pulses is appreciable, calculations increase in complexity, so that it becomes necessary to obtain the autocorrelation curve experimentally. If the autocorrelation curve maintains its periodic behavior as \tau tends to large values, the method described here should be effective in the separation of periodic and random component waves which are not easily distinguished in a random process.
6. Filtering and Prediction
In filtering, the objective is to extract the desired message from a mixture of message and noise. Stated in more general terms, the problem is the recovery of a random process which has been corrupted by another random process. The recovery is to be done by a linear system which has been designed in accordance with the statistical characteristics of the stationary random processes involved. The filter operates upon the past values of the corrupted random process as it progresses in time, in such a way that the instantaneous output of the filter is the best approximate instantaneous value of the desired random process. The meaning of the term "best" depends upon the particular measure of error and the criterion in the formulation of the filter problem.
In the statistical filter theory of Wiener (ref. 2), the input to the filter consists of an additive mixture of message and noise as indicated in Fig. 16. The desired ideal output is the message itself with possibly a preassigned fixed delay time a. Since perfect recovery is a physical impossibility, the instantaneous error is the difference between the actual output f_o(t) and the desired message with a lag, f_m(t - a).

Fig. 16 Statistical filtering

A measure of error which is relatively simple to manage in the subsequent analytical work, and which is meaningful in a number of physical situations, is the mean square error. The expression for the mean square error is

\lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} \left[ f_o(t) - f_m(t - a) \right]^2 dt .    (61)

The criterion in the design of a filter should therefore be that the mean square error is a minimum. A filter of this type is known as an optimum filter.
By application of the calculus of variations it is possible to show that if \phi_{ii}(\tau) is the input autocorrelation function and \phi_{im}(\tau) the input and message crosscorrelation function, the response characteristic h(t) of the optimum filter to a unit-impulse excitation must satisfy the condition that

\phi_{im}(\tau) = \int_0^{\infty} h(\sigma) \phi_{ii}(\tau - \sigma) \, d\sigma \quad \text{for } \tau \ge 0 .    (62)

This integral equation is of the Wiener-Hopf type, the explicit solution of which for the Fourier transform of h(t) in terms of the Fourier transforms of \phi_{ii}(\tau) and \phi_{im}(\tau) has been achieved. Although the analytical work is by no means simple, it is possible to proceed from the given autocorrelation and crosscorrelation functions to obtain the optimum filter characteristic for the practical construction of a filter consisting of linear passive elements and possibly amplifiers.
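A discrete-time analog of (62) is easy to exercise numerically. In the sketch below, added as an illustration, a causal finite filter h[0..M-1] is obtained by solving the normal equations phi_im[m] = sum over s of h[s] phi_ii[m - s], the finite counterpart of the Wiener-Hopf condition; this is not Wiener's spectral-factorization solution, and every signal and parameter is an arbitrary assumption.

```python
import numpy as np

# Finite, discrete analog of the optimum filter condition (62).
rng = np.random.default_rng(5)
N, M, a = 200_000, 30, 0.9

# Message: a first-order random process; input: message plus white noise.
w = rng.standard_normal(N)
msg = np.empty(N)
msg[0] = w[0]
for n in range(1, N):
    msg[n] = a * msg[n - 1] + np.sqrt(1 - a**2) * w[n]
x = msg + 0.7 * rng.standard_normal(N)       # corrupted input

def corr(u, v, m):
    """Time-average correlation <u(t) v(t+m)> for m >= 0, cf. eq. (45)."""
    return np.mean(u[: N - m] * v[m:])

phi_ii = np.array([corr(x, x, m) for m in range(M)])    # input autocorrelation
phi_im = np.array([corr(x, msg, m) for m in range(M)])  # input-message crosscorrelation

# Toeplitz normal equations R h = p built from the measured correlations.
R = np.array([[phi_ii[abs(i - j)] for j in range(M)] for i in range(M)])
h = np.linalg.solve(R, phi_im)

# Filtering reduces the mean square error relative to the raw input.
est = np.convolve(x, h)[:N]
print(np.mean((x - msg) ** 2), np.mean((est - msg) ** 2))
```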
In classical filter theory the concepts are those belonging to periodic phenomena. The idea of the division of the spectrum into pass bands and attenuation bands is basic. When the messages and noise involved in a filter problem have spectrums which are essentially not overlapping, the classical filter is simple and effective. However, in the general case where the best separation of two or more random processes by means of linear systems is required, classical theory fails to provide a satisfactory solution. The difficulty in classical theory in this respect is a fundamental one.
An interesting and important problem in the theory of random processes is that of prediction. It is remarkable that, on the basis of statistical concepts, filtering and prediction are closely related problems. For simplicity consider prediction in the absence of noise. The desired output in a linear predictor is the input itself except for an advance in the time of occurrence. Since exact prediction for a random process even in the absence of noise is impossible, a measure of error and a design criterion are necessary as in the case of filtering. If f_m(t) is the input message, f_o(t) the actual output, and a the preassigned prediction time, the mean square error is clearly

\lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} \left[ f_o(t) - f_m(t + a) \right]^2 dt .    (63)
For optimum prediction by a linear system, the predictor characteristic h(t) as a time response to unit-impulse excitation must satisfy the integral equation

\phi_{mm}(\tau + a) = \int_0^{\infty} h(\sigma) \phi_{mm}(\tau - \sigma) \, d\sigma \quad \text{for } \tau \ge 0 ,    (64)

which is similar to (62). Here \phi_{mm}(\tau) is the autocorrelation function of the message.
The theory of the optimum linear statistical predictor has been demonstrated in the Laboratory (ref. 16). The block diagram for the demonstration is given in Fig. 17. Random noise from a Type 6D4 gas tube is used as the message to be predicted. The filter shown is a simple R-L-C parallel circuit for shaping the spectrum of the noise. When designed in accordance with statistical theory, the predictor has the simple basic form shown in the diagram of Fig. 18. The comparison device of Fig. 17 is a high-speed camera which records continuously the input and output random waves with the aid of a two-beam oscilloscope. This instantaneous recording permits the comparison of actual and predicted values of the random noise. Sample records are shown in Fig. 19 for three prediction times. Inspection of these results shows that a close agreement between actual and predicted values has been obtained.

Fig. 17 Block diagram for demonstration of prediction

Fig. 18 Basic circuit diagram of predictor for demonstration (ref. 16)

Fig. 19 Sample records of prediction for three prediction times
7. Use of Random Noise in System Characteristic Determination
Of fundamental importance in the analysis of linear systems is the characterization of the system. Because of the applicability of the principle of superposition, a linear system is characterized either by a time response h(t) to a unit-impulse excitation (a unit-step function may be used instead), or by amplitude and phase spectrums of the transfer ratio H(\omega) when the driving force is in steady-state sinusoidal form. The functions h(t) and H(\omega) are known to be related by a Fourier transformation. These characteristics furnish basic information concerning the behavior of the system and are therefore useful directly or indirectly in a problem.

A third method for obtaining the system characteristic involves the use of random noise instead of the unit impulse or the steady-state sinusoid. This method has some advantages.
For the development of the theory, consider the linear system shown in Fig. 20. The input is a stationary random function f_i(t) and the output is f_o(t). The system has the time response h(t) to a unit-impulse excitation or the corresponding complex spectrum H(\omega). The crosscorrelation function between the input and output of the system is
\phi_{io}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_i(t) f_o(t + \tau) \, dt .    (65)

Since the system is linear, the superposition integral

f_o(t) = \int_{-\infty}^{\infty} h(\sigma) f_i(t - \sigma) \, d\sigma    (66)

holds for the random functions f_i(t) and f_o(t). The use of this integral in (65) leads to

\phi_{io}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_i(t) \int_{-\infty}^{\infty} h(\sigma) f_i(t + \tau - \sigma) \, d\sigma \, dt .    (67)

For the type of random functions under consideration, a change in the order of integration is permissible, so that

\phi_{io}(\tau) = \int_{-\infty}^{\infty} h(\sigma) \left[ \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f_i(t) f_i(t + \tau - \sigma) \, dt \right] d\sigma .    (68)

The limit integral appearing in the equation is recognized as the autocorrelation function \phi_{ii}(\tau - \sigma). When this symbol is used, (68) becomes

\phi_{io}(\tau) = \int_{-\infty}^{\infty} h(\sigma) \phi_{ii}(\tau - \sigma) \, d\sigma .    (69)

This is a fundamental equation for a linear system with random input and output functions. The equation is a convolution integral, exactly the same in form as the well-known superposition integral (66). The integral (69) states that the input-output crosscorrelation function of a linear system, which is under excitation by a stationary random function, is the convolution of the system response to a unit-impulse excitation and the input autocorrelation function. Since \phi_{ii} is an even function, it is possible to write (69) in the form of a correlation function by changing \phi_{ii}(\tau - \sigma) to \phi_{ii}(\sigma - \tau).

Fig. 20 A linear system with a random function as input
The expression of the integral (69) in the frequency domain can readily be done by a Fourier transformation. This transformation leads to the result that

\Phi_{io}(\omega) = H(\omega) \Phi_{ii}(\omega) ,    (70)

in which \Phi_{io}(\omega) is the cross power density spectrum between input and output, and \Phi_{ii}(\omega) is the input power density spectrum. This basic equation shows clearly that crosscorrelation retains phase information. In this case the phase spectrum of the system is exactly the phase spectrum of the cross power density spectrum.
When white noise is considered, \phi_{ii} in (69) takes the form of an impulse. If a unit impulse is assumed for this function, the integral simplifies to

\phi_{io}(\tau) = h(\tau) .    (71)

This is an interesting result. It states that the time response of a linear system to a unit-impulse excitation has a form identical to the crosscorrelation function between the input and output of the system when the input is a white noise. The corresponding statement of the result in the frequency domain according to (70) is that

\Phi_{io}(\omega) = H(\omega)    (72)

if \Phi_{ii}(\omega) has a uniform spectrum of unity.
Physically a white noise is taken to mean a noise whose spectrum remains essentially constant over a band of frequencies which is considerably wider than the effective band of the system under test. Experimental results illustrating the theory are shown in Figs. 21a and b. A simple resonant circuit with a time response to a unit impulse of the form shown in Fig. 21a is subjected to a white noise excitation. A crosscorrelation curve is then obtained by the digital electronic correlator which has been developed recently at the Research Laboratory of Electronics, M.I.T. (ref. 14). The curve is shown in Fig. 21b. This experiment verifies equation (71).

Fig. 21a Time response of a simple resonant circuit to a unit impulse

Fig. 21b Crosscorrelation curve between input and output of the circuit under white noise excitation

A significant advantage of this method is that internal or external random noise which may be present while measurements are made does not appreciably affect the results. The reason is that these extraneous noise sources are incoherent with the input noise, so that crosscorrelation between them theoretically results in either a constant or zero. This conclusion has been experimentally verified.

The method and technique of determining system characteristics by application of random noise based on the fundamental equation (69) have other possible significant applications.
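The result (71) is easily reproduced numerically. In the sketch below, added as an illustration, a discrete second-order resonator stands in for the resonant circuit of the experiment; the recursion coefficients and record length are arbitrary assumptions.

```python
import numpy as np

# Numerical illustration of (71): with white-noise input, the input-output
# crosscorrelation of a linear system reproduces its unit-impulse response.
rng = np.random.default_rng(6)
N = 500_000
x = rng.standard_normal(N)                  # white noise, unit variance

# A lightly damped resonator y[n] = b1 y[n-1] + b2 y[n-2] + x[n].
r, w0 = 0.97, 0.2
b1, b2 = 2 * r * np.cos(w0), -r * r
y = np.empty(N)
y[0], y[1] = x[0], b1 * x[0] + x[1]
for n in range(2, N):
    y[n] = b1 * y[n - 1] + b2 * y[n - 2] + x[n]

# True impulse response, by driving the same recursion with a unit impulse.
M = 150
h = np.empty(M)
h[0], h[1] = 1.0, b1
for n in range(2, M):
    h[n] = b1 * h[n - 1] + b2 * h[n - 2]

# Crosscorrelation phi_io[m] = <x(t) y(t+m)>; for unit-variance discrete
# white noise this estimates h[m] directly, as in (71).
phi_io = np.array([np.mean(x[: N - m] * y[m:]) for m in range(M)])
print(np.max(np.abs(phi_io - h)))           # small estimation error
```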
8. Detection of Repetitive Signal in Random Noise
A problem of considerable importance in communication theory is the detection of a repetitive signal which has been masked by random noise. In a radar system, or in other systems which operate on the principle of echoes, the improvement of signal-to-noise ratio is such a problem.

In the discussion on the separation of periodic and random components of a random process the method of correlation is shown to be effective. The method is clearly applicable to the present problem, since the basic elements involved and the requirements are similar though not identical.

Consider (ref. 15) an additive mixture of a repetitive signal S(t) and a random noise N(t) which is written as
f_1(t) = S(t) + N(t) .    (73)

The autocorrelation function of f_1(t) is

\phi_{11}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} [S(t) + N(t)] [S(t + \tau) + N(t + \tau)] \, dt = \phi_{SS}(\tau) + \phi_{NN}(\tau) + \phi_{SN}(\tau) + \phi_{NS}(\tau) .    (74)
The first two terms are the autocorrelation functions of the signal and noise respectively, and the remaining terms are their crosscorrelation functions. Since signal and noise are incoherent, their crosscorrelation is zero if the mean value of either component is taken to be zero. The conclusion is that the autocorrelation function of the sum of signal and noise is the sum of the individual autocorrelation functions. This result is graphically illustrated in Fig. 22 for a sinusoidal signal. Since the autocorrelation of random noise drops to a final value, whereas that of the repetitive signal remains repetitive, it is clear that in the region of \tau where noise autocorrelation has practically reached a final value, a periodic variation is evidence that a repetitive signal is present.

Fig. 22 Autocorrelation function of sinusoid plus random noise. Dotted curve is component due to noise (ref. 15)
Improvement in signal-to-noise ratio is dependent upon the length of time of observation. When detection is carried out by the electronic correlator, the random variable from which samples are taken is

\zeta = [S(t) + N(t)] [S(t + \tau') + N(t + \tau')] = S(t) S(t + \tau') + N(t) N(t + \tau') + S(t) N(t + \tau') + N(t) S(t + \tau') ,    (75)

in which \tau' is the displacement whose lower limit should be in the neighborhood of the value at which \phi_{NN}(\tau) is essentially constant. Since the correlator computes the output graph as a series of discrete values, each of which is the result of a large number of samples, the dispersion of these points about the ideal mean curve is a measure of output "noise". This noise is obviously not random noise in the ordinary sense but an error due to the limited number of observations made for each value of the autocorrelation curve. In other words the noise is a dispersion of a set of sample means about a limit which they approach as the sample size increases indefinitely. In calculating the output noise the theory of sample means is applied. For a sinusoidal signal, the correlator output signal-to-noise ratio for autocorrelation is

R_{oa} = 10 \log_{10} \frac{n}{1 + 4\rho_i^2 + 2\rho_i^4} \ \text{db} ,    (76)

in which n is the number of sample products which the correlator takes for a point on the output curve, and \rho_i is the rms noise-to-signal ratio at the input. A curve of this equation when n = 60,000 is drawn in Fig. 23. For example, when the input signal-to-noise ratio is -20 db, the ratio at the correlator output is +4 db, giving a total gain of 24 db.
Fig. 23 Signal-to-noise ratio gain in detection of sine wave in random noise by autocorrelation and crosscorrelation (ref. 15)
The theory and calculations have been verified by experiment.
In Fig. 24 are shown correlator output graphs for an 8-kc sine wave plus random noise from a Type 884 gas tube at several input signal-to-noise ratios. The relatively flat portions of the graphs are obtained when noise alone is at the input. The presence of a sine wave is clearly indicated by a sinusoidal variation.
When the repetition period of the signal is known, a further gain may be had by application of crosscorrelation. For this purpose a local signal of the same repetition rate is generated. The theoretical crosscorrelation function between the local signal and the corrupted signal is
\phi_{12}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} [S_1(t) + N(t)] S_2(t + \tau) \, dt = \phi_{S_1 S_2}(\tau) + \phi_{N S_2}(\tau)    (77)
Fig. 24. Correlator output graphs for an 8-kc sine wave plus random noise at several input signal-to-noise ratios (see text).
in which S_1(t) is the incoming signal and S_2(t) the local signal. The crosscorrelation \phi_{N S_2} between the noise and the local signal is zero because of incoherence, and therefore only the crosscorrelation between the two signals remains.
In a manner similar to autocorrelation, the correlator selects samples from the random variable

S_1(t) S_2(t + \tau) + N(t) S_2(t + \tau)    (78)

for crosscorrelation.
Because the local signal is free of noise, the random variable which is the basis for the output noise calculation now contains only two terms instead of four terms as in \zeta of (75) for autocorrelation. For the detection of a sine wave in random noise by crosscorrelation with a sine wave, the output gain in signal-to-noise ratio is

R_{oc} = 10 \log_{10} \frac{n}{1 + 2\rho_i^2} \ \text{db}    (79)
A curve of this equation for n = 60,000 is shown in Fig. 23. It is observed that the additional gain over autocorrelation is appreciable. For the numerical example considered, crosscorrelation yields an additional gain of 20 db, thus giving a total gain of 44 db.
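Again as a check, Eq. (79) can be evaluated alongside Eq. (76) for the same example (a sketch with my own helper names; not from the report):

```python
import math

def crosscorr_output_snr_db(n, rho_i):
    """Correlator output S/N for crosscorrelation with a noise-free sine, Eq. (79)."""
    return 10 * math.log10(n / (1 + 2 * rho_i**2))

n, rho_i = 60_000, 10.0                        # input S/N of -20 db
r_oc = crosscorr_output_snr_db(n, rho_i)
r_oa = 10 * math.log10(n / (1 + 4 * rho_i**2 + 2 * rho_i**4))   # Eq. (76)
print(round(r_oc, 1), round(r_oc - r_oa, 1))   # ~24.7 db out, ~20 db over autocorrelation
```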
If the signal is in a general form instead of a sinusoid, the methods of correlation so far considered result in a signal of distorted form. For instance, a rectangular repetitive pulse autocorrelates into a triangular one. Since signal waveform is essential information in communication, a method of correlation which preserves waveform without sacrificing signal-to-noise ratio improvement is desirable. For the development of this method consider the Fourier expressions for a periodic function.
The expressions are

f(t) = \sum_{n=-\infty}^{\infty} F(n) \, e^{jn\omega_1 t}    (80)

and

F(n) = \frac{1}{T_1} \int_0^{T_1} f(t) \, e^{-jn\omega_1 t} \, dt .    (81)

Substituting (81) into (80) gives

f(t) = \sum_{n=-\infty}^{\infty} \left[ \frac{1}{T_1} \int_0^{T_1} f(\tau) \, e^{-jn\omega_1 \tau} \, d\tau \right] e^{jn\omega_1 t}    (82)

and hence

f(t) = \int_0^{T_1} f(\tau) \, \frac{1}{T_1} \sum_{n=-\infty}^{\infty} e^{jn\omega_1 (t - \tau)} \, d\tau    (83)

by a change of the order of integration and summation.
It is not difficult to show that the summation in (83) represents a periodic unit impulse function of the period T_1 such as that shown in Fig. 25.
Fig. 25. Periodic unit impulse function u_p(t).
Let this function be u_p(t) so that

u_p(t) = \frac{1}{T_1} \sum_{n=-\infty}^{\infty} e^{jn\omega_1 t}    (84)

and

u_p(t - \tau) = \frac{1}{T_1} \sum_{n=-\infty}^{\infty} e^{jn\omega_1 (t - \tau)} .    (85)
Expression (83) may now be written as

f(t) = \int_0^{T_1} f(\tau) \, u_p(t - \tau) \, d\tau .    (86)
Since u_p(t) is an even function, u_p(t - \tau) is replaceable by u_p(\tau - t). The alternative form of (86) is, therefore,

f(t) = \int_0^{T_1} f(\tau) \, u_p(\tau - t) \, d\tau .    (87)
This result states that the crosscorrelation between a periodic function and the periodic
unit impulse function of the same fundamental frequency leaves the periodic function unchanged.
This fact is not surprising but the expression (87) is a correlation function of
the type required for the purpose stated.
When a periodic unit impulse function is crosscorrelated with random noise the
theoretical result is clearly zero if the noise has a zero mean value.
Therefore, the
conclusion is that when a repetitive signal plus a random noise are crosscorrelated
with a periodic unit impulse function having the repetition rate of the signal, the result
is the repetitive signal itself without distortion and without noise.
Physically, cross-
correlating a function with a periodic unit impulse simply means taking samples of the
function at regular intervals.
The method is simple and it is easy to see why signal
waveform is preserved while noise diminishes as the number of samples taken becomes
large.
The correlator for this method is greatly simplified.
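A minimal numerical sketch of this simplified correlator (my own illustration; the rectangular waveform, period, noise level, and sample count are assumptions, not values from the report) shows the waveform preserved while the noise averages down:

```python
import numpy as np

rng = np.random.default_rng(1)
period = 100                                   # samples per repetition period T_1
n = 14_000                                     # periods averaged per output point

template = np.where(np.arange(period) < 20, 1.0, 0.0)   # rectangular pulse S(t)
received = np.tile(template, n) + rng.normal(scale=5.0, size=n * period)

# Crosscorrelating with a periodic unit impulse train amounts to sampling the
# corrupted signal once per period at each phase and averaging the n samples.
recovered = received.reshape(n, period).mean(axis=0)

# The pulse shape survives undistorted; the rms noise shrinks by sqrt(n).
print(np.abs(recovered - template).max())      # small compared with the unit pulse
```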
Let \rho_i be the rms input noise-to-signal ratio so that in db units the ratio is

R_i = 10 \log_{10} \rho_i^2 \ \text{db} .    (88)
Fig. 26. Waveform detection by correlation. (a) signal, (b) noise, (c) signal plus noise, (d) correlator output, (e) enlarged signal for comparison with output.
Fig. 27. Waveform detection by correlation. (a) signal, (b) noise, (c) signal plus noise, (d) correlator output, (e) enlarged signal for comparison with output.
Fig. 28. (a) Radar A-scope presentation; (b) correlated A-scope presentation.

Fig. 29. (a) Radar A-scope presentation; (b) correlated A-scope presentation.
If E and \sigma are the rms values of the signal and random noise respectively, the output signal-to-noise ratio is

R_o = 10 \log_{10} \frac{n E^2}{\sigma^2} = 10 \log_{10} \frac{n}{\rho_i^2} \ \text{db}    (89)

where n is the number of samples taken for each point of the correlator output. The total gain is

G_T = 10 \log_{10} n \ \text{db} .    (90)
To check the results, a repetitive signal and a random noise in the relative magnitudes shown in (a) and (b) of Fig. 26 are used. A single-sweep picture of the signal plus noise is shown in (c). This mixture is fed into the correlator, whose sampling rate has been synchronized with the repetition rate of the signal. The output of the correlator (d) is in the form of a set of lines whose envelope is the signal. For comparison, a much enlarged signal (e) is shown. The number of samples for each point on the correlator output is 14,000 so that, according to formula (90), the total gain is in excess of 40 db. Figure 27 illustrates the detection of a triangular waveform in the same manner as Fig. 26.
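The "in excess of 40 db" figure follows directly from Eq. (90); a one-line check (illustrative):

```python
import math

# Total gain of the impulse-sampling correlator, Eq. (90): G_T = 10 log10 n.
print(round(10 * math.log10(14_000), 1))   # ~41.5 db for the n used in Fig. 26
```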
This method and technique, when applied to an AN/APG-13A radar set as a test for possibility of practical use, result in the presentations shown in Figs. 28 and 29. In these figures the A-scope presentations are compared with the corresponding presentations from the correlator. The increase in clarity in the correlated output is clearly shown to be appreciable. For the graphs of Fig. 28(b) and Fig. 29(b) the sample size is 7,000. In both cases the radar is directed toward the same object, but in the second case the radar signal power is reduced to such an extent that the A-scope shows no sign of an echo.
Acknowledgment

The writer appreciates the assistance of L. G. Kraft and I. Uygur, who performed the experiments described in the paper for obtaining the results shown in Fig. 21 and Figs. 26-29, and the valuable suggestions given by J. F. Reintjes in connection with the experiments on the application of correlation to radar.
References

1. N. Wiener: Generalized Harmonic Analysis, Acta Math. 55, pp. 117-258, 1930.
2. N. Wiener: The Extrapolation, Interpolation and Smoothing of Stationary Time Series, John Wiley and Sons, New York.
3. N. Wiener: Cybernetics, John Wiley and Sons, New York.
4. G. W. Kenrick: The Analysis of Irregular Motions with Application to the Energy-Frequency Spectrum of Static and Telegraph Signals, Phil. Mag. ser. 7, 7, pp. 176-196, 1929.
5. S. O. Rice: Mathematical Analysis of Random Noise, BSTJ 23, pp. 282-332, 1944; 24, pp. 46-156, 1945.
6. M. C. Wang, G. E. Uhlenbeck: On the Theory of the Brownian Motion, Rev. of Modern Phys. 17, pp. 323-342, 1945.
7. H. M. James, N. B. Nichols, R. S. Phillips: Theory of Servomechanisms, Ch. 6-8, McGraw-Hill, New York.
8. J. L. Lawson, G. E. Uhlenbeck: Threshold Signals, Ch. 3, McGraw-Hill, New York.
9. D. Middleton: Some General Results in the Theory of Noise through Nonlinear Devices, Quart. of App. Math. 5, pp. 445-498, 1948.
10. E. R. Kretzmer: Interference Characteristics of Pulse-Time Modulation, Technical Report No. 92, Research Laboratory of Electronics, MIT, 1949.
11. N. H. Knudtzon: Experimental Study of Statistical Characteristics of Filtered Random Noise, Technical Report No. 115, Research Laboratory of Electronics, MIT, 1949.
12. W. B. Davenport, Jr.: A Study of Speech Probability Distributions, Technical Report No. 148, Research Laboratory of Electronics, MIT, 1951.
13. J. B. Wiesner: Statistical Theory of Communication, Proc. NEC 5, pp. 334-341, 1949.
14. H. E. Singleton: A Digital Electronic Correlator, Technical Report No. 152, Research Laboratory of Electronics, MIT, 1950.
15. Y. W. Lee, T. P. Cheatham, Jr., J. B. Wiesner: Application of Correlation Analysis to the Detection of Periodic Signals in Noise, Proc. IRE 38, pp. 1165-1171, 1950.
16. Y. W. Lee, C. A. Stutt: Statistical Prediction of Noise, Technical Report No. 129, Research Laboratory of Electronics, MIT, 1949; Proc. NEC 5, pp. 342-365, 1949.
17. Y. W. Lee, J. B. Wiesner: Correlation Functions and Communication Applications, Electronics, June 1950.
18. Y. W. Lee: Communication Applications of Correlation Analysis, Symposium on Applications of Autocorrelation Analysis to Physical Problems, Woods Hole, Mass., June 1949, Office of Naval Research, Department of the Navy, Washington, D. C.