OPTIMUM FM SYSTEM PERFORMANCE--AN INVEST EMPLOYING DIGITAL SIMULATION TECHNIQUES by

advertisement
OPTIMUM FM SYSTEM PERFORMANCE--AN INVEST
EMPLOYING DIGITAL SIMULATION TECHNIQUES
by
TODD LEONARD RACHEL
B.S., Michigan State University
1965
SUBMITTED IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
August, 1966
Signature
of Author.
.
. ..
.
. . . . . ..
Department of Electrical Engineering, August 10, 1966
Certified
by
.
.
_
I
I
I·
I
I
I
I
I
Thesis Supervisor
Accepted
by
..
. .
. . . . .
Chairman, Departmental Committee on G duate Students
OPTIMUM FM SYSTEM PERFORMANCE--AN INVESTIGATION
EMPLOYING DIGITAL SIMULATION TECHNIQUES
by
TODD LEONARD RACHEL
Submitted to the Department of Electrical Engineering on
August 10, 1966, in partial fulfillment of the requirements for the degree of Master of Science.
ABSTRACT
The optimum phase-locked loop demodulator arises in
a natural way from statistical detection theory when the
received signal is an angle-modulated cosine function that
is corrupted by additive independent white Gaussian noise.
In particular, for a frequency-modulated signal the phaselocked loop can be revamped into a non-linear feedback
system with the integrated message as an input.
The object of the thesis is to analyze the performance
of the above explained non-linear system. The primary means
of analysis is with the use of digital simulation techniques.
Secondary methods include a quasi-linear analysis and a
linear analysis. Only the simulation analysis provides complete coverage of the system performance for all input signal to noise power ratios. The other two methods only have
validity over a limited range of signal to noise ratios.
For a means of comparison, the performance of a conventional receiver is also analyzed. Both simulation and
analytical methods are employed to analyze the conventional
receiver.
Finally, the performance of the two types of receivers
are evaluated.
For a given message, the linear analysis predicted
equal performance for both types of receivers.
For a one pole Butterworth message the quasi-linear
analysis predicted an improvement in threshold for the
phase-locked loop of about 6 db. For a two pole Butterworth
message the theoretical improvement was predicted to be
about
11 db.
-2-
For a one pole Butterworth message the simulation
analysis indicated an improvement in threshold for the
phase-locked loop of about 3 db. For the two pole Butterworth message the simulated improvement was about 6 db.
Thesis Supervisor: Harry L. Van Trees
Title: Associate Professor of Electrical Engineering
-3-
ACKNOWLEDGEMENTS
Two qualities mark Professor Harry Van Trees as being
an outstanding thesis advisor.
Those qualities are profes-
sional ability and human interest.
When in conference, Dr.
Van Trees would not only answer my questions fully but would
ask some questions of his own to stimulate my thinking.
In
addition, his enthusiastic greeting and welcome smile always
set the mood for a fruitful discussion.
I thank you, Pro-
fessor Van Trees.
Doctor Donald Snyder spent many laborious hours helping
me understand the many details of digital simulations.
Only
with his assistance could I have finished this task.
My wife, Jean, has served me tirelessly for the past
five years of my formal education.
During anxious periods
she always had effective words of encouragement.
Whenever
something good occurred, she always had plenty of praise.
Also, I found that the unlimited confidence that she had in
me was a powerful motivating force.
Thus let it be known
that Jean has been a vital instrument in my completion of
this thesis.
-4-
5
TABLE OF CONTENTS
Title
. . . . . . . . . . . . . . .
.
.
.
.
.
1
. . . . . . . . . . . . . . . .
.
.
.
.
.
2
Page
Abstract
Acknowledgements
Table
Chapter
. . . . . . . . . . . . . . . . . . .
of Contents.
4
. . . . . . . . . . . . . . . .
.
.
5
. . . . . . .
.
.
7
.
.
9
and
1 - History
Introduction
1.0
- Introduction
. . ..
1.1
- Typical
Demodulators
1.2
- Thesis
F.M.
Prospectus.
. . . . . . . .
. . . .
..
. . . . . . . . .
· . . 12
Chapter 2 - Waveform Estimation and Receiver
Formulation
2.0
.
- Introduction
. . 14
.
. . . . . . . . . . .
. . . . . .
. . 14
. . ..
2.1 - Optimum Waveform Estimation (General). .
2.2 - Optimum Receiver Interpretation.
2.3
- Optimum
F.M.
Receiver
.
.
. . . . . .
2.4 - Convenient Expressions for Wiener
Filters
for
F.M.
Case .
. . . . . .
Chapter 3 - Optimum F.M. System Performance
3.0
- Introduction
3.1
- Performance
,
.
.
. 16
.
. 25
.
. 29
.
. . . . . . . . . . . . ·
(Analytical)
.
14
. 33
. . . . . . . . . . . · . . 33
Criterion
.
. . . . . .
.
. 34
3.2 - Analysis by the Quasi-Linearization
Technique
3.3
- Discussion
..
. . . . . . . . . . . . .
. . . . . . . . . . . . . ,. .
37
49
6
Chapter 4 - Digital Simulation of Optimum
Demodulator
.
4.0
- Introduction
4.1
-
Digital
. . . . . . . . . . . . . .
57
. . . . . . . . . . . . . . .
57
Modeling
. . . . . . ..
. . . . .
58
4.1.1 - Sampled Data Model of Continuous Filter.
58
4.1.2 - Sampled Data Model of Message a(t) . . .
65
4.1.3 - Sampled Data Model of Noise n(t) . . ..
72
4.2 - Optimum F.M. Demodulator, Digital Model.
73
4.3 - Simulation Results and Discussion.
76
Chapter 5 - Conventional Demodulator-Theoretical
Analysis
5.0
.
. . . . . . . . . . . . .
87
- Introduction
. . . . . . . . . . . . . . .
87
5.1 - Conventional F.M. Receiver AnalysisTheoretical
.
89
. . . . . . . . . . . . . .
5.2 - Conventional Receiver Theoretical
Results
and
Discussions
.
. . . . . . . .
98
Chapter 6 - Simulation Analysis of Conventional
.
.
.
.
6.0 - Introduction.
.
.
.
. 102
6.1 - Conventional F.M. Digital Model.
.
.
.
. 102
6.2 - Conventional F.M. Simulation .
....
6.3
.
Demodulator
- Conclusions
.
. . . . . . . .
.
. . . . . . . . . .
.
102
110
.
.
114
Chapter 7 - Observations and Conclusions . . . . . . . 120
7.0
- Introduction
. . . . . . . . . . . .
7.1 - Phase Locked Loop Observations .
.
.
.
120
. . . . 120
7.2 - Conventional Receiver Observations . . . . 123
7.3 - Phase-Locked Loop vs. Conventional
F.M.
Appendix
. .
.
.
.
.
125
. . . . . . . . . . . . . . . .
.
.
.
131
Receiver
.
. . . . . . . . .
7
CHAPTER
1
HISTORY AND INTRODUCTION
1.0
INTRODUCTION
The text of this thesis is primarily devoted to the
demodulation of a frequency modulated sinusoidal carrier
which has been corrupted by additive white noise from the
transmission media.
Prior to delving into the demodulation
scheme it seems appropriate that we first review briefly
the fundamental ideas of frequency modulation.
Frequency modulation is a particular type of nonlinear modulation.
The modulator varies the argument of
a sinusoidal function according to
S[t,a(t)]
=
f2P sin [wct +
(t)]
(1)
Here the constant /2P is the amplitude of the modulated
carrier and P is the power in the modulated
carrier.
If
a(t) is defined as the analog message of interest, then
for frequency modulation
t
q(t)
= df I
_00
a(u) du
(2)
8
A constant, df is included in (2) because, as we will see
later, its presence strongly affects the performance of
the demodulator.
Intuitively, df controls the band width
of the received signal and thus can be associated with the
maximum frequency deviation of the received signal.
In
our system, however, since a(t) is a sample function from
a Gaussian random process and hence a(t) can take on any
value, then a "maximum frequency deviation" is a nebulous
concept.
When the signal (frequency modulated carrier) is
transmitted from one station to another it is invariably
corrupted by noise.
Throughout this text, the noise will
be considered as additive white Gaussian noise defined as
n(t) = additive white Gaussian noise.
(3)
The characteristics of n(t) will be considered later.
Finally, combining (1), (2), and (3) we get the
signal that is received at the demodulator
r(t) = S[t:a(t)]
+ n(t)
or
r(t) = /2
sin [ct + df
J
a(u) du] + n(t).
(4)
9
1.1
TYPICAL F.M. DEMODULATORS
In section 1.0, it was observed that the information in
the received signal is contained in the argument of the sine
function.
The purpose of the receiver then is to extract
that message information from the argument of the received
signal.
Note that the message, is in fact, the instantan-
eous value of the frequency of the sine wave.
Whatever the
makeup of the receiver, its primary function is to determine
the instantaneous frequency of the modulated carrier wave.
There has been a variety of receivers designed for this
purpose.
They vary in complexity and cost.
one's intuition and others do not.
Some appeal to
Only a few of the many
types of frequency demodulators will be discussed in the
following paragraphs.
The receiver most commonly used for commercial use is
called the conventional demodulator.
In a conventional de-
modulator the received signal is usually clipped by a limiter
so that only the time distribution of its axial crossings is
preserved.
The output of the limiter is then band pass
filtered to suppress harmonics of the carrier and out-of-band
noise.
The result is applied to a discriminator.
A discrim-
inator is a device whose output wave has instantaneous amplitude values proportional to the instantaneous frequency of
the input wave.
Another filter is added after the discrimi-
nator to suppress noise at the output.
Realizations of
10
conventional demodulators are clearly described in Schwartz,
Black,2 and Armstrong. 3
The advent of the conventional F.M. demodulator represents a successful first attempt to overcome many of the
noise problems encountered in amplitude modulation systems.
The price that is paid for improved output signal-to-noise
ratio is increased bandwidth occupancy.
One important restriction on the performance of the
conventional demodulator is the presence of a wide band
intermediate filter at the output of the limiter.
This
filter passes the message satisfactorily but it also passes
a lot of noise.
If there was some way to further suppress
the noise, the system performance would notably improve.
An improved receiver was presented by Chaffee.4
His
aim was to constrict the wide band intermediate filter used
in the conventional demodulator.
Chaffee's demodulator
actually compresses the bandwidth of the received signal
to permit the use of a narrow band intermediate filter.
The idea is relatively simple.
He used the output of the
post discriminator filter to drive a voltage controlled
oscillator.
The frequency of the oscillator is centered
around wc ± WIF'
When the output of the oscillator is
mixed with the incoming signal the result is:
t
[ai (u) - a i (u)]
sin {wIFt +
00
du
(5)
11
where ai(t) is the instantaneous received message and
ai(t) is the estimate of ai(t).
It can be easily seen
from equation (5) that if the error is small the bandwidth
of this signal is much less than the bandwidth of the received signal; thus allowing the narrow band filter.
Another method of achieving this narrow band effect
is by using a phase-locked-loop.
The essential parts of
a phase-locked loop are shown in figure 1.
Here the out-
put of the low pass filter is the instantaneous frequency
estimate of r(t).
:UENCY
IMATE
Figure
1
Phase-locked loop
The voltage controlled oscillator (VCO) physically provides
an integrating action so that an instantaneous phase estimate is fed back to the input multiplier.
The output of
the multiplier is a low pass signal proportional to the
12
error in the phase estimate.
This error signal is filtered
and used to modify the frequency of the voltage controlled
oscillator.
Note that, the rate at which the oscillator
frequency can change is governed by the loop filter.
We
can think of the output of the voltage controlled oscillator as "locked" in phase to the incoming
signal-thus
the
name "phase-locked loop" is appropriate.
The phase-locked loop can be useful for synchronization purposes.
The first wide-spread application of phase-
locked loops was in synchronization circuits for color
television.
They are often used for synchronization in
space communication systems.
Also they can be used for
demodulation purposes in almost any analog communication
system.
1.2
THESIS PROSPECTUS
In this thesis we are going to study the performance
of the optimum phase-locked loop analog demodulator.
The
results of this study will be realized primarily through
the employment of digital simulation techniques.
lation results will be verified analytically.
The simu-
To obtain
a frame of reference we will examine the performance of
an "optimum" conventional demodulator.
The performance
of the conventional receiver will also be analyzed by
simulation and analytic methods.
13
The thesis will proceed in a straightforward manner.
In chapter 2 we will derive our optimum, realizable, phaselocked loop demodulator directly from non-linear integral
equations that arise from statistical detection theory.
Chapter 3 will present a complete analytical analysis of
the optimum demodulator and chapter 4 will present a digital simulation analysis.
Chapters 5 and 6 will contain an
analytic analysis and simulation analysis respectively of
the "optimum" conventional receiver.
Each chapter will end with a short summary and discussion of the results in that chapter.
Finally, chapter 7
will be devoted to a complete examination of all the results
in all of the foregoing chapters.
14
CHAPTER
2
WAVEFORM ESTIMATION AND RECEIVER FORMULATION
2.0
INTRODUCTION
In this chapter we are going to derive an optimum
receiver for the received waveform described by (4). For
this case the received waveform is modulated by a sample
function from a Gaussian random process and is corrupted
by uncorrelated, additive white Gaussian noise.
The re-
ceiver will process r(t) in an optimum fashion and give
the best estimate of the original uncorrupted message,
a(t).
General results will first be derived and then they
will be applied to the specific case where we have a frequency modulated, stationary, Gaussian message and additive
white Gaussian noise.
2.1
OPTIMUM WAVEFORM ESTIMATION (GENERAL)
As before the received signal is:
r(t) = St:x(t)]
+ n(t)
t0
t
t1
(6)
15
where x(t) is a function
of the message
is a sine wave modulated by x(t).
a(t) and S[t:x(t)]
It may be phase or fre-
quency modulation or even a combination of these.
A system that could generate such a signal r(t) is
shown in figure 2.
Note that for phase modulation the
linear filter would be merely a straight wire but for frequency modulation it would be an integrator.
Our criterion for deriving the optimum receiver will
be maximum a posteriori estimation (MAP).
Figure
2
Analog com. system
The assumptions for the following results are:
(i) The function x(t) and the noise n(t) are
sample functions from independent, continuous,
zero-mean Gaussian processes with covariance
functions Kx (t,u) and Kn(t,u) respectively.
Note that if h(t,u) represents a straight wire;
then x(t) = a(t) and Kx
x (t,u) = Ka(t,u).
a
16
(ii) The signal S[t:x(t)] has a derivative with
respect to x(t).
After a fair amount of derivation, the MAP estimate for
the function
x(t)
x(t) is:
tz
3)d
itl 3s[z:x(z)l
3:RW
=
xt~)
~z
d
t
< t < t
(7)
t
O
where, for white noise,
g(z)
= ftl
Qn(z,u){r(u)
- S[u:x(u)]}du
Qn(z,u)
u (z,u)
t 0o
t < t
(8)
t
and
= N
(9)
N0
Equation (7) is the fundamental result from which the optimum
receiver will be realized.
2.2
OPTIMUM RECEIVER INTERPRETATION
Assume:
i)
ii)
to =
-
t
=
(10)
+00
Kx (t,u) = Kx (t - u)
(stationary)
Youla, Reference 6
Van Trees, Reference 7, Chapter 5
(11)
17
N
iii)
u
K (t,u) =
n2
(t
- U)
(white & stationary)
(12)
Then from (9) ,
N
Qn(t,u)
- u)
(t
(13)
0
and from
(8),
2
g(z)
-co
2
u O (z - u) {r(u)
- S[u:k(u)]}du
{r(z) - S[z:x(z)]}
g(z) = N
(14)
(15)
O
Hence, substituting into equation (7) we obtain
co
0
K (t - z)
x
aS[z:x(z)]
aa(z)
{r(z) - S[z:x(z)] dz
(16)
Now if we choose
r(t) = S[t:x(t)]
r(t) =
ZV
+ n(t)
(17)
t + x(t)] + n(t)
sin[w C,
(18)
cos
(19)
then,
as[t:x(t)]
ax(t)
[
C
t + x(t)]
18
Substituting into (16) gives
00
(t)
00j KX(t-u)
2
c os{Iwcu +
x(u)
}{r(u)
-
ii
-sin [wcu + x(u)]}ldu
(20)
-
(21)
Define
Zu
cos [wct + x(t)]{r(t)
=
2
sin [wct + x(t)]}
then;
00
(t) = 2
f
Kx(t-u) z u(u) du
(22)
-00
Equation (22) is the familiar convolution integral and
can be considered as the input to a filter with an impulse
response of Kx(t - u).
A block diagram realization of equa-
tions (21) and (22) is shown in figure 3.
Note that
Kx(t - u) is an unrealizable impulse response because it is
an even function of time.
Also because it is inside the
loop we cannot add delay to make it realizable.
If the loop shown in figure 3 was a linear system we
could make the loop filter realizable and add an unrealizable
post loop filter.
We could then approximate the system of
figure 3 arbitrarily closely by including delay in the post
19
Figure
Realization
loop filter.
of
3
(21) and
(22)
With this fact in mind let us see if we can
make the system in figure 3 approximately linear.
Observe that if the correlation function K(T)
is
considered as the impulse response of a filter, then that
filter would presumably have a pass band in the vicinity
of the frequencies
in x(t).
Therefore,
if the message
is,
for example, human speech then the filter would resemble
a low pass filter.
Now remember
r(t) =
and, zu(t)
=
2LF sin [wct + x(t)] + n(t)
2 cos [ct
+ x(t)]{V2
(23)
sin [wct + x(t)]
+ n(t) - 2f sin [wct + x(t)]}
(24)
20
Using trigonometric manipulation on equation (24) we obtain
zzu(t) = /P sin [x(t) -
(t)] + 2 n(t) cos [wct +
+ A/ sin [2wct + x(t) +
(t)]
(t)]
(25)
- Ad sin [2mct + 2(t)]
Now since we argued that z u(t) is the input to a low
pass filter, we may ignore the last two terms in (25) because
the filter would not pass high frequencies like
2
c.
Note
also that the last term in (25) was contributed by the
subtraction operation in figure 3.
Hence we may erase that
leg of the feedback path since it contributed nothing to our
system.
A new model of our system can be drawn as shown in
figure 4.
In figure 4 we have replaced the K(T)
the gain term by a related low pass filter, G(w).
-r*)
.r,0
i)
Co5E+*
1£z
X"(t)
G ()
Xci
VARIABLE.
P HASFL
OSCI LLA TO R
Figure
4
Phase-locked loop
filter and
21
This figure is familiar to us from chapter one.
It
is frequently called a phase-locked loop where the multiplier symbol, in practice, would be replaced by a phase
detector.
Note here that zu(t)
consists of the first
three terms of equation (25) and that the third term is
of no consequence.
tion
Now consider the second term in equa-
(25).
We first decompose the noise n(t) into in-phase and
quadrature components,
E [-nl(t) sin wct + n2 (t) cos Wct]
n(t) =
(26)
where nl(t) and n 2(t) are sample functions from independent
low pass Gaussian processes with spectral density shown in
figure 5.
In this case Wn is considered large with respect
to the signal band width but small compared to wc.
If we
denote the second term in (25) by n(2 ) (t) and use equation
(26) we obtain,
n(2) (t) = - nl(t) sin
[-
(t)] + n 2 (t) cos
[-
+ double frequency terms
n(2) (t)-
n
sin
(t)]
+ n~~~~~~~~~~~~28
2 (t) cos [(t)]
1
(t)]
(27)
(28)
22
4
,
()
Nlo VOLTS
.t HILRTZ
I
-WN
VWN
Figure
5
Noise Spectrum
Once again w
can neglect the double frequency terms because
they will never get through the low pass filter.
Now if Wn
is large compared to the bandwidth of sine [(t)] we can make
the approximation,
NN
K (2)
()=K
Kn (t) =
*
(29)
C
Hence we can combine the past several paragraphs of
discussion to come up with a final approximation for ztu(t).
For our system then,
t) = Z(t)
r
sin [x(t) -
(t)
(+
2 )
n (t)
(30)
See appendix for proof.
23
The system of figure 4 now may be altered again to yield
a model
shown in figure 6.
2).
Figure
N.7
6
Nonlinear Realization of Equation (23)
Figure
7
Linear Realization of Equation (22)
Note that this model is still a nonlinear system corrupted by additive white noise that is independent of the
message.
Remember that our goal was to make our system
linear for the purpose of our in-loop filter.
In figure 6,
24
if we define e(t) = x(t) - x(t) to be the error in the
loop, then for small values of e(t), the system is approximately linear.
The linear model is shown in figure 7.
Now that we have achieved our objective (linear system),
we can make G(w) a realizable low pass filter and then add
an unrealizable post loop filter to approximate the system
shown in figure 3 arbitrarily closely.
Our final linear
and nonlinear models for an arbitrary message x(t) are
shown in figures 8 and 9 respectively.
(2).
Figure
8
Final Linear Model for Arbitrary x(t)
(2)
Figure
9
Final Nonlinear Model for Arbitrary x(t)
25
In figure 8, Gr()
that replaced the G(w).
is the realizable in loop filter
Gpu ()
is the post loop, unreal-
izable filter with delay so that we may approximate the
system of figure 3 arbitrarily closely.
Note also in
figure 8 that the filters involved can be obtained easily
by solving the Wiener filtering problem.
2.3
OPTIMUM F.M. RECEIVER
We will now use the results of section 2.2 to find a
model for the optimum demodulator.
Recall that the received
signal is, for a frequency modulated signal,
r(t) =
2
sin[wct + x(t)] + n(t)
(31)
and
x(t) = df I
a(u) du
(32)
-co00
Notice that the loop in the model of figure 8 will operate
on x(t).
Since x(t) is the phase of the received message,
then the loop must be designed to minimize the error in the
loop.
x(i
In other words the loop must be designed so that
is in some sense the optimum estimate of x(t).
We will
be using Wiener filters in the receiver which means that
i(t) will be the optimum estimate in the mean square sense.
Reference
8
26
To solve the Wiener filtering problem, we use block diagram
techniques to extract the additive noise so that it and the
message are inputs to the loop as shown in figure 10.
GPo(
Figure
10
Revision of figure 8
Now it is easy to find the optimum Gr(w) that minimizes
the mean square loop error.
The next section will present
some equations for finding this loop filter.
Our work is not yet finished.
A realizable estimate
of the message a(t) is actually the desired output of the
receiver.
Again we have a Wiener filtering problem but of
a slightly different type.
When the optimum loop filter
was determined we found it by saying:
filter, G(w),
"Find the optimum
such that when x(t) + n(3)(t) is the input
we obtain x (t) as the output."
that will give ar(t) we say:
Now to find the filter
"Find the optimum filter,
Hor
or (w), such that when x(t) + n(3) (t) is the input we
27
There is a distinct differ-
obtain ar (t) as the output."
ence between the two filtering problems but both of them
must be executed to realize the optimum system.
Figure
11 shows the results of our filtering calculations.
(3) .
-TV -
-i
-
-
-
-
-
Figure
-
-
-
(5)_
- Hon(s)
I-
- J
11
Notice in figure 11 we have maintained the integrity
of the loop.
This is necessary because eventually the
sine nonlinearity will be put back into the forward path
of the loop.
The filter Gpr(s) is an extra filter that
is needed to make up the difference between the transfer
function
[1 + G'(s)]
r
for finding G
Gr(s) and H
r
or
(s).
The relation
(s) is
Gpr
pr (S)
=
[1 + G(s)]
r
G
r
(s)
Hor (s)
(33)
28
So far we have found the filters that will give the
optimum realizable
estimate
of the message
a(t).
If it
was so desired to obtain the optimum unrealizable estimate
aU(t) one would proceed in a similar fashion to the above
argument and find the filter G pu(s).
In the simulations
of our optimum systems we will not use G p(s)
so it will
be discarded at this point.
After we calculate all the filters for this linear
case, we put them back into the nonlinear model of figure
9.
The result is shown in figure 12.
This is the final
realizable, zero delay nonlinear model of our optimum
receiver.
(a)
Figure
CNe
12
Nonlinear Model of Optimum F.M. Receiver
Note that in figure 12 the additive noise has changed.
That is because after the noise was brought out past the
YF',the ~V was absorbed into Gr (w).
III1III___IIIYIIP_^I__L··I-·L--·IIII
I__
Therefore when we brought
29
the noise back into the loop we had no Ad to cross; i.e.,
n(t) ==n(2) (t)
(3)(t)
(34)
An equivalent version of figure 12 is shown in figure
Here a df/jw term has been extracted from G'(w) and
~~~~f
r
placed in the feed back loop. When we do this the post
13.
loop filters will change slightly as indicated by the
primed filter transfer functions shown in figure 13.
(3)
PfU
l
CNpX
T-
A
a1, i)
Figure
13
Equivalent Optimum Model
2.4
CONVENIENT EXPRESSIONS FOR WIENER FILTERS FOR
FREQUENCY MODULATION CASE
In the final model of figure 13 note that the non-
linearity is inside the loop.
This fact dictates that
when we make the linear approximation we must maintain
30
the integrity of the loop so that we can put the nonlinearity
back in the proper place.
To obtain the loop filter one can first find the
realizable Wiener filter P(w)
to estimate the phase
and then institute the familiar feedback formula
G (w)
p(W)
r
(35)
1 + G r (w)
to find G(w).
Alternatively, the G"(w)
can be found directly by
r
N
Gr (W
2
N
(W) +
-
1
(36)
In (36) the plus superscript indicates spectral
factorization.
To be specific it means that we are to
separate the left half plane poles and zeros from the
right half plane poles and zeros and include in (36)
only the ones in the left half plane.
Also this formula
holds only when n(3 ) (t) is white Gaussian noise.
The
Sx(w) is the power density spectrum of the function x(t).
To find Hor (w), the overall optimum linear filter
for estimating a(t) without delay we may use
*
**
Reference 8.
Reference 7, Chapters 6 and 7.
31
or (W) =
Hor
{S (3n
)
(n)}a
()}
{S
(37)
Jw + f(O)
[Sa()
+
2 Sn
I
df
Equation (37) is also valid only when Sn
(w) is the
spectral density of white noise, (typically No/2P), where,
F
f(O) =
Log
df
2Sa()
(38)
dw
2
+
and simply represents a gain term.
In general, this
integral is best evaluated numerically.
In a few cases,
long hand results can be obtained.
As an alternative one can find f(O) by another method.
Consider the linearized version of figure 12.
to the loop is considered
to be x(t) + n(3) (t).
The input
Now if
the message a(t) has a spectral density Sa(w), then the
spectral
density
of x(t) is,
df
Sx()
L
*
Snyder, Reference 14
2= Sa ( W)
(39)
32
From previous results, we know that the spectral density
of n ( 3 ) (t) is Sn 3 ) ( ), or in our special
case, No/2P.
0
Using the frequency domain representation we write
1+
S (Wp()
(
=
S1
(3)
n
P (w) Pr(-w)
)
(40)
Q(W)Q(-W)
r
Here both P(w) and Q(w) are even functions of frequency
so that the left half plane zeros for both functions have
a mirror image in the right half frequency plane.
both P()
(40).
Hence
and Q(w) are factorable as shown in equation
If we consider the realizable part of P(w) and Q(w)
which is P (w) and Q(w)
and write,
+ a(J)M-1 + a (J)M-2+ ...+ aM (41)
p() =
and,
Q(W) = (J)M + bl(J)M
+ b2(Jl)M-2
+ ...+ b
(42)
then
f(O) = a
*
- b1 .
Reference 10.
(43)
33
CHAPTER
3
OPTIMUM F.M. SYSTEM PERFORMANCE (ANALYTICAL)
3.0
INTRODUCTION
Figure 13 of chapter 2 represents our model for the
optimum demodulator.
There remains the task of determining
the performance of our optimum system.
We have three alter-
natives for determining the performance.
The first is to
actually build the system in the laboratory.
The second is
to simulate the model in figure 13 using a computer and the
third is to compute the performance analytically.
To physically build the system in the laboratory would
be quite time consuming and would require a good facility
for circuit design.
The second alternative seems somewhat more palatable.
Computer simulation would be faster.
Either an analog
computer or a digital computer could be used for this purpose.
Simulation techniques for both types have been well
advanced in the past few years.
An analytical analysis of our model is also a reasonable approach.
However, note that a complete analytical
34
analysis of a nonlinear system is in no sense trivial.
For most messages, a(t), a complete analysis is not known.
However, we can investigate the linear regions and get
some idea as to where the system becomes nonlinear.
Booton 1 1 advanced a technique for evaluating the performance of this system in the region where it is just beginning to become nonlinear.
In this thesis we will use two of the above methods
to evaluate the performance of our demodulator.
The first
will be to analytically construct the performance using
Booton's quasi-linearizing method and the second will be
to simulate the model of figure 13 on a digital computer.
This chapter will be devoted to the analytical approach and the next chapter will explain the simulation
techniques.
3.1
PERFORMANCE CRITERION
Before we can evaluate the performance of the demod-
ulator we must establish some kind of framework from which
we can judge how well the system is working.
Remember from
chapter one that the message that we are trying to reconstruct is a sample function from a random process, a(t).
The output of the demodulator is an estimate of that random
process, ar(t).
Hence we can define an error
Tf
35
e a(t) = a(t) -
Since both a(t) and
e (t).
r(t).
(44)
(t) are random functions then so is
Therefore we can talk about the variance of the
error ea (t) and it will be defined as
of
= VAR[e a (t)
The subscript "f" denotes frequency demodulation.
Note
that if the error is small, the variance of the error is
correspondingly small and our demodulator is working well.
On the other hand, if the error is large, then af2
large.
is
Thus if we consider the ratio 1/af2 , large values
imply good demodulator performance and low values imply
poor performance.
Another parameter which is important to this case is
the carrier-to-noise power ratio in the message equivalent
bandwidth.
In symbols
SNR
A
N
NB
hertz
1
(45)
eq
where P represents the carrier power and NBeq represents
the white noise power in the message equivalent bandwidth.
In receivers that employ a phase-locked loop there
are two more parameters that give information about how
well the system is performing.
*
See appendix for explanation
They are the phase error
36
variance,
x
and cycle skips.
The phase error variance is actually the variance
as shown in figure 13.
of the loop error, e(t)
Formally,
2
Gx2
= VAR[e
VAR[x(t)
x(t)]13 is operating
(46)
the= system
of figure
of x(t)]
For small values
in the linear region of its performance
characteristics.
For large values of ax 2 interesting things happen and the
performance degrades rapidly.
By plotting 1/
2 versus A
we can obtain an additional insight as to what level of
SNR our system begins to fail.
Consider figure 13, and recall that the sine operator
is a modulo-2rr device.
When
el(t) = 2
the sine operator
(47)
+
views this as
e(t)
= E
(48)
As a consequence of this, the actually large error of
2n
+
is viewed by the loop as a small error.
The
result is that we get insufficient feedback from the loop
and thus a larger error between x(t) and
r(t).
This
action of the loop is called cycle skipping and its occurrence implies degradation of receiver performance.
Cycle
skipping is therefore another measure by which we can
judge the performance of our system.
37
3.2
ANALYSIS BY THE QUASI-LINEARIZATION TECHNIQUE
In figure 13 when sin [el(t)] = e(t)
the sine oper-
ator can be approximated by unity and the system is completely linear.
When this is the case it is straightforward
to compute the error variances.
However, we know that e(t)
is not always small, especially for low SNR.
When this is
the case the nonlinear operator creates a drastic change in
the performance curves.
Complete analytical analysis of the non-linear system
is difficult.
However, Booton 1 1 postulated that if the
sine operator were replaced by an equivalent gain term,
instead of unity, then the resultant quasi-linear system
would perform more like the nonlinear system.
A derivation
of Booton's equivalent gain term is given in the following
paragraphs.
If our system was linear and the input to the system
was Gaussian with mean M
2
x
and variance ax , then every vari-
able in the system would be Gaussian with mean M x and variance ax
2
.
*
However, in general, nonlinear operations on
an input function change the shape of its probability
density function.
of x(t) -
Thus, in our system, when the amplitude
(t) is small, and the linear system approxima-
tion is valid, then et)
*
can be considered approximately
Reference 17, Chapter 8.
38
With this
Gaussian with zero mean and variance a2.
approximation in mind we can proceed to find an "equivalent gain" to replace the sine operator.
The incremental gain of a sine operator for an input
Y [or e(t)]
is
d
dY
[sin (Y)] = cos
(Y)
(49)
Now since we have determined that Y is approxima tely
Gaussian then the expected gain can be written a s
y2
2 ax
. ¢
00
2
cos
(Y) dy = e
2
(50)
x
If we define
2
v -
(:x )
e
(51)
then
2
e
2
1
(52)
V7V
By replacing the sine operator in figure 13 by its equivalent expected gain (52), we obtain the quasi-linear system
shown in figure 14.
Given the quasi-linear model, it remains to compute
a
OX
2
2
and af
for a given SNR.
Using block diagram reduction
39
(3)
r-C
0()
Figure
14
Quasi-linear model
n i1 \ -
e4)
Figure 15
f,_
G C(S))
A
Figure
16
Complete, zero delay, linear, realizable model
40
we obtain figure 15 from figure 14.
a(t)
Now if,
<== S ()
(53)
then
df2
x(t)
<
~ Sx()
=
Sa (W)
(54)
and
Sn(4) ()
(X)
o
Pv
.
(55)
In general the phase error is caused by two sources.
There is an error due to distortion in the filters and
there is an error due to the presence of additive noise
in the system.
If H(w) is the system function that oper-
ates on the phase of the received signal [i.e. x(t)] then
00
{Sx(w) 11 - H(w) 12 + S ( 4 ) (w)IH(w)
X =
12 }
2dw
(56)
-0c
For this case,
df G (w)
JW
H(w)
1+
df G"(w)
JW
(57)
41
Likewise, if H(w) represents the overall transfer function
from a(t) to a(t), then
of
2= {I s () I1 - H(w)2 + Sn(4) (
2)
9
) I12} dw
T
(58)
where
(59)
H(w) = H(w) G(W)
pr()
For the purpose of investigating equation (56) and
(58) we will use the Butterworth class of power spectral
densities.
This family is defined by the two sided spectra,
2n
Tr
k
sin 2
Sa(w:n) =
(60)
+ (/k)
1 + (w/k)
for all integers n > 0.
2n
2
The gain term in Sa(w:n) is designed
so that the variance of the spectra will be unity for all n.
2n
k
.
W
2n
dw =
1
(61)
1 + (w/k)2n 2
-03
Observe that we have a choice on how we are to proceed.
We may choose to design the system filters for one of two
cases.
One case is shown in figure 15.
Here the signal
42
that must be filtered is x(t) + n( 4) (t).
The other case
is shown in figure 16 where the input to the loop is
x(t) + n(3)(t).
In either case the resultant filters
will be arranged in the quasi-linear model as shown in
figure
14.
In the body of this thesis, the system filters will
be designed according to the system of figure 16.
The
other case will be considered in the appendix.
Example
1
Some details of the operations indicated in equations
(56) and (58) will be carried out for the single pole
Butterworth message Sa (w:l).
Only results will be pre-
sented for the higher order Butterworth messages.
Consider figure 15 when a(t) is zero mean and
Gaussian with power spectral density
S
()
2
+1
(62)
and the additive channel noise is zero mean and white
with double sided power spectral density,
N
Sn(W)
=
2
(63)
43
Now using the symbols indicated in figure 15,
2
df
Sx ()
2
2
oW
w
2
+1
(64)
1
and
(4)
Sn
n
)
N
2P V
2P
.
(65)
Before we can proceed we must compute the in-loop
and post-loop filters for our system.
If we revert back
to the completely linear model shown in figure 16, then
the filters G(s)
r
tions
(36) and
and G pr(s) may be computed with equapr
(37).
The results are:
,, 1
Gr (s) =f
df
(y - 1)s + 6
s+ 1
(66)
and
G,'
=2(Y
1)
2s + (y + 1)
pr(s)
(67)
26 + 1
(68)
where
Y =
44
6
2
=d
2
A
f
(69)
4P
(70)
KN
o
Substituting these results into equations (56) and
(58), we obtain:
+J.
2
2 [(y-1)s + 6 ][-(-l)s
1
+ 6]} +
2 df 2
V
*
ds
-
+ V + (-1)]s + 6{V
. JOO{~s2
S2 -[V
+ (-1)] s + 6}
(71)
or
2
d+
+
6
[(-1)2
1
-l
df~~~[(
+
5vf
x
6v+[v
+
(
-
(72)
1)]
Similar computations with equations (58) and (59)
yields,
2v 6 + ( - 1)2 (v + 2)
2
f
=
26v + 26v(r
(73)
- 1)
Note that equation (72) is a rather complicated
formula.
It can be reduced to a transcendental equation
2
involving only ax , df and A.
*
By assuming values for ax
2
A table of integrals of this kind are found in reference 12
45
we can compute the corresponding values for A.
In this case
for the one pole Butterworth message, equation (72) may be
inverted to form a fourth order polynomial in A.
a4 A
+a 3J
aA1 A+ + a
+ a+
= 0
(74)
where
a4 = 2dfx 2
2
a3 =
2
(ax
a
2
Cyxv
2
a2 = 8dfox2v -2(x2 - oX2v)(dfv2v
2df +
- 2(ax2- vax2)(2v) - (df
al = 4xav
aO =
8 dfv
- 2(dfv + 2vdf
+
2vd + df
df)(2v+)
Once (74) is solved for a set of ax2 and A, then
equation (73) can be solved for the mean square demodulaPlots of equations (72) and (73) are shown
tion error.
in figures
(17) and
(18).
When similar calculations are carried out for the
second order Butterworth message the following results
are obtained.
46
"
M s
G (s)
df
pr (S)
2M2
+ Ls + N
(75)
s2 + /2 s + 1
1
T2s
M
2
s
+
3
(76)
+ Ls + N
[ (MN)(--+1) + (L2-2N) N + N2 ( M+
2MN [_ MN + (ML + 1)
/v--
-
2) +
r
2
J3
2 M+
]
J.f
+
-v +
v12
(77)
[
of
2
=
M
2
+2
----(MN)
(MN)
(ML
+ T3/7-] + 2/[ (M)
/ + 1)
/7 (/7 + /) + _]
2/7 4
2MN [_ MN + (ML + 1) ( -M
/7
/7
/7-
/
+
(78)
where
M=
2y -
L = (2y
-
y = Adf 2
f
1)/M
N = y/M
T2 =
2y
+ v
P
N
v = e
26 - 2/'y + 1
T3 = Y+ -
4 =4
o
ax 2
47
The results of equations (77) and (78) are plotted
in figures
(19) and (20) respectively.
Let us divert briefly from the quasi-linear analysis
and consider the system when it'soperating in its linear
region only.
That is to say,
sin et)
= e l (t)
(79)
or equivalently,
2
a
x
-< acr
2
(80)
where ace is just an arbitrary constant that implies (79).
When ax
2
< ac
2
we will say that our system is linear.
For a linear system with a message and white noise
as its input, Yovits and Jacksonl3 derived a useful closed
form expression to compute the variance of the error.
That expression is
co
aopt-
NyLog [1 + S()]
~opt
~N
2,f
(81)
-C0
Now apply (81) to our special case, the variance of
the loop error can be written as, (see figure 16)
48
2
-No df
2
N
J2P
ax=
:x2=
Log
2P
+
2
a)
2
N
_Co
a
2r'
°x
-
cr
2
(82)
2P
Also, a convenient expression for the message error
variance as given by Snyderl4 is,
of
2
= 31
N
2 (2P) f(O)
3
+ F(O)
(83)
df
where
f(O)
=
J
[1 + 2P
N S ()
2 o
Log
N
2
0
dw
2]7r
2w 2
df
and
F(0) - df2
2P
Log-C
[1
+o
df2 ]
N
2~
(84)
Plots of the linear relations are given in figures
21, 22, and 23 for the Butterworth
5.
family,
n = 1, 2, and
Notice that these curves are valid only when the system
is approximately linear.
The constraint a2
< a2
must
xfor
the
linear
to
equations
(82)
and
hold
(83) tcr
be
true for the linear equations (82) and (83) to hold true.
49
3.3
DISCUSSION
The results of this chapter are all embodied in figures
17 through 23.
Figures 17 through 20 indicate the perform-
ance of our optimum receiver during operation in the linear
region.
In addition they indicate where the system becomes
nonlinear and hence where the performance begins to drop off
rapidly.
For the one and two pole cases, Booton's quasi-
linear method shows that the systems become nonlinear when
x2
one radian2 .
Comparing the one and two pole cases we
see that the linear region performance is a whole order of
magnitude better for the latter case.
Also notice that for
larger values of df, the threshold occurs at a considerably
lower value of SNR.
Booton's technique was not applied to
the fifth order Butterworth case because of the complexity
of the computations.
Figures 21 through 23 are the results using strictly a
linear analysis.
As would be expected, these linear analysis
results do agree with the quasi-linear analysis results for
large values of SNR.
= EXP [-
1
ox 2
50
]
VI -
One pole message
First order phase-locked loop
'D
I
ts
Plot of Equation (72)
l~'~~'~i-~
FF~,'~~w-T;f-F
F~r....l
t
3tt3
.1
*
I
i
,.. -H+Hf1H+H
-II
I Ej
nIF!, i--t+Ti
A
I
I
......... 44
1-4--4c·· F4-
r-F-F
@-+--
-r-!-
- I - I I I I-
-= IUl_
1..-
I
df = 100
I
!I
1!III
I II I
is
.-
!I I " !
I
I . . .. .
4-
-- I
p~----,t
dff = 50 +--1
1-eII tor,ii
nI i
IiII
I-.l9-t"
I!4-'4Ill
~
-"IT
i J
+4+1
t4
u,,l·l,
I ]L I -H
FF-r I I I !l-I I
I FIFF-r
I, I
I
1 1
ii i
34 RX H ;#df
L !
1 I I I .I
I-
ri
r
rTTe
-Hi+ I:"
_
1 T
+t I
· I·
I
. I
= 25 ; i
-rr 11
i.I ..
r ll..,,.....
I
lI
-I;
I
I Il 1 II Ill
i,
I
!v v
1
I
C
3
I
1
I
Z
t
-
;
I'i
0
..
t
-14
u6
'9
IU
w
IN.
1S11m-
LL
w
io
1
.-
x
o*Xkm-t
WX4,W
-4g1-4g--1'
mX5h
E-'-UA--
.
4
10
4
2
3
4
5 20 6789900
2
3
4
5
67891000
2
3
4
5
2
3
4
5
6
6
7 8 9 00
7 8 9 100
4P
If
A = KN
Figure
0
17
Booton Theoretical Performance for Figure 15
I
.x
."
ll
l
..
.
2
3
4
5
6
78954
2
3
4
5
6
7 8 9
)p
1
GAIN = EXP
[- f
2
1
x ] =
51
One pole message
First order phase-locked loop
IG
R
I.
I
I I
I 1I I I I 1
I
Plot of Equation (73)
IFT"L
.
-i
i
¥ttfFf
~~~~1U
i~
· I-C-
'--,I
:'+
II
w-
)
I·-+-+
CCCW-Ci;L-
-t
+
C
l - · -·
l l
I
'I
'
'
-"
!'
C C
- .'
'
,
"; .
_iI.1
I ;_
Ih "
1_41.....L.L-II.-
I ,i
Ii I
-
_
I
I I
I
I I
.
.
I
-C-C-I-TC-I·---I-;J
I--
'q
'-'1-ri I
IH
'3
8
7
Ii
6
1-
df = 100
I
plt1Vl
d
I
io
i
I
I
I
i
I Illl I m
I
50
=
I
I
I I
I1
I-I 7
f
.-
_l.A
f=
I-!
-
df = 25
.P
I I
la
.T.'.
9
8
jif
4I.
--f
7
.. .
Ox
10-.
-:: U W
U
6
KL~~}'-h~~~ii
+-,I-1±t
~~~~II IT -~--*-.*---
5
i. '
4
~~~
4-4
-<I4i- II
-~~~~~~4
~~
I
'
+
.I
3
:-
r~~~~~~~~~~
-
-4 :--.
1 '~~~~~~~AL.
w
.~
-
-
-
.
II
2
.1.
2
2
10
3
3
4
4
5
§
......
2
2
6..7
.
6 7 89........00
8 9..OO
A
Figure
3
3
4
4
5
§
6
6
7..8...
7 8 91.
9 --(DOO2
.... 2
4P
KN
O
18
Booton Theoretical Performance for Figure 15
'-""
-'ill-`
----
UIIIIIIU--I-
3
3
4
4
5
6
6
7 89 I I
78
M;k
-
V
1
a
[-
= EXP
2 ]
52
Two pole message
Second order phase-locked loop
101~~~~~~~~~~~~~~~~~~~~~~~~~1
,-j --4"
ii
+w~i~TI
!ii,,_~,
lo~i,
-Plot of Equation (77)
~~~~~~jT~-~~~~~~~
~ t~~~~~~~~~~~~~~~1
~ ~ ~~~~~~~~__
Zrt
7j~z-
i~
j
tT4
... .
...
+---t-
-
I
!-i-i-t-t-~~t
i-i;-I
k~i;~V+<
{~
,?~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~4
AT~J
2~~t
t-~
df= 50
~
-
TZ-,
-
-:-----;-i--ti
--
i.7T
2
*X
+-4
~ 4 ~ ~ fj ~ 4~ ~ ~ ~
T-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-i
tij ~
i--t-i-
-i -iIrt4
tt-.
i~~-tli--j
-i
i-1
8 1
7 -j~~~V-1
I~r~t'tl-i-t ii i i it itit-i-
tt
ii-4
i----i
_FW
z~~~~-~~~~tlr
~F
_T
-
i__
t
--+~~~~~~
-t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-i-
i
l-
_44-i
r~lri
91/ U1
'Aflt~-
_1
B~~Ti"Z~~~~~~T~~~~~
I_
t-1T~i
ti~tl i
11~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~T
-
~~ttti
............
34~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
t-t"·
'"1
--
7FT
I
t
-
TI'
-r
f-u-
~~~~~~I~~~~0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~i
-t-i--~~-
o
--
i?
,
-3He
Vji l i
; i
i
tp.@
i--r-
6
1.-
-0O
i
TI
F±4'
01
ttF-7
I
I
Fi iIi I
i-ti--- I--i·I t;I H
2
3
4
-
/
n
-
-
Ij
-t·-t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/t
-"---t--l-
i
n!l,
-r'T
i. iii~~~~~~~~~~~~~~~~
~~
t-1-4t
ti-Si·-t~~~~~t-
HH--
t-ii-li~~~~~~~~~~~~~~~~ii
L
I
~~~i-l---i-:i-i
t;~~1
Iti
,
7
i
-
ti
-i-cll
OX
i
i
;i
iiii~~~~i
-
11
1.
C, A 7 8 9 lboo
· ___
I,Ou
KN
Figure
i'
-t~~~tt"3--t-~~~--I Ci +i -Tt
H
0
19
Booton Theoretical Performance for Figure 15
J
i
JJ
3
4
5
6
7 8 9 11~~~~
0
1
= EXP
1
1
[-
2]
]
2
53
Two pole message
Second order phase-locked loop
i-i
t
2
7
I E
I jH
-
4
-I F
~~
--
-4-4
444i-+44T-14
I l
I Io I
|
.
-- -i I
_ 4--
.
·- i
i
, __
II
-
-
df
2
!,1; -I-11-,
! i
-·
·-
4
K;;L ,.,
- ,t
4
4.
i, ,1 lli
-
F
-
VI i ii
I I ii
i
X
lfll
1
---
f4-
-I -I'
d
II I
'l
I-
I
-
4
0.
7-j
= 5
1
4,,-t
i
. {44.e,
I I iI "'i'-'-"-"--'--'
II ,
I
~~~~
-'"-'-"-""""-'-u'"-'"-"
' ''
1 1 ;
i
;''
i t | t2Li 4
I I
4,
_I
T I I25
U
I I
i i
I
I
~
-
t__1:4 t
I I II
I I ! II I
1 4f--
-T ;
F-
LJ
I I I
,-,
1*4±1
f+-
I
S
I
rVI44
tm-tr+n-1-*
-I-IIllllll
-;-ci+r+-u-K----·
· ·i · · ·
1.l;l1l !I
I
i I I I
= 25-, --
i g a i
ri
-V I
1--
ni)
i i
11
All'' I
I I I Ii I !
i
-l I
.
I
4+
1-I 11,-'7t-i-11 -11t7 I 7k1-
i ' i-
I if
1 1
1 11,
[-
I
1r+
(78) 1
.
tE
1
tlIll! II/J!~,t,4~i I4I;il i
i
I III
I
-11
_1
i
.,
it
111
I
11
,=100
I ---
I .1-
I
I
-4-4-C
-1-4--l-A
-A-14-H
4-+- I l, ,!I l I i l I
l l!
I I I
Ii I I III
I -rtt-t--,
F
i i III
-r~ __
I
~
-Plot of Euation
-1
;
747t
-
184-4
t_--t Ttt
1_1 H t = - 1
1I7_| ___1t
_l-t -+
I
fit
T: ]
fi
t
4i
41
1
E4 ±
2
Vf 1
Cf0o
U
4)
10
-i
-
I
I
t
i
U
U)
-qI-
t 41"
72>4
ttA
I
l ii
t- -
-P-- I -
F
- -4- -
'~
1-1
1
LIiI
--
-b1
t
t
-'
W
II
4
K
i
0
!4
;L
Li
:D
W
:¥.I
4-~#-
fO
*4i
[ -I--'4 8
5±
44
4 -:
M ill
i
M0
4/2
P
KN
Figure
Booton Theoretical
20
erformance for Figure 15
Theoretical Linear Performance for Figure 16
54
One hole message
Realizable - zero delay mean square error
,
[
[ 1
[IT TFrl
I
4)
Plot of Equation (83)
'
I
iI
T I
I
IT -P
-IT-t'r tjj ~
-I
Ii
I
i-4-'
I
-tit
4-I
-4
I 1rild-
! 1:f
-1j i
I
1
4
r,.Sr '
iI
,
.
r
+·C·W3tKC)+--C·I-4-
F-4-- => 1oFi
IF
..
. r
-C-4-4
C-.
-. - I-C-C
e-4I-.
I
I
I I
I
C-I·CUI~
t~141
#A---.
-d---
FW
X X Xe Xt
ax
1W
ELm SS
E§-tX
WX W
ffi
2
=crc
2
I I
II
i
id
0j
U
d
W
I
H.H
[
i ... S
-
I-
,
. m
Cii i Ii
I
T
I
I
! i
6
-2
thFH1HIJ-Ffr
I
3
I
'rII I I I
+ +
1
1
9
I,
0T
f
1
illllllll I I
.
=
f
Idf= 25 .
0
Sg
1--1 1 1 1 1-4W W g
if
1
~
1tI I
= 100
f
-
1+X g g W
!
__
f0+-Ft-tT4 E-LEE
-~~~~~~~
rw
;44
i-I
i
I-
_ xE W
iW4m
5
2.
i
I
'
~t-f
-- ·tti--
4mIL~i.
..... , ,-
-
..
I
CONSTRAINT
LINE
to the left of
4
- ...
.. Er
-
·vWa..
1··· L.,~,a,
· · _I I
.
q
?.?~
-
.
A
~
P
-
-
-
-
-
-
· ·ll · ar4,i·I_
-
I
__
.a.Lf
.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_·I
· · I_ ·I~_
!,llk,,I*J.-.,
'
·
.
_
~
"
,
_
_
.
_%
_
_
1
_
if±-l1-I-L
i I
I I I II
IIII
I
I! ll I I I I I
4t1- 1
i44,--4~--~1I I I
I I I I
IlII
IL
~2 33
lcD-20
44
55
6fi
77 88 9100
9 10
I I I
I II I
I I I
1----1 1 1 1 1
rI
r I
I
IIII
IIIlll
!~~~~~~~~
:-t-4--H-ti
is not vallid
-- +Hq
d-HL[I
.....
_~~~-
I I !I
I 111
1 -- 1 I I III
llll
.1I
3
22
44
3
iL
:L
55
66
77 8.
8. 91000
9 000
2
-
33
44
z
55
66
KN
Figure
.--
C--
-I
_
l_
.·--
21
_
_A^.___
'111111-'-'-'---'-
77 8 91
-
v
4P
A = 4P
--
f
w
le
Theoretical Linear Performance for Figure 16
55
Two pole message
IO
+
Realizable - zero delay mean square error
- --- 4--`--+H
- l-i1t1...
'
i
-r--i
1
!
-l--t
i :
i j i:
li--
l-l'!
"I
;i--..... --
-
T
-
?i i : i i i i i i i i i i i H ii I i i ! i i i i
-tTtl
-
I-
---
-
t
~-1
r1
FITFT--1F I ---- TAI-TFF-7T-1L-TET1
r1--T---
-
i
!i i i: 1i li
1 1
i-t-'-F---t !
I=~
-r-
lI
- 1 :
JTZ~_--.
1_1-Li-L1
i
.i t !r!
I
L
~ [ [ + r j
j
I
i
~rl-;:
1
_[LT_q t; ,4+m
=f
--I1-ILL1,
r
-,-r
lift
-1I
L
T--h--
C,.4
I I
I6
_;e
;cf = .UU
i
_i
T:
- -
! ii _ M ! i i _1
TT~
L!~_ 1T:~
,'L_
: i'
r-T
1
Ll=-
1:1
[" i i
----
I
++44
: ei
i
----
-
A_ 1
-
I-A4 . f1.
01 .H H
I- TCI ti1i
1 LiL
L- ' _
I1
-r
-t,
I
'4J
1-1441-1
T:;J
,;I
r
.i.
I''
Ii i i
rT] t-'
:!TT - -T i-
!
'I
....
II
-1 TTi
!i
ffli
I I
-iLiI
1+7-.FT1-t
T
mTUl
5 0 r
-
_ i I i II I
111 i,'
,t I T r1n
- C
: ,,
7if
-'I,
i
Li'
-
L1L
dff
-t
.4 4i
t
_i
-1
it
TF
I11. [
t--
...
. ..
L~
I
I
-
I I T r
t
I I 1!
I
=
i
.4i
ii
I ff:P+_FF
I
25
t--
1-~Z
1
-
Ii
Jr
r~Ol
[ ~-r
rr ,-- >-:--_!--;-
--
L--:-
5--·-----l-'E_
-;
I
;4
j.
-X
.
.7-----
{
-
CONST
J AINT
L
-t::7-Xj1
i I-:
ito1
tI~~--i-4
inear
I
le
i of which
fhe
is
analysis
t
-
T
T
L
I
f4-Ai
1 V-T
fft
-
I
N
i
i-t
J
t
3
7
7
8
()1.
Ii IW' U
8
C
I
A;
1o)Wt
n
wj
U,
iI
)e
l
9-
~tL
;-ift
<t1
I
1
-
-
H
7-i
-
f
:
41ti
4
li:f7
4
1ti
Y
!ox
J4M
iIaW
i2
10
to
2
4
"I
7
7
8
8
3
9
91002
A
4/2 P
KN
o
Figure
22
4
5
5
6
6
78
7 8
7 8
9 OO
9 100
3
3
4-
5
5
6
9
Ok
56
Theoretical Linear Performance for Figure 16
Five pole message
Realizable - zero delay mean square error
=1444
2
= C
x
df=
9'
-I
cr
25it~~~~~~~~tt~~~~~~~tt~~~~~tt~~~~~~~t~~~lit~~~~~~tttiiitm~~~~~~~~~~~~~##(~
------
---· -
.--
Plot of
of Equation
-Plot
Equation
(83)
(83)]
Y-Y
-1 r
fftllflfiff
-4 -
3~~~~~~~t--
i
t
I
!
1 1 ; 1 -1-4-1444-1-1- -1 3 I 1 ' . .
I - -" --4-C I I I I i
lb
L--L"Iii-.LIL
H
iH '
i
JI
i
!
~
.
H
±H4++H-I+-t-H-H,--H-+i
p Iit I
-±-I--HIti-
1 11 I
tl-tFtht
l1
I II I T- ,
---
-H+i-I
Ti
.III
8
11-r-t-1I-11
---.
-
--
I1
I'
1±1W -1
-P
IIIIIIIIIH II
-""!
:1
J2
!
II
5
af
I
3
rt- · IY T r
h-uu·ur,
I
I
2]
i
Tsl
H
--
Mr.IL
r-------
mi!
T
1.
U.L...i
0"
IS
' t~H
f- t+?~-l-l+~t~tt-1
I
t! z)
M
w
m
In
'aw
Ljt i ' " i '
9- i
~
i i i i i i i i Tiffl-ttt",
,K-ee
.
.
. rI .
.
tt-tl
i i i if iiii
-- t'-M-1
·
I I
I........
-H-t.
tM
I
I11 I
i i iiii
rl?,Tlr
l-1
1-
Ili
ii iI
IIII III
III I 1111111IIIII I 11111111
I0
Z
z~;.TT
·
a;--tfl:~-r-L:
L
:2
5
I
x
0
jM
4
TIII
3
r--c--~,
+--
-T
I-
-L--
1-I II
I.
I I
_.LL I I L
i
II I I I
o10
I I-vH4TI-ITI
III
T T,
IIIIIIII.II-I
'i
iIlll!11!l! I
3
I
4
6
111-H1-4-kblIhI
Ii I
-H
-- H -- ttIHtfih~~th~~iIi
_ _ _ _ ___________Iil__l__l
I
5
7
8
I I II
.II I.II.I I
2
9100
A=
3
6.18 P
KN 0
Figure 23
L
._..._.
4
5
I--I- IIII'III
6 7
III+II
I
..............
8 ! 1000
ll lm.
. . . . .- .. .
2
._..-............
2
...
4
3
4.
.....
-1
I
l1
i~~111TI1
b
b
b
'I
I
ts
1UK
57
CHAPTER
4
DIGITAL SIMULATION OF OPTIMUM DEMODULATOR
4.0
INTRODUCTION
This chapter will be devoted to the performance
analysis of the system in figure 24 by using digital
simulation techniques.
II)
C M
JJ
C
Figure
24
Optimum Receiver
Note that Figure 24 is the optimum receiver that was
derived in chapter two.
convenience.
It is redrawn here merely for
The filters used in this model will all be
zero delay, optimum Wiener filters.
·
ll
,
.
.P.,.
l
.
__
58
4.1
DIGITAL MODELING
An obvious difficulty with figure 24, as far as
digital simulation is concerned, is that it is an analog
system.
The message a(t), the noise n (3 ) (t) and the im-
pulse response [hi (t)] of the filters are all continuous
Our problem then is to find an equivalent
waveforms.
digital model for these three components.
The following
three sections will explore the digital modeling of hi(t),
a(t) and n ( 3 ) (t), in that order.
SAMPLED DATA MODEL OF CONTINUOUS FILTER
4.1.1
From the process of Wiener filtering we automatically obtain the frequency domain representation for the
continuous optimum filter.
In the continuous case the relationship of the input
x(t) and the output y(t) of a linear filter is given by
the familiar convolution integral (figure 25)
00
y(t)-= h(T)x(t
(85)
- iT)dT
-00
If x(t) were as shown in figure 26-a then a sampled
data version of x(t), call it Xn(t), is shown in figure
Mathematically
26-b.
co
CO
xn( t) =
I
0
,~~~~~1
1
_.
1_
.
x(nT) u(t
- nT)
59
Likewise, if h(t) is the impulse response for the
continuous filter, then
o
h n (t) =
h(nT) u o (t - nT)
(86)
0
is the impulse response of the sampled data filter.
The
problem then is to find the frequency domain expression
for hn(t).
If we consider
(86) as being the product
function h(t) and a unit impulse train
0(n)
u(t
-
nT)
Figure 25H
Figure
25
.1
c
,
,,,
b
Figure
I__IP
_
IUI_____1
-^II-)ILLIIPIIIIII·111
26
of
60
where,
(Laplace transform)
= H(s)
L{h(t)}
and
L {T(nT)} = L I uo(t - nT)
0
1
(88)
1- e -sT
then
h
= h(t)
n (t)
T(nT)
H
n (s) = H(s) * 1 - 1 e
(89)
or
(90)
where the symbol * indicates the complex convolution
operation.
Hn(5)
t
-
Formally,
21J
j)oo
1
H(p)
-j
1
dp
(91)
-T(s-p)
If
LIM H(s) = 0
(92)
s500
and H(s) has all its poles in the left half s plane, then
t
Reference 15.
- -
*
-
- -
A
n
-
-
61
by applying the methods of contour integration where the
contour encloses the entire left half plane, then
Hn
X
(s) =
Residues of{H(p)
1 - e
poles of H(p)
1
_(.
(93)
If we define
sT
z = ea
(94)
then we can rewrite (93) as
H(z)
Example
I
Residues of{H(p)
poles of H(p)
1
1 - epT z-1
(95)
1
Find H(z) for,
H(s)
=
2
s
(96)
1
+ V
s + 1
Hence
1
=
(p)
(p + P 1) (p + P2
)
(97)
where
Pl
2
-p
2 [1 +J]
2
(98)
62
1
then
RES (P)
PTz]
1 - e pTz1P
+2(1+J)T
P=P1
1 - e
-1
z
(99)and
' 1
1
RES [H(p)
Z2--1
e-pTP
- P12
-P=P2
1 - e
+-e
z(-J)T
-1
z
(100)
Summing (99) and (100) and combining the result gives
the answer:
-J
2
T
sin [(//2)Tl
z-1
(101)
H(Z) =
- /2 T
cos[(/2/2)T]z
1 - 2e
-1
+
e-+
e
T
z
-2
The problem remains to interpret H(z) so that we may
realize it on a digital computer.
Consider H(z) to be the
transfer function for a digital input X(z) and digital
output Y(z),
H (z
=Y
(z)
X(z)
P(z)
Q(z)
(102)
63
Cross multiplying (18) yields
*
(103)
Y(z) Q(z) = X(z) P(z)
By converting back into the time domain difference
equations we can realize the filter on the digital computer.
An example will serve well in demonstrating the
technique.
Example
2
Using the system function in (101) we obtain
-
V
2
Y(z)
X z)
H(z)
T
sin [(/2 /2)T] z-1
-
2
1 - 2e
T
cos [
-1
-2) TI
+e
-/2 T
-2
z
(104)
or
2 T
Y(z) = f
e
-
+
2e
2
sin [(/J/2)T]}X(z) z-1
-T
2
cos [(V-22)TIY(z)
z-1
-[e/2
-[e
TY(Z
]Y(z)
z
-2
(105)
*
Reference 16.
64
From transform theorems we know that, in general, if
we multiply a function in the frequency domain by e
as
the time domain expression is delayed by "a" seconds.
, then
Like-
wise if
F(z) v
f(t)
then
F(z) z n
Applying
e
-
+
{2e
nT)
(106)
(106) to (105) yields
- 2
y(nT) = {
> f(t
2
T
sin [(/2/2)T] x(nT - T)
-T
2
cos [(/-/2)T]} y(nT - T) - e
Ty(nT
- 2T)
(107)
Rewriting (107) yields
y(nT) = bx(nT
- T) + b 2 y(nT - T) - b 3 y(nT - 2T)
(108)
where the b1 are obvious.
Equation (108) is immediately realizable in a block
diagram representation that is compatible with the digital
computer.
See Figure 27.
65
(TT)
3 II -
Figure
27
Digital model of H(s)
(Circles indicate a unit delay and squares indicate gains)
As a summary of this section we will outline the
method.
First, Wiener filtering yields a continuous filter
H(s).
Second, find the digital equivalent H(z) by using
(95).
Third, rewrite the transfer function as in equation
(103). Finally convert (103) to the time domain and interpret the result as a block diagram.
In the appendix we
list some filters and their digital equivalents.
4.1.2
SAMPLED DATA MODEL OF MESSAGE a(t)
For our system a (nT) is to be a sample function from a
zero mean Gaussian random process.
In addition, it is to
have a known spectral distribution and unit variance.
We will generate a(t) using the digital computer.
In
so doing we are automatically taking care of the requirement
r
66
that a(t) be converted to a sampled function, a(nT).
It
remains to give a(nT) the proper statistical and power
characteristics.
For our first problem we will make a(nT) a zero mean
Gaussian function.
The IBM-7094 computer has in its
library a random number generator the output of which has
a uniform distribution between zero and one.
%
Figure
28
First let x1 and x 2 be two independent variables from
px(x) and then define
1
(109)
z = i-2 Log(x 2 )
(110)
= 2x
ql = z sin
(111)
q2 = z cos
(112)
r
67
If the transformation (109) - (112)
are correct, then gl
and g2 will be independent zero mean Gaussian random
variables. To prove this, one needs only to show that,
e is a uniform distribution
given gl and g 2
Z = v' 2 + g22
is a Rayleigh distribution.
41
z sin
=
-
92
-
and
Hence
tan 8
z cos
8 = tan
1
(113)
g2
Davenport and Root1 7 shows that (113) is a uniform
distribution, and
z
2
= -2 Log
z
(x)
(114)
2
x =e
P(z) =
(115)
-1
-
(116)
Carrying out the indicated operation in (116) we obtain,
2
P(z) = (l)ze
Suppose, gl and g 2 were independent N(o,l); then if
L
(117)
r
68
gi
2
z =gl
+ g2 2
and
P(gi) =
e
2
2
(118)
and we make a suitable change of variable to polar
coordinates, then
2rr
P(z)
z
2e
2
2
de
P(z) = ze
(119)
Since equation (117) and equation (119) are the same,
then we can deduce that gl and g2 are in fact independent
zero mean Gaussian random variables.
The transformations
(109) - (112) are easily executed on the computer.
We have made our source Gaussian so the next problem
is to mold the spectrum of a(nT) into one of our choice.
First we will make a(nT) have a flat spectral distribution
and then pass it through a filter of our choice to obtain
the final message.
The variables gl and g 2 can be represented as in
figure 29a.
Each rectangle represents an independent
Gaussian variable so that we may consider their statistical character easily.
rate.
L
The letter T is the sampling
Since the variables have zero mean, then their
r
69
(a)
Independent Generated Variables, Gaussian Distributed
.R (r)
-
- -
-
-
-
-
-
-
-
-
-
- -
-
-
-
- -
- -
?
-
T
T
(b)
Correlation function of gi
, SJ(f)
I - . .- .
v
_L
0
-~~
w
!
T
T
(c)
Power spectral density of g i
f
T
(d)
T
Normalized power spectrum of g i
Figure
29
70
covariance function equals their correlation function.
The
correlation function for the process in 29a, is shown in
29b.
We know that the power spectral density for a sta-
tionary random process is given by the Fourier transform
of the correlation function.
For completeness, by defini-
tion
Si(() =
R(T)e -
dT,
= 2f
(120)
Carrying out the indicated operations yields
S()
= T [sin
T2
(121)
wT
A sketch of Si ( ) is shown in figure 29c.
This
spectrum still does not appear flat with unit height.
If
we pass gi through a gain of 1//T then we obtain the spectrum shown in figure 29d.
Note that if we pass S(w)
through a filter whose highest frequency of interest, fh'
is much less than fm = l/T, then we can consider S(w) flat
with unit height.
The point here is that the sampling rate
T must be sufficiently small so that the approximations
f
f
T
= 1
If
(122)
and
S()
are valid.
< fh
(123)
rI
71
In this thesis we will only consider the Butterworth
family of messages given by
2n
=
sin
K
Sa (:n) =2n
+
Tr
n 2n
(124)
(K)
where K will be taken as unity for simplicity.
The above discussion is best summarized with an
example:
Example
3
Suppose we wanted to generate the message
S
(w:2
)
=
(125)
4
w +1
Sa(w:2) =
(_-2
+ )jw
2+1)(
(
/22 ) (2r2
)
(126)
From transform theorems and S(w) = 1
Sa(w:2)
= S(w) IH(w) 12
S a ( w:2) =
Sa( :2)
IH()
12
= H(w) H*(w)
(127)
(128)
(129)
72
S (w:2)
J2f
2
H(w) =
H*( )
s
(130)
2 s + 1
+
Hence to generate Sa(w:2) we have merely to pass the
normalized version of gl through the filter given by
(130). Naturally the filter H(w) will be realized in
the sampled data framework.
4.1.3
SAMPLED DATA MODEL OF NOISE n(t)
The last continuous system variable that we need to
cast into the digital framework is the additive noise,
n(t).
In 4.1.2, when we were developing the Gaussian
structure of the message, we were dealing with two independent Gaussian random variables.
We need only to take
the other variable and treat it as the additive, independent, zero mean, white Gaussian channel noise.
All of the
arguments that went into developing the message also hold
for the noise.
Instead of going through it all again only
the significant results will be presented here.
We want the channel noise to be white with two sided
spectral density of No/2.
for g2 will be
30.
Hence the normalizing factor
No/2T so we get the result shown in figure
Once again the approximation (122) must be maintained
to insure
N
Sn()
=
If
fh
where fh is the highest message frequency of interest.
(131)
73
We will not use a filter on the noise source because
the loop filter will take care of the out-of-band noise.
4.2
OPTIMUM F.M. DEMODULATOR, DIGITAL MODEL
By applying the results of section 4.1 to the continu-
ous F.M. system shown in figure 24, we can obtain the equivalent digital model shown in figure 31.
_
\
Al.
Z
I
T1
_ am
$
Figure
30
Additive "white" channel noise spectrum
)
71
Figure
31
Digital model of F.M. demodulator
74
Even though the message and the noise are in digital
form we will still define them as a(t) and n(t) respectively and leave their digital nature implied.
When the
three filters are expressed in their appropriate digital
form, then figure 31 is the complete digital model of the
demodulator.
The system is now ready to be simulated on
the digital computer.
Let us divert from the complete model for a moment
and concentrate on the two integrators in figure 31.
the presence
tors.
of the "T" in the gain term for the integra-
Intuitively
Mathematically
Note
it is clear that the "T" is necessary.
the presence
of the "T" is difficult
justify but it has been done by Papoulis.
18
to
The "T"
factor must premultiply all filters that are expressible
as ratios of polynomials in the frequency domain and whose
numerators are at least one degree less than the denominator.
For filters with equal powers in the numerator and
in the denominator we are required to divide them out so
that the numerator is one power less.
The "T" term is then
to be placed in the remainder term.
Example
4
s+3
H(s) = s + 1
=
1 +
2
[see figure
32]
(132)
Now we can revert back to the complete digital receiver.
In the simulation we compute the phase error
p
75
Figure
32
j
Realization of H(S)
variance and the mean square demodulation variance as it
)
was explained in chapter 3, section 3.1.
2
=
f
VAR [a(t) - a(t)]
(133)
and
2
= VAR
°x
[x(t) -
(t)]
(134)
I
i
i
i
II
For the sampled data model, (133) and (134) can be
written
as:
I
I
II
I
I
a2
=
1
M
-I
[a(t) - a (t)] 2
r
(135)
[x(t) -
(136)
I
ax
2
=
1
M
i-
i=l
r
r(t)12
r
76
where M is the total number of Gaussian random numbers
put into the system.
Clearly, the greater we made M, the
more confidence we have in the resultant.
Also, if we
make N independent runs with M numbers in each run, then
2
2
afN
=-
af
(137)
In other words the variance of the mean square error
variance, afN ' decreases linearly with the number of runs.
Ideally, then for our system we want to make M and N as
large as possible.
Computer availability precludes this
ideal situation so we have to compromise between the length
of a run and an acceptable confidence level.
After one
performs a few simulations he gets a feeling for the length
of time it takes for the system to reach a statistical
steady state.
In the next section, we have an example
that demonstrates the length of time that it takes for a
typical system to reach steady state.
4.3
SIMULATION RESULTS AND DISCUSSION
Optimum receivers for the first and second order
Butterworth messages were simulated on the digital computer.
Also, a second order receiver was used to demodu-
late a fifth order message.
*
L.
References 17 and 19.
For the latter simulation
T'
77
the system was sub-optimum so that only limited conclusions
may be drawn from the results.
Performance curves for the first order case are shown
in figures 33 and 34.
Referring to figure 31, the appro-
priate filters for the first order message are
Ga(s) = s+l
G"(S) =(y
(138)
-
r
)s + 6
(139)
s(s + 1)
' (s) =
pr
(
-
+ 1
1
l)s
+ 6
(140)
where
2
62
2
df2A
=f
,
A
A
44P
KN
y = V26 + 1, K = 1
In figure 33 we see that the nonlinearity enters into the
system performance when the mean square phase error is
2
approximately 0.5 radian .
Note how nicely the simulation
reveals the performance characteristics below threshold.
That portion of the curves would be nearly impossible to
plot using any other method except the actual laboratory
experimental situation.
Figure 34 shows how the loop
performed in estimating x(t).
*
This plot is made modulo-27.
Zaorski20 did a similar simulation.
78
Performance curves for the second order case are
shown in figures 35 and 36.
The Wiener filters for this
case are described by
Ga (s) =
2
M
=M
G"(s)
r
- df
G
, 1=
(s)
= m
(141)
s
+ Ls + N
+
+Ls
s
s2 +
s + 1
T2s
+
2
s
(142)
3
(143)
+ Ls + N
where
M
2y -
L
2-
2
=3 y
-1
2/ y+
2y+
1
V2
2
_
M
K=for 1
-
T2 = 2
= df
tescnodraf
A
A
4-=K
KN
0
Figure 35 shows the mean square demodulation performance.
Notice for this case that the linear region per-
formance is about an order of magnitude better than for
the one pole case.
Also, the threshold for the corre-
sponding curves for a given d
for the second order case.
has measurably improved
79
For this second order case the threshold occurs very
roughly at a phase error variance of 0.2 radians
2
It
.
appears that the ao2 at threshold might be an inverse
function
of df.
The fifth order simulation is shown in figure 37.
For this case we put a fifth order message into a second
order system.
Hence these are sub-optimum performance
curves.
For all of the above simulations a sampling rate of
.002 seconds was used.
This value of T was at least
twenty times faster than any of the time constants encountered in the above systems.
Also, .002 seconds is
about 500 times faster than the message correlation time
so that the spectrum input to the message generation
filter surely looked flat.
In the last few paragraphs of section 4.3 we considered the characteristics of the variance of the mean
square error variance.
In the simulation results this
parameter manifests itself clearly.
Figure 37.1 is an
example showing how the mean square error variance varies
with the length of the simulation.
Observe that the var-
iance settles down after an initial transient condition.
This demonstrates clearly that afN
+ 0 as N + o
is the variance of mean square error variance).
(afN
These
r
80
particular data were taken from a system operating below
threshold.
Much smaller transients occur when the system
is above threshold
Often in commercial systems, bandwidth occupancy is
an important consideration because of bandwidth constraints
put on the system.
In the analysis of this chapter we have
implicitly assumed that we have sufficient bandwidth in the
transmission media to accomodate the frequency modulated
signal for all values of df.
This tacit assumption may
have given us biased conclusions in comparing the one pole
case and the two pole case.
For example, we concluded that
the linear region performance of the two pole case was approximately an order of magnitude better than that of the
one pole case.
We did not, however, investigate the possi-
bility that perhaps the two pole message occupied more
bandwidth than the one pole case.
In the sequel we will not consider bandwidth as a
system constraint and thus will not pursue this question
any further.
Hence, our performance criterion will be
restricted to unconstrained linear region performance and
threshold levels.
The results of this chapter will be further examined
in chapter seven when we have all of the analytical and
simulating data at our disposal.
I
F
i
ri
11
-7-T--Itt pt
'cr
*
-44~
Experimental Performance for Figure 24
81
One Pole Message
I 7F
First Order
Phase-Locked Loop
----I
__
A_
A
,
1-i_L.L_. I_i _LL1'' i , I LI i i i 11 1Ir 1 I 1Itl i I I! I1iIiI I I II iI IiI I
- i
-I
HHHHHHH I
HHH
IH1
I"
..Ii
k4+4WjI
F,
_
H
i---r ---
r
i-
I-
r'
T4+0-
]
-1I
I
I
~ i
1
rn
17IT_ft h L
712-L11 jnJA
__
-----T-
1+1-
___
41T_-----__ __V
t---
-1
t
I
I
[
I
I
I
_
1
_
I
'
rX
S
I
I
.t. I
I
I I 1 i-11
I
!
]
I
f
i;
+
L
'--i
.
.- t
+-iI
t
I
H---i+X-exJ--- i-+
VW-444-A1r7!1:i r_ i i t ttrittt U- hL~
l 7=-itj-~j:+`rt
H I1-7T7T177M--C
I: 1.I J_ LII
i
I l III ' i I '1 iiil I
i I i
i
I
I i I i
I II
I
!
!
I
I
]
I
]
i
I
I
I
_Li
I
+I I
I
iJ l~11[[
I
-
ITT
i---
I4
t
T
1-F
........
~Y
44-
-I- I
-',-'-[c'- ~
-,7T
l-h~--;-i
r --
I
½1
tv-i-i
hi4
I
7iI
i
II
1-
I-T1-
i
i
'tii
I
[
I
I
2 '1
it6
.
af
i
1.1~~t
30
3 o
__
T
i
I
I
I
I
t
f
I
tltt
4
-4
- O.
1-
5r
I
1_
-
I--+
__
I'
I
i
74
1t-,
4 '
t
1_Ti
-X
r-
t 4:
.....
WX M~
d
-
n11I4FF~
j2
d
/
f
25-
I IH
t~~~~H
~~~~~~~~~~~~~~~~~~~t~~~~~~~r_
i
Z 8 9 100
IOU100
3
4
I
I
+
T
t5
r
-T
_
..
j,
I
i- h-1
+- H
t!-ttt--i-
I-
J7
6
I
1--_ . '2I-~,
r' --
-_
IF
'
il~~~~~~~~~~~~~~~~~~~~~~~~~~~i~~~~-l
5
I
1
11tI, 2,11_11*11
H-
7
il-I- :L
1
1:
...
I
.½
._
F
I
137=
0
F~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~F
100
344
!
i-i
li'
4F11r--Ii
t
i--f
X
!
-t--~----i-
W=
H JU
i~~~~~~i
i-
I
given Dy
._
F
-v17IJF
5'K
i
I
144'
-r ;1
"1F-1~~~~~~~~~~~~~
i-4-4-
X
w
., -j
l
I
iI
4-'+iIi-
...
L!
T~ ~ ~
w
I
(140)
T
1z
~
I
-
rt-t- itt
4--Il
i
I-i -t~l-~rt-!---i-J--
~i'ft trL-
I
LI
I -Ti-tii
itI
ifLiiJ-
R#444-
z i
I
i
-4
ii-i
J.
L
L
4Equations
(138)
T
-i~~~~~~~~~~~~~~~~~~~~-
-ri
!
(139)
--
t
,1
I
=tT
I--
i
!
I
<It-
:-
hiC-!i--C-ii!
ri
4
I
YIpmZ1
.r ltler
[
I
I-_
!-1'
-L
1--~-4-
I
rtU
v-.-
..... /,.
I
i+11t1fitIliti-l
I 4 4,i ; f1t4tfFS _1_ --5[-r P r|it 1l
iv IJrl1
I _
FL
4----1_11~I I-flti1-1L
f t-J
ILjU<V 4:
--- --i-- -':-ti-i'--T-ri71-;rT~-T
-r
-t-til 1_17-,,----t .~--i- 'l --- i '4 HF '
-f
:- lj
L
- t-t
-
-t
117-M
- 71' ' -'i- I~~~
1,
-- '------------
lt I rl
I TIr
.
l
rmit~
I
I
F
2
5
A
2
3
4
3
4
4P
0
Figure
33
5
H
t+~~~~~~~H~
6
7 8 9 1000
3
flT PI,
,I,
5
6
7
910i I
.
82
Experimental Performance for Figure 24
One Pole Message
First Order Phase-Locked Loop
'
9
I
IIIII
I Ilii
I I I I 1 11111 I I II I
!II
q
E
-
6
Filters given by
fEquations (138)
5,
t i 5t M!,,..._._
I
r
df =100
(139)
(140)
d
50
=
f
I
S
2
df = 25
I111
I I I I I I III IIII iII IIII
II I I
III,
I I I I Ii....
rr
1 11. 1
.
11111111
i
ii
1i i Ii
-11111lln
i
1
i I I I1M I
1 I
III I 111I
U
I I LLUUI-ILCY~~11
I I I II I
I I I I1
II iLt,11 lTIII 1I !11111111
Ir I I
m
!I t
1
I
1 1 11II
11I
I III'
10
9
8
7
'
|i
'W
l/
lu
1|
|1
s
6
5
4
3
2
i[1~~~~~~~~~~~~~~~~~~~~
L
I
1111111U 1 i" I"L
111111111
I
I r IIIII
i
III1I
I.-
- i1i -
i
I ·
W H HHN
i
I...·
1 T'-1
m-
.
-L· ""·
1ml E
1
I I II I
I I
.
L
I II
Ill!I
lil
1. I
Il 1.111-§
! 11Il
Itt
9
8
7
I I
4
3
I
I I I I I 1- I I I I I I II I II I I
,
I I
I i1
I
I I I
I I I
.
II IIIII I I!
II
2
1.1...111
3
I
II
II A
I!
I I
4
5
I
I
I I
I11
I _I I - - I - I-
6
I Il
I
I
I
i
I I
I .
I
I I
I I,
III
1
I I
I I I..-I I
2
9 10
A
3
I
I I I
4
! I
5
I
6
7
8
t l t..t..
I
-
I 1
I 111 lli
I ----
I I I I I I I
.
n, 4n
IU
I
II
I
.
g
. i. .. l
-
I
I III
dt
C
;r
f .7
U
*
4P
KN
Figure
o
34
_ ··, .
. . _
.
_ _ . _ ,.
.
.....
8
Q
1luft
Experimental Performance for Figure 24
83
Two Pole Message
Second Order Phase-Locked Loop
.,
Tl
IU
1,
II
.X .T .I I I I W T
I
I
I II I
.q
r
i
7,
1.
i
I-,
..r'
. .I r. r r X 2 rl | zTT
T
I
I I I I I
il
0.2 rad2
I*l lilIl
4 .1
L
I
i
l.
df = 100
--
%-J.
.......
!
h
I'
I I I
II I
IIII
I II
df
F1
I]LU
50 i
3
U
I
II
I .Ii
. I
. . I .. I
. .
.
.
1 1! . I I II
1 t
I I I
.
!C T[!! IF IIIII I I IIr r r1
. .. . . . I. . .. -.. .... I
II.-
t;P'jI 1z , I1''' I ll''
I . +- C' I I' . .I
f =
111 il 111II
''''
I I.......'''
5
I
I I I ii
11
10
9
8
7
-6
5
Filters given byE
4
Equations
(141) E
(142)
(143)
3
I
E
2
HIM
II
I HIII
I HHI
'U
11111
I 1A I I I I I III
I I II
II I
9
8
7
11
6
5
4
3
J
', L
Ic0
2
3
4
5
6
7 8 9 100
2
A
3
4/Z P
KN 0
Figure
-
35
4
'5
6
7
8 9 1000
J I I I II
2
3
4
5
6
7
8 9 14IK
Experimental Performance for Figure 24
84
Two Pole Message
Second Order Phase-Locked Loop
"n
pv
T
I
3
-- ii
ri
41
dff = 50
i
------
= 100
=::
....
df = 25
!
I I--4--h-l
d4-~--1
-
-Ij
IIII
4
---
IIII--
--
-c-+ -
i-tC-CCC-e-CCCHtti1-CCttCH-ti·-)-Ctl
1--~---h-i-d-A-1-
41-
-
I
IIIIIII-I
III
I
I
t I
I I I I I1
I
I
.
.
I
I
.
.
.
. . . . . . I-
.
.
- .
.
lI lI !
.
.
-effitiitttttttlitl
i
I- . 1. _ il. i I - . _mt
..i_. ill
-
- .. .
- 1.....-ifi.
.
.
.
.
. I .t.I .
. .
.
I
I -
.
.
.
-
-
to
Filters given by
9
"I
f,7
R·111
1 1! I
I
I
I
i l !1111!11
I.
I
II
.I
I I I I
I I
I I ..
AI
I I I I I III
II
(141)
(142)
(143)
Equations
T
f{
5
4
~--ft-J-~!-kI
_
1 111
I I I
t++I
I
I-t-i-
I!
I . I
. .
I.
1 I IILI
I I I II 1 1111111!1
_CU"'LI"1
I
lI
I . .
. . I . I. .
. .
I. . . .
.
i I i I i II i-CCCC+ttiCtCt-H-/ti-C-tCCH-i-e-i-ttC
I.I1I
. -
..
.
. .
. .
,-
-
ILC
U
9
8
W
I -I
d.
4l
W
'0
U
7
6
5
U:)
X
iY
I
4
t- -
-
-.
-
1- -
fi~
-- r- -7i
II
*+
I
I. I I I I
.
10
_
.I
i-I-I
I
1 1-11 -t9
I
2
I
.|
!;'I
I
3
,I
I
-A
I
'
--l I -
. I
6
.
.
.
.
I
.
II-F
-4
I
I
I
...
9
I l-Ctl.
e
-·
I Lo
I
A
i . . I I . . .
III I I I t
i
I
.
4KN
KN
Figure
-t
. I . I . I.
I .
36
I
. I .
. tI I -.
I I I . .
I
.
.+ .
I
-I-
I
.
.
.
8
.
.
.
.
1Ooo
I I
.
.m
~
.
.
.
. .
.
.
2
I
I
-"
I . . . . . I -
3
-.-
,-,-.
. - I
~l-tttt
I I
. . ,
I . I II I .
.
I I
I
. - .
-
.
.
-
7
- I
8
10
Vr
II
85
Experimental Performance for Figure 24
Five Pole Message
Second order phase-locked loop
___
QGL
in
V
A
I
I
7
1
II
I
I
.
.
I I 1 1-- r-111
:I. i l .
i I
'll
I
I. I I i i ilililill
I
I I I I I I I I I I I I m I I I I . . . . . . . I I I I. I I I I I
1 1 .- -t
l
i
I 111111111
I
I I I I i. I
.
I
I I . . . . 1.1
.
I I I I 1
.
H-
I
. . . i-. . .
'
I
.lll I
af
IW
9
8
7
1
I
I
I I I t-
-f
1 11
-$
t
1
,
'11
I I I I
I II
-
I1 1w
f'
1111
I-I+ I I I +I''''-I- Ifff1
I
.-
!
I
t-I ..-.
t-
I_I_
I
. . ..
I. .
I
1 1 1 1 111
I ...
. I I..
I
I I
I
= 25
. . I . . .... ...
t
6
D
4,
$.
The message generating filter is:
+ l
.E.
I
a
W
a
i b.
01
W
i~212
X
k
CU D
11 1
111i1!1
II
tlI
-HU--·- l-t i''--
I..1I
II 1
fIllIll
Ci
11 1
I
I
I
It-!
G a (S) =
I IH .
to
(s+l)(s2 +0.618s+l)(s 2 +1.618s+l)
9
8
7
6
5
4
riY
X
n
,q
3
2
-_-+
-
Ii
tL
-I0
I m I
WI
IiI
IIIIIiIIHIII
I I IIII
I.~~~~~
2
3
1fiH
4
5
6
II I
1 l1 11
III
l 1
7 8 9 100
2
3
4
5
6
7 8 91
6.18 P
=KN KN
O
Figure
---·------·-----
-·--·-~~~~~-------- 31---
IIIIIIIIIIIIIIIIIII
ll
II
II
111
1
I~ii11!I
.
.
37
~~~~ ~ ~ ~~~~~~~.
.._.
00
2
3
4
5
6
9
86
o
.I "'.-
.---
--.-
-
' --
.... . ..--
,
, - . - .:
- -.-. ...
k.
0
^1_1_ _CII__II_*C___U_)_(·III---··lli^C-C·
,.I
a o 0p
I
~4
...
_--~·~.
.......
···a11-1_^~·
. . .
. . .. .
I-
.
0~4'
.,
f
,
-
;
,
o'
i I -Ir
0W-Po
H 4 4.,
I:1 a) i
·
!~
-P
0)4 o0
o,4
, 6-- :
0)a); 4 art
i;
j:
; ·-·- · · :·--·---· t-· i-:
·
x
,--
'·L·'--t
··
----!
1'
: -- (--··i-i!
ii~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
o
E-1 4J-- U r~ E' X
m- 0610
r
:~~~~~~
,.~''`-;
ol-
TI,~
'~
4
-P
:a·m +1
E-r ~~~~~
o 4:4
P;
.
o
I4'~~~~~~~~~~~~~~~~~~~~~I
?0''"' t'"'",':-T
.I,.IF.,,z..:t.-
0
m Z C.
'.::L
.
·- · i
':
..._... ::';-'':.
'-', ',~.
;''.
-.
-:'.
'
i..r.i.;.. ....iiL
-,-I~~~~~~~~~~~~~~~~~~~~~~~~~~~I
F4rC
....
O.il~L.·;l·,
I.
1...l...f.
.~ _:·. I_·:_._·-__.I'·
i
;
i
:~~~.....
......
0 0 .: ,,~~~~~~.
.....
....
1·- -- I
.....
"':-'
.:.I~~~~~~~~~~~'
.....
:·-·
· :-·' '-:-·· "'·
'' ' ::-
4m.:....... .........
:
' : '· -
"'·
.
'..-::
. . .
'
~
~., 1~~
..
'.-
"''~"'-
-,,-~+-,-~.4.- $~.-~d...
:.
i
'7;
4---
L-.
.!...-
'-~'
";4.~"·I ;4=
-,',...........- ·'·; ,
:.
... ' ,
,:,~
;..!,..;
,; --·. I--;
L%'4f'..
.....
,
'......
:.'....
...'... .L .....
.....:............
-~' .",--·-'r='-"~"
,.-...4.k
;-* :'~ , ~ ~'-'
r .1- ] i-.-".-' *
i: q
i~~~~~~~~~~~~~~~~~~~~~~~~·.:i?L
;;:~L;/.......
;';,i~I.I".
t tl:.~t:,
-·
~:
f..''-.
'.. i;..-,
. :
.,~~~~~~~-',...
....
.. :.L.
~:.
. .^
_eii-r..l~
,
I:
i::
'~~~~~~~~~~~~~~~~~~~~~~~~--~''·
Ul
P44.
= ='4:
......
':- I~.~.
'
'
.
.
...
~
..................
.
.....;
..-....-.........:. ....;.... .,
'
'
:~~.i;
'..
i
i;-"''- .'',
iL .~t?.'L
"": .':-''.Z'~-'-':'..'. ?;~'..7.
~i..
r..
4~~.~7.....'.1i,'.'..
.,."
' · u~us,,r~r.':f
' ' '~
();
,.. ~.+.~...,
~~~~~~~~~~~~~~~"..
Z
>;~~~~~;
':' C)' -- l i
.,.....
..
-i~
.. :L'"~'/"'
T37:i
T:,,:~:i.-=-:..~:~_~
~~~~~~~~~~·;~~
~
.-....
~. ~~~~~~~~~~'""
....
......
:... . .....
. -..
:..·-;.
......
:...'.
·.......,'...."
~~~~--3" L~~~~~~~~~':::';:?-
....:?
'-ii!
.'~ . ,
:
.
. ,':
!:';:':~b
·
-:i
:.-;
.;......,
.~.
·.,:... .-..
,:!~---. .r...-t.-.Tz- ----;
.'....'.
.: '.'....r·........
~`..
...........
.:...
-. ..
H
:;.·7--.~.
~;-.:~':.,'
;-
' ....
·r.....'l.. .......
··;·. -,.-::--.'---,--.-
.
:
.-
.
''i;
.
.
.
I*
,
:" '
,
"';'
....
1
i~.'
'
.~.
.
4
·:.,.x L-i'
-~.
..-....
;': 'I
'=-:_~!..-: ~ '.I=.=.
-._
r-.'
0 o
ra'
m:!,.i
!i-
~l
.:'!~
......
iz
,~.';'; ,::"6'
.. 711
,~
LL
~~~~~
i::'l i E4
c,',
:::" :':....
i'
i~~~~~~~~~~~~~~
~~...., ~
= ~
='
'
'
:! .,. ......
:'
... ......-...........
; ': ~'=:~;
" -.-..:~';...;.:'
.. :.. ......
·~~~~~~~~~~~~~~
~ ~~:.
·
.'......:
.:
~
.
.
.
.
..
~~~~~~~~~~~~~~~~~~~:
·.
::,....
ir=· ? '~ :;.%,
':' :,,,,,
· - '--·
.~~:.,....:.;: .. .
'., ...
,.-.4~:' ..t- ~ * .. ~" ::-I",'.. '"'-'"',"
~' ;..,..~ !"F --,,.-..
'...."-"T'T'."~'.'-".
....
;
"*-,.
.
.
.
. ':.. .;:..
. ;''.-"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~:
... ,..~ ..
'_-:
' ~ -' ;::'L
l.4.l-z
-'
...
7-~~~..,. .
'="-: ......... .......
; ..;!-r..:!'~.. ~:..
"" ':....
...
i~~~~~~~~~~~~~~~~:i,,
~ ·- · CII)1--rl-·l
eU~.....
'r._' ....
..~ . . "
"",
''; .,.
'.-;.'%
"~
.....-
4
%--'.~''"
:' g--tl:.~-]~
·
C74
l·--
-r-r~~·
... '
..
: ..
.~s
·
'~~~,
~,.,.
,
.-.
.....
:
l ·ji4S
'
::-'-+V";
t2:i
:.:....}-,.t-~'~;.~J--~'
-- · -- ·-
['.
'4:·
·
.
'
.
-
,
.
.
..
.
,-
.
..
;'i:':'~';'::;~:..
'i:'- " .''.:
,
:...;:
,~;
,
.-
.
....
:.=w-p:-~
:.
,..,~
.
~:-
,-,.
1 1....,..
.---;4..:4
...
...-
..
,~.'.....'.,--.
''. -'- .: ..i...'::-;."F'~:.::;'";~'.':' ~,,,],'.~::"~';t.'-'::.:.:.;":
7==~i
:'..---.i
...
--=
!::'. t'.711i:",
":'~,.':'
'-,"3
C%4;.: , ~ · '.2:-.,-,,.:
.4,,:.:'.~.7,'.
J 1?.'-. ~'-....
00.:,~.,,''.;;
.....
, ,;~-,-
i'' ......
,_..~
......
+_..-'.2'
: '~
F:!L''=
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.......
.......
;'q
......
:
...
'"'"4..."
'-=i'":--';:;;
4 ......
=;,=~ir--'¢ ... =-, --: ,-.-.--.=,.,.r,,.: !~7i!-;!~,F-i? +~-;4~:'" ;i ''~=-: :-': .=--!, ' '-,:--,-'.... ':'--=
Q,
Cr:
0
0
C4
6
6
6
6~
a
-A
--------------------
-- ·---·11111----··---
a
v
87
CHAPTER
CONVENTIONAL DEMODULATOR
5.0
5
THEORETICAL ANALYSIS
INTRODUCTION
It is the stated purpose of this thesis to investigate
the performance of the optimum phase locked loop demodulator.
In the foregoing chapters we have displayed the optimum system performance in detail but we have left one question unanswered.
That question is:
"How well does the optimum
receiver perform?"
To answer this question we must have a
frame of reference.
We will use the performance of a con-
ventional demodulator as a reference for the performance of
the optimum phase locked loop demodulator.
Before we plunge
into the analysis of the conventional system a brief qualitative description of the system seems appropriate for the
sake of completeness.
An idealized conventional receiver is shown in figure
38.
The received signal is first passed through a rectangu-
lar unit gain filter H(s)
rier frequency o
centered on the transmitter car-
The signal is then heterodyned into the
intermediate frequency filter H2 (s), which is also rectangular
88
Both filters H (s) and H2 (s) are presumed
with unit gain.
to have bandwidths wide enough so that they create only a
negligible amount of distortion in the transmitted message.
The purpose of the filters is to reduce the out of band
noise thus keeping the signal to noise ratio at the input
Frequency demodu-
to the discriminator as high as possible.
lation takes place in the discriminator, which is a nonlinear device that responds only to variations in instantaWe model the device by saying that when-
neous frequency.
ever the input to the device is
x(t) = b(t) cos
[t
(144)
+ ¢(t)]
the output is
y(t) =
W(t)
d
(145)
[(t)]
rzc os W,t
I
t
S(t)
.=
/2
cos
(wot + df I
a(u)du]
W(t) = white Gaussian additive channel noise
Figure
38
Idealized Conventional F.M. Receiver
i
(146)
r
89
As it turns out in an actual system the discriminator
is moderately
sensitive
to amplitude
variations
so it is
necessary to include a limiter to clip off the amplitude
variations.
The final stage in our system is a filter H3 (s) which
is designed to eliminate noise outside the message a(t)
In many physical systems H3 (s) is merely an RC
band.
filter.
In our system H 3 (s) will be the optimum Wiener
filter.
Since we are ultimately going to compare the
phase locked loop with this conventional system, we want
this conventional system to be the best one possible so
that we don't derive false conclusions about the comparative
performance.
In this chapter we will theoretically establish the
performance of a conventional receiver.
In the next chapter
we will verify the validity of these results using simulation techniques.
5.1
CONVENTIONAL F.M. RECEIVER ANALYSIS
THEORETICAL
Consider the received signal r(t) that is shown in
figure
38,
t
r(t) = /2
cos
I f
[ct
+
df
J
a(u)du] + n(t).
(147)
90
The amplitude A2
and the carrier frequency are constants
and n(t) is a narrow band Gaussian noise whose power
a(t) is the message that we are trying
spectrum is N/2.
0
to decipher and for our purpose a(t) will be of the
Butterworth family.
The noise n(t) is the filtered ver-
sion of the white channel noise and is independent of the
message a(t).
We can write n(t) in its in-phase and
quadrature components as
n(t) =
-n c(t) cos wCt - f2ns(t)
(148)
sin wct
Hence (147) becomes,
t
r(t) = [J2
+
n( t)]
cos
c
[Wct
+ dffj
c
a(u)du]
- A
n (t)sin wct
s
c
-O
(149)
t
r(t) = R(t) cos [act + dfj a(u)du +
]
(150)
-00
where
- / n (t)
(151)
6 = tan
22P +
n
c
(t)
when the input carrier to noise power ratio is large, nt)
and n (t) are much smaller than /2 most of the time, then
n
(t)
t)
(152)
T'
91
Hence, most of the time
t
r(t)
R(t) cos
[w t + d
n
n
a(u)du +
(t)
]
(153)
-00
is the input to the discriminator.
The discriminator output,
as stated before is the instantaneous frequency of r(t)
;
A (t)
f(t) = df a(t) +
(154)
In the frequency domain the power spectrum of (154) is,
2
(w) = df S a(w)
'
(~)
=
fSa(w)
+ N(
+
(155)
M)
2o2P
(156)
A graphical description of how the noise enters into our
problem is shown in figure 39.
Equation (156) is the power spectrum of the output of
the discriminator.
filtered.
This is the spectrum that must be
If we consider the first and second order
Butterworth messages, the appropriate zero delay Wiener
filters can then be determined.
The spectra and the
associated filters are given in table 5-1.
7-
92
S
(w)
%(s)
..
_.
.
.
Constants
..
........................
.~~~~~~~~~~~~~~~~~~~....
A
A ==
a2
2
2
1 + a +
= df2
¥
A
A
2
w +1
s
21
2
+ ys + a=/
=
2Z: als + a21~
2/-2
+
]s + s+
4~+I 3
a~lS
Ls
+1
+Y
[2
+
2
+1
/~ 2:e+ 1
A -
N
0
y=
d
2
A
f
-a+ = 2y±
Y
a]
2v/2
½_
2y7
4V
A=
-
+ 1
+
-
P
N
0
Table 5-1
Post Discriminator Filters
The problem remains to compute the mean square
demodulation error,
2
(157)
where a(t) is the delivered message and a(t) is the estimate
of the message at the output of the Wiener filter.
Again,
93
N9(B
C'N.
2P
-
_
raw
I
| -
ok
15(t)
iF-
b~~~~~~~~~~~~~_
,
.
_
Discriminator output noise spectrum
39-d
LS _ )
f
OS/2
g5/2
Normalized noise spectrum (input)
39-c
2.
Noise component spectra (input)
39-b
Z
-+-31l
-i
CNHA rEL
NOISE
S PEcri RVM
CARIRtERI
sPEc rRum
P
P
3f.-
++(3-.
Two sided channel noise of spectral height
No/21n(t)
39-a
Figure
39
White Noise in Conventional Receiver
1,+:/7
f
94
Oa
f
=
J {Sa
(
) I1 - H 3 (W) 12 + N(w) IH3(w)12}
(158)
is the appropriate equation for the mean square error.
If
we proceed to apply (158) to the terms in equation (156) and
table 5-1, we will obtain a straight line performance curve.
Obviously this could not represent the true performance of a
conventional receiver because we know for low SNR the system
should become nonlinear and exhibit a threshold.
The problem
with the above development is that it does not take into account the system nonlinearly.
The source of our error is in
the approximations (152). There we assumed that the received
carrier power was much greater than the received noise power.
Rice2 1 conjectured that the discriminator output noise
spectrum was not just one term but two terms and is given by
N ()
2
= {4r [N
Here N+ and N
+ N]
2 N
N
+
2
}
(159)
are the expected number of times per
second that the discriminator input noise phase increases
and decreases by an odd multiple of
radians respectively.
Lawton2 2 applied Rice's ideas to the case of a zero
mean Gaussian modulating signal.
will be presented here.
Some of Lawton's results
For the Gaussian message,
95
N =
+
where
e-u
B IF
3 ivUe 1
p is the carrier
1 + 2au 2 du
(160)
to noise power ratios in the IF
bandwidth at the discriminator input (see figure 39-a)
P
P
._
2
P
2
P =N
N
22
IF+
IF
(161)
N o BiF
IF
BFI
2
and
a df 2 3[power in a(t)]
22
2BiF
(162)
IF
and
BIF is the IF bandwidth
expressed
in Hertz.
Carrying out the indicated operations in (160), we obtain,
/
BIF
(-_-)
/
N+ =
1
/2
dfe- /
f
4 T V'Wff
/
4/
1
+
/ df
3
)2
(163)
p
Since we are dealing with a random process we may assume
N
= N
and proceed to find an expression for (159). That
is
N
N ()
a
2 + 2
2P{2+m}
(164)
96
.
P .
/
NB
2 = (4P)V2w
dfe
B 32
/
IF
0o
+ 2'
3 BIF X
f N(
/
(165)
o
Note that (165) diminishes rapidly for large P/NoBIF
and thus yields our previous noise term N w2/2P. Now we
0
can calculate the mean square error for our system with a
degree of confidence.
Proceeding with (158)
for the
one pole Butterworth and using table 5-1 and (164) we
obtain,
1
2
af
1
2
2
~1~~
(2 y
3
~
1 + a +
t+
- 1)(y + 1) - (
+
¥(1
+e + 7 2
-
2
)a
+
- +
a2
(1,+ a + y)
-(l+a+)
(166)
where in this case
A
-
4P
N
O
and
7rde
4B
2
I /
BIF
A
I3
l/ df2 A
(167)
For the two pole Butterworth case
1
of
*
1
2/2 I5 +Y
(168)
I
Newton, Gould & Kaiser Integral Tables, Reference 12.
-r
97
where
[2al2,y+ a2y. ] +
a)
I3
b)
15=
2
+ 2al1 m
A
2
+
2[
1 2y
[(2y- a) 2 +
c)
+1
a1 = 2y+
d)
2
- 2y
=
+
e)
a 3 = 2y
+
f)
a4 = 2+
+ 2 /V y
g)
a5
h)
a6 =
i)
m
=
+ 1
+ 2y*
Y + 22
2 y + 2yi
+ a5 a6
=-Y
j)
m,
k)
m3 = (a5m2 - a 3 ml)/y
1)
m
m)
A5 = y7((a6 m 4 - a4m3 + m2)
a
= (a 5m3 -
3
a6
a 3m 2 )/y
+ 2a22y]I
-
(
4y
-
("
a2)
2m
- a2)IM3
4
}
98
and,
A '
'4'
V
P
(169)
N0
and
A
=
*2
Ade
V-
BIF/
BIF
/Af2/- ~4t~B
~I
32
B
B IF If
d
(170)
A
The performance curves associated with equations (166) and
(168) are plotted in the next section.
5.2
CONVENTIONAL RECEIVER THEORETICAL RESULTS AND DISCUSSION
Figure 40 shows the theoretical performance of the con-
ventional receiver when the first order Butterworth function
is the message.
These curves do exhibit a threshold and
except for location, the general shape seems to be consistent
with the theoretical curve-sets for the phase-locked loop
case.
Notice in the conventional case that in the linear
region, the curves conform exactly with the phase-locked
loop case.
Hence preliminary observations indicate that if
the phase-locked loop is to perform better, then it must
give improved threshold
occur at lower SNR).
occurrence (i.e. threshold should
These conventional receiver curves
were obtained by plotting equation (166).
Figure 41 is the theoretical conventional receiver
performance for the second order Butterworth case.
For
99
this case the performance in the linear region is more
than an order of magnitude better than the one pole case.
Observe, however, that the threshold occurrence has degraded from the one pole case.
This is an important dif-
ference from the phase-locked loop case where the threshold
performance improved for the higher order system.
These
second order curves were obtained by plotting equation (168).
In the next chapter we simulate the conventional system.
In doing so, we are required to use an intermediate frequency
filter prior to the discriminator.
It was found that the
best intermediate filter was one that had a band width of
1.3.df K.
Now in the foregoing performance analysis we have
assumed an ideal intermediate filter.
tion situation we have no such device.
In the actual simulaSince we are using
a low pass filter of band width 1.3 .df-K in the simulation,
then for our analytical analysis we have used the noise
equivalent band width of that low pass filter.
if df = 50 radians per volt, then BF
For example,
= 32.5 Hertz.
A short
discussion of equivalent band width is included in the
appendix.
Since both the first and second order Butterworth
messages have the same 3 db point, we have assumed that
the same intermediate filter should serve both systems
equally as well.
The validity of this assumption has not
been tested.
The next chapter will describe the simulation of the
conventional receiver.
Theoretical Performance for Figure 38
100
One Pole Message
Conventional
Receiver
---Ii
--
Io
I
II
I
I
For post discriminator!
filter see Table 5-1
1.3 df hertz
equiv.BF
2
t.-.- .
--9
--t
-
--l--t-I-ti
--I -
ot-
.
i-I---~-.--I---I-
-tl- tt
T -1-i
-t-H-H±-I-H-Hi-
-+i
i
! i
I i 0 1 ; I 11 1-H11 1 2 I 1 1 1 -t--
I 1 I 1 I 1I
' I I 1 1 1 I 111 F F-+- -i-i I I 1!I 1 11 1 1
i i ! ;ii 0i i I i ! i i! t i--i-ti--t -ii i .-H
i -i
t - i--i
.
.
. . .
.' I'II .
I It
-
-
-UI
I II I
r I
1 1 1 1 1 1 1 11 1!1 11 11 1 1 -t-W44-444
t--t!t ii i! i MH-t-H-t-I-t--I
i v!
vII'
- -ii iF-
I
I
-
I I I I I
1 t I lI,
I0
9
8
7
,,,,,,,,.,.,,...
6
5
Plot of Equation (166)
100~
df
I
T1T
rrlrrllr
rrmTI
Il~rl
I-r-lr
-
a
df
= 5o
LLLY-L-YLYLY
j
S~if-t~m
rl
. . .
'47I.....1mzm11
.
. . . .
,1. .
.
.
.
.
. . '. ·.
.
.
.
.
.
.
·
.
..- . I L .
'
'
''
.·
[6
U
0:
11
7
411
w
i 15
i,
I 11
i.I
w
1..9
41
j.
I"
1,
df = 25 i
-"+--I-
'
'
'
'I -
'
'
'
C
. - . . . . . . . . - .-! '
'. . .' . .' . L.
' -
.-
1. .
.
.
I
I .
.
. -
I
4
3
L
2
~-t-1 1 I I I I1 1 1
--I
· ·r
AJ
1
I 1 I -H
I I I II
1
i
-+--C-H-C-4 1 ;H-H -H FF-4- -4-1
- -C
1 +4
I
2
3
¢
I
I
I
f
5
I
I
I I!
I
6
J
7
J
a
8
~
a
9
t
!
100
I
-t--
. .I
I I I I I ; i wi ! ; ii i
I i i f i i Si
I I ; 11: 11-f+F-+-f- 1 1 z 0 1
.........................
I
.
.
.
I .
=
:i i i iii
1II0!I ILj
I !! I II
I I . . .
-
I...
2
A
i i
i!
4P
KN
0
40
4
!1
IIi!i i i
I . I I . I I i . -
3
Figure
. .' . .I''- I .
5
--
1 1 1I1
I1
I-.
I
I I
I
.
I. J-I.
Ir
! 1l11
1 I11 i1!ttt _l,
-
6
9
1000
.
; .
I .
I . I. .I.I CC,
I I
1I I Ci, 1
! C, -
. I . . r I II
III.
lI I I
i l!
I I I I . I I I I.
s
5
I
II
;-
6
I I
78
,II I
.
I . I .
!P 10K
Theoretical Performance for Figure 38
101
Two Pole Message
, F-KX
Conventional Receiver
'I
I
-
8
I .
I .
7 11-1111-1, . i I :I I I I ,
~
~ ~ ~
i-~~~~
~
I I ;1
~ ~~~~~~~
,
i
1 ,
1
i
r
- - -.-'
-CC-I--I-C~cC~--
//
·I! .rL44
' ',n -- P ' grw,'
1
I IULUU·
tCe
Ik.
II i
:Plot of Equation
I I I I
II
I .I
T
fl
1
II1
1111 1 1 1 1 I I
!
I
-
1
?
-
_ I
Il
i
8
T
I -
.
I I I .
I-~
I I
.
I
I-
1
(168)
I I
. . I I I. . .t
I II
I
F'
I
i
1/ 1 I !f i t
I 1I
I
-r
11:1
111111 1 1 1
iK
ID
9
8
7
II
42
21
2
-i--i
-?t+-~--iff-t~-tt-Hi+--t-1·-tifftf~t-StttCX--tttt--'
' ' ' ' ''·''·'-l
--
LLL
LLL
ULU-UI-t++~-fil-lttt~ttl~~
. . . . .m~t
.
io
Ir- -
I-~-+I
I
.. . .. -I-. _. I I I I.
.
.-
. ,
For post discriminator
filter see Table 5-1
9
8
7
6
_-
-i
-11
!
5
equiv. BF
= 1.3 df hertz
4
2
I I
I
-tI-, - -1~
]1
['tr244i
-
HI-
Z-
-
. . .
IK. . I . CC.
. . . ..
.
. .
I
-1 11 11 -
I
. . -. .
. I ..
4
5
--
--
II--
C--C-C: -
CCIU~-~i
. I . . . . I . .
I . .I I
-
III- I
-- t- - - t C
41
10
2
3
4
5
6
7 8 9 000
2
A_
A -
3
4 V24/P
P
KN
O
Figure 41
5
7
8
91000
23
4
5
6
7 8 9 1OK
'T
102
6
CHAPTER
SIMULATION ANALYSIS OF CONVENTIONAL DEMODULATOR
6.0
INTRODUCTION
In this chapter we will apply the technique of digital
simulation to the performance analysis of the conventional
demodulator.
In general the digital generation of the mes-
sage and noise will be the same as in chapter four.
The
filter modeling techniques that were discussed in chapter
four also apply.
Since the performance of this system will
be used as the bases of comparison for the optimum system,
then we shall use the same Butterworth family of messages
Act
in an effort to establish a meaningful comparison.
6.1
CONVENTIONAL F.M. DIGITAL MODEL
Consider the conventional receiver shown in figure 38
of chapter 5.
The input to the limiter-discriminator com-
bination can be written as,
r(t) =
2
cos
[ct
+
c~~~~~~~~11
(t)] + n(t)
(171)
103
t
where
(t) = df{ a(u) du
(172)
and n(t) is narrow-band, zero mean, white Gaussian noise
across the intermediate filter bandwidth.
As before n(t)
can be written as
n(t) =
(ct sin wCt],
- ns(t)
[nc(t) cos
*
and n(t) is independent
(171) and introducing
a(t).
By expanding
(173)
(t) + /
cos
r(t) = [l
of the message
(173)
n(t)]cos
Wct - [
+
sin
(t)
2 n (t)]sin wct.
(174)
Define,
Xc(t) =
2P cos
(t) + J2 n
(t)
(175)
=
2'P sin
(t) + V2n
(t)
(176)
Xs(t)
By standard trigonometric manipulations,
r(t) = /Xc
2
*
I
+ Xs
(t+2(t)
X s (t)
cos [ t + tan - 1 Xc (t)
Reference 17, Chapter 8.
(177)
104
Notice that the magnitude of (177) is not a constant
but some random function of time.
We assume that the limiter
Now the output of the dis-
will take care of that problem.
criminator, as we stated before, is the instantaneous value
In this case the phase is
of the phase of the input signal.
explicit
in (177):
0(t) = tan
-1 Xs(t)
ct)
s
1
(178)
[c (t
and the instantaneous value of
(t) is merely given by its
derivative,
e~t
d
-l
X (t)
(t) = - tan
[ X(t)
(179)
Carrying out the derivative in (179) yields,
Xc (t) X s(t) 0(t) =
c
X
2
C
(t)
+ X
s(t) Xc t)
(180)
2
S
(t)
and we can rewrite (175) and (176) as
t
Xc ( t) = cos [df
c
f
n (t)
J a(u)
du] +
t
Xs(t) = sin
[df
J
a(u) du] +
c
-
(181)
JD~~~~~Y
,·v
n (t)
5
(182)
_T
105
where the division by
2P to obtain (181) and (182) has
no effect on (180). As before nc (t)
and n (t) are also
s
independent zero mean Gaussian random processes with the
same spectral density as n(t), namely,
N
Sn()
= Sn s ()
0
2
==Snc()
(183)
As a check on our result, if we let nc (t) and n(t)
s
be zero in (180) then we obtain
0(t)
= dfa(t)
,
(184)
I0
n(t)
n(t)-0
which is clearly the desired discriminator output.
The final stage in the conventional receiver is the
optimum post discriminator filter.
We will derive the
optimum filter later because at this point we want to
interpret equations (180), (181), and (182) as our conventional receiver.
equation
(181) and
The block diagram representation of
(182) is shown in figure 42.
Note here that the quadrature components Xs(t) and
Xc(t) have contained in them additive, narrow band, white
noise.
If the additive noise was not band limited the
large amount of noise power would obscure the message and
the discriminator would not operate satisfactorily.
Keep
-T
106
V
zI'
0
SIN
,
L
=
7
s
^
/,
\
A..
r
x 'f)
Figure
42
Interpretation of equations (181) and (182)
this fact in mind for it will necessitate the use of another
filter in our final digital model.
Equation (180) indicates that we should take the functions
Xs
(t) and operate on them as shown in figure 43.
Figure
43
Discriminator Model
T
107
The "/S"
term in figure 42 and the "S" term in figure
43 are frequency domain representations for the mathematical
operations of integration and differentiation respectively.
Digital models of these operators are also given in the
appendix.
Now we want to realize the digital equivalent of our
entire system using figures 42 and 43 as subsystems.
In
chapter 5 when we designed the digital model for the additive noise term, we made certain, by choosing an appropriate
"T", that it appeared as wide band, white noise well past the
message bandwidth.
If we take that wide band, white noise
and use it in figure 42 our system would not work.
we must, in some way, filter the noise.
Therefore
This new filter will
be placed as shown in figure 44, between the two subsystems
of figures
42 and 43.
[777
tS (-t)
1)INI
...
1_
Figure
Inclusion
44
of the I.F. Filter
108
We will call this filter the "I.F." filter for it
serves the same purpose as the intermediate filter in
the ideal conventional system of figure 38, chapter 5.
will be a
nity gain low pass filter in our case because
all our signals in this model are at base band.
width,
It
BIF, of this filter is clearly a dominant
the performance of the system.
The bandfactor
in
If we make BIF too wide,
excessive noise is permitted to enter the discriminator.
If we make BF
torted.
too narrow then the message will be dis-
Therefore we must make a compromise between the
two possibilities.
find that BF
A reasonable compromise would be to
that gives the best system performance.
There are two alternatives for determining the optimum
BIF.
We can analytically derive it or we may experimenTo analytically produce the optimum
tally determine it.
BIF would be a difficult task.
This is true because we
would have to find the spectrum of
t
cos
[df
J
(185)
a(u) du].
--OO
We do know the spectrum of a(t) but we do not know the
spectrum of (185). Middleton 2 3 presents a method by which
the spectrum of (185) may be determined.
Once we have the
spectrum one could proceed to analytically find the optimum low pass filter.
ILL
109
(i)
(C.)
Figure
45
Conventional F.M. System-Digital
Model
110
We will use the other alternative for finding BIF
optimum.
Therefore the system will be assembled with a
variable bandwidth I.F. low pass filter and the BIF that
gives the best performance will be chosen as BIF optimum.
The final digital model for our conventional system is
given in figure 45.
The next section deals with the derivation of the
optimum post discriminator filters, with the determination
of BIF and with the simulation results for the Butterworth
family of messages.
6.2
CONVENTIONAL F.M. SIMULATION
To calculate the optimum Wiener filter for the output
of the discriminator we must know the power density spectrum
of the output.
In general equation (180) represents the
output in the time domain.
One would have a difficult time
finding the spectrum of (180) so let us explore another possibility.
Suppose we consider the weak noise case when the
signal to noise ratio at the output of the intermediate
filter is high.
Then we can write
t
r(t) = A2
cos
[ct
+ df
a(u) du] + n(t)
-0s
(186)
T
1ll
where
t
n(t) = /2 n(t)
cos
[t
+ dff a(u)du]
t
+ /2 ns(t) sin [ct
+ df{ a(u)du]
-co
(187)
Observe here that nc (t) and n (t) are two low pass processes and that they multiply onto a varying frequency
sinusoid.
Once again we can show
Nc,
N
(W) = 2
ied
theCrSuls
Combnii
If
+(188)
< BIF
Combining these results yields
ci(t)
df
r(t) = R(t) cos
a(u)du + tan
t +
. f _c
A
t
/~2+ /2 n(t)
(189)
As in chapter 5, page
nator
the output of the discrimi-
is
n(t)
aWN (t)
=
dfa(t)
+
n
WN~~~~~~~~~/P
n s (t) I <<
/~
(190)
T-
112
The problem is now to filter 0WN(t) optimally where
d2 '2 nsin
eWN()
= d'
·
Ki2n
1 +
(W) n2
No
+X
Here we have assumed that n(t)
(191)
is independent of a(t).
Now that we have the spectrums we can proceed with
the usual Wiener filtering problem.
The results for the
Butterworth family are given in table 6-1.
Message
Spectrum
Post Discriminator
Filter
a2
2
2 +
(1
(1
(+
Constants
1
a 2 = d2
f
A
y
+ 1
=
a+2
a
+ y) (s2+ (5~~~4
s + a)
4
A
=A
ajS
4+
1
(s+
0
A
a1 = .2
+ a2
)(s2 + *s +
N
df2
y
2/
2a
)
y
2 =ya.A
4..+i
P
N
0
Table 6-1
2/:
-
Y-
+ 1
113
The next problem is to find BIF optimum.
A transfer
function for the low pass IF filter is
R2dfK
H(s)
(192)
s + R2dfK
The pole of the filter is set at R2dfK because we know that
the band width of the input spectrum is at least d
times
f
the bandwidth of the message spectrum K (we will always use
K = 1).
An intuitive feeling for this point can be obtained
by observing equation (186) above.
Simulations were run for various R2 in a system using
the first order Butterworth message and a df = 50.
The
performance results are given in figures 46 and 47.
Figure
46 shows the actual performance curves and figure 47 illustrates more clearly how the performance changes as a function
of the I.F. bandwidth.
Figure 46 depicts an interesting phenomenon.
Notice
that the linear region of the curves that exhibit a low
threshold, remains slightly below the expected linear performance.
However, the curves that have high threshold
points do reach the expected linear performance.
It is
believed that this phenomenon is caused by distortion in
the I.F. filter.
For large values of R2 , BIF is wide and
the linear region performance is high.
However, we are
allowing excessive noise through the filter and this causes
_
_I___
IIY·IPIII^II_
114
the threshold to occur early.
For small values of R2 , BIF
is narrow enough to provide for a good threshold but the
rounded corners of the low pass I.F. filter frequency
response is clipping off a little of the message spectrum.
This droop in the linear region performance will be herein
considered as a minor problem and hence the main criteria
for the BIF optimum will be a low threshold.
Figure 46
shows that the best threshold occurs when the pole of the
I.F. filter is at about 1.3-df K.
This value of R2 was used
throughout the following simulations.
Using the filters shown in table 6-1, figure 45 was
hence fully mechanized and the performance results are
given in figures 48 and 49.
6.3
CONCLUSIONS
The simulation results for the first and second order
receivers are given in figures 48 and 49 respectively.
About a 3 db loss in threshold occurred while going from
the first order case to the second order case.
This is
in agreement with the theoretical results in chapter 5.
For both messages the over-all performance curves
remain slightly below that of the predicted theoretical
curves.
The magnitude of the difference is about 0.2 db.
This "droop" in performance was predicted on the basis
of what we observed in figure 46
T
115
Further examination of these curves will be made in
the next chapter.
-w
Experimental Determination of Optimum IF Filter
116
One Pole Message
See Table
6.1 for detaill
d
Expected value line
for linear operation'
2
25k
.
1. 3
=50
~~~~~~~~~~~~~~~~
-
..
5....0
0
4P
KN
0
Figure 46
5
T
117
..... - ...1... ... I. I
.
. :. - . ....- -
.-. - .. .- -. .: 1 1. .
. .- . -'-- "..
1 . .. . I -
. ... I - .I' . :j I.11
I
. .
.Experimental Determination of Optimum I.F. Filter
for CF.M
..
,.
Receiver
,.
.-.
I
;1·
Threshold
.i~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~-10
'I~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~I
;;:~~~~~~~~~~~~~~~~~~~~~
........
....~~~~~~~~~~~,....
,'
::..,':
....
'
'
I~~~~~~~~~~~~...---
-
-
4
4
.
I
i
Ii
F
,
,
0;·,
.--
- --
.
-..
.
.
.
..--
4 :--A
-
;
;
:
.
'i. .
:,'-.
'
''i'-.4 .
~~~~~~~~~~~~Multiplying
Factor
-
.----
I .
4--,70
[! : ... ~~ ... ;
i
proportional to bandwidth of I.F
filter.
I~
I
II
"-...'......
,~~
~~~~~~~~~~~~~~~~~~~~~~.
.....:........:.......-................--..
7
- - I
.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.
.7
I . .L
47:.,Z
.
.
.
.
.
.
.
.
.......
. .
"-
.
_
I
-
---
118
Experimental Performance for Figure 45
One Pole Message
I
Conventional
r
6
rIIIi
- I
I
I
- I I
e I±aule
-
.
II
ourdaeails-
-1
I
7
I
I
.I .i I I. !
II I I t
- C-CJ-C-UJr-~~-C~
I tt
~
t I t
I
.
. I . . I
i
I I
.
.
I
I.
'I 'I
. . I .
.
'..
I I I
.II
''
.
.
I ...I 1II
I
. .I I . .
1 1
illl I
I I
I
. I.
I I
. . . .
I I
..
1 111
I
H(s) IF
I .
S
1.3 df
+ 1.3 d
1
TfI
q
7
I
I I I I
I I I i i i II
i I i
i
i i. .
iiiii i i i
ii ii
i
i
i
6
F
If = 100
2
-d = 50
I
ILLLI I 1--
F
df
,I
= 25
1o
9
8
7
6
I
5
4
+-
3
,!
~~ii~~iiH
10
#d
I
7
8
I II
100
3
2
' I~fliff-iffifI
4
5
6
9
5
100
4P
A
-
KN
O
Figure
....
.H
..
48
-
_I~··
I--
·
1I_
_X
_I
6
7
8
9
1'K
Experimental Performance for Figure 45
119
Two Pole Message
Conventional Receiver
-
-
Io
I
I
I
HI-TH'-!H
'
ti' 1l 1I !
IIi
ili
df =_ IUU
Xl
I
--df
.
IH
I Il I
--...... .............
|._21iS1_r....
foeal
C=. See Table 6-1 for detail
4
i I
--T-T--
I-
4-4=1
3
ff =
Fn
H
1.3 df
(s)
51
1I
s + 1.3 df
IF
I II IL_ -J I- l_
25
df
I
I .. 1
. : i 1 1 1.
~ H++9-HIt
sf·---
. . . . . .1 1i I
-+-H--+----~+-+
|
---
I I
-tI I
+I-- +
I I I I
.t
C I
.
.
I
.
.
.++ I I
I
I I . I
I
. . . .
.
-I-
+
-11111111.
I
. . . .
.
.
.
,
v
.
I Illllilll
.
.
.
.
I
.
I . . .
.
-
.
-I----1
++l
c-r.
I . .
I
. . .
I
.
. .
At
6
7
6
t
z'
2
I
I
I I. .t
r I . r . . .
.| Ir
t.
I
.- --..-..
. .
.
I .
..
- .-- .- . . -
. . . . . . . . . . . .. .... . I . . . . .. ... . I . . .
g
I
7
i
6
;
:1
I
I
Ii
e
_
10
I I I --
I
I
I'
-I I I I
1 1
I .tH+. ,--
t
i i
'-.II'
It
---
--
I.
l I
.
I
I I
,
....................................................................
_
_
3
5
_
6
7
9
]00
.
.
.
.
. . . .
. .
. . . I . .
.
. . . . .
.
.
.
.
I . . . . . . . .
.
.
. I
I . . . I . .
.
5
I
, .... , -. .. . 1. I... 1.F-++ t~tt~~t+
- I , - .1 11 . ,......
~~l
11
,-..................
- - , ....
. I -.............
.- 2
4
9
3
A
KN
Figure
SO
49
5
6
e
J toO
2
3
4
.
.
.
.
. .
I
. . ., .,
7
9
10
120
CHAPTER
7
OBSERVATIONS AND CONCLUSIONS
7.0
INTRODUCTION
In this chapter we will compile the results of the
foregoing six chapters.
It will be done in three sections.
The first section will investigate all of the ramifications
of the optimum phase-locked loop receiver performanceboth the simulated and the theoretical cases.
The second
section will examine the analytical and simulation results
'
for the conventional receiver.
3
4
~will
Finally the third section
utilize the observations made on the first and second
sections to make reasonable comparisons between the optimum
phase-locked loop receiver and the conventional receiver!
7.1
PHASE LOCKED LOOP OBSERVATIONS
In this section we will evaluate solely the performance
of the phase-locked loop.
There were three methods that we
used to investigate the loop.
One method was the completely
linear analysis, another was the quasi-linear analysis and
the last was the simulation analysis.
Figure 50 depicts
121
the relationship of the three methods for the one and two
pole cases.
For simplicity reasons only the df = 50
curves are shown here.
The quasi-linear analysis gives
a fair approximation to the threshold but the simulation
analysis spells it out in detail.
For the one pole case
the quasi-linear technique missed the actual threshold
by about 3 db.
In the two pole case the error was about
3.5 db.
The reason for that large error is not quite
clear.
For both the one and two pole cases the quasi-
linear technique indicated threshold at a mean square
2
phase error of about one radian . We know from our simulation results that this is not usually true.
In the
simulations we saw that the mean square phase error at
threshold varied somewhat but that it was usually less
than 0.5 radians square.
Now let us consider the two pole message performance
versus the one pole message performance.
The quasi-linear
analysis predicted that the threshold would improve by
1.5 db for the two pole case.
In the simulation a 1.0 db
improvement was realized thus verifying the prediction of
the quasi-linear result.
The quasi-linear technique cannot display the performance below threshold so let us now concentrate on the
experimental curves.
Observe that, for the one pole case
operating below threshold, the performance falls off at
122
a rate of about 4 db per octave.
However, for the two
pole case, the performance drops off at 18 db per octave.
As a result, the performance curves for the two cases
intersect at an SNR of 150 hertz
.
This fact indicates
that there is a range of input SNR where the first order
receiver gives improved performance over the second order
receiver.
We can conclude that if our physical system
had a low SNR constraint, then, if we had a choice, we
would choose to build a first order receiver and transmit
a first order message.
Let us now consider the fifth order message simulation
of chapter four.
In that case a fifth order Butterworth
message was used in a system designed optimally for a
second order message.
Figure 51 compares the results of
that simulation with the results for the same system with
a second order message input.
Interestingly enough, the
former case gave improved performance over the latter case.
The reason is clear.
The higher order Butterworth message
has a compressed bandwidth; i.e., more of its power in toward
the spectral origin.
Thus, less spectral distortion was
realized with the fifth order message than with the second
order message.
Also, since the system was perturbed by the
same noise in both cases, the M.S.E. due only to the noise
would not change.
The overall effect due to less distortion
and the same noise interference tends to give a smaller mean
123
square error.
If these observations are typical for a
Butterworth message, then we could say that any Butterworth message of order N will give better performance
than another Butterworth message of order M < N when
they are both considered as inputs to the receiver designed especially for the Mth order message.
Note that
this is an empirical observation and that it has not
been proven.
7.2
CONVENTIONAL RECEIVER OBSERVATIONS
In this section we will consider only the conventional
receiver.
For the conventional receiver we applied three
analysis methods similar to those for the phase-locked loop.
The linear analysis of chapter three holds for this case as
well as for the phase-locked loop because we are considering
the same message spectra and the same additive channel noise.
For the non-linear theoretical analysis, we used the methods
developed by Rice (henceforth called the "Rician" method).
The simulation analysis was again used to evaluate the system
above, at, and below threshold.
Figure 52 shows the combined
conventional F.M. results for both the one and the two pole
cases.
Once again only the d f = 50 curves are shown for
simplicity reasons.
First let us consider the theoretical versus the simulated results.
The Rician method provided amazingly accurate
124
predictions about the conventional receiver performance.
The graph shows essentially no measurable difference in
the threshold.
Also below threshold the rate at which
the performance decays is almost identical for the theoretical and the corresponding simulated cases.
However,
notice in figure 52 that the Rician model did not account
for everything.
In general the simulated curves lie a
little below the theoretical curves.
difference is clear.
is not actually true.
The reason for this
Rice's model made an assumption that
He assumed that the intermediate
frequency filter was an "ideal" filter that was wide band
enough so as not to cause any distortion in the message.
Hence he proceeded to derive his model only on the basis
of the noise that was passed by the intermediate filter.
In our system not only do we pass noise through the I.F.
filter but we also create distortion that manifests itself
as reduced receiver performance.
It is that extra distor-
tion term that is not included in the theoretical analysis
that is the underlying cause for the difference between the
corresponding curves.
Now let us consider the two pole message performance
versus the one pole message performance.
The theoretical
curves predict that there should be a 2 db loss in threshold
for the two pole case and the simulations confirm this fact.
Also, below threshold, the rate of decay for the two pole
125
case is 11 db/octave and the one pole case decays at
about 5 db/octave.
An intersection of the curves occurs
at about an SNR of 450 hertz-l .
7.3
PHASE-LOCKED LOOP vs. CONVENTIONAL F.M. RECEIVER
Finally we have come to the point where we can make
a valid judgement on the performance of our optimum
receiver.
Observe that in figure 53 we have replotted
the simulation curves for df = 50.
Included are both the
phase-locked loop simulations and the conventional receiver
simulations.
Some points are immediately clear.
In both
the one and two pole cases the phase-locked loop gave superior threshold occurrence.
there was a 3 db improvement.
was a 6 db improvement.
things occurred.
For the one pole message
For the two pole case there
The 6 db figure arose because two
When we went from the one pole case to
the two pole case, the phase-locked loop model improved its
threshold and the conventional model relaxed its threshold.
Notice in the linear regions that the conventional
receiver curve lies slightly below the phase locked loop
curve.
Once again, the I.F. filter problem is showing
itself.
In the phase-locked loop we do not generally
have an IF filter so we are not pestered with that difficulty.
126
In conclusion we can say that the performance of the
realizable, zero-delay, phase-locked loop receiver is
superior to that of the best conventional F.M. receiver.
Phase-Locked Loop Results
127
for Three Methods of Analysis
nnp ana Twn Pl
'10
. .7!. . R
9
8
I i R- I 1R11
1171..i
- R. .R. P R
.....
I i i i -i i f
.
rq
,,m
-LL-J
Curves taken from
7
--figures:
6
18,
33
20,
35
z
5
.
21, 22
I
! I . I I - i
-
4
,
. I I I -
_____
. -:P
I
.
I
.
1? -
d -F = 50 _.
-
3
I
-
I1+wo1t
WI'
ir.1I-
Doled
z
M
F-i--
2
9
8
Linear
7
6
4+H--
5
'
~
4
....
'
...
Quasi
linear
analysis
Simulation
a
ly i
2~~~~~~~~~~~~~iiIIH
fill~~~~~~~~~~~
j
I0~~~~~~~~~~~~~~A
II
&I!t
IIII
H
IIwwq1+
!!!EI±I
Ih
..
Linear
I I1
4-
analysis
-
I
I
.
I
I
-
T
___M
Quasi
1111
- -'-'- ' ' ' ' ' ' ''''''''''''''''"''
4
-I -
-1 T--
-- I
'
I I~
-
.
.
.
.
I
I II 1J r
I-
'l
. r
If
I I I IfI II II
I I IIII
!Il
I If IIIIIII
I
I I V F -T r rt
,
I I I
H
I
ll
I I I I11,
1I I I I I
I I I I I,
Y] I I I I I lI A ' II
II. I
II
I . I
I
I
r
. I .
.9
Simulation
linear
analysis
. . . . . . . ..
I - 1
r
--
m essage
'M'~I~+lflhone
pole I
analysis
. . . .
3
I-~~~~~~~~~~~~~LJ
I
I
I I
I
I
I
I
I
2
I
I I I I I I I II I I I I
I I . I I
. ! I II
.
I I
- I
I
I
I
I
t
-
hit-.L.
G
,---,..
z O~J
.
I
4
b
^ I_.__
b 789100
2
3
A
SNR
Figure
50
4
5
6
7 8 9 1on
-
I If
- -
-
- -
-
2
I I
I I
-1
I.- - 11
3
-1
+_ 11111
I
4
6
1
i IIII 1 Ill
7
9 II
128
Experimental Comparison of Two
and Five Order Messages into
Second_ Order
Loon
_ - _- __ _,
_
. 712
Y
11IRUiI
tt
7
-! L__1'
SJ
-
{
I i T
i111l11l
I
dt -dA 14"
:
I-- I I I I I I I I I
X
ift
y
---- 4
of five pole message into
5
fifth order loop
3
I
.
I
I I
II I.
. .
..
I
7
I
11
10
9
= 25
. . . II .
I . I . . . . .. I... ...
11 111 1 I I
I 111 1t
II II 11
..
.
.. I.
.
F
I
I
!
. . I
I
I
r
df = 25
C
C)
'-'''-''r9CC
df
I
I
- -1
6
4
I1
z
I :
_
|
_rTN
1-1111
111
i
, ,
111111
I
25
f H- f
P
t
a
I
II
4
A
3
I
I
I
I
Five pole message
into second
order loop
.............
...
..
..
..
.
....
.....
...
..
:--
.
Two pole message
into second order loop!
I
I
e
i
-1- I _ II
I1 III II.I II
1! II,III I U
UA
I IIII
I II 11
IirIII
J
II
I A. I II III 1
I I
II~ III
11"II
II -IJ
l l l 111
t-
9
I6
'sV
8
7
H
I I
14
IJ
Curves taken from
Figures:
3
23, 37 & 351
2
-F.________________________
rTr
I
I.
__________________________________
*-~~~~~~~~~~~~~~
______________________________________________________________
l' '+,
....
1.0
3
5
6
9
IO0
2
6
A
SNR
Figure 51
7
8
9 Jo0
3
4
5
6
7
8 91
Conventional F.M. Results
129
for Three Methods of Analysis
10
II
I
I
I t _ T-7-
Analytical Results
7
o Experimental
Results
o-
i
I 1
Tdr
Figures:
1
= 50 1
1,
+_,,+, { , , .
I
Curves taken from
4
,
21, 22
__
40,441
3
-H
- ---
48 , 49
1_· ,
-+--!
-,-
, ,
Linear
analysis
,,,,,
2
. . .. . . .
_
_
A '--K-
10 _
9
8
ii
-h
=
-
4-f-
¾
ii
I 'I
1 1 1 I1j]1111
LL
--
Simulation
t
ILII
H
.-.i A
1 V,
7
6
5
4
3
-i
10
9
8
7
Linear
1
1
WgX
WX
cianRician
lAnalysis
vi
Simulation
X1I
6
5
-t~-4- 'j-t
Rician
W
-
WW-X W~t
l
Analysi
4
3
1
4--
-i f
-T-
t
4
-
---L--C--i-- ::i~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-
!
i
F
II
C.:
5
6
7
8
910
5
A
SNR
Figure
52
7
8
9 10
2
5
6
7
8 910
[Figure 53. Comparative Simulation Results: Phase-Locked Loop Receiver vs. Conventional Receiver, d_f = 50, plotted against Λ (SNR). PLL and CFM curves are shown for the two pole message and for the one pole message.]
APPENDIX
This appendix will treat those subjects referenced
by the text of the thesis.
In addition it will explore
some additional inroads for applying Booton's quasi-linear
analysis technique to the receivers of chapter three.
i) Proof of equation (29)

K_{n^{(2)}}(\tau) = \frac{N_0}{2}\,u_0(\tau)    (29)

Consider equation (28),

n^{(2)}(t) = n_1(t)\,\sin[x(t)] + n_2(t)\,\cos[x(t)].    (28)

If \omega_n is large compared to the bandwidth of \sin[x(t)] and \cos[x(t)], we can make the approximation

K_{n_1}(\tau) = K_{n_2}(\tau) = \frac{N_0}{2}\,u_0(\tau).    (A1)

Using this approximation, it follows that a(t_1) is independent of n_1(t_1) and n_2(t_1).  Hence the conditional covariance function is

K_{n^{(2)}}(t,u)/x = K_{n_1}(t,u)/x + K_{n_2}(t,u)/x    (A2)

where

K_{n_1}(t,u)/x = E\{n_1(t)\,\sin[x(t)]\;n_1(u)\,\sin[x(u)]\}
              = \sin^2[x(u)]\,E[n_1(t)\,n_1(u)]
              = \sin^2[x(u)]\,\frac{N_0}{2}\,u_0(t-u).    (A3)

Likewise,

K_{n_2}(t,u)/x = \cos^2[x(u)]\,\frac{N_0}{2}\,u_0(t-u).    (A4)

Thus

K_{n^{(2)}}(t,u)/x = \sin^2[x(u)]\,\frac{N_0}{2}\,u_0(t-u) + \cos^2[x(u)]\,\frac{N_0}{2}\,u_0(t-u)
                   = \frac{N_0}{2}\,u_0(t-u) = K_{n^{(2)}}(t,u).    (A5)

Q.E.D.
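The result can also be checked by direct simulation. The following sketch is illustrative only; the noise density, sampling interval, and the particular phase function x(t) are arbitrary choices made here, not values from the thesis. It forms the equivalent noise of (28) from two independent white sequences and verifies that its sample covariance is that of white noise of density N_0/2.

import numpy as np

# Check of (A5): n(2)(t) = n1(t) sin[x(t)] + n2(t) cos[x(t)] has the same
# white covariance (N0/2) u0(tau) as n1 and n2 themselves.
rng = np.random.default_rng(0)
N0 = 2.0                      # two-sided noise density (arbitrary choice)
dt = 1.0e-3                   # sampling interval (arbitrary choice)
n_pts = 200_000

var = (N0 / 2.0) / dt         # per-sample variance of sampled white noise
n1 = rng.normal(0.0, np.sqrt(var), n_pts)
n2 = rng.normal(0.0, np.sqrt(var), n_pts)

t = np.arange(n_pts) * dt
x = 2.0 * np.sin(2.0 * np.pi * t)          # any slowly varying phase works here

n_eq = n1 * np.sin(x) + n2 * np.cos(x)     # equation (28)

# Sample covariance at a few lags, rescaled back to a spectral density.
cov = np.array([np.mean(n_eq[: n_pts - k] * n_eq[k:]) for k in range(4)]) * dt
print("lag 0 (expect N0/2 = 1.0):", round(cov[0], 3))
print("lags 1-3 (expect ~0):     ", np.round(cov[1:], 3))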
ii) Digital Filters

The simulations realized each continuous filter H(s) as a difference equation in the sampled output Y(nT): first order sections with denominator (s + b), and second order sections with denominator (s^2 + 2c\,\omega_n s + \omega_n^2), whose recursions are written in terms of the coefficients

p_1 = e^{-c\omega_n T}\,\sin[\omega_n T\sqrt{1-c^2}]
p_2 = 2\,e^{-c\omega_n T}\,\cos[\omega_n T\sqrt{1-c^2}]
p_3 = e^{-2c\omega_n T}.
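As an illustration only, the sketch below implements one standard recursion (the impulse-invariant one) that uses exactly these p_1, p_2, p_3 coefficients for the second order section H(s) = \omega_n^2/(s^2 + 2c\,\omega_n s + \omega_n^2). The function name, the sample values, and the impulse-invariant choice are assumptions made here, not the thesis' own difference equations.

import numpy as np

def second_order_section(x, wn, c, T):
    # Impulse-invariant digital realization of H(s) = wn^2/(s^2 + 2*c*wn*s + wn^2),
    # driven by input samples x taken every T seconds.
    wd = wn * np.sqrt(1.0 - c * c)              # damped natural frequency
    p1 = np.exp(-c * wn * T) * np.sin(wd * T)
    p2 = 2.0 * np.exp(-c * wn * T) * np.cos(wd * T)
    p3 = np.exp(-2.0 * c * wn * T)
    b = T * (wn * wn / wd) * p1                 # input weight from impulse invariance
    y = np.zeros(len(x))
    ym1 = ym2 = xm1 = 0.0                       # y(n-1), y(n-2), x(n-1)
    for n in range(len(x)):
        y[n] = p2 * ym1 - p3 * ym2 + b * xm1    # Y(nT) recursion
        ym2, ym1, xm1 = ym1, y[n], x[n]
    return y

# Example: the step response settles near 1, the d.c. gain of H(s).
step = np.ones(5000)
out = second_order_section(step, wn=2.0 * np.pi * 10.0, c=0.7, T=1.0e-3)
print(round(out[-1], 2))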
iii) Alternate Receiver Design

Up until now we have used Booton's technique to analyze the optimum receiver as in chapter 3. As shown in figure 15, G''(s) is an ordinary zero-delay Wiener filter and 1/\sqrt{\nu} is the gain term advanced by Booton:

\frac{1}{\sqrt{\nu}} = e^{-\sigma_x^2/2}.    (A6)
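The value in (A6) is the describing-function (equivalent) gain of the sin(·) nonlinearity for a zero-mean Gaussian input, and it can be checked numerically. The sketch below is an illustration added here; the \sigma_x^2 values are arbitrary.

import numpy as np

# Equivalent gain of sin(.) for a zero-mean Gaussian input of variance s2:
# E[x sin(x)] / E[x^2] should equal exp(-s2/2), the Booton gain.
rng = np.random.default_rng(1)
for s2 in (0.1, 0.5, 1.0):
    x = rng.normal(0.0, np.sqrt(s2), 1_000_000)
    gain = np.mean(x * np.sin(x)) / np.mean(x * x)
    print(s2, round(gain, 3), round(np.exp(-s2 / 2.0), 3))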
Suppose we consider a different course of development than what we followed in chapter 3. Instead of completely linearizing the system as we did in figure 16, let us find the in-loop filter for figure 15. In other words, we want to find the in-loop Wiener filter that minimizes the mean square phase error when

S_x(\omega) + \frac{\nu N_0}{2P}    (A7)

is the input spectrum to the loop. For the one pole Butterworth message, this in-loop filter is

g(s, \sigma_x^2) = \frac{(\sqrt{2\delta + 1} - 1)\,s + \delta}{s(s + 1)}    (A8)

where

\delta^2 = \frac{d_f^2\,\Lambda}{\nu}, \qquad \nu = e^{\sigma_x^2}, \qquad \Lambda = \frac{4P}{N_0}.
Now to find the corresponding post loop filter we have to find the overall transfer function that minimizes the mean square demodulation error [i.e., we desire a(t), not x(t) as before] when (A7) is the input. Carrying out this Wiener filter problem yields

H_{opt}(s) = \frac{s\,[\delta - (\sqrt{2\delta + 1} - 1)]}{s^2 + \sqrt{2\delta + 1}\,s + \delta},    (A9)

and using feedback techniques we can find the post loop filter as

g_0(s, \sigma_x^2) = \frac{s\,[\delta - (\sqrt{2\delta + 1} - 1)]}{(\sqrt{2\delta + 1} - 1)\,s + \delta}.    (A10)

Now our receiver filters (A8) and (A10) are functions of the mean square phase error. The new nonlinear system is shown in figure A1.

[Figure A1. Nonlinear System with Booton-type Filters]
Actually, to analyze the system theoretically we do not need to find the filters. Develet^{24} proposed a method that will be used here.
For the one pole case, define

S_a(\omega) = \frac{2}{\omega^2 + 1}    (A11)

N_a(\omega) = \frac{\nu N_0\,\omega^2}{2P\,d_f^2}    (A12)

N_x = \frac{\nu N_0}{2P} = \frac{2 d_f^2}{\delta^2}    (A13)

and factor

1 + \frac{S_a(\omega)}{N_a(\omega)} = \frac{P(\omega)\,P(-\omega)}{Q(\omega)\,Q(-\omega)}.    (A14)

Carrying out (A14) we find

P(s) = s^2 + \sqrt{2\delta + 1}\,s + \delta    (A15)

Q(s) = s(s + 1) = s^2 + s.    (A16)
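The factorization (A14)-(A16) can be checked numerically for any value of \delta. The sketch below is added for illustration and the value of \delta is arbitrary: it forms the numerator polynomial of 1 + S_a/N_a in s, keeps its left-half-plane roots, and compares the resulting factor with s^2 + \sqrt{2\delta+1}\,s + \delta.

import numpy as np

# Spectral factorization check for the one pole case:
# 1 + Sa/Na = (w^4 + w^2 + d^2)/(w^4 + w^2); with w^2 -> -s^2 the numerator
# becomes s^4 - s^2 + d^2, whose left-half-plane factor should be
# P(s) = s^2 + sqrt(2d+1) s + d.
d = 10.0                                     # delta (arbitrary value)
roots = np.roots([1.0, 0.0, -1.0, 0.0, d * d])
lhp = roots[roots.real < 0.0]                # keep the two left-half-plane roots
P = np.real(np.poly(lhp))                    # coefficients of P(s)
print("numerical P(s):", np.round(P, 6))
print("predicted:     ", np.round([1.0, np.sqrt(2.0 * d + 1.0), d], 6))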
If we use Lawton's^{10} general result that

I_x = \tau_1 - \gamma_1    (A17)

and

\sigma_x^2 = N_x\,I_x    (A18)

and, for the mean square demodulation error,

\sigma_f^2 = \frac{N_x}{2 d_f^2}\left\{(\tau_1 - \gamma_1)\left[(\tau_1 - \gamma_1)\tau_1 + 2\gamma_2\right] - (\tau_3 - \gamma_3)\right\}    (A19)

where the \tau_i and \gamma_i are defined by

P(s) = s^N + \tau_1 s^{N-1} + \tau_2 s^{N-2} + \cdots    (A20)

and

Q(s) = s^N + \gamma_1 s^{N-1} + \gamma_2 s^{N-2} + \cdots,

then for this low pass case

\tau_1 = \sqrt{2\delta + 1}, \quad \tau_2 = \delta, \quad \tau_3 = 0    (A21)
\gamma_1 = 1, \quad \gamma_2 = 0, \quad \gamma_3 = 0.
Hence

I_x = \sqrt{2\delta + 1} - 1    (A22)

\sigma_x^2 = N_x\,I_x = \frac{2 d_f^2}{\delta^2}\left[\sqrt{2\delta + 1} - 1\right]    (A23)

\sigma_f^2 = \frac{I_x^2\,(I_x + 1)}{\delta^2}    (A24)

and

\delta = \delta(\sigma_x^2) = d_f\sqrt{\Lambda/\nu}.
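Because \nu = e^{\sigma_x^2} enters \delta, equations (A22)-(A24) determine the quasi-linear operating point only implicitly, and it must be found self-consistently. The sketch below is an illustration assuming the expressions (A22)-(A24) and the definitions of \delta, \nu, \Lambda given above; the d_f and \Lambda values are arbitrary, chosen above the quasi-linear threshold so that the simple fixed-point iteration settles.

import numpy as np

# Self-consistent solution of the quasi-linear equations for the one pole case:
#   nu = exp(sigma_x^2),  delta = d_f * sqrt(Lambda/nu),
#   sigma_x^2 = (2 d_f^2 / delta^2) * (sqrt(2 delta + 1) - 1)            (A23)
#   sigma_f^2 = I_x^2 (I_x + 1) / delta^2,  I_x = sqrt(2 delta + 1) - 1  (A24)
def quasi_linear_point(df, lam, n_iter=200):
    sx2 = 0.0                                  # start from zero phase error
    for _ in range(n_iter):
        nu = np.exp(sx2)
        delta = df * np.sqrt(lam / nu)
        sx2 = (2.0 * df**2 / delta**2) * (np.sqrt(2.0 * delta + 1.0) - 1.0)
    ix = np.sqrt(2.0 * delta + 1.0) - 1.0
    sf2 = ix**2 * (ix + 1.0) / delta**2
    return sx2, sf2

for lam in (200.0, 500.0, 2000.0):             # arbitrary SNR values above threshold
    sx2, sf2 = quasi_linear_point(df=50.0, lam=lam)
    print(lam, round(sx2, 4), round(sf2, 5))

Below the threshold the iteration does not settle, which is the same behavior Develet's criterion associates with loss of lock.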
A plot of (A24) for df = 50 is shown in figure A2.
Remember that this plot describes the performance of a
receiver that contains Booton-type filters and that Booton's
quasi-linear analysis technique was used to analyze it.
Figure A3 appropriately shows the arrangement.
[Figure A3. Quasi-Linear Receiver with Booton-type Filters]
Also plotted in figure A2 is the corresponding d_f = 50 curve for the quasi-linear analysis of a receiver with the usual linear filters (as in chapter 3).
The same sort of analysis as described above was performed for the two pole message.
Only the results will be included here:

I_x = \tau_1 - \sqrt{2}    (A25)

\sigma_x^2 = N_x\,I_x    (A26)

\sigma_f^2 = \frac{N_x}{2 d_f^2}\left\{I_x\left[I_x\,\tau_1 + 2\right] - \tau_3\right\}    (A27)

where the \tau_i follow from carrying out the factorization (A14) with the two pole Butterworth spectrum in place of (A11), so that \gamma_1 = \sqrt{2}, \gamma_2 = 1, and \gamma_3 = 0.

A plot of (A27) is also shown in figure A2 along with the corresponding curve for the linear filter case.
The system of figure A1 was simulated on the digital computer for the one pole case. We first conducted the simulation for

\nu = e^{0} = 1,    (A28)

which in effect is exactly the same one pole case that was simulated in chapter four (curve number 1, figure A4). For the next simulation (curve 2) we used those values of mean square phase error that were obtained in the first run:

\nu = e^{\sigma_{x_0}^2} > 1.    (A29)
We expected to see an improvement in performance, but instead the performance degraded somewhat. Because of the degraded performance shown by curve number 2, we decided that perhaps the \sigma_x^2 values used in run 2 were too large. Subsequently, curve number 3 was run using smaller values of \sigma_x^2; the specific values are listed on the graph. Curve number 4 was obtained by using the theoretical values of \sigma_x^2 for figure A3. These values of \sigma_x^2 were computed using equation (A23).
These results are somewhat surprising because, according to figure A2, the receiver should perform slightly better when the Booton-type filters are used. These results show that the best value for \sigma_x^2 is zero, which reduces the Booton-type filters to those used in chapters three and four.
iv) Equivalent Rectangular Bandwidth

In some texts this concept is referred to as the equivalent noise bandwidth. The idea is best described with an example. Consider the first order Butterworth spectrum

S_a(\omega) = \frac{2K}{\omega^2 + K^2}

shown in figure A5(a).

[Figure A5. (a) The one pole Butterworth spectrum S_a(\omega), total power P = 1; (b) the equivalent rectangle of height S_a(0) and the same area.]

The total power, P, in the message is the area under S_a(\omega); in this case P equals one. Now the E.R.B. is the bandwidth of a rectangle of height S_a(0) and area P. For this case

E.R.B. = \frac{\pi K}{2}.
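The same number can be produced numerically; the short sketch below (the value of K is an arbitrary choice made here) integrates the spectrum over positive frequencies and divides by S_a(0).

import numpy as np

# Equivalent rectangular bandwidth of the one pole Butterworth spectrum:
# E.R.B. = (integral of Sa over 0..inf) / Sa(0), which should equal pi*K/2.
K = 3.0                                   # spectrum parameter (arbitrary)
w = np.linspace(0.0, 2000.0 * K, 2_000_001)
Sa = 2.0 * K / (w**2 + K**2)
erb = np.trapz(Sa, w) / Sa[0]
print(round(erb, 2), round(np.pi * K / 2.0, 2))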
[Figure A2. Quasi-Linear Filters vs. Linear Filters, One and Two Pole Cases: Theoretical Quasi-Linear Analysis of P.L.L. Curves for the Booton-type filters and for the linear filters, d_f = 50, two pole case and one pole case, plotted against Λ.]
[Figure A4. Simulation Results: Booton-type filters in the P.L.L. Tabulated input data (the assumed values of σ_x²) and output data for curves (1) through (4), together with the corresponding simulation curves.]
REFERENCES
1. M. Schwartz, "Information Transmission, Modulation and Noise," McGraw-Hill, New York; 1959.

2. H. S. Black, "Modulation Theory," D. Van Nostrand, Inc., Princeton, N. J.; 1953.

3. E. H. Armstrong, "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation," Proceedings IRE, Vol. 24, pp. 689-740; 1936.

4. J. G. Chaffee, "The Application of Negative Feedback to Frequency Modulation Systems," BSTJ, Vol. 18; July, 1939.

5. D. Richman, "Color Carrier Reference Phase Synchronization Accuracy in NTSC Color T.V.," Proceedings IRE, Vol. 42, pp. 106-133; 1954.

6. D. C. Youla, "The Use of the Method of Maximum Likelihood in Estimating Continuous-Modulated Intelligence which has been Corrupted by Noise," Jet Propulsion Laboratory Report No. DA-04-495-ord 18, Pasadena, California.

7. H. L. Van Trees, "Detection, Estimation and Modulation Theory," (book to be published).

8. H. W. Bode and C. E. Shannon, "A Simplified Derivation of Linear Least Square Smoothing and Prediction Theory," Proceedings IRE, Vol. 38, No. 4, pp. 417-425; April, 1950.

9. D. L. Snyder, "The State Variable Approach to Continuous Estimation," Sc.D. Thesis, Mass. Inst. of Tech., Elec. Eng. Dept.; February, 1966.

10. H. D. Becker, J. T. Chang and J. G. Lawton, "Investigations of Advanced Analog Communications Techniques," Cornell Aeronautical Laboratory, Inc.; Technical Report No. RADC-TR-65-81; March, 1965.

11. R. C. Booton, Jr., "The Analysis of Nonlinear Control Systems with Random Inputs," Proceedings, Symposium on Nonlinear Circuit Analysis, New York, N. Y.; April 23-24, 1953.

12. G. C. Newton, Jr., L. A. Gould and J. F. Kaiser, "Analytical Design of Linear Feedback Controls," John Wiley & Sons, Inc., New York, N. Y.; 1957.

13. M. C. Yovits and J. L. Jackson, "Linear Filter Optimization with Game Theory Considerations," IRE National Convention Record, Pt. 4, pp. 193-199; 1955.

14. D. L. Snyder, "Optimum Linear Filtering of an Integrated Signal in White Noise," Transactions IEEE, Vol. AES-2, No. 2, pp. 231-232; March, 1966.

15. B. C. Kuo, "Automatic Control Systems," Prentice-Hall, Inc.; 1962.

16. I. M. Horowitz, "Synthesis of Feedback Systems," Academic Press Inc., New York; 1963.

17. W. B. Davenport, Jr., and W. L. Root, "Random Signals and Noise," McGraw-Hill Book Company, Inc., New York; 1958.

18. A. Papoulis, "A New Method of Analysis of Sampled-Data Systems," Transactions IRE, Vol. AC-4, No. 2, pp. 67-73; November, 1959.

19. Y. W. Lee, "Statistical Theory of Communication," John Wiley & Sons, Inc., New York, N. Y.; 1960.

20. R. W. Zaorski, "Simulation of Analog Demodulation Systems," M.Sc. E.E. Thesis, Mass. Inst. Tech., Cambridge, Mass.; January, 1965.

21. S. O. Rice, "Noise in F.M. Receivers," in Time Series Analysis, M. Rosenblatt, Ed., John Wiley & Sons, New York; 1963.

22. J. G. Lawton, "Investigation of Analog and Digital Communication Systems (Phase 3 Report)," Cornell Aeronautical Laboratory, Inc., Buffalo, N. Y.; Technical Documentary Report RADC-TDR-63-147; May, 1963.

23. D. Middleton, "Statistical Communication Theory," McGraw-Hill Book Company, Inc., New York; 1960. Sec. 15.4, 15.5, pp. 655-678.

24. J. A. Develet, Jr., "A Threshold Criterion for Phase-Lock Demodulators," Proceedings IEEE, Vol. 51, pp. 349-356; February, 1963.