
ANALYSIS OF PRIMITIVE LINEAR AND
NONLINEAR STOCHASTIC SYSTEMS
by
Jimmie Dale Walker III
B.S., Massachusetts Institute of Technology (2000)
Submitted to the Department of Electrical Engineering and Computer Science
in Partial Fulfillment of the Requirements for the Degree of
Master of Engineering in Electrical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
September 29, 2000
Copyright: Massachusetts Institute of Technology, 2000. All rights reserved.
The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic
copies of this thesis document in whole or in part, and to grant others the right to do so.
Author
Department of Electrical Engineering and Computer Science
September 29, 2000
Certified by
B.C. Lesieutre
Professor of Electrical Engineering
Thesis Supervisor
Accepted by
Arthur C. Smith
Chairman, Department Committee on Graduate Thesis
ANALYSIS OF PRIMITIVE LINEAR AND
NONLINEAR STOCHASTIC SYSTEMS
by
Jimmie Dale Walker III
Submitted to the Department of Electrical Engineering and Computer Science
September 29, 2000
In Partial Fulfillment of the Requirements for the Degree of
Master of Engineering in Electrical Engineering
ABSTRACT
Engineers designing future projects need a probabilistic model, particularly during the early stages of the development process, that gives an accurate estimate of the distribution of development costs given the pertinent inputs available at that time. Assuming the uncertain inputs are modeled as random variables, it is important to evaluate probabilistic characteristics of primitive operations (addition, subtraction, multiplication, and division) on these random variable inputs. Although the mathematics can be non-trivial, one can derive analytical and simulated expressions for the resulting output probability distributions. However, finding these output distributions can require excessive computation time on contemporary computers.
In this research, both analytical and simulated expressions for the first and second moments of the output probability distributions were derived. The random variable inputs consisted of a deterministic element (a sine wave, or the product of a sine wave and an exponential) and a stochastic element (uniform, gaussian, and beta distributions). Feedback, time delay, and system gain were used to adjust the output probability distributions resulting from the addition, subtraction, multiplication, and division of the inputs.
Thesis Supervisor: B. C. Lesieutre
Title: Professor of Electrical Engineering
Dedication
to
my Grandparents,
Parents,
Family,
and
Significant Other
Acknowledgements
First and foremost I would like to thank Bernard C. Lesieutre, my thesis advisor, for making me aware of this
research opportunity and for his supervision and assistance. Also, I would like to thank Thomas Neff of DaimlerChrysler for advice on references to help me get up to par on the research topic.
Next, I would like to thank my family and significant other for their encouragement when I needed it the most. Whenever I felt that my research would never come to completion, they were there to tell me to tough up and keep my eye on the prize. For that, I will be forever thankful.
Also, I would like to thank Sandip Roy for providing me with a couple of Matlab functions necessary to generate
beta distributions. Last, my secretary in LEES, Vivian Mizuno, deserves thanks for her willingness to help out with
anything.
Contents

1 Introduction
2 Background and Problem Statement
  2.1 Discussion of Prior Research and Summary of a Relevant Technical Approach
    2.1.1 Prior Research Discussion
    2.1.2 Summary of a Relevant Technical Approach
    2.1.3 Conclusion of Prior Research
  2.2 Problem Statement
  2.3 Comments
3 Main Analysis
  3.1 The Input
  3.2 First and Second Order Moments of Output, n=3
  3.3 Comment on Expected Behavior as a Function of the Statistics and k
  3.4 Recursion Relations
4 Benchmark Tests
  4.1 Formulas Derived to Gather the Analytical and Simulated Results
  4.2 Two Deterministic (xd[n]) Components used in the Input Signal
  4.3 Analytical and Simulated Results using Gaussian Distribution
    4.3.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]gaussian
    4.3.2 Simulated and Analytical Results for x[n] = xd2[n] + xs[n]gaussian
  4.4 Analytical and Simulated Results using Uniform Distribution
    4.4.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]uniform
    4.4.2 Simulated and Analytical Results for x[n] = xd2[n] + xs[n]uniform
  4.5 Analytical and Simulated Results using Beta Distribution
    4.5.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]beta
    4.5.2 Simulated and Analytical Results for x[n] = xd2[n] + xs[n]beta
  4.6 Division Operation Results
    4.6.1 The Deterministic (xd[n]) Component used in the Division Input Signal
    4.6.2 Division Simulated and Analytical Results for x[n] = xd[n] + xs[n]gaussian
    4.6.3 Division Simulated and Analytical Results for x[n] = xd[n] + xs[n]uniform
    4.6.4 Division Simulated and Analytical Results for x[n] = xd[n] + xs[n]beta
5 Conclusions and Further Research
1 Introduction
Enterprises whose primary role is to integrate systems must be capable of evaluating system components early in the development stage, when more than half of total product costs are determined
[2]. Excellent examples of such enterprises include automobile and truck manufacturers. Presently, virtually all of the development and determination of subsystem costs is performed by the
suppliers of individual subsystems, yet there exists a need for the producer of the overall system to
plan and coordinate subsystem development. Therefore, the system integrator must be able to sufficiently model economic aspects, such as life-cycle costs, and technical aspects, such as electrical
losses, of system components. During the early development phase when system options are
under discussion, not all information required to make a sufficient comparison of various system
topologies is available. A vast majority of the attributes of various system components can only be
represented by estimated values. Nevertheless, key product decisions are based on these inaccurate
estimates. Therefore, it is preferable to disclose the uncertainty existing in all available data and
the results of any system analysis prior to making key decisions. One means of modeling uncertainties in component attributes is to use discrete random variables. Then, uncertainty in the overall attributes of the system may be calculated and represented in the form of discrete random
variables. The capability to represent and calculate the system attributes using random variables
introduces the possibility of assessing the risks involved in undertaking a particular project. Currently, there are numerous well-known mathematical methods for calculating functions of several
random variables. Yet, many of the methods have long computation times or are not well suited
for computer implementation. In most cases the systems to be modeled are large and results are
needed immediately. Therefore, long computation times and large data structures pose a major
hindrance. [4] The purpose of this thesis is to derive analytical solutions that efficiently, accurately, and quickly carry out primitive mathematical operations with random variables.
Using probabilistic techniques and characterizations of dominant uncertain parameters, an
accurate estimation of the distribution of possible production costs can be achieved in the product
design and development stages. Such a system has already been developed to carry out this task
given independent uncertain parameters described by beta distribution functions and the simple
calculations of addition, subtraction, multiplication, and division.
The research in this thesis carries out the estimation of some probabilistic descriptions
through calculations which include looping the output of the calculation back as an input and
introducing temporal delays. In this case, calculations involving both input and output random
variables cannot always be resolved by assuming the variables are independent, and the feedback
loops make the problem implicitly defined, requiring new solution methods.
In this initial study we describe the output of the primitive functions, addition, subtraction,
multiplication, and division, when the output is fed back as one of the inputs, after scaling and a
time delay. The resulting systems form the primitive linear and nonlinear stochastic systems of
this thesis. In this document we characterize the first and second moments of the output in terms
of the first and second moments of the input. Difference equations are used to capture temporal
variations in the moments. For the real input variables of interest, no stationary restriction is
imposed, which further complicates the problem and prohibits the use of some of the well-known
traditional approaches.
We demonstrate through simulation and analysis that the multiplication and division primitive
stochastic systems are not of practical interest. Their resulting dynamics cause the output to either
tend to zero or become unbounded. The remaining addition and subtraction primitive systems
both form simple linear systems. The well-known results for linear systems and stationary (or
wide-sense stationary) inputs do not apply here and we develop appropriate alternate techniques.
The results in this thesis are interesting in their own right, and provide guidance for future
research activities.
The remainder of the thesis is organized in the following manner. In Chapter 2 we present a
review of previous work related to this topic, a precise problem statement, and a review of related
analysis techniques. Derivations and benchmark examples are presented in Chapter 3 and Chapter
4, respectively. Further discussion, conclusions, and recommendations are given in Chapter 5.
2 Background and Problem Statement
The following chapter will provide an overview of current and similar prior research.
Section 2.1 will provide information concerning previous research and a summary of a relevant
technical approach. Section 2.2 will provide a detailed problem statement. Last, Section 2.3 will
provide a discussion of wide-sense stationary processes.
2.1 Discussion of Prior Research and Summary of a Relevant Technical Approach
This section will provide a summary of prior related research.
2.1.1 Prior Research Discussion
The following perspective is derived almost directly from Isaac Trefz's Master's Thesis [4].
The overall objective of the thesis was to investigate methods for approximating probability densities of functions of independent random variables to use in the MAESTrO software tool. MAESTrO [5] is specifically designed to analyze alternative electrical system configurations for
automobiles and trucks. In particular, MAESTrO can calculate estimates of total cost, weight,
reliability, and efficiency of an entire electrical system including its generation, storage, distribution and utilization subsystems. In mathematical terms, if X1,...,Xn are independent real random
variables with density functions f1(x),...,fn(x) and Y = phi(X1,...,Xn) where phi is an arbitrary real
function, Isaac wanted to find a reasonable approximation to the probability density function of
the random variable Y, g(y), using the density functions of X1,...,Xn.
Although it is theoretically possible in many cases to find a closed-form analytical expression
for g(y), the expression can be very complex, non-trivial to derive, and impossible to evaluate on a
computer. A sufficient approximation to g(y) using simple numerical algorithms, by contrast,
requires only a fraction of the computational time and resources of its analytical counterpart. Isaac's
thesis compared methods for approximating the density function, g, of the function of independent random variables, phi.
2.1.2 Summary of a Relevant Technical Approach
The technical approach used in Isaac's thesis involved investigating the following three methods for performing calculations with random variables and comparing them with respect to their relative speed and accuracy:
1. Direct numerical calculation of g(y) based on analytical expressions
2. Monte Carlo approximation to g(y)
3. Analytical approximation to g(y) using the moments or cumulants of X1,...,Xn [4]
The mathematical operations investigated included products, quotients, and linear combinations of two independent random variables. Even though analytic expressions for these operations
exist, they usually involve convolution integrals that are difficult or even impossible to evaluate
analytically. However, the expressions may be used as a basis to directly generate numerical
approximations to g(y).
An alternative to a direct numerical approximation of analytic expressions is a Monte Carlo
approach, having the advantage that it does not require an analytic expression for the density
function being approximated. However, this approach is more complicated than direct
numerical calculation but can in some cases result in improvements in speed and accuracy.
Although direct numerical calculation and Monte Carlo calculation of functions of independent random variables may require less time and fewer computational resources than a direct analytical approach, the computational savings that these methods offer may still not be enough if the
models require many stochastic calculations or storage of many stochastic variables.
Fortunately, methods are available to approximate the density function of Y = phi(X1,...,Xn) using
simple calculations and the moments of X1,...,Xn. Three methods, one developed by M. Springer
[3] and two developed by G. McNichols [6], offer significant savings in computation time and
resources over direct numerical calculation or a Monte Carlo approach.
2.1.3 Conclusion of Prior Research
Trefz's research concluded that Springer's analytical approximation method was noticeably
better than either of McNichols' methods at performing arithmetic operations. Given shape
parameters of the input beta distributions greater than 1 but less than 10, Springer's method reliably generated accurate approximations to products and sums of two independent random beta
variates. Also, Springer's approximation methods required significantly less computer memory
than direct numerical methods since each result is described uniquely and completely by four
parameters. This leads to less computation time due to the simplicity of description. Yet,
Springer's method was unreliable when calculating the quotient of two independent random variables.
For Springer's method to be incorporated more confidently into software, it is desirable to find
more symmetric and nearly uniform beta approximations to sharp or highly skewed beta distributions. With sharp beta distributions, almost all of the probability density is localized at the center
of the interval [a,b]. It is preferable to find an approximation to this sharp beta distribution that has
much smaller shape parameters and that is defined over a much smaller interval than the original
sharp distribution. With respect to highly skewed beta distributions, it is preferable to find an
approximation defined over a larger interval and with larger shape parameters than the original
skewed distribution. Locating algorithms to perform these mappings will greatly assist practical
implementation of Springer's methods into software. [4]
2.2 Problem Statement
The primary source of prior work is contained in Trefz's thesis. That document considers only
static cases of multiplication, division, addition, and subtraction of independent input random
variables described by beta distributions.
This type of research is continued here by investigating calculations of independent random
variables described by uniform, gaussian, and beta distributions which arise from combinations of
addition, subtraction, multiplication, division and delay elements in the presence of feedback
loops. Since feedback is involved, the systems investigated here are deemed stochastic.
Theoretically, one might envision a method by which one could design a system in block diagram form involving the calculations stated above. This system could be subjected to several random variable inputs and would produce several random variable outputs. The major hurdle or challenge is to
compute the outputs in an efficient manner. This is a quite challenging numerical problem.
To facilitate analysis in this initial investigation we limit the allowable connections to a single
primitive element (addition, subtraction, multiplication, or division). One of the two inputs
comes from the output through delay and scaling elements. Block diagrams for these primitive
stochastic systems are shown in Figures 2.1 - 2.4, where the z-1 indicates a single-step discrete
time delay block, and k indicates a gain (scaling) block.
The difference equations describing the systems are as follows:
y[n] = x[n] + k*y[n-1]

Figure 2.1: An addition feedback-delay system, with input x[n]
and output y[n], governed by the equation above.
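The difference equation above is easy to simulate directly. A minimal sketch in Python (the thesis work used Matlab; the function name and parameter values here are illustrative only):

```python
def simulate_addition(x, k, y0=0.0):
    """Simulate the addition feedback-delay system y[n] = x[n] + k*y[n-1].

    x  -- sequence of input samples x[1], ..., x[N]
    k  -- gain applied in the feedback path
    y0 -- initial condition y[0]
    """
    y, out = y0, []
    for xn in x:
        y = xn + k * y      # new output = input plus scaled, delayed output
        out.append(y)
    return out

# With a constant unit input and |k| < 1 the output settles toward 1/(1-k).
trace = simulate_addition([1.0] * 200, k=0.5)
```

For k = 0.5 and a constant unit input the trace approaches 1/(1-0.5) = 2, the zero-frequency gain of the loop.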
y[n] = k*y[n-1] - x[n]

Figure 2.2: A subtraction feedback-delay system, with input x[n]
and output y[n], governed by the equation above.
y[n] = x[n] * k*y[n-1]

Figure 2.3: A multiplication feedback-delay system, with input x[n]
and output y[n], governed by the equation above.
y[n] = x[n] / (k * y[n-1])
or
y[n] = (k * y[n-1]) / x[n]

Figure 2.4: A division feedback-delay system, with input x[n]
and output y[n], governed by the equations above.
For the four feedback-delay systems described above, knowledge of a joint distribution description of the possible values taken by the input, x[n], allows a distribution description of the possible values taken by the output to be determined.
2.3 Comments
Several observations should be made about the four types of systems listed above. The knowledgeable reader will note that the addition and subtraction systems are both linear. Well-known
techniques exist for the analysis of linear systems with certain types of stochastic inputs.
The most widely known results apply to wide-sense stationary (WSS) processes. A process is
said to be wide-sense stationary if its first two moments do not explicitly depend on time. Thus the
mean of a WSS process, x[n], is constant, and the auto correlation depends only on the time differences; that is,
meanx[n] = E{x[n]}    (1)
         = meanx (constant)    (2)

Rxx[n,m] = E{x[n]x[m]}    (3)
         = Rxx[n + s, m + s]  for all s    (4)
         = Rxx[0, m - n]    (5)
         = Rxx[m - n]  (shorthand notation)    (6)
When the WSS process x[n] is used as the input to a linear system, the first and second moments of the WSS output process y[n] are easily calculated. The mean is given by

meany[n] = meany (a constant) = H(1)*meanx    (7)

where H(z) is the z-transform of the discrete-time linear system and H(1) is the zero-frequency
gain.
The cross-correlation between the output and the input is given by
Ryx[n] = h[n] * Rxx[n]    (8)
where h[n] is the unit-sample response of the discrete-time linear system and * is used to represent the convolution operator. The auto correlation of the output is given by
Ryy[n] = h[-n] * h[n] * Rxx[n]    (9)
The second moment of the output is simply given by Ryy[0], if one is not interested in temporal correlations.
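As a concrete check of equation (7), consider the addition system y[n] = x[n] + k*y[n-1], for which H(z) = 1/(1 - k*z^-1) and hence H(1) = 1/(1 - k). A simulation sketch (parameter values are arbitrary illustrations, not from the thesis):

```python
import random

def steady_state_mean(mean_x, k, steps=2000, trials=400, seed=4):
    """Empirical steady-state mean of y[n] = x[n] + k*y[n-1] when driven
    by an i.i.d. (hence WSS) input with mean mean_x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y = 0.0
        for _ in range(steps):
            # input = mean plus a zero-mean uniform fluctuation
            y = (mean_x + rng.uniform(-0.5, 0.5)) + k * y
        total += y          # sample y[n] long after the transient has died
    return total / trials

k, mean_x = 0.5, 1.0
H1 = 1.0 / (1.0 - k)                  # zero-frequency gain H(1)
empirical = steady_state_mean(mean_x, k)
```

The empirical output mean approaches H(1)*meanx = 2.0, matching equation (7).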
Assuming the input is wide-sense stationary is too restrictive for our interests. While very
powerful, the results for WSS inputs do not aid us in this thesis.
Before moving to the next chapter in which the main analysis is presented, we pause to make
an observation about the nonlinear primitive systems. Even for deterministic input signals, the
multiplication and division systems will behave strangely. For example, in the multiplication case,
if the initial condition on the output is small, y[0] < 1/(k*x[n]), then y[n] will tend to zero. Conversely, if the initial condition is large, y[0] > 1/(k*x[n]), then y[n] will tend to become
unbounded. This is not a particularly interesting or useful system. There is no reason to expect
better behavior with stochastic inputs.
A similar observation can be made for the division systems.
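This degenerate behavior is easy to see numerically. For a constant input c, the multiplication recursion gives y[n] = (k*c)^n * y[0], so the output either decays to zero or grows without bound; a sketch with illustrative values:

```python
def simulate_multiplication(c, k, y0, steps):
    """Iterate y[n] = x[n] * k * y[n-1] for a constant input x[n] = c."""
    y = y0
    for _ in range(steps):
        y = c * k * y
    return y

# |k*c| < 1: the output decays toward zero.
decayed = simulate_multiplication(c=1.0, k=0.8, y0=1.0, steps=100)
# |k*c| > 1: the output grows without bound.
grown = simulate_multiplication(c=1.0, k=1.25, y0=1.0, steps=100)
```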
3 Main Analysis
In this chapter we present the theoretical derivation of the main results of this thesis. Specifically we develop formulas to describe the first and second moments of the outputs for the primitive stochastic systems described in the previous chapter. First we begin with a discussion of
characteristics of the input. Then we proceed with the derivations.
3.1 The input
In this thesis we restrict the form of the input to obtain useful, tractable results. In our initial
attempt to study general, unrestricted inputs (in terms of statistics) we did not see an obvious analysis technique superior in computation time to brute-force calculation.
Hence, we restricted the input to determine what gains could be made.
Here we consider an input x[n] as a sum of a deterministic component, xd[n], and a random
component, xs[n],

x[n] = xd[n] + xs[n]    (10)
We assume that the random component is independent of the deterministic component at all
times. Furthermore, we assume the random component at some particular time is independent of
the random component at all other times. Under these assumptions, the expected value of x[n] is
equal to xd[n], and the correlation of x[n] at different points in time depends only on xd[n]. This
representation is convenient because the values at different points in time are independent.
3.2 First and Second Order Moments of Output, n = 3
Only the first and second moments of the output are considered. When calculating the first and
second moments for the four systems described in the previous chapter, n=3 was arbitrarily
chosen. Also, the deterministic and stochastic components of the input have the following characteristics:

xs[n] -> uniform, gaussian, or beta distribution with zero mean
xd[n] -> real function such as a sine wave or exponential with non-zero mean
Addition Feedback-Delay System
For the addition feedback-delay system, calculating the first moment of the output was fairly
straightforward.
E{y[3]} = E{x[3] + k*y[2]}
        = E{x[3] + k*x[2] + k^2*x[1] + k^3*y[0]}
        = xd[3] + k*xd[2] + k^2*xd[1] + k^3*y[0]    (11)
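Equation (11) can be checked by Monte Carlo simulation. The sketch below uses an illustrative deterministic component, a unit-variance gaussian xs[n], and arbitrary values k = 0.8 and y[0] = 0.08 (all of these choices are for illustration only):

```python
import random

def mc_first_moment(xd, k, y0, trials=200000, seed=1):
    """Monte Carlo estimate of E{y[3]} for y[n] = x[n] + k*y[n-1],
    where x[n] = xd[n] + xs[n] and xs[n] is i.i.d. zero-mean gaussian."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y = y0
        for n in (1, 2, 3):
            y = (xd[n] + rng.gauss(0.0, 1.0)) + k * y
        total += y
    return total / trials

xd = {1: 0.3, 2: 0.6, 3: 0.9}                    # illustrative xd[n] values
k, y0 = 0.8, 0.08
analytical = xd[3] + k * xd[2] + k**2 * xd[1] + k**3 * y0   # equation (11)
estimate = mc_first_moment(xd, k, y0)
```

The Monte Carlo estimate agrees with the analytical value to within sampling error.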
The second moment of the output for the addition feedback-delay system is given by,
E{y^2[3]} = xd^2[3] + E{xs^2[3]} + 2*k*xd[2]*xd[3] + 2*k^3*(y[0]*xd[3] + xd[2]*xd[1])
          + k^2*(xd^2[2] + E{xs^2[2]} + 2*xd[1]*xd[3]) + k^4*(xd^2[1] + E{xs^2[1]} + 2*y[0]*xd[2])
          + 2*k^5*y[0]*xd[1] + k^6*y^2[0]    (12)
Subtraction Feedback-Delay System
For the subtraction feedback-delay system, calculating the first moment of the output was
pretty straightforward,
E{y[3]} = xd[3] - k*xd[2] + k^2*xd[1] - k^3*y[0]    (13)
The second moment of the output for the subtraction feedback-delay system is given by,
E{y^2[3]} = xd^2[3] + E{xs^2[3]} - 2*k*xd[2]*xd[3] + 2*k^2*xd[1]*xd[3]
          - 2*k^3*y[0]*xd[3] + k^2*(xd^2[2] + E{xs^2[2]}) - 2*k^3*xd[2]*xd[1]
          + 2*k^4*xd[2]*y[0] + k^4*(xd^2[1] + E{xs^2[1]})
          - 2*k^5*xd[1]*y[0] + k^6*y^2[0]    (14)
Multiplication Feedback-Delay System
For the multiplication feedback-delay system, the first moment calculation of the output was
also straightforward.

E{y[3]} = k^3 * y[0]*xd[3]*xd[2]*xd[1]    (15)
The second moment of the output for the multiplication feedback-delay system is given by,

E{y^2[3]} = k^6 * y^2[0] * E{(xd[1] + xs[1])^2 * (xd[2] + xs[2])^2 * (xd[3] + xs[3])^2}    (16)
Division Feedback-Delay System
The first and second moments of the output for the division feedback-delay system were calculated only for case A, y[n] = x[n] / (k * y[n-1]). The first moment calculation of the output is also
straightforward,

E{y[3]} = (xd[3] * xd[1]) / (k * y[0] * xd[2])    (17)
The second moment for the division feedback-delay system is given by,
E{y^2[3]} = (xd^2[3]*xd^2[1] + xd^2[1]*E{xs^2[3]} + xd^2[3]*E{xs^2[1]} + E{xs^2[1]}*E{xs^2[3]})
          / (k^2 * y^2[0] * (xd^2[2] + E{xs^2[2]}))    (18)
From analysis of the results of finding the first and second moments of the output of the feedback-delay systems, conclusions on the expected behavior of the systems as a function of k and
the statistics were made. This will be discussed in the next section.
3.3 Comment on Expected Behavior as a Function of the Statistics and k
Before delving into simulations of the feedback-delay systems using Matlab [1] and the derived
analytical expressions for the systems, the first step was to make some predictions about the expected
behavior of the systems considering the statistics of the input and the system gain, k. Since the
current input xs[n] is independent of all prior inputs, analytical calculations are simplified. For
example, if the expectation of the product of xs[2] and xs[3] is required, then since the
inputs at each step are independent one can simplify that calculation to just the product of the
means of the two inputs, which is zero.
With respect to k, the initial thought obviously was that k would play a major role in controlling the magnitude of the output. It was felt that k would control whether the second
moment for each system would blow up or stabilize. In particular, for the multiplication feedback-delay system, it was felt that the first moment would decrease as a function of n for k < 1 and
increase for k > 1. It turns out to be slightly more complicated; the critical value of k depends on
the particular input. However, the limiting behavior for the multiplication and division systems does
tend to either go to zero or become unbounded. Thus these two particular models will behave
oddly.
3.4 Recursion Relations
Examining the relations for the first and second moments above for n=3, we see that recursion
relations for these quantities can be derived.
Addition System
The difference equation that describes the first moment of the addition system is

E{y[n]} = E{x[n] + k*y[n-1]}
        = E{x[n]} + k*E{y[n-1]}
        = xd[n] + k*E{y[n-1]}.    (19)
A difference equation for the second moment of the output in terms of the moments of the
input and first moment of the output is given by,
E{y^2[n]} = E{(x[n] + k*y[n-1])^2}
          = E{x^2[n] + 2*k*x[n]*y[n-1] + k^2*y^2[n-1]}
          = (xd^2[n] + vs[n]) + 2*k*xd[n]*E{y[n-1]} + k^2*E{y^2[n-1]}    (20)

where vs[n] is the variance of xs[n]. (Recall that xs[n] is assumed to be zero mean. Also note in the
equations above that E{x[n]*y[n-1]} = xd[n]*E{y[n-1]} because y[n-1] only depends on previous
values of the input, which are independent of x[n].)
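Equations (19) and (20) can be propagated step by step; note that the second-moment update must use E{y[n-1]} before the first moment is advanced. A sketch (the sinusoidal xd[n], unit variance, k, and y[0] below are illustrative choices):

```python
import math

def addition_moments(xd, vs, k, y0, N):
    """Propagate equations (19)-(20) for the addition system, n = 1..N.

    xd, vs -- deterministic component xd[n] and variance vs[n] of xs[n]
    Returns lists of E{y[n]} and E{y^2[n]}.
    """
    m1, m2 = y0, y0 * y0            # moments of the initial condition y[0]
    first, second = [], []
    for n in range(1, N + 1):
        # equation (20), using the previous first moment m1 = E{y[n-1]}
        m2 = (xd[n] ** 2 + vs[n]) + 2 * k * xd[n] * m1 + k ** 2 * m2
        # equation (19)
        m1 = xd[n] + k * m1
        first.append(m1)
        second.append(m2)
    return first, second

xd = {n: math.sin(0.3 * n) for n in range(1, 6)}   # illustrative input
vs = {n: 1.0 for n in range(1, 6)}
f_mom, s_mom = addition_moments(xd, vs, k=0.8, y0=0.08, N=5)
```

As a consistency check, f_mom[2] reproduces the expanded n = 3 expression of the form xd[3] + k*xd[2] + k^2*xd[1] + k^3*y[0].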
Subtraction System
The difference equations for the subtraction system are similarly derived. They are given by

E{y[n]} = k*E{y[n-1]} - xd[n]    (21)

E{y^2[n]} = (xd^2[n] + vs[n]) - 2*k*xd[n]*E{y[n-1]} + k^2*E{y^2[n-1]}    (22)
Multiplication System
The difference equation describing the first moment of the multiplication primitive system is
given by

E{y[n]} = E{k*x[n]*y[n-1]}
        = k*xd[n]*E{y[n-1]}.    (23)
The second moment, expressed in terms of the second moment of the input, is given by the
following difference equation:
E{y^2[n]} = E{(k*x[n]*y[n-1])^2}
          = k^2*E{x^2[n]}*E{y^2[n-1]}
          = k^2*(xd^2[n] + vs[n])*E{y^2[n-1]}    (24)
Division Systems
The derivations for the two division systems are a little more complex because they depend on
statistics of the inverse of the input, which presumably are available if the input distributions are
known.
Consider first the case y[n] = k*y[n-1] / x[n]. The first and second moment difference equations are readily derived:

E{y[n]} = k*E{y[n-1]}*E{1/x[n]}
E{y^2[n]} = k^2*E{y^2[n-1]}*E{1/x^2[n]}    (25)
The moments for the inverse of the input are not expressed in terms of xd[n] and vs[n] and
must be known or calculable from other information.
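For a concrete case where these inverse moments are available in closed form, take an i.i.d. input uniform on [a, b] with 0 < a < b; then E{1/x} = ln(b/a)/(b-a) and E{1/x^2} = (1/a - 1/b)/(b-a). A sketch propagating equation (25) under this assumption (the values a = 1, b = 2, k = 0.8 are illustrative):

```python
import math

def inverse_moments_uniform(a, b):
    """Closed-form E{1/x} and E{1/x^2} for x ~ Uniform(a, b), 0 < a < b."""
    e_inv = math.log(b / a) / (b - a)          # integral of (1/x)/(b-a)
    e_inv2 = (1.0 / a - 1.0 / b) / (b - a)     # integral of (1/x^2)/(b-a)
    return e_inv, e_inv2

def division_a_moments(e_inv, e_inv2, k, y0, N):
    """Propagate equation (25) for y[n] = k*y[n-1] / x[n], n = 1..N."""
    m1, m2 = y0, y0 * y0
    for _ in range(N):
        m1 = k * m1 * e_inv
        m2 = k * k * m2 * e_inv2
    return m1, m2

e_inv, e_inv2 = inverse_moments_uniform(1.0, 2.0)
m1, m2 = division_a_moments(e_inv, e_inv2, k=0.8, y0=1.0, N=3)
```

Here k*E{1/x} = 0.8*ln 2 < 1, so the first moment decays geometrically, consistent with the observation that the division systems tend either to zero or to unbounded growth.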
Next we consider the second division system, y[n] = x[n] / (k*y[n-1]). To obtain a recursion
relation, it is convenient to expand this,

y[n] = x[n] / (k*y[n-1])
     = x[n] * y[n-2] / x[n-1]    (26)
Then the difference equations for the first and second moments are easily expressed as

E{y[n]} = (xd[n] / xd[n-1]) * E{y[n-2]}    (27)

E{y^2[n]} = (xd^2[n] + vs[n])*E{1/x^2[n-1]}*E{y^2[n-2]}    (28)

Again, evaluation of these difference equations requires knowledge of the moments of the
inverse of the input.
We would like to re-emphasize that these equations are valid regardless of the stationary properties of the input. The mean (equal to the deterministic component xd[n]) need not be constant
and the variance of the random component vs[n] may also vary with time.
In the next chapter we apply these relations to benchmark tests.
4 Benchmark Tests
Section 4.1 will provide an overview of the formulas derived to gather the analytical and simulated results. Section 4.2 will show the two deterministic (xd[n]) components used in the input signal. Section 4.3 will summarize the results of using a gaussian distribution as the stochastic (xs[n])
component of the input signal. Sections 4.4 and 4.5 summarize the results of using a uniform and a
beta distribution, respectively. Last, Section 4.6 will summarize the division operation results.
4.1 Formulas Derived to Gather the Analytical and Simulated Results
Simulations
The Matlab simulations were run 1000 times and covered n=1 to n=20 in each case. The only
exception occurred in the beta distribution case: since the function used to compute random values
of a specific beta distribution took considerable computation time, the simulation was run only 50
times. Also, k=0.8 was found to give the best results, and the initial value y[1] was set to 0.08.
The mean function in Matlab was used to calculate the simulated first and second moments.
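The procedure just described can be sketched in Python (the original work used Matlab; the gaussian case is shown here, with the parameter choices quoted above and an illustrative helper name):

```python
import math, random

def run_simulation(op, xd, k=0.8, y1=0.08, runs=1000, N=20, seed=3):
    """Sample-mean first and second moments of y[n] over repeated runs.

    op -- one primitive operation, as a function (x, y_prev, k) -> y
    xd -- deterministic component xd[n] as a function of n
    """
    rng = random.Random(seed)
    s1 = [0.0] * (N + 1)
    s2 = [0.0] * (N + 1)
    for _ in range(runs):
        y = y1                                  # initial value y[1] = 0.08
        for n in range(2, N + 1):
            x = xd(n) + rng.gauss(0.0, 1.0)     # gaussian stochastic part
            y = op(x, y, k)
            s1[n] += y
            s2[n] += y * y
    return [v / runs for v in s1], [v / runs for v in s2]

addition = lambda x, y_prev, k: x + k * y_prev  # the addition primitive
m1, m2 = run_simulation(addition, xd=lambda n: math.sin(0.3 * n))
```

The same driver accepts any of the other primitive operations by swapping the `op` argument.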
The following table displays the functions used to simulate each primitive operation.

Table 1: Simulation Formulae

Primitive Operation    Formula
Addition               y[n] = x[n] + k*y[n-1]
Multiplication         y[n] = k*y[n-1]*x[n]
Subtraction            y[n] = k*y[n-1] - x[n]
Division (Case A)      y[n] = k*y[n-1] / x[n]
Division (Case B)      y[n] = x[n] / (k*y[n-1])
Analytical
While the simulations are slow and required the help of Matlab, the analytical results can be
computed by hand quickly; that is the power of the analytical approach over simulation. However, since Matlab
makes the calculations so easy, Matlab was also used to obtain the analytical solutions. Instead
of having to perform as many as 1000 trials as in the simulation case, it was only required to perform one pass from n=1 to n=20 for the analytical case.
The following table displays the formulas used to compute the analytical solutions for the primitive operations.

Table 2: Analytical Formulae

Primitive Operation    First Moment                      Second Moment
Addition               k*E{y[n-1]} + xd[n]               k^2*E{y^2[n-1]} + E{x^2[n]} + 2k*E{y[n-1]}*xd[n]
Multiplication         k*E{y[n-1]}*E{x[n]}               k^2*E{y^2[n-1]}*E{x^2[n]}
Subtraction            k*E{y[n-1]} - xd[n]               k^2*E{y^2[n-1]} + E{x^2[n]} - 2k*E{y[n-1]}*xd[n]
Division (Case A)      k*E{y[n-1]}*E{1/x[n]}             k^2*E{y^2[n-1]}*E{1/x^2[n]}
Division (Case B)      E{y[n-2]}*E{x[n]}*E{1/x[n-1]}     E{y^2[n-2]}*E{x^2[n]}*E{1/x^2[n-1]}

where E{x[n]} = xd[n] and E{x^2[n]} = xd^2[n] + variance(xs[n]).
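For the linear operations these recursions can be evaluated in a single pass. A minimal Python sketch of the addition row of Table 2, using E{x[n]} = xd[n] and E{x^2[n]} = xd^2[n] + variance(xs[n]) for the independent, zero-mean stochastic component (the function name and the unit noise variance are illustrative assumptions):

```python
import numpy as np

def analytical_addition_moments(xd, var_xs=1.0, k=0.8, y1=0.08):
    """Propagate E{y[n]} and E{y^2[n]} for y[n] = x[n] + k*y[n-1]."""
    N = len(xd)
    m1, m2 = np.empty(N), np.empty(N)
    m1[0], m2[0] = y1, y1 ** 2                # deterministic initial value
    for n in range(1, N):
        Ex = xd[n]                            # E{x[n]} = xd[n]
        Ex2 = xd[n] ** 2 + var_xs             # E{x^2[n]}
        m1[n] = k * m1[n - 1] + Ex
        m2[n] = k ** 2 * m2[n - 1] + Ex2 + 2 * k * m1[n - 1] * Ex
    return m1, m2

m1, m2 = analytical_addition_moments(np.sin(0.3 * np.arange(1, 21)))
```

Because x[n] is independent of y[n-1], the cross term in the second moment factors exactly, so this one pass replaces the 1000 simulation trials.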
Section 4.2: Two Deterministic Components (xd[n]) Used in the Input Signal
For the two deterministic signals, a sine wave and the product of a sine wave and a decaying exponential were used.
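In Python terms (a sketch; the thesis used Matlab), the two components are simply:

```python
import numpy as np

n = np.arange(1, 21)                     # n = 1..20, as in the benchmarks
xd1 = np.sin(0.3 * n)                    # Figure 4.1: pure sine wave
xd2 = np.exp(-0.15 * n) * np.sin(n)      # Figure 4.2: sine times decaying exponential
```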
Figure 4.1: Deterministic signal, xd1[n] = sin(0.3*n), used for the simulated and analytical results.
Figure 4.2: Deterministic signal, xd2[n] = e^(-0.15*n) * sin(n), used for the simulated and analytical results.
Section 4.3: Analytical and Simulated Results using Gaussian Distribution
The first xs[n] used was a zero-mean Gaussian distribution with a standard deviation equal to 1. The following is a plot of xs[n], where a Gaussian distribution is used to generate the data.
Figure 4.3: Stochastic signal, zero-mean Gaussian distribution with standard deviation = 1, used for the simulated and analytical results.
Section 4.3.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]gaussian
Addition
Figure 4.4: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.5: The top figure is the Simulated result; the bottom figure is the Simulated and Analytical result for the second moment of the output superimposed on the same plot.
Multiplication
Figure 4.6: The top figure is the Simulated result for the first moment of the output; the bottom figure is the Analytical and Simulated result superimposed on the same plot.
Figure 4.7: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
Subtraction
Figure 4.8: The top figure is the Simulated result for the first moment of the output; the bottom figure is the Analytical and Simulated result superimposed on the same plot.
Figure 4.9: The top figure is the Simulated result; the bottom figure is the Analytical result for the second moment of the output.
Section 4.3.2 Simulated and Analytical Results for x[n]= xd2[n] + xs[n]gaussian
Addition
Figure 4.10: The top figure is the Simulated result for the first moment of the output; the bottom figure is the Simulated and Analytical result superimposed on the same plot.
Figure 4.11: The top figure is the Simulated result for the second moment of the output; the bottom figure is the Simulated and Analytical moments superimposed on the same plot.
Multiplication
Figure 4.12: The top figure is the Simulated result for the first moment of the output; the bottom figure is the Simulated and Analytical result superimposed on the same plot.
Figure 4.13: The top figure is the Simulated result for the second moment of the output; the bottom figure is the Simulated and Analytical result superimposed on the same plot.
Subtraction
Figure 4.14: The top figure is the Simulated result for the first moment of the output; the bottom figure is the Simulated and Analytical result superimposed on the same plot.
Figure 4.15: The top figure is the Simulated result for the second moment of the output; the bottom figure is the Simulated and Analytical results superimposed on the same plot.
Section 4.4: Analytical and Simulated Results using Uniform Distribution
The second xs[n] used was a zero-mean uniform distribution with a standard deviation equal
to 1. The following is a plot of xs[n]uniform, where the data is generated using a uniform distribution.
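Note that a zero-mean uniform distribution with standard deviation 1 must be supported on [-√3, √3], since U(-a, a) has variance a²/3. A quick Python check (the sample size is an arbitrary choice):

```python
import numpy as np

a = np.sqrt(3.0)                          # U(-a, a) has variance a^2/3 = 1
rng = np.random.default_rng(0)
xs = rng.uniform(-a, a, size=100_000)     # zero-mean, unit-variance uniform samples
```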
Figure 4.16: Stochastic signal, zero-mean uniform distribution with standard deviation = 1, used for the simulated and analytical results.
Section 4.4.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]uniform
Addition
Figure 4.17: The top figure is the Simulated first moment of the output; the bottom figure is the Analytical and Simulated first moments superimposed on the same plot.
Figure 4.18: The top figure is the Simulated result for the second moment of the output; the bottom figure is the Simulated and Analytical second moments superimposed on the same plot.
Multiplication
Figure 4.19: The top figure is the Simulated first moment of the output; the bottom figure is the Analytical and Simulated first moments superimposed on the same plot.
Figure 4.20: The top figure is the Simulated second moment of the output; the bottom figure is the Analytical and Simulated second moments superimposed on the same plot.
Subtraction
Figure 4.21: The top figure is the Simulated first moment of the output; the bottom figure is the Analytical and Simulated first moments superimposed on the same plot.
Figure 4.22: The top figure is the Simulated second moment of the output; the bottom figure is the Simulated and Analytical second moments superimposed on the same plot.
Section 4.4.2 Simulated and Analytical Results for x[n] = xd2[n] + xs[n]uniform
Addition
Figure 4.23: The top figure is the Simulated first moment of the output; the bottom figure is the Analytical and Simulated first moments superimposed on the same plot.
Figure 4.24: The top figure is the Simulated second moment of the output; the bottom figure is the Simulated and Analytical second moments superimposed on the same plot.
Multiplication
Figure 4.25: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments superimposed on the same plot.
Figure 4.26: The top figure is the Simulated second moment; the bottom figure is the Simulated and Analytical second moments of the output superimposed on the same plot.
Subtraction
Figure 4.27: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output superimposed on the same plot.
Figure 4.28: The top figure is the Simulated second moment; the bottom figure is the Simulated and Analytical second moments superimposed on the same plot.
Section 4.5: Analytical and Simulated Results using Beta Distribution
The last xs[n] used was a zero-mean beta distribution with first four central moments of 0, 1, 0, and 2, respectively. As mentioned earlier, these results are based on 50 simulations. The following is a plot of the beta distribution used.
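One distribution with exactly these moments (an assumption about how it was constructed; the thesis does not say) is a symmetric Beta(3/2, 3/2) shifted and scaled to [-2, 2], i.e. the Wigner semicircle distribution, whose even standardized moments are the Catalan numbers 1, 2, 5, ... and whose support matches Figure 4.29:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                              # Beta(3/2, 3/2) on [0, 1] has a semicircular density
b = rng.beta(alpha, alpha, size=200_000)
xs = 4.0 * (b - 0.5)                     # zero mean, unit variance, support [-2, 2]

# first four sample moments, ideally close to (0, 1, 0, 2)
m = [np.mean(xs ** p) for p in (1, 2, 3, 4)]
```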
Figure 4.29: Zero-mean beta-distributed stochastic signal, with moments (0, 1, 0, 2).
Section 4.5.1 Simulated and Analytical Results for x[n] = xd1[n] + xs[n]beta
Addition
Figure 4.30: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output superimposed on the same plot.
Figure 4.31: The top figure is the Simulated second moment; the bottom figure is the Simulated and Analytical second moments of the output.
Multiplication
Figure 4.32: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output superimposed on the same plot.
Figure 4.33: The top figure is the Simulated second moment; the bottom figure is the Simulated and Analytical second moments of the output superimposed on the same plot.
Subtraction
Figure 4.34: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output superimposed on the same plot.
Figure 4.35: The top figure is the Simulated second moment; the bottom figure is the Analytical and Simulated second moments.
Section 4.5.2 Simulated and Analytical Results for x[n] = xd2[n] + xs[n]beta
Addition
Figure 4.36: The top figure is the Simulated first moment; the bottom figure is the Analytical and Simulated first moments of the output on the same plot.
Figure 4.37: The top figure is the Simulated second moment; the bottom figure is the Analytical and Simulated second moments.
Multiplication
Figure 4.38: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output on the same plot.
Figure 4.39: The top figure is the Simulated second moment; the bottom figure is the Analytical and Simulated second moments.
Subtraction
Figure 4.40: The top figure is the Simulated first moment; the bottom figure is the Simulated and Analytical first moments of the output on the same plot.
Figure 4.41: The top figure is the Simulated second moment; the bottom figure is the Analytical and Simulated second moments.
Section 4.6: Division Operation Results
This section shows the results gathered from the two division operations for k = 0.8.
Section 4.6.1: The Deterministic Component (xd[n]) Used in the Division Input Signal
For the division operations, a unique deterministic component was needed, since the analytical and simulated models derived for division assumed that x[n] would always take positive values. Therefore, a sine wave offset by 4 was used to ensure positive values only.
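Since sin(0.3n) is bounded by 1, the offset keeps the deterministic part of x[n] at or above 3, so a unit-variance disturbance leaves x[n] positive with overwhelming probability. A quick Python check (an illustration, not from the thesis):

```python
import numpy as np

n = np.arange(1, 21)
xd = np.sin(0.3 * n) + 4.0     # deterministic part lies in [3, 5]
assert xd.min() >= 3.0 and xd.max() <= 5.0
```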
Figure 4.42: Deterministic signal, xd[n] = sin(0.3*n) + 4, used for the division simulated and analytical results.
Section 4.6.2: Division Simulated and Analytical Results for x[n] = xd[n] + xs[n]gaussian
Case A
As mentioned earlier, division case A is y[n] = ky[n-1] /x[n].
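A Python sketch of the Monte Carlo run for this case (the function name and the Gaussian noise choice are illustrative assumptions; the offset deterministic component of Figure 4.42 keeps the divisor away from zero in practice):

```python
import numpy as np

def simulate_division_a(xd, k=0.8, y1=0.08, trials=1000, seed=0):
    """Monte Carlo moments of y[n] = k*y[n-1] / x[n], where
    x[n] = xd[n] + xs[n] with zero-mean, unit-variance Gaussian xs[n]."""
    rng = np.random.default_rng(seed)
    N = len(xd)
    y = np.empty((trials, N))
    y[:, 0] = y1
    for n in range(1, N):
        x = xd[n] + rng.standard_normal(trials)   # divisor; xd[n] >= 3 keeps it positive
        y[:, n] = k * y[:, n - 1] / x
    return y.mean(axis=0), (y ** 2).mean(axis=0)

steps = np.arange(1, 21)
m1, m2 = simulate_division_a(np.sin(0.3 * steps) + 4.0)
```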
Figure 4.43: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.44: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
Case B
As mentioned earlier, division case B is y[n] = x[n] / (ky[n-1]).
Figure 4.45: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.46: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated results for the second moment of the output.
Section 4.6.3: Division Simulated and Analytical Results for x[n] = xd[n] + xs[n] uniform
Case A
Figure 4.47: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.48: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
Case B
Figure 4.49: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.50: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
Section 4.6.4: Division Simulated and Analytical Results for x[n] = xd[n] + xs[n] beta
Case A
Figure 4.51: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.52: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
Case B
Figure 4.53: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the first moment of the output superimposed on the same plot.
Figure 4.54: The top figure is the Simulated result; the bottom figure is the Analytical and Simulated result for the second moment of the output superimposed on the same plot.
5 Conclusions and Further Research
The key objective of this thesis was to develop a more accurate probabilistic model for predicting cost distributions. It was also desirable to develop analytical expressions for the first and second moments of the output distributions, since analytical expressions are much faster to compute than simulated ones.
In doing the research, certain trends were found. The relations developed are valid only if the first and second moments of the output are finite. The analytical expression for the second moment depends on xs[n] only through its variance, while the analytical first moment of the output depends directly on xd[n] and is independent of xs[n]. For the first moment resulting from the addition operation, regardless of the xs[n] used, the analytical and simulated first moments are closely matched, have a shape very similar to xd[n], and are stable. While the simulated and analytical second moments resulting from addition are stable and tend to oscillate, they are not as closely matched as the first moments. The subtraction operation behaves very much like the addition operation (both being linear), except that the first moment resembles a reflection of xd[n].
For the first moment resulting from the multiplication operation, regardless of the xs[n] used, the analytical and simulated first moments are closely matched and are stable around 0. The analytical and simulated second moments are both stable, yet the simulated second moment tends to take longer to go to zero than the analytical second moment.
For the first moment of the division results, case A, the analytical and simulated results were closely matched and stable around zero. The first moment for the division results, case B, behaved similarly to that of case A. For case B, regardless of the xs[n] used, both the analytical and simulated first and second moments had similar form and some oscillation. When the value of k is changed, the results for the division and multiplication operations become very erratic. Therefore, multiplication and division exhibit undesirable behavior and are not of practical importance.
The difference equations used in this research were developed to represent the time evolution of the moments. These difference equations depend on the moments of the input and on the moments of the output at previous time steps. Their validity was verified by comparison with Monte Carlo simulations.
Although a restricted input was used, no stationarity requirements were imposed. The choice of input is convenient because the elements in the primitive calculations will always be independent. It is limiting, however, because the output will not have the same form as the input; that is, the output cannot be decomposed into a deterministic component and a zero-mean stochastic component that is independent of values at other times. Thus the results do not allow calculation of the output of a series connection of these primitive stochastic systems.
Work to extend the research can follow a few paths. First, additional moments of the output of two random-variable inputs can be derived. An investigation of the resulting output when the two inputs are dependent random variables can also be performed. The analysis could also be extended to allow calculation for more general inputs; this should probably be done for the linear systems first. Another path is to develop more meaningful representations of nonlinear systems: for example, one might restrict nonlinear blocks to lie outside feedback loops, or to involve only one variable followed by an addition or subtraction block.
The ultimate goal would be a cost estimation scheme that encompasses these paths. Such a
system would greatly increase confidence in estimating the cost of development.
Bibliography
[1] D. M. Etter, Engineering Problem Solving with Matlab. New Jersey: Prentice Hall, 1997, ISBN 0-13-397688-2.
[2] W. J. Fabrycky and B. S. Blanchard, Life-Cycle Cost and Economic Analysis. Englewood Cliffs: Prentice Hall, 1991.
[3] M. D. Springer, The Algebra of Random Variables. New York: John Wiley & Sons, 1979.
[4] I. Trefz, A Comparison of Methods for the Treatment of Uncertainties in the Modeling of Complex Systems. Master's thesis, School of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1998.
[5] K. K. Afridi, A Methodology for the Design and Evaluation of Advanced Automotive Electrical Power Systems. Doctoral thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1998.
[6] G. R. McNichols, On the Treatment of Uncertainty in Parametric Costing. Doctoral thesis, School of Engineering and Applied Science, George Washington University, 1976.
Suggested Additional Readings
W. Cheney and D. Kincaid, Numerical Mathematics and Computing. New York: Brooks/Cole Publishing Company, 1999, ISBN 0-534-35184-0.
B. Gnedenko, The Theory of Probability. Moscow: Mir Publishers, 1975.
J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods. London: Chapman and Hall, 1965.
M. Kendall, A. Stuart, and J. K. Ord, Kendall's Advanced Theory of Statistics, 5th ed. New York: Oxford University Press, 1987.
S. Lipschutz, Linear Algebra. USA: McGraw-Hill, 1991, ISBN 0-07-038007-4.
A. V. Oppenheim and A. S. Willsky, Signals and Systems. New Jersey: Prentice Hall, 1997, ISBN 0-13-814757-4.
V. S. Pugachev, Probability Theory and Mathematical Statistics for Engineers. Oxford: Pergamon Press, 1984.
F. Scheid, Numerical Analysis. USA: McGraw-Hill, 1988, ISBN 0-07-055221-5.