Pulse response based identification of low-order models

Department of Information Technology
Narra Suresh
Master’s Thesis
Electrical Engineering
2006
Advisor: Dr. Magnus Mossberg
Karlstad University, Karlstad
ABSTRACT
In practice, many systems are modeled by first and second order models including a
time delay. These low-order models can often be used to describe the most important and
characteristic features of a system and be a basis for control design. In industry, it is important
to have fast, reliable, and easily applicable methods for estimating the parameters in such
models. It is well known that, for practical reasons, many systems cannot be excited by an
arbitrary input signal. However, a pulse-shaped input signal can most often be used. The aim
of this thesis project is to study identification of low-order models based on pulse-shaped
input signals.
ACKNOWLEDGEMENT
I would like to take this opportunity to express my gratitude to some of the people who
were involved in this project. First of all, I would like to specially thank Dr. Magnus Mossberg
for giving me the opportunity of doing my Master's Thesis project and for being my overall
supervisor for this project. I would also like to thank Dr. Andreas Jakobsson for his kind
consideration.
Last but not least, I would like to thank my parents, friends and well wishers for their
moral and unconditional support.
TABLE OF CONTENTS
1. INTRODUCTION
1.1. System
1.2. Model
1.2.1. Mathematical model
2. EVALUATION OF PULSE RESPONSE
2.1. Lower order systems
2.2. Input signals
2.3. Pulse response of the second order system
3. ESTIMATION OF MODEL PARAMETERS
3.1. Parametric methods
3.2. Non parametric methods
3.3. Parameter estimation
4. CHARACTERISTICS OF AN ESTIMATOR
5. SIMULATION RESULTS
5.1. Monte Carlo simulations
CONCLUSIONS
REFERENCES
1. INTRODUCTION
System identification has been an active area of automatic control for a few decades,
and it has strong links to other areas of engineering, including signal processing, but also to
optimization and statistics. At present, system identification for continuous systems with time
delay is also an interesting case.
Generally, most industrial applications are in the form of continuous time domain
parametric models described by differential equations. Identification of such models has some
difficulties [4] in practice. A direct approach is to derive an equation for the response of a
presumed model to a specific input and fit it to the output measurement of the dynamic test.
Different types of identification methods are available, and these are generally classified into
parametric and non parametric methods.
A great deal of attention has been paid to the modeling and identification of linear
systems in recent years. In reality, all systems are non-linear to some degree, and the presence
of non-linear distortions can often introduce significant errors into the identified linear model.
Generally, a linear time-invariant model can be described by its transfer functions. These
functions can be identified by direct techniques without first selecting a confined set of
possible models. Such methods are called non parametric methods. Open loop systems are
more convenient than closed loop systems, because closed loop configurations will typically
lead to problems for non parametric methods. This work has focused on the identification of
simple linear low-order models to verify the simplicity and efficiency of the pulse signal as an
input.
In parametric estimation problems, we are given a set of (noisy) observations from a
known model, and the goal is to estimate the parameters of the model such that some objective
function is maximized (or minimized). For example, in linear regression, we minimize the
sum of squared errors. In many practical problems the method of maximum likelihood
provides a good solution for parametric estimation problems. However, if there is uncertainty
about the model, parametric estimation methods, such as maximum likelihood estimation, can
easily overfit data and lead to risky decisions. The recognition of the limitations of parametric
methods led to the surge of non parametric methods.
Non parametric estimation considers the use of correlation and spectral analysis to
obtain estimates of a system from the available identification data. These methods are also
useful for parametric system identification: a non parametric model derived from a parametric
model can efficiently express the input/output transformational properties defined by the
parametric model, and a non parametric model estimated from experimental data can be used
to validate the parametric model. Here, this work is focused on non parametric estimation.
For the industrial application of advanced system testing and modeling techniques,
system identification is the main research interest. To identify a system (an open loop system)
we need to excite it with a signal. Generally, deterministic input signals are favored over
random signals because they allow the user to have complete control over the input signal
spectrum. Among these signals are mainly pulse, step, Random Binary Sequence (RBS),
Pseudo-Random Binary Sequence (PRBS), and m-level Pseudo-Random (m-PRS) inputs.
Furthermore, the system could also be excited with a signal that has voltage level and
frequency content corresponding to its actual operating conditions. In this thesis, a pulse
shaped input signal is used to identify the system.
There are two types of model identification methods, namely on-line and off-line.
On-line methods are used to infer a model at the same time as data collection: the model is
updated using a recursive algorithm at each time instant when new data are collected, and is
applied to support decisions to be taken on-line for control. Off-line methods are used on
batch-wise data to perform identification of either time domain or frequency domain models.
This model development requires special inputs for open loop experiments.
One development common to both on-line and off-line methods is the use of
estimation with recursive least squares, applied directly to the measured input and output
data. Regarding the method of recursive least squares, there are some disadvantages in
contrast to off-line methods of non least squares type [6]. Because of that, an off-line method
of non least squares type that uses the transient step response for identification is more
convenient for industrial use, because the experiments are easy to perform and the
development of models for design does not involve decisions or an instant need for a model
during the identification process.
System identification is the field of mathematical modeling of systems from
experimental data. System identification methods are used to obtain appropriate models for
the design of a prediction algorithm, or for simulation and control of the system.
1.1. SYSTEM
In simple terms, a system is defined as an object in which variables of different kinds
interact and produce observable signals. The observable signals that are of interest to us are
usually called outputs. The system is also affected by external signals; those signals that can
be manipulated by the observer are called inputs.
1.2. MODEL
When we interact with a system, we need some concept of how its variables relate to
each other; such a relationship among observed signals is called a model of the system.
Many industrial systems must be controlled in order to run safely and efficiently. To
design a controller, some type of model of the process is needed. Especially in the case of
optimal controller design, the selected model must be able to describe the properties of the
disturbances acting on the process. In some applications, like signal processing, the recorded
data is the combination of the original signal and noise. In that case, a model is necessary to
design the filter parameters. In many cases the primary aim of modeling is to gain knowledge
of the dynamic behavior of the system. Different types of models [1] are available.
1.2.1. MATHEMATICAL MODEL
A mathematical model can be constructed in two ways.
Mathematical modeling: This is an analytical approach. The dynamic behavior of a process is
described by using basic laws from physics.
System identification: This is an experimental approach. The system is excited in some
experiments, and then a model is fitted to the recorded data by assigning suitable values to its
parameters.
The system identification technique is performed by exciting the system and observing
its input and output.
2. EVALUATION OF PULSE RESPONSE
2.1. LOWER ORDER SYSTEMS
Any basic system to be investigated has a single input and single output.
Fig: 1 A dynamic system with input u(t) and output y(t)
A first order system is the simplest system. The step response is exponential and the
frequency response plot decays monotonically. Oscillation and resonance are not possible.
A second order system is structurally simple and can in many ways be considered a reliable
idealization of a whole class of systems of higher order. The poles affect the manner in which
the system responds to external inputs. Mainly, the system is affected by the poles which are
nearest to the imaginary axis. A second order model captures these nearest poles, and from it
one can clearly study the behavior of the system. Generally, second order behavior is part of
the higher order system behavior, so in general a higher order system can be approximately
analyzed as a second order system by looking at the poles nearest to the imaginary axis.
The simple second order system is shown below.
G = ωn² / (s² + 2ζωn s + ωn²)
The system properties depend upon the parameters ω n and ζ , where ω n is the natural
frequency and ζ is the damping ratio. The position of the poles depends upon the value of
damping ratio and natural frequency. When
• ζ = 0, the poles are on the imaginary axis, the response is oscillatory, and this is called
the undamped case.
• 0 < ζ < 1, the poles are complex conjugates placed in the left half of the plane; this is
called the under damped case.
• ζ = 1, the poles are real and equal; this is called the critically damped case.
• ζ > 1, the poles are real, negative and different; this is called the over damped case.
Many second order systems have complex poles, and higher order systems have
more than one pair of complex poles. But the study of the dominant (low frequency) poles
alone can provide a first approximation for design and performance.
2.2. INPUT SIGNALS
The selection of an input signal has a significant effect on the identification
results. Moreover, the selection depends on different factors like allowed disturbances, noise,
etc. The vital consideration for the selected signal is that it should have enough energy
to excite the system. Widely used test signals for general systems are step, ramp, pulse, etc. A
step signal is simple and efficient for first order systems, but not so efficient in the case of a
second or higher order system associated with delay and noise, because when sudden
disturbances are added to the process they corrupt the response, and the corresponding data is
not reliable and should not be used for identification. Using a step input causes estimation
errors because of low frequency noise and offset errors. High frequency noise can cause
trouble if it is not filtered out before sampling the signals.
For industrial processes, a pulse test signal is preferred over step and ramp test signals, as
the former returns the input and output to the desired steady-state values and also introduces
the least perturbation to process operation. It is noticed that the output response will still be
significant and exist longer even after the pulse disappears. This feature is taken advantage of
to simplify the system equation and perform the parameter estimation. The rectangular pulse
is represented as the sum of one step change made at time zero and one opposite step change
made at time D.
The pulse signal is defined as

u(t) = A(H(t) − H(t − D)),

where A is the amplitude of the pulse, D is the pulse width, and

H(t) = 0 for t < 0,  H(t) = 1 for t ≥ 0,

is the Heaviside unit step function. When we apply this pulse signal to
the system we will get an output that is called the pulse response of the system. The pulse
response depends upon the system parameters especially the transient part. Using the transient
part we can estimate the system parameters easily i.e., we can modify the actual equation into
minimized form by considering some position of the transient part like maximum or minimum
of the response.
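As an illustrative sketch (not part of the thesis), the pulse input above can be generated numerically; NumPy is assumed, and the time grid and horizon are arbitrary choices:

```python
import numpy as np

def heaviside(t):
    """Heaviside unit step: 0 for t < 0, 1 for t >= 0."""
    return np.where(t >= 0, 1.0, 0.0)

def pulse(t, A=1.0, D=5.0):
    """Rectangular pulse u(t) = A*(H(t) - H(t - D))."""
    return A * (heaviside(t) - heaviside(t - D))

# Sample the pulse on a uniform time grid
t = np.linspace(0.0, 20.0, 2001)
u = pulse(t, A=1.0, D=5.0)
```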
2.3. PULSE RESPONSE OF THE SECOND ORDER SYSTEM
Consider the output equation

Y(s) = G(s) U(s),

where G(s) is the simple second order system

G(s) = ωn² / (s² + 2ζωn s + ωn²)

and where U(s) is the Laplace transformed pulse signal

U(s) = (A/s)(1 − e^(−Ds)).

Thus

Y(s) = [ωn² / (s² + 2ζωn s + ωn²)] · (A/s)(1 − e^(−Ds)).
Use partial fractions to find the time response output y(t). We have

(Is + J) / (s² + 2ζωn s + ωn²) + K/s = [ωn² / (s² + 2ζωn s + ωn²)] · (A/s).

Consider the numerator:

s(Is + J) + K(s² + 2ζωn s + ωn²) = ωn² A.

To find the constants, compare the s⁰, s¹ and s² terms; we get K = A, J = −2Aζωn and
I = −A, respectively. Substituting the constants, we get
Y(s) = [ (−As − 2ζωn A) / (s² + 2ζωn s + ωn²) + A/s ] (1 − e^(−Ds))

Y(s) = (−As − 2ζωn A) / [(s + ζωn)² + ωn²(1 − ζ²)] + A/s
       + (As e^(−Ds) + 2Aζωn e^(−Ds)) / [(s + ζωn)² + ωn²(1 − ζ²)] − A e^(−Ds)/s
Apply inverse Laplace to the above equation:

y(t) = A − A e^(−ζωn t) cos(ωn √(1 − ζ²) t) − (Aζ/√(1 − ζ²)) e^(−ζωn t) sin(ωn √(1 − ζ²) t)
       − θ(t − D) [ A − A e^(−ζωn (t−D)) cos(ωn √(1 − ζ²) (t − D))
                    − (Aζ/√(1 − ζ²)) e^(−ζωn (t−D)) sin(ωn √(1 − ζ²) (t − D)) ]     (1)

Here θ(t − D) = H(t − D) is Heaviside's step function. The property of this function is
θ(t − D) = 1 for t ≥ D, and θ(t − D) = 0 for t < D.
Equation (1) represents the pulse response of the second order system. The output of a system
depends upon the parameters of that system, which in the present case are ωn and ζ.
Figure 2 shows how the output is affected by varying ζ.
Fig: 2 Pulse response of the second order system with A = 1, D = 5, ωn = 1 and different ζ
Figure 3 shows how the output varies when changing the parameter ωn.
Fig: 3 Pulse response of the second order system with A = 1, D = 5, ζ = 0.2 and different ωn
From the above two plots, we conclude that the transient part of the pulse response is
more affected when varying the system parameters. This means that we can estimate the
parameters by considering the transient part only, i.e., here we consider the time t when the
output y(t) reaches the first maximum.
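The pulse response and the location of its first maximum can be checked numerically; this is a sketch assuming NumPy, with the closed-form response written as the step response minus the same step response delayed by D (as in equation (1)), and an arbitrary grid resolution:

```python
import numpy as np

def pulse_response(t, wn, zeta, A=1.0, D=5.0):
    """Closed-form pulse response of G(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2):
    step response minus the same step response delayed by D."""
    wd = wn * np.sqrt(1.0 - zeta**2)          # damped natural frequency
    def step(tt):
        tt = np.maximum(tt, 0.0)              # the response is zero before t = 0
        e = np.exp(-zeta * wn * tt)
        return A * (1.0 - e * (np.cos(wd * tt)
                               + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * tt)))
    return step(t) - step(t - D)

t = np.linspace(0.0, 20.0, 20001)
y = pulse_response(t, wn=1.0, zeta=0.2, A=1.0, D=5.0)
t_max = t[np.argmax(y)]                       # time of the first (global) maximum
```

With D = 5 the pulse outlasts the first peak, so t_max should agree with the undamped-grid prediction π/(ωn√(1 − ζ²)) discussed in the next chapter.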
3. ESTIMATION OF MODEL PARAMETERS
Suppose that we collect a set of models, parameterized as a model structure using a
parameter vector. The search for the best model within the set then becomes a problem of
estimating the parameter vector.
There are different types of identification methods available, and they are generally
classified into parametric and non parametric methods.
3.1. PARAMETRIC METHODS
Parametric methods can be characterized as a mapping from the recorded data to the
estimated parameters. Commonly used parametric methods are the least squares method and
instrumental variable methods.
• Least squares method

It is a statistical approach to estimate the expected value or function with the highest
probability from observations with random errors. In the least squares method, the highest
probability is replaced by minimizing the sum of squares of the residuals. The residual is
defined as the difference between an observation and the estimated value of a function,

ε(t) = y(t) − φᵀ(t)θ.

The method of least squares assumes that the best estimate of the true parameter θ is the
estimate that has the minimal sum of the squared deviations from a given set of data,

VN(θ, Z^N) = (1/N) Σ_{t=1}^{N} (1/2) [y(t) − φᵀ(t)θ]²,

where N denotes the number of data points. This is the least squares criterion for the linear
regression. The general form of linear regression is

ŷ(t|θ) = φᵀ(t)θ + µ(t),

where φ(t) is the regression vector and µ(t) is a known data-dependent vector.

The unique feature of this criterion, developed from the linear parameterization and
quadratic criterion, is that it is a quadratic function in θ. The LS estimate of θ is

θ̂N^LS = arg min VN(θ, Z^N) = [ (1/N) Σ_{t=1}^{N} φ(t)φᵀ(t) ]⁻¹ (1/N) Σ_{t=1}^{N} φ(t) y(t).

The main disadvantage of the least squares method is that the parameter estimates are
consistent only under restrictive conditions.

• Instrumental variable method
In typical cases the LS estimate will not tend to the true value, the reason being correlation
between the known data-dependent vector and the regression vector. To overcome this
problem, a general correlation vector ζ(t) is introduced instead of the regression vector. The
elements of the general correlation vector are then called instruments or instrumental
variables. The IV estimate is

θ̂N^IV = [ (1/N) Σ_{t=1}^{N} ζ(t)φᵀ(t) ]⁻¹ (1/N) Σ_{t=1}^{N} ζ(t) y(t).
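To make the LS estimate concrete, here is a hedged sketch: the first order ARX-type regressor φ(t) = [y(t−1), u(t−1)]ᵀ and the true parameter values are assumed for illustration only, and the estimate is computed from the normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y(t) = theta1*y(t-1) + theta2*u(t-1) + noise (assumed example model)
theta_true = np.array([0.8, 1.0])
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = theta_true[0] * y[t - 1] + theta_true[1] * u[t - 1] \
           + 0.01 * rng.standard_normal()

# Regression vectors phi(t) = [y(t-1), u(t-1)]^T stacked as rows
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]

# LS estimate: theta_hat = (sum phi phi^T)^-1 sum phi y
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
```

With white equation-error noise, as here, the estimate converges to the true parameters as N grows.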
3.2. NON PARAMETRIC METHODS
Non parametric methods are characterized by the property that the resulting models are
curves or functions. In the present case we are interested in a non parametric model. There are
four types of analysis in non parametric methods.

• Transient analysis

It is easy to apply. It gives at least a first model which can be used to obtain rough
estimates of the parameters. The parameters of the model could be determined by comparing
the measured response with a set of model responses and choosing the curve that is most
similar to the recorded data. However, one can also proceed in a number of alternative ways.
One possibility is to look at the local extrema (maxima and minima) of the response. It
generally uses step and impulse signals as input. It is sensitive to noise.
• Frequency analysis

It is based on the use of sinusoids as input, so it is convenient to use continuous time
models. Assume for convenience that the system is initially at rest; due to the assumption of
stability, nonzero initial values would only give a decaying transient effect. The system can be
represented using a weighting function h(t) as

y(t) = ∫₀ᵗ h(τ) u(t − τ) dτ,

where h(t) is the function whose Laplace transform equals G(s).

In order to reduce sensitivity to noise, a correlation technique is needed, i.e., the output
is multiplied by sin ωt and cos ωt and the result is integrated over the interval [0, T]. It takes a
long time to perform the identification experiment. The frequency analysis can be presented as
a Bode plot or an equivalent representation of the transfer function.
• Correlation analysis

It is generally based on white noise as input. It gives the weighting function as the resulting
model. It is rather insensitive to additive noise on the output signal. The general form of the
model used in correlation analysis is

y(t) = Σ_{k=0}^{∞} h(k) u(t − k) + v(t),

where h(k) is the weighting sequence and v(t) is a disturbance term.

The estimate ĥ(k) of the weighting function h(k) can be determined from the relation

r_yu(τ) = Σ_{k=0}^{∞} h(k) r_u(τ − k),

where

r_yu(τ) = E y(t + τ) u(t)  and  r_u(τ) = E u(t + τ) u(t),

by replacing the covariance functions with their estimates, i.e., by solving

r̂_yu(τ) = Σ_{k=0}^{∞} ĥ(k) r̂_u(τ − k).

The covariance functions can be estimated from the data as

r̂_yu(τ) = (1/N) Σ_{t=1−min(τ,0)}^{N−max(τ,0)} y(t + τ) u(t),   τ = 0, ±1, ±2, ...

r̂_u(τ) = (1/N) Σ_{t=1}^{N−τ} u(t + τ) u(t),   r̂_u(−τ) = r̂_u(τ),   τ = 0, 1, 2, ...
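A minimal sketch of correlation analysis (the short weighting sequence is an assumed example; for a white-noise input r_u(τ) = r_u(0)δ(τ), so the convolution relation collapses to ĥ(k) = r̂_yu(k)/r̂_u(0)):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
u = rng.standard_normal(N)                     # white-noise input
h_true = np.array([0.5, 1.0, 0.7, 0.2])        # assumed example weighting sequence
y = np.convolve(u, h_true)[:N] + 0.05 * rng.standard_normal(N)  # output plus disturbance

def r_hat(x, z, tau):
    """Estimated covariance r_xz(tau) = (1/N) * sum_t x(t+tau) z(t), tau >= 0."""
    n = len(x)
    return np.dot(x[tau:n], z[:n - tau]) / n

# For white-noise input, h(k) = r_yu(k) / r_u(0)
ru0 = r_hat(u, u, 0)
h_hat = np.array([r_hat(y, u, k) / ru0 for k in range(len(h_true))])
```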
• Spectral analysis

It is a versatile non parametric method, with no specific restriction on the input except that it
must be uncorrelated with the disturbance. Spectral analysis for determining transfer functions
of linear systems was developed from statistical methods for spectral estimation. Here the
covariance functions are used in the form of spectral densities,

φ_yu(ω) = H(e^(−iω)) φ_u(ω),

where

φ_yu(ω) = (1/2π) Σ_{τ=−∞}^{∞} r_yu(τ) e^(−iτω),
φ_u(ω) = (1/2π) Σ_{τ=−∞}^{∞} r_u(τ) e^(−iτω),  and
H(e^(−iω)) = Σ_{k=0}^{∞} h(k) e^(−ikω).

The transfer function H(e^(−iω)) can be estimated from the estimated spectral densities as

Ĥ(e^(−iω)) = φ̂_yu(ω) / φ̂_u(ω).

To use the above equation we must find a reasonable method for estimating the spectral
densities. A straightforward approach would be to take

φ̂_yu(ω) = (1/2π) Σ_{τ=−N}^{N} r̂_yu(τ) e^(−iτω),

which can be organized in another way as

φ̂_yu(ω) = (1/2πN) Σ_{s=1}^{N} Σ_{t=1}^{N} y(s) u(t) e^(−isω) e^(itω) = (1/2πN) Y_N(ω) U_N(−ω),

where Y_N(ω) = Σ_{s=1}^{N} y(s) e^(−isω) and U_N(ω) = Σ_{s=1}^{N} u(s) e^(−isω) are the
discrete Fourier transforms of the sequences {y(t)} and {u(t)}, padded with zeros,
respectively. For ω = 0, 2π/N, 4π/N, ..., π they can be computed efficiently using fast
Fourier transform algorithms.
The model obtained by a non parametric method is not very accurate, but it can be used
as a rough model.
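As a sketch of the spectral estimate Ĥ = φ̂_yu/φ̂_u, the ratio can be formed from FFTs; periodic (circular) input-output data is assumed here so that the DFT relation Y_N(ω) = H(e^(−iω)) U_N(ω) holds exactly bin by bin, and the example impulse response is mine:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096
u = rng.standard_normal(N)                     # input, uncorrelated with any disturbance

# Assumed example impulse response h(k); H(e^{-iw}) on the DFT frequency grid
h = np.array([1.0, 0.5, 0.25])
H_true = np.fft.fft(np.concatenate([h, np.zeros(N - h.size)]))

# Generate the output by circular convolution so Y_N = H * U_N exactly
U = np.fft.fft(u)
y = np.real(np.fft.ifft(H_true * U))

# Empirical estimate: H_hat = phi_yu/phi_u = (Y_N U_N(-w)) / (U_N U_N(-w))
Y = np.fft.fft(y)
H_hat = (Y * np.conj(U)) / (U * np.conj(U))    # U_N(-w) = conj(U_N(w)) for real u
```

In practice the raw ratio is noisy and is usually smoothed over frequency; the circular setup here only illustrates the algebra.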
3.3. PARAMETER ESTIMATION
In the present work, the transient analysis approach is used to estimate the system parameters
by looking at the extrema of the pulse response.
To calculate the value of t at which the first maximum occurs, the first derivative of the
response should be zero. So differentiate y(t) with respect to t and equate to zero. Equation
(1) contains two parts, one without delay and the other with delay. The second part is equal
to zero when t < D.
Consider the case t < D. For simplicity, assume that

ζωn = a     (2)

ωn √(1 − ζ²) = b.     (3)

Then

y′(t) = Aa e^(−at) cos(bt) + Ab e^(−at) sin(bt) − Aa e^(−at) cos(bt) + (Aa²/b) e^(−at) sin(bt) = 0.

After simplification, the above equation becomes

(Aωn²/b) e^(−at) sin(bt) = 0.

Since e^(−at) is not equal to zero for any t,

sin(bt) = 0,  i.e.,  sin(bt) = sin(nπ),  n = 0, 1, 2, ...

The sine function is periodic, but in our case we consider only the first maximum, so n = 1.
Therefore

tmax = π/b.     (4)
Consider the case t ≥ D:

y′(t) = Aa e^(−at) cos(bt) + Ab e^(−at) sin(bt) − Aa e^(−at) cos(bt) + (Aa²/b) e^(−at) sin(bt)
        − Aa e^(−a(t−D)) cos(b(t − D)) − Ab e^(−a(t−D)) sin(b(t − D)) + Aa e^(−a(t−D)) cos(b(t − D))
        − (Aa²/b) e^(−a(t−D)) sin(b(t − D))
        − θ′(t − D) · A[1 − e^(−a(t−D)) cos(b(t − D)) − (a/b) e^(−a(t−D)) sin(b(t − D))] = 0

Here θ′(t − D) = δ(t − D). According to the delta property, δ(t − D) = 0 except when t = D,
so the equations below are valid for t ≠ D.

After simplification, the above equation becomes

(Aωn²/b) e^(−at) (sin(bt) − e^(aD) sin(b(t − D))) = 0.

Since e^(−at) is not equal to zero for any t,

sin(bt) − e^(aD) sin(b(t − D)) = 0
sin(bt) = e^(aD) (sin(bt) cos(bD) − sin(bD) cos(bt))
sin(bt) (e^(aD) cos(bD) − 1) = e^(aD) sin(bD) cos(bt)

tan(bt) = sin(bD) / (cos(bD) − e^(−aD)).
D does not affect the first maximum when it is greater than a critical value Dc. To find Dc,
we use the tmax from equation (4), which was obtained from the previous condition t < D.
From that we get

Dc = π/b.     (5)

Therefore

tmax1 = π/b     when D ≥ Dc,     (6)

tmax2 = Dc + (1/b) tan⁻¹[ sin(bD) / (cos(bD) − e^(−aD)) ]     when D < Dc.     (7)

We get two equations for tmax. The second order system contains two parameters, and we
have two equations for tmax which contain the two system parameters, so it is possible to
estimate the system parameters from equations (6) and (7).
The verification of the above equations by simulation results is shown in Figures 4 and 5.
Fig: 4 tmax = 0.7251 when D < Dc, with A = 1, D = 1, ωn = 1 and ζ = 0.2
When D = 5, i.e., D ≥ Dc:
Fig: 5 tmax = 1.526 when D ≥ Dc, with A = 1, D = 5, ωn = 1 and ζ = 0.2

Now we have two equations for tmax: one when D ≥ Dc and the other when D < Dc. If we
consider two experiments on the system, one with D ≥ Dc and the other with D < Dc, then by
observing these two experiments we can find Dc, since equation (6) gives tmax1 = π/b = Dc.
Now consider the D ≥ Dc experiment; from it we get an expression for one of the
parameters, i.e.,

b̂ = π/Dc.     (8)

From the second equation we get another expression for the parameters, i.e.,

â = −(1/D) ln[ cos(b̂D) − sin(b̂D) / tan(b̂(tmax2 − Dc)) ].     (9)

We can also write the above equation as

â = (1/D) ln[ tan(b̂(tmax2 − Dc)) / ( cos(b̂D) (tan(b̂(tmax2 − Dc)) − tan(b̂D)) ) ].     (10)

From the equations (2) and (3) we get

ω̂n = √(â² + b̂²)     (11)

ζ̂ = â / ω̂n.     (12)
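The estimator given by equations (8)-(12) can be sketched as follows (function names are mine; this is a noise-free round trip with Dc taken as the measured tmax1 from the D ≥ Dc experiment):

```python
import numpy as np

def tmax2(a, b, D):
    """Time of the first maximum for D < Dc, equation (7):
    tmax2 = Dc + (1/b)*arctan(sin(bD)/(cos(bD) - exp(-aD))), with Dc = pi/b."""
    Dc = np.pi / b
    return Dc + np.arctan(np.sin(b * D) / (np.cos(b * D) - np.exp(-a * D))) / b

def estimate(Dc, t2, D):
    """Recover (wn, zeta) from the two measured peak times via (8)-(12)."""
    b_hat = np.pi / Dc                                                     # (8)
    a_hat = -np.log(np.cos(b_hat * D)
                    - np.sin(b_hat * D) / np.tan(b_hat * (t2 - Dc))) / D   # (9)
    wn_hat = np.sqrt(a_hat**2 + b_hat**2)                                  # (11)
    zeta_hat = a_hat / wn_hat                                              # (12)
    return wn_hat, zeta_hat

# Noise-free check with wn = 1, zeta = 0.2 and a short pulse D = 1 < Dc
wn, zeta = 1.0, 0.2
a, b = zeta * wn, wn * np.sqrt(1 - zeta**2)    # equations (2) and (3)
Dc = np.pi / b                                  # measured as tmax1 in the D >= Dc experiment
wn_hat, zeta_hat = estimate(Dc, tmax2(a, b, D=1.0), D=1.0)
```

Without noise the inversion is exact, so the recovered parameters equal the true ones up to floating point.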
4. CHARACTERISTICS OF AN ESTIMATOR
An estimator is a function of data that produces an estimate for an unknown parameter
of the distribution that produced the data. The quality of an estimator can be assessed by its
mean value, variance and mean-square error.
RANDOM VARIABLES
A random variable is described by its probability density function and yields a
realization if an experiment is performed.
NOISE
Noise is an undesired disturbance within the frequency band of interest: the summation of
unwanted or disturbing energy introduced into a communication system from man-made and
natural sources; a disturbance that affects a signal and that may distort the information carried
by the signal. Commonly considered noise types are Gaussian noise and white noise.
SIGNAL TO NOISE RATIO
It is a measure of signal strength relative to background noise. It is generally measured in
decibels (dB),

SNR = 10 log₁₀[ E(signal²) / E(noise²) ].
VARIANCE
The variance of a random variable measures the width of its probability density function.
According to statistics, the variance of a set of samples is the mean of the squares of the
differences between the respective samples and their mean. The mathematical representation
is

σ² = (1/(n − 1)) Σ_{i=1}^{n} (xᵢ − x̄)²,

where n is the number of samples, xᵢ is the value of sample i, x̄ is the mean of the samples,
and σ² is the variance. The square root of the variance is the standard deviation.
The variance can also be defined as

Var[x] = E[(x − E[x])²].
BIAS
The bias of an estimator θ̂ of a parameter θ is the difference between the expectation of θ̂
and the value of θ. It is mathematically denoted as

Bias(θ̂) = E(θ̂) − θ.

Noise can introduce bias into the model. This bias will be small when the signal to noise
ratio is high.
MEAN SQUARE ERROR
Mean square error is a criterion for an estimator: the choice is the one that minimizes the
sum of squared errors due to bias and due to variance.
The MSE of an estimator θ̂ of a parameter θ is defined as

MSE(θ̂) = E[(θ̂ − θ)²].

We can express the MSE in terms of variance and bias, i.e.,

MSE(θ̂) = Var(θ̂) + (Bias(θ̂))².

In the unbiased case the mean square error is equal to the variance of the estimated
parameter.
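The decomposition MSE = Var + Bias² can be checked numerically; the biased toy estimator below is an assumed example, not one from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 2.0

# A deliberately biased estimator of theta: true value plus a constant offset plus noise
estimates = theta_true + 0.1 + 0.5 * rng.standard_normal(100000)

mse = np.mean((estimates - theta_true) ** 2)
var = np.var(estimates)                 # population variance (ddof=0)
bias = np.mean(estimates) - theta_true

# For the sample versions with ddof=0 the identity MSE = Var + Bias^2 holds exactly
```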
5. SIMULATION RESULTS
5.1. MONTE CARLO SIMULATIONS
Monte Carlo (MC) methods are algorithms for solving various kinds of problems by using
random variables, as opposed to deterministic algorithms. MC methods are extremely
important in computational physics and related applied fields, and have diverse applications
from esoteric quantum chromodynamics calculations to designing heat shields and
aerodynamic forms. The simplest MC method is 'hit and miss' integration.
Here, Monte Carlo simulations are performed by adding random variables, considered
as noise, to the output signal in order to verify the statistical properties of the parameter
estimators. First, a noise with fixed variance is used to find the estimates of the parameters,
and the signal to noise ratio, mean, variance, bias and mean square error of the estimator are
calculated and plotted. This procedure is performed 100 times for every chosen noise
variance. The results are below.
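A hedged sketch of one such Monte Carlo experiment (for simplicity only b = ωn√(1 − ζ²) is estimated here, from the peak time via equation (6); the noise level and grid are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
wn, zeta, A, D = 1.0, 0.2, 1.0, 5.0            # D >= Dc, so tmax = pi/b by eq. (6)
b = wn * np.sqrt(1 - zeta**2)
t = np.linspace(0.0, 20.0, 4001)

def pulse_response(t):
    """Closed-form pulse response: step response minus delayed step response."""
    def step(tt):
        tt = np.maximum(tt, 0.0)
        e = np.exp(-zeta * wn * tt)
        return A * (1 - e * (np.cos(b * tt)
                             + (zeta / np.sqrt(1 - zeta**2)) * np.sin(b * tt)))
    return step(t) - step(t - D)

y_clean = pulse_response(t)
b_hats = []
for _ in range(100):                            # 100 Monte Carlo runs at one noise level
    y_noisy = y_clean + 0.01 * rng.standard_normal(t.size)
    t_max = t[np.argmax(y_noisy)]               # measured peak time
    b_hats.append(np.pi / t_max)                # b_hat from eq. (6)

b_hats = np.array(b_hats)
bias = b_hats.mean() - b
mse = np.mean((b_hats - b) ** 2)
```

Repeating the loop for several noise variances gives the SNR sweeps plotted in the figures below.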
Fig: 6 SNR vs. VAR, BIAS and MSE of the parameter ω n (true value is 1)
Fig: 7 SNR vs. Mean of the parameter ω n (true value is 1)
Fig: 8 SNR vs. VAR, BIAS and MSE of the parameter ζ (true value is 0.2)
Fig: 9 SNR vs. Mean of the parameter ζ (true value is 0.2)
A good estimator should yield parameters that are close to the true value with a higher
probability than parameters that are far away from the true value. This means that the
maximum of the pdf should be close to the true value.
Generally a low variance means that realizations close to the expected value are highly
probable, while a high variance implies high probabilities for realizations far away from the
mean.
If the mean of the estimated parameter is equal to the true parameter, independent of the
amount of data used, the estimator is called unbiased. The deviation of the mean of the
estimated parameter from the true parameter value is called the bias. This deviation decreases
for an increasing amount of data; if the bias approaches zero as the amount of data approaches
infinity, the estimator is called consistent, or asymptotically unbiased.
From the above plots we can observe that when the SNR value is low, the variance of
the estimated parameter is high and the estimate is affected by bias, so the mean value of the
estimator is far away from the true value. As the SNR value increases, the variance and bias
decrease towards a minimum, and simultaneously the mean value of the estimator approaches
the true value.
CONCLUSIONS
From this thesis work it is concluded that the feature of the pulse signal, i.e., being a
combination of two step signals, provides two different equations for the time at which the
response reaches its maximum value. This provides an easy way to estimate the low-order
system parameters. Moreover, the pulse signal returns the input and output to the desired
steady-state values. Finally, the properties of the estimated parameters remain satisfactory
when the pulse signal is used and the system is affected by noise.
REFERENCES
[1] T. Söderström and P. Stoica. System Identification. Prentice Hall, Hemel Hempstead,
U.K., 1989.
[2] L. Ljung. System Identification: Theory for the User. 2nd edition. Prentice Hall, 1999.
[3] Q. Wang and Y. Zhang. Robust identification of continuous systems with dead-time from
step responses. Automatica, 37, 2001.
[4] S.-H. Hwang and S.-T. Lai. Use of two-stage least-squares algorithms for identification of
continuous systems with time delay based on pulse responses. Automatica, 40, 2004.
[5] Q.-G. Wang, M. Liu and C. C. Hang. Simplified identification of time-delay systems with
nonzero initial conditions from pulse tests. Ind. Eng. Chem. Res., 2001.
[6] H.-P. Huang, M.-W. Lee and C.-L. Chen. A system of procedures for identification of
simple models using transient step response. Ind. Eng. Chem. Res., 2001.