Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 318786, 8 pages
http://dx.doi.org/10.1155/2013/318786
Research Article
Recursive Identification for Dynamic Linear Systems from
Noisy Input-Output Measurements
Dan Fan and Kueiming Lo
School of Software, Tsinghua University, Tsinghua National Laboratory for Information Science and Technology,
Beijing 100084, China
Correspondence should be addressed to Kueiming Lo; gluo@tsinghua.edu.cn
Received 7 March 2013; Accepted 13 April 2013
Academic Editor: Xiaojing Yang
Copyright © 2013 D. Fan and K. Lo. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The errors-in-variables (EIV) model is a model with both noisy output and noisy input measurements, and it can be used for system modeling in many engineering applications. However, the identification of EIV models is considerably more complicated because of the input noise. This paper focuses on the adaptive identification problem of real-time EIV models. Some derivation errors in a well-known accuracy analysis of the popular Frisch scheme used for EIV identification have been pointed out in a recent study. To solve the same modeling problem, a new algorithm is proposed in this paper. A moving average (MA) process is used as a substitute for the joint impact of the mutually independent input and output noises, and then the system parameters and the noise properties are estimated in the time domain and in the frequency domain, respectively. A recursive form of the first-step calculation is constructed to improve the calculation efficiency and the online computation ability. Another advantage of the proposed algorithm is its applicability to different input processes. Numerical simulations are given to demonstrate the efficiency and robustness of the new algorithm.
1. Introduction
In the field of engineering, modeling is an essential issue. In most cases, systems are modeled by stochastic models in which the input signals are assumed to be measured exactly and all disturbing noises are added to the output signals; that is, only the output measurements are noisy. These models are called "errors-in-equation models." However, there are always signals beyond our control that also affect the input of the systems, and some of them cannot be absorbed into the output noise. Therefore, it is also necessary to consider the modeling problem for systems with noisy input-output measurements, especially when we are concerned with the actual physical laws of the process rather than the prediction of its future behaviour [1]. A model whose input and output measurements both contain noise is called an "errors-in-variables (EIV) model" [2].
The identification of EIV models has received a lot of
attention during the past decades. To date, EIV models have
been used in numerous applications, such as the modeling
problems in econometrics, computer vision, biomedicine,
chemical and image reconstruction, spectrum estimation,
speech analysis, noise cancelation, and digital communications [3–8].
In EIV models, the noise in the input measurements cannot be treated as equivalent to an output error, which makes the identification of EIV models much more difficult. The identifiability of EIV dynamic models was analyzed in [9, 10]. It is pointed out that EIV dynamic models cannot be uniquely identified from second-order properties alone [9]. Thus specific prior knowledge is needed to achieve identifiability. Once identifiability is established, estimation algorithms can be developed [10]. Owing to the noisy input measurements in EIV models, the standard least squares method for errors-in-equation models can no longer yield consistent estimates. To overcome this problem, a bias-compensated least squares (BCLS) principle was proposed in [4]. On the basis of the BCLS principle, various algorithms have been developed, such as the Frisch scheme-based algorithms [7], the KL algorithm [8], ECLS [9], BELS [10], and others in [11–15].
Although there are a number of approaches for identifying different EIV models, the convergence of these algorithms has always been a difficulty, and only a few studies have tried to solve this problem [12, 15, 16]. In [16], the accuracy of the Frisch scheme for EIV identification was analyzed: by linearizing three primary equations of the scheme, the estimates of the system parameters as well as the noise variances were proved to be asymptotically Gaussian distributed. This conclusion can be perceived as the theoretical support of the Frisch scheme-based algorithms. Based on this work, continued extensions and real applications have sprung up recently [17–20], which reaffirms the value of this particular analysis result. However, the analysis in [16] requires the condition that the parameter estimates are close to their true values, and it is not clear how this can be guaranteed. A counterexample in which the estimates fail to converge was presented in [21], where some derivation errors of [16] were also found and discussed. Furthermore, another method was provided in [21] to identify the EIV model. However, compared with the model considered in [16], owing to the difficulty of the identification problem, the model treated in [21] was a simpler one with the stronger condition that the input and output noise processes have the same variance, which hampers its application to some degree.
The purpose of this paper is to consider how to avoid the restrictive condition in [21]; in other words, we try to solve the same modeling problem as in [16], that is, to propose an identification algorithm for the modeling of dynamic EIV systems with independent input and output noises that estimates both the unknown system parameters and the noise signals. In order to achieve this purpose, we use a two-step method: in Step 1, the original model is rewritten into another form to get the system parameters in the time domain; in Step 2, the noise variances are calculated in the frequency domain. Moreover, a recursive form of the proposed method will be presented to improve its operational efficiency and enhance its online applicability.
The structure of the paper is as follows. In Section 2, the model of concern is described in detail. The new identification algorithm is presented in Section 3. Some simulations are given in Section 4 to illustrate the identification accuracy, the convergence rate, and the antinoise performance. Finally, concluding remarks are given in Section 5.
Figure 1: A basic errors-in-variables system (the true input $u_0(t)$ and true output $y_0(t)$ of the dynamic system are measured as $u(t)$ and $y(t)$ after the additive noises $\tilde{u}(t)$ and $\tilde{y}(t)$).
2. Problem Formulation

A basic dynamic EIV system is shown in Figure 1. Unlike the normal errors-in-equation model, as mentioned before, the EIV model has noise in both the input measurements and the output measurements. The immeasurable true input and output processes $u_0(t)$ and $y_0(t)$ are linked by a dynamic system, which can be a linear or a nonlinear system in different applications. So far, most of the related studies are focused on linear systems, which are also the focus of the research in this paper. The ARX($n_a$, $n_b$) model is considered here as

    A(z)\, y_0(t) = B(z)\, u_0(t),   (1)

where

    A(z) = 1 + a_1 z + \cdots + a_{n_a} z^{n_a},
    B(z) = b_1 z + \cdots + b_{n_b} z^{n_b},   (2)

are the polynomials in the backward shift operator $z$. The $\{a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}\}$ are the unknown system parameters to be identified, while the measured variables $u(t)$ and $y(t)$ are disturbed by the unknown noises $\tilde{u}(t)$ and $\tilde{y}(t)$. Thus the input and output measurements are

    u(t) = u_0(t) + \tilde{u}(t),
    y(t) = y_0(t) + \tilde{y}(t).   (3)

After introducing the notations

    \theta = (a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b})^T,
    \varphi(t) = (-y(t-1), \ldots, -y(t-n_a), u(t-1), \ldots, u(t-n_b))^T,
    \tilde{\varphi}(t) = (-\tilde{y}(t-1), \ldots, -\tilde{y}(t-n_a), \tilde{u}(t-1), \ldots, \tilde{u}(t-n_b))^T,   (4)

the EIV system can be described as the following model:

    y(t) = \varphi(t)^T \theta + A(z)\,\tilde{y}(t) - B(z)\,\tilde{u}(t).   (5)

To ensure the identifiability, we list some assumptions first.

(A1) The EIV system is asymptotically stable, which means that there is no zero of $A(z)$ inside the unit circle.

(A2) The noises $\tilde{u}(t)$ and $\tilde{y}(t)$ are mutually independent and also independent of the true input and output signals $u_0(t)$ and $y_0(t)$.

(A3) $\tilde{u}(t)$ and $\tilde{y}(t)$ are white noises with zero mean and independent variances $\lambda_u$ and $\lambda_y$.

The problem we need to solve is to estimate the system parameter vector $\theta$ with the help of the measured regressor vector $\varphi(t)$. Furthermore, considering that a noise process can be described by its mean and variance, identifying the zero-mean input and output noises is simplified to identifying their variances. Therefore, besides the system parameters, we also want to estimate the output and input noise variances $\lambda_y$ and $\lambda_u$. In the following section, we will give an algorithm with two independent steps to fulfill the two aspects of the estimation requirements.
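To make the data-generating mechanism concrete, the following sketch simulates measurements from the model (1)–(3). The coefficient and noise-variance values are those of System 1 in Case A of Section 4; the function name, the zero-mean Gaussian true input, and the use of Python/NumPy are our own illustrative choices, not part of the original presentation.

```python
import numpy as np

def simulate_eiv(a, b, lam_u, lam_y, N, rng):
    """Generate noisy input/output measurements from the EIV model (1)-(3).

    a, b hold the coefficients a_1..a_na and b_1..b_nb of A(z) and B(z);
    the noise-free system is A(z) y0(t) = B(z) u0(t)."""
    na, nb = len(a), len(b)
    u0 = rng.standard_normal(N)                # true input: zero-mean Gaussian, variance 1
    y0 = np.zeros(N)
    for t in range(N):
        # ARX recursion (1): y0(t) = -sum_i a_i y0(t-i) + sum_i b_i u0(t-i)
        y0[t] = -sum(a[i] * y0[t - 1 - i] for i in range(na) if t - 1 - i >= 0) \
                + sum(b[i] * u0[t - 1 - i] for i in range(nb) if t - 1 - i >= 0)
    u = u0 + np.sqrt(lam_u) * rng.standard_normal(N)   # measured input, eq. (3)
    y = y0 + np.sqrt(lam_y) * rng.standard_normal(N)   # measured output, eq. (3)
    return u, y

rng = np.random.default_rng(0)
u, y = simulate_eiv([-0.2, -0.15], [0.3, -0.27], lam_u=0.5, lam_y=0.2, N=8000, rng=rng)
```

Any other stationary true input, such as the sawtooth or ARMA signals of Cases B and C, can be substituted for u0 without changing the rest of the sketch.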
3. Identification Algorithms

As mentioned before, the identification of an EIV system is much more difficult because the input and output noises are unknown. For the EIV system described in Section 2, to overcome the influence of the input noise, we will use an MA process $\{w(t)\}$ as a substitute for the joint impact of the mutually independent input and output noises, since two mutually independent sequences of independent random variables can be represented jointly by an MA process with the same spectrum [22]. Then the system can be modified into an ARMAX model, and the task becomes estimating the system parameters of the new model and determining the variances of the input/output noises in terms of $\{w(t)\}$. Thus a two-step recursive estimation algorithm can be constructed to identify the system parameters $\theta$ and the noise variances $\lambda_y$ and $\lambda_u$, respectively.

Step 1. At time $t$, we use the obtained estimate of $w(t-1)$ to estimate the parameters and get the current estimates $\theta(t)$ and $w(t)$.

Step 2. These results are utilized to calculate the estimates of the noise variances $\lambda_y(t)$ and $\lambda_u(t)$.

In the following, we will give the algorithm followed by its proof.

Step 1 (estimation of the unknown system parameter $\theta$). For convenience, denote the last two terms of (5) by $v(t)$, that is,

    v(t) = A(z)\,\tilde{y}(t) - B(z)\,\tilde{u}(t),   (6)

where $\tilde{u}(t)$ and $\tilde{y}(t)$ are mutually independent with

    E\tilde{y}(t) = E\tilde{u}(t) = 0, \quad E\tilde{y}^2(t) = \lambda_y, \quad E\tilde{u}^2(t) = \lambda_u.   (7)

Introduce an MA($n_c$) process

    w(t) = e(t) + c_1 e(t-1) + \cdots + c_{n_c} e(t-n_c),   (8)

where $\{e(t)\}$ is white noise with

    Ee(t) = 0, \quad Ee^2(t) = \lambda_e, \quad n_c = \max\{n_a, n_b\}.   (9)

It can be shown that we can find a pair of $\{c_i, 0 \le i \le n_c\}$ and $\lambda_e$ such that $\{w(t)\}$ and $\{v(t)\}$ have the same spectra [22], which means that $\{v(t)\}$ can be represented by $\{w(t)\}$ in (8), so that

    y(t) = \varphi(t)^T \theta + w(t).   (10)

The $\{c_i, 0 \le i \le n_c\}$ and $\lambda_e$ are intermediate variables.

For the new model (10), denote a new parameter vector $\bar{\theta}$ and a new regressor vector $\bar{\varphi}(t)$ by

    \bar{\theta} = (\theta^T, c_1, c_2, \ldots, c_{n_c})^T,   (11)
    \bar{\varphi}(t) = (\varphi(t)^T, e(t-1), \ldots, e(t-n_c))^T,   (12)

and then the EIV system (5) can be rewritten as

    y(t) = \bar{\varphi}(t)^T \bar{\theta} + e(t).   (13)

In this step, we will give a recursive algorithm to identify (13).

The covariance matrix of the regressor vector $\bar{\varphi}(t)$ and its cross-covariance with the output variable $y(t)$ are denoted by

    R_{\bar{\varphi}} = E\bar{\varphi}(i)\bar{\varphi}(i)^T, \quad r_{\bar{\varphi}y} = E\bar{\varphi}(i)\, y(i).   (14)

For convenience, introduce

    \hat{R}_{\bar{\varphi}}(t) = \sum_{i=1}^{t} \bar{\varphi}(i)\bar{\varphi}(i)^T,   (15)

    \hat{r}_{\bar{\varphi}y}(t) = \sum_{i=1}^{t} \bar{\varphi}(i)\, y(i).   (16)

Assume that the input $\{u(t)\}$ is a stationary process; in the calculation, we can use the algebraic means $\hat{R}_{\bar{\varphi}}(t)/t$ and $\hat{r}_{\bar{\varphi}y}(t)/t$ instead of the mathematical expectations $R_{\bar{\varphi}}$ and $r_{\bar{\varphi}y}$ in (14), as by ergodicity we have

    \hat{R}_{\bar{\varphi}}(t)/t \to R_{\bar{\varphi}}, \quad \hat{r}_{\bar{\varphi}y}(t)/t \to r_{\bar{\varphi}y}, \quad t \to \infty.   (17)

Lemma 1 (Matrix Inversion Formula [23]). For the matrices $A \in R^{n \times n}$, $C \in R^{n \times 1}$, and $D \in R^{n \times 1}$, the inverse matrix of $B = A + CD^T$ is

    (A + CD^T)^{-1} = A^{-1} - a^{-1} A^{-1} C D^T A^{-1},   (18)

where $a = 1 + D^T A^{-1} C$.

Theorem 2. For system (13), under the assumptions (A1)–(A3), the parameter vector $\bar{\theta}$ can be estimated recursively as follows, with a large $P(0)$ and an arbitrary $\bar{\theta}(0)$:

    \bar{\theta}(t) = \bar{\theta}(t-1) + a^{-1}(t)\, P(t-1)\,\hat{\varphi}(t)\, [\, y(t) - \hat{\varphi}(t)^T \bar{\theta}(t-1) \,],
    P(t) = P(t-1) - a^{-1}(t)\, P(t-1)\,\hat{\varphi}(t)\,\hat{\varphi}(t)^T P(t-1),
    \varepsilon(t) = y(t) - \hat{\varphi}(t)^T \bar{\theta}(t),
    \hat{\varphi}(t) = (\varphi(t)^T, \varepsilon(t-1), \ldots, \varepsilon(t-n_c))^T,   (19)

where $a(t) = 1 + \hat{\varphi}(t)^T P(t-1)\,\hat{\varphi}(t)$.
Proof. Like the least squares methods, we use the covariance matrix for help. Multiplying the system model (13) by $\bar{\varphi}(t)$ and taking expectations, we have

    E\bar{\varphi}(t)\, y(t) - E\bar{\varphi}(t)\bar{\varphi}(t)^T \bar{\theta} = E\bar{\varphi}(t)\, e(t),   (20)

which, with assumptions (A2) and (A3), can be rewritten as

    R_{\bar{\varphi}}\, \bar{\theta} = r_{\bar{\varphi}y}.   (21)

Replacing $R_{\bar{\varphi}}$ and $r_{\bar{\varphi}y}$ with $\hat{R}_{\bar{\varphi}}(t)/t$ and $\hat{r}_{\bar{\varphi}y}(t)/t$ in (15) and (16), respectively, as mentioned before, one has

    \hat{R}_{\bar{\varphi}}(t)\, \bar{\theta} = \hat{r}_{\bar{\varphi}y}(t).   (22)

We note that (15) and (16) imply

    \hat{R}_{\bar{\varphi}}(t) = \hat{R}_{\bar{\varphi}}(t-1) + \bar{\varphi}(t)\bar{\varphi}(t)^T,
    \hat{r}_{\bar{\varphi}y}(t) = \hat{r}_{\bar{\varphi}y}(t-1) + \bar{\varphi}(t)\, y(t).   (23)

Then, on condition that $\hat{R}_{\bar{\varphi}}(t)$ is invertible, $\bar{\theta}$ is estimated as

    \bar{\theta}(t) = \hat{R}_{\bar{\varphi}}^{-1}(t)\, \hat{r}_{\bar{\varphi}y}(t)
                    = \hat{R}_{\bar{\varphi}}^{-1}(t)\, (\hat{r}_{\bar{\varphi}y}(t-1) + \bar{\varphi}(t)\, y(t))
                    = \hat{R}_{\bar{\varphi}}^{-1}(t)\, [\,(\hat{R}_{\bar{\varphi}}(t) - \bar{\varphi}(t)\bar{\varphi}(t)^T)\, \bar{\theta}(t-1) + \bar{\varphi}(t)\, y(t)\,]
                    = \bar{\theta}(t-1) + \hat{R}_{\bar{\varphi}}^{-1}(t)\, \bar{\varphi}(t)\, (y(t) - \bar{\varphi}(t)^T \bar{\theta}(t-1)).   (24)

Using (23), we can calculate the vector $\bar{\theta}(t)$. But we note that there is an inverse operation of $\hat{R}_{\bar{\varphi}}(t)$ at each recursive step, which is a very time-consuming process. To avoid this inversion, introduce

    P(t) = \hat{R}_{\bar{\varphi}}^{-1}(t),   (25)

and apply the Matrix Inversion Formula of Lemma 1 to the update (23); taking $A = \hat{R}_{\bar{\varphi}}(t-1)$ and $C = D = \bar{\varphi}(t)$, we have

    P(t) = P(t-1) - \frac{P(t-1)\,\bar{\varphi}(t)\bar{\varphi}(t)^T P(t-1)}{1 + \bar{\varphi}(t)^T P(t-1)\,\bar{\varphi}(t)}.   (26)

Moreover, by (26) it is clear that

    \hat{R}_{\bar{\varphi}}^{-1}(t)\, \bar{\varphi}(t) = \frac{P(t-1)\,\bar{\varphi}(t)}{1 + \bar{\varphi}(t)^T P(t-1)\,\bar{\varphi}(t)}.   (27)

Taking (27) into (24), we have

    \bar{\theta}(t) = \bar{\theta}(t-1) + \frac{P(t-1)\,\bar{\varphi}(t)}{1 + \bar{\varphi}(t)^T P(t-1)\,\bar{\varphi}(t)}\, (y(t) - \bar{\varphi}(t)^T \bar{\theta}(t-1)).   (28)

Noting that the noise terms $e(t-1), \ldots, e(t-n_c)$ in (12) are unknown, $\bar{\varphi}(t)$ cannot be constructed directly. This problem can be solved in a similar way as for the RPLR (recursive pseudolinear regression) algorithm [22], that is, by forming a substitute for $\bar{\varphi}(t)$ as

    \hat{\varphi}(t) = (\varphi(t)^T, \varepsilon(t-1), \ldots, \varepsilon(t-n_c))^T,   (29)

where $\varepsilon(i) = y(i) - \hat{\varphi}(i)^T \bar{\theta}(i)$, $i \ge 1$. The proof of the substitution's correctness is omitted (see details in [22]). Then, using the $\hat{\varphi}(t)$ defined by (29) to replace the $\bar{\varphi}(t)$ in (26) and (28), we get Theorem 2 easily.

Theorem 2 gives a recursive algorithm for the estimate of $\bar{\theta}$: at time $t-1$, we store only the finite-dimensional information $\{\bar{\theta}(t-1), P(t-1), \hat{\varphi}(t-1)\}$. At time $t$, this information is updated using (23), (27), and (29), which is done with a fixed amount of operations, giving the algorithm high operational efficiency and making it suitable for online applications. Once $\bar{\theta}(t)$ is obtained, $\theta(t)$ can obviously be read off from (11).
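Written as code, one pass of the recursion (19) might look like the following sketch. The variable names are ours, and the initialization with a large $P(0)$ and an arbitrary $\bar{\theta}(0)$ follows the statement of Theorem 2; the update itself is a recursive pseudolinear regression step, not the authors' implementation.

```python
import numpy as np

def theorem2_step(theta_bar, P, eps_hist, phi, y_t):
    """One update of the recursion (19); an illustrative sketch.

    phi      : measured regressor (-y(t-1),...,-y(t-n_a), u(t-1),...,u(t-n_b))
    eps_hist : previous prediction errors eps(t-1),...,eps(t-n_c)
    """
    phi_hat = np.concatenate([phi, eps_hist])          # substitute regressor, eq. (29)
    a_t = 1.0 + phi_hat @ P @ phi_hat                  # a(t) = 1 + phi_hat' P(t-1) phi_hat
    theta_bar = theta_bar + (P @ phi_hat) / a_t * (y_t - phi_hat @ theta_bar)
    P = P - np.outer(P @ phi_hat, phi_hat @ P) / a_t   # P(t) update in (19)
    eps_t = y_t - phi_hat @ theta_bar                  # eps(t) uses the updated theta_bar(t)
    eps_hist = np.concatenate([[eps_t], eps_hist[:-1]])
    return theta_bar, P, eps_hist

# Initialization in the spirit of Theorem 2 (n_a = n_b = n_c = 2 as in Section 4):
n_a, n_b, n_c = 2, 2, 2
dim = n_a + n_b + n_c
theta_bar, P, eps_hist = np.zeros(dim), 1e4 * np.eye(dim), np.zeros(n_c)
```

At every step, only $\bar{\theta}(t)$, $P(t)$, and the last $n_c$ prediction errors are stored, which is what makes the scheme suitable for online use.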
Next we estimate the noise properties, which will be helpful in real applications such as cascade system modeling.

Step 2 (estimation of the noise variances $\lambda_y(t)$ and $\lambda_u(t)$). To estimate the noise variances $\lambda_y$, $\lambda_u$, we need to find the relationship between $\{v(t)\}$ in (6) and $\{w(t)\}$ in (8).

We know that the spectrum $\Phi_s(\omega)$ of a signal $\{s(t)\}$ is the Fourier transform of its covariance function $R_s(\tau)$:

    \Phi_s(\omega) = \sum_{\tau=-\infty}^{\infty} R_s(\tau)\, e^{-i\tau\omega},   (30)

where

    R_s(\tau) = E\, s(t)\, s(t-\tau) = \lim_{N\to\infty} \frac{1}{N} \sum_{t=1}^{N} s(t)\, s(t-\tau).   (31)

Since $\{w(t)\}$ and $\{v(t)\}$ have the same spectra, that is,

    \Phi_w(\omega) \equiv \Phi_v(\omega),   (32)

this means that for all $\tau = 0, 1, 2, \ldots, n_c$,

    R_v(\tau) = R_w(\tau).   (33)

Thus we can find the relationship between $\lambda_y$, $\lambda_u$, and $\lambda_e$ by using the covariance functions $R_v(\tau)$ and $R_w(\tau)$.

At step $t$, an estimate of $R_s(\tau)$ can be used as

    R_s^{\tau}(t) = \frac{1}{t} \sum_{k=1}^{t} s(k)\, s(k-\tau).   (34)

We switch to the notation $R_v^{\tau}$, $R_w^{\tau}$ rather than $R_v(\tau)$, $R_w(\tau)$ to account for certain differences due to the recursive step $t$.
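The time averages in (34) admit the same one-pass updating as (15) and (16). A minimal sketch of that update (our own helper with hypothetical names, not something given in the paper) is:

```python
def update_sample_cov(R_tau, t, s_hist):
    """Recursive form of (34): R_s^tau(t) = ((t-1) R_s^tau(t-1) + s(t) s(t-tau)) / t.

    R_tau  : list holding R_s^0(t-1), ..., R_s^tau_max(t-1)
    s_hist : most recent samples s(t), s(t-1), ..., s(t-tau_max)
    """
    for tau in range(len(R_tau)):
        R_tau[tau] = ((t - 1) * R_tau[tau] + s_hist[0] * s_hist[tau]) / t
    return R_tau
```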
Introduce

    \theta_a^{\tau}(t) = (\bar{a}_{\tau}(t), \bar{a}_{\tau+1}(t), \ldots, \bar{a}_{\tau+n_a}(t))^T,
    \theta_b^{\tau}(t) = (\bar{b}_{\tau+1}(t), \bar{b}_{\tau+2}(t), \ldots, \bar{b}_{\tau+n_b}(t))^T,
    \theta_c^{\tau}(t) = (\bar{c}_{\tau}(t), \bar{c}_{\tau+1}(t), \ldots, \bar{c}_{\tau+n_c}(t))^T,
    \varphi_{\tilde{y}}(t) = (-\tilde{y}(t), -\tilde{y}(t-1), \ldots, -\tilde{y}(t-n_a))^T,
    \varphi_{\tilde{u}}(t) = (\tilde{u}(t-1), \ldots, \tilde{u}(t-n_b))^T,
    \varphi_e(t) = (e(t), e(t-1), \ldots, e(t-n_c))^T,   (35)

where

    \bar{a}_k(t) = \begin{cases} 1, & k = 0, \\ a_k(t), & 0 < k \le n_a, \\ 0, & k > n_a \text{ or } k < 0, \end{cases}
    \bar{b}_k(t) = \begin{cases} b_k(t), & 0 < k \le n_b, \\ 0, & k > n_b \text{ or } k \le 0, \end{cases}
    \bar{c}_k(t) = \begin{cases} 1, & k = 0, \\ c_k(t), & 0 < k \le n_c, \\ 0, & k > n_c \text{ or } k < 0, \end{cases}   (36)

are the transformations of the parameters in $\bar{\theta}(t)$.

Then, according to (A2), (A3), (6), (8), and (34), we have

    R_w^{\tau}(t) = \frac{1}{t} \sum_{k=1}^{t} w(k)\, w(k-\tau)
                  = \frac{1}{t} \sum_{k=1}^{t} \theta_c^{0}(t)^T \varphi_e(k)\; \theta_c^{0}(t)^T \varphi_e(k-\tau)
                  = \frac{1}{t} \sum_{k=1}^{t} \theta_c^{0}(t)^T \varphi_e(k)\, \varphi_e(k)^T \theta_c^{\tau}(t)
                  = \theta_c^{0}(t)^T\, \theta_c^{\tau}(t)\, \lambda_e.   (37)

Similarly, we have

    R_v^{\tau}(t) = \theta_a^{0}(t)^T\, \theta_a^{\tau}(t)\, \lambda_y + \theta_b^{0}(t)^T\, \theta_b^{\tau}(t)\, \lambda_u.   (38)

Then it is clear that, for $\tau = 0, 1, 2, \ldots, n_c$,

    R_v^{\tau}(t) = R_w^{\tau}(t)   (39)

is a set of linear equations in the unknowns $\lambda_y$ and $\lambda_u$, in which $\{\theta_a^{\tau}(t), \theta_b^{\tau}(t), \theta_c^{\tau}(t)\}$ is known at time $t$, and $\lambda_e$ can be estimated by

    \lambda_e(t) = R_{\varepsilon}^{0}(t) = \frac{1}{t} \sum_{k=1}^{t} \varepsilon^2(k),   (40)

by taking the $\varepsilon(t)$ in Theorem 2 as the estimate of $e(t)$.

Then the estimates $\lambda_y(t)$ and $\lambda_u(t)$ of the output/input noise variances $\lambda_y$ and $\lambda_u$ can be calculated from (37)–(40); a code sketch of this computation is given at the end of Case A below.

4. Simulation Examples

This section addresses some numerical evaluation of the identification algorithm presented in this paper. Matlab 7.7 is used to do the simulations. To demonstrate its validity for various EIV systems, we have chosen different signal processes as the true input variables $\{u_0(t)\}$ in each case: in Case A, a zero-mean Gaussian process is used; in Case B, a sawtooth signal is applied; in Case C, it is an ARMA process. The noise processes $\{\tilde{u}(t)\}$ and $\{\tilde{y}(t)\}$ in these cases are mutually uncorrelated white noise signals with zero mean. The robustness of the algorithm is also tested, which is shown in Case C.

Case A. First we examine how well the algorithm works for systems with Gaussian input. Consider an EIV dynamic system with $n_a = n_b = 2$ and

    \theta = (a_1, a_2, b_1, b_2)^T = (-0.2, -0.15, 0.3, -0.27)^T.   (41)

It is easy to get the system as follows, which is denoted by System 1:

    System 1: y_0(t) - 0.2\, y_0(t-1) - 0.15\, y_0(t-2) = 0.3\, u_0(t-1) - 0.27\, u_0(t-2).   (42)

Let the input signal $\{u_0(t)\}$ be a zero-mean Gaussian process whose variance equals 1. Let the noise signals $\{\tilde{u}(t)\}$ and $\{\tilde{y}(t)\}$ be mutually uncorrelated white noise signals with $\lambda_y = 0.2$ and $\lambda_u = 0.5$, which means a strong noise environment for the system.

The system is simulated for $N = 8000$ steps. Calculation results are listed in Table 1, where the calculation error is defined by the standard deviation.

Table 1: Estimation results of System 1.

    Parameter   True value   Estimation
    a_1         -0.20        -0.1975 ± 0.0025
    a_2         -0.15        -0.1483 ± 6.9910e-4
    b_1          0.30         0.2842 ± 2.3776e-4
    b_2         -0.27        -0.2574 ± 7.5975e-4
    λ_y          0.20         0.1922 ± 4.5330e-4
    λ_u          0.50         0.5125 ± 0.0024

Figures 2 and 3 show the system parameter estimates and the noise variance estimates separately. Solid lines indicate the true values and dashed lines denote the corresponding estimates. Noting that the vertical coordinate ranges are very small in both figures, it can be seen easily that the estimates converge fast to the true parameters.

Figure 2: Estimation result of $\theta$ in System 1.

Figure 3: Estimation result of $\lambda_y$ and $\lambda_u$ in System 1.
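To illustrate how Step 2 turns the Theorem 2 output into noise variance estimates for an experiment like Case A, the sketch below forms the coefficient vectors (35)-(36) from $\bar{\theta}(t)$, estimates $\lambda_e$ by (40), and solves the equations (39) for $\lambda_y$ and $\lambda_u$. Solving the overdetermined set by least squares is our own reading of "a set of linear equations", and all function and variable names here are hypothetical, not the authors' code.

```python
import numpy as np

def step2_noise_variances(theta_bar, eps, n_a, n_b, n_c):
    """Step 2 sketch: recover lambda_y and lambda_u from (37)-(40).

    theta_bar : Theorem 2 estimate (a_1..a_na, b_1..b_nb, c_1..c_nc)
    eps       : prediction errors from the recursion, used as estimates of e(t)
    """
    a_bar = np.r_[1.0, theta_bar[:n_a]]            # (36): a_bar_0 = 1
    b_bar = np.r_[0.0, theta_bar[n_a:n_a + n_b]]   # (36): b_bar_0 = 0
    c_bar = np.r_[1.0, theta_bar[n_a + n_b:]]      # (36): c_bar_0 = 1
    lam_e = float(np.mean(np.asarray(eps) ** 2))   # (40): lambda_e(t)

    def overlap(x, tau):
        # theta_x^0(t)' theta_x^tau(t) reduces to sum_k x_bar(k) x_bar(k + tau)
        return float(np.dot(x[:len(x) - tau], x[tau:])) if tau < len(x) else 0.0

    rows, rhs = [], []
    for tau in range(n_c + 1):
        rows.append([overlap(a_bar, tau), overlap(b_bar, tau)])  # left side of (39), via (38)
        rhs.append(overlap(c_bar, tau) * lam_e)                  # right side of (39), via (37)
    lam_y, lam_u = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return lam_y, lam_u, lam_e
```

Running the Theorem 2 recursion on data generated from System 1 and feeding the final $\bar{\theta}(8000)$ together with the residual sequence into this function should give values close to $\lambda_y = 0.2$ and $\lambda_u = 0.5$, consistent with Table 1.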
Case B. Consider another system, System 2, with sawtooth input:

    System 2: y_0(t) = \frac{z - 2z^2}{(1 - 0.9z)(1 - 0.8z)}\, u_0(t),   (43)

in which $n_a = n_b = n_c = 2$ and

    \theta = (a_1, a_2, b_1, b_2)^T = (-1.7, 0.72, 1, -2)^T.   (44)

This is the counterexample that was presented in [21] to show the nonconvergence of the Frisch-based method analyzed in [16]. We use our proposed algorithm to identify this system under the same conditions; that is, the input sawtooth signal's amplitude equals 1, its frequency is 10 Hz, and the noise variances are

    \lambda_y = \lambda_u = 0.5.   (45)

The simulation results for the system parameters and the noise variances are all displayed in Figure 4, where the true values and the estimates are again denoted by solid lines and dotted lines, respectively. We can see that the algorithm has an even better performance for this kind of EIV system: all the estimates converge to their corresponding true values.

Figure 4: Estimates of System 2 by the new recursive method.

Case C. The proposed algorithm in this paper is valid not only for EIV models with an iid random input process but also for those whose input signals are more general, such as an ARMA process. In order to examine this, we have considered the following system:

    System 3: y_0(t) = \frac{2(1 - 0.5z)\, z}{1 + 0.2z - 0.48z^2}\, u_0(t),   (46)

where

    \theta = (a_1, a_2, b_1, b_2)^T = (0.2, -0.48, 2, -1)^T.   (47)

In this system, the input process $\{u_0(t)\}$ is an ARMA process:

    (1 - 5z)\, u_0(t) = (1 - 0.3z)\, \xi(t),   (48)

with $\xi(t)$ being a zero-mean iid Gaussian process whose variance is

    \lambda_{\xi} = 1.   (49)

The noise signals $\{\tilde{u}(t)\}$ and $\{\tilde{y}(t)\}$ are mutually uncorrelated white noise signals with

    \lambda_y = 0.2, \quad \lambda_u = 0.5.   (50)

The estimation results are shown in Figures 5 and 6. Similarly, the true values are marked by solid lines and the estimates are marked by dotted lines. We can see that for an ARMA input process, the estimates can still converge to the true parameters quickly.

Figure 5: Estimation result of $\theta$ in System 3 with ARMA input process.

Figure 6: Estimation of $\lambda_y$ and $\lambda_u$ in System 3 with ARMA input process.

In order to verify the robustness against noise, we further performed several experiments with different signal-to-noise ratios (SNRs) in System 3. The noise processes used in the experiments are shown in the first three columns of Table 2: column 1 is the variance of the signal used to generate the input, and columns 2 and 3 are the variances of the input noise and the output noise.
The average estimation error of System 3 (including the system parameter estimation and the noise estimation) is listed in column 4 of Table 2. It is clear that the performance of the proposed algorithm remains good as the noise increases (even when the noise variances are equal to or larger than that of the input signal).

Table 2: Comparison of the estimations for different SNRs.

    λ_ξ   λ_u    λ_y    Average error
    1     0.05   0.05   0.0025
    1     0.2    0.2    0.0039
    1     0.2    0.5    0.0040
    1     0.5    0.2    0.0049
    1     0.5    0.5    0.0052
    1     1      1      0.0086
    1     1      2      0.0064

5. Conclusions

This paper discussed the identification problem of dynamic errors-in-variables (EIV) systems. The EIV model is very useful and has a wide range of engineering applications, such as cascade system modeling and camera calibration. In comparison with the usual errors-in-equation models, the EIV model has a more troublesome noise problem because the input measurements are also disturbed. Since several severe errors in the previous accuracy analysis of the attractive Frisch scheme identification approach have been pointed out in recent studies, we developed an adaptive algorithm to solve the same modeling problem. For the dynamic EIV model with mutually independent input and output noises, this two-step algorithm can not only estimate the system parameter vector as well as the noise variances with good accuracy but also reduce the computational complexity significantly due to its recursive form. The numerical simulations show that the presented algorithm achieves good accuracy, a fast convergence speed, and good antinoise performance. Theoretical analysis of the proposed algorithm and the identification of more complicated models such as EIV nonlinear models will be considered in future work.

Acknowledgments

This work is supported by the funds NSFC61171121 and NSFC60973049, the Science Foundation of the Chinese Ministry of Education, and China Mobile 2012.

References
[1] T. Söderström, “Errors-in-variables methods in system identification,” Automatica, vol. 43, no. 6, pp. 939–958, 2007.
[2] R. J. Adcock, “Note on the method of least squares,” Analyst, vol.
4, pp. 183–184, 1877.
[3] L. Ng and V. Solo, “Errors-in-variables modeling in optical flow
estimation,” IEEE Transactions on Image Processing, vol. 10, no.
10, pp. 1528–1540, 2001.
[4] W. X. Zheng and C. B. Feng, “Unbiased parameter estimation
of linear systems in the presence of input and output noise,”
International Journal of Adaptive Control and Signal Processing,
vol. 3, no. 3, pp. 231–251, 1989.
[5] R. Diversi, “Bias-eliminating least-squares identification of
errors-in-variables models with mutually correlated noises,”
International Journal of Adaptive Control and Signal Processing,
2012.
[6] R. Diversi, R. Guidorzi, and U. Soverini, “Identification of
ARMAX models with noisy input and output,” World Congress,
vol. 18, no. 1, pp. 13121–13126, 2011.
[7] S. Beghelli, R. P. Guidorzi, and U. Soverini, “The Frisch scheme
in dynamic system identification,” Automatica, vol. 26, no. 1, pp.
171–176, 1990.
[8] K. V. Fernando and H. Nicholson, “Identification of linear
systems with input and output noise: the Koopmans-Levin
method,” IEE Proceedings D, vol. 132, no. 1, pp. 30–36, 1985.
[9] J. C. Agüero and G. C. Goodwin, “Identifiability of errors in
variables dynamic systems,” Automatica, vol. 44, no. 2, pp. 371–
382, 2008.
8
[10] W. X. Zheng, “A bias correction method for identification of
linear dynamic errors-in-variables models,” IEEE Transactions
on Automatic Control, vol. 47, no. 7, pp. 1142–1147, 2002.
[11] B. D. O. Anderson and M. Deistler, “Identifiability in dynamic
errors-in-variables models,” Journal of Time Series Analysis, vol.
5, no. 1, pp. 1–13, 1984.
[12] H.-F. Chen and J.-M. Yang, “Strongly consistent coefficient
estimate for errors-in-variables models,” Automatica, vol. 41, no.
6, pp. 1025–1033, 2005.
[13] M. Ekman, “Identification of linear systems with errors in variables using separable nonlinear least-squares,” in Proceedings of
the 16th Triennial World Congress of International Federation
of Automatic Control (IFAC ’05), pp. 815–820, Prague, Czech
Republic, July 2005.
[14] K. Mahata, “An improved bias-compensation approach for
errors-in-variables model identification,” Automatica, vol. 43,
no. 8, pp. 1339–1354, 2007.
[15] H. J. Palanthandalam-Madapusi, T. H. van Pelt, and D. S. Bernstein, “Parameter consistency and quadratically constrained
errors-in-variables least-squares identification,” International
Journal of Control, vol. 83, no. 4, pp. 862–877, 2010.
[16] T. Söderström, “Accuracy analysis of the Frisch scheme for
identifying errors-in-variables systems,” IEEE Transactions on
Automatic Control, vol. 52, no. 6, pp. 985–997, 2007.
[17] R. Diversi and R. Guidorzi, “A covariance-matching criterion in
the Frisch scheme identification of MIMO EIV models,” System
Identification, vol. 16, no. 1, pp. 1647–1652, 2012.
[18] T. S. Söderström, “System identification for the errors-in-variables problem,” Transactions of the Institute of Measurement
and Control, vol. 34, no. 7, pp. 780–792, 2012.
[19] T. Söderström, “Extending the Frisch scheme for errors-in-variables identification to correlated output noise,” International
Journal of Adaptive Control and Signal Processing, vol. 22, no. 1,
pp. 55–73, 2008.
[20] M. Hong and T. Söderström, “Relations between bias-eliminating least squares, the Frisch scheme and extended
compensated least squares methods for identifying errors-in-variables systems,” Automatica, vol. 45, no. 1, pp. 277–282,
2009.
[21] D. Fan and G. Luo, “Frisch scheme identification for errors-in-variables systems,” in Proceedings of the 9th IEEE International
Conference on Cognitive Informatics (ICCI ’10), pp. 794–799,
Beijing, China, July 2010.
[22] L. Ljung, System Identification: Theory for the User, Prentice Hall,
2nd edition, 1999.
[23] G. W. Stewart, Introduction to Matrix Computations, Academic
Press, New York, NY, USA, 1973.