Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 318786, 8 pages. http://dx.doi.org/10.1155/2013/318786

Research Article

Recursive Identification for Dynamic Linear Systems from Noisy Input-Output Measurements

Dan Fan and Kueiming Lo
School of Software, Tsinghua University, Tsinghua National Laboratory for Information Science and Technology, Beijing 100084, China
Correspondence should be addressed to Kueiming Lo; gluo@tsinghua.edu.cn
Received 7 March 2013; Accepted 13 April 2013. Academic Editor: Xiaojing Yang.
Copyright © 2013 D. Fan and K. Lo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The errors-in-variables (EIV) model is a model with noisy measurements of both the input and the output, and it can be used for system modeling in many engineering applications. Identification of EIV models, however, is considerably complicated by the input noise. This paper focuses on the adaptive identification of real-time EIV models. A recent study pointed out derivation errors in the accuracy analysis of the popular Frisch scheme used for EIV identification. To solve the same modeling problem, a new algorithm is proposed in this paper. A moving average (MA) process is used as a substitute for the joint impact of the mutually independent input and output noises, and the system parameters and the noise properties are then estimated in the time domain and the frequency domain, respectively. A recursive form of the first step is constructed to improve computational efficiency and enable online use. Another advantage of the proposed algorithm is its applicability to different input processes. Numerical simulations demonstrate the efficiency and robustness of the new algorithm.
1. Introduction

In the field of engineering, modeling is an essential issue. In most cases, systems are modeled by stochastic models in which the input signals are assumed to be measured exactly and all disturbances are added to the output signals; that is, only the output measurements are noisy. Such models are called "errors-in-equation" models. However, there are always signals beyond our control that also affect the input of the system, and some of them cannot be absorbed into the output noise. It is therefore also necessary to consider the modeling problem for systems with noisy input-output measurements, especially when we are concerned with the actual physical laws of the process rather than the prediction of its future behaviour [1]. A model in which both the input and output measurements contain noise is called an "errors-in-variables (EIV) model" [2]. The identification of EIV models has received much attention during the past decades. To date, EIV models have been used in numerous applications, such as modeling problems in econometrics, computer vision, biomedicine, chemistry, image reconstruction, spectrum estimation, speech analysis, noise cancelation, and digital communications [3-8]. In EIV models, the noise on the input measurements cannot be treated as an equivalent output error, which makes the identification of EIV models much more difficult. The identifiability of EIV dynamic models was analyzed in [9, 10]: EIV dynamic models cannot be uniquely identified from second-order properties alone [9], so specific prior knowledge is needed to achieve identifiability; once identifiability is established, estimation algorithms can be developed [10]. Owing to the noisy input measurements in EIV models, the standard least squares method for errors-in-equation models no longer yields consistent estimates. To overcome this problem, a bias-compensated least squares (BCLS) principle was proposed in [4].
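The inconsistency of plain least squares under input noise, and the bias-compensation idea behind BCLS, can be seen in a small numerical sketch (our illustration, not taken from the paper): for a static gain $y_0 = b\,u_0$, ordinary LS shrinks the estimate by the input-noise variance, while subtracting a known input-noise variance from the input covariance restores consistency.

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = 2.0
n = 200_000

u0 = rng.normal(0.0, 1.0, n)                 # true input, variance 1
y0 = b_true * u0                             # noise-free output
u = u0 + rng.normal(0.0, np.sqrt(0.5), n)    # input noise, variance 0.5
y = y0 + rng.normal(0.0, np.sqrt(0.2), n)    # output noise, variance 0.2

# Ordinary least squares: attenuated toward zero by the input noise,
# roughly b_true * 1/(1 + 0.5) here.
b_ls = np.dot(u, y) / np.dot(u, u)

# Bias-compensated least squares: subtract the (assumed known)
# input-noise variance from the sample input covariance.
lam_u = 0.5
b_bcls = np.dot(u, y) / (np.dot(u, u) - n * lam_u)
```

With a large sample, `b_ls` settles near 1.33 while `b_bcls` recovers the true gain 2; the dynamic EIV case treated below is harder because the noise variances themselves are unknown.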
On the basis of the BCLS principle, various algorithms have been developed, such as the Frisch scheme-based algorithms [7], the KL algorithm [8], ECLS [9], BELS [10], and others [11-15]. Although there are many approaches for identifying different EIV models, the convergence of these algorithms has always been a difficulty, and only a few studies have addressed it [12, 15, 16]. In [16], the accuracy of the Frisch scheme for EIV identification was analyzed: by linearizing three primary equations of the scheme, the estimates of the system parameters as well as the noise variances were proved to be asymptotically Gaussian distributed. This conclusion can be regarded as the theoretical support of the Frisch scheme-based algorithms, and continued extensions and real applications have sprung up recently [17-20], reaffirming the value of this analysis. However, the analysis in [16] requires that the parameter estimates be close to their true values, and it is not clear how this can be guaranteed. A counterexample that fails to converge was presented in [21], where some derivation errors of [16] were also found and discussed, and another method to identify the EIV model was provided. However, compared with the model considered in [16], the one treated in [21] was simpler, owing to the difficulty of the identification problem, and required the stronger condition that the input and output noise processes have the same variance, which hampers its application to some degree. The purpose of this paper is to remove this restrictive condition of [21]; in other words, we aim to solve the same modeling problem as in [16], that is, to propose an identification algorithm for dynamic EIV systems with independent input and output noises that estimates both the unknown system parameters and the noise variances.
In order to achieve this purpose, we use a two-step method: in Step 1, the original model is rewritten in another form to obtain the system parameters in the time domain; in Step 2, the noise variances are calculated in the frequency domain. Moreover, a recursive form of the proposed method is presented to improve its efficiency and enhance its online applicability.

The structure of the paper is as follows. In Section 2, the concerned model is described in detail. The new identification algorithm is presented in Section 3. Some simulations are given in Section 4 to illustrate the identification accuracy, the convergence rate, and the antinoise performance. Finally, concluding remarks are given in Section 5.

2. Problem Formulation

Figure 1: A basic errors-in-variables system (the true signals $u_0(t)$ and $y_0(t)$ are linked by the dynamic system; the measurements $u(t)$ and $y(t)$ are obtained by adding the noises $\tilde u(t)$ and $\tilde y(t)$).

A basic dynamic EIV system is shown in Figure 1. Unlike the normal errors-in-equation model, as mentioned before, the EIV model has noise in both the input and the output measurements. The immeasurable true input and output processes $u_0(t)$ and $y_0(t)$ are linked by a dynamic system, which can be linear or nonlinear in different applications. So far, most related studies have focused on linear systems, which is also the focus of the research in this paper. The ARX$(n_a, n_b)$ model is considered here:

$$A(z)\, y_0(t) = B(z)\, u_0(t), \quad (1)$$

where

$$A(z) = 1 + a_1 z + \cdots + a_{n_a} z^{n_a}, \qquad B(z) = b_1 z + \cdots + b_{n_b} z^{n_b} \quad (2)$$

are polynomials in the backward shift operator $z$. The $\{a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}\}$ are the unknown system parameters to be identified, while the measured variables $u(t)$ and $y(t)$ are disturbed by the unknown noises $\tilde u(t)$ and $\tilde y(t)$. Thus the input and output measurements are

$$u(t) = u_0(t) + \tilde u(t), \qquad y(t) = y_0(t) + \tilde y(t). \quad (3)$$

After introducing the notations

$$\theta^T = (a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}),$$
$$\varphi(t)^T = (-y(t-1), \ldots, -y(t-n_a),\; u(t-1), \ldots, u(t-n_b)),$$
$$\tilde\varphi(t)^T = (-\tilde y(t-1), \ldots, -\tilde y(t-n_a),\; \tilde u(t-1), \ldots, \tilde u(t-n_b)), \quad (4)$$

the EIV system can be described by the following model:

$$y(t) = \varphi(t)^T \theta + A(z)\,\tilde y(t) - B(z)\,\tilde u(t). \quad (5)$$

To ensure identifiability, we list some assumptions first.

(A1) The EIV system is asymptotically stable; that is, there is no zero of $A(z)$ inside the unit circle.

(A2) The noises $\tilde u(t)$ and $\tilde y(t)$ are mutually independent and also independent of the true input and output signals $u_0(t)$ and $y_0(t)$.

(A3) $\tilde u(t)$ and $\tilde y(t)$ are white noises with zero mean and independent variances $\lambda_u$ and $\lambda_y$.

The problem to solve is to estimate the system parameter vector $\theta$ with the help of the measured regressor vector $\varphi(t)$. Furthermore, since a zero-mean noise process is described by its mean and variance, identifying the zero-mean input and output noises reduces to identifying their variances. Therefore, besides the system parameters, we also want to estimate the output and input noise variances $\lambda_y$ and $\lambda_u$. In the following section, we give an algorithm with two independent steps fulfilling these two estimation requirements.

3. Identification Algorithms

As mentioned before, identification of an EIV system is much more difficult because the input and output noises are both unknown.
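As a concrete illustration of the data-generating model (1)-(3) under (A1)-(A3), the following sketch simulates an EIV data set (the zero-mean Gaussian true input and the specific coefficients are our illustrative choices; (A1) requires a stable $A(z)$):

```python
import numpy as np

def simulate_eiv(a, b, n, lam_u, lam_y, rng):
    """Generate measurements (u, y) from the EIV model (1)-(3):
    A(z) y0 = B(z) u0 with white measurement noises of variances
    lam_u and lam_y (assumptions (A2)-(A3)).  a = (a_1..a_na),
    b = (b_1..b_nb) are the coefficients of A(z) and B(z)."""
    na, nb = len(a), len(b)
    u0 = rng.normal(size=n)                      # noise-free input
    y0 = np.zeros(n)
    for t in range(n):
        # A(z) y0(t) = B(z) u0(t), i.e. y0(t) = -sum a_i y0(t-i) + sum b_i u0(t-i)
        y0[t] = -sum(a[i] * y0[t - 1 - i] for i in range(na) if t - 1 - i >= 0) \
                + sum(b[i] * u0[t - 1 - i] for i in range(nb) if t - 1 - i >= 0)
    u = u0 + rng.normal(0.0, np.sqrt(lam_u), n)  # (3): noisy input
    y = y0 + rng.normal(0.0, np.sqrt(lam_y), n)  # (3): noisy output
    return u, y

rng = np.random.default_rng(1)
# illustrative stable coefficients: A(z) = 1 - 0.2z - 0.15z^2, B(z) = 0.3z - 0.27z^2
u, y = simulate_eiv([-0.2, -0.15], [0.3, -0.27], 8000, 0.5, 0.2, rng)
```

Only `u` and `y` would be available to an identification algorithm; `u0`, `y0`, and the variances are hidden.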
For the EIV system described in Section 2, to overcome the influence of the input noise we use an MA process $\{w(t)\}$ as a substitute for the joint impact of the mutually independent input and output noises, since two mutually independent sequences of independent random variables can be represented by an MA process having the same spectrum as their joint contribution [22]. The system can then be rewritten as an ARMAX model, and the task becomes to estimate the parameters of the new model and to determine the variances of the input/output noises in terms of $\{w(t)\}$. Thus a two-step recursive estimation algorithm can be constructed to identify the system parameters $\theta$ and the noise variances $\lambda_y$ and $\lambda_u$, respectively.

Step 1. At time $t$, use the previously obtained residual estimates $e(t-1), e(t-2), \ldots$ to update the parameters and obtain the current estimates $\vartheta(t)$ and $e(t)$.

Step 2. Use these results to calculate the noise variance estimates $\lambda_y(t)$ and $\lambda_u(t)$.

In the following, we give the algorithm together with its derivation.

Step 1 (estimation of the unknown system parameter $\vartheta$). For convenience, denote the last two terms of (5) by $v(t)$; that is,

$$v(t) = A(z)\,\tilde y(t) - B(z)\,\tilde u(t), \quad (6)$$

where $\tilde u(t)$ and $\tilde y(t)$ are mutually independent with

$$E\tilde y(t) = E\tilde u(t) = 0, \qquad E\tilde y^2(t) = \lambda_y, \qquad E\tilde u^2(t) = \lambda_u. \quad (7)$$

Introduce an MA$(n_c)$ process

$$w(t) = e(t) + c_1 e(t-1) + \cdots + c_{n_c} e(t-n_c), \quad (8)$$

where $\{e(t)\}$ is white noise with $Ee(t) = 0$, $Ee^2(t) = \lambda_e$, and

$$n_c = \max\{n_a, n_b\}. \quad (9)$$

It can be shown that a pair $\{c_i,\, 0 \le i \le n_c\}$ and $\lambda_e$ can be found such that $\{w(t)\}$ and $\{v(t)\}$ have the same spectra [22], which means that $\{v(t)\}$ can be represented by $\{w(t)\}$ in (8), so that

$$y(t) = \varphi(t)^T \theta + w(t). \quad (10)$$

The $\{c_i,\, 0 \le i \le n_c\}$ and $\lambda_e$ are intermediate variables. For the new model (10), denote a new parameter vector $\vartheta$ and a new regressor vector $\psi(t)$ by

$$\vartheta^T = (\theta^T, c_1, c_2, \ldots, c_{n_c}), \quad (11)$$
$$\psi(t) = (\varphi(t)^T, e(t-1), \ldots, e(t-n_c))^T, \quad (12)$$

and then the EIV system (5) can be rewritten as

$$y(t) = \psi(t)^T \vartheta + e(t). \quad (13)$$

In this step, we give a recursive algorithm to identify (13). The covariance matrix of the regressor vector $\psi(t)$ and the output $y(t)$ are denoted by

$$R_\psi = E\psi(i)\psi(i)^T, \qquad r_{\psi y} = E\psi(i)y(i). \quad (14)$$

For convenience, introduce

$$\tilde R_\psi(t) = \sum_{i=1}^{t} \psi(i)\psi(i)^T, \quad (15)$$

$$\tilde r_{\psi y}(t) = \sum_{i=1}^{t} \psi(i)y(i). \quad (16)$$

Assume that the input $\{u(t)\}$ is a stationary process; then in the calculation we can use the sample means $\tilde R_\psi(t)/t$ and $\tilde r_{\psi y}(t)/t$ instead of the mathematical expectations $R_\psi$ and $r_{\psi y}$ in (14), since by ergodicity

$$\tilde R_\psi(t)/t \to R_\psi, \qquad \tilde r_{\psi y}(t)/t \to r_{\psi y} \quad \text{as } t \to \infty. \quad (17)$$

Lemma 1 (matrix inversion formula [23]). For matrices $A \in R^{n \times n}$, $C \in R^{n \times 1}$, and $D \in R^{n \times 1}$, the inverse of $B = A + CD^T$ is

$$(A + CD^T)^{-1} = A^{-1} - \beta^{-1} A^{-1} C D^T A^{-1}, \quad (18)$$

where $\beta = 1 + D^T A^{-1} C$.

Theorem 2. For system (13), under assumptions (A1)-(A3), the parameter vector $\vartheta$ can be estimated recursively as follows, with a large $P(0)$ and arbitrary $\vartheta(0)$:

$$\vartheta(t) = \vartheta(t-1) + \beta^{-1}(t) P(t-1) \tilde\psi(t)\,[\,y(t) - \tilde\psi(t)^T \vartheta(t-1)\,],$$
$$P(t) = P(t-1) - \beta^{-1}(t) P(t-1) \tilde\psi(t) \tilde\psi(t)^T P(t-1),$$
$$e(t) = y(t) - \tilde\psi(t)^T \vartheta(t),$$
$$\tilde\psi(t) = (\varphi(t)^T, e(t-1), \ldots, e(t-n_c))^T, \quad (19)$$

where $\beta(t) = 1 + \tilde\psi(t)^T P(t-1) \tilde\psi(t)$.

Proof. As in the least squares method, we work with the covariance matrix. Multiplying the system model (13) by $\psi(t)$ and taking expectations gives

$$E\psi(t)y(t) - E\psi(t)\psi(t)^T \vartheta = E\psi(t)e(t), \quad (20)$$

which, by assumptions (A2) and (A3), can be rewritten as

$$R_\psi \vartheta = r_{\psi y}. \quad (21)$$

Replacing $R_\psi$ and $r_{\psi y}$ with $\tilde R_\psi(t)/t$ and $\tilde r_{\psi y}(t)/t$ from (15) and (16), as mentioned before, one has

$$\tilde R_\psi(t)\, \vartheta = \tilde r_{\psi y}(t). \quad (22)$$

We note that (15) and (16) imply

$$\tilde R_\psi(t) = \tilde R_\psi(t-1) + \psi(t)\psi(t)^T, \qquad \tilde r_{\psi y}(t) = \tilde r_{\psi y}(t-1) + \psi(t)y(t). \quad (23)$$

Then, provided $\tilde R_\psi(t)$ is invertible, $\vartheta$ is estimated as

$$\vartheta(t) = \tilde R_\psi^{-1}(t)\, \tilde r_{\psi y}(t) = \tilde R_\psi^{-1}(t)\,(\tilde r_{\psi y}(t-1) + \psi(t)y(t)) = \tilde R_\psi^{-1}(t)\,[(\tilde R_\psi(t) - \psi(t)\psi(t)^T)\vartheta(t-1) + \psi(t)y(t)] = \vartheta(t-1) + \tilde R_\psi^{-1}(t)\,\psi(t)\,(y(t) - \psi(t)^T \vartheta(t-1)). \quad (24)$$

Using (23), we can compute $\vartheta(t)$, but each recursive step then involves an inversion of $\tilde R_\psi(t)$, which is very time consuming. To avoid the inversion, introduce

$$P(t) = \tilde R_\psi^{-1}(t), \quad (25)$$

and apply the matrix inversion formula of Lemma 1 to (24); taking $A = \tilde R_\psi(t-1)$ and $C = D = \psi(t)$, we have

$$P(t) = P(t-1) - \frac{P(t-1)\psi(t)\psi(t)^T P(t-1)}{1 + \psi(t)^T P(t-1)\psi(t)}. \quad (26)$$

Moreover, by (26) it is clear that

$$\tilde R_\psi^{-1}(t)\,\psi(t) = \frac{P(t-1)\psi(t)}{1 + \psi(t)^T P(t-1)\psi(t)}. \quad (27)$$

Substituting (27) into (24), we have

$$\vartheta(t) = \vartheta(t-1) + \frac{P(t-1)\psi(t)}{1 + \psi(t)^T P(t-1)\psi(t)}\,(y(t) - \psi(t)^T \vartheta(t-1)). \quad (28)$$

Noting that the $\{e(t-i)\}$ in (12) are unknown, $\psi(t)$ cannot be constructed directly. This problem can be solved in a similar way as in the RPLR (recursive pseudolinear regression) algorithm [22], that is, by forming a substitute for $\psi(t)$:

$$\tilde\psi(t) = (\varphi(t)^T, e(t-1), \ldots, e(t-n_c))^T, \quad (29)$$

where $e(i) = y(i) - \tilde\psi(i)^T \vartheta(i)$, $i \ge 1$. The proof of the substitution's correctness is omitted (see details in [22]). Then, using $\tilde\psi(t)$ defined by (29) to replace $\psi(t)$ in (26) and (28), we obtain Theorem 2.

Theorem 2 gives a recursive estimate of $\vartheta$: at time $t-1$ only the finite-dimensional information $\{\vartheta(t-1), P(t-1), \tilde\psi(t-1)\}$ is stored; at time $t$ it is updated by (23), (27), and (29) with a fixed amount of operations, which yields high computational efficiency and makes the algorithm suitable for online applications. Once $\vartheta(t)$ is obtained, $\theta(t)$ follows immediately from (11). Next we estimate the noise properties, which is helpful in real applications such as cascade system modeling.

Step 2 (estimation of the noise variances $\lambda_y(t)$ and $\lambda_u(t)$). To estimate the noise variances $\lambda_y$ and $\lambda_u$, we need the relationship between $\{v(t)\}$ in (6) and $\{w(t)\}$ in (8). The spectrum $\Phi_s(\omega)$ of a signal $\{s(t)\}$ is the Fourier transform of its covariance function $r_s(\tau)$:

$$\Phi_s(\omega) = \sum_{\tau=-\infty}^{\infty} r_s(\tau)\, e^{-i\tau\omega}, \quad (30)$$

where

$$r_s(\tau) = E s(t)s(t-\tau) = \lim_{N\to\infty} \frac{1}{N}\sum_{t=1}^{N} s(t)s(t-\tau). \quad (31)$$

Since $\{w(t)\}$ and $\{v(t)\}$ have the same spectra, that is,

$$\Phi_w(\omega) \equiv \Phi_v(\omega), \quad (32)$$

it follows that for all $\tau = 0, 1, 2, \ldots, n_c$,

$$r_v(\tau) = r_w(\tau). \quad (33)$$

Thus we can find the relationship between $\lambda_y$, $\lambda_u$, and $\lambda_e$ through the covariance functions $r_v(\tau)$ and $r_w(\tau)$. At step $t$, an estimate of $r_s(\tau)$ is

$$r_s^\tau(t) = \frac{1}{t}\sum_{i=1}^{t} s(i)s(i-\tau). \quad (34)$$

We switch to the notation $r_v^\tau$, $r_w^\tau$ rather than $r_v(\tau)$, $r_w(\tau)$ to account for the dependence on the recursive step $t$. Introduce

$$a_\tau(t) = (\alpha_\tau(t), \alpha_{\tau+1}(t), \ldots, \alpha_{\tau+n_a}(t))^T,$$
$$b_\tau(t) = (\beta_{\tau+1}(t), \beta_{\tau+2}(t), \ldots, \beta_{\tau+n_b}(t))^T,$$
$$c_\tau(t) = (\gamma_\tau(t), \gamma_{\tau+1}(t), \ldots, \gamma_{\tau+n_c}(t))^T,$$
$$\varphi_{\tilde y}(t) = (-\tilde y(t), -\tilde y(t-1), \ldots, -\tilde y(t-n_a))^T,$$
$$\varphi_{\tilde u}(t) = (\tilde u(t-1), \ldots, \tilde u(t-n_b))^T,$$
$$\varphi_e(t) = (e(t), e(t-1), \ldots, e(t-n_c))^T, \quad (35)$$

where

$$\alpha_i(t) = \begin{cases}1, & i = 0,\\ a_i(t), & 0 < i \le n_a,\\ 0, & i > n_a \text{ or } i < 0,\end{cases} \qquad \beta_i(t) = \begin{cases} b_i(t), & 0 < i \le n_b,\\ 0, & i > n_b \text{ or } i \le 0,\end{cases} \qquad \gamma_i(t) = \begin{cases}1, & i = 0,\\ c_i(t), & 0 < i \le n_c,\\ 0, & i > n_c \text{ or } i < 0,\end{cases} \quad (36)$$

are transformations of the parameters in $\vartheta(t)$. Then, according to (A2), (A3), (6), (8), and (34), we have

$$r_w^\tau(t) = \frac{1}{t}\sum_{i=1}^{t} w(i)w(i-\tau) = \frac{1}{t}\sum_{i=1}^{t} c_0(t)^T \varphi_e(i)\,\varphi_e(i-\tau)^T c_\tau(t) = c_0(t)^T c_\tau(t)\, \lambda_e(t). \quad (37)$$

Similarly, we have

$$r_v^\tau(t) = a_0(t)^T a_\tau(t)\, \lambda_y + b_0(t)^T b_\tau(t)\, \lambda_u. \quad (38)$$

Then it is clear that, for $\tau = 0, 1, 2, \ldots, n_c$,

$$r_v^\tau(t) = r_w^\tau(t) \quad (39)$$

is a set of linear equations in the unknowns $\lambda_y$ and $\lambda_u$, in which $\{a_\tau(t), b_\tau(t), c_\tau(t)\}$ are known at time $t$, and $\lambda_e$ can be estimated by

$$\lambda_e(t) = r_e^0(t) = \frac{1}{t}\sum_{i=1}^{t} e^2(i), \quad (40)$$

taking $e(t)$ from Theorem 2 as the estimate of $e(t)$. The noise variance estimates $\lambda_y(t)$ and $\lambda_u(t)$ then follow from (37)-(40).

4. Simulation Examples

This section presents a numerical evaluation of the identification algorithm proposed in this paper. Matlab 7.7 is used for the simulations. To demonstrate the validity for various EIV systems, we have chosen different signal processes as the true input $\{u_0(t)\}$ in each case: in Case A, a zero-mean Gaussian process; in Case B, a sawtooth signal; in Case C, an ARMA process. The noise processes $\{\tilde u(t)\}$ and $\{\tilde y(t)\}$ in all cases are mutually uncorrelated zero-mean white noises. The robustness of the algorithm is also tested, as shown in Case C.

Case A. First we examine how well the algorithm works for systems with Gaussian input. Consider an EIV dynamic system with $n_a = n_b = 2$ and

$$\theta = (a_1, a_2, b_1, b_2)^T = (-0.2, -0.15, 0.3, -0.27)^T, \quad (41)$$

that is, the system denoted by System 1:

$$y_0(t) - 0.2\,y_0(t-1) - 0.15\,y_0(t-2) = 0.3\,u_0(t-1) - 0.27\,u_0(t-2). \quad (42)$$

Table 1: Estimation results of System 1.
Parameter   | True value | Estimation
$a_1$       | -0.20      | -0.1975 ± 0.0025
$a_2$       | -0.15      | -0.1483 ± 6.9910e-4
$b_1$       | 0.30       | 0.2842 ± 2.3776e-4
$b_2$       | -0.27      | -0.2574 ± 7.5975e-4
$\lambda_y$ | 0.20       | 0.1922 ± 4.5330e-4
$\lambda_u$ | 0.50       | 0.5125 ± 0.0024
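A minimal sketch of the Step 1 recursion of Theorem 2, applied to data generated as in Case A, might look as follows (the initialisation $P(0)$, $\vartheta(0)$ and the data-generation details are our own choices, not the authors' code):

```python
import numpy as np

def rpls_eiv(u, y, na, nb):
    """Step 1 sketch: recursive pseudolinear regression for the rewrite
    y(t) = psi(t)^T vartheta + e(t) of Theorem 2.  Past residuals stand
    in for the unknown e(t-i), as in (29)."""
    nc = max(na, nb)
    d = na + nb + nc
    vart = np.zeros(d)                 # vartheta(0) arbitrary
    P = 1e4 * np.eye(d)                # "large P(0)"
    e_hist = np.zeros(nc)              # e(t-1), ..., e(t-nc)
    for t in range(max(na, nb), len(y)):
        # psi~(t) = (-y(t-1..t-na), u(t-1..t-nb), e(t-1..t-nc))
        psi = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1], e_hist])
        beta = 1.0 + psi @ P @ psi     # beta(t) of Theorem 2
        gain = (P @ psi) / beta
        vart = vart + gain * (y[t] - psi @ vart)      # parameter update
        P = P - np.outer(gain, psi @ P)               # matrix inversion lemma
        e_hist = np.concatenate([[y[t] - psi @ vart], e_hist[:-1]])  # e(t)
    return vart

# illustrative data from a System-1-like EIV setup (Case A coefficients)
rng = np.random.default_rng(2)
n = 8000
u0 = rng.normal(size=n)
y0 = np.zeros(n)
for t in range(2, n):
    y0[t] = 0.2 * y0[t - 1] + 0.15 * y0[t - 2] + 0.3 * u0[t - 1] - 0.27 * u0[t - 2]
u = u0 + rng.normal(0.0, np.sqrt(0.5), n)   # noisy input
y = y0 + rng.normal(0.0, np.sqrt(0.2), n)   # noisy output

vartheta = rpls_eiv(u, y, 2, 2)
theta_hat = vartheta[:4]                    # estimate of (a1, a2, b1, b2)
```

The first four entries of `vartheta` estimate $\theta$; the remaining entries are the intermediate MA coefficients $c_i$ used in Step 2.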
In the simulation, the input signal $\{u_0(t)\}$ is a zero-mean Gaussian process with unit variance, and the noise signals $\{\tilde u(t)\}$ and $\{\tilde y(t)\}$ are mutually uncorrelated white noises with $\lambda_y = 0.2$ and $\lambda_u = 0.5$, a strong noise environment for the system. The system is simulated for $N = 8000$ steps. The calculation results are listed in Table 1, where the calculation error is defined as the standard deviation. Figures 2 and 3 show the system parameter and noise variance estimates separately; solid lines indicate the true values and dashed lines the corresponding estimates. Noting that the vertical coordinate ranges are very small in both figures, it can be seen that the estimates converge quickly to the true parameters.

Figure 2: Estimation result of $\theta$ in System 1.
Figure 3: Estimation result of $\lambda_y$ and $\lambda_u$ in System 1.

Case B. Consider another system, System 2, with sawtooth input:

$$y_0(t) = \frac{z - 2z^2}{(1 - 0.9z)(1 - 0.8z)}\, u_0(t), \quad (43)$$

that is,

$$\theta = (a_1, a_2, b_1, b_2)^T = (-1.7, 0.72, 1, -2)^T. \quad (44)$$

This is the counterexample presented in [21] to show the non-convergence of the Frisch-based method analyzed in [16]. We use the proposed algorithm to identify this system under the same conditions; that is, the input sawtooth signal has amplitude 1 and frequency 10 Hz, and the noise variances are

$$\lambda_y = \lambda_u = 0.5. \quad (45)$$

The simulation results for the system parameters and the noise variances are all displayed in Figure 4, with true values again denoted by solid lines and estimates by dotted lines. The algorithm performs even better for this kind of EIV system: all the estimates converge to their corresponding true values.

Figure 4: Estimates of System 2 by the new recursive method.

Case C. The proposed algorithm is valid not only for EIV models with an iid random input process but also for more general input signals such as ARMA processes. To examine this, we have considered the following system, denoted System 3:

$$y_0(t) = \frac{2(1 - 0.5z)\,z}{1 + 0.2z - 0.48z^2}\, u_0(t), \quad (46)$$

in which $n_a = n_b = n_c = 2$ and

$$\theta = (a_1, a_2, b_1, b_2)^T = (0.2, -0.48, 2, -1)^T. \quad (47)$$

In this system, the input process $\{u_0(t)\}$ is an ARMA process:

$$(1 - 0.5z)\, u_0(t) = (1 - 0.3z)\, \eta(t), \quad (48)$$

with $\eta(t)$ a zero-mean iid Gaussian process whose variance is

$$\lambda_\eta = 1. \quad (49)$$

The noise signals $\{\tilde u(t)\}$ and $\{\tilde y(t)\}$ are mutually uncorrelated white noises with

$$\lambda_y = 0.2, \qquad \lambda_u = 0.5. \quad (50)$$

The estimation results are shown in Figures 5 and 6, with true values marked by solid lines and estimates by dotted lines. We can see that for an ARMA input process the estimation still converges quickly to the true parameters.

To verify the robustness against noise, we further ran several experiments with different signal-to-noise ratios (SNRs) in System 3. The noise processes used in the experiments are shown in the first three columns of Table 2: column 1 gives the variance used to generate the input signal, and columns 2 and 3 give the input and output noise variances.
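The Step 2 computation (37)-(40), which converts the Step 1 estimates into noise-variance estimates and is used in all of the cases above, amounts to a small linear least-squares solve. A sketch (our illustration, assuming the $A$, $B$ estimates and the covariances of $w(t)$ are already available; the closed-form check uses the System 1 coefficients):

```python
import numpy as np

def noise_variances(a, b, r_w):
    """Solve the linear system r_v(tau) = r_w(tau), tau = 0..nc (eq. (39)),
    for (lam_y, lam_u), given estimates of A(z), B(z) and the covariances
    r_w(0..nc) of the MA substitute w(t)."""
    alpha = np.concatenate([[1.0], np.asarray(a, float)])   # (1, a_1..a_na)
    beta = np.concatenate([[0.0], np.asarray(b, float)])    # (0, b_1..b_nb)
    def acov(p, tau):            # sum_i p_i p_{i+tau}, zero outside range
        return float(p[:len(p) - tau] @ p[tau:]) if tau < len(p) else 0.0
    nc = len(r_w) - 1
    # one equation r_v(tau) = lam_y*acov(alpha,tau) + lam_u*acov(beta,tau) per lag
    M = np.array([[acov(alpha, t), acov(beta, t)] for t in range(nc + 1)])
    lam_y, lam_u = np.linalg.lstsq(M, np.asarray(r_w, float), rcond=None)[0]
    return lam_y, lam_u

# consistency check with the System 1 coefficients: synthesise r_w(0..2)
# from known variances lam_y = 0.2, lam_u = 0.5 and recover them
a, b = [-0.2, -0.15], [0.3, -0.27]
r_w = [0.29395, -0.0745, -0.03]   # = 0.2*acov(alpha,t) + 0.5*acov(beta,t)
lam_y, lam_u = noise_variances(a, b, r_w)
```

In the algorithm proper, `r_w` would be $\hat c_0(t)^T \hat c_\tau(t)\lambda_e(t)$ from (37) rather than synthesised values.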
The average estimation error for System 3 (including both the system parameter estimates and the noise estimates) is listed in column 4. It is clear that the performance of the proposed algorithm remains good as the noises increase, even when the noise variances equal or exceed that of the input signal.

Table 2: Comparison of the estimations for different SNRs.
$\lambda_s$ | $\lambda_u$ | $\lambda_y$ | Average error
1    | 0.05 | 0.05 | 0.0025
1    | 0.2  | 0.2  | 0.0039
1    | 0.2  | 0.5  | 0.0040
1    | 0.5  | 0.2  | 0.0049
1    | 0.5  | 0.5  | 0.0052
1    | 1    | 1    | 0.0086
1    | 1    | 2    | 0.0064

Figure 5: Estimation result of $\theta$ in System 3 with ARMA input process.
Figure 6: Estimation of $\lambda_y$ and $\lambda_u$ in System 3 with ARMA input process.

5. Conclusions

This paper discussed the identification problem of dynamic errors-in-variables (EIV) systems. The EIV model is very useful and has a wide range of engineering applications, such as cascade system modeling and camera calibration. In comparison with the usual errors-in-equation models, the EIV model has a more troublesome noise problem, because the input measurements are also disturbed. Since several severe errors in the previous analysis of the attractive Frisch scheme identification approach have been pointed out in recent studies, we developed an adaptive algorithm to solve the same modeling problem. For the dynamic EIV model with mutually independent input and output noises, this two-step algorithm can not only estimate the system parameter vector as well as the noise variances with good accuracy but also reduce the computational complexity significantly thanks to its recursive form. Several simulation results show that the presented algorithm achieves high accuracy, fast convergence, and good antinoise performance. Theoretical analysis of the proposed algorithm and the identification of more complicated models, such as nonlinear EIV models, will be considered in future work.

Acknowledgments

This work is supported by the funds NSFC61171121 and NSFC60973049, the Science Foundation of Chinese Ministry of Education, and China Mobile 2012.

References

[1] T. Söderström, "Errors-in-variables methods in system identification," Automatica, vol. 43, no. 6, pp. 939-958, 2007.
[2] R. J. Adcock, "Note on the method of least squares," Analyst, vol. 4, pp. 183-184, 1877.
[3] L. Ng and V. Solo, "Errors-in-variables modeling in optical flow estimation," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1528-1540, 2001.
[4] W. X. Zheng and C. B. Feng, "Unbiased parameter estimation of linear systems in the presence of input and output noise," International Journal of Adaptive Control and Signal Processing, vol. 3, no. 3, pp. 231-251, 1989.
[5] R. Diversi, "Bias-eliminating least-squares identification of errors-in-variables models with mutually correlated noises," International Journal of Adaptive Control and Signal Processing, 2012.
[6] R. Diversi, R. Guidorzi, and U. Soverini, "Identification of ARMAX models with noisy input and output," World Congress, vol. 18, no. 1, pp. 13121-13126, 2011.
[7] S. Beghelli, R. P. Guidorzi, and U. Soverini, "The Frisch scheme in dynamic system identification," Automatica, vol. 26, no. 1, pp. 171-176, 1990.
[8] K. V. Fernando and H. Nicholson, "Identification of linear systems with input and output noise: the Koopmans-Levin method," IEE Proceedings D, vol. 132, no. 1, pp. 30-36, 1985.
[9] J. C. Agüero and G. C. Goodwin, "Identifiability of errors in variables dynamic systems," Automatica, vol. 44, no. 2, pp. 371-382, 2008.
[10] W. X. Zheng, "A bias correction method for identification of linear dynamic errors-in-variables models," IEEE Transactions on Automatic Control, vol. 47, no. 7, pp. 1142-1147, 2002.
[11] B. D. O. Anderson and M. Deistler, "Identifiability in dynamic errors-in-variables models," Journal of Time Series Analysis, vol. 5, no. 1, pp. 1-13, 1984.
[12] H.-F. Chen and J.-M. Yang, "Strongly consistent coefficient estimate for errors-in-variables models," Automatica, vol. 41, no. 6, pp. 1025-1033, 2005.
[13] M. Ekman, "Identification of linear systems with errors in variables using separable nonlinear least-squares," in Proceedings of the 16th Triennial World Congress of the International Federation of Automatic Control (IFAC '05), pp. 815-820, Prague, Czech Republic, July 2005.
[14] K. Mahata, "An improved bias-compensation approach for errors-in-variables model identification," Automatica, vol. 43, no. 8, pp. 1339-1354, 2007.
[15] H. J. Palanthandalam-Madapusi, T. H. van Pelt, and D. S. Bernstein, "Parameter consistency and quadratically constrained errors-in-variables least-squares identification," International Journal of Control, vol. 83, no. 4, pp. 862-877, 2010.
[16] T. Söderström, "Accuracy analysis of the Frisch scheme for identifying errors-in-variables systems," IEEE Transactions on Automatic Control, vol. 52, no. 6, pp. 985-997, 2007.
[17] R. Diversi and R. Guidorzi, "A covariance-matching criterion in the Frisch scheme identification of MIMO EIV models," System Identification, vol. 16, no. 1, pp. 1647-1652, 2012.
[18] T. Söderström, "System identification for the errors-in-variables problem," Transactions of the Institute of Measurement and Control, vol. 34, no. 7, pp. 780-792, 2012.
[19] T. Söderström, "Extending the Frisch scheme for errors-in-variables identification to correlated output noise," International Journal of Adaptive Control and Signal Processing, vol. 22, no. 1, pp. 55-73, 2008.
[20] M. Hong and T. Söderström, "Relations between bias-eliminating least squares, the Frisch scheme and extended compensated least squares methods for identifying errors-in-variables systems," Automatica, vol. 45, no. 1, pp. 277-282, 2009.
[21] D. Fan and G. Luo, "Frisch scheme identification for errors-in-variables systems," in Proceedings of the 9th IEEE International Conference on Cognitive Informatics (ICCI '10), pp. 794-799, Beijing, China, July 2010.
[22] L. Ljung, System Identification: Theory for the User, Prentice Hall, 2nd edition, 1999.
[23] G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, NY, USA, 1973.