Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 597628, 13 pages
http://dx.doi.org/10.1155/2013/597628

Research Article

Neural-Network-Based Approach for Extracting Eigenvectors and Eigenvalues of Real Normal Matrices and Some Extension to Real Matrices

Xiongfei Zou,1 Ying Tang,2 Shirong Bu,1 Zhengxiang Luo,1 and Shouming Zhong3

1 School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
2 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
3 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China

Correspondence should be addressed to Xiongfei Zou; zouxiongfei@sohu.com

Received 30 October 2012; Accepted 16 January 2013

Academic Editor: Nicola Mastronardi

Copyright © 2013 Xiongfei Zou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper introduces a novel neural-network-based approach for extracting some eigenpairs of real normal matrices of order $n$. Based on the proposed algorithm, the eigenvalues that have the largest and smallest modulus, real parts, or absolute values of imaginary parts can be extracted, as well as the corresponding eigenvectors. Although the ordinary differential equation on which the proposed algorithm is built is only $n$-dimensional, it succeeds in extracting $n$-dimensional complex eigenvectors, which are in effect $2n$-dimensional real vectors. Moreover, we show that extracting eigenpairs of general real matrices can be reduced to extracting eigenpairs of real normal matrices by employing the norm-reducing technique. Numerical experiments verify the computational capability of the proposed algorithm.

1. Introduction

The problem of extracting special eigenpairs of real matrices has attracted much attention both in theory [1–4] and in many engineering fields such as real-time signal processing [5–8] and principal or minor component analysis [9–12]. For example, we may wish to obtain the eigenvectors and corresponding eigenvalues that (1) have the largest or smallest modulus; (2) have the largest or smallest real parts; (3) have the largest or smallest imaginary parts in absolute value. The two most popular methods for this problem are the power method and the Rayleigh quotient method, in their direct forms or in the context of inverse iteration [13]. Recently, many neural-network-based methods have also been proposed to solve this problem [14–23]. However, most of those methods focus on computing eigenpairs of real symmetric matrices. The following two ordinary differential equations (ODEs):

$$\frac{dx(t)}{dt} = A x(t) - x(t)^T A x(t)\, x(t), \tag{1}$$

$$\frac{dx(t)}{dt} = x(t)^T x(t)\, A x(t) - x(t)^T A x(t)\, x(t) \tag{2}$$

were proposed by [19, 23], respectively, where $A$ is a real symmetric matrix. Both (1) and (2) efficiently compute the largest eigenvalue of $A$, as well as the corresponding eigenvector. In addition, they can compute the smallest eigenvalue of $A$ and the corresponding eigenvector by simply replacing $A$ with $-A$; for example,

$$\frac{dx(t)}{dt} = -x(t)^T x(t)\, A x(t) + x(t)^T A x(t)\, x(t). \tag{3}$$
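Before moving on, it may help to see the dynamics of (2) and (3) concretely. The following sketch (ours, not the authors' implementation) integrates them with a plain forward-Euler scheme; the step size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def flow_eigpair(A, smallest=False, step=0.01, iters=20000, seed=0):
    """Integrate ODE (2) (or (3) when smallest=True) by forward Euler.

    For a symmetric A, returns the Rayleigh-quotient estimate of the
    largest (or smallest) eigenvalue and the limiting state x, which
    approximates a corresponding eigenvector (cf. Lemmas 2 and 3 below).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # random x(0): nonzero projection w.h.p.
    sign = -1.0 if smallest else 1.0
    for _ in range(iters):
        Ax = A @ x
        # dx/dt = +/- (x^T x * A x - x^T A x * x), i.e. (2) or (3)
        x = x + step * sign * ((x @ x) * Ax - (x @ Ax) * x)
    return (x @ (A @ x)) / (x @ x), x

M = np.array([[2.0, 1.0], [1.0, 3.0]])     # small symmetric test matrix
print(flow_eigpair(M)[0])                  # ~ (5 + sqrt(5))/2 = 3.618...
print(flow_eigpair(M, smallest=True)[0])   # ~ (5 - sqrt(5))/2 = 1.381...
```

Note that the flow preserves $\|x\|$ in continuous time, so no renormalization step is needed; only the direction of $x(t)$ evolves.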
The following ODE for solving the generalized eigenvalue problem was proposed by [22]:

$$\frac{dx(t)}{dt} = A x(t) - f(x(t))\, B x(t), \tag{4}$$

where $A$ and $B$ are two real symmetric matrices and $f$ can take a fairly general form. In particular, if $B$ is the identity matrix, (4) can be used to solve the standard eigenvalue problem, as (1) and (2) do. References [16–18] extended those neural-network-based approaches to real antisymmetric or special real matrices of order $n$, where the proposed networks are described by $2n$-dimensional ODEs, since eigenvectors in those cases may be $n$-dimensional complex vectors, that is, $2n$-dimensional real vectors.

In this paper, we propose an approach for extracting six types of eigenvalues of $n \times n$ real normal matrices, and the corresponding eigenvectors, based on (2) or (3). Although the eigenvectors of real normal matrices may be $n$-dimensional complex vectors, the computation of the proposed method is carried out entirely in $n$-dimensional real vector space, which greatly reduces the scale of the networks. We then show that any real matrix can be brought arbitrarily close to a normal matrix by a sequence of similarity transformations, so the proposed algorithm extends to the case of arbitrary real matrices.

2. Main Results

Let $i = \sqrt{-1}$ be the imaginary unit, $\bar{x}$ the conjugate of $x$, and $\mathrm{diag}[A_1, \ldots, A_r]$ a block diagonal matrix with the square matrix $A_j$ in the $j$th diagonal block. Unless otherwise stated, $A$ is a real normal matrix throughout this paper.

Lemma 1 (see [13]). $A$ is a real normal matrix of order $n$ if and only if there exists an orthogonal matrix $Q$ such that

$$Q^T A Q = \mathrm{diag}[\lambda_1, \ldots, \lambda_m, \Lambda_{m+1}, \ldots, \Lambda_{m+k}], \quad m + 2k = n, \; 0 \le m \le n, \tag{5}$$

where

$$Q = [\underbrace{u_1, \ldots, u_m}_{m \text{ real eigenvectors}}, \underbrace{u_{m+1}^R, u_{m+1}^I, \ldots, u_{m+k}^R, u_{m+k}^I}_{k \text{ pairs of complex eigenvectors}}], \qquad \Lambda_{m+j} = \begin{pmatrix} \mu_{m+j} & |\omega_{m+j}| \\ -|\omega_{m+j}| & \mu_{m+j} \end{pmatrix}, \quad j = 1, \ldots, k. \tag{6}$$

Here, $\lambda_j$, $j = 1, \ldots, m$, are the $m$ real eigenvalues of $A$ corresponding to the $m$ real eigenvectors $u_j$, and $\mu_{m+j} \pm i\,\omega_{m+j}$, $j = 1, \ldots, k$, are the $k$ pairs of complex eigenvalues of $A$ corresponding to the $k$ pairs of complex eigenvectors $u_{m+j}^R \pm i\, u_{m+j}^I$.

For simplicity, let $\mu_j = \lambda_j$ and $\omega_j = 0$ for $j = 1, \ldots, m$. Based on (5) and (6), it is straightforward to verify that

$$Q^T (A - A^T)(A - A^T)^T Q = 4\, \mathrm{diag}[\omega_1^2, \ldots, \omega_m^2, \omega_{m+1}^2, \omega_{m+1}^2, \ldots, \omega_{m+k}^2, \omega_{m+k}^2], \tag{7}$$

$$Q^T (A + A^T) Q = 2\, \mathrm{diag}[\mu_1, \ldots, \mu_m, \mu_{m+1}, \mu_{m+1}, \ldots, \mu_{m+k}, \mu_{m+k}], \tag{8}$$

$$Q^T (A A^T) Q = \mathrm{diag}[\mu_1^2, \ldots, \mu_m^2, \mu_{m+1}^2 + \omega_{m+1}^2, \mu_{m+1}^2 + \omega_{m+1}^2, \ldots, \mu_{m+k}^2 + \omega_{m+k}^2, \mu_{m+k}^2 + \omega_{m+k}^2]. \tag{9}$$

The following six conditions and two lemmas will be used frequently in the sequel.

(C1) Let $\mathcal{J}_1 = \{i \mid |\omega_i| = \max\{|\omega_1|, \ldots, |\omega_{m+k}|\}\}$. Then for all $i, j \in \mathcal{J}_1$, $\mu_i = \mu_j$.
(C2) Let $\mathcal{J}_2 = \{i \mid |\omega_i| = \min\{|\omega_1|, \ldots, |\omega_{m+k}|\}\}$. Then for all $i, j \in \mathcal{J}_2$, $\mu_i = \mu_j$.
(C3) Let $\mathcal{J}_3 = \{i \mid \mu_i = \max\{\mu_1, \ldots, \mu_{m+k}\}\}$. Then for all $i, j \in \mathcal{J}_3$, $|\omega_i| = |\omega_j|$.
(C4) Let $\mathcal{J}_4 = \{i \mid \mu_i = \min\{\mu_1, \ldots, \mu_{m+k}\}\}$. Then for all $i, j \in \mathcal{J}_4$, $|\omega_i| = |\omega_j|$.
(C5) Let $\mathcal{J}_5 = \{i \mid \mu_i^2 + \omega_i^2 = \max\{\mu_1^2 + \omega_1^2, \ldots, \mu_{m+k}^2 + \omega_{m+k}^2\}\}$. Then for all $i, j \in \mathcal{J}_5$, $\mu_i = \mu_j$.
(C6) Let $\mathcal{J}_6 = \{i \mid \mu_i^2 + \omega_i^2 = \min\{\mu_1^2 + \omega_1^2, \ldots, \mu_{m+k}^2 + \omega_{m+k}^2\}\}$. Then for all $i, j \in \mathcal{J}_6$, $\mu_i = \mu_j$.
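When experimenting, the six conditions can be checked directly from the spectrum. The sketch below is our own illustration using NumPy's eigenvalue routine with an ad hoc tolerance; it is only a verification aid (Remark 6 later gives a spectrum-free test for (C1)).

```python
import numpy as np

def check_conditions(A, tol=1e-9):
    """Check (C1)-(C6) for a real normal matrix A from its spectrum.

    Keeps one eigenvalue per conjugate pair, mirroring the lists
    mu_1..mu_{m+k} and omega_1..omega_{m+k} used in the paper.
    """
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag >= -tol]          # one representative per pair
    mu, w = lam.real, np.abs(lam.imag)
    mod2 = mu**2 + w**2

    def same_on_argext(key, val, use_max):
        ext = key.max() if use_max else key.min()
        sel = np.abs(key - ext) < tol
        return np.ptp(val[sel]) < tol    # all selected values (nearly) equal

    return {
        "C1": same_on_argext(w, mu, True),
        "C2": same_on_argext(w, mu, False),
        "C3": same_on_argext(mu, w, True),
        "C4": same_on_argext(mu, w, False),
        "C5": same_on_argext(mod2, mu, True),
        "C6": same_on_argext(mod2, mu, False),
    }
```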
Lemma 2 (Theorem 4 in [23]). Assume that the nonzero initial point $x(0) \in \mathbb{R}^n$ is not orthogonal to the eigensubspace corresponding to the largest eigenvalue of $A$. Then the solution of (2) starting from $x(0)$ converges to an eigenvector corresponding to the largest eigenvalue of $A$, and that eigenvalue equals $\lim_{t \to +\infty} x(t)^T A x(t)/x(t)^T x(t)$.

Lemma 3 (Theorem 5 in [23]). Assume that the nonzero initial point $x(0) \in \mathbb{R}^n$ is not orthogonal to the eigensubspace corresponding to the smallest eigenvalue of $A$. Then the solution of (3) starting from $x(0)$ converges to an eigenvector corresponding to the smallest eigenvalue of $A$, and that eigenvalue equals $\lim_{t \to +\infty} x(t)^T A x(t)/x(t)^T x(t)$.

Remark 4. If $x(0)$ is chosen at random, its projection on the eigensubspace corresponding to the largest or smallest eigenvalue of $A$ is nonzero with high probability. Hence, (2) and (3) almost always work well with a randomly generated $x(0)$.

2.1. Computing the Eigenvalues with the Largest or Smallest Imaginary Parts in Absolute Value, as well as the Corresponding Eigenvectors. Without loss of generality, in this subsection we assume that $\omega_1 = \cdots = \omega_m = 0 < \omega_{m+1}^2 \le \cdots \le \omega_{m+k}^2$ in (7). Note that $(A - A^T)(A - A^T)^T = -(A - A^T)^2$. Based on (7), we know that $0$ (if any) is an eigenvalue of $-(A - A^T)^2$ corresponding to the eigenvectors $u_j$, $j = 1, \ldots, m$, and that $4\omega_{m+j}^2$, $j = 1, \ldots, k$, is an eigenvalue of $-(A - A^T)^2$ corresponding to the eigenvectors $u_{m+j}^R$ and $u_{m+j}^I$. Replacing $A$ with the symmetric matrix $-(A - A^T)^2$ in (2) gives

$$\frac{dx(t)}{dt} = -x(t)^T x(t)\, (A - A^T)^2 x(t) + x(t)^T (A - A^T)^2 x(t)\, x(t). \tag{10}$$

By Lemma 2, $\alpha = \lim_{t \to \infty} x(t)$ is an eigenvector of $-(A - A^T)^2$ corresponding to $4\omega_{m+k}^2$, and $4\omega_{m+k}^2 = \lim_{t \to \infty}\bigl(-x(t)^T (A - A^T)^2 x(t)/x(t)^T x(t)\bigr)$ (thus $|\omega_{m+k}|$ is obtained), where $x(t)$ is a solution of (10). If $\omega_{m+k} = 0$, all $\omega_i$ are zero, meaning that $A$ is symmetric; in this case, we can directly use (2) or (3) to get the largest or smallest eigenvalue of $A$ and the corresponding eigenvector, respectively. Hence, we assume $\omega_{m+k} \ne 0$. The following lemma gives an approach for computing $\mu_{m+k}$ under condition (C1).

Lemma 5. Assume that (C1) holds. Let $\alpha = \lim_{t \to \infty} x(t)$, where $x(t)$ is a solution of (10). Then $\mu_{m+k} = \alpha^T (A + A^T)\alpha / 2\alpha^T \alpha$. In addition, $\mu_{m+k}^2 + \omega_{m+k}^2 = \alpha^T (A A^T)\alpha / \alpha^T \alpha$.

Proof. Assume that $4\omega_{m+j}^2$, $j = s, s+1, \ldots, k$, $s \ge 1$, are the largest eigenvalues of $-(A - A^T)^2$; that is, $|\omega_{m+s}| = |\omega_{m+s+1}| = \cdots = |\omega_{m+k}|$. Therefore,

$$\mathcal{J}_1 = \{m + s, \ldots, m + k\}. \tag{11}$$

Based on Lemma 2 and (7), $\alpha$ must be a linear combination of $u_j^R$ and $u_j^I$, $j \in \mathcal{J}_1$. Let

$$\alpha = \sum_{j \in \mathcal{J}_1} (\gamma_j u_j^R + \zeta_j u_j^I), \quad \gamma_j, \zeta_j \in \mathbb{R}. \tag{12}$$

In addition, by (8) we have

$$(A + A^T) u_j^R = 2\mu_j u_j^R, \quad (A + A^T) u_j^I = 2\mu_j u_j^I, \quad j \in \mathcal{J}_1, \tag{13}$$

and by (9) we have

$$(A A^T) u_j^R = (\mu_j^2 + \omega_j^2) u_j^R, \quad (A A^T) u_j^I = (\mu_j^2 + \omega_j^2) u_j^I, \quad j \in \mathcal{J}_1. \tag{14}$$

Because $Q$ is an orthogonal matrix and $\mu_j = \mu_{m+k}$ for all $j \in \mathcal{J}_1$ by (C1), it is straightforward to verify that

$$\frac{\alpha^T (A + A^T)\alpha}{2\alpha^T \alpha} = \frac{2\mu_{m+k}\sum_{j \in \mathcal{J}_1}(\gamma_j^2 + \zeta_j^2)}{2\sum_{j \in \mathcal{J}_1}(\gamma_j^2 + \zeta_j^2)} = \mu_{m+k}, \tag{15}$$

$$\frac{\alpha^T (A A^T)\alpha}{\alpha^T \alpha} = \frac{(\mu_{m+k}^2 + \omega_{m+k}^2)\sum_{j \in \mathcal{J}_1}(\gamma_j^2 + \zeta_j^2)}{\sum_{j \in \mathcal{J}_1}(\gamma_j^2 + \zeta_j^2)} = \mu_{m+k}^2 + \omega_{m+k}^2, \tag{16}$$

thus proving the lemma.

Remark 6. By Lemma 5, if $\mu_{m+k}^2 + \omega_{m+k}^2 \ne \alpha^T (A A^T)\alpha/\alpha^T \alpha$, then (C1) certainly does not hold; this can be used to check whether (C1) holds.

The following lemma gives an approach for computing a pair of conjugate eigenvectors of $A$ corresponding to the eigenvalues $\mu_{m+k} \pm i|\omega_{m+k}|$ under condition (C1).

Lemma 7. Assume that (C1) holds. Given the nonzero largest eigenvalue $4\omega_{m+k}^2$ of $-(A - A^T)^2$ and the corresponding eigenvector $\alpha$ obtained by (10), let $\mu_{m+k} = \alpha^T (A + A^T)\alpha/2\alpha^T \alpha$ and $\beta = (A - A^T)\alpha/2|\omega_{m+k}|$. Then $\mu_{m+k} \pm i|\omega_{m+k}|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\beta \pm i\alpha$, respectively.

Proof. Let $\mathcal{J}_1$ and $\alpha$ take the forms (11) and (12), respectively. Following the decomposition (5) of $A$ and the definition of $\beta$, we have

$$\beta = \frac{A - A^T}{2|\omega_{m+k}|}\,\alpha = Q\,\mathrm{diag}[\underbrace{0, \ldots, 0}_{m}, B_{m+1}, \ldots, B_{m+k}]\,Q^T \alpha, \tag{17}$$

where

$$B_j = \frac{1}{2|\omega_{m+k}|}\begin{pmatrix} 0 & 2|\omega_j| \\ -2|\omega_j| & 0 \end{pmatrix}, \quad j = m+1, \ldots, m+k, \tag{18}$$

and $|\omega_j| = |\omega_{m+k}|$ for $j \in \mathcal{J}_1$. Substituting (12) into (17), we get

$$\beta = \sum_{j \in \mathcal{J}_1} (\zeta_j u_j^R - \gamma_j u_j^I). \tag{19}$$

Based on (13) and (19), it is straightforward to verify that

$$\frac{A + A^T}{2}\,\beta = \mu_{m+k}\,\beta. \tag{20}$$

In addition, since $4\omega_{m+k}^2$ is the eigenvalue of $-(A - A^T)^2$ corresponding to the eigenvector $\alpha$, we have $-(A - A^T)^2\alpha = 4\omega_{m+k}^2\alpha$; hence $-(A - A^T)\,2|\omega_{m+k}|\,\beta = 4\omega_{m+k}^2\,\alpha$. Since $\omega_{m+k} \ne 0$,

$$\frac{A - A^T}{2}\,\beta = -|\omega_{m+k}|\,\alpha. \tag{21}$$

By (20) and (21), we have

$$A\beta = \frac{A + A^T}{2}\beta + \frac{A - A^T}{2}\beta = \mu_{m+k}\beta - |\omega_{m+k}|\alpha. \tag{22}$$

Similarly, by (13) and the definition of $\beta$,

$$A\alpha = \frac{A + A^T}{2}\alpha + \frac{A - A^T}{2}\alpha = \mu_{m+k}\alpha + |\omega_{m+k}|\beta. \tag{23}$$

Then, it is straightforward to verify that

$$A(\beta + i\alpha) = (\mu_{m+k} + i|\omega_{m+k}|)(\beta + i\alpha), \qquad A(\beta - i\alpha) = (\mu_{m+k} - i|\omega_{m+k}|)(\beta - i\alpha), \tag{24}$$

thus proving the lemma.
To get $|\omega_1|$, we first obtain $4\omega_1^2$, the smallest eigenvalue of $-(A - A^T)^2$, by replacing $A$ with $-(A - A^T)^2$ in (3):

$$\frac{dx(t)}{dt} = x(t)^T x(t)\, (A - A^T)^2 x(t) - x(t)^T (A - A^T)^2 x(t)\, x(t). \tag{25}$$

Let $\alpha^* = \lim_{t \to \infty} x(t)$, where $x(t)$ is a solution of (25). From Lemma 3, $4\omega_1^2 = -(\alpha^*)^T (A - A^T)^2\alpha^*/(\alpha^*)^T \alpha^*$. The following lemma, similar to Lemma 5, can then be used to compute $\mu_1$.

Lemma 8. Assume that (C2) holds. Then $\mu_1 = (\alpha^*)^T (A + A^T)\alpha^*/2(\alpha^*)^T \alpha^*$. In addition, $\mu_1^2 + \omega_1^2 = (\alpha^*)^T (A A^T)\alpha^*/(\alpha^*)^T \alpha^*$.

Proof. The proof is almost the same as that of Lemma 5.

Note that $\omega_1$ may be zero, that is, $A$ may have real eigenvalues. In this case, we have the following lemma.

Lemma 9. Assume that (C2) holds and that $0$ is the smallest eigenvalue of $-(A - A^T)^2$, corresponding to the eigenvector $\alpha^*$. Then $\mu_1$ is an eigenvalue of $A$ corresponding to the eigenvector $\alpha^*$, where $\mu_1 = (\alpha^*)^T (A + A^T)\alpha^*/2(\alpha^*)^T \alpha^*$.

Proof. By the hypotheses, $-(A - A^T)^2\alpha^* = 0$. Hence $(\alpha^*)^T (A - A^T)^T (A - A^T)\alpha^* = \|(A - A^T)\alpha^*\|^2 = 0$; that is,

$$(A - A^T)\,\alpha^* = 0. \tag{26}$$

Note that $\omega_1 = \cdots = \omega_r = 0$, $r \ge 1$, because $0$ is the smallest eigenvalue of $-(A - A^T)^2$. Based on the definition of $\mathcal{J}_2$, we have

$$\mu_1 = \cdots = \mu_r, \quad \mathcal{J}_2 = \{1, \ldots, r\}, \; r \ge 1. \tag{27}$$

Applying Lemma 3 to (25), $\alpha^*$ must be a linear combination of $u_1, \ldots, u_r$. Let

$$\alpha^* = \sum_{j \in \mathcal{J}_2} \gamma_j u_j, \quad \gamma_j \in \mathbb{R}. \tag{28}$$

By (8), we have

$$(A + A^T) u_j = 2\mu_j u_j = 2\mu_1 u_j, \quad j \in \mathcal{J}_2. \tag{29}$$

Therefore,

$$\frac{A + A^T}{2}\,\alpha^* = \frac{1}{2}\sum_{j \in \mathcal{J}_2} \gamma_j (A + A^T) u_j = \mu_1 \alpha^*. \tag{30}$$

Then, by (26) and (30), it is straightforward to verify that

$$A\alpha^* = \frac{A + A^T}{2}\alpha^* + \frac{A - A^T}{2}\alpha^* = \mu_1 \alpha^*, \tag{31}$$

thus proving the lemma.

In the case $\omega_1 \ne 0$, that is, when all the eigenvalues of $A$ are complex, we have the following lemma, similar to Lemma 7.

Lemma 10. Assume that (C2) holds. Given the nonzero smallest eigenvalue $4\omega_1^2$ of $-(A - A^T)^2$ and the corresponding eigenvector $\alpha^*$ obtained by (25), let $\mu_1 = (\alpha^*)^T (A + A^T)\alpha^*/2(\alpha^*)^T \alpha^*$ and $\beta^* = (A - A^T)\alpha^*/2|\omega_1|$. Then $\mu_1 \pm i|\omega_1|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\beta^* \pm i\alpha^*$, respectively.

Proof. The proof is almost the same as that of Lemma 7.
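The lemmas above translate into a short numerical recipe: integrate (10) to get $\alpha$, read off $\mu_{m+k}$ from the quotient in Lemma 5, and form $\beta$ as in Lemma 7. The sketch below is a minimal illustration of the largest-$|\mathrm{Im}|$ case under the assumption that (C1) holds and $\omega_{m+k} \ne 0$; the step size and iteration count are arbitrary choices, and the Euler integrator repeats the earlier sketch.

```python
import numpy as np

def flow(S, x, step=1e-3, iters=200000, smallest=False):
    """Euler-integrate dx/dt = +/-(x^T x * S x - x^T S x * x), S symmetric."""
    sgn = -1.0 if smallest else 1.0
    for _ in range(iters):
        Sx = S @ x
        x = x + step * sgn * ((x @ x) * Sx - (x @ Sx) * x)
    return x

def largest_imag_eigpair(A, seed=0):
    """Eigenpair of normal A with the largest |Im| part (Lemmas 5 and 7).

    Assumes condition (C1) holds and omega_{m+k} != 0.
    """
    n = A.shape[0]
    S = -(A - A.T) @ (A - A.T)            # symmetric surrogate -(A - A^T)^2
    alpha = flow(S, np.random.default_rng(seed).standard_normal(n))
    w = np.sqrt(alpha @ (S @ alpha) / (alpha @ alpha)) / 2.0       # |omega_{m+k}|
    mu = alpha @ ((A + A.T) @ alpha) / (2.0 * (alpha @ alpha))     # Lemma 5
    beta = (A - A.T) @ alpha / (2.0 * w)                           # Lemma 7
    return mu + 1j * w, beta + 1j * alpha

# quick check on the normal matrix diag[7, (6 2; -2 6)]
A = np.array([[7.0, 0, 0], [0, 6.0, 2.0], [0, -2.0, 6.0]])
lam, v = largest_imag_eigpair(A)
print(lam)                                 # ~ 6 + 2i
print(np.allclose(A @ v, lam * v, atol=1e-4))
```

The smallest-$|\mathrm{Im}|$ case of Lemmas 8–10 is obtained by calling the same flow with smallest=True on (25).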
Remark 11. Among the following four real normal matrices

$$\mathrm{diag}\Bigl[7, 5, 5, \begin{pmatrix} 6 & 2 \\ -2 & 6 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[7, \begin{pmatrix} 5 & 1 \\ -1 & 5 \end{pmatrix}, \begin{pmatrix} 6 & 2 \\ -2 & 6 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[7, \begin{pmatrix} 5 & 2 \\ -2 & 5 \end{pmatrix}, \begin{pmatrix} 5 & 2 \\ -2 & 5 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[7, \begin{pmatrix} 5 & 2 \\ -2 & 5 \end{pmatrix}, \begin{pmatrix} 6 & 2 \\ -2 & 6 \end{pmatrix}\Bigr], \tag{32}$$

only the first three meet (C1), while the last one does not; and only the last three meet (C2), while the first one does not.

2.2. Computing the Eigenvalues with the Largest or Smallest Real Parts, as well as the Corresponding Eigenvectors. As shown in (8), $2\mu_i$, $i = 1, \ldots, m+k$, are the eigenvalues of the symmetric matrix $A + A^T$. In this subsection we assume that $\mu_1 \le \cdots \le \mu_m$ and $\mu_{m+1} \le \cdots \le \mu_{m+k}$, which can be achieved by reordering the $\mu_i$ and the corresponding columns of $Q$. Replacing $A$ with $A + A^T$ in (2) gives

$$\frac{dx(t)}{dt} = x(t)^T x(t)\, (A + A^T) x(t) - x(t)^T (A + A^T) x(t)\, x(t). \tag{33}$$

Without loss of generality, assume that $\mu_{m+k}$ is the largest real part of the eigenvalues of $A$ (it may instead be $\mu_m$; Lemmas 12 and 13 are unaffected in that case). Applying Lemma 2 to (33), we get $2\mu_{m+k} = \lim_{t \to \infty} x(t)^T (A + A^T)x(t)/x(t)^T x(t)$, the largest eigenvalue of $A + A^T$, and the corresponding eigenvector $\eta = \lim_{t \to \infty} x(t)$, where $x(t)$ is a solution of (33). The following lemma gives an approach for computing $|\omega_{m+k}|$ under condition (C3).

Lemma 12. Assume that (C3) holds. Let $\eta = \lim_{t \to \infty} x(t)$, where $x(t)$ is a solution of (33). Then $|\omega_{m+k}| = \sqrt{\eta^T (A A^T)\eta/\eta^T \eta - \mu_{m+k}^2}$.

Proof. By Lemma 2, $\eta$ must be a linear combination of $u_j^R$ and $u_j^I$, $j \in \mathcal{J}_3$. Let

$$\eta = \sum_{j \in \mathcal{J}_3} (\gamma_j u_j^R + \zeta_j u_j^I), \quad \gamma_j, \zeta_j \in \mathbb{R}. \tag{34}$$

Then, based on (9), we get

$$(A A^T) u_j^R = (\mu_j^2 + \omega_j^2) u_j^R, \quad (A A^T) u_j^I = (\mu_j^2 + \omega_j^2) u_j^I, \quad j \in \mathcal{J}_3. \tag{35}$$

Because $Q$ is an orthogonal matrix and, for all $j \in \mathcal{J}_3$, $\mu_j = \mu_{m+k}$ and $|\omega_j| = |\omega_{m+k}|$ hold, that is, $\mu_j^2 + \omega_j^2 = \mu_{m+k}^2 + \omega_{m+k}^2$ for all $j \in \mathcal{J}_3$, we have

$$\frac{\eta^T (A A^T)\eta}{\eta^T \eta} = \frac{(\mu_{m+k}^2 + \omega_{m+k}^2)\sum_{j \in \mathcal{J}_3}(\gamma_j^2 + \zeta_j^2)}{\sum_{j \in \mathcal{J}_3}(\gamma_j^2 + \zeta_j^2)} = \mu_{m+k}^2 + \omega_{m+k}^2, \tag{36}$$

thus proving the lemma.

The following lemma gives an approach for computing a pair of conjugate eigenvectors of $A$ corresponding to the eigenvalues $\mu_{m+k} \pm i|\omega_{m+k}|$ under condition (C3).

Lemma 13. Assume that (C3) holds. Given the nonzero largest eigenvalue $2\mu_{m+k}$ of $A + A^T$ and the corresponding eigenvector $\eta$ obtained by (33), let $|\omega_{m+k}| = \sqrt{\eta^T (A A^T)\eta/\eta^T \eta - \mu_{m+k}^2}$. If $\omega_{m+k} = 0$, $\mu_{m+k}$ is an eigenvalue of $A$ corresponding to the eigenvector $\eta$. If $\omega_{m+k} \ne 0$, let $\xi = (A - A^T)\eta/2|\omega_{m+k}|$. Then $\mu_{m+k} \pm i|\omega_{m+k}|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\xi \pm i\eta$, respectively.
Proof. Combining the proofs of Lemmas 9 and 7, we can prove this lemma.

Replacing $A$ with $A + A^T$ in (3) gives

$$\frac{dx(t)}{dt} = -x(t)^T x(t)\, (A + A^T) x(t) + x(t)^T (A + A^T) x(t)\, x(t). \tag{37}$$

Without loss of generality, assume that $\mu_{m+1}$ is the smallest real part of the eigenvalues of $A$ (it may instead be $\mu_1$; Lemma 14 is unaffected in that case). Applying Lemma 3 to (37), we obtain $2\mu_{m+1}$, the smallest eigenvalue of $A + A^T$, as well as the corresponding eigenvector, denoted by $\eta^*$. Then we have the following lemma.

Lemma 14. Assume that (C4) holds. Given the nonzero smallest eigenvalue $2\mu_{m+1}$ of $A + A^T$ and the corresponding eigenvector $\eta^*$ obtained by (37), let $|\omega_{m+1}| = \sqrt{(\eta^*)^T (A A^T)\eta^*/(\eta^*)^T \eta^* - \mu_{m+1}^2}$. If $\omega_{m+1} = 0$, $\mu_{m+1}$ is an eigenvalue of $A$ corresponding to the eigenvector $\eta^*$. If $\omega_{m+1} \ne 0$, let $\xi^* = (A - A^T)\eta^*/2|\omega_{m+1}|$. Then $\mu_{m+1} \pm i|\omega_{m+1}|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\xi^* \pm i\eta^*$, respectively.

Proof. Combining the proofs of Lemmas 12, 9, and 7, we can prove this lemma.

Remark 15. Among the following four real normal matrices

$$\mathrm{diag}\Bigl[1, 4, 5, \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[3, 4, 4, \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[3, \begin{pmatrix} 4 & 2 \\ -2 & 4 \end{pmatrix}, \begin{pmatrix} 4 & 2 \\ -2 & 4 \end{pmatrix}\Bigr], \quad \mathrm{diag}\Bigl[3, 4, 5, \begin{pmatrix} 5 & 2 \\ -2 & 5 \end{pmatrix}\Bigr], \tag{38}$$

only the first three meet (C3), while the last one does not; and only the last three meet (C4), while the first one does not.

2.3. Computing the Eigenvalues with the Largest or Smallest Modulus, as well as the Corresponding Eigenvectors. Reorder the eigenvalues of the symmetric matrix $A A^T$ in (9) and the corresponding columns of $Q$ such that $\mu_1^2 \le \cdots \le \mu_m^2$ and $\mu_{m+1}^2 + \omega_{m+1}^2 \le \cdots \le \mu_{m+k}^2 + \omega_{m+k}^2$. Without loss of generality, assume that $\mu_{m+k} \pm i|\omega_{m+k}|$ are the eigenvalues of $A$ with the largest modulus (it may instead be $\lambda_m$; Lemma 16 is unaffected in that case) and that $\mu_{m+1} \pm i|\omega_{m+1}|$ are the eigenvalues of $A$ with the smallest modulus (it may instead be $\lambda_1$; Lemma 17 is unaffected in that case). Replacing $A$ with $A A^T$ in (2) gives

$$\frac{dx(t)}{dt} = x(t)^T x(t)\, A A^T x(t) - x(t)^T A A^T x(t)\, x(t). \tag{39}$$

Applying Lemma 2 to (39), we obtain $\mu_{m+k}^2 + \omega_{m+k}^2 = \lim_{t \to \infty} x(t)^T A A^T x(t)/x(t)^T x(t)$, the largest eigenvalue of $A A^T$, and the corresponding eigenvector $\varphi = \lim_{t \to \infty} x(t)$, where $x(t)$ is the solution of (39). Then we have the following lemma.

Lemma 16. Assume that (C5) holds. Given the largest eigenvalue $\mu_{m+k}^2 + \omega_{m+k}^2$ of $A A^T$ and the corresponding eigenvector $\varphi$ obtained by (39), we have $\mu_{m+k} = \varphi^T (A + A^T)\varphi/2\varphi^T \varphi$; thus $|\omega_{m+k}|$ can be obtained. If $\omega_{m+k} = 0$, $\mu_{m+k}$ is an eigenvalue of $A$ corresponding to the eigenvector $\varphi$. If $\omega_{m+k} \ne 0$, let $\psi = (A - A^T)\varphi/2|\omega_{m+k}|$. Then $\mu_{m+k} \pm i|\omega_{m+k}|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\psi \pm i\varphi$, respectively.

Proof. Combining the proofs of Lemmas 5, 7, and 9, we can prove this lemma.

Replacing $A$ with $A A^T$ in (3) gives

$$\frac{dx(t)}{dt} = -x(t)^T x(t)\, A A^T x(t) + x(t)^T A A^T x(t)\, x(t). \tag{40}$$

Applying Lemma 3 to (40), we obtain $\mu_{m+1}^2 + \omega_{m+1}^2$, the smallest eigenvalue of $A A^T$, as well as the corresponding eigenvector, denoted by $\varphi^*$. Then we have the following lemma.

Lemma 17. Assume that (C6) holds. Given the smallest eigenvalue $\mu_{m+1}^2 + \omega_{m+1}^2$ of $A A^T$ and the corresponding eigenvector $\varphi^*$ obtained by (40), we have $\mu_{m+1} = (\varphi^*)^T (A + A^T)\varphi^*/2(\varphi^*)^T \varphi^*$; thus $|\omega_{m+1}|$ can be obtained. If $\omega_{m+1} = 0$, $\mu_{m+1}$ is an eigenvalue of $A$ corresponding to the eigenvector $\varphi^*$. If $\omega_{m+1} \ne 0$, let $\psi^* = (A - A^T)\varphi^*/2|\omega_{m+1}|$. Then $\mu_{m+1} \pm i|\omega_{m+1}|$ are two eigenvalues of $A$ corresponding to the eigenvectors $\psi^* \pm i\varphi^*$, respectively.

Proof. Combining the proofs of Lemmas 5, 7, and 9, we can prove this lemma.
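Sections 2.2 and 2.3 follow one pattern with different symmetric surrogates: $A + A^T$ for extreme real parts and $AA^T$ for extreme moduli. A minimal sketch of both, assuming the relevant condition among (C3)–(C6) holds; the integrator and its parameters are arbitrary choices reused from the earlier sketches.

```python
import numpy as np

def flow(S, x, step=1e-3, iters=200000, smallest=False):
    """Euler-integrate dx/dt = +/-(x^T x * S x - x^T S x * x), S symmetric."""
    sgn = -1.0 if smallest else 1.0
    for _ in range(iters):
        Sx = S @ x
        x = x + step * sgn * ((x @ x) * Sx - (x @ Sx) * x)
    return x

def extreme_eigpair(A, by="real", smallest=False, seed=0):
    """Eigenpair of normal A with extreme real part (Lemmas 12-14)
    or extreme modulus (Lemmas 16-17)."""
    S = A + A.T if by == "real" else A @ A.T
    eta = flow(S, np.random.default_rng(seed).standard_normal(A.shape[0]),
               smallest=smallest)
    mu = eta @ ((A + A.T) @ eta) / (2.0 * (eta @ eta))
    w2 = eta @ ((A @ A.T) @ eta) / (eta @ eta) - mu**2   # |omega|^2, Lemma 12
    if w2 < 1e-10:                       # omega = 0: real eigenvalue, eigenvector eta
        return mu, eta
    w = np.sqrt(w2)
    xi = (A - A.T) @ eta / (2.0 * w)     # Lemmas 13/16: eigenvector xi + i*eta
    return mu + 1j * w, xi + 1j * eta
```

For instance, extreme_eigpair(A, by="modulus", smallest=True) realizes Lemma 17, and extreme_eigpair(A, by="real") realizes Lemmas 12 and 13.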
In addition, [24] proved π σ΅¨ σ΅¨ σ΅© σ΅©2 ∑ σ΅¨σ΅¨σ΅¨σ΅¨π2π σ΅¨σ΅¨σ΅¨σ΅¨ = inf σ΅©σ΅©σ΅©σ΅©π−1 π΄πσ΅©σ΅©σ΅©σ΅© . π∈T 2 ππ+1 , + the Lemma 17. Assume (C6) holds. Given any π smallest eigenvalue of π΄π΄ , and the corresponding eigenvector π∗ obtained by (40). Then, ππ+1 = (π∗ )π (π΄ + π΄π )π∗ /2(π∗ )π π∗ . So one can get that |ππ+1 |. If ππ+1 = 0, ππ+1 is an eigenvalue of π΄ corresponding to the eigenvector π∗ . If ππ+1 =ΜΈ 0, let π∗ = (π΄ − π΄π )π∗ /2|ππ+1 |. Then, ππ+1 ± i|ππ+1 | are two eigenvalues of π΄ corresponding to the eigenvectors π∗ ± iπ∗ , respectively. Proof. Combining the proofs of Lemmas 5, 7, and 9, we can prove this lemma. Based on (45), if we can find a sequence π΄ π as follows: π΄ 1 = π΄, π΄ π+1 = ππ−1 π΄ π ππ , 3 0 (0 0 0 0 −2 0 0 0 0 0 6 0 0 0 0 0 1 −2 0 3 4 0 0 0 −4 3 0 0 0 0 0) , 2 1 0 0 0 ), 3 −4 3 0 (0 0 0 3 0 (0 0 0 0 6 0 0 0 0 4 0 0 0 0 0 6 0 0 0 0 5 0 0 0 0 0 1 −2 0 0 0 3 −4 0 0 0) , 2 1 0 0 0) , 4 3 (41) only the first three meet (C5), but the last one does not. And only the last three meet (C6), but the first one does not. However, there exists some specially constructed π΄ that meet none of (C1) to (C6), for example, π΄ = diag[π΄ 1 , π΄ 2 , π΄ 3 , π΄ 4 ], where −4 1 π΄1 = ( ), −1 −4 −4 3 π΄2 = ( ), −3 −4 4 3 π΄3 = ( ), −3 4 4 1 π΄4 = ( ). −1 4 σ΅© σ΅© σ΅© σ΅©σ΅© σ΅©σ΅©π΄ π+1 σ΅©σ΅©σ΅© < σ΅©σ΅©σ΅©π΄ π σ΅©σ΅©σ΅© , σ΅© σ΅© σ΅© σ΅© ππ ∈ T, π = 1, 2, . . . π σ΅¨ σ΅¨ σ΅© σ΅©2 ∑ σ΅¨σ΅¨σ΅¨σ΅¨π2π σ΅¨σ΅¨σ΅¨σ΅¨ = lim σ΅©σ΅©σ΅©σ΅©π΄ π σ΅©σ΅©σ΅©σ΅© , π→∞ 2.4. Extension to Arbitrary Real Matrices. In this subsection, π΄ is an arbitrary real matrix of order π. Let βπ΄β be the Frobenius norm of π΄, and, π π , π = 1, . . . , π, be the eigenvalue of π΄. Denote the set of all complex nonsingular matrices by T. By the Schur inequality [13], we know π σ΅¨ σ΅¨ ∑ σ΅¨σ΅¨σ΅¨σ΅¨π2π σ΅¨σ΅¨σ΅¨σ΅¨ ≤ βπ΄β2 where π΄ ∞ = limπ → ∞ π΄ π is to be a normal matrix with the same eigenvalues as π΄. Such skill, termed as the normreducing technique, has been proposed by [25–27]. Moreover, following the idea presented by [26], it is easy to find that when π΄ is real, π΄ ∞ can be chosen to be a real normal matrix. In a word, any real matrix π΄ can be translated into a real −1 normal matrix π΄ ∞ = π∞ π΄π∞ by a similarity transformation π∞ = π1 π2 ⋅ ⋅ ⋅. Typical approaches for constructing ππ can be found in [26]. Note that if π is the eigenvalue of π΄ ∞ corresponding to the eigenvector π’, π is the eigenvalue of π΄ corresponding to the eigenvector π∞ π’. Hence, our proposed algorithm can be extended to extract eigenpairs of arbitrary real matrices by employing the norm-reducing technique. Without loss of generality, we use the following random matrix π΄ as an example to describe the norm-reducing technique: 1.0541 −1.9797 π΄ = (−1.8674 −1.8324 0.8486 0.4052 −0.7025 1.4990 0.1378 −1.5868 −1.0191 −1.3852 0.9549 −0.6011 −1.1719 −0.5771 −0.8364 0.8530 0.4773 0.3023 with equality if and only if π΄ is a normal matrix. Since the spectrum of π΄ does not change by a similarity transformation, the inequality π σ΅¨ σ΅¨ σ΅© σ΅©2 ∑ σ΅¨σ΅¨σ΅¨σ΅¨π2π σ΅¨σ΅¨σ΅¨σ΅¨ ≤ σ΅©σ΅©σ΅©σ΅©π−1 π΄πσ΅©σ΅©σ΅©σ΅© , π=1 π∈T (44) 0.4158 0.0430 −0.9489) . 0.5416 −0.8211 (48) The Frobenius norm of π΄ is βπ΄β = √27.7629, and the eigenvalues of matrix π΄ are π 1 = 2.8797, π 2 = π 3 = −0.6900 + 1.8770π, π 4 = −1.5144, and π 5 = 0.9774; obviously βπ΄β > √∑5π=1 π2π = 19.5401, so π΄ is a nonnormal real matrix. According to the approach that presented in [26], we can construct ππ , π = 1, 2, . . .. 
We stop the iteration once $\|A_p - A_{p+1}\| < 10^{-10}$ is satisfied. After 129 iterations, we obtain the approximately normal matrix $A_{130}$:

$$A_{130} = \begin{pmatrix} -0.7441 & 0.0007 & 1.6334 & 0.8210 & 0.0010 \\ -0.0035 & 2.8797 & -0.0038 & 0.0000 & -0.0000 \\ -1.4162 & -0.0048 & -0.9132 & 0.8223 & 0.0034 \\ -1.1560 & -0.0019 & -0.1181 & -1.2370 & -0.0014 \\ 0.0011 & -0.0000 & 0 & -0.0036 & 0.9774 \end{pmatrix}, \tag{49}$$

from which we have $\|A_{130}^T A_{130} - A_{130} A_{130}^T\| = \sqrt{0.00121231723027107}$ and $\|A_{130}\| = \sqrt{19.540133042214}$, which is very close to $\sqrt{\sum_{i=1}^5 |\lambda_i|^2} = \sqrt{19.5400526824864}$; hence $A_{130}$ can be regarded as a normal matrix in practical applications. The corresponding transformation $S_\infty = S_1 S_2 S_3 \cdots S_{129}$ is

$$S_\infty = \begin{pmatrix} -0.3499 & -0.5290 & -0.0090 & 0.2228 & -0.0151 \\ -0.4809 & -0.0259 & -0.6935 & -0.4079 & 0.1559 \\ -0.2914 & 0.7211 & 0.6743 & 0.0395 & 0.3437 \\ -0.4484 & 0.1475 & -0.3668 & 0.2444 & -0.8704 \\ 0.0611 & -0.3265 & 0.4294 & -1.1028 & -0.5166 \end{pmatrix}, \tag{50}$$

which satisfies $A_{130} = S_\infty^{-1} A S_\infty$. The eigenvalues of $A_{130}$ are $\lambda_1 = -1.5144$, $\lambda_2 = \bar{\lambda}_3 = -0.6900 + 1.8770i$, $\lambda_4 = 2.8797$, and $\lambda_5 = 0.9774$, which are exactly the eigenvalues of the original nonnormal matrix $A$. The transient behavior of $\|A_p\|^2$ is shown in Figure 1, where $p = 1, 2, 3, \ldots$ indexes the iterations and $A_1 = A$.

[Figure 1: Trajectory of $\|A_p\|^2$ against the number of iterations.]

In the following, we use $N = 1000$ random matrices to assess the average performance of the norm-reducing technique. Let

$$\mathrm{Mean}\,\Delta_p^2 = \frac{1}{N}\sum_{i=1}^{N} \Delta_{pi}^2 \tag{51}$$

denote the average measure of nonnormality over the sample at the $p$th iteration, where $\Delta_{pi}^2 = \|A_{pi}^T A_{pi} - A_{pi} A_{pi}^T\|^2$ measures the nonnormality of the $i$th matrix $A_{pi}$ at the $p$th iteration; $\Delta_{pi}^2 = 0$ if and only if $A_{pi}$ is normal. The dynamic behavior of $\mathrm{Mean}\,\Delta^2$ is shown in Figure 2, from which we can see that, for most nonnormal matrices, $\mathrm{Mean}\,\Delta^2$ is very close to zero after about 350 iterations.

[Figure 2: Average performance trajectory of the norm-reducing technique: $\mathrm{Mean}\,\Delta^2$ against the number of iterations.]

3. Neural Implementation Description

In this paper, we mainly focus on the classical neural network differential equation (2), where $A = (a_{ij})$, $i, j = 1, 2, \ldots, n$, is the symmetric matrix whose eigenvalues and eigenvectors are to be computed, $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T$ is a column vector denoting the states of the neurons in the network, and the elements of $A$ denote the connection weights between those neurons. The schematic diagram of the network is shown in Figure 3; it is a recurrent neural network, since the input is just the output of the system.

[Figure 3: Schematic diagram of the neural network (2) for computing an eigenvector corresponding to the largest eigenvalue of a real symmetric matrix $A$.]

In practical applications, we only need a nonzero column vector $x(0) = [x_1(0), x_2(0), \ldots, x_n(0)]^T$ to start the neural network system with the following update rule:

$$x(p+1) = x(p) + s\,\bigl(x(p)^T x(p)\, A x(p) - x(p)^T A x(p)\, x(p)\bigr), \tag{52}$$

where $p$ denotes the $p$th iteration and $s$ is a small time step. The iteration stops once $\|x(p+1) - x(p)\| < \varepsilon$, where $\varepsilon$ is a small error tolerance set in advance. When $\|x(p+1) - x(p)\| < \varepsilon$, we may regard $\|x(p+1) - x(p)\|$ as zero, that is, $x(p)^T x(p)\, A x(p) - x(p)^T A x(p)\, x(p) = 0$, so that $A x(p) = \bigl(x(p)^T A x(p)/x(p)^T x(p)\bigr)\, x(p)$. According to the theory in [23], $x(p)$ is then an eigenvector corresponding to the largest eigenvalue, which is given by $x(p)^T A x(p)/x(p)^T x(p)$.
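A direct transcription of update rule (52) with the stopping test, as a minimal sketch; the step s and tolerance eps below are arbitrary illustrative choices.

```python
import numpy as np

def recurrent_eigpair(A, x0, s=1e-3, eps=1e-10, max_iter=10**6):
    """Iterate rule (52) until ||x(p+1) - x(p)|| < eps.

    Returns the Rayleigh-quotient eigenvalue estimate and the state x,
    an eigenvector for the largest eigenvalue of the symmetric matrix A.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        x_next = x + s * ((x @ x) * Ax - (x @ Ax) * x)
        if np.linalg.norm(x_next - x) < eps:
            break
        x = x_next
    return (x @ (A @ x)) / (x @ x), x
```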
4. Examples and Discussion

Three experiments are presented to verify our results on real normal matrices. The following real normal matrix $A$ (randomly generated) was used in those three experiments:

$$A = \begin{pmatrix} 1.8818 & -2.1563 & -0.5979 & -0.4768 & 0.7270 & 0.4133 \\ -1.4267 & 0.5258 & 1.4946 & 0.7125 & 0.4700 & 2.5384 \\ 0.4273 & 0.7784 & 2.8645 & 0.8587 & 1.3792 & -1.0153 \\ -0.1929 & 1.3273 & -1.3125 & 3.1629 & 0.2573 & 0.6329 \\ 1.5966 & -0.6593 & -0.3799 & 0.2058 & 2.9845 & 0.6404 \\ 1.0627 & 1.9973 & -0.0249 & -1.5727 & 0.9289 & 2.3259 \end{pmatrix}. \tag{53}$$

Using the $[V, D] = \mathrm{eig}(A)$ function in Matlab, we got the eigenvalues of $A$ as $\lambda_1 = -2.3420$, $\lambda_2 = 1.005$, $\lambda_3 = \bar{\lambda}_4 = 3.1967 + 1.7860i$, and $\lambda_5 = \bar{\lambda}_6 = 4.3445 + 1.2532i$, as well as the corresponding eigenvectors $u_1, \ldots, u_6$:

$$u_1 = \begin{pmatrix} 0.3754 \\ 0.7510 \\ -0.2146 \\ -0.1664 \\ 0.0280 \\ -0.4696 \end{pmatrix}, \quad u_2 = \begin{pmatrix} 0.6657 \\ -0.0242 \\ 0.2994 \\ 0.2594 \\ -0.5888 \\ 0.2295 \end{pmatrix}, \quad u_3 = \begin{pmatrix} -0.0294 + 0.1297i \\ -0.0243 - 0.1385i \\ 0.6165 \\ -0.1917 + 0.5472i \\ 0.0913 + 0.2785i \\ -0.2708 - 0.2950i \end{pmatrix} = \bar{u}_4, \quad u_5 = \begin{pmatrix} 0.3207 + 0.2956i \\ 0.0472 - 0.4424i \\ 0.0711 - 0.2167i \\ -0.1146 - 0.3213i \\ 0.4903 \\ 0.3692 - 0.2583i \end{pmatrix} = \bar{u}_6. \tag{54}$$

We can see that all of (C1) to (C6) hold except (C2). For simplicity, denote $\lim_{t \to \infty} x(t)$ by $x(\infty)$.

Example 19 (for Section 2.1). We used (10) with the following initial condition (randomly generated):

$$x(0) = [0.2329, -0.0023, -0.8601, -0.2452, -0.1761, -0.1577]^T \tag{55}$$

to get $|\omega_3|$ (the largest absolute value of the imaginary parts among the $\lambda_i$), to which $\sqrt{-x(t)^T (A - A^T)^2 x(t)/4x(t)^T x(t)}$ should converge. From Lemma 5, $x(t)^T (A + A^T)x(t)/2x(t)^T x(t) \to \mu_3$ should hold. By Lemma 7, $x(t)$ and $(A - A^T)x(t)/2|\omega_3|$ should converge to the imaginary and real parts of an eigenvector corresponding to $\lambda_3$, respectively. The transient behaviors of these variables are shown in Figures 4, 5, and 6.

[Figure 4: Trajectories of $p(t) = \sqrt{-x(t)^T (A - A^T)^2 x(t)/4x(t)^T x(t)}$ and $q(t) = x(t)^T (A + A^T)x(t)/2x(t)^T x(t)$ based on (10) with initial point (55), which should converge to $|\omega_3|$ and $\mu_3$, respectively.]

[Figure 5: Trajectories of $x(t)$, the solution of (10) with initial point (55), which should converge to the imaginary part of an eigenvector of $A$ corresponding to the eigenvalue $\lambda_3$.]

[Figure 6: Trajectories of $y(t) = (A - A^T)x(t)/2|\omega_3|$ based on (10) with initial point (55), which should converge to the real part of an eigenvector of $A$ corresponding to the eigenvalue $\lambda_3$.]

After convergence, we saw

$$\frac{A - A^T}{2|\omega_3|}\,x(\infty) + x(\infty)\,i = \begin{pmatrix} 0.1795 - 0.0005i \\ -0.1747 + 0.0737i \\ -0.1857 - 0.8107i \\ 0.7772 + 0.0872i \\ 0.3386 - 0.2039i \\ -0.3064 + 0.4450i \end{pmatrix} = (-0.3012 - 1.3149i)\, u_3. \tag{56}$$

Thus, the estimated complex vector is an eigenvector of $A$ corresponding to $\lambda_3$.
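For readers who want to replay Example 19, the sketch below wires the paper's matrix (53) and initial point (55) into the Section 2.1 recipe; the integration parameters are arbitrary, and the expected output is $\mu_3 \approx 3.1967$ and $|\omega_3| \approx 1.7860$.

```python
import numpy as np

A = np.array([
    [ 1.8818, -2.1563, -0.5979, -0.4768,  0.7270,  0.4133],
    [-1.4267,  0.5258,  1.4946,  0.7125,  0.4700,  2.5384],
    [ 0.4273,  0.7784,  2.8645,  0.8587,  1.3792, -1.0153],
    [-0.1929,  1.3273, -1.3125,  3.1629,  0.2573,  0.6329],
    [ 1.5966, -0.6593, -0.3799,  0.2058,  2.9845,  0.6404],
    [ 1.0627,  1.9973, -0.0249, -1.5727,  0.9289,  2.3259],
])                                             # the normal matrix (53)
x = np.array([0.2329, -0.0023, -0.8601, -0.2452, -0.1761, -0.1577])  # x(0) of (55)

S = -(A - A.T) @ (A - A.T)                     # symmetric surrogate in (10)
for _ in range(400000):                        # forward Euler on (10)
    Sx = S @ x
    x = x + 1e-4 * ((x @ x) * Sx - (x @ Sx) * x)

w3 = np.sqrt(x @ (S @ x) / (x @ x)) / 2.0      # |omega_3|
mu3 = x @ ((A + A.T) @ x) / (2.0 * (x @ x))    # mu_3, Lemma 5
print(mu3, w3)                                 # expected ~ 3.1967, 1.7860
```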
Although we can use (25) to get the smallest absolute value of the imaginary parts among the $\lambda_i$ (it is zero in this experiment), neither the corresponding real part nor the eigenvector can be obtained from Lemma 8 or 9, since (C2) does not hold.

Example 20 (for Section 2.2). We used (33) with the same $x(0)$ as (55) to get $\mu_5$ (the largest real part among the $\lambda_i$), to which $x(t)^T (A + A^T)x(t)/2x(t)^T x(t)$ should converge. Based on Lemma 12, $\sqrt{x(t)^T (A A^T)x(t)/x(t)^T x(t) - \mu_5^2} \to |\omega_5|$ should hold. The transient behaviors of these two variables are shown in Figure 7. Since $\omega_5 \ne 0$, from Lemma 13, $x(t)$ and $(A - A^T)x(t)/2|\omega_5|$ should converge to the imaginary and real parts of an eigenvector corresponding to $\lambda_5$, as shown in Figures 8 and 9.

[Figure 7: Trajectories of $q(t) = x(t)^T (A + A^T)x(t)/2x(t)^T x(t)$ and $p(t) = \sqrt{x(t)^T (A A^T)x(t)/x(t)^T x(t) - \mu_5^2}$ based on (33) with initial point (55), which should converge to $\mu_5$ and $|\omega_5|$, respectively.]

[Figure 8: Trajectories of $x(t)$, the solution of (33) with initial point (55), which should converge to the imaginary part of an eigenvector of $A$ corresponding to the eigenvalue $\lambda_5$.]

[Figure 9: Trajectories of $y(t) = (A - A^T)x(t)/2|\omega_5|$ based on (33) with initial point (55), which should converge to the real part of an eigenvector of $A$ corresponding to the eigenvalue $\lambda_5$.]

After convergence, we saw

$$\frac{A - A^T}{2|\omega_5|}\,x(\infty) + x(\infty)\,i = \begin{pmatrix} -0.5695 - 0.1474i \\ 0.2254 + 0.5561i \\ 0.0534 + 0.3029i \\ 0.3407 + 0.3091i \\ -0.5830 + 0.3120i \\ -0.2746 + 0.5420i \end{pmatrix} = (-1.1890 + 0.6363i)\, u_5. \tag{57}$$

Hence, the estimated complex vector is an eigenvector of $A$ corresponding to $\lambda_5$. Based on (37), we got $\mu_1$ (the smallest real part among the $\lambda_i$). After convergence, we saw that $x(\infty)^T (A + A^T)x(\infty)/2x(\infty)^T x(\infty) = \mu_1 = \lambda_1 = -2.3420$, $\sqrt{x(\infty)^T (A A^T)x(\infty)/x(\infty)^T x(\infty) - \mu_1^2} = |\omega_1| = 0$, and

$$x(\infty) = \begin{pmatrix} -0.3581 \\ -0.7165 \\ 0.2048 \\ 0.1588 \\ -0.0267 \\ 0.4480 \end{pmatrix} = -0.9541\, u_1, \tag{58}$$

just as expected from Lemma 14.

Example 21 (for Section 2.3). Based on (39) and Lemma 16, we can get $\lambda_5$ and one corresponding eigenvector again, because $|\lambda_5|$ is the largest modulus among the $\lambda_i$. In addition, we used (40) with the same $x(0)$ as (55) to get $|\lambda_2|$, the smallest modulus among the $\lambda_i$.
By Lemma 17, $\mu_2 = x(\infty)^T (A + A^T)x(\infty)/2x(\infty)^T x(\infty)$ and $|\omega_2| = \sqrt{x(\infty)^T (A A^T)x(\infty)/x(\infty)^T x(\infty) - \mu_2^2}$ should hold. The transient behaviors of these two variables are shown in Figure 10.

[Figure 10: Trajectories of $q(t) = x(t)^T (A + A^T)x(t)/2x(t)^T x(t)$ and $p(t) = \sqrt{x(t)^T (A A^T)x(t)/x(t)^T x(t) - \mu_2^2}$ based on (40) with initial point (55), which should converge to $\mu_2$ and $|\omega_2|$, respectively.]

After convergence, we saw that $\omega_2 = 0$. Hence, from Lemma 17, $x(t)$ should converge to a constant multiple of $u_2$, which is shown in Figure 11. As expected,

$$x(\infty) = \begin{pmatrix} -0.6337 \\ 0.0230 \\ -0.2850 \\ -0.2469 \\ 0.5605 \\ -0.2185 \end{pmatrix} = -0.9519\, u_2. \tag{59}$$

[Figure 11: Trajectories of $x(t)$, the solution of (40) with initial point (55), which should converge to an eigenvector of $A$ corresponding to the eigenvalue $\lambda_2$.]

Example 22 (extension to arbitrary real matrices). Following the procedure of Section 2.1, we present an experiment on an arbitrary real matrix to verify the effectiveness of the norm-reducing technique of Section 2.4. Consider the following nonnormal matrix $A$:

$$A = \begin{pmatrix} 3.4633 & -3.7487 & 2.9281 & -0.5106 & -0.0455 \\ 4.4860 & 3.4259 & -3.2010 & 5.0625 & 1.2120 \\ 0.5268 & -1.0311 & 1.5386 & -0.4766 & -1.4478 \\ -2.4768 & 1.2585 & 2.0525 & 2.3539 & -0.8487 \\ 0.0498 & -3.1267 & -6.5297 & -1.5106 & 2.2083 \end{pmatrix}, \tag{60}$$

whose Frobenius norm satisfies $\|A\|^2 = 187.8869 > \sum_{i=1}^5 |\lambda_i|^2 = 97.1780$, where $\lambda_i$, $i = 1, 2, \ldots, 5$, are the eigenvalues of $A$; so $A$ is a nonnormal matrix. According to Section 2.4, the eigenpair problem for a nonnormal real matrix can be converted into the eigenpair problem for the corresponding normal matrix $A_N$, which is calculated by the norm-reducing technique:

$$A_N = \begin{pmatrix} 2.3438 & 0.7915 & 1.1071 & 1.8624 & -2.6867 \\ -1.5918 & 2.1016 & 1.7249 & 2.2067 & 1.9843 \\ 0.8659 & -1.7434 & 4.5103 & -1.2708 & 0.0408 \\ 1.7777 & -2.3015 & -0.9898 & 3.5815 & 0.7201 \\ -2.4688 & -2.3136 & -0.4746 & 0.0545 & 0.4528 \end{pmatrix}, \tag{61}$$

with the corresponding transformation

$$S_\infty = \begin{pmatrix} 28.2974 & -28.8442 & 58.1384 & -114.2417 & 13.5575 \\ -35.9689 & 115.6698 & -14.2976 & 15.0191 & 9.0642 \\ -10.2373 & 16.4765 & 14.1168 & -35.3465 & -40.2882 \\ -3.4360 & 48.5475 & 23.6416 & 75.4347 & 15.1720 \\ -79.1532 & 4.3465 & -25.4122 & -24.8455 & -48.2893 \end{pmatrix}, \tag{62}$$

which satisfies the relationship $A_N = S_\infty^{-1} A S_\infty$. In order to solve the eigenpairs of the matrix $A$, we first calculate the eigenpairs of the matrix $A_N$.
Direct calculation gives the eigenvalues of $A_N$ as $\lambda_1 = -1.8342$, $\lambda_2 = \bar{\lambda}_3 = 2.1114 + 3.7860i$, and $\lambda_4 = \bar{\lambda}_5 = 5.3007 + 0.1328i$, which are exactly the eigenvalues of the nonnormal matrix $A$, with the corresponding eigenvectors

$$u_1 = \begin{pmatrix} 0.6079 \\ 0.0886 \\ -0.1195 \\ -0.2805 \\ 0.7278 \end{pmatrix}, \quad u_2 = \begin{pmatrix} 0.0035 - 0.2240i \\ 0.7020 \\ 0.0189 + 0.3280i \\ 0.0502 + 0.4231i \\ -0.0660 + 0.4040i \end{pmatrix} = \bar{u}_3, \quad u_4 = \begin{pmatrix} 0.1769 - 0.4835i \\ -0.0129 + 0.0561i \\ 0.6204 \\ -0.2522 - 0.4643i \\ -0.1415 + 0.2181i \end{pmatrix} = \bar{u}_5. \tag{63}$$

The eigenvectors of the nonnormal matrix $A$ corresponding to the eigenvalues with the largest imaginary parts in absolute value are

$$u_2^A = \begin{pmatrix} -0.2353 - 0.5331i \\ -0.6416 \\ 0.0449 - 0.0581i \\ 0.3856 + 0.0270i \\ 0.0747 - 0.2973i \end{pmatrix} = \bar{u}_3^A. \tag{64}$$

According to the results above, the eigenvalues with the largest imaginary parts in absolute value are $\lambda_2$ and $\lambda_3$; let $\mu_2 \pm |\omega_2|\,i$ denote them. We used (10) with the following initial condition (randomly generated):

$$x(0) = [0.3891, 0.0856, 0.2215, 0.3151, 0.4929]^T \tag{65}$$

to get $|\omega_2|$ (the largest absolute value of the imaginary parts among the $\lambda_i$), to which $\sqrt{-x(t)^T (A_N - A_N^T)^2 x(t)/4x(t)^T x(t)}$ should converge. From Lemma 5, $x(t)^T (A_N + A_N^T)x(t)/2x(t)^T x(t) \to \mu_2$ should hold. By Lemma 7, $x(t)$ and $(A_N - A_N^T)x(t)/2|\omega_2|$ should converge to the imaginary and real parts of an eigenvector corresponding to $\lambda_2$, respectively. The transient behaviors of these variables are shown in Figures 12, 13, and 14.

[Figure 12: Trajectories of $p(t) = \sqrt{-x(t)^T (A_N - A_N^T)^2 x(t)/4x(t)^T x(t)}$ and $q(t) = x(t)^T (A_N + A_N^T)x(t)/2x(t)^T x(t)$ based on (10) with initial point (65), which should converge to $|\omega_2|$ and $\mu_2$, respectively.]

[Figure 13: Trajectories of $x(t)$, the solution of (10) with initial point (65), which should converge to the imaginary part of an eigenvector of $A_N$ corresponding to the eigenvalue $\lambda_2$.]

[Figure 14: Trajectories of $y(t) = (A_N - A_N^T)x(t)/2|\omega_2|$ based on (10) with initial point (65), which should converge to the real part of an eigenvector of $A_N$ corresponding to the eigenvalue $\lambda_2$.]

After convergence, we saw

$$\frac{A_N - A_N^T}{2|\omega_2|}\,x(\infty) + x(\infty)\,i = \begin{pmatrix} 0.0394 - 0.2317i \\ 0.7277 + 0.1121i \\ -0.0327 + 0.3430i \\ -0.0155 + 0.4466i \\ -0.1329 + 0.4083i \end{pmatrix} = (1.0367 + 0.1597i)\, u_2. \tag{66}$$

Thus, the estimated complex vector is an eigenvector of $A_N$ corresponding to $\lambda_2$. According to Section 2.4, the corresponding eigenvector of the nonnormal matrix $A$ should then be

$$S_\infty\Bigl(\frac{A_N - A_N^T}{2|\omega_2|}\,x(\infty) + x(\infty)\,i\Bigr) = \begin{pmatrix} -25.7189 + 42.7862i \\ 31.6606 + 44.9313i \\ -6.2833 - 0.2807i \\ -17.1372 - 28.3402i \\ -24.5113 + 9.4386i \end{pmatrix} = (-49.3476 - 70.0318i)\, u_2^A, \tag{67}$$

from which we can see that the estimated complex vector is indeed an eigenvector of $A$ corresponding to $\lambda_2$.

5. Conclusion

This paper introduced a neural-network-based approach for computing the eigenvectors of real normal matrices and the corresponding eigenvalues that have the largest or smallest modulus, the largest or smallest real part, or the largest or smallest imaginary part in absolute value. All the computation is carried out in real vector space, although the eigenpairs may be complex, which greatly reduces the scale of the networks. We also showed how to extend this method to general real matrices by employing the norm-reducing technique proposed in the literature. Four simulation examples verified the validity of the proposed algorithm.

References

[1] S. Attallah and K. Abed-Meraim, "A fast adaptive algorithm for the generalized symmetric eigenvalue problem," IEEE Signal Processing Letters, vol. 15, pp. 797–800, 2008.
[2] T. Laudadio, N. Mastronardi, and M. Van Barel, "Computing a lower bound of the smallest eigenvalue of a symmetric positive-definite Toeplitz matrix," IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4726–4731, 2008.
[3] J. Shawe-Taylor, C. K. I. Williams, N. Cristianini, and J. Kandola, "On the eigenspectrum of the Gram matrix and the generalization error of kernel-PCA," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2510–2522, 2005.
[4] D. Q. Wang and M. Zhang, "A new approach to multiple class pattern classification with random matrices," Journal of Applied Mathematics and Decision Sciences, no. 3, pp. 165–175, 2005.
[5] M. R. Bastian, J. H. Gunther, and T. K. Moon, "A simplified natural gradient learning algorithm," Advances in Artificial Neural Systems, vol. 2011, Article ID 407497, 9 pages, 2011.
[6] T. H. Le, "Applying artificial neural networks for face recognition," Advances in Artificial Neural Systems, vol. 2011, Article ID 673016, 16 pages, 2011.
[7] Y. Xia, "An extended projection neural network for constrained optimization," Neural Computation, vol. 16, no. 4, pp. 863–883, 2004.
[8] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[9] T. Voegtlin, "Recursive principal components analysis," Neural Networks, vol. 18, no. 8, pp. 1051–1063, 2005.
[10] J. Qiu, H. Wang, J. Lu, B. Zhang, and K.-L. Du, "Neural network implementations for PCA and its extensions," ISRN Artificial Intelligence, vol. 2012, Article ID 847305, 19 pages, 2012.
[11] H. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[12] F. L. Luo, R. Unbehauen, and A. Cichocki, "A minor component analysis algorithm," Neural Networks, vol. 10, no. 2, pp. 291–297, 1997.
[13] G. H. Golub and C. F. Van Loan, Matrix Computations, vol. 3 of Johns Hopkins Series in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 1983.
[14] C. Chatterjee, V. P. Roychowdhury, J. Ramos, and M. D. Zoltowski, "Self-organizing algorithms for generalized eigendecomposition," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1518–1530, 1997.
[15] H. Kakeya and T. Kindo, "Eigenspace separation of autocorrelation memory matrices for capacity expansion," Neural Networks, vol. 10, no. 5, pp. 833–843, 1997.
[16] Y. Liu, Z. You, and L. Cao, "A functional neural network for computing the largest modulus eigenvalues and their corresponding eigenvectors of an anti-symmetric matrix," Neurocomputing, vol. 67, no. 1–4, pp. 384–397, 2005.
[17] Y. Liu, Z. You, and L. Cao, "A functional neural network computing some eigenvalues and eigenvectors of a special real matrix," Neural Networks, vol. 18, no. 10, pp. 1293–1300, 2005.
[18] Y. Liu, Z. You, and L. Cao, "A recurrent neural network computing the largest imaginary or real part of eigenvalues of real matrices," Computers & Mathematics with Applications, vol. 53, no. 1, pp. 41–53, 2007.
[19] E. Oja, "Principal components, minor components, and linear neural networks," Neural Networks, vol. 5, no. 6, pp. 927–935, 1992.
[20] R. Perfetti and E. Massarelli, "Training spatially homogeneous fully recurrent neural networks in eigenvalue space," Neural Networks, vol. 10, no. 1, pp. 125–137, 1997.
[21] L. Xu, E. Oja, and C. Y. Suen, "Modified Hebbian learning for curve and surface fitting," Neural Networks, vol. 5, no. 3, pp. 441–457, 1992.
[22] Q. Zhang and Y. W. Leung, "A class of learning algorithms for principal component analysis and minor component analysis," IEEE Transactions on Neural Networks, vol. 11, no. 2, pp. 529–533, 2000.
[23] Y. Zhang, Y. Fu, and H. J. Tang, "Neural networks based approach for computing eigenvectors and eigenvalues of symmetric matrix," Computers & Mathematics with Applications, vol. 47, no. 8-9, pp. 1155–1164, 2004.
[24] L. Mirsky, "On the minimization of matrix norms," The American Mathematical Monthly, vol. 65, pp. 106–107, 1958.
[25] C. P. Huang and R. T. Gregory, A Norm-Reducing Jacobi-Like Algorithm for the Eigenvalues of Non-Normal Matrices, Colloquia Mathematica Societatis János Bolyai, Keszthely, Hungary, 1977.
[26] I. Kiessling and A. Paulik, "A norm-reducing Jacobi-like algorithm for the eigenvalues of non-normal matrices," Journal of Computational and Applied Mathematics, vol. 8, no. 3, pp. 203–207, 1982.
[27] R. Sacks-Davis, "A real norm-reducing Jacobi-type eigenvalue algorithm," The Australian Computer Journal, vol. 7, no. 2, pp. 65–69, 1975.