Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 125687, 6 pages
http://dx.doi.org/10.1155/2013/125687

Research Article
A New Method for the Bisymmetric Minimum Norm Solution of the Consistent Matrix Equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$

Aijing Liu,$^{1,2}$ Guoliang Chen,$^{1}$ and Xiangyun Zhang$^{1}$

$^{1}$ Department of Mathematics, East China Normal University, Shanghai 200241, China
$^{2}$ School of Mathematical Sciences, Qufu Normal University, Shandong 273165, China

Correspondence should be addressed to Guoliang Chen; glchen@math.ecnu.edu.cn

Received 18 October 2012; Accepted 16 January 2013

Academic Editor: Alberto Cabada

Copyright © 2013 Aijing Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a new iterative method to find the bisymmetric minimum norm solution of a pair of consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$. In the absence of round-off errors, the algorithm obtains the bisymmetric solution with minimum Frobenius norm within finitely many iteration steps. Our algorithm is faster and more stable than Algorithm 2.1 of Cai et al. (2010).

1. Introduction

Let $\mathbb{R}^{n \times m}$ denote the set of $n \times m$ real matrices. A matrix $X = (x_{ij}) \in \mathbb{R}^{n \times n}$ is said to be bisymmetric if $x_{ij} = x_{ji} = x_{n-j+1,\,n-i+1}$ for all $1 \le i, j \le n$. Let $\mathrm{BSR}^{n \times n}$ denote the set of $n \times n$ real bisymmetric matrices. For any $X \in \mathbb{R}^{n \times m}$, the symbols $X^T$, $\mathrm{tr}(X)$, $\|X\|$, and $\|X\|_2$ denote the transpose, trace, Frobenius norm, and Euclidean norm of $X$, respectively. The symbol $\mathrm{vec}(\cdot)$ stands for the vec operator; that is, for $X = (x_1, x_2, \ldots, x_m) \in \mathbb{R}^{n \times m}$, where $x_j$ ($j = 1, 2, \ldots, m$) denotes the $j$th column of $X$, we have $\mathrm{vec}(X) = (x_1^T, x_2^T, \ldots, x_m^T)^T$. Let $\mathrm{mat}(\cdot)$ denote the inverse of the vec operator. In the vector space $\mathbb{R}^{n \times m}$ we define the inner product $\langle X, Y \rangle = \mathrm{tr}(Y^T X)$ for all $X, Y \in \mathbb{R}^{n \times m}$.
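These conventions fix how matrices, vectors, and norms interact throughout the paper. The short sketch below (Python/NumPy rather than the MATLAB used in Section 3; the helper names `vec` and `mat` are ours) checks that the column-stacking vec operator, its inverse mat, and the trace inner product fit together as stated: $\langle X, Y\rangle = \mathrm{tr}(Y^TX)$ equals the Euclidean inner product of $\mathrm{vec}(X)$ and $\mathrm{vec}(Y)$, so the Frobenius norm of $X$ equals the Euclidean norm of $\mathrm{vec}(X)$.

```python
import numpy as np

# vec stacks the columns of X; mat is its inverse (the target shape is supplied).
vec = lambda X: X.flatten(order="F")               # column-major stacking
mat = lambda x, n, m: x.reshape(n, m, order="F")

X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])                           # X in R^{3 x 2}
Y = np.array([[0., 1.],
              [1., 0.],
              [2., 3.]])

# <X, Y> = tr(Y^T X) coincides with the dot product of the vec'd matrices.
inner = np.trace(Y.T @ X)
```

Because of this identification, minimizing the Frobenius norm of a matrix solution is the same as minimizing the Euclidean norm of its vec, which is used repeatedly below.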
Two matrices $X$ and $Y$ are said to be orthogonal if $\langle X, Y \rangle = 0$. Let $S_n = (e_n, e_{n-1}, \ldots, e_1)$ denote the $n \times n$ reverse unit matrix, where $e_i$ ($i = 1, 2, \ldots, n$) is the $i$th column of the $n \times n$ identity matrix $I_n$; then $S_n^T = S_n$ and $S_n^2 = I_n$.

In this paper, we discuss the following consistent matrix equations:
$$A_1 X B_1 = C_1, \quad A_2 X B_2 = C_2, \quad X \in \mathrm{BSR}^{n \times n}, \qquad (1)$$
where $A_1 \in \mathbb{R}^{m_1 \times n}$, $B_1 \in \mathbb{R}^{n \times p_1}$, $C_1 \in \mathbb{R}^{m_1 \times p_1}$, $A_2 \in \mathbb{R}^{m_2 \times n}$, $B_2 \in \mathbb{R}^{n \times p_2}$, and $C_2 \in \mathbb{R}^{m_2 \times p_2}$ are given matrices, and $X \in \mathrm{BSR}^{n \times n}$ is the unknown bisymmetric matrix to be found.

Research on solving a pair of matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$ has been active for more than thirty years (see [1–6] for details). Besides the work on finding common solutions to the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$, there are valuable efforts on solving pairs of matrix equations with linear constraints on the solution. For instance, Khatri and Mitra [7] derived the Hermitian solution of the consistent matrix equations $AX = C$, $XB = D$. Deng et al. [8] studied consistency conditions and general expressions for the Hermitian solutions of the matrix equations $(AX, XB) = (C, D)$ and designed an iterative method for their Hermitian minimum norm solutions. Peng et al. [9] presented an iterative method to obtain the least squares reflexive solutions of the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$. Cai et al. [10, 11] proposed iterative methods for the bisymmetric solutions of the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$.

In this paper, we propose a new iterative algorithm for the bisymmetric solution with minimum Frobenius norm of the consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$; it is faster and more stable than Cai's algorithm (Algorithm 2.1 in [10]). The rest of the paper is organized as follows.
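The bisymmetry condition can be tested mechanically: since a bisymmetric matrix is in particular symmetric, the defining relations reduce to $X = X^T$ together with $S_nXS_n = X$ (symmetry about both the diagonal and the antidiagonal). A minimal check (Python/NumPy; `is_bisymmetric` is our own helper, not from the paper):

```python
import numpy as np

def is_bisymmetric(X, tol=1e-12):
    """x_ij = x_ji = x_{n-j+1, n-i+1}; equivalently X = X^T and S_n X S_n = X."""
    n = X.shape[0]
    S = np.fliplr(np.eye(n))          # reverse unit matrix S_n
    return bool(np.allclose(X, X.T, atol=tol)
                and np.allclose(S @ X @ S, X, atol=tol))

# Symmetric about both the diagonal and the antidiagonal:
X_bi = np.array([[1., 2., 3.],
                 [2., 4., 2.],
                 [3., 2., 1.]])
X_sym = np.diag([1., 2.])             # symmetric but not persymmetric
```

The second test matrix shows that symmetry alone is not enough; the antidiagonal condition $S_nXS_n = X$ is an independent constraint.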
In Section 2, we propose an iterative algorithm to obtain the bisymmetric minimum Frobenius norm solution of (1) and present some basic properties of the algorithm. Some numerical examples are given in Section 3 to show the efficiency of the proposed iterative method.

2. A New Iterative Algorithm

Firstly, we give the following lemmas.

Lemma 1 (see [12]). There is a unique matrix $P(m, n) \in \mathbb{R}^{mn \times mn}$ such that $\mathrm{vec}(X^T) = P(m, n)\,\mathrm{vec}(X)$ for all $X \in \mathbb{R}^{m \times n}$. This matrix $P(m, n)$ depends only on the dimensions $m$ and $n$. Moreover, $P(m, n)$ is a permutation matrix and $P(n, m) = P(m, n)^T = P(m, n)^{-1}$.

Lemma 2. If $y_0, y_1, y_2, \ldots \in \mathbb{R}^n$ are orthogonal to each other, then there exists a positive integer $\tilde{k} \le n$ such that $y_{\tilde{k}} = 0$.

Proof. If there exists a positive integer $\tilde{k} \le n-1$ such that $y_{\tilde{k}} = 0$, then Lemma 2 is proved. Otherwise, $y_i \ne 0$ for $i = 0, 1, \ldots, n-1$, and $y_0, y_1, \ldots, y_{n-1}$ are orthogonal to each other in the $n$-dimensional vector space $\mathbb{R}^n$, so they form an orthogonal basis of $\mathbb{R}^n$. Hence $y_n$ can be expressed as the linear combination
$$y_n = l_0 y_0 + l_1 y_1 + \cdots + l_{n-1} y_{n-1}, \qquad (2)$$
in which $l_i \in \mathbb{R}$, $i = 0, 1, \ldots, n-1$. Then
$$\langle y_n, y_j \rangle = l_j \langle y_j, y_j \rangle + \sum_{i=0,\; i \ne j}^{n-1} l_i \langle y_i, y_j \rangle = l_j \langle y_j, y_j \rangle, \qquad j = 0, 1, \ldots, n-1. \qquad (3)$$
From $\langle y_n, y_j \rangle = 0$ and $\langle y_j, y_j \rangle \ne 0$, $j = 0, 1, \ldots, n-1$, we get $l_j = 0$ for every $j$; that is,
$$y_n = 0. \qquad (4)$$
This completes the proof.

Lemma 3. A matrix $X \in \mathrm{BSR}^{n \times n}$ if and only if $X = X^T = S_n X^T S_n$.

Lemma 4. If $X \in \mathbb{R}^{n \times n}$, then $X + X^T + S_n (X + X^T) S_n \in \mathrm{BSR}^{n \times n}$.

Next, we review the algorithm proposed by Paige [13] for solving the following consistent problem:
$$M x = f, \qquad (5)$$
with given $M \in \mathbb{R}^{s \times t}$ and $f \in \mathbb{R}^{s}$.

Algorithm 5 (Paige algorithm).

(i) Initialization:
$$z_0 = 0; \quad w_0 = 0; \quad \theta_0 = 1; \quad \zeta_0 = -1; \quad \xi_0 = 0; \qquad (6)$$
$$\beta_1 u_1 = f; \quad \alpha_1 v_1 = M^T u_1. \qquad (7)$$

(ii) Iteration. For $i = 1, 2, \ldots$, until $\{x_i\}$ converges, do

(a) $\zeta_i = -\zeta_{i-1} \beta_i / \alpha_i$; $z_i = z_{i-1} + \zeta_i v_i$;
(b) $\xi_i = (\xi_{i-1} - \beta_i \zeta_{i-1}) / \alpha_i$; $w_i = w_{i-1} + \xi_i v_i$;
(c) $\beta_{i+1} u_{i+1} = M v_i - \alpha_i u_i$;
(d) $\theta_i = -\theta_{i-1} \alpha_i / \beta_{i+1}$;
(e) $\alpha_{i+1} v_{i+1} = M^T u_{i+1} - \beta_{i+1} v_i$;
(f) $\gamma_i = \beta_{i+1} \xi_i / (\beta_{i+1} \theta_i - \zeta_i)$;
(g) $x_i = z_i - \gamma_i w_i$.

It is well known that if the consistent system of linear equations $Mx = f$ has a solution $x^* \in R(M^T)$, then $x^*$ is the unique minimum Euclidean norm solution of $Mx = f$. Since every iterate $x_i$ generated by Algorithm 5 belongs to $R(M^T)$, this leads to the following result.

Theorem 6. The solution generated by Algorithm 5 is the minimum Euclidean norm solution of (5).

If $u_1, u_2, \ldots$ and $v_1, v_2, \ldots$ are generated by Algorithm 5, then $u_i^T u_j = \delta_{ij}$ and $v_i^T v_j = \delta_{ij}$ (see [13] for details), in which
$$\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

If we denote
$$r_i = f - M x_i, \qquad (8)$$
where $x_i$ is the approximate solution obtained by Algorithm 5 after the $i$th iteration, then $r_i = -\beta_{i+1} \zeta_i u_{i+1}$ (see [13] for details). So we have
$$r_i^T r_j = h_{ij} \delta_{ij}, \qquad (9)$$
in which $h_{ij} = \beta_{i+1} \beta_{j+1} \zeta_i \zeta_j$.

Now we derive our new algorithm, which is based on the Paige algorithm. Note that $X$ is a bisymmetric solution of (1) if and only if $X$ is a bisymmetric solution of the following linear equations:
$$\begin{aligned}
A_1 X B_1 &= C_1, & B_1^T X A_1^T &= C_1^T, \\
A_1 S_n X^T S_n B_1 &= C_1, & B_1^T S_n X^T S_n A_1^T &= C_1^T, \\
A_2 X B_2 &= C_2, & B_2^T X A_2^T &= C_2^T, \\
A_2 S_n X^T S_n B_2 &= C_2, & B_2^T S_n X^T S_n A_2^T &= C_2^T.
\end{aligned} \qquad (10)$$

Furthermore, suppose that (10) is consistent and let $Y$ be a solution of (10). If $Y$ is a bisymmetric matrix, then $Y$ is a bisymmetric solution of (1); otherwise $X = (Y + Y^T + S_n (Y + Y^T) S_n)/4$ is a bisymmetric solution of (10), and hence of (1).
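Both Lemma 1 and Lemma 4 are easy to exercise numerically. The sketch below (Python/NumPy; the function names are ours) builds $P(m,n)$ entrywise from the column-stacking convention and applies the symmetrization of Lemma 4; the extra factor $1/4$, as used after (10), makes the map act as the identity on matrices that are already bisymmetric.

```python
import numpy as np

def commutation(m, n):
    """P(m, n) from Lemma 1: commutation(m, n) @ vec(X) = vec(X^T)
    for X in R^{m x n}, with vec stacking columns (column-major)."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(X)[i + j*m] = X[i, j] is routed to vec(X^T)[j + i*n].
            P[j + i * n, i + j * m] = 1.0
    return P

def bisym_part(X):
    """Lemma 4 (scaled by 1/4): X + X^T + S_n (X + X^T) S_n is bisymmetric."""
    n = X.shape[0]
    S = np.fliplr(np.eye(n))
    Y = X + X.T
    return (Y + S @ Y @ S) / 4.0

vec = lambda Z: Z.flatten(order="F")

m, n = 3, 2
X = np.arange(6.0).reshape(m, n)
P = commutation(m, n)

S5 = np.fliplr(np.eye(5))
B = bisym_part(np.random.default_rng(0).standard_normal((5, 5)))
```

The checks below confirm the permutation properties $P(n,m) = P(m,n)^T = P(m,n)^{-1}$ from Lemma 1 and that `bisym_part` returns a matrix that is symmetric, persymmetric, and fixed by a second application.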
The system (10) can be transformed into the form (5) with coefficient matrix $M$ and vector $b$ given by
$$M = \begin{pmatrix}
B_1^T \otimes A_1 \\
A_1 \otimes B_1^T \\
(B_1^T S_n \otimes A_1 S_n)\, P(n, n) \\
(A_1 S_n \otimes B_1^T S_n)\, P(n, n) \\
B_2^T \otimes A_2 \\
A_2 \otimes B_2^T \\
(B_2^T S_n \otimes A_2 S_n)\, P(n, n) \\
(A_2 S_n \otimes B_2^T S_n)\, P(n, n)
\end{pmatrix}, \qquad
b = \begin{pmatrix}
\mathrm{vec}(C_1) \\ \mathrm{vec}(C_1^T) \\ \mathrm{vec}(C_1) \\ \mathrm{vec}(C_1^T) \\
\mathrm{vec}(C_2) \\ \mathrm{vec}(C_2^T) \\ \mathrm{vec}(C_2) \\ \mathrm{vec}(C_2^T)
\end{pmatrix}. \qquad (11)$$

With $M$ and $b$ as in (11), the relations of Algorithm 5 read
$$\beta_1 u_1 = b, \quad \alpha_1 v_1 = M^T u_1, \quad \beta_{i+1} u_{i+1} = M v_i - \alpha_i u_i, \quad \alpha_{i+1} v_{i+1} = M^T u_{i+1} - \beta_{i+1} v_i, \quad i = 1, 2, \ldots. \qquad (12)$$
From (11) and (12), every iterate $u_i$ inherits the block structure of $b$, and every $v_i$ is the vec of a bisymmetric matrix:
$$u_i = \begin{pmatrix}
\mathrm{vec}(U_{i1}) \\ \mathrm{vec}(U_{i1}^T) \\ \mathrm{vec}(U_{i1}) \\ \mathrm{vec}(U_{i1}^T) \\
\mathrm{vec}(U_{i2}) \\ \mathrm{vec}(U_{i2}^T) \\ \mathrm{vec}(U_{i2}) \\ \mathrm{vec}(U_{i2}^T)
\end{pmatrix}, \qquad v_i = \mathrm{vec}(V_i), \qquad i = 1, 2, \ldots, \qquad (13)$$
where $U_{i1} \in \mathbb{R}^{m_1 \times p_1}$, $U_{i2} \in \mathbb{R}^{m_2 \times p_2}$, and $V_i \in \mathbb{R}^{n \times n}$ is a bisymmetric matrix. Substituting (13) into (12) and carrying out the Kronecker products blockwise, the vector recurrences of Algorithm 5 can be rewritten in matrix form. We therefore propose the following matrix-form algorithm.

Algorithm 7.

(i) Initialization:
$\theta_0 = 1$; $\zeta_0 = -1$; $\xi_0 = 0$; $Z_0 = 0$ ($\in \mathbb{R}^{n \times n}$); $W_0 = Z_0$;
$\beta_1 = 2\sqrt{\|C_1\|^2 + \|C_2\|^2}$; $U_{1l} = C_l / \beta_1$, $l = 1, 2$;
$\widetilde{V}_1 = A_1^T U_{11} B_1^T + A_2^T U_{12} B_2^T$; $V_1 = \widetilde{V}_1 + \widetilde{V}_1^T + S_n(\widetilde{V}_1 + \widetilde{V}_1^T) S_n$; $\alpha_1 = \|V_1\|$; $V_1 = V_1 / \alpha_1$.

(ii) Iteration. For $i = 1, 2, \ldots$, until $\{X_i\}$ converges, do

(a) $\zeta_i = -\zeta_{i-1} \beta_i / \alpha_i$; $Z_i = Z_{i-1} + \zeta_i V_i$;
(b) $\xi_i = (\xi_{i-1} - \beta_i \zeta_{i-1}) / \alpha_i$; $W_i = W_{i-1} + \xi_i V_i$;
(c) $\widetilde{U}_{i+1,l} = A_l V_i B_l - \alpha_i U_{il}$, $l = 1, 2$; $\beta_{i+1} = 2\sqrt{\|\widetilde{U}_{i+1,1}\|^2 + \|\widetilde{U}_{i+1,2}\|^2}$; $U_{i+1,l} = \widetilde{U}_{i+1,l} / \beta_{i+1}$, $l = 1, 2$;
(d) $\theta_i = -\theta_{i-1} \alpha_i / \beta_{i+1}$;
(e) $\widetilde{V}_{i+1} = A_1^T U_{i+1,1} B_1^T + A_2^T U_{i+1,2} B_2^T$; $V_{i+1} = \widetilde{V}_{i+1} + \widetilde{V}_{i+1}^T + S_n(\widetilde{V}_{i+1} + \widetilde{V}_{i+1}^T) S_n - \beta_{i+1} V_i$; $\alpha_{i+1} = \|V_{i+1}\|$; $V_{i+1} = V_{i+1} / \alpha_{i+1}$;
(f) $\gamma_i = \beta_{i+1} \xi_i / (\beta_{i+1} \theta_i - \zeta_i)$;
(g) $X_i = Z_i - \gamma_i W_i$.

Remark 8. The stopping criteria for Algorithm 7 can be taken as
$$\|C_1 - A_1 X_i B_1\| + \|C_2 - A_2 X_i B_2\| \le \varepsilon, \qquad (14)$$
or $|\zeta_i| \le \varepsilon$, or $\|X_i - X_{i-1}\| \le \varepsilon$, where $\varepsilon$ is a small tolerance.

Remark 9. Since $Z_i$, $W_i$, and $V_i$ in Algorithm 7 are bisymmetric matrices, the iterates $X_i$ obtained by Algorithm 7 are also bisymmetric matrices.

Some basic properties of Algorithm 7 are listed in the following theorems.

Theorem 10. The solution generated by Algorithm 7 is the bisymmetric minimum Frobenius norm solution of (1).

Theorem 11. In the absence of round-off errors, the iteration of Algorithm 7 terminates in at most $m_1 p_1 + m_2 p_2$ steps.
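Theorem 10 can be spot-checked on a small random instance by brute force: form the stacked system (11) explicitly and compute its minimum Euclidean norm solution with a dense least-squares solve, which for a consistent system returns the minimum norm solution. This checks the claimed limit, not Algorithm 7 itself; the sketch below is Python/NumPy, and the helper names and test dimensions are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
S = np.fliplr(np.eye(n))                       # reverse unit matrix S_n

# Commutation matrix P(n, n): P @ vec(X) = vec(X^T) (column-stacking vec).
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j + i * n, i + j * n] = 1.0

def bisym(X):
    """Bisymmetric projection of Lemma 4 (divided by 4)."""
    Y = X + X.T
    return (Y + S @ Y @ S) / 4.0

vec = lambda Z: Z.flatten(order="F")

A1, B1 = rng.standard_normal((3, n)), rng.standard_normal((n, 2))
A2, B2 = rng.standard_normal((2, n)), rng.standard_normal((n, 3))
X_true = bisym(rng.standard_normal((n, n)))    # bisymmetric, so (1) is consistent
C1, C2 = A1 @ X_true @ B1, A2 @ X_true @ B2

# The eight block rows of (11): the four equations of (10) for each l = 1, 2.
rows, rhs = [], []
for A, B, C in ((A1, B1, C1), (A2, B2, C2)):
    rows += [np.kron(B.T, A),                  # A X B = C
             np.kron(A, B.T),                  # B^T X A^T = C^T
             np.kron((S @ B).T, A @ S) @ P,    # A S X^T S B = C
             np.kron(A @ S, (S @ B).T) @ P]    # B^T S X^T S A^T = C^T
    rhs += [vec(C), vec(C.T), vec(C), vec(C.T)]
M, b = np.vstack(rows), np.concatenate(rhs)

x_star, *_ = np.linalg.lstsq(M, b, rcond=None)  # minimum norm solution of (5)
X_star = x_star.reshape(n, n, order="F")
```

On this instance the recovered matrix solves (1), is symmetric and persymmetric (hence bisymmetric, as Theorem 10 asserts), and its Frobenius norm does not exceed that of the bisymmetric matrix used to build the right-hand sides.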
Proof of Theorem 11. By (8) and (11), a simple calculation gives
$$r_i = \left( \mathrm{vec}(R_{i1})^T,\ \mathrm{vec}(R_{i1}^T)^T,\ \mathrm{vec}(R_{i1})^T,\ \mathrm{vec}(R_{i1}^T)^T,\ \mathrm{vec}(R_{i2})^T,\ \mathrm{vec}(R_{i2}^T)^T,\ \mathrm{vec}(R_{i2})^T,\ \mathrm{vec}(R_{i2}^T)^T \right)^T, \qquad (15)$$
in which $R_{i1} = C_1 - A_1 X_i B_1$ and $R_{i2} = C_2 - A_2 X_i B_2$, where $X_i$ is the approximate solution obtained by Algorithm 7 after the $i$th iteration. By Lemma 1, we have
$$\mathrm{vec}(R_{i1}^T) = P(m_1, p_1)\,\mathrm{vec}(R_{i1}), \qquad \mathrm{vec}(R_{i2}^T) = P(m_2, p_2)\,\mathrm{vec}(R_{i2}), \qquad (16)$$
where $P(m_1, p_1) \in \mathbb{R}^{m_1 p_1 \times m_1 p_1}$ and $P(m_2, p_2) \in \mathbb{R}^{m_2 p_2 \times m_2 p_2}$ are permutation matrices. For simplicity, denote $P_1 = P(m_1, p_1)$ and $P_2 = P(m_2, p_2)$; then $P_1^T P_1 = I_{m_1 p_1}$ and $P_2^T P_2 = I_{m_2 p_2}$. Hence
$$r_i^T r_j = 2\,\mathrm{vec}(R_{i1})^T \mathrm{vec}(R_{j1}) + 2\,\mathrm{vec}(R_{i1})^T P_1^T P_1\, \mathrm{vec}(R_{j1}) + 2\,\mathrm{vec}(R_{i2})^T \mathrm{vec}(R_{j2}) + 2\,\mathrm{vec}(R_{i2})^T P_2^T P_2\, \mathrm{vec}(R_{j2}) = 4\left( \mathrm{vec}(R_{i1})^T \mathrm{vec}(R_{j1}) + \mathrm{vec}(R_{i2})^T \mathrm{vec}(R_{j2}) \right). \qquad (17)$$

If we let $t_i = \left( \mathrm{vec}(R_{i1})^T, \mathrm{vec}(R_{i2})^T \right)^T \in \mathbb{R}^{m_1 p_1 + m_2 p_2}$, then (9) and (17) show that $t_0, t_1, t_2, \ldots$ are orthogonal to each other in $\mathbb{R}^{m_1 p_1 + m_2 p_2}$. By Lemma 2, there exists a positive integer $\tilde{k} \le m_1 p_1 + m_2 p_2$ such that $t_{\tilde{k}} = 0$. Hence
$$R_{\tilde{k}1} = R_{\tilde{k}2} = 0; \qquad (18)$$
that is, the iteration of Algorithm 7 terminates in at most $m_1 p_1 + m_2 p_2$ steps in the absence of round-off errors.
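Behind the matrix form of Algorithm 7 is an adjoint pairing: steps (c) and (e) apply, respectively, the linear map $V \mapsto (A_1VB_1, A_2VB_2)$ and its adjoint $(U_1, U_2) \mapsto A_1^TU_1B_1^T + A_2^TU_2B_2^T$ with respect to the inner product $\langle X, Y\rangle = \mathrm{tr}(Y^TX)$. This correspondence can be verified on random data (Python/NumPy; all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, n, p1, m2, p2 = 3, 4, 2, 2, 3
A1, B1 = rng.standard_normal((m1, n)), rng.standard_normal((n, p1))
A2, B2 = rng.standard_normal((m2, n)), rng.standard_normal((n, p2))
V = rng.standard_normal((n, n))
U1, U2 = rng.standard_normal((m1, p1)), rng.standard_normal((m2, p2))

inner = lambda X, Y: np.trace(Y.T @ X)          # <X, Y> = tr(Y^T X)

# Forward map (as in step (c)) paired against (U1, U2) ...
lhs = inner(A1 @ V @ B1, U1) + inner(A2 @ V @ B2, U2)
# ... equals V paired against the adjoint map (as in step (e)).
rhs = inner(V, A1.T @ U1 @ B1.T + A2.T @ U2 @ B2.T)
```

This identity is what lets the Kronecker-vector recurrences (12) be executed entirely with small matrix products, without ever forming $M$ or $M^T$ explicitly.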
3. Numerical Examples

In this section, we use some numerical examples to illustrate the efficiency of our algorithm. The computations are carried out on a PC in MATLAB 7.0; the machine precision is about $10^{-16}$. We stop the iteration when $\|R_{i1}\| + \|R_{i2}\| \le 10^{-12}$.

Example 12. Matrices $A_1 \in \mathbb{R}^{6 \times 7}$, $B_1 \in \mathbb{R}^{7 \times 6}$, $C_1 \in \mathbb{R}^{6 \times 6}$, $A_2 \in \mathbb{R}^{5 \times 7}$, $B_2 \in \mathbb{R}^{7 \times 5}$, and $C_2 \in \mathbb{R}^{5 \times 5}$ with integer entries are given so that (1) is consistent; indeed, one can easily verify that (1) has the bisymmetric solution
$$\widetilde{X} = \begin{pmatrix}
1 & -1 & 1 & 2 & 1 & -1 & 1 \\
-1 & 3 & 1 & 1 & 1 & 1 & -1 \\
1 & 1 & 0 & -2 & -1 & 1 & 1 \\
2 & 1 & -2 & 1 & -2 & 1 & 2 \\
1 & 1 & -1 & -2 & 0 & 1 & 1 \\
-1 & 1 & 1 & 1 & 1 & 3 & -1 \\
1 & -1 & 1 & 2 & 1 & -1 & 1
\end{pmatrix}. \qquad (20)$$

We choose the initial matrix $X_0 = 0$; using Algorithm 7 and iterating 13 steps, we obtain the unique bisymmetric minimum Frobenius norm solution of (1),
$$X_{13} = \begin{pmatrix}
0.4755 & -0.6822 & 0.6274 & 1.4586 & 0.2774 & -1.2112 & -0.1053 \\
-0.6822 & 2.6628 & 0.4046 & 0.0716 & 1.0133 & 0.4001 & -1.2112 \\
0.6274 & 0.4046 & -1.0215 & -2.2128 & -1.6176 & 1.0133 & 0.2774 \\
1.4586 & 0.0716 & -2.2128 & -1.1548 & -2.2128 & 0.0716 & 1.4586 \\
0.2774 & 1.0133 & -1.6176 & -2.2128 & -1.0215 & 0.4046 & 0.6274 \\
-1.2112 & 0.4001 & 1.0133 & 0.0716 & 0.4046 & 2.6628 & -0.6822 \\
-0.1053 & -1.2112 & 0.2774 & 1.4586 & 0.6274 & -0.6822 & 0.4755
\end{pmatrix}, \qquad (21)$$
with $\|R_{13,1}\| + \|R_{13,2}\| = 6.2303 \times 10^{-13}$.

[Figure 1: Convergence curves of $\log_{10}(\|R_{i1}\| + \|R_{i2}\|)$ against iteration steps for Cai's algorithm and Algorithm 7.]

From Figure 1, we see that our algorithm is faster than Cai's algorithm [10].

Example 13. Let
$$A_1 = \mathrm{hilb}(7), \quad B_1 = \mathrm{pascal}(7), \quad A_2 = \mathrm{rand}(7, 7), \quad B_2 = \mathrm{rand}(7, 7), \qquad (22)$$
with hilb, pascal, and rand being functions in MATLAB, and let $C_1 = A_1 \widetilde{X} B_1$ and $C_2 = A_2 \widetilde{X} B_2$, in which $\widetilde{X}$ is defined in Example 12. Hence (1) is consistent. We choose the initial matrix $X_0 = 0$.

[Figure 2: Convergence curves of $\log_{10}(\|R_{i1}\| + \|R_{i2}\|)$ against iteration steps for Cai's algorithm and Algorithm 7.]

From Figure 2, we see that our algorithm is faster and more stable than Cai's algorithm [10].

Acknowledgments

This research was supported by the Natural Science Foundation of China (nos. 10901056, 11071079, and 11001167), the Shanghai Science and Technology Commission "Venus" Project (no. 11QA1402200), and the Natural Science Foundation of Zhejiang Province (no. Y6110043).

References

[1] X. P. Sheng and G. L. Chen, "A finite iterative method for solving a pair of linear matrix equations $(AXB, CXD) = (E, F)$," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1350–1358, 2007.
[2] Y. X. Yuan, "Least squares solutions of matrix equation $AXB = E$, $CXD = F$," Journal of East China Shipbuilding Institute, vol. 18, no. 3, pp. 29–31, 2004.
[3] A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ with applications," Computers & Mathematics with Applications, vol. 41, no. 7-8, pp. 929–935, 2001.
[4] S. K. Mitra, "A pair of simultaneous linear matrix equations and a matrix programming problem," Linear Algebra and Its Applications, vol. 131, pp. 107–123, 1990.
[5] S. K. Mitra, "The matrix equations $AX = C$, $XB = D$," Linear Algebra and Its Applications, vol. 59, pp. 171–181, 1984.
[6] P. Bhimasankaram, "Common solutions to the linear matrix equations $AX = C$, $XB = D$, and $FXG = H$," Sankhyā A, vol. 38, no. 4, pp. 404–409, 1976.
[7] C. G. Khatri and S. K. Mitra, "Hermitian and nonnegative definite solutions of linear matrix equations," SIAM Journal on Applied Mathematics, vol. 31, no. 4, pp. 579–585, 1976.
[8] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
[9] Z.-H. Peng, X.-Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation $A_1XB_1 = C_1$," Applied Mathematics and Computation, vol. 181, no. 2, pp. 988–999, 2006.
[10] J. Cai, G. L. Chen, and Q. B. Liu, "An iterative method for the bisymmetric solutions of the consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$," International Journal of Computer Mathematics, vol. 87, no. 12, pp. 2706–2715, 2010.
[11] J. Cai and G. L. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations $A_1XB_1 = C_1$," Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1237–1244, 2009.
[12] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[13] C. C. Paige, "Bidiagonalization of matrices and solution of linear equations," SIAM Journal on Numerical Analysis, vol. 11, pp. 197–209, 1974.