Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 125687, 6 pages
http://dx.doi.org/10.1155/2013/125687
Research Article
A New Method for the Bisymmetric Minimum Norm Solution of the Consistent Matrix Equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$
Aijing Liu,1,2 Guoliang Chen,1 and Xiangyun Zhang1
1 Department of Mathematics, East China Normal University, Shanghai 200241, China
2 School of Mathematical Sciences, Qufu Normal University, Shandong 273165, China
Correspondence should be addressed to Guoliang Chen; glchen@math.ecnu.edu.cn
Received 18 October 2012; Accepted 16 January 2013
Academic Editor: Alberto Cabada
Copyright © 2013 Aijing Liu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a new iterative method to find the bisymmetric minimum norm solution of a pair of consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$. The algorithm obtains the bisymmetric solution with minimum Frobenius norm in finitely many iteration steps in the absence of round-off errors. Our algorithm is faster and more stable than Algorithm 2.1 of Cai et al. (2010).
1. Introduction
Let $\mathbb{R}^{m\times n}$ denote the set of $m\times n$ real matrices. A matrix $X = (x_{ij}) \in \mathbb{R}^{n\times n}$ is said to be bisymmetric if $x_{ij} = x_{ji} = x_{n-i+1,n-j+1}$ for all $1 \le i, j \le n$. Let $\mathrm{BSR}^{n\times n}$ denote the set of $n\times n$ real bisymmetric matrices. For any $X \in \mathbb{R}^{m\times n}$, $X^T$, $\operatorname{tr}(X)$, $\|X\|$, and $\|X\|_2$ represent the transpose, trace, Frobenius norm, and Euclidean norm of $X$, respectively. The symbol $\operatorname{vec}(\cdot)$ stands for the vec operator; that is, for $X = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^{m\times n}$, where $x_i$ ($i = 1, 2, \ldots, n$) denotes the $i$th column of $X$, $\operatorname{vec}(X) = (x_1^T, x_2^T, \ldots, x_n^T)^T$. Let $\operatorname{mat}(\cdot)$ represent the inverse of the vec operator. In the vector space $\mathbb{R}^{m\times n}$, we define the inner product $\langle X, Y\rangle = \operatorname{tr}(Y^TX)$ for all $X, Y \in \mathbb{R}^{m\times n}$. Two matrices $X$ and $Y$ are said to be orthogonal if $\langle X, Y\rangle = 0$. Let $S_n = (e_n, e_{n-1}, \ldots, e_1)$ denote the $n\times n$ reverse unit matrix, where $e_i$ ($i = 1, 2, \ldots, n$) is the $i$th column of the $n\times n$ identity matrix $I_n$; then $S_n^T = S_n$ and $S_n^2 = I_n$.
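The two defining conditions of bisymmetry ($X = X^T$ and $X = S_nXS_n$) and the symmetrization map used in Lemma 4 below are easy to check numerically. The following is a minimal NumPy sketch of our own (not from the paper); the helper names are ours:

```python
import numpy as np

def reverse_unit(n):
    """S_n: the n-by-n reverse unit (exchange) matrix."""
    return np.fliplr(np.eye(n))

def is_bisymmetric(X, tol=1e-12):
    """X is bisymmetric iff X = X^T and X = S_n X S_n."""
    S = reverse_unit(X.shape[0])
    return (np.allclose(X, X.T, atol=tol)
            and np.allclose(X, S @ X @ S, atol=tol))

def bisymmetrize(Y):
    """Lemma 4-style map: Y + Y^T + S_n (Y + Y^T) S_n is always bisymmetric."""
    S = reverse_unit(Y.shape[0])
    T = Y + Y.T
    return T + S @ T @ S

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 5))
assert not is_bisymmetric(Y)
assert is_bisymmetric(bisymmetrize(Y))
```

The `bisymmetrize` output is symmetric because $T = Y + Y^T$ is, and persymmetric because conjugating by $S_n$ leaves $T + S_nTS_n$ unchanged.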
In this paper, we discuss the following consistent matrix equations:

$$A_1XB_1 = C_1, \qquad A_2XB_2 = C_2, \qquad X \in \mathrm{BSR}^{n\times n}, \tag{1}$$

where $A_1 \in \mathbb{R}^{p_1\times n}$, $B_1 \in \mathbb{R}^{n\times q_1}$, $C_1 \in \mathbb{R}^{p_1\times q_1}$, $A_2 \in \mathbb{R}^{p_2\times n}$, $B_2 \in \mathbb{R}^{n\times q_2}$, and $C_2 \in \mathbb{R}^{p_2\times q_2}$ are given matrices, and $X \in \mathrm{BSR}^{n\times n}$ is the unknown bisymmetric matrix to be found.
Research on solving a pair of matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$ has been actively ongoing for the past 30 or more years (see details in [1–6]). Besides the works on finding common solutions to the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$, there are some valuable efforts on solving such a pair of matrix equations under linear constraints on the solution. For instance, Khatri and Mitra [7] derived the Hermitian solution of the consistent matrix equations $AX = C$, $XB = D$. Deng et al. [8] studied the consistency conditions and the general expressions of the Hermitian solutions of the matrix equations $(AX, XB) = (C, D)$ and designed an iterative method for their Hermitian minimum norm solutions. Peng et al. [9] presented an iterative method to obtain the least squares reflexive solutions of the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$. Cai et al. [10, 11] proposed iterative methods for the bisymmetric solutions of the matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$.
In this paper, we propose a new iterative algorithm to compute the bisymmetric solution with minimum Frobenius norm of the consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$; it is faster and more stable than Cai's algorithm (Algorithm 2.1) in [10].
The rest of the paper is organized as follows. In Section 2,
we propose an iterative algorithm to obtain the bisymmetric
minimum Frobenius norm solution of (1) and present some
basic properties of the algorithm. Some numerical examples
are given in Section 3 to show the efficiency of the proposed
iterative method.
2. A New Iterative Algorithm
Firstly, we give the following lemmas.

Lemma 1 (see [12]). There is a unique matrix $P(m,n) \in \mathbb{R}^{mn\times mn}$ such that $\operatorname{vec}(X^T) = P(m,n)\operatorname{vec}(X)$ for all $X \in \mathbb{R}^{m\times n}$. The matrix $P(m,n)$ depends only on the dimensions $m$ and $n$. Moreover, $P(m,n)$ is a permutation matrix and $P(n,m) = P(m,n)^T = P(m,n)^{-1}$.

Lemma 2. If $y_0, y_1, y_2, \ldots \in \mathbb{R}^m$ are orthogonal to each other, then there exists a positive integer $\hat{l} \le m$ such that $y_{\hat{l}} = 0$.

Proof. If there exists a positive integer $\hat{l} \le m-1$ such that $y_{\hat{l}} = 0$, then Lemma 2 is proved. Otherwise, we have $y_i \ne 0$, $i = 0, 1, \ldots, m-1$, and $y_0, y_1, \ldots, y_{m-1}$ are orthogonal to each other in the $m$-dimensional vector space $\mathbb{R}^m$. So $y_0, y_1, \ldots, y_{m-1}$ form an orthogonal basis of $\mathbb{R}^m$. Hence $y_m$ can be expressed as a linear combination of $y_0, y_1, \ldots, y_{m-1}$; denote

$$y_m = a_0y_0 + a_1y_1 + \cdots + a_{m-1}y_{m-1}, \tag{2}$$

in which $a_i \in \mathbb{R}$, $i = 0, 1, \ldots, m-1$. Then

$$\langle y_i, y_m\rangle = a_0\langle y_i, y_0\rangle + a_1\langle y_i, y_1\rangle + \cdots + a_{m-1}\langle y_i, y_{m-1}\rangle = a_i\langle y_i, y_i\rangle + \sum_{j=0,\,j\ne i}^{m-1} a_j\langle y_i, y_j\rangle = a_i\langle y_i, y_i\rangle, \quad i = 0, 1, \ldots, m-1. \tag{3}$$

From $\langle y_i, y_m\rangle = 0$ and $\langle y_i, y_i\rangle \ne 0$, $i = 0, 1, \ldots, m-1$, we have $a_i = 0$, $i = 0, 1, \ldots, m-1$; that is,

$$y_m = 0. \tag{4}$$

This completes the proof.

Lemma 3. A matrix $X \in \mathrm{BSR}^{n\times n}$ if and only if $X = X^T = S_nXS_n$.

Lemma 4. If $Y \in \mathbb{R}^{n\times n}$, then $Y + Y^T + S_n(Y + Y^T)S_n \in \mathrm{BSR}^{n\times n}$.

Next, we review the algorithm proposed by Paige [13] for solving the following consistent problem:

$$Mx = f, \tag{5}$$

with given $M \in \mathbb{R}^{s\times t}$, $f \in \mathbb{R}^s$.

Algorithm 5 (Paige algorithm). (i) Initialization:

$$\tau_0 = 1; \quad \xi_0 = -1; \quad \theta_0 = 0; \quad z_0 = 0; \quad w_0 = 0; \quad \beta_1u_1 = f; \quad \alpha_1v_1 = M^Tu_1. \tag{6}$$

(ii) Iteration. For $i = 1, 2, \ldots$, until $\{x_i\}$ converges, do

(a) $\xi_i = -\xi_{i-1}\beta_i/\alpha_i$; $z_i = z_{i-1} + \xi_iv_i$;
(b) $\theta_i = (\tau_{i-1} - \beta_i\theta_{i-1})/\alpha_i$; $w_i = w_{i-1} + \theta_iv_i$;
(c) $\beta_{i+1}u_{i+1} = Mv_i - \alpha_iu_i$;
(d) $\tau_i = -\tau_{i-1}\alpha_i/\beta_{i+1}$;
(e) $\alpha_{i+1}v_{i+1} = M^Tu_{i+1} - \beta_{i+1}v_i$;
(f) $\gamma_i = \beta_{i+1}\xi_i/(\beta_{i+1}\theta_i - \tau_i)$;
(g) $x_i = z_i - \gamma_iw_i$. (7)

It is well known that if the consistent system of linear equations $Mx = f$ has a solution $x^* \in R(M^T)$, then $x^*$ is the unique minimum Euclidean norm solution of $Mx = f$. It is obvious that the iterates $x_i$ generated by Algorithm 5 belong to $R(M^T)$, which leads to the following result.

Theorem 6. The solution generated by Algorithm 5 is the minimum Euclidean norm solution of (5).

If $u_1, u_2, \ldots$ and $v_1, v_2, \ldots$ are generated by Algorithm 5, then $u_i^Tu_j = \delta_{ij}$ and $v_i^Tv_j = \delta_{ij}$ (see details in [13]), in which

$$\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

If we denote

$$r_i = f - Mx_i, \quad i = 1, 2, \ldots, \tag{8}$$

where $x_i$ is the approximate solution obtained by Algorithm 5 after the $i$th iteration, it follows that $r_i = -\beta_{i+1}\xi_iu_{i+1}$ (see details in [13]). So we have

$$r_i^Tr_j = h_{ij}\delta_{ij}, \tag{9}$$

in which $h_{ij} = \beta_{i+1}\beta_{j+1}\xi_i\xi_j$.

Now we derive our new algorithm, which is based on the Paige algorithm. Note that $X$ is a bisymmetric solution of (1) if and only if $X$ is a bisymmetric solution of the following linear equations:

$$A_1XB_1 = C_1, \quad B_1^TXA_1^T = C_1^T, \quad A_1S_nXS_nB_1 = C_1, \quad B_1^TS_nXS_nA_1^T = C_1^T,$$
$$A_2XB_2 = C_2, \quad B_2^TXA_2^T = C_2^T, \quad A_2S_nXS_nB_2 = C_2, \quad B_2^TS_nXS_nA_2^T = C_2^T. \tag{10}$$

Furthermore, suppose (10) is consistent and let $Y$ be a solution of (10). If $Y$ is a bisymmetric matrix, then $Y$ is a bisymmetric solution of (1); otherwise we can obtain a bisymmetric solution of (10) by $X = (Y + Y^T + S_n(Y + Y^T)S_n)/4$.
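As a cross-check of the recurrences above, Algorithm 5 can be transcribed almost line by line. The following NumPy sketch is our own illustration (not the authors' code); we add explicit guards for the $\beta_{i+1} \approx 0$ and $\alpha_{i+1} \approx 0$ termination cases, at which point the current $z_i$ is already the exact minimum norm solution:

```python
import numpy as np

def paige(M, f, maxit=100, tol=1e-10):
    """Algorithm 5 (Paige): minimum Euclidean norm solution of a consistent M x = f."""
    t = M.shape[1]
    tau, xi, theta = 1.0, -1.0, 0.0
    z, w = np.zeros(t), np.zeros(t)
    beta = np.linalg.norm(f)                 # beta_1 u_1 = f
    u = f / beta
    av = M.T @ u                             # alpha_1 v_1 = M^T u_1
    alpha = np.linalg.norm(av)
    v = av / alpha
    x = z.copy()
    for _ in range(maxit):
        xi = -xi * beta / alpha              # (a)
        z = z + xi * v
        theta = (tau - beta * theta) / alpha # (b)
        w = w + theta * v
        bu = M @ v - alpha * u               # (c)
        beta = np.linalg.norm(bu)
        if beta < tol:                       # exact termination: z solves M x = f
            return z
        u = bu / beta
        tau = -tau * alpha / beta            # (d)
        av = M.T @ u - beta * v              # (e)
        alpha = np.linalg.norm(av)
        if alpha < tol:
            return z
        v = av / alpha
        gamma = beta * xi / (beta * theta - tau)   # (f)
        x = z - gamma * w                    # (g)
        if abs(xi) < tol:
            break
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 6))
f = M @ rng.standard_normal(6)
x = paige(M, f)
x_min = np.linalg.lstsq(M, f, rcond=None)[0]   # pseudoinverse (minimum norm) solution
assert np.allclose(M @ x, f, atol=1e-8)
assert np.allclose(x, x_min, atol=1e-6)
```

Since every iterate stays in $R(M^T)$, the computed solution agrees with the pseudoinverse solution, matching Theorem 6.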
The system (10) can be transformed into (5) with coefficient matrix $M$ and vector $f$ given by

$$M = \begin{pmatrix} B_1^T \otimes A_1 \\ A_1 \otimes B_1^T \\ B_1^TS_n \otimes A_1S_n \\ A_1S_n \otimes B_1^TS_n \\ B_2^T \otimes A_2 \\ A_2 \otimes B_2^T \\ B_2^TS_n \otimes A_2S_n \\ A_2S_n \otimes B_2^TS_n \end{pmatrix}, \qquad f = \begin{pmatrix} \operatorname{vec}(C_1) \\ \operatorname{vec}(C_1^T) \\ \operatorname{vec}(C_1) \\ \operatorname{vec}(C_1^T) \\ \operatorname{vec}(C_2) \\ \operatorname{vec}(C_2^T) \\ \operatorname{vec}(C_2) \\ \operatorname{vec}(C_2^T) \end{pmatrix}. \tag{11}$$

Therefore, with $M$ and $f$ as in (11), the relations $\beta_1u_1 = f$, $\alpha_1v_1 = M^Tu_1$, $\beta_{i+1}u_{i+1} = Mv_i - \alpha_iu_i$, and $\alpha_{i+1}v_{i+1} = M^Tu_{i+1} - \beta_{i+1}v_i$ of Algorithm 5 can be written out blockwise:

$$\beta_1u_1 = f, \quad \alpha_1v_1 = M^Tu_1, \quad \beta_{i+1}u_{i+1} = Mv_i - \alpha_iu_i, \quad \alpha_{i+1}v_{i+1} = M^Tu_{i+1} - \beta_{i+1}v_i, \quad i = 1, 2, \ldots. \tag{12}$$

From (12), we have

$$u_i = \begin{pmatrix} \operatorname{vec}(U_{i1}) \\ \operatorname{vec}(U_{i1}^T) \\ \operatorname{vec}(U_{i1}) \\ \operatorname{vec}(U_{i1}^T) \\ \operatorname{vec}(U_{i2}) \\ \operatorname{vec}(U_{i2}^T) \\ \operatorname{vec}(U_{i2}) \\ \operatorname{vec}(U_{i2}^T) \end{pmatrix}, \qquad v_i = \operatorname{vec}(V_i), \quad i = 1, 2, \ldots, \tag{13}$$

where $U_{i1} \in \mathbb{R}^{p_1\times q_1}$, $U_{i2} \in \mathbb{R}^{p_2\times q_2}$, $V_i \in \mathbb{R}^{n\times n}$, and $V_i$ is a bisymmetric matrix. And so, the vector forms of $\beta_1u_1 = f$, $\alpha_1v_1 = M^Tu_1$, $\beta_{i+1}u_{i+1} = Mv_i - \alpha_iu_i$, and $\alpha_{i+1}v_{i+1} = M^Tu_{i+1} - \beta_{i+1}v_i$ in Algorithm 5 can be rewritten in matrix form. We now propose the following matrix-form algorithm.

Algorithm 7. (i) Initialization:

$\tau_0 = 1$; $\xi_0 = -1$; $\theta_0 = 0$; $Z_0 = 0\ (\in \mathbb{R}^{n\times n})$; $W_0 = Z_0$; $\beta_1 = 2\sqrt{\|C_1\|^2 + \|C_2\|^2}$; $U_{1j} = C_j/\beta_1$, $j = 1, 2$; $T_1 = A_1^TU_{11}B_1^T + A_2^TU_{12}B_2^T$; $V_1 = T_1 + T_1^T + S_n(T_1 + T_1^T)S_n$; $\alpha_1 = \|V_1\|$; $V_1 = V_1/\alpha_1$.

(ii) Iteration. For $i = 1, 2, \ldots$, until $\{X_i\}$ converges, do

(a) $\xi_i = -\xi_{i-1}\beta_i/\alpha_i$; $Z_i = Z_{i-1} + \xi_iV_i$;
(b) $\theta_i = (\tau_{i-1} - \beta_i\theta_{i-1})/\alpha_i$; $W_i = W_{i-1} + \theta_iV_i$;
(c) $U_{i+1,j} = A_jV_iB_j - \alpha_iU_{ij}$, $j = 1, 2$; $\beta_{i+1} = 2\sqrt{\|U_{i+1,1}\|^2 + \|U_{i+1,2}\|^2}$; $U_{i+1,j} = U_{i+1,j}/\beta_{i+1}$, $j = 1, 2$;
(d) $\tau_i = -\tau_{i-1}\alpha_i/\beta_{i+1}$;
(e) $T_{i+1} = A_1^TU_{i+1,1}B_1^T + A_2^TU_{i+1,2}B_2^T$; $V_{i+1} = T_{i+1} + T_{i+1}^T + S_n(T_{i+1} + T_{i+1}^T)S_n - \beta_{i+1}V_i$; $\alpha_{i+1} = \|V_{i+1}\|$; $V_{i+1} = V_{i+1}/\alpha_{i+1}$;
(f) $\gamma_i = \beta_{i+1}\xi_i/(\beta_{i+1}\theta_i - \tau_i)$;
(g) $X_i = Z_i - \gamma_iW_i$.

Remark 8. The stopping criterion for Algorithm 7 can be taken as

$$\|C_1 - A_1X_iB_1\| + \|C_2 - A_2X_iB_2\| \le \epsilon, \qquad |\xi_i| \le \epsilon, \quad \text{or} \quad \|X_i - X_{i-1}\| \le \epsilon, \tag{14}$$

where $\epsilon$ is a small tolerance.

Remark 9. As the $V_i$, $Z_i$, and $W_i$ in Algorithm 7 are bisymmetric matrices, the iterates $X_i$ obtained by Algorithm 7 are also bisymmetric matrices.
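For concreteness, Algorithm 7 can be transcribed directly into matrix computations. The sketch below is our own NumPy illustration, not the authors' code; we add guards for the divisions at termination (when $\beta_{i+1}$ or $\alpha_{i+1}$ vanishes, the current $Z_i$ already solves (1)) and use the stopping criterion of Remark 8:

```python
import numpy as np

def algorithm7(A1, B1, C1, A2, B2, C2, maxit=200, tol=1e-11):
    """Algorithm 7: bisymmetric minimum Frobenius norm solution of the
    consistent pair A1 X B1 = C1, A2 X B2 = C2."""
    n = A1.shape[1]
    S = np.fliplr(np.eye(n))
    sym = lambda T: T + T.T + S @ (T + T.T) @ S      # Lemma 4 symmetrization

    tau, xi, theta = 1.0, -1.0, 0.0
    Z = np.zeros((n, n)); W = np.zeros((n, n)); X = Z.copy()
    beta = 2.0 * np.sqrt(np.linalg.norm(C1)**2 + np.linalg.norm(C2)**2)
    U1, U2 = C1 / beta, C2 / beta
    V = sym(A1.T @ U1 @ B1.T + A2.T @ U2 @ B2.T)
    alpha = np.linalg.norm(V); V = V / alpha
    for _ in range(maxit):
        xi = -xi * beta / alpha                      # (a)
        Z = Z + xi * V
        theta = (tau - beta * theta) / alpha         # (b)
        W = W + theta * V
        N1 = A1 @ V @ B1 - alpha * U1                # (c)
        N2 = A2 @ V @ B2 - alpha * U2
        beta = 2.0 * np.sqrt(np.linalg.norm(N1)**2 + np.linalg.norm(N2)**2)
        if beta < tol:                               # exact termination: Z solves (1)
            return Z
        U1, U2 = N1 / beta, N2 / beta
        tau = -tau * alpha / beta                    # (d)
        Vn = sym(A1.T @ U1 @ B1.T + A2.T @ U2 @ B2.T) - beta * V   # (e)
        alpha = np.linalg.norm(Vn)
        if alpha < tol:
            return Z
        V = Vn / alpha
        gamma = beta * xi / (beta * theta - tau)     # (f)
        X = Z - gamma * W                            # (g)
        if abs(xi) < tol:                            # stopping criterion of Remark 8
            break
    return X

# consistent test problem with a known bisymmetric solution
rng = np.random.default_rng(2)
n = 4
S = np.fliplr(np.eye(n))
Y = rng.standard_normal((n, n))
Xt = Y + Y.T + S @ (Y + Y.T) @ S                     # bisymmetric by Lemma 4
A1, B1 = rng.standard_normal((2, n)), rng.standard_normal((n, 3))
A2, B2 = rng.standard_normal((3, n)), rng.standard_normal((n, 2))
X = algorithm7(A1, B1, A1 @ Xt @ B1, A2, B2, A2 @ Xt @ B2)
res = (np.linalg.norm(A1 @ Xt @ B1 - A1 @ X @ B1)
       + np.linalg.norm(A2 @ Xt @ B2 - A2 @ X @ B2))
assert res < 1e-8
assert np.allclose(X, X.T) and np.allclose(X, S @ X @ S)
```

In line with Remark 9, the computed iterate is bisymmetric by construction, since $Z_i$ and $W_i$ are built from the bisymmetric $V_i$.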
Some basic properties of Algorithm 7 are listed in the
following theorems.
Theorem 10. The solution generated by Algorithm 7 is the bisymmetric minimum Frobenius norm solution of (1).

Theorem 11. The iteration of Algorithm 7 terminates in at most $p_1q_1 + p_2q_2$ steps in the absence of round-off errors.
Proof. By (8) and (11), a simple calculation gives

$$r_i = \begin{pmatrix} \operatorname{vec}(R_{i1}) \\ \operatorname{vec}(R_{i1}^T) \\ \operatorname{vec}(R_{i1}) \\ \operatorname{vec}(R_{i1}^T) \\ \operatorname{vec}(R_{i2}) \\ \operatorname{vec}(R_{i2}^T) \\ \operatorname{vec}(R_{i2}) \\ \operatorname{vec}(R_{i2}^T) \end{pmatrix}, \tag{15}$$

in which $R_{i1} = C_1 - A_1X_iB_1$ and $R_{i2} = C_2 - A_2X_iB_2$, where $X_i$ is the approximate solution obtained by Algorithm 7 after the $i$th iteration.

By Lemma 1, we have

$$\operatorname{vec}(R_{i1}^T) = P(p_1, q_1)\operatorname{vec}(R_{i1}), \qquad \operatorname{vec}(R_{i2}^T) = P(p_2, q_2)\operatorname{vec}(R_{i2}), \tag{16}$$

where $P(p_1, q_1) \in \mathbb{R}^{p_1q_1\times p_1q_1}$ and $P(p_2, q_2) \in \mathbb{R}^{p_2q_2\times p_2q_2}$ are permutation matrices. For simplicity, we denote $P_1 = P(p_1, q_1)$ and $P_2 = P(p_2, q_2)$; then $P_1^TP_1 = I_{p_1q_1}$ and $P_2^TP_2 = I_{p_2q_2}$. Hence

$$r_i^Tr_j = 2\operatorname{vec}(R_{i1})^T\operatorname{vec}(R_{j1}) + 2\operatorname{vec}(R_{i1})^TP_1^TP_1\operatorname{vec}(R_{j1}) + 2\operatorname{vec}(R_{i2})^T\operatorname{vec}(R_{j2}) + 2\operatorname{vec}(R_{i2})^TP_2^TP_2\operatorname{vec}(R_{j2})$$
$$= 4\left(\operatorname{vec}(R_{i1})^T\operatorname{vec}(R_{j1}) + \operatorname{vec}(R_{i2})^T\operatorname{vec}(R_{j2})\right) = 4\begin{pmatrix}\operatorname{vec}(R_{i1}) \\ \operatorname{vec}(R_{i2})\end{pmatrix}^T\begin{pmatrix}\operatorname{vec}(R_{j1}) \\ \operatorname{vec}(R_{j2})\end{pmatrix}. \tag{17}$$

If we let $d_i = \begin{pmatrix}\operatorname{vec}(R_{i1}) \\ \operatorname{vec}(R_{i2})\end{pmatrix} \in \mathbb{R}^{p_1q_1+p_2q_2}$, then we have by (9) that $d_0, d_1, d_2, \ldots$ are orthogonal to each other in $\mathbb{R}^{p_1q_1+p_2q_2}$. By Lemma 2, there exists a positive integer $\hat{l} \le p_1q_1 + p_2q_2$ such that $d_{\hat{l}} = 0$. Hence

$$R_{\hat{l}1} = R_{\hat{l}2} = 0, \tag{18}$$

that is, the iteration of Algorithm 7 terminates in at most $p_1q_1 + p_2q_2$ steps in the absence of round-off errors.
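The two ingredients of this proof (the commutation matrix of Lemma 1 and the inner-product identity (17)) are easy to confirm numerically. A NumPy sketch of our own, with `commutation` as our hypothetical helper name:

```python
import numpy as np

def commutation(m, n):
    """Lemma 1's permutation P(m, n): vec(X^T) = P(m, n) vec(X), with
    column-stacking vec."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0   # entry (j, i) of X^T <- entry (i, j) of X
    return P

vec = lambda X: X.flatten(order="F")        # column-stacking vec operator

rng = np.random.default_rng(3)
p1, q1, p2, q2 = 2, 3, 3, 2
P1, P2 = commutation(p1, q1), commutation(p2, q2)
Ri1, Rj1 = rng.standard_normal((p1, q1)), rng.standard_normal((p1, q1))
Ri2, Rj2 = rng.standard_normal((p2, q2)), rng.standard_normal((p2, q2))
assert np.allclose(vec(Ri1.T), P1 @ vec(Ri1))        # Lemma 1

# r_i stacked as in (15), using (16) for the transposed blocks
stack = lambda R1, R2: np.concatenate(
    [vec(R1), P1 @ vec(R1), vec(R1), P1 @ vec(R1),
     vec(R2), P2 @ vec(R2), vec(R2), P2 @ vec(R2)])
ri, rj = stack(Ri1, Ri2), stack(Rj1, Rj2)
lhs = ri @ rj
rhs = 4.0 * (vec(Ri1) @ vec(Rj1) + vec(Ri2) @ vec(Rj2))
assert np.isclose(lhs, rhs)                          # identity (17)
```

The identity holds because $(P\,a)^T(P\,b) = a^TP^TP\,b = a^Tb$ for any permutation matrix $P$, so each of the four block pairs contributes the same inner product twice.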
3. Numerical Examples
In this section, we use some numerical examples to illustrate the efficiency of our algorithm. The computations are carried out on a PC in MATLAB 7.0; the machine precision is around $10^{-16}$. We stop the iteration when $\|R_{i1}\| + \|R_{i2}\| \le 10^{-12}$.
Example 12. Given matrices $A_1$, $B_1$, $C_1$, $A_2$, $B_2$, and $C_2$ as follows:

$$A_1 = \begin{pmatrix} 1 & -4 & -2 & -1 & -1 & 1 & -3 \\ 3 & 1 & -1 & 3 & 0 & -2 & 1 \\ 4 & -3 & -3 & 2 & 1 & -1 & -2 \\ 2 & 5 & 1 & 4 & 3 & -3 & 4 \\ -1 & 4 & 2 & 1 & 0 & -1 & 3 \\ -3 & -1 & 1 & -3 & -1 & 2 & -1 \end{pmatrix}, \qquad B_1 = \begin{pmatrix} -3 & 2 & 0 & 3 & -2 & 1 \\ 2 & -3 & -1 & -2 & 3 & -4 \\ -1 & 1 & -1 & 1 & -1 & 1 \\ 0 & 1 & -1 & 0 & -1 & 2 \\ 1 & 2 & 0 & -1 & -2 & 5 \\ 3 & -3 & 1 & -3 & 3 & -3 \\ 0 & -1 & -1 & 0 & 1 & -2 \end{pmatrix},$$

$$C_1 = \begin{pmatrix} -19 & 30 & 11 & 19 & -30 & 41 \\ -55 & 47 & -8 & 55 & -47 & 39 \\ -74 & 77 & 3 & 74 & -77 & 80 \\ -36 & 17 & -19 & 36 & -17 & -2 \\ 19 & -30 & -11 & -19 & 30 & -41 \\ 55 & -47 & 8 & -55 & 47 & -39 \end{pmatrix},$$

$$A_2 = \begin{pmatrix} 3 & -2 & -1 & 1 & -4 & 0 & -1 \\ 0 & -3 & 1 & -3 & 2 & 3 & 1 \\ -2 & -4 & 1 & -3 & 0 & 3 & 1 \\ 0 & 3 & -1 & 3 & -2 & -3 & -1 \\ 1 & -6 & 0 & -2 & -4 & 3 & 0 \end{pmatrix}, \qquad B_2 = \begin{pmatrix} 2 & 1 & 3 & -2 \\ -3 & -1 & -4 & 3 \\ 1 & 2 & 3 & -1 \\ 0 & 4 & 4 & 0 \\ -2 & 0 & -2 & 2 \\ 1 & -5 & -4 & -1 \\ -1 & -2 & -3 & 1 \end{pmatrix},$$

$$C_2 = \begin{pmatrix} 33 & 107 & 140 & -33 \\ 17 & -34 & -17 & -17 \\ 27 & -29 & -2 & -17 \\ -17 & 34 & 17 & 17 \\ 60 & 78 & 138 & -60 \end{pmatrix}, \tag{19}$$

then (1) is consistent, for one can easily verify that it has a bisymmetric solution:

$$\hat{X} = \begin{pmatrix} 1 & -1 & 1 & 2 & 1 & -1 & 1 \\ -1 & 3 & 1 & 1 & 1 & 1 & -1 \\ 1 & 1 & 0 & -2 & -1 & 1 & 1 \\ 2 & 1 & -2 & 1 & -2 & 1 & 2 \\ 1 & 1 & -1 & -2 & 0 & 1 & 1 \\ -1 & 1 & 1 & 1 & 1 & 3 & -1 \\ 1 & -1 & 1 & 2 & 1 & -1 & 1 \end{pmatrix}. \tag{20}$$

We choose the initial matrix $X_0 = 0$; then, using Algorithm 7 and iterating 13 steps, we obtain the unique bisymmetric minimum Frobenius norm solution of (1):

$$X_{13} = \begin{pmatrix} 0.4755 & -0.6822 & 0.6274 & 1.4586 & 0.2774 & -1.2112 & -0.1053 \\ -0.6822 & 2.6628 & 0.4046 & 0.0716 & 1.0133 & 0.4001 & -1.2112 \\ 0.6274 & 0.4046 & -1.0215 & -2.2128 & -1.6176 & 1.0133 & 0.2774 \\ 1.4586 & 0.0716 & -2.2128 & -1.1548 & -2.2128 & 0.0716 & 1.4586 \\ 0.2774 & 1.0133 & -1.6176 & -2.2128 & -1.0215 & 0.4046 & 0.6274 \\ -1.2112 & 0.4001 & 1.0133 & 0.0716 & 0.4046 & 2.6628 & -0.6822 \\ -0.1053 & -1.2112 & 0.2774 & 1.4586 & 0.6274 & -0.6822 & 0.4755 \end{pmatrix}, \tag{21}$$

with $\|R_{13,1}\| + \|R_{13,2}\| = 6.2303 \times 10^{-13}$.

Figure 1 illustrates the performance of our algorithm and Cai's algorithm [10].

Figure 1: Convergence curves of $\log_{10}(\|R_{i1}\| + \|R_{i2}\|)$ for Cai's algorithm and Algorithm 7.

From Figure 1, we see that our algorithm is faster than Cai's algorithm.
Example 13. Let

$$A_1 = \mathrm{hilb}(7), \quad B_1 = \mathrm{pascal}(7), \quad A_2 = \mathrm{rand}(7,7), \quad B_2 = \mathrm{rand}(7,7), \tag{22}$$

with hilb, pascal, and rand being functions in MATLAB, and let $C_1 = A_1\hat{X}B_1$ and $C_2 = A_2\hat{X}B_2$, in which $\hat{X}$ is defined in Example 12. Hence (1) is consistent.

We choose the initial matrix $X_0 = 0$. Figure 2 illustrates the performance of our algorithm and Cai's algorithm [10].

Figure 2: Convergence curves of $\log_{10}(\|R_{i1}\| + \|R_{i2}\|)$ for Cai's algorithm and Algorithm 7.

From Figure 2, we see that our algorithm is faster and more stable than Cai's algorithm.
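Example 13's test data can also be reproduced outside MATLAB. The following NumPy sketch of our own builds hilb(7)- and pascal(7)-style matrices directly, verifies that the $\hat{X}$ of Example 12 is bisymmetric, and forms consistent right-hand sides (NumPy's generator is only a stand-in for MATLAB's rand, so the random factors differ from the paper's):

```python
import numpy as np
from math import comb

def hilb(n):
    """MATLAB-style Hilbert matrix: H[i, j] = 1 / (i + j + 1) (0-based)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

def pascal(n):
    """MATLAB-style symmetric Pascal matrix: P[i, j] = C(i + j, i) (0-based)."""
    return np.array([[comb(i + j, i) for j in range(n)] for i in range(n)],
                    dtype=float)

# bisymmetric solution X_hat from Example 12
X_hat = np.array([
    [ 1, -1,  1,  2,  1, -1,  1],
    [-1,  3,  1,  1,  1,  1, -1],
    [ 1,  1,  0, -2, -1,  1,  1],
    [ 2,  1, -2,  1, -2,  1,  2],
    [ 1,  1, -1, -2,  0,  1,  1],
    [-1,  1,  1,  1,  1,  3, -1],
    [ 1, -1,  1,  2,  1, -1,  1]], dtype=float)

S = np.fliplr(np.eye(7))
assert np.allclose(X_hat, X_hat.T)          # symmetric
assert np.allclose(X_hat, S @ X_hat @ S)    # persymmetric, hence bisymmetric

rng = np.random.default_rng(0)              # stand-in for MATLAB's rand
A1, B1 = hilb(7), pascal(7)
A2, B2 = rng.random((7, 7)), rng.random((7, 7))
C1, C2 = A1 @ X_hat @ B1, A2 @ X_hat @ B2   # consistent right-hand sides
```

With these data, (1) is consistent by construction, as in the example.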
Acknowledgments

This research was supported by the Natural Science Foundation of China (nos. 10901056, 11071079, and 11001167), the Shanghai Science and Technology Commission "Venus" Project (no. 11QA1402200), and the Natural Science Foundation of Zhejiang Province (no. Y6110043).
References

[1] X. P. Sheng and G. L. Chen, "A finite iterative method for solving a pair of linear matrix equations $(AXB, CXD) = (E, F)$," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1350–1358, 2007.
[2] Y. X. Yuan, "Least squares solutions of matrix equation $AXB = E$, $CXD = F$," Journal of East China Shipbuilding Institute, vol. 18, no. 3, pp. 29–31, 2004.
[3] A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ with applications," Computers & Mathematics with Applications, vol. 41, no. 7-8, pp. 929–935, 2001.
[4] S. K. Mitra, "A pair of simultaneous linear matrix equations and a matrix programming problem," Linear Algebra and its Applications, vol. 131, pp. 107–123, 1990.
[5] S. K. Mitra, "The matrix equations $AX = C$, $XB = D$," Linear Algebra and its Applications, vol. 59, pp. 171–181, 1984.
[6] P. Bhimasankaram, "Common solutions to the linear matrix equations $AX = C$, $XB = D$ and $FXG = H$," Sankhyā A, vol. 38, no. 4, pp. 404–409, 1976.
[7] C. G. Khatri and S. K. Mitra, "Hermitian and nonnegative definite solutions of linear matrix equations," SIAM Journal on Applied Mathematics, vol. 31, no. 4, pp. 579–585, 1976.
[8] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
[9] Z.-H. Peng, X.-Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation $A_1XB_1 = C_1$," Applied Mathematics and Computation, vol. 181, no. 2, pp. 988–999, 2006.
[10] J. Cai, G. L. Chen, and Q. B. Liu, "An iterative method for the bisymmetric solutions of the consistent matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$," International Journal of Computer Mathematics, vol. 87, no. 12, pp. 2706–2715, 2010.
[11] J. Cai and G. L. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations $A_1XB_1 = C_1$," Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1237–1244, 2009.
[12] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[13] C. C. Paige, "Bidiagonalization of matrices and solutions of the linear equations," SIAM Journal on Numerical Analysis, vol. 11, pp. 197–209, 1974.