Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 247028, 10 pages
http://dx.doi.org/10.1155/2013/247028
Research Article
New Representations of the Group Inverse of
2 × 2 Block Matrices
Xiaoji Liu, Qi Yang, and Hongwei Jin
College of Science, Guangxi University for Nationalities, Nanning 530006, China
Correspondence should be addressed to Xiaoji Liu; xiaojiliu72@126.com
Received 13 March 2013; Accepted 25 June 2013
Academic Editor: Mehmet Sezer
Copyright © 2013 Xiaoji Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a full rank factorization of a 2 × 2 block matrix without any restriction on its blocks. Applying this factorization, we obtain an explicit representation of the group inverse in terms of the four individual blocks of the partitioned matrix, again without restrictions. We also derive some important coincidence theorems, including expressions of the group inverse in Banachiewicz-Schur form.
1. Introduction
Let Cπ‘š×𝑛 denote the set of all π‘š × π‘› complex matrices. We
use 𝑅(𝐴), 𝑁(𝐴), and π‘Ÿ(𝐴) to denote the range, the null space,
and the rank of a matrix 𝐴, respectively. The Moore-Penrose
inverse of a matrix 𝐴 ∈ Cπ‘š×𝑛 is a matrix 𝑋 ∈ C𝑛×π‘š which
satisfies
(1) 𝐴𝑋𝐴 = 𝐴
∗
(3) (𝐴𝑋) = 𝐴𝑋
(2) 𝑋𝐴𝑋 = 𝑋
∗
(4) (𝑋𝐴) = 𝑋𝐴.
(1)
The Moore-Penrose inverse of 𝐴 is unique, and it is denoted
by 𝐴† .
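As a quick illustration (not part of the paper), the four Penrose conditions can be checked numerically with NumPy, whose `pinv` routine computes A†; the test matrix below is a hypothetical rank-deficient example of our own:

```python
import numpy as np

# A 4x3 matrix of rank 2, so A has neither a left nor a right inverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

X = np.linalg.pinv(A)  # Moore-Penrose inverse A†

# The four Penrose conditions (1).
assert np.allclose(A @ X @ A, A)                 # (1) AXA = A
assert np.allclose(X @ A @ X, X)                 # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)      # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)      # (4) (XA)* = XA
```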
Recall that the group inverse of a matrix A ∈ C^{n×n} is the unique matrix X ∈ C^{n×n} satisfying

    AXA = A,    XAX = X,    AX = XA.    (2)

When it exists, the matrix X is called the group inverse of A, and it is denoted by A#.
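For a concrete illustration (a hypothetical example, not from the paper), a group invertible matrix can be built from an eigendecomposition, in which case the group inverse simply inverts the nonzero eigenvalues; the three conditions in (2) can then be verified numerically:

```python
import numpy as np

# A = P diag(3, 2, 0) P^{-1} has index 1, and A# = P diag(1/3, 1/2, 0) P^{-1}.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # invertible (det = 2)
Pinv = np.linalg.inv(P)
A  = P @ np.diag([3.0, 2.0, 0.0]) @ Pinv
Ag = P @ np.diag([1/3, 1/2, 0.0]) @ Pinv   # candidate group inverse A#

# The three defining conditions (2).
assert np.allclose(A @ Ag @ A, A)
assert np.allclose(Ag @ A @ Ag, Ag)
assert np.allclose(A @ Ag, Ag @ A)
```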
Partitioned matrices are very useful in investigating various properties of generalized inverses; hence they are widely used in matrix theory and have many other applications (see [1–4]). There are various useful ways to write a matrix as the product of two or three other matrices that have special properties. For example, linear algebra texts relate Gaussian elimination to the LU factorization and the Gram-Schmidt process to the QR factorization. In this paper, we consider a factorization based on the full rank factorization of a matrix. Our purpose is to provide an integrated theoretical development of, and a setting for understanding, a number of topics in linear algebra, such as the Moore-Penrose inverse and the group inverse.
A full rank factorization of A has the form

    A = F_A G_A,    (3)

where F_A is of full column rank and G_A is of full row rank. Any choice in (3) is acceptable throughout the paper, although this factorization is not unique.
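One admissible choice in (3) can be computed from the SVD; the following NumPy sketch (the SVD-based choice and the random test data are ours, not the paper's) builds such a factorization and checks the rank conditions:

```python
import numpy as np

def full_rank_factorization(M, tol=1e-12):
    """One possible choice of M = F G with F of full column rank and
    G of full row rank, obtained from the SVD (the choice is not unique)."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))       # numerical rank
    F = U[:, :r] * s[:r]           # m x r, full column rank
    G = Vt[:r, :]                  # r x n, full row rank
    return F, G

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank 2
F, G = full_rank_factorization(M)
assert np.allclose(F @ G, M)
assert np.linalg.matrix_rank(F) == F.shape[1]   # full column rank
assert np.linalg.matrix_rank(G) == G.shape[0]   # full row rank
```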
For a complex matrix 𝒜 of the form

    𝒜 = [ A  B ; C  D ] ∈ C^{(m+s)×(n+t)},    (4)

in the case when m = n and A is invertible, the Schur complement of A in 𝒜 is defined by S = D − CA⁻¹B. Sometimes, we denote the Schur complement of A in 𝒜 by (𝒜/A). Similarly, if s = t and D is invertible, then the Schur complement of D in 𝒜 is defined by T = A − BD⁻¹C.

In the case when A is not invertible, the generalized Schur complement of A in 𝒜 is defined by

    S = D − CA†B.    (5)

Similarly, the generalized Schur complement of D in 𝒜 is defined by

    T = A − BD†C.    (6)
The Schur complement and the generalized Schur complement have quite important applications in matrix theory, statistics, numerical analysis, applied mathematics, and so forth.

There is a great deal of work [5–8] on representations of the generalized inverse of 𝒜. Various other generalized inverses have also been studied by many researchers; see, for example, Burns et al. [6], Marsaglia and Styan [8], Benítez and Thome [9], Cvetković-Ilić et al. [10], Miao [11], Chen et al. [12], and the references therein.

The concept of the group inverse has numerous applications in matrix theory, from convergence analysis and Markov chains to generalized inverses and matrix equations. Furthermore, the group inverse of a block matrix has many applications in singular differential equations, Markov chains, iterative methods, and so forth [13–17]. Some results on the group inverse of a 2 × 2 block matrix (operator) can be found in [18–30]. Most works in the literature concerning representations of the group inverses of partitioned matrices were carried out under certain restrictions on their blocks. Very recently, Yan [31] obtained an explicit representation of the Moore-Penrose inverse in terms of the four individual blocks of the partitioned matrix by using the full rank factorization, without any restriction. This motivates us to investigate representations of the group inverse without such restrictions.
In this paper, we aim at a new method for representing the group inverse, motivated by the fact that no representation of 𝒜# or 𝒜^D is known for arbitrary blocks A, B, C, and D. The outline of our paper is as follows. In Section 2, we first present a full rank factorization of 𝒜 using previous results by Marsaglia and Styan [8]. Inspired by this factorization, we extend the analysis to obtain an explicit representation of the group inverse of 𝒜 without any restriction. Furthermore, we discuss various special forms and their consequences, including Banachiewicz-Schur forms and some other extensions as well.
2. Representation of the Group Inverse: General Case

Yan [31] initially considered the representation of the Moore-Penrose inverse of the partitioned matrix by using the full rank factorization technique. The following result is borrowed from [31, Theorem 2.2].

For convenience, we first state some notation which will be used throughout the paper:

    P_α = I − αα⁻,   Q_α = I − α⁻α,   where α⁻ ∈ α{1},    (7)
    S = D − CA†B,   E = P_A B,   W = CQ_A,   R = P_W S Q_E.    (8)

Let A, W, E, R have the full rank factorizations

    A = F_A G_A,   W = F_W G_W,   E = F_E G_E,   R = F_R G_R,    (9)

respectively; then there is a full rank factorization of the block matrix 𝒜:

    𝒜 = FG = [ F_A     0    0    F_E        ]  [ G_A   F_A†B ]
              [ CG_A†   F_W  F_R  P_W S G_E† ]  [ G_W   F_W†S ]
                                                [ 0     G_R   ]
                                                [ 0     G_E   ].    (10)

Now, the Moore-Penrose inverse of 𝒜 can be expressed as 𝒜† = G†F†. In particular, when A is group invertible, let S = D − CA#B; then the full rank factorization of 𝒜 is

    𝒜 = [ F_A      0    0    F_E        ]  [ G_A   G_A A#B ]
        [ CA#F_A   F_W  F_R  P_W S G_E† ]  [ G_W   F_W†S   ]
                                            [ 0     G_R     ]
                                            [ 0     G_E     ].    (11)

This motivates us to obtain some new results concerning the group inverse by using the full rank factorization related to the group inverse.

Recall that if a matrix A ∈ C^{n×n} is group invertible (which is the case exactly when ind(A) = 1), then A# can be expressed in terms of A{1}; that is,

    A# = A(A^(1))³A.    (12)

In particular, we have

    A# = A(A†)³A.    (13)

The following result follows by using [31, Theorem 3.6] and (13).

Theorem 1. Let 𝒜 be defined by (4); then the group inverse of 𝒜 can be expressed as

    𝒜# = [ A  B ; C  D ] [ (V5 + V5V3V2*V1V2 − V4V1V2)V3 [ W†  0 ; 0  E† ] (U3U5 + U2*U1U2U5 − U2*U1U4)
                           + (V4 − V5V3V2*)V1 [ A†  0 ; 0  R† ] U1(U4 − U2U5) ]³ [ A  B ; C  D ],    (14)
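The quantities in (7)–(8) and the full rank factorization (10) can be sanity-checked numerically. The following NumPy sketch (the SVD-based factorization choice and the random test data are ours, not the paper's) assembles F and G blockwise and verifies that FG recovers the block matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

def frf(M, tol=1e-10):
    """Full rank factorization M = F G via the SVD (one admissible choice)."""
    U, sv, Vt = np.linalg.svd(M)
    r = int(np.sum(sv > tol))
    return U[:, :r] * sv[:r], Vt[:r, :]

m, n, s, t = 4, 3, 3, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank-deficient block
B = rng.standard_normal((m, t))
C = rng.standard_normal((s, n))
D = rng.standard_normal((s, t))

Ap = np.linalg.pinv(A)
S = D - C @ Ap @ B                        # generalized Schur complement (5)
E = (np.eye(m) - A @ Ap) @ B              # E = P_A B
W = C @ (np.eye(n) - Ap @ A)              # W = C Q_A
PW = np.eye(s) - W @ np.linalg.pinv(W)    # P_W
QE = np.eye(t) - np.linalg.pinv(E) @ E    # Q_E
R = PW @ S @ QE                           # R = P_W S Q_E

FA, GA = frf(A); FW, GW = frf(W); FE, GE = frf(E); FR, GR = frf(R)

# Assemble F and G as in (10); pinv plays the roles of F_A†, G_A†, etc.
F = np.block([
    [FA, np.zeros((m, FW.shape[1])), np.zeros((m, FR.shape[1])), FE],
    [C @ np.linalg.pinv(GA), FW, FR, PW @ S @ np.linalg.pinv(GE)],
])
G = np.block([
    [GA, np.linalg.pinv(FA) @ B],
    [GW, np.linalg.pinv(FW) @ S],
    [np.zeros((GR.shape[0], n)), GR],
    [np.zeros((GE.shape[0], n)), GE],
])
M = np.block([[A, B], [C, D]])
assert np.allclose(F @ G, M)                   # F G recovers the block matrix
assert np.linalg.matrix_rank(F) == F.shape[1]  # F has full column rank
assert np.linalg.matrix_rank(G) == G.shape[0]  # G has full row rank
```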
where

    F1 = [ F_A†  0 ; 0  F_R† ],   G1 = [ G_A†  0 ; 0  G_R† ],
    F2 = [ F_W†  0 ; 0  F_E† ],   G2 = [ G_W†  0 ; 0  G_E† ],

    U1 = [ X3⁻¹   −X3⁻¹HP_W X2⁻¹X4 ;
           −X4*X2⁻¹P_W H*X3⁻¹   X4 + X4*X2⁻¹P_W H*X3⁻¹HP_W X2⁻¹X4 ],
    U2 = [ H  HP_W H1*X1⁻¹ ; I  P_W H1*X1⁻¹ ],
    U3 = [ I  0 ; 0  X1⁻¹ ],   U4 = [ I  H ; 0  I ],   U5 = [ 0  WW† ; EE†  H1P_W ],

    V1 = [ Y1⁻¹  −Y1⁻¹K1 ; −K1*Y1⁻¹  I + K1*Y1⁻¹K1 ],
    V2 = [ W†W  0 ; K1*  E†E ],
    V3 = [ Y3⁻¹   −Y3⁻¹KQ_E Y2⁻¹Y4 ;
           −Y4Y2⁻¹Q_E K*Y3⁻¹   Y4 + Y4Y2⁻¹Q_E K*Y3⁻¹KQ_E Y2⁻¹Y4 ],
    V4 = [ I  0 ; K*  I ],   V5 = [ KK1*  K ; K1*  0 ],    (15)

with

    H = A†*C*,   K = A†B,   H1 = E†*S*,   K1 = W†S,
    X1 = I + H1P_W H1*,   X2 = I + P_W H1*H1P_W,
    X3 = I + HP_W (X2⁻¹ − X2⁻¹X4X2⁻¹) P_W H*,   X4 = (RR†X2⁻¹RR†)†,
    Y1 = I + K1Q_E K1*,   Y2 = I + Q_E K1*K1Q_E,
    Y3 = I + KQ_E (Y2⁻¹ − Y2⁻¹Y4Y2⁻¹) Q_E K*,   Y4 = (R†RY2⁻¹R†R)†.    (16)

The two representations of F† and G† (which can be found in [31, Theorem 3.1]),

    F† = [ F1U1U4 − F1U1U2U5 ; −F2U2*U1U4 + F2(U3 + U2*U1U2)U5 ],    (18)
    G† = [ V4V1G1* − V5V3V2*V1G1* ,  −V4V1V2V3G2* + V5(V3 + V3V2*V1V2V3)G2* ],    (19)

will be helpful in the proofs of the following results.

If the (1,1)-block A of 𝒜 is group invertible, we immediately have Theorem 2 by using the full rank factorization (11).

Theorem 2. Let 𝒜 be defined by (4). Suppose that A is group invertible; then the group inverse of 𝒜 can be expressed as

    𝒜# = [ A  B ; C  D ] [ (V5 + V5V3V2*V1V2 − V4V1V2)V3 [ W†  0 ; 0  E† ] (U3U5 + U2*U1U2U5 − U2*U1U4)
                           + (V4 − V5V3V2*)V1 [ A†  0 ; 0  R† ] U1(U4 − U2U5) ]³ [ A  B ; C  D ],    (17)

where H = A#*C*, K = A#B, and H1, K1, U1, ..., U5, V1, ..., V5, X1, ..., X4, Y1, ..., Y4 are the same as those in Theorem 1.

Theorem 3. Let 𝒜 be defined by (4); then the following statements are true.

(a) If E is of full column rank and W is of full row rank, then

    𝒜# = [ A  B ; C  D ] ( [ A†  0 ; 0  0 ] + [ I  −K − K1 ; 0  I ] [ W†  0 ; 0  E† ] [ −H*  I ; I  0 ] )³ [ A  B ; C  D ].    (20)

(b) If E = 0 and W = 0, then

    𝒜# = [ A  B ; C  D ] ( [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ] )³ [ A  B ; C  D ],    (21)

where X̃ = (I + HP_S H*)⁻¹ and Ỹ = (I + KQ_S K*)⁻¹.

Proof. (a) If E is of full column rank, then Q_E = 0, and hence R = 0, X1 = I, X2 = I, X3 = I, and X4 = 0. Thus, V1, ..., V5 defined in Theorem 1 can be simplified to

    V1 = [ I  −K1 ; −K1*  I + K1*K1 ],   V2 = [ W†W  0 ; K1*  I ],   V3 = [ I  0 ; 0  0 ],
    V4 = [ I  0 ; K*  I ],   V5 = [ KK1*  K ; K1*  0 ],    (22)
which imply

    V4V1 = [ I  0 ; K*  0 ],   V5V3 = [ 0  K ; 0  0 ],
    V1V2V3 = [ 0  K ; K1*  −K1*K ],
    V5V3V2*V1 = [ 0  0 ; K*  0 ],   V4V1V2V3 = [ 0  K ; 0  K*K ],
    V5V3V2*V1V2V3 = [ 0  0 ; 0  K*K ].    (23)

So, (19) is reduced to

    G† = [ I  −K − K1 ; 0  I ] [ G_A†  0  0 ; 0  G_W†  0 ; 0  0  G_E† ].    (24)

When W is of full row rank, one gets P_W = 0, which implies R = 0, X1 = I, X2 = I, X3 = I, and X4 = 0. Thus,

    U1 = [ I  0 ; 0  I ],   U2 = [ H  0 ; I  0 ],   U3 = [ I  0 ; 0  I ],
    U4 = [ I  H ; 0  I ],   U5 = [ 0  I ; EE†  0 ].    (25)

By short computations, one gets

    U1U4 = [ I  H ; 0  I ],   U1U2U5 = [ 0  H ; 0  I ],   U3U5 = [ 0  I ; EE†  0 ],
    U2*U1U4 = [ H*  I + H*H ; 0  0 ],   U2*U1U2U5 = [ 0  I + H*H ; 0  0 ].    (26)

Now, F† possesses the following form according to (18):

    F† = [ F_A†  0  0 ; 0  F_W†  0 ; 0  0  F_E† ] [ I  0 ; −H*  I ; I  0 ].    (27)

Hence,

    𝒜† = G†F† = [ A†  0 ; 0  0 ] + [ I  −K − K1 ; 0  I ] [ W†  0 ; 0  E† ] [ −H*  I ; I  0 ],    (28)

and one gets the expression (20) of 𝒜# by using (13).

(b) If E = 0, then H1 = 0, X1 = I, and X2 = I, so that X3 = I + HP_W P_R P_W H* and X4 = RR†. Letting X = X3⁻¹, we have

    U1 = [ X  −XHP_W RR† ; −RR†P_W H*X  RR† + RR†P_W H*XHP_W RR† ].    (29)

Simple computations show that

    U1U4 = [ X  XH(I − P_W RR†) ; −RR†P_W H*X  RR† − RR†P_W H*XH(I − P_W RR†) ],
    U1U2U5 = [ 0  XH(I − P_W RR†)WW† ; 0  RR†WW† − RR†P_W H*XH(I − P_W RR†)WW† ],    (30)

    U2*U1U4 = [ (I − RR†P_W)H*X   RR† + (I − RR†P_W)H*XH(I − P_W RR†) ; 0  0 ],
    U2*U1U2U5 = [ 0   RR†WW† + (I − RR†P_W)H*XH(I − P_W RR†)WW† ; 0  0 ].    (31)

Hence,

    F† = [ F_A†  0  0 ; 0  F_R†  0 ; 0  0  F_W† ]
         [ X   XH(I − P_W RR†)P_W ;
           (I − RR†P_W)H*X   I − RR†P_W ;
           −P_W H*X   P_W − P_W H*XH(I − P_W RR†)P_W ].    (32)

If W = 0, then K1 = 0, Y1 = I, and Y2 = I, so that Y3 = I + KQ_E Q_R Q_E K* and Y4 = R†R. Letting Y = Y3⁻¹, we have

    V1 = [ I  0 ; 0  I ],   V2 = [ 0  0 ; 0  E†E ],
    V3 = [ Y  −YKQ_E R†R ; −R†RQ_E K*Y  R†R + R†RQ_E K*YKQ_E R†R ],
    V4 = [ I  0 ; K*  I ],   V5 = [ 0  K ; 0  0 ],    (33)
which imply

    V4V1V3 = [ Y  −YKQ_E R†R ; K*Y − R†RQ_E K*Y   R†R − (I − R†RQ_E)K*YKQ_E R†R ],
    V1V2V3 = [ 0  0 ; E†EK*Y  −E†EK*YKQ_E R†R ],
    V5V3V2*V1 = [ 0  YK ; 0  −R†RQ_E K*YK ],
    V4V1V2V3 = [ 0  YK ; 0  K*YK − R†RQ_E K*YK ],
    V5V3V2*V1V2V3 = [ 0  0 ; 0  K*YK − R†RQ_E K*YK ].    (34)

So, (19) is reduced to

    G† = [ Y  −YKQ_E ; Q_R Q_E K*Y  I − Q_R Q_E K*YK ] [ G_A†  0  0 ; 0  G_R†  0 ; 0  0  G_E† ].    (35)

Then, since E = 0 and W = 0 give R = S,

    𝒜† = G†F† = [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ],    (36)

where X̃ = (I + HP_S H*)⁻¹ and Ỹ = (I + KQ_S K*)⁻¹. Therefore, by using (13), we have

    𝒜# = [ A  B ; C  D ] ( [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ] )³ [ A  B ; C  D ],    (37)

which is (21). This completes the proof.

Theorem 4. Let 𝒜 be defined by (4); then the following statements are true.

(a) If E = 0, W = 0, and R(C) ⊂ R(S), then

    𝒜# = [ A  B ; C  D ] ( [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ I  0 ; −H*  I ] )³ [ A  B ; C  D ],    (38)

where Ỹ = (I + KQ_S K*)⁻¹.

(b) If E = 0, W = 0, and R(B*) ⊂ R(S*), then

    𝒜# = [ A  B ; C  D ] ( [ I  −K ; 0  I ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ] )³ [ A  B ; C  D ],    (39)

where X̃ = (I + HP_S H*)⁻¹.

(c) If E = 0, W = 0, and R(B) ⊂ R(A), R(C) ⊂ R(S), R(B*) ⊂ R(S*), R(C*) ⊂ R(A*), then

    𝒜# = [ A(A†)³A + AA†KS†H*A†A   −AA†K(S†)²S ; −S(S†)²H*A†A   S(S†)³S ].    (40)

Proof. (a) Since E = 0 and W = 0, by Theorem 3(b), one gets

    𝒜† = [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ].    (41)

Since R(C) ⊂ R(S), that is, P_S C = 0, we have X̃ = I, and the equality above is simplified to

    𝒜† = [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ I  0 ; −H*  I ].    (42)

By using 𝒜# = 𝒜(𝒜†)³𝒜, we have

    𝒜# = [ A  B ; C  D ] ( [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ I  0 ; −H*  I ] )³ [ A  B ; C  D ].    (43)

(b) Similarly to the proof of (a).

(c) Since E = 0 and W = 0, by Theorem 3(b), one gets

    𝒜† = [ Ỹ  −ỸK ; Q_S K*Ỹ  I − Q_S K*ỸK ] [ A†  0 ; 0  S† ] [ X̃  X̃HP_S ; −H*X̃  I − H*X̃HP_S ].    (44)
Since R(B) ⊂ R(A), R(C) ⊂ R(S), R(B*) ⊂ R(S*), and R(C*) ⊂ R(A*), that is, P_A B = 0, CQ_A = 0, P_S C = 0, and BQ_S = 0, the previous equality is simplified to

    𝒜† = [ I  −K ; 0  I ] [ A†  0 ; 0  S† ] [ I  0 ; −H*  I ]
       = [ A† + KS†H*  −KS† ; −S†H*  S† ].    (45)

Moreover,

    𝒜𝒜† = [ A  B ; C  D ] [ A† + KS†H*  −KS† ; −S†H*  S† ]
         = [ AA† + AKS†H* − BS†H*   −AKS† + BS† ; CA† + CKS†H* − DS†H*   −CKS† + DS† ]
         = [ AA†  0 ; 0  SS† ],

    𝒜†𝒜 = [ A† + KS†H*  −KS† ; −S†H*  S† ] [ A  B ; C  D ]
         = [ A†A + KS†H*A − KS†C   K + KS†H*B − KS†D ; −S†H*A + S†C   −S†H*B + S†D ]
         = [ A†A  0 ; 0  S†S ].    (46)

Therefore,

    𝒜# = 𝒜(𝒜†)³𝒜 = [ AA†  0 ; 0  SS† ] [ A† + KS†H*  −KS† ; −S†H*  S† ] [ A†A  0 ; 0  S†S ]
       = [ A(A†)³A + AA†KS†H*A†A   −AA†K(S†)²S ; −S(S†)²H*A†A   S(S†)³S ].    (47)

Theorem 5. Let 𝒜 be defined by (4), and let S = D − CA#B be the generalized Schur complement of A in 𝒜; then the following statements are true.

(a) If A and S are group invertible, P_A B = 0, CQ_A = 0, and P_S C = 0, then

    𝒜# = [ A# + A#BS#CA#   A#(I + BS#CA#)A#BP_S − A#BS# ; −S#CA#   S#(I − C(A#)²BP_S) ].    (48)

(b) If A and S are group invertible, P_A B = 0, CQ_A = 0, and BP_S = 0, then

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; P_S CA#(I + A#BS#C)A# − S#CA#   (I − P_S C(A#)²B)S# ].    (49)

(c) Let A and S be group invertible; then P_A B = 0, CQ_A = 0, P_S C = 0, and BP_S = 0 hold if and only if

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; −S#CA#   S# ].    (50)

Proof. (a) If P_A B = 0 and CQ_A = 0, then E, W, R defined in (8) can be simplified to E = 0, W = 0, and R = S, and then there is a full rank factorization

    𝒜 = [ A  B ; C  D ] = FG = [ F_A  0 ; CA#F_A  F_S ] [ G_A  G_A A#B ; 0  G_S ]    (51)

according to (11). Thus,

    GF = [ G_A F_A + G_A KHF_A   G_A KF_S ; G_S HF_A   G_S F_S ],    (52)

where H = CA# and K = A#B. Denote by S′ the Schur complement of G_S F_S in the partitioned matrix GF. Then

    S′ = G_A F_A + G_A KHF_A − G_A KF_S (G_S F_S)⁻¹ G_S HF_A
       = G_A F_A + G_A KHF_A − G_A KSS#HF_A
       = G_A F_A + G_A KP_S HF_A
       = G_A F_A + G_A KP_S CA#F_A
       = G_A F_A.    (53)

Applying the Banachiewicz-Schur formula, we have

    (GF)⁻¹ = [ (G_A F_A)⁻¹   −(G_A F_A)⁻¹ G_A KF_S (G_S F_S)⁻¹ ;
               −(G_S F_S)⁻¹ G_S HF_A (G_A F_A)⁻¹   (G_S F_S)⁻¹ (I + G_S HF_A (G_A F_A)⁻¹ G_A KF_S (G_S F_S)⁻¹) ]
           = [ (G_A F_A)⁻¹   −G_A A#KS#F_S ; −G_S S#HA#F_A   (G_S F_S)⁻¹ + G_S S#HKS#F_S ].    (54)
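A simple special case of Theorem 5(c) can be checked numerically: when A and S are both invertible, P_A, Q_A, P_S all vanish, A# = A⁻¹, S# = S⁻¹, and (50) reduces to the classical Banachiewicz-Schur block inversion formula. The sketch below uses hypothetical data, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# Diagonal shifts keep A and S comfortably invertible.
A = rng.standard_normal((n, n)) + 4*np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 4*np.eye(n)

Ai = np.linalg.inv(A)
S  = D - C @ Ai @ B          # Schur complement of A
Si = np.linalg.inv(S)

M = np.block([[A, B], [C, D]])
# Formula (50) with A# = A^{-1} and S# = S^{-1}.
Mg = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
               [-Si @ C @ Ai,               Si]])
assert np.allclose(Mg, np.linalg.inv(M))   # coincides with the ordinary inverse
```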
Simple computations give

    F(GF)⁻¹ = [ F_A  0 ; HF_A  F_S ] [ (G_A F_A)⁻¹   −G_A A#KS#F_S ; −G_S S#HA#F_A   (G_S F_S)⁻¹ + G_S S#HKS#F_S ]
            = [ A#F_A  −KS#F_S ; 0  S#F_S ],

    (GF)⁻¹G = [ (G_A F_A)⁻¹   −G_A A#KS#F_S ; −G_S S#HA#F_A   (G_S F_S)⁻¹ + G_S S#HKS#F_S ] [ G_A  G_A K ; 0  G_S ]
            = [ G_A A#   G_A A#KP_S ; −G_S S#H   G_S S# − G_S S#HKP_S ].    (55)

Then,

    𝒜# = F(GF)⁻²G = [ A#F_A  −KS#F_S ; 0  S#F_S ] [ G_A A#   G_A A#KP_S ; −G_S S#H   G_S S# − G_S S#HKP_S ]
       = [ A# + A#BS#CA#   A#(I + BS#CA#)A#BP_S − A#BS# ; −S#CA#   S#(I − C(A#)²BP_S) ].    (56)

(b) Since P_A B = 0 and CQ_A = 0, similarly to (a), there is a full rank factorization of 𝒜 such that

    𝒜 = FG = [ F_A  0 ; CA#F_A  F_S ] [ G_A  G_A A#B ; 0  G_S ],    (57)
    GF = [ G_A F_A + G_A KHF_A   G_A KF_S ; G_S HF_A   G_S F_S ].    (58)

By using BP_S = 0, one gets the Schur complement of G_S F_S in GF:

    S′ = G_A F_A + G_A KP_S HF_A = G_A F_A + G_A A#BP_S HF_A = G_A F_A.    (59)

Hence,

    (GF)⁻¹ = [ (G_A F_A)⁻¹   −G_A A#KS#F_S ; −G_S S#HA#F_A   (G_S F_S)⁻¹ + G_S S#HKS#F_S ].    (60)

Short computations show that

    F(GF)⁻¹ = [ A#F_A  −KS#F_S ; P_S HA#F_A  −P_S HKS#F_S + S#F_S ],
    (GF)⁻¹G = [ G_A A#  0 ; −G_S S#H  G_S S# ].    (61)

Therefore,

    𝒜# = F(GF)⁻²G = [ A#F_A  −KS#F_S ; P_S HA#F_A  −P_S HKS#F_S + S#F_S ] [ G_A A#  0 ; −G_S S#H  G_S S# ]
       = [ A# + A#BS#CA#   −A#BS# ; P_S CA#(I + A#BS#C)A# − S#CA#   (I − P_S C(A#)²B)S# ].    (62)

(c) (⇒:) Since P_A B = 0, CQ_A = 0, P_S C = 0, and BP_S = 0, according to the proofs of (a) and (b), we have

    F(GF)⁻¹ = [ A#F_A  −KS#F_S ; 0  S#F_S ].    (63)

We also have

    (GF)⁻¹G = [ G_A A#  0 ; −G_S S#H  G_S S# ].

Hence,

    𝒜# = F(GF)⁻¹(GF)⁻¹G = [ A# + A#BS#CA#   −A#BS# ; −S#CA#   S# ].    (64)

(⇐:) By [9, Theorem 2].
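The identity 𝒜# = F(GF)⁻²G used throughout this proof is the classical full-rank-factorization formula for the group inverse of an index-1 matrix; it can be checked numerically on a small example (the matrix below is a hypothetical construction of our own):

```python
import numpy as np

rng = np.random.default_rng(5)
# Index-1 singular matrix: A = P diag(2, -1, 0) P^{-1}.
P = rng.standard_normal((3, 3)) + 3*np.eye(3)   # generically invertible
A = P @ np.diag([2.0, -1.0, 0.0]) @ np.linalg.inv(P)

# Full rank factorization A = F G from the SVD.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
F, G = U[:, :r] * s[:r], Vt[:r, :]

GF  = G @ F                       # r x r, invertible exactly when ind(A) = 1
GFi = np.linalg.inv(GF)
Ag  = F @ GFi @ GFi @ G           # A# = F (GF)^{-2} G

# The group inverse just inverts the nonzero eigenvalues.
A_expected = P @ np.diag([0.5, -1.0, 0.0]) @ np.linalg.inv(P)
assert np.allclose(Ag, A_expected)
assert np.allclose(A @ Ag, Ag @ A)
```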
Analogous to Theorem 5, if we define T = A − BD#C, the generalized Schur complement of D in 𝒜, one can obtain the following results.

Theorem 6. Let 𝒜 be defined by (4), and let T = A − BD#C be the generalized Schur complement of D in 𝒜; then the following statements are true.

(a) If D and T are group invertible, P_D C = 0, BQ_D = 0, and P_T B = 0, then

    𝒜# = [ T#(I − B(D#)²CQ_T)   −T#BD# ; −D#CT# + D#(I + CT#BD#)D#CQ_T   D# + D#CT#BD# ].    (65)

(b) If D and T are group invertible, P_D C = 0, BQ_D = 0, and CQ_T = 0, then

    𝒜# = [ (I − P_T B(D#)²C)T#   −T#BD# + P_T BD#(I + D#CT#B)D# ; −D#CT#   D# + D#CT#BD# ].    (66)

(c) Let D and T be group invertible; then P_D C = 0, BQ_D = 0, CQ_T = 0, and P_T B = 0 hold if and only if

    𝒜# = [ T#   −T#BD# ; −D#CT#   D# + D#CT#BD# ].    (67)

Proof. The proof is similar to the proof of Theorem 5.

Combining Theorems 5 and 6, we have the following results.

Theorem 7. Let 𝒜 be defined by (4), and let S = D − CA#B and T = A − BD#C be the generalized Schur complements of A and D in 𝒜, respectively. Then the following statements are true.

(a) If A, S, D, T are group invertible, P_A B = 0, CQ_A = 0, P_S C = 0, P_D C = 0, BQ_D = 0, and P_T B = 0, then

    𝒜# = [ A# + A#BS#CA#   A#(I + BS#CA#)A#BP_S − A#BS# ; −S#CA#   S#(I − C(A#)²BP_S) ]
       = [ T#(I − B(D#)²CQ_T)   −T#BD# ; −D#CT# + D#(I + CT#BD#)D#CQ_T   D# + D#CT#BD# ].    (68)

(b) If A, S, D, T are group invertible, P_A B = 0, CQ_A = 0, P_S C = 0, P_D C = 0, BQ_D = 0, and CQ_T = 0, then

    𝒜# = [ A# + A#BS#CA#   A#(I + BS#CA#)A#BP_S − A#BS# ; −S#CA#   S#(I − C(A#)²BP_S) ]
       = [ (I − P_T B(D#)²C)T#   −T#BD# + P_T BD#(I + D#CT#B)D# ; −D#CT#   D# + D#CT#BD# ].    (69)

(c) If A, S, D, T are group invertible, P_A B = 0, CQ_A = 0, BP_S = 0, P_D C = 0, BQ_D = 0, and P_T B = 0, then

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; P_S CA#(I + A#BS#C)A# − S#CA#   (I − P_S C(A#)²B)S# ]
       = [ T#(I − B(D#)²CQ_T)   −T#BD# ; −D#CT# + D#(I + CT#BD#)D#CQ_T   D# + D#CT#BD# ].    (70)

(d) If A, S, D, T are group invertible, P_A B = 0, CQ_A = 0, BP_S = 0, P_D C = 0, BQ_D = 0, and CQ_T = 0, then

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; P_S CA#(I + A#BS#C)A# − S#CA#   (I − P_S C(A#)²B)S# ]
       = [ (I − P_T B(D#)²C)T#   −T#BD# + P_T BD#(I + D#CT#B)D# ; −D#CT#   D# + D#CT#BD# ].    (71)

Theorem 8. Let 𝒜 be defined by (4), and let S = D − CA#B and T = A − BD#C be the generalized Schur complements of A and D in 𝒜, respectively. Then

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; −S#CA#   S# ]
       = [ T#   −T#BD# ; −D#CT#   D# + D#CT#BD# ]    (72)

if and only if one of the following sets of conditions holds:

(a) P_A B = 0, CQ_A = 0, P_S C = 0, BQ_S = 0, P_D C = 0, BQ_D = 0;    (73)

(b) P_A B = 0, CQ_A = 0, P_D C = 0, BQ_D = 0, CQ_T = 0, P_T B = 0.    (74)

Proof. (a) Using Theorem 5(c) and Theorem 6(c), we conclude that

    P_A B = 0, CQ_A = 0, P_S C = 0, BQ_S = 0, P_D C = 0, BQ_D = 0, CQ_T = 0, P_T B = 0    (75)

if and only if

    𝒜# = [ A# + A#BS#CA#   −A#BS# ; −S#CA#   S# ]
       = [ T#   −T#BD# ; −D#CT#   D# + D#CT#BD# ].    (76)
Now, we only need to prove that (73) is equivalent to (75). Denote T′ = A# + A#BS#CA#. Then

    TT′ = (A − BD#C)(A# + A#BS#CA#)
        = AA# + AA#BS#CA# − BD#CA# − BD#CA#BS#CA#
        = AA# + BS#CA# − BD#CA# − BD#(D − S)S#CA#
        = AA#,

    T′T = (A# + A#BS#CA#)(A − BD#C)
        = A#A − A#BD#C + A#BS#CA#A − A#BS#CA#BD#C
        = A#A − A#BD#C + A#BS#C − A#BS#(D − S)D#C
        = A#A.    (77)

Moreover, we have

    TT′T = AA#(A − BD#C) = A − BD#C = T,
    T′TT′ = A#A(A# + A#BS#CA#) = A# + A#BS#CA# = T′.    (78)

Thus, T′ = T#. Hence, T#T = A#A and TT# = AA#. Now, we get P_A B = P_T B = 0 and CQ_A = CQ_T = 0, which means that (73) implies (75). Obviously, (75) implies (73). So (73) is equivalent to (75).

(b) The proof is similar to (a).

3. Applications to the Solution of a Linear System

In this section, we give an application of the previous results to the solution of a linear system. Using the generalized Schur complement, we can split a large system into two small linear systems by the following steps.

Let

    𝒜x = y    (79)

be a linear system. Applying block Gaussian elimination to the system, we have

    [ A  B ; 0  D − CA†B ] [ x1 ; x2 ] = [ y1 ; y2 − CA†y1 ].    (80)

Hence, we get

    Ax1 + Bx2 = y1,   Sx2 = y2 − CA†y1.    (81)

That is,

    Ax1 = y1 − Bx2,   Sx2 = y2 − CA†y1.    (82)

Now, the solution of system (79) can be obtained from the two small linear systems above, and the computation can be significantly simplified. We also notice that the Moore-Penrose inverse of A can be replaced by other generalized inverses, such as the group inverse, the Drazin inverse, or even the ordinary inverse A⁻¹.

In the following, we give the group inverse solution of the linear system.

Theorem 9. Let

    𝒜x = y    (83)

be a linear system. Suppose that 𝒜 satisfies all the conditions of Theorem 5(c), and partition x and y as

    x = [ x1 ; x2 ],   y = [ y1 ; y2 ],    (84)

with sizes conforming to 𝒜. If y ∈ R(𝒜), then the solution x = 𝒜#y of linear system (83) can be expressed as

    x1 = A#(y1 − Bx2),   x2 = S#(y2 − CA#y1),    (85)

where S = D − CA#B.

Proof. Since y ∈ R(𝒜), we conclude that x = 𝒜#y is a solution of linear system (83). By Theorem 5(c), we get the following:

    x = 𝒜#y = [ A# + A#BS#CA#   −A#BS# ; −S#CA#   S# ] [ y1 ; y2 ]
      = [ A#y1 + A#BS#CA#y1 − A#BS#y2 ; S#(y2 − CA#y1) ] = [ x1 ; x2 ].    (86)

Now, it is easy to see that the solution x = 𝒜#y can be expressed as

    x1 = A#(y1 − Bx2),   x2 = S#(y2 − CA#y1),    (87)

which are also the group inverse solutions of the two small linear systems of (82), respectively.
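Theorem 9 in effect replaces one large system by two smaller ones. In the special case where A and S are invertible (so that A# = A⁻¹ and S# = S⁻¹ and the hypotheses of Theorem 5(c) hold), the splitting (85) can be compared against a direct solve; the data below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
A = rng.standard_normal((n, n)) + 4*np.eye(n)   # invertible blocks
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 4*np.eye(n)
y1 = rng.standard_normal(n)
y2 = rng.standard_normal(n)

Ag = np.linalg.inv(A)        # A# (= A^{-1} in this special case)
S  = D - C @ Ag @ B          # Schur complement of A
Sg = np.linalg.inv(S)        # S#

# Two small systems (85) instead of one large one.
x2 = Sg @ (y2 - C @ Ag @ y1)
x1 = Ag @ (y1 - B @ x2)

M = np.block([[A, B], [C, D]])
x = np.linalg.solve(M, np.concatenate([y1, y2]))
assert np.allclose(np.concatenate([x1, x2]), x)
```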
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11061005) and by the Open Fund (HCIC201103) of the Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis.
References
[1] F. J. Hall, “Generalized inverses of a bordered matrix of operators,” SIAM Journal on Applied Mathematics, vol. 29, pp. 152–163,
1975.
[2] F. J. Hall, “The Moore-Penrose inverse of particular bordered
matrices,” Australian Mathematical Society Journal A, vol. 27, no.
4, pp. 467–478, 1979.
[3] F. J. Hall and R. E. Hartwig, “Further results on generalized
inverses of partitioned matrices,” SIAM Journal on Applied
Mathematics, vol. 30, no. 4, pp. 617–624, 1976.
[4] S. K. Mitra, “Properties of the fundamental bordered matrix
used in linear estimation,” in Statistics and Probability, Essays
in Honor of C.R. Rao, G. Kallianpur, Ed., pp. 504–509, North
Holland, New York, NY, USA, 1982.
[5] J. K. Baksalary and G. P. H. Styan, “Generalized inverses
of partitioned matrices in Banachiewicz-Schur form,” Linear
Algebra and its Applications, vol. 354, pp. 41–47, 2002.
[6] F. Burns, D. Carlson, E. Haynsworth, and T. Markham, “Generalized inverse formulas using the Schur complement,” SIAM
Journal on Applied Mathematics, vol. 26, pp. 254–259, 1974.
[7] D. S. Cvetković-Ilić, “A note on the representation for the
Drazin inverse of 2 × 2 block matrices,” Linear Algebra and its
Applications, vol. 429, no. 1, pp. 242–248, 2008.
[8] G. Marsaglia and G. P. H. Styan, “Rank conditions for generalized inverses of partitioned matrices,” SankhyaΜ„ A, vol. 36, no. 4,
pp. 437–442, 1974.
[9] J. Benítez and N. Thome, “The generalized Schur complement
in group inverses and (π‘˜ + 1)-potent matrices,” Linear and
Multilinear Algebra, vol. 54, no. 6, pp. 405–413, 2006.
[10] D. S. Cvetković-Ilić, J. Chen, and Z. Xu, “Explicit representations of the Drazin inverse of block matrix and modified
matrix,” Linear and Multilinear Algebra, vol. 57, no. 4, pp. 355–
364, 2009.
[11] J. M. Miao, “General expressions for the Moore-Penrose inverse
of a 2 × 2 block matrix,” Linear Algebra and its Applications, vol.
151, pp. 1–15, 1991.
[12] J. Chen, Z. Xu, and Y. Wei, “Representations for the Drazin
inverse of the sum 𝑃 + 𝑄 + 𝑅 + 𝑆 and its applications,” Linear
Algebra and its Applications, vol. 430, no. 1, pp. 438–454, 2009.
[13] S. L. Campbell, C. D. Meyer, Jr., and N. J. Rose, “Applications
of the Drazin inverse to linear systems of differential equations
with singular constant coefficients,” SIAM Journal on Applied
Mathematics, vol. 31, no. 3, pp. 411–425, 1976.
[14] R. Hartwig, X. Li, and Y. Wei, “Representations for the Drazin
inverse of a 2×2 block matrix,” SIAM Journal on Matrix Analysis
and Applications, vol. 27, no. 3, pp. 757–771, 2005.
[15] A. da Silva Soares and G. Latouche, “The group inverse of finite
homogeneous QBD processes,” Stochastic Models, vol. 18, no. 1,
pp. 159–171, 2002.
[16] Y. Wei and H. Diao, “On group inverse of singular Toeplitz
matrices,” Linear Algebra and its Applications, vol. 399, pp. 109–
123, 2005.
[17] Y. Wei, “On the perturbation of the group inverse and oblique
projection,” Applied Mathematics and Computation, vol. 98, no.
1, pp. 29–42, 1999.
[18] J. Benítez, X. Liu, and T. Zhu, “Additive results for the group
inverse in an algebra with applications to block operators,”
Linear and Multilinear Algebra, vol. 59, no. 3, pp. 279–289, 2011.
[19] C. Bu, M. Li, K. Zhang, and L. Zheng, “Group inverse for the
block matrices with an invertible subblock,” Applied Mathematics and Computation, vol. 215, no. 1, pp. 132–139, 2009.
[20] C. Bu, K. Zhang, and J. Zhao, “Some results on the group inverse
of the block matrix with a sub-block of linear combination or
product combination of matrices over skew fields,” Linear and
Multilinear Algebra, vol. 58, no. 7-8, pp. 957–966, 2010.
[21] C. Bu, J. Zhao, and K. Zhang, “Some results on group inverses
of block matrices over skew fields,” Electronic Journal of Linear
Algebra, vol. 18, pp. 117–125, 2009.
[22] C. Bu, J. Zhao, and J. Zheng, “Group inverse for a class 2 × 2
block matrices over skew fields,” Applied Mathematics and Computation, vol. 204, no. 1, pp. 45–49, 2008.
[23] C. G. Cao, “Some results of group inverses for partitioned matrices over skew fields,” Journal of Natural Science of Heilongjiang
University, vol. 18, no. 3, pp. 5–7, 2001 (Chinese).
[24] X. Chen and R. E. Hartwig, “The group inverse of a triangular
matrix,” Linear Algebra and its Applications, vol. 237/238, pp. 97–
108, 1996.
[25] C. Cao and J. Li, “Group inverses for matrices over a Bezout
domain,” Electronic Journal of Linear Algebra, vol. 18, pp. 600–
612, 2009.
[26] C. Cao and J. Li, “A note on the group inverse of some 2 × 2
block matrices over skew fields,” Applied Mathematics and
Computation, vol. 217, no. 24, pp. 10271–10277, 2011.
[27] C. Cao and X. Tang, “Representations of the group inverse of
some 2 × 2 block matrices,” International Mathematical Forum,
vol. 31, pp. 1511–1517, 2006.
[28] M. Catral, D. D. Olesky, and P. van den Driessche, “Graphical
description of group inverses of certain bipartite matrices,”
Linear Algebra and its Applications, vol. 432, no. 1, pp. 36–52,
2010.
[29] P. Patrício and R. E. Hartwig, “The (2, 2, 0) group inverse problem,” Applied Mathematics and Computation, vol. 217, no. 2, pp.
516–520, 2010.
[30] J. Zhou, C. Bu, and Y. Wei, “Group inverse for block matrices
and some related sign analysis,” Linear and Multilinear Algebra,
vol. 60, no. 6, pp. 669–681, 2012.
[31] Z. Yan, “New representations of the Moore-Penrose inverse of
2 × 2 block matrices,” Linear Algebra and its Applications, 2013.