Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2012, Article ID 390592, 10 pages
doi:10.1155/2012/390592
Research Article
The Expression of the Drazin Inverse with
Rank Constraints
Linlin Zhao
Department of Mathematics, Dezhou University, Dezhou 253023, China
Correspondence should be addressed to Linlin Zhao, zhaolinlin0635@163.com
Received 9 October 2012; Accepted 5 November 2012
Academic Editor: C. Conca
Copyright © 2012 Linlin Zhao. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
By using the matrix decomposition and the reverse order law, we provide some new expressions
of the Drazin inverse for any 2 × 2 block matrix with rank constraints.
1. Introduction
Let A be a square complex matrix. The symbols r(A) and A† stand for the rank and the Moore-Penrose inverse of the matrix A, respectively. The Drazin inverse A^D of A is the unique matrix satisfying

A^{k+1} A^D = A^k,    A^D A A^D = A^D,    A A^D = A^D A,                (1.1)

where k = ind(A) is the index of A, that is, the smallest nonnegative integer such that r(A^{k+1}) = r(A^k). We write A^π = I − A A^D.
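For readers who want to experiment with the statements below, the defining equations (1.1) are easy to check numerically. The following sketch (Python with NumPy; the helper name drazin and the test matrix are ours, not part of the paper) computes A^D through the well-known representation A^D = A^l (A^{2l+1})† A^l, which is valid for any l ≥ ind(A); taking l equal to the matrix order is a safe, if not minimal, choice.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l = n >= ind(A)."""
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

# a singular test matrix with ind(A) = 2: r(A) = 2, r(A^2) = r(A^3) = 1
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = drazin(A)
k = 2
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, np.linalg.matrix_power(A, k)))  # A^(k+1) A^D = A^k
print(np.allclose(AD @ A @ AD, AD))        # A^D A A^D = A^D
print(np.allclose(A @ AD, AD @ A))         # A A^D = A^D A
Api = np.eye(3) - A @ AD                   # A^pi = I - A A^D
```

All three checks print True; the same helper is repeated in the later sketches so that each one runs on its own.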
The Drazin inverse of a square matrix plays an important role in various fields
like singular differential equations and singular difference equations, Markov chains, and
iterative methods.
The problem of finding explicit representations for the Drazin inverse of a complex block matrix

M = [ A  B ]
    [ C  D ]                                                            (1.2)

in terms of its blocks was posed by Campbell and Meyer [1, 2] in 1979. Many authors have considered this problem and have provided formulas for M^D under some specific conditions [3–6].
In this paper, under rank constraints, we will present some new representations of M^D which have not been discussed before.
2. Preliminaries
Lemma 2.1 (see [4]). Let P and Q be square matrices of the same order. If PQ = 0, then

(P + Q)^D = Q^π Σ_{i=0}^{k−1} Q^i (P^D)^{i+1} + Σ_{i=0}^{k−1} (Q^D)^{i+1} P^i P^π,                (2.1)

where max{ind(P), ind(Q)} ≤ k ≤ ind(P) + ind(Q). If PQ = 0 and QP = 0, then (P + Q)^D = P^D + Q^D.
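A quick numerical check of the first statement of Lemma 2.1 (our own sketch; P and Q are chosen so that PQ = 0 by putting the columns of Q in the null space of P, and drazin/index are the simple helpers introduced above):

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

def index(A):
    """Smallest k >= 0 with r(A^(k+1)) = r(A^k)."""
    rk = lambda j: np.linalg.matrix_rank(np.linalg.matrix_power(A, j))
    k = 0
    while rk(k + 1) != rk(k):
        k += 1
    return k

P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 3.0, 2.0]])   # columns of Q lie in the null space of P, so PQ = 0
assert np.allclose(P @ Q, 0)

mp = np.linalg.matrix_power
PD, QD = drazin(P), drazin(Q)
Ppi = np.eye(3) - P @ PD
Qpi = np.eye(3) - Q @ QD
k = max(index(P), index(Q))        # any admissible k in (2.1) gives the same value
rhs = sum(Qpi @ mp(Q, i) @ mp(PD, i + 1) for i in range(k)) + \
      sum(mp(QD, i + 1) @ mp(P, i) @ Ppi for i in range(k))
print(np.allclose(drazin(P + Q), rhs))   # True: (P + Q)^D agrees with (2.1)
```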
Lemma 2.2 (see [7]). Let

M_1 = [ A  0 ],      M_2 = [ B  C ],                                     (2.2)
      [ C  B ]              [ 0  A ]

where A, B are square matrices with ind(A) = r and ind(B) = s. Then

M_1^D = [ A^D  0   ],      M_2^D = [ B^D  X   ],                          (2.3)
        [ X    B^D ]                [ 0    A^D ]

where

X = Σ_{i=0}^{r−1} (B^D)^{i+2} C A^i A^π + B^π Σ_{i=0}^{s−1} B^i C (A^D)^{i+2} − B^D C A^D.          (2.4)
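The formula for M_1^D in Lemma 2.2 can be verified numerically in the same spirit (again a sketch with our own helper names; any square A, B and conformable C will do):

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

def index(A):
    rk = lambda j: np.linalg.matrix_rank(np.linalg.matrix_power(A, j))
    k = 0
    while rk(k + 1) != rk(k):
        k += 1
    return k

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent, ind(A) = 2
B = np.array([[1.0, 0.0], [0.0, 0.0]])   # idempotent, ind(B) = 1
C = np.array([[1.0, 2.0], [3.0, 4.0]])

mp = np.linalg.matrix_power
AD, BD = drazin(A), drazin(B)
Api = np.eye(2) - A @ AD
Bpi = np.eye(2) - B @ BD
r, s = index(A), index(B)
X = sum(mp(BD, i + 2) @ C @ mp(A, i) @ Api for i in range(r)) + \
    Bpi @ sum(mp(B, i) @ C @ mp(AD, i + 2) for i in range(s)) - BD @ C @ AD

M1  = np.block([[A, np.zeros((2, 2))], [C, B]])
M1D = np.block([[AD, np.zeros((2, 2))], [X, BD]])
print(np.allclose(drazin(M1), M1D))      # True: matches (2.3)-(2.4)
```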
Lemma 2.3 (see [8]). Let A = A_1 A_2 · · · A_n and X = A_n^D A_{n−1}^D · · · A_1^D. Then X = A^D if and only if A_1, A_2, . . . , A_n, A satisfy

r [ (−1)^n A^{2k+1}   A^k E ]  = r(A^k) + r(A_1^k) + · · · + r(A_n^k),                              (2.5)
  [ E^T A^k           N     ]

where A_i ∈ C^{m×m}, i = 1, 2, . . . , n, E = (0, . . . , 0, I_m) ∈ C^{m×m(n+1)}, and N is the (n+1) × (n+1) block matrix

N = [                                      A_1^{2k+1}       A_1^k ]
    [                      · · ·           · · ·                  ]
    [ A_n^{2k+1}    A_n^k A_{n−1}^k                               ]                                  (2.6)
    [ A_n^k                                                       ]

whose unspecified blocks are zero, with k = max{ind(A_i), ind(A)}, 1 ≤ i ≤ n.
Lemma 2.4 (see [9]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}. Then

r [ A  B ]  = r(B) + r(C) + r( (I_m − BB†) A (I_n − C†C) ).                                         (2.7)
  [ C  0 ]
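Lemma 2.4 is the classical rank identity of Marsaglia and Styan; a short numerical illustration (sizes chosen arbitrarily) is:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, l = 4, 3, 2, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))

lhs = np.linalg.matrix_rank(np.block([[A, B], [C, np.zeros((l, k))]]))
EB  = np.eye(m) - B @ np.linalg.pinv(B)     # I_m - B B^+
FC  = np.eye(n) - np.linalg.pinv(C) @ C     # I_n - C^+ C
rhs = np.linalg.matrix_rank(B) + np.linalg.matrix_rank(C) + np.linalg.matrix_rank(EB @ A @ FC)
print(lhs == rhs)                           # True
```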
3. Main Results
In this section, with rank equality constraints, we consider the Drazin inverse of block
matrices.
Let M = [ A  B ; C  D ], where A ∈ C^{m×m} is invertible and D − CA^{−1}B ∈ C^{n×n} is singular. It is easy to verify that M can be decomposed as

M = [ I_m      0   ] [ A   0            ] [ I_m   A^{−1}B ]  = M_1 M_2 M_3.                          (3.1)
    [ CA^{−1}  I_n ] [ 0   D − CA^{−1}B ] [ 0     I_n     ]
Let

E = (0, 0, 0, I_{m+n}),      N = [ 0             0             M_1^{2k+1}    M_1^k ]
                                  [ 0             M_2^{2k+1}    M_2^k M_1^k   0     ]
                                  [ M_3^{2k+1}    M_3^k M_2^k   0             0     ]                 (3.2)
                                  [ M_3^k         0             0             0     ],

where k = max{ind(M_2), ind(M)}. According to Lemma 2.3, we have the following theorem.
Theorem 3.1. Let M = [ A  B ; C  D ], where A ∈ C^{m×m} is invertible and D − CA^{−1}B ∈ C^{n×n} is singular. If

r(M^k) + r [ 0                         M_2^k M_1^{−1} F_{M^k} ]  = r(M_2^k),                         (3.3)
           [ E_{M^k} M_3^{−1} M_2^k    0                      ]

where k = max{ind(M_2), ind(M)}, E_{M^k} = I − M^k (M^k)†, and F_{M^k} = I − (M^k)† M^k, then M^D has the following form:

M^D = [ A^D + A^{−1}B (D − CA^{−1}B)^D CA^{−1}    −A^{−1}B (D − CA^{−1}B)^D ]                        (3.4)
      [ −(D − CA^{−1}B)^D CA^{−1}                  (D − CA^{−1}B)^D         ].
Proof. From Lemma 2.3 and (3.1), we know that if

r [ −M^{2k+1}   M^k E ]  = r(M^k) + r(M_1^k) + r(M_2^k) + r(M_3^k),                                  (3.5)
  [ E^T M^k     N     ]

where k = max{ind(M_2), ind(M)}, then

M^D = M_3^{−1} M_2^D M_1^{−1}
    = [ I_m   −A^{−1}B ] [ A^D   0                 ] [ I_m       0   ]
      [ 0     I_n      ] [ 0     (D − CA^{−1}B)^D  ] [ −CA^{−1}  I_n ]
    = [ A^D + A^{−1}B (D − CA^{−1}B)^D CA^{−1}    −A^{−1}B (D − CA^{−1}B)^D ]                        (3.6)
      [ −(D − CA^{−1}B)^D CA^{−1}                  (D − CA^{−1}B)^D         ].
Note that

r [ −M^{2k+1}   M^k E ]
  [ E^T M^k     N     ]

  = r [ −M^{2k+1}   0              0             0             M^k   ]
      [ 0           0              0             M_1^{2k+1}    M_1^k ]
      [ 0           0              M_2^{2k+1}    M_2^k M_1^k   0     ]
      [ 0           M_3^{2k+1}     M_3^k M_2^k   0             0     ]
      [ M^k         M_3^k          0             0             0     ]

  = r [ 0           M^{k+1} M_3^k  0             0             M^k   ]
      [ 0           0              0             M_1^{2k+1}    M_1^k ]
      [ 0           0              M_2^{2k+1}    M_2^k M_1^k   0     ]
      [ 0           M_3^{2k+1}     M_3^k M_2^k   0             0     ]
      [ M^k         M_3^k          0             0             0     ]

  = r [ 0           0                0             0             M^k   ]
      [ 0           −M_1^k M M_3^k   0             M_1^{2k+1}    M_1^k ]
      [ 0           0                M_2^{2k+1}    M_2^k M_1^k   0     ]                             (3.7)
      [ 0           M_3^{2k+1}       M_3^k M_2^k   0             0     ]
      [ M^k         M_3^k            0             0             0     ].
Let

G = (0, 0, 0, M^k),      J = [ 0   ],      H = [ −M_1^k M M_3^k   0             M_1^{2k+1}   M_1^k ]
                              [ 0   ]            [ 0               M_2^{2k+1}    M_2^k M_1^k  0     ]
                              [ 0   ]            [ M_3^{2k+1}      M_3^k M_2^k   0            0     ]                (3.8)
                              [ M^k ]            [ M_3^k           0             0            0     ].
From Lemma 2.4, we have

r [ 0  G ]  = r [ H  J ]  = r(J) + r(G) + r( (I_{4(m+n)} − JJ†) H (I_{4(m+n)} − G†G) )
  [ J  H ]      [ G  0 ]
            = 2 r(M^k) + r( (I_{4(m+n)} − JJ†) H (I_{4(m+n)} − G†G) ).                               (3.9)

Note that J† = (0, 0, 0, (M^k)†) and G† = (0, 0, 0, (M^k)†)^T. Then we get
(I_{4(m+n)} − JJ†) H (I_{4(m+n)} − G†G)

  = [ −M_1^k M M_3^k            0             M_1^{2k+1}   M_1^k (I − (M^k)† M^k) ]
    [ 0                         M_2^{2k+1}    M_2^k M_1^k  0                      ]                  (3.10)
    [ M_3^{2k+1}                M_3^k M_2^k   0            0                      ]
    [ (I − M^k (M^k)†) M_3^k    0             0            0                      ].
Let E_{M^k} = I − M^k (M^k)† and F_{M^k} = I − (M^k)† M^k. Then (I_{4(m+n)} − JJ†) H (I_{4(m+n)} − G†G) can be rewritten as the following product of three matrices:

[ M_1^k  0  0      0 ] [ −M        0            M_1     F_{M^k} ] [ M_3^k  0  0      0 ]
[ 0      I  0      0 ] [ 0         M_2^{2k+1}   M_2^k   0       ] [ 0      I  0      0 ]               (3.11)
[ 0      0  M_3^k  0 ] [ M_3       M_2^k        0       0       ] [ 0      0  M_1^k  0 ]
[ 0      0  0      I ] [ E_{M^k}   0            0       0       ] [ 0      0  0      I ].

Since M_1 and M_3 are nonsingular, we then have

r( (I_{4(m+n)} − JJ†) H (I_{4(m+n)} − G†G) )

  = r [ −M        0            M_1     F_{M^k} ]
      [ 0         M_2^{2k+1}   M_2^k   0       ]
      [ M_3       M_2^k        0       0       ]
      [ E_{M^k}   0            0       0       ]

  = r [ −M        0               I_{m+n}          F_{M^k} ]
      [ 0         M_2^{2k+1}      M_2^k M_1^{−1}   0       ]
      [ I_{m+n}   M_3^{−1} M_2^k  0                0       ]
      [ E_{M^k}   0               0                0       ]

  = r [ 0         M_1 M_2^{k+1}             I_{m+n}          F_{M^k} ]
      [ 0         M_2^{2k+1}                M_2^k M_1^{−1}   0       ]
      [ I_{m+n}   M_3^{−1} M_2^k            0                0       ]
      [ 0         −E_{M^k} M_3^{−1} M_2^k   0                0       ]

  = r [ 0         M_1 M_2^{k+1}             I_{m+n}          F_{M^k}                 ]
      [ 0         0                         0                −M_2^k M_1^{−1} F_{M^k} ]
      [ I_{m+n}   M_3^{−1} M_2^k            0                0                       ]
      [ 0         −E_{M^k} M_3^{−1} M_2^k   0                0                       ]

  = r [ 0         0                         I_{m+n}   0                       ]
      [ 0         0                         0         −M_2^k M_1^{−1} F_{M^k} ]
      [ I_{m+n}   0                         0         0                       ]                       (3.12)
      [ 0         −E_{M^k} M_3^{−1} M_2^k   0         0                       ]

  = 2(m + n) + r [ 0                        M_2^k M_1^{−1} F_{M^k} ]
                 [ E_{M^k} M_3^{−1} M_2^k   0                      ].
Thus, we have

r [ −M^{2k+1}   M^k E ]  = r [ 0  G ]  = 2 r(M^k) + 2(m + n) + r [ 0                        M_2^k M_1^{−1} F_{M^k} ]      (3.13)
  [ E^T M^k     N     ]      [ J  H ]                            [ E_{M^k} M_3^{−1} M_2^k   0                      ].

From the above equality and condition (3.3), the rank equality (3.5) is easily verified.
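To see Theorem 3.1 at work numerically, the sketch below builds a small block matrix for which the rank constraint (3.3) happens to hold (we take C = 0 and put B in the column space of D, which makes this easy to arrange); the helpers and the example are ours, not taken from the paper.

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

def index(A):
    rk = lambda j: np.linalg.matrix_rank(np.linalg.matrix_power(A, j))
    k = 0
    while rk(k + 1) != rk(k):
        k += 1
    return k

m = n = 2
A = np.array([[2.0, 1.0], [0.0, 1.0]])          # invertible
D = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 2.0], [3.0, 4.0]]) @ D      # chosen so that (3.3) holds
C = np.zeros((n, m))
Ai = np.linalg.inv(A)
S  = D - C @ Ai @ B                              # D - C A^{-1} B, singular here

M  = np.block([[A, B], [C, D]])
M1 = np.block([[np.eye(m), np.zeros((m, n))], [C @ Ai, np.eye(n)]])
M2 = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), S]])
M3 = np.block([[np.eye(m), Ai @ B], [np.zeros((n, m)), np.eye(n)]])
print(np.allclose(M, M1 @ M2 @ M3))              # decomposition (3.1)

k   = max(index(M2), index(M))
Mk  = np.linalg.matrix_power(M, k)
M2k = np.linalg.matrix_power(M2, k)
EMk = np.eye(m + n) - Mk @ np.linalg.pinv(Mk)
FMk = np.eye(m + n) - np.linalg.pinv(Mk) @ Mk
Z   = np.zeros((m + n, m + n))
test = np.block([[Z, M2k @ np.linalg.inv(M1) @ FMk],
                 [EMk @ np.linalg.inv(M3) @ M2k, Z]])
print(np.linalg.matrix_rank(Mk) + np.linalg.matrix_rank(test) == np.linalg.matrix_rank(M2k))  # (3.3)

SD = drazin(S)
MD = np.block([[Ai + Ai @ B @ SD @ C @ Ai, -Ai @ B @ SD],   # formula (3.4); A^D = A^{-1} here
               [-SD @ C @ Ai, SD]])
print(np.allclose(MD, drazin(M)))                # (3.4) equals M^D, as the theorem predicts
```

All three checks print True for this example; for blocks violating (3.3) the last comparison generally fails, which is exactly the point of the rank constraint.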
Let M = [ A  B ; C  D ], where A ∈ C^{m×m} and D ∈ C^{n×n}. It is easy to verify that the matrix M can be decomposed as

M = [ I_m    0   ] [ A      A^π B ] [ I_m   A^D B ]  = A_1 A_2 A_3,                                   (3.14)
    [ CA^D   I_n ] [ CA^π   S     ] [ 0     I_n   ]

where S = D − CA^D B is the generalized Schur complement of A in M.
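The identity M = A_1 A_2 A_3 in (3.14) holds for arbitrary blocks (A need not be invertible), which is easy to confirm numerically with the same drazin helper as before (our own naming):

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

m = n = 2
A = np.array([[1.0, 1.0], [0.0, 0.0]])           # singular
B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[0.0, 1.0], [2.0, 1.0]])
D = np.array([[1.0, 0.0], [2.0, 3.0]])

AD  = drazin(A)
Api = np.eye(m) - A @ AD
S   = D - C @ AD @ B                              # generalized Schur complement of A in M

M  = np.block([[A, B], [C, D]])
A1 = np.block([[np.eye(m), np.zeros((m, n))], [C @ AD, np.eye(n)]])
A2 = np.block([[A, Api @ B], [C @ Api, S]])
A3 = np.block([[np.eye(m), AD @ B], [np.zeros((n, m)), np.eye(n)]])
print(np.allclose(M, A1 @ A2 @ A3))               # True: decomposition (3.14)
```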
Let

N_1 = [ 0             0             A_1^{2k+1}    A_1^k ]
      [ 0             A_2^{2k+1}    A_2^k A_1^k   0     ]
      [ A_3^{2k+1}    A_3^k A_2^k   0             0     ]                                             (3.15)
      [ A_3^k         0             0             0     ],

where k = max{ind(A_2), ind(M)}. Then we have the following theorem.
Theorem 3.2. If AA^π B = 0, CA^π B = 0, and the matrices A_1, A_2, A_3, M satisfy

r(M^k) + r [ 0                         A_2^k A_1^{−1} F_{M^k} ]  = r(A_2^k),                          (3.16)
           [ E_{M^k} A_3^{−1} A_2^k    0                      ]

where k = max{ind(A_2), ind(M)}, then

M^D = [ A^D + A^π B X A^D + (A^π B S^D − A^D B) Y    A^π B (S^D)^2 − A^D B S^D ]                      (3.17)
      [ Y                                             S^D                      ],

where S = D − CA^D B, X = Σ_{i=0}^{ind(A)−1} (S^D)^{i+2} C A^π A^i A^π, and Y = X − S^D C A^D.
Proof. From Lemma 2.3 and (3.14), we get that if

r [ −M^{2k+1}   M^k E ]  = r(M^k) + r(A_1^k) + r(A_2^k) + r(A_3^k),                                   (3.18)
  [ E^T M^k     N_1   ]

then

M^D = A_3^{−1} A_2^D A_1^{−1} = [ I_m   −A^D B ] [ A      A^π B ]^D [ I_m     0   ]                   (3.19)
                                 [ 0     I_n    ] [ CA^π   S     ]   [ −CA^D   I_n ].

Similar to the proof of Theorem 3.1, we derive that the rank condition (3.18) can be simplified to (3.16).
Next, we will give the representation of A_2^D. Let

A_2 = [ A      0 ]  +  [ 0   A^π B ]  = P + Q.                                                        (3.20)
      [ CA^π   S ]     [ 0   0     ]

Since AA^π B = 0, CA^π B = 0, and A^π A^π = A^π, we have PQ = 0. From Lemma 2.2, we get

Q^D = 0,      P^D = [ A^D   0   ],                                                                    (3.21)
                     [ X     S^D ]

where X = Σ_{i=0}^{ind(A)−1} (S^D)^{i+2} C A^π A^i A^π.
From Lemma 2.1 and the fact that Q^i = 0 for i ≥ 2, it follows that

A_2^D = (I_{m+n} − QQ^D)(I_{m+n} + QP^D) P^D
      = ( I_{m+n} + [ 0   A^π B ] [ A^D   0   ] ) [ A^D   0   ]
        (           [ 0   0     ] [ X     S^D ] ) [ X     S^D ]
      = [ I_m + A^π B X   A^π B S^D ] [ A^D   0   ]
        [ 0               I_n       ] [ X     S^D ]
      = [ A^D + A^π B (X A^D + S^D X)   A^π B (S^D)^2 ]                                               (3.22)
        [ X                              S^D          ].

Substituting A_2^D into (3.19), the conclusion can be obtained.
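As an illustration of Theorem 3.2, the following sketch uses a deliberately simple example (B = 0, C in the column space of D, and A singular) for which the hypotheses and the rank constraint (3.16) hold; the helpers and the example are ours, not taken from the paper.

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

def index(A):
    rk = lambda j: np.linalg.matrix_rank(np.linalg.matrix_power(A, j))
    k = 0
    while rk(k + 1) != rk(k):
        k += 1
    return k

m = n = 2
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.zeros((m, n))
D = np.array([[1.0, 0.0], [0.0, 0.0]])
C = D @ np.array([[1.0, 2.0], [3.0, 4.0]])        # C = [[1, 2], [0, 0]]

mp  = np.linalg.matrix_power
AD  = drazin(A)
Api = np.eye(m) - A @ AD
S   = D - C @ AD @ B
SD  = drazin(S)
print(np.allclose(A @ Api @ B, 0), np.allclose(C @ Api @ B, 0))   # hypotheses of Theorem 3.2

M  = np.block([[A, B], [C, D]])
A1 = np.block([[np.eye(m), np.zeros((m, n))], [C @ AD, np.eye(n)]])
A2 = np.block([[A, Api @ B], [C @ Api, S]])
A3 = np.block([[np.eye(m), AD @ B], [np.zeros((n, m)), np.eye(n)]])

k   = max(index(A2), index(M))
Mk  = mp(M, k)
A2k = mp(A2, k)
EMk = np.eye(m + n) - Mk @ np.linalg.pinv(Mk)
FMk = np.eye(m + n) - np.linalg.pinv(Mk) @ Mk
Z   = np.zeros((m + n, m + n))
cond = np.block([[Z, A2k @ np.linalg.inv(A1) @ FMk],
                 [EMk @ np.linalg.inv(A3) @ A2k, Z]])
print(np.linalg.matrix_rank(Mk) + np.linalg.matrix_rank(cond) == np.linalg.matrix_rank(A2k))  # (3.16)

X = np.zeros((n, m))
for i in range(index(A)):                          # ind(A) = 1 here
    X = X + mp(SD, i + 2) @ C @ Api @ mp(A, i) @ Api
Y = X - SD @ C @ AD
MD = np.block([[AD + Api @ B @ X @ AD + (Api @ B @ SD - AD @ B) @ Y,
                Api @ B @ SD @ SD - AD @ B @ SD],
               [Y, SD]])
print(np.allclose(MD, drazin(M)))                  # formula (3.17) equals M^D
```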
From Theorem 3.2, we can easily obtain the following corollaries.
Corollary 3.3. If A^π B = 0 and the rank equality (3.16) holds, then

M^D = [ A^D − A^D B (X − S^D C A^D)    −A^D B S^D ]                                                   (3.23)
      [ X − S^D C A^D                   S^D       ],

where S and X are the same as in Theorem 3.2.
Corollary 3.4. If A^π B = 0, CA^π = 0, and the rank equality (3.16) holds, then

M^D = [ A^D + A^D B S^D C A^D    −A^D B S^D ]                                                         (3.24)
      [ −S^D C A^D                S^D       ],

where S = D − CA^D B.
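A small numerical illustration of Corollary 3.4 (the example is ours; it is built so that A^π B = 0 and C A^π = 0 hold and, for this choice of blocks, the rank equality (3.16) is satisfied as well):

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

A = np.array([[1.0, 1.0], [0.0, 0.0]])            # ind(A) = 1
D = np.array([[1.0, 0.0], [0.0, 0.0]])            # ind(D) = 1
B = A @ np.array([[1.0, 2.0], [3.0, 4.0]]) @ D    # guarantees A^pi B = 0
C = np.zeros((2, 2))                              # so that C A^pi = 0

AD  = drazin(A)
Api = np.eye(2) - A @ AD
print(np.allclose(Api @ B, 0), np.allclose(C @ Api, 0))    # hypotheses of Corollary 3.4

S  = D - C @ AD @ B
SD = drazin(S)
MD = np.block([[AD + AD @ B @ SD @ C @ AD, -AD @ B @ SD],  # formula (3.24)
               [-SD @ C @ AD, SD]])
M  = np.block([[A, B], [C, D]])
print(np.allclose(MD, drazin(M)))                          # True
```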
Next, we will consider another decomposition of M involving the generalized Schur complement S_B = C − DB^D A.
Let M = [ A  B ; C  D ], where A, D ∈ C^{m×m}. Then M can be decomposed as

M = [ I_m    0   ] [ B^π A   B    ] [ I_m     0   ]  = V_1 V_2 V_3,                                   (3.25)
    [ DB^D   I_m ] [ S_B     DB^π ] [ B^D A   I_m ]

where S_B = C − DB^D A is the generalized Schur complement of B in M.
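As with (3.14), the identity M = V_1 V_2 V_3 in (3.25) holds for arbitrary blocks; a quick numerical sanity check (helper naming ours) is:

```python
import numpy as np

def drazin(A):
    n = A.shape[0]
    An = np.linalg.matrix_power(A, n)
    return An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

m = 2
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [2.0, 4.0]])            # singular, ind(B) = 1
C = np.array([[0.0, 1.0], [1.0, 1.0]])
D = np.array([[2.0, 0.0], [1.0, 3.0]])

BD  = drazin(B)
Bpi = np.eye(m) - B @ BD
SB  = C - D @ BD @ A                              # generalized Schur complement of B in M

M  = np.block([[A, B], [C, D]])
V1 = np.block([[np.eye(m), np.zeros((m, m))], [D @ BD, np.eye(m)]])
V2 = np.block([[Bpi @ A, B], [SB, D @ Bpi]])
V3 = np.block([[np.eye(m), np.zeros((m, m))], [BD @ A, np.eye(m)]])
print(np.allclose(M, V1 @ V2 @ V3))               # True: decomposition (3.25)
```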
Let

E_1 = (0, 0, 0, I_{2m}),      N_2 = [ 0             0             V_1^{2k+1}    V_1^k ]
                                     [ 0             V_2^{2k+1}    V_2^k V_1^k   0     ]
                                     [ V_3^{2k+1}    V_3^k V_2^k   0             0     ]              (3.26)
                                     [ V_3^k         0             0             0     ],

where k = max{ind(V_2), ind(M)}. Then we have the following theorem.
Theorem 3.5. Let M = [ A  B ; C  D ], where A, D ∈ C^{m×m}. If B^π A = 0, CB = DB^D AB, and the matrices V_1, V_2, V_3, M satisfy the following rank equality:

r(M^k) + r [ 0                         V_2^k V_1^{−1} F_{M^k} ]  = r(V_2^k),                          (3.27)
           [ E_{M^k} V_3^{−1} V_2^k    0                      ]

then

M^D = [ B X^2 (X S_B − D B^D)                  B X^2             ]                                    (3.28)
      [ (I − B^D A B X) X (X S_B − D B^D)      (I − B^D A B X) X ],

where X = (DB^π)^D.
Proof. From Lemma 2.3 and (3.25), we get that if the rank condition

r [ −M^{2k+1}    M^k E_1 ]  = r(M^k) + r(V_1^k) + r(V_2^k) + r(V_3^k)                                 (3.29)
  [ E_1^T M^k    N_2     ]

holds, then M^D = V_3^{−1} V_2^D V_1^{−1}. By the same method used in Theorem 3.1, we can verify that the above condition (3.29) can be reduced to (3.27).
Next, we will give the representation of V_2^D. Since B^π A = 0, we write

V_2 = [ 0     0    ]  +  [ 0   B ]  = P_1 + Q_1,                                                      (3.30)
      [ S_B   DB^π ]     [ 0   0 ]

where S_B = C − DB^D A. From the condition CB = DB^D AB, we get P_1 Q_1 = 0; then, according to Lemma 2.2, we have
Q_1^D = 0,      P_1^D = [ 0                    0        ]                                             (3.31)
                         [ ((DB^π)^D)^2 S_B     (DB^π)^D ].
Let X = (DB^π)^D. By Lemma 2.1 and the fact that Q_1^i = 0 for i ≥ 2, we get

V_2^D = (I_{2m} − Q_1 Q_1^D)(I_{2m} + Q_1 P_1^D) P_1^D
      = ( I_{2m} + [ 0   B ] [ 0         0 ] ) [ 0         0 ]
        (          [ 0   0 ] [ X^2 S_B   X ] ) [ X^2 S_B   X ]
      = [ I_m + B X^2 S_B   B X ] [ 0         0 ]
        [ 0                 I_m ] [ X^2 S_B   X ]
      = [ B X^3 S_B   B X^2 ]                                                                         (3.32)
        [ X^2 S_B     X     ].
Therefore, we get

M^D = [ I_m      0   ] [ B X^3 S_B   B X^2 ] [ I_m       0   ]
      [ −B^D A   I_m ] [ X^2 S_B     X     ] [ −D B^D    I_m ]
    = [ B X^2 (X S_B − D B^D)                  B X^2             ]                                    (3.33)
      [ (I − B^D A B X) X (X S_B − D B^D)      (I − B^D A B X) X ].
Remark 3.6. In addition to the decompositions of M in (3.14) and (3.25), the matrix M can also be decomposed as other matrix products involving the generalized Schur complements S_C = B − AC^D D or S_D = A − BD^D C. In these cases, new formulas for M^D can be derived by the method used in this paper.
4. Conclusion
In this paper, we mainly discuss the Drazin inverse of block matrices under rank equality constraints. Compared with the existing results, our results impose stronger restrictions, but the methods used in this paper are different from those in previous related papers.
Acknowledgments
The author wishes to thank Professor Guoliang Chen and Dr. Jing Cai for their help. This research was supported by NSFC Grants 10971070, 11071079, and 10901056.
References
[1] S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations, Dover, New York, NY, USA, 1991.
[2] S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations, Pitman, London, UK, 1979.
[3] Y. Wei, "Expressions for the Drazin inverse of a 2 × 2 block matrix," Linear and Multilinear Algebra, vol. 45, no. 2-3, pp. 131–146, 1998.
[4] R. E. Hartwig, G. Wang, and Y. Wei, "Some additive results on Drazin inverse," Linear Algebra and Its Applications, vol. 322, no. 1–3, pp. 207–217, 2001.
[5] X. Li and Y. Wei, "A note on the representations for the Drazin inverse of 2 × 2 block matrices," Linear Algebra and Its Applications, vol. 423, no. 2-3, pp. 332–338, 2007.
[6] D. S. Cvetković-Ilić, "A note on the representation for the Drazin inverse of 2 × 2 block matrices," Linear Algebra and Its Applications, vol. 429, no. 1, pp. 242–248, 2008.
[7] C. D. Meyer and N. J. Rose, "The index and the Drazin inverse of block triangular matrices," SIAM Journal on Applied Mathematics, vol. 33, no. 1, pp. 1–7, 1977.
[8] G. Wang, "The reverse order law for the Drazin inverses of multiple matrix products," Linear Algebra and Its Applications, vol. 348, pp. 265–272, 2002.
[9] G. Marsaglia and G. P. H. Styan, "Equalities and inequalities for ranks of matrices," Linear and Multilinear Algebra, vol. 2, pp. 269–292, 1974.