Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2012, Article ID 591256, 14 pages
doi:10.1155/2012/591256
Research Article

An Analog of the Adjugate Matrix for the Outer Inverse $A^{(2)}_{T,S}$

Xiaoji Liu,^{1,2} Guangyan Zhu,^{1} Guangping Zhou,^{1} and Yaoming Yu^{3}

^{1} School of Science, Guangxi University for Nationalities, Nanning 530006, China
^{2} Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis, Nanning 530006, China
^{3} School of Mathematical Sciences, Monash University, Caulfield East, VIC 3800, Australia
Correspondence should be addressed to Xiaoji Liu, liuxiaoji.2003@yahoo.com.cn
Received 5 August 2011; Revised 24 November 2011; Accepted 7 December 2011
Academic Editor: Kui Fu Chen
Copyright © 2012 Xiaoji Liu et al. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We investigate the determinantal representation by exploiting the limiting expression for the generalized inverse $A^{(2)}_{T,S}$. We show the equivalence between the existence and the limiting expression of $A^{(2)}_{T,S}$ and certain limiting processes of matrices, and we deduce new determinantal representations of $A^{(2)}_{T,S}$ based on an analog of the classical adjoint matrix. Using the analog of the classical adjoint matrix, we present Cramer rules for the restricted matrix equation $AXB = D$, $R(X) \subset T$, $N(X) \supset \widetilde{S}$.
1. Introduction

Throughout this paper, $\mathbb{C}^{m\times n}$ denotes the set of $m \times n$ matrices over the complex number field $\mathbb{C}$, and $\mathbb{C}^{m\times n}_r$ denotes its subset of matrices of rank $r$. $I$ stands for the identity matrix of appropriate order.
Let $A \in \mathbb{C}^{m\times n}$, and let $M$ and $N$ be Hermitian positive definite matrices of orders $m$ and $n$, respectively. Consider the following equations:

$AXA = A$,  (1)
$XAX = X$,  (2)
$(AX)^{*} = AX$,  (3)
$(MAX)^{*} = MAX$,  (3M)
$(XA)^{*} = XA$,  (4)
$(NXA)^{*} = NXA$.  (4N)

$X$ is called a $\{2\}$- or outer inverse of $A$ if it satisfies (2), and it is denoted by $A^{(2)}$. $X$ is called the Moore-Penrose inverse of $A$ if it satisfies (1), (2), (3), and (4), and it is denoted by $A^{\dagger}$. $X$ is called the weighted Moore-Penrose inverse of $A$ with respect to $M$ and $N$ if it satisfies (1), (2), (3M), and (4N), and it is denoted by $A^{\dagger}_{MN}$ (see, e.g., [1, 2]).
Let $A \in \mathbb{C}^{n\times n}$. Then a matrix $X$ satisfying

$A^{k}XA = A^{k}$,  ($1^{k}$)
$XAX = X$,  (2)
$AX = XA$,  (5)

where $k$ is some positive integer, is called the Drazin inverse of $A$ and is denoted by $A_d$. The smallest positive integer $k$ such that $X$ and $A$ satisfy ($1^{k}$), (2), and (5) is called the Drazin index and is denoted by $k = \operatorname{Ind}(A)$. It is clear that $\operatorname{Ind}(A)$ is the smallest positive integer $k$ satisfying $\operatorname{rank}(A^{k}) = \operatorname{rank}(A^{k+1})$ (see [3]). If $k = 1$, then $X$ is called the group inverse of $A$ and is denoted by $A_g$. As is well known, $A_g$ exists if and only if $\operatorname{rank} A = \operatorname{rank} A^{2}$. The generalized inverses, and in particular the Moore-Penrose, group, and Drazin inverses, have also been studied in the context of semigroups, rings, Banach algebras, and $C^{*}$-algebras (see [4-8]).
In addition, if a matrix $X$ satisfies (1) and (5), then it is called a $\{1,5\}$-inverse of $A$ and is denoted by $A^{(1,5)}$.
Let $A \in \mathbb{C}^{m\times n}$, $W \in \mathbb{C}^{n\times m}$. Then the matrix $X \in \mathbb{C}^{m\times n}$ satisfying

$(AW)^{k+1}XW = (AW)^{k}$,  ($1^{k}_{W}$)
$XWAWX = X$,  ($2_{W}$)
$AWX = XWA$,  ($5_{W}$)

where $k$ is some nonnegative integer, is called the W-weighted Drazin inverse of $A$ and is denoted by $X = A_{d,W}$ (see [9]). Obviously, when $m = n$ and $W = I_n$, $X$ is the Drazin inverse of $A$.
Lemma 1.1 (see [1, Theorem 2.14]). Let $A \in \mathbb{C}^{m\times n}_r$, and let $T$ and $S$ be subspaces of $\mathbb{C}^{n}$ and $\mathbb{C}^{m}$, respectively, with $\dim T = \dim S^{\perp} = t \le r$. Then $A$ has a $\{2\}$-inverse $X$ such that $R(X) = T$ and $N(X) = S$ if and only if

$AT \oplus S = \mathbb{C}^{m}$,  (1.1)

in which case $X$ is unique and denoted by $A^{(2)}_{T,S}$.
If $A^{(2)}_{T,S}$ exists and there exists a matrix $G$ such that $R(G) = T$ and $N(G) = S$, then $GAA^{(2)}_{T,S} = G$ and $A^{(2)}_{T,S}AG = G$.
It is well known that several important generalized inverses, such as the Moore-Penrose inverse $A^{\dagger}$, the weighted Moore-Penrose inverse $A^{\dagger}_{M,N}$, the Drazin inverse $A_d$, and the group inverse $A_g$, are outer inverses $A^{(2)}_{T,S}$ for specific choices of $T$ and $S$; that is, they are all instances of the generalized inverse $A^{(2)}_{T,S}$, the $\{2\}$- or outer inverse of $A$ with prescribed range $T$ and null space $S$ (see [2, 10] in the context of complex matrices and [11] in the context of semigroups). The determinantal representation of the generalized inverse $A^{(2)}_{T,S}$ was studied in [12, 13].
We will investigate such representations further by exploiting the limiting expression for $A^{(2)}_{T,S}$. The paper is organized as follows. In Section 2, we investigate the equivalence between the existence of $A^{(2)}_{T,S}$ and the existence of the limits $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0} (GA+\lambda I)^{-1}G$, and we deduce new determinantal representations of $A^{(2)}_{T,S}$, based on an analog of the classical adjoint matrix, by exploiting the limiting expression. In Section 3, using the analog of the classical adjoint matrix from Section 2, we present Cramer rules for the restricted matrix equation $AXB = D$, $R(X) \subset T$, $N(X) \supset \widetilde{S}$. In Section 4, we give an example of solving the restricted matrix equation by using our expression.
We introduce the following notations. For $1 \le k \le n$, the symbol $Q_{k,n}$ denotes the set $\{\alpha : \alpha = (\alpha_1,\ldots,\alpha_k),\ 1 \le \alpha_1 < \cdots < \alpha_k \le n$, where $\alpha_i$, $i = 1,\ldots,k$, are integers$\}$. And $Q_{k,n}\{j\} := \{\beta : \beta \in Q_{k,n},\ j \in \beta\}$, where $j \in \{1,\ldots,n\}$.
Let $A = (a_{ij}) \in \mathbb{C}^{m\times n}$. The symbols $a_{.j}$ and $a_{i.}$ stand for the $j$th column and the $i$th row of $A$, respectively. In the same way, $a^{*}_{.j}$ and $a^{*}_{i.}$ denote the $j$th column and the $i$th row of the Hermitian adjoint matrix $A^{*}$. The symbol $A_{.j}(b)$ (or $A_{j.}(b)$) denotes the matrix obtained from $A$ by replacing its $j$th column (or row) with a vector $b$ (or $b^{T}$). We write the range of $A$ as $R(A) = \{Ax : x \in \mathbb{C}^{n}\}$ and the null space of $A$ as $N(A) = \{x \in \mathbb{C}^{n} : Ax = 0\}$. Let $B \in \mathbb{C}^{p\times q}$. We define the range of a pair $A$ and $B$ as $R(A,B) = \{AWB : W \in \mathbb{C}^{n\times p}\}$.
Let $\alpha \in Q_{k,m}$ and $\beta \in Q_{k,n}$, where $1 \le k \le \min\{m,n\}$. Then $|A^{\alpha}_{\beta}|$ denotes the minor of $A$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$. When $m = n$, the cofactor of $a_{ij}$ in $A$ is denoted by $\partial|A|/\partial a_{ij}$.
2. Analogs of the Adjugate Matrix for $A^{(2)}_{T,S}$

We start with the following theorem, which reveals the intrinsic relation between the existence of $A^{(2)}_{T,S}$ and that of $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0} (GA+\lambda I)^{-1}G$. Here $\lambda \to 0$ means that $\lambda$ tends to $0$ through any neighborhood of $0$ in $\mathbb{C}$ which excludes the nonzero eigenvalues of a square matrix. In [14], Wei pointed out that the existence of $A^{(2)}_{T,S}$ implies the existence of $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0} (GA+\lambda I)^{-1}G$. The following result shows that the converse is true under some conditions.

Theorem 2.1. Let $A \in \mathbb{C}^{m\times n}_r$, and let $T$ and $S$ be subspaces of $\mathbb{C}^{n}$ and $\mathbb{C}^{m}$, respectively, with $\dim T = \dim S^{\perp} = t \le r$. Let $G \in \mathbb{C}^{n\times m}_{t}$ with $R(G) = T$ and $N(G) = S$. Then the following statements are equivalent:
(i) $A^{(2)}_{T,S}$ exists;
(ii) $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ exists and $\operatorname{rank}(AG) = \operatorname{rank}(G)$;
(iii) $\lim_{\lambda\to 0} (GA+\lambda I)^{-1}G$ exists and $\operatorname{rank}(GA) = \operatorname{rank}(G)$.
In this case,

$A^{(2)}_{T,S} = \lim_{\lambda\to 0} (GA+\lambda I)^{-1}G = \lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$.  (2.1)

Proof. (i)⇔(ii): Assume that $A^{(2)}_{T,S}$ exists. By [14, Theorem 2.4], $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ exists. Since $G = A^{(2)}_{T,S}AG$, $\operatorname{rank}(AG) = \operatorname{rank}(G)$.
Conversely, assume that $\lim_{\lambda\to 0} G(AG+\lambda I)^{-1}$ exists and $\operatorname{rank}(AG) = \operatorname{rank}(G)$. So

$\lim_{\lambda\to 0} (AG+\lambda I)^{-1}AG = \lim_{\lambda\to 0} AG(AG+\lambda I)^{-1}$  (2.2)

exists. By [15, Theorem], $(AG)_g$ exists. So $(AG)^{(1,5)}$ exists, and then, by [13, Theorem 2], $A^{(2)}_{T,S}$ exists.
Similarly, we can show that (i)⇔(iii). Equation (2.1) comes from [14, equation (2.16)].
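The two limiting expressions in (2.1) are easy to probe numerically. The sketch below is not from the paper; the small matrices $A$ and $G$ are hypothetical illustrations. It evaluates both limits with a small $\lambda$ and checks that the result is an outer inverse:

```python
import numpy as np

# Hypothetical example: A is 3x2, G is 2x3; R(G) and N(G) prescribe T and S.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

lam = 1e-9
# The two limiting expressions of (2.1): (GA + lam I)^{-1} G and G (AG + lam I)^{-1}.
X1 = np.linalg.solve(G @ A + lam * np.eye(2), G)
X2 = G @ np.linalg.inv(A @ G + lam * np.eye(3))
assert np.allclose(X1, X2, atol=1e-6)   # both limits agree
assert np.allclose(X1 @ A @ X1, X1)     # X is an outer inverse: XAX = X
```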
Lemma 2.2. Let $A = (a_{ij}) \in \mathbb{C}^{m\times n}$ and $G = (g_{ij}) \in \mathbb{C}^{n\times m}_{t}$. Then $\operatorname{rank}\,(GA)_{.i}(g_{.j}) \le t$, where $1 \le i \le n$, $1 \le j \le m$, and $\operatorname{rank}\,(AG)_{i.}(g_{j.}) \le t$, where $1 \le i \le m$, $1 \le j \le n$.
Proof. Let $P_{ik}(a)$ be the $n \times n$ matrix with $a$ in the $(i,k)$ entry, $1$ in all diagonal entries, and $0$ elsewhere. It is an elementary matrix, and postmultiplying $(GA)_{.i}(g_{.j})$ by $P_{ik}(-a_{jk})$ for every $k \ne i$ subtracts $a_{jk}$ times the $i$th column from the $k$th column, so that

$(GA)_{.i}(g_{.j}) \prod_{k \ne i} P_{ik}(-a_{jk}) = G\widehat{A}$,  (2.3)

where the $(p,k)$ entry of the left-hand side equals $\sum_{l \ne j} g_{pl}a_{lk}$ for $k \ne i$ and $g_{pj}$ for $k = i$, and $\widehat{A} \in \mathbb{C}^{m\times n}$ is obtained from $A$ by replacing its $j$th row and its $i$th column with zeros, except for a $1$ in the $(j,i)$ entry. It follows from the invertibility of $P_{ik}(a)$, $i \ne k$, and from $\operatorname{rank}(G\widehat{A}) \le \operatorname{rank} G = t$ that $\operatorname{rank}\,(GA)_{.i}(g_{.j}) \le t$.
Analogously, the inequality $\operatorname{rank}\,(AG)_{i.}(g_{j.}) \le t$ can be proved. So the proof is complete.
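The rank bound of Lemma 2.2 can likewise be checked numerically; in this sketch the random $A$ and the rank-bounded factorized $G$ are hypothetical test data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, t = 4, 5, 2
A = rng.integers(-3, 4, size=(m, n)).astype(float)
# Hypothetical G of rank at most t, built as a product of thin factors.
G = (rng.integers(-2, 3, size=(n, t)) @ rng.integers(-2, 3, size=(t, m))).astype(float)

GA = G @ A  # n x n, rank <= t
for i in range(n):
    for j in range(m):
        M = GA.copy()
        M[:, i] = G[:, j]   # (GA)_{.i}(g_{.j}): ith column replaced by g_{.j}
        assert np.linalg.matrix_rank(M) <= t
```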
Recall that if $f_A(\lambda) = \det(\lambda I + A) = \lambda^{n} + d_1\lambda^{n-1} + \cdots + d_{n-1}\lambda + d_n$ is the characteristic polynomial of the matrix $-A$ for an $n \times n$ matrix $A$ over $\mathbb{C}$, then $d_i$ is the sum of all $i \times i$ principal minors of $A$, where $i = 1,\ldots,n$ (see, e.g., [16]).
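This relation between $\det(\lambda I + A)$ and principal minors is easy to verify computationally; the helper below is an illustrative sketch with a hypothetical $2 \times 2$ example, comparing the minor sums with the trace and determinant:

```python
import numpy as np
from itertools import combinations

def principal_minor_sum(A, i):
    """Sum of all i x i principal minors of the square matrix A."""
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in combinations(range(n), i))

# Hypothetical example: det(lam*I + A) = lam^2 + d1*lam + d2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
d1 = principal_minor_sum(A, 1)   # sum of 1x1 principal minors = trace(A)
d2 = principal_minor_sum(A, 2)   # the single 2x2 principal minor = det(A)
assert np.isclose(d1, np.trace(A))
assert np.isclose(d2, np.linalg.det(A))
```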
Theorem 2.3. Let $A$, $T$, $S$, and $G$ be the same as in Theorem 2.1. Write $G = (g_{ij})$. Suppose that the generalized inverse $A^{(2)}_{T,S}$ of $A$ exists. Then $A^{(2)}_{T,S}$ can be represented as follows:

$A^{(2)}_{T,S} = \left( \dfrac{x_{ij}}{d_t(GA)} \right)_{n\times m}$,  (2.4)

where

$x_{ij} = \sum_{\beta \in Q_{t,n}\{i\}} \bigl| \bigl((GA)_{.i}(g_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $\qquad d_t(GA) = \sum_{\beta \in Q_{t,n}} \bigl| (GA)^{\beta}_{\beta} \bigr|$,  (2.5)

or

$A^{(2)}_{T,S} = \left( \dfrac{y_{ij}}{d_t(AG)} \right)_{n\times m}$,  (2.6)

where

$y_{ij} = \sum_{\alpha \in Q_{t,m}\{j\}} \bigl| \bigl((AG)_{j.}(g_{i.})\bigr)^{\alpha}_{\alpha} \bigr|$, $\qquad d_t(AG) = \sum_{\alpha \in Q_{t,m}} \bigl| (AG)^{\alpha}_{\alpha} \bigr|$.  (2.7)
Proof. We will only show the representation (2.5), since the proof of (2.7) is similar. If $-\lambda$ is not an eigenvalue of $GA$, then the matrix $\lambda I + GA$ is invertible, and

$(\lambda I + GA)^{-1} = \dfrac{1}{\det(\lambda I + GA)} \begin{pmatrix} X_{11} & X_{21} & \cdots & X_{n1}\\ X_{12} & X_{22} & \cdots & X_{n2}\\ \vdots & \vdots & \ddots & \vdots\\ X_{1n} & X_{2n} & \cdots & X_{nn} \end{pmatrix}$,  (2.8)

where $X_{ij}$, $i,j = 1,\ldots,n$, are the cofactors of $\lambda I + GA$. It is easy to see that

$\sum_{l=1}^{n} X_{il} g_{lj} = \det\bigl((\lambda I + GA)_{.i}(g_{.j})\bigr)$.  (2.9)
So, by (2.1),

$A^{(2)}_{T,S} = \lim_{\lambda\to 0} \begin{pmatrix} \dfrac{\det\bigl((\lambda I + GA)_{.1}(g_{.1})\bigr)}{\det(\lambda I + GA)} & \cdots & \dfrac{\det\bigl((\lambda I + GA)_{.1}(g_{.m})\bigr)}{\det(\lambda I + GA)}\\ \vdots & \ddots & \vdots\\ \dfrac{\det\bigl((\lambda I + GA)_{.n}(g_{.1})\bigr)}{\det(\lambda I + GA)} & \cdots & \dfrac{\det\bigl((\lambda I + GA)_{.n}(g_{.m})\bigr)}{\det(\lambda I + GA)} \end{pmatrix}$.  (2.10)
We have the characteristic polynomial of $GA$:

$f_{GA}(\lambda) = \det(\lambda I + GA) = \lambda^{n} + d_1\lambda^{n-1} + d_2\lambda^{n-2} + \cdots + d_n$,  (2.11)

where $d_i$ ($1 \le i \le n$) is the sum of the $i \times i$ principal minors of $GA$. Since $\operatorname{rank}(GA) \le \operatorname{rank} G = t$, we have $d_n = d_{n-1} = \cdots = d_{t+1} = 0$ and

$\det(\lambda I + GA) = \lambda^{n} + d_1\lambda^{n-1} + d_2\lambda^{n-2} + \cdots + d_t\lambda^{n-t}$.  (2.12)

Expanding $\det\bigl((\lambda I + GA)_{.i}(g_{.j})\bigr)$, we have

$\det\bigl((\lambda I + GA)_{.i}(g_{.j})\bigr) = x^{ij}_{1}\lambda^{n-1} + x^{ij}_{2}\lambda^{n-2} + \cdots + x^{ij}_{n}$,  (2.13)

where $x^{ij}_{k} = \sum_{\beta \in Q_{k,n}\{i\}} \bigl| \bigl((GA)_{.i}(g_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $1 \le k \le n$, for $1 \le i \le n$ and $1 \le j \le m$.
By Lemma 2.2, $\operatorname{rank}\,(GA)_{.i}(g_{.j}) \le t$, and so $\bigl| \bigl((GA)_{.i}(g_{.j})\bigr)^{\beta}_{\beta} \bigr| = 0$ for $k > t$ and $\beta \in Q_{k,n}\{i\}$, for all $i, j$. Therefore $x^{ij}_{k} = 0$ for $t < k \le n$, for all $i, j$. Consequently,

$\det\bigl((\lambda I + GA)_{.i}(g_{.j})\bigr) = x^{ij}_{1}\lambda^{n-1} + x^{ij}_{2}\lambda^{n-2} + \cdots + x^{ij}_{t}\lambda^{n-t}$.  (2.14)
Substituting (2.12) and (2.14) into (2.10) yields

$A^{(2)}_{T,S} = \lim_{\lambda\to 0} \begin{pmatrix} \dfrac{x^{11}_{1}\lambda^{n-1} + \cdots + x^{11}_{t}\lambda^{n-t}}{\lambda^{n} + d_1\lambda^{n-1} + \cdots + d_t\lambda^{n-t}} & \cdots & \dfrac{x^{1m}_{1}\lambda^{n-1} + \cdots + x^{1m}_{t}\lambda^{n-t}}{\lambda^{n} + d_1\lambda^{n-1} + \cdots + d_t\lambda^{n-t}}\\ \vdots & \ddots & \vdots\\ \dfrac{x^{n1}_{1}\lambda^{n-1} + \cdots + x^{n1}_{t}\lambda^{n-t}}{\lambda^{n} + d_1\lambda^{n-1} + \cdots + d_t\lambda^{n-t}} & \cdots & \dfrac{x^{nm}_{1}\lambda^{n-1} + \cdots + x^{nm}_{t}\lambda^{n-t}}{\lambda^{n} + d_1\lambda^{n-1} + \cdots + d_t\lambda^{n-t}} \end{pmatrix} = \begin{pmatrix} \dfrac{x^{11}_{t}}{d_t} & \cdots & \dfrac{x^{1m}_{t}}{d_t}\\ \vdots & \ddots & \vdots\\ \dfrac{x^{n1}_{t}}{d_t} & \cdots & \dfrac{x^{nm}_{t}}{d_t} \end{pmatrix}$.  (2.15)

Substituting $x_{ij}$ for $x^{ij}_{t}$ in the above equation, we reach (2.5).
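To illustrate Theorem 2.3 concretely, the following sketch implements (2.4)-(2.5) directly from minors and compares the result with the limiting expression (2.1); the concrete $A$ and $G$ are hypothetical test matrices:

```python
import numpy as np
from itertools import combinations

def outer_inverse_det(A, G, t):
    """A^(2)_{T,S} via (2.4)-(2.5): entry (i, j) is x_ij / d_t(GA)."""
    n, m = G.shape
    GA = G @ A
    betas = list(combinations(range(n), t))
    d_t = sum(np.linalg.det(GA[np.ix_(b, b)]) for b in betas)
    X = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            M = GA.copy()
            M[:, i] = G[:, j]                        # (GA)_{.i}(g_{.j})
            X[i, j] = sum(np.linalg.det(M[np.ix_(b, b)])
                          for b in betas if i in b)  # beta in Q_{t,n}{i}
    return X / d_t

# Hypothetical data with rank G = t = 2.
A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
X = outer_inverse_det(A, G, t=2)
lam = 1e-9
X_lim = np.linalg.solve(G @ A + lam * np.eye(2), G)  # limiting expression (2.1)
assert np.allclose(X, X_lim, atol=1e-6)
```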
Remark 2.4. The proofs of Lemma 2.2 and Theorem 2.3 are based on the general techniques and methods obtained previously in [17].
Remark 2.5. (i) By using (2.5), we can obtain (2.17) in [12, Theorem 2.3]. In fact, $u = d_t(GA)$ and

$x_{ij} = \sum_{\beta \in Q_{t,n}\{i\}} \bigl| \bigl((GA)_{.i}(g_{.j})\bigr)^{\beta}_{\beta} \bigr| = \sum_{\beta \in Q_{t,n}\{i\}} \sum_{k} g_{kj} \dfrac{\partial \bigl| (GA)^{\beta}_{\beta} \bigr|}{\partial s_{ki}}$,  (2.16)

where $s_{kj} = (GA)_{kj}$, and, by the Binet-Cauchy formula, $\bigl| (GA)^{\beta}_{\beta} \bigr| = \sum_{\alpha \in Q_{t,m}} \bigl| G^{\beta}_{\alpha} \bigr|\, \bigl| A^{\alpha}_{\beta} \bigr|$. Note that $\partial |A^{\alpha}_{\beta}| / \partial a_{ij} = 0$ if $i \notin \alpha$ or $j \notin \beta$. In addition, using the symbols in [13], we can rewrite (2.5) as [13, equation (13)] over $\mathbb{C}$.
(ii) This method is especially efficient when $GA$ or $AG$ is given, compared with [12, Theorem 2].
Observing the particular case of Theorem 2.3 in which $G = (g_{ij}) = N^{-1}A^{*}M$, where $M$ and $N$ are Hermitian positive definite matrices, we obtain the following corollary, in which $g_{.j}$ and $g_{i.}$ denote the $j$th column and the $i$th row of $G$.

Corollary 2.6. Let $A \in \mathbb{C}^{m\times n}_r$ and $G = N^{-1}A^{*}M$, where $M$ and $N$ are Hermitian positive definite matrices of orders $m$ and $n$, respectively. Then

$A^{\dagger}_{MN} = \left( \dfrac{x_{ij}}{d_r(GA)} \right)_{n\times m}$,  (2.17)

where

$x_{ij} = \sum_{\beta \in Q_{r,n}\{i\}} \bigl| \bigl((GA)_{.i}(g_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $\qquad d_r(GA) = \sum_{\beta \in Q_{r,n}} \bigl| (GA)^{\beta}_{\beta} \bigr|$,  (2.18)

or

$A^{\dagger}_{MN} = \left( \dfrac{y_{ij}}{d_r(AG)} \right)_{n\times m}$,  (2.19)

where

$y_{ij} = \sum_{\alpha \in Q_{r,m}\{j\}} \bigl| \bigl((AG)_{j.}(g_{i.})\bigr)^{\alpha}_{\alpha} \bigr|$, $\qquad d_r(AG) = \sum_{\alpha \in Q_{r,m}} \bigl| (AG)^{\alpha}_{\alpha} \bigr|$.  (2.20)
If $M$ and $N$ are identity matrices, then we obtain the following result.

Corollary 2.7 (see [17, Theorem 2.2]). The Moore-Penrose inverse $A^{\dagger}$ of $A = (a_{ij}) \in \mathbb{C}^{m\times n}_r$ can be represented as follows:

$A^{\dagger} = \left( \dfrac{x_{ij}}{d_r(A^{*}A)} \right)_{n\times m}$,  (2.21)

where

$x_{ij} = \sum_{\beta \in Q_{r,n}\{i\}} \bigl| \bigl((A^{*}A)_{.i}(a^{*}_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $\qquad d_r(A^{*}A) = \sum_{\beta \in Q_{r,n}} \bigl| (A^{*}A)^{\beta}_{\beta} \bigr|$,  (2.22)

or

$A^{\dagger} = \left( \dfrac{y_{ij}}{d_r(AA^{*})} \right)_{n\times m}$,  (2.23)

where

$y_{ij} = \sum_{\alpha \in Q_{r,m}\{j\}} \bigl| \bigl((AA^{*})_{j.}(a^{*}_{i.})\bigr)^{\alpha}_{\alpha} \bigr|$, $\qquad d_r(AA^{*}) = \sum_{\alpha \in Q_{r,m}} \bigl| (AA^{*})^{\alpha}_{\alpha} \bigr|$.  (2.24)
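Corollary 2.7 can be cross-checked against a standard pseudoinverse routine. This sketch evaluates (2.21)-(2.22), that is, Theorem 2.3 with $G = A^{*}$, and compares the result with numpy's `pinv`; the test matrix is hypothetical:

```python
import numpy as np
from itertools import combinations

def mp_inverse_det(A):
    """Moore-Penrose inverse via (2.21)-(2.22), i.e., minors of A^*A with G = A^*."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    G = A.conj().T
    GA = G @ A                                       # A^* A, n x n
    betas = list(combinations(range(n), r))
    d_r = sum(np.linalg.det(GA[np.ix_(b, b)]) for b in betas)
    X = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            M = GA.copy()
            M[:, i] = G[:, j]                        # (A^*A)_{.i}(a^*_{.j})
            X[i, j] = sum(np.linalg.det(M[np.ix_(b, b)])
                          for b in betas if i in b)
    return X / d_r

A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])   # hypothetical, rank 2
assert np.allclose(mp_inverse_det(A), np.linalg.pinv(A))
```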
Note that $A_{d,W} = (WAW)^{(2)}_{R((AW)^{k}A),\, N((AW)^{k}A)}$. Therefore, when $G = (AW)^{k}A$ in Theorem 2.3, we have the following corollary.

Corollary 2.8. Let $A \in \mathbb{C}^{m\times n}$, $W \in \mathbb{C}^{n\times m}$, and $k = \max\{\operatorname{Ind}(AW), \operatorname{Ind}(WA)\}$. If $\operatorname{rank}(AW)^{k} = t$, $\operatorname{rank}(WA)^{k} = r$, and $(AW)^{k}A = (c_{ij})_{m\times n}$, then

$A_{d,W} = \left( \dfrac{x_{ij}}{d_t\bigl((AW)^{k+2}\bigr)} \right)_{m\times n}$,  (2.25)

where

$x_{ij} = \sum_{\beta \in Q_{t,m}\{i\}} \bigl| \bigl(((AW)^{k+2})_{.i}(c_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $\qquad d_t\bigl((AW)^{k+2}\bigr) = \sum_{\beta \in Q_{t,m}} \bigl| ((AW)^{k+2})^{\beta}_{\beta} \bigr|$,  (2.26)

or

$A_{d,W} = \left( \dfrac{y_{ij}}{d_r\bigl((WA)^{k+2}\bigr)} \right)_{m\times n}$,  (2.27)

where

$y_{ij} = \sum_{\alpha \in Q_{r,n}\{j\}} \bigl| \bigl(((WA)^{k+2})_{j.}(c_{i.})\bigr)^{\alpha}_{\alpha} \bigr|$, $\qquad d_r\bigl((WA)^{k+2}\bigr) = \sum_{\alpha \in Q_{r,n}} \bigl| ((WA)^{k+2})^{\alpha}_{\alpha} \bigr|$.  (2.28)
When $G = A^{k}$ with $k = \operatorname{Ind}(A)$ in Theorem 2.3, we have the following corollary.

Corollary 2.9 (see [17, Theorem 3.3]). Let $A \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank} A^{k} = r$, and write $A^{k} = (a^{(k)}_{ij})_{n\times n}$. Then

$A_{d} = \left( \dfrac{x_{ij}}{d_r(A^{k+1})} \right)_{n\times n}$,  (2.29)

where

$x_{ij} = \sum_{\beta \in Q_{r,n}\{i\}} \bigl| \bigl((A^{k+1})_{.i}(a^{(k)}_{.j})\bigr)^{\beta}_{\beta} \bigr|$, $\qquad d_r(A^{k+1}) = \sum_{\beta \in Q_{r,n}} \bigl| (A^{k+1})^{\beta}_{\beta} \bigr|$.  (2.30)
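Corollary 2.9 admits a direct numerical illustration. The sketch below computes the index $k$, evaluates (2.29)-(2.30) with $G = A^{k}$, and checks the defining Drazin relations on a hypothetical matrix of index 1:

```python
import numpy as np
from itertools import combinations

def drazin_det(A):
    """Drazin inverse via (2.29)-(2.30), using G = A^k with k = Ind(A)."""
    n = A.shape[0]
    k, Ak = 1, A.copy()                 # find the smallest k with
    while np.linalg.matrix_rank(Ak) != np.linalg.matrix_rank(Ak @ A):
        k, Ak = k + 1, Ak @ A           # rank(A^k) == rank(A^{k+1})
    r = np.linalg.matrix_rank(Ak)
    Ak1 = Ak @ A                        # A^{k+1}
    betas = list(combinations(range(n), r))
    d_r = sum(np.linalg.det(Ak1[np.ix_(b, b)]) for b in betas)
    X = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M = Ak1.copy()
            M[:, i] = Ak[:, j]          # (A^{k+1})_{.i}(a^(k)_{.j})
            X[i, j] = sum(np.linalg.det(M[np.ix_(b, b)])
                          for b in betas if i in b)
    return X / d_r

A = np.array([[2.0, 0.0], [0.0, 0.0]])  # hypothetical, Ind(A) = 1
Ad = drazin_det(A)
assert np.allclose(A @ Ad, Ad @ A) and np.allclose(Ad @ A @ Ad, Ad)
```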
Finally, we turn our attention to the two projectors $A^{(2)}_{T,S}A$ and $AA^{(2)}_{T,S}$. The limiting expressions for $A^{(2)}_{T,S}$ in (2.1) bring us the following:

$A^{(2)}_{T,S}A = \lim_{\lambda\to 0} (GA+\lambda I)^{-1}GA$, $\qquad AA^{(2)}_{T,S} = \lim_{\lambda\to 0} AG(AG+\lambda I)^{-1}$.  (2.31)

Corollary 2.10. Let $A$, $T$, $S$, and $G$ be the same as in Theorem 2.1. Write $GA = (s_{ij})$ and $AG = (h_{ij})$. Suppose that $A^{(2)}_{T,S}$ exists. Then $A^{(2)}_{T,S}A$ and $AA^{(2)}_{T,S}$ can be represented as follows:

$A^{(2)}_{T,S}A = \left( \dfrac{x_{ij}}{d_t(GA)} \right)_{n\times n}$,  (2.32)

where $x_{ij} = \sum_{\beta \in Q_{t,n}\{i\}} \bigl| \bigl((GA)_{.i}(s_{.j})\bigr)^{\beta}_{\beta} \bigr|$ and $d_t(GA) = \sum_{\beta \in Q_{t,n}} \bigl| (GA)^{\beta}_{\beta} \bigr|$, and

$AA^{(2)}_{T,S} = \left( \dfrac{y_{ij}}{d_t(AG)} \right)_{m\times m}$,  (2.33)

where

$y_{ij} = \sum_{\alpha \in Q_{t,m}\{j\}} \bigl| \bigl((AG)_{j.}(h_{i.})\bigr)^{\alpha}_{\alpha} \bigr|$, $\qquad d_t(AG) = \sum_{\alpha \in Q_{t,m}} \bigl| (AG)^{\alpha}_{\alpha} \bigr|$.  (2.34)
3. Cramer Rules for the Solution of the Restricted Matrix Equation

The restricted matrix equation problem is mainly to find the solution of a matrix equation, or of a system of matrix equations, within a set of matrices satisfying some constraint conditions. Such problems play an important role in structural design, system identification, principal component analysis, exploration, remote sensing, biology, electricity, molecular spectroscopy, automatic control theory, vibration theory, finite elements, circuit theory, linear optimal control, and so on. For example, the finite-element static model correction problem can be transformed into solving for a constrained solution, and its best approximation, of the matrix equation $AX = B$. The undamped finite-element dynamic model correction problem can be reduced to solving for a constrained solution, and its best approximation, of the matrix equation $A^{T}XA = B$. These applications have motivated the gradual development of the theory of the solution of restricted matrix equations in recent years (see [18-27]).
In this section, we consider the restricted matrix equation

$AXB = D$, $\qquad R(X) \subset T$, $\qquad N(X) \supset \widetilde{S}$,  (3.1)
where $A \in \mathbb{C}^{m\times n}_r$, $B \in \mathbb{C}^{p\times q}_{\tilde r}$, $D \in \mathbb{C}^{m\times q}$, $T \subset \mathbb{C}^{n}$, $S \subset \mathbb{C}^{m}$, $\widetilde{T} \subset \mathbb{C}^{q}$, and $\widetilde{S} \subset \mathbb{C}^{p}$ satisfy

$\dim T = \dim S^{\perp} = t \le r$, $\qquad \dim \widetilde{T} = \dim \widetilde{S}^{\perp} = \tilde t \le \tilde r$.  (3.2)

Assume that there exist matrices $G \in \mathbb{C}^{n\times m}$ and $\widetilde{G} \in \mathbb{C}^{q\times p}$ satisfying

$R(G) = T$, $\quad N(G) = S$, $\quad R(\widetilde{G}) = \widetilde{T}$, $\quad N(\widetilde{G}) = \widetilde{S}$.  (3.3)

If $A^{(2)}_{T,S}$ and $B^{(2)}_{\widetilde{T},\widetilde{S}}$ exist and $D \in R(AG, \widetilde{G}B)$, then the restricted matrix equation (3.1) has the unique solution

$X = A^{(2)}_{T,S}\, D\, B^{(2)}_{\widetilde{T},\widetilde{S}}$  (3.4)

(see [2, Theorem 3.3.3] for the proof).
In particular, when $D$ is a vector $b$ and $B = \widetilde{G} = I_1$, the restricted matrix equation (3.1) becomes the restricted linear equation

$Ax = b$, $\qquad x \in T$.  (3.5)

If $b \in AR(G)$, then $x = A^{(2)}_{T,S}\, b$ is the unique solution of the restricted linear equation (3.5) (see also [10, Theorem 2.1]).
Theorem 3.1. Given $A$, $B$, $D = (d_{ij})$, $G = (g_{ij})$, $\widetilde{G} = (\tilde g_{ij})$, $T$, $S$, $\widetilde{T}$, and $\widetilde{S}$ as above. If $A^{(2)}_{T,S}$ and $B^{(2)}_{\widetilde{T},\widetilde{S}}$ exist and $D \in R(AG, \widetilde{G}B)$, then $X = A^{(2)}_{T,S}\, D\, B^{(2)}_{\widetilde{T},\widetilde{S}}$ is the unique solution of the restricted matrix equation (3.1), and it can be represented as

$x_{ij} = \dfrac{\sum_{k=1}^{m} \sum_{\beta \in Q_{t,n}\{i\}} \sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((GA)_{.i}(g_{.k})\bigr)^{\beta}_{\beta} \bigr|\, \bigl| \bigl((B\widetilde{G})_{j.}(f_{k.})\bigr)^{\alpha}_{\alpha} \bigr|}{d_t(GA)\, d_{\tilde t}(B\widetilde{G})}$,  (3.6)

or

$x_{ij} = \dfrac{\sum_{k=1}^{q} \sum_{\beta \in Q_{t,n}\{i\}} \sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((GA)_{.i}(f_{.k})\bigr)^{\beta}_{\beta} \bigr|\, \bigl| \bigl((B\widetilde{G})_{j.}(\tilde g_{k.})\bigr)^{\alpha}_{\alpha} \bigr|}{d_t(GA)\, d_{\tilde t}(B\widetilde{G})}$,  (3.7)

where $f_{k.} = d_{k.}\widetilde{G}$ and $f_{.k} = G d_{.k}$, for $i = 1,\ldots,n$ and $j = 1,\ldots,p$.
Proof. By the argument above, $X = A^{(2)}_{T,S}\, D\, B^{(2)}_{\widetilde{T},\widetilde{S}}$ is the unique solution of the restricted matrix equation (3.1). Setting $Y = D B^{(2)}_{\widetilde{T},\widetilde{S}}$ and using (2.7), we get

$y_{kj} = \sum_{h=1}^{q} d_{kh} \bigl(B^{(2)}_{\widetilde{T},\widetilde{S}}\bigr)_{hj} = \sum_{h=1}^{q} d_{kh}\, \dfrac{\sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((B\widetilde{G})_{j.}(\tilde g_{h.})\bigr)^{\alpha}_{\alpha} \bigr|}{d_{\tilde t}(B\widetilde{G})} = \dfrac{\sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((B\widetilde{G})_{j.}\bigl(\sum_{h=1}^{q} d_{kh}\tilde g_{h.}\bigr)\bigr)^{\alpha}_{\alpha} \bigr|}{d_{\tilde t}(B\widetilde{G})} = \dfrac{\sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((B\widetilde{G})_{j.}(f_{k.})\bigr)^{\alpha}_{\alpha} \bigr|}{d_{\tilde t}(B\widetilde{G})}$,  (3.8)

where $f_{k.} = d_{k.}\widetilde{G}$. Since $X = A^{(2)}_{T,S}\, Y$, by (2.5),

$x_{ij} = \sum_{k=1}^{m} \bigl(A^{(2)}_{T,S}\bigr)_{ik}\, y_{kj} = \dfrac{\sum_{k=1}^{m} \sum_{\beta \in Q_{t,n}\{i\}} \sum_{\alpha \in Q_{\tilde t,p}\{j\}} \bigl| \bigl((GA)_{.i}(g_{.k})\bigr)^{\beta}_{\beta} \bigr|\, \bigl| \bigl((B\widetilde{G})_{j.}(f_{k.})\bigr)^{\alpha}_{\alpha} \bigr|}{d_t(GA)\, d_{\tilde t}(B\widetilde{G})}$.  (3.9)

Hence, we have (3.6). We can obtain (3.7) in the same way.
I1 in the above theorem, we have the
In particular, when D is a vector b and B G
following result from 3.7.
2
Theorem 3.2. Given A, G, T , and S as above. If b ∈ ARG, then x AT,S b is the unique solution
of the restricted linear equation Ax b, x ∈ T , and it can be represented as
xi β∈Qt,n {i} β GA.i f β dt GA
,
j 1, . . . , n,
where f Gb.
Remark 3.3. Using the symbols in 13, we can rewrite 3.10 as 13, equation 27.
3.10
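The Cramer rule (3.10) for the restricted linear equation can be sketched as follows; the matrices $A$ and $G$ and the right-hand side $b \in AR(G)$ are hypothetical:

```python
import numpy as np
from itertools import combinations

def restricted_cramer(A, G, b, t):
    """Solve Ax = b with x in T = R(G) via the Cramer rule (3.10)."""
    n = G.shape[0]
    GA = G @ A
    f = G @ b                                        # f = Gb
    betas = list(combinations(range(n), t))
    d_t = sum(np.linalg.det(GA[np.ix_(bb, bb)]) for bb in betas)
    x = np.zeros(n)
    for i in range(n):
        M = GA.copy()
        M[:, i] = f                                  # (GA)_{.i}(f)
        x[i] = sum(np.linalg.det(M[np.ix_(bb, bb)])
                   for bb in betas if i in bb) / d_t
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])   # hypothetical 3x2, rank 2
G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])     # T = R(G), t = 2
b = A @ np.array([3.0, 4.0])                         # guarantees b in A R(G)
x = restricted_cramer(A, G, b, t=2)
assert np.allclose(A @ x, b)
```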
4. Example

Let

$A = \begin{pmatrix} 1 & 2 & 2 & 2\\ 0 & 2 & 1 & 1\\ 0 & 0 & 4 & 2\\ 0 & 0 & 2 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix}$, $\quad B = \begin{pmatrix} 2 & 1\\ 1 & 0\\ 0 & 1 \end{pmatrix}$, $\quad D = \begin{pmatrix} 0 & 1\\ -1 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0 \end{pmatrix}$, $\quad G = \begin{pmatrix} 1 & -1 & 2 & 0 & 0\\ 0 & 2 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$, $\quad \widetilde{G} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 2 & 0 \end{pmatrix}$.  (4.1)

Obviously, $\operatorname{rank} A = 3$, $\dim AT = \dim T = 2$, and

$T = R(G) \subset \mathbb{C}^{4}$, $\quad S = N(G) \subset \mathbb{C}^{5}$, $\quad \widetilde{T} = R(\widetilde{G}) \subset \mathbb{C}^{2}$, $\quad \widetilde{S} = N(\widetilde{G}) \subset \mathbb{C}^{3}$.  (4.2)

It is easy to verify that $AT \oplus S = \mathbb{C}^{5}$ and $B\widetilde{T} \oplus \widetilde{S} = \mathbb{C}^{3}$. Thus $A^{(2)}_{T,S}$ and $B^{(2)}_{\widetilde{T},\widetilde{S}}$ exist by Lemma 1.1.
Now consider the restricted matrix equation

$AXB = D$, $\qquad R(X) \subset T$, $\qquad N(X) \supset \widetilde{S}$.  (4.3)

Clearly,

$AG = \begin{pmatrix} 1 & 3 & 4 & 0 & 0\\ 0 & 4 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$, $\qquad \widetilde{G}B = \begin{pmatrix} 2 & 1\\ 2 & 0 \end{pmatrix}$,  (4.4)

and it is easy to verify that $R(D) \subset R(AG)$ and $N(D) \supset N(\widetilde{G}B)$ hold. Note that $R(D) \subset R(AG)$ and $N(D) \supset N(\widetilde{G}B)$ if and only if $D \in R(AG, \widetilde{G}B)$. So, by Theorem 3.1, the unique solution of (4.3) exists.
Table 1

$y_{ik}$    $i = 1$    $i = 2$    $i = 3$    $i = 4$
$k = 1$        4         -2          0          0
$k = 2$        4          0          0          0

Table 2

$z_{kj}$    $j = 1$    $j = 2$    $j = 3$
$k = 1$        0         -2          0
$k = 2$       -2          4          0
Computing

$GA = \begin{pmatrix} 1 & 0 & 9 & 5\\ 0 & 4 & 6 & 4\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}$, $\quad B\widetilde{G} = \begin{pmatrix} 2 & 2 & 0\\ 1 & 0 & 0\\ 0 & 2 & 0 \end{pmatrix}$, $\quad f = GD = \begin{pmatrix} 1 & 1\\ -2 & 0\\ 0 & 0\\ 0 & 0 \end{pmatrix}$,
$d_2(GA) = \sum_{\beta \in Q_{2,4}} \bigl| (GA)^{\beta}_{\beta} \bigr| = 4$, $\qquad d_2(B\widetilde{G}) = \sum_{\alpha \in Q_{2,3}} \bigl| (B\widetilde{G})^{\alpha}_{\alpha} \bigr| = -2$,  (4.5)

and setting $y_{ik} = \sum_{\beta \in Q_{2,4}\{i\}} \bigl| \bigl((GA)_{.i}(f_{.k})\bigr)^{\beta}_{\beta} \bigr|$, we have Table 1. Similarly, setting $z_{kj} = \sum_{\alpha \in Q_{2,3}\{j\}} \bigl| \bigl((B\widetilde{G})_{j.}(\tilde g_{k.})\bigr)^{\alpha}_{\alpha} \bigr|$, we have Table 2.
So, by (3.7), we have
$X = (x_{ij}) = \begin{pmatrix} 1 & -1 & 0\\ 0 & -\frac{1}{2} & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}$.  (4.6)
Acknowledgments

The authors would like to thank the referees for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant 11061005), the Ministry of Education Science and Technology Key Project (Grant 210164), and the Open Fund of the Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis (Grant HCIC201103).
References

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer-Verlag, New York, NY, USA, 2nd edition, 2003.
[2] G. Wang, Y. Wei, and S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, China, 2004.
[3] M. P. Drazin, "Pseudo-inverses in associative rings and semigroups," The American Mathematical Monthly, vol. 65, pp. 506-514, 1958.
[4] P. Patricio and R. Puystjens, "Drazin-Moore-Penrose invertibility in rings," Linear Algebra and its Applications, vol. 389, pp. 159-173, 2004.
[5] D. Huang, "Group inverses and Drazin inverses over Banach algebras," Integral Equations and Operator Theory, vol. 17, no. 1, pp. 54-67, 1993.
[6] R. Harte and M. Mbekhta, "On generalized inverses in C*-algebras," Studia Mathematica, vol. 103, no. 1, pp. 71-77, 1992.
[7] P. S. Stanimirović and D. S. Cvetković-Ilić, "Successive matrix squaring algorithm for computing outer inverses," Applied Mathematics and Computation, vol. 203, no. 1, pp. 19-29, 2008.
[8] M. Miladinović, S. Miljković, and P. Stanimirović, "Modified SMS method for computing outer inverses of Toeplitz matrices," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131-3143, 2011.
[9] R. E. Cline and T. N. E. Greville, "A Drazin inverse for rectangular matrices," Linear Algebra and its Applications, vol. 29, pp. 54-62, 1980.
[10] Y. L. Chen, "A Cramer rule for solution of the general restricted linear equation," Linear and Multilinear Algebra, vol. 34, no. 2, pp. 177-186, 1993.
[11] X. Mary, "On generalized inverses and Green's relations," Linear Algebra and its Applications, vol. 434, no. 8, pp. 1836-1844, 2011.
[12] Y. Yu and G. Wang, "On the generalized inverse $A^{(2)}_{T,S}$ over integral domains," The Australian Journal of Mathematical Analysis and Applications, vol. 4, no. 1, article 16, pp. 1-20, 2007.
[13] Y. Yu and Y. Wei, "Determinantal representation of the generalized inverse $A^{(2)}_{T,S}$ over integral domains and its applications," Linear and Multilinear Algebra, vol. 57, no. 6, pp. 547-559, 2009.
[14] Y. Wei, "A characterization and representation of the generalized inverse $A^{(2)}_{T,S}$ and its applications," Linear Algebra and its Applications, vol. 280, no. 2-3, pp. 87-96, 1998.
[15] A. Ben-Israel, "On matrices of index zero or one," Society for Industrial and Applied Mathematics Journal on Applied Mathematics, vol. 17, pp. 1118-1121, 1969.
[16] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York, NY, USA, 1985.
[17] I. I. Kyrchei, "Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules," Linear and Multilinear Algebra, vol. 56, no. 4, pp. 453-469, 2008.
[18] M. Baruch, "Optimal correction of mass and stiffness matrices using measured modes," American Institute of Aeronautics and Astronautics Journal, vol. 20, no. 11, pp. 1623-1626, 1982.
[19] A. Bjerhammar, "Rectangular reciprocal matrices with special reference to geodetic calculations," Kungliga Tekniska Högskolans Handlingar, Stockholm, vol. 45, pp. 1-86, 1951.
[20] M. Bixon and J. Jortner, "Intramolecular radiationless transitions," The Journal of Chemical Physics, vol. 48, pp. 715-726, 1968.
[21] J. W. Gadzuk, "Localized vibrational modes in Fermi liquids. General theory," Physical Review B, vol. 24, no. 4, pp. 1651-1663, 1981.
[22] L. Datta and S. D. Morgera, "On the reducibility of centrosymmetric matrices: applications in engineering problems," Circuits, Systems, and Signal Processing, vol. 8, no. 1, pp. 71-96, 1989.
[23] Y. M. Ram and G. G. L. Gladwell, "Constructing a finite element model of a vibratory rod from eigendata," Journal of Sound and Vibration, vol. 169, no. 2, pp. 229-237, 1994.
[24] A. Jameson and E. Kreindler, "Inverse problem of linear optimal control," Society for Industrial and Applied Mathematics Journal on Control and Optimization, vol. 11, pp. 1-19, 1973.
[25] G. W. Stagg and A. H. El-Abiad, Computer Methods in Power System Analysis, McGraw-Hill, New York, NY, USA, 1968.
[26] H. C. Chen, "The SAS domain decomposition method for structural analysis," CSRD Technical Report 754, Center for Supercomputing Research and Development, University of Illinois, Urbana, Ill, USA, 1988.
[27] F. S. Wei, "Analytical dynamic model improvement using vibration test data," American Institute of Aeronautics and Astronautics Journal, vol. 28, no. 1, pp. 175-177, 1990.