Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 850986, 7 pages
http://dx.doi.org/10.1155/2013/850986
Research Article
Modified Preconditioned GAOR Methods for
Systems of Linear Equations
Xue-Feng Zhang, Qun-Fa Cui, and Shi-Liang Wu
School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China
Correspondence should be addressed to Xue-Feng Zhang; [email protected]
Received 30 January 2013; Accepted 13 May 2013
Academic Editor: Giuseppe Marino
Copyright © 2013 Xue-Feng Zhang et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system arising from the generalized least squares problem. Convergence and comparison results are obtained. The comparison results show that the convergence rate of the preconditioned generalized AOR (PGAOR) methods is better than that of the original GAOR method. Finally, some numerical results are reported to confirm the validity of the proposed methods.
1. Introduction
Consider the generalized least squares problem
\[
\min_{x\in\mathbb{R}^n}\,(Ax-b)^{T}W^{-1}(Ax-b), \tag{1}
\]
where $x\in\mathbb{R}^n$, $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^n$, and the variance-covariance matrix $W\in\mathbb{R}^{n\times n}$ is a known symmetric positive definite matrix. This problem has many scientific applications; one of them is parameter estimation in mathematical models [1, 2].
To solve this problem, one usually solves a linear system of the equivalent form
\[
Fy = f, \tag{2}
\]
where
\[
F = \begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix}, \tag{3}
\]
with $B\in\mathbb{R}^{p\times p}$, $C\in\mathbb{R}^{q\times q}$, $H\in\mathbb{R}^{p\times q}$, $K\in\mathbb{R}^{q\times p}$, and $p+q=n$.
To obtain approximate solutions of the linear system (2), many iterative methods, such as Jacobi, Gauss-Seidel (GS), successive overrelaxation (SOR), and accelerated overrelaxation (AOR), have been studied by many authors [3-8]. These iterative methods give good results, but they have a serious drawback: they require the inverses of $I-B$ and $I-C$ in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method for linear systems with diagonally dominant coefficient matrices.
Without loss of generality, we assume that $F=I-L-U$, where $I$ is the identity matrix and $L$ and $U$ are given by
\[
I=\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},\qquad
L=\begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix},\qquad
U=\begin{pmatrix} B & -H \\ 0 & C \end{pmatrix}. \tag{4}
\]
The GAOR method [10, 11] can be defined as follows:
\[
y^{(k+1)}=T_{\gamma\omega}\,y^{(k)}+\omega g,\qquad k=0,1,2,\ldots, \tag{5}
\]
where
\[
T_{\gamma\omega}=\begin{pmatrix} I & 0 \\ \gamma K & I \end{pmatrix}^{-1}
\left((1-\omega)I+(\omega-\gamma)\begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix}
+\omega\begin{pmatrix} B & -H \\ 0 & C \end{pmatrix}\right),\qquad
g=\begin{pmatrix} I & 0 \\ -\gamma K & I \end{pmatrix} f. \tag{6}
\]
Here, $\omega$ and $\gamma$ are real parameters with $\omega\neq 0$. The iteration matrix can be written explicitly as
\[
T_{\gamma\omega}=\begin{pmatrix}
(1-\omega)I+\omega B & -\omega H \\
\omega(\gamma-1)K-\gamma\omega K B & (1-\omega)I+\omega C+\gamma\omega K H
\end{pmatrix}. \tag{7}
\]
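The GAOR iteration (5)-(7) is straightforward to realize once the blocks $B$, $C$, $H$, $K$ and the parameters $\omega$, $\gamma$ are given. The following Python/NumPy sketch is not part of the original article and the function names are ours; it assembles the iteration matrix of (7) and runs the iteration (5):

```python
import numpy as np

def gaor_iteration_matrix(B, C, H, K, omega, gamma):
    """Assemble the GAOR iteration matrix of (7)."""
    p, q = B.shape[0], C.shape[0]
    Ip, Iq = np.eye(p), np.eye(q)
    top = np.hstack([(1 - omega) * Ip + omega * B, -omega * H])
    bottom = np.hstack([omega * (gamma - 1) * K - gamma * omega * K @ B,
                        (1 - omega) * Iq + omega * C + gamma * omega * K @ H])
    return np.vstack([top, bottom])

def gaor_solve(B, C, H, K, f, omega, gamma, tol=1e-10, maxit=10_000):
    """Run the GAOR iteration (5) for F y = f, with F as in (3)."""
    p, q = B.shape[0], C.shape[0]
    T = gaor_iteration_matrix(B, C, H, K, omega, gamma)
    # g = (I, 0; -gamma*K, I) f, as in (6)
    g = f.astype(float).copy()
    g[p:] -= gamma * K @ f[:p]
    y = np.zeros(p + q)
    for _ in range(maxit):
        y_new = T @ y + omega * g
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return y_new
        y = y_new
    return y
```

The iteration converges exactly when $\rho(T_{\gamma\omega})<1$, which is what the comparison results below are concerned with.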
To improve the convergence rate of the GAOR iterative method, a preconditioner can be applied. We transform the original linear system (2) into the preconditioned linear system
\[
PFy=Pf, \tag{8}
\]
where $P$ is the preconditioner. $PF$ can be expressed as
\[
PF=\begin{pmatrix} I-B^{*} & H^{*} \\ K^{*} & I-C^{*} \end{pmatrix}. \tag{9}
\]
Meanwhile, the PGAOR method for solving the preconditioned linear system (8) is defined by
\[
y^{(k+1)}=T^{*}_{\gamma\omega}\,y^{(k)}+\omega g^{*},\qquad k=0,1,2,\ldots, \tag{10}
\]
where
\[
T^{*}_{\gamma\omega}=\begin{pmatrix}
(1-\omega)I+\omega B^{*} & -\omega H^{*} \\
\omega(\gamma-1)K^{*}-\gamma\omega K^{*}B^{*} & (1-\omega)I+\omega C^{*}+\gamma\omega K^{*}H^{*}
\end{pmatrix}, \tag{11}
\]
\[
g^{*}=\begin{pmatrix} I & 0 \\ -\gamma K^{*} & I \end{pmatrix} Pf. \tag{12}
\]
In this paper, we propose three new types of preconditioners and study the convergence rate of the corresponding preconditioned GAOR methods for solving the linear system (2). The paper is organized as follows. In Section 2, some notation, definitions, and preliminary results are presented. In Section 3, the three new preconditioners are proposed and the resulting PGAOR methods are compared with the original GAOR method. Lastly, a numerical example is provided in Section 4 to confirm the theoretical results.

2. Preliminaries

For a vector $x\in\mathbb{R}^n$, $x\geq 0$ ($x>0$) denotes that all components of $x$ are nonnegative (positive). For two vectors $x,y\in\mathbb{R}^n$, $x\geq y$ ($x>y$) means that $x-y\geq 0$ ($x-y>0$). These definitions carry over immediately to matrices. A matrix $A$ is said to be irreducible if the directed graph of $A$ is strongly connected. $\rho(A)$ denotes the spectral radius of $A$. Some useful results are as follows.

Lemma 1 (see [7]). Let $A\geq 0$ be an irreducible matrix. Then,
(a) $A$ has a positive eigenvalue equal to its spectral radius;
(b) $A$ has an eigenvector $x>0$ corresponding to $\rho(A)$;
(c) $\rho(A)$ is a simple eigenvalue of $A$.

Lemma 2 (see [12]). Let $A$ be a nonnegative matrix. Then,
(i) if $\alpha x\leq Ax$ for some nonnegative vector $x$, $x\neq 0$, then $\alpha\leq\rho(A)$;
(ii) if $Ax\leq\beta x$ for some positive vector $x$, then $\rho(A)\leq\beta$. Moreover, if $A$ is irreducible and if $0\neq\alpha x\leq Ax\leq\beta x$ for some nonnegative vector $x$, then $\alpha\leq\rho(A)\leq\beta$ and $x$ is a positive vector.

3. Preconditioned GAOR Methods

To solve the linear system (2) with the coefficient matrix $F$ in (3), we consider the following preconditioners:
\[
P_i=\begin{pmatrix} I & 0 \\ S_i & I \end{pmatrix},\qquad i=1,2,3, \tag{13}
\]
where, with $k_{ij}$ denoting the $(i,j)$ entry of $K$ and blank entries equal to zero,
\[
S_1=\begin{pmatrix}
0 & 0 & \cdots & -k_{1p}\\
0 & 0 & \cdots & -k_{2p}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & -k_{qp}
\end{pmatrix},\qquad
S_2=\begin{pmatrix}
0 & -k_{12} & & & \\
-k_{21} & 0 & -k_{23} & & \\
 & -k_{32} & \ddots & \ddots & \\
 & & \ddots & \ddots & -k_{q-1,p}\\
 & & & -k_{q,p-1} & 0
\end{pmatrix},
\]
\[
S_3=\begin{pmatrix}
0 & -k_{12} & & & \\
 & 0 & -k_{23} & & \\
 & & \ddots & \ddots & \\
 & & & 0 & -k_{q-1,p}\\
-\dfrac{k_{q1}}{\alpha} & & & & 0
\end{pmatrix}
\qquad (\alpha>0).
\]
The preconditioned coefficient matrix $P_iF$ can be expressed as
\[
P_iF=\begin{pmatrix} I-B & H \\ K+S_i(I-B) & S_iH+I-C \end{pmatrix},\qquad i=1,2,3. \tag{14}
\]
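As a concrete illustration (a minimal sketch of our own, not taken from the article; the helper names are ours), the three preconditioners in (13) and the blocks of $P_iF$ in (14) can be formed directly from the block $K$:

```python
import numpy as np

def build_S(K, which, alpha=2.0):
    """Return S_1, S_2, or S_3 of (13), built from the (q x p) block K."""
    q, p = K.shape
    S = np.zeros_like(K)
    if which == 1:                       # negated last column of K
        S[:, -1] = -K[:, -1]
    elif which == 2:                     # negated super- and sub-diagonal entries of K
        for i in range(q):
            if i + 1 < p:
                S[i, i + 1] = -K[i, i + 1]
            if 0 <= i - 1 < p:
                S[i, i - 1] = -K[i, i - 1]
    elif which == 3:                     # negated superdiagonal of K plus the scaled corner entry
        for i in range(q - 1):
            if i + 1 < p:
                S[i, i + 1] = -K[i, i + 1]
        S[q - 1, 0] = -K[q - 1, 0] / alpha
    return S

def preconditioned_blocks(B, C, H, K, which, alpha=2.0):
    """Blocks of P_i F as in (14): (I-B, H; K + S_i(I-B), S_i H + I-C)."""
    p, q = B.shape[0], C.shape[0]
    S = build_S(K, which, alpha)
    return (np.eye(p) - B, H,
            K + S @ (np.eye(p) - B), S @ H + np.eye(q) - C)
```

Each $S_i$ has only $O(q)$ nonzero entries, so forming $P_iF$ modifies only the second block row of $F$.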
A direct computation gives the entries of the block $K+S_i(I-B)$ in (14). For the first preconditioner,
\[
\bigl(K+S_1(I-B)\bigr)_{ij}=
\begin{cases}
k_{ij}+k_{ip}\,b_{pj}, & j=1,\ldots,p-1,\\
k_{ip}\,b_{pp}, & j=p,
\end{cases}
\qquad i=1,\ldots,q, \tag{15}
\]
and the blocks $K+S_2(I-B)$ and $K+S_3(I-B)$ have analogous entrywise expressions; for instance, $\bigl(K+S_2(I-B)\bigr)_{21}=k_{21}b_{11}+k_{23}b_{31}$ and $\bigl(K+S_3(I-B)\bigr)_{q1}=k_{q1}\bigl(1-\tfrac{1-b_{11}}{\alpha}\bigr)$.
Based on the discussion above, $P_iF$ can be split as
\[
P_iF=I-L_i-U_i,\qquad i=1,2,3, \tag{16}
\]
where
\[
I=\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},\qquad
L_i=\begin{pmatrix} 0 & 0 \\ -K-S_i(I-B) & 0 \end{pmatrix},\qquad
U_i=\begin{pmatrix} B & -H \\ 0 & C-S_iH \end{pmatrix},\qquad i=1,2,3. \tag{17}
\]
The preconditioned GAOR methods for solving $P_iFy=P_if$ are then defined by
\[
y^{(k+1)}=T^{*}_{\gamma\omega i}\,y^{(k)}+\omega g_i^{*},\qquad k=0,1,2,\ldots,\ i=1,2,3, \tag{18}
\]
with
\[
\omega g_i^{*}=(I-\Gamma L_i)^{-1}\,\Omega P_i f,\qquad i=1,2,3, \tag{19}
\]
\[
T^{*}_{\gamma\omega i}=(I-\Gamma L_i)^{-1}\bigl(I-\Omega+(\Omega-\Gamma)L_i+\Omega U_i\bigr),\qquad i=1,2,3, \tag{20}
\]
where
\[
\Omega=\begin{pmatrix} \omega_1 I & 0 \\ 0 & \omega_2 I \end{pmatrix},\qquad
\Gamma=\begin{pmatrix} \gamma_1 I & 0 \\ 0 & \gamma_2 I \end{pmatrix}. \tag{21}
\]
For $i=1,2,3$, we have
\[
T^{*}_{\gamma\omega i}=\begin{pmatrix}
(1-\omega_1)I+\omega_1 B & -\omega_1 H \\
(\omega_1\gamma_2-\omega_2)\bigl[K+S_i(I-B)\bigr]-\omega_1\gamma_2\bigl[K+S_i(I-B)\bigr]B &
(1-\omega_2)I+\omega_2 C-\omega_2 S_iH+\omega_1\gamma_2\bigl[K+S_i(I-B)\bigr]H
\end{pmatrix}. \tag{22}
\]
Next, we study the convergence of the PGAOR methods. For simplicity, and without loss of generality, we assume that
\[
0<\omega_1\leq 1,\quad 0<\omega_2\leq 1,\quad 0<\gamma_2\leq\frac{\omega_2}{\omega_1},\quad
B\geq 0,\quad C\geq 0,\quad H\leq 0,\quad K\leq 0. \tag{23}
\]
Then, we have the following theorem.

Theorem 3. Let $T_{\gamma\omega}$ and $T^{*}_{\gamma\omega 1}$ be the iteration matrices of the GAOR method and the PGAOR method for problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is irreducible, then it holds that $\rho(T_{\gamma\omega})\neq 1$ and
\[
\rho(T^{*}_{\gamma\omega 1})>\rho(T_{\gamma\omega})>1\quad\text{if }\rho(T_{\gamma\omega})>1,\qquad
\rho(T^{*}_{\gamma\omega 1})<\rho(T_{\gamma\omega})<1\quad\text{if }\rho(T_{\gamma\omega})<1. \tag{24}
\]

Proof. Writing the GAOR iteration matrix with the parameter matrices $\Omega$ and $\Gamma$ of (21) (which reduces to (7) when $\omega_1=\omega_2=\omega$ and $\gamma_2=\gamma$), a simple calculation gives
\[
T_{\gamma\omega}=\begin{pmatrix}
(1-\omega_1)I+\omega_1 B & -\omega_1 H \\
(\omega_1\gamma_2-\omega_2)K & (1-\omega_2)I+\omega_2 C
\end{pmatrix}
+\omega_1\gamma_2\begin{pmatrix} 0 & 0 \\ -KB & KH \end{pmatrix}. \tag{25}
\]
Since $F$ is irreducible, $T_{\gamma\omega}$ is nonnegative and irreducible under the above assumptions, and in the same way $T^{*}_{\gamma\omega 1}$ is nonnegative and irreducible. By Lemma 1, there exists a positive vector $x>0$ such that
\[
T_{\gamma\omega}x=\lambda x, \tag{26}
\]
where $\lambda=\rho(T_{\gamma\omega})$.
From (26), one easily obtains
\[
\bigl[I-\Omega+(\Omega-\Gamma)L+\Omega U\bigr]x=\lambda(I-\Gamma L)x. \tag{27}
\]
That is,
\[
(I-\Omega)x=\lambda(I-\Gamma L)x-(\Omega-\Gamma)Lx-\Omega Ux. \tag{28}
\]
With the same vector $x>0$, it holds that
\[
T^{*}_{\gamma\omega 1}x-\lambda x=(I-\Gamma L_1)^{-1}\bigl[I-\Omega+(\Omega-\Gamma)L_1+\Omega U_1-\lambda(I-\Gamma L_1)\bigr]x. \tag{29}
\]
Using (26)-(29), we can obtain
\[
\begin{aligned}
T^{*}_{\gamma\omega 1}x-\lambda x
&=(I-\Gamma L_1)^{-1}\bigl[(\Omega-\Gamma)(L_1-L)+\Omega(U_1-U)+\lambda\Gamma(L_1-L)\bigr]x\\
&=(I-\Gamma L_1)^{-1}\bigl[\Omega(U_1-U+L_1-L)+(\lambda-1)\Gamma(L_1-L)\bigr]x\\
&=(I-\Gamma L_1)^{-1}\Omega(U_1-U+L_1-L)x+(\lambda-1)(I-\Gamma L_1)^{-1}\Gamma(L_1-L)x\\
&=(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\-\omega_2S_1(I-B)&-\omega_2S_1H\end{pmatrix}x
+(\lambda-1)(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\-\gamma_2S_1(I-B)&0\end{pmatrix}x.
\end{aligned}\tag{30}
\]
Meanwhile, we have
\[
\begin{aligned}
(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\-\omega_2S_1(I-B)&-\omega_2S_1H\end{pmatrix}x
&=(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\[2pt]\dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}
\begin{pmatrix}-\omega_1 I+\omega_1 B&-\omega_1 H\\0&0\end{pmatrix}x\\
&=(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\[2pt]\dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}(T_{\gamma\omega}-I)x\\
&=(\lambda-1)(I-\Gamma L_1)^{-1}\begin{pmatrix}0&0\\[2pt]\dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}x,
\end{aligned}\tag{31}
\]
where the second equality holds because the left factor acts only on the first block row of $T_{\gamma\omega}-I$. Combining (30) and (31), we get
\[
\begin{aligned}
T^{*}_{\gamma\omega 1}x-\lambda x
&=(\lambda-1)(I-\Gamma L_1)^{-1}\left[\begin{pmatrix}0&0\\[2pt]\dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}
+\begin{pmatrix}0&0\\-\gamma_2S_1(I-B)&0\end{pmatrix}\right]x\\
&=(\lambda-1)\begin{pmatrix}I&0\\-\gamma_2\bigl[K+S_1(I-B)\bigr]&I\end{pmatrix}
\begin{pmatrix}0&0\\[2pt]\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2S_1B&0\end{pmatrix}x\\
&=(\lambda-1)\begin{pmatrix}0&0\\[2pt]\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2S_1B&0\end{pmatrix}x.
\end{aligned}\tag{32}
\]
In view of the assumptions above,
\[
\begin{pmatrix}0&0\\[2pt]\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2S_1B&0\end{pmatrix}x\geq 0
\quad\text{and is not the zero vector}. \tag{33}
\]
Then, if $\lambda=\rho(T_{\gamma\omega})>1$,
\[
T^{*}_{\gamma\omega 1}x-\lambda x\geq 0,\qquad T^{*}_{\gamma\omega 1}x-\lambda x\neq 0. \tag{34}
\]
From Lemma 2, we then get
\[
\rho(T^{*}_{\gamma\omega 1})>\rho(T_{\gamma\omega})>1. \tag{35}
\]
Similarly, if $\lambda=\rho(T_{\gamma\omega})<1$, then
\[
T^{*}_{\gamma\omega 1}x-\lambda x\leq 0,\qquad T^{*}_{\gamma\omega 1}x-\lambda x\neq 0, \tag{36}
\]
and so
\[
\rho(T^{*}_{\gamma\omega 1})<\rho(T_{\gamma\omega})<1. \tag{37}
\]
Finally, if $\lambda=\rho(T_{\gamma\omega})=1$, then (27) would give $\Omega Fx=0$ and hence $Fx=0$ with $x\neq 0$, which contradicts the nonsingularity of $F$ under the above assumptions. This completes the proof.
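As a sanity check on the computation leading to (32), the following sketch (ours, not from the article) generates random blocks satisfying the sign conditions (23), forms $T_{\gamma\omega}$ and $T^{*}_{\gamma\omega 1}$ from (25) and (22), and verifies that both sides of (32) agree on the Perron vector of $T_{\gamma\omega}$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = q = 4
# Random blocks satisfying the sign assumptions (23): B, C >= 0, H, K <= 0.
B = 0.1 * rng.random((p, p)); C = 0.1 * rng.random((q, q))
H = -0.1 * rng.random((p, q)); K = -0.1 * rng.random((q, p))
w1, w2, g2 = 0.9, 0.8, 0.7          # 0 < w1, w2 <= 1 and 0 < g2 <= w2/w1

Ip, Iq = np.eye(p), np.eye(q)
S1 = np.zeros_like(K); S1[:, -1] = -K[:, -1]          # preconditioner S_1 of (13)

def iteration_matrix(Kblk, Cblk, Hblk):
    """Two-parameter iteration matrix, cf. (22) and (25); uses the globals above."""
    top = np.hstack([(1 - w1) * Ip + w1 * B, -w1 * Hblk])
    bot = np.hstack([(w1 * g2 - w2) * Kblk - w1 * g2 * Kblk @ B,
                     (1 - w2) * Iq + w2 * Cblk + w1 * g2 * Kblk @ Hblk])
    return np.vstack([top, bot])

T  = iteration_matrix(K, C, H)                                   # GAOR, (25)
T1 = iteration_matrix(K + S1 @ (Ip - B), C - S1 @ H, H)          # PGAOR with P_1, (22)

# Perron eigenpair of the nonnegative, irreducible matrix T, cf. (26).
vals, vecs = np.linalg.eig(T)
k = np.argmax(np.abs(vals))
lam, x = vals[k].real, np.abs(vecs[:, k].real)

lhs = T1 @ x - lam * x                                           # left-hand side of (32)
M = np.zeros((q, p + q))
M[:, :p] = (w2 / w1 - g2) * S1 + g2 * S1 @ B
rhs = (lam - 1) * np.vstack([np.zeros((p, p + q)), M]) @ x       # right-hand side of (32)
print(np.allclose(lhs, rhs), lam, np.max(np.abs(np.linalg.eigvals(T1))))
```

Because $\rho(T_{\gamma\omega})<1$ for this data, the two printed spectral radii should also illustrate the second case of (24).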
Theorem 4. Let $T_{\gamma\omega}$ and $T^{*}_{\gamma\omega 2}$ be the iteration matrices of the GAOR method and the PGAOR method for problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is an irreducible matrix satisfying
\[
b_{11}>0,\qquad k_{ij}\neq 0\ \text{for}\ |i-j|=1, \tag{38}
\]
then it holds that $\rho(T_{\gamma\omega})\neq 1$ and
\[
\rho(T^{*}_{\gamma\omega 2})>\rho(T_{\gamma\omega})>1\quad\text{if }\rho(T_{\gamma\omega})>1,\qquad
\rho(T^{*}_{\gamma\omega 2})<\rho(T_{\gamma\omega})<1\quad\text{if }\rho(T_{\gamma\omega})<1. \tag{39}
\]

Proof. One can prove this theorem by arguments similar to those used in the proof of Theorem 3.

Similarly, we have the following theorem.

Theorem 5. Let $T_{\gamma\omega}$ and $T^{*}_{\gamma\omega 3}$ be the iteration matrices of the GAOR method and the PGAOR method for problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is an irreducible matrix satisfying
\[
k_{q1}<0,\qquad \alpha>1,\qquad b_{11}>0,\qquad k_{i,i+1}<0\ \text{for}\ i=1,2,\ldots,q-1, \tag{40}
\]
then it holds that $\rho(T_{\gamma\omega})\neq 1$ and
\[
\rho(T^{*}_{\gamma\omega 3})>\rho(T_{\gamma\omega})>1\quad\text{if }\rho(T_{\gamma\omega})>1,\qquad
\rho(T^{*}_{\gamma\omega 3})<\rho(T_{\gamma\omega})<1\quad\text{if }\rho(T_{\gamma\omega})<1. \tag{41}
\]
4. Numerical Examples

In this section, we give numerical examples to demonstrate the conclusions drawn above. The numerical experiments were carried out in MATLAB 7.0.

Example 1. Consider the Laplace equation
\[
\frac{\partial^2 u(x,y)}{\partial x^2}+\frac{\partial^2 u(x,y)}{\partial y^2}=0. \tag{42}
\]
Applying the five-point finite difference method on a uniform square domain with a uniform mesh size, we obtain the linear system
\[
Fx=f, \tag{43}
\]
where the coefficient matrix $F$ is split as in (3),
\[
F=\begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix}. \tag{44}
\]
Here $B$ and $C$ are diagonal blocks with diagonal entries $\tfrac{1}{2}$, so that $I-B=I-C=\tfrac{1}{2}I$, and $H$ and $K$ collect the coupling terms of the five-point stencil; their nonzero entries all equal $-\tfrac{1}{8}$ (45)-(46).
Table 1: Spectral radii of the GAOR method and the PGAOR methods.

Preconditioner   | ω_1    | ω_2    | γ_2    | ρ_GAOR | ρ_PGAOR | ρ_GAOR | ρ_PGAOR
S_1              | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8457  | 0.7505 | 0.6618
S_2              | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8376  | 0.7505 | 0.7340
S_3 (α = 1.5)    | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8418  | 0.7505 | 0.7298
S_3 (α = 2)      | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8420  | 0.7505 | 0.7328

Table 1 reports the spectral radii of the GAOR method and of the PGAOR methods. The spectral radii of the PGAOR methods are smaller than those of the GAOR method, so the proposed three preconditioners accelerate the convergence of the GAOR method for the linear system (2). The results in Table 1 are in accordance with Theorems 3-5.
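Computations of the kind reported in Table 1 can be reproduced for any blocks $B$, $C$, $H$, $K$. The following sketch is ours (it uses small illustrative random blocks rather than the matrices of the examples, and the parameter values are those of Table 1); it compares $\rho(T_{\gamma\omega})$ with $\rho(T^{*}_{\gamma\omega i})$ for the three preconditioners:

```python
import numpy as np

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def build_S(K, which, alpha):
    q, p = K.shape
    S = np.zeros_like(K)
    if which == 1:                                   # S_1 of (13)
        S[:, -1] = -K[:, -1]
    elif which == 2:                                 # S_2 of (13)
        for i in range(q):
            if i + 1 < p: S[i, i + 1] = -K[i, i + 1]
            if 0 <= i - 1 < p: S[i, i - 1] = -K[i, i - 1]
    else:                                            # S_3 of (13)
        for i in range(q - 1):
            if i + 1 < p: S[i, i + 1] = -K[i, i + 1]
        S[-1, 0] = -K[-1, 0] / alpha
    return S

def T_star(B, C, H, K, S, w1, w2, g2):
    """PGAOR iteration matrix (22); S = 0 recovers the GAOR matrix (25)."""
    p, q = B.shape[0], C.shape[0]
    Kt, Ct = K + S @ (np.eye(p) - B), C - S @ H
    top = np.hstack([(1 - w1) * np.eye(p) + w1 * B, -w1 * H])
    bot = np.hstack([(w1 * g2 - w2) * Kt - w1 * g2 * Kt @ B,
                     (1 - w2) * np.eye(q) + w2 * Ct + w1 * g2 * Kt @ H])
    return np.vstack([top, bot])

rng = np.random.default_rng(1)
p = q = 5
B = 0.08 * rng.random((p, p)); C = 0.08 * rng.random((q, q))
H = -0.08 * rng.random((p, q)); K = -0.08 * rng.random((q, p))
w1, w2, g2 = 0.8912, 0.9654, 0.8865              # parameter values from Table 1

rho_gaor = spectral_radius(T_star(B, C, H, K, np.zeros_like(K), w1, w2, g2))
for i, alpha in [(1, None), (2, None), (3, 1.5), (3, 2.0)]:
    S = build_S(K, i, alpha if alpha else 1.0)
    print(i, alpha, rho_gaor, spectral_radius(T_star(B, C, H, K, S, w1, w2, g2)))
```

Since these blocks satisfy the sign assumptions (23) and $\rho(T_{\gamma\omega})<1$, the printed PGAOR radii should come out smaller than the GAOR radius, as Theorems 3-5 predict.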
Example 2. The coefficient matrix $F$ in (3) is given by
\[
F=\begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix}, \tag{47}
\]
where $B=(b_{ij})_{p\times p}$, $C=(c_{ij})_{q\times q}$, $H=(h_{ij})_{p\times q}$, $K=(k_{ij})_{q\times p}$, and $p+q=n$, with
\[
\begin{aligned}
b_{ii}&=\frac{1}{i+1}, & & i=1,\ldots,p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30j+i}, & & j>i,\ i=1,\ldots,p-1,\ j=2,\ldots,p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30\bigl(10(i-j+1)+i\bigr)}, & & i>j,\ i=2,\ldots,p,\ j=1,\ldots,p-1,\\
c_{ii}&=\frac{1}{n+i+1}, & & i=1,\ldots,q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(n+j)+n+i}, & & j>i,\ i=1,\ldots,q-1,\ j=2,\ldots,q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(i-j+1)+n+i}, & & i>j,\ i=2,\ldots,q,\ j=1,\ldots,q-1,\\
k_{ij}&=\frac{1}{30(n+i-j+1)+n+i}-\frac{1}{30}, & & i=1,\ldots,q,\ j=1,\ldots,p,\\
h_{ij}&=\frac{1}{30(n+j)+i}-\frac{1}{30}, & & i=1,\ldots,p,\ j=1,\ldots,q.
\end{aligned}\tag{48}
\]
Obviously, $F$ is irreducible. Table 2 shows the spectral radii of the corresponding iteration matrices with $n=8$ and $m=6$.

Table 2: Spectral radii of the GAOR method and the PGAOR methods (n = 8, m = 6).

Preconditioner   | ω_1 | ω_2 = ω | γ_2 = γ
S_1              | 0.8 | 0.6     | 0.7
S_2              | 0.8 | 0.6     | 0.7
S_3 (α = 0.1)    | 0.8 | 0.6     | 0.7
S_3 (α = 0.5)    | 0.8 | 0.6     | 0.7

The results in Table 2 are again in accordance with Theorems 3-5.
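The blocks of Example 2 can be assembled directly from the entry formulas in (48). The sketch below is ours, not the article's code; in particular, the choice $p=6$ (hence $q=n-p=2$) is only our reading of the sizes reported as $n=8$ and $m=6$:

```python
import numpy as np

def example2_blocks(n=8, p=6):
    """Assemble B, C, H, K from the entry formulas (48).
    The choice p = 6 (so q = n - p = 2) is an assumption, not stated explicitly above."""
    q = n - p
    B = np.zeros((p, p)); C = np.zeros((q, q))
    H = np.zeros((p, q)); K = np.zeros((q, p))
    for i in range(1, p + 1):
        B[i - 1, i - 1] = 1 / (i + 1)
        for j in range(1, p + 1):
            if j > i:
                B[i - 1, j - 1] = 1 / 30 - 1 / (30 * j + i)
            elif i > j:
                B[i - 1, j - 1] = 1 / 30 - 1 / (30 * (10 * (i - j + 1) + i))
    for i in range(1, q + 1):
        C[i - 1, i - 1] = 1 / (n + i + 1)
        for j in range(1, q + 1):
            if j > i:
                C[i - 1, j - 1] = 1 / 30 - 1 / (30 * (n + j) + n + i)
            elif i > j:
                C[i - 1, j - 1] = 1 / 30 - 1 / (30 * (i - j + 1) + n + i)
    for i in range(1, q + 1):
        for j in range(1, p + 1):
            K[i - 1, j - 1] = 1 / (30 * (n + i - j + 1) + n + i) - 1 / 30
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            H[i - 1, j - 1] = 1 / (30 * (n + j) + i) - 1 / 30
    return B, C, H, K

B, C, H, K = example2_blocks()
# Check the sign assumptions (23): B, C >= 0 and H, K <= 0.
print((B >= 0).all(), (C >= 0).all(), (H <= 0).all(), (K <= 0).all())
```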
Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (no. 11026040), by the Science and Technology Development Plan of Henan Province (no. 122300410316), and by the Natural Science Foundations of Henan Province (no. 13A110022).

References
[1] J. Y. Yuan, “Numerical methods for generalized least squares
problem,” Journal of Computational and Applied Mathematics,
vol. 66, pp. 571–584, 1996.
[2] J.-Y. Yuan and X.-Q. Jin, “Convergence of the generalized AOR
method,” Applied Mathematics and Computation, vol. 99, no. 1,
pp. 35–46, 1999.
[3] A. D. Gunawardena, S. K. Jain, and L. Snyder, “Modified
iterative methods for consistent linear systems,” Linear Algebra
and Its Applications, vol. 154/156, pp. 123–143, 1991.
[4] A. Hadjidimos, “Accelerated overrelaxation method,” Mathematics of Computation, vol. 32, no. 141, pp. 149–157, 1978.
[5] Y. T. Li, C. X. Li, and S. L. Wu, “Improvement of preconditioned
AOR iterative methods for L-matrices,” Journal of Computational and Applied Mathematics, vol. 206, pp. 656–665, 2007.
[6] J. P. Milaszewicz, “Improving Jacobi and Gauss-Seidel iterations,” Linear Algebra and Its Applications, vol. 93, pp. 161–170,
1987.
[7] R. S. Varga, Matrix Iterative Analysis, vol. 27, Springer, Berlin,
Germany, 2000.
[8] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
[9] M. T. Darvishi and P. Hessari, "On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices," Applied Mathematics and Computation, vol. 176, no. 1, pp. 128-133, 2006.
[10] X. Zhou, Y. Song, L. Wang, and Q. Liu, “Preconditioned GAOR
methods for solving weighted linear least squares problems,”
Journal of Computational and Applied Mathematics, vol. 224, no.
1, pp. 242–249, 2009.
[11] M. T. Darvishi, P. Hessari, and B.-C. Shin, “Preconditioned
modified AOR method for systems of linear equations,” International Journal for Numerical Methods in Biomedical Engineering,
vol. 27, no. 5, pp. 758–769, 2011.
[12] A. Berman and R. J. Plemmons, Nonnegative Matrices in the
Mathematical Sciences, vol. 9, Academic Press, New York, NY,
USA; Society for Industrial and Applied Mathematics (SIAM),
Philadelphia, Pa, USA, 1994.