Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 850986, 7 pages
http://dx.doi.org/10.1155/2013/850986
Research Article
Modified Preconditioned GAOR Methods for
Systems of Linear Equations
Xue-Feng Zhang, Qun-Fa Cui, and Shi-Liang Wu
School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China
Correspondence should be addressed to Xue-Feng Zhang; ayzhangxf@163.com
Received 30 January 2013; Accepted 13 May 2013
Academic Editor: Giuseppe Marino
Copyright © 2013 Xue-Feng Zhang et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system from the
generalized least squares problem. The convergence and comparison results are obtained. The comparison results show that the
convergence rate of the preconditioned generalized AOR (PGAOR) methods is better than that of the original GAOR methods.
Finally, some numerical results are reported to confirm the validity of the proposed methods.
1. Introduction
Consider the generalized least squares problem

\min_{x \in \mathbb{R}^n} (Ax - b)^T W^{-1} (Ax - b),   (1)

where x \in \mathbb{R}^n, A \in \mathbb{R}^{n \times n}, b \in \mathbb{R}^n, and the variance-covariance matrix W \in \mathbb{R}^{n \times n} is a known symmetric positive definite matrix. This problem has many scientific applications, one of which is parameter estimation in mathematical models [1, 2].

To solve this problem, one can equivalently solve the linear system

F y = f,   (2)

where

F = \begin{pmatrix} I - B & H \\ K & I - C \end{pmatrix},   (3)
with B \in \mathbb{R}^{p \times p}, C \in \mathbb{R}^{q \times q}, and p + q = n. Without loss of generality, we assume that F = \mathcal{I} - \mathcal{L} - \mathcal{U}, where \mathcal{I} is the identity matrix and \mathcal{L} and \mathcal{U} are the strictly lower and upper triangular parts obtained from F, respectively; that is,

\mathcal{I} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},  \quad \mathcal{L} = \begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix},  \quad \mathcal{U} = \begin{pmatrix} B & -H \\ 0 & C \end{pmatrix}.   (4)

In order to obtain approximate solutions of the linear system (2), many iterative methods, such as Jacobi, Gauss-Seidel (GS), successive overrelaxation (SOR), and accelerated overrelaxation (AOR), have been studied by many authors [3-8]. These methods give very good results, but they have a serious drawback: they require computing the inverses of I - B and I - C in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method when the coefficient matrix F is diagonally dominant. The GAOR method [10, 11] can be defined as follows:

y_{k+1} = \mathcal{T}_{\gamma\omega} y_k + \omega g,   k = 0, 1, 2, \ldots,   (5)

where

\mathcal{T}_{\gamma\omega} = \begin{pmatrix} I & 0 \\ \gamma K & I \end{pmatrix}^{-1} \left( (1 - \omega)\mathcal{I} + (\omega - \gamma) \begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix} + \omega \begin{pmatrix} B & -H \\ 0 & C \end{pmatrix} \right),
\quad g = \begin{pmatrix} I & 0 \\ -\gamma K & I \end{pmatrix} f.   (6)
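As a concrete illustration of (4)-(6), the following NumPy sketch assembles \mathcal{T}_{\gamma\omega} and g from given blocks B, C, H, K and runs the iteration (5). The block sizes, the random test data, and the function name are illustrative assumptions and are not taken from the paper.

import numpy as np

def gaor_operator(B, C, H, K, omega, gamma):
    """Assemble the GAOR iteration matrix of (6) and M = (I 0; -gamma*K I)."""
    p, q = B.shape[0], C.shape[0]
    n = p + q
    L = np.block([[np.zeros((p, p)), np.zeros((p, q))],
                  [-K,               np.zeros((q, q))]])   # strictly lower part in (4)
    U = np.block([[B,                -H],
                  [np.zeros((q, p)), C]])                  # upper part in (4)
    M = np.block([[np.eye(p),        np.zeros((p, q))],
                  [-gamma * K,       np.eye(q)]])          # inverse of (I 0; gamma*K I)
    T = M @ ((1.0 - omega) * np.eye(n) + (omega - gamma) * L + omega * U)
    return T, M

# illustrative data only: small random blocks so that rho(T) < 1
rng = np.random.default_rng(0)
p, q = 3, 2
B, C = 0.1 * rng.random((p, p)), 0.1 * rng.random((q, q))
H, K = -0.1 * rng.random((p, q)), -0.1 * rng.random((q, p))
F = np.block([[np.eye(p) - B, H], [K, np.eye(q) - C]])
f = rng.random(p + q)

omega, gamma = 0.9, 0.8
T, M = gaor_operator(B, C, H, K, omega, gamma)
y = np.zeros(p + q)
for _ in range(200):             # iteration (5): y_{k+1} = T y_k + omega * g, with g = M f
    y = T @ y + omega * (M @ f)
print(np.allclose(F @ y, f))     # the fixed point solves (2) when rho(T) < 1

Multiplying out M in this sketch yields the explicit block form of the iteration matrix given in (7) below.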
Here, \omega and \gamma are real parameters with \omega \neq 0. The iteration matrix can be written briefly as

\mathcal{T}_{\gamma\omega} = \begin{pmatrix} (1 - \omega)I + \omega B & -\omega H \\ \omega(\gamma - 1)K - \gamma\omega K B & (1 - \omega)I + \omega C + \omega\gamma K H \end{pmatrix}.   (7)

To improve the convergence rate of the GAOR iterative method, a preconditioner can be applied. We transform the original linear system (2) into the preconditioned linear system

P F y = P f,   (8)

where P is the preconditioner. P F can be expressed as

P F = \begin{pmatrix} I - B^* & H^* \\ K^* & I - C^* \end{pmatrix}.   (9)

Meanwhile, the PGAOR method for solving the preconditioned linear system (8) is defined by

y_{k+1} = \mathcal{T}^*_{\gamma\omega} y_k + \omega g^*,   k = 0, 1, 2, \ldots,   (10)

where

\mathcal{T}^*_{\gamma\omega} = \begin{pmatrix} (1 - \omega)I + \omega B^* & -\omega H^* \\ \omega(\gamma - 1)K^* - \gamma\omega K^* B^* & (1 - \omega)I + \omega C^* + \gamma\omega K^* H^* \end{pmatrix},
\quad g^* = \begin{pmatrix} I & 0 \\ -\gamma K^* & I \end{pmatrix} P f.   (11)

In this paper, we propose three new types of preconditioners and study the convergence rate of the corresponding preconditioned GAOR methods for solving the linear system (2). The paper is organized as follows. In Section 2, some notations, definitions, and preliminary results are presented. In Section 3, three new types of preconditioners are proposed, and the convergence of the resulting PGAOR methods is compared with that of the original GAOR methods. Lastly, in Section 4, numerical examples are provided to confirm the theoretical results.

2. Preliminaries

For a vector x \in \mathbb{R}^n, x \geq 0 (x > 0) denotes that all components of x are nonnegative (positive). For two vectors x, y \in \mathbb{R}^n, x \geq y (x > y) means that x - y \geq 0 (x - y > 0). These definitions carry over immediately to matrices. A matrix A is said to be irreducible if the directed graph of A is strongly connected. \rho(A) denotes the spectral radius of A. Some useful results are provided as follows.

Lemma 1 (see [7]). Let A \geq 0 be an irreducible matrix. Then,
(a) A has a positive eigenvalue equal to its spectral radius;
(b) A has an eigenvector x > 0 corresponding to \rho(A);
(c) \rho(A) is a simple eigenvalue of A.

Lemma 2 (see [12]). Let A be a nonnegative matrix. Then,
(i) if \alpha x \leq A x for some nonnegative vector x, x \neq 0, then \alpha \leq \rho(A);
(ii) if A x \leq \beta x for some positive vector x, then \rho(A) \leq \beta. Moreover, if A is irreducible and if 0 \neq \alpha x \leq A x \leq \beta x for some nonnegative vector x, then \alpha \leq \rho(A) \leq \beta and x is a positive vector.

3. Preconditioned GAOR Methods

To solve the linear system (2) with the coefficient matrix F in (3), we consider the following preconditioners:

P_i = \mathcal{I} + S_i = \begin{pmatrix} I & 0 \\ S_i & I \end{pmatrix},   i = 1, 2, 3,   (12)

where

S_1 = \begin{pmatrix} 0 & 0 & \cdots & -k_{1p} \\ 0 & 0 & \cdots & -k_{2p} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & -k_{qp} \end{pmatrix},

S_2 = \begin{pmatrix} 0 & -k_{12} & & & \\ -k_{21} & 0 & -k_{23} & & \\ & -k_{32} & \ddots & \ddots & \\ & & \ddots & 0 & -k_{q-1,p} \\ & & & -k_{q,p-1} & 0 \end{pmatrix},

S_3 = \begin{pmatrix} 0 & -k_{12} & & & \\ & 0 & -k_{23} & & \\ & & \ddots & \ddots & \\ & & & 0 & -k_{q-1,p} \\ -\dfrac{k_{q1}}{\alpha} & & & & 0 \end{pmatrix}  \quad (\alpha > 0).   (13)
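In code, these preconditioners are easy to form from K alone. The sketch below builds S_1, S_2, S_3 and P_i, assuming the band pattern |i - j| = 1 for S_2 suggested by condition (38) of Theorem 4 and, for S_3, the superdiagonal entries -k_{i,i+1} together with the scaled entry -k_{q1}/\alpha used in Theorem 5; the function names and the default value of \alpha are illustrative.

import numpy as np

def build_S(K, which, alpha=2.0):
    """Sketch of the blocks S_1, S_2, S_3 of (13), given K = (k_ij) in R^{q x p}."""
    q, p = K.shape
    S = np.zeros((q, p))
    if which == 1:                   # S_1: negatives of the last column of K
        S[:, p - 1] = -K[:, p - 1]
    elif which == 2:                 # S_2: negatives of the entries of K adjacent to the diagonal
        for i in range(q):
            for j in range(p):
                if abs(i - j) == 1:
                    S[i, j] = -K[i, j]
    elif which == 3:                 # S_3: negated superdiagonal of K plus a scaled (q,1) entry
        for i in range(q - 1):
            S[i, i + 1] = -K[i, i + 1]
        S[q - 1, 0] = -K[q - 1, 0] / alpha
    return S

def preconditioner(K, which, alpha=2.0):
    """P_i = I + S_i as in (12), acting on the full (p+q) x (p+q) system."""
    q, p = K.shape
    S = build_S(K, which, alpha)
    return np.block([[np.eye(p), np.zeros((p, q))],
                     [S,         np.eye(q)]])

Multiplying F and f of (2) by preconditioner(K, i) yields the preconditioned system (8), whose coefficient matrix has the block form (14) given next.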
The preconditioned coefficient matrix P_i F can be expressed as

P_i F = \begin{pmatrix} I - B & H \\ K + S_i(I - B) & S_i H + I - C \end{pmatrix},   i = 1, 2, 3,   (14)

where

K + S_1(I - B) = \begin{pmatrix} k_{11} & k_{12} & \cdots & k_{1p} b_{pp} \\ k_{21} & k_{22} & \cdots & k_{2p} b_{pp} \\ \vdots & \vdots & & \vdots \\ k_{q1} & k_{q2} & \cdots & k_{qp} b_{pp} \end{pmatrix},

K + S_2(I - B) = \begin{pmatrix} k_{11} & k_{12} b_{22} & \cdots & k_{1,p-1} & k_{1p} \\ k_{21} b_{11} + k_{23} b_{31} & k_{22} & k_{21} b_{13} + k_{23} b_{33} & \cdots & k_{2p} \\ k_{31} & k_{32} b_{22} + k_{34} b_{42} & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & k_{q-1,p-2} b_{q-2,p} + k_{q-1,p} b_{qp} \\ k_{q1} & k_{q2} & \cdots & k_{q,p-1} b_{q-1,p-1} & k_{qp} \end{pmatrix},

K + S_3(I - B) = \begin{pmatrix} k_{11} & k_{12} b_{22} & \cdots & k_{1p} \\ k_{21} & k_{22} & \cdots & k_{2p} \\ \vdots & \vdots & & \vdots \\ k_{q1}\left(1 - \dfrac{1 - b_{11}}{\alpha}\right) & k_{q2} & \cdots & k_{qp} \end{pmatrix}.   (15)

Based on the discussion above, P_i F can be split as

P_i F = \mathcal{I} - \mathcal{L}_i - \mathcal{U}_i,   i = 1, 2, 3,   (16)

where

\mathcal{I} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},  \quad \mathcal{L}_i = \begin{pmatrix} 0 & 0 \\ -K - S_i(I - B) & 0 \end{pmatrix},  \quad \mathcal{U}_i = \begin{pmatrix} B & -H \\ 0 & C - S_i H \end{pmatrix},   i = 1, 2, 3.   (17)

The preconditioned GAOR methods for solving P_i F y = P_i f are defined by

y_{k+1} = \mathcal{T}^*_{\gamma\omega i} y_k + \omega g_i^*,   k = 0, 1, 2, \ldots,  i = 1, 2, 3,   (18)

with

\mathcal{T}^*_{\gamma\omega i} = (\mathcal{I} - \Gamma \mathcal{L}_i)^{-1} (\mathcal{I} - \Omega + (\Omega - \Gamma)\mathcal{L}_i + \Omega \mathcal{U}_i),   i = 1, 2, 3,   (19)

\omega g_i^* = (\mathcal{I} - \Gamma \mathcal{L}_i)^{-1} \Omega P_i f,   i = 1, 2, 3,   (20)

where

\Omega = \begin{pmatrix} \omega_1 I & 0 \\ 0 & \omega_2 I \end{pmatrix},  \quad \Gamma = \begin{pmatrix} \gamma_1 I & 0 \\ 0 & \gamma_2 I \end{pmatrix}.   (21)

For i = 1, 2, 3, we have

\mathcal{T}^*_{\gamma\omega i} = \begin{pmatrix} (1 - \omega_1)I + \omega_1 B & -\omega_1 H \\ (\omega_1 \gamma_2 - \omega_2)[K + S_i(I - B)] - \omega_1 \gamma_2 [K + S_i(I - B)]B & (1 - \omega_2)I + \omega_2 C - \omega_2 S_i H + \omega_1 \gamma_2 [K + S_i(I - B)]H \end{pmatrix}.   (22)

Next, we will study the convergence of the PGAOR methods. For simplicity and without loss of generality, we assume that

0 < \omega_1 \leq 1,  \quad 0 < \omega_2 \leq 1,  \quad 0 < \gamma_2 \leq \dfrac{\omega_2}{\omega_1},
H \leq 0,  \quad K \leq 0,  \quad B \geq 0,  \quad C \geq 0.   (23)

Then, we have the following theorem.

Theorem 3. Let \mathcal{T}_{\gamma\omega} and \mathcal{T}^*_{\gamma\omega 1} be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix F in (3) is irreducible, then \rho(\mathcal{T}_{\gamma\omega}) \neq 1 and

\rho(\mathcal{T}^*_{\gamma\omega 1}) > \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) > 1,
\rho(\mathcal{T}^*_{\gamma\omega 1}) < \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) < 1.   (24)

Proof. By some simple calculations on (7), one can get

\mathcal{T}_{\gamma\omega} = \begin{pmatrix} (1 - \omega_1)I + \omega_1 B & -\omega_1 H \\ (\omega_1 \gamma_2 - \omega_2)K & (1 - \omega_2)I + \omega_2 C \end{pmatrix} + \omega_1 \gamma_2 \begin{pmatrix} 0 & 0 \\ -KB & KH \end{pmatrix}.   (25)

Since F is irreducible, \mathcal{T}_{\gamma\omega} is nonnegative and irreducible under the above assumptions; similarly, \mathcal{T}^*_{\gamma\omega 1} is nonnegative and irreducible. By Lemma 1, there exists a positive vector x > 0 such that

\mathcal{T}_{\gamma\omega} x = \lambda x,   (26)

where \lambda = \rho(\mathcal{T}_{\gamma\omega}).
One can easily obtain

[\mathcal{I} - \Omega + (\Omega - \Gamma)\mathcal{L} + \Omega \mathcal{U}] x = \lambda (\mathcal{I} - \Gamma \mathcal{L}) x.   (27)

That is,

(\mathcal{I} - \Omega) x = \lambda (\mathcal{I} - \Gamma \mathcal{L}) x - (\Omega - \Gamma)\mathcal{L} x - \Omega \mathcal{U} x.   (28)

With the same vector x > 0, it holds that

\mathcal{T}^*_{\gamma\omega 1} x - \lambda x = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} [\mathcal{I} - \Omega + (\Omega - \Gamma)\mathcal{L}_1 + \Omega \mathcal{U}_1 - \lambda (\mathcal{I} - \Gamma \mathcal{L}_1)] x.   (29)

Using (22), (26), and (28), we can obtain

\mathcal{T}^*_{\gamma\omega 1} x - \lambda x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} [(\Omega - \Gamma)(\mathcal{L}_1 - \mathcal{L}) + \Omega(\mathcal{U}_1 - \mathcal{U}) + \lambda \Gamma (\mathcal{L}_1 - \mathcal{L})] x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} [\Omega(\mathcal{U}_1 - \mathcal{U} + \mathcal{L}_1 - \mathcal{L}) + (\lambda - 1)\Gamma(\mathcal{L}_1 - \mathcal{L})] x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \Omega (\mathcal{U}_1 - \mathcal{U} + \mathcal{L}_1 - \mathcal{L}) x + (\lambda - 1)(\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \Gamma (\mathcal{L}_1 - \mathcal{L}) x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ -\omega_2 S_1(I - B) & -\omega_2 S_1 H \end{pmatrix} x + (\lambda - 1)(\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ -\gamma_2 S_1(I - B) & 0 \end{pmatrix} x.   (30)

Meanwhile, we have

(\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ -\omega_2 S_1(I - B) & -\omega_2 S_1 H \end{pmatrix} x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ \dfrac{\omega_2}{\omega_1} S_1 & 0 \end{pmatrix} \begin{pmatrix} -\omega_1 I + \omega_1 B & -\omega_1 H \\ 0 & 0 \end{pmatrix} x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ \dfrac{\omega_2}{\omega_1} S_1 & 0 \end{pmatrix} \begin{pmatrix} -\omega_1 I + \omega_1 B & -\omega_1 H \\ (\omega_1\gamma_2 - \omega_2)K - \omega_1\gamma_2 K B & -\omega_2 I + \omega_2 C + \omega_1\gamma_2 K H \end{pmatrix} x
 = (\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ \dfrac{\omega_2}{\omega_1} S_1 & 0 \end{pmatrix} (\mathcal{T}_{\gamma\omega} - \mathcal{I}) x
 = (\lambda - 1)(\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \begin{pmatrix} 0 & 0 \\ \dfrac{\omega_2}{\omega_1} S_1 & 0 \end{pmatrix} x.   (31)

So far, we get

\mathcal{T}^*_{\gamma\omega 1} x - \lambda x
 = (\lambda - 1)(\mathcal{I} - \Gamma \mathcal{L}_1)^{-1} \left[ \begin{pmatrix} 0 & 0 \\ \dfrac{\omega_2}{\omega_1} S_1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ -\gamma_2 S_1(I - B) & 0 \end{pmatrix} \right] x
 = (\lambda - 1) \begin{pmatrix} I & 0 \\ -\gamma_2 [K + S_1(I - B)] & I \end{pmatrix} \begin{pmatrix} 0 & 0 \\ \left(\dfrac{\omega_2}{\omega_1} - \gamma_2\right) S_1 + \gamma_2 S_1 B & 0 \end{pmatrix} x
 = (\lambda - 1) \begin{pmatrix} 0 & 0 \\ \left(\dfrac{\omega_2}{\omega_1} - \gamma_2\right) S_1 + \gamma_2 S_1 B & 0 \end{pmatrix} x.   (32)

In view of the above assumptions, we have

\begin{pmatrix} 0 & 0 \\ \left(\dfrac{\omega_2}{\omega_1} - \gamma_2\right) S_1 + \gamma_2 S_1 B & 0 \end{pmatrix} x \geq 0.   (33)

Then, if \lambda = \rho(\mathcal{T}_{\gamma\omega}) > 1, then

\mathcal{T}^*_{\gamma\omega 1} x - \lambda x \geq 0,  \quad \mathcal{T}^*_{\gamma\omega 1} x - \lambda x \neq 0.   (34)

From Lemma 2, we can easily get

\rho(\mathcal{T}^*_{\gamma\omega 1}) > \rho(\mathcal{T}_{\gamma\omega}) > 1.   (35)

Similarly, if \lambda = \rho(\mathcal{T}_{\gamma\omega}) < 1, then

\mathcal{T}^*_{\gamma\omega 1} x - \lambda x \leq 0,  \quad \mathcal{T}^*_{\gamma\omega 1} x - \lambda x \neq 0.   (36)

So we have

\rho(\mathcal{T}^*_{\gamma\omega 1}) < \rho(\mathcal{T}_{\gamma\omega}) < 1.   (37)

If \lambda = \rho(\mathcal{T}_{\gamma\omega}) = 1, then we would obtain F y = 0 with y \neq 0, which contradicts the nonsingularity of F assumed above. This completes the proof of the theorem.

Theorem 4. Let \mathcal{T}_{\gamma\omega} and \mathcal{T}^*_{\gamma\omega 2} be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix F in (3) is an irreducible matrix satisfying

b_{11} > 0,  \quad k_{ij} \neq 0 \text{ for } |i - j| = 1,   (38)

then \rho(\mathcal{T}_{\gamma\omega}) \neq 1 and

\rho(\mathcal{T}^*_{\gamma\omega 2}) > \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) > 1,
\rho(\mathcal{T}^*_{\gamma\omega 2}) < \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) < 1.   (39)
Proof. One can prove this theorem by arguments similar to those in the proof of Theorem 3.

Similarly, we have the following theorem.

Theorem 5. Let \mathcal{T}_{\gamma\omega} and \mathcal{T}^*_{\gamma\omega 3} be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix F in (3) is an irreducible matrix satisfying

k_{q1} < 0,  \quad \alpha > 1,  \quad b_{11} > 0,  \quad k_{i,i+1} < 0 \text{ for } i = 1, 2, \ldots, q - 1,   (40)

then \rho(\mathcal{T}_{\gamma\omega}) \neq 1 and

\rho(\mathcal{T}^*_{\gamma\omega 3}) > \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) > 1,
\rho(\mathcal{T}^*_{\gamma\omega 3}) < \rho(\mathcal{T}_{\gamma\omega})  \quad if \rho(\mathcal{T}_{\gamma\omega}) < 1.   (41)

4. Numerical Examples

In this section, we give numerical examples to demonstrate the conclusions drawn above. The numerical experiments were carried out in MATLAB 7.0.

Example 1. Consider the following Laplace equation:

\dfrac{\partial^2 u(x, y)}{\partial x^2} + \dfrac{\partial^2 u(x, y)}{\partial y^2} = 0.   (42)

On a uniform square domain, applying the five-point finite difference method with a uniform mesh size, we obtain the linear system

F x = f,   (43)

where

F = [explicit finite-difference coefficient matrix; its nonzero entries are 1/2 and -1/8].   (44)

The coefficient matrix F is split as

F = \begin{pmatrix} I - B & H \\ K & I - C \end{pmatrix},   (45)

where

B and C are blocks whose nonzero entries equal 1/2, and H and K are blocks whose nonzero entries equal -1/8 [explicit blocks not reproduced].   (46)
Table 1 reports the spectral radii of the GAOR methods and the PGAOR methods. It shows that the spectral radii of the PGAOR methods are smaller than those of the GAOR methods, so the proposed three preconditioners can accelerate the convergence rate of the GAOR method for the linear system (2). The results in Table 1 are in accordance with Theorems 3-5.

Table 1: Spectral radii of the GAOR method and the PGAOR method.

Preconditioner     ω_1      ω_2      γ_2      ρ_GAOR   ρ_PGAOR   ρ_GAOR   ρ_PGAOR
S_1                0.8912   0.9654   0.8865   0.8478   0.8457    0.7505   0.6618
S_2                0.8912   0.9654   0.8865   0.8478   0.8376    0.7505   0.7340
S_3 (α = 1.5)      0.8912   0.9654   0.8865   0.8478   0.8418    0.7505   0.7298
S_3 (α = 2)        0.8912   0.9654   0.8865   0.8478   0.8420    0.7505   0.7328

Table 2: Spectral radii of the GAOR method and the PGAOR method.

Preconditioner     ω_1    ω_2 = ω   γ_2 = γ
S_1                0.8    0.6       0.7
S_2                0.8    0.6       0.7
S_3 (α = 0.1)      0.8    0.6       0.7
S_3 (α = 0.5)      0.8    0.6       0.7
[ρ_GAOR and ρ_PGAOR columns of Table 2 not recovered]
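As an illustration of how entries of this kind can be computed, the following sketch assembles the iteration matrices (25) and (22) for given blocks and parameters and returns the two spectral radii. The function name and the use of numpy.linalg.eigvals are assumptions, not prescribed by the paper.

import numpy as np

def spectral_radii(B, C, H, K, S, w1, w2, g2):
    """Return (rho_GAOR, rho_PGAOR) for one preconditioner block S, following (25) and (22)."""
    p, q = B.shape[0], C.shape[0]
    Ip, Iq = np.eye(p), np.eye(q)

    def iteration_matrix(Kb, Cb):
        # GAOR-type iteration matrix with lower-left block Kb and lower-right block built from Cb
        return np.block([
            [(1 - w1) * Ip + w1 * B, -w1 * H],
            [(w1 * g2 - w2) * Kb - w1 * g2 * (Kb @ B),
             (1 - w2) * Iq + w2 * Cb + w1 * g2 * (Kb @ H)],
        ])

    T = iteration_matrix(K, C)                              # GAOR, cf. (25)
    T_star = iteration_matrix(K + S @ (Ip - B), C - S @ H)  # PGAOR, cf. (22)
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(T), rho(T_star)

With the blocks of a given test problem and, for example, S = build_S(K, 1) from the earlier sketch, a call such as spectral_radii(B, C, H, K, S, 0.8912, 0.9654, 0.8865) corresponds to the S_1 row of Table 1.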
Example 2. The coefficient matrix F in (3) is given by

F = \begin{pmatrix} I - B & H \\ K & I - C \end{pmatrix},   (47)

where B = (b_{ij})_{p \times p}, C = (c_{ij})_{q \times q} with p + q = n, H = (h_{ij})_{p \times q}, and K = (k_{ij})_{q \times p}, with

b_{ii} = \dfrac{1}{i + 1},   i = 1, \ldots, p,

b_{ij} = \dfrac{1}{30} - \dfrac{1}{30 j + i},   j > i,  i = 1, \ldots, p - 1,  j = 2, \ldots, p,

b_{ij} = \dfrac{1}{30} - \dfrac{1}{30(10(i - j + 1) + i)},   i > j,  i = 2, \ldots, p,  j = 1, \ldots, p - 1,

c_{ii} = \dfrac{1}{n + i + 1},   i = 1, \ldots, q,

c_{ij} = \dfrac{1}{30} - \dfrac{1}{30(n + j) + n + i},   j > i,  i = 1, \ldots, q - 1,  j = 2, \ldots, q,

c_{ij} = \dfrac{1}{30} - \dfrac{1}{30(i - j + 1) + n + i},   i > j,  i = 2, \ldots, q,  j = 1, \ldots, q - 1,

k_{ij} = \dfrac{1}{30(n + i - j + 1) + n + i} - \dfrac{1}{30},   i = 1, \ldots, q,  j = 1, \ldots, p,

h_{ij} = \dfrac{1}{30(n + j) + i} - \dfrac{1}{30},   i = 1, \ldots, p,  j = 1, \ldots, q.   (48)
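For completeness, the following sketch builds the blocks of (47) directly from the formulas in (48). The choice p = 6 (so that q = 2 and p + q = n = 8) and the helper name are assumptions made only to obtain concrete sizes.

import numpy as np

def example2_blocks(n=8, p=6):
    """Blocks B, C, H, K of Example 2, following (48); i and j below are 1-based as in the paper."""
    q = n - p
    B, C = np.zeros((p, p)), np.zeros((q, q))
    H, K = np.zeros((p, q)), np.zeros((q, p))
    for i in range(1, p + 1):
        for j in range(1, p + 1):
            if i == j:
                B[i-1, j-1] = 1.0 / (i + 1)
            elif j > i:
                B[i-1, j-1] = 1/30 - 1.0 / (30*j + i)
            else:
                B[i-1, j-1] = 1/30 - 1.0 / (30*(10*(i - j + 1) + i))
    for i in range(1, q + 1):
        for j in range(1, q + 1):
            if i == j:
                C[i-1, j-1] = 1.0 / (n + i + 1)
            elif j > i:
                C[i-1, j-1] = 1/30 - 1.0 / (30*(n + j) + n + i)
            else:
                C[i-1, j-1] = 1/30 - 1.0 / (30*(i - j + 1) + n + i)
    for i in range(1, q + 1):
        for j in range(1, p + 1):
            K[i-1, j-1] = 1.0 / (30*(n + i - j + 1) + n + i) - 1/30
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            H[i-1, j-1] = 1.0 / (30*(n + j) + i) - 1/30
    return B, C, H, K

Feeding these blocks, together with build_S and spectral_radii from the earlier sketches, with ω_1 = 0.8, ω_2 = 0.6, and γ_2 = 0.7 is how a table such as Table 2 can be assembled.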
Obviously, F is irreducible. Table 2 shows the spectral radii of the corresponding iteration matrices with n = 8 and m = 6. Similarly, the results in Table 2 are in accordance with Theorems 3-5.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (no. 11026040), by the Science and Technology Development Plan of Henan Province (no. 122300410316), and by the Natural Science Foundations of Henan Province (no. 13A110022).

References
[1] J. Y. Yuan, “Numerical methods for generalized least squares
problem,” Journal of Computational and Applied Mathematics,
vol. 66, pp. 571–584, 1996.
[2] J.-Y. Yuan and X.-Q. Jin, “Convergence of the generalized AOR
method,” Applied Mathematics and Computation, vol. 99, no. 1,
pp. 35–46, 1999.
[3] A. D. Gunawardena, S. K. Jain, and L. Snyder, “Modified
iterative methods for consistent linear systems,” Linear Algebra
and Its Applications, vol. 154/156, pp. 123–143, 1991.
[4] A. Hadjidimos, “Accelerated overrelaxation method,” Mathematics of Computation, vol. 32, no. 141, pp. 149–157, 1978.
[5] Y. T. Li, C. X. Li, and S. L. Wu, “Improvement of preconditioned
AOR iterative methods for L-matrices,” Journal of Computational and Applied Mathematics, vol. 206, pp. 656–665, 2007.
[6] J. P. Milaszewicz, “Improving Jacobi and Gauss-Seidel iterations,” Linear Algebra and Its Applications, vol. 93, pp. 161–170,
1987.
[7] R. S. Varga, Matrix Iterative Analysis, vol. 27, Springer, Berlin,
Germany, 2000.
[8] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
[9] M. T. Darvishi and P. Hessari, “On convergence of the generalized AOR method for linear systems with diagonally dominant
coefficient matrices,” Applied Mathematics and Computation,
vol. 176, no. 1, pp. 128–133, 2006.
[10] X. Zhou, Y. Song, L. Wang, and Q. Liu, “Preconditioned GAOR
methods for solving weighted linear least squares problems,”
Journal of Computational and Applied Mathematics, vol. 224, no.
1, pp. 242–249, 2009.
[11] M. T. Darvishi, P. Hessari, and B.-C. Shin, “Preconditioned
modified AOR method for systems of linear equations,” International Journal for Numerical Methods in Biomedical Engineering,
vol. 27, no. 5, pp. 758–769, 2011.
[12] A. Berman and R. J. Plemmons, Nonnegative Matrices in the
Mathematical Sciences, vol. 9, Academic Press, New York, NY,
USA; Society for Industrial and Applied Mathematics (SIAM),
Philadelphia, Pa, USA, 1994.