Research Article: Block Preconditioned SSOR Methods for H-Matrices Linear Systems

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 213659, 7 pages
http://dx.doi.org/10.1155/2013/213659
Zhao-Nian Pu and Xue-Zhong Wang
School of Mathematics and Statistics, Hexi University, Zhangye, Gansu 734000, China
Correspondence should be addressed to Xue-Zhong Wang; mail2011wang@gmail.com
Received 9 January 2013; Accepted 12 March 2013
Academic Editor: Hak-Keung Lam
Copyright © 2013 Z.-N. Pu and X.-Z. Wang. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
We present a block preconditioner and consider block preconditioned SSOR iterative methods for solving the linear system Ax = b. When A is an H-matrix, convergence results and some comparison results on the spectral radius of our methods are given. Numerical examples are also given to illustrate that our methods are effective.
1. Introduction

For the linear system

    A x = b,    (1)

where A is an n × n square matrix and x and b are n-dimensional vectors, the basic iterative method for solving (1) is

    M x_{k+1} = N x_k + b,    k = 0, 1, . . . ,    (2)

where A = M − N and M is nonsingular. Thus (2) can be written as

    x_{k+1} = T x_k + c,    k = 0, 1, . . . ,    (3)

where T = M^{-1} N and c = M^{-1} b.

Let us consider the following partition of A:

    A = (  A_11   A_12   · · ·   A_1m
           A_21   A_22   · · ·   A_2m
            ⋮      ⋮      ⋱       ⋮
           A_m1   A_m2   · · ·   A_mm ),    (4)

where the blocks A_ii ∈ C^{n_i × n_i}, i = 1, . . . , m, are nonsingular and n_1 + n_2 + · · · + n_m = n.

Usually we split A into

    A = D − L − U,    (5)

where D = diag(A_11, . . . , A_mm), and −L and −U are the strictly block lower and strictly block upper triangular parts of A, respectively. Let 0 < ω < 2, and

    M = 1/(ω(2 − ω)) (D − ωL) D^{-1} (D − ωU),
    N = 1/(ω(2 − ω)) ((1 − ω)D + ωL) D^{-1} ((1 − ω)D + ωU).    (6)

Then the iteration matrix of the SSOR method for A is given by

    L_ω = M^{-1} N
        = (D − ωU)^{-1} D (D − ωL)^{-1} ((1 − ω)D + ωL) D^{-1} ((1 − ω)D + ωU).    (7)

Transforming the original system (1) into the preconditioned form

    P A x = P b,    (8)
then we can define the basic iterative scheme

    M_P x_{k+1} = N_P x_k + P b,    k = 0, 1, . . . ,    (9)

where P A = M_P − N_P and M_P is nonsingular. Thus (9) can also be written as

    x_{k+1} = T x_k + c,    k = 0, 1, . . . ,    (10)

where T = M_P^{-1} N_P and c = M_P^{-1} P b. Similar to the original system (1), we call the basic iterative methods corresponding to the preconditioned system the preconditioned iterative methods.

When A is an M-matrix, Alanelli and Hadjidimos [1] considered the preconditioner P = Q + S, where Q = diag(L_11^{-1}, I_22, . . . , I_mm) and S is given by

    S = (   O_11              O_12   · · ·   O_1m
           −A_21 L_11^{-1}     O_22   · · ·   O_2m
              ⋮                 ⋮      ⋱       ⋮
           −A_m1 L_11^{-1}     O_m2   · · ·   O_mm ),    (11)

with L_11 being the lower triangular matrix in the LU triangular decomposition of A_11.

We consider the preconditioner P_1 = I + S_1, where

    I + S_1 = (  I_11    −α_1 A_11^{-1} A_1s
                          I_22    −α_2 A_22^{-1} A_2d
                                   ⋱
                 −α_m A_mm^{-1} A_me              I_mm ),    (12)

that is, block row i of S_1 contains the single nonzero block −α_i A_ii^{-1} A_ik in some column k ≠ i (written above with generic column indices s, d, . . . , e). Let

    P_1 A = (I + S_1)(D − L − U) = D̃ − L̃ − Ũ,    (13)

where D̃, −L̃, and −Ũ are the block diagonal, strictly block lower, and strictly block upper triangular parts of P_1 A, respectively. If D̃ is nonsingular, then (D̃ − ωL̃)^{-1} and (D̃ − ωŨ)^{-1} exist, and it is possible to define the SSOR iteration matrix for P_1 A; namely,

    L̃_ω = (D̃ − ωŨ)^{-1} D̃ (D̃ − ωL̃)^{-1} ((1 − ω)D̃ + ωL̃) D̃^{-1} ((1 − ω)D̃ + ωŨ).    (14)

Alanelli and Hadjidimos [1] showed that the preconditioned Gauss-Seidel, the preconditioned SOR, and the preconditioned Jacobi methods with preconditioner P are better than the original methods. Our work here is to prove convergence of the block preconditioned SSOR method with preconditioner P_1 and to give some comparison results on the spectral radius for the case when A is an H-matrix.

Let |A| denote the matrix whose elements are the moduli of the elements of the given matrix, and call ⟨A⟩ = (ã_ij) the comparison matrix of A = (a_ij), where ã_ij = |a_ij| for i = j and ã_ij = −|a_ij| for i ≠ j. For the partition (4), under this definition we have

    ⟨A⟩ = (   ⟨A_11⟩    −|A_12|   · · ·   −|A_1m|
              −|A_21|    ⟨A_22⟩   · · ·   −|A_2m|
                 ⋮          ⋮       ⋱        ⋮
              −|A_m1|   −|A_m2|   · · ·    ⟨A_mm⟩ ).    (15)

Let ⟨A⟩ = ⟨D⟩ − |L| − |U|, where ⟨D⟩, −|L|, and −|U| are the block diagonal, strictly block lower, and strictly block upper triangular parts of ⟨A⟩, respectively.

Notice that the preconditioner of the matrix ⟨A⟩ corresponding to P_1 is P_2 = I + S_2; namely,

    I + S_2 = (  I_11    |α_1| ⟨A_11⟩^{-1} |A_1s|
                          I_22    |α_2| ⟨A_22⟩^{-1} |A_2d|
                                   ⋱
                 |α_m| ⟨A_mm⟩^{-1} |A_me|              I_mm ).    (16)
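As an illustration, the comparison matrix (15) and the two preconditioners can be built directly for a small example with 1 × 1 blocks (so m = n). This is only a sketch: the matrix A, the value α_i = 0.2, and the choice k_i = i + 1 (mod n) for the position of the single off-diagonal block in each row of S_1 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def comparison_matrix(A):
    # <A> of (15): moduli on the diagonal, negated moduli off the diagonal.
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

# Small unit-diagonal example (illustrative); strict diagonal dominance of
# its comparison matrix makes A an H-matrix.
A = np.array([[ 1.0, -0.3,  0.2],
              [ 0.4,  1.0, -0.1],
              [-0.2,  0.3,  1.0]])
cA = comparison_matrix(A)

n, alpha = A.shape[0], 0.2
S1 = np.zeros((n, n))   # row i of S1: single entry -alpha * A_ii^{-1} A_ik
S2 = np.zeros((n, n))   # its counterpart for <A>, as in (16)
for i in range(n):
    k = (i + 1) % n     # illustrative choice of the column k != i
    S1[i, k] = -alpha * A[i, k] / A[i, i]
    S2[i, k] = alpha * abs(A[i, k]) / abs(A[i, i])

P1, P2 = np.eye(n) + S1, np.eye(n) + S2
print(comparison_matrix(P1 @ A) - P2 @ cA)  # entrywise >= 0 for this example
```

For this unit-diagonal example one can check entrywise that ⟨P_1 A⟩ ≥ P_2⟨A⟩, which is the kind of relation exploited later in the comparison results.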
Let P_2⟨A⟩ = (I + S_2)⟨A⟩ = D̄ − L̄ − Ū, where D̄, −L̄, and −Ū are the block diagonal, strictly block lower, and strictly block upper triangular parts of P_2⟨A⟩, respectively. If D̄ is nonsingular, then (D̄ − ωL̄)^{-1} and (D̄ − ωŪ)^{-1} exist, and the SSOR iteration matrix for P_2⟨A⟩ is as follows:

    L̄_ω = (D̄ − ωŪ)^{-1} D̄ (D̄ − ωL̄)^{-1} ((1 − ω)D̄ + ωL̄) D̄^{-1} ((1 − ω)D̄ + ωŪ).    (17)

2. Preliminaries

A matrix A is called nonnegative (positive) if each entry of A is nonnegative (positive); we denote this by A ≥ 0 (A > 0). Similarly, for an n-dimensional vector x we define x ≥ 0 (x > 0). We denote the spectral radius of A by ρ(A) and the transpose of A by A^T. A matrix A = (a_ij) is called a Z-matrix if a_ij ≤ 0 for all i ≠ j. A Z-matrix A is a nonsingular M-matrix if A is nonsingular and A^{-1} ≥ 0. If ⟨A⟩ is a nonsingular M-matrix, then A is called an H-matrix. A = M − N is said to be a splitting of A if M is nonsingular; the splitting is regular if M^{-1} ≥ 0 and N ≥ 0, and weak regular if M^{-1} ≥ 0 and M^{-1}N ≥ 0.

Some basic properties of the special matrices introduced above, to be used in this paper, are the following.

Lemma 1 (see [2]). Let A be a Z-matrix. Then the following statements are equivalent.
(a) A is an M-matrix.
(b) There is a positive vector x such that Ax > 0.
(c) A^{-1} ≥ 0.
(d) All principal submatrices of A are M-matrices.
(e) All principal minors of A are positive.

Lemma 2 (see [3, 4]). Let A be an M-matrix and let A = M − N be a weak regular splitting. Then ρ(M^{-1}N) < 1.

Lemma 3 (see [2]). Let A and B be two n × n matrices with 0 ≤ B ≤ A. Then ρ(B) ≤ ρ(A).

Lemma 4 (see [5]). If A is an H-matrix, then |A^{-1}| ≤ ⟨A⟩^{-1}.

Lemma 5 (see [6]). Suppose that A_1 = M_1 − N_1 and A_2 = M_2 − N_2 are weak regular splittings of the monotone matrices A_1 and A_2, respectively, such that M_2^{-1} ≥ M_1^{-1}. If there exists a positive vector x such that 0 ≤ A_1 x ≤ A_2 x, then for the monotone norm associated with x,

    ‖M_1^{-1} N_1‖_x ≤ ‖M_2^{-1} N_2‖_x.    (18)

In particular, if M_1^{-1} N_1 has a positive Perron vector, then

    ρ(M_1^{-1} N_1) ≤ ρ(M_2^{-1} N_2).    (19)

Moreover, if x is a Perron vector of M_1^{-1} N_1 and strict inequality holds in (18), then strict inequality holds in (19).

Lemma 6. If A and B are two n × n matrices, then ⟨A − B⟩ ≥ ⟨A⟩ − |B|.

Proof. It is easy to see that |a_ij − b_ij| ≥ |a_ij| − |b_ij| for i = j and that −|a_ij − b_ij| ≥ −|a_ij| − |b_ij| for i ≠ j. Therefore, ⟨A − B⟩ ≥ ⟨A⟩ − |B| is true.

Lemma 7. If A is an H-matrix with unit diagonal elements, then ‖⟨A⟩^{-1}‖_∞ > 1.

Proof. Let ⟨A⟩ = I − B. From ⟨A⟩ being an M-matrix, B ≥ 0 and ρ(B) < 1; thus we have

    ⟨A⟩^{-1} = Σ_{k=0}^{∞} B^k ≥ I,    (20)

and then ‖⟨A⟩^{-1}‖_∞ > 1.

3. Convergence Results

Let e_i = (1, . . . , 1)^T ∈ R^{n_i}, i = 1, 2, . . . , m, e = (e_1^T, . . . , e_m^T)^T, r = (r_1^T, . . . , r_m^T)^T = ⟨A⟩^{-1} e, and O_i = (0, . . . , 0)^T ∈ R^{n_i}, where r and e are partitioned in accordance with the block partitioning of the matrix A, and let

    s_i = ‖⟨A_ii⟩^{-1} |A_ik| e_k‖_∞,    h_i = 1 / (s_i (2‖⟨A⟩^{-1}‖_∞ − 1)),    i = 1, 2, . . . , m.    (21)

Theorem 8. Let A be a nonsingular H-matrix; if |α_i| < h_i, i = 1, 2, . . . , m, then P_1 A is also an H-matrix.

Proof. From A being an H-matrix, we have r > 0 and r_k ≤ ‖⟨A⟩^{-1}‖_∞ e_k. The blocks of P_1 A are

    (P_1 A)_ij = { A_ij − α_i A_ii^{-1} A_ik A_kj,   i ≠ j,
                 { A_ii − α_i A_ii^{-1} A_ik A_ki,   i = j,      i, j = 1, 2, . . . , m,  k ≠ i.    (22)

Then

    (⟨P_1 A⟩ r)_i = ⟨A_ii − α_i A_ii^{-1} A_ik A_ki⟩ r_i − Σ_{j ≠ i,k} |A_ij − α_i A_ii^{-1} A_ik A_kj| r_j
                        − |A_ik − α_i A_ii^{-1} A_ik A_kk| r_k
        ≥ ⟨A_ii⟩ r_i − |α_i| |A_ii^{-1}| |A_ik| |A_ki| r_i − Σ_{j ≠ i,k} |A_ij| r_j
                        − Σ_{j ≠ i,k} |α_i| |A_ii^{-1}| |A_ik| |A_kj| r_j − |A_ik| r_k − |α_i| |A_ii^{-1}| |A_ik| |A_kk| r_k
        ≥ e_i − |α_i| ⟨A_ii⟩^{-1} |A_ik| |A_ki| r_i − Σ_{j ≠ i,k} |α_i| ⟨A_ii⟩^{-1} |A_ik| |A_kj| r_j
                        − |α_i| ⟨A_ii⟩^{-1} |A_ik| |A_kk| r_k
        = e_i + |α_i| ⟨A_ii⟩^{-1} |A_ik| ( −Σ_{j ≠ k} |A_kj| r_j − |A_kk| r_k + ⟨A_kk⟩ r_k − ⟨A_kk⟩ r_k )
        = e_i + |α_i| ⟨A_ii⟩^{-1} |A_ik| (e_k − 2 r_k)
        ≥ e_i − |α_i| (2‖⟨A⟩^{-1}‖_∞ − 1) ⟨A_ii⟩^{-1} |A_ik| e_k
        ≥ e_i − |α_i| s_i (2‖⟨A⟩^{-1}‖_∞ − 1) e_i
        > O_i.    (23)

Therefore, ⟨P_1 A⟩ is an M-matrix, and then P_1 A is an H-matrix.

Theorem 9. Let A be a nonsingular H-matrix with unit diagonal elements, 0 < ω ≤ 1, and |α_i| < h_i, i = 1, 2, . . . , m. Then ρ(L̃_ω) < 1.

Proof. From Theorem 8, we know that ⟨P_1 A⟩ = ⟨D̃⟩ − |L̃| − |Ũ| is an M-matrix. If we let

    ⟨P_1 A⟩ = 1/(ω(2 − ω)) (⟨D̃⟩ − ω|L̃|) ⟨D̃⟩^{-1} (⟨D̃⟩ − ω|Ũ|)
            − 1/(ω(2 − ω)) ((1 − ω)⟨D̃⟩ + ω|L̃|) ⟨D̃⟩^{-1} ((1 − ω)⟨D̃⟩ + ω|Ũ|),    (24)

then the SSOR iteration matrix for ⟨P_1 A⟩ is as follows:

    L̈_ω = (⟨D̃⟩ − ω|Ũ|)^{-1} ⟨D̃⟩ (⟨D̃⟩ − ω|L̃|)^{-1} ((1 − ω)⟨D̃⟩ + ω|L̃|) ⟨D̃⟩^{-1} ((1 − ω)⟨D̃⟩ + ω|Ũ|).    (25)

Since ⟨P_1 A⟩ is an M-matrix, ⟨D̃⟩, ⟨D̃⟩ − ω|L̃|, and ⟨D̃⟩ − ω|Ũ| are M-matrices; by a simple calculation, we obtain that (24) is a weak regular splitting, and from Lemma 2 we know that ρ(L̈_ω) < 1. Since

    |L̃_ω| = |(D̃ − ωŨ)^{-1} D̃ (D̃ − ωL̃)^{-1} ((1 − ω)D̃ + ωL̃) D̃^{-1} ((1 − ω)D̃ + ωŨ)|
          = |(I − ωD̃^{-1}Ũ)^{-1} (I − ωD̃^{-1}L̃)^{-1} ((1 − ω)I + ωD̃^{-1}L̃) ((1 − ω)I + ωD̃^{-1}Ũ)|
          ≤ |(I − ωD̃^{-1}Ũ)^{-1}| |(I − ωD̃^{-1}L̃)^{-1}| |(1 − ω)I + ωD̃^{-1}L̃| |(1 − ω)I + ωD̃^{-1}Ũ|
          ≤ (I − ω⟨D̃⟩^{-1}|Ũ|)^{-1} (I − ω⟨D̃⟩^{-1}|L̃|)^{-1} ((1 − ω)I + ω⟨D̃⟩^{-1}|L̃|) ((1 − ω)I + ω⟨D̃⟩^{-1}|Ũ|)
          = L̈_ω,    (26)

then, by Lemma 3, ρ(L̃_ω) ≤ ρ(|L̃_ω|) ≤ ρ(L̈_ω) < 1.

Theorem 10. Let A be a nonsingular H-matrix with unit diagonal elements, 0 < ω ≤ 1, and |α_i| < h_i, i = 1, 2, . . . , m. Then P_2⟨A⟩ is an M-matrix and ρ(L̄_ω) < 1.

Proof. Similar to the proofs of Theorems 8 and 9, it is easy to get the proof of this theorem.

4. Comparison Results of Spectral Radius

In what follows we give some comparison results on the spectral radius of the preconditioned SSOR iteration matrices with different preconditioners. Let

    ⟨A⟩ = M̂ − N̂
        = 1/(ω(2 − ω)) (⟨D⟩ − ω|L|) ⟨D⟩^{-1} (⟨D⟩ − ω|U|)
        − 1/(ω(2 − ω)) ((1 − ω)⟨D⟩ + ω|L|) ⟨D⟩^{-1} ((1 − ω)⟨D⟩ + ω|U|),    (27)

where

    M̂ = 1/(ω(2 − ω)) (⟨D⟩ − ω|L|) ⟨D⟩^{-1} (⟨D⟩ − ω|U|),
    N̂ = 1/(ω(2 − ω)) ((1 − ω)⟨D⟩ + ω|L|) ⟨D⟩^{-1} ((1 − ω)⟨D⟩ + ω|U|).    (28)
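The SSOR splitting (27)-(28) can be sanity-checked numerically. The sketch below (1 × 1 point blocks and an assumed small unit-diagonal H-matrix, both illustrative choices) verifies that M̂ − N̂ reproduces ⟨A⟩ and that the splitting is weak regular, i.e. M̂^{-1} ≥ 0 and M̂^{-1}N̂ ≥ 0:

```python
import numpy as np

def comparison_matrix(A):
    # <A>: moduli on the diagonal, negated moduli off the diagonal.
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def ssor_splitting(B, omega):
    """M, N with B = M - N as in (27)-(28), for 1x1 (point) blocks."""
    D = np.diag(np.diag(B))
    L = -np.tril(B, -1)                 # strictly lower part, sign as in (5)
    U = -np.triu(B, 1)                  # strictly upper part
    Dinv = np.linalg.inv(D)
    c = 1.0 / (omega * (2.0 - omega))
    M = c * (D - omega * L) @ Dinv @ (D - omega * U)
    N = c * ((1 - omega) * D + omega * L) @ Dinv @ ((1 - omega) * D + omega * U)
    return M, N

# Illustrative unit-diagonal H-matrix (its comparison matrix is strictly
# diagonally dominant, hence an M-matrix).
A = np.array([[ 1.0, -0.3,  0.2],
              [ 0.4,  1.0, -0.1],
              [-0.2,  0.3,  1.0]])
cA = comparison_matrix(A)

omega = 0.8
M_hat, N_hat = ssor_splitting(cA, omega)
T_hat = np.linalg.inv(M_hat) @ N_hat    # SSOR iteration matrix of <A>
rho_hat = max(abs(np.linalg.eigvals(T_hat)))
print(rho_hat)                          # below 1, as Lemma 2 predicts
```

Since ⟨A⟩ here is an M-matrix and the splitting is weak regular, Lemma 2 guarantees that the computed spectral radius is below 1.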
Then the SSOR iteration matrix for ⟨A⟩ is as follows:

    L̂_ω = M̂^{-1} N̂
        = (⟨D⟩ − ω|U|)^{-1} ⟨D⟩ (⟨D⟩ − ω|L|)^{-1} ((1 − ω)⟨D⟩ + ω|L|) ⟨D⟩^{-1} ((1 − ω)⟨D⟩ + ω|U|),    (29)

and let

    P_2⟨A⟩ = M̄ − N̄
           = 1/(ω(2 − ω)) (D̄ − ωL̄) D̄^{-1} (D̄ − ωŪ)
           − 1/(ω(2 − ω)) ((1 − ω)D̄ + ωL̄) D̄^{-1} ((1 − ω)D̄ + ωŪ),    (30)

where

    M̄ = 1/(ω(2 − ω)) (D̄ − ωL̄) D̄^{-1} (D̄ − ωŪ),
    N̄ = 1/(ω(2 − ω)) ((1 − ω)D̄ + ωL̄) D̄^{-1} ((1 − ω)D̄ + ωŪ).    (31)

Then the SSOR iteration matrix for P_2⟨A⟩ is (17).

Theorem 11. Let A be a nonsingular H-matrix with unit diagonal elements, 0 < ω ≤ 1, and |α_i| < h_i, i = 1, 2, . . . , m. Then ρ(L̄_ω) ≤ ρ(L̂_ω).

Proof. Since ⟨A⟩ is a nonsingular M-matrix, by Theorem 10, P_2⟨A⟩ is a nonsingular M-matrix, and thus ⟨A⟩ and P_2⟨A⟩ are two monotone matrices. From ⟨A⟩ and P_2⟨A⟩ being M-matrices, we get that ⟨D⟩, D̄, M̂, and M̄ are M-matrices, together with

    ((1 − ω)I + ω⟨D⟩^{-1}|L|) ⟨D⟩^{-1} ((1 − ω)I + ω⟨D⟩^{-1}|U|) > 0,
    ((1 − ω)I + ωD̄^{-1}L̄) D̄^{-1} ((1 − ω)I + ωD̄^{-1}Ū) > 0.    (32)

We obtain that ⟨A⟩ = M̂ − N̂ and P_2⟨A⟩ = M̄ − N̄ are two weak regular splittings. By a simple calculation, we have

    M̄ = 1/(ω(2 − ω)) (D̄ − ωL̄) D̄^{-1} (D̄ − ωŪ)
      ≤ 1/(ω(2 − ω)) (⟨D⟩ − ω|L|) ⟨D⟩^{-1} (⟨D⟩ − ω|U|) = M̂,    (33)

and thus M̄^{-1} ≥ M̂^{-1} ≥ 0. Letting x = ⟨A⟩^{-1} e > 0, we have (P_2⟨A⟩ − ⟨A⟩)x = S_2 e ≥ 0; since M̄^{-1} ≥ M̂^{-1} ≥ 0, we have

    M̄^{-1} (P_2⟨A⟩) x = (I − M̄^{-1} N̄) x ≥ M̂^{-1} ⟨A⟩ x = (I − M̂^{-1} N̂) x.    (34)

It follows that

    ‖M̄^{-1} N̄‖_x ≤ ‖M̂^{-1} N̂‖_x.    (35)

As ⟨A⟩ = M̂ − N̂ is a weak regular splitting, there exists a positive Perron vector y; by Lemma 5, the following inequality holds:

    ρ(M̄^{-1} N̄) ≤ ρ(M̂^{-1} N̂);    (36)

that is,

    ρ(L̄_ω) ≤ ρ(L̂_ω).    (37)

When A is a nonsingular M-matrix, we have A = ⟨A⟩. If α_i > 0, i = 1, 2, . . . , m, then P_2 = P_1; furthermore, L̄_ω = L̃_ω and L_ω = L̂_ω. Therefore, we get the following result.

Corollary 12. Let A be a nonsingular M-matrix with unit diagonal elements, 0 < α_i < h_i, i = 1, 2, . . . , m, and 0 < ω ≤ 1. Then

    ρ(L̄_ω) = ρ(L̃_ω) ≤ ρ(L̂_ω) = ρ(L_ω).    (38)

Theorem 13. Let A be a nonsingular H-matrix with unit diagonal elements, 0 < ω ≤ 1, and |α_i| < h_i, i = 1, 2, . . . , m. Then ρ(L̈_ω) ≤ ρ(L̄_ω).

Proof. Let

    ⟨P_1 A⟩ = 1/(ω(2 − ω)) (⟨D̃⟩ − ω|L̃|) ⟨D̃⟩^{-1} (⟨D̃⟩ − ω|Ũ|)
            − 1/(ω(2 − ω)) ((1 − ω)⟨D̃⟩ + ω|L̃|) ⟨D̃⟩^{-1} ((1 − ω)⟨D̃⟩ + ω|Ũ|).    (39)

Then the SSOR iteration matrix for ⟨P_1 A⟩ is the matrix L̈_ω defined in the proof of Theorem 9. Let also

    P_2⟨A⟩ = 1/(ω(2 − ω)) (D̄ − ωL̄) D̄^{-1} (D̄ − ωŪ)
           − 1/(ω(2 − ω)) ((1 − ω)D̄ + ωL̄) D̄^{-1} ((1 − ω)D̄ + ωŪ).    (40)

Then the SSOR iteration matrix for P_2⟨A⟩ is (17). It is easy to see that the previous two splittings are weak regular splittings. Furthermore, by Lemma 6, we have, for any i = 1, 2, . . . , m,

    ⟨D̃_ii⟩ = ⟨A_ii − α_i A_ii^{-1} A_ik A_ki⟩ ≥ ⟨A_ii⟩ − |α_i| ⟨A_ii⟩^{-1} |A_ik| |A_ki| = D̄_ii.    (41)

From ⟨P_1 A⟩ and P_2⟨A⟩ being two M-matrices, we have

    0 ≤ ⟨D̃⟩^{-1} ≤ D̄^{-1},    (42)
and then

    L̈_ω = (⟨D̃⟩ − ω|Ũ|)^{-1} ⟨D̃⟩ (⟨D̃⟩ − ω|L̃|)^{-1} ((1 − ω)⟨D̃⟩ + ω|L̃|) ⟨D̃⟩^{-1} ((1 − ω)⟨D̃⟩ + ω|Ũ|)
        = (I − ω⟨D̃⟩^{-1}|Ũ|)^{-1} (I − ω⟨D̃⟩^{-1}|L̃|)^{-1} ((1 − ω)I + ω⟨D̃⟩^{-1}|L̃|) ((1 − ω)I + ω⟨D̃⟩^{-1}|Ũ|)
        ≤ (I − ωD̄^{-1}Ū)^{-1} (I − ωD̄^{-1}L̄)^{-1} ((1 − ω)I + ωD̄^{-1}L̄) ((1 − ω)I + ωD̄^{-1}Ū)
        = L̄_ω.    (43)

Therefore, by Lemma 3, ρ(L̈_ω) ≤ ρ(L̄_ω).

Combining the previous theorems, we obtain the following conclusion.

Theorem 14. Let A be a nonsingular H-matrix with unit diagonal elements, 0 < ω ≤ 1, and |α_i| < h_i, i = 1, 2, . . . , m. Then

    ρ(L̃_ω) ≤ ρ(L̈_ω) ≤ ρ(L̄_ω) ≤ ρ(L̂_ω) < 1.    (44)

5. Numerical Example

For randomly generated nonsingular H-matrices of order n = 100, 200, 500, 1000 with n_1 = n_2 = · · · = n_m = 5, we have determined the spectral radius of the iteration matrices of the SSOR methods mentioned previously with preconditioner P_1. We report the spectral radius of the corresponding iteration matrix by ρ. The m parameters α_i, i = 1, 2, . . . , m, are taken from the m equal-partitioned points of the interval [0, 1]. We take

    P_1 = ( A_11^{-1}         A_11^{-1} A_12    O_13             · · ·               O_1m
            O_21              A_22^{-1}         A_22^{-1} A_23    · · ·               O_2m
              ⋮                  ⋱                 ⋱                ⋱                  ⋮
            O_{m−1,1}         · · ·             O_{m−1,m−2}      A_{m−1,m−1}^{-1}    A_{m−1,m−1}^{-1} A_{m−1,m}
            A_mm^{-1} A_m1    O_m2              · · ·            O_{m,m−1}           A_mm^{-1} ).    (45)

For P_1, we make two groups of experiments. In Figure 1, we test the relation between r and ρ for N = 100 and ω = 0.6, where "×", "+", "∗", "⋅", and "∘" denote the spectral radius for ⟨A⟩, P_2⟨A⟩, ⟨P_1 A⟩, A, and P_1 A, respectively. In Table 1, ρ(L̃_ω), ρ(L̄_ω), ρ(L̈_ω), ρ(L̂_ω), and ρ(L_ω) denote the spectral radii of the SSOR iteration matrices for P_1 A, P_2⟨A⟩, ⟨P_1 A⟩, ⟨A⟩, and A, respectively.

Table 1: Comparison of spectral radius with preconditioner P_1.

    ω         N      ρ(L̃_ω)    ρ(L̄_ω)    ρ(L̈_ω)    ρ(L̂_ω)    ρ(L_ω)
    ω = 0.8   100    0.5636     0.8635     0.7404     0.8999     0.6288
              200    0.6030     0.9195     0.7698     0.9473     0.7059
              500    0.6120     0.9751     0.7844     0.9847     0.7103
    ω = 0.6   100    0.3650     0.7906     0.5923     0.8530     0.4770
              200    0.4510     0.8832     0.6507     0.9240     0.5769
              500    0.4345     0.9730     0.6766     0.9835     0.5609
    ω = 0.9   100    0.2387     0.7512     0.6647     0.8087     0.3602
              200    0.3494     0.8491     0.5800     0.9009     0.4899
              500    0.3569     0.9561     0.6284     0.9731     0.5019
              1000   0.3674     0.9738     0.6316     0.9840     0.5173

From Figure 1 and Table 1, we can conclude that the spectral radius of the preconditioned SSOR method with preconditioner P_1 is the smallest among the methods considered, which further illustrates Theorem 14.
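The chain of inequalities in Theorem 14 can also be observed on a small example. The sketch below uses 1 × 1 blocks; the matrix A, the values α_i = 0.2 and ω = 0.8, and the column choice k_i = i + 1 (mod n) are illustrative assumptions, not the paper's test matrices. It computes the five spectral radii appearing in Table 1 and checks the ordering (44):

```python
import numpy as np

def comparison_matrix(A):
    # <A>: moduli on the diagonal, negated moduli off the diagonal.
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def ssor_rho(B, omega):
    """Spectral radius of the SSOR iteration matrix (7) of B (1x1 blocks)."""
    D = np.diag(np.diag(B))
    L = -np.tril(B, -1)
    U = -np.triu(B, 1)
    T = (np.linalg.inv(D - omega * U) @ D @ np.linalg.inv(D - omega * L)
         @ ((1 - omega) * D + omega * L) @ np.linalg.inv(D)
         @ ((1 - omega) * D + omega * U))
    return max(abs(np.linalg.eigvals(T)))

# Unit-diagonal H-matrix (illustrative): <A> is strictly diagonally dominant.
A = np.array([[ 1.0, -0.3,  0.2],
              [ 0.4,  1.0, -0.1],
              [-0.2,  0.3,  1.0]])
cA = comparison_matrix(A)

n, alpha, omega = A.shape[0], 0.2, 0.8
S1, S2 = np.zeros((n, n)), np.zeros((n, n))
for i in range(n):
    k = (i + 1) % n                       # illustrative column choice k != i
    S1[i, k] = -alpha * A[i, k] / A[i, i]
    S2[i, k] = alpha * abs(A[i, k]) / abs(A[i, i])

rho_tilde = ssor_rho((np.eye(n) + S1) @ A, omega)                     # P1 A
rho_ddot  = ssor_rho(comparison_matrix((np.eye(n) + S1) @ A), omega)  # <P1 A>
rho_bar   = ssor_rho((np.eye(n) + S2) @ cA, omega)                    # P2 <A>
rho_hat   = ssor_rho(cA, omega)                                       # <A>
rho_plain = ssor_rho(A, omega)                                        # A
print(rho_tilde, rho_ddot, rho_bar, rho_hat, rho_plain)
```

For this example one can check that ‖⟨A⟩^{-1}‖_∞ = 2 (each row of ⟨A⟩ sums to 0.5), so α = 0.2 satisfies |α_i| < h_i, and the hypotheses of Theorem 14 hold; the printed values then respect ρ(L̃_ω) ≤ ρ(L̈_ω) ≤ ρ(L̄_ω) ≤ ρ(L̂_ω) < 1.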
Acknowledgments

The authors express their thanks to the editor, Professor Hak-Keung Lam, and the anonymous referees, whose useful and detailed suggestions helped them correct some minor errors and improve the quality of the paper.
Figure 1: The relation between r and ρ, for N = 100 and ω = 0.6.
References
[1] M. Alanelli and A. Hadjidimos, “Block Gauss elimination followed by a classical iterative method for the solution of linear
systems,” Journal of Computational and Applied Mathematics,
vol. 163, no. 2, pp. 381–400, 2004.
[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the
Mathematical Sciences, vol. 9 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM),
Philadelphia, Pa, USA, 1994.
[3] W. Li and Z.-y. You, “The multi-parameters overrelaxation
method,” Journal of Computational Mathematics, vol. 16, no. 4,
pp. 367–374, 1998.
[4] R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood
Cliffs, NJ, USA, 1962.
[5] L. Yu. Kolotilina, “Two-sided bounds for the inverse of an Hmatrix,” Linear Algebra and Its Applications, vol. 225, pp. 117–123,
1995.
[6] M. Neumann and R. J. Plemmons, “Convergence of parallel
multisplitting iterative methods for M-matrices,” Linear Algebra
and Its Applications, vol. 88-89, pp. 559–573, 1987.