Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2013, Article ID 435730, 8 pages http://dx.doi.org/10.1155/2013/435730 Research Article Geometrical and Spectral Properties of the Orthogonal Projections of the Identity Luis González, Antonio Suárez, and Dolores García Department of Mathematics, Research Institute SIANI, University of Las Palmas de Gran Canaria, Campus de Tafira, 35017 Las Palmas de Gran Canaria, Spain Correspondence should be addressed to Luis GonzaΜlez; luisglez@dma.ulpgc.es Received 28 December 2012; Accepted 10 April 2013 Academic Editor: K. Sivakumar Copyright © 2013 Luis GonzaΜlez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We analyze the best approximation π΄π (in the Frobenius sense) to the identity matrix in an arbitrary matrix subspace π΄π (π΄ ∈ Rπ×π nonsingular, π being any fixed subspace of Rπ×π ). Some new geometrical and spectral properties of the orthogonal projection π΄π are derived. In particular, new inequalities for the trace and for the eigenvalues of matrix π΄π are presented for the special case that π΄π is symmetric and positive definite. 1. Introduction The set of all π × π real matrices is denoted by Rπ×π , and πΌ denotes the identity matrix of order π. In the following, π΄π and tr(π΄) denote, as usual, the transpose and the trace of matrix π΄ ∈ Rπ×π . The notations β¨⋅, ⋅β©πΉ and β ⋅ βπΉ stand for the Frobenius inner product and matrix norm, defined on the matrix space Rπ×π . Throughout this paper, the terms orthogonality, angle, and cosine will be used in the sense of the Frobenius inner product. Our starting point is the linear system π΄π₯ = π, π΄ ∈ Rπ×π , π₯, π ∈ Rπ , (1) where π΄ is a large, nonsingular, and sparse matrix. The resolution of this system is usually performed by iterative methods based on Krylov subspaces (see, e.g., [1, 2]). The coefficient matrix π΄ of the system (1) is often extremely ill-conditioned and highly indefinite, so that in this case, Krylov subspace methods are not competitive without a good preconditioner (see, e.g., [2, 3]). Then, to improve the convergence of these Krylov methods, the system (1) can be preconditioned with an adequate nonsingular preconditioning matrix π, transforming it into any of the equivalent systems ππ΄π₯ = ππ, π΄ππ¦ = π, π₯ = ππ¦, (2) the so-called left and right preconditioned systems, respectively. In this paper, we address only the case of the right-hand side preconditioned matrices π΄π, but analogous results can be obtained for the left-hand side preconditioned matrices ππ΄. The preconditioning of the system (1) is often performed in order to get a preconditioned matrix π΄π as close as possible to the identity in some sense, and the preconditioner π is called an approximate inverse of π΄. The closeness of π΄π to πΌ may be measured by using a suitable matrix norm like, for instance, the Frobenius norm [4]. In this way, the problem of obtaining the best preconditioner π (with respect to the Frobenius norm) of the system (1) in an arbitrary subspace π of Rπ×π is equivalent to the minimization problem; see, for example, [5] minβπ΄π − πΌβπΉ = βπ΄π − πΌβπΉ . π∈π (3) The solution π to the problem (3) will be referred to as the “optimal” or the “best” approximate inverse of matrix π΄ in the subspace π. Since matrix π΄π is the best approximation to the identity in subspace π΄π, it will be also referred to as the orthogonal projection of the identity matrix onto the subspace π΄π. Although many of the results presented in this paper are also valid for the case that matrix π is singular, from now on, we assume that the optimal approximate inverse π (and thus also the orthogonal projection π΄π) is a nonsingular 2 matrix. The solution π to the problem (3) has been studied as a natural generalization of the classical Moore-Penrose inverse in [6], where it has been referred to as the π-MoorePenrose inverse of matrix π΄. The main goal of this paper is to derive new geometrical and spectral properties of the best approximations π΄π (in the sense of formula (3)) to the identity matrix. Such properties could be used to analyze the quality and theoretical effectiveness of the optimal approximate inverse π as preconditioner of the system (1). However, it is important to highlight that the purpose of this paper is purely theoretical, and we are not looking for immediate numerical or computational approaches (although our theoretical results could be potentially applied to the preconditioning problem). In particular, the term “optimal (or best) approximate inverse” is used in the sense of formula (3) and not in any other sense of this expression. Among the many different works dealing with practical algorithms that can be used to compute approximate inverses, we refer the reader to for example, [4, 7–9] and to the references therein. In [4], the author presents an exhaustive survey of preconditioning techniques and, in particular, describes several algorithms for computing sparse approximate inverses based on Frobenius norm minimization like, for instance, the well-known SPAI and FSAI algorithms. A different approach (which is also focused on approximate inverses based on minimizing βπ΄π − πΌβπΉ ) can be found in [7], where an iterative descent-type method is used to approximate each column of the inverse, and the iteration is done with “sparse matrix by sparse vector” operations. When the system matrix is expressed in block-partitioned form, some preconditioning options are explored in [8]. In [9], the idea of “target” matrix is introduced, in the context of sparse approximate inverse preconditioners, and the generalized Frobenius norms βπ΅β2πΉ,π» = tr(π΅π»π΅π ) (π» symmetric positive definite) are used, for minimization purposes, as an alternative to the classical Frobenius norm. The last results of our work are devoted to the special case that matrix π΄π is symmetric and positive definite. In this sense, let us recall that the cone of symmetric and positive definite matrices has a rich geometrical structure and, in this context, the angle that any symmetric and positive definite matrix forms with the identity plays a very important role [10]. In this paper, the authors extend this geometrical point of view and analyze the geometrical structure of the subspace of symmetric matrices of order π, including the location of all orthogonal matrices not only the identity matrix. This paper has been organized as follows. In Section 2, we present some preliminary results required to make the paper self-contained. Sections 3 and 4 are devoted to obtain new geometrical and spectral relations, respectively, for the orthogonal projections π΄π of the identity matrix. Finally, Section 5 closes the paper with its main conclusions. 2. Some Preliminaries Now, we present some preliminary results concerning the orthogonal projection π΄π of the identity onto the matrix Journal of Applied Mathematics subspace π΄π ⊂ Rπ×π . For more details about these results and for their proofs, we refer the reader to [5, 6, 11]. Taking advantage of the prehilbertian character of the matrix Frobenius norm, the solution π to the problem (3) can be obtained using the orthogonal projection theorem. More precisely, the matrix product π΄π is the orthogonal projection of the identity onto the subspace π΄π, and it satisfies the conditions stated by the following lemmas; see [5, 11]. Lemma 1. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, 0 ≤ βπ΄πβ2πΉ = tr (π΄π) ≤ π, (4) 0 ≤ βπ΄π − πΌβ2πΉ = π − tr (π΄π) ≤ π. (5) An explicit formula for matrix π can be obtained by expressing the orthogonal projection π΄π of the identity matrix onto the subspace π΄π by its expansion with respect to an orthonormal basis of π΄π [5]. This is the idea of the following lemma. Lemma 2. Let π΄ ∈ Rπ×π be nonsingular. Let π be a linear subspace of Rπ×π of dimension π and {π1 , . . . , ππ } a basis of π such that {π΄π1 , . . . , π΄ππ } is an orthogonal basis of π΄π. Then, the solution π to the problem (3) is π tr (π΄ππ ) π, π=∑σ΅© σ΅©π΄ππ σ΅©σ΅©σ΅©2 π π=1 σ΅© σ΅© σ΅©πΉ (6) and the minimum (residual) Frobenius norm is 2 π [tr (π΄ππ )] . βπ΄π − πΌβ2πΉ = π − ∑ σ΅© σ΅©π΄ππ σ΅©σ΅©σ΅©2 π=1 σ΅© σ΅© σ΅©πΉ (7) Let us mention two possible options, both taken from [5], for choosing in practice the subspace π and its corresponding basis {ππ }ππ=1 . The first example consists of considering the subspace π of π×π matrices with a prescribed sparsity pattern, that is, π = {π ∈ Rπ×π : πππ = 0 ∀ (π, π) ∉ πΎ} , πΎ ⊂ {1, 2, . . . , π} × {1, 2, . . . , π} . (8) Then, denoting by ππ,π , the π × π matrix whose only nonzero entry is πππ = 1, a basis of subspace π is clearly {ππ,π : (π, π) ∈ πΎ}, and then {π΄ππ,π : (π, π) ∈ πΎ} will be a basis of subspace π΄π (since we have assumed that matrix π΄ is nonsingular). In general, this basis of π΄π is not orthogonal, so that we only need to use the Gram-Schmidt procedure to obtain an orthogonal basis of π΄π, in order to apply the orthogonal expansion (6). For the second example, consider a linearly independent set of π × π real symmetric matrices {π1 , . . . , ππ } and the corresponding subspace πσΈ = span {π1 π΄π , . . . , ππ π΄π } , (9) Journal of Applied Mathematics 3 Theorem 5. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, which clearly satisfies πσΈ ⊆ {π = ππ΄π : π ∈ Rπ×π ππ = π} = {π ∈ Rπ×π : (π΄π)π = π΄π} . (10) Hence, we can explicitly obtain the solution π to the problem (3) for subspace πσΈ , from its basis {π1 π΄π , . . . , ππ π΄π }, as follows. If {π΄π1 π΄π , . . . , π΄ππ π΄π } is an orthogonal basis of subspace π΄πσΈ , then we just use the orthogonal expansion (6) for obtaining π. Otherwise, we use again the Gram-Schmidt procedure to obtain an orthogonal basis of subspace π΄πσΈ , and then we apply formula (6). The interest of this second example stands in the possibility of using the conjugate gradient method for solving the preconditioned linear system, when the symmetric matrix π΄π is positive definite. For a more detailed exposition of the computational aspects related to these two examples, we refer the reader to [5]. Now, we present some spectral properties of the orthogonal projection π΄π. From now on, we denote by {π π }ππ=1 and {ππ }ππ=1 the sets of eigenvalues and singular values, respectively, of matrix π΄π arranged, as usual, in nonincreasing order, that is, σ΅¨ σ΅¨ σ΅¨σ΅¨ σ΅¨σ΅¨ σ΅¨σ΅¨ σ΅¨σ΅¨ σ΅¨σ΅¨π 1 σ΅¨σ΅¨ ≥ σ΅¨σ΅¨π 2 σ΅¨σ΅¨ ≥ ⋅ ⋅ ⋅ ≥ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ > 0, (11) π1 ≥ π2 ≥ ⋅ ⋅ ⋅ ≥ ππ > 0. The following lemma [11] provides some inequalities involving the eigenvalues and singular values of the preconditioned matrix π΄π. Lemma 3. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Then, π π π π=1 π=1 π=1 π σ΅¨ σ΅¨2 ∑π2π ≤ ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ ∑ππ2 = βπ΄πβ2πΉ π π π=1 π=1 σ΅¨ σ΅¨ = tr (π΄π) = ∑π π ≤ ∑ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ ∑ππ . π=1 (12) The following fact [11] is a direct consequence of Lemma 3. Lemma 4. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Then, the smallest singular value and the smallest eigenvalue’s modulus of the orthogonal projection π΄π of the identity onto the subspace π΄π are never greater than 1. That is, σ΅¨ σ΅¨ 0 < ππ ≤ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ 1. (13) The following theorem [11] establishes a tight connection between the closeness of matrix π΄π to the identity matrix and the closeness of ππ (|π π |) to the unity. 2 σ΅¨ σ΅¨2 (1 − σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨) ≤ (1 − ππ ) ≤ βπ΄π − πΌβ2πΉ σ΅¨ σ΅¨2 ≤ π (1 − σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ) ≤ π (1 − ππ2 ) . (14) Remark 6. Theorem 5 states that the closer the smallest singular value ππ of matrix π΄π is to the unity, the closer matrix π΄π will be to the identity, that is, the smaller βπ΄π − πΌβπΉ will be, and conversely. The same happens with the smallest eigenvalue’s modulus |π π | of matrix π΄π. In other words, we get a good approximate inverse π of π΄ when ππ (|π π |) is sufficiently close to 1. To finish this section, let us mention that, recently, lower and upper bounds on the normalized Frobenius condition number of the orthogonal projection π΄π of the identity onto the subspace π΄π have been derived in [12]. In addition, this work proposes a natural generalization (related to an arbitrary matrix subspace π of Rπ×π ) of the normalized Frobenius condition number of the nonsingular matrix π΄. 3. Geometrical Properties In this section, we present some new geometrical properties for matrix π΄π, π being the optimal approximate inverse of matrix π΄, defined by (3). Our first lemma states some basic properties involving the cosine of the angle between matrix π΄π and the identity, that is, cos (π΄π, πΌ) = tr (π΄π) β¨π΄π, πΌβ©πΉ = . βπ΄πβπΉ βπΌβπΉ βπ΄πβπΉ √π (15) Lemma 7. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, cos (π΄π, πΌ) = tr (π΄π) βπ΄πβπΉ √tr (π΄π) = = , √π √π √π βπ΄πβπΉ (16) 0 ≤ cos (π΄π, πΌ) ≤ 1, (17) βπ΄π − πΌβ2πΉ = π (1 − cos2 (π΄π, πΌ)) . (18) Proof. First, using (15) and (4) we immediately obtain (16). As a direct consequence of (16), we derive that cos(π΄π, πΌ) is always nonnegative. Finally, using (5) and (16), we get βπ΄π − πΌβ2πΉ = π − tr (π΄π) = π (1 − cos2 (π΄π, πΌ)) (19) and the proof is concluded. Remark 8. In [13], the authors consider an arbitrary approximate inverse π of matrix π΄ and derive the following equality: 2 βπ΄π − πΌβ2πΉ = (βπ΄πβπΉ − βπΌβπΉ ) + 2 (1 − cos (π΄π, πΌ)) βπ΄πβπΉ βπΌβπΉ , (20) 4 Journal of Applied Mathematics that is, the typical decomposition (valid in any inner product space) of the strong convergence into the convergence of the norms (βπ΄πβπΉ − βπΌβπΉ )2 and the weak convergence (1 − cos(π΄π, πΌ))βπ΄πβπΉ βπΌβπΉ . Note that for the special case that π is the optimal approximate inverse π defined by (3), formula (18) has stated that the strong convergence is reduced just to the weak convergence and, indeed, just to the cosine cos(π΄π, πΌ). Remark 9. More precisely, formula (18) states that the closer cos(π΄π, πΌ) is to the unity (i.e., the smaller the angle ∠(π΄π, πΌ) is), the smaller βπ΄π − πΌβπΉ will be, and conversely. This gives us a new measure of the quality (in the Frobenius sense) of the approximate inverse π of matrix π΄, by comparing the minimum residual norm βπ΄π − πΌβπΉ with the cosine of the angle between π΄π and the identity, instead of with tr(π΄π), βπ΄πβπΉ (Lemma 1), or ππ , |π π | (Theorem 5). So for a fixed nonsingular matrix π΄ ∈ Rπ×π and for different subspaces π ⊂ Rπ×π , we have σ΅¨ σ΅¨ tr (π΄π) β π ⇐⇒ βπ΄πβπΉ β √π ⇐⇒ ππ β 1 ⇐⇒ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ β 1 ⇐⇒ cos (π΄π, πΌ) β 1 ⇐⇒ βπ΄π − πΌβπΉ β 0. (21) Obviously, the optimal theoretical situation corresponds to the case tr (π΄π) = π ⇐⇒ βπ΄πβπΉ = √π ⇐⇒ ππ = 1 ⇐⇒ π π = 1 ⇐⇒ cos (π΄π, πΌ) = 1 ⇐⇒ βπ΄π − πΌβπΉ = 0 ⇐⇒ π = π΄−1 ⇐⇒ π΄−1 ∈ π. (22) Remark 10. Note that the ratio between cos(π΄π, πΌ) and cos(π΄, πΌ) is independent of the order π of matrix π΄. Indeed, assuming that tr(π΄) =ΜΈ 0 and using (16), we immediately obtain cos (π΄π, πΌ) βπ΄πβπΉ : √π βπ΄βπΉ = . = βπ΄πβπΉ cos (π΄, πΌ) tr (π΄) tr (π΄) : βπ΄βπΉ √π (23) The following lemma compares the trace and the Frobenius norm of the orthogonal projection π΄π. Lemma 11. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, tr (π΄π) ≤ βπ΄πβπΉ ⇐⇒ βπ΄πβπΉ ≤ 1 ⇐⇒ tr (π΄π) ≤ 1 ⇐⇒ cos (π΄π, πΌ) ≤ 1 , √π (24) tr (π΄π) ≥ βπ΄πβπΉ ⇐⇒ βπ΄πβπΉ ≥ 1 ⇐⇒ tr (π΄π) ≥ 1 ⇐⇒ cos (π΄π, πΌ) ≥ 1 . √π (25) Proof. Using (4), we immediately obtain the four leftmost equivalences. Using (16), we immediately obtain the two rightmost equivalences. The next lemma provides us with a relationship between the Frobenius norms of the inverses of matrices π΄ and its best approximate inverse π in subspace π. Lemma 12. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, σ΅©σ΅© −1 σ΅©σ΅© σ΅©σ΅© −1 σ΅©σ΅© σ΅©σ΅©π΄ σ΅©σ΅© σ΅©σ΅©π σ΅©σ΅© ≥ 1. (26) σ΅©πΉ σ΅© σ΅©πΉ σ΅© Proof. Using (4), we get σ΅© σ΅© βπ΄πβπΉ ≤ √π = βπΌβπΉ = σ΅©σ΅©σ΅©σ΅©(π΄π) (π΄π)−1 σ΅©σ΅©σ΅©σ΅©πΉ σ΅© σ΅© σ΅© σ΅© ≤ βπ΄πβπΉ σ΅©σ΅©σ΅©σ΅©(π΄π)−1 σ΅©σ΅©σ΅©σ΅©πΉ σ³¨⇒ σ΅©σ΅©σ΅©σ΅©(π΄π)−1 σ΅©σ΅©σ΅©σ΅©πΉ ≥ 1, and hence σ΅©σ΅©σ΅©π΄−1 σ΅©σ΅©σ΅© σ΅©σ΅©σ΅©π−1 σ΅©σ΅©σ΅© ≥ σ΅©σ΅©σ΅©π−1 π΄−1 σ΅©σ΅©σ΅© = σ΅©σ΅©σ΅©(π΄π)−1 σ΅©σ΅©σ΅© ≥ 1, σ΅©σ΅©πΉ σ΅©σ΅© σ΅©σ΅©πΉ σ΅©σ΅© σ΅©σ΅©πΉ σ΅©σ΅© σ΅©σ΅©πΉ σ΅©σ΅© (27) (28) and the proof is concluded. The following lemma compares the minimum residual norm βπ΄π − πΌβπΉ with the distance (with respect to the Frobenius norm) βπ΄−1 − πβπΉ between the inverse of π΄ and the optimal approximate inverse π of π΄ in any subspace π ⊂ Rπ×π . First, note that for any two matrices π΄, π΅ ∈ Rπ×π (π΄ nonsingular), from the submultiplicative property of the Frobenius norm, we immediately get σ΅© σ΅©2 βπ΄π΅ − πΌβ2πΉ = σ΅©σ΅©σ΅©σ΅©π΄ (π΅ − π΄−1 )σ΅©σ΅©σ΅©σ΅©πΉ σ΅© σ΅©2 ≤ βπ΄β2πΉ σ΅©σ΅©σ΅©σ΅©π΅ − π΄−1 σ΅©σ΅©σ΅©σ΅©πΉ . (29) However, for the special case that π΅ = π (the solution to the problem (3)), we also get the following inequality. Lemma 13. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, σ΅© σ΅© βπ΄π − πΌβ2πΉ ≤ βπ΄βπΉ σ΅©σ΅©σ΅©σ΅©π − π΄−1 σ΅©σ΅©σ΅©σ΅©πΉ . (30) Proof. Using the Cauchy-Schwarz inequality and (5), we get σ΅¨ σ΅© σ΅¨σ΅¨ −1 σ΅© σ΅© σ΅© σ΅¨σ΅¨β¨π΄ − π, π΄π β© σ΅¨σ΅¨σ΅¨ ≤ σ΅©σ΅©σ΅©π΄−1 − πσ΅©σ΅©σ΅© σ΅©σ΅©σ΅©π΄π σ΅©σ΅©σ΅© σ΅© σ΅¨ σ΅©πΉ σ΅© σ΅©πΉ πΉσ΅¨ σ΅¨ σ΅¨ σ΅© σ΅© σ΅© σ΅© σ³¨⇒ σ΅¨σ΅¨σ΅¨σ΅¨tr ((π΄−1 − π) π΄)σ΅¨σ΅¨σ΅¨σ΅¨ ≤ σ΅©σ΅©σ΅©σ΅©π΄−1 − πσ΅©σ΅©σ΅©σ΅©πΉ σ΅©σ΅©σ΅©σ΅©π΄π σ΅©σ΅©σ΅©σ΅©πΉ σ΅© σ΅¨ σ΅¨ σ΅© σ³¨⇒ σ΅¨σ΅¨σ΅¨σ΅¨tr (π΄ (π΄−1 − π))σ΅¨σ΅¨σ΅¨σ΅¨ ≤ σ΅©σ΅©σ΅©σ΅©π − A−1 σ΅©σ΅©σ΅©σ΅©πΉ βπ΄βπΉ σ΅© σ΅© σ³¨⇒ |tr (πΌ − π΄π)| ≤ σ΅©σ΅©σ΅©σ΅©π − π΄−1 σ΅©σ΅©σ΅©σ΅©πΉ βπ΄βπΉ σ΅© σ΅© σ³¨⇒ π − tr (π΄π) ≤ σ΅©σ΅©σ΅©σ΅©π − π΄−1 σ΅©σ΅©σ΅©σ΅©πΉ βπ΄βπΉ σ΅© σ΅© σ³¨⇒ βπ΄π − πΌβ2πΉ ≤ βπ΄βπΉ σ΅©σ΅©σ΅©σ΅©π − π΄−1 σ΅©σ΅©σ΅©σ΅©πΉ . (31) Journal of Applied Mathematics 5 The following extension of the Cauchy-Schwarz inequality, in a real or complex inner product space (π», β¨⋅, ⋅β©), was obtained by Buzano [14]. For all π, π₯, π ∈ π», we have |β¨π, π₯β© ⋅ β¨π₯, πβ©| ≤ 1 (βπβ βπβ + |β¨π, πβ©|) βπ₯β2 . 2 (32) The next lemma provides us with lower and upper bounds on the inner product β¨π΄π, π΅β©πΉ , for any π × π real matrix π΅. π×π Lemma 14. Let π΄ ∈ R be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, for every π΅ ∈ Rπ×π , we have σ΅¨ 1 σ΅¨σ΅¨ σ΅¨σ΅¨β¨π΄π, π΅β©πΉ σ΅¨σ΅¨σ΅¨ ≤ (√πβπ΅βπΉ + |tr (π΅)|) . 2 π σ΅¨ σ΅¨ (βπ΄βπΉ βπ΄πβπΉ + σ΅¨σ΅¨σ΅¨σ΅¨β¨π΄π , π΄πβ©πΉ σ΅¨σ΅¨σ΅¨σ΅¨) 2 π σ΅© σ΅© ≤ (βπ΄βπΉ βπ΄πβπΉ + σ΅©σ΅©σ΅©σ΅©π΄π σ΅©σ΅©σ΅©σ΅©πΉ βπ΄πβπΉ ) 2 ≤ = πβπ΄βπΉ βπ΄πβπΉ σ³¨⇒ |tr (π΄)| βπ΄πβ2πΉ ≤ πβπ΄βπΉ βπ΄πβπΉ σ³¨⇒ βπ΄πβπΉ βπ΄βπΉ ≤ π |tr (π΄)| σ³¨⇒ πβπ΄β2πΉ βπ΄πβ2πΉ βπ΄β2πΉ tr (π΄π) ≤ σ³¨⇒ . ≤ π2 π [tr (π΄)]2 [tr (π΄)]2 (33) (36) Proof. Using (32) for π = πΌ, π₯ = π΄π, π = π΅, and (4), we get σ΅¨σ΅¨ σ΅¨ 1 σ΅¨ σ΅¨ 2 σ΅¨σ΅¨β¨πΌ, π΄πβ©πΉ ⋅ β¨π΄π, π΅β©πΉ σ΅¨σ΅¨σ΅¨ ≤ (βπΌβπΉ βπ΅βπΉ + σ΅¨σ΅¨σ΅¨β¨πΌ, π΅β©πΉ σ΅¨σ΅¨σ΅¨) βπ΄πβπΉ 2 σ΅¨ σ΅¨ σ³¨⇒ σ΅¨σ΅¨σ΅¨tr (π΄π) ⋅ β¨π΄π, π΅β©πΉ σ΅¨σ΅¨σ΅¨ ≤ 1 (√πβπ΅βπΉ + |tr (π΅)|) βπ΄πβ2πΉ 2 σ΅¨ 1 σ΅¨ σ³¨⇒ σ΅¨σ΅¨σ΅¨β¨π΄π, π΅β©πΉ σ΅¨σ΅¨σ΅¨ ≤ (√πβπ΅βπΉ + |tr (π΅)|) . 2 (34) The next lemma provides an upper bound on the arithmetic mean of the squares of the π2 terms in the orthogonal projection π΄π. By the way, it also provides us with an upper bound on the arithmetic mean of the π diagonal terms in the orthogonal projection π΄π. These upper bounds (valid for any matrix subspace π) are independent of the optimal approximate inverse π, and thus they are independent of the subspace π and only depend on matrix π΄. Lemma 15. Let π΄ ∈ Rπ×π be nonsingular with tr(π΄) =ΜΈ 0 and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, βπ΄πβ2πΉ βπ΄β2πΉ ≤ , π2 [tr (π΄)]2 πβπ΄β2πΉ tr (π΄π) . ≤ π [tr (π΄)]2 (35) Proof. Using (32) for π = π΄π , π₯ = πΌ, and π = π΄π, and the Cauchy-Schwarz inequality for β¨π΄π , π΄πβ©πΉ and (4), we get σ΅¨σ΅¨ π σ΅¨ σ΅¨σ΅¨β¨π΄ , πΌβ© ⋅ β¨πΌ, π΄πβ©πΉ σ΅¨σ΅¨σ΅¨ σ΅¨ σ΅¨ πΉ ≤ 1 σ΅©σ΅© π σ΅©σ΅© σ΅¨ σ΅¨ (σ΅©σ΅©π΄ σ΅©σ΅© βπ΄πβπΉ + σ΅¨σ΅¨σ΅¨σ΅¨β¨π΄π , π΄πβ©πΉ σ΅¨σ΅¨σ΅¨σ΅¨) βπΌβ2πΉ 2 σ΅© σ΅©πΉ σ³¨⇒ |tr (π΄) ⋅ tr (π΄π)| Remark 16. Lemma 15 has the following interpretation in terms of the quality of the optimal approximate inverse π of matrix π΄ in subspace π. The closer the ratio πβπ΄βπΉ /| tr(π΄)| is to zero, the smaller tr(π΄π) will be, and thus, due to (5), the larger βπ΄π − πΌβπΉ will be, and this happens for any matrix subspace π. Remark 17. By the way, from Lemma 15, we obtain the following inequality for any nonsingular matrix π΄ ∈ Rπ×π . Consider any matrix subspace π s.t. π΄−1 ∈ π. Then, π = π΄−1 , and using Lemma 15, we get βπ΄πβ2πΉ βπΌβ2πΉ 1 βπ΄β2πΉ = = ≤ π2 π2 n [tr (π΄)]2 σ³¨⇒ |tr (π΄)| ≤ √πβπ΄βπΉ . (37) 4. Spectral Properties In this section, we present some new spectral properties for matrix π΄π, π being the optimal approximate inverse of matrix π΄, defined by (3). Mainly, we focus on the case that matrix π΄π is symmetric and positive definite. This has been motivated by the following reason. When solving a large nonsymmetric linear system (1) by using Krylov methods, a possible strategy consists of searching for an adequate optimal preconditioner π such that the preconditioned matrix π΄π is symmetric positive definite [5]. This enables one to use the conjugate gradient method (CG-method), which is, in general, a computationally efficient method for solving the new preconditioned system [2, 15]. Our starting point is Lemma 3, which has established that the sets of eigenvalues and singular values of any orthogonal projection π΄π satisfy π π π π=1 π=1 π=1 σ΅¨ σ΅¨2 ∑π2π ≤ ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ ∑ππ2 π π π π=1 π=1 π=1 σ΅¨ σ΅¨ = ∑π π ≤ ∑ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ ∑ππ . Let us particularize (38) for some special cases. (38) 6 Journal of Applied Mathematics First, note that if π΄π is normal (i.e., for all 1 ≤ π ≤ π: ππ = |π π | [16]), then (38) becomes π π π=1 π=1 π σ΅¨ σ΅¨2 ∑π2π ≤ ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ2 π=1 π π π π=1 π=1 π=1 (39) σ΅¨ σ΅¨ = ∑π π ≤ ∑ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ . In particular, if π΄π is symmetric (ππ = |π π | = ±π π ∈ R), then (38) becomes π π π π=1 π=1 π=1 σ΅¨ σ΅¨2 ∑π2π = ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ2 π π π π=1 π=1 π=1 (40) The rest of the paper is devoted to obtain new properties about the eigenvalues of the orthogonal projection π΄π for the special case that this matrix is symmetric positive definite. First, let us recall that the smallest singular value and the smallest eigenvalue’s modulus of the orthogonal projection π΄π are never greater than 1 (see Lemma 4). The following theorem establishes the dual result for the largest eigenvalue of matrix π΄π (symmetric positive definite). Theorem 19. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Suppose that matrix π΄π is symmetric and positive definite. Then, the largest eigenvalue of the orthogonal projection π΄π of the identity onto the subspace π΄π is never less than 1. That is, π1 = π 1 ≥ 1. σ΅¨ σ΅¨ = ∑π π ≤ ∑ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ . In particular, if π΄π is symmetric and positive definite (ππ = |π π | = π π ∈ R+ ), then the equality holds in all (38), that is, π π π π=1 π=1 π=1 σ΅¨ σ΅¨2 ∑π2π = ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ2 π π π π=1 π=1 π=1 (41) σ΅¨ σ΅¨ = ∑π π = ∑ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ = ∑ππ . The next lemma compares the traces of matrices π΄π and (π΄π)2 . Lemma 18. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Then, (i) for any orthogonal projection π΄π tr ((π΄π)2 ) ≤ βπ΄πβ2πΉ = tr (π΄π) , (ii) for any symmetric orthogonal projection π΄π σ΅©σ΅© σ΅© σ΅©σ΅©(π΄π)2 σ΅©σ΅©σ΅© ≤ tr ((π΄π)2 ) = βπ΄πβ2πΉ = tr (π΄π) , σ΅© σ΅©πΉ (42) (43) (iii) for any symmetric positive definite orthogonal projection π΄π σ΅©σ΅© σ΅© σ΅©σ΅©(π΄π)2 σ΅©σ΅©σ΅© ≤ tr ((π΄π)2 ) = βπ΄πβ2πΉ = tr (π΄π) ≤ [tr (π΄π)]2 . σ΅© σ΅©πΉ (44) Proof. (i) Using (38), we get π π π π=1 π=1 π=1 ∑π2π ≤ ∑ππ2 = ∑π π . (45) Proof. Using (41), we get π π π−1 π=1 π=1 π=1 ∑π2π = ∑π π σ³¨⇒ π2π − π π = ∑ (π π − π2π ) . π π π π=1 π=1 π=1 (46) (iii) It suffices to use (43) and the fact that (see, e.g., [17, 18]) if π and π are symmetric positive definite matrices then tr(ππ) ≤ tr(π) tr(π) for π = π = π΄π. (48) Now, since π π ≤ 1 (Lemma 4), then π2π − π π ≤ 0. This implies that at least one summand in the rightmost sum in (48) must be less than or equal to zero. Suppose that such summand is the πth one (1 ≤ π ≤ π − 1). Since π΄π is positive definite, then π π > 0, and thus π π − π2π ≤ 0 σ³¨⇒ π π ≤ π2π σ³¨⇒ π π ≥ 1 σ³¨⇒ π 1 ≥ 1 (49) and the proof is concluded. In Theorem 19, the assumption that matrix π΄π is positive definite is essential for assuring that |π 1 | ≥ 1, as the following simple counterexample shows. Moreover, from Lemma 4 and Theorem 19, respectively, we have that the smallest and largest eigenvalues of π΄π (symmetric positive definite) satisfy π π ≤ 1 and π 1 ≥ 1, respectively. Nothing can be asserted about the remaining eigenvalues of the symmetric positive definite matrix π΄π, which can be greater than, equal to, or less than the unity, as the same counterexample also shows. Example 20. For π = 3, let 3 0 0 π΄ π = (0 π 0) , 0 0 1 π ∈ R, (50) let πΌ3 be identity matrix of order 3, and let π be the subspace of all 3×3 scalar matrices; that is, π = span{πΌ3 }. Then the solution ππ to the problem (3) for subspace π can be immediately obtained by using formula (6) as follows: (ii) It suffices to use the obvious fact that β(π΄π)2 βπΉ ≤ βπ΄πβ2πΉ and the following equalities taken from (40): ∑π2π = ∑ππ2 = ∑π π . (47) tr (π΄ ) π+4 ππ = σ΅© σ΅©π2 πΌ3 = 2 πΌ σ΅©σ΅©π΄ π σ΅©σ΅© π + 10 3 σ΅© σ΅©πΉ (51) and then we get π΄ π ππ = 3 0 0 π+4 π+4 0 π 0) . = π΄ ( π2 + 10 π π2 + 10 0 0 1 (52) Journal of Applied Mathematics 7 Let us arrange the eigenvalues and singular values of matrix π΄ π ππ , as usual, in nonincreasing order (as shown in (11)). On one hand, for π = −2, we have π΄ −2 π−2 = 3 0 0 1 (0 −2 0) , 7 0 0 1 π+4 π2 + 10 5 π= : 2 9 > 1, 7 π2 = π 2 = 6 < 1, 7 π3 = π 3 = 3 < 1, 7 π1 = π 1 = 6 > 1, 5 π2 = π 2 = 1, π3 = π 3 = 8 π= : 3 π3 = π 3 = π π π=1 π=1 σ΅¨ σ΅¨ σ³¨⇒ σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ , (55) (60) ππ ≤ √π, ∀π = 1, 2, . . . , π. Our last theorem improves the upper bound given in (60) for the special case that the orthogonal projection π΄π is symmetric positive definite. Theorem 22. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the problem (3). Suppose that matrix π΄π is symmetric and positive definite. Then, all the eigenvalues of matrix π΄π satisfy ππ = π π ≤ (56) 2 < 1, 5 90 π1 = π 1 = > 1, 77 π2 = π 2 = Let us mention that an upper bound on all the eigenvalues moduli and on all singular values of any orthogonal projection π΄π can be immediately obtained from (38) and (4) as follows: σ΅¨ σ΅¨2 ∑σ΅¨σ΅¨σ΅¨π π σ΅¨σ΅¨σ΅¨ ≤ ∑ππ2 = βπ΄πβ2πΉ ≤ π π+4 , π2 + 10 π1 = π 1 = (59) Finally, (58) follows immediately from (57) and (25). and then π = 2: σ³¨⇒ 1 ≤ βπ΄πβπΉ ≤ tr (π΄π) = βπ΄πβ2πΉ ≤ π. (54) Hence, π΄ −2 π−2 is indefinite and π1 = |π 1 | = 3/7 < 1. On the other hand, for 1 < π < 3, we have (see matrix (52)) ≥ π3 = π 3 = (58) βπ΄πβπΉ ≥ βπ΄πβ2 = π1 = π 1 ≥ 1 σ΅¨ σ΅¨ 1 ≥ π3 = σ΅¨σ΅¨σ΅¨π 3 σ΅¨σ΅¨σ΅¨ = . 7 π+4 ≥ π2 = π 2 = π 2 π + 10 1 . √π cos (π΄π, πΌ) ≥ (57) Proof. Denote by β ⋅ β2 the spectral norm. Using the wellknown inequality β ⋅ β2 ≤ β ⋅ βπΉ [19], Theorem 19, and (4), we get σ΅¨ σ΅¨ 3 π1 = σ΅¨σ΅¨σ΅¨π 1 σ΅¨σ΅¨σ΅¨ = 7 π1 = π 1 = 3 1 ≤ βπ΄πβπΉ ≤ tr (π΄π) = βπ΄πβ2πΉ ≤ π, (53) and then σ΅¨ σ΅¨ 2 ≥ π2 = σ΅¨σ΅¨σ΅¨π 2 σ΅¨σ΅¨σ΅¨ = 7 problem (3). Suppose that matrix π΄π is symmetric and positive definite. Then, 80 > 1, 77 30 < 1. 77 1 + √π 2 ∀π = 1, 2, . . . , π. (61) Proof. First, note that the assertion is obvious for the smallest singular value since |π π | ≤ 1 for any orthogonal projection π΄π (Lemma 4). For any eigenvalue of π΄π, we use the fact that π₯ − π₯2 ≤ 1/4 for all π₯ > 0. Then from (41), we get π π π=1 π=1 ∑π2π = ∑π π π σ³¨⇒ π21 − π 1 = ∑ (π π − π2π ) ≤ π=2 σ³¨⇒ π 1 ≤ π−1 4 (62) 1 + √π 1 + √π σ³¨⇒ π π ≤ 2 2 ∀π = 1, 2, . . . , π. Hence, for π΄ π ππ positive definite, we have (depending on π) π 2 < 1, π 2 = 1, or π 2 > 1. The following corollary improves the lower bound zero on both tr(π΄π), given in (4), and cos(π΄π, πΌ), given in (17). 5. Conclusion Corollary 21. Let π΄ ∈ Rπ×π be nonsingular and let π be a linear subspace of Rπ×π . Let π be the solution to the In this paper, we have considered the orthogonal projection π΄π (in the Frobenius sense) of the identity matrix onto an 8 arbitrary matrix subspace π΄π (π΄ ∈ Rπ×π nonsingular, π ⊂ Rπ×π ). Among other geometrical properties of matrix π΄π, we have established a strong relation between the quality of the approximation π΄π ≈ πΌ and the cosine of the angle ∠(π΄π, πΌ). Also, the distance between π΄π and the identity has been related to the ratio πβπ΄βπΉ /| tr(π΄)| (which is independent of the subspace π). The spectral analysis has provided lower and upper bounds on the largest eigenvalue of the symmetric positive definite orthogonal projections of the identity. Acknowledgments The authors are grateful to the anonymous referee for valuable comments and suggestions, which have improved the earlier version of this paper. This work was partially supported by the “Ministerio de EconomΔ±Μa y Competitividad” (Spanish Government), and FEDER, through Grant Contract CGL201129396-C03-01. Journal of Applied Mathematics [14] [15] [16] [17] [18] [19] References [1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, Mass, USA, 1994. [2] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, Mass, USA, 1996. [3] S.-L. Wu, F. Chen, and X.-Q. Niu, “A note on the eigenvalue analysis of the SIMPLE preconditioning for incompressible flow,” Journal of Applied Mathematics, vol. 2012, Article ID 564132, 7 pages, 2012. [4] M. Benzi, “Preconditioning techniques for large linear systems: a survey,” Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002. [5] G. Montero, L. GonzaΜlez, E. FloΜrez, M. D. GarcΔ±Μa, and A. SuaΜrez, “Approximate inverse computation using Frobenius inner product,” Numerical Linear Algebra with Applications, vol. 9, no. 3, pp. 239–247, 2002. [6] A. SuaΜrez and L. GonzaΜlez, “A generalization of the MoorePenrose inverse related to matrix subspaces of πΆπ×π ,” Applied Mathematics and Computation, vol. 216, no. 2, pp. 514–522, 2010. [7] E. Chow and Y. Saad, “Approximate inverse preconditioners via sparse-sparse iterations,” SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 995–1023, 1998. [8] E. Chow and Y. Saad, “Approximate inverse techniques for block-partitioned matrices,” SIAM Journal on Scientific Computing, vol. 18, no. 6, pp. 1657–1675, 1997. [9] R. M. Holland, A. J. Wathen, and G. J. Shaw, “Sparse approximate inverses and target matrices,” SIAM Journal on Scientific Computing, vol. 26, no. 3, pp. 1000–1011, 2005. [10] R. Andreani, M. Raydan, and P. Tarazaga, “On the geometrical structure of symmetric matrices,” Linear Algebra and Its Applications, vol. 438, no. 3, pp. 1201–1214, 2013. [11] L. GonzaΜlez, “Orthogonal projections of the identity: spectral analysis and applications to approximate inverse preconditioning,” SIAM Review, vol. 48, no. 1, pp. 66–75, 2006. [12] A. SuaΜrez and L. GonzaΜlez, “Normalized Frobenius condition number of the orthogonal projections of the identity,” Journal of Mathematical Analysis and Applications, vol. 400, no. 2, pp. 510–516, 2013. [13] J.-P. Chehab and M. Raydan, “Geometrical properties of the Frobenius condition number for positive definite matrices,” Linear Algebra and Its Applications, vol. 429, no. 8-9, pp. 2089– 2097, 2008. M. L. Buzano, “Generalizzazione della diseguaglianza di Cauchy-Schwarz,” Rendiconti del Seminario Matematico UniversitaΜ e Politecnico di Torino, vol. 31, pp. 405–409, 1974 (Italian). M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991. E. H. Lieb and W. Thirring, “Inequalities for the moments of the eigenvalues of the SchroΜdinger Hamiltonian and their relation to Sobolev inequalities,” in Studies in Mathematical Physics: Essays in Honor of Valentine Bargmann, E. Lieb, B. Simon, and A. Wightman, Eds., pp. 269–303, Princeton University Press, Princeton, NJ, USA, 1976. Z. UlukoΜk and R. TuΜrkmen, “On some matrix trace inequalities,” Journal of Inequalities and Applications, vol. 2010, Article ID 201486, 8 pages, 2010. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985. Advances in Operations Research Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Advances in Decision Sciences Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Mathematical Problems in Engineering Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Journal of Algebra Hindawi Publishing Corporation http://www.hindawi.com Probability and Statistics Volume 2014 The Scientific World Journal Hindawi Publishing Corporation http://www.hindawi.com Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 International Journal of Differential Equations Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Volume 2014 Submit your manuscripts at http://www.hindawi.com International Journal of Advances in Combinatorics Hindawi Publishing Corporation http://www.hindawi.com Mathematical Physics Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Journal of Complex Analysis Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 International Journal of Mathematics and Mathematical Sciences Journal of Hindawi Publishing Corporation http://www.hindawi.com Stochastic Analysis Abstract and Applied Analysis Hindawi Publishing Corporation http://www.hindawi.com Hindawi Publishing Corporation http://www.hindawi.com International Journal of Mathematics Volume 2014 Volume 2014 Discrete Dynamics in Nature and Society Volume 2014 Volume 2014 Journal of Journal of Discrete Mathematics Journal of Volume 2014 Hindawi Publishing Corporation http://www.hindawi.com Applied Mathematics Journal of Function Spaces Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Optimization Hindawi Publishing Corporation http://www.hindawi.com Volume 2014 Hindawi Publishing Corporation http://www.hindawi.com Volume 2014