4.9 Orthogonal Bases and the Gram-Schmidt Algorithm

In Section 4.8 we discussed the problem of finding the orthogonal projection p of the vector b into the subspace V of R^m. If the vectors v_1, v_2, ..., v_n form a basis for V, and the m x n matrix A has these basis vectors as its column vectors, then the orthogonal projection p is given by

    p = Ax,  (1)

where x is the (unique) solution of the normal system

    A^T A x = A^T b.  (2)

The formula for p takes an especially simple and attractive form when the basis vectors v_1, ..., v_n are mutually orthogonal.

DEFINITION  Orthogonal Basis
An orthogonal basis for the subspace V of R^n is a basis consisting of vectors v_1, v_2, ..., v_n that are mutually orthogonal, so that v_i . v_j = 0 if i != j. If in addition these basis vectors are unit vectors, so that |v_i| = 1 for i = 1, 2, ..., n, then the orthogonal basis is called an orthonormal basis.

Example 1  The vectors

    v_1 = (1, 1, 0),  v_2 = (1, -1, 2),  v_3 = (-1, 1, 1)

form an orthogonal basis for R^3. We can "normalize" this orthogonal basis by dividing each basis vector by its length: if

    w_i = v_i / |v_i|  (i = 1, 2, 3),

then the vectors

    w_1 = (1/sqrt(2))(1, 1, 0),  w_2 = (1/sqrt(6))(1, -1, 2),  w_3 = (1/sqrt(3))(-1, 1, 1)

form an orthonormal basis for R^3.

Now suppose that the column vectors v_1, v_2, ..., v_n of the m x n matrix A form an orthogonal basis for the subspace V of R^m. Then

    A^T A = diag(v_1 . v_1, v_2 . v_2, ..., v_n . v_n).  (3)

Thus the coefficient matrix in the normal system in (2) is a diagonal matrix, so the normal equations simplify to

    (v_1 . v_1) x_1 = v_1 . b,  (v_2 . v_2) x_2 = v_2 . b,  ...,  (v_n . v_n) x_n = v_n . b.  (4)

Consequently the solution x = (x_1, x_2, ..., x_n) of the normal system A^T A x = A^T b is given by

    x_i = (v_i . b)/(v_i . v_i)  (i = 1, 2, ..., n).  (5)

When we substitute this solution in p = Ax = x_1 v_1 + x_2 v_2 + ... + x_n v_n, we get the following result.

THEOREM 1  Projection into an Orthogonal Basis
Suppose that the vectors v_1, v_2, ..., v_n form an orthogonal basis for the subspace V of R^m.
Then the orthogonal projection p of the vector b into V is given by

    p = (v_1 . b)/(v_1 . v_1) v_1 + (v_2 . b)/(v_2 . v_2) v_2 + ... + (v_n . b)/(v_n . v_n) v_n.  (6)

Note that if the basis v_1, ..., v_n is orthonormal rather than merely orthogonal, then the formula for the orthogonal projection p simplifies still further:

    p = (v_1 . b) v_1 + (v_2 . b) v_2 + ... + (v_n . b) v_n.

If n = 1 and v = v_1, then the formula in (6) reduces to the formula

    p = ((v . b)/(v . v)) v  (7)

for the component of b parallel to v. The component of b orthogonal to v is then

    q = b - p = b - ((v . b)/(v . v)) v.  (8)

Theorem 1 may also be stated without symbolism: The orthogonal projection p of the vector b into the subspace V is the sum of the components of b parallel to the orthogonal basis vectors v_1, v_2, ..., v_n for V.

Example 2  The vectors

    v_1 = (1, 1, 0, 1),  v_2 = (1, -2, 3, 1),  v_3 = (-4, 3, 3, 1)

are mutually orthogonal, and hence form an orthogonal basis for a 3-dimensional subspace V of R^4. To find the orthogonal projection p of b = (0, 7, 0, 7) into V, we use the formula in (6) and get

    p = (v_1 . b)/(v_1 . v_1) v_1 + (v_2 . b)/(v_2 . v_2) v_2 + (v_3 . b)/(v_3 . v_3) v_3
      = (14/3)(1, 1, 0, 1) + (-7/15)(1, -2, 3, 1) + (28/35)(-4, 3, 3, 1);

therefore p = (1, 8, 1, 5). The component of the vector b orthogonal to the subspace V is

    q = b - p = (0, 7, 0, 7) - (1, 8, 1, 5) = (-1, -1, -1, 2).

Example 2 illustrates how useful orthogonal bases can be for computational purposes.

There is a standard process that is used to transform a given linearly independent set of vectors v_1, ..., v_n in R^m into a mutually orthogonal set of vectors u_1, ..., u_n that span the same subspace of R^m. We begin with

    u_1 = v_1.  (9)

To get the second vector u_2, we subtract from v_2 its component parallel to u_1. That is,

    u_2 = v_2 - (u_1 . v_2)/(u_1 . u_1) u_1  (10)

is the component of v_2 orthogonal to u_1. At this point it is clear that u_1 and u_2 form an orthogonal basis for the 2-dimensional subspace V_2 spanned by v_1 and v_2. To get the third vector u_3, we subtract from v_3 its components parallel to u_1 and u_2. Thus
    u_3 = v_3 - (u_1 . v_3)/(u_1 . u_1) u_1 - (u_2 . v_3)/(u_2 . u_2) u_2  (11)

is the component of v_3 orthogonal to the subspace V_2 spanned by u_1 and u_2. Having defined u_1, ..., u_k in this manner, we take u_{k+1} to be the component of v_{k+1} orthogonal to the subspace V_k spanned by u_1, ..., u_k. This process for constructing the mutually orthogonal vectors u_1, u_2, ..., u_n is summarized in the following algorithm.

ALGORITHM  Gram-Schmidt Orthogonalization
To replace the linearly independent vectors v_1, v_2, ..., v_n one by one with mutually orthogonal vectors u_1, u_2, ..., u_n that span the same subspace of R^m, begin with

    u_1 = v_1.  (12)

For k = 1, 2, ..., n - 1 in turn, take

    u_{k+1} = v_{k+1} - (u_1 . v_{k+1})/(u_1 . u_1) u_1 - (u_2 . v_{k+1})/(u_2 . u_2) u_2 - ... - (u_k . v_{k+1})/(u_k . u_k) u_k.  (13)

The formula in (13) means that we get u_{k+1} from v_{k+1} by subtracting from v_{k+1} its components parallel to the mutually orthogonal vectors u_1, ..., u_k previously constructed, and hence u_{k+1} is orthogonal to each of the first k vectors. If we assume inductively that the vectors u_1, u_2, ..., u_k span the same subspace as the original k vectors v_1, ..., v_k, then it also follows from (13) that the vectors u_1, u_2, ..., u_{k+1} span the same subspace as the vectors v_1, ..., v_{k+1}. If we begin with a basis v_1, ..., v_n for the subspace V of R^m, the final result of carrying out the Gram-Schmidt algorithm is therefore an orthogonal basis u_1, ..., u_n for V.

THEOREM 2  Existence of Orthogonal Bases
Every nonzero subspace of R^m has an orthogonal basis.

As a final step, one can convert the orthogonal basis to an orthonormal basis by dividing each of the orthogonal basis vectors u_1, u_2, ..., u_n by its length.

In numerical applications of the Gram-Schmidt algorithm, the calculations involved often can be simplified by multiplying each vector u_k (as it is found) by an appropriate scalar factor to eliminate unwanted fractions. We do this in the two examples that follow.
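The algorithm in (12)-(13) translates almost line for line into code. The following is a minimal sketch, not a production implementation; the helper names `dot` and `gram_schmidt` are our own, and exact rational arithmetic (`fractions.Fraction`) is used so that no rounding obscures the orthogonality of the results.

```python
from fractions import Fraction

def dot(u, v):
    """Dot product of two vectors given as sequences of numbers."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    """Replace linearly independent vectors vs by mutually orthogonal
    vectors us spanning the same subspace, per formulas (12)-(13)."""
    us = []
    for v in vs:
        u = [Fraction(x) for x in v]
        for w in us:
            c = dot(w, v) / dot(w, w)   # (u_i . v_{k+1}) / (u_i . u_i)
            u = [ui - c * wi for ui, wi in zip(u, w)]
        us.append(u)
    return us

# The basis of R^3 used in Example 3.
us = gram_schmidt([(3, 1, 1), (1, 3, 1), (1, 1, 3)])

# The outputs are mutually orthogonal ...
assert all(dot(us[i], us[j]) == 0
           for i in range(3) for j in range(i + 1, 3))
# ... and u_2 = (-10/11, 26/11, 4/11) = (2/11)(-5, 13, 2), as in the text.
assert [11 * x for x in us[1]] == [-10, 26, 4]
```

Note that the inner loop uses the original vector v (not the partially reduced u) in each quotient, exactly as formula (13) prescribes; because the u_i are already mutually orthogonal, both choices give the same result in exact arithmetic.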
Example 3  To apply the Gram-Schmidt algorithm beginning with the basis

    v_1 = (3, 1, 1),  v_2 = (1, 3, 1),  v_3 = (1, 1, 3)

for R^3, we first take u_1 = v_1 = (3, 1, 1). Then

    u_2 = v_2 - (u_1 . v_2)/(u_1 . u_1) u_1
        = (1, 3, 1) - (7/11)(3, 1, 1)
        = (-10/11, 26/11, 4/11) = (2/11)(-5, 13, 2).

We delete the scalar factor 2/11 (that is, we multiply by 11/2) and instead use u_2 = (-5, 13, 2). Next,

    u_3 = v_3 - (u_1 . v_3)/(u_1 . u_1) u_1 - (u_2 . v_3)/(u_2 . u_2) u_2
        = (1, 1, 3) - (7/11)(3, 1, 1) - (14/198)(-5, 13, 2)
        = (-55/99, -55/99, 220/99) = -(55/99)(1, 1, -4).

We delete the scalar factor -(55/99) and thereby choose u_3 = (1, 1, -4). Thus our final result is the orthogonal basis

    {u_1 = (3, 1, 1), u_2 = (-5, 13, 2), u_3 = (1, 1, -4)}

for R^3.

Example 4  Find an orthogonal basis for the subspace V of R^4 that has basis vectors

    v_1 = (2, 1, 2, 1),  v_2 = (2, 2, 1, 0),  v_3 = (1, 2, 1, 0).

Solution  We begin with

    u_1 = v_1 = (2, 1, 2, 1).

Then

    u_2 = v_2 - (u_1 . v_2)/(u_1 . u_1) u_1
        = (2, 2, 1, 0) - (8/10)(2, 1, 2, 1)
        = (2/5, 6/5, -3/5, -4/5) = (1/5)(2, 6, -3, -4).

We delete the scalar factor 1/5 and instead take u_2 = (2, 6, -3, -4). Next,

    u_3 = v_3 - (u_1 . v_3)/(u_1 . u_1) u_1 - (u_2 . v_3)/(u_2 . u_2) u_2
        = (1, 2, 1, 0) - (6/10)(2, 1, 2, 1) - (11/65)(2, 6, -3, -4)
        = (-70/130, 50/130, 40/130, 10/130) = (1/13)(-7, 5, 4, 1).

We delete the scalar factor 1/13 and instead take u_3 = (-7, 5, 4, 1). Thus our orthogonal basis for the subspace V consists of the vectors

    u_1 = (2, 1, 2, 1),  u_2 = (2, 6, -3, -4),  u_3 = (-7, 5, 4, 1).
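The scalar-factor trick of Examples 3 and 4 (clearing fractions from each u_k as it is found) is easy to mimic in code. The sketch below reproduces Example 4; the helpers `dot` and `clear_fractions` are our own conveniences, not part of the algorithm itself, and the clearing step simply rescales each u_k to the smallest integer vector with the same direction.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def dot(u, v):
    """Dot product of two vectors given as sequences of numbers."""
    return sum(a * b for a, b in zip(u, v))

def clear_fractions(u):
    """Scale the Fraction vector u by a positive factor so that its
    entries become integers with no common divisor."""
    denom = reduce(lambda a, b: a * b // gcd(a, b),
                   (x.denominator for x in u))     # lcm of denominators
    ints = [int(x * denom) for x in u]
    g = reduce(gcd, (abs(x) for x in ints))
    return [x // g for x in ints]

# The basis of the subspace V of R^4 in Example 4.
vs = [(2, 1, 2, 1), (2, 2, 1, 0), (1, 2, 1, 0)]
us = []
for v in vs:
    u = [Fraction(x) for x in v]
    for w in us:                                   # subtract components, per (13)
        c = Fraction(dot(w, v), dot(w, w))
        u = [ui - c * wi for ui, wi in zip(u, w)]
    us.append(clear_fractions(u))                  # delete the scalar factor

assert us == [[2, 1, 2, 1], [2, 6, -3, -4], [-7, 5, 4, 1]]
```

Because projection quotients are unchanged when a u_k is rescaled, clearing fractions after each step does not affect the directions produced, which is exactly why the textbook is free to delete the scalar factors.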