MATH 115A – Linear Algebra Theorem Sheet

If we've gone over a result in class and it's not on here, then you can use it without citation. A good example is the cancellation theorem for a field or vector space. When you write −x, you're implicitly using the result that there is a unique vector y with x + y = 0. You don't have to cite this each time you write −x.

We assume throughout that V is a vector space over a field F.

Theorem 1. Let S be a subset of V. Then Span(S) is a subspace of V. If W is a subspace of V that contains S, then W contains Span(S).

Proposition 2. Let S be a subset of V. If x is in Span(S), then Span(S ∪ {x}) = Span(S).

Proposition 3. Consider a set of vectors {x_1, ..., x_n} in V. The following are equivalent:
(1) The set of vectors is linearly dependent.
(2) There is an i such that x_i is a linear combination of {x_1, ..., x_{i-1}, x_{i+1}, ..., x_n}.
(3) There is an i such that x_i is a linear combination of {x_1, ..., x_{i-1}}.

Proposition 4. Let S_1 ⊆ S_2 be subsets of V.
(1) If S_1 is linearly dependent, then S_2 is linearly dependent.
(2) If S_2 is linearly independent, then S_1 is linearly independent.

Theorem 5. Let β = {x_1, ..., x_n} be a finite subset of V. Then β is a basis of V if and only if every element x of V can be written uniquely as a linear combination x = a_1 x_1 + ... + a_n x_n.

Theorem 7 (Replacement Theorem). Assume that V has a finite generating set S with n elements. Let L be a linearly independent subset of V. Then L has finitely many elements, say m, and m ≤ n. Moreover, there exists a subset H of S containing n − m vectors such that L ∪ H generates V.

Corollary 7.2. Let V be an n-dimensional vector space.
(1) If S is a spanning set of V with m elements, then m ≥ n and there exists an n-element subset of S which is a basis of V.
(2) If L is a linearly independent subset of V with k elements, then k ≤ n and there exists an (n − k)-element subset H of V such that L ∪ H is a basis of V.

Corollary 7.3. Let V be an n-dimensional vector space and S = {x_1, ..., x_n} a subset.
(1) If S spans V, then S is a basis.
(2) If S is linearly independent, then S is a basis.

Corollary 7.4. Let V be a finite dimensional vector space and W ≤ V a subspace of V. Then W is finite dimensional, dim W ≤ dim V, and dim W = dim V if and only if W = V.

Theorem 10 (Rank-Nullity Theorem). Let T : V → W be a linear map and assume that V is finite dimensional. Then

    null T + rank T = dim V.

Theorem 11. Let V, W be vector spaces, and assume V is finite dimensional. Let β = {x_1, ..., x_n} be a basis of V. For any vectors w_1, ..., w_n of W, there exists a unique linear map T : V → W such that T(x_i) = w_i for i = 1, ..., n.

Theorem 12. Let T : V → W and U : W → Z be linear transformations. Let α, β and γ be bases of V, W and Z, respectively. Then [U ∘ T]^γ_α = [U]^γ_β [T]^β_α.

Corollary 12.1. Let T : V → W be a linear transformation. Let α, β be bases of V and W, respectively. Then, for x ∈ V, we have [T(x)]_β = [T]^β_α [x]_α. In particular, if T = 1_V, then we have [x]_β = [1_V]^β_α [x]_α.

Proposition 13. Let T : V → W be a linear transformation and assume that dim V = dim W. Then the following are equivalent:
(1) T is an isomorphism;
(2) there exists a linear map T^{-1} : W → V such that T^{-1} ∘ T = 1_V and T ∘ T^{-1} = 1_W;
(3) T is onto;
(4) T is 1-1;
(5) N(T) = {0};
(6) rank T = dim V.

Theorem 14. Let V, W be finite dimensional vector spaces. Then V is isomorphic to W if and only if dim V = dim W.
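The Rank-Nullity Theorem (Theorem 10) is easy to check numerically for a matrix map L_A : R^n → R^m. The following is a minimal NumPy sketch, not part of the official sheet; the particular matrix A and the tolerance 1e-10 are illustrative choices.

    import numpy as np

    # Illustrative 3x4 matrix: its third row is the sum of the first two,
    # so the map L_A : R^4 -> R^3 has rank 2.
    A = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 1.],
                  [1., 3., 1., 2.]])

    m, n = A.shape
    rank = np.linalg.matrix_rank(A)            # rank L_A = dim R(L_A)

    # A basis of N(L_A): the rows of Vt whose singular value is (numerically) zero.
    _, s, Vt = np.linalg.svd(A)
    s_full = np.concatenate([s, np.zeros(n - len(s))])   # pad to one value per row of Vt
    null_basis = Vt[s_full < 1e-10]
    nullity = null_basis.shape[0]              # null L_A = dim N(L_A)

    assert np.allclose(A @ null_basis.T, 0)    # these vectors really lie in N(L_A)
    assert nullity + rank == n                 # null T + rank T = dim V (Theorem 10)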
Theorem 15. Let V, W be finite dimensional vector spaces with bases β and γ, respectively. Assume dim V = n and dim W = m.
(1) The map φ_β : V → F^n is an isomorphism.
(2) Let T : V → W be a linear transformation and let A = [T]^γ_β. Then the square formed by T : V → W, L_A : F^n → F^m, and the coordinate isomorphisms φ_β : V → F^n and φ_γ : W → F^m commutes; that is, φ_γ ∘ T = L_A ∘ φ_β.

Theorem 20. Let T : V → V, where V is a finite dimensional vector space. If v_1, ..., v_k are eigenvectors with eigenvalues λ_1, ..., λ_k such that λ_i ≠ λ_j for all i ≠ j, then v_1, ..., v_k are linearly independent in V.

Corollary 20.1. Let T : V → V, where V is a finite dimensional vector space. Let λ_1, ..., λ_k be the eigenvalues of T with eigenspaces E_1, ..., E_k. If S_i is a linearly independent subset of E_i for i = 1, ..., k, then S = S_1 ∪ ... ∪ S_k is a linearly independent subset of V.

Theorem 24 (Diagonalization Theorem). Let T : V → V, where V is a finite dimensional vector space. Then T is diagonalizable if and only if f_T(t) splits and dim E_λ = m(λ) for every eigenvalue λ.

Theorem 25. Let V be an inner product space and let S = (x_1, ..., x_n) be an orthonormal set of vectors in V. If y ∈ Span(S), then y = ⟨y, x_1⟩x_1 + ... + ⟨y, x_n⟩x_n. In particular, if β = (x_1, ..., x_n) is an orthonormal basis of V, then for any y in V, [y]_β is the column vector with entries ⟨y, x_1⟩, ..., ⟨y, x_n⟩.

Theorem 26 (Gram-Schmidt). Let V be an inner product space and let S = (w_1, ..., w_n) be a linearly independent set. Define S′ = (v_1, ..., v_n), where v_1 = w_1 and, for 2 ≤ k ≤ n,

    v_k = w_k − Σ_{j=1}^{k-1} (⟨w_k, v_j⟩ / ||v_j||^2) · v_j.

Then S′ is an orthogonal set with Span(S) = Span(S′).

Theorem 27. Let W be a finite dimensional subspace of an inner product space V. Then for every x in V, there is a unique w in W and a unique y in W^⊥ such that x = w + y.

Corollary 27.1. Let W be a finite dimensional subspace of an inner product space V. There exists a linear transformation proj_W : V → V such that R(proj_W) = W, N(proj_W) = W^⊥, and proj_W(x) = x for all x ∈ W.

Corollary 27.2. Let V be a finite dimensional inner product space and W a subspace. Then dim W + dim W^⊥ = dim V.

Theorem 28. Let V be an inner product space over R. Define a function ψ : V → V^d (the dual space of V) by ψ(y)(x) = ⟨x, y⟩ (so ψ(y) : V → R). Then:
(1) The function ψ is a linear transformation;
(2) ψ is 1-1;
(3) if V is finite dimensional, then ψ is an isomorphism.

Corollary 28.1. Let V be a finite dimensional inner product space over R, and let φ : V → R be a linear transformation. Then there exists an element y of V such that φ(x) = ⟨x, y⟩ for all x in V.

From this point on we assume that V is a finite dimensional inner product space over R.

Theorem 29. Let T : V → V be a linear operator. There exists a unique linear operator T^∗ : V → V such that ⟨T(x), y⟩ = ⟨x, T^∗(y)⟩ for all x, y in V.

Proposition 30. Let T : V → V be a linear operator and let β be an orthonormal basis of V. Then [T^∗]_β = ([T]_β)^tr.

Theorem 31. Let T : V → V be a linear operator. Then T is symmetric if and only if there is an orthonormal basis of eigenvectors of T.

Corollary 31.1. Let A be an n × n matrix of real numbers. Then A is symmetric if and only if there is an orthogonal matrix Q such that Q^T A Q = D, where D is a diagonal matrix.
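Corollary 31.1 is exactly what numerical libraries provide for real symmetric matrices. Here is a minimal NumPy sketch, not part of the official sheet (the matrix A is an arbitrary illustrative choice): numpy.linalg.eigh returns the diagonal entries of D together with an orthogonal Q whose columns are orthonormal eigenvectors, so Q^T A Q = D.

    import numpy as np

    # An arbitrary real symmetric 3x3 matrix.
    A = np.array([[2., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])

    eigvals, Q = np.linalg.eigh(A)     # columns of Q: orthonormal eigenvectors of A
    D = np.diag(eigvals)

    assert np.allclose(Q.T @ Q, np.eye(3))   # Q is orthogonal
    assert np.allclose(Q.T @ A @ Q, D)       # Q^T A Q = D, as in Corollary 31.1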
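The Gram-Schmidt process of Theorem 26 is likewise constructive, and the recursion translates directly into code. Below is a minimal NumPy sketch; the function name gram_schmidt and the sample vectors are illustrative choices, not part of the sheet.

    import numpy as np

    def gram_schmidt(W):
        """Orthogonalize the rows of W (assumed linearly independent), as in Theorem 26."""
        V = []
        for w in W:
            v = w.astype(float)
            for vj in V:
                # subtract the component of w_k along each earlier v_j
                v -= (np.dot(w, vj) / np.dot(vj, vj)) * vj
            V.append(v)
        return np.array(V)

    # An illustrative linearly independent set in R^3.
    W = np.array([[1., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 1.]])
    V = gram_schmidt(W)

    # Orthogonality: the Gram matrix V V^T is diagonal.
    G = V @ V.T
    assert np.allclose(G, np.diag(np.diag(G)))
    # Same span: adding the rows of W to those of V does not increase the rank.
    assert np.linalg.matrix_rank(np.vstack([V, W])) == np.linalg.matrix_rank(V)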
Theorem 32. Let T : V → V be a linear operator. The following are equivalent.
(1) T T^∗ = 1_V;
(2) ⟨T(x), T(y)⟩ = ⟨x, y⟩ for all x, y in V;
(3) for any orthonormal basis β of V, T(β) is an orthonormal basis of V;
(4) there exists an orthonormal basis β of V such that T(β) is an orthonormal basis of V;
(5) ||T(x)|| = ||x|| for all x in V.

Corollary 32.1. T is orthogonal if and only if T is an isomorphism and T^{-1} = T^∗.

Corollary 32.2. Let A be an n × n matrix of real numbers. The following are equivalent.
(1) A A^T = I_n;
(2) the columns of A are orthonormal;
(3) ||Ax|| = ||x|| for all x in R^n.

Lemma 33. Let A be an m × n matrix of real numbers. Then:
(1) The n × n matrix A^T A has n eigenvalues, counted with multiplicity, and they are all non-negative.
(2) Let x_1, ..., x_n be an orthonormal basis of eigenvectors for A^T A with eigenvalues λ_i. Then ||Ax_i||^2 = λ_i.
(3) ⟨Ax_i, Ax_j⟩ = 0 for i ≠ j.

Theorem 34 (Singular Value Decomposition). Let A be an m × n matrix of real numbers, and let σ_1, ..., σ_r be the nonzero singular values of A. Let Σ = [s_ij] be the m × n matrix with s_ii = σ_i for i = 1, ..., r and s_ij = 0 otherwise. Then there exists an m × m orthogonal matrix U and an n × n orthogonal matrix V such that A = U Σ V^T.
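The decomposition of Theorem 34 is also available directly in NumPy. The sketch below, not part of the official sheet, uses an arbitrary 4 × 3 matrix; it reassembles A = U Σ V^T and checks the relation suggested by Lemma 33, namely that the eigenvalues of A^T A are the squares of the singular values.

    import numpy as np

    # An arbitrary 4x3 matrix of rank 2 (its third column is the sum of the first two).
    A = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [1., 1., 2.],
                  [2., 0., 2.]])

    U, s, Vt = np.linalg.svd(A)            # U: 4x4 orthogonal, Vt: 3x3 orthogonal
    Sigma = np.zeros(A.shape)              # the m x n matrix Sigma of Theorem 34
    Sigma[:len(s), :len(s)] = np.diag(s)

    assert np.allclose(U @ Sigma @ Vt, A)        # A = U Sigma V^T
    assert np.allclose(U.T @ U, np.eye(4))       # U is orthogonal
    assert np.allclose(Vt @ Vt.T, np.eye(3))     # V is orthogonal

    # Eigenvalues of A^T A (Lemma 33) are the squares of the singular values.
    lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
    assert np.allclose(lam, s**2)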