Math 317: Linear Algebra Practice Exam 2, Spring 2016

Name:

*Please be aware that this practice test is longer than the test you will see on March 11, 2016. Also, this test does not cover every possible topic that you are responsible for on the exam. For a comprehensive list of all topics covered on the exam, please see the exam topics document on the website. I have purposely stayed away from computation-type problems (except for one or two) to give you some additional insight and practice on the types of proofs you could see on the exam. The exam is roughly 40% theory and 60% computational, so please be sure that you know how to compute answers as well! (See the Exam Guide topics.)

1. Suppose that A is an n × n skew-symmetric matrix and x ∈ R^n satisfies the equation (A + I)x = 0, where I denotes the identity matrix.

(a) Show that Ax = −x.
(b) Show that x^T A = x^T.
(c) Show that x^T x = −x^T x.
(d) Use (a)-(c) to prove that I + A is invertible.

(a) Proof: Suppose that A is an n × n skew-symmetric matrix and x ∈ R^n satisfies the equation (A + I)x = 0, where I denotes the identity matrix. Then

(A + I)x = 0 =⇒ Ax + x = 0 =⇒ Ax = −x.

(b) Proof: Since A is skew-symmetric, we know that A^T = −A. Thus,

Ax = −x =⇒ (Ax)^T = (−x)^T =⇒ x^T A^T = −x^T =⇒ x^T(−A) = −x^T =⇒ x^T A = x^T.

(c) Proof: Multiplying both sides of the equation in part (b) on the right by x, we obtain x^T A x = x^T x. Since Ax = −x from part (a), the left-hand side is x^T(−x) = −x^T x, and hence x^T x = −x^T x.

(d) Proof: Suppose that (I + A)x = 0. We show that x = 0, which would then say that I + A is nonsingular and hence invertible. From parts (a)-(c), we know this implies that x^T x = −x^T x, so 2 x^T x = 0, which in turn implies that x · x = 0 (recall that x^T y = x · y), and therefore x = 0. Thus I + A is nonsingular and hence invertible.

2. Suppose that A is a nonsingular n × n matrix.

(a) Prove that (A^{-1})^T = (A^T)^{-1}.

Proof: Suppose that A is a nonsingular matrix. Then A is invertible and hence A^{-1} exists. We show that A^T is invertible by showing that A^T (A^{-1})^T = I. Now,

I = A^{-1} A =⇒ I = I^T = (A^{-1} A)^T = A^T (A^{-1})^T.
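The identity just derived can be sanity-checked numerically. The sketch below is only an illustration, not part of the proof; the sample matrices and the ad hoc 2 × 2 helper functions are my own choices, with the inverse computed from the usual closed-form 2 × 2 formula.

```python
# Numerical sanity check of A^T (A^{-1})^T = I (problem 2(a)) and of the
# invertibility of I + S for a skew-symmetric S (problem 1(d)).
# Sample matrices are arbitrary; helpers are ad hoc for 2x2 matrices.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def inverse_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix must be nonsingular"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]                      # nonsingular: det = -2
lhs = matmul(transpose(A), transpose(inverse_2x2(A)))
print(lhs)                                        # [[1.0, 0.0], [0.0, 1.0]]

S = [[0.0, 1.0], [-1.0, 0.0]]                     # skew-symmetric: S^T = -S
I_plus_S = [[1.0, 1.0], [-1.0, 1.0]]              # I + S
print(inverse_2x2(I_plus_S))                      # exists: det(I + S) = 2 != 0
```

The entries here are exactly representable in floating point, so the printed product is exactly the identity matrix.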
Since A^T (A^{-1})^T = I, A^T is invertible and (A^{-1})^T = (A^T)^{-1}.

(b) Suppose that A is a nonsingular symmetric n × n matrix. Prove that A^{-1} is symmetric.

Proof: Suppose that A is a nonsingular symmetric matrix. Then A = A^T. We show that (A^{-1})^T = A^{-1}. From (a), we know that (A^{-1})^T = (A^T)^{-1}. Using the symmetry of A, we obtain (A^{-1})^T = (A^T)^{-1} = A^{-1}.

3. Suppose that A and B are n × n symmetric matrices such that AB = BA. Prove that AB is symmetric. Give an example to show that AB need not be symmetric if AB ≠ BA.

Proof: Suppose that A and B are n × n symmetric matrices such that AB = BA (we want to show that (AB)^T = AB). Then (AB)^T = B^T A^T = BA = AB, since A and B are symmetric and AB = BA. Thus AB is symmetric.

For the example, consider the matrices

A = [ 1  2  3 ]        B = [ −1  0  4 ]
    [ 2  4  5 ]            [  0  2  0 ].
    [ 3  5  6 ]            [  4  0  0 ]

Note that A and B are symmetric, but

AB = [ 11   4   4 ]
     [ 18   8   8 ]
     [ 21  10  12 ]

is not symmetric. This happens because AB ≠ BA.

4. Consider the following matrix:

A = [ 2   2   2 ]
    [ 4   7   7 ].
    [ 6  18  22 ]

Compute the LU factorization of A.

We perform Gaussian elimination on A as follows:

[ 2   2   2 ]              [ 2   2   2 ]              [ 2  2  2 ]
[ 4   7   7 ]  R2 − 2R1 →  [ 0   3   3 ]  R3 − 4R2 →  [ 0  3  3 ] = U.
[ 6  18  22 ]  R3 − 3R1    [ 0  12  16 ]              [ 0  0  4 ]

Recording the multipliers, L is given by

L = [ 1  0  0 ]
    [ 2  1  0 ].
    [ 3  4  1 ]

5. Prove that the LU factorization of a matrix A (assuming that it exists) is unique. Hint: Suppose that A has two LU factorizations, i.e. A = L1 U1 and A = L2 U2. Prove that L1 = L2 and U1 = U2. You may use the fact that if A = LU, then L is invertible and U is invertible. Furthermore, L^{-1} is a lower triangular matrix and U^{-1} is an upper triangular matrix.

Proof: Suppose that A has two LU factorizations, say A = L1 U1 and A = L2 U2. Then

L1 U1 = L2 U2 =⇒ L1 U1 U1^{-1} = L2 U2 U1^{-1} =⇒ L1 = L2 U2 U1^{-1} =⇒ L2^{-1} L1 = U2 U1^{-1}.

We note that on the left, L2^{-1} L1 is a lower triangular matrix, and on the right, U2 U1^{-1} is an upper triangular matrix.
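The elimination in problem 4 can be reproduced in code. Below is a minimal Doolittle-style sketch of my own (not from the course materials); it assumes no row exchanges are needed, i.e. every pivot encountered is nonzero, as is the case for the matrix above.

```python
# LU factorization by Gaussian elimination without pivoting (Doolittle form:
# L has unit diagonal). Assumes every pivot is nonzero, as in problem 4.

def lu(A):
    n = len(A)
    U = [row[:] for row in A]      # working copy; reduced to upper triangular
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # multiplier in the row operation R_i - m R_k
            L[i][k] = m            # multipliers fill in L below the diagonal
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, 2.0, 2.0], [4.0, 7.0, 7.0], [6.0, 18.0, 22.0]]
L, U = lu(A)
print(L)   # [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [3.0, 4.0, 1.0]]
print(U)   # [[2.0, 2.0, 2.0], [0.0, 3.0, 3.0], [0.0, 0.0, 4.0]]

# check: multiplying L and U recovers A
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(LU == A)   # True
```

The multipliers 2, 3, 4 stored in L are exactly the ones recorded in the row operations above.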
The only type of matrix that can be both upper triangular and lower triangular is a diagonal matrix, D = diag(d1, d2, . . . , dn). However, by construction of the LU factorization, we know that L (and hence its inverse) has ones on its diagonal. Thus D must have ones on its diagonal; in other words, D = I. So L2^{-1} L1 = U2 U1^{-1} = I =⇒ L1 = L2 and U1 = U2. Thus, the LU factorization of A is unique.

6. Prove or disprove: V = {x ∈ R^2 | x1 x2 = 0} is a subspace of R^2.

We claim that V is not a subspace. To see why, let x = (1, 0)^T and y = (0, 1)^T. Then x, y ∈ V since 1(0) = 0(1) = 0. However, x + y = (1, 1)^T ∉ V since 1(1) ≠ 0. Thus, V is not a subspace, as it is not closed under vector addition.

7. Suppose that A is a 3 × 3 matrix such that

C(A) = span{ (1, 2, 3)^T, (1, −1, 2)^T },   N(A) = span{ (−2, 1, 0)^T },

and b = (1, −7, 0)^T.

(a) Prove that Ax = b is consistent.

Proof: Recall that C(A) = {b | Ax = b is consistent} = span{a1, a2, . . . , an}, where ai is the ith column of A. To prove that Ax = b is consistent, it suffices to show that b can be written as a linear combination of the two vectors spanning C(A). This generates the following system of equations to solve:

(1, −7, 0)^T = c1 (1, 2, 3)^T + c2 (1, −1, 2)^T =⇒

c1 + c2 = 1
2c1 − c2 = −7
3c1 + 2c2 = 0

One such solution is c1 = −2 and c2 = 3. Thus, Ax = b is consistent.

(b) Prove that Ax = b does not have a unique solution.

Ax = b has a unique solution only if the associated homogeneous system Ax = 0 has only the trivial solution. This is true if and only if N(A) = {0}. Since N(A) ≠ {0}, Ax = b cannot have a unique solution.

8. Suppose that A is an n × n matrix.

(a) Suppose that A is nonsingular. Prove that C(A) = R^n.

Proof: Suppose that A is a nonsingular matrix. Then for each b ∈ R^n there is a unique x ∈ R^n such that Ax = b. Thus, Ax = b is consistent for every b ∈ R^n. Since C(A) = {b | Ax = b is consistent}, b ∈ R^n =⇒ b ∈ C(A). So, R^n ⊂ C(A).
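The fact just used, that a nonsingular A makes Ax = b solvable for every b, is easy to illustrate numerically. The sketch below solves a 2 × 2 system by the closed-form Cramer's-rule expressions; the matrix, right-hand side, and the helper `solve_2x2` are arbitrary sample choices of my own.

```python
# Solving a 2x2 nonsingular system Ax = b, illustrating that such a system is
# consistent for every right-hand side b (problem 8(a)).
# Sample values are arbitrary; solve_2x2 is an ad hoc Cramer's-rule helper.

def solve_2x2(A, b):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    assert det != 0, "A must be nonsingular"
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

A = [[2.0, 1.0], [1.0, 1.0]]       # det = 1, so A is nonsingular
b = [3.0, 2.0]
x = solve_2x2(A, b)
print(x)                           # [1.0, 1.0]

# check: Ax reproduces b, so b lies in C(A)
Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
print(Ax == b)                     # True
```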
We have containment in the other direction for free, since if b ∈ C(A), then b ∈ R^n. Thus, C(A) ⊂ R^n, and so C(A) = R^n.

(b) If A is nonsingular, describe its four fundamental subspaces.

If A is nonsingular, then C(A) = R^n from above. We also know that if A is nonsingular, then so is A^T, and so R(A) = C(A^T) = R^n. By the rank plus nullity theorem, or using the fact that if A or A^T is nonsingular then Ax = 0 has only the trivial solution, we find that N(A) = N(A^T) = {0}.

9. Let S = {0}.

(a) Prove that S is a linearly dependent set.

Proof: To prove that S is a linearly dependent set, we look at the equation c1 0 = 0. Since this equation is satisfied by any c1 ≠ 0, S must be a linearly dependent set.

(b) Prove that any set containing the zero vector is linearly dependent.

Proof: Consider the set V = {v1, v2, . . . , vk, 0}, which contains the zero vector. We consider the following linear equation:

c1 v1 + c2 v2 + . . . + ck vk + c_{k+1} 0 = 0,

and show that it has at least one nontrivial solution. Indeed, if we let c1 = c2 = . . . = ck = 0 and c_{k+1} = 1, then the equation is satisfied, and this is a nontrivial solution. Thus, any set containing the zero vector must be linearly dependent.

10. Suppose that S = {u1, u2, . . . , un} is a linearly independent subset of R^m and that P is an m × m nonsingular matrix. Prove that P(S) = {P u1, P u2, . . . , P un} is a linearly independent set. Give an example to show why the nonsingularity of P is necessary.

Proof: Suppose that S = {u1, u2, . . . , un} is a linearly independent subset of R^m and that P is an m × m nonsingular matrix. To prove that P(S) = {P u1, P u2, . . . , P un} is a linearly independent set, we consider the following linear equation: c1 P u1 + c2 P u2 + . . . + cn P un = 0, and show that it has only the trivial solution. Now, by the linearity of matrix multiplication, we have that

c1 P u1 + c2 P u2 + . . . + cn P un = P(c1 u1 + c2 u2 + . . . + cn un) = 0.
Since P is nonsingular, P x = 0 =⇒ x = 0, and thus P(c1 u1 + c2 u2 + . . . + cn un) = 0 =⇒ c1 u1 + c2 u2 + . . . + cn un = 0. Now, since {u1, u2, . . . , un} is a linearly independent subset of R^m, we know that c1 = c2 = . . . = cn = 0, and thus c1 P u1 + c2 P u2 + . . . + cn P un = 0 has only the trivial solution. Thus P(S) = {P u1, P u2, . . . , P un} is a linearly independent set.

P must be nonsingular for this to work. For instance, if P were the zero matrix, then P(S) = {0}, which we have shown is a linearly dependent set.

11. Find a basis and the dimension of each of the four fundamental subspaces associated with

A = [ 1  2  2  3 ]
    [ 2  4  1  3 ].
    [ 3  6  1  4 ]

A row echelon form of A, along with the constraint on b for Ax = b to be consistent, is given by

[U | b̂] = [ 1  2  2  3 | b1                     ]
          [ 0  0  1  1 | −(1/3)(b2 − 2b1)       ]
          [ 0  0  0  0 | (1/3)b1 − (5/3)b2 + b3 ]

Thus a basis for C(A) is given by the 1st and 3rd columns of A, since they correspond to the pivot variables in the row echelon form of A. So

C(A) = span{ (1, 2, 3)^T, (2, 1, 1)^T }.

We also see that dim C(A) = 2. A basis for R(A) corresponds to the nonzero rows in any echelon form of A, and hence

R(A) = span{ (1, 2, 2, 3)^T, (0, 0, 1, 1)^T }.

We see that dim R(A) = 2. To find a basis for N(A), we solve Ax = 0. Consulting the row echelon form of A and noting that x2 and x4 give us our free variables, we have that x3 + x4 = 0 and x1 + 2x2 + 2x3 + 3x4 = 0 =⇒ x3 = −x4 and x1 = −2x2 − 2(−x4) − 3x4 = −2x2 − x4. The general solution gives us the following basis for N(A):

N(A) = span{ (−2, 1, 0, 0)^T, (−1, 0, −1, 1)^T }.

We see that dim N(A) = 2. This fits in perfectly with the rank plus nullity theorem, since A is a 3 × 4 matrix and rank(A) + null(A) = 2 + 2 = 4. Finally, a basis for N(A^T) is given by the coefficients in the constraint equation for b. Thus, a basis for N(A^T) is given by

N(A^T) = span{ (1/3, −5/3, 1)^T },

with dim N(A^T) = 1.
Once again, the rank plus nullity theorem holds, because rank(A^T) + null(A^T) = 2 + 1 = 3 = m, since A is a 3 × 4 matrix; note that A^T is a 4 × 3 matrix, so it has m = 3 columns.
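The dimension counts in problem 11 can be confirmed numerically. The following sketch (an ad hoc helper of my own, with a tolerance for floating-point round-off) computes the rank of A by Gaussian elimination with partial pivoting and checks that the two null-space basis vectors found above really satisfy Ax = 0.

```python
# Rank of problem 11's matrix A via Gaussian elimination with partial pivoting.
# rank(A) = dim C(A) = dim R(A); by rank plus nullity, dim N(A) = 4 - rank(A).

def rank(A, tol=1e-12):
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                   # number of pivots found so far
    for c in range(cols):
        if r == rows:
            break
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:          # no usable pivot in this column
            continue
        M[r], M[pivot] = M[pivot], M[r]     # row exchange (partial pivoting)
        for i in range(r + 1, rows):
            m = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= m * M[r][j]
        r += 1
    return r

A = [[1.0, 2.0, 2.0, 3.0], [2.0, 4.0, 1.0, 3.0], [3.0, 6.0, 1.0, 4.0]]
print(rank(A))                  # 2
print(len(A[0]) - rank(A))      # nullity = 4 - 2 = 2

# the null-space basis vectors from the solution satisfy Ax = 0
for x in ([-2.0, 1.0, 0.0, 0.0], [-1.0, 0.0, -1.0, 1.0]):
    print([sum(A[i][j] * x[j] for j in range(4)) for i in range(3)])  # [0.0, 0.0, 0.0]
```

The tolerance matters: after elimination in floating point, entries that should vanish exactly can be left as values on the order of 1e-16, which the pivot test must treat as zero.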