Chapter 5 Study Guide
Name: Philip D. Olivier
Due Date:
Questions:
Comments:

Problem 5.2: Analog sensors typically produce numerical values within a bounded range. Show that the set of bounded real numbers $[-a, a] = \{x : |x| \le a\}$, where $x, a \in \mathbb{R}$ and $a > 0$, is not a field.

Background concepts/theory/facts:
A field is a set of numbers with two operations (addition and multiplication) defined on the set that satisfy ALL of the field axioms. Failure to satisfy any of the field axioms disqualifies the set from being a field.
The field axioms (assume that $a, b, c \in F$, i.e., a, b, and c are elements of the field; you need to commit these to memory):
Uniqueness, closure: $\forall a, b \in F$, $\exists!\, c \in F : a + b = c$, and $\exists!\, c \in F : a \cdot b = c$.
Commutativity: $a + b = b + a$ and $a \cdot b = b \cdot a$.
Associativity: $(a + b) + c = a + (b + c)$ and $(a \cdot b) \cdot c = a \cdot (b \cdot c)$.
Identity: $\exists\, 0 \in F : a + 0 = a$ and $\exists\, 1 \in F : a \cdot 1 = a$.
Inverse: $\exists!\, (-a) \in F : a + (-a) = 0$; for $a \ne 0$, $\exists!\, a^{-1} \in F : a \cdot a^{-1} = 1$.
Right distributivity: $a(b + c) = ab + ac$.
Left distributivity: $(a + b)c = ac + bc$.

Supporting Examples in text: 1, 2, 3, 4, 5, 6

Solution: This set fails the closure property of addition: $a + a = 2a \notin [-a, a]$. If $a > 1$, it also fails the closure property of multiplication, since $a \cdot a = a^2 > a$.

Questions:
Comments: $\forall$ = "for every". $\exists$ = "there exists". $\exists!$ = "there exists a unique". : = "such that". (You need to commit these to memory.) In arithmetic, there is one fundamental operation, i.e., addition. Subtraction is the operation that undoes addition. Multiplication corresponds to over-and-over addition. Finally, division undoes multiplication, which is over-and-over addition.

Problem 5.6: Show that the set of diagonal 3 by 3 matrices with elements in $\mathbb{R}$, with ordinary matrix addition and matrix multiplication, is a field.

Background concepts/theory/facts:
A field is a set of numbers with two operations (addition and multiplication) defined on the set that satisfy ALL of the field axioms. Failure to satisfy any of the field axioms disqualifies the set from being a field. The field axioms are as listed under Problem 5.2.

Supporting Examples in text: (as in 5.2: 1, 2, 3, 4, 5, 6) and 7, 8

Solution: Consider generic 3 by 3 diagonal matrices with real elements
$$A = \begin{bmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{bmatrix}, \quad B = \begin{bmatrix} b_1 & 0 & 0 \\ 0 & b_2 & 0 \\ 0 & 0 & b_3 \end{bmatrix}, \quad C = \begin{bmatrix} c_1 & 0 & 0 \\ 0 & c_2 & 0 \\ 0 & 0 & c_3 \end{bmatrix}.$$
For the uniqueness and closure axioms, compute the left side, verify that the result exists and is in the set, and use the uniqueness of the corresponding scalar operation. For the commutativity, associativity, and distributivity axioms, compute the left side, compute the right side, and verify that the results are the same.
For the identity axioms, the appropriate additive identity is the 3 by 3 zero matrix
$$0 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$
and the multiplicative identity is
$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
For the inverse axioms, the additive inverse is
$$-A = \begin{bmatrix} -a_1 & 0 & 0 \\ 0 & -a_2 & 0 \\ 0 & 0 & -a_3 \end{bmatrix},$$
and the multiplicative inverse is
$$A^{-1} = \begin{bmatrix} a_1^{-1} & 0 & 0 \\ 0 & a_2^{-1} & 0 \\ 0 & 0 & a_3^{-1} \end{bmatrix}.$$
How must the caveat ($a \ne 0$) be changed? (You must fill in the details; a numerical spot check in MATLAB follows this problem.)

Questions:
Comments: The field axioms show up in more than one problem. This indicates increased importance.
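As a quick numerical spot check of the axiom calculations in Problem 5.6 (a sketch only, not a substitute for the symbolic verification above; the random test matrices, seed, and tolerance are arbitrary choices):

% Spot check of selected field axioms for 3x3 diagonal matrices (Problem 5.6).
rng(0);                                  % repeatable random entries (arbitrary seed)
d = @(v) diag(v);                        % build a diagonal matrix from a vector
A = d(randn(3,1));  B = d(randn(3,1));   % random diagonal test matrices
tol = 1e-12;
% Closure: the sum and the product are again diagonal.
assert(isequal(A+B, d(diag(A+B))) && isequal(A*B, d(diag(A*B))))
% Commutativity of multiplication (special to diagonal matrices).
assert(norm(A*B - B*A) < tol)
% Additive and multiplicative identities.
assert(norm(A + zeros(3) - A) < tol && norm(A*eye(3) - A) < tol)
% Multiplicative inverse, valid when every diagonal entry is nonzero.
assert(norm(A*d(1./diag(A)) - eye(3)) < tol)

A passing run only shows that the axioms hold for these particular matrices; the general argument still has to be made symbolically, as the solution outlines.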
Problem 5.8: Statement: Show that if $S, Q \in \mathbb{R}^{n \times n}$ and $Q$ is symmetric, then the product $S^T Q S$ is symmetric.

Background concepts/theory/facts:
$[A]_{i,j} = a_{i,j}$, $[A^T]_{i,j} = a_{j,i}$
$[AB]^T = B^T A^T$
If $Q$ is symmetric, $Q^T = Q$
$(M^T)^T = M$

Supporting Examples in text:

Solution: $[S^T Q S]^T = S^T [S^T Q]^T = S^T Q^T (S^T)^T = S^T Q S$. The product equals its own transpose, so it is symmetric.

Questions:
Comments: $\Leftrightarrow$ means "is equivalent to"; it means simultaneously $\Rightarrow$ and $\Leftarrow$. $\Rightarrow$ means "the left side is the if-part and it implies the right side, which is the then-part." $\Leftarrow$ means "the right side is the if-part and it implies the left side, which is the then-part." "Implies" means that the truth of the "if-part" guarantees the truth of the "then-part," but not vice versa. You need to commit these to memory.

Problem 5.10: Using elementary column transformations, find the right inverse of the real matrix
$$A = \begin{bmatrix} 0 & 2 & 3 \\ -2 & 3 & -1 \end{bmatrix}.$$

Background concepts/theory/facts:
There are three elementary column transformations, defined by:
$K_{i,j}$ interchanges the ith column with the jth column ($K_{ij}^{-1} = K_{ij}$).
$K_i(a)$ multiplies all elements of column i by the nonzero scalar a ($K_i(a)^{-1} = K_i(1/a)$).
$K_{i,j}(a)$ adds the product of column j and the scalar a to column i ($K_{ij}(a)^{-1} = K_{ij}(-a)$).
Applying each elementary column operation corresponds to multiplication on the right by a matrix that is a modified identity matrix; these matrices have inverses that are simply and exactly computable. Example 4-by-4 column transformation matrices are (consider multiplying on the right by the appropriate K, i.e., $[c_1\ c_2\ c_3\ c_4]K = ??$):
$$K_{23} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad K_2(a) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & a & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad K_{23}(a) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & a & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Used to compute right inverses (when possible): if
$$A K_a K_b \cdots K_m K_n = \begin{bmatrix} I_m & 0 \end{bmatrix}, \quad \text{then} \quad A^R = [K_a K_b \cdots K_m K_n]_m,$$
where the subscript "m" indicates that you keep only the first m columns, m = # of rows of A. Then $A \cdot A^R = I_m$ (m = # of rows of A).
Which of these statements define the elementary column transformations? Which provide examples?

Supporting Examples in text:

Solution:
Interchange columns 1 and 2: $A K_{12} = \begin{bmatrix} 2 & 0 & 3 \\ 3 & -2 & -1 \end{bmatrix}$.
Multiply the first column by 1/2: $[A K_{12}] K_1(1/2) = \begin{bmatrix} 1 & 0 & 3 \\ 1.5 & -2 & -1 \end{bmatrix}$.
Use the first column to zero out the rest of the first row: $[A K_{12} K_1(1/2)] K_{13}(-3) = \begin{bmatrix} 1 & 0 & 0 \\ 1.5 & -2 & -5.5 \end{bmatrix}$.
Multiply the second column by $-1/2$: $[A K_{12} K_1(1/2) K_{13}(-3)] K_2(-1/2) = \begin{bmatrix} 1 & 0 & 0 \\ 1.5 & 1 & -5.5 \end{bmatrix}$.
Use the second column to zero out the rest of the 2nd row: $[A K_{12} K_1(1/2) K_{13}(-3) K_2(-1/2)] K_{12}(-1.5) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -5.5 \end{bmatrix}$.
So
$$A^R = [K_{12}\, K_1(1/2)\, K_{13}(-3)\, K_2(-1/2)\, K_{12}(-1.5)]_2 = \begin{bmatrix} 0.75 & -0.5 \\ 0.5 & 0 \\ 0 & 0 \end{bmatrix}.$$
Verification: Compute $A \cdot A^R = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. (You always know if you inverted a matrix correctly by checking.)

Questions:
Comments: $K_{ij}$ is both involutory (= is its own inverse) and orthogonal (= its inverse is its transpose). The noted inverses are obtained as logical consequences of the definitions, not from matrix properties. The implied matrix multiplication properties should be verified and MUST yield the same results (mathematics requires consistency).

Problem 5.12: Compute the determinant of A in Example 16 by Laplace expansion along rows 2 and 3.
$$A = \begin{bmatrix} 1 & 0 & 2 & 0 \\ 2 & 3 & 1 & 1 \\ 4 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \end{bmatrix}$$

Background concepts/theory/facts:
$\det(A) = \sum \pm\, a_{1i}\, a_{2j} \cdots a_{nk}$ (signed sum over products containing one entry from each row and each column).
Laplace expansions:
o $\det(A) = \sum_{i=1}^{n} a_{ij} (-1)^{i+j} \det(A_{ij})$ (expansion down column j)
o $\det(A) = \sum_{j=1}^{n} a_{ij} (-1)^{i+j} \det(A_{ij})$ (expansion along row i)
Which is the definition, which are theorems?

Supporting Examples in text:

Solution: Expansion about row 2:
$$\det(A) = 2(-1)^{2+1} \det \begin{bmatrix} 0 & 2 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} + 3(-1)^{2+2} \det \begin{bmatrix} 1 & 2 & 0 \\ 4 & 0 & 0 \\ 2 & 1 & 0 \end{bmatrix} + 1(-1)^{2+3} \det \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix} + 1(-1)^{2+4} \det \begin{bmatrix} 1 & 0 & 2 \\ 4 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}$$
$$= 0 + 0 + 0 + (1 - 4) = -3.$$
Expansion about row 3: (You provide the details; the results MUST agree. Remember, LOGICAL CONSISTENCY is the only truth in mathematics.)

Questions:
Comments:
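Both of the preceding results are easy to check numerically. The MATLAB sketch below is an added check (the matrices are copied from the statements above); it verifies the right inverse from Problem 5.10 and compares the Laplace-expansion value $\det(A) = -3$ from Problem 5.12 with MATLAB's det.

% Problem 5.10: verify the right inverse found by column operations.
A  = [ 0  2  3;
      -2  3 -1];
AR = [ 0.75 -0.5;
       0.50  0.0;
       0.00  0.0];
disp(A*AR)          % should display the 2x2 identity

% Problem 5.12: compare the Laplace expansion result with det().
E16 = [1 0 2 0;
       2 3 1 1;
       4 1 0 0;
       2 0 1 0];
disp(det(E16))      % should display -3 (up to roundoff)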
Problem 5.14: Find the rank of each of the matrices in Problems 9 and 10.
$$A_9 = \begin{bmatrix} 0 & 2 \\ -2 & 3 \\ -1 & -1 \end{bmatrix}, \quad A_{10} = \begin{bmatrix} 0 & 2 & 3 \\ -2 & 3 & -1 \end{bmatrix}$$

Background concepts/theory/facts:
The rank of A is the dimension of the largest square submatrix that has nonzero determinant and that is obtainable from A by deleting rows or columns or both.
o The rank of A is the number of linearly independent rows.
o The rank of A is the number of linearly independent columns.
If there is no set of scalars $\alpha_1, \alpha_2, \ldots, \alpha_k$, at least one of which is not zero, such that $\alpha_1 A_1 + \alpha_2 A_2 + \cdots + \alpha_k A_k = 0$, then the set $A_1, A_2, \ldots, A_k$ is said to be linearly independent.
Which of these statements are definitions, which are theorems?

Supporting Examples in text: ??
Solution: You provide the details.
Questions: ??
Comments: ??

Problem 5.16: Using a computer program, obtain answers to Problems 9, 10 and 15 by first computing the SVD of A, and then the required matrices in each case.
$$A_9 = \begin{bmatrix} 0 & 2 \\ -2 & 3 \\ -1 & -1 \end{bmatrix}, \quad A_{10} = \begin{bmatrix} 0 & 2 & 3 \\ -2 & 3 & -1 \end{bmatrix}, \quad A_{15} = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 2 & 3 & 0 & 4 \\ 2 & 4 & 1 & 5 \end{bmatrix}$$

Background concepts/theory/facts:
The singular value decomposition of a matrix A: $U^T A V = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix}$, r = rank(A), with U, V orthogonal matrices (notice the difference in notation between the text and MATLAB).
Normal form: $PAQ = N = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$, r = rank(A).
The inverse of an orthogonal matrix is its transpose.
If you do not remember the definition of rank, or the required theorems about rank, then include what you need here.
MATLAB documentation on command svd:
>> help svd
 SVD Singular value decomposition.
    [U,S,V] = SVD(X) produces a diagonal matrix S, of the same dimension as X and
    with nonnegative diagonal elements in decreasing order, and unitary matrices
    U and V so that X = U*S*V'.
    S = SVD(X) returns a vector containing the singular values.
    [U,S,V] = SVD(X,0) produces the "economy size" decomposition. If X is m-by-n
    with m > n, then only the first n columns of U are computed and S is n-by-n.
    For m <= n, SVD(X,0) is equivalent to SVD(X).
    [U,S,V] = SVD(X,'econ') also produces the "economy size" decomposition. If X
    is m-by-n with m >= n, then it is equivalent to SVD(X,0). For m < n, only the
    first m columns of V are computed and S is m-by-m.
    See also svds, gsvd.

Supporting Examples in text:

Solution: In the SVD of A, $A = U \Sigma V^T$, the upper-left square submatrix of $\Sigma$ (S in the MATLAB terminology) is diagonal with nonzero entries. We can convert the SVD to normal form by letting $A = U' \Sigma' V^T$, where $U'$ incorporates the diagonal elements and $\Sigma'$ has an identity matrix in its upper-left square. To create the $U'$ and $\Sigma'$ for $A_9$: divide the first column of $\Sigma$ by $\sigma_{1,1} = 4.0283$ and multiply the first column of U by 4.0283; divide the second column of $\Sigma$ by $\sigma_{2,2} = 1.6653$ and multiply the second column of U by 1.6653. An alternate form could be obtained by modifying V instead of U.

A9 =
     0     2
    -2     3
    -1    -1
>> [U,S,V]=svd(A9)
U =
   -0.4535    0.4886    0.7454
   -0.8823   -0.3642   -0.2981
    0.1258   -0.7929    0.5963
S =
    4.0283         0
         0    1.6653
         0         0
V =
    0.4068    0.9135
   -0.9135    0.4068

To create the $U'$ and $\Sigma'$ for the next matrix: divide the first column of $\Sigma$ by $\sigma_{1,1} = ??$ and multiply the first column of U by ??; divide the second column of $\Sigma$ by $\sigma_{2,2} = ??$ and multiply the second column of U by ??.

A10 =
     0     2     3
    -2     3    -1
>> [U,S,V]=svd(A10)
U =
   -0.6464   -0.7630
   -0.7630    0.6464
S =
    4.0671         0         0
         0    3.2340         0
V =
    0.3752   -0.3997   -0.8363
   -0.8807    0.1277   -0.4562
   -0.2892   -0.9077    0.3041

A15 =
     0     1     1     1
     2     3     0     4
     2     4     1     5
>> [U,S,V]=svd(A15)
U =
   -0.1635    0.7999   -0.5774
   -0.6110   -0.5416   -0.5774
   -0.7746    0.2583    0.5774
S =
    8.7470         0         0         0
         0    1.2207         0         0
         0         0    0.0000         0

Questions:
Comments:
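As a sketch of how the rescaling described in the Problem 5.16 solution can be organized, the following MATLAB fragment builds a P and Q directly from the SVD of $A_9$ so that $PAQ$ is in normal form; the rank tolerance is an arbitrary choice, and this is only one of many valid P, Q pairs.

% Normal form P*A*Q = [I_r 0; 0 0] obtained from the SVD (illustrative sketch).
A = [ 0  2;
     -2  3;
     -1 -1];                     % A9 from Problem 5.16
[U,S,V] = svd(A);
tol = 1e-10;                     % assumed threshold for "nonzero" singular values
s = diag(S);
r = sum(s > tol);                % numerical rank
d = ones(size(A,1),1);
d(1:r) = 1./s(1:r);              % divide the first r rows by the singular values
P = diag(d)*U';                  % invertible row factor
Q = V;                           % orthogonal column factor
disp(P*A*Q)                      % should display [1 0; 0 1; 0 0] here, up to roundoff (r = 2)

Because U and V are orthogonal and the diagonal rescaling is invertible, a P, Q pair built this way also satisfies the requirement (5.78) used in Problem 5.18 below, so the same steps applied to A48 give one acceptable answer there.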
Problem 5.18: For Example 48, find another P, Q that satisfy (5.78). In Example 48,
$$A = \begin{bmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \end{bmatrix}, \quad P = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 1 & -2 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
$$PAQ = \begin{bmatrix} P_1 A Q_1 & P_1 A Q_2 \\ P_2 A Q_1 & P_2 A Q_2 \end{bmatrix} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}, \quad r = \mathrm{rank}(A). \qquad (5.78)$$

Background concepts/theory/facts:
Normal form
Procedure:
o Perform row operations (to produce P) and column operations (to produce Q) to transform A to normal form.
o Partition P and Q according to the rank and dimensions of A.
o Include more directions as needed.

Supporting Examples in text:

Solution: Follow the procedure for Problem 5.16.

A48 =
     1     2     3
    -1    -2    -3
>> [U,S,V]=svd(A48)
U =
   -0.7071    0.7071
    0.7071    0.7071
S =
    5.2915         0         0
         0  3.5108e-16        0
V =
   -0.2673    0.9562    0.1195
   -0.5345   -0.0439   -0.8440
   -0.8018   -0.2895    0.5228

Questions:
Comments:

Problem 5.22: For $A = \begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix}$, verify by computation that $\phi(A) = 0$.

Background concepts/theory/facts:
The characteristic polynomial $\phi(\lambda)$ is computed from $\det(\lambda I - A)$.
Characteristic equation: $\phi(\lambda) = 0$. (Equations must have an "=" sign.)
$\phi(A)$ is $\phi(\lambda)$ with $\lambda$ replaced by A and $\lambda^0 = 1$ replaced by $A^0 = I$.
Comes from a formula for inverting a matrix, and from the definition of eigenvalue.

Supporting Examples in text:

Solution: Computation of $\phi(\lambda)$:
$$\phi(\lambda) = \det(\lambda I - A) = \det\left( \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix} \right) = \det \begin{bmatrix} \lambda - 1 & -2 \\ 2 & \lambda - 5 \end{bmatrix} = (\lambda - 1)(\lambda - 5) + 4,$$
$$\phi(\lambda) = \lambda^2 - 6\lambda + 9.$$
Evaluation of $\phi(A) = A^2 - 6A + 9I$:
$$\begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix}^2 - 6 \begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix} + 9 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -3 & 12 \\ -12 & 21 \end{bmatrix} + \begin{bmatrix} -6 & -12 \\ 12 & -30 \end{bmatrix} + \begin{bmatrix} 9 & 0 \\ 0 & 9 \end{bmatrix} = \begin{bmatrix} -3 - 6 + 9 & 12 - 12 + 0 \\ -12 + 12 + 0 & 21 - 30 + 9 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$

Questions:
Comments:

Problem 5.24: Use the Ho algorithm to find a minimal system with weighting sequence $\{H_k\}_{k \ge 0} = \{1, 2, 3, \ldots\}$.

Background concepts/theory/facts:
Ho algorithm: Given the $H_k$, find {A, B, C, D} from $H_0 = D$ and, for $k \ge 1$, $H_k = C A^{k-1} B$.
o Minimal order
o Algorithm:
  Create the Hankel matrix S from the $H_k$'s.
  Find the maximum rank of the submatrices $S_k$; call this r.
  Put $S_r$ in normal form and find the partitioned P and Q.
  From the $H_k$ and $P_1$ and $Q_1$, construct the A, B, C, D matrices according to
$$D = H_0, \quad B = P_1 \begin{bmatrix} H_1 \\ \vdots \\ H_r \end{bmatrix}, \quad C = \begin{bmatrix} H_1 & \cdots & H_r \end{bmatrix} Q_1, \quad A = P_1 \begin{bmatrix} H_2 & \cdots & H_{r+1} \\ \vdots & & \vdots \\ H_{r+1} & \cdots & H_{2r} \end{bmatrix} Q_1.$$
Normal form?
Beginnings of the sub-field of system identification.
Based on the Cayley-Hamilton theorem. What role does the Cayley-Hamilton theorem play?

Supporting Examples in text:

Solution: Answer: r = 2,
$$A = \begin{bmatrix} 1.5 & 0.5 \\ -0.5 & 0.5 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 0 \end{bmatrix}, \quad D = H_0 = 1.$$
Detailed calculations:
$$S = \begin{bmatrix} 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \\ 4 & 5 & 6 & 7 \\ 5 & 6 & 7 & 8 \end{bmatrix}, \quad \det(S_1) = 2 \ne 0 \Rightarrow \mathrm{rank}(S) \ge 1, \quad \det(S_2) = \det \begin{bmatrix} 2 & 3 \\ 3 & 4 \end{bmatrix} = -1 \ne 0 \Rightarrow \mathrm{rank}(S) \ge 2,$$
$$\det(S_3) = \det \begin{bmatrix} 2 & 3 & 4 \\ 3 & 4 & 5 \\ 4 & 5 & 6 \end{bmatrix} = 2(24 - 25) - 3(18 - 20) + 4(15 - 16) = -2 + 6 - 4 = 0 \Rightarrow \mathrm{rank}(S) = 2.$$
So r = 2. Put $S_2$ in normal form. Row transformations:
$$\left[\begin{array}{cc|cc} 2 & 3 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right] \xrightarrow{\text{row 1} \times 1/2} \left[\begin{array}{cc|cc} 1 & 1.5 & 0.5 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right] \xrightarrow{\text{row 2} - 3 \cdot \text{row 1}} \left[\begin{array}{cc|cc} 1 & 1.5 & 0.5 & 0 \\ 0 & -0.5 & -1.5 & 1 \end{array}\right].$$
Column transformations:
$$\begin{bmatrix} 1 & 1.5 \\ 0 & -0.5 \end{bmatrix} \xrightarrow{\text{col 2} - 1.5 \cdot \text{col 1}} \begin{bmatrix} 1 & 0 \\ 0 & -0.5 \end{bmatrix},\ Q = \begin{bmatrix} 1 & -1.5 \\ 0 & 1 \end{bmatrix} \xrightarrow{\text{col 2} \times (-2)} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ Q = \begin{bmatrix} 1 & 3 \\ 0 & -2 \end{bmatrix}.$$
So
$$P_1 = \begin{bmatrix} 0.5 & 0 \\ -1.5 & 1 \end{bmatrix}, \quad \text{and} \quad Q_1 = \begin{bmatrix} 1 & 3 \\ 0 & -2 \end{bmatrix}.$$
Now compute A, B, and C. Close the circle: with the A, B and C just computed, verify that they produce the given $H_2$, $H_3$, etc. (a MATLAB check follows this problem).

Questions:
Comments: Notice the simple way to organize the computations.
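To close the circle on Problem 5.24, the MATLAB sketch below rebuilds the weighting sequence from the realization found above and compares it with the given sequence 1, 2, 3, ...; the number of terms checked is an arbitrary choice.

% Problem 5.24: check that the realization reproduces Hk = k + 1, i.e., 1, 2, 3, ...
A = [ 1.5  0.5;
     -0.5  0.5];
B = [1; 0];
C = [2  0];
D = 1;                          % D = H0
H = zeros(1,8);
H(1) = D;                       % H0
for k = 1:7
    H(k+1) = C*A^(k-1)*B;       % Hk = C*A^(k-1)*B for k >= 1
end
disp(H)                         % should display 1 2 3 4 5 6 7 8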
Problem 5.26: Use the Ho algorithm to find a minimal system with periodic mod-2 weighting sequence $\{H_k\}_{k \ge 0} = \{[0,0], [0,1], [1,1], [0,0], [0,1], [1,1], \ldots\}$.

Background concepts/theory/facts: You fill in.
Supporting Examples in text: ???
Solution: ???
Questions:
Comments:

Problem 5.28: Solve Example 48 by the abbreviated method.

Background concepts/theory/facts: As presented in the text. From pre-requisite courses.
Supporting Examples in text:
Solution: See Example 49.
Questions:
Comments:

Problem 5.30: In the presence of inexact data or computation, in order to guarantee that the computed matrix is exact for a matrix near the original, the Gaussian elimination algorithm for row compression is replaced by premultiplication (or postmultiplication, for column compression) by a matrix H that is orthogonal; that is, $H^{-1} = H^T$. The computational requirement is that $HX = \pm \|X\| e_1$, where X is a column of the matrix (or submatrix) being compressed; $e_1$ is a zero vector except for the topmost entry, which is 1; and $\|X\|$ is the Euclidean length of X. Show that
(a) the Householder matrix $H = I - \frac{2}{V^T V} V V^T$ is orthogonal, where V is a vector, and that
(b) if $V = X \mp \|X\| e_1$, then HX is $\pm \|X\| e_1$, as required.

Background concepts/theory/facts:
Matrix-vector multiplication IS NOT COMMUTATIVE.
$(\mp 1)(\mp 1) = 1$, $\|X\|^2 = X^T X$, $e_1^T e_1 = 1$, $X^T e_1 = e_1^T X = x_1$.
You cannot divide by a vector/matrix, only by a scalar.

Supporting Examples in text:

Solution:
(a) Is H orthogonal? First, H is symmetric:
$$H^T = \left[ I - \frac{2}{V^T V} V V^T \right]^T = I - \frac{2}{V^T V} (V^T)^T V^T = I - \frac{2}{V^T V} V V^T = H.$$
So $H^{-1} = H^T$ if and only if $H^T H = H H^T = HH = I$:
$$HH = \left[ I - \frac{2}{V^T V} V V^T \right] \left[ I - \frac{2}{V^T V} V V^T \right] = I - \frac{4}{V^T V} V V^T + \frac{4}{(V^T V)^2} V (V^T V) V^T = I - \frac{4}{V^T V} V V^T + \frac{4}{V^T V} V V^T = I.$$
Therefore H is orthogonal.
(b) With $V = X \mp \|X\| e_1$, is $HX = \pm \|X\| e_1$? Using the identities above, compute the two scalars
$$V^T X = (X \mp \|X\| e_1)^T X = X^T X \mp \|X\| e_1^T X = \|X\|^2 \mp \|X\| x_1,$$
$$V^T V = (X \mp \|X\| e_1)^T (X \mp \|X\| e_1) = X^T X \mp 2\|X\| x_1 + \|X\|^2 e_1^T e_1 = 2\left( \|X\|^2 \mp \|X\| x_1 \right) = 2 V^T X.$$
Then
$$HX = X - \frac{2}{V^T V} V (V^T X) = X - \frac{2 V^T X}{V^T V} V = X - V = X - (X \mp \|X\| e_1) = \pm \|X\| e_1,$$
as required.

Questions:
Comments:
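Parts (a) and (b) can be sanity-checked numerically as well. In the MATLAB sketch below, the vector X, the sign choice $V = X - \|X\| e_1$, and the tolerance are arbitrary illustrative choices.

% Problem 5.30: numerical sanity check of the Householder reflector.
rng(1);                                % repeatable example (arbitrary seed)
X  = randn(5,1);                       % column to be compressed
e1 = [1; zeros(4,1)];
V  = X - norm(X)*e1;                   % the sign choice V = X - ||X||e1
H  = eye(5) - (2/(V'*V))*(V*V');       % Householder matrix
tol = 1e-12;
assert(norm(H'*H - eye(5)) < tol)      % (a) H is orthogonal
assert(norm(H*X - norm(X)*e1) < tol)   % (b) H*X = +||X||e1 for this sign choice

A passing run illustrates both claims for this particular X; the general proof is the algebra in the solution above.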