Set #3

EGR 599 (Fall 2003)
__________________
LAST NAME, FIRST
Problem set #3
1. (4.21) If $A\mathbf{x} = \lambda\mathbf{x}$ represents an eigenvalue equation, find the eigenvalue $\lambda$ for the following matrix equation:

$$\begin{bmatrix} 7 & -8 & -8 \\ 9 & -16 & -18 \\ -5 & 11 & 13 \end{bmatrix} \begin{bmatrix} 1 \\ 3 \\ -2 \end{bmatrix} = \lambda \begin{bmatrix} 1 \\ 3 \\ -2 \end{bmatrix}$$
Ans:
>> [v,d]=eig([7 -8 -8;9 -16 -18;-5 11 13])
v =
    0.2673    0.8944   -0.0000
    0.8018    0.0000   -0.7071
   -0.5345    0.4472    0.7071
d =
   -1.0000         0         0
         0    3.0000         0
         0         0    2.0000
The given vector (1, 3, -2) is proportional to the first column of v, so the eigenvalue in the matrix equation is $\lambda = -1$.
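As a quick check (added here, not part of the original answer), the eigenpair can be verified directly in MATLAB by multiplying out the left-hand side:

% Added check: verify that x = [1; 3; -2] is an eigenvector with eigenvalue -1
A = [7 -8 -8; 9 -16 -18; -5 11 13];
x = [1; 3; -2];
A*x          % gives [-1; -3; 2] = (-1)*x, so the eigenvalue is -1
A*x + x      % residual; should be the zero vector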
2. (4.31) Consider the following matrix that depends on the parameter t:

$$\begin{bmatrix} t & 2t \\ 2t & -t \end{bmatrix}$$

Find the eigenvalues.
Ans: $\lambda_1 = \sqrt{5}\,t$, $\lambda_2 = -\sqrt{5}\,t$
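If the Symbolic Math Toolbox is available, a symbolic check of this answer is possible (an added sketch, not part of the original answer):

% Added sketch: symbolic eigenvalues of the parameter-dependent matrix
% (requires the Symbolic Math Toolbox)
syms t real
A = [t 2*t; 2*t -t];
eig(A)       % returns sqrt(5)*t and -sqrt(5)*t (order may differ)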
3. (4.51) Find the eigenvalues and eigenvectors for the matrix
2 1 1
A =  2 3 2


1 1 2
Ans:
>> [v,d]=eig([2 1 1;2 3 2;1 1 2])
v =
   -0.8018    0.4082   -0.3794
    0.5345    0.8165   -0.4364
    0.2673    0.4082    0.8158
d =
    1.0000         0         0
         0    5.0000         0
         0         0    1.0000
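One added way to confirm the numerical output above is to test the defining relation A*v = v*d:

% Added check: the columns of v and the diagonal of d satisfy A*v = v*d
A = [2 1 1; 2 3 2; 1 1 2];
[v,d] = eig(A);
norm(A*v - v*d)    % on the order of machine precision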
4. (4.71) The matrix
 3 0  2
A=  0 2 0 


 2 0 0 
is a real, symmetrix matrix, since the elements aij are real and aij = aji.
a. Find the eigenvalues and eigenvectors for the matrix.
b. Form the matrix Q whose columns consists of the eigenvectors (normalized to be unit
vectors) and show that Q-1 = QT.
c. Show that Q is an orthogonal matrix in which the rows are orthonormal vectors  Rn.
d. Form the product D = QTAQ and show that D is a diagonal matrix.
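No worked solution is given for this problem in the set; a minimal MATLAB sketch of parts (a)-(d), relying on the fact that eig returns orthonormal eigenvectors for a real symmetric matrix, might look like the following:

% Added sketch for parts (a)-(d); Q is already normalized for a symmetric A
A = [3 0 -2; 0 2 0; -2 0 0];
[Q,D] = eig(A)        % part (a): eigenvectors (columns of Q) and eigenvalues (diag of D)
norm(inv(Q) - Q')     % part (b): Q^-1 = Q^T, so this is near zero
norm(Q*Q' - eye(3))   % part (c): rows (and columns) of Q are orthonormal
Q'*A*Q                % part (d): equals the diagonal matrix D (up to round-off)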
5. (4.91) Assume that A is an n×n matrix and let x be a vector in $\mathbb{R}^n$.
a. What vectors x are parallel to the image vector Ax?
b. Given the MATLAB statement [V,D] = eig(A), what is the interpretation of
(A – D(k,k)*eye(n))*V(:,k) = 0 for k = 1, 2, ..., n?
c. Use the trace and determinant of

$$A = \begin{bmatrix} 3 & -2 \\ 1 & 2 \end{bmatrix}$$

to find the eigenvalues of A. (A numerical check is sketched below.)
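For part (c), the eigenvalues of a 2×2 matrix follow from $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A) = 0$; a small MATLAB check (added here, assuming the matrix entries as reconstructed above) is:

% Added check of part (c): eigenvalues from the trace and determinant
A = [3 -2; 1 2];
tr = trace(A);   dt = det(A);      % trace = 5, determinant = 8
roots([1 -tr dt])                  % roots of lambda^2 - tr*lambda + dt = 0
eig(A)                             % same values, for comparison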
6. (4.101) Using the matrix
3  2 
A= 

1 2 
verify the Cayley-Hamilton theorem. Check the result using MATLAB commands.
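One way to check the Cayley-Hamilton theorem in MATLAB (an added sketch; polyvalm evaluates a polynomial with a matrix argument) is:

% Added sketch: A satisfies its own characteristic polynomial
A = [3 -2; 1 2];
p = poly(A)            % characteristic polynomial coefficients, here [1 -5 8]
polyvalm(p, A)         % evaluates p at A; result is the 2x2 zero matrix
A^2 - 5*A + 8*eye(2)   % direct check of A^2 - trace(A)*A + det(A)*I = 0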
7. (4.111) Given the rotation matrix
$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
show the following:
a. The matrix has eigenvectors and corresponding eigenvalues

$$\lambda = e^{i\theta}: \begin{bmatrix} 1 \\ -i \end{bmatrix}; \qquad \lambda = e^{-i\theta}: \begin{bmatrix} 1 \\ i \end{bmatrix}$$

b. The vectors in Part (a) are orthogonal with respect to the complex inner product in $\mathbb{C}^2$.
Solution
a.
The characteristic equation is

$$\det(R - \lambda I) = \begin{vmatrix} \cos\theta - \lambda & -\sin\theta \\ \sin\theta & \cos\theta - \lambda \end{vmatrix} = (\cos\theta - \lambda)^2 + \sin^2\theta = 0$$

$$(\cos\theta - \lambda)^2 = -\sin^2\theta = i^2\sin^2\theta$$

$$\cos\theta - \lambda = -i\sin\theta \;\Rightarrow\; \lambda = \cos\theta + i\sin\theta = e^{i\theta}$$
$$\cos\theta - \lambda = i\sin\theta \;\Rightarrow\; \lambda = \cos\theta - i\sin\theta = e^{-i\theta}$$

For $\lambda = \cos\theta + i\sin\theta = e^{i\theta}$:

$$(R - \lambda I)\mathbf{x} = \begin{bmatrix} -i\sin\theta & -\sin\theta \\ \sin\theta & -i\sin\theta \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \;\Rightarrow\; x_i = c\,A_{ki}$$

Taking the cofactors of the second row (k = 2): $x_1 = c\,A_{21} = c\sin\theta$, $x_2 = c\,A_{22} = -c\,i\sin\theta$, so

$$\mathbf{x}_1 = c\sin\theta \begin{bmatrix} 1 \\ -i \end{bmatrix}$$

For $\lambda = \cos\theta - i\sin\theta = e^{-i\theta}$:

$$(R - \lambda I)\mathbf{x} = \begin{bmatrix} i\sin\theta & -\sin\theta \\ \sin\theta & i\sin\theta \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \;\Rightarrow\; x_i = c\,A_{ki}$$

Again with k = 2: $x_1 = c\,A_{21} = c\sin\theta$, $x_2 = c\,A_{22} = c\,i\sin\theta$, so

$$\mathbf{x}_2 = c\sin\theta \begin{bmatrix} 1 \\ i \end{bmatrix}$$
b. The vectors in Part (a) are orthogonal with respect to the complex inner product in $\mathbb{C}^2$:

$$\langle \mathbf{x}_1, \mathbf{x}_2 \rangle = \bar{\mathbf{x}}_1^T \mathbf{x}_2 = \begin{bmatrix} 1 & i \end{bmatrix} \begin{bmatrix} 1 \\ i \end{bmatrix} = 1 + i^2 = 0$$
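A numerical spot-check of part (a) for a sample angle (added here, not in the original solution) can be done with complex arithmetic in MATLAB:

% Added spot-check of the eigenpairs of the rotation matrix for th = pi/5
th = pi/5;
R  = [cos(th) -sin(th); sin(th) cos(th)];
x1 = [1; -1i];   x2 = [1; 1i];
norm(R*x1 - exp(1i*th)*x1)     % near zero: x1 goes with e^{i*th}
norm(R*x2 - exp(-1i*th)*x2)    % near zero: x2 goes with e^{-i*th}
x1'*x2                         % complex inner product x1^H * x2 = 0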
8. (4.131) Considering the Cayley-Hamilton theorem, show that the inverse of a nonsingular n×n matrix A can be written as a matrix polynomial of degree n − 1. Using this approach, determine the inverse of the matrix

$$A = \begin{bmatrix} 1 & -1 & 4 \\ 3 & 2 & 1 \\ 2 & 1 & -1 \end{bmatrix}$$

Is this method efficient for computation?
Solution
Let $f(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + \dots + a_1\lambda + a_0 = 0$
be the characteristic equation of A. Since A is a nonsingular matrix, $\lambda_i \neq 0$; that is, every eigenvalue is nonzero, and $(-1)^n a_0 = |A| \neq 0$. By the Cayley-Hamilton theorem,

$$A^n + a_{n-1}A^{n-1} + \dots + a_1 A + a_0 I = 0 \;\Rightarrow\; I = -\frac{1}{a_0}\left(A^n + a_{n-1}A^{n-1} + \dots + a_1 A\right)$$

$$A^{-1} = -\frac{1}{a_0}\left(A^{n-1} + a_{n-1}A^{n-2} + \dots + a_1 I\right)$$
>> A=[1 -1 4;3 2 1;2 1 -1]
A=
1 -1 4
3 2 1
2 1 -1
>> poly(A)
ans =
1.0000 -2.0000 -7.0000 12.0000
$$A^{-1} = -\tfrac{1}{12}\left(A^2 - 2A - 7I\right)$$
>> id=eye(3)
id =
1 0 0
0 1 0
0 0 1
>> Ai=-(A*A-2*A-7*id)/12
Ai =
0.2500 -0.2500 0.7500
-0.4167 0.7500 -0.9167
0.0833 0.2500 -0.4167
>> Ai=inv(A)
Ai =
0.2500 -0.2500 0.7500
-0.4167 0.7500 -0.9167
0.0833 0.2500 -0.4167
This method is more efficient than using the adjoint formula $A^{-1} = A_a/|A|$, where $A_a$ is the adjoint (adjugate) of A, to determine the inverse matrix. However, this method is not as efficient as the Gauss-Jordan technique for inverting a matrix.
The Gauss-Jordan technique augments the given matrix with the identity matrix of the same
order. One then reduces the original matrix to the identity matrix by elementary row
transformations, performing the same operations on the augmentation columns. When the
identity matrix stands as the left half of the augmented matrix, the inverse of the matrix stands as
the right half.
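As an added illustration of the Gauss-Jordan idea described above, MATLAB's rref can reduce the augmented matrix [A | I] in one call (a sketch, not part of the original solution):

% Added illustration: Gauss-Jordan inversion via rref on the augmented matrix
A  = [1 -1 4; 3 2 1; 2 1 -1];
RA = rref([A eye(3)]);
Ai = RA(:, 4:6)        % right half of the reduced matrix; matches inv(A) above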
9. (4.141) It is not obvious that the eigenvalues of an n×n real symmetric matrix are real, since the roots of the nth-order characteristic equation with real coefficients are, in general, complex numbers. Prove that the eigenvalues of a real symmetric matrix are real.
Hint: Assume that there are complex eigenvalues $\lambda$ with complex conjugates $\lambda^*$ and show that $\lambda = \lambda^*$, so that the eigenvalues are real.
Solution
Suppose that x is an eigenvector of A corresponding to the eigenvalue $\lambda$; then

$$A\mathbf{x} = \lambda\mathbf{x}$$

Take the complex conjugate of both sides:

$$\overline{A\mathbf{x}} = \overline{\lambda\mathbf{x}} \;\Rightarrow\; \bar{A}\bar{\mathbf{x}} = \lambda^*\bar{\mathbf{x}} \;\Rightarrow\; A\bar{\mathbf{x}} = \lambda^*\bar{\mathbf{x}} \quad \text{since A is a real matrix}$$

Now form the scalar $\mathbf{x}^T A\bar{\mathbf{x}}$ in two ways. Using the first equation and the symmetry $A^T = A$:

$$(A\mathbf{x})^T\bar{\mathbf{x}} = (\lambda\mathbf{x})^T\bar{\mathbf{x}} \;\Rightarrow\; \mathbf{x}^T(A^T\bar{\mathbf{x}}) = \lambda\,\mathbf{x}^T\bar{\mathbf{x}} \;\Rightarrow\; \mathbf{x}^T(A\bar{\mathbf{x}}) = \lambda\,\mathbf{x}^T\bar{\mathbf{x}}$$

Using the conjugated equation:

$$\mathbf{x}^T(A\bar{\mathbf{x}}) = \mathbf{x}^T(\lambda^*\bar{\mathbf{x}}) = \lambda^*\,\mathbf{x}^T\bar{\mathbf{x}}$$

Therefore $\lambda^*\,\mathbf{x}^T\bar{\mathbf{x}} = \lambda\,\mathbf{x}^T\bar{\mathbf{x}}$, and since $\mathbf{x}^T\bar{\mathbf{x}} = \sum_i |x_i|^2 > 0$ for an eigenvector $\mathbf{x} \neq 0$, it follows that $\lambda = \lambda^*$, so the eigenvalues are real.
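Although the proof above is the required result, a quick numerical illustration (added here) is that MATLAB returns purely real eigenvalues for any real symmetric test matrix:

% Added illustration: a real symmetric matrix has real eigenvalues
B = randn(5);
S = B + B';                 % real symmetric test matrix
lam = eig(S);
max(abs(imag(lam)))         % equals 0: all eigenvalues are real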