ENGINEERING MATHEMATICS
LINEAR ALGEBRA
1. MATRIX
1.1. Definition:
A system of mn numbers arranged in a rectangular array of m rows and n columns and bounded by the brackets [ ] is called an m by n (written m × n) matrix. A matrix is also denoted by a single capital letter such as A.
Thus,

      [a11  a12  ...  a1j  ...  a1n]
      [a21  a22  ...  a2j  ...  a2n]
      [ ...  ...       ...       ...]
A =   [ai1  ai2  ...  aij  ...  ain]
      [ ...  ...       ...       ...]
      [am1  am2  ...  amj  ...  amn]

is a matrix of order m × n.
It has m rows and n columns. Each of the mn numbers is called an element of the
matrix.
1.2.
Transpose of a Matrix:
The matrix obtained from any given matrix A, by interchanging rows and columns is
called the transpose of A and is denoted by AT or A’.
1 2 
1 4 7 


Thus, the transposed matrix of A = 4 5 is A ' = 

2 5 8
7 8
Clearly, the transpose of an m × n matrix is an n × m matrix.
Also, the transpose of the transpose of a matrix coincides with itself i.e. (A’)’ = A.
1.2.1. Properties of Transpose of a Matrix:
If A^T and B^T are the transposes of A and B respectively, then:
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (kA)^T = kA^T, k being any real number.
4. (AB)^T = B^T A^T
5. (ABC)^T = C^T B^T A^T
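A minimal NumPy sketch of these transpose rules, using arbitrary example matrices (the matrices here are assumptions chosen only for illustration):

```python
import numpy as np

# Quick numerical check of the transpose properties listed above.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3))
B = rng.integers(-5, 5, size=(3, 3))
k = 7

assert np.array_equal(A.T.T, A)                # (A^T)^T = A
assert np.array_equal((A + B).T, A.T + B.T)    # (A + B)^T = A^T + B^T
assert np.array_equal((k * A).T, k * A.T)      # (kA)^T = k A^T
assert np.array_equal((A @ B).T, B.T @ A.T)    # (AB)^T = B^T A^T
print("All transpose properties verified.")
```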
1 1 2


Example.1 For the Matrix, A = 2 1 1 , AA T is :
1 1 2
2
byjusexamprep.com
6 5 6 


(A) 5 6 6 
6 5 6 
6 5 6 


(B) 5 6 6 
5 5 6 
6 5 6 


(C) 5 6 5 
6 6 6 
6 5 6 


(D) 5 6 5 
6 5 6 
Ans. D
Sol.
1 1 2


Since A = 2 1 1
1 1 2
1 2 1


A = 1 1 1
2 1 2
T
1 1 2 1 2 1



Now AA = 2 1 1 1 1 1
1 1 2 2 1 2
T
1 + 1 + 4 2 + 1 + 2 1 + 1 + 4 


= 2 + 1 + 2 4 + 1 + 1 2 + 1 + 2
1 + 1 + 4 2 + 1 + 2 1 + 1 + 4 
AA
1.3.
T=
6 5 6 


5 6 5 
6 5 6 
1.3. Special Matrices and Properties:
1.3.1. Row and Column Matrix:
• A matrix having a single row is called a row matrix, e.g., [1 3 4 5].
• A matrix having a single column is called a column matrix, e.g.,
  [2]
  [7]
  [9]
• Row and column matrices are sometimes called row vectors and column vectors.
1.3.2. Square matrix:
• An m × n matrix for which the number of rows is equal to number of columns i.e. m
= n, is called square matrix.
• It is also called an n-rowed square matrix.
• The elements aij such that i = j, i.e. a11, a22, …, are called DIAGONAL ELEMENTS, and the
line along which they lie is called the principal diagonal of the matrix.
• Elements other than principal diagonal elements are called off-diagonal elements i.e.
aij such that i ≠ j.
1 2 3 


Example: A = 4 5 6 
is a square Matrix.
9 8 3 
3 3
Note.1:
A square sub-matrix of a square matrix A is called a "principal sub-matrix" if its diagonal
elements are also diagonal elements of the matrix A. So
[1 2]                                                   [2 3]
[4 5]  is a principal sub-matrix of the matrix A above, but [5 6]  is not.
1.3.3. Diagonal Matrix:
A square matrix in which all off-diagonal elements are zero is called a diagonal matrix.
The diagonal elements may or may not be zero.
2 0 0


Example: A = 0 4 0 is a diagonal matrix.
0 0 7 
1.3.3.1. Properties of diagonal Matrix:
(a) diag [x, y, z] + diag [p, q, r] = diag [x + p, y + q, z + r]
(b) diag [x, y, z] × diag [p, q, r] = diag [xp, yq, zr]
(c) (diag [x, y, z])–1 = diag [1/x, 1/y, 1/z]  (x, y, z non-zero)
(d) (diag[x, y, z])T = diag[x, y, z]
(e) diag [x, y, z]n = diag[xn, yn, zn]
(f) Eigen values of diag [x, y, z] = x, y and z.
(g) Determinant of diag [x, y, z] = | diag[x, y, z]| = xyz
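A short NumPy sketch of these diagonal-matrix properties, using example entries chosen only for illustration:

```python
import numpy as np

# Illustrative check of the diagonal-matrix properties listed above.
D1 = np.diag([2.0, 4.0, 7.0])      # diag[x, y, z]
D2 = np.diag([1.0, 3.0, 5.0])      # diag[p, q, r]

assert np.allclose(D1 + D2, np.diag([3, 7, 12]))                 # element-wise sum
assert np.allclose(D1 @ D2, np.diag([2, 12, 35]))                # element-wise product
assert np.allclose(np.linalg.inv(D1), np.diag([1/2, 1/4, 1/7]))  # reciprocal diagonal
assert np.allclose(np.linalg.matrix_power(D1, 3), np.diag([8, 64, 343]))
assert np.allclose(np.linalg.det(D1), 2 * 4 * 7)                 # determinant = xyz
assert np.allclose(np.sort(np.linalg.eigvals(D1)), [2, 4, 7])    # eigenvalues = x, y, z
print("Diagonal-matrix properties verified.")
```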
1.3.4. Scalar Matrix:
A scalar matrix is a diagonal matrix with all diagonal elements equal.

             [a 0 0]
Example: A = [0 a 0]   is a scalar matrix, where a is any non-zero value.
             [0 0 a]
1.3.5. Unit Matrix or Identity Matrix:
• A square matrix each of whose diagonal elements is 1 and each of whose non-diagonal elements is zero is called a unit matrix or an identity matrix; it is denoted by I.
• Identity matrix is always square.
• Thus, a square matrix A = [aij] is a unit matrix if aij = 1 when i = j and aij = 0 when
i ≠ j.
1 0 0 
1 0


Example: I3 = 0 1 0  is unit matrix, I2 = 
.
0 1 
0 0 1 
1.3.5.1. Properties of Identity Matrix:
(a) I is identity element for multiplication, so it is called multiplicative identity
(b) AI = IA = A
(c) In = I
(d) I–1 = I
(e) |I| = 1
1.3.6. Null matrix:
• The m × n matrix whose elements are all zero is called null matrix. Null matrix is
denoted by O.
• Null matrix need not be square.
0 0 0
0 0
0


Example : O3 = 0 0 0  , O2 = 
 , O21 =  
0 0
0
0 0 0
1.3.6.1 Properties of Null Matrix:
(a) A + O = O + A = A. So, O is additive identity.
(b) A + (–A) = O
1.3.7. Upper triangular Matrix:
• An upper triangular matrix is a square matrix whose lower off-diagonal elements are
zero i.e. aij = 0 whenever i > j.
• It is denoted by U.
• The diagonal and upper off diagonal elements may or may not be zero.
3 5 –1


Example : U = 0 5 6 
0 0 2 
1.3.8. Lower Triangular matrix:
• A lower triangular matrix is a square matrix whose upper off-diagonal triangular
elements are zero, i.e., aij = 0 whenever i < j.
• The diagonal and lower off-diagonal elements may or may not be zero. It is denoted
by L.
 1 0 0


Example : L = –1 5 0 
 2 3 6 
1.3.9. Idempotent Matrix:
A matrix A is called idempotent if A2 = A.
 2 –2 –4
1 0 0 0 

Example: 
,
 , –1 3 4  are examples of idempotent matrices.
0
1
0
0

 
  1 –2 –3


1.3.10. Involutory Matrix:
A matrix A is called involutory if A2 = I.
1 0
Example: 
 is involutory.
0 1
4 3 3


Also –1 0 –1 is involutory since A2 = I.
–4 –4 –3
1.3.11. Nilpotent Matrix:
A matrix A is said to be nilpotent of class m or index m iff A^m = O and A^(m–1) ≠ O, i.e. m is the smallest index which makes A^m = O.

                         [ 1  1  3]
Example: The matrix A =  [ 5  2  6]  is nilpotent of index 3, since A ≠ O and A^2 ≠ O, but A^3 = O.
                         [–2 –1 –3]
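A short NumPy check of the three matrices used in the idempotent, involutory and nilpotent examples above:

```python
import numpy as np

# A: idempotent (A^2 = A), B: involutory (B^2 = I), C: nilpotent of index 3.
A = np.array([[ 2, -2, -4],
              [-1,  3,  4],
              [ 1, -2, -3]])
B = np.array([[ 4,  3,  3],
              [-1,  0, -1],
              [-4, -4, -3]])
C = np.array([[ 1,  1,  3],
              [ 5,  2,  6],
              [-2, -1, -3]])

print(np.array_equal(A @ A, A))                                   # True -> idempotent
print(np.array_equal(B @ B, np.eye(3, dtype=int)))                # True -> involutory
print(np.array_equal(C @ C, np.zeros((3, 3), dtype=int)))         # False -> C^2 != O
print(np.array_equal(C @ C @ C, np.zeros((3, 3), dtype=int)))     # True  -> C^3 = O
```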
1.3.12. Singular Matrix:
A square matrix A = [aij]n×n is singular if its determinant is equal to zero, i.e.

      |a11  a12  ......  a1n|
      | .    .            . |
|A| = | .    .            . |
      |an1  an2  ......  ann|

If |A| = 0 ⇒ the matrix is singular.
If a given matrix is not singular, then it is called a non-singular matrix.
1.3.13. Periodic Matrix:
A square matrix A is called periodic if A^(k+1) = A, where k is the least positive integer for which this holds; k is called the period of A.
                                                [5 2 m]
Example.2 For what value of m is the matrix A = [3 4 2]  singular?
                                                [1 5 1]
Ans. 2.909
Sol.
          [5 2 m]
Since A = [3 4 2]
          [1 5 1]
For a singular matrix, the determinant |A| = 0.

      |5 2 m|
|A| = |3 4 2| = 0
      |1 5 1|

5(4 – 10) – 2(3 – 2) + m(15 – 4) = 0
–30 – 2 + 11m = 0
m = 32/11 = 2.909
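A quick NumPy check of Example.2 (an illustrative sketch): substituting m = 32/11 should make the determinant vanish up to floating-point round-off.

```python
import numpy as np

m = 32 / 11
A = np.array([[5, 2, m],
              [3, 4, 2],
              [1, 5, 1]])

print(np.linalg.det(A))                 # approximately 0
print(abs(np.linalg.det(A)) < 1e-12)    # True -> A is singular at m = 32/11
```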
1.4. Classification of Real and Complex Matrices:
1.4.1. Real Matrices:
Real matrices can be classified into the following three types based on the relationship between A^T and A.
1.4.1.1. Symmetric Matrix:
• A square matrix A = [aij] is said to be symmetric if its (i, j)th element is the same as its
(j, i)th element, i.e. aij = aji for all i and j.
• In a symmetric matrix: AT = A
             [p a b]
Example: A = [a q c]  is a symmetric matrix since A^T = A.
             [b c r]
1.4.1.1.1. Properties of symmetric matrices: For any square matrix A,
(a) AA^T is always a symmetric matrix.
(b) (A + A^T)/2 is always a symmetric matrix.
(c) A – A^T and A^T – A are skew-symmetric.
Note.2:
1. If A and B are symmetric, then:
(a) A + B and A – B are also symmetric.
(b) AB, BA may or may not be symmetric.
(c) A^k is symmetric for any natural number k.
(d) AB + BA is symmetric.
(e) AB – BA is skew-symmetric.
(f) A^2, B^2, A^2 ± B^2 are symmetric.
(g) kA is symmetric, where k is any scalar.
2. Every square matrix can be uniquely expressed as the sum of a symmetric and a skew-symmetric matrix. Let A be the given square matrix; then:
A = (1/2)(A + A′) + (1/2)(A − A′).
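A minimal NumPy sketch of the decomposition in Note.2, using an arbitrary example matrix:

```python
import numpy as np

# Any square matrix splits into a symmetric part (A + A^T)/2
# and a skew-symmetric part (A - A^T)/2, which add back to A.
A = np.array([[1.0, 7.0, 3.0],
              [2.0, 5.0, 8.0],
              [4.0, 6.0, 9.0]])

S = (A + A.T) / 2          # symmetric part
K = (A - A.T) / 2          # skew-symmetric part

assert np.allclose(S, S.T)      # S is symmetric
assert np.allclose(K, -K.T)     # K is skew-symmetric
assert np.allclose(S + K, A)    # S + K = A
print(S, K, sep="\n\n")
```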
1.4.1.2. Skew – Symmetric Matrix:
• A square matrix A = [aij] is said to be skew-symmetric if the (i, j)th element of A is the
negative of the (j, i)th element of A, i.e. aij = –aji ∀ i, j.
• In a skew symmetric matrix AT = –A.
• A skew symmetric matrix must have all 0’s in the diagonal.
             [ 0  a  b]
Example: A = [–a  0  c]  is a skew-symmetric matrix.
             [–b –c  0]
Note.3:
(a) For any matrix A, the matrix (A – A^T)/2 is always skew-symmetric.
If A and B are skew-symmetric, then:
(b) A ± B are skew-symmetric.
(c) AB and BA need not be skew-symmetric.
(d) A^2, B^2, A^2 ± B^2 are symmetric.
(e) Even powers A^2, A^4, A^6, … are symmetric.
(f) Odd powers A^3, A^5, A^7, … are skew-symmetric.
(g) kA is skew-symmetric, where k is any scalar.
0 6 
Example.3 Matrix A = 
 will be skew – symmetric when P = _________.
P 0
Ans. –6
Sol.
For skew-symmetric Matrix:
PT = −P
0 P  0 6 

=

6 0 P 0
0 P  0 6 

+
=0
6 0 P 0
P + 6  0 0
 0

=

0  0 0
P + 6
P+6
=0
P = −6
1.4.1.3. Orthogonal Matrices:
A square matrix A is said to be orthogonal if A^T = A^(-1), so that AA^T = AA^(-1) = I. Thus, A will be an orthogonal matrix if:
AA^T = I = A^T A.
Example: The identity matrix is orthogonal since I^T = I^(-1) = I.
Note.4: Since for an orthogonal matrix A:
⇒ AAT = I
⇒ |AAT| = |I| = 1
⇒ |A| |AT| = 1
⇒ (|A|)2 = 1
⇒ |A| = ±1
So, the determinant of an orthogonal matrix always has a modulus of 1.
Example.4 Prove that the following matrix is orthogonal:
    [–2/3  1/3  2/3]
A = [ 2/3  2/3  1/3]
    [ 1/3 –2/3  2/3]
Sol.
             [–2/3  1/3  2/3] [–2/3  2/3  1/3]
We have AA′ = [ 2/3  2/3  1/3] [ 1/3  2/3 –2/3]
             [ 1/3 –2/3  2/3] [ 2/3  1/3  2/3]

      [4/9 + 1/9 + 4/9    –4/9 + 2/9 + 2/9   –2/9 – 2/9 + 4/9]
AA′ = [–4/9 + 2/9 + 2/9    4/9 + 4/9 + 1/9    2/9 – 4/9 + 2/9] = I.
      [–2/9 – 2/9 + 4/9    2/9 – 4/9 + 2/9    1/9 + 4/9 + 4/9]

Hence the matrix is orthogonal.
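A short NumPy check of Example.4: an orthogonal matrix satisfies AA^T = A^T A = I and its determinant is ±1.

```python
import numpy as np

A = np.array([[-2/3,  1/3, 2/3],
              [ 2/3,  2/3, 1/3],
              [ 1/3, -2/3, 2/3]])

print(np.allclose(A @ A.T, np.eye(3)))   # True
print(np.allclose(A.T @ A, np.eye(3)))   # True
print(round(np.linalg.det(A), 6))        # +1 or -1
```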
1.4.2. Complex Matrices:
Complex matrices can be classified into the following three types based on the relationship between A^θ and A.
1.4.2.1. Hermitian Matrix:
A necessary and sufficient condition for a matrix A to be Hermitian is that A^θ = A.

             [  a      b + ic]
Example: A = [b – ic     d   ]  is a Hermitian matrix (a, d real).
1.4.2.2. Skew-Hermitian Matrix:
A necessary and sufficient condition for a matrix A to be skew-Hermitian is that A^θ = –A.

             [  0     –2 – i]
Example: A = [2 – i      0  ]  is a skew-Hermitian matrix.
1.4.2.3. Unitary Matrix:
A square matrix A is said to be unitary iff: Aθ = A–1.
Multiplying both sides by A, we get an alternate definition of unitary matrix as given
below:
A square matrix A is said to be unitary iff:
AAθ = I = AθA
                 1     [3 + i   –3 + i]
Example: A = ――――――――  [3 + i    3 – i]   is an example of a unitary matrix.
              2√5
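A short NumPy sketch for the complex classes above. The Hermitian matrix H below is an arbitrary example chosen for illustration; the unitary matrix re-uses the 1/(2√5)-scaled matrix shown above.

```python
import numpy as np

H = np.array([[2.0,    1 + 2j],
              [1 - 2j, 3.0   ]])                  # Hermitian: H^theta = H
U = np.array([[3 + 1j, -3 + 1j],
              [3 + 1j,  3 - 1j]]) / (2 * np.sqrt(5))

print(np.allclose(H, H.conj().T))                 # True: Hermitian
print(np.allclose(U @ U.conj().T, np.eye(2)))     # True: U U^theta = I
print(np.allclose(U.conj().T, np.linalg.inv(U)))  # True: U^theta = U^(-1)
```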
1.5.
Operations in the matrices:
1.5.1 Equality of matrices:
Two matrices A = [aij] and B = [bij] are said to be equal if and only if:
(i) They are of the same order and
(ii) each element of A is equal to the corresponding element of B.
1.5.2 Addition and Subtraction of Matrices:
If A, B be two matrices of the same order, then their sum A + B is defined as the matrix
each element of which is the sum of the corresponding element of A and B.
      [a1 b1]   [c1 d1]   [a1 + c1  b1 + d1]
Thus, [a2 b2] + [c2 d2] = [a2 + c2  b2 + d2]
      [a3 b3]   [c3 d3]   [a3 + c3  b3 + d3]
Similarly, A – B is defined as a matrix whose elements are obtained by subtracting the
elements of B from the corresponding element of A.
      [a1 b1]   [c1 d1]   [a1 – c1  b1 – d1]
Thus, [a2 b2] – [c2 d2] = [a2 – c2  b2 – d2]
1.5.2.1 Properties of addition and subtraction:
(a). Only matrices of the same order can be added or subtracted
(b). Addition of matrices is commutative i.e. A + B = B + A.
(c). Addition and subtraction of matrices is associative i.e. (A + B) – C = A + (B – C) =
B + (A – C).
1.5.3 Multiplication of a Matrix by a Scalar:
The product of a matrix A by a scalar k is a matrix of which each element is k times the
corresponding elements of A.
        [a1 b1 c1]   [ka1 kb1 kc1]
Thus, k [a2 b2 c2] = [ka2 kb2 kc2]
The distributive law holds for such products, i.e., k (A + B) = kA + kB.
Note.5:
All the laws of ordinary algebra hold for the addition or subtraction of matrices and their
multiplication by scalars.
1.5.4 Multiplication of Matrices:
Two matrices can be multiplied only when the number of columns in the first is equal to
the number of rows in the second. Such matrices are said to be conformable.
                   [a11  a12  ...  a1n]          [b11  b12  ...  b1p]
                   [a21  a22  ...  a2n]          [b21  b22  ...  b2p]
In general, if A = [ ...   ...       ... ]  and B = [ ...   ...       ... ]
                   [am1  am2  ...  amn]          [bn1  bn2  ...  bnp]

be two m × n and n × p conformable matrices, then their product is defined as the m × p matrix:

     [c11  c12  ...  c1p]
     [c21  c22  ...  c2p]
AB = [ ...   ...       ... ]
     [cm1  cm2  ...  cmp]
Where cij = ai1 b1j + ai2b2j + ai3b3j + … + ainbnj i.e. the element in the i th row and the jth
column of the matrix AB is obtained by multiplying the ith row of A with jth column of B.
The expression for cij is known as the inner product of the ith row with the jth column.
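A minimal Python sketch of this product rule, computing each cij as the inner product of row i of A with column j of B and comparing against NumPy's built-in product (the matrices are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
B = np.array([[7,  0],
              [8, -1],
              [1,  2]])            # 3 x 2

m, n = A.shape
_, p = B.shape
C = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        # c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

print(np.array_equal(C, A @ B))    # True
```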
1 2 3
7 0 
Example.5 For the two Matrices X = 
,y = 
 , the product YX will be:
4 5 6 
8 −1
7 14 21
(A) YX = 

4 11 18
4 11 18
(B) YX = 

7 14 21
 7 14 18
(C) YX = 

14 11 21
 7 14 21
(D) YX = 

18 5 6 
Ans. A
Sol.
7 0  1 2 3
YX = 


8 −1 4 5 6 
7+0
7  2 + 0  5 7  3 + 0  6

YX = 

8

1
−
1

4
8  2 − 1  5 8  3 − 1  6

7 14 21
YX = 

4 11 18
3 2 2 
3 4 2 




Example.6 If A = 1 3 1  , find the matrix B such that AB = 1 6 1  .
5 3 4
5 6 4
Sol.
3 2 2   l m n 



Let AB = 1 3 1  p q r 
5 3 4 u v w
3l + 2p + 2u 3m + 2q + 2v 3n + 2r + 2w 


=  l + 3p + u
m + 3q + v
n + 3r + w 
5l + 3p + 4u 5m + 3q + 4v 5n + 3r + 4w
3 4 2 


= 1 6 4 (given)
5 6 1 
Equating corresponding elements, we get
3l + 2p + 2u = 3,
l + 3p + u = 1,
5l + 3p + 4u = 5
………… (i)
3m + 2q + 2v = 4,
m + 3q + v = 6,
5m + 3q + 4v = 6
………… (ii)
3n + 2r + 2w = 2,
n + 3r + w = 1,
5n + 3r + 4w = 4
………… (iii)
Solving the equations (i), we get l = 1, p = 0, u = 0
Similarly, equations (ii) give m = 0, q = 2, v = 0
and equations (iii) give n = 0, r = 0, w = 1
1 0 0


B
=
Thus,
0 2 0 .
0 0 1 
1.5.4.1 Properties of Matrix Multiplication:
1. Multiplication of matrices is not commutative. In fact, if the product of AB exists, then
it is not necessary that the product of BA will also exist.
Example: A3×2 × B2×4 = C3×4 but B2×4×A3×2 does not exist since these are not
compatible for multiplication.
2. Matrix multiplication is associative, if conformability is assured. i.e. A(BC) = (AB)C
where A, B, C are m × n, n × p, p × q matrices respectively.
3. Multiplication of matrices is distributive with respect to addition of matrices, i.e. A(B + C) = AB + AC.
4. The equation AB = O does not necessarily imply that at least one of the matrices A and B is a zero matrix. For example,
[1 1] [ 1  1]   [0 0]
[1 1] [–1 –1] = [0 0].
5. If AB = O, it does not necessarily follow that BA = O; in fact, BA may not even exist.
6. Both left and right cancellation laws hold for matrix multiplication as shown below:
AB = AC ⇒ B = C (iff A is a non-singular matrix) and
BA = CA ⇒ B = C (iff A is a non-singular matrix).
1.5.5 Trace of Matrix:
Let A be a square matrix of order n. The Sum of elements lying along the principal
diagonal is called the trace of A denoted by Tr(A).
Thus, if A = [aij]n×n, then:
Tr(A) = Σ (i = 1 to n) aii = a11 + a22 + a33 + ....... + ann
1.5.5.1 Properties of trace of matrix:
(a). tr (λA) = λ tr(A)
(b). tr (A +B) = tr (A) + tr (B)
(c). tr (AB) = tr (BA)
2 3 5 


Example.7 If A = 4 5 7  then find the trace of A?
6 1 9
Ans. 16
Sol.
3
Tr(A) =  aij = a11 + a22 + a33
i =1
Tr(A) = 2 + 5 + 9
Tr(A) = 16
1.5.6 Conjugate of the Matrix:
The matrix obtained from a given matrix A on replacing its elements by the corresponding
complex conjugates is called the conjugate of A and is denoted by Ā.

             [2 + 3i   4 – 7i   8    ]
Example: A = [  –i       6      9 + i]

           [2 – 3i   4 + 7i   8    ]
Then, Ā =  [  +i       6      9 – i]
1.5.6.1 Properties of Conjugate of a Matrix: If Ā and B̄ are the conjugates of A and B respectively, then:
(a). The conjugate of Ā is A.
(b). Conjugate of (A + B) = Ā + B̄
(c). Conjugate of (kA) = k̄ Ā, k being any complex number
(d). Conjugate of (AB) = Ā B̄, A and B being conformable for multiplication
(e). Ā = A iff A is a real matrix
(f). Ā = –A iff A is a purely imaginary matrix.
1.5.7 Transposed Conjugate of the Matrix:
The transpose of the conjugate of a matrix A is called the transposed conjugate of A and is denoted by A^θ, A* or (Ā)^T. It is also called the conjugate transpose of A.
                  [2 + i   3 – i]
Example: If A =   [  4     1 – i]

To find A^θ:
               [2 − i   3 + i]
First find Ā = [  4     1 + i]

                 [2 − i     4  ]
A^θ = (Ā)^T =    [3 + i   1 + i]
1.5.7.1 Properties: If A^θ and B^θ are the transposed conjugates of A and B respectively, then:
(a). (A^θ)^θ = A
(b). (A + B)^θ = A^θ + B^θ
(c). (kA)^θ = k̄ A^θ, where k is any complex number
(d). (AB)^θ = B^θ A^θ
2.
DETERMINANTS
2.1. Definition:
The expression
|a11 b12|
|a21 b22|
is called a determinant of the second order and stands for 'a11b22 – a21b12'. It contains 4 numbers a11, b12, a21, b22 (called elements) which are arranged along two horizontal lines (called rows) and two vertical lines (called columns).

Similarly,
|a11 b12 c13|
|a21 b22 c23|
|a31 b32 c33|
is called a determinant of the third order. It consists of 9 elements which are arranged in 3 rows and 3 columns.

In general, a determinant of the nth order is denoted by:

    |a11  b12  c13  d14 .... l1n|
    |a21  b22  c23  d24 .... l2n|
Δ = |....  ....  ....          ....|
    |an1  bn2  cn3  dn4 .... lnn|

which is a block of n² elements arranged in the form of a square along n rows and n columns.
2.1.1. Principal Diagonal:
The diagonal through the left-hand top corner which contains the elements a11, b22, c33,
…... is called the leading or principal diagonal.
2.2. Minor and Cofactors:
2.2.1. Minor:
The minor of the element aij is denoted Mij and is the determinant of the matrix that
remains after deleting row i and column j of A.
2.2.2. Co – factor:
The cofactor of aij is denoted Cij and is given by:
Cij = (–1) i+j Mij
The cofactor of an element is usually denoted by the corresponding capital letter.
For instance, if
    |a11 b12 c13|
Δ = |a21 b22 c23|
    |a31 b32 c33|

then the cofactor of b32 is
                 |a11 c13|
B32 = (–1)^(3+2) |a21 c23|

and that of c23 is
        |a11 b12|
C23 = – |a31 b32|
2.3. Laplace’s expansion:
A determinant can be expanded in terms of any row (or column) as follows:
Multiply each element of the row (or column) in terms of which we intend expanding the
determinant, by its cofactor and then add up all these terms.
∴ Expanding by the 1st row, i.e. R1:
Δ = a11A11 + b12B12 + c13C13
  = a11(b22c33 – b32c23) – b12(a21c33 – a31c23) + c13(a21b32 – a31b22)

Similarly, expanding by the 2nd column, i.e. C2:
Δ = b12B12 + b22B22 + b32B32
  = –b12(a21c33 – a31c23) + b22(a11c33 – a31c13) – b32(a11c23 – a21c13)

and expanding by the 3rd row, i.e. R3:  Δ = a31A31 + b32B32 + c33C33.

Thus, Δ is the sum of the products of the elements of any row (or column) by the corresponding cofactors.

Note.6:
If, however, the sum of the products of the elements of any row (or column) by the cofactors of another row (or column) is taken, the result is zero.
e.g., in Δ,
a31A21 + b32B22 + c33C23
= –a31(b12c33 – b32c13) + b32(a11c33 – a31c13) – c33(a11b32 – a31b12) = 0

In general:
aiAj + biBj + ciCj = Δ   when i = j
aiAj + biBj + ciCj = 0   when i ≠ j
Example.8 Find the determinant of the following matrix using the second row.
    [1 2 –1]
A = [3 0  1]
    [4 2  1]
Ans. –6
Sol.
Expanding the determinant in terms of the second row, we get:
|A| = ∆ = a21C21 + a22C22 + a23C23

         |2 –1|       |1 –1|       |1 2|
∆ = –3 · |2  1| + 0 · |4  1| – 1 · |4 2|

∆ = –3[(2 × 1) – (–1 × 2)] + 0[(1 × 1) – (–1 × 4)] – 1[(1 × 2) – (2 × 4)]
∆ = –12 + 0 + 6 = –6
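A small Python sketch of the cofactor expansion used in Example.8, checked against np.linalg.det (the helper function here is an illustration, not a NumPy routine):

```python
import numpy as np

A = np.array([[1, 2, -1],
              [3, 0,  1],
              [4, 2,  1]], dtype=float)

def cofactor(M, i, j):
    """(-1)^(i+j) times the minor obtained by deleting row i and column j."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

# Expansion along the second row (index 1).
det_row2 = sum(A[1, j] * cofactor(A, 1, j) for j in range(3))
print(round(det_row2, 6), round(np.linalg.det(A), 6))   # both -6.0
```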
2 3 5 


Example.9 consider the following square matrix A = 6 4 8
.Now choose the option
2 7 9
3 3
which is not the principal submatrix of A.
2 3 
A. 

6 4
3 5 
B. 

 4 8
15
byjusexamprep.com
 4 8
C. 

7 9
D. None of these
Ans. B
Sol.
Since Principal sub matrix has same diagonal elements as the principal matrix. Thus, in
3 5 
given options, 
 is not principal submatrix of A.
 4 8
2.4. Properties:
(a). The value of a determinant does not change when rows and columns are interchanged
i.e.
|AT| = |A|
(b). If any row (or column) of a matrix A is completely zero, then:
|A| = 0, Such a row (or column) is called a zero row (or column).
(c). Also, if any two rows (or columns) of a matrix A are identical, then |A| = 0.
(d). If any two rows or two columns of a determinant are interchanged the value of
determinant is multiplied by –1.
(e). If all elements of one row (or one column) of a determinant are multiplied by the same number k, the value of the determinant is k times the value of the given determinant.
(f). If A be n-rowed square matrix, and k be any scalar, then |kA| = kn|A|.
(g). (i) In a determinant, the sum of the products of the elements of any row (or column) with the cofactors of the corresponding elements of the same row (or column) is equal to the value of the determinant.
(ii) In a determinant, the sum of the products of the elements of any row (or column) with the cofactors of some other row (or column) is zero.
Example:
    |a11 b12 c13|
Δ = |a21 b22 c23|
    |a31 b32 c33|
Then, Δ = a11A11 + b12B12 + c13C13, and
a31A21 + b32B22 + c33C23 = 0.
(h). If to the elements of a row (or column) of a determinant are added k times the corresponding elements of another row (or column), the value of the determinant thus obtained is equal to the value of the original determinant,
i.e. if A → B by Ri → Ri + kRj, then |A| = |B|,
and if A → B by Ci → Ci + kCj, then |A| = |B|.
(i). |AB| = |A|×|B| and based on this we can prove the following:
(i) |A^n| = (|A|)^n
Proof:
|A^n| = |A × A × A × …… (n times)|
|A^n| = |A| × |A| × |A| … (n times)
|A^n| = (|A|)^n
(ii) |A A^(-1)| = |I|
Proof:
|A A^(-1)| = |I| = 1
Now, |A A^(-1)| = |A| |A^(-1)|
∴ |A| |A^(-1)| = 1
⇒ |A^(-1)| = 1/|A|
(j). Using the fact that A · Adj A = |A| · I, the following can be proved for An×n:
(i). |Adj A| = |A|^(n–1)
(ii). |Adj (Adj A)| = |A|^((n–1)²)
1 1 –1


Example.10 The determinant of the matrix 2 1 0  is ______ (accurate to one
3 1 1 
decimal places).
Ans. 0
Sol.
 =1
1 0
2 0
2 1
–1
+ (–1)
1 1
3 1
3 1
Δ = 1(1 – 0) – 1 (2 – 0) – 1 (2 – 3)
Δ=1–2+1
Δ=2–2
Δ=0
                                              |1  2 1|
Example.11 If a determinant is defined as Δ = |3  0 1| ,  then what will be the value of the
                                              |4 –2 1|
determinant if the first row is multiplied by 2?
A. 80
B. 20
C. –4
D. –16
Ans. C
Sol.
Δ = 1(0 + 2) – 2(3 – 4) + 1(–6 – 0)
Δ = 2 + 2 – 6
Δ = –2
If all elements of one row (or one column) of a determinant are multiplied by the same number k, the value of the determinant is k times the value of the given determinant.
Thus,
Required value = 2Δ = 2 × (–2) = –4
2 1 0 4 


0 –1 0 2 
Example.12 Evaluate the determinant of the following 4 × 4 matrix A = 
.
7 –2 3 5 


0 1 0 –3
Ans. 6
Sol.
The third column of this matrix contains the most zeros. Expand the determinant in terms
of the third column, we get
|A| = a13C13 + a23C23 + a33C33 + a43C43
|A| = 0(C13) + 0(C23) + 3(C33) + 0(C43)
2 1 4
|A| = +3 0 –1 2
0 1 –3
The first column of this determinant contains the most zeroes. Expand the determinant
in terms of the first column, we get
A = 32
–1 2
= 6 (3 – 2 ) = 6
1 –3
2.5. Adjoint and Inverse of the Matrix:
2.5.1 Adjoint of a square matrix:
                        [a1 b1 c1]
Let a square matrix A = [a2 b2 c2] .  Then the transpose of the matrix formed by the cofactors
                        [a3 b3 c3]
of the elements of A is called the adjoint of A and is written as Adj(A).

                        [A1 B1 C1]
Cofactor matrix (Cij) = [A2 B2 C2] .  Then:
                        [A3 B3 C3]

                   [A1 A2 A3]
Adj(A) = (Cij)^T = [B1 B2 B3]
                   [C1 C2 C3]

Thus, the adjoint of a matrix is the transpose of the matrix formed by the cofactors of A.
2.5.2 Inverse of a matrix:
If A is a square matrix and there exists a matrix B such that:
AB = BA = I
then B is called the inverse of A, denoted by A^(-1), so that AA^(-1) = I.
Also, A^(-1) = Adj(A) / |A|, if A is a non-singular matrix.
2.5.2.1 Properties of Inverse
(a). AA–1 = A–1 A = I
(b). A and B are inverses of each other iff AB = BA = I
(c). (AB)–1 = B–1 A–1
(d). (ABC)–1 = C–1 B–1 A–1
(e). If A is an n × n non-singular matrix, then (A')^(-1) = (A^(-1))'.
(f). If A is an n × n non-singular matrix, then the inverse of its conjugate is the conjugate of its inverse, i.e. (Ā)^(-1) = conjugate of A^(-1).
(g). For a 2 × 2 matrix
    [a b]
A = [c d]
there is a short-cut formula for the inverse:

              1       [ d  –b]
A^(-1) = ――――――――――   [–c   a]
          (ad – bc)
1 3
Example.13 The inverse of the matrix 
 is
1 2
2 3
(A) 

1 1
 −2 1 
(B) 

 3 −1
 −2 3 
(C) 

 1 −1
 2 −3
(D) 

 −1 1 
Ans. C
Sol.
A
−1
1 3
=

1 2
−1
=
 2 −3
1
(1  2 − 1  3)  −1 1 
 −2 3 
A −1 = 

 1 −1
Alternate solution:
A −1 =
A =
Adj(A)
A
1 3
1 2
= 2–3 = –1
19
byjusexamprep.com
+
 2 −1
 2 −3
Adj(A) = 
 =

 −3 1 
 −1 1 
A −1 =
Adj(A)
1  2 −3
=


(−1)  −1 1 
A
 −2 3 
A −1 = 

 1 −1
                                                   [K 2]
Example.14 The value of K for which the matrix A = [3 1]  does not have an inverse is _____.
Ans. 6
Sol.
           [K 2]
Matrix A = [3 1]
The given matrix will not have an inverse if it is singular, i.e. |A| = 0.
|K 2|
|3 1| = 0
K – 6 = 0
K = 6
Example.15 Let A, B, C, D, E be n × n matrices, each with non-zero determinant. If ABCDE = I, then C^(-1) is ________.
A. ABDE
B. DEBA
C. BAED
D. DEAB
Ans. D
Sol.
A, B, C, D, E are n × n matrices.
Given ABCDE = I.
Post-multiplying by E^(-1):  ABCDE·E^(-1) = I·E^(-1)  ⇒  ABCD = E^(-1)
Similarly, post-multiplying by D^(-1):  ABCD·D^(-1) = E^(-1)D^(-1)  ⇒  ABC = E^(-1)D^(-1)
Pre-multiplying by A^(-1):  A^(-1)ABC = A^(-1)E^(-1)D^(-1)  ⇒  BC = A^(-1)E^(-1)D^(-1)
Pre-multiplying by B^(-1):  B^(-1)BC = B^(-1)A^(-1)E^(-1)D^(-1)  ⇒  C = B^(-1)A^(-1)E^(-1)D^(-1)
C^(-1) = (B^(-1)A^(-1)E^(-1)D^(-1))^(-1)
C^(-1) = (D^(-1))^(-1)(E^(-1))^(-1)(A^(-1))^(-1)(B^(-1))^(-1)
C^(-1) = DEAB
1 1 3


3 –3 .
Example.16 Find the inverse of  1
–2 –4 –4
Sol.
The determinant of the given matrix A is
 1 1 3   a1 b1 c1 

 

 =  1 3 –3 = a2 b2 c2  (say)
–2 –4 –4 a3 b3 c3 
If A1, A2, ……be the cofactors of a1, a2…in Δ, then A1 = – 24, A2 = – 8, A3 = – 12; B1 =
10, B2 = 2 B3 = 6; C1 = 2, C2 = 2, C3 = 2.
Thus, Δ = α1A1 + α2A2 + a3A3 = – 8.
 A1 A2

and adj A = B1 B2
C1 C2
A3  –24 –8 –12
 

B3  =  10
2
6 
C3   2
2
2 
Hence the inverse of the given matrix A:
A −1
3.
3 

1
 3

2
–24 –8 –12 

adj A
1 
1
3
  5
=
=
10
2
6  = –
–
– 

–8 
4
4
4
 2


2
2  
1
1
1
–
–
– 
 4
4
4 
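A Python sketch of the same adjoint-and-inverse computation, assembled from cofactors and checked against np.linalg.inv:

```python
import numpy as np

A = np.array([[ 1,  1,  3],
              [ 1,  3, -3],
              [-2, -4, -4]], dtype=float)

n = A.shape[0]
cof = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij

adj = cof.T                                # adjoint = transpose of the cofactor matrix
A_inv = adj / np.linalg.det(A)             # A^(-1) = adj(A) / |A|

print(np.round(adj))                       # [[-24 -8 -12], [10 2 6], [2 2 2]]
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```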
3. RANK OF MATRIX
The rank of a matrix is defined as the order of highest non-zero minor of matrix A. It is denoted
by the notation ρ(A). A matrix is said to be of rank r when:
(i) it has at least one non-zero minor of order r, and
(ii) every minor of order higher than r vanishes.
3.1.
Properties:
(a). Rank of A and its transpose is the same, i.e. ρ(A) = ρ(A′).
(b). Rank of a null matrix is zero.
(c). Rank of a non-singular square matrix of order r is r.
(d). If a matrix has a non-zero minor of order r, its rank is ≥ r, and if all minors of a matrix of order r + 1 are zero, its rank is ≤ r.
(e). Rank of a matrix is the same as the number of linearly independent row vectors in the matrix, as well as the number of linearly independent column vectors in the matrix.
(f). For any matrix Am×n, rank(A) ≤ min(m, n), i.e. the maximum possible rank of Am×n is min(m, n).
(g). Rank(AB) ≤ Rank A and Rank(AB) ≤ Rank B,
so Rank(AB) ≤ min(Rank A, Rank B).
(h). Rank (AT) = Rank (A)
(i). Rank of a matrix is the number of non-zero rows in its echelon form.
(j). Elementary transformations do not alter the rank of a matrix.
(k). Only null matrix can have a rank of zero. All other matrices have rank of at least
one.
(l). Similar matrices have the same rank.
3.2.
Echelon Form:
A matrix is in echelon form if and only if:
(i). The leading non-zero element in every row is to the right of the leading non-zero element in the previous row, i.e. below the leading non-zero element of every row all the elements must be zero.
(ii). All zero rows are below all the non-zero rows.
3.2.1 Use of Echelon form:
This definition gives an alternate way of calculating the rank of larger matrices (larger than 3 × 3) more easily: count the number of non-zero rows in the echelon (upper triangular) form to get the rank of the matrix.
3.2.2 How to reduce a matrix into Echelon form?
To reduce a matrix to its echelon form, use gauss elimination method on the matrix and
convert it into an upper triangular matrix, which will be in echelon form.
3.3.
Elementary transformation of a matrix:
The following operations, three of which refer to rows and three to columns are known
as elementary transformations:
(i). The interchange of any two rows (columns).
(ii). The multiplication of any row (column) by a non-zero number.
(iii). The addition of a constant multiple of the elements of any row (column) to the
corresponding elements of any other row (column).
3.3.1 Notation:
The elementary row transformations will be denoted by the following symbols:
(i) Rij for the interchange of the ith and jth rows.
(ii) kRi for multiplication of the ith row by k.
(iii) Ri + kRj for addition to the ith row to k times the jth row.
Similarly, the corresponding column transformation will be denoted by writing C in place
of R.
Note.7:
(1). Elementary transformations do not change either the order or rank of a matrix.
(2). While the value of the minors may get changed by the transformation I and II, their
zero or non-zero character remains unaffected.
3.4.
Equivalent Matrix:
Two matrices A and B are said to be equivalent if one can be obtained from the other
by a sequence of elementary transformations. Two equivalent matrices have the same
order and the same rank. The symbol ~ is used for equivalence.
3.5.
Normal Form of a matrix:
Every non-zero matrix A of rank r can be reduced, by a sequence of elementary transformations, to the form
[Ir 0]
[0  0]
called the normal form of A.
1 1 2 


For the matrix A = 1 2 3  ,
0 –1 –1
Example.17 Find non-singular matrices P and Q such that PAQ is in the normal form.
Hence find the rank of A.
Sol.
1 1 2  1 0 0 1 0 0

 
 

We write A = IAI,i.e. 1 2 3  = 0 1 0 A 0 1 0
0 –1 –1 0 0 1  0 0 1 
We shall affect every elementary row (column) transformation of the product by
subjecting the pre-factor (post-factor) of A to the same.
Now, replace C2 by C2 – C1 and C3 by C3 – 2C1:
1 0 0  1 0 0 1 –1 –2

 
 

1 1 1  = 0 1 0 A 0 1 0 
0 –1 –1 0 0 1 0 0 1 
Re placeR2 by R2 – R1
1 0 0   1 1 0 1 –1 –2

 
 

0 1 1  = –1 0 0 A 0 1 0 
0 –1 –1  0 0 1  0 0 1 
1 0 0  1 0 0 1 –1 –1

 
 

Operate C3 – C1, 0 1 0 = –1 1 0 A 0 1 –1
0 –1 0  0 0 1 0 0 1 
Re placeR3 byR3 + R2 :
1 0 0  1 0 0 1 –1 –1

 
 

0 1 0 = –1 1 0 A 0 1 –1
0 0 0  1 0 1  0 0 1 
I
Which is of the normal form  2
0
0

0
23
byjusexamprep.com
 1 0 0
1 –1 –1




Hence, P = –1 1 0 ' Q = 0 1 –1 and (A) = 2.
–1 1 1
0 0 1 
4.
VECTORS
An ordered n-tuple X = (x1, x2, … xn) is called an n-vector and x1, x2, … xn are called components
of X.
4.1.
Row Vector:
A vector may be written as either a row matrix X = [x1 x2 … xn] which is called row
vector.
4.2.
Column Vector:
                    [x1]
                    [x2]
A column matrix X = [x3]  is called a column vector.
                    [...]
                    [xn]
Thus, for a matrix A of order m×n, each row of A is an n-vector and each column of A
is an m-vector.
In particular, if m=1 then A is a row vector & if n=1 then A is a column vector.
4.3.
Multiplication of a vector by a scalar:
Let ‘k’ be any number and X = (x1, x2, … xn) then kX = (kx1, kx2, … kxn).
Example:
X = (1, 3, 2)
Then, 4X = (4, 12, 8)
4.4.
Linear combination of vectors:
If X1, X2, … Xr are r vectors of order n and k1, k2, … kr are r scalars then the expression
of the form k1X1+k2X2+ … +krXr is also a vector and it is called linear combination of the
vectors X1, X2, … Xr.
4.5.
Linearly dependent vectors:
The vectors X1, X2, …. Xr of same order n are said to be linearly dependent if there exist
scalars (or numbers) k1, k2, … kr not all zero such that k1X1+k2X2+……+krXr = O where
O denotes the zero vector of order n.
4.6.
Linearly independent vectors:
The vectors X1, X2, …, Xr of the same order n are said to be linearly independent if every relation of the type:
k1X1 + k2X2 + … + krXr = O
implies that k1 = k2 = …… = kr = 0.
Note.8:
(i). If X1, X2, ……, Xr are linearly dependent vectors, then at least one of the vectors can be expressed as a linear combination of the other vectors.
(ii). If A is a square matrix of order n and |A| = 0, then the rows and columns are linearly dependent.
(iii). If A is a square matrix of order n and |A| ≠ 0, then the rows and columns are linearly independent.
(iv). Any subset of a linearly independent set is itself a linearly independent set.
(v). If a set of vectors includes a zero vector, then the set of vectors is a linearly dependent set.
4.7.
Inner product:
                                  [x1]        [y1]
                                  [x2]        [y2]
The inner product of two vectors X = [...]  and Y = [...]  is denoted by X · Y and defined as
                                  [xn]        [yn]

                                 [y1]
X · Y = X^T Y = [x1 x2 ..... xn] [y2] = x1y1 + x2y2 + ..... + xnyn,
                                 [...]
                                 [yn]

which is a scalar quantity.
Note.9:
1. X^T Y = Y^T X, i.e. the inner product is symmetric.
2. X · Y = 0 ⇔ the vectors X and Y are perpendicular.
3. X · Y = ± ‖X‖ ‖Y‖ ⇔ the vectors X and Y are parallel.
4.8.
Length or norm of a vector:
       [x1]
       [x2]
If X = [...]  is a vector of order n, then the positive square root of the inner product of X with
       [xn]
itself, i.e. X^T X, is called the length (or norm) of X and is denoted by ‖X‖.

∴ ‖X‖ = √(X · X) = √(x1² + x2² + ..... + xn²)
Example.18 If X = [2 4 5], then find the length of the vector.
A. 11
B. √11
C. 3√5
D. √43
Ans. C
Sol.
The length (or norm) of the vector is:
‖X‖ = √(2² + 4² + 5²)
‖X‖ = √45 = 3√5
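A minimal NumPy sketch of the inner product, norm and an orthogonality test (the vectors Y and Z are arbitrary illustrations):

```python
import numpy as np

X = np.array([2, 4, 5])
Y = np.array([1, -2, 0])
Z = np.array([6, 3, 0])

print(X @ Y)                # inner product x1*y1 + x2*y2 + x3*y3
print(np.linalg.norm(X))    # sqrt(4 + 16 + 25) = sqrt(45) = 3*sqrt(5)
print(Y @ Z == 0)           # True: Y and Z are orthogonal
```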
4.9. Unit vector or normal vector:
A vector X is said to be a unit vector if ‖X‖ = 1.
Example: If X = [0 –1 0],
then ‖X‖ = √(0² + (–1)² + 0²) = 1.
4.10. Orthogonal vectors:
The two column vectors X1 and X2 are said to be orthogonal if: X1 · X2 = X1^T X2 = X2^T X1 = 0.

         [ 1]        [2]
Ex: X1 = [–2]  and X2 = [1]
         [ 0]        [0]

⇒ X1 · X2 = X1^T X2 = X2^T X1
⇒ (1)(2) + (–2)(1) + (0)(0) = 0
⇒ X1, X2 are orthogonal vectors.
4.11. Orthogonal set:
A set S of column vectors X1, X2, ……, Xn of the same order is said to be orthogonal if:
Xi^T Xj = 0 for all i ≠ j.

         [ 3]        [1]        [0]
Ex: X1 = [–3],  X2 = [1],  X3 = [0]
         [ 0]        [0]        [1]

⇒ X1^T X2 = 0, X1^T X3 = 0 and X2^T X3 = 0
⇒ X1, X2, X3 are orthogonal vectors.
1
6 
 
 
Example.19 If the vectors x1 = –2 and x2 = k  are orthogonal vectors. Then k is
 0 
0
_________.
A. –2
B. 3
C. –3
D. 12
Ans. B
Sol.
For Orthogonal vectors: x1T x2 = 0
26
byjusexamprep.com
6 
 
1 – 2 0 k  = 0
0
[6–2k + 0] = 0
6 – 2k = 0
2k = 6
K=3
4.12. Orthonormal vectors/Orthonormal set:
A set S of column vectors X1, X2, …, Xn of the same order is said to be an orthonormal set if
Xi^T Xj = δij = { 0, i ≠ j
                { 1, i = j.

         [–1]        [ 0]        [0]
Ex: X1 = [ 0],  X2 = [–1],  X3 = [0]
         [ 0]        [ 0]        [1]

⇒ X1^T X2 = 0, X1^T X3 = 0, X2^T X3 = 0 and X1^T X1 = 1, X2^T X2 = 1, X3^T X3 = 1
⇒ X1, X2, X3 are orthonormal vectors.
5.
SYSTEM OF LINEAR EQUATIONS
5.1.
Homogeneous System of Linear Equations:
If the system of m homogeneous linear equations in n variables x1, x2, …, xn is given by:
a11x1 + a12x2 + .... + a1nxn = 0
a21x1 + a22x2 + .... + a2nxn = 0
.....................................
am1x1 + am2x2 + .... + amnxn = 0
then the set of these equations can be written in matrix form as AX = O,
where A is the matrix of the coefficients and X is the column matrix of the variables.
Note.10:
For the system AX = O, where A is a square matrix:
(i). The system AX = O is always consistent.
(ii). If |A| ≠ 0 and ρ(A) = n (the number of variables), then the system has a unique solution (the zero, or trivial, solution).
(iii). If |A| = 0 and ρ(A) < n, then the system has infinitely many non-zero (or non-trivial) solutions.
(iv). If ρ(A) = r < n (the number of variables), then the number of linearly independent solutions of AX = O is (n – r).
(v). In a system of homogeneous linear equations, if the number of unknowns (or variables) exceeds the number of equations, then the system necessarily possesses a non-zero solution.
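A small NumPy sketch of point (iv): the number of independent solutions of AX = O is n – rank(A), and a basis for them can be read off from the SVD. The matrix below is an arbitrary rank-deficient example chosen for illustration.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # row 2 = 2 x row 1, so rank(A) = 2 < 3
              [1.0, 0.0, 1.0]])

r = np.linalg.matrix_rank(A)
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[r:].T               # columns spanning the solution space of AX = O

print(r, A.shape[1] - r)            # rank 2, so 1 independent non-trivial solution
print(np.allclose(A @ null_basis, 0))   # True
```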
Example.20 Find the values of k for which the system of equations (3k – 8)x + 3y + 3z = 0, 3x + (3k – 8)y + 3z = 0, 3x + 3y + (3k – 8)z = 0 has a non-trivial solution.
Sol.
For the given system of equations to have a non-trivial solution, the determinant of the coefficient matrix should be zero, i.e.

|3k – 8    3       3   |
|  3     3k – 8    3   | = 0
|  3       3     3k – 8|

Now applying C1 → C1 + (C2 + C3):

|3k – 2    3       3   |
|3k – 2  3k – 8    3   | = 0
|3k – 2    3     3k – 8|

         |1    3       3   |
(3k – 2) |1  3k – 8    3   | = 0
         |1    3     3k – 8|

Now operating R2 → R2 – R1 and R3 → R3 – R1:

         |1     3         3    |
(3k – 2) |0  3k – 11      0    | = 0
         |0     0      3k – 11 |

(3k – 2)(3k – 11)² = 0
Thus, k = 2/3, 11/3.
Example.21 If the following system has a non-trivial solution, prove that a + b + c = 0 or a = b = c:
ax + by + cz = 0,
bx + cy + az = 0,
cx + ay + bz = 0.
Sol.
For the given system of equations to have a non-trivial solution, the determinant of the coefficient matrix must be zero, i.e.

|a b c|
|b c a| = 0
|c a b|

Replace R1 by R1 + R2 + R3:

|a+b+c  a+b+c  a+b+c|
|  b      c      a  | = 0
|  c      a      b  |

               |1 1 1|
or (a + b + c) |b c a| = 0
               |c a b|

Now replace C2 by C2 – C1 and C3 by C3 – C1:

             |1    0      0  |
(a + b + c)  |b  c – b  a – b| = 0
             |c  a – c  b – c|

(a + b + c)[(c – b)(b – c) – (a – c)(a – b)] = 0
(a + b + c)(–a² – b² – c² + ab + bc + ca) = 0
i.e. a + b + c = 0 or a² + b² + c² – ab – bc – ca = 0
a + b + c = 0 or (1/2)[(a – b)² + (b – c)² + (c – a)²] = 0
a + b + c = 0 or a = b, b = c, c = a.
Hence the given system has a non-trivial solution if a + b + c = 0 or a = b = c.
5.2.
Non-homogeneous system of linear equations:
If the system of m non-homogeneous linear equations in n variables x1, x2, …, xn is given by
a11x1 + a12x2 + .... + a1nxn = b1
a21x1 + a22x2 + .... + a2nxn = b2
...................................
am1x1 + am2x2 + ... + amnxn = bm        ……… (1)
then the set of these equations can be written in matrix form as:
AX = B                                  ……… (2)
where A is the coefficient matrix, X is the column matrix of the variables and B is the column matrix of the constants b1, b2, …, bm.
Note.11:
(i). The system has a solution (is consistent) if and only if Rank(A) = Rank[A|B].
(ii). The system AX = B has a unique solution if and only if Rank(A) = Rank(A|B) = n, the number of variables.
(iii). The system has infinitely many solutions if ρ(A) = ρ(A|B) < n (the number of variables).
(iv). The system has no solution (is inconsistent) if ρ(A) ≠ ρ(A|B), i.e. ρ(A) < ρ(A|B).
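A short NumPy sketch of this rank test (it reuses the λ = 5, μ = 10 data of Example.23 below, so the inconsistent branch is taken):

```python
import numpy as np

A = np.array([[2.0, 3.0,  5.0],
              [7.0, 3.0, -2.0],
              [2.0, 3.0,  5.0]])
B = np.array([9.0, 8.0, 10.0])

rank_A  = np.linalg.matrix_rank(A)
rank_AB = np.linalg.matrix_rank(np.column_stack([A, B]))   # rank of the augmented matrix [A|B]

if rank_A != rank_AB:
    print("No solution")                 # printed here: rank(A) = 2, rank(A|B) = 3
elif rank_A == A.shape[1]:
    print("Unique solution")
else:
    print("Infinitely many solutions")
```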
Example.22 Consider the following system of linear equations:
x + 2ay + az = 0
x + 3by + bz = 0
x + 4cy + cz = 0
If the system has a non-zero solution, then a, b, c are in:
A. Arithmetic Progression
B. Geometric Progression
C. Harmonic Progression
D. None of these
Ans. C
Sol.
For a non-zero solution: |A| = 0

|1 2a a|
|1 3b b| = 0
|1 4c c|

1(3bc – 4bc) – 2a(c – b) + a(4c – 3b) = 0
–bc – 2ac + 2ab + 4ac – 3ab = 0
2ac = ab + bc
2ac = b(a + c)
b = 2ac/(a + c)
⇒ 2/b = 1/a + 1/c
Hence a, b, c are in Harmonic Progression.
Example.23 Investigate the values of λ and μ so that the equations
2x + 3y + 5z = 9,
7x + 3y – 2z = 8,
2x + 3y + λz = μ
have:
(i) no solution,
(ii) a unique solution,
(iii) an infinite number of solutions.
Sol.
        [2 3  5] [x]   [9]
We have [7 3 –2] [y] = [8]
        [2 3  λ] [z]   [μ]

The system admits a unique solution if, and only if, the coefficient matrix is of rank 3. This requires that

|2 3  5|
|7 3 –2| = 15(5 – λ) ≠ 0
|2 3  λ|

Thus, for a unique solution, λ ≠ 5 and μ may have any value.
If λ = 5, the system will have no solution for those values of μ for which the matrices

    [2 3  5]               [2 3  5  9]
A = [7 3 –2]   and  [A|B] = [7 3 –2  8]
    [2 3  5]               [2 3  5  μ]

have different ranks. Now replace R3 by R3 – R1:

        [2 3  5     9  ]
[A|B] = [7 3 –2     8  ]
        [0 0 λ–5   μ–9 ]

But A is of rank 2 for λ = 5, and [A|B] is not of rank 2 unless μ = 9.
Thus, if λ = 5 and μ ≠ 9, the system will have no solution.
If λ = 5 and μ = 9, the system will have an infinite number of solutions.
6.
EIGEN VALUES, EIGEN VECTORS AND CAYLEY HAMILTON THEOREM
6.1.
Eigen Values:
Let A = [aij]n×n be any n-rowed square matrix and λ a scalar. Then the matrix A − λI is called the characteristic matrix of A, where I is the unit matrix of order n.

Then the determinant

           |a11−λ   a12   ...   a1n |
|A − λI| = | a21   a22−λ  ...   a2n |
           | ...    ...          ...|
           | an1    an2   ...  ann−λ|

which is an ordinary polynomial in λ of degree n, is called the "characteristic polynomial of A". The equation |A − λI| = 0 is called the "characteristic equation of A".
The values of λ satisfying this characteristic equation are called the eigenvalues of A, and the set of eigenvalues of A is called the "spectrum of A".
The corresponding non-zero solutions X such that AX = λX, for the different eigenvalues, are called the eigenvectors of A.
6.1.1 Properties of Eigen Values:
(a). If λ1, λ2, ……, λn are the eigenvalues of A, then kλ1, kλ2, ……, kλn are the eigenvalues of kA.
(b). The eigenvalues of A^(-1) are the reciprocals of the eigenvalues of A, i.e. if λ1, λ2, ……, λn are the eigenvalues of A, then 1/λ1, 1/λ2, …, 1/λn are the eigenvalues of A^(-1).
(c). If λ1, λ2, …, λn are the eigenvalues of A, then λ1^m, λ2^m, ……, λn^m are the eigenvalues of A^m.
(d). If λ1, λ2, λ3, …, λn are the eigenvalues of a non-singular matrix A, then |A|/λ1, |A|/λ2, …, |A|/λn are the eigenvalues of Adj A.
(e). Eigen values of A = Eigen values of AT.
(f). Maximum no. of distinct eigen values = size of A.
(g). If λ1, λ2, λ3, …, λn are the eigenvalues of a matrix A of order n, then the sum of the eigenvalues = trace of A = sum of the diagonal elements,
i.e. λ1 + λ2 + λ3 + … + λn = trace of A.
(h). Product of the eigenvalues = |A| (so at least one eigenvalue is zero iff A is singular),
i.e. λ1 · λ2 · λ3 ……… λn = |A|.
(i). In a triangular and diagonal matrix, eigen values are diagonal elements themselves.
(j). Similar matrices have same eigen values. Two matrices A and B are said to be
similar if there exists a non-singular matrix P such that B = P–1 AP.
(k). If a + √b is an eigenvalue of a matrix A with rational entries (with √b irrational), then a – √b is also an eigenvalue of A.
(l). If a + ib is an eigenvalue of a real matrix A, then a – ib is also an eigenvalue of A.
(m). If A and B are two matrices of same order, then the matrix AB and BA will have
same characteristic roots.
0 −1
Example.24 Let λ1 and λ2 be the two eigen values of the matrix A = 
 . Then, λ1
1 1 
+ λ2 and λ1. λ2, are respectively:
A. 1 and 1
B. 1 and -1
C. -1 and -1
D. -1 and -1
Ans. A
Sol.
Sum of Eigen values (λ1 + λ2) = Trace of A
Sum of Eigen values (λ1 + λ2) = Sum of diagonal elements
λ1 + λ2 = 0 +1 = 1
Now, λ1. λ2 = Determinant of A
λ1. λ2 = 0×1 – 1× (-1) = 1
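A NumPy check of Example.24 and of the trace/determinant identities used in the solution:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  1.0]])

eigvals = np.linalg.eigvals(A)
print(eigvals)                                        # a complex-conjugate pair
print(np.isclose(eigvals.sum(), np.trace(A)))         # True: sum of eigenvalues = trace = 1
print(np.isclose(eigvals.prod(), np.linalg.det(A)))   # True: product = determinant = 1
```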
6.2.
Eigen Vectors:
The corresponding non-zero solutions to X such that AX = X , for different eigen values
are called as the eigen vectors of A.
6.2.1 Properties of Eigen vectors:
(a). For each eigen value of a matrix there are infinitely many eigen vectors. If X is an
eigen vector of a matrix A corresponding to the Eigen Value λ then KX is also an eigen
vector of A for every non – zero value of K.
(b). Same Eigen vector cannot be obtained for two different eigen values of a matrix.
(c). Eigen vectors corresponding to the distinct eigen values are linearly independent.
(d). For the repeated eigen values, eigen vectors may or may not be linearly
independent.
(e). The Eigen vectors of A and Ak are same.
(f). The eigen vectors of A and A-1 are same.
(g). The Eigen vectors of A and AT are NOT same.
(h). Eigen vectors of a symmetric matrix are Orthogonal.
6.3.
Cayley Hamilton Theorem:
Every square matrix A satisfies its own characteristic equation |A – λI| = 0.
Example:
If λ2 – 5λ + 6 =0 is the Characteristic equation of the matrix A, then according to Cayley
Hamilton theorem:
A2 – 5A +6I = 0
6.3.1 Applications of Cayley Hamilton theorem:
(a). It is used to find the higher powers of A such that A2, A3, A4 etc.
(b). It can also be used to obtain the inverse of the Matrix.
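A minimal NumPy sketch of both applications for a 2 × 2 matrix (the matrix is an arbitrary illustration): the characteristic equation λ² – (trace)λ + det = 0 is satisfied by A itself, and rearranging it gives A^(-1).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

t, d = np.trace(A), np.linalg.det(A)   # characteristic eq. of a 2x2: l^2 - t*l + d = 0
I = np.eye(2)

print(np.allclose(A @ A - t * A + d * I, 0))    # True: Cayley-Hamilton holds
A_inv = (t * I - A) / d                         # from A^2 - tA + dI = 0  =>  A^(-1) = (tI - A)/d
print(np.allclose(A_inv, np.linalg.inv(A)))     # True
```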
2
Example.25 Characteristic Equation of the matrix 
 2
2
 with eigen value λ is –
1 
A. λ2 + 3λ + 4 = 0
B. λ2 + 3λ –2 = 0
C. λ2 – 3λ = 0
D. λ2 + 3λ = 0
Ans. C
Sol.
Since characteristic equation is given by:
A – I = 0
2–
2
2
1– 
=0
(2 – )(1 – ) – 2  2 = 0
λ2 – 3λ = 0
3 1 4


Example.26 Find the eigen values and eigen vectors of the matrix A = 0 2 6  .
0 0 5 
Sol.
The characteristic equation is:
| A – I |= 0
3–
i.e.
0
0
1
4
2–
6 =0
0
5–
(3 – λ) (2 – λ) (5 – λ) = 0
Thus, the eigen values of A are 2, 3, 5.
If x, y, z be the components of an eigen vector corresponding to the eigen value λ, we
have
| A – I | X = 0
1
4  x 
3 – 

 
2–
6  y  = 0
 0
 0
0
5 –    z 
Putting λ = 2, we have x + y + 4z = 0, 6z = 0, 3z = 0, i.e. x + y = 0 and z = 0.
∴
x
y
z
=
= = k1(say)
1 –1 0
Hence the eigen vector corresponding to λ = 2 is k1 (1, – 1, 0).
Now, putting λ = 3, we have:
y + 4z = 0, –y + 6z = 0, i.e. y = 0, z = 0.
34
byjusexamprep.com
x y z
= = = k2
1 0 0
Hence the eigen vector corresponding to λ = 3 is k2 (1, 0, 0).
Similarly, the eigen vector corresponding to λ = 5, we have
-2x+y+4Z=0, -3y+6z=0
Thus, k3 (3, 2, 1).
6.4.
Number of Linearly independent eigen vectors:
6.4.1 Algebraic Multiplicity:
The eigenvalues are the roots of the characteristic polynomials and a polynomial can
have repeated roots.
i.e. λ1 = λ2 = λ3 = λ4 = --------------------= λK
If this happens then the eigenvalue is said to be of algebraic multiplicity k.
6.4.2 Geometric Multiplicity:
The number of linearly independent eigen vectors associated with that eigenvalue is
called the Geometric multiplicity of that value.
Geometric Multiplicity (GM) corresponding to any eigen value λ i is given by:
GM= n – Rank of (A – λi I)
Where n is the order of the matrix.
Thus, for a matrix A, the number of linearly independent eigen vectors is the sum of
geometric multiplicities obtained corresponding to different eigen values.
3 3 
Example.27 Consider the matrix A = 
 . Find the number of linearly independent
2 4
eigen vectors for the matrix.
A. 1
B. 2
C. 0
D. 3
Ans. B
Sol.
3 3 
A=

2 4
Let λ1 and λ2 be eigen values of matrix.
λ1 + λ2 = trace of A = 3 + 4
λ1 + λ2 = 7
…………… (1)
λ1λ2 = Determinant of eigen values
λ1λ2 = 12 – 6
λ1λ2 = 6
………. (2)
Now use relation:
(λ1 – λ2)2 = (λ1 + λ2)2 – 4λ1λ2
(λ1 – λ2)2 = 49 – 4 × 6
(λ1 – λ2)2 = 25
λ1 – λ2 = 5
…………. (3)
35
byjusexamprep.com
Use equation (1) and (3):
λ1 = 6, λ2 = 1
Now consider λ1 = 6:
3 
3 − 6
A – 1I = 

4 – 6
 2
–3 3 
A – 1I = 

 2 –2
Replace R2 → R2 +
R
2
R1 and R1 → 1
–3
3
1 –1
A – 1I = 

0 0 
GM = Order (n) of matrix – Rank ρ(A–λ1I)
GM = 1
Now, Corresponding to λ= 1:
3 
3 – 1
A – 2I = 

4 – 1
 2
2 3
A= 

2 3
Now R2 → R1 – R2
2 3
A= 

0 0
1 3 
2
A – 2I = 
0 0 
ρ (A–λ2 I) = 1
Thus, Geometric Multiplicity corresponding to (λ 2) = 2 – 1 = 1
Thus, number of linearly independent eigen vectors = Sum of Geometric multiplicity
corresponding to different eigen values
Number of linearly independent vectors = 1 + 1 = 2
6.5.
Diagonalizable matrix:
If for a given square matrix A of order n, there exists a non – singular matrix P such
that P-1AP = D or AP = PD where D is the diagonal matrix then A is said to be
diagonalizable matrix.
Note:
1. If X1, X2, X3 are linearly independent eigenvectors of A3×3 corresponding to the eigenvalues λ1, λ2, λ3, then P can be found such that P^(-1)AP = D or AP = PD,

          [λ1  0   0 ]
where D = [ 0  λ2  0 ]   and P = [X1, X2, X3].
          [ 0   0  λ3]
****