
Matrices Formulas

1. Definition of a matrix
A matrix (plural : matrices) is a rectangular array of items of the same kind arranged in rows and columns. It is usually denoted by a capital letter.
Each entry in a matrix is called an ‘element’ of the matrix.
2. Order of a matrix
The order of a matrix is written as "π‘š × π‘›" where π‘š is the number of rows and 𝑛 is the number of
columns of the matrix.
Example : The matrix [5 2; −1 3; 6 1] has order 3 × 2.
Each element aij in a matrix A is indexed by its row number i and column number j, where i ∈ {1, 2, …, m} and j ∈ {1, 2, …, n}.
3. Types of matrices
(i) Rectangular Matrix – Any matrix of order m × n, where m ≠ n, i.e. a matrix with a different number of rows and columns, is called a rectangular matrix.
Example : [1 0 6; −4 2 3]
(ii) Row Matrix – It is a subtype of the rectangular matrix and is defined as a matrix having only one
row.
Example : [3 2 −1]
(iii) Column Matrix – It is a subtype of the rectangular matrix and is defined as a matrix having only
one column.
Example : [3; 7; 1]
(iv) Square Matrix – A matrix with equal number of rows and columns is called a square matrix. Its
order is represented by a single number.
Example : [1 3; 6 7] is a square matrix of order 2.
Note : The principal/main/leading diagonal of a square matrix of order n consists of the elements from the first (a11) to the last (ann), i.e. the principal diagonal is the ordered set of entries aij with i = j, extending from the upper left-hand corner to the lower right-hand corner of the matrix.
(v) Diagonal Matrix – A diagonal matrix is a square matrix in which all the elements outside the principal diagonal are equal to 0.
A diagonal matrix of order n × n having d1, d2, …, dn as principal diagonal elements may be denoted by diag[d1, d2, …, dn].
Example : [3 0 0; 0 4 0; 0 0 −7] = diag[3, 4, −7]
(vi) Scalar Matrix – A diagonal matrix where all the principal diagonal elements are equal to the
same scalar value is called a scalar matrix.
A scalar matrix can be defined as follows :
A = [aij]n×n, where aij = k if i = j and aij = 0 if i ≠ j.
Example : [14 0 0; 0 14 0; 0 0 14]
(vii) Unit/Identity Matrix – A scalar matrix whose principal diagonal elements are equal to 1 is
called a unit/identity matrix. It is denoted by 𝐼𝑛 .
An identity matrix can be defined as follows :
In = [aij]n×n, where aij = 1 if i = j and aij = 0 if i ≠ j.
Example : I3 = [1 0 0; 0 1 0; 0 0 1]
(viii) Zero/Null Matrix – A matrix of order m × n in which all the elements are equal to 0 is called a zero matrix. It is denoted by Om×n.
Example : O2×3 = [0 0 0; 0 0 0]
4. Equality of matrices
Two matrices 𝐴 and 𝐡 are equal if and only if both matrices are of the same order and each element of
one is equal to the corresponding element in the other, i. e. 𝐴 = [π‘Žπ‘–π‘— ]π‘š×𝑛 & 𝐡 = [𝑏𝑖𝑗 ]π‘š×𝑛 are said to be
equal if π‘Žπ‘–π‘— = 𝑏𝑖𝑗 ∀ 𝑖, 𝑗.
Example : [2 1; 3 0] = [4/2  2 − 1; 15 − 12  0]
5. Addition of matrices
The sum of two matrices of the same order, π΄π‘š×𝑛 and π΅π‘š×𝑛 , is the matrix (𝐴 + 𝐡)π‘š×𝑛 in which the entry
in the 𝑖 th row and 𝑗 th column is π‘Žπ‘–π‘— + 𝑏𝑖𝑗 , for 𝑖 = 1, 2, 3, … … … π‘š and 𝑗 = 1, 2, 3, … … … 𝑛.
If 𝐴 = [π‘Žπ‘–π‘— ]π‘š×𝑛 and 𝐡 = [𝑏𝑖𝑗 ]π‘š×𝑛 , then 𝐴 + 𝐡 = [π‘Žπ‘–π‘— + 𝑏𝑖𝑗 ]π‘š×𝑛 .
Example : [3 1 2; 2 1 4] + [1 0 2; −1 3 0] = [4 1 4; 1 4 4]
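As a quick sanity check, the entrywise rule above can be sketched in plain Python using nested lists (the helper name mat_add is ours, not a standard function) :

```python
def mat_add(A, B):
    """Entrywise sum of two matrices of the same order, stored as nested lists."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

print(mat_add([[3, 1, 2], [2, 1, 4]], [[1, 0, 2], [-1, 3, 0]]))  # [[4, 1, 4], [1, 4, 4]]
```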
6. Zero/Null Matrix
The zero/null matrix of order π‘š × π‘›, denoted by π‘‚π‘š×𝑛 is the matrix containing π‘š × π‘› elements, all of
which are equal to 0.
0 0
Example : 𝑂3×2 = [0 0]
0 0
Note : The zero/null matrix Om×n also serves as the additive identity for a matrix of order m × n, i.e.
A + O = A and O + A = A, where A is a matrix of order m × n.
7. Negative of a matrix
The negative of a matrix π΄π‘š×𝑛 denoted by −π΄π‘š×𝑛 is the matrix formed by replacing each entry in the
matrix π΄π‘š×𝑛 with its additive inverse.
3 −1
−3 1
Example : If 𝐴3×2 = [ 2 −2], then −𝐴3×2 = [−2 2 ]
−4 5
4 −5
Note : The negative of a matrix also serves as the additive inverse of a matrix, i.e. A + (−A) = O and
(−A) + A = O, where A is a matrix of order m × n and O is the zero/null matrix of order m × n.
8. Properties of sums of matrices
If 𝐴, 𝐡 and 𝐢 are members of the set π‘†π‘š×𝑛 of all π‘š × π‘› matrices with real number entries where π‘š, 𝑛 ∈
𝑁, then :
I. A + B ∈ Sm×n (Closure Law for Addition)
II. A + B = B + A (Commutative Law for Addition)
III. (A + B) + C = A + (B + C) (Associative Law for Addition)
IV. If A, B and C are three matrices of the same order, then :
A + B = A + C ⟹ B = C (Left Cancellation Law)
B + A = C + A ⟹ B = C (Right Cancellation Law)
9. Subtraction of matrices
The subtraction or difference between two matrices of the same order, π΄π‘š×𝑛 and π΅π‘š×𝑛 , is the matrix
(𝐴 − 𝐡)π‘š×𝑛 in which the entry in the 𝑖 th row and 𝑗 th column is π‘Žπ‘–π‘— + (−𝑏𝑖𝑗 ), for 𝑖 = 1, 2, 3, … … … π‘š and
𝑗 = 1, 2, 3, … … … 𝑛.
If 𝐴 = [π‘Žπ‘–π‘— ]π‘š×𝑛 and 𝐡 = [𝑏𝑖𝑗 ]π‘š×𝑛 , then 𝐴 − 𝐡 = 𝐴 + (−𝐡) = [π‘Žπ‘–π‘— + (−𝑏𝑖𝑗 )]π‘š×𝑛 .
Example : If A = [2 0; −3 6] and B = [1 −2; 0 4], then
A − B = A + (−B) = [2 + (−1)  0 + (−(−2)); −3 + (−0)  6 + (−4)] = [1 2; −3 2]
10. Multiplication of a matrix by a scalar
The product of a real number (scalar) k and a matrix A, denoted by kA, is defined as the matrix obtained by multiplying each entry of A by k.
If 𝐴 = [π‘Žπ‘–π‘— ]π‘š×𝑛 , then π‘˜π΄ = [π‘˜. π‘Žπ‘–π‘— ]π‘š×𝑛 .
Example : If A = [1 −3; 0 2], then 3A = 3[1 −3; 0 2] = [3 × 1  3 × (−3); 3 × 0  3 × 2] = [3 −9; 0 6]
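The scalar rule can likewise be sketched in plain Python (scalar_mul is our own illustrative helper) :

```python
def scalar_mul(k, A):
    """Multiply every entry of matrix A (nested lists) by the scalar k."""
    return [[k * a for a in row] for row in A]

print(scalar_mul(3, [[1, -3], [0, 2]]))  # [[3, -9], [0, 6]]
```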
11. Properties of products of matrices and real numbers
If 𝐴 and 𝐡 are members of the set π‘†π‘š×𝑛 of all π‘š × π‘› matrices with real number entries where π‘š, 𝑛 ∈ 𝑁
and 𝑐, 𝑑 ∈ 𝑅, then :
I. cA ∈ Sm×n
II. c(A + B) = cA + cB
III. (c + d)A = cA + dA
IV. (cd)A = c(dA) = d(cA)
V. 1A = A
VI. (−1)A = −A
VII. 0A = O
VIII. cO = O
12. Multiplication of matrices
Matrix multiplication is a "multiply row by column" process. This means that we multiply the entries of a
row by the corresponding entries of a column and then add the products.
Two matrices A and B are compatible for multiplication only if the number of columns of A is equal to
the number of rows of B, i.e. Am×p × Bp×n = (AB)m×n, where AB denotes the product of the matrices A
and B.
Example : If A = [13 18 20; 2 3 4] and B = [12 6; 24 12; 12 9], then they are compatible for matrix multiplication
and AB = [13 18 20; 2 3 4][12 6; 24 12; 12 9]
= [13 × 12 + 18 × 24 + 20 × 12  13 × 6 + 18 × 12 + 20 × 9; 2 × 12 + 3 × 24 + 4 × 12  2 × 6 + 3 × 12 + 4 × 9]
= [828 474; 144 84]
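The row-by-column process can be sketched in plain Python with nested lists (mat_mul is an illustrative helper, not a library function) :

```python
def mat_mul(A, B):
    """Row-by-column product: entry (i, j) is the sum of products of
    row i of A with column j of B. Requires columns of A == rows of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[13, 18, 20], [2, 3, 4]]
B = [[12, 6], [24, 12], [12, 9]]
print(mat_mul(A, B))  # [[828, 474], [144, 84]]
```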
13. Properties of matrix multiplication
I. Matrix multiplication is not commutative, i.e. AB ≠ BA in general. In fact, sometimes
when AB is defined, BA is not defined due to incompatibility for matrix multiplication.
II. The product of two matrices can be zero without either of them being a zero matrix.
III. The cancellation law for the multiplication of real numbers is not valid for the multiplication of
matrices, i.e. AB = AC does not imply B = C.
IV. Matrix multiplication is associative if conformability is assured, i.e. A(BC) = (AB)C.
V. Matrix multiplication is distributive with respect to matrix addition, i.e. A(B + C) = AB + AC.
Note : The unit/identity matrix In is the multiplicative identity for a square matrix of order n, i.e. AI = A and
IA = A.
14. Positive integral powers of matrices
If A is a square matrix, then all its positive integral powers such as A², A³, etc. are defined and can be
multiplied with each other to get higher powers, such as A²A = (AA)A = A(AA) = AA² = A³ = AAA,
etc.
Further, if I is a unit matrix of any order, then I = I² = I³ = ⋯ = Iᵐ for every positive integer m.
Note : We already know that the product of two non-zero matrices can be a zero matrix. It is also possible for a
positive integral power of a non-zero matrix to be the zero matrix O. Such a matrix is called a nilpotent matrix.
Example : If T = [0 2; 0 0], then T² = T × T = [0 2; 0 0][0 2; 0 0] = [0 0; 0 0] = O, and T³ = T²T = O, and so on.
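The nilpotency of this T can be verified with a small sketch in plain Python (mat_mul is our own helper) :

```python
def mat_mul(A, B):
    """Row-by-column product of two compatible matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[0, 2], [0, 0]]
T2 = mat_mul(T, T)   # T squared
print(T2)            # [[0, 0], [0, 0]] -> T squared is already the zero matrix, so T is nilpotent
```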
15. Transpose of a matrix
The matrix obtained from any matrix 𝐴 by interchanging its rows and columns is called the transpose
of the given matrix and is denoted by 𝐴′ or 𝐴𝑇 . This means that if the order of matrix 𝐴 is π‘š × π‘›, the
order of its transpose is 𝑛 × π‘š.
Example : If A = [3 2 1; 6 5 4], then A′ = [3 6; 2 5; 1 4]
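Interchanging rows and columns is a one-liner in plain Python via zip (a minimal sketch; transpose is our own name) :

```python
def transpose(A):
    """Interchange the rows and columns of A (nested lists):
    an m x n matrix becomes an n x m matrix."""
    return [list(col) for col in zip(*A)]

print(transpose([[3, 2, 1], [6, 5, 4]]))  # [[3, 6], [2, 5], [1, 4]]
```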
16. Properties of transpose of a matrix
I. If A is any matrix, then (A′)′ = A and (−A)′ = −A′.
II. If A and B are two matrices of the same order, then (A + B)′ = A′ + B′ and (A − B)′ = A′ − B′.
III. If matrix A has order m × p and matrix B has order p × n, then (AB)′ = B′A′.
IV. If A is a matrix and k is a scalar, then (kA)′ = kA′.
17. Symmetric and skew-symmetric matrices
Symmetric Matrix :
1. A symmetric matrix is a square matrix which is symmetric about its principal diagonal. Thus the elements on one side of the principal diagonal are the reflected images of the elements on the other side of the principal diagonal.
2. The principal diagonal elements themselves can be any value.
3. Definition : A square matrix A = [aij] of order n is said to be symmetric if aij = aji ∀ i, j.
4. A necessary and sufficient condition for a matrix A to be symmetric is : A = A′
Skew-symmetric Matrix :
1. A skew-symmetric matrix is a square matrix in which the elements on one side of the principal diagonal are the reflected images of the elements on the other side of the principal diagonal, but with reversed signs.
2. The principal diagonal elements are equal to 0.
3. Definition : A square matrix A = [aij] of order n is said to be skew-symmetric if aij = −aji ∀ i, j. Using this definition, we get aii = −aii ⟹ aii = 0.
4. A necessary and sufficient condition for a matrix A to be skew-symmetric is : A = −A′
Note :
➢ Diagonal matrices are always symmetric.
➢ A matrix which is both symmetric and skew-symmetric is a square null matrix.
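Both conditions translate directly into plain-Python checks (the helper names are our own; matrices are nested lists) :

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symmetric(A):
    """A = A' : the matrix equals its transpose."""
    return A == transpose(A)

def is_skew_symmetric(A):
    """A = -A' : the transpose equals the entrywise negative of A."""
    return transpose(A) == [[-a for a in row] for row in A]

S = [[1, 7], [7, 3]]    # a_ij = a_ji
K = [[0, 4], [-4, 0]]   # a_ij = -a_ji, zero diagonal
print(is_symmetric(S), is_skew_symmetric(K))  # True True
```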
18. Theorems related to symmetric and skew-symmetric matrices
Theorem 1. If 𝐴 be any square matrix, then 𝐴 + 𝐴′ is symmetric and 𝐴 − 𝐴′ is skew-symmetric.
Proof : Using the properties of the transpose of a matrix and the necessary and sufficient condition for a
symmetric matrix, we get :
(A + A′)′ = A′ + (A′)′ = A′ + A = A + A′
Likewise, using the properties of the transpose of a matrix and the necessary and sufficient condition for a
skew-symmetric matrix, we get :
(A − A′)′ = A′ − (A′)′ = A′ − A = −(A − A′)
Theorem 2. Every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.
i.e. A = (1/2)(A + A′) + (1/2)(A − A′)
where (1/2)(A + A′) is a symmetric matrix and (1/2)(A − A′) is a skew-symmetric matrix.
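Theorem 2 can be checked numerically with a small plain-Python sketch (decompose is our own helper; Fraction keeps the halves exact) :

```python
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def decompose(A):
    """Split a square matrix A into its symmetric part (A + A')/2
    and its skew-symmetric part (A - A')/2."""
    At = transpose(A)
    n = len(A)
    sym = [[Fraction(A[i][j] + At[i][j], 2) for j in range(n)] for i in range(n)]
    skew = [[Fraction(A[i][j] - At[i][j], 2) for j in range(n)] for i in range(n)]
    return sym, skew

A = [[1, 5], [3, 7]]
sym, skew = decompose(A)
# The two parts add back to A; sym is symmetric and skew has a zero diagonal.
print(all(sym[i][j] + skew[i][j] == A[i][j] for i in range(2) for j in range(2)))  # True
```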
19. Determinant of a matrix
Determinant of a 2 × 2 matrix
Let A = [a1 b1; a2 b2] be a square matrix of order 2, then its determinant is given by :
|A| or det(A) = |a1 b1; a2 b2| = a1b2 − a2b1
Determinant of a 3 × 3 matrix
π‘Ž1
Let 𝐴 = [π‘Ž2
π‘Ž3
by :
𝑏1
𝑏2
𝑏3
𝑐1
𝑐2 ] be a square matrix of order 3, then its determinant (expanding along 𝑅1 ) is given
𝑐3
π‘Ž1
|𝐴| or det(𝐴) = |π‘Ž2
π‘Ž3
𝑏1
𝑏2
𝑏3
𝑐1
𝑏
𝑐2 | = π‘Ž1 | 2
𝑏3
𝑐3
π‘Ž2
𝑐2
| − 𝑏1 |π‘Ž
𝑐3
3
𝑐2
π‘Ž2
|
+
𝑐
|
1
𝑐3
π‘Ž3
𝑏2
|
𝑏3
= π‘Ž1 (𝑏2 𝑐3 − 𝑏3 𝑐2 ) − 𝑏1 (π‘Ž2 𝑐3 − π‘Ž3 𝑐2 ) + 𝑐1 (π‘Ž2 𝑏3 − π‘Ž3 𝑏2 )
Note :
➒ If 𝐴 and 𝐡 are square matrices of the same order, then det(𝐴𝐡) = det(𝐴) . det(𝐡).
20. Singular and non-singular matrices
A square matrix 𝐴 is said to be singular if det(𝐴) = 0, otherwise it is said to be non-singular.
21. Adjoint of a matrix
The adjoint of a square matrix 𝐴 is the transpose of the matrix obtained by replacing each element π‘Žπ‘–π‘—
by its cofactor 𝐴𝑖𝑗 . It is denoted by adj. 𝐴
π‘Ž11
If 𝐴 = [π‘Ž21
π‘Ž31
π‘Žπ‘–π‘— .
π‘Ž12
π‘Ž22
π‘Ž32
π‘Ž13
𝐴11
π‘Ž23 ], then adj. 𝐴 = [𝐴12
π‘Ž33
𝐴13
𝐴21
𝐴22
𝐴23
𝐴31
𝐴32 ], where 𝐴𝑖𝑗 denotes the co-factor of an element
𝐴33
22. Properties of adjoint and determinant of a matrix
I.
If 𝐴 be any 𝑛-rowed square matrix, then (adj. 𝐴)𝐴 = 𝐴(adj. 𝐴) = |𝐴|𝐼𝑛 , where 𝐼𝑛 is the 𝑛-rowed
identity matrix.
If 𝐴 is a square matrix of order 𝑛, then |π‘˜π΄| = π‘˜ 𝑛 |𝐴|.
If 𝐴 is a square matrix and π‘˜ is a scalar, then adj. (π‘˜π΄) = π‘˜ 2 (adj. 𝐴).
If 𝐴 is a 𝑛 × π‘› non-singular matrix, then |adj. 𝐴| = |𝐴|𝑛−1 .
If 𝐴 and 𝐡 are two non-singular matrices of the same type, then adj. (𝐴𝐡) = (adj. 𝐡)(adj. 𝐴).
II.
III.
IV.
V.
23. Inverse of a matrix
A non-zero square matrix A of order n is said to be invertible if there exists a square matrix B of order
n such that AB = BA = In. The square matrix B is then called the inverse of A and is denoted by A⁻¹.
Every invertible matrix has a unique inverse.
The necessary and sufficient condition for a square matrix A to possess an inverse is that it must be non-singular, i.e. det(A) ≠ 0.
For an invertible square matrix A of order n, the inverse is given as follows :
A⁻¹ = (1/|A|)(adj. A)
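For the 2 × 2 case the adjoint has a simple closed form ([d −b; −c a]), so the formula can be sketched in plain Python with exact fractions (inverse_2x2 is our own helper) :

```python
from fractions import Fraction

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via A^{-1} = (1/|A|) adj. A."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "a singular matrix has no inverse"
    adj = [[d, -b], [-c, a]]  # adjoint of a 2x2 matrix
    return [[Fraction(x, det) for x in row] for row in adj]

print(inverse_2x2([[4, 7], [2, 6]]))
# [[Fraction(3, 5), Fraction(-7, 10)], [Fraction(-1, 5), Fraction(2, 5)]]
```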
24. Properties of inverse of a matrix
I. If A be any n-rowed non-singular square matrix, then AA⁻¹ = A⁻¹A = In, where In is the n-rowed identity matrix.
II. If A be any n-rowed non-singular square matrix, then (A′)⁻¹ = (A⁻¹)′.
III. If A and B be two n-rowed non-singular square matrices, then AB is also non-singular and
(AB)⁻¹ = B⁻¹A⁻¹ (Reversal law for the inverse of a product).
25. Matrix representation of a system of linear equations
Consider a system of two linear equations in two unknowns :
π‘Ž1 π‘₯ + 𝑏1 𝑦 = 𝑐1
π‘Ž2 π‘₯ + 𝑏2 𝑦 = 𝑐2
The system of linear equations can be written in matrix form as follows :
𝐴𝑋 = 𝐡
[a1 b1; a2 b2][x; y] = [c1; c2]
where A − coefficient matrix
X − variable (unknown) matrix
B − constant matrix
Likewise, consider a system of three linear equations in three unknowns :
π‘Ž1 π‘₯ + 𝑏1 𝑦 + 𝑐1 𝑧 = 𝑑1
π‘Ž2 π‘₯ + 𝑏2 𝑦 + 𝑐2 𝑧 = 𝑑2
π‘Ž3 π‘₯ + 𝑏3 𝑦 + 𝑐3 𝑧 = 𝑑3
The system of linear equations can be written in matrix form as follows :
𝐴𝑋 = 𝐡
π‘Ž1
[π‘Ž 2
π‘Ž3
𝑏1
𝑏2
𝑏3
𝑐1 π‘₯
𝑑1
𝑐2 ] [𝑦] = [𝑑2 ]
𝑐3 𝑧
𝑑3
where 𝐴 − π‘π‘œπ‘’π‘“π‘“π‘–π‘π‘–π‘’π‘›π‘‘ π‘šπ‘Žπ‘‘π‘Ÿπ‘–π‘₯
𝑋 − π‘£π‘Žπ‘Ÿπ‘–π‘Žπ‘π‘™π‘’ π‘šπ‘Žπ‘‘π‘Ÿπ‘–π‘₯
𝐡 − π‘’π‘›π‘˜π‘›π‘œπ‘€π‘› π‘šπ‘Žπ‘‘π‘Ÿπ‘–π‘₯
26. Consistency and inconsistency of a system of linear equations
The following criteria determine whether a system of linear equations AX = B is consistent (has at least one solution) or inconsistent :
➢ If |A| ≠ 0, the system is consistent and has a unique solution.
➢ If |A| = 0 and (adj. A)B ≠ O, the system is inconsistent.
➢ If |A| = 0 and (adj. A)B = O, the system is either consistent with infinitely many solutions or inconsistent.
27. Solving a system of linear equations using matrix method (Martin’s rule)
Let AX = B be a system of linear equations with A non-singular. The solution of the system of linear equations is given as follows :
X = A⁻¹B ⟹ X = (1/|A|)(adj. A)B
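The matrix method for a 2 × 2 system can be sketched in plain Python with exact fractions (solve_2x2 is an illustrative helper; it applies X = (1/|A|)(adj. A)B directly) :

```python
from fractions import Fraction

def solve_2x2(A, B):
    """Solve AX = B for a 2x2 non-singular A using X = (1/|A|)(adj. A)B."""
    (a1, b1), (a2, b2) = A
    det = a1 * b2 - a2 * b1
    assert det != 0, "the method applies only when A is non-singular"
    adj = [[b2, -b1], [-a2, a1]]  # adjoint of the 2x2 coefficient matrix
    return [Fraction(adj[i][0] * B[0] + adj[i][1] * B[1], det) for i in range(2)]

# Solve: x + 2y = 5, 3x + 4y = 11
print(solve_2x2([[1, 2], [3, 4]], [5, 11]))  # [Fraction(1, 1), Fraction(2, 1)] -> x = 1, y = 2
```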
*******************************