Inverses and powers: Rules of Matrix Arithmetic

Contents
• 1 Inverses and powers: Rules of Matrix Arithmetic
♦ 1.1 What about division of matrices?
♦ 1.2 Properties of the Inverse of a Matrix
◊ 1.2.1 Theorem (Uniqueness of Inverse)
◊ 1.2.2 Inverse Test
◊ 1.2.3 Lemma (Reduced Row Echelon Form for Square Matrices)
◊ 1.2.4 Lemma (Condition for Singularity)
◊ 1.2.5 Lemma (AB=I and Nonsingularity)
◊ 1.2.6 Proposition (AB=I implies BA=I)
◊ 1.2.7 Theorem (A Right Inverse is an Inverse)
◊ 1.2.8 New Inverse Test
◊ 1.2.9 Theorem (Exponents and Transpose)
◊ 1.2.10 Theorem (Inverse of Product of Matrices)
♦ 1.3 The Computation of the Inverse of a Matrix
♦ 1.4 Applying the Inverse of a Matrix to Systems of Linear Equations
◊ 1.4.1 Theorem (Solving Equations Using the Matrix Inverse)
♦ 1.5 Powers of a Matrix
◊ 1.5.1 Theorem (Laws of Exponents)
Inverses and powers: Rules of Matrix Arithmetic
What about division of matrices?
We have considered addition, subtraction and multiplication of matrices. What about division? When we consider real numbers, we can write the quotient b/a as b · a^{-1}. In addition, we may think of a^{-1} as the multiplicative inverse of a, that is, it is the number which, when multiplied by a, yields 1. In other words, if we set b = a^{-1}, then ab = ba = 1. Finally, 1 is the multiplicative identity, that is, r · 1 = 1 · r = r for any real number r.
While these concepts cannot be extended to matrices completely, there are some circumstances when they do make sense.
First, we can note that 1 × 1 matrices satisfy [a] + [b] = [a + b] and [a][b] = [ab]. This means that both addition and multiplication of these matrices are just like the addition and multiplication of the real numbers. In this sense, matrices may be thought of as a generalization of the real numbers.
Next we remember that if A is m × n, then I_m A = A = A I_n. This means that the identity matrix (or, more properly, matrices) acts like 1 does for the real numbers. This also means that if we want there to be a (single) matrix I satisfying IA = A = AI, then we must have m = n. This means we have to restrict ourselves to square matrices.
If A is an n × n matrix, then I_n A = A = A I_n, and so I_n acts in the same manner as does 1 for the real numbers. Indeed, that is the reason it is called the identity matrix.
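For readers who want to check this numerically, here is a minimal NumPy sketch; the library choice and the sample matrix are just for illustration and are not part of the notes.

    import numpy as np

    # A hypothetical 3 x 3 matrix, just to illustrate I_n A = A = A I_n.
    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
    I = np.eye(3)

    assert np.allclose(I @ A, A)   # multiplying by the identity on the left leaves A unchanged
    assert np.allclose(A @ I, A)   # ... and on the right as well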
Finally, we want to find (if possible) a matrix A^{-1} so that A^{-1}A = AA^{-1} = I. When such a matrix exists, it is called the inverse of A, and the matrix A itself is called invertible.
Properties of the Inverse of a Matrix
We consistently refer to the inverse of A rather than an inverse of A, which would seem to imply that a matrix can
have only one inverse. This is indeed true.
Theorem (Uniqueness of Inverse)
A square matrix A can have no more than one inverse.
Proof: Suppose we have matrices B and C which both act as inverses, that is, AB = BA = I and AC = CA = I. We
evaluate BAC in two different ways and equate the results: BAC = (BA)C = IC = C and also BAC = B(AC) = BI =
B, and so B = C.
Inverse Test
If A and B are square matrices of the same size, then B is a left inverse of A if BA = I. Similarly, it is a right
inverse of A if AB = I.
By definition, B is the inverse of A, written A^{-1}, if AB = BA = I, that is, if B is both a left inverse and a right inverse.
We next make an observation about the reduced row echelon form of square matrices:
Lemma (Reduced Row Echelon Form for Square Matrices)
If A is an n × n matrix, then either
1. The reduced row echelon form of A is I_n, or
2. The last row of the reduced row echelon form of A is all zero.
Proof: If every row in the reduced row echelon form of A has a leading one, then, since A has the same number of rows as columns, so does every column. This means that the leading ones must be on the diagonal, and every other entry of the matrix is zero. In other words, the reduced row echelon form is I_n. If, on the other hand, some row does not have a leading one, then it is an all-zero row. Since such rows are at the bottom of the matrix when it is in reduced row echelon form, the last row, in particular, must be all zero.
When the reduced row echelon form is I_n, the matrix is called nonsingular. Otherwise it is called singular.
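This test is easy to carry out by computer. The following SymPy sketch (the library and the sample matrices are assumptions, not part of the notes) decides nonsingularity by computing the reduced row echelon form exactly.

    from sympy import Matrix, eye

    def is_nonsingular(A: Matrix) -> bool:
        """Return True if the reduced row echelon form of A is the identity."""
        rref_form, _pivots = A.rref()      # exact row reduction
        return rref_form == eye(A.rows)

    # Hypothetical examples: the first reduces to I_2, the second has a zero row.
    print(is_nonsingular(Matrix([[1, 2], [3, 4]])))   # True  (nonsingular)
    print(is_nonsingular(Matrix([[1, 2], [2, 4]])))   # False (singular)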
Next we give a criterion for nonsingularity. It is trivial that if x = 0 then Mx = 0. If this is the only vector x for which Mx = 0, then M is nonsingular.
Lemma (Condition for Singularity)
M is nonsingular if and only if Mx = 0 implies x = 0.
Proof: First, suppose that M is nonsingular. Then the equation Mx = 0 has an augmented matrix [M | 0] which, in reduced row echelon form, gives the equation I_n x = 0. Hence x = 0.
Now suppose that M is singular. The reduced row echelon form is not I_n, and so some column does not contain a leading 1, that is, there must exist a free variable. It can be assigned a nonzero value, and thus provide a nonzero solution to Mx = 0.
Lemma (AB=I and Nonsingularity)
If AB=I then B is nonsingular.
Proof: Suppose that Bx = 0. Multiply both sides of the equation by A to get A(Bx) = A0 = 0. On the other hand, A(Bx) = (AB)x = Ix = x, and so x = 0. Hence Bx = 0 implies x = 0, and so B is nonsingular.
Proposition (AB=I implies BA=I)
Suppose that A and B are square matrices with AB = I. Then BA = I.
Proof: From the previous lemma we know that B is nonsingular. Hence we know how to find a matrix C which is a solution to the equation BX = I, that is, so that BC = I. We now evaluate BABC in two different ways and equate the results: BABC = B(AB)C = BIC = BC = I, while BABC = (BA)(BC) = (BA)I = BA. Hence BA = I.
We get an important result from this Proposition.
Theorem (A Right Inverse is an Inverse)
Suppose A and B are square matrices with AB = I. Then B = A^{-1}.
Proof: By the Proposition above, AB = I implies BA = I. Since the inverse of A is unique, B = A^{-1}.
New Inverse Test
If A and B are square matrices then B is the inverse of A if and only if AB = I.
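In computational terms, the New Inverse Test says that only one matrix product needs to be checked. A small NumPy sketch with a made-up matrix illustrates this; as the theorem predicts, BA then also comes out as the identity.

    import numpy as np

    # A hypothetical invertible 3 x 3 matrix.
    A = np.array([[2., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])
    B = np.linalg.inv(A)

    # The New Inverse Test: it suffices to check AB = I ...
    assert np.allclose(A @ B, np.eye(3))
    # ... and, as the theorem guarantees, BA = I follows automatically.
    assert np.allclose(B @ A, np.eye(3))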
Here is an application of the previous theorem:
Theorem (Exponents and Transpose)
If A is a square matrix with inverse A^{-1}, then
(A^T)^{-1} = (A^{-1})^T.
Proof: Let B = (A^{-1})^T. Then A^T B = A^T (A^{-1})^T = (A^{-1}A)^T = I^T = I, and so B = (A^T)^{-1}.
Here is another application of the previous theorem:
Theorem (Inverse of Product of Matrices)
If A and B are invertible matrices of the same size, then AB is also invertible and (AB)^{-1} = B^{-1}A^{-1}.
Proof:
Since (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I, it follows that B^{-1}A^{-1} is the inverse of AB.
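This identity is also easy to spot-check numerically. The following NumPy sketch uses made-up matrices and is only an illustration, not a substitute for the proof.

    import numpy as np

    # Two hypothetical invertible matrices of the same size.
    A = np.array([[1., 2.], [3., 5.]])   # det = -1
    B = np.array([[2., 1.], [1., 1.]])   # det =  1

    lhs = np.linalg.inv(A @ B)                    # (AB)^{-1}
    rhs = np.linalg.inv(B) @ np.linalg.inv(A)     # B^{-1} A^{-1}, note the reversed order
    assert np.allclose(lhs, rhs)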
The Computation of the Inverse of a Matrix
Suppose we have a square matrix A and the reduced row echelon form of A is I (that is, A is nonsingular). X is the inverse of A if it satisfies the equation AX = I. We have seen how to solve such equations. We conclude that if we start with the matrix [A | I] then the reduced row echelon form will be [I | A^{-1}]. This not only allows us to compute the inverse of A but it also shows that nonsingular matrices are invertible and vice versa.
Example: If we start with a nonsingular 3 × 3 matrix A, then [A | I] has, as its reduced row echelon form, [I | A^{-1}], and so we conclude that the right-hand block of the reduced matrix is A^{-1}.
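Since the matrices of the original example are not reproduced here, the following SymPy sketch carries out the same procedure on a hypothetical 3 × 3 matrix: form [A | I], row reduce, and read A^{-1} off the right-hand block.

    from sympy import Matrix, eye

    # A hypothetical nonsingular 3 x 3 matrix.
    A = Matrix([[2, 1, 0],
                [1, 3, 1],
                [0, 1, 2]])

    augmented = A.row_join(eye(3))          # form [A | I]
    rref_form, _ = augmented.rref()         # row reduce to [I | A^{-1}]
    A_inv = rref_form[:, 3:]                # the right-hand 3 x 3 block is A^{-1}

    assert A * A_inv == eye(3)              # sanity check: A A^{-1} = I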
Applying the Inverse of a Matrix to Systems of Linear Equations
Theorem (Solving Equations Using the Matrix Inverse)
If a system of linear equations is given by the equation Ax = b, and A has an inverse, then x = A^{-1}b.
Proof: We take the equation Ax = b and multiply both sides by A^{-1}:
A^{-1}(Ax) = A^{-1}b and A^{-1}(Ax) = (A^{-1}A)x = Ix = x, and so x = A^{-1}b.
Example:
Suppose we want to solve a system of three linear equations in three unknowns, written as Ax = b, where A is the 3 × 3 matrix of the previous example and b is the given column of constants. We have already done the computation to determine A^{-1}. Hence x = A^{-1}b, and the (only) solution is x_1 = -1, x_2 = 1, x_3 = 0.
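A minimal NumPy sketch of the theorem, using a made-up system rather than the one from the original notes:

    import numpy as np

    # A hypothetical system Ax = b with an invertible coefficient matrix.
    A = np.array([[2., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])
    b = np.array([1., 2., 3.])

    x = np.linalg.inv(A) @ b          # x = A^{-1} b, as in the theorem
    assert np.allclose(A @ x, b)      # check that x really solves the system
    # In practice np.linalg.solve(A, b) is preferred; it avoids forming A^{-1} explicitly.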
Powers of a Matrix
Suppose we have a square matrix A. Then A^2 = AA, A^3 = AAA, and in general, for m, n > 0 we have
• A^n = AA⋯A (n factors). (Note that A^1 = A.)
• A^m A^n = A^{m+n}
• (A^m)^n = A^{mn}
For invertible matrices, nonpositive powers are defined so that these equalities remain valid.
• n = 0: A^m A^0 = A^{m+0} = A^m, so A^0 = I.
• m = 1, n = -1: A^1 A^{-1} = A^{-1+1} = A^0 = I, so A^{-1} is the inverse of A (hence the notation).
• n = -m: A^m A^{-m} = A^{m+(-m)} = A^0 = I, so A^{-m} is the inverse of A^m.
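These conventions agree with what numerical libraries do; for example, NumPy's matrix_power accepts negative exponents for invertible matrices by inverting the matrix first. A short sketch with a made-up matrix:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 1.]])            # a hypothetical invertible matrix (det = 1)
    I = np.eye(2)

    assert np.allclose(np.linalg.matrix_power(A, 0), I)                      # A^0 = I
    assert np.allclose(np.linalg.matrix_power(A, -1), np.linalg.inv(A))      # A^{-1} is the inverse
    A3 = np.linalg.matrix_power(A, 3)
    assert np.allclose(np.linalg.matrix_power(A, -3) @ A3, I)                # A^{-m} is the inverse of A^m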
Theorem (Laws of Exponents)
For any invertible square matrix A, nonzero scalar r, and integers m and n,
• A^m A^n = A^{m+n}
• (A^m)^n = A^{mn}
• (A^{-1})^{-1} = A
• (A^n)^{-1} = (A^{-1})^n
• (rA)^{-1} = (1/r)A^{-1}
Proof: The first two results have already been verified. The method for the last three is the same:
• (A^{-1})A = I implies that A is the inverse of A^{-1}, that is, A = (A^{-1})^{-1}.
• Evaluate the product so that the middle factors "disappear": A^n (A^{-1})^n = (AA⋯A)(A^{-1}A^{-1}⋯A^{-1}) = I, and so (A^{-1})^n is the inverse of A^n.
• (rA)((1/r)A^{-1}) = (r · (1/r))(AA^{-1}) = I, and so (1/r)A^{-1} is the inverse of rA.
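The last two laws can also be spot-checked numerically; in this NumPy sketch the matrix A, the exponent n, and the scalar r are made-up values.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 1.]])            # hypothetical invertible matrix
    n, r = 3, 2.5                       # hypothetical exponent and nonzero scalar

    # (A^n)^{-1} = (A^{-1})^n
    lhs = np.linalg.inv(np.linalg.matrix_power(A, n))
    rhs = np.linalg.matrix_power(np.linalg.inv(A), n)
    assert np.allclose(lhs, rhs)

    # (rA)^{-1} = (1/r) A^{-1}
    assert np.allclose(np.linalg.inv(r * A), (1 / r) * np.linalg.inv(A))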