UNIVERSITY OF GHANA, LEGON
DEPARTMENT OF MATHEMATICS
APPLICATION OF EIGENVALUES AND
EIGENVECTORS IN DIFFERENTIAL EQUATIONS
By
Group Two(2)
March 31, 2023
A PROJECT IN LINEAR ALGEBRA.
Acknowledgement
We would like to express our deepest appreciation to the following individuals for their invaluable
contributions to this project:
First and foremost, we would like to thank our project supervisor, Sir. David Armah, for his
guidance, support, and encouragement throughout the project. His insightful comments and
constructive criticism were instrumental in shaping the direction of our research.
We are also grateful to the staff at the Mathematics department, who provided us with access
to their facilities and academic resources, and who were always willing to answer our questions
and provide assistance when needed.
Finally, we would like to thank the following members of this group who made this project a
successful one:
• Amanie Princess Pearl (10916722)
• Obeng De-Graft Kofi (10922787)
• Fosu Andrews Kwaku (10922469)
• Dennis Argyire Darko (10897168)
• Ansah Owusu Theophilus (10920377)
• Godwin Azaledzi (10862788)
• Ansah Shadrack (10893224)
• Nyarku Mintah Francis (10863056)
• Kelvin Ampong Boateng (10865952)
• Obodai David Sai (10904526)
• Fianu Mensah Wisdom (10352532)
Thank you all for your contributions and support.
Abstract
Eigenvalues and eigenvectors are powerful tools in linear algebra, with many applications in various
fields of science and engineering. In this paper, we explore the application of eigenvalues and
eigenvectors to solving systems of linear differential equations.
We begin by introducing the concepts of eigenvalues and eigenvectors, and their properties. We
then show how these concepts can be used to diagonalize a matrix, which is a key step in solving
systems of linear differential equations. Specifically, we demonstrate how the diagonalization
process can reduce a system of differential equations into a set of decoupled first-order equations,
each with a unique solution that can be expressed in terms of exponential functions.
We illustrate our approach with several examples, including the classic predator-prey model, a
damped harmonic oscillator, and a system of coupled oscillators. For each example, we first
derive the corresponding system of differential equations, and then show how the eigenvalues and
eigenvectors of the associated matrix can be used to obtain the general solution.
Our results demonstrate the usefulness of eigenvalues and eigenvectors in solving systems of linear
differential equations, and highlight the power of diagonalization as a tool for simplifying complex
systems. We conclude by discussing some potential extensions and applications of our approach,
including in the study of nonlinear systems and in control theory.
Contents
1 Introduction
  1.1 Motivation
  1.2 Problem statement
  1.3 Structure of the project
2 Basic Notions and Preliminary Results
  2.1 Introduction
  2.2 Eigenvalues and Eigenvectors
  2.3 Similar matrix
  2.4 Diagonalization of Matrices
  2.5 Homogeneous Linear Systems
  2.6 Planar system
  2.7 Phase Portraits
3 Application of Eigenvalues And Eigenvectors to solving Differential Equations
  3.1 Introduction
  3.2 Eigenvalues And Eigenvectors
  3.3 Linear Systems Of Differential Equations
4 Application of eigenvalues and eigenvectors
  4.1 Renaming simple graphs
5 Conclusion
6 References
  6.1 References
Chapter 1
Introduction
This chapter gives a general overview of the project.
Eigenvalues and eigenvectors are concepts in linear algebra that allow us to understand certain
properties of square matrices.
Eigenvalues and eigenvectors have numerous applications in various fields of study, including
physics, engineering, and computer science. Here are a few examples of how eigenvalues and
eigenvectors can be applied in daily life:
1. In finance, eigenvectors can be used to calculate the optimal portfolio of investments with
minimum risk.
2. In image processing, eigenvectors can be used to compress images by reducing the dimensionality of the data.
3. In chemistry, eigenvectors can be used to calculate the vibrational modes of molecules,
which can help predict their behavior.
4. In machine learning, eigenvectors can be used in algorithms for dimensional reduction and
feature extraction.
5. In structural engineering, eigenvalues and eigenvectors can be used to analyze the stability
of structures and predict their natural frequencies.
While these examples may seem technical, they illustrate the broad range of applications of
eigenvalues and eigenvectors in various aspects of everyday life.
• An eigenvalue of a square matrix is a scalar λ for which Av = λv has a nonzero solution v.
An eigenvector is a non-zero vector whose direction is unchanged (it is only scaled) after
being multiplied by the matrix. Eigenvectors can be used to interpret the effects of matrix
transformations on vector spaces and to find the principal axes of a transformation.
• In this Project, we will explore the concepts of eigenvalues and eigenvectors in depth,
beginning with their fundamental properties and important theorems. We will also examine
various applications of eigenvalues and eigenvectors in different fields and provide examples
of how they can be used to solve real-world problems.
• Whether you are a student learning linear algebra for the first time or a researcher looking to
apply these concepts in your work, this project will provide a comprehensive and practical
guide to understanding eigenvalues and eigenvectors.
1.1 Motivation
The study of differential equations plays a fundamental role in many areas of science and engineering, from physics and biology to economics and finance. Differential equations describe
the behavior of dynamic systems, and finding their solutions is essential for understanding the
dynamics of physical and natural systems.
Eigenvalues and eigenvectors are important tools in linear algebra that have found numerous
applications in various fields, including differential equations. They provide insights into the
behavior of physical systems and enable us to diagonalize matrices, which simplifies the solutions
of differential equations.
The motivation behind this project is to explore the application of eigenvalues and eigenvectors
to systems of linear differential equations. Specifically, we aim to demonstrate how the diagonalization of matrices using eigenvectors and eigenvalues can be used to solve systems of linear
differential equations, and to analyze the behavior of the solutions using phase portraits. We will
also discuss the stability and type of equilibrium points using the eigenvectors and eigenvalues of
the matrix representing the system.
This project is important because it provides a deeper understanding of the relationship between
eigenvalues, eigenvectors, and differential equations, and their applications to real-world problems.
Additionally, it equips us with the tools to analyze and solve systems of linear differential equations,
which is a crucial skill for researchers and practitioners in various fields.
The concept of eigenvalues and eigenvectors can be traced back to the 18th century, when mathematicians were studying quadratic forms. The modern "eigen" terminology was introduced by David Hilbert in the early 1900s. In the 20th century, the subject became increasingly important with the development of
quantum mechanics and the use of matrices to describe physical systems. It has many important
applications in fields such as physics, engineering, computer science, and data analysis.
1.2 Problem statement
Eigenvalues and eigenvectors are used to solve systems of linear ordinary differential equations
(ODEs). The method of diagonalization involves finding a diagonal matrix D and an invertible
matrix P such that D = P −1 AP , where A is the matrix of coefficients in the system of ODEs.
This approach is particularly useful in the study of dynamical systems and physical phenomena,
and provides a powerful tool for modeling and analyzing complex systems. Many physical systems
can be modeled using systems of linear differential equations. Solving these systems analytically
can be a challenging task, especially for complex systems. Eigenvalues and eigenvectors offer a
powerful tool for solving linear differential equations by diagonalizing the corresponding matrices.
The objective of this project is to explore the application of eigenvalues and eigenvectors to solving
systems of linear differential equations. Specifically, we aim to investigate how the diagonalization
process can simplify the solution of differential equations and provide insight into the behavior of
physical systems.
We will focus on several examples of physical systems, including the classic predator-prey model, a
damped harmonic oscillator, and a system of coupled oscillators. For each example, we will derive
the corresponding system of differential equations and show how the eigenvalues and eigenvectors
of the associated matrix can be used to obtain the general solution.
The project will involve the development and implementation of numerical algorithms for diagonalization and solution of differential equations. The results of the project will demonstrate the
usefulness of eigenvalues and eigenvectors in solving systems of linear differential equations and
highlight the power of diagonalization as a tool for simplifying complex systems. The findings of
the project will contribute to the field of linear algebra and have implications for a wide range of
applications in science and engineering.
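The numerical component described above can be sketched briefly. The snippet below is a sketch in Python with numpy; the project does not prescribe a particular tool, and the 2 × 2 coefficient matrix is purely illustrative. It diagonalizes A, confirms D = P⁻¹AP, and uses the factorization to evolve a solution of x′ = Ax:

```python
import numpy as np

# Illustrative coefficient matrix for a system x' = A x (hypothetical example)
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# Diagonalization: columns of P are eigenvectors, D holds the eigenvalues
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Check A = P D P^(-1), i.e. D = P^(-1) A P
assert np.allclose(A, P @ D @ np.linalg.inv(P))

def solve(x0, t):
    # Solution of x' = A x: x(t) = P diag(exp(lambda_i * t)) P^(-1) x(0)
    return P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P) @ x0

x0 = np.array([1.0, 0.0])
x1 = solve(x0, 0.5)
```

Because the decoupled equations are scalar, the only transcendental functions needed are exponentials, exactly as the diagonalization argument predicts.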
1.3 Structure of the project
1. Introduction:
• Brief overview of eigenvalues and eigenvectors, their importance and applications
• Objectives and scope of the project
2. Theoretical background:
• Definition of eigenvalues and eigenvectors
• Properties and characteristics of eigenvectors and eigenvalues
• Diagonalization of matrices using eigenvectors and eigenvalues
• Applications of eigenvectors and eigenvalues in various fields
3. Methodology:
• Description of the mathematical tools and techniques used in the project
• Procedures for diagonalization of matrices and solving differential equations using eigenvectors and eigenvalues
4. Results and Analysis:
• Application of eigenvectors and eigenvalues in solving systems of linear differential equations
• Presentation of numerical and graphical results
• Discussion of the implications and significance of the results
5. Discussion:
• Interpretation and evaluation of the results
• Comparison with previous studies and literature
• Limitations and assumptions of the study
6. Conclusion:
• Summary of the main findings and contributions of the study
• Recommendations for future research and applications
7. References:
Chapter 2
Basic Notions and Preliminary Results
2.1 Introduction
In this section, we will introduce some basic notions and preliminary results related to eigenvalues,
eigenvectors, differential equations, and phase portraits.
Eigenvalues and eigenvectors are fundamental concepts in linear algebra that have wide-ranging
applications in various fields of science and engineering. They describe how a linear transformation
(represented by a square matrix) acts on a vector, and provide insights into the behavior of physical
systems. In this report, we will introduce some basic notions and preliminary results related to
eigenvalues, eigenvectors, differential equations, and phase portraits.
We will start by defining eigenvalues and eigenvectors, and discussing their properties. We
will then introduce the diagonalization of matrices and its applications to solving systems of
linear differential equations. Finally, we will discuss phase portraits, which provide a graphical
representation of the behavior of systems of differential equations in the phase plane.
By the end of this report, we aim to provide a solid understanding of the basic notions and
preliminary results related to eigenvalues, eigenvectors, differential equations, and phase portraits,
and to demonstrate their applications in various contexts.
2.2 Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors are fundamental concepts in linear algebra that describe how a
linear transformation (represented by a square matrix) acts on a vector. An eigenvector is a
nonzero vector v that, when multiplied by a square matrix A, results in a scalar multiple of itself
(i.e., Av = λv). We call λ the eigenvalue of the square matrix A and v the eigenvector
corresponding to the eigenvalue λ.
Example: Let
A = [ −3  1 ]    and    v = [ 1 ]
    [ −2  0 ]               [ 1 ]
Then
Av = [ −3(1) + 1(1) ] = [ −2 ] = −2 [ 1 ] = −2v
     [ −2(1) + 0(1) ]   [ −2 ]      [ 1 ]
Here the eigenvalue of the 2 × 2 matrix A is −2 and the corresponding eigenvector v is (1, 1)ᵀ.
2.2.1 Characteristic Equation. Given the equation Av = λv, we have
Av − λv = 0
Av − λIv = 0, where I is the identity matrix,
(A − λI)v = 0.
In order to get nontrivial solutions, we find the determinant of the matrix (A − λI) and equate it
to zero:
det(A − λI) = 0
The equation above is called the characteristic equation of the square matrix A; the roots of the
characteristic equation are the eigenvalues of A. To find a corresponding eigenvector v, we
substitute each value of λ into the equation (A − λI)v = 0. P(λ) = det(A − λI) is called the
characteristic polynomial of the square matrix A.
Example: Let
A = [ 3  1 ]
    [ 0  2 ]
We find the eigenvalues using the characteristic equation det(A − λI) = 0:
det( [ 3  1 ] − λ [ 1  0 ] ) = 0
     [ 0  2 ]     [ 0  1 ]
det [ 3−λ   1  ] = 0
    [  0   2−λ ]
(3 − λ)(2 − λ) − (0)(1) = 0
(3 − λ)(2 − λ) = 0
λ = 3 or λ = 2
Let λ1 = 3 and λ2 = 2. Now we find the corresponding eigenvectors.
For λ1 = 3: (A − λ1I)v = 0
[ 3−3   1  ] [ v1 ] = 0
[  0   2−3 ] [ v2 ]
[ 0   1 ] [ v1 ] = 0
[ 0  −1 ] [ v2 ]
v2 = 0   (2.2.1)
−v2 = 0   (2.2.2)
so v2 = 0 and v1 is free. The eigenvector is
v = [ v1 ] = v1 [ 1 ] ;   set v1 = 1, so   v = [ 1 ]
    [ 0  ]      [ 0 ]                          [ 0 ]
For λ2 = 2: (A − λ2I)v = 0
[ 3−2   1  ] [ v1 ] = 0
[  0   2−2 ] [ v2 ]
[ 1  1 ] [ v1 ] = 0
[ 0  0 ] [ v2 ]
v1 + v2 = 0   (2.2.3)
so we get v1 = −v2. The eigenvector is
v = [ −v2 ] = v2 [ −1 ] ;   set v2 = 1, so   v = [ −1 ]
    [  v2 ]      [  1 ]                          [  1 ]
Therefore our eigenpairs are (3, (1, 0)ᵀ) and (2, (−1, 1)ᵀ).
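As a numerical check of the worked example (a sketch using Python's numpy library, which is our assumption rather than part of the project), we can verify that the eigenpairs computed above satisfy Av = λv:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# numpy returns the eigenvalues and unit-length eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(A)

# Every returned eigenpair satisfies A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# The hand-computed eigenvectors (1, 0) and (-1, 1) also satisfy A v = lambda v
assert np.allclose(A @ np.array([1.0, 0.0]), 3.0 * np.array([1.0, 0.0]))
assert np.allclose(A @ np.array([-1.0, 1.0]), 2.0 * np.array([-1.0, 1.0]))
```

Note that an eigenvector is only determined up to a nonzero scalar multiple, so the hand-computed vectors need not match numpy's normalized columns exactly.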
2.2.2 Properties of Eigenvectors and Eigenvalues. Here are some properties of eigenvectors
and eigenvalues:
• Eigenvectors are only defined for square matrices and are nonzero vectors.
• Eigenvectors corresponding to different eigenvalues are linearly independent.
• The sum of the eigenvalues is equal to the trace of the matrix, while the product of the
eigenvalues is equal to the determinant of the matrix.
2.3 Similar matrix
Definition: Let A and B be square matrices. We say A is similar to B if
A = PBP⁻¹,
where P is an invertible matrix.
NB: Similarity of matrices satisfies the properties of an equivalence relation:
• A is similar to itself (reflexivity).
• If A is similar to B, then B is similar to A (symmetry).
• If A is similar to B and B is similar to C, then A is similar to C (transitivity).
2.3.1 Theorem. If two matrices are similar, then they have the same characteristic equation
(characteristic polynomial), and hence the same eigenvalues, but not necessarily the same
eigenvectors.
Proof: Suppose A = PBP⁻¹. Then
A − λI = PBP⁻¹ − λI
A − λI = PBP⁻¹ − PλIP⁻¹
A − λI = P(B − λI)P⁻¹
det(A − λI) = det(P) · det(B − λI) · det(P⁻¹)
det(A − λI) = det(PP⁻¹) · det(B − λI)
det(A − λI) = det(I) · det(B − λI)
det(A − λI) = det(B − λI)
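Theorem 2.3.1 can be checked numerically. The sketch below (Python with numpy; the matrices B and P are hypothetical choices of ours) builds A = PBP⁻¹ and confirms that A and B share eigenvalues, and therefore also trace and determinant, which are coefficients of the shared characteristic polynomial:

```python
import numpy as np

# A hypothetical similar pair: B arbitrary, P invertible, A = P B P^(-1)
B = np.array([[3.0, 1.0],
              [0.0, 2.0]])
P = np.array([[1.0, 2.0],
              [1.0, 3.0]])          # det(P) = 1, so P is invertible
A = P @ B @ np.linalg.inv(P)

# Similar matrices have the same eigenvalues...
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))

# ...and the same trace and determinant
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
```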
2.3.2 Theorem. The set of eigenvectors that correspond to distinct eigenvalues of a square
matrix is linearly independent.
2.4 Diagonalization of Matrices
A diagonal matrix is of the form
D = [ a11               ]
    [      a22          ]
    [           ⋱       ]
    [              ann  ]
where all the other entries are zero. A square matrix A is diagonalizable if it can be transformed
into a diagonal matrix using a matrix P whose columns are eigenvectors of A; the diagonal entries
of the resulting matrix are the eigenvalues of A. In other words, a square matrix is diagonalizable
if it is similar to a diagonal matrix whose entries are the eigenvalues of A. Therefore
A is diagonalizable if A = PDP⁻¹, where D is a diagonal matrix.
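Using the eigenpairs of the matrix from Section 2.2, the factorization A = PDP⁻¹ can be verified directly (a sketch in Python with numpy, our assumed tool):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# P has the eigenvectors of A as columns; D has the eigenvalues on its diagonal
P = np.array([[1.0, -1.0],
              [0.0,  1.0]])   # eigenvectors (1, 0) and (-1, 1) from Section 2.2
D = np.diag([3.0, 2.0])

# A is diagonalizable: A = P D P^(-1)
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

The order of the eigenvalues in D must match the order of the eigenvector columns in P, otherwise the factorization fails.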
2.5 Homogeneous Linear Systems
A linear system of differential equations can be written in the form
x′(t) = A(t)x(t) + b(t).
If b(t) is not identically zero, the equation is a nonhomogeneous linear system; if b(t) = 0, the
differential equation is homogeneous.
2.6 Planar system
A linear system of first-order ordinary differential equations with constant coefficients has the
form
x′1(t) = a11x1 + a12x2 + · · · + a1nxn
x′2(t) = a21x1 + a22x2 + · · · + a2nxn
  ⋮
x′n(t) = an1x1 + an2x2 + · · · + annxn
We can derive a square matrix A out of the ODE, where
A = [ a11  a12  · · ·  a1n ]
    [ a21  a22  · · ·  a2n ]
    [  ⋮    ⋮    ⋱     ⋮  ]
    [ an1  an2  · · ·  ann ]
2.7 Phase Portraits
A phase portrait is a graphical representation of the behavior of a system of differential equations
in the phase plane. It shows the trajectories of the system and the location of the equilibrium
points. The eigenvectors and eigenvalues of the matrix representing the system can be used to
determine the stability and type of the equilibrium points. For a planar system the general
solution is
x(t) = C1 e^(λ1 t) V1 + C2 e^(λ2 t) V2,
where λ1, λ2 are the eigenvalues, V1, V2 are the corresponding eigenvectors, and C1, C2 are
constants. The shape of the phase portrait depends on the eigenvalues; the cases are described
below.
2.7.1 Case 1: Real Distinct Eigenvalues.
• We have a saddle when λ1 < 0 < λ2. In this type of phase portrait, the trajectory along the
eigenvector with positive eigenvalue moves away from the origin towards infinity as t → ∞,
while the trajectory along the eigenvector with negative eigenvalue converges to the origin
as t → ∞; the orbits are hyperbolic.
x(t) = C1 e^(λ1 t) V1 + C2 e^(λ2 t) V2
The saddle is an unstable equilibrium, since λ1 < 0 and λ2 > 0.
• When λ1 and λ2 are both negative (λ1, λ2 < 0), the phase portrait shows the trajectories
moving towards the origin, converging to it as t → ∞. This type of solution is called a
nodal sink and it is stable.
x(t) = C1 e^(λ1 t) V1 + C2 e^(λ2 t) V2
In this case, the orbits are parabolic.
• When λ1 and λ2 are both positive (λ1, λ2 > 0), the phase portrait shows the trajectories
moving away from the origin, approaching infinity as t → ∞. This type of solution is called
a nodal source and it is unstable.
x(t) = C1 e^(λ1 t) V1 + C2 e^(λ2 t) V2
In this case, the orbits are also parabolic.
2.7.2 Case 2: Repeated Eigenvalues. When the two eigenvalues are the same (λ1 = λ2 = λ,
where λ ∈ ℝ) and there are two independent eigenvectors, the trajectories move away from the
origin as t → ∞ when the eigenvalue is positive (λ > 0), and converge to the origin as t → ∞
when the eigenvalue is negative (λ < 0). This type of solution is called a star; it is stable when
λ < 0 and unstable when λ > 0.
x(t) = e^(λt) (C1 V1 + C2 V2)
The orbits are straight lines through the origin.
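The case analysis above can be collected into a small classification routine (a sketch in Python with numpy; it assumes a 2 × 2 matrix with real eigenvalues, and it reports a repeated eigenvalue as a star, which is accurate only when two independent eigenvectors exist):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the equilibrium at the origin of x' = A x for a 2x2 matrix
    with real eigenvalues, following the cases discussed above."""
    l1, l2 = np.sort(np.linalg.eigvals(A).real)   # l1 <= l2
    if abs(l1 - l2) < tol:
        # Caveat: a star only when A has two independent eigenvectors
        return "star (stable)" if l1 < 0 else "star (unstable)"
    if l1 < 0 < l2:
        return "saddle"
    if l2 < 0:
        return "nodal sink"
    if l1 > 0:
        return "nodal source"
    return "degenerate"

assert classify(np.array([[1.0, 1.0], [4.0, 1.0]])) == "saddle"        # eigenvalues 3, -1
assert classify(np.array([[0.0, 1.0], [-2.0, -3.0]])) == "nodal sink"  # eigenvalues -1, -2
assert classify(np.array([[2.0, 1.0], [1.0, 2.0]])) == "nodal source"  # eigenvalues 1, 3
```

The three test matrices are exactly the examples worked out in Chapter 3 below.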
Chapter 3
Application of Eigenvalues And Eigenvectors to solving Differential Equations
3.1 Introduction
Eigenvalues and eigenvectors are important concepts in linear algebra. Given a square matrix A,
an eigenvector v is a non-zero vector such that Av is a scalar multiple of v. The scalar multiple
is called the eigenvalue λ. In other words, Av = λv.
To find the eigenvalues and eigenvectors of a matrix, we solve the equation (A − λI)v = 0, where
I is the identity matrix. The eigenvalues are the solutions to this equation, and the eigenvectors
are the non-zero solutions.
3.2 Eigenvalues And Eigenvectors
Let us say A is an n × n matrix and λ is an eigenvalue of matrix A, then ⃗x, a non-zero vector,
is called an eigenvector if it satisfies the given expression below;
A⃗x = λ⃗x
(3.2.1)
x is an eigenvector of A corresponding to eigenvalue, λ.
A⃗x − λ⃗x = ⃗0
(3.2.2)
In order not to have only the trivial solution, (A − λI) must be singular, so that the determinant
of (A − λI) equals zero. Thus,
(A − λI)⃗x = ⃗0
(3.2.3)
det(A − λI) = 0
(3.2.4)
Eigenvalues of a Square Matrix
Suppose A is an n × n square matrix with real entries. Then A − λI, where λ is an indeterminate
scalar, is called the eigen (or characteristic) matrix of A. The determinant of the eigen matrix is
set to zero:
|A − λI| = 0
(3.2.5)
The equation det(A − λI) = 0 is called the characteristic equation of the matrix A and its roots
(the values of λ) are called characteristic roots or eigenvalues. Every square matrix has a
characteristic equation.
Note: The det(A − λI) is the characteristic polynomial.
Example 1: Find the eigenvalues and eigenvectors of the 2 × 2 matrix below.
A = [ 3  −2 ]
    [ 0   2 ]
Solution:
Using the characteristic equation, det(A − λI) = 0:
det( [ 3  −2 ] − λ [ 1  0 ] ) = 0
     [ 0   2 ]     [ 0  1 ]
det [ 3−λ  −2  ] = 0
    [  0   2−λ ]
(3 − λ)(2 − λ) − (0)(−2) = 0
λ² − 5λ + 6 = 0
λ1 = 2 and λ2 = 3
Hence, the two eigenvalues of the given matrix are λ = 2 and λ = 3. The sum of the diagonal
entries of a matrix A is called the trace and is denoted tr(A). It is always true that if A is an
n × n matrix with n eigenvalues λ1, λ2, . . . , λn, then
λ1 + λ2 + · · · + λn = tr(A)
Finding the eigenvectors:
Now we want to find the eigenvectors for each eigenvalue.
For λ1 = 2, we have (A − 2I)⃗v = ⃗0:
[ 3−2  −2  ] [ v1 ] = [ 0 ]
[  0   2−2 ] [ v2 ]   [ 0 ]
[ 1  −2 ] [ v1 ] = [ 0 ]
[ 0   0 ] [ v2 ]   [ 0 ]
v1 − 2v2 = 0
After solving the matrix equation, v1 = 2v2. Now, the eigenvector corresponding to λ1 = 2 is
V1 = [ 2v2 ] = v2 [ 2 ] ;   set v2 = 1, therefore   V1 = [ 2 ]
     [ v2  ]      [ 1 ]                                  [ 1 ]
Also, for λ2 = 3, (A − 3I)⃗v = ⃗0 gives v2 = 0 with v1 free, so the eigenvector is of the form
v1(1, 0)ᵀ. Taking v1 = 2, the eigenvector for λ2 = 3 is
V2 = [ 2 ]
     [ 0 ]
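As a check on Example 1 (a numpy sketch; the library choice is ours), we can confirm the eigenvalues, the trace identity stated above, the corresponding determinant identity, and the hand-computed eigenvectors:

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [0.0,  2.0]])

eigvals = np.linalg.eigvals(A)
assert np.allclose(np.sort(eigvals), [2.0, 3.0])

# Sum of eigenvalues equals the trace, product equals the determinant
assert np.isclose(eigvals.sum(), np.trace(A))        # 2 + 3 = 5 = tr(A)
assert np.isclose(eigvals.prod(), np.linalg.det(A))  # 2 * 3 = 6 = det(A)

# The hand-computed eigenvectors satisfy A v = lambda v
assert np.allclose(A @ np.array([2.0, 1.0]), 2.0 * np.array([2.0, 1.0]))
assert np.allclose(A @ np.array([2.0, 0.0]), 3.0 * np.array([2.0, 0.0]))
```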
3.3 Linear Systems Of Differential Equations
Now we will look at linear systems.
We are going to be looking at first order, linear systems of differential equations. Here is an
example of a system of first order, linear differential equations.
x′1 = x1 + x2
x′2 = 3x1 + 2x2
We call this kind of system a coupled system, since knowledge of x2 is required to find x1 and,
likewise, knowledge of x1 is required to find x2.
In this section, we will learn how to write an nth order differential equation as a system of first
order differential equations.
The example below is a second order differential equation. We will write it as a system of first
order differential equation.
Example:
2y′′ − 5y′ + y = 0
Solution:
Let
x1(t) = y(t) and x2(t) = y′(t).
Rearranging, y′′ = (5/2)y′ − (1/2)y. Then
x′1(t) = y′(t) = x2
x′2(t) = y′′(t) = (5/2)x2 − (1/2)x1
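The reduction can be checked numerically: the eigenvalues of the coefficient matrix of the first-order system coincide with the roots of the characteristic equation 2r² − 5r + 1 = 0 of the original second-order equation (a sketch in Python with numpy, our assumed tool):

```python
import numpy as np

# First-order system x' = A x obtained from 2y'' - 5y' + y = 0
# with x1 = y, x2 = y':  x1' = x2,  x2' = (5/2) x2 - (1/2) x1
A = np.array([[0.0,  1.0],
              [-0.5, 2.5]])

# The eigenvalues of A are exactly the roots of 2 r^2 - 5 r + 1 = 0,
# the characteristic equation of the original second-order ODE
eigvals = np.sort(np.linalg.eigvals(A))
roots = np.sort(np.roots([2.0, -5.0, 1.0]))
assert np.allclose(eigvals, roots)
```

This is no accident: the determinant det(A − λI) reproduces the characteristic polynomial of the scalar equation, up to a constant factor.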
We will now learn to convert a system of linear differential equations to matrix form. Consider
x′1 = x1 + x2 + t
x′2 = 3x1 + 2x2 + 1
[ x′1 ] = [ x1 + x2 + t   ]
[ x′2 ]   [ 3x1 + 2x2 + 1 ]
[ x′1 ] = [ 1  1 ] [ x1 ] + [ t ]
[ x′2 ]   [ 3  2 ] [ x2 ]   [ 1 ]
Now if we define the vectors
⃗x = [ x1 ] ,   x⃗′ = [ x′1 ]
    [ x2 ]          [ x′2 ]
the system can be written in matrix form as
x⃗′ = [ 1  1 ] ⃗x + [ t ]
     [ 3  2 ]      [ 1 ]
Now let us consider the linear system of the form
x′1(t) = a11x1 + a12x2 + ... + a1nxn + b1(t)
(3.3.1)
x′2(t) = a21x1 + a22x2 + ... + a2nxn + b2(t)
(3.3.2)
  ⋮
x′n(t) = an1x1 + an2x2 + ... + annxn + bn(t)
(3.3.3)
The aij and the bi, 1 ≤ i, j ≤ n, are continuous real valued functions on the interval I.
The equations above may be written in a vector equation form as x′ (t) = A(t)x(t) + b(t)
Note:
• A matrix function is continuous on the interval I if and only if all its entries are continuous
on I.
• The matrix function A(t) is called the companion or coefficient matrix of the differential
equation.
Homogeneous Linear Systems:
A homogeneous system of linear equations is a linear system of equations in which there are no
constant terms, i.e., a homogeneous linear system is of the form:
a11x1 + a12x2 + ... + a1nxn = 0
(3.3.4)
a21x1 + a22x2 + ... + a2nxn = 0
(3.3.5)
  ⋮
an1x1 + an2x2 + ... + annxn = 0
(3.3.6)
A homogeneous system may have two types of solutions: trivial solutions and nontrivial solutions.
Since there is no constant term present in a homogeneous system, (x1, x2, ..., xn) = (0, 0, ..., 0)
is obviously a solution to the system and is called the trivial solution (the most obvious solution).
Properties Of A Homogeneous Linear Systems
1. It always has at least one solution that is called a trivial solution where the value of each
variable is 0.
2. If a and b are two solutions of a homogeneous system, then their sum a + b is also a
solution.
3. If a is a solution, then ka is also a solution, where k is a scalar.
4. A zero vector is always a solution of the homogeneous system.
Existence And Uniqueness Of Solutions:
Every linear system of equations has exactly one solution, infinitely many solutions, or no
solution. Therefore:
• A system of linear equations is consistent if it has a solution (perhaps more than one).
• A linear system is inconsistent if it does not have a solution.
• A consistent linear system of equations will have exactly one solution if and only if there is
a leading 1 for each variable in the system.
• If a consistent linear system of equations has a free variable, it has infinitely many solutions.
If a consistent linear system has more variables than leading 1s, then the system will have
infinitely many solutions.
• A consistent linear system with more variables than equations will always have infinitely
many solutions.
Let us consider the linear vector differential equation
x′ = Ax
Suppose that a real eigenvalue λ1 has multiplicity 2 and that ⃗v is the only eigenvector
corresponding to this eigenvalue. Then A⃗v = λ1⃗v and y1(t) = e^(λ1 t)⃗v is a solution.
To find a second linearly independent solution, we consider y2(t) = t e^(λ1 t)⃗v + e^(λ1 t)w⃗,
where w⃗ is a constant vector satisfying (A − λ1I)w⃗ = ⃗v.
Now let us find the general solution of the linear system below:
x′ = Ax,   where   A = [ 3  −2 ]
                       [ 0   2 ]
Solution:
Using the characteristic equation det(A − λI) = 0, where I is the 2 × 2 identity matrix:
det( [ 3  −2 ] − λ [ 1  0 ] ) = 0
     [ 0   2 ]     [ 0  1 ]
det [ 3−λ  −2  ] = 0
    [  0   2−λ ]
(3 − λ)(2 − λ) − (0)(−2) = 0
λ² − 5λ + 6 = 0
λ1 = 2 and λ2 = 3
Hence, the two eigenvalues of the given matrix are λ1 = 2 and λ2 = 3.
Now we want to find the eigenvectors for each eigenvalue.
For λ1 = 2, we have (A − 2I)V⃗ = ⃗0:
[ 3−2  −2  ] [ v1 ] = ⃗0
[  0   2−2 ] [ v2 ]
[ 1  −2 ] [ v1 ] = ⃗0
[ 0   0 ] [ v2 ]
v1 − 2v2 = 0
After solving the matrix equation, v1 = 2v2. Therefore, the eigenvector is of the form
V⃗1 = v2 [ 2 ]
        [ 1 ]
When v2 = 1, the eigenvector for λ1 = 2 is (2, 1)ᵀ.
Also, the eigenvector corresponding to λ2 = 3 is of the form
V⃗2 = v1 [ 1 ]
        [ 0 ]
When v1 = 2, the eigenvector for λ2 = 3 is (2, 0)ᵀ.
The general solution to the differential equation is
x(t) = C1 e^(λ1 t) V⃗1 + C2 e^(λ2 t) V⃗2
x(t) = C1 e^(2t) [ 2 ] + C2 e^(3t) [ 2 ]
                 [ 1 ]             [ 0 ]
Remark
• The existence and uniqueness theorem guarantees that trajectories do not cross.
• The qualitative behavior of solutions can be determined by plotting a few trajectories.
The Phase Portrait
Let us consider the planar system x′ = Ax, which has only one critical point, at (0, 0). We first
consider the case of distinct real eigenvalues.
Real Distinct Eigenvalues
The three cases to consider are
1. λ1 < 0 < λ2 ( The Saddle)
2. λ1 < λ2 < 0 (Nodal Sink)
3. 0 < λ1 < λ2 (Nodal Source)
The Saddle
Example: Solve the system of differential equations
x′(t) = x + y
y′(t) = 4x + y
Solution:
[ x ]′ = [ 1  1 ] [ x ]
[ y ]    [ 4  1 ] [ y ]
It is now of the form x⃗′ = A⃗x, where
A = [ 1  1 ]   and   ⃗x = [ x ]
    [ 4  1 ]             [ y ]
Using the characteristic equation,
det(A − λI) = 0
det( [ 1  1 ] − [ λ  0 ] ) = 0
     [ 4  1 ]   [ 0  λ ]
det [ 1−λ   1  ] = 0
    [  4   1−λ ]
(1 − λ)(1 − λ) − 4(1) = 0
1 − 2λ + λ² − 4 = 0
λ² + λ − 3λ − 3 = 0
λ(λ + 1) − 3(λ + 1) = 0
(λ + 1)(λ − 3) = 0
Therefore the eigenvalues for the matrix are λ1 = 3 and λ2 = −1.
Solving for the corresponding eigenvectors with (A − λI)⃗x = ⃗0:
At λ1 = 3,
[ −2   1 ] [ x ] = 0
[  4  −2 ] [ y ]
−2x + y = 0 ........................ (1)
4x − 2y = 0 ........................ (2)
From (1), y = 2x, so
x⃗1 = [ x  ] = x [ 1 ]
     [ 2x ]     [ 2 ]
At x = 1,
x⃗1 = [ 1 ]
     [ 2 ]
Therefore the first eigenpair is (3, (1, 2)ᵀ).
Also, at λ2 = −1,
[ 2  1 ] [ x ] = 0
[ 4  2 ] [ y ]
2x + y = 0 ............................... (1)
4x + 2y = 0 ............................... (2)
From (1), y = −2x, so
x⃗2 = [  x  ] = x [  1 ]
     [ −2x ]     [ −2 ]
At x = 1,
x⃗2 = [  1 ]
     [ −2 ]
Therefore the second eigenpair is (−1, (1, −2)ᵀ).
Now using the general solution to the differential equation,
x(t) = C1 e^(λ1 t) x⃗1 + C2 e^(λ2 t) x⃗2
x(t) = C1 e^(3t) [ 1 ] + C2 e^(−t) [  1 ]
                 [ 2 ]             [ −2 ]
Since λ2 < 0 < λ1, the solution is a saddle. The saddle is an unstable equilibrium, since λ2 < 0
and λ1 > 0. For a nonzero value of C1, the initial condition will not lie exactly on the stable line
x2 = −2x1, and the trajectory moves away from the origin as t → ∞.
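As a check on the saddle example (a numpy sketch, assuming the eigenpairs derived above), we can verify the eigenpairs and confirm that the general solution satisfies x′ = Ax:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# Eigenpairs found above: (3, (1, 2)) and (-1, (1, -2))
assert np.allclose(A @ np.array([1.0, 2.0]), 3.0 * np.array([1.0, 2.0]))
assert np.allclose(A @ np.array([1.0, -2.0]), -1.0 * np.array([1.0, -2.0]))

# x(t) = C1 e^{3t} (1, 2) + C2 e^{-t} (1, -2); check it satisfies x' = A x
def x(t, C1=1.0, C2=1.0):
    return (C1 * np.exp(3 * t) * np.array([1.0, 2.0])
            + C2 * np.exp(-t) * np.array([1.0, -2.0]))

def x_prime(t, C1=1.0, C2=1.0):
    return (3 * C1 * np.exp(3 * t) * np.array([1.0, 2.0])
            - C2 * np.exp(-t) * np.array([1.0, -2.0]))

for t in (0.0, 0.3, 1.0):
    assert np.allclose(x_prime(t), A @ x(t))
```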
Nodal Sink
In this case, we have two stable lines along both axes, since λ1 < λ2 < 0. As t → ∞ (i.e. as
trajectories approach the origin), the stable line for λ2, with eigenvector [0, 1]ᵀ, becomes
dominant, so trajectories turn towards (0, 0) in a direction tangent to the stable line spanned by
[0, 1]ᵀ, as illustrated in Figure 2. On the other hand, if λ2 < λ1 < 0, trajectories will behave in
a different way.
EXAMPLE: Solve the differential equation
x′ (t) = y
and
Solution
′ x
0
1
=
y
−2 −3
x
y
It is now of the form x′ = A⃗x where
′
′
0
1
x
x
′
,A=
and ⃗x =
x =
−2 −3
y
y
Using the characteristic equation,
det(A − λI) = 0
0
1
λ 0
det(
) =0
−2 −3
0 λ
−λ
1
det(
) =0
−2 −3 − λ
(−λ)(−3 − λ) − 1(−2) = 0
3λ + λ2 + 2 = 0
λ2 + 2λ + λ + 2 = 0
λ(λ + 2) + 1(λ + 2) = 0
(λ + 2) + (λ + 1) = 0
λ1 = −2 or λ2 = −1
Solving for the corresponding eigenvectors with (A − λI)x⃗ = 0:
At λ1 = −2,
[  2    1 ] [x]
[ −2   −1 ] [y] = 0
2x + y = 0 . . . (1)
−2x − y = 0 . . . (2)
From (1), y = −2x, so
x⃗1 = [x, −2x]^T = x[1, −2]^T
At x = 1,
x⃗1 = [1, −2]^T
Therefore the first eigenpair is (−2, [1, −2]^T).
Also, at λ2 = −1,
[  1    1 ] [x]
[ −2   −2 ] [y] = 0
x + y = 0 . . . (1)
−2x − 2y = 0 . . . (2)
From (1), y = −x, so
x⃗2 = [x, −x]^T = x[1, −1]^T
At x = 1,
x⃗2 = [1, −1]^T
Therefore the second eigenpair is (−1, [1, −1]^T).
Now using the general solution to the differential equation,
x(t) = C1 e^{λ1 t} x⃗1 + C2 e^{λ2 t} x⃗2
X(t) = C1 e^{−2t} [1, −2]^T + C2 e^{−t} [1, −1]^T
Since λ1 < λ2 < 0, the solution is a Nodal Sink.
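The eigenvalues of this nodal sink can be verified numerically; the short NumPy sketch below is our own addition and simply recomputes them from the coefficient matrix:

```python
# Numerical cross-check of the nodal-sink example x' = y, y' = -2x - 3y.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals = np.sort(np.linalg.eigvals(A))
print(eigvals)                  # approximately [-2. -1.]

# Both eigenvalues are negative, so every trajectory decays to the origin:
# the solution is a nodal sink.
assert np.all(eigvals < 0)
```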
Nodal Source
In this case, we have two unstable lines along both axes. The phase portrait of the nodal source 0 < λ1 < λ2 is similar to that of the nodal sink, but the directions are reversed. Close to the origin (i.e. as t → −∞), the trajectory corresponding to [0, 1]^T is dominant, so trajectories leave (0, 0) in a direction tangent to the unstable line e^{λ1 t}[0, 1]^T, as illustrated in Figure 3. On the other hand, if 0 < λ2 < λ1, trajectories will behave differently.
EXAMPLE: Solve the differential equations
x′(t) = 2x + y
y′(t) = x + 2y
Solution
In matrix form,
[x′]   [ 2   1 ] [x]
[y′] = [ 1   2 ] [y]
It is now of the form x⃗′ = Ax⃗ where
x⃗′ = [x′, y′]^T,
A =
[ 2   1 ]
[ 1   2 ]
and x⃗ = [x, y]^T.
Using the characteristic equation,
det(A − λI) = 0
det [ 2 − λ     1    ]
    [   1     2 − λ  ] = 0
(2 − λ)(2 − λ) − (1)(1) = 0
4 − 4λ + λ² − 1 = 0
λ² − 3λ − λ + 3 = 0
λ(λ − 3) − 1(λ − 3) = 0
(λ − 3)(λ − 1) = 0
λ1 = 3 or λ2 = 1
Solving for the corresponding eigenvectors with (A − λI)x⃗ = 0:
At λ1 = 3,
[ −1    1 ] [x]
[  1   −1 ] [y] = 0
−x + y = 0 . . . (1)
x − y = 0 . . . (2)
From (2), x = y, so
x⃗1 = [y, y]^T = y[1, 1]^T
At y = 1,
x⃗1 = [1, 1]^T
Therefore the first eigenpair is (3, [1, 1]^T).
Also, at λ2 = 1,
[ 1   1 ] [x]
[ 1   1 ] [y] = 0
x + y = 0 . . . (1)
x + y = 0 . . . (2)
From (1), x = −y, so
x⃗2 = [−y, y]^T = y[−1, 1]^T
At y = 1,
x⃗2 = [−1, 1]^T
Therefore the second eigenpair is (1, [−1, 1]^T).
Now using the general solution to the differential equation,
x(t) = C1 e^{λ1 t} x⃗1 + C2 e^{λ2 t} x⃗2
X(t) = C1 e^{3t} [1, 1]^T + C2 e^{t} [−1, 1]^T
Since 0 < λ2 < λ1, the solution is a nodal source.
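Again, the eigenvalues can be verified numerically; this NumPy sketch is our own addition:

```python
# Numerical cross-check of the nodal-source example x' = 2x + y, y' = x + 2y.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals = np.sort(np.linalg.eigvals(A))
print(eigvals)                  # approximately [1. 3.]

# Both eigenvalues are positive, so trajectories move away from the origin:
# the solution is a nodal source.
assert np.all(eigvals > 0)
```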
STAR
If λ1 = λ2 = λ, then we have y = Cx, so the orbits are straight lines through the origin. The orbit is a stable star for λ < 0 and an unstable star for λ > 0.
EXAMPLE: Consider the system of linear equations
x′ = −2x + y
y′ = −x
In matrix form,
[x′]   [ −2   1 ] [x]
[y′] = [ −1   0 ] [y]
Let
A =
[ −2   1 ]
[ −1   0 ]
Finding the eigenvalues of the matrix A from det(A − λI) = 0:
det [ −2 − λ     1    ]
    [  −1      0 − λ  ] = 0
(−2 − λ)(−λ) + 1 = 0
λ² + 2λ + 1 = 0
(λ + 1)² = 0
λ = −1 with algebraic multiplicity 2, where the algebraic multiplicity is the number of times the eigenvalue occurs as a root of the characteristic equation.
To find the corresponding eigenvector, we consider λ = −1 and let v⃗ = [a1, a2]^T such that (A − λI)v⃗ = 0:
[ −2 + 1     1    ] [a1]   [0]
[  −1      0 + 1  ] [a2] = [0]
⇔
[ −1   1 ] [a1]   [0]
[ −1   1 ] [a2] = [0]
so a1 = a2, and we may take v⃗ = [1, 1]^T.
We now obtain the general solution as
u⃗(t) = C1 e^{−t} [1, 1]^T + C2 e^{−t} [1, 1]^T = e^{−t} (C1 [1, 1]^T + C2 [1, 1]^T)
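The repeated eigenvalue can be confirmed numerically; this NumPy sketch is our own addition, not part of the original derivation:

```python
# Numerical cross-check of the star example x' = -2x + y, y' = -x.
import numpy as np

A = np.array([[-2.0, 1.0],
              [-1.0, 0.0]])

eigvals = np.linalg.eigvals(A)
print(eigvals)   # both close to -1 (algebraic multiplicity 2)
```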
Chapter 4
Application of eigenvalues and eigenvectors
4.1
Renaming simple graphs
Introduction
In this chapter, we will discuss the types and stability of solutions of systems of homogeneous linear differential equations with constant coefficients, as discussed in Chapter 3. This includes graphs that explain the general behaviour of the solutions. When a system of differential equations is solved, we will get either real distinct eigenvalues, repeated real eigenvalues, or complex eigenvalues. In this project we will not consider complex eigenvalues.
Type and stability of solutions
• Saddle
• Nodal sink
• Nodal source
• Star
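The classification above can be summarised in a small helper function. The code below is our own sketch (the function name `classify` is ours), and it assumes two real eigenvalues, matching the scope of this project:

```python
# Label the solution type of a 2x2 linear system from its two real
# eigenvalues, following the list above. Complex eigenvalues are out of
# scope for this project, so they are not handled here.
def classify(l1, l2):
    if l1 == l2:
        return "star"          # repeated eigenvalue
    if l1 * l2 < 0:
        return "saddle"        # opposite signs
    if l1 < 0 and l2 < 0:
        return "nodal sink"    # both negative
    return "nodal source"      # both positive

print(classify(3, -1))    # saddle
print(classify(-2, -1))   # nodal sink
print(classify(3, 1))     # nodal source
print(classify(-1, -1))   # star
```

The four test calls correspond to the four worked examples in this chapter.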
Saddle
Given the system of differential equations
x′(t) = x + y
y′(t) = 4x + y
the eigenpairs are (3, [1, 2]^T) and (−1, [1, −2]^T), and the general solution is
X(t) = C1 e^{3t} [1, 2]^T + C2 e^{−t} [1, −2]^T
This type of solution is called a saddle since the eigenvalues have opposite signs, one being positive (λ = 3) and the other negative (λ = −1). The trajectories of the eigenvectors move away from the origin when the eigenvalue is positive (λ = 3) as t → ∞, and move towards the origin, or converge to the origin, when the eigenvalue is negative (λ = −1) as t → ∞. This type of solution is an unstable solution since one of the trajectories of the eigenvectors moves away from the origin, i.e. towards infinity.
Figure 4.1: Saddle
The nodal sink
Also, in this system of equations
x′(t) = y
and
y′(t) = −2x − 3y
the eigenpairs are (−2, [1, −2]^T) and (−1, [1, −1]^T), and the general solution is given by:
x(t) = C1 e^{−2t} [1, −2]^T + C2 e^{−t} [1, −1]^T
This type of solution is called a nodal sink since the eigenvalues are negative, (λ = −2) and (λ = −1). The phase portrait shows the trajectories of the eigenvectors moving towards the origin, or converging to the origin, as t → ∞. This type of solution is a stable solution because the trajectories of the eigenvectors converge to the origin.
The graph of the nodal sink is shown below.
Figure 4.2: The nodal sink
Nodal source
In this system of equations
x′(t) = 2x + y
y′(t) = x + 2y
the eigenpairs are (3, [1, 1]^T) and (1, [−1, 1]^T), and the general solution is given by:
x(t) = C1 e^{3t} [1, 1]^T + C2 e^{t} [−1, 1]^T
This type of solution is called a nodal source since both eigenvalues are positive, λ = 3 and λ = 1. The phase portrait shows the trajectories of the eigenvectors moving away from the origin as t → ∞. This type of solution is unstable.
Below is the graph of the nodal source
Figure 4.3: The nodal source
Star
In this system of equations
x′ = −2x + y
and
y′ = −x
the eigenpair is (−1, [1, 1]^T). The general solution is given by
x(t) = C1 e^{−t} [1, 1]^T + C2 e^{−t} [1, 1]^T
This type of solution is called a star since we have a repeated eigenvalue, λ = −1. The phase portrait shows the trajectories of the eigenvectors moving towards the origin as t → ∞. This type of solution is a stable solution.
Below is the graph of the star.
Figure 4.4: Star
Chapter 5
Conclusion
Eigenvalues help to identify the trends and solutions of a system of ODEs. Once the eigenvalues for a system are found, they can be used to describe the system's ability to return to steady state if disturbed. The simplest way to predict the behaviour of a system if disturbed is to examine the signs of its eigenvalues: negative eigenvalues will drive the system back to its steady-state value, while positive eigenvalues will drive it away. What happens if there are two eigenvalues present with opposite signs? What happens if the eigenvalues are complex? How will the system respond to a disturbance in those cases?
In many situations, there will be one eigenvalue which has a much higher absolute value than the
other corresponding eigenvalues for that system of differential equations. This is known as the
“dominant eigenvalue”, and it will have the greatest effect on the system when it is disturbed.
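Numerically, the dominant eigenvalue is simply the one with the largest absolute value; the NumPy sketch below (our own illustration, reusing the saddle matrix from Chapter 3) shows how it can be picked out:

```python
# Sketch: pick the dominant eigenvalue (largest absolute value) of a
# system matrix, as described above.
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])     # saddle example from Chapter 3
eigvals = np.linalg.eigvals(A)
dominant = eigvals[np.argmax(np.abs(eigvals))]
print(dominant)                 # 3 dominates -1 in absolute value
```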
However, in the case that the eigenvalues are equal in magnitude and opposite in sign, there is no dominant eigenvalue; in this case the constants from the initial conditions are used to determine the stability. Eigenvalues have also helped greatly in the field of computer science, where the concept of similar matrices has led to ways of computing real-time variables in huge n × n matrices; results are obtained within fractions of a second by the computer, all because of the concept of eigenvalues and their corresponding eigenvectors.
We hope to discover new and deep ways of solving real-world problems in our societies using the concepts of eigenvalues and their corresponding eigenvectors in linear algebra.
Chapter 6
Reference
6.1 Reference
• David C. Lay and Steven R. Lay, Linear Algebra and Its Applications.
• https://www.mathworks.com/help/matlab/math/eigenvalues-and-eigenvectors-of-linear-differentialequations.html
• http://www.math.pitt.edu/~sussmanm/2070/lab5/lab5.html
• https://www.math.ucla.edu/~ronmiech/DE_notes/eigenvectors.pdf
• https://www.math.ucdavis.edu/~daddel/linear_algebra_appl/Applications/DiffEqn/DiffEqn.html