
ADVANCED ENGINEERING MATHEMATICS
5 Mark
1. Pierre François Verhulst
Pierre François Verhulst (28 October 1804, Brussels – 15 February 1849, Brussels) was a mathematician and a doctor in number theory from the University of Ghent in 1825. In 1838 Verhulst published the equation

dN/dt = r N (1 − N/K)

where N(t) represents the number of individuals at time t, r the intrinsic growth rate, and K the carrying capacity, or the maximum number of individuals that the environment can support. In a paper published in 1845 he called the solution to this the logistic function, and the equation is now called the logistic equation. This model was rediscovered in 1920 by Raymond Pearl and Lowell Reed, who promoted its wide and indiscriminate use.
The logistic equation can be integrated exactly, and has the solution

N(t) = K / (1 + C K e^(−rt))

where C = 1/N(0) − 1/K is determined by the initial condition N(0). The solution can also be written as a weighted harmonic mean of the initial condition and the carrying capacity:

1/N(t) = (1 − e^(−rt))/K + e^(−rt)/N(0)
Although the continuous-time logistic equation is often compared to the logistic map because of
similarity of form, it is actually more closely related to the Beverton–Holt model of fisheries
recruitment.
The concept of R/K selection theory derives its name from the competing dynamics of
exponential growth and carrying capacity introduced by the equations above.
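As a check on the formulas above, the logistic equation can be integrated numerically and compared against the closed-form solution. A minimal Python sketch, assuming the illustrative values r = 0.5, K = 100, N(0) = 10:

```python
import math

# Logistic growth dN/dt = r*N*(1 - N/K), integrated with a classic RK4 step
# and compared with the closed-form solution N(t) = K / (1 + C*K*exp(-r*t)),
# where C = 1/N(0) - 1/K.  The parameter values are illustrative only.
r, K, N0 = 0.5, 100.0, 10.0

def f(N):
    return r * N * (1.0 - N / K)

def rk4_step(N, h):
    k1 = f(N)
    k2 = f(N + 0.5 * h * k1)
    k3 = f(N + 0.5 * h * k2)
    k4 = f(N + h * k3)
    return N + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def exact(t):
    C = 1.0 / N0 - 1.0 / K
    return K / (1.0 + C * K * math.exp(-r * t))

h, steps = 0.01, 1000   # integrate up to t = 10
N = N0
for _ in range(steps):
    N = rk4_step(N, h)

print(N, exact(10.0))   # the two values agree closely
```

The numerical trajectory also shows the qualitative behavior described above: growth is nearly exponential while N is far below K and levels off as N approaches the carrying capacity.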
2.Derivatives and differential equations
The derivative of the exponential function is equal to the value of the function. From any point P on the curve (blue), let a tangent line (red) and a vertical line (green) with height h be drawn, forming a right triangle with a base b on the x-axis. Since the slope of the red tangent line (the derivative) at P is equal to the ratio of the triangle's height to the triangle's base (rise over run), and the derivative is equal to the value of the function, h must be equal to the ratio of h to b. Therefore the base b must always be 1.
The importance of the exponential function in mathematics and the sciences stems mainly from properties of its derivative. In particular,

d/dx e^x = e^x

That is, e^x is its own derivative and hence is a simple example of a Pfaffian function. Functions of the form ce^x for a constant c are the only functions with that property (by the Picard–Lindelöf theorem). Other ways of saying the same thing include:

- The slope of the graph at any point is the height of the function at that point.
- The rate of increase of the function at x is equal to the value of the function at x.
- The function solves the differential equation y′ = y.
- exp is a fixed point of the derivative, regarded as a functional.
If a variable's growth or decay rate is proportional to its size—as is the case in unlimited
population growth (see Malthusian catastrophe), continuously compounded interest, or
radioactive decay—then the variable can be written as a constant times an exponential function
of time. Explicitly, for any real constant k, a function f: R → R satisfies f′ = kf if and only if f(x) = ce^(kx) for some constant c.
Furthermore, for any differentiable function f(x), we find, by the chain rule:

d/dx e^(f(x)) = f′(x) e^(f(x))
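Both facts above, that ce^(kx) solves f′ = kf and the chain-rule formula for the derivative of e^(f(x)), can be sanity-checked with central finite differences. A minimal sketch, with f(x) = x² and the constants c, k chosen purely for illustration:

```python
import math

# Check the chain-rule identity d/dx exp(f(x)) = f'(x)*exp(f(x)) with a
# central finite difference, using the illustrative choice f(x) = x**2.
def f(x):  return x * x
def fp(x): return 2.0 * x

x, h = 0.7, 1e-6
numeric = (math.exp(f(x + h)) - math.exp(f(x - h))) / (2 * h)
analytic = fp(x) * math.exp(f(x))
print(abs(numeric - analytic))   # tiny finite-difference error

# Likewise g(x) = c*exp(k*x) satisfies g' = k*g for any constants c, k.
c, k = 3.0, -0.25
gp = (c * math.exp(k * (x + h)) - c * math.exp(k * (x - h))) / (2 * h)
print(abs(gp - k * c * math.exp(k * x)))
```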
3. Integration
Below, the curly symbol ∂ means "boundary of".
Surface–volume integrals
In the following surface–volume integral theorems, V denotes a 3d volume with a corresponding
2d boundary S = ∂V (a closed surface):

∭_V (∇·A) dV = ∯_S A·dS   (divergence theorem)

∭_V (ψ∇²φ + ∇φ·∇ψ) dV = ∯_S ψ(∇φ)·dS   (Green's first identity)

∭_V (ψ∇²φ − φ∇²ψ) dV = ∯_S (ψ∇φ − φ∇ψ)·dS   (Green's second identity)
Curve–surface integrals
In the following curve–surface integral theorems, S denotes a 2d open surface with a
corresponding 1d boundary C = ∂S (a closed curve):

∬_S (∇×A)·dS = ∮_C A·dℓ   (Stokes' theorem)

Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral):

∮_C A·dℓ (clockwise) = −∮_C A·dℓ (counterclockwise)
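The divergence theorem can be illustrated numerically. A minimal sketch using the midpoint rule on the unit cube, for an illustrative field chosen so both sides work out to 3/2:

```python
# Numeric check (midpoint rule) of the divergence theorem on the unit cube
# [0,1]^3 for the illustrative field A = (x^2 y, y^2 z, z^2 x), whose
# divergence is 2xy + 2yz + 2zx.  Both sides equal 3/2.
n = 40
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]

# Volume side: triple integral of div A.
vol = sum(2.0 * (x * y + y * z + z * x) * h**3
          for x in pts for y in pts for z in pts)

# Surface side: A.n vanishes on the x=0, y=0, z=0 faces; on x=1 it equals y,
# on y=1 it equals z, and on z=1 it equals x, so the three fluxes are equal.
face = sum(u * h * h for u in pts for _ in pts)
flux = 3.0 * face

print(vol, flux)   # both ≈ 1.5
```

The midpoint rule is exact here because each integrand is at most linear in every variable, so the two sides match to floating-point precision.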
20 Mark
4. Linear algebra
Solution of linear systems
Linear algebra provides the formal setting for the linear combination of equations used in the Gaussian method. Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

L1:  2x + y − z = 8
L2: −3x − y + 2z = −11
L3: −2x + y + 2z = −3
The Gaussian-elimination algorithm is as follows: eliminate x from all equations below L1, and
then eliminate y from all equations below L2. This will put the system into triangular form. Then,
using back-substitution, each unknown can be solved for.
In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L2 + (3/2)L1 → L2,   L3 + L1 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
2y + z = 5

Now y is eliminated from L3 by adding −4L2 to L3:

L3 + (−4)L2 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
−z = 1

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.
The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = −1

Then, z can be substituted into L2, which can then be solved to obtain

y = 3

Next, z and y can be substituted into L1, which can be solved to obtain

x = 2

The system is solved.
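The elimination steps above can be sketched in code. A minimal sketch using the system consistent with the multipliers (3/2), 1, and −4 quoted in the text (the specific coefficients are reconstructed here, so treat them as illustrative); exact rational arithmetic via `fractions` keeps the triangular form clean:

```python
from fractions import Fraction as F

# Augmented matrix [A | b] of the illustrative system:
#   L1:  2x +  y -  z =  8
#   L2: -3x -  y + 2z = -11
#   L3: -2x +  y + 2z = -3
A = [[F(2),  F(1),  F(-1), F(8)],
     [F(-3), F(-1), F(2),  F(-11)],
     [F(-2), F(1),  F(2),  F(-3)]]

def add_row(dst, src, factor):
    # Replace row dst by (row dst) + factor * (row src).
    A[dst] = [a + factor * b for a, b in zip(A[dst], A[src])]

add_row(1, 0, F(3, 2))   # eliminate x from L2
add_row(2, 0, F(1))      # eliminate x from L3
add_row(2, 1, F(-4))     # eliminate y from L3 -> triangular form

# Back-substitution on the triangular system, in reverse order.
z = A[2][3] / A[2][2]
y = (A[1][3] - A[1][2] * z) / A[1][1]
x = (A[0][3] - A[0][1] * y - A[0][2] * z) / A[0][0]
print(x, y, z)   # 2, 3, -1
```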
We can, in general, write any system of linear equations as a matrix equation:

Ax = b

The solution of this system is characterized as follows: first, we find a particular solution x0 of this equation using Gaussian elimination. Then, we compute the solutions of Ax = 0; that is, we find the nullspace N of A. The solution set of this equation is given by {x0 + n : n ∈ N}. If the number of variables equals the number of equations, then we can characterize when the system has a unique solution: since N is trivial if and only if det A ≠ 0, the equation has a unique solution if and only if det A ≠ 0.[12]
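The determinant criterion can be checked numerically. A minimal NumPy sketch, reusing the same illustrative coefficients as the elimination example:

```python
import numpy as np

# For a square system Ax = b, det A != 0 signals a trivial nullspace and
# hence a unique solution.  The coefficients below are the illustrative
# 3x3 system used above.
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

d = np.linalg.det(A)
print(d)                        # nonzero, so the solution is unique
print(np.linalg.solve(A, b))    # the unique solution
```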
Least-squares best fit line
The least squares method is used to determine the best fit line for a set of data.[13] This line will
minimize the sum of the squares of the residuals.
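A minimal sketch of a least-squares line fit via the normal equations; the data points below are made up for illustration:

```python
# Fit y = m*x + c by least squares: minimise the sum of squared residuals.
# Solving the 2x2 normal equations [sxx sx; sx n][m; c] = [sxy; sy] gives
# the closed-form slope and intercept.  The data are illustrative.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.9]   # roughly y = 2x + 1 with noise

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - m * sx) / n
print(m, c)   # close to 2 and 1
```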
Fourier series expansion
Fourier series are a representation of a function f: [−π, π] → R as a trigonometric series:

f(x) = a0/2 + Σ_{n=1}^∞ (a_n cos(nx) + b_n sin(nx))

This series expansion is extremely useful in solving partial differential equations. In this article, we will not be concerned with convergence issues; it is worth noting that all continuous functions have a converging Fourier series expansion, and nice enough discontinuous functions have a Fourier series that converges to the function value at most points.
The space of all functions that can be represented by a Fourier series forms a vector space (technically speaking, we call functions that have the same Fourier series expansion the "same" function, since two different discontinuous functions might have the same Fourier series). Moreover, this space is also an inner product space with the inner product

⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x) g(x) dx

The functions g_n(x) = sin(nx) for n > 0 and h_n(x) = cos(nx) for n ≥ 0 are an orthonormal basis for the space of Fourier-expandable functions. We can thus use the tools of linear algebra to find the expansion of any function in this space in terms of these basis functions. For instance, to find the coefficient a_k, we take the inner product with h_k:

⟨f, h_k⟩ = (1/π) ∫_{−π}^{π} f(x) cos(kx) dx

and by orthonormality, ⟨f, h_k⟩ = a_k; that is,

a_k = (1/π) ∫_{−π}^{π} f(x) cos(kx) dx
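The coefficient formula can be exercised numerically. A minimal sketch, assuming the illustrative test function f(x) = x², whose cosine coefficients are known to be a_k = 4(−1)^k/k² for k ≥ 1:

```python
import math

# Compute a Fourier coefficient a_k = (1/pi) * integral of f(x)*cos(kx)
# over [-pi, pi] by the midpoint rule, and compare with the known value
# for the illustrative choice f(x) = x**2.
def a_coeff(f, k, n=20000):
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        x = -math.pi + (i + 0.5) * h   # midpoint of each subinterval
        total += f(x) * math.cos(k * x) * h
    return total / math.pi

a1 = a_coeff(lambda x: x * x, 1)
print(a1)   # close to 4*(-1)^1/1^2 = -4
```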
Quantum mechanics
Quantum mechanics is highly inspired by notions in linear algebra. In quantum mechanics, the
physical state of a particle is represented by a vector, and observables (such as momentum,
energy, and angular momentum) are represented by linear operators on the underlying vector
space. More concretely, the wave function of a particle describes its physical state and lies in the
vector space L² (the functions φ: R³ → C such that ∫ |φ(x)|² d³x is finite), and it evolves according to the Schrödinger equation. Energy is represented as the operator H = −(ħ²/2m)∇² + V, where V is the potential energy. H is also known as the Hamiltonian operator. The eigenvalues of H represent the possible energies that can be
observed. Given a particle in some state φ, we can expand φ into a linear combination of
eigenstates of H. The component of H in each eigenstate determines the probability of measuring
the corresponding eigenvalue, and the measurement forces the particle to assume that eigenstate
(wave function collapse).
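The spectral picture above can be illustrated by discretizing a Hamiltonian on a grid and computing its eigenvalues. A minimal NumPy sketch for the harmonic potential V = x²/2 in units where ħ = m = 1 (the grid size and box length are illustrative choices); the exact energies are n + 1/2:

```python
import numpy as np

# Discretise H = -(1/2) d^2/dx^2 + V(x) with central differences on a
# uniform grid (hard-wall box; units with hbar = m = 1).  For the harmonic
# potential V = x^2/2 the lowest eigenvalues should approach 0.5, 1.5, 2.5.
n, L = 1000, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Tridiagonal second-derivative operator.
T = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
H = -0.5 * T + np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(H)[:3]   # three lowest energy eigenvalues
print(E)                        # ≈ [0.5, 1.5, 2.5]
```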
5. Differential equation
Ordinary differential equations are further classified according to the order of the highest
derivative of the dependent variable with respect to the independent variable appearing in
the equation. The most important cases for applications are first-order and second-order
differential equations. For example, Bessel's differential equation

x² y″ + x y′ + (x² − α²) y = 0

(in which y is the dependent variable) is a second-order differential equation. In the classical literature, a distinction is also made between differential equations explicitly solved with respect to the highest derivative and differential equations in an implicit form.

A partial differential equation (PDE) is a differential equation in which the unknown
function is a function of multiple independent variables and the equation involves its
partial derivatives. The order is defined similarly to the case of ordinary differential
equations, but further classification into elliptic, hyperbolic, and parabolic equations,
especially for second-order linear equations, is of utmost importance. Some partial
differential equations do not fall into any of these categories over the whole domain of
the independent variables and they are said to be of mixed type.
Linear and non-linear
Both ordinary and partial differential equations are broadly classified as linear and nonlinear.

A differential equation is linear if the unknown function and its derivatives appear to the power
1 (products are not allowed) and nonlinear otherwise. The characteristic property of linear
equations is that their solutions form an affine subspace of an appropriate function space, which
results in a much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass for which the space of solutions is a linear subspace, i.e. the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of
the unknown function and its derivatives in a linear differential equation are allowed to be
(known) functions of the independent variable or variables; if these coefficients are constants
then one speaks of a constant coefficient linear differential equation.

There are very few methods of solving nonlinear differential equations exactly; those that are
known typically depend on the equation having particular symmetries. Nonlinear differential
equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos.
Examples
In the first group of examples, let u be an unknown function of x, and let c and ω be known constants.

Inhomogeneous first-order linear constant coefficient ordinary differential equation:

du/dx = cu + x²

Homogeneous second-order linear ordinary differential equation:

d²u/dx² − x (du/dx) + u = 0

Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:

d²u/dx² + ω²u = 0

Inhomogeneous first-order nonlinear ordinary differential equation:

du/dx = u² + 4

Second-order nonlinear ordinary differential equation describing the motion of a pendulum of length L:

L (d²u/dx²) + g sin u = 0

In the next group of examples, the unknown function u depends on two variables x and t or x and y.

Homogeneous first-order linear partial differential equation:

∂u/∂t + t (∂u/∂x) = 0

Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:

∂²u/∂x² + ∂²u/∂y² = 0

Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:

∂u/∂t = 6u (∂u/∂x) − ∂³u/∂x³
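One of the examples above, the harmonic oscillator, is easy to integrate numerically and compare against its exact solution u(t) = cos(ωt) for u(0) = 1, u′(0) = 0. A minimal RK4 sketch, with ω = 2 chosen for illustration:

```python
import math

# Integrate u'' + w^2 u = 0 as the first-order system u' = v, v' = -w^2 u
# with classic RK4, and compare with the exact solution u(t) = cos(w t).
w = 2.0

def step(u, v, h):
    def f(u, v):
        return v, -w * w * u
    k1u, k1v = f(u, v)
    k2u, k2v = f(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = f(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = f(u + h * k3u, v + h * k3v)
    return (u + h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

u, v, h = 1.0, 0.0, 0.001
for _ in range(5000):            # integrate up to t = 5
    u, v = step(u, v, h)
print(abs(u - math.cos(w * 5.0)))   # small discretisation error
```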
6. Euler–Lagrange equation
Statement
The Euler–Lagrange equation is an equation satisfied by a function q of a real argument t, which is a stationary point of the functional

J(q) = ∫_a^b L(t, q(t), q′(t)) dt

where:

q is the function to be found:

q: [a, b] → X, t ↦ x = q(t),

such that q is differentiable, q(a) = x_a, and q(b) = x_b;

q′ is the derivative of q:

q′: [a, b] → TX, t ↦ v = q′(t),

TX being the tangent bundle of X (the space of possible values of derivatives of functions with values in X);

L is a real-valued function with continuous first partial derivatives:

L: [a, b] × TX → R.

The Euler–Lagrange equation, then, is given by

L_x(t, q(t), q′(t)) − (d/dt) L_v(t, q(t), q′(t)) = 0,

where L_x and L_v denote the partial derivatives of L with respect to the second and third arguments, respectively.
If the dimension of the space X is greater than 1, this is a system of differential equations, one for each component:

L_{x_i}(t, q(t), q′(t)) − (d/dt) L_{v_i}(t, q(t), q′(t)) = 0,   i = 1, ..., n.
Examples
A standard example is finding the real-valued function f on the interval [a, b], such that f(a) = c and f(b) = d, the length of whose graph is as short as possible. The length of the graph of f is

A(f) = ∫_a^b √(1 + f′(x)²) dx,

the integrand function being L(x, y, y′) = √(1 + y′²) evaluated at (x, y, y′) = (x, f(x), f′(x)).
The partial derivatives of L are

∂L/∂y′ = y′ / √(1 + y′²)   and   ∂L/∂y = 0.

By substituting these into the Euler–Lagrange equation, we obtain

(d/dx) [ f′(x) / √(1 + f′(x)²) ] = 0,

that is, the function must have constant first derivative, and thus its graph is a straight line.
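The shortest-path conclusion can be illustrated numerically: the straight line from (0, 0) to (1, 1) has length √2, and any endpoint-preserving perturbation is longer. A minimal sketch using sinusoidal perturbations f_ε(x) = x + ε sin(πx) (the perturbation family is an illustrative choice):

```python
import math

# Graph length A(f) = integral of sqrt(1 + f'(x)^2) over [0, 1], computed
# by the midpoint rule from the derivative f'.
def length(fp, n=20000):
    h = 1.0 / n
    return sum(math.sqrt(1.0 + fp((i + 0.5) * h) ** 2) * h for i in range(n))

# Straight line f(x) = x: f' = 1, length sqrt(2).
straight = length(lambda x: 1.0)

# Perturbed curves f(x) = x + eps*sin(pi*x): same endpoints, f' = 1 + eps*pi*cos(pi*x).
for eps in (0.1, 0.3):
    bent = length(lambda x: 1.0 + eps * math.pi * math.cos(math.pi * x))
    print(eps, bent - straight)   # positive: every perturbation is longer
```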
Variations for several functions, several variables, and higher derivatives
Single function of single variable with higher derivatives
The stationary values of the functional

I[f] = ∫_{x0}^{x1} F(x, f, f′, ..., f^(k)) dx

can be obtained from the Euler–Lagrange equation[3]

∂F/∂f − (d/dx)(∂F/∂f′) + (d²/dx²)(∂F/∂f″) − ⋯ + (−1)^k (d^k/dx^k)(∂F/∂f^(k)) = 0

under fixed boundary conditions for the function itself as well as for the first k − 1 derivatives (i.e. for all f^(i), i ∈ {0, ..., k − 1}). The endpoint values of the highest derivative f^(k) remain flexible.
Several functions of one variable
If the problem involves finding several functions (f1, f2, ..., fm) of a single independent variable (x) that define an extremum of the functional

I[f1, ..., fm] = ∫_{x0}^{x1} F(x, f1, ..., fm, f1′, ..., fm′) dx

then the corresponding Euler–Lagrange equations are[4]

∂F/∂f_i − (d/dx)(∂F/∂f_i′) = 0,   i = 1, ..., m.
Single function of several variables
A multi-dimensional generalization comes from considering a function on n variables. If Ω is some surface, then

I[f] = ∫_Ω L(x1, ..., xn, f, f_{x1}, ..., f_{xn}) dx

is extremized only if f satisfies the partial differential equation

∂L/∂f − Σ_{j=1}^n (∂/∂x_j)(∂L/∂f_{x_j}) = 0.

When n = 2 and L is the energy functional, this leads to the soap-film minimal surface problem.
Several functions of several variables
If there are several unknown functions f1, ..., fm to be determined and several variables x1, ..., xn such that

I[f1, ..., fm] = ∫_Ω L(x1, ..., xn, f1, ..., fm, f_{1,x1}, ..., f_{m,xn}) dx

the system of Euler–Lagrange equations is[3]

∂L/∂f_i − Σ_{j=1}^n (∂/∂x_j)(∂L/∂f_{i,x_j}) = 0,   i = 1, ..., m.
Single function of two variables with higher derivatives
If there is a single unknown function f to be determined that is dependent on two variables x1 and x2, and if the functional depends on higher derivatives of f up to n-th order such that

I[f] = ∫_Ω L(x1, x2, f, f_{x1}, f_{x2}, f_{x1x1}, f_{x1x2}, f_{x2x2}, ..., f_{x2⋯x2}) dx

then the Euler–Lagrange equation is[3]

∂L/∂f − (∂/∂x1)(∂L/∂f_{x1}) − (∂/∂x2)(∂L/∂f_{x2}) + (∂²/∂x1²)(∂L/∂f_{x1x1}) + (∂²/∂x1∂x2)(∂L/∂f_{x1x2}) + (∂²/∂x2²)(∂L/∂f_{x2x2}) − ⋯ + (−1)^n (∂^n/∂x2^n)(∂L/∂f_{x2⋯x2}) = 0.
7. Partial differential equation
Classification
Some linear, second-order partial differential equations can be classified as parabolic, hyperbolic
or elliptic. Others such as the Euler–Tricomi equation have different types in different regions.
The classification provides a guide to appropriate initial and boundary conditions, and to
smoothness of the solutions.
Equations of first order
In mathematics, a first-order partial differential equation is a partial differential equation that involves only first derivatives of the unknown function of n variables. The equation takes the form

F(x1, ..., xn, u, ∂u/∂x1, ..., ∂u/∂xn) = 0.
Equations of second order
Assuming u_xy = u_yx, the general second-order PDE in two independent variables has the form

A u_xx + 2B u_xy + C u_yy + ⋯ (lower-order terms) = 0,

where the coefficients A, B, C, etc. may depend upon x and y. If A² + B² + C² > 0 over a region of the xy-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:

A x² + 2B xy + C y² + ⋯ = 0.
More precisely, replacing ∂/∂x by X, and likewise for the other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the top degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.
Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B² − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B² − AC, due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)² − 4AC = 4(B² − AC), with the factor of 4 dropped for simplicity.
1. B² − AC < 0 (elliptic): solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x < 0.
2. B² − AC = 0 (parabolic): equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x = 0.
3. B² − AC > 0 (hyperbolic): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x > 0.
If there are n independent variables x1, x2, ..., xn, a general linear partial differential equation of second order has the form

Lu = Σ_{i=1}^n Σ_{j=1}^n a_{ij} ∂²u/∂x_i∂x_j + (lower-order terms) = 0.
The classification depends upon the signature of the eigenvalues of the coefficient matrix.
1. Elliptic: The eigenvalues are all positive or all negative.
2. Parabolic: The eigenvalues are all positive or all negative, save one that is zero.
3. Hyperbolic: There is only one negative eigenvalue and all the rest are positive, or there is only
one positive eigenvalue and all the rest are negative.
4. Ultrahyperbolic: There is more than one positive eigenvalue and more than one negative
eigenvalue, and there are no zero eigenvalues. There is only limited theory for ultrahyperbolic
equations (Courant and Hilbert, 1962).
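The eigenvalue classification above translates directly into code. A minimal NumPy sketch; the sample coefficient matrices correspond to the Laplace, wave, and heat operators:

```python
import numpy as np

# Classify a constant-coefficient second-order operator sum_ij a_ij u_{x_i x_j}
# by the signs of the eigenvalues of the symmetric coefficient matrix (a_ij).
def classify(A):
    ev = np.linalg.eigvalsh(A)
    pos = int(np.sum(ev > 1e-12))
    neg = int(np.sum(ev < -1e-12))
    zero = len(ev) - pos - neg
    if zero == 0 and (pos == 0 or neg == 0):
        return "elliptic"          # all eigenvalues one sign
    if zero == 1 and (pos == 0 or neg == 0):
        return "parabolic"         # one zero, rest one sign
    if zero == 0 and min(pos, neg) == 1:
        return "hyperbolic"        # exactly one eigenvalue of opposite sign
    if zero == 0:
        return "ultrahyperbolic"   # >1 positive and >1 negative, none zero
    return "degenerate/other"

print(classify(np.eye(2)))                 # Laplace operator: elliptic
print(classify(np.diag([1.0, -1.0])))      # wave operator: hyperbolic
print(classify(np.diag([1.0, 1.0, 0.0])))  # heat operator: parabolic
```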
Systems of first-order equations and characteristic surfaces
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices A_ν are m by m matrices for ν = 1, 2, ..., n. The partial differential equation takes the form

Lu = Σ_{ν=1}^n A_ν ∂u/∂x_ν + B = 0,

where the coefficient matrices A_ν and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form

φ(x1, x2, ..., xn) = 0,

where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes:

Q(∂φ/∂x1, ..., ∂φ/∂xn) = det[ Σ_{ν=1}^n A_ν ∂φ/∂x_ν ] = 0.
The geometric interpretation of this condition is as follows: if data for u are prescribed on the
surface S, then it may be possible to determine the normal derivative of u on S from the
differential equation. If the data on S and the differential equation determine the normal
derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do
not determine the normal derivative of u on S, then the surface is characteristic, and the
differential equation restricts the data on S: the differential equation is internal to S.
1. A first-order system Lu=0 is elliptic if no surface is characteristic for L: the values of u on S and
the differential equation always determine the normal derivative of u on S.
2. A first-order system is hyperbolic at a point if there is a space-like surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation

Q(λξ + η) = 0

has m real roots λ1, λ2, ..., λm. The system is strictly hyperbolic if these roots are always distinct.
The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ)=0
defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this
cone has m sheets, and the axis ζ = λ ξ runs inside these sheets: it does not intersect any of them.
But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the
normal cone has no real sheets.
Equations of mixed type
If a PDE has coefficients that are not constant, it is possible that it will not belong to any of these
categories but rather be of mixed type. A simple but important example is the Euler–Tricomi equation

u_xx = x u_yy,
which is called elliptic-hyperbolic because it is elliptic in the region x < 0, hyperbolic in the
region x > 0, and degenerate parabolic on the line x = 0.
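Writing the Euler–Tricomi equation as u_xx − x u_yy = 0 gives coefficients A = 1, B = 0, C = −x, so the discriminant B² − AC equals x and the type changes sign with x. A minimal sketch reproducing the three regions:

```python
# Type of the Euler-Tricomi equation u_xx - x*u_yy = 0 at abscissa x:
# coefficients A = 1, B = 0, C = -x give discriminant B^2 - A*C = x.
def tricomi_type(x):
    disc = 0.0 ** 2 - 1.0 * (-x)   # B^2 - A*C = x
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

print(tricomi_type(-1.0), tricomi_type(0.0), tricomi_type(1.0))
```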
Infinite-order PDEs in quantum mechanics
Weyl quantization in phase space leads to quantum Hamilton's equations for trajectories of
quantum particles. Those equations are infinite-order PDEs. However, in the semiclassical expansion, one has a finite system of ODEs at any fixed order of ħ. The equation of evolution of the Wigner function is also an infinite-order PDE. The quantum trajectories are quantum characteristics, with the use of which one can calculate the evolution of the Wigner function.