# Chapters 17-21

Chapter 17 Objectives
• Recognizing that Newton-Cotes integration
formulas are based on the strategy of replacing a
complicated function or tabulated data with a
polynomial that is easy to integrate.
• Knowing how to implement the following single
application Newton-Cotes formulas:
– Trapezoidal rule
– Simpson’s 1/3 rule
– Simpson’s 3/8 rule
• Knowing how to implement the following composite
Newton-Cotes formulas:
– Trapezoidal rule
– Simpson’s 3/8 rule
Objectives (cont)
• Recognizing that even-segment-odd-point
formulas like Simpson’s 1/3 rule achieve
higher than expected accuracy.
• Knowing how to use the trapezoidal rule to
integrate unequally spaced data.
• Understanding the difference between open
and closed integration formulas.
Integration
• Integration:

  I = \int_a^b f(x)\,dx

is the total value, or summation, of f(x) dx over the range from a to b.
Newton-Cotes Formulas
• The Newton-Cotes formulas are the most
common numerical integration schemes.
• Generally, they are based on replacing a
complicated function or tabulated data with a
polynomial that is easy to integrate:
  I = \int_a^b f(x)\,dx \cong \int_a^b f_n(x)\,dx

where f_n(x) is an nth-order interpolating polynomial.
Newton-Cotes Examples
• The integrating function
can be polynomials for
any order - for example,
(a) straight lines or (b)
parabolas.
• The integral can be
approximated in one step
or in a series of steps to
improve accuracy.
The Trapezoidal Rule
• The trapezoidal rule is the first of the Newton-Cotes closed integration formulas; it uses a straight-line approximation for the function:

  I = \int_a^b f_n(x)\,dx

  I = \int_a^b \left[ f(a) + \frac{f(b) - f(a)}{b - a}(x - a) \right] dx

  I = (b - a)\,\frac{f(a) + f(b)}{2}
Error of the Trapezoidal Rule
• An estimate for the local
truncation error of a single
application of the trapezoidal rule
is:
  E_t = -\frac{1}{12} f''(\xi)\,(b - a)^3

where ξ is somewhere between a and b.
• This formula indicates that the
error is dependent upon the
curvature of the actual function
as well as the distance between
the points.
• Error can thus be reduced by
breaking the curve into parts.
Composite Trapezoidal Rule
• Assuming n+1 data points are evenly spaced, there will be n intervals over which to integrate.
• The total integral can be calculated by integrating each subinterval and adding the results:

  I = \int_{x_0}^{x_n} f_n(x)\,dx = \int_{x_0}^{x_1} f_n(x)\,dx + \int_{x_1}^{x_2} f_n(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f_n(x)\,dx

  I = (x_1 - x_0)\frac{f(x_0) + f(x_1)}{2} + (x_2 - x_1)\frac{f(x_1) + f(x_2)}{2} + \cdots + (x_n - x_{n-1})\frac{f(x_{n-1}) + f(x_n)}{2}

which, for a uniform spacing h, simplifies to:

  I = \frac{h}{2}\left[ f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]
MATLAB Program
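The MATLAB listing for this slide is not reproduced in this extract. As a rough illustration of the same algorithm, here is a minimal Python sketch of the composite trapezoidal rule (the function name `trapezoid` is my own):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: I = h/2 * [f(x0) + 2*sum(interior) + f(xn)]."""
    h = (b - a) / n              # width of each of the n equal segments
    s = f(a) + f(b)              # the two endpoints are weighted once
    for i in range(1, n):        # interior points are weighted twice
        s += 2 * f(a + i * h)
    return h / 2 * s

# Exact for straight lines, approximate otherwise:
approx = trapezoid(lambda x: x**2, 0.0, 1.0, 100)   # exact value is 1/3
```

Increasing n shrinks h and, per the error formula above, reduces the truncation error quadratically.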
Simpson’s Rules
• One drawback of the trapezoidal rule is that the error is
related to the second derivative of the function.
• More complicated approximation formulas can improve the
accuracy for curves - these include using (a) 2nd and (b) 3rd
order polynomials.
• The formulas that result from taking the integrals under
these polynomials are called Simpson’s rules.
Simpson’s 1/3 Rule
• Simpson’s 1/3 rule corresponds to using
second-order polynomials. Using the
Lagrange form for a quadratic fit of three
points:
fn x 
x  x1  x  x2  f x  x  x0  x  x2  f x  x  x0  x  x1  f x
 
 
 
x0  x1  x0  x2  0 x1  x0  x1  x2  1 x2  x0  x2  x1  2
Integration over the three points simplifies to:
I   f x dx
x2
x0
I
n
h
f x0   4 f x1   f x2 

3
Error of Simpson’s 1/3 Rule
• An estimate for the local truncation error of a single
application of Simpson’s 1/3 rule is:
  E_t = -\frac{1}{2880} f^{(4)}(\xi)\,(b - a)^5

where again ξ is somewhere between a and b.
• This formula indicates that the error is dependent upon the fourth derivative of the actual function as well as the distance between the points.
• Note that the error is dependent on the fifth power of the
step size (rather than the third for the trapezoidal rule).
• Error can thus be reduced by breaking the curve into parts.
Composite Simpson’s 1/3 Rule
• Simpson’s 1/3 rule can be used
on a set of subintervals in much
the same way the trapezoidal rule
was, except there must be an odd
number of points.
• Because of the heavy weighting of
the internal points, the formula is a
little more complicated than for the
trapezoidal rule:
  I = \int_{x_0}^{x_n} f_n(x)\,dx = \int_{x_0}^{x_2} f_n(x)\,dx + \int_{x_2}^{x_4} f_n(x)\,dx + \cdots + \int_{x_{n-2}}^{x_n} f_n(x)\,dx

  I = \frac{h}{3}\left[ f(x_0) + 4 f(x_1) + f(x_2) \right] + \frac{h}{3}\left[ f(x_2) + 4 f(x_3) + f(x_4) \right] + \cdots + \frac{h}{3}\left[ f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right]

  I = \frac{h}{3}\left[ f(x_0) + 4 \sum_{i=1,\ i\ \mathrm{odd}}^{n-1} f(x_i) + 2 \sum_{j=2,\ j\ \mathrm{even}}^{n-2} f(x_j) + f(x_n) \right]
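The summation form of the composite rule can be sketched in Python (the function name `simpson13` is my own):

```python
def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n (number of segments) must be even."""
    if n % 2 != 0:
        raise ValueError("number of segments must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # odd-index interior points get weight 4, even-index ones weight 2
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * s
```

The even-segment requirement is exactly the "odd number of points" condition on the slide above.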
Simpson’s 3/8 Rule
• Simpson’s 3/8 rule corresponds
to using third-order polynomials
to fit four points. Integration over
the four points simplifies to:
  I = \int_{x_0}^{x_3} f_n(x)\,dx

  I = \frac{3h}{8}\left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right]
• Simpson’s 3/8 rule is generally
used in concert with Simpson’s
1/3 rule when the number of
segments is odd.
Higher-Order Formulas
• Higher-order Newton-Cotes formulas may also be
used - in general, the higher the order of the
polynomial used, the higher the derivative of the
function in the error estimate and the higher the
power of the step size.
• As in Simpson's 1/3 and 3/8 rules, the even-segment-odd-point formulas have truncation errors that are of the same order as formulas adding one more point. For this reason, the even-segment-odd-point formulas are usually the methods of preference.
Integration with Unequal Segments
• The previous formulas were simplified by assuming equispaced data points, though data are not always equally spaced.
• The trapezoidal rule may be used with data containing unequal segments:

  I = \int_{x_0}^{x_n} f_n(x)\,dx = \int_{x_0}^{x_1} f_n(x)\,dx + \int_{x_1}^{x_2} f_n(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f_n(x)\,dx

  I = (x_1 - x_0)\frac{f(x_0) + f(x_1)}{2} + (x_2 - x_1)\frac{f(x_1) + f(x_2)}{2} + \cdots + (x_n - x_{n-1})\frac{f(x_{n-1}) + f(x_n)}{2}
Integration Code for Unequal
Segments
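The code for this slide is not reproduced in this extract. A Python sketch of the same idea, applying the trapezoidal rule segment by segment to tabulated data (the name `trapuneq` is my own; MATLAB's trapz does the equivalent):

```python
def trapuneq(x, y):
    """Trapezoidal rule applied segment-by-segment to unequally spaced data."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    total = 0.0
    for k in range(len(x) - 1):
        # each segment contributes its width times the average of its endpoints
        total += (x[k + 1] - x[k]) * (y[k] + y[k + 1]) / 2
    return total
```

Because each segment is handled independently, no assumption of uniform spacing is needed.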
MATLAB Functions
• MATLAB has built-in functions to evaluate integrals
based on the trapezoidal rule
• z = trapz(y)
z = trapz(x, y)
produces the integral of y with respect to x. If x is
omitted, the program assumes h=1.
• z = cumtrapz(y)
z = cumtrapz(x, y)
produces the cumulative integral of y with respect
to x. If x is omitted, the program assumes h=1.
Multiple Integrals
• Multiple integrals can be
determined numerically by first
integrating in one dimension,
then a second, and so on for all
dimensions of the problem.
Chapter 18 Objectives
• Understanding how Richardson extrapolation
provides a means to create a more accurate
integral estimate by combining two less accurate
estimates.
• Understanding how Gauss quadrature provides
superior integral estimates by picking optimal
abscissas at which to evaluate the function.
• Knowing how to use MATLAB’s built-in functions for integration.
Richardson Extrapolation
• Richardson extrapolation methods use two estimates of an integral to compute a third, more accurate approximation.
• If two O(h^2) estimates I(h_1) and I(h_2) are calculated for an integral using step sizes of h_1 and h_2, respectively, an improved O(h^4) estimate may be formed using:

  I = I(h_2) + \frac{1}{(h_1 / h_2)^2 - 1}\left[ I(h_2) - I(h_1) \right]

• For the special case where the interval is halved (h_2 = h_1/2), this becomes:

  I = \frac{4}{3} I(h_2) - \frac{1}{3} I(h_1)
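As an illustrative sketch (Python, function names my own), the halved-interval formula applied to two trapezoidal estimates:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n segments (an O(h^2) estimate)."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def richardson(f, a, b, n):
    """Combine trapezoid estimates at h and h/2 into an O(h^4) estimate."""
    i_h1 = trapezoid(f, a, b, n)        # step size h1
    i_h2 = trapezoid(f, a, b, 2 * n)    # step size h2 = h1/2
    return 4 / 3 * i_h2 - 1 / 3 * i_h1  # I = (4/3) I(h2) - (1/3) I(h1)
```

The combined estimate is markedly better than either trapezoid value on its own.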
Richardson Extrapolation (cont)
• For the cases where there are two O(h^4) estimates and the interval is halved (h_m = h_l/2), an improved O(h^6) estimate may be formed using:

  I = \frac{16}{15} I_m - \frac{1}{15} I_l

• For the cases where there are two O(h^6) estimates and the interval is halved (h_m = h_l/2), an improved O(h^8) estimate may be formed using:

  I = \frac{64}{63} I_m - \frac{1}{63} I_l
The Romberg Integration Algorithm
• Note that the weighting factors for the Richardson
extrapolation add up to 1 and that as accuracy
increases, the approximation using the smaller step
size is given greater weight.
• In general,

  I_{j,k} = \frac{4^{k-1} I_{j+1,k-1} - I_{j,k-1}}{4^{k-1} - 1}

where I_{j+1,k-1} and I_{j,k-1} are the more and less accurate integrals, respectively, and I_{j,k} is the new approximation. k is the level of integration and j is used to determine which approximation is more accurate.
Romberg Algorithm Iterations
• The chart below shows the process by which
lower level integrations are combined to
produce more accurate estimates:
MATLAB Code for Romberg
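The MATLAB listing is not reproduced in this extract. A compact Python sketch of Romberg integration built on the recurrence above (function names my own; the list index k is zero-based, so the multiplier 4**k plays the role of the 4^{k-1} in the one-based formula):

```python
def trap(f, a, b, n):
    """Composite trapezoidal rule with n segments."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def romberg(f, a, b, maxit=10, tol=1e-10):
    """Romberg integration: successively halve h, then extrapolate."""
    table = [[trap(f, a, b, 1)]]
    n = 1
    for j in range(1, maxit + 1):
        n *= 2
        row = [trap(f, a, b, n)]           # new, more accurate trapezoid value
        for k in range(1, j + 1):
            c = 4 ** k
            # combine the more and less accurate lower-level estimates
            row.append((c * row[k - 1] - table[j - 1][k - 1]) / (c - 1))
        table.append(row)
        if abs(row[j] - table[j - 1][j - 1]) < tol:   # diagonal has converged
            break
    return table[-1][-1]
```

Note how the weighting gives the smaller-step estimate greater weight as accuracy increases, exactly as the slide observes.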
Gauss Quadrature
• Gauss quadrature is a class of techniques for evaluating the integral using a straight line that joins any two points on a curve rather than simply the endpoints.
• The key is to choose the line that balances the positive and negative errors.
Gauss-Legendre Formulas
• The Gauss-Legendre formulas optimize estimates to integrals for functions over intervals from -1 to 1.
• Integrals over other intervals require a change in variables to set the limits from -1 to 1.
• The integral estimates are of the form:

  I \cong c_0 f(x_0) + c_1 f(x_1) + \cdots + c_{n-1} f(x_{n-1})

where the c_i and x_i are calculated to ensure that the method exactly integrates polynomials up to (2n-1)th order over the interval from -1 to 1.
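For example, the two-point Gauss-Legendre formula uses x = ±1/√3 with c_0 = c_1 = 1 and exactly integrates polynomials up to third order. A Python sketch including the change of variable for a general interval [a, b] (the name `gauss2` is my own):

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b]."""
    t = 1 / math.sqrt(3)                   # Gauss points on [-1, 1] are +/- t
    mid, half = (a + b) / 2, (b - a) / 2   # change of variable x = mid + half*t
    return half * (f(mid - half * t) + f(mid + half * t))
```

With only two function evaluations this matches Simpson's 1/3 rule's degree of exactness, which is the sense in which the abscissas are "optimal."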
Adaptive Quadrature
• Methods such as Simpson’s 1/3 rule have a disadvantage in that they use equally spaced points - if a function has regions of abrupt changes, small steps must be used over the entire domain to achieve a certain accuracy.
• Adaptive quadrature methods automatically adjust the step size so that small steps are taken in regions of sharp variations and larger steps are taken where the function changes gradually.
• MATLAB has two built-in functions for implementing adaptive quadrature: quad, which is efficient for low accuracies or nonsmooth functions, and quadl, which is efficient for high accuracies and smooth functions.
• q = quad(fun, a, b, tol, trace, p1, p2, …)
– fun: function to be integrated
– a, b: integration bounds
– tol: desired absolute tolerance (default: 10^-6)
– trace: flag to display details or not
– p1, p2, …: extra parameters for fun
– quadl has the same arguments
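The adaptive strategy can be sketched in Python as recursive step halving with Simpson's 1/3 rule - an illustration of the general idea, not MATLAB's actual implementation (the name `adaptquad` is my own):

```python
def adaptquad(f, a, b, tol=1e-8):
    """Adaptive quadrature: halve the interval wherever Simpson's rule disagrees."""
    def simpson(lo, hi):
        m = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(m) + f(hi)), m

    whole, m = simpson(a, b)
    left, _ = simpson(a, m)
    right, _ = simpson(m, b)
    if abs(left + right - whole) < 15 * tol:     # error estimate small enough
        return left + right + (left + right - whole) / 15
    # otherwise refine each half with a tighter tolerance
    return adaptquad(f, a, m, tol / 2) + adaptquad(f, m, b, tol / 2)
```

Smooth regions are accepted immediately at coarse resolution, while regions of abrupt change trigger further subdivision.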
Chapter 19 Objectives
• Understanding the application of high-accuracy numerical
differentiation formulas for equispaced data.
• Knowing how to evaluate derivatives for unequally spaced
data.
• Understanding how Richardson extrapolation is applied for
numerical differentiation.
• Recognizing the sensitivity of numerical differentiation to
data error.
• Knowing how to evaluate derivatives in MATLAB with the diff and gradient functions.
• Knowing how to generate contour plots and vector fields
with MATLAB.
Differentiation
• The mathematical definition of a derivative begins
with a difference approximation:
y f xi  x  f xi 

x
x
and as x is allowed to approach zero, the
difference becomes a derivative:


f xi  x  f xi 
dy
 lim
dx x0
x
High-Accuracy Differentiation
Formulas
• Taylor series expansion can be used to
generate high-accuracy formulas for
derivatives by using linear algebra to
combine the expansion around several
points.
• Three categories of formulas include forward finite-difference, backward finite-difference, and centered finite-difference.
Forward Finite-Difference
Backward Finite-Difference
Centered Finite-Difference
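The difference tables for these slides are not reproduced in this extract. The simplest first-derivative version of each category can be sketched in Python (function names my own):

```python
def forward_diff(f, x, h):
    """Forward finite-difference: uses the point ahead, O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward finite-difference: uses the point behind, O(h)."""
    return (f(x) - f(x - h)) / h

def centered_diff(f, x, h):
    """Centered finite-difference: straddles the point, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

The centered formula is one order more accurate for the same step size because the leading error terms of the forward and backward differences cancel.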
Richardson Extrapolation
• As with integration, the Richardson extrapolation can be used to
combine two lower-accuracy estimates of the derivative to produce a
higher-accuracy estimate.
• For the cases where there are two O(h^2) estimates and the interval is halved (h_2 = h_1/2), an improved O(h^4) estimate may be formed using:

  D = \frac{4}{3} D(h_2) - \frac{1}{3} D(h_1)

• For the cases where there are two O(h^4) estimates and the interval is halved (h_2 = h_1/2), an improved O(h^6) estimate may be formed using:

  D = \frac{16}{15} D(h_2) - \frac{1}{15} D(h_1)

• For the cases where there are two O(h^6) estimates and the interval is halved (h_2 = h_1/2), an improved O(h^8) estimate may be formed using:

  D = \frac{64}{63} D(h_2) - \frac{1}{63} D(h_1)
Unequally Spaced Data
• One way to calculate derivatives of unequally spaced data is to determine a polynomial fit and take its derivative at a point.
• As an example, using a second-order
Lagrange polynomial to fit three points and
taking its derivative yields:
f x  f x0 
2x  x1  x2
2x  x0  x2
2x  x0  x1
 f x1 
 f x2 
x0  x1 x0  x2 
x1  x0 x1  x2 
x2  x0 x2  x1 
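A direct Python transcription of this derivative formula (the function name and argument layout are my own):

```python
def lagrange_deriv(x, x0, x1, x2, f0, f1, f2):
    """Derivative at x of the quadratic through (x0,f0), (x1,f1), (x2,f2)."""
    return (f0 * (2 * x - x1 - x2) / ((x0 - x1) * (x0 - x2))
            + f1 * (2 * x - x0 - x2) / ((x1 - x0) * (x1 - x2))
            + f2 * (2 * x - x0 - x1) / ((x2 - x0) * (x2 - x1)))

# For data sampled from a quadratic, the formula is exact at any x,
# and the three points need not be equally spaced:
slope = lagrange_deriv(0.4, 0.0, 0.4, 1.0, 0.0, 0.16, 1.0)  # f = x^2, f' = 0.8
```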
Derivatives and Integrals for Data
with Errors
• A shortcoming of numerical differentiation is that it tends to
amplify errors in data, whereas integration tends to smooth
data errors.
• One approach for taking derivatives of data with errors is to
fit a smooth, differentiable function to the data and take the
derivative of the function.
Numerical Differentiation with
MATLAB
• MATLAB has two built-in functions to help take derivatives of data:
• diff(x)
– Returns the difference between adjacent elements in x
• diff(y)./diff(x)
– Returns the difference between adjacent values in y divided by the corresponding difference in x
Numerical Differentiation with
MATLAB
• fx = gradient(f, h)
Determines the derivative of the data in f at each of the points. The program uses a forward difference for the first point, a backward difference for the last point, and centered differences for the interior points. h is the spacing between points; if omitted, h=1.
• gradient’s result is the same size as the original data.
• gradient can also be used to find partial derivatives for matrices:
[fx, fy] = gradient(f, h)
Visualization
• MATLAB can generate contour plots of functions as
well as vector fields. Assuming x and y represent
a meshgrid of x and y values and z represents a
function of x and y,
– contour(x, y, z) can be used to generate a contour
plot
– [fx, fy]=gradient(z,h) can be used to generate
partial derivatives and
– quiver(x, y, fx, fy) can be used to generate
vector fields
Chapter 20 Objectives
• Understanding the meaning of local and global truncation
errors and their relationship to step size for one-step
methods for solving ODEs.
• Knowing how to implement the following Runge-Kutta (RK)
methods for a single ODE:
– Euler
– Heun
– Midpoint
– Fourth-Order RK
• Knowing how to iterate the corrector of Heun’s method.
• Knowing how to implement the following Runge-Kutta
methods for systems of ODEs:
– Euler
– Fourth-order RK
Ordinary Differential Equations
• Methods described here are for solving differential
equations of the form:
  \frac{dy}{dt} = f(t, y)

• The methods in this chapter are all one-step methods and have the general format:

  y_{i+1} = y_i + \phi h

where φ is called an increment function, and is used to extrapolate from an old value y_i to a new value y_{i+1}.
Euler’s Method
• The first derivative provides a direct estimate of the slope at t_i:

  \left.\frac{dy}{dt}\right|_{t_i} = f(t_i, y_i)

and the Euler method uses that estimate as the increment function:

  \phi = f(t_i, y_i)

  y_{i+1} = y_i + f(t_i, y_i)\,h
Error Analysis for Euler’s Method
• The numerical solution of ODEs involves two types
of error:
– Truncation errors, caused by the nature of the techniques
employed
– Roundoff errors, caused by the limited numbers of
significant digits that can be retained
• The total, or global, truncation error can be further split into:
– local truncation error that results from an application of the method in question over a single step, and
– propagated truncation error that results from the approximations produced during previous steps.
Error Analysis for Euler’s Method
• The local truncation error for Euler’s method is O(h^2) and proportional to the derivative of f(t,y), while the global truncation error is O(h).
• This means:
– The global error can be reduced by decreasing
the step size, and
– Euler’s method will provide error-free predictions
if the underlying function is linear.
• Euler’s method is conditionally stable,
depending on the size of h.
MATLAB Code for Euler’s Method
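The MATLAB listing is not reproduced in this extract. A minimal Python sketch of Euler's method (function name my own):

```python
def euler(f, t0, tf, y0, n):
    """Euler's method with n steps: y_{i+1} = y_i + f(t_i, y_i) * h."""
    h = (tf - t0) / n
    t, y = t0, y0
    ys = [y0]
    for i in range(n):
        y = y + f(t, y) * h        # slope at the start of the interval
        t = t0 + (i + 1) * h       # advance time by integer multiples of h
        ys.append(y)
    return ys
```

Because the global error is O(h), halving the step size roughly halves the error at the final time.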
Heun’s Method
• One method to improve Euler’s method is to determine derivatives at the
beginning and predicted ending of the interval and average them:
• This process relies on making a prediction of the new value of y, then
correcting it based on the slope calculated at that new value.
• This predictor-corrector approach can be iterated to convergence:
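The equations for this slide are not shown in this extract; the idea can be sketched in Python (names my own), with the corrector optionally iterated:

```python
def heun(f, t0, tf, y0, n, corrector_iters=1):
    """Heun's method: Euler predictor, then an average-slope corrector."""
    h = (tf - t0) / n
    y = y0
    for i in range(n):
        t = t0 + i * h
        s1 = f(t, y)
        y_new = y + s1 * h                        # predictor (an Euler step)
        for _ in range(corrector_iters):          # corrector: average the two slopes
            y_new = y + (s1 + f(t + h, y_new)) / 2 * h
        y = y_new
    return y
```

Averaging the slopes at both ends of the interval raises the global accuracy from O(h) to O(h^2).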
Midpoint Method
• Another improvement to Euler’s method is
similar to Heun’s method, but predicts the
slope at the midpoint of an interval rather
than at the end:
• This method has a local truncation error of O(h^3) and a global error of O(h^2).
Runge-Kutta Methods
• Runge-Kutta (RK) methods achieve the accuracy of a Taylor
series approach without requiring the calculation of higher
derivatives.
• For RK methods, the increment function φ can be generally written as:

  \phi = a_1 k_1 + a_2 k_2 + \cdots + a_n k_n

where the a’s are constants and the k’s are

  k_1 = f(t_i, y_i)

  k_2 = f(t_i + p_1 h,\; y_i + q_{11} k_1 h)

  k_3 = f(t_i + p_2 h,\; y_i + q_{21} k_1 h + q_{22} k_2 h)

  \vdots

  k_n = f(t_i + p_{n-1} h,\; y_i + q_{n-1,1} k_1 h + q_{n-1,2} k_2 h + \cdots + q_{n-1,n-1} k_{n-1} h)

where the p’s and q’s are constants.
Classical Fourth-Order Runge-Kutta Method
• The most popular RK methods are fourth-order, and the most commonly used form is:

  y_{i+1} = y_i + \frac{1}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right) h

where:

  k_1 = f(t_i, y_i)

  k_2 = f\left( t_i + \tfrac{1}{2}h,\; y_i + \tfrac{1}{2}k_1 h \right)

  k_3 = f\left( t_i + \tfrac{1}{2}h,\; y_i + \tfrac{1}{2}k_2 h \right)

  k_4 = f\left( t_i + h,\; y_i + k_3 h \right)
Systems of Equations
• Many practical problems require the solution of a
system of equations:
  \frac{dy_1}{dt} = f_1(t, y_1, y_2, \ldots, y_n)

  \frac{dy_2}{dt} = f_2(t, y_1, y_2, \ldots, y_n)

  \vdots

  \frac{dy_n}{dt} = f_n(t, y_1, y_2, \ldots, y_n)

• The solution of such a system requires that n initial conditions be known at the starting value of t.
Solution Methods
• Single-equation methods can be used to solve systems of ODEs as well; for example, Euler’s method can be used on systems of equations - the one-step method is applied for every equation at each step before proceeding to the next step.
• Fourth-order Runge-Kutta methods can also
be used, but care must be taken in
calculating the k’s.
MATLAB RK4 Code
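The MATLAB listing is not reproduced in this extract. A Python sketch of classical RK4 applied to a system, with each k evaluated for every equation before moving on (names my own):

```python
def rk4_system(f, t0, tf, y0, n):
    """Classical fourth-order RK for a system dy/dt = f(t, y); y is a list."""
    h = (tf - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        # each k is a list of slopes, one per equation, computed in full
        # before the next k is started -- the "care" the slide mentions
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k2)])
        k4 = f(t + h, [yi + h * k for yi, k in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y
```

For instance, for the predator-prey system solved with ode45 in Chapter 21, f(t, y) would return [1.2*y[0]-0.6*y[0]*y[1], -0.8*y[1]+0.3*y[0]*y[1]].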
Chapter 21 Objectives
• Understanding how the Runge-Kutta Fehlberg
methods use RK methods of different orders to
provide error estimates that are used to adjust step
size.
• Familiarizing yourself with the built-in MATLAB
function for solving ODEs.
• Learning how to adjust options for MATLAB’s ODE
solvers.
• Learning how to pass parameters to MATLAB’s
ODE solvers.
• Understanding what is meant by stiffness and its
implications for solving ODEs.
Adaptive Methods
• The solutions to some ODE problems exhibit multiple time scales - for some parts of the solution the variable changes slowly, while for others there are abrupt changes.
• Constant step-size algorithms would have to apply a small step size to the entire computation, wasting many more calculations than necessary.
• Adaptive algorithms, on the other hand, can change step size depending on the region.
• There are two primary approaches to incorporating adaptive step-size control:
– Step halving - perform the one-step algorithm two different ways, once with a full step and once with two half-steps, and compare the results.
– Embedded RK methods - perform two RK iterations of different orders and compare the results. This is the preferred method.
MATLAB Functions
• MATLAB’s ode23 function uses second- and third-order RK functions to solve the ODE and adjust step sizes.
• MATLAB’s ode45 function uses fourth- and fifth-order RK functions to solve the ODE and adjust step sizes. This is recommended as the first function to use to solve a problem.
• MATLAB’s ode113 function is a multistep solver
useful for computationally intensive ODE functions.
Using ode Functions
• The functions are generally called in the same way;
ode45 is used as an example:
[t, y] = ode45(odefun, tspan, y0)
– y: solution array, where each column represents one of
the variables and each row corresponds to a time in the t
vector
– odefun: function returning a column vector of the right-hand sides of the ODEs
– tspan: time over which to solve the system
• If tspan has two entries, the results are reported for those times
as well as several intermediate times based on the steps taken
by the algorithm
• If tspan has more than two entries, the results are reported only
for those specific times
– y0: vector of initial values
Example - Predator-Prey
• Solve:

  \frac{dy_1}{dt} = 1.2 y_1 - 0.6 y_1 y_2

  \frac{dy_2}{dt} = -0.8 y_2 + 0.3 y_1 y_2

with y_1(0) = 2 and y_2(0) = 1 for 20 seconds.
• predprey.m M-file:

function yp = predprey(t, y)
yp = [1.2*y(1)-0.6*y(1)*y(2); -0.8*y(2)+0.3*y(1)*y(2)];

• Script:

tspan = [0 20];
y0 = [2, 1];
[t, y] = ode45(@predprey, tspan, y0);
figure(1); plot(t, y); figure(2); plot(y(:,1), y(:,2));
ODE Solver Options
• Options to ODE solvers may be passed as
an optional fourth argument, and are
generally created using the odeset function:
options=odeset(‘par1’, ‘val1’, ‘par2’, ‘val2’,…)
• Commonly used parameters are:
– ‘InitialStep’: sets initial step size
– ‘MaxStep’: sets maximum step size (default:
one tenth of tspan interval)
Multistep Methods
• Multistep methods are based on
the insight that, once the
computation has begun,
valuable information from the
previous points exists.
• One example is the non-self-starting Heun’s method, which has the following predictor and corrector equations:

(a) Predictor:

  y_{i+1}^0 = y_{i-1}^m + f(t_i, y_i^m)\,2h

(b) Corrector:

  y_{i+1}^j = y_i^m + \frac{f(t_i, y_i^m) + f(t_{i+1}, y_{i+1}^{j-1})}{2} h
Stiffness
• A stiff system is one involving rapidly changing components
together with slowly changing ones.
• An example of a single stiff ODE is:

  \frac{dy}{dt} = -1000y + 3000 - 2000e^{-t}

whose solution, if y(0) = 0, is:

  y = 3 - 0.998e^{-1000t} - 2.002e^{-t}
MATLAB Functions for Stiff
Systems
• MATLAB has a number of built-in functions for solving stiff systems of ODEs, including ode15s, ode23s, ode23t, and ode23tb.
• The arguments for the stiff solvers are the
same as those for previous solvers.