An Exploration of Adams-Bashforth and Adams-Moulton
Predictor-Corrector Methods for Ordinary Differential Equations
Your Name Here
June 4, 2025
Abstract
This document explores the Adams-Bashforth and Adams-Moulton methods, which form a popular class of explicit and implicit linear multistep methods, respectively, for the numerical solution
of ordinary differential equations (ODEs). We will discuss their derivation, stability properties, and
the common practice of using them in predictor-corrector pairs to leverage the strengths of both.
1 Introduction
Ordinary Differential Equations (ODEs) are fundamental in modeling various phenomena in science and
engineering. An initial value problem (IVP) is given by:
dy/dt = f(t, y(t)),   y(t0) = y0
While analytical solutions are often elusive, numerical methods provide approximations to y(t) at discrete
time points tn = t0 + nh, where h is the step size. Multistep methods, unlike one-step methods (e.g.,
Runge-Kutta), use information from several previous points yn−1 , yn−2 , . . . to compute yn .
2 Linear Multistep Methods
A general k-step linear multistep method has the form:
Σ_{j=0}^{k} αj yn+j = h Σ_{j=0}^{k} βj f(tn+j, yn+j)
where αj and βj are constants, and αk ≠ 0. If βk = 0, the method is explicit; if βk ≠ 0, the method is implicit.
2.1 Adams-Bashforth Methods (Explicit)
Adams-Bashforth methods are explicit multistep methods. They are derived by integrating dy/dt = f(t, y) from tn to tn+1:

y(tn+1) − y(tn) = ∫_{tn}^{tn+1} f(t, y(t)) dt
The function f (t, y(t)) is then approximated by a polynomial P (t) that interpolates f at k previous
points tn , tn−1 , . . . , tn−k+1 .
For example, the two-step Adams-Bashforth (AB2) method is:
yn+1 = yn + (h/2) (3fn − fn−1)
where fj = f (tj , yj ).
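As a concrete illustration, a single AB2 step can be sketched in Python (the helper name ab2_step is our own, not from any particular library):

```python
def ab2_step(f, t, y, f_prev, h):
    """Advance one step with the two-step Adams-Bashforth method:
        y_{n+1} = y_n + (h/2) * (3*f_n - f_{n-1})

    f      : right-hand side f(t, y)
    t, y   : current time and state (t_n, y_n)
    f_prev : f evaluated at the previous step (f_{n-1})
    h      : step size
    Returns (y_next, f_curr) so f_curr can be reused as f_prev next step.
    """
    f_curr = f(t, y)
    y_next = y + h / 2.0 * (3.0 * f_curr - f_prev)
    return y_next, f_curr
```

Note that the step reuses the stored f_prev, so each step costs only one new evaluation of f, which is the main attraction of Adams methods over Runge-Kutta.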
The general form for a k-step Adams-Bashforth method is:
yn+1 = yn + h Σ_{j=0}^{k−1} bj ∇^j fn

where ∇ is the backward difference operator.
2.2 Adams-Moulton Methods (Implicit)
Adams-Moulton methods are implicit multistep methods. The derivation is similar, but the interpolating
polynomial P (t) also includes the future point tn+1 . This means fn+1 = f (tn+1 , yn+1 ) appears on the
right-hand side, making the method implicit.
For example, the one-step Adams-Moulton method, better known as the trapezoidal rule, is:

yn+1 = yn + (h/2) (fn+1 + fn)
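Because fn+1 appears on both sides, each step requires solving for yn+1. A minimal Python sketch for a scalar ODE, using fixed-point iteration seeded by an explicit Euler guess (function name and iteration parameters are our own choices):

```python
def trapezoidal_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One step of the implicit trapezoidal rule (one-step Adams-Moulton),
    solving y_{n+1} = y_n + (h/2) * (f(t+h, y_{n+1}) + f(t, y))
    by fixed-point iteration started from an explicit Euler predictor."""
    fn = f(t, y)
    y_next = y + h * fn                  # explicit Euler initial guess
    for _ in range(max_iter):
        y_new = y + h / 2.0 * (f(t + h, y_next) + fn)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next
```

Fixed-point iteration converges when h is small enough that (h/2)·|∂f/∂y| < 1; for stiff problems a Newton iteration would be used instead.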
A common three-step Adams-Moulton (AM3) method is:

yn+1 = yn + (h/24) (9fn+1 + 19fn − 5fn−1 + fn−2)

3 Predictor-Corrector Methods
Implicit methods like Adams-Moulton generally have better stability properties and higher accuracy for
a given step number compared to explicit methods like Adams-Bashforth. However, they require solving
an algebraic equation (often non-linear) for yn+1 at each step.
Predictor-Corrector (PECE) methods combine an explicit method (predictor) with an implicit method
(corrector):
1. Predict (P): Use an explicit method (e.g., Adams-Bashforth) to get an initial estimate for yn+1, call it yn+1^(0):

yn+1^(0) = yn + h Σ_{j=0}^{k−1} (AB coefficients)j fn−j

2. Evaluate (E): Evaluate fn+1^(0) = f(tn+1, yn+1^(0)).
3. Correct (C): Use an implicit method (e.g., Adams-Moulton) with fn+1^(0) on the right-hand side to get a corrected value yn+1^(1):

yn+1^(1) = yn + h [(AM coefficient)−1 fn+1^(0) + Σ_{j=0}^{m−1} (AM coefficients)j fn−j]

4. Evaluate (E) (Optional, for PE(CE)^r): One might re-evaluate fn+1^(1) = f(tn+1, yn+1^(1)) and apply the corrector step multiple times.
A common pairing is a k-step Adams-Bashforth predictor with a (k − 1)-step Adams-Moulton corrector,
or a k-step AB with a k-step AM. For instance, the AB4 predictor with AM3 corrector.
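The AB4 predictor with the AM3 corrector can be combined into a single PECE step. The following Python sketch (the function name and state layout are our own choices) uses the standard AB4 coefficients 55, −59, 37, −9 over 24 together with the AM3 corrector quoted above:

```python
def ab4_am3_pece_step(f, t, y, fs, h):
    """One PECE step: 4-step Adams-Bashforth predictor,
    3-step Adams-Moulton corrector.

    fs : list [f_{n-3}, f_{n-2}, f_{n-1}, f_n] of past derivative values.
    Returns (y_next, fs_next) with the history shifted by one step.
    """
    f3, f2, f1, f0 = fs
    # P: AB4 predictor
    y_pred = y + h / 24.0 * (55*f0 - 59*f1 + 37*f2 - 9*f3)
    # E: evaluate f at the predicted point
    f_pred = f(t + h, y_pred)
    # C: AM3 corrector, (h/24) * (9 f_{n+1} + 19 f_n - 5 f_{n-1} + f_{n-2})
    y_next = y + h / 24.0 * (9*f_pred + 19*f0 - 5*f1 + f2)
    # E: re-evaluate with the corrected value for the next step's history
    f_next = f(t + h, y_next)
    return y_next, [f2, f1, f0, f_next]
```

Only two evaluations of f are needed per step, regardless of the number of past points used.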
4 Order of Accuracy and Truncation Error
The local truncation error (LTE) of a k-step Adams-Bashforth method is O(h^(k+1)). The local truncation error of a k-step Adams-Moulton method is O(h^(k+2)) for k ≥ 1.
The coefficients for these methods are typically derived to maximize the order of accuracy. For an
Adams-Bashforth method of order p:
yn+1 = yn + h Σ_{i=0}^{p−1} βi f(tn−i, yn−i)
For an Adams-Moulton method of order q:
yn+1 = yn + h Σ_{i=−1}^{q−2} γi f(tn−i, yn−i)

(where the i = −1 term involves fn+1 = f(tn+1, yn+1))
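These orders can be checked empirically by step halving: for a method of order p, halving h should shrink the global error by roughly 2^p. A small Python check for AB2 (the test problem y′ = y and the helper are our own, purely for illustration):

```python
import math

def ab2_error(h):
    """Global error of AB2 on y' = y, y(0) = 1, integrated to t = 1."""
    f = lambda t, y: y
    f_prev = f(0.0, 1.0)
    t, y = h, math.exp(h)       # exact starting value y_1, for the test only
    while t < 1.0 - 1e-12:
        f_curr = f(t, y)
        y = y + h / 2.0 * (3.0 * f_curr - f_prev)
        f_prev = f_curr
        t += h
    return abs(y - math.e)

# For a second-order method the error ratio should approach 2^2 = 4.
ratio = ab2_error(0.02) / ab2_error(0.01)
```

The ratio observed in practice settles near 4 as h shrinks, confirming second-order convergence for AB2.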
5 Stability Analysis
Stability of multistep methods is often analyzed by applying the method to the test equation y ′ = λy,
where λ ∈ C. This leads to a characteristic polynomial whose roots determine stability. The region
of absolute stability is the set of hλ in the complex plane for which the numerical solution remains
bounded.
Adams-Bashforth methods have relatively small stability regions, especially at higher orders. Adams-Moulton methods generally have larger stability regions. Predictor-corrector methods inherit stability properties that combine those of the predictor and the corrector, and depend on how the corrector is applied (e.g., the number of iterations).
The root condition is crucial: a linear multistep method is zero-stable if all roots of the characteristic polynomial ρ(z) = Σ_{j=0}^{k} αj z^j lie within or on the unit circle, and any roots on the unit circle are simple. A convergent method must be zero-stable and consistent.
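The root condition is easy to check numerically. The sketch below (assuming NumPy is available; the helper name is ours) tests whether all roots of ρ(z) lie in the closed unit disc, with only simple roots on the circle. For all Adams methods ρ(z) = z^k − z^(k−1), so the condition holds automatically.

```python
import numpy as np

def satisfies_root_condition(alpha, tol=1e-6):
    """Check the root condition for ρ(z) = Σ_j α_j z^j.

    alpha : coefficients [α_0, ..., α_k], lowest degree first.
    """
    roots = np.roots(alpha[::-1])        # np.roots wants highest degree first
    for r in roots:
        if abs(r) > 1 + tol:
            return False                 # a root outside the unit circle
        if abs(abs(r) - 1) <= tol:
            # a root on the unit circle must be simple
            if sum(1 for s in roots if abs(r - s) <= tol) > 1:
                return False
    return True
```

For example, the 4-step Adams polynomial ρ(z) = z^4 − z^3 passes, while ρ(z) = (z − 1)(z − 2) fails (a root outside the circle) and ρ(z) = (z − 1)^2 fails (a double root on the circle).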
6 Numerical Example
(This section would include a specific IVP, its analytical solution if known, and tables/plots comparing
the numerical solutions obtained using an Adams-Bashforth-Moulton predictor-corrector pair against
the true solution for different step sizes. It would also show errors.)
Consider the IVP:

dy/dt = −2t y^2,   y(0) = 1

The analytical solution is y(t) = 1/(1 + t^2).
We can apply a 2-step Adams-Bashforth predictor and a 2-step Adams-Moulton corrector (PECE
mode).
• Predictor (AB2): yn+1^(P) = yn + (h/2) (3fn − fn−1)
• Corrector (AM2): yn+1^(C) = yn + (h/2) (f(tn+1, yn+1^(P)) + fn)
The scheme requires starting values, e.g., y0 (given) and y1 (obtained from a one-step method such as RK4).
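A minimal Python sketch of this AB2/AM2 PECE scheme (function names are our own; a classical RK4 step supplies the single extra starting value):

```python
def solve_pece(t_end, h):
    """AB2 predictor / AM2 corrector in PECE mode for
    y' = -2 t y^2, y(0) = 1, with exact solution y(t) = 1/(1 + t^2)."""

    def f(t, y):
        return -2.0 * t * y * y

    def rk4_step(t, y):
        # Classical RK4, used only to generate the starting value y_1.
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    t, y_prev = 0.0, 1.0
    f_prev = f(t, y_prev)
    y = rk4_step(t, y_prev)                       # starting value y_1
    t = h
    while t < t_end - 1e-12:
        f_curr = f(t, y)
        y_pred = y + h/2 * (3*f_curr - f_prev)    # P: AB2
        f_pred = f(t + h, y_pred)                 # E
        y = y + h/2 * (f_pred + f_curr)           # C: AM2
        f_prev = f_curr
        t += h
    return y
```

With a modest step size such as h = 0.01, the returned value at t = 2 should land close to the exact y(2) = 1/5, and halving h should cut the error by roughly a factor of four.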
7 Conclusion
Adams-Bashforth and Adams-Moulton methods are powerful tools for solving ODEs. When used in
predictor-corrector schemes, they offer a balance of computational efficiency (from the explicit predictor)
and good stability/accuracy (from the implicit corrector). The choice of specific orders for the predictor
and corrector depends on the problem’s requirements for accuracy and stability.