Math 2250-010
Numerical Solutions to first order Differential Equations
January 29, 2014
You may wish to download this file from our class homework or lecture page, or by directly opening
the URL
http://www.math.utah.edu/~korevaar/2250spring14/jan29.mw
from Maple. It contains discussion and Maple commands which will help you with your
Maple/Matlab/Mathematica work later this week, in addition to our in-class discussions.
In this handout we will study numerical methods for approximating solutions to first order differential
equations. Later in the course we will see how higher order differential equations can be converted into
first order systems of differential equations. It turns out that there is a natural way to generalize what we
do now in the context of a single first order differential equation, to systems of first order differential
equations. So understanding this material will be an important step in understanding numerical solutions
to higher order differential equations and to systems of differential equations.
We will be working through material from sections 2.4-2.6 of the text.
Euler's Method:
The most basic method of approximating solutions to differential equations is called Euler's method,
after the 1700's mathematician who first formulated it. If you want to approximate the solution to the initial
value problem
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0,$$
first pick a step size h. Then for x between x_0 and x_0 + h, use the constant slope (= rate of change) f(x_0, y_0). At the x-value x_1 = x_0 + h your y-value will therefore be y_1 := y_0 + f(x_0, y_0)·h. Then for x between x_1 and x_1 + h you use the constant slope f(x_1, y_1), so that at x_2 := x_1 + h your y-value is y_2 := y_1 + f(x_1, y_1)·h. You continue in this manner. It is easy to visualize if you understand the slope field concept we've been talking about: you just use the slope field with finite rather than infinitesimal stepping in the x-variable. You use the value of the slope function f(x, y) at your current point (x, y) to get a slope, which you then use to move to the next point. It is straightforward to have the computer do this sort of tedious computation for you. In Euler's time such computations would have been done by hand!
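In compact form, the recursion for one Euler step (with the notation just introduced) is

$$x_{i+1} = x_i + h, \qquad y_{i+1} = y_i + h\,f(x_i, y_i), \qquad i = 0, 1, 2, \ldots$$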
A good first example to illustrate Euler's method is our favorite DE from the time of Calculus, namely
the initial value problem
$$\frac{dy}{dx} = y, \qquad y(0) = 1.$$
We know that y = e^x is the solution. Let's take h = 0.2 and try to approximate the solution on the x-interval [0, 1]. Since the approximate solution will be piecewise linear, we only need to know the approximations at the discrete x-values x = 0, 0.2, 0.4, 0.6, 0.8, 1.0. Use the table to fill in the hand computations which produce these y-values and points. Then compare to the ``do loop'' computation on the following page, and the graph of the approximate solution points.
Exercise 1: Hand work for the approximate solution to
$$\frac{dy}{dx} = y, \qquad y(0) = 1$$
on the interval [0, 1], with n = 5 subdivisions and h = 0.2, using Euler's method. This table might help organize the arithmetic. In this differential equation the slope function is f(x, y) = y.
step i | x_i | y_i | k = f(x_i, y_i) | Δx = h | Δy = h·k | x_i + Δx | y_i + Δy
-------+-----+-----+-----------------+--------+----------+----------+---------
   0   |  0  |  1  |        1        |  0.2   |   0.2    |   0.2    |   1.2
   1   | 0.2 | 1.2 |                 |        |          |          |
   2   |     |     |                 |        |          |          |
   3   |     |     |                 |        |          |          |
   4   |     |     |                 |        |          |          |
   5   |     |     |                 |        |          |          |
Euler Table
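For example, to complete the i = 1 row (the same numbers appear in the do-loop output below):

$$k = f(x_1, y_1) = 1.2, \qquad \Delta y = h\,k = 0.2\cdot 1.2 = 0.24, \qquad x_2 = x_1 + h = 0.4, \qquad y_2 = y_1 + \Delta y = 1.44.$$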
Here is the automated computation, and a graph comparing the approximate solution points to the actual
solution graph:
> restart: # clear all memory, if you wish
> unassign('x', 'y'); # in case you used the letters elsewhere
> f := (x, y) -> y; # slope field function for our DE, i.e. for y'(x) = f(x,y)
                                  f := (x, y) -> y
> x[0] := 0; y[0] := 1; # initial point
  h := 0.2; n := 5; # step size and number of steps
x0 := 0
y0 := 1
h := 0.2
n := 5
> for i from 0 to n do # this is an iteration loop, with index "i" running from 0 to n
    print(i, x[i], y[i], exp(x[i])); # print iteration step, current x, y values, and exact solution value
    k := f(x[i], y[i]); # current slope function value
    x[i+1] := x[i] + h;
    y[i+1] := y[i] + h*k;
  end do: # how to end a for loop in Maple
0, 0, 1, 1
1, 0.2, 1.2, 1.221402758
2, 0.4, 1.44, 1.491824698
3, 0.6, 1.728, 1.822118800
4, 0.8, 2.0736, 2.225540928
5, 1.0, 2.48832, 2.718281828
and the plot comparing the approximate solution points to the exact solution graph:
> with(plots):
  Eulerapprox := pointplot([seq([x[i], y[i]], i = 0 .. n)]):
  exactsolgraph := plot(exp(t), t = 0 .. 1, color = black):
  # used t because x has already been defined to be something else
  display({Eulerapprox, exactsolgraph}, title = "approximate and exact solution graphs");
[plot omitted: "approximate and exact solution graphs" - the Euler approximation points and the exact solution y = exp(t) on 0 <= t <= 1]
Exercise 2: Why are your approximations too small in this case, compared to the exact solution?
It should be that as your step size h gets smaller, your approximations to the actual solution get better. This would be exactly true if your computer could do exact arithmetic (which it can't); in practice you also don't want to make the computer do too many computations, because of problems with round-off error and computation time. So, for example, choosing h = .0000001 would not be practical. But trying h = 0.01 in our previous initial value problem should be instructive.
If we change h to 0.01 and n to 100, and keep the other data the same, we can rerun our experiment using copy, paste, and modify:
> restart: # clear any memory from earlier work
> Digits := 6: # to make our display neater - if we want more accuracy we can use more digits
> unassign('x', 'y'); # in case you used the letters elsewhere, and came back to this piece of code
> f := (x, y) -> y; # slope field function for our DE, i.e. for y'(x) = f(x,y)
                                  f := (x, y) -> y
> x[0] := 0; y[0] := 1; # initial point
  h := 0.01; n := 100; # step size and number of steps
x0 := 0
y0 := 1
h := 0.01
n := 100
> for i from 0 to n do # this is an iteration loop, with index "i" running from 0 to n
    if i mod 10 = 0 then
      print(i, x[i], y[i], exp(x[i]));
      # print iteration step, current x, y values, and exact solution value
    end if: # only print out when i is a multiple of 10
    k := f(x[i], y[i]); # current slope function value
    x[i+1] := x[i] + h;
    y[i+1] := y[i] + h*k;
  end do: # how to end a for loop in Maple
0, 0, 1, 1
10, 0.10, 1.104622126, 1.105170918
20, 0.20, 1.220190040, 1.221402758
30, 0.30, 1.347848915, 1.349858808
40, 0.40, 1.488863734, 1.491824698
50, 0.50, 1.644631822, 1.648721271
60, 0.60, 1.816696698, 1.822118800
70, 0.70, 2.006763369, 2.013752707
80, 0.80, 2.216715219, 2.225540928
90, 0.90, 2.448632677, 2.459603111
100, 1.00, 2.704813833, 2.718281828
and the plot with more points
> with(plots):
  Eulerapprox := pointplot([seq([x[i], y[i]], i = 0 .. n)], color = red):
  exactsolgraph := plot(exp(t), t = 0 .. 1, color = black):
  # used t because x has already been defined to be something else
  display({Eulerapprox, exactsolgraph}, title = "approximate and exact solution graphs");
[plot omitted: "approximate and exact solution graphs" - the n = 100 Euler approximation points and the exact solution y = exp(t) on 0 <= t <= 1]
Exercise 3: For this very special initial value problem, which has y(x) = e^x as the solution, set up Euler's method on the x-interval [0, 1], with n subdivisions and step size h = 1/n. Write down the resulting Euler estimate for exp(1) = e. What is the limit of this estimate as n → ∞? You learned this special limit in Calculus!
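Hint: since the slope function here is f(x, y) = y, each Euler step just multiplies the current y-value by (1 + h), so

$$y_{i+1} = y_i + h\,y_i = (1 + h)\,y_i \quad\Longrightarrow\quad y_n = (1 + h)^n\,y_0 = \Bigl(1 + \tfrac{1}{n}\Bigr)^{\!n}.$$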
In more complicated differential equations it is a very serious issue to find relatively efficient ways of approximating solutions. An entire field of mathematics, ``numerical analysis,'' deals with such issues for a variety of mathematical problems. The book talks about some improvements to Euler in sections 2.5 and 2.6; in particular it discusses improved Euler and Runge-Kutta. Runge-Kutta-type codes are actually used in commercial numerical packages, e.g. in Maple and Matlab.
Let's summarize some highlights from 2.5-2.6.
Suppose we already knew the solution y(x) to the initial value problem
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0.$$
If we integrate the DE from x to x + h and apply the Fundamental Theorem of Calculus, we get
$$y(x + h) - y(x) = \int_x^{x+h} f(t, y(t))\,dt, \quad \text{i.e.}$$
$$y(x + h) = y(x) + \int_x^{x+h} f(t, y(t))\,dt.$$
The problem with Euler is that we always approximate this integral by h·f(x, y(x)), i.e. we use the left-hand endpoint value as our approximation of the ``average height''. This causes errors, and these accumulate as we move from subinterval to subinterval, so that our approximate solution diverges from the actual solution. The improvements to Euler depend on better approximations to the integral above. These are subtle, because we don't yet have an approximation for y(t) when t is greater than x, and so also not for the slope function f(t, y(t)).
``Improved Euler'' uses an approximation to the Trapezoid Rule to approximate the integral. Recall, the trapezoid rule for this integral approximation would be
$$\frac{h}{2}\Bigl( f(x, y(x)) + f(x + h, y(x + h)) \Bigr).$$
Since we don't know y(x + h), we approximate it using unimproved Euler, and then feed that into the trapezoid rule. This leads to the improved Euler do loop below, for the same differential equation we just studied with the unimproved Euler method. Of course, before you use it you must make sure you initialize everything correctly.
> restart: # clear any memory from earlier work
> Digits := 6: # to make our display neater - if we want more accuracy we can use more digits
> unassign('x', 'y'); # in case you used the letters elsewhere, and came back to this piece of code
> f := (x, y) -> y; # slope field function for our DE, i.e. for y'(x) = f(x,y)
                                  f := (x, y) -> y
> x[0] := 0; y[0] := 1; # initial point
  h := 0.2; n := 5; # step size and number of steps
x0 := 0
y0 := 1
h := 0.2
n := 5
> for i from 0 to n do # this is an iteration loop, with index "i" running from 0 to n
    print(i, x[i], y[i], exp(x[i])); # print iteration step, current x, y values, and exact solution value
    k1 := f(x[i], y[i]); # current slope function value
    k2 := f(x[i] + h, y[i] + h*k1); # estimate for slope at right endpoint of subinterval
    k := (k1 + k2)/2; # improved Euler estimate
    x[i+1] := x[i] + h;
    y[i+1] := y[i] + h*k;
  end do: # how to end a for loop in Maple
0, 0, 1, 1
1, 0.2, 1.22000, 1.22140
2, 0.4, 1.48840, 1.49182
3, 0.6, 1.81585, 1.82212
4, 0.8, 2.21534, 2.22554
5, 1.0, 2.70271, 2.71828
Notice we did almost as well with n = 5 in improved Euler as we did with n = 100 in unimproved Euler.
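A standard fact from numerical analysis, consistent with the experiments above: over a fixed x-interval the accumulated error is roughly proportional to

$$h \ \text{(Euler)}, \qquad h^2 \ \text{(improved Euler)}, \qquad h^4 \ \text{(Runge-Kutta)},$$

so, for example, halving the step size cuts the improved Euler error by roughly a factor of four.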
In the same vein as ``improved Euler'' we can use the Simpson approximation for the integral instead of the Trapezoid rule, and this leads to the Runge-Kutta method. You may or may not have talked about Simpson's Parabolic Rule for approximating definite integrals in Calculus; it is based on a quadratic approximation to the function g, whereas the Trapezoid rule is based on a first order approximation. In fact, consider two numbers x_0 < x_1, with interval width h = x_1 - x_0 and interval midpoint x̄ = (x_0 + x_1)/2. If you fit a parabolic graph y = p(x) to the three points (x_0, g(x_0)), (x̄, g(x̄)), (x_1, g(x_1)), then Simpson's rule replaces the integral of g over [x_0, x_1] by the integral of p, which works out to be (h/6)·(g(x_0) + 4·g(x̄) + g(x_1)).
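For comparison with the code below (a standard observation rather than something spelled out in these notes): Simpson weights the endpoint and midpoint heights by 1, 4, 1, and the Runge-Kutta slope average uses the analogous weights 1, 2, 2, 1, with the midpoint weight 4 split between the two midpoint slope estimates k_2 and k_3:

$$\int_{x_0}^{x_1} g(x)\,dx \approx \frac{h}{6}\Bigl(g(x_0) + 4\,g(\bar{x}) + g(x_1)\Bigr), \qquad k = \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}.$$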
Notice that we're still using the slope function f(x, y) = y, but the code below will work for whatever this function is, for the DE whose solutions you're trying to approximate. You just need to use Maple to first define what f(x, y) is. Sometimes you might know the exact solution (so that you can print its values out as well, as we do here). Other times you won't know a formula for the exact solution. Also, it could be that "t" is your independent variable, in which case you'll probably prefer to change your "x's" to "t's".
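For example (a hypothetical DE, not one from these notes), to approximate solutions of y'(x) = x + y^2 you would only need to redefine the slope function before rerunning the loop, and delete or adjust the exp(x[i]) entry in the print statement, since e^x is no longer the exact solution:

> f := (x, y) -> x + y^2; # hypothetical slope function for the DE y'(x) = x + y^2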
> restart: # clear any memory from earlier work
> Digits := 10: # to make our display neater - if we want more accuracy we can use more digits
> unassign('x', 'y'); # in case you used the letters elsewhere, and came back to this piece of code
> f := (x, y) -> y; # slope field function for our DE, i.e. for y'(x) = f(x,y)
                                  f := (x, y) -> y
> x[0] := 0; y[0] := 1; # initial point
  h := 0.2; n := 5; # step size and number of steps
                                  x0 := 0
                                  y0 := 1
                                  h := 0.2
                                  n := 5
> for i from 0 to n do # this is an iteration loop, with index "i" running from 0 to n
    print(i, x[i], y[i], exp(x[i])); # print iteration step, current x, y values, and exact solution value
    k1 := f(x[i], y[i]); # current slope function value (left endpoint of subinterval)
    k2 := f(x[i] + h/2, y[i] + (h/2)*k1); # first estimate for slope at midpoint of subinterval
    k3 := f(x[i] + h/2, y[i] + (h/2)*k2); # second estimate for midpoint slope
    k4 := f(x[i] + h, y[i] + h*k3); # estimate for right-hand slope
    k := (k1 + 2*k2 + 2*k3 + k4)/6; # Runge-Kutta estimate for rate of change
    x[i+1] := x[i] + h;
    y[i+1] := y[i] + h*k;
  end do: # how to end a for loop in Maple
0, 0, 1, 1
1, 0.2, 1.221400000, 1.221402758
2, 0.4, 1.491817960, 1.491824698
3, 0.6, 1.822106456, 1.822118800
4, 0.8, 2.225520825, 2.225540928
5, 1.0, 2.718251136, 2.718281828
Notice how close Runge-Kutta gets you to the correct value of e, with n = 5!
As we know, solutions to non-linear DE's can blow up, and there are other interesting pathologies as well, so if one is doing numerical solutions there is a real need for care and understanding; see, e.g., the examples in sections 2.4-2.6.
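As an illustration (this particular example is not worked in these notes): the nonlinear IVP y'(x) = y^2, y(0) = 1 has exact solution y(x) = 1/(1 - x), which blows up as x approaches 1, yet an Euler-type loop with any fixed step size will step right past x = 1 and keep reporting finite values. You can check the exact solution in Maple with dsolve:

> de := diff(y(x), x) = y(x)^2; # a nonlinear DE whose solutions can blow up in finite time
> dsolve({de, y(0) = 1}, y(x)); # the solution is y(x) = 1/(1 - x); check: y' = 1/(1 - x)^2 = y^2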
Exercise 4) The text homework problems 2.4.4, 2.5.4, 2.6.4 are canceled. However, you will work on a similar set of "by hand" problems in the lab. In preparation for that, use the Euler, improved Euler, and Runge-Kutta pseudocode below to estimate e^1 = e, as the solution to the initial value problem that we've used exclusively in these notes, namely
$$y'(x) = y, \qquad y(0) = 1.$$
(In the lab you will be studying different initial value problems.) In this exercise use just one step, n = 1, of size h = 1. Also, use the slope fields to illustrate how you find and use the various k-values, which are rates of change that get represented as slopes in the solution graphs. Take one step to get from the initial point (x_0, y_0) = (0, 1) to the final point (x_1, y_1) = (1, ??).
4a) Euler pseudocode for the initial value problem y'(x) = f(x, y), y(x_0) = y_0:
$$\begin{aligned}
k &= f(x_j, y_j)\\
x_{j+1} &= x_j + h\\
y_{j+1} &= y_j + h\,k.
\end{aligned}$$
(answer is (x_1, y_1) = (1, 2).)
4b) Improved Euler pseudocode:
$$\begin{aligned}
k_1 &= f(x_j, y_j)\\
k_2 &= f(x_j + h,\; y_j + h\,k_1)\\
k &= \tfrac{1}{2}\,(k_1 + k_2)\\
x_{j+1} &= x_j + h\\
y_{j+1} &= y_j + h\,k
\end{aligned}$$
(answer is y_1 = 2.5)
4c) Runge-Kutta pseudocode:
$$\begin{aligned}
k_1 &= f(x_j, y_j)\\
k_2 &= f\bigl(x_j + \tfrac{h}{2},\; y_j + \tfrac{h}{2}\,k_1\bigr)\\
k_3 &= f\bigl(x_j + \tfrac{h}{2},\; y_j + \tfrac{h}{2}\,k_2\bigr)\\
k_4 &= f(x_j + h,\; y_j + h\,k_3)\\
k &= \tfrac{1}{6}\,(k_1 + 2\,k_2 + 2\,k_3 + k_4)\\
x_{j+1} &= x_j + h\\
y_{j+1} &= y_j + h\,k
\end{aligned}$$
(answer is y_1 = 2.708)
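For reference, here is the arithmetic behind the three stated answers, using f(x, y) = y, h = 1, and (x_0, y_0) = (0, 1):

$$\begin{aligned}
\text{Euler:} \quad & k = f(0, 1) = 1, \quad y_1 = 1 + 1\cdot 1 = 2.\\
\text{Improved Euler:} \quad & k_1 = 1, \quad k_2 = f(1, 1 + 1) = 2, \quad k = \tfrac{1}{2}(1 + 2) = 1.5, \quad y_1 = 1 + 1.5 = 2.5.\\
\text{Runge-Kutta:} \quad & k_1 = 1, \quad k_2 = f(0.5, 1.5) = 1.5, \quad k_3 = f(0.5, 1.75) = 1.75, \quad k_4 = f(1, 2.75) = 2.75,\\
& k = \tfrac{1}{6}(1 + 3 + 3.5 + 2.75) = \tfrac{10.25}{6} \approx 1.708, \quad y_1 \approx 2.708.
\end{aligned}$$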