Presented by: Rabin Ranabhat
Date: October 31, 2012

Outline
- Ordinary Differential Equations
- Methods of Solving Initial Value Problems of ODEs
  - Euler's Method
  - Taylor Series Method
  - Runge-Kutta Method
- Stability
  - Euler's method
  - Backward Euler's method
  - Trapezoidal method
- Future Work

Ordinary Differential Equations

An ordinary differential equation is a relation containing functions of only one independent variable, and one or more of their derivatives with respect to that variable:

    Y'(x) = f(x, Y(x)),   x >= x_0

Common numerical methods for solving initial value problems of ordinary differential equations are:
- Euler's Method
- Taylor Series Method
- Runge-Kutta Method

Let Y(x) denote the true solution of the initial value problem with initial value Y_0:

    Y'(x) = f(x, Y(x)),   x_0 <= x <= b
    Y(x_0) = Y_0

Let the approximate solution be denoted y(x) (written y_h(x) when the dependence on the step size matters). Consider a discrete set of points between x_0 and b,

    x_0 < x_1 < x_2 < ... < x_N <= b,

and take these nodes to be evenly spaced, so that x_n = x_0 + n h for n = 0, 1, ..., N, where h is the step size.

Euler's Method

To derive Euler's method, start from the slope of a line:

    slope m = (y_2 - y_1) / (x_2 - x_1)

In this case y_2 = Y(x + h), y_1 = Y(x), x_2 = x + h, and x_1 = x, so that x_2 - x_1 = h. Replacing the x and y values and evaluating,

    Y'(x) ≈ (Y(x + h) - Y(x)) / ((x + h) - x) = (Y(x + h) - Y(x)) / h

Evaluating the differential equation Y'(x) = f(x, Y(x)) at x = x_n gives Y'(x_n) = f(x_n, Y(x_n)), so

    f(x_n, Y(x_n)) ≈ (Y(x_{n+1}) - Y(x_n)) / h

Multiplying both sides by h and then adding Y(x_n) to both sides gives

    Y(x_{n+1}) ≈ Y(x_n) + h f(x_n, Y(x_n))

Replacing the true values Y(x_n) by approximations y_n yields Euler's method:

    y_{n+1} = y_n + h f(x_n, y_n)
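The update rule just derived is straightforward to implement. The deck's numbers were produced in MATLAB; the sketch below uses Python instead, and assumes the test problem Y' = -Y, Y(0) = 1 (exact solution e^{-x}) -- the deck never states the test problem explicitly, but this is the problem the tabulated values are consistent with.

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Test problem (inferred from the tables): Y' = -Y, Y(0) = 1, exact Y(x) = e^{-x}
xs, ys = euler(lambda x, y: -y, 0.0, 1.0, 0.5, 8)
for x, y in zip(xs, ys):
    print(f"x={x:.3f}, y(x)={math.exp(-x):.4f}, y1(x)={y:.4f}, "
          f"error={math.exp(-x) - y:.4f}")
```

With h = 0.5 each step simply multiplies y by 1 - 0.5 = 0.5, which reproduces the Euler column of the first table (1.0, 0.5, 0.25, ...).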
From the derivation we have Y(x_{n+1}) ≈ Y(x_n) + h f(x_n, Y(x_n)); for the Euler's method approximation this is written

    y_{n+1} = y_n + h f(x_n, y_n)

Values in MATLAB, step size h = 0.5 (y(x) is the exact value, y1(x) the Euler approximation):

    x        y(x)      y1(x)     error
    0.000    1.0000    1.0000    0.0000
    0.500    0.6065    0.5000    0.1065
    1.000    0.3679    0.2500    0.1179
    1.500    0.2231    0.1250    0.0981
    2.000    0.1353    0.0625    0.0728
    2.500    0.0821    0.0313    0.0508
    3.000    0.0498    0.0156    0.0342
    3.500    0.0302    0.0078    0.0224
    4.000    0.0183    0.0039    0.0144

[Graph with step size 0.5; values in MATLAB]

Values in MATLAB, step size h = 0.25:

    x        y(x)      y1(x)     error
    0.000    1.0000    1.0000    0.0000
    0.250    0.7788    0.7500    0.0288
    0.500    0.6065    0.5625    0.0440
    0.750    0.4724    0.4219    0.0505
    1.000    0.3679    0.3164    0.0515
    1.250    0.2865    0.2373    0.0492
    1.500    0.2231    0.1780    0.0452
    1.750    0.1738    0.1335    0.0403
    2.000    0.1353    0.1001    0.0352
    2.250    0.1054    0.0751    0.0303
    2.500    0.0821    0.0563    0.0258
    2.750    0.0639    0.0422    0.0217
    3.000    0.0498    0.0317    0.0181
    3.250    0.0388    0.0238    0.0150
    3.500    0.0302    0.0178    0.0124
    3.750    0.0235    0.0134    0.0102
    4.000    0.0183    0.0100    0.0083

[Graph with step size 0.25; values in MATLAB]

When the step size is halved,

    error decrement factor = (error at x when h = 0.5) / (error at x when h = 0.25)

    At x = 1.00:  0.1179 / 0.0515 ≈ 2.3
    At x = 2.00:  0.0728 / 0.0352 ≈ 2.1
    At x = 3.00:  0.0342 / 0.0181 ≈ 1.9

In each of these cases, when h is halved the error decreases by roughly a factor of 2.

When the step size is reduced by a factor of 1/3,

    error decrement factor = (error at x when h = 0.3) / (error at x when h = 0.1)

    At x = 0.3:  0.0408 / 0.0118 ≈ 3.5
    At x = 0.6:  0.0588 / 0.0174 ≈ 3.4
    At x = 0.9:  0.0636 / 0.0191 ≈ 3.3

In each of these cases, when h is reduced by a factor of 1/3 the error decreases by roughly a factor of 3. Euler's method is first order: the error is proportional to h.

Taylor Series Method

A Taylor series expresses a function as an infinite sum of terms computed from the values of the function's derivatives at a single point:

    y(z) = y(a) + y'(a)(z - a) + (y''(a)/2!)(z - a)^2 + ...

where the expansion is taken around a. A Taylor series centered at zero is known as a Maclaurin series.

The second-order Taylor approximation is

    y_{n+1} = y_n + y'_n h + (y''_n / 2!) h^2

For the test problem Y' = -Y we have Y'' = Y, so the update reduces to y_{n+1} = (1 - h + h^2/2) y_n, consistent with the values below.

Values in MATLAB, step size h = 0.5 (y1(x) is the second-order Taylor approximation):

    x        y(x)      y1(x)     error
    0.000    1.0000    1.0000    0.0000
    0.500    0.6065    0.6250    0.0185
    1.000    0.3679    0.3906    0.0227
    1.500    0.2231    0.2441    0.0210
    2.000    0.1353    0.1526    0.0173
    2.500    0.0821    0.0954    0.0133
    3.000    0.0498    0.0596    0.0098
    3.500    0.0302    0.0373    0.0071
    4.000    0.0183    0.0233    0.0050

[Graph with step size 0.5; values in MATLAB; Euler's Method vs. Taylor Series Method]

Values in MATLAB, step size h = 0.25:

    x        y(x)      y1(x)     error
    0.000    1.0000    1.0000    0.0000
    0.250    0.7788    0.7813    0.0024
    0.500    0.6065    0.6104    0.0038
    0.750    0.4724    0.4768    0.0045
    1.000    0.3679    0.3725    0.0046
    1.250    0.2865    0.2910    0.0045
    1.500    0.2231    0.2274    0.0042
    1.750    0.1738    0.1776    0.0039
    2.000    0.1353    0.1388    0.0034
    2.250    0.1054    0.1084    0.0030
    2.500    0.0821    0.0847    0.0026
    2.750    0.0639    0.0662    0.0022
    3.000    0.0498    0.0517    0.0019
    3.250    0.0388    0.0404    0.0016
    3.500    0.0302    0.0316    0.0014
    3.750    0.0235    0.0247    0.0011
    4.000    0.0183    0.0193    0.0009

[Graph with step size 0.25; values in MATLAB; Euler's Method vs. Taylor Series Method]

When the step size is halved,

    error decrement factor = (error at x when h = 0.5) / (error at x when h = 0.25)

    At x = 1.00:  0.0227 / 0.0046 ≈ 4.9
    At x = 2.00:  0.0173 / 0.0034 ≈ 5.1
    At x = 3.00:  0.0098 / 0.0019 ≈ 5.2

In each of these cases, when h is halved the error decreases by roughly the factor of 4 expected for a second-order method (the observed factors somewhat exceed 4 because h = 0.5 is not yet in the asymptotic regime).

When the step size is reduced by a factor of 1/3,

    error decrement factor = (error at x when h = 0.3) / (error at x when h = 0.1)

    At x = 0.3:  0.0042 / 0.000442 ≈ 9.5
    At x = 0.6:  0.0062 / 0.00066  ≈ 9.4
    At x = 0.9:  0.0069 / 0.0007   ≈ 9.9

In each of these cases, when h is reduced by a factor of 1/3 the error decreases by roughly a factor of 9 = 3^2, again consistent with second-order accuracy.

Runge-Kutta Method

General form:

    y_{n+1} = y_n + h F(x_n, y_n; h),   n >= 0,   y_0 = Y_0

with

    F(x, y; h) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

where F(x, y; h) can be thought of as an average slope.

To determine the values of α, β, γ_1 and γ_2, the term f(x + αh, y + βh f(x, y)) is expanded with respect to x and y. We know that

    f(x + Δx, y + Δy) = f(x, y) + Δx ∂f/∂x + Δy ∂f/∂y + O(h^2)

and in this case Δx = αh and Δy = βh f(x, y). Expanding with respect to y,

    f(x + αh, y + βh f(x, y)) = f(x + αh, y) + βh f(x, y) f_y(x + αh, y) + O(h^2)

Then expanding with respect to x as well,

    f(x + αh, y + βh f(x, y)) = f(x, y) + αh f_x(x, y) + βh f(x, y) f_y(x, y) + O(h^2)

which can be written compactly as

    f(x + αh, y + βh f) = f + f_x αh + f_y βh f + O(h^2)

where all the functions are evaluated at (x, y).
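The two-variable expansion above can be checked numerically. The sketch below (Python; the sample function f, the point (x, y), and the step sizes are arbitrary choices made for illustration) compares f(x + αh, y + βh f) against f + αh f_x + βh f f_y; since the remainder is O(h^2), halving h should cut the residual by roughly a factor of 4.

```python
import math

# Arbitrary smooth test function and its exact partial derivatives
def f(x, y):  return math.sin(x) * y + x * y * y
def fx(x, y): return math.cos(x) * y + y * y
def fy(x, y): return math.sin(x) + 2 * x * y

a, b = 1.0, 1.0          # alpha, beta from the Runge-Kutta form
x, y = 0.7, 0.4          # arbitrary evaluation point

residuals = []
for h in (0.1, 0.05, 0.025):
    exact  = f(x + a * h, y + b * h * f(x, y))
    approx = f(x, y) + a * h * fx(x, y) + b * h * f(x, y) * fy(x, y)
    residuals.append(abs(exact - approx))

# Each halving of h should cut the residual by about 4 (O(h^2) remainder)
print(residuals[0] / residuals[1], residuals[1] / residuals[2])
```

The observed ratios close to 4 confirm that the neglected terms are indeed second order in h, which is what justifies dropping them in the derivation that follows.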
To compare with the exact solution, expand Y(x + h) in a Taylor series:

    Y(x + h) = Y + h Y' + (h^2/2) Y'' + O(h^3)

We know that Y'(x) = f(x, Y(x)), so

    Y''(x) = d/dx Y'(x) = d/dx f(x, Y(x)) = ∂f/∂x + (∂f/∂y)(dY/dx)

that is,

    Y'' = f_x + f_y f

Substituting Y'' into the expansion gives

    Y(x + h) = Y + h f + (h^2/2)(f_x + f_y f) + O(h^3)

The truncation error is

    T = Y(x + h) - [Y(x) + h F(x, Y(x); h)]

Using

    F(x, y; h) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

and the expansion f(x + αh, y + βh f) = f + f_x αh + f_y βh f + O(h^2), and replacing these values,

    T = Y + h f + (h^2/2)(f_x + f_y f) - [Y + h γ_1 f + γ_2 h (f + f_x αh + f_y βh f)] + O(h^3)

Rearranging,

    T = h (1 - γ_1 - γ_2) f + (h^2/2) [ (1 - 2γ_2 α) f_x + (1 - 2γ_2 β) f_y f ] + O(h^3)

For the method to be second order, the coefficients must satisfy

    1 - γ_1 - γ_2 = 0,   1 - 2γ_2 α = 0,   1 - 2γ_2 β = 0

Therefore

    γ_2 ≠ 0,   γ_1 = 1 - γ_2,   α = β = 1/(2γ_2)

Choosing the value γ_2 = 1/2, we get γ_1 = 1/2 and α = β = 1. Using these values, we obtain the numerical method

    y_{n+1} = y_n + (h/2) [ f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n)) ]

which is known as Heun's method.

[Values in MATLAB]

Stability

Stability of an initial value problem means that a small change in the initial value leads to a small change in the solution. For the stability analysis, consider the model problem

    Y' = λ Y,   Y(0) = 1

whose exact solution is Y(x) = e^{λx}.

The kind of stability property that we would like for a numerical method is that, when it is applied to the model problem with λ < 0, the numerical solution satisfies

    y_h(x_n) → 0  as  x_n → ∞

for any choice of the step size h. This property is called absolute stability.
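Before analyzing stability, here is a sketch of Heun's method derived above (Python; the same inferred test problem Y' = -Y, Y(0) = 1 is assumed):

```python
import math

def heun(f, x0, y0, h, n_steps):
    """Heun's method: y_{n+1} = y_n + (h/2)*(k1 + k2),
    with k1 = f(x_n, y_n) and k2 = f(x_n + h, y_n + h*k1)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        ys.append(y + 0.5 * h * (k1 + k2))
        xs.append(x + h)
    return xs, ys

xs, ys = heun(lambda x, y: -y, 0.0, 1.0, 0.5, 8)
err_at_1 = ys[2] - math.exp(-1.0)
print(f"{err_at_1:.4f}")  # -> 0.0227
```

For this linear problem the Heun update reduces to multiplication by 1 - h + h^2/2, the same factor as the second-order Taylor method, so the error at x = 1 with h = 0.5 matches the Taylor table value (≈ 0.0227).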
Stability of Euler's Method

From Euler's method we have the formula y_{n+1} = y_n + h f(x_n, y_n). Applying Euler's method to the model problem Y' = λY gives

    y_{n+1} = (1 + hλ) y_n

We see that y_1 = (1 + hλ) y_0 and y_2 = (1 + hλ) y_1 = (1 + hλ)^2 y_0, since y_0 = 1. By induction,

    y_n = (1 + hλ)^n

For the stability property we would like y_n → 0 as n → ∞, which holds if and only if |1 + hλ| < 1. Therefore

    |1 + hλ| < 1   (stable)
    |1 + hλ| >= 1  (unstable)

Expanding the stable condition,

    -1 < 1 + hλ < 1
    -2 < hλ < 0

Taking the model problem Y'(x) = λ Y(x), Y(0) = 1, whose true solution is Y(x) = e^{λx}, let λ = -10. Then -2 < hλ < 0 becomes -2 < -10h < 0, i.e. 0 < h < 1/5. Therefore any step size between 0 and 0.2 will be stable.

[Graphs: step size h = 1/16; step sizes h = 0.25, 0.5, 0.75]

Backward Euler's Method

In the derivation of Euler's method the forward difference approximation is used, i.e.

    Y'(x) ≈ (1/h) [Y(x + h) - Y(x)]

In the backward Euler's method, the backward difference approximation is used instead:

    Y'(x) ≈ (1/h) [Y(x) - Y(x - h)]

Therefore the differential equation Y'(x) at x = x_n is discretized as

    y_n = y_{n-1} + h f(x_n, y_n)

Shifting the index by 1, we obtain the backward Euler's method

    y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})

Applying the model problem Y' = λY, the backward Euler's method becomes

    y_{n+1} = y_n + hλ y_{n+1}

Solving for y_1: y_1 = y_0 + hλ y_1, so (1 - hλ) y_1 = y_0 and y_1 = (1 - hλ)^{-1} y_0. Similarly y_2 = (1 - hλ)^{-1} y_1 = (1 - hλ)^{-2} y_0. Since y_0 = 1, by induction

    y_n = (1 - hλ)^{-n} = (1 / (1 - hλ))^n

We see that y_n → 0 as n → ∞ if and only if |1 / (1 - hλ)| < 1. Therefore

    |1 - hλ| > 1   (stable)
    |1 - hλ| <= 1  (unstable)

The unstable condition expands as

    -1 <= 1 - hλ <= 1,   i.e.   0 <= hλ <= 2

The stable condition |1 - hλ| > 1 splits into two cases: either 1 - hλ > 1, which gives hλ < 0, or 1 - hλ < -1, which gives hλ > 2.

Taking the model problem Y'(x) = λ Y(x), Y(0) = 1, x > 0, with true solution Y(x) = e^{λx}, let λ = -10. The case hλ > 2 gives -10h > 2, i.e. h < -1/5, which is impossible for a positive step size; the case hλ < 0 gives -10h < 0, i.e. h > 0. Therefore, when λ = -10, any positive value of h gives stable results.

[Graphs: values for step sizes h = 0.2, 0.4, 0.6, 0.8]

Trapezoidal Method

We integrate the differential equation Y'(x) = f(x, Y(x)) from x_n to x_{n+1}. This means

    Y(x_{n+1}) = Y(x_n) + ∫ from x_n to x_{n+1} of f(x, Y(x)) dx   (1.1)

The trapezoidal rule is

    ∫ from a to b of f(x) dx ≈ (b - a) (f(a) + f(b)) / 2

so equation (1.1) transforms to

    Y(x_{n+1}) ≈ Y(x_n) + (h/2) [ f(x_n, Y(x_n)) + f(x_{n+1}, Y(x_{n+1})) ]

Hence we obtain the general formula for the trapezoidal method:

    y_{n+1} = y_n + (h/2) f(x_n, y_n) + (h/2) f(x_{n+1}, y_{n+1})

Applying the trapezoidal method to Y' = λY with Y(0) = 1, we get

    y_{n+1} = y_n + (hλ/2) y_n + (hλ/2) y_{n+1}

Regrouping the terms,

    y_{n+1} = [ (1 + hλ/2) / (1 - hλ/2) ] y_n

Then y_1 = [(1 + hλ/2)/(1 - hλ/2)] y_0 and y_2 = [(1 + hλ/2)/(1 - hλ/2)]^2 y_0; since y_0 = 1, by induction

    y_n = [ (1 + hλ/2) / (1 - hλ/2) ]^n

We see that y_n → 0 as n → ∞ if and only if

    | (1 + hλ/2) / (1 - hλ/2) | < 1

Therefore it is stable when |1 + hλ/2| < |1 - hλ/2| and unstable when |1 + hλ/2| >= |1 - hλ/2|. Expanding the stable condition,

    -1 < (1 + hλ/2) / (1 - hλ/2) < 1

Taking the right-hand inequality (valid since 1 - hλ/2 > 0 when hλ < 2),

    1 + hλ/2 < 1 - hλ/2
    hλ < 0

Since h cannot be negative, the method is stable for every negative λ, regardless of the step size.

Taking the model problem Y'(x) = λ Y(x), Y(0) = 1, with true solution Y(x) = e^{λx}, let λ = -10. The stable condition hλ < 0 becomes -10h < 0, which holds for every h > 0. So, for any value of h, the result will be stable in this case.
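The three stability results can be compared directly on the model problem. The sketch below (Python) takes λ = -10 and h = 0.5, so hλ = -5 lies outside Euler's stability interval (-2, 0) but inside the stability regions of the two implicit methods; for this linear problem each update is just multiplication by a fixed amplification factor, so the implicit updates can be written in closed form.

```python
# Stability demo on the model problem Y' = lam*Y, Y(0) = 1, with lam = -10.
lam, h = -10.0, 0.5          # h*lam = -5, outside Euler's interval (-2, 0)

forward  = 1 + h * lam                              # Euler factor: -4
backward = 1 / (1 - h * lam)                        # backward Euler factor: 1/6
trapez   = (1 + h * lam / 2) / (1 - h * lam / 2)    # trapezoidal factor: -3/7

def iterate(factor, n):
    """Apply y <- factor * y starting from y = 1, n times."""
    y = 1.0
    for _ in range(n):
        y *= factor
    return y

print(abs(iterate(forward, 10)))   # grows without bound (unstable)
print(abs(iterate(backward, 10)))  # decays toward 0 (stable)
print(abs(iterate(trapez, 10)))    # decays toward 0 (stable)
```

The forward Euler iterates grow like 4^n while both implicit methods decay, matching the stability regions derived above.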
Future Work
- In-depth analysis of the stability of differential equations
- Multistep methods
- Stability of nonlinear problems