Lesson 01 – Introduction and Fundamentals

Introduction

Why Numerical Methods?
Numerical methods aim to give a numerical solution to a mathematical problem. They are used in two cases:
- when the mathematical problem is too complex (no closed-form solution exists)
- when it is too cumbersome or lengthy to derive a closed-form solution
Numerical solutions are always approximations of the actual solution. We therefore have to evaluate the quality of such approximations, i.e. give an upper bound on the difference between our approximation and the actual mathematical solution.

Which Problems Will Be Covered?
- Nonlinear equations
- Systems of linear equations
- Regression and interpolation
- Introduction to machine learning
- Numerical differentiation
- Numerical integration
- Initial value problems
- How to evaluate the quality of the numerical approximation produced

What Are Numerical Methods?
1st Component
Numerical methods are algorithms:
- a precise set of rules to follow
- they generate a numerical approximation of the solution of the mathematical problem
- they do not tell how good or poor the approximation is
2nd Component
Numerical methods need a method to estimate the error between the generated approximation and the exact mathematical solution:
- without an error estimation, an algorithm is not better than a guess
- depending on the type of mathematical model, the methodology for estimating errors will differ

Algorithm Example
Algorithm for brushing teeth:
1. Check the toothpaste tube for toothpaste
2. If the tube has toothpaste, unscrew the cap
3. Pick up the toothbrush
4. Turn on the faucet for water
5. Run the toothbrush under the faucet
6. Add toothpaste to the toothbrush bristles
7. Replace the cap on the toothpaste
8. Open your mouth
9. Place the toothbrush bristle side towards your teeth and move in an up-and-down motion
Quality control step: check your work in the mirror.

Most Algorithms Are Iterative
Starting from an initial guess x_0, which you have to choose yourself, the algorithm produces successive approximations x_1, x_2, ..., x_i, ... There is no built-in rule for when to stop.

Quality Control of Answer
[Flowchart] In each pass the algorithm turns the current approximation x_i into a new approximation x_{i+1}, and the error of x_{i+1} is estimated. If Error > Precision, another iteration is performed; once Error < Precision, the approximation is accepted.

Quality Control of Answer – Target Precision Varies
How small is the target precision? There is no universal value for this precision: a mechanical engineer calculating a specific dimension of a mechanical part and an electrical engineer simulating a circuit will require very different precisions. The target precision is an input to the problem we want to solve numerically.

Quality Control of Answer – Control Step
There is not one single way to perform the control step (Error > Precision: iterate again; Error < Precision: stop). Applying the algorithm is simply a matter of using numerical software correctly, whereas the correct analysis of the error of the produced approximation is a complete engineering analysis problem.

Errors
In this course, by error we refer to the difference between the actual mathematical solution r and the numerical approximation x_r computed by the algorithm. This error can be measured either as an absolute error or as a relative error.
Example: calculate a distance with a precision of 1 mm. Depending on the distance being measured, a precision of 1 mm can be very challenging or much less challenging; whether an error is "small" depends on the scale of the quantity, which is what the relative error captures.

Errors – Further Distinction
True Errors
True errors are the difference between the actual mathematical solution r and the numerical approximation x_r. In a real situation the true error is never known, because the true solution r is unknown.
Estimated Errors
Estimated errors are conservative estimations (bounds) of the true errors. We distinguish between:
- the estimated absolute error E_a, which bounds the difference between the mathematical solution and the approximation: E_a >= |x_r - r|
- the estimated relative error e_a, which compares the same difference with a typical order of magnitude, generally the true solution: e_a >= |x_r - r| / |r|
In this course we will learn how these errors can be estimated.
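As a small illustration of these two measures, here is a toy Octave example (the variable names are ours; the true solution r is taken to be pi, which is only possible here because r happens to be known):

    r  = pi;                        % "true" solution (known only in this toy example)
    xr = 3.14;                      % numerical approximation with 3 significant digits
    abs_err = abs(xr - r)           % absolute error, here about 1.6e-3
    rel_err = abs(xr - r) / abs(r)  % relative error, here about 5.1e-4

In a real computation r is unknown, which is why the estimated errors above are defined as bounds rather than exact values.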
Source of Errors
- Errors that arise when developing a mathematical model: errors in mathematical modelling, blunders (bugs), errors in inputs
- Errors specific to numerical methods: round-off errors, truncation errors

Summary
Numerical algorithms:
- compute a numerical approximation of the solution of a mathematical problem
- are in general iterative
- do not provide an estimation of how good or bad the approximation is
Numerical algorithms come with two specific sources of errors:
- round-off errors
- truncation errors
Strategies to estimate the error of a numerical approximation will have to be developed.

Round-Off Errors

Digital Numbers as Approximations
When performing numerical calculations we use digital numbers:
- digital numbers are approximations of actual mathematical quantities
- digital numbers contain a finite number of digits
- the digits that we consider to be correct are called significant digits
Example: 3.14 has 3 significant digits as an approximation of pi.
Depending on the physical unit used, a digital number with the same number of significant digits can be written differently:
1043 mm, 1.043 m, 1.043 × 10^6 µm, 0.001043 km
(the leading zeros in 0.001043 km are not significant digits).

Handling Numbers in a Computer
Computers use a base-2 representation for numbers, called binary numbers. For example, the binary number 1001 converts to:
1 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 9
Numbers such as √2, π, or e cannot be expressed by a finite sequence of 0s and 1s. Computers typically store about 16 significant decimal digits (double precision). The difference between the mathematical number and the number stored in the computer is called the round-off error.

Round-Off Errors
The origin of round-off errors is the fact that a computer can only store a finite number of digits. The consequences of round-off errors can be quite dramatic.

Example 1
Start with 2.6 and progressively add 0.2. Consider the following sequence of simple calculations in Octave:
Octave> a=2.6
a = 2.6000
Octave> a=a+0.2
a = 2.8000
Octave> a=a+0.2
a = 3.0000
Octave> a=a+0.2
a = 3.2000
Now let us compute a-3.2:
Octave> a-3.2
ans = 4.4409e-16
Mathematically the answer should be exactly zero, so the computed answer is wrong!

Example 1 – Why This Error?
The reason is the way the number 0.2 is represented in binary (64-bit double precision):
0011111111001001100110011001100110011001100110011001100110011010
The binary expansion of 0.2 is infinite and periodic (the sequence 0011 repeats forever), so it has to be cut off and rounded to the finite number of bits shown above: a round-off error. This is similar to the well-known fact that 1/3 = 0.3333... cannot be written with a finite number of decimal digits.

Example 2
Consider the following two functions:
f(x) = x (√(x+1) − √x)
g(x) = x / (√(x+1) + √x)
Mathematically it is simple to prove that f(x) = g(x) for all x where the functions are defined.

Example 2 – Numerically
Let's evaluate the functions numerically, using anonymous function definitions:
Octave> f = @(x) x*(sqrt(x+1)-sqrt(x))
f = @(x) x * (sqrt (x + 1) - sqrt (x))
Octave> g = @(x) x/(sqrt(x+1)+sqrt(x))
g = @(x) x / (sqrt (x + 1) + sqrt (x))
Octave> f(500)-g(500)
ans = -3.4639e-13
The reason is round-off errors: both f(500) and g(500) are approximations.
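To see how strongly round-off affects f compared with g, here is a small sketch (the sample values of x are chosen arbitrarily). Mathematically every printed difference should be exactly zero; g, which avoids subtracting two nearly equal numbers, is the numerically preferable form:

    f = @(x) x .* (sqrt(x+1) - sqrt(x));   % subtracts two nearly equal numbers for large x
    g = @(x) x ./ (sqrt(x+1) + sqrt(x));   % algebraically identical, no cancellation
    for x = [5e2, 5e6, 5e10, 5e14]
      printf("x = %9.2e   f(x)-g(x) = %e\n", x, f(x) - g(x));
    endfor

The discrepancy grows with x because the difference √(x+1) − √x loses more and more significant digits.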
Example 3
Consider the following equation:
x^2 + 9^12 x − 3 = 0
The true mathematical solution is given by the quadratic formula:
x_{1,2} = (−b ± √(b^2 − 4ac)) / (2a)

Example 3 – Numerically
Numerically we find:
Octave> a=1; b=9^12; c=-3;
Octave> x1=(-b+sqrt(b^2-4*a*c))/(2*a)
x1 = 0
Octave> x2=(-b-sqrt(b^2-4*a*c))/(2*a)
x2 = -2.8243e+11
However, x = 0 is NOT a solution of the equation x^2 + 9^12 x − 3 = 0.

Example 3 – Why This Error?
Why do we get such a wrong answer? The reason is that in our case, with a = 1, c = −3 and the huge coefficient b = 9^12,
b^2 − 4ac ≅ b^2, and therefore √(b^2 − 4ac) ≅ b.
We lose all significant digits when computing
−b + √(b^2 − 4ac) ≅ −b + b = 0.
This effect is called loss of significance.

Summary
- Computers can store only a finite number of digits.
- The error introduced by this fact is called round-off error.
- Round-off errors are present in every numerical calculation.
- A dramatic consequence can be loss of significance.

Truncation Errors

Numerical algorithms aim to solve a mathematical problem numerically even if no closed-form solution is known. A common approach is to replace the original problem by a simpler one with a known solution. The difference between the solution of the actual problem one aims to solve and the solution of the simpler problem is called the truncation error.

Taylor Series – Equation
An important tool used to estimate truncation errors is the Taylor series expansion. For a function of one variable one has:
f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + ...

Taylor Series – Graph
The Taylor series expansion has a very intuitive interpretation.
[Graph of y = f(x): at x + h the exact value f(x + h) is compared with the zero-order approximation f(x), the first-order approximation f(x) + h f'(x), and the second-order approximation f(x) + h f'(x) + (h^2/2!) f''(x).]

Taylor Series – Order of Approximation
Depending on the number of terms included in the Taylor series expansion, one refers to different orders of approximation:
- zero order: f(x + h) ≅ f(x)
- first order: f(x + h) ≅ f(x) + h f'(x)
- second order: f(x + h) ≅ f(x) + h f'(x) + (h^2/2!) f''(x)

Truncation Error of Taylor Series
The difference between f(x + h) and its approximation from the Taylor series expansion is the truncation error. This error becomes smaller as one includes more terms; it also becomes smaller if we choose smaller values of h. The truncation error equals the remainder of the Taylor series, where c is some (unknown) point between x and x + h:
- zero order: f(x + h) − f(x) = f'(c) h ≅ f'(x) h
- first order: f(x + h) − f(x) − h f'(x) = (f''(c)/2!) h^2 ≅ (f''(x)/2!) h^2
- second order: f(x + h) − f(x) − h f'(x) − (h^2/2!) f''(x) = (f'''(c)/3!) h^3 ≅ (f'''(x)/3!) h^3

Big "O" Notation
The truncation error is commonly written using the big "O" notation, which indicates its order:
- zero order: f(x + h) = f(x) + f'(c) h,
  written as f(x + h) = f(x) + O(h): the truncation error is of first order
- first order: f(x + h) = f(x) + h f'(x) + (f''(c)/2!) h^2,
  written as f(x + h) = f(x) + h f'(x) + O(h^2): the truncation error is of second order
- second order: f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (f'''(c)/3!) h^3,
  written as f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + O(h^3): the truncation error is of third order

Example – Cosine Function
To get cos(x) for small x:
cos(x) = 1 − x^2/2! + x^4/4! − x^6/6! + ...
The terms of order one, three, five and seven are equal to zero, so the leading omitted term after x^6/6! is of order eight. For example, for x = 0.5:
cos(0.5) ≅ 1 − 0.125 + 0.0026042 − 0.0000217 ≅ 0.877582
Truncation error E:
E ≅ 0.5^8 / 8! ≅ 9.7 × 10^-8
(Use your calculator in radian mode!)
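A quick numerical check of this worked example in Octave (a small sketch, with variable names of our choosing, that uses the built-in cos as the reference for the true solution):

    x = 0.5;
    approx   = 1 - x^2/2 + x^4/factorial(4) - x^6/factorial(6);  % Taylor series truncated after the x^6 term
    true_err = abs(cos(x) - approx)    % actual truncation error, about 9.7e-8
    est_err  = x^8/factorial(8)        % leading omitted term, the estimate used above

The two values agree closely, as the remainder of the Taylor series predicts.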
Summary
- Truncation errors are the difference between the solution of the actual mathematical problem one wants to solve and the solution of a simplified problem.
- The Taylor series is a common tool to estimate truncation errors.
- The truncation error is decreased by adding terms to the Taylor series.
- The big "O" notation is commonly used to indicate the order of the truncation error.

Key Points of Lesson 1

Using Numerical Algorithms – Wrap-up
[Flowchart] The algorithm turns the approximation x_i into x_{i+1}; if Error > Precision another iteration is performed, and once Error < Precision the approximation is accepted. The algorithm itself does not tell anything about how good or how poor this approximation is.

Numerical Algorithms
- If applied under the right conditions, a numerical algorithm will produce in each iteration i an improved approximation x_i of the solution r of the mathematical problem one aims to solve.
- In each iteration one has to estimate the error r − x_i in order to decide whether the desired precision is reached or not (a minimal sketch of such an iteration loop is given at the end of this lesson).
- The error r − x_i is made up of two components:
Total Error = Truncation Error + Round-Off Error

Typical Behaviour of Total Numerical Error
[Graph: log of the total error versus the number of iterations; the truncation error decreases as the number of iterations grows, while round-off errors accumulate, so the total error typically passes through a minimum.]
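To close, here is a minimal Octave sketch of the generic iteration loop described in the wrap-up. The update rule and the error estimate are placeholders chosen purely for illustration (one Newton step for √2 and the residual |x^2 − 2|); the following lessons introduce the actual methods and error estimators:

    precision   = 1e-10;              % target precision: an input to the problem
    x           = 1;                  % starting guess x0, chosen by the user
    next_approx = @(x) x/2 + 1/x;     % placeholder update rule (illustration only)
    est_error   = @(x) abs(x^2 - 2);  % placeholder error estimate (illustration only)

    while est_error(x) > precision    % Error > Precision: keep iterating
      x = next_approx(x);
    endwhile
    x                                 % Error < Precision: accept the approximation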