INSTRUCTIONAL MATERIAL
College of Engineering and Technology
1st Term; Academic Year 2022-2023
Module 4
CURVE FITTING AND INTERPOLATION
MATH 315: NUMERICAL SOLUTION TO CE PROBLEMS
ENGR. FREDALYN D. IGADNA
2022
IFSU-LAG-INS-F008
Rev.00(Jan.03,2022)
CONTENTS
▪ Polynomial Interpolation
▪ Cubic and Quadratic Spline Interpolation
▪ Curve Fitting by Function Approximation
▪ Least squares Fit
CHAPTER 4
CURVE FITTING AND INTERPOLATION
Learning Outcomes
▪ Interpolate spaced data points to determine a function by different techniques and methods
▪ Find the polynomial that fits a given set of data
Lesson Proper
I. CURVE FITTING
• The technique of fitting curves to obtain intermediate estimates and to compute values of the function at a number of discrete points along the range of interest.
• The process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints.
• Also known as regression analysis; it is used to find the "best fit" line or curve for a series of data points. Most of the time, the curve fit produces an equation that can be used to find points anywhere along the curve.
II. LEAST SQUARES FIT
A. Linear Regression
The simplest example of a least-squares approximation is fitting a straight line to a
set of paired observations: (x1, y1), (x2, y2), . . . , (xn, yn). The mathematical
expression for the straight line is

y = a0 + a1x + e

where a0 and a1 are coefficients representing the intercept and the slope,
respectively, and e is the error, or residual, between the model and the observations.
The error, or residual, is the discrepancy between the true value of y and the
approximate value, a0 + a1x, predicted by the linear equation. The least-squares
criterion minimizes the sum of the squares of these residuals:

Sr = Σ(yi − a0 − a1xi)²

To determine values for a0 and a1, Sr is differentiated with respect to each
coefficient:

∂Sr/∂a0 = −2 Σ(yi − a0 − a1xi)
∂Sr/∂a1 = −2 Σ[(yi − a0 − a1xi)xi]

Note that we have simplified the summation symbols; unless otherwise indicated, all
summations are from i = 1 to n. Setting these derivatives equal to zero will result in
a minimum Sr. If this is done, the equations can be expressed as

0 = Σyi − Σa0 − Σa1xi
0 = Σxiyi − Σa0xi − Σa1xi²
Now, realizing that Σa0 = na0, we can express the equations as a set of two
simultaneous linear equations with two unknowns (a0 and a1):

n·a0 + (Σxi)a1 = Σyi
(Σxi)a0 + (Σxi²)a1 = Σxiyi
These are called the normal equations. They can be solved simultaneously for the
slope:

a1 = (n Σxiyi − Σxi Σyi) / (n Σxi² − (Σxi)²)

This result can then be used in conjunction with the previous equations to solve for
the intercept:

a0 = ȳ − a1x̄

where ȳ and x̄ are the means of y and x, respectively.
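The closed-form solution above can be sketched directly in code. This is an illustrative example, not part of the original module; the function name and the synthetic data are the author's own.

```python
# Least-squares straight line y = a0 + a1*x, computed from the
# closed-form solution of the normal equations derived above.

def linear_fit(x, y):
    """Return (a0, a1) for the least-squares line y = a0 + a1*x."""
    n = len(x)
    sx = sum(x)                                    # Σxi
    sy = sum(y)                                    # Σyi
    sxy = sum(xi * yi for xi, yi in zip(x, y))     # Σxiyi
    sxx = sum(xi * xi for xi in x)                 # Σxi²
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * (sx / n)                    # a0 = ybar - a1*xbar
    return a0, a1

# Synthetic data lying exactly on y = 1 + 2x:
a0, a1 = linear_fit([0, 1, 2, 3], [1, 3, 5, 7])
# a0 -> 1.0, a1 -> 2.0
```

Because the data here lie exactly on a line, the residuals are zero and the fit recovers the intercept and slope exactly.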
Example 4.1 Fit a straight line to the x and y values in the first two columns of
the given data table.

Solution. Compute the summations Σxi, Σyi, Σxiyi, and Σxi² from the data,
substitute them into the equations above to obtain a1 and a0, and write the
least-squares fit y = a0 + a1x. The line can then be plotted along with the data.
B. POLYNOMIAL REGRESSION
Some engineering data, although exhibiting a marked pattern, are poorly
represented by a straight line. For these cases, a curve would be better suited to fit
the data. One alternative is to fit polynomials to the data using polynomial regression.
The least-squares procedure can be readily extended to fit the data to a higher-order
polynomial. For example, suppose that we fit a second-order polynomial, or
quadratic:

y = a0 + a1x + a2x² + e
For this case, the sum of the squares of the residuals is

Sr = Σ(yi − a0 − a1xi − a2xi²)²
Following the previous procedure, we take the derivative with respect to each of the
unknown coefficients of the polynomial:

∂Sr/∂a0 = −2 Σ(yi − a0 − a1xi − a2xi²)
∂Sr/∂a1 = −2 Σ xi(yi − a0 − a1xi − a2xi²)
∂Sr/∂a2 = −2 Σ xi²(yi − a0 − a1xi − a2xi²)

These equations can be set equal to zero and rearranged to develop the following set
of normal equations:

n·a0 + (Σxi)a1 + (Σxi²)a2 = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 = Σxiyi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 = Σxi²yi
where all summations are from i = 1 through n. Note that the above three equations
are linear and have three unknowns: a0, a1, and a2. The coefficients of the
unknowns can be calculated directly from the observed data.
For this case, we see that the problem of determining a least-squares second-order
polynomial is equivalent to solving a system of three simultaneous linear equations.
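The three normal equations can be assembled and solved numerically. The following is a minimal sketch (not from the original module) that uses NumPy's linear solver; the function name and test data are assumptions for illustration.

```python
import numpy as np

def quadratic_fit(x, y):
    """Least-squares quadratic y = a0 + a1*x + a2*x^2 via the normal equations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Coefficient matrix of the three normal equations above
    A = np.array([
        [n,            x.sum(),       (x**2).sum()],
        [x.sum(),      (x**2).sum(),  (x**3).sum()],
        [(x**2).sum(), (x**3).sum(),  (x**4).sum()],
    ])
    # Right-hand side: Σyi, Σxiyi, Σxi²yi
    b = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, b)   # [a0, a1, a2]

# Synthetic data lying exactly on y = 1 - x + 2x²:
coeffs = quadratic_fit([0, 1, 2, 3], [1, 2, 7, 16])
# coeffs ≈ [1.0, -1.0, 2.0]
```

Note that forming the normal equations explicitly, as done here to mirror the derivation, can become ill-conditioned for high polynomial orders; production code typically uses a QR-based routine such as `numpy.polyfit` instead.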
The two-dimensional case can be easily extended to an mth-order polynomial:

y = a0 + a1x + a2x² + · · · + amx^m + e
The foregoing analysis can be easily extended to this more general case. Thus, we
can recognize that determining the coefficients of an mth-order polynomial is
equivalent to solving a system of m + 1 simultaneous linear equations. For this case,
the standard error is formulated as

s(y/x) = √( Sr / (n − (m + 1)) )

This quantity is divided by n − (m + 1) because the m + 1 data-derived coefficients
(a0, a1, . . . , am) were used to compute Sr; thus, we have lost m + 1 degrees of
freedom.
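The standard-error formula can be evaluated in a few lines once the coefficients are known. This sketch is the author's illustration (not from the module); the function name and the example values are assumptions.

```python
import numpy as np

def standard_error(x, y, coeffs):
    """s(y/x) = sqrt(Sr / (n - (m + 1))) for an mth-order polynomial fit,
    given coeffs = [a0, a1, ..., am]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = len(coeffs) - 1
    # Model prediction a0 + a1*x + ... + am*x^m
    yhat = sum(a * x**k for k, a in enumerate(coeffs))
    Sr = np.sum((y - yhat) ** 2)                 # sum of squared residuals
    return np.sqrt(Sr / (len(x) - (m + 1)))

# With line y = 1 + 2x and one point off by 1, Sr = 1, n - (m+1) = 2:
se = standard_error([0, 1, 2, 3], [1, 3, 5, 8], [1, 2])
# se = sqrt(1/2) ≈ 0.7071
```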
Example 4.2 Fit a second-order polynomial to the data in the first two columns of
the given data table.

Solution. From the given data, compute the summations and substitute them into
the three normal equations. Solving these equations through a technique such as
Gauss elimination gives a0 = 2.47857, a1 = 2.35929, and a2 = 1.86071. Therefore,
the least-squares quadratic for this case is

y = 2.47857 + 2.35929x + 1.86071x²
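The fitted quadratic can then be evaluated at any x within the range of the data. A minimal sketch using the coefficients obtained above (the function name is an assumption for illustration):

```python
# Evaluating the least-squares quadratic from Example 4.2.
# Horner's form avoids recomputing powers of x.

def f(x):
    a0, a1, a2 = 2.47857, 2.35929, 1.86071
    return a0 + x * (a1 + x * a2)

f(1.0)  # 2.47857 + 2.35929 + 1.86071 = 6.69857
```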
Assessment Tasks
Note: To be given in class