Numerical Methods for Eng [ENGR 391]
[Lyes KADEM 2007]
CHAPTER V
Interpolation and Regression
Topics
Interpolation
Direct Method; Newton’s Divided Difference; Lagrangian Interpolation; Spline Interpolation.
Regression
Linear and non-linear.
1. What is interpolation?
A function y = f(x) is often given only at discrete points such as (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}), (x_n, y_n). How does one find the value of y at any other value of x?

Well, a continuous function f(x) may be used to represent the n+1 data values, with f(x) passing through the n+1 points. Then we can find the value of y at any other value of x. This is called interpolation. Of course, if x falls outside the range of x for which the data is given, it is no longer interpolation, but instead is called extrapolation.



So what kind of function f(x) should we choose? A polynomial is a common choice for an interpolating function because polynomials are easy to
- Evaluate
- Differentiate, and
- Integrate
as opposed to other choices such as a sine or exponential series.
Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’
points. One of the methods is called the direct method of interpolation. Other methods include
Newton’s divided difference polynomial method and Lagrangian interpolation method.
Figure 5.1. A continuous function f(x) interpolating the data points (x0, y0), (x1, y1), (x2, y2), and (x3, y3).
1.2. Direct Method
The direct method of interpolation is based on the following principle. If we have 'n+1' data
points, fit a polynomial of order 'n' as given below
\[
y = a_0 + a_1 x + \dots + a_n x^n \qquad (1)
\]

through the data, where a_0, a_1, ..., a_n are n+1 real constants. Since n+1 values of y are given at n+1 values of x, one can write n+1 equations. Then the n+1 constants a_0, a_1, ..., a_n can be found by solving the n+1 simultaneous linear equations (Ahaaa!!! do you remember the previous course!!!). To find the value of y at a given value of x, simply substitute that value of x into the polynomial.
But, it is not necessary to use all the data points. How does one then choose the order of the
polynomial and what data points to use? This concept and the direct method of interpolation are
best illustrated using an example.
1.2.1. Example
The upward velocity of a rocket is given as a function of time in Table 1.
Table 1. Velocity as a function of time
t [s]     v(t) [m/s]
0         0
10        227.04
15        362.78
20        517.35
22.5      602.97
30        901.67
1. Determine the value of the velocity at t = 16 s using the direct method and a first order polynomial.
2. Determine the value of the velocity at t = 16 s using the direct method and a third order polynomial.
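As an illustration, here is a short Python sketch (assuming NumPy is available) of the third-order direct-method calculation; it uses the four tabulated points that bracket t = 16 s, which is an assumption of the sketch about which points to pick.

```python
import numpy as np

# Rocket data from Table 1 (the four points bracketing t = 16 s)
t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])

# Direct method: v(t) = a0 + a1*t + a2*t^2 + a3*t^3
# Build the Vandermonde matrix and solve the 4 simultaneous linear equations.
A = np.vander(t, N=4, increasing=True)   # columns: 1, t, t^2, t^3
a = np.linalg.solve(A, v)                # a = [a0, a1, a2, a3]

# Evaluate the cubic at t = 16 s
t_query = 16.0
v16 = sum(a[k] * t_query**k for k in range(4))
print(f"v(16) = {v16:.2f} m/s")
```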
Figure 5.2. Velocity vs. time data for the rocket example.
1.3. Newton’s divided difference interpolation
To illustrate this method, we will start with linear and quadratic interpolation; then the general form of Newton's Divided Difference Polynomial method will be presented.
1.3.1. Linear interpolation
Given (x_0, y_0) and (x_1, y_1), fit a linear interpolant through the data. Note that y_0 = f(x_0) and y_1 = f(x_1). Assuming a linear interpolant of the form

\[
f_1(x) = b_0 + b_1 (x - x_0)
\]

then at x = x_0:

\[
f_1(x_0) = f(x_0) = b_0 + b_1 (x_0 - x_0) = b_0
\]

and at x = x_1:

\[
f_1(x_1) = f(x_1) = b_0 + b_1 (x_1 - x_0) = f(x_0) + b_1 (x_1 - x_0)
\]

Then

\[
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]

so

\[
b_0 = f(x_0), \qquad b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]

And the linear interpolant,

\[
f_1(x) = b_0 + b_1 (x - x_0)
\]
becomes

\[
f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0)
\]
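As a quick numerical check of this formula, here is a minimal Python sketch applied to the rocket data, using the two tabulated points that bracket t = 16 s (the choice of bracket is an assumption of the sketch).

```python
# Linear Newton interpolant through (x0, y0) and (x1, y1), applied to the
# rocket data points that bracket t = 16 s.
x0, y0 = 15.0, 362.78
x1, y1 = 20.0, 517.35

b0 = y0
b1 = (y1 - y0) / (x1 - x0)       # slope between the two points

t = 16.0
v16 = b0 + b1 * (t - x0)         # f1(x) = b0 + b1 (x - x0)
print(f"v(16) = {v16:.2f} m/s")  # = 393.69 m/s for these two points
```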
1.3.2. Quadratic interpolation
Given (x_0, y_0), (x_1, y_1), and (x_2, y_2), fit a quadratic interpolant through the data. Note that y = f(x), y_0 = f(x_0), y_1 = f(x_1), and y_2 = f(x_2). Assume the quadratic interpolant f_2(x) is given by

\[
f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)
\]
At x = x_0:

\[
f(x_0) = f_2(x_0) = b_0 + b_1 (x_0 - x_0) + b_2 (x_0 - x_0)(x_0 - x_1) = b_0
\]
\[
b_0 = f(x_0)
\]

At x = x_1:

\[
f(x_1) = f_2(x_1) = b_0 + b_1 (x_1 - x_0) + b_2 (x_1 - x_0)(x_1 - x_1)
\]
\[
f(x_1) = f(x_0) + b_1 (x_1 - x_0)
\]

then

\[
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]

At x = x_2:

\[
f(x_2) = f_2(x_2) = b_0 + b_1 (x_2 - x_0) + b_2 (x_2 - x_0)(x_2 - x_1)
\]
\[
f(x_2) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x_2 - x_0) + b_2 (x_2 - x_0)(x_2 - x_1)
\]

then

\[
b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]

Hence the quadratic interpolant is given by

\[
f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)
\]
\[
f_2(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0) + \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}(x - x_0)(x - x_1)
\]
Figure 5.4. Quadratic interpolation
1.3.3. General Form of Newton’s Divided Difference Polynomial
In the two previous cases, we saw how linear and quadratic interpolants are derived by Newton's Divided Difference polynomial method. Let us analyze the quadratic interpolant formula

\[
f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)
\]

where

\[
b_0 = f(x_0)
\]
\[
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]
\[
b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]
Note that b_0, b_1, and b_2 are finite divided differences: the zeroth, first, and second divided differences, respectively. Denoting the zeroth divided difference by

\[
f[x_0] = f(x_0),
\]

the first divided difference by

\[
f[x_1, x_0] = \frac{f(x_1) - f(x_0)}{x_1 - x_0},
\]

and the second divided difference by

\[
f[x_2, x_1, x_0] = \frac{f[x_2, x_1] - f[x_1, x_0]}{x_2 - x_0} = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]
where f[x_0], f[x_1, x_0], and f[x_2, x_1, x_0] are called bracketed functions of their variables enclosed in square brackets.
We can write:
\[
f_2(x) = f[x_0] + f[x_1, x_0](x - x_0) + f[x_2, x_1, x_0](x - x_0)(x - x_1)
\]

This leads to the general form of the Newton's divided difference polynomial for (n+1) data points (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}), (x_n, y_n) as

\[
f_n(x) = b_0 + b_1 (x - x_0) + \dots + b_n (x - x_0)(x - x_1)\cdots(x - x_{n-1})
\]

where

\[
b_0 = f[x_0]
\]
\[
b_1 = f[x_1, x_0]
\]
\[
b_2 = f[x_2, x_1, x_0]
\]
\[
\vdots
\]
\[
b_{n-1} = f[x_{n-1}, x_{n-2}, \dots, x_0]
\]
\[
b_n = f[x_n, x_{n-1}, \dots, x_0]
\]
where the definition of the m-th divided difference is

\[
b_m = f[x_m, \dots, x_0] = \frac{f[x_m, \dots, x_1] - f[x_{m-1}, \dots, x_0]}{x_m - x_0}
\]
From the above definition, it can be seen that the divided differences are calculated recursively.
For an example of a third order polynomial, given (x_0, y_0), (x_1, y_1), (x_2, y_2), and (x_3, y_3),

\[
f_3(x) = f[x_0] + f[x_1, x_0](x - x_0) + f[x_2, x_1, x_0](x - x_0)(x - x_1) + f[x_3, x_2, x_1, x_0](x - x_0)(x - x_1)(x - x_2)
\]

The divided differences are built up column by column (b0 through b3):

            b0          b1             b2                 b3
x0      f(x0)
                    f[x1, x0]
x1      f(x1)                     f[x2, x1, x0]
                    f[x2, x1]                        f[x3, x2, x1, x0]
x2      f(x2)                     f[x3, x2, x1]
                    f[x3, x2]
x3      f(x3)
Example
Use the same rocket velocity data as before to determine the value of the velocity at t = 16 s using a third order Newton's Divided Difference interpolating polynomial.
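A compact Python sketch of this computation, assuming NumPy; the coefficients are built from the recursive divided-difference definition above, and the function names are illustrative, not part of the course notes.

```python
import numpy as np

def newton_coefficients(x, y):
    """Build b0..bn from the recursive divided-difference definition."""
    x = np.asarray(x, dtype=float)
    b = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # j-th order divided differences, computed from the (j-1)-th ones
        b[j:n] = (b[j:n] - b[j-1:n-1]) / (x[j:n] - x[0:n-j])
    return b

def newton_eval(b, x_data, x):
    """Evaluate f_n(x) = b0 + b1 (x-x0) + ... + bn (x-x0)...(x-x_{n-1})."""
    result, product = b[0], 1.0
    for k in range(1, len(b)):
        product *= (x - x_data[k - 1])
        result += b[k] * product
    return result

# Rocket data: third order polynomial through the four points bracketing t = 16 s
t = [10.0, 15.0, 20.0, 22.5]
v = [227.04, 362.78, 517.35, 602.97]
b = newton_coefficients(t, v)
print(f"v(16) = {newton_eval(b, t, 16.0):.2f} m/s")
```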
1.4. Lagrangian Interpolation
Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’
points. One of the methods to find this polynomial is called Lagrangian Interpolation.
The Lagrangian interpolating polynomial is given by

\[
f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i)
\]

where n in f_n(x) stands for the n-th order polynomial that approximates the function y = f(x) given at (n+1) data points (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}), (x_n, y_n), and

\[
L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}
\]

L_i(x) is a weighting function formed as a product of n terms, with the term j = i omitted.
Example
Use the same rocket velocity data to determine the value of the velocity at t = 16 s using a third order Lagrangian interpolating polynomial.
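A minimal Python sketch of the Lagrangian formula, again using the four tabulated points that bracket t = 16 s (helper names are illustrative):

```python
def lagrange_interpolate(x_data, y_data, x):
    """Evaluate f_n(x) = sum_i L_i(x) y_i with L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    n = len(x_data)
    for i in range(n):
        L_i = 1.0
        for j in range(n):
            if j != i:
                L_i *= (x - x_data[j]) / (x_data[i] - x_data[j])
        total += L_i * y_data[i]
    return total

t = [10.0, 15.0, 20.0, 22.5]
v = [227.04, 362.78, 517.35, 602.97]
print(f"v(16) = {lagrange_interpolate(t, v, 16.0):.2f} m/s")
```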
1.5. Spline Method of Interpolation
The spline method was introduced to overcome one of the drawbacks of polynomial interpolation: when the order n becomes large, oscillations often appear in the resulting polynomial. This was shown by Runge when he interpolated data based on the simple function

\[
y = \frac{1}{1 + 25x^2}
\]

on the interval [-1, 1]. For example, take six equidistantly spaced points in [-1, 1] and find y at these points, as given in Table 2.
Table 2: Six equidistantly spaced points in [-1, 1]

x        y = 1/(1 + 25x^2)
-1.0     0.038461
-0.6     0.1
-0.2     0.5
0.2      0.5
0.6      0.1
1.0      0.038461
Figure.5.5. 5th order polynomial vs. exact function.
Now through these six points, we can pass a fifth order polynomial

\[
f_5(x) = 0.56731 - 1.7308\,x^2 + 1.2019\,x^4, \qquad -1 \le x \le 1
\]
When plotting the fifth order polynomial and the original function, you can notice that the two do not match well. So you might consider choosing more points in the interval [-1, 1] to get a better match, but the polynomial diverges even more (see figure below). In fact, Runge found that as the order of the polynomial becomes infinite, the polynomial diverges in the intervals -1 < x < -0.726 and 0.726 < x < 1.
Figure 5.6. Higher order polynomial interpolation is a bad idea.
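A short Python sketch, assuming NumPy, that reproduces this behaviour by fitting interpolating polynomials through equispaced samples of the Runge function; the numbers of sample points are illustrative choices.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

for n_points in (6, 13):                  # more points -> higher order -> worse oscillations
    x_nodes = np.linspace(-1.0, 1.0, n_points)
    coeffs = np.polyfit(x_nodes, runge(x_nodes), n_points - 1)  # interpolating polynomial
    x_plot = np.linspace(-1.0, 1.0, 401)
    max_err = np.max(np.abs(np.polyval(coeffs, x_plot) - runge(x_plot)))
    print(f"{n_points:2d} points (order {n_points - 1}): max |error| = {max_err:.3f}")
```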
1.5.1. Linear spline interpolation
Given (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}), (x_n, y_n), fit linear splines to the data. This simply involves connecting consecutive data points through straight lines. So if the above data are given in ascending order, the linear splines are given by (y_i = f(x_i)):
Figure.5.7. Linear splines.
\[
f(x) =
\begin{cases}
f(x_0) + \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0), & x_0 \le x \le x_1 \\[2ex]
f(x_1) + \dfrac{f(x_2) - f(x_1)}{x_2 - x_1}(x - x_1), & x_1 \le x \le x_2 \\[2ex]
\quad \vdots \\[1ex]
f(x_{n-1}) + \dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}(x - x_{n-1}), & x_{n-1} \le x \le x_n
\end{cases}
\]
Note that the terms

\[
\frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}
\]

in the above function are simply the slopes between x_{i-1} and x_i.
1.5.2. Quadratic Splines
In these splines, a quadratic polynomial approximates the data between two consecutive data
points. The splines are given by
\[
f(x) =
\begin{cases}
a_1 x^2 + b_1 x + c_1, & x_0 \le x \le x_1 \\
a_2 x^2 + b_2 x + c_2, & x_1 \le x \le x_2 \\
\quad \vdots \\
a_n x^2 + b_n x + c_n, & x_{n-1} \le x \le x_n
\end{cases}
\]
Now, how do we find the coefficients of these quadratic splines? There are 3n such coefficients:

a_i, i = 1, 2, ..., n
b_i, i = 1, 2, ..., n
c_i, i = 1, 2, ..., n

To find the 3n unknowns, we need 3n equations, which are then solved simultaneously. These 3n equations are found as follows.
1) Each quadratic spline goes through two consecutive data points
\[
\begin{aligned}
a_1 x_0^2 + b_1 x_0 + c_1 &= f(x_0) \\
a_1 x_1^2 + b_1 x_1 + c_1 &= f(x_1) \\
&\;\;\vdots \\
a_i x_{i-1}^2 + b_i x_{i-1} + c_i &= f(x_{i-1}) \\
a_i x_i^2 + b_i x_i + c_i &= f(x_i) \\
&\;\;\vdots \\
a_n x_{n-1}^2 + b_n x_{n-1} + c_n &= f(x_{n-1}) \\
a_n x_n^2 + b_n x_n + c_n &= f(x_n)
\end{aligned}
\]
This condition gives 2n equations as there are ‘n’ quadratic splines going through two
consecutive data points.
2) The first derivatives of two adjacent quadratic splines are continuous at the interior points. For example, the derivative of the first spline, a_1 x^2 + b_1 x + c_1, is 2a_1 x + b_1, and the derivative of the second spline, a_2 x^2 + b_2 x + c_2, is 2a_2 x + b_2. The two are equal at x = x_1, giving

\[
2a_1 x_1 + b_1 = 2a_2 x_1 + b_2
\]
\[
2a_1 x_1 + b_1 - 2a_2 x_1 - b_2 = 0
\]
Similarly, at the other interior points,

\[
2a_2 x_2 + b_2 - 2a_3 x_2 - b_3 = 0
\]
\[
\vdots
\]
\[
2a_i x_i + b_i - 2a_{i+1} x_i - b_{i+1} = 0
\]
\[
\vdots
\]
\[
2a_{n-1} x_{n-1} + b_{n-1} - 2a_n x_{n-1} - b_n = 0
\]

Since there are (n-1) interior points, we have (n-1) such equations. The total number of equations so far is (2n) + (n-1) = (3n-1), so we still need one more equation.
We can assume that the first spline is linear, that is,

\[
a_1 = 0
\]
This gives us ‘3n’ equations and ‘3n’ unknowns. These can be solved by a number of techniques
used to solve simultaneous linear equations.
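A sketch in Python of how this 3n-by-3n system could be assembled and solved, assuming NumPy; the ordering of the unknowns as [a1, b1, c1, ..., an, bn, cn] is an assumption of the sketch, not a convention from the notes.

```python
import numpy as np

def quadratic_spline_coeffs(x, y):
    """Solve the 3n x 3n system for the quadratic spline coefficients.

    Unknowns are ordered [a1, b1, c1, a2, b2, c2, ..., an, bn, cn].
    """
    n = len(x) - 1                    # number of splines
    A = np.zeros((3 * n, 3 * n))
    r = np.zeros(3 * n)
    row = 0

    # 1) Each spline passes through its two end points: 2n equations.
    for i in range(n):
        for xk, yk in ((x[i], y[i]), (x[i + 1], y[i + 1])):
            A[row, 3 * i: 3 * i + 3] = [xk**2, xk, 1.0]
            r[row] = yk
            row += 1

    # 2) First derivatives match at the (n-1) interior points.
    for i in range(n - 1):
        xk = x[i + 1]
        A[row, 3 * i: 3 * i + 2] = [2.0 * xk, 1.0]              # 2*a_i*x + b_i
        A[row, 3 * (i + 1): 3 * (i + 1) + 2] = [-2.0 * xk, -1.0]
        row += 1

    # 3) The first spline is assumed linear: a1 = 0.
    A[row, 0] = 1.0

    coeffs = np.linalg.solve(A, r)
    return coeffs.reshape(n, 3)       # row i holds (a_{i+1}, b_{i+1}, c_{i+1})

# Rocket data from Table 1
t = [0.0, 10.0, 15.0, 20.0, 22.5, 30.0]
v = [0.0, 227.04, 362.78, 517.35, 602.97, 901.67]
abc = quadratic_spline_coeffs(t, v)

# Evaluate v(16): t = 16 lies in [15, 20], which is the spline with index 2.
a, b, c = abc[2]
print(f"v(16) = {a * 16**2 + b * 16 + c:.2f} m/s")
```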
2. Regression
2.2. What is regression?
Regression analysis gives information on the relationship between a response variable and one
or more independent variables to the extent that information is contained in the data. The goal of
regression analysis is to express the response variable as a function of the predictor variables.
The quality of fit and accuracy of the conclusions depend on the data used. Hence, non-representative or improperly compiled data result in poor fits and conclusions. Thus, for effective use of regression analysis one must
- Investigate the data collection process,
- Discover any limitations in the data collected,
- Restrict conclusions accordingly.
Once a regression relationship is obtained, it can be used to predict values of the response variable, identify variables that most affect the response, or verify hypothesized causal models of the response. The value of each predictor variable can be assessed through statistical tests on the estimated coefficients (multipliers) of the predictor variables.
2.3. Linear regression
Linear regression is the most popular regression model. In this model, we wish to predict the response to n data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) by a regression model given by

\[
y = a_0 + a_1 x
\]

where a_0 and a_1 are the constants of the regression model.

A measure of goodness of fit, that is, how well a_0 + a_1 x predicts the response variable y, is the magnitude of the residual ε_i at each of the n data points:

\[
\epsilon_i = y_i - (a_0 + a_1 x_i)
\]

Ideally, if all the residuals ε_i are zero, one may have found an equation in which all the points lie on the model. Thus, minimization of the residuals is an objective of obtaining the regression coefficients.

The most popular method to minimize the residuals is the least squares method, where the estimates of the constants of the model are chosen such that the sum of the squared residuals is minimized, that is, minimize \( \sum_{i=1}^{n} \epsilon_i^2 \).
Why minimize the sum of the square of the residuals? Why not, for instance, minimize the
sum of the residual errors or the sum of the absolute values of the residuals?
Alternatively, constants of the model can be chosen such that the average residual is zero without
making individual residuals small. For example, let us analyze the following table.
x      y
2.0    4.0
3.0    6.0
2.0    6.0
3.0    8.0
To explain this data by a straight line regression model,
\[
y = a_0 + a_1 x
\]

and using minimization of \( \sum_{i=1}^{n} \epsilon_i \) as the criterion to find a_0 and a_1, we find that the straight line y = 4x - 4 fits the data (Figure 5.8).
Figure.5.8. Regression curve y = 4x – 4 for y vs. x data.
The sum of the residuals, \( \sum_{i=1}^{4} \epsilon_i = 0 \), as shown in the table below.

x      y      y_predicted    ε = y - y_predicted
2.0    4.0    4.0             0.0
3.0    6.0    8.0            -2.0
2.0    6.0    4.0             2.0
3.0    8.0    8.0             0.0
                             Σ ε_i = 0

So does this give us the smallest error? It does, since \( \sum_{i=1}^{4} \epsilon_i = 0 \). But it does not give unique values for the parameters of the model. The straight-line model y = 6
Figure.5.9. Regression curve y = 6 for y vs. x data.
also makes \( \sum_{i=1}^{4} \epsilon_i = 0 \), as shown in the table below.

x      y      y_predicted    ε = y - y_predicted
2.0    4.0    6.0            -2.0
3.0    6.0    6.0             0.0
2.0    6.0    6.0             0.0
3.0    8.0    6.0             2.0
                             Σ ε_i = 0
Since this criterion does not give a unique regression model, it cannot be used for finding the regression coefficients. Why? Because we want to minimize

\[
\sum_{i=1}^{n} \epsilon_i = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)
\]

Differentiating this equation with respect to a_0 and a_1, we get

\[
\frac{\partial \sum_{i=1}^{n} \epsilon_i}{\partial a_0} = \sum_{i=1}^{n} (-1) = -n
\]
\[
\frac{\partial \sum_{i=1}^{n} \epsilon_i}{\partial a_1} = \sum_{i=1}^{n} (-x_i) = -n\bar{x}
\]

Setting these derivatives to zero gives n = 0, which is impossible. Therefore, unique values of a_0 and a_1 do not exist.
You may think that the reason the minimization criterion \( \sum_{i=1}^{n} \epsilon_i \) does not work is that negative residuals cancel with positive residuals. So would minimizing \( \sum_{i=1}^{n} |\epsilon_i| \) be a better criterion? Let us look at the data given below for the line y = 4x - 4. It makes \( \sum_{i=1}^{4} |\epsilon_i| = 4 \), as shown in the following table.

x      y      y_predicted    |ε| = |y - y_predicted|
2.0    4.0    4.0            0.0
3.0    6.0    8.0            2.0
2.0    6.0    4.0            2.0
3.0    8.0    8.0            0.0
                             Σ |ε_i| = 4

The value of \( \sum_{i=1}^{4} |\epsilon_i| = 4 \) also exists for the straight line model y = 6, and no other straight line for this data has \( \sum_{i=1}^{4} |\epsilon_i| < 4 \). Again, we find the regression coefficients are not unique, and hence this criterion also cannot be used for finding the regression model.
Let us use the least squares criterion, where we minimize

\[
S_r = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2
\]

S_r is called the sum of the squares of the residuals.
Figure.5.10. Linear regression of y vs. x data showing residuals at a typical point, xi.
To find a0 and a1, we minimize Sr with respect to a0 and a1:
\[
\frac{\partial S_r}{\partial a_0} = -2\sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right) = 0
\]
\[
\frac{\partial S_r}{\partial a_1} = -2\sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right) x_i = 0
\]

giving

\[
-\sum_{i=1}^{n} y_i + \sum_{i=1}^{n} a_0 + \sum_{i=1}^{n} a_1 x_i = 0
\]
\[
-\sum_{i=1}^{n} y_i x_i + \sum_{i=1}^{n} a_0 x_i + \sum_{i=1}^{n} a_1 x_i^2 = 0
\]

Noting that \( \sum_{i=1}^{n} a_0 = a_0 + a_0 + \dots + a_0 = n a_0 \),

\[
n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i
\]
\[
a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i
\]

Solving the above equations gives:
\[
a_1 = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}
\]

\[
a_0 = \frac{\sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n\sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}
\]
Redefining

\[
S_{xy} = \sum_{i=1}^{n} x_i y_i - n\,\bar{x}\,\bar{y}
\]
\[
S_{xx} = \sum_{i=1}^{n} x_i^2 - n\,\bar{x}^2
\]
\[
\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad \bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}
\]

we can rewrite

\[
a_1 = \frac{S_{xy}}{S_{xx}}, \qquad a_0 = \bar{y} - a_1\,\bar{x}
\]
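A minimal Python sketch of these formulas, assuming NumPy and using the small data set from the tables above:

```python
import numpy as np

def linear_regression(x, y):
    """Least squares fit y = a0 + a1*x using the Sxy/Sxx formulas above."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    S_xy = np.sum(x * y) - n * x_bar * y_bar
    S_xx = np.sum(x**2) - n * x_bar**2
    a1 = S_xy / S_xx
    a0 = y_bar - a1 * x_bar
    return a0, a1

# The small data set used above; least squares gives the unique line y = 1 + 2x
x = [2.0, 3.0, 2.0, 3.0]
y = [4.0, 6.0, 6.0, 8.0]
a0, a1 = linear_regression(x, y)
print(f"y = {a0:.2f} + {a1:.2f} x")
```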
2.4. Nonlinear models using least squares
2.4.1. Exponential model
Given (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), we can fit y = a e^{bx} to the data. The variables a and b are the constants of the exponential model. The residual at each data point x_i is

\[
E_i = y_i - a\,e^{b x_i}
\]

The sum of the squares of the residuals is

\[
S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} \left( y_i - a\,e^{b x_i} \right)^2
\]
To find the constants a and b of the exponential model, we minimize Sr by differentiating with
respect to a and b and equating the resulting equations to zero.
\[
\frac{\partial S_r}{\partial a} = \sum_{i=1}^{n} 2\left( y_i - a\,e^{b x_i} \right)\left( -e^{b x_i} \right) = 0
\]
\[
\frac{\partial S_r}{\partial b} = \sum_{i=1}^{n} 2\left( y_i - a\,e^{b x_i} \right)\left( -a\,x_i\,e^{b x_i} \right) = 0
\]

or

\[
-\sum_{i=1}^{n} y_i\,e^{b x_i} + a\sum_{i=1}^{n} e^{2 b x_i} = 0
\]
\[
-\sum_{i=1}^{n} y_i\,x_i\,e^{b x_i} + a\sum_{i=1}^{n} x_i\,e^{2 b x_i} = 0
\]
These equations are nonlinear in a and b and thus not in a closed form to be solved as was the
case for the linear regression. In general, iterative methods must be used to find values of a and
b.
However, in this case, a can be written explicitly in terms of b as

\[
a = \frac{\sum_{i=1}^{n} y_i\,e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}}
\]
Substituting this back into the second equation gives

\[
\sum_{i=1}^{n} y_i\,x_i\,e^{b x_i} - \frac{\sum_{i=1}^{n} y_i\,e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}} \sum_{i=1}^{n} x_i\,e^{2 b x_i} = 0
\]
This equation is still a nonlinear equation in b and can be solved by numerical methods such as the bisection method or the secant method.
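A sketch in Python of this approach, assuming NumPy; the bisection bracket and the sample data are illustrative assumptions.

```python
import numpy as np

def g(b, x, y):
    """Left-hand side of the nonlinear equation in b after eliminating a."""
    e = np.exp(b * x)
    return np.sum(y * x * e) - np.sum(y * e) / np.sum(e**2) * np.sum(x * e**2)

def fit_exponential(x, y, b_lo=-5.0, b_hi=5.0, tol=1e-10):
    """Fit y = a*exp(b*x) by bisection on g(b) = 0, then recover a explicitly."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    assert g(b_lo, x, y) * g(b_hi, x, y) < 0, "bracket must change sign"
    while b_hi - b_lo > tol:
        b_mid = 0.5 * (b_lo + b_hi)
        if g(b_lo, x, y) * g(b_mid, x, y) <= 0:
            b_hi = b_mid
        else:
            b_lo = b_mid
    b = 0.5 * (b_lo + b_hi)
    a = np.sum(y * np.exp(b * x)) / np.sum(np.exp(2 * b * x))
    return a, b

# Illustrative data roughly following y = 2 exp(0.5 x)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.3, 5.5, 9.0, 14.8])
a, b = fit_exponential(x, y)
print(f"y = {a:.3f} exp({b:.3f} x)")
```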
2.4.2. Growth model
Growth models common in scientific fields have been developed and used successfully for specific situations. They are used to describe how a quantity grows with changes in the regressor variable (often time); examples in this category include the growth of population with time. Growth models include

\[
y = \frac{a}{1 + b\,e^{-c x}}
\]

where a, b and c are the constants of the model. At x = 0, y = a/(1 + b), and as x → ∞, y → a.

The residual at each data point x_i is

\[
E_i = y_i - \frac{a}{1 + b\,e^{-c x_i}}
\]

The sum of the squares of the residuals is
\[
S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} \left( y_i - \frac{a}{1 + b\,e^{-c x_i}} \right)^2
\]
To find the constants a, b and c, we minimize S_r by differentiating with respect to a, b and c and equating the resulting equations to zero:

\[
\frac{\partial S_r}{\partial a} = \sum_{i=1}^{n} \frac{2\,e^{c x_i}\left[ a\,e^{c x_i} - y_i\left( e^{c x_i} + b \right) \right]}{\left( e^{c x_i} + b \right)^2} = 0,
\]

\[
\frac{\partial S_r}{\partial b} = \sum_{i=1}^{n} \frac{2\,a\,e^{c x_i}\left[ b\,y_i + e^{c x_i}\left( y_i - a \right) \right]}{\left( e^{c x_i} + b \right)^3} = 0,
\]

\[
\frac{\partial S_r}{\partial c} = \sum_{i=1}^{n} \frac{-2\,a\,b\,x_i\,e^{c x_i}\left[ b\,y_i + e^{c x_i}\left( y_i - a \right) \right]}{\left( e^{c x_i} + b \right)^3} = 0.
\]

Then it is possible to use the Newton-Raphson method to solve the above set of simultaneous nonlinear equations for a, b and c.
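Rather than coding Newton-Raphson by hand, a common shortcut is to minimize S_r directly with a library least squares routine. The sketch below swaps in scipy.optimize.curve_fit for the Newton-Raphson approach described above; the data and the initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(x, a, b, c):
    """Growth model y = a / (1 + b * exp(-c x))."""
    return a / (1.0 + b * np.exp(-c * x))

# Illustrative data roughly following a = 10, b = 9, c = 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.5, 5.0, 7.5, 9.0, 9.7])

# curve_fit minimizes the sum of squared residuals S_r for us;
# p0 is an initial guess for (a, b, c).
(a, b, c), _ = curve_fit(growth, x, y, p0=(10.0, 9.0, 1.0))
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")
```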
2.4.3. Polynomial Models
Given n data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), use the least squares method to regress the data to an m-th order polynomial

\[
y = a_0 + a_1 x + a_2 x^2 + \dots + a_m x^m, \qquad m < n
\]

The residual at each data point is given by

\[
E_i = y_i - a_0 - a_1 x_i - \dots - a_m x_i^m
\]

The sum of the squares of the residuals is given by

\[
S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - \dots - a_m x_i^m \right)^2
\]
To find the constants of the polynomial regression model, we put the derivatives with respect to ai
to zero, that is,
\[
\frac{\partial S_r}{\partial a_0} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \dots - a_m x_i^m \right)(-1) = 0
\]
\[
\frac{\partial S_r}{\partial a_1} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \dots - a_m x_i^m \right)(-x_i) = 0
\]
\[
\vdots
\]
\[
\frac{\partial S_r}{\partial a_m} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \dots - a_m x_i^m \right)(-x_i^m) = 0
\]
Writing these equations in matrix form gives

\[
\begin{bmatrix}
n & \sum_{i=1}^{n} x_i & \cdots & \sum_{i=1}^{n} x_i^m \\
\sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 & \cdots & \sum_{i=1}^{n} x_i^{m+1} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^{n} x_i^m & \sum_{i=1}^{n} x_i^{m+1} & \cdots & \sum_{i=1}^{n} x_i^{2m}
\end{bmatrix}
\begin{bmatrix}
a_0 \\ a_1 \\ \vdots \\ a_m
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \\ \vdots \\ \sum_{i=1}^{n} x_i^m y_i
\end{bmatrix}
\]

The above system is solved for a_0, a_1, ..., a_m.
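A Python sketch that assembles and solves this normal-equation system directly, assuming NumPy; the order m and the sample data are illustrative.

```python
import numpy as np

def poly_regress(x, y, m):
    """Least squares fit of an m-th order polynomial via the normal equations."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # A[k, j] = sum_i x_i^(k + j),  rhs[k] = sum_i x_i^k * y_i
    A = np.array([[np.sum(x**(k + j)) for j in range(m + 1)] for k in range(m + 1)])
    rhs = np.array([np.sum(x**k * y) for k in range(m + 1)])
    return np.linalg.solve(A, rhs)       # [a0, a1, ..., am]

# Second order fit to illustrative data (roughly y = 1 + 2x^2)
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 9.2, 19.1, 33.0]
a = poly_regress(x, y, m=2)
print("coefficients a0..a2:", np.round(a, 3))
```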
2.4.4. Logarithmic Functions
The form of the log regression model is

\[
y = \beta_0 + \beta_1 \ln(x)
\]

This is a linear function between y and ln(x), and the usual least squares method applies, in which y is the response variable and ln(x) is the regressor.
2.4.5. Power Functions
The power function equation describes many scientific and engineering phenomena:

\[
y = a x^b
\]

The method of least squares is applied to the power function by first linearizing the data (the assumption is that b is not known; if the only unknown were a, a linear relation would already exist between x^b and y). The linearization of the data is as follows:

\[
\ln(y) = \ln(a) + b \ln(x)
\]

The resulting equation shows a linear relation between ln(y) and ln(x).
We can put

\[
z = \ln(y)
\]
\[
w = \ln(x)
\]
\[
a_0 = \ln(a) \quad \Rightarrow \quad a = e^{a_0}
\]
\[
a_1 = b
\]

and we get

\[
z = a_0 + a_1 w
\]

\[
a_1 = \frac{n\sum_{i=1}^{n} w_i z_i - \sum_{i=1}^{n} w_i \sum_{i=1}^{n} z_i}{n\sum_{i=1}^{n} w_i^2 - \left( \sum_{i=1}^{n} w_i \right)^2}
\]

\[
a_0 = \frac{\sum_{i=1}^{n} z_i}{n} - a_1 \frac{\sum_{i=1}^{n} w_i}{n}
\]

Since a_0 and a_1 can be found, the original constants of the model are

\[
b = a_1, \qquad a = e^{a_0}
\]
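A minimal Python sketch of this log-log linearization, assuming NumPy and illustrative data:

```python
import numpy as np

def power_fit(x, y):
    """Fit y = a * x**b by linear least squares on z = ln(y), w = ln(x)."""
    w, z = np.log(x), np.log(y)
    n = len(w)
    a1 = (n * np.sum(w * z) - np.sum(w) * np.sum(z)) / (n * np.sum(w**2) - np.sum(w)**2)
    a0 = np.sum(z) / n - a1 * np.sum(w) / n
    return np.exp(a0), a1              # a = e^{a0}, b = a1

# Illustrative data roughly following y = 3 x^1.5
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 8.6, 15.5, 24.2, 33.4])
a, b = power_fit(x, y)
print(f"y = {a:.3f} x^{b:.3f}")
```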