Appendix A
Nonlinear Curve Fitting
A sample consisting of a layer of aluminum and a layer of a composite coating is tested in a
vacuum chamber by measuring its temperature as a function of time. The sample temperature
has a nonlinear dependence on the emissivity ε of the sample and the heat transfer coefficient
h between the coating and the surrounding air in the vacuum chamber.
Figure A-1. A sample (insulation, aluminum, and composite coating) enclosed within a vacuum testing chamber.
The unknown parameters h and ε may be obtained by fitting the model equation to the
experimental data, as shown in Figure A-2, where the curve represents the model equation and
the circles represent the data.
Figure A-2. Transient temperature of a typical sample: temperature (K) versus time (min).
To illustrate how this is done, first consider a portion of the graph in Figure A-2 that is replotted in Figure A-3. The relationship between the temperature Ti obtained from the model
equation and the experimental value Ti,exp can be expressed generally as
\[
T_{i,\mathrm{exp}} = T_i(t_i;\, \varepsilon, h) + e_i \qquad \text{(A-2)}
\]
where ei is a random error that can be negative or positive. Ti is a function of the independent
variable ti and the parameters h and ε. The random error is also called the residual, which is
the difference between the measured and calculated values.
Figure A-3. Relationship between the model equation and the data, showing Ti,exp, Ti, the residual ei, and ti.
Nonlinear regression is based on determining the values of the parameters that minimize the
sum of the squares of the residuals, called the objective function Fobj:
\[
F_{\mathrm{obj}} = \sum_{i=1}^{N} e_i^2 = \sum_{i=1}^{N} \left( T_{i,\mathrm{exp}} - T_i \right)^2 \qquad \text{(A-3)}
\]
where N is the number of data points, or measured temperatures in this case. The temperature
from equation (A-1) can be expanded in a Taylor series around h and ε and truncated after the
first-derivative terms:
\[
T_{i,j+1} = T_{i,j} + \frac{\partial T_{i,j}}{\partial \varepsilon}\,\Delta\varepsilon + \frac{\partial T_{i,j}}{\partial h}\,\Delta h \qquad \text{(A-4)}
\]
where j is the guess and j+1 is the prediction, Δε = ε_{j+1} − ε_j, and Δh = h_{j+1} − h_j. We have
linearized the original model with respect to the parameters h and ε. Equation (A-4) can be
substituted into Eq. (A-2) to yield
\[
T_{i,\mathrm{exp}} - T_{i,j} = \frac{\partial T_{i,j}}{\partial \varepsilon}\,\Delta\varepsilon + \frac{\partial T_{i,j}}{\partial h}\,\Delta h + e_i \qquad \text{(A-5a)}
\]
or in matrix form
{D} = [Zj]{A} + {E}
(A-5b)
where [Zj] is the matrix of partial derivatives of the function (called the Jacobian matrix)
evaluated at the guess j, the vector {D} contains the differences between the measured
temperatures and the calculated temperatures at the guess j, the vector {ΔA} contains the
changes in the parameter values, and the vector {E} contains the residuals. It should be noted
that once the final values of the parameters are obtained after the iterations, the vector {D} is the
same as the vector {E}.
\[
[Z_j] = \begin{bmatrix}
\frac{\partial T_1}{\partial \varepsilon} & \frac{\partial T_1}{\partial h} \\
\frac{\partial T_2}{\partial \varepsilon} & \frac{\partial T_2}{\partial h} \\
\vdots & \vdots \\
\frac{\partial T_N}{\partial \varepsilon} & \frac{\partial T_N}{\partial h}
\end{bmatrix}, \qquad
\{D\} = \begin{Bmatrix}
T_{1,\mathrm{exp}} - T_{1,j} \\
T_{2,\mathrm{exp}} - T_{2,j} \\
\vdots \\
T_{N,\mathrm{exp}} - T_{N,j}
\end{Bmatrix}, \qquad
\{\Delta A\} = \begin{Bmatrix}
\Delta\varepsilon \\
\Delta h
\end{Bmatrix}, \qquad
\{E\} = \begin{Bmatrix}
e_1 \\
e_2 \\
\vdots \\
e_N
\end{Bmatrix}
\]
We minimize the objective function
\[
F_{\mathrm{obj}} = \sum_{i=1}^{N} e_i^2 = \sum_{i=1}^{N} \left( T_{i,\mathrm{exp}} - T_i \right)^2 \qquad \text{(A-3)}
\]
by taking its derivative with respect to each of the parameters and setting the resulting
equation to zero.
\[
\frac{\partial F_{\mathrm{obj}}}{\partial \varepsilon} = -2\sum_{i=1}^{N}\left(T_{i,\mathrm{exp}} - T_i\right)\frac{\partial T_i}{\partial \varepsilon} = 0 \qquad \text{(A-6a)}
\]
\[
\frac{\partial F_{\mathrm{obj}}}{\partial h} = -2\sum_{i=1}^{N}\left(T_{i,\mathrm{exp}} - T_i\right)\frac{\partial T_i}{\partial h} = 0 \qquad \text{(A-6b)}
\]
This algorithm is the Gauss-Newton method for minimizing the sum of the squares of the
residuals between the data and a nonlinear function. Equations (A-6a) and (A-6b) can be
combined in matrix form as

[Zj]^T{E} = 0        (A-7)

where [Zj]^T is the transpose of [Zj]. Consider the case N = 3, so that the combination of
(A-6a) and (A-6b) into (A-7) can be seen explicitly:
\[
\begin{bmatrix}
\frac{\partial T_1}{\partial \varepsilon} & \frac{\partial T_2}{\partial \varepsilon} & \frac{\partial T_3}{\partial \varepsilon} \\
\frac{\partial T_1}{\partial h} & \frac{\partial T_2}{\partial h} & \frac{\partial T_3}{\partial h}
\end{bmatrix}
\begin{Bmatrix}
T_{1,\mathrm{exp}} - T_1 \\
T_{2,\mathrm{exp}} - T_2 \\
T_{3,\mathrm{exp}} - T_3
\end{Bmatrix} = 0
\]
Substitute {E} = {D} − [Zj]{ΔA} from Eq. (A-5b) into (A-7):
[Zj]^T({D} − [Zj]{ΔA}) = 0

or

[Zj]^T[Zj]{ΔA} = [Zj]^T{D}        (A-8)
The Jacobian matrix [Zj] may be evaluated numerically for the model equation (A-1).
\[
\frac{\partial T_i}{\partial \varepsilon} \approx \frac{T_i(\varepsilon + \Delta\varepsilon,\, h) - T_i(\varepsilon, h)}{\Delta\varepsilon} \qquad \text{(A-9a)}
\]
\[
\frac{\partial T_i}{\partial h} \approx \frac{T_i(\varepsilon,\, h + \Delta h) - T_i(\varepsilon, h)}{\Delta h} \qquad \text{(A-9b)}
\]
Typically, Δε can be chosen to be 0.01 and Δh can be chosen to be 0.01 W/m²·K. Thus, the
Gauss-Newton method consists of solving Eq. (A-8) for {ΔA}, which can be employed to
compute improved values for the parameters h and ε:

ε_{j+1} = ε_j + Δε   (from {ΔA})
h_{j+1} = h_j + Δh   (from {ΔA})
This procedure is repeated until the solution converges, that is, until Δε and Δh fall below an
acceptable criterion. The Gauss-Newton method is a common algorithm that can be found in
many numerical methods texts; the description here follows the notation and development of
Chapra and Canale.
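To make the procedure concrete, the following sketch implements one possible Gauss-Newton loop in MATLAB using the numerically evaluated Jacobian of Eqs. (A-9a) and (A-9b). The model function, data, perturbation sizes, and tolerance shown here are illustrative choices (the model and data are those of Example A-1 below), not the actual thermal model of Eq. (A-1).

% Sketch: Gauss-Newton iteration with a finite-difference Jacobian (Eq. A-9).
% Tmodel, the data vectors, de, dh, and tol are illustrative placeholders.
Tmodel = @(t,e,h) e*(1-exp(-h*t));     % placeholder model; replace with Eq. (A-1)
texp = [0.25 0.75 1.25 1.75 2.25]';    % measured times
Texp = [0.28 0.57 0.68 0.74 0.79]';    % measured temperatures
e = 1; h = 1;                          % initial guesses for epsilon and h
de = 0.01; dh = 0.01;                  % perturbations for Eqs. (A-9a) and (A-9b)
tol = 1e-6;                            % convergence criterion on the parameter changes
for j = 1:50
    T  = Tmodel(texp,e,h);             % model temperatures at the current guess
    Zj = [(Tmodel(texp,e+de,h)-T)/de ...   % dT/d(epsilon), Eq. (A-9a)
          (Tmodel(texp,e,h+dh)-T)/dh];     % dT/dh, Eq. (A-9b)
    D  = Texp - T;                     % vector {D}
    dA = (Zj'*Zj)\(Zj'*D);             % solve Eq. (A-8) for {dA}
    e  = e + dA(1);                    % update epsilon
    h  = h + dA(2);                    % update h
    if max(abs(dA)) < tol, break, end  % stop when the changes are small
end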
Example A-1
Fit the function T(t; ε, h) = ε(1 − e^(−ht)) to the data.

t      T
0.25   0.28
0.75   0.57
1.25   0.68
1.75   0.74
2.25   0.79
Use initial guesses of h = 1 and ε = 1 for the parameters.
Solution
The partial derivatives of the function with respect to the parameters ε and h are
\[
\frac{\partial T}{\partial \varepsilon} = 1 - e^{-ht} \qquad \text{and} \qquad \frac{\partial T}{\partial h} = \varepsilon t\, e^{-ht}
\]
Evaluated at the initial guesses ε = 1 and h = 1, the Jacobian matrix is
\[
[Z_j] = \begin{bmatrix}
\frac{\partial T_1}{\partial \varepsilon} & \frac{\partial T_1}{\partial h} \\
\frac{\partial T_2}{\partial \varepsilon} & \frac{\partial T_2}{\partial h} \\
\vdots & \vdots \\
\frac{\partial T_N}{\partial \varepsilon} & \frac{\partial T_N}{\partial h}
\end{bmatrix}
= \begin{bmatrix}
0.2212 & 0.1947 \\
0.5276 & 0.3543 \\
0.7135 & 0.3581 \\
0.8262 & 0.3041 \\
0.8946 & 0.2371
\end{bmatrix}
\]
Pre-multiplying the matrix by its transpose gives
\[
[Z_j]^T [Z_j] = \begin{bmatrix}
0.2212 & 0.5276 & 0.7135 & 0.8262 & 0.8946 \\
0.1947 & 0.3543 & 0.3581 & 0.3041 & 0.2371
\end{bmatrix}
\begin{bmatrix}
0.2212 & 0.1947 \\
0.5276 & 0.3543 \\
0.7135 & 0.3581 \\
0.8262 & 0.3041 \\
0.8946 & 0.2371
\end{bmatrix}
= \begin{bmatrix}
2.3194 & 0.9489 \\
0.9489 & 0.4404
\end{bmatrix}
\]
The vector {D} consists of the differences between the measurements T and the model
predictions T(t; ε, h) = ε(1 − e^(−ht)):
\[
\{D\} = \begin{Bmatrix}
0.28 - 0.2212 \\
0.57 - 0.5276 \\
0.68 - 0.7135 \\
0.74 - 0.8262 \\
0.79 - 0.8946
\end{Bmatrix}
= \begin{Bmatrix}
0.0588 \\
0.0424 \\
-0.0335 \\
-0.0862 \\
-0.1046
\end{Bmatrix}
\]
The vector {D} is pre-multiplied by [Zj]^T to give
\[
[Z_j]^T \{D\} = \begin{bmatrix}
0.2212 & 0.5276 & 0.7135 & 0.8262 & 0.8946 \\
0.1947 & 0.3543 & 0.3581 & 0.3041 & 0.2371
\end{bmatrix}
\begin{Bmatrix}
0.0588 \\
0.0424 \\
-0.0335 \\
-0.0862 \\
-0.1046
\end{Bmatrix}
= \begin{Bmatrix}
-0.1533 \\
-0.0365
\end{Bmatrix}
\]
The vector {A} can be calculated by using MATLAB statement dA=ZjTZj\ZjTD ({A} =
{[Zj]T[Zj] \[Zj]T{D}})
  0.2714
{A} = 

 0.5019 
The next guesses for the parameters ε and h are

ε = 1 − 0.2715 = 0.7285
h = 1 + 0.5019 = 1.5019
Table A-1 lists the MATLAB program with the results of two iterations.
Table A-1 _____________________________________
% Gauss-Newton method
%
t=[0.25 0.75 1.25 1.75 2.25]';          % measured times
T=[0.28 0.57 0.68 0.74 0.79]';          % measured temperatures
e=1;h=1;                                % initial guesses
Tmodel='e*(1-exp(-h*t))';               % model equation
dTde='1-exp(-h*t)';dTdh='e*t.*exp(-h*t)';   % partial derivatives
for i=1:2
  Zj=[eval(dTde) eval(dTdh)];           % Jacobian matrix [Zj]
  ZjTZj=Zj'*Zj
  D=T-eval(Tmodel)                      % vector {D}
  ZjTD=Zj'*D
  dA=ZjTZj\ZjTD                         % solve Eq. (A-8) for {dA}
  e=e+dA(1);
  h=h+dA(2);
  fprintf('Iteration #%g: e = %8.4f, h = %8.4f\n',i,e,h)
end
>> Gauss
ZjTZj =
2.3194 0.9489
0.9489 0.4404
D=
0.0588
0.0424
-0.0335
-0.0862
-0.1046
ZjTD =
-0.1534
-0.0366
dA =
-0.2715
0.5019
Iteration #1: e = 0.7285, h = 1.5019
ZjTZj =
3.0660 0.4162
0.4162 0.0780
D=
0.0519
0.0777
0.0629
0.0641
0.0863
ZjTD =
0.2648
0.0397
dA =
0.0625
0.1758
Iteration #2: e = 0.7910, h = 1.6777
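The program in Table A-1 stops after a fixed two iterations. To iterate until the convergence criterion described earlier is met, the for loop could be replaced by a while loop along the following lines; tol is an assumed tolerance and the remaining variables are those already defined in Table A-1.

% Possible variant of the loop in Table A-1: iterate until the parameter
% changes fall below an assumed tolerance instead of a fixed count.
tol=1e-6; dA=[1;1]; i=0;
while max(abs(dA))>tol && i<50
  i=i+1;
  Zj=[eval(dTde) eval(dTdh)];
  D=T-eval(Tmodel);
  dA=(Zj'*Zj)\(Zj'*D);
  e=e+dA(1);
  h=h+dA(2);
  fprintf('Iteration #%g: e = %8.4f, h = %8.4f\n',i,e,h)
end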
The MATLAB function fminsearch can also be used to fit the data to an expression with more
than one parameter, T(t; ε, h) = ε(1 − e^(−ht)). Table A-2 lists the function required by
fminsearch. Table A-3 lists the program that calls fminsearch to find the two parameters,
using initial guesses ε = 1 and h = 1. The program also plots the fitted results together with the
experimental data, as shown in Figure A-4.
______ Table A-2 Matlab program to define the objective function ______
function y=nlin(p)
t=[.25 .75 1.25 1.75 2.25];
T=[.28 .75 0.68 0.74 0.79];
e=p(1);h=p(2);
Tc=e*(1-exp(-h*t));
y=sum((T-Tc).^2);
______ Table A-3 Matlab program to find  and h ______
clf
t=[.25 .75 1.25 1.75 2.25];
T=[.28 .75 0.68 0.74 0.79];
p=fminsearch('nlin',[1 1])
tp=.25:.1:2.25;
e=p(1);h=p(2);
Tc=e*(1-exp(-h*tp));
plot(tp,Tc,t,T,'o')
grid on
xlabel('t');ylabel('T')
legend('Fitted','Data')
>> nlinear
p=
0.7754 2.3752
>>
Figure A-4. Nonlinear regression of T(t; ε, h) = ε(1 − e^(−ht)): fitted curve and data, T versus t.
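As a brief aside, fminsearch also accepts a function handle, so the objective function of Table A-2 can be supplied without a separate file. A minimal sketch, using the same data as Tables A-2 and A-3 and assuming a MATLAB version that supports anonymous functions:

% Minimal alternative to Tables A-2 and A-3: anonymous objective function.
t=[.25 .75 1.25 1.75 2.25];
T=[.28 .75 0.68 0.74 0.79];
fobj=@(p) sum((T-p(1)*(1-exp(-p(2)*t))).^2);   % Eq. (A-3) with p = [e h]
p=fminsearch(fobj,[1 1])                       % initial guesses e = 1, h = 1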