NAP2
Numerical Analysis of Processes
- Physical models (transport equations, energy and entropy principles).
- Empirical models (neural networks and regression models).
- Models that use analytical solutions (diffusion).
- Identification of models based on the best agreement with experiment (regression analysis, optimization).
Rudolf Žitný, Ústav procesní a
zpracovatelské techniky ČVUT FS 2010
Mathematical MODELs
Mathematical model = a mathematical description (program) of system characteristics (distribution of temperature, velocity, pressure, concentration, performance, ...) as a function of time t, space and operating parameters (system geometry, material properties, flow rates, ...).
A mathematical model of an apparatus or system is used for:
- identification of the model parameters on the basis of experiment,
- evaluation of experiments (for example diagnostics),
- optimisation of operational parameters.
Mathematical MODELs
General classification of models

Black box - a model based solely on a comparison with experiment. Three basic types are:
- Neural networks (an analogy of neurons in the brain - a method of artificial intelligence)
- Regression models (e.g. in the form of correlation functions of the power-law type Nu = C·Re^n·Pr^m)
- Identification of the transfer function E(t) of a system from the measured time history of its input x(t) and output y(t).

White box - the model is based entirely on sound physical principles:
- the balances of mass, momentum and energy (Fourier's and Fick's equations, Navier-Stokes equations),
- energy principles, where the desired solution minimizes the total energy of the system (elasticity),
- simulations of random movements and interactions of fictitious particles (Monte Carlo, lattice Boltzmann, ...).

Grey box - models located between the above extremes. Examples are compartment models of flow systems, which respect the physical principle of conservation of mass, but other principles (e.g. the balance of momentum) are replaced by empirical correlations or data.
Physical Principles
Continuum models (white boxes)
Engineering models based on physical principles usually start from three balances (of mass, momentum and energy). In differential form:

Mass conservation

$$\frac{D\rho}{Dt} + \rho\,\nabla\cdot\vec{v} = 0$$

Momentum conservation (Cauchy's equilibrium equations, valid for structural as well as fluid flow analysis, compressible and incompressible)

$$\rho\frac{D\vec{v}}{Dt} = \nabla\cdot\boldsymbol{\sigma} + \rho\vec{g}$$

Energy conservation (the total energy is the sum of internal, kinetic and potential energy; the right-hand side contains the heat flux $\vec q$, the power of inner forces $\nabla\cdot(\boldsymbol\sigma\cdot\vec v)$, the volumetric heat source $Q$ and the power of gravity)

$$\rho\frac{D}{Dt}\left(u + \frac{v^2}{2}\right) = -\nabla\cdot\vec{q} + \nabla\cdot(\boldsymbol{\sigma}\cdot\vec{v}) + Q + \rho\,\vec{g}\cdot\vec{v}$$

Most of the equations you know can be derived from these conservation equations: Bernoulli, Euler, Navier-Stokes, Fourier-Kirchhoff.
Physical Principles
Continuum models (white boxes)
The energy principles are preferred especially in the mechanics of solids. In the theory of elasticity it is the Lagrange principle of minimum potential energy; strains and stresses can be expressed in terms of the displacement u.

Deformation energy of internal forces (the scalar product of the stress and strain tensors)

$$W_i(u) = \frac{1}{2}\int_\Omega e : \sigma \, d\Omega$$

Work of external forces (volumetric, surface, concentrated)

$$W_e(u) = \int_\Omega \vec f\cdot\vec u\, d\Omega + \int_\Gamma \vec p\cdot\vec u\, d\Gamma + \sum_i \vec F_i\cdot\vec u_i$$

At equilibrium the total potential energy W(u) = W_i - W_e is a minimum. The requirement of zero variation of the energy functional, $\delta W/\delta u = 0$, is the basis of many finite element designs. In dynamics this approach is represented by the Hamilton principle of zero variation of the functional with kinetic energy

$$S = \int_0^t \left(W_i - W_e - \int_\Omega \frac{1}{2}\rho v^2\, d\Omega\right) dt$$

"Energy" principles are not so general: from each variational principle an equivalent Euler-Lagrange differential equation can be derived, but the converse is not true. Especially in flow problems the variational principles can be applied only in some cases (e.g. creeping flow).
Physical principles
The choice of model determines the type of engineer engaged in addressing these issues:

FUNCTIONALISTS imagine that each system can take infinitely many forms, and that a number, a functional, can be assigned to each of them. They then plot these numbers and look for special points (minima, maxima, inflections). A special, almost mystical significance is assigned to these characteristic points - perhaps a state of equilibrium, loss of stability, etc. Functionalists work especially in the mechanics of solids and are easily identified by the fact that they appreciate only the finite element method (through gritted teeth perhaps also boundary elements). They use words such as Cauchy-Green tensor, Kirchhoff-Piola stresses, the seventh invariant, etc.

DIFFERENTIALISTS believe that the laws of conservation of mass, momentum and energy, expressed in terms of differential equations, can describe the state of a system in the infinitesimal neighborhood of any point (and consequently the whole system too). When choosing numerical methods the differentialists are not too picky, but generally choose the finite volume method. They often use words like material derivative.

PARTICULARISTS do not believe in continuum mechanics. Everything can be derived from the mechanics of a particle. There are only discrete quantities, as in digital computers. The complexity of reality is caused by a large number of simple processes running in parallel. Particularists can be identified by the fact that they shun all others and are discreet.

FATALISTS do not believe that it is possible for a human being to understand the laws of God's providence and therefore limit themselves to the empirical description of observed phenomena. They love expert systems, artificial intelligence methods, engineering correlations and statistics.

CHAOTICS admit the existence of some principles, but do not believe in their meaningful solvability. They hate smooth curves and prefer to be spoiled by random number generators. They love disasters, catastrophes and attractors. They are attractive especially to women fascinated by foreign words and the colorful flowers of fractals.
Example of models: Drying
As an example, different mathematical models of drying or moistening of a material, such as grain (starch, corn, rice, coffee, ...), will be used throughout the whole course.

One of the basic results will always be the time of drying (or the relationship between time, temperature and moisture of the dried material). For this purpose simple regression models or neural networks are usually sufficient.

The distribution of moisture within the grain must be known (calculated) as soon as microbial activity or health risk is to be determined. White- or grey-box models based on transport equations and heat diffusion can be used. When the grain has a simple geometry (a ball) and when there is no significant effect of moisture on the diffusion coefficient, the solution can be expressed in the form of an infinite series (analytical solution of Fick's and Fourier-Kirchhoff equations; only a few terms of these series are sufficient).

When the geometry is simple but the diffusion and heat transfer coefficients are strongly non-linear, 1D numerical methods are preferred (the finite difference method or the spectral method). Complicated shapes of grains (rice, beans) are evaluated by 3D numerical finite element or control volume methods (the commercial software ANSYS, COMSOL and FLUENT is usually used).

At present, attention is focused on the changing internal structure of a material, such as cracking, which in turn significantly affects the moisture (free water easily penetrates micro-cracks in the grain). This means that in addition to the transport equations, the deformation field and the distribution of the mechanical stress tensor (an elasticity problem) must be solved. These models are generally based on the finite element method.
Neural networks
Artificial neural networks are the brute-force techniques of artificial intelligence. Their popularity stems from the fact that more and more people know less and less about real processes and their nature (maybe it could be called the law of exponential growth of ignorance). An ANN (Artificial Neural Network) is designed specifically for those who know nothing about the processes they want to model. They just have a lot of experimental data in which they are unable to navigate. And they have MATLAB (for example).

(Illustration: Schiele)
Neural Networks
Neuron = a module that calculates one output value from several inputs. The behavior of a particular neural network is given by the values of the synaptic weights w_i (the coefficients of amplification of the signals between interconnected neurons).

[Figure: a network with an input layer (2 neurons X1, X2 fed by sensors), a hidden layer (here 4 neurons) and an output layer (1 neuron Y, e.g. controlling a valve for the substrate flowrate).]

The neuron's response y to N inputs x_i is

$$y = f\left(\sum_{i=1}^{N} w_i x_i - \vartheta\right)$$

where the w_i are the coefficients of the synaptic weights, evaluated by a special algorithm ("learning" of the network), and ϑ is a threshold. The most frequently used activation functions f are the hyperbolic tangent, the sigmoidal function and the sign function,

$$f(\xi) = \tanh(\xi), \qquad f(\xi) = \frac{1}{1 + e^{-\xi}}, \qquad f(\xi) = \mathrm{sgn}(\xi),$$

all implemented e.g. in MATLAB.

NEURAL NETWORK vs. BRAIN: a biological neuron has several inputs (dendrites) and one output (axon).
Neural networks
Modeling of wheat soaking using two artificial neural networks (MLP and RBF)
Journal of Food Engineering, Volume 91, Issue 4, April 2009, Pages 602-607
M. Kashaninejad, A.A. Dehghani, M. Kashiri

In this article you can read how experiments on the humidification of cereal grains were evaluated. Grains were soaked in distilled water at temperatures of 25, 35, 45, 55 and 65 °C for about 15 hours (samples were weighed at 15-minute intervals); in total 154 values of the specific humidity of the grain were available for various temperatures and times. Of these values only 99 data points were used for training the network, which had two neurons in the input layer (time, temperature), 26 neurons in the hidden layer and a single output neuron (humidity). The remaining 55 data points (moisture) were used to verify that the "trained" neural network gives reasonable results, and to estimate how large the prediction error is. Two types of network were used: MLP (Multi Layer Perceptron) and RBF (Radial Basis Functions), with different activation functions of the neurons (sigmoidal and Gaussian basis functions, respectively).

[Figure: prediction of the neural network (ANN); MR - Moisture Ratio as a function of time and temperature.]

MLP is a classical neural network. The neuron activation function (hyperbolic tangent, see the previous slide) has no adjustable parameters; only the weighting coefficients w_ij connecting neuron i with neuron j are optimized (by the same methods as described later for regression models). Radial basis function (RBF) neurons have their own adjustable parameters: the coordinates of the neuron (which determine the "distance" of the neuron from the neurons of the previous layer) and the "width" of the basis function. The RBF is the Gaussian function

$$f_c(x) = \exp\left(-\frac{(x - x_c)^2}{2\sigma_c^2}\right)$$

RBF networks have only one hidden layer, and the weighting coefficients w_ij can be adjusted only between the hidden and output layers. The parameters of the "radial" neurons (x_c, σ_c) are selected "ad hoc" according to the nature of the problem, usually using the statistical strategy of "cluster analysis". RBF is more complicated than MLP and the result (at least for drying) tends to be worse.
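For comparison with the MLP neuron, the Gaussian RBF neuron above is equally short in code. A sketch only: in practice the centre x_c and width σ_c would come from cluster analysis, as the paper describes:

```python
import numpy as np

def rbf_neuron(x, xc, sigma):
    """Gaussian radial basis function f_c(x) = exp(-|x - xc|^2 / (2*sigma^2)).
    The response depends only on the distance of the input x from the centre xc."""
    d2 = np.sum((np.asarray(x, dtype=float) - np.asarray(xc, dtype=float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Note the qualitative difference from the tanh neuron: the RBF response is largest when the input coincides with the centre and decays with distance in every direction.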
Regression models
(Illustration: Delvaux)
Regression models
A regression model has the form of a relatively simple function of the independent variable x and the parameters p1, p2, ..., pM, which are to be determined so that the values of the function best match the experimental values y.

Frankly, neural networks are almost the same. The searched parameters p1, p2, ..., pM are the coefficients of the synaptic weights linking the neurons. The difference is that the type of the model function is more or less unified in an ANN, and the number of weights is greater than the number of parameters commonly used in regression functions.

A regression function is chosen on the basis of experience or a simplified idea of the modeled process (reasonable and logically explainable behavior at very small or very large values of the independent variables is to be required). It is also desirable that the parameters p1, p2, ..., pM have a clearly defined physical meaning.

However, it is true that if we do not know the physical nature of the process, or if it is too complex (i.e. the same situation as with neural networks), the polynomial regression function y = p1 + p2·x + ... + pM·x^(M-1) or another "neutral" function is used.
Regression models
Let us consider a regression model with an unknown vector p of M parameters

$$y = f(x, p_1, p_2, ..., p_M) = f(x, \vec p)$$

The parameters p_i should be calculated so that the model prediction best fits the N measured points (x_i, y_i). The most frequently used criterion of fit is chi-square (the sum of squares of deviations)

$$\chi^2(\vec p) = \sum_{i=1}^{N} \left(\frac{y_i - f(x_i, \vec p)}{\sigma_i}\right)^2$$

The number of data points N should be greater than the number of calculated parameters M (N = M means interpolation). The quantity σ_i is the standard deviation of the measured quantity y_i (the measurement error). Regression looks for the minimum of the function χ² in the parametric space p1, p2, ..., pM (sometimes more robust criteria of fit are used, for example the sum of absolute values of deviations).

The quality of a selected model is evaluated by the so-called correlation index, which should be close to unity for good models

$$r = \sqrt{1 - \frac{\sum_i \left(y_i - f(x_i, \vec p)\right)^2}{\sum_i \left(y_i - \bar y\right)^2}}$$

The worst value r = 0 corresponds to the case when it would be better to use a constant (the mean value of y) as the regression model.
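Both criteria of fit are one-liners in code. A NumPy sketch following the definitions above (the square-root form of the correlation index is used here):

```python
import numpy as np

def chi_square(y, f, sigma):
    """chi^2 = sum_i ((y_i - f_i)/sigma_i)^2, the weighted sum of squared deviations."""
    y, f, sigma = (np.asarray(a, dtype=float) for a in (y, f, sigma))
    return np.sum(((y - f) / sigma) ** 2)

def correlation_index(y, f):
    """r close to 1: good fit; r = 0: the model is no better than the mean of y."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return np.sqrt(1.0 - np.sum((y - f) ** 2) / np.sum((y - np.mean(y)) ** 2))
```

A perfect fit gives χ² = 0 and r = 1; replacing the model by the constant ȳ gives r = 0.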
Linear regression
A linear regression model is a linear combination of selected base functions, for example the polynomials g_m(x) = x^(m-1) or the goniometric functions g_m(x) = sin(mx)

$$f(x, \vec p) = \sum_{m=1}^{M} p_m g_m(x)$$

The base functions are an analogy of the activation functions of neurons, and the parameters p_m correspond to the synaptic weights.

The model prediction can be expressed in matrix form

$$[y_{prediction}] = [[A]][p]$$

where [[A]] is the design matrix with N rows corresponding to the N data points and M columns for the M base functions (N > M; the case of a square matrix N = M is not regression but interpolation):

$$A = \begin{pmatrix} g_1(x_1) & \cdots & g_M(x_1) \\ \vdots & & \vdots \\ g_1(x_N) & \cdots & g_M(x_N) \end{pmatrix}$$

As soon as the standard deviation σ is constant, the χ² value is proportional to the sum of squares s² of the deviations between the prediction [y_prediction] and the measured data [y]

$$s^2 = ([y] - [[A]][p])^T ([y] - [[A]][p])$$

which is a scalar product of two vectors.
Base functions

Example: orthogonal polynomials with a weight function w(x),

$$\int_a^b w(x)\, g_i(x)\, g_j(x)\, dx = \delta_{ij}$$

e.g. the HERMITE polynomials and the TSCHEBYSHEFF polynomials of the first kind.
Linear regression

At the minimum, the first derivatives of s² with respect to all model parameters are zero

$$\frac{\partial s^2}{\partial [p]} = -2[[A]]^T[y] + 2[[A]]^T[[A]][p] = 0$$

which is a system of M linear algebraic equations for the M unknown parameters

$$[[A]]^T[[A]][p] = [[A]]^T[y]$$

The transposed matrix [[A]]^T has dimension M×N: multiplied by [[A]] it gives the square matrix [[C]] = [[A]]^T[[A]] of dimension M×M, and multiplied by the vector [y] (N×1) it gives a vector of M values.

The matrix [[C]] enables an estimate of the reliability intervals of the calculated parameters p1, ..., pM,

$$\sigma_{pk}^2 = C_{kk}^{-1}\,\sigma_y^2$$

where σ_pk is the standard deviation of the calculated parameter p_k for k = 1, 2, ..., M, the inverted matrix [[C]]^(-1) is called the covariance matrix, and σ_y is the standard deviation of the measured data (it is assumed that all data are measured with the same accuracy).
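The normal equations are easy to verify numerically. A NumPy sketch with illustrative data (exact straight-line data, polynomial base functions); a production code would also guard against an ill-conditioned [[C]]:

```python
import numpy as np

# N = 6 data points from an exact straight line y = 1 + 2x (illustrative data)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 1.0 + 2.0 * x

# design matrix A (N x M) for the polynomial base functions g_m(x) = x^(m-1)
M = 2
A = np.column_stack([x ** m for m in range(M)])

C = A.T @ A                      # [[C]] = [[A]]^T [[A]], an M x M matrix
p = np.linalg.solve(C, A.T @ y)  # normal equations [[C]][p] = [[A]]^T [y]
cov = np.linalg.inv(C)           # covariance matrix (to be scaled by sigma_y^2)
```

For these exact data the recovered parameters are p ≈ (1, 2), and the diagonal of cov scaled by σ_y² gives the variances σ_pk² of the parameters.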
Linear regression - example: noise filtration

The principle of the Savitzky-Golay filter of noisy data is very simple: each point of the input data x_i, y_i is associated with a window of Nw points to the left and Nw points to the right, and these 2Nw+1 points are approximated by a regression polynomial of degree k, where k < 2Nw. The value of this regression polynomial at the point x_i substitutes the original value y_i.

[Figure: number of data points N = 1024, width of window Nw = 50, quadratic polynomial.]

SG filtration is implemented in MATLAB as the function

SGOLAYFILT(X,K,F)

where X is the vector of noisy data, K the degree of the regression polynomial and F = 2Nw+1 is the width of the window.
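The same windowed-regression principle can be written directly. A plain-NumPy sketch of the idea (MATLAB's SGOLAYFILT, or scipy.signal.savgol_filter in Python, do the same thing more efficiently using precomputed convolution weights):

```python
import numpy as np

def sg_smooth(y, nw, k):
    """Savitzky-Golay smoothing: fit a degree-k polynomial to the 2*nw+1 points
    around each sample and replace the sample by the polynomial value there.
    At the edges the window is simply truncated to the available points."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - nw), min(n, i + nw + 1)
        xs = np.arange(lo, hi, dtype=float)
        coef = np.polyfit(xs, y[lo:hi], k)   # local regression polynomial
        out[i] = np.polyval(coef, float(i))  # its value replaces y[i]
    return out
```

A useful sanity check: data that already are a polynomial of degree ≤ k pass through the filter unchanged, so the filter smooths noise without flattening smooth trends.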
Regression models - example: soaking

Many empirical models are used for the description of the relationship between the moisture content in grains and time, for example the exponential Page model

$$\frac{X - X_e}{X_0 - X_e} = \exp(-k\,t^n)$$

with two model parameters p1 = k, p2 = n; the independent variable t is time, X_e is the equilibrium moisture and X_0 the initial moisture of the grain. Even more frequently used is the Peleg model (see the paper below)

$$X = X_0 + \frac{t}{p_1 + p_2 t}$$

where the parameter p2 characterises the equilibrium moisture.

The application of Peleg's equation to model water absorption during the soaking of red kidney beans (Phaseolus vulgaris L.), Journal of Food Engineering, Volume 32, Issue 4, June 1997, Pages 391-401, Nissreen Abu-Ghannam, Brian McKenna. Peleg's model is used for beans, chickpeas, peas, nuts, ...

The Page and Peleg models are nonlinear, and in practice they are transformed to a linear form: by taking the logarithm (Page), or (Peleg) by the substitution

$$y = \frac{t}{X(t) - X_0} = p_1 + p_2 t$$

where y is the new dependent variable; linear regression can then be applied for the identification of p1 and p2.
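The linearisation of Peleg's model is easy to demonstrate on synthetic data. A NumPy sketch, where X0, p1, p2 are made-up values, not the ones from the cited paper:

```python
import numpy as np

# synthetic soaking data generated from Peleg's model X = X0 + t/(p1 + p2*t)
X0, p1, p2 = 0.10, 5.0, 2.0
t = np.arange(1.0, 11.0)              # t = 0 is excluded (X - X0 = 0 there)
X = X0 + t / (p1 + p2 * t)

# linearised form: z = t/(X - X0) = p1 + p2*t  ->  ordinary linear regression
z = t / (X - X0)
p2_est, p1_est = np.polyfit(t, z, 1)  # slope = p2, intercept = p1
```

On noiseless data the fit returns p1 and p2 essentially exactly; with real noisy data the linearised fit minimises a slightly different criterion than the χ² of the original model (see the next slide).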
Regression models - soaking

Linearisation of non-linear models (for example of the Page and Peleg models) means that the optimised parameters p1, p2 minimise some other criterion than χ², and also the other characteristics (covariance matrix, correlation index) do not correspond to the assumption of a normal distribution of errors. However, this effect is usually small and can be neglected.
Analytical solution - diffusion and heat

(Illustration: Vermeer)
Analytical solutions

Analytical solutions exist only for linear models.

Ordinary differential equations: all the analytical functions that you know, for example sin, cos, exp, tgh, Bessel functions, ..., are in fact defined as solutions of ordinary differential equations (see handbooks, for example Kamke E.: Differentialgleichungen, or Abramowitz M., Stegun I.: Handbook of Mathematical Functions). The general approach consists of expressing the solution in the form of an infinite power series and identifying the coefficients by substituting the expansion into the solved differential equation.

Partial differential equations: there are two ways.
- The Fourier method of separation of variables: the solution F(t,x,y,z) is searched for in the form of a product F = T(t)X(x)Y(y)Z(z). Substitution into the partial differential equation results in ordinary differential equations for T(t), X(x), Y(y) and Z(z).
- Application of integral transforms (Fourier, Laplace, Hankel): the result is an algebraic equation, which is to be solved; then a back-transformation must be used.
Analytical solution of diffusion (1/5)

The distribution of moisture X (kg water/kg solid) is described by Fick's equation

$$\frac{\partial X^*}{\partial t} = \frac{D_{ef}}{R^2}\,\frac{1}{r^{*m}}\frac{\partial}{\partial r^*}\left(r^{*m}\frac{\partial X^*}{\partial r^*}\right)$$

with the dimensionless variables

$$X^* = \frac{X - X_e}{X_0 - X_e}, \qquad r^* = \frac{r}{R}$$

where X_e is the equilibrium and X_0 the initial moisture, and m = 0, 1, 2 for a plate, cylinder and sphere, respectively.

Fourier method of separation of variables:

$$X^*(t, r^*) = \sum_{i=1}^{\infty} c_i F_i(r^*)\, G_i(t)$$

Substituting into Fick's PDE results in

$$\frac{R^2}{D_{ef}}\frac{1}{G_i}\frac{dG_i}{dt} = \frac{1}{r^{*m} F_i}\frac{d}{dr^*}\left(r^{*m}\frac{dF_i}{dr^*}\right) = -\lambda_i^2$$

The first term depends only on time t and the second only on r*; therefore both terms must be equal to a constant, independent of t and r*. This constant is called the eigenvalue.
Analytical solution of diffusion (2/5)

The solution for G_i(t) is an exponential function; the solution for F_i(r*) is cos(λr*) for a plate (m=0), the Bessel function J0(λr*) for a cylinder (m=1), and for a sphere (m=2, which is our case of spherical grains)

$$F_i(r^*) = \frac{\sin(\lambda_i r^*)}{r^*}, \qquad G_i(t) = e^{-\frac{\lambda_i^2}{R^2} D_{ef}\, t}$$

Phase equilibrium is assumed at the sphere surface (r* = 1, X = X_e). Therefore X*(t,1) = 0, and this condition must be satisfied by every function F_i. This is the condition for the eigenvalues

$$\sin(\lambda_i) = 0 \;\Rightarrow\; \lambda_i = i\pi$$

The dimensionless concentration profile is therefore

$$X^*(t, r^*) = \sum_{i=1}^{\infty} c_i\, \frac{\sin(i\pi r^*)}{r^*}\, e^{-\frac{i^2\pi^2 D_{ef}}{R^2}\, t}$$

The coefficients c_i must be selected so that the initial condition (the distribution of concentration at time zero) is satisfied. For a constant initial concentration X = X_0, it must hold that X* = 1 for arbitrary r*, therefore

$$1 = \sum_{i=1}^{\infty} c_i\, \frac{\sin(i\pi r^*)}{r^*}$$
Analytical solution of diffusion (3/5)

The coefficients c_i are evaluated from the orthogonality of the functions F_i(r*). Functions are orthogonal if their scalar product is zero, where the scalar product of functions is defined as the integral

$$\int_0^1 r^{*2} F_i(r^*)\, F_j(r^*)\, dr^* = 0 \qquad (i \neq j)$$

Proof: the eigenfunctions satisfy

$$F_j \frac{d}{dr^*}\left(r^{*2}\frac{dF_i}{dr^*}\right) + \lambda_i^2\, r^{*2} F_i F_j = 0, \qquad
F_i \frac{d}{dr^*}\left(r^{*2}\frac{dF_j}{dr^*}\right) + \lambda_j^2\, r^{*2} F_i F_j = 0$$

Subtracting and integrating from 0 to 1,

$$\int_0^1 \left[F_j \frac{d}{dr^*}\left(r^{*2}\frac{dF_i}{dr^*}\right) - F_i \frac{d}{dr^*}\left(r^{*2}\frac{dF_j}{dr^*}\right)\right] dr^* = (\lambda_j^2 - \lambda_i^2)\int_0^1 r^{*2} F_i F_j\, dr^*$$

and integrating per partes,

$$\left[r^{*2}\left(F_j\frac{dF_i}{dr^*} - F_i\frac{dF_j}{dr^*}\right)\right]_0^1 = (\lambda_j^2 - \lambda_i^2)\int_0^1 r^{*2} F_i F_j\, dr^*$$

The left-hand side is zero because both functions F must satisfy the boundary conditions, and therefore for λ_i ≠ λ_j the integral on the right-hand side is zero.
Analytical solution of diffusion (4/5)

Let us apply orthogonality to the previous equation (multiplied by r*² F_j and integrated):

$$1 = \sum_{i=1}^{\infty} c_i\, \frac{\sin(i\pi r^*)}{r^*}$$

$$\int_0^1 r^* \sin(j\pi r^*)\, dr^* = \sum_{i=1}^{\infty} c_i \int_0^1 \sin(i\pi r^*)\sin(j\pi r^*)\, dr^*$$

$$c_j = \frac{\int_0^1 r^* \sin(j\pi r^*)\, dr^*}{\int_0^1 \sin^2(j\pi r^*)\, dr^*} = \frac{2(-1)^{j+1}}{j\pi}$$

The concentration profile, and (by integration across the volume of the sphere) the total moisture content as a function of time, are

$$X^*(t, r^*) = \sum_{i=1}^{\infty} \frac{2(-1)^{i+1}}{i\pi}\,\frac{\sin(i\pi r^*)}{r^*}\, e^{-\frac{i^2\pi^2 D_{ef}}{R^2}\, t}, \qquad
\bar X^*(t) = \sum_{i=1}^{\infty} \frac{6}{i^2\pi^2}\, e^{-\frac{i^2\pi^2 D_{ef}}{R^2}\, t}$$
Analytical solution of diffusion (5/5)

The diffusion coefficient D_ef generally depends upon temperature and moisture, and also the equilibrium moisture X_e is a function of temperature, for example

$$D_{ef} = D_{wa} \exp\left(-\frac{E_a}{RT}\right)\exp(bX)$$

The regression model is therefore, strictly speaking, nonlinear (with the parameters p1 = D_wa, p2 = E_a and p3 = b), and the analytical solution with a substituted D_ef is only an approximation. This model was used by Katrin Burmester for coffee grains:

Heat and mass transfer during the coffee drying process, Journal of Food Engineering, Volume 99, Issue 4, August 2010, Pages 430-436, Katrin Burmester, Rudolf Eggers

Modeling and simulation of heat and mass transfer during drying of solids with hemispherical shell geometry, Computers & Chemical Engineering, Volume 35, Issue 2, 9 February 2011, Pages 191-199, I.I. Ruiz-López, H. Ruiz-Espinosa, M.L. Luna-Guevara, M.A. García-Alvarado
Optimisation

There are two basic optimisation techniques for the calculation of the parameters p_i minimising the χ² of nonlinear models:

Methods without derivatives, where the minimum can be identified only by repeated evaluation of s² (or another criterion of fit between prediction and experiment) for arbitrary values of the parameters p_i. These methods are necessary for very complicated regression models, for example models based upon finite element methods.

Derivative methods, making use of the values of all first (and sometimes second) derivatives of the regression model with respect to all parameters p1, p2, ..., pM.
Optimisation methods with derivatives

Gauss method of the least sum of squares of deviations

$$s^2 = \sum_i (y_i - f_i)^2 w_i$$

where the sum is over the data, f_i is the model prediction and w_i are weight coefficients (representing, for example, the variable accuracy of the measuring method).

Zero derivatives with respect to all parameters:

$$\frac{\partial s^2}{\partial p_j} = -2\sum_i (y_i - f_i)\frac{\partial f_i}{\partial p_j}\, w_i = 0, \qquad j = 1, 2, ..., M$$

Linearisation of the regression function by a Taylor expansion gives

$$\sum_i \left(y_i - f_{i0} - \sum_k \frac{\partial f_i}{\partial p_k}\Delta p_k\right)\frac{\partial f_i}{\partial p_j}\, w_i = 0$$

where Δp is the increment of the parameters in one iteration.
Optimisation methods with derivatives

In each iteration a system of linear equations is solved:

$$\sum_k C_{jk}\,\Delta p_k = B_j, \qquad C_{jk} = \sum_i \frac{\partial f_i}{\partial p_j}\frac{\partial f_i}{\partial p_k}\, w_i, \qquad B_j = \sum_i (y_i - f_{i0})\frac{\partial f_i}{\partial p_j}\, w_i$$

The most frequently used modification of the Gauss method is the Marquardt-Levenberg method: the diagonal of the matrix [[C]] is increased by adding a constant λ in the case that the iterations are not converging. For very high λ the matrix [[C]] is almost diagonal, and the Gauss method reduces to the gradient (steepest descent) method (the right-hand side is in fact the gradient of the minimised function s²). The λ value changes during the iterations: when the process converges, λ decreases and the faster Gauss method is preferred, while if the iterations diverge, λ increases (the gradient method is slower but more reliable).
Optimisation methods without derivatives

The simplest case is the optimisation of only one parameter (M = 1). First the global minimum is to be localised (for example by a random search). Then the exact position of the minimum is identified by the method of bisection or by the golden section. The golden section method reduces the uncertainty interval in each step in the ratio 0.618, and not in the ratio 0.5 as in bisection; however, only one new value of the regression function needs to be evaluated per golden-section step, and not the two values necessary for bisection. See the algorithm Golden section search and the following slide (definition of the golden section).

[Figure: values f1, f2 at the golden sections of the interval L1; then f3 on L2 = 0.618 L1 and f4 on L3 = 0.618 L2.]

Example: Initially two values f1, f2 at the golden sections of the interval L1 are calculated. Because f1 > f2, the minimum cannot lie in the left part, and the interval of uncertainty reduces to L2. We again need two values in this interval, but one value (f2) was already calculated in the previous step, so that only ONE new value needs to be calculated. And that is just the glamour of the golden section method and the secret of the magic ratio 0.618.
Golden Section

The golden section divides an interval into the parts p and q (p > q) so that

$$\frac{q}{p} = \frac{p}{p+q}$$

which is a quadratic equation for the ratio q/p,

$$\frac{q}{p}\left(1 + \frac{q}{p}\right) = 1 \;\Rightarrow\; \frac{q}{p} = \frac{\sqrt{5} - 1}{2} \doteq 0.618$$
Optimisation methods without derivatives

Example: How many steps of the golden section method (and how many values of the regression function) must be evaluated if a thousand-fold reduction of the uncertainty interval is required?

$$\frac{L}{m} = 0.618^n\, L \;\Rightarrow\; \log m = -n \log 0.618 \;\Rightarrow\; n = \frac{\log m}{0.209}$$

Result: for a 1000-fold increase of accuracy, about 14 steps are sufficient.

An approach to determine diffusivity in hardening concrete based on measured humidity profiles, Advanced Cement Based Materials, Volume 2, Issue 4, July 1995, Pages 138-144, D. Xin, D. G. Zollinger, G. D. Allen. (Diffusion in hardening concrete; identification of the diffusion coefficient by the golden section method.)
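The whole algorithm, including the reuse of one interior point per step, fits in a dozen lines (a Python sketch of the golden-section search described above):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Find the minimum of a unimodal f on [a, b]; each step shrinks the
    interval by the factor 0.618 and costs only ONE new evaluation of f."""
    g = (math.sqrt(5.0) - 1.0) / 2.0            # the magic ratio 0.618...
    x1, x2 = b - g * (b - a), a + g * (b - a)   # two interior golden points
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:              # minimum cannot lie in [a, x1): keep [x1, b]
            a, x1, f1 = x1, x2, f2              # old x2 becomes the new x1
            x2 = a + g * (b - a)
            f2 = f(x2)                          # the single new evaluation
        else:                    # minimum cannot lie in (x2, b]: keep [a, x2]
            b, x2, f2 = x2, x1, f1              # old x1 becomes the new x2
            x1 = b - g * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)
```

The reuse works because of the golden property: after discarding one end, the remaining interior point already sits at a golden section of the reduced interval.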
Optimisation methods without derivatives

The principles of one-parametric optimisation can also be applied to the M-parametric optimisation of p1, ..., pM, simply by repeating the 1D search separately for each parameter (Rosenbrock's method). However, the most frequently used is the Nelder-Mead simplex method. The principle is quite simple:

1. A simplex formed by M+1 vertices is generated (for two parameters p1, p2 it is a triangle).
2. The vertex with the worst value of the regression function is substituted by flipping (reflection), expansion or contraction with respect to the gravity center of the simplex.

Step 2 is repeated until the size of the simplex has decreased sufficiently.

[Animated GIF from wikipedia.org]
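A minimal version of the flip/expand/contract/shrink loop can be sketched as follows (a Python sketch with the standard coefficients; production codes such as MATLAB's fminsearch add many refinements):

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Minimise f by the Nelder-Mead simplex method (standard coefficients:
    reflection 1, expansion 2, contraction 0.5, shrink 0.5)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    simplex = [x0] + [x0 + step * e for e in np.eye(n)]  # M+1 vertices
    fvals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)                 # sort: best first, worst last
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:            # simplex small enough: stop
            break
        centroid = np.mean(simplex[:-1], axis=0)  # gravity centre without worst
        xr = centroid + (centroid - simplex[-1])  # flip (reflect) the worst vertex
        fr = f(xr)
        if fr < fvals[0]:                         # very good: try expansion
            xe = centroid + 2.0 * (centroid - simplex[-1])
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:                      # better than second worst: accept
            simplex[-1], fvals[-1] = xr, fr
        else:                                     # contraction toward the centroid
            xc = centroid + 0.5 * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                 # last resort: shrink toward best
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (p - simplex[0])
                                          for p in simplex[1:]]
                fvals = [fvals[0]] + [f(p) for p in simplex[1:]]
    return simplex[int(np.argmin(fvals))]
```

Note that the method needs only function values, never derivatives, which is why it also works for "black-box" goal functions such as a finite element simulation.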
Optimisation methods without derivatives

Zang H., Zhang S., Hapeshi K.: A review of nature inspired algorithms. J. Bionic Engineering 7 (2010), 232-237

- Ant Colony Optimisation (AS Ant System, ACS Ant Colony System)
- Bees Algorithm
- Genetic Algorithm
- SOMA (Self-Organizing Migrating Algorithm)
- Memetic Algorithm (algorithms of a pack of wolves)
MATLAB
Optimisation methods MATLAB

Linear polynomial regression:
p=polyfit(x,y,m)

Minimisation of a function (without constraints, Nelder-Mead method):
p=fminsearch(fun,p0)
where fun is a reference (a "handle", denoted by the symbol @) to a user-defined function calculating the value which is to be minimised for specified values of the model parameters (for example the sum of squares of deviations). The vector p0 is the initial estimate.

Nonlinear regression (nonlinear regression models):
p = nlinfit(x,y,modelfun,p0)
ci = nlparci(p,resid,'covar',sigma)
with also a statistical evaluation of the results (covariance matrix, intervals of uncertainty).
Optimisation methods MATLAB
Example: a measured drying curve, 10 points (time and moisture)

time X (moisture)
0 0.9406
1 0.7086
2 0.7196
3 0.5229
4 0.4657
5 0.3796
6 0.3023
7 0.1964
8 0.1545
9 0.1466

xdata=[0 1 2 3 4 5 6 7 8 9]';
ydata=[0.9406 0.7086 ...]';

The apostrophe [vector]' means transposition: instead of a row it will be a column vector. Plot the data by the command

plot(xdata,ydata,'*')

[Figure: plot of the measured drying curve.]
Optimisation methods MATLAB
The drying curve approximated by a cubic polynomial p1*x^3 + ... + p4:

p=polyfit(xdata,ydata,3)

returns the vector of calculated coefficients

p =
0.0001  0.0040  -0.1322  0.9103

Plot the polynomial with the coefficients p(1)=0.0001, p(2)=0.0040, ...:

Y=polyval(p,xdata)
hold on
plot(xdata,ydata,'*')
plot(xdata,Y)

(hold on enables plotting more curves into one graph.) All this could have been done by the single command

plot(xdata,ydata,'*',xdata,polyval(p,xdata))

[Figure: measured points and the fitted cubic polynomial.]
Optimisation methods MATLAB
The data of the drying curve can be better approximated by the diffusion model

$$X(t) = A \sum_{i=1}^{n} \frac{6}{i^2\pi^2}\, e^{-\frac{i^2\pi^2 D_{ef}}{R^2}\, t}$$

with A a "scale" coefficient. The diffusion coefficient D_ef and the radius of the particle R are not independent parameters, because only the ratio D_ef/R² appears in the model. The radius R can therefore be selected (for example R = 0.01 m), and only two regression parameters A, D_ef are to be identified. How?

- Define xmodel(t,p,pa) as the sum of the series (with the parameters p1=A, p2=Def, pa1=R, pa2=n).
- Define a function xdev(xmodel,p,pa,xdata,ydata) calculating the sum of squares of deviations.
- Calculate the optimum using p=fminsearch(xdev,p0).
Optimisation methods MATLAB
Model definition

$$X(t) = A \sum_{i=1}^{n} \frac{6}{i^2\pi^2}\, e^{-\frac{i^2\pi^2 D_{ef}}{R^2}\, t}$$

function xval = xmodel(t,p,pa)
A=p(1);
D=p(2);
R=pa(1);
ni=pa(2);
xv=0;
for i=1:ni
pii=(pi*i)^2;
xv=xv+6/pii*exp(-pii*D*t/R^2);
end
xval=A*xv;

This text should be saved as an M-file with the filename xmodel.m. In this way an arbitrary model of drying can be defined: for example the previously identified cubic polynomial, Peleg's or Page's models, or even models defined by differential equations. This last case will be discussed later.
Optimisation methods MATLAB
Definition of the goal function (sum of squares of deviations):

function sums = xdev(model,p,paux,xdat,ydat)
sums=0;
n=length(xdat);
for i=1:n
sums=sums+(model(xdat(i),p,paux)-ydat(i))^2;
end

Define the auxiliary parameters and search for the regression parameters by fminsearch:

pa=[0.01,10]       % R=0.01 and number of terms in the expansion n=10
p = fminsearch(@(p) xdev(@xmodel,p,pa,xdata,ydata),[.5;0.0003])

where [.5;0.0003] is the initial estimate of the parameters. The first argument of fminsearch must be a function of only one argument, the vector p of optimised parameters; the other, non-optimised parameters must be specified in MATLAB by using an anonymous function @(p) expression.
Optimisation methods MATLAB
Exactly the same procedure can be summarized in a single M-file:

function [estimates, model] = fitcurve(xdata, ydata)
start_point = [1 0.00005];      % two searched parameters A, D: initial estimate
model = @expfun;                % expfun is the model prediction and the sum of squares of deviations
estimates = fminsearch(model, start_point);   % call of the Nelder-Mead optimisation method
function [sse, FittedCurve] = expfun(params)
A = params(1);                  % optimised scale parameter
D = params(2);                  % optimised diffusion coefficient
R=0.01;                         % radius of the particle (to change it you must edit fitcurve.m)
ni=10;                          % number of terms of the series (correctly infinity, but 10 is usually enough)
ndata=length(xdata);            % number of points of the measured drying curve
sse=0;                          % result of expfun - the sum of squares of deviations
for idata=1:ndata
xv=0;                           % here the particular drying model Y(X,params) is programmed
for i=1:ni
pii=(pi*i)^2;
xv=xv+6/pii*exp(-pii*D*xdata(idata)/R^2);
end
FittedCurve(idata) = A* xv;
ErrorVector(idata) = FittedCurve(idata) - ydata(idata);
sse=sse+ErrorVector(idata)^2;
end
end
end

[estimates, model] = fitcurve(xdata,ydata)
EXAM
Regression models
Optimisation
EXAM
Balance equations

$$\frac{D\rho}{Dt} + \rho\,\nabla\cdot\vec v = 0, \qquad
\rho\frac{D\vec v}{Dt} = \nabla\cdot\boldsymbol\sigma + \rho\vec g, \qquad
\rho\frac{D}{Dt}\left(u + \frac{v^2}{2}\right) = -\nabla\cdot\vec q + \nabla\cdot(\boldsymbol\sigma\cdot\vec v) + Q + \rho\,\vec g\cdot\vec v$$

Lagrange variational principle (minimum of W_i - W_e)

$$W_i(u) = \frac{1}{2}\int_\Omega e : \sigma\, d\Omega, \qquad
W_e(u) = \int_\Omega \vec f\cdot\vec u\, d\Omega + \int_\Gamma \vec p\cdot\vec u\, d\Gamma + \sum_i \vec F_i\cdot\vec u_i$$

Goal function (chi-square)

$$\chi^2(\vec p) = \sum_{i=1}^{N}\left(\frac{y_i - f(x_i, \vec p)}{\sigma_i}\right)^2$$

Linear regression (parameters p minimising chi-square)

$$[[A]]^T[[A]][p] = [[A]]^T[y]$$

Minimisation methods with derivatives (Marquardt-Levenberg) and without derivatives (golden section, Nelder-Mead simplex method).