OPTI_ENERGY
Summer School: Optimization of Energy Systems and Processes
Gliwice, 24 – 27 June 2003
METHODS OF ENERGY SYSTEMS OPTIMIZATION
Christos A. Frangopoulos
National Technical University of Athens
Department of Naval Architecture and Marine Engineering
Contents

1. INTRODUCTION
2. DEFINITION OF OPTIMIZATION
3. LEVELS OF OPTIMIZATION OF ENERGY SYSTEMS
4. FORMULATION OF THE OPTIMIZATION PROBLEM
5. MATHEMATICAL METHODS FOR SOLUTION OF THE OPTIMIZATION PROBLEM
6. SPECIAL METHODS FOR OPTIMIZATION OF ENERGY SYSTEMS
7. INTRODUCTION OF ENVIRONMENTAL AND SUSTAINABILITY CONSIDERATIONS IN THE OPTIMIZATION OF ENERGY SYSTEMS
8. SENSITIVITY ANALYSIS
9. NUMERICAL EXAMPLES
1. INTRODUCTION

Questions to be answered:
• Given the energy needs, what is the best type of energy system to be used?
• What is the best system configuration (components and their interconnections)?
• What are the best technical characteristics of each component (dimensions, material, capacity, etc.)?
• What are the best flow rates, pressures and temperatures of the various working fluids?
• What is the best operating point of the system at each instant of time?
Questions (continued):
When a number of plants are available to serve a certain region:
• Which plants should be operated, and at what load under certain conditions?
• How should the operation and maintenance of each plant be scheduled in time?
Procedure to find a rational answer: optimization.
2. DEFINITION OF OPTIMIZATION

Optimization is the process of finding the conditions, i.e. the values of the variables, that give the minimum (or maximum) of the objective function.
3. LEVELS OF OPTIMIZATION OF ENERGY SYSTEMS

A. Synthesis: components and their interconnections.
B. Design: technical characteristics of components and properties of substances at the nominal (design) point.
C. Operation: operating properties of components and substances.
The complete optimization problem stated as a question: what is the synthesis of the system, the design characteristics of the components, and the operating strategy that lead to an overall optimum?
4. FORMULATION OF THE OPTIMIZATION PROBLEM
4.1 Mathematical Statement of the Optimization Problem

minimize f(x)    (4.1)
with respect to
x = (x_1, x_2, ..., x_n)    (4.2)
subject to the constraints:
h_i(x) = 0,  i = 1, 2, ..., m    (4.3)
g_j(x) ≤ 0,  j = 1, 2, ..., p    (4.4)
where
x: set of independent variables,
f(x): objective function,
h_i(x): equality constraint functions,
g_j(x): inequality constraint functions.
Alternative expression:
min_{v,w,z} f(v, w, z)    (4.1)′
where
v: set of independent variables for operation optimization,
w: set of independent variables for design optimization,
z: set of independent variables for synthesis optimization,
and
x = (v, w, z)    (4.5)

Design optimization:  min_{v,w} f_d(v, w)
Operation optimization:  min_v f_op(v)
Maximization is also covered by the preceding formulation, since:
min_x f(x) = −max_x [−f(x)]    (4.6)
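The identity (4.6) can be checked numerically. A small sketch with a hypothetical one-variable objective (the function and the grid are illustrative, not from the lecture):

```python
# f(x) = (x - 2)^2 + 1 on a grid over [0, 4]; its minimum value is 1 at x = 2.
f = lambda x: (x - 2.0) ** 2 + 1.0
xs = [i / 100 for i in range(401)]     # grid 0.00, 0.01, ..., 4.00

min_f = min(f(x) for x in xs)          # left-hand side of Eq. (4.6)
max_neg_f = max(-f(x) for x in xs)     # maximization of -f over the same grid

# min_x f(x) equals -max_x[-f(x)]:
same = (min_f == -max_neg_f)
```

Any minimization routine can therefore also maximize, simply by negating the objective.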
4.2 Objective Functions

Examples:
• minimization of weight of the system,
• minimization of size of the system,
• maximization of efficiency,
• minimization of fuel consumption,
• minimization of exergy destruction,
• maximization of the net power density,
• minimization of emitted pollutants,
• minimization of life cycle cost (LCC) of the system,
• maximization of the internal rate of return (IRR),
• minimization of the payback period (PBP),
• etc.
Multiobjective optimization: an attempt to take two or more objectives into consideration simultaneously.
4.2 Independent Variables

Quantities appearing in the equality and inequality constraints:
• parameters
• independent variables
• dependent variables
4.3 Equality and Inequality Constraints

Equality constraints: model of the components and of the system.
Inequality constraints: imposed by safety and operability requirements.
5. MATHEMATICAL METHODS FOR SOLUTION OF THE OPTIMIZATION PROBLEM
5.1 Classes of Mathematical Optimization Methods

• Constrained and unconstrained programming
• Search and calculus (or gradient) methods
• Linear, nonlinear, geometric and quadratic programming
• Integer- and real-valued programming
• Mixed integer linear programming (MILP)
• Mixed integer nonlinear programming (MINLP)
• Deterministic and stochastic programming
• Separable programming
• Single and multiobjective programming
• Dynamic programming and calculus of variations
• Genetic algorithms
• Simulated annealing
• Other methods
5.2 Basic Principles of Calculus Methods

5.2.1 Single-variable optimization

Fig. 5.1. Local and global optimum points of a multimodal function f(x) on the interval (a, b): A1, A2, A3 are relative maxima, with A2 the global maximum; B1, B2 are relative minima, with B1 the global minimum.
Theorem 1: Necessary condition.
A necessary condition for x* to be a local minimum or maximum of f(x) on the open interval (a, b) is that
f′(x*) = 0    (5.5)
If Eq. (5.5) is satisfied, then x* is a stationary point of f(x), i.e. a minimum, a maximum or an inflection point.
Fig. 5.2. Stationary points of f(x): global maximum, inflection point, local minimum, global minimum.
Theorem 2: Sufficient condition.
Let all the derivatives of a function up to order (n−1) be equal to zero, while the nth-order derivative is nonzero:
f′(x*) = f″(x*) = ... = f^(n−1)(x*) = 0,  f^(n)(x*) ≠ 0    (5.6)
where
f^(n)(x) = d^n f(x) / dx^n    (5.7)
If n is odd, then x* is a point of inflection.
If n is even, then x* is a local optimum. Moreover:
If f^(n)(x*) > 0, then x* is a local minimum.
If f^(n)(x*) < 0, then x* is a local maximum.
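Theorem 2 can be checked on a hand-worked case. The function f(x) = x⁴ and the derivative values below are illustrative, not from the lecture:

```python
# f(x) = x^4 at x* = 0: the first three derivatives vanish and the fourth is
# positive, so n = 4 (even) and x* is a local minimum by Theorem 2.
x_star = 0.0
derivs = [4 * x_star ** 3,      # f'(x*)
          12 * x_star ** 2,     # f''(x*)
          24 * x_star,          # f'''(x*)
          24.0]                 # f''''(x*)

# Order of the first nonzero derivative:
n = next(k + 1 for k, d in enumerate(derivs) if d != 0)

is_optimum = (n % 2 == 0)                       # even n -> local optimum
is_minimum = is_optimum and derivs[n - 1] > 0   # positive -> local minimum
```

Note that the second-derivative test alone is inconclusive here (f″(0) = 0); the higher-order condition resolves it.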
5.2.2 Multi-variable optimization with no constraints

Definitions

First derivatives of a function f(x) of n variables (gradient vector):
∇f(x) = (∂f(x)/∂x_1, ∂f(x)/∂x_2, ..., ∂f(x)/∂x_n)    (5.8)

Matrix of second partial derivatives of f(x) (Hessian matrix): the symmetric n×n matrix
F(x) = H_f(x) = ∇²f(x), with elements [∇²f(x)]_ij = ∂²f(x)/∂x_i ∂x_j,  i, j = 1, 2, ..., n    (5.9)
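For a concrete function, the gradient (5.8) and the Hessian (5.9) can be approximated by central differences; the quadratic test function below is illustrative, not from the lecture:

```python
def grad(f, x, h=1e-6):
    # Central-difference approximation of the gradient vector, Eq. (5.8).
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hessian(f, x, h=1e-4):
    # Central-difference approximation of the Hessian matrix, Eq. (5.9).
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = (list(x) for _ in range(4))
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Test function: f = x1^2 + 3*x1*x2 + 2*x2^2.
# Exact gradient at (1, 1): (5, 7); exact Hessian: [[2, 3], [3, 4]].
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
g = grad(f, [1.0, 1.0])
H = hessian(f, [1.0, 1.0])
```

The Hessian comes out symmetric, as Eq. (5.9) requires for twice continuously differentiable f.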
Definitions (continued)

The principal minor of order k of a symmetric n×n matrix is the matrix obtained by deleting the last n−k rows and columns of the initial matrix; it is denoted by A_k. Every n×n matrix thus has n principal minors.
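The principal minors A_k give a practical definiteness test for the Hessian (Sylvester's criterion: all their determinants positive ⇔ positive definite). A small pure-Python sketch with an illustrative 2×2 matrix:

```python
def det(M):
    # Determinant by Laplace expansion along the first row; fine for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def principal_minor_dets(A):
    # det(A_k) for k = 1..n, where A_k keeps the first k rows and columns
    # (i.e. the last n-k rows and columns are deleted).
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

A = [[2.0, 1.0], [1.0, 3.0]]          # an illustrative symmetric 2x2 matrix
minors = principal_minor_dets(A)      # det(A_1) = 2, det(A_2) = 5
is_positive_definite = all(m > 0 for m in minors)
```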
Theorem 3: Necessary conditions.
Necessary conditions for an interior point x* of the n-dimensional space X ⊆ Rⁿ to be a local minimum or maximum of f(x) are that
∇f(x*) = 0    (5.10)
and
∇²f(x*) is positive semidefinite (for a minimum) or negative semidefinite (for a maximum).    (5.11)
If Eq. (5.10) is satisfied, then x* is a minimum, a maximum or a saddle point.
Fig. 5.3. Saddle point x* of a function f(x_1, x_2).
Theorem 4: Sufficient conditions.
If an interior point x* of the space X ⊆ Rⁿ satisfies Eq. (5.10) and ∇²f(x*) is positive (or negative) definite, then x* is a local minimum (or maximum) of f(x).
5.2.3 Multi-variable optimization with equality constraints (Lagrange theory)

Statement of the optimization problem:
min_x f(x)    (5.12a)
subject to
h_i(x) = 0,  i = 1, 2, ..., m    (5.12b)

Lagrangian function:
L(x, λ) = f(x) + Σ_{i=1}^{m} λ_i h_i(x)    (5.13)

Lagrange multipliers:
λ = (λ_1, λ_2, ..., λ_m)
Necessary conditions:
∇_x L(x, λ) = 0    (5.14a)
∇_λ L(x, λ) = 0    (5.14b)

The system of Eqs. (5.14) consists of n+m equations. Its solution gives the values of the n+m unknowns x* and λ*.

Sufficient conditions:
Similar to Theorem 4, with ∇²_x L(x, λ) used instead of ∇²f(x).
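A hand-worked instance of Eqs. (5.14), with an illustrative quadratic objective and a single constraint: minimize f = x₁² + x₂² subject to h = x₁ + x₂ − 1 = 0.

```python
# L(x, lam) = x1^2 + x2^2 + lam * (x1 + x2 - 1).
# Eqs. (5.14) give n + m = 3 linear equations:
#   dL/dx1  = 2*x1 + lam     = 0
#   dL/dx2  = 2*x2 + lam     = 0
#   dL/dlam = x1 + x2 - 1    = 0
# Solving by elimination: x1 = x2 = 1/2 and lam = -1.
x1, x2, lam = 0.5, 0.5, -1.0

# Residuals of the three necessary conditions at the candidate solution:
residuals = (2 * x1 + lam, 2 * x2 + lam, x1 + x2 - 1)
```

All three residuals vanish, so (x*, λ*) solves the system; since ∇²_x L = 2I is positive definite, Theorem 4 (with the Lagrangian Hessian) confirms a minimum.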
5.2.4 The general optimization problem (Kuhn – Tucker theory)

Presented in the complete text.
5.3 Nonlinear Programming Methods

5.3.1 Single-variable nonlinear programming methods

Golden section search

Golden section ratio:
τ = (√5 − 1) / 2 = 0.61803...

Fig. 5.4. Golden section search: the interior points x1 and x2 divide the initial interval L0 = (a, b) in the ratios (1−τ) and τ.
Length of the initial interval containing the optimum point:
L0 = b − a
The function f(x) is evaluated at the two points:
x1 = a + (1 − τ) L0    (5.19a)
x2 = a + τ L0    (5.19b)
If f(x1) < f(x2), then x* is located in the interval (a, x2).
If f(x1) ≥ f(x2), then x* is located in the interval (x1, b).
Length of the new interval:
L1 = x2 − a = b − x1 = τ L0
Length of the interval of uncertainty after N iterations:
L_N = τ^N L0    (5.21)
Number of iterations needed for a satisfactory interval of uncertainty L_N:
N = ln(L_N / L0) / ln τ    (5.22)
Convergence criteria:
(i) N ≥ N_max
(ii) L_N ≤ ε1
(iii) |f(x_{N+1}) − f(x_N)| ≤ ε2
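The procedure above can be sketched in a few lines; criterion (ii) is used for stopping, and the test function is illustrative:

```python
TAU = (5 ** 0.5 - 1) / 2     # golden section ratio, ~0.61803

def golden_section(f, a, b, eps=1e-8, n_max=200):
    # Shrinks the interval of uncertainty by a factor tau per iteration, Eq. (5.21).
    x1 = a + (1 - TAU) * (b - a)       # Eq. (5.19a)
    x2 = a + TAU * (b - a)             # Eq. (5.19b)
    f1, f2 = f(x1), f(x2)
    for _ in range(n_max):
        if b - a <= eps:               # criterion (ii)
            break
        if f1 < f2:                    # x* lies in (a, x2): reuse x1 as the new x2
            b, x2, f2 = x2, x1, f1
            x1 = a + (1 - TAU) * (b - a)
            f1 = f(x1)
        else:                          # x* lies in (x1, b): reuse x2 as the new x1
            a, x1, f1 = x1, x2, f2
            x2 = a + TAU * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

x_star = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Only one new function evaluation is needed per iteration, because the surviving interior point already divides the reduced interval in the golden ratio.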
Newton – Raphson method

Series of trial points:
x_{k+1} = x_k − f′(x_k) / f″(x_k)    (5.23)

Fig. 5.5. Newton – Raphson method (convergence): starting from x1, the successive points x2, x3, ... approach x*, the zero of f′(x).
Convergence criteria:
(i) |f′(x_{k+1})| ≤ ε1
(ii) |x_{k+1} − x_k| ≤ ε2
(iii) |f(x_{k+1}) − f(x_k)| ≤ ε3
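Eq. (5.23) with stopping criterion (i) takes only a few lines; the test function and starting point are illustrative:

```python
def newton_stationary(fp, fpp, x0, eps=1e-10, k_max=50):
    # Iterates x_{k+1} = x_k - f'(x_k)/f''(x_k), Eq. (5.23),
    # to find a stationary point f'(x*) = 0.
    x = x0
    for _ in range(k_max):
        x -= fp(x) / fpp(x)
        if abs(fp(x)) < eps:       # criterion (i)
            break
    return x

# f(x) = x^4 - 3x^2 + 2, so f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6.
# Starting from x0 = 2, the iterates converge to x* = sqrt(3/2).
x_star = newton_stationary(lambda x: 4 * x ** 3 - 6 * x,
                           lambda x: 12 * x ** 2 - 6,
                           x0=2.0)
```

Near a well-behaved stationary point the convergence is quadratic; as Fig. 5.6 warns, a poor starting point can make the same iteration diverge.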
Fig. 5.6. Divergence of the Newton – Raphson method: starting from x0, the points x1, x2, x3 move ever farther from x*.
Modified Regula Falsi method (MRF)

Initial points a0 and b0 are determined such that:
f′(a0) · f′(b0) < 0
Then it is
a0 ≤ x* ≤ b0

Fig. 5.7. Modified Regula Falsi method: the successive intervals (a0, b0), (a1, b1), ... close in on x*.
Convergence criteria:
(i) |f′(x_{n+1})| ≤ ε1
(ii) b_{n+1} − a_{n+1} ≤ ε2
(iii) |f(x_{n+1}) − f(x_n)| ≤ ε3
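The slides do not spell out the modification rule, so the sketch below uses the common Illinois variant (the retained endpoint's function value is halved when that endpoint survives two updates in a row, which prevents the stalling of plain regula falsi visible in Fig. 5.7); the test function is illustrative:

```python
def modified_regula_falsi(g, a, b, eps=1e-10, n_max=100):
    # Root of g (here g = f') in [a, b], assuming g(a)*g(b) < 0.
    ga, gb = g(a), g(b)
    assert ga * gb < 0
    side = 0
    x = a
    for _ in range(n_max):
        x = (a * gb - b * ga) / (gb - ga)   # false-position (secant) point
        gx = g(x)
        if abs(gx) < eps:                   # criterion (i) on g = f'
            break
        if ga * gx < 0:                     # root in (a, x): move the right end
            b, gb = x, gx
            if side == -1:
                ga *= 0.5                   # Illinois modification
            side = -1
        else:                               # root in (x, b): move the left end
            a, ga = x, gx
            if side == 1:
                gb *= 0.5                   # Illinois modification
            side = 1
    return x

# g = f'(x) = 4x^3 - 6x changes sign on [1, 2]; the root is sqrt(3/2).
x_star = modified_regula_falsi(lambda x: 4 * x ** 3 - 6 * x, 1.0, 2.0)
```

Unlike Newton – Raphson, this bracketing method cannot diverge: the root always stays inside the current interval.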
5.3.2 Multi-variable nonlinear programming methods
Two of the most successful methods for energy systems optimization:
Generalized Reduced Gradient method (GRG)
Sequential Quadratic Programming (SQP)
Generalized Reduced Gradient method (GRG)
It is based on the idea that, if an optimization problem has n independent
variables x and m equality constraints, then, at least in theory, the
system of m equations can be solved for m of the independent variables.
Thus, the number of independent variables is reduced to n-m, the
dimensionality of the optimization problem is decreased and the solution
is facilitated.
Sequential Quadratic Programming (SQP)

A quadratic programming problem consists of a quadratic objective function and linear constraints. Due to the linear constraints, the space of feasible solutions is convex; consequently, a local optimum is also the global optimum. For the same reason, the necessary optimality conditions are also sufficient. Since the objective function is of second degree (quadratic) and the constraints are linear, the necessary conditions lead to a system of linear equations, which is solved easily. The SQP approach tries to exploit these special features: it proceeds by sequentially approximating the real problem with a quadratic problem.
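The heart of one SQP iteration can be seen on a single quadratic subproblem (all numbers are invented for illustration). With an identity Hessian, the necessary (KKT) conditions are exactly the small linear system the text describes:

```python
# Quadratic subproblem for the step d = (d1, d2):
#   min  0.5*(d1^2 + d2^2) + c1*d1 + c2*d2      (quadratic model of the objective)
#   s.t. a1*d1 + a2*d2 = b                      (linearized constraint)
# Necessary conditions are LINEAR:
#   d1 + c1 + lam*a1 = 0
#   d2 + c2 + lam*a2 = 0
#   a1*d1 + a2*d2    = b
c1, c2 = 1.0, -2.0            # gradient of the objective at the current iterate
a1, a2, b = 1.0, 1.0, 2.0     # linearized constraint data

# Eliminate d = -(c + lam*a), substitute into the constraint, solve for lam:
lam = -(b + a1 * c1 + a2 * c2) / (a1 ** 2 + a2 ** 2)
d1 = -(c1 + lam * a1)
d2 = -(c2 + lam * a2)
```

A full SQP code would rebuild this subproblem around each new iterate (updating c, a and b) and repeat until the step d is negligible.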
5.4 Decomposition

An optimization problem is of separable form if it can be written as
min_x f(x) = Σ_{k=1}^{K} f_k(x_k)    (5.31a)
subject to
h_k(x_k) = 0,  k = 1, 2, ..., K    (5.31b)
g_k(x_k) ≤ 0,  k = 1, 2, ..., K    (5.31c)
where the set x is partitioned into K disjoint sets:
x = (x_1, x_2, ..., x_k, ..., x_K)    (5.32)
A separable problem can be decomposed into K separate subproblems:
min_{x_k} f_k(x_k)    (5.33a)
subject to
h_k(x_k) = 0    (5.33b)
g_k(x_k) ≤ 0    (5.33c)
Each subproblem is solved independently of the other subproblems, and the solution thus obtained is also the solution of the initial problem.
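A minimal illustration (objective and bounds are hypothetical): a separable quadratic splits into two bound-constrained subproblems of the form of Eqs. (5.33), which are solved independently.

```python
# f(x) = (x1 - 1)^2 + (x2 + 2)^2 with bounds 0 <= x1, x2 <= 5:
# each term depends on one variable only, so the problem is separable.
def solve_subproblem(target, lo, hi):
    # min (x - target)^2 s.t. lo <= x <= hi: clip the unconstrained optimum
    # to the feasible interval.
    return min(max(target, lo), hi)

x1 = solve_subproblem(1.0, 0.0, 5.0)    # subproblem 1: x1* = 1 (interior optimum)
x2 = solve_subproblem(-2.0, 0.0, 5.0)   # subproblem 2: x2* = 0 (bound active)
f_total = (x1 - 1.0) ** 2 + (x2 + 2.0) ** 2
```

Because the variables and constraints do not couple across subproblems, assembling the subproblem optima gives the optimum of the original problem directly.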
5.5 Procedure for Solution of the Problem by a Mathematical Optimization Algorithm

Structure of the computer program for the solution of the optimization problem:

Main program: it reads the values of the parameters, the initial values of the independent variables and the lower and upper bounds on the constraint functions. It calls the optimization algorithm.

Simulation package: it evaluates the dependent variables and the objective function. It is called by the optimization algorithm.

Constraints subroutine: it determines the values of the inequality constraint functions. It is called by the optimization algorithm.

Optimization algorithm: starting from the given initial point, it searches for the optimum. It prints intermediate and final results, messages regarding convergence, number of function evaluations, etc.
Searching for the global optimum

(a) The user may solve the problem repeatedly, starting from different points in the domain where x is defined. Of course, there is no guarantee that the global optimum is reached.

(b) A coarse search of the domain is first conducted by, e.g., a genetic algorithm. Then, the points with the most promising values of the objective function are used as starting points for a nonlinear programming algorithm, in order to determine the optimum point accurately. This approach has a high probability of locating the global optimum.
5.6 Multilevel Optimization
In multilevel optimization, the problem is reformulated as a set of
subproblems and a coordination problem, which preserves the coupling
among the subproblems.
Multilevel optimization can be combined with decomposition either of
the system into subsystems or of the whole period of operation into a
series of time intervals or both.
Example: synthesis-design-operation optimization of an energy
system under time-varying conditions.
Overall objective function:
min_{x,z} f(x, z)    (5.34)
where
x: set of independent variables for operation,
z: set of independent variables for synthesis and design.
Objective function for each time interval:
min_{x_k} φ_k(x_k),  k = 1, 2, ..., K    (5.35)
First-level problem
For a fixed set z*, find the x_k* that minimizes φ_k(x_k, z*), k = 1, 2, ..., K.

Second-level problem
Find a new z* that minimizes f(x*, z), where x* is the optimal solution of the first-level problem.

The procedure is repeated until convergence is achieved.
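A toy two-level sketch (the cost model, prices and search grid are invented for illustration): z is a design variable such as installed capacity, x_k is the energy produced in period k, and the remainder of the demand d_k is purchased. Because the inner (operation) problem is here solved exactly for every trial z, a single outer search over z replaces the iteration loop of the general procedure.

```python
d = [1.0, 2.0, 3.0]            # demand in each time interval k
price = 2.0                    # unit cost of purchased energy

def op_cost(x, z, dk):
    # Period cost: quadratic own-production cost plus purchase of the shortfall.
    return x * x / z + price * (dk - x)

def first_level(z):
    # For fixed z, each period is optimized independently (Eq. 5.35):
    # d(op_cost)/dx = 2x/z - price = 0  ->  x = price*z/2, capped at the demand.
    xs = [min(price * z / 2.0, dk) for dk in d]
    return xs, sum(op_cost(x, z, dk) for x, dk in zip(xs, d))

def total_cost(z):
    # Overall objective (Eq. 5.34): capital cost 0.5*z plus optimal operating cost.
    return 0.5 * z + first_level(z)[1]

# Second level: search over the design variable, re-solving the first level each time.
z_star = min((0.5 + 0.1 * i for i in range(60)), key=total_cost)
xs_star, _ = first_level(z_star)
```

At the optimum the unit is sized generously (z* = 5.3 on this grid), so every period's demand is served from own production.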
5.7 Modular Simulation and Optimization

Fig. 5.8. Structure of the computer program for modular simulation and optimization: simulation and local optimization modules 1–4 exchange the independent variables x_r, the local dependent variables w_r and the dependent variables y_r with the optimizer, a common block of parameters p and a common block of dependent variables y.
Simulation model for each module r:
y_r = Y(x_r, y_ri),  w_r = W(x_r, y_ri)
where
x_r: set of independent variables of module r,
y_ri: set of input dependent variables (coming from other modules),
y_r: set of output dependent variables of module r, i.e. dependent variables which are also used by the simulation models of other modules or by the optimization algorithm,
w_r: set of dependent variables appearing in the simulation model of module r only.
5.8 Parallel Processing

Parallel computers: multiple processing units combined in an organized way, such that multiple independent computations for the same problem can be performed concurrently. Parallel processing can solve the optimization problem in a fraction of the time.

Modular approach and decomposition with parallel processing:
• Simulation and/or optimization of modules or subsystems are performed on parallel processors.
• The coordinating optimization problem is solved by the main processor.

Multilevel optimization:
• Level A on parallel processors.
• Level B on the main processor.
6. SPECIAL METHODS FOR OPTIMIZATION OF ENERGY SYSTEMS
6.1 Methods for Optimization of Heat Exchanger Networks (HEN)

Statement of the HEN synthesis problem:
A set of hot process streams (HP) to be cooled and a set of cold process streams (CP) to be heated are given. Each hot and cold process stream has a specified heat capacity flowrate, while its inlet and outlet temperatures can be specified exactly or given as inequalities. A set of hot utilities (HU) and a set of cold utilities (CU), along with their corresponding temperatures, are also provided. Determine the heat exchanger network with the least total annualized cost.
The solution of the optimization problem provides the
• hot and cold utilities required,
• stream matches and the number of heat exchangers,
• heat load of each heat exchanger,
• network configuration with flowrates and temperatures of all streams, and
• areas of heat exchangers.
Classes of methods for solution of the problem:
a. Heuristic methods
b. Search methods
c. Pinch method
d. Mathematical programming methods
e. Artificial Intelligence methods
6.2 The First Thermoeconomic Optimization Method

Thermoeconomics is a technique which combines thermodynamic and economic analysis for the evaluation, improvement and optimization of thermal systems.

Initiators of the first method: Tribus, Evans, El-Sayed.

Two basic concepts are introduced: exergy and internal economy. The balance between thermodynamic measures and capital expenditures is an economic feature which applies to the complex plant as a whole and to each of its components individually.
6.3 The Functional Approach

6.3.1 Concepts and definitions

System: a set of interrelated units, of which no unit is unrelated to any other unit.
Unit: a piece or complex of apparatus serving to perform one particular function.
Function: a definite end or purpose of the unit or of the system as a whole.
Functional Analysis: the formal, documented determination of the functions of the system as a whole and of each unit individually.
6.3.2 The Functional diagram of a system

Functional diagram: a picture of a system, composed primarily of the units, represented by small geometrical figures, and of lines connecting the units, which represent the relations between units or between the system and the environment, as they are established by the distribution of functions (i.e. "services" or "products").
Fig. 6.1. Unit r of a system, with its incoming functions y_r′r, y_r″r, y_r‴r and its product (function) y_r.
Figure 6.2. Junction: the incoming functions y_r′r, y_r″r, ... merge into y_r, with Σ_{r′=0}^{R} y_r′r = y_r.

Figure 6.3. Branching point: the product y_r is distributed into the functions y_rr′, y_rr″, ..., with y_r = Σ_{r′=0}^{R} y_rr′.
6.3.3 Economic Functional Analysis

Total cost for construction and operation of the system (benefits, e.g. revenue from products, are taken into consideration as negative costs):
F = Σ_r Z_r + Σ_r Σ_k Γ_0kr − Σ_r Γ_r0    (6.3)
where
Z_r: capital cost,
Γ_0kr: costs of resources and services, as well as penalties for hazards caused to the environment,
Γ_r0: revenue from products or services.
Units may be monetary or physical (e.g. energy, exergy): "physical economics."
6.3.3 Thermoeconomic Functional Analysis

Cost rates in the case of steady-state operation:
Ḟ = Σ_r Ż_r + Σ_r Σ_k Γ̇_0kr − Σ_r Γ̇_r0    (6.4)
It is:
Ż_r = Ż_r(x_r, y_r)    (6.5a)
Γ̇_0kr = Γ̇_0kr(y_0kr)    (6.5b)
Γ̇_r0 = Γ̇_r0(y_r0)    (6.5c)
Ḟ = Ḟ(x, y)    (6.5d)
so that
Ḟ(x, y) = Σ_r Ż_r(x_r, y_r) + Σ_r Σ_k Γ̇_0kr(y_0kr) − Σ_r Γ̇_r0(y_r0)    (6.6)
Mathematical functions derived by the analysis of the system:
y_rr′ = Y_rr′(x_r′, y_r′),  r′ = 1, 2, ..., R;  r = 0, 1, 2, ..., R    (6.7)
Interconnections between units or between a unit and the environment:
y_r = Σ_{r′=0}^{R} y_rr′,  r = 1, 2, ..., R    (6.8)
For a quantitatively fixed product:
y_r0 = ŷ_r0    (6.9)
Cost balance for break-even operation (no profit, no loss):
C_r = Ż_r + Σ_{r′=0}^{R} c_r′ y_r′r = c_r y_r,  r = 1, 2, ..., R    (6.10)
6.3.4 Functional Optimization

Optimization objective:
min Ḟ = Σ_r Ż_r(x_r, y_r) + Σ_r Σ_k Γ̇_0kr(y_0kr) − Σ_r Γ̇_r0(y_r0)    (6.11)

Lagrangian:
L = Σ_r Ż_r + Σ_r Σ_k Γ̇_0kr − Σ_r Γ̇_r0 + Σ_r Σ_r′ λ_rr′ (Y_rr′ − y_rr′) + Σ_r λ_r (Σ_r′ y_rr′ − y_r)    (6.12)

First-order necessary conditions for an extremum:
∇_x L(x, y, λ) = 0,  ∇_y L(x, y, λ) = 0,  ∇_λ L(x, y, λ) = 0    (6.13)
From ∂L/∂y_rr′ = 0 it follows that
λ_rr′ = λ_r    (6.14)
Then the Lagrangian is written:
L = Σ_r (Λ_r − λ_r y_r) + Σ_r Σ_k (Γ̇_0kr − λ_0kr y_0kr) − Σ_r (Γ̇_r0 − λ_r0 y_r0)    (6.15)
where
Λ_r = Ż_r + Σ_{r′=0}^{R} λ_r′r Y_r′r,  r = 1, 2, ..., R    (6.16)
The necessary conditions lead to:
∇_{x_r} Λ_r = 0    (6.17a)
λ_r = ∂Λ_r / ∂y_r    (6.17b)
λ_0kr = ∂Γ̇_0kr / ∂y_0kr    (6.17c)
λ_r0 = ∂Γ̇_r0 / ∂y_r0    (6.17d)

The Lagrange multipliers serve as economic indicators: each is the marginal price (cost or revenue) of the corresponding function (product) y.
6.3.5 Complete functional decomposition

If the sets of decision variables x_r are disjoint, then complete decomposition is applicable, and the subsystems correspond to the units and junctions of the system:
q = R    (6.18)
Sub-problem of each unit r:
∇_{x_r} Λ_r = 0    (6.19a)
Y_rr′(x_r′, y_r′) − y_rr′ = 0    (6.19b)
λ_r = ∂Λ_r / ∂y_r    (6.19c)
Local optimization problem:
min_{x_r} Λ_r = Ż_r + Σ_{r′=0}^{R} λ_r′r Y_r′r    (6.20a)
subject to the constraints
y_r′r = Y_r′r(x_r, y_r)    (6.20b)
The solution of the system of Eqs. (6.19) gives the optimum values of the independent variables and the Lagrange multipliers.
6.3.6 Partial functional decomposition

If the sets x_r are not disjoint, but it is possible to formulate larger sets x_ν which are disjoint, then partial functional decomposition is applicable.

Necessary conditions:
∇_{x_ν} Λ_ν = 0    (6.21)
where
Λ_ν = Σ_r Λ_r    (6.22)
The summation in Eq. (6.22) is taken over those units and junctions which belong to the subsystem ν. The solution of the system of Eqs. (6.21) and (6.19b,c) gives the optimum values of the independent variables and the Lagrange multipliers.
6.4 Artificial Intelligence Techniques

Real-world problems are often not "textbook" problems: though the goals may be well defined,
• data are often incomplete and expressed in qualitative instead of quantitative form;
• the constraints are weak or even vague.
In order to help the engineer in handling these cases, new procedures have been developed under the general denomination of "expert systems" or "artificial intelligence".
7. INTRODUCTION OF ENVIRONMENTAL AND SUSTAINABILITY CONSIDERATIONS IN THE OPTIMIZATION OF ENERGY SYSTEMS
7.1 Principal Concerns

Aspects to be considered:
1. Scarcity of natural resources,
2. Degradation of the natural environment,
3. Social implications of the energy system, both positive (e.g. job creation, general welfare) and negative (effects on human health).

Approaches:
a. Sustainability indicators,
b. Total cost function.
7.2 The New Objective

7.2.1 Total cost function

min F = Σ_r Z_r + Σ_r Σ_k Γ_0kr + Σ_e Γ_e − Σ_r Γ_r0    (7.1)
where Γ_e is the eth environmental and social cost due to construction and operation of the system.

Another expression:
Total cost = Internal general cost + Internal environmental cost + External environmental cost    (7.2)
7.2.2 Cost of resources

Scarcity of resources
A quantity of raw material extracted today has two consequences:
(a) it will not be available for future generations,
(b) it will cause future generations to spend more energy for extracting the remaining quantities of the same material.
Current market prices do not, in general, account for long-term local or global scarcity, or for the ensuing difficulties and costs of extraction that such scarcity may cause.
General cost function:
Γ_0kr = Γ_0kr(y_0kr)    (6.5b)
An example of cost function:
Γ_0kr = f_p0kr · f_s0kr · c_0kr · y_0kr    (7.3)
where
c_0kr: unit cost (e.g. market price) of resource 0k → r,
f_p0kr: pollution penalty factor for resource 0k → r,
f_s0kr: scarcity factor for resource 0k → r.
7.2.3 Pollution measures and costs

General cost function:
Γ_e = Γ_e(p_e)    (7.4)
An example of cost function:
Γ_e = f_pe · c_e · p_e    (7.5)
where
p_e: an appropriate measure of pollution,
c_e: unit environmental and social cost due to the pollutant e,
f_pe: pollution penalty factor for the pollutant e.
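Plugging illustrative numbers (not from the lecture) into Eq. (7.5):

```python
p_e = 1000.0   # measure of pollution, e.g. tons of CO2 per year
c_e = 20.0     # unit environmental and social cost per ton (assumed value)
f_pe = 1.5     # pollution penalty factor (> 1: emissions above a permitted level)

gamma_e = f_pe * c_e * p_e   # environmental and social cost Gamma_e, Eq. (7.5)
```

This term then enters the total cost objective (7.1) alongside capital and resource costs.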
Examples of pollution measures p_e:
• quantity of the pollutant (e.g. kg of CO2),
• exergy content of the pollutant,
• entropy increase of the environment due to the pollutant,
• etc.
Approaches to estimate the environmental and social cost due to pollution:
(i) Indirect methods: measure the value of goods not traded in formal markets (e.g. life, scenic and recreational goods).
(ii) Direct methods (damage cost): measure goods for which economic costs can be readily assessed (e.g. value of agricultural products, or the cost of repairing damaged goods).
(iii) Proxy methods (avoidable cost): measure the costs of avoiding the initiating insult.
Urging
Lack of sufficient data, a limited epistemological position and other difficulties may cause uncertainty in the numerical results obtained. However, an attempt to derive reasonable figures and take these into consideration in the analysis and optimization makes far more sense than ignoring the external effects of energy systems.
8. SENSITIVITY ANALYSIS
8.1 Sensitivity Analysis with respect to the Parameters

Simply called sensitivity analysis or parametric analysis.

A. Preparation of graphs
The optimization problem is solved for several values of a single parameter, while the values of the other parameters are kept constant. Then graphs are drawn which show the optimal values of the independent variables and of the objective function as functions of the particular parameter.
B. Evaluation of the uncertainty of the objective function

Uncertainty of the objective function due to the uncertainty of a parameter:
ΔF = (∂F/∂p_j) Δp_j    (8.1)
Maximum uncertainty of the objective function due to the uncertainties of a set of parameters:
ΔF_max = Σ_j |(∂F/∂p_j) Δp_j|    (8.2)
The most probable uncertainty of the objective function due to the uncertainties of a set of parameters:
ΔF_prob = [ Σ_j ((∂F/∂p_j) Δp_j)² ]^(1/2)    (8.3)
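Eqs. (8.2) and (8.3) are straightforward to evaluate once the sensitivities are known; the sensitivities and uncertainties below are hypothetical numbers for illustration:

```python
# Three parameters with sensitivities dF/dp_j at the optimum and uncertainties Dp_j.
sens = [120.0, -45.0, 8.0]    # dF/dp_j (illustrative values)
dp = [0.05, 0.10, 0.50]       # Delta p_j (illustrative values)

# Maximum uncertainty, Eq. (8.2): all errors add up with the worst signs.
dF_max = sum(abs(s * e) for s, e in zip(sens, dp))

# Most probable uncertainty, Eq. (8.3): root-sum-square of the contributions.
dF_prob = sum((s * e) ** 2 for s, e in zip(sens, dp)) ** 0.5
```

As expected, ΔF_prob ≤ ΔF_max, since independent errors rarely all act in the same direction.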
C. Evaluation of certain Lagrange multipliers

If the constraints of the optimization problem are written in the form
h_j(x) = p_j    (8.4a)
g_k(x) ≤ p_k    (8.4b)
where p_j, p_k are parameters, then the Lagrangian is written
L = F(x) + Σ_j λ_j [p_j − h_j(x)] + Σ_k μ_k [p_k − g_k(x)]    (8.5)
It is:
λ_j = ∂L/∂p_j,  μ_k = ∂L/∂p_k    (8.6)
At the optimum point, for the p_j's and for those of the p_k's for which Eq. (8.4b) is valid as an equality, it is
∂L/∂p_j = ∂F/∂p_j,  ∂L/∂p_k = ∂F/∂p_k    (8.7)
Equations (8.6) and (8.7) result in
λ_j = ∂F/∂p_j,  μ_k = ∂F/∂p_k    (8.8)
Consequently, the Lagrange multipliers express the sensitivity of the objective function to the corresponding parameters and, through Eq. (8.1), its uncertainty.
If the sensitivity analysis reveals that the optimal solution is very sensitive with respect to a parameter, then one or more of the following actions may be necessary:
• an attempt at a more accurate estimation of the parameter (decrease of the uncertainty of the parameter),
• modifications in the design of the system with the aim of reducing the uncertainty,
• changes in decisions regarding the use of (physical and economic) resources for the construction and operation of the system.
A careful sensitivity analysis may prove more useful than the solution of the optimization problem itself.
8.2 Sensitivity Analysis of the Objective Function with respect to the Independent Variables

The sensitivity of the optimum solution with respect to the independent variable x_i is revealed by the values of the following derivatives at the optimum point:
∂f(x)/∂x_i |_x*  and  ∂x_j/∂x_i |_x*,  j ≠ i
or by the corresponding finite differences:
Δf(x)/Δx_i |_x*  and  Δx_j/Δx_i |_x*
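The finite-difference form can be evaluated directly from the simulation model; the two-variable cost function and its optimum below are hypothetical:

```python
# f has its optimum at x* = (2, 5). Perturb each x_i by +1 % and compare the
# relative change of f: a discrete version of df/dx_i at the optimum point.
f = lambda x: (x[0] - 2.0) ** 2 + 10.0 * (x[1] - 5.0) ** 2 + 100.0
x_opt = [2.0, 5.0]
f0 = f(x_opt)

rel_change = []
for i in range(len(x_opt)):
    x = list(x_opt)
    x[i] *= 1.01                        # +1 % perturbation of x_i
    rel_change.append((f(x) - f0) / f0)
```

Here the optimum is much flatter along x_1 than along x_2, so tolerances on x_2 matter more in practice, e.g. when manufacturing limitations prevent the exact optimal value from being realized.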