7.9 Generalized Reduced Gradient Method
we find, at X_1,

\[
\nabla_Y f = \begin{Bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \end{Bmatrix}_{X_1}
= \begin{Bmatrix} 2(-2.6 - 2) \\ -2(-2.6 - 2) + 4(2 - 2)^3 \end{Bmatrix}
= \begin{Bmatrix} -9.2 \\ 9.2 \end{Bmatrix}
\]

\[
\nabla_Z f = \left\{ \partial f/\partial x_3 \right\}_{X_1}
= \{-4(x_2 - x_3)^3\}_{X_1} = \{0\}
\]

\[
[C] = \left[ \partial g_1/\partial x_1 \;\; \partial g_1/\partial x_2 \right]_{X_1} = [5 \;\; -10.4]
\]

\[
[D] = \left[ \partial g_1/\partial x_3 \right]_{X_1} = [32]
\]

\[
[D]^{-1}[C] = \frac{1}{32}\,[5 \;\; -10.4] = [0.15625 \;\; -0.325]
\]

\[
G_R = \nabla_Y f - \bigl[[D]^{-1}[C]\bigr]^{T} \nabla_Z f
= \begin{Bmatrix} -9.2 \\ 9.2 \end{Bmatrix}
- \begin{Bmatrix} 0.15625 \\ -0.325 \end{Bmatrix}(0)
= \begin{Bmatrix} -9.2 \\ 9.2 \end{Bmatrix}
\]
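As a cross-check of these numbers, the following is a minimal Python sketch (not part of the original text) that evaluates the generalized reduced gradient for this example. It assumes the objective f = (x1 − x2)^2 + (x2 − x3)^4 and the constraint g1 = x1(1 + x2^2) + x3^4 − 3 = 0 used throughout this example, with Y = (x1, x2) independent and Z = (x3) dependent; the function names are only illustrative.

```python
import numpy as np

# Example data recovered from the text: f = (x1 - x2)^2 + (x2 - x3)^4,
# g1 = x1*(1 + x2^2) + x3^4 - 3 = 0, Y = (x1, x2), Z = (x3).
def grad_f(x):
    x1, x2, x3 = x
    return np.array([2*(x1 - x2),
                     -2*(x1 - x2) + 4*(x2 - x3)**3,
                     -4*(x2 - x3)**3])

def reduced_gradient(x):
    x1, x2, x3 = x
    gf = grad_f(x)
    dY_f, dZ_f = gf[:2], gf[2:]            # gradients w.r.t. Y and Z
    C = np.array([[1 + x2**2, 2*x1*x2]])   # [dg1/dx1  dg1/dx2]
    D = np.array([[4*x3**3]])              # [dg1/dx3]
    # G_R = grad_Y f - (D^-1 C)^T grad_Z f
    return dY_f - np.linalg.solve(D, C).T @ dZ_f

X1 = np.array([-2.6, 2.0, 2.0])
print(reduced_gradient(X1))   # approximately [-9.2  9.2]
```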
Step 3: Since the components of G_R are not zero, the point X_1 is not optimum, and hence we go to step 4.
Step 4: We use the steepest descent method and take the search direction as
\[
S = -G_R = \begin{Bmatrix} 9.2 \\ -9.2 \end{Bmatrix}
\]
Step 5: We find the optimal step length along S.
(a) Considering the design variables, we use Eq. (7.111) to obtain

\[
\text{For } y_1 = x_1: \quad \lambda = \frac{3 - (-2.6)}{9.2} = 0.6087
\]

\[
\text{For } y_2 = x_2: \quad \lambda = \frac{-3 - (2)}{-9.2} = 0.5435
\]

Thus the smaller value gives λ1 = 0.5435. Equation (7.113) gives

\[
T = -([D]^{-1}[C])\,S = -(0.15625 \;\; -0.325)\begin{Bmatrix} 9.2 \\ -9.2 \end{Bmatrix} = -4.4275
\]

and hence Eq. (7.114) leads to

\[
\text{For } z_1 = x_3: \quad \lambda = \frac{-3 - (2)}{-4.4275} = 1.1293
\]

Thus λ2 = 1.1293.
(b) The upper bound on λ is given by the smaller of λ1 and λ2, which is equal to 0.5435. By expressing

\[
X = \begin{Bmatrix} Y + \lambda S \\ Z + \lambda T \end{Bmatrix}
\]

we obtain

\[
X = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
= \begin{Bmatrix} -2.6 \\ 2 \\ 2 \end{Bmatrix}
+ \lambda \begin{Bmatrix} 9.2 \\ -9.2 \\ -4.4275 \end{Bmatrix}
= \begin{Bmatrix} -2.6 + 9.2\lambda \\ 2 - 9.2\lambda \\ 2 - 4.4275\lambda \end{Bmatrix}
\]
and hence
\[
f(\lambda) = f(X) = (-2.6 + 9.2\lambda - 2 + 9.2\lambda)^2 + (2 - 9.2\lambda - 2 + 4.4275\lambda)^4
= 518.7806\lambda^4 + 338.56\lambda^2 - 169.28\lambda + 21.16
\]
df/dλ = 0 gives

\[
2075.1225\lambda^3 + 677.12\lambda - 169.28 = 0
\]

from which we find the root as λ* ≈ 0.22. Since λ* is less than the upper bound value 0.5435, we use λ*.

(c) The new vector Xnew is given by

\[
X_{\text{new}} = \begin{Bmatrix} Y_{\text{old}} + dY \\ Z_{\text{old}} + dZ \end{Bmatrix}
= \begin{Bmatrix} -2.6 + 0.22(9.2) \\ 2 + 0.22(-9.2) \\ 2 + 0.22(-4.4275) \end{Bmatrix}
= \begin{Bmatrix} -0.576 \\ -0.024 \\ 1.02595 \end{Bmatrix}
\]

with

\[
dY = \lambda^* S = \begin{Bmatrix} 2.024 \\ -2.024 \end{Bmatrix}, \qquad
dZ = \lambda^* T = \{-0.97405\}
\]
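The step-length computation in step 5 can be reproduced numerically as well. The sketch below (illustrative only, using the same assumed f as in the earlier sketch) minimizes f(λ) on [0, 0.5435] by a golden-section search rather than by solving the cubic analytically, and then applies the update with the text's rounded value λ* = 0.22.

```python
import numpy as np

X1 = np.array([-2.6, 2.0, 2.0])
S = np.array([9.2, -9.2])              # search direction for Y (= -G_R)
T = np.array([-4.4275])                # direction for Z from Eq. (7.113)
d = np.concatenate([S, T])             # combined move direction

def f(x):
    x1, x2, x3 = x
    return (x1 - x2)**2 + (x2 - x3)**4

def f_lambda(lam):
    return f(X1 + lam * d)

# Golden-section search for the minimizer of f(lambda) on [0, 0.5435],
# the upper bound obtained from the side constraints -3 <= xi <= 3.
lo, hi = 0.0, 0.5435
r = (np.sqrt(5.0) - 1.0) / 2.0
for _ in range(80):
    a, b = hi - r * (hi - lo), lo + r * (hi - lo)
    if f_lambda(a) < f_lambda(b):
        hi = b
    else:
        lo = a
print(0.5 * (lo + hi))                 # about 0.22 (root of the cubic above)
print(X1 + 0.22 * d)                   # [-0.576, -0.024, 1.02595]
```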
Now, we need to check whether this vector is feasible. Since

\[
g_1(X_{\text{new}}) = (-0.576)[1 + (-0.024)^2] + (1.02595)^4 - 3 = -2.4684 \neq 0
\]

the vector Xnew is infeasible. Hence we hold Ynew constant and modify Znew using Newton's method [Eq. (7.108)] as

\[
dZ = [D]^{-1}\bigl[-g(X) - [C]\,dY\bigr]
\]
Since

\[
[D] = \left[ \partial g_1/\partial z_1 \right] = [4x_3^3] = [4(1.02595)^3] = [4.319551]
\]

\[
g_1(X) = \{-2.4684\}
\]

\[
[C] = \left[ \partial g_1/\partial y_1 \;\; \partial g_1/\partial y_2 \right]
= \bigl[\, 2(-0.576 + 0.024) \;\;\; -2(-0.576 + 0.024) + 4(-0.024 - 1.02595)^3 \,\bigr]
= [-1.104 \;\; -3.5258]
\]

we have

\[
dZ = \frac{1}{4.319551}\Bigl[\, 2.4684 - \{-1.104 \;\; -3.5258\}\begin{Bmatrix} 2.024 \\ -2.024 \end{Bmatrix} \Bigr] = \{-0.5633\}
\]

and Znew becomes

\[
Z_{\text{new}} = Z_{\text{old}} + dZ = \{2 - 0.5633\} = \{1.4367\}
\]

The current Xnew becomes

\[
X_{\text{new}} = \begin{Bmatrix} Y_{\text{old}} + dY \\ Z_{\text{old}} + dZ \end{Bmatrix}
= \begin{Bmatrix} -0.576 \\ -0.024 \\ 1.4367 \end{Bmatrix}
\]
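As a quick check of this Newton correction, the quantities quoted above can be plugged directly into dZ = [D]^{-1}[−g(X) − [C] dY]. The short sketch below does only that; it is not part of the original text.

```python
import numpy as np

# Quantities quoted in the text at the trial point (-0.576, -0.024, 1.02595)
D  = np.array([[4.319551]])
g  = np.array([-2.4684])
C  = np.array([[-1.104, -3.5258]])
dY = np.array([2.024, -2.024])

dZ = np.linalg.solve(D, -g - C @ dY)   # Eq. (7.108): dZ = D^{-1} [-g(X) - C dY]
print(dZ)                              # about -0.5633
print(2.0 + dZ)                        # Z_new = Z_old + dZ, about 1.4367
```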
The constraint becomes

\[
g_1 = (-0.576)[1 + (-0.024)^2] + (1.4367)^4 - 3 = 0.6842 \neq 0
\]
Since this Xnew is infeasible, we need to apply Newton's method [Eq. (7.108)] at the current Xnew. In the present case, instead of repeating Newton's iteration, we can find the value of Znew = {x3}new by satisfying the constraint as

\[
g_1(X) = (-0.576)[1 + (-0.024)^2] + x_3^4 - 3 = 0
\]

or

\[
x_3 = (2.4237)^{0.25} = 1.2477
\]
This gives

\[
X_{\text{new}} = \begin{Bmatrix} -0.576 \\ -0.024 \\ 1.2477 \end{Bmatrix}
\]

and

\[
f(X_{\text{new}}) = (-0.576 + 0.024)^2 + (-0.024 - 1.2477)^4 = 2.9201
\]
Next we go to step 1.
Step 1: We do not have to change the set of independent and dependent variables and
hence we go to the next step.
Step 2: We compute the GRG at the current X using Eq. (7.105). Since
\[
\nabla_Y f = \begin{Bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \end{Bmatrix}
= \begin{Bmatrix} 2(-0.576 + 0.024) \\ -2(-0.576 + 0.024) + 4(-0.024 - 1.2477)^3 \end{Bmatrix}
= \begin{Bmatrix} -1.104 \\ -7.1225 \end{Bmatrix}
\]

\[
\nabla_Z f = \left\{ \partial f/\partial z_1 \right\} = \left\{ \partial f/\partial x_3 \right\}
= \{-4(-0.024 - 1.2477)^3\} = \{8.2265\}
\]

\[
[C] = \left[ \partial g_1/\partial x_1 \;\; \partial g_1/\partial x_2 \right]
= \bigl[\, 1 + (-0.024)^2 \;\;\; 2(-0.576)(-0.024) \,\bigr]
= [1.000576 \;\; 0.027648]
\]

\[
[D] = \left[ \partial g_1/\partial x_3 \right] = [4x_3^3] = [4(1.2477)^3] = [7.7694]
\]

\[
[D]^{-1}[C] = \frac{1}{7.7694}\,[1.000576 \;\; 0.027648] = [0.128784 \;\; 0.003558]
\]

\[
G_R = \nabla_Y f - \bigl[[D]^{-1}[C]\bigr]^{T} \nabla_Z f
= \begin{Bmatrix} -1.104 \\ -7.1225 \end{Bmatrix}
- \begin{Bmatrix} 0.128784 \\ 0.003558 \end{Bmatrix}(8.2265)
= \begin{Bmatrix} -2.1634 \\ -7.1518 \end{Bmatrix}
\]
Since G_R ≠ 0, we need to proceed to the next step.
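As with the first iteration, this reduced gradient can be checked numerically. The sketch below (illustrative only, with the same assumed f and g1 as in the earlier sketches) evaluates G_R at the updated point.

```python
import numpy as np

x1, x2, x3 = -0.576, -0.024, 1.2477        # point after the first GRG iteration
grad_Y = np.array([2*(x1 - x2),
                   -2*(x1 - x2) + 4*(x2 - x3)**3])
grad_Z = np.array([-4*(x2 - x3)**3])
C = np.array([[1 + x2**2, 2*x1*x2]])       # [dg1/dx1  dg1/dx2]
D = np.array([[4*x3**3]])                  # [dg1/dx3]
G_R = grad_Y - np.linalg.solve(D, C).T @ grad_Z
print(G_R)                                 # about [-2.1634, -7.1518]
```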
Note: It can be seen that the value of the objective function has been reduced from an initial value of 21.16 to 2.9201 in one iteration.
7.10
SEQUENTIAL QUADRATIC PROGRAMMING
Sequential quadratic programming is one of the most recently developed and perhaps one of the best methods of optimization. The method has a theoretical basis that is related to (1) the solution of a set of nonlinear equations using Newton's method, and (2) the derivation of simultaneous nonlinear equations by applying the Kuhn-Tucker conditions to the Lagrangian of the constrained optimization problem. In this section we present both the derivation of the equations and the solution procedure of the sequential quadratic programming approach.
7.10.1
Derivation
Consider a nonlinear optimization problem with only equality constraints:
Find X which minimizes f(X)

subject to

\[
h_k(X) = 0, \qquad k = 1, 2, \ldots, p \tag{7.117}
\]
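For orientation before the derivation, the two ingredients mentioned above act on the Lagrange function of problem (7.117). The following is a standard statement of that system, added here only for reference (it is not an equation of the original text), using the usual notation λ_k for the Lagrange multipliers:

\[
L(X, \boldsymbol{\lambda}) = f(X) + \sum_{k=1}^{p} \lambda_k\, h_k(X)
\]

\[
\nabla_X L = \nabla f(X) + \sum_{k=1}^{p} \lambda_k\, \nabla h_k(X) = \mathbf{0}, \qquad
h_k(X) = 0, \quad k = 1, 2, \ldots, p
\]

Newton's method applied to these simultaneous nonlinear equations in (X, λ) is what leads to the quadratic programming subproblem solved at each iteration of the SQP approach.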