subject to

λ ≤ μ_f(X)
λ ≤ μ_{g_j^{(l)}}(X),   j = 1, 2, . . . , m
λ ≤ μ_{g_j^{(u)}}(X),   j = 1, 2, . . . , m                    (13.59)

13.6.4 Numerical Results
The minimization of the error between the generated and specified outputs of the
four-bar mechanism shown in Fig. 13.9 is considered. The design vector is taken as
X = {a, b, c, β}ᵀ. The mechanism is constrained to be a crank-rocker mechanism, so that

a − b ≤ 0
a − c ≤ 0
a ≤ 1
δ = [(a + c) − (b + 1)][(c − a)² − (b − 1)²] ≤ 0
The maximum deviation of the transmission angle (μ) from 90° is restricted to be less
than a specified value, t_max = 35°. The specified output angle is

θ_s(φ) = 20° + φ/3,     0° ≤ φ ≤ 240°
θ_s(φ) = unspecified,   240° ≤ φ < 360°
Linear membership functions are assumed for the response characteristics [13.22].
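To illustrate how the crisp formulation of Eq. (13.59) can be posed for a general-purpose solver, the following minimal sketch maximizes λ over an augmented design vector. The membership functions shown are hypothetical linear ones chosen only for illustration; they are not the membership functions of the mechanism problem in [13.22]:

```python
import numpy as np
from scipy.optimize import minimize

def mu_f(x):
    # Hypothetical linear membership for the fuzzy objective:
    # full satisfaction at f <= 0, none at f >= 4.
    f = (x[0] - 1.0)**2 + (x[1] - 2.0)**2
    return np.clip((4.0 - f) / 4.0, 0.0, 1.0)

def mu_g(x):
    # Hypothetical linear membership for one fuzzy constraint g(x) <~ 1:
    # full satisfaction at g <= 1, none at g >= 2.
    g = x[0] + x[1]
    return np.clip(2.0 - g, 0.0, 1.0)

# Augment the design vector with lambda: z = [x1, x2, lam].
# Maximizing lambda means minimizing -lambda, subject to
# lam <= mu_f(X) and lam <= mu_g(X) as in Eq. (13.59).
cons = [{"type": "ineq", "fun": lambda z: mu_f(z[:2]) - z[2]},
        {"type": "ineq", "fun": lambda z: mu_g(z[:2]) - z[2]}]
res = minimize(lambda z: -z[2], x0=[0.5, 0.5, 0.5],
               bounds=[(-5.0, 5.0), (-5.0, 5.0), (0.0, 1.0)],
               constraints=cons)
print("lambda* =", res.x[2], "at X* =", res.x[:2])
```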
optimum solution is found to be X = {0.2537
0 .8901
0 .8865
Š 0 .7858
with f _ = 1.6562 and _ = 0.4681. This indicates that the maximum level of satisfaction that can be achieved in the presence of fuzziness in the problem is 0.4681. The
transmission angle constraint is found to be active at the optimum solution [13.22].
13.7 NEURAL-NETWORK-BASED OPTIMIZATION
The immense computational power of the nervous system in solving perceptual problems
in the presence of massive amounts of sensory data has been attributed to its parallel
processing capability. Neural computing strategies have been adopted to solve
optimization problems in recent years [13.23, 13.24]. A neural network is a massively
parallel network of interconnected simple processors (neurons) in which each neuron
accepts a set of inputs from other neurons and computes an output that is propagated
to the output nodes. Thus a neural network can be described in terms of the individual
neurons, the network connectivity, the weights associated with the interconnections
between neurons, and the activation function of each neuron. The network maps an
input vector from one space to another. The mapping is not specified but is learned.
Consider a single neuron as shown in Fig. 13.10. The neuron receives a set of
n inputs, x_i, i = 1, 2, . . . , n, from its neighboring neurons and a bias whose value
is equal to 1. Each input has a weight (gain) w_i associated with it. The weighted
sum of the inputs determines the state or activity of the neuron and is given by

a = Σ_{i=1}^{n+1} w_i x_i = Wᵀ X

where X = {x₁ x₂ · · · x_n 1}ᵀ. A simple function is now used to
provide a mapping from the n-dimensional space of inputs into a one-dimensional
space of the output, which the neuron sends to its neighbors. The output of a neuron is
a function of its state and can be denoted as f (a). Usually, no output will be produced
unless the activation level of the node exceeds a threshold value. The output of a neuron
is commonly described by a sigmoid function as
f(a) = 1/(1 + e⁻ᵃ)                    (13.60)
which is shown graphically in Fig. 13.10. The sigmoid function can handle large as
well as small input signals. The slope of the function f (a) represents the available
gain. Since the output of the neuron depends only on its inputs and the threshold value,
each neuron can be considered as a separate processor operating in parallel with other
neurons. The learning process consists of determining values for the weights w_i that
lead to an optimal association of the inputs and outputs of the neural network.
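The neuron's computation, the weighted sum followed by the sigmoid of Eq. (13.60), fits in a few lines of code. The following is a minimal sketch; the input values and weights are arbitrary illustrative numbers, not data from the text:

```python
import numpy as np

def neuron_output(x, w):
    """Output of the single neuron of Fig. 13.10: the sigmoid of Eq. (13.60)
    applied to the activity a = W^T X, where X holds the n inputs plus a
    bias input fixed at 1."""
    X = np.append(x, 1.0)               # X = {x1 x2 ... xn 1}^T
    a = np.dot(w, X)                    # a = sum_{i=1}^{n+1} w_i x_i
    return 1.0 / (1.0 + np.exp(-a))     # f(a), Eq. (13.60)

# Illustrative values (assumed): three inputs, four weights.
x = np.array([0.2, -0.5, 0.8])
w = np.array([0.4, 0.1, -0.3, 0.05])    # last entry weights the bias input
print(neuron_output(x, w))              # output lies in (0, 1)
```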
Figure 13.10 Single neuron and its output. [13.23], reprinted with permission of Gordon & Breach Science Publishers.
Several neural network architectures, such as the Hopfield and Kohonen networks,
have been proposed to reflect the basic characteristics of a single neuron. These architectures differ from one another in the number of neurons in the network,
the nature of the threshold functions, the connectivities of the various neurons, and
the learning procedures. A typical architecture, known as the multilayer feedforward
network, is shown in Fig. 13.11. In this figure the arcs represent the unidirectional
feedforward communication links between the neurons. A weight or gain associated
with each of these connections controls the output passing through a connection. The
weight can be positive or negative, depending on the excitatory or inhibitory nature
of the particular neuron. The strengths of the various interconnections (weights) act as
repositories for knowledge representation contained in the network.
The network is trained by minimizing the mean-squared error between the actual
output of the output layer and the target output for all the input patterns. The error is
minimized by adjusting the weights associated with various interconnections. A number
of learning schemes, including a variation of the steepest descent method, have been
used in the literature. These schemes govern how the weights are to be varied to
minimize the error at the output nodes. For illustration, consider the network shown
in Fig. 13.12. This network is to be trained to map the angular displacement and
angular velocity relationships, transmission angle, and the mechanical advantage of a
four-bar function-generating mechanism (Fig. 13.9). The inputs to the five neurons in
the input layer include the three link lengths of the mechanism (r₂, r₃, and r₄) and the
angular displacement and velocity of the input link (θ₂ and ω₂). The outputs of the six
neurons in the output layer include the angular positions and velocities of the coupler
and the output links (θ₃, ω₃, θ₄, and ω₄), the transmission angle (γ), and the mechanical
advantage (η) of the mechanism.
Figure 13.11 Multilayer feedforward network. [13.23], reprinted with permission of Gordon & Breach Science Publishers.
Figure 13.12 Network used to train relationships for a four-bar mechanism. [13.23], reprinted with permission of Gordon & Breach Science Publishers.
The network is trained by inputting several possible
combinations of the values of r₂, r₃, r₄, θ₂, and ω₂ and supplying the corresponding
values of θ₃, θ₄, ω₃, ω₄, γ, and η. The difference between the values predicted by the
network and the actual output is used to adjust the various interconnection weights
such that the mean-squared error at the output nodes is minimized. Once trained, the
network provides a rapid and efficient scheme that maps the input into the desired
output of the four-bar mechanism. It is to be noted that the explicit equations relating
r₂, r₃, r₄, θ₂, and ω₂ to the output quantities θ₃, θ₄, ω₃, ω₄, γ, and η have not been
programmed into the network; rather, the network learns these relationships during the
training process by adjusting the weights associated with the various interconnections.
The same approach can be used for other mechanical and structural analyses that might
require finite-element-based computations.
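A minimal sketch of such a training loop is given below: a one-hidden-layer feedforward network fitted by steepest descent on the mean-squared output error. The layer sizes, the step size, the omission of bias terms, and the `simulate_fourbar` placeholder (standing in for the kinematic analysis that generates the training pairs in [13.23]) are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Hypothetical stand-in for the four-bar kinematic analysis that produces
# the six target outputs for each input pattern; placeholder only.
coupling = rng.standard_normal((5, 6))
def simulate_fourbar(inp):
    return np.tanh(inp @ coupling)

X = rng.uniform(-1.0, 1.0, size=(200, 5))  # scaled r2, r3, r4, theta2, omega2
T = simulate_fourbar(X)                    # theta3, theta4, omega3, omega4, gamma, eta

W1 = 0.5 * rng.standard_normal((5, 8))     # input -> hidden weights (8 sigmoid neurons)
W2 = 0.5 * rng.standard_normal((8, 6))     # hidden -> output weights (linear outputs)
step = 0.1                                 # steepest-descent step size (assumed)

for epoch in range(2000):
    H = sigmoid(X @ W1)                    # hidden-layer activations
    E = H @ W2 - T                         # output error over all training patterns
    # Error gradients (up to a constant factor) with respect to the weights:
    gW2 = H.T @ E / len(X)
    gW1 = X.T @ ((E @ W2.T) * H * (1.0 - H)) / len(X)
    W1 -= step * gW1                       # adjust the interconnection weights
    W2 -= step * gW2                       # to reduce the mean-squared error
print("final mean-squared error:", np.mean(E**2))
```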
Numerical Results. The minimization of the structural weight of the three-bar
truss described in Section 7.22.1 (Fig. 7.21) was considered with constraints on
the cross-sectional areas and stresses in the members. Two load conditions were
considered with P = 20,000 lb, E = 10 × 10⁶ psi, ρ = 0.1 lb/in³, H = 100 in.,
σ_min = −15,000 psi, σ_max = 20,000 psi, A_i^(l) = 0.1 in² (i = 1, 2), and
A_i^(u) = 5.0 in² (i = 1, 2). The solution obtained using neural-network-based
optimization is [13.23]: x₁* = 0.788 in², x₂* = 0.4079 in², and f* = 26.3716 lb.
This can be compared with the solution given by nonlinear programming:
x₁* = 0.7745 in², x₂* = 0.4499 in², and f* = 26.4051 lb.
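The role the trained network plays in such a study can be sketched as follows: its fast response prediction replaces the exact structural analysis inside a conventional optimizer. Everything below is an assumption for illustration; `surrogate_stress` is a hypothetical placeholder for a trained network, not the model of [13.23], and only the weight expression (two outer bars of length √2·H plus one vertical bar of length H) follows the standard three-bar truss geometry:

```python
import numpy as np
from scipy.optimize import minimize

RHO, H, P = 0.1, 100.0, 20000.0   # lb/in^3, in., lb (data from the text)
SIGMA_MAX = 20000.0               # psi

def weight(x):
    # Structural weight: two outer bars of length sqrt(2)*H and area x1,
    # one vertical bar of length H and area x2.
    return RHO * H * (2.0 * np.sqrt(2.0) * x[0] + x[1])

def surrogate_stress(x):
    # Hypothetical placeholder for the trained network's stress prediction;
    # a real study would evaluate the trained feedforward net here.
    return P / (np.sqrt(2.0) * x[0] + x[1])

cons = [{"type": "ineq", "fun": lambda x: SIGMA_MAX - surrogate_stress(x)}]
res = minimize(weight, x0=[1.0, 1.0],
               bounds=[(0.1, 5.0), (0.1, 5.0)], constraints=cons)
print("A1, A2 =", res.x, " weight =", weight(res.x), "lb")
```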
REFERENCES AND BIBLIOGRAPHY
13.1 J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
13.2 I. Rechenberg, Cybernetic Solution Path of an Experimental Problem, Library Translation 1122, Royal Aircraft Establishment, Farnborough, Hampshire, UK, 1965.
13.3 D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.