Computational Intelligence COM907M1
Tutorial 1
1. In the field of hydrology, the study of rainfall patterns is of great importance. The rate of rainfall, in units of mm/h, falling in a particular geographic region can be described linguistically. The membership functions for the linguistic variables 'heavy' and 'light' are defined as follows:
 0.2 0.4 0.6 
'heavy'  



8
12 
 5
 0.5 0.8 1 
'light '  

 
8 5
 12
Develop membership functions for the following linguistic phrases:
i. More or less light
ii. Heavy but not very heavy
iii. More or less light but not very heavy
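For checking the hand calculations, here is a minimal Python sketch of the usual hedge operators ('very' as concentration, 'more or less' as dilation, 'not' as complement, 'but' as min) applied pointwise to the discrete sets above; the operator choices and the dictionary encoding are illustrative assumptions, not part of the question.

```python
import math

# Discrete fuzzy sets from the question: membership value per rainfall rate (mm/h).
heavy = {5: 0.2, 8: 0.4, 12: 0.6}
light = {12: 0.5, 8: 0.8, 5: 1.0}

def very(mu):          # concentration: 'very A' = A^2
    return {x: m ** 2 for x, m in mu.items()}

def more_or_less(mu):  # dilation: 'more or less A' = sqrt(A)
    return {x: math.sqrt(m) for x, m in mu.items()}

def not_(mu):          # complement: 'not A' = 1 - A
    return {x: 1.0 - m for x, m in mu.items()}

def but(a, b):         # 'but' taken as AND, i.e. pointwise min
    return {x: min(a[x], b[x]) for x in a}

print(more_or_less(light))                          # i.   more or less light
print(but(heavy, not_(very(heavy))))                # ii.  heavy but not very heavy
print(but(more_or_less(light), not_(very(heavy))))  # iii. more or less light but not very heavy
```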
2. Let A be a fuzzy set in X defined as

A = {0.9/1 + 0.4/2 + 0/3}

and given is the fuzzy relation R in X × Y via the following relational matrix:

$$R = A \times B = \begin{bmatrix} 1 & 0.8 & 0.1 \\ 0.8 & 0.6 & 0.3 \\ 0.6 & 0.3 & 0.1 \end{bmatrix}$$

What will be the fuzzy output B in Y using the max-min operation?
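A minimal numpy sketch of the max-min composition for the data above; the orientation of R (rows indexed by the elements of X) is an assumption.

```python
import numpy as np

A = np.array([0.9, 0.4, 0.0])          # fuzzy set A on X = {1, 2, 3}
R = np.array([[1.0, 0.8, 0.1],
              [0.8, 0.6, 0.3],
              [0.6, 0.3, 0.1]])        # fuzzy relation R on X x Y

# Max-min composition: B(y) = max over x of min(A(x), R(x, y))
B = np.max(np.minimum(A[:, None], R), axis=0)
print(B)                               # gives [0.9, 0.8, 0.3] for this data
```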
3. In a car braking simulation system, the load on a car is described by the following MFs.

Figure 2.1: MFs 'Small' and 'Large' for load.
Define the following labels, which have linguistic hedges appended to the fuzzy labels Small and Large defined above, and express them graphically:
i. Small but very large
ii. Small or very large
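Since Figure 2.1 only names the two labels, the sketch below assumes simple straight-line MFs for Small and Large over a 0-100 load range purely for illustration, and plots the two hedged labels using min for 'but' and max for 'or'.

```python
import numpy as np
import matplotlib.pyplot as plt

load = np.linspace(0, 100, 501)

# Assumed shoulder-shaped MFs; the exact shapes of Figure 2.1 are not reproduced here.
small = np.clip((50 - load) / 50, 0, 1)   # 1 at load = 0, falling to 0 at load = 50
large = np.clip((load - 50) / 50, 0, 1)   # 0 up to load = 50, rising to 1 at load = 100

very_large = large ** 2                   # hedge 'very' = concentration (squaring)

plt.plot(load, np.minimum(small, very_large), label='Small but very Large (min)')
plt.plot(load, np.maximum(small, very_large), label='Small or very Large (max)')
plt.xlabel('load'); plt.ylabel('membership'); plt.legend(); plt.show()
```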
Computational Intelligence COM907M1
Tutorial 2
1. Consider the following Takagi-Sugeno rules:
If x is A1 and y is B1 Then z1 = x + y + 1
If x is A2 and y is B1 Then z2 = 2x + y + 1
If x is A1 and y is B2 Then z3 = 2x + 3y
If x is A2 and y is B2 Then z4 = 2x + 5
The antecedent fuzzy sets are
 0.1 0.6 1.0 
A1  



2
3 
1
 0.9 0.4 0 
A2  

 
2 3
 1
 1 1 0 .3 
B1    

4 5 6 
 0.1 0.9 1.0 
B2  



5
6 
 4
Compute the value of z for x = 1 and y = 4, where z is defined as

$$z = \frac{w_1 z_1 + w_2 z_2 + w_3 z_3 + w_4 z_4}{w_1 + w_2 + w_3 + w_4}$$

and the w_i are the firing strengths of the rules.
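A numerical sketch of the Takagi-Sugeno computation at x = 1 and y = 4, assuming min as the AND operator for the firing strengths (product would be an equally valid choice); the consequents follow the rules as written above.

```python
# Discrete antecedent sets: membership value per element.
A1 = {1: 0.1, 2: 0.6, 3: 1.0}
A2 = {1: 0.9, 2: 0.4, 3: 0.0}
B1 = {4: 1.0, 5: 1.0, 6: 0.3}
B2 = {4: 0.1, 5: 0.9, 6: 1.0}

x, y = 1, 4

# Firing strengths with min as the AND operator (assumption).
w = [min(A1[x], B1[y]),   # rule 1: A1 and B1
     min(A2[x], B1[y]),   # rule 2: A2 and B1
     min(A1[x], B2[y]),   # rule 3: A1 and B2
     min(A2[x], B2[y])]   # rule 4: A2 and B2

# Rule consequents evaluated at (x, y).
z_rules = [x + y + 1, 2*x + y + 1, 2*x + 3*y, 2*x + 5]

z = sum(wi * zi for wi, zi in zip(w, z_rules)) / sum(w)
print(w, z)
```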
2. Consider the classical control problem of an inverted pendulum.

Figure 2.1: Inverted pendulum with angular displacement θ and control torque u.
where θ is the angular displacement and u is the control torque. Taking x1 = θ and x2 = θ̇ (the angular velocity) as the state variables and linearising for small θ, we arrive at the following discrete form of the inverted pendulum system:

x1(k + 1) = x1(k) + x2(k)
x2(k + 1) = x1(k) + x2(k) - u(k)

The universes of discourse are -2° ≤ x1 ≤ 2°, -5 degree/s ≤ x2 ≤ 5 degree/s and -24 ≤ u ≤ 24. In order to develop a Mamdani-type fuzzy control system, x1 and x2 are taken as the inputs and u as the output. The membership functions for x1, x2, and u are defined in the following figure.

Figure 2.2: MFs for x1 (N, Z, P over [-2, 2]), x2 (N, Z, P over [-5, +5]) and u (NB, NS, Z, PS, PB over [-24, 24] with break points at ±6, ±12, ±24).
The rule-base for the fuzzy system is represented by the 9 rules shown in the table below.

Table 2.1: Rule-base for the Mamdani-type FLC

          x2 = P   x2 = Z   x2 = N
x1 = P    PB       PS       Z
x1 = Z    PS       Z        NS
x1 = N    Z        NS       NB
Develop a fuzzy control system for the inverted pendulum problem (block diagram). Taking the initial conditions x1(0) = 1° and x2(0) = -2.5°/s, demonstrate graphically one iteration of the simulation of the fuzzy control system. Use estimates in any defuzzification procedure used.
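A sketch of one iteration of the Mamdani controller, assuming triangular/shoulder MFs consistent with the break points of Figure 2.2, min implication, max aggregation and centroid defuzzification; the exact MF parameters are assumptions, so the numbers are only indicative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular MF with feet a, c and peak b; a == b or b == c gives a shoulder."""
    x = np.asarray(x, dtype=float)
    left  = np.ones_like(x) if a == b else np.clip((x - a) / (b - a), 0, 1)
    right = np.ones_like(x) if b == c else np.clip((c - x) / (c - b), 0, 1)
    return np.minimum(left, right)

# Assumed MF parameters, chosen to be consistent with the break points of Figure 2.2.
MF_X1 = {'N': (-2, -2, 0), 'Z': (-2, 0, 2), 'P': (0, 2, 2)}
MF_X2 = {'N': (-5, -5, 0), 'Z': (-5, 0, 5), 'P': (0, 5, 5)}
MF_U  = {'NB': (-24, -24, -12), 'NS': (-12, -6, 0), 'Z': (-6, 0, 6),
         'PS': (0, 6, 12), 'PB': (12, 24, 24)}

# Rule-base of Table 2.1: RULES[(x1_label, x2_label)] = u_label
RULES = {('P', 'P'): 'PB', ('P', 'Z'): 'PS', ('P', 'N'): 'Z',
         ('Z', 'P'): 'PS', ('Z', 'Z'): 'Z',  ('Z', 'N'): 'NS',
         ('N', 'P'): 'Z',  ('N', 'Z'): 'NS', ('N', 'N'): 'NB'}

def mamdani_step(x1, x2, n=1001):
    """One inference step: min AND, min implication, max aggregation, centroid defuzzification."""
    u_grid = np.linspace(-24, 24, n)
    agg = np.zeros(n)
    for (l1, l2), lu in RULES.items():
        w = min(tri(x1, *MF_X1[l1]), tri(x2, *MF_X2[l2]))              # firing strength
        agg = np.maximum(agg, np.minimum(w, tri(u_grid, *MF_U[lu])))   # clip and aggregate
    return np.trapz(agg * u_grid, u_grid) / np.trapz(agg, u_grid)      # centroid

# One iteration from the given initial conditions.
x1, x2 = 1.0, -2.5
u = mamdani_step(x1, x2)
x1_next = x1 + x2              # state update as reconstructed above
x2_next = x1 + x2 - u
print(u, x1_next, x2_next)
```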
3. A steam engine simulation system has been developed using Tsukamoto-type rules, where x represents speed, y represents pressure and z represents throttle position. Table 3.1 describes the rule-base of the system.
Table 3.1: Rule-base for the Tsukamoto-type FLC

              y = pressure
x = speed     B1    B2
A1            C1    C2
A2            C2    C1
where the throttle positions are defined by the monotone membership functions C1 and C2. The fuzzy sets for speed and pressure are described by the membership functions in Figures 3.1-3.2, and the monotone membership functions for C1 and C2 are defined in Figure 3.3.
Compute the throttle position z at speed x = 3 and pressure y = 7 using the Tsukamoto system described above.
Figure 3.1-3.2: MFs A1 and A2 for x (speed, universe 1-6) and B1 and B2 for y (pressure, universe 4-9).
Figure 3.3: Monotone MFs C1 and C2 for z (throttle position, universe 1-6).
Computational Intelligence COM907M1
Tutorial 3
1. An unknown system is to be modelled. The system has 4 inputs and a single output. By inputting 3 sets of random signals, 3 responses are measured. The inputs are given as follows:
1
0
 1
 1
1
1
1
2
3




, x 
, x  
x 
0
 1
0
 
 
 
 1
 1
 1
The responses for x1, x2, and x3 are d1 = -1, d2 = -1, and d3 = 1 respectively. Explain how the system can be modelled by using a neural network. Suppose the model of the system is given by a single-neuron network as shown in Figure 7.
Figure 7: Single-neuron network with inputs x1, ..., x4, weights w1, ..., w4, output O and activation function f(net) = 1/(1 + e^(-net)).
The initial weight vector is w1 = [1  -1  0  0.5]^T. The learning rate is assumed to be c = 0.1. To train the network using the delta learning rule, the value of f'(net) must be computed at each step, which is defined as

f'(net) = o(1 - o)

where f(net) and o are the continuous activation function and the output respectively. What will be the weight update (Δw2) after the 2nd iteration of learning using the delta learning rule?
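A minimal numpy sketch of the delta learning rule for this neuron, using the patterns, targets and initial weights as written above and treating one pattern presentation as one iteration (an assumption about what "iteration" means here).

```python
import numpy as np

def f(net):                              # unipolar sigmoid activation from Figure 7
    return 1.0 / (1.0 + np.exp(-net))

# Input patterns and targets as written above; initial weights w1 and rate c = 0.1.
X = [np.array([ 1.0, -1.0,  0.0, -1.0]),
     np.array([ 0.0,  1.0, -1.0, -1.0]),
     np.array([-1.0,  1.0,  0.0, -1.0])]
d = [-1.0, -1.0, 1.0]
w = np.array([1.0, -1.0, 0.0, 0.5])
c = 0.1

# Delta rule: w <- w + c * (d - o) * f'(net) * x, with f'(net) = o * (1 - o).
for step in range(2):                    # two pattern presentations ("iterations")
    x, dk = X[step % len(X)], d[step % len(X)]
    o  = f(w @ x)
    dw = c * (dk - o) * o * (1.0 - o) * x
    w  = w + dw
    print(f"iteration {step + 1}: delta_w = {dw}, w = {w}")
```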
2. Assume the neural network with a single neuron shown in the figure below, having the initial weight vector w1, three input vectors and desired outputs as
Figure: Single-neuron network with inputs x1, ..., x4, weights w1, ..., w4, activation f(·) and output O.
$$w^1 = \begin{bmatrix} 1 \\ -1 \\ 0 \\ 0.5 \end{bmatrix}, \quad x = \begin{bmatrix} x^1 & x^2 & x^3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ -2 & -0.5 & 1 \\ 1.5 & -2 & -1 \\ 0 & -1.5 & 1.5 \end{bmatrix}, \quad d = \begin{bmatrix} d^1 & d^2 & d^3 \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \end{bmatrix}$$
The network needs to be trained with a learning constant c = 1. The activation function is defined as f(net) = net. Show the weight update rule with a diagram.
What will be the weight vector after the 2nd iteration of Widrow-Hoff learning? (done in lecture)
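A short numpy sketch of Widrow-Hoff (LMS) learning with f(net) = net, using the data as written above and one pattern presentation per iteration (assumed).

```python
import numpy as np

# Data as written above: the columns of X are the input patterns x1, x2, x3.
X = np.array([[ 1.0,  1.0,  0.0],
              [-2.0, -0.5,  1.0],
              [ 1.5, -2.0, -1.0],
              [ 0.0, -1.5,  1.5]])
d = np.array([1.0, -1.0, 0.0])          # desired outputs d1, d2, d3
w = np.array([1.0, -1.0, 0.0, 0.5])     # initial weight vector w1
c = 1.0                                  # learning constant

# Widrow-Hoff (LMS) rule with f(net) = net: w <- w + c * (d - w.x) * x
for step in range(2):                    # two pattern presentations ("iterations")
    x, dk = X[:, step % 3], d[step % 3]
    o = w @ x                            # linear activation
    w = w + c * (dk - o) * x
    print(f"iteration {step + 1}: w = {w}")
```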
3. What are the problems that may occur while training a feedforward neural network using the backpropagation algorithm? What corrections are made to the learning rule to overcome these problems? Explain with a diagram.
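Problems typically discussed here are slow convergence, oscillation across narrow valleys of the error surface, and getting trapped in local minima or flat spots. One standard correction is a momentum term in the weight update; the sketch below is a generic illustration with made-up symbols, not the exact notation used in the lectures.

```python
import numpy as np

def update_with_momentum(w, grad, prev_dw, eta=0.1, alpha=0.9):
    """Backpropagation weight update with a momentum term:
       dw(t) = -eta * dE/dw + alpha * dw(t-1)
    The momentum term alpha * dw(t-1) smooths oscillations and helps the search
    coast over flat regions and small local minima of the error surface."""
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw

# Toy usage on a quadratic error surface E(w) = 0.5 * ||w||^2 (so dE/dw = w).
w, prev_dw = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(5):
    w, prev_dw = update_with_momentum(w, grad=w, prev_dw=prev_dw)
print(w)
```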
Computational Intelligence COM907M1
Tutorial 4
1. Assume the neural network with a single bipolar binary neuron shown in the figure below, having the initial weight vector w1 and three input vectors x1, x2, and x3 as
1
 1 
 1 
0
  1
  2
  0.5
1
1
1
2
3






w 
, x 
, x 
and x   
0
 1 .5 
 2 
  1
 
 


 
0.5
 0 
  1 .5 
1.5
Figure: Single-neuron network with inputs x1, ..., x4, weights w1, ..., w4, summing junction Σ, activation f(net) and output O.
The network needs to be trained with a learning constant c = 1. The activation function is defined as

f(net) = +1 if net ≥ 0, -1 if net < 0
Show a single iteration of Hebbian learning with a diagram and calculate the weight update.
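A short numpy sketch of Hebbian learning for this neuron with the bipolar binary (sign) activation; it steps through all three patterns, so the single-iteration update asked for is the first printed line.

```python
import numpy as np

X = [np.array([1.0, -2.0,  1.5,  0.0]),   # x1
     np.array([1.0, -0.5, -2.0, -1.5]),   # x2
     np.array([0.0,  1.0, -1.0,  1.5])]   # x3
w = np.array([1.0, -1.0, 0.0, 0.5])       # initial weight vector w1
c = 1.0                                    # learning constant

def f(net):                                # bipolar binary activation (sign function)
    return 1.0 if net >= 0 else -1.0

# Hebbian learning: delta_w = c * o * x (the update uses the neuron's own output).
for i, x in enumerate(X, start=1):
    o  = f(w @ x)
    dw = c * o * x
    w  = w + dw
    print(f"after x{i}: o = {o:+.0f}, delta_w = {dw}, w = {w}")
```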
2. Explain Kohonen's Self-Organising Map (SOM) algorithm.
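As a companion to the explanation, here is a compact and deliberately simplified SOM sketch (random initialisation, Euclidean best-matching unit, Gaussian neighbourhood, exponentially decaying learning rate and radius); the parameter values are arbitrary illustrations.

```python
import numpy as np

def train_som(data, grid_shape=(5, 5), epochs=50, eta0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D Kohonen SOM: find the best-matching unit (BMU) for each input,
    then pull the BMU and its grid neighbours towards that input, with the learning
    rate and neighbourhood radius decaying over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows, cols, data.shape[1]))          # one weight vector per node
    coords  = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'))
    for t in range(epochs):
        eta   = eta0   * np.exp(-t / epochs)                   # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)                   # decaying neighbourhood radius
        for x in data:
            dists = np.linalg.norm(weights - x, axis=2)        # distance of x to every node
            bmu   = np.unravel_index(np.argmin(dists), (rows, cols))
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))            # Gaussian neighbourhood
            weights += eta * h[..., None] * (x - weights)      # move nodes towards x
    return weights

# Toy usage: map 100 random 3-D points onto a 5x5 grid.
som = train_som(np.random.default_rng(1).random((100, 3)))
print(som.shape)
```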
3. Elman and Hopfield networks are both recurrent networks. Explain the basic differences
between the two architectures.
4. Explain the winner-take-all algorithm in competitive learning.
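A minimal sketch of the winner-take-all update: only the closest (winning) prototype is moved towards the input; the function name and parameter values are illustrative.

```python
import numpy as np

def winner_take_all_step(W, x, eta=0.1):
    """One step of winner-take-all competitive learning: the neuron whose weight
    vector is closest to the input x wins the competition and is moved a fraction
    eta towards x; all other weight vectors are left unchanged."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # competition: closest prototype wins
    W[winner] += eta * (x - W[winner])                  # update the winner only
    return winner

# Toy usage: three competing neurons with 2-D weight vectors.
rng = np.random.default_rng(0)
W = rng.random((3, 2))
for x in rng.random((20, 2)):
    winner_take_all_step(W, x)
print(W)
```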