ECE 8412 Introduction to Neural Networks

Villanova University
Department of Electrical & Computer Engineering
Homework 06 Due 1 November 2005
Chapter 7
Name: Santhosh Kumar Kokala
Complete the following and submit your work to me as a hard copy.
You can also fax your solutions to 610-519-4436.
Use MATLAB to determine the results of the matrix algebra.
E7.1
1. a) Are p1 and p2 orthogonal?
A. Converting the given patterns into vectors:
p1 = [-1 -1 1 1]T and p2 = [1 1 -1 1]T
To test orthogonality we take the inner product of p1 and p2:
1 
1 
p1T p2 = [-1 -1 1 1]    2
 1
 
1 
So the vectors are not orthogonal.
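For readers without MATLAB, the orthogonality test can be reproduced with a short NumPy sketch (an equivalent check, not part of the original assignment):

```python
import numpy as np

# Prototype patterns from E7.1 in bipolar form
p1 = np.array([-1, -1, 1, 1])
p2 = np.array([1, 1, -1, 1])

# Two vectors are orthogonal iff their inner product is zero
inner = int(p1 @ p2)
print(inner)  # -2, so p1 and p2 are not orthogonal
```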
b) Use the Hebb rule to design an autoassociator network for these patterns.
c) Test the operation of the network using the input pattern pt shown to the left. Does the network
perform as you expected? Explain.
Matlab Code: (For Problem E7.1 b and E7.1 c)
%Santhosh Kumar Kokala
%Problem E7.1 b and E7.1 c
p1=[-1 -1 1 1]'
p2=[1 1 -1 1]'
P=[p1 p2]
disp('Computing the weight matrix')
wh=P*P'
%E7.1 c
disp('Checking the network with the test vector')
pt=[1 1 1 1]'
disp('The network response for the test vector is')
ah=hardlims(wh*pt)
Matlab Output:
p1 =
-1
-1
1
1
p2 =
1
1
-1
1
P =
-1 1
-1 1
1 -1
1 1
Computing the weight matrix
wh =
     2     2    -2     0
     2     2    -2     0
    -2    -2     2     0
     0     0     0     2
Checking the network with the test vector
pt =
1
1
1
1
The network response for the test vector is
ah =
1
1
-1
1
The network response is equal to p2 = [1 1 -1 1]T. In general, the response should be the prototype
pattern closest to the input pattern. Here pt is a Hamming distance of 1 from p2 and 2 from p1, so the
network has converged to the expected output.
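The Hebb-rule autoassociator of E7.1 can also be sketched in NumPy (hardlims is re-implemented by hand here, since it is a MATLAB toolbox function):

```python
import numpy as np

def hardlims(n):
    # MATLAB-style symmetric hard limit: +1 for n >= 0, -1 otherwise
    return np.where(n >= 0, 1, -1)

p1 = np.array([-1, -1, 1, 1])
p2 = np.array([1, 1, -1, 1])
P = np.column_stack([p1, p2])

# Hebb rule weight matrix for the autoassociator
wh = P @ P.T

# Test vector: Hamming distance 1 from p2, 2 from p1
pt = np.array([1, 1, 1, 1])
ah = hardlims(wh @ pt)
print(ah)  # [ 1  1 -1  1] -- the network recalls p2
```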
E7.4
2. From Problem P7.7 we can calculate the weights of the binary network from the weights of the bipolar
network. The binary network differs from the bipolar network in two ways. First, it uses the
hardlim nonlinearity rather than hardlims, since the output should be either 0 or 1. Second, it uses a
bias vector.
Therefore, to produce the same results as the bipolar network, we should choose
Wbinary = 2*Wbipolar and b = -Wbipolar * 1
where '1' is a vector of ones. This follows because the bipolar and binary patterns are related by
p = 2*pb - 1, so Wbipolar*p = 2*Wbipolar*pb - Wbipolar*1 = Wbinary*pb + b.
Our input vectors in binary form
p1b = [0 0 1 1]T and p2b = [1 1 0 1]T
The test vector (the same one used in E7.1) is
pt = [1 1 1 1]T
Matlab Code:
%Santhosh Kumar Kokala
%Problem E7.4
clear;
clc;
disp('Input patterns in bipolar form')
p1=[-1 -1 1 1]'
p2=[1 1 -1 1]'
P=[p1 p2]
disp('Input patterns in binary form')
p1b=[0 0 1 1]'
p2b=[1 1 0 1]'
disp('Computing the weight matrix of bipolar network')
wh=P*P'
disp('Computing the weight matrix of binary network')
wb=2*wh
disp('Computing the bias vector of binary network')
b=-wh*[1 1 1 1]'
disp('Checking the network with the test vector')
pt=[1 1 1 1]'
disp('The network response for the test vector is')
ah=hardlim(wb*pt+b)
Matlab Output:
Input patterns in bipolar form
p1 =
-1
-1
1
1
p2 =
1
1
-1
1
P =
-1 1
-1 1
1 -1
1 1
Input patterns in binary form
p1b =
0
0
1
1
p2b =
1
1
0
1
Computing the weight matrix of bipolar network
wh =
     2     2    -2     0
     2     2    -2     0
    -2    -2     2     0
     0     0     0     2
Computing the weight matrix of binary network
wb =
     4     4    -4     0
     4     4    -4     0
    -4    -4     4     0
     0     0     0     4
Computing the bias vector of binary network
b =
-2
-2
2
-2
Checking the network with the test vector
pt =
1
1
1
1
The network response for the test vector is
ah =
1
1
0
1
Hence the output matches the binary form of prototype pattern p2, p2b = [1 1 0 1]T.
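The binary-network conversion can likewise be checked in NumPy (an equivalent sketch; hardlim is re-implemented by hand):

```python
import numpy as np

def hardlim(n):
    # MATLAB-style hard limit: 1 for n >= 0, 0 otherwise
    return np.where(n >= 0, 1, 0)

p1 = np.array([-1, -1, 1, 1])
p2 = np.array([1, 1, -1, 1])
wh = np.outer(p1, p1) + np.outer(p2, p2)  # bipolar Hebb weight matrix

# Convert to the binary network: Wbinary = 2*W, b = -W*1
wb = 2 * wh
b = -wh @ np.ones(4, dtype=int)

pt = np.array([1, 1, 1, 1])
ah = hardlim(wb @ pt + b)
print(ah)  # [1 1 0 1] -- the binary form of p2
```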
E7.6
3. a) We know that the decision boundary for the perceptron network is the line defined by:
Wp+b=0
If there is no bias, then b = 0 and the boundary is defined by:
Wp = 0
which is a line that must pass through the origin. Now consider the three vectors p1, p2, p3 which are
given in the problem. They are shown graphically in the figure below with their corresponding target
outputs. It is clear that no decision boundary passing through the origin can separate the two classes.
Therefore a bias is required to solve this problem.
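The argument can be illustrated numerically. The vectors below are hypothetical (E7.6's actual p1, p2, p3 are not reproduced here): two points on the same ray from the origin must receive different outputs, which no boundary through the origin can achieve, but a bias makes it solvable:

```python
import numpy as np

def hardlim(n):
    # Hard limit: 1 for n >= 0, 0 otherwise
    return 1 if n >= 0 else 0

# Hypothetical vectors on the same ray from the origin:
# p1 should output 1, p2 should output 0.  For any W, W@p2 = 2*(W@p1),
# so both inner products share a sign and no bias-free boundary separates them.
p1 = np.array([1, 1])
p2 = np.array([2, 2])

# With a bias the problem is solvable, e.g. W = [-1 -1], b = 3
W = np.array([-1, -1])
b = 3
print(hardlim(W @ p1 + b), hardlim(W @ p2 + b))  # 1 0
```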