Visual Illusions
Visual illusions demonstrate how we perceive an “interpreted version” of
the incoming light pattern rather than the exact pattern itself.
Visual Illusions
Here we see that the squares A and B from the previous image actually have
the same luminance (but in their visual context are interpreted differently).
How do NNs and ANNs work?
• NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.
• The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of a receiving neuron’s synapses.
• The NN achieves learning by appropriately adapting the states of its synapses.
An Artificial Neuron
[Figure: neuron i receives inputs x1, x2, …, xn through synapses with weights Wi,1, Wi,2, …, Wi,n and produces the output xi.]
net input signal:
$$\mathrm{net}_i(t) = \sum_{j=1}^{n} w_{i,j}(t)\, x_j(t)$$
output:
$$x_i(t) = f_i(\mathrm{net}_i(t))$$
The Activation Function
One possible choice is a threshold function:
$$f_i(\mathrm{net}_i(t)) = \begin{cases} 1, & \text{if } \mathrm{net}_i(t) \ge \theta \\ 0, & \text{otherwise} \end{cases}$$
The graph of this function looks like this:
[Figure: the graph is a step function: fi(neti(t)) equals 0 for neti(t) < θ and jumps to 1 at neti(t) = θ.]
Binary Analogy: Threshold Logic Units (TLUs)
Example:
[Figure: a TLU with inputs x1, x2, x3, weights w1 = 1, w2 = 1, w3 = -1, and threshold θ = 1.5; it computes x1 ∧ x2 ∧ ¬x3.]
TLUs in technical systems are similar to the
threshold neuron model, except that TLUs only
accept binary inputs (0 or 1).
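As a hedged check of this example (reusing the threshold_neuron sketch from above), the following verifies the unit against all eight binary input combinations:

```python
from itertools import product

weights, theta = [1, 1, -1], 1.5  # w1 = 1, w2 = 1, w3 = -1 from the example

for x1, x2, x3 in product([0, 1], repeat=3):
    out = threshold_neuron(weights, [x1, x2, x3], theta)
    # The unit fires exactly when x1 = 1, x2 = 1, x3 = 0: x1 AND x2 AND NOT x3
    assert out == int(x1 == 1 and x2 == 1 and x3 == 0)
```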
Binary Analogy: Threshold Logic Units (TLUs)
Yet another example:
[Figure: a single TLU with inputs x1 and x2 that is supposed to output x1 ⊕ x2 (XOR); the weights w1, w2 and threshold θ are left open because no valid choice exists.]
Impossible! TLUs can only realize linearly separable
functions.
Linear Separability
A function f: {0, 1}^n → {0, 1} is linearly separable if the
space of input vectors yielding 1 can be separated
from those yielding 0 by a linear surface
(hyperplane) in n dimensions.
Examples (two dimensions):
[Figure: two 2×2 grids of binary inputs (x1, x2) with their output values. Left: the inputs yielding 1 can be separated from those yielding 0 by a straight line (linearly separable). Right: no straight line separates the 1s from the 0s (linearly inseparable).]
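As an illustrative sketch (not from the slides), a brute-force search over candidate weights and thresholds demonstrates the difference; the candidate grid is an arbitrary assumption, but it is fine enough to find a separating TLU for AND while correctly finding none for XOR:

```python
from itertools import product

def find_tlu(truth_table):
    """Search for (w1, w2, theta) such that a single TLU realizes the function."""
    candidates = [k / 2 for k in range(-6, 7)]  # -3.0 .. 3.0 in steps of 0.5
    for w1, w2, theta in product(candidates, repeat=3):
        if all((w1 * x1 + w2 * x2 >= theta) == bool(y)
               for (x1, x2), y in truth_table.items()):
            return w1, w2, theta
    return None

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(find_tlu(AND))  # prints some working triple, e.g. w1 = w2 = 1, theta = 1.5
print(find_tlu(XOR))  # prints None: XOR is linearly inseparable
```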
Linear Separability
To explain linear separability, let us consider the
function f: R^n → {0, 1} with
$$f(x_1, x_2, \ldots, x_n) = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta \\ 0, & \text{otherwise} \end{cases}$$
where x1, x2, …, xn represent real numbers.
This is exactly the function that our threshold
neurons use to compute their output from their inputs.
Linear Separability
$$f(x_1, x_2, \ldots, x_n) = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta \\ 0, & \text{otherwise} \end{cases}$$
Input space in the two-dimensional case (n = 2):
[Figure: three (x1, x2) input spaces with axes running from -3 to 3, each divided by the line w1·x1 + w2·x2 = θ into a region of output 1 and a region of output 0. Left: w1 = 1, w2 = 2, θ = 2. Center: w1 = -2, w2 = 1, θ = 2. Right: w1 = -2, w2 = 1, θ = 1.]
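As a quick illustrative sketch (not from the slides), printing the unit's output over the integer grid makes the dividing line visible; shown here for the first parameter setting:

```python
def classify(w1, w2, theta, x1, x2):
    """Threshold unit output: 1 iff w1*x1 + w2*x2 >= theta."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# First setting from the figure: w1 = 1, w2 = 2, theta = 2
for x2 in range(3, -4, -1):  # rows from x2 = 3 down to x2 = -3
    print(x2, [classify(1, 2, 2, x1, x2) for x1 in range(-3, 4)])
```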
Linear Separability
So by varying the weights and the threshold, we can
realize any linear separation of the input space into
a region that yields output 1, and another region that
yields output 0.
As we have seen, a two-dimensional input space
can be divided by any straight line.
A three-dimensional input space can be divided by
any two-dimensional plane.
In general, an n-dimensional input space can be
divided by an (n-1)-dimensional plane or hyperplane.
Of course, for n > 3 this is hard to visualize.
Linear Separability
Of course, the same applies to our original function f
of the TLU using binary input values.
The only difference is that the input values are
restricted to 0 and 1.
Obviously, we cannot find a straight line to realize the
XOR function:
[Figure: the XOR truth table in the (x1, x2) plane: output 1 at (0, 1) and (1, 0), output 0 at (0, 0) and (1, 1); no straight line separates the 1s from the 0s.]
In order to realize XOR with TLUs, we need to
combine multiple TLUs into a network.
Multi-Layered XOR Network
[Figure: a two-layer network of TLUs computing x1 ⊕ x2. Hidden unit 1 receives x1 with weight 1 and x2 with weight -1 (θ = 0.5); hidden unit 2 receives x1 with weight -1 and x2 with weight 1 (θ = 0.5); the output unit combines both hidden outputs with weights 1 and 1 (θ = 0.5).]
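A short sketch (assuming the wiring reconstructed above, and reusing the earlier threshold_neuron helper) that implements this network and verifies the XOR truth table:

```python
def xor_net(x1, x2):
    h1 = threshold_neuron([1, -1], [x1, x2], 0.5)   # fires for x1 AND NOT x2
    h2 = threshold_neuron([-1, 1], [x1, x2], 0.5)   # fires for NOT x1 AND x2
    return threshold_neuron([1, 1], [h1, h2], 0.5)  # OR of the hidden outputs

for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_net(x1, x2) == (x1 ^ x2)  # matches XOR on all four inputs
```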
Capabilities of Threshold Neurons
What can threshold neurons do for us?
To keep things simple, let us consider such a neuron
with two inputs:
[Figure: neuron i with two inputs x1 and x2, weights Wi,1 and Wi,2, and output xi.]
The computation of this neuron can be described as
the inner product of the two-dimensional vectors x
and wi, followed by a threshold operation.
The Net Input Signal
The net input signal is the sum of all inputs after
passing the synapses:
$$\mathrm{net}_i(t) = \sum_{j=1}^{n} w_{i,j}(t)\, x_j(t)$$
This can be viewed as computing the inner product
of the vectors wi and x:
$$\mathrm{net}_i(t) = \|\mathbf{w}_i(t)\| \cdot \|\mathbf{x}(t)\| \cdot \cos\alpha,$$
where α is the angle between the two vectors.
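As a hedged numerical sketch (not from the slides; the sample vectors are arbitrary), NumPy confirms that the weighted-sum form and the norm-times-cosine form agree:

```python
import numpy as np

w = np.array([1.0, 2.0])   # sample weight vector
x = np.array([0.5, -1.0])  # sample input vector

net_sum = w @ x  # sum_j w_j * x_j

# Angle between the two vectors, computed from their individual directions
alpha = np.arctan2(x[1], x[0]) - np.arctan2(w[1], w[0])
net_geom = np.linalg.norm(w) * np.linalg.norm(x) * np.cos(alpha)

assert np.isclose(net_sum, net_geom)  # both forms give -1.5 here
```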
Capabilities of Threshold Neurons
Let us assume that the threshold θ = 0 and illustrate the
function computed by the neuron for sample vectors wi and x:
[Figure: sample vectors wi and x plotted by their first and second vector components; a dotted line through the origin, perpendicular to wi, divides the input space.]
Since the inner product is non-negative for -90° ≤ α ≤ 90°, in this
example the neuron’s output is 1 for any input vector x to the
right of or on the dotted line, and 0 for any other input vector.
Capabilities of Threshold Neurons
By choosing appropriate weights wi and threshold θ,
we can place the line dividing the input space into
regions of output 0 and output 1 in any position and
orientation.
Therefore, our threshold neuron can realize any
linearly separable function f: R^n → {0, 1}.
Although we only looked at two-dimensional input,
our findings apply to any dimensionality n.
For example, for n = 3, our neuron can realize any
function that divides the three-dimensional input
space along a two-dimensional plane.