Phantom Limb Phenomena
Hand movement observation by individuals born
without hands: phantom limb experience
constrains visual limb perception.
Funk M, Shiffrar M, Brugger P.
We investigated the visual experiences of two persons born without arms, one with and the other without phantom sensations. Normally limbed observers perceived rate-dependent paths of apparent human movement. The individual with phantom experiences showed the same perceptual pattern as control participants; the other did not. Neural systems matching action observation, action execution and motor imagery are likely to contribute to the definition of the body schema in profound ways.
Summary

Both genetic factors and activity-dependent factors play a role in developing the brain's architecture and circuitry. There are critical developmental periods where nurture is essential, but the adult brain also retains a great ability to regenerate.


Next lecture: What computational models satisfy
some of the biological constraints.
Question: What is the relevance of neural
development and learning in language and
thought?
Connectionist
Models: Basics
Jerome Feldman
CS182/CogSci110/Ling109
Spring 2007
Realistic Biophysical Neuron Simulations
Not covered in any UCB class?
The GENESIS and NEURON simulation systems
Neural networks abstract from the details of real neurons:
• Conduction delays are neglected.
• An output signal is either discrete (e.g., 0 or 1) or a real-valued number (e.g., between 0 and 1).
• Net input is calculated as the weighted sum of the input signals.
• Net input is transformed into an output signal via a simple function (e.g., a threshold function).
The McCulloch-Pitts Neuron
[Figure: unit i receives inputs yj over weighted connections wij and produces output yi]
xi = ∑j wij yj
yi = f(xi - θi)
where
yj: output from unit j
wij: weight on the connection from j to i
xi: weighted sum of inputs to unit i
θi: threshold of unit i
f: activation (threshold) function
ti: target output
Mapping from neuron to computational abstraction

Nervous System          Computational Abstraction
Neuron                  Node
Dendrites               Input link and propagation
Cell Body               Combination function, threshold, activation function
Axon                    Output link
Spike rate              Output
Synaptic strength       Connection strength/weight
Simple Threshold Linear Unit
Simple Neuron Model
A Simple Example
a = x1w1 + x2w2 + x3w3 + ... + xnwn
a = 1*x1 + 0.5*x2 + 0.1*x3
x1 = 0, x2 = 1, x3 = 0
Net input a = 0.5
Threshold = 1
Net input - threshold < 0, so Output = 0
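As a check, the worked example can be written out directly. The following minimal Python sketch (not part of the original slides; the function name is just illustrative) implements the McCulloch-Pitts unit above and reproduces the example's output.

def mp_unit(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted input sum reaches the threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net - threshold >= 0 else 0

# The worked example: w = (1, 0.5, 0.1), x = (0, 1, 0), threshold = 1
print(mp_unit([0, 1, 0], [1.0, 0.5, 0.1], threshold=1.0))   # net = 0.5 < 1, so output is 0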
Simple Neuron Model
[Figure sequence: the same threshold unit shown with different binary input patterns (1s and 0s) and the resulting outputs]
Different Activation Functions
Bias unit: with x0 = 1
Types of activation functions:
• Threshold activation function (step)
• Piecewise linear activation function
• Sigmoid activation function
• Gaussian activation function
• Radial basis function
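For concreteness, here is a small Python sketch (not from the slides) of the activation functions listed above; the gain k of the sigmoid and the width a of the Gaussian/radial basis form are illustrative parameters.

import math

def step(net, theta=0.0):
    return 1.0 if net >= theta else 0.0              # threshold (step) activation

def piecewise_linear(net):
    return min(1.0, max(0.0, net))                   # linear in the middle, clipped to [0, 1]

def sigmoid(net, k=1.0):
    return 1.0 / (1.0 + math.exp(-k * net))          # smooth squashing to (0, 1); k sets the slope

def gaussian(net, a=1.0):
    return math.exp(-a * net ** 2)                   # Gaussian / radial basis form

for f in (step, piecewise_linear, sigmoid, gaussian):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])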
The Sigmoid Function
y = a = 1 / (1 + e^(-k*neti))
[Figure: sigmoid output plotted against net input x = neti; the output saturates at 0 and 1, with the greatest sensitivity to input in the middle of the curve. Changing the exponent k*(neti): k > 1 steepens the curve, k < 1 flattens it.]
Radial Basis Function
f(x) = e^(-a*x^2)
Stochastic units
• Replace the binary threshold units by binary stochastic units that make biased random decisions.
• The "temperature" T controls the amount of noise:
p(si = 1) = 1 / (1 + e^(-(∑j sj wij)/T))
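A minimal Python sketch of such a stochastic unit (my own illustration, with arbitrary weights): the higher the temperature T, the closer the firing probability gets to 50/50.

import math, random

def p_on(states, weights, T):
    """Probability that the unit turns on, given input states s_j and temperature T."""
    net = sum(w * s for w, s in zip(weights, states))
    return 1.0 / (1.0 + math.exp(-net / T))

def sample(states, weights, T):
    return 1 if random.random() < p_on(states, weights, T) else 0

states, weights = [1, 0, 1], [0.8, -0.4, 0.3]
for T in (0.25, 1.0, 4.0):
    print(T, round(p_on(states, weights, T), 3), sample(states, weights, T))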
Types of Neuron parameters
• The form of the input function - e.g., linear, sigma-pi (multiplicative), cubic.
• The activation-output relation - linear, hard-limiter, or sigmoidal.
• The nature of the signals used to communicate between nodes - analog or boolean.
• The dynamics of the node - deterministic or stochastic.
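These choices can be mixed and matched. The Python sketch below (an illustration, not a standard API; the helper names are my own) builds a node from a chosen input function, activation-output relation, signal type, and dynamics.

import math, random

def linear_input(x, w):                              # form of the input function
    return sum(wi * xi for wi, xi in zip(w, x))

hard_limiter = lambda net: 1.0 if net >= 0 else 0.0  # activation-output relations
sigmoid = lambda net: 1.0 / (1.0 + math.exp(-net))

def make_node(input_fn, activation_fn, boolean_output=False, stochastic=False):
    def node(x, w):
        a = activation_fn(input_fn(x, w))
        if stochastic:                               # stochastic dynamics: biased random decision
            return 1 if random.random() < a else 0
        if boolean_output:                           # boolean signal between nodes
            return 1 if a >= 0.5 else 0
        return a                                     # analog signal between nodes
    return node

unit = make_node(linear_input, sigmoid)              # deterministic, analog, sigmoidal node
print(unit([1, 0, 1], [0.5, -0.2, 0.9]))             # ~0.80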
Computing various functions
• McCulloch-Pitts neurons can compute logical functions: AND, NOT, OR.
Computing other functions: the OR function
[Figure: a single threshold unit computing OR, with inputs i1 and i2, a bias input b = 1, weights w01, w02, w0b, net input x0, and output y0 = f(x0)]

i1   i2   y0
0    0    0
0    1    1
1    0    1
1    1    1
• Assume a binary threshold activation function.
• What should you set w01, w02 and w0b to be so that
you can get the right answers for y0?
Many answers would work
y0 = f(w01*i1 + w02*i2 + w0b*b)
Recall the threshold function: the separation happens when
w01*i1 + w02*i2 + w0b*b = 0
Move things around and you get
i2 = -(w01/w02)*i1 - (w0b*b/w02)
[Figure: the resulting decision line plotted in the (i1, i2) plane]
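One concrete setting (my choice; many others work) is w01 = w02 = 1 and w0b = -0.5. The Python sketch below checks that it reproduces the OR truth table and notes the resulting decision line.

def threshold_or(i1, i2, w01=1.0, w02=1.0, w0b=-0.5, b=1.0):
    net = w01 * i1 + w02 * i2 + w0b * b
    return 1 if net >= 0 else 0                      # binary threshold activation

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, "->", threshold_or(i1, i2))
# Decision line: i2 = -(w01/w02)*i1 - (w0b*b/w02) = -i1 + 0.5, which separates (0,0) from the rest.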
Decision Hyperplane
• The two classes are therefore separated by the 'decision' line, which is defined by setting the activation equal to the threshold.
• It turns out that it is possible to generalise this result to TLUs with n inputs.
• In 3-D the two classes are separated by a decision plane.
• In n-D this becomes a decision hyperplane.
Linearly Separable Patterns
The PERCEPTRON is an architecture that can solve this type of decision-boundary problem. An "on" response in the output node represents one class, and an "off" response represents the other.
The Perceptron
[Figures: an input pattern is fed to the perceptron, which produces an output classification (a pattern classification)]
Pattern Space
• The space in which the inputs reside is referred to as the pattern space. Each pattern determines a point in the space by using its component values as space coordinates. In general, for n inputs, the pattern space will be n-dimensional.
• Clearly, for n-D, the pattern space cannot be drawn or represented in physical space. This is not a problem: we shall return to the idea of using higher-dimensional spaces later. However, the geometric insight obtained in 2-D will carry over (when expressed algebraically) into n-D.
The XOR Function

X1 \ X2    X2 = 0    X2 = 1
X1 = 0     0         1
X1 = 1     1         0
The Input Pattern Space
The Decision planes
Multi-layer Feed-forward Network
Pattern Separation and NN Architecture
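Since XOR is not linearly separable, no single threshold unit can compute it, but a two-layer feed-forward network can. The Python sketch below uses hand-set weights (my own choice, not from the slides): one hidden unit computes OR, another computes AND, and the output fires only for OR-and-not-AND.

def step(net):
    return 1 if net >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)           # hidden unit 1: OR of the inputs
    h2 = step(x1 + x2 - 1.5)           # hidden unit 2: AND of the inputs
    return step(h1 - 2 * h2 - 0.5)     # output: on only when h1 = 1 and h2 = 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))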
Conjunctive or Sigma-Pi nodes
• The previous spatial summation function supposes that each input contributes to the activation independently of the others. The contribution to the activation from input 1, say, is always a constant multiplier (w1) times x1.
• Suppose, however, that the contribution from input 1 also depends on input 2, and that the larger input 2 is, the larger input 1's contribution becomes.
• The simplest way of modeling this is to include a term in the activation like w12*(x1*x2), where w12 > 0 (for an inhibiting influence of input 2 we would, of course, have w12 < 0).
• The activation then takes the form
a = w1*x1 + w2*x2 + w3*x3 + w12*(x1*x2) + w23*(x2*x3) + w13*(x1*x3)
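A small Python sketch of this sigma-pi activation (the weight values are arbitrary illustrative choices):

def sigma_pi(x, w, w_pairs):
    """Weighted sum plus multiplicative (conjunctive) pairwise terms."""
    linear = sum(wi * xi for wi, xi in zip(w, x))
    conjunctive = sum(wij * x[i] * x[j] for (i, j), wij in w_pairs.items())
    return linear + conjunctive

x = [1.0, 0.5, 0.0]
w = [0.2, 0.4, 0.1]
w_pairs = {(0, 1): 0.6, (1, 2): -0.3, (0, 2): 0.0}   # w12, w23, w13 (0-indexed pairs)
print(sigma_pi(x, w, w_pairs))                        # input 2 boosts input 1's contribution via w12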
Sigma-Pi Unit
Biological Evidence for Sigma-Pi Units
• [axo-dendritic synapse] The stereotypical synapse consists of an electro-chemical connection between an axon and a dendrite - hence it is an axo-dendritic synapse.
• [presynaptic inhibition] However, there is a large variety of synaptic types and connection groupings. Of special importance are cases where the efficacy of the axo-dendritic synapse between axon 1 and the dendrite is modulated (inhibited) by the activity in axon 2, via the axo-axonic synapse between the two axons. This might therefore be modelled by a quadratic term like w12*(x1*x2).
• [synapse cluster] Here the effects of the individual synapses will surely not be independent, and we should look to model this with a multilinear term in all the inputs.
[Figures: an axo-dendritic synapse, presynaptic inhibition via an axo-axonic synapse, and a synapse cluster]
Link to Vision: The Necker Cube
Constrained Best Fit in Nature

            Domain       Best fit
inanimate   physics      lowest energy state
            chemistry    molecular minima
animate     biology      fitness, MEU (Neuroeconomics)
            vision       threats, friends
            language     errors, NTL
Computing other relations
• The 2/3 node is a useful function that activates its output if any 2 of its 3 inputs are active.
• Such a node is also called a triangle node and will be useful for lots of representations.
Triangle Nodes and McCulloch-Pitts Neurons?
[Figure: a triangle node connecting units A, B, and C]
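The 2-of-3 firing rule itself is easy to express as a single McCulloch-Pitts unit. A minimal Python sketch (my own, assuming unit weights and a threshold of 2):

def triangle_node(a, b, c, threshold=2):
    """Fire if at least two of the three connected units A, B, C are active."""
    return 1 if a + b + c >= threshold else 0

print(triangle_node(1, 1, 0))   # two active inputs -> fires (1)
print(triangle_node(1, 0, 0))   # only one active input -> stays off (0)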
Representing concepts using triangle nodes
Triangle nodes: when two of the neurons fire, the third also fires.
"They all rose"
[Figure: triangle nodes binding word and concept units, as a model of spreading activation]
Basic Ideas behind the model
• Parallel activation streams.
• Top-down and bottom-up activation combine to determine the best matching structure.
• Triangle nodes bind features of objects to values.
• Mutual inhibition and competition between structures.
• Mental connections are active neural connections.
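As a toy illustration only (my own sketch, not the course's SHRUTI model), these ideas can be caricatured in a few lines of Python: activation spreads through triangle-node bindings while competing structures inhibit each other, so the better-supported reading of the ambiguous "rose" wins.

nodes = {"they": 1.0, "all": 1.0, "rise-upward": 0.3, "rose-flower": 0.1}
triangles = [("they", "all", "rise-upward"), ("they", "all", "rose-flower")]
competitors = [("rise-upward", "rose-flower")]        # mutually inhibiting structures

def spread(nodes):
    new = dict(nodes)
    for tri in triangles:                             # two active members excite the third
        active = [n for n in tri if nodes[n] > 0.5]
        if len(active) >= 2:
            for n in tri:
                if n not in active:
                    new[n] = min(1.0, new[n] + 0.2)
    for a, b in competitors:                          # mutual inhibition and competition
        new[a] = max(0.0, new[a] - 0.5 * nodes[b])
        new[b] = max(0.0, new[b] - 0.5 * nodes[a])
    return new

for _ in range(5):
    nodes = spread(nodes)
print(nodes)    # "rise-upward" ends up clearly more active than "rose-flower"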
5 levels of Neural Theory of Language
[Diagram: levels ordered by increasing abstraction, with course checkpoints (quiz, midterm, finals) marked alongside]
• Cognition and Language: spatial relations, motor control, metaphor, grammar
• Computation
• Structured Connectionism: neural nets, triangle nodes, SHRUTI
• Computational Neurobiology
• Biology: neural development
Can we formalize/model these intuitions?
• What is a neurally plausible computational model of spreading activation that captures these features?
• What does semantics mean in neurally embodied terms?
• What are the neural substrates of concepts that underlie verbs, nouns, and spatial predicates?