LECTURE 6
Network Models
I. Introduction
− Basic concepts of neural networks
II. Realistic neural networks
− Homogeneous excitatory and
inhibitory populations
− The olfactory bulb
− Persistent neural activity
What is an (Artificial) Neural Network?
• Inspired by real neural systems
• Having a network structure, consisting of artificial
neurons (nodes) and neuronal connections (weights)
• A general methodology for function approximation
• It may be used either to gain an understanding of
biological neural networks, or to solve artificial
intelligence problems
What do neural systems look like?
• Neuron: the fundamental
signalling/computational unit
• Synapses: the connections
between neurons
• Layer: neurons are organized
into layers
• Extremely complex: around
10^11 neurons in the brain,
each with about 10^4 connections
Imagine 6 business cards put together: the cortical sheet is about 2 mm thick.
− Human: as large as a dinner napkin;
− Monkey: about the size of a business-letter envelope;
− Rat: a stamp.
There are about 10^5 neurons within 1 mm².
Cortical columns:
− In the neocortex, neurons lie in 6 vertical layers, highly coupled
within cylindrical columns about 0.1 to 1 mm in diameter
(e.g. ocular dominance columns and orientation maps in V1)
Three main classes of interconnections (e.g. visual
system):
− Feedforward connections bring input to a given region
from another region located at an earlier stage along a
particular processing pathway
− Recurrent synapses interconnect neurons within a
particular region that are considered to be at the same stage
along the processing pathway
− Top-down connections carry signals back from areas
located at later stages.
Compare neural systems with computers
The brain is superior to modern computers in many
respects (e.g., face recognition and speech).
• A different style of computation: parallel distributed
processing
• A universal computational architecture: the same
structure carries out many different functions
• It can learn new knowledge, therefore it is adaptive
The idea of artificial neural networks
1 Nodes --- Neurons (artificial neurons, performing a
linear or non-linear mapping)
2 Weights --- Synapses
3 Mimic the network structure of neural systems
• A single neuron’s function is simple; the specific functions
come from the network structure
• ANNs are increasingly engineering-driven nowadays, and their
biological roots are gradually fading. The key to ANNs lies in the
design and training of suitable network structures
• The connection style
– Feed-forward and recurrent
Feedforward and recurrent
networks
An example of one-layer feed-forward
neural network
[Figure: a single unit with inputs x1, x2, …, xn, weights w1, w2, …, wn, bias b, an activation function, and output y (‘+’ or ‘−’)]

y = f( Σ_{i=1}^{n} w_i x_i + b )
Bias ‘b’ is an external parameter that
can be modeled by adding an extra
input
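A minimal Python sketch of this single-unit computation (my own illustration, not from the slides; the input values, weights, and sign-function output are arbitrary choices):

```python
import numpy as np

def neuron(x, w, b, f=np.sign):
    """Single artificial neuron: y = f(sum_i w_i * x_i + b)."""
    return f(np.dot(w, x) + b)

# Hypothetical inputs, weights, and bias, chosen only for illustration.
x = np.array([0.5, -1.0, 2.0])   # inputs x1..xn
w = np.array([1.0, 0.3, -0.5])   # weights w1..wn
b = 0.2                          # bias
print(neuron(x, w, b))           # '+' or '-' output via the sign activation
```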
Activation function
• Linear-threshold function:
g(V) = 0            if V < V_th
g(V) = a·V (a > 0)  if V ≥ V_th

• Sigmoid function:
g(V) = 1 / (1 + e^{−βV})
The sigmoid function g(V) = 1 / (1 + e^{−βV})
[Figure: the sigmoid plotted for β = 1.0, β = 5.0, and β = 0.1, over V from −6.0 to 6.0 (and −25.0 to 25.0 for β = 0.1); larger β gives a steeper transition]
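Both activation functions are straightforward to write in code; a small sketch (V_th, a, and β are free parameters, and the values below are only examples):

```python
import numpy as np

def linear_threshold(V, V_th=0.0, a=1.0):
    """g(V) = 0 for V < V_th, a*V for V >= V_th (a > 0)."""
    return np.where(V < V_th, 0.0, a * V)

def sigmoid(V, beta=1.0):
    """g(V) = 1 / (1 + exp(-beta*V)); larger beta gives a steeper transition."""
    return 1.0 / (1.0 + np.exp(-beta * V))

V = np.linspace(-6.0, 6.0, 7)
print(linear_threshold(V, V_th=1.0, a=0.5))
print(sigmoid(V, beta=5.0))
print(sigmoid(V, beta=0.1))
```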
An example of 3-layer feed-forward network
[Figure: inputs x1, x2, …, xn (Input Layer) feed through Hidden Layers to outputs y1, y2, …, ym (Output Layer)]

For each unit:
y_i = f( Σ_{j=1}^{n} w_ij x_j + b_i )
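A forward pass through such a layered network applies the single-unit formula layer by layer; a minimal sketch (the layer sizes and random weights are placeholders):

```python
import numpy as np

def sigmoid(V):
    return 1.0 / (1.0 + np.exp(-V))

def forward(x, layers):
    """Propagate x through a list of (W, b) pairs: y = f(W @ y + b) at each layer."""
    y = x
    for W, b in layers:
        y = sigmoid(W @ y + b)
    return y

rng = np.random.default_rng(0)
n, h, m = 4, 6, 2                                  # input, hidden, output sizes (arbitrary)
layers = [(rng.normal(size=(h, n)), np.zeros(h)),  # input -> hidden
          (rng.normal(size=(m, h)), np.zeros(m))]  # hidden -> output
print(forward(rng.normal(size=n), layers))         # m output values y1..ym
```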
Neural Network (NN) for function
approximation
An NN transforms input into output; in other words, an NN
model implements a function.
An NN has a set of adjustable parameters (weights, activation
thresholds, etc.).
Network parameters (in particular, the network weights) are
free to be adjusted in order to achieve a desired function.
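As a toy illustration of adjusting the parameters to achieve a desired function, here is a sketch that fits a single sigmoid unit to made-up data by gradient descent (this is supervised learning, anticipating the next slide):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3.0, 3.0, size=100)
t = (x > 0.5).astype(float)                  # desired (target) function to approximate

w, b, lr = 0.0, 0.0, 0.5                     # adjustable parameters and learning rate
for _ in range(2000):
    y = 1.0 / (1.0 + np.exp(-(w * x + b)))   # network output
    err = y - t                              # mismatch between output and target
    w -= lr * np.mean(err * x)               # adjust the weight ...
    b -= lr * np.mean(err)                   # ... and the bias to reduce the error
print(w, b)   # the learned sigmoid now approximates the 0/1 step at x = 0.5
```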
Type of learning used depends on task
at hand
• Supervised learning: have a teacher
• Unsupervised learning: no teacher
• Reinforcement learning: no detailed instruction,
only the final reward is available
Possible use of artificial neural networks
(Do you remember the applications of Neurocomputing?)
Pattern recognition (face recognition, radar systems, object
recognition),
Sequence recognition (gesture, speech, handwritten text
recognition)
Medical diagnosis, Financial applications, Data mining,
Stock market analysis, Weather forecast
• Classification: given an input, decide its class index
• Regression: given an input, decide the corresponding
continuous output value
• I will mainly focus on realistic neural networks in
this course
• Prof. Shi Zhongzhi will cover the artificial
neural networks and …
I. Introduction
− Basic concepts of neural networks
II. Realistic neural networks
− Homogeneous excitatory and
inhibitory populations
− The olfactory bulb
− Persistent neural activity
− ……
Homogeneous excitatory and inhibitory
populations (i.e. Wilson-Cowan model)
• It describes the dynamics of interacting excitatory and
inhibitory neuronal populations
• Usually reduced to two units, one excitatory and one inhibitory
• It explains one source of oscillations in neural systems
• A typical recurrent network
[Figure: a recurrent network with an excitatory unit E (receiving Input_E) and an inhibitory unit I (receiving Input_I), coupled by the weights w_EE, w_EI, w_IE, and w_II]

τ_E dv_E/dt = −v_E + g( w_EE v_E − w_EI v_I + Input_E )
τ_I dv_I/dt = −v_I + g( w_IE v_E − w_II v_I + Input_I )

Compare with the single-neuron membrane equation:
τ_m dV/dt = E_L − V + R_m I_e
• Here g(x) is an activation function or the steady-state firing
rate as a function of input
• g(x) is the linear-threshold function:
g(x) = 0   if x ≤ 0
g(x) = x   if x > 0
dv_E/dt = [ −v_E + g(1.25 v_E − v_I + 10) ] / τ_E
dv_I/dt = [ −v_I + g(v_E − 10) ] / τ_I

with τ_E = 10 ms and τ_I an adjustable parameter.
1. Find the fixed points by setting
dv_E/dt = 0 and dv_I/dt = 0,
which gives v_E = 26.67 Hz and v_I = 16.67 Hz.
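Assuming both arguments of g are positive at the fixed point (so g acts there as the identity), the steady state follows from a 2×2 linear system; a quick check in Python:

```python
import numpy as np

# Steady state of  v_E = 1.25*v_E - v_I + 10  and  v_I = v_E - 10,
# written as A @ [v_E, v_I] = rhs:
A = np.array([[-0.25, 1.0],
              [-1.0,  1.0]])
rhs = np.array([10.0, -10.0])
v_E, v_I = np.linalg.solve(A, rhs)
print(v_E, v_I)    # about 26.67 Hz and 16.67 Hz
```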
2. Identify stability by using the phase plane, or 2-D phase-space portrait.
A trajectory is traced by taking small steps:
(x(t+h), y(t+h)) = (x(t) + h·dx/dt, y(t) + h·dy/dt)
[Figure: phase plane (x, y) showing a trajectory step and the regions where dV_I/dt > 0 and dV_I/dt < 0]
Adjust parameter: τ_I = 30 ms, giving damped oscillations that settle at the fixed point (Dayan and Abbott, 2001)
Adjust parameter: τ_I = 50 ms, giving sustained oscillations on a limit cycle (Dayan and Abbott, 2001)
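The two regimes can be reproduced by a simple Euler integration of the equations above (a sketch; the time step and initial rates below are my own choices):

```python
import numpy as np

def g(x):                                  # linear-threshold activation
    return np.maximum(x, 0.0)

def simulate(tau_I, tau_E=10.0, dt=0.1, T=1000.0):
    """Euler-integrate the two-population model; times in ms, rates in Hz."""
    v_E, v_I, trace = 20.0, 10.0, []       # arbitrary initial rates
    for _ in range(int(T / dt)):
        dv_E = (-v_E + g(1.25 * v_E - v_I + 10.0)) / tau_E
        dv_I = (-v_I + g(v_E - 10.0)) / tau_I
        v_E, v_I = v_E + dt * dv_E, v_I + dt * dv_I
        trace.append(v_E)
    return np.array(trace)

for tau_I in (30.0, 50.0):
    v_E = simulate(tau_I)[-5000:]          # excitatory rate over the last 500 ms
    print(tau_I, v_E.min().round(2), v_E.max().round(2))
# tau_I = 30 ms: min and max both close to 26.67 Hz (activity settles at the fixed point)
# tau_I = 50 ms: min and max stay far apart (sustained oscillation)
```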
Hopf bifurcation
As a parameter is changed, a fixed point of a dynamical
system loses stability and a limit cycle emerges
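For the two-population model above, the Hopf bifurcation can be located by linearizing around the fixed point (assuming g stays in its linear range there) and watching the eigenvalues of the Jacobian cross the imaginary axis; a sketch of that check:

```python
import numpy as np

tau_E = 10.0                                   # ms
for tau_I in (30.0, 40.0, 50.0):               # ms
    # Linearized dynamics around the fixed point:
    #   d(dv_E)/dt = (0.25*dv_E - dv_I) / tau_E
    #   d(dv_I)/dt = (dv_E - dv_I) / tau_I
    J = np.array([[0.25 / tau_E, -1.0 / tau_E],
                  [1.0 / tau_I,  -1.0 / tau_I]])
    print(tau_I, np.linalg.eigvals(J))
# The real part of the complex eigenvalue pair crosses zero near tau_I = 4*tau_E = 40 ms:
# the fixed point loses stability and a limit cycle appears (the Hopf bifurcation).
```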
I. Introduction
− Basic concepts of neural networks
II. Realistic neural networks
− Homogeneous excitatory and
inhibitory populations
− The olfactory bulb
− Persistent neural activity
− ……
The olfactory bulb
[Figure: the olfactory-bulb circuit, with excitatory mitral cells receiving Input_E and inhibitory granule cells receiving Input_I]

τ_E dv_E/dt = −v_E + F_E( W_EE v_E − W_EI v_I + Input_E )
τ_I dv_I/dt = −v_I + F_I( W_IE v_E − W_II v_I + Input_I )

where τ_E = τ_I = 6.7 ms, W_EE = W_II = 0, and n = m = 10
(v_E and v_I are vectors of 10 mitral-cell and 10 granule-cell firing rates, and the W's are weight matrices)
• A recurrent network
• A ?-D phase space is
needed
• The Input_E must induce a transition between
fixed-point and oscillatory activity
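A structural sketch of the model in Python, with 10 mitral and 10 granule cells; the weight matrices, gain functions, and odor input are placeholders rather than the values used by Dayan and Abbott, so whether the network merely rings or keeps oscillating through the sniff depends on those choices:

```python
import numpy as np

n = m = 10
tau = 6.7                                    # ms, tau_E = tau_I
rng = np.random.default_rng(0)
W_EI = rng.uniform(0.5, 1.5, size=(n, m))    # granule -> mitral (inhibitory) weights, placeholder
W_IE = rng.uniform(0.5, 1.5, size=(m, n))    # mitral -> granule (excitatory) weights, placeholder

def F(x):                                    # placeholder rectifying gain function
    return np.maximum(x, 0.0)

def sniff(odor, dt=0.05, T=200.0):
    """Euler-integrate one sniff cycle and return the mitral-cell rate traces."""
    v_E, v_I, trace = np.zeros(n), np.zeros(m), []
    for _ in range(int(T / dt)):
        dv_E = (-v_E + F(-W_EI @ v_I + odor)) / tau    # W_EE = 0
        dv_I = (-v_I + F(W_IE @ v_E)) / tau            # W_II = 0, Input_I = 0 here
        v_E, v_I = v_E + dt * dv_E, v_I + dt * dv_I
        trace.append(v_E.copy())
    return np.array(trace)

rates = sniff(odor=rng.uniform(1.0, 5.0, size=n))      # a made-up odor input pattern
print(rates[-1].round(2))                    # mitral-cell rates at the end of the sniff
```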
Activities of four of ten mitral and granule cells
during a single sniff cycle for two different odors
(Dayan and Abbott, 2001)
I. Introduction
− Basic concepts of neural networks
II. Realistic neural networks
− Homogeneous excitatory and
inhibitory populations
− The olfactory bulb
− Persistent neural activity
Persistent neural activity
• It refers to a sustained change in action potential
discharge that long outlasts a stimulus
• It is found in a diverse set of brain regions and organisms
and several in vitro systems, suggesting that it can be
considered a universal form of circuit dynamics
Several examples for persistent neural
activity
• Oculomotor neural integrator cells in an awake
behaving goldfish
• Monkey prefrontal cortical cells
• Head direction cells in rat
An oculomotor neural integrator cell in
an awake behaving goldfish
(Major and Tank 2004)
Generation mechanism of persistent
neural activity during memory-guided
saccade task in the prefrontal cortex
τ_E dV_j^E/dt = −V_j^E + Σ_{i≠j} w_ji^EE r_i − w^EI r^I + s_j

with Gaussian recurrent excitatory weights
w_ji^EE = w_max^EE exp( −(p_j − p_i)² / 2a² )

and a tuned external input centered on the target direction T,
s_j ∝ exp( −(T − p_j)² / 2b² )

The global inhibitory unit obeys
τ_I dV^I/dt = −V^I + Σ_i r_i

120 units; the cue is presented for 300 ms.

Persistent activity

Activation function:
r_j = 1 / (1 + e^{−V_j^E})

A distractor input is presented for 300 ms.

(Gruber et al. 2006)
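A sketch of this mechanism in Python; apart from the 120 units, the Gaussian recurrent weights, and the sigmoid activation taken from the slide, the parameter values, the global-inhibition coupling, and the background input are my own guesses for illustration, not those of Gruber et al. (2006):

```python
import numpy as np

N = 120                                       # excitatory units with preferred positions p_j
p = np.arange(N) * 360.0 / N                  # preferred positions on a ring (degrees)
a, b = 15.0, 15.0                             # widths of recurrent weights and of the cue (made up)
tau_E, tau_I, dt = 20.0, 10.0, 0.5            # time constants and Euler step in ms (made up)
w_exc, w_inh, h0 = 1.5, 40.0, -5.0            # excitation, global inhibition, background (made up)

d = p[:, None] - p[None, :]
d = (d + 180.0) % 360.0 - 180.0               # shortest distance on the ring
W = w_exc * np.exp(-d**2 / (2 * a**2))        # Gaussian recurrent excitation ~ exp(-(p_j-p_i)^2/2a^2)
np.fill_diagonal(W, 0.0)

def r_of(V):                                  # activation function r = 1/(1+exp(-V))
    return 1.0 / (1.0 + np.exp(-V))

T_target = 180.0                              # remembered target direction
cue = 10.0 * np.exp(-(T_target - p)**2 / (2 * b**2))

V, V_I = np.full(N, h0), 0.0                  # excitatory units and the global inhibitory unit
for t in np.arange(0.0, 2000.0, dt):          # simulate 2 s
    r = r_of(V)
    s = cue if t < 300.0 else 0.0             # tuned input only during the first 300 ms
    V += dt * (-V + h0 + W @ r - w_inh * V_I + s) / tau_E
    V_I += dt * (-V_I + r.mean()) / tau_I     # inhibition driven by the mean excitatory rate

r = r_of(V)
print("peak rate %.2f at %.0f deg, mean rate %.2f" % (r.max(), p[np.argmax(r)], r.mean()))
# A localized bump of activity around the cued direction outlasts the cue: persistent activity
# maintained by recurrent excitation.
```

Moving the cue to a different direction leaves a bump centered there instead, which is the sense in which the network has a ring (or line) of attractors.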
From this PFC network, we learn:
1. The persistent PFC activity is a
network effect associated with
recurrent excitation
2. The network has a line (or ring)
attractor
Homework
1. What is a Hopf bifurcation in a dynamical system?
2. Construct a Wilson-Cowan network model with one excitatory and one inhibitory
neuron, and explore how the parameters relate to the dynamical behavior.
3. Think about the possible computational and neural mechanisms of persistent activity in neural systems.