Neural Network Modeling: Round Two

University Studies 15A: Consciousness I
Neural Network Modeling (Round 2)
Let us begin again with the problem that neuroscientists confronted in the
100-step rule. That is, they knew that whatever the brain was doing, it was
using massive parallelism to produce responses in no more than 100
sequential steps of neurons firing.
So, we have a brain.
If we unfold and flatten the neocortex, we get two sheets of interconnected,
six-layered neuronal assemblies, connected by the corpus callosum.
The work of the brain is done by the
neuronal clusters becoming activated
and activating additional neuronal
clusters in turn.
Experiential Level:
Seeing, hearing, remembering, deciding, acting
Brain Level:
One set of neurons becomes activated, activating another
set, which in turn activates yet another set. This continues
through 100 steps of transmitted activation.
For example:
Information about light comes in from the retina to the primary visual cortex:
The primary visual cortex passes the activation on to higher processing layers:
Each layer processes the activation by aggregating large numbers of simple
patterns into a smaller number of more complex patterns.
Remember that we started with just angled line segments in the “simple
cells” of V1.
What we “see” is a complex reconstruction built from many layers of input
that had been divided into separate streams and reassembled. And keep
those feedback connections in mind.
To construct what we “see,” activation passes from the visual cortex to layers
of neurons that synthesize the input from different sensory sources:
The neural clusters embodying the Visual System then continue further
and connect to those that embody the Semantic Systems and visual
object recognition:
Despite the many layers, there are
fewer than one hundred layers.
Because the brain is processing all the
activation information in parallel, the
activation passes quickly from layer to
layer.
So, when you see this picture,
your visual system very quickly uses the feedback connections
from higher memory of objects and draws on your knowledge
of Dalmatians to fill in the missing information.
Experiential Level:
Seeing, hearing, remembering, deciding, acting
Brain Level:
One set of neurons becomes activated, activating another
set, which in turn activates yet another set. This continues
through 100 steps of transmitted activation.
How the Trick is Done: Neural Networks
Each level of processing in the brain is a “cell assembly,”
that is, a layer of neurons.
This is a cell assembly of neurons in a fly’s retina. This picture shows
the connection of one layer of neurons to the next.
The connection of neurons of one layer to those of the next is
through the synaptic junction:
Various factors control the strength of the connection between neurons
at the synaptic junction:
- the number of vesicles of neurotransmitter in the “sending” neuron
- structural changes in the junction gap
- the number of receptors on the “receiving” neuron
All of these can change.
Back to the Neuron and our Schematic Model of one:
[Schematic neuron: inputs i(a), i(b), i(c) arrive on the dendrites (input)
with weights wa, wb, wc; the cell body applies a threshold t and sends the
output o down the axon (output).]
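As a minimal sketch of this schematic in Python (the particular inputs, weights, and threshold below are illustrative assumptions, not values from the slides):

```python
# Schematic neuron: weighted inputs are summed at the cell body, and the
# unit fires (outputs 1) if the sum reaches the threshold t.
def neuron(inputs, weights, t):
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1 if net >= t else 0

# Inputs i(a)=1, i(b)=0, i(c)=1 with weights wa=0.5, wb=0.3, wc=0.4:
print(neuron([1, 0, 1], [0.5, 0.3, 0.4], t=0.8))  # 0.9 >= 0.8, so it fires: 1
```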
Our Schematic Representation of Layers of Cell Assemblies:
[Schematic: Layer A units a(1), a(2), a(3), ..., a(i), ..., a(n) are fully
connected to Layer B units b(1), ..., b(j), ..., b(m), which produce the
outputs O1, ..., Oj, ..., Om.]
The strength of the synaptic connection between neurons in the two layers,
a(i) and b(j), is represented by $w_{i,j}$.

This set of weights defines a weighting matrix of dimension (m, n): one row
per Layer B neuron and one column per Layer A neuron:

$$W = \begin{pmatrix} w_{1,1} & w_{2,1} & \cdots & w_{n,1} \\ w_{1,2} & w_{2,2} & \cdots & w_{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1,m} & w_{2,m} & \cdots & w_{n,m} \end{pmatrix}$$
Experiential Level:
Seeing, hearing, remembering, deciding, acting
Brain Level:
One set of neurons becomes activated, activating another set, which in
turn activates yet another set. This continues through 100 steps of
transmitted activation.
How the Trick is Done: Neural Networks
At each step, the activation of Layer B derives from the sum of the input
activations from Layer A, each multiplied by the strength of its synaptic
junction. Or:

$$\mathit{Net}_B = W \cdot I_A$$
Since all the neurons in Layer B are receiving input at the same
time, calculating the activation of the entire layer occurs
simultaneously.
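As a minimal sketch in NumPy (the layer sizes and values here are illustrative assumptions), the whole layer's activation is a single matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 4, 3                           # Layer A has n neurons, Layer B has m
W = rng.random((m, n))                # row j, column i holds w[i,j]: a(i) -> b(j)
I_A = np.array([1.0, 0.0, 1.0, 1.0])  # activation pattern on Layer A

Net_B = W @ I_A                       # Net_B = W . I_A: the whole layer at once
print(Net_B)                          # one net-input value per Layer B neuron
```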
Experiential Level:
Learning and memory
Brain Level:
Neurons in one cell assembly change the strength of their connections
to neurons in the next cell assembly by changing the structure of the
synaptic connection.
How we represent memory and learning in Neural Network models

Memory: all memory resides in the Weighting Matrices that represent the
structure of synaptic connections in the system.

Activation-based Learning: changing weights in the Weighting Matrices,
using Hebb’s Rule:

$$\Delta w_{i,j} = a_i b_j$$
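As a minimal sketch, Hebb's Rule for a whole pair of layers can be computed as one outer product (the learning-rate factor is my addition; the slide's bare rule corresponds to a rate of 1):

```python
import numpy as np

# Hebb's Rule for a layer pair: delta_w[i,j] = a_i * b_j.
lr = 0.1                             # learning rate (my addition; slide has lr = 1)
a = np.array([1.0, 0.0, 1.0])        # Layer A activations a(i), n = 3
b = np.array([0.0, 1.0])             # Layer B activations b(j), m = 2

W = np.zeros((2, 3))                 # row j, column i holds w[i,j]
W += lr * np.outer(b, a)             # strengthen only co-active pairs
print(W)                             # nonzero only where both a_i and b_j fired
```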
At every level, as the brain passes activation from layer to layer, the
layers adjust their patterns of synaptic strength.
At every level, the boxes representing functional units in the brain actually
have their own internal structures of cell assemblies, and these also have
their own changing patterns of synaptic connections, their own W.
Experiential Level:
“Things” in memory: apples, houses, words, ideas
Learned abilities: riding a bicycle
Brain Level:
They are all patterns of synaptic connections.
Modeling:
It is all the Ws.
This has implications, because the layers in a network
operate as a system rather than as independent neurons.
Remember our simple set of artificial neurons:
1. Sixteen input units are connected to two output units.
2. Only two input units are active at a time.
3. They must be horizontal or vertical neighbors.
4. Only one output unit can be active at a time (inhibition is marked by the
black dots).
[Figure: two rows of panels labeled Trial 1, Trial 2, and Trial 3.]
We have trained this network on a simulation.
If one used this as a “perception” unit that passed its internal state onto
other layers, those other layers would only know of two “objects”
activated by the input layer.
How it would “see” the 16 input units varies from Trial 1 to Trial 3, but
it divides the input space into just two “things” as patterns of
connection.
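The slides do not say how this network was trained, so as one plausible sketch, here is a standard winner-take-all competitive-learning rule applied to the 16-input, 2-output setup (the rule, rates, and counts below are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_input():
    """Two active units that are horizontal or vertical neighbors on a 4x4 grid."""
    x = np.zeros(16)
    r, c = rng.integers(4), rng.integers(3)
    if rng.random() < 0.5:
        i, j = 4 * r + c, 4 * r + c + 1        # horizontal neighbors in row r
    else:
        i, j = 4 * c + r, 4 * (c + 1) + r      # vertical neighbors in column r
    x[i] = x[j] = 1.0
    return x

W = rng.random((2, 16))                        # weights to the two output units
W /= W.sum(axis=1, keepdims=True)              # normalize each unit's weights

for _ in range(5000):
    x = random_input()
    winner = np.argmax(W @ x)                  # mutual inhibition: only one unit wins
    W[winner] += 0.05 * (x / x.sum() - W[winner])  # move the winner toward the input

for _ in range(3):                             # each pattern now maps to one "object"
    x = random_input()
    print(x.reshape(4, 4).astype(int), "-> object", int(np.argmax(W @ x)))
```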
ALL “objects,” from cars and people to concepts like cuteness or justice,
are mutually defined partitions of very, very high-dimensional input spaces.
In a word, this is your brain at work:
Neural networks extract patterns and divide an input space.
This can lead to odd results with implications for biological neural networks.
James McClelland tested the ability of a neural network to build a
classification tree based on closeness of attributes.
He built a network that could handle simple property statements like:
Robin can grow, move, fly.
Oak can grow.
Salmon has scales, gills, skin.
Robin has wings, feathers, skin.
Oak has bark, branches, leaves, roots.
Baars and Gage discuss this and give the design:
Neural Network software turns this sort of design into a computer
program to simulate the network:
When one runs the simulation, the result is a tree that
does a good job:
What Baars and Gage do not discuss is the next step.
McClelland fed the system facts about penguins:
Penguin can swim, move, grow.
Penguin has wings, feathers, skin.
The results were profoundly different depending on whether the network was
given the facts about penguins interleaved with facts about the other
objects, or was trained on all penguins all the time:
We’ll come back to this result when we discuss memory and sleep.
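As a hedged toy reconstruction of that contrast, assuming a small backpropagation network with one hidden layer (the architecture, sizes, and training counts below are my assumptions, not McClelland's published design), one can train on the base facts, then add penguin facts either blocked or interleaved, and measure how much the old knowledge is disturbed:

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

# Toy item -> property facts, following the slides' examples.
items = ["robin", "oak", "salmon", "penguin"]
props = ["grow", "move", "fly", "swim", "scales", "gills",
         "wings", "feathers", "skin", "bark", "leaves"]
facts = {
    "robin":   {"grow", "move", "fly", "wings", "feathers", "skin"},
    "oak":     {"grow", "bark", "leaves"},
    "salmon":  {"grow", "move", "swim", "scales", "gills", "skin"},
    "penguin": {"grow", "move", "swim", "wings", "feathers", "skin"},
}
X = np.eye(len(items))                                  # one-hot item inputs
Y = np.array([[p in facts[i] for p in props] for i in items], float)

def forward(net, x):
    W1, W2 = net
    h = np.tanh(W1 @ x)                                 # shared hidden representation
    return h, 1.0 / (1.0 + np.exp(-(W2 @ h)))           # sigmoid property outputs

def train_step(net, x, y, lr=0.5):
    W1, W2 = net
    h, o = forward(net, x)
    d_o = (o - y) * o * (1 - o)                         # output delta (squared error)
    d_h = (W2.T @ d_o) * (1 - h ** 2)                   # backprop through tanh
    W2 -= lr * np.outer(d_o, h)
    W1 -= lr * np.outer(d_h, x)

def error_on(net, idxs):
    return np.mean([(forward(net, X[i])[1] - Y[i]) ** 2 for i in idxs])

net = [rng.normal(0, 0.5, (8, len(items))),             # 8 hidden units (assumption)
       rng.normal(0, 0.5, (len(props), 8))]
base = [0, 1, 2]                                        # robin, oak, salmon
for _ in range(3000):                                   # learn base facts, interleaved
    i = rng.choice(base)
    train_step(net, X[i], Y[i])

blocked, mixed = copy.deepcopy(net), copy.deepcopy(net)
for _ in range(600):
    train_step(blocked, X[3], Y[3])                     # all penguins, all the time
    j = rng.choice(4)                                   # penguin mixed with old items
    train_step(mixed, X[j], Y[j])

print("old-item error, blocked penguin training:     %.4f" % error_on(blocked, base))
print("old-item error, interleaved penguin training: %.4f" % error_on(mixed, base))
```

Because the items share one hidden layer and one output matrix, blocked penguin training drags those shared weights toward penguin alone, while interleaving keeps rehearsing the old items.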
Artificial neural networks like the “penguin learner” allow researchers to
model the behavior of neural systems.
An important aspect of neural networks in the brain that people have
explored through artificial networks is the brain’s use of recurrence,
in which nodes in networks loop back on themselves.
Simulated models show that one absolutely crucial feature of recurrent
networks is the ability to complete partial patterns:
The image of the Dalmatian is very incomplete, but the brain feeds back
knowledge of Dalmatians to the visual system, which then produces a yet
more complete view and cycles in loops until perception settles into
“Dalmatian.”
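As a minimal sketch of pattern completion in a recurrent network, here is a Hopfield-style settling loop (my choice of illustration; the slides do not name a specific model). The stored pattern stands in for the remembered Dalmatian, and the corrupted probe stands in for the incomplete image:

```python
import numpy as np

rng = np.random.default_rng(1)

pattern = rng.choice([-1, 1], size=64)        # one stored +/-1 pattern ("Dalmatian")
W = np.outer(pattern, pattern).astype(float)  # Hebbian storage: w_ij = p_i * p_j
np.fill_diagonal(W, 0)                        # no self-connections

probe = pattern.copy()
flip = rng.choice(64, size=20, replace=False)
probe[flip] *= -1                             # corrupt roughly a third of the bits

state = probe.copy()
for _ in range(5):                            # recurrent settling loop
    for i in rng.permutation(64):             # asynchronous unit updates
        state[i] = 1 if W[i] @ state >= 0 else -1

print("bits wrong before settling:", int(np.sum(probe != pattern)))
print("bits wrong after settling: ", int(np.sum(state != pattern)))
```

The loop keeps feeding the network's current guess back into itself until the state stops changing, which is the settling behavior the Dalmatian example describes.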
These sorts of pattern-completing, self-modifying networks appear
throughout the brain.
Baars and Gage stress that 90% of the connections between the thalamus
and V1 go from V1 to the thalamus as re-entrant connections rather than
feed-forward input.
Many neural net modelers have developed systems based on re-entrant
brain connectivity:
To sum up:

Experiential Level: Seeing, deciding, acting
Biological Level: Layers of cell assemblies transmitting activation

Experiential Level: Learning
Biological Level: Adjustment of synaptic strength in connections between neurons

Experiential Level: Memory
Biological Level: The strength of synaptic connections maintained by the system of neuron assemblies

Experiential Level: “Things,” all forms of internal representation
Biological Level: Mutually differentiated patterns of activation within an over-all system

Modeling: $\mathit{Net}_B = W \cdot I_A$, $\Delta w_{i,j} = a_i b_j$, $W$, and attractor basins (You really don’t want to know the details.)
It’s all done with neural networks.