
Modeling Neural Networks
Christopher Krycho
Advisor: Dr. Eric Abraham
May 14, 2009
Computational Neuroscience
• Young field (decades old)
• Modeling the brain with
physical principles
– Oscillatory behaviors
– Large-scale networks
• Technique:
– Create a model of neurons
– Model neuron interactions
(varying degrees of complexity)
– Compare results to
EEG/MRI/etc.
An EEG recording of
various locations in a brain
Project Goals
• Build a computer program that builds and
analyzes small-world networks
• Apply that model to neural network models
created by the Zochowski group at Michigan
– Reproduce neuron and network models
– Run a simulation with that model and compare
results
• Test different models and compare results
– Doing new science by examining variations on
their model
Outline
• What are neurons and neural networks?
• Small world networks and their relevance
• Modeling a set of neurons in the brain with
small world networks
• Graph and analyze total synaptic current in
the system over time
Neurons
• Primary components of nervous system
• Transmission and reception of electrical signals
– “Synaptic” current: current along synapses, the
connections between neurons
Representation of
a neuron
Source: Wikipedia
The Nervous System
• Composed of neurons
throughout
– Brain
– Nerves in fingers
– Spinal cord
• 10^11 neurons in the brain alone
– Each with 10^5+ connections to
other neurons (and itself)
The human nervous system
Source: http://sciencecity.oupchina.com.hk/biology/
Small World Networks (SWNs)
• Small average “path length”
– Number of steps from one random node to another
• Large number of nodes
– # of nodes in network
• Low connectivity
– # of connections per node
• High clusteredness
– Nodes with mutual
connections also connect
to each other
A network with small
world characteristics
Source: Exploring Complex Networks, Steven H.
Strogatz, Nature 410 268-276 (8 March 2001)
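The two metrics above (average path length and clusteredness) can be computed directly. A minimal Python sketch for illustration only (the project itself is written in Fortran): path length via breadth-first search, clustering as the fraction of each node's neighbor pairs that are themselves linked. The 6-node example graph is an assumption chosen for brevity.

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all connected ordered node pairs (BFS)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())   # dist[src] is 0, so it adds nothing
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    coeffs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# A 6-node ring plus one shortcut (0-3): shortcuts like this are what
# give small-world networks their short average path length.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
swn = {i: set(nbrs) for i, nbrs in ring.items()}
swn[0].add(3)
swn[3].add(0)
```

Adding even one shortcut lowers the average path length of the ring, which is the small-world effect in miniature.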
Network vs. SWN
• “Small world” different from random networks
• Despite…
– low relative connectivity
– Connections primarily short-range
– Connections very clustered
• … short average path length between any
random points in network
• SWN architecture observed in brain
– On a small scale in local neural networks
– Possibly on a larger scale among neural networks
Making SWNs
• Initialize an array representing neurons’
connections to each other
– Connections initially symmetric within networks
– Two or more networks connected internally, but
not to each other
– Note: actual networks also connected to next-nearest neighbors
Two networks of 9
nodes each:
A
B
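The initialization step can be sketched as follows. This is an illustrative Python toy, not the project's Fortran code: it uses two 9-node rings with nearest- and next-nearest-neighbor links (the actual model uses 15x15 grids), stored as a block-diagonal matrix so the networks are connected internally but not to each other.

```python
def ring_network(n):
    """Symmetric 0/1 matrix: each node links to its nearest and
    next-nearest neighbors on a ring."""
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for step in (1, 2):
            a[i][(i + step) % n] = 1
            a[i][(i - step) % n] = 1
    return a

def two_networks(n):
    """Block-diagonal matrix: networks A and B each connected
    internally, with no links between them yet."""
    m = [[0] * (2 * n) for _ in range(2 * n)]
    block = ring_network(n)
    for i in range(n):
        for j in range(n):
            m[i][j] = block[i][j]          # network A block
            m[n + i][n + j] = block[i][j]  # network B block
    return m

conn = two_networks(9)   # two networks of 9 nodes each
```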
From network to SWN, pt. 1
• Randomly rewire the internal connections of
each network
– Use a random number generator
– Connections can be symmetric or directional
• Here, symmetric
• In model, directional
A
The networks after
internal rewiring:
B
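The internal rewiring step might look like this in Python (a sketch, not the project's Fortran): each existing link is cut with some probability and replaced by a link to a random partner in the same network. This version is symmetric, as drawn on the slide; the actual model's connections are directional.

```python
import random

def ring_network(n):
    """Symmetric ring with nearest and next-nearest neighbor links."""
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for step in (1, 2):
            a[i][(i + step) % n] = a[i][(i - step) % n] = 1
    return a

def rewire(a, p, rng):
    """Rewire each symmetric link with probability p to a random new
    partner within the same network."""
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            if a[i][j] and rng.random() < p:
                a[i][j] = a[j][i] = 0           # cut the old link
                k = rng.randrange(n)
                while k == i or a[i][k]:        # no self-loops or duplicates
                    k = rng.randrange(n)
                a[i][k] = a[k][i] = 1           # add the random shortcut

net = ring_network(9)
rewire(net, 0.3, random.Random(0))   # rewire ~30% of links
```

Note that rewiring preserves the total number of links; it only redistributes them, which is what turns the regular ring into a small-world network.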
From network to SWN, pt. 2
• Randomly add connectivity between networks
– Choose some proportion of the neurons in any
given network to receive connections
– Choose the number in any given network sending
connections
– Connect - showing directional connections here
A
The networks after
adding connectivity:
Key:
From A to B
From B to A
B
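The internetwork step above can be sketched the same way (Python for illustration; `frac` and `n_senders` are the tunable choices described on this slide, and the values used below are examples, not the model's):

```python
import random

def connect_networks(m, n, frac, n_senders, rng):
    """Add directed internetwork links to connection matrix m (two blocks
    of n nodes; rows send, columns receive). A fraction `frac` of the
    neurons in each network receives input from `n_senders` random
    neurons of the other network."""
    for recv_base, send_base in ((n, 0), (0, n)):   # A->B, then B->A
        receivers = rng.sample(range(recv_base, recv_base + n),
                               int(frac * n))
        for r in receivers:
            for s in rng.sample(range(send_base, send_base + n), n_senders):
                m[s][r] = 1   # row s sends, column r receives

m = [[0] * 18 for _ in range(18)]
connect_networks(m, 9, 0.25, 4, random.Random(1))
```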
Representing Connections
• Matrices filled with 0s and 1s
– Rows send, columns receive
Example connection matrices for networks 1 and 2
A small network rewired
• From left to right:
– The initial setup of the network - symmetric
– The networks with 30% of connections randomly rewired
– 25% of neurons in each network receiving input from 4
different neurons in the other network
Representative Connection Graphs
The networks before any
rewiring:
• 15x15 grids = 225
neurons/network
• 2 networks
• 450x450 matrix:
– Rows send
– Columns receive
• Connections to
nearest and next-nearest neighbors
Rep. Conn. Graphs cont’d.
The networks after
internal rewiring:
Rep. Conn. Graphs cont’d.
The networks after
adding internetwork
connections:
A Neuron Model
• Equation representing the state of each neuron at a
given time
– Numerical value representing charge (V) on the neuron
and thus closeness to “spiking” (sending a signal)
• Equation involves 4 parameters:

dV_i/dt = μ_i − V_i(t) + A·Σ_j J_i,j(t) + B·Σ_k J_i,k(t) + η(t)

– μ_i - Leakage current on the ith neuron; constant over time but
differs for each neuron
– J_i,j(t) - Incoming current to the ith neuron from the jth
neurons (connected, in the same network)
– J_i,k(t) - Incoming current to the ith neuron from the kth
neurons (connected, in another network)
– η(t) - White noise
Generating Current cont’d.
• Equation (identical form for intra- and internetwork currents):

J_i,j(t) = exp(−t/τ_s) − exp(−t/τ_f)

• J_i,j(t): current to neuron i from neuron j at time t
– J_i,k(t) has the same form, but a different lag
– τ_s and τ_f ensure the correct shape of the pulse
Current versus time; threshold reached at t = 0
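The pulse shape is easy to verify numerically. A Python sketch (the τ values below are illustrative placeholders, not the model's actual constants):

```python
import math

def syn_current(t, tau_s=3.0, tau_f=0.3):
    """Double-exponential pulse J(t) = exp(-t/tau_s) - exp(-t/tau_f).
    With tau_f < tau_s the pulse rises quickly and decays slowly.
    The tau values are illustrative, not the model's constants."""
    if t < 0:
        return 0.0   # no current before the presynaptic spike at t = 0
    return math.exp(-t / tau_s) - math.exp(-t / tau_f)
```

Because the fast exponential dies off first, the difference starts at zero, rises sharply, and then decays on the slow timescale, matching the pulse pictured on the slide.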
Solving the Equation
• Integrate over 10 sec with Euler numerical
method
– Maximum 10^-3 sec step size for good resolution
(some runs as low as 10^-5 sec)
• Large computation:
– 450 neurons
– 5 calculations/neuron/iteration
– ~ 225 billion calculations at highest resolution
• ~ 45-50 minutes of real time for high
resolution
– Coded in Fortran
– Running on 2007 MacBook Pro
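A single Euler step of the neuron equation might look like the sketch below (Python rather than the project's Fortran). The threshold V_th = 1, the reset-to-zero rule, and all parameter values here are illustrative assumptions, not the Zochowski group's actual choices.

```python
import math

def euler_step(V, mu, J_intra, J_inter, A, B, dt, noise):
    """One Euler step of dV_i/dt = mu_i - V_i + A*sum_j(J_ij) + B*sum_k(J_ik),
    plus a white-noise term supplied by noise(i). A neuron crossing the
    threshold V_th = 1 "spikes" and resets to 0 (threshold and reset rule
    are illustrative assumptions)."""
    V_th = 1.0
    spiked = []
    for i in range(len(V)):
        dV = (mu[i] - V[i] + A * sum(J_intra[i]) + B * sum(J_inter[i])) * dt
        V[i] += dV + noise(i) * math.sqrt(dt)   # noise scaled for Euler step
        if V[i] >= V_th:
            spiked.append(i)
            V[i] = 0.0
    return spiked
```

Repeating this step 10^4 to 10^6 times per 10-second run, for all 450 neurons, is what makes the computation as large as described above.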
Network Progress
Network Finished
Simulation Progress
• Replicating Zochowski model: in progress
– Reproducing underlying phenomena
– Missing the tell-tale sign of perfectly reproducing their model:
“bursting”
The Zochowski group’s
model graph of network
behavior:
Early Simulation Run
Total synaptic current (arb. units) vs. time (s) for networks 1 and 2
Present Simulation Run
The Zochowski group’s
model showing bursting:
Our model showing basic
behaviors but no bursting
Result Details
The Zochowski model
individual neurons
Our model individual neurons
The Future (of the Project)
• Finish replicating Zochowski group’s results
• Open doors for the future
• Possibility of expanding the number and/or
scale of the networks
• My design includes
– Ability to implement and then test learning
mechanisms
– Ability to increase complexity of neuron model
– Ability to increase complexity of network model
Acknowledgments
• The Zochowski group at The University of
Michigan, particularly Jane Wang and Sarah
Feldt, for their work on which this project is
based, and for answering many questions
along the way
• Dr. Keiran Mullen for teaching me enough
Fortran to start the project
• Dr. Eric Abraham, my advisor