Title: Fast Computations with Spike Sequence Attractors in Neural Networks
1. Fast Computations with Spike Sequence Attractors in Neural Networks
- Dezhe Jin
- Howard Hughes Medical Institute
- Massachusetts Institute of Technology
2. Outline
- Introduction
- Spike sequence attractors
- Summary
3. Introduction
4. The brain: networks of neurons
- The human brain contains ~10^11 neurons.
- Hierarchical, modular, interacting structures: cortical areas built from local neural networks.
5. Neuron spikes
[Figure: neuron schematic (dendrite, cell body, axon) with inputs and outputs. The membrane potential V, shaped by leak, active, excitatory, and inhibitory conductances and by ion flow across the membrane, rests near -70 mV (0 mV outside). When V reaches threshold, the neuron emits a spike (width ~1 msec) and V resets. Excitatory and inhibitory neuron types are distinguished.]
Spikes are transmitted to other neurons
6. A typical structure of local networks
- Excitatory projection neurons: 80% of total
- Inhibitory inter-neurons: 20% of total
- Inputs from lower-area neurons; outputs to other local networks
- The inhibitory neurons regulate the dynamics of the excitatory neurons.
7. Computing with dynamical attractors
[Figure: a local recurrent neural network; from different initial conditions, the membrane potentials (spike patterns of neurons 1-4 over time) converge to distinct dynamical attractors, e.g. a "tiger" attractor and a "cow" attractor.]
8. Issues with attractor encoding
- Is the convergence fast?
- Is the number of attractors large enough to encode a large number of external inputs?
- Is the timing of the spikes important?
9. Spatiotemporal patterns of spikes
- Neurons of the local networks in locust antennal
lobe responding to odor presentation
[Figure: intracellular recordings of two locust antennal-lobe neurons over two trials; the membrane potentials show repeatable spatiotemporal spike patterns during odor presentation. Scale bars: 40 mV, 200 msec.]
Stopfer & Laurent (Nature, 1999)
10. Spatiotemporal spike attractors
- For a large class of neural networks, spatiotemporal spike patterns are the dynamical attractors.
  - Fast convergence with a few transient spikes
  - Rich spatiotemporal structures
  - Spike timings are precise
- Simplifications
  - Simple models of neurons and of the coupling between them
  - No interneurons; direct excitation and inhibition between neurons
  - No noise, no spike transmission delay, ...
- Roadmap
  - A special case: winner-take-all computation
  - The general case
11. Winner-take-all computation
12. The structure of the network
[Figure: network with global inhibitory connections, excitatory self-connections, and external inputs.]
- No inhibitory inter-neurons
- Identical neurons, excitatory connection strengths, and inhibitory connection strengths
- External inputs constant in time but varying spatially
13. Neuron model: the leaky integrate-and-fire neuron
[Figure: membrane potential of a leaky integrate-and-fire neuron driven by an external input, with a leak time constant and a resting membrane potential; when the potential reaches the spike threshold it is reset (the spike itself is not modeled).]
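The leaky integrate-and-fire dynamics above can be sketched in a few lines of Python. The parameter values (leak time constant, threshold, reset) are illustrative assumptions; the talk specifies only a resting potential near -70 mV.

```python
# Minimal leaky integrate-and-fire sketch. All numerical values are
# illustrative assumptions, not taken from the talk.
TAU = 20.0       # leak time constant (msec)
V_REST = -70.0   # resting membrane potential (mV)
V_TH = -54.0     # spike threshold (mV)
V_RESET = -70.0  # reset potential (mV)
DT = 0.1         # Euler integration step (msec)

def simulate_lif(i_ext, t_max=200.0):
    """Integrate a single LIF neuron with constant external input i_ext
    (expressed as the steady-state depolarization in mV) and return the
    list of spike times."""
    v = V_REST
    spike_times = []
    for step in range(int(t_max / DT)):
        # Leak pulls v back toward rest; the input drives it upward.
        v += DT / TAU * (-(v - V_REST) + i_ext)
        if v >= V_TH:                      # threshold crossed:
            spike_times.append(step * DT)  # record the spike time ...
            v = V_RESET                    # ... and reset (spike not modeled)
    return spike_times
```

A suprathreshold input produces periodic spiking (`simulate_lif(20.0)`), while an input whose steady state stays below threshold (e.g. `simulate_lif(10.0)`) produces none.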
14. Coupling between neurons
[Figure: excitatory coupling — a presynaptic spike increases the excitatory conductance gE by the connection strength GE; gE decays with the conductance decay time and depolarizes the postsynaptic membrane potential. Inhibitory coupling — a spike increases the inhibitory conductance gI by GI, driving the membrane potential toward the inhibitory reversal potential (-75 mV).]
15. δ-pulse coupling
[Figure: in the δ-pulse limit, the conductance gE is concentrated at the spike time, so each spike shifts the postsynaptic potential instantaneously.]
16. The winner-take-all attractor
[Figure: membrane potentials vs. time under constant external inputs; all neurons fall silent except the neuron with the maximum input, which spikes periodically.]
- The attractor: only the neuron with the maximum input spikes, and it spikes periodically.
- Computation: maximum input selection.
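A minimal simulation of this attractor, assuming δ-pulse coupling: every spike instantly adds a self-excitation to the spiker and subtracts a global inhibition from everyone else. The parameter values and the exact update rule are illustrative assumptions, not the talk's equations.

```python
# Winner-take-all sketch with δ-pulse coupling. Numbers and update rule
# are illustrative assumptions.
TAU, V_REST, V_TH, V_RESET, DT = 20.0, -70.0, -54.0, -70.0, 0.1

def simulate_wta(inputs, g_exc=5.0, g_inh=30.0, t_max=300.0):
    """All-to-all LIF network: each spike adds g_exc to the spiking
    neuron (self-excitation) and subtracts g_inh from every other
    neuron (global inhibition). Returns (time, neuron id) spike pairs."""
    n = len(inputs)
    v = [V_REST] * n
    spikes = []
    for step in range(int(t_max / DT)):
        for i in range(n):
            v[i] += DT / TAU * (-(v[i] - V_REST) + inputs[i])
        for i in range(n):
            if v[i] >= V_TH:
                spikes.append((step * DT, i))
                v[i] = V_RESET + g_exc       # reset, plus self-excitation
                for k in range(n):
                    if k != i:
                        v[k] -= g_inh        # strong global inhibition
    return spikes
```

With inputs [18, 19, 22] (each suprathreshold on its own), only neuron 2 ever spikes: it reaches threshold first, and each of its spikes knocks the others far enough below threshold that they never recover, so the winner is identified by its very first spike.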
17. Fast winner-take-all computation
- Maximum input selection
  - Requires a specific relationship between the strength of the self-excitation and that of the global inhibition.
- Fast convergence
  - The computation is done as soon as the neuron with the maximum input spikes once.
  - Very few transient spikes are needed.
- (simulation)
Jin & Seung (PRE, 2002)
18. Intuitive picture
- Two-stage dynamics: between spikes, a race to spike; at each spike, the membrane potentials jump discontinuously.
- With strong inhibition, spikes from the winner suppress spiking of all other neurons.
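Because the input is constant between spikes, each neuron's threshold-crossing time has a closed form, so the two-stage picture can be simulated event by event instead of on a time grid. This is a sketch of that idea, not the talk's G-mapping; the constants and the coupling matrix `weights` are illustrative assumptions.

```python
import math

# Illustrative constants; the talk specifies only a resting potential
# near -70 mV.
TAU, V_REST, V_TH, V_RESET = 20.0, -70.0, -54.0, -70.0

def time_to_threshold(v, i_ext):
    """Closed-form time for an LIF neuron with constant input to reach
    threshold, or inf if its steady state stays below threshold."""
    v_inf = V_REST + i_ext
    if v_inf <= V_TH:
        return math.inf
    return max(0.0, TAU * math.log((v_inf - v) / (v_inf - V_TH)))

def next_spike(v, inputs, weights):
    """One step of the two-stage dynamics. Stage 1 (the race): find the
    neuron that reaches threshold first and advance every potential in
    closed form to that moment. Stage 2 (the jump): reset the spiker j
    and apply the δ-pulse couplings weights[j][k] (positive =
    excitatory, negative = inhibitory) to all other potentials."""
    n = len(v)
    times = [time_to_threshold(v[i], inputs[i]) for i in range(n)]
    j = min(range(n), key=lambda i: times[i])
    if math.isinf(times[j]):
        return None, v                      # no neuron ever spikes again
    decay = math.exp(-times[j] / TAU)
    v = [V_REST + inputs[i] + (v[i] - V_REST - inputs[i]) * decay
         for i in range(n)]                 # race: exact evolution
    v[j] = V_RESET                          # jump: reset the spiker ...
    for k in range(n):
        if k != j:
            v[k] += weights[j][k]           # ... and pulse the others
    return (times[j], j), v
```

Iterating `next_spike` yields the network's spike sequence one spike per step, with no integration error.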
19. A mapping technique
20. The G-mapping
[Slide equations not preserved. The G-mapping takes the pseudo-spike times (the spike times the neurons would have without interaction) relative to the next spike, together with the neuron ID of the nth spike of the network, and returns the neuron ID of the next spike; its constants depend on the external inputs, the threshold current, and the connection strengths.]
21. Condition for winner-take-all
22. Spatiotemporal spike attractors
23. A class of neural networks
- Network structure
  - Strong global inhibition
  - Arbitrary number of spiking neurons
  - Arbitrary connectivity
  - Arbitrary patterns of the external inputs
  - Heterogeneity in neuron properties
- Simplifications
  - No inter-neurons
  - Leaky integrate-and-fire neuron model
  - δ-pulse synaptic coupling
  - No noise, no spike transmission delay
  - External inputs constant in time but distributed spatially
[Figure: network with excitatory connections, inhibitory connections, and external inputs.]
24. Spike sequence attractors
- All spike sequences flow into spike sequence attractors.
- Timings of the spikes in the attractor are precise.
- The convergence is fast when the inhibition is strong.
- (simulation)
Jin (PRL, 2002)
25. Description of the dynamics
- In between spikes: a race to spike.
- When one neuron spikes: all membrane potentials jump discontinuously.
26. The G-mapping
[Slide equations not preserved. As in the winner-take-all case, the G-mapping takes the pseudo-spike times relative to the next spike and the neuron ID of the nth spike of the network, and returns the neuron ID of the next spike; its constants depend on the external inputs and the connection strengths.]
27. Stability of the mapping
- Exponential damping of small perturbations
[Figure: neuron ID vs. spike number (1st, 2nd, 3rd, ...) under iteration of the mapping G; a perturbed spike sequence converges to the unperturbed sequence within a few spikes.]
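The damping of perturbations can be seen in a toy example. Below is a hypothetical three-neuron network (purely inhibitory and ring-structured: each spike inhibits the next neuron in the ring weakly and the remaining neuron strongly) whose attractor is the cyclic spike sequence 0, 1, 2, 0, 1, 2, ...; the weights and parameters are invented for illustration. Two runs from different initial membrane potentials produce the same spike sequence.

```python
# Toy demonstration: perturbed initial conditions converge to the same
# spike sequence attractor. The network, weights, and parameters are
# illustrative inventions, not from the talk.
TAU, V_REST, V_TH, V_RESET, DT = 20.0, -70.0, -54.0, -70.0, 0.05
N = 3
I_EXT = 20.0                     # identical constant input to all neurons
W = [[0.0] * N for _ in range(N)]
for j in range(N):
    W[j][(j + 1) % N] = -5.0     # weak inhibition of the next neuron in the ring
    W[j][(j + 2) % N] = -30.0    # strong inhibition of the remaining neuron

def spike_sequence(v0, t_max=400.0):
    """Simulate the network with δ-pulse coupling and return the
    neuron IDs in spiking order."""
    v, ids = list(v0), []
    for step in range(int(t_max / DT)):
        for i in range(N):
            v[i] += DT / TAU * (-(v[i] - V_REST) + I_EXT)
        for i in range(N):
            if v[i] >= V_TH:
                ids.append(i)
                v[i] = V_RESET
                for k in range(N):
                    if k != i:
                        v[k] += W[i][k]
    return ids

a = spike_sequence([-60.0, -70.0, -70.0])
b = spike_sequence([-60.5, -69.0, -70.2])   # perturbed initial condition
```

Both runs are trapped by the periodic sequence 0, 1, 2, 0, 1, 2, ...: after whichever neuron fires first, the weak/strong inhibition pattern always hands the next spike to the next neuron in the ring.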
28. Trapping of spike sequences
29. Spike sequence attractors
- All spike sequences will be trapped in periodic patterns (spike sequence attractors).
- Subsequences of any finite length will appear again in an infinite sequence with a finite number of neurons.
[Figure: a spike sequence flowing into a periodic spike sequence attractor.]
30. An example
31. Fast convergence: statistics
- Simulation
  - 2000 runs
  - For each run, the connections and the external inputs are randomly set.
  - The maximum of the external inputs is fixed.
  - The range of the connection strengths is fixed.
- Results
  - Poisson distribution of the number of transient spikes
  - No relationship between the length of the spike sequence attractor and the number of transient spikes
[Figure: histogram of the number of transient spikes; scatter of the number of transient spikes vs. the length of the attractor sequence.]
32. Rich structures: statistics
- Simulation
  - 20 random networks
  - 10N sets of randomly selected inputs with fixed maximum for each network
  - 10 random initial conditions for each network and each set of inputs
- Results
  - Exponential growth of the number of spike sequence attractors with the network size
  - One attractor for one set of external inputs
[Figure: number of attractors vs. number of neurons N, for spike sequence attractors and spatial pattern attractors.]
33. Encoding with spike sequence attractors
- Spike sequence attractors have two favorable characteristics:
  - Fast convergence
  - Large encoding space
34. Summary
- Spike sequences in a large class of recurrent neural networks are stable and converge to spike sequence attractors.
- The convergence is fast, often within a few transient spikes, especially when the global inhibition is strong.
- Spike sequence attractors have favorable characteristics for encoding the external inputs.
35. Thanks!
- Professor Sebastian Seung
- Members of the Seung Lab at M.I.T.