Title: TNI: Computational Neuroscience
1 TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TA: Mandana Ahmadi, mandana_at_gatsby.ucl.ac.uk
Website: http://www.gatsby.ucl.ac.uk/mandana/TNI/TNI.htm (slides will be on the website)
Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.
Homework: assigned Friday, due Friday (1 week later). First homework assigned Oct. 3, due Oct. 10.
2 What is computational neuroscience? Our goal: figure out how the brain works.
3 There are about 10 billion cubes of this size in your brain!
[Figure: cube of neural tissue, 10 microns on a side]
4 How do we go about making sense of this mess? David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
5-7 Example 1: memory.
the problem: recall events, typically based on partial information (associative or content-addressable memory).
an algorithm: dynamical systems with fixed points.
neural implementation: Hopfield networks,
$x_i \leftarrow \mathrm{sign}\left(\sum_j J_{ij} x_j\right)$
8 Example 2: vision.
the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.
9 Example 2: vision.
the problem (modern version): 2-D image on retina → recover the latent variables.
[Figure: crude drawing of a scene ("bad artist"): house, sun, tree]
10-12 Example 2: vision.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
an algorithm: graphical models.
[Figure: graphical model: latent variables x1, x2, x3 at the top; a low-level representation r1, r2, r3, r4 at the bottom; inference runs from the low-level representation back to the latent variables]
implementation in networks of neurons: no clue.
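As an illustration of what "an algorithm: graphical models" means, here is a minimal sketch of inference in a toy graphical model. Everything in it (the latent values "house"/"sun"/"tree", the Gaussian likelihood, all numbers) is an illustrative assumption, not from the course:

```python
import numpy as np

# A discrete latent variable x ("what's in the scene") generates a noisy
# low-level representation r; inference means computing p(x | r) by Bayes.

rng = np.random.default_rng(1)
latents = ["house", "sun", "tree"]          # hypothetical latent values
prior = np.array([0.4, 0.3, 0.3])

# Each latent drives mean responses of 4 low-level units, r ~ N(mu_x, sigma^2)
mu = np.array([[3.0, 0.5, 0.5, 2.0],        # house
               [0.5, 3.0, 0.5, 0.5],        # sun
               [0.5, 0.5, 3.0, 1.0]])       # tree
sigma = 1.0

# Generate an observation from the "house" latent
r = mu[0] + sigma * rng.standard_normal(4)

# Posterior p(x | r) proportional to p(r | x) p(x)
log_lik = -np.sum((r - mu) ** 2, axis=1) / (2 * sigma ** 2)
post = np.exp(log_lik - log_lik.max()) * prior
post /= post.sum()

for name, p in zip(latents, post):
    print(f"p({name} | r) = {p:.3f}")
```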
13-15 Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder
The harder levels are often ignored!!!
A common approach: experimental observation → model. Usually very underconstrained!!!!
16 Comment 1, example i: CPGs (central pattern generators).
[Figure: two mutually connected neurons, each characterized by a firing rate]
Too easy!!!
17 Comment 1, example ii: single cell modeling.
$C\,dV/dt = -g_L(V - V_L) - \bar g_{Na} m^3 h (V - V_{Na}) - \bar g_K n^4 (V - V_K) + \ldots$
$dn/dt = \alpha_n(V)(1 - n) - \beta_n(V)\,n$  (and similarly for m and h)
lots and lots of parameters; which ones should you use?
18 Comment 1, example iii: network modeling.
lots and lots of parameters: thousands.
19 Comment 2:
the problem: easier
the algorithm: harder
neural implementation: harder
You need to know a lot of math!!!!!
20 Comment 3:
the problem: easier
the algorithm: harder
neural implementation: harder
This is a good goal, but it's hard to do in practice. Our actual bread and butter:
1. Explaining observations (mathematically).
2. Using sophisticated analysis to design simple experiments that test hypotheses.
21-22 A classic example: Hodgkin and Huxley.
$C\,dV/dt = -g_L(V - V_L) - \bar g_{Na} m^3 h (V - V_{Na}) - \bar g_K n^4 (V - V_K) + I$
$dm/dt = \alpha_m(V)(1 - m) - \beta_m(V)\,m$  (and similarly for h and n)
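The full model is easy to simulate. Below is a minimal sketch using the standard squid-axon parameter values and forward-Euler integration; the step size and drive current are illustrative choices:

```python
import numpy as np

# Hodgkin-Huxley equations, original squid-axon parameters (rest ~ -65 mV).
# V in mV, t in ms, conductances in mS/cm^2, C in uF/cm^2.

C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
V_Na, V_K, V_L = 50.0, -77.0, -54.4

def alpha_beta(V):
    """Standard HH opening/closing rates for the gates m, h, n."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return (a_m, b_m), (a_h, b_h), (a_n, b_n)

dt, T, I_ext = 0.01, 50.0, 10.0          # ms, ms, uA/cm^2 (illustrative)
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
spikes = 0
for step in range(int(T / dt)):
    (a_m, b_m), (a_h, b_h), (a_n, b_n) = alpha_beta(V)
    I_ion = g_Na * m**3 * h * (V - V_Na) + g_K * n**4 * (V - V_K) + g_L * (V - V_L)
    V_new = V + dt * (I_ext - I_ion) / C
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    if V < 0.0 <= V_new:                 # count upward zero crossings as spikes
        spikes += 1
    V = V_new

print(f"{spikes} spikes in {T:.0f} ms at I = {I_ext} uA/cm^2")
```

At this drive the model fires repetitively; setting I_ext below threshold (~6 uA/cm^2) silences it.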
23 Comment 4:
the problem: easier
the algorithm: harder
neural implementation: harder
(the algorithm and the implementation are linked!!!)
some algorithms are easy to implement on a computer but hard in a brain, and vice-versa. We should be looking for the vice-versa ones. It can be hard to tell which is which.
24 Basic facts about the brain
25 Your brain
26 Your cortex unfolded
[Figure: the unfolded cortical sheet (neocortex, 6 layers, ~30 cm across, ~0.5 cm thick), which handles cognition; beneath it, subcortical structures (emotions, reward, homeostasis, much much more)]
27 Your cortex unfolded
[Figure: same sheet with 1 cubic millimeter highlighted, about 3×10⁻⁵ oz]
28-31 Some numbers, cortex vs. CPU:
1 mm³ of cortex: 50,000 neurons; 10,000 connections/neuron (> 500 million connections); 4 km of axons.
1 mm² of a CPU: 1 million transistors; 2 connections/transistor (> 2 million connections); 0.002 km of wire.
whole brain (2 kg): 10^11 neurons; 10^15 connections; 8 million km of axons.
whole CPU: 10^9 transistors; 2×10^9 connections; 2 km of wire.
32 [Figure: anatomy of a neuron: dendrites (input), soma (spike generation), axon (output). Inset: voltage vs. time, resting near -50 mV with ~40 mV spikes about 1 ms wide, on a 100 ms axis]
34-35 [Figure: a synapse, showing the direction of current flow]
36 [Figure: voltage vs. time, resting near -50 mV with ~40 mV spikes, on a 100 ms axis]
37-40 When neuron j emits a spike, the voltage V on neuron i shows a postsynaptic potential lasting ~10 ms: an EPSP (excitatory) or an IPSP (inhibitory). Its amplitude is w_ij, and w_ij changes with learning.
[Figure: V on neuron i vs. t after a spike in neuron j, showing an EPSP/IPSP of amplitude w_ij on a ~10 ms timescale]
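Here is a minimal sketch of such a postsynaptic potential. The difference-of-exponentials kernel and its time constants are illustrative choices, not from the slides:

```python
import numpy as np

# When neuron j spikes, the voltage of neuron i deflects by a kernel
# scaled by the synaptic weight w_ij: positive w_ij -> EPSP, negative -> IPSP.

dt = 0.1                                  # ms
t = np.arange(0.0, 50.0, dt)
tau_rise, tau_decay = 1.0, 10.0           # ms; ~10 ms PSP, as on the slide

def psp(t_since_spike, w_ij):
    """PSP of peak amplitude w_ij (mV), difference of exponentials."""
    k = np.exp(-t_since_spike / tau_decay) - np.exp(-t_since_spike / tau_rise)
    k /= k.max()                          # normalize so peak deflection = w_ij
    return w_ij * k

V_rest = -50.0                            # mV, resting potential from the slide
spike_time = 5.0                          # ms, when neuron j fires
shifted = np.clip(t - spike_time, 0.0, None)
V_epsp = V_rest + psp(shifted, +0.5)      # small excitatory weight
V_ipsp = V_rest + psp(shifted, -0.5)      # small inhibitory weight

print(f"peak EPSP deflection: {V_epsp.max() - V_rest:.2f} mV")
print(f"peak IPSP deflection: {V_ipsp.min() - V_rest:.2f} mV")
```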
42 A bigger picture view of the brain
43 [Figure: information flow through the brain: x (stimulus) → r (peripheral spikes) → sensory processing → r (direct code for latent variables) → cognition / memory / action selection → r' (direct code for motor actions) → motor processing → r' (peripheral spikes) → x' (motor actions)]
52 In some sense, action selection is the most important problem: if we don't choose the right actions, we don't reproduce, and all the neural coding and computation in the world isn't going to help us.
53-54 Do I call him (or her) and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?
56 Problems:
- How does the brain extract latent variables?
- How does it manipulate latent variables?
- How does it learn to do both?
Ask at two levels:
- What are the algorithms?
- How are they implemented in neural hardware?
57 What do we know about the brain? (A highly biased view.)
58 a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color. Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level (the w_ij).
59 b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley). Dendrites: lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii). How much I would bet: 20p.
60 c. The neural code. My guess: once you get away from the periphery, it's mainly firing rate; an inhomogeneous Poisson process with a refractory period is a good model of spike trains. How much I would bet: 100. The role of correlations: still unknown. My guess: don't have one.
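For concreteness, here is a minimal sketch of that spike-train model, simulated bin by bin: in each small bin the spike probability is r(t)·dt, and spikes are forbidden within the refractory period. The rate profile and refractory period are illustrative choices:

```python
import numpy as np

# Inhomogeneous Poisson process with an absolute refractory period.

rng = np.random.default_rng(2)
T, dt = 2.0, 0.0005                                 # s
t = np.arange(0.0, T, dt)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 2.0 * t)    # Hz, time-varying rate
t_ref = 0.003                                       # s, refractory period

spikes = []
last_spike = -np.inf
for ti, r in zip(t, rate):
    if ti - last_spike < t_ref:
        continue                          # refractory: no spike possible
    if rng.random() < r * dt:             # Bernoulli approx. to Poisson in dt
        spikes.append(ti)
        last_spike = ti

print(f"{len(spikes)} spikes in {T} s (mean rate {len(spikes) / T:.1f} Hz)")
```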
61 d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:
- van Vreeswijk and Sompolinsky (Science, 1996)
- van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:
i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.
62 e. Learning. We know a lot of facts (LTP, LTD, STDP):
- it's not clear which, if any, are relevant.
- the relationship between learning rules and computation is essentially unknown.
- Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information (see the sketch after this list). These are promising, but the link to the brain has not been fully established.
64-66 What is unsupervised learning? Learning structure from data without any help from anybody. Example: most visual scenes are very unlikely to occur. 1000 × 1000 pixels ⇒ million-dimensional space. The space of possible pictures is much smaller, and forms a very complicated manifold.
[Figure: the manifold of possible visual scenes, a thin curved region in pixel space (axes: pixel 1, pixel 2)]
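Here is a minimal numerical illustration of that point; the toy 1-parameter family of "scenes" in a 400-pixel space is entirely an illustrative assumption:

```python
import numpy as np

# Images from a structured world occupy a tiny, low-dimensional manifold
# inside pixel space: a 1-parameter family of images (a Gaussian bump at
# varying positions) traces out a 1-D curve in 400 dimensions, and random
# pixel images are nowhere near it.

rng = np.random.default_rng(5)
n_pix = 400
grid = np.arange(n_pix)

def bump_image(center, width=20.0):
    """A 'scene': one Gaussian bump of light, parameterized by position."""
    img = np.exp(-0.5 * ((grid - center) / width) ** 2)
    return img / np.linalg.norm(img)

# The manifold: images for 200 bump positions (1 latent dimension)
manifold = np.stack([bump_image(c) for c in np.linspace(50, 350, 200)])

random_img = rng.standard_normal(n_pix)
random_img /= np.linalg.norm(random_img)
scene_img = bump_image(123.4)             # a scene not in the stored set

def dist_to_manifold(img):
    return np.min(np.linalg.norm(manifold - img, axis=1))

print(f"distance of a real 'scene' to the manifold:  {dist_to_manifold(scene_img):.3f}")
print(f"distance of a random image to the manifold: {dist_to_manifold(random_img):.3f}")
```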
67-68 What is unsupervised learning? Learning from spikes.
[Figure: spike counts of neuron 1 vs. neuron 2 across trials, forming two clusters, labeled "cat" and "dog"]
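A minimal sketch of what "learning from spikes" could mean here: simulated spike counts from two neurons fall into two clusters, and an unsupervised algorithm (k-means, my choice of illustration) discovers them without labels. Rates and counts are illustrative:

```python
import numpy as np

# Two neurons, two hidden stimuli; cluster the unlabeled spike counts.

rng = np.random.default_rng(6)

cat = rng.poisson(lam=[20.0, 5.0], size=(100, 2))    # neuron 1 likes cats
dog = rng.poisson(lam=[5.0, 20.0], size=(100, 2))    # neuron 2 likes dogs
counts = np.vstack([cat, dog]).astype(float)
rng.shuffle(counts)                                  # no labels anywhere

# k-means with k = 2, written out by hand
centers = counts[rng.choice(len(counts), 2, replace=False)]
for _ in range(20):
    d = np.linalg.norm(counts[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([counts[labels == k].mean(axis=0) for k in range(2)])

print("discovered cluster centers (spike counts of neurons 1, 2):")
print(np.round(centers, 1))
```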
69 A word about learning (remember these numbers!!!). You have about 10^15 synapses. If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them. 30 years ≈ 10^9 seconds. To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second. Learning in the brain is almost completely unsupervised!!!
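The arithmetic, spelled out:

```python
# A quick check of the numbers on the slide above.
synapses = 1e15                       # total synapses
bits_needed = 0.1 * synapses          # setting 1/10 of them at 1 bit each
seconds = 30 * 365.25 * 24 * 3600     # ~30 years

print(f"30 years ≈ {seconds:.1e} s")                          # ≈ 9.5e8 ≈ 1e9
print(f"required rate ≈ {bits_needed / seconds:.0f} bits/s")  # ≈ 1e5
```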
70 f. Where we know the algorithm, we know the neural implementation (sort of): the vestibular system, sound localization, echolocation, addition. This is not a coincidence!!!! Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
71 What we know, my score (1-10):
- Anatomy: 5
- Single neurons: 6
- The neural code: 6
- Recurrent networks of spiking neurons: 3
- Learning: 2
- The hard problems:
  - How does the brain extract latent variables? 1.001
  - How does it manipulate latent variables? 1.002
  - How does it learn to do both? 1.001
72 Outline:
- Basics: single neurons/axons/dendrites/synapses. Latham
- Language of neurons: neural coding. Sahani
- Learning at network and behavioral level. Dayan
- What we know about networks (very little). Latham
73 Outline for this part of the course (biophysics):
- What makes a neuron spike.
- How current propagates in dendrites.
- How current propagates in axons.
- How synapses work.
- Lots and lots of math!!!