TNI: Computational Neuroscience (PowerPoint transcript)

1
TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TA: Mandana Ahmadi, mandana@gatsby.ucl.ac.uk
Website: http://www.gatsby.ucl.ac.uk/mandana/TNI/TNI.htm (slides will be on website)
Lectures: Tuesday/Friday, 11:00-1:00. Review: Friday, 1:00-3:00.
Homework: assigned Friday, due Friday (1 week later). First homework assigned Oct. 3, due Oct. 10.
2
What is computational neuroscience? Our goal: figure out how the brain works.
3
There are about 10 billion cubes of this size in your brain!
(figure of a cube of neural tissue; scale bar: 10 microns)
4
How do we go about making sense of this mess? David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
5
Example 1: memory.
the problem: recall events, typically based on partial information.
6
Example 1: memory.
the problem: recall events, typically based on partial information; associative or content-addressable memory.
an algorithm: dynamical systems with fixed points.
7
Example 1: memory.
the problem: recall events, typically based on partial information; associative or content-addressable memory.
an algorithm: dynamical systems with fixed points.
neural implementation: Hopfield networks.
x_i = sign(Σ_j J_ij x_j)
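A minimal simulation of this rule (a sketch for orientation, not course code; the network size and patterns are invented): store two random patterns with Hebbian weights J_ij = (1/N) Σ_μ x_i^μ x_j^μ, corrupt one pattern, and let the dynamics settle back to the stored fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                         # number of neurons (arbitrary)
patterns = rng.choice([-1, 1], size=(2, N))     # two random +/-1 memories

# Hebbian weights: J_ij = (1/N) sum_mu x_i^mu x_j^mu, no self-connections
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)

# "partial information": flip 20% of the bits of the first pattern
x = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
x[flip] *= -1

# iterate x_i <- sign(sum_j J_ij x_j) until a fixed point is reached
for _ in range(20):
    x_new = np.sign(J @ x)
    x_new[x_new == 0] = 1            # break ties deterministically
    if np.array_equal(x_new, x):
        break                        # fixed point: recall complete
    x = x_new

print("overlap with stored pattern:", x @ patterns[0] / N)   # ~1.0
```

Starting with 20% of the bits wrong, the dynamics typically restore the stored pattern in a few sweeps: recall from partial information.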
8
Example 2: vision.
the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.
9
Example 2: vision.
the problem (modern version): 2-D image on retina → recover the latent variables.
(figure: a crudely drawn scene with labels: house, sun, tree, "bad artist")
10
Example 2: vision.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
an algorithm: graphical models.
(figure: graphical model with latent variables x1, x2, x3 connected to a low-level representation r1, r2, r3, r4)
11
Example 2: vision.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
an algorithm: graphical models.
(figure: the same graphical model, with an "inference" arrow from the low-level representation r1-r4 up to the latent variables x1-x3)
12
Example 2: vision.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
an algorithm: graphical models.
implementation in networks of neurons: no clue.
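To make "inference in a graphical model" concrete, here is a toy sketch (all numbers invented for illustration, not from the slides): one binary latent variable, say "house present or absent", generates Poisson spike counts r1-r4, and Bayes' rule recovers the posterior over the latent variable from the observed counts.

```python
import numpy as np
from scipy.stats import poisson

# toy generative model: binary latent x ("house absent" = 0, "present" = 1);
# each low-level unit r_k fires with a Poisson rate that depends on x
rates = np.array([[2.0, 1.0, 3.0, 1.5],    # mean counts given x = 0
                  [8.0, 6.0, 2.0, 7.0]])   # mean counts given x = 1
prior = np.array([0.7, 0.3])               # p(x)

r = np.array([6, 5, 2, 9])                 # observed spike counts r1..r4

# inference: p(x | r) ∝ p(x) * prod_k Poisson(r_k; rate_k(x))
log_post = np.log(prior) + poisson.logpmf(r, rates).sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("p(house present | r) =", post[1])
```

Real scenes involve many interacting latent variables, which is what makes inference hard; this two-state example only fixes the idea.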
13
Comment 1:
the problem
the algorithm
neural implementation
14
Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder (often ignored!!!)
15
Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder
A common approach: experimental observation → model. Usually very underconstrained!!!!
16
Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder
Example i: CPGs (central pattern generators)
(figure: two oscillating firing-rate traces)
Too easy!!!
17
Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder
Example ii: single cell modeling
C dV/dt = -g_L(V - V_L) - g_Na m^3 h (V - V_Na) - g_K n^4 (V - V_K) - ...
dn/dt = ...
lots and lots of parameters: which ones should you use?
18
Comment 1:
the problem: easier
the algorithm: harder
neural implementation: harder
Example iii: network modeling
lots and lots of parameters: thousands
19
Comment 2:
the problem: easier
the algorithm: harder
neural implementation: harder
You need to know a lot of math!!!!!
20
Comment 3:
the problem: easier
the algorithm: harder
neural implementation: harder
This is a good goal, but it's hard to do in practice. Our actual bread and butter:
1. Explaining observations (mathematically)
2. Using sophisticated analysis to design simple experiments that test hypotheses.
21
A classic example: Hodgkin and Huxley.
22
A classic example: Hodgkin and Huxley.
C dV/dt = -g_L(V - V_L) - g_Na m^3 h (V - V_Na) - ...
dm/dt = ...
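These equations are easy to integrate numerically. Below is a minimal forward-Euler sketch (mine, not from the course), using the standard 1952 squid-axon parameters with voltage measured relative to rest; a constant injected current makes the model spike repetitively.

```python
import numpy as np

# Hodgkin-Huxley (1952) squid-axon parameters, V relative to rest (mV)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # uF/cm^2 and mS/cm^2
V_Na, V_K, V_L = 115.0, -12.0, 10.6           # reversal potentials (mV)

# standard rate functions for the gating variables m, h, n (1/ms)
def a_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * np.exp(-V / 18)
def a_h(V): return 0.07 * np.exp(-V / 20)
def b_h(V): return 1.0 / (np.exp((30 - V) / 10) + 1)
def a_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * np.exp(-V / 80)

dt, T, I_ext = 0.01, 50.0, 10.0               # ms, ms, uA/cm^2
V, m, h, n = 0.0, 0.05, 0.6, 0.32             # approximate resting state

for _ in range(int(T / dt)):
    I_ion = (g_Na * m**3 * h * (V - V_Na)     # sodium current
             + g_K * n**4 * (V - V_K)         # potassium current
             + g_L * (V - V_L))               # leak current
    V += dt * (I_ext - I_ion) / C             # forward-Euler voltage update
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)

print("V at t = 50 ms (mV, relative to rest):", round(V, 2))
```

Even this four-equation point neuron carries a dozen numbers, which is exactly the "which parameters should you use?" complaint above.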
23
Comment 4:
the problem: easier
the algorithm: harder
neural implementation: harder
(the algorithm and the implementation are linked!!!)
some algorithms are easy to implement on a computer but hard in a brain, and vice-versa. we should be looking for the vice-versa ones. it can be hard to tell which is which.
24
Basic facts about the brain
25
Your brain
26
Your cortex unfolded
neocortex (cognition): 6 layers; a sheet roughly 30 cm across and 0.5 cm thick
subcortical structures (emotions, reward, homeostasis, much much more)
27
Your cortex unfolded
1 cubic millimeter, 3×10^-5 oz
28
1 mm^3 of cortex:
  50,000 neurons
  10,000 connections/neuron (⇒ 500 million connections)
  4 km of axons
29
1 mm^3 of cortex:
  50,000 neurons
  10,000 connections/neuron (⇒ 500 million connections)
  4 km of axons
1 mm^2 of a CPU:
  1 million transistors
  2 connections/transistor (⇒ 2 million connections)
  0.002 km of wire
30
1 mm^3 of cortex:
  50,000 neurons
  10,000 connections/neuron (⇒ 500 million connections)
  4 km of axons
whole brain (2 kg):
  10^11 neurons
  10^15 connections
  8 million km of axons
1 mm^2 of a CPU:
  1 million transistors
  2 connections/transistor (⇒ 2 million connections)
  0.002 km of wire
whole CPU:
  10^9 transistors
  2×10^9 connections
  2 km of wire
31
(repeat of the previous slide)
32
(figure: a neuron, labeled dendrites (input), soma (spike generation), axon (output); and a voltage trace with ~40 mV, ~1 ms spikes rising from about -50 mV, 100 ms shown)
33
(figure only, no text)
34
(figure: a synapse, with current flowing into the postsynaptic neuron)
35
(repeat of the previous slide)
36
(figure: the voltage trace again: ~40 mV spikes from a -50 mV baseline, 100 ms shown)
37
(figure: neuron j synapses onto neuron i; when neuron j emits a spike, an EPSP appears in V on neuron i, lasting about 10 ms)
38
(figure: same, but the spike now produces an IPSP in V on neuron i, again about 10 ms)
39
(figure: same; the PSP amplitude is labeled w_ij)
40
(figure: same; the amplitude w_ij changes with learning)
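One standard way to model these PSPs (a sketch; the slides have not committed to a synapse model) is a difference-of-exponentials kernel whose peak is scaled by the weight w_ij, with the sign of w_ij selecting EPSP versus IPSP. The time constants are invented but give PSPs on the ~10 ms scale shown.

```python
import numpy as np

def psp(t, w_ij, tau_rise=1.0, tau_decay=5.0):
    """PSP (mV) at times t (ms) after a presynaptic spike at t = 0.

    w_ij > 0 gives an EPSP, w_ij < 0 an IPSP; the peak equals w_ij.
    """
    t = np.asarray(t, dtype=float)
    kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    kernel[t < 0] = 0.0                     # nothing before the spike
    return w_ij * kernel / kernel.max()     # normalize the peak to w_ij

t = np.arange(0.0, 20.0, 0.1)               # 20 ms window
epsp = psp(t, w_ij=+0.5)                    # 0.5 mV excitatory PSP
ipsp = psp(t, w_ij=-0.3)                    # 0.3 mV inhibitory PSP
print("EPSP peak:", epsp.max(), "mV; IPSP trough:", ipsp.min(), "mV")
```

Learning then amounts to changing w_ij, exactly as the last figure indicates.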
41
(repeat of the synapse figure)
42
A bigger picture view of the brain
43
(figure: information flow through the brain:
x (sensory input) → r (peripheral spikes) → [sensory processing] → r̂ (direct code for latent variables) → [cognition, memory, action selection: the "brain"] → r̂' (direct code for motor actions) → [motor processing] → r' (peripheral spikes) → x' (motor actions))
44
(slides 44-49: animation frames highlighting the peripheral response r)
50
(repeat of the information-flow diagram)
51
(repeat of the information-flow diagram)
52
In some sense, action selection is the most important problem: if we don't choose the right actions, we don't reproduce, and all the neural coding and computation in the world isn't going to help us.
53
Do I call him and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?
54
Do I call her and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?
55
(repeat of the information-flow diagram)
56
  • Problems:
  • How does the brain extract latent variables?
  • How does it manipulate latent variables?
  • How does it learn to do both?
  • Ask at two levels:
  • What are the algorithms?
  • How are they implemented in neural hardware?

57
What do we know about the brain? (highly biased)
58
a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.
Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram (the w_ij) at the microscopic level.
59
b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).
Dendrites. Lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii). How much I would bet: 20 p.
60
c. The neural code. My guess: once you get away from the periphery, it's mainly firing rate; an inhomogeneous Poisson process with a refractory period is a good model of spike trains. How much I would bet: £100.
The role of correlations: still unknown. My guess: don't have one.
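For concreteness, a minimal version of that spike-train model (my sketch, with an invented rate function): an inhomogeneous Poisson process generated by thinning, with an absolute refractory period imposed by discarding candidate spikes that fall too close to the previous spike.

```python
import numpy as np

rng = np.random.default_rng(1)

def rate(t):
    """Time-varying firing rate in Hz (invented for illustration)."""
    return 20.0 + 15.0 * np.sin(2 * np.pi * t / 0.5)   # oscillates 5-35 Hz

T, r_max, t_ref = 5.0, 35.0, 0.002   # duration (s), rate bound (Hz), 2 ms refractory

# thinning (Lewis-Shedler): propose spikes at the bounding rate r_max,
# keep each with probability rate(t)/r_max, then enforce refractoriness
t, last, spikes = 0.0, -np.inf, []
while True:
    t += rng.exponential(1.0 / r_max)         # next candidate spike time
    if t > T:
        break
    if t - last < t_ref:                      # inside the refractory period
        continue
    if rng.random() < rate(t) / r_max:        # accept with prob rate/r_max
        spikes.append(t)
        last = t

print(f"{len(spikes)} spikes in {T} s (mean rate {len(spikes) / T:.1f} Hz)")
```

At these rates the 2 ms dead time barely changes the statistics; it mainly prevents the unphysical doublets a pure Poisson process would produce.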
61
d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:
  • van Vreeswijk and Sompolinsky (Science, 1996)
  • van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:
  i) understanding networks that have interesting computational properties.
  ii) computing the correlational structure in those networks.

62
e. Learning. We know a lot of facts (LTP, LTD, STDP).
  • it's not clear which, if any, are relevant.
  • the relationship between learning rules and computation is essentially unknown.
Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information. These are promising, but the link to the brain has not been fully established.

63
(repeat of the previous slide)

64
What is unsupervised learning? Learning structure from data without any help from anybody.
Example: most visual scenes are very unlikely to occur. 1000 × 1000 pixels ⇒ million-dimensional space. The space of possible pictures is much smaller, and forms a very complicated manifold.
(figure: the manifold of possible visual scenes, drawn in a plane with axes pixel 1 and pixel 2)
65
(repeat of the previous slide)
66
(repeat of the previous slide)
67
What is unsupervised learning? Learning from spikes:
(figure: a scatter of activity with axes neuron 1 and neuron 2)
68
What is unsupervised learning? Learning from spikes:
(figure: the same scatter, now with two clusters labeled "dog" and "cat")
69
A word about learning (remember these numbers!!!): You have about 10^15 synapses. If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them. 30 years ≈ 10^9 seconds. To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second. Learning in the brain is almost completely unsupervised!!!
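The back-of-the-envelope numbers are easy to check:

```python
synapses = 1e15                      # about 10^15 synapses, 1 bit each
seconds = 30 * 365 * 24 * 3600       # 30 years in seconds: ~9.5e8, i.e. ~10^9
bits_needed = synapses / 10          # setting 1/10 of the synapses
print(f"{seconds:.2e} s, {bits_needed / seconds:.0f} bits/s")   # ~1e5 bits/s
```

No teacher provides anything like 100,000 labeled bits per second, hence the conclusion that learning must be largely unsupervised.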
70
f. Where we know algorithms, we know the neural implementation (sort of): vestibular system, sound localization, echolocation, addition. This is not a coincidence!!!! Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
71
What we know: my score (1-10).
  1. Anatomy. 5
  2. Single neurons. 6
  3. The neural code. 6
  4. Recurrent networks of spiking neurons. 3
  5. Learning. 2
The hard problems:
  • How does the brain extract latent variables? 1.001
  • How does it manipulate latent variables? 1.002
  • How does it learn to do both? 1.001

72
  • Outline
  • Basics: single neurons/axons/dendrites/synapses. Latham
  • Language of neurons: neural coding. Sahani
  • Learning at network and behavioral level. Dayan
  • What we know about networks (very little). Latham

73
  • Outline for this part of the course (biophysics)
  • What makes a neuron spike.
  • How current propagates in dendrites.
  • How current propagates in axons.
  • How synapses work.
  • Lots and lots of math!!!