Title: CS 182 / Ling 109 / CogSci 110, Sections 101-102
Slide 1: CS 182 / Ling 109 / CogSci 110, Sections 101-102
- Klinton Bicknell (kbicknell at berkeley.edu)
- Feb 9, 2005
Slide 2: Announcements
- a3 is out, due 2/14 (Mon) and 2/21, by 11:59pm
- Please, please, please start early
- Liam (grader) is holding office hours in one of the 2nd-floor labs in Soda, Monday and Tuesday, 9pm-12am
- a1 is graded (?); check your grades on glookup
- The quiz has been graded; you'll get it back today
  - mean 15.8, standard deviation 2.9
- The a2 solution should be posted soon; please don't pass it on to your friends next year
Slide 3: Where we stand
- Last week
  - Connectionist representation
- This week
  - Backprop
  - Traditional AI
- Coming up
  - fMRI
  - color
  - cognitive linguistics
Slide 4: The Big (and complicated) Picture
[Diagram: a concept map of the course, linking Spatial Relation, Motor Control, Psycholinguistics Experiments, Metaphor, Grammar, the Chang, Bailey, Narayanan, and Regier models, abstraction, Neural Net Learning, SHRUTI, Triangle Nodes, the Visual System, and Neural Development, with markers for the Midterm, Quiz, and Finals]
Slide 5: Quiz
- What is a localist representation? What is a distributed representation? Why are they both bad?
- What is coarse-fine encoding? Where is it used in our brain?
- What can Back-Propagation do that Hebb's Rule can't?
- Derive the Back-Propagation Algorithm (I'm kidding)
Slide 6: Quiz
- What is a localist representation? What is a distributed representation? Why are they both bad?
- What is coarse-fine encoding? Where is it used in our brain?
- What can Back-Propagation do that Hebb's Rule can't?
- Derive the Back-Propagation Algorithm (I'm kidding)
Slide 7: Distributed vs. Localist Representation
- What are the drawbacks of each representation?
Slide 8: Distributed vs. Localist Representation
- Distributed: what happens if you want to represent a group (more than one person at once)?
  - How many persons can you represent with n bits? 2^n
- Localist: what happens if one neuron dies?
  - How many persons can you represent with n bits? n
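The 2^n vs. n capacity contrast above can be sketched in a few lines of Python (function names are mine, for illustration):

```python
# Sketch: representational capacity of localist vs. distributed codes
# over n binary units, as contrasted on the slide.

def localist_capacity(n: int) -> int:
    # Localist: one dedicated unit per person, so n units = n persons.
    return n

def distributed_capacity(n: int) -> int:
    # Distributed: each distinct activity pattern is a person, so 2^n patterns.
    return 2 ** n

for n in (4, 8, 16):
    print(n, localist_capacity(n), distributed_capacity(n))
```

The same contrast drives the trade-offs on the slide: distributed codes are exponentially more compact but make groups and damage behave differently than localist codes do.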
Slide 9: Quiz
- What is a localist representation? What is a distributed representation? Why are they both bad?
- What is coarse-fine encoding? Where is it used in our brain?
- What can Back-Propagation do that Hebb's Rule can't?
- Derive the Back-Propagation Algorithm (I'm kidding)
Slide 10: Visual System
- 1000 x 1000 visual map
- For each location, encode:
  - orientation
  - direction of motion
  - speed
  - size
  - color
  - depth
- This blows up combinatorially!
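To see the blow-up numerically: if each feature had a dedicated unit per *combination* of values, the unit count per location is the product of the value counts. The bin counts below are assumptions for illustration; the slide gives only the feature list and the 1000 x 1000 map size.

```python
# Why conjunctive coding (one unit per feature combination) explodes:
# units per location multiply across feature dimensions.
feature_values = {           # assumed value counts per feature, for illustration
    "orientation": 8,
    "direction": 8,
    "speed": 4,
    "size": 4,
    "color": 8,
    "depth": 4,
}

units_per_location = 1
for count in feature_values.values():
    units_per_location *= count

locations = 1000 * 1000      # the slide's 1000 x 1000 visual map
total_units = units_per_location * locations
print(units_per_location, total_units)
```

Even these modest bin counts give tens of thousands of units per location and tens of billions overall, which is the motivation for the coarse coding on the next slide.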
Slide 11: Coarse Coding
- The info you can encode with one fine-resolution unit ≈ the info you can encode with a few coarse-resolution units
- So as long as we need fewer coarse units in total, we're good
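A minimal sketch of the coarse-coding idea: a few broad, overlapping units can jointly pinpoint a stimulus value more finely than any one of them could alone. The tuning centers, widths, and the decoding rule here are illustrative assumptions, not from the slides.

```python
import math

# Five coarse units with broad Gaussian tuning over the interval [0, 10].
centers = [0.0, 2.5, 5.0, 7.5, 10.0]   # assumed tuning-curve centers
width = 2.0                            # assumed tuning width

def responses(x: float) -> list[float]:
    # Each unit responds according to how close x is to its preferred value.
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def decode(r: list[float]) -> float:
    # Population-vector-style decode: response-weighted average of centers.
    return sum(ri * c for ri, c in zip(r, centers)) / sum(r)

stimulus = 3.7
estimate = decode(responses(stimulus))
print(round(estimate, 2))   # recovers the stimulus to well within one unit's spacing
```

Five coarse units here resolve a value between their 2.5-apart centers, which is the trade the slide describes: fewer, broader units, same effective resolution.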
Slide 12: Coarse-Fine Coding
- But we can run into ghost images
[Figure: coarse units plotted along two feature axes (e.g. Feature 2: direction of motion), showing how two simultaneous stimuli can produce spurious "ghost" intersections]
Slide 13: Quiz
- What is a localist representation? What is a distributed representation? Why are they both bad?
- What is coarse-fine encoding? Where is it used in our brain?
- What can Back-Propagation do that Hebb's Rule can't?
- Derive the Back-Propagation Algorithm (I'm kidding)
Slide 14: Back-Propagation Algorithm
- Sigmoid activation: y_i = 1 / (1 + e^(-net_i))
- We define the error term for a single node to be t_i - y_i
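The sigmoid and the per-node error term can be written out directly (the sigmoid form 1/(1 + e^(-net)) matches the worked example later in the deck; the function names are mine):

```python
import math

def sigmoid(net: float) -> float:
    # The slide's sigmoid activation function.
    return 1.0 / (1.0 + math.exp(-net))

def node_error(t: float, y: float) -> float:
    # Error term for a single node: target minus actual output.
    return t - y

y = sigmoid(0.5)
print(round(y, 4))   # the net input 0.5 reappears in the slide 19 example
```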
Slide 15: Gradient Descent
- Global minimum: this is your goal
- The plot should really be 4-D (3 weights), but you get the idea
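The "walk downhill toward the global minimum" picture can be sketched on a 1-D error surface; the surface E(w) = (w - 3)^2 and the learning rate are assumptions for illustration, not from the slides.

```python
# Minimal gradient-descent sketch on E(w) = (w - 3)^2,
# whose global minimum sits at w = 3.

def dE_dw(w: float) -> float:
    # Derivative of the illustrative error surface.
    return 2.0 * (w - 3.0)

w = 0.0        # arbitrary starting weight
lr = 0.1       # assumed learning rate
for _ in range(100):
    w -= lr * dE_dw(w)   # step opposite the gradient: downhill

print(round(w, 3))   # converges near the minimum at w = 3
```

With three weights, the same update runs per-weight over the 4-D surface the slide alludes to; nothing changes except the number of partial derivatives.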
Slide 16: Quiz
- What is a localist representation? What is a distributed representation? Why are they both bad?
- What is coarse-fine encoding? Where is it used in our brain?
- What can Back-Propagation do that Hebb's Rule can't?
- Derive the Back-Propagation Algorithm (I'm kidding)
- Handout
Slide 17: The output layer
- The derivative of the sigmoid is just y(1 - y)
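Putting the sigmoid derivative together with the error term gives the standard output-layer delta, delta_i = (t_i - y_i) * y_i * (1 - y_i). This is the textbook backprop formula; whether it matches the course handout's exact notation is an assumption.

```python
import math

def sigmoid(net: float) -> float:
    return 1.0 / (1.0 + math.exp(-net))

def output_delta(t: float, y: float) -> float:
    # (target - output) times the sigmoid derivative y(1 - y).
    return (t - y) * y * (1.0 - y)

# With net input 0.5 and target 0 (the slide 19 example's numbers),
# the delta is negative: the weight update pushes the output downward.
y = sigmoid(0.5)
print(output_delta(0.0, y))
```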
Slide 18: The hidden layer
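The hidden-layer rule itself lived on the handout rather than the slide. As a hedged sketch, the standard backprop version: a hidden unit's delta is the weighted sum of the deltas it feeds into, times its own sigmoid derivative.

```python
# Standard backprop hidden-layer delta (a sketch; the handout's own
# derivation and notation may differ):
#   delta_j = (sum_k delta_k * w_jk) * y_j * (1 - y_j)

def hidden_delta(y_hidden: float,
                 downstream_deltas: list[float],
                 downstream_weights: list[float]) -> float:
    # Error credited to this hidden unit from the layer above.
    upstream_error = sum(d * w for d, w in
                         zip(downstream_deltas, downstream_weights))
    # Scale by this unit's own sigmoid derivative y(1 - y).
    return upstream_error * y_hidden * (1.0 - y_hidden)

# Illustrative numbers (assumed, not from the slides):
print(hidden_delta(0.6, [0.1, -0.05], [0.5, 0.3]))
```

This is what backprop adds over Hebb's Rule: a way to assign error to units that never see the target directly.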
Slide 19: Let's just do an example
[Figure: a small feed-forward network; the values 0, 0.8, 0.6, and 0.5 label its inputs and weights, and the net input to the output node works out to 0.5]
- E = Error = 1/2 Σ_i (t_i - y_i)^2
- y_0 = 1/(1 + e^(-0.5)) = 0.6224
- t_0 = 0
- E = 1/2 (t_0 - y_0)^2 = 1/2 (0 - 0.6224)^2 = 0.1937
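The slide's arithmetic checks out and can be reproduced directly (the slide truncates the sigmoid output to 0.6224):

```python
import math

# Reproducing the worked example: net input 0.5, target t0 = 0.
y0 = 1.0 / (1.0 + math.exp(-0.5))   # sigmoid of the net input, ~0.6225
t0 = 0.0
E = 0.5 * (t0 - y0) ** 2            # E = 1/2 (t0 - y0)^2
print(round(y0, 4), round(E, 4))
```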
Slide 20: Anonymous Feedback (Lectures)
- Feel free to comment on each instructor separately
- How is the pace?
- Do you find the material interesting? Too dense? Too slow?
- What will be most helpful to you in getting the most out of lectures?
- Any particularly confusing topic?
Slide 21: Anonymous Feedback (Sections)
- Have sections been useful?
- Any feedback on our styles of presentation?
- How will sections be most helpful to you?
- Any other comments?