1
CS 182 / Ling 109 / CogSci 110, Sections 101 - 102
  • Klinton Bicknell (kbicknell at berkeley.edu)
  • Feb 9, 2005

2
Announcements
  • a3 is out, due 2/14 (Mon) and 2/21 by 11:59pm
  • Please, please, please start early
  • Liam (grader) is holding office hours in one of the
    2nd floor labs in Soda, Monday and Tuesday 9pm-12am
  • a1 is graded (?) - check your grades on glookup
  • The quiz has been graded; you'll get it back today
    • mean 15.8, standard deviation 2.9
  • The a2 solution should be posted soon; please don't
    pass it on to your friends next year

3
Where we stand
  • Last Week
    • Connectionist Representation
  • This Week
    • Backprop
    • traditional AI
  • Coming up
    • fMRI
    • color
    • cognitive linguistics

4
The Big (and complicated) Picture
[Course-overview diagram with linked nodes: Spatial
Relation, Motor Control, Psycholinguistics
Experiments, Metaphor, Grammar, Chang Model, Bailey
Model, Narayanan Model, abstraction, Neural Net
Learning, Regier Model, SHRUTI, Triangle Nodes,
Visual System, Neural Development, Midterm, Quiz,
Finals]
5
Quiz
  • What is a localist representation? What is a
    distributed representation? Why are they both
    bad?
  • What is coarse-fine encoding? Where is it used in
    our brain?
  • What can Back-Propagation do that Hebb's Rule
    can't?
  • Derive the Back-Propagation Algorithm (I'm
    kidding)

6
Quiz
  • What is a localist representation? What is a
    distributed representation? Why are they both
    bad?
  • What is coarse-fine encoding? Where is it used in
    our brain?
  • What can Back-Propagation do that Hebb's Rule
    can't?
  • Derive the Back-Propagation Algorithm (I'm
    kidding)

7
Distributed vs. Localist Rep'n
What are the drawbacks of each representation?
8
Distributed vs. Localist Rep'n
  • What happens if you want to represent a group?
  • How many persons can you represent with n bits?
    2^n (distributed)
  • What happens if one neuron dies?
  • How many persons can you represent with n bits?
    n (localist)
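A minimal sketch (mine, not the slides') of the capacity contrast, with made-up names and n = 4 units:

```python
# Sketch: capacity of localist (one-hot) vs. distributed (binary) codes.
# Illustrative only; the encodings are invented for this example.
from itertools import product

n = 4  # number of binary units ("neurons")

# Localist: one unit per person -> n distinct people.
localist_codes = [tuple(1 if i == j else 0 for i in range(n)) for j in range(n)]

# Distributed: every on/off pattern is a person -> 2**n distinct people.
distributed_codes = list(product([0, 1], repeat=n))

print(len(localist_codes))     # 4  (n people)
print(len(distributed_codes))  # 16 (2**n people)

# Downside of the distributed code: one dead unit collapses many patterns.
# Force unit 0 to 0 and count how many patterns remain distinguishable.
survivors = {(0,) + code[1:] for code in distributed_codes}
print(len(survivors))          # 8 -- half the 16 patterns became ambiguous
```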

9
Quiz
  • What is a localist representation? What is a
    distributed representation? Why are they both
    bad?
  • What is coarse-fine encoding? Where is it used in
    our brain?
  • What can Back-Propagation do that Hebb's Rule
    can't?
  • Derive the Back-Propagation Algorithm (I'm
    kidding)

10
Visual System
  • 1000 x 1000 visual map
  • For each location, encode
  • orientation
  • direction of motion
  • speed
  • size
  • color
  • depth
  • Blows up combinatorially!
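A back-of-the-envelope count makes the blow-up concrete. The number of distinguishable values per feature below is invented for illustration; only the 1000 x 1000 map is from the slide:

```python
# Count units for a fully conjunctive (localist) code of the visual map:
# one unit per combination of location and feature values.
locations = 1000 * 1000  # 1000 x 1000 visual map

values_per_feature = {   # values per feature are made-up placeholders
    "orientation": 10,
    "direction_of_motion": 8,
    "speed": 5,
    "size": 5,
    "color": 10,
    "depth": 10,
}

conjunctive_units = locations
for v in values_per_feature.values():
    conjunctive_units *= v
print(f"conjunctive units: {conjunctive_units:.2e}")  # ~2.00e+11

# Coding each feature separately per location is a sum, not a product:
factored_units = locations * sum(values_per_feature.values())
print(f"factored units:    {factored_units:.2e}")     # ~4.80e+07
```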

11
Coarse Coding
  • info you can encode with one fine-resolution unit
    ≈ info you can encode with a few coarse-resolution
    units
  • Now, as long as we need fewer coarse units total,
    we're good
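A sketch of the idea with made-up tuning curves: a few broadly tuned units locate a stimulus by their pattern of overlap, where a fine code would need one narrow unit per distinguishable value:

```python
# Coarse coding sketch: 3 broadly tuned units stand in for many narrow ones.
# Tuning-curve shapes and widths are invented for illustration.
import math

def gaussian_tuning(center, width):
    """Return a unit whose response falls off with distance from `center`."""
    return lambda x: math.exp(-((x - center) / width) ** 2)

# Three coarse units with overlapping tuning curves over the range [0, 10].
coarse_units = [gaussian_tuning(c, width=3.0) for c in (0.0, 5.0, 10.0)]

stimulus = 3.7
pattern = [round(u(stimulus), 3) for u in coarse_units]
print(pattern)  # [0.218, 0.829, 0.012] -- the activation ratios locate 3.7

# A fine (localist) code resolving 0.1 steps over [0, 10] would need
# ~100 narrow units instead of these 3 coarse ones.
```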

12
Coarse-Fine Coding
  • but we can run into ghost images: when two stimuli
    are encoded at once, the combined activity also
    matches feature combinations neither stimulus has

[Figure: two coarse-coded feature dimensions; axis
label recovered: Feature 2, e.g. direction of motion]
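A toy demonstration of ghosts, with invented receptive fields: encoding two objects at once activates the union of their coarse units, and spurious combinations become consistent with that activity:

```python
# Ghost images: coarse-coding two (feature1, feature2) objects at once.
# The coarse "bands" (receptive fields) are invented for illustration.
f1_bands = [range(0, 5), range(5, 10)]  # coarse units over feature 1
f2_bands = [range(0, 5), range(5, 10)]  # coarse units over feature 2

def active_units(obj):
    """Set of coarse units that fire for one (f1, f2) object."""
    f1, f2 = obj
    return {("f1", i) for i, b in enumerate(f1_bands) if f1 in b} | \
           {("f2", i) for i, b in enumerate(f2_bands) if f2 in b}

objects = [(2, 7), (8, 3)]  # two real objects, encoded simultaneously
activity = set().union(*(active_units(o) for o in objects))

# Decode: every object whose code is contained in the joint activity.
consistent = [(a, b) for a in (2, 8) for b in (7, 3)
              if active_units((a, b)) <= activity]
print(consistent)  # [(2, 7), (2, 3), (8, 7), (8, 3)] -- (2,3), (8,7) are ghosts
```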
13
Quiz
  • What is a localist representation? What is a
    distributed representation? Why are they both
    bad?
  • What is coarse-fine encoding? Where is it used in
    our brain?
  • What can Back-Propagation do that Hebb's Rule
    can't?
  • Derive the Back-Propagation Algorithm (I'm
    kidding)

14
Back-Propagation Algorithm
Sigmoid: y = 1 / (1 + e^-x)
  • We define the error term for a single node to be
    t_i - y_i
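In runnable form (variable names are mine), the slide's two definitions:

```python
# The sigmoid activation and the per-node error term from the slide.
import math

def sigmoid(x):
    """y = 1 / (1 + e^-x)"""
    return 1.0 / (1.0 + math.exp(-x))

def node_error(t, y):
    """Error term for a single node: t - y (target minus output)."""
    return t - y

y = sigmoid(0.5)
print(round(y, 4))                 # 0.6225
print(round(node_error(0, y), 4))  # -0.6225
```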

15
Gradient Descent
[Plot: error surface over two weights; annotations:
the global minimum is your goal; it should really be
4-D (3 weights plus error), but you get the idea]
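The rule itself is one line: step each weight against the gradient, w ← w − η ∂E/∂w. A toy 1-D sketch with an invented error surface and learning rate:

```python
# Toy gradient descent on a 1-D "error surface" E(w) = (w - 3)^2.
# The surface and learning rate are illustrative, not from the slides.
def dE_dw(w):
    return 2 * (w - 3)   # derivative of (w - 3)^2

w, eta = 0.0, 0.1        # initial weight, learning rate
for _ in range(50):
    w -= eta * dE_dw(w)  # step opposite the gradient, downhill on E
print(round(w, 4))       # 3.0 -- converged to the global minimum
```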
16
Quiz
  • What is a localist representation? What is a
    distributed representation? Why are they both
    bad?
  • What is coarse-fine encoding? Where is it used in
    our brain?
  • What can Back-Propagation do that Hebb's Rule
    can't?
  • Derive the Back-Propagation Algorithm (I'm
    kidding)
  • Handout

17
The output layer
The derivative of the sigmoid is just y(1 - y)
18
The hidden layer
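A minimal sketch of the two delta rules the handout derives for sigmoid units with squared error; the function and variable names are mine, and the handout's notation may differ:

```python
# Standard backprop deltas for sigmoid units with E = 1/2 * sum (t - y)^2.
# Output node:  delta_o = (t - y) * y * (1 - y)
# Hidden node:  delta_h = y_h * (1 - y_h) * sum_k (w_hk * delta_k)
# Weight step:  w_ij += eta * delta_j * y_i

def output_delta(t, y):
    """Delta for an output node: error term times sigmoid derivative."""
    return (t - y) * y * (1 - y)

def hidden_delta(y_h, downstream):
    """downstream: list of (weight to a downstream node, that node's delta)."""
    return y_h * (1 - y_h) * sum(w * d for w, d in downstream)

d_out = output_delta(t=0.0, y=0.6224)
print(round(d_out, 4))                              # -0.1463
print(round(hidden_delta(0.8, [(0.5, d_out)]), 4))  # -0.0117
```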
19
Let's just do an example
[Network diagram: a small sigmoid network; labeled
values 0, 0.8, 0.6, 0, and net input 0.5]
y0 = 1/(1 + e^-0.5) = 0.6224
E = Error = ½ Σ_i (t_i - y_i)²
E = ½ (t0 - y0)²
E = ½ (0 - 0.6224)² = 0.1937
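The same arithmetic in runnable form (the full network wiring isn't recoverable from the slide, so this reproduces just the forward pass and the error):

```python
# Reproduce the slide's numbers: net input 0.5, sigmoid output, squared error.
import math

net = 0.5
y0 = 1.0 / (1.0 + math.exp(-net))  # 1/(1 + e^-0.5)
t0 = 0.0                           # target
E = 0.5 * (t0 - y0) ** 2           # E = 1/2 * sum_i (t_i - y_i)^2
print(round(y0, 4))  # 0.6225 (the slide truncates to 0.6224)
print(round(E, 4))   # 0.1937
```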
20
Anonymous Feedback - Lectures
  • feel free to comment on each instructor
    separately
  • How is the pace?
  • Do you find the material interesting? too dense?
    too slow?
  • What will be most helpful to you in getting the
    most out of lectures?
  • Any particularly confusing topic?

21
Anonymous Feedback - Sections
  • Have sections been useful?
  • Any feedback on our styles of presentation?
  • How will sections be most helpful to you?
  • Any other comments?