Transcript and Presenter's Notes

Title: CSE P573 Applications of Artificial Intelligence: Neural Networks

1
CSE P573 Applications of Artificial Intelligence: Neural Networks
  • Henry Kautz
  • Autumn 2004

4
Perceptron (sigmoid unit)
The unit computes o = σ(w_0 + Σ_i w_i x_i): the weighted sum of the inputs, plus a constant (bias) term w_0, passed through the soft threshold σ(z) = 1 / (1 + e^(-z)).
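
A minimal sketch of this computation in Python (not from the original slides; the function names are illustrative):

    import math

    def sigmoid(z):
        """Soft threshold: maps any real z into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-z))

    def sigmoid_unit(weights, bias, inputs):
        """Perceptron with sigmoid activation: a weighted sum of the
        inputs, plus a constant (bias) term, then the soft threshold."""
        z = bias + sum(w * x for w, x in zip(weights, inputs))
        return sigmoid(z)

    # Example: a two-input unit whose weighted sum cancels to zero
    print(sigmoid_unit([0.5, -0.3], 0.1, [1.0, 2.0]))  # 0.5
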
6
Training a Neuron
  • Idea: adjust the weights to reduce the sum of squared
    errors over the training set
  • Error: the difference between the actual and the intended
    output
  • Algorithm: gradient descent
  • Calculate the derivative (slope) of the error function
  • Take a small step in the downhill direction
  • The step size is the learning rate (a code sketch
    follows below)
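
A minimal sketch of this procedure in Python (illustrative, not from
the slides; OR is just an example task):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train_unit(examples, n_inputs, rate=0.5, epochs=1000):
        """Gradient descent on the sum of squared errors for a single
        sigmoid unit. `examples` is a list of (inputs, target) pairs;
        w[0] is the constant (bias) term, whose input is always 1."""
        w = [0.0] * (n_inputs + 1)
        for _ in range(epochs):
            for x, t in examples:
                o = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
                # Downhill step: the error gradient for each weight is
                # -(t - o) * o * (1 - o) * x_i (derived on the next slides)
                delta = rate * (t - o) * o * (1 - o)
                w[0] += delta
                for i, xi in enumerate(x):
                    w[i + 1] += delta * xi
        return w

    # Example: learn the OR function
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights = train_unit(data, 2)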

7
Gradient of the Error Function
8
Gradient of the Error Function
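
The equations on these two slides did not survive in the transcript.
A standard reconstruction for a single sigmoid unit with squared error
(the symbols are the usual ones, not taken from the slides): over
training examples d, the error is

    E(\mathbf{w}) = \frac{1}{2} \sum_d (t_d - o_d)^2,
    \qquad o_d = \sigma\Big(\sum_i w_i\, x_{i,d}\Big),
    \qquad \sigma(z) = \frac{1}{1 + e^{-z}}

and, using \sigma'(z) = \sigma(z)\,(1 - \sigma(z)), its gradient is

    \frac{\partial E}{\partial w_i}
        = -\sum_d (t_d - o_d)\, o_d\,(1 - o_d)\, x_{i,d}
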
9
Single Unit Training Rule
  • In short: adjust the weights on the inputs that were on,
    in proportion to the error and to the slope of the output
    (written out as a formula below)
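
Written as a formula (a standard statement of the rule; the transcript
itself shows none), with learning rate \eta:

    \Delta w_i = \eta\, (t - o)\, o\,(1 - o)\, x_i

Inputs that were off (x_i = 0) leave their weights unchanged.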

10
Beyond Perceptrons
  • A single unit can learn any linearly separable function
  • A single layer of units can learn any set of linear
    inequalities
  • Adding layers of hidden units between the input and the
    output allows the network to approximate any continuous
    function!
  • Hidden units are trained by propagating errors back
    through the network; a sketch follows below
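
A minimal backpropagation sketch in Python (illustrative, not the
course code): one hidden layer learning XOR, a function that no
single-layer network can represent.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train_xor(rate=0.5, epochs=20000, n_hidden=2):
        """Each weight list keeps its bias weight at index 0."""
        random.seed(0)
        W_h = [[random.uniform(-1, 1) for _ in range(3)]
               for _ in range(n_hidden)]
        W_o = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
        data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
        for _ in range(epochs):
            for x, t in data:
                # Forward pass: hidden activations, then the output
                h = [sigmoid(w[0] + w[1] * x[0] + w[2] * x[1]) for w in W_h]
                o = sigmoid(W_o[0] + sum(wi * hi for wi, hi in zip(W_o[1:], h)))
                # Backward pass: output error, then each hidden unit's
                # share of it, propagated back through the output weights
                d_o = (t - o) * o * (1 - o)
                d_h = [d_o * W_o[j + 1] * h[j] * (1 - h[j])
                       for j in range(n_hidden)]
                # Gradient descent step on every weight
                W_o[0] += rate * d_o
                for j in range(n_hidden):
                    W_o[j + 1] += rate * d_o * h[j]
                    W_h[j][0] += rate * d_h[j]
                    W_h[j][1] += rate * d_h[j] * x[0]
                    W_h[j][2] += rate * d_h[j] * x[1]
        return W_h, W_o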

14
Character Recognition Demo
15
Beyond Backprop
  • Backpropagation is the most common algorithm for
    supervised learning with feed-forward neural networks
  • Many other learning rules have been studied, for these
    networks and for other settings

16
Hebbian Learning
  • An alternative to backprop, used for unsupervised learning
  • Increase the weight between two connected neurons whenever
    both fire simultaneously
  • Neurologically plausible (Hebb, 1949); a sketch follows below
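
A minimal sketch of the simplest Hebbian update in Python
(illustrative; practical variants add normalization or decay):

    def hebbian_update(w, pre, post, rate=0.1):
        """Hebb's rule: each weight grows in proportion to the
        product of the two neurons' activities, so it increases
        exactly when both fire together."""
        return [wi + rate * pre_i * post for wi, pre_i in zip(w, pre)]

    # Inputs 1 and 3 are active while the output neuron fires
    w = hebbian_update([0.0, 0.0, 0.0], pre=[1, 0, 1], post=1)
    # w is now [0.1, 0.0, 0.1]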

17
Self-organizing maps (SOMs)
  • Unsupervised learning for clustering inputs
  • Winner-take-all network: one cell per cluster
  • Learning rule: update the weights of the winning neuron
    (and its neighbors) to move them closer to the input;
    a sketch follows below
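
A minimal winner-take-all sketch in Python (illustrative; a full SOM
also pulls the winner's map neighbors toward the input):

    import math

    def som_step(cells, x, rate=0.2):
        """Find the cell whose weight vector is closest to input x,
        then move that winner a fraction `rate` of the way toward x."""
        winner = min(range(len(cells)), key=lambda i: math.dist(cells[i], x))
        cells[winner] = [wi + rate * (xi - wi)
                         for wi, xi in zip(cells[winner], x)]
        return winner

    # Two cluster cells in a 2-D input space
    cells = [[0.0, 0.0], [1.0, 1.0]]
    som_step(cells, [0.9, 0.8])  # cell 1 wins and moves toward the input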

18
Recurrent Neural Networks
  • Include time-delayed feedback loops
  • Can handle tasks over temporal data, such as sequence
    prediction; a sketch follows below
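
A minimal sketch of a single recurrent unit in Python (illustrative;
the weights here are fixed rather than learned):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def run_recurrent_unit(inputs, w_in=1.0, w_back=0.5, bias=0.0):
        """The unit's output is fed back through a one-step time delay,
        so each output depends on the whole sequence seen so far."""
        h = 0.0  # the delayed feedback starts at zero
        outputs = []
        for x in inputs:
            h = sigmoid(bias + w_in * x + w_back * h)
            outputs.append(h)
        return outputs

    print(run_recurrent_unit([1, 0, 0, 1]))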