Introduction to Training and Learning in Neural Networks

Transcript and Presenter's Notes

1
Introduction to Training and Learning in Neural
Networks
  • CS/PY 231 Lab Presentation 4
  • February 7, 2005
  • Mount Union College

2
Automatic Training in Networks
  • We've seen that manually adjusting weights to
    obtain desired outputs is difficult
  • What do biological systems do?
  • if output is unacceptable (wrong), some
    adjustment is made in the system
  • how do we know it is wrong? Feedback
  • pain, bad taste, discordant sound, observing that
    desired results were not obtained, etc.

3
Learning via Feedback
  • Weights (connection strengths) are modified so
    that next time the same input is encountered,
    better results may be obtained
  • How much adjustment should be made?
  • different approaches yield various results
  • goal: an automatic (simple) rule that is applied
    during the weight-adjustment phase

4
Rosenblatt's Training Algorithm
  • Developed for Perceptrons (1958)
  • illustrative of other training rules; simple
  • Consider a single perceptron, with 0/1 output
  • We will work with a training set
  • a set of inputs for which we know the correct
    output
  • weights will be adjusted based on correctness of
    obtained output

5
Rosenblatt's Training Algorithm
  • for each input pattern in the training set, do
    the following (a code sketch follows this list)
  • obtain output from perceptron
  • if output is correct (strengthen)
  • if output is 1, set w ← w + x
  • if output is 0, set w ← w - x
  • but if output is incorrect (weaken)
  • if output is 1, set w ← w - x
  • if output is 0, set w ← w + x
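
A minimal Python sketch of this rule as stated above; the function name
rosenblatt_update and the list representation of the weights are my own,
not from the presentation:

  def rosenblatt_update(w, x, output, target):
      # One adjustment for a single training case: strengthen when the
      # output is correct, weaken when it is wrong; whether x is added
      # or subtracted depends on the output actually produced.
      correct = (output == target)
      if (correct and output == 1) or (not correct and output == 0):
          return [wi + xi for wi, xi in zip(w, x)]   # w <- w + x
      else:
          return [wi - xi for wi, xi in zip(w, x)]   # w <- w - x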

6
Example of Rosenblatt's Training Algorithm
  • Training data:
  • x1 x2 out
  • 0 1 1
  • 1 1 1
  • 1 0 0
  • Pick random values as starting weights and
    threshold θ (sketched in code below)
  • w1 = 0.5, w2 = -0.4, θ = 0.0
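
A sketch of this setup in Python, assuming the perceptron outputs 1 when
the weighted sum of its inputs exceeds the threshold θ and 0 otherwise
(whether the comparison is strict does not matter for this example):

  def perceptron_output(w, x, theta=0.0):
      # Weighted sum of the inputs compared against the threshold.
      s = sum(wi * xi for wi, xi in zip(w, x))
      return 1 if s > theta else 0

  # Training set as (inputs, desired output) pairs, plus starting values
  training_set = [((0, 1), 1), ((1, 1), 1), ((1, 0), 0)]
  w = [0.5, -0.4]
  theta = 0.0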

7
Example of Rosenblatt's Training Algorithm
  • Step 1: run the first training case through the
    perceptron
  • x1 x2 out
  • 0 1 1
  • (0, 1) should give answer 1 (from table), but
    perceptron produces 0
  • do we strengthen or weaken?
  • do we add or subtract?
  • based on answer produced by perceptron!

8
Example of Rosenblatt's Training Algorithm
  • obtained answer is wrong, and is 0, so we must ADD
    the input vector to the weight vector
  • new weight vector: (0.5, 0.6)
  • w1 = 0.5 + 0 = 0.5
  • w2 = -0.4 + 1 = 0.6
  • Adjust weights in the perceptron now, and try the
    next entry in the training data set (this step is
    sketched in code below)
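
Step 1, continuing the sketch from the earlier slides (hypothetical
helper names perceptron_output and rosenblatt_update):

  x, target = (0, 1), 1
  out = perceptron_output(w, x, theta)      # 0*0.5 + 1*(-0.4) = -0.4, so out = 0 (wrong)
  w = rosenblatt_update(w, x, out, target)  # wrong and 0, so add: w becomes [0.5, 0.6]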

9
Example of Rosenblatt's Training Algorithm
  • Step 2: run the second training case through the
    perceptron
  • x1 x2 out
  • 1 1 1
  • (1, 1) should give answer 1 (from table), and it
    does!
  • do we strengthen or weaken?
  • do we + or -?

10
Example of Rosenblatt's Training Algorithm
  • obtained answer is correct, and is 1, so we must
    ADD the input vector to the weight vector
  • new weight vector: (1.5, 1.6)
  • w1 = 0.5 + 1 = 1.5
  • w2 = 0.6 + 1 = 1.6
  • Adjust weights, then on to training case 3
    (sketched in code below)
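
Step 2 in the same sketch:

  x, target = (1, 1), 1
  out = perceptron_output(w, x, theta)      # 1*0.5 + 1*0.6 = 1.1, so out = 1 (correct)
  w = rosenblatt_update(w, x, out, target)  # correct and 1, so add: w becomes [1.5, 1.6]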

11
Example of Rosenblatt's Training Algorithm
  • Step 3: run the last training case through the
    perceptron
  • x1 x2 out
  • 1 0 0
  • (1, 0) should give answer 0 (from table); does
    it?
  • do we strengthen or weaken?
  • do we + or -?

12
Example of Rosenblatt's Training Algorithm
  • determine what to do, and calculate a new weight
    vector
  • should have SUBTRACTED
  • new weight vector: (0.5, 1.6)
  • w1 = 1.5 - 1 = 0.5
  • w2 = 1.6 - 0 = 1.6
  • Adjust weights, then try all three training cases
    again (this step is sketched in code below)
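
Step 3 in the same sketch:

  x, target = (1, 0), 0
  out = perceptron_output(w, x, theta)      # 1*1.5 + 0*1.6 = 1.5, so out = 1 (wrong)
  w = rosenblatt_update(w, x, out, target)  # wrong and 1, so subtract: w becomes [0.5, 1.6]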

13
Ending Training
  • This training process continues until
  • perceptron gives correct answers for all training
    cases, or
  • a maximum number of training passes has been
    carried out
  • some training sets may be impossible for a
    perceptron to calculate (e.g., the XOR function)
  • In actual practice, we train until the error is
    less than an acceptable level (a full training
    loop is sketched below)
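
Putting the sketched pieces together, a minimal training loop under the
same assumptions (max_passes is an illustrative cap, not a value from the
slides):

  def train(w, training_set, theta=0.0, max_passes=100):
      # Repeat passes over the training set until every case is answered
      # correctly or the maximum number of passes has been carried out.
      for _ in range(max_passes):
          all_correct = True
          for x, target in training_set:
              out = perceptron_output(w, x, theta)
              if out != target:
                  all_correct = False
              w = rosenblatt_update(w, x, out, target)
          if all_correct:
              break
      return w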

14
Introduction to Training and Learning in Neural
Networks
  • CS/PY 231 Lab Presentation 4
  • February 7, 2005
  • Mount Union College