Single-Layer Perceptrons


1
  • Single-Layer Perceptrons
  • (3.4 ~ 3.6)
  • CS679 Lecture Note
  • by Sung Won Jung
  • Computer Science Department
  • KAIST

2
Linear Least-Squares Filter
  • The single neuron around which it is built is
    linear
  • The cost function consists of the sum of error
    squares, E(w) = (1/2) Σ e²(i)
  • Using d(n) = [d(1), ..., d(n)]^T and
    X(n) = [x(1), ..., x(n)]^T, the error vector is
    e(n) = d(n) - X(n) w(n)
  • Differentiating e(n) with respect to w(n) gives the
    Jacobian J(n) = -X(n)
  • correspondingly, the gradient is -X^T(n) e(n)
  • From the Gauss-Newton method (eq. 3.22),
    w(n+1) = (X^T(n) X(n))^{-1} X^T(n) d(n) = X^+(n) d(n),
    where X^+(n) is the pseudoinverse of X(n)
    (see the sketch below)
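
A minimal NumPy sketch of this least-squares solution via the
pseudoinverse (the synthetic data and variable names are
illustrative assumptions, not from the lecture):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n observations of an m-dimensional input.
n, m = 200, 3
X = rng.standard_normal((n, m))                  # data matrix X(n)
w_true = np.array([0.5, -1.2, 2.0])              # assumed "true" weights
d = X @ w_true + 0.1 * rng.standard_normal(n)    # desired response d(n)

# Least-squares / Gauss-Newton solution: w = X^+ d
w_ls = np.linalg.pinv(X) @ d
print(w_ls)   # close to w_true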

3
Wiener Filter (for Ergodic Env.)
  • Correlation matrix of the input x(n):
    R_x = E[x(n) x^T(n)]
  • Cross-correlation vector between x(n) and the
    desired response d(n): r_xd = E[x(n) d(n)]
  • Wiener solution to the linear optimum filtering
    problem: w_o = R_x^{-1} r_xd
  • For an ergodic process, the linear least-squares
    filter asymptotically approaches the Wiener filter
    as the number of observations grows
    (see the sketch below)
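
A sketch of the Wiener solution computed from sample estimates of
the ensemble statistics; sample averages stand in for the
expectations, which is exactly the ergodicity assumption (data
and names are illustrative):

import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 3
X = rng.standard_normal((n, m))
d = X @ np.array([0.5, -1.2, 2.0]) + 0.1 * rng.standard_normal(n)

R_x  = (X.T @ X) / n      # sample estimate of R_x = E[x x^T]
r_xd = (X.T @ d) / n      # sample estimate of r_xd = E[x d]

w_wiener = np.linalg.solve(R_x, r_xd)   # w_o = R_x^{-1} r_xd
w_ls     = np.linalg.pinv(X) @ d        # least-squares solution

# With sample statistics the two solutions coincide, illustrating
# the asymptotic equivalence stated above.
print(np.allclose(w_wiener, w_ls))      # True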

4
LMS Algorithm (I)
  • Based on the use of instantaneous values for the cost
    function: E(w) = (1/2) e²(n)
  • Differentiating with respect to w,
    ∂E(w)/∂w = e(n) ∂e(n)/∂w
  • The error signal in the LMS algorithm is
    e(n) = d(n) - x^T(n) w(n)
  • hence, ∂e(n)/∂w(n) = -x(n)
  • so, ∂E(w)/∂w(n) = -x(n) e(n)

5
LMS Algorithm (II)
  • Using ĝ(n) = -x(n) e(n) as an estimate for the gradient
    vector,
  • and using this estimate in the steepest
    descent method, the LMS algorithm follows:
    ŵ(n+1) = ŵ(n) + η x(n) e(n)
  • η: learning-rate parameter
  • The inverse of η is a measure of the memory
    of the LMS algorithm
  • When η is small, the adaptive process progresses
    slowly, more of the past data are remembered, and
    the filtering action is more accurate

6
LMS Characteristics
  • The LMS algorithm produces an estimate ŵ(n) of the
    weight vector
  • It sacrifices a distinctive feature of steepest descent:
  • the steepest descent algorithm follows a
    well-defined trajectory
  • the LMS algorithm follows a random
    trajectory
  • As the number of iterations goes to infinity, ŵ(n)
    performs a random walk about the Wiener solution
  • But importantly, the LMS algorithm does not require
    knowledge of the statistics of the environment

7
LMS Summary
Training sample: input signal vector x(n),
  desired response d(n)
User-selected parameter: η
Initialization: set ŵ(0) = 0
Computation: for n = 1, 2, ..., compute
  e(n) = d(n) - ŵ^T(n) x(n)
  ŵ(n+1) = ŵ(n) + η x(n) e(n)
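
The summary above translates directly into code; a minimal sketch
(function name and synthetic data are illustrative assumptions):

import numpy as np

def lms(X, d, eta):
    """One pass of the LMS algorithm over the training samples.

    X   : (n, m) array whose rows are the input vectors x(n)
    d   : (n,) array of desired responses d(n)
    eta : learning-rate parameter
    """
    n, m = X.shape
    w = np.zeros(m)               # initialization: w(0) = 0
    errors = np.empty(n)
    for i in range(n):
        e = d[i] - w @ X[i]       # e(n) = d(n) - w^T(n) x(n)
        w = w + eta * X[i] * e    # w(n+1) = w(n) + eta x(n) e(n)
        errors[i] = e
    return w, errors

# Usage on synthetic data:
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 3))
d = X @ np.array([0.5, -1.2, 2.0]) + 0.1 * rng.standard_normal(2000)
w_hat, _ = lms(X, d, eta=0.01)
print(w_hat)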
8
Convergence Consideration
  • Two distinct quantities, the learning rate η and the
    input vector x(n), determine the transmittance of the
    feedback loop
  • Since the environment supplies x(n), the
    selection of η is important for the LMS
    algorithm to converge
  • Convergence of the mean:
    E[ŵ(n)] → w_o as n → ∞
  • This is not a practical convergence criterion
  • Convergence in the mean square:
    E[e²(n)] → constant as n → ∞
  • Convergence condition for the LMS algorithm in the
    mean square: 0 < η < 2 / tr[R_x]
    (see the sketch below)
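
A small sketch of checking this step-size bound on sample data
(the bound as stated on this slide; the data are an illustrative
assumption):

import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 3))

R_x = (X.T @ X) / len(X)      # sample correlation matrix

# Sufficient range for convergence in the mean square:
# 0 < eta < 2 / tr(R_x)
eta_max = 2.0 / np.trace(R_x)
print(f"choose eta in (0, {eta_max:.4f})")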

9
Virtues and Limitations
Virtues
  • Simple
  • Model independent, so robust
  • Satisfactory in stationary and
    nonstationary environments

Limitations
  • Slow rate of convergence
  • Sensitivity to variations in the
    eigenstructure of the input
  • If the condition number χ(R) = λ_max / λ_min
    is high, the training sample is ill conditioned
    (see the sketch below)
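
A sketch of how an ill-conditioned input shows up in the condition
number; the strongly correlated input channels here are an
illustrative assumption:

import numpy as np

rng = np.random.default_rng(4)
# Two nearly identical input channels: strongly correlated inputs.
base = rng.standard_normal((2000, 1))
X = np.hstack([base, base + 0.01 * rng.standard_normal((2000, 1))])

R_x = (X.T @ X) / len(X)
eigvals = np.linalg.eigvalsh(R_x)      # R_x is symmetric
chi = eigvals.max() / eigvals.min()    # lambda_max / lambda_min
print(f"condition number: {chi:.1f}")  # large: LMS converges slowly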
10
Learning Curves (I)
  • Plot of the mean-square value of the estimation
    error e(n) versus the number of iterations
  • Misadjustment: M = J_ex(∞) / J_min
  • The smaller M is compared to unity, the more
    accurate the adaptive filtering action
  • For example, a misadjustment of 10% means that the
    filter produces a mean-square error that is 10%
    greater than the minimum mean-square error
    produced by the corresponding Wiener filter
  • Increasing the learning rate η to accelerate the
    learning process increases the misadjustment;
    conversely, decreasing η reduces the misadjustment
    but slows convergence. See the sketch below.
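
A sketch estimating the misadjustment from the steady-state part
of a learning curve; taking the noise variance as J_min is an
assumption of this synthetic setup, not a general identity:

import numpy as np

rng = np.random.default_rng(5)
n, m = 20000, 3
X = rng.standard_normal((n, m))
noise_var = 0.01                      # J_min for this synthetic data
w_true = np.array([0.5, -1.2, 2.0])
d = X @ w_true + np.sqrt(noise_var) * rng.standard_normal(n)

# One LMS pass, recording the squared error (the learning curve).
eta, w = 0.05, np.zeros(m)
sq_err = np.empty(n)
for i in range(n):
    e = d[i] - w @ X[i]
    w = w + eta * X[i] * e
    sq_err[i] = e * e

J_inf = sq_err[-5000:].mean()          # steady-state mean-square error
M = (J_inf - noise_var) / noise_var    # M = J_ex(inf) / J_min
print(f"misadjustment ~ {M:.2%}")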

11
Learning Curves (II)
[Figure: learning curves, mean-square error versus number of
iterations; rate of convergence arbitrarily chosen]