Learning Process - PowerPoint PPT Presentation

Provided by: asimk

Transcript:
1
Learning Process
  • CS/CMPE 333 Neural Networks

2
Learning
  • Learning?
  • Learning is a process by which the free
    parameters of a neural network are adapted
    through a continuing process of stimulation by
    the environment in which the network is embedded
  • The type of learning is determined by the manner
    in which the parameter changes take place
  • Types of learning
  • Error-correction, Hebbian, competitive, Boltzmann
  • Supervised, reinforced, unsupervised

3
Learning Process
  • Adapting the synaptic weight
  • wkj(n+1) = wkj(n) + Δwkj(n)

4
Learning Algorithms
  • Learning algorithm: a prescribed set of
    well-defined rules for the solution of a learning
    problem
  • In the context of synaptic weight updating, the
    learning algorithm prescribes rules for Δw
  • Learning rules
  • Error-correction
  • Boltzmann
  • Hebbian
  • Competitive
  • Learning paradigms
  • Supervised
  • Reinforced
  • Self-organizing (unsupervised)

5
Error-Correction Learning (1)
  • ek(n) = dk(n) - yk(n)
  • The goal of error-correction learning is to
    minimize a cost function based on the error
    function
  • Least-mean-square error as cost function
  • J = E[0.5 Σk ek²(n)]
  • E = expectation operator
  • Minimizing J with respect to the network
    parameters leads to the method of gradient descent

6
Error-Correction Learning (2)
  • How do we find the expectation of the process?
  • We avoid its computation, and use an
    instantaneous value of the sum of squared errors
    as the error function (as an approximation)
  • ξ(n) = 0.5 Σk ek²(n)
  • Error-correction learning rule (or delta rule)
  • Δwkj(n) = η ek(n) xj(n)
  • η = learning rate
  • A plot of the error function against the weights
    is called an error surface. The minimization
    process tries to find the minimum point on the
    surface through an iterative procedure.
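The delta rule can be sketched in a few lines of Python (an illustrative toy, not from the slides; the input, target, and learning rate are assumptions):

```python
import numpy as np

# A minimal sketch of error-correction (delta-rule) learning for a single
# linear neuron: dw_kj(n) = eta * e_k(n) * x_j(n). All values illustrative.
eta = 0.1                    # learning rate (eta)
x = np.array([1.0, 0.5])     # input signals x_j(n)
w = np.array([0.0, 0.0])     # synaptic weights w_kj(n)
d = 1.0                      # desired response d_k(n)

for n in range(100):
    y = w @ x                # actual response y_k(n)
    e = d - y                # error e_k(n) = d_k(n) - y_k(n)
    w = w + eta * e * x      # delta rule: descend the error surface

# After repeated updates the error shrinks geometrically, so the output
# approaches the desired response.
```

Each iteration takes one step down the error surface, which is why the error decreases by a constant factor per update in this linear case.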

7
Hebbian Learning (1)
  • Hebb, a neuropsychologist, proposed a model of
    neural activation in 1949. Its idealization is
    used as a learning rule in neural network
    learning.
  • Hebb's postulate (1949)
  • "If the axon of cell A is near enough to excite
    cell B and repeatedly or persistently takes part
    in firing it, some growth process or metabolic
    change occurs in one or both cells such that A's
    efficiency as one of the cells firing B is
    increased."

8
Hebbian Learning (2)
  • Hebbian learning (model of Hebbian synapse)
  • If two neurons on either side of a synapse are
    activated simultaneously, then the strength of
    that synapse is selectively increased
  • If two neurons on either side of a synapse are
    activated asynchronously, then that synapse is
    selectively weakened or eliminated
  • Properties of Hebbian synapse
  • Time-dependent mechanism
  • Local mechanism
  • Interactive mechanism
  • Correlational mechanism

9
Mathematical Models of Hebbian Learning (1)
  • General form of Hebbian rule
  • Δwkj(n) = F(yk(n), xj(n))
  • F is a function of the pre-synaptic and
    post-synaptic activities
  • A specific Hebbian rule (activity product rule)
  • Δwkj(n) = η yk(n) xj(n)
  • η = learning rate
  • Is there a problem with the above rule?
  • No bounds on increase (or decrease) of wkj

10
Mathematical Models of Hebbian Learning (2)
  • Generalized activity product rule
  • Δwkj(n) = η yk(n) xj(n) - α yk(n) wkj(n)
  • Or
  • Δwkj(n) = α yk(n) [c xj(n) - wkj(n)]
  • where c = η/α and α is a positive constant
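The bounding effect of the forgetting term can be seen in a short Python sketch (all values are assumptions for illustration):

```python
import numpy as np

# A sketch of the generalized activity product rule,
# dw_kj(n) = eta*y_k(n)*x_j(n) - a*y_k(n)*w_kj(n), showing how the
# forgetting term bounds the weights. All values are illustrative.
eta, a = 0.1, 0.05           # learning rate and forgetting factor
c = eta / a                  # so the rule reads dw = a*y*(c*x - w)
x = np.array([1.0, 0.2])     # pre-synaptic activity x_j(n)
w = np.array([0.1, 0.1])     # small nonzero start so y_k(n) != 0

for n in range(500):
    y = w @ x                            # post-synaptic activity y_k(n)
    w = w + eta * y * x - a * y * w      # Hebbian growth minus forgetting

# w settles at c*x instead of growing without bound, unlike the plain
# rule dw_kj = eta*y_k*x_j.
```

The update is proportional to (c·x - w), so the weights are always pulled toward the bounded fixed point c·x rather than growing indefinitely.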

11
Mathematical Models of Hebbian Learning (3)
12
Mathematical Models of Hebbian Learning (4)
  • Activity covariance rule
  • Δwkj(n) = η cov[yk(n), xj(n)]
  •         = η E[(yk(n) - ȳ)(xj(n) - x̄)]
  • where η = proportionality constant, and x̄ and ȳ
    are the respective means
  • After simplification
  • Δwkj(n) = η (E[yk(n) xj(n)] - x̄ȳ)
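A Python sketch (with illustrative synthetic data) confirms that the covariance form and its simplification give the same weight change:

```python
import numpy as np

# A sketch of the activity covariance rule, estimating
# dw_kj = eta * E[(y_k - ybar)(x_j - xbar)] from samples, and checking
# the simplified form eta*(E[y_k x_j] - xbar*ybar). Data illustrative.
rng = np.random.default_rng(0)
eta = 0.5
xs = rng.normal(size=1000)                        # pre-synaptic samples
ys = 2.0 * xs + rng.normal(scale=0.1, size=1000)  # correlated responses

dw = eta * np.mean((ys - ys.mean()) * (xs - xs.mean()))
dw_simplified = eta * (np.mean(ys * xs) - xs.mean() * ys.mean())
# Both forms give the same weight change, as in the slide's
# simplification; correlated activity produces a positive dw.
```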

13
Competitive Learning (1)
  • The output neurons of a neural network (or a
    group of output neurons) compete among themselves
    to be the one that is active (fired)
  • At any given time, only one neuron in the group
    is active
  • This behavior naturally leads to identifying
    features in input data (feature detection)
  • Neurobiological basis
  • Competitive behavior was observed and studied in
    the 1970s
  • Early self-organizing and topographic map neural
    networks were also proposed in the 1970s (e.g.
    cognitron by Fukushima)

14
Competitive Learning (2)
  • Elements of competitive learning
  • A set of neurons
  • A limit on the strength of each neuron
  • A mechanism that permits the neurons to compete
    for the right to respond to a given input, such
    that only one neuron is active at a time

15
Competitive Learning (3)
16
Competitive Learning (4)
  • Standard competitive learning rule
  • Δwji = η(xi - wji) if neuron j wins the
    competition
  •      = 0 otherwise
  • Each neuron is allotted a fixed amount of
    synaptic weight which is distributed among its
    input nodes
  • Σi wji = 1 for all j
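The rule can be sketched in Python (the network size, input pattern, and learning rate are assumptions):

```python
import numpy as np

# A sketch of the standard competitive learning rule: only the winning
# neuron j updates, dw_ji = eta*(x_i - w_ji). Because the input x below
# sums to 1, the update also preserves each neuron's unit weight budget
# (sum_i w_ji = 1). All values are illustrative.
eta = 0.1
rng = np.random.default_rng(1)
W = rng.random((3, 2))                   # 3 competing neurons, 2 inputs
W /= W.sum(axis=1, keepdims=True)        # sum_i w_ji = 1 for all j

x = np.array([0.9, 0.1])                 # input pattern (sums to 1)
for _ in range(300):
    j = np.argmax(W @ x)                 # winner-take-all competition
    W[j] += eta * (x - W[j])             # only the winner learns

# The winning neuron's weight vector moves onto the input pattern,
# i.e. it becomes a detector for this input feature.
```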

17
Competitive Learning (5)
18
Boltzmann Learning
  • Stochastic learning algorithm based on
    information-theoretic and thermodynamic
    principles
  • The state of the network is captured by an energy
    function, E
  • E = -1/2 Σi Σj wji si sj  (i ≠ j)
  • where si = state of neuron i ∈ {-1, +1} (i.e.
    binary state)
  • Learning process
  • At each step, choose a neuron at random (say j)
    and flip its state sj with the following
    probability
  • P(sj → -sj) = (1 + exp(-ΔEj/T))^-1
  • where ΔEj is the energy change resulting from
    the flip
  • The state evolves until thermal equilibrium is
    achieved
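The flip probability can be sketched in Python. The tiny network, weights, and temperature are assumptions, and the sign convention taken here makes energy-lowering flips more probable:

```python
import numpy as np

# A sketch of the Boltzmann update for a small network of +/-1 neurons.
# W is symmetric with zero diagonal. At each step one neuron is chosen
# at random and flipped with the probability computed below; flips that
# lower the energy E = -1/2 * sum_i sum_j w_ji s_i s_j are favored.

def energy(s, W):
    return -0.5 * s @ W @ s

def flip_probability(s, W, j, T):
    # Energy change if s_j is flipped: delta_E = 2 * s_j * (W[j] @ s)
    delta_E = 2.0 * s[j] * (W[j] @ s)
    return 1.0 / (1.0 + np.exp(delta_E / T))

# Two neurons with a positive coupling: the disagreeing state (+1, -1)
# is very likely to flip at this temperature.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
s = np.array([1.0, -1.0])
p = flip_probability(s, W, 0, T=1.0)     # = 1/(1 + exp(-2)), about 0.88
```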

19
Credit-Assignment Problem
  • How to assign credit and blame for a neural
    network's output to its internal (free)
    parameters?
  • This is basically the credit-assignment problem
  • The learning system (rule) must distribute credit
    or blame in such a way that the network evolves
    to the correct outcomes
  • Temporal credit-assignment problem
  • Determining which actions, among a sequence of
    actions, are responsible for certain outcomes of
    the network
  • Structural credit-assignment problem
  • Determining which internal components' behavior
    should be modified and by how much

20
Supervised Learning (1)
21
Supervised Learning (2)
  • Conceptually, supervised learning involves a
    teacher who has knowledge of the environment and
    guides the training of the network
  • In practice, knowledge of the environment is in
    the form of input-output examples
  • When the network is viewed as an intelligent
    agent, this knowledge is current knowledge
    obtained from sensors
  • How is supervised learning applied?
  • Error-correction learning
  • Examples of supervised learning algorithms
  • LMS algorithm
  • Back-propagation algorithm

22
Reinforcement Learning (1)
  • Reinforcement learning is supervised learning in
    which only limited information about the desired
    outputs is known
  • Complete knowledge of the environment is not
    available; only basic benefit or reward
    information is
  • In other words, a critic rather than a teacher
    guides the learning process
  • Reinforcement learning has roots in experimental
    studies of animal learning
  • Training a dog by positive (good dog, something
    to eat) and negative (bad dog, nothing to eat)
    reinforcement

23
Reinforcement Learning (2)
  • Reinforcement learning is the online learning of
    an input-output mapping through a process of
    trial and error designed to maximize a scalar
    performance index called the reinforcement signal
  • Types of reinforcement learning
  • Non-associative: selecting one action instead of
    associating actions with stimuli. The only input
    received from the environment is reinforcement
    information. Examples include genetic algorithms
    and simulated annealing.
  • Associative: associating actions with stimuli.
    In other words, developing an action-stimuli
    mapping from reinforcement information received
    from the environment. This type is more closely
    related to neural network learning.

24
Supervised vs. Reinforcement Learning
25
Unsupervised Learning (1)
  • There is no teacher or critic in unsupervised
    learning
  • No specific example of the function/model to be
    learned
  • A task-independent measure is used to guide the
    internal representation of knowledge
  • The free parameters of the network are optimized
    with respect to this measure

26
Unsupervised Learning (2)
  • Also known as self-organizing when used in the
    context of neural networks
  • The neural network develops an internal
    representation of the inputs without any specific
    information
  • Once trained, it can identify features in the
    input based on the task-independent (or general)
    criterion

27
Supervised vs. Unsupervised Learning
28
Learning Tasks
  • Approximation
  • Association
  • Auto-association
  • Hetero-association
  • Pattern classification
  • Prediction
  • Control

29
Adaptation and Learning (1)
  • Learning, as we know it in biological systems, is
    a spatiotemporal process
  • Space and time dimensions are equally significant
  • Is supervised error-correcting learning
    spatiotemporal?
  • Yes and no (a trick question)
  • Stationary environment
  • Learning: a one-time procedure in which
    knowledge of the environment is built in
    (memorized) and later recalled for use
  • Non-stationary environment
  • Adaptation: continually update the free
    parameters to reflect the changing environment

30
Adaptation and Learning (2)
31
Adaptation and Learning (3)
  • e(n) = x(n) - x̂(n)
  • where e = error, x = actual input, and x̂ = model
    output
  • Adaptation is needed when e is not equal to zero
  • This means that the knowledge encoded in the
    neural network has become outdated, requiring
    modification to reflect the new environment
  • How to perform adaptation?
  • As an adaptive control system
  • As an adaptive filter (adaptive error-correcting
    supervised learning)
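Adaptive error-correcting behavior can be sketched in Python (the one-weight model and the environment change are assumptions):

```python
# A sketch of adaptation as an adaptive filter: a one-weight model
# x_hat(n) = w * u(n) is corrected whenever e(n) = x(n) - x_hat(n) is
# nonzero, so it tracks a non-stationary environment. The gains, input,
# and learning rate below are illustrative.
eta = 0.2
w = 0.0
for n in range(200):
    gain = 2.0 if n < 100 else -1.0   # the environment changes at n = 100
    u = 1.0                           # input sample u(n)
    x = gain * u                      # actual signal x(n)
    x_hat = w * u                     # model output
    e = x - x_hat                     # error e(n) = x(n) - x_hat(n)
    w += eta * e * u                  # adapt: update the free parameter

# w first converges to 2.0, then re-adapts to -1.0 after the change.
```

Because the update keeps running, a nonzero error after the environment shifts automatically drives the weight to the new value; a one-time learner would be stuck at 2.0.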

32
Statistical Nature of Learning
  • Learning can be viewed as a stochastic process
  • Stochastic process? There is some element of
    randomness (e.g. the neural network encoding is
    not unique for the same environment when that
    environment is temporal)
  • Also, in general, a neural network represents
    just one form of representation; other
    representation forms are also possible
  • Regression model
  • d = g(x) + ε
  • where g(x) = actual model and ε = statistical
    error term
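The regression model can be illustrated with a short Python sketch (the choice of g and the noise scale are assumptions):

```python
import numpy as np

# A sketch of the regressive model d = g(x) + eps: the desired response
# is a deterministic function of the input plus a random error term.
# Here g is taken, for illustration, to be the line g(x) = 3x and eps
# to be Gaussian noise; a least-squares fit then recovers g up to the
# statistical error.
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, size=500)
eps = rng.normal(scale=0.1, size=500)       # statistical error term
d = 3.0 * x + eps                           # d = g(x) + eps

slope = np.sum(x * d) / np.sum(x * x)       # least-squares estimate of g
```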