1
Self-Organizing Feature Maps
  • Winter 2005

2
Self-Organizing Feature Maps
  • SOFMs (or Kohonen networks) can partition a
    space into regions
  • Usually composed of a 2D or 3D lattice of m
    neurons; the neurons are not explicitly
    connected to each other
  • The neighbourhood of each neuron is defined by
    an n-dimensional sphere of radius r
  • Each neuron is connected to an n-dimensional
    input vector denoted by x
  • The neuron-to-input connection has a set of
    weights denoted by the vector w
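  • A minimal sketch of this structure in NumPy (the
    lattice size, input dimension, radius, and
    learning constant below are illustrative
    assumptions, not values from these slides):

    import numpy as np

    m_rows, m_cols = 10, 10  # 2D lattice of m = 100 neurons (assumed size)
    n = 3                    # dimension of the input vectors x (assumed)
    r = 2.0                  # initial neighbourhood radius (assumed)
    eta = 0.1                # learning constant (assumed)

    # One n-dimensional weight vector w per neuron, initialised at random.
    weights = np.random.rand(m_rows, m_cols, n)

    # Lattice coordinates of each neuron, used to measure neighbourhood
    # distance on the map itself.
    coords = np.stack(np.meshgrid(np.arange(m_rows), np.arange(m_cols),
                                  indexing="ij"), axis=-1)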

3
SOFMs in 1D
  • 1D array of neurons

4
SOFMs in 2D
  • Regular 2D lattice
  • The mapping between the n-dim input space V and
    the 2D SOFM lattice is shown

5
How It Works: Training It
  • So now the question becomes: how does a neuron
    know that it is responsible for a given area of
    space?  And how do its neighbors know that they
    are associated with that neuron?  The way it
    works is quite simple.  Each neuron has a weight
    vector associated with it.  When you present
    some input (say a point), we calculate the
    Euclidean distance between the input vector and
    each neuron's weight vector.  The neuron whose
    weight vector is closest to the input vector
    will "light" up, or generate the most
    excitation.
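  • A sketch of this winner selection, reusing the
    NumPy setup above (an illustration, not code from
    the slides):

    def best_matching_unit(weights, x):
        """Return the lattice indices of the most excited neuron,
        i.e. the one whose weight vector is closest to input x."""
        dists = np.linalg.norm(weights - x, axis=-1)  # distance per neuron
        return np.unravel_index(np.argmin(dists), dists.shape)

    x = np.random.rand(3)                 # an example input point
    bmu = best_matching_unit(weights, x)  # the neuron that "lights up"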

6
How It Works: Training It
  • From this we get degrees of excitation, as other
    neurons may also partially light up in response.
    The difference is the intensity of this response:
    a neuron whose weight vector is very close to the
    input will light up with more intensity than one
    with a weaker association.  So, the tough part
    about this is the training.  In some sense, you
    can liken a SOFM to fuzzy logic, where the
    response you get may vary from neuron to neuron
    but will be "correct" within some interval (the
    neighborhood).

7
How It Works: Training It
  • As a result, we cannot apply the standard
    training algorithms defined earlier, such as
    gradient descent.  In most cases we don't have a
    hard numerical output that we want from our
    neural network (there may be multiple answers to
    our problem), so we cannot compute a hard error
    function.  Kohonen therefore came up with an
    algorithm for training SOFMs based on this idea
    of neighborhoods and excitation values.

8
Algorithm Notation
  • r - neighbourhood radius
  • η - learning constant
  • used to control the learning rate
  • F - neighbourhood function
  • defines the neighbourhood relation between
    neurons, in a way similar to fuzzy membership
    functions
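  • The slides do not fix a concrete form for F; a
    common choice (an assumption here, not the deck's
    definition) is a Gaussian of lattice distance,
    which falls off smoothly like a fuzzy membership
    function:

    def F(bmu, coords, r):
        """Gaussian neighbourhood: 1.0 at the winning neuron, decaying
        with lattice distance, near zero beyond radius r."""
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * r ** 2))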

9
Kohonen Learning Algorithm
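  • The core update rule (a reconstruction of the
    standard Kohonen rule under the notation above,
    not necessarily this slide's exact equation; j*
    denotes the winning neuron):

    w_j \leftarrow w_j + \eta \, F(j, j^{*}) \, (x - w_j)
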
10
Algorithm Synopsis Step 1
  • In the first step we select an input vector with
    which we wish to train the SOFM.  We also define
    a set of weight vectors for our neurons at
    random.  We start with an initial radius, which
    defines our neighborhood distance (i.e. a radius
    of 1 means that, for a given neuron, its
    neighbors are all neurons one step away from it
    -- in a one-dimensional lattice, the immediate
    right and left neighbors).

11
Algorithm Synopsis Step 1 cont..
  • We also define a learning constant, which we have
    seen previously, and finally we define a learning
    function.  This learning function gives the
    strength of the coupling between units during the
    training process.  In other words, it is a
    function that tells us how closely associated a
    given neuron is with its neighbors.  The closer
    the association, the more fine-grained our SOFM
    is (i.e. each neuron represents a smaller region
    of the input space); the looser the association,
    the more coarse-grained our SOFM is (i.e. each
    neuron represents a larger region).

12
Algorithm Synopsis Step 2
  • We give the SOFM an input and look at which
    neuron gets excited the most (recall that this is
    a measure of how close the input vector is to a
    given neuron's weight vector).  Once we have this
    neuron, we work on it by applying some training.

13
Algorithm Synopsis Step 3
  • In this step we do the training.  This is
    accomplished by adjusting the weight values of
    the excited neuron and its neighbors.  As you can
    see from the equation, it is very similar to the
    perceptron learning rule presented earlier: the
    difference between the input vector and the
    weight vector, multiplied by the learning
    constant and the neighborhood function.
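  • A sketch of this update step, reusing the helpers
    above: each neuron is pulled toward the input in
    proportion to the learning constant and its
    neighborhood strength.

    def train_step(weights, coords, x, r, eta):
        """Step 3: move weights toward x, scaled by eta and by the
        neighbourhood function F around the winning neuron."""
        bmu = best_matching_unit(weights, x)
        h = F(bmu, coords, r)          # neighbourhood strength per neuron
        weights += eta * h[..., np.newaxis] * (x - weights)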

14
Algorithm Synopsis Step 4
  • This is a simple stop condition.  If we have
    reached the required number of iterations for our
    training, we stop.  Alternatively, we can modify
    our learning constant and our neighborhood
    function, and fine-tune the SOFM as we see fit.
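  • Putting the steps together, a sketch of the outer
    loop with this stop condition and parameter decay
    (the iteration budget, data, and decay rates are
    illustrative assumptions):

    n_iters = 1000                  # stop after this many iterations
    data = np.random.rand(500, 3)   # illustrative training inputs

    for t in range(n_iters):
        x = data[np.random.randint(len(data))]  # Step 2: present an input
        train_step(weights, coords, x, r, eta)  # Step 3: adjust weights
        # Step 4 variant: shrink the radius and learning constant so
        # the map is refined from coarse to fine as training proceeds.
        r = max(0.5, r * 0.995)
        eta = max(0.01, eta * 0.995)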

15
Learning Problems
  • We initially use coarse granularity and, by
    modifying the neighborhood radius and learning
    constant, gradually move towards fine granularity
  • It is difficult to determine how to modify these
    parameters
  • There are difficulties with higher-dimensional
    input spaces

16
SOFM Additional Material
  • For a nice tutorial on SOFMs and some interesting
    examples, please see
  • http://www.ai-junkie.com/som1.html