1
Translation Invariance
  • CS/PY 399 Lab Presentation 10
  • March 22, 2001
  • Mount Union College

2
A surprisingly hard problem
  • As we've seen, problems that are easy for humans
    can be difficult for computers to solve
  • This applies to simple neural networks as well as
    conventional programs
  • Example: recognizing patterns that are displaced
    in space
  • same shape, different location

3
Translation Invariance Problem
  • We say that the pattern to be recognized is
    invariant (doesn't change) under the operation of
    translation (no change to shape or size; moved to
    another place)
  • It turns out that the completely connected
    networks we've been using don't do well at this
    recognition task
  • clue: brains aren't completely connected

4
Translation Invariance Problem
  • Example: we can recognize the letter A in many
    different locations on a page, even flipped
    sideways
  • Simpler example for today's lab: recognizing
    strings of 8 bits that have a run of 3
    consecutive 1s
  • First, you will train a completely connected
    network, and then see if it generalizes to novel
    inputs well (a sketch of this dataset follows)
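
The lab itself uses TLearn; purely as an illustration (not the lab's actual files), here is a minimal Python sketch of this dataset: all 256 eight-bit patterns, each labeled by whether it contains a run of three consecutive 1s.

    from itertools import product

    def has_run_of_three(bits):
        """True if three adjacent positions are all 1."""
        return any(bits[i] == bits[i + 1] == bits[i + 2] == 1
                   for i in range(len(bits) - 2))

    # All 256 patterns, labeled 1 when they contain a run of three 1s.
    dataset = [(bits, int(has_run_of_three(bits)))
               for bits in product((0, 1), repeat=8)]

    print(len(dataset), "patterns,",
          sum(label for _, label in dataset), "positives")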

5
Relative vs. Absolute Position
  • Completely-connected networks treat input
    patterns as descriptions of points in
    n-dimensional space, where n is the number of
    input values in each pattern
  • Position is very important in an input vector
  • (3, 4, 4, 4, 3) is not similar to (4, 4, 4, 3,
    3), in general (see the sketch below)
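
A quick Python sketch of this point: the two vectors hold exactly the same values, yet as points in 5-dimensional space they are distinct.

    import math

    a = (3, 4, 4, 4, 3)
    b = (4, 4, 4, 3, 3)

    print(sorted(a) == sorted(b))  # True: same values, ignoring position
    print(a == b)                  # False: position matters to the network
    print(math.dist(a, b))         # nonzero Euclidean distance, sqrt(2)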

6
City Block Geometric Example
  • To illustrate this, an excellent example is given
    on pp. 140-141 of P. E.
  • Consider a vector of 4 values, with each value
    representing a number of blocks to walk in a
    certain compass direction
  • first position: NORTH; 2nd: EAST; 3rd: SOUTH;
    4th: WEST
  • Let's compare two vectors with similar values to
    see where we end up

7
Walking City Blocks (n, e, s, w)
  • Path 1: (1, 3, 1, 1)
  • [Figure: city grid with the start intersection marked x]
8
Walking City Blocks (n, e, s, w)
  • Path 1: (1, 3, 1, 1)
  • [Figure: city grid with start and end marked x; Path 1 ends
    2 blocks east of the start]
9
Walking City Blocks (n, e, s, w)
  • Path 2: (3, 1, 1, 1)
  • [Figure: city grid with the start intersection marked x]
10
Walking City Blocks (n, e, s, w)
  • Path 2: (3, 1, 1, 1)
  • [Figure: city grid with start and end marked x; Path 2 ends
    2 blocks north of the start]
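
The arithmetic behind these figures, as a one-function Python sketch: net displacement is (east - west, north - south), so the two similar-looking vectors end at different intersections.

    # Net displacement for a (north, east, south, west) walk vector.
    def endpoint(n, e, s, w):
        """(blocks east of start, blocks north of start)."""
        return (e - w, n - s)

    print(endpoint(1, 3, 1, 1))  # Path 1 -> (2, 0): 2 blocks east
    print(endpoint(3, 1, 1, 1))  # Path 2 -> (0, 2): 2 blocks north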
11
Implications of the Preceding
  • Shifting sub-patterns in data to a new relative
    location causes the new vector to look very
    different to the network (in absolute terms; see
    the sketch below)
  • Finding relative similarities in input patterns
    that are absolutely different is a key skill of
    living beings
  • recognizing a tiger in various positions and/or
    orientations in your visual field is (as Martha
    would say) a good thing!!
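
To make the first bullet concrete for the lab's bit strings, here is a small Python sketch: shifting the same run of 1s to the other end of the vector changes 6 of the 8 components.

    import math

    p1 = (1, 1, 1, 0, 0, 0, 0, 0)   # run of three 1s at the left
    p2 = (0, 0, 0, 0, 0, 1, 1, 1)   # same run, shifted to the right

    print(sum(x != y for x, y in zip(p1, p2)))  # 6 components differ
    print(math.dist(p1, p2))                    # Euclidean distance sqrt(6)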

12
Better Architecture for Recognition of Relative
Patterns
  • How is the visual field represented in living
    creatures?
  • ganglion cells receive input from a certain
    subset of photoreceptors
  • not all inputs are delivered to each hidden node
  • In general, we say that a hidden node has a
    Receptive Field spanning a subset of the input
    nodes

13
Receptive Field Example
  • For this problem, we are interested in finding
    patterns of 3 consecutive 1s
  • Therefore, connect each hidden node to just 3
    input nodes
  • Hidden node 1 receives input from i1, i2 and i3
  • Hidden node 2 from i2, i3, i4
  • ... and so on, up to hidden node 6 from i6, i7,
    i8 (see the connectivity sketch below)
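
A small numpy sketch of this wiring (an illustration, not TLearn's internals): a 6-by-8 mask zeroes every connection outside each hidden node's 3-input receptive field.

    import numpy as np

    n_inputs, window = 8, 3
    n_hidden = n_inputs - window + 1   # 6 hidden nodes

    # mask[h, i] = 1 exactly when hidden node h is connected to input i.
    mask = np.zeros((n_hidden, n_inputs))
    for h in range(n_hidden):
        mask[h, h:h + window] = 1.0

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n_hidden, n_inputs)) * mask  # no links outside a field

    def hidden_layer(x):
        """Logistic hidden activations; each node reads only its window."""
        return 1.0 / (1.0 + np.exp(-(weights @ x)))

    print(hidden_layer(np.array([0, 1, 1, 1, 0, 0, 0, 0])))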

14
Network Architecture for Receptive Field Example
[Figure: diagram of the 8-input, 6-hidden-node network with 3-input
receptive fields]
15
Making the Receptive Fields Identical in Function
  • Each receptive field should be trained to
    recognize our pattern in the same way, with the
    same strength
  • Random weights for each connection won't do this,
    unless we're lucky
  • What we want is a way to duplicate the weights
    for all of the left-most connections to the
    hidden nodes, so that they are the same

16
Making the Receptive Fields Identical in Function
  • The weight change applied to each left-most
    connection is the average of the individual
    changes for all of these connections
  • TLearn does this through the groups feature of
    the .CF file
  • All connections in the same group are restricted
    to the same value at all times

17
Making the Receptive Fields Identical in Function
  • The lab example for this architecture will have 5
    groups (see the sketch below)
  • Group 1: all hidden nodes from bias
  • Group 2: output from all hidden nodes
  • Group 3: all left-most connections into hidden
    nodes
  • Group 4: all middle connections
  • Group 5: all right-most connections
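
TLearn's .CF group syntax is not reproduced here, but the effect of these five groups can be sketched in numpy. Each group keeps one shared weight, and (per the previous slide) each shared weight is updated with the average of its connections' individual changes; logistic units and squared error are assumptions, since the slides don't specify the update rule.

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, window = 8, 3
    n_hidden = n_inputs - window + 1   # 6 hidden nodes

    w_field = rng.normal(size=window)  # groups 3-5: left/middle/right weights
    b_hidden = rng.normal()            # group 1: shared bias-to-hidden weight
    w_out = rng.normal()               # group 2: shared hidden-to-output weight
    b_out = rng.normal()               # output bias (not grouped)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        # Every hidden node applies the SAME three weights to its own
        # window, so all six receptive fields are identical in function.
        h = sigmoid(np.array([w_field @ x[j:j + window] + b_hidden
                              for j in range(n_hidden)]))
        y = sigmoid(w_out * h.sum() + b_out)
        return h, y

    def train_step(x, target, lr=0.5):
        global b_hidden, w_out, b_out
        h, y = forward(x)
        dy = (y - target) * y * (1 - y)      # output delta
        dh = dy * w_out * h * (1 - h)        # one delta per hidden node
        # Grouped connections receive the AVERAGE of their changes:
        for k in range(window):              # groups 3-5
            w_field[k] -= lr * np.mean(dh * x[k:k + n_hidden])
        b_hidden -= lr * np.mean(dh)         # group 1
        w_out -= lr * np.mean(dy * h)        # group 2
        b_out -= lr * dy

This shared-weight, local-receptive-field arrangement is the same idea behind convolutional layers: one small set of weights slid across the input.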

18
Translation Invariance
  • CS/PY 399 Lab Presentation 10
  • March 22, 2001
  • Mount Union College