Kohonen's Self-Organizing Feature Maps and Chaos - PowerPoint PPT Presentation

1
Kohonen's Self-Organizing Feature Maps and Chaos
  • Dr. N. Reyes
  • Adapted from the Welstead Book

2
A Simple Kohonen Network
[Figure: a 4 x 4 lattice of nodes; weight vectors connect the input
nodes, carrying the input vector, to each lattice node]
3
SOM for Color Clustering
  • Unsupervised learning
  • Reduces the dimensionality of information
  • Data compression (vector quantisation)
  • Clustering of data
  • Topological relationships between data are maintained
  • Input: 3-D, Output: 2-D
4
Learning without a teacher
1. Initialize the weight vectors. Random
initialisation is used, although, as with
feed-forward nets, partially trained weights can
be used as a starting point.
2. Present input to the network. Typically some
dynamic process generates the input vectors.
3. Determine the weight vector that is closest to
the input vector. This is just a brute-force
search over the entire lattice. The weight
vector with the smallest Euclidean distance
to the input vector is the winner.
This step can be summarised as: find i, j such
that ||v - w(i,j)|| is minimised,
where v is the input vector and i and j range
over all the nodes in the lattice.
5
Learning without a teacher
4. Adapt the weights. This step is similar to
the gradient descent step in back-propagation.
However, the adaptation is applied only to weight
vectors in a neighborhood of the winner selected
in step 3. The starting neighborhood size is
one of the set-up parameters, and the
neighborhood size is gradually reduced over the
course of the iterations. The adaptation
stepSize is also reduced over the course of the
iterations.
6
Learning without a teacher
Also, weights within the neighborhood that are
farther away from the winner are not adapted as
strongly as weights close to the winner. This
mechanism is accomplished through the use of a
gaussian function applied to the distance of the
weight vector from the winner. The adaptation is
summarised by the following equation:

w(i,j) = w(i,j) + e * exp(-a^2 * ((i - i0)^2 + (j - j0)^2)) * (v - w(i,j))

where v is the input vector, (i0, j0) is the winner
selected in step 3, and i and j range over just its
neighborhood. Here, e is the step size and a is a
fixed coefficient set equal to the inverse of the
neighborhood size.
7
SOM in Chaos
x, y - correspond to the lattice position
z - corresponds to the actual weight vector
[Figure: lattice in the x-y plane]
Input vector:
  • Random Uniform Distribution
  • Henon Dynamical System
8
SOM in Chaos
  • Search for the Best Matching Unit (BMU) is done
    by a simple-minded brute-force search
  • The neighborhood around the winning weight vector
    is trimmed to ensure that it does not overshoot
    the bounds of the lattice array
  • Weight adjustment for weight vectors in the
    trimmed neighborhood uses the same update
    equation, with the step size and with the
    coefficient set equal to the inverse of the
    neighborhood size
9
Learning Order and Chaos
  • The SOM has the capability of learning the
    underlying distribution of the random input
    vectors presented to the network.
  • If the input is uniformly distributed, the
    network weights will arrange themselves in a
    rectangular grid.
  • If the input is generated by the chaotic Henon
    dynamical system, the arrangement of the network
    weights will resemble the strange attractor
    associated with the dynamical system.

10
Learning Order and Chaos
  • Weight initialisation in [-0.1, 0.1]

void initialize_weights(float ***w, int n_inp,
                        int l_rows, int l_cols)
{
    int i, j, k;
    float r;
    srand(1);
    for (k = 1; k <= n_inp; k++)          /* k */
        for (i = 1; i <= l_rows; i++)     /* i */
            for (j = 1; j <= l_cols; j++) {   /* j */
                r = 2*frand() - 1.0;  /* Random number between -1 and 1 */
                w[i][j][k] = 0.1 * r; /* Choose small initial weight values */
            }
} /* procedure */
11
Learning Order and Chaos
  • Gaussian and Henon Functions

float gaussian(const float x, const float y)
{
    return exp(-x*x - y*y);
}

void henon_dynam_sys(float a, float b, float x_in, float y_in,
                     float *x_out, float *y_out)
{
    *x_out = 1.0 + y_in - a*x_in*x_in;
    *y_out = b*x_in;
}
12
Learning Order and Chaos
PSEUDO CODE
Repeat the following inside a loop:
0. Increment the iteration count.
1. Generate random input (uniform dist.) or use the Henon function.
2. Initialize the minimum distance.
3. Find the minimum distance over all weights (find the BMU).
4. Make sure the neighborhood doesn't exceed the array bounds.
5. Update the weights in the neighborhood around the minimum.
6. Display the weight values on screen every freq'th iteration,
   using the transformation equations to plot on screen properly.
7. Update the step size and neighborhood size:
   reduce StepSize by 20%; decrement the neighborhood size by 1.
13
Learning Order and Chaos
  • Finding the BMU

/* Find minimum distance over all weights */
for (i = 1; i <= s.lattice_rows; i++)
    for (j = 1; j <= s.lattice_cols; j++) {
        sum = 0;
        for (k = 1; k <= no_of_inputs; k++)
            sum += (x[k] - w[i][j][k]) * (x[k] - w[i][j][k]);
        if (sum < min_dist) {
            min_i = i;
            min_j = j;
            min_dist = sum;
        } /* if */
    } /* i,j */
14
Learning Order and Chaos
  • Weight Adjustments in the neighborhood

/* Trim the neighborhood to the lattice bounds */
i_upper_limit = min_i + nbhd;
if (i_upper_limit > lattice_rows)      /* number of rows in lattice */
    i_upper_limit = lattice_rows;
/* ... i_lower_limit, j_lower_limit, j_upper_limit likewise ... */

for (i = i_lower_limit; i <= i_upper_limit; i++)
    for (j = j_lower_limit; j <= j_upper_limit; j++) {
        distanceFactor = gaussian(alpha*(i - min_i),
                                  alpha*(j - min_j));
        for (k = 1; k <= no_of_inputs; k++)
            w[i][j][k] += distanceFactor * stepsize *
                          (x[k] - w[i][j][k]);
    }
15
Simulations
  • Setting a value for the initial neighborhood
    that is too small could lead to the network
    getting entangled.
  • Uniform Distribution
  • lattice: 10 x 10
  • stepSize = 0.25
  • starting neighborhood size = 5
  • graph frequency = 5
  • iteration interval = 500
  • range for x: [-1.0, 1.0], range for y: [-1.0, 1.0]
  • Henon Dynamical System
  • a = 1.4, b = 0.3; a = 0.9, b = -1.0
  • lattice: 20 x 20
  • stepSize = 0.25
  • starting neighborhood size = 10
  • graph frequency = 5
  • iteration interval = 4000, 5000
  • range for x: [-1.0, 1.0], range for y: [-1.0, 1.0]

16
Transformation Equations
159.234
WORLD-to-DEVICE COORDINATES
[Figure: a world window (e.g. 100,000,000 miles x 500,000 miles) in
the world system of coordinates (Xworld, Yworld), mapped onto a
1280 x 1024 pixel device system of coordinates (XDevice, YDevice)]
17
Transformation Equations
From World Coordinates to Device Coordinates
How?
Scaling from some very big or very small values
to the boundaries of the computer screen,
e.g. (124,075,454, 765,454,012) to (345, 675)
18
World-to-Device Coordinates
TRANSFORMATION EQUATIONS
19
World-to-Device Coordinates
TRANSFORMATION EQUATIONS
20
Example Projectile Motion
PHYSICS EQUATIONS
where g = 9.8 m/s^2 is the pull of gravity, Vo
in m/s is the initial velocity, and t in s is
the time
21
World Boundaries
SETTING THE BOUNDARIES
where theta = 85 degrees
where theta = 45 degrees
Time of flight from take-off to landing
22
World Boundaries
SETTING THE BOUNDARIES
Use the upper-left and bottom-right coordinates
to set the boundaries
[Figure: world window with top-left (x1, y1) and bottom-right
(x2, y2) in (Xworld, Yworld), mapped onto (XDevice, YDevice)]
Top-left: x1, y1   Bottom-right: x2, y2
23
Example Projectile Motion
PUTTING THE PIECES TOGETHER
circle(x, y, radius)
// InitGraphics here
// InitBoundaries here
t = 0.0;
while (t <= tf) {
    cleardevice();
    setcolor(RED);
    circle(Xdev(x(t, Vo, Theta)), Ydev(y(t, Vo, Theta)), 12);
    t = t + tinc;
}

Xdev, Ydev - world-to-device transformation functions
x(t, Vo, Theta) - implements a physics equation for x
24
SOM in Control Systems
x, y, z - correspond to the lattice position
k - corresponds to the actual weight vector
lattice rows = 10, lattice cols = 10, lattice k = 10
[Figure: 3-D lattice with input vectors]
State of the cart-broom system: x, x-dot, theta,
angular velocity theta-dot
25
SOM in Control Systems
OUTPUT LAYER WEIGHTS
lattice rows = 10, lattice cols = 10, lattice k = 10
[Figure: 3-D lattice; k corresponds to the actual weight vector]
Input vectors: state of the cart-broom system (x, x-dot, theta)
INPUT LAYER WEIGHTS
26
SOM in Control Systems
SOM NETWORK
1. TRAINING MODE
2. RUN MODE
Weight adaptation to produce the desired control
response is accomplished through the use of a
reward function. The reward function guides the
adaptation of the network toward the desired
behaviour. Interestingly, more than one desired
behaviour could be built in.
[Figure: lattice with input and output layer weights]
27
SOM in Control Systems
  • Broom balancing behaviours:
  • Keep the broom handle vertical.
  • Keep the cart away from the ends of the cart path.

[Figure: lattice with input and output layer weights]
28
SOM in Control Systems
The function is evaluated using normalised values
for each variable.
[Figure: lattice with input and output layer weights]
30
SOM in Control Systems
OUTPUT LAYER WEIGHTS
[Figure: lattice]
W_IN[10][10][10][3]
INPUT LAYER WEIGHTS
31
Network Training
1. Initialise parameters.
stepSize = startingStepSize = 0.25;
minStepSize = 0.005;
AS_StepSize = 0.05;
Neighborhood = startingNeighborhood = 5;
no_of_f_steps = 20;   // force increment steps
ForceFactor = 3.0;
Iter_Interval = 500;
Lattice_rows = 10;
Lattice_cols = 10;
Lattice_k = 10;
NO_OF_INP_WTS = 3;
NO_OF_INPUTS = 4;
32
Network Training
2. Initialise weights
W_IN  => 4-D matrix
W_OUT => 3-D matrix
AS    => 3-D matrix (matrix of search step sizes,
initialised all to 1.0)
range: r = floating-point number between -1.0 and 1.0
33
Network Training
3. Initialise Normalisation Factors
xFactor, xDotFactor, thetaFactor, thetaDotFactor, forceFactor
34
Normalisation of Input Parameters
1. From the training data, calculate the max and min
values of each input parameter
2. xFactor
3. xDotFactor
35
Normalisation of Input Parameters
4. thetaFactor
5. thetaDotFactor
6. forceFactor
36
Network Training
4. Loop (Main loop)
  • Generate random input in [-1.0, 1.0]
  • Initialise the minimum distance
  • Find the minimum distance over all weights (find the BMU)
  • Get the indices min_i, min_j, min_k

/* Find minimum distance over all weights */
for (i = 1; i <= s.lattice_rows; i++)
    for (j = 1; j <= s.lattice_cols; j++) {
        sum = 0;
        for (k = 1; k <= no_of_inputs; k++)
            sum += (x[k] - w[i][j][k]) * (x[k] - w[i][j][k]);
        if (sum < min_dist) {
            min_i = i;
            min_j = j;
            min_dist = sum;
        } /* if */
    } /* j */
/* i */
37
Network Training
F_new = W_OUT[min_i][min_j][min_k] -
        AS[min_i][min_j][min_k];
Best_F = F_new;
Best_Reward = Reward(s, x, F_new, ForceFactor);

/* Find the force with the best reward */
for (i = 2; i <= no_of_f_steps; i++) {
    F_new = F_new + F_increment;
    The_Reward = Reward(s, x, F_new, ForceFactor);
    if (The_Reward > Best_Reward) {
        Best_F = F_new;
        Best_Reward = The_Reward;
    }
} /* i */
38
Network Training
  • Find limits of the neighborhood

if ((min_i - neighborhood) < 1)
    i_lower_limit = 1;
else
    i_lower_limit = min_i - neighborhood;

if ((min_i + neighborhood) > lattice_rows)
    i_upper_limit = lattice_rows;
else
    i_upper_limit = min_i + neighborhood;

/* ... likewise for j_lower_limit, j_upper_limit and
   k_lower_limit, k_upper_limit ... */
[Figure: trimmed neighborhood inside the lattice]
39
Network Training
  • Adapt weights to input vectors

for (i = i_lower_limit; i <= i_upper_limit; i++)
    for (j = j_lower_limit; j <= j_upper_limit; j++)
        for (k = k_lower_limit; k <= k_upper_limit; k++) {
            Factor = Gaussian(alpha*(i - min_i),
                              alpha*(j - min_j),
                              alpha*(k - min_k));  /* Factor <= 1 */
            /* For each input vector component m */
            for (m = 1; m <= NO_OF_INP_WTS; m++)
                W_IN[i][j][k][m] = W_IN[i][j][k][m] + Factor *
                    StepSize * (x[m] - W_IN[i][j][k][m]);
            W_OUT[i][j][k] = W_OUT[i][j][k] + Factor *
                StepSize * (Best_F - W_OUT[i][j][k]);
        } /* k */
    /* j */
/* i */
40
Network Training
  • Adjust Matrix of Search Step Sizes

AS[min_i][min_j][min_k] = AS[min_i][min_j][min_k]
    - AS_StepSize * AS[min_i][min_j][min_k];
iters++;
if (iters % iterInterval == 0) {
    StepSize = StepSize * 0.8;
    if (StepSize < minStepSize)
        StepSize = minStepSize;
    Neighborhood--;
    if (Neighborhood < 1) Neighborhood = 1;
}
/* main loop */
41
Reward Function
  • Reward(s, x, F_new, ForceFactor)

rand_x_low_lim = -1.0;
rand_x_up_lim = 1.0;
oldState = GetActualState(x, rand_x_low_lim, rand_x_up_lim);
ForceEffect(F_new * ForceFactor, oldState, newState);
newState = GetNormalisedState(newState, rand_x_low_lim,
                              rand_x_up_lim);
return ...   /* reward value */
42
Get Force Function
  • get_network_force()
  • Initialise the minimum distance.
  • Find the BMU based on minimum distance.
  • Get the indices min_i, min_j, min_k.
  • Out_Force = ForceFactor * W_OUT[min_i][min_j][min_k]

43
  • END OF PRESENTATION