MultiLayer Perceptron - PowerPoint PPT Presentation

About This Presentation
Title:

MultiLayer Perceptron

Description:

The result u1 XOR u2 belongs to either of two classes (0 or 1), and the reason ... of (u1, u2) with their classes ... u2. XOR (blue = target, red = neuron) ... – PowerPoint PPT presentation

Slides: 20
Provided by: JanJa1

Transcript and Presenter's Notes



1
Multi-Layer Perceptron
  • Only a multi-layer Perceptron can model the XOR
    function

2
TABLE 1. Truth table for the logical operator XOR.

  u1   u2   u1 XOR u2
   0    0       0
   0    1       1
   1    0       1
   1    1       0

The XOR example was used many years ago to
demonstrate that the single-layer Perceptron
is unable to model even such a simple relationship.
The result u1 XOR u2 belongs to one of two
classes (0 or 1); the single-layer Perceptron fails
because the two classes are not linearly separable.
3
Plot of (u1, u2) with their classes indicated (x
and o). Boolean values 0 have been replaced by -1
for easier implementation. It is impossible to
draw a line that separates the two classes; they
are non-separable.
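The non-separability claim can be checked by brute force. The sketch below (illustrative, not from the slides) searches a grid of candidate lines w0 + w1*u1 + w2*u2 = 0 and assumes a hard limiter that outputs +1 for x >= 0 and -1 otherwise; no line classifies all four XOR points correctly.

```python
import itertools

# XOR points with Boolean 0 replaced by -1; targets likewise in {-1, +1}
points  = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targets = [-1, 1, 1, -1]

def hardlim(x):
    """Hard limiter: +1 for x >= 0, -1 otherwise (assumed convention)."""
    return 1 if x >= 0 else -1

# Try every line w0 + w1*u1 + w2*u2 = 0 on a coarse weight grid
best = 0
grid = [i / 4 for i in range(-8, 9)]   # weights in [-2, 2], step 0.25
for w0, w1, w2 in itertools.product(grid, repeat=3):
    correct = sum(hardlim(w0 + w1 * u1 + w2 * u2) == t
                  for (u1, u2), t in zip(points, targets))
    best = max(best, correct)

print(best)  # 3: at most three of the four points ever come out right
```

Three correct is achievable (a line that cuts off one corner), but four never is, which is exactly what the plot shows geometrically.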
4
The classifier is a Perceptron (a) with two
inputs, an offset w0 and an activation function
f(x). The activation function is a hard limiter (b).
A single-layer Perceptron has its neurons connected
in a single layer; in this case, however, there is
only one neuron.
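A single neuron of this kind can be sketched in a few lines of Python (illustrative; the slides use Matlab, and the hard limiter's behaviour at exactly zero is an assumption):

```python
def hardlim(x):
    # Hard limiter activation: +1 for x >= 0, -1 otherwise
    # (the tie-breaking convention at 0 is assumed, not stated in the slides)
    return 1 if x >= 0 else -1

def perceptron(u1, u2, w):
    # w = [w0, w1, w2]: offset weight followed by one weight per input
    return hardlim(w[0] + w[1] * u1 + w[2] * u2)

print(perceptron(1, -1, [0.5, 1, -1]))  # -> 1
```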
5
The plot shows, in red, the neuron's attempt at
classifying the first row. If it uses the same
symbol as before, the classification is correct;
otherwise it plots the two symbols (x and o)
on top of each other. It also tries to separate
the two classes with a line (green), but
sometimes the line lies outside the plot area.
6
Classifying row 2 (wrong).
7
Classifying row 3 (wrong).
8
Classifying row 4 (wrong).
9
Next epoch (round), classifying row 1 (wrong).
10
Epoch 2, row 2 (wrong).
11
Epoch 2, row 3 (wrong).
12
Epoch 2, row 4 (wrong). In fact, the Perceptron
will never be able to separate the two classes.
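The repeated failures in slides 5-12 can be reproduced with the classic Perceptron learning rule. The update rule below (w <- w + eta*(target - output)*input) is the standard one and an assumption, since the slides do not spell it out; because no weight vector separates the XOR classes, every epoch ends with at least one error, no matter how long training runs.

```python
def hardlim(x):
    return 1 if x >= 0 else -1   # assumed threshold convention

rows    = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # Boolean 0 encoded as -1
targets = [-1, 1, 1, -1]                          # u1 XOR u2

w = [0.0, 0.0, 0.0]   # [offset w0, w1, w2]
eta = 0.5
for epoch in range(50):
    errors = 0
    for (u1, u2), t in zip(rows, targets):
        y = hardlim(w[0] + w[1] * u1 + w[2] * u2)
        if y != t:
            # Standard Perceptron update on a misclassified row
            errors += 1
            w[0] += eta * (t - y) * 1
            w[1] += eta * (t - y) * u1
            w[2] += eta * (t - y) * u2

print(errors > 0)  # True: errors never reach zero, even after 50 epochs
```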
13
  • Use a 2-layer network

14
The classifier is a two-layer Perceptron with two
inputs, a hidden layer with two neurons, and an
output layer with one neuron. All neurons have an
offset input as well. The activation function is
a hard limiter.
15
With predefined, fixed weights (Matlab
notation), w1 = [-0.5, 1, -1]', w2 = [0.5, 1,
-1]', w3 = [0.5, 1, -1]', the classifier gets
the first row right. The weights define three
lines, but two are identical.
16
Four iterations verify that the network
classifies all rows correctly.
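The same verification can be done in code. This sketch wires up the two-layer network from the slides with the fixed weights w1 = [-0.5, 1, -1]', w2 = [0.5, 1, -1]', w3 = [0.5, 1, -1]' (each vector is offset first, then the two input weights; the hard limiter's convention at zero is assumed):

```python
def hardlim(x):
    return 1 if x >= 0 else -1   # assumed threshold convention

def neuron(a, b, w):
    # w = [offset, weight for first input, weight for second input]
    return hardlim(w[0] + w[1] * a + w[2] * b)

def network(u1, u2):
    y1 = neuron(u1, u2, [-0.5, 1, -1])   # hidden neuron 1
    y2 = neuron(u1, u2, [ 0.5, 1, -1])   # hidden neuron 2
    return neuron(y1, y2, [0.5, 1, -1])  # output neuron

rows    = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targets = [-1, 1, 1, -1]   # u1 XOR u2 with 0 encoded as -1
print(all(network(u1, u2) == t
          for (u1, u2), t in zip(rows, targets)))  # True: all four rows correct
```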
17
Change the weights of neuron 3 into w3 = [3,
1, -1]'. Now all three lines are visible, but
the network classifies the first row incorrectly.
18
The network gets two out of four correct.
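Running the same network with the modified output weights confirms the two-out-of-four result (same illustrative Python sketch as before, same assumed hard-limiter convention):

```python
def hardlim(x):
    return 1 if x >= 0 else -1   # assumed threshold convention

def neuron(a, b, w):
    return hardlim(w[0] + w[1] * a + w[2] * b)

def network(u1, u2):
    y1 = neuron(u1, u2, [-0.5, 1, -1])
    y2 = neuron(u1, u2, [ 0.5, 1, -1])
    return neuron(y1, y2, [3, 1, -1])   # modified output neuron w3

rows    = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targets = [-1, 1, 1, -1]
correct = sum(network(u1, u2) == t for (u1, u2), t in zip(rows, targets))
print(correct)  # 2: the large offset pushes every output to +1,
                # so only the two rows with target +1 come out right
```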
19
  • The End