Title: Example: perceptron learning of the AND function
1. Example: perceptron learning of the AND function
- Training samples (in_0 = 1 is the bias input, d is the desired output):

       in_0  in_1  in_2    d
   p0     1    -1    -1   -1
   p1     1    -1     1   -1
   p2     1     1    -1   -1
   p3     1     1     1    1

- Initial weights W(0):

       w0   w1   w2
        1    1   -1

- Learning rate: 1
- Present p0
  - net = W(0)·p0 = (1, 1, -1)·(1, -1, -1) = 1
  - p0 is misclassified (sign(net) = 1, d = -1), so learning occurs
  - x = d·p0 = (-1)·(1, -1, -1) = (-1, 1, 1)
  - W(1) = W(0) + x = (0, 2, 0)
  - New net = W(1)·p0 = -2 is closer to the target (d = -1)
- Present p1
  - net = W(1)·p1 = (0, 2, 0)·(1, -1, 1) = -2
  - correctly classified, no learning occurs
- Present p2
  - net = W(1)·p2 = (0, 2, 0)·(1, 1, -1) = 2
  - p2 is misclassified, so learning occurs
  - x = d·p2 = (-1)·(1, 1, -1) = (-1, -1, 1)
  - W(2) = W(1) + x = (0, 2, 0) + (-1, -1, 1) = (-1, 1, 1)
- Present p3
  - net = W(2)·p3 = (-1, 1, 1)·(1, 1, 1) = 1
  - correctly classified, no learning occurs
- Present p0, p1, p2, p3 again
  - all four are now correctly classified with W(2)
  - learning stops with W(2) = (-1, 1, 1) (the update rule is sketched in code below)
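A minimal Python sketch of the perceptron update traced above (not part of the original slides; the sign() threshold, the stopping test, and the variable names are assumptions made here for illustration):

# Sketch of the perceptron learning rule used on this slide, assuming a
# sign() activation, bipolar targets, learning rate 1, and presentation
# order p0, p1, p2, p3 repeated until a full pass makes no update.

def sign(x):
    return 1 if x >= 0 else -1

# AND function with a bias input in_0 = 1 and bipolar coding
patterns = [(1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]
targets  = [-1, -1, -1, 1]

w = [1, 1, -1]                                  # W(0) from the slide
changed = True
while changed:
    changed = False
    for p, d in zip(patterns, targets):
        net = sum(wi * xi for wi, xi in zip(w, p))
        if sign(net) != d:                      # misclassified: add d * p
            w = [wi + d * xi for wi, xi in zip(w, p)]
            changed = True

print(w)                                        # -> [-1, 1, 1], i.e. W(2)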
3. Example: learning the AND function by the delta rule
- Training samples (same as above):

       in_0  in_1  in_2    d
   p0     1    -1    -1   -1
   p1     1    -1     1   -1
   p2     1     1    -1   -1
   p3     1     1     1    1

- Initial weights W(0):

       w0   w1   w2
        1    1   -1

- Learning rate: 0.3
- Present p0
  - net = W(0)·p0 = (1, 1, -1)·(1, -1, -1) = 1
  - ΔW = 0.3·(d - net)·p0 = 0.3·(-1 - 1)·(1, -1, -1) = (-0.6, 0.6, 0.6)
  - W(1) = W(0) + ΔW = (0.4, 1.6, -0.4)
  - New net = W(1)·p0 = -0.8 is closer to the target (d = -1) than before
    (this single update is traced in the code sketch below)
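A short Python sketch of this single delta-rule update (not from the original slides; the plain lists and variable names are illustrative assumptions):

# One delta-rule update, as on this slide: W(1) = W(0) + eta * (d - net) * p0.

eta = 0.3
w   = [1.0, 1.0, -1.0]                               # W(0)
p0  = [1, -1, -1]                                    # first training sample
d   = -1                                             # its desired output

net = sum(wi * xi for wi, xi in zip(w, p0))          # = 1
dw  = [eta * (d - net) * xi for xi in p0]            # = [-0.6, 0.6, 0.6]
w   = [wi + dwi for wi, dwi in zip(w, dw)]           # W(1) = [0.4, 1.6, -0.4]
new_net = sum(wi * xi for wi, xi in zip(w, p0))      # = -0.8, closer to d = -1

print(dw, w, new_net)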
4. Delta-rule weight trajectory. At step k, net = W(k)·p, where p is the pattern presented at that step; the patterns cycle through p0, p1, p2, p3, and W(k+1) = W(k) + 0.3·(d - net)·p.

    k       w0         w1         w2        net        d    d - net
    0    1          1         -1          1          -1   -2
    1    0.4        1.6       -0.4       -1.6        -1    0.6
    2    0.58       1.42      -0.22       2.22       -1   -3.22
    3   -0.386      0.454      0.746      0.814       1    0.186
    4   -0.3302     0.5098     0.8018    -1.6418     -1    0.6418
    5   -0.13766    0.31726    0.60926    0.15434    -1   -1.15434
    6   -0.48396    0.663562   0.262958  -0.08336    -1   -0.91664
    7   -0.75895    0.388569   0.537951   0.167565    1    0.832435
    8   -0.50922    0.6383     0.787681  -1.9352     -1    0.935205
    9   -0.22866    0.357738   0.507119  -0.07928    -1   -0.92072
   10   -0.50488    0.633954   0.230904  -0.10183    -1   -0.89817
   11   -0.77433    0.364502   0.500355   0.090528    1    0.909472
   12   -0.50149    0.637344   0.773197  -1.91203    -1    0.912029
   13   -0.22788    0.363735   0.499588  -0.09203    -1   -0.90797
   14   -0.50027    0.636127   0.227196  -0.09134    -1   -0.90866
   15   -0.77287    0.363529   0.499794   0.090454    1    0.909546
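The table above can be regenerated with the loop sketched below (not from the original slides; the cyclic presentation order p0, p1, p2, p3 is an assumption consistent with the printed numbers, and the output format is illustrative):

# Delta-rule iterations behind the table above: patterns presented
# cyclically as p0, p1, p2, p3, ... with learning rate 0.3.
# (The cyclic order is an assumption consistent with the printed values.)

patterns = [(1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]
targets  = [-1, -1, -1, 1]
eta = 0.3
w = [1.0, 1.0, -1.0]                                 # W(0)

print(" k        w0         w1         w2        net    d     d - net")
for k in range(16):
    p, d = patterns[k % 4], targets[k % 4]
    net = sum(wi * xi for wi, xi in zip(w, p))
    print(f"{k:2d}  {w[0]:9.6f}  {w[1]:9.6f}  {w[2]:9.6f}  {net:9.5f}  {d:2d}  {d - net:9.6f}")
    w = [wi + eta * (d - net) * xi for wi, xi in zip(w, p)]   # W(k+1)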