Title: Neural Network - Perceptron
1. Neural Network - Perceptron
- ??? ?????
- Control Information Process Lab
- ???
2. Variants of Neural Networks
- Variety of Neural Networks
- Feedforward Network: Perceptron
- Recurrent Network: Hopfield Network
  - Network that converges from an input to a stored pattern
  - Also used to solve optimization problems
- Competitive Network: Hamming Network
  - Feedforward layer + Recurrent layer
  - Network that selects the stored pattern with the minimum Hamming distance to the input
  - The stored prototype patterns act as the targets
- Recurrent Layer
  - Layer with feedback
  - The output is fed back as the input
3. Hopfield Network Example
- W = [w11 w12 w13; w21 w22 w23; w31 w32 w33]
- b = [b1 b2 b3]^T
- P1 = [-1 1 -1]^T (banana)
- P2 = [-1 -1 1]^T (pineapple)
- T1 = [-1 1 -1]^T, T2 = [-1 -1 1]^T
4. Hamming Network Example
- W1 = [P1^T; P2^T], b = [R R]^T (R: number of elements in the input vector)
- W2 = [1 -ε; -ε 1], 0 < ε < 1/(s-1)
  - (s: number of neurons in the recurrent layer)
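The Hamming network above can be sketched directly from these weight definitions. This is a minimal NumPy sketch using the slide's two prototypes with R = 3, s = 2, and an assumed ε = 0.5 (any 0 < ε < 1/(s-1) = 1 works); the `poslin` transfer function and the fixed iteration count are implementation choices, not from the slide.

```python
import numpy as np

# Hamming network for the slide's two prototypes (R = 3 inputs, s = 2 neurons).
# The feedforward layer scores each prototype; the recurrent layer's lateral
# inhibition leaves only the closest (minimum Hamming distance) pattern active.
P1 = np.array([-1, 1, -1])   # banana
P2 = np.array([-1, -1, 1])   # pineapple

W1 = np.vstack([P1, P2])     # W1 = [P1^T; P2^T]
b = np.array([3, 3])         # b = [R R]^T with R = 3
eps = 0.5                    # assumed value, 0 < eps < 1/(s-1) = 1
W2 = np.array([[1, -eps], [-eps, 1]])

def poslin(n):
    # Positive-linear transfer function used by the recurrent layer
    return np.maximum(n, 0)

p = np.array([-1, 1, -1])    # input: the banana pattern itself
a = W1 @ p + b               # feedforward layer output: [6, 2]
for _ in range(10):          # iterate the competition until it settles
    a = poslin(W2 @ a)

print(a)                     # only neuron 0 (banana) remains active
```

After the competition settles, the index of the nonzero neuron identifies the stored pattern nearest to the input.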
5. Single Neuron Perceptron - Definition
- The perceptron is a binary classifier.
- Single Neuron Perceptron
6. Perceptron - Algorithm
- Perceptron Learning Rule
  - e = t - o (t: target, o: output, e: error)
  - W ← W + eX = W + (t - o)X
  - b ← b + e = b + (t - o)
- Update the weights and bias repeatedly until the error is zero.
7. Single Neuron Perceptron - Example 1 (AND)
- X = [1 1 -1 -1; 1 -1 1 -1]
- O = [1 -1 -1 -1]
- Simulation Result 1
  - Initial Weight: [0 0], Initial Bias: 0
  - Iteration Number: 3
  - Weight: [2 2], Bias: -2
- Simulation Result 2
  - Initial Weight: [-1.5 -1.5], Initial Bias: -10
  - Iteration Number: 4
  - Weight: [4.5 4.5], Bias: -4
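The perceptron rule and the AND example can be reproduced in a few lines. This is a sketch assuming a symmetric hard-limit transfer function (+1 for net input ≥ 0, else -1) and zero initial weights; with those assumptions it lands on the same final parameters as Simulation Result 1 above.

```python
import numpy as np

# Single-neuron perceptron trained on AND (inputs and targets in {-1, +1})
# with the slide's rule: e = t - o, W <- W + e*x, b <- b + e.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # one pattern per row
T = np.array([1, -1, -1, -1])                       # AND targets

def hardlims(n):
    # Symmetric hard-limit transfer function: +1 if n >= 0, else -1
    return np.where(n >= 0, 1, -1)

W = np.zeros(2)
b = 0.0
for epoch in range(20):              # a few epochs suffice for AND
    errors = 0
    for x, t in zip(X, T):
        o = hardlims(W @ x + b)
        e = t - o
        W, b = W + e * x, b + e      # perceptron update
        errors += int(e != 0)
    if errors == 0:                  # converged: every pattern classified
        break

print(W, b)                          # -> [2. 2.] -2.0, as in Result 1
```

Starting from zero weights, training stops in the third epoch with W = [2 2], b = -2, matching Simulation Result 1.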
8. ADALINE Network - Algorithm
- ADAptive LInear NEuron
- Same structure as the perceptron
- Transfer function: hard limit (perceptron) vs. linear (ADALINE)
- Algorithm (Least Mean Square, LMS)
  - W(k+1) = W(k) + 2αe(k)p^T(k)
  - b(k+1) = b(k) + 2αe(k)
9. ADALINE Network - Example 1 (AND)
- X = [1 1 -1 -1; 1 -1 1 -1]
- O = [1 -1 -1 -1]
- Simulation Result 1
  - Initial Weight: [0 0], Initial Bias: 0
  - α = 0.5, Iteration Number: 2
  - Weight: [0.5 0.5], Bias: -0.5
- Simulation Result 2
  - Initial Weight: [-1.5 -1.5], Initial Bias: -10
  - α = 0.5, Iteration Number: 2
  - Weight: [0.5 0.5], Bias: -0.5
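The LMS rule above can be sketched on the same AND data. The learning rate and epoch count here are illustrative choices (deliberately small for stability), not the slide's settings; per-sample LMS with a small step approaches the least-squares solution W = [0.5 0.5], b = -0.5 that both simulation results report.

```python
import numpy as np

# ADALINE trained with the slide's LMS rule:
#   W(k+1) = W(k) + 2*a*e(k)*p(k)^T,  b(k+1) = b(k) + 2*a*e(k)
# The output is linear (no hard limit), so the error is real-valued.
P = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1], dtype=float)

a = 0.01            # small learning rate chosen for stable convergence
W = np.zeros(2)
b = 0.0
for epoch in range(2000):
    for p, t in zip(P, T):
        e = t - (W @ p + b)   # linear output
        W += 2 * a * e * p
        b += 2 * a * e

print(W, b)   # approaches the least-squares solution [0.5 0.5], -0.5
```

Unlike the perceptron, LMS never drives the error exactly to zero on this data; it settles at the minimum-MSE weights instead.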
10. ADALINE Network - Example 1 (AND)
- Observations from the simulations
  - Effect of the learning rate α
  - Small α: slow but stable convergence
  - Large α: oscillation or divergence
  - Training is run until the error no longer decreases
- Simulation Result 4
  - Initial Weight: [0 0], Initial Bias: 0
  - α = 0.1, Iteration Number: 162
  - Weight: [0.5 0.5], Bias: -0.5
- Simulation Result 3
  - Initial Weight: [0 0], Initial Bias: 0
  - α = 1.2
  - Weight: [-5.2e153 -5.2e153], Bias: 5.2e153 (diverged)
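The divergence in Simulation Result 3 is easy to reproduce. This sketch runs the same LMS loop with α = 1.2; the per-pattern update then multiplies the error by factors of magnitude greater than one, so the weights blow up instead of settling (the exact magnitudes will differ from the slide's run).

```python
import numpy as np

# LMS on the AND data with a learning rate that is far too large (a = 1.2):
# the iteration is unstable and the parameters grow without bound.
P = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1], dtype=float)

a = 1.2
W = np.zeros(2)
b = 0.0
np.seterr(over="ignore", invalid="ignore")  # let the overflow happen quietly
for epoch in range(100):
    for p, t in zip(P, T):
        e = t - (W @ p + b)
        W += 2 * a * e * p
        b += 2 * a * e

print(W, b)   # astronomically large or non-finite values: LMS has diverged
```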
11. ADALINE Network - Example 2 (XOR)
- Linearly Separable
  - Separable by a single line
  - AND Problem
- Not Linearly Separable
  - Cannot be separated by a single line
  - XOR Problem
- Can an ADALINE network solve it?
  - Attempt 1: use multiple neurons
  - Attempt 2: use multiple layers
12. ADALINE Network - Example 2 (XOR)
- Attempt 1. Multiple Neurons
  - Encode the targets with more output bits.
  - Ex) 1, 0 → 00, 01, 10, 11
  - Simulation Result
    - Initial Weight: [1 2; -1 -5], Initial Bias: [3 -2], α = 0.5
    - Iteration Number: 2
    - W = [0 0; 0 0], b = [0 0]
  - Result: all parameters collapse to zero, so the problem is still not solved
- → Attempt 2. Use a Multi-Layer Perceptron
13. Multi-Layer Perceptron - Characteristics
- What is an MLP?
  - Can solve problems that are not linearly separable
  - Composed of several layers: input, hidden, and output layers
14. Multi-Layer Perceptron - Algorithm 1
- BackPropagation
  1. Forward Propagation
  2. Backward Propagation (Sensitivity)
  3. Weight and Bias Update
15. Multi-Layer Perceptron - Variables
- Weight, Bias
  - Initialized to small random values
- Hidden Layer Neurons
  - Number of hidden-layer neurons (HDNEU)
  - Larger HDNEU increases the computation per iteration
- Alpha
  - Step size of the steepest descent method
- Stop Criteria
  - An iterative algorithm needs a termination condition
  - Threshold on the Mean Square Error
16. Multi-Layer Perceptron - Example 2 (XOR)
- HDNEU: 20, α = 0.1, Stop Criteria: 0.005
- Iteration Number: 480
- MSE: 4.85e-3
- Elapsed Time: 113.65 sec
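The three backpropagation steps (forward, backward, update) can be sketched on the XOR problem. This is a minimal sketch, not the slide's exact run: it assumes a 2-8-1 network (HDNEU = 8 rather than 20), a tanh hidden layer with a linear output, ±1 target coding, and illustrative α and iteration counts.

```python
import numpy as np

# Minimal MLP trained on XOR with plain backpropagation (steepest descent).
rng = np.random.default_rng(0)
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)   # XOR with +-1 coding

H = 8                                    # hidden neurons (HDNEU)
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
a = 0.05                                 # learning rate (alpha)

losses = []
for it in range(5000):
    # 1. Forward propagation
    hidden = np.tanh(X @ W1 + b1)
    out = hidden @ W2 + b2
    err = out - T
    losses.append(float(np.mean(err ** 2)))
    # 2. Backward propagation (sensitivities)
    d_out = 2 * err / len(X)
    d_hidden = (d_out @ W2.T) * (1 - hidden ** 2)   # tanh derivative
    # 3. Weight and bias update
    W2 -= a * hidden.T @ d_out; b2 -= a * d_out.sum(axis=0)
    W1 -= a * X.T @ d_hidden;   b1 -= a * d_hidden.sum(axis=0)

print(f"initial MSE {losses[0]:.3f} -> final MSE {losses[-1]:.4f}")
```

The hidden layer gives the network the extra degrees of freedom that the single-layer ADALINE lacked, so the MSE on XOR now drops instead of collapsing to the zero solution.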
17. Multi-Layer Perceptron - Example 3 (Sine Function)
- BP Algorithm
- HDNEU: 20, α = 0.2, Stop Criteria: 0.005
- Iteration Number: 3000
  - Not converged within 3000 iterations
  - Converged at iteration 4710
- MSE: 0.0081
- Elapsed Time: 739 sec
  - (a test run of the trained network took about 7.76 sec)
18. Multi-Layer Perceptron - Algorithm 2
- MOmentum BackPropagation (MOBP)
- Acts as a low-pass filter on the backpropagation updates
- Weight, Bias Update
- Variable
  - Gamma (γ): pole of the low-pass filter
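The momentum update can be sketched in one dimension. This sketch uses Hagan's MOBP form, Δw(k) = γΔw(k-1) - (1-γ)α∇F, on an assumed toy objective f(w) = (w - 3)²; γ plays the role of the low-pass filter pole, smoothing the sequence of gradient steps.

```python
# Momentum (MOBP-style) update on f(w) = (w - 3)^2:
#   dw(k) = gamma * dw(k-1) - (1 - gamma) * alpha * gradient
gamma, alpha = 0.9, 0.05
w, dw = 0.0, 0.0
for _ in range(2000):
    grad = 2 * (w - 3)                        # gradient of (w - 3)^2
    dw = gamma * dw - (1 - gamma) * alpha * grad
    w += dw                                   # filtered weight update

print(w)   # converges to the minimum at w = 3
```

Because each step is a weighted average of past gradients, the trajectory tolerates a larger α than plain steepest descent, which is why the MOBP run on the next slide uses α = 1.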
19. Multi-Layer Perceptron - Example 3 (Sine Function)
- MOBP Algorithm
- HDNEU: 20, α = 1, γ = 0.9, Stop Criteria: 0.005
- Iteration Number: 625
- MSE: 0.005
- Elapsed Time: 150 sec
20. Multi-Layer Perceptron - Algorithm 3
- Conjugate Gradient BackPropagation (CGBP)
  - Applies the Conjugate Gradient Method to the error surface
  - Converges faster than plain backpropagation
- Variable
  - No α or γ to tune
  - HDNEU, Stop Criteria
- Algorithm
  - Step 1. Choose the initial search direction
  - Step 2. Line search along the current direction
  - Step 3. Compute the next (conjugate) search direction
  - Step 4. If not converged, continue from Step 2
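Steps 1-4 can be sketched on a quadratic, where the line search is exact and CG terminates in at most n iterations. This is a sketch of linear conjugate gradient with the Fletcher-Reeves coefficient on an assumed 2-D textbook quadratic, not the CGBP run from the slides (on a non-quadratic MLP error surface the line search is done numerically and the directions are restarted periodically).

```python
import numpy as np

# Conjugate gradient minimizing f(x) = 0.5 x^T A x - c^T x,
# i.e. solving A x = c for symmetric positive-definite A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])

x = np.zeros(2)
r = c - A @ x                        # Step 1: residual = -gradient
d = r.copy()                         #         initial search direction
for _ in range(2):                   # at most n = 2 iterations
    alpha = (r @ r) / (d @ A @ d)    # Step 2: exact line search
    x = x + alpha * d
    r_new = r - alpha * (A @ d)
    beta = (r_new @ r_new) / (r @ r) # Step 3: Fletcher-Reeves coefficient
    d = r_new + beta * d             #         next conjugate direction
    r = r_new                        # Step 4: repeat until converged

print(x)                             # minimizer of the quadratic
```

Because successive directions are A-conjugate, no step undoes the progress of earlier ones, which is the source of the large iteration-count savings on the next slide.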
21. Multi-Layer Perceptron - Example 3 (Sine Function)
- CGBP Algorithm
- HDNEU: 20, Stop Criteria: 0.005
- Iteration Number: 69
- MSE: 0.0046
- Elapsed Time: 22 sec
22. Multi-Layer Perceptron - Example 3 (Sine Function)
- HDNEU: 20, Stop Criteria: 0.0005
- Iteration Number: 125
- MSE: 0.0005
- Elapsed Time: 37 sec
23. Multi-Layer Perceptron - Local Minima
- Why does only a global minimum exist for LMS?
  - The LMS error surface is quadratic, so it has a single global minimum
- Why do local minima occur?
  - The BP algorithm cannot guarantee convergence to the global minimum
- Simulation trapped in a local minimum
  - HDNEU: 10, Stop Criteria: 0.001
  - Iteration Number: 3000, MSE: 0.2461
  - Elapsed Time: 900 sec
24. Multi-Layer Perceptron - Over-Parameterization
- Over-Parameterization
  - Using more parameters (weights and biases) than the problem needs: the network memorizes the training data instead of learning the underlying function
- Generalization Performance
  - The ability to perform well on data not seen during training
25. Multi-Layer Perceptron - Scaling
- Scale each attribute of the input data into the range [0, 1]
  - Without scaling, attributes with large values dominate the training
  - Scaling is needed in other machine-learning methods as well
  - Ex) Nearest Neighbor also requires normalized inputs
- Scale the target values as well
  - Target scaling matches the targets to the range of the output transfer function
- Ex) Min-max scaling, attribute by attribute
  - Origin Data: 34780 31252 39317 / 7 1 2 / 34 32 33 / 20 23 24
  - Scaled Data: 0.4374 0 1 / 1 0 0.1667 / 1 0 0.5 / 0 0.25 1
26. Multi-Layer Perceptron - Scaling
- Simulation with and without scaling
- HDNEU: 20, Stop Criteria: 0.01, Max Iteration Number: 1000
- Case 1. No Scaling
  - Iteration Number: 1000, Train Set MSE: 11,124,663, Test Set MSE: 20,425,686
- Case 2. Scaling
  - Iteration Number: 1000, Train Set MSE: 11,124,663, Test Set MSE: 20,425,686
- Case 3. Target Scaling
  - Iteration Number: 6, Train Set MSE: 0.008628, Test Set MSE: 0.0562
27. Multi-Layer Perceptron - Overfitting
- Overfitting
  - If the Stop Criteria is set too tight, the weights and biases are fitted too closely to the training data, so the error on new data grows even while the training error keeps shrinking.
- Issue 1. How tight should the Stop Criteria be?
- Issue 2. How many hidden neurons (HDNEU) should be used?
- Stop Criteria: 0.01 / 0.001
- Test Set MSE: 0.0562 / 0.1697
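The same train/test split effect can be sketched without an MLP. This is an illustrative example, not the slide's simulation: it fits polynomials of low and high degree (standing in for loose vs. tight stopping and few vs. many parameters) to noisy samples of a line; the over-parameterized model drives the training error toward zero while doing worse on held-out points.

```python
import numpy as np

# Overfitting sketch: degree-1 vs degree-9 polynomial fits to noisy data.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, 10)   # noisy samples of y = x
x_test = np.linspace(0.05, 0.95, 10)         # held-out points
y_test = x_test                              # noise-free ground truth

def fit_mse(deg):
    # Fit a degree-`deg` polynomial and return (train MSE, test MSE)
    coef = np.polyfit(x_train, y_train, deg)
    train = float(np.mean((np.polyval(coef, x_train) - y_train) ** 2))
    test = float(np.mean((np.polyval(coef, x_test) - y_test) ** 2))
    return train, test

tr1, te1 = fit_mse(1)   # adequate model
tr9, te9 = fit_mse(9)   # over-parameterized model: interpolates the noise
print(tr1, te1, tr9, te9)
```

The degree-9 fit passes through every noisy training point (near-zero training error) but oscillates between them, mirroring the slide's jump in test-set MSE when the stop criterion is tightened from 0.01 to 0.001.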
28. References
- Machine Learning, Tom Mitchell, McGraw Hill.
- Introduction to Machine Learning, Ethem Alpaydin, MIT Press.
- Neural Network Design, Martin T. Hagan, Howard B. Demuth, Mark Beale, PWS Publishing Company.
- Neural Networks and Learning Machines, Simon Haykin, Prentice Hall.