Title: PERCEPTRON
1 PERCEPTRON
2 Chapter 3: The Basic Neuron
- The structure of the brain can be viewed as a highly interconnected network of relatively simple processing elements, or neurons.
- The brain has at least 10^10 neurons, each connected to around 10^4 others.
- We are not attempting to build computer brains: our models are extremely simplified versions of natural neural systems. Rather, we are aiming to discover the properties of such models.
- The idea behind neural computing is that, by modeling the major features of the brain, we can produce computers that exhibit many of the brain's useful properties.
- Whereas the brain has billions of neurons, we are concerned here with maybe a few hundred neurons at most, connected to a few thousand input lines.
- The aim of a model is to produce a simplified version of a system.
3 Biological Neural Networks
- A neuron has three components: its dendrites, soma, and axon (Fig. 1.3).
- Dendrites receive signals from other neurons.
- The signals are electric impulses that are transmitted across a synaptic gap.
- The soma, or cell body, sums the incoming signals.
- When sufficient input is received, the cell fires, that is, it transmits a signal to other cells.
- However, the frequency of firing varies, and the transmitted signal can be of either greater or lesser magnitude.
4 The human brain
The human brain contains about 10 billion nerve
cells, or neurons. On average, each neuron is
connected to other neurons through about 10 000
synapses. (The actual figures vary greatly,
depending on the local neuroanatomy.) The brain's
network of neurons forms a massively parallel
information processing system. This contrasts
with conventional computers, in which a single
processor executes a single series of
instructions.
http://www.idsia.ch/NNcourse/brain.html
5 The brain has quite remarkable capabilities:
- Its performance tends to degrade gracefully under partial damage. In contrast, most programs and engineered systems are brittle: if you remove some arbitrary parts, very likely the whole will cease to function.
- It can learn (reorganize itself) from experience. This means that partial recovery from damage is possible if healthy units can learn to take over the functions previously carried out by the damaged areas.
- It performs massively parallel computations extremely efficiently. For example, complex visual perception occurs within less than 100 ms, that is, 10 processing steps!
- It supports our intelligence and self-awareness. (Nobody knows yet how this occurs.)
6 As a discipline of Artificial Intelligence,
Neural Networks attempt to bring computers a
little closer to the brain's capabilities by
imitating certain aspects of information
processing in the brain, in a highly simplified
way.
7 MODELLING THE SINGLE NEURON
- The basic function of a biological neuron is to add up its inputs, and to produce an output if this sum is greater than some value, known as the threshold value.
- The inputs to the neuron arrive along the dendrites, which are connected to the outputs of other neurons by specialized junctions called synapses.
- Some junctions pass a large signal across, whilst others are very poor.
- The cell body receives all the inputs, and fires if the total input exceeds the threshold.
- Our model of the neuron must capture these important features:
- The output from a neuron is either on or off.
- The output depends only on the inputs. A certain number must be on (the threshold value) at any one time in order to make the neuron fire.
- The synapses can be modeled by having a multiplicative factor (a weight) on each input, as in the sketch below.
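A minimal sketch of this model neuron in Python; the inputs, weights, and threshold value used here are illustrative, not taken from the slides:

```python
def neuron_output(inputs, weights, threshold):
    """Model neuron: multiply each input by its synaptic weight,
    sum, and fire (output 1) only if the sum exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Two inputs with synaptic strengths 0.5 and 0.7, threshold 1.0
print(neuron_output([1, 1], [0.5, 0.7], 1.0))  # 1: 1.2 > 1.0, neuron fires
print(neuron_output([1, 0], [0.5, 0.7], 1.0))  # 0: 0.5 <= 1.0, stays off
```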
8-9 MODELLING THE SINGLE NEURON (figures)
10 BRAIN ANALOGY AND NN (figure: biological neuron)
11 LEARNING IN SIMPLE NEURONS
- We need a mechanism for achieving learning in our model neuron.
- We connect these neurons together and then train them to do a useful task.
- Example: a classification problem (Figure 3.5). Given two groups, one of several differently written As and the other of Bs, we may want our neuron to output a 1 when an A is presented and a 0 when it sees a B.
- The guiding principle is to allow the neuron to learn from its mistakes.
12 LEARNING IN SIMPLE NEURONS
- If it produces an incorrect output, we want to reduce the chances of that happening again; if it comes up with the correct output, we need do nothing.
- If the neuron produces a 0 when we show it an A, we increase the weighted sum so that next time it will exceed the threshold and so produce the correct output, 1.
- If the neuron produces a 1 when we show it a B, we decrease the weighted sum so that next time it will be less than the threshold and so produce the correct output, 0.
13 Learning strategy
- Increase the weights on the active inputs when we want the output to be active,
- decrease them when we want the output to be inactive.
- To achieve this, add the input values to the weights when we want the output to be on, and subtract the input values from the weights when we want the output to be off.
- This defines our learning rule.
- This learning rule is a variant on that proposed in 1949 by Donald Hebb, and is therefore called Hebbian learning.
- Since the learning is guided by knowing what we want to achieve, it is known as supervised learning.
14 Learning strategy
- Our learning paradigm can be summarized as follows (a sketch follows this list):
- Set the weights and thresholds randomly.
- Present an input.
- Calculate the actual output by thresholding the weighted sum of the inputs (0 or 1).
- Alter the weights to reinforce correct decisions and discourage incorrect ones, i.e., reduce the error.
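A sketch of this paradigm in Python, assuming binary inputs, 0/1 targets, and a fixed threshold; the OR data and all the names here are illustrative:

```python
import random

def train_step(weights, threshold, inputs, target):
    """One presentation: threshold the weighted sum, then apply the
    rule of slide 13 - add the inputs to the weights if the output
    should have been on, subtract them if it should have been off."""
    output = 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0
    if output == target:
        return weights                      # correct decision: do nothing
    sign = 1 if target == 1 else -1         # wanted on -> add, wanted off -> subtract
    return [w + sign * x for w, x in zip(weights, inputs)]

# Set the weights randomly, then repeatedly present inputs and reinforce.
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR function
for _ in range(10):                         # a few passes over the data
    for inputs, target in data:
        weights = train_step(weights, 0.5, inputs, target)
print(weights)  # both weights end up above the 0.5 threshold
```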
15 The Perceptron
- The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter.
- The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative, as in the equation below.
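In symbols (with x_i the inputs, w_i the weights, and b the bias, as used in the algorithm later), the linear combiner and hard limiter compute:

```latex
y_{\text{in}} = b + \sum_{i=1}^{n} x_i w_i ,
\qquad
y =
\begin{cases}
+1 & \text{if } y_{\text{in}} > 0 \\
-1 & \text{if } y_{\text{in}} < 0
\end{cases}
```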
16 The Perceptron (figure)
17 The Perceptron
- Negative and positive response (figure)
18 The Algorithm
- Step 0. Initialize all weights and bias:
  wi = 0 (i = 1 to n), b = 0.
  Set the learning rate α (0 < α ≤ 1) and the threshold θ.
- Step 1. While the stopping condition is false, do Steps 2-6.
- Step 2. For each training pair s:t, do Steps 3-5.
- Step 3. Set activations for the input units: xi = si.
- Step 4. Compute the response of the output unit:
  NET: y_in = b + Σ xi wi
  OUT: y = 1 if y_in > θ; y = 0 if −θ ≤ y_in ≤ θ; y = −1 if y_in < −θ.
- Step 5. Update weights and bias if an error occurred for this pattern:
  If y ≠ t: wi(new) = wi(old) + α t xi (i = 1 to n); b(new) = b(old) + α t.
  Else: wi(new) = wi(old); b(new) = b(old).
- Step 6. Test the stopping condition:
  If no weights changed in Step 2, stop; else, continue.
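A Python sketch of Steps 0-6, using the three-valued activation with threshold θ; the function name and the defaults (chosen to match the AND example that follows) are illustrative:

```python
def perceptron_train(samples, n, alpha=1.0, theta=0.2, max_epochs=100):
    """Steps 0-6 of the perceptron algorithm. `samples` is a list of
    (s, t) pairs: s an n-element input vector, t a bipolar target."""
    w = [0.0] * n                            # Step 0: zero weights...
    b = 0.0                                  # ...and zero bias
    for _ in range(max_epochs):              # Step 1: loop until stable
        changed = False
        for s, t in samples:                 # Step 2: each training pair
            x = list(s)                      # Step 3: input activations
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))           # Step 4: NET
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)   # OUT
            if y != t:                       # Step 5: update on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                      # Step 6: no change -> stop
            break
    return w, b
```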
22 Perceptron net for the AND function: binary inputs and bipolar targets, 1st Epoch (table)
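To reproduce the epoch-by-epoch table, one could run the perceptron_train sketch above on the AND data (binary inputs, bipolar targets, α = 1, θ = 0.2):

```python
and_data = [([1, 1], 1), ([1, 0], -1), ([0, 1], -1), ([0, 0], -1)]
w, b = perceptron_train(and_data, n=2)
print(w, b)  # [2.0, 3.0] -4.0 after 10 epochs, matching the final boundary slide
```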
23 Separating lines for 1st training input
(figure: separating lines in the (x1, x2) plane)
Basic formula for drawing the graph: b + Σ xi wi > θ
1 + x1(1) + x2(1) = 0.2 and 1 + x1(1) + x2(1) = −0.2
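After the first training input (x = (1, 1), t = 1) the weights are w1 = w2 = 1 with b = 1, so setting the net input equal to ±θ gives the two lines:

```latex
1 + x_1 + x_2 = 0.2 \;\Rightarrow\; x_2 = -x_1 - 0.8 ,
\qquad
1 + x_1 + x_2 = -0.2 \;\Rightarrow\; x_2 = -x_1 - 1.2
```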
24 Separating lines for 2nd training input
(figure: separating lines in the (x1, x2) plane)
0 + x1(0) + x2(1) = 0.2 and 0 + x1(0) + x2(1) = −0.2
Basic formula for drawing the graph: b + Σ xi wi > θ
25 Separating lines for 3rd and 4th training inputs
- For the 3rd input, the derived weights are −ve (negative).
- For the 4th input, there are no weight changes.
- The decision boundary is still not correct for the 1st input.
- We are not finished training.
26 Perceptron net for the AND function: binary inputs and bipolar targets, 2nd Epoch (table)
27 Separating lines for 1st training input, 2nd epoch
(figure: separating lines in the (x1, x2) plane)
Basic formula for drawing the graph: b + Σ xi wi > θ
0 + x1(1) + x2(1) = 0.2 and 0 + x1(1) + x2(1) = −0.2
28 Separating lines for 2nd training input, 2nd epoch
(figure: separating lines in the (x1, x2) plane)
−1 + x1(0) + x2(1) = 0.2 and −1 + x1(0) + x2(1) = −0.2
Basic formula for drawing the graph: b + Σ xi wi > θ
29 Perceptron net for the AND function: binary inputs and bipolar targets, 3rd Epoch (table)
30 Perceptron net for the AND function: binary inputs and bipolar targets, 10th Epoch (table)
31 Separating lines for the final decision boundaries
(figure: separating lines in the (x1, x2) plane)
−4 + 2x1 + 3x2 > 0.2 and −4 + 2x1 + 3x2 < −0.2
Basic formula for drawing the graph: b + Σ xi wi > θ
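A quick check that the final weights w = (2, 3), b = −4 classify all four AND patterns correctly with θ = 0.2:

```python
w, b, theta = [2, 3], -4, 0.2
for x in ([1, 1], [1, 0], [0, 1], [0, 0]):
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    print(x, 1 if y_in > theta else (-1 if y_in < -theta else 0))
# [1, 1] -> 1; every other pattern -> -1, matching the AND targets
```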
32 Perceptron net for the AND function: bipolar inputs and bipolar targets, 1st and 2nd epochs
α = 1, θ = 0.2, wi = 0, b = 0
(table: weights and weight changes)
33 LIMITATIONS OF PERCEPTRONS
- The perceptron tries to find the straight line that separates the classes.
- It can separate classes that lie on either side of a straight line easily enough,
- but there are many situations where the division between classes is much more complex. Consider the case of the exclusive-or (XOR) problem.
34 LIMITATIONS OF PERCEPTRONS
- The XOR logic function has two inputs and one output.
- It produces the output shown in Table 3.1: 1 when exactly one input is 1, and 0 when the inputs are equal.
- Such patterns are known as linearly inseparable, since no straight line can divide them up.
- The single-layer perceptron has shown great success for such a simple model, but it cannot learn linearly inseparable patterns like XOR, as the sketch below shows.
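Running the perceptron_train sketch from above on XOR data (bipolar targets) illustrates the point: the weights never settle, because no straight line separates the classes, and training only stops when the epoch cap is hit:

```python
xor_data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], -1)]
w, b = perceptron_train(xor_data, n=2, max_epochs=100)
# Whatever w and b come back, at least one XOR pattern is misclassified.
```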
35 Perceptron (figure)
36 Perceptron learning applet
- http://diwww.epfl.ch/mantra/tutorial/english/perceptron/html/