Title: Neural Nets
Neural Nets
Symbolic and sub-symbolic artificial intelligence
- The various conventional knowledge representation
techniques that have been mentioned so far can be
labelled symbolic artificial intelligence.
- The elements in the knowledge representation (production rules, frames, semantic net nodes and arcs, objects, or whatever) act as symbols, with each element corresponding to a similar element in the real-world knowledge.
- Manipulations of these elements correspond to manipulations of elements of real-world knowledge.
- An alternative set of approaches, which has recently become popular, is known as sub-symbolic AI.
- Here, the real-world knowledge is dispersed among the various elements of the representation.
- Only by operating on the representation as a whole can you retrieve or change the knowledge it contains.
- The two main branches of sub-symbolic AI are
- neural nets (also known as neural networks, or ANNs, standing for artificial neural nets), and
- genetic algorithms.
- The term connectionism is used to mean roughly
the same as the study of neural nets.
The biological inspiration for artificial neural nets
- Neural networks are an attempt to mimic the
reasoning, or information processing, to be found
in the nervous tissue of humans and other
animals.
- Such nervous tissue consists of large numbers
(perhaps 100 billion in a typical human brain) of
neurons (nerve cells), connected together by
fibres called axons and dendrites to form
networks, which process nerve impulses.
[Figure: one sort of neuron - a pyramidal cell - showing the cell body, axon, synapses, and apical and basal dendrites, together with a cluster of neurons showing how the axon from one connects to the dendrites of others.]
- The neuron acts as a signal processing device
- the dendrites act as inputs,
- the axon acts as an output,
- a connection between one of these fibres and an
adjacent cell - known as a synapse - may be
inhibitory or excitatory, i.e. may tend to cause
the next cell to 'fire', or tend to stop it
'firing'.
- Obviously, neurons are extremely small, and made of living tissue rather than the metals and other inorganic substances that make up electronic circuits.
- The signals that pass along nerve fibres are electrochemical in nature, unlike the electrical signals in a computer.
- The synapses which connect one neuron to another use chemicals (neurotransmitters) to transmit signals.
- Drugs which affect the brain typically do so by
altering the chemistry of the synapses, making
the synapses for a whole group of neurons either
more efficient or less efficient.
- As a result, there are some important differences between neurons and the individual processing elements in computers (transistor-based switches).
- Neurons are far slower than artificial neurons (neurodes), but far more efficient in energy terms.
- Brain tissue can do what it does (think, remember, perceive, control bodies etc.) partly because of the electrochemical signals that it processes, and partly because of the chemical messages.
- Artificial neural nets imitate the first of these, but not the second.
- The neurons in a brain work in parallel to perform their processing (i.e., the individual neurons are operating simultaneously). This is quite unlike a conventional computer, where the programming steps are performed one after the other.
- The brains of all animals of any complexity consist of a number of these networks of neurons, each network specialised for a particular task.
- There are many different types of neuron (over 100) in the human brain.
Artificial neural nets
- Note that neural nets are inspired by the organisation of brain tissue, but the resemblance is not necessarily very close.
- Claims that a particular type of artificial neural net has been shown to demonstrate some property, and that this 'explains' the working of the human brain, should be treated with caution.
- Note that a neural net is ideally implemented on a parallel computer (e.g. a connection machine).
- However, since these are not widely used, most neural net research, and most commercial neural net packages, simulate parallel processing on a conventional computer.
Neurodes
- Neural nets are constructed out of artificial neurons (neurodes). The characteristics of these are
- each has one or more inputs (typically several).
- Each input will have a weight, which measures how effective that input is at firing the neurode as a whole. These weights may be positive (i.e. increasing the chance that the neurode will fire) or negative (i.e. decreasing the chance that the neurode will fire).
- These weights may change as the neural net operates.
- Inputs may come from the outside environment, or from other neurodes.
- Each has one output (but this output may branch, and go to several locations).
- An output may go to the outside environment, or to another neurode.
- More properties of neurodes
- each is characterised by a summation function and
a transformation function.
- The summation function is a technique for finding the weighted sum of all the inputs.
- These vary in complexity, according to the type of neural net.
- The simplest approach is to multiply each input value by its weight and add up all these figures.
- The transformation function is a technique for determining the output of the neurode, given the combined inputs.
- Again, these vary in complexity, according to the type of neural net.
- The simplest approach is to have a particular threshold value, but the sigmoid function, to be discussed later, is more common.
- "An artificial neuron is a unit that accepts a bunch of numbers, and learns to respond by producing a number of its own." - Aleksander and Morton, 1993.
[Figure: the functions of a typical neurode. Input links carry activations aj, each weighted by Wj,i; the summation function combines them, and the transformation function produces the activation ai, which is also the output sent along the output links.]
- Different sorts of transformation function are available, and are favoured for different designs of ANN.
- The three most common choices are
- the step function,
- the sign function, and
- the sigmoid function.
[Figure: graphs of the step function (output jumps from 0 to 1), the sign function (output jumps from -1 to 1), and the sigmoid function (smooth output between 0 and 1), each plotting the activation ai against the input inpi.]
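These pieces can be put together in a short sketch of a single neurode. This is illustrative code, not from any library; the function names, the inputs, and the weights are all invented for the example.

```python
import math

def summation(inputs, weights):
    """Simplest summation function: multiply each input by its weight and add up."""
    return sum(x * w for x, w in zip(inputs, weights))

def step(total, threshold=0.0):
    """Step function: fire (1) only if the combined input exceeds the threshold."""
    return 1 if total > threshold else 0

def sign(total):
    """Sign function: +1 if the combined input is positive, otherwise -1."""
    return 1 if total > 0 else -1

def sigmoid(total):
    """Sigmoid function: a smooth output between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-total))

def neurode(inputs, weights, transform):
    """Activation ai = transformation(summation of weighted inputs)."""
    return transform(summation(inputs, weights))

# Three inputs with a mix of positive and negative weights, as on the earlier slides.
print(neurode([1, 0, 1], [0.5, -0.2, 0.3], step))   # 1: the weighted sum 0.8 is above 0
```

Swapping `step` for `sign` or `sigmoid` changes only the transformation, which is exactly the design choice the three graphs above illustrate.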
- As far as networks are concerned, they may or may not be organised into layers.
- Usually, they are.
- Networks organised into layers may be subdivided
into those that simply have an input layer and an
output layer, and those that have one or more
intermediate layers, known as hidden layers.
[Diagram: a neural net with one input layer and one output layer (both containing 6 neurodes).]

[Diagram: a neural net with one input layer, one hidden layer, and one output layer (each containing 6 neurodes).]
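A layered net like the second one can be exercised with a hand-rolled forward pass. This is a hypothetical sketch: the layer sizes are shrunk for brevity, every weight is arbitrary, and the sigmoid transformation is assumed.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_rows):
    """Each row of weights feeds one neurode in the next layer."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

# 3 input neurodes -> 2 hidden neurodes -> 1 output neurode.
hidden_weights = [[0.2, -0.5,  0.4],
                  [0.7,  0.1, -0.3]]
output_weights = [[0.6, -0.8]]

inputs = [1.0, 0.5, -1.0]
hidden = layer(inputs, hidden_weights)
output = layer(hidden, output_weights)
print(output)   # a single activation between 0 and 1
```

The hidden layer is computed first, and its activations become the inputs to the output layer; adding more hidden layers just repeats the same step.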
How networks are used
- Each input in a network corresponds to a single attribute of a pattern or collection of data.
- The data must be numerical; qualitative aspects of the data, or graphical data, must be pre-processed to convert them into numbers before the network can deal with them.
- Thus, an image is converted into pixels (a number could be converted into a 6x8 dot matrix, and provide the input to 48 input neurodes).
- Similarly, a fragment of sound would have to be digitised, and a set of commercial decision criteria would have to be coded before the net could deal with them.
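The dot-matrix idea can be sketched in a few lines. The glyph below is invented purely for illustration; the point is that 6 rows of 8 pixels flatten into the 48 numeric inputs mentioned above.

```python
# An invented 6x8 dot-matrix glyph; '#' marks a lit pixel.
glyph = [
    ".####...",
    "#....#..",
    "#....#..",
    "######..",
    "#....#..",
    "#....#..",
]

# Pre-processing: flatten the picture into one number per pixel.
inputs = [1 if pixel == "#" else 0 for row in glyph for pixel in row]
print(len(inputs))   # 48 values, one per input neurode
```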
- Similarly, values must be assigned to the outputs
from the output nodes before they can be treated
as 'the solution' to whatever problem the network
was given.
- Neural nets are not programmed in the conventional way (we do not have techniques for 'hand-programming' a net).
- Instead, they go through a learning phase, during which the weights are modified; after this, the weights are clamped, and the system is ready to perform.
- Learning involves
- entering examples of data as the input,
- using some appropriate algorithm to modify the weights so that the output changes in the desired direction,
- repeating this until the desired output is achieved.
Example of supervised learning in a simple neural net
- Suppose we have a net consisting of a single neurode.
- The summation function is the standard version.
- The transformation function is a step function.
- There are two inputs and one output.
- We wish to teach this net the logical INCLUSIVE OR function, i.e.
- if the values of both the inputs are 0, the output should be 0;
- if the value of either or both the inputs is 1, the output should be 1.
- We will represent the values of the two inputs as
X1 and X2, the desired output as Z, the weights
on the two inputs as W1 and W2, the actual output
as Y.
[Diagram: inputs X1 and X2, with weights W1 and W2, feed the single neurode; Y is the actual output and Z the desired output.]
- The learning process involves repeatedly applying the four possible patterns of input:

  X1  X2  Z
   0   0  0
   0   1  1
   1   0  1
   1   1  1
- The two weights W1 and W2 are initially set to random values. Each time a set of inputs is applied, a value D is calculated as

  D = Z - Y

  (the difference between what you got and what you wanted), and the weights are adjusted.
- The new weight Vi for a particular input i is given by

  Vi = Wi + a * D * Xi

  where a is a parameter which determines how much the weights are allowed to fluctuate in a particular cycle, and hence how quickly learning takes place.
- An actual learning sequence might be as follows.
a = 0.2, threshold = 0.5

Iteration  X1  X2  Z   W1   W2   Y    D    V1   V2
    1       0   0  0  0.1  0.3   0   0.0  0.1  0.3
            0   1  1  0.1  0.3   0   1.0  0.1  0.5
            1   0  1  0.1  0.5   0   1.0  0.3  0.5
            1   1  1  0.3  0.5   1   0.0  0.3  0.5
    2       0   0  0  0.3  0.5   0   0.0  0.3  0.5
            0   1  1  0.3  0.5   0   1.0  0.3  0.7
            1   0  1  0.3  0.7   0   1.0  0.5  0.7
            1   1  1  0.5  0.7   1   0.0  0.5  0.7
    3       0   0  0  0.5  0.7   0   0.0  0.5  0.7
            0   1  1  0.5  0.7   1   0.0  0.5  0.7
            1   0  1  0.5  0.7   0   1.0  0.7  0.7
            1   1  1  0.7  0.7   1   0.0  0.7  0.7
    4       0   0  0  0.7  0.7   0   0.0  0.7  0.7
            0   1  1  0.7  0.7   1   0.0  0.7  0.7
            1   0  1  0.7  0.7   1   0.0  0.7  0.7
            1   1  1  0.7  0.7   1   0.0  0.7  0.7

- No errors were detected for an entire iteration, so learning halts.
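The learning sequence above can be reproduced in code. This is a minimal sketch of the same single-neurode rule; intermediate weights may differ slightly from the table because of floating-point rounding, but the net still converges to INCLUSIVE OR.

```python
def step(total, threshold=0.5):
    """Fire (1) only if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

def train_or(w1=0.1, w2=0.3, a=0.2, max_iterations=100):
    """Train the single neurode on the four OR patterns until an error-free pass."""
    patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(max_iterations):
        errors = 0
        for (x1, x2), z in patterns:
            y = step(w1 * x1 + w2 * x2)   # summation + step transformation
            d = z - y                     # D = Z - Y
            if d != 0:
                errors += 1
                w1 += a * d * x1          # Vi = Wi + a * D * Xi
                w2 += a * d * x2
        if errors == 0:                   # no errors for an entire iteration
            break
    return w1, w2

w1, w2 = train_or()
print(w1, w2)   # trained weights, as at the end of the table
```

After training, `step(w1 * x1 + w2 * x2)` agrees with the desired output Z for all four input patterns.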
Human-like features of neural nets
- Distributed representation - 'memories' are stored as a pattern of activation, distributed over a set of elements.
- 'Memories' can be superimposed: different memories are represented by different patterns over the same elements.
- Distributed asynchronous control - each element
makes its own decisions, and these add up to a
global solution.
- Content-addressable memory - a number of patterns can be stored in a network and, to retrieve a pattern, we need only specify a portion of it; the network automatically finds the best match.
- Fault tolerance - if a few processing elements
fail, the network will still function correctly.
- Graceful degradation - failure of the net is
progressive, rather than catastrophic.
- Collectively, the network is a little like a committee, coming to a joint decision on a particular question.
- And like a committee, the absence of one or more members/neurodes does not necessarily prevent the committee/network from functioning (or even from coming to the same decisions).
- The failure of a small proportion of its neurodes, or its links, does not cause a catastrophic failure, merely a reduction in performance.
- Compare this with a conventional program, where the loss of a vital line of code would cause such a failure.
- Automatic generalisation - similar or related
facts are automatically stored as related
patterns of activation.
- 'Fuzzy' mapping - similarities (rather than
strict inclusion or exclusion) are represented in
connectionist models. This enables human-like
interpretation of vague and unclear concepts.
Strengths and weaknesses of neural nets
- Connectionism seems particularly promising for
- learning in poorly structured and unsupervised domains,
- low-level tasks such as vision, speech recognition, and handwriting recognition.
- Connectionism seems rather unpromising for
- highly-structured domains such as chess playing, theorem proving, maths, and planning,
- domains where it is desirable to understand how the system is solving the problem, such as expert systems and safety-critical systems. Neural nets are essentially inscrutable.
- This suggests that, as a knowledge acquisition tool, connectionism would be useful for
- pattern recognition,
- learning,
- classification,
- generalisation,
- abstraction,
- interpretation of incomplete data.
- As a basis for decision support systems, it would be useful for
- optimisation and resource allocation.