Title: Presentation on Neural Networks.
Basics Of Neural Networks
- A neural network refers to a connectionist model that simulates the biophysical information processing occurring in the nervous system.
- It can also be defined as an interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron.
- It is also a cognitive information-processing structure based on models of brain function. In a more formal engineering context, it is a highly parallel dynamical system with the topology of a directed graph that can carry out information processing by means of its state response to continuous or initial input.
Benefits Of Neural Networks
- Non-linearity
- Input-output Mapping
- Adaptivity
- Evidential Response
- Contextual Information
- Fault Tolerance
- RESEARCH THOUGHT
- 1. Neural networks are highly parallel structures, which is consistent with the fact that the human brain functions in the same way.
- 2. Beyond being merely parallel, this parallelism is priority based.
- 3. The parallel processes also interact with one another; in the end one process may dominate while others vanish or survive with much lower priority.
Facts
- 1. Knowledge is acquired by the network from its environment through a learning process.
- 2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.
LEARNING IN NEURAL NETWORKS
- LEARNING MAY BE DEFINED AS:
- 1. The ability to learn from the environment and to improve performance through that learning.
- 2. Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. The sequence of events is as follows:
- a. The neural network is stimulated by the environment.
- b. The neural network undergoes changes as a result of this stimulation.
- c. The neural network responds in a new way to the environment.
Types Of Learning
- a. Error correction learning (see the code sketch below)
- b. Memory based learning
- c. Competitive learning
- d. Boltzmann learning
- RESEARCH THOUGHT
- 1. The learning process is normally iterative: the neural network consistently learns from its environment.
- 2. Neural networks must try for self-eradication of error by heuristically moving towards the goal state.
- 3. This means there should be a combination of heuristic knowledge and previous data to obtain the final result.
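- As a minimal illustration of error-correction learning (item a above), the following Python sketch applies the Widrow-Hoff style update Δw = η·e·x; the learning rate, input vector and desired response are invented for illustration.

    import numpy as np

    eta = 0.1        # learning rate (assumed)
    w = np.zeros(4)  # free parameters adapted by learning

    def error_correction_step(x, d):
        """One error-correction update: e = d - y, then w <- w + eta * e * x."""
        global w
        y = w @ x    # current network output for input x
        e = d - y    # error signal against the desired response d
        w += eta * e * x
        return e

    # Invented input/desired-response pair, purely for illustration.
    x, d = np.array([1.0, 0.5, -0.2, 0.3]), 0.8
    for _ in range(50):
        error_correction_step(x, d)
    print(w @ x)     # output has converged toward d = 0.8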
MEMORY BASED LEARNING
- Past experiences are explicitly stored in a large memory of correctly classified input-output examples.
- xi is the input vector
- di denotes the desired response
- c1 and c2 are the two classes
- Training data is retrieved and analyzed by assigning it to the classes c1 and c2.
- RESEARCH THOUGHT
- 1. Since memory-based learning is only a classification process, it is inaccurate because it does not account for long-term and short-term memory.
- 2. It should be a layered process in which information is filtered from the forward layers to the backward layers.
- 3. The forward layers are short-term memory layers, whereas the back layers are long-term memory layers.
- 4. The neural network must operate by considering all the layers, giving the short-term memory layers higher priority than the long-term memory layers.
Memory Based Learning (Working Example)
- In memory-based learning there is a classification of input-output examples (Xi, Di), i = 1 to N, where Xi is the input vector and Di is the desired response.
- A working example of memory-based learning is car movement. We classify all cases into two parts: 1 (car speeds up) and 0 (car slows down). The input signals are X1 -> road conditions, X2 -> traffic signal, X3 -> fuel efficiency, X4 -> road ascent.
- Now, when a set of inputs is applied to X1, X2, X3, X4, the response is either speed up or slow down. As per memory-based learning, all these cases can be stored during learning, and new cases can then be classified as either speeding up or slowing down depending on the input conditions, as sketched below.
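- A minimal nearest-neighbor sketch of this example in Python follows; the stored cases, feature encodings and the choice of k are invented for illustration.

    import numpy as np

    # Stored experiences: (X1 road, X2 signal, X3 fuel, X4 ascent) -> D
    # D = 1 means "speed up", D = 0 means "slow down". Values are invented.
    memory_X = np.array([
        [0.9, 1.0, 0.8, 0.1],   # good road, green signal -> speed up
        [0.8, 0.0, 0.7, 0.2],   # red signal -> slow down
        [0.3, 1.0, 0.6, 0.9],   # steep ascent -> slow down
        [0.9, 1.0, 0.9, 0.0],   # ideal conditions -> speed up
    ])
    memory_D = np.array([1, 0, 0, 1])

    def classify(x_new, k=3):
        """Classify a new case by majority vote of its k nearest stored
        examples (Euclidean distance), as in memory-based learning."""
        dists = np.linalg.norm(memory_X - x_new, axis=1)
        nearest = np.argsort(dists)[:k]
        return int(round(memory_D[nearest].mean()))

    print(classify(np.array([0.85, 1.0, 0.75, 0.1])))  # -> 1 (speed up)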
HEBBIAN LEARNING
- When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency as one of the cells firing B is increased.
- This means that when the two neurons on either side of a synapse are activated simultaneously, the strength of that synapse increases.
- RESEARCH THOUGHT
- Hebbian learning should be classified into two parts:
- 1. A process with a gradual shift toward strengthening of the synapse if the total input synaptic weights are below a threshold value.
- 2. A fast shift in a single iteration if the input synaptic weights are above the threshold value.
Hebbian Learning (Working Example)
- Hebbian learning can be expressed mathematically by considering a synaptic weight Wkj of neuron k with pre-synaptic and post-synaptic signals denoted by Xj and Yk. If the pre-synaptic and post-synaptic signals are synchronous, the weight increases. The adjustment applied to the synaptic weight Wkj at time step n is
- ΔWkj(n) = F(Yk(n), Xj(n))
- A working example is the introduction of a traffic signal: X1 -> red, X2 -> yellow, X3 -> green. We can observe that the initial slow-down at a red signal, Y1(n), and the initial start-up at a green signal, Y2(n), are slow, but with time the response becomes stronger and faster, as sketched below.
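- A minimal sketch of the Hebbian update for the traffic-signal example, assuming the common activity-product form F(Yk, Xj) = η·Yk·Xj with an invented learning rate:

    import numpy as np

    eta = 0.1        # learning rate (assumed)
    w = np.zeros(3)  # synaptic weights for X1 (red), X2 (yellow), X3 (green)

    def hebb_step(x, y):
        """Hebbian update: Delta Wkj = eta * Yk * Xj, one form of F(Yk, Xj)."""
        global w
        w += eta * y * x

    # Repeated co-activation of the green signal (X3) with the start-up
    # response Y2 strengthens that synapse over time.
    for _ in range(20):
        hebb_step(np.array([0.0, 0.0, 1.0]), 1.0)
    print(w)  # the weight on X3 has grown: the response gets stronger/faster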
COMPETITIVE LEARNING
- In competitive learning the output neurons of a neural network compete among themselves to become active.
- A given set of inputs thus makes the set of neurons behave differently: only the winning neuron fires, as in the sketch below.
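- A minimal winner-take-all sketch of competitive learning in Python; the network size, learning rate and random input are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.random((3, 4))   # 3 competing output neurons, 4 inputs
    eta = 0.1

    def compete_and_update(x):
        """The neuron whose weight vector best matches the input wins and
        moves its weights toward that input; the losers are left unchanged."""
        winner = int(np.argmax(W @ x))
        W[winner] += eta * (x - W[winner])
        return winner

    x = rng.random(4)
    print(compete_and_update(x))  # index of the active (winning) neuron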
BOLTZMANN LEARNING
- The neurons constitute a recurrent structure and operate in a binary manner: for example, they may be in the +1 or -1 state.
- States flip depending on the input.
- RESEARCH THOUGHT
- Boltzmann learning puts the neurons in only two states, +1 and -1, whereas they should actually take a number of states depending on the set of inputs and the previous states.
Boltzmann Learning (Working Example)
- A Boltzmann machine operates on the energy generated when a signal moves from neuron j to neuron k. This process continues until the system reaches thermal equilibrium or the desired state.
- A working example can be a thermostat which keeps a check on the heat energy released by various processes in a factory. The Boltzmann system can gradually learn the amount of heat energy released during all the processes and then learn to adjust the weights to maintain an optimal temperature. In fact, it can automatically guide temperature maintenance all the time.
- P(change) = 1 / (1 + exp(-ΔE / Ti))
- We can calculate it as follows: X1 (temperature of process P1), X2 (temperature of process P2), X3 (temperature of process P3) and X4 (temperature of process P4) may determine the final temperature reading Ti, which is compared with the required temperature Tj. The energy change ΔE can then be calculated and error correction applied automatically, as in the sketch below.
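- A minimal Python sketch of the flip probability P(change) = 1 / (1 + exp(-ΔE/Ti)) applied to the thermostat example; all numeric values are invented.

    import math, random

    def flip_probability(delta_E, T):
        """Boltzmann flip probability: P = 1 / (1 + exp(-delta_E / T))."""
        return 1.0 / (1.0 + math.exp(-delta_E / T))

    # Invented readings: measured vs. required temperature drive delta_E.
    Ti_measured, Tj_required, pseudo_temperature = 78.0, 72.0, 5.0
    delta_E = Ti_measured - Tj_required  # simple stand-in for the energy change
    if random.random() < flip_probability(delta_E, pseudo_temperature):
        print("flip state: apply a cooling correction")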
SUPERVISED LEARNING
- In conceptual terms, supervised learning may be thought of as the neural network having knowledge of the environment, represented as input-output examples, and using that knowledge to train the network.
- RESEARCH THOUGHT
- 1. Supervised learning should be object based, in which we try to learn about an object from the environment.
- 2. This means there is a need to first learn the object's properties and then its methods.
- 3. Once the object has been learned, the neural network may simulate it for a set of inputs.
Supervised Learning (Working Example)
- An aircraft control system is a good example of supervised learning, because the aircraft navigation system faces new environmental conditions all the time. These conditions are fed to it by GPS, ground support and other devices, which teach the system to deal with them.
- The system can use error-correction learning to stay on course and learn to manage itself. When the system has fully learned to automate itself, it can be put onto a pilotless vehicle for self-navigation with minimal outside help.
UNSUPERVISED LEARNING
- In unsupervised learning there is no external teacher. Instead, provision is made for a task-independent measure of the quality of the representation that the network is required to learn.
- Various statistical methods, such as standard deviation and regression, are used to obtain useful information from the data.
- RESEARCH THOUGHT
- Since no supervision is required and all data is collected and then analyzed, it would be useful to first create a broad classification of the environment.
- Once the environment has been classified, data from the environment can be further classified to make the collected data more meaningful.
Unsupervised Learning (Working Example)
- In unsupervised learning no error-correction support is applied. The data is statistically classified into one or more classes.
- Unsupervised learning can be used in a weather forecast system. The data can be collected in the form of variable values: T (temperature conditions), C (cloud formations), H (humidity readings in and around a place) and A (air-flow readings).
- The network can then use unsupervised learning to make a correct weather forecast as its output, as in the clustering sketch below.
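- A minimal clustering sketch (k-means) for the weather example; with no teacher, the observations are grouped purely by statistical similarity. The data and the choice of two clusters are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.random((50, 4))  # invented observations: columns T, C, H, A

    def kmeans(X, k=2, iters=20):
        """Assign each observation to its nearest centre, then move each
        centre to the mean of its members; repeat."""
        centres = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(
                np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = X[labels == j].mean(axis=0)
        return labels, centres

    labels, centres = kmeans(data)
    print(centres)  # two "weather regimes" found without supervision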
SINGLE LAYER PERCEPTRON
- A perceptron is the simplest form of a neural network, used for the classification of patterns which are linearly separable. It consists of a single neuron with adjustable synaptic weights.
- The perceptron convergence theorem guarantees that a single-layer perceptron can learn to classify patterns drawn from two linearly separable classes.
- RESEARCH THOUGHT
- A single-layer perceptron should be clocked, behaving as both a slower and a faster neuron.
- Further, the weights themselves should be a function of time and should depend on Δt.
Single Layer Perceptron (Working Example)
- A single-layer perceptron does binary classification and then performs error correction, as per the learning rule, by modifying its weights.
- An example is a perceptron that calculates the price of a product. We can consider the input variables, with some initial weights: X1 (market demand), X2 (input material prices), X3 (past growth), X4 (profit expected). This can be expressed as a linear equation:
- 1.2354·X1 + 2.3338·X2 + 6.4523·X3 + 1.1·X4 = Price
- Now the single-layer perceptron can be made to calculate the exact price after error correction, done by comparing the output price with the actual price. Eventually it can predict the correct price, as in the sketch below.
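- A minimal Python sketch of this price example: a single linear neuron trained by error correction recovers the coefficients above. The training data and learning rate are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    true_w = np.array([1.2354, 2.3338, 6.4523, 1.1])  # coefficients from above

    X = rng.random((200, 4))  # invented inputs X1..X4
    price = X @ true_w        # "actual" prices consistent with the equation

    w = np.zeros(4)
    eta = 0.1
    for _ in range(50):
        for x, d in zip(X, price):
            y = w @ x               # output price for this case
            w += eta * (d - y) * x  # correct the weights by the price error
    print(w)  # approaches the true coefficients [1.2354, 2.3338, 6.4523, 1.1]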
MULTI LAYER PERCEPTRON
- A multi-layer perceptron consists of sensory units that constitute an input layer, one or more hidden layers of computation nodes, and an output layer of computation nodes.
- Learning takes place using an error-correction learning rule.
- Each computation node applies a non-linear activation function.
- RESEARCH THOUGHT
- As there are numerous layers in a multi-layer perceptron, they should be characterized by asking:
- 1. Which layer is faster than the others?
- 2. Which layer has a higher priority?
- 3. Which layer is responsible for what part of the output?
Multi-Layer Perceptron (Working Example): XOR
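- A minimal back-propagation sketch of a multi-layer perceptron solving XOR; the 2-4-1 sigmoid architecture, learning rate and iteration count are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    d = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    eta = 0.5
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)        # forward pass, hidden layer
        y = sigmoid(h @ W2 + b2)        # forward pass, output layer
        dy = (y - d) * y * (1 - y)      # output error via the chain rule
        dh = (dy @ W2.T) * h * (1 - h)  # error propagated back to hidden layer
        W2 -= eta * h.T @ dy; b2 -= eta * dy.sum(axis=0)
        W1 -= eta * X.T @ dh; b1 -= eta * dh.sum(axis=0)

    print(y.round(2).ravel())  # should approach [0, 1, 1, 0]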
RADIAL BASIS FUNCTION
- RBF networks look at the multi-layer perceptron from a curve-fitting point of view.
- RBF networks perform complex pattern classification.
- Rationale: a complex pattern-classification problem cast non-linearly into a high-dimensional space is more likely to be linearly separable than in a low-dimensional space (Cover's theorem).
- Interpolation is the technique used for curve fitting of the data movement across the layers.
Radial Basis Function (Working Example)
- As with the multi-layer perceptron, the calculation constructs an approximating function F(x) that models the movement of a signal from one layer to another.
- An example is the application of RBF to study the growth of a disease infecting a given population in a phased manner. For example, a disease starts by infecting 10% of the population in 5 cities. In the next phase it grows by 5% in 10 more cities and grows to 20% in the first 5 cities. This process can be approximated using an RBF, and another RBF can calculate the counter-move against the spread of the disease, as sketched below.
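- A minimal Gaussian-RBF interpolation sketch for the disease-spread example; the phase times, infection percentages and the width sigma are invented.

    import numpy as np

    phases = np.array([0.0, 1.0, 2.0, 3.0])        # time phases (RBF centres)
    infected = np.array([10.0, 15.0, 20.0, 22.0])  # % infected at each phase

    sigma = 1.0
    def phi(x, c):
        """Gaussian radial basis function centred at c."""
        return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

    # One basis function per data point; solve for exact interpolation weights.
    G = phi(phases[:, None], phases[None, :])
    w = np.linalg.solve(G, infected)

    def F(x):
        """Approximating function F(x) = sum_i w_i * phi(x, c_i)."""
        return phi(x, phases) @ w

    print(F(1.5))  # interpolated % infected between phases 1 and 2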
SUPPORT VECTOR MACHINES
- A support vector machine is a linear machine that constructs a decision surface in such a way that the margin of separation between positive and negative examples is maximized.
- It follows the method of structural risk minimization, an induction principle based on the fact that the error rate of a learning machine on test data is bounded by the sum of the training error rate and a term that depends on the Vapnik-Chervonenkis dimension.
- SVMs support the following three types of learning machines:
- 1. Polynomial learning machines.
- 2. RBF networks.
- 3. Two-layer perceptrons.
- RESEARCH THOUGHT
- An SVM can calculate the probability of each point being part of a classification.
- It can further deduce results as to the validity of each input for a classification.
Support Vector Machines (Working Example)
- Support vector machines use a hyperplane equation to separate the examples into two classes, +1 and -1. They use training data (Xi, di), i = 1 to N, where di = +1 or di = -1 is the desired response from the neural network. The equations used are
- W^T Xi + b > 0 for di = +1
- W^T Xi + b < 0 for di = -1
- An example is a neural net which computes whether a person may be given a loan (+1) or not (-1). The input vector consists of the inputs Xi (income, past transactions, job type, family, etc.). The support vector machine can compute the optimal decision surface from the support vectors and then give the desired response, as in the sketch below.
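- A minimal sketch of the loan example using scikit-learn's linear SVM (an assumed dependency); the applicant data is invented for illustration.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    # Invented features: income, past transactions, job type, family.
    X = np.vstack([rng.normal(loc=1.0, size=(20, 4)),    # repaid (+1)
                   rng.normal(loc=-1.0, size=(20, 4))])  # defaulted (-1)
    d = np.array([1] * 20 + [-1] * 20)

    svm = SVC(kernel="linear").fit(X, d)

    # Learned decision surface: W^T x + b > 0 -> +1 (grant), else -1 (refuse).
    W, b = svm.coef_[0], svm.intercept_[0]
    applicant = np.array([0.8, 1.2, 0.5, 0.9])
    print(1 if W @ applicant + b > 0 else -1)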