1
Bioinspired Computing
Lecture 14
Alternative Neural Networks
Netta Cohen

2
Last time: Attractor neural nets
  • Biologically inspired associative memories
  • Working examples and applications
  • Pros, cons and open questions
Today: Other neural nets
  • SOM (competitive) nets: a move away from
    bio-realistic models; unsupervised learning;
    neuroscience applications
  • GasNets: robotic control
3
Spatial Codes
Natural neural nets often code similar things
close together. The auditory and visual cortices
provide examples.
Another example: touch receptors in the human
body. "Almost every region of the body is
represented by a corresponding region in both the
primary motor cortex and the somatic sensory
cortex" (Geschwind 1979: 106). "The finger tips of
humans have the highest density of receptors,
about 2500 per square cm!" (Kandel and Jessell
1991: 374). This representation is often dubbed
the homunculus (or "little man" in the brain).
Picture from http://www.dubinweb.com/brain/3.html
4
Kohonen Nets
In a Kohonen net, a number of input neurons feed
a single lattice of neurons. The output pattern
is produced across the lattice surface.
Large volumes of data are compressed using
spatial/topological relationships within the
training set. Thus the lattice becomes an
efficient distributed representation of the input.
5
Kohonen Nets
also known as self-organising maps (SOMs)
  • Important features:
  • Self-organisation of a distributed
    representation of inputs.
  • This is a form of unsupervised learning.
  • The underlying learning principle is competition
    among nodes, known as winner takes all: only
    winners get to learn; losers decay. The
    competition is enforced by the network
    architecture: each node has a self-excitatory
    connection and inhibits all its neighbours (a
    minimal sketch of this dynamic follows below).
  • Spatial patterns are formed by imposing the
    learning rule throughout the local neighbourhood
    of the winner.
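
As referenced above, here is a minimal sketch of how self-excitation plus
neighbour inhibition enforces winner-takes-all dynamics. The update rule,
constants and rectification are illustrative assumptions, not details
given in the lecture:

```python
import numpy as np

def winner_take_all(drive, inhibit=0.2, steps=20):
    """Iterate self-excitation (net weight 1.0 on the diagonal) and
    mutual inhibition (-inhibit elsewhere) until one node remains."""
    n = len(drive)
    lateral = np.eye(n) * (1.0 + inhibit) - inhibit * np.ones((n, n))
    y = np.asarray(drive, dtype=float)
    for _ in range(steps):
        y = np.maximum(0.0, lateral @ y)  # losers fall below zero and are clipped
    return y

print(winner_take_all([0.3, 0.9, 0.5]))  # only the most strongly driven node survives
```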

6
Training Self-Organising Maps
  A simple training algorithm might look like this:
  1. Randomly initialise the network input weights,
     normalise all inputs so they are
     size-independent, and define a local
     neighbourhood and a learning rate.
  2. For each item in the training set:
     • Find the lattice node most excited by the input.
     • Alter the input weights for this node and those
       nearby such that they more closely resemble the
       input vector, i.e., at each node, the input
       weight update rule is Δw = r(x - w), where r is
       the learning rate, x the input vector and w the
       node's weight vector.
  3. Reduce the learning rate and the neighbourhood
     size.
  4. Go to 2 (another pass through the training set).
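
A minimal NumPy sketch of this loop. The Gaussian neighbourhood, linear
decay schedules and grid size are illustrative assumptions (the lecture
does not prescribe them), and inputs are assumed to be normalised already:

```python
import numpy as np

def train_som(data, grid=10, epochs=20, r0=0.5, sigma0=3.0):
    """Train a grid x grid Kohonen lattice on data (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    # step 1: randomly initialise each lattice node's input weights
    w = rng.random((grid, grid, data.shape[1]))
    # lattice coordinates, used to measure neighbourhood distance on the map
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        # step 3: shrink the learning rate and the neighbourhood every pass
        r = r0 * (1.0 - epoch / epochs)
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5
        for x in data:                        # step 2: one pass through the set
            # winner: the node whose weight vector best matches the input
            winner = np.unravel_index(
                np.argmin(np.linalg.norm(w - x, axis=-1)), (grid, grid))
            # Gaussian neighbourhood around the winner on the lattice
            h = np.exp(-np.sum((coords - np.array(winner)) ** 2, axis=-1)
                       / (2.0 * sigma ** 2))
            w += r * h[..., None] * (x - w)   # delta_w = r * h * (x - w)
    return w
```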

7
Training Self-Organising Maps (cont)
Gradually the net self-organises into a map of
the inputs, clustering the input data by
recruiting areas of the net for related inputs or
features in the inputs. The size of the
neighbourhood roughly corresponds to the
resolution of the mapped features.
8
How Does It Work?
Imagine a 2D training set with clusters of data
points
The nodes in the lattice are initially randomly
sensitive.
Gradually, they will migrate towards the input
data.
Nodes that are neighbours in the lattice will
tend to become sensitive to similar inputs.
Effective resource allocation: dense parts of the
input space recruit more nodes than sparse areas,
as the sketch below illustrates.
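
Continuing the train_som sketch from slide 6, one can check the
recruitment effect on clustered 2D data. The cluster centres and sizes
here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# two dense clusters (200 points each) and one sparse cluster (20 points)
data = np.vstack([rng.normal([0.2, 0.2], 0.05, (200, 2)),
                  rng.normal([0.8, 0.8], 0.05, (200, 2)),
                  rng.normal([0.8, 0.2], 0.05, (20, 2))])
w = train_som(data, grid=8).reshape(-1, 2)
# count how many lattice nodes ended up nearest to each cluster centre
centres = np.array([[0.2, 0.2], [0.8, 0.8], [0.8, 0.2]])
nearest = np.argmin(np.linalg.norm(w[:, None, :] - centres, axis=-1), axis=1)
print(np.bincount(nearest, minlength=3))  # dense clusters recruit more nodes
```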
Another example: the travelling salesman problem.
Applet from http://www.patol.com/java/TSP/index.html
9
How does the brain perform classification?
  • One area of the cortex (the inferior temporal
    cortex or IT) has been linked with two important
    functions:
  • object recognition
  • object classification
  • These tasks seem to be shape/colour specific but
    independent of object size, position, relative
    motion or speed, brightness or texture.
  • Indeed, category-specific impairments have been
    linked to IT injuries.

10
How does the brain perform classification (cont)?
Questions:
  • How do IT neurons encode objects/categories?
    e.g.,
  • local versus distributed representations/coding
  • temporal versus rate coding at the neuronal
    level
  • Can we recruit ANNs to answer such questions?
  • Can ANNs perform classification as well given
    similar data?

Recently, Elizabeth Thomas and colleagues
performed experiments on the activity of IT
neurons during an exercise of image
classification in monkeys and used a Kohonen net
to analyse the data.
11
The experiment
Monkeys were trained to distinguish between a
training set of pictures of trees and various
other objects. The monkeys were considered
trained when they reached a 95% success rate.
Trained monkeys were then shown new images of
trees and other objects. As they classified the
objects, the activity of IT neurons in their
brains was recorded. All in all, 226 neurons were
recorded on various occasions and over many
different images. The data collected was the
mean firing rate of each neuron in response to
each image. 25% of neurons responded only to one
category, but 75% were not category-specific. All
neurons were image-specific.
Problem: not all neurons were recorded for all
images, and no image was tested across all
neurons. In fact, when a table of neuronal
responses for each image was created, it was more
than 80% empty.
E. Thomas et al, J. Cog. Neurosci. (2001)
12
Experimental Results
Question: Given the partial data, is there
sufficient information to classify images as
trees or non-trees?
Answer: A two-node Kohonen net trained on the
table of neuronal responses was able to classify
new images with an 84% success rate.
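
A hypothetical sketch of how such a two-node competitive net can cope
with a mostly-empty response table: compare each image's response vector
to the two prototypes only over the neurons that were actually recorded.
The NaN-masking strategy and every detail here are assumptions for
illustration; the paper's actual procedure may differ:

```python
import numpy as np

def train_two_node(table, epochs=50, r=0.1):
    """Competitive learning on an (n_images x n_neurons) response table
    in which unrecorded entries are NaN."""
    rng = np.random.default_rng(0)
    w = rng.random((2, table.shape[1]))       # one prototype per output node
    for _ in range(epochs):
        for x in table:
            m = ~np.isnan(x)                  # only the recorded neurons count
            k = np.argmin(np.linalg.norm(w[:, m] - x[m], axis=1))
            w[k, m] += r * (x[m] - w[k, m])   # winner moves towards the data
    return w

def classify(x, w):
    """Assign a (partial) response vector to the closer prototype,
    e.g. 0 = tree, 1 = non-tree (label assignment is post hoc)."""
    m = ~np.isnan(x)
    return np.argmin(np.linalg.norm(w[:, m] - x[m], axis=1))
```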
Question: Are categories encoded by
category-specific neurons?
Answer: Deleting the category-specific neurons'
responses from the table degraded the success
rate of the Kohonen net only minimally. A control
set with random data deletions yielded similar
results.
Conclusion: Category-specific neurons are not
important for categorisation!
E. Thomas et al, J. Cog. Neurosci. (2001)
13
Experimental Results (cont.)
Question: Which neurons are important, if any?
Answer: An examination of the weights that
contribute most to the output in the Kohonen net
revealed that a small subset of neurons (<50)
that are not category-specific, yet respond with
different intensities to different categories, is
crucial for correct classification.
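
One plausible way to carry out such a weight examination, continuing
from the two-node sketch above (a sketch of the analysis idea, not
necessarily the authors' exact method):

```python
# rank input neurons by how differently the two output nodes weight them
importance = np.abs(w[0] - w[1])        # w from train_two_node above
ranked = np.argsort(importance)[::-1]   # most discriminative neurons first
print(ranked[:50])   # candidate crucial subset; re-test the net with these deleted
```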
Conclusions: The IT employs a distributed
representation to encode categories of different
images. The redundancy in this encoding allows
for graceful degradation, so that even with 80%
of the data missing and many neurons deleted,
sufficient information is present for
classification purposes. The fact that only rate
information was used suggests that temporal
information is less important here.
E. Thomas et al, J. Cog. Neurosci. (2001)
14
Space in Neural Nets
  • Kohonen nets teach us an important lesson about
    the ability of neurons to encode information, not
    only in weights, but also in spatial
    organisation. What are the consequences for
    network dynamics? Can these principles be
    extended beyond simple centre-surround
    constraints of self-excitation and neighbour
    inhibition?
  • Once again, insight may be gained by returning to
    the biological domain and asking how space
    affects brain activity.

While always aware of the immense richness of
neuronal behaviour, we have, until today,
considered neurons to be minimal processors
communicating via well-defined circuits. What
have we neglected?
We have turned our networks into abstract,
computing tools, disconnected from the real world
in which problems are defined. We have also
robbed the networks of enormous freedom by
restricting the encoding of information to series
of weights.
15
Neurotransmitters in the brain
  • many neurotransmitters do not just excite or
    inhibit
  • neurons release gases such as nitric oxide (NO)
  • the behaviour of these diffusing gaseous
    modulators is very different from that of
    standard neurotransmitters
  • Unlike standard neurotransmitters which are
    unable to travel far from their point of origin,
    NO is a small gas molecule that is free to
    diffuse slowly away from its origin, unhindered
    by intervening cellular structures.

NO secreted by a neuron affects all neurons
within range regardless of circuitry. Such
influences go beyond excitation or inhibition.
NO has the potential to modulate many aspects of
the neuron's behaviour.
16
GasNets
Researchers at Sussex's Centre for Computational
Neuroscience and Robotics have been developing an
ANN architecture inspired by these findings, which
they call GasNets.
Their model is a generalisation of the dynamic
recurrent neural nets. Neurons are organised on a
2D grid, with all-to-all synaptic connections.
Active neurons can also secrete gas.
The concentration of gas at the location of a
neuron modulates its sigmoid activity function,
either increasing or decreasing the steepness of
the curve (and its ability to secrete gas itself).
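
A toy sketch of this mechanism. The exponential distance falloff, the
emission threshold and all constants are modelling assumptions made for
illustration; see Husbands et al. (1998) for the actual model, which also
includes gases that flatten rather than steepen the sigmoid:

```python
import numpy as np

def gasnet_step(y, gas, W, pos, k0=1.0, k_gain=4.0, emit=0.1, decay=0.95):
    """One update of a toy GasNet: synaptic input through a sigmoid whose
    steepness is set by the gas concentration at each node's location."""
    # gas concentration at each node: contributions from every emitter,
    # falling off exponentially with distance on the 2D grid (an assumption)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    conc = (np.exp(-dist) * gas[None, :]).sum(axis=1)
    k = k0 + k_gain * conc                          # here gas only steepens the curve
    y_new = 1.0 / (1.0 + np.exp(-k * (W @ y)))      # all-to-all synaptic connections
    gas_new = decay * gas + emit * (y_new > 0.5)    # active nodes secrete gas
    return y_new, gas_new
```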
All GasNet figures courtesy of Phil Husbands,
Mick O'Shea, Tom Smith, Nick Jakobi.
17
A Control Task
A robot lives in a walled arena. Its task is to
approach a white triangle painted on the wall and
avoid a white rectangle, using only very crude
visual input (typically a handful of pixels from
a camera mounted on the robot).
Performing this shape discrimination under noisy
lighting conditions is a non-trivial task,
especially given the limited visual input
available to the controller.
18
Non-Gaseous Solutions
  • The same researchers had previously evolved more
    standard dynamical neural nets to solve this
    task.
These controllers took 6000 generations to
discover. What kind of GasNet controllers would
evolve? Would they exhibit advantages over other
kinds of ANN?
All figures courtesy of Sussex CCNR.
19
GasNet Controllers
  • Two kinds of successful GasNet controller were
    evolved, each taking 1000 generations to
    discover. Here's one:

20
The GasNet Solutions
  • Both GasNets perform robustly despite the noisy
    lighting conditions and outrageously
    low-bandwidth vision.

Far (ballistic): contrast between two offset
visual inputs is used to detect the triangle
edge.
Near (closed-loop): scanning behaviour
continually modulates the approach.
The evolved visual morphology always played a
crucial role. Active visual strategies solved
the task, rather than central reasoning.
21
Why Does Gas Make It Better?
  • While still an open question, several
    possibilities include:
  • Gas diffuses widely, allowing large parts of the
    network to be inhibited or excited
    simultaneously.
  • Gas concentration varies much more slowly than
    the flow of electrical activation around the
    synaptic connections.
  • There may be useful interactions between the slow
    gas and fast activation dynamics.

A combination of these ideas may explain why
solutions appear easier to build from GasNets
than from non-gas dynamical ANNs. Research into
these possibilities is ongoing by Chris Buckley
in the Biosystems group of the School of
Computing.
22
From Biology to ANNs and Back
  • Neuroscience and studies of animal behaviour have
    led to new ideas for artificial learning,
    communication, cooperation and competition.
    Simplistic cartoon models of these mechanisms can
    lead to new paradigms and impressive technologies.
  • Dynamic Neural Nets are helping us understand
    real-time adaptation and problem-solving under
    changing conditions.
  • Hopfield nets shed new light on mechanisms of
    association and the benefits of unsupervised
    learning.
  • Thomas' work helps unravel coding structures in
    the cortex.
  • Husbands et al.'s GasNets are helping
    neuroscientists to understand the behaviour of NO
    and other local influences in real nervous
    systems, and are also used for improved robot
    control.

23
Next time
  • Guest lecture series on Genetic Evolution and
    Genetic Programming.

Reading
  • Elizabeth Thomas et al. (2001). Encoding of
    categories by noncategory-specific neurons in the
    inferior temporal cortex, J. Cog. Neurosci. 13:
    190-200.
  • Phil Husbands, Tom Smith, Nick Jakobi & Michael
    O'Shea (1998). Better living through chemistry:
    Evolving GasNets for robot control, Connection
    Science 10: 185-210.
  • Ezequiel Di Paolo (2003). Organismically-inspired
    robotics: Homeostatic adaptation and natural
    teleology beyond the closed sensorimotor loop,
    in K. Murase & T. Asakura (Eds.), Dynamical
    Systems Approach to Embodiment and Sociality,
    Advanced Knowledge International, Adelaide, pp.
    19-42.
  • Ezequiel Di Paolo (2000). Homeostatic adaptation
    to inversion of the visual field and other
    sensorimotor disruptions, SAB2000, MIT Press.