Evolving Neural Network Architectures in a Computational Ecology

Transcript and Presenter's Notes

1
Evolving Neural Network Architectures in a
Computational Ecology
  • Larry Yaeger
  • Professor of Informatics, Indiana University
  • Distinguished Scientist, Apple Computer
  • Networks and Complex Systems
  • Indiana University
  • 18 October 2004

2
Wiring Diagram + Learning → Brain Maps
3
Motor Cortex Map
4
Motor Cortex Homunculus
5
Plasticity in Function
Orientation maps
Mriganka Sur, et al., Science 1988; Nature 2000
6
Plasticity in Wiring
Patterns of long-range horizontal connections in
V1, normal A1, and rewired A1
Mriganka Sur, et al., Nature 2000
7
Wiring Diagram Matters
  • Relative consistency of brain maps across large
    populations
  • Lesion/aphasia studies demonstrate very specific,
    limited effects
  • Moderate stroke damage to occipital lobe can
    induce Charcot-Wilbrand syndrome (loss of dreams)
  • Scarcity of tissue in a localized portion of the visual
    system (parieto-occipital/intraparietal sulcus) is the
    mechanism of action for the genetic disorder Williams
    Syndrome (lack of depth perception, inability to
    assemble parts into wholes)

8
Real & Artificial Brain Maps
Distribution of orientation-selective cells in
visual cortex
9
Neuronal Cooperation
John Pearson, Gerald Edelman
10
Neuronal Competition
John Pearson, Gerald Edelman
11
The Story So Far
  • Brain maps are good
  • Brain maps are derived from
  • General purpose learning mechanism
  • Suitable wiring diagram
  • Artificial neural networks capture key features
    of biological neural networks using
  • Hebbian learning
  • Suitable wiring diagram

12
How to Proceed?
  • Design a suitable neural architecture
  • Simple architectures are easy, but are limited to
    simple (but robust) behaviors
  • W. Grey Walter's Turtles
  • First few Valentino Braitenberg Vehicles (1-3
    of 14)
  • Complex architectures are much more difficult!
  • We know a lot about neural anatomy
  • There's a lot more we don't know
  • It is being tried: Steve Grand's Lucy

13
How to Proceed?
  • Evolve a suitable neural architecture
  • It ought to work
  • Valentino Braitenberg's Vehicles (4 and higher)
  • We know it works
  • Genetic Algorithms (computational realm)
  • Natural Selection (biological realm)

14
Evolution is a Tautology
  • That which survives, persists.
  • That which reproduces, increases its numbers.
  • Things change.
  • Any little niche

15
Neural Architectures for Controlling Behavior
using Vision
16
(No Transcript)
17
(No Transcript)
18
What Polyworld Is
  • An electronic primordial soup experiment
  • Why do we get science, instead of ratatouille?
  • Right ingredients in the right pot under the
    right conditions
  • An attempt to approach artificial intelligence
    the way natural intelligence emerged
  • Through the evolution of nervous systems in an
    ecology
  • An opportunity to work our way up through the
    intelligence spectrum
  • Tool for evolutionary biology, behavioral
    ecology, cognitive science

19
What Polyworld Is Not
  • Fully open ended
  • Even natural evolution is limited by physics (and
    previous successes)
  • Accurate model of microbiology
  • Accurate model of any particular ecology
  • Though it is possible to model specific ecologies
  • Accurate model of any particular organism's brain
  • Though many neural models are possible
  • A strong model of ontogeny

20
What is Mind?
  • Hydraulics (Descartes)
  • Marionettes (ancient Greeks)
  • Pulleys and gears (Industrial Revolution)
  • Telephone switchboard (1930s)
  • Boolean logic (1940s)
  • Digital computer (1960s)
  • Hologram (1970s)
  • Neural Networks (1980s - ?)
  • Studying what mind is (the brain) instead of
    what mind is like

21
Polyworld Overview
  • Computational ecology
  • Organisms have genetic structure and evolve over
    time
  • Organisms have simulated physiologies and
    metabolisms
  • Organisms have neural network brains
  • Arbitrary, evolved neural architectures
  • Hebbian learning at synapses
  • Organisms perceive their environment through
    vision
  • Organisms' primitive behaviors are neurally
    controlled
  • Fitness is determined by Natural Selection alone
  • Bootstrap online GA if required
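
To make the bullets above concrete, here is a toy, self-contained sketch of the kind of update loop such a system might use. Everything in it (names, constants, the random "brain" stand-in) is an illustrative assumption, not Polyworld's actual code.

```python
# Toy sketch of a computational-ecology update loop; names and numbers are invented.
import random

MIN_POPULATION = 10

def random_genome():
    # Stand-in for the physiology/neurophysiology genes on the following slides.
    return {"size": random.uniform(0.5, 1.5), "max_speed": random.uniform(0.1, 1.0)}

class Agent:
    def __init__(self, genome):
        self.genome = genome
        self.energy = 1.0

    def think(self):
        # Placeholder for the evolved, Hebbian-learning neural network.
        return {"move": random.random(), "eat": random.random()}

def world_step(agents, food_energy):
    for agent in list(agents):
        behaviors = agent.think()
        if behaviors["eat"] > 0.5 and food_energy > 0.0:
            agent.energy += 0.1                      # replenish energy by eating
            food_energy -= 0.1
        agent.energy -= 0.02 * agent.genome["size"]  # metabolic + behavioral cost
        if agent.energy <= 0.0:
            agents.remove(agent)                     # fitness by natural selection alone
    while len(agents) < MIN_POPULATION:              # bootstrap online GA if required
        agents.append(Agent(random_genome()))
    return food_energy

agents = [Agent(random_genome()) for _ in range(MIN_POPULATION)]
food = 5.0
for _ in range(100):
    food = world_step(agents, food)
```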

22
(No Transcript)
23
Genetics: Physiology Genes
  • Size
  • Strength
  • Maximum speed
  • Mutation rate
  • Number of crossover points
  • Lifespan
  • Fraction of energy to offspring
  • ID (mapped to body's green color component)
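
A hedged sketch of how the physiology genes above might be represented as a data structure; the field names, types, and example values are assumptions, not Polyworld's actual genome encoding.

```python
# Illustrative data structure for the physiology genes listed above.
from dataclasses import dataclass

@dataclass
class PhysiologyGenes:
    size: float                       # body size
    strength: float                   # affects attack damage and energy costs
    max_speed: float                  # maximum movement speed
    mutation_rate: float              # probability of mutating each gene in offspring
    crossover_points: int             # number of crossover points during reproduction
    lifespan: int                     # maximum age, in simulation time steps
    offspring_energy_fraction: float  # fraction of parent energy passed to offspring
    id_green: float                   # ID gene, shown as the body's green color component

# Example (hypothetical values):
genes = PhysiologyGenes(size=1.2, strength=0.8, max_speed=0.6, mutation_rate=0.01,
                        crossover_points=4, lifespan=1000,
                        offspring_energy_fraction=0.5, id_green=0.3)
```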

24
Genetics: Neurophysiology Genes
  • # of neurons for red component of vision
  • # of neurons for green component of vision
  • # of neurons for blue component of vision
  • # of internal neuronal groups
  • # of excitatory neurons per group
  • # of inhibitory neurons per group
  • Initial bias of neurons per group
  • Bias learning rate per group
  • Connection density per pair of group types
  • Topological distortion per pair of group types
  • Learning rate per pair of group types
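
In the same hedged spirit, the neurophysiology genes above amount to per-group and per-group-pair parameters that determine the network's wiring. The sketch below is an assumed representation, not the real bit-string genome.

```python
# Illustrative data structure for the neurophysiology genes listed above.
# Per-group values are indexed by group; per-pair values by (source, target) group.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class NeurophysiologyGenes:
    red_neurons: int                        # number of neurons for red vision
    green_neurons: int                      # number of neurons for green vision
    blue_neurons: int                       # number of neurons for blue vision
    internal_groups: int                    # number of internal neuronal groups
    excitatory_per_group: Dict[int, int]    # excitatory neuron count, per group
    inhibitory_per_group: Dict[int, int]    # inhibitory neuron count, per group
    initial_bias: Dict[int, float]          # initial neuron bias, per group
    bias_learning_rate: Dict[int, float]    # bias learning rate, per group
    connection_density: Dict[Tuple[int, int], float]      # per pair of group types
    topological_distortion: Dict[Tuple[int, int], float]  # per pair of group types
    learning_rate: Dict[Tuple[int, int], float]           # c_kl, per pair of group types
```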

25
Physiology and Metabolism
  • Energy is expended by behavior & neural activity
  • Size and strength affect behavioral energy
    costs (and energy costs to opponent when
    attacking)
  • Neural complexity affects mental energy costs
  • Size affects maximum energy capacity
  • Energy is replenished by eating food (or other
    organisms)
  • Health energy is distinct from Food-Value energy
  • Body is scaled by size and maximum speed
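
A minimal sketch of the kind of energy bookkeeping these bullets describe; the formulas and coefficients below are assumptions chosen only to show the stated dependencies (size, strength, neural complexity), not Polyworld's actual cost model.

```python
# Hedged sketch of per-step energy bookkeeping; all coefficients are invented.
def energy_update(energy, size, strength, neuron_count, synapse_count,
                  behavior_volitions, food_eaten):
    """Return the agent's health energy after one time step."""
    # Mental cost grows with neural complexity.
    mental_cost = 0.001 * neuron_count + 0.0001 * synapse_count
    # Behavioral cost scales with size, strength, and how strongly each behavior is driven.
    behavior_cost = 0.01 * size * strength * sum(behavior_volitions.values())
    # Size bounds the maximum energy the body can store.
    max_capacity = 10.0 * size
    return min(energy - mental_cost - behavior_cost + food_eaten, max_capacity)

# Example (hypothetical numbers):
e = energy_update(energy=5.0, size=1.2, strength=0.8, neuron_count=100,
                  synapse_count=1000, behavior_volitions={"move": 0.5, "eat": 0.9},
                  food_eaten=0.3)
```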

26
Perception: Neural System Inputs
  • Vision
  • Internal energy store
  • Random noise

27
Behavior: Neural System Outputs
  • Primitive behaviors each controlled by a single neuron
  • Volition is the level of activation of the relevant
    neuron
  • Move
  • Turn
  • Eat
  • Mate (mapped to body's blue color component)
  • Fight (mapped to body's red color component)
  • Light
  • Focus
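
To illustrate the one-neuron-per-behavior scheme, here is a hedged sketch in which each output neuron's activation is treated as the volition for its behavior. The behavior names and color mapping follow the bullets above; everything else is assumed.

```python
# Sketch: each primitive behavior is driven by a single output neuron whose
# activation level (in [0, 1]) is the "volition" for that behavior.
BEHAVIORS = ["move", "turn", "eat", "mate", "fight", "light", "focus"]

def decode_behaviors(output_activations):
    volition = dict(zip(BEHAVIORS, output_activations))
    body_color = {
        "blue": volition["mate"],   # mate volition shown in the body's blue component
        "red": volition["fight"],   # fight volition shown in the body's red component
    }
    return volition, body_color

# Example: moving at half speed while strongly trying to eat.
volition, color = decode_behaviors([0.5, 0.1, 0.9, 0.0, 0.2, 0.0, 0.3])
```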

28
Behavior Sample: Eating
29
Behavior Sample: Killing & Eating
30
Behavior Sample: Mating
31
Behavior Sample: Lighting
32
Neural System: Internal Units
  • No prescribed function
  • Neurons
  • Synaptic connections

33
Evolving Neural Architectures
34
Neural System: Learning and Dynamics
  • Simple summing and squashing neuron model
  • x_i = Σ_j a_j(t) s_ij(t)
  • a_i(t+1) = 1 / (1 + e^(-x_i))
  • Hebbian learning
  • s_ij(t+1) = s_ij(t) + c_kl (a_i(t+1) - 0.5) (a_j(t) - 0.5)
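
A minimal sketch of the update rules above, assuming one learning rate c_kl per synapse (in Polyworld it depends on the pair of neuronal group types the synapse connects). This illustrates the equations; it is not the actual source.

```python
# Summing-and-squashing neuron with Hebbian synapse update, per the equations above.
import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_neuron(activations, weights, learning_rates):
    """One step for neuron i.

    activations    -- a_j(t) for all presynaptic neurons j
    weights        -- s_ij(t), synaptic weights into neuron i
    learning_rates -- c_kl for each synapse (depends on the group-type pair)
    Returns a_i(t+1) and the updated weights s_ij(t+1).
    """
    x_i = sum(s * a for s, a in zip(weights, activations))
    a_i = squash(x_i)                                   # a_i(t+1) = 1 / (1 + e^-x_i)
    new_weights = [s + c * (a_i - 0.5) * (a_j - 0.5)    # Hebbian update
                   for s, c, a_j in zip(weights, learning_rates, activations)]
    return a_i, new_weights

# Example with three presynaptic neurons:
a_i, w = update_neuron([0.9, 0.1, 0.6], [0.2, -0.4, 0.1], [0.05, 0.05, 0.05])
```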

35
Emergent Species: Joggers
36
Emergent Species: Indolent Cannibals
37
Emergent Species: Edge-runners
38
Emergent Species: Dervishes
39
Emergent Behavior: Visual Response
40
Emergent Behavior: Fleeing Attack
41
Emergent Behaviors: Foraging, Grazing, Swarming
42
A Few Observations
  • Evolution of higher-order, ethological-level
    behaviors observed
  • Selection for use of vision observed
  • This approach to evolution of neural
    architectures generates a broad range of network
    designs

43
Is It Alive? Ask Farmer & Belin
  • Life is a pattern in spacetime, rather than a
    specific material object.
  • Self-reproduction.
  • Information storage of a self-representation.
  • A metabolism.
  • Functional interactions with the environment.
  • Interdependence of parts.
  • Stability under perturbations.
  • The ability to evolve.

44
Information Is What Matters
  • "Life is a pattern in spacetime, rather than a
    specific material object. - Farmer Belin
    (ALife II, 1990)
  • Schrödinger speaks of life being characterized by
    and feeding on negative entropy (What Is Life?
    1944)
  • Von Neumann describes brain activity in terms of
    information flow (The Computer and the Brain,
    Silliman Lectures, 1958)
  • Informational functionalism
  • It's the process, not the substrate
  • What can information theory tell us about living,
    intelligent processes?

45
Information and Complexity
  • Chris Langton's lambda parameter (ALife II)
  • Complexity ~ length of transients
  • λ = (# of rules leading to non-quiescent state) /
    (# of rules)

[Figure: complexity (low to high) vs. lambda (0.0 to 1.0), peaking near the critical value λ_c]
  • Crutchfield: Similar results measuring
    complexity of finite state machines needed to
    recognize binary strings
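
For concreteness, here is a small sketch of computing lambda for a cellular-automaton rule table, following the fraction defined above; the random binary table is just an example.

```python
# Langton's lambda: the fraction of rule-table entries mapping to a non-quiescent state.
import random

def langton_lambda(rule_table, quiescent=0):
    non_quiescent = sum(1 for out in rule_table.values() if out != quiescent)
    return non_quiescent / len(rule_table)

# Example: a binary, radius-1 CA has 2**3 = 8 neighborhood patterns.
table = {pattern: random.randint(0, 1) for pattern in range(8)}
print(langton_lambda(table))   # 0.0 = always quiescent, 1.0 = never quiescent
```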

46
Quantifying Life and Intelligence
  • Measure state and compute complexity
  • What complexity?
  • Mutual Information
  • Adami's physical complexity
  • Gell-Mann & Lloyd's effective complexity
  • What state?
  • Chemical composition
  • Electrical charge
  • Aspects of behavior or structure
  • Neuronal states
  • Other issues
  • Scale, normalization, sparse data
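
As one concrete example of the measures listed above, here is a hedged sketch of plug-in mutual information between two binned neuronal state sequences. The binarization threshold and the toy data are assumptions, and, as the last bullet notes, binning, normalization, and sparse data are real concerns in practice.

```python
# Plug-in mutual information between two discretized state sequences (in bits).
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_xy = count / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Example: two neurons' activations over time, binarized at an arbitrary 0.5.
a = [1 if v > 0.5 else 0 for v in (0.9, 0.2, 0.8, 0.7, 0.1, 0.6)]
b = [1 if v > 0.5 else 0 for v in (0.8, 0.3, 0.9, 0.6, 0.2, 0.4)]
print(mutual_information(a, b))
```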

47
Future Directions
  • Compute and record measure(s) of complexity
  • Use best complexity measure(s) as fitness
    function
  • More environmental interaction
  • Pick up and put down pieces of food
  • Pick up and put down pieces of barrier
  • More complex environment
  • More control over food growth patterns
  • Additional senses
  • More complex, temporal (evolved?) neural models

48
Future Directions
  • Behavioral Ecology benchmarks
  • Optimal foraging
  • Patch depletion (Marginal Value Theorem)
  • Patch selection (profitability vs. predation
    risk)
  • Vancouver whale populations
  • Evolutionary Biology problems
  • Speciation (population isolation)
  • Altruism (genetic similarity)
  • Classical conditioning, intelligence assessment
    experiments

49
Future Directions
  • Source code is available
  • Original SGI version at ftp.apple.com in
    /research/neural/polyworld
  • New Mac/Windows/X11 version coming soon, based
    on Qt
  • Paper and other materials at http://pobox.com/larryy

50
Evolving Neural Network Architectures in a
Computational Ecology
  • Larry Yaeger
  • mailto: larryy@indiana.edu
  • http://pobox.com/larryy
  • Networks and Complex Systems
  • Indiana University
  • 18 October 2004

51
But It Can't Be Done!
  • "If an elderly but distinguished scientist says
    that something is possible he is almost certainly
    right, but if he says that it is impossible he is
    very probably wrong." - Arthur C. Clarke
  • Humans are a perfect example of mind embedded in
    matter; there is no point arguing about whether
    it is possible to embed mind in matter.
  • The Earth is flat and at the center of the
    universe...

52
But Gödel Said So...
  • No he didn't.
  • Every consistent formalisation of number theory
    is incomplete.
  • It is a huge leap to "AI is impossible".
  • Indeed, the fact that human brains are capable of
    both expressing arithmetical relationships and
    contemplating "I am lying" bodes well for machine
    minds.
  • The (formal) consistency of the human mind has
    most definitely not been proven.

53
Quantum Effects Required for Unpredictability
  • With just three variables, Lorenz demonstrated
    chaotic, unpredictable systems.
  • Even the 10^2 neurons and 10^3 synapses of
    Polyworld's organisms should provide adequate
    complexity.
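
To back up the Lorenz point, here is a small sketch of the three-variable system with a simple Euler integrator, showing two nearly identical starting points diverging; the time step and iteration count are arbitrary choices for the demo.

```python
# Lorenz's three-variable system, integrated with a basic Euler step, to show
# that a tiny deterministic system can be chaotic (sensitive to initial conditions).
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories starting a hair apart diverge after a few thousand steps.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.000001)
for _ in range(5000):
    a, b = lorenz_step(*a), lorenz_step(*b)
print(a, b)   # visibly different despite the nearly identical start
```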

54
Man Cannot Design Human Minds
  • Even Gödel acknowledged that human-level minds
    might be evolved in machines.