Title: Modelling Language Evolution Lecture/practical: The Talking Heads
1. Modelling Language Evolution (Lecture/practical)
The Talking Heads
- Simon Kirby
- University of Edinburgh
- Language Evolution Computation Research Unit
2. Meanings in models
- Most models either
- Ignore meanings entirely (e.g., Elman, Christiansen, etc.)
- Treat meanings as pre-given (e.g., Kirby, etc.)
- Why have meanings at all?
- Imagine an iterated learning model without them
- What would the best language be like?
- Probably 'say anything' or 'say nothing'
- Need something to drive agents to be expressive
3. Problems with meanings
- Where do meanings come from?
- If they are predefined, then the model must assume they are innate
- Is this justified?
- How different are different languages' semantics?
- How do we learn the meaning of words?
- In models, agents are simply given the meaning-form pairs on a plate
- What do meanings correspond to in the real world?
- In models, they are symbolic representations; there is no real world
4. Enter the robots
- Luc Steels and others
- Get round these problems by using robots
5. Communication grounded in sensory-motor behaviour
- Quinn (2001): uses evolving robots
- Khepera robot
- Two motors and several proximity sensors
- Controlled by neural network
- No learning
- Weights evolved by a genetic algorithm (rough sketch below)
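As a rough illustration only (not Quinn's actual architecture or algorithm), an evolved feedforward controller could look like the Python sketch below; the sensor count, single weight layer, selection scheme and mutation size are all assumptions, and the fitness function that would simulate the paired robots is left to the caller.

```python
import math
import random

# Assumed layout: 8 proximity sensors in, 2 motor speeds out, one weight layer.
N_SENSORS, N_MOTORS = 8, 2

def control(weights, sensors):
    """Map sensor readings to motor speeds in [-1, 1]; no learning at run time."""
    return [math.tanh(sum(w * s for w, s in
                          zip(weights[m * N_SENSORS:(m + 1) * N_SENSORS], sensors)))
            for m in range(N_MOTORS)]

def evolve(fitness, pop_size=50, generations=100, sd=0.1):
    """Evolve weight vectors by truncation selection and Gaussian mutation.
    `fitness(weights)` is assumed to simulate the robot on the task and return a score."""
    population = [[random.gauss(0, 1) for _ in range(N_SENSORS * N_MOTORS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 5]
        population = [[w + random.gauss(0, sd) for w in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)
```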
6. Cooperative evolutionary task
- Khepera inhabit environment in pairs
- Put into the environment close together but in random orientations
- Task (fixed time limit):
- Move the final average position as far away from the initial position as possible
- How to maximise fitness?
- Need to coordinate so that both go in the same direction
- How can the robots coordinate?
- One must lead, the other follow
7. The evolution of 'after you'
- Early evolutionary solution:
- Two genotypes: leaders and followers
- Problem if two leaders or two followers get picked
- Communicative solution:
- Each agent rotates until it faces the other
- The first to face the other moves forward to close range (this will be the follower)
- Once in range, it oscillates back and forth
- The second agent then starts reversing, and the other follows
8. Scales up to a number of robots
9. Grounding in a learned system
- Assume there is a real world out there and agents want to discriminate objects in the world and name them
- Talking Heads experiment (e.g., Paul Vogt)
10. Discrimination
- The world is a collection of objects (shapes on a whiteboard)
- Represented as features: Red, Green, Blue, Area, X, Y
- Context: a set of objects on the whiteboard
- Topic: one particular object
- Robots want to build a set of meanings
- A meaning is a region represented by a prototype
- A particular colour, area and location
- The category of every object is the region represented by its nearest prototype
- An object is discriminated if its category is different from that of all the other objects in the context (see the sketch below)
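A minimal Python sketch of this categorisation and discrimination test, assuming objects and prototypes are plain feature tuples (e.g., R, G, B, area, x, y) and that `prototypes` is a dict from a category label to its prototype; the function names are illustrative, not from the original Talking Heads code.

```python
import math

def nearest_prototype(obj, prototypes):
    """Categorise an object as the region of its closest prototype."""
    return min(prototypes, key=lambda cat: math.dist(obj, prototypes[cat]))

def discriminate(topic, context, prototypes):
    """True if the topic's category differs from that of every other object
    in the current context."""
    topic_cat = nearest_prototype(topic, prototypes)
    return all(nearest_prototype(obj, prototypes) != topic_cat
               for obj in context if obj != topic)
```

For example, with the two prototypes from the simplified example on the next slide, `nearest_prototype((0.1, 0.3), {'a': (0.15, 0.25), 'b': (0.35, 0.3)})` returns 'a'.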
11. Simplified example
CONTEXT: A (0.1, 0.3), B (0.3, 0.3), C (0.25, 0.15)
ROBOT'S PROTOTYPES: a (0.15, 0.25), b (0.35, 0.3)
[Diagram: the objects A, B, C and the prototypes a, b plotted in the feature space]
A is categorised as a; B is categorised as b; C is categorised as b
A is discriminated; B and C are not
12. After discrimination
- If discrimination of the topic fails
- Add a new prototype that corresponds exactly to that topic
- If discrimination of the topic succeeds
- Shift the prototype slightly so that it moves closer to the topic (see the sketch below)
- If discrimination succeeds, the distinctive category is used to play a language game
- In some sense, the distinctive category is the meaning that will be communicated
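Continuing the sketch above (and reusing its helpers), prototype adaptation after a discrimination game might look as follows; the step size and the way new categories are named here are assumptions.

```python
def adapt_prototypes(topic, context, prototypes, rate=0.1):
    """Nudge the winning prototype towards the topic on success;
    add a new prototype exactly at the topic on failure."""
    if discriminate(topic, context, prototypes):
        cat = nearest_prototype(topic, prototypes)
        prototypes[cat] = tuple(p + rate * (t - p)
                                for p, t in zip(prototypes[cat], topic))
        return cat                                   # the distinctive category, i.e. the meaning
    prototypes["m%d" % len(prototypes)] = tuple(topic)   # hypothetical naming scheme
    return None                                      # nothing to communicate this round
```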
13. The language game
- The lexicon is a set of meaning-word pairs, each with an association score (between 0 and 1); see the sketch below
- Language game: the speaker uses a word to refer to the topic (possibly by inventing one) and the hearer tries to interpret that word
- Different game types:
- Guessing game
- Observational game
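One way to represent such a lexicon is a mapping from (meaning, word) pairs to scores, as in the sketch below; the initial score of 0.5 and the random three-letter invented words are illustrative assumptions.

```python
import random
import string

class Lexicon:
    """Meaning-word pairs with association scores in [0, 1]."""

    def __init__(self):
        self.scores = {}                               # (meaning, word) -> score

    def invent_word(self, meaning, length=3):
        """Speaker invention: coin a random word for a meaning it cannot name."""
        word = "".join(random.choices(string.ascii_lowercase, k=length))
        self.scores[(meaning, word)] = 0.5             # assumed initial score
        return word

    def best_word(self, meaning):
        """The word a speaker would use: its highest-scoring pair for this meaning."""
        pairs = [(w, s) for (m, w), s in self.scores.items() if m == meaning]
        return max(pairs, key=lambda ws: ws[1])[0] if pairs else None

    def best_meaning(self, word, candidate_meanings):
        """The meaning a hearer would pick for a word, restricted to the
        distinctive categories of the possible topics."""
        pairs = [(m, s) for (m, w), s in self.scores.items()
                 if w == word and m in candidate_meanings]
        return max(pairs, key=lambda ms: ms[1])[0] if pairs else None
```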
14. The guessing game: step 1
- Hearers try to guess the topic
- Hearers look at all objects in the context, but only consider discriminated objects as possible topics
- (BUT hearers categorise (discriminate) each object in the context anyway)
- Look for an association between each possible topic and the speaker's word
- Select the one with the highest score (see the sketch below)
- Speakers provide corrective feedback
- Yes, that's the topic
- No, that's not the topic (and indicate the correct one)
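A sketch of one exchange, reusing the Lexicon above; `context_meanings` is assumed to map each discriminated object in the context to its distinctive category, and, for simplicity, speaker and hearer are assumed to share category labels (the real robots each build their own).

```python
def guessing_game(speaker, hearer, topic_meaning, context_meanings):
    """One guessing-game exchange: the speaker names the topic, the hearer guesses."""
    word = speaker.best_word(topic_meaning) or speaker.invent_word(topic_meaning)
    guess = hearer.best_meaning(word, set(context_meanings.values()))
    success = (guess == topic_meaning)               # speaker's corrective feedback
    return word, guess, success
```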
15. The guessing game: step 2
- Both speaker and hearer adjust lexicon
- Speaker
- Success: increase the association, and laterally inhibit other associations
- Failure: decrease the association
- Hearer
- Success: same as the speaker
- Failure to guess the correct topic: decrease the association, and increase the association with the correct topic
- Failure to understand the word: add the word to the lexicon (see the sketch below)
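The score updates might be implemented roughly as below; the step sizes and the exact form of lateral inhibition (weakening competing pairs that share either the meaning or the word, but not both) are assumptions.

```python
def adjust_lexicon(lex, meaning, word, success, correct_meaning=None,
                   delta=0.1, inhibit=0.1):
    """Update one agent's scores after a guessing game. `meaning` is the
    category this agent associated with the word (None if it had none)."""
    if meaning is None:                              # failure to understand the word:
        lex.scores[(correct_meaning, word)] = 0.5    # just add it (hearer only)
        return
    key = (meaning, word)
    if success:
        lex.scores[key] = min(1.0, lex.scores[key] + delta)
        for (m, w) in lex.scores:                    # lateral inhibition of competitors
            if (m == meaning) != (w == word):        # same meaning or same word, not both
                lex.scores[(m, w)] = max(0.0, lex.scores[(m, w)] - inhibit)
    else:
        lex.scores[key] = max(0.0, lex.scores[key] - delta)
        if correct_meaning is not None:              # hearer also strengthens the word's
            ck = (correct_meaning, word)             # link to the correct topic
            lex.scores[ck] = min(1.0, lex.scores.get(ck, 0.5) + delta)
```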
16. What decides communicative success?
- ADJUST
- Number of games
- Just colours
- Noise
- Context size
- Number of agents
17. Look into their brains: what determines lexicon size?
- LOOK AT
- Number of symbols
- Number of meanings
18. The observational game
- The guessing game relies on corrective feedback
- Instead, let's assume that learners can figure out what the topic is (joint attention? Theory of Mind?)
- Hearers are given the topic
- Hearers play the discrimination game only on the topic
- Look for an association between the topic and the speaker's word
- The speaker knows if the hearer has the correct association
- Both speaker and hearer adjust the lexicon just as before (see the sketch below)
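Compared with the guessing-game sketch, the observational game needs no corrective feedback: the hearer is told the topic, so success is simply whether its strongest interpretation of the word matches that topic (again assuming shared category labels for simplicity).

```python
def observational_game(speaker, hearer, topic_meaning):
    """Observational game: the hearer already knows the topic."""
    word = speaker.best_word(topic_meaning) or speaker.invent_word(topic_meaning)
    guess = hearer.best_meaning(word, {topic_meaning})
    return word, guess == topic_meaning              # no corrective feedback needed
```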
19. What difference does this make?
- Repeat selected experiments using the observational game