Title: COMPUTATIONAL COGNITIVE SCIENCE
1 COMPUTATIONAL COGNITIVE SCIENCE
2 Cognitive Revolution
- The development of the computer led to the rise of cognitive psychology and artificial intelligence
- BINAC, the Binary Automatic Computer, developed in 1949
3 Artificial Intelligence
- Constructing artificial computer-based systems that produce intelligent outcomes
- Examples
  - Game-playing programs
    - Deep Blue
  - Intelligent robots
    - Mars rovers
    - DARPA's Urban Challenge
  - Netflix competition
  - Conversational agents
4 Weak vs. Strong AI
- Weak AI: using AI as a tool to understand human cognition
- Strong AI: a properly programmed computer has a mind capable of understanding
5 Turing Test
- Can artificial intelligence be as good as human intelligence? How can we test this?
- Turing test (1950)
  - Designed to test whether humans can distinguish between humans and computers based on conversations
  - A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished, and based on either the computer's or the human's response, the interrogator had to decide if the answer was given by the computer or by the human.
Alan Turing (1912-1954)
6 A classic conversational agent
- The Turing Test inspired an early, satirical attempt to create a computerized Rogerian therapist, ELIZA
- ELIZA's program is a set of If-Then rules (a minimal sketch of this style follows below)
  - IF the person mentions "father" THEN say "Who else in your family comes to mind when you think about this?"
  - IF the person writes "I remember X" THEN say "Does it make you feel happy to recall X?"
- ELIZA has no intelligence itself. Intelligence comes from people interpreting its statements.
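To make the rule-based character of ELIZA concrete, here is a minimal Python sketch of this If-Then pattern-matching style. The two rules mirror the examples above, but the regular expressions, the fallback reply, and the function name eliza_reply are illustrative assumptions, not Weizenbaum's original code.

# Minimal sketch of ELIZA-style If-Then rules (illustrative only).
import re

RULES = [
    # IF the person mentions "father" THEN ask about the rest of the family
    (re.compile(r"\bfather\b", re.IGNORECASE),
     lambda m: "Who else in your family comes to mind when you think about this?"),
    # IF the person writes "I remember X" THEN reflect X back as a question
    (re.compile(r"\bI remember (.+)", re.IGNORECASE),
     lambda m: f"Does it make you feel happy to recall {m.group(1)}?"),
]

def eliza_reply(sentence: str) -> str:
    """Return the response of the first matching rule, or a generic prompt."""
    for pattern, respond in RULES:
        match = pattern.search(sentence)
        if match:
            return respond(match)
    return "Please tell me more."

print(eliza_reply("I keep arguing with my father."))
print(eliza_reply("I remember my first day of school"))

Note that the "intelligence" here is entirely in the hand-written rules and in the reader's interpretation of the canned replies, which is exactly the point made on the slide.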
7 Example of modern conversational agents
http://testing.turinghub.com/
- The following is a fragment where A = your instructor and B = a very good artificial agent:
  A: are you human?
  B: What do you think? I am a Supreme Ruler.
  A: seriously
  B: I am always serious.
  A: who was Turing?
  B: A brilliant and tragic figure in the history of computer science.
  A: Do you think the Turing test is a good one?
  B: Some of them can.
  A: are you having a good day?
  B: I am having fun.
  A: How do you know you are having fun?
  B: By introspection.
8 Future of AI
- Computer chip capacity and processing speed are increasing exponentially
- Some theorists (e.g., Ray Kurzweil) believe this will lead to a technological singularity along with dramatic improvements in AI
9 Computational Modeling
- Most modeling in cognitive science targets natural intelligence
- Goal is to model or mimic some aspects of human cognitive functioning
  - produce the same errors as humans
- → Simulations of aspects of human behaviour
10 Why do we need computational models?
- Makes vague verbal terms specific
- Provides the precision needed to specify complex theories
- Provides explanations
- Obtains quantitative predictions
  - Just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings
11 Neural Networks
12 Neural Networks
- Alternative to traditional information-processing models
- Also known as
  - PDP (parallel distributed processing) approach
  - Connectionist models
David Rumelhart
Jay McClelland
13 Neural Networks
- Neural networks are networks of simple processors that operate simultaneously
- Some biological plausibility
14 Idealized neurons (units)
(Figure: inputs → Σ processor → output; an abstract, simplified description of a neuron)
15 Neural Networks
- Units
- Activation: activity of a unit
- Weight: strength of the connection between two units
- Learning: changing the strength of connections between units
- Excitatory and inhibitory connections
  - correspond to positive and negative weights respectively
16 An example calculation for a single (artificial) neuron
- Diagram showing how the inputs from a number of units are combined to determine the overall input to unit-i (a small numerical sketch follows below)
- Unit-i has a threshold of 1, so if its net input exceeds 1 it will respond with +1, but if the net input is less than 1 it will respond with -1
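Since the slide's figure is not reproduced here, the following Python sketch shows the kind of calculation described: the unit sums its weighted inputs and compares the net input to the threshold of 1. The particular input values and weights for J1 through J4 are assumptions chosen for illustration, not the numbers from the original diagram, so the printed outputs apply only to these assumed weights.

# Minimal sketch of the threshold unit described above: unit-i sums its weighted
# inputs and outputs +1 if the net input exceeds the threshold, otherwise -1.
def threshold_unit(inputs, weights, threshold=1.0):
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net_input > threshold else -1

# Four inputs J1..J4 with illustrative (assumed) weights.
inputs  = [1, 1, 1, 1]          # J1, J2, J3, J4
weights = [0.5, 0.5, 1.0, -0.5]

print(threshold_unit(inputs, weights))   # net input = 1.5 -> output +1

# Flipping J3 from 1 to -1 removes a large positive contribution:
inputs[2] = -1
print(threshold_unit(inputs, weights))   # net input = -0.5 -> output -1

With these assumed weights, flipping an input that carries a large positive weight (J3) pushes the net input below the threshold, while flipping an input with a small weight may leave the output unchanged; this is the logic behind the questions on the next two slides.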
17
- What would happen if we change the input J3 from 1 to -1?
  - output changes to -1
  - output stays at 1
  - do not know
- What would happen if we change the input J4 from 1 to -1?
  - output changes to -1
  - output stays at 1
  - do not know
18
- If we want a positive correlation between the output and input J3, how should we change the weight for J3?
  - make it negative
  - make it positive
  - do not know
19 Multi-layered Networks
(Figure: input units → hidden units → output units)
- Activation flows from a layer of input units through a set of hidden units to output units
- Weights determine how input patterns are mapped to output patterns
20 Multi-layered Networks
(Figure: input units → hidden units → output units)
- Network can learn to associate output patterns with input patterns by adjusting weights
- Hidden units tend to develop internal representations of the input-output associations
- Backpropagation is a common weight-adjustment algorithm (a small worked sketch follows below)
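Below is a minimal Python/NumPy sketch of how backpropagation adjusts the weights of a multi-layered network. The task (learning XOR), the network size (2 inputs, 4 hidden units, 1 output), the learning rate, and the number of training epochs are illustrative assumptions; the slide itself does not specify a particular training problem.

# Minimal backpropagation sketch on XOR (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights: input -> hidden (2x4) and hidden -> output (4x1), plus biases.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(10000):
    # Forward pass: activation flows from input units through hidden units to output units.
    H = sigmoid(X @ W1 + b1)          # hidden-unit activations
    Y = sigmoid(H @ W2 + b2)          # output-unit activations

    # Backward pass: propagate the error back and adjust the weights.
    dY = (Y - T) * Y * (1 - Y)        # error signal at the output layer
    dH = (dY @ W2.T) * H * (1 - H)    # error signal at the hidden layer
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # outputs should approach 0, 1, 1, 0 (depends on the random start)

After training, the hidden units have developed an internal recoding of the inputs that makes the otherwise non-separable XOR mapping learnable, which is the point made in the second bullet above.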
21 A classic neural network: NETtalk
- The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.
(after Hinton, 1989)
22 Other Demos & Tools
- If you are interested, here is a tool to create your own neural network and train it on data
- Hopfield network
  - http://www.cbu.edu/pong/ai/hopfield/hopfieldapplet.html
- Backpropagation algorithm and competitive learning
  - http://www.psychology.mcmaster.ca/4i03/demos/demos.html
- Competitive learning
  - http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
- Various networks
  - http://diwww.epfl.ch/mantra/tutorial/english/
- Optical character recognition
  - http://sund.de/netze/applets/BPN/bpn2/ochre.html
- Brain-wave simulator
  - http://www.itee.uq.edu.au/~cogs2010/cmc/home.html
23 Recent Neural Network Research (since 2006)
- Deep neural networks by Geoff Hinton
  - Demos of learning digits
  - Demos of learning faces
  - Demos of learned movements
- What is new about these networks?
  - They can stack many hidden layers
  - They can capture more regularities in data and generalize better
  - Activity can flow from input to output and vice versa
Geoff Hinton
In case you want to see more details: YouTube video
24 Different ways to represent information with neural networks: localist representation
(Figure: six units, Unit 1 through Unit 6; each row below is one concept's activation pattern across the units)
concept 1: 1 0 0 0 0 0
concept 2: 0 0 0 1 0 0
concept 3: 0 1 0 0 0 0
(activations of units: 0 = off, 1 = on)
- Each unit represents just one item → "grandmother cells"
25 Distributed Representations (aka Coarse Coding)
(Figure: the same six units; each row below is one concept's activation pattern across the units)
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units: 0 = off, 1 = on)
- Each unit is involved in the representation of multiple items (see the sketch below)
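To make the contrast concrete, here is a small Python/NumPy sketch using the localist patterns from the previous slide and the distributed patterns above. The column-sum check simply counts how many concepts each unit participates in; the code itself is an illustration, not part of the slides.

# Localist vs. distributed coding of the same three concepts over six units.
import numpy as np

localist = np.array([[1, 0, 0, 0, 0, 0],    # concept 1
                     [0, 0, 0, 1, 0, 0],    # concept 2
                     [0, 1, 0, 0, 0, 0]])   # concept 3

distributed = np.array([[1, 1, 1, 0, 0, 0],  # concept 1
                        [1, 0, 1, 1, 0, 1],  # concept 2
                        [0, 1, 0, 1, 0, 1]]) # concept 3

# How many concepts does each unit participate in?
print(localist.sum(axis=0))     # [1 1 0 1 0 0] -> each unit codes at most one concept
print(distributed.sum(axis=0))  # [2 2 2 2 0 2] -> most units code several concepts

In the localist scheme a unit is a "grandmother cell" for a single concept, whereas in the coarse-coded scheme each active unit is shared across several concepts.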
26 Suppose we lost unit 6
(Figure: the same six units, with unit 6 removed)
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units: 0 = off, 1 = on)
- Can the three concepts still be discriminated?
  - NO
  - YES
  - do not know
27 Representation A vs. Representation B

        Representation A                     Representation B
     Unit 1  Unit 2  Unit 3  Unit 4       Unit 1  Unit 2  Unit 3  Unit 4
  W    1       0       0       0            1       0       0       1
  X    1       0       0       0            0       1       1       0
  Y    1       0       0       0            0       1       0       1
  Z    1       0       0       0            1       0       1       0

- Which representation is a good example of distributed representation?
  - representation A
  - representation B
  - neither
28 Advantages of Distributed Representations
- Efficiency
  - Solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible. (e.g., how many English words can be formed from combinations of the 26 letters of the alphabet?)
- Damage resistance
  - Even if some units do not work, information is still preserved because it is distributed across the network; performance degrades gradually as a function of damage
  - (aka robustness, fault tolerance, graceful degradation)
(Both points are illustrated in the sketch below.)
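Both advantages can be checked with a few lines of Python/NumPy. The capacity figures follow directly from 2^n; the damage-resistance check reuses the distributed patterns from the earlier slides and knocks out unit 6, as in the "Suppose we lost unit 6" question. The code is an illustrative sketch, not material from the slides.

# Efficiency: with n binary units, 2**n distinct patterns are possible.
import numpy as np

for n in (6, 10, 20):
    print(n, "units ->", 2 ** n, "possible patterns")   # 64, 1024, 1048576

# Damage resistance: delete unit 6 and check that the concepts remain distinct.
distributed = np.array([[1, 1, 1, 0, 0, 0],   # concept 1
                        [1, 0, 1, 1, 0, 1],   # concept 2
                        [0, 1, 0, 1, 0, 1]])  # concept 3
damaged = distributed[:, :5]                  # drop the last column (unit 6)
rows = {tuple(r) for r in damaged}
print(len(rows) == len(damaged))              # True: still three distinct patterns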
29 Neural Network Models
- Inspired by real neurons and brain organization, but highly idealized
- Can spontaneously generalize beyond the information explicitly given to the network
- Can retrieve information even when the network is damaged (graceful degradation)
- Networks can be taught: learning is possible by changing the weighted connections between nodes