Title: Machine as Mind
Machine as Mind
- Herbert A. Simon
- 99132-801
Contents
- 1. Introduction
- 2. Nearly-Decomposable Systems
- 3. The Two Faces of AI
- 4. The View from Psychology
- 5. The Matter of Semantics
- 6. Ill-Structured Phenomena
- 7. The Processing of Language
- 8. Affect, Motivation, and Awareness
- 9. Conclusion: Computers Think -- and Often Think like People
1. Introduction
- I will proceed from what psychological research has learned about the human mind to the characteristics we must bestow upon computer programs when we wish those programs to think. (Sections 4-8)
- By "mind," I mean a system that produces thought, viewed at a relatively high level of aggregation. (Section 2)
- The level of aggregation at which we model phenomena
- The primitives of mind are symbols, complex structures of symbols, and processes that operate on symbols (requiring at least tens of milliseconds). At this level, the same software can be implemented with different kinds of hardware.
- Central thesis
- At this level of aggregation, conventional computers can be, and have been, programmed to represent symbol structures and to carry out processes on those structures in a manner that parallels the way the human brain does it.
- Principal evidence: programs that do just that.
- Objection: computer simulation of thinking is no more thinking than a simulation of digestion is digestion.
- The analogy is false. The materials of digestion are chemical substances, which are not replicated in a computer simulation; but the materials of thought are symbols, which can be replicated in a great variety of materials (including neurons and chips).
2. Nearly-Decomposable Systems
- Most complex systems are hierarchical and nearly decomposable. (E.g. a building - rooms - cubicles.)
- Nearly-decomposable systems can be analyzed at a particular level of aggregation without detailed knowledge of the structures at the levels below. Only aggregate properties of the more microscopic subsystems affect behavior at the higher level.
- Because the mind behaves as a nearly-decomposable system, we can model thinking at the symbolic level, without concern for details of implementation at the hardware level. (A numerical sketch follows.)
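To make the idea concrete, here is a minimal numerical sketch (the room layout, coupling constants, and temperatures are invented for illustration): heat equilibrates quickly among cubicles within a room and slowly between rooms, so after a short time only each room's mean temperature, an aggregate property, matters at the higher level.

```python
# A nearly-decomposable system: 2 rooms x 3 cubicles. Coupling is strong
# within a room (fast equilibration) and weak between rooms.
STRONG, WEAK = 1.0, 0.01

def step(T, dt=0.05):
    """One Euler step of pairwise heat exchange between cubicles."""
    return [
        Ti + dt * sum(
            (STRONG if i // 3 == j // 3 else WEAK) * (Tj - Ti)
            for j, Tj in enumerate(T) if j != i
        )
        for i, Ti in enumerate(T)
    ]

T = [30.0, 10.0, 20.0, 80.0, 60.0, 70.0]  # initial cubicle temperatures
for _ in range(200):
    T = step(T)

# Cubicles within each room now agree almost exactly: only each room's
# mean temperature (an aggregate property) matters at the room level.
print([round(t, 2) for t in T])
print("room means:", [round(sum(T[:3]) / 3, 2), round(sum(T[3:]) / 3, 2)])
```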
3. The Two Faces of AI
- AI can be approached in two ways.
- First, we can write programs without any commitment to imitating the processes of human intelligence. (E.g. DEEPTHOUGHT.)
- Alternatively, we can write programs that closely imitate the human processes. (E.g. MATER; Baylor and Simon 1966.)
- Chess-playing programs illustrate the two approaches.
- DEEPTHOUGHT does not play in a humanoid way, typically exploring some 10^7 branches of the game tree before it makes its choice of move. DEEPTHOUGHT rests on a combination of brute force, unattainable by human players, and extensive but mediocre chess knowledge.
- Human grandmasters seldom look at more than 100 branches. By searching only the relevant branches, they make up with chess knowledge for their inability to carry out massive searches.
- MATER uses heuristics, so it looks at fewer than 100 branches.
- Because my aim here is to consider machine as mind, the remainder of my remarks is concerned with programs that are intelligent in more or less humanoid ways.
4. The View from Psychology
- How does intelligence look to contemporary cognitive psychology?
- 4.1 Selective Heuristic Search
- Human problem solvers do not carry out extensive searches.
- People use knowledge about the structure of the problem space to form heuristics that allow them to search extremely selectively. (A sketch of such selective search follows.)
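A sketch of what "extremely selective" search looks like, on an invented toy problem (reach a target number from 1 using the moves +3 and x2; the distance-to-target heuristic stands in for stored knowledge of the problem space): best-first search guided by the heuristic expands only a small fraction of the tree.

```python
import heapq
from itertools import count

def best_first(start, target):
    """Heuristic search: always expand the node that looks closest to the goal."""
    tie = count()  # tie-breaker so the heap never has to compare paths
    frontier = [(abs(target - start), next(tie), start, [start])]
    seen, expanded = {start}, 0
    while frontier:
        _, _, n, path = heapq.heappop(frontier)
        expanded += 1
        if n == target:
            return path, expanded
        for succ in (n + 3, n * 2):          # the two legal "moves"
            if succ <= 2 * target and succ not in seen:
                seen.add(succ)
                heapq.heappush(
                    frontier, (abs(target - succ), next(tie), succ, path + [succ]))
    return None, expanded

path, expanded = best_first(1, 101)
print(path)
print("nodes expanded:", expanded)  # far fewer than the full tree
```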
- 4.2 Recognition: The Indexed Memory
- The grandmaster's memory is like a large indexed encyclopedia.
- The perceptually noticeable features of the chessboard (the cues) trigger the appropriate index entries and give access to the corresponding information.
- Solving problems by responding to cues that are visible only to experts is called solving them by intuition (solving by recognition).
- In computers, recognition processes are implemented by productions: the condition sides serve as tests for the presence of cues; the action sides hold the information that is accessed when the cues are noticed. (See the sketch below.)
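A minimal sketch of recognition implemented by productions (the cue names and the indexed advice are invented, not taken from any real chess program): the condition side tests for a cue, and the action side releases the stored information.

```python
# Each production pairs a condition (a test for a cue) with an action
# (the knowledge that the cue indexes). The cue names are illustrative.
productions = [
    (lambda cues: "open_h_file" in cues,     "consider a rook lift to h3"),
    (lambda cues: "weak_back_rank" in cues,  "look for back-rank mate threats"),
    (lambda cues: "isolated_d_pawn" in cues, "blockade the pawn, then attack it"),
]

def recognize(noticed_cues):
    """Fire every production whose condition side matches the noticed cues."""
    return [action for condition, action in productions if condition(noticed_cues)]

print(recognize({"open_h_file", "weak_back_rank"}))
```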
- Items that serve to index semantic memory are called chunks. An expert in any domain must acquire some 50,000 chunks.
- It takes at least 10 years of intensive training for a person to acquire the information required for world-class performance in any domain of expertise.
- 4.3 Seriality: The Limits of Attention
- Problems that cannot be solved by recognition require the application of sustained attention. Attention is closely associated with human short-term memory.
- The need for all inputs and outputs of attention-demanding tasks to pass through short-term memory essentially serializes the thinking process. We can only think of one thing at a time.
- Hence, whatever parallel processes may be going on at lower (neural) levels, at the symbolic level the human mind is fundamentally a serial machine.
- 4.4 The Architecture of Expert Systems
- Human experts
- Search is highly selective; the selectivity is based on heuristics stored in memory.
- The information accessed can be processed further by a serial symbol-processing system.
- AI expert systems
- have fewer chunks than the human experts and make up for the deficiency by doing more computing than people do. The difference is quantitative, not qualitative: both depend heavily upon recognition, supplemented by a little capacity for reasoning (i.e., search). (A sketch of this architecture follows.)
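A compact sketch of that shared architecture (the chunks and the search fallback are toy inventions, not drawn from any real expert system): answer by recognition when a stored chunk matches, and supplement with a little serial search when none does.

```python
# Recognition memory: a few "chunks" indexed by the cues that evoke them.
CHUNKS = {
    ("fever", "rash"): "suspect measles",
    ("fever", "stiff neck"): "suspect meningitis",
}

def solve(cues, depth=2):
    """Recognition first; a small amount of search as a supplement."""
    chunk = CHUNKS.get(tuple(sorted(cues)))
    if chunk:
        return chunk                       # solved "by intuition"
    if depth == 0:
        return None
    for probe in ("rash", "stiff neck"):   # search: hypothesize a missing cue
        answer = solve(set(cues) | {probe}, depth - 1)
        if answer:
            return f"check for {probe}; if present, {answer}"
    return None

print(solve({"fever", "rash"}))   # immediate recognition
print(solve({"fever"}))           # recognition fails; a step of search succeeds
```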
5. The Matter of Semantics
- It is claimed that the thinking of computers is purely syntactical; that is, computers do not have intentions, and their symbols do not have semantic referents.
- The argument is refuted by concrete examples of computer programs that have goals and that demonstrably understand the meanings of their symbols.
- A computer-driven van program has the intention of driving along the road: it creates internal symbols that denote landscape features, interprets them, and uses the symbols to guide its steering and speed-control mechanisms.
- A chess-playing program forms an internal representation that denotes the chess position and intends to beat its opponent.
- There is no mystery about semantics and human intentions.
- "Semantic" means that there is a correspondence, a relation of denotation, between symbols inside the head and objects outside; and the two programs above have goals.
- It may be objected that the computer does not understand the meaning of its symbols, or the semantic operations on them, or the goals it adopts.
- The word "understand" has something to do with consciousness of meanings and intentions. But my evidence that you are conscious is no better than my evidence that the road-driving computers are conscious.
- Semantic meaning: a correspondence between the symbol and the thing it denotes.
- Intention: a correspondence between the goal symbol and behavior appropriate to achieving the goal.
- Searle's Chinese Room parable proves not that computer programs cannot understand Chinese, but only that the particular program Searle described does not understand Chinese.
- Had he described a program that could receive inputs from a sensory system and emit the symbol "cha" in the presence of tea, we would have to admit that it understood a little Chinese.
6. Ill-Structured Phenomena
- "Ill-structured" means
- that the task has ill-defined or multi-dimensional goals,
- that its frame of reference or representation is not clear or obvious,
- that there are no clear-cut procedures for generating search paths or evaluating them.
- E.g. the use of natural language, learning, scientific discovery.
- When a problem is ill-structured, a first step is to impose some kind of structure that allows it to be represented at least approximately.
- What does psychology tell us about problem representations?
- 6.1 Forms of Representation
- Propositional representations: situations may be represented in words or in logical or mathematical notations. The processing will resemble logical reasoning or proof.
- Pictorial representations: situations may be represented in diagrams or pictures, with processes to move them through time or to search through a succession of their states.
- Most psychological research on representations assumes one of the two forms mentioned.
- 6.2 Equivalence of Representations
- What consequences does the form of representation have for cognition?
- Informational vs. computational equivalence
- Two representations are informationally equivalent if either one is logically derivable from the other, i.e., if all the information available in the one is available in the other.
- Two representations are computationally equivalent if all the information easily available in the one is easily available in the other.
- Information is easily available if it can be obtained from the explicit information with a small amount of computation (small relative to the capacities of the processor).
- E.g. Arabic and Roman numerals are informationally equivalent, but not computationally equivalent. (See the sketch below.)
- E.g. representations of the same problem: as a set of declarative propositions in PROLOG, or as a node-link diagram in LISP.
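The numeral example made concrete (a sketch; the parser is a standard subtractive-notation reader, written here only for illustration): the two notations denote the same numbers, but arithmetic that is a column-by-column routine in Arabic notation effectively requires a conversion step in Roman notation.

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Recover the denoted value; subtractive pairs (IV, IX, ...) need look-ahead."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        # A smaller numeral written before a larger one is subtracted.
        if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

# Informationally equivalent: "XIV" and 14 denote the same number.
# Not computationally equivalent: to add Roman numerals we first do the
# work of conversion, while Arabic addition proceeds digit by digit.
print(roman_to_int("XIV") + roman_to_int("XXVIII"))  # 14 + 28 = 42
```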
- 6.3 Representations Used by People
- There is much evidence that people use mental pictures to represent problems, but little evidence that they use propositions in the predicate calculus.
- Even in problems with mathematical formalisms, the processes resemble heuristic search more than logical reasoning.
- In algebra and physics, subjects typically convert a problem from natural language into diagrams and then into equations.
- E.g. experiments that present a picture (a star and a plus) together with a sentence, "The star is above/below the plus."
- Whatever the form of representation, the processing of information resembles heuristic search rather than theorem proving.
- 6.4 Insight Problems ("Aha!" experiences)
- Problems that tend to be solved suddenly, after a long period of fruitless struggle.
- The insight that leads to a change in representation, and to the solution of the mutilated-checkerboard problem, can be explained by mechanisms of attention focusing. (A sketch of the key representation follows.)
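A few lines suffice to show what the changed representation buys (a sketch of the standard parity argument, not of any particular simulation program): once each square is represented by its color, and one notices that a domino always covers one square of each color, the impossibility is read off without search.

```python
# Mutilated checkerboard: remove two diagonally opposite corners of an
# 8x8 board. Can the remaining 62 squares be covered by 31 dominoes?
squares = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}

# The insightful representation: attend to the color of each square.
white = sum((r + c) % 2 == 0 for r, c in squares)
black = sum((r + c) % 2 == 1 for r, c in squares)

# Every domino covers one white and one black square, so a tiling needs
# equal counts. The counts differ, so no tiling exists: no search required.
print(white, black)  # 30 32
```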
- The representations people use (both propositional and pictorial) can be simulated by computers.
7. The Processing of Language
- Whatever the role it plays in thought, natural language is the principal medium of communication between people.
- Far more has been learned about the relation between natural language and thinking from computer programs that use language inputs or outputs to perform concrete tasks.
- 7.1 Some Programs that Understand Language
- Novak's ISAAC program (1977) extracts the information from natural-language descriptions of physics problems and transforms it into an internal semantic representation suitable for a problem-solving system.
- Hayes and Simon's UNDERSTAND program (1974) reads natural-language instructions for puzzles and creates internal representations ("pictures") of the problem situations and interpretations of the puzzle rules for operating on them.
- These programs give us specific models of how people extract meaning from discourse by using the semantic knowledge in memory.
- 7.2 Acquiring Language
- Siklossy's ZBIE program (1972) was given (internal representations of) a simple picture (a dog chasing a cat) and a sentence describing the scene.
- With the aid of a carefully designed sequence of such examples, it gradually learned to associate nouns with the objects in the pictures, and other words with their properties and relations. (A toy sketch of such learning follows.)
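A toy sketch of that kind of learning (the scenes and sentences are invented, and ZBIE itself worked on structured internal representations rather than raw word lists): intersecting the sets of objects present whenever a word occurs gradually pins each noun to its referent.

```python
# Each example pairs the objects in a scene with a sentence describing it.
examples = [
    ({"dog", "cat"},   "the dog chases the cat"),
    ({"dog", "ball"},  "the dog has a ball"),
    ({"cat", "mouse"}, "the cat sees a mouse"),
]

# A word's candidate referents are the objects present in every scene
# in which the word has appeared so far.
hypotheses = {}
for objects, sentence in examples:
    for word in sentence.split():
        hypotheses.setdefault(word, set(objects))
        hypotheses[word] &= objects

print(hypotheses["dog"])   # {'dog'}  -- pinned down by two scenes
print(hypotheses["cat"])   # {'cat'}
print(hypotheses["ball"])  # still ambiguous: more examples are needed
```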
- 7.3 Will Our Knowledge of Language Scale?
- These illustrations involve relatively simple language with a limited vocabulary.
- To demonstrate an understanding of human thinking, we do not need to model thinking in the most complex situations we can imagine. It is enough that our theory explains the phenomena in a range of situations that would call for genuine thinking in humans.
- 7.4 Discovery and Creativity
- Making scientific discoveries is both ill-structured and creative. These activities have been simulated by computer.
- The BACON program (Simon et al. 1987): when given the data available to the scientists in historically important situations, it has rediscovered Kepler's Third Law, among other laws. (A sketch of its core heuristic follows.)
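For a law of Kepler's form, BACON's heuristic of repeatedly forming ratios and products of co-varying terms amounts to finding a product of small integer powers that stays constant across the data. A minimal sketch of that search (planetary data rounded; BACON's actual control structure is considerably richer):

```python
from itertools import product

# Distance from the sun (AU) and orbital period (years) for five planets.
D = [0.387, 0.723, 1.000, 1.524, 5.203]
P = [0.241, 0.615, 1.000, 1.881, 11.862]

def nearly_constant(values, tol=0.02):
    """True if the spread of the values is small relative to their mean."""
    return (max(values) - min(values)) / (sum(values) / len(values)) < tol

# Look for an invariant of the form D^a * P^b with small integer exponents.
for a, b in product(range(-3, 4), repeat=2):
    if (a, b) <= (0, 0):
        continue  # skip the trivial case and sign-flipped duplicates
    term = [d**a * p**b for d, p in zip(D, P)]
    if nearly_constant(term):
        print(f"invariant found: D^{a} * P^{b}", [round(v, 3) for v in term])
# -> D^3 * P^-2 is (nearly) constant: Kepler's Third Law.
```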
- The KEKADA program (Simon et al. 1988) plans experimental strategies, responding to the information gained from each experiment to plan the next one. It is able to track Faraday's strategy.
- Programs like BACON and KEKADA show that scientists use essentially the same kinds of processes as those identified in more prosaic kinds of problem solving.
8. Affect, Motivation, and Awareness
- Motivation selects particular tasks for attention and diverts attention from others.
- If affect and cognition interact largely through the mechanisms of attention, then it is reasonable to pursue our research on these two components of mental behavior independently.
- Many of the symbolic processes are in conscious awareness, and awareness has implications for the ease of testing.
9. Conclusion: Computers Think -- and Often Think like People
- Computers can be programmed, and have been programmed, to simulate at a symbolic level the processes that are used in human thinking.
- The human mind does not reach its goals mysteriously or miraculously. Even its sudden insights are explainable in terms of recognition processes, well-informed search, and changes in representation motivated by shifts in attention.