Title: Why General Artificial Intelligence (AI) is so Hard
1. Why General Artificial Intelligence (AI) is so Hard
- Theo Pavlidis
- Distinguished Professor Emeritus
- Dept. of Computer Science
- t.pavlidis_at_ieee.org
- http://theopavlidis.com
2. Definitions of Artificial Intelligence (AI)
- General or Strong AI: A machine that replicates the functionality of the human brain. "Around the corner" since about 1945.
- Narrow or Weak AI: A machine that does a specific task that traditionally has been done by humans. Each specific application is treated as a separate engineering problem. Numerous successes.
3. Successes in Narrow AI (Seen in Daily Life)
- Restricted Speech Recognition (in banking and airline reservation systems, etc.)
- Credit Card Fraud Detection
- Web Tools (Shopping Suggestions, Mechanical Translation, etc.)
- Simple Robots (the Roomba house cleaner)
- 1D and 2D Bar Codes (in stores and in shipping)
4. Successes in Narrow AI (Not Seen Every Day)
- Chess Playing Machines
- Optical Character Recognition
- Industrial Inspection
- Biometrics (Fingerprints, Iris, etc)
- Medical Diagnosis
5. Features of Narrow AI
- Each Problem is Solved Separately, even though certain common mathematical tools may be used (statistics, graph theory, signal processing, etc.).
- Each Solution Relies Heavily on Specific Environment Constraints, and performance (compared to that of humans) drops when these constraints are relaxed.
6. Why Not General AI?
- Why waste time with all the special cases and not solve the general problem once and for all?
- Why not use a brain model to solve all these problems?
- Are advances in general computer technology (hardware, systems) likely to help? Why not wait for them rather than solving problems piecemeal?
7. Humans may be machines, but they are very different from computers
8. Understanding the Difference between Humans and Computers
- We will start by looking at the problem of
content-based image retrieval to obtain an
understanding of the difference.
9. Content-based Image Retrieval (CBIR)
- Given an image, find those that are similar to it in a database of images. (If the images are labeled, the problem is reduced to text search.)
- Systems do not perform as advertised. For a collection of critical writings see http://www.theopavlidis.com/technology/CBIR/index.htm
- The difficulty of image retrieval should be contrasted with the success of text retrieval, not only Google but also earlier programs such as the Unix grep (see the sketch below).
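To make the contrast concrete, here is a minimal, hypothetical sketch of why literal text retrieval is easy: the query string can be matched directly against the stored bytes, grep-style, with no interpretation needed. The file names and contents are made up for illustration.

```python
# Hypothetical document collection; contents are made up for illustration.
documents = {
    "doc1.txt": "the banking crisis dominated the news",
    "doc2.txt": "a recipe for lemon cake",
}

def search(query, docs):
    """Return the names of documents whose text literally contains the query."""
    return [name for name, text in docs.items() if query in text]

print(search("banking", documents))   # ['doc1.txt']
```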
10. Example
11. Reasons for the Poor Results in Machine Vision and CBIR
- Images are represented by statistics of pixel values (e.g. a color histogram, a texture histogram, etc.); see the sketch below.
- Such statistics are unrelated to human perception.
- Papers describing CBIR methods use trivial queries (e.g. "show me all pictures with a lot of green").
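A minimal sketch of the kind of pixel statistic the first bullet refers to, written in plain Python with made-up pixel data and a hypothetical 8-buckets-per-channel histogram. It shows why such statistics are unrelated to perception: scrambling the pixels destroys the picture completely, yet the histogram does not change at all.

```python
import random

def color_histogram(pixels, buckets=8):
    """pixels: list of (r, g, b) tuples, each channel in 0..255."""
    hist = [0] * (buckets ** 3)
    step = 256 // buckets
    for r, g, b in pixels:
        hist[(r // step) * buckets * buckets + (g // step) * buckets + (b // step)] += 1
    n = float(len(pixels))
    return [h / n for h in hist]          # normalize so image size cancels out

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# A stand-in "picture": mostly grass-green with a brown patch (made-up data).
picture = [(30, 160, 40)] * 900 + [(120, 80, 40)] * 100
# Shuffling the pixels turns it into a meaningless jumble, but the statistic
# cannot tell the difference: the histogram distance is exactly 0.
scrambled = random.sample(picture, len(picture))
print(histogram_distance(color_histogram(picture), color_histogram(scrambled)))  # 0.0
```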
12. Perceptual versus Computational Similarity
- Two pictures may differ a lot in their pixel values but appear similar to a person. (They have the same meaning.)
- Two pictures may differ in very few pixels but have different meanings. (Face portraits of two different people in front of the same background.)
13. Perceptual versus Computational Similarity
(Figure: two image pairs, one labeled "Perceptually close," the other "Pixel-wise close.")
14. Text versus Pictures
- In text files each byte (or two) is a numerical code for a character. Therefore strings of bytes correspond to words that carry semantic meaning.
- In pictures each byte (or group thereof) represents the color at a particular location (pixel). Pixels are quite far from the components that have a semantic meaning. (A sketch of the contrast follows below.)
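A minimal sketch of this contrast, using made-up byte strings: the text bytes decode directly into words (units of meaning), while the image bytes are only colors at locations, with nothing in them that says "dog."

```python
# Text: bytes decode straight into words that carry meaning.
text_bytes = b"the dog ran home"
words = text_bytes.decode("ascii").split()     # ['the', 'dog', 'ran', 'home']

# Picture: a few raw pixel bytes (made up) are just (r, g, b) color triples.
pixel_bytes = bytes([120, 80, 40, 122, 81, 39, 119, 79, 41])
pixels = [tuple(pixel_bytes[i:i + 3]) for i in range(0, len(pixel_bytes), 3)]

print(words)    # units of meaning
print(pixels)   # colors at locations, far removed from any object
```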
15. We do not do that well in text!
- It is hard to search for concepts unless we can map the concepts into words.
- Example 1: Find all articles critical of the government policy in dealing with the banking crisis.
- Example 2: Find all articles about a dog named Lucy. Amongst the Google returns was an article with the phrase "Lucy and I spent the weekend alone together. We have a dog named Kyler."
16. Human Intelligence made simple
(Diagram: for a human, Input → Concept → Output; for a computer, Input → Output.)
17. The Big Difference
- The transformation of input to concept is a complex process (binding), barely understood by neuroscientists. (In spite of claims to the opposite by some computer scientists.)
- It is hard to develop algorithms for a barely understood process.
- Humans can transform concepts into formal entities (words in a language) and then code them in computer-readable form.
- Computers can deal with such formal input.
18. What Neuroscientists Say
- "Perceptions emerge as a result of reverberations of signals between different levels of the sensory hierarchy, indeed across different senses." The author then goes on to criticize the view that sensory processing involves a one-way cascade of information processing.
- Source: V.S. Ramachandran and S. Blakeslee, Phantoms in the Brain, William Morrow and Company Inc., New York, 1998 (p. 56).
19. What Do You See?
20. Reading Demo - 1
21. Reading Demo - 1
Tentative binding on the letter shapes (bottom up) is finalized once a word is recognized (top down). Word shape and meaning override early cues.
22. Reading Demo - 2
- New York State lacks proper facilities for the mentally III.
- The New York Jets won Superbowl III.
- Human readers may ignore entirely the shape of individual letters if they can infer the meaning through context.
23. The Importance of Context
- Human intelligence almost always thrives on context, while computers work on abstract numbers alone. Independence from context is in fact a great strength of mathematics.
- Source: Arno Penzias, Ideas and Information, Norton, 1989, p. 49.
24. The Challenges
- We need to replicate complex transformations that the (human/animal) brain has evolved to do over millions of years.
- We have to deal with the fact that the processing is not unidirectional and is also affected by factors other than the input (context). (Such factors cause visual illusions.)
25. A time scale
- The human visual system has evolved from animal visual systems over a period of more than 100 million years.
- Speech is barely over 100 thousand years old.
- Written text is no more than 10 thousand years old.
26. A note on brain models
- There is a history of considering the latest technology to be a model of the human brain; for example, in the 16th century irrigation networks were considered to be models of the brain.
- If someone claims to have a machine modeling the human brain, ask how the machine could be modified to model the brain of a dog (since a dog cannot learn to write poetry, play chess, etc.).
27. A Note on Neural Nets
Is this a model of the brain?
As much as a table is a model of a dog.
28. Simplified model of a small part of the brain
29. A Dubious Approach
- Training on large numbers of samples has been used as a way around having to understand what is going on.
- But humans (and animals) do not need to be trained on large numbers of samples.
- Rats trained to distinguish between a square and a rectangle perform quite well when faced with skinnier rectangles. They have the concept of a rectangle!
30. Distinguish Rectangles from Squares: The Artificially Intelligent Approach
- Take a hundred (or more) pictures of rectangles and squares, compute several statistics on each picture, and for each picture create a feature vector F. Then compute a vector W so that F·W > 0 for squares and F·W < 0 for rectangles. (See the sketch below.)
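A minimal sketch of this approach, using a perceptron-style learning rule. The feature choices and the synthetic training data are stand-ins assumed for illustration, not the author's actual measurements.

```python
import random

def features(width, height):
    # Hypothetical per-picture statistics; a real system would compute many more.
    return [1.0, width / height, abs(width - height) / max(width, height)]

def make_samples(n):
    """Synthetic feature vectors F with labels +1 (square) / -1 (rectangle)."""
    samples = []
    for _ in range(n):
        w = random.uniform(10, 100)
        samples.append((features(w, w), +1))                            # square
        samples.append((features(w, w * random.uniform(1.5, 3)), -1))   # rectangle
    return samples

def train_perceptron(samples, epochs=50, lr=0.1):
    """Find W so that label * (F.W) > 0 for every training sample."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for f, label in samples:
            if label * sum(wi * fi for wi, fi in zip(w, f)) <= 0:
                w = [wi + lr * label * fi for wi, fi in zip(w, f)]
    return w

W = train_perceptron(make_samples(100))
print(W)   # decision rule: call the shape a square if F.W > 0, else a rectangle
```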
31. Distinguish Rectangles from Squares: The Natural Approach
- Find the outline of a shape (if one exists in the picture) and fit a rectangle to it. Then compute the aspect ratio of the rectangle. If it is near 1 (for some given tolerance), then it is called a square, otherwise a rectangle. (See the sketch below.)
- Criticism: Method lacks generality!!!
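A minimal sketch of the natural approach, assuming the bounding rectangle has already been fitted to the outline (that step is not shown); the tolerance value is an assumption.

```python
def classify(width, height, tolerance=0.1):
    """Decide square vs. rectangle from the fitted rectangle's aspect ratio."""
    aspect = min(width, height) / max(width, height)   # always in (0, 1]
    return "square" if aspect > 1.0 - tolerance else "rectangle"

print(classify(50, 52))    # square
print(classify(50, 120))   # rectangle
print(classify(50, 500))   # rectangle: arbitrarily skinny shapes work with no training
```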
32. No Generality in Nature
- The animal visual system has many special areas for visual tasks (about 30 in the human case).
- We have already seen examples where high-level (context) recognition quickly takes over from the low-level data processing.
33. Negator of Generality
34. The Learning Machine (Neural Net) Approach
- It has the appeal of getting something for nothing, so it is kept alive.
- We can solve a problem without really understanding it.
- Give a learning machine enough samples and a classifier will be found!!!
- (Forget about the rat, who only needs two samples.)
35. Criteria for Choosing a Problem to Work on
- Context should either be known or not important.
- Processing of the input should be relatively simple (it should be clear what kind of information we need to extract).
- For an example relying heavily on context see technology/BoxDimensions/overview.htm on my web site.
- Comments on major areas in the next few slides.
36. Speech Recognition
- Grammar-driven models (using low-level context) have been quite successful.
- High-level context is even better, for example matching a speech fragment to a name on a list (see the sketch below).
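A minimal sketch of that second kind of context, using Python's standard difflib for approximate matching. The passenger list and the garbled transcription are made up for illustration.

```python
import difflib

# High-level context: the answer must be one of these names.
passenger_names = ["Pavlidis", "Peterson", "Papadakis", "Paulson"]
heard = "pavlitis"   # hypothetical garbled output of the acoustic front end

# The list supplies the context the audio alone lacks.
match = difflib.get_close_matches(heard.capitalize(), passenger_names, n=1)
print(match)   # ['Pavlidis']
```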
37. Optical Character Recognition (OCR)
- Printed text characters have small shape variability and high contrast with the background.
- Spelling checkers (or ZIP code directories in postal applications) introduce low-level context (see the sketch below).
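A minimal sketch of such low-level context: a raw classifier output is corrected by picking the closest dictionary entry, which is exactly the role a spelling checker or a ZIP code directory plays. The tiny word list and the OCR error are made up.

```python
import difflib

dictionary = ["recognition", "recommendation", "restriction"]
raw_ocr = "rec0gniti0n"   # the character classifier confused 'o' with '0'

# The dictionary, not the shapes alone, decides what was printed.
corrected = difflib.get_close_matches(raw_ocr, dictionary, n=1, cutoff=0.5)
print(corrected)   # ['recognition']
```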
38. An example of heavy use of context
- Reading of the checks sent for payment to American Express.
- Because payments are supposed to be in full and the amount due is known, the number written on a check is analyzed to confirm whether it matches the amount due or not (see the sketch below).
- (But direct payment is used more and more!)
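A minimal sketch of this verification-style use of context: the recognizer only has to answer "does this look like the known amount due?", a far easier question than reading an arbitrary amount. The candidate readings and the confidence threshold are hypothetical.

```python
def matches_amount_due(candidate_readings, amount_due, min_confidence=0.8):
    """candidate_readings: list of (amount, confidence) pairs from the recognizer."""
    return any(amount == amount_due and conf >= min_confidence
               for amount, conf in candidate_readings)

print(matches_amount_due([(125.40, 0.91), (125.48, 0.55)], 125.40))   # True
print(matches_amount_due([(19.99, 0.95)], 125.40))                    # False
```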
39. An Aside: Why did OCR mature when the need for it had diminished?
- The algorithms used in the products of the 1990s were known earlier, but they were too complex to be implemented effectively with the digital technology of earlier times.
- When computer hardware became cheap enough for good OCR, it also became cheap enough for direct text entry through PCs and the Internet.
- Keep this in mind in your business plans!
40. Face Recognition
- It took over thirty years to build machines of acceptable quality that recognize printed symbols. What makes us think that we can solve the much more complex problem of distinguishing human faces?
- Neuroscientists point out that humans have special neural circuitry for face recognition.
41. How do these two faces differ?
42. How about these two?
43. Face Recognition and Scalability
- The population samples in published studies are relatively small and include men and women of different races with different hairstyles, etc.
- I have never seen a study where all the subjects are similar: for example, white blond men between the ages of 20 and 30 with long hair and beards.
- Subjects in published studies are cooperative.
44. How About Deep Blue?
- In 1997 a chess machine (IBM's Deep Blue) beat the human world champion Garry Kasparov.
- This resulted in a lot of publicity about how computers had become smarter than humans.
45. However
- Chess is a deterministic game, so in principle a computer could derive a winning strategy analytically. On the other hand, the number of possible positions is so large (about 10^120) that even the fastest available computer would take billions of years to consider all possible moves.
- Skilled players may look 20 moves ahead by pruning, i.e. ignoring non-promising moves (see the sketch below).
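A minimal sketch of the pruning idea on a made-up two-ply game tree: plain minimax would visit every leaf, while alpha-beta pruning skips branches that cannot change the result. A real chess program adds an evaluation function, a depth limit, and careful move ordering; none of that is shown here.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of the tree, skipping branches that cannot matter."""
    if isinstance(node, (int, float)):        # leaf: a position evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # remaining children cannot affect the result
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A tiny two-ply tree: the middle subtree is cut off as soon as the 2 is seen,
# so the 9 is never examined.
tree = [[3, 5], [2, 9], [4, 6]]
print(alphabeta(tree, maximizing=True))       # 4
```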
46. Chess Playing Machines
- Around 1980 Ken Thompson developed a chess-playing program called Belle, based on a minicomputer with a hardware attachment used to generate moves very fast.
- Belle defeated all other computer programs and became the world computer chess champion.
- The use of special chess knowledge and special-purpose hardware has been the preferred approach ever since.
47. More on Deep Blue
- A major focus of the effort was the development of special-purpose hardware.
- An expert chess player (Murray Campbell) contributed the evaluation functions for the moves generated by the hardware.
- The project had as a consultant an international grandmaster (Joel Benjamin, who had played Kasparov to a draw in 1994).
48. Concluding Remarks
- Before we try to build a machine to achieve a goal, we must ask ourselves whether that goal is compatible with the laws of nature, not simply assume it is feasible because people can do it.
- While such laws are clear in Physics and Chemistry, they are not in the field of Computation, except in some extreme cases.
49. Human Credulity - 1
- In spite of well-understood laws of physics, inventors persist in offering designs that violate them, and they find takers.
- Therefore fundamental advances in Computer Science are likely to reduce, but not to eliminate, preposterous claims.
50. Human Credulity - 2
- 50 years ago Langmuir (in "Pathological Science") debunked UFOs but also predicted that UFOs would be with us for a long time, because it is too good a story for the news media to let go.
- The view of computers as giant brains that are able to out-think and replace humans is about as valid as visits by extraterrestrials, but it makes too good a story for the news media to let go.
51. The End