Transcript and Presenter's Notes

Title: Paradigm Shift in AI


1
Paradigm Shift in AI
  • Oren Etzioni
  • Turing Center
  • Note: some half-baked thoughts; please don't
    circulate or cite

2
Two Apologies
3
Basic Premises (I'm a…)
  • Materialist → everything is made of atoms
  • Functionalist → if you can instantiate it in
    neurons, you can also instantiate it in silicon.
  • (what is it?)
  • This makes me an AI Optimist (long term)
  • We are very far from the boundary of machine
    intelligence → medium-term optimist too!
  • Are we studying AI, though?!?

4
Outline
  1. A bit of philosophy of science
  2. Critique of Present AI
  3. "Whenever you find yourself on the side of the
    majority, it is time to pause and reflect."
    (Mark Twain)
  4. Hints at a new paradigm

5
Science is done within paradigms
  • paradigm = a set of shared assumptions, ideas,
    and methods.
  • Cathedral view: we keep accumulating bricks
    over hundreds of years until we have…

6
Paradigm Shift (The Duck View), Thomas Kuhn '62
  • Anomalies are explained away
  • When they accumulate
  • The paradigm is deemed inadequate
  • Change is unexpected, and revolutionary!
  • Or is it a rabbit?

7
Example from Physics
  • 19th Century (and earlier) → light is a wave
  • How does light move in space?
  • Through the luminiferous ether
  • But no one has observed it
  • 1887 Michelson-Morley experiment → ingenious way
    to detect the ether, but
  • No ether was detected…
  • Other cracks in the Newtonian paradigm
  • 1905: theory of relativity

8
Critique of Current AI Paradigm
  • Subtask-driven research (e.g., parsing, concept
    learning, planning)
  • Formulate a narrow subtask → spend way too many
    years solving it better and better
  • One-shot systems (e.g., run a learning algorithm
    once on a single data set, single concept)
  • Where is the intelligence in an AI system?
  • target concept, learning algorithm,
    representation, bias, pruning, training set: all
    chosen by an expert
  • labor-intensive, iterative refinement

9
Critique of AI cont.
  • Focus of the field is on:
  • Modules, not complete systems
  • Desired I/O is assumed and invented
  • Where do target concepts, goals come from?
  • Experimental metrics are surrogates for real
    performance (e.g., search-tree depth versus chess
    rating).
  • precision/recall of KnowItAll.
  • How well can the robot pick up cups?
  • "How" instead of "what"
  • Most papers describe a mechanism/algorithm/system/
    enhancement
  • Only a few tackle an issue/question (why does
    Naive Bayes work?)
  • This makes me an AI Pessimist (short term)

10
So What do we Need?
  • Rod Brooks (1991)
  • Complete systems
  • Real sensing, real action (Drosophila is a real
    creature!)
  • Pitfall: low-level/engineering overhead
  • For me, this led to softbots (1991 - 1997)
  • Pitfall: low-level/engineering overhead
  • Pitfall: need background knowledge to succeed
  • Ed Feigenbaum/Doug Lenat
  • Machines that can learn/represent/utilize massive
    bodies of knowledge
  • Cyc, KnowItAll, MLNs are pieces of this
  • Lesson from Cyc/KnowItAll: writing down bits is
    easy
  • Lesson from MLNs: reasoning is still hard
  • Question: how to make progress? How to measure
    it?
  • Need bona fide, external performance metrics

11
External Measure of Performance
  • IQ score, SAT score, chess rating, Turing test
  • This is surprisingly tricky
  • Peter Turney's SAT analogy test
  • Demo
  • Halo Project

12
HALO project
  • Build a Scientific KB: a "Digital Aristotle"
  • Measure performance on AP science tests
  • The Hype: computer passes AP test
  • The Reality: goals in this project are further
    than they appear
  • Slides courtesy of Peter Clark (K-CAP '07)

13
Example question (physics)
An alien measures the height of a cliff by
dropping a boulder from rest and measuring the
time it takes to hit the ground below. The
boulder fell for 23 seconds on a planet with an
acceleration of gravity of 7.9 m/s². Assuming
constant acceleration and ignoring air
resistance, how high was the cliff?
Example question (chemistry)
A solution of nickel nitrate and sodium hydroxide
are mixed together. Which of the following
statements is true? a. A precipitate will not
form. b. A precipitate of sodium nitrate will be
produced. c. Nickel hydroxide and sodium nitrate
will be produced. d. Nickel hydroxide will
precipitate. e. Hydrogen gas is produced from the
sodium hydroxide.
14
There lies a sweet spot between logic and full
NL which is both human-usable and
machine-understandable:
  • Unrestricted natural language: "Consider the
    following possible situation in which a boulder
    first…"
  • CPL: "A boulder is dropped"
  • Formal language: ∃x ∃y B(x) ∧ R(x,y) ∧ C(y)
15
Example of a CPL encoding of a question
An alien measures the height of a cliff by
dropping a boulder from rest and measuring the
time it takes to hit the ground below. The
boulder fell for 23 seconds on a planet with an
acceleration of gravity of 7.9 m/s². Assuming
constant acceleration and ignoring air
resistance, how high was the cliff?
↓
A boulder is dropped. The initial speed of the
boulder is 0 m/s. The duration of the drop is 23
seconds. The acceleration of the drop is 7.9
m/s². What is the distance of the drop?
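Worked out for reference (not part of the original slide), the expected
answer follows from constant-acceleration kinematics:
d = v₀t + ½gt² = 0 + ½ × 7.9 m/s² × (23 s)² ≈ 2,089.6 m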
16
The Interface (Posing Questions)
17
Controlled Language for Question-Asking
  • Controlled Language: not a panacea!
  • Not just a matter of grammatical simplification
  • Only certain linguistic forms are understood
  • Many concepts, many ways of expressing each one
  • Huge effort to encode these in the interpreter
  • User has to learn acceptable forms
  • User needs to make common sense explicit
  • Man pulls rope, rope attached to sled → force on
    sled
  • 4 wheels support a car → ¼ weight on each wheel

18
Lessons from HALO Project
  • Setting an ambitious, externally-defined target
    is exciting but challenging
  • Grammatical simplification (via CL) is helpful,
    but only one layer of the onion!
  • Text leaves key information implicit
  • Need common sense to understand text
  • Need massive body of background knowledge and
    ability to reason over it
  • Need to articulate clear lessons
  • What have we learned from Soar? Cyc? KnowItAll?

19
Appealing Hypothesis?
  • AI will emerge from evolution, from neural soup…
  • AI will emerge from scale-up.
  • Let's just continue doing what we're doing
  • Perhaps gear it up to use massive data
    sets/machine cycles (VLSAI)
  • Then, we will ride Moore's Law to success

20
Banko & Brill '01 (case study)
  • Example problem: confusion set disambiguation
  • principle, principal
  • then, than
  • to, two, too
  • whether, weather
  • Approaches include
  • Latent semantic analysis
  • Differential grammars
  • Decision lists
  • A variety of Bayesian classifiers

21
Banko & Brill '01
  • Collected a 1-billion-word English training
    corpus
  • 3 orders of magnitude larger than the largest
    corpus used previously for this problem
  • Consisted of
  • News articles
  • Scientific abstracts
  • Government transcripts
  • Literature
  • Etc.
  • Test set
  • 1 million words of WSJ text (not used in
    training)

22
Training on a Huge Corpus
  • Each learner trained at several cutoff points
  • First 1 million, then 5M, etc.
  • Items drawn by probabilistically sampling
    sentences from the different sources, weighted by
    source size.
  • Learners:
  • Naïve Bayes, perceptron, winnow, memory-based
    (a minimal sketch of the simplest appears below)
  • Results:
  • Accuracy continues to increase log-linearly even
    out to 1 billion words of training data
  • BUT the size of the trained model also increases
    log-linearly as a function of training set size.
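
To make the setup concrete, here is a minimal, illustrative sketch of a
context-word Naïve Bayes disambiguator for a single confusion set. This is
not the authors' implementation; the confusion set, window size, and all
function names are assumptions chosen for illustration.

# Minimal illustrative sketch (not the Banko & Brill system): a context-word
# Naive Bayes disambiguator for one confusion set, trainable on corpora of
# increasing size to reproduce the scale-up experiment in spirit.
from collections import Counter, defaultdict
import math

CONFUSION_SET = ("then", "than")   # one example confusion set from the slide
WINDOW = 3                         # context words on each side (an assumption)

def context(tokens, i):
    """Return the words within WINDOW positions of token i, excluding i itself."""
    lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
    return [tokens[j] for j in range(lo, hi) if j != i]

def train(sentences):
    """Count how often each confusion-set member occurs, and with which context words."""
    priors = Counter()
    feature_counts = defaultdict(Counter)
    for tokens in sentences:
        for i, word in enumerate(tokens):
            if word in CONFUSION_SET:
                priors[word] += 1
                feature_counts[word].update(context(tokens, i))
    return priors, feature_counts

def predict(priors, feature_counts, tokens, i):
    """Score each candidate with add-one smoothing; return the best one."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for cand in CONFUSION_SET:
        score = math.log((priors[cand] + 1) / (total + len(CONFUSION_SET)))
        denom = sum(feature_counts[cand].values()) + len(feature_counts[cand]) + 1
        for f in context(tokens, i):
            score += math.log((feature_counts[cand][f] + 1) / denom)
        if score > best_score:
            best, best_score = cand, score
    return best

# Usage idea: train on the first 1M, 5M, ... words and measure held-out
# accuracy at each cutoff to plot a learning curve like the one in the paper.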

23
Banko & Brill '01
24
Lessons from Banko & Brill
  • Relative performance changes with data set size
  • Performance continues to climb with increases in
    data set size
  • Caveats:
  • there is much more to their paper; I just took
    a biased sample.
  • the task considered is narrow and simple
  • However, similar phenomena have been shown in
    other settings and tasks
  • Lesson: ask what happens if I have 10x or 100x
    more data, cycles, memory?

25
Computer Chess Case Study
  • A complete system, in a real/toy domain?
  • Simple, external performance metric
  • 40 years → super-human performance
  • massive databases
  • knowledge engineering to choose features
  • automatic tuning of evaluation function parameters
  • Brute-force search coupled with heuristics for
    selective extensions
  • Deeper search (scale up!) led to a qualitative
    difference in performance

26
Achilles' Heel of the Scale-Up Argument
  • These were narrow, well-formed problems
  • How do you apply these ideas to broader problems?
  • Take, for example, monkeys at a typewriter
  • they would eventually produce the world's most
    amazing literature
  • But how would you know?!

27
Elements of a New AI Paradigm
  • Report lessons from major projects
  • Focus on what is being computed?
  • Is it an advance?
  • Build complete systems in real-world test beds
  • Challenge: avoid engineering rat holes
  • Rely on external performance metrics
  • Is AI making progress?
  • Ask new questions: can this program survive?
  • How does it formulate its goals?
  • Is it conscious?
  • Is this enough?

28
AI = the Study of Ill-Formed Problems
  • Conjecture: if you can define it as a
    search/optimization problem, then computer
    scientists will figure out how to solve it
    tractably (if that's possible)
  • The fundamental challenge of AI today is to
    figure out how to map fluid and amazing human
    capabilities (NLU, Common sense, human navigation
    of the physical world, etc.) into formal
    problems.
  • The amazing thing is how little discussion there
    is of how to get from here to our goal!!!

29
Some Ill-Formed Problems
  • Softbot: a cyber-assistant with wide-ranging
    capabilities.
  • Would you let it send you email?
  • Would you give it your credit card?
  • A textbook learner: a program that reads a
    chapter and then answers questions about it
  • Machine Reading at Web scale: read the Web and
    leverage scale to compensate for limited subtlety
  • Life-long learner: a program that learns, but
    also learns how to learn better over time.

30
Automatic Formulation of Learning
  • Learning Problem = (labeled examples, hypothesis
    space, target concept)
  • Can the learner (toy sketch below):
  • Choose a target concept
  • Choose a representation for examples/hypotheses
  • Label some examples
  • Choose a learning algorithm
  • Evaluate the results
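
One way to make the question concrete (a toy sketch of my own, not from the
talk; every name here is illustrative): treat the pieces an expert normally
supplies as explicit slots the learner would have to fill in for itself.

# Toy sketch (all names are illustrative only): the slots of a learning
# problem that are normally filled in by a human expert. A learner capable
# of "automatic formulation" would have to choose every field itself.
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class LearningProblem:
    target_concept: str                            # what is worth learning?
    representation: Callable[[Any], List[float]]   # encoding of examples/hypotheses
    labeled_examples: List[Tuple[Any, bool]]       # examples the learner labeled itself
    algorithm: Callable[..., Any]                  # which learning algorithm to run
    evaluation: Callable[[Any], float]             # how the result is judged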

31
Life-long Learning
  • Nonstop learning/reasoning/action
  • Is this just a matter of a large enough data set?
  • Add in recursive learning
  • learning at time T is a function of learning at
    T-1.
  • Multiple problems, limited resources
  • Representation change
  • Ability to formulate own goals/learning problems

32
The Future of AI
  • To borrow from Alan Kay
  • The best way to predict the future of AI is to
    invent it!

33
My Own View
  • What is your own goal?
  • "write a paper" versus "solve AI"
  • Science is done within paradigms
  • AI's current paradigm is statistical/probabilistic
    methods
  • Paradigms shift when they are deemed inadequate
  • Change is unexpected, and revolutionary! (The
    Structure of Scientific Revolutions by Thomas
    Kuhn)

34
Example from Physics
  • 19th Century (and earlier) → light is a wave
  • How does light move in space?
  • Through the luminiferous ether
  • But no one has observed it
  • 1887 Michelson-Morley experiment → ingenious way
    to detect the ether, but
  • No ether was detected…
  • Other cracks in the Newtonian paradigm
  • 1905: theory of relativity

35
Cracks in the AI Paradigm
  • We are building increasingly powerful algorithms
    for very narrow tasks
  • Learning algorithms are one-shot
  • we have parsing, but what about understanding?
  • Much of our progress is due to Moore's Law
  • It's time for a revolution