CSE 471/598

Provided by: subbraoka
Transcript and Presenter's Notes

1
Open only for Humans; Droids and Robots should go
for CSE 462 next door ;-)
CSE 471/598
Intro to AI
2
General Information
  • Instructor: Subbarao Kambhampati (Rao)
  • Office hours: after class, T/Th 11:45am-12:45pm
  • TA: Yunsong Meng
  • Took this course in Fall 2006 and did very well
  • Office hours: TBD
  • Additional help from CSE471 tutors..
  • Course Homepage: http://rakaposhi.eas.asu.edu/cse471

3
Grading etc.
Subject to (minor) changes
  • Projects/Homeworks/Participation (55%)
  • Projects
  • Approximately 4
  • First project already up! Due in 2 weeks
  • Expected background
  • Competence in Lisp programming
  • Why Lisp? (Because!)
  • Homeworks
  • Homeworks will be assigned piecemeal.. (Socket
    system)
  • Participation
  • Attendance at and attentiveness in classes is
    mandatory
  • Participation on the class blog is highly encouraged.
  • Do ask questions
  • Midterm + final (45%)

4
Lisp Programming
  • Use Lisp-in-a-box (link from the class page)
  • Easy to install and use. Take the clisp version
  • There is a link to a lisp review book
  • There is also a link to Lisp vs. Scheme
    differences
  • You are allowed to use other languages such as
    Java/Python/C etc.but the partial code snippets
    will only be provided for Lisp
  • If you plan to take this option, please do talk
    to the instructor

5
Course demands..
"It has not been the path for the faint-hearted,
for those who prefer leisure over work, or seek
only the pleasures of riches and fame."
-- Obama, inadvertently talking about CSE471
in his inaugural address :-)
  • ..your undivided attention
  • Attendance mandatory; if you have to miss a
    class, you should let me know beforehand
  • Has been repeatedly seen as a 4-5 credit course
  • (while the instructor just thinks your other
    courses are 1-2 credit ones :-) )
  • No apologies made for setting high expectations

6
Grade Anxiety
  • All letter grades will be awarded
  • A+, A, B+, B, B-, C+, C, D, etc.
  • No pre-set grade thresholds
  • CSE471 and CSE598 students will have the same
    assignments/tests etc. During letter grade
    assignment, however, they will be compared to
    their own group.
  • The class is currently 33 CSE471 and 10 CSE598
    (grad) students

7
Honor Code
  • Unless explicitly stated otherwise, all
    assignments are
  • Strictly individual effort
  • You are forbidden from trawling the web for
    answers/code etc.
  • Any infraction will be dealt with in the severest
    terms allowed.

8
Life with a homepage..
  • I will not be giving any handouts
  • All class-related material will be accessible
    from the web page
  • Homeworks may be specified incrementally
  • (one problem at a time)
  • The slides used in the lecture will be available
    on the class page (along with audio of the
    lecture)
  • I reserve the right to modify slides right up to
    the time of the class
  • When printing slides, avoid printing the hidden
    slides

9
(No Transcript)
10
About the only thing Microsoft and Google can agree
on these days
  • "If you invent a breakthrough in artificial
    intelligence, so machines can learn," Mr. Gates
    responded, "that is worth 10 Microsofts." (Quoted
    in NY Times, Monday March 3, 2004)
  • "No. 1: AI at human level in a 10-20 year time
    frame"
  • Sergey Brin
  • Larry Page
  • (independently, when asked to name the top 5
    areas needing research. Google Faculty Summit,
    July 2007)

11
(No Transcript)
12
(No Transcript)
13
1946: ENIAC heralds the dawn of Computing
14
Three Fundamental Questions Facing our Age
  • Origin of the Universe
  • Origin of Life
  • Nature of Intelligence

15
1950: Turing asks the question
"I propose to consider the question,
'Can machines think?'"
-- Alan Turing, 1950
16
1956: A new field is born
  • "We propose that a 2 month, 10 man study of
    artificial intelligence be carried out during the
    summer of 1956 at Dartmouth College in Hanover,
    New Hampshire."
  • -- Dartmouth AI Project Proposal, J. McCarthy et
    al., Aug. 31, 1955

17
1996: EQP proves that Robbins Algebras are all
Boolean
"An Argonne lab program has come up with a major
mathematical proof that would have been called
creative if a human had thought of it."
-- New York Times, December 1996
18
1997: HAL 9000 becomes operational in fictional
Urbana, Illinois
"by now, every intelligent person knew that
H-A-L is derived from Heuristic ALgorithmic"
-- Dr. Chandra, 2010: Odyssey Two

19
1997: Deep Blue ends Human Supremacy in Chess
vs.
"I could feel human-level intelligence across the
room" -- Garry Kasparov, World Chess
Champion (human)
"In a few years, even a single victory in a
long series of games would be the triumph of
human genius."
20
1999: Remote Agent takes Deep Space 1 on a
galactic ride
21
2002: Computers start passing Advanced Placement
Tests
A project funded by (Microsoft co-founder)
Paul Allen attempts to design a "Digital
Aristotle." Its first results involve
programs that can pass the High School Advanced
Placement Exam in Chemistry.
22
2005: Cars Drive Themselves
  • Stanley and three other cars drive themselves
    over a 132 mile mountain road

23
2005: Robots play soccer (without headbutting!)
  • 2005 Robot Soccer Humanoid league

24
2006: AI Celebrates its Golden Jubilee
25
2007: Poker-faced robots give a hard time to
Humans, while the robot cars threaten to drive on
rural Broadway
..and thankfully You step in to take CSE
471. Welcome!
Humans narrowly won the first Computer-Human
Poker challenge (AAAI 2007). The DARPA Urban Grand
Challenge (November 2007) will put autonomous
cars in robot traffic..
26
Course Overview
  • What is AI?
  • Intelligent Agents
  • Search (Problem Solving Agents)
  • Single agent search -- Project 1
  • Markov Decision Processes
  • Constraint Satisfaction Problems
  • Adversarial (multi-agent) search
  • Logical Reasoning -- Project 2
  • Reasoning with uncertainty
  • Planning -- Project 3
  • Learning -- Project 4

27
Although we will see that all four views have
motivations..
28
Do we want a machine that beats humans in chess,
or a machine that thinks like humans while
beating humans in chess?
→ Deep Blue supposedly DOESN'T think like humans..
(But what if the machine is trying to tutor
humans about how to do things?) (Bi-directional
flow between thinking humanly and thinking
rationally)
29
(No Transcript)
30
What if we are writing intelligent agents that
interact with humans?
→ The COG project
→ The Robotic caregivers
"Mechanical flight became possible only when
people decided to stop emulating birds"
31
What AI can do is as important as what it can't
yet do..
  • Captcha project

32
Arms race to defeat Captchas (using unwitting
masses)
digression
  • Start opening an email account at Yahoo..
  • Clip the captcha test
  • Show it to a human trying to get into another
    site
  • Usually a site that has pretty pictures of
    persons of the apposite sex
  • Transfer their answer to Yahoo

Much more interesting idea: the ESP Game
Note: Apposite, not opposite. This course is
nothing if not open minded :-)
33
(No Transcript)
34
It can be argued that all the faculties needed to
pass the Turing test are also needed to act
rationally to improve the success ratio
35
(No Transcript)
36
(No Transcript)
37
(No Transcript)
38
Discuss on Class Blog
  • Playing an (entertaining) game of Soccer
  • Solving NYT crossword puzzles at close to expert level
  • Navigating in deep space
  • Learning patterns in databases (datamining)
  • Supporting supply-chain management decisions at Fortune-500 companies
  • Learning common sense from the web
  • Navigating desert roads
  • Navigating urban roads
  • Bluffing humans in Poker..
39
1/22
  • Architectures for Intelligent Agents
  • Wherein we discuss why we need representation,
    reasoning and learning

40
Environment
The Question
What action next?
41
(No Transcript)
42
and prior knowledge
Rational ≠ Intentionally avoiding sensing
History: s0, s1, s2, ..., sn
Performance: f(history)
Expected Performance: E(f(history))
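The expected-performance idea above can be sketched in a few lines of Python. The histories, probabilities, and performance measure f below are all made up for illustration (a vacuum-world-style "count the clean states" measure):

```python
# Expected performance E[f(history)]: enumerate the stochastic histories an
# agent's behavior can produce, then weight f(history) by each history's
# probability. All states and probabilities here are hypothetical.

def expected_performance(histories, f):
    """histories: list of (probability, [s0, s1, ..., sn]) pairs."""
    return sum(p * f(h) for p, h in histories)

# f(history) = number of "clean" states visited
f = lambda history: sum(1 for s in history if s == "clean")

histories = [
    (0.7, ["dirty", "clean", "clean"]),   # suction worked on the first try
    (0.3, ["dirty", "dirty", "clean"]),   # first suction failed
]
print(expected_performance(histories, f))  # 0.7*2 + 0.3*1 = 1.7
```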
43
(No Transcript)
44
(No Transcript)
45
Partial contents of sources as found by Get
Get, Post, Buy, ..
Cheapest price on specific goods
Internet, congestion, traffic, multiple sources
Qn: How do these affect the complexity of the
problem the rational agent faces?
→ Lack of percepts makes things harder
→ Lack of actions makes things harder
→ Complex goals make things harder
→ How about the environment?
46
[Agent-environment diagram]
  • Environment: static vs. dynamic; observable vs. partially observable; deterministic vs. stochastic
  • Perception: perfect vs. imperfect
  • Goals: full vs. partial satisfaction
  • Action: instantaneous vs. durative
The Question
What action next?
47
Yes Yes No Yes Yes 1
No No No No No >1
  • Accessible: The agent can sense its environment. best: fully accessible; worst: inaccessible; typical: partially accessible
  • Deterministic: The actions have predictable effects. best: deterministic; worst: non-deterministic; typical: stochastic
  • Static: The world evolves only because of the agent's actions. best: static; worst: dynamic; typical: quasi-static
  • Episodic: The performance of the agent is determined episodically. best: episodic; worst: non-episodic
  • Discrete: The environment evolves through a discrete set of states. best: discrete; worst: continuous; typical: hybrid
  • Agents: Are the other agents in the environment competing or cooperating?
48
Ways to handle:
→ Assume that the environment is more benign than
it really is (and hope to recover from the
inevitable failures)
  • Assume determinism when it is stochastic
  • Assume static even though it is dynamic
→ Bite the bullet and model the complexity
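The first option above, assuming determinism and recovering when reality disagrees, can be sketched as a plan-execute-replan loop. The domain, the failure probability, and the "plan" below are all invented toys:

```python
import random

# Sketch of "assume the environment is more benign than it is": plan as if
# actions are deterministic, execute the first op, and replan whenever the
# stochastic world leaves us short of where the plan expected.

def plan(state, goal):
    """Deterministic 'plan': just count up from state to goal."""
    return ["increment"] * (goal - state)

def execute(state, action, rng):
    """Stochastic world: an increment sometimes fails (state unchanged)."""
    return state + 1 if rng.random() < 0.8 else state

def run(start, goal, rng, max_steps=100):
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state
        steps = plan(state, goal)              # replan from the current state
        state = execute(state, steps[0], rng)  # execute only the first op
    return state

print(run(0, 5, random.Random(0)))  # reaches the goal despite failed steps
```

The point of the sketch: the planner never models the 20% failure rate, yet the agent still succeeds, at the cost of replanning after each surprise.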
49
Additional ideas/points covered Impromptu
  • The point that complexity of behavior is a
    product of both the agent and the environment
  • Simons Ant in the sciences of the artificial
  • The importance of modeling the other agents in
    the environment
  • The point that one reason why our brains are so
    large, evolutionarily speaking, may be that we
    needed them to outwit not other animals but our
    own enemies
  • The issue of cost of deliberation and modeling
  • It is not necessary that an agent that minutely
    models the intentions of other agents in the
    environment will always win
  • The issue of bias in learning
  • Often the evidence is consistent with many many
    hypotheses. A small agent, to survive, has to use
    strong biases in learning.
  • Gavagai example and the whole-object hypothesis.

50
(Model-based reflex agents)
How do we write agent programs for these?
51
Even basic survival needs state information..
This one already assumes that the
sensors→features mapping has been done!
52
(aka Model-based Reflex Agents)
State Estimation
EXPLICIT MODELS OF THE ENVIRONMENT
--Blackbox models
--Factored models
  → Logical models
  → Probabilistic models
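A model-based reflex agent can be sketched minimally in Python: keep internal state, update it from each percept via a transition/state-estimation function, then pick an action by condition-action rules. The thermostat domain, state representation, and rule below are invented for illustration:

```python
# Minimal sketch of a model-based reflex agent: internal state is updated
# from the percept (state estimation), then a condition-action rule fires.

class ModelBasedReflexAgent:
    def __init__(self, initial_state, update_state, rules):
        self.state = initial_state
        self.update_state = update_state  # (state, percept) -> new state
        self.rules = rules                # state -> action

    def step(self, percept):
        self.state = self.update_state(self.state, percept)
        return self.rules(self.state)

# Toy thermostat: the state tracks the last sensed temperature.
agent = ModelBasedReflexAgent(
    initial_state={"temp": 20},
    update_state=lambda s, p: {"temp": p},
    rules=lambda s: "heat_on" if s["temp"] < 18 else "heat_off",
)
print(agent.step(15))  # heat_on
print(agent.step(22))  # heat_off
```

A blackbox model would replace the dict with an opaque state token; a factored (logical or probabilistic) model would make `update_state` reason over variables instead of copying the percept.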
53
State Estimation
Search/ Planning
It is not always obvious what action to do now
given a set of goals. You woke up in the
morning. You want to attend a class. What should
your action be?
→ Search (find a path from the current state to
the goal state; execute the first op)
→ Planning (does the same for structured,
non-blackbox state models)
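The "find a path, execute the first op" idea can be sketched as a breadth-first search over a toy state graph. All states and actions below (the "morning" domain) are hypothetical:

```python
from collections import deque

# Search agent sketch: find a shortest path from the current state to the
# goal state over a blackbox state graph, and return only the first action.

def bfs_first_action(start, goal, successors):
    """Return the first action on a shortest path from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions[0] if actions else None
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy "morning" domain:
graph = {
    "in_bed":   [("get_up", "awake")],
    "awake":    [("shower", "ready"), ("snooze", "in_bed")],
    "ready":    [("bike_to_campus", "in_class")],
    "in_class": [],
}
print(bfs_first_action("in_bed", "in_class", lambda s: graph[s]))  # get_up
```

Planning does the same job but exploits the internal structure of states and actions instead of treating them as opaque graph nodes.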
54
Representation Mechanisms: Logic (propositional,
first order); Probabilistic logic
Learning the models
Search: Blind, Informed
Planning
Inference: Logical resolution; Bayesian inference
How the course topics stack up
55
"..certain inalienable rights: life, liberty and
the pursuit of"
→ Money
→ Daytime TV
→ Happiness (utility)
--Decision Theoretic Planning
--Sequential Decision Problems
56
Discounting
  • The decision-theoretic agent often needs to
    assess the utility of sequences of states (also
    called behaviors).
  • One technical problem is How do keep the utility
    of an infinite sequence finite?
  • A closely related real problem is how do we
    combine the utility of a future state with that
    of a current state (how does 15 tomorrow compare
    with 5000 when you retire?)
  • The way both are handled is to have a discount
    factor r (0ltrlt1) and multiply the utility of nth
    state by rn
  • r0 U(so) r1 U(s1). rn U(sn)
  • Guaranteed to converge since power series
    converge for 0ltrltn
  • r is set by the individual agents based on how
    they think future rewards stack up to the current
    ones
  • An agent that expects to live longer may consider
    a larger r than one that expects to live shorter

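The discounted sum described above can be checked with a short Python sketch; the per-state utilities and the choice r = 0.9 are made-up numbers:

```python
# Discounted utility of a state sequence: sum over n of r^n * U(s_n),
# where r is the discount factor with 0 < r < 1.

def discounted_utility(utilities, r):
    """Sum r^n * U(s_n) over a sequence of per-state utilities."""
    return sum((r ** n) * u for n, u in enumerate(utilities))

utilities = [5.0, 5.0, 5.0, 5.0]   # hypothetical U(s0)..U(s3)
print(discounted_utility(utilities, 0.9))  # 5 + 4.5 + 4.05 + 3.645 = 17.195

# For an infinite run of constant utility u the geometric series converges
# to u / (1 - r); here 5 / (1 - 0.9) = 50, which bounds any longer run.
print(5.0 / (1 - 0.9))
```

This also shows why a larger r values the future more: with r = 0.99 the same four states are worth about 19.7 instead of 17.195.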
57
Learning
Dimensions:
What can be learned?
  --Any of the boxes representing the agent's
    knowledge
  --action description, effect probabilities,
    causal relations in the world (and the
    probabilities of causation), utility models
    (sort of, through credit assignment), sensor
    data interpretation models
What feedback is available?
  --Supervised, unsupervised, reinforcement learning
  --Credit assignment problem
What prior knowledge is available?
  --Tabula rasa (agent's head is a blank slate) or
    pre-existing knowledge