INTRODUCTION TO ARTIFICIAL INTELLIGENCE - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
INTRODUCTION TO ARTIFICIAL INTELLIGENCE
  • CSC 345

2
WHAT IS INTELLIGENCE?
  • IQ results?
  • Knowledge?
  • Understanding?
  • Ability to reason?
  • Ability to think?
  • Memory recall?
  • Ability to learn?
  • Creativity?

3
What is Artificial Intelligence?
  • Computation speed?
  • Vast memory capacity?
  • Expert emulation?
  • Building a house of blocks?
  • Recognizing voice & analyzing speech?
  • Recognizing faces?
  • Inventing stories?

4
Examples of current applications
  • Mine detection robots
  • Forest fire combatant robots
  • Robotic arms for toxic labs
  • Intelligent systems for nuclear plant alerts
  • Virtual reality to learn how to pilot a plane, to
    learn microscopic surgery
  • Expert systems in medicine, law, oil prospection

5
Examples of current applications II
  • Artificial vision & reconnaissance systems
  • Voice recognition & speech analysis
  • Political speech analyzers
  • etc.

6
LIMITATIONS OF AI
  • A 2-year-old infant can build a house of blocks;
    an intelligent system cannot
  • A human can recognize a friend instantaneously
    even after a 20-year span; a computer needs on the
    order of 2 x 10^5 calculations
  • A human can understand his or her mother tongue
    regardless of accent or bad syntax; an
    intelligent system cannot
  • A human can learn; systems are only beginning to

7
PROBLEMS OF AI
  • Sight recognition (cat, ...)
  • Bad syntax (infant speech, ...)
  • Fuzzy language (inexact words, ...)
  • Understanding (strange shapes, perspectives, ...)
  • Context (occlusion, props, perspectives, ...)
  • Handling inconsistencies, incorrect data, fuzzy
    data

8
How can a program reason & learn?
  • In traditional programming, input is read into a
    typed variable
  • The knowledge (algorithms) & the input (data) are all
    part of the program.
  • No learning can take place because we would have
    to add the learned knowledge to an existing
    program & recompile.

9
AI vs. classic systems
Procedure A
    x: integer
    code
    data
end
  • Any modification implies a recompilation.
  • New knowledge may change the whole structure of
    the program & the overall algorithm.
  • Existing data structures cannot handle fuzzy
    knowledge or data

10
Classical architecture
[Diagram: Input → Program (code + data) → Output]
11
AI architecture
12
Parallel between human & artificial processing
13
How do we organize knowledge for reasoning &
learning?
  • By hierarchies & taxonomies
  • By classes, sub-classes & instances
  • By rules of procedure
  • By inference rules of predicate calculus
  • By pre-defined but modifiable scripts
  • By conceptual graphs

14
Problems in knowledge representation
  • Incomplete or uncertain knowledge or data
  • Probabilistic knowledge
  • Inconsistent knowledge or data
  • Erroneous knowledge or data
  • Knowledge may be beliefs, prejudices, hypotheses
  • Rules of procedure may be strategies, heuristics
    or rules of thumb

15
Some examples
  • Children do not like green bats
  • Big (as in farm, tree, pot, university, ...)
  • Large vs small (large mouse vs small elephant)
  • Walk vs run (as in turtle running, hare walking)
  • Catching (as in catch a ball or a cold)
16
Example of inference with exceptions
Knowledge: elephants are mammals with a trunk,
4 paws, and grey colour
Input: Jumbo is an albino elephant who lost his
trunk & one paw in an accident
Question to the system: Is Jumbo an elephant?
17
Types of inference I
  • Deduction
  • canaries are yellow
  • Tweety is a canary
  • therefore Tweety is yellow
  • Abduction
  • it is raining & humans get wet in the rain
  • Fred comes in & he is wet
  • therefore, probably, he was in the rain
  • Induction
  • poplar, chestnut & cedar trees have leaves
  • therefore all trees have leaves

18
Types of inference II
  • Analogy
  • cars run on gasoline
  • therefore trucks must run on gasoline
  • Probabilistic
  • Symptoms of aches, fever & congestion indicate a
    flu with a probability of 75%
  • Mary has aches, a fever & congestion; therefore
    there is a 75% probability that she has the flu

19
Mechanism for inference
  • Input sent to inference engine
  • Inference engine partially matches input with as
    many pieces of knowledge in the knowledge
    base, KB, as possible
  • It may infer deductively from one item to the
    other in the KB
  • When it can go no further, it sends out the
    obtained result as output

20
State Space Searches
  • Chapters 3 4

21
Example: the 8-puzzle (sliding-tile puzzle)
We use four legal moves to simplify the
problem: move the blank up, move the blank
right, move the blank down, move the blank left.
We can solve this problem going from the start to
the goal (data-driven) or from the goal to the
start (goal-driven).
22
(No Transcript)
23
Data-driven (forward chaining)
Backtrack:
    list := [start]; newList := [start]; deadEnds := []
    currentState := start
    while newList is not empty
        if currentState = goal, return list
        if currentState has no children
            while (list not empty AND currentState is the first
                   element of list)
                add currentState to deadEnds
                remove first element from list
                remove first element from newList
                currentState := first element of newList
            add currentState to list
        else
            place children of currentState on newList
            currentState := first element of newList
            add currentState to list
    return FAIL
24
Goal-driven (backward chaining)
Use the previous algorithm with start replaced by
goal & children replaced by parents.
25
Example: Farmer, fox, goose & grain
A farmer, a fox, a goose & a sack of grain must
be transported across the river in a boat that
can only hold 2. How do you do that without
having the fox eat the goose, or the goose eat
the grain, when they are left alone?
This can be done with a state space representation
of all possibilities (each state below shows the
left bank || the right bank).
Let the farmer be Fr, the fox be Fx, the goose be
Gs, and the grain be Gr.
26
(Fr, Fx, Gs, Gr) || ()
(Fr, Fx, Gr) || (Gs)
(Gr) || (Fr, Fx, Gs)
(Fx) || (Fr, Gr, Gs)
(Fr, Gr, Gs) || (Fx)
(Fr, Fx, Gs) || (Gr)
(Gs) || (Fr, Fx, Gr)
(Fr, Gs) || (Fx, Gr)
() || (Fr, Fx, Gr, Gs)
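As an illustration (not part of the original slides), a minimal Python sketch
that enumerates the safe states of this puzzle; the item abbreviations follow
the slide, everything else is an assumption:

    from itertools import combinations

    ITEMS = {"Fr", "Fx", "Gs", "Gr"}   # farmer, fox, goose, grain

    def safe(bank):
        """A bank is unsafe if the fox is with the goose, or the goose
        with the grain, while the farmer is on the other bank."""
        if "Fr" in bank:
            return True
        return not ({"Fx", "Gs"} <= bank or {"Gs", "Gr"} <= bank)

    # Enumerate every split of the four items across the two banks
    # and keep only those where both banks are safe.
    states = []
    for k in range(len(ITEMS) + 1):
        for left in map(set, combinations(sorted(ITEMS), k)):
            right = ITEMS - left
            if safe(left) and safe(right):
                states.append((tuple(sorted(left)), tuple(sorted(right))))

    for left, right in states:
        print(left, "||", right)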
27
Searching through a state space or a Knowledge
Base
  • There are weak methods that are generic & apply
    to any domain & any problem.
  • depth-first search
  • breadth-first search
  • iterative deepening depth-first
  • There are strong methods that are heuristic and
    specific to a domain or a problem.
  • best-first search
  • alpha-beta minimax search

28
Order of Search
  • Depth-first
  • Search down one branch
  • if the goal is not found, backtrack to the immediate
    parent node
  • go down next branch
  • Breadth-first
  • Search all first level children
  • If goal not found, search all next level children

29
Depth-first search
[Tree figure: six nodes numbered 1-6 in the order a depth-first search visits them]
  • Left-most branch first, then the next to left-most,
    ..., the right-most branch last
  • It is very thrifty with memory.
  • It may go on indefinitely without finding the
    solution.
  • Numbers represent order of search

30
Depth-first algorithm
Depth-first-search:
    open := [start]; closed := []
    while open ≠ []
        remove leftmost state from open, call it X
        if X is the goal, return SUCCESS
        else
            generate children of X
            put X on closed
            discard children if already on closed or open
            put remaining children on the left end of open
    return FAIL
31
Breadth-first search
[Tree figure: nodes numbered 1, 2, 3 by level, the order in which breadth-first search visits them]
  • Top level first, then the next level, ..., the leaves
    of the tree last.
  • If solutions exist, the shortest will always be
    found.
  • Numbers represent order of search

32
Breadth-first Algorithm
Breadth-first-search:
    open := [start]; closed := []
    while open ≠ []
        remove leftmost state from open, call it X
        if X is the goal, return SUCCESS
        else
            generate children of X
            put X on closed
            discard children if already on closed or open
            put remaining children on the right end of open
    return FAIL
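The two algorithms above differ only in which end of the open list receives
the children. A hedged Python sketch of that idea, assuming the caller
supplies a children(state) successor function:

    from collections import deque

    def search(start, goal, children, depth_first=True):
        """Open/closed-list search; children go on the left end of open
        for depth-first, on the right end for breadth-first."""
        open_list = deque([start])
        closed = set()
        while open_list:
            x = open_list.popleft()                  # leftmost state on open
            if x == goal:
                return "SUCCESS"
            closed.add(x)
            new = [c for c in children(x)
                   if c not in closed and c not in open_list]
            if depth_first:
                open_list.extendleft(reversed(new))  # left end of open
            else:
                open_list.extend(new)                # right end of open
        return "FAIL"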
33
(No Transcript)
34
Depth-first vs breadth-first search
  • Breadth-first search
  • always finds the shortest path
  • If solution exists it will find it
  • Space utilization is exponential: for an average
    of C children, C^n states will be open at the nth
    level
  • Depth-first search
  • If the solution path is long, it will not waste time
    and space searching all paths at each level (C x
    n states)
  • It can miss a short path
  • It can get lost in a long, never-ending path with
    no solution

35
Compromise: depth-first with a bound
36
Depth-first with Iterative Deepening
  • Start with depth-first search with a bound of 1
  • If no solution is found, do a depth-first search
    with a bound of 2, etc.
  • No information is kept between iterations (a sketch
    of this idea follows below)
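A minimal Python sketch of this scheme, reusing the assumed children(state)
successor function from the previous sketch; the depth bound grows by one on
each restart and nothing is carried over between iterations:

    def depth_limited(state, goal, children, bound):
        """Depth-first search that gives up below the given depth bound."""
        if state == goal:
            return True
        if bound == 0:
            return False
        return any(depth_limited(c, goal, children, bound - 1)
                   for c in children(state))

    def iterative_deepening(start, goal, children, max_bound=50):
        for bound in range(1, max_bound + 1):   # bound of 1, then 2, ...
            if depth_limited(start, goal, children, bound):
                return bound                    # depth at which the goal was found
        return None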

37
Heuristics
  • Problems do not necessarily have exact solutions
  • Medical diagnosis
  • Vision (connectedness, orientation, optical
    illusions)
  • Natural language analysis
  • Solutions may exist but are prohibitively costly
  • Chess
  • Oil prospection
  • Theorem proving
  • Heuristics may be weights on branches or pruning
    of branches through symmetry, etc.

38
Pruning of Tic Tac Toe through symmetries
39
Hill-climbing: a heuristic search with weights
Numbers represent weights of branches
  • It alternates between depth-first & breadth-first
    depending on which one seems best at the time.
  • If the weights are accurate, this is indeed the
    best search.
  • In the 8-puzzle you may need to get to a worse
    position before getting a better one

40
Best-first search
best-first-search:
    open := [Start]; closed := []
    while open ≠ []
        remove leftmost state from open, call it X
        if X is the goal, return the path from Start to X
        else
            generate children of X
            for each child of X, case:
                child not on open or closed:
                    assign the child a heuristic value
                    add the child to open
                child is on open:
                    if the child was reached by a shorter path,
                    give the state on open the shorter path
                child is on closed:
                    if the child was reached by a shorter path,
                    remove the state from closed & add the child to open
            put X on closed
            re-order states on open by heuristic merit
    return FAIL
41
Choosing a heuristic for the 8-puzzle
  • The number of tiles out of place
  • The sum of the distances of the tiles out of place
  • If two states have the same evaluation, pick the one
    that is nearest to the root (depth) of the graph
    (it will probably be on the shortest path)
  • The heuristic is then f(n) = g(n) + h(n), where
  • g(n) is the actual length of the path from state n to
    the start
  • h(n) is a heuristic estimate of the distance from
    state n to the goal

42
A* algorithm: with h(n) ≤ h*(n)
  • If we know something about the domain or the
    problem at hand, for each node we can measure the
    depth from the start, g(n), as well as a
    heuristic estimate to the goal, h(n), specific
    to the problem.
  • Each node receives an estimated weight, f(n),
    such that
  • f(n) = g(n) + h(n)
  • If f = h, we go for the fastest path to the
    solution
  • If g = g + 1 for each advance, we are measuring
    the number of steps to the solution.
  • If g is constant (e.g. g = 1), we just have a
    best-first search. (A sketch of the f = g + h
    ordering follows below.)
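A hedged Python sketch of a best-first search ordered by f(n) = g(n) + h(n);
children(state) yielding (successor, step cost) pairs and the heuristic h are
assumptions supplied by the caller, not anything defined on the slides:

    import heapq, itertools

    def a_star(start, goal, children, h):
        """Best-first search ordered by f(n) = g(n) + h(n)."""
        counter = itertools.count()         # tie-breaker so states need not be comparable
        open_heap = [(h(start), next(counter), 0, start, [start])]
        best_g = {start: 0}                 # cheapest known g for each state
        while open_heap:
            f, _, g, state, path = heapq.heappop(open_heap)
            if state == goal:
                return path
            for child, cost in children(state):
                g2 = g + cost
                if g2 < best_g.get(child, float("inf")):   # keep the shorter path
                    best_g[child] = g2
                    heapq.heappush(open_heap,
                                   (g2 + h(child), next(counter), g2, child, path + [child]))
        return None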

43
Example: traveling salesman
We start by choosing h(n) to be the minimal
straight-line distance to the goal.
[Map figure: cities A to H joined by roads with weights such as 148, 98, 211,
88, 97, 181. Straight-line distances to the goal F: A→F 366, B→F 253,
C→F 178, D→F 193, E→F 98]
44
Minimax method for games
  • Whatever search technique we use, if we maximize on
    our turn & minimize on the opponent's turn, we are
    using a minimax method.
  • This is applied to war strategies, to buying &
    selling on the stock market, or to corporate
    competition tactics.
  • After labeling each level as min or max, each
    leaf is given a value of 1 for a max win and 0
    for a min win.
  • Propagating up, give a max parent the maximum
    value of its children and a min parent the
    minimum value of its children.

45
Example: a Nim-type game
  • A number of tokens are placed on the table
  • The next player must separate one pile into 2
    unequal piles (7 → (6,1), (5,2), (4,3))
  • The first player who cannot do it loses (a sketch
    of minimax on this game follows below).
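A small Python sketch (illustrative, not from the slides) that applies minimax
to this splitting game; a position is a sorted tuple of pile sizes and the
player who cannot split loses:

    from functools import lru_cache

    def moves(piles):
        """All positions reachable by splitting one pile into two unequal piles."""
        result = []
        for i, p in enumerate(piles):
            rest = piles[:i] + piles[i + 1:]
            for a in range(1, (p + 1) // 2):      # a < p - a, so the halves are unequal
                result.append(tuple(sorted(rest + (a, p - a))))
        return result

    @lru_cache(maxsize=None)
    def to_move_wins(piles):
        """True if the player to move can force a win (no move available = loss)."""
        return any(not to_move_wins(m) for m in moves(piles))

    # From (7) the first mover (Min on the trees below) loses, so Max wins,
    # matching the propagated values on the game tree.
    print(to_move_wins((7,)))    # False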

46
[Game tree for the 7-token game: the root (7) is a Min node; levels below
alternate Max and Min down to positions such as (2,1,1,1,1,1); each node is
labelled 0 for a win for Min and 1 for a win for Max]
47
[The same game tree with the 0/1 values propagated up to the root]
Max wins whatever choice Min makes
48
Minimax with n-ply-look-ahead
  • If the state space is too large to expand to the
    leaf nodes, expand the space to n levels
  • Each leaf node is given a heuristic evaluation
    value
  • The values are propagated back to the root
  • The root value is that of the best state that can
    be reached in n moves

49
Alpha-beta pruning
  • To improve search efficiency, the state space can
    be pruned while doing a depth-first search.
  • Alpha values are associated with max nodes and beta
    values with min nodes.
  • Alpha values never decrease; they are the worst
    max can do. If an alpha value is 6, all beta
    nodes directly below it with a value less than 6 are
    pruned.
  • Beta values never increase; they are the best min
    can do. If a beta value is 6, all alpha nodes
    directly below it with a value more than 6 are pruned.
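A hedged Python sketch of depth-limited minimax with alpha-beta cutoffs;
children(state) and the leaf evaluation function evaluate(state) are assumed
to be supplied by the game at hand:

    def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
        """Depth-limited minimax with alpha-beta cutoffs."""
        succ = list(children(state))
        if depth == 0 or not succ:
            return evaluate(state)                 # heuristic value at the frontier
        if maximizing:
            value = float("-inf")
            for child in succ:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:                  # remaining siblings cannot matter
                    break
            return value
        else:
            value = float("inf")
            for child in succ:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, children, evaluate))
                beta = min(beta, value)
                if beta <= alpha:
                    break
            return value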

50
[Alpha-beta example tree: alternating max and min levels; the root receives the value 3]
51
Knowledge Representation
  • Chapters 6 & 7

52
Different paradigms for KR
  • Associationist representations: networks of
    associations between concepts or objects
  • Semantic networks (Quillian): standardization
    of network relations
  • Frames (Minsky): ordered networks of
    multi-property concepts
  • Conceptual dependency networks, CDN (Schank):
    a meta-language to describe natural language
  • Scripts (Schank, Abelson): structured representations
    to describe a stereotyped sequence of events in CDN
  • Conceptual graphs (Sowa): graph representations of
    the semantics of natural language
  • First order predicate logic: to represent truths
    about statements
  • Production rules: IF-THEN statements to capture
    expertise
  • Agents: distributing hard problems such that
    each agent is responsible for a part

53
Rule-based representation inference
  • To build Expert Systems one must drag the expertise
    out of experts. They usually transfer their
    knowledge via rules.
  • The knowledge base contains various rules written
    as conjunctions & disjunctions
  • The incoming input is partially matched to the
    rule that fits best
  • That rule's action might trigger another rule's
    condition.
  • The system chains through the KB until a
    resolution is obtained

54
Example of forward rule chaining
  • IF arthritis AND joint-aches
  • THEN joint-inflammation
  • IF joint-inflammation
  • THEN joint-aches AND temp > 100°
  • IF arthritis-history AND joint-aches present THEN
    osteo-arthritis
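A minimal Python sketch of forward chaining over IF-THEN rules like the ones
above; the condition names are taken from the slide, the engine itself is an
illustrative assumption:

    # Each rule: (set of conditions, set of conclusions)
    rules = [
        ({"arthritis", "joint-aches"}, {"joint-inflammation"}),
        ({"joint-inflammation"}, {"joint-aches", "temp>100"}),
        ({"arthritis-history", "joint-aches"}, {"osteo-arthritis"}),
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire any rule whose conditions are all satisfied,
        adding its conclusions to working memory until nothing new appears."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusions in rules:
                if conditions <= facts and not conclusions <= facts:
                    facts |= conclusions
                    changed = True
        return facts

    print(forward_chain({"arthritis", "joint-aches", "arthritis-history"}, rules))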

55
Example of backward rule chaining
  • IF temp > 100° AND joint-aches
  • THEN flu OR joint-inflammation
  • IF flu
  • THEN analgesic
  • IF joint-inflammation AND arthritis-history
  • THEN check-rheumatoid OR check-osteo

56
Example of KB for Expert System
IF the engine is getting gas AND the engine turns over
THEN the problem is spark plugs

IF the engine does not turn over AND the lights do not work
THEN the problem is the battery or cables

IF the engine does not turn over AND the lights work
THEN the problem is the starter motor

IF there is gas in the fuel tank AND there is gas in the carburetor
THEN the engine is getting gas

To run this KB a top-level, general goal must be
added. The general goal is matched to a rule's
condition; the conclusion is matched to another
rule's condition until there are no more rules;
the last conclusion is the answer.
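A hedged Python sketch of goal-driven (backward) chaining over the car rules
above; observable facts are assumed to be given up front rather than asked
interactively:

    # Each rule maps a conclusion to the conditions that establish it.
    rules = {
        "problem is spark plugs": ["engine is getting gas", "engine turns over"],
        "problem is battery or cables": ["engine does not turn over", "lights do not work"],
        "problem is starter motor": ["engine does not turn over", "lights work"],
        "engine is getting gas": ["gas in fuel tank", "gas in carburetor"],
    }

    def prove(goal, observations):
        """Goal-driven chaining: a goal holds if it is observed, or if some
        rule concludes it and all of that rule's conditions can be proved."""
        if goal in observations:
            return True
        conditions = rules.get(goal)
        return conditions is not None and all(prove(c, observations) for c in conditions)

    obs = {"gas in fuel tank", "gas in carburetor", "engine turns over"}
    for goal in ["problem is spark plugs", "problem is starter motor"]:
        print(goal, "->", prove(goal, obs))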
57
Goal-driven reasoning
Search is done depth-first
58
Data-driven reasoning
If we start with specific problems given as data
and work our way toward the goal, we have a
data-driven reasoning mechanism. The search is
usually done breadth-first.
For the car problem, we can start with: the engine
does not turn over.
For the 8-puzzle we can start with the possible moves,
stated as a production-rule system:
    goal in working memory → halt
    blank is not on left edge → move the blank left
    blank is not on top edge → move the blank up
    blank is not on right edge → move the blank right
    blank is not on bottom edge → move the blank down
59
Semantic Network representation for taxonomies
[Taxonomy figure: Mammals with sub-classes Dogs, Cats, Horses, Rabbits;
under Cats: Siamese, Tabby, Calico, and the instances Frisky, Fluffy, Kiko;
Kiko has furColor(orange) and eyeColor(green)]
60
Predicate representation for Semantic Networks
  • ISA(dogs, mammals)
  • ISA(cats, mammals)
  • INST(kiko,cats)
  • furColor(orange, kiko)
  • eyeColor(green, kiko)
  • ...

61
Reasoning with networks
  • They can be used as concept identification
    systems.
  • Properties can be inherited upwards along the ISA
    links (see the sketch below).
  • One problem is the non-standardization of
    predicates. Originally there were only the
    predicates agent, object, instrument, location
    and time.
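A small Python sketch of inheritance over ISA/INST links, mirroring the
predicates on the previous slide; the warm-blooded property of mammals is an
illustrative assumption:

    isa = {"dogs": "mammals", "cats": "mammals"}        # class -> superclass
    inst = {"kiko": "cats", "frisky": "cats"}           # instance -> class
    props = {
        "mammals": {"blood": "warm"},
        "kiko": {"furColor": "orange", "eyeColor": "green"},
    }

    def lookup(entity, prop):
        """Walk up the INST/ISA links until the property is found (inheritance)."""
        while entity is not None:
            if prop in props.get(entity, {}):
                return props[entity][prop]
            entity = inst.get(entity) or isa.get(entity)
        return None

    print(lookup("kiko", "furColor"))   # orange  (local property)
    print(lookup("kiko", "blood"))      # warm    (inherited from mammals)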

62
Frame Representation
  • In network representations the links are hard to
    interpret. Do they mean subclass, instance,
    type, ...?
  • Concepts & their properties are hard to group in
    a logical fashion.
  • In 1975, Minsky came up with a frame, slot &
    facet representation.

63
Uses of frames
  • They can represent scripts such as a birthday
    party.
  • They can represent different aspects of a
    concept, such as seeing a table from different
    viewpoints, or different definitions of an
    animal: one for veterinarians, one for
    identification, one for zoological taxonomies.
  • They can handle inheritance, exceptions, &
    triggered procedures.

64
Example 1 of frames
  • Animals
  • Legs
  • Fur or skin or feathers
  • Heart, stomach, liver

65
Example 2 of frames
Dog
66
What can be done with frames
  • Inheritance
  • Accommodate exceptions
  • Create templates
  • Represent scripts or different viewpoints
  • Incorporate procedural attachments to call
    procedures or maintain consistency of the KB.

67
Example of template frame
(Student
  (student-id     (VALUE ( )))
  (student-ssn    (VALUE ( )))
  (address        (DEFAULT (Suny-Plattsburgh)))
  (birthdate      (VALUE ( )))
  (age            (IF-NEEDED (procedure-age)))
  (registered-for (SET (course1, course2, course3, course4, course5))
                  (IF-MODIFIED (procedure-registrar-modification))))
  • Slot 3 contains a default value which can be
    overridden by exceptions
  • Slot 5 has a procedural attachment which
    computes the age from slot 4
  • Slot 6 has an attached daemon which will
    modify another part of the KB
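A hedged Python sketch of the slot-and-facet idea: a DEFAULT value, an
IF-NEEDED procedure and an IF-MODIFIED daemon. The slot names follow the
template above; the implementation (plain dictionaries, ISO dates) is an
assumption:

    from datetime import date

    def procedure_age(frame):
        """IF-NEEDED facet: compute age from the birthdate slot on demand."""
        born = date.fromisoformat(frame["birthdate"])
        today = date.today()
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

    def registrar_daemon(frame, new_courses):
        """IF-MODIFIED facet: side effect when registered-for changes."""
        print("notify registrar:", frame["student-id"], "->", new_courses)

    student_template = {"address": "Suny-Plattsburgh"}      # DEFAULT facet

    john = dict(student_template)                            # instance inherits defaults
    john.update({"student-id": "123456", "birthdate": "1982-03-12", "address": "NYC"})

    john["age"] = procedure_age(john)                        # filled only when needed
    john["registered-for"] = ["csc321", "csc422", "csc485"]
    registrar_daemon(john, john["registered-for"])
    print(john["age"], john["address"])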

68
Example of frame instance
(John-Doe
  (student-id     (VALUE (123456)))
  (student-ssn    (VALUE (98-765-4321)))
  (address        (VALUE (NYC)))
  (birthdate      (VALUE (03/12/82)))
  (age            (IF-NEEDED (procedure-age)))
  (registered-for (SET (csc321, csc422, csc485))
                  (IF-MODIFIED (procedure-registrar-modification))))
  • Slot 3 contains an exception to the default
    value
  • Slot 5's procedural attachment computes the
    age from slot 4
  • Slot 6 has an attached daemon which will
    modify the registrar's KB

69
Conceptual Dependencies (Schank): natural language
representation
70
(No Transcript)
71
(No Transcript)
72
Scripts (Schank & Abelson)
  • A script is a stereotypical sequence of events in
    a particular context.
  • It contains:
  • Entry conditions that are the triggers to enter
    the script
  • Termination results
  • Props used in the script
  • Roles, which are the actions that participants
    perform
  • Scenes, which are different temporal aspects of the
    script

73
(No Transcript)
74
Conceptual Graphs - Sowa
  • They represent an idea, a mental image or a
    sentence
  • Concepts carry referent markers such as ?, *x, #, @:
  • HOUSE: ?      Which house?
  • HOUSE: *x     A house
  • HOUSE: #123   The house at number 123
  • HOUSE: @n     n houses

75
(No Transcript)
76
Cont'd
  • There are graph formation rules:
  • COPY: makes a copy of the CG
  • RESTRICT: replaces a type by a sub-type, or a
    generic by an instance
  • JOIN: joins the relations of 2 arcs
  • SIMPLIFY: eliminates identical relations

77
(No Transcript)
78
Cont'd
  • There are graph manipulation rules:
  • ERASE: eliminates a CG around other CGs
    (transitivity)
  • INSERT: adds a CG in a different context
    (specialization)
  • ITERATE: adds copies of predicates to an
    existing CG
  • DEITERATE: an iterated CG may be eliminated
  • DOUBLE NEGATION: double negations may be added
    or eliminated

79
Logic-based representation & reasoning
  • Originally, AI systems were based on logic because
    that was the only paradigm that could give
    provably correct and sound conclusions.
  • Other representations were found to accommodate
    real life with its exceptions
  • Logic was then modified to enable commonsense
    reasoning: modal logics, temporal logics,
    multi-valued logics, nonmonotonic logics, fuzzy
    logic

80
Propositional logic
  • Based on statements with truth values
  • open(csc345) - csc345 is open
  • ¬open(csc345) - csc345 is not open (i.e.
    closed)
  • Uses the connectives ∧ (conjunction), ∨
    (disjunction), → (implication), ↔ (equivalence),
    ¬ (negation)
  • open(csc345) ∧ meets(mwf)
  • csc345 is open & meets on MWF
  • prereq(csc314) ∨ poi
  • the prerequisite is csc314 or poi
81
Semantics are truth tables
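For reference, the standard truth tables for these connectives (T = true, F = false):

    p  q | p∧q  p∨q  p→q  p↔q  ¬p
    T  T |  T    T    T    T    F
    T  F |  F    T    F    F    F
    F  T |  F    T    T    F    T
    F  F |  F    F    T    T    T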
82
FOPL
  • It permits relations between entities, called
    clauses
  • prereq(csc345, csc314)
  • It allows variables
  • man(X) → mortal(X)
  • It has quantifiers: universal ∀ (for all),
    existential ∃ (there exists)
  • ∀X dog(X) → animal(X)
  • The X in dog(X) is free, but the X in ∀X
    dog(X) is bound because all values for X in the
    domain are accounted for.
  • A formula with no free variables is a sentence.

83
Important equivalences
  • p → q ≡ ¬p ∨ q
  • ¬∀X p(X) ≡ ∃X ¬p(X)
  • ¬∃X p(X) ≡ ∀X ¬p(X)
  • ∀X p(X) ≡ ∀Y p(Y)
  • ∃X p(X) ≡ ∃Y p(Y)
  • ∀X (p(X) ∧ q(X)) ≡ ∀X p(X) ∧ ∀X q(X)
  • ∃X (p(X) ∨ q(X)) ≡ ∃X p(X) ∨ ∃X q(X)
  • ¬(p(X) ∧ q(Y)) ≡ ¬p(X) ∨ ¬q(Y)
  • ¬(p(X) ∨ q(Y)) ≡ ¬p(X) ∧ ¬q(Y)

84
Automating FOPL reasoning
  • Replace p → q by ¬p ∨ q
  • Place all ¬ directly in front of quantifiers and
    combine them
  • Replace ¬∀X p(X) by ∃X ¬p(X), and ¬∃X p(X)
    by ∀X ¬p(X)
  • Replace ∀X ∃Y p(X,Y) by ∀X p(X, f(X))
  • Replace all ∃X by constants X1, X2, ...
  • Eliminate all ∀, as a variable X now implies ∀X;
    use different X's
  • Replace conjunctions by separate clauses on
    different lines
  • If several substitutions are possible, choose the
    one with the fewest disjuncts.

85
Reasoning with FOPL
Given
  • man(Marcus)
  • pompeian(Marcus)
  • ∀X pompeian(X) → roman(X)
  • ruler(Caesar)
  • ∀X roman(X) → loyalto(X, Caesar) ∨ hate(X, Caesar)
  • ∀X ∃Y loyalto(X, Y)
  • ∀X ∀Y man(X) ∧ ruler(Y) ∧ tryassassinate(X, Y) →
    ¬loyalto(X, Y)
  • tryassassinate(Marcus, Caesar)

Rewrite as
man(Marcus)
pompeian(Marcus)
¬pompeian(X1) ∨ roman(X1)                              by rules 1, 6
ruler(Caesar)
¬roman(X2) ∨ loyalto(X2, Caesar) ∨ hate(X2, Caesar)    by rules 1, 6
loyalto(X3, f(X3))                                     by rule 4
¬man(X4) ∨ ¬ruler(Y1) ∨ ¬tryassassinate(X4, Y1) ∨ ¬loyalto(X4, Y1)   by rules 1, 6
tryassassinate(Marcus, Caesar)
86
Unification & resolution
87
Algorithm for Unify
Unify(E1, E2)
    case
        E1 & E2 are constants or the empty list:
            if E1 = E2 then return {} else return FAIL
        E1 is a variable:
            if E1 occurs in E2 then return FAIL
            else return {E2/E1}        (unify E1 to E2)
        E2 is a variable:
            if E2 occurs in E1 then return FAIL
            else return {E1/E2}        (unify E2 to E1)
        E1 or E2 is empty:
            return FAIL
        otherwise:
            Head1 := first element of E1
            Head2 := first element of E2
            SUBS1 := Unify(Head1, Head2)
            if SUBS1 = FAIL then return FAIL
            Tail1 := rest of E1 with SUBS1 applied
            Tail2 := rest of E2 with SUBS1 applied
            SUBS2 := Unify(Tail1, Tail2)
            if SUBS2 = FAIL then return FAIL
            else return the composition of SUBS1 & SUBS2
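For example (illustrative), Unify(p(X, b), p(a, Y)) succeeds with the
substitution {a/X, b/Y}, while Unify(X, f(X)) fails because X occurs in
f(X) (the occurs check).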
88
Blackboard Architecture for Reasoning
  • Chapter 5.4

89
Structure of BB
  • Originally created to do voice & speech
    recognition, it has different Knowledge Sources,
    KS
  • It has a BB (blackboard) which forms a state space
    of solutions
  • It has a scheduler which determines which KS to
    examine for the next piece of knowledge to put in
    the state space of the BB

90
KSs for speech recognition
KS1 - waveform of the acoustic signal
KS2 - phonemes of the acoustic signal
KS3 - possible syllables
KS4 - possible words analyzed by one KS
KS5 - possible words analyzed by another KS
KS6 - generates possible word sequences
KS7 - generates possible phrases
91
BB design
[Diagram: several knowledge sources (KS) posting to a shared blackboard, coordinated by a scheduler]
92
Format of KSs
KS-name
    instance-of:        type of KS
    precondition:       positive if the KS is desirable
    trigger-condition:  positive if the KS is triggerable
    action:             action to be taken if the KS is triggered

The scheduler assigns values to precondition and
trigger-condition depending on its focus:
complete solution, space saving, time saving, etc.
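A hedged Python sketch of this format: each knowledge source carries a trigger
condition and an action, and a scheduler repeatedly fires a triggerable KS on
a shared blackboard. The two example sources loosely echo the speech KSs two
slides back; all names are assumptions:

    class KS:
        def __init__(self, name, trigger, action):
            self.name = name
            self.trigger = trigger      # positive if the KS is triggerable
            self.action = action        # action taken if the KS is triggered

    def scheduler(blackboard, sources, rounds=10):
        """Repeatedly pick a triggerable knowledge source and let it
        post a new partial solution onto the blackboard."""
        for _ in range(rounds):
            runnable = [ks for ks in sources if ks.trigger(blackboard)]
            if not runnable:
                break
            runnable[0].action(blackboard)   # focus strategy: first triggerable KS
        return blackboard

    sources = [
        KS("syllables", lambda bb: "phonemes" in bb and "syllables" not in bb,
           lambda bb: bb.update(syllables=["pos", "si", "ble"])),
        KS("words", lambda bb: "syllables" in bb and "words" not in bb,
           lambda bb: bb.update(words=["possible"])),
    ]
    print(scheduler({"phonemes": ["p", "o", "s"]}, sources))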