Title: Course Introduction
1.
- Do you drive? Have you thought about how the route plan is created for you by the GPS system?
- How would you implement a noughts-and-crosses computer program?
2. Artificial Intelligence Search Algorithms
- Dr Rong Qu
- School of Computer Science
- University of Nottingham
- Nottingham, NG8 1BB, UK
- rxq@cs.nott.ac.uk
- Course Introduction; Tree Search
3. Problem Space
- Many problems exhibit no detectable regular structure to be exploited; they appear chaotic and do not yield to efficient algorithms
- The concept of search plays an important role in science and engineering
- In one sense, any problem whatsoever can be seen as a search for the right answer
4. Problem Space
5. Problem Space
- Search space
- Set of all possible solutions to a problem
- Search algorithms
- Take a problem as input
- Return a solution to the problem
6. Problem Space
- Search algorithms
- Exact (exhaustive) tree search
- Uninformed search algorithms
- Depth First, Breadth First
- Informed search algorithms
- A*, Minimax
- Meta-heuristic search
- Modern AI search algorithms
- Genetic algorithm, Simulated Annealing
7. References
- Negnevitsky, Artificial Intelligence: A Guide to Intelligent Systems. Addison-Wesley, 2002
- A good AI textbook
- Easy to read while still in-depth
- Available from the library
8. References
- Talbi, Metaheuristics: From Design to Implementation, 2009
- Quite recent
- Seems quite complete, from theory to design and implementation
9. References
- Russell and Norvig, Artificial Intelligence: A Modern Approach (AIMA), 1995, 2003
- "Artificial Intelligence (AI) is a big field and this is a big book" (Preface to AIMA)
- The most comprehensive textbook in AI
- Web site: http://aima.cs.berkeley.edu/
- Textbook in many courses
- Better used as a reference book (especially for tree search)
- You don't have to learn and read the whole book
10. Other Teaching Resources
- Introduction to AI: http://www.cs.nott.ac.uk/rxq/g51iai.htm
- Uniform cost search
- Depth limited search
- Iterative deepening search
- Alpha-beta pruning, game playing
- Other basics of AI, incl. tree search
- AI Methods: http://www.cs.nott.ac.uk/rxq/g52aim.htm
- Search algorithms
11. Brief intro to search space
12. Problem Space
- Often we can't simply write down and solve the equations for a problem
- Exhaustive search of large state spaces appears to be the only viable approach
- How?
13. The Travelling Salesman Problem
- TSP
- A salesperson has to visit a number of cities
- (S)he can start at any city and must finish at that same city
- The salesperson must visit each city only once
- The cost of a solution is the total distance travelled
- Given a set of cities and the distances between them
- Find the optimal route, i.e. the shortest possible route
- The number of possible routes grows factorially with the number of cities ((n-1)!/2 distinct tours for a symmetric TSP with n cities)
14. Combinatorial Explosion
- A 10-city TSP has 181,000 possible solutions
- A 20-city TSP has 10,000,000,000,000,000 possible solutions
- A 50-city TSP has about 10^62 possible solutions (a 1 followed by 62 zeros)
- For comparison, there are 1,000,000,000,000,000,000,000 litres of water on the planet
- Michalewicz, Z., Evolutionary Algorithms for Constrained Optimization Problems, CEC 2000 (Tutorial)
15. Combinatorial Explosion
- A 50-city TSP has 1.52 x 10^64 possible solutions
- A 10 GHz computer might do 10^9 tours per second
- Running since the start of the universe, it would still only have done about 10^26 tours
- Not even close to evaluating all tours!
- One of the major unsolved theoretical problems in Computer Science
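For a feel of these numbers, a small Python sketch (assuming the (n-1)!/2 count of distinct tours of a symmetric TSP; the slides' own figures use slightly different conventions):

from math import factorial

def symmetric_tsp_tours(n_cities):
    # Distinct tours of a symmetric TSP: fix the start city and ignore direction.
    return factorial(n_cities - 1) // 2

for n in (10, 20, 50):
    print(n, "cities:", f"{symmetric_tsp_tours(n):.2e}", "tours")

# A hypothetical machine evaluating 1e9 tours/second since the start of the
# universe (~4.3e17 seconds) would have seen only about 4e26 tours.
print(f"tours evaluated so far: {4.3e17 * 1e9:.1e}")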
16. Combinatorial Explosion
17. Tree searches
18. Search Trees
- Does the tree under the following root contain a node I?
- All you get to see at first is the root
- and a guarantee that it is a tree
- The rest is up to you to discover during the process of search
[Figure: the root node, labelled F]
19. Search Trees
- A tree is a graph that
- is connected, but becomes disconnected by removing any edge (branch)
- has precisely one path between any two nodes
- Unique paths
- make trees much easier to search
- so we will start with search on trees
20. Search Trees
- Depth of a node
- Depth of a tree
- Examples: TSP vs. game
- Tree size
- Branching factor b = 2 (binary tree)
- Depth d: 2^d nodes at depth d, 2^(d+1) - 1 nodes in total

d   nodes at depth d   total nodes
0   1                  1
1   2                  3
2   4                  7
3   8                  15
4   16                 31
5   32                 63
6   64                 127

- Grows exponentially: combinatorial explosion
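The table above can be reproduced directly (a minimal Python sketch, summing the geometric series for the totals):

b = 2                                             # branching factor of a binary tree
for d in range(7):
    nodes_at_depth = b ** d                       # b^d nodes at depth d
    total_nodes = (b ** (d + 1) - 1) // (b - 1)   # 1 + b + ... + b^d
    print(d, nodes_at_depth, total_nodes)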
21. Search Trees
- The heart of search techniques
- Nodes: states of the problem
- Root node: the initial state of the problem
- Branches: moves by an operator
- Branching factor: the number of neighbourhoods (possible moves from a state)
22. Search Trees: Example I
23. Search Trees: Example II
- 1st level: 1 root node (the empty board)
- 2nd level: 8 nodes
- 3rd level: 6 nodes for each of the nodes on the 2nd level (?)
24. Implementing a Search
- State: the state in the state space to which this node corresponds
- Parent-Node: the node that generated the current node
- Operator: the operator that was applied to generate this node
- Path-Cost: the path cost from the initial state to this node
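A minimal Python sketch of such a node record (fields follow the list above; names are illustrative and not part of the original slides):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the state this node corresponds to
    parent: Optional["Node"] = None   # the node that generated this node
    operator: Any = None              # the operator applied to generate this node
    path_cost: float = 0.0            # path cost from the initial state to this node

def solution_path(node):
    # Walk the parent links back to the root to recover the solution path.
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return path[::-1]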
25. Blind searches
26. Breadth First Search - Method
- Expand the root node first
- Expand all nodes at level 1 before expanding level 2
- OR: expand all nodes at level d before expanding nodes at level d+1
- Queuing function: adds nodes to the end of the queue
27. Breadth First Search - Implementation
node = REMOVE-FRONT(nodes)
If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
nodes = QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
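As a hedged illustration, a runnable Python sketch of this breadth-first loop (tree search over whole paths; `expand` is an assumed helper returning a state's children):

from collections import deque

def breadth_first_search(root, goal_test, expand):
    frontier = deque([[root]])               # queue of paths from the root
    while frontier:
        path = frontier.popleft()            # REMOVE-FRONT
        state = path[-1]
        if goal_test(state):                 # GOAL-TEST
            return path
        for child in expand(state):
            frontier.append(path + [child])  # QUEUING-FN: add to the END of the queue
    return None                              # failure

# Example: the example node set of the following slides (children reconstructed
# from the queue snapshots of the animation), with goal node L.
tree = {"A": "BCDEF", "B": "GH", "C": "IJ", "D": "KL", "E": "MN",
        "F": "OP", "G": "Q", "H": "R", "I": "S", "J": "T", "K": "U"}
print(breadth_first_search("A", lambda s: s == "L", lambda s: list(tree.get(s, ""))))
# -> ['A', 'D', 'L']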
28. The example node set
[Tree diagram: the initial state is the root node A, with nodes A through Z below it; the goal state is node L.]
29. [Animated BFS trace on the example node set. The search begins with the initial state, node A, which is expanded to reveal further (unexpanded) nodes; A is removed from the queue and each revealed node is added to the END of the queue. The search then moves to the first node in the queue, B, which is expanded and removed, with its revealed nodes again added to the end of the queue; the search then backtracks to expand node C, and the process continues level by level. The queue evolves as follows:
A
B, C, D, E, F
C, D, E, F, G, H
D, E, F, G, H, I, J
E, F, G, H, I, J, K, L
F, G, H, I, J, K, L, M, N
G, H, I, J, K, L, M, N, O, P
H, I, J, K, L, M, N, O, P, Q
I, J, K, L, M, N, O, P, Q, R
J, K, L, M, N, O, P, Q, R, S
K, L, M, N, O, P, Q, R, S, T
L, M, N, O, P, Q, R, S, T, U
Node L is then located and the search returns a solution, after 11 node expansions, in the breadth-first search pattern.]
30. Evaluating Search Algorithms
- Evaluating against four criteria
- Optimal
- Complete
- Time complexity
- Space complexity
31. Evaluating Breadth First Search
- Evaluating against four criteria
- Complete?
- Yes
- Optimal?
- Yes
32. Evaluating Breadth First Search
- Evaluating against four criteria
- Space complexity
- O(b^d), i.e. the number of leaves
- Time complexity
- 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
- b is the branching factor
- d is the depth of the search tree
- Note: the space/time complexity could be less, as the solution could be found anywhere before the d-th level
33. Evaluating Breadth First Search
- [Table not reproduced] Time and memory requirements for breadth-first search, assuming a branching factor of 10, 100 bytes per node and searching 1000 nodes/second
34. Breadth First Search - Observations
- Very systematic
- If there is a solution, BFS is guaranteed to find it
- If there are several solutions, then BFS
- will always find the shallowest goal state first, and
- if the cost of a solution is a non-decreasing function of the depth, it will always find the cheapest solution
35. Breadth First Search - Observations
- Space is more of a limiting factor for breadth first search than time
- Time is still an issue
- Who has 35 years to wait for an answer to a level-12 problem (or even 128 days for a level-10 problem)?
- It could be argued that as technology gets faster, exponential growth will not be a problem
- But even if technology were 100 times faster, we would still have to wait 35 years for a level-14 problem - and what if we hit a level-15 problem!
36. Blind searches
37. Depth First Search - Method
- Expand the root node first
- Explore one branch of the tree before exploring another branch
- Queuing function: adds nodes to the front of the queue
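As a sketch (assuming the same `expand` helper as the BFS sketch earlier), the only change from breadth-first search is the queuing function: new paths go to the FRONT of the queue, so the frontier behaves as a stack:

def depth_first_search(root, goal_test, expand):
    frontier = [[root]]                          # used as a stack (front of the queue = top)
    while frontier:
        path = frontier.pop()                    # take the most recently added path
        state = path[-1]
        if goal_test(state):
            return path
        for child in reversed(expand(state)):    # reversed so the left-most child is tried first
            frontier.append(path + [child])
    return None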
38. Depth First Search - Observations
- Space complexity
- Only needs to store the path from the root to the leaf node, as well as the unexpanded nodes
- For a state space with branching factor b and maximum depth m, DFS requires storage of only b·m nodes
- Time complexity
- b^m in the worst case
39. Depth First Search - Observations
- If DFS goes down an infinite branch, it will not terminate if it does not find a goal state
- If it does find a solution, there may be a better solution at a lower (shallower) level in the tree
- Therefore, depth first search is neither complete nor optimal
40. The example node set
- Exercise: show the queue status of BFS and DFS
[Tree diagram as before: the initial state is the root node A, with nodes A through Z below it; the goal state is node L.]
41. Heuristic searches
42. Blind Search vs. Heuristic Searches
- Blind search
- Chooses where to search in the search tree without using any knowledge of the problem
- When problems get large, this is not practical any more
- Heuristic search
- Explores the node which is more likely to lead to the goal state
43. Heuristic Searches - Characteristics
- Heuristic searches work by deciding which is the next best node to expand
- They have some domain knowledge
- They use a function to tell us how close a node is to the goal state
- Usually more efficient than blind searches
- Sometimes called informed search
- There is no guarantee that the chosen node really is the best node
- Heuristic searches estimate the cost to the goal from the current position; it is usual to denote the heuristic evaluation function by h(n)
44. Heuristic Searches - Example
- Go to the city which is nearest to the goal city
- h_SLD(n) = straight-line distance between n and the goal location
45. Heuristic Searches - Greedy Search
- So named as it takes the biggest "bite" it can out of the problem
- That is, it seeks to minimise the estimated cost to the goal by expanding the node estimated to be closest to the goal state
- Implementation is achieved by sorting the nodes based on the evaluation function f(n) = h(n)
46. Heuristic Searches - Greedy Search
- It is only concerned with short-term aims
- It is possible to get stuck in an infinite loop
- It is not optimal
- It is not complete
- Time and space complexity is O(b^m), where m is the maximum depth of the search tree
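A minimal Python sketch of greedy search (assumptions: `expand` returns a state's children and `h` is the supplied heuristic; the counter is only a tie-breaker for the priority queue):

import heapq, itertools

def greedy_search(root, goal_test, expand, h):
    counter = itertools.count()                     # tie-breaker for equal h values
    frontier = [(h(root), next(counter), [root])]   # priority queue ordered by h(n)
    while frontier:
        _, _, path = heapq.heappop(frontier)        # node estimated closest to the goal
        state = path[-1]
        if goal_test(state):
            return path
        for child in expand(state):
            heapq.heappush(frontier, (h(child), next(counter), path + [child]))
    return None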
47. Greedy Search
- Performed well, but not optimal
48. Heuristic Searches vs. Blind Searches
49. Heuristic Searches vs. Blind Searches
- We want to achieve this, but stay
- complete
- optimal
- If we bias the search too much, we could miss goals or miss shorter paths
50. Heuristic searches
51. The A* algorithm
- Combines the cost so far and the estimated cost to the goal
- That is, the evaluation function f(n) = g(n) + h(n)
- An estimate of the cost of the cheapest solution via n
52. The A* algorithm
- A search algorithm to find the shortest path through a search space to a goal state using a heuristic
- f = g + h
- f: a function that gives an evaluation of the state
- g: the cost of getting from the initial state to the current state
- h: the estimated cost of getting from the current state to a goal state
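A minimal Python sketch of A* under these definitions (`successors(state)` is an assumed helper yielding (next_state, step_cost) pairs, `h` the heuristic). With an admissible h, the first goal taken off the priority queue is an optimal solution:

import heapq, itertools

def a_star(start, goal_test, successors, h):
    counter = itertools.count()                           # tie-breaker for equal f values
    frontier = [(h(start), next(counter), 0, [start])]    # entries are (f, tie, g, path)
    while frontier:
        _, _, g, path = heapq.heappop(frontier)           # node with the smallest f = g + h
        state = path[-1]
        if goal_test(state):
            return path, g                                # solution path and its cost g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, path + [nxt]))
    return None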
53. The A* algorithm - admissible heuristics
- A* can be proved to be optimal and complete, providing that the heuristic is admissible
- That is, the heuristic must never overestimate the cost to reach the goal
- h(n) must provide a valid lower bound on the cost to the goal
- But the number of nodes that have to be searched still grows exponentially
54. Straight Line Distances to Bucharest

Town        SLD     Town        SLD
Arad        366     Mehadia     241
Bucharest     0     Neamt       234
Craiova     160     Oradea      380
Dobreta     242     Pitesti      98
Eforie      161     Rimnicu     193
Fagaras     178     Sibiu       253
Giurgiu      77     Timisoara   329
Hirsova     151     Urziceni     80
Iasi        226     Vaslui      199
Lugoj       244     Zerind      374

We can use straight-line distances as an admissible heuristic, as they will never overestimate the cost to the goal. This is because there is no shorter distance between two cities than the straight-line distance.
55. Animation of A*
[Animation of A* searching from Sibiu to Bucharest on the Romania road map. Each node is annotated g / h / f; the fringe is shown in red, visited nodes in blue. Nodes expanded, in order: 1. Sibiu (0 + 253 = 253), 2. Rimnicu (80 + 193 = 273), 3. Pitesti (177 + 98 = 275), 4. Fagaras (99 + 178 = 277), 5. Bucharest (278 + 0 = 278) - GOAL! Bucharest could also be reached via Fagaras at 99 + 211 = 310, but the optimal route Sibiu-Rimnicu-Pitesti-Bucharest is 80 + 97 + 101 = 278 miles.]
56. The A* algorithm
- Clearly the expansion of the nodes is much more directed towards the goal
- The number of expansions is significantly reduced
- Exercise
- Draw the search tree of A* for the 8-puzzle using the two heuristics
57. The A* Algorithm - An example
- The 8-puzzle problem [figure: initial state and goal state boards]
- Online demo of the A* algorithm for the 8-puzzle; Noyes Chapman's 15-puzzle
58. The A* Algorithm - An example
- Possible heuristics in the A* algorithm
- H1: the number of tiles that are in the wrong position
- H2: the sum of the distances of the tiles from their goal positions, using the Manhattan distance
- We need admissible heuristics (that never overestimate)
- Both are admissible, but which one is better?
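A sketch of the two heuristics in Python, assuming a board is represented as a tuple of nine entries read row by row, with 0 for the blank, and that `goal` is the goal configuration:

def misplaced_tiles(board, goal):
    # H1: count tiles (not the blank) that are not in their goal position.
    return sum(1 for tile, g in zip(board, goal) if tile != 0 and tile != g)

def manhattan_distance(board, goal):
    # H2: sum over tiles of the row distance plus column distance to the goal position.
    total = 0
    for idx, tile in enumerate(board):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

board = (1, 3, 4, 8, 6, 2, 7, 0, 5)     # the board of the worked example that follows
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)     # assumed goal configuration (classic 8-puzzle goal)
print(misplaced_tiles(board, goal), manhattan_distance(board, goal))   # -> 4 5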
59. The A* Algorithm - An example
[Figure: the initial board
1 3 4
8 6 2
7   5
and its successor boards, annotated with their heuristic values.]
- H1: the number of tiles that are in the wrong position (4)
- H2: the sum of the distances of the tiles from their goal positions, using the Manhattan distance (5)
60. [Figure: a partial search tree for the 8-puzzle from the same initial board, with each board annotated by its H2 value.]
- What's wrong with the search? Is it implementing the A* search?
- H2: the sum of the distances of the tiles from their goal positions, using the Manhattan distance (5)
61. [Image-only slide - no transcript]
62. The A* Algorithm - An example
- A* is optimal and complete, but it is not all good news
- It can be shown that the number of nodes searched is still exponential in the size of most problems
- This has implications not only for the time taken to perform the search but also for the space required
- Of these two problems, the space complexity is the more serious
63. Game tree searches
64. Game Playing
- Up till now we have assumed the situation is not going to change whilst we search
- Shortest route between two towns
- The same goal board of the 8-puzzle, n-Queens
- Game playing is not like this
- We are not sure of the state after our opponent's move
- The goal of your opponent is to prevent your goal, and vice versa
65. Game Playing
- Wolfgang von Kempelen: "The Turk", an 18th-century chess automaton (1770-1854)
66. Game Playing
67. Game Playing - Minimax
- Game playing: an opponent tries to thwart your every move
- 1944: John von Neumann outlined a search method (Minimax) that maximises your position whilst minimising your opponent's
68. Game Playing - Minimax
- In order to implement minimax we need a method of measuring how good a position is
- This is often called a utility function
- Initially this will be a value that describes our position exactly
69. [Figure: a complete game tree for a small game. Assume we can generate the full search tree. The game starts with the computer (MAX, the agent) making the first move; then the opponent (MIN) moves, and MAX and MIN levels alternate down to the terminal positions (assume positive means the computer wins). The idea is that the computer wants to force the opponent to lose and maximise its own chance of winning: following a branch of the complete tree, we know absolutely who will win, so now we can decide who wins the game. Of course, for larger problems it is not possible to draw the entire tree.]
70. [Figure continued] Now the computer is able to play a perfect game: at each move it will move to the state with the highest value.
- Question: who will win this game, if both players play a perfect game?
71. Game Playing - Minimax
- Nim
- Start with a pile of tokens
- At each move the player must divide the tokens into two non-empty, non-equal piles
72. Game Playing - Minimax
- Nim
- Starting with 7 tokens, draw the complete search tree
- At each move the player must divide the tokens into two non-empty, non-equal piles
73. [Figure: the complete search tree for Nim starting from a pile of 7 tokens.]
74. Game Playing - Minimax
- Conventionally, in discussions of minimax, there are two players: MAX and MIN
- The utility function is taken to be the utility for MAX
- Larger values are better for MAX
- Assuming MIN plays first, complete the MIN/MAX tree
- Assume a utility function of
- 0 = a win for MIN
- 1 = a win for MAX
75. Game Playing - Minimax
- Player MAX is going to take the best move available
- MAX will select the next state to be the one with the highest utility
- Hence, the value of a MAX node is the MAXIMUM of the values of the next possible states
- i.e. the maximum of its children in the search tree
76. Game Playing - Minimax
- Player MIN is going to take the best move available for MIN, i.e. the worst available for MAX
- MIN will select the next state to be the one with the lowest utility
- Higher utility values are better for MAX and so worse for MIN
- Hence, the value of a MIN node is the MINIMUM of the values of the next possible states
- i.e. the minimum of its children in the search tree
77. Game Playing - Minimax
- A MAX move takes the best move for MAX
- so it takes the MAX utility of the children
- A MIN move takes the best move for MIN, hence the worst for MAX
- so it takes the MIN utility of the children
- Games alternate in play between MIN and MAX
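A minimal recursive sketch of minimax in Python (`children(state)` is an assumed helper returning the successor states, empty for terminal positions, and `utility(state)` the terminal value for MAX):

def minimax(state, children, utility, maximizing):
    succ = children(state)
    if not succ:                           # terminal position: use its utility for MAX
        return utility(state)
    values = [minimax(s, children, utility, not maximizing) for s in succ]
    return max(values) if maximizing else min(values)   # MAX takes the max, MIN the min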
78. [Figure: the completed MIN/MAX tree for the Nim game, with every node labelled 0 or 1 by backing up the terminal utilities.]
79. Game Playing - Minimax
- Efficiency of the search
- Game trees are very big
- Evaluation of positions is time-consuming
- How can we reduce the number of nodes to be evaluated?
- Alpha-beta search
80. Game Playing - Minimax
- Consider a variation of the two-player game Nim
- The game starts with a stack of 5 tokens. At each move a player removes one, two or three tokens from the pile, leaving the pile non-empty. A player who has to remove the last token loses the game.
- (a) Draw the complete search tree for this variation of Nim.
- (b) Assume two players, MIN and MAX, with MAX playing first. If a terminal state in the search tree developed above is a win for MIN, a utility of -1 is assigned to that state; a utility of +1 is assigned to a state if MAX wins the game. Apply the minimax algorithm to the search tree.
81. Appendix
- A brief history of AI game playing
82. Game Playing
- Game playing has been studied for a long time
- Babbage (1791-1871)
- The Analytical Engine
- tic-tac-toe
- Turing (1912-1954)
- Chess playing program
- "Within 10 years a computer will be a chess champion" - Herbert Simon, 1957
83. Game Playing
- Why study game playing in AI?
- Games are intelligent activities
- It is very easy to measure success or failure
- They do not require large amounts of knowledge
- They were thought to be solvable by straightforward search from the starting state to a winning position
84. Game Playing - Checkers
- Arthur Samuel
- 1952: first checkers program, written for an IBM 701
- 1954: re-wrote it for an IBM 704
- 10,000 words of main memory
85. Game Playing - Checkers
- Arthur Samuel
- Added a learning mechanism that learnt its own evaluation function by playing against itself
- After a few days it could beat its creator
- And compete on equal terms with strong human players
86. Game Playing - Checkers
- Jonathan Schaeffer: Chinook, 1996
- In 1992 Chinook won the US Open
- Plays a perfect end game by means of a database
- And challenged for the world championship
- http://www.cs.ualberta.ca/chinook/
87. Game Playing - Checkers
- Jonathan Schaeffer: Chinook, 1996
- Dr Marion Tinsley
- World champion for over 40 years, losing only three games in all that time
- Against Chinook he suffered his fourth and fifth defeats
- But ultimately won 21.5 to 18.5
88. Game Playing - Checkers
- Jonathan Schaeffer: Chinook, 1996
- Dr Marion Tinsley
- In August 1994 there was a re-match, but Marion Tinsley withdrew for health reasons
- Chinook became the official world champion
89. Game Playing - Checkers
- Jonathan Schaeffer: Chinook, 1996
- Uses alpha-beta search
- Did not include any learning mechanism
- Schaeffer claimed Chinook was rated at 2814
- The best human players are rated at 2632 and 2625
90. Game Playing - Checkers
- Chellapilla and Fogel, 2000
- Learnt how to play a good game of checkers
- The program used a population of games, with the best competing for survival
- Learning was done using a neural network, with the synapses being changed by an evolutionary strategy
- Input: current board position
- Output: a value used in minimax search
91. Game Playing - Checkers
- Chellapilla and Fogel, 2000
- During the training period the program is given no information other than whether it won or lost (it is not even told by how much)
- No strategy and no database of opening and ending positions
- The best program beat a commercial application 6-0
- The program was presented at CEC 2000 (San Diego) and the prize remains unclaimed
92. Game Playing - Chess
- "No computer can play even an amateur-level game of chess" - Hubert Dreyfus, 1960s
93. Game Playing - Chess
- Shannon, March 9th 1949, New York
- Size of the search space: 10^120 (an average of 40 moves per game)
- 10^120 > the number of atoms in the universe
- At 200 million positions/second, it would take 10^100 years to evaluate all possible games
- Age of the universe: about 10^10 years
- Searching to depth 40, at one state per microsecond, it would take 10^90 years to make the first move
94. Game Playing - Chess
- 1957: AI pioneers Newell and Simon predicted that a computer would be chess champion within ten years
- Simon: "I was a little far-sighted with chess, but there was no way to do it with machines that were as slow as the ones way back then"
- 1958: the first computer to play chess was an IBM 704
- about one millionth of the capacity of Deep Blue
95. Game Playing - Chess
- 1967: Mac Hack competed successfully in human tournaments
- 1983: Belle attained expert status from the United States Chess Federation
- Mid 80s: scientists at Carnegie Mellon University started work on what was to become Deep Blue
- Sun workstation, 50K positions per second
- The project moved to IBM in 1989
96. Game Playing - Chess
- May 11th 1997: Garry Kasparov lost a six-game match to Deep Blue, IBM Research
- 3.5 to 2.5
- Two wins for Deep Blue, one win for Kasparov and three draws
- (http://www.research.ibm.com/deepblue/meet/html/d.3.html)
97. Game Playing - Chess
- Chess still receives a lot of research interest
- e.g. computer programs that learn how to play chess, rather than being told how they should play
- Research on game playing at the School of Computer Science, Nottingham
98. Game Playing - Go
- A significant challenge to computer programmers, not yet much helped by fast computation
- Search methods successful for chess and checkers do not work for Go, due to many qualities of the game
- Larger board area (about five times that of the chess board)
- A new piece appears every move, so positions become progressively more complex
- Wikipedia: http://en.wikipedia.org/wiki/Go_(game)
99. Game Playing - Go
- A significant challenge to computer programmers, not yet much helped by fast computation
- Search methods successful for chess and checkers do not work for Go, due to many qualities of the game
- A material advantage in Go may just mean that short-term gain has been given priority
- A very high degree of pattern recognition is involved in the human capacity to play well
- Wikipedia: http://en.wikipedia.org/wiki/Go_(game)
100. Appendix
101. [Alpha-beta pruning worked example (agent = MAX at the root node A, opponent = MIN), developed over slides 101-104 on a small game tree whose leaf utilities are 6, 5, 8, 2 and 1.
- On discovering util(D) = 6 we know that util(B) <= 6; on discovering util(J) = 8 we know that util(E) >= 8. We can stop the expansion of E, as best play will not go via E: the value of its remaining child K is irrelevant - prune it! STOP! What else can you deduce now!?
102. - With one MIN child of the root worth 6, we know util(A) >= 6; the next MIN node, after seeing a leaf worth 2, is bounded by util <= 2.
103. - That MIN node's value is fixed at 2, which can never improve on the 6 already available to MAX at the root.
104. Alpha-beta Pruning
- The two cuts in the tree are marked as a beta cutoff and an alpha cutoff; the minimax value of the root A is 6.]
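A sketch of the same recursion with alpha-beta pruning added (same assumed `children`/`utility` helpers as the minimax sketch earlier): a branch is abandoned as soon as its value can no longer influence the choice at an ancestor.

def alpha_beta(state, children, utility, maximizing,
               alpha=float("-inf"), beta=float("inf")):
    succ = children(state)
    if not succ:                           # terminal position
        return utility(state)
    if maximizing:
        value = float("-inf")
        for s in succ:
            value = max(value, alpha_beta(s, children, utility, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: MIN above will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for s in succ:
            value = min(value, alpha_beta(s, children, utility, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:              # cutoff: MAX above will never allow this branch
                break
        return value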
105. Appendix
106. General Search
Function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution or failure
    nodes = MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
    Loop do
        If nodes is empty then return failure
        node = REMOVE-FRONT(nodes)
        If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
        nodes = QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
    End
End Function
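A direct Python rendering of GENERAL-SEARCH (a sketch, with whole paths standing in for nodes): the queuing function alone determines the search strategy.

def general_search(initial_state, goal_test, expand, queuing_fn):
    nodes = [[initial_state]]                  # MAKE-QUEUE(MAKE-NODE(INITIAL-STATE))
    while nodes:                               # "if nodes is empty then return failure"
        path = nodes.pop(0)                    # REMOVE-FRONT(nodes)
        if goal_test(path[-1]):                # GOAL-TEST applied to STATE(node)
            return path
        expanded = [path + [child] for child in expand(path[-1])]
        nodes = queuing_fn(nodes, expanded)    # QUEUING-FN(nodes, EXPAND(...))
    return None                                # failure

bfs_queuing = lambda nodes, expanded: nodes + expanded   # breadth first: add to the end
dfs_queuing = lambda nodes, expanded: expanded + nodes   # depth first: add to the front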