Course Introduction - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Course Introduction


1
  • Do you drive? Have you thought about how the
    route plan is created for you in the GPS system?
  • How would you implement a noughts-and-crosses (tic-tac-toe) computer program?

2
Artificial Intelligence Search Algorithms
  • Dr Rong Qu
  • School of Computer Science
  • University of Nottingham
  • Nottingham, NG8 1BB, UK
  • rxq@cs.nott.ac.uk
  • Course Introduction - Tree Search

3
Problem Space
  • Many problems exhibit no detectable regular structure to be exploited; they appear chaotic and do not yield to efficient algorithms
  • The concept of search plays an important role in
    science and engineering
  • In one way, any problem whatsoever can be seen as
    a search for the right answer

4
Problem Space
5
Problem Space
  • Search space
  • Set of all possible solutions to a problem
  • Search algorithms
  • Take a problem as input
  • Return a solution to the problem

6
Problem Space
  • Search algorithms
  • Exact (exhaustive) tree search
  • Uninformed search algorithms
  • Depth First, Breadth First
  • Informed search algorithms
  • A*, Minimax
  • Meta-heuristic search
  • Modern AI search algorithms
  • Genetic algorithm, Simulated Annealing

7
References
  • Artificial Intelligence: A Guide to Intelligent Systems, Negnevitsky, Addison-Wesley, 2002
  • Good AI textbook
  • Easy to read while in depth
  • Available from library

8
References
  • Metaheuristics: From Design to Implementation, Talbi, 2009

Quite recent. Seems quite complete, from theory to design and implementation.
9
References
  • Artificial Intelligence: A Modern Approach (AIMA), Russell and Norvig, 1995/2003

"Artificial Intelligence (AI) is a big field and this is a big book" (Preface to AIMA). The most comprehensive textbook in AI. Web site: http://aima.cs.berkeley.edu/. A textbook in many courses; better used as a reference book (especially for tree search). You don't have to learn and read the whole book.
10
Other Teaching Resources
  • Introduction to AI: http://www.cs.nott.ac.uk/rxq/g51iai.htm
  • Uniform cost search
  • Depth limited search
  • Iterative deepening search
  • alpha-beta pruning, game playing
  • Other basics of AI, incl. tree search
  • AI Methods: http://www.cs.nott.ac.uk/rxq/g52aim.htm
  • Search algorithms

11
Brief intro to search space
12
Problem Space
  • Often we can't simply write down and solve the
    equations for a problem
  • Exhaustive search of large state spaces appears
    to be the only viable approach

How?
13
The Travelling Salesman Problem
  • TSP
  • A salesperson has to visit a number of cities
  • (S)He can start at any city and must finish at
    that same city
  • The salesperson must visit each city only once
  • The cost of a solution is the total distance
    traveled
  • Given a set of cities and distances between them
  • Find the optimal route, i.e. the shortest
    possible route
  • The number of possible routes is (n - 1)!/2 for a symmetric problem with n cities (see the sketch below)
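As a quick check of these counts, here is a minimal Python sketch, assuming the usual symmetric-TSP convention (fix the start city and ignore the direction of travel):

    from math import factorial

    def num_tours(n_cities: int) -> int:
        """Distinct tours of a symmetric TSP: fixing the start city and
        ignoring direction gives (n - 1)! / 2 routes."""
        return factorial(n_cities - 1) // 2

    for n in (10, 20):
        print(f"{n} cities: {num_tours(n):,} possible routes")
    # 10 cities: 181,440    20 cities: 60,822,550,204,416,000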

14
Combinatorial Explosion
A 10-city TSP has 181,000 possible solutions.
A 20-city TSP has 10,000,000,000,000,000 (10^16) possible solutions.
A 50-city TSP has about 1.52 × 10^64 possible solutions.
For comparison, there are only 1,000,000,000,000,000,000,000 (10^21) litres of water on the planet.
Michalewicz, Z., Evolutionary Algorithms for Constrained Optimization Problems, CEC 2000 (Tutorial)
15
Combinatorial Explosion
A 50-city TSP has 1.52 × 10^64 possible solutions.
A 10 GHz computer might do 10^9 tours per second.
Running since the start of the universe, it would still only have done 10^26 tours. Not even close to evaluating all tours!
One of the major unsolved theoretical problems in Computer Science.
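The arithmetic behind this claim, assuming the universe is roughly 13.8 billion years old:

    tours = 1.52e64                            # possible 50-city tours, as quoted above
    rate = 1e9                                 # tours evaluated per second
    seconds = 13.8e9 * 365.25 * 24 * 3600      # ~4.4e17 seconds since the start of the universe

    evaluated = rate * seconds                 # ~4.4e26 tours evaluated so far
    print(f"{evaluated:.1e} tours evaluated, "
          f"a fraction {evaluated / tours:.0e} of all possible tours")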
16
Combinatorial Explosion
17
Tree searches
18
Search Trees
  • Does the tree under the following root contain a
    node I?
  • All you get to see at first is the root
  • and a guarantee that it is a tree
  • The rest is up to you to discover during the
    process of search

F
19
Search Trees
  • A tree is a graph that
  • is connected but becomes disconnected by removing
    any edge (branch)
  • has precisely one path between any two nodes
  • Unique path
  • makes them much easier to search
  • so we will start with search on trees

20
Search Trees
  • Depth of a node
  • Depth of a tree
  • Examples TSP vs. game
  • Tree size
  • Branching factor b = 2 (binary tree)
  • Depth d

    d    nodes at depth d (2^d)    total nodes
    0            1                      1
    1            2                      3
    2            4                      7
    3            8                     15
    4           16                     31
    5           32                     63
    6           64                    127

Exponential growth - combinatorial explosion
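The table can be reproduced directly; a short sketch for a complete tree with branching factor b:

    def tree_sizes(b: int, max_depth: int):
        """Nodes at each depth and the running total for a complete b-ary tree."""
        total = 0
        for d in range(max_depth + 1):
            at_depth = b ** d
            total += at_depth
            yield d, at_depth, total

    for d, at_depth, total in tree_sizes(b=2, max_depth=6):
        print(d, at_depth, total)      # last row: 6, 64, 127 as in the table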
21
Search Trees
  • Heart of search techniques
  • Nodes = states of the problem
  • Root node = initial state of the problem
  • Branches = moves by an operator
  • Branching factor = number of neighbourhoods

22
Search Trees Example I



23
Search Trees Example II
  • 1st level: 1 root node (empty board)
  • 2nd level: 8 nodes
  • 3rd level: 6 nodes for each of the nodes on the 2nd level (?)









24
Implementing a Search
  • State
  • The state in the state space to which this node
    corresponds
  • Parent-Node
  • the node that generated the current node
  • Operator
  • the operator that was applied to generate this
    node
  • Path-Cost
  • the path cost from the initial state to this node
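One minimal way to represent such a node in Python (the class and field names here are illustrative, not from the slides):

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                        # the state this node corresponds to
        parent: Optional["Node"] = None   # the node that generated this node
        operator: Any = None              # the operator applied to generate it
        path_cost: float = 0.0            # path cost from the initial state

        def path(self):
            """Recover the sequence of states from the root to this node."""
            node, states = self, []
            while node is not None:
                states.append(node.state)
                node = node.parent
            return list(reversed(states))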

25
Blind searches
  • Breadth First Search

26
Breadth First Search - Method
  • Expand Root Node First
  • Expand all nodes at level 1 before expanding
    level 2
  • OR
  • Expand all nodes at level d before expanding nodes at level d+1
  • Queuing function
  • Adds nodes to the end of the queue

27
Breadth First Search - Implementation
node ← REMOVE-FRONT(nodes)
If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
nodes ← QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
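A runnable sketch of the same loop in Python, assuming the problem supplies a goal test and a successors function (hypothetical names); the FIFO queue gives the breadth-first behaviour:

    from collections import deque

    def breadth_first_search(initial_state, goal_test, successors):
        """BFS: newly revealed nodes are added to the END of a FIFO queue."""
        frontier = deque([(initial_state, [initial_state])])   # (state, path)
        seen = {initial_state}          # optional on a tree, needed on a graph
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None                     # failure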
28
The example node set
[Tree diagram: the initial state A is the root; its children are B, C, D, E, F; their children are G to P; deeper nodes Q to Z follow; the goal state is node L. The animation below shows a BFS of this node set.]
29
We begin with our initial state, the node labeled A. This node is then expanded to reveal further (unexpanded) nodes. Node A is removed from the queue and each revealed node is added to the END of the queue. The search then moves to the first node in the queue: node B is expanded and then removed from the queue, and the revealed nodes are added to the END of the queue. We then backtrack to expand node C, and the process continues.
Node L is located and the search returns a solution.
Queue trace during the search (front of the queue on the left):
Queue: A                               (size 1)
Queue: B, C, D, E, F                   (size 5)
Queue: C, D, E, F, G, H                (size 6)
Queue: D, E, F, G, H, I, J             (size 7)
Queue: E, F, G, H, I, J, K, L          (size 8)
Queue: F, G, H, I, J, K, L, M, N       (size 9)
Queue: G, H, I, J, K, L, M, N, O, P    (size 10)
Queue: H, I, J, K, L, M, N, O, P, Q
Queue: I, J, K, L, M, N, O, P, Q, R
Queue: J, K, L, M, N, O, P, Q, R, S
Queue: K, L, M, N, O, P, Q, R, S, T
Queue: L, M, N, O, P, Q, R, S, T, U
[Animation counters: 11 nodes were expanded, the search alternating between expanding and backtracking across levels 0 to 2, before finishing.]
BREADTH-FIRST SEARCH PATTERN
30
Evaluating Search Algorithms
  • Evaluating against four criteria
  • Optimal
  • Complete
  • Time complexity
  • Space complexity

31
Evaluating Breadth First Search
  • Evaluating against four criteria
  • Complete?
  • Yes
  • Optimal?
  • Yes

32
Evaluating Breadth First Search
  • Evaluating against four criteria
  • Space Complexity
  • O(b^d), i.e. the number of leaves
  • Time Complexity
  • 1 + b + b^2 + b^3 + ... + b^(d-1) + b^d, i.e. O(b^d)
  • b is the branching factor
  • d is the depth of the search tree
  • Note: the space/time complexity could be less, as the solution could be found anywhere before the d-th level.

33
Evaluating Breadth First Search
Time and memory requirements for breadth-first
search, assuming a branching factor of 10, 100
bytes per node and searching 1000 nodes/second
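A sketch of how the figures in such a table can be computed, using the stated assumptions (branching factor 10, 100 bytes per node, 1000 nodes per second):

    def bfs_requirements(depth, b=10, bytes_per_node=100, nodes_per_second=1000):
        """Total nodes 1 + b + ... + b^depth, and the resulting time and memory."""
        nodes = (b ** (depth + 1) - 1) // (b - 1)
        seconds = nodes / nodes_per_second
        megabytes = nodes * bytes_per_node / 1e6
        return nodes, seconds, megabytes

    for d in (10, 12, 14):
        nodes, secs, mb = bfs_requirements(d)
        print(f"depth {d}: {nodes:.1e} nodes, {secs / 86400:.0f} days "
              f"({secs / (86400 * 365):.1f} years), {mb:.1e} MB")
    # depth 10 -> ~128 days; depth 12 -> ~35 years, matching the observations below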
34
Breadth First Search - Observations
  • Very systematic
  • If there is a solution, BFS is guaranteed to find
    it
  • If there are several solutions, then BFS
  • will always find the shallowest goal state first
    and
  • if the cost of a solution is a non-decreasing
    function of the depth then it will always find
    the cheapest solution

35
Breadth First Search - Observations
  • Space is more of a factor to breadth first search
    than time
  • Time is still an issue
  • Who has 35 years to wait for an answer to a level 12 problem (or even 128 days for a level 10 problem)?
  • It could be argued that as technology gets faster
    then exponential growth will not be a problem
  • But even if technology is 100 times faster
  • we would still have to wait 35 years for a level
    14 problem and what if we hit a level 15 problem!

36
Blind searches
  • Depth First Search

37
Depth First Search - Method
  • Expand Root Node First
  • Explore one branch of the tree before exploring
    another branch
  • Queuing function
  • Adds nodes to the front of the queue
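The only change from breadth-first search is where new nodes are placed; a sketch using a stack (equivalent to adding to the front of the queue), with the same hypothetical goal_test and successors as in the BFS sketch:

    def depth_first_search(initial_state, goal_test, successors, max_depth=None):
        """DFS: newly revealed nodes go to the FRONT of the queue (a stack).
        max_depth is an optional cut-off to guard against infinite branches."""
        stack = [(initial_state, [initial_state])]
        while stack:
            state, path = stack.pop()              # most recently added node
            if goal_test(state):
                return path
            if max_depth is not None and len(path) - 1 >= max_depth:
                continue                           # depth-limited variant
            for nxt in reversed(list(successors(state))):
                stack.append((nxt, path + [nxt]))
        return None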

38
Depth First Search - Observations
  • Space complexity
  • Only needs to store the path from the root to the
    leaf node as well as the unexpanded nodes
  • For a state space with a branching factor of b and a maximum depth of m, DFS requires storage of only b × m nodes
  • Time complexity
  • is O(b^m) in the worst case

39
Depth First Search - Observations
  • If DFS goes down an infinite branch it will not terminate if it does not find a goal state
  • If it does find a solution there may be a better
    solution at a lower level in the tree
  • Therefore, depth first search is neither complete
    nor optimal

40
The example node set
Exercise: show the queue status of BFS and DFS on this node set.
[Same tree as before: initial state A, children B to F, grandchildren G to P, deeper nodes Q to Z; goal state L.]
41
Heuristic searches
42
Blind Search vs. Heuristic Searches
  • Blind search
  • Randomly chooses where to search in the search tree
  • When problems get large, this is no longer practical
  • Heuristic search
  • Explores the node which is more likely to lead to the goal state

43
Heuristic Searches - Characteristics
  • Heuristic searches work by deciding which is the
    next best node to expand
  • Has some domain knowledge
  • Use a function to tell us how close the node is
    to the goal state
  • Usually more efficient than blind searches
  • Sometimes called an informed search
  • There is no guarantee that it is the best node
  • Heuristic searches estimate the cost to the goal
    from its current position. It is usual to denote
    the heuristic evaluation function by h(n)

44
Heuristic Searches Example
Go to the city which is nearest to the goal city
h_SLD(n) = straight-line distance between n and the goal location
45
Heuristic Searches - Greedy Search
  • So named as it takes the biggest bite it can
    out of the problem
  • That is, it seeks to minimise the estimated cost
    to the goal by expanding the node estimated to be
    closest to the goal state
  • Implementation is achieved by sorting the nodes based on the evaluation function f(n) = h(n)
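A sketch of greedy search using a priority queue ordered purely by h(n); goal_test, successors, and h are assumed to be supplied by the problem:

    import heapq
    import itertools

    def greedy_search(initial_state, goal_test, successors, h):
        """Greedy best-first search: always expand the node with the smallest h(n)."""
        counter = itertools.count()     # tie-breaker so states are never compared
        frontier = [(h(initial_state), next(counter), initial_state, [initial_state])]
        visited = set()
        while frontier:
            _, _, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path
            if state in visited:
                continue                # guards against revisiting states in a loop
            visited.add(state)
            for nxt in successors(state):
                heapq.heappush(frontier, (h(nxt), next(counter), nxt, path + [nxt]))
        return None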

46
Heuristic Searches - Greedy Search
  • It is only concerned with short term aims
  • It is possible to get stuck in an infinite loop
  • It is not optimal
  • It is not complete
  • Time and space complexity is O(b^m), where m is the maximum depth of the search tree

47
Greedy Search
Performed well, but not optimal
48
Heuristic Searches vs. Blind Searches
49
Heuristic Searches vs. Blind Searches
  • Want to achieve this but stay
  • complete
  • optimal
  • If we bias the search too much then we could miss goals or miss shorter paths

50
Heuristic searches
  • A* Search

51
The A* algorithm
  • Combines the cost so far and the estimated cost to the goal
  • That is, the evaluation function f(n) = g(n) + h(n)
  • An estimate of the cost of the cheapest solution via n

52
The A* algorithm
  • A search algorithm to find the shortest path through a search space to a goal state using a heuristic
  • f = g + h
  • f - function that gives an evaluation of the state
  • g - the cost of getting from the initial state to the current state
  • h - the cost of getting from the current state to a goal state
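A corresponding sketch of A*, ordering the frontier by f(n) = g(n) + h(n); step_cost is an assumed function giving the cost of each individual move:

    import heapq
    import itertools

    def a_star(initial_state, goal_test, successors, h, step_cost):
        """A*: expand the node with the smallest f(n) = g(n) + h(n)."""
        counter = itertools.count()               # tie-breaker for the heap
        frontier = [(h(initial_state), next(counter), 0, initial_state, [initial_state])]
        best_g = {initial_state: 0}
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path, g                    # solution and its cost
            for nxt in successors(state):
                g_next = g + step_cost(state, nxt)
                if g_next < best_g.get(nxt, float("inf")):   # cheaper route to nxt
                    best_g[nxt] = g_next
                    heapq.heappush(frontier, (g_next + h(nxt), next(counter),
                                              g_next, nxt, path + [nxt]))
        return None

With an admissible h this sketch returns an optimal solution; with h(n) = 0 everywhere it reduces to uniform cost search.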

53
The A* algorithm - admissible heuristics
  • It can be proved to be optimal and complete providing that the heuristic is admissible
  • That is, the heuristic must never overestimate the cost to reach the goal
  • h(n) must provide a valid lower bound on cost to
    the goal
  • But, the number of nodes that have to be searched
    still grows exponentially

54
Straight Line Distances to Bucharest
Town        SLD      Town        SLD
Arad        366      Mehadia     241
Bucharest     0      Neamt       234
Craiova     160      Oradea      380
Dobreta     242      Pitesti      98
Eforie      161      Rimnicu     193
Fagaras     178      Sibiu       253
Giurgiu      77      Timisoara   329
Hirsova     151      Urziceni     80
Iasi        226      Vaslui      199
Lugoj       244      Zerind      374
We can use straight line distances as an
admissible heuristic as they will never
overestimate the cost to the goal. This is
because there is no shorter distance between two
cities than the straight line distance.
55
[Animation of A* search on the Romania map, from Arad to Bucharest. The fringe is shown in red and visited nodes in blue; each node is annotated with g + h = f.]
Nodes expanded, in order:
1. Sibiu
2. Rimnicu
3. Pitesti
4. Fagaras
5. Bucharest (f = 278)  GOAL!
Optimal route: 80 + 97 + 101 = 278 miles.
56
The A* algorithm
  • Clearly the expansion of the nodes is much more directed towards the goal
  • The number of expansions is significantly reduced
  • Exercise
  • Draw the search tree of A* for the 8-puzzle using the two heuristics

57
The A* Algorithm - An example
Initial State
Goal State
The 8-puzzle problem. Online demo of the A* algorithm for the 8-puzzle: Noyes Chapman's 15-puzzle.
58
The A* Algorithm - An example
Possible heuristics in the A* algorithm
  • H1
  • the number of tiles that are in the wrong position
  • H2
  • the sum of the distances of the tiles from their goal positions, using the Manhattan Distance
  • We need admissible heuristics (they must never overestimate)
  • Both are admissible, but which one is better?
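Both heuristics are easy to write down for a board encoded as a tuple of nine entries with 0 for the blank (this encoding is an assumption, not from the slides):

    def h1_misplaced(board, goal):
        """H1: the number of tiles (ignoring the blank) not in their goal position."""
        return sum(1 for tile, target in zip(board, goal)
                   if tile != 0 and tile != target)

    def h2_manhattan(board, goal):
        """H2: the sum of the Manhattan distances of the tiles from their goal positions."""
        goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
        total = 0
        for i, tile in enumerate(board):
            if tile == 0:
                continue
            row, col = divmod(i, 3)
            goal_row, goal_col = goal_pos[tile]
            total += abs(row - goal_row) + abs(col - goal_col)
        return total

H2 is never smaller than H1 and never overestimates, so it is the more informative of the two admissible heuristics.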

59
The A* Algorithm - An example
[Diagram: the initial 8-puzzle board and its successor states, each annotated with its heuristic value.]
  • H1 = the number of tiles that are in the wrong position (4)
  • H2 = the sum of the distances of the tiles from their goal positions, using the Manhattan Distance (5)

60
[Diagram: the search tree for the 8-puzzle expanded using H2 (the Manhattan Distance), with each board annotated by its heuristic value.]
What's wrong with the search? Is it implementing the A* search?
H2 = the sum of the distances of the tiles from their goal positions, using the Manhattan Distance (5)
61
62
The A* Algorithm - An example
  • A* is optimal and complete, but it is not all good news
  • It can be shown that the number of nodes that are searched is still exponential in the size of most problems
  • This has implications not only for the time taken
    to perform the search but also the space required
  • Of these two problems the search complexity is
    more serious

63
Game tree searches
  • MiniMax Algorithm

64
Game Playing
  • Up till now we have assumed that the situation is not going to change whilst we search
  • Shortest route between two towns
  • The goal board of the 8-puzzle or the n-Queens problem stays the same
  • Game playing is not like this
  • We are not sure of the state after our opponent's move
  • Our opponent's goal is to prevent ours, and vice versa

65
Game Playing
Wolfgang von Kempelen's "The Turk", an 18th-century chess automaton (1770-1854)
66
Game Playing
67
Game Playing - Minimax
  • Game Playing
  • An opponent tries to thwart your every move
  • 1944 - John von Neumann outlined a search method
    (Minimax)
  • maximise your position whilst minimising your opponent's

68
Game Playing - Minimax
  • In order to implement we need a method of
    measuring how good a position is
  • Often called a utility function
  • Initially this will be a value that describes our
    position exactly

69
Assume we can generate the full search tree. The idea is that the computer wants to force the opponent to lose, and to maximise its own chance of winning. Of course, for larger problems it is not possible to draw the entire tree.
The game starts with the computer (MAX, the agent) making the first move; then the opponent (MIN) makes the next move. At the terminal positions we can decide who wins the game (assume a positive value means the computer wins), so by following a branch we know absolutely who will win.
70
Now the computer is able to play a perfect game: at each move it will move to the state of the highest value.
Question: who will win this game, if both players play a perfect game?
71
Game Playing - Minimax
  • Nim
  • Start with a pile of tokens
  • At each move the player must divide the tokens
    into two non-empty, non-equal piles

72
Game Playing - Minimax
  • Nim
  • Starting with 7 tokens, draw the complete search
    tree
  • At each move the player must divide the tokens
    into two non-empty, non-equal piles
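A sketch of the move generator for this game, representing a position as a sorted tuple of pile sizes (an assumed encoding):

    def nim_moves(position):
        """All legal successor positions: split one pile into two
        non-empty, unequal piles."""
        successors = set()
        for i, pile in enumerate(position):
            rest = position[:i] + position[i + 1:]
            for a in range(1, (pile + 1) // 2):   # a < pile - a, so never equal
                successors.add(tuple(sorted(rest + (a, pile - a))))
        return sorted(successors)

    print(nim_moves((7,)))    # [(1, 6), (2, 5), (3, 4)]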

73
[Complete search tree for Nim, starting from a single pile of 7 tokens.]
74
Game Playing - Minimax
  • Conventionally, in discussions of minimax, we have two players, MAX and MIN
  • The utility function is taken to be the utility for MAX
  • Larger values are better for MAX
  • Assuming MIN plays first, complete the MIN/MAX tree
  • Assume a utility function of
  • 0 = a win for MIN
  • 1 = a win for MAX

75
Game Playing - Minimax
  • Player MAX is going to take the best move
    available
  • Will select the next state to be the one with the
    highest utility
  • Hence, value of a MAX node is the MAXIMUM of the
    values of the next possible states
  • i.e. the maximum of its children in the search
    tree

76
Game Playing - Minimax
  • Player MIN is going to take the best move
    available for MIN i.e. the worst available for
    MAX
  • Will select the next state to be the one with the
    lowest utility
  • higher utility values are better for MAX and so
    worse for MIN
  • Hence, value of a MIN node is the MINIMUM of the
    values of the next possible states
  • i.e. the minimum of its children in the search
    tree

77
Game Playing - Minimax
  • A MAX move takes the best move for MAX
  • so takes the MAX utility of the children
  • A MIN move takes the best for min
  • hence the worst for MAX
  • so takes the MIN utility of the children
  • Games alternate in play between MIN and MAX
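A sketch of the resulting value computation, assuming successors, is_terminal, and utility functions for the game (hypothetical names):

    def minimax(state, is_max_turn, successors, is_terminal, utility):
        """Minimax value of a state: MAX takes the maximum of its children's
        values, MIN takes the minimum, and terminal states return the utility
        (defined from MAX's point of view)."""
        if is_terminal(state):
            return utility(state)
        values = [minimax(s, not is_max_turn, successors, is_terminal, utility)
                  for s in successors(state)]
        return max(values) if is_max_turn else min(values)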

78
[Completed minimax tree for Nim: the 0/1 utilities at the terminal positions are propagated up through alternating MIN and MAX levels.]
79
Game Playing - Minimax
  • Efficiency of the search
  • Game trees are very big
  • Evaluation of positions is time-consuming
  • How can we reduce the number of nodes to be
    evaluated?
  • alpha-beta search

80
Game Playing - Minimax
  • Consider a variation of the two player game Nim
  • The game starts with a stack of 5 tokens. At each
    move a player removes one, two or three tokens
    from the pile, leaving the pile non-empty. A
    player who has to remove the last token loses the
    game.
  • (a) Draw the complete search tree for this
    variation of Nim.
  • (b) Assume two players, min and max. Max plays
    first.
  • If a terminal state in the search tree developed
    above is a win for min, a utility function of -1
    is assigned to that state. A utility function of
    1 is assigned to a state if max wins the game.
  • Apply the minimax algorithm to the search tree.

81
Appendix
  • A brief history of AI game playing

82
Game Playing
  • Game Playing has been studied for a long time
  • Babbage (1791-1871)
  • Analytical machine
  • tic-tac-toe
  • Turing (1912-1954)
  • Chess playing program
  • "Within 10 years a computer will be a chess champion"
  • Herbert Simon, 1957

83
Game Playing
  • Why study game playing in AI
  • Games are intelligent activities
  • It is very easy to measure success or failure
  • Do not require large amounts of knowledge
  • They were thought to be solvable by
    straightforward search from the starting state to
    a winning position

84
Game Playing - Checkers
  • Arthur Samuel
  • 1952 first checker program, written for an IBM
    701
  • 1954 - Re-wrote for an IBM 704
  • 10,000 words of main memory

85
Game Playing - Checkers
  • Arthur Samuel
  • Added a learning mechanism that learnt its own
    evaluation function by playing against itself
  • After a few days it could beat its creator
  • And compete on equal terms with strong human
    players

86
Game Playing - Checkers
  • Jonathan Schaeffer - Chinook, 1996
  • In 1992 Chinook won the US Open
  • Plays a perfect end game by means of a database
  • And challenged for the world championship
  • http://www.cs.ualberta.ca/chinook/

87
Game Playing - Checkers
  • Jonathan Schaeffer - Chinook, 1996
  • Dr Marion Tinsley
  • Held the world championship for over 40 years, losing only three games in all that time
  • Against Chinook he suffered his fourth and fifth
    defeat
  • But ultimately won 21.5 to 18.5

88
Game Playing - Checkers
  • Jonathan Schaeffer - Chinook, 1996
  • Dr Marion Tinsley
  • In August 1994 there was a re-match but Marion
    Tinsley withdrew for health reasons
  • Chinook became the official world champion

89
Game Playing - Checkers
  • Jonathan Schaeffer - Chinook, 1996
  • Uses Alpha-Beta search
  • Did not include any learning mechanism
  • Schaeffer claimed Chinook was rated at 2814
  • The best human players are rated at 2632 and 2625

90
Game Playing - Checkers
  • Chellapilla and Fogel 2000
  • Learnt how to play a good game of checkers
  • The program used a population of games with the
    best competing for survival
  • Learning was done using a neural network with the
    synapses being changed by an evolutionary
    strategy
  • Input current board position
  • Output a value used in minimax search

91
Game Playing - Checkers
  • Chellapilla and Fogel 2000
  • During the training period the program is given
  • no information other than whether it won or lost
    (it is not even told by how much)
  • No strategy and no database of opening and ending
    positions
  • The best program beats a commercial application
    6-0
  • The program was presented at CEC 2000 (San Diego) and the prize remains unclaimed

92
Game Playing - Chess
  • "No computer can play even an amateur-level game of chess"
  • Hubert Dreyfus, 1960s

93
Game Playing - Chess
  • Shannon - March 9th 1949 - New York
  • Size of the search space: 10^120 (based on an average of 40 moves)
  • 10^120 > the number of atoms in the universe
  • At 200 million positions/second it would take 10^100 years to evaluate all possible games
  • Age of the universe: about 10^10 years
  • Searching to depth 40, at one state per microsecond, it would take 10^90 years to make its first move

94
Game Playing - Chess
  • 1957 AI pioneers Newell and Simon predicted
    that a computer would be chess champion within
    ten years
  • Simon: "I was a little far-sighted with chess, but there was no way to do it with machines that were as slow as the ones way back then"
  • 1958 - the first computer to play chess was an IBM 704
  • about one millionth the capacity of Deep Blue

95
Game Playing - Chess
  • 1967 Mac Hack competed successfully in human
    tournaments
  • 1983 Belle attained expert status from the
    United States Chess Federation
  • Mid 80s Scientists at Carnegie Mellon
    University started work on what was to become
    Deep Blue
  • Sun workstation, 50K positions per second
  • Project moved to IBM in 1989

96
Game Playing - Chess
  • May 11th 1997: Garry Kasparov lost a six-game match to Deep Blue (IBM Research)
  • 3.5 to 2.5
  • Two wins for Deep Blue, one win for Kasparov and three draws

(http://www.research.ibm.com/deepblue/meet/html/d.3.html)
97
Game Playing - Chess
  • Still receives a lot of research interest
  • Computer program to learn how to play chess,
    rather than being told how it should play
  • Research on game playing at School of CS,
    Nottingham

98
Game Playing Go
  • A significant challenge to computer programmers,
    not yet much helped by fast computation
  • Search methods successful for chess and checkers
    do not work for Go, due to many qualities of the
    game
  • Larger area of the board (five times the chess
    board)
  • New piece appears every move - progressively more
    complex

Wikipedia: http://en.wikipedia.org/wiki/Go_(game)
99
Game Playing Go
  • A significant challenge to computer programmers,
    not yet much helped by fast computation
  • Search methods successful for chess and checkers
    do not work for Go, due to many qualities of the
    game
  • A material advantage in Go may just mean that
    short-term gain has been given priority
  • Very high degree of pattern recognition involved
    in human capacity to play well

Wikipedia: http://en.wikipedia.org/wiki/Go_(game)
100
Appendix
  • Alpha-Beta pruning

101
[Alpha-beta animation on a small game tree rooted at A: MAX to move at the root (the agent), MIN below (the opponent), with leaf values 6, 5 and 8 in the subtrees shown.]
On discovering util(D) = 6 we know that util(B) ≤ 6.
On discovering util(J) = 8 we know that util(E) ≥ 8.
STOP! What else can you deduce now?
We can stop the expansion of E, as best play will not go via E: the value of K is irrelevant, so prune it!
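A sketch of minimax with alpha-beta pruning, using the same hypothetical game interface as the minimax sketch earlier; alpha is the best value MAX can already guarantee, beta the best MIN can guarantee, and a branch is abandoned as soon as they cross:

    def alphabeta(state, is_max_turn, successors, is_terminal, utility,
                  alpha=float("-inf"), beta=float("inf")):
        """Minimax value of a state, pruning branches that cannot change the result."""
        if is_terminal(state):
            return utility(state)
        if is_max_turn:
            value = float("-inf")
            for s in successors(state):
                value = max(value, alphabeta(s, False, successors, is_terminal,
                                             utility, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:      # MIN will never allow play to reach here
                    break
            return value
        else:
            value = float("inf")
            for s in successors(state):
                value = min(value, alphabeta(s, True, successors, is_terminal,
                                             utility, alpha, beta))
                beta = min(beta, value)
                if beta <= alpha:      # MAX already has a better alternative
                    break
            return value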
102
[The animation continues: util(A) ≥ 6; the second MIN subtree reveals a leaf of value 2, so its value is ≤ 2. Leaf values shown so far: 6, 5, 8, 2, 1.]
103
[The same tree one step later, with the second MIN subtree now valued at 2.]
104
Alpha-beta Pruning
[The completed tree: util(A) = 6, with the pruned branches marked as a beta cutoff and an alpha cutoff.]
105
Appendix
  • General Search

106
General Search
  • Function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution or failure
  • nodes ← MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
  • Loop do
  • If nodes is empty then return failure
  • node ← REMOVE-FRONT(nodes)
  • If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
  • nodes ← QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
  • End
  • End Function