CPSC 322 Introduction to Artificial Intelligence

Transcript and Presenter's Notes

Title: CPSC 322 Introduction to Artificial Intelligence


1
CPSC 322 Introduction to Artificial Intelligence
  • December 3, 2004

2
Things...
Slides for the last couple of weeks will be up
this weekend. Did I mention that you have a
final exam at noon on Friday, December 10, in
MCML 166? Check the announcements part of the
web page occasionally between now and then in
case anything important comes up.
3
This should be fun
Artificial Intelligence and Interactive Digital
Entertainment First Annual Conference June 1-3,
2005 in Marina Del Rey, California www.aiide.org
4
Learning
Definition: learning is the adaptive changes that occur in a system which
enable that system to perform the same task or similar tasks more efficiently
or more effectively over time. This could mean:
The range of behaviors is expanded (the agent can do more).
The accuracy on tasks is improved (the agent can do things better).
The speed is improved (the agent can do things faster).
5
Learning is about choosing the best representation
That's certainly true in a logic-based AI world. Our arch learner started with
some internal representation of an arch. As examples were presented, the arch
learner modified its internal representation to either make the representation
accommodate positive examples (generalization) or exclude negative examples
(specialization). There's really nothing else the learner could modify... the
reasoning system is what it is. So any learning problem can be mapped onto one
of choosing the best representation...
6
Learning is about search
...but wait, there's more! By now, you've figured out that the arch learner was
doing nothing more than searching the space of possible representations, right?
So learning, like everything else, boils down to search. If that wasn't
obvious, you probably will want to do a little extra preparation for the final
exam....
7
Same problem - different representation
The arch learner could have represented the arch
concept as a decision tree if we wanted
8
Same problem - different representation
The arch learner could have represented the arch
concept as a decision tree if we wanted
arch
9
Same problem - different representation
The arch learner could have represented the arch
concept as a decision tree if we wanted
do upright blocks support sideways block?
  no: not arch
  yes: arch
10
Same problem - different representation
The arch learner could have represented the arch
concept as a decision tree if we wanted
do upright blocks support sideways block?
  no: not arch
  yes: do upright blocks touch each other?
    no: arch
    yes: not arch
11
Same problem - different representation
The arch learner could have represented the arch
concept as a decision tree if we wanted
do upright blocks support sideways block?
  no: not arch
  yes: do upright blocks touch each other?
    no: is the top block either a rectangle or a wedge?
      no: not arch
      yes: arch
    yes: not arch
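To make the decision-tree representation concrete, here is a minimal Python
sketch of the same tree written as nested conditionals (the function and
parameter names are made up for illustration; they are not from the original
course code):

def is_arch(uprights_support_top, uprights_touch, top_is_rectangle_or_wedge):
    # Mirrors the tree above: each question is one test.
    if not uprights_support_top:
        return False                      # not arch
    if uprights_touch:
        return False                      # not arch
    return top_is_rectangle_or_wedge      # arch only if the top block qualifies

# e.g. is_arch(True, False, True) -> True (arch)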
12
Other issues with learning by example
The learning process requires that there is
someone to say which examples are positive
and which are negative. This approach must start
with a positive example to specialize or
generalize from. Learning by example is
sensitive to the order in which examples are
presented. Learning by example doesn't work well
with noisy, randomly erroneous data.
13
Reinforcement learning
Learning operator sequences based on reward or punishment. Let's say you want
your robot to learn how to vacuum the living room.
iRobot Roomba 4210 Discovery Floorvac Robotic Vacuum, $249.99. It's the ideal
Christmas gift!
14
Reinforcement learning
Learning operator sequences based on reward or punishment. Let's say you want
your robot to learn how to vacuum the living room.
goto livingroom
vacuum floor
goto trashcan
empty bag
Good Robot!

15
Reinforcement learning
Learning operator sequences based on reward or punishment. Let's say you want
your robot to learn how to vacuum the living room.
goto livingroom          goto trashcan
vacuum floor             vacuum floor
goto trashcan            goto livingroom
empty bag                empty bag
Good Robot!              Bad Robot!
16
Reinforcement learning
Learning operator sequences based on reward or punishment. Let's say you want
your robot to learn how to vacuum the living room.
goto livingroom          goto trashcan
vacuum floor             vacuum floor
goto trashcan            goto livingroom
empty bag                empty bag
Good Robot!              Bad Robot!
(actually, there are no bad robots, there is only bad behavior... and Roomba
can't really empty its own bag)
17
Reinforcement learning
Should the robot learn from success? How does it
figure out which part of the sequence of actions
is right (credit assignment problem)?
goto livingroom          goto trashcan
vacuum floor             vacuum floor
goto trashcan            goto livingroom
empty bag                empty bag
Good Robot!              Bad Robot!
18
Reinforcement learning
Should the robot learn from failure? How does it
figure out which part of the sequence of actions
is wrong (blame assignment problem)?
goto livingroom          goto trashcan
vacuum floor             vacuum floor
goto trashcan            goto livingroom
empty bag                empty bag
Good Robot!              Bad Robot!
19
Reinforcement learning
However you answer those questions, the learning task again boils down to the
search for the best representation. Here's another type of learning that
searches for the best representation, but the representation is very different
from what you've seen so far.
20
Learning in neural networks
The perceptron is one of the earliest neural
network models, dating to the early 1960s.
[perceptron diagram: inputs x1 ... xn, each multiplied by a weight w1 ... wn,
feeding a summation unit Σ whose sum is compared against a threshold]
21
Learning in neural networks
The perceptron can't compute everything, but what it can compute it can learn
to compute. Here's how it works. Inputs are 1 or 0. Weights are reals (-n to
n). Each input is multiplied by its corresponding weight. If the sum of the
products is greater than the threshold, then the perceptron outputs 1,
otherwise the output is 0.
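As a minimal sketch of the computation just described (illustrative Python,
not the original course code): multiply each input by its weight, sum, and
compare against the threshold.

def perceptron_output(inputs, weights, threshold):
    # inputs are 0/1, weights are reals; the output is the perceptron's guess
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0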
22
Learning in neural networks
The perceptron can't compute everything, but what it can compute it can learn
to compute. Here's how it works. The output, 1 or 0, is a guess or prediction
about the input: does it fall into the desired classification (output 1) or
not (output 0)?
23
Learning in neural networks
That's it? Big deal. No, there's more to it....
24
Learning in neural networks
That's it? Big deal. No, there's more to it.... Say you wanted your perceptron
to classify arches.
25
Learning in neural networks
That's it? Big deal. No, there's more to it.... Say you wanted your perceptron
to classify arches. That is, you present inputs representing an arch, and the
output should be 1.
26
Learning in neural networks
That's it? Big deal. No, there's more to it.... Say you wanted your perceptron
to classify arches. That is, you present inputs representing an arch, and the
output should be 1. You present inputs not representing an arch, and the
output should be 0.
27
Learning in neural networks
That's it? Big deal. No, there's more to it.... Say you wanted your perceptron
to classify arches. That is, you present inputs representing an arch, and the
output should be 1. You present inputs not representing an arch, and the
output should be 0. If your perceptron does that correctly for all inputs, it
knows the concept of arch.
28
Learning in neural networks
But what if you present inputs for an arch, and
your perceptron outputs a 0???
29
Learning in neural networks
But what if you present inputs for an arch, and
your perceptron outputs a 0??? What could be done
to make it more likely that the output will be 1
the next time the tron sees those same inputs?
30
Learning in neural networks
But what if you present inputs for an arch, and
your perceptron outputs a 0??? What could be done
to make it more likely that the output will be 1
the next time the tron sees those same
inputs? You increase the weights. Which
ones? How much?
31
Learning in neural networks
But what if you present inputs for not an arch,
and your perceptron outputs a 1?
32
Learning in neural networks
But what if you present inputs for not an arch,
and your perceptron outputs a 1? What could be
done to make it more likely that the output will
be 0 the next time the tron sees those same
inputs?
33
Learning in neural networks
But what if you present inputs for not an arch,
and your perceptron outputs a 1? What could be
done to make it more likely that the output will
be 0 the next time the tron sees those same
inputs? You decrease the weights. Which
ones? How much?
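One standard answer to "which ones? how much?" is the classic perceptron
training rule: adjust only the weights whose inputs were 1, by a small fixed
step, up when the target was 1 and down when it was 0. A rough illustrative
sketch (the step size of 0.1 is an assumption, chosen to match the trace that
follows):

def update_weights(inputs, weights, output, target, step=0.1):
    # Only change weights when the guess was wrong, and only where input = 1.
    if output == target:
        return weights
    direction = 1 if target == 1 else -1   # up on a missed arch, down on a false alarm
    return [w + direction * step * x for w, x in zip(weights, inputs)]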
34
Let's make one...
First we need to come up with a representation language. We'll abstract away
most everything to make it simple.
35
Let's make one...
First we need to come up with a representation language. We'll abstract away
most everything to make it simple. All training examples have three blocks. A
and B are upright blocks. A is always left of B. C is a sideways block. Our
language will assume those things always to be true. The only things our
language will represent are the answers to these five questions...
36
Let's make one...
yes = 1, no = 0
Does A support C?
Does B support C?
Does A touch C?
Does B touch C?
Does A touch B?
37
Let's make one...
yes = 1, no = 0
Does A support C? 1
Does B support C? 1
Does A touch C? 1
Does B touch C? 1
Does A touch B? 0
[figure: C resting on top of upright blocks A and B, which do not touch]
arch
38
Let's make one...
yes = 1, no = 0
Does A support C? 1
Does B support C? 1
Does A touch C? 1
Does B touch C? 1
Does A touch B? 1
[figure: C resting on top of upright blocks A and B, which touch each other]
not arch
39
Let's make one...
yes = 1, no = 0
Does A support C? 0
Does B support C? 0
Does A touch C? 1
Does B touch C? 1
Does A touch B? 0
[figure: blocks A, B, and C arranged so that C touches A and B but is not supported]
not arch
40
Let's make one...
yes = 1, no = 0
Does A support C? 0
Does B support C? 0
Does A touch C? 1
Does B touch C? 1
Does A touch B? 0
[figure: blocks A, B, and C arranged so that C touches A and B but is not supported]
not arch
and so on.....
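So each training example boils down to five 0/1 answers plus a label. A small
illustrative sketch of that encoding (the data shown matches the examples on
the slides above):

# (A supports C, B supports C, A touches C, B touches C, A touches B), label
examples = [
    ((1, 1, 1, 1, 0), 1),   # arch
    ((1, 1, 1, 1, 1), 0),   # not arch: A and B touch
    ((0, 0, 1, 1, 0), 0),   # not arch: C is not supported
]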
41
Our very simple arch learner
[perceptron with five inputs x1 ... x5 feeding a summation unit Σ]
42
Our very simple arch learner
[the same perceptron with initial weights w1 = -0.5, w2 = 0, w3 = 0.5, w4 = 0,
w5 = -0.5, and threshold 0.5]
43
Our very simple arch learner
[the arch example presented as input: x1 = 1, x2 = 1, x3 = 1, x4 = 1, x5 = 0;
weights -0.5, 0, 0.5, 0, -0.5; threshold 0.5]
44
Our very simple arch learner
[the same arch example and weights as the previous slide]
sum = 1×(-0.5) + 1×0 + 1×0.5 + 1×0 + 0×(-0.5)
45
Our very simple arch learner
[the same arch example and weights; output 0]
sum = -0.5 + 0 + 0.5 + 0 + 0 = 0, which is not > threshold (0.5), so output is 0
46
Our very simple arch learner
[the same arch example and weights; output 0]
sum = -0.5 + 0 + 0.5 + 0 + 0 = 0, which is not > threshold, so output is 0;
the tron said no when it should say yes, so increase weights where input = 1
47
Our very simple arch learner
[weights after the update: w1 = -0.4, w2 = 0.1, w3 = 0.6, w4 = 0.1, w5 = -0.5;
threshold still 0.5]
sum = -0.5 + 0 + 0.5 + 0 + 0 = 0, which is not > threshold, so output is 0,
so we increase the weights where the input is 1
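(Reading the change off the slide, the step size here appears to be 0.1: each
weight whose input was 1 moves up by 0.1, e.g. w1 = -0.5 + 0.1 = -0.4 and
w3 = 0.5 + 0.1 = 0.6, while w5 stays at -0.5 because its input was 0.)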
48
Our very simple arch learner
[next training example, labelled not arch: x1 = 1, x2 = 1, x3 = 1, x4 = 1,
x5 = 1; weights -0.4, 0.1, 0.6, 0.1, -0.5; threshold 0.5]
now we look at the next example
49
Our very simple arch learner
[the same not-arch example and weights; output 0]
sum = -0.4 + 0.1 + 0.6 + 0.1 - 0.5 = -0.1, which is not > 0.5, so output is 0
50
Our very simple arch learner
[the same not-arch example and weights; output 0]
sum = -0.4 + 0.1 + 0.6 + 0.1 - 0.5 = -0.1, which is not > 0.5, so output is 0;
that's the right output for this input, so we don't touch the weights
51
Our very simple arch learner
[next training example, labelled not arch: x1 = 0, x2 = 0, x3 = 1, x4 = 1,
x5 = 0; weights -0.4, 0.1, 0.6, 0.1, -0.5; threshold 0.5]
now we look at the next example
52
Our very simple arch learner
[the same not-arch example and weights; output 1]
sum = 0 + 0 + 0.6 + 0.1 + 0 = 0.7, which is > 0.5, so output is 1
53
Our very simple arch learner
[the same not-arch example and weights; output 1]
the tron said yes when it should have said no, so we decrease the weights
where the input is 1
54
Our very simple arch learner
[weights after the update: w1 = -0.4, w2 = 0.1, w3 = 0.5, w4 = 0, w5 = -0.5;
threshold still 0.5]
the tron said yes when it should have said no, so we decrease the weights
where the input is 1
55
We could do this for days...
...but let's have the computer do it all for us. First, take a look at the
training examples we've constructed...
56
Training Set
in class?   a supports c   b supports c   a touches c   b touches c   a touches b
yes         1              1              1             1             0
no          1              1              1             1             1
no          0              0              0             0             0
no          0              0              1             1             0
no          1              0              1             0             1
57
Training Set
in class?   a supports c   b supports c   a touches c   b touches c   a touches b
no          1              0              1             0             0
no          0              1              0             1             1
no          0              1              0             1             0
no          0              0              1             0             0
no          0              0              0             1             0
58
Now let's look at the program
The program isn't in CILOG!
59
What's going on?
The perceptron goes through the training
set, making a guess for each example and
comparing it to the actual answer. Based on that
comparison, the perceptron adjusts weights up,
down, or not at all. For some concepts, the
process converges on a set of weights such that
the perceptron guesses correctly for every
example in the training set -- that's when the weights stop changing.
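The original course code was in Scheme; as an illustration only, here is a
rough Python sketch of the loop just described, reusing the guess-and-update
steps from the earlier snippets (the step size of 0.1 and threshold of 0.5 are
assumptions taken from the worked trace):

def train(examples, weights, threshold=0.5, step=0.1, max_sweeps=1000):
    # Sweep the training set until a full pass makes no weight changes.
    for _ in range(max_sweeps):
        changed = False
        for inputs, target in examples:
            guess = 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0
            if guess != target:
                direction = 1 if target == 1 else -1
                weights = [w + direction * step * x for w, x in zip(weights, inputs)]
                changed = True
        if not changed:
            break              # converged: every example classified correctly
    return weights

For a separable concept like the arch examples above, this settles on a set of
weights; for something like XOR it simply runs out of sweeps.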
60
What's going on?
Another way of looking at it: each of the possible inputs (2^5 = 32 in our
case) maps onto a 1 or a 0. The perceptron is trying to find a set of weights
such that it can draw a line through the set of all inputs and say these
inputs belong on one side of the line (output 1) and those belong on the other
side (output 0).
61
What's going on?
one set of weights
62
What's going on?
another set of weights
63
What's going on?
still another set of weights
64
What's going on?
still another set of weights
The perceptron looks for linear separability. That is, in the n-space defined
by the inputs, it's looking for a line or plane that divides the inputs.
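In symbols: the perceptron says "arch" exactly when
w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 > threshold, so the boundary it is
searching for is the plane where that weighted sum equals the threshold, with
output 1 on one side and output 0 on the other.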
65
Observations
This perceptron can learn concepts involving "and" and "or".
This perceptron can't learn "exclusive or" (try ex3 in the Scheme code); it's
not linearly separable, so the weights won't converge.
Even a network of perceptrons arranged in a single layer can't compute XOR (as
well as other things).
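A quick sketch of why no single perceptron can do exclusive or: with two
inputs, weights w1 and w2, and threshold t, XOR would require

  0·w1 + 0·w2 ≤ t   (output 0 on 0,0)
  1·w1 + 0·w2 > t   (output 1 on 1,0)
  0·w1 + 1·w2 > t   (output 1 on 0,1)
  1·w1 + 1·w2 ≤ t   (output 0 on 1,1)

The first line gives t ≥ 0, the middle two give w1 > t and w2 > t, so
w1 + w2 > 2t ≥ t, contradicting the last line. No such weights exist, which is
exactly what "not linearly separable" means here.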
66
Observations
The representation for the learned concept
(e.g., the arch concept) is just five
numbers. What does that say about the physical
symbol system hypothesis? However, if you know
which questions or relations each individual
weight is associated with, you still have a
sense of what the numbers/weights mean.
67
Observations
This is another example of intelligence as
search for the best representation. The final
set of weights that was the solution to the
arch-learning problem is not the only
solution. In fact, there are infinitely many
solutions, corresponding to the infinitely many
ways to draw the line or plane.
68
Beyond perceptrons
Perceptrons were popular in the 1960s, and some folks thought they were the
key to intelligent machines. But you needed multiple-layer perceptron networks
to compute some things, and to make those work you had to build them by hand.
Multiple-layer perceptron nets don't necessarily converge, and when they don't
converge, they don't learn. Building big nets by hand was too time consuming,
so interest in perceptrons died off in the 1970s.
69
The return of the perceptron
In the 1980s, people figured out that with
some changes, multiple layers of perceptron-like
units could compute anything. First, you get rid
of the threshold -- replace the step function
that generated output with a continuous
function. Then you put hidden layers between the
inputs and the output(s).
70
The return of the perceptron
[network diagram: input layer, hidden layer, output layer]
71
The return of the perceptron
Then for each input example you let the activation feed forward to the
outputs, compare outputs to desired outputs, and then backpropagate the
information about the differences to inform decisions about how to adjust the
weights in the multiple layers of the network. Changes to weights give rise to
immediate changes in output, because there's no threshold.
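A minimal illustrative sketch of the two changes just described (a smooth
sigmoid in place of the hard threshold, and a hidden layer between inputs and
output); this shows only the feed-forward pass, not the backpropagation step:

import math

def sigmoid(z):
    # smooth, differentiable replacement for the step/threshold function
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(inputs, hidden_weights, output_weights):
    # hidden_weights: one list of input weights per hidden unit
    hidden = [sigmoid(sum(x * w for x, w in zip(inputs, ws))) for ws in hidden_weights]
    # output_weights: one weight per hidden unit
    return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))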
72
The return of the perceptron
This generation of networks is called "neural networks", or backpropagation
nets, or connectionist networks (don't use "perceptron" now).
They're capable of computing anything computable.
They learn well.
They degrade gracefully.
They handle noisy input well.
They're really good at modelling low-level perceptual processing and simple
learning but...
73
The return of the perceptron
This generation of networks is called "neural networks", or backpropagation
nets, or connectionist networks (don't use "perceptron" now).
They can't explain their reasoning. Neither can we, easily... what the pattern
of weights in the hidden layers corresponds to is opaque.
Not a lot of success yet in modelling higher levels of cognitive behavior
(planning, story understanding, etc.)
74
The return of the perceptron
But this discussion of perceptrons should put you
in a position to read Chapter 11.2 and see how
those backprop networks came about. Which leads
us to the end of this term...
75
Things...
Good bye! Thanks for being nice to the new guy. We'll see you at noon on
Friday, December 10, in MCML 166 for the final exam. Have a great holiday
break, and come visit next term.
76
Questions?