Transcript and Presenter's Notes

Title: Bayesian Reasoning


1
Bayesian Reasoning
2
Tax Data Naive Bayes
Classify (_, No, Married, 95K, ?)
3
Tax Data Naive Bayes
  • Classify (_, No, Married, 95K, ?)
  • P(Yes) = 3/10 = 0.3
  • P(Refund=No | Yes) = (3+1)/(3+2) = 0.8
  • P(Status=Married | Yes) = (0+1)/(3+3) = 0.17
  • (The counts are Laplace-smoothed: add 1 to the numerator and the number of attribute values to the denominator.)

Approximate μ with (95+85+90)/3 = 90.
Approximate σ² with ((95-90)² + (85-90)² + (90-90)²)/(3-1) = 25.
f(income=95 | Yes) = e^(-(95-90)²/(2·25)) / sqrt(2·3.14·25) ≈ 0.048
P(Yes | E) = α · 0.8 · 0.17 · 0.048 · 0.3 = α · 0.0019584
4
Tax Data
Classify (_, No, Married, 95K, ?)
P(No) = 7/10 = 0.7
P(Refund=No | No) = (4+1)/(7+2) = 0.556
P(Status=Married | No) = (4+1)/(7+3) = 0.5
Approximate μ with (125+100+70+120+60+220+75)/7 = 110.
Approximate σ² with ((125-110)² + (100-110)² + (70-110)² + (120-110)² + (60-110)² + (220-110)² + (75-110)²)/(7-1) = 2975.
f(income=95 | No) = e^(-(95-110)²/(2·2975)) / sqrt(2·3.14·2975) ≈ 0.00704
P(No | E) = α · 0.556 · 0.5 · 0.00704 · 0.7 ≈ α · 0.00137
5
Tax Data
  • Classify (_, No, Married, 95K, ?)
  • P(Yes | E) = α · 0.0019584
  • P(No | E) = α · 0.00137
  • α = 1/(0.0019584 + 0.00137) = 300.44
  • P(Yes | E) = 300.44 · 0.0019584 ≈ 0.59
  • P(No | E) = 300.44 · 0.00137 ≈ 0.41
  • We predict Yes. (A small Python sketch of the whole calculation follows below.)
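The following is a minimal Python sketch, not part of the original slides, that reproduces the naive Bayes computation of slides 3-5. The counts and income values come from the tax data set; all function and variable names are illustrative.

    import math

    def gaussian(x, mu, var):
        """Gaussian density used for the continuous income attribute."""
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def mean_var(values):
        """Sample mean and sample variance, as on the slides."""
        mu = sum(values) / len(values)
        var = sum((v - mu) ** 2 for v in values) / (len(values) - 1)
        return mu, var

    # Laplace-smoothed conditionals: (count + 1) / (class count + number of values)
    p_yes, p_no = 3 / 10, 7 / 10
    p_refund_no_yes = (3 + 1) / (3 + 2)   # Refund has 2 values
    p_refund_no_no  = (4 + 1) / (7 + 2)
    p_married_yes   = (0 + 1) / (3 + 3)   # Marital status has 3 values
    p_married_no    = (4 + 1) / (7 + 3)

    yes_incomes = [95, 85, 90]
    no_incomes  = [125, 100, 70, 120, 60, 220, 75]

    score_yes = p_refund_no_yes * p_married_yes * gaussian(95, *mean_var(yes_incomes)) * p_yes
    score_no  = p_refund_no_no  * p_married_no  * gaussian(95, *mean_var(no_incomes))  * p_no

    alpha = 1 / (score_yes + score_no)          # normalization constant
    print(alpha * score_yes, alpha * score_no)  # roughly 0.59 and 0.41, so predict Yes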

6
Motivation
  • The conditional independence assumption made by naïve Bayes classifiers may seem too rigid, especially for classification problems in which the attributes are somewhat correlated.
  • Today we discuss a more flexible approach for modeling the conditional probabilities.

7
Naïve Bayes and Correlated Attrs
  • Two binary attributes A and B, and a binary class Y.
  • A distribution:
  • P(A=0 | Y=0) = 0.4   P(A=1 | Y=0) = 0.6
  • P(A=0 | Y=1) = 0.6   P(A=1 | Y=1) = 0.4
  • B is perfectly correlated with A when Y=0, but not when Y=1:
  • P(B=0 | Y=0) = 0.4   P(B=1 | Y=0) = 0.6
  • P(B=0 | Y=1) = 0.5   P(B=1 | Y=1) = 0.5

8
Naïve Bayes and Correlated Attrs
  • Now, we are given a new record with A=0 and B=0.
  • Using naïve Bayes we have
  • P(Y=0 | A=0, B=0) = P(A=0 | Y=0) P(B=0 | Y=0) P(Y=0) / P(A=0, B=0) = 0.4 · 0.4 · 0.5 / P(A=0, B=0) = 0.08 / P(A=0, B=0)
  • P(Y=1 | A=0, B=0) = P(A=0 | Y=1) P(B=0 | Y=1) P(Y=1) / P(A=0, B=0) = 0.6 · 0.5 · 0.5 / P(A=0, B=0) = 0.15 / P(A=0, B=0)
  • So, we predict Y=1.

9
Naïve Bayes and Correlated Attrs
  • However, since A and B are perfectly correlated (when Y=0), we have that
  • P(A=0, B=0 | Y=0) = P(A=0 | Y=0) = 0.4
  • Thus,
  • P(Y=0 | A=0, B=0) = P(A=0, B=0 | Y=0) P(Y=0) / P(A=0, B=0) = 0.4 · 0.5 / P(A=0, B=0) = 0.2 / P(A=0, B=0)
  • which is greater than
  • P(Y=1 | A=0, B=0) = P(A=0 | Y=1) P(B=0 | Y=1) P(Y=1) / P(A=0, B=0) = 0.6 · 0.5 · 0.5 / P(A=0, B=0) = 0.15 / P(A=0, B=0)
  • So, the record should have been classified as class 0. (The short sketch below reproduces both computations.)
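A minimal Python sketch, not from the slides, contrasting the naïve Bayes scores with the scores that use the true joint P(A=0, B=0 | Y). The class prior of 0.5 is the one used on slides 8-9; the variable names are illustrative.

    # Unnormalized posterior scores for the record (A=0, B=0).
    p_y = {0: 0.5, 1: 0.5}             # class prior used on the slides
    p_a0_given_y = {0: 0.4, 1: 0.6}
    p_b0_given_y = {0: 0.4, 1: 0.5}

    # Naive Bayes: multiply the per-attribute conditionals.
    naive = {y: p_a0_given_y[y] * p_b0_given_y[y] * p_y[y] for y in (0, 1)}
    print(naive)   # {0: 0.08, 1: 0.15} -> predicts Y=1

    # Exact: when Y=0, B is a copy of A, so P(A=0, B=0 | Y=0) = P(A=0 | Y=0).
    exact = {0: p_a0_given_y[0] * p_y[0],
             1: p_a0_given_y[1] * p_b0_given_y[1] * p_y[1]}
    print(exact)   # {0: 0.2, 1: 0.15} -> class 0 wins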

10
Bayesian networks
  • A simple, graphical notation for conditional independence assertions.
  • Syntax:
  • a set of nodes, one per variable (attribute)
  • a directed, acyclic graph (a link means "directly influences")
  • a conditional distribution for each node given its parents, P(Xi | Parents(Xi))
  • The conditional distribution is represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values. (A sketch of one way to store CPTs in code follows below.)
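One simple way to store such a network in code is a dictionary of CPTs keyed by parent values. This is only an illustrative data structure, not something from the slides; the numbers are the CPT values for the burglary example that slides 18-19 use.

    # Alarm network CPTs: each entry is P(variable = True | parent values).
    P_B = 0.001                       # P(Burglary)
    P_E = 0.002                       # P(Earthquake)
    P_A = {(True, True): 0.95,        # P(Alarm | Burglary, Earthquake)
           (True, False): 0.94,
           (False, True): 0.29,
           (False, False): 0.001}
    P_J = {True: 0.90, False: 0.05}   # P(JohnCalls | Alarm)
    P_M = {True: 0.70, False: 0.01}   # P(MaryCalls | Alarm)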

11
Example
  • I'm at work, neighbor John calls to say my alarm
    is ringing, but neighbor Mary doesn't call.
    Sometimes it's set off by minor earthquakes. Is
    there a burglar?
  • John always calls when he hears the alarm, but
    sometimes confuses the telephone ringing with the
    alarm.
  • Mary likes rather loud music and sometimes misses
    the alarm.
  • Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
  • Network topology reflects "causal" knowledge:
  • A burglar can set the alarm off
  • An earthquake can set the alarm off
  • The alarm can cause Mary to call
  • The alarm can cause John to call

12
Example contd
To save space, some of the probabilities have been omitted from the diagram. The omitted probabilities can be recovered by noting that P(X = x) = 1 - P(X = ¬x) and P(X = ¬x | Y) = 1 - P(X = x | Y), where ¬x denotes the opposite outcome of x. For example, P(¬a | b, e) = 1 - P(a | b, e) = 1 - 0.95 = 0.05.
The topology shows that burglary and earthquakes directly affect the probability of the alarm, but whether Mary or John calls depends only on the alarm. Thus our assumptions are that they don't perceive any burglaries directly and that they don't confer before calling.
13
Semantics
Suppose we have the variables X1,…,Xn. The probability for them to have the values x1,…,xn respectively is P(x1,…,xn), which is short for P(X1 = x1,…, Xn = xn). The network semantics define this full joint probability as the product of the local conditional distributions:
P(x1,…,xn) = ∏i P(xi | parents(Xi))
  • e.g.,
  • P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)
    (evaluated numerically in the sketch below)
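As a quick check, a tiny Python computation of the example above; this is illustrative only, using the CPT values that appear on slides 18-19.

    # P(j, m, a, ~b, ~e) = P(j|a) P(m|a) P(a|~b,~e) P(~b) P(~e)
    p = 0.90 * 0.70 * 0.001 * 0.999 * 0.998
    print(p)   # about 0.00063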

14
Inference in Bayesian Networks
  • The basic task for a probabilistic inference system is to compute the posterior probability for a query variable, given some observed event, that is, some assignment of values to a set of evidence variables.
  • Notation:
  • X denotes the query variable.
  • E denotes the set of evidence variables E1,…,Em, and e is a particular event, i.e. an assignment of values to the variables in E.
  • Y denotes the set of the remaining variables (hidden variables).
  • A typical query asks for the posterior probability P(x | e1,…,em).
  • E.g., we could ask: what's the probability of a burglary if both Mary and John call, P(burglary | johncalls, marycalls)?

15
Classification
  • Suppose we are given, for the evidence variables E1,…,Em, their values e1,…,em, and we want to predict whether the query variable X has the value x or not.
  • For this we compute and compare P(x | e1,…,em) and P(¬x | e1,…,em).
  • However, how do we compute these posteriors from the network?

What about the hidden variables Y1,…,Yk? (See the formula below.)
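The answer, worked out on the next slides, is to marginalize the hidden variables out of the full joint distribution. As a worked identity (standard inference by enumeration, with α = 1/P(e1,…,em) the normalization constant):

    P(x | e1,…,em) = α · P(x, e1,…,em)
                   = α · Σ over y1,…,yk of P(x, e1,…,em, y1,…,yk)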
16
Inference by enumeration
Example: P(burglary | johncalls, marycalls)? (Abbrev. P(b | j,m))
17
Numerically
  • P(b | j,m) = α P(b) Σa P(j|a) P(m|a) Σe P(a|b,e) P(e) ≈ α · 0.00059
  • P(¬b | j,m) = α P(¬b) Σa P(j|a) P(m|a) Σe P(a|¬b,e) P(e) ≈ α · 0.0015
  • P(B | j,m) = α ⟨0.00059, 0.0015⟩ ≈ ⟨0.28, 0.72⟩

18
P(b | j,m)
P(b | j,m) = α P(b) Σa P(j|a) P(m|a) Σe P(a|b,e) P(e)
= α P(b) Σa P(j|a) P(m|a) (P(a|b,e) P(e) + P(a|b,¬e) P(¬e))
= α P(b) ( P(j|a) P(m|a) (P(a|b,e) P(e) + P(a|b,¬e) P(¬e))
  + P(j|¬a) P(m|¬a) (P(¬a|b,e) P(e) + P(¬a|b,¬e) P(¬e)) )
= α · 0.001 · (0.9 · 0.7 · (0.95 · 0.002 + 0.94 · 0.998) + 0.05 · 0.01 · (0.05 · 0.002 + 0.06 · 0.998))
≈ α · 0.00059
19
P(¬b | j,m)
  • P(¬b | j,m) = α P(¬b) Σa P(j|a) P(m|a) Σe P(a|¬b,e) P(e)
  • = α P(¬b) Σa P(j|a) P(m|a) (P(a|¬b,e) P(e) + P(a|¬b,¬e) P(¬e))
  • = α P(¬b) ( P(j|a) P(m|a) (P(a|¬b,e) P(e) + P(a|¬b,¬e) P(¬e))
  •   + P(j|¬a) P(m|¬a) (P(¬a|¬b,e) P(e) + P(¬a|¬b,¬e) P(¬e)) )
  • = α · 0.999 · (0.9 · 0.7 · (0.29 · 0.002 + 0.001 · 0.998) + 0.05 · 0.01 · (0.71 · 0.002 + 0.999 · 0.998))
  • ≈ α · 0.0015
  • α = 1/(0.00059 + 0.0015) = 478.5
  • P(b | j,m) = 478.5 · 0.00059 ≈ 0.28
  • P(¬b | j,m) = 478.5 · 0.0015 ≈ 0.72
  • (The Python sketch below reproduces this enumeration.)
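A minimal Python sketch of the same inference by enumeration, not part of the slides; the CPT values are those used on slides 18-19, and the function names are illustrative.

    # Inference by enumeration for P(Burglary | john=True, mary=True).
    P_B, P_E = 0.001, 0.002
    P_A = {(True, True): 0.95, (True, False): 0.94,
           (False, True): 0.29, (False, False): 0.001}
    P_J = {True: 0.90, False: 0.05}   # P(JohnCalls=True | Alarm)
    P_M = {True: 0.70, False: 0.01}   # P(MaryCalls=True | Alarm)

    def prob(p_true, value):
        """P(X = value) from P(X = True)."""
        return p_true if value else 1 - p_true

    def score(b):
        """Unnormalized P(b, j=True, m=True): sum out Earthquake and Alarm."""
        total = 0.0
        for e in (True, False):
            for a in (True, False):
                total += (prob(P_B, b) * prob(P_E, e) * prob(P_A[(b, e)], a)
                          * P_J[a] * P_M[a])
        return total

    s_b, s_nb = score(True), score(False)
    alpha = 1 / (s_b + s_nb)
    print(alpha * s_b, alpha * s_nb)   # roughly 0.28 and 0.72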

20
Constructing Bayesian networks
  • 1. Choose an ordering of the variables X1,…,Xn.
  • 2. For i = 1 to n:
  • add Xi to the network
  • select parents from X1,…,Xi-1 such that P(Xi | Parents(Xi)) = P(Xi | X1,…,Xi-1)
  • This choice of parents guarantees
  • P(X1,…,Xn) = ∏i=1..n P(Xi | X1,…,Xi-1)   (chain rule)
  •   = ∏i=1..n P(Xi | Parents(Xi))   (by construction)
  • Choosing the parents from X1,…,Xi-1 is done by human domain experts.

21
Example
  • The ordering of the variables is very important.
  • E.g. suppose we choose the ordering M, J, A, B, E.
  • Adding MaryCalls: no parents.
  • Adding JohnCalls: is P(J | M) = P(J)?
  • That is, is P(John calling) independent of P(Mary calling)?
  • Clearly not, since on any given day, if Mary called, then the probability that John called is much higher than the background probability that he called.
  • So, we add a link from MaryCalls to JohnCalls.

22
Example
  • Suppose we choose the ordering M, J, A, B, E.
  • Adding the A (Alarm) node: is
  • P(A | J, M) = P(A | J)?
  • P(A | J, M) = P(A)?
  • No.
  • Clearly, if both call, it is more likely that the alarm has gone off than if just one or neither calls, so we need both MaryCalls and JohnCalls as parents.

23
Example
  • Suppose we choose the ordering M, J, A, B, E.
  • Adding the B (Burglary) node: is
  • P(B | A, J, M) = P(B | A)?
  • P(B | A, J, M) = P(B)?
  • Yes for the first. No for the second.
  • If we know the alarm state, then a call from John or Mary might give us information about the phone ringing or Mary's music, but not about the burglary.
  • So, we need just Alarm as parent.

24
Example
  • Suppose we choose the ordering M, J, A, B, E.
  • Adding the E (Earthquake) node: is
  • P(E | B, A, J, M) = P(E | A)?
  • P(E | B, A, J, M) = P(E | A, B)?
  • No for the first. Yes for the second.
  • If the alarm is on, it is more likely that there has been an earthquake.
  • But if we know there has been a burglary, then that explains the alarm, and the probability of an earthquake would be only slightly above normal.
  • Hence we need both Alarm and Burglary as parents.

25
Example contd
  • So, the network is less compact if we go non-causal: 1 + 2 + 4 + 2 + 4 = 13 numbers are needed, instead of 10 if we go in the causal direction. (The count is broken down below.)
  • Deciding conditional independence is harder in noncausal directions.
  • Causal models and conditional independence seem hardwired for humans!
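The counts can be read off the CPT sizes. This breakdown is implied by the orderings discussed on the previous slides rather than stated there explicitly:

    Non-causal order M, J, A, B, E:
      M: 1   J | M: 2   A | J,M: 4   B | A: 2   E | A,B: 4   ->  1 + 2 + 4 + 2 + 4 = 13
    Causal order B, E, A, J, M:
      B: 1   E: 1   A | B,E: 4   J | A: 2   M | A: 2         ->  1 + 1 + 4 + 2 + 2 = 10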