Title: Data Warehousing and Data Mining Chapter 7
1. Data Warehousing and Data Mining, Chapter 7
2. Chapter 7. Classification and Prediction
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian Classification
- Classification by Neural Networks
- Classification by Support Vector Machines (SVM)
- Classification based on concepts from association rule mining
- Other Classification Methods
- Prediction
- Classification accuracy
- Summary
3. Classification vs. Prediction
- Classification
- predicts categorical class labels (discrete or nominal)
- classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it in classifying new data
- Prediction
- models continuous-valued functions, i.e., predicts unknown or missing values
- Typical Applications
- credit approval
- target marketing
- medical diagnosis
- treatment effectiveness analysis
4. Classification: A Two-Step Process
- Model construction: describing a set of predetermined classes
- Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
- The set of tuples used for model construction is the training set
- The model is represented as classification rules, decision trees, or mathematical formulae
- Model usage: classifying future or unknown objects
- Estimate the accuracy of the model
- The known label of each test sample is compared with the classified result from the model
- Accuracy rate is the percentage of test set samples that are correctly classified by the model
- The test set is independent of the training set, otherwise over-fitting will occur
- If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
5. Classification Process (1): Model Construction
Classification Algorithms
IF rank = "professor" OR years > 6 THEN tenured = "yes"
6. Classification Process (2): Use the Model in Prediction
(Jeff, Professor, 4)
Tenured?
7. Supervised vs. Unsupervised Learning
- Supervised learning (classification)
- Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
- New data is classified based on the training set
- Unsupervised learning (clustering)
- The class labels of the training data are unknown
- Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
8. Chapter 7. Classification and Prediction
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian Classification
- Classification by Neural Networks
- Classification by Support Vector Machines (SVM)
- Classification based on concepts from association rule mining
- Other Classification Methods
- Prediction
- Classification accuracy
- Summary
9. Issues Regarding Classification and Prediction (1): Data Preparation
- Data cleaning
- Preprocess data in order to reduce noise and handle missing values
- Relevance analysis (feature selection)
- Remove the irrelevant or redundant attributes
- Data transformation
- Generalize and/or normalize data
10. Issues Regarding Classification and Prediction (2): Evaluating Classification Methods
- Predictive accuracy
- Speed and scalability
- time to construct the model
- time to use the model
- Robustness
- handling noise and missing values
- Scalability
- efficiency in disk-resident databases
- Interpretability
- understanding and insight provided by the model
- Goodness of rules
- decision tree size
- compactness of classification rules
11. Chapter 7. Classification and Prediction
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian Classification
- Classification by Neural Networks
- Classification by Support Vector Machines (SVM)
- Classification based on concepts from association rule mining
- Other Classification Methods
- Prediction
- Classification accuracy
- Summary
12. Classification by Decision Tree Induction
- Decision tree
- A flow-chart-like tree structure
- Internal node denotes a test on an attribute
- Branch represents an outcome of the test
- Leaf nodes represent class labels or class distribution
- Decision tree generation consists of two phases
- Tree construction
- At start, all the training examples are at the root
- Partition examples recursively based on selected attributes
- Tree pruning
- Identify and remove branches that reflect noise or outliers
- Use of decision tree: classifying an unknown sample
- Test the attribute values of the sample against the decision tree
13. Training Dataset
This follows an example from Quinlan's ID3; the buys_computer training data used throughout this chapter is:
age | income | student | credit_rating | buys_computer
<30 | high | no | fair | no
<30 | high | no | excellent | no
30..40 | high | no | fair | yes
>40 | medium | no | fair | yes
>40 | low | yes | fair | yes
>40 | low | yes | excellent | no
30..40 | low | yes | excellent | yes
<30 | medium | no | fair | no
<30 | low | yes | fair | yes
>40 | medium | yes | fair | yes
<30 | medium | yes | excellent | yes
30..40 | medium | no | excellent | yes
30..40 | high | yes | fair | yes
>40 | medium | no | excellent | no
14. Output: A Decision Tree for buys_computer
Figure: the induced tree. The root tests age?; the branch age <30 leads to a student? node (no -> buys_computer = no, yes -> yes), the branch age 30..40 leads directly to yes, and the branch age >40 leads to a credit_rating? node (excellent -> no, fair -> yes).
15. Algorithm for Decision Tree Induction
- Basic algorithm (a greedy algorithm)
- Tree is constructed in a top-down recursive divide-and-conquer manner
- At start, all the training examples are at the root
- Attributes are categorical (if continuous-valued, they are discretized in advance)
- Examples are partitioned recursively based on selected attributes
- Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
- Conditions for stopping partitioning
- All samples for a given node belong to the same class
- There are no remaining attributes for further partitioning: majority voting is employed for classifying the leaf
- There are no samples left
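To make the greedy procedure concrete, here is a minimal Python sketch of top-down induction with information gain; the function names (entropy, info_gain, build_tree) and the dictionary-based tree representation are illustrative choices, not part of the original slides.

```python
import math
from collections import Counter

def entropy(rows, label="label"):
    # Expected information needed to classify a tuple in this node
    counts = Counter(r[label] for r in rows)
    total = len(rows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def info_gain(rows, attr, label="label"):
    # Gain(A) = I(node) - E(A), where E(A) is the weighted entropy of the partitions
    total = len(rows)
    remainder = 0.0
    for v in {r[attr] for r in rows}:
        subset = [r for r in rows if r[attr] == v]
        remainder += (len(subset) / total) * entropy(subset, label)
    return entropy(rows, label) - remainder

def build_tree(rows, attrs, label="label"):
    classes = {r[label] for r in rows}
    if len(classes) == 1:                      # all samples in one class
        return classes.pop()
    if not attrs:                              # no attributes left: majority vote
        return Counter(r[label] for r in rows).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, a, label))
    node = {best: {}}
    for v in {r[best] for r in rows}:          # partition recursively on the best attribute
        subset = [r for r in rows if r[best] == v]
        node[best][v] = build_tree(subset, [a for a in attrs if a != best], label)
    return node
```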
16. Attribute Selection Measure: Information Gain (ID3/C4.5)
- Select the attribute with the highest information gain
- S contains s_i tuples of class C_i for i = 1, ..., m
- Information (entropy) required to classify any arbitrary tuple: I(s_1, ..., s_m) = -Σ_{i=1}^{m} (s_i/s) log2(s_i/s)
- Entropy of attribute A with values {a_1, a_2, ..., a_v}: E(A) = Σ_{j=1}^{v} ((s_1j + ... + s_mj)/s) I(s_1j, ..., s_mj)
- Information gained by branching on attribute A: Gain(A) = I(s_1, ..., s_m) - E(A)
17. Attribute Selection by Information Gain: Computation
- Class P: buys_computer = "yes"; Class N: buys_computer = "no"
- I(p, n) = I(9, 5) = 0.940
- Compute the entropy for age: "age <30" has 5 out of 14 samples, with 2 yes and 3 no, hence I(2, 3) = 0.971
- E(age) = (5/14) I(2, 3) + (4/14) I(4, 0) + (5/14) I(3, 2) = 0.694, so Gain(age) = 0.940 - 0.694 = 0.246
- Similarly, Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048
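The slide's numbers can be checked with a short script; it assumes the standard 14-tuple buys_computer counts listed above.

```python
import math

def I(*counts):
    """Expected information (entropy) for a class-count distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Class distribution: 9 "yes", 5 "no"
print(round(I(9, 5), 3))                       # 0.940

# Partition counts for age: (<30: 2 yes/3 no), (30..40: 4 yes/0 no), (>40: 3 yes/2 no)
E_age = (5/14) * I(2, 3) + (4/14) * I(4, 0) + (5/14) * I(3, 2)
print(round(E_age, 3))                         # 0.694
print(round(I(9, 5) - E_age, 3))               # Gain(age) = 0.246
```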
18. Gain Ratio
- Add another attribute, transaction TID
- for each observation the TID is different
- E(TID) = (1/14)I(1,0) + (1/14)I(1,0) + ... + (1/14)I(1,0) = 0
- gain(TID) = 0.940 - 0 = 0.940
- the highest gain, so TID would be chosen as the test attribute, which makes no sense
- use gain ratio rather than gain
- Split information: a measure of the information value of the split itself
- computed without considering class information
- depends only on the number and sizes of the child nodes
19. Gain Ratio (cont.)
- Split information = Σ_i -(S_i/S) log2(S_i/S)
- the information needed to assign an instance to one of these branches
- Gain ratio = gain(S) / split information(S)
- In the previous example: split info(TID) = 14 × (-(1/14) log2(1/14)) = 3.807
- gain ratio(TID) = (0.940 - 0.0)/3.807 = 0.246
- Split info for age = I(5, 4, 5) = -(5/14) log2(5/14) - (4/14) log2(4/14) - (5/14) log2(5/14) = 1.577
- gain ratio(age) = gain(age)/split info(age) = 0.247/1.577 = 0.156
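A small sketch of the split-information and gain-ratio arithmetic on this slide (values rounded to three decimals):

```python
import math

def split_info(sizes):
    # Information value of the split itself, ignoring class labels
    total = sum(sizes)
    return -sum((s / total) * math.log2(s / total) for s in sizes if s)

# TID: 14 singleton branches
print(round(split_info([1] * 14), 3))          # 3.807
print(round(0.940 / split_info([1] * 14), 3))  # gain ratio(TID) ~ 0.247

# age: branch sizes 5, 4, 5
print(round(split_info([5, 4, 5]), 3))         # 1.577
print(round(0.247 / split_info([5, 4, 5]), 3)) # gain ratio(age) ~ 0.157
```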
20. Exercise
- Repeat the same exercise of constructing the tree using the gain ratio criterion as the attribute selection measure
- Notice that TID still has the highest gain ratio
- Do not split by TID
21. Other Attribute Selection Measures
- Gini index (CART, IBM IntelligentMiner)
- All attributes are assumed continuous-valued
- Assume there exist several possible split values for each attribute
- May need other tools, such as clustering, to get the possible split values
- Can be modified for categorical attributes
22. Gini Index (IBM IntelligentMiner)
- If a data set T contains examples from n classes, the gini index gini(T) is defined as gini(T) = 1 - Σ_j p_j², where p_j is the relative frequency of class j in T
- If T is split into two subsets T1 and T2 with sizes N1 and N2 respectively, the gini index of the split data is gini_split(T) = (N1/N) gini(T1) + (N2/N) gini(T2)
- The attribute that provides the smallest gini_split(T) is chosen to split the node (need to enumerate all possible splitting points for each attribute)
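A minimal sketch of the gini and gini_split computations just defined; the example split counts are illustrative, not taken from a specific attribute:

```python
def gini(counts):
    """gini(T) = 1 - sum_j p_j^2 for the class counts in node T."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gini_split(left_counts, right_counts):
    """Weighted gini of a binary split: (N1/N)*gini(T1) + (N2/N)*gini(T2)."""
    n1, n2 = sum(left_counts), sum(right_counts)
    n = n1 + n2
    return (n1 / n) * gini(left_counts) + (n2 / n) * gini(right_counts)

# A node with 9 yes / 5 no, and a hypothetical split into (6 yes, 1 no) and (3 yes, 4 no)
print(round(gini([9, 5]), 3))                   # 0.459
print(round(gini_split([6, 1], [3, 4]), 3))     # candidate split; smaller is better
```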
23. Extracting Classification Rules from Trees
- Represent the knowledge in the form of IF-THEN rules
- One rule is created for each path from the root to a leaf
- Each attribute-value pair along a path forms a conjunction
- The leaf node holds the class prediction
- Rules are easier for humans to understand
- Example
- IF age = "<30" AND student = "no" THEN buys_computer = "no"
- IF age = "<30" AND student = "yes" THEN buys_computer = "yes"
- IF age = "31..40" THEN buys_computer = "yes"
- IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "no"
- IF age = ">40" AND credit_rating = "fair" THEN buys_computer = "yes"
24. Approaches to Determine the Final Tree Size
- Separate training (2/3) and testing (1/3) sets
- Use cross validation, e.g., 10-fold cross validation
- Use all the data for training
- but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node may improve the entire distribution
- Use the minimum description length (MDL) principle
- halt growth of the tree when the encoding is minimized
25. Enhancements to Basic Decision Tree Induction
- Allow for continuous-valued attributes
- Dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals
- Handle missing attribute values
- Assign the most common value of the attribute
- Assign a probability to each of the possible values
- Attribute construction
- Create new attributes based on existing ones that are sparsely represented
- This reduces fragmentation, repetition, and replication
26. Missing Values
- T cases for attribute A; in F of them the value of A is unknown
- gain = (1 - F/T) × (info(T) - entropy(T, A)) + (F/T) × 0
- split info: add another branch for the cases whose values are unknown
27. Missing Values
- When a case has a known attribute value, it is assigned to subset Ti with probability 1
- If the attribute value is missing, it is assigned to Ti with a probability: give a weight w that the case belongs to subset Ti
- Do that for each subset Ti
- Then the number of cases in each Ti can have fractional values
28. Example
- One of the training cases has a missing age: T = (age = ?, income = mid, student = no, credit = excellent, class = buy = yes)
- gain(age) = (13/14) × (info(8, 5) - ent(age))
- info(8, 5) = -(8/13) log2(8/13) - (5/13) log2(5/13) = 0.961
- ent(age) = (5/13)(-(2/5) log2(2/5) - (3/5) log2(3/5)) + (3/13)(-(3/3) log2(3/3)) + (5/13)(-(3/5) log2(3/5) - (2/5) log2(2/5)) = 0.747
- gain(age) = (13/14)(0.961 - 0.747) = 0.199
29. Example (cont.)
- split info(age) = -(5/14) log2(5/14) [<30] - (3/14) log2(3/14) [31..40] - (5/14) log2(5/14) [>40] - (1/14) log2(1/14) [missing] = 1.809
- gain ratio(age) = 0.199/1.809 = 0.110
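The calculation with the missing age value can be reproduced as follows, assuming the counts used on the previous slides (13 cases with known age, one fractional case):

```python
import math

def I(*counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# 13 cases with known age (8 buy, 5 not buy), 1 case with age missing
info_known = I(8, 5)                                               # 0.961
ent_age = (5/13) * I(2, 3) + (3/13) * I(3, 0) + (5/13) * I(3, 2)   # 0.747
gain_age = (13/14) * (info_known - ent_age)                        # ~0.199

# Split info treats "missing" as an extra branch: branch sizes 5, 3, 5, 1
sizes = [5, 3, 5, 1]
split = -sum((s / 14) * math.log2(s / 14) for s in sizes)          # ~1.809
print(round(gain_age, 3), round(split, 3), round(gain_age / split, 3))
```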
30. Example (cont.)
- After splitting by age, the age <30 branch contains the following cases:
age | student | class | weight
<30 | yes | B | 1
<30 | no | N | 1
<30 | no | N | 1
<30 | no | N | 1
<30 | yes | B | 1
<30 | no | B | 5/13
Figure: the tree after splitting on age. The 31..40 branch is a leaf with 3 + 3/13 B; the >40 branch splits on credit (excellent: 2 N + 5/13 B, fair: 3 B); the <30 branch splits on student (no: 3 N + 5/13 B, yes: 2 B).
31. Example (cont.)
- What happens if a new case has to be classified?
- T = (age <30, income = mid, student = ?, credit = fair), class has to be found
- Based on age it goes to the first subtree, but student is unknown
- weight (2.0/5.4) for student, (3.4/5.4) for not student
- P(buy) = P(stu) P(buy|stu) + P(notstu) P(buy|notstu) = (2/5.4)×1 + (3.4/5.4)×(5/44) = 0.44
- P(nobuy) = P(stu) P(nobuy|stu) + P(notstu) P(nobuy|notstu) = (2/5.4)×0 + (3.4/5.4)×(39/44) = 0.56
32. Continuous Variables
- Income is continuous
- make a binary split
- try all possible splitting points
- compute entropy and gain similarly
- but you can still use income <= v as a test variable in any subtree
- there is still information in income that is not fully used
33. Avoid Overfitting in Classification
- Overfitting: an induced tree may overfit the training data
- Too many branches, some may reflect anomalies due to noise or outliers
- Poor accuracy for unseen samples
- Two approaches to avoid overfitting
- Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
- Difficult to choose an appropriate threshold
- Postpruning: remove branches from a fully grown tree, obtaining a sequence of progressively pruned trees
- Use a set of data different from the training data to decide which is the best pruned tree
34. Motivation for Pruning
- A trivial tree: two classes, "no" with probability p and "yes" with probability 1 - p, conditioned on a given set of attribute values
- (1) assign each case to the majority class "no": error(1) = 1 - p
- (2) assign to "no" with probability p and to "yes" with probability 1 - p: error(2) = p(1-p) + (1-p)p = 2p(1-p) > 1 - p for p > 0.5
- so the simpler tree has a lower classification error
35. Motivation for Pruning (cont.)
- If the error rate of a subtree is higher than the error obtained by replacing the subtree with its most frequent leaf or branch, prune the subtree
- How to estimate the prediction error?
- do not use the training samples: pruning always increases the error on the training sample
- estimate the error based on a test set (cost-complexity or reduced-error pruning)
36. Pessimistic Error Estimates
- Based on the training set only
- A subtree covers N cases, E of them misclassified
- error based on the training set: f = E/N
- but this is not the true error; treat the training set as a sample from the population
- estimate an upper bound on the population error based on the confidence limit
- make a normal approximation to the binomial distribution
37. Pessimistic Error Estimates (cont.)
- Given a confidence level c (the default is 25% in C4.5), find the confidence limit z such that P((f - e)/sqrt(e(1 - e)/N) > z) = c
- N: number of samples
- f = E/N: observed error rate
- e: true error rate
- The upper confidence limit is used as a pessimistic estimate of the true but unknown error rate
- A first approximation to the confidence interval for the error: f ± z·sqrt(f(1 - f)/N)
38. Pessimistic Error Estimates (cont.)
- Solving the above inequality for e gives e = (f + z²/2N + z·sqrt(f/N - f²/N + z²/4N²)) / (1 + z²/N)
- z is the number of standard deviations corresponding to the confidence level c
- for c = 0.25, z = 0.69
- refer to Figure 6.2 on page 166 of Witten and Frank
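A small Python sketch of this upper confidence limit; the function name pessimistic_error is illustrative, and z = 0.69 corresponds to the 25% default mentioned above:

```python
import math

def pessimistic_error(f, N, z=0.69):
    """Upper confidence limit on the true error rate (normal approximation to the
    binomial), as used in C4.5-style pessimistic pruning. z=0.69 ~ 25% one-sided level."""
    num = f + z**2 / (2 * N) + z * math.sqrt(f / N - f**2 / N + z**2 / (4 * N**2))
    return num / (1 + z**2 / N)

# Leaf with N=6 cases, E=2 misclassified: f = 0.33 rises to roughly 0.47
print(round(pessimistic_error(2/6, 6), 2))   # ~0.47
print(round(pessimistic_error(1/2, 2), 2))   # ~0.72
```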
39. Example
- Labour negotiation data
- dependent variable (the class to be predicted): acceptability of contract, good or bad
- independent variables:
- duration
- wage increase 1st year (<2.5, >2.5)
- working hours per week (<36, >36)
- health plan contribution (none, half, full)
- Figure 6.2 shows a branch of the decision tree
40. Figure: a branch of the decision tree for the labour data. The node shown tests wage increase 1st year (>2.5 vs. <2.5). The <2.5 branch tests working hours per week: the <36 branch is a leaf with 1 bad, 1 good; the >36 branch tests health plan contribution, with leaves none -> node a (4 bad, 2 good), half -> node b (1 bad, 1 good), full -> node c (4 bad, 2 good).
41. Example (cont.)
- For node a: E = 2, N = 6, so f = 2/6 = 0.33; plugging into the formula, the upper confidence limit is e = 0.47
- use 0.47 as a pessimistic estimate of the error, rather than the training error of 0.33
- For node b: E = 1, N = 2, so f = 1/2 = 0.50; plugging into the formula, the upper confidence limit is e = 0.72
- For node c: f = 2/6 = 0.33, so again e = 0.47
- Average error of the subtree: (6×0.47 + 2×0.72 + 6×0.47)/14 = 0.51
- The error estimate for the parent (health plan) node is f = 5/14, giving e = 0.46 < 0.51
- so prune the subtree
- now the working hours per week node has two branches
42. Example (cont.)
Figure: after pruning, the working hours per week node has two branches: <36 (1 bad, 1 good; e = 0.72) and >36 (the collapsed health plan leaf, 9 bad, 5 good; e = 0.46).
- average pessimistic error = (2×0.72 + 14×0.46)/16
- the pessimistic error of the pruned tree (working hours collapsed to a single leaf): f = 6/16, giving e_p
- Exercise: calculate the pessimistic error e_p and decide whether to prune based on e_p
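Reusing pessimistic_error() from the sketch above, the pruning decision at the health plan node can be reproduced; the (E, N) counts come from the slides:

```python
# Compare the weighted pessimistic error of the subtree with that of the collapsed node.
leaves = [(2, 6), (1, 2), (2, 6)]                # (E, N) for nodes a, b, c
subtree_err = sum(N * pessimistic_error(E / N, N) for E, N in leaves) / 14
parent_err = pessimistic_error(5 / 14, 14)       # collapse to one leaf: f = 5/14
print(round(subtree_err, 2), round(parent_err, 2))
if parent_err < subtree_err:
    print("prune: replace the subtree with its majority-class leaf")
```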
43. Extracting Classification Rules from Trees
- Represent the knowledge in the form of IF-THEN rules
- One rule is created for each path from the root to a leaf
- Each attribute-value pair along a path forms a conjunction
- The leaf node holds the class prediction
- Rules are easier for humans to understand
- Example
- IF age = "<30" AND student = "no" THEN buys_computer = "no"
- IF age = "<30" AND student = "yes" THEN buys_computer = "yes"
- IF age = "31..40" THEN buys_computer = "yes"
- IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "no"
- IF age = ">40" AND credit_rating = "fair" THEN buys_computer = "yes"
44. Extracting Classification Rules from Trees (cont.)
- Rule R: IF A THEN class C
- Rule R-: IF A- THEN class C, where A- is obtained by deleting condition X from A
- Make a table:
        | class C | not class C
X       | Y1      | E1
not X   | Y2      | E2
- Y1 + E1 cases satisfy R: Y1 correct, E1 misclassified
- Y2 + E2 cases are satisfied by R- but not by R
45. Extracting Classification Rules from Trees (cont.)
- The total number of cases covered by R- is Y1 + Y2 + E1 + E2; some satisfy X, some do not
- Use a pessimistic estimate of the true error for each rule, using the upper confidence limit Upp_C(E, N)
- for rule R, estimate Upp_C(E1, Y1 + E1)
- for R-, estimate Upp_C(E1 + E2, Y1 + E1 + Y2 + E2)
- if the pessimistic error rate of R- is less than that of R, delete condition X
46. Extracting Classification Rules from Trees (cont.)
- Suppose a rule has n conditions
- delete the conditions one by one and repeat
- compare with the pessimistic error of the original rule
- if the minimum over the shortened rules R- is below that of R, delete the corresponding condition X
- continue until there is no improvement in the pessimistic error
- Study the example on pages 49-50 of Quinlan (1993)
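A rough sketch of this condition-dropping test, again reusing pessimistic_error(); the Y and E counts are made-up illustrations:

```python
def rule_error(correct, errors, z=0.69):
    # Pessimistic error of a rule that covers correct + errors cases
    n = correct + errors
    return pessimistic_error(errors / n, n, z)

Y1, E1 = 20, 2        # cases satisfying R (condition X holds)
Y2, E2 = 5, 1         # extra cases picked up when condition X is dropped

err_R      = rule_error(Y1, E1)
err_Rminus = rule_error(Y1 + Y2, E1 + E2)
if err_Rminus <= err_R:
    print("delete condition X")     # the shortened rule is pessimistically no worse
```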
47. AnswerTree
- Variables
- measurement levels
- case weights
- frequency variables
- Growing methods
- Stopping rules
- Tree parameters
- costs, prior probabilities, scores and profits
- Gain summary
- Accuracy of tree
- Cost-complexity pruning
48. Variables
- Categorical variables
- nominal or ordinal
- Continuous variables
- All growing methods accept all types of variables
- QUEST requires that the target variable be nominal
- Target and predictor variables
- target variable (dependent variable)
- predictors (independent variables)
- Case weight and frequency variables
49. Case Weight Variables
- Give unequal treatment to the cases
- Example: direct marketing
- 10,000 households respond and 1,000,000 do not
- use all responders but only 1% of the non-responders (10,000)
- case weight 1 for responders and case weight 100 for non-responders
- Frequency variables
- the count of a record representing more than one individual
50. Tree-Growing Methods
- CHAID
- Chi-squared Automatic Interaction Detector
- Kass (1980)
- Exhaustive CHAID
- Biggs, de Ville, Suen (1991)
- CRT
- Classification and Regression Trees
- Breiman, Friedman, Olshen, and Stone (1984)
- QUEST
- Quick, Unbiased, Efficient Statistical Tree
- Loh, Shih (1997)
51. CHAID
- Evaluates all values of a potential predictor
- Merges values that are judged to be statistically homogeneous with respect to the target variable
- continuous target: F test
- categorical target: chi-square test
- Not binary: can produce more than two categories at any particular level, giving wider trees than the binary methods
- Works for all types of variables
- Supports case weights and frequency variables
- Treats missing values as a single valid category
52. CHAID Algorithm
- For each predictor X:
- (1) find the pair of categories of X that is least significantly different (has the largest p value) with respect to the target variable Y
- Y continuous: use the F test
- Y nominal: form a two-way cross tabulation with the categories of X as rows and the categories of Y as columns, and use the chi-squared test
- (2) for the pair of categories of X with the largest p value:
- if p > alpha_merge (a pre-specified level), merge this pair into a single compound category and go to step (1)
- if p <= alpha_merge, go to step (3)
53. CHAID Algorithm (cont.)
- (3) compute adjusted p values
- select the predictor X that has the smallest adjusted p value (most significant) and compare its p value to alpha_split (a pre-specified splitting level)
- if p < alpha_split, split the node based on the (merged) categories of X
- if p >= alpha_split, do not split the node (it is a terminal node)
- continue the tree-growing process until the stopping rules are met
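A rough sketch of CHAID's merging loop for a nominal target, using scipy's chi-square test on the cross tabulation; the alpha_merge value and the example counts are illustrative, not from the slides:

```python
from itertools import combinations
from scipy.stats import chi2_contingency
import numpy as np

def chaid_merge(table, categories, alpha_merge=0.05):
    """table: dict mapping predictor category -> list of class counts for Y."""
    cats = [[c] for c in categories]             # start with singleton groups
    while len(cats) > 1:
        best_p, best_pair = -1.0, None
        for g1, g2 in combinations(cats, 2):
            counts = np.array([
                np.sum([table[c] for c in g1], axis=0),
                np.sum([table[c] for c in g2], axis=0),
            ])
            _, p, _, _ = chi2_contingency(counts)
            if p > best_p:
                best_p, best_pair = p, (g1, g2)
        if best_p <= alpha_merge:                # all pairs significantly different: stop
            break
        g1, g2 = best_pair                       # merge the least different pair
        cats.remove(g1); cats.remove(g2); cats.append(g1 + g2)
    return cats

# e.g. income categories vs. (yes, no) class counts -- illustrative numbers
print(chaid_merge({"low": [3, 4], "medium": [4, 2], "high": [2, 2]},
                  ["low", "medium", "high"]))
```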
54. Exhaustive CHAID
- CHAID may not find the optimal split for a variable, because it stops merging categories as soon as all remaining categories are statistically different
- Exhaustive CHAID continues merging until only two super-categories are left
- it then examines the series of merges for the predictor and picks the set of categories giving the strongest association with the target, computing an adjusted p value for that association
- this yields the best split for each predictor; the predictor is then chosen based on the p values
- otherwise identical to CHAID
- takes longer to compute, but is safer to use
55. CART
- Binary tree-growing algorithm
- a binary tree may not represent the data efficiently
- partitions the data into two subsets at each node
- the same predictor variable may be used several times at different levels
- supports misclassification costs and prior probability distributions
- computation can take a long time with large data sets
- surrogate splitting for missing values
56. Impurity Measures (1)
- For categorical target variables: Gini, twoing, or (for ordinal targets) ordered twoing
- For continuous targets: least-squared deviation (LSD)
- Gini at node t:
- g(t) = Σ_{j≠i} p(j|t) p(i|t), where i and j are categories
- equivalently, g(t) = 1 - Σ_j p²(j|t)
- when cases are evenly distributed, g takes its maximum value of 1 - 1/k, where k is the number of categories
57. Impurity Measures (2)
- If costs are specified: g(t) = Σ_{j≠i} C(i|j) p(j|t) p(i|t)
- C(i|j) specifies the cost of misclassifying a category j case as category i
- Gini criterion function for split s at node t: Φ(s, t) = g(t) - p_L g(t_L) - p_R g(t_R)
- p_L: proportion of cases in t sent to the left child node
- p_R: proportion of cases in t sent to the right child node
- the split s maximizing the value of Φ(s, t) is chosen
- this is the improvement reported in the tree
58. Impurity Measures (3)
- Twoing
- split the categories into two superclasses, then find the best split on the predictor based on those two superclasses
- Φ(s, t) = p_L p_R [Σ_j |p(j|t_L) - p(j|t_R)|]²
- the split s that maximizes this criterion is chosen
- C1 = {j : p(j|t_L) > p(j|t_R)}, C2 = C - C1
- costs are not taken into account
59. Impurity Measures (4)
- Ordered twoing
- for ordinal target variables
- only contiguous categories can be combined to form superclasses
- LSD
- for continuous targets
- within-node variance for node t: R(t) = (1/N_W(t)) Σ_{i∈t} w_i f_i (y_i - ybar(t))²
- where N_W(t) is the weighted number of cases in t, w_i is the value of the weighting variable for case i, f_i is the frequency value, y_i is the target value, and ybar(t) is the weighted mean for t
60. Impurity Measures (cont.)
- LSD criterion function: Φ(s, t) = R(t) - p_L R(t_L) - p_R R(t_R)
- this value, weighted by the proportion of all cases in t, is the value reported as the improvement in the tree
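A minimal sketch of the LSD impurity and its improvement criterion; the weights default to 1 and the example target values are hypothetical:

```python
import numpy as np

def lsd(y, w=None, f=None):
    """Within-node variance: R(t) = (1/N_W(t)) * sum_i w_i * f_i * (y_i - ybar(t))^2."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    f = np.ones_like(y) if f is None else np.asarray(f, dtype=float)
    nw = np.sum(w * f)
    ybar = np.sum(w * f * y) / nw
    return np.sum(w * f * (y - ybar) ** 2) / nw

def lsd_improvement(y, left_mask):
    """Phi(s, t) = R(t) - pL * R(tL) - pR * R(tR) for a candidate binary split."""
    y = np.asarray(y, dtype=float)
    left_mask = np.asarray(left_mask, dtype=bool)
    left, right = y[left_mask], y[~left_mask]
    p_l, p_r = len(left) / len(y), len(right) / len(y)
    return lsd(y) - p_l * lsd(left) - p_r * lsd(right)

# Hypothetical continuous target, split into a low group and a high group
y = [1.0, 1.2, 0.9, 3.8, 4.1, 4.0]
print(round(lsd_improvement(y, [True, True, True, False, False, False]), 3))
```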
61. Steps in the CART Analysis
- At the root node t = 1, search for the split s* giving the largest decrease in impurity: Φ(s*, 1) = max_{s∈S} Φ(s, 1)
- split node 1 into t = 2 and t = 3 using s*
- repeat the split-searching process in t = 2 and t = 3
- continue until one of the stopping rules is met
62. Steps in the CART Analysis (cont.)
- Stopping rules:
- all cases have identical values for all predictors
- the node becomes pure: all cases have the same value of the target variable
- the depth of the tree has reached its pre-specified maximum value
- the number of cases in a node is less than a pre-specified minimum parent node size
- a split at the node would produce a child node with fewer cases than a pre-specified minimum child node size
- for CART only: the maximum decrease in impurity is less than a pre-specified value
63. QUEST
- Performs variable selection and split-point selection separately
- More computationally efficient than CART
64. Classification in Large Databases
- Classification: a classical problem extensively studied by statisticians and machine learning researchers
- Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed
- Why decision tree induction in data mining?
- relatively fast learning speed (compared with other classification methods)
- convertible to simple and easy-to-understand classification rules
- can use SQL queries for accessing databases
- comparable classification accuracy with other methods
65. Scalable Decision Tree Induction Methods in Data Mining Studies
- SLIQ (EDBT'96, Mehta et al.)
- builds an index for each attribute; only the class list and the current attribute list reside in memory
- SPRINT (VLDB'96, J. Shafer et al.)
- constructs an attribute list data structure
- PUBLIC (VLDB'98, Rastogi & Shim)
- integrates tree splitting and tree pruning; stops growing the tree earlier
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)
- separates the scalability aspects from the criteria that determine the quality of the tree
- builds an AVC-list (attribute, value, class label)
66. Data Cube-Based Decision-Tree Induction
- Integration of generalization with decision-tree induction (Kamber et al., 1997)
- Classification at primitive concept levels
- E.g., precise temperature, humidity, outlook, etc.
- Low-level concepts, scattered classes, bushy classification trees
- Semantic interpretation problems
- Cube-based multi-level classification
- Relevance analysis at multiple levels
- Information-gain analysis with dimension level
67. Chapter 7. Classification and Prediction
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian Classification
- Classification by Neural Networks
- Classification by Support Vector Machines (SVM)
- Classification based on concepts from association rule mining
- Other Classification Methods
- Prediction
- Classification accuracy
- Summary
68. Bayesian Classification: Why?
- Probabilistic learning: calculate explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems
- Incremental: each training example can incrementally increase/decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
- Probabilistic prediction: predict multiple hypotheses, weighted by their probabilities
- Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
69. Bayesian Theorem: Basics
- Let X be a data sample whose class label is unknown
- Let H be the hypothesis that X belongs to class C
- For classification problems, determine P(H|X): the probability that the hypothesis holds given the observed data sample X
- P(H): prior probability of hypothesis H (i.e., the initial probability before we observe any data; reflects the background knowledge)
- P(X): probability that the sample data is observed
- P(X|H): probability of observing the sample X, given that the hypothesis holds
70. Bayesian Theorem
- Given training data X, the posterior probability of a hypothesis H follows Bayes theorem: P(H|X) = P(X|H) P(H) / P(X)
- Informally, this can be written as posterior = likelihood × prior / evidence
- MAP (maximum a posteriori) hypothesis: h_MAP = arg max_{h∈H} P(h|D) = arg max_{h∈H} P(D|h) P(h)
- Practical difficulty: requires initial knowledge of many probabilities, significant computational cost
71. Naïve Bayes Classifier
- A simplified assumption: attributes are conditionally independent
- The probability of observing, say, two elements y1 and y2 given the class C is the product of the probabilities of each element taken separately, given the same class: P(y1, y2 | C) = P(y1 | C) P(y2 | C)
- No dependence relation between attributes
- Greatly reduces the computation cost: only count the class distribution
- Once the probability P(X|Ci) is known, assign X to the class with maximum P(X|Ci) P(Ci)
72. Example
- H: X is an apple
- P(H): prior probability that X is an apple
- X: observed data, round and red
- P(H|X): probability that X is an apple given that we observe it is red and round
- P(X|H): probability that a sample is red and round given that it is an apple
- P(X): prior probability that it is red and round
73. Example (cont.)
- P(H|X) = P(H, X)/P(X) = P(X|H) P(H) / P(X)
- calculate P(H|X) from P(X|H), P(H), and P(X)
74. Naïve Bayes Classifier (I)
- A simplified assumption: attributes are conditionally independent
- Greatly reduces the computation cost: only count the class distribution
75. Naive Bayesian Classifier (II)
- Given a training set, we can compute the probabilities
76. Bayesian Classification
- The classification problem may be formalized using a-posteriori probabilities: P(Ci|X) is the probability that the sample tuple X = <x1, ..., xk> is of class Ci
- There are m classes Ci, i = 1 to m
- E.g., P(class = N | outlook = sunny, windy = true, ...)
- Idea: assign to sample X the class label Ci such that P(Ci|X) is maximal, i.e., P(Ci|X) > P(Cj|X) for 1 ≤ j ≤ m, j ≠ i
77. Estimating A-Posteriori Probabilities
- Bayes theorem: P(Ci|X) = P(X|Ci) P(Ci) / P(X)
- P(X) is constant for all classes
- P(Ci) = relative frequency of class Ci samples
- The Ci such that P(Ci|X) is maximum is the Ci such that P(X|Ci) P(Ci) is maximum
- Problem: computing P(X|Ci) directly is unfeasible!
78. Naïve Bayesian Classification
- Naïve assumption: attribute independence
- P(x1, ..., xk | Ci) = P(x1 | Ci) · ... · P(xk | Ci)
- If the i-th attribute is categorical, P(xi | Ci) is estimated as the relative frequency of samples having value xi for the i-th attribute in class Ci: s_ik / s_i
- If the i-th attribute is continuous, P(xi | Ci) is estimated through a Gaussian density function
- Computationally easy in both cases
79. Training Dataset
- Classes: C1 = (buys_computer = "yes"), C2 = (buys_computer = "no")
- Data sample X = (age <30, income = medium, student = yes, credit_rating = fair)
80. Naïve Bayesian Classifier: Example
- Compute P(X|Ci) for each class
- P(age <30 | buys_computer = yes) = 2/9 = 0.222
- P(age <30 | buys_computer = no) = 3/5 = 0.6
- P(income = medium | buys_computer = yes) = 4/9 = 0.444
- P(income = medium | buys_computer = no) = 2/5 = 0.4
- P(student = yes | buys_computer = yes) = 6/9 = 0.667
- P(student = yes | buys_computer = no) = 1/5 = 0.2
- P(credit_rating = fair | buys_computer = yes) = 6/9 = 0.667
- P(credit_rating = fair | buys_computer = no) = 2/5 = 0.4
- X = (age <30, income = medium, student = yes, credit_rating = fair)
- P(X | buys_computer = yes) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
- P(X | buys_computer = no) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
- P(X | buys_computer = yes) P(buys_computer = yes) = 0.028
- P(X | buys_computer = no) P(buys_computer = no) = 0.007
- X belongs to class buys_computer = "yes"
81. Naïve Bayesian Classifier: Comments
- Advantages
- Easy to implement
- Good results obtained in most of the cases
- Disadvantages
- Assumption of class conditional independence, therefore loss of accuracy
- Practically, dependencies exist among variables
- E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
- Dependencies among these cannot be modeled by the Naïve Bayesian Classifier
- How to deal with these dependencies? Bayesian Belief Networks
82. Bayesian Networks
- A Bayesian belief network allows a subset of the variables to be conditionally independent
- A graphical model of causal relationships
- Represents dependency among the variables
- Gives a specification of the joint probability distribution
- Nodes: random variables
- Links: dependency
- Example: X and Y are the parents of Z, and Y is the parent of P; there is no direct dependency between Z and P
- The graph has no loops or cycles
83. Bayesian Belief Network: An Example
Figure: a Bayesian belief network with nodes FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea; FamilyHistory and Smoker are the parents of LungCancer.
The conditional probability table (CPT) for the variable LungCancer shows the conditional probability for each possible combination of its parents' values (FamilyHistory, Smoker):
LC: 0.7, 0.8, 0.5, 0.1
~LC: 0.3, 0.2, 0.5, 0.9
Bayesian Belief Networks
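To illustrate how such a network specifies a joint distribution, here is a minimal sketch that multiplies CPT entries along P(FH) P(S) P(LC | FH, S); the prior probabilities and the column-to-value mapping of the CPT are assumptions, not values from the original figure:

```python
# Joint probability from a tiny belief network: P(FH, S, LC) = P(FH) P(S) P(LC | FH, S)
cpt_fh = {True: 0.1}                                  # assumed prior P(FamilyHistory)
cpt_s  = {True: 0.3}                                  # assumed prior P(Smoker)
cpt_lc = {(True, True): 0.7, (True, False): 0.8,      # assumed P(LungCancer | FH, S)
          (False, True): 0.5, (False, False): 0.1}

def p(prob_true, value):
    # Probability of a boolean variable taking the given value
    return prob_true if value else 1.0 - prob_true

def joint(fh, s, lc):
    return p(cpt_fh[True], fh) * p(cpt_s[True], s) * p(cpt_lc[(fh, s)], lc)

print(round(joint(fh=True, s=True, lc=True), 4))      # 0.1 * 0.3 * 0.7 = 0.021
```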
84. Learning Bayesian Networks
- Several cases
- Given both the network structure and all variables observable: learn only the CPTs
- Network structure known, some hidden variables: method of gradient descent, analogous to neural network learning
- Network structure unknown, all variables observable: search through the model space to reconstruct the graph topology
- Unknown structure, all hidden variables: no good algorithms known for this purpose
- D. Heckerman, Bayesian networks for data mining