3. Classification Methods - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
3. Classification Methods
  • Patterns and Models
  • Regression, NBC
  • k-Nearest Neighbors
  • Decision Trees and Rules
  • Large size data

2
Models and Patterns
  • A model is a global description of data, or an
    abstract representation of a real-world process
  • Estimating parameters of a model
  • Data-driven model building
  • Examples: regression, graphical models (BN), HMM
  • A pattern describes some local aspect of the data
  • Patterns in data matrices
  • Predicates: (age < 40) ∧ (income < 10)
  • Patterns for strings (ASCII characters, DNA
    alphabet)
  • Pattern discovery: rules

3
Performance Measures
  • Generality
  • How many instances are covered
  • Applicability
  • Or is it useful? All husbands are male.
  • Accuracy
  • Is it always correct? If not, how often?
  • Comprehensibility
  • Is it easy to understand? (a subjective measure)

4
Forms of Knowledge
  • Concepts
  • Probabilistic, logical (proposition/predicate),
    functional
  • Rules
  • Taxonomies and Hierarchies
  • Dendrograms, decision trees
  • Clusters
  • Structures and Weights/Probabilities
  • ANN, BN

5
Induction from Data
  • Inferring knowledge from data - generalization
  • Supervised vs. unsupervised learning
  • Some graphical illustrations of learning tasks
    (regression, classification, clustering)
  • Any other types of learning?
  • The task of deduction
  • infer information that is a logical consequence
    of the facts in a database
  • Who conducted this class before?
  • Which courses are attended by Mary?
  • Deductive databases extending the RDBMS

6
What is a bad classifier?
  • Some simplest classifiers
  • Table-Lookup
  • What if x cannot be found in the training data?
  • We give up!?
  • Or, we can do better
  • A simple classifier Cs can be built as a
    reference
  • If x can be found in the table (training data),
    return its class; otherwise, what should it
    return?
  • A bad classifier is one that does worse than Cs.
  • Do we need to learn a classifier for data of one
    class?
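As a concrete illustration, the reference classifier Cs above can be sketched in Python. This is a minimal sketch; the function name, the majority-class fallback for unseen instances, and the toy data are my own choices, not from the slides.

```python
from collections import Counter

def baseline_classifier(train):
    """Table-lookup classifier Cs: return the stored class for a known x;
    fall back to the overall majority class for unseen instances."""
    table = {}  # maps instance -> list of observed class labels
    for x, y in train:
        table.setdefault(x, []).append(y)
    majority = Counter(y for _, y in train).most_common(1)[0][0]

    def classify(x):
        if x in table:  # found in the training data: its most common class
            return Counter(table[x]).most_common(1)[0][0]
        return majority  # not found: one sensible answer is the majority class
    return classify

train = [((0, 0), "yes"), ((0, 1), "no"), ((0, 0), "yes"), ((1, 1), "yes")]
cs = baseline_classifier(train)
print(cs((0, 1)))   # known instance: returns its stored class
print(cs((1, 0)))   # unseen instance: returns the majority class
```

Any learned classifier should at least beat this lookup-plus-majority baseline to be worth its cost.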

7
Many Techniques
  • Decision trees
  • Linear regression
  • Neural networks
  • k-nearest neighbour
  • Naïve Bayesian classifiers
  • Support Vector Machines
  • and many more ...

8
Regression for Numeric Prediction
  • Linear regression is a statistical technique used
    when the class and all the attributes are numeric.
  • y = a + ßx, where a and ß are regression
    coefficients
  • We need to use instances <xi, yi> to find a and ß
  • by minimizing SSE (least squares)
  • SSE = Σ(yi − ŷi)² = Σ(yi − a − ßxi)²
  • Extensions
  • Multiple regression
  • Piecewise linear regression
  • Polynomial regression
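For the single-attribute case, minimizing SSE has a closed-form solution. A minimal sketch (the function name is mine; the sample data is chosen to lie exactly on a line):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x, minimizing SSE = sum (y_i - a - b*x_i)^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Closed form: b = cov(x, y) / var(x), a = mean(y) - b * mean(x)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lies exactly on y = 1 + 2x
print(a, b)
```

Multiple, piecewise, and polynomial regression generalize the same SSE-minimization idea to more terms.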

9
Nearest Neighbor
  • Also called instance based learning
  • Algorithm
  • Given a new instance x,
  • find its nearest neighbor <x', y'>
  • Return y' as the class of x
  • Distance measures
  • Normalization?!
  • Some interesting questions
  • What's its time complexity?
  • Does it learn?
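The 1-NN algorithm above fits in a few lines of Python (a sketch; the function name and toy data are mine). Note that each query scans all of the training data, which answers the time-complexity question: O(n) per instance with no training phase at all.

```python
def nearest_neighbor(train, x):
    """1-NN: return the class of the training instance closest to x
    (squared Euclidean distance; no training step, just storage)."""
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    _, y = min(train, key=lambda inst: dist2(inst[0], x))
    return y

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.0, 1.0), "A")]
print(nearest_neighbor(train, (0.1, 0.2)))   # closest point is (0, 0) -> "A"
```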

10
Nearest Neighbor (2)
  • Dealing with noise: k-nearest neighbors
  • Use more than one neighbor
  • How many neighbors?
  • Weighted nearest neighbors
  • How to speed up?
  • Huge storage
  • Use representatives (a problem of instance
    selection)
  • Sampling
  • Grid
  • Clustering
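The k-neighbor and weighted variants above can be sketched as follows. This is an illustrative implementation, not from the slides: the 1/(distance + ε) weighting is one common choice among several, and the names and toy data are mine.

```python
from collections import defaultdict
import heapq

def knn(train, x, k=3, weighted=False):
    """k-NN: vote among the k closest training instances; optionally weight
    each vote by 1/(distance + eps) so closer neighbors count more."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    neighbors = heapq.nsmallest(k, train, key=lambda inst: dist(inst[0], x))
    votes = defaultdict(float)
    for xi, yi in neighbors:
        votes[yi] += 1.0 / (dist(xi, x) + 1e-9) if weighted else 1.0
    return max(votes, key=votes.get)

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 1), "B"), ((2, 2), "B"), ((2, 1), "B")]
print(knn(train, (0.4, 0.5), k=3))   # two "A" neighbors outvote one "B"
```

The speed-up ideas on this slide (representatives, sampling, grids, clustering) all reduce the size of `train` before this search runs.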

11
Naïve Bayes Classification
  • This is a direct application of Bayes' rule
  • P(C|x) = P(x|C)P(C)/P(x)
  • x - a vector of x1, x2, ..., xn
  • That's the best classifier you can ever build
  • You don't even need to select features; it takes
    care of that automatically
  • But, there are problems
  • There are a limited number of instances
  • How to estimate P(x|C)?

12
NBC (2)
  • Assume conditional independence between the xi's
  • We have P(C|x) ∝ P(x1|C) ... P(xi|C) ... P(xn|C)P(C)
  • How good is it in reality?
  • Let's build one NBC for a very simple data set
  • Estimate the priors and conditional probabilities
    with the training data
  • P(C=1) = ?  P(C=2) = ?
  • What is the class for (1,2,1)?
  • P(1|x) ∝ P(1|1)P(2|1)P(1|1)P(1),  P(2|x) ∝ ?
  • What is the class for (1,2,2)?

13
Example of NBC
Value counts per class (N = 7; N(C=1) = 4, N(C=2) = 3):

        C=1  C=2
A1=0     2    0
A1=1     2    1
A1=2     0    2
A2=0     2    0
A2=1     1    1
A2=2     1    2
A3=1     3    1
A3=2     1    2

Training data:
A1 A2 A3  C
 1  2  1  1
 0  0  1  1
 2  1  2  2
 1  2  1  2
 0  1  2  1
 2  2  2  2
 1  0  1  1
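The NBC questions on the previous slide can be answered by estimating every probability from the counts in the training data above. A minimal sketch (function name is mine; note it does no smoothing, so a zero count zeroes out a whole class score, which is one of the "limited number of instances" problems mentioned earlier):

```python
from collections import Counter

data = [  # rows of (A1, A2, A3, C) from the table above
    (1, 2, 1, 1), (0, 0, 1, 1), (2, 1, 2, 2), (1, 2, 1, 2),
    (0, 1, 2, 1), (2, 2, 2, 2), (1, 0, 1, 1),
]

def nbc_predict(data, x):
    """Score each class by P(C) * prod_i P(x_i | C), both estimated by counting."""
    classes = Counter(row[-1] for row in data)
    n = len(data)
    best, best_score = None, -1.0
    for c, nc in classes.items():
        score = nc / n                                # prior P(C)
        for i, v in enumerate(x):                     # conditionals P(x_i | C)
            score *= sum(1 for row in data if row[-1] == c and row[i] == v) / nc
        if score > best_score:
            best, best_score = c, score
    return best

print(nbc_predict(data, (1, 2, 1)))  # (4/7)(2/4)(1/4)(3/4) for class 1 wins
print(nbc_predict(data, (1, 2, 2)))  # here class 2 wins
```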
14
Golf Data
15
Decision Trees
  • A decision tree

            Outlook
          /    |    \
     sunny overcast  rain
       |       |       |
   Humidity   YES    Wind
    /    \           /   \
  high  normal  strong   weak
   |      |       |        |
   NO    YES      NO      YES
16
How to grow a tree?
  • Randomly? → Random Forests (Breiman, 2001)
  • What are the criteria to build a tree?
  • Accurate
  • Compact
  • A straightforward way to grow is
  • Pick an attribute
  • Split data according to its values
  • Recursively do the first two steps until
  • No data left
  • No feature left
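The straightforward recursive procedure above can be sketched as follows. This is a naive version that simply takes attributes in the order given (choosing the attribute well is the whole question of the following slides); the names and toy weather data are mine.

```python
from collections import Counter

def grow_tree(rows, features):
    """Recursively split rows (dicts with a 'class' key) on the listed
    features, stopping when no data or no feature is left, or node is pure."""
    if not rows:
        return None                          # no data left for this branch
    classes = Counter(r["class"] for r in rows)
    if len(classes) == 1 or not features:
        return classes.most_common(1)[0][0]  # leaf: pure (or majority) class
    f, rest = features[0], features[1:]      # naive choice: first attribute
    return {f: {v: grow_tree([r for r in rows if r[f] == v], rest)
                for v in sorted({r[f] for r in rows}, key=str)}}

rows = [
    {"outlook": "sunny", "windy": False, "class": "no"},
    {"outlook": "sunny", "windy": True, "class": "no"},
    {"outlook": "overcast", "windy": False, "class": "yes"},
    {"outlook": "rain", "windy": False, "class": "yes"},
    {"outlook": "rain", "windy": True, "class": "no"},
]
print(grow_tree(rows, ["outlook", "windy"]))
```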

17
Discussion
  • There are many possible trees
  • let's try it on the golf data
  • How to find the most compact one
  • that is consistent with the data?
  • Why the most compact?
  • Occam's razor principle
  • Issue of efficiency w.r.t. optimality
  • One attribute at a time, or ...

18
Grow a good tree efficiently
  • The heuristic: find commonality in feature
    values associated with class values
  • To build a compact tree generalized from the data
  • That is, for each feature, we check which splits
    can lead to pure leaf nodes.
  • Is it a good heuristic?
  • What do you think?
  • How to judge it?
  • Is it really efficient?
  • How to implement it?

19
Let's grow one
  • Measuring the purity of a data set: entropy
  • Information gain (see the brief review)
  • Choose the feature with max gain
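Entropy and information gain can be computed directly from class counts. A minimal sketch (names and the small outlook example are mine, not the full golf data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H = -sum p * log2(p) over the class proportions in labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    """Gain = H(labels) minus the size-weighted entropy of each subset
    obtained by splitting on the attribute's values."""
    n = len(labels)
    remainder = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

labels  = ["no", "no", "yes", "yes", "yes", "no"]
outlook = ["sunny", "sunny", "overcast", "rain", "rain", "rain"]
print(round(info_gain(outlook, labels), 3))
```

Growing the tree then means computing this gain for every candidate feature at every node and splitting on the maximum.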

20
Different numbers of values
  • Different attributes can have varied numbers of
    values
  • Some treatments
  • Removing useless attributes before learning
  • Binarization
  • Discretization
  • Gain-ratio is another practical solution
  • Gain = Info(root) − Info(Attribute i)
  • Split-Info = −Σ((|Ti|/|T|) log2(|Ti|/|T|))
  • Gain-ratio = Gain / Split-Info
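The gain-ratio formulas above can be sketched as follows; note that Split-Info is just the entropy of the attribute's own value distribution, which is what penalizes many-valued attributes. Names and the toy example are mine.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H = -sum p * log2(p) over the proportions in labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Gain-ratio = Gain / Split-Info; Split-Info = -sum (|Ti|/|T|) log2(|Ti|/|T|)."""
    n = len(labels)
    remainder = 0.0
    for v in set(values):
        sub = [l for x, l in zip(values, labels) if x == v]
        remainder += len(sub) / n * entropy(sub)
    gain = entropy(labels) - remainder
    split_info = entropy(values)   # entropy of the attribute's value distribution
    return gain / split_info

labels  = ["no", "no", "yes", "yes", "yes", "no"]
outlook = ["sunny", "sunny", "overcast", "rain", "rain", "rain"]
print(round(gain_ratio(outlook, labels), 3))
```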

21
Another kind of problems
  • A difficult problem. Why is it difficult?
  • Similar ones are Parity, Majority problems.

XOR problem:
  x1 x2 | class
   0  0 |   0
   0  1 |   1
   1  0 |   1
   1  1 |   0
22
Tree Pruning
  • An effective approach to avoid overfitting data
    and for a more compact tree (easy to understand)
  • Two general ways to prune
  • Pre-pruning: stop splitting further if there is
    no significant difference in classification
    accuracy before and after the division
  • Post-pruning: trim back a fully grown tree

23
Rules from Decision Trees
  • Two types of rules
  • Order sensitive (more compact, less efficient)
  • Order insensitive
  • The most straightforward way is
  • Class-based method
  • Group rules according to classes
  • Select most general rules (or remove redundant
    ones)
  • Data-based method
  • Select one rule at a time (keep the most general
    one)
  • Work on the remaining data until all data is
    covered

24
Variants of Decision Trees and Rules
  • Tree stumps
  • Holte's 1R rules (1993)
  • For each attribute
  • Sort according to its values v
  • Find the most frequent class value c for each v
  • Break ties by coin flipping
  • Output the most accurate rule as "if v then c"
  • An example (the Golf data)
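The 1R procedure above can be sketched in a few lines (names and toy data are mine; for simplicity this version breaks ties by insertion order rather than a coin flip, and scores rules by training errors, i.e. the inverse of accuracy):

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """Holte's 1R: map each value of each attribute to its most frequent
    class; return the single-attribute rule with the fewest training errors."""
    best = None                                 # (errors, attribute index, rule)
    for i in range(len(rows[0])):
        counts = defaultdict(Counter)           # value -> class counts
        for row, label in zip(rows, labels):
            counts[row[i]][label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(sum(c.values()) - c.most_common(1)[0][1]
                     for c in counts.values())
        if best is None or errors < best[0]:
            best = (errors, i, rule)
    return best

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
print(one_r(rows, labels))   # attribute 0 (outlook) classifies perfectly here
```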

25
Handling Large Size Data
  • When data simply cannot fit in memory
  • Is it a big problem?
  • Three representative approaches
  • Smart data structures to avoid unnecessary
    recalculation
  • Hash trees
  • SPRINT
  • Sufficient statistics
  • AVC-set (Attribute-Value, Class label) to
    summarize the class distribution for each
    attribute
  • Example: RainForest
  • Parallel processing
  • Make data parallelizable

26
Ensemble Methods
  • A group of classifiers
  • Hybrid (Stacking)
  • Single type
  • Strong vs. weak learners
  • A good ensemble
  • Accuracy
  • Diversity
  • Some major approaches to form ensembles
  • Bagging
  • Boosting
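Of the two approaches above, bagging is the simpler to sketch: train each member of a single-type ensemble on a bootstrap sample and vote. The sketch below uses 1-NN as the base learner purely for illustration (that choice, the names, and the toy data are mine; the bootstrap resampling supplies the diversity a good ensemble needs).

```python
import random
from collections import Counter

def one_nn(train):
    """Base learner: a 1-NN classifier over its own training sample."""
    def classify(x):
        return min(train, key=lambda inst:
                   sum((a - b) ** 2 for a, b in zip(inst[0], x)))[1]
    return classify

def bagging_predict(train, x, base_learner, n_models=11, seed=0):
    """Bagging: fit each model on a bootstrap sample (drawn with replacement,
    same size as the data) and combine predictions by majority vote."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]   # bootstrap sample
        votes[base_learner(sample)(x)] += 1
    return votes.most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"), ((2, 2), "B"), ((2, 1), "B")]
print(bagging_predict(train, (0.2, 0.3), one_nn))
```

Boosting differs in that samples are weighted toward previously misclassified instances rather than drawn uniformly.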

27
Bibliography
  • I.H. Witten and E. Frank. Data Mining: Practical
    Machine Learning Tools and Techniques with Java
    Implementations. Morgan Kaufmann, 2000.
  • M. Kantardzic. Data Mining: Concepts, Models,
    Methods, and Algorithms. IEEE Press, 2003.
  • J. Han and M. Kamber. Data Mining: Concepts and
    Techniques. Morgan Kaufmann, 2001.
  • D. Hand, H. Mannila, and P. Smyth. Principles of
    Data Mining. MIT Press, 2001.
  • T.G. Dietterich. Ensemble Methods in Machine
    Learning. In J. Kittler and F. Roli (eds.), 1st
    Intl. Workshop on Multiple Classifier Systems,
    pp. 1-15. Springer-Verlag, 2000.