CS276A Text Retrieval and Mining

Transcript and Presenter's Notes
1
CS276A Text Retrieval and Mining
  • Lecture 18

2
Recap of the last lectures
  • Introduction to Text Classification
  • Machine Learning Algorithms for text
    classification
  • Naïve Bayes
  • Simple, cheap; a linear classifier; quite effective
  • K Nearest Neighbor classification
  • Simple; expensive at test time; high variance;
    non-linear
  • Rocchio vector space classification (centroids)
  • Simple; a linear classifier; too simple
  • Decision Trees
  • Pick out hyperboxes; nonlinear; use just a few
    features
  • Support Vector Machines
  • Currently hip; linear or nonlinear (kernelized);
    effective at handling high-dimensional spaces;
    very effective

3
Today's Topic
  • Text-specific issues in classification
  • What kinds of features help or work well?
  • Stemming and weighting?
  • What do different evaluation methods show?
  • Also
  • Course evaluation forms
  • A little bit about CS276B next quarter
  • And don't forget
  • Exam review session on Friday
  • Practical exercise 2 due yesterday!

4
The Real World
  • P. Jackson and I. Moulinier Natural Language
    Processing for Online Applications
  • "There is no question concerning the commercial
    value of being able to classify documents
    automatically by content. There are myriad
    potential applications of such a capability for
    corporate Intranets, government departments, and
    Internet publishers."
  • "Understanding the data is one of the keys to
    successful categorization, yet this is an area in
    which most categorization tool vendors are
    extremely weak. Many of the 'one size fits all'
    tools on the market have not been tested on a
    wide range of content types."

5
The Real World
  • Gee, I'm building a text classifier for real
    now!
  • What should I do?
  • How much training data do you have?
  • None
  • Very little
  • Quite a lot
  • A huge amount, and it's growing

6
Manually written rules
  • No training data, adequate editorial staff?
  • Never forget the hand-written rules solution!
  • If (wheat or grain) and not (whole or bread)
    then categorize as grain (see the code sketch
    after this slide)
  • In practice, rules get a lot bigger than this
  • Can also be phrased using tf or tf.idf weights
  • With careful crafting (human tuning on
    development data) performance is high
  • Construe: 94% recall, 84% precision over 675
    categories (Hayes and Weinstein 1990)
  • Amount of work required is huge
  • Estimate: 2 days per class, plus maintenance
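A minimal sketch of how the hand-written wheat/grain rule above might look in code (the rule text is from the slide; the function name and the crude whitespace tokenization are illustrative assumptions):

```python
def classify_grain(doc: str) -> bool:
    """Hand-written rule: (wheat or grain) and not (whole or bread)."""
    tokens = set(doc.lower().split())      # crude whitespace tokenization
    triggers = {"wheat", "grain"} & tokens
    blockers = {"whole", "bread"} & tokens
    return bool(triggers) and not blockers

print(classify_grain("Wheat exports rose sharply this year"))  # True
print(classify_grain("A whole wheat bread recipe"))            # False
```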

7
Very little data?
  • If you're just doing supervised classification,
    you should stick to something with high bias
  • There are theoretical results that Naïve Bayes
    should do well in such circumstances (Ng and
    Jordan, NIPS 2002)
  • The interesting theoretical answer is to explore
    semi-supervised training methods
  • Bootstrapping, EM over unlabeled documents, …
  • The practical answer is to get more labeled data
    as soon as you can
  • How can you insert yourself into a process where
    humans will be willing to label data for you?

8
A reasonable amount of data?
  • Perfect!
  • We can use all our clever classifiers
  • Roll out the SVM!
  • But if you are using an SVM, NB, etc., you should
    probably be prepared with the hybrid solution
    where there is a Boolean overlay
  • Or else use user-interpretable Boolean-like
    models such as decision trees
  • Users like to hack, and management likes to be
    able to implement quick fixes immediately

9
A huge amount of data?
  • This is great in theory for doing accurate
    classification
  • But it could easily mean that expensive methods
    like SVMs (train time) or kNN (test time) are
    quite impractical
  • Naïve Bayes can come back into its own again!
  • Or other advanced methods with linear
    training/test complexity like regularized
    logistic regression (though much more expensive
    to train)

10
A huge amount of data?
  • With enough data the choice of classifier may not
    matter much, and the best choice may be unclear
  • Data: Brill and Banko on context-sensitive
    spelling correction
  • But the fact that you have to keep doubling your
    data to improve performance is a little unpleasant

11
How many categories?
  • A few (well separated ones)?
  • Easy!
  • A zillion closely related ones?
  • Think Yahoo! Directory, Library of Congress
    classification, legal applications
  • Quickly gets difficult!
  • Classifier combination is always a useful
    technique
  • Voting, bagging, or boosting multiple classifiers
  • Much literature on hierarchical classification
  • Mileage is fairly unclear
  • May need a hybrid automatic/manual solution

12
How can one tweak performance?
  • Aim to exploit any domain-specific useful
    features that give special meanings or that zone
    the data
  • E.g., an author byline or mail headers
  • Aim to collapse things that would be treated as
    different but shouldn't be.
  • E.g., part numbers, chemical formulas

13
Does putting in hacks help?
  • You bet!
  • You can get a lot of value by differentially
    weighting contributions from different document
    zones
  • Upweighting title words helps (Cohen & Singer
    1996)
  • Doubling the weighting on title words is a
    good rule of thumb
  • Upweighting the first sentence of each paragraph
    helps (Murata, 1999)
  • Upweighting sentences that contain title words
    helps (Ko et al. 2002)

14
Two techniques for zones
  • Have a completely separate set of
    features/parameters for different zones like the
    title
  • Use the same features (pooling/tying their
    parameters) across zones, but upweight the
    contribution of different zones
  • Commonly the second method is more successful: it
    costs you nothing in terms of sparsifying the
    data, but can give a very useful performance
    boost (see the sketch after this slide)
  • Which is best is a contingent fact about the data
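A minimal sketch of the second technique, pooling title and body into one feature space while upweighting the title zone (assumes bag-of-words counts and the doubled-title rule of thumb from the previous slide; names are illustrative):

```python
from collections import Counter

def zone_weighted_counts(title: str, body: str,
                         title_weight: float = 2.0) -> Counter:
    """Same features across zones, but title terms get extra weight."""
    counts = Counter(body.lower().split())
    for term in title.lower().split():
        counts[term] += title_weight  # pooled feature, upweighted contribution
    return counts

doc = zone_weighted_counts("Jaguar price review",
                           "The jaguar is a big cat sold at a high price")
print(doc["jaguar"])  # 1 (body) + 2.0 (title) = 3.0
```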

15
Text Summarization techniques in text
classification
  • Text summarization: the process of extracting key
    pieces from text, normally via features on
    sentences reflecting position and content
  • Much of this work can be used to suggest
    weightings for terms in text categorization
  • See Kolcz, Prabakarmurthi, and Kalita (CIKM
    2001), "Summarization as feature selection for
    text categorization"
  • Categorizing purely with the title
  • Categorizing with the first paragraph only
  • Categorizing with the paragraph with the most
    keywords
  • Categorizing with first and last paragraphs, etc.

16
Feature Selection Why?
  • Text collections have a large number of features
  • 10,000 to 1,000,000+ unique words
  • Make using a particular classifier feasible
  • Some classifiers can't deal with hundreds of
    thousands of features
  • Reduce training time
  • Training time for some methods is quadratic or
    worse in the number of features
  • Improve generalization
  • Eliminate noise features
  • Avoid overfitting

17
Recap Feature Reduction
  • Standard ways of reducing feature space for text
  • Stemming
  • laugh, laughs, laughing, laughed → laugh
  • Stop word removal
  • E.g., eliminate all prepositions
  • Sometimes it seems like just nouns have most of
    the information useful for classification
  • Conversion to lower case
  • Tokenization
  • Break on all special characters: fire-fighter →
    fire, fighter

18
Does stemming/lowercasing/etc. help?
  • As always it's hard to tell, and empirical
    evaluation is normally the gold standard
  • But note that the role of tools like stemming is
    rather different for TextCat vs. IR
  • For IR, you often want to collapse forms like
    "oxygenate" and "oxygenation", since all of
    those documents will be relevant to a query for
    "oxygenation"
  • For TextCat, with sufficient training data,
    stemming does no good. It only helps in
    compensating for data sparseness (which can be
    severe in TextCat applications). Overly
    aggressive stemming can easily degrade
    performance.

19
Feature Selection
  • Yang and Pedersen 1997
  • Comparison of different selection criteria
  • DF: document frequency
  • IG: information gain
  • MI: mutual information (pointwise)
  • CHI: chi-square (χ²)
  • Common strategy
  • Compute statistic for each term
  • Keep the n terms with the highest value of this
    statistic (see the sketch after this slide)
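A minimal sketch of this strategy (the scores dict maps each term to its DF, IG, MI, or CHI value; the names and example numbers are illustrative):

```python
def select_top_features(scores: dict, n: int) -> set:
    """Keep the n terms with the highest value of the chosen statistic."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:n])

chi_scores = {"jaguar": 12.9, "auto": 7.3, "the": 0.01, "of": 0.02}
print(select_top_features(chi_scores, 2))  # {'jaguar', 'auto'}
```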

20
χ² statistic (CHI)
  • χ² is interested in (f_o − f_e)²/f_e summed over
    all table entries
  • The null hypothesis is rejected with confidence
    .999,
  • since 12.9 > 10.83 (the value for .999
    confidence).

                  Term = jaguar   Term ≠ jaguar
Class = auto        2 (0.25)        500 (502)
Class ≠ auto        3 (4.75)       9500 (9498)
(observed counts f_o, with expected counts f_e in
parentheses)
21
χ² statistic (CHI)
There is a simpler formula for χ²:

χ²(t, c) = N(AD − CB)² / ((A+C)(B+D)(A+B)(C+D))

where A = #(t, c), B = #(t, ¬c), C = #(¬t, c),
D = #(¬t, ¬c), and N = A + B + C + D.
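A minimal sketch of this formula in code, checked against the jaguar/auto table on the previous slide (variable names follow the slide's A, B, C, D):

```python
def chi_square(A: int, B: int, C: int, D: int) -> float:
    """chi^2(t, c) = N(AD - CB)^2 / ((A+C)(B+D)(A+B)(C+D))."""
    N = A + B + C + D
    return N * (A * D - C * B) ** 2 / (
        (A + C) * (B + D) * (A + B) * (C + D))

# jaguar/auto table: A = 2, B = 3, C = 500, D = 9500
print(chi_square(2, 3, 500, 9500))  # ~12.9, which exceeds 10.83
```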
22
Yang & Pedersen Experiments
  • Two classification methods
  • kNN (k nearest neighbors)
  • Linear Least Squares Fit (LLSF)
  • A regression method
  • Collections
  • Reuters-22173
  • 92 categories
  • 16,000 unique terms
  • Ohsumed (a subset of Medline)
  • 14,000 categories
  • 72,000 unique terms
  • ltc term weighting (normalized log tf.idf)

23
(Figure slide; no transcript available)
24
IG, DF, CHI Are Correlated.
25
Discussion
  • You can eliminate 90% of features for IG, DF, and
    CHI without decreasing performance.
  • In fact, performance increases with fewer
    features for IG, DF, and CHI.
  • Mutual information is very sensitive to small
    counts.
  • IG does best with smallest number of features.
  • Document frequency is close to optimal. By far
    the simplest feature selection method!
  • Similar results for LLSF (regression).

26
Feature Selection: Other Considerations
  • Generic vs. class-specific feature selection
  • Completely generic (class-independent)
  • Separate feature set for each class
  • Mixed (à la Yang & Pedersen)
  • Maintainability over time
  • Is aggressive feature selection good or bad for
    robustness over time?
  • Ideal: optimal features selected as part of
    training, not as a preprocessing step

27
Measuring Classification: Figures of Merit
  • Accuracy of classification
  • Main evaluation criterion in academia
  • More in a moment
  • Speed of training statistical classifier
  • Some methods are very cheap; some very costly
  • Speed of classification (docs/hour)
  • No big differences for most algorithms
  • Exceptions: kNN, complex preprocessing
    requirements
  • Effort in creating training set/hand-built
    classifier
  • Human hours per topic

28
Measuring Classification: Figures of Merit
  • In the real world, economic measures
  • Your choices are
  • Do no classification
  • That has a cost (hard to compute)
  • Do it all manually
  • Has an easy-to-compute cost if you're doing it
    that way now
  • Do it all with an automatic classifier
  • Mistakes have a cost
  • Do it with a combination of automatic
    classification and manual review of
    uncertain/difficult/new cases
  • Commonly the last method is most cost efficient
    and is adopted

29
Concept Drift
  • Categories change over time
  • Example: "president of the united states"
  • 1999: "clinton" is a great feature
  • 2002: "clinton" is a bad feature
  • One measure of a text classification system is
    how well it protects against concept drift.
  • Feature selection can be bad in protecting
    against concept drift

30
Measures of Accuracy
  • Evaluation must be done on test data that is
    independent of the training data
  • Overall error rate
  • Not a good measure for small classes. Why?
  • Precision/recall for classification decisions
  • F1 measure: 1/F1 = ½ (1/P + 1/R)
  • Correct estimate of size of category
  • Why is this different?
  • Stability over time / category drift
  • Utility
  • Costs of false positives / false negatives may be
    different
  • For example, utility = tp − 0.5·fp (see the
    sketch after this slide)
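A minimal sketch computing these figures of merit from raw decision counts (the 0.5 false-positive weight mirrors the example utility above; the numbers in the demo call are illustrative):

```python
def figures_of_merit(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, F1 (harmonic mean), and the example utility."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)   # equivalent to 1/F1 = (1/P + 1/R) / 2
    utility = tp - 0.5 * fp    # example cost-sensitive measure
    return {"P": p, "R": r, "F1": f1, "utility": utility}

print(figures_of_merit(tp=90, fp=10, fn=10))
# {'P': 0.9, 'R': 0.9, 'F1': 0.9, 'utility': 85.0}
```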

31
Good practice department: Confusion matrix
(Matrix figure: rows are the actual class, columns
are the class assigned by the classifier.)
An example (i, j) entry of 53 means that 53 of the
docs actually in class i were put in class j by the
classifier.
  • In a perfect classification, only the diagonal
    has non-zero entries
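A minimal sketch of building such a matrix from parallel lists of actual and predicted labels (names and example labels are illustrative):

```python
def confusion_matrix(actual, predicted, classes):
    """matrix[i][j] = docs actually in classes[i] assigned to classes[j]."""
    index = {c: k for k, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for a, p in zip(actual, predicted):
        matrix[index[a]][index[p]] += 1
    return matrix

m = confusion_matrix(["auto", "grain", "auto"],
                     ["auto", "auto", "auto"],
                     ["auto", "grain"])
print(m)  # [[2, 0], [1, 0]] -- off-diagonal entries are misclassifications
```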

32
Good practice department: N-Fold Cross-Validation
  • Results can vary based on sampling error due to
    different training and test sets.
  • Average results over multiple training and test
    sets (splits of the overall data) for the best
    results.
  • Ideally, test and training sets are independent
    on each trial.
  • But this would require too much labeled data.
  • Partition data into N equal-sized disjoint
    segments.
  • Run N trials, each time using a different segment
    of the data for testing, and training on the
    remaining N − 1 segments.
  • This way, at least test-sets are independent.
  • Report average classification accuracy over the N
    trials.
  • Typically, N = 10 (a code sketch follows).
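A minimal sketch of the partition-and-average loop (train_fn and accuracy_fn are hypothetical callables supplied by the caller; a real run would shuffle the data first):

```python
def n_fold_cross_validation(data, labels, train_fn, accuracy_fn, n=10):
    """Average test accuracy over n trials with disjoint test segments."""
    fold = len(data) // n
    accuracies = []
    for i in range(n):
        lo, hi = i * fold, (i + 1) * fold
        test_x, test_y = data[lo:hi], labels[lo:hi]
        train_x = data[:lo] + data[hi:]   # the remaining N-1 segments
        train_y = labels[:lo] + labels[hi:]
        model = train_fn(train_x, train_y)
        accuracies.append(accuracy_fn(model, test_x, test_y))
    return sum(accuracies) / n            # report the average accuracy
```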

33
Good practice department: Learning Curves
  • In practice, labeled data is usually rare and
    expensive.
  • Would like to know how performance varies with
    the number of training instances.
  • Learning curves plot classification accuracy on
    independent test data (Y axis) versus number of
    training examples (X axis).
  • One can do both of the above and produce learning
    curves averaged over multiple trials from
    cross-validation

34
Micro- vs. Macro-Averaging
  • If we have more than one class, how do we combine
    multiple performance measures into one quantity?
  • Macroaveraging Compute performance for each
    class, then average.
  • Microaveraging Collect decisions for all
    classes, compute contingency table, evaluate.

35
Micro- vs. Macro-Averaging: Example

Class 1:
                  Truth: yes   Truth: no
Classifier: yes       10           10
Classifier: no        10          970

Class 2:
                  Truth: yes   Truth: no
Classifier: yes       90           10
Classifier: no        10          890

Micro-av. table (pooled over both classes):
                  Truth: yes   Truth: no
Classifier: yes      100           20
Classifier: no        20         1860

  • Macroaveraged precision: (0.5 + 0.9)/2 = 0.7
  • Microaveraged precision: 100/120 ≈ 0.83
  • Why this difference? (See the sketch below.)
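A minimal sketch reproducing the numbers above from per-class (tp, fp) counts:

```python
def macro_precision(per_class):
    """Compute precision for each class, then average the precisions."""
    return sum(tp / (tp + fp) for tp, fp in per_class) / len(per_class)

def micro_precision(per_class):
    """Pool decisions across classes, then compute one overall precision."""
    tp = sum(t for t, _ in per_class)
    fp = sum(f for _, f in per_class)
    return tp / (tp + fp)

tables = [(10, 10), (90, 10)]   # (tp, fp) for Class 1 and Class 2 above
print(macro_precision(tables))  # (0.5 + 0.9) / 2 = 0.7
print(micro_precision(tables))  # 100 / 120 = 0.833...
```

The difference arises because microaveraging is dominated by the large class, while macroaveraging gives every class equal weight.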

36
Yang & Liu: SVM vs. Other Methods
37
CS276B: Web Search and Mining
38
CS276B
  • Web and related technologies
  • Web search
  • Link analysis
  • XML
  • Collaborative Filtering

39
Evolution of search engines
  • First generation (1995-1997: AV, Excite, Lycos,
    etc.): use only on-page, text data
  • Word frequency, language
  • Second generation (from 1998; made popular by
    Google, but everyone does it now): use off-page,
    web-specific data
  • Link (or connectivity) analysis
  • Click-through data (what results people click on)
  • Anchor text (how people refer to this page)
  • Third generation (still experimental): answer
    the need behind the query
  • Semantic analysis: what is this about?
  • Focus on user need, rather than on the query
  • Context determination
  • Helping the user
  • Integration of search and text analysis
40
Anchor text (first used by the WWW Worm; McBryan
[Mcbr94])
Example anchors pointing at one tiger page: "Tiger
image", "Here is a great picture of a tiger", "Cool
tiger webpage"
  • The text in the vicinity of a hyperlink is
    descriptive of the page it points to.
41
Search Engine Optimization II: Tutorial on Cloaking
and Stealth Technology
42
Personalization
43
Recap: XQuery (Møller and Schwartzbach)
44
Collaborative filtering
  • Corporate Intranets
  • Recommendation, finding domain experts, …
  • E-commerce
  • Product recommendations (e.g., Amazon)
  • Medical applications
  • Matching patients to doctors, clinical trials, …
  • Customer Relationship Management
  • Matching customer problems to internal experts in
    a support organization

45
Text Mining Tools
(Figure: a text mining tool that returns key terms)
46
Chance to do a major project
  • Something that can exploit things that we've
    looked at this quarter
  • Or things that we move to next quarter
  • A great learning context in which to investigate
    IR, classification and clustering, information
    extraction, link analysis, various forms of
    text-mining, textbase visualization, and
    collaborative filtering