Title: CS590D: Data Mining, Prof. Chris Clifton
1. CS590D: Data Mining, Prof. Chris Clifton
- January 24, 2006
- Association Rules
2. Association Rule Mining
- Given a set of transactions, find rules that will predict the
  occurrence of an item based on the occurrences of other items in the
  transaction
Market-Basket transactions
Examples of association rules:
  {Diaper} → {Beer}
  {Milk, Bread} → {Eggs, Coke}
  {Beer, Bread} → {Milk}
Implication means co-occurrence, not causality!
3. Mining Association Rules in Large Databases
- Association rule mining
- Algorithms for scalable mining of (single-dimensional Boolean)
  association rules in transactional databases
- Mining various kinds of association/correlation rules
- Constraint-based association mining
- Sequential pattern mining
- Applications/extensions of frequent pattern mining
- Summary
4. What Is Association Mining?
- Association rule mining
  - Finding frequent patterns, associations, correlations, or causal
    structures among sets of items or objects in transaction
    databases, relational databases, and other information
    repositories
- Frequent pattern: a pattern (set of items, sequence, etc.) that
  occurs frequently in a database [AIS93]
- Motivation: finding regularities in data
  - What products were often purchased together? Beer and diapers?!
  - What are the subsequent purchases after buying a PC?
  - What kinds of DNA are sensitive to this new drug?
  - Can we automatically classify web documents?
5. Why Is Association Mining Important?
- Foundation for many essential data mining tasks
  - Association, correlation, causality
  - Sequential patterns, temporal or cyclic association, partial
    periodicity, spatial and multimedia association
  - Associative classification, cluster analysis, iceberg cube,
    fascicles (semantic data compression)
- Broad applications
  - Basket data analysis, cross-marketing, catalog design, sale
    campaign analysis
  - Web log (click stream) analysis, DNA sequence analysis, etc.
6. Basic Concepts: Association Rules
Transaction-id  Items bought
10              A, B, C
20              A, C
30              A, D
40              B, E, F
- Itemset X = {x1, ..., xk}
- Find all the rules X → Y with minimum confidence and support
  - support, s: probability that a transaction contains X ∪ Y
  - confidence, c: conditional probability that a transaction having X
    also contains Y
Let min_support = 50%, min_conf = 50%:
  A → C (50%, 66.7%)
  C → A (50%, 100%)
7. Mining Association Rules: Example
Min. support = 50%, min. confidence = 50%
Transaction-id  Items bought
10              A, B, C
20              A, C
30              A, D
40              B, E, F
Frequent pattern  Support
{A}               75%
{B}               50%
{C}               50%
{A, C}            50%
- For rule A → C:
  - support = support({A, C}) = 50%
  - confidence = support({A, C}) / support({A}) = 66.7%
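A minimal Python sketch (not from the slides) of these two
computations over the four-transaction example above:

    transactions = [{'A','B','C'}, {'A','C'}, {'A','D'}, {'B','E','F'}]

    def support(itemset, transactions):
        # fraction of transactions containing every item of `itemset`
        return sum(itemset <= t for t in transactions) / len(transactions)

    sup_ac = support({'A', 'C'}, transactions)        # 0.5  -> 50%
    conf_ac = sup_ac / support({'A'}, transactions)   # 0.666... -> 66.7%
    print(sup_ac, conf_ac)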
8. Mining Association Rules: What We Need to Know
- Goal: rules with high support/confidence
- How to compute?
  - Support: find sets of items that occur frequently
  - Confidence: find frequency of subsets of supported itemsets
- If we have all frequently occurring sets of items (frequent
  itemsets), we can compute support and confidence!
9. Definition: Frequent Itemset
- Itemset
  - A collection of one or more items
  - Example: {Milk, Bread, Diaper}
- k-itemset
  - An itemset that contains k items
- Support count (σ)
  - Frequency of occurrence of an itemset
  - E.g. σ({Milk, Bread, Diaper}) = 2
- Support
  - Fraction of transactions that contain an itemset
  - E.g. s({Milk, Bread, Diaper}) = 2/5
- Frequent itemset
  - An itemset whose support is greater than or equal to a minsup
    threshold
10. Definition: Association Rule
- Association rule
  - An implication expression of the form X → Y, where X and Y are
    itemsets
  - Example: {Milk, Diaper} → {Beer}
- Rule evaluation metrics
  - Support (s)
    - Fraction of transactions that contain both X and Y
  - Confidence (c)
    - Measures how often items in Y appear in transactions that
      contain X
11. Mining Association Rules in Large Databases
- Association rule mining
- Algorithms for scalable mining of (single-dimensional Boolean)
  association rules in transactional databases
- Mining various kinds of association/correlation rules
- Constraint-based association mining
- Sequential pattern mining
- Applications/extensions of frequent pattern mining
- Summary
12. Apriori: A Candidate Generation-and-Test Approach
- Any subset of a frequent itemset must be frequent
  - If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  - Every transaction having {beer, diaper, nuts} also contains
    {beer, diaper}
- Apriori pruning principle: if there is any itemset which is
  infrequent, its superset should not be generated/tested!
- Method
  - Generate length-(k+1) candidate itemsets from length-k frequent
    itemsets, and
  - Test the candidates against the DB
- Performance studies show its efficiency and scalability
  - Agrawal & Srikant 1994; Mannila, et al. 1994
13. The Apriori Algorithm: An Example
Database TDB:
Tid  Items
10   A, C, D
20   B, C, E
30   A, B, C, E
40   B, E

1st scan: C1 (candidate 1-itemsets):
Itemset  sup
{A}      2
{B}      3
{C}      3
{D}      1
{E}      3
L1 (frequent 1-itemsets):
Itemset  sup
{A}      2
{B}      3
{C}      3
{E}      3

C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan, C2 counts:
Itemset  sup
{A, B}   1
{A, C}   2
{A, E}   1
{B, C}   2
{B, E}   3
{C, E}   2
L2:
Itemset  sup
{A, C}   2
{B, C}   2
{B, E}   3
{C, E}   2

C3: {B, C, E}
3rd scan, L3:
Itemset    sup
{B, C, E}  2

Frequency ≥ 50%, confidence = 100%: A → C, B → E, BC → E, CE → B,
BE → C
14. The Apriori Algorithm
- Pseudo-code:
  - Ck: candidate itemsets of size k
  - Lk: frequent itemsets of size k
  - L1 = {frequent items}
  - for (k = 1; Lk != ∅; k++) do begin
  -   Ck+1 = candidates generated from Lk
  -   for each transaction t in the database do
  -     increment the count of all candidates in Ck+1 that are
        contained in t
  -   Lk+1 = candidates in Ck+1 with min_support
  - end
  - return ∪k Lk
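A runnable Python sketch of this loop (naive support counting, no hash
tree), reproducing the example from the previous slide:

    from itertools import combinations

    def apriori(transactions, min_sup):
        """Level-wise mining; returns all frequent itemsets as frozensets."""
        items = {i for t in transactions for i in t}
        Lk = {frozenset([i]) for i in items
              if sum(i in t for t in transactions) >= min_sup}
        frequent, k = set(Lk), 1
        while Lk:
            # candidate generation: join Lk with itself ...
            Ck1 = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
            # ... and prune candidates that have an infrequent k-subset
            Ck1 = {c for c in Ck1
                   if all(frozenset(s) in Lk for s in combinations(c, k))}
            # scan the DB to count the surviving candidates
            Lk = {c for c in Ck1
                  if sum(c <= t for t in transactions) >= min_sup}
            frequent |= Lk
            k += 1
        return frequent

    tdb = [{'A','C','D'}, {'B','C','E'}, {'A','B','C','E'}, {'B','E'}]
    print(apriori(tdb, min_sup=2))   # includes frozenset({'B','C','E'})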
15. Important Details of Apriori
- How to generate candidates?
  - Step 1: self-joining Lk
  - Step 2: pruning
- How to count supports of candidates?
- Example of candidate generation
  - L3 = {abc, abd, acd, ace, bcd}
  - Self-joining: L3 ⋈ L3
    - abcd from abc and abd
    - acde from acd and ace
  - Pruning:
    - acde is removed because ade is not in L3
  - C4 = {abcd}
16. Definition: Association Rule
- Association rule
  - An implication expression of the form X → Y, where X and Y are
    itemsets
  - Example: {Milk, Diaper} → {Beer}
- Rule evaluation metrics
  - Support (s)
    - Fraction of transactions that contain both X and Y
  - Confidence (c)
    - Measures how often items in Y appear in transactions that
      contain X
17. Computational Complexity
- Given d unique items:
  - Total number of itemsets = 2^d
  - Total number of possible association rules:
    R = 3^d - 2^(d+1) + 1
  - If d = 6, R = 602 rules
18. Frequent Itemset Generation Strategies
- Reduce the number of candidates (M)
  - Complete search: M = 2^d
  - Use pruning techniques to reduce M
- Reduce the number of transactions (N)
  - Reduce the size of N as the size of the itemset increases
  - Used by DHP and vertical-based mining algorithms
- Reduce the number of comparisons (N × M)
  - Use efficient data structures to store the candidates or
    transactions
  - No need to match every candidate against every transaction
19. Reducing Number of Candidates
- Apriori principle:
  - If an itemset is frequent, then all of its subsets must also be
    frequent
- The Apriori principle holds due to the following property of the
  support measure:
  - The support of an itemset never exceeds the support of its subsets
  - This is known as the anti-monotone property of support
20. How to Generate Candidates?
- Suppose the items in Lk-1 are listed in an order
- Step 1: self-joining Lk-1
    insert into Ck
    select p.item1, p.item2, ..., p.itemk-1, q.itemk-1
    from Lk-1 p, Lk-1 q
    where p.item1 = q.item1, ..., p.itemk-2 = q.itemk-2,
          p.itemk-1 < q.itemk-1
- Step 2: pruning
    forall itemsets c in Ck do
      forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck
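The same join and prune steps in Python, assuming k-itemsets are kept
as sorted tuples to mirror the SQL formulation:

    from itertools import combinations

    def apriori_gen(L_prev):
        """Generate Ck from the (k-1)-itemsets in L_prev (sorted tuples)."""
        Lset, Ck = set(L_prev), []
        for p in L_prev:
            for q in L_prev:
                # join: agree on the first k-2 items, p's last < q's last
                if p[:-1] == q[:-1] and p[-1] < q[-1]:
                    c = p + (q[-1],)
                    # prune: every (k-1)-subset of c must be in L_prev
                    if all(s in Lset for s in combinations(c, len(c) - 1)):
                        Ck.append(c)
        return Ck

    L3 = [('a','b','c'), ('a','b','d'), ('a','c','d'), ('a','c','e'),
          ('b','c','d')]
    print(apriori_gen(L3))  # [('a','b','c','d')]; acde pruned, ade not in L3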
21. How to Count Supports of Candidates?
- Why is counting supports of candidates a problem?
  - The total number of candidates can be very huge
  - One transaction may contain many candidates
- Method
  - Candidate itemsets are stored in a hash-tree
  - Leaf nodes of the hash-tree contain lists of itemsets and counts
  - Interior nodes contain hash tables
  - A subset function finds all the candidates contained in a
    transaction
22. Example: Counting Supports of Candidates
(Figure: hash-tree traversal enumerating the 3-item subsets of
transaction {1, 2, 3, 5, 6})
23. Efficient Implementation of Apriori in SQL
- Hard to get good performance out of pure SQL (SQL-92) based
  approaches alone
- Make use of object-relational extensions like UDFs, BLOBs, table
  functions, etc.
  - Gets orders of magnitude improvement
- S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule
  mining with relational database systems: Alternatives and
  implications. In SIGMOD'98
24. Challenges of Frequent Pattern Mining
- Challenges
  - Multiple scans of the transaction database
  - Huge number of candidates
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Reduce passes of transaction database scans
  - Shrink the number of candidates
  - Facilitate support counting of candidates
25. DIC: Reduce Number of Scans
- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the
  counting of BCD begins
(Figure: itemset lattice over {A, B, C, D}; Apriori finishes counting
k-itemsets before starting (k+1)-itemsets, while DIC starts counting
an itemset as soon as all of its subsets are known to be frequent)
- S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset
  counting and implication rules for market basket data. In SIGMOD'97
26. Partition: Scan Database Only Twice
- Any itemset that is potentially frequent in DB must be frequent in
  at least one of the partitions of DB
  - Scan 1: partition the database and find local frequent patterns
  - Scan 2: consolidate global frequent patterns
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm
  for mining association rules in large databases. In VLDB'95
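A minimal sketch of the two-scan idea, reusing the `apriori` sketch
from slide 14's example and assuming the partitions fit in memory:

    def partition_mine(transactions, min_sup_frac, n_parts=2):
        n = len(transactions)
        step = (n + n_parts - 1) // n_parts
        candidates = set()
        # Scan 1: local frequent itemsets, threshold scaled per partition
        for start in range(0, n, step):
            part = transactions[start:start + step]
            local_min = max(1, int(min_sup_frac * len(part)))
            candidates |= apriori(part, local_min)
        # Scan 2: count every local winner against the whole database
        return {c for c in candidates
                if sum(c <= t for t in transactions) >= min_sup_frac * n}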
27. CS490D: Introduction to Data Mining, Prof. Chris Clifton
- February 2, 2004
- Association Rules
28. Sampling for Frequent Patterns
- Select a sample of the original database; mine frequent patterns
  within the sample using Apriori
- Scan the database once to verify the frequent itemsets found in the
  sample; only the borders of the closure of frequent patterns are
  checked
  - Example: check abcd instead of ab, ac, ..., etc.
- Scan the database again to find missed frequent patterns
- H. Toivonen. Sampling large databases for association rules. In
  VLDB'96
29. DHP: Reduce the Number of Candidates
- A k-itemset whose corresponding hashing bucket count is below the
  threshold cannot be frequent
  - Candidates: a, b, c, d, e
  - Hash entries: {ab, ad, ae}, {bd, be, de}, ...
  - Frequent 1-itemsets: a, b, d, e
  - ab is not a candidate 2-itemset if the sum of the counts of ab,
    ad, ae is below the support threshold
- J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for
  mining association rules. In SIGMOD'95
30. Eclat/MaxEclat and VIPER: Exploring Vertical Data Format
- Use the tid-list: the list of transaction-ids containing an itemset
- Compression of tid-lists
  - Itemset A: {t1, t2, t3}, sup(A) = 3
  - Itemset B: {t2, t3, t4}, sup(B) = 3
  - Itemset AB: {t2, t3}, sup(AB) = 2
- Major operation: intersection of tid-lists
- M. Zaki et al. New algorithms for fast discovery of association
  rules. In KDD'97
- P. Shenoy et al. Turbo-charging vertical mining of large databases.
  In SIGMOD'00
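The core operation in a few lines of Python, using the slide's
tid-lists:

    tidlists = {'A': {'t1', 't2', 't3'}, 'B': {'t2', 't3', 't4'}}
    t_AB = tidlists['A'] & tidlists['B']   # {'t2', 't3'}
    print(len(t_AB))                       # sup(AB) = 2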
31. Bottleneck of Frequent-pattern Mining
- Multiple database scans are costly
- Mining long patterns needs many passes of scanning and generates
  lots of candidates
  - To find the frequent itemset i1 i2 ... i100:
    - # of scans: 100
    - # of candidates: C(100,1) + C(100,2) + ... + C(100,100)
      = 2^100 - 1 ≈ 1.27 × 10^30!
- Bottleneck: candidate generation and test
- Can we avoid candidate generation?
32. CS590D: Data Mining, Prof. Chris Clifton
- January 26, 2006
- Association Rules
33. Mining Frequent Patterns Without Candidate Generation
- Grow long patterns from short ones using local frequent items
  - abc is a frequent pattern
  - Get all transactions having abc: DB|abc
  - d is a local frequent item in DB|abc → abcd is a frequent pattern
34. Construct FP-tree from a Transaction Database
min_support = 3
TID  Items bought              (Ordered) frequent items
100  f, a, c, d, g, i, m, p    f, c, a, m, p
200  a, b, c, f, l, m, o       f, c, a, b, m
300  b, f, h, j, o, w          f, b
400  b, c, k, s, p             c, b, p
500  a, f, c, e, l, p, m, n    f, c, a, m, p
- Scan DB once, find frequent 1-itemsets (single-item patterns)
- Sort frequent items in frequency-descending order: f-list
- Scan DB again, construct FP-tree
F-list = f-c-a-b-m-p
35. Benefits of the FP-tree Structure
- Completeness
  - Preserves complete information for frequent pattern mining
  - Never breaks a long pattern of any transaction
- Compactness
  - Reduces irrelevant info: infrequent items are gone
  - Items in frequency-descending order: the more frequently
    occurring, the more likely to be shared
  - Never larger than the original database (not counting node-links
    and the count fields)
  - For the Connect-4 DB, the compression ratio could be over 100
36. Partition Patterns and Databases
- Frequent patterns can be partitioned into subsets according to the
  f-list
  - F-list = f-c-a-b-m-p
  - Patterns containing p
  - Patterns having m but no p
  - ...
  - Patterns having c but none of a, b, m, p
  - Pattern f
- Completeness and non-redundancy
37. Find Patterns Having p From p's Conditional Database
- Start at the frequent-item header table in the FP-tree
- Traverse the FP-tree by following the links of each frequent item p
- Accumulate all transformed prefix paths of item p to form p's
  conditional pattern base
Item  Conditional pattern base
c     f:3
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1
38. From Conditional Pattern Bases to Conditional FP-trees
- For each pattern base:
  - Accumulate the count for each item in the base
  - Construct the FP-tree for the frequent items of the pattern base
m-conditional pattern base: fca:2, fcab:1
Header table: f:4, c:4, a:3, b:3, m:3, p:3
m-conditional FP-tree: the single path f:3 - c:3 - a:3
All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam,
fcam
(Figure: the global FP-tree and the m-conditional FP-tree)
39. Recursion: Mining Each Conditional FP-tree
- Conditional pattern base of "cm": (f:3)
  - cm-conditional FP-tree: f:3
- Conditional pattern base of "am": (fc:3)
  - am-conditional FP-tree: f:3 - c:3
- Conditional pattern base of "cam": (f:3)
  - cam-conditional FP-tree: f:3
40. A Special Case: Single Prefix Path in FP-tree
- Suppose a (conditional) FP-tree T has a shared single prefix path P
- Mining can be decomposed into two parts:
  - Reduction of the single prefix path into one node
  - Concatenation of the mining results of the two parts
41. Mining Frequent Patterns With FP-trees
- Idea: frequent pattern growth
  - Recursively grow frequent patterns by pattern and database
    partition
- Method
  - For each frequent item, construct its conditional pattern base,
    and then its conditional FP-tree
  - Repeat the process on each newly created conditional FP-tree
  - Until the resulting FP-tree is empty, or it contains only one path
    (a single path will generate all the combinations of its
    sub-paths, each of which is a frequent pattern)
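A compact pattern-growth sketch in Python. It works on conditional
databases directly rather than materializing tree nodes, but follows
the same recursion; the example data is the five-transaction DB from
slide 34.

    from collections import defaultdict

    def fp_growth(transactions, min_sup):
        """Pattern-growth sketch; returns {frozenset(pattern): support}."""
        patterns = {}
        _mine([(tuple(t), 1) for t in transactions], min_sup,
              frozenset(), patterns)
        return patterns

    def _mine(db, min_sup, suffix, patterns):
        counts = defaultdict(int)
        for items, cnt in db:
            for i in set(items):
                counts[i] += cnt
        # local f-list, most frequent first (name breaks ties)
        flist = sorted((i for i, c in counts.items() if c >= min_sup),
                       key=lambda i: (-counts[i], i))
        rank = {i: r for r, i in enumerate(flist)}
        for item in flist:
            pat = suffix | {item}
            patterns[pat] = counts[item]
            # conditional database: transactions with `item`, reduced to
            # frequent items that rank before it (its "prefix paths")
            cond = [(tuple(i for i in items
                           if i in rank and rank[i] < rank[item]), cnt)
                    for items, cnt in db if item in items]
            cond = [(p, c) for p, c in cond if p]
            if cond:
                _mine(cond, min_sup, pat, patterns)

    tdb = [{'f','a','c','d','g','i','m','p'}, {'a','b','c','f','l','m','o'},
           {'b','f','h','j','o','w'}, {'b','c','k','s','p'},
           {'a','f','c','e','l','p','m','n'}]
    print(fp_growth(tdb, 3)[frozenset('fcam')])   # 3, as in slide 34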
42. Scaling FP-growth by DB Projection
- FP-tree cannot fit in memory? → DB projection
- First partition a database into a set of projected DBs
- Then construct and mine an FP-tree for each projected DB
- Parallel projection vs. partition projection techniques
  - Parallel projection is space costly
43. Partition-based Projection
- Parallel projection needs a lot of disk space
- Partition projection saves it
44. FP-Growth vs. Apriori: Scalability With the Support Threshold
(Figure: runtime vs. support threshold, data set T25I20D10K)
45. FP-Growth vs. Tree-Projection: Scalability With the Support
Threshold
(Figure: runtime vs. support threshold, data set T25I20D100K)
46. Why Is FP-Growth the Winner?
- Divide-and-conquer:
  - Decompose both the mining task and the DB according to the
    frequent patterns obtained so far
  - Leads to focused search of smaller databases
- Other factors
  - No candidate generation, no candidate test
  - Compressed database: the FP-tree structure
  - No repeated scan of the entire database
  - Basic ops: counting local frequent items and building sub
    FP-trees; no pattern search and matching
47. Implications of the Methodology
- Mining closed frequent itemsets and max-patterns
  - CLOSET (DMKD'00)
- Mining sequential patterns
  - FreeSpan (KDD'00), PrefixSpan (ICDE'01)
- Constraint-based mining of frequent patterns
  - Convertible constraints (KDD'00, ICDE'01)
- Computing iceberg data cubes with complex measures
  - H-tree and H-cubing algorithm (SIGMOD'01)
48. Max-patterns
- Frequent pattern {a1, ..., a100} → C(100,1) + C(100,2) + ... +
  C(100,100) = 2^100 - 1 ≈ 1.27 × 10^30 frequent sub-patterns!
- Max-pattern: a frequent pattern without any proper frequent
  super-pattern
  - BCDE and ACD are max-patterns
  - BCD is not a max-pattern
Min_sup = 2
Tid  Items
10   A, B, C, D, E
20   B, C, D, E
30   A, C, D, F
49. MaxMiner: Mining Max-patterns
- 1st scan: find frequent items
  - A, B, C, D, E
- 2nd scan: find support for the potential max-patterns
  - AB, AC, AD, AE, ABCDE
  - BC, BD, BE, BCDE
  - CD, CE, CDE, DE
- Since BCDE is a max-pattern, there is no need to check BCD, BDE,
  CDE in a later scan
- R. Bayardo. Efficiently mining long patterns from databases. In
  SIGMOD'98
Tid  Items
10   A, B, C, D, E
20   B, C, D, E
30   A, C, D, F
50. Frequent Closed Patterns
- Conf(ac → d) = 100% → record acd only
- For a frequent itemset X, if there exists no item y such that every
  transaction containing X also contains y, then X is a frequent
  closed pattern
  - acd is a frequent closed pattern
- Concise representation of frequent patterns
- Reduces the number of patterns and rules
- N. Pasquier et al. In ICDT'99
Min_sup = 2
TID  Items
10   a, c, d, e, f
20   a, b, e
30   c, e, f
40   a, c, d, f
50   c, e, f
51. Mining Frequent Closed Patterns: CLOSET
- F-list: list of all frequent items in support-ascending order
  - F-list = d-a-f-e-c
- Divide the search space:
  - Patterns having d
  - Patterns having d but no a, etc.
- Find frequent closed patterns recursively
  - Every transaction having d also has cfa → cfad is a frequent
    closed pattern
- J. Pei, J. Han & R. Mao. CLOSET: An Efficient Algorithm for Mining
  Frequent Closed Itemsets. DMKD'00
Min_sup = 2
TID  Items
10   a, c, d, e, f
20   a, b, e
30   c, e, f
40   a, c, d, f
50   c, e, f
52. Mining Frequent Closed Patterns: CHARM
- Use the vertical data format: t(AB) = {T1, T12, ...}
- Derive closed patterns based on vertical intersections
  - t(X) = t(Y): X and Y always happen together
  - t(X) ⊆ t(Y): a transaction having X always has Y
- Use diffsets to accelerate mining
  - Only keep track of differences of tids
  - t(X) = {T1, T2, T3}, t(Xy) = {T1, T3}
  - Diffset(Xy, X) = {T2}
- M. Zaki. CHARM: An Efficient Algorithm for Closed Association Rule
  Mining, CS-TR99-10, Rensselaer Polytechnic Institute
- M. Zaki. Fast Vertical Mining Using Diffsets, TR01-1, Department of
  Computer Science, Rensselaer Polytechnic Institute
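The diffset bookkeeping in a few lines of Python, matching the slide's
example:

    t_X = {'T1', 'T2', 'T3'}
    t_Xy = {'T1', 'T3'}
    diffset = t_X - t_Xy              # {'T2'}: tids lost when adding y
    sup_Xy = len(t_X) - len(diffset)  # 2
    print(diffset, sup_Xy)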
53. Visualization of Association Rules: Pane Graph
54. Visualization of Association Rules: Rule Graph
55. Mining Association Rules in Large Databases
- Association rule mining
- Algorithms for scalable mining of (single-dimensional Boolean)
  association rules in transactional databases
- Mining various kinds of association/correlation rules
- Constraint-based association mining
- Sequential pattern mining
- Applications/extensions of frequent pattern mining
- Summary
56. Mining Various Kinds of Rules or Regularities
- Multi-level, quantitative association rules, correlation and
  causality, ratio rules, sequential patterns, emerging patterns,
  temporal associations, partial periodicity
- Classification, clustering, iceberg cubes, etc.
57. Multiple-level Association Rules
- Items often form hierarchies
- Flexible support settings: items at a lower level are expected to
  have lower support
- Transaction databases can be encoded based on dimensions and levels
- Explore shared multi-level mining
58. ML/MD Associations with Flexible Support Constraints
- Why flexible support constraints?
  - Real-life occurrence frequencies vary greatly
    - Diamonds, watches, and pens in a shopping basket
  - Uniform support may not be an interesting model
- A flexible model
  - The lower the level, the more dimension combinations, and the
    longer the pattern length, usually the smaller the support
  - General rules should be easy to specify and understand
  - Special items and special groups of items may be specified
    individually and have higher priority
59. Multi-dimensional Association
- Single-dimensional rules:
  - buys(X, "milk") → buys(X, "bread")
- Multi-dimensional rules: ≥ 2 dimensions or predicates
  - Inter-dimension assoc. rules (no repeated predicates)
    - age(X, "19-25") ∧ occupation(X, "student") → buys(X, "coke")
  - Hybrid-dimension assoc. rules (repeated predicates)
    - age(X, "19-25") ∧ buys(X, "popcorn") → buys(X, "coke")
- Categorical attributes
  - Finite number of possible values, no ordering among values
- Quantitative attributes
  - Numeric, implicit ordering among values
60. Multi-level Association: Redundancy Filtering
- Some rules may be redundant due to ancestor relationships between
  items
- Example
  - milk → wheat bread [support = 8%, confidence = 70%]
  - 2% milk → wheat bread [support = 2%, confidence = 72%]
- We say the first rule is an ancestor of the second rule
- A rule is redundant if its support is close to the expected value,
  based on the rule's ancestor
61. CS590D: Data Mining, Prof. Chris Clifton
- January 31, 2006
- Association Rules
62. Closed Itemset
- An itemset is closed if none of its immediate supersets has the same
  support as the itemset
63. Maximal vs. Closed Itemsets
(Figure: itemset lattice annotated with the transaction ids supporting
each itemset; some itemsets are not supported by any transaction)
64. Maximal vs. Closed Frequent Itemsets
- Minimum support = 2
- Some frequent itemsets are closed but not maximal; others are both
  closed and maximal
- # Closed = 9, # Maximal = 4
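A small sketch deriving both sets from a table of frequent itemsets
and their supports (the three-itemset table below is hypothetical):

    def closed_and_maximal(freq):
        """freq: {frozenset: support} for all frequent itemsets."""
        closed, maximal = set(), set()
        for X, sup in freq.items():
            supersets = [Y for Y in freq if X < Y]
            if all(freq[Y] < sup for Y in supersets):
                closed.add(X)    # no frequent superset with equal support
            if not supersets:
                maximal.add(X)   # no frequent superset at all
        return closed, maximal

    freq = {frozenset('A'): 3, frozenset('AB'): 3, frozenset('AC'): 2}
    print(closed_and_maximal(freq))
    # {A} is not closed ({A,B} has the same support); both 2-itemsets
    # are closed and maximal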
65. Maximal vs. Closed Itemsets
66. Multi-Level Mining: Progressive Deepening
- A top-down, progressive deepening approach
  - First mine high-level frequent items:
    milk (15%), bread (10%)
  - Then mine their lower-level "weaker" frequent itemsets:
    2% milk (5%), wheat bread (4%)
- Different min_support thresholds across multi-levels lead to
  different algorithms
  - If adopting the same min_support across multi-levels, then toss t
    if any of t's ancestors is infrequent
  - If adopting reduced min_support at lower levels, then examine only
    those descendants whose ancestor's support is
    frequent/non-negligible
67. Techniques for Mining MD Associations
- Search for frequent k-predicate sets
  - Example: {age, occupation, buys} is a 3-predicate set
- Techniques can be categorized by how quantitative attributes, such
  as age, are treated
  1. Using static discretization of quantitative attributes
     - Quantitative attributes are statically discretized using
       predefined concept hierarchies
  2. Quantitative association rules
     - Quantitative attributes are dynamically discretized into bins
       based on the distribution of the data
  3. Distance-based association rules
     - A dynamic discretization process that considers the distance
       between data points
68. CS490D: Introduction to Data Mining, Prof. Chris Clifton
- February 6, 2004
- Association Rules
69. Static Discretization of Quantitative Attributes
- Discretized prior to mining using a concept hierarchy
- Numeric values are replaced by ranges
- In a relational database, finding all frequent k-predicate sets
  requires k or k+1 table scans
- A data cube is well suited for mining
  - The cells of an n-dimensional cuboid correspond to the predicate
    sets
  - Mining from data cubes can be much faster
70. Quantitative Association Rules
- Numeric attributes are dynamically discretized such that the
  confidence or compactness of the rules mined is maximized
- 2-D quantitative association rules: A_quan1 ∧ A_quan2 → A_cat
- Cluster "adjacent" association rules to form general rules using a
  2-D grid
- Example:
  age(X, "30-34") ∧ income(X, "24K-48K") → buys(X, "high resolution
  TV")
71. Mining Distance-based Association Rules
- Binning methods do not capture the semantics of interval data
- Distance-based partitioning gives a more meaningful discretization,
  considering:
  - density/number of points in an interval
  - "closeness" of points in an interval
72. Interestingness Measure: Correlations (Lift)
- "play basketball → eat cereal [40%, 66.7%]" is misleading
  - The overall percentage of students eating cereal is 75%, which is
    higher than 66.7%
- "play basketball → not eat cereal [20%, 33.3%]" is more accurate,
  although with lower support and confidence
- Measure of dependent/correlated events: lift
  lift(A, B) = P(A ∪ B) / (P(A) × P(B))
              Basketball  Not basketball  Sum (row)
  Cereal      2000        1750            3750
  Not cereal  1000        250             1250
  Sum (col.)  3000        2000            5000
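Checking the table's numbers in Python:

    n = 5000
    p_b, p_c, p_bc = 3000 / n, 3750 / n, 2000 / n
    lift = p_bc / (p_b * p_c)
    print(round(lift, 2))   # 0.89 < 1: basketball and cereal are
                            # negatively correlated despite 66.7% confidence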
73. Mining Association Rules in Large Databases
- Association rule mining
- Algorithms for scalable mining of (single-dimensional Boolean)
  association rules in transactional databases
- Mining various kinds of association/correlation rules
- Constraint-based association mining
- Sequential pattern mining
- Applications/extensions of frequent pattern mining
- Summary
74. Constraint-based Data Mining
- Finding all the patterns in a database autonomously? Unrealistic!
  - The patterns could be too many, and not focused!
- Data mining should be an interactive process
  - The user directs what is to be mined using a data mining query
    language (or a graphical user interface)
- Constraint-based mining
  - User flexibility: provides constraints on what is to be mined
  - System optimization: explores such constraints for efficient
    mining (constraint-based mining)
75. Constraints in Data Mining
- Knowledge type constraint
  - classification, association, etc.
- Data constraint: using SQL-like queries
  - find product pairs sold together in stores in Vancouver in
    Dec. '00
- Dimension/level constraint
  - in relevance to region, price, brand, customer category
- Rule (or pattern) constraint
  - small sales (price < $10) triggers big sales (sum > $200)
- Interestingness constraint
  - strong rules: min_support ≥ 3%, min_confidence ≥ 60%
76. Constrained Mining vs. Constraint-Based Search
- Constrained mining vs. constraint-based search/reasoning
  - Both are aimed at reducing the search space
  - Finding all patterns satisfying constraints vs. finding some (or
    one) answer in constraint-based search in AI
  - Constraint-pushing vs. heuristic search
  - How to integrate them is an interesting research problem
- Constrained mining vs. query processing in DBMS
  - Database query processing requires finding all answers
  - Constrained pattern mining shares a similar philosophy with
    pushing selections deeply into query processing
77. Constrained Frequent Pattern Mining: A Mining Query Optimization
Problem
- Given a frequent pattern mining query with a set of constraints C,
  the algorithm should be:
  - sound: it only finds frequent sets that satisfy the given
    constraints C
  - complete: all frequent sets satisfying the given constraints C are
    found
- A naive solution
  - First find all frequent sets, then test them for constraint
    satisfaction
- More efficient approaches
  - Analyze the properties of constraints comprehensively
  - Push them as deeply as possible inside the frequent pattern
    computation
78. CS590D: Data Mining, Prof. Chris Clifton
- February 1, 2005
- Association Rules
79. Application of Interestingness Measure
80. Computing Interestingness Measure
- Given a rule X → Y, the information needed to compute rule
  interestingness can be obtained from a contingency table
Contingency table for X → Y:
        Y     ¬Y
X       f11   f10   f1+
¬X      f01   f00   f0+
        f+1   f+0   |T|
- Used to define various measures
  - support, confidence, lift, Gini, J-measure, etc.
81. Drawback of Confidence
        Coffee  ¬Coffee
Tea     15      5        20
¬Tea    75      5        80
        90      10       100
82. Statistical Independence
- Population of 1000 students
  - 600 students know how to swim (S)
  - 700 students know how to bike (B)
  - 420 students know how to swim and bike (S, B)
  - P(S ∧ B) = 420/1000 = 0.42
  - P(S) × P(B) = 0.6 × 0.7 = 0.42
  - P(S ∧ B) = P(S) × P(B) ⇒ statistical independence
  - P(S ∧ B) > P(S) × P(B) ⇒ positively correlated
  - P(S ∧ B) < P(S) × P(B) ⇒ negatively correlated
83. Statistical-based Measures
- Measures that take into account statistical dependence, e.g.:
  Lift = P(Y|X) / P(Y)
  Interest = P(X, Y) / (P(X) P(Y))
  PS = P(X, Y) - P(X) P(Y)
  φ-coefficient = (P(X, Y) - P(X) P(Y)) /
                  sqrt(P(X) (1 - P(X)) P(Y) (1 - P(Y)))
84. Example: Lift/Interest
        Coffee  ¬Coffee
Tea     15      5        20
¬Tea    75      5        80
        90      10       100
- Association rule: Tea → Coffee
- Confidence = P(Coffee|Tea) = 0.75
- but P(Coffee) = 0.9
- Lift = 0.75/0.9 = 0.8333 (< 1, therefore negatively associated)
85. Drawback of Lift & Interest
      Y    ¬Y
X     10   0    10
¬X    0    90   90
      10   90   100
Lift = 0.1 / (0.1 × 0.1) = 10
      Y    ¬Y
X     90   0    90
¬X    0    10   10
      90   10   100
Lift = 0.9 / (0.9 × 0.9) = 1.11
Statistical independence: if P(X, Y) = P(X) P(Y), then Lift = 1
86. There are lots of measures proposed in the literature. Some
measures are good for certain applications, but not for others. What
criteria should we use to determine whether a measure is good or bad?
What about Apriori-style support-based pruning? How does it affect
these measures?
87. Properties of a Good Measure
- Piatetsky-Shapiro: three properties a good measure M must satisfy:
  - M(A, B) = 0 if A and B are statistically independent
  - M(A, B) increases monotonically with P(A, B) when P(A) and P(B)
    remain unchanged
  - M(A, B) decreases monotonically with P(A) (or P(B)) when P(A, B)
    and P(B) (or P(A)) remain unchanged
88. Comparing Different Measures
(Figures: 10 example contingency tables, and the rankings of those
tables under various interestingness measures)
89. Property under Variable Permutation
- Does M(A, B) = M(B, A)?
- Symmetric measures:
  - support, lift, collective strength, cosine, Jaccard, etc.
- Asymmetric measures:
  - confidence, conviction, Laplace, J-measure, etc.
90. Property under Row/Column Scaling
Grade-Gender example (Mosteller, 1968):
        Male  Female
High    2     3       5
Low     1     4       5
        3     7       10
After scaling (Male column ×2, Female column ×10):
        Male  Female
High    4     30      34
Low     2     40      42
        6     70      76
Mosteller: the underlying association should be independent of the
relative number of male and female students in the samples
91. Property under Inversion Operation
(Figure: 0/1 transaction vectors for transactions 1..N, with the bits
flipped under the inversion operation)
92. Example: φ-Coefficient
- The φ-coefficient is analogous to the correlation coefficient for
  continuous variables
      Y    ¬Y
X     60   10   70
¬X    10   20   30
      70   30   100
φ = (0.6 - 0.7 × 0.7) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238
      Y    ¬Y
X     20   10   30
¬X    10   60   70
      30   70   100
φ = (0.2 - 0.3 × 0.3) / sqrt(0.3 × 0.7 × 0.3 × 0.7) = 0.5238
The φ coefficient is the same for both tables
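A small helper reproducing both values:

    import math

    def phi(f11, f10, f01, f00):
        """Phi-coefficient of a 2x2 contingency table of counts."""
        f1_, f0_ = f11 + f10, f01 + f00      # row totals
        f_1, f_0 = f11 + f01, f10 + f00      # column totals
        return (f11 * f00 - f10 * f01) / math.sqrt(f1_ * f0_ * f_1 * f_0)

    print(round(phi(60, 10, 10, 20), 4))   # 0.5238
    print(round(phi(20, 10, 10, 60), 4))   # 0.5238: unchanged by inversion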
93. Property under Null Addition
- Invariant measures:
  - support, cosine, Jaccard, etc.
- Non-invariant measures:
  - correlation, Gini, mutual information, odds ratio, etc.
94. Different Measures Have Different Properties
95. Anti-Monotonicity in Constraint-Based Mining
- Anti-monotonicity
  - When an itemset S violates the constraint, so does any of its
    supersets
  - sum(S.price) ≤ v is anti-monotone
  - sum(S.price) ≥ v is not anti-monotone
- Example: C: range(S.profit) ≤ 15 is anti-monotone
  - Itemset ab violates C (range = 40)
  - So does every superset of ab
TDB (min_sup = 2):
TID  Transaction
10   a, b, c, d, f
20   b, c, d, f, g, h
30   a, c, d, e, f
40   c, e, f, g
Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
96. Which Constraints Are Anti-Monotone?
Constraint                    Anti-monotone
v ∈ S                         no
S ⊇ V                         no
S ⊆ V                         yes
min(S) ≤ v                    no
min(S) ≥ v                    yes
max(S) ≤ v                    yes
max(S) ≥ v                    no
count(S) ≤ v                  yes
count(S) ≥ v                  no
sum(S) ≤ v (∀a ∈ S, a ≥ 0)    yes
sum(S) ≥ v (∀a ∈ S, a ≥ 0)    no
range(S) ≤ v                  yes
range(S) ≥ v                  no
avg(S) θ v, θ ∈ {=, ≤, ≥}     convertible
support(S) ≥ ξ                yes
support(S) ≤ ξ                no
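A sketch of why anti-monotone constraints are so useful in level-wise
mining: a violating candidate can be discarded immediately, together
with the entire branch of its supersets. The item prices below are
hypothetical.

    prices = {'a': 4, 'b': 3, 'c': 9}          # hypothetical prices

    def ok(itemset, v=10):
        # sum(S.price) <= v: anti-monotone since prices are non-negative
        return sum(prices[i] for i in itemset) <= v

    candidates = [{'a'}, {'b'}, {'c'}, {'a', 'b'}, {'a', 'c'}]
    print([c for c in candidates if ok(c)])
    # {'a','c'} (sum 13) is pruned here, and no superset of it need
    # ever be generated or counted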
97. Monotonicity in Constraint-Based Mining
- Monotonicity
  - When an itemset S satisfies the constraint, so does any of its
    supersets
  - sum(S.price) ≥ v is monotone
  - min(S.price) ≤ v is monotone
- Example: C: range(S.profit) ≥ 15
  - Itemset ab satisfies C
  - So does every superset of ab
TDB (min_sup = 2):
TID  Transaction
10   a, b, c, d, f
20   b, c, d, f, g, h
30   a, c, d, e, f
40   c, e, f, g
Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
98. Which Constraints Are Monotone?
Constraint                    Monotone
v ∈ S                         yes
S ⊇ V                         yes
S ⊆ V                         no
min(S) ≤ v                    yes
min(S) ≥ v                    no
max(S) ≤ v                    no
max(S) ≥ v                    yes
count(S) ≤ v                  no
count(S) ≥ v                  yes
sum(S) ≤ v (∀a ∈ S, a ≥ 0)    no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)    yes
range(S) ≤ v                  no
range(S) ≥ v                  yes
avg(S) θ v, θ ∈ {=, ≤, ≥}     convertible
support(S) ≥ ξ                no
support(S) ≤ ξ                yes
99. Succinctness
- Succinctness:
  - Given A1, the set of items satisfying a succinctness constraint C,
    any set S satisfying C is based on A1, i.e., S contains a subset
    belonging to A1
  - Idea: whether an itemset S satisfies constraint C can be
    determined based on the selection of items, without looking at the
    transaction database
  - min(S.price) ≤ v is succinct
  - sum(S.price) ≥ v is not succinct
- Optimization: if C is succinct, C is pre-counting pushable
100. Which Constraints Are Succinct?
Constraint                    Succinct
v ∈ S                         yes
S ⊇ V                         yes
S ⊆ V                         yes
min(S) ≤ v                    yes
min(S) ≥ v                    yes
max(S) ≤ v                    yes
max(S) ≥ v                    yes
count(S) ≤ v                  weakly
count(S) ≥ v                  weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)    no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)    no
range(S) ≤ v                  no
range(S) ≥ v                  no
avg(S) θ v, θ ∈ {=, ≤, ≥}     no
support(S) ≥ ξ                no
support(S) ≤ ξ                no
101. The Apriori Algorithm: Example
(Figure: Apriori flow on database D: scan D for C1 and L1, join to
get C2, scan D for L2, then C3 and a final scan for L3)
102. Naive Algorithm: Apriori + Constraint
Constraint: sum(S.price) < 5
(Figure: the same Apriori flow; the constraint is only applied to the
final frequent itemsets)
103. The Constrained Apriori Algorithm: Push an Anti-monotone
Constraint Deep
Constraint: sum(S.price) < 5
(Figure: the Apriori flow with candidates violating the anti-monotone
constraint pruned at every level)
104. The Constrained Apriori Algorithm: Push a Succinct Constraint
Deep
Constraint: min(S.price) < 1
(Figure: the Apriori flow with the succinct constraint enforced when
candidates are generated)
105. Converting "Tough" Constraints
- Convert tough constraints into anti-monotone or monotone ones by
  properly ordering items
- Examine C: avg(S.profit) ≥ 25
  - Order items in value-descending order:
    <a, f, g, d, b, h, c, e>
  - If an itemset afb violates C, so does afbh, afb* (any itemset with
    afb as a prefix)
  - It becomes anti-monotone!
TDB (min_sup = 2):
TID  Transaction
10   a, b, c, d, f
20   b, c, d, f, g, h
30   a, c, d, e, f
40   c, e, f, g
Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
106. Convertible Constraints
- Let R be an order of items
- Convertible anti-monotone
  - If an itemset S violates a constraint C, so does every itemset
    having S as a prefix w.r.t. R
  - Ex. avg(S) ≥ v w.r.t. item-value-descending order
- Convertible monotone
  - If an itemset S satisfies constraint C, so does every itemset
    having S as a prefix w.r.t. R
  - Ex. avg(S) ≤ v w.r.t. item-value-descending order
107. Strongly Convertible Constraints
- avg(X) ≥ 25 is convertible anti-monotone w.r.t. the item-value-
  descending order R: <a, f, g, d, b, h, c, e>
  - If an itemset af violates a constraint C, so does every itemset
    with af as a prefix, such as afd
- avg(X) ≥ 25 is convertible monotone w.r.t. the item-value-ascending
  order R⁻¹: <e, c, h, b, d, g, f, a>
  - If an itemset d satisfies a constraint C, so do itemsets df and
    dfa, which have d as a prefix
- Thus, avg(X) ≥ 25 is strongly convertible
Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
108. What Constraints Are Convertible?
Constraint                           Conv. anti-monotone  Conv. monotone  Strongly conv.
avg(S) ≤ v, ≥ v                      Yes                  Yes             Yes
median(S) ≤ v, ≥ v                   Yes                  Yes             Yes
sum(S) ≤ v (items of any value, v ≥ 0)  Yes               No              No
sum(S) ≤ v (items of any value, v ≤ 0)  No                Yes             No
sum(S) ≥ v (items of any value, v ≥ 0)  No                Yes             No
sum(S) ≥ v (items of any value, v ≤ 0)  Yes               No              No
109. Combining Them Together: A General Picture
Constraint                    Anti-monotone  Monotone     Succinct
v ∈ S                         no             yes          yes
S ⊇ V                         no             yes          yes
S ⊆ V                         yes            no           yes
min(S) ≤ v                    no             yes          yes
min(S) ≥ v                    yes            no           yes
max(S) ≤ v                    yes            no           yes
max(S) ≥ v                    no             yes          yes
count(S) ≤ v                  yes            no           weakly
count(S) ≥ v                  no             yes          weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0)    yes            no           no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)    no             yes          no
range(S) ≤ v                  yes            no           no
range(S) ≥ v                  no             yes          no
avg(S) θ v, θ ∈ {=, ≤, ≥}     convertible    convertible  no
support(S) ≥ ξ                yes            no           no
support(S) ≤ ξ                no             yes          no
110. Classification of Constraints
(Figure: Venn diagram relating the constraint classes: anti-monotone,
monotone, succinct, convertible anti-monotone, convertible monotone,
strongly convertible, and inconvertible)
111. CS590D: Data Mining, Prof. Chris Clifton
- February 2, 2006
- Association Rules
112. Mining With Convertible Constraints
- C: avg(S.profit) ≥ 25
- List the items in every transaction in value-descending order R:
  <a, f, g, d, b, h, c, e>
  - C is convertible anti-monotone w.r.t. R
- Scan the transaction DB once
  - Remove infrequent items
    - Item h in transaction 40 is dropped
  - Itemsets a and f are good
TDB (min_sup = 2):
TID  Transaction
10   a, f, d, b, c
20   f, g, d, b, c
30   a, f, d, c, e
40   f, g, h, c, e
Item  Profit
a     40
f     30
g     20
d     10
b     0
h     -10
c     -20
e     -30
113. Can Apriori Handle Convertible Constraints?
- A convertible constraint that is neither monotone, anti-monotone,
  nor succinct cannot be pushed deep into an Apriori mining algorithm
  - Within the level-wise framework, no direct pruning based on the
    constraint can be made
  - Itemset df violates constraint C: avg(X) > 25
  - Since adf satisfies C, Apriori needs df to assemble adf, so df
    cannot be pruned
- But it can be pushed into the frequent-pattern growth framework!
Item  Value
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
114. Mining With Convertible Constraints
- C: avg(X) > 25, min_sup = 2
- List the items in every transaction in value-descending order R:
  <a, f, g, d, b, h, c, e>
  - C is convertible anti-monotone w.r.t. R
- Scan the TDB once
  - Remove infrequent items
    - Item h is dropped
  - Itemsets a and f are good
- Projection-based mining
  - Impose an appropriate order on item projection
  - Many tough constraints can be converted into (anti-)monotone ones
Item  Value
a     40
f     30
g     20
d     10
b     0
h     -10
c     -20
e     -30
TDB (min_sup = 2):
TID  Transaction
10   a, f, d, b, c
20   f, g, d, b, c
30   a, f, d, c, e
40   f, g, h, c, e
115. Handling Multiple Constraints
- Different constraints may require different or even conflicting
  item orderings
- If there exists an order R such that both C1 and C2 are convertible
  w.r.t. R, then there is no conflict between the two convertible
  constraints
- If there is a conflict in the item order:
  - Try to satisfy one constraint first
  - Then use the order for the other constraint to mine frequent
    itemsets in the corresponding projected database
116. Interestingness via Unexpectedness
- Need to model the expectation of users (domain knowledge)
- Need to combine the expectation of users with evidence from data
  (i.e., extracted patterns)
(Figure: patterns marked + if expected to be frequent and - if
expected to be infrequent; patterns whose observed frequency matches
the expectation are "expected patterns", the rest are "unexpected
patterns")
117. Interestingness via Unexpectedness
- Web data (Cooley et al. 2001)
  - Domain knowledge in the form of site structure
  - Given an itemset F = {X1, X2, ..., Xk} (Xi: Web pages):
    - L = number of links connecting the pages
    - lfactor = L / (k × (k - 1))
    - cfactor = 1 (if the graph is connected), 0 (disconnected graph)
  - Structure evidence = cfactor × lfactor
  - Usage evidence
  - Use Dempster-Shafer theory to combine domain knowledge and
    evidence from data
118. Continuous and Categorical Attributes
How do we apply the association analysis formulation to
non-asymmetric binary variables?
Example of an association rule:
  {Number of Pages ∈ [5, 10) ∧ (Browser = Mozilla)} → {Buy = No}
119. Handling Categorical Attributes
- Transform categorical attributes into asymmetric binary variables
- Introduce a new "item" for each distinct attribute-value pair
  - Example: replace the Browser Type attribute with items such as
    - (Browser Type = Internet Explorer)
    - (Browser Type = Mozilla)
120. Handling Categorical Attributes
- Potential issues
  - What if an attribute has many possible values?
    - Example: the attribute country has more than 200 possible values
    - Many of the attribute values may have very low support
    - Potential solution: aggregate the low-support attribute values
  - What if the distribution of attribute values is highly skewed?
    - Example: 95% of the visitors have Buy = No
    - Most of the items will be associated with the (Buy = No) item
    - Potential solution: drop the highly frequent items
121. Handling Continuous Attributes
- Different kinds of rules:
  - Age ∈ [21, 35) ∧ Salary ∈ [70k, 120k) → Buy
  - Salary ∈ [70k, 120k) ∧ Buy → Age: μ = 28, σ = 4
- Different methods:
  - Discretization-based
  - Statistics-based
  - Non-discretization based
    - minApriori
122. Handling Continuous Attributes
- Use discretization
- Unsupervised:
  - Equal-width binning
  - Equal-depth binning
  - Clustering
- Supervised: choose bin boundaries from the class labels, e.g.
Attribute values, v:
Class      v1   v2   v3   v4   v5   v6   v7   v8   v9
Anomalous  0    0    20   10   20   0    0    0    0
Normal     150  100  0    0    0    100  100  150  100
(bin1 = v1-v2, bin2 = v3-v5, bin3 = v6-v9)
123. Discretization Issues
- The size of the discretized intervals affects support & confidence
  - If intervals are too small: may not have enough support
  - If intervals are too large: may not have enough confidence
- Potential solution: use all possible intervals
  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (60K ≤ Income ≤ 80K)} → {Cheat = No}
  {Refund = No, (0K ≤ Income ≤ 1B)} → {Cheat = No}
124. Discretization Issues
- Execution time
  - If an interval contains n values, there are on average O(n^2)
    possible ranges
- Too many rules
  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
  {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}
125. Approach by Srikant & Agrawal
- Preprocess the data
  - Discretize attributes using equi-depth partitioning
    - Use the partial completeness measure to determine the number of
      partitions
    - Merge adjacent intervals as long as support is less than
      max-support
- Apply existing association rule mining algorithms
- Determine the interesting rules in the output
126. Approach by Srikant & Agrawal
- Discretization will lose information
- Use the partial completeness measure to determine how much
  information is lost:
  C: frequent itemsets obtained by considering all ranges of attribute
  values
  P: frequent itemsets obtained by considering all ranges over the
  partitions
  P is K-complete w.r.t. C if P ⊆ C and, for each X ∈ C, there exists
  X' ∈ P such that:
  1. X' is a generalization of X and support(X') ≤ K × support(X),
     with K ≥ 1
  2. for each Y ⊆ X there exists Y' ⊆ X' such that Y' is a
     generalization of Y and support(Y') ≤ K × support(Y)
- Given K (the partial completeness level), the number of intervals
  (N) can be determined
127. Interestingness Measure
- Given an itemset Z = {z1, z2, ..., zk} and its generalization
  Z' = {z1', z2', ..., zk'}:
  P(Z): support of Z
  E_Z'(Z): expected support of Z based on Z', i.e.
  E_Z'(Z) = (P(z1)/P(z1')) × ... × (P(zk)/P(zk')) × P(Z')
- Z is R-interesting w.r.t. Z' if P(Z) ≥ R × E_Z'(Z)
  {Refund = No, (Income = $51,250)} → {Cheat = No}
  {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
  {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}
128. Interestingness Measure
- For a rule S: X → Y and its generalization S': X' → Y':
  P(Y|X): confidence of X → Y
  P(Y'|X'): confidence of X' → Y'
  E_S'(Y|X): expected confidence of X → Y based on S'
- Rule S is R-interesting w.r.t. its ancestor rule S' if
  - support: P(S) ≥ R × E_S'(S), or
  - confidence: P(Y|X) ≥ R × E_S'(Y|X)
129. Statistics-based Methods
- Example:
  Browser = Mozilla ∧ Buy = Yes → Age: μ = 23
- The rule consequent consists of a continuous variable, characterized
  by its statistics
  - mean, median, standard deviation, etc.
- Approach:
  - Withhold the target variable from the rest of the data
  - Apply existing frequent itemset generation to the rest of the data
  - For each frequent itemset, compute the descriptive statistics of
    the corresponding target variable
    - A frequent itemset becomes a rule by introducing the target
      variable as the rule consequent
  - Apply a statistical test to determine the interestingness of the
    rule
130. Statistics-based Methods
- How do we determine whether an association rule is interesting?
  - Compare the statistics for the segment of the population covered
    by the rule vs. the segment not covered by it:
    A → B: μ   versus   A → ¬B: μ'
  - Statistical hypothesis testing:
    - Null hypothesis H0: μ' = μ + Δ
    - Alternative hypothesis H1: μ' > μ + Δ
    - Z = (μ' - μ - Δ) / sqrt(s1²/n1 + s2²/n2)
      has zero mean and variance 1 under the null hypothesis
131. Statistics-based Methods
- Example:
  r: Browser = Mozilla ∧ Buy = Yes → Age: μ = 23
  - The rule is interesting if the difference between μ and μ' is
    greater than 5 years (i.e., Δ = 5)
  - For r, suppose n1 = 50, s1 = 3.5
  - For r' (complement): n2 = 250, s2 = 6.5
  - For a 1-sided test at the 95% confidence level, the critical
    Z-value for rejecting the null hypothesis is 1.64
  - Since Z is greater than 1.64, r is an interesting rule
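The computation behind this conclusion, with the complement segment's
mean taken as an assumed value (μ' = 30) since the slide elides it:

    import math

    mu, n1, s1 = 23, 50, 3.5       # segment covered by rule r
    mu_c, n2, s2 = 30, 250, 6.5    # complement; mu_c = 30 is assumed here
    delta = 5

    Z = (mu_c - mu - delta) / math.sqrt(s1**2 / n1 + s2**2 / n2)
    print(round(Z, 2))   # ~3.11 > 1.64: reject H0, r is interesting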
132. Min-Apriori (Han et al.)
(Figure: document-term matrix of normalized word frequencies)
Example: W1 and W2 tend to appear together in the same document
133. Min-Apriori
- Data contains only continuous attributes of the same type
  - e.g., frequency of words in a document
- Potential solution:
  - Convert into a 0/1 matrix and then apply existing algorithms
    - loses word frequency information
  - Discretization does not apply, as users want associations among
    words, not among ranges of word frequencies
134. Min-Apriori
- New definition of support:
  sup(C) = Σ over documents i of min over words j ∈ C of D(i, j)
- Example: Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17
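A sketch of this support in Python; the matrix below is a hypothetical
stand-in for the slide's document-term matrix, chosen so the example
sum works out:

    docs = [                      # normalized word frequencies per document
        {'W1': 0.40, 'W2': 0.33, 'W3': 0.00},
        {'W1': 0.40, 'W2': 0.33, 'W3': 0.00},
        {'W1': 0.00, 'W2': 0.17, 'W3': 0.50},
        {'W1': 0.00, 'W2': 0.00, 'W3': 0.33},
        {'W1': 0.20, 'W2': 0.17, 'W3': 0.17},
    ]

    def min_support(itemset, docs):
        return sum(min(d[w] for w in itemset) for d in docs)

    print(round(min_support({'W1', 'W2', 'W3'}, docs), 2))   # 0.17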
135. Mining Association Rules in Large Databases
- Association rule mining
- Algorithms for scalable mining of (single-dimensional Boolean)
  association rules in transactional databases
- Mining various kinds of association/correlation rules
- Constraint-based association mining
- Sequential pattern mining
- Applications/extensions of frequent pattern mining
- Summary
136. Sequence Databases and Sequential Pattern Analysis
- Transaction databases and time-series databases vs. sequence
  databases
- Frequent patterns vs. (frequent) sequential patterns
- Applications of sequential pattern mining
  - Customer shopping sequences:
    - First buy a computer, then a CD-ROM, and then a digital camera,
      within 3 months
  - Medical treatment, natural disasters (e.g., earthquakes), science
    & engineering processes, stocks and markets, etc.
  - Telephone calling patterns, Weblog click streams
  - DNA sequences and gene structures
137. What Is Sequential Pattern Mining?
- Given a set of sequences, find the complete set of frequent
  subsequences
A sequence: <(ef)(ab)(df)cb>
An element may contain a set of items. Items within an element are
unordered, and we list them alphabetically.
A sequence database:
SID  Sequence
10   <a(abc)(ac)d(cf)>
20   <(ad)c(bc)(ae)>
30   <(ef)(ab)(df)cb>
40   <eg(af)cbc>
<a(bc)dc> is a subsequence of <a(abc)(ac)d(cf)>
Given support threshold min_sup = 2, <(ab)c> is a sequential pattern
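A sketch of the containment test used here, with sequences represented
as lists of item sets:

    def is_subsequence(sub, seq):
        """True if every element of `sub` is contained, in order, in seq."""
        it = iter(seq)
        return all(any(e <= s for s in it) for e in sub)

    seq = [{'a'}, {'a','b','c'}, {'a','c'}, {'d'}, {'c','f'}]  # <a(abc)(ac)d(cf)>
    sub = [{'a'}, {'b','c'}, {'d'}, {'c'}]                     # <a(bc)dc>
    print(is_subsequence(sub, seq))   # True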
138. Challenges of Sequential Pattern Mining
- A huge number of possible sequential patterns are hidden in
  databases
- A mining algorithm should:
  - find the complete set of patterns satisfying the minimum support
    (frequency) threshold, when possible
  - be highly efficient and scalable, involving only a small number of
    database scans
  - be able to incorporate various kinds of user-specific constraints
139. Studies on Sequential Pattern Mining
- Concept introduction and an initial Apriori-like algorithm
  - R. Agrawal & R. Srikant. Mining sequential patterns, ICDE'95
- GSP: an Apriori-based, influential mining method (developed at IBM
  Almaden)
  - R. Srikant & R. Agrawal. Mining sequential patterns:
    Generalizations and performance improvements, EDBT'96
- From sequential patterns to episodes (Apriori-like + constraints)
  - H. Mannila, H. Toivonen & A. I. Verkamo. Discovery of frequent
    episodes in event sequences, Data Mining and Knowledge Discovery,
    1997
- Mining sequential patterns with constraints
  - M. N. Garofalakis, R. Rastogi, K. Shim. SPIRIT: Sequential Pattern
    Mining with Regular Expression Constraints. VLDB 1999
140. A Basic Property of Sequential Patterns: Apriori
- A basic property: Apriori (Agrawal & Srikant '94)
  - If a sequence S is not frequent, then none of the super-sequences
    of S is frequent
  - E.g., if <hb> is infrequent, so are <hab> and <(ah)b>
Given support threshold min_sup = 2
141. GSP: A Generalized Sequential Pattern Mining Algorithm
- GSP (Generalized Sequential Patter