Title: Association Analysis: Basic Concepts and Algorithms
1Association Analysis Basic Concepts and
Algorithms
2Association Rule Mining
- Given a set of transactions, find rules that will
predict the occurrence of an item based on the
occurrences of other items in the transaction
Example of Association Rules
Market-Basket transactions
{Diaper} → {Beer}, {Beer, Bread} → {Milk}
Implication means co-occurrence, not causality!
3Definition Frequent Itemset
- Itemset
- A collection of one or more items
- Example: {Milk, Bread, Diaper}
- k-itemset
- An itemset that contains k items
- Support count (σ)
- Frequency of occurrence of an itemset
- E.g. σ({Milk, Bread, Diaper}) = 2
- Support
- Fraction of transactions that contain an itemset
- E.g. s({Milk, Bread, Diaper}) = 2/5
- Frequent Itemset
- An itemset whose support is greater than or equal
to a minsup threshold
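The transaction table from the original slide did not survive extraction; the five market-basket transactions below are the standard example consistent with the counts quoted above (σ({Milk, Bread, Diaper}) = 2, s = 2/5). A minimal Python sketch of these definitions:

```python
# Sketch: support count and support over market-basket transactions.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions that contain every item of X."""
    return sum(1 for t in transactions if set(itemset) <= t)

def support(itemset, transactions):
    """s(X): fraction of transactions that contain X."""
    return support_count(itemset, transactions) / len(transactions)

X = {"Milk", "Bread", "Diaper"}
print(support_count(X, transactions))  # 2
print(support(X, transactions))        # 0.4  (= 2/5)
```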
4Definition Association Rule
- Association Rule
- An implication expression of the form X → Y, where X and Y are itemsets
- Example: {Milk, Diaper} → {Beer}
- Rule Evaluation Metrics
- Support (s)
- Fraction of transactions that contain both X and Y
- Confidence (c)
- Measures how often items in Y appear in transactions that contain X
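Reusing the transaction list and helpers from the previous sketch, both metrics can be computed directly; the example is the {Milk, Diaper} → {Beer} rule shown above (a sketch, not the textbook's code):

```python
def rule_support(X, Y, transactions):
    """s(X -> Y) = sigma(X u Y) / |T|."""
    return support(X | Y, transactions)

def rule_confidence(X, Y, transactions):
    """c(X -> Y) = sigma(X u Y) / sigma(X)."""
    return support_count(X | Y, transactions) / support_count(X, transactions)

# Example rule {Milk, Diaper} -> {Beer}
print(rule_support({"Milk", "Diaper"}, {"Beer"}, transactions))     # 0.4
print(rule_confidence({"Milk", "Diaper"}, {"Beer"}, transactions))  # 0.666...
```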
5Association Rule Mining Task
- Given a set of transactions T, the goal of
association rule mining is to find all rules
having
- support ≥ minsup threshold
- confidence ≥ minconf threshold
- Brute-force approach
- List all possible association rules
- Compute the support and confidence for each rule
- Prune rules that fail the minsup and minconf thresholds
- ⇒ Computationally prohibitive!
6Mining Association Rules
Example of Rules:
{Milk, Diaper} → {Beer} (s = 0.4, c = 0.67)
{Milk, Beer} → {Diaper} (s = 0.4, c = 1.0)
{Diaper, Beer} → {Milk} (s = 0.4, c = 0.67)
{Beer} → {Milk, Diaper} (s = 0.4, c = 0.67)
{Diaper} → {Milk, Beer} (s = 0.4, c = 0.5)
{Milk} → {Diaper, Beer} (s = 0.4, c = 0.5)
- Observations
- All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
- Rules originating from the same itemset have identical support but can have different confidence
- Thus, we may decouple the support and confidence requirements
7Mining Association Rules
- Two-step approach
- Frequent Itemset Generation
- Generate all itemsets whose support ≥ minsup
- Rule Generation
- Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset
- Frequent itemset generation is still computationally expensive
8Frequent Itemset Generation
Given d items, there are 2^d possible candidate itemsets
9Frequent Itemset Generation
- Brute-force approach
- Each itemset in the lattice is a candidate frequent itemset
- Count the support of each candidate by scanning the database
- Match each transaction against every candidate
- Complexity ~ O(NMw) ⇒ expensive since M = 2^d !!!
10Computational Complexity
- Given d unique items
- Total number of itemsets = 2^d
- Total number of possible association rules: R = 3^d − 2^(d+1) + 1
- If d = 6, R = 602 rules
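A quick numerical check of this count (a sketch, not from the slides): enumerate a non-empty antecedent and a non-empty consequent drawn from the remaining items, and compare with the closed form.

```python
from math import comb

def num_rules(d):
    # Choose a non-empty antecedent of size k, then a non-empty consequent
    # of size j from the remaining d - k items.
    return sum(comb(d, k) * comb(d - k, j)
               for k in range(1, d)
               for j in range(1, d - k + 1))

d = 6
print(num_rules(d))            # 602
print(3**d - 2**(d + 1) + 1)   # 602 (closed form)
```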
11Frequent Itemset Generation Strategies
- Reduce the number of candidates (M)
- Complete search: M = 2^d
- Use pruning techniques to reduce M
- Reduce the number of transactions (N)
- Reduce size of N as the size of itemset increases
- Used by DHP and vertical-based mining algorithms
- Reduce the number of comparisons (NM)
- Use efficient data structures to store the candidates or transactions
- No need to match every candidate against every transaction
12Reducing Number of Candidates
- Apriori principle
- If an itemset is frequent, then all of its subsets must also be frequent
- The Apriori principle holds due to the following property of the support measure:
- The support of an itemset never exceeds the support of its subsets, i.e., ∀ X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
- This is known as the anti-monotone property of support
13Illustrating Apriori Principle
14Illustrating Apriori Principle
Items (1-itemsets)
Pairs (2-itemsets): no need to generate candidates involving Coke or Eggs
Minimum support count = 3
Triplets (3-itemsets)
If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 41
With support-based pruning: 6 + 6 + 1 = 13
15Apriori Algorithm
- Method
- Let k = 1
- Generate frequent itemsets of length 1
- Repeat until no new frequent itemsets are identified:
- Generate length (k+1) candidate itemsets from length k frequent itemsets
- Prune candidate itemsets containing subsets of length k that are infrequent
- Count the support of each candidate by scanning the DB
- Eliminate candidates that are infrequent, leaving only those that are frequent
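A compact Python sketch of this level-wise loop, reusing support_count and the transactions list from the earlier sketches. Candidate generation is simplified here; the F(k-1) × F(k-1) join appears on a later slide.

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return the set of all frequent itemsets (as frozensets)."""
    items = {item for t in transactions for item in t}
    current = {frozenset([i]) for i in items
               if support_count({i}, transactions) >= minsup_count}
    frequent = set(current)
    k = 1
    while current:
        k += 1
        # Candidate generation: unions of two frequent (k-1)-itemsets of size k
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in current for s in combinations(c, k - 1))}
        # Support counting: one scan of the database per level
        current = {c for c in candidates
                   if support_count(c, transactions) >= minsup_count}
        frequent |= current
    return frequent

print(apriori(transactions, minsup_count=3))  # frequent itemsets of the earlier example
```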
16Reducing Number of Comparisons
- Candidate counting
- Scan the database of transactions to determine the support of each candidate itemset
- To reduce the number of comparisons, store the candidates in a hash structure
- Instead of matching each transaction against every candidate, match it against the candidates contained in the hashed buckets
17Review
- What are association rules?
- What are frequent itemsets?
- What are support and support count?
- What is the Apriori principle?
- How do we mine frequent itemsets?
18Generate Hash Tree
- Suppose you have 15 candidate itemsets of length 3:
- {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
- You need
- A hash function
- Max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node)
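A small sketch of such a candidate hash tree, assuming integer items, the bucket assignment shown on the following slides (1, 4, 7 / 2, 5, 8 / 3, 6, 9), and an assumed max leaf size of 3:

```python
MAX_LEAF_SIZE = 3  # assumption for illustration

def bucket(item):
    # 1,4,7 -> bucket 0; 2,5,8 -> bucket 1; 3,6,9 -> bucket 2
    return (item - 1) % 3

class HashTreeNode:
    def __init__(self, depth=0):
        self.depth = depth        # which item position we hash on at this node
        self.children = None      # dict bucket -> HashTreeNode once split
        self.itemsets = []        # leaf contents

    def insert(self, itemset):    # itemset: sorted tuple, e.g. (1, 4, 5)
        if self.children is not None:
            b = bucket(itemset[self.depth])
            self.children.setdefault(b, HashTreeNode(self.depth + 1)).insert(itemset)
            return
        self.itemsets.append(itemset)
        # Split an overfull leaf, unless every position has already been hashed on
        if len(self.itemsets) > MAX_LEAF_SIZE and self.depth < len(itemset):
            self.children = {}
            for s in self.itemsets:
                b = bucket(s[self.depth])
                self.children.setdefault(b, HashTreeNode(self.depth + 1)).insert(s)
            self.itemsets = []

candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6),
              (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]
root = HashTreeNode()
for c in candidates:
    root.insert(c)
```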
19Association Rule Discovery Hash tree
Hash Function
Candidate Hash Tree
1,4,7
3,6,9
2,5,8
Hash on 1, 4 or 7
20Association Rule Discovery Hash tree
Hash Function
Candidate Hash Tree
1,4,7
3,6,9
2,5,8
Hash on 2, 5 or 8
21Association Rule Discovery Hash tree
Hash Function
Candidate Hash Tree
1,4,7
3,6,9
2,5,8
Hash on 3, 6 or 9
22Subset Operation
Given a transaction t, what are the possible
subsets of size 3?
23Subset Operation Using Hash Tree
24Subset Operation Using Hash Tree
25Subset Operation Using Hash Tree
Match transaction against 11 out of 15 candidates
26How to Generate Candidates?
- Suppose the items in L_{k-1} are listed in an order
- Step 1: self-joining L_{k-1}
- insert into C_k
- select p.item_1, p.item_2, ..., p.item_{k-1}, q.item_{k-1}
- from L_{k-1} p, L_{k-1} q
- where p.item_1 = q.item_1, ..., p.item_{k-2} = q.item_{k-2}, p.item_{k-1} < q.item_{k-1}
- Step 2: pruning
- forall itemsets c in C_k do
- forall (k-1)-subsets s of c do
- if (s is not in L_{k-1}) then delete c from C_k
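The same join-and-prune in Python, with (k-1)-itemsets kept as sorted tuples (a sketch of the pseudocode above, not a drop-in library routine):

```python
from itertools import combinations

def generate_candidates(L_prev):
    """F(k-1) x F(k-1) join followed by Apriori pruning.
    L_prev: list of frequent (k-1)-itemsets, each a sorted tuple."""
    L_prev = sorted(L_prev)
    prev_set = set(L_prev)
    k = len(L_prev[0]) + 1
    candidates = []
    # Step 1: self-join -- merge itemsets that agree on the first k-2 items
    for p, q in combinations(L_prev, 2):
        if p[:-1] == q[:-1] and p[-1] < q[-1]:
            c = p + (q[-1],)
            # Step 2: prune -- every (k-1)-subset of c must be frequent
            if all(s in prev_set for s in combinations(c, k - 1)):
                candidates.append(c)
    return candidates

# e.g. L2 = [(1,2), (1,3), (1,4), (2,3)] -> C3 = [(1, 2, 3)]
print(generate_candidates([(1, 2), (1, 3), (1, 4), (2, 3)]))
```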
27Challenges of Frequent Pattern Mining
- Challenges
- Multiple scans of transaction database
- Huge number of candidates
- Tedious workload of support counting for candidates
- Improving Apriori: general ideas
- Reduce passes of transaction database scans
- Shrink number of candidates
- Facilitate support counting of candidates
28Partition Scan Database Only Twice
- Any itemset that is potentially frequent in the DB must be frequent in at least one of the partitions of the DB
- Scan 1: partition the database and find local frequent patterns
- Scan 2: consolidate global frequent patterns
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. In VLDB'95
29DHP Reduce the Number of Candidates
- A k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent
- Candidates: a, b, c, d, e
- Hash entries: {ab, ad, ae}, {bd, be, de}, ...
- Frequent 1-itemsets: a, b, d, e
- ab is not a candidate 2-itemset if the sum of the counts of ab, ad, and ae is below the support threshold
- J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. In SIGMOD'95
30Sampling for Frequent Patterns
- Select a sample of the original database and mine frequent patterns within the sample using Apriori
- Scan the database once to verify the frequent itemsets found in the sample; only the borders of the closure of frequent patterns are checked
- Example: check abcd instead of ab, ac, ..., etc.
- Scan the database again to find missed frequent patterns
- H. Toivonen. Sampling large databases for association rules. In VLDB'96
31DIC Reduce Number of Scans
- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins
[Figure: itemset lattice over {A, B, C, D}, showing when the counting of 1-, 2-, and 3-itemsets starts in Apriori versus DIC as the transactions are scanned]
- S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD'97
32Factors Affecting Complexity
- Choice of minimum support threshold
- Lowering the support threshold results in more frequent itemsets
- This may increase the number of candidates and the max length of frequent itemsets
- Dimensionality (number of items) of the data set
- More space is needed to store the support count of each item
- If the number of frequent items also increases, both computation and I/O costs may also increase
- Size of database
- Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions
- Average transaction width
- Transaction width increases with denser data sets
- This may increase the max length of frequent itemsets and the number of hash tree traversals (the number of subsets in a transaction increases with its width)
33Compact Representation of Frequent Itemsets
- Some itemsets are redundant because they have identical support to their supersets
- The number of frequent itemsets can be very large
- Need a compact representation
34Maximal Frequent Itemset
An itemset is maximal frequent if none of its immediate supersets is frequent
[Figure: itemset lattice with the maximal frequent itemsets, the border, and the infrequent itemsets marked]
35Closed Itemset
- An itemset is closed if none of its immediate supersets has the same support as the itemset
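A small sketch that derives the closed and maximal frequent itemsets from a table of support counts. The support dictionary is hypothetical input; itemsets absent from it are treated as having support 0.

```python
def closed_and_maximal(support, minsup):
    """support: dict mapping frozenset -> support count.
    Itemsets not present in the dict are assumed to have support 0."""
    frequent = {X: s for X, s in support.items() if s >= minsup}
    closed, maximal = [], []
    for X, s in frequent.items():
        # Immediate supersets of X that appear in the table
        supersets = [Y for Y in support if len(Y) == len(X) + 1 and X < Y]
        if all(support[Y] < s for Y in supersets):
            closed.append(X)    # no immediate superset has the same support
        if all(support[Y] < minsup for Y in supersets):
            maximal.append(X)   # no immediate superset is frequent
    return closed, maximal

# Tiny hypothetical example: {B} is closed but not maximal, {A, B} is both
counts = {frozenset("A"): 4, frozenset("B"): 5, frozenset("AB"): 4}
print(closed_and_maximal(counts, minsup=2))
```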
36Maximal vs Closed Itemsets
[Figure: itemset lattice annotated with the transaction ids supporting each itemset; itemsets not supported by any transaction are marked]
37Maximal vs Closed Frequent Itemsets
Minimum support = 2
[Figure: itemset lattice with frequent itemsets marked as closed but not maximal, or closed and maximal]
# Closed = 9, # Maximal = 4
38Maximal vs Closed Itemsets
39Alternative Methods for Frequent Itemset
Generation
- Traversal of Itemset Lattice
- General-to-specific vs Specific-to-general
40Alternative Methods for Frequent Itemset
Generation
- Traversal of Itemset Lattice
- Equivalence classes
41Alternative Methods for Frequent Itemset
Generation
- Traversal of Itemset Lattice
- Breadth-first vs Depth-first
42Alternative Methods for Frequent Itemset
Generation
- Representation of Database
- horizontal vs vertical data layout
43ECLAT
- For each item, store a list of transaction ids (tids), i.e., its TID-list
44ECLAT
- Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets
- 3 traversal approaches: top-down, bottom-up, and hybrid
- Advantage: very fast support counting
- Disadvantage: intermediate tid-lists may become too large for memory
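A sketch of this tid-list intersection; the items and transaction ids below are made up for illustration.

```python
# ECLAT-style support counting via tid-list intersection (hypothetical data).
tidlists = {
    "A": {1, 4, 5, 6, 7, 8, 9},
    "B": {1, 2, 5, 7, 8, 10},
    "C": {2, 4, 5, 8, 9},
}

def tidlist(itemset):
    """tid-list of an itemset = intersection of its items' tid-lists."""
    items = list(itemset)
    result = tidlists[items[0]]
    for item in items[1:]:
        result = result & tidlists[item]
    return result

print(len(tidlist({"A", "B"})))       # support count of {A, B} -> 4
print(len(tidlist({"A", "B", "C"})))  # support count of {A, B, C} -> 2
```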
45FP-growth Algorithm
- Use a compressed representation of the database in the form of an FP-tree
- Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets
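A minimal sketch of FP-tree node insertion with a header table, assuming transactions arrive with items already sorted in decreasing support order; the two calls mirror the TID 1 and TID 2 steps illustrated on the next slide.

```python
class FPNode:
    def __init__(self, item=None, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}          # item -> FPNode

def insert_transaction(root, items, header):
    """Insert one transaction (items already sorted by decreasing support)."""
    node = root
    for item in items:
        child = node.children.get(item)
        if child is None:
            child = FPNode(item, node)
            node.children[item] = child
            header.setdefault(item, []).append(child)  # header-table pointers
        child.count += 1
        node = child

root, header = FPNode(), {}
insert_transaction(root, ["A", "B"], header)       # after TID 1: A:1 -> B:1
insert_transaction(root, ["B", "C", "D"], header)  # after TID 2: new branch B:1 -> C:1 -> D:1
```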
46FP-tree construction
[Figure: FP-tree after reading TID 1 (null → A:1 → B:1) and after reading TID 2, when a second branch null → B:1 → C:1 → D:1 is added]
47FP-Tree Construction
Transaction Database
[Figure: example transaction database and the FP-tree built from it, with a header table whose node-link pointers chain together all nodes for the same item]
Pointers are used to assist frequent itemset generation
48FP-growth
Conditional pattern base for D: P = {(A:1, B:1, C:1), (A:1, B:1), (A:1, C:1), (A:1), (B:1, C:1)}
Recursively apply FP-growth on P.
Frequent itemsets found (with sup > 1): AD, BD, CD, ACD, BCD
[Figure: the FP-tree, with the prefix paths ending in D forming the conditional pattern base]
49Rule Generation
- Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement
- If {A, B, C, D} is a frequent itemset, the candidate rules are:
ABC → D, ABD → C, ACD → B, BCD → A, A → BCD, B → ACD, C → ABD, D → ABC, AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
- If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
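Enumerating the candidate rules of a frequent itemset in Python (a sketch; confidence filtering is omitted):

```python
from itertools import combinations

def candidate_rules(L):
    """All rules f -> (L - f) for non-empty proper subsets f of L."""
    L = frozenset(L)
    rules = []
    for r in range(1, len(L)):            # antecedent sizes 1 .. |L| - 1
        for f in combinations(L, r):
            f = frozenset(f)
            rules.append((f, L - f))
    return rules

print(len(candidate_rules({"A", "B", "C", "D"})))   # 14 = 2**4 - 2
```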
50Rule Generation
- How can rules be generated efficiently from frequent itemsets?
- In general, confidence does not have an anti-monotone property
- c(ABC → D) can be larger or smaller than c(AB → D)
- But the confidence of rules generated from the same itemset has an anti-monotone property
- e.g., L = {A, B, C, D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
- Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule
51Rule Generation for Apriori Algorithm
Lattice of rules
[Figure: a low-confidence rule and the rules pruned below it in the lattice]
52Rule Generation for Apriori Algorithm
- A candidate rule is generated by merging two rules that share the same prefix in the rule consequent
- join(CD → AB, BD → AC) would produce the candidate rule D → ABC
- Prune rule D → ABC if its subset AD → BC does not have high confidence
53Effect of Support Distribution
- Many real data sets have a skewed support distribution
Support distribution of a retail data set
54Effect of Support Distribution
- How to set the appropriate minsup threshold?
- If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
- If minsup is set too low, it is computationally expensive and the number of itemsets is very large
- Using a single minimum support threshold may not be effective
55Multiple Minimum Support
- How to apply multiple minimum supports?
- MS(i): minimum support for item i
- e.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
- MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%
- Challenge: support is no longer anti-monotone
- Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%
- {Milk, Coke} is infrequent but {Milk, Coke, Broccoli} is frequent
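A small sketch of the itemset-level minimum support under this scheme, using the example MS values above (values in percent):

```python
# Under multiple minimum supports, MS(itemset) = min over its items of MS(i).
MS = {"Milk": 5.0, "Coke": 3.0, "Broccoli": 0.1, "Salmon": 0.5}

def ms_of_itemset(itemset):
    return min(MS[i] for i in itemset)

print(ms_of_itemset({"Milk", "Broccoli"}))          # 0.1
print(ms_of_itemset({"Milk", "Coke", "Broccoli"}))  # 0.1
# Loss of anti-monotonicity: with Support(Milk, Coke) = 1.5 and
# Support(Milk, Coke, Broccoli) = 0.5, {Milk, Coke} fails its MS of 3.0
# while {Milk, Coke, Broccoli} passes its MS of 0.1.
```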
56Multiple Minimum Support
57Multiple Minimum Support
58Multiple Minimum Support (Liu 1999)
- Order the items according to their minimum support (in ascending order)
- e.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
- Ordering: Broccoli, Salmon, Coke, Milk
- Need to modify Apriori such that:
- L1: set of frequent items
- F1: set of items whose support is ≥ MS(1), where MS(1) = min_i(MS(i))
- C2: candidate itemsets of size 2, generated from F1 instead of L1
59Multiple Minimum Support (Liu 1999)
- Modifications to Apriori
- In traditional Apriori:
- A candidate (k+1)-itemset is generated by merging two frequent itemsets of size k
- The candidate is pruned if it contains any infrequent subsets of size k
- The pruning step has to be modified:
- Prune only if the subset contains the first item
- e.g. candidate = {Broccoli, Coke, Milk} (ordered according to minimum support)
- {Broccoli, Coke} and {Broccoli, Milk} are frequent but {Coke, Milk} is infrequent
- The candidate is not pruned because {Coke, Milk} does not contain the first item, i.e., Broccoli.
60Mining Various Kinds of Association Rules
- Mining multilevel association
- Mining multidimensional association
- Mining quantitative association
- Mining interesting correlation patterns
61Mining Multiple-Level Association Rules
- Items often form hierarchies
- Flexible support settings
- Items at the lower level are expected to have lower support
- Exploration of shared multi-level mining (Agrawal & Srikant @VLDB'95, Han & Fu @VLDB'95)
62Multi-level Association Redundancy Filtering
- Some rules may be redundant due to ancestor relationships between items.
- Example:
- milk → wheat bread [support = 8%, confidence = 70%]
- 2% milk → wheat bread [support = 2%, confidence = 72%]
- We say the first rule is an ancestor of the second rule.
- A rule is redundant if its support is close to the expected value, based on the rule's ancestor.
63Mining Multi-Dimensional Association
- Single-dimensional rules
- buys(X, "milk") → buys(X, "bread")
- Multi-dimensional rules: ≥ 2 dimensions or predicates
- Inter-dimension assoc. rules (no repeated predicates)
- age(X, "19-25") ∧ occupation(X, "student") → buys(X, "coke")
- Hybrid-dimension assoc. rules (repeated predicates)
- age(X, "19-25") ∧ buys(X, "popcorn") → buys(X, "coke")
- Categorical attributes: finite number of possible values, no ordering among values; data cube approach
- Quantitative attributes: numeric, implicit ordering among values; discretization, clustering, and gradient approaches
64Mining Quantitative Associations
- Techniques can be categorized by how numerical attributes, such as age or salary, are treated
- Static discretization based on predefined concept hierarchies (data cube methods)
- Dynamic discretization based on data distribution (quantitative rules, e.g., Agrawal & Srikant @SIGMOD'96)
- Clustering: distance-based association (e.g., Yang & Miller @SIGMOD'97)
- One-dimensional clustering, then association
- Deviation (such as Aumann and Lindell @KDD'99)
- Sex = female ⇒ Wage: mean = $7/hr (overall mean = $9/hr)
65Quantitative Association Rules
- Proposed by Lent, Swami and Widom, ICDE'97
- Numeric attributes are dynamically discretized
- Such that the confidence or compactness of the rules mined is maximized
- 2-D quantitative association rules: A_quan1 ∧ A_quan2 → A_cat
- Cluster adjacent association rules to form general rules using a 2-D grid
- Example:
age(X, "34-35") ∧ income(X, "30-50K") → buys(X, "high resolution TV")
66Mining Other Interesting Patterns
- Flexible support constraints (Wang et al. @VLDB'02)
- Some items (e.g., diamond) may occur rarely but are valuable
- Customized sup_min specification and application
- Top-k closed frequent patterns (Han et al. @ICDM'02)
- Hard to specify sup_min, but top-k with length_min is more desirable
- Dynamically raise sup_min during FP-tree construction and mining, and select the most promising path to mine
67Pattern Evaluation
- Association rule algorithms tend to produce too many rules
- Many of them are uninteresting or redundant
- Redundant if {A, B, C} → {D} and {A, B} → {D} have the same support and confidence
- Interestingness measures can be used to prune/rank the derived patterns
- In the original formulation of association rules, support and confidence are the only measures used
68Interestingness Measure Correlations (Lift)
- play basketball → eat cereal [40%, 66.7%] is misleading
- The overall percentage of students eating cereal is 75% > 66.7%
- play basketball → not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
- Measure of dependent/correlated events: lift

             Basketball   Not basketball   Sum (row)
Cereal       2000         1750             3750
Not cereal   1000         250              1250
Sum (col.)   3000         2000             5000
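A worked check of the lift implied by this table; the 40% and 66.7% above are the rule support and confidence computed here.

```python
# Lift for "play basketball -> eat cereal" from the 2x2 table above.
N = 5000
basketball_and_cereal = 2000
basketball = 3000
cereal = 3750

support = basketball_and_cereal / N                    # 0.4  (40%)
confidence = basketball_and_cereal / basketball        # 0.667 (66.7%)
lift = confidence / (cereal / N)                       # P(C|B) / P(C)
print(round(lift, 2))                                  # 0.89 < 1: negatively correlated

# The complementary rule "basketball -> not cereal":
conf_not = (basketball - basketball_and_cereal) / basketball   # 0.333
lift_not = conf_not / ((N - cereal) / N)
print(round(lift_not, 2))                              # 1.33 > 1
```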
69Are lift and χ2 Good Measures of Correlation?
- Buy walnuts → buy milk [1%, 80%] is misleading
- if 85% of customers buy milk
- Support and confidence are not good at representing correlations
- So many interestingness measures (Tan, Kumar, Srivastava @KDD'02)

             Milk     No Milk   Sum (row)
Coffee       m, c     ~m, c     c
No Coffee    m, ~c    ~m, ~c    ~c
Sum (col.)   m        ~m        Σ

DB   m, c   ~m, c   m, ~c   ~m, ~c    lift   all-conf   coh    χ2
A1   1000   100     100     10,000    9.26   0.91       0.83   9055
A2   100    1000    1000    100,000   8.44   0.09       0.05   670
A3   1000   100     10000   100,000   9.18   0.09       0.09   8172
A4   1000   1000    1000    1000      1      0.5        0.33   0
70Which Measures Should Be Used?
- lift and χ2 are not good measures for correlations in large transactional DBs
- all-conf or coherence could be good measures (Omiecinski @TKDE'03)
- Both all-conf and coherence have the downward closure property
- Efficient algorithms can be derived for mining (Lee et al. @ICDM'03sub)