
Data Mining: Concepts and Techniques (3rd ed.)

Lecture 4: Basic Frequent Patterns

Mining Frequent Patterns, Associations and Correlations: Basic Concepts and Methods

Ling Zhou
School of Computer Science and Engineering (SCSE)
Content

- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? Pattern Evaluation Methods
- Summary
What Is Frequent Pattern Analysis?

- Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
- First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
- Motivation: finding inherent regularities in data
  - What products were often purchased together? Beer and diapers?!
  - What are the subsequent purchases after buying a PC?
  - What kinds of DNA are sensitive to this new drug?
  - Can we automatically classify web documents?
- Applications: basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis
Why Is Frequent Pattern Mining Important?

- Frequent patterns: an intrinsic and important property of datasets
- Foundation for many essential data mining tasks:
  - Association, correlation, and causality analysis
  - Sequential and structural (e.g., sub-graph) patterns
  - Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  - Classification: discriminative frequent pattern analysis
  - Cluster analysis: frequent pattern-based clustering
  - Data warehousing: iceberg cubes and cube gradients
  - Semantic data compression: fascicles
  - Broad applications
Title Basic Concepts: Frequent Patterns
Tid Items bought nitemset: A set of one or more
10 Beer, Nuts, Diaper items
20 Beer, Coffee, Diaper
nk-itemset X = {x1, …, xk}
30 Beer, Diaper, Eggs
n(absolute) support, or, support
40 Nuts, Eggs, Milk
count of X: Frequency or
50 Nuts, Coffee, Diaper, Eggs, Milk
occurrence of an itemset X
Customer
buys both
Customer n(relative) support, s, is the
buys diaper
fraction of transactions that
contains X (i.e., the probability
that a transaction contains X)
nAn itemset X is frequent if X’s
Customer support is no less than a
buys beer minsup threshold

5
Title Basic Concepts: Frequent Patterns
Tid Items bought n Find all the rules X à Y with
10 Beer, Nuts, Diaper minimum support and
20 Beer, Coffee, Diaper confidence
30 Beer, Diaper, Eggs p support, s, probability that a
40 Nuts, Eggs, Milk transaction contains X È Y
50 Nuts, Coffee, Diaper, Eggs, Milk p confidence, c, conditional
probability that a transaction
Customer
buys both
Customer having X also contains Y
buys diaper
Let minsup = 50%, minconf = 50%
Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3,
{Beer, Diaper}:3
n Association rules: (many more!)
Customer n Beer à Diaper (60%, 100%)
buys beer
n Diaper à Beer (60%, 75%)
6
Example

(Figure-only slide.)
Closed Patterns and Max-Patterns

- A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, ..., a100} contains C(100,1) + C(100,2) + ... + C(100,100) = 2^100 - 1 ≈ 1.27 × 10^30 sub-patterns!

(Figure: the itemset lattice over items A-E, from the empty set down to ABCDE.)

- Given d items, there are 2^d possible itemsets. Too expensive to test all!
Closed Patterns and Max-Patterns

- Solution: mine closed patterns and max-patterns instead
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X
- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X
- Closed patterns are a lossless compression of the frequent patterns
  - Reduces the number of patterns and rules
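As an illustration of these two definitions, a small Python sketch (ours; itemsets are frozensets mapped to hypothetical support counts) that separates closed from maximal patterns:

```python
def closed_and_max(freq):
    """freq: dict mapping frozenset itemset -> support count.
    Returns (closed, maximal) pattern sets per the definitions above."""
    closed, maximal = set(), set()
    for x, sup in freq.items():
        supersets = [y for y in freq if x < y]
        if not any(freq[y] == sup for y in supersets):
            closed.add(x)
        if not supersets:  # no frequent proper superset at all
            maximal.add(x)
    return closed, maximal

# Toy frequent itemsets with hypothetical supports:
freq = {frozenset("A"): 4, frozenset("B"): 5, frozenset("AB"): 4}
closed, maximal = closed_and_max(freq)
print([set(x) for x in closed], [set(x) for x in maximal])
# closed: {'B'} and {'A','B'}; maximal: only {'A','B'}
# ({'A'} is not closed: superset AB has the same support 4)
```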
Closed Patterns and Max-Patterns

- Exercise. DB = {<a1, ..., a100>, <a1, ..., a50>}, with min_sup = 1
- What is the set of closed itemsets?
  - <a1, ..., a100>: 1
  - <a1, ..., a50>: 2
- What is the set of max-patterns?
  - <a1, ..., a100>: 1
- What is the set of all patterns?
  - Far too many to list: all 2^100 - 1 nonempty subsets of {a1, ..., a100}!
Computational Complexity of Frequent Itemset Mining

- How many itemsets may be generated in the worst case?
  - The number of frequent itemsets to be generated is sensitive to the minsup threshold
  - When minsup is low, there can be an exponential number of frequent itemsets
  - Worst case: M^N, where M is the number of distinct items and N the maximum transaction length
- Worst-case complexity vs. expected probability
  - Example: suppose Walmart sells 10^4 kinds of products
    - The chance of picking up one particular product: 10^-4
    - The chance of picking up a particular set of 10 products: ~10^-40
    - What is the chance that this particular set of 10 products appears 10^3 times in 10^9 transactions?
Content

- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? Pattern Evaluation Methods
- Summary
Scalable Frequent Itemset Mining Methods

- Apriori: a candidate generation-and-test approach
- Improving the efficiency of Apriori
- FPGrowth: a frequent pattern-growth approach
- ECLAT: frequent pattern mining with vertical data format
- Mining closed frequent patterns and max-patterns
The Downward Closure Property and Scalable Mining Methods

- The downward closure property of frequent patterns:
  - Any subset of a frequent itemset must be frequent
  - If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  - i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
- Scalable mining methods: three major approaches
  - Apriori (Agrawal & Srikant @VLDB'94)
  - Frequent pattern growth (FPGrowth, Han, Pei & Yin @SIGMOD'00)
  - Vertical data format approach (CHARM, Zaki & Hsiao @SDM'02)
The Apriori Principle

- Apriori principle (main observation):
  - If an itemset is frequent, then all of its subsets must also be frequent
  - If an itemset is not frequent, then all of its supersets cannot be frequent
  - Formally: ∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
  - The support of an itemset never exceeds the support of its subsets
  - This is known as the anti-monotone (反单调性) property of support
Illustration of the Apriori Principle

(Figure: once an itemset is found to be frequent, all of its subsets in the lattice are frequent as well.)
Illustration of the Apriori Principle

(Figure: itemset lattice over items A-E. An itemset found to be infrequent has all of its supersets pruned from the search space without being counted.)
The Apriori Algorithm

Level-wise approach. Ck = candidate itemsets of size k; Lk = frequent itemsets of size k.

1. k = 1; C1 = all items
2. While Ck is not empty:
3.   (Frequent itemset generation) Scan the database to find which itemsets in Ck are frequent and put them into Lk
4.   (Candidate generation) Use Lk to generate a collection Ck+1 of candidate itemsets of size k+1
5.   k = k + 1
Apriori: A Candidate Generation & Test Approach

- Method:
  - Initially, scan the DB once to get the frequent 1-itemsets
  - Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  - Test the candidates against the DB
  - Terminate when no frequent or candidate set can be generated
The Apriori Algorithm: An Example (sup_min = 2)

Database TDB:
Tid | Items
10  | A, C, D
20  | B, C, E
30  | A, B, C, E
40  | B, E

1st scan: C1 = {A}:2, {B}:3, {C}:3, {D}:1, {E}:3  →  L1 = {A}:2, {B}:3, {C}:3, {E}:3

C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2  →  L2 = {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3 (from L2): {B,C,E}
3rd scan: {B,C,E}:2  →  L3 = {B,C,E}:2
The Apriori Algorithm (Pseudo-Code)

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
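The pseudocode above translates almost directly into a short runnable Python sketch (our naming and data layout; candidate generation here is done by set union plus subset pruning rather than the sorted-prefix self-join shown on the next slide). It reproduces the worked example:

```python
from itertools import combinations

def apriori(transactions, min_sup):
    """Level-wise Apriori: returns {frozenset itemset: support count}."""
    transactions = [frozenset(t) for t in transactions]
    candidates = {frozenset([i]) for t in transactions for i in t}
    freq, k = {}, 1
    while candidates:
        # One DB scan: count this level's candidates, keep the frequent ones.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_sup}
        freq.update(level)
        # Join Lk with itself; keep (k+1)-sets whose k-subsets are all frequent.
        candidates = {
            a | b for a in level for b in level
            if len(a | b) == k + 1
            and all(frozenset(s) in level for s in combinations(a | b, k))
        }
        k += 1
    return freq

tdb = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
for itemset, sup in sorted(apriori(tdb, 2).items(),
                           key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(set(itemset), sup)   # ends with {'B','C','E'} 2, matching L3 above
```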
Implementation of Apriori

- How to generate candidates? (See the sketch after this list.)
  - Step 1: self-join Lk with itself
  - Step 2: prune
- Example of candidate generation:
  - L3 = {abc, abd, acd, ace, bcd}
  - Self-join: L3 * L3
    - abcd from abc and abd
    - acde from acd and ace
  - Pruning:
    - acde is removed because ade is not in L3
  - C4 = {abcd}
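A sketch of this self-join-and-prune step in Python (our implementation of the classic apriori-gen idea; items are kept in sorted order so the join condition is a shared (k-1)-prefix):

```python
from itertools import combinations

def gen_candidates(lk):
    """Join frequent k-itemsets sharing their first k-1 items (sorted order),
    then prune any join that has an infrequent k-subset."""
    lk = sorted(tuple(sorted(x)) for x in lk)
    k = len(lk[0])
    joined = [a + (b[-1],) for a in lk for b in lk
              if a[:-1] == b[:-1] and a[-1] < b[-1]]
    lk_set = set(lk)
    return [c for c in joined
            if all(s in lk_set for s in combinations(c, k))]

l3 = ["abc", "abd", "acd", "ace", "bcd"]
print(gen_candidates(l3))   # [('a', 'b', 'c', 'd')], i.e., C4 = {abcd};
                            # acde is pruned because ade is not in L3
```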
How to Count Supports of Candidates?

- Why is counting supports of candidates a problem?
  - The total number of candidates can be huge
  - One transaction may contain many candidates
- Method:
  - Candidate itemsets are stored in a hash tree
  - A leaf node of the hash tree contains a list of itemsets and counts
  - An interior node contains a hash table
  - Subset function: finds all the candidates contained in a transaction
Use the Hash Tree for Support Counting

To build a candidate hash tree, the following are required:

- A hash function, e.g., h(x) = x mod 3, or h(x, y) = (x*10 + y) mod 7
- A max leaf size: if the number of candidate itemsets in a leaf exceeds the max leaf size, split the node
Insertion of Itemset Candidates

(Figure-only slides illustrating how candidate itemsets are inserted into the hash tree.)

Subset Operation Using Hash Tree

(Figure-only slides illustrating how a transaction is matched against the hash tree to find the candidates it contains.)
Scalable Frequent Itemset Mining Methods

- Apriori: a candidate generation-and-test approach
- Improving the efficiency of Apriori
- FPGrowth: a frequent pattern-growth approach
- ECLAT: frequent pattern mining with vertical data format
Further Improvement of the Apriori Method

- Major computational challenges:
  - Multiple scans of the transaction database
  - Huge number of candidates
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Reduce the number of transaction-database scans
  - Shrink the number of candidates
  - Facilitate support counting of candidates
Partition: Scan the Database Only Twice

- Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
  - Scan 1: partition the database and find local frequent patterns
  - Scan 2: consolidate global frequent patterns
- A. Savasere, E. Omiecinski, and S. Navathe, VLDB'95

DB1 + DB2 + ... + DBk = DB: if sup1(i) < σ|DB1|, sup2(i) < σ|DB2|, ..., and supk(i) < σ|DBk|, then sup(i) < σ|DB|; equivalently, a globally frequent itemset must be locally frequent in at least one partition.
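A sketch of the two scans in Python, reusing the apriori() function from the earlier sketch (the per-partition threshold scaling and rounding here are our assumptions):

```python
import math

def partition_scan1(transactions, rel_minsup, k):
    """Scan 1 of Partition (Savasere et al.): mine each of k chunks locally.
    Any globally frequent itemset must be locally frequent in >= 1 chunk,
    so the union of local results is a complete candidate set for scan 2."""
    n = len(transactions)
    chunk = math.ceil(n / k)
    candidates = set()
    for i in range(0, n, chunk):
        part = transactions[i:i + chunk]
        local_minsup = max(1, math.ceil(rel_minsup * len(part)))
        candidates |= set(apriori(part, local_minsup))  # apriori() from above
    return candidates

def partition_scan2(transactions, candidates, rel_minsup):
    """Scan 2: one full pass keeps only the globally frequent candidates."""
    n = len(transactions)
    return {c for c in candidates
            if sum(c <= frozenset(t) for t in transactions) >= rel_minsup * n}
```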
DHP: Reduce the Number of Candidates

- Key idea: a k-itemset whose corresponding hash-bucket count is below the threshold cannot be frequent
  - Candidates: a, b, c, d, e
  - Hash 2-itemsets into buckets, e.g., bucket {ab, ad, ae} with count 35; bucket {bd, be, de} with count 88; ...; bucket {yz, qs, wt} with count 102
  - Frequent 1-itemsets: a, b, d, e
  - ab is not a candidate 2-itemset if the summed count of its bucket {ab, ad, ae} is below the support threshold
- J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95
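A minimal sketch of the extra hashing pass (ours; it uses Python's built-in hash of the pair rather than the slide's (x*10 + y) mod 7 function):

```python
from itertools import combinations

def dhp_bucket_counts(transactions, num_buckets=7):
    """DHP's extra pass: hash every 2-itemset of every transaction into a
    bucket. A pair can survive as a candidate only if its bucket's total
    count reaches minsup, since the bucket count upper-bounds the pair's."""
    buckets = [0] * num_buckets
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % num_buckets] += 1
    return buckets
```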
Sampling for Frequent Patterns

- Select a sample of the original database; mine frequent patterns within the sample using Apriori
- Scan the database once to verify the frequent itemsets found in the sample; only the borders of the closure of the frequent patterns are checked
  - Example: check abcd instead of ab, ac, ..., etc.
- Scan the database again to find missed frequent patterns
- H. Toivonen. Sampling large databases for association rules. VLDB'96
DIC: Reduce the Number of Scans

- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins

(Figure: itemset lattice over A-D, and a timeline over the transaction stream contrasting Apriori's strict level-by-level counting of 1-itemsets, then 2-itemsets, with DIC's overlapped counting of 1-, 2-, and 3-itemsets.)

- S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97
Scalable Frequent Itemset Mining Methods

- Apriori: a candidate generation-and-test approach
- Improving the efficiency of Apriori
- FPGrowth: a frequent pattern-growth approach
- ECLAT: frequent pattern mining with vertical data format
- Mining closed frequent patterns and max-patterns
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation

- Bottlenecks of the Apriori approach:
  - Breadth-first (i.e., level-wise) search
  - Candidate generation and test: often generates a huge number of candidates
- The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00):
  - Depth-first search
  - Avoids explicit candidate generation
- Major philosophy: grow long patterns from short ones using only local frequent items
  - "abc" is a frequent pattern
  - Get all transactions having "abc", i.e., project the DB on abc: DB|abc
  - "d" is a local frequent item in DB|abc → abcd is a frequent pattern
Construct an FP-tree from a Transaction Database (min_support = 3)

TID | Items bought              | (Ordered) frequent items
100 | {f, a, c, d, g, i, m, p}  | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o}     | {f, c, a, b, m}
300 | {b, f, h, j, o, w}        | {f, b}
400 | {b, c, k, s, p}           | {c, b, p}
500 | {a, f, c, e, l, p, m, n}  | {f, c, a, m, p}

Steps:
1. Scan the DB once; find the frequent 1-itemsets (single-item patterns)
2. Sort frequent items in frequency-descending order: the F-list = f-c-a-b-m-p (f:4, c:4, a:3, b:3, m:3, p:3)
3. Scan the DB again; insert each transaction's ordered frequent items into the FP-tree

(Figure: the resulting FP-tree. The root {} has children f:4 and c:1; under f:4 are c:3 and b:1; under c:3, a:3; under a:3, m:2 and b:1; under m:2, p:2; under that b:1, m:1; under c:1, b:1 and then p:1. A header table links all nodes of each item.)
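A compact FP-tree construction sketch in Python (the class layout and header-table representation are ours; ties in item frequency may be ordered differently from the slide's F-list):

```python
from collections import Counter, defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fptree(transactions, min_support):
    """Two scans: (1) count items and build the f-list, (2) insert each
    transaction's frequent items, in f-list order, into the tree."""
    counts = Counter(i for t in transactions for i in t)
    flist = [i for i, n in counts.most_common() if n >= min_support]
    order = {item: rank for rank, item in enumerate(flist)}
    root, header = FPNode(None, None), defaultdict(list)  # item -> its nodes
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in order), key=order.get):
            if item not in node.children:
                node.children[item] = FPNode(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header, flist

tdb = ["facdgimp", "abcflmo", "bfhjow", "bcksp", "afclpmn"]
root, header, flist = build_fptree(tdb, 3)
print(flist)   # f and c first (support 4), then a, b, m, p in some tie order
print([n.count for n in header["f"]])   # [4]: a single shared f node
```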
Partition Patterns and Databases

- Frequent patterns can be partitioned into subsets according to the F-list (f-c-a-b-m-p):
  - Patterns containing p
  - Patterns having m but no p
  - ...
  - Patterns having c but none of a, b, m, p
  - Pattern f
- This partition is complete and non-redundant
Find Patterns Having p from p's Conditional Pattern Base

- Start at the frequent-item header table of the FP-tree
- Traverse the FP-tree by following the node-links of each frequent item p
- Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base

Conditional pattern bases (from the FP-tree above):
item | conditional pattern base
c    | f:3
a    | fc:3
b    | fca:1, f:1, c:1
m    | fca:2, fcab:1
p    | fcam:2, cb:1
From Conditional Pattern Bases to Conditional FP-trees

- For each pattern base:
  - Accumulate the count for each item in the base
  - Construct the FP-tree for the frequent items of the pattern base

Example: m's conditional pattern base is fca:2, fcab:1. The item counts are f:3, c:3, a:3, b:1; b is infrequent (1 < 3) and dropped. The m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3. All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam.
Recursion: Mining Each Conditional FP-tree

Starting from the m-conditional FP-tree ({} → f:3 → c:3 → a:3):

- Conditional pattern base of "am": (fc:3) → am-conditional FP-tree: {} → f:3 → c:3
- Conditional pattern base of "cm": (f:3) → cm-conditional FP-tree: {} → f:3
- Conditional pattern base of "cam": (f:3) → cam-conditional FP-tree: {} → f:3
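Putting the pieces together, a recursive FP-growth sketch that reuses build_fptree() from the earlier sketch (ours; the real algorithm additionally applies the single-prefix-path shortcut described on the next slide):

```python
def fpgrowth(transactions, min_support, suffix=()):
    """For each frequent item x (in reverse f-list order), emit suffix+{x},
    build x's conditional pattern base, and recurse on it."""
    root, header, flist = build_fptree(transactions, min_support)
    patterns = {}
    for item in reversed(flist):
        new_suffix = tuple(sorted((item,) + suffix))
        patterns[new_suffix] = sum(n.count for n in header[item])
        # Conditional pattern base: each item-node's prefix path,
        # replicated according to that node's count.
        cond_base = []
        for node in header[item]:
            path, p = [], node.parent
            while p.item is not None:
                path.append(p.item)
                p = p.parent
            cond_base.extend([path] * node.count)
        patterns.update(fpgrowth(cond_base, min_support, new_suffix))
    return patterns

pats = fpgrowth(["facdgimp", "abcflmo", "bfhjow", "bcksp", "afclpmn"], 3)
print(pats[("a", "c", "f", "m")])   # 3 -> fcam is frequent, as derived above
```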
A Special Case: A Single Prefix Path in the FP-tree

- Suppose a (conditional) FP-tree T has a shared single prefix path P
- Mining can then be decomposed into two parts:
  - Reduce the single prefix path into one node
  - Concatenate the mining results of the two parts

(Figure: a tree whose top is the single path a1:n1 → a2:n2 → a3:n3 and which branches below a3 into b1:m1 and c1:k1 (with c2:k2 and c3:k3 further down). It is split into the prefix-path part r1 = {} → a1:n1 → a2:n2 → a3:n3 and the branching part rooted at r1.)
Benefits of the FP-tree Structure

- Completeness:
  - Preserves complete information for frequent pattern mining
  - Never breaks a long pattern of any transaction
- Compactness:
  - Reduces irrelevant information: infrequent items are gone
  - Items are in frequency-descending order: the more frequently an item occurs, the more likely its nodes are shared
  - Never larger than the original database (not counting node-links and the count fields)
The Frequent Pattern Growth Mining Method

- Idea: frequent pattern growth
  - Recursively grow frequent patterns by pattern and database partition
- Method:
  - For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
  - Repeat the process on each newly created conditional FP-tree
  - Until the resulting FP-tree is empty, or it contains only one path; a single path generates all the combinations of its sub-paths, each of which is a frequent pattern
Scaling FP-growth by Database Projection

- What if the FP-tree cannot fit in memory? Use DB projection:
  - First partition the database into a set of projected DBs
  - Then construct and mine an FP-tree for each projected DB
- Parallel projection vs. partition projection:
  - Parallel projection:
    - Project the DB in parallel for each frequent item
    - Space-costly, but all the partitions can be processed in parallel
  - Partition projection:
    - Partition the DB based on the ordered frequent items
    - Pass the unprocessed parts on to the subsequent partitions
Partition-Based Projection

- Parallel projection needs a lot of disk space; partition projection saves it

(Figure: the transaction DB {fcamp, fcabm, fb, cbp, fcamp} is split into p-, m-, b-, a-, c-, and f-projected DBs; e.g., the p-projected DB is {fcam, cb, fcam} and the m-projected DB is {fcab, fca, fca}. The am- and cm-projected DBs, {fc, fc, fc} and {f, f, f}, are derived in turn.)
Performance of FPGrowth in Large Datasets

(Figure: runtime (sec.) vs. support threshold (%). Left: FP-growth vs. Apriori on dataset T25I20D10K (D1); right: FP-growth vs. TreeProjection on dataset T25I20D100K (D2). FP-growth's runtime grows far more slowly as the support threshold drops.)
Advantages of the Pattern-Growth Approach

- Divide-and-conquer:
  - Decompose both the mining task and the DB according to the frequent patterns obtained so far
  - Leads to focused searches of smaller databases
- Other factors:
  - No candidate generation, no candidate test
  - Compressed database: the FP-tree structure
  - No repeated scans of the entire database
  - Basic operations: counting local frequent items and building sub-FP-trees; no pattern search and matching
- A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods

- AFOPT (Liu, et al. @KDD'03)
  - A "push-right" method for mining condensed frequent pattern (CFP) trees
- CARPENTER (Pan, et al. @KDD'03)
  - Mines data sets with few rows but numerous columns
  - Constructs a row-enumeration tree for efficient mining
- FPgrowth+ (Grahne and Zhu, FIMI'03)
  - Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
- TD-Close (Liu, et al. @SDM'06)
Extensions of the Pattern-Growth Mining Methodology

- Mining closed frequent itemsets and max-patterns: CLOSET (DMKD'00), FPclose, and FPMax (Grahne & Zhu, FIMI'03)
- Mining sequential patterns: PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04)
- Mining graph patterns: gSpan (ICDM'02), CloseGraph (KDD'03)
- Constraint-based mining of frequent patterns: convertible constraints (ICDE'01), gPrune (PAKDD'03)
- Computing iceberg data cubes with complex measures: H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03)
- Pattern-growth-based clustering: MaPle (Pei, et al., ICDM'03)
- Pattern-growth-based classification: mining frequent and discriminative patterns (Cheng, et al., ICDE'07)
Scalable Frequent Itemset Mining Methods

- Apriori: a candidate generation-and-test approach
- Improving the efficiency of Apriori
- FPGrowth: a frequent pattern-growth approach
- ECLAT: frequent pattern mining with vertical data format
- Mining closed frequent patterns and max-patterns
ECLAT: Mining by Exploring the Vertical Data Format

- Vertical format: t(AB) = {T11, T25, ...}
  - tid-list: the list of transaction-ids containing an itemset
- Deriving frequent patterns based on vertical intersections:
  - t(X) = t(Y): X and Y always happen together
  - t(X) ⊂ t(Y): a transaction having X always has Y
- Using diffsets to accelerate mining:
  - Only keep track of differences of tids
  - t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
- Eclat (Zaki et al. @KDD'97)
- Mining closed patterns using the vertical format: CHARM (Zaki & Hsiao @SDM'02)
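A tiny ECLAT sketch in Python (ours; tid-lists are plain sets, and the depth-first growth is by pairwise tid-list intersection):

```python
def eclat(tidlists, min_sup, prefix=()):
    """tidlists maps item -> set of transaction ids. Frequent itemsets are
    grown depth-first; each extension intersects two tid-lists."""
    items = sorted(tidlists)
    out = {}
    for i, x in enumerate(items):
        if len(tidlists[x]) >= min_sup:
            out[prefix + (x,)] = len(tidlists[x])
            # Extend x with every later item via tid-list intersection.
            ext = {y: tidlists[x] & tidlists[y] for y in items[i + 1:]}
            out.update(eclat(ext, min_sup, prefix + (x,)))
    return out

# Vertical view of the 4-transaction Apriori example from earlier:
v = {"A": {10, 30}, "B": {20, 30, 40}, "C": {10, 20, 30}, "E": {20, 30, 40}}
print(eclat(v, 2))   # includes ('B', 'C', 'E'): 2, matching L3
```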
Scalable Frequent Itemset Mining Methods

- Apriori: a candidate generation-and-test approach
- Improving the efficiency of Apriori
- FPGrowth: a frequent pattern-growth approach
- ECLAT: frequent pattern mining with vertical data format
- Mining closed frequent patterns and max-patterns
Mining Frequent Closed Patterns: CLOSET

- F-list: the list of all frequent items in support-ascending order
  - F-list: d-a-f-e-c (min_sup = 2)

TID | Items
10  | a, c, d, e, f
20  | a, b, e
30  | c, e, f
40  | a, c, d, f
50  | c, e, f

- Divide the search space:
  - Patterns having d
  - Patterns having a but no d, etc.
- Find frequent closed patterns recursively
  - Every transaction having d also has c, f, a → cfad is a frequent closed pattern
- J. Pei, J. Han & R. Mao. "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", DMKD'00.
CLOSET+: Mining Closed Itemsets by Pattern-Growth

- Itemset merging: if Y appears in every occurrence of X, then Y is merged with X
- Sub-itemset pruning: if Y ⊃ X and sup(X) = sup(Y), then X and all of X's descendants in the set-enumeration tree can be pruned
- Hybrid tree projection:
  - Bottom-up physical tree projection
  - Top-down pseudo tree projection
- Item skipping: if a local frequent item has the same support in several header tables at different levels, prune it from the header tables at the higher levels
- Efficient subset checking
MaxMiner: Mining Max-Patterns

Tid | Items
10  | A, B, C, D, E
20  | B, C, D, E
30  | A, C, D, F

- 1st scan: find the frequent items: A, B, C, D, E
- 2nd scan: find the supports of
  - AB, AC, AD, AE, ABCDE
  - BC, BD, BE, BCDE
  - CD, CE, DE, CDE
  (ABCDE, BCDE, and CDE are the potential max-patterns)
- Since BCDE is a max-pattern, there is no need to check BCD, BDE, or CDE in a later scan
- R. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98
CHARM: Mining by Exploring the Vertical Data Format

- Vertical format: t(AB) = {T11, T25, ...}
  - tid-list: the list of transaction-ids containing an itemset
- Deriving closed patterns based on vertical intersections:
  - t(X) = t(Y): X and Y always happen together
  - t(X) ⊂ t(Y): a transaction having X always has Y
- Using diffsets to accelerate mining:
  - Only keep track of differences of tids
  - t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
- Eclat/MaxEclat (Zaki et al. @KDD'97), VIPER (P. Shenoy et al. @SIGMOD'00), CHARM (Zaki & Hsiao @SDM'02)
Visualization of Association Rules: Plane Graph

(Figure-only slide.)

Visualization of Association Rules: Rule Graph

(Figure-only slide.)

Visualization of Association Rules (SGI/MineSet 3.0)

(Figure-only slide.)
Content

- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? Pattern Evaluation Methods
- Summary
Interestingness Measure: Correlations (Lift)

- play basketball ⇒ eat cereal [40%, 66.7%] is misleading
  - The overall percentage of students eating cereal is 75% > 66.7%
- play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although it has lower support and confidence
- Measure of dependent/correlated events: lift

lift = P(A ∪ B) / (P(A) P(B))

            | Basketball | Not basketball | Sum (row)
Cereal      | 2000       | 1750           | 3750
Not cereal  | 1000       | 250            | 1250
Sum (col.)  | 3000       | 2000           | 5000

lift(B, C)  = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33
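The lift numbers are easy to verify in a few lines of Python (a quick check, not code from the original slides):

```python
n = 5000  # total number of students

def lift(c_ab, c_a, c_b):
    """lift = P(A u B) / (P(A) P(B)); >1 positive, <1 negative correlation."""
    return (c_ab / n) / ((c_a / n) * (c_b / n))

print(round(lift(2000, 3000, 3750), 2))  # 0.89: basketball, cereal negatively correlated
print(round(lift(1000, 3000, 1250), 2))  # 1.33: basketball, "no cereal" positively correlated
```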
Are Lift and χ² Good Measures of Correlation?

- "Buy walnuts ⇒ buy milk [1%, 80%]" is misleading if 85% of customers buy milk
- Support and confidence are not good indicators of correlation
- Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @KDD'02)
- Which are the good ones?
Null-Invariant Measures

(Table-only slide: definitions of the five null-invariant measures discussed next.)
Comparison of Interestingness Measures

- Null-(transaction) invariance is crucial for correlation analysis
- Lift and χ² are not null-invariant
- There are 5 null-invariant measures

            | Milk  | No Milk | Sum (row)
Coffee      | m, c  | ¬m, c   | c
No Coffee   | m, ¬c | ¬m, ¬c  | ¬c
Sum (col.)  | m     | ¬m      | Σ

The null-transactions w.r.t. m and c are the ¬m, ¬c cell. The Kulczynski measure (1927) is null-invariant. A subtle point: the measures can disagree with one another.
Analysis of DBLP Coauthor Relationships

(Table: coauthor pairs from recent DB conferences, after removing balanced associations, low-support pairs, etc.)

- Advisor-advisee relations: Kulc is high, coherence is low, cosine is in the middle
- Tianyi Wu, Yuguo Chen and Jiawei Han, "Association Mining in Large Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf. Principles and Practice of Knowledge Discovery in Databases (PKDD'07), Sept. 2007
Which Null-Invariant Measure Is Better?

- IR (Imbalance Ratio): measures the imbalance of the two itemsets A and B in rule implications
- Kulczynski and the Imbalance Ratio (IR) together present a clear picture for the three datasets D4 through D6:
  - D4 is balanced and neutral
  - D5 is imbalanced and neutral
  - D6 is very imbalanced and neutral
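For reference, the two measures follow the textbook definitions Kulc(A, B) = (1/2)(sup(A∪B)/sup(A) + sup(A∪B)/sup(B)) and IR(A, B) = |sup(A) - sup(B)| / (sup(A) + sup(B) - sup(A∪B)). A small Python sketch (the example counts below are hypothetical, chosen in the spirit of a balanced vs. a very imbalanced dataset):

```python
def kulc(sup_a, sup_b, sup_ab):
    """Kulczynski: average of the two rule confidences; 0.5 is neutral."""
    return 0.5 * (sup_ab / sup_a + sup_ab / sup_b)

def ir(sup_a, sup_b, sup_ab):
    """Imbalance Ratio: 0 when A and B are balanced, near 1 when very skewed."""
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

print(kulc(1000, 1000, 500), ir(1000, 1000, 500))        # 0.5, 0.0   (balanced, neutral)
print(kulc(1000, 100000, 1000), ir(1000, 100000, 1000))  # 0.505, 0.99 (very imbalanced, neutral)
```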
Content

- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? Pattern Evaluation Methods
- Summary
Summary

- Basic concepts: association rules, the support-confidence framework, closed and max-patterns
- Scalable frequent pattern mining methods:
  - Apriori (candidate generation and test)
  - Projection-based (FPgrowth, CLOSET+, ...)
  - Vertical format approach (ECLAT, CHARM, ...)
- Which patterns are interesting? Pattern evaluation methods
Ref: Basic Concepts of Frequent Pattern Mining

- (Association rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93.
- (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
- (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
- (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95.
Ref: Apriori and Its Improvements

- R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94.
- H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94.
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95.
- J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95.
- H. Toivonen. Sampling large databases for association rules. VLDB'96.
- S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97.
- S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98.
Ref: Depth-First, Projection-Based FP Mining

- R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing, 2002.
- G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets. Proc. FIMI'03.
- B. Goethals and M. Zaki. An introduction to workshop on frequent itemset mining implementations. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003.
- J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
- J. Liu, Y. Pan, K. Wang, and J. Han. Mining frequent item sets by opportunistic projection. KDD'02.
- J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining top-k frequent closed patterns without minimum support. ICDM'02.
- J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the best strategies for mining frequent closed itemsets. KDD'03.
Ref: Vertical Format and Row Enumeration Methods

- M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI, 1997.
- M. J. Zaki and C. J. Hsiao. CHARM: An efficient algorithm for closed itemset mining. SDM'02.
- C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A dual-pruning algorithm for itemsets with constraints. KDD'02.
- F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding closed patterns in long biological datasets. KDD'03.
- H. Liu, J. Han, D. Xin, and Z. Shao. Mining interesting patterns from very high dimensional data: A top-down row enumeration approach. SDM'06.
Ref: Mining Correlations and Interesting Rules

- S. Brin, R. Motwani, and C. Silverstein. Beyond market baskets: Generalizing association rules to correlations. SIGMOD'97.
- M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94.
- R. J. Hilderman and H. J. Hamilton. Knowledge Discovery and Measures of Interest. Kluwer Academic, 2001.
- C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98.
- P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the right interestingness measure for association patterns. KDD'02.
- E. Omiecinski. Alternative interest measures for mining associations. TKDE'03.
- T. Wu, Y. Chen, and J. Han. "Re-Examination of Interestingness Measures in Pattern Mining: A Unified Framework". Data Mining and Knowledge Discovery, 21(3):371-397, 2010.
