Data Mining: Concepts and Techniques

Chapter 5 Mining Frequent Patterns, Associations, and Correlations
5.1 Basic Concepts
 Def. - Finding frequent patterns, associations, correlations, or causal
structures among sets of items or objects in transaction databases,
relational databases, and other information repositories.
 Applications:
o Market Basket data analysis: which groups of items are
customers likely to purchase together?
o cross-marketing, catalog design, loss-leader analysis, clustering, classification, etc.
 Examples.
o Rule form: “Body ⇒ Head [support, confidence]”.
buys(x, “diapers”) ⇒ buys(x, “beers”) [0.5%, 60%]
major(x, “CS”) ∧ takes(x, “DB”) ⇒ grade(x, “A”) [1%, 75%]
Association Rule: Basic Concepts
 Given:
o (1) database of transactions,
o (2) each transaction is a list of items (purchased by a customer
in a visit)
 Find:
o all rules that correlate the presence of one set of items with that
of another set of items
o E.g., 98% of people who purchase tires and auto accessories
also get automotive services done
 Applications
o Home Electronics ⇒ * (What other products should the store stock up on?)
o Attached mailing in direct marketing
Rule Measures: Support and Confidence
Basic Concepts: Frequent Patterns and Association Rules

Transaction-id | Items bought
10 | A, B, D
20 | A, C, D
30 | A, D, E
40 | B, E, F
50 | B, C, D, E, F

 Itemset X = {x1, …, xk}
 Find all the rules X ⇒ Y with minimum support and confidence
o support, s: probability that a transaction contains X ∪ Y
o confidence, c: conditional probability that a transaction having X also contains Y
 Let supmin = 50%, confmin = 50%
Freq. Pat.: {A:3, B:3, D:4, E:3, AD:3}
Association rules:
A ⇒ D (60%, 100%)
D ⇒ A (60%, 75%)
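The two measures can be checked directly against the five-transaction table above. A minimal sketch (function and variable names are mine, not from the text):

```python
def support_conf(transactions, X, Y):
    """Support and confidence of the rule X => Y.

    support = P(a transaction contains X ∪ Y);
    confidence = P(transaction contains Y | it contains X).
    """
    X, Y = set(X), set(Y)
    n = len(transactions)
    n_x = sum(1 for t in transactions if X <= set(t))
    n_xy = sum(1 for t in transactions if X | Y <= set(t))
    return n_xy / n, n_xy / n_x

# The transaction table above
D = [{"A", "B", "D"}, {"A", "C", "D"}, {"A", "D", "E"},
     {"B", "E", "F"}, {"B", "C", "D", "E", "F"}]
```

Running it on the table reproduces A ⇒ D (60%, 100%) and D ⇒ A (60%, 75%).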
5.1.3 Frequent Pattern Mining: A Road Map
 Boolean vs. quantitative associations (Based on the types of values
handled)
o buys(x, “SQLServer”) ∧ buys(x, “DMBook”) ⇒ buys(x, “DBMiner”) [0.2%, 60%]
(Boolean, 1-dimensional)
o age(x, “30..39”) ∧ income(x, “42..48K”) ⇒ buys(x, “PC”) [1%, 75%]
(Quantitative, multi-dimensional)
 Single-dimensional vs. multi-dimensional associations
o Number of terms (dimensions) involved
 Single-level vs. multiple-level analysis
o Finding rules at multiple levels of abstraction
 e.g., buys(X, “Computer”)
 vs. buys(X, “Notebook PC”)
 Association does not necessarily imply correlation or causality
6.2 Mining Single-dimensional Boolean Association Rules (can easily be
extended)
Mining Association Rules — An Example

Transaction ID | Items Bought
2000 | A, B, C
1000 | A, C
4000 | A, D
5000 | B, E, F

Min. support 50%
Min. confidence 50%

Frequent Itemset | Support
{A} | 75%
{B} | 50%
{C} | 50%
{A, C} | 50%

For rule A ⇒ C:
support = support({A, C}) = 50%
confidence = support({A, C}) / support({A}) = 66.6%

The Apriori principle:
Any subset of a frequent itemset must be frequent
 Naive method for finding association rules:
o Using the standard separate-and-conquer method, treating every
possible combination of attribute values as a separate class
 Two problems:
o Computational complexity
o Resulting number of rules (which would have to be pruned on
the basis of support and confidence)
 But: we can look for high support rules directly!
Item sets
 Item: one attribute-value pair
 Item set: all items occurring in a rule
 Support: percent of instances correctly covered by the association rule
 Goal: create only rules that are strong (i.e., exceed pre-defined support and confidence thresholds)
 How: find all item sets with the given minimum support and generate rules from them
Example:
Item sets for weather data (support>=2/14)
 Total of 12 one-item sets, 47 two-item sets, 39 three-item sets, 6 four-item sets, and 0 five-item sets (with minimum support of two)
Generating rules from an item set
 Once all item sets with minimum support have been generated, we can
turn them into rules
 More than one rule can be extracted from the same item set
Example: Humidity = Normal, Windy = False, Play = Yes (4)
 Seven (2^N − 1, with N = 3) potential rules, each with its confidence:
If Humidity = Normal and Windy = False then Play = Yes (4/4)
If Humidity = Normal and Play = Yes then Windy = False (4/6)
If Windy = False and Play = Yes then Humidity = Normal (4/6)
If Humidity = Normal then Windy = False and Play = Yes (4/7)
If Windy = False then Humidity = Normal and Play = Yes (4/8)
If Play = Yes then Humidity = Normal and Windy = False (4/9)
If True then Humidity = Normal and Windy = False and Play = Yes (4/12)
Rules for the weather data
 Rules with support > 1 and confidence = 100%:
 3 rules with support of four, 5 with support of three, and 50 with
support of two
Generating item sets efficiently
 Finding one-item sets is easy, but, how can we efficiently find all
frequent (i.e., sufficient support) item sets?
Try all combinations? NO
How?
 Idea: use one-item sets to generate two-item sets, two-item sets to generate three-item sets, and so on
o If (A B) is a frequent item set (i.e., meets the coverage requirement), then (A) and (B) have to be frequent item sets as well! (Apriori principle)
o In general: if Lk is a frequent k-item set, then all of its (k−1)-item subsets are also frequent
How do we use this fact?
==> L1 -> C2 -> L2 -> C3 -> L3 -> …
The Apriori Algorithm
 Join Step: Ck (Candidate k-item set) is generated by joining
Lk-1 (frequent (k-1)-itemset) with itself
 Prune Step: Any (k-1)-itemset that is not frequent cannot be a subset
of a frequent k-itemset
 Pseudo-code:
Ck: candidate itemset of size k
Lk: frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
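The pseudo-code above can be turned into a small working program. A sketch in Python (all names are mine); the join here pairs any two frequent k-itemsets whose union has k + 1 items, with the prune step guaranteeing only valid candidates survive:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: count} for all frequent itemsets.

    transactions: iterable of item sets; min_support: absolute count.
    """
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    L = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(L)
    k = 1
    while L:
        # Join step: unite two frequent k-itemsets into a (k+1)-itemset
        prev = list(L)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k + 1:
                    # Prune step: every k-subset must itself be frequent
                    if all(frozenset(sub) in L
                           for sub in combinations(union, k)):
                        candidates.add(union)
        # Scan the database once to count the surviving candidates
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        L = {c: n for c, n in counts.items() if n >= min_support}
        result.update(L)
        k += 1
    return result
```

On the TID 100-400 example of the next section (min_support = 2) this yields the nine frequent itemsets {1}, {2}, {3}, {5}, {1 3}, {2 3}, {2 5}, {3 5}, and {2 3 5}.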
The Apriori Algorithm — Example

Database D:
TID | Items
100 | 1 3 4
200 | 2 3 5
300 | 1 2 3 5
400 | 2 5

Scan D → C1:
itemset | sup.
{1} | 2
{2} | 3
{3} | 3
{4} | 1
{5} | 3

L1:
itemset | sup.
{1} | 2
{2} | 3
{3} | 3
{5} | 3

C2 (from L1), scan D:
itemset | sup
{1 2} | 1
{1 3} | 2
{1 5} | 1
{2 3} | 2
{2 5} | 3
{3 5} | 2

L2:
itemset | sup
{1 3} | 2
{2 3} | 2
{2 5} | 3
{3 5} | 2

C3 (from L2), scan D:
itemset | sup
{2 3 5} | 2

L3:
itemset | sup
{2 3 5} | 2
How to Generate Candidates?
 Suppose the items in Lk-1 are listed in some order (e.g., lexicographic)
 Step 1: self-joining Lk-1
insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1=q.item1, …, p.itemk-2=q.itemk-2, p.itemk-1 < q.itemk-1
 Step 2: pruning
forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck
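The two steps map directly to code. A sketch assuming each itemset is kept in sorted order (function and variable names are mine):

```python
from itertools import combinations

def gen_candidates(L_prev, k):
    """Self-join + prune, following the two steps above.

    L_prev: set of frozensets, the frequent (k-1)-itemsets.
    Returns the candidate k-itemsets Ck.
    """
    # Step 1: self-join — merge two (k-1)-itemsets that agree on their
    # first k-2 items (in sorted order) and differ in the last one
    as_lists = sorted(sorted(s) for s in L_prev)
    Ck = set()
    for i in range(len(as_lists)):
        for j in range(i + 1, len(as_lists)):
            p, q = as_lists[i], as_lists[j]
            if p[:k - 2] == q[:k - 2] and p[k - 2] < q[k - 2]:
                c = frozenset(p + [q[k - 2]])
                # Step 2: prune — every (k-1)-subset must be frequent
                if all(frozenset(s) in L_prev
                       for s in combinations(c, k - 1)):
                    Ck.add(c)
    return Ck
```

On the example below, joining L3 = {abc, abd, acd, ace, bcd} produces abcd and acde; pruning removes acde (ade and cde are missing from L3), leaving C4 = {abcd}.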
In what order? An example
 Given: five three-item sets
L3 = {abc, abd, acd, ace, bcd}
 Lexicographically ordered!
 Candidate four-item sets:
o Self-joining: L3 * L3
 abcd from abc and abd
 acde from acd and ace
 Pruning:
o acde is removed: only 2 of its three-item subsets are in L3 (ade and cde are missing)
o abcd is kept: all 4 of its three-item subsets (abc, abd, acd, bcd) are in L3
o C4 = {abcd}
How to Count Supports of Candidates Efficiently?
 Why is counting supports of candidates a problem?
o The total number of candidates can be huge
o One transaction may contain many candidates
 Method (details in paper and presentation):
o Candidate itemsets are stored in a hash-tree
o Leaf node of hash-tree contains a list of itemsets and counts
o Interior node contains a hash table
o Subset function: finds all the candidates contained in a
transaction
AprioriTid – Use Lk to count support, instead of the database itself.
Detail in “Mining Rules 2.doc” paper and presentation
6.2.3 Methods to Improve Apriori’s Efficiency
 Hash-based itemset counting:
o While counting (k−1)-itemsets, generate all k-itemsets for each transaction, hash them into the appropriate buckets, and increase the bucket counts
o A k-itemset whose corresponding hash bucket count is below the threshold cannot be frequent
o Figure 6.6
 Transaction reduction: A transaction that does not contain any
frequent k-itemset is useless in subsequent scans and is removed (used
in AprioriTid)
 Partitioning: Any itemset that is potentially frequent in DB must be
frequent in at least one of the partitions of DB
 Sampling: mining on a subset of given data, lower support threshold +
a method to determine the completeness
 Dynamic itemset counting: add new candidate itemsets only when all
of their subsets are estimated to be frequent
Is Apriori Fast Enough? Performance Bottlenecks
 The core of the Apriori algorithm:
o Use frequent (k−1)-itemsets to generate candidate frequent k-itemsets
o Use database scan and pattern matching to collect counts for the candidate itemsets
 The bottleneck of Apriori: candidate generation
o Huge candidate sets:
 10^4 frequent 1-itemsets will generate 10^7 candidate 2-itemsets
 To discover a frequent pattern of size 100, e.g., {a1, a2, …, a100}, one needs to generate 2^100 ≈ 10^30 candidates
o Multiple scans of database:
 Needs (n + 1) scans, where n is the length of the longest pattern
6.2.4 Mining Frequent Patterns Without Candidate Generation
 Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure
o highly condensed, but complete for frequent pattern mining
o avoids costly database scans
 Develop an efficient FP-tree-based frequent pattern mining method
o A divide-and-conquer methodology: decompose mining tasks into smaller ones
o Avoid candidate generation: sub-database test only!
Construct FP-tree from a Transaction DB

min_support = 0.5 (i.e., 3 of the 5 transactions)

TID | Items bought | (ordered) frequent items
100 | {f, a, c, d, g, i, m, p} | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o} | {f, c, a, b, m}
300 | {b, f, h, j, o} | {f, b}
400 | {b, c, k, s, p} | {c, b, p}
500 | {a, f, c, e, l, p, m, n} | {f, c, a, m, p}

Steps:
1. Scan DB once, find frequent 1-itemsets (single item patterns)
2. Order frequent items in frequency descending order
3. Scan DB again, construct FP-tree

Header Table (each entry's head points into its node-link chain in the tree):
Item | frequency
f | 4
c | 4
a | 3
b | 3
m | 3
p | 3

Resulting FP-tree (each node is item:count; {} is the root):
{}
├─ f:4
│  ├─ c:3
│  │  └─ a:3
│  │     ├─ m:2
│  │     │  └─ p:2
│  │     └─ b:1
│  │        └─ m:1
│  └─ b:1
└─ c:1
   └─ b:1
      └─ p:1
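Steps 1-3 can be sketched as a simplified two-scan construction (class and function names are mine; ties in frequency may be ordered differently than in the figure, giving a slightly different but equally valid tree):

```python
from collections import Counter

class Node:
    """FP-tree node: item, count, parent, children."""
    def __init__(self, item, parent=None):
        self.item, self.count, self.parent = item, 0, parent
        self.children = {}

def build_fptree(transactions, min_count):
    """Returns (root, header); header maps item -> its node-link chain."""
    # Scan 1: frequent single items, in descending frequency order
    freq = Counter(i for t in transactions for i in t)
    order = [i for i, c in freq.most_common() if c >= min_count]
    rank = {i: r for r, i in enumerate(order)}
    root, header = Node(None), {i: [] for i in order}
    # Scan 2: insert each transaction's frequent items, sorted by rank
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            if item not in node.children:
                child = Node(item, parent=node)
                node.children[item] = child
                header[item].append(child)  # extend the node-link chain
            node = node.children[item]
            node.count += 1
    return root, header
```

On the five-transaction table above (min_count = 3), only f, c, a, b, m, p survive scan 1, the f-branch accumulates a count of 4, and the per-item counts in the header chains match the header table.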
Benefits of the FP-tree Structure
 Completeness:
o never breaks a long pattern of any transaction
o preserves complete information for frequent pattern mining
 Compactness:
o reduces irrelevant information: infrequent items are gone
o frequency descending ordering: more frequent items are more likely to be shared
o never larger than the original database (not counting node-links and counts)
o Example: for the Connect-4 DB, the compression ratio can be over 100
Mining Frequent Patterns Using FP-tree
 General idea (divide-and-conquer)
o Recursively grow frequent pattern path using the FP-tree
 Method
o For each item, construct its conditional pattern-base, and then
its conditional FP-tree
o Repeat the process on each newly created conditional FP-tree
o Until the resulting FP-tree is empty, or it contains only one path
(single path will generate all the combinations of its sub-paths,
each of which is a frequent pattern)
Major Steps to Mine FP-tree
 Step 1: Construct conditional pattern base for each node in the FP-tree
 Step 2: Construct conditional FP-tree from each conditional pattern base
 Step 3: Recursively mine conditional FP-trees and grow frequent patterns obtained so far
o If the conditional FP-tree contains a single path, simply enumerate all the patterns
Step 1: From FP-tree to Conditional Pattern Base
 Start at the frequent-item header table of the FP-tree
 Traverse the FP-tree by following the node-links of each frequent item
 Accumulate all transformed prefix paths of that item to form its conditional pattern base

Conditional pattern bases:
item | cond. pattern base
c | f:3
a | fc:3
b | fca:1, f:1, c:1
m | fca:2, fcab:1
p | fcam:2, cb:1
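Because a conditional pattern base is just the multiset of prefix paths preceding an item, it can also be computed directly from the ordered transactions (column 3 of the construction table), which aggregate to the same counts as the tree paths. A sketch (names are mine):

```python
from collections import Counter

def conditional_pattern_base(ordered_transactions, item):
    """Prefix paths preceding `item`, with multiplicities.

    ordered_transactions: lists of frequent items already sorted in
    frequency-descending order. Returns Counter: prefix tuple -> count.
    """
    base = Counter()
    for t in ordered_transactions:
        if item in t:
            prefix = tuple(t[: t.index(item)])
            if prefix:  # items with no prefix contribute nothing
                base[prefix] += 1
    return base
```

On the running example this reproduces the table above, e.g. m's base {fca:2, fcab:1} and p's base {fcam:2, cb:1}.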
Properties of FP-tree for Conditional Pattern Base Construction
 Node-link property
o For any frequent item ai, all the possible frequent patterns that contain ai can be obtained by following ai's node-links, starting from ai's head in the FP-tree header
 Prefix path property
o To calculate the frequent patterns for a node ai in a path P, only the prefix sub-path of ai in P needs to be accumulated, and its frequency count should carry the same count as node ai
Step 2: Construct Conditional FP-tree (threshold is 2)
 For each pattern base:
 Accumulate the count for each item in the base
 Construct the FP-tree for the frequent items of the pattern base

Example: m-conditional pattern base: fca:2, fcab:1
(b drops out: its accumulated count is 1, below the threshold of 2)

m-conditional FP-tree:
{} → f:3 → c:3 → a:3

All frequent patterns concerning m:
m, fm, cm, am, fcm, fam, cam, fcam
Mining Frequent Patterns by Creating Conditional Pattern-Bases

Item | Conditional pattern-base | Conditional FP-tree
p | {(fcam:2), (cb:1)} | {(c:3)}|p
m | {(fca:2), (fcab:1)} | {(f:3, c:3, a:3)}|m
b | {(fca:1), (f:1), (c:1)} | Empty
a | {(fc:3)} | {(f:3, c:3)}|a
c | {(f:3)} | {(f:3)}|c
f | Empty | Empty
Step 3: Recursively mine the conditional FP-tree

m-conditional FP-tree: {} → f:3 → c:3 → a:3
 Cond. pattern base of “am”: (fc:3) → am-conditional FP-tree: {} → f:3 → c:3
 Cond. pattern base of “cm”: (f:3) → cm-conditional FP-tree: {} → f:3
 Cond. pattern base of “cam”: (f:3) → cam-conditional FP-tree: {} → f:3
Single FP-tree Path Generation
 Suppose an FP-tree T has a single path P
 The complete set of frequent patterns of T can be generated by enumerating all the combinations of the sub-paths of P

Example: the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3
All frequent patterns concerning m:
m, fm, cm, am, fcm, fam, cam, fcam
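Enumerating the combinations of a single path can be sketched as follows (names are mine); each pattern's support is the count of its deepest path item, since counts can only decrease going down a path:

```python
from itertools import combinations

def single_path_patterns(path, suffix, suffix_count):
    """All patterns from a single-path conditional FP-tree.

    path: list of (item, count) from root downward; suffix: the item
    whose conditional tree this is, with its own count.
    Returns {pattern tuple: support}.
    """
    patterns = {(suffix,): suffix_count}
    items = [i for i, _ in path]
    counts = dict(path)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            # support = count of the deepest (last) item in the combo
            patterns[combo + (suffix,)] = counts[combo[-1]]
    return patterns
```

On the m-conditional path f:3 → c:3 → a:3 this yields exactly the eight patterns listed above, each with support 3.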
Principles of Frequent Pattern Growth
 Pattern growth property
o Let α be a frequent itemset in DB, B be α's conditional pattern base, and β be an itemset in B. Then α ∪ β is a frequent itemset in DB iff β is frequent in B.
 “abcdef” is a frequent pattern, if and only if
o “abcde” is a frequent pattern, and
o “f” is frequent in the set of transactions containing abcde
Why Is Frequent Pattern Growth Method Fast?
 Performance study shows
o FP-growth is an order of magnitude faster than Apriori, and is
also faster than tree-projection
 Reasoning
o No candidate generation, no candidate test
o Use compact data structure
o Eliminate repeated database scan
o Basic operation is counting and FP-tree building
FP-growth vs. Apriori: Scalability With the Support Threshold
(Figure: run time (sec.) versus support threshold (%), from 0 to 3%, on data set T25I20D10K, comparing D1 FP-growth runtime and D1 Apriori runtime; y-axis 0 to 100 sec.)
Presentation of Association Rules (Table Form)
Visualization of Association Rules Using a Rule Graph
6.2.5 Iceberg Queries
 Iceberg query: compute aggregates over one or a set of attributes, only for those whose aggregate values are above a certain threshold
 Example:
select P.custID, P.itemID, sum(P.qty)
from purchase P
group by P.custID, P.itemID
having sum(P.qty) >= 10
 Compute iceberg queries efficiently using an Apriori-like strategy:
o First compute lower dimensions (e.g., P.custID by itself)
o Then compute higher dimensions only when all the lower ones are above the threshold
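The two-phase strategy can be sketched for the example query (function names and the sample data are mine): a (cust, item) pair can reach the threshold only if both of its one-dimensional totals do, so those pairs are pruned before the expensive pair-level aggregation.

```python
from collections import Counter

def iceberg(rows, threshold):
    """Apriori-style evaluation of the iceberg query above.

    rows: (custID, itemID, qty) tuples.
    Returns {(cust, item): total} with total >= threshold.
    """
    # Phase 1: lower-dimensional aggregates
    cust_tot, item_tot = Counter(), Counter()
    for cust, item, qty in rows:
        cust_tot[cust] += qty
        item_tot[item] += qty
    # Phase 2: aggregate only pairs whose 1-D totals both pass
    pair_tot = Counter()
    for cust, item, qty in rows:
        if cust_tot[cust] >= threshold and item_tot[item] >= threshold:
            pair_tot[(cust, item)] += qty
    return {k: v for k, v in pair_tot.items() if v >= threshold}
```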
6.3 Mining Multi-Level Association Rules from Transaction Databases
6.3.1 Multiple-Level Association Rules – what are they
Multiple-Level Association Rules
 Items often form a hierarchy, e.g.:
Food
├─ milk
│  ├─ 2%
│  └─ skim
└─ bread
   ├─ wheat
   └─ white
(with brands such as Fraser and Sunset at the lowest level)
 Items at the lower levels are expected to have lower support.
 Rules regarding itemsets at appropriate levels could be quite useful.
 The transaction database can be encoded based on dimensions and levels:
TID | Items
T1 | {111, 121, 211, 221}
T2 | {111, 211, 222, 323}
T3 | {112, 122, 221, 411}
T4 | {111, 121}
T5 | {111, 122, 211, 221, 413}
 We can explore shared multi-level mining
Mining the rules
 A top-down, progressive deepening approach:
o First find high-level strong rules:
 milk ⇒ bread [20%, 60%]
o Then find their lower-level “weaker” rules:
 2% milk ⇒ wheat bread [6%, 50%]
 Variations on mining multiple-level association rules:
o Level-crossed association rules:
 2% milk ⇒ Wonder wheat bread
o Association rules with multiple, alternative hierarchies:
 2% milk ⇒ Wonder bread
6.3.2 Approaches
 Uniform Support: the same minimum support for all levels

Multi-level mining with uniform support:
Level 1 (min_sup = 5%): Milk [support = 10%]
Level 2 (min_sup = 5%): 2% Milk [support = 6%], Skim Milk [support = 4%]
o + One minimum support threshold. No need to examine itemsets containing any item whose ancestors do not have minimum support.
o – Lower-level items do not occur as frequently. If the support threshold is
 too high ⇒ miss low-level associations
 too low ⇒ generate too many high-level associations
 Reduced Support: reduced minimum support at lower levels (e.g., Figure 6.13)
o There are 3 search strategies:
 Level-by-level independent
 Full breadth-first search
 Each node is examined regardless of whether its parent node is found to be frequent or not
 Level-cross filtering by single item
 An item at level i is examined if and only if its parent at level (i−1) is frequent
 Children of infrequent nodes are pruned (Figure 6.14)
 Level-cross filtering by k-itemset
 Same as the previous one, except it is applied to k-itemsets (Figure 6.15)
6.3.3 Multi-level Association: Redundancy Filtering
 Some rules may be redundant due to “ancestor” relationships between items.
 Example:
o milk ⇒ wheat bread [support = 8%, confidence = 70%]
o 2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
 We say the first rule is an ancestor of the second rule.
 A rule is redundant if its support is close to the “expected” value based on the rule's ancestor, and its confidence is similar to that of the ancestor rule.
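A minimal sketch of this redundancy test (the tolerance, the sharing model, and the 25% market-share figure are my assumptions, not from the text):

```python
def is_redundant(rule, ancestor, item_share, tol=0.05):
    """Check a rule against its ancestor, in the spirit of the rule above.

    rule, ancestor: (support, confidence) pairs. item_share: fraction of
    the ancestor's antecedent accounted for by the specialized item
    (e.g., 2% milk as a share of all milk sold).
    Redundant if support is close to item_share * ancestor support
    and confidence is close to the ancestor's.
    """
    sup, conf = rule
    anc_sup, anc_conf = ancestor
    expected_sup = anc_sup * item_share
    return (abs(sup - expected_sup) <= tol * anc_sup
            and abs(conf - anc_conf) <= tol)
```

With the example rules, if 2% milk is assumed to be 25% of milk sold, the expected support of the second rule is 8% * 0.25 = 2%, exactly what it has, so it carries no new information.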
6.4 Mining Multidimensional Association Rules in Large Databases
Multidimensional Rules – Some basics
 Single-dimensional rules:
o buys(X, “milk”) ⇒ buys(X, “bread”)
 Multi-dimensional rules:
o Inter-dimension association rules (no repeated predicates)
 age(X, ”19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
o Hybrid-dimension association rules (repeated predicates)
 age(X, ”19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
 Categorical Attributes
o finite number of possible values, no ordering among values
 Quantitative Attributes
o numeric, implicit ordering among values
Techniques for Mining MD Associations
 Search for frequent k-predicate set:
o Example: {age, occupation, buys} is a 3-predicate set.
o Techniques can be categorized by how age is treated.
Three approaches for dealing with numeric data:
1. Using static discretization of quantitative attributes
Quantitative attributes are statically discretized by using predefined
concept hierarchies.
2. Quantitative association rules
Quantitative attributes are dynamically discretized into “bins” based
on the distribution of the data in order to satisfy some mining criteria
such as maximizing the confidence of the rules mined
3. Distance-based association rules
This is a dynamic discretization process that considers the distance
between data points.
6.4.2 Static Discretization of Quantitative Attributes
 Discretized prior to mining using concept hierarchy.
 Numeric values are replaced by ranges.
 In relational database, finding all frequent k-predicate sets will require
k or k+1 table scans. (Apriori)
 Data cube is well suited for mining.
o The cells of an n-dimensional cuboid correspond to the n
predicate sets. (Figure 6-17)
 Mining from data cubes can be much faster.
6.4.3 Mining Quantitative Association Rules
Basic concepts
 Numeric attributes are dynamically discretized
o So that the confidence or compactness of the rules mined is
maximized.
 2-D quantitative association rules: Aquan1 ∧ Aquan2 ⇒ Acat
 Example:
age(X, ”30-34”) ∧ income(X, ”24K - 48K”) ⇒ buys(X, ”HDTV”)
How? ARCS (Association Rule Clustering System)
Main steps:
1. Create a 2-D grid using the two quantitative attributes. Each pair of quantitative values that satisfies the categorical condition (e.g., buys TV) is represented as a point on the grid (actually a count, since multiple points can fall on the same spot). Use binning (e.g., equi-width binning as in ARCS, Figure 6.17) to partition the values into intervals.
2. Search the grid to find counts that exceed min_sup, from which strong association rules are generated.
3. Cluster the association rules: combine similar ones, e.g.,
age(34) ∧ income(“31..40k”) ⇒ buys(X, “HDTV”)
age(35) ∧ income(“31..40k”) ⇒ buys(X, “HDTV”)
age(34) ∧ income(“41..50k”) ⇒ buys(X, “HDTV”)
age(35) ∧ income(“41..50k”) ⇒ buys(X, “HDTV”)
Replace with a clustered rule:
age(34..35) ∧ income(“31..50k”) ⇒ buys(X, “HDTV”)
Limitations of ARCS
 Only quantitative attributes on LHS of rules.
 Only 2 attributes on LHS. (2D limitation)
 An alternative to ARCS - “Mining Quantitative Association Rules in
Large Relational Tables” by R. Srikant and R. Agrawal.
(See “mining quan asso rules.pdf”, presentation on Wednesday 7/21)
o Non-grid-based
o equi-depth binning
o clustering based on a measure of partial completeness.
6.4.4 Mining Distance-based Association Rules (details in chapter 8)
Class exercise: place the following data in bins
1. data: 1 2 3 4 5 22 50 51 90
Method: Equiwidth (width = 50)
Bin 1 (1..50):
1 2 3 4 5 22 50
Bin 2 (51..100): 51 90
2. data: 11 19 21 39 41 99 101 999
Method: equi-depth (depth = 2)
Bin 1 (11..20): 11 19
Bin 2 (21..40): 21 39
Bin 3 (41..100): 41 99
Bin 4 (100..1000): 101 999
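Both binning methods from the exercise can be sketched as follows (function names are mine; the equi-width bin boundaries are chosen to match the 1..50 / 51..100 ranges above):

```python
def equiwidth_bins(data, width):
    """Partition values into fixed-width bins (exercise 1).

    Bin i covers [i*width + 1, (i+1)*width]. Returns {bin_index: values}.
    """
    bins = {}
    for v in sorted(data):
        i = (v - 1) // width
        bins.setdefault(i, []).append(v)
    return bins

def equidepth_bins(data, depth):
    """Partition sorted values into bins of `depth` values each (exercise 2)."""
    s = sorted(data)
    return [s[i:i + depth] for i in range(0, len(s), depth)]
```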
Mining Distance-based Association Rules
 Binning methods do not capture the semantics of interval data:

Price($) | Equi-width (width $10) | Equi-depth (depth 2) | Distance-based
7 | [0,10] | [7,20] | [7,7]
20 | [11,20] | [22,50] | [20,22]
22 | [21,30] | [51,53] | [50,53]
50 | [31,40] | |
51 | [41,50] | |
53 | [51,60] | |

 Distance-based partitioning gives a more meaningful discretization, considering:
o density/number of points in an interval
o “closeness” of points in an interval
Clusters and Distance Measurements
 Bins are created based on the density and closeness of the points
 Use a diameter measure to assess the closeness of the N tuples
 S[X] is a set of N tuples t1, t2, …, tN, projected on the attribute set X
The diameter of S[X]:

d(S[X]) = ( Σ_{i=1..N} Σ_{j=1..N} dist_X(ti[X], tj[X]) ) / ( N(N−1) )

o dist_X: a distance metric, e.g., Euclidean or Manhattan distance
 The diameter d assesses the density of a cluster C_X, where
d(C_X) ≤ d_0^X (the diameter must not exceed a threshold)
|C_X| ≥ s_0 (the cluster must contain at least s_0 tuples)
 Finding clusters and distance-based rules:
o the density threshold, d_0, replaces the notion of support
o a modified version of the BIRCH clustering algorithm is used
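The diameter formula can be computed directly. A sketch using Manhattan distance on the one-dimensional price values from the earlier table (function names are mine):

```python
def diameter(tuples, dist):
    """Average pairwise distance d(S[X]) as defined above.

    Sums dist over all ordered pairs (the i = j terms contribute 0)
    and divides by N(N-1).
    """
    n = len(tuples)
    total = sum(dist(a, b) for a in tuples for b in tuples)
    return total / (n * (n - 1))

# 1-D Manhattan distance on plain numbers
manhattan = lambda a, b: abs(a - b)
```

For the distance-based interval [50,53] = {50, 53} the diameter is 3; for {7, 20, 22} it is 10, reflecting the looser spread.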
6.5 From Association Mining to Correlation Analysis
6.5.1 Are all strong rules interesting?
Example: at AllElectronics
10,000 transactions; 6,000 include computer games, 7,500 include video games, 4,000 include both.
With min_sup = 40%, min_conf = 60%, what strong rules can we generate?
Buys(PC game) ⇒ Buys(Video game): conf = 4000/6000 = 67% (strong)
Buys(Video game) ⇒ Buys(PC game): conf = 4000/7500 = 53% < 60%
A better rule:
True ⇒ Buys(video game) 75%
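The lift measure introduced below makes the problem visible on these numbers (function name is mine): the rule's 67% confidence is below the 75% base rate, so the lift is under 1 and buying a PC game actually makes buying a video game slightly less likely.

```python
def confidence_and_lift(n, n_a, n_b, n_ab):
    """Confidence and lift of A => B from transaction counts.

    n: total transactions; n_a, n_b: transactions containing A, B;
    n_ab: transactions containing both.
    lift = P(A and B) / (P(A) * P(B)).
    """
    conf = n_ab / n_a
    lift = (n_ab / n) / ((n_a / n) * (n_b / n))
    return conf, lift

# AllElectronics numbers
conf, lift = confidence_and_lift(10000, 6000, 7500, 4000)
```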
6.5.2 From association analysis to Correlation Analysis
Two objective interestingness measures:
o support; and
o confidence
Subjective Interestingness measures (Silberschatz & Tuzhilin, KDD95)
 A rule (pattern) is interesting if
o it is unexpected (surprising to the user); and/or
o actionable (the user can do something with it)
Criticism of Support and Confidence – Not always interesting
More examples
 Example 1: (Aggarwal & Yu, PODS98)
 Among 5000 students
 3000 play basketball
 3750 eat cereal
 2000 both play basketball and eat cereal
 play basketball ⇒ eat cereal [40%, 66.7%] is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%
 play basketball ⇒ not eat cereal [20%, 33.3%] is far more accurate, although with lower support and confidence

           | basketball | not basketball | sum(row)
cereal     | 2000       | 1750           | 3750
not cereal | 1000       | 250            | 1250
sum(col.)  | 3000       | 2000           | 5000
Criticism of Support and Confidence (Cont.)
 Example 2:
 X and Y: positively correlated
 X and Z: negatively related
 yet the support and confidence of X ⇒ Z dominate
 We need a measure of dependent or correlated events:

corr(A, B) = P(A ∪ B) / (P(A) P(B))

X 1 1 1 1 0 0 0 0
Y 1 1 0 0 0 0 0 0
Z 0 1 1 1 1 1 1 1

Rule  | Support | Confidence
X ⇒ Y | 25%     | 50%
X ⇒ Z | 37.50%  | 75%

P(B|A)/P(B) is also called the lift of the rule A ⇒ B
Other Interestingness Measures: Interest
 Interest (correlation, lift): P(A ∪ B) / (P(A) P(B))
 takes both P(A) and P(B) into consideration
 P(A ∪ B) = P(A) P(B) if A and B are independent events
 A and B are negatively correlated if the value is less than 1; otherwise A and B are positively correlated

X 1 1 1 1 0 0 0 0
Y 1 1 0 0 0 0 0 0
Z 0 1 1 1 1 1 1 1

Itemset | Support | Interest
X,Y | 25%    | 2
X,Z | 37.50% | 0.9
Y,Z | 12.50% | 0.57
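The interest values in the table can be reproduced from the indicator rows (function name is mine; note the exact X,Z value is 0.857, which the table rounds to 0.9):

```python
def interest(rows_a, rows_b):
    """Interest (lift) of two items given 0/1 indicator vectors.

    rows_a, rows_b: equal-length lists marking which transactions
    contain each item. interest = P(A and B) / (P(A) * P(B)).
    """
    n = len(rows_a)
    p_a = sum(rows_a) / n
    p_b = sum(rows_b) / n
    p_ab = sum(a & b for a, b in zip(rows_a, rows_b)) / n
    return p_ab / (p_a * p_b)

# The indicator rows from the table
X = [1, 1, 1, 1, 0, 0, 0, 0]
Y = [1, 1, 0, 0, 0, 0, 0, 0]
Z = [0, 1, 1, 1, 1, 1, 1, 1]
```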
6.6 Constraint-Based Mining
Interactive, exploratory mining of gigabytes of data: could it be real?
 Making good use of constraints!
What kinds of constraints can be used in mining?
 Knowledge type constraint: classification, association, etc.
 Data constraint: SQL-like queries
o Find product pairs sold together in Vancouver in Dec.’98
 Dimension/level constraints:
o Specify the dimension of the data or levels of the concept
hierarchy
 Interestingness constraints:
o strong rules (min_support ≥ 3%, min_confidence ≥ 60%)
 Rule constraints
o small sales (price < $10) triggers big sales (sum > $200).
6.6.1 Rule Constraints in Association Mining
 Two kinds of rule constraints:
o Rule form constraints: meta-rule guided mining
 Ex.: P(x, y) ∧ Q(x, w) ⇒ Buys(x, “HDTV”)
 A rule that complies with this meta-rule:
Age(X, “30-39”) ∧ income(X, “41k…50k”) ⇒ Buys(X, “HDTV”)
 See also quiz7
o Rule (content) constraints: constraint-based query optimization (Ng, et al., SIGMOD’98)
 sum(LHS) < 100 ∧ min(LHS) > 20 ∧ count(LHS) > 3 ∧ sum(RHS) > 1000
 1-variable vs. 2-variable constraints (Lakshmanan, et al., SIGMOD’99):
o 1-var: a constraint confining only one side (L/R) of the rule, e.g., as shown above
o 2-var: a constraint confining both sides (L and R)
Ex.: sum(LHS) < min(RHS) ∧ max(RHS) < 5 * sum(LHS)
Five categories of rule constraints:
1. Anti-monotone: a constraint Ca is anti-monotone iff, for any pattern S not satisfying Ca, none of the super-patterns of S can satisfy Ca
Ex.: sum(I, price) < 100
2. Monotone: a constraint Cm is monotone iff, for any pattern S satisfying Cm, every super-pattern of S also satisfies it
Ex.: sum(I, price) > 100
The Apriori principle in terms of frequent itemsets states: if a (k−1)-itemset is not frequent, then none of its k-item supersets is frequent.
Is it 1 or 2? 1 (anti-monotone)
3. Succinct constraint:
a. A subset of items Is is a succinct set if it can be expressed as σp(I) for some selection predicate p, where σ is a selection operator
b. I.e., the constraint is specified on the items themselves, not on an aggregation (no counting necessary)
c. Ex.: min(J, price) > 500
4. Convertible constraints: none of the three; however, the constraint can become one of the three if the items in the itemset are arranged in a certain order
Ex.: avg(I, price) <= 100 is anti-monotone if items are added in price-ascending order
5. Inconvertible
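How an anti-monotone constraint prunes the search, in the style of category 1, can be sketched as follows (the item names and prices are made-up illustrations, not from the text): once an itemset violates sum(price) < 100, none of its supersets is ever generated, exactly as with minimum support.

```python
def itemsets_under_antimonotone(items, price, constraint):
    """Level-wise search pruned by an anti-monotone constraint.

    Keeps only itemsets satisfying `constraint`; since the constraint is
    anti-monotone, violating sets can never be extended into valid ones.
    """
    level = [frozenset([i]) for i in items if constraint({i}, price)]
    out = list(level)
    while level:
        nxt = set()
        for a in level:
            for b in level:
                u = a | b
                # grow by one item; violating supersets are never kept
                if len(u) == len(a) + 1 and constraint(u, price):
                    nxt.add(u)
        level = list(nxt)
        out.extend(level)
    return out

price = {"a": 30, "b": 40, "c": 60}   # assumed prices
sum_lt_100 = lambda s, p: sum(p[i] for i in s) < 100
res = itemsets_under_antimonotone("abc", price, sum_lt_100)
```

Here {b, c} (price 100) fails the constraint, so {a, b, c} is never even considered.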
Relationships Among Categories of Constraints
(Figure: Venn diagram relating succinctness, anti-monotonicity, monotonicity, convertible constraints, and inconvertible constraints)