Notes 4


CPS 196.03: Information Management and Mining

Association Rules and Frequent Itemsets

Apriori Algorithm

 Method (a minimal Python sketch follows this list):

– Let k=1

– Generate frequent itemsets of length 1

– Repeat until no new frequent itemsets are identified

 Generate length (k+1) candidate itemsets from length k frequent itemsets

 Prune candidate itemsets containing subsets of length k that are infrequent

 Count the support of each candidate by scanning the DB

 Eliminate candidates that are infrequent, leaving only those that are frequent
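The loop above can be made concrete with a minimal Python sketch (my construction, not course code): transactions are represented as sets of items, minsup is an absolute support count, candidates are grown one item at a time and pruned before the counting scan.

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Frequent itemsets (as frozensets) with support >= minsup.
    transactions: list of sets of items; minsup: absolute count."""
    # Pass 1: frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup}
    result = dict(frequent)
    k = 1
    while frequent:
        # Generate length-(k+1) candidates from length-k frequent itemsets.
        items = sorted({i for s in frequent for i in s})
        candidates = set()
        for s in frequent:
            for i in items:
                if i not in s:
                    cand = frozenset(s | {i})
                    # Prune: every length-k subset must itself be frequent.
                    if all(frozenset(sub) in frequent
                           for sub in combinations(cand, k)):
                        candidates.add(cand)
        # Count support of surviving candidates in one scan of the DB.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {s: c for s, c in counts.items() if c >= minsup}
        result.update(frequent)
        k += 1
    return result
```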

Generating Candidate ItemSets

 Exhaustive enumeration

 F(k-1) x F(1), using (lexicographic) ordering

 F(k-1) x F(k-1) (a merge sketch follows below)

 Will hashing help?
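A hedged sketch of the F(k-1) x F(k-1) merge (helper name mine): itemsets are kept as sorted tuples, and two frequent (k-1)-itemsets merge only when they agree on their first k-2 items, which avoids generating the same candidate twice.

```python
def merge_candidates(frequent_k_minus_1):
    """F(k-1) x F(k-1) sketch: itemsets are sorted tuples; two frequent
    (k-1)-itemsets merge only if they share their first k-2 items."""
    fs = sorted(frequent_k_minus_1)       # lexicographic order
    candidates = set()
    for i in range(len(fs)):
        for j in range(i + 1, len(fs)):
            a, b = fs[i], fs[j]
            if a[:-1] != b[:-1]:          # prefixes diverge: stop early
                break
            candidates.add(a[:-1] + tuple(sorted((a[-1], b[-1]))))
    return candidates

# Example: (1, 2) and (1, 3) merge into (1, 2, 3);
# (1, 2) and (2, 3) do not merge since their prefixes differ.
```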

Reducing Number of Comparisons

 Candidate counting:

– Scan the database of transactions to determine the support of each candidate itemset

– To reduce the number of comparisons, store the candidates in a hash structure

 Instead of matching each transaction against every candidate, match it against candidates contained in the hashed buckets

Hash Structure and Transactions

[Figure: N transactions are matched against candidate itemsets stored in the buckets of a hash structure]

TID | Items
1 | Bread, Milk
2 | Bread, Diaper, Beer, Eggs
3 | Milk, Diaper, Beer, Coke
4 | Bread, Milk, Diaper, Beer
5 | Bread, Milk, Diaper, Coke

Generate Hash Tree

Suppose you have 15 candidate itemsets of length 3:

{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5},

{3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:

• Hash function: e.g., items 1, 4, 7 hash to one branch; 2, 5, 8 to a second; 3, 6, 9 to a third

• Max leaf size: max number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node); a construction sketch follows
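A possible construction sketch under the slide's assumptions: the hash function (item - 1) % 3 reproduces the 1,4,7 / 2,5,8 / 3,6,9 branching, and MAX_LEAF = 3 is assumed for illustration.

```python
MAX_LEAF = 3  # assumed max leaf size for this example

def branch(item):
    """The slide's hash function: 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2."""
    return (item - 1) % 3

class Node:
    def __init__(self):
        self.children = {}   # branch index -> Node; non-empty => interior
        self.itemsets = []   # candidate itemsets stored at a leaf

def insert(node, itemset, depth=0):
    if node.children:                       # interior: hash the next item
        child = node.children.setdefault(branch(itemset[depth]), Node())
        insert(child, itemset, depth + 1)
        return
    node.itemsets.append(itemset)
    if len(node.itemsets) > MAX_LEAF and depth < len(itemset):
        # Split the overflowing leaf: redistribute one level deeper.
        pending, node.itemsets = node.itemsets, []
        for s in pending:
            child = node.children.setdefault(branch(s[depth]), Node())
            insert(child, s, depth + 1)

root = Node()
for cand in [(1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8),
             (1, 5, 9), (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5),
             (3, 5, 6), (3, 5, 7), (6, 8, 9), (3, 6, 7), (3, 6, 8)]:
    insert(root, cand)
```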

Association Rule Discovery: Hash Tree

[Figure: the 15 candidate 3-itemsets stored in a hash tree using the hash function above. Hashing on 1, 4, or 7 at the root selects the first subtree, which holds the candidates whose first item is 1 or 4: {1 4 5}, {1 2 4}, {1 2 5}, {1 5 9}, {1 3 6}, {4 5 7}, {4 5 8}]

Association Rule Discovery: Hash Tree

[Figure: the same hash tree; hashing on 2, 5, or 8 at the root selects the second subtree, which holds {2 3 4} and {5 6 7}]

Association Rule Discovery: Hash Tree

[Figure: the same hash tree; hashing on 3, 6, or 9 at the root selects the third subtree, which holds {3 4 5}, {3 5 6}, {3 5 7}, {3 6 7}, {3 6 8}, {6 8 9}]

Subset Operation

Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3?

[Figure: level-wise enumeration. Level 1 fixes the first item (1, 2, or 3), level 2 fixes the second, and level 3 yields the ten subsets of 3 items: {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}, {2 3 6}, {2 5 6}, {3 5 6}]
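The same enumeration in Python, using the standard library's itertools (illustration only):

```python
from itertools import combinations

t = [1, 2, 3, 5, 6]
print(list(combinations(t, 3)))
# [(1, 2, 3), (1, 2, 5), (1, 2, 6), (1, 3, 5), (1, 3, 6),
#  (1, 5, 6), (2, 3, 5), (2, 3, 6), (2, 5, 6), (3, 5, 6)]
```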

Subset Operation Using Hash Tree

Transaction: 1 2 3 5 6

[Figure: at the root of the hash tree, the transaction is split into 1 + 2 3 5 6, 2 + 3 5 6, and 3 + 5 6; each leading item is hashed (1,4,7 / 2,5,8 / 3,6,9) to choose the subtree to descend into]

Subset Operation Using Hash Tree

Transaction: 1 2 3 5 6

[Figure: one level down, the branch 1 + 2 3 5 6 splits further into 1 2 + 3 5 6, 1 3 + 5 6, and 1 5 + 6, again hashing on the newly fixed item]

Subset Operation Using Hash Tree

Transaction: 1 2 3 5 6

[Figure: the recursion (1 + 2 3 5 6; 1 2 + 3 5 6, 1 3 + 5 6, 1 5 + 6; 2 + 3 5 6; 3 + 5 6) continues until leaves are reached; the transaction's 3-subsets are compared only against the candidates stored in the visited leaves]

Match transaction against 11 out of 15 candidates

Factors Affecting Performance

Choice of minimum support threshold

– lowering support threshold results in more frequent itemsets

– this may increase number of candidates and max length of frequent itemsets

Dimensionality (number of items) of the data set

– more space is needed to store support count of each item

– if the number of frequent items also increases, both computation and I/O costs may also increase

Size of database

– since Apriori makes multiple passes, run time of algorithm may increase with number of transactions

Average transaction width

– number of subsets in a transaction increases with its width

– this may increase max length of frequent itemsets and traversals of hash tree

Compact Representation of Frequent Itemsets

 Some itemsets are redundant because they have identical support to their supersets

Example: 15 transactions over 30 items A1-A10, B1-B10, C1-C10:

TID 1-5: items A1, ..., A10 (and nothing else)
TID 6-10: items B1, ..., B10
TID 11-15: items C1, ..., C10

Number of frequent itemsets = 3 × Σ_{k=1}^{10} C(10, k) = 3 × (2^10 − 1)

Need a compact representation

Compression of Itemset Information

TID | Items
1 | ABC
2 | ABCD
3 | BCE
4 | ACDE
5 | DE

[Figure: the itemset lattice from null down to ABCDE, with each node annotated by the IDs of the transactions that contain it, e.g., A → 1,2,4; B → 1,2,3; C → 1,2,3,4; D → 2,4,5; E → 3,4,5; AB → 1,2; ABC → 1,2; ABCD → 2. Itemsets such as ABE, BDE, ABCE, ABDE, BCDE, and ABCDE are not supported by any transaction]

Maximal Frequent Itemset

An itemset is maximal frequent if none of its immediate supersets is frequent.

[Figure: the itemset lattice with the border separating frequent from infrequent itemsets; the maximal frequent itemsets are the frequent itemsets immediately inside the border, while infrequent itemsets such as ABCD, ABCE, ABDE, ACDE, BCDE, and ABCDE lie outside it]

Closed Itemset

 An itemset is closed if none of its immediate supersets has the same support as the itemset

TID | Items
1 | {A,B}
2 | {B,C,D}
3 | {A,B,C,D}
4 | {A,B,D}
5 | {A,B,C,D}

Itemset | Support
{A} | 4
{B} | 5
{C} | 3
{D} | 4
{A,B} | 4
{A,C} | 2
{A,D} | 3
{B,C} | 3
{B,D} | 4
{C,D} | 3

Itemset | Support
{A,B,C} | 2
{A,B,D} | 3
{A,C,D} | 2
{B,C,D} | 3
{A,B,C,D} | 2

Maximal vs Closed Itemsets

TID | Items
1 | ABC
2 | ABCD
3 | BCE
4 | ACDE
5 | DE

[Figure: the transaction-ID-annotated itemset lattice from the earlier "Compression of Itemset Information" slide, repeated here to contrast maximal and closed itemsets]

Maximal vs Closed Frequent Itemsets

Minimum support = 2

[Figure: the annotated lattice with minimum support = 2; each frequent itemset is marked either closed but not maximal or closed and maximal]

# Closed = 9

# Maximal = 4

Maximal vs Closed Itemsets

[Figure: nested sets: Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets]

Why Do We Care About Closed Itemsets?

 Compact representation of frequent itemsets

– Helps if there are many frequent itemsets

– There are algorithms that efficiently find the closed itemsets (and only them)

 HW Exercise: come up with an algorithm that generates all information about the frequent itemsets given only the closed frequent itemsets (one possible approach is sketched below)

 Closed itemsets help identify redundant association rules

– E.g., if {b} and {b,c} have the same support: then would you care about {b} -> {d} or {b,c} -> {d}?
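As a hint toward the exercise above, a minimal sketch (my construction, not the course's solution): the support of any frequent itemset equals the largest support among its closed supersets, because every transaction that contains the itemset also contains its closure.

```python
def support_from_closed(itemset, closed):
    """Support of any frequent itemset = the largest support among its
    closed supersets; None means the itemset is not frequent.
    closed: dict mapping frozenset -> support."""
    sups = [sup for c, sup in closed.items() if itemset <= c]
    return max(sups) if sups else None

# Closed frequent itemsets of the dataset from the Closed Itemset slide:
closed = {frozenset('B'): 5, frozenset('AB'): 4, frozenset('BD'): 4,
          frozenset('ABD'): 3, frozenset('BCD'): 3, frozenset('ABCD'): 2}
print(support_from_closed(frozenset('AD'), closed))  # 3, via {A,B,D}
```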

Toivonen’s Algorithm --- (1)

 Start as in the simple sampling algorithm, but lower the threshold slightly for the sample.

– Example: if the sample is 1% of the baskets, use s/125 as the support threshold rather than s/100.

– Goal is to avoid missing any itemset that is frequent in the full set of baskets.

Toivonen’s Algorithm --- (2)

 Add to the itemsets that are frequent in the sample the negative border of these itemsets.

 An itemset is in the negative border if it is not deemed frequent in the sample, but all its immediate subsets are.

Example

 ABCD is in the negative border if and only if it is not frequent, but all of ABC, BCD, ACD, and ABD are.
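A small illustrative sketch (hypothetical helper; it enumerates the whole item universe exhaustively, so it is only suitable for tiny examples). The empty itemset is treated as trivially frequent so that infrequent single items land in the border.

```python
from itertools import combinations

def negative_border(frequent, items):
    """Itemsets not frequent in the sample whose immediate subsets all
    are. frequent: set of frozensets; items: the item universe."""
    freq = set(frequent) | {frozenset()}   # empty set: trivially frequent
    maxk = max((len(s) for s in frequent), default=0)
    border = set()
    for k in range(1, maxk + 2):   # border itemsets have <= maxk+1 items
        for cand in map(frozenset, combinations(items, k)):
            if cand not in freq and all(cand - {i} in freq for i in cand):
                border.add(cand)
    return border

freq = {frozenset(s) for s in ('A', 'B', 'C', 'AB', 'AC', 'BC')}
print(negative_border(freq, 'ABCD'))  # {D} and {A,B,C}
```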

Toivonen’s Algorithm --- (3)

 In a second pass, count all candidate frequent itemsets from the first pass, and also count the negative border.

 If no itemset from the negative border turns out to be frequent, then the candidates found to be frequent in the whole data are exactly the frequent itemsets.

Toivonen’s Algorithm --- (4)

 What if we find something in the negative border is actually frequent?

 We must start over again!

 Try to choose the support threshold so the probability of failure is low, while the number of itemsets checked on the second pass still fits in main memory.

Alternative Methods for Frequent Itemset Generation

 Traversal of Itemset Lattice

– General-to-specific vs Specific-to-general

[Figure: three traversals of the itemset lattice from null to {a1, a2, ..., an}, each with its frequent itemset border: (a) general-to-specific, (b) specific-to-general, (c) bidirectional]

Alternative Methods for Frequent Itemset Generation

 Traversal of Itemset Lattice

– Equivalence classes

[Figure: the lattice over {A, B, C, D} organized as (a) a prefix tree, grouping itemsets by common prefix, and (b) a suffix tree, grouping them by common suffix]

Alternative Methods for Frequent Itemset Generation

 Traversal of Itemset Lattice

– Breadth-first vs Depth-first

[Figure: (a) breadth-first vs (b) depth-first traversal of the lattice]

Alternative Methods for Frequent Itemset Generation

 Representation of Database

– horizontal vs vertical data layout

Horizontal data layout:

TID | Items
1 | A,B,E
2 | B,C,D
3 | C,E
4 | A,C,D
5 | A,B,C,D
6 | A,E
7 | A,B
8 | A,B,C
9 | A,C,D
10 | B

Vertical data layout (one column of TIDs per item):

A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 5, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6

ECLAT

 For each item, store a list of transaction ids (tids)

The horizontal layout (TID | Items) is as on the previous slide; the vertical layout stores a TID-list per item:

A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 5, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6

ECLAT

 Determine support of any k-itemset by intersecting tid-lists of two of its (k-1) subsets.

Example: A → {1, 4, 5, 6, 7, 8, 9} and B → {1, 2, 5, 7, 8, 10}; intersecting the two TID-lists gives AB → {1, 5, 7, 8}, so support(AB) = 4.

 3 traversal approaches:

– top-down, bottom-up and hybrid

 Advantage: very fast support counting

 Disadvantage: intermediate tid-lists may become too large for memory
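A minimal sketch of the TID-list intersection (names assumed, not from the slides):

```python
def support_by_intersection(itemset, tidlists):
    """Tid-list of an itemset = intersection of its items' tid-lists;
    support = the size of that intersection."""
    items = list(itemset)
    tids = set(tidlists[items[0]])
    for item in items[1:]:
        tids &= tidlists[item]
    return tids, len(tids)

tidlists = {'A': {1, 4, 5, 6, 7, 8, 9}, 'B': {1, 2, 5, 7, 8, 10}}
print(support_by_intersection('AB', tidlists))  # ({1, 5, 7, 8}, 4)
```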

Bottleneck of Frequent-pattern Mining

 Multiple database scans are costly

 Mining long patterns needs many passes of scanning and generates lots of candidates

– To find the frequent itemset i1 i2 … i100:

 # of scans: 100

 # of candidates: C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27 × 10^30

 Bottleneck: candidate-generation-and-test

 Can we avoid candidate generation?

FP-growth Algorithm

 Use a compressed representation of the database using an FP-tree

 Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets

FP-tree construction

Transaction database:

TID | Items
1 | {A,B}
2 | {B,C,D}
3 | {A,C,D,E}
4 | {A,D,E}
5 | {A,B,C}
6 | {A,B,C,D}
7 | {B,C}
8 | {A,B,C}
9 | {A,B,D}
10 | {B,C,E}

After reading TID=1: null → A:1 → B:1

After reading TID=2: two branches under null: A:1 → B:1 and B:1 → C:1 → D:1

FP-Tree Construction

[Figure: the complete FP-tree for this database. The header table lists items A-E, each with a pointer into the tree; node-links chain all nodes carrying the same item, and these pointers are used to assist frequent itemset generation. The main path is null → A:7 → B:5 → C:3 → D:1, with side branches A → C:1 → D:1 → E:1, A → D:1 → E:1, and B:5 → D:1, plus a second branch null → B:3 → C:3 with children D:1 and E:1]

Construct FP-tree from Transaction Database

TID | Items bought | (ordered) frequent items
100 | {f, a, c, d, g, i, m, p} | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o} | {f, c, a, b, m}
300 | {b, f, h, j, o, w} | {f, b}
400 | {b, c, k, s, p} | {c, b, p}
500 | {a, f, c, e, l, p, m, n} | {f, c, a, m, p}

min_support = 3

1. Scan DB once, find frequent 1-itemsets (single item patterns)

2. Sort frequent items in descending frequency order into the f-list: F-list = f-c-a-b-m-p

3. Scan DB again, construct the FP-tree

Header table (item : frequency, each with a head pointer into the tree): f : 4, c : 4, a : 3, b : 3, m : 3, p : 3

[Figure: the resulting FP-tree. Main path {} → f:4 → c:3 → a:3 → m:2 → p:2, with side branches a → b:1 → m:1, f → b:1, and {} → c:1 → b:1 → p:1]
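A sketch of steps 1 and 2 (my helper, not course code); note that ties among equally frequent items may be ordered differently than the slide's f-list:

```python
from collections import Counter

def order_transactions(db, min_support):
    """Steps 1-2: one scan to count items, then filter each transaction
    to frequent items sorted in descending-frequency (f-list) order."""
    counts = Counter(item for t in db for item in t)
    flist = [i for i, c in counts.most_common() if c >= min_support]
    rank = {item: r for r, item in enumerate(flist)}
    ordered = [sorted((i for i in t if i in rank), key=rank.get)
               for t in db]
    return flist, ordered

db = [set('facdgimp'), set('abcflmo'), set('bfhjow'),
      set('bcksp'), set('afcelpmn')]
flist, ordered = order_transactions(db, 3)
# flist: f, c first (count 4), then a, b, m, p (count 3; tie order may vary)
# ordered[0] == ['f', 'c', 'a', 'm', 'p']
```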

Benefits of the FP-tree Structure

 Completeness

– Preserve complete information for frequent pattern mining

– Never break a long pattern of any transaction

 Compactness

– Reduces irrelevant information: infrequent items are gone

– Items in frequency descending order: the more frequently occurring, the more likely to be shared

– Never larger than the original database (not counting node-links and count fields)

Partition Patterns and Databases

 Frequent patterns can be partitioned into subsets according to f-list

– F-list=f-c-a-b-m-p

– Patterns containing p

– Patterns having m but no p

– …

– Patterns having c but none of a, b, m, p

– Pattern f

 Completeness and non-redundancy

Find Patterns From P-conditional Database

Starting at the frequent-item header table in the FP-tree:

Traverse the FP-tree by following the node-links of each frequent item p

Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base

[Figure: the FP-tree and header table from the construction slide]

Conditional pattern bases:

item | cond. pattern base
c | f:3
a | fc:3
b | fca:1, f:1, c:1
m | fca:2, fcab:1
p | fcam:2, cb:1

From Conditional Pattern-bases to Conditional FP-trees

 For each pattern-base

– Accumulate the count for each item in the base

– Construct the FP-tree for the frequent items of the pattern base

Example: the m-conditional pattern base is fca:2, fcab:1. Accumulating counts gives f:3, c:3, a:3 (b:1 falls below min_support), so the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3.

All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam
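The single-path case can be sketched as follows (assumed helper): every subset of the path, combined with the suffix m, is frequent, with support equal to the minimum count along the chosen subset (the suffix's own support for the empty subset).

```python
from itertools import combinations

def single_path_patterns(path, suffix, suffix_support):
    """Patterns from a single-path conditional FP-tree: each subset of
    the path, plus the suffix; support = min count along the subset
    (the suffix's own support for the empty subset)."""
    out = {}
    for k in range(len(path) + 1):
        for combo in combinations(path, k):
            items = frozenset(i for i, _ in combo) | suffix
            out[items] = min([c for _, c in combo], default=suffix_support)
    return out

# m-conditional FP-tree from the slide: {} -> f:3 -> c:3 -> a:3
pats = single_path_patterns([('f', 3), ('c', 3), ('a', 3)], frozenset('m'), 3)
# 8 patterns: m, fm, cm, am, fcm, fam, cam, fcam, all with support 3
```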

Recursion: Mining Each Conditional FP-tree

Cond. pattern base of "m": (fca:2, fcab:1) → m-conditional FP-tree: {} → f:3 → c:3 → a:3

Cond. pattern base of "am": (fc:3) → am-conditional FP-tree: {} → f:3 → c:3

Cond. pattern base of "cm": (f:3) → cm-conditional FP-tree: {} → f:3

Cond. pattern base of "cam": (f:3) → cam-conditional FP-tree: {} → f:3

Mining Frequent Patterns With FP-trees

 Idea: Frequent pattern growth

– Recursively grow frequent patterns by pattern and database partition

 Method

– For each frequent item, construct its conditional pattern-base, and then its conditional FP-tree

– Repeat the process on each newly created conditional FP-tree

– Until the resulting FP-tree is empty, or it contains only one path; a single path generates all the combinations of its sub-paths, each of which is a frequent pattern

Rule Generation

 Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement

– If {A,B,C,D} is a frequent itemset, the candidate rules are:

ABC → D, ABD → C, ACD → B, BCD → A, A → BCD, B → ACD, C → ABD, D → ABC, AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB

 If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L); an enumeration sketch follows
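A tiny enumeration sketch confirming the count (helper name assumed):

```python
from itertools import combinations

def candidate_rules(L):
    """All 2^k - 2 candidate rules f -> L - f from a frequent k-itemset
    (the empty antecedent and empty consequent are skipped)."""
    L = frozenset(L)
    for r in range(1, len(L)):
        for f in map(frozenset, combinations(L, r)):
            yield f, L - f

print(len(list(candidate_rules('ABCD'))))  # 14 == 2**4 - 2
```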

Rule Generation

 How to efficiently generate rules from frequent itemsets?

– In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D)

– But the confidence of rules generated from the same itemset is anti-monotone

– e.g., L = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)

 Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule: support(L) is fixed in the numerator, while the antecedent's support in the denominator can only grow as items move to the RHS

Rule Generation for Apriori Algorithm

[Figure: the lattice of rules generated from one frequent itemset; once a rule is found to have low confidence, all rules below it (same itemset, larger RHS) are pruned]

Pattern Evaluation

 Association rule algorithms tend to produce too many rules

– many of them are uninteresting or redundant

– Redundant if {A,B,C} → {D} and {A,B} → {D} have the same support & confidence

 Interestingness measures can be used to prune/rank the derived patterns

 In the original formulation of association rules, support & confidence are the only measures used

Application of Interestingness Measure

[Figure: interestingness measures are applied to the mined patterns to rank and prune them]

Computing Interestingness Measure

 Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table

Contingency table for X → Y:

        | Y    | ¬Y   |
X       | f11  | f10  | f1+
¬X      | f01  | f00  | f0+
        | f+1  | f+0  | |T|

f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y

Used to define various measures: support, confidence, lift, Gini, J-measure, etc.

Drawback of Confidence

        | Coffee | ¬Coffee |
Tea     | 15     | 5       | 20
¬Tea    | 75     | 5       | 80
        | 90     | 10      | 100

Association rule: Tea → Coffee

Confidence = P(Coffee|Tea) = 15/20 = 0.75, but P(Coffee) = 0.9

 Although confidence is high, the rule is misleading

 P(Coffee|¬Tea) = 75/80 = 0.9375

Statistical Independence

 Population of 1000 students

– 600 students know how to swim (S)

– 700 students know how to bike (B)

– 420 students know how to swim and bike (S,B)

– P(S∧B) = 420/1000 = 0.42

– P(S) × P(B) = 0.6 × 0.7 = 0.42

– P(S∧B) = P(S) × P(B) => statistical independence

– P(S∧B) > P(S) × P(B) => positively correlated

– P(S∧B) < P(S) × P(B) => negatively correlated

Statistical-based Measures

 Measures that take into account statistical dependence

Lift = P(Y|X) / P(Y)

Interest I = P(X,Y) / (P(X) P(Y))

PS = P(X,Y) − P(X) P(Y)

φ-coefficient = (P(X,Y) − P(X) P(Y)) / √(P(X)(1 − P(X)) · P(Y)(1 − P(Y)))
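These measures are easy to compute from the contingency-table counts; a hedged sketch (function name mine):

```python
from math import sqrt

def measures(f11, f10, f01, f00):
    """Lift, PS, and the phi-coefficient from a 2x2 contingency table
    for X -> Y, estimating probabilities as counts / |T|."""
    n = f11 + f10 + f01 + f00
    pxy, px, py = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    lift = pxy / (px * py)                 # equals P(Y|X) / P(Y)
    ps = pxy - px * py                     # Piatetsky-Shapiro
    phi = ps / sqrt(px * (1 - px) * py * (1 - py))
    return lift, ps, phi

# Tea/Coffee table: f11=15, f10=5, f01=75, f00=5
print(measures(15, 5, 75, 5))  # lift ~ 0.833 (< 1: negative association)
```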

Example: Lift/Interest

        | Coffee | ¬Coffee |
Tea     | 15     | 5       | 20
¬Tea    | 75     | 5       | 80
        | 90     | 10      | 100

Association rule: Tea → Coffee

Confidence = P(Coffee|Tea) = 0.75, but P(Coffee) = 0.9

 Lift = 0.75/0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated)

Drawback of Lift & Interest

        | Y  | ¬Y |
X       | 10 | 0  | 10
¬X      | 0  | 90 | 90
        | 10 | 90 | 100

Lift = 0.1 / (0.1 × 0.1) = 10

        | Y  | ¬Y |
X       | 90 | 0  | 90
¬X      | 0  | 10 | 10
        | 90 | 10 | 100

Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence: if P(X,Y) = P(X)P(Y) then Lift = 1; here the rarer but equally perfect association receives the far higher lift
