Data Mining: Concepts and Techniques
— Chapter 7. Cluster Analysis —

Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber. All rights reserved.

Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
What is Cluster Analysis?
- Cluster: a collection of data objects
  - Similar to one another within the same cluster
  - Dissimilar to the objects in other clusters
  - Based on distance (or similarity) measures
- Cluster analysis
  - Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters
- Unsupervised learning: no predefined classes
- Typical applications
  - As a stand-alone tool to get insight into the data distribution
  - As a preprocessing step for other algorithms

Examples of Clustering Applications
- Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
- Land use: Identification of areas of similar land use in an earth observation database
- Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
- City-planning: Identifying groups of houses according to their house type, value, and geographical location
- Earthquake studies: Observed earthquake epicenters should be clustered along continent faults
Requirements of Clustering in Data Mining
- Scalability
- Ability to deal with different types of attributes
- Discovery of clusters with arbitrary shape
- Minimal requirements for domain knowledge to determine input parameters
- Able to deal with noise and outliers
- Insensitive to order of input records
- High dimensionality
- Incorporation of user-specified constraints
- Interpretability and usability

Example: clustering for ontology alignment
- (Figure only.)
Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods

Data Structures
- Data matrix (n objects, p attributes; two modes; one row represents one object):

      [ x_11  ...  x_1f  ...  x_1p ]
      [ ...   ...  ...   ...  ...  ]
      [ x_i1  ...  x_if  ...  x_ip ]
      [ ...   ...  ...   ...  ...  ]
      [ x_n1  ...  x_nf  ...  x_np ]

- Dissimilarity matrix (distance table; one mode):

      [ 0                              ]
      [ d(2,1)  0                      ]
      [ d(3,1)  d(3,2)  0              ]
      [ :       :       :              ]
      [ d(n,1)  d(n,2)  ...   ...   0  ]
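The two structures are related: a dissimilarity matrix can be derived from a data matrix by applying a distance function to every pair of rows. Below is a minimal sketch of that derivation using Euclidean distance; the function names and the tiny sample matrix are illustrative, not from the slides.

```python
from math import sqrt

def euclidean(x, y):
    """Euclidean distance between two p-dimensional objects (lists of numbers)."""
    return sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def dissimilarity_matrix(data, dist=euclidean):
    """Turn an n x p data matrix into an n x n dissimilarity matrix d(i, j)."""
    n = len(data)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):      # the matrix is symmetric with a zero diagonal
            d[i][j] = d[j][i] = dist(data[i], data[j])
    return d

if __name__ == "__main__":
    data = [[1.0, 2.0], [3.0, 4.0], [8.0, 1.0]]   # toy data matrix: 3 objects, 2 attributes
    for row in dissimilarity_matrix(data):
        print(["%.2f" % v for v in row])
```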
Type of data in clustering analysis
- Interval-scaled variables
- Binary variables: variables with 2 states (on/off, yes/no)
- Nominal variables: a generalization of the binary variable in that it can take more than 2 states (color: red, yellow, blue, green)
- Ordinal variables: ranking is important (e.g. medals: gold, silver, bronze)
- Ratio variables: a positive measurement on a nonlinear scale (growth)
- Variables of mixed types

Interval-valued variables
- Continuous measurements (weight, temperature, …)
- Sometimes we need to standardize the data
  - Calculate the mean absolute deviation:
      s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + ... + |x_nf − m_f|)
    where
      m_f = (1/n)(x_1f + x_2f + ... + x_nf)
  - Calculate the standardized measurement (z-score):
      z_if = (x_if − m_f) / s_f
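A minimal sketch of this standardization step, assuming one list of raw values per variable f; note that it divides by the mean absolute deviation s_f (as on the slide), not by the standard deviation.

```python
def standardize(values):
    """Z-score a single interval-scaled variable using the mean absolute deviation."""
    n = len(values)
    m_f = sum(values) / n                            # mean m_f
    s_f = sum(abs(x - m_f) for x in values) / n      # mean absolute deviation s_f
    return [(x - m_f) / s_f for x in values]         # z_if = (x_if - m_f) / s_f

if __name__ == "__main__":
    weights = [62.0, 70.0, 81.0, 55.0]
    print(standardize(weights))
```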
Distances between objects
- Distances are normally used to measure the similarity or dissimilarity between two data objects
- Properties:
  - d(i,j) ≥ 0
  - d(i,i) = 0
  - d(i,j) = d(j,i)
  - d(i,j) ≤ d(i,k) + d(k,j)

Distances between objects
- Euclidean distance:
    d(i,j) = sqrt(|x_i1 − x_j1|² + |x_i2 − x_j2|² + ... + |x_ip − x_jp|²)
- Manhattan distance:
    d(i,j) = |x_i1 − x_j1| + |x_i2 − x_j2| + ... + |x_ip − x_jp|
- where i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) are two p-dimensional data objects
Distances between objects
- Minkowski distance:
    d(i,j) = (|x_i1 − x_j1|^q + |x_i2 − x_j2|^q + ... + |x_ip − x_jp|^q)^(1/q)
  - q is a positive integer
  - If q = 1, d is the Manhattan distance
  - If q = 2, d is the Euclidean distance

Binary Variables
- symmetric binary variables: both states are equally important; 0/1
- asymmetric binary variables: one state is more important than the other (e.g. outcome of a disease test); 1 is the important state, 0 the other
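A small sketch of the Minkowski distance family; setting q to 1 or 2 reproduces the Manhattan and Euclidean distances above (the function name is just illustrative).

```python
def minkowski(x, y, q=2):
    """Minkowski distance of order q between two p-dimensional objects."""
    return sum(abs(xi - yi) ** q for xi, yi in zip(x, y)) ** (1.0 / q)

i = [1.0, 3.0, 5.0]
j = [2.0, 1.0, 9.0]
print(minkowski(i, j, q=1))   # Manhattan distance: 7.0
print(minkowski(i, j, q=2))   # Euclidean distance: ~4.58
```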
Contingency tables for Binary Variables

                     Object j
                     1      0      sum
    Object i    1    a      b      a+b
                0    c      d      c+d
              sum    a+c    b+d    p

- a: number of attributes having 1 for object i and 1 for object j
- b: number of attributes having 1 for object i and 0 for object j
- c: number of attributes having 0 for object i and 1 for object j
- d: number of attributes having 0 for object i and 0 for object j
- p = a + b + c + d

Distance measure for symmetric binary variables
- d(i,j) = (b + c) / (a + b + c + d)
Distance measure for asymmetric binary variables
- d(i,j) = (b + c) / (a + b + c)
- Jaccard coefficient:
    sim_Jaccard(i,j) = 1 − d(i,j) = a / (a + b + c)

Dissimilarity between Binary Variables
- Example:

    Name   Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
    Jack   M       Y      N      P       N       N       N
    Mary   F       Y      N      P       N       P       N
    Jim    M       Y      P      N       N       N       N

  - gender is a symmetric attribute
  - the remaining attributes are asymmetric binary
  - distance is based on these attributes only
  - let the values Y and P be set to 1, and the value N be set to 0

    d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
    d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
    d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
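A sketch of the asymmetric binary dissimilarity applied to this example (helper names are illustrative); gender is ignored because it is symmetric, and Y/P are coded as 1, N as 0.

```python
def asym_binary_dist(x, y):
    """d(i,j) = (b + c) / (a + b + c); negative matches (d) are ignored."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1..Test-4 with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(asym_binary_dist(jack, mary))   # ~0.33
print(asym_binary_dist(jack, jim))    # ~0.67
print(asym_binary_dist(jim, mary))    # 0.75
```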
Nominal or Categorical Variables
- Method 1: Simple matching
  - m: # of matches, p: total # of variables
  - d(i,j) = (p − m) / p
- Method 2: use a large number of binary variables
  - create a new binary variable for each of the M nominal states

Ordinal Variables
- An ordinal variable can be discrete or continuous
- Order is important, e.g., rank
- Can be treated like interval-scaled variables:
  - replace x_if by its rank r_if ∈ {1, ..., M_f}
  - map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
      z_if = (r_if − 1) / (M_f − 1)
  - compute the dissimilarity using methods for interval-scaled variables
Ratio-Scaled Variables
- Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^(Bt) or Ae^(−Bt)
- Methods:
  - treat them like interval-scaled variables (not a good choice: the scale can be distorted!)
  - apply a logarithmic transformation: y_if = log(x_if)
  - treat them as continuous ordinal data and treat their rank as interval-scaled

Variables of Mixed Types
- A database may contain all six types of variables
  - symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio
- One may use a weighted formula to combine their effects:
    d(i,j) = Σ_{f=1..p} δ_ij^(f) d_ij^(f) / Σ_{f=1..p} δ_ij^(f)
  - f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, otherwise d_ij^(f) = 1
  - f is interval-based: use the normalized distance
  - f is ordinal or ratio-scaled: compute ranks r_if and z_if = (r_if − 1) / (M_f − 1), and treat z_if as interval-scaled
  - δ_ij^(f) = 0 iff (i) the x-value is missing, or (ii) both x-values are 0 and f is an asymmetric binary attribute
Vector Objects
- Vector objects: keywords in documents, gene features in micro-arrays, etc.
- Broad applications: information retrieval, biologic taxonomy, etc.
- Cosine measure:
    sim(d, q) = (d · q) / (|d| × |q|)

Vector model for information retrieval (simplified)
- (Figure: axes cloning, adrenergic, receptor; Doc1 = (1,1,0), Doc2 = (0,1,0), query Q = (1,1,1).)
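A small sketch of the cosine measure applied to the document vectors in the figure (function name illustrative).

```python
from math import sqrt

def cosine_sim(d, q):
    """sim(d, q) = (d . q) / (|d| * |q|)."""
    dot = sum(di * qi for di, qi in zip(d, q))
    norm = sqrt(sum(di * di for di in d)) * sqrt(sum(qi * qi for qi in q))
    return dot / norm

doc1, doc2, query = [1, 1, 0], [0, 1, 0], [1, 1, 1]
print(cosine_sim(doc1, query))   # ~0.816
print(cosine_sim(doc2, query))   # ~0.577
```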
Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods

Major Clustering Approaches (I)
- Partitioning approach:
  - Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
  - Typical methods: k-means, k-medoids, CLARANS
- Hierarchical approach:
  - Create a hierarchical decomposition of the set of data (or objects) using some criterion
  - Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON
- Density-based approach:
  - Based on connectivity and density functions
  - Typical methods: DBSCAN, OPTICS, DenClue
Major Clustering Approaches (II)
- Grid-based approach:
  - Based on a multiple-level granularity structure
  - Typical methods: STING, WaveCluster, CLIQUE
- Model-based:
  - A model is hypothesized for each of the clusters, and the aim is to find the best fit of the data to the given model
  - Typical methods: EM, SOM, COBWEB
- Frequent pattern-based:
  - Based on the analysis of frequent patterns
  - Typical methods: pCluster
- User-guided or constraint-based:
  - Clustering by considering user-specified or application-specific constraints
  - Typical methods: COD (obstacles), constrained clustering

Typical Alternatives to Calculate the Distance between Clusters
- Single link: smallest distance between an element in one cluster and an element in the other, i.e., dis(K_i, K_j) = min(t_ip, t_jq)
- Complete link: largest distance between an element in one cluster and an element in the other, i.e., dis(K_i, K_j) = max(t_ip, t_jq)
- Average: average distance between an element in one cluster and an element in the other, i.e., dis(K_i, K_j) = avg(t_ip, t_jq)
- Centroid: distance between the centroids of two clusters, i.e., dis(K_i, K_j) = dis(C_i, C_j)
- Medoid: distance between the medoids of two clusters, i.e., dis(K_i, K_j) = dis(M_i, M_j)
  - Medoid: one chosen, centrally located object in the cluster
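A sketch of the first three alternatives (single, complete, average link) as functions over two clusters of points, assuming pairwise Euclidean distance; names and the toy clusters are illustrative.

```python
from math import dist          # Euclidean distance, Python 3.8+

def single_link(Ki, Kj):
    """Smallest pairwise distance between the two clusters."""
    return min(dist(p, q) for p in Ki for q in Kj)

def complete_link(Ki, Kj):
    """Largest pairwise distance between the two clusters."""
    return max(dist(p, q) for p in Ki for q in Kj)

def average_link(Ki, Kj):
    """Average pairwise distance between the two clusters."""
    return sum(dist(p, q) for p in Ki for q in Kj) / (len(Ki) * len(Kj))

Ki = [(0.0, 0.0), (1.0, 0.0)]
Kj = [(4.0, 0.0), (6.0, 0.0)]
print(single_link(Ki, Kj), complete_link(Ki, Kj), average_link(Ki, Kj))  # 3.0 6.0 4.5
```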
Centroid, Radius and Diameter of a Cluster (for numerical data sets)
- Centroid: the "middle" of a cluster
    C_m = Σ_{i=1..N} t_ip / N
- Radius: square root of the average distance from any point of the cluster to its centroid
    R_m = sqrt( Σ_{i=1..N} (t_ip − c_m)² / N )
- Diameter: square root of the average mean squared distance between all pairs of points in the cluster
    D_m = sqrt( Σ_{i=1..N} Σ_{j=1..N} (t_ip − t_jq)² / (N(N−1)) )

Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods
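Referring back to the centroid, radius and diameter definitions above, here is a minimal sketch for small numerical clusters (tuples of coordinates); all names are illustrative.

```python
from math import dist, sqrt

def centroid(points):
    """Component-wise mean of the cluster's points."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

def radius(points):
    """Sqrt of the average squared distance from the points to the centroid."""
    c = centroid(points)
    return sqrt(sum(dist(p, c) ** 2 for p in points) / len(points))

def diameter(points):
    """Sqrt of the average squared distance over all pairs of distinct points."""
    n = len(points)
    total = sum(dist(p, q) ** 2 for p in points for q in points)   # p == q terms add 0
    return sqrt(total / (n * (n - 1)))

cluster = [(1.0, 1.0), (2.0, 1.0), (3.0, 4.0)]
print(centroid(cluster), radius(cluster), diameter(cluster))
```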
Partitioning Algorithms: Basic Concept
- Partitioning method: Construct a partition of a database D of n objects into a set of k clusters such that the sum of squared distances is minimized:
    Σ_{m=1..k} Σ_{t_mi ∈ K_m} (C_m − t_mi)²
- Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion
  - Global optimal: exhaustively enumerate all partitions
  - Heuristic methods: k-means and k-medoids algorithms
  - k-means (MacQueen'67): Each cluster is represented by the center of the cluster
  - k-medoids or PAM (Partition Around Medoids) (Kaufman & Rousseeuw'87): Each cluster is represented by one of the objects in the cluster

The K-Means Clustering Method
- Given k and data D:
  1. Arbitrarily choose k objects as initial cluster centers
  2. Repeat
     - Assign each object to the cluster to which the object is most similar, based on the mean values of the objects in the cluster
     - Update the cluster means (calculate the mean value of the objects in each cluster)
     Until no change
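A minimal k-means sketch following these two steps; it assumes 2-D points, Euclidean distance and a fixed iteration cap in addition to the "no change" test (all names are illustrative, not from the slides).

```python
import random
from math import dist

def kmeans(points, k, max_iter=100, seed=0):
    """Plain k-means: choose k initial centers, then alternate assign / update until stable."""
    random.seed(seed)
    centers = random.sample(points, k)                      # step 1: arbitrary initial centers
    for _ in range(max_iter):
        # step 2a: assign each object to the most similar center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[idx].append(p)
        # step 2b: update the cluster means
        new_centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:                          # until no change
            break
        centers = new_centers
    return centers, clusters

pts = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5), (1.0, 0.5)]
centers, clusters = kmeans(pts, k=2)
print(centers)
```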
The K-Means Clustering Method: Example
- (Figure, K = 2: arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign; update the cluster means again; reassign, until no change.)

Comments on the K-Means Method
- Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n.
  - Comparison: PAM: O(k(n−k)²), CLARA: O(ks² + k(n−k))
- Comment: Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms
- Weakness
  - Applicable only when the mean is defined; then what about categorical data?
  - Need to specify k, the number of clusters, in advance
  - Unable to handle noisy data and outliers
  - Not suitable to discover clusters with non-convex shapes
What Is the Problem of the K-Means Method?
- The k-means algorithm is sensitive to outliers!
  - An object with an extremely large value may substantially distort the distribution of the data.
- K-Medoids: Instead of taking the mean value of the objects in a cluster as a reference point, medoids can be used; a medoid is the most centrally located object in a cluster.
- (Figure: two scatter plots contrasting a mean-based center and a medoid.)

The K-Medoids Clustering Method
- Find representative objects, called medoids, in clusters
- PAM (Partitioning Around Medoids, 1987)
  - starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
  - PAM works effectively for small data sets, but does not scale well to large data sets
- CLARA (Kaufmann & Rousseeuw, 1990)
- CLARANS (Ng & Han, 1994): Randomized sampling
A Typical K-Medoids Algorithm (PAM): Idea
- (Figure, K = 2, total cost 20 vs. 26: arbitrarily choose k objects as initial medoids; assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping; if the quality is improved, swap O and O_random; repeat until no change.)

PAM (Partitioning Around Medoids) (1987)
- PAM (Kaufman and Rousseeuw, 1987)
- Algorithm
  1. Select k representative objects arbitrarily
  2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih
  3. Select the pair i and h that corresponds to the minimum swapping cost
     - If TC_ih < 0, i is replaced by h
     - Then assign each non-selected object to the most similar representative object
  4. Repeat steps 2-3 until there is no change
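A compact sketch of this PAM loop, assuming a list of points and Euclidean distance; it recomputes the total cost of a configuration directly rather than via the incremental C_jih terms shown on the next slide (all names are illustrative).

```python
from math import dist
from itertools import product

def total_cost(points, medoids):
    """Sum of distances from every object to its closest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k):
    """Basic PAM: start from k arbitrary medoids, repeatedly apply improving swaps."""
    medoids = list(points[:k])                      # step 1: arbitrary representative objects
    best = total_cost(points, medoids)
    improved = True
    while improved:
        improved = False
        for i, h in product(range(k), points):      # step 2: every (medoid i, non-medoid h) pair
            if h in medoids:
                continue
            candidate = medoids[:i] + [h] + medoids[i + 1:]
            cost = total_cost(points, candidate)
            if cost < best:                         # step 3: accept the swap if it lowers cost (TC_ih < 0)
                best, medoids, improved = cost, candidate, True
    return medoids, best

pts = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5), (25.0, 1.0)]
print(pam(pts, k=2))
```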
PAM Clustering: Total swapping cost TC_ih = Σ_j C_jih
- For each non-selected object j, the contribution C_jih to the cost of swapping medoid i with non-medoid h depends on which medoid j is assigned to before and after the swap (t denotes some other current medoid):
  - j moves from i to h:            C_jih = d(j, h) − d(j, i)
  - j stays with another medoid t:  C_jih = 0
  - j moves from i to t:            C_jih = d(j, t) − d(j, i)
  - j moves from t to h:            C_jih = d(j, h) − d(j, t)
- (Figure: four scatter plots illustrating the four cases.)

What Is the Problem with PAM?
- PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean
- PAM works efficiently for small data sets but does not scale well to large data sets
  - O(k(n−k)²) per iteration, where n is # of data objects and k is # of clusters
- Hence a sampling-based method: CLARA (Clustering LARge Applications)
CLARA (Clustering Large Applications) (1990)
- CLARA (Kaufmann and Rousseeuw in 1990)
  - Built into statistical analysis packages, such as S+
- It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output
- Strength: deals with larger data sets than PAM
- Weakness:
  - Efficiency depends on the sample size
  - A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased

CLARA (Clustering Large Applications) (1990)
- Algorithm (n = 5, s = 40 + 2k)
- Repeat n times:
  - Draw a sample of s objects from the entire data set and perform PAM to find k medoids of the sample
  - Assign each non-selected object in the entire data set to the most similar medoid
  - Calculate the average dissimilarity of the clustering. If the value is smaller than the current minimum, use this value as the current minimum and retain the k medoids as the best so far.
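A sketch of the CLARA loop built on the pam() sketch shown earlier; the sample count and sample size follow the slide's defaults n = 5 and s = 40 + 2k, and the random sampling and helper names are illustrative.

```python
import random
from math import dist

def total_cost(points, medoids):
    """Sum of distances from every object to its closest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def clara(points, k, n_samples=5, seed=0):
    """CLARA: run PAM on several random samples, keep the medoids that score best on all data."""
    random.seed(seed)
    s = min(len(points), 40 + 2 * k)                 # sample size from the slide: s = 40 + 2k
    best_medoids, best_cost = None, float("inf")
    for _ in range(n_samples):                       # repeat n times (n = 5 on the slide)
        sample = random.sample(points, s)
        medoids, _ = pam(sample, k)                  # pam() from the PAM sketch above
        cost = total_cost(points, medoids)           # dissimilarity measured on the entire data set
        if cost < best_cost:
            best_medoids, best_cost = medoids, cost
    return best_medoids, best_cost
```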
Graph Abstraction
- A node represents a set of k objects (medoids), i.e. a potential solution for the clustering.
- Nodes are neighbors if their sets of objects differ by exactly one object.
- Each node has k(n−k) neighbors.
- The cost differential between two neighbors is TC_ih (with O_i and O_h the differing objects in the two medoid sets).

Graph abstraction
- PAM searches for the node in the graph with minimum cost
- CLARA searches in smaller graphs (as it applies PAM to samples of the entire data set)
- CLARANS
  - Searches in the original graph
  - Searches only part of the graph
  - Uses the neighbors to guide the search
CLARANS ("Randomized" CLARA) (1994)
- CLARANS (A Clustering Algorithm based on Randomized Search) (Ng and Han'94)
- It is more efficient and scalable than both PAM and CLARA

CLARANS ("Randomized" CLARA) (1994)
- Algorithm
  - numlocal: number of local minima to be found
  - maxneighbor: maximum number of neighbors to compare
  - Repeat numlocal times (find a local minimum):
    - Take an arbitrary node in the graph
    - Consider a random neighbor S of the current node and calculate the cost differential. If S has lower cost, set S as the current node and repeat this step. If S does not have lower cost, repeat this step (checking at most maxneighbor neighbors).
    - Compare the cost of the current node with the minimum cost so far. If the cost is lower, set the minimum cost to the cost of the current node, and bestnode to the current node.
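A sketch of this randomized search; it reuses total_cost() from the PAM sketch above, a "node" is a list of k medoids, and a random neighbor differs in exactly one medoid (names and default parameter values are illustrative).

```python
import random
from math import dist

def random_neighbor(points, medoids):
    """Swap one randomly chosen medoid for a randomly chosen non-medoid."""
    i = random.randrange(len(medoids))
    h = random.choice([p for p in points if p not in medoids])
    return medoids[:i] + [h] + medoids[i + 1:]

def clarans(points, k, numlocal=2, maxneighbor=20, seed=0):
    random.seed(seed)
    best_node, best_cost = None, float("inf")
    for _ in range(numlocal):                        # find numlocal local minima
        current = random.sample(points, k)           # arbitrary node in the graph
        current_cost = total_cost(points, current)   # total_cost() from the PAM sketch above
        checked = 0
        while checked < maxneighbor:                 # compare at most maxneighbor neighbors
            neighbor = random_neighbor(points, current)
            cost = total_cost(points, neighbor)
            if cost < current_cost:
                current, current_cost, checked = neighbor, cost, 0
            else:
                checked += 1
        if current_cost < best_cost:                 # keep the best local minimum (bestnode)
            best_node, best_cost = current, current_cost
    return best_node, best_cost
```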
Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods

Hierarchical Clustering
- Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition.
- (Figure, objects a, b, c, d, e:
  agglomerative (AGNES), steps 0 to 4: a, b, c, d, e → ab → de → cde → abcde;
  divisive (DIANA), steps 4 to 0: the same sequence in reverse.)
Complete-link Clustering Example
- Initial distance matrix for objects 1-5:

         1   2   3   4   5
    1    0
    2    2   0
    3    6   3   0
    4   10   9   7   0
    5    9   8   5   4   0

- The smallest distance is d(1,2) = 2, so merge objects 1 and 2 into cluster (1,2).

Complete-link Clustering Example
- Update the distances to the new cluster using the complete link (maximum):
    d((1,2),3) = max{d(1,3), d(2,3)} = max{6, 3} = 6
    d((1,2),4) = max{d(1,4), d(2,4)} = max{10, 9} = 10
    d((1,2),5) = max{d(1,5), d(2,5)} = max{9, 8} = 9

          (1,2)   3   4   5
    (1,2)   0
    3       6     0
    4      10     7   0
    5       9     5   4   0

- The smallest distance is now d(4,5) = 4, so merge 4 and 5 into cluster (4,5):
    d((1,2),(4,5)) = max{d((1,2),4), d((1,2),5)} = max{10, 9} = 10
    d(3,(4,5))     = max{d(3,4), d(3,5)}         = max{7, 5}  = 7

          (1,2)   3   (4,5)
    (1,2)   0
    3       6     0
    (4,5)  10     7    0
Complete-link Clustering Example
- The smallest remaining distance is d((1,2),3) = 6, so merge (1,2) and 3 into (1,2,3):
    d((1,2,3),(4,5)) = max{d((1,2),(4,5)), d(3,(4,5))} = max{10, 7} = 10
- The final merge joins (1,2,3) and (4,5) at distance 10.
- (Figure: the resulting dendrogram with merge heights 2, 4, 6 and 10; cutting at threshold th = 9 gives the clusters {1,2,3} and {4,5}, cutting at th = 5 gives {1,2}, {3} and {4,5}.)

AGNES (Agglomerative Nesting)
- Introduced in Kaufmann and Rousseeuw (1990)
- Implemented in statistical analysis packages, e.g., Splus
- Uses the single-link method and the dissimilarity matrix
- Merge nodes that have the least dissimilarity
- Go on in a non-descending fashion
- Eventually all nodes belong to the same cluster
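A sketch of complete-link agglomerative clustering run directly on the 5-object distance matrix of the example above (helper names illustrative); it should reproduce the merge sequence (1,2), (4,5), ((1,2),3), and finally all objects at height 10.

```python
def complete_link_clustering(dmat, labels):
    """Agglomerative clustering with the complete link (maximum pairwise distance)."""
    clusters = [frozenset([l]) for l in labels]

    def cdist(ci, cj):
        # complete-link distance between two clusters, read off the base distance matrix
        return max(dmat[labels.index(a)][labels.index(b)] for a in ci for b in cj)

    merges = []
    while len(clusters) > 1:
        # find the closest pair of current clusters under the complete link
        ci, cj = min(((a, b) for i, a in enumerate(clusters) for b in clusters[i + 1:]),
                     key=lambda pair: cdist(*pair))
        merges.append((set(ci), set(cj), cdist(ci, cj)))
        clusters = [c for c in clusters if c not in (ci, cj)] + [ci | cj]
    return merges

labels = [1, 2, 3, 4, 5]
dmat = [[0, 2, 6, 10, 9],
        [2, 0, 3, 9, 8],
        [6, 3, 0, 7, 5],
        [10, 9, 7, 0, 4],
        [9, 8, 5, 4, 0]]
for a, b, h in complete_link_clustering(dmat, labels):
    print(a, b, "merged at height", h)
```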
AGNES (Agglomerative Nesting)
- (Figure: scatter plots illustrating the agglomerative merging of clusters.)

Dendrogram: Shows How the Clusters are Merged
- Decompose the data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.
- A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.
DIANA (Divisive Analysis)
- Introduced in Kaufmann and Rousseeuw (1990)
- Implemented in statistical analysis packages, e.g., Splus
- Inverse order of AGNES
- Eventually each node forms a cluster on its own
- (Figure: scatter plots illustrating the divisive splits.)

Recent Hierarchical Clustering Methods
- Major weaknesses of agglomerative clustering methods
  - do not scale well: time complexity of at least O(n²), where n is the number of total objects
  - can never undo what was done previously
- Integration of hierarchical with distance-based clustering
  - BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
  - ROCK (1999): clustering categorical data by neighbor and link analysis
  - CHAMELEON (1999): hierarchical clustering using dynamic modeling
BIRCH (1996)
- BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD'96)
- Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
  - Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data)
  - Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
- Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
- Weakness: handles only numeric data, is sensitive to the order of the data records, and does not always find natural clusters

Clustering Feature Vector in BIRCH
- Clustering Feature: CF = (N, LS, SS)
  - N: number of data points
  - LS: linear sum of the N points, Σ_{i=1..N} X_i
  - SS: square sum of the N points, Σ_{i=1..N} X_i²
- Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16,30), (54,190))
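A small sketch of the CF vector for 2-D points, including the additive merge that lets BIRCH combine sub-clusters without revisiting the raw data (names illustrative); on the example points it reproduces CF = (5, (16,30), (54,190)).

```python
def cf(points):
    """Clustering Feature (N, LS, SS) of a set of 2-D points."""
    n = len(points)
    ls = (sum(p[0] for p in points), sum(p[1] for p in points))             # linear sum
    ss = (sum(p[0] ** 2 for p in points), sum(p[1] ** 2 for p in points))   # square sum
    return n, ls, ss

def cf_merge(cf1, cf2):
    """CFs are additive: merging two sub-clusters just adds the components."""
    (n1, ls1, ss1), (n2, ls2, ss2) = cf1, cf2
    return (n1 + n2,
            (ls1[0] + ls2[0], ls1[1] + ls2[1]),
            (ss1[0] + ss2[0], ss1[1] + ss2[1]))

pts = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]
print(cf(pts))                                   # (5, (16, 30), (54, 190))
print(cf_merge(cf(pts[:2]), cf(pts[2:])))        # same CF, built from two sub-clusters
```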
CF-Tree in BIRCH
- Clustering feature:
  - summary of the statistics for a given subcluster: the 0th, 1st and 2nd moments of the subcluster from the statistical point of view
  - registers crucial measurements for computing clusters and utilizes storage efficiently
- A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering
  - A nonleaf node in the tree has children and stores the sums of the CFs of its children
  - A nonleaf node represents a cluster made of the subclusters represented by its children
  - A leaf node represents a cluster made of the subclusters represented by its entries
- A CF tree has two parameters
  - Branching factor: specifies the maximum number of children
  - Threshold: maximum diameter of the sub-clusters stored at the leaf nodes

The CF Tree Structure
- (Figure: a root node with entries CF1 ... CF6 and child pointers (branching factor B = 6, threshold T = 7); a non-leaf node with entries CF1 ... CF5 and child pointers; leaf nodes holding entries CFa ... CFk and CFl ... CFq, linked by prev/next pointers.)
ROCK: Clustering Categorical Data
- ROCK: RObust Clustering using linKs
  - S. Guha, R. Rastogi & K. Shim, ICDE'99
- Major ideas
  - Use links to measure similarity/proximity
  - Maximize the sum of the number of links between points within a cluster, and minimize the sum of the number of links for points in different clusters
- Computational complexity: O(n² + n·m_m·m_a + n² log n), with m_m the maximum and m_a the average number of neighbors

Similarity Measure in ROCK
- Traditional measures for categorical data may not work well, e.g., the Jaccard coefficient
- Example: two groups (clusters) of transactions
  - C1. <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
  - C2. <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
- Jaccard coefficient-based similarity function:
    Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|
  - Ex. Let T1 = {a, b, c}, T2 = {c, d, e}:
      Sim(T1, T2) = |{c}| / |{a, b, c, d, e}| = 1/5 = 0.2
- The Jaccard coefficient may lead to a wrong clustering result
  - within C1: ranges from 0.2 ({a, b, c}, {b, d, e}) to 0.5 ({a, b, c}, {a, b, d})
  - across C1 & C2: could be as high as 0.5 ({a, b, c}, {a, b, f})
Link Measure in ROCK
- Neighbor: p1 and p2 are neighbors iff sim(p1, p2) >= t (sim and t between 0 and 1)
- Links: # of common neighbors
  - link(pi, pj) is the number of common neighbors between pi and pj

Link Measure in ROCK
- C1 <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
- C2 <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
- Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}, let sim be the Jaccard coefficient similarity, and let t = 0.5
  - link(T1, T2) = 4, since they have 4 common neighbors:
    {a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}
  - link(T1, T3) = 5, since they have 5 common neighbors:
    {a, b, d}, {a, b, e}, {a, b, g}, {a, b, c}, {a, b, f}
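A sketch that recomputes these neighbor sets and link counts from the transactions of C1 and C2, using Jaccard similarity and threshold t = 0.5; the helper names are illustrative.

```python
from itertools import combinations

def jaccard(t1, t2):
    return len(t1 & t2) / len(t1 | t2)

def links(transactions, t=0.5):
    """Neighbors: sim >= t (a point counts as its own neighbor); link = # of common neighbors."""
    neighbors = {x: {y for y in transactions if jaccard(x, y) >= t} for x in transactions}
    return {(a, b): len(neighbors[a] & neighbors[b])
            for a, b in combinations(transactions, 2)}

c1 = [frozenset(s) for s in ({"a","b","c"}, {"a","b","d"}, {"a","b","e"}, {"a","c","d"},
                             {"a","c","e"}, {"a","d","e"}, {"b","c","d"}, {"b","c","e"},
                             {"b","d","e"}, {"c","d","e"})]
c2 = [frozenset(s) for s in ({"a","b","f"}, {"a","b","g"}, {"a","f","g"}, {"b","f","g"})]
link = links(c1 + c2)
T1, T2, T3 = frozenset("abc"), frozenset("cde"), frozenset("abf")
print(link.get((T1, T2), link.get((T2, T1))))   # 4 common neighbors
print(link.get((T1, T3), link.get((T3, T1))))   # 5 common neighbors
```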
Link Measure in ROCK
- link(Ci, Cj) = the number of cross links between clusters Ci and Cj
- g(Ci, Cj) = goodness measure for merging Ci and Cj
  = link(Ci, Cj) divided by the expected number of cross links

The ROCK Algorithm
- Algorithm: sampling-based clustering
  - Draw a random sample
  - Hierarchical clustering with links, using the goodness measure for merging
  - Label the data on disk: a point is assigned to the cluster for which it has the most neighbors after normalization
CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)
- CHAMELEON: by G. Karypis, E. H. Han, and V. Kumar'99
- Measures the similarity based on a dynamic model
  - Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters
- A two-phase algorithm
  1. Use a graph partitioning algorithm: cluster objects into a large number of relatively small sub-clusters
  2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters

Overall Framework of CHAMELEON
- Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters
CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)
- Phase 1. Use a graph partitioning algorithm to cluster objects into a large number of relatively small sub-clusters
  - Based on the k-nearest neighbor graph
  - Edge between two nodes if the points corresponding to either of the nodes are among the k most similar points of the point corresponding to the other node
  - Edge weight is the density of the region
  - Dynamic notion of neighborhood: in regions with high density a neighborhood radius is small, while in sparse regions the neighborhood radius is large

CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)
- Phase 2. Use an agglomerative hierarchical clustering algorithm to find the genuine clusters by repeatedly combining these sub-clusters
  - Interconnectivity between clusters Ci and Cj: normalized sum of the weights of the edges that connect nodes in Ci and Cj
  - Closeness of clusters Ci and Cj: average similarity between points in Ci that are connected to points in Cj
  - Merge if both measures are above user-defined thresholds
CHAMELEON (Clustering Complex Objects)
- (Figure only.)

Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods
Density-Based Clustering Methods
- Clustering based on density (a local cluster criterion), such as density-connected points
- Major features:
  - Discover clusters of arbitrary shape
  - Handle noise
  - One scan
  - Need density parameters as termination condition
- Several interesting studies:
  - DBSCAN: Ester, et al. (KDD'96)
  - OPTICS: Ankerst, et al. (SIGMOD'99)
  - DENCLUE: Hinneburg & D. Keim (KDD'98)
  - CLIQUE: Agrawal, et al. (SIGMOD'98) (more grid-based)

Density-Based Clustering: Basic Concepts
- Two parameters:
  - Eps: maximum radius of the neighborhood
  - MinPts: minimum number of points in an Eps-neighborhood of that point
- N_Eps(p): {q belongs to D | dist(p,q) <= Eps}
- Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
  - p belongs to N_Eps(q)
  - core point condition: |N_Eps(q)| >= MinPts
- (Figure: p in the Eps-neighborhood of q; MinPts = 5, Eps = 1 cm.)
Density-Reachable and Density-Connected
- Density-reachable:
  - A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, with p1 = q and pn = p, such that p_{i+1} is directly density-reachable from p_i
- Density-connected:
  - A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts

DBSCAN: Density Based Spatial Clustering of Applications with Noise
- Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
- Discovers clusters of arbitrary shape in spatial databases with noise
- (Figure: core, border and outlier points; Eps = 1 cm, MinPts = 5.)
DBSCAN: The Algorithm
- Arbitrarily select a point p
- Retrieve all points density-reachable from p w.r.t. Eps and MinPts
- If p is a core point, a cluster is formed
- If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
- Continue the process until all of the points have been processed

DBSCAN: Sensitive to Parameters
- (Figure only.)
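A compact DBSCAN sketch following this procedure, with Euclidean distance and a brute-force neighborhood query; all names and the label convention (-1 for noise) are illustrative assumptions.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise/outliers."""
    labels = {p: None for p in points}
    cluster_id = 0

    def region(q):                      # N_Eps(q): all points within eps of q
        return [x for x in points if dist(x, q) <= eps]

    for p in points:
        if labels[p] is not None:
            continue
        seeds = region(p)
        if len(seeds) < min_pts:        # not a core point (may later become a border point)
            labels[p] = -1
            continue
        cluster_id += 1                 # p is a core point: grow a new cluster from it
        labels[p] = cluster_id
        queue = [q for q in seeds if q != p]
        while queue:
            q = queue.pop()
            if labels[q] in (None, -1):
                if labels[q] is None:
                    q_seeds = region(q)
                    if len(q_seeds) >= min_pts:   # q is also a core point: expand further
                        queue.extend(x for x in q_seeds if labels[x] in (None, -1))
                labels[q] = cluster_id
            # points already assigned to another cluster are left untouched
    return labels

pts = [(1.0, 1.0), (1.2, 1.1), (0.9, 1.0), (8.0, 8.0), (8.1, 8.2), (8.2, 7.9), (20.0, 20.0)]
print(dbscan(pts, eps=0.5, min_pts=3))
```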
CHAMELEON (Clustering Complex Objects)
- (Figure only.)

OPTICS: A Cluster-Ordering Method (1999)
- OPTICS: Ordering Points To Identify the Clustering Structure
  - Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99)
  - Produces a special order of the database w.r.t. its density-based clustering structure
  - This cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
  - Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
  - Can be represented graphically or using visualization techniques
OPTICS: Some Extension from DBSCAN
- Core distance of p w.r.t. MinPts: the smallest distance eps' between p and an object in its eps-neighborhood such that p would be a core object for eps' and MinPts; otherwise undefined.
- Reachability distance of p w.r.t. o:
    max(core-distance(o), d(o, p)) if o is a core object; undefined otherwise
- Example (MinPts = 5, eps = 3 cm): r(p1, o) = 1.5 cm, r(p2, o) = 4 cm
- (Figure: reachability-distance plotted over the cluster-order of the objects, with undefined values marked.)
DENCLUE: Using Statistical Density Functions
- DENsity-based CLUstEring by Hinneburg & Keim (KDD'98)
- Uses statistical density functions
- Major features
  - Solid mathematical foundation
  - Good for data sets with large amounts of noise
  - Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets
  - Significantly faster than existing algorithms (e.g., DBSCAN)
  - But needs a large number of parameters

DENCLUE: Technical Essence
- Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure
- Influence function: describes the impact of a data point within its neighborhood
    f_Gaussian(x, y) = e^(−d(x,y)² / (2σ²))
- The overall density of the data space can be calculated as the sum of the influence functions of all data points
    f^D_Gaussian(x) = Σ_{i=1..N} e^(−d(x,x_i)² / (2σ²))
- Clusters can be determined mathematically by identifying density attractors; density attractors are local maxima of the overall density function
    ∇f^D_Gaussian(x, x_i) = Σ_{i=1..N} (x_i − x) · e^(−d(x,x_i)² / (2σ²))
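A small sketch of the Gaussian influence function and the resulting overall density, plus a gradient-ascent step toward a density attractor; σ, the step size, the iteration count and all names are illustrative assumptions.

```python
from math import dist, exp

def influence(x, y, sigma=1.0):
    """Gaussian influence of data point y at location x."""
    return exp(-dist(x, y) ** 2 / (2 * sigma ** 2))

def density(x, data, sigma=1.0):
    """Overall density at x: sum of the influences of all data points."""
    return sum(influence(x, xi, sigma) for xi in data)

def hill_climb(x, data, sigma=1.0, step=0.1, iters=50):
    """Follow the density gradient from x toward a density attractor (local maximum)."""
    for _ in range(iters):
        grad = [sum((xi[d] - x[d]) * influence(x, xi, sigma) for xi in data)
                for d in range(len(x))]
        x = tuple(x[d] + step * grad[d] for d in range(len(x)))
    return x

data = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.1, 4.9)]
print(density((1.0, 1.0), data))
print(hill_climb((1.5, 1.5), data))   # should move toward the attractor near (1, 1)
```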
DENCLUE: Technical Essence
- Significant density attractor for threshold k: a density attractor with density larger than or equal to k
- Set of significant density attractors X for threshold k: for each pair of density attractors x1, x2 in X there is a path from x1 to x2 such that each point on the path has density larger than or equal to k
- Center-defined cluster for a significant density attractor x and threshold k: the points that are density-attracted by x
  - Points that are attracted to a density attractor with density less than k are called outliers
- Arbitrary-shape cluster for a set of significant density attractors X and threshold k: the points that are density-attracted to some density attractor in X

Density Attractor
- (Figure only.)
Center-Defined and Arbitrary
- (Figure only.)

Chapter 7. Cluster Analysis (outline): 1. What is Cluster Analysis? 2. Types of Data in Cluster Analysis 3. A Categorization of Major Clustering Methods 4. Partitioning Methods 5. Hierarchical Methods 6. Density-Based Methods 7. Grid-Based Methods
Grid-Based Clustering Method
- Uses a multi-resolution grid data structure
- Several interesting methods
  - STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
  - CLIQUE: Agrawal, et al. (SIGMOD'98)
    - On high-dimensional data (thus also placed in the section on clustering high-dimensional data)

STING: A Statistical Information Grid Approach
- Wang, Yang and Muntz (VLDB'97)
- The spatial area is divided into rectangular cells
- There are several levels of cells corresponding to different levels of resolution
The STING Clustering Method
- Each cell at a high level is partitioned into a number of smaller cells at the next lower level
- Statistical information for each cell is calculated and stored beforehand and is used to answer queries
- Parameters of higher-level cells can easily be calculated from the parameters of lower-level cells
  - count
  - for each attribute: mean, s, min, max, and the type of distribution (normal, uniform, etc.)

The STING Clustering Method
- Query example:

    SELECT REGION
    FROM house-map
    WHERE DENSITY IN (100, infinity)
      AND price RANGE (400000, infinity) WITH PERCENT (0.7, 1)
      AND AREA (100, infinity)
      AND WITH CONFIDENCE 0.9

- Select the maximal regions that have at least 100 houses per unit area, at least 70% of the house prices above 400000, and total area at least 100 units, with 90% confidence.
The STING Clustering Method
- Use a top-down approach to answer spatial data queries
- Start from a pre-selected layer, typically with a small number of cells
- For each cell in the current level, compute the confidence interval and label the cell as relevant or not relevant to the query
- Go down the hierarchy level by level, only via relevant cells; stop at the bottom layer

Comments on STING
- Advantages:
  - Query-independent, easy to parallelize, incremental update
  - O(K), where K is the number of grid cells at the lowest level
- Disadvantages:
  - All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected
CLIQUE (Clustering In QUEst)
- Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
- Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
- CLIQUE can be considered as both density-based and grid-based
  - It partitions each dimension into the same number of equal-length intervals
  - It partitions an m-dimensional data space into non-overlapping rectangular units
  - A unit is dense if the fraction of the total data points contained in the unit exceeds the input model parameter
  - A cluster is a maximal set of connected dense units within a subspace
- (Figure: grids over salary (10,000) vs. age and vacation (week) vs. age with density threshold τ = 3, and the corresponding region in the 3-D age-salary-vacation space.)
CLIQUE: The Major Steps
1. Partition the data space and find the number of points that lie inside each cell of the partition
2. Identify the subspaces that contain clusters, using the Apriori principle
3. Identify clusters
   - Determine dense units in all subspaces of interest
   - Determine connected dense units in all subspaces of interest
4. Generate a minimal description for the clusters
   - Determine the maximal regions that cover each cluster of connected dense units
   - Determine the minimal cover for each cluster

CLIQUE: The Major Steps
- Step 2: Identify the subspaces that contain clusters, using the Apriori principle
  - Property: if a collection of points is a cluster in a k-dimensional space, then it is also part of a cluster in any (k−1)-dimensional projection of this space
  - Algorithm: level by level
    - Start with 1-dimensional dense units
    - Given the (k−1)-dimensional dense units, generate candidate k-dimensional units by joining (k−1)-dimensional dense units with other (k−1)-dimensional dense units that share k−2 dimensions; then check whether the new k-dimensional units are dense
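A sketch of the Apriori-style bottom-up step above: a unit is represented as a frozenset of (dimension, interval-index) pairs, candidates are built by joining units that share k−2 dimensions, and density is re-checked against the data; the threshold, the toy data in [0, 1)^3 and all names are illustrative.

```python
from itertools import combinations

def is_dense(unit, data, n_intervals, tau):
    """A unit is dense if at least tau points fall into every (dim, interval) it specifies."""
    count = sum(all(int(p[d] * n_intervals) == i for d, i in unit) for p in data)
    return count >= tau

def join_dense_units(dense_k_minus_1, data, n_intervals, tau):
    """Generate candidate k-dimensional units from (k-1)-dimensional dense units."""
    candidates = set()
    for u, v in combinations(dense_k_minus_1, 2):
        merged = u | v
        # u and v must share k-2 dimensions and the merged unit must use distinct dimensions
        if len(merged) == len(u) + 1 and len({d for d, _ in merged}) == len(merged):
            candidates.add(merged)
    return {c for c in candidates if is_dense(c, data, n_intervals, tau)}

# toy data in [0, 1)^3; find 1-dimensional dense units first, then join level by level
data = [(0.1, 0.1, 0.9), (0.15, 0.12, 0.2), (0.12, 0.18, 0.5), (0.8, 0.9, 0.1)]
n_intervals, tau = 5, 3
dense_1d = {frozenset([(d, i)]) for d in range(3) for i in range(n_intervals)
            if is_dense(frozenset([(d, i)]), data, n_intervals, tau)}
dense_2d = join_dense_units(dense_1d, data, n_intervals, tau)
print(dense_1d)
print(dense_2d)
```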
Strength and Weakness of CLIQUE
- Strength
  - automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
  - insensitive to the order of records in the input and does not presume some canonical data distribution
  - scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases
- Weakness
  - The accuracy of the clustering result may be degraded at the expense of the simplicity of the method