Clustering

Outline
1. Microarrays and Introduction to Clustering
2. Hierarchical Clustering
3. K-Means Clustering
4. Corrupted Cliques Problem
5. CAST Clustering Algorithm
Section 1:
Microarrays and
Introduction to Clustering
What Is Clustering?
• Viewing and analyzing vast amounts of biological data can be
perplexing.
• It is therefore easier to interpret the data if they are partitioned
into clusters of similar data points.
Inferring Gene Functionality
• Researchers want to know the functions of newly sequenced
genes.
• Simply comparing new gene sequences to known DNA
sequences often does not reveal the function of the gene.
• For 40% of sequenced genes, functionality cannot be
ascertained by only comparing to sequences of other known
genes.
• Microarrays allow biologists to infer gene function even when
sequence similarity alone is insufficient to infer function.
Microarrays and Expression Analysis
• Microarrays: Measure the expression level of genes under
varying conditions/time points.
• Expression level is estimated by measuring the amount of
mRNA for that particular gene.
• A gene is active if it is being transcribed.
• More mRNA usually indicates more gene activity.
How do Microarray Experiments Work?
1. Produce cDNA from mRNA (DNA is more stable).
2. Attach phosphor to cDNA to determine when a particular
gene is expressed.
3. Different color phosphors are available to compare many
samples at once.
4. Hybridize cDNA over the microarray.
5. Scan the microarray with a phosphor-illuminating laser.
6. Illumination reveals transcribed genes.
7. Scan microarray multiple times for the differently colored
phosphors.
Microarray Experiments
[Figure: microarray experiment workflow, from www.affymetrix.com. Annotations: phosphors can be added at the labeling step; laser illumination can be used instead of staining.]
Using Microarrays
• Track the sample
over a period of time
to see gene
expression over time.
• Each box on the right
represents one gene’s
expression over time.
Using Microarrays: Example
• Green: Expressed only from
control.
• Red: Expressed only from
experimental cell.
• Yellow: Equally expressed
in both samples.
• Black: NOT expressed in
either control or
experimental cells.
Sample Microarray
Interpreting Microarray Data
• Microarray data are usually transformed into an intensity
matrix (below).
• The intensity matrix allows biologists to find correlations
between different genes (even if they are dissimilar) and to
understand how genes’ functions might be related.
• Each entry contains the
intensity (expression level)
of the given gene at the
measured time.
            Time X   Time Y   Time Z
Gene 1        10        8       10
Gene 2        10        0        9
Gene 3         4      8.6        3
Gene 4         7        8        3
Gene 5         1        2        3
Applying Clustering to Microarray Data
1. Plot each datum as a point in N-dimensional space.
2. Create a distance matrix for the distance between every two
gene points in the N-dimensional space.
3. Genes with a small distance share the same expression
characteristics and might be functionally related or similar.
4. Therefore, clustering “close” genes together reveals groups of
functionally related genes.
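As a concrete illustration of steps 1 and 2, here is a minimal Python sketch (standard library only; the gene names and values mirror the intensity matrix above) that treats each expression profile as a point and builds the pairwise Euclidean distance matrix.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Each gene's expression profile is a point in N-dimensional space
# (here N = 3 time points, values taken from the intensity matrix above).
intensity = {
    "Gene 1": (10, 8, 10),
    "Gene 2": (10, 0, 9),
    "Gene 3": (4, 8.6, 3),
    "Gene 4": (7, 8, 3),
    "Gene 5": (1, 2, 3),
}

# Distance matrix: the distance between every pair of gene points.
genes = list(intensity)
d = {(g, h): dist(intensity[g], intensity[h]) for g in genes for h in genes}

# Genes separated by a small distance may share expression characteristics.
print(round(d[("Gene 1", "Gene 4")], 2))
```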
Clustering: Example
[Figure: data points grouped into clusters.]
Homogeneity and Separation Principles
• Every “good” clustering must have two properties:
1. Homogeneity: Elements within the same cluster are close to each other.
2. Separation: Elements in different clusters are located far from each other.
• Therefore clustering is not an easy task!
• Example: Two (of many) clusterings:
1. BAD (Homogeneity? Separation?)
2. GOOD
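To make the two principles measurable, here is a small Python sketch (the point coordinates and function names are illustrative, not from the slides) that scores a candidate clustering by its average intra-cluster distance (homogeneity) and average inter-cluster distance (separation).

```python
from itertools import combinations
from math import dist

def homogeneity(clusters):
    """Average distance between points in the same cluster (smaller is better)."""
    pairs = [(p, q) for c in clusters for p, q in combinations(c, 2)]
    return sum(dist(p, q) for p, q in pairs) / len(pairs)

def separation(clusters):
    """Average distance between points in different clusters (larger is better)."""
    pairs = [(p, q) for c1, c2 in combinations(clusters, 2) for p in c1 for q in c2]
    return sum(dist(p, q) for p, q in pairs) / len(pairs)

# The GOOD clustering keeps nearby points together; the BAD one mixes them.
good = [[(0, 0), (1, 0)], [(10, 10), (11, 10)]]
bad  = [[(0, 0), (10, 10)], [(1, 0), (11, 10)]]
print(homogeneity(good) < homogeneity(bad))  # True: tighter clusters
print(separation(good) > separation(bad))    # True: clusters farther apart
```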
Section 2:
Hierarchical Clustering
Three Primary Clustering Techniques
1. Agglomerative: Start with every element in its own cluster,
and iteratively join clusters together.
2. Divisive: Start with one cluster and iteratively divide it into
smaller clusters.
3. Hierarchical: Organize elements into a tree.
• Leaves represent genes.
• Path lengths between leaves represent the distances
between genes.
• Similar genes therefore lie within the same subtree.
Hierarchical Clustering: Example
Hierarchical Clustering and Evolution
• Hierarchical Clustering can be used to reveal evolutionary
history.
Hierarchical Clustering Algorithm
• The algorithm takes an n × n distance matrix d of pairwise distances between points as input.

HierarchicalClustering(d, n)
   Form n clusters, each with one element
   Construct a graph T by assigning one vertex to each cluster
   while there is more than one cluster
      Find the two closest clusters C1 and C2
      Merge C1 and C2 into a new cluster C with |C1| + |C2| elements
      Compute the distance from C to all other clusters
      Add a new vertex C to T and connect it to vertices C1 and C2
      Remove the rows and columns of d corresponding to C1 and C2
      Add a row and column to d for the new cluster C
   return T
• Different notions of “distance” give different clusters.
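A minimal Python sketch of the pseudocode above, assuming the input distances are given for every pair of the n initial points and using the minimum (single-link) rule when recomputing distances after a merge (one of the two metrics defined on the next slide); all names are illustrative.

```python
from itertools import combinations

def hierarchical_clustering(d, n):
    """d: dict mapping frozenset({i, j}) to the distance between clusters i and j
    (initially the pairwise distances between the n points).
    Returns the tree T as a list of merges (child1, child2, new cluster id)."""
    clusters = set(range(n))        # n clusters, each with one element
    tree = []                       # the graph T, recorded as merge events
    next_id = n
    while len(clusters) > 1:
        # Find the two closest clusters C1 and C2.
        c1, c2 = min(combinations(clusters, 2), key=lambda p: d[frozenset(p)])
        c = next_id                 # merge C1 and C2 into a new cluster C
        next_id += 1
        clusters -= {c1, c2}
        # Compute the distance from C to all remaining clusters (single-link rule).
        for other in clusters:
            d[frozenset({c, other})] = min(d[frozenset({c1, other})],
                                           d[frozenset({c2, other})])
        clusters.add(c)
        tree.append((c1, c2, c))    # add the new vertex C and connect it to C1, C2
    return tree

# Usage with three points: 0 and 1 are close, 2 is far from both.
d0 = {frozenset({0, 1}): 1.0, frozenset({0, 2}): 9.0, frozenset({1, 2}): 8.0}
print(hierarchical_clustering(d0, 3))   # e.g. [(0, 1, 3), (2, 3, 4)]
```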
Hierarchical Clustering: Two Distance Metrics
1. Minimum Distance:
   d_min(C, C*) = min d(x, y), where the minimum is taken over all x in C and all y in C*.
2. Average Distance:
   d_avg(C, C*) = (1 / (|C| · |C*|)) Σ d(x, y), where the sum is taken over all x in C and all y in C*.
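The two metrics translate directly into Python; a small sketch (function names are illustrative, and `d` is any pairwise distance function):

```python
from math import dist

def d_min(C, C_star, d):
    """Minimum distance between clusters: min of d(x, y) over all x in C, y in C*."""
    return min(d(x, y) for x in C for y in C_star)

def d_avg(C, C_star, d):
    """Average distance between clusters: mean of d(x, y) over all x in C, y in C*."""
    return sum(d(x, y) for x in C for y in C_star) / (len(C) * len(C_star))

print(d_min([(0, 0)], [(3, 4), (6, 8)], dist))   # 5.0
print(d_avg([(0, 0)], [(3, 4), (6, 8)], dist))   # 7.5
```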
Section 3:
k-Means Clustering
Squared Error Distortion
• Given a data point v and a set of points X, define the distance
from v to X, d(v, X) as the (Euclidean) distance from v to the
closest point in X.
• Given a set of n data points V = {v1…vn} and a set of k points
X, define the Squared Error Distortion as
dV, X   
n
i1

dv i , X 
2
n
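A direct Python transcription of the two definitions (function names are illustrative):

```python
from math import dist

def d_point_to_set(v, X):
    """d(v, X): Euclidean distance from point v to the closest point in X."""
    return min(dist(v, x) for x in X)

def squared_error_distortion(V, X):
    """d(V, X): average squared distance from each data point to its nearest center."""
    return sum(d_point_to_set(v, X) ** 2 for v in V) / len(V)

# Four data points, two candidate centers.
print(squared_error_distortion([(0, 0), (0, 1), (5, 5), (5, 6)], [(0, 0.5), (5, 5.5)]))  # 0.25
```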
K-Means Clustering Problem: Formulation
• Input: A set V consisting of n points and a parameter k
• Output: A set X consisting of k points (cluster centers) that
minimize the squared error distortion d(V, X) over all possible
choices of X.
An Easy Case: k = 1
• Input: A set, V, consisting of n points
• Output: A single point x (cluster center) that minimizes the
squared error distortion d(V, x) over all possible choices of x.
• The 1-means clustering problem is very easy.
• However, it becomes NP-Complete for more than one center
(i.e. k > 1).
• An efficient heuristic for the case k > 1 is Lloyd’s Algorithm.
K-Means Clustering: Lloyd’s Algorithm
Lloyd’s Algorithm
   Arbitrarily assign the k cluster centers
   while the cluster centers keep changing
      Assign each data point to the cluster C_i corresponding to the closest cluster representative (center), 1 ≤ i ≤ k
      After all data points are assigned, compute new cluster representatives as the center of gravity of each cluster; that is, the new representative of cluster C is
         ( Σ_{v in C} v ) / |C|

Note: This may lead to only a locally optimal clustering.
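A minimal Python sketch of Lloyd’s algorithm as described above (point and center values are illustrative; ties and empty clusters are handled in the simplest possible way):

```python
from math import dist

def lloyd(points, centers):
    """Iterate the assignment and center-of-gravity steps until the centers stop changing."""
    while True:
        # Assignment step: each data point goes to its closest cluster center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: dist(p, centers[j]))
            clusters[i].append(p)
        # Update step: each center moves to the center of gravity of its cluster.
        new_centers = [
            tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        if new_centers == centers:      # converged (possibly only a local optimum)
            return new_centers
        centers = new_centers

# k = 2 with arbitrary initial centers.
print(lloyd([(1, 1), (1.5, 2), (8, 8), (9, 9)], [(0, 0), (10, 10)]))
```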
Lloyd’s Algorithm: Illustration
[Figure: successive iterations in which the cluster centers x1, x2, and x3 move toward the centers of gravity of their clusters.]
Conservative K-Means Algorithm
• Lloyd’s algorithm is fast, but in each iteration it moves many
data points, which does not necessarily improve the clustering.
• A more conservative greedy method is to move one point at a
time, and only if the move improves the overall clustering cost.
• The smaller the clustering cost of a partition of data points, the
better that clustering is.
• Different methods (e.g., the squared error distortion) can be
used to measure this clustering cost.
K-Means “Greedy” Algorithm
ProgressiveGreedyK-Means(k)
   Select an arbitrary partition P into k clusters
   while forever
      bestChange ← 0
      for every cluster C
         for every element i not in C
            if moving i to cluster C reduces its clustering cost
               if cost(P) – cost(P_{i → C}) > bestChange
                  bestChange ← cost(P) – cost(P_{i → C})
                  i* ← i
                  C* ← C
      if bestChange > 0
         Change partition P by moving i* to C*
      else
         return P
Note: We are free to change how the clustering cost is calculated.
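A sketch of the progressive greedy variant in Python, using the squared distance to each cluster’s center of gravity as the clustering cost (one of several possible choices, per the note above); all names are illustrative.

```python
from math import dist

def cost(partition):
    """Clustering cost: total squared distance of points to their cluster's center of gravity."""
    total = 0.0
    for cluster in partition:
        center = tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
        total += sum(dist(p, center) ** 2 for p in cluster)
    return total

def progressive_greedy_k_means(partition):
    """Repeatedly move the single point whose move most reduces the clustering cost."""
    while True:
        best_change, best_move = 0.0, None
        for ci, target in enumerate(partition):       # every cluster C
            for cj, source in enumerate(partition):
                if ci == cj or len(source) == 1:      # never empty a cluster
                    continue
                for p in source:                      # every element i not in C
                    trial = [list(c) for c in partition]
                    trial[cj].remove(p)
                    trial[ci].append(p)
                    change = cost(partition) - cost(trial)
                    if change > best_change:
                        best_change, best_move = change, (p, cj, ci)
        if best_move is None:
            return partition                          # no single move improves the cost
        p, src, dst = best_move
        partition[src].remove(p)
        partition[dst].append(p)

# A deliberately bad starting partition into k = 2 clusters.
print(progressive_greedy_k_means([[(0, 0), (9, 9)], [(1, 0), (10, 10)]]))
# -> [[(0, 0), (1, 0)], [(10, 10), (9, 9)]]
```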
Section 4:
Corrupted Cliques
Problem
Clique Graphs
• A clique in a graph is a subgraph in which every vertex is
connected to every other vertex.
• A clique graph is a graph in which each connected component
is a clique.
• Example: This clique graph is composed of three cliques.
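As a small illustration, here is a Python sketch (the adjacency-set representation is an assumption) that tests whether a graph is a clique graph, i.e. whether every connected component is fully connected:

```python
def is_clique_graph(adj):
    """adj: dict vertex -> set of neighbours. True if every connected component is a clique."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        # Collect the connected component containing `start`.
        component, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in component:
                continue
            component.add(v)
            stack.extend(adj[v] - component)
        seen |= component
        # In a clique, every vertex is adjacent to all other members of its component.
        if any(adj[v] != component - {v} for v in component):
            return False
    return True

# Two triangles form a clique graph; a path of three vertices does not.
triangles = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5, 6}, 5: {4, 6}, 6: {4, 5}}
path = {1: {2}, 2: {1, 3}, 3: {2}}
print(is_clique_graph(triangles), is_clique_graph(path))  # True False
```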
Transforming Any Graph into a Clique Graph
• A graph can be transformed into a clique graph by adding
or removing edges.
• Example:
Corrupted Cliques Problem
• Input: A graph G
• Output: The smallest number of additions and removals of
edges that will transform G into a clique graph.
Distance Graphs and Clustering
• Turn the distance matrix into a distance graph:
• Genes are represented as vertices in the graph.
• Choose a distance threshold θ.
• If the distance between two vertices is below θ, draw an
edge between them.
• The resulting graph may contain cliques.
• These cliques represent clusters of closely located data points!
• If the distance graph is a clique graph, we are done.
• Otherwise, we have the Corrupted Cliques Problem.
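A minimal Python sketch of the construction just described (the gene names, the lookup helper, and θ = 7 are illustrative):

```python
def distance_graph(genes, d, theta):
    """Draw an edge between two genes whenever their distance is below theta."""
    adj = {g: set() for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if d(g, h) < theta:
                adj[g].add(h)
                adj[h].add(g)
    return adj

# Toy distance matrix: only g1 and g2 are closer than theta = 7.
table = {("g1", "g2"): 2.0, ("g1", "g3"): 9.0, ("g2", "g3"): 8.5}
lookup = lambda a, b: table[(a, b)] if (a, b) in table else table[(b, a)]
print(distance_graph(["g1", "g2", "g3"], lookup, theta=7))
```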
Distance Graphs: Example
• Here we have a
distance matrix.
• We use it to form a
distance graph with
θ = 7.
• We remove two edges
and form a clique
graph with three
clusters.
[Figures: the distance graph (θ = 7) and the resulting clique graph]
Section 5:
CAST Clustering
Algorithm
Heuristics for the Corrupted Cliques Problem
• The Corrupted Cliques Problem is NP-Hard, so heuristics exist
to solve it approximately.
• CAST (Cluster Affinity Search Technique): A practical and
fast heuristic:
• We first form arbitrary clusters and then iteratively change
them based on each gene’s distance to each cluster.
• CAST is based on the notion of genes close to cluster C or
distant from cluster C.
• The distance between gene i and cluster C, denoted d(i, C),
is the average distance between gene i and all genes in C.
• Gene i is close to cluster C if d(i, C) < θ; otherwise, distant.
CAST Algorithm
• Let S = set of elements, G = distance graph, θ = threshold
CAST(S, G, θ)
   P ← Ø
   while S ≠ Ø
      v ← vertex of maximal degree in the distance graph G
      C ← {v}
      while there exists a close gene i not in C or a distant gene i in C
         Find the nearest close gene i not in C and add it to C
         Remove the farthest distant gene i from C
      Add cluster C to partition P
      S ← S \ C
      Remove the vertices of cluster C from the distance graph G
   return P
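A compact Python sketch of CAST under the definitions above (the adjacency representation of G and the distance function d are assumptions; d(i, C) is the average distance from gene i to the members of C):

```python
def cast(S, G, d, theta):
    """S: set of genes; G: distance-graph adjacency (gene -> set of neighbours);
    d(i, j): distance between two genes; theta: closeness threshold."""
    P = []
    S = set(S)
    while S:
        # v <- vertex of maximal degree in the remaining distance graph.
        v = max(S, key=lambda u: len(G[u] & S))
        C = {v}

        def avg(i):                      # d(i, C): average distance from gene i to C
            return sum(d(i, j) for j in C) / len(C)

        changed = True
        while changed:                   # while a close gene outside C or a distant gene in C exists
            changed = False
            close = [i for i in S - C if avg(i) < theta]
            if close:
                C.add(min(close, key=avg))          # add the nearest close gene
                changed = True
            distant = [i for i in C if avg(i) >= theta]
            if distant:
                C.discard(max(distant, key=avg))    # remove the farthest distant gene
                changed = True
        P.append(C)                       # add cluster C to the partition P
        S -= C                            # S <- S \ C; C's vertices leave the graph
    return P
```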