# A Nearly-Linear-Time Spectral Algorithm for Balanced Graph Partitioning

Lorenzo Orecchia (MIT)
Joint work with Nisheeth Vishnoi (MSR India) and Sushant Sachdeva (Princeton)
Spectral Algorithms for Graph Problems
What are they?
Spectral algorithms see the input graph as a linear operator.
[Figure: a graph representing similarities between music artists. Image by David Gleich.]
Algorithmic primitives:
- matrix multiplication,
- eigenvector decomposition,
- Gaussian elimination
Spectral Algorithms for Graph Problems
Why are they useful …
… in practice?
• Simple to implement
• Can exploit very efficient linear algebra routines
• Perform provably well for many problems
… in theory?
• Many new interesting results based on spectral techniques:
Laplacian Solvers, Sparsification, s-t Maxflow, etc.
• Geometric viewpoint
• Connections to Markov Chains and Probability Theory
Spectral Algorithms for Graph Partitioning
Spectral algorithms are widely used in many graph-partitioning applications:
clustering, image segmentation, community detection, etc.
CLASSICAL VIEW:
- Based on Cheeger’s Inequality
- Sweep cuts of eigenvectors reveal sparse cuts in the graph
NEW TREND:
- Random walk vectors replace eigenvectors:
• Fast Algorithms for Graph Partitioning
• Local Graph Partitioning
• Real Network Analysis
- Different random walks: PageRank, Heat-Kernel, etc.
Why Random Walks? A Practitioner’s View
1) Quick approximation to eigenvector in massive graphs

$D$ = diagonal degree matrix
$W = AD^{-1}$ = natural random walk matrix
$L = D - A$ = Laplacian matrix

The second eigenvector of the Laplacian can be computed by iterating $W$: for a random $y_0$ such that $y_0^T D^{-1}\vec{1} = 0$, compute $D^{-1} W^t y_0$. In the limit,
$$x_2(L) = \lim_{t \to \infty} \frac{D^{-1} W^t y_0}{\|W^t y_0\|_{D^{-1}}}.$$

Heuristic: for massive graphs, pick $t$ as large as computationally affordable.
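The heuristic above can be sketched in a few lines. This is an illustrative toy (not the talk's code): a small two-triangle graph joined by a bridge, with a lazy walk used for numerical safety.

```python
# Sketch (assumed toy 6-vertex graph, not from the talk): approximate the
# second Laplacian eigenvector by iterating the natural random walk W = A D^{-1}.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
W = A / d                            # A D^{-1}: column j scaled by 1/d_j
y = np.array([0.3, -1.2, 0.8, 0.1, -0.5, 0.9])   # stands in for a random start
y -= d * (y.sum() / d.sum())         # kill the stationary component (1^T y = 0)
for _ in range(300):
    y = 0.5 * (y + W @ y)            # lazy walk avoids periodicity issues
    y -= d * (y.sum() / d.sum())     # re-project for numerical stability
    y /= np.linalg.norm(y)
x2 = y / d                           # D^{-1} W^t y_0, up to scaling
# The sign pattern of x2 separates the two triangles across the bridge edge.
```

The sign pattern of the resulting vector already identifies the sparse cut, which is exactly what the sweep-cut step exploits.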
Why Random Walks? A Practitioner’s View
1) Quick approximation to eigenvector in massive graphs
2) Statistically robust analogue of eigenvector

Real-world graphs are noisy: the input graph is a noisy measurement of a ground-truth graph.

GOAL: estimate the eigenvector of the ground-truth graph.
OBSERVATION: the eigenvector of the input graph can have very large variance, as it can be very sensitive to noise.
RANDOM-WALK VECTORS provide more stable estimates.
This Talk
QUESTION: why random-walk vectors in the design of fast algorithms?
ANSWER: they are a stable, regularized version of the eigenvector.
GOALS OF THIS TALK:
1) Showcase how these ideas apply to balanced graph partitioning
2) Give an optimization perspective on why random walks arise
Problem Definition and Overview of Results
Partitioning Graphs – Conductance
Undirected, unweighted $G = (V, E)$, $|V| = n$, $|E| = m$.

Conductance of $S \subseteq V$:
$$\phi(S) = \frac{|E(S, \bar{S})|}{\min\{\mathrm{vol}(S), \mathrm{vol}(\bar{S})\}},$$
where $\mathrm{vol}(S) = |E(S, V)|$ is the sum of the degrees of the vertices in $S$.

Example (from the figure): a cut with $|E(S, \bar{S})| = 4$ and $\mathrm{vol}(S) = 8$ has $\phi(S) = 1/2$.
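The definition above translates directly into code. A minimal sketch for an unweighted, undirected graph given as an edge list:

```python
# Conductance phi(S) = cut(S, S-bar) / min(vol(S), vol(S-bar)),
# following the definition in the slide.
def conductance(edges, S, n):
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    vol_S = sum(deg[u] for u in S)
    vol_rest = sum(deg) - vol_S
    return cut / min(vol_S, vol_rest)

# 4-cycle: cutting it in half crosses 2 of the 4 edges, each side has volume 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
phi = conductance(edges, {0, 1}, 4)   # 2 / min(4, 4) = 0.5
```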
Partitioning Graphs – Balanced Cut
NP-HARD DECISION PROBLEM: does $G$ have a $b$-balanced cut of conductance $< \gamma$?
That is, is there $S \subseteq V$ with $\phi(S) < \gamma$ and
$$b \le \frac{\mathrm{vol}(S)}{\mathrm{vol}(V)} \le \frac{1}{2}?$$

• Important primitive for many recursive algorithms.
• Applications to clustering and graph decomposition.
Approximation Algorithms
NP-HARD DECISION PROBLEM: does $G$ have a $b$-balanced cut of conductance $< \gamma$?

| Algorithm | Method | Distinguishes $\phi \ge \gamma$ vs. | Running time |
| --- | --- | --- | --- |
| Recursive Eigenvector | Spectral | $O(\sqrt{\gamma})$ | $\tilde{O}(mn)$ |
| [Leighton, Rao ’88] | Flow | $O(\gamma \log n)$ | $\tilde{O}(m^{3/2})$ [AK’07, OSVV’08] |
| [Arora, Rao, Vazirani ’04] | SDP (Flow + Spectral) | $O(\gamma \sqrt{\log n})$ | $\tilde{O}(m^{3/2})$ [Sherman ’09] |
| Fast SDP solvers | SDP (Flow + Spectral) | $\gamma \cdot \mathrm{polylog}(n)$ | $\tilde{O}(m^{1+\epsilon})$ |
Approximation Algorithms
Does $G$ have a $b$-balanced cut of conductance $< \gamma$?

| Algorithm | Method | Distinguishes $\phi \ge \gamma$ vs. | Running time |
| --- | --- | --- | --- |
| Recursive Eigenvector | Spectral | $O(\sqrt{\gamma})$ | $\tilde{O}(mn)$ |
| [Spielman, Teng ’04] | Local Random Walks | $O(\sqrt{\gamma \log^3 n})$ | $O(m/\gamma^2)$ |
| [Andersen, Chung, Lang ’07] | Local Random Walks | $O(\sqrt{\gamma \log n})$ | $\tilde{O}(m/\gamma)$ |
| [Andersen, Peres ’09] | Local Random Walks | $O(\sqrt{\gamma \log n})$ | $\tilde{O}(m/\sqrt{\gamma})$ |
| OUR ALGORITHM | Random Walks | $O(\sqrt{\gamma})$ | $\tilde{O}(m)$ |
Recursive Eigenvector Algorithm
INPUT: $(G, b, \gamma)$. DECISION: does there exist a $b$-balanced $S$ with $\phi(S) < \gamma$?

• Compute the slowest-mixing eigenvector of $G$ and the corresponding Laplacian eigenvalue $\lambda_2$.
• If $\lambda_2 \ge \gamma$, output NO. Otherwise, sweep the eigenvector to find $S_1$ such that $\phi(S_1) \le O(\sqrt{\gamma})$.
• If $S_1$ is $(b/2)$-balanced, output $S_1$. Otherwise, consider the graph $G_1$ induced by $G$ on $V \setminus S_1$, with self-loops replacing the edges going to $S_1$.
• Recurse on $G_1$.
• If the union $\cup S_i$ becomes $(b/2)$-balanced, output $\cup S_i$.
• If at some iteration $\lambda_2(G_i) \ge \gamma$, the remaining large induced expander is a NO-certificate.

RUNNING TIME: $\tilde{O}(mn)$ — $\tilde{O}(m)$ per iteration, $O(n)$ iterations.
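The sweep step above is easy to make concrete. A minimal sketch (toy instance, not the talk's code): sort vertices by eigenvector value and return the best-conductance prefix.

```python
# Sweep cut: scan prefixes of the vertices ordered by eigenvector value,
# maintaining the cut size incrementally, and keep the sparsest prefix.
import numpy as np

def sweep_cut(x, edges, n):
    adj = [[] for _ in range(n)]
    deg = [0] * n
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    total_vol = 2 * len(edges)
    order = np.argsort(x)
    in_S = [False] * n
    vol_S, cut = 0, 0
    best_S, best_phi = None, float("inf")
    for idx, u in enumerate(order[:-1]):      # skip the trivial full set
        u = int(u)
        in_S[u] = True
        vol_S += deg[u]
        for v in adj[u]:                      # edge becomes internal or cut
            cut += -1 if in_S[v] else 1
        phi = cut / min(vol_S, total_vol - vol_S)
        if phi < best_phi:
            best_phi = phi
            best_S = {int(w) for w in order[: idx + 1]}
    return best_S, best_phi

# Two triangles joined by a bridge; sweeping the second Laplacian eigenvector
# finds the bridge cut, with conductance 1/7.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
x = np.linalg.eigh(L)[1][:, 1]                # second eigenvector of L
S, phi = sweep_cut(x, edges, n)
```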
Recursive Eigenvector: The Worst Case
An expander attached to $\Omega(n)$ nearly-disconnected components of varying conductance.

NB: Recursive Eigenvector eliminates one component per iteration, so $\Omega(n)$ iterations are necessary, each requiring $\Omega(m)$ time — $\Omega(mn)$ in total.
GOAL: eliminate unbalanced low-conductance cuts faster.
Our Algorithm: Contributions

| Algorithm | Method | Distinguishes $\phi \ge \gamma$ vs. | Time |
| --- | --- | --- | --- |
| Recursive Eigenvector | Eigenvector | $O(\sqrt{\gamma})$ | $\tilde{O}(mn)$ |
| OUR ALGORITHM | Random Walks | $O(\sqrt{\gamma})$ | $\tilde{O}(m)$ |

MAIN FEATURES:
• Compute $O(\log n)$ global heat-kernel random-walk vectors at each iteration.
• Unbalanced cuts are removed in $O(\log n)$ iterations.
• A method to compute heat-kernel vectors, i.e. $e^{-tL} v$, in time $\tilde{O}(m)$.

TECHNICAL COMPONENTS:
1) A new iterative algorithm with a simple random-walk interpretation.
2) A novel analysis of the Lanczos method for computing heat-kernel vectors.
Why Random Walks
GOAL: eliminate unbalanced cuts of low conductance efficiently.

• The graph eigenvector may be correlated with only one sparse unbalanced cut: a single vector yields a single cut.
• Consider instead the heat-kernel random-walk matrix $H_G^{\tau}$ for $\tau = \log n/\gamma$: the vector embedding $\{H_G^{\tau} e_i\}$, where $H_G^{\tau} e_i$ is the probability vector of the walk started at vertex $i$, reveals multiple cuts at once.
• Long vectors correspond to slow-mixing random walks, and these expose the unbalanced cuts of conductance $< O(\sqrt{\gamma})$.
Heat-Kernel Random Walk:
A Regularization View
Background: Heat-Kernel Random Walk
For simplicity, take $G$ to be $d$-regular.

• The heat-kernel random walk is a continuous-time Markov chain over $V$, modeling the diffusion of heat along the edges of $G$.
• Transitions take place in continuous time $t$, with an exponential distribution:
$$\frac{\partial p(t)}{\partial t} = -\frac{1}{d} L\, p(t), \qquad p(t) = e^{-\frac{t}{d} L}\, p(0) =: H_G^{t}\, p(0).$$
• The heat kernel can be interpreted as a Poisson distribution over the number of steps of the natural random walk $W = AD^{-1}$:
$$e^{-\frac{t}{d} L} = e^{-t} \sum_{k=0}^{\infty} \frac{t^k}{k!} W^k.$$
• In practice, one can replace the heat kernel with the natural random walk $W^t$.
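The Poisson identity above gives a direct way to approximate a heat-kernel vector: truncate the Poisson-weighted sum of walk steps. A minimal sketch for a small regular graph:

```python
# Compute H_G^t p0 = e^{-(t/d)L} p0 for a d-regular graph by truncating
# e^{-t} * sum_k (t^k / k!) W^k p0, with W = A D^{-1}.
import math
import numpy as np

def heat_kernel_vector(A, t, p0, num_terms=80):
    d = A.sum(axis=1)
    W = A / d                        # natural random walk W = A D^{-1}
    out = np.zeros_like(p0, dtype=float)
    term = np.asarray(p0, dtype=float)
    coeff = math.exp(-t)             # Poisson(t) weight for k = 0 steps
    for k in range(num_terms):
        out += coeff * term
        term = W @ term              # W^{k+1} p0
        coeff *= t / (k + 1)         # Poisson weight for k+1 steps
    return out

# Triangle graph: the walk mixes to uniform, and probability mass is conserved.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
p = heat_kernel_vector(A, 5.0, np.array([1.0, 0.0, 0.0]))
```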
Regularized Spectral Optimization

Eigenvector program:
$$\min \tfrac{1}{d}\, x^T L x \quad \text{s.t.} \quad \|x\|^2 = 1,\; x^T \vec{1} = 0.$$

SDP formulation:
$$\min \tfrac{1}{d}\, L \bullet X \quad \text{s.t.} \quad I \bullet X = 1,\; \vec{1}\vec{1}^T \bullet X = 0,\; X \succeq 0.$$

The two programs have the same optimum. Take the optimal solution $X^{\ast} = x^{\ast} (x^{\ast})^T$.
Regularized Spectral Optimization

The SDP variable is a density matrix: in the eigenvector decomposition
$$X = \sum_i p_i\, v_i v_i^T, \qquad p_i \ge 0,\quad \sum_i p_i = 1,\quad v_i^T \vec{1} = 0,$$
the eigenvalues of $X$ define a probability distribution.

The optimal solution $X^{\ast} = x^{\ast}(x^{\ast})^T$ puts all its mass on a single vector: a trivial distribution.
Regularized Spectral Optimization

Regularized SDP, with regularizer $F$ and parameter $\eta$:
$$\min \tfrac{1}{d}\, L \bullet X + \eta \cdot F(X) \quad \text{s.t.} \quad I \bullet X = 1,\; J \bullet X = 0,\; X \succeq 0.$$

The regularizer $F$ forces the distribution of eigenvalues of $X$ to be non-trivial: instead of the rank-one $X^{\ast} = x^{\ast}(x^{\ast})^T$, regularization spreads the spectrum over $X^{\ast} = \sum_i p_i v_i v_i^T$.
Regularizers
These are SDP versions of common regularizers:
• von Neumann entropy: $F_H(X) = \mathrm{Tr}(X \log X) = \sum_i p_i \log p_i$
• $p$-norm, $p > 1$: $F_p(X) = \tfrac{1}{p} \|X\|_p^p = \tfrac{1}{p} \mathrm{Tr}(X^p) = \tfrac{1}{p} \sum_i p_i^p$
• And more, e.g. log-determinant.
Our Main Result
Regularized SDP:
$$\min \tfrac{1}{d}\, L \bullet X + \eta \cdot F(X) \quad \text{s.t.} \quad I \bullet X = 1,\; J \bullet X = 0,\; X \succeq 0.$$

RESULT: an explicit correspondence between regularizers and random walks.

| Regularizer | Optimal solution of regularized program |
| --- | --- |
| $F = F_H$ (entropy) | $X^{\ast} \propto H^t$, where $t$ depends on $\eta$ |
| $F = F_p$ ($p$-norm) | $X^{\ast} \propto T_q^{1/(p-1)}$, where $q$ depends on $\eta$ |
Discussion: Vector vs. Density Matrix
The variable is a density matrix, not a vector.
Q: Can we produce a single vector?
A: The density matrix $X$ describes a distribution over vectors. Assuming the distribution is Gaussian, sample a vector $x = X^{1/2} u$, where $u \sim N(0, I_{n-1})$.
For example, the vector $x = H^{t/2} u$ — a random walk applied to a random seed — is a sample from the solution of an entropy-regularized problem.
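The sampling rule above is easy to illustrate. A minimal sketch (toy 4-cycle, fixed seed vector standing in for the Gaussian draw): form $H^{t/2}$ by diagonalization and smooth a random seed with it.

```python
# x = H^{t/2} u: a random-walk-smoothed random vector, i.e. a Gaussian sample
# from the entropy-regularized optimum X* proportional to H^t.
import numpy as np

A = np.array([[0., 1., 0., 1.],      # 4-cycle, 2-regular
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
n, d, t = 4, 2, 3.0
L = np.diag(A.sum(axis=1)) - A
w, U = np.linalg.eigh(L / d)
H_half = U @ np.diag(np.exp(-(t / 2) * w)) @ U.T   # H^{t/2} = e^{-(t/2d)L}
u = np.array([0.7, -1.1, 0.4, 2.0])  # stands in for a draw u ~ N(0, I)
u = u - u.mean()                     # project off the all-ones direction
x = H_half @ u                       # random walk applied to a random seed
# x stays orthogonal to 1 (H fixes the uniform direction) and is a
# contracted, "smoothed" version of u.
```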
PART 1
The New Iterative Procedure
Our Algorithm for Balanced Cut
IDEA: replace the eigenvector in the recursive eigenvector algorithm with the heat-kernel random walk $H_G^{\tau}$ for $\tau = \log n/\gamma$, chosen to emphasize cuts of conductance $\approx \gamma$.

Consider the embedding $\{v_i\}$ given by $H_G^{\tau}$: $v_i = H_G^{\tau} e_i$. The stationary distribution is the uniform vector $\vec{1}/n$, as $G$ is regular.

MIXING: define the total deviation from stationarity of the walk $H_G^{\tau}$ over a set $S \subseteq V$:
$$\Psi(H_G^{\tau}, S) = \sum_{i \in S} \|v_i - \vec{1}/n\|^2.$$
FUNDAMENTAL QUANTITY TO UNDERSTAND CUTS IN G.
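The potential $\Psi$ is a one-liner once the walk matrix is in hand. A minimal sketch, on a graph where the walk mixes almost instantly (the complete graph $K_4$ as a stand-in for an expander), so $\Psi$ over all of $V$ is already tiny at small $\tau$:

```python
# Psi(H, S) = sum_{i in S} ||H e_i - 1/n||^2: total deviation from
# stationarity of the heat-kernel walk over the set S.
import numpy as np

def psi(H, S):
    n = H.shape[0]
    stat = np.full(n, 1.0 / n)
    return sum(float(np.sum((H[:, i] - stat) ** 2)) for i in S)

n, tau = 4, 5.0
A = np.ones((n, n)) - np.eye(n)       # complete graph K4, (n-1)-regular
L = np.diag(A.sum(axis=1)) - A
w, U = np.linalg.eigh(L / (n - 1))
H = U @ np.diag(np.exp(-tau * w)) @ U.T   # heat kernel H^tau
total_dev = psi(H, range(n))          # Case 1 of the analysis: tiny
```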
Our Algorithm: Case Analysis
Recall: $\tau = \log n/\gamma$, $v_i = H_G^{\tau} e_i$, and $\Psi(H_G^{\tau}, S) = \sum_{i \in S} \|H_G^{\tau} e_i - \vec{1}/n\|^2$.

CASE 1: the random walks have mixed — all vectors are short:
$$\Psi(H_G^{\tau}, V) \le \frac{1}{\mathrm{poly}(n)}.$$
By the definition of $\tau$, this implies $\lambda_2 \ge \Omega(\gamma)$, and hence $\phi_G \ge \Omega(\gamma)$.
Our Algorithm
Recall: $\tau = \log n/\gamma$, $v_i = H_G^{\tau} e_i$, and $\Psi(H_G^{\tau}, S) = \sum_{i \in S} \|H_G^{\tau} e_i - \vec{1}/n\|^2$.

CASE 2: the random walks have not mixed:
$$\Psi(H_G^{\tau}, V) > \frac{1}{\mathrm{poly}(n)}.$$
Then we can either find an $\Omega(b)$-balanced cut with conductance $\le O(\sqrt{\gamma})$ (by random projection plus a sweep cut), OR a ball rounding yields $S_1$ such that $\phi(S_1) \le O(\sqrt{\gamma})$ and
$$\Psi(H_G^{\tau}, S_1) \ge \tfrac{1}{2}\, \Psi(H_G^{\tau}, V).$$
Our Algorithm: Iteration
CASE 2 (continued): we found an unbalanced cut $S_1$ with $\phi(S_1) \le O(\sqrt{\gamma})$ and $\Psi(H_G^{\tau}, S_1) \ge \tfrac{1}{2}\Psi(H_G^{\tau}, V)$.

Modify $G = G^{(1)}$ by adding edges across $(S_1, \bar{S}_1)$ to construct $G^{(2)}$ — analogous to removing the unbalanced cut $S_1$ in the recursive eigenvector algorithm.
Our Algorithm: Modifying G
We modify $G^{(t)}$ by adding edges across the unbalanced cut:
$$G^{(t+1)} = G^{(t)} + \gamma \sum_{i \in S_t} \mathrm{Star}_i,$$
where $\mathrm{Star}_i$ is the star graph rooted at vertex $i$.
The random walk can now escape $S_t$ more easily.
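In Laplacian form, the update above adds $\gamma \cdot L(\mathrm{Star}_i)$ for each $i \in S_t$. A minimal sketch of our reading of this step (toy path graph, illustrative values of $\gamma$):

```python
# G^{(t+1)} = G^{(t)} + gamma * sum_{i in S_t} Star_i, in Laplacian form.
# Star_i has unit edges from i to every other vertex.
import numpy as np

def add_stars(L, S, gamma):
    n = L.shape[0]
    L_new = L.copy()
    for i in S:
        star = np.zeros((n, n))
        for j in range(n):
            if j != i:
                star[i, j] = star[j, i] = -1.0
                star[j, j] = 1.0
        star[i, i] = n - 1.0          # root has degree n-1 in the star
        L_new += gamma * star
    return L_new

# Path graph P4: adding a star at an endpoint strictly improves connectivity,
# i.e. lambda_2 of the Laplacian goes up (by at least gamma here).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
L2 = add_stars(L, [0], 0.5)
lam2_before = np.linalg.eigvalsh(L)[1]
lam2_after = np.linalg.eigvalsh(L2)[1]
```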
Our Algorithm: Iteration
CASE 2 yields $S_t$ with $\Psi(H_{G^{(t)}}^{\tau}, S_t) \ge \tfrac{1}{2}\Psi(H_{G^{(t)}}^{\tau}, V)$; after modifying the graph,

POTENTIAL REDUCTION:
$$\Psi(H_{G^{(t+1)}}^{\tau}, V) \le \Psi(H_{G^{(t)}}^{\tau}, V) - \tfrac{1}{2}\Psi(H_{G^{(t)}}^{\tau}, S_t) \le \tfrac{3}{4}\Psi(H_{G^{(t)}}^{\tau}, V).$$
This is a crucial application of the stability of the random walk.
Summary and Potential Analysis
IN SUMMARY: at every step $t$ of the recursion, we either
1. produce an $\Omega(b)$-balanced cut of the required conductance, OR
2. find that $\Psi(H_{G^{(t)}}^{\tau}, V) \le \frac{1}{\mathrm{poly}(n)}$, OR
3. find an unbalanced cut $S_t$ of the required conductance, such that for the graph $G^{(t+1)}$, modified to have increased edges from $S_t$,
$$\Psi(H_{G^{(t+1)}}^{\tau}, V) \le \tfrac{3}{4}\Psi(H_{G^{(t)}}^{\tau}, V).$$

After $T = O(\log n)$ iterations, if no low-conductance $\Omega(b)$-balanced cut is found, then
$$\Psi(H_{G^{(T)}}^{\tau}, V) \le \frac{1}{\mathrm{poly}(n)}.$$
From this guarantee, using the definition of $G^{(T)}$, we derive an SDP certificate that no $b$-balanced cut of conductance $O(\gamma)$ exists in $G$.
NB: only $O(\log n)$ iterations are needed to remove unbalanced cuts.
Heat-Kernel and Certificates
• If no balanced cut of the required conductance is found, our potential analysis yields $\Psi(H_{G^{(T)}}^{\tau}, V) \le \frac{1}{\mathrm{poly}(n)}$, i.e.
$$L + \gamma \sum_{j=1}^{T-1} \sum_{i \in S_j} L(\mathrm{Star}_i) \succeq \gamma\, L(K_V),$$
so the modified graph has $\lambda_2 \ge \gamma$.

CLAIM: this is a certificate that no balanced cut of conductance $< \gamma$ existed in $G$. For any balanced cut $T$,
$$\phi(T) \ge \gamma - \gamma\, \frac{|\cup_j S_j|}{|T|} \ge \gamma - \gamma\, \frac{b/2}{b} \ge \gamma/2.$$
Comparison with Recursive Eigenvector
RECURSIVE EIGENVECTOR: we can only bound the number of iterations by the volume of graph removed; $\Omega(n)$ iterations are possible.
OUR ALGORITHM: use the variance of the random walk as the potential,
$$\Psi(P, V) = \sum_{i \in V} \|P e_i - \vec{1}/n\|^2,$$
so only $O(\log n)$ iterations are necessary. This is the right spectral notion of potential.
Comparison with Recursive Eigenvector
Recall the worst-case example for Recursive Eigenvector: an expander attached to many nearly-disconnected components. One iteration of our algorithm handles all the components at once.
Running Time
• Our algorithm runs in $O(\log n)$ iterations.
• In one iteration, we perform some simple computation (projection, sweep cut) on the vector embedding $H_{G^{(t)}}^{\tau}$. This takes time $\tilde{O}(md)$, where $d$ is the dimension of the embedding.
• We can use Johnson–Lindenstrauss to obtain $d = O(\log n)$.
• Hence, we only need to compute $O(\log^2 n)$ matrix-vector products $H_{G^{(t)}}^{\tau} u$.
• We show how to perform one such product in time $\tilde{O}(m)$, for all $\tau$.
• OBSTACLE: $\tau$, the mean number of steps of the heat-kernel random walk, is $\Omega(n^2)$ for the path graph.
PART 2
Matrix Exponential Computation
Computation of Matrix Exponential
GOAL: for a symmetric diagonally dominant $A$ with sparsity $m$, compute a vector $v$ such that
$$\|e^{-A} u - v\| \le \frac{1}{\mathrm{poly}(n)}$$
in time $\tilde{O}(m)$.

KNOWN: the Lanczos method takes $\tilde{O}(\sqrt{\|A\|})$ matrix-vector multiplications by $A$. In our setting $A = \frac{\log n}{\gamma} L$, so $\|A\| = \tilde{O}(1/\gamma)$, giving running time $\tilde{O}\!\left(\frac{m}{\sqrt{\gamma}}\right)$.
Review of Lanczos Method
IDEA: perform $k$ matrix-vector multiplications and form the $k$-Krylov subspace
$$R_k = \mathrm{Span}\{u, Au, A^2u, A^3u, \ldots, A^k u\}.$$
Compute an orthonormal basis $Q_k$ of $R_k$ and consider $A$ restricted to $R_k$:
$$T_k = Q_k^T A Q_k.$$
As $k$ grows, $T_k$ becomes a better approximation to $A$. For many functions $f$, even for small $k$,
$$Q_k f(T_k) Q_k^T \approx f(A).$$
This holds whenever $f$ is close to a polynomial of degree $< k$.
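The Krylov recipe above can be sketched directly (a dense illustrative toy, not the paper's optimized routine): build an orthonormal basis $Q_k$, form $T_k = Q_k^T A Q_k$, and apply $f = e^{-x}$ on the small matrix.

```python
# Krylov/Lanczos approximation of e^{-A} u: Q_k f(T_k) Q_k^T u ~ f(A) u.
# Full reorthogonalization is used for numerical robustness.
import numpy as np

def krylov_expm(A, u, k):
    n = len(u)
    Q = np.zeros((n, k))
    Q[:, 0] = u / np.linalg.norm(u)
    cols = 1
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # reorthogonalize against basis
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:                    # Krylov space became invariant
            break
        Q[:, j] = w / nrm
        cols = j + 1
    Q = Q[:, :cols]
    T = Q.T @ A @ Q                        # A restricted to the Krylov space
    wT, V = np.linalg.eigh(T)
    fT = V @ np.diag(np.exp(-wT)) @ V.T    # f(T_k) with f(x) = e^{-x}
    return Q @ (fT @ (Q.T @ u))

# With k = n the approximation is exact; compare against a dense e^{-A} u.
A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
u = np.array([1.0, 0.5, -0.25])
wA, UA = np.linalg.eigh(A)
exact = UA @ (np.exp(-wA) * (UA.T @ u))
approx = krylov_expm(A, u, 3)
```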
Lanczos Method and Approximation
CLAIM: there exists a polynomial $p$ of degree $\tilde{O}(\sqrt{\|A\|})$ such that
$$\sup_{x \in (0, \|A\|)} |p(x) - e^{-x}| \le \frac{1}{\mathrm{poly}(n)}.$$
This gives $\tilde{O}(\sqrt{\|A\|}) = \tilde{O}\!\left(\frac{1}{\sqrt{\gamma}}\right)$ matrix-vector multiplications by $A$ — and this is OPTIMAL for the polynomial approach.
Lanczos Method II: Inverse Iteration
IDEA: apply the Lanczos method to $B = (I + A/k)^{-1}$, with the $k$-Krylov subspace
$$R_k = \mathrm{Span}\{u, Bu, B^2u, B^3u, \ldots, B^k u\}.$$
APPROXIMATION: a degree-$k$ rational-function approximation to $e^{-x}$.
CLAIM: there exists a polynomial $p$ of degree $k = O(\log n)$ such that
$$\sup_{x \in (0, \|A\|)} \left| \frac{p(x)}{(1 + x/k)^k} - e^{-x} \right| \le \frac{1}{\mathrm{poly}(n)}.$$
So $k = O(\log n)$ matrix-vector multiplications by $B$ suffice.
RUNNING TIME: each multiplication by $B$ is an SDD linear-system solve, taking time $\tilde{O}(m)$, for a total running time of $\tilde{O}(m)$.
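A minimal sketch of the inverse-iteration idea: run the Krylov method on $B = (I + A/k)^{-1}$ and map the small problem's eigenvalues back through $x = k(1/b - 1)$ before applying $e^{-x}$. The dense inverse below stands in for the fast SDD solver used in the actual algorithm.

```python
# Krylov method on B = (I + A/k)^{-1}; e^{-A} u is recovered by mapping
# eigenvalues b of B back to x = k(1/b - 1) and applying exp(-x).
import numpy as np

def inverse_krylov_expm(A, u, k, dim):
    n = len(u)
    B = np.linalg.inv(np.eye(n) + A / k)   # stand-in for one SDD solve per step
    Q = np.zeros((n, dim))
    Q[:, 0] = u / np.linalg.norm(u)
    cols = 1
    for j in range(1, dim):
        w = B @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # reorthogonalize against basis
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:
            break
        Q[:, j] = w / nrm
        cols = j + 1
    Q = Q[:, :cols]
    T = Q.T @ B @ Q                        # B restricted to the Krylov space
    wT, V = np.linalg.eigh(T)
    x = k * (1.0 / wT - 1.0)               # eigenvalues of B -> eigenvalues of A
    fT = V @ np.diag(np.exp(-x)) @ V.T
    return Q @ (fT @ (Q.T @ u))

# With dim = n the result is exact; compare against a dense e^{-A} u.
A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
u = np.array([1.0, 0.5, -0.25])
wA, UA = np.linalg.eigh(A)
exact = UA @ (np.exp(-wA) * (UA.T @ u))
approx = inverse_krylov_expm(A, u, k=8, dim=3)
```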
Lanczos Method II: Conclusion
GENERAL RESULT: for a general SDD matrix $A$, we compute $v$ with $\|e^{-A}u - v\| \le \delta$ in time
$$\tilde{O}\big((m + n) \cdot \log(2 + \|A\|) \cdot \mathrm{polylog}(1/\delta)\big).$$
CONTRIBUTION: the analysis has a number of obstacles:
• the linear solver is approximate, so errors propagate through Lanczos;
• the approximate solves cause a loss of symmetry.
Conclusion
NOVEL ALGORITHMIC CONTRIBUTIONS:
• A Balanced-Cut algorithm using random walks, running in time $\tilde{O}(m)$.
• Computing heat-kernel vectors in time $\tilde{O}(m)$.
MAIN IDEA: random walks provide a very useful, stable analogue of the graph eigenvector, via regularization.
OPEN QUESTIONS: more applications of this idea? Applications beyond the design of fast algorithms?
A Different Interpretation
THEOREM: Suppose eigenvector x yields an unbalanced cut S of low conductance
and no balanced cut of the required conductance. With Σ_i d_i x_i = 0, then
Σ_{i∈S} d_i x_i^2 ≥ (1/2) Σ_{i∈V} d_i x_i^2.
In words, S contains most of the variance of the eigenvector.
[Figure: eigenvector values on the line, with S on one side of 0 and V−S on the other]
QUESTION: Does this mean the graph induced by G on V−S is much closer to having conductance at least γ?
ANSWER: NO. x may contain little or no information about G on V−S.
The next eigenvector may be only infinitesimally larger.
CONCLUSION: To make significant progress, we need an analogue of the eigenvector that captures sparse cuts.
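The localization phenomenon in the theorem can be seen numerically on a toy graph (an illustration of the statement, not the paper's construction: a clique with a short pendant path, chosen so the eigenvector's cut is unbalanced):

```python
import numpy as np

# A 20-clique with a 5-vertex pendant path: the second eigenvector cuts
# off the (unbalanced) tail S, and S carries almost all of the
# degree-weighted variance -- the situation described in the theorem.
nc, nt = 20, 5
n = nc + nt
A = np.zeros((n, n))
A[:nc, :nc] = 1.0
np.fill_diagonal(A, 0.0)
for i in range(nc - 1, n - 1):      # attach the path at clique vertex nc-1
    A[i, i + 1] = A[i + 1, i] = 1.0
d = A.sum(axis=1)

# Generalized eigenvector x with sum_i d_i x_i = 0, obtained from the
# normalized Laplacian N = D^{-1/2} (D - A) D^{-1/2}.
N = (np.diag(d) - A) / np.sqrt(np.outer(d, d))
_, Y = np.linalg.eigh(N)            # eigenvalues in ascending order
x = Y[:, 1] / np.sqrt(d)

S = np.arange(nc, n)                # the pendant path = the unbalanced cut
frac = (d[S] * x[S] ** 2).sum() / (d * x ** 2).sum()
print(frac)                         # well above 1/2: S holds most of the variance
```

Since x is nearly constant on the clique, almost nothing about the clique's internal structure survives in x, which is the point of the ANSWER above.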
Theorems for Our Algorithm
THEOREM 1 (WALKS HAVE NOT MIXED):
Ψ(P^(t), V) > 1/poly(n)  ⟹  can find a cut of conductance O(√γ)
Proof: Recall that P^(t) = e^{-τ Q^(t)} with τ = log n / γ, and
Ψ(P, V) = Σ_{i∈V} ||P e_i − 1/n||^2, where 1/n denotes the uniform distribution.
Using the definition of τ, the spectrum of P^(t) implies that
Σ_{ij∈E} ||P^(t) e_i − P^(t) e_j||^2  ≤  O(γ) · Ψ(P^(t), V)
   [EDGE LENGTH]                          [TOTAL VARIANCE]
Hence, by a random projection of the embedding {P^(t) e_i}, followed by a sweep cut,
we can recover the required cut.
SDP ROUNDING TECHNIQUE
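The rounding step can be sketched as follows. A toy setup, not the paper's implementation: the graph (two cliques joined by one edge), the number of walk steps, and the dense matrix-power computation of P^(t) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 10-cliques joined by a single edge: one planted sparse cut.
n = 20
A = np.zeros((n, n))
A[:10, :10] = 1.0
A[10:, 10:] = 1.0
np.fill_diagonal(A, 0.0)
A[9, 10] = A[10, 9] = 1.0
deg = A.sum(axis=1)

# Columns of P = W^t (lazy walk W = (I + A D^{-1})/2) play the role of
# the embedding {P^(t) e_i}.
W = 0.5 * (np.eye(n) + A / deg)     # A/deg divides column j by deg_j
P = np.linalg.matrix_power(W, 20)

# Random projection of the embedding onto one dimension...
x = rng.standard_normal(n) @ P

# ...followed by a sweep cut: sort by projected value, take the prefix
# of minimum conductance.
order = np.argsort(x)
vol_total = deg.sum()
best_phi, best_cut = np.inf, None
for t in range(1, n):
    inS = np.zeros(n, dtype=bool)
    inS[order[:t]] = True
    cut = A[inS][:, ~inS].sum()
    phi = cut / min(deg[inS].sum(), vol_total - deg[inS].sum())
    if phi < best_phi:
        best_phi, best_cut = phi, set(order[:t].tolist())

print(best_phi, sorted(best_cut))   # typically recovers one clique (conductance 1/91)
```

Vertices on the same side of the sparse cut have nearly identical walk distributions, so the one-dimensional projection separates the two sides and the sweep finds the cut.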
Theorems for Our Algorithm
THEOREM 2 (WALKS HAVE MIXED):
Ψ(P^(t), V) ≤ 1/poly(n)  ⟹  no Ω(b)-balanced cut of conductance O(γ)
Proof: Consider S = ∪_i S_i. Notice that S is unbalanced.
The assumption is equivalent to
L(K_V) • e^{-τL − O(log n) Σ_i L(S_i)} ≤ 1/poly(n).
By taking logs,
L + O(γ) Σ_i L(S_i) ⪰ Ω(γ) L(K_V).
SDP DUAL CERTIFICATE
This is a certificate that no Ω(b)-balanced cut of conductance O(γ) exists, as
evaluating the quadratic form at a vector representing a balanced cut U yields
φ(U) ≥ Ω(γ) − (vol(S)/vol(U)) · O(γ) ≥ Ω(γ),
as long as S is sufficiently unbalanced.
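The last inequality can be unpacked as follows (a sketch, under the assumptions that y is a degree-normalized cut vector for U with y^T L y / y^T L(K_V) y ≈ φ(U), and that each L(S_i) is supported on the small set S):

```latex
\phi(U) \;\approx\; \frac{y^\top L\, y}{y^\top L(K_V)\, y}
\;\ge\; \Omega(\gamma) \;-\; O(\gamma)\,\frac{\sum_i y^\top L(S_i)\, y}{y^\top L(K_V)\, y}
\;\ge\; \Omega(\gamma) \;-\; O(\gamma)\,\frac{\mathrm{vol}(S)}{\mathrm{vol}(U)} .
```

The first inequality rearranges the dual certificate; the second uses that the S_i only touch S, so their quadratic form is at most a vol(S)/vol(U) fraction of the total.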
SDP Interpretation
E_{{i,j}∈E_G} (1/2)||v_i − v_j||^2 ≤ γ                  [SHORT EDGES]
E_{{i,j}∈V×V} (1/2)||v_i − v_j||^2 = 1/(2m)             [FIXED VARIANCE]
∀i ∈ V:  E_{j∈V} (1/2)||v_i − v_j||^2 ≤ (1/b) · 1/(2m)  [LENGTH OF STAR EDGES]
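The three constraint types can be evaluated directly for a candidate embedding. A toy check, with everything illustrative: a 12-cycle, uniform (rather than degree-weighted) averages, and the cycle's second Laplacian eigenvector as the embedding.

```python
import numpy as np

# Toy check of the three SDP constraint types on a 12-cycle.
n = 12
edges = [(i, (i + 1) % n) for i in range(n)]
m = len(edges)

def avg_len(v, pairs):
    """Average of (1/2)(v_i - v_j)^2 over the given pairs."""
    return float(np.mean([0.5 * (v[i] - v[j]) ** 2 for i, j in pairs]))

all_pairs = [(i, j) for i in range(n) for j in range(n)]

# Candidate embedding: second Laplacian eigenvector of the cycle,
# rescaled so the FIXED VARIANCE constraint holds with value 1/(2m).
v = np.cos(2 * np.pi * np.arange(n) / n)
v *= np.sqrt((1 / (2 * m)) / avg_len(v, all_pairs))

short_edges = avg_len(v, edges)       # SHORT EDGES: should be at most gamma
variance = avg_len(v, all_pairs)      # FIXED VARIANCE: equals 1/(2m)
star = max(avg_len(v, [(i, j) for j in range(n)]) for i in range(n))
                                      # STAR EDGES: at most (1/b) * 1/(2m)
print(short_edges, variance, star)
```

For this embedding the average edge length is well below the overall variance, and the longest "star" is only a small constant factor above it, i.e. the star constraint holds with a constant b.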