A Random Polynomial-Time Algorithm for Approximating the Volume of Convex Bodies

By Group 7
The Problem Definition
The main result of the paper is a randomized algorithm for approximating the volume of a convex body K in n-dimensional Euclidean space.
The paper is joint work by Martin Dyer, Alan M. Frieze and Ravindran Kannan, presented in 1991.
The algorithm assumes the existence of a membership oracle, which answers whether or not a query point lies inside the convex body.
Throughout, the dimension n is assumed to be at least 3.
Never seen an n-dimensional body before?
What is a convex body?
•In Euclidean space, an object is defined as convex
– if for every pair of points within the object,
– every point on the straight line segment that joins the pair of points also lies within the object.
Convex body
Non-convex body
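The definition above can be probed numerically. Below is a minimal sketch (not from the paper; function names are hypothetical): given a membership oracle, sample pairs of interior points and check that sampled points on the joining segment are also interior.

```python
import random

def is_in_ball(x, radius=1.0):
    """Membership oracle for a Euclidean ball (a convex body)."""
    return sum(c * c for c in x) <= radius * radius

def looks_convex(member, dim, trials=1000, seed=0):
    """Heuristically test convexity: for random pairs of interior points,
    a sampled point on the joining segment must also be interior."""
    rng = random.Random(seed)
    inside = []
    while len(inside) < 2 * trials:
        p = [rng.uniform(-1, 1) for _ in range(dim)]
        if member(p):
            inside.append(p)
    for a, b in zip(inside[::2], inside[1::2]):
        t = rng.random()
        mid = [ai + t * (bi - ai) for ai, bi in zip(a, b)]
        if not member(mid):
            return False
    return True

print(looks_convex(is_in_ball, dim=3))  # → True: a ball is convex
```

A non-convex shape (say, the union of two disjoint balls) would fail this test whenever the segment leaves the body.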
Well-roundedness?
The algorithm assumes a well-rounded convex body, meaning the body's extent is under control in every direction.
Well-roundedness is defined as the property that the convex body lies between two concentric balls with radii:
1 and n√(n+1)
(where n = number of dimensions)
The running time of the algorithm
The algorithm takes time bounded by a polynomial in n, the dimension of the body K, and 1/ε, where ε is the relative error bound.
The expression for the running time is:
O(n^23 (log n)^5 ε^(-2) log(1/ε))
Motivation
•No deterministic polynomial-time algorithm can approximate the volume of an n-dimensional convex body to within a reasonable factor, so the problem was a major challenge for the authors.
•The authors developed a probabilistic approach to approximating the volume of an n-dimensional convex body, using the concept of rapidly mixing Markov chains.
•They reduced the probability of error by repeating the same technique multiple times.
•It was also the FIRST polynomial-time algorithm of its kind.
Deterministic approach and why it doesn’t work
The membership oracle answers in the following way: it says yes if a point lies inside the body (say, the unit ball) and no otherwise.
After a polynomial number of queries, we have a set of points, which we call P, and the convex hull of P is all the algorithm knows about the actual figure.
But the possible candidates for the figure range from the convex hull of P to the entire unit ball.
Deterministic approach and why it doesn’t work, contd.
•The ratio of the volume of the convex hull of P to the volume of the unit ball can be exponentially small, on the order of poly(n)/2^n.
•So no deterministic polynomial-time algorithm can approximate the volume to within such a factor.
Overview of today’s presentation
The algorithm itself will be covered by Chen Jingyuan.
Chen Min will introduce the concept of random walks.
The proof of correctness and the complexity of the algorithm are covered by Chin Hau.
Tuan Nguyen will elaborate on the concept of rapidly mixing Markov chains (RMMC).
Zheng Leong will elaborate on the proof of why the Markov chain is rapidly mixing.
Anurag will conclude with applications of, and improvements to, the current algorithm.
The Algorithm
Chen Jingyuan
The Dilation of a Convex Body
For any convex body K and a nonnegative real number α, the dilation of K by a factor of α is denoted αK = {αx : x ∈ K}.
The Problem Definition
Input: a convex body K ⊆ R^n.
Goal: compute the volume of K, vol_n(K).
• Here, n is the dimension of the body K.
Well-guaranteed Membership Oracle & Well-rounded
A ball contained in the body: B.
• B is the unit ball with the origin as center.
A ball containing the body: rB.
• Here r = n√(n+1), where n is the dimension of the body.
A black box
• which, presented with any point x in space, replies either that x is in the convex body or that it is not.
Basic Idea
B ⊆ K ⊆ rB
Construct a shrinking chain of bodies between rB and K, using dilation factors α_0 > α_1 > … > α_k with α_0 K ⊇ rB and α_k K = K:
rB = α_0 K ∩ rB ⊇ α_1 K ∩ rB ⊇ … ⊇ α_k K ∩ rB = K
The volume then telescopes:
vol(K) = vol(α_0 K ∩ rB) · ∏_{l=1}^{k} [ vol(α_l K ∩ rB) / vol(α_{l−1} K ∩ rB) ]
The first factor is the known volume vol(rB); each ratio vol(α_l K ∩ rB) / vol(α_{l−1} K ∩ rB) is bounded below by a constant and will be estimated by sampling.
The Algorithm
How to generate a group of dilations of K?
Let β = 1 + 1/n and k = ⌈log_β r⌉.
For i = 0, 1, …, k, set α_i = max{β^(−i) · r, 1}.
Then α_0 K = rK, so α_0 K ∩ rB = rB; and α_k K = K, so α_k K ∩ rB = K.
For i = 1, 2, …, k, the algorithm estimates the ratios
vol(α_i K ∩ rB) / vol(α_{i−1} K ∩ rB)
The Algorithm
How to find an approximation to the ratio
vol_n(α_i K ∩ rB) / vol_n(α_{i−1} K ∩ rB)?
The ratio will be found by a sequence of "trials" using a random walk.
In the following discussion, let K_i = α_i K ∩ rB.
The Algorithm
Space is divided into cubes of side δ; a cube has the form C = {x : q_i δ ≤ x_i ≤ (q_i + 1)δ}.
A trial runs the random walk in K_{i−1} for τ steps, ending in some cube C, and then picks a point x_0 at random from a fine grid inside C.
• Proper trial: if x_0 ∈ K_{i−1}, we call it a proper trial.
• Success trial: if x_0 ∈ K_i, we call it a success trial.
The Algorithm
Repeat until we have made m proper trials, m̂ of which are success trials.
The ratio m̂/m will be a good approximation to the ratio of volumes that we want to compute.
The Conclusion of the Algorithm
vol_n(K) = vol_n(α_k K ∩ rB)
         = vol_n(α_0 K ∩ rB) · ∏_{i=1}^{k} [ vol_n(α_i K ∩ rB) / vol_n(α_{i−1} K ∩ rB) ]
where each ratio vol_n(α_i K ∩ rB) / vol_n(α_{i−1} K ∩ rB) ≈ m̂/m, and vol_n(α_0 K ∩ rB) = vol_n(rB) is known exactly.
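The telescoping scheme can be sketched in a few lines. The sketch below is not the paper's algorithm: it replaces the δ-grid random walk with direct rejection sampling (practical only in low dimension), taking K = [−1, 1]² so that B ⊆ K ⊆ 2B with r = 2, and estimating the area of K (truly 4) as vol(rB) times a product of ratio estimates.

```python
import math, random

def in_K(x, y):
    """Membership oracle for K = [-1, 1]^2, a convex body with B ⊆ K ⊆ 2B."""
    return abs(x) <= 1 and abs(y) <= 1

def in_Ki(x, y, alpha, r=2.0):
    """Membership in K_i = alpha*K ∩ rB."""
    return in_K(x / alpha, y / alpha) and x * x + y * y <= r * r

def estimate_volume(trials=20000, seed=1):
    n, r = 2, 2.0
    beta = 1 + 1 / n
    k = math.ceil(math.log(r) / math.log(beta))
    alphas = [max(r / beta ** i, 1.0) for i in range(k + 1)]
    rng = random.Random(seed)
    vol = math.pi * r * r          # vol(K_0) = vol(rB) is known exactly
    for i in range(1, k + 1):
        hits = total = 0
        while total < trials:      # uniform samples from K_{i-1} by rejection
            x, y = rng.uniform(-r, r), rng.uniform(-r, r)
            if in_Ki(x, y, alphas[i - 1]):
                total += 1
                hits += in_Ki(x, y, alphas[i])
        vol *= hits / total        # estimates vol(K_i)/vol(K_{i-1})
    return vol

print(estimate_volume())  # ≈ 4.0, the true area of K
```

Each ratio is bounded away from 0, which is exactly why the chain of nearby dilations is used instead of comparing K to rB directly.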
Random Walk
Chen Min
Natural random walk
Technical random walk
Natural random walk
Some notations
1. Oracle:
A black box that tells you whether a point x belongs to K or not (e.g., a convex body is given by an oracle).
2. For any set K in R^n and a nonnegative real number α, we denote by K(α) the set of points at distance at most α from K.
K(α) is smoother than K.
3. Cubes:
We assume that space (R^n) is divided into cubes of side δ. Formally, a cube is defined as:
{x : m_i δ ≤ x_i ≤ (m_i + 1)δ for i = 1, 2, …, n}
where the m_i are integers.
Any convex body can be filled with cubes.
Natural random walk
Steps:
1. Start at any cube intersecting K.
2. Choose a facet of the present cube, each with probability 1/(2n), where n is the dimension of the space.
- if the cube across the chosen facet intersects K, the random walk moves to that cube
- else, it stays in the present cube
Example (n = 2): cube i has neighbours j, n, k intersecting K, while neighbour m does not. Then:
Pr(i→j) = Pr(i→n) = Pr(i→k) = 1/4, Pr(i→m) = 0, Pr(i→i) = 1/4.
Technical random walk
Why do we need the technical random walk?
1. We are only given K by an oracle, so we cannot decide exactly whether a cube C intersects K(α). Instead, the walk moves through K(α), which is smoother than K; this is what lets us prove rapid mixing.
2. To apply the theorem of Sinclair and Jerrum, the walk must satisfy the constraint that it stays in the same cube with probability ½.
Technical random walk
1st modification made on the natural random walk
Q: We want to walk through K(α). But we are only given K by an oracle, and this will not let us decide precisely whether a particular cube C intersects K(α).
Modification: the random walk is executed on all of the cubes that intersect K(α), plus possibly some other cubes, each of which intersects K(α + α′), where α′ = δ/(2√n).
To decide whether to move into a cube C, run the ellipsoid algorithm on the question "does C(α + α′) intersect K?". The quantity α′ provides a termination condition, and the algorithm terminates in one of two ways:
• if it finds a point x ∈ C(α + α′) ∩ K, then C ∩ K(α + α′) ≠ ∅, i.e. C weakly intersects K(α), and the walk may go to cube C;
• if it produces an ellipsoid of sufficiently small volume containing C(α + α′) ∩ K, then C ∩ K(α) = ∅, and the walk will not go to cube C.
Technical random walk
2nd modification made on the natural random walk
New rules:
1. The walk stays in the present cube with probability ½.
2. With probability 1/(4n) each, it picks one of the facets to move across to an adjacent cube.
In sum:
• natural walk: moves among cubes meeting K; each facet chosen with probability 1/(2n)
• technical walk: moves among cubes meeting K(α); each facet chosen with probability 1/(4n), staying put with probability at least ½
Example (n = 2), with neighbours j, n, k inside and m outside:
Pr(i→j) = Pr(i→n) = Pr(i→k) = 1/8, Pr(i→m) = 0, Pr(i→i) = 5/8.
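One step of the lazy "technical" walk on the grid of cubes can be sketched as below. This is illustrative, not the paper's implementation: it assumes membership of a cube in K(α) can be tested exactly (the paper approximates this test with the ellipsoid algorithm), and the function names are hypothetical.

```python
import random

def lazy_walk_step(cube, intersects_K_alpha, rng):
    """One step of the technical random walk on the grid of cubes.
    `cube` is a tuple of integer grid coordinates; `intersects_K_alpha(c)`
    tests whether cube c meets K(alpha)."""
    n = len(cube)
    if rng.random() < 0.5:           # lazy: stay put with probability 1/2
        return cube
    axis = rng.randrange(n)          # otherwise pick one of the 2n facets,
    sign = rng.choice((-1, 1))       # i.e. each with probability 1/(4n)
    nxt = list(cube)
    nxt[axis] += sign
    nxt = tuple(nxt)
    # move only if the neighbouring cube still meets K(alpha)
    return nxt if intersects_K_alpha(nxt) else cube

# toy run: pretend K(alpha) covers the grid cells with coordinates in [-3, 3]
rng = random.Random(0)
cube = (0, 0)
for _ in range(1000):
    cube = lazy_walk_step(cube, lambda c: max(abs(v) for v in c) <= 3, rng)
print(cube)  # some cell of the 7x7 grid
```

The natural walk is the same step with the laziness removed and the facet probability doubled to 1/(2n).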
Background on Markov chains
The technical random walk will converge to the uniform distribution.
Discrete-time Markov Chain
A Markov chain is a sequence of random variables with the Markov property.
Markov property: the future state depends only on the current state. Formally:
Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, …, X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n)
(Picture: a simple two-state Markov chain.)
The technical random walk is a Markov chain
Irreducible
A state j is said to be accessible from a state i if there exists n such that:
Pr(X_n = j | X_0 = i) = p_ij^(n) > 0
A state i is said to communicate with state j if they are mutually accessible.
(In a directed chain it can happen that j is accessible from i while i is not accessible from j.)
A Markov chain is said to be irreducible if its state space is a single communicating class.
The Markov chain for the technical random walk is irreducible: the graph of the random walk is connected.
Periodic vs. aperiodic
A state i has period k if any return to state i must occur in multiples of k:
k = gcd{n : Pr(X_n = i | X_0 = i) > 0}
If k = 1, the state is said to be aperiodic, which means that returns to state i can occur at irregular times.
A Markov chain is aperiodic if every state is aperiodic.
The Markov chain for the technical random walk is aperiodic: each cube has a self-loop.
Stationary distribution
The stationary distribution π is a vector whose entries are non-negative and add up to 1. π is unchanged by the operation of the transition matrix P on it, and is defined by:
πP = π
Property of Markov chains: if the chain is irreducible and aperiodic, then there is a unique stationary distribution π.
The Markov chain for the technical random walk has a stationary distribution, and since its transition matrix P is symmetric, all the π_j are equal: in the limit, the walk is a uniformly random generator over the cubes.
(E.g., a two-state chain with Pr(i→j) = Pr(j→i) = 0.4 and Pr(i→i) = Pr(j→j) = 0.6 has the uniform stationary distribution.)
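A small numerical check of this claim, assuming nothing beyond the definitions above: repeatedly applying a symmetric transition matrix drives any starting distribution to the uniform one.

```python
def evolve(dist, P, steps):
    """Apply the transition matrix P to a distribution `dist` repeatedly."""
    n = len(dist)
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# symmetric lazy chain on 3 states (rows sum to 1, P[i][j] == P[j][i])
P = [[0.6, 0.2, 0.2],
     [0.2, 0.6, 0.2],
     [0.2, 0.2, 0.6]]
pi = evolve([1.0, 0.0, 0.0], P, 50)   # start concentrated on state 0
print([round(p, 6) for p in pi])      # → [0.333333, 0.333333, 0.333333]
```

The second-largest eigenvalue of this P is 0.4, so the distance to uniform shrinks by a factor of 0.4 per step; this geometric decay is exactly what "mixing" means later in the talk.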
Proof of Correctness
Hoo Chin Hau
Overview
1. Relate vol_n(K_i) / vol_n(K_{i−1}) to Pr(success ∩ proper trial) / Pr(proper trial).
2. Show that Pr(success ∩ proper trial) / Pr(proper trial) approximates vol_n(K_i) / vol_n(K_{i−1}) within a certain bound with a probability of at least ¾.
Pr(proper trial)
Pr(proper trial) = Σ_{C∈W} Pr(proper trial | walk ends in C) · Pr(walk ends in C)
≤ Σ_{C∈W} (a_C + N_C^B / N_C) · (1/|W| + (1 − 1/(10^17 n^19))^τ)
where
a_C = vol_n(C ∩ K_{i−1}) / δ^n
N_C: number of sub-cubes
N_C^B: number of border sub-cubes
and |a_C − Pr(proper trial | walk ends in C)| ≤ N_C^B / N_C.
Pr(proper trial)
Pr(proper trial) = Σ_{C∈W} Pr(proper trial | walk ends in C) · Pr(walk ends in C)
≤ Σ_{C∈W} (a_C + N_C^B / N_C) · (1/|W| + (1 − 1/(10^17 n^19))^τ)
The second factor comes from the mixing bound
|p_ij^(t) − π_j| ≤ (1 − 1/(10^17 n^19))^t
so that
|Pr(walk ends in C) − 1/|W|| ≤ (1 − 1/(10^17 n^19))^τ.
Pr(proper trial)
Combining these, with Σ_{C∈W} a_C = vol_n(K_{i−1}) / δ^n, the border bound N_C^B / N_C ≤ 3n^2 η, and the choice
τ = 10^17 n^19 log((3r/δ)^n · 300k/ε),
and using 1 + x ≤ e^x (Taylor's expansion), we obtain
Pr(proper trial) ≤ (vol_n(K_{i−1}) / (|W| δ^n)) · (1 + ε/(300k))^2 ≤ (vol_n(K_{i−1}) / (|W| δ^n)) · (1 + ε/(100k)).
A matching argument gives the lower bound
Pr(proper trial) ≥ vol_n(K_{i−1}) / (3 |W| δ^n).
Pr(success ∩ proper trial)
By the same argument applied to K_i:
(vol_n(K_i) / (|W| δ^n)) · (1 − ε/(100k)) ≤ Pr(success ∩ proper trial) ≤ (vol_n(K_i) / (|W| δ^n)) · (1 + ε/(100k))
Pr(success | proper trial)
Pr(success | proper trial) = Pr(success ∩ proper trial) / Pr(proper trial) = p
Writing v = vol_n(K_i) / vol_n(K_{i−1}), the bounds above give
v (1 − ε/(100k)) (1 + ε/(100k))^(−1) ≤ p ≤ v (1 + ε/(100k)) (1 − ε/(100k))^(−1)
and hence
v (1 − ε/(49k)) ≤ p ≤ v (1 + ε/(49k)).
Moreover ρK_{i−1} ⊂ K_i with ρ = 1 − 1/n, so
v ≥ ρ^n vol_n(K_{i−1}) / vol_n(K_{i−1}) = ρ^n ≥ 1/4,
and therefore p ≥ 1/5.
Probability of error of a single
volume estimate
Based on Hoeffding’s inequality, we can relate the result of the algorithm (m̂/m) and p as follows:
Pr(|m̂/m − p| ≥ λp) ≤ 2e^(−λ² m p / 3)
(m̂: number of successes; m: number of proper trials.)
Previously, v (1 − ε/(100k)) (1 + ε/(100k))^(−1) ≤ p ≤ v (1 + ε/(100k)) (1 − ε/(100k))^(−1).
With λ = ε/(5k),
Pr(|m̂/m − v| ≥ (ε/(5k)) v) ≤ Pr(|m̂/m − p| ≥ (λ − ε/(20k)) p)
≤ 2e^(−(1/3)(3ε/(20k))² m p)
≤ 2e^(−(3/5)(ε/(20k))² m)   (using p ≥ 1/5)
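A quick empirical illustration (not from the paper) of the concentration being invoked: the fraction of successes among m independent trials with success probability p lands within a small relative error of p once m is large.

```python
import random

def success_fraction(p, m, seed=42):
    """Simulate m proper trials, each a success with probability p,
    and return the empirical success ratio m_hat / m."""
    rng = random.Random(seed)
    m_hat = sum(rng.random() < p for _ in range(m))
    return m_hat / m

p, m = 0.3, 20000
est = success_fraction(p, m)
print(abs(est - p) < 0.1 * p)  # → True: relative error well below 10%
```

Hoeffding's inequality makes this quantitative: the chance of a relative error of λ falls off like e^(−λ² m p / 3), which is what drives the choice of m in the algorithm.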
Probability of error of k volume estimates
From the single-estimate bound,
Pr(|m̂/m − v| ≥ (ε/(5k)) v) ≤ 2e^(−(3/5)(ε/(20k))² m)
Pr(|m̂/m − v| ≤ (ε/(5k)) v) ≥ 1 − 2e^(−(3/5)(ε/(20k))² m)
(Pr(|m̂/m − v| ≤ (ε/(5k)) v))^k ≥ (1 − 2e^(−(3/5)(ε/(20k))² m))^k
≥ 1 − 2k e^(−(3/5)(ε/(20k))² m)
using (1 − x)^n ≥ 1 − nx for x ≤ 1.
Assuming vol_n(K_0) can be approximated to within 1 ± ε/2, the algorithm computes an estimate V satisfying
(1 − ε/2)(1 − ε/(5k))^k ≤ V / vol_n(K) ≤ (1 + ε/2)(1 + ε/(5k))^k
with a probability of 1 − 2k e^(−(3/5)(ε/(20k))² m).
Probability of error of k volume estimates
1 − ε ≤ V / vol_n(K) ≤ 1 + ε with a probability of at least 3/4.
Complexity of algorithm
O(kmτ) = O(n^23 (log n)^5 ε^(−2) log(1/ε))
Rapidly Mixing Markov Chain
Nguyen Duy Anh Tuan
Recap
Random walk – Markov chain
A random walk is a process in which at every step we are at a
node in an undirected graph and follow an outgoing edge chosen
uniformly at random.
A Markov chain is similar, except that the outgoing edge is chosen according to an arbitrary distribution.
Ergodic Markov Chain
A Markov chain is ergodic if it is:
1. Irreducible, that is: ∃s ∈ N: p_ij^(s) > 0 for all i, j
2. Aperiodic, that is: gcd{s : p_ij^(s) > 0} = 1 for all i, j
Markov Chain
Steady state
Lemma:
Any finite, ergodic Markov chain converges to a unique stationary distribution π, that is:
lim_{s→∞} p_ij^(s) = π_j for all i, j, with Σ_j π_j = 1.
Markov Chain
Mixing time
The mixing time is the time a Markov chain takes to converge to its stationary distribution.
It is measured in terms of the total variation distance between the distribution at time s and the stationary distribution.
Total variation distance
Letting p_ij^(s) denote the probability of going from i to j after s steps, the total variation distance at time s is:
‖p^s, π‖_tv = max_i (1/2) Σ_{j∈Ω} |p_ij^(s) − π_j|
Ω is the set of all states.
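The definition is easy to compute directly for a small chain. The sketch below (illustrative, not from the paper) measures how the total variation distance of a lazy symmetric two-state chain decays with s.

```python
def step_distributions(P, s):
    """Return the s-step transition matrix P^s for a small chain."""
    n = len(P)
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(s):
        M = [[sum(M[i][k] * P[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return M

def tv_distance(P, pi, s):
    """max_i (1/2) * sum_j |p_ij^(s) - pi_j|: the total variation distance."""
    Ps = step_distributions(P, s)
    return max(0.5 * sum(abs(Ps[i][j] - pi[j]) for j in range(len(pi)))
               for i in range(len(pi)))

P = [[0.75, 0.25],    # lazy symmetric walk on two states
     [0.25, 0.75]]
pi = [0.5, 0.5]
for s in (1, 5, 10):
    print(s, tv_distance(P, pi, s))  # → decays as 2^-(s+1): 0.25 at s = 1
```

For this chain the distance is exactly 2^(−(s+1)); "rapid mixing" below is the statement that an analogous geometric decay holds for the walk on cubes, with a rate polynomial in n.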
Bounded Mixing Time
Since the chain only reaches its stationary distribution in the limit, a small value ε > 0 is introduced to relax the convergence condition.
Hence, the mixing time τ(ε) is defined as:
τ(ε) = min{s : ‖p^(s′), π‖_tv ≤ ε for all s′ ≥ s}
Rapidly Mixing
A Markov chain is rapidly mixing if the mixing time τ(ε) is O(poly(log(N/ε))), where N is the number of states.
Even if N is exponential in the problem size n, τ(ε) is then only O(poly(n)).
Rapidly Mixing
In our case:
• n is the dimension of the convex body
• and the number of states is (3r/δ)^n (δ is the side of the cubes, r is the radius of the bounding ball).
For
s ≥ τ = 10^17 n^19 log((3r/δ)^n · 300k/ε)
the paper shows
|p_ij^(t) − π_j| ≤ (1 − 1/(10^17 n^19))^t for all i, j.
Rapidly Mixing
Substituting the value of τ into the inequality in Theorem 1 of the paper:
|p_ij^(τ) − π_j| ≤ (1 − 1/(10^17 n^19))^τ
with τ = 10^17 n^19 log((3r/δ)^n · 300k/ε), and using 1 − x ≤ e^(−x):
|p_ij^(τ) − π_j| ≤ e^(−log((3r/δ)^n · 300k/ε)) = (δ/(3r))^n · ε/(300k).
Rapidly Mixing
Then we take the summation over all the states to bound the total variation distance:
‖p^s, π‖_tv = max_i (1/2) Σ_j |p_ij^(s) − π_j|
≤ (1/2) · (3r/δ)^n · (δ/(3r))^n · ε/(300k)
= ε/(600k)
So the technical random walk is rapidly mixing.
n
Proof of Rapidly Mixing
Markov Chain
Chua Zheng Leong
Anurag Anshu
Proof of Rapidly Mixing Markov Chain
Applications
The result shows that P ≠ BPP relative to this membership oracle. This means the oracle cannot be implemented in polynomial time, which is notable since P = BPP is believed to be true.
The technique can be used to integrate well-behaved, bounded functions over a convex body.
Improvements in the running time of the algorithm would require improvements in the mixing time of the random walk. This is useful because the random walk introduced in the paper is frequently studied in the literature.
Conclusion
Let’s revisit the algorithm briefly.
Given a well-rounded body K, we consider a series of rescaled bodies such that the ratio of the volumes of consecutive ones is a constant fraction.
We perform a technical random walk on each body and count ‘successes’, which gives us the ratio of volumes between consecutive bodies to a good approximation.
We use this to obtain the volume of K, given that we know the volume of the bounding ball.
The technical challenge is to prove convergence of the Markov process.
Improvements in Algorithm
The paper introduced a novel technique of using a Markov process to approximate the volume of a convex body.
In the analysis at the time, the diameter of the random walk was O(n^4), so the algorithm could not have been improved beyond O(n^8) without improving the diameter.
The algorithm was improved to O(n^7) by Lovász and Simonovits in “Random walks in a convex body and an improved volume algorithm”.
Current algorithms reach O(n^4).
Thank you!