Random Matrices with Independent Columns
Radosław Adamczak
University of Warsaw
College Station, July 2009
Based on joint work with
O. Guedon, A. Litvak, A. Pajor, N. Tomczak-Jaegermann
Outline
1 Introduction
  Basic definitions
2 Motivations
  Sampling convex bodies
  Properties of random polytopes
  Smallest singular value
3 Operator Norm
4 Kannan-Lovasz-Simonovits Question
5 Neighbourliness of random polytopes
6 Smallest Singular Value
The basic model
Definition
Let Γ be an n × N matrix with columns X_1, ..., X_N, where the X_i are independent random vectors with values in R^n.
Questions
- What is the operator norm of Γ : ℓ_2^N → ℓ_2^n?
- When is Γ^T close to a multiple of an isometry?
- How does Γ act on sparse vectors?
- What is the smallest singular value of Γ?
Assumptions on X_i
The X_i are isotropic, i.e.
EX_i = 0 and EX_i ⊗ X_i = Id,
or equivalently, for all y ∈ R^n,
E⟨X_i, y⟩² = |y|².
The X_i are ψ_α random vectors (α ∈ [1, 2]), i.e. for some C and all y ∈ R^n,
‖⟨X_i, y⟩‖_{ψ_α} ≤ C|y|,
where
‖Y‖_{ψ_α} = inf{a > 0 : E exp((|Y|/a)^α) ≤ 2}.
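For instance (a worked example added here, not on the original slide), the ψ_2 norm of a single standard Gaussian can be computed in closed form:

```latex
% For Y ~ N(0,1) and a^2 > 2 one has E exp(Y^2/a^2) = (1 - 2/a^2)^{-1/2}.
% Setting (1 - 2/a^2)^{-1/2} = 2 gives a^2 = 8/3, hence
\[
  \|Y\|_{\psi_2} = \inf\{a > 0 : \mathbb{E} \exp(Y^2/a^2) \le 2\} = \sqrt{8/3} \approx 1.63 .
\]
% Since <G, y> ~ N(0, |y|^2), the Gaussian vector G from the Examples slide
% below is psi_2 with C = sqrt(8/3).
```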
Consequences
E|X_i|² = Σ_{j=1}^n E⟨X_i, e_j⟩² = n.
For any y ∈ S^{n−1} and t ≥ 0,
P(|⟨X_i, y⟩| ≥ t) ≤ 2 exp(−(t/C)^α).
Fact
For every random vector X not supported on any (n−1)-dimensional hyperplane, there exists an affine map T : R^n → R^n such that TX is isotropic.
If the random vector distributed uniformly on a set K ⊆ R^n is isotropic, we say that K is isotropic.
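The tail bound is just an exponential Markov inequality applied to the ψ_α condition (my spelling-out of the standard one-liner, for |y| = 1):

```latex
\[
  \mathbb{P}\bigl(|\langle X_i, y\rangle| \ge t\bigr)
  = \mathbb{P}\Bigl(e^{(|\langle X_i, y\rangle|/C)^{\alpha}} \ge e^{(t/C)^{\alpha}}\Bigr)
  \le e^{-(t/C)^{\alpha}} \, \mathbb{E}\, e^{(|\langle X_i, y\rangle|/C)^{\alpha}}
  \le 2\, e^{-(t/C)^{\alpha}} .
\]
```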
Examples
- G = (g_1, ..., g_n), where the g_i are i.i.d. N(0, 1), is ψ_2 and isotropic,
- R = (ε_1, ..., ε_n), where the ε_i are i.i.d. with P(ε_i = ±1) = 1/2, is ψ_2 and isotropic,
- a vector drawn from the uniform distribution on √n S^{n−1} is ψ_2 and isotropic,
- a random vector distributed uniformly in an isotropic convex body is ψ_1 (C. Borell),
- more generally, an isotropic random vector with log-concave density f is ψ_1.
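A quick numerical sanity check of isotropy for two of these examples (a sketch I am adding, not from the talk; it verifies EX ⊗ X ≈ Id by Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 20, 200_000

# Uniform on sqrt(n) * S^{n-1}: normalize Gaussian vectors, then rescale.
g = rng.standard_normal((N, n))
sphere = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)

# Rademacher vector R = (eps_1, ..., eps_n).
rademacher = rng.choice([-1.0, 1.0], size=(N, n))

for name, X in [("sphere", sphere), ("rademacher", rademacher)]:
    emp = X.T @ X / N                                # empirical E[X (x) X]
    print(name, np.linalg.norm(emp - np.eye(n), 2))  # small, of order sqrt(n/N)
```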
Motivations: sampling convex bodies
Problem
Let K ⊆ R^n be a convex body such that B_2^n ⊆ K ⊆ R B_2^n. Assume we have access to an oracle (a black box) which, given x ∈ R^n, tells us whether x ∈ K.
- How to generate random points uniformly distributed in K?
- How to compute the volume of K?
This can be done by using Markov chains. Their speed of convergence depends on the position of the convex body.
Preprocessing: first put K in the isotropic position (again by randomized algorithms).
Centering the body is not computationally difficult; it takes O(n) steps. The question boils down to:
How to approximate the covariance matrix of X, uniformly distributed on K, by the empirical covariance matrix
(1/N) Σ_{i=1}^N X_i ⊗ X_i,
or (after a linear transformation):
Given an isotropic convex body in R^n, how large should N be so that
‖(1/N) Σ_{i=1}^N X_i ⊗ X_i − Id‖_{ℓ_2→ℓ_2} ≤ ε
with high probability?
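A small experiment (my own illustration, with Gaussian samples standing in for uniform samples from an isotropic convex body, which would require an MCMC sampler) showing how the operator-norm error of the empirical covariance decays with N:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Stand-in isotropic samples; the hard case in the talk is uniform
# sampling from an isotropic convex body.
for N in [n, 4 * n, 16 * n, 64 * n]:
    X = rng.standard_normal((N, n))
    err = np.linalg.norm(X.T @ X / N - np.eye(n), 2)
    print(f"N = {N:5d}   ||(1/N) sum X_i (x) X_i - Id|| = {err:.3f}")
# The error decays roughly like sqrt(n/N); N proportional to n already
# gives constant accuracy, consistent with the N = O(n) results below.
```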
Interpretation in terms of Γ
We have
‖(1/N) Σ_{i=1}^N X_i ⊗ X_i − Id‖_{ℓ_2→ℓ_2} = sup_{y∈S^{n−1}} |(1/N) Σ_{i=1}^N ⟨X_i, y⟩² − 1| = sup_{y∈S^{n−1}} |(1/N)|Γ^T y|² − 1|.
So the (geometric) question is:
Let Γ be a matrix with independent columns X_1, ..., X_N drawn from an isotropic convex body (log-concave measure) in R^n. How large should N be so that N^{−1/2} Γ^T : R^n → R^N is an almost isometry?
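The identity is easy to check numerically (a sketch of mine): the left-hand side is the largest-magnitude eigenvalue of (1/N)ΓΓ^T − Id, and the right-hand side can be read off the singular values of Γ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 5, 12
Gamma = rng.standard_normal((n, N))

lhs = np.linalg.norm(Gamma @ Gamma.T / N - np.eye(n), 2)
# sup over unit y of |(1/N)|Gamma^T y|^2 - 1| is attained at a singular
# direction of Gamma, so it equals max_i |s_i^2 / N - 1|.
s = np.linalg.svd(Gamma, compute_uv=False)
rhs = np.max(np.abs(s**2 / N - 1))
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```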
History of the problem
- Kannan, Lovasz, Simonovits (1995) – N = O(n²)
- Bourgain (1996) – N = O(n log³ n)
- Rudelson (1999) – N = O(n log² n)
- Giannopoulos, Hartzoulaki, Tsolomitis (2005) – unconditional bodies: N = O(n log n)
- Aubrun (2006) – unconditional bodies: N = O(n)
- Paouris (2006) – N = O(n log n)
- Litvak, Pajor, Tomczak-Jaegermann, R.A. (2008) – N = O(n)
Remark
If (1/√N) Γ^T is an almost isometry then obviously ‖Γ‖ ≤ C√N, so the KLS question and the question about ‖Γ‖ are related.
It turns out that to answer KLS it is enough to have good bounds on
A_m := sup_{z∈S^{N−1}, |supp z|≤m} |Γz|.
Theorem (Litvak, Pajor, Tomczak-Jaegermann, R.A.)
If N ≤ exp(c√n) and the vectors X_i are log-concave, then for t > 1, with probability at least 1 − exp(−ct√n),
for all m ≤ N:  A_m ≤ Ct(√n + √m log(2N/m)).
In particular, with high probability ‖Γ‖ ≤ C(√n + √N).
Compressed sensing and neighbourly polytopes
Imagine we have a vector x ∈ R^N (N large) which is supported on a small number of coordinates (say |supp x| = m ≪ N).
If we knew the support of x, then to determine x it would be enough to take m measurements along basis vectors.
What if we don't know the support?
Answer (Donoho, Candes, Tao, Romberg): take measurements in random directions Y_1, ..., Y_n and set
x̂ = argmin{‖y‖₁ : ⟨Y_i, y⟩ = ⟨Y_i, x⟩ for all i}.
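This ℓ₁-minimization is a linear program; below is a minimal sketch of the recovery step (my illustration, using scipy's generic LP solver rather than any purpose-built method):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, n, m = 60, 25, 4                     # ambient dim, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, m, replace=False)] = rng.standard_normal(m)  # m-sparse signal
Y = rng.standard_normal((n, N))         # rows = random measurement directions
b = Y @ x                               # the n measurements <Y_i, x>

# min ||y||_1 subject to Y y = b, written as an LP with y = u - v, u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Y, -Y]), b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x)))        # typically exact recovery, ~1e-9
```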
Compressed sensing and neighbourly polytopes
Definition
A (centrally symmetric) polytope K ⊆ R^n is called m-(symmetric-)neighbourly if any set of vertices of K of cardinality at most m + 1 (containing no opposite pairs) is the vertex set of a face.
Theorem (Donoho)
Let Γ be an n × N matrix with columns X_1, ..., X_N. The following conditions are equivalent:
(i) For any x ∈ R^N with |supp x| ≤ m, x is the unique solution of the minimization problem
min ‖t‖₁ subject to Γt = Γx.
(ii) The polytope K(Γ) = conv(±X_1, ..., ±X_N) has 2N vertices and is m-symmetric-neighbourly.
Compressed sensing and neighbourly polytopes
Definition (Restricted Isometry Property (Candes, Tao))
For an n × N matrix Γ define the isometry constant δ_m = δ_m(Γ) as the smallest number such that
(1 − δ_m)|x|² ≤ |Γx|² ≤ (1 + δ_m)|x|²
for all m-sparse vectors x ∈ R^N.
Theorem (Candes)
If δ_{2m}(Γ) < √2 − 1, then for every m-sparse x ∈ R^N, x is the unique solution to
min ‖t‖₁ subject to Γt = Γx.
In consequence, the polytope K(Γ) (resp. K₊(Γ) = conv(X_1, ..., X_N)) is m-symmetric-neighbourly (resp. m-neighbourly).
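For tiny matrices the isometry constant can be computed exactly by enumerating supports (a brute-force sketch of mine; the definition amounts to controlling the extreme singular values of every n × m submatrix):

```python
import numpy as np
from itertools import combinations

def rip_constant(Gamma, m):
    """delta_m: worst deviation of |Gamma x|^2 from |x|^2 over m-sparse x."""
    n, N = Gamma.shape
    delta = 0.0
    for S in combinations(range(N), m):
        s = np.linalg.svd(Gamma[:, S], compute_uv=False)
        delta = max(delta, s[0]**2 - 1, 1 - s[-1]**2)
    return delta

rng = np.random.default_rng(4)
n, N = 30, 40
Gamma = rng.standard_normal((n, N)) / np.sqrt(n)   # normalized Gaussian columns
print(rip_constant(Gamma, 2))   # exact delta_2 for this draw; grows with m
```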
History
The following matrices satisfy RIP:
- Gaussian matrices (Candes, Tao), m ≃ n/log(2N/n),
- matrices with rows selected randomly from the Fourier matrix (Candes & Tao, Rudelson & Vershynin), m ≃ n/log⁴(2N/n),
- matrices with independent subgaussian isotropic rows (Mendelson, Pajor, Tomczak-Jaegermann), m ≃ n/log(2N/n),
- matrices with independent log-concave isotropic columns (LPTA), m ≃ n/log²(2N/n).
Neighbourly polytopes
Theorem (LPTA)
Let θ ∈ (0, 1) and assume that N ≤ exp(cθ^C n^c) and Cm log²(2N/(θm)) ≤ θ²n. Then, with probability at least 1 − exp(−cθ^C n^c),
δ_m(Γ/√n) ≤ θ.
Corollary (LPTA)
Let X_1, ..., X_N be random vectors drawn from an isotropic convex body in R^n. Then, for N ≤ exp(cn^c), with probability at least 1 − exp(−cn^c), the polytope K(Γ) (resp. K₊(Γ)) is m-symmetric-neighbourly (resp. m-neighbourly) with
m = ⌊cn/log²(CN/n)⌋.
Smallest singular value
Definition
For an n × n matrix Γ let s_1(Γ) ≥ s_2(Γ) ≥ ... ≥ s_n(Γ) be the singular values of Γ, i.e. the eigenvalues of √(ΓΓ^T). In particular
s_1(Γ) = ‖Γ‖,  s_n(Γ) = inf_{x∈S^{n−1}} |Γx| = 1/‖Γ^{−1}‖.
Theorem (Edelman, Szarek)
Let Γ be an n × n random matrix with independent N(0, 1) entries and let s_n denote the smallest singular value of Γ. Then, for every ε > 0,
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε,
where C is a universal constant.
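A quick Monte Carlo check of the Edelman–Szarek scaling (my sketch): the probability that s_n ≤ ε n^{−1/2} should be roughly proportional to ε.

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 50, 5_000

s_min = np.empty(trials)
for k in range(trials):
    G = rng.standard_normal((n, n))
    s_min[k] = np.linalg.svd(G, compute_uv=False)[-1]  # smallest singular value

for eps in [0.05, 0.1, 0.2, 0.4]:
    p = np.mean(s_min <= eps / np.sqrt(n))
    print(f"eps = {eps}: P = {p:.4f}, ratio P/eps = {p / eps:.2f}")
# The ratios P/eps stay roughly constant, consistent with P <= C*eps.
```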
Theorem (Rudelson, Vershynin)
Let Γ be a random matrix with independent entries X_ij satisfying EX_ij = 0, EX_ij² = 1, ‖X_ij‖_{ψ_2} ≤ B. Then for any ε ∈ (0, 1),
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε + c^n,
where C > 0, c ∈ (0, 1) depend only on B.
Theorem (Guedon, Litvak, Pajor, Tomczak-Jaegermann, R.A.)
Let Γ be an n × n random matrix with independent isotropic log-concave rows. Then, for any ε ∈ (0, 1),
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε + C exp(−cn^c)
and
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε^{n/(n+2)} log^C(2/ε).
Corollary
For any δ ∈ (0, 1) there exists C_δ such that for any n and ε ∈ (0, 1),
P(s_n(Γ) ≤ εn^{−1/2}) ≤ C_δ ε^{1−δ}.
Definition
For an n × n matrix Γ define the condition number κ(Γ) as
κ(Γ) = ‖Γ‖ · ‖Γ^{−1}‖ = s_1(Γ)/s_n(Γ).
Corollary
If Γ has independent isotropic log-concave columns, then for any δ > 0, t > 0,
P(κ(Γ) ≥ nt) ≤ C_δ/t^{1−δ}.
Norm estimates
Recall: Γ is an n × N matrix with independent columns X_1, ..., X_N, and
A_m := sup_{z∈S^{N−1}, |supp z|≤m} |Γz|.
We are going to prove:
Theorem (Litvak, Pajor, Tomczak-Jaegermann, R.A.)
If N ≤ exp(c√n) and the vectors X_i are log-concave, then for t > 1, with probability at least 1 − exp(−ct√n),
for all m ≤ N:  A_m ≤ Ct(√n + √m log(2N/m)).
In particular, with high probability ‖Γ‖ ≤ C(√n + √N).
An easy decoupling lemma
Lemma
Let x_1, ..., x_N ∈ R^n. There exists a set E ⊂ {1, ..., N} such that
Σ_{i≠j} ⟨x_i, x_j⟩ ≤ 4 Σ_{i∈E} Σ_{j∈E^c} ⟨x_i, x_j⟩.
Proof.
Each ordered pair (i, j) with i ≠ j satisfies i ∈ E, j ∈ E^c for exactly 2^{N−2} of the sets E ⊂ {1, ..., N}, so
2^{N−2} Σ_{i≠j} ⟨x_i, x_j⟩ = Σ_{E⊂{1,...,N}} Σ_{i∈E} Σ_{j∈E^c} ⟨x_i, x_j⟩ ≤ 2^N max_{E⊂{1,...,N}} Σ_{i∈E} Σ_{j∈E^c} ⟨x_i, x_j⟩.
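The counting identity at the heart of the proof is easy to verify by brute force for small N (my sketch):

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(6)
N, n = 8, 3
x = rng.standard_normal((N, n))
gram = x @ x.T

total = gram.sum() - np.trace(gram)     # sum over i != j of <x_i, x_j>

subset_sum = 0.0
for E in chain.from_iterable(combinations(range(N), k) for k in range(N + 1)):
    Ec = [j for j in range(N) if j not in E]
    subset_sum += gram[np.ix_(list(E), Ec)].sum()

assert np.isclose(subset_sum, 2**(N - 2) * total)   # 2^{N-2} * sum_{i!=j}
```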
Lemma
Let m ≤ N, ε, α ∈ (0, 1] and L ≥ 2m log(12eN/(mε)). Then
P( sup_{F⊂{1,...,N}, |F|≤m} sup_{E⊂F} sup_{z∈𝒩(F,α,ε)} ⟨Σ_{i∈E} z_iX_i, Σ_{j∈F\E} z_jX_j⟩ > CαLA_m ) ≤ e^{−L/2}.
Lemma
Let 1 ≤ k, m ≤ N, ε, α ∈ (0, 1], β > 0, and L > 0. Let B(m, β) denote the set of vectors x ∈ βB_2^N with |supp x| ≤ m and let B be a subset of B(m, β) of cardinality M. Then
P( sup_{F⊂{1,...,N}, |F|≤k} sup_{x∈B} sup_{z∈𝒩(F,α,ε)} ⟨Σ_{i∈F} z_iX_i, Σ_{j∉F} x_jX_j⟩ > CαβLA_m ) ≤ M (6eN/(kε))^k e^{−L}.
(Here 𝒩(F, α, ε) is an ε-net in the set of unit vectors z supported on F with ‖z‖_∞ ≤ α; these are the only properties of it used below.)
Proof of the first lemma
Fix F, E ⊂ F, z ∈ 𝒩(F, α, ε). Set y = Σ_{j∈F\E} z_jX_j.
Since |y| ≤ A_m and ‖z‖_∞ ≤ α,
⟨Σ_{i∈E} z_iX_i, Σ_{j∈F\E} z_jX_j⟩ ≤ αA_m Σ_{i∈E} |⟨X_i, y/|y|⟩|.
The vector y and (X_i)_{i∈E} are independent, so Chebyshev's inequality and the ψ_1 property yield
P(Σ_{i∈E} |⟨X_i, y/|y|⟩| ≥ CL) ≤ e^{−L} E exp(Σ_{i∈E} |⟨X_i, y/|y|⟩|/C) ≤ 2^{|E|} e^{−L} ≤ 2^m e^{−L}.
The statement follows from the union bound.
Theorem (LPTA)
Let X_1, ..., X_N be i.i.d. isotropic log-concave vectors in R^n. For every ε ∈ (0, 1) and t ≥ 1 there exists C(ε, t) such that if N ≥ C(ε, t)n, then with probability at least 1 − exp(−ct√n),
‖(1/N) Σ_{i=1}^N X_i ⊗ X_i − Id‖ = sup_{y∈S^{n−1}} |(1/N) Σ_{i=1}^N ⟨X_i, y⟩² − 1| ≤ ε.
Moreover one can take C(ε, t) = Ct⁴ε^{−2} log(2t²ε^{−2}).
Remark: In the proof we can assume that N ≤ exp(√n).
Strategy of the proof
1 Divide the stochastic process into a 'bounded' and an 'unbounded' part.
2 Bounded part:
  reduce to an ε-net,
  use Bernstein's inequality for individual vectors.
3 Unbounded part: use the estimates on A_m to
  show that the unbounded part has 'small support',
  get estimates on the unbounded part.
Sketch of the proof
It is enough to consider y ∈ 𝒩, where 𝒩 is a 1/4-net in S^{n−1} of cardinality 5^n. We decompose
⟨X_i, y⟩² = ⟨X_i, y⟩² ∧ R² + (⟨X_i, y⟩² − R²) 1_{{|⟨X_i,y⟩| > R}}.
We have
sup_{y∈𝒩} |(1/N) Σ_{i=1}^N ⟨X_i, y⟩² − 1| = sup_{y∈𝒩} |(1/N) Σ_{i=1}^N (⟨X_i, y⟩² − E⟨X_i, y⟩²)|
≤ sup_{y∈𝒩} |(1/N) Σ_{i=1}^N (⟨X_i, y⟩² ∧ R² − E(⟨X_i, y⟩² ∧ R²))|
  + sup_{y∈𝒩} (1/N) Σ_{i=1}^N ⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| > R}}
  + sup_{y∈𝒩} (1/N) Σ_{i=1}^N E⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| > R}}
=: I + II + III.
III
E⟨X_1, y⟩² 1_{{|⟨X_1,y⟩| > R}} ≤ (E⟨X_1, y⟩⁴)^{1/2} P(|⟨X_1, y⟩| > R)^{1/2} ≤ C exp(−R/C).
I
Theorem (Bernstein's inequality)
Let Y_1, ..., Y_N be i.i.d. centered random variables with EY_i² = σ² and ‖Y_i‖_∞ ≤ a. For any t ≥ 0,
P(|(1/N) Σ_{i=1}^N Y_i| ≥ t) ≤ 2 exp(−cN min(t²/σ², t/a)).
This gives
P(I ≥ ε) ≤ 5^n exp(−cN min(ε², ε/R²)).
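To see where the last display comes from (my reading of how the constants enter), apply Bernstein with Y_i = ⟨X_i, y⟩² ∧ R² − E(⟨X_i, y⟩² ∧ R²) and a union bound over the net:

```latex
% sigma^2 <= E<X_i, y>^4 <= C by the psi_1 moment bound, and a = ||Y_i||_infty <= R^2, so
\[
  \mathbb{P}(\mathrm{I} \ge \varepsilon)
  \le 5^{n} \cdot 2\exp\Bigl(-cN\min\Bigl(\frac{\varepsilon^{2}}{C}, \frac{\varepsilon}{R^{2}}\Bigr)\Bigr)
  \le 5^{n} \exp\bigl(-c'N\min(\varepsilon^{2}, \varepsilon/R^{2})\bigr).
\]
```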
The unbounded part
With probability at least 1 − exp(−t√n) we have for all m ≤ N,
A_m = sup_{z∈S^{N−1}, |supp z|≤m} |Σ_{i=1}^N z_iX_i| ≤ Ct(√n + √m log(2N/m)).
In particular, for any E ⊆ {1, ..., N},
sup_{y∈S^{n−1}} Σ_{i∈E} ⟨X_i, y⟩² ≤ Ct²(n + |E| log²(2N/n)).
Set E = E(y) = {i : |⟨X_i, y⟩| ≥ R}. Then
|E|R² ≤ Σ_{i∈E} ⟨X_i, y⟩² ≤ Ct²(n + |E| log²(2N/n)).
For R² ≥ 2Ct² log²(2N/n) we get |E| ≤ Ct²nR^{−2}. Thus
sup_{y∈S^{n−1}} Σ_{i∈E(y)} ⟨X_i, y⟩² ≤ 2Ct²n.
Summarizing
For R ≥ Ct log(2N/n) we have:
- with probability 1 − 5^n exp(−cN min(ε², ε/R²)),  I ≤ ε,
- with probability 1 − exp(−ct√n),  II ≤ Ct²n/N and III ≤ C exp(−R/C).
Set R = Ct log(2N/n) and N ≥ C(ε, t)n to get
I + II + III ≤ 3ε w.h.p.
Theorem (LPTA)
Let X_1, ..., X_N be i.i.d. isotropic log-concave vectors in R^n. For every p ≥ 2, every ε ∈ (0, 1) and t ≥ 1 there exists C(p, ε, t) such that if N ≥ C(p, ε, t)n^{p/2}, then with probability at least 1 − exp(−c_p t√n),
sup_{y∈S^{n−1}} |(1/N) Σ_{i=1}^N (|⟨X_i, y⟩|^p − E|⟨X_i, y⟩|^p)| ≤ ε.
Moreover one can take C(p, ε, t) = C_p t^{2p} ε^{−2} log^{2p−2}(2t²ε^{−2}).
Previous contributions
- Giannopoulos, Milman (2000) – N = O((n log n)^{p/2})
- Guedon, Rudelson (2007) – N = O(n^{p/2} log n)
Remark: For p < 2 it is enough to take N = O(n).
Neighbourly polytopes
Theorem (LPTA)
Let θ ∈ (0, 1) and assume that N ≤ exp(cθ^C n^c) and Cm log²(2N/(θm)) ≤ θ²n. Then, with probability at least 1 − exp(−cθ^C n^c),
sup_{z∈S^{N−1}, |supp z|≤m} |(1/n)|Γz|² − 1| = sup_{z∈S^{N−1}, |supp z|≤m} |(1/n)|Σ_{i=1}^N z_iX_i|² − 1| ≤ θ.
Corollary (LPTA)
Let X_1, ..., X_N be random vectors drawn from an isotropic convex body in R^n. Then, for N ≤ exp(cn^c), with probability at least 1 − exp(−cn^c), the polytope K(Γ) (resp. K₊(Γ)) is m-symmetric-neighbourly (resp. m-neighbourly) with
m = ⌊cn/log²(CN/n)⌋.
Sketch of the proof
Denote
C_m = max_{i≤N} |X_i|,
B_m² = sup_{z∈S^{N−1}, |supp z|≤m} | |Γz|² − Σ_{i=1}^N z_i²|X_i|² | = sup_{z∈S^{N−1}, |supp z|≤m} Σ_{i≠j} ⟨z_iX_i, z_jX_j⟩ = sup_z D_z.
We have
D_z − D_x = ⟨Γz, Γ(z − x)⟩ + ⟨Γ(z − x), Γx⟩ + Σ_{i=1}^N (x_i − z_i)(x_i + z_i)|X_i|².
Hence if |x − z| ≤ θ and x, z have the same support,
D_z ≤ D_x + 2θ(A_m² + C_m²) ≤ D_x + 2θ(B_m² + 2C_m²).
D_z ≤ D_x + 2θ(A_m² + C_m²) ≤ D_x + 2θ(B_m² + 2C_m²).
We construct a set M(θ) such that
- sup_{x∈M(θ)} D_x ≤ A_m θ√n w.h.p.,
- for every z ∈ S^{N−1} with |supp z| ≤ m, there is x ∈ M(θ) with supp z = supp x and |x − z| < θ.
We get
B_m² = sup_{z∈S^{N−1}, |supp z|≤m} D_z ≤ θ√n A_m + 2θ(B_m² + 2C_m²) ≤ θ√n √(B_m² + C_m²) + 2θ(B_m² + 2C_m²),
using A_m² ≤ B_m² + C_m². Now we solve for B_m and use the fact that C_m² ≤ Cn w.h.p.
We get
B_m² = sup_{z∈S^{N−1}, |supp z|≤m} | |Γz|² − Σ_{i=1}^N z_i²|X_i|² | ≤ θn.
How does it imply the theorem?
|Σ_{i=1}^N z_i²|X_i|² − n| ≤ Σ_{i=1}^N z_i² ||X_i|² − n| ≤ max_{i≤N} ||X_i|² − n|.
Theorem (Klartag)
P(||X_i|² − n| ≥ θn) ≤ exp(−cθ^c n^c).
The statement follows by the union bound.
The smallest singular value of a square matrix
Theorem (GLPTA)
Let Γ be an n × n random matrix with independent isotropic log-concave rows. Then, for any ε ∈ (0, 1),
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε + C exp(−cn^c)
and
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε^{n/(n+2)} log^C(2/ε).
Corollary
For any δ ∈ (0, 1) there exists C_δ such that for any n and ε ∈ (0, 1),
P(s_n(Γ) ≤ εn^{−1/2}) ≤ C_δ ε^{1−δ}.
Compressible vectors
Let us define:
Sparse = Sparse(δ) = {x ∈ R^n : |supp x| ≤ δn},
Comp = Comp(δ, ρ) = {x ∈ S^{n−1} : dist(x, Sparse(δ)) ≤ ρ},
Incomp = Incomp(δ, ρ) = S^{n−1} \ Comp(δ, ρ).
Lemma (Rudelson-Vershynin)
Let H_k = span((X_i)_{i≠k}). For every ρ, δ ∈ (0, 1) and ε > 0,
P(inf_{x∈Incomp(δ,ρ)} |Γx| < ερn^{−1/2}) ≤ (1/δn) Σ_{k=1}^n P(dist(X_k, H_k) < ε).
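The quantity dist(X_k, H_k) is easy to simulate (a sketch of mine, with Gaussian columns, for which the distance equals |⟨X_k, y⟩| for the unit normal y of H_k and hence has a bounded density):

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 40, 5_000

dists = np.empty(trials)
for k in range(trials):
    X = rng.standard_normal((n, n))          # columns X_1, ..., X_n
    H = X[:, 1:]                             # H_1 = span(X_2, ..., X_n)
    coef, *_ = np.linalg.lstsq(H, X[:, 0], rcond=None)   # project X_1 onto H_1
    dists[k] = np.linalg.norm(X[:, 0] - H @ coef)

for eps in [0.05, 0.1, 0.2]:
    print(eps, np.mean(dists <= eps))        # roughly proportional to eps
```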
The isotropy constant
Definition
For an isotropic log-concave measure μ on R^n define the isotropy constant of μ as
L_μ = f(0)^{1/n},
where f is the density of μ.
A famous open problem: Is L_μ ≤ C?
Fact (not difficult, enough for us)
L_μ ≤ C√n.
Theorem (Klartag)
L_μ ≤ Cn^{1/4}.
Incompressible vectors
We have to estimate P(dist(X_i, H_i) ≤ ε), where H_i is a random hyperplane independent of X_i.
Let y be a unit normal to H_i; then
P(dist(X_i, H_i) ≤ ε) = E_{H_i} P_{X_i}(|⟨X_i, y⟩| ≤ ε) ≤ Cε,
since
P_{X_i}(|⟨X_i, y⟩| ≤ ε) = ∫_{−ε}^{ε} f_{⟨X_i,y⟩}(t) dt ≤ Cε
(the density of the marginal ⟨X_i, y⟩ is bounded by a universal constant, as ⟨X_i, y⟩ is isotropic and log-concave).
Thus
P(inf_{x∈Incomp(δ,ρ)} |Γx| < ερn^{−1/2}) ≤ Cε.
Lemma (Compressible to sparse reduction)
For any ρ, δ ∈ (0, 1) and M ≥ 1, if
inf_{x∈Comp(δ,ρ/(2M))} |Γx| ≤ ρ√n and ‖Γ‖ ≤ M√n,
then inf_{y∈Sparse(δ), |y|=1} |Γy| ≤ 4ρ√n.
We already know that with probability at least 1 − exp(−cn^c), for all m-sparse unit vectors z (with m = δn, δ small enough),
| |Γz|² − n | ≤ n/4
(we set m = δn, N = n in the RIP theorem). This implies that
|Γz| ≥ √n/2.
We have proved that
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε + exp(−cn^c).
How to prove
P(s_n(Γ) ≤ εn^{−1/2}) ≤ Cε^{n/(n+2)} log^C(ε^{−1})?
Main ingredients
- We can assume that ε < exp(−cn^c).
- For a log-concave vector X, P(|X| ≤ ρ√n) ≤ C^n L_μ^n ρ^n ≤ C^n n^{n/2} ρ^n.
- If z ∈ S^{n−1} then Σ_{i=1}^n z_iX_i is log-concave (Prekopa-Leindler) and isotropic (easy).
- The set Sparse(ρ) admits significantly smaller nets than S^{n−1}.
- One has to choose all the parameters (tedious but doable).
Thank you