>> Krysta Svore: Okay I think we are going to get started. Today we
have Umesh Vazirani here with us and, oops I lost my thing, there we
go. So Umesh is a professor of Electrical Engineering and Computer
Science at the University of California, Berkeley, and director of the
Berkeley Quantum Computation Center.
He is one of the founders of the field of quantum computing, starting
with his 1993 paper with his student Ethan Bernstein on "Quantum
complexity theory". He has also worked on
classical algorithms for Online Ad Auctions, as well as graph
separators, for which he was awarded the 2012 Fulkerson Prize with
Sanjeev Arora and also Satish Rao.
So, today we have Umesh and he is going to speak with us about Taming
the Quantum Tiger, which is sure to be an exciting talk; so let’s
welcome Umesh.
>> Umesh Vazirani: Thanks.
[clapping]
>> Umesh Vazirani: Thank you, thanks Krysta. So I am going to start by
stating as naively as possible what I will talk about and then I will
try to outline the main concepts. So, okay, this doesn’t work does it?
Okay. So I am sure you all know the thing that quantum computing
teaches us is that unlike classical systems where an N particle system
requires only N parameters to specify, a quantum system requires 2 to
the N or exponential in N parameters to specify. So this is of course
great if you are doing computation, like quantum computation. But
there is a flip side to it which is it also makes it difficult to
analyze, understand and control quantum systems.
So this is the part that I want to talk about. In fact this was maybe
the early motivation for quantum computing, at least in Feynman's
paper where the issue was: How do you simulate a quantum system given
paper where the issue was: How do you simulate a quantum system given
that there are exponentially many parameters?
Okay. So just visually your state of N qubits is a 2 to the N
dimensional vector lying in the 2 to the N dimensional complex Hilbert
space. And so naively you want to think about, “Well could it be that
natural quantum states, whatever that means, sit in a small corner of
this Hilbert space”? So that actually you could maybe have a hope of
understanding them, computing with them, working with them, and so on.
And then there is a second part about quantum mechanics that is very
problematic when you come to working with quantum systems which is this
whole idea of measurement that in fact what you can access through
measurement is not this entire quantum state, but only a small part of
it. So when you measure you don't see the superposition; you only see
x with probability |alpha_x|^2.
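Just to make this concrete, here is a minimal sketch in Python (my own toy illustration, not something from the talk) of the point above: the state carries 2^n complex amplitudes, but a single measurement reveals only one n-bit string.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# A random n-qubit state: 2^n complex amplitudes alpha_x, normalized.
amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
amps /= np.linalg.norm(amps)

# Born rule: outcome x is seen with probability |alpha_x|^2.
probs = np.abs(amps) ** 2
outcome = int(rng.choice(2**n, p=probs))
print(f"measured x = {outcome:0{n}b} (one n-bit string, not the state)")
```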
And we also know from Holevo’s Theorem that we can obtain at most N
bits of information through a measurement no matter how we structure
the measurement. And so, um, okay, so there is a second question that
is brought up by these limitations which is: How would you actually
test a quantum system given that it’s exponentially powerful and that
you have this kind of limited access to it? And so let me try to
formulate what that question might mean.
So here is a particularly extreme version of the problem. So let’s say
that we think of our quantum system as being an un-trusted device. And
then we model the fact that we have limited IO by actually making the
input/output be binary, because we may as well. So let’s imagine that
we have two buttons on this box labeled 0 and 1 and two light bulbs
labeled 0 and 1. So this is the only way that we get to interact with
the box.
And so the challenge is, you know, what we want to do is we want to
verify that this device really represents the quantum system of our
choice. Meaning it has the required dynamics: it starts in the
specified initial state, and then you can actually command it to go
through a certain kind of dynamics and it does faithfully execute
those.
So if you think about it for a couple of minutes it should be clear
that this should be impossible, right, because, for example, if you
don't bring complexity considerations into it, then this box could be
doing a classical simulation of a quantum device and then report the
answer, pretending to be doing the quantum evolution. And what we care
about here is that it
should really be, the dynamics should be exactly what we specify them
to be.
So it turns out that there is a slightly different setting where in
fact you can do this. And the slightly different setting is one where
you have two such boxes and they share entanglement, but they are not
allowed to communicate with each other. And so in this setting, in
fact, you can. There is a form in which you can achieve this goal even
though the devices are completely un-trusted: they are supposed to
behave in a certain way, you don't trust that they do, and all you can
do is test whether they do through this small interface. And the
theorem says you can.
Okay. So let me just say where this comes up. Let me give you a
couple of examples. So the first example is in quantum cryptography
where, of course, going back to BB84 in 1984, there was this protocol
that proposed that you could get unconditional security for key
distribution using the principles of quantum mechanics. And then the
actual proof
that it achieves unconditional security did not come until about 15
years later, by Mayers and by Shor and Preskill, but then despite this proof of
unconditional security there were --. The actual physical
implementations that people have come up with have all been attacked.
In particular there are these attacks called side channel attacks.
There was already a hint of it in the earliest implementation of, you
know, of quantum key distribution. This was Charlie Bennett's table
top device, with Alice and Bob about 3 feet apart. And it was
unconditionally secure, except for the fact that when it was sending a
1 it put a heavier load on the power supply, so you could actually
hear the difference.
Okay. So now, so Mayers and Yao, you know, about 15 years ago, put
forth this challenge of DIQKD, device independent quantum key distribution,
where they asked, “Is it possible to achieve this kind of security with
devices which you don’t even trust”? So in other words leave aside
these side channel attacks and particular attacks, but let’s say that
you just don’t trust the implementers of these boxes to do anything
right and you just want to make sure in your protocol that it is really
secure. So at that point you could really call it unconditional
security. So that’s one place where this could be useful.
I guess another application is of course if you are building a quantum
computer and maybe you are not really sure what you have actually
achieved and how do you actually test it? So again, it’s supposed to
do something that you cannot do. So how do you test this kind of
device? Okay. So let me jump back to the first theme and then I will
come back to this in a little bit.
Okay. So the first thing is: Are there quantum states which we can
work with efficiently? And so an interesting class of states is ground
states of local Hamiltonians. You know these are states which can be
highly coherent at low temperature. And so you could sort of ask --.
You know, if you describe a local Hamiltonian where it’s a sum of local
terms, each of which is easy to describe because it’s local you could
ask, “Can you necessarily describe the ground state using only
polynomial amount of information”? And of course the trivial answer
is, “Yes, you can”. I mean if the ground state is unique then you can
describe it by just writing down the Hamiltonian and then saying,
"What I mean is the unique ground state of this Hamiltonian".
So that’s a perfectly good description.
But of course what we want is not only that we can specify the state,
but we can also compute interesting properties of it. So, for example,
if we want to compute the ground energy or 2 point correlations or
other such things. And I guess, you know, there is this very
interesting emerging field quantum Hamiltonian complexity which studies
these questions which are the intersection of condensed matter physics
and quantum complexity theory.
So what do we know about it? So what we know goes back to Kitaev, I
guess already at least 15 years ago where he proved that even
approximating the ground energy of a local Hamiltonian is QMA-hard. So
QMA-hard is the quantum analog of NP-hard, and what we conjecture is
that there is not even a sub-exponential size classical witness for
QMA-complete problems.
So meaning that if you wanted to approximate the ground energy of this
local Hamiltonian, then even if you appeal to some infinitely powerful
prover, you know, they could not write down a sub-exponential size
proof that you could check. So it's
sort of a very strong way of saying that the exponential complexities
are inherent to this problem.
And then you could sort of say, “Well, what about special cases of
this”? And you know so this result has been improved to the point
where I guess Gottesman and Irani showed that it’s hard under some
assumptions, even for translation invariant 1D Hamiltonians. Meaning
you have nearest neighbor interactions on the line and the terms of the
Hamiltonian are exactly the same, just translated.
Okay. So this seems to suggest that ground states of local
Hamiltonians are intractable even in the simplest cases. But then, on
the other hand, if
you look at what people do in practice about 20 years ago Steve White
came up with this heuristic called DMRG which is extremely successful
in practice for 1D system at least. So you could ask, “So it works in
practice, but does it work in theory”?
And the answer with DMRG is that it can get stuck in a local optimum, but
still you would want to know is how could it be that you have so much
success in practice and theoretically the problem is completely hard?
So surely there is some real phenomenon here, some special case,
suitably formulated, where you can solve the problem in polynomial
time. And this is why the heuristic might work.
You know, going beyond 1D is a real challenge and there is this
beautiful work by Verstraete, Cirac and Vidal giving methods of
representing quantum states of 2D systems efficiently using tensor
networks where you can manipulate them efficiently. So how successful
would this be going forward? You know it’s not clear, but it seems
like an extremely promising direction for a very important area.
Okay. So coming back to this 1D question: How can we formulate an
interesting sub-class of 1D systems which might be tractable? It
turns out that the interesting parameter here is the spectral gap. So
it’s the difference between the ground energy and the energy of the
first excited state. And if you look closely at these QMA-complete
problems, then they have a spectral gap which scales as 1/poly(n),
where n is the number of particles.
So there is this natural sub-class, which is the gapped local
Hamiltonians, where the spectral gap is a
constant. So here the scaling is important. So we have to, you know,
we scale so that each of the terms of the Hamiltonian has constant
norm, say norm 1. So when we say there is a constant gap it’s sort of
saying imagine that the ground energy was 0 then the first excited
state has energy which is a constant. This means it violates at least
a constant fraction of one of these terms.
If you want to think about the classical analog then the classical
analog is satisfiability where if the formula is uniquely satisfiable,
so the ground energy is 0, then for the first excited state, if you
don't satisfy all the clauses then you must violate at least 1. So the
gap would be at least 1.
So there was this beautiful theorem that Matt Hastings showed about 5
years ago, showing that ground states of gapped 1D Hamiltonians have a
polynomial matrix
product state representation. And once you have the representation you
can compute the energy, 2 point correlations, everything efficiently.
So the problem
is in NP, right. So for gapped 1D Hamiltonians it's no longer
QMA-hard, it's actually in NP.
So then you could ask, "Well, how hard is it to actually compute this
MPS representation"? And there is some sort of folklore that it might be
intractable. In fact there was this paper by Schuch, Cirac and
Verstraete which showed that a closely related problem is actually
NP-hard, which seemed to bolster this notion that maybe actually
computing these matrix product states might be hard.
So I guess last year we, you know, we were working on improving the
bounds in Matt’s result and based on that we actually discovered that
you could come up with a sub-exponential time algorithm for the
problem. And once you have a sub-exponential time algorithm, you know,
it's not likely to be NP-hard. So that raised the question: well,
surely there is a
polynomial time algorithm. And so very recently we came up with an
actual polynomial time algorithm to find these ground states with
Landau and Vidick.
So let me try to outline some of the main ideas here in the next 10-15
minutes. Okay. So of course, you know, the main obstacle to describing
quantum states succinctly is entanglement. So if you have a bipartite
quantum state then we can always write the Schmidt decomposition of
that state. So we have two measures of
entanglement here. One is just the Schmidt rank, which is the number
of non-zero terms in the Schmidt decomposition. This is a crude
measure of entanglement.
And then there is a nicer measure, which is of course Von Neumann
entropy, which is the classical entropy of the |c_i|^2. So the |c_i|^2
form a probability distribution and you just look at its entropy. The
Von Neumann entropy more or less disregards the very tiny
coefficients. So in that sense it's a better measure of
entanglement.
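As a small numerical illustration (assumptions mine: a random bipartite state, with numpy's SVD playing the role of the Schmidt decomposition), both measures are easy to compute:

```python
import numpy as np

dL, dR = 4, 4   # dimensions of the left and right halves
rng = np.random.default_rng(1)

# A random bipartite pure state, as a dL x dR matrix of amplitudes.
psi = rng.normal(size=(dL, dR)) + 1j * rng.normal(size=(dL, dR))
psi /= np.linalg.norm(psi)

# Singular values of the amplitude matrix = Schmidt coefficients c_i.
c = np.linalg.svd(psi, compute_uv=False)
p = c**2                                   # the |c_i|^2 distribution

schmidt_rank = int(np.sum(c > 1e-12))      # crude measure: nonzero terms
entropy = -np.sum(p[p > 1e-12] * np.log2(p[p > 1e-12]))  # Von Neumann
print(schmidt_rank, entropy)
```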
So the key property of entanglement of ground states is captured in
this conjecture called the area law, which applies to gapped local
Hamiltonians. So if you look at the ground state of some
Hamiltonian, which is the nearest neighbor on this lattice. Now you
take the ground state and you sort of consider it as a bipartite
system. So you decompose the particles into this region and the
outside. And you ask: how much entanglement is there between the
inside and the outside? So naively you would think the entanglement
would be bounded by the volume of the region, so the number of
particles. But the area law says that it's proportional to the surface
area, the number of bonds you have to cut
in order to separate the inside from the outside. Morally what the
area law says is that most of the entanglement sits near the boundary.
And if this were true this would suggest that maybe you have a succinct
description where to describe this quantum state you describe the
inside and outside separately.
So what you have to do is you have to describe the entanglement of the
boundary and then you can sort of decompose into the inside and the
outside. And that’s exactly what a tensor network would allow you to
do. So I guess this whole notion of an area law in my understanding is
that in some kind of folklore sense it has been known for a very long
time; although, it was actually formalized in terms of entanglement
entropy only about 10 years ago. So as I said the area law is a
conjecture, and what Matt did 5 years ago is he actually made this
conjecture rigorous for 1D systems. And this was really a remarkable
paper.
So let me just say what an area law in 1D says. So you have a 1D chain
of particles and what an area law would say is that if you cut that
chain somewhere then the entanglement entropy between the left and the
right is proportional to the surface area which in this case is 1. So
the entanglement entropy should be a constant. It seems like a very
simple statement, but it’s extremely hard to prove.
So what Matt showed in particular is that the entanglement entropy
scales as exponential in log D over epsilon where D is the dimension of
each particle and epsilon is the spectral gap. So it’s constant
because N doesn't appear anywhere here. And once you have this, you
know, it implies that for gapped 1D ground states this problem is in
NP.
>> What’s the argument of the log there?
>> Umesh Vazirani: Sorry?
>> What is the argument of the log?
>> Umesh Vazirani: So you know it’s --.
>> Is it D over epsilon?
>> Umesh Vazirani: Sorry, it's log of D, divided by epsilon, sorry.
So I guess, you know, Matt’s argument used heavy duty tools that we
didn’t quite, you know, even after sort of working through them we
didn’t quite understand them. So we, you know, with [indiscernible] we
sort of started working on trying to get a common [indiscernible]
understanding of this.
And so through a sequence of papers last year we finally managed to
show, through [indiscernible] arguments, that you could actually
improve this bound from exponential to polynomial in log D over
epsilon. So log cubed of D over epsilon. But in the process there was
this side effect that we actually realized that you could get a
sub-exponential time algorithm for finding these ground states, finding a
matrix product state representation.
Now there is a certain sense in which this bound is optimal. And so
it’s optimal in the following sense: if you want to try and prove an
area law for 2D systems then it turns out there is no sub-volume law
known yet. So this bound is optimal in the sense that it’s the hardest
you can work without proving anything non-trivial for 2D systems. In
fact if you could improve this even a little bit, to log to the 3
minus delta, you would get a sub-volume law for 2D systems. And if you
could whittle it down all the way to log squared then you would
actually prove the 2D area law. If you could improve this to log D
then in fact it would prove the area law in any number of dimensions,
because you would just sort of fuse all the boundary into one big
particle and just peel off layers one at a time.
Okay. So let me say, you know, just one thing about the main sort of
tool that goes into proving this bound and then I will say a little bit
about the actual algorithm for finding the ground state. So the proof
of the area law actually relies on this object we call the AGSP, an
Approximate Ground State Projector. So an AGSP is an operator that,
when you apply it to a state, leaves the ground state alone, and takes
anything orthogonal to the ground state and shrinks it. So that if
you have a general state it sort of gets projected closer and closer to
the ground state as you apply this operator.
So this operator itself we want two properties of it. So one is that
it shrinks the orthogonal space by a factor of delta, but then the
other thing we want is that it doesn’t increase the entanglement rank
across this boundary very much. So we have to construct this operator
carefully so that it has a good trade off between D and delta, right.
So if you can make sure that the shrinking happens faster than the
entanglement rank is accumulating, then in fact we can prove an area
law.
And the way that this operator K is constructed is by taking the
Hamiltonian and coming up with a lower rank approximation to it. So
its rank is initially as high as the number of particles and we sort of
whittle it down and get a lower rank approximation. And then the
operator K is a low degree polynomial in that new operator, in that
modified Hamiltonian. Okay. So the degree of that polynomial somehow
corresponds to the increase in entanglement rank, and the fact that
it's an approximation to the Hamiltonian makes sure that it's cutting
down the orthogonal space.
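Here is a toy numerical sketch of that shrinking behavior. The operator I use, K = (I - H/||H||)^k, is a simple stand-in of my own choosing, not the careful low-degree polynomial construction from the actual proof, but it has the same two qualitative properties: it fixes the ground state and contracts everything orthogonal to it.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(16, 16))
H = A + A.T                                 # a generic Hermitian "Hamiltonian"
H -= np.linalg.eigvalsh(H)[0] * np.eye(16)  # shift so the ground energy is 0

# Stand-in AGSP: eigenvalue 1 on the ground state, < 1 on everything else.
K = np.linalg.matrix_power(np.eye(16) - H / np.linalg.norm(H, 2), 8)

gs = np.linalg.eigh(H)[1][:, 0]             # true ground state
v = rng.normal(size=16)
v /= np.linalg.norm(v)
for step in range(5):
    v = K @ v
    v /= np.linalg.norm(v)
    print(step, abs(gs @ v))                # overlap climbs toward 1
```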
Let me just say a little bit in these terms about this whole idea about
the algorithm for finding the ground state. So here’s, if you want to
think about it, where did the NP hardness result about finding a matrix
product state representation of the ground state come from? So you
start with a gapped 1D Hamiltonian, and now the question is: you have
this gapped condition, and how do you make use of it? It's really hard
to understand how it relates to the ground state.
And you know one way you can say is, “Well Matt did all the heavy
lifting and he showed that there is a succinct description of the
ground state”. And now we have nothing else to squeeze out of the fact
that there is a gap so let’s throw that away and let’s now start with
the assumption that we have a succinct matrix product state
representation. And now can we find it efficiently? That’s what they
showed was NP hard.
So the way you get further is you have to take the gapped condition and
you have to squeeze it out further. And the way you do that is you
say, "Well, a gapped Hamiltonian implies that there is an AGSP, and
now we have got to use this AGSP further in order to actually find the
succinct description". So that's what we do.
Okay. So let me describe to you how, at a high level, how this
algorithm works. So remember we are trying to find the ground state of
this local Hamiltonian H. Now one way we could find the ground state
is we could just write a semi-definite program. So the semi-definite
program would just say something like this: minimize trace(H rho),
where rho is the ground state, subject to rho positive semi-definite
with trace 1. But now if you look at it, this is a semi-definite
program over an exponential dimensional space, and so it takes
exponential time. So what can we do instead?
Well, suppose that I could give you a small, polynomial dimensional
subspace which is guaranteed to contain a good approximation to the
ground state? And moreover I tell you that this polynomial dimensional
subspace is well specified, in the sense that I give you a basis for
it and I show you how to represent the basis vectors efficiently so
you can do
linear algebra on them. Well then you could just say, “Well this
particular SDP doesn’t sit in an exponential dimensional space. It
only sits in this polynomial dimensional space. So let’s solve that”.
And there you have it. You have your ground state.
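As a sketch of that last step (under the assumed setup above, where someone hands us an orthonormal basis B for a small subspace containing a good approximation to the ground state), minimizing trace(H rho) over states supported on the subspace reduces to diagonalizing the projected matrix B^T H B, so for this toy version we don't even need a general SDP solver:

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 64, 6
A = rng.normal(size=(N, N))
H = A + A.T

# Pretend someone handed us a k-dimensional subspace that (by
# construction here) contains a good approximation to the ground state.
exact = np.linalg.eigh(H)[1][:, 0]
cols = [exact + 0.05 * rng.normal(size=N) for _ in range(k)]
B = np.linalg.qr(np.column_stack(cols))[0]      # orthonormal basis, N x k

# The SDP restricted to the subspace is just an eigenproblem on B^T H B.
w, V = np.linalg.eigh(B.T @ H @ B)
print(w[0], np.linalg.eigvalsh(H)[0])           # vs. exact ground energy
```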
Okay, so --.
>> Isn't there an additional constraint, that the quality of the
approximation has to be good enough?
>> Umesh Vazirani: That’s right, that’s right.
>> So the additional constraint on the SDP is [inaudible].
>> Umesh Vazirani: That’s right or on this envelope.
>> Right, right.
>> Umesh Vazirani: So we are going to make sure that this space is such
--.
>> So you use the gap to some extent and you have to make sure you are
making enough progress to [inaudible]?
>> Umesh Vazirani: That’s right, that’s right, yeah.
So now how do we actually design this algorithm? So here is the first
step: so let’s go back and look at the classical case. So in the
classical analog you have 1D, 1 dimensional satisfiability or one
dimensional constraint satisfaction problem. And now what makes that
easy is that you can do this decomposition of the boundary, right. So
you can sort of say, “Let me assume that the value for this particular
variable is 3". And now because all the constraints are nearest
neighbor I can now decompose the problem into two parts. The left
problem and the right problem, and they can be solved separately. So
if each variable took on D values then this would just give rise to D
sub-problems.
And in particular what you could do is you could now solve this problem
using dynamic programming working your way from the left to the right.
You know, each time adding one more particle. And the key point is
that you need to only keep optimal left solution for each of the D
values of the boundary. So for each, you know, if you want to extend
this left solution by one you would look at every possible value at the
next boundary position and compute the optimal value.
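Here is a minimal sketch of that classical dynamic program (the random nearest-neighbor cost table is made up for illustration; it stands in for the constraints):

```python
import random

random.seed(0)
D, n = 3, 8   # D values per variable, n variables on a line

# cost[i][a][b]: cost of assigning value a to variable i and b to i+1.
cost = [[[random.randint(0, 5) for _ in range(D)] for _ in range(D)]
        for _ in range(n - 1)]

# Sweep left to right, keeping only the best left-solution for each of
# the D possible values of the current boundary variable.
best = [0] * D
for i in range(n - 1):
    best = [min(best[a] + cost[i][a][b] for a in range(D))
            for b in range(D)]
print("optimal total cost:", min(best))
```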
So now we want to do something similar in the quantum case. So how do
we do it? So the first thing we need to do is figure out what’s the
analog of assuming some value for the boundary? So of course now it’s
not just the state of this boundary particle, because there may be a
lot of entanglement between the left and the right. So the natural
analog is what’s called the boundary contraction. It’s roughly the
density matrix that describes the state of this particle and the bond
between the left and the right.
So what’s the dimension of this object? Well it depends upon the bond
dimension. It’s the bond dimension of the matrix product state, which
is a good approximation to your ground state. Furthermore, since this
boundary is now a continuous object, we need to discretize it. And so
we'll actually discretize it with an epsilon-net.
Okay. So now the cardinality of this epsilon-net is going to be
exponential in the bond dimension.
And so in order to get a good approximation to the ground state, a 1
over poly(n) approximation to the ground state, you would need
polynomial bond dimension, and even to get a constant approximation to
the ground state, as far as we know, you would need at least a linear
bond dimension. And the epsilon-net is exponential in that, so the
number of sub-problems becomes 2 to the order N.
So the way that we got the sub-exponential algorithm is we realized in
fact you don’t need a linear bond dimension. You can get by with
something smaller. You can get by with something like 2 to the log to
the 2/3 [indiscernible]. So exponential in that turns out to be
sub-exponential, and that's how you get the sub-exponential algorithm.
But now we want to actually get a polynomial time algorithm, so we
can't afford to do all this.
So what’s the next idea? So the next idea is we can achieve constant
error using a bond dimension here of only a constant. So in other
words we can get constant --. So, we can come up with a matrix product
state whose bond dimension across a particular bond of our choice is
constant, and the bond dimension across every other bond is n, or
polynomial in n. And so if you want to cut this bond you can make sure
that it has constant bond dimension. And then the epsilon net will
have only polynomial elements and so you have polynomial time.
Except now you have introduced constant error each time you fix one of
these bonds, and there are n bonds. So you can't afford this kind of
error. So we have got to somehow drive down the error. And the idea is
we can now use the AGSP to drive down the error. So the number of
sub-problems we have to search through is small, but then we can use
the AGSP to drive down the error and hopefully that will work.
You know once you work through what it means to apply this AGSP to --.
Well of course if you had a complete state then you could apply the
AGSP to it and there would be no problem. It would project it closer
to the ground state and so it would reduce error, but now remember we
don’t have the complete state. We only have this left state. So how
do we apply this operator to just the left half of the state? So what
you have to do is you have to take your operator and decompose it
across this cut. So you write it as a sum of products.
And so what you end up doing is writing it as a sum of polynomial many
terms of this form. And you have to apply each of these AGSP’s to the
left state. So what it does is it takes the dimension of your subspace
and increases it by a polynomial factor. Now of course we
can’t afford to have this polynomial increase at every step, but this
is where this approximate decoupling comes in, right. Remember that
for every element of the boundary contraction we only needed to keep
the optimal state on the left. So what we do is –-. So we are now
alternating between these two steps. You know one which cuts down the
number of elements. The other which reduces the error, but it
proliferates the number of elements. So we just keep going back and
forth between the two.
Now this gets a little complicated because also in the process what
happens is the complexity of our states goes up. So remember what we
are keeping track of is a basis for this sub-space. And we have got to
describe the basis elements explicitly as matrix product states to say,
“Well we have a succinct description for each of the states in our
basis”. So as we do this process the complexity of those elements goes
up. You know the bond dimension goes up so we have to cut that down as
well.
So basically what we are doing is each time we extend the number of
particles by one we do these three steps where we reduce error, reduce
bond dimension, then we use approximate decoupling to reduce the
dimension and then we just keep going on.
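Schematically, the alternation looks something like the following toy loop (the data structures are my own simplification, with a single fixed Hamiltonian rather than a growing chain: a small basis is enlarged by applying the AGSP, then trimmed back to fixed size, standing in for the error-reduction and decoupling steps):

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 32, 4
A = rng.normal(size=(N, N))
H = A + A.T
H -= np.linalg.eigvalsh(H)[0] * np.eye(N)       # ground energy shifted to 0
K = np.linalg.matrix_power(np.eye(N) - H / np.linalg.norm(H, 2), 6)  # toy AGSP

basis = np.linalg.qr(rng.normal(size=(N, k)))[0]   # small random subspace
for _ in range(6):
    grown = np.column_stack([basis, K @ basis])    # AGSP proliferates elements
    grown = np.linalg.qr(grown)[0]                 # re-orthonormalize
    w, V = np.linalg.eigh(grown.T @ H @ grown)
    basis = grown @ V[:, :k]                       # trim back to k directions
print("energy in subspace:", np.linalg.eigh(basis.T @ H @ basis)[0][0])
```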
Okay. Then of course by the end of the process we have this polynomial
dimensional space for the whole thing which is guaranteed to contain an
approximation to the ground state. And then we can solve the SDP and
that’s the solution.
Okay. So let me just say a word about this algorithm. So as it stands
one way you can think about this is just as an analogy we had this
simplex algorithm for linear programming which was fast in practice,
but in theory we didn't know. And then there was this ellipsoid
algorithm, which was provably polynomial time, but nobody would ever
dream of implementing it. So probably that's the way to think about
this. It shows that it’s polynomial time in theory, but this is not
the way you want to implement it.
But, now you can ask, “Can we actually make this faster”? And it seems
like there should be local versions of this algorithm, at least using
AGSPs and this is what we are thinking about. And the other thing you
could ask is, "Well, now that we know how to prove things in 1D can we
actually do anything provable in 2D"? And again these are all
interesting questions to think about.
Okay. So let me come back to the second theme that I talked about of
controlling an un-trusted quantum device, this work with Ben Reichardt
and Falk Unger which was published earlier this year. The picture we
had in mind is there is a classical experimentalist who is confronted
with un-trusted quantum devices and we are considering the extreme case
where we trust nothing about these devices at all. They are black
boxes with binary inputs and outputs. They are entangled. And now we
want to make sure that the initial state and the dynamics are exactly
what we like.
So what’s the problem with doing this? So certainly the Hilbert space
of each device could be very large. But, then within that how do we
know what these devices are up to? So for example when we issue a
command saying, “Measure this qubit” how do we know that the classical
bit that’s reported is even the result of measuring a qubit? I mean it
could be a more general measurement on the system. It may not be
compatible with a qubit at all. So this is sort of the most basic
problem in getting started on this.
Okay. So the way to deal with this, the starting point, is this Bell
inequality, or the CHSH game, which you can think of as a test for
"quantumness", right. In a very simple setting
it’s a place where a classical experimentalist can at least test that
quantum devices have some “quantumness” to them. So what’s the test?
It doesn't really matter what the details are, but each of the devices
in this game, which we call Alice and Bob, gets as input a random bit,
X or Y respectively, and outputs a bit, A or B. And they are trying to
satisfy some condition, which again is not so important, but they are
trying to make sure that the sum mod 2 of the output bits is the same
as the product of the input bits.
And it’s easy to see that classically the best you can do is 75
percent, you know, being correct on 75 percent of the inputs, for
example by always outputting A equal to B equal to 0. So as long as
X and Y are not both 1 you get the right answer. If Alice and Bob
share a Bell pair then they can achieve a success probability of
cosine squared of pi over 8, which is about .85.
So now how do they do this? Well, there is some strategy for doing
this which involves, depending upon whether X is 0 or 1, Alice
measuring her qubit in the Bell pair in one of 2 bases. And depending
upon Y, Bob measures his qubit in one of 2 bases. So there is some
optimal strategy which achieves cosine squared of pi over 8, and you
can do no better.
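Both numbers are easy to check directly. Here is a small sketch (my own; the measurement angles are the standard ones for the optimal strategy) that computes the quantum winning probability cosine squared of pi over 8, about 0.854:

```python
import numpy as np

def basis_vec(theta, outcome):
    # Measuring at angle theta: outcome 0 along theta, outcome 1 orthogonal.
    v = np.array([np.cos(theta), np.sin(theta)])
    return v if outcome == 0 else np.array([-v[1], v[0]])

def win_prob(alice_angles, bob_angles):
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            for a in (0, 1):
                for b in (0, 1):
                    amp = np.kron(basis_vec(alice_angles[x], a),
                                  basis_vec(bob_angles[y], b)) @ bell
                    if (a ^ b) == (x & y):      # the CHSH win condition
                        total += abs(amp) ** 2
    return total / 4                             # x, y uniformly random

print(win_prob([0, np.pi / 4], [np.pi / 8, -np.pi / 8]))  # ~0.8536
```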
So the first step in proving our theorem was proving this rigidity
theorem for CHSH games. So remember, Alice and Bob's Hilbert space is
not bounded, right; it could be anything. So now what the rigidity
theorem says is suppose we actually play the game with Alice and Bob
and suppose they win with probability close to cosine squared of pi
over 8, say cosine squared of pi over 8 minus epsilon. Then the
theorem says they must share a Bell state, or something close to a
Bell state.
So their initial state must be the following: it must be square root of
epsilon close in trace distance to a Bell state tensor product with
everything else. So it doesn’t matter what the rest of their state is
it’s in tensor product with this Bell state, at least up to whatever
level of approximation. And then, you know, and up to some unitary
change of basis on each side.
Okay. So now once you do that you can also say that in fact they must
perform their measurements according to the CHSH strategy that I outlined
before. So they must do exactly what the ideal strategy says once you
change the individual basis to make everything look right. So this
gives you a measure of control just by testing this fact.
So now the next step involves actually showing that suppose we play a
sequence of CHSH games. So we just do this sequentially over and over
again. Then what we can say is that if the devices win close to,
again, a cosine squared of pi over 8 fraction of the games --. So
in the previous case I talked about the probability of winning which we
cannot really --. You know, how do we know what the probability is?
But now we are just playing a sequence of games and we just look at how
many we win. So if we win close to a .85 fraction of the games, then
they must share N states close to Bell states, in tensor product with
the rest, and they must perform these ideal CHSH measurements.
Okay. So let me say it a little more precisely. This is sort of a
rough statement, but what the more precise statement says is suppose
you play a polynomial in N number of games, and then at some random
point you stop and you say, "Okay, is the fraction of games that I won
close to cosine squared of pi over 8"? If yes then you can say, "Now
the rest of the state that Alice and Bob share is close to a tensor
product of Bell states and whatever else, and the next N games that
are played, they must play them in this way, according to the ideal
CHSH strategy".
So you can set up a situation where you are pretty sure that from a
certain point on they are going to be doing exactly what you say in
this restricted space. So now once you have them doing something that
you want you have them over a barrel, right. You can now get them to
do exactly what you want. So you can use tomography to leverage this
multi-game rigidity theorem to force them to do exactly what you want
by creating resource states and doing computation [indiscernible] and
so on.
Okay. So you know as I said before there are various things that you
can use this for. One of them was device independent quantum key
distribution. So that follows easily once you have this kind of
structure theorem. You now don't have to trust the devices. You have
Bell pairs and you can just measure them once you have established
that's true. But there is a problem; there is polynomial overhead and
there is no error resilience. So you really have to work through this
separately to do something.
In fact earlier this year Vidick and I showed a different argument that
in fact you can modify Ekert's protocol from back in 1991 to get one
that's provably device independent. And this particular modified
version actually achieves a bit rate which is within a factor of 2 of
what's optimal even without device independence. So it's really
getting up there in terms of efficiency.
The other task was testing whether a claimed quantum computer is really
quantum. So actually our results were inspired by these two papers, of
Aharonov, Ben-Or and Eban, and of Broadbent, Fitzsimons and Kashefi,
where they
considered a slightly different setting where you have a classical
verifier, but this classical verifier is given a little bit of a boost.
So this classical verifier can also manipulate a small number of
quantum bits. And it shares this small narrow quantum channel with the
prover.
So now there is only a single prover or a single experiment. And then
there is of course a classical channel where they can communicate as
required. So this small number of qubits can be exchanged back and
forth. Of course the prover can build up a reservoir of qubits in the
process. And they showed that in fact in this model you can verify any
quantum computation at all.
>> Umesh, how is "really quantum" defined in this theorem?
>> Umesh Vazirani: Yeah, so here the way it's defined is let's say
that you have a quantum circuit in mind, and what you want is that
this quantum computer on input X applies this quantum circuit to X and
reports the output. And in fact the verifier can be satisfied, if all
the tests are passed, that that particular circuit got applied to X
with very high probability and the output was really, genuinely what
would result.
>> Isn’t the trajectory probabilistic?
>> Umesh Vazirani: Yeah, so you get samples from an output
distribution very close to the correct distribution.
Okay. So you know one could be even more ambitious than this and one
might say, “Well what we would really like to do is we would like to
come up with a general way of testing quantum mechanics”. So you could
take the viewpoint that this exponential complexity is probably the
most counter-intuitive aspect of quantum mechanics. And
then you take the view that one thing that physicists like to do is
test their theories at the limits of their applicability, like high
energy, or very small sizes, or close to the speed of light. And each
time you test in these limits you discover something new.
So shouldn’t we be testing physics in this limit of high complexity?
And then you run into sort of what seems to be a basic problem which is
if you want to test in this limit of high complexity how do you even
know what your experiment will do? So to set up the experiment you
want to first calculate what the result should be and that would in
general be a problem.
So a general way to deal with this is you would want to come up with
some sort of an interactive experiment instead of the usual style of
experiment, where you would think of the classical verifier as
carrying out an interactive proof with the experiment. And what we
want to know is: is it possible, in the spirit of interactive proofs,
that a quantum polynomial time prover can convince a classical
polynomial time verifier of any language in BQP?
>> Is the [inaudible] purely classical?
>> Umesh Vazirani: Purely classical, yeah that’s what we would like
eventually, right. So of course what we know is that interactive
proofs are as powerful as PSPACE and PSPACE contains quantum polynomial
time. So it would seem that the answer is trivially yes, but of course
in order to get IP equal to PSPACE the prover has to be as powerful as
PSPACE. And here we only have a prover who can do quantum polynomial
time computation.
And so that's really an open question. Okay. So in a sense we don't
even need for this to be a single prover. It can be two provers; you
can set up two experiments that don't communicate with each other. So
you could ask, "Well, why isn't the result that I just showed
sufficient to solve this problem"? Well, the point is, in order to prove
correctness of that result we have to make use of properties of quantum
mechanics.
So maybe it gives you a weak version of this, but to really claim this
you need both sides. You know, in interactive proofs you have this two
sided thing. If X is in
the language then there is something that the prover can say which
convinces the verifier. And then there is a flip side to it which says
that if X is not in the language then no matter what the prover does it
will not convince the verifier except with very low probability.
So that has to be unconditional; that second part has to be
unconditional, and for us that second part depends upon, well, it
depended upon CHSH and so on. So at least that part of quantum
mechanics we have to rely on for the "no" answer. And getting that
"no" answer unconditionally I think is a very interesting question.
>> [inaudible].
>> Umesh Vazirani: I am sorry?
>> If you allow them to exchange a few qubits?
>> Umesh Vazirani: Yeah, even then --.
>> [inaudible].
>> Umesh Vazirani: I am sorry?
>> How helpful is this?
>> Umesh Vazirani: Um, we don't know. So even there, making it
unconditional, unconditional based on --. So I think that these are,
um --. So certainly there is a big technical problem here in terms of
what a one prover system with a BQP-bounded prover is capable of. But
then once you try to say, "Well, how about a small amount of
'quantumness'" and so on, then there is a real question about how to
formulate the problem. And that becomes quite difficult.
I should finish up in a couple of minutes so let me just wrap up
quickly the last slide and then I will just say a couple of --. Maybe
I will --.
Okay. So there is this very fundamental test in classical complexity
theory called the Classical multi-linearity test which is sort of a
major component of the PCP theorem. And what it is, is you are given
access to a function on many variables which is claimed to be
multi-linear, let's say linear in each variable. But you don't trust
that it's actually such a function. So can you sort of quickly, with
very few probes, tell that the function you are actually probing is
very close to a multi-linear function?
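For the simplest case, plain linearity over GF(2)^n, the test is easy to sketch (a toy of mine: it probes an honestly linear function, a parity, so every probe passes; a function far from linear would fail a noticeable fraction of probes):

```python
import random

random.seed(0)
n = 10
secret = random.randrange(1, 2**n)

def f(x):
    # An honestly linear function over GF(2)^n: the parity <secret, x>.
    return bin(x & secret).count("1") % 2

trials, passes = 1000, 0
for _ in range(trials):
    x, y = random.randrange(2**n), random.randrange(2**n)
    passes += (f(x) ^ f(y)) == f(x ^ y)   # the three-probe BLR check
print(f"passed {passes}/{trials} probes")
```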
So this is a very, very fundamental building block in the PCP theorem.
It goes back to Blum, Luby and Rubinfeld. Now this quantum multi-game
rigidity theorem gives you a similar object on the quantum side because
what it allows you to do is through these very simple tests, which are
these CHSH tests, it allows you to verify that there is an object which
belongs to a certain class which is like the multi-linear functions.
Here it says up to some unitary change of basis the state is close to a
tensor product of Bell states with whatever else.
And then you also get this condition that the measurements must be
according to exactly these conditions. So it’s sort of an analog of
these multi-linearity tests. And so it’s sort of interesting to see
how much further one can push these sorts of ideas.
I should say that there was a beautiful recent result by Ito and Vidick
showing that this classical multi-linearity test also extends if you
have quantum provers. So here the object that they share, you know if
you do this particular test you can show that in fact they don’t share
a quantum object. The quantum object must essentially be this
classical multi-linearity test.
Okay. So let me just finish up by saying that, around some of the
things that I talked about today, this general topic, Matt and Dorit
and Frank and I are organizing this program at the Simons Institute in
Berkeley this coming spring, with its focus on quantum games and
protocols, area laws and Hamiltonians, tensor network simulations,
etc. So if you are interested please let us know and come and visit
us.
and visit us. There is a long list of people who are already committed
to coming so it should be a nice party and you should come join us.
Hopefully by the time you arrive our building will be habitable and so
--.
Thank you.
[clapping]