>> Krysta Svore: Okay. So we're going to get started. I want to thank you everyone for coming today to
our workshop on quantum algorithms and devices.
We have a really exciting day lined up. I have to say I'm really excited to see so many new faces in the
audience. We did not expect such a crowd. So looks like you have to get to know your neighbors today.
We did -- we had a conversation at the faculty summit for the first time last year, and it was really popular
and a hit, so we had a second session this year. So this is the second time we've had a workshop in
conjunction with the faculty summit, and I have to say this year it looks like we have about double the
crowd.
So this is really exciting, and I hope it shows how much more popular quantum computing is becoming here at Microsoft and also in the academic community.
So thank you everyone for coming. We had a great quantum session yesterday with talks from Rob
Schoelkopf, David Reilly, and Dave Wecker. Today in the middle session Rob and David are going to give
us a more technical detailed talk on the hardware devices and the control engineering required. So that will
be -- that will be great.
This morning we have a series of talks where we're going to learn about what you can do when you have an
untrusted quantum computer. So that should be exciting.
Then this afternoon we'll hear a little bit more about some work in quantum algorithms, what we can learn
about space time from quantum information, also new applications to materials, how we can use quantum
algorithms to explore material science, and so on.
So it should be a really exciting day. Thank you, again, everyone for coming.
And I think we'll just get started.
So the first talk we have this morning is from Yaoyun Shi. And Yaoyun is going to talk about true
randomness, its genesis, and expansion. So let's welcome Yaoyun.
[applause]
>> Yaoyun Shi: Thank you, Krysta, and thank you all for being here. It's a great honor to be here. I've been a theorist all my professional career, but being at a company talking about my work makes me feel that my work is actually relevant to everything. So thank you very much.
So randomness means different things to different people. So I will start by describing what I mean by randomness, why it is difficult to get randomness, and why we should consider a quantum approach for generating random numbers, in particular using untrusted quantum devices.
So what's randomness? Suppose we have an n-bit string X. I call it random with respect to a quantum system E if the string X cannot be perfectly predicted from the system E. So randomness is relative.
The time I had breakfast this morning: I know that it's random to you.
So uniform randomness means that the string X is uniformly distributed and is independent of the system E. So there is no correlation between X and E.
An (n, k)-source means that the string X has n bits, but for the system E, the best chance of E predicting X is no more than 2 to the minus k. So if k is precisely n, the only possibility is uniform randomness. There's no correlation between them.
And when k is strictly less than n, we call X weak randomness, a weak source of randomness.
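For the classical, unconditioned case, this definition can be sketched in a few lines of Python; the conditional min-entropy with respect to a quantum system E is the operator-valued generalization of the same quantity. This snippet is illustrative only, not code from the talk.

```python
import math

def min_entropy(probs):
    """H_min(X) = -log2(max_x Pr[X = x]); the best guess for X
    succeeds with probability exactly 2 ** (-H_min)."""
    return -math.log2(max(probs))

# Uniform 2-bit string: best guess succeeds with probability 1/4,
# so this is an (n, k) = (2, 2) source -- uniform randomness.
print(min_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# Biased 2-bit string: best guess succeeds with probability 1/2,
# so this is an (n, k) = (2, 1) source -- weak randomness (k < n).
print(min_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.0
```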
So we will be talking about errors. By error I mean a deviation of the joint state of X and E from this uniform-distribution case. The deviation is measured by the trace distance.
And I will be talking about true randomness. What I mean is that when the parameters of my protocols grow, when I use more resources, this error will go to 0. That's what I mean by true randomness. Okay. Those are my terminologies.
So why do we need randomness? We need a lot of it. Randomness is critical for cryptography and privacy, and is vital for running randomized algorithms, for simulating physics, or for gambling.
And we need so much of it. Probably one terabyte a day in the whole world is not an exaggeration. I saw
Burton laughing. I'm not sure. Do you think that's reasonable?
>>: [inaudible] small.
>> Yaoyun Shi: Ah, this is too small. Okay. So it's at least a lower bound.
Actually, in fact, most of the randomness we use doesn't come from keys. Keys are only a small portion of it, at least according to my security colleagues. Every Internet packet has a header which is supposed to contain a random number.
So the central question is how do we get to true randomness and know that we are getting it? The reality is
that we are not always getting true randomness. And there are many security problems in the practical
systems we're using.
I want to quote research done by my colleague Alex [inaudible] and his collaborators, including Heninger. So the [inaudible] collected a lot of keys, DSA keys and RSA keys, from the Internet. They didn't try to factor every one of the public keys, but they looked for pairwise shared factors. And they ended up breaking 1 percent, or 1.5 percent, of keys.
And the reason is that there are so many keys sharing factors; with [inaudible], you know, GCD is easy to do, right? And there is a small number of device makers, probably covering 80 percent of the entire market, producing that hardware. And they all use similar methods to get initial randomness. So there just isn't enough entropy to get started with.
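The shared-factor weakness can be demonstrated with toy numbers. This is the naive pairwise version; Heninger and collaborators used a batch-GCD algorithm to make the computation feasible for millions of keys.

```python
from math import gcd

# Toy primes standing in for real RSA primes; imagine two devices that
# booted with the same poor entropy and so generated a common prime p.
p, q, r, s = 101, 103, 107, 109
moduli = [p * q, p * r, q * s]   # public RSA moduli N = prime * prime

# A GCD > 1 between two moduli reveals the shared prime and breaks
# both keys at once, with no factoring required.
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            print(f"moduli {i} and {j} share the factor {g}")
```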
The other problem, as we know from newspapers, is that there are hardware and software back doors in random number generators. So Heninger and coauthors said in their paper: secure random number generation continues to be an unsolved problem in important areas of practice.
Yes? Matt?
>>: At the start you said that they simply take a large number of [inaudible]?
>> Yaoyun Shi: They did it in [inaudible] pairwise [inaudible] but essentially [inaudible] more clever
algorithm never find out if there's any [inaudible] factor [inaudible]. Yeah. It's a clever way of pairwise
[inaudible].
>>: [inaudible].
>> Yaoyun Shi: Primes are supposed to be random. I mean, I'm not [inaudible] randomly. That's basically
the theoretical -- the theoretical reason why there are so many share factors.
So it's a difficult problem. Why is it a difficult problem? Here's a cartoon, a Dilbert cartoon. This random number oracle is giving out random numbers, which is a sequence of nines. Nine, nine, nine, nine, nine. Doesn't seem random to us. But it occurs with the same probability as any other sequence.
There's just no way to test if an output is uniformly distributed, for a simple reason: because the uniform distribution is a convex combination of different [inaudible] distributions, if I can put it that way, there's no way to differentiate these two cases.
And it's also a deep mathematical physics question. It's not clear at all that randomness exists in the world. It's entirely possible that we all -- this may look terrible to some people; I apologize for that. But I'm a fan of the movie, so you probably know which movie I'm referring to. It's possible that we are all sitting in this environment where we feel that things are random, but some higher being knows what's going on.
We just can't possibly know. And it's possible that quantum mechanics is a correct theory for predicting our observations while at the same time everything is deterministic. There might be some magic book with every setup of an experiment recorded, and the outcome of each experiment also recorded already, deterministically, and we are just reading off that deterministic book. It's possible.
>>: [inaudible].
>> Yaoyun Shi: Hmm?
>>: Are you reading it randomly?
>> Yaoyun Shi: No, we are reading it deterministically. And, yeah, it's random to us, but for this higher being, it's deterministic. It took me a while to feel comfortable accepting that quantum mechanics can be correct while at the same time the world could be deterministic.
>>: How does your definition of randomness compare with, say, [inaudible] complexity or other definitions of randomness?
>> Yaoyun Shi: Yeah, I think it's very different. [inaudible] uses program size to describe randomness: the smallest size of a program that generates the sequence.
>>: Can you define what a guess is?
>> Yaoyun Shi: Sure. I can define it. So imagine you have some classical string, some secret X, and I am quantum -- I am the rest of the universe. Okay. And I'm trying to predict your secret. So what I can do is apply whatever local measurement to my system and come up with a number. And that's my guess.
>>: Do I have to write down a match before [inaudible]?
>> Yaoyun Shi: No, I don't know where you are. So I know you have some string. Suppose I know the length of the string, and I'm trying to come up with a guess. So you have X. I'm going to come up with X prime. X prime comes from the [inaudible] outcome that I get by measuring my own system. So I can choose the best measurement, I think.
>>: So all your strings are finite.
>> Yaoyun Shi: All my strings -- I have a quantum system. I have quantum [inaudible] information about your secret [inaudible] as well.
So for our question to make sense, we have to assume there is weak randomness in the world. Now, assuming that the world is not deterministic, that there's some weak randomness, the question becomes: could there be almost perfect randomness, assuming we have weak randomness to start with?
So a [inaudible] answer to that question would be that maybe we are stuck with weak randomness, that there is no way to amplify weak randomness to almost perfect randomness. So it's a deep physics question.
So let's look at the tools people developed in the past for getting true randomness. There's a theory of randomness extractors, developed starting in the '80s, and now it's a very active research area. In this theory, weak sources are modeled by min-entropy; we describe them by the length and the min-entropy. And the extractor is a deterministic procedure: it takes weak sources as input, and under some assumptions about those weak sources the extractor is so cleverly constructed that the output is guaranteed to be true randomness. So that's the classical theory.
And when one of the sources is uniform but very short, there's a very short seed you start with, there are
amazing constructions, [inaudible] constructions that guarantee very good performance, almost optimal
performance in the output.
But there's a limitation to that theory. One major limitation is that it requires two independent sources to start with. In particular, if there's only a single source, in which case we call it deterministic extraction, then it's impossible even just to get one bit. And this was shown by Santha-Vazirani in the '80s. And this is Umesh Vazirani.
So we require two independent sources. And of course if one of them is uniform, it is independent of the other. But there are also constructions where neither has to be uniform; they do need to be independent. If they are even slightly correlated, we can't extract even one bit.
It's impossible to check if two sources of randomness are independent, for the same reason I pointed out earlier. You can have independent distributions, but a convex combination of them could be highly correlated. So there's no way to tear these two [inaudible] apart. And there's probably some correlation between the two sources.
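As a concrete illustration of what a two-source extractor looks like -- a textbook construction due to Chor and Goldreich, not something from the talk -- the inner product mod 2 of two independent n-bit sources is close to a uniform bit whenever each source has min-entropy somewhat above n/2, and the guarantee evaporates exactly when the independence assumption fails.

```python
def inner_product_extractor(x, y):
    """Two-source extractor: inner product mod 2 of two equal-length
    bit strings.  Near-uniform output if x and y come from independent
    sources, each with enough min-entropy (Chor-Goldreich)."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

print(inner_product_extractor([1, 0, 1, 1], [1, 1, 0, 1]))  # 0

# If the sources are correlated -- say y always equals x -- the output
# is just the Hamming weight of x mod 2, which need not be uniform.
```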
Okay. So this fundamental limit is one reason we should consider a quantum approach, and of course there are more reasons to do that. The most important reason is that quantum mechanics, in its postulates, has perfect randomness in it. So if we prepare a quantum state in the plus state and just measure it in the computational basis, we get a perfect coin outcome. And in fact commercial products are available.
And I want to point out that trusting the quantum device -- I'm talking about trusted-device random number generation now -- is actually a very strong assumption; it implies an independent source. Because if you trust the device, you trust that the state is the plus state and you trust that the measurement is a computational-basis measurement, then you get a perfect coin. You get [inaudible]; you can get independent bits out. So it's a stronger assumption than independence.
So this is a commercial product. I hope this slide will not annoy anybody in the audience. This is taken from the Web site of ID Quantique.
So this is a product of the company for generating random numbers using quantum technology. I don't know what's inside. I heard it's based on photonics. I'm not sure if it is or not.
>>: We opened it, and it's not what you think it does.
>> Yaoyun Shi: Okay.
>>: You have to look at the patent number. And when you Google the patent, then you know what's really inside; don't look at the Web page.
>> Yaoyun Shi: So now I'm convinced that I'm not going to annoy anybody after you speak up. Okay.
Thank you for that information. Right.
>>: And we don't know whether it's quantum.
>> Yaoyun Shi: Ah. Okay. That's good. Yeah. That's interesting. Okay. So I didn't open it up; I didn't look at the patent. I looked at what they said.
They said that their device passes all randomness tests, which I know is mathematically impossible. And they also say it's officially certified by some -- one, two, three, four -- certification agencies, maybe, [inaudible].
>>: [inaudible] ask them what that means [inaudible].
>> Yaoyun Shi: Yeah, I don't know what it is.
>>: [inaudible] asked to run one test suite.
>> Yaoyun Shi: Okay.
>>: And they certify that that one test suite the answer is it's passing the [inaudible] test.
>> Yaoyun Shi: I see. Thank you for that information. Thank you.
>>: It doesn't mean that it's random.
>> Yaoyun Shi: Okay.
>>: Just means they certify that it passes network test.
>> Yaoyun Shi: Sure. Sure. Okay. Good. Actually, right, I heard that they sell a lot to casino companies, and that's precisely what those casino companies need: that credibility.
>>: The certificate.
>> Yaoyun Shi: Exactly.
>>: Not the quantumness.
>> Yaoyun Shi: Exactly. Or the randomness.
>>: So I have another question. How do you know this state is precisely one times ket zero plus one times ket one? I mean, you're asserting that it's on the equator of the [inaudible] with perfect accuracy. There's been no change whatsoever. The coin is absolutely fair.
>>: It doesn't have to be.
>>: That seems outrageous.
>>: It's not meant like that.
>>: I know. But, I mean, the claim is that it's a perfect generator, that it passes all tests. If I get 1 percent more 1s than 0s when I measure it, that's a test it will fail if it's not absolutely fair.
>>: But you can always apply [inaudible] trick.
>>: You can apply tricks. The assertion was that the thing passes all tests. There is a test. Does it pass that no matter what? I doubt it.
>> Yaoyun Shi: Okay.
>>: [inaudible].
>> Yaoyun Shi: Okay. So I'm going to bring up some similar doubts. So how do we know it works? Right? The same question you're asking. We are not quantum beings; we are classical beings. We can't put our fingers into that device and say, wow, this is beautiful, it's in the plus state. We can't do that.
And there's another question. Are we willing to trust the manufacturer or the certifying agency? I guess after those comments we probably will not.
And even if, yes, we trust them, the devices themselves might not stay reliable over time. So we're still facing the question of how we know the thing is working according to its specifications.
So these considerations motivated untrusted-device quantum cryptography. In this model, we interact with the devices through classical interfaces. And the devices themselves might be entangled and might even have been prepared by the adversary. Okay. That's the worst-case scenario [inaudible]. So there's no assumption about the inner workings, and the devices can be imperfect or even malicious.
So the question is can we still reap the quantum benefits by using those untrusted quantum devices. And
the answer is yes, we can. We have to work with them, and we can still make use of them.
So this area of untrusted-device quantum cryptography started with the question of quantum key
distribution. And now it has a few directions, I think Ben is going to talk about a parallel, closely related
direction of untrusted quantum computation in a sense.
So our task is to create and expand true randomness using untrusted quantum devices. So that's our goal.
So I'm going to talk about what has been done and introduce our model and state our results.
So there are two lines of research before our work. One is called randomness expansion, studied by Colbeck in his thesis; a paper appeared a few years later with some new content.
So here's the setting. We have some untrusted devices, and we start with some small number of bits that are perfectly random. Making use of those devices, and with the help of the perfect seed, ideally we hope to output a large number of bits.
So there have been security [inaudible] against classical adversaries and against restricted quantum adversaries. And the first quantum security was proved by Vazirani and Vidick in 2012.
Another amazing property that Vazirani and Vidick proved is that the growth of the length is exponential: we start with k bits; we get exponentially many bits out. So we have these two amazing results.
So the second line of research is called randomness amplification, started by Colbeck and Renner more recently. They were motivated by a question physicists ask: Is there fundamental randomness in nature?
So we need to model what weak randomness in nature is. Here is their model. You have a sequence of events; each one is a binary bit. And for every bit i, the probability of the bit being 1, conditioned on the previous bits and the environment -- just forget about what it is -- is between one half minus epsilon and one half plus epsilon. So epsilon is a parameter describing the weak randomness of the world.
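A Santha-Vazirani source of this kind is easy to simulate; the sketch below (my own illustration, with a hypothetical `adversary_bias` callback standing in for whatever steers the bits) shows an adversary pushing every bit as far as the epsilon bound allows.

```python
import random

def sv_source(n, epsilon, adversary_bias, seed=0):
    """Sample an n-bit Santha-Vazirani source: each bit is 1 with a
    probability the adversary may steer anywhere inside
    [1/2 - epsilon, 1/2 + epsilon], possibly depending on past bits."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        # Clamp the adversary's requested shift to the allowed band.
        shift = max(-epsilon, min(epsilon, adversary_bias(bits)))
        bits.append(1 if rng.random() < 0.5 + shift else 0)
    return bits

# An adversary that always pushes toward 1 as hard as allowed:
bits = sv_source(10_000, 0.1, lambda past: 1.0)
print(abs(sum(bits) / len(bits) - 0.6) < 0.03)  # mean near 1/2 + epsilon
```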
So starting with this weak randomness and making use of untrusted quantum devices, they were able to output one bit of arbitrarily high randomness, meaning the probability of 1 is close to one half. As the length of the stream grows, the closer the bit will be to perfect randomness.
A later paper by Gallego and coauthors shows that this is true for any epsilon less than one half. So the previous paper shows that when epsilon is small enough, you can amplify the freeness of the free will, and the later papers show that as long as epsilon is bounded from above by one half, then you can extract one bit. And a more recent paper reduced the total number of devices to a constant.
So those works assume certain independence of the source and the devices; there is some independence assumption there. So those were the two lines of research before our work. Now I'm going to present a unifying framework, which we call physical extractors.
So in this framework, there are four entities. The first one, the protocol, is deterministic. And there's an adversary, which is quantum and all-powerful. The adversary may be entangled with the devices, and he might have been the person making those devices.
But we have a restriction that once the protocol starts, there's no interaction between the adversary and the devices. We have to isolate those devices from the adversary. And this is an assumption that we also have in classical cryptography: if you're doing something secret, you have to require that the components you are running your [inaudible] on are not sending out signals to the adversary.
All right. The third entity is the collection of devices. They are non-interacting among themselves once the protocol starts. In principle we could pull them far apart and rely on special relativity to enforce no communication, but in practice that may be difficult.
Okay. The fourth actor in this game is the min-entropy source. This is a classical string that has certain unpredictability to the adversary and the devices. So the min-entropy of the source is with respect to the joint system of the adversary and the devices. So that's the setup of our model.
So what are the goals? Actually, before that, let's look at expansion and amplification as physical extractors. So expansion is simply [inaudible] extraction, meaning that the input X is a uniform string not correlated with the devices or the adversary. And amplification is a one-bit extractor with some strong assumptions about the source.
And I'm going to jump to the goals of this model, of the physical extractor. We want to accomplish a few goals. We hope to have quantum security: the output should be random with respect to an all-powerful quantum adversary, as opposed to [inaudible] security. So we are looking at information-theoretic security.
And we hope to minimize error. We have two types of errors: the error of rejecting an honest implementation, and the error of accepting a bad outcome, of being fooled by a malicious adversary. So two types of errors we want to minimize.
And we hope to maximize the output length, and we want to minimize the requirement on the classical source; we hope to work with the broadest class of sources. And we hope to tolerate a constant level of noise in the devices. We are still not capable of doing everything perfectly in quantum, so we hope to tolerate that.
And we also want to do things efficiently: reducing the requirement on quantum memory and the number of devices we need to separate, whose communication we need to restrict; and the running time should be short. So those are the goals.
So now we have this model. And our goal becomes constructing physical extractors with optimal
parameters.
So let me state our results. The first result is joint work with Carl Miller. This is for seeded extraction. So we start with a small sequence of uniform bits, say a seed of k bits, and we have two devices between which we can restrict communication.
In the end we produce exponentially many (in k) bits, close to uniform with respect to the all-powerful quantum adversary. So quantum security. Those aspects match the result of Vazirani-Vidick.
So what are the new results? The error in our protocol is negligible in the running time. So we achieve cryptographic security for the first time; the previous security was only inverse polynomial.
So N is going to be the running time, and the error is a function that shrinks faster than any inverse polynomial. So t could be an arbitrarily large constant; it could be a hundred million. There's a tradeoff between this constant t and the exponent in the output length. But any t and s whose product is less than, say, one half is admissible.
So our protocol is also robust: if the devices deviate from optimal by some small enough constant, everything still works.
>>: [inaudible] there, right? B times C [inaudible]?
>> Yaoyun Shi: That's right. Thank you. [inaudible] So the quantum memory only needs to be one qubit for each device component. The two devices play a [inaudible] game that consumes their entanglement, but then they can communicate to establish new entanglement and then in the next cycle play the next game. So they only need to hold one quantum bit in each component.
And there are many building blocks that work in our protocol, and the proof is just completely different.
Okay. That's the first set of results. The second set is with Chung and Wu. This is for seedless extraction. In other words, now we don't have uniform randomness to start with. We have some (n, k)-source, just one [inaudible] source, from which we know it is classically impossible to extract even one bit. So now we have one source, and in fact our protocol works even if the source is completely known to the adversary. The min-entropy k is with respect to the devices. Remember, we have a restriction of no communication between them. That's why even if the adversary completely knows the input, he cannot help the devices fool our protocol.
And this min-entropy k can be arbitrarily small. A hundred. A thousand. Okay. A constant. What we accomplish is basically a reduction of seedless extraction to seeded extraction. The construction tolerates a constant level of noise. There's some tradeoff between the error and the number of devices, and the error can be made very close to optimal.
So [inaudible] running out of time. So I'm going to summarize some applications, and then I'll be done.
One application of our two papers is that we can do unbounded expansion that is robust against noise. Let me just go to the next slide.
So here, when we put these two papers together, here is the result we have. We can start with k bits of min-entropy; k can be an arbitrarily small constant -- sorry, can be a small constant. Then, using the second extractor, we get some true randomness of k bits. And then, using the unbounded expansion protocol, we get arbitrarily long output, always secure against the adversary. And the error is exponentially small in k, which is close to optimal; the optimal one is 2 to the minus k.
And we can translate the min-entropy protocol for generating randomness into key distribution. In this case, we have a new result saying that Alice and Bob only need to start with a very short seed in order to produce a long shared key. And this is an untrusted-device and robust key distribution protocol.
So a physical implication, or maybe a slightly philosophical one, is that we have a very strong statement about randomness in our world: true randomness in nature either does not exist, or it exists in almost perfect quality and unbounded quantity. So we are not stuck in the in-between world where we only have weak randomness.
And we can use our result to mitigate the freedom-of-choice loophole in the Bell test: starting with weak randomness, getting almost perfect randomness, and then running the Bell test.
I think I'm going to stop here because I'm running out of time.
>> Krysta Svore: [inaudible].
>> Yaoyun Shi: I do have -- I do. Okay. Good. Then at least I can talk about other problems.
Okay. So let me talk about methods briefly. The foundation of those protocols is the Bell test. Okay. And here is how I summarize those protocols: I call it a classical test of the quantum duck. If it sounds like a quantum duck and it walks like a quantum duck, then it must be a quantum being.
So the protocols will be classical tests on the devices, to see if those devices behave like quantum devices. And the tests will have a property that we call robust, or, as it's also called, rigid.
So this is the property, the rigidity property: if those devices behave very closely to ideal devices, then they must be very close to ideal quantum devices. From the classical behavior, we can infer that the inner workings must be close to certain ideal quantum behavior.
And ideal devices will be able to generate perfect randomness. And we know [inaudible] is one such test. I'm going to skip that.
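For reference, the standard CHSH test behind this idea can be sketched numerically; this is a textbook calculation, not code from the talk. Two isolated devices receive uniform bits x, y and must answer a, b with a XOR b = x AND y. Classical strategies win at most 3/4 of the time, while ideal entangled devices reach cos^2(pi/8), about 0.854, and rigidity says that winning close to 0.854 forces the devices to be close to the ideal quantum strategy.

```python
import numpy as np

def quantum_win_prob(x, y):
    """Winning probability of the standard optimal quantum strategy on
    inputs (x, y).  Alice measures at angle 0 or pi/4, Bob at +pi/8 or
    -pi/8; the outcomes agree with probability cos^2 of the angle gap."""
    a_angle = (0.0, np.pi / 4)[x]
    b_angle = (np.pi / 8, -np.pi / 8)[y]
    agree = np.cos(a_angle - b_angle) ** 2
    # Win condition: a XOR b == x AND y, i.e. agree unless x = y = 1.
    return agree if x * y == 0 else 1 - agree

avg = np.mean([quantum_win_prob(x, y) for x in (0, 1) for y in (0, 1)])
print(round(avg, 3))  # 0.854 -- above the best classical value, 0.75
```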
So there's one challenge for applying the CHSH game, for example, and games like it, for randomness. Remember that in playing such a game, we require that the input to the devices is uniform, so we are consuming randomness just to get one bit. Playing one game, we need two bits of randomness as input to the devices, and we get one bit out. So we're consuming more randomness than we get. So that's a problem.
And in the case of seedless extraction, when we don't have true randomness, we don't have perfect randomness to start with, we have the additional problem that the input to the players, the quantum players, cannot be guaranteed to be globally uniform. So our goal is to generate globally uniform bits using [inaudible] games, but those building blocks require globally uniform bits to start with. So it's a chicken-and-egg problem. So those are the challenges.
So one solution to the first problem, of consuming too much randomness, is what I call mixing work and play.
I was very proud one day: I made her do the dishes, and at the same time she felt very accomplished in playing.
So in our setting, work is to generate randomness, but with the input fixed: the input to the nonlocal game is a fixed 0-0. I put Einstein in between because I restrict these two devices from communicating while playing the game.
Play is to run the nonlocal game with uniform input. Okay. So playing consumes randomness, too much randomness, but working does not consume randomness. So one idea is to mix these two things so that each device cannot tell if it is playing or working. When the input is 0, it could be working; it could be playing. So if the device is trying to cheat, trying to output some deterministic outcome, then there's a risk of it failing --
>>: If you had a third Einstein in there [inaudible] communicate, then you could tell.
>> Yaoyun Shi: I need a lot of Einsteins. So here's a protocol from Coudron-Vidick-Yuen; it's an adaptation of an earlier protocol by Vazirani and Vidick. There we have two devices playing a lot of games, and we choose a tiny fraction of those games for testing, because that needs only a very small amount of randomness, and the rest of the rounds are generating rounds.
And that's a protocol that intuitively you think must work. But proving it works turns out to be very difficult.
>>: [inaudible] choosing?
>> Yaoyun Shi: Great. Sure. So in the context of seedless extraction, I have some uniform seed to start with. It is very short. Right? So I'm going to use those coins so that for each round I toss a very biased coin; with probability p, I will choose that round as a testing round.
And p is a very small number, so I'm going to choose only a very small number of rounds. But they are chosen i.i.d.
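Schematically, the round structure he describes might look like the sketch below. The function names are hypothetical stand-ins; the real Miller-Shi and Coudron-Vidick-Yuen protocols additionally feed the generated bits through an extractor and have far more delicate abort conditions.

```python
import random

def run_rounds(n_rounds, p, query_devices, won, seed=0):
    """Toy round structure: with small probability p (i.i.d. coins from
    the short seed) a round is a 'play' (test) round with uniform
    inputs; otherwise it is a 'work' (generation) round with the fixed,
    randomness-free input (0, 0).  `query_devices(x, y) -> (a, b)` and
    `won(x, y, a, b)` are stand-ins for the two isolated devices and
    the nonlocal-game predicate."""
    rng = random.Random(seed)
    wins, tests, raw_bits = 0, 0, []
    for _ in range(n_rounds):
        if rng.random() < p:                       # rare test round
            x, y = rng.randint(0, 1), rng.randint(0, 1)
            a, b = query_devices(x, y)
            tests += 1
            wins += won(x, y, a, b)
        else:                                      # generation round
            a, _ = query_devices(0, 0)
            raw_bits.append(a)
    if tests and wins / tests < 0.80:              # far below ~0.854
        raise RuntimeError("devices failed the test rounds; abort")
    return raw_bits
```

Deterministic devices that always answer (0, 0) win the CHSH predicate only about 3/4 of the time on test rounds, so with enough tests they are caught, while honest near-optimal quantum devices pass and their generation-round outputs carry fresh randomness.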
>>: And you don't put any assumptions [inaudible] your whatever device [inaudible].
>> Yaoyun Shi: Exactly. That's the whole point. I don't know what's going on in devices. Okay.
So a lot of hard work went into making the proof work. There are a lot of ideas involved, and we had particular puzzle pieces to solve this problem. And I wanted to highlight one, but I guess I'm running out of time. Okay. So maybe I'll just go to the open problems, because those are more exciting for you. You may be able to solve those problems.
If I have to talk about one single problem, okay, the main problem, the one single problem, is: is there a perfect physical extractor? Earlier I described the model of physical extractors. There are multiple parameters. By a perfect physical extractor, what I mean is that every parameter is optimized [inaudible], as opposed to there being some inherent tradeoff between the [inaudible] parameters. Okay. That's the major problem I hope to see somebody solve.
I'm going to stop here. Thank you.
[applause].
>> Yaoyun Shi: Yes. Joe.
>>: Very nice talk. You mentioned at the beginning you need a trillion bits a day.
>> Yaoyun Shi: At least.
>>: Do you have a sense of the rate at which you can produce [inaudible]?
>> Yaoyun Shi: That's excellent question. Yeah. Yeah. That's excellent.
>>: [inaudible].
>> Yaoyun Shi: Okay. Good. So --
>>: [inaudible] and what are the limits to producing it as fast as you need?
>> Yaoyun Shi: Okay. I think it's optimal. It's almost the best possible in terms of the bit rate. So every basic operation, on average, produces one bit, a constant number of bits. So in terms of bit rate it's almost optimal. In the case of --
>>: Can you give me a number? Can you give us a number?
>> Yaoyun Shi: I think there's a tradeoff between the error parameter you hope to have and the output length. But it's almost optimal. That is in the seeded case, assuming you have some short seed [inaudible]. I think it's almost perfect, except for one problem: I don't know what the least requirement on the source is, like entanglement. A major problem is: is it possible to produce true randomness without entanglement? I conjecture it is. Okay. Work needs to be done. But in the seedless case, meaning when the input is not perfect, there's big room for improvement. We have to use a lot of devices. A lot of Einsteins. Which is not practical. And that's where I see big improvement can be made. Or maybe essentially a stronger analysis shows that we can reduce the number of devices. That's also possible.
>>: So at the beginning you defined true randomness in terms of the error approaching 0.
>> Yaoyun Shi: Yes.
>>: So I have two opposite questions. If I put on my Microsoft hat, I would ask: for particular values of
the parameters, how small do the errors get, or somehow. If on the other hand I put on my mathematics
hat, I ask how fast is that asymptotic approach to 0 [inaudible]? What would you want for it
[inaudible] questions?
>> Yaoyun Shi: I guess these two opposite questions can be answered by me switching the two hats
[inaudible] okay. So we have two scenarios, you know, seeded and seedless. Overall I can see there's a lot
of room for improvement. But in the seeded case, the error actually scales very nicely. The
error is a negligible function of the output length. It's very nice.
And I haven't thought about the Microsoft question yet. But maybe that's why we need the industry to
work on this as well.
Okay. Thank you.
>> Krysta Svore: Any other questions? I guess that's it.
>> Yaoyun Shi: Thank you.
[applause]
>> Krysta Svore: Okay. So now we're going to move on to our next talk. So our next speaker is Ben
Reichardt. And Ben's going to speak to us about classical command of quantum systems. So let's welcome
Ben.
[applause].
>> Ben Reichardt: Thank you. Yeah. This is joint work with Falk Unger and with Umesh Vazirani. The
main question I'd like to ask is how we can characterize an experimental device. It's a very general physics
question. Of course you run tests on it or whatever. But the difference, I'm going to say, is that we want to
characterize it with extremely high confidence. So imagine that you're completely paranoid and you
don't trust anything about this device, and you're worried about loopholes or people cheating you, or you're
really totally unsure, and you still want to be able to characterize the device and understand how this
experiment works, or possibly even control it.
And there are lots of problems here. One problem is that any device, any real experimental system, is going to
be quantum mechanical. And this means if it's an interesting system, it's going to be exponentially
complicated, potentially, right? Because just a quantum state on N qubits requires exponentially many
parameters in N to describe.
And, furthermore, we are essentially classical. So our interactions with this device, we only get classical
information out. We measure this system and we get some classical information out; we can measure it
again, and it's a different direction, whatever.
So it seems like they have this extremely complicated quantum system, and we're only able to access it
classically. And it doesn't give us a lot of hope to be able to solve this problem in general.
And there are other problems as well. So this is sort of related to the previous talk. A trivial way of
generating a random bit is just to prepare the uniform superposition of 0 and 1 and then measure it
in the computational basis, in the 0-1 basis. So you'll get 0 or 1, 50-50 [inaudible].
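As a minimal sketch of the trivial generator just described (in Python with numpy, which the talk does not specify): prepare the uniform superposition of 0 and 1 and read off the computational-basis measurement probabilities from the Born rule.

```python
import numpy as np

# Uniform superposition (|0> + |1>) / sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: probability of each computational-basis outcome
probs = np.abs(plus) ** 2

print(probs)  # [0.5 0.5] -- outcomes 0 and 1 each with probability one half
```

This is exactly the idealized 50-50 coin; the discussion that follows is about why a real device need not behave like this.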
And the problem, as we saw, is if there's some slight bias, you're not going to get 50-50, you're going
to get 51-49. There are other problems, like maybe when you do this twice in a row there's some correlation
between the bits for some reason. The states are correlated. Or there could be completely crazy things
going on. Like it could be that actually the device was manufactured by the government and it has a
random string inside in some tiny little corner of the device that you can't even find very easily.
And it's not random at all, it's actually deterministic, just pretending to be random.
And so we'd like to have the correct security model. And you might imagine that this is really too paranoid
over here. We don't have to worry that much. But the problem, I think, is that it's not so easy to figure out
where you should draw the line if you're worried about security, exactly what kinds of faults or errors in the
device you should worry about. You could write down a list and you could just keep adding and adding
and adding to the list.
So instead of that approach, I'm just going to start with the most paranoid assumption, security assumption
possible, which is that actually the device was manufactured by the enemy, by an adversary, and it's trying to
trick you. And, yes, this is paranoid, but the advantage is that it allows us to deal with any kind of security
model that's weaker. We don't have to specify it too carefully.
And if you think about it for very long, so here's my device, the model is that I have no idea what it does.
It's made by the adversary. And if you think about it for very long, this is a completely impossible problem
to try to understand what's going on inside this black box device when you can only access it by asking
questions and getting answers.
But the interesting thing is that actually if you have two of these black box devices, and you have at least
one Einstein, then it does become possible.
So this is really the model we're going to work with, I'm going to work with now. I'm going to say we have
two black box devices that can't communicate with each other -- yeah.
>>: Do we need to assume the two black box devices are identical?
>> Ben Reichardt: No.
>>: No. Okay. Good.
>> Ben Reichardt: I just copy/pasted.
>>: [inaudible].
>> Ben Reichardt: Well, yeah, that's the problem, right? I mean, I guess you have to assume that they're --
so I say we have an Einstein, but in this protocol we're actually going to require lots of interaction. So it's not
going to be possible to [inaudible] by separating them by large distances. You're going to need lead boxes and
stuff like that.
Okay. So here's sort of what I'm going to start with, which is this CHSH game. And so here we have a
referee. This is like me, the experimentalist. And we have two devices, Alice and Bob, the two
players. And how the game works is you send them each a random bit, A and B, so it's using two
random bits, as we saw, and they send back their answers X and Y.
And to win the game, what they're trying to do is get their answers X and Y so that the XOR of X and Y
equals the product of the questions.
If you think about it, if these devices are classical devices, no quantum mechanics, the best thing
they can do is each reply 0. If they each reply 0, the XOR is 0, which means that three out of
four times it will equal the product of A times B. The product of A times B is 0 except when
they're both 1.
So that's the best you can do classically: you can win three-fourths of the time. Quantumly it's a little
more complicated. But here's the best strategy. They can share an EPR pair, a two-qubit entangled
state, 0, 0 plus 1, 1. And they can make the measurements that I've sort of drawn here, depending on what
questions are asked. And then they just return the results of those measurements.
And it turns out, if you look at this picture for a little bit, you can see that this strategy wins the game with
probability about 85 percent. Cosine squared pi over 8 is the exact number. And so there's a gap there:
quantumly you can win the game with 85 percent probability; classically you can win the game only with 75
percent probability.
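Both numbers quoted here can be checked directly. The sketch below (my own numpy construction, not taken from the talk's slides) enumerates all deterministic classical strategies to get the 3/4 bound, and evaluates the EPR-pair strategy with the standard rotated-basis measurement angles to get cos²(π/8) ≈ 0.85.

```python
import itertools

import numpy as np

# --- Classical: enumerate deterministic strategies x = f(a), y = g(b) ---
best_classical = 0.0
for f in itertools.product([0, 1], repeat=2):      # Alice's answers to a = 0, 1
    for g in itertools.product([0, 1], repeat=2):  # Bob's answers to b = 0, 1
        wins = sum((f[a] ^ g[b]) == (a & b) for a in (0, 1) for b in (0, 1))
        best_classical = max(best_classical, wins / 4)

# --- Quantum: EPR pair plus measurements at the standard CHSH angles ---
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |00> + |11>, normalized

def obs(theta):
    # Spin observable at angle theta in the X-Z plane
    return np.cos(theta) * Z + np.sin(theta) * X

alice = {0: 0.0, 1: np.pi / 2}
bob = {0: np.pi / 4, 1: -np.pi / 4}
win_quantum = 0.0
for a in (0, 1):
    for b in (0, 1):
        E = phi @ np.kron(obs(alice[a]), obs(bob[b])) @ phi  # correlator
        sign = -1 if a & b else 1                  # want x XOR y = a * b
        win_quantum += (1 + sign * E) / 2 / 4      # average over the 4 questions

print(best_classical)           # 0.75
print(round(win_quantum, 4))    # 0.8536, i.e. cos^2(pi/8)
```

The measurement angles here are one standard optimal choice; any rotation of them achieves the same value.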
And so this game has been used quite a lot by experimentalists to test quantum mechanics. Or to test that
you have a quantum mechanical device. So what you do is you run this test a million times and you just
see how many games are won. If they win 800,000 of those games, then there's probably some quantum
mechanics involved. Because classically you only expect to win 750,000, and it's very unlikely that you'll win
800,000. You could get lucky, but it's very unlikely.
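The "very unlikely" here can be made quantitative with a standard tail bound. A quick sketch (my own numbers, using the Hoeffding inequality, which the speaker does not name):

```python
import math

n = 1_000_000          # games played
p_classical = 0.75     # best classical win rate
observed = 800_000     # games won

# Hoeffding bound on the upper tail for n independent games:
#   P(wins >= observed) <= exp(-2 * n * eps^2)
eps = observed / n - p_classical     # 0.05 above the classical mean
log_bound = -2 * n * eps ** 2        # natural log of the bound

print(log_bound)  # about -5000: the probability is at most e^-5000
```

So "you could get lucky" is an understatement: the chance of a classical device winning 800,000 games is astronomically small.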
But I'm going to go much further than just testing the devices are quantum mechanical. As I said, I want to
understand exactly what's going on inside these devices. I want to characterize these systems very well.
And so a first step toward that is the fact that this strategy that wins with 85 percent probability is
essentially unique. There's only one quantum strategy that wins with the cosine squared pi over 8
probability, and it is this one. It has to use an EPR pair.
So that's sort of interesting. I guess. It's sort of trivial. It's not that useful. We want to go a lot further.
And so what I'm going to do is -- here is our main theorem. We're going to try to come up with a way that
these two devices have many entangled -- many qubits worths of entanglement; not just this one EPR pair.
We want to show that they have a very interesting, complicated quantum state.
>>: Your strategy gives you 85 percent. What if we lower to 80 percent?
>> Ben Reichardt: Yeah, that's when I say robustly. So if it's 85 percent minus epsilon, then you're sort of
square root of epsilon close to the unique strategy in some nice metric. If it's -- so that's where epsilon is
pretty small. I don't know if like 85 percent exactly -- I don't know exactly how the constants are. But
yeah. Actually, people have studied that, though. So they do have graphs showing like 80 percent, 81
percent, what fidelity you have to have with the EPR pair, I think.
Right. Okay. So here's the theorem that we're going to show. We're going to consider playing many
games, one after the next, in sequence. So what this means is, you know, you send random bits, A1, B1,
A2, B2, and you get the answers X1, Y1, X2, Y2, [inaudible] and they just have to play each game and win each
game separately.
And here's the conclusion. If the devices win -- so the omega star should be 85 percent or cosine squared pi
over 8. If they win almost the optimal number of the games with high probability, so if they have a strategy
that allows them to win almost the optimal fraction of games, then it must be that at the beginning of a
randomly chosen subsequence of games the strategy for that subsequence of games has to be essentially
ideal. Meaning that -- so here what I've done is I'm playing capital N times little N games, one after the
next. And so you think of it as just -- capital N -- let's see. Each of these is a block of a subsequence of
games. And what I'm saying, the conclusion is that if you choose one of these random blocks, then in that
block of little N games, they have to be using the ideal strategy, which is to say they have little N EPR
pairs, in tensor product, in the first game they measure the first EPR pair, in the second game they measure
the second EPR pair, in the third game they measure the third EPR pair and so on. So it's actually a very
simple strategy that they have to be using. You could imagine them doing almost anything, right? They
could have an arbitrary entangled state to start with. They could apply arbitrary operations. The rules that
they use to play the games could be completely arbitrary. The rule used to play this game could depend
on what happened in the previous game or the game before that. But this theorem says
that this doesn't happen. They play every game independently.
And this is actually a very nice theorem, because it actually does establish that these devices have quite a
lot of entanglement. If they pass this test, then they have to have at least little N bits of entanglement.
And then so this is what we're going to show. And unfortunately it's actually quite inefficient. So
the parameters we have are: capital N is polynomial in little N. It has to be. We have [inaudible] the exponent
of the polynomial is probably, I don't know -- I don't want to say. It's probably like 50 or a thousand or
something. And the error is pretty bad. So if you're winning with probability 85 percent minus
epsilon, the error here in this approximation is actually growing like, again, some crazy polynomial in
epsilon. Yeah.
>>: The little N seems completely arbitrary to me. Why did they need little N EPR pairs? Don't you need
little N times big N EPR pairs to play?
>> Ben Reichardt: Intuitively, yes. But I don't know how to prove that. So little N is --
>>: [inaudible] N in the first set of little N games in the block, where do they get the next EPR pairs from?
>> Ben Reichardt: So if the devices are honest, then they'll have to start with capital N times little N EPR
pairs. Yeah. If they're dishonest, I'm proving that they have to have at least little N.
>>: Okay.
>> Ben Reichardt: Yeah. So one interesting open question, which I think is quite interesting in this area,
it's -- I don't know exactly what the applications of this are, but it's sort of a very fundamental question in
quantum information which is exactly what the right kind of error is and how it scales.
And in particular it would be interesting if we could understand how to run these kind of tests if the noise
rate were constant, which I guess is more physically relevant.
And so, for example, one conjecture you can make is: if you play little N CHSH games and they win 84
percent of those games with high probability, which is something that you can imagine implementing in a
real-world system, then my conjecture is they have to share roughly little N EPR pairs, without this crazy
polynomial loss.
I have no idea how to prove this, but it's a very interesting question.
Okay. So I've said the theorem, but I haven't really motivated it. The idea is just that you have these
devices, and you're trying to establish that they must have some interesting
quantum state underneath, so lots of entanglement. But why do we want -- why is this actual form
of the theorem useful? So let me give one application. There are actually a few of them.
So one application is what's called delegated computation. So in delegated computation you have, say, a
cell phone, and it has to run a computation that's too expensive for it, so it outsources it to an
untrusted server, for example. And you don't trust the server, so you want to make
sure that the result it gives you is correct. And also maybe you don't even want to tell it what you're trying
to compute in the first place.
So classically there are a lot of results. Quantumly, what was known before was
semi-quantum delegation. In this kind of result, you start with some quantum circuit you're trying to
run -- so for some reason your cell phone wants to run a quantum circuit -- and you're outsourcing it to a
quantum server, but you can't be fully classical. You need to be able to prepare random one-qubit quantum
states and send them to the untrusted server.
And then the server does a bunch of processing on these states and sends you classical messages, and you
have to send other classical messages, and that gives you a delegated quantum circuit.
Actually, what we're able to do is come up with fully delegated quantum [inaudible]. So your cell phone,
the weak device, can be fully classical, and you can delegate a quantum circuit to untrusted quantum servers, but
now we need two quantum servers and we need an Einstein between them.
And sort of intuitively, if you're trying to come up with a scheme like this, you might start with this scheme
here and try to translate it into this picture. So you have the classical verifier and Alice together
preparing these one-qubit quantum states somehow, and then teleporting them over to Bob, and then having
Bob act as the quantum server in that picture.
That's sort of a first approach. I don't know how to prove that that works. But I guess it's sort of an
intuition for how you might approach this kind of problem.
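The teleportation step this intuition relies on can be checked numerically. Below is my own numpy sketch of standard one-qubit teleportation over a shared |00⟩+|11⟩ pair (not the paper's protocol): for each of the four Bell-measurement outcomes on the sender's side, Bob's qubit equals the input state after the matching Pauli correction.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)        # shared |00> + |11> pair

rng = np.random.default_rng(2)
q = rng.standard_normal(2) + 1j * rng.standard_normal(2)
q /= np.linalg.norm(q)                           # the qubit to be teleported

state = np.kron(q, epr)                          # qubit order: message, Alice, Bob

# The four Bell states (unnormalized) and the matching Pauli corrections
bells = [np.array([1, 0, 0, 1]), np.array([0, 1, 1, 0]),
         np.array([1, 0, 0, -1]), np.array([0, 1, -1, 0])]
corrections = [I2, X, Z, Z @ X]

ok = True
for bell, corr in zip(bells, corrections):
    b = bell / np.sqrt(2)                        # normalized Bell state
    proj = np.kron(np.outer(b, b.conj()), I2)    # Bell measurement on qubits 0, 1
    post = proj @ state
    post /= np.linalg.norm(post)
    bob = b.conj() @ post.reshape(4, 2)          # strip off the measured pair
    fid = abs(np.vdot(corr @ bob, q))            # overlap up to global phase
    ok = ok and np.isclose(fid, 1.0)

print(ok)  # True: Bob recovers q for every outcome after the right correction
```

The corrections are only Pauli operators, which is why a classical verifier can in principle instruct the correction by sending two classical bits per teleported qubit.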
>>: [inaudible] but instead of sending a pair of quantum states you send perfect [inaudible]?
>> Ben Reichardt: If the server is exponentially powerful, then yes. If it's unknown that the server is a
polynomial time quantum circuit or quantum device, we don't -- I don't know. Yeah, we don't know.
That's a good question, though.
Okay. And sort of the problem with all these kinds of schemes is that it just seems completely
impossible to analyze them rigorously. Because these devices could be doing completely arbitrary things
when you send them these classical messages. You tell them to apply a gate -- maybe apply
this two-qubit gate, because that's what you want them to do next -- and they apply, you know, an arbitrary
unitary on this infinite-dimensional quantum state. And you have no idea what they've just done.
You don't know what their Hilbert spaces are. Again, they're infinite dimensional. You don't know what
their state is. It seems like it's really hard to understand what's going on when you try to give a security
analysis of any of these kinds of schemes.
But the trick is that this theorem for playing multiple sequential CHSH games actually starts to give us
structure on their internal quantum state. They have to be using EPR pairs, and they have to be measuring
them one at a time in a very nice way. And that's sort of our foothold, which lets us bootstrap up
to get complete control over anything that these servers do.
I think actually the interesting thing here is I started with -- I started with this idea that controlling quantum
systems is extremely difficult. Because, you know, they're quantum, they're exponentially complicated,
we're classical, we can only measure them, and so on.
But actually our result turns out to be much stronger than anything that's possible classically. So if you try
to understand a classical device and give it tests and get answers, well, you can verify that it's giving you
the right answers and so on. Sure. But there's no way of knowing exactly what's going on under the hood:
if you can't open the box, it could be doing completely arbitrary things under the hood and
just giving you the right answers in some weird way.
Quantumly it turns out that the only way of passing the CHSH test is if they share EPR pairs and if they're
doing these particular measurements. And the only way of -- okay, we'll have more tests in a second. The
only way of passing these more general tests is if they behave in these very particular ways.
So actually without looking under the hood, we know exactly what's going on in the states of these systems.
So we're getting much stronger control quantumly than is even possible classically, which is sort of
interesting.
Right. Okay. Here I'll give you -- I have a bunch of other slides on this, but here's like the one slide that's
probably way too short that kind of explains how we can bootstrap security based on just the sequential
CHSH games. So what we're going to do is we're going to run one of four protocols at random. So a
quarter of the time, or actually like 99 percent of the time, I think, we'll just play CHSH games with these
two devices, one after the next. And this is what they should be doing if they play honestly: they
should be applying these measurements.
Okay. And then you check that they win 85 percent of the games -- 84 percent of the games. Okay. But then
a quarter of the time what you do is you'll run a different protocol. So with Bob -- let's see. With Alice
you'll keep on playing this CHSH game. So you'll send your random bits, 0s and 1s, and she'll have no
idea -- that device will have no idea that you're not in this picture playing CHSH games. But with Bob
you won't ask him to play CHSH games at all. Instead you'll ask him to make some
interesting measurements. In particular, you'll ask him to prepare certain states on his halves of the
shared EPR pairs.
And the point is Bob now can cheat arbitrarily. You have no idea what Bob's really going to do, but you
know that Alice is still playing the CHSH games. And so you can use her measurement results for those
games to tomographically verify that Bob does what he's supposed to be doing.
And similarly, in the third protocol, we're going to have Bob playing the CHSH games and Alice, we're
going to ask Alice to do some interesting measurements on her halves of the states.
And, again, Bob's results will sort of give us a tomographic test that Alice is behaving honestly. Yeah.
>>: Just a question about terminology. Do you think control is exactly the right terminology for this, given
that Alice and Bob could just arbitrarily decide not to play the game that you have in the protocol?
>> Ben Reichardt: Right. I don't know. So I think -- actually, did I call it control? Here I call it
command. But I don't know. Yeah. Right. The devices can always just decide not to cooperate
and there's nothing you can do. So there's always an assumption that if the devices pass these tests,
then they must. Yeah.
>>: [inaudible].
>> Ben Reichardt: Well, no, I don't think so. I mean, the devices should always be trying to defect in our
analysis. We're assuming that they're trying to be dishonest, both of them, and they just can't.
Sorry. So those are the protocols. So the first protocol is just the games, then we have one protocol where
Alice checks Bob, one protocol where Bob checks Alice. And then finally in the last protocol, we just plug
together this part where Alice is doing interesting things and Bob is checking and this part where Bob is
doing interesting things and Alice is checking. So now we put this and this together. So Alice and Bob are
both doing interesting things, but they have no idea that the other person isn't checking them, so they have
to keep on playing, just as they were in these two protocols.
And finally now they're both doing interesting things. They have an interesting state. They have these
EPR pairs. That's a pretty interesting state. So they can do really interesting computations. You can do an
arbitrary quantum circuit. [inaudible] is a trick. So this crazy picture here actually implements this very
simple circuit here. Sorry. You can -- you might be able to see that. It will take like ten minutes.
>>: I had a quick question. So from [inaudible] perspective, the information sets are such that
[inaudible] distinguish between A and C; that they can distinguish [inaudible].
>> Ben Reichardt: E and D. That's right.
>>: Whereas Alice, it's A and B are indistinguishable, C and D are indistinguishable.
>> Ben Reichardt: Yeah. But they're also tied to each other. Exactly.
>>: [inaudible].
>> Ben Reichardt: Yeah, that's actually very easy. I mean, up to -- like, they're going to know the size
of the computation unless you try to, you know, blind it somehow, like buffer it or something.
But, yeah, essentially you just -- so in computation by teleportation, you just need to ask one of the players
to prepare these resource states, like Hadamard gates and CNOT gates and whatnot. And the other player
sort of links them up, but they have no idea how they're being linked up. So they know how many gates
the circuit has, but they have no idea how they're wired up at all.
>>: [inaudible].
>> Ben Reichardt: Oh, that's [inaudible].
>>: Oh, I sort of was asking [inaudible].
>> Ben Reichardt: Yeah, I don't know how you could stop that, right? Because after the fact they know as
much as you know. Yeah. All right.
All right. So that's sort of how this starts to be useful. So maybe -- how long do I have?
>> Krysta Svore: 20 minutes.
>> Ben Reichardt: That's good. So maybe, instead of more questions about this, I'll go back to the
theorem and start to explain sort of the difficulties in understanding the theorem and proving the theorem
for just the CHSH games.
Let's see. So the proof actually has three steps. The first step is just basic classical statistics. So you say: if
these devices win 85 percent of the games with high probability, what that means is that if you look at a
random block of polynomially many games, then it must be that in every single one of those games they have to
play with 85 percent winning probability. Which is sort of a sampling thing.
And this is sort of necessary. I mean, for example, these two devices, Alice and Bob, could decide in
advance that they're always going to cheat in game No. 5 -- they're going to return 0 in game No. 5. And
there is nothing you can do about that. You can't force them to use an EPR pair in game No. 5. But
statistically you can avoid that by saying: if I look at a random game or a random block of games, it's
probably missing game No. 5, and in that block they have to be playing very well.
Okay. So now there are three steps. All right. So the first step says that in every game they play they
have an EPR pair that they're using, or a state that's close to an EPR pair, and so there's a
qubit on Alice's side and a qubit on Bob's side sitting inside their Hilbert spaces. And so this is just a
picture. This is supposed to be Alice's Hilbert space, an infinite dimensional space. And these balls are each
supposed to be a qubit. And they sort of overlap because we don't know how the qubits relate to each other
at all.
And, furthermore, there's lots of qubits. So the game 1 qubit is here; in game 2 you have maybe two or four
qubits depending on the output of game 1. In game 3 Alice could use any of I guess four qubits, but I think
it's actually 16, depending on the first two games and so on.
So that's the first step. The proof is establishing that they are using qubits hidden inside their Hilbert
spaces, but we have no idea how these qubits relate to each other, and there's tons of them.
And the second and most interesting step is to start to show a tensor-product structure. So here I showed
that the qubit that Alice uses in game 2 is in tensor product with the qubit that she uses in game 1. So it's
sort of independent of what happened in game 1.
So in the picture they're not overlapping anymore. But again potentially she uses different qubits in
different games depending on what happened in the previous games.
And finally in the sort of the technical step, we showed that actually there's no history dependence. So the
qubit they use in game 2 does not depend on what happened in game 1; game 3 is not dependent on game 2
or game 1.
But the most interesting step is actually showing that these qubits don't overlap each other; that they're in
tensor product. And I should just say that this kind of picture, where qubits are balls and overlapping means
they're not in tensor product, is completely misleading. You should not think
too closely about what this picture means. I mean, if you have an [inaudible] Hilbert space, a qubit is not a
ball. A qubit is really -- what is it? You know, you have to choose a basis and say this is my qubit. So I
guess it's like a hyperplane, which is like the Pauli X measurement, and another hyperplane, which is the Pauli Z
measurement, and there should be dihedral angles of 45 degrees between these planes everywhere.
So I don't like balls at all. And if you have two qubits, they don't overlap; you just have these hyperplanes
sitting around in different directions.
But it's still nice to think about it sometimes.
So it's hard to appreciate how many things can go wrong with this kind of theorem. Pretty much anything
you can imagine.
So, like, first of all, we don't know that they are playing using exact EPR pairs. There's no way of establishing
that these devices use exact EPR pairs, because all we have to test them with is statistics. So we can only say
that they're using something close to EPR pairs. And this is always a problem, because if these are [inaudible]
EPR pairs, then the errors can be sort of arbitrary, and they can overlap each other in small ways, and the errors
can start to blow up very quickly.
So here's kind of a picture of games 1 through 7 that each have slightly overlapping qubits, and the problem
you're worried about is that game 8 uses a qubit that completely overlaps game 1. Because somehow
the errors have accumulated over the seven or eight games so that they're not in tensor product, or even close
to being in tensor product, anymore.
So, I mean, it's fairly easy to show that if you just have two games one after the next, they have to be
almost a tensor product. But the question is how far you can push that kind of statement.
So here's one way that errors accumulate. So I've drawn four qubits. I'm going to use five -- let's see.
Yeah. So I'm going to play five CHSH games using five EPR pairs, so one, two, three, four, five. But
actually in game 1 I'm not going to use just this first EPR pair; I'm going to use most of the first EPR pair and a
small fraction of the last EPR pair. In game 2 I'm going to use most of the second one and a small
fraction of the last one. In game 3 -- and so on.
And so by the time I get to game 4, I've accumulated enough error over here that maybe this qubit has moved all
the way into some other position, like back into this position. Because that's just
the direction the errors are going. And so in game 5 I actually measure this qubit again. A second
time I measure the same qubit, because the EPR pair has moved into that position.
And so this is kind of a non-tensor product strategy. It's not the ideal strategy. So it would be a
counterexample to our theorem if this were allowed.
And you ask, how can this happen? Well, first of all, the errors are adding up. And there are polynomially
many games in our actual system, not five. But it's even worse than that, because every time you make a
measurement, any error gets renormalized. When you measure a quantum state, you take your state psi,
you measure, you get Pi psi over some renormalization factor. If there was an error, then the error is also
blown up. And so this is another problem.
And there's lots of other reasons why errors accumulate. This is just a first example.
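Here is a toy illustration (my own example, not from the talk) of the renormalization point: conditioning on a low-probability measurement outcome can blow a small error in the state up to a maximal one.

```python
import numpy as np

eps = 0.01
# Intended state |0>, with a small eps amplitude leaked onto |1>
state = np.array([np.sqrt(1 - eps ** 2), eps])

# Condition on the unlikely outcome |1>: project and renormalize
P1 = np.diag([0.0, 1.0])
post = P1 @ state
post /= np.linalg.norm(post)

print(abs(state[1]))  # 0.01  -- the error amplitude before measuring
print(abs(post[1]))   # 1.0   -- after renormalization the state is all error
```

Of course this particular outcome occurs only with probability eps², but over polynomially many adaptive measurements these conditional blow-ups are exactly what the proof has to control.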
So let me sort of give you -- I don't think I'll go into any detail, really, but let me just give you very
high-level intuition for why this might work and how the proof works.
And actually the idea is that Alice's Hilbert space, we know nothing about Alice's Hilbert space. Bob's
Hilbert space, we know nothing about that. But the one thing we do know is that Alice and Bob are in
tensor product with each other.
And then that is sort of the key thing. So we're going to use the tensor-product structure between Alice and
Bob to find -- to construct a tensor product's structure, substructure, within Alice's Hilbert space and within
Bob's Hilbert space.
And so the trick is -- we have two tricks that we're going to use. These are both easy to
verify. The first one is: if you take an EPR pair and you act on the first half, that can just as well be
done on the second half. So this is an identity for [inaudible] 2-by-2 matrix. That's the identity 2-by-2
matrix. When you multiply this out, this is true; you get M transpose on the right.
So an operation on Alice's side is the same as an operation on Bob's side, transposed.
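This "fact 1" identity, (M ⊗ I)|EPR⟩ = (I ⊗ Mᵀ)|EPR⟩, is easy to verify numerically; the check below is my own (numpy) verification rather than the slide's algebra, and it holds for an arbitrary 2-by-2 matrix M, not just unitaries.

```python
import numpy as np

rng = np.random.default_rng(0)
# An arbitrary complex 2x2 matrix M
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |00> + |11>, normalized

left = np.kron(M, np.eye(2)) @ epr          # M acting on Alice's half
right = np.kron(np.eye(2), M.T) @ epr       # M transpose acting on Bob's half

print(np.allclose(left, right))  # True
```

Note it is the plain transpose, not the conjugate transpose, that appears on Bob's side.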
The second fact is that quantum mechanics is local. So anything Alice does actually can't affect what Bob
has, in expectation. If you don't condition on Alice's measurement result, then Bob has no idea what Alice
is doing. You can't signal.
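This "fact 2" (no signaling) can also be checked numerically. The sketch below is my own construction: for a random two-qubit state, Bob's reduced density matrix is unchanged when Alice measures in an arbitrary basis, provided we average over her outcomes rather than condition on one.

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
# Density matrix with indices (a, b, a', b') for Alice/Bob bra and ket
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

def bob_state(r):
    # Partial trace over Alice's indices (axes 0 and 2)
    return np.trace(r, axis1=0, axis2=2)

# Alice measures in a random orthonormal basis (columns of a unitary U)
U = np.linalg.qr(rng.standard_normal((2, 2))
                 + 1j * rng.standard_normal((2, 2)))[0]

# Unconditioned post-measurement state: sum over Alice's outcomes
after = np.zeros_like(rho)
for k in range(2):
    P = np.outer(U[:, k], U[:, k].conj())   # projector onto outcome k
    K = np.kron(P, np.eye(2))               # acts only on Alice's side
    after += (K @ rho.reshape(4, 4) @ K.conj().T).reshape(2, 2, 2, 2)

print(np.allclose(bob_state(rho), bob_state(after)))  # True
```

Conditioning on a particular outcome would of course change Bob's state; it is only the average over outcomes that Alice cannot influence, which is exactly what the proof uses.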
All right. So how are we going to use this? Let's see. So we're playing N games. And let me just try to
show that game N is sort of independent of game 1. And game N is in tensor product with game 1.
So, first of all, they play the first game, and that collapses the first EPR pair. So now this is like a measured
qubit. Then they play games 2, 3, up through N minus 1. And then they play game N.
And we're worried about a situation where game N has somehow wrapped around and is using the same
qubit again. Okay. So what we're going to do is use fact 1 to pull these games, to pull Alice's
operations in games 2 through N minus 1, over to Bob's side.
What that means is that by fact 2 this qubit here cannot be affected by games 2 through N minus 1, because
anything done on Bob's side cannot affect Alice's state, so cannot affect this qubit.
And so what that means is that after N minus 1 games this qubit is still collapsed. It's still whatever it was
after the first game, like at |0> or at |+> or something. And so, therefore, when we play game N, this qubit is
just like [inaudible] or something. It's completely useless in the CHSH game; it's not an entangled qubit.
And so therefore game N has to be using a fresh qubit in some other direction.
That's the idea.
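The claim that a collapsed qubit is useless in the CHSH game can be checked with the standard optimal measurements (a sketch, not the talk's own calculation): Alice measures Z or X, Bob measures (Z ± X)/√2, and the winning probability is 1/2 + S/8, where S is the CHSH correlation sum.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
s2 = np.sqrt(2.0)

# Standard optimal CHSH observables: Alice uses Z, X; Bob uses (Z +/- X)/sqrt(2).
A = [Z, X]
B = [(Z + X) / s2, (Z - X) / s2]

def chsh_win(psi):
    """Winning probability of the CHSH game (win iff a XOR b = x AND y)."""
    corr = lambda Ax, By: np.real(psi.conj() @ np.kron(Ax, By) @ psi)
    S = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
    return 0.5 + S / 8.0

epr = np.array([1.0, 0.0, 0.0, 1.0]) / s2        # fresh entangled pair
collapsed = np.array([1.0, 0.0, 0.0, 0.0])       # already-measured |0>|0>

print(chsh_win(epr))        # ~0.8536, above the classical bound 0.75
print(chsh_win(collapsed))  # ~0.6768: no entanglement, can't beat 0.75
```

With the fresh EPR pair the strategy wins with probability cos²(π/8) ≈ 0.854; with the collapsed (product) state no strategy can exceed the classical bound 3/4, which is why game N must be drawing on a fresh qubit.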
And so that shows that game N has to be in tensor product: the qubit used in game N has to be in tensor
product with the qubit used in game 1, even though a bunch of stuff happened in the middle and lots of
errors accumulated.
That's sort of the idea of how the whole proof works. Obviously you have to be more careful because we're
trying to show that everything's in tensor product, not just N with 1. But this is sort of the biggest idea.
Let's see. All right. So I have a few slides here which sort of show explicitly how to construct the
tensor-product structure within Alice's Hilbert space, and show how that's done. But I don't know if
that's -- I'm sorry. I don't know if that's so interesting, unless there are questions about it.
That's interesting to me.
[laughter].
>> Ben Reichardt: Let me just instead finish up or ask if there are any questions.
>>: Let me just ask a quick question.
>> Ben Reichardt: Yeah.
>>: So -- so I'm trying to see if I can come up with [inaudible] showing your result. Suppose [inaudible]
little N [inaudible] I know that every one, conditioned on the previous ones, is some tiny epsilon away
from an object. Can I just say that overall they are N times epsilon [inaudible]?
>> Ben Reichardt: No, I don't think so.
>>: [inaudible].
>> Ben Reichardt: No, I mean, it's -- yeah, because there's no reason errors should just add up. Errors can
blow up much faster than that.
>>: So if it doesn't [inaudible] is there any [inaudible]?
>> Ben Reichardt: I can trivially like -- let's see. Yes, there is some bound, but it might be like
exponential or something like that. Yeah. I mean, well, there's one exponential just from renormalization,
and there's at least another exponential, which I guess probably comes from this second point which I
erased. But yeah.
>>: So that [inaudible] function, is it constructive, meaning that it [inaudible] some explicit cheating
strategy that would satisfy that condition of [inaudible]?
>> Ben Reichardt: Yeah, I think this is a very interesting open question as well, because coming up with
counterexamples to these kinds of theorems is very hard, right? You have to come up with some very
interesting quantum state that's highly [inaudible], that's not a tensor-product state, but that lets you
cheat in some interesting way. I don't know how to do this. I wish I had better ways of coming up with
counterexamples. That would be just as interesting as the theorem.
>>: [inaudible].
>> Ben Reichardt: Yeah.
>>: All right.
>> Krysta Svore: Any more questions? [inaudible] let's thank Ben.
[applause]
>> Ben Reichardt: Thanks.