>> Krysta Svore: Today we have the privilege of having Rod van Meter here with us. He is a professor at
Keio University. He's been visiting MSR all week, and today he's going to talk about internet-scale quantum repeater networks. So, Rod.
>> Rodney van Meter: Thank you. So I have a tendency to kind of wander around as I talk so I will be
here, there and everywhere. We've had a very good week here at Microsoft Research, I think, because it's
been a good learning experience for all of us, and I hope you guys have gotten something out of it as well.
We talked mostly this week about error correction, architecture and compilers, so I kind of want to end the
week on a little bit of a different topic. The other big area that I've been funded to do research on for the
last four years, five years is quantum repeater networks and in particular my interest, my background being
as a systems person rather than a quantum physicist, is in how we are actually going to scale these things
up. Between the theorists at the top and the experimentalists at the bottom there is this really huge gap, and the stuff that's in the middle is architecture and engineering, and that is the stuff that the physicists tend to sort of write as a lemma in their papers: somebody will take care of this. Well, I am the guy who wants to take care of this. So we've got a bunch of interesting results, I think, from various things. The general
concept I want to talk about, the overall architecture I want to talk about today is the thing we call recursive
quantum repeater networks, and I'll give you sort of a preview of it. If you've got a large network where each of these nodes is a quantum repeater and between them you've got a set of links, it's actually a collection of networks. It is a network of networks; that's what we mean when we say internetwork, in fact. So you'll have different sorts of concepts. You'll have transit networks. You'll have some network
where your request originates. You'll have some network where it actually terminates. In particular, what
we're talking about is recursion in networks, each of these individual nodes here can in fact represent a
network itself, so inside of that at the higher level when you are planning your requests and controlling your
overall interaction throughout the system, you can treat an entire network as a single node in the system,
and that's where you get your scalability. So this was work that Clare and a friend of mine named Joe Touch, who's at USC/ISI, the three of us did together about a year and a half ago, and it appeared last year. If I'm boring you, you can just download the paper and take a look at that while we're talking instead of listening to me. Let's see, Joe's background is, he's a classical networking guy as well, and he's actually the guy who came up with classical recursive networks, which I'll talk about a little bit later. So overall, my group, we call the group AQUA, which stands for Advancing Quantum Architecture, and we work more or less
in five areas. The overall paradigm of what we're working on is called distributed quantum computation
because I believe that systems, individual systems are not going to scale to the levels that we really want in
order to actually solve what you'd call, I suppose, commercially relevant problems. We are interested in
the design of these large-scale distributed systems. That includes things like, we've actually done some
design of individual devices which I talked a little bit about informally the other day; some of these are like
nano photonic devices and things like that. A lot of this work is done in collaboration with guys at Stanford.
We've done work on workloads and by this I mean taking applications and figuring how well they will run on
a given architecture as opposed to actually defining or creating new quantum algorithms themselves.
Principles, meaning things like understanding how the principle of locality from classical systems actually
applies to quantum systems. Designing things like the quantum multicomputer, designing layered
architectures, so applying what we classical architects think of as standard engineering practice, figuring
out how that's going to affect the overall architecture of quantum systems. And finally, some software tools
and you'll see a little bit about some of those over the course of the day today. So by the way, let's see,
we're scheduled on the, the schedule is written until noon. What time do you actually want to stop?
>> Krysta Svore: I'm not sure how long it was set for. I'll have to check.
>> Rodney van Meter: I should've asked this before we started.
>>: [Inaudible].
>> Rodney van Meter: Let's see, networks, so what are quantum networks actually useful for? And by the
way, feel free to make this interactive. Feel free to interrupt me and ask questions. I actually like to make
the talk interactive. So what are quantum networks and what do we need them for? Well, the first sort of thing that obviously comes to mind is we're going to do distributed numerical computation, and here you're talking about basically more computational power and sheer scale that comes from actually combining systems, and this is more or less the original ARPANET vision of distributed shared computing resources. In this sense we're talking about quantum multicomputers, maybe distributed quantum databases, things like
that, and at this level we're probably talking primarily about quantum system area networks and local area
networks, rather than wide area networks; although the focus, the real focus of today's talk is about the
larger scale networks. For wide area networks, for example, one of the things that might actually be done
in the form of what you would call numerical computation is a form of quantum Byzantine agreement which
has been developed in theory and in fact was actually demonstrated experimentally a few years ago as
well. The goal of something like that is to accomplish a task in fewer rounds of communication in the quantum sense than can be done classically. Overall, we expect that there will be a lot more numerical uses for Bell pairs, GHZ states, W states, and we would like to see more development of these sorts of
things in the overall literature for these distributed uses of things. One thing that's been talked about, and you guys probably saw it, it showed up in Science a few weeks ago, was blind quantum computation. This was a good experiment. The theory was developed several years ago. Experimentally it was
demonstrated recently using an optical system; they divided the system into a quantum server and a quantum client, and in the systems sense, where this would fit in is essentially as a computing platform from our point of view. It's not a specific algorithm, but it does fit into the overall structure of networks as we are talking about it. Let's see. Oh, and this assumes measurement-based quantum computing as a model. That's built fairly deeply into the concept of blind quantum computation. Survey says?
>>: It's an unmeasured quantity.
>> Rodney van Meter: What's that?
>>: It's an unmeasured quantity. The one next to us is free. Ours won't talk to us. [Laughter]. Let's
assume that we are okay.
>> Rodney van Meter: Okay. Let's see. So the flipside of the coin, those first couple of things I talked
about was distributing numerical quantum computation. The other side of the equation is actually using quantum effects, once you can build distributed quantum states, for executing tasks that you wouldn't necessarily consider to be computational tasks; things like sensor networks, quantum sensor networks or the like would qualify in this area. Quantum key distribution would certainly qualify in this sense, because effectively what you're doing is measuring the presence or absence of an eavesdropper. And Shota Nagayama, who is sitting in the back, his bachelor's
thesis was actually figuring out how to take the keys generated by quantum key devices and integrate
those into the use of IPsec for actually keying an IPsec tunnel. So following on from the kind of work that
was done by the BBN guys including Chip Elliott, they sort of just hacked through the protocol. We decided
we wanted a slightly more engineered protocol, and we've written an internet draft for this, but we haven't gotten it to RFC status yet. That would be one example. Another one that came out last year that I was
really thrilled with was Daniel Gottesman showed how you can take distributed Bell pairs and use those to
improve the precision of measurement of optical telescopes, so doing effectively interferometry where your
reference standard is Bell pairs rather than some sort of distributed classical signal in that sense.
Truthfully, this is a fabulous thing and I think it's a really wonderful thought experiment, and I actually talked
to Daniel about it, and in order to pull this off, you're probably going to need a minimum of 10 to the ninth Bell pairs per second, maybe 10 to the tenth or 10 to the eleventh Bell pairs per second. The networks we're talking about here, the experimentalists are saying in five years we might get to 1 Hz, [laughter], so we're still many orders of magnitude away from actually pulling this off yet, but as sort of a goal it's a good place to go, I think. One idea that I had a number of years ago but we haven't actually pursued yet: in the eastern part of this state, whichever way east is, there is LIGO, the gravitational wave observatory.
Could we, could those guys use it? If we created a network that generated entanglement between
Washington and Louisiana where the other observatory is, could they use that to improve either the
resolution or the sensitivity of their detector? We talked to Gerard Milburn about this a little bit and he's like
yeah, it seems feasible. It seems like something we could do. But nobody has actually sat down and done
the work yet. The reason this came up was that, in fact, these guys have already implemented, and have in production, nonclassical states of light in their detectors. They use the form of state called a squeezed quantum state. And I don't know that anybody has yet figured out how to go between entanglement and squeezed states, but the guy who would probably know would be Akira Furusawa at the University of Tokyo. So, a preview of what I'm going to talk about, or sort of the results here. I want to emphasize that quantum networking is distributed quantum computation. When we think of a network, we think of transmitting data from one place to another as sort of a passive activity. In fact, in quantum networking this is not just forwarding of data; the process of creating these distributed states is in fact a distributed quantum computation itself. I really want to emphasize that. The
classical principles for how we design classical networks, my sort of way of working, my approach to
solving problems is to figure out what known classical engineering principles we can apply to the quantum
system, figuring out how things change in that sense. And we've been actually pleased to see that largely
the classical networking principles apply. We'll go through that in some detail in a minute. That includes
things like layering and multiplexing, resource management, finite state machine based design of the
protocols, all of the usual approaches to things. Now this one comes from our collaborator Joe Touch. Joe
believes very, very strongly that recursion is a fundamental property of networks. It's not something that is
sort of tacked on. It is fundamental to the behavior of the networks themselves. This is particularly
valuable for internetworking between networks. It's part of how you actually get to scale in large-scale
systems. When I saw Joe's work on this I realized that it applied very well to what we want to do in
designing quantum networks, and so we've adopted the concept pretty much wholesale. So, outline, what
we will talk about today. I'm not sure if I really need to go through the first part, talking about networking
and what's so hard about it. Parts of this talk were engineered for quantum folks. Let's see, we've got one
hard-core quantum guy here. I don't know about you. Everybody else here probably has strong
background in systems, so we're actually probably pretty much okay in that sense. Talk very briefly about
four types of quantum repeaters, quantum repeater network architectures and then go in particular into the
results that I really want to show, which are about routing and multiplexing in networks, and then the discussion of the actual recursive quantum network architecture. Networking, what's so hard about it? We
can sort of skim through this part of it I think. Naming, resource management, heterogeneity in both
technology and resources and operations. This is one that would never even occur to the physicists: like, well, you know, Microsoft owns one network, Google owns another network, Level 3 owns another network, Cable & Wireless owns another network, and they don't really want to talk to each other. They
don't want to tell each other more than they have to, in particular about the internals of their networks. So
we need a system that's actually going to accommodate these types of things. Dealing with out of date
information is of course a big deal in networks, and so is sheer scale. Naming: what is it that actually has to be named in a network? Well, nodes, networks, processes, users, memory locations. The request itself is a
particular issue in quantum networks that differs from classical networks and you'll see why in a few
minutes. And things like the scope of the name will also matter. The semantics of the name can vary in different types of networks, and that means that you're going to have to have some sort of resolution mechanism that translates from one namespace to another namespace; ARP and DNS would be examples of such mechanisms. Resource management: avoiding overcommitment of your resources is something we all
want to achieve in classical networks. It's going to be different in quantum networks in some ways, but we
want to do that while making sure that the overall network is in fact efficient. We need to balance fairness, minimizing the average wait while maximizing the throughput of the system, for competing requests on a network. Enforcing real-time communications is actually difficult without some sort of hard reservation mechanism, meaning circuit switching, IntServ as opposed to DiffServ, something like that. And
in quantum networks because the quantum information itself is so fragile, in fact, that real-time component
actually figures pretty strongly into the overall system. Heterogeneity, the physical technology, of course
bandwidth, pulse rate, quality of the channel, everything else all varies. This can be also whether it's single
pulses or strong laser pulses that you're actually using as your physical mechanism for actually doing your
entanglement protocol there as well. Heterogeneity in your architecture: whether you're dealing with packets or circuit switching, what have you. Resource and network conditions: your link length, your buffer
memory, all of that sort of stuff varies. Management and operations and autonomy, which goes back to what I said about Google, Microsoft, Cable & Wireless, everybody: they want to have autonomous operational control over their entire network. Dealing with out-of-date information: dynamic connection state, dynamic network state, the topology of the network itself, which links are up and which are
down, right, you don't want to track the entire thing, that sort of stuff. Sheer scale, billions of nodes, millions
of networks, individual nodes can't and shouldn't track the entire state of the entire network. It's just not
feasible for something that's on the scale of the internet. The way we generally solve this in classical
networks is by applying a form of hierarchy. The thing that's most obvious in that sense is the difference between the interior gateway protocols, used internally, and the exterior gateway protocol at the global level, BGP, the Border Gateway Protocol, across the entire internet. The challenge of doing this is
maintaining efficiency and robustness in the overall system. Joe would argue that the right approach to doing this is actually applying recursion to the system as a whole, because, in fact, what we have seen is that the ARPANET was originally one network, and then it went to two layers, and then you could argue that it's effectively three layers now. You've got the global BGP layer interconnecting autonomous systems.
Beneath that you have individual organizational wide area networks and then underneath that you've got a
set of local area networks. If we apply the same set of engineering principles at each of these layers we
could build a strong recursive architecture which shares a lot of knowledge and engineering efforts so that
we don't have to reimplement things separately at each layer. That's everything about classical networking,
but you guys all knew that. There are four types of quantum repeater architectures. I'm not going to go
into the details of each of these types. You can get these from the papers of course. Purify and swap
architecture was originated by Wolfgang Dür and Hans Briegel and others, going back to about 1998. Various people have made lots of contributions to this. The group that I work with, Thaddeus Ladd, Bill Munro, Kae Nemoto, have done their share of work on this. Others include, let's see, the group of Misha Lukin at Harvard, which is very strong both experimentally and theoretically on a lot of this work. There are some parts of their work that I really think are excellent and other parts where I'm not sure that their conclusions are justified by the data that's presented in some cases. So there are good things and bad
things about it. CSS-encoded systems, where instead of doing purification, which is a very simple form of error correction, you are literally doing full-scale quantum error correcting codes on top of the physical entanglement mechanism. Liang Jiang, who at the time was a student of Misha Lukin's at Harvard and is now post-docking at Caltech, has done some good work on that. Bill and Kae and I are co-authors on that particular paper of Jiang's. Surface code quantum computation: so Austin Fowler, whom we all know as
the primary apostle of surface code, the man who is figuring out how to make it all work, realized at some
point that this could actually be applied to communication systems as well. It could be applied to repeater
architecture as well as actual computational architecture, and again, Bill and Kae and I are co-authors on
this as Austin developed this out into a repeater architecture. These architectures have interesting
differences in how much synchronous communication is required across the entire network architecture. It has a big impact on the overall behavior of your system. And the last one would be the quasi-async system of Bill Munro, which he published in Nature Photonics a couple of years ago. I can't claim credit, I wasn't involved in that one, but there are some very interesting ideas in it overall. Let's see.
>>: What does the word Quasi-async mean there?
>> Rodney van Meter: That's my term for this. You can go, Bill claims the entire architecture is fully
asynchronous and I don't think I really 100% believe that assertion. That's pretty much all the quasi means.
You'll see in the behavior of a lot of the repeater networks, there are a lot of cases where, for example, when you're transmitting a quantum pulse, you send it across the fiber and then you have to wait until you get a response back about whether or not that pulse actually arrived at the far end before you can do something with it. So that would be synchronous behavior. Bill claims that with this new architecture he can do an awful lot of stuff where he sends the pulses and then each node can operate entirely asynchronously and entirely autonomously. There are a couple of places in it where it remains more or less, rather than truly asynchronous, self-clocked might actually be a better term for it, because the nodes in the system essentially receive information from their neighbors, so they don't get to operate totally autonomously and totally asynchronously. There are places where round-trip latencies still wind up showing up, but it's round-trip latencies over a single link rather than end to end, and so the individual nodes then get to behave more independently in that sense. That's a very crude explanation of it, and Bill would certainly take umbrage at anything I said about it.
>>: [Inaudible].
>> Rodney van Meter: Pardon, say that again.
>>: It's kind of local level asynchronization but it's not global asynchronization.
>> Rodney van Meter: Right, in that sense. It's link level, so single hop level sorts of stuff. Most of what
I'm actually talking about, most of the work that we've actually done in terms of simulation is actually on the
purify and swap form of quantum repeaters, so that's mostly what I'm going to talk about today. And what
they need is a little bit of quantum communication, quantum memory, local quantum gates, and this is true
for pretty much all repeater architectures. Lots of decision-making both local and distributed and lots of
classical communication and we'll see how this all comes out. I should probably say the differences
between these, one of the key differences: the original purify and swap architectures were developed assuming a very small number of qubits in each repeater node. The Harvard gang has defined an approach for this that works with two qubits in a repeater node. And they want to do this literally with two physical qubits. You can't run the error correction on two physical qubits. So the purify and swap stuff at that level looks much less resource intensive, by orders of magnitude. The problem, of course, is that because it requires synchronous communication, you now have to have long-lifetime memories, and that gets to be a very difficult problem. So what you would really like to do is take each of those qubits after it arrives at the repeater node, encode it up into a large encoded state so you can keep it for the tens or hundreds of milliseconds that it's going to take for your round-trip messaging, and then code it back down. Doing all of that analysis and figuring out all of the resources involved in this, and the execution time for everything and the error probabilities for everything overall, is a big engineering problem, and although we have ideas, nobody has really done an analysis that is really complete at that level for any of these architectures, at the level we would like to see it, openly. But it means that, so this one looks like it takes a handful of qubits per repeater, this one looks like it takes tens, and this one looks like it takes hundreds. Bill's, it's not entirely clear if it will be a small number. But by the time you apply enough error correction and enough of everything else that's going on in the resource analysis, I think we've done the first-level analysis, but we really need to go to the next level and make it much more detailed, as a community. Let me show you, just to sort of set the idea here.
This is one network link technology, again developed by Bill Munro and Kae Nemoto and Tim Spiller. Bill and Tim were at HP Labs in Bristol at the time, and Kae was in Japan. This is the Qubus technology, and what you do with this one is, rather than sending single photons, you actually use a moderately strong laser pulse which you put into a waveguide of some sort, which is coupled to a qubit at one end and at the other end. The distance here might be millimeters, which is effectively what we were using in the distributed architecture I talked about the other day, or it might be kilometers. The loss in this channel is still limited to a few dB, which is a problem in terms of implementation. What this does is it effectively measures the parity of these two qubits; what you get out as a result when you measure it is in effect the parity of the two qubits, but with a lot of noise, and so you still get relatively low fidelity out of this. Simulations of this are in Thaddeus's paper in the New Journal of Physics, the original idea was in Bill and company's paper in the New Journal of Physics, and this is also what we have used in a lot of our own simulations following on from it. Just showing you again, what you get out of this is effectively this
entangled state here at the end. Now, entanglement swapping, have you guys seen entanglement
swapping before? Entanglement swapping is using teleportation to lengthen the distance of your
entanglement. So we've got three stations in the diagram here. Each of these arrows represents a qubit.
So you run your single hop protocol to create Bell pairs between station zero and one and between station
one and two. And then you run what's called entanglement swapping using what's called Bell state
measurement between the two of these. You measure which of the Bell states you think that pair of qubits
is in, and the result of that is you lose your local entanglement and you are left with long-distance
entanglement. The thing is, the results of this Bell state measurement have to be communicated to the opposite ends; this is why you don't violate relativity when you're actually doing this. A bigger problem is that fidelity decreases when you do this, which means that if you started with reasonable-fidelity entanglement, after you do this you're going to have lower fidelity. That might be okay over two hops; it's going to be a problem if you're talking about a thousand hops.
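Just to make that decay concrete, here is a minimal sketch, assuming a simplified Werner-state model of the pairs; this is my illustration, not a calculation from any of the papers mentioned here:

```python
def swap_fidelity(f1, f2):
    """Fidelity after entanglement swapping of two Werner-state pairs.

    Simplified model: a Werner state with fidelity F has parameter
    w = (4F - 1) / 3; swapping multiplies the parameters, and the
    result is F' = (3 * w1 * w2 + 1) / 4.
    """
    w1 = (4 * f1 - 1) / 3
    w2 = (4 * f2 - 1) / 3
    return (3 * w1 * w2 + 1) / 4

# Chaining ten 0.95-fidelity links this way already drops you to
# about 0.63, heading toward the 0.25 of a useless pair.
f = 0.95
for _ in range(9):  # nine swaps join ten links
    f = swap_fidelity(f, 0.95)
```

That falloff is exactly why purification has to be interleaved with the swapping.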
This is the nested purify and swap architecture that was designed originally by Dür and Briegel. If you go back and you read those original papers, as a systems architect it becomes a little bit hard to tease apart where the entanglement swapping is and where the purification layer is in the overall system. They've got it essentially as one integrated sort of system; the protocols are sort of bound together. But what you do in this nested approach is, if you've got four hops, you do entanglement swapping at station one, which connects to zero and two, and at station three, which connects to stations two and four, and then you repeat.
You wind up freeing those qubits in the middle, and you end up with the longer end-to-end distance. Much of the original analysis was done assuming a power-of-two number of hops; this approach obviously works better when you've got 4, 8, 16, 32 hops. So, purification. We'll start with two entangled pairs: one entangled pair held between stations zero and two, and another one also held between zero and two. Perform some local operations at each end between these two; a local operation basically amounts to a CNOT followed by a measurement. You measure out two of these qubits here and you get some classical results. If it all works, the pair you're left with has higher fidelity. But again, your results have to be communicated, and the usual protocol means you actually need two-way communication, or two one-way messages. That's sort of the basic idea. Those are the real concepts you need to understand this.
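To give a feel for what a round of purification buys you, here is a sketch under the same simplified Werner-state assumption, using the standard BBPSSW-style recurrence; the numbers are mine, not from the simulations shown later:

```python
def purify(f):
    """One purification round on two Werner pairs of fidelity f.

    Returns (success_probability, output_fidelity) under the BBPSSW
    recurrence; real protocols track full density matrices, so treat
    this as a back-of-the-envelope model.
    """
    bad = (1 - f) / 3
    p_success = f**2 + 2 * f * bad + 5 * bad**2
    f_out = (f**2 + bad**2) / p_success
    return p_success, f_out

# e.g. reaching the 0.98 target fidelity used later in the talk:
f, rounds = 0.90, 0
while f < 0.98:
    _, f = purify(f)
    rounds += 1   # takes 5 rounds starting from 0.90
```

Each extra round consumes pairs and round trips, which is the origin of the stairstep behavior in the throughput curves later on.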
I looked at this and said, well, what you really need is a protocol stack; you really need to understand how this overall thing fits together. So we designed this: a physical entanglement layer at the bottom; above that, your direct messages saying whether or not your entanglement operations actually worked, entanglement control; and then above that, purification control and entanglement swapping control. Interestingly, these wind up being applied recursively over one hop, two hops, four hops, eight hops, which is one of the key reasons why we think the recursive network architecture actually works well for this. And then at the top you'll have some kind of
application protocol, which might be your quantum key distribution or whatever it is. Now, interestingly, this is done at distance one; both of these are only over a single physical hop. This part is repeated at different distances, and then again you'll need some stuff end-to-end. Out of this entire stack
the only part that's quantum is this layer here at the bottom. Everything else is classical messages, which
means that everything else is messaging and distributive computation, the kind of things the people in this
room are experts in rather than the experimental physicist who are actually trying to implement some of this
stuff. So that's sort of where the division of work would be in terms of a community. For the four hop case, I think you'll see how this interacts: you'll wind up with purification control and entanglement swapping control happening over one hop and over two hops, as well as end-to-end, through the course of this.
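As a rough sketch of that layering, here is one way to enumerate the stack for a given path length; the layer names follow the talk, but the function itself is purely illustrative:

```python
def build_stack(hops):
    """Enumerate the repeater protocol stack for a path of `hops` hops.

    Physical entanglement and entanglement control live at one hop;
    purification and swapping control recur over doubling distances,
    with end-to-end purification and the application on top.
    """
    layers = ["Physical Entanglement (1 hop, the only quantum layer)",
              "Entanglement Control (1 hop acknowledgments)"]
    d = 1
    while d < hops:
        layers.append(f"Purification Control ({d} hop)")
        layers.append(f"Entanglement Swapping Control ({d} -> {2 * d} hops)")
        d *= 2
    layers.append(f"Purification Control ({hops} hops, end to end)")
    layers.append("Application (e.g. QKD)")
    return layers

# build_stack(4) shows control recurring over 1 and 2 hops, then
# end to end, matching the four hop example above.
```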
That's all at the relatively abstract level. Let's look at this where you've got multiple qubits involved in the
system and you'll start to see how the complexity of the system actually grows as you're doing this. You're
sending a bunch of pulses. You'll wind up with some of your qubits entangled and some of them not as
you're doing this between the various stations. And then what you're going to do, you have to pick out
pairs of these and run your purification protocol. One of the places where you have to be careful in
designing the overall system is that this guy and this guy have to make the same decision about which pairs they are going to purify. If this one picks one and two up here and that one instead picks one and five, then your purification operation is not going to work and you're going to have a real problem in your overall distributed system. Now, there are different ways you can achieve that. You need to analyze the set of information that's available at each station and make sure that either they can independently make the same decision, or, if not, that they are going to communicate and share enough information to make that decision. After you do purification, of course, you are left with fewer of the actual entangled pairs.
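One simple way to guarantee that agreement, sketched below, is a deterministic rule over information both stations already share, such as the identifiers of the pairs they jointly hold; this is my illustration of the principle, not the actual rule from our simulations:

```python
def choose_purification_pairs(shared_pair_ids):
    """Pick which pairs to purify together, deterministically.

    Both stations sort the identifiers of the pairs they share and
    pair them off greedily, so two nodes with the same view reach
    the same decision without an extra round of negotiation.
    """
    ordered = sorted(shared_pair_ids)
    return list(zip(ordered[0::2], ordered[1::2]))

# Both ends of the link compute the same answer:
# choose_purification_pairs({1, 2, 5, 7}) == [(1, 2), (5, 7)]
```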
Now, the same sort of thing is going on up there. We get a few entangled pairs; in this case the color went from yellow, which is sort of an intermediate-fidelity pair, to green, being a higher-fidelity Bell pair, and now here in the middle, green is good enough, we said, for doing entanglement swapping. We'll do the entanglement swapping there and you get the longer-distance stuff there. Again, you have to have enough information to make the correct decisions everywhere. Station two in the middle knows enough about these Bell pairs to make a good decision here, but one and three don't know anything about what's going on at two until two tells them. They don't know what decisions two has made, and in particular, one probably doesn't know anything about the connection between two and three. There is an important kind of information hiding principle involved there. And that's pretty much it on the basic operation. So that's sort
of all background. That's with the exception of the protocol stack itself, that's all work that everybody else
has done. Let's see. So let me show you some of our actual results. Some of this was the bachelor's thesis of one of my students, Takahiko Satoh, who is now finishing his Masters degree at the University of Tokyo, and some was by Luciano Aparicio, who was doing his masters degree at [inaudible] but was working with me. Routing. You guys know this. We want to create entanglement between A and B in this
network. How do you decide how to get from A to B? What would you use to decide on how you're going
to get from A to B?
>>: [Inaudible].
>> Rodney van Meter: Yeah yeah, sure, [inaudible]? How about Dijkstra's shortest path first? The
problem is in a quantum network we don't know what the cost metric is. We don't know how to actually
figure out what the best path is. Let's see. Four hop network here. The circle is a pretty good link, the X is a pretty poor link, and the triangle is sort of an intermediate link. The cost for this path would be some function of those link costs; now, what is that? In classical Dijkstra the cost would just be the sum of all of those individual links. With the quantum version we don't actually know, sort of a priori, what the right way is to combine these in order to get some reasonable cost for the overall path. So let's look at the link cost.
These are a set of simulations. This was Qubus interconnections, I think these were over 10 km but with some additional loss thrown in. So the X axis here is dB loss in your channel from one end to the other, and the vertical axis is the throughput of Bell pairs purified to a particular fidelity. In this case all of these Bell pairs have been purified to a fidelity of 0.98, and once that's done, they are delivered and counted as part of our throughput here. So you can see, as loss increases, the blue line here, your throughput is going to decline. In fact, it seems to decline in a stairstep fashion, and this is a result of the number of rounds of purification you wind up doing. So this would be one round of purification, two rounds of purification, three rounds of purification and four, give or take, roughly. The blue curve here is the throughput, and this is a couple of hundred Bell pairs per second. The green curve here is the
number of measurement operations done in the network as a whole, so this is single hop stuff so this is at
the two ends of our single hop. The red line is the number of pulses you put into the fiber. So in classical
networks, and in particular if it's an Ethernet or an OSPF network or something, throughput is usually used as your inverse metric, so one over throughput is the cost for actually using a link in OSPF. The protocol is independent of that, but that's the most common definition in use, and it was defined by Fred Baker, who is a past chair of the IETF. So that sounds good, right? 1 gigabit per second, you know, that's one bit every nanosecond, sounds good.
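For reference, that classical convention looks roughly like this; the reference bandwidth here is the common Cisco default, which is my assumption, not something from the talk:

```python
def ospf_style_cost(throughput_bps, reference_bps=1e8):
    """Classical OSPF-style link cost: inverse throughput, scaled
    by a reference bandwidth (100 Mbps is a common default)."""
    return max(1, round(reference_bps / throughput_bps))

# A 1 Gbps link costs 1; a 10 Mbps link costs 10.
```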
We can't use the direct approach like that in quantum networks, because we don't know a priori what the fidelity of the link is going to be, whether the pulse arrives or not, and once it gets there, whether or not it's useful. So we have to sort of move up a level and find a... yeah, sure, question.
>>: This may be a stupid question.
>> Rodney van Meter: There are no stupid questions. I am, there are no stupid questions in my world.
>>: In traditional communication if I need to communicate a message to you now I need some [inaudible]
for now, whereas, isn't it true that in quantum I can say okay, I will need, some time today, I will need to
communicate a message. I can actually pay for the bandwidth now and communicate it later without
paying at the time it's really communicated. So does that mean that some of the measurement that we
[inaudible] throughput [inaudible] when we talked about traditional, I know that I'm running my application
now so if I don't get this amount of traffic within a second, someone is going to be waiting. It's not so much
true in this case, right?
>> Rodney van Meter: You're absolutely correct and in fact one of the things that comes up particularly as
you talk about scaling up to networks, is figuring out how to map operations on the network to a particular
request, because what you're doing in this kind of quantum network is you're creating very generic end-to-end states. And so, say I'm going to need a hundred Bell pairs between Seattle and LA. The link between Portland and San Francisco, all it's doing is creating Bell pairs, and it doesn't know whether they are to be assigned to my connection between Seattle and LA or your connection between Vancouver and San Luis Obispo.
>>: It's a completely different marketplace [inaudible] send it the house in the middle they will be creating
channels all the time and…
>> Rodney van Meter: Over a single link that's really easy. It's really obvious that the links ought to just run flat out, the individual links ought to run flat out. They ought to be trying to create entanglement all of the time. The question that comes in for things like the entanglement swap is, well, now do I connect point A to
point C or do I connect point A to point D? Which one's the better thing for me to be doing right now? So
yeah, absolutely, you're 100% correct, and that's the kind of problem I want to solve. And if you want to work on it, great, we'll work on it together. [Laughter].
>>: Did you say that there is an additional factor in the quantum [inaudible] because you can't have the
[inaudible] waiting because as soon as you is state its quantum [inaudible].
>>: So what is it, time sensitive? [Laughter]. It depends.
>>: It depends on how much or what type of [inaudible] you've got. [Inaudible] error correction now so you
couldn't [inaudible] hang around and get 10 Bell pairs [inaudible].
>> Rodney van Meter: Yeah, so the service providers will probably charge you to store those qubits in the
meantime.
>>: Is it something that due to the [inaudible] either now or [inaudible] reality?
>>: This is reality because this is why look, the reason why you can't [inaudible] built is because the
[inaudible] opportunity. Quantum memory [inaudible] stocking the and that is [inaudible].
>> Rodney van Meter: Yeah, so we used to batch e-mail via UUCP and Usenet and everything else in the middle of the night, because that's when the networks weren't being used, right? In this marketplace, maybe I'll ask for 100 pairs to be ready by 6 AM or something, right? And then if you don't use them until noon, you have to pay somebody to store them until noon. It's definitely the kind of thing we want
to evaluate. So this is over a single link. What we're looking for here is a metric for our link cost. It looks like in this case, for a single hop, the number of pulses and the number of measurements both look to be sort of reasonable metrics; they go up as throughput goes down. We'll see, though, that one of these actually winds up being a slightly better metric. For the purposes of these simulations we picked out four points on this curve, chosen arbitrarily. In our network there were four types of links, four links of four different qualities. That's
where they are. First set of results. The gray bars are our throughput. So the x-axis here, this is a set of
paths that we actually simulated, what you're looking at here. For example, square, triangle, triangle, triangle, triangle, which is here somewhere; that would be a path with one excellent link and three good links in it. This is the set of things that we did, and across this they are ordered according to throughput. Throughput is plotted here, and then the two measures of work that we use, the number of pulses in the channels and the number of measurement operations; this is across the entire path in that sense. And again, sort of qualitatively, you can see that there is a pretty
good inverse correlation between the two. A little more detail. These are for 40 different paths, the same 40 paths that are in this diagram here, but these covered distances ranging from 1 to 9 hops with different combinations of high and low quality links. The X axis in this case is path cost, calculated using the number of measurements as the link cost for each individual link, and then just summing those together as the path cost. This is inverted, so low-cost paths here and high-cost paths there, and you can see that as the cost increases the throughput declines. The different colors of markers here: these with the red X's are all paths that contained one poor link in the overall system. The green ones were paths whose links were all of the best quality, and you can see the sort of relationship here. They tend to group according to the bottleneck link, which in terms of throughput is the way a classical network actually behaves, so in terms of throughput we are seeing something that looks a great deal like what we would expect in classical systems.
That is reassuring. On the side over here, comparing the total cost, again the same set of things: number of measurements here on this line and total number of pulses in the green triangles, and you can see just from looking at this that, in fact, the measurements are actually going to be a more consistent overall measure of your total cost. They resemble an actual line more closely than the green ones do. Statistics on that are in the paper. So, a strong correlation between the path cost and the work there. Those were for different lengths of paths, from 1 to 9 hops. This is all of the possible four hop paths, all 256 of them, using our four link types, and again you can see really clearly that the throughput is definitely determined by the bottleneck link. And here it's even more clear that the measurement numbers, as opposed to the number of pulses, are the right metric for link cost. This in particular takes into account that measurements are used for Bell state measurement, for the actual entanglement swapping, as well as for the purification, which the number of pulses can't really take into account. So the
results of this chunk of work: this was the first work to really quantitatively evaluate the behavior of heterogeneous paths. Pretty much everybody else said yeah, sooner or later we'll have to do that, but they all assumed homogeneous stuff in their networks. As far as I'm aware we are the first to actually consider path selection in topologically complex networks and how you would choose an appropriate path for something there. Counting the number of measurement operations winds up being a good metric for the total cost to the network, to everybody in the network as a whole, to create this, and so it winds up being something that is well worth evaluating. The throughput of that individual link makes a good cost metric for the link, and the additive path cost actually turns out to work pretty well. In summary, quantum Dijkstra seems to work pretty well.
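Pulling that together, here is a minimal sketch of what "quantum Dijkstra" amounts to once each link carries a measurement-count cost; the graph and the cost numbers are made up purely for illustration:

```python
import heapq

def quantum_dijkstra(graph, src, dst):
    """Standard Dijkstra, with link cost = measurement operations per
    delivered Bell pair and additive path cost, per the result that
    summed measurement counts track end-to-end throughput well.

    graph: dict of node -> list of (neighbor, link_cost).
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Illustrative costs: circle=120, triangle=340, X=1500 measurements.
g = {"A": [("B", 120), ("C", 340)],
     "B": [("D", 1500)],
     "C": [("D", 120)],
     "D": []}
# quantum_dijkstra(g, "A", "D") -> (['A', 'C', 'D'], 460.0)
```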
And I'm sorry Burton isn't here, because he actually said "quantum Dijkstra" the other day when we were talking about that; too bad. [Laughter]. Next chunk of work, I'll show you this one.
We may not actually get into the recursive stuff unless you guys want to stay until 12:30 or something. Let's do the resource management real quick. Remember, this is some sort of distributed computation, and this fragile memory means we've got a real problem here. Does this require some sort of circuit switching? This is another sort of resource evaluation, the kind of work that I like to do. Multiplexing schemes, you guys know this. Let's say we've got a dumbbell network here. We
want one flow to go from A to B and another flow to go from C to D. We can do this using circuit switching,
which means everybody else is going to have to wait, or we can do it using time division multiplexing where
you just sort of flip back and forth between the two for a while. Or we can do buffer space multiplexing and
take half of the memory at the intermediate nodes and say well we will dedicate this to this one, but then
again, it's going to require some sort of reservation mechanism to actually do this. This is, in fact, the network that we simulated for this, and we actually investigated mechanisms where we dynamically adjusted the target fidelity for each of these individual links. In most of the simulations so far we said we'll only use a Bell pair after we've purified it to a fidelity of 0.98, which means over every distance we applied the same kind of thing. In this case we varied this in the middle, because the middle is the bottleneck link, right? So maybe we should find a way to minimize the use of that at the expense of making AE, CE, BF and DF work harder, because in something like the circuit switching scenario, or even the time division multiplexing, those links are going to wind up being idle a bigger fraction of the time. We might as well take advantage of what they've got. And in fact what we did find in the process of doing this was that if you multiplex this correctly, you will wind up with substantially better overall system throughput for multiple connections.
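To pin down the three schemes being compared, here is a toy sketch of how each one allocates the bottleneck; the structure and names are mine, purely for illustration:

```python
from itertools import cycle

def circuit_schedule(flows, n_slots):
    """Circuit switching: one flow owns the whole path until it
    finishes; everybody else waits."""
    return [flows[0]] * n_slots

def tdm_schedule(flows, n_slots):
    """Time division multiplexing: hand the path to each flow in
    turn, one slot at a time."""
    turn = cycle(flows)
    return [next(turn) for _ in range(n_slots)]

def buffer_split(flows, n_qubits):
    """Buffer space multiplexing: statically partition a repeater's
    qubit memory among the flows instead of its time."""
    share, extra = divmod(n_qubits, len(flows))
    return {f: share + (1 if i < extra else 0)
            for i, f in enumerate(flows)}

# Statistical multiplexing, which did best in our simulations, has
# no static schedule at all: pairs are assigned to whichever flow
# can use them as they are produced.
```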
Larger networks: even the dumbbell network is, to the best of my knowledge, the most topologically complex repeater network that anybody has bothered to simulate, even at that simple level. Now, a more complex network. What we want to do is, we've got
half a dozen flows that are running through this 1, 2, 3, 4, 5, I guess, running different places. The blue set
of lines is one flow. The orange is another. And part of what we're looking at here is in this case we've got
some short flows and some longer flows. One of the things we want to know in the process of doing this is
can the network be fair? Will short connections wind up driving the long connections out of business, depending on how you actually do the multiplexing scheme? Let's see. What we found while doing this: this would be peak ideal throughput. Each of these colors represents the throughput of the matching diagram. In this case the entire network is dedicated to just the blue connection; in this one the entire network is dedicated to just the red connection, which is also a long-distance connection here. The shorter ones obviously you see more of. If everybody could get the maximum bandwidth they managed to achieve alone, our network as a whole would achieve this. That would be
some sort of ideal case. When we take just the two pairs, each of these two pairs actually gets greater
than 90% of its own ideal throughput, so the degradation to one connection as a result of the introduction of
a second is less than 10% in this case. In a couple of other cases here, as the total number of flows you
add in increases, of course, each one gets penalized sort of more. One of the things you can see though is
that even for some of the longer distance ones, in this case you can see here, the longer distance one
succeeded in retaining 77% of its throughput, whereas some of the shorter ones actually suffered a little bit more of a hit in their throughput, which is actually a bit of a surprise, counterintuitive. So let's
see, how can we evaluate this? This was for time division multiplexing on those. Same sort of thing for
statistical multiplexing and, in fact, in this case we actually got better overall throughput in the statistical
multiplexing case, which is essentially the behavior of the internet as opposed to a cell phone network. So
statistical multiplexing in this case we actually think works better. And buffer space multiplexing, the scheme where the intermediate nodes split their memory up, turned out to not work quite as well. We actually did evaluate the fairness of these, using Jain's fairness index, to see how the different flows actually competed, and we found, somewhat to our surprise, that all three mechanisms retain very good fairness in that sense, so we're actually pretty happy about that, and we think that statistical multiplexing gives the overall best system throughput. But the biggest question is that memory lifetime issue; that is one of the things that needs continued analysis.
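For reference, Jain's fairness index is simple enough to state in a couple of lines; a quick sketch:

```python
def jain_fairness(throughputs):
    """Jain's fairness index over per-flow throughputs: 1.0 means
    perfectly even shares; 1/n means one flow gets everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# jain_fairness([100, 100, 100]) == 1.0
# jain_fairness([300, 0, 0]) ~= 0.33
```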
>>: [Inaudible]?
>> Rodney van Meter: Pardon?
>>: [Inaudible] X1 [inaudible]?
>> Rodney van Meter: Those are in this equation: throughput, in Bell pairs per second. This part I'm going to just flash through, but I wanted you all to see sort of how we did this. You won't see a finite state machine with state transitions and triggers and all that sort of stuff in work by the physicists, but it is the way that I like to design protocols, in the different layers of this stuff. That is sort of everything about quantum Dijkstra and about multiplexing. Any questions on those two topics? Let's do about 10 minutes on the recursive stuff and then we'll call it a day, how about that? Does that sound unreasonable?
>>: Have to [inaudible].
>> Rodney van Meter: Okay. Thanks. You can get the recursive stuff by looking at the paper.
>>: Or the video.
>> Rodney van Meter: Or the video, yes. So Joe, like I said, insists that recursion is a fundamental network property. It is not just some sort of engineering artifact of the way we build things. It unifies the overall architecture of the system and provides new ways to go about doing tunneling and overlay networks and multilayered networks and all sorts of things. Let's see. I'm just going to flip through
this and skip some of this stuff and go straight to the quantum parts. So the quantum stuff. Here is our
recursive network, that diagram that I showed you at the beginning. Again we've got transit networks,
recursive networks, individual sorts of things involved here. Let's say in this case what we are trying to do is create an entangled pair between a couple of these nodes; actually, each of these pink nodes would be included, and what we're trying to create is one large entangled state between those end nodes of the system. Let's see. And for a simpler example, we'll look at just these half-dozen nodes here. We're going
to start with our request that originates here and what we're asking for is entanglement between node 11,
node 55 and node 77 in this overall architecture here. Now, the way you would do this: we're looking for a state that would result from a circuit like this, where the circles are controlled phase, controlled-Z gates. We want to create this over a distributed system. Now the question is how you are going to go about doing it. One way to do it would be to create this three-qubit state, all of it, here at node 11. This is the guy who is essentially controlling your distributed application. You can create it all here and then teleport two of the qubits from where you are to the far ends of the network over top of Bell pairs. But is that actually the most effective way to do the overall system? Well, it might actually be better if we started with it in the middle and then spread it out. Which of these is more efficient? In order to understand
all of this you have to have routing tables available at each of these guys. They have to have some amount of understanding of the overall system. So here at node 11, it holds a routing table that looks something like this. It knows how to get to network five and how to get to network seven out of this via other networks and systems, sort of a relatively standard routing table sort of approach. The guy in the middle here is going to have a somewhat different routing table, of course. Neither one of them would know about every node in the entire network. So what we're going to do is we're actually going to
rewrite this request. So it will start with this: here is our request. We want to have node 11, node 55 and node 77 all in this entangled state, and the guy who creates this request is going to say, when I'm communicating with you about this request, we'll use the address 1000 to refer to the qubit; this is effectively the memory address, the virtual memory address, for the qubit that's involved in this state. So we are going to take this original request and we're actually going to rewrite it into the set of operations for doing it, where we can divide it up into three teleportation operations, and each of those teleportation operations can then get split out into first the creation of a set of Bell pairs and then the actual combination operation here at the end itself. So we're going to have to have this overall set of states here.
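As a sketch of what that rewriting might look like, here is one possible form; the request format and operation names are hypothetical, just to show the request being split into teleportations, and each teleportation into Bell pair creation plus the final combining step:

```python
def rewrite_request(parties):
    """Rewrite a high-level 'entangle these parties' request into
    lower-level operations, building the state at the origin node.

    parties: e.g. ["11:1000", "55:1000", "77:1000"], where 1000 is
    the virtual memory address named in the request.
    """
    origin, *remotes = parties
    ops = [("create_local_state", origin, len(parties))]
    for r in remotes:
        # each teleportation splits into Bell pair creation plus
        # the combining (measure-and-correct) operation
        ops.append(("create_bell_pair", origin, r))
        ops.append(("teleport_qubit", origin, r))
    return ops

# A node that only knows its own network rewrites just its part and
# leaves the rest as a generic "somebody in the middle" request.
```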
So doing this, why is this valuable? This provides ideal support for internetworking we think, and this will
allow or support hierarchical routing of operations very smoothly and it will unify these software
implementations across multiple levels. In this case, this guy here in the middle knows, when he's doing this: I want to teleport one qubit, network five address 2000, network five address 3000; we want to get that to node 55, address 1000, as it's drawn here in the original network diagram. So that's node 55 out here. The original request from node 11 knows the address and the name of these individual end nodes, but it doesn't know the names of other individual nodes throughout the entire network, so it starts with a generic request that says: somebody here in the middle, connect me to that. And those rewrites can happen in various places throughout the network. So let's see. What's interesting about
this, it essentially unifies the distributed computation with the transmission of data. To make even just Bell pair creation work properly, we need to be able to request teleportation operations, and we have to be able to request purification, which includes CNOT operations, independent operations, or conditional operations dependent on the outcomes of things like measurement operations. So we've already got CNOT and measurement and conditional operations; extending from there to a fully general distributed computing mechanism is actually a very small leap. So what we really want to do is just go ahead and do that extension and create a unified distributed mechanism, which means that this guy at the end can request of nodes in the middle of the network, or of networks in the middle of the network: please create this state for me, and here is a recipe for actually doing it. Too bad
Alex left already, because this diagram is actually from Bill's paper on some of the asynchronous stuff, and this gap here, this timing gap, depends in part on the single hop latency in the network, and that's where some of the quasi-asynchronous behavior comes through. What they said was, one way to create good end-to-end Bell pairs is to start from the middle of the network and then propagate them out by teleportation until you've got end-to-end. It's great in theory. The question is, in a billion node network, how do you find the middle? What is the middle of the internet? What is the middle of the connection between here and Tokyo? We need approaches to doing that, and so here is your network as a whole and here is the middle. All of this can be done ultimately with
one single software module; one of Joe's papers on recursive networks actually includes the pseudocode for doing essentially all of networking as one recursive function. And basically all it ever does is discard, forward to a neighbor, deliver to an upper layer, buffer, or transform a state, so you have to have those sets of operations available. And that's pretty much it.
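Here is a rough sketch of that single recursive function in code; it's a paraphrase of the idea, not Joe's actual pseudocode, and the decide/send hooks on the layer object are hypothetical:

```python
def process(msg, layer):
    """The one network function: at each layer a message is either
    discarded, forwarded to a neighbor, delivered up a layer,
    buffered, or transformed and reprocessed."""
    action, arg = layer.decide(msg)       # hypothetical policy hook
    if action == "discard":
        return
    if action == "forward":
        layer.send(msg, next_hop=arg)     # hand to a neighbor
    elif action == "deliver":
        process(msg, layer.upper())       # recurse up the stack
    elif action == "buffer":
        layer.queue.append(msg)           # hold for later
    elif action == "transform":
        process(arg(msg), layer)          # rewrite, then reprocess
```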
Open problems in networking: well, the biggest problem is that the repeater hardware doesn't really work yet. Large-scale networks have not really been addressed by pretty much anybody up until now. That entanglement swapping that I was talking
about, connecting different sections of the network, that all involves a lot of policy kind of operations, and
we've done some optimization on that, but there is still a good deal more work that needs to be done on
larger scale networks. Part of the driving reason behind Bill's Quasi-async stuff and Jiang's work and
Austin's work is to try to eliminate some of these round-trip communications delays required in the classical
protocols, particularly for purification. There are also one-way purification protocols, which are not as efficient. The goal of all of this is to trade off: do more of the things in the system that are low-cost or essentially free in order to do less of the expensive things. It's those kinds of engineering trade-offs. And the single biggest one, like Clare said, is really memory. We want to avoid buffering qubits as much as we possibly can, and that was really Bill's driving reason for creating the quasi-async stuff, and it's really good. Now
what I want to see is I want to see that put into a more complete end-to-end network architecture using
something like the QRNA. There are still probably plenty of other error management mechanisms that we
can investigate and the resource management models as well, and ultimately, hopefully standardization, so
we can build this once rather than a hundred times. And thanks to the gang of collaborators. There's a list
of some of the papers. And some nice photos of Keio and the gang including our annual beach day.
That's the group. Lucho is the guy who did much of this simulation work, and Satoh did the quantum Dijkstra work. And Joe did the IP section; that's Joe Touch up there with the recursive [inaudible] that's part of his recursive networking stuff. Some of our friends. There is [inaudible], and this is from our beach day last summer, including the [inaudible] that interested Burton. And that's pretty much it. Don't forget: information is physical. That's it. Thanks. [Applause]
Questions? Anything additional?
>>: So using these [inaudible] on the long-distance constant depth teleportation.
>> Rodney van Meter: Long-distance constant depth teleportation, I haven't seen that.
>>: So there are [inaudible] you can do to do [inaudible] constant depth [inaudible] along or long-distance
so seems like [inaudible].
>> Rodney van Meter: So who did that?
>>: I can't think of who it was. I've got the paper in my office. It was from, like, 2009, by a physicist.
>> Rodney van Meter: Jake Taylor?
>>: Jake Taylor.
>> Rodney van Meter: Jake is really…
>>: So he did some of it but a lot of it was done [inaudible] measurement community.
>> Rodney van Meter: I don't recall having seen that particular phrase, but assuming that we're talking
about the same chunk of work, that's like [inaudible] paper or something, right?
>>: [Inaudible]?
>> Rodney van Meter: The paper in the Proceedings of the National Academy of Sciences.
>>: This is embedded in the appendix of one of Jake's papers [inaudible] archive, but we're using it for our
[inaudible] work that [inaudible] last summer we did and we're using [inaudible] the summer that I did
[inaudible] long-distance, or I should say [inaudible] on the network, so it seems in this work in networking
[inaudible].
>> Rodney van Meter: Fanout, in particular, is actually one of the things I mentioned at the beginning: we would like an overall structure for building and using Bell pairs, GHZ states and W states, so that would fit very well into the overall recursive network architecture.
>>: So you can do that.
>> Rodney van Meter: We need a set of requests. Absolutely it could be done in this framework; we have not simulated that as part of this specific work. I said earlier that there were some things about the work from that group that I was a little bit dubious about. I guess we're being recorded, but I'll say it anyway. The single biggest thing I was dubious about was one of Jake's papers; I think it's this PNAS paper. He was trying to determine what the optimal link length is for their particular architecture. He tested across a big parameter space and came back and said, you know, 20 km is the best for this, but I really think in that case the result wound up being driven by the entanglement swapping architecture and not the link-level parameters he thought he was evaluating. So I think the conclusion was more or less predetermined by the way the simulation was set up, rather than actually coming out of it. The hardware architecture that they've done is fine; I just don't think I'd buy all of the systems statements they made about the best way to do some of that stuff.
>>: [Inaudible] constant [inaudible] fanout [inaudible] is that there are a lot of ways of doing [inaudible].
The problem is getting high fidelity [inaudible] fanout if you don't have any way of purifying it you're going to
have to stop [inaudible] incredibly [inaudible] and as soon as you stop sending it through [inaudible] the
time divider [inaudible] actual getting the it's not just [inaudible] because you could do [inaudible] fanout
[inaudible] then you get [inaudible] from one end of the memory to the other.
>> Rodney van Meter: If you're doing this within an integrated system, where the whole classical [inaudible] is what you wanted to get from one end of your computer to the other, or one end of that chip to the other, that's not a problem. But you know, a lot of this winds up being, in a real distributed system, how much does each node know and when, and how do you guarantee that they are making consistent decisions? Some of this stuff does fall out; for some of it you can find a relatively simple algorithm and say, if all of the nodes do exactly this, then it will work. In other cases I don't think you can actually eliminate some of that.
So this is output from one run of our simulator, and you can see that it's building: this is one transmitter, this is the second node, this is the third, the fourth and the fifth. You wind up with gradually building fidelity over the course of all of this activity, and you can see that in a system this complex, making those decisions consistently and well is not going to be trivial. Like I said, finally, after all of this has been running, we have a two-hop link here. This one is waiting for the appearance of another two-hop entangled pair so that it can actually be connected onward. Lucho did this. Let me show you real quick: this was still using the single-chain simulator that Thaddeus and I wrote several years ago. More recently, Lucho's work was to create the actual network simulator that I showed you, and here's what it actually looks like when it's running. This is built on OMNeT++, which is a standard, freely available network simulator package designed for wireless and internet simulations of various sorts, and Lucho wrote plug-ins for it so that we could actually run large-scale quantum networks on top of OMNeT++ with our simulated protocols. And you can see, [inaudible] is the student who recorded this for me, and you can hear him clicking on his keyboard. It's a good thing he wasn't actually talking to his girlfriend while he was recording this. But yes, Thaddeus' original simulator, the one that did the quantum Dijkstra stuff I showed you earlier, includes lots of low-level physical simulation: determining what the right detector window is, how wide to make these windows in order to optimize the probability of detecting a high-fidelity state, and all of that sort of optimization of the Qubus link. It also supports some of the single-photon stuff like the Lukin group's work, so it does the same sort of thing. But that simulator couldn't do networking; adapting its messaging and the protocols for purification and things like that wound up being really brittle, so we decided the right thing to do was to rewrite it on a more networking-oriented simulator. This one does all of the messaging sorts of stuff: we can support routing protocols and, in theory at least, different purification protocols and all these other sorts of things. It doesn't yet have enough of the bottom-level physical parameter implementation in it. So we've still got…
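The kind of question this messaging-level simulator is built for can be illustrated in a few lines of Python; the numbers below are placeholders, not parameters from Lucho's actual OMNeT++ plug-ins:

    import random

    def heralding_latency(p_success, rtt, rng):
        # Repeat entanglement attempts until one succeeds; each attempt costs
        # a classical round trip for the heralding message. Real link models
        # add detector windows, loss, memory decoherence and much more.
        attempts = 1
        while rng.random() >= p_success:
            attempts += 1
        return attempts * rtt

    rng = random.Random(42)
    samples = [heralding_latency(p_success=0.01, rtt=0.001, rng=rng)
               for _ in range(10_000)]
    print(f"mean heralding latency: {sum(samples) / len(samples) * 1e3:.1f} ms")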
>>: [Inaudible] acknowledged [inaudible] low range [inaudible] messaging [inaudible].
>> Rodney van Meter: Yeah, absolutely. One of the big questions is, what is the fidelity of that operation, transferring from one to the other? [Laughter]. If you want to use these for QKD, you know, I said all of the simulations were done to reach an end-to-end fidelity of 0.98, which wound up also being usable as the link-layer fidelity. That's good enough for running something like QKD. If you want to do distributed computation, a fidelity of 0.98 is not going to be enough. You're going to need four nines, five nines, nine nines or something of fidelity by the time you get there, if you want to run large-scale distributed computations. And there are a few people who have been investigating moving qubits between physical mechanisms, between, you know, an electron spin and the nucleus, like the Lukin group is doing.
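A back-of-envelope calculation shows why the bar moves so far between QKD and computation, assuming, very crudely, that errors accumulate independently across operations:

    # With per-pair fidelity F, a computation touching n entangled pairs
    # succeeds with probability roughly F**n under a crude independence
    # assumption; enough to show the scaling, nothing more.
    n = 10_000
    for f in (0.98, 0.9999, 0.999999999):   # 0.98, four nines, nine nines
        print(f"F = {f}: P(all {n} operations clean) ~ {f ** n:.3g}")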
>>: [Inaudible].
>> Rodney van Meter: Yeah, and for most of those, you know, the fidelity looks like it's going to be pretty low, at least for the foreseeable future. If you're going to store it for a long time, and you say, after we bring it back out of the memory we'll rerun error correction, or we'll rerun purification or something, that's fine. But if you think you're going to do all the purification once, then store it in the long-term memory, leave it there and use it right afterwards, then that busing protocol has to be extremely high fidelity, and that's going to be a problem. All right. Anything else? Thanks.
[Applause]