>>: All right. Well, thank you very much everybody for coming. I'm really excited to have
Benjamin Brown here from the University of Copenhagen who’ll be talking about fault-tolerant
quantum technology. Thanks a lot Ben.
>> Benjamin Brown: Thanks for having me. So the title here isn't what I wrote in my abstract.
Actually I got a little confused because Krysta told me 50 minute research talk with a little bit of
extra stuff and a couple of days before I came here I had somebody from recruitment, he told
me no, it's an hour and a half and you should spend 20 minutes talking about your background
a little bit as well, and so this was told to me the night before the morning I flew so there's now
about 20 more minutes of slides at the beginning, I haven’t rehearsed it but I guess 20 minutes
on my background. I did it on the plane very apologetically but if I look surprised about which of
my slides come up it's because I am.
>>: Maybe the best talks are carried out that way.
>> Benjamin Brown: Okay. So this is a little bit of PowerPoint karaoke. So most of my research
so far has been done during my PhD and pretty much one overarching theme has been topologically
ordered phases of matter and their applications, specifically their applications to fault-tolerant
quantum computing hence the title of the first part of my talk, Fault-Tolerant Quantum
Technologies.
So the 20 minutes part I'm going to rush you through things that I've done so you know all
about me. Recruitment asked. So this is going to be a little intro to what topological phases are
and anyons and some work that I've done has been studying topological phases with
entanglement entropy, applications of topological phases to quantum error correction, so I'm
going to talk a little bit about quantum error correcting codes. I’ve done some work with some
experimentalists on what I would call quite a novel way of preparing quantum error correcting
codes, not topological ones just yet but we are thinking about it and I am going to tell you a
little bit about self-correcting quantum memories and these things pretty much summarize my
research background; and then finally the talk that I have prepared for I'm going to put a new
title slide when I get to that, and this is the regularly scheduled program, the research talk on
error correction with the gauge color code.
Okay. I'll start. So topological phases are very interesting just from the point of view of
fundamental condensed matter physics. Lots of work has been done here with other people. I
guess the pioneers of this field are Xiao-Gang Wen, [indiscernible], Michael Freedman, who's
also at Microsoft. Matt Hastings has done lots of work too. So these are physically realistic
systems. We consider locally interacting systems of spins and what's particularly interesting is
we take the system and we put it on some manifold of varying genus. So here's a manifold; it’s
a double donut, it's got genus two because it has two handles, and what's interesting about
topological phases is they have degrees of freedom in the ground space that can only be
accessed by non-local measurements or non-local operations, and those non-local operations
are supported on nontrivial cycles of the model.
So this is exactly the picture I have in mind. We have some nontrivial manifold which is this big
blue shape, I thought two handles would make it look more exciting. It interacts locally so all
the Hamiltonian terms involve spins that are very close together and we have non-local degrees
of freedom that are accessed by measuring the handles of this model.
What's really exciting about these is they're stable under weak local perturbations, so that means
even if the Hamiltonian isn't perfect, if the perturbations are small and local we still have the
same physics. This is again some work done by Matt Hastings, who's obviously here, and some
others, and what's really interesting for me is the low-energy excitations of these Hamiltonians
are anyons or generalizations thereof. So these are pretty interesting just fundamentally as
particles. They have braid statistics so here’s a picture of that manifold and let's suppose I
excited some particles and then I start moving them around. What happens is as I braid these
particles I acquire nontrivial braid statistics which I can use for all manner of things, quantum
error correcting codes or topological quantum computation which is another big subject here,
right?
These are interesting fundamentally and these are interesting for other applications, for
fault-tolerant quantum computation, and I'm going to talk a bit about both of those things. So some
work I've done is concerning the study of these topological phases fundamentally. So one way
we can study these models is by looking at the ground states of these Hamiltonians and we look
at the entanglement of the ground states. So we take a lattice, so you can imagine this square
is being some nontrivial manifold again where maybe the sides have periodic boundary
conditions, we have a torus, and we take the entropy, so we take all the qubits inside a region
and trace out the qubits outside and we look at the entropy of this, of the pure ground state,
and it turns out, by results due to Kitaev and Preskill and Levin and Wen in 2006, that we can
learn a lot about the topological characteristics of this model. So specifically we'd expect the
entropy to typically scale like an area law, so there would be some constant term
multiplied by the length of the boundary. So LR is the length of this red line separating part R from
its complement, and if we have some topological phase which supports anyons as its low-energy
excitations we also expect to see this universal term. So this term gamma tells us some
property of these phases of matter.
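[Aside, not part of the spoken talk: the scaling just described is the standard form from the Kitaev-Preskill and Levin-Wen papers, where alpha is a non-universal constant and gamma is the universal topological term.]

```latex
S(R) \;\approx\; \alpha\, L_R \;-\; \gamma
```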
And we can learn about these, so a small extension of this result is we can take many entropy
calculations and take linear combinations of them and we can isolate this gamma term and this
is what we call topological entanglement entropy. So I've studied some topological phases in
this respect. So something I think people here are familiar with, so Alan Geller and Matt
Hastings have a paper on dislocation codes. So what I've studied is if we take some topological
phase, namely the toric code model which I'm going to talk about later, it supports two types
of anyons that have mutually nontrivial braid statistics. One type of excitation is on the blue
squares of this chessboard and the other type of excitation is on the green squares and we can
introduce dislocations to this code. So you can see what I've done in the middle here is I've
taken the squares of the chessboard and just shifted them and this reverses the coloring. So
the coloring of the chessboard I can no longer [indiscernible] color this in a way that’s
universally globally consistent. What's really interesting about these guys, which we call twists
or twist defects, maybe dislocations, is that these in some sense nontrivially change the topology
of the system, so you can kind of regard taking this chessboard and introducing twists as
introducing wormholes between the green slices and the blue slices, and in this sense you can see
that I've added handles to the sheet shared between the blue pieces and the green pieces. And in
fact, these twist dislocations have been shown by Hector Bombin to behave a lot like Ising
anyons, which are kind of like [indiscernible], if you prefer that terminology.
So some work that I've done is to study these twist defects on the lattice and take the entropy
of these regions and various topological entanglement entropy configurations and what we've
seen is that these twist defects look a lot like Ising anyons and in many respects behave
like Ising anyons also with regard to entropy calculations. We did
various diagnostics on these. Here’s a bunch of calculations we made. So the topological
entanglement entropy should depend on the twists that live inside this region, this D1, and so
we did all of these tests. So in the case that we have one twist in this annulus we observe an
Ising anyon and if we put two twists inside this annulus it now depends on the internal state of
the fusion space of these [inaudible] or Ising anyons. It turns out that the analogy still holds
with respect to entanglement entropy.
Further work I've done, so topological phases are even more interesting it turns out in three
dimensions and we don't have a very good way of characterizing these guys yet. So in three
dimensions as well as having point-like anyonic excitations we also have these loop-like
excitations, so these are the excitations of the 3-D toric code, and braiding point excitations
through loops will give you nontrivial braid statistics [indiscernible] here. And we also have
models that are topological in some sense but don't have a good topological entanglement
entropy if we just look at the box.
Some other work I've done regarding the topological phases is to study the boundaries of these
types of models and together with Isaac Kim we came up with two types of diagnostics where
we look closely at the boundary and we take the entropy of regions close to the boundary and
we can find boundaries that support different types of point excitations and also boundaries
that support different types of loop excitations. And this is really interesting because we
can't tell these things apart by entropy calculations in the bulk of the system. We have to go to
the boundaries. This is kind of cool I think.
So this is all my work on topological phases. I’m now going to talk more about practical stuff.
So the reason why we think about topological codes or topological models as being good
quantum error correcting codes is primarily because we encode information non-locally. And
typically, a big assumption we make in quantum error correction is that the noise model will only
act locally on the physical spins or qubits of the system. So what this means is the noise model
has to be either very fierce or it's got to conspire in a bad way before small local errors will start
to affect the non-locally encoded information.
So it turns out that topological codes, as far as I could tell, are some of the best candidates for
quantum error correcting codes we have at the moment so quite a lot of work I've done has
been concerning different types of codes. I don't want to talk about this too much because the
gauge color code is going to be the main topic later on. So certainly it's interesting to figure out
different ways of identifying the errors that have been introduced to these topological codes
and the different classical algorithms we can use to try and determine what error
caused some configuration of syndromes, or anyons as I was talking about earlier. So I've
done some work with some guys at UCL and [indiscernible], who's now at Sheffield, comparing
different decoders for toric codes of varying spin dimension or qudit dimension, and I've also
done some work with the gauge code. Again, I'll talk about this later.
Okay. So this is all very interesting but what we do really need to think about is how are we
ever going to build a quantum error correcting code, maybe not even a topological one, just any
kind of quantum error correcting code. People are starting to get quite good at it and that's
why lots of people are getting excited about quantum computing recently. So of course we all
know about what's going on at Google in the Martinis group. So these guys are now capable of
making a very small classical error correcting code which will detect bit flip errors and they can
measure syndromes to identify these errors. So syndromes are measurements we make to
spot where the errors are and try to figure out what the error is. And these guys are on their
way to trying to build some toric code, but that still remains to be done as far as I understand, to
the best of my knowledge.
>>: Do you have a sense if they can do many rounds of error correction?
>> Benjamin Brown: My understanding is they have done some number of rounds. A talk I
saw by Martinis led me to believe that that's what they've done. So people also consider
ion trap quantum error correcting codes. So a small 2-D color code has been performed by the
Blatt group in this paper. So this smallest color code is the Steane code and this is interesting
because it will perform the whole Clifford group fault-tolerantly, but maybe this isn't the
best approach.
So another way we might think to put all our spins together is using the distributed picture. So I
spent quite a lot of time with a co-author of the gauge color code paper that I'm going
to talk about later, and what these guys think a lot about as well is that maybe it is really hard to put a
lot of qubits together. So maybe we just put a few together; like in an ion trap we can control 7,
10, 15 ions really, really well. But 15 ions isn't going to make an arbitrarily sized quantum
computer so maybe if we can take these really well-controlled qubits and connect them by fiber
optics or some channel of communication, then perhaps we can build a quantum computer of
many, many ion traps. So this is what's called the distributed picture. We have small ion traps of a
few qubits and they are all linked by fiber optics. So this is also a research program of Chris
Monroe and some others I believe who I can’t name. So these images are courtesy of Simon
Benjamin and Karl Nyman, who's a graphic designer, who made these cool graphics.
Okay. So I've spent a little bit of time thinking about how we can realize quantum error
correcting codes in what's called a Penning trap. So a Penning trap is kind of interesting. So
maybe you're familiar with the work of the [indiscernible] group from three years ago. So there
they simulated a Penning trap. So a Penning trap, just to be clear, is some electric field that
confines a bunch of ions. So these are charged particles, and to keep these particles stable the
trap has to spin around and around and around and around, otherwise the ions will go missing.
They can put between 100 and 300 ions in this trap, which is quite a lot of ions compared to what Blatt is
doing in what's called a Paul trap, but the only trouble is because these ions are spinning around
so fast the operations you can perform on them are quite limited. You can't address single ions,
for instance. Instead what you have to do is you shine a laser at the whole crystal and whatever
happens, happens.
So I should talk a little bit about my background at this stage. I was very lucky to go to the
Controlled Quantum Dynamics graduate school at Imperial College, and they're trying to make,
just to be clear in the UK they're trying to make PhD’s a bit more like the US system; and so this
particular graduate school we were put in a class with a whole bunch of people wanting to do
quantum stuff as a PhD. And we [indiscernible] together, and we make friends, and at the end,
when we kind of start to learn how to do quantum mechanics, we start talking to each other
again.
So I was talking to this guy Joe Goodwin, who was in my class, and he worked in the Penning
Trap lab at Imperial and he said what can we do with this? I think we can do operations that
are rotationally symmetric. We have this crystal and it’s spinning around and around, the
crystal is rotationally symmetric; what can we do? I said well, the five qubit code, that's
rotationally symmetric; now what we're going to try to do is build quantum error correcting
codes inside this Penning trap, and this is kind of a cool idea because we have so many ions in
there compared to a Paul trap. So like I said, here's a Penning trap, ions are confined in this
2D crystal and they're spinning around and around and around, qubits are encoded in the spin
degrees of freedom, and we use these crossed lasers to couple to the spin degrees of
freedom and we can apply--
>>: What does it mean [inaudible]?
>> Benjamin Brown: It means the frequencies are slightly different. So you get like a
[indiscernible] where these two lasers cross.
>>: [inaudible]?
>> Benjamin Brown: Yeah. They're just slightly off resonant. So in actual fact this is the really
nice thing about this grad school because I don't know these things either very well. I count on
the guys working on ion traps to answer these questions for me as well.
>>: Can you explain again why the symmetry of the trap is reflected by the symmetry of the
code? What’s the connection there?
>> Benjamin Brown: It's not like we want this code so let's make the trap this symmetry. The
question really was the ions kind of had this symmetry. What can we make with this? And I
just started looking for codes that have this symmetry. So the five qubit code turned out
to be fine.
>>: But wouldn’t it be the same like in a pull trap that you have a symmetry of the chain?
>> Benjamin Brown: I would say it's probably not quite long enough to start imposing
translational symmetry. Ten isn't so big. Again, it is only like 10 or probably no more than 20
for sure.
>>: [inaudible] individual addressing. He's got to address everybody here globally, so as this thing
is spinning around you can do the same thing to all five all the time.
>>: That's the point I was curious about.
>> Benjamin Brown: Exactly. So this is the picture. So these purple arrows these are laser
beams and they're just shooting at the crystal.
>>: And at the detuning he gets to flip the spin however much he wants and so that’s his way
[inaudible] time wise that's how much rotation you're going to get on all five. So the question is
what can you do if you do all five at once?
>> Benjamin Brown: Exactly. So we do allow ourselves a little bit in this particular instance. I’m
going to describe in my next slide a whole protocol but we do allow ourselves some single qubit
addressing with this middle guy. So all the ions spin around it and the central guy is static so
we can get to him. So we can do this entire protocol, and what's amazing is with these global
operations we can do it really fast as well.
So we can start in this configuration, we suppose that we can rotate this, so in all these pictures
the blue ions are where the information is. So we assume a product state where we have a
logical qubit that’s on one physical qubit and we want to teleport it onto the code that's going
to be on the outside ring. So we have global operations that entangle this central guy to the
code, we measure to teleport the information onto the code, in actual fact we have two
different codes we can prepare. The first one is the five qubit repetition code, which is just a
classical code, but as far as ions are concerned this is kind of interesting because the code that
we make is the repetition code that defends against de-phasing errors, and since de-phasing
errors occur much more quickly than the bit flip errors in the ion trap it’s actually worth doing
this and very easy.
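[Aside, not part of the spoken talk: a minimal sketch of why a five-qubit repetition code helps against dephasing, which dominates here. In the X basis a phase flip acts like a classical bit flip, so majority voting succeeds unless three or more of the five qubits dephase. The numbers below are illustrative, not from the experiment.]

```python
import random

def logical_failure_rate(p_z, n_qubits=5, trials=100_000):
    """Estimate how often majority voting fails under i.i.d. phase flips."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_z for _ in range(n_qubits))
        if flips > n_qubits // 2:   # a majority of the qubits dephased
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.1):
    print(p, logical_failure_rate(p))
```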
So if we want to, after we teleport it onto the five qubit repetition code, we are then able to
rotate the repetition code onto the five qubit code and defend against an entire quantum set of
noise, like any one of those ions suffering a bit flip or a phase flip or a combination of both
[indiscernible]. So we leave it and then we can re-entangle the code onto the hub qubit and
then we teleport by measuring the outside qubit so this is done by fluorescence. We just shine
a laser over the ions and we make them fluoresce and it turns out not only when we fluoresce
and teleport the information back to the hub do we recover the information, but we also
gain syndrome information. So there'll be some fluorescence pattern here that will tell us
what correction we need to perform based on what errors occurred while the information
was on the code.
And what's really great about this, so even for the five qubit code we can do this entire
procedure in just 10 pulses like zap, zap, zap and the information is in and out with syndrome
information [indiscernible]. And if we want to do the repetition code which is also worthwhile
we need only six pulses. And to compare this to what I talked about before, the ion trap Nigg
paper with the Blatt group, so if they want to actually read syndromes it takes them hundreds
of pulses of single-qubit addressing. And all we had to do was figure out what types of unitary
operations we could perform by shooting lasers at the whole crystal in one go.
So this is actually ongoing work. We’ve published this result with small codes and we were
actually criticized by [indiscernible] for not making this scalable. So we were limited. So the
way that we found these was just numerical searching. We knew all the global operations we
could perform, we knew what unitaries we wanted to do to get through this whole procedure,
and we searched for them.
>>: So if you wanted to do two times [inaudible] you get?
>> Benjamin Brown: So the numbers, I think that de-phasing occurs about 1000 times quicker
than bit flips. Between 1000 and 5000ish, these are the numbers. Ions don't flip very fast.
>>: And how long is the de-phasing time?
>> Benjamin Brown: That's a good question. I trusted my ion-trapping guys on this.
Everything was done in unitless ratios of T1 and T2 times. I can tell you after the talk if
you'd like.
>>: [inaudible] five the best number?
>> Benjamin Brown: No. So we want to get to really, really big codes if we can. So after we did
this work I’m going to tell you about this now. We want to be able to make codes of arbitrary
size if we ever want to make an arbitrary size quantum computer. So we have a [indiscernible]
student working on this in fact, and he's figured out how to make a repetition code of size N in this
Penning Trap and we figured out, we are still working on this, but it seems worthwhile with the
dephasing, the T1 and T2 times I just mentioned, to actually think about building a code of size
20, 50ish before bit flips actually become appreciable.
>>: Is it conceivable at all, if it's kind of an architecture, that you could go over to two qubits?
>> Benjamin Brown: Yeah. That's an excellent question that I put on my last point. There was
another issue with what I showed you before. So again, because single qubit addressing was
difficult, syndrome readout was tough and we could only do it when we wanted to read
the information out of the code. So not only can we now take a crystal, a big circular crystal
and put a really large repetition code on here, so this was done much more analytically. We did
it brute force in the numerical setting, but this guy, [indiscernible], his paper will be out in a
month or two. He just looked at one global operation and it turned out we could do a lot of
stuff for the repetition code. But not only could he produce one really big repetition code we
can also take the crystal, partition it into two pieces, and separate it into two separate codes;
and in fact we can entangle these two codes and every time we teleport we learned syndrome
information. So we can perform repeated syndrome readouts by teleporting the information
from code A to code B and so this is allowing us to send lots of syndrome information. So this is
making for a really cool code in action.
So, as I said, we are now limited because it's only a classical code. It would be nice to explore a
few more of the global operations we are allowed to perform and try and find some quantum
codes that we can make arbitrarily big. I have a couple of ideas on how this might work but
let's see. I think this is already a cool experiment for somebody to try and totally feasible in the
Bollinger lab where they're doing these simulations already I think. I hope it is. Maybe
somebody will tell me why not.
So your question, can we put more qubits in this? So maybe. So maybe we can take this crystal
and partition it into a few more pieces and then maybe put a few more logical qubits in one
crystal or maybe we can combine this idea with the idea of the distributed picture I showed you
before. So the way we work at the moment is we put an arbitrary qubit on the hub and then
we teleport it onto the code. Maybe if we can get some optical access to this guy maybe we
can start entangling this to another code that is similar. But I don't know how to do that. These
are two questions that will make this actually a pretty interesting way to encode qubits in a
distributed picture.
So then finally, to summarize, one other part of the work I've done: I've worked on self-correcting
quantum memories. So what this is all about is how do you build a quantum hard
disk? So the reason, you can think, why our computer can store information for a really long
time is because we have the hard disk. So a few years ago even, we could have thought that this
is a big piece of ferromagnet, and provided the temperature of this ferromagnet stays small
enough, information can be encoded in the ground space of this ferromagnet where either all
the spins point down or all the spins point up. These correspond to magnetic [indiscernible].
And the reason this is robust against thermal errors, just the fact that this piece of
magnet has stood there at room temperature, is because every time the environment tries to put
some energy into the system the energetics are such that there's a very large energy barrier
trying to separate two ground states. So every time the environment tries to put energy in the
system just wants to spit it back out again and so the errors remain very small and don't really
affect the encoded information. And we can always fix it at the end. Ultimately it's the physics
that's protecting it so collectively this [indiscernible] number of atoms is producing a field that
points in some direction and every time one atom gets a little bit of energy to try and flip
against the global magnetic field the magnetic field will try and pull it back into line again. This
is why you can think of the Ising model as being a good model for a classical self-correcting
memory.
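[Aside, not part of the spoken talk: the energy-barrier intuition in the 2-D Ising model, written out. This is the textbook argument, not a result from the talk.]

```latex
H = -J \sum_{\langle i,j\rangle} s_i s_j , \qquad s_i = \pm 1 .
```

Flipping a droplet of linear size $\ell$ against the background costs an energy proportional to its boundary, $\Delta E \sim J\ell$, so the barrier separating the all-up and all-down ground states grows with the system size, which is what suppresses thermal logical errors.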
And the question is can we emulate this thing in quantum physics? It turns out a quantum hard
disk is a really hard problem. We're not even asking can we build one. The question is even can
such a thing fundamentally exist? We need to know a lot more about Hamiltonians before we
can say such a thing. So one model we know we have is the four dimensional toric code. So
that's why I have a picture. So this is a building in Paris called the Grande Arche de la Défense, and
the architecture of this building, so if you want to draw a cube on a piece of paper you can draw
a square inside a square and connect its vertices. So the analogue of this is a 4D hypercube or
a tesseract. So it's a cube inside a cube and the vertices are connected. So the architecture of
this building is a bit like a 4D hypercube. If you learn nothing else from this talk, you'll
learn something about architecture.
So I did some work on this. I've written a review article, in fact, and there's all kinds of
interesting models and none of them really work very well. So probably my favorite is what's
called the cubic code model, which was discovered by Jeongwan Haah and studied by
[indiscernible], so this model is partially self-correcting. So what we want to do is we want
to take some model, we want to make it arbitrarily big, and we expect qubits to live in the
ground space of this model for an arbitrarily long time.
So this code looks a bit like a self-correcting quantum memory. The memory time will grow for
a short while, but if the [indiscernible] gets too big the energetic protection that we have, of the
kind the 2-D Ising model has, falls off and it doesn't work anymore. So people have also
considered non-commuting models, so subsystem codes. Maybe the most famous example is
the Bacon-Shor code. So there are good arguments for why this might be a self-correcting
quantum memory. There are lots of reasons why it won't work. But ultimately because it's a
non-commuting Hamiltonian we've not really got a good way to solve it. So this is an
interesting way to explore.
>>: So at a high level, describe these difficulties. I remember in the 2-D case there was no gap,
right? So what happens in the [inaudible]?
>> Benjamin Brown: Our math isn't good enough. It's just that we can't [indiscernible] these
Hamiltonians to learn anything useful about them. So David Bacon's argument for why this should
be a self-correcting quantum memory is based on field theory and--
>>: Is there numerical evidence that it might be self-correcting?
>> Benjamin Brown: As far as I know nobody's even simulated this numerically. Again, because
it’s a non-commuting Hamiltonian, it's really intractable. Well, non-commuting, and threedimensional. So the third-biggest system, supposing one qubit is the smaller system, you then
need eight qubits and then you need three by three by this 27 qubits and this is getting kind of
tough on a classical computer to analyze I think given you need>>: Did anybody seriously try?
>> Benjamin Brown: Somebody told me a little while ago that some people were working on
this but I haven't seen that paper. In any case this is a hard problem, but I would like to skip on
to the talk that I was going to give. This is the summary of the stuff I've worked on in the past
little while.
So I want to tell you about the gauge color code and single-shot error correction with the gauge color
code. This is a really exciting topic for me. So just to give you some background, lots of people
think about using the toric code as a quantum error correcting code to realize universal quantum
computing. They think this because it's pretty easy to build. The measurements we need to
make to identify errors are reasonably straightforward compared to all the other codes I can
think of. We can make it universal by adding magic state distillation. It's two-dimensional so we
can build it on an optical bench quite easily. This is still the problem Martinis is working on as I
understand it. It's still fairly reasonable to try, but magic state distillation is difficult. It's
worth thinking of other codes to do this with, and also there are other interesting ways of
performing error correction and I'm going to talk about that today which is single-shot
correction.
So I am going to first talk about the toric code. So for single-shot error correction, the error model
we're imagining is all our qubits are suffering errors, but also, in a more realistic picture, when we
try and make measurements to learn where these errors are, sometimes these measurements
are unreliable as well and photons are lost all the time. You can imagine the measurements
being difficult in general and I’m going to talk about different ways we deal with this problem of
having bad measurements and single-shot error correction is a pretty elegant way of dealing
with this I think.
First I'll talk about the toric code just to explain where we are in this whole picture. So to start
off talking about the toric code we need the stabilizer formalism. I don't want to bore you too
much with this. So we have an abelian subgroup of the Pauli group and this forms a stabilizer
group. And we have code states of some quantum error correcting code and they live in the
plus-one eigenspace of all the stabilizers of the stabilizer group. So S, a member of the
stabilizer group, gives you the plus-one eigenvalue on all of the code words, and this is an arbitrary
code word of the code I have in mind.
So Gottesman came up with this in his PhD thesis. So these stabilizers we also use as
measurements to learn the positions of errors. So we suppose now not the code state but an
error acting on the code state, and by measuring the stabilizer
terms we find terms that don't commute with the error, and instead of getting the plus 1
eigenvalues we expect for the code words we get a minus 1 eigenvalue, and we use these minus
one measurement outcomes to identify the positions of errors.
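[Aside, not part of the spoken talk: the two statements just made, written as formulas.]

```latex
S\,|\psi\rangle = +\,|\psi\rangle \quad \forall\, S \in \mathcal{S},
\qquad\qquad
S\,\bigl(E\,|\psi\rangle\bigr) = \pm\, E\,|\psi\rangle ,
```

with the minus sign occurring exactly when the Pauli error $E$ anticommutes with the stabilizer $S$; the collection of minus-one outcomes is the syndrome.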
So by the way, the reason we are allowed to restrict to these Pauli errors is because
when we measure these stabilizers we project onto something pretty close to a Pauli error acting
on the code state that was encoded. So because all of the eigenstates of the stabilizer operators
are Pauli errors acting on code words, we project onto a Pauli error, so this allows us to restrict to
these types of errors.
The model we have in mind is the toric code. So again, this is a topological code like I talked
about at the beginning. So this is a lattice of qubits. Qubits live on the edges of the square
lattice. The lattice has periodic boundary conditions and so it lives on the torus thus the name
the toric code. So Kitaev came up with this; everybody knows that. So it has two types of
stabilizers: star operators, which are the tensor product of four Pauli X operators on all the qubits
that are adjacent to a vertex, and plaquette operators, which are the tensor product of Pauli Z
operators on the edges that bound a plaquette. So we have one of these for every vertex and
one of these for every plaquette. We can see quite clearly that star operators will always
commute with plaquette operators because either they have no common support or, if we take a
vertex close to a plaquette, they'll have common support of two.
So here we have a stabilizer group and the logical operators, as I mentioned with topological
codes, they are supported on the non-contractible handles of this model. So here are the
logical operators of a topological code. So here's a string of Pauli Z operators that run around
one handle of the torus and this will anti-commute with a string of Pauli X operators that run
along the opposite handle of this torus, and so you can see these logical operators anti-commute
with one another so we get the logical algebra of a qubit, and the stabilizers commute
with these logical operators too and that's important because if we measure the stabilizers we
don't start collapsing logical information.
So how do errors look? So here's a Pauli error on the toric code; it's a string of
Pauli X operators. Errors will look like strings. So we measure all the plaquette operators on
every single plaquette and what we'll see is there will be two plaquette operators that don't
commute with the string and they will live at the endpoints of this string. You can see a plaquette
that lies on the string will share two X errors with the string, so that will commute. So what
we’ll see is a minus one outcome at the endpoints.
So what we'll do to error correct this guy is we assume that the error model is relatively weak
so the number of errors will be quite small. And it should be quite obvious that if we apply
some Pauli X operators that connect these endpoints again then the syndromes will vanish, the
minus one syndromes will be restored to the code space, they'll become plus one again, and
you can see, so suppose I apply Pauli X operators from here to here, the product of the correction I
made and the error itself will commute with both logical operators and it will commute with all
the stabilizers. We find the task of correcting the toric code is basically connect the dots,
and we connect the dots with the shortest path possible.
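[Aside, not part of the spoken talk: "connect the dots with the shortest path" is usually solved with minimum-weight perfect matching. A toy sketch using networkx, pairing up defects on an L-by-L torus; a real toric-code decoder would also handle the Z-type sector and convert the pairing back into a correction operator.]

```python
import itertools
import networkx as nx

def torus_distance(a, b, L):
    """Shortest Manhattan distance between two defects on an L x L torus."""
    dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
    return dx + dy

def match_defects(defects, L):
    g = nx.Graph()
    for u, v in itertools.combinations(range(len(defects)), 2):
        # negative weights turn max-weight matching into min-weight matching
        g.add_edge(u, v, weight=-torus_distance(defects[u], defects[v], L))
    pairing = nx.max_weight_matching(g, maxcardinality=True)
    return [(defects[u], defects[v]) for u, v in pairing]

print(match_defects([(0, 0), (0, 3), (5, 5), (6, 5)], L=8))
```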
So we then imagine larger errors and this is when error correction becomes difficult. So in this
particular instance here's a long string of Pauli Z operators. Once again the star operators will
identify two endpoints of this string, and now because this string is wrapped halfway around
the torus it’s now very difficult to error-correct it. So the only information we have are the
endpoints of this string. The trouble is this correction operator that connects these endpoints
along this path will commute with both logical operators, but this correction operator is
equally as likely to be the right one as this alternative dotted correction operator. So you
can see if I were to apply this correction operator the correction operator will run across the
support of the logical operator and cause a logical error by making this correction, so that's
bad, and because the error is so large, there's no way I can tell one correction from the other, so
this would cause a logical error with very high probability.
So that's the prototypical topological quantum error correcting code. But what I also want to
model is noisy measurements. So what I want to suppose is I measure some of the star and
plaquette operators and the measurement is wrong sometimes. How do we deal with this? In
the case of the toric code, so this is talked about first of all in the Dennis paper, we measure
stabilizers over and over and over again and we try and identify errors as they occur. But
equally, in some of these repetitions the measurement will go wrong, and so instead
of studying the stabilizer outcomes themselves we look at the parity between different times.
So here's one plaquette operator measured three times, one at time T minus 1, one at time T, and
one at time T plus 1. Let's suppose the measurement at time T went wrong. So that's why it's
colored in red independent of whether these are identifying an error or not.
Let's suppose there are no errors on any of these qubits. Because this has given me the wrong
outcome the parity between measurement at T minus 1 and time T will be minus 1 so we can
effectively think of this as the endpoint of a string that runs vertically and the other endpoint
will appear when we look at the parity between the measurement outcome at time T plus 1
and time T. So effectively what we have is a string of measurement errors that run vertically in
time.
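[Aside, not part of the spoken talk: a minimal sketch of how the parities between consecutive measurement rounds are formed. Outcomes are recorded as 0 for +1 and 1 for -1; a single faulty measurement at time T produces a pair of detection events, at T and T+1, which is the vertical string just described.]

```python
import numpy as np

def detection_events(syndrome_history):
    """syndrome_history: (rounds, num_stabilizers) array of 0/1 outcomes."""
    s = np.asarray(syndrome_history)
    first = s[:1]                    # round 0 is compared against the clean code state
    return np.vstack([first, s[1:] ^ s[:-1]])

history = np.array([[0, 0, 0],
                    [0, 1, 0],       # the middle stabilizer is misreported at time T
                    [0, 0, 0]])
print(detection_events(history))     # two events appear, at T and T+1
```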
So the decoding problem now of the toric code, so here you can imagine a toric code living on
the base, and we measure star and plaquette operators over and over and over again in
many time slices, and our physical errors will look like strings that run horizontally in
the plane of this three-dimensional space time, measurement errors will run vertically, and
maybe we'll get some more complicated errors or a measurement error occurs at the same
time as a physical error in which case we have strings that run both horizontally and vertically.
So this is noisy measurements with the toric code.
So here's a bunch of terminology now. We consider independent and identically distributed noise models; this
isn't realistic most likely, but it's analytically and numerically tractable. So for the error model we go to
every single qubit and we flip the qubit with probability P. With probability one minus P no
error occurs on that qubit. So then I want to introduce measurement errors, and measurement
errors occur with probability Q. Q is always going to be equal to P; I'm just
going to call it Q because it's good to have another label for a measurement error. And when
a measurement gives me the wrong outcome with probability Q equal to P we call
this the phenomenological noise model and this is the noise model we're going to work with.
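[Aside, not part of the spoken talk: a minimal sketch of the phenomenological noise model just defined. The `syndrome_of` function is a stand-in for whatever code's parity checks you care about; here it's a length-five repetition code, purely for illustration.]

```python
import numpy as np

rng = np.random.default_rng()

def noisy_round(num_qubits, syndrome_of, p, q=None):
    """One round: i.i.d. qubit flips with probability p, faulty readout with q = p."""
    q = p if q is None else q
    data_errors = rng.random(num_qubits) < p          # physical bit flips
    syndrome = syndrome_of(data_errors)               # ideal parity checks
    meas_errors = rng.random(syndrome.shape[0]) < q   # unreliable measurements
    return data_errors, syndrome ^ meas_errors

def rep_code_syndrome(errors):
    """Parity checks of a repetition code: neighbouring pairs of qubits."""
    return errors[:-1] ^ errors[1:]

print(noisy_round(5, rep_code_syndrome, p=0.1))
```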
I'm going to talk a little bit about the gate error model. So in actual fact the phenomenological
noise model is a bit contrived as well, but it's interesting enough. It captures all the physics I'm
interested in for now. So the gate error model is where not only do we consider
measurement errors, we consider the whole circuit that will conduct a measurement. So that
involves having an ancilla qubit, coupling it to all of its nearby qubits that are involved in this
parity measurement, and then we assume that every single gate I perform could have
introduced errors to the system; and you can see then that as the weight of the measurement grows
the probability of error is going to increase. So this is an important error model to consider.
People have done it for the toric code. I haven’t yet got there with the gauge color code. So
this is important to talk about. I'm not going to consider it, but I want you to know what it's
called.
The first way we might consider studying a code is to study its threshold. So this is what we
called a threshold plot. So this is actually a threshold I made for a different code altogether but
again, thresholds are very close to phase transitions. The physics is very similar. So on the X
axis I have P, which is the physical error rate of my qubits, and on the Y axis I have the
logical failure rate so that's where I take some classical algorithm which I call a decoding
algorithm, I look at the minus one syndrome outcomes and I try and predict what error caused
that particular syndrome. And if the algorithm gets it right then that will be a logical success
and if the algorithm gets it wrong it's a failure. So that's logical failure on the Y axis. That’s how
successful my decoder is.
And what I want is a threshold. So what I want is I want to show that there exists some error
rate below which I can make the code arbitrarily large and get the logical error rate arbitrarily
small. So on this plot I show a code of increasing size and if I’m below threshold which is at the
crossing point increasing the size of the code will make the logical error rate, so on this plot this
is a logical success rate. I've got it upside down I'm afraid. So the logical success rate will
improve with system size. At the crossing point nothing will change, there's no sense in
making my code bigger, and beyond that it doesn't work anymore. But the interesting value is
how much noise can that code actually tolerate given a decoding algorithm? And if I have noise
below that number I can get my logical success rate arbitrarily large by making the code as big
as I need and then I can in principle get to a quantum computer of arbitrary size; I can run quantum
circuits of arbitrary depth because my qubits will fail that infrequently and my logical qubits will
fail that infrequently.
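[Aside, not part of the spoken talk: the scaffold of a threshold estimate in the sense just described. `run_one_trial(L, p)` is a placeholder that should sample errors, decode, and return True on a logical failure for whichever code and decoder is being studied.]

```python
def estimate_failure_rate(run_one_trial, L, p, trials=10_000):
    """Monte Carlo estimate of the logical failure rate at size L and error rate p."""
    fails = sum(bool(run_one_trial(L, p)) for _ in range(trials))
    return fails / trials

def threshold_sweep(run_one_trial, sizes=(8, 12, 16), ps=(0.02, 0.03, 0.04)):
    """Collect curves whose crossing point locates the threshold."""
    return {L: [estimate_failure_rate(run_one_trial, L, p) for p in ps]
            for L in sizes}

# Below the crossing, larger L gives a lower failure rate; above it, larger L makes
# things worse, so the crossing of the curves is the threshold error rate.
```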
So the final point I want to make about the toric code is the decoding problem becomes harder
when we introduce measurement errors. So if we just assume bit flip errors and measurements
are perfect the toric code can actually tolerate nearly 11 percent of bit flip errors. And when
we consider the phenomenological noise model and we decode in this two plus one space time
picture the threshold error rate will drop to about three percent. And so this is the number we
want to compare the gauge color-code to.
So, as I said, there's lots of reasons to think about other codes other than the toric code. So a
very interesting class of models are those known as the color code models. So this is pretty
much the career of Hector Bombin. So in 2006 and 2007, with his PhD supervisor Martin-Delgado,
he came up with the 2-D color code and the 3-D color code. So the 2-D color
code is interesting. It's a code very much like the toric code except you can perform
the Clifford gate set transversally. So when I say transversal I mean I want to perform some
rotation on the code space.
What do I have to do to all the physical qubits to get there? Well, a transversal gate means I
take every single qubit and I preform some local rotation on each qubit and performing all
these local rotations on each of the qubits will perform the appropriate rotation of the code
qubit and why do you want to do it this way? Well, it's because errors can’t propagate this way.
So in principle we all know the math and so we understand: I have this state and I have this state.
There exists a unitary that will take this state to here. But what kind of circuit will do this and
what effect will errors have when I go through this circuit? Well, if this circuit involves lots of
two-qubit entangling gates, errors can propagate through the circuit, and a unitary
operation like this will, by the end of it, have amplified very small errors on the original code and we won't be
able to recover information on the other side anymore. If it's transversal there are no
entangling operations. Two-qubit transversal gates have a few but not in a way that's bad. So if
we transversally perform gates, perform logical operations, errors can't propagate. So errors
that were there will still be there but they won't have moved or gotten bigger so this is what we
are interested in.
So then a little bit later the 3-D color code arrived, and the 3-D color code can perform a
T gate. That's interesting because a T gate together with the Clifford
group will give us a universal gate set, so that's kind of nice. But the trouble with the 3-D color
code as a stabilizer code is that it has very high-weight measurements. So when we start
thinking about gate error models we are not going to reasonably measure the stabilizers very
well. So the stabilizers have weight maybe 24. The lattice I'm going to talk about has stabilizers
of weight up to 32. The gauge color code isn't a stabilizer code anymore. It's what we
call a subsystem code. This is really great because we don't have to measure all the
stabilizers anymore. We can reduce the weight; we can measure other terms to infer the
values of stabilizer operators. With the gauge color code we've reduced the measurement weights,
and measurement errors become far less problematic when we finally move to
the gate error model.
>>: It has it as a feature but you could see it also as a bug, right? If there’s an error in those
syndrome bits then the syndrome seems to be worthless, right, or even detrimental if you
consider it in the error correction.
>> Benjamin Brown: It is a bug, but what we want to do is detect those measurement errors and
deal with them. We want to see where those bugs are. So moreover, the gauge color code
can also do gauge fixing, which was originally introduced by [indiscernible] as a way to
effectively use error correction to map between, I've gone too fast again. So no code can
transversally perform a universal gate set, so that makes the prospects of quantum computation look very
bleak. So one solution is magic state distillation, and this is a way of circumventing the Eastin-Knill
theorem because we noisily prepare [indiscernible] gates.
Or, what we can do is what [indiscernible] proposes is effectively having two codes and we can
map information from one code to the other via some error correction procedure and we can
make this mapping without making the logically encoded information vulnerable. And if we
choose these two codes smartly code A can have some subset of gates and code B can have the
other gates that we need to make a universal gate set and we can just keep copying the logical
information to each code as and when we need to perform whichever gate, and we circumvent the
Eastin-Knill theorem because we don't use one code anymore; we have two codes, and together
they can form a universal gate set. So the gauge color code can do this and that's really great.
And what's more, single-shot error correction makes it really easy to do, so that's what I'm
going to talk about. Not the gauge fixing part of this code, but I'm going to talk about using
single-shot error correction to error correct for the gauge color code and the advantages this
might have.
So this is what the gauge color-code looks like. So we talk about it sometimes in the dual
picture, and the dual picture is very convenient for seeing the boundaries of the model. It
doesn't actually need to be on the three-torus. We can put it on a ball and make the
boundaries correct and this will still encode the logical qubit. I'm not going to talk about it in the
dual picture, although it is important to see it. There's the ball that encodes a fairly small
color code. I'm mostly going to talk about the primal picture.
So the primal picture the code is defined on a three-dimensional lattice where qubits live on the
vertices of the lattice and the lattice has to be four-valent, so every vertex has four edges
incident to it, and also the lattice has to be what we call four-colorable. So these three-dimensional
objects, which I'm going to call cells, I should be able to give them one of four colors and I
should be able to assign all the cells a color such that no cell of a given color will touch another
cell of its own color. So the lattice that I'm going to simulate, this is an example of a four-colorable
lattice. Look as hard as you want, you won't see one cell of a given color touching
another cell of its own color.
>>: But you said every vertex had four edges.
>> Benjamin Brown: Yeah. I might not have drawn all the edges on. This is just a piece of a
lattice. So it has four edges, four-valent. So this vertex has one, two, three, four edges.
>>: And you're pointing in or out depending on where you stop drawing. Okay. Never mind.
>> Benjamin Brown: There will be places where you can see three because it's again, a little
piece of the lattice. But yeah, the lattice has to be four-valent. So, the stabilizers of the gauge
color code: I'm still going to talk about this as a stabilizer code for now and then I'll tell you what a
subsystem code is shortly. So the stabilizers are associated to the cells in the lattice. So you
pick a cell, so here's a picture of a cell, and for every qubit that touches this cell there's a Pauli X;
we take the tensor product of the Pauli X's on all the qubits
that touch that cell and that's the stabilizer. And similarly, we have Z-type stabilizers, and
they are the tensor product of Pauli Z's on all the qubits that touch that cell. So take a
cell, Pauli X's, tensor product on all of them, that's the stabilizer, same over here, and we have
one of those for every single cell. All these stabilizers commute; it’s a stabilizer code.
So how do errors look? Here's a string of Pauli X errors and these are going to be picked up by
the Z-type stabilizers on the cells. So the two stabilizers that will give me minus 1 outcomes
are these bright blue stabilizers that live at the endpoints of this string, so just like the toric code we
can regard errors as being strings of Pauli operators. So quite a lot now I'm just going to draw
my errors just looking like lines and my syndromes as looking like minus one endpoints of these
lines.
So this model is a lot like a three-dimensional version of the toric code. So [indiscernible] is
visiting and he's done a lot of work on this, interning I guess I should say. So it's like the toric
code except we have many different colored versions. There are three different copies of the
toric code here and there's a different copy associated with the different colors of the
stabilizers and the different copies talk to each other in a slightly obscure way but we don't
need to worry about that too much. The point is this model is a lot like the toric code in
three dimensions.
So how do we deal with these errors? So I'm going to talk you through the decoding algorithm I
use, and just so you know, the decoding algorithm we used to get a threshold for the gauge color
code is the first decoder you would've thought to use. So this is [indiscernible] decoding. So
here's a picture in 3-D and you can think of this as being some lattice of some topological code.
To picture the decoder, you can imagine the next four slides running together if you wish. So these
red points are qubits that have suffered errors and close to those errors we'll see a bunch of
syndromes that light up; they're the black points of this picture.
So what I'm going to do is clustering. So we can’t see the errors, they're invisible, but we use
the syndromes to try to estimate where those errors could've been. So the way we're going to
make the estimate is we're going to put all the syndromes in boxes, so here are little green
boxes, and we're going to incrementally make these boxes bigger and bigger and bigger by
finding nearby syndromes. So I make them bigger by say one spatial unit so these syndromes
found one another and these syndromes found one another and these syndromes found one
another. These two didn't because they're far away from everything else.
So some of these boxes have turned blue. That's because at the end of each incremental box
growing stage we look at the contents of the box and say can we explain all of these syndromes
with a correction operator that's contained inside the box? And in these two cases we could
and over here we couldn't because this box needed these two syndromes before we could have
explained this with a correction operator inside the box. So we run the algorithm one
increment larger and these syndromes will finally find these two guys, and now all the boxes are
complete. Every box contains a collection of syndromes, which I should have been calling
defects; syndromes, defects, I'll use these interchangeably. So now all these boxes are blue, we can
calculate a correction operator, and the point is, because these are topological codes and these
boxes are small compared to the whole lattice, in none of these boxes can I put a correction
that will look like some non-contractible cycle around, I guess this would be, a three-torus. It's
not going to span the whole lattice. So the correction is going to be topologically trivial
compared to the logical code space or the ground space of this topological code and so what's
contained inside these boxes is a correction operator that will successfully recover the logical
information.
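[Aside, not part of the spoken talk: a schematic of the box-growing idea in one dimension, where a cluster is "neutral" once it holds an even number of defects, as it would be for toric-code-like strings. The decoder used for the gauge color code needs a more involved neutrality check, but the grow-and-merge loop has the same shape. Assumes an even total number of defects, as on a closed manifold.]

```python
def cluster_decode(defect_positions):
    """Grow boxes around defects until each box can be corrected internally."""
    clusters = [{"lo": x, "hi": x, "defects": [x]} for x in sorted(defect_positions)]

    def neutral(c):
        return len(c["defects"]) % 2 == 0

    while not all(neutral(c) for c in clusters):
        for c in clusters:                 # grow every unfinished box by one unit
            if not neutral(c):
                c["lo"] -= 1
                c["hi"] += 1
        clusters.sort(key=lambda c: c["lo"])
        merged = [clusters[0]]             # merge boxes that now overlap
        for c in clusters[1:]:
            last = merged[-1]
            if c["lo"] <= last["hi"]:
                last["lo"] = min(last["lo"], c["lo"])
                last["hi"] = max(last["hi"], c["hi"])
                last["defects"] += c["defects"]
            else:
                merged.append(c)
        clusters = merged
    return clusters

print(cluster_decode([2, 3, 10, 30]))
```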
>>: This is a three-torus, this box is not time, this is all spatial.
>> Benjamin Brown: This is all spatial. So you can see the red errors live inside these boxes so
this will work out fine. So, as I said before, the gauge color-code is a subsystem code not a
stabilizer code. So this is a little bit more complicated. And so here's the abstract picture of
what a subsystem code is. Don't worry, we don't need all this level of abstraction but I'll give it
to you anyway. So a subsystem code is specified by what's called a gauge group. So this is a
subgroup of the Pauli group, again, only this time it doesn't have to be commuting. But the gauge
group allows us to specify a stabilizer group. So if we look at all of the elements that commute
with the gauge group, the centralizer of the gauge group, and we look for the members that
commute with the gauge group that are them themselves members of the gauge group, sorry, I
think I said that wrong. If it's a member of the gauge group and it commutes with all members
of the gauge group then that's the stabilizer.
>>: So it’s the center [inaudible]?
>> Benjamin Brown: It’s the center. I was corrected the other way once before but I'm in the
center then. So the center of the gauge group specifies a stabilizer code. There's a distinction
between stabilizers and logical operate. That should be now, I'm sorry. So the stabilizers
commute with the gauge group but are them themselves members of the gauge group. Logical
operators, not stabilizers, look like they commute with the gauge group but are not themselves
members of the gauge group and this will specify my whole stabilizer code.
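[Aside, not part of the spoken talk: the same definitions in symbols, with $\mathcal{G}$ the gauge group and $C(\mathcal{G})$ its centralizer in the Pauli group.]

```latex
\mathcal{S} \;=\; C(\mathcal{G}) \cap \mathcal{G}
\quad\text{(the center of } \mathcal{G}\text{, up to phases)},
\qquad
\text{bare logical operators} \;\in\; C(\mathcal{G}) \setminus \mathcal{G}.
```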
So let's look at the example of the gauge color code. So the gauge group terms of the gauge color
code live on the faces of this lattice. So for every single face of every single cell we have
the tensor product of Pauli X's on the qubits that touch the vertices of a given face and also the tensor
product of Pauli Z's on all the qubits that touch a given face, so we have two
operators, an X-type and a Z-type face operator, for every
single face. But it's important to see that these face operators don't all commute with one
another. Let's look at this face here. You can see this Z-type face will only
share one qubit with this X-type face, so you can see this Z face will not commute with this X
face. This is a gauge group. But all of the cell operators will commute with all the
faces; the way this four-valent, four-colorable lattice is constructed means that no face
shares an odd number of qubits with any cell, so the stabilizers I gave you earlier are the
center of this gauge group.
Moreover, we can measure the face operators and learn the stabilizer operators. So this is the
next important thing. So where I’m going with this is we don't have to measure face operators
over and over and over again to try and identify measurement errors. Actually we can just
measure all the faces once and we have enough information to learn all of the measurement
errors and then all the physical errors provided the error rate is small enough.
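[Aside, not part of the spoken talk: a minimal sketch of the inference step just described. Recording outcomes as 0 for +1 and 1 for -1 turns products of face operators into parities (XORs); the face labels are made up for illustration. If the three single-colour estimates of the same cell stabilizer disagree, at least one face measurement must have been faulty.]

```python
from functools import reduce
from operator import xor

# one cell of the lattice, with its faces grouped by colour (hypothetical labels)
face_outcomes = {"g1": 0, "g2": 1, "g3": 0,    # green faces
                 "b1": 1, "b2": 0,             # blue faces
                 "r1": 0, "r2": 1, "r3": 0}    # red faces

def cell_stabilizer_value(outcomes, faces):
    """Product of face operators, i.e. the parity of their 0/1 outcomes."""
    return reduce(xor, (outcomes[f] for f in faces))

estimates = [cell_stabilizer_value(face_outcomes, ("g1", "g2", "g3")),
             cell_stabilizer_value(face_outcomes, ("b1", "b2")),
             cell_stabilizer_value(face_outcomes, ("r1", "r2", "r3"))]
print(estimates)   # three redundant estimates of one cell stabilizer
```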
>>: So let me try to get my head around this. So the stabilizer is non-commutative>> Benjamin Brown: Stabilizers commute. Gauge terms don’t commute.
>>: But is the center inside which is t [inaudible] of the gauge group>> Benjamin Brown: Yep.
>>: Can you not identify the elements of that center in that graph somehow?
>> Benjamin Brown: So the center, the stabilizers, I told you already: the cell operators, all the
cell operators. So the stabilizers haven't changed; I've just introduced a gauge group. The
stabilizer group is included in this gauge group. What's more, what's really cool is I don't
need to measure the stabilizers anymore. So I can measure face operators, take the product of
subsets of face operators, and then [indiscernible] the values of the stabilizers. So here's a picture
of a cell, and the faces of this cell are three colorable, by which I mean you can assign each face
one of three colors such that it won't touch another face of the same color. What's more, every
single qubit of this cell touches a face of each color once and only once, so that means I can take all
the green faces of this cell, take the product of all those face operators, and what I learn is this
stabilizer operator, the cell operator. So I could measure all the green face operators and I
learn the value of this stabilizer. So if I measure all the faces of this lattice I redundantly learn
this stabilizer operator three times: once through the green subset of face operators, once
through the blue subset and once through the red subset of face operators.
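In code the bookkeeping looks roughly like this; the +/-1 outcomes below are invented, the point being that each color class of faces on a cell multiplies up to the same cell stabilizer.

from math import prod

face_outcomes = {                 # hypothetical +/-1 face measurement outcomes on one cell
    "green": [+1, -1, +1, -1],
    "red":   [-1, -1, +1, +1],
    "blue":  [+1, +1, +1, +1],
}

cell_estimates = {color: prod(values) for color, values in face_outcomes.items()}
print(cell_estimates)             # three independent estimates of the same cell stabilizer
# If the three copies disagree, at least one face measurement must be wrong.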
>>: [inaudible]?
>> Benjamin Brown: That’s going to let me infer measurement errors in a minute, and that's
true of all the cells of this lattice. So that's kind of a neat observation of Hector's. And what's
more, I'm only going to measure these faces once, and that's going to be very useful; that's
what single-shot error correction is.
So why is this the case? Why does single-shot error correction work? So here's a picture of
some cells of the lattice. Now let's suppose some errors occurred; I haven't shown you the
error on the picture, but let's suppose this cell operator was supposed to tell me minus 1 and
all the others were supposed to tell me plus 1. The big red one in the middle is indicating
the endpoint of a string of errors. In this diagram transparent faces give me plus 1 outcomes
and colored faces give me minus 1 outcomes. So what that means is there's got to be at least
one face in each of the different colored subsets that has to be minus 1. So here's one in the
blue subset, here's one in the yellow subset, and here's one in the green subset. That's nice,
but because that face had to be minus 1, if you then look at the cell next to it, so let's look at this
guy, this one had to be plus 1. The product of all of its blue faces has to be plus 1, but one of
those blue faces is already minus 1, so there has to be at least one more minus 1 blue face to
make the product of all the blue face operators plus 1 again. So that means this face had to be
minus 1, and then the same goes for the next cell: it had to be plus 1 over all its blue faces, so
this guy had to be minus 1 to make sure the product of this face and this face became plus 1
again. So what we see happening is
this big string of minus 1 face operators emanating from a syndrome.
So here's a diagram now. Don't worry too much about the coloring. Here's the minus 1
stabilizer that is this guy in this diagram. What we see coming from this minus 1 syndrome is a
whole string of negative one face operators and we have one of these for every single colored
subset coming from that cell. So what it means is if we look at the face operators, not just the
stabilizer operators, we have a lot more information to determine measurement errors. So I’m
going to show you a picture of that now.
So if we just looked at the stabilizers this is the information we would have seen. We would
have seen a bunch of minus 1 points, and if measurement errors occur we might not have seen all
of these correctly. But if we look at all the strings coming from these endpoints we have
this whole spaghetti of information, and we know that these strings don't terminate except in
triplets, the way I showed you in the last slide. So what we expect to see if we measure all the
face terms is some huge spaghetti of data that we can use to identify measurement outcomes.
So now let's suppose we make these face measurements and measurement errors start occurring. So what
does that look like? Well, here's a picture of an invalid configuration. So if we look at the green
cells, these aren’t valid because on the blue subset of faces we have a minus 1 face term, but on
the other two subsets of faces everything shows up as plus 1, so this isn’t consistent with
a physical error. So what we see are broken endpoints of these, I should think of a smarter name
than spaghetti but I’ll stick with it, so we see endpoints of these pieces of spaghetti. So these
clearly have to be measurement errors. That's the only way we can explain this.
So if we know to expect some gauge configuration that looks like this and we start seeing
breaks in this configuration we can use these endpoints to actually identify the positions of
measurement errors. So we take this data, the endpoints of these pieces of spaghetti, and we
run it through a decoder again. So these endpoints behave like point-like defects in the
same way the endpoints of string errors did in the toric code, so this clustering algorithm works perfectly
fine; that's what we do. So we cluster up all the endpoints of these strings, we try and estimate
what went wrong and figure out what syndrome should have been there, and once we do this
we can reconstruct what we think the perfect syndrome would have been, assuming the
measurements were exactly right, and we run the perfect-measurement decoder, and this
works.
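Here is a toy version of the clustering step, assuming the broken-string endpoints are just points on a cubic lattice and using a plain union-find; the real neutrality condition for the gauge color code is more involved than the even-cluster-size rule used here.

import itertools

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def cluster(points, radius):
    # Group points whose Manhattan distance is at most `radius`.
    parent = list(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        if sum(abs(a - b) for a, b in zip(points[i], points[j])) <= radius:
            union(parent, i, j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(parent, i), []).append(points[i])
    return list(groups.values())

endpoints = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (5, 6, 5)]    # hypothetical broken-string endpoints
radius = 1
while any(len(g) % 2 for g in cluster(endpoints, radius)):  # grow until clusters look neutral
    radius *= 2
print(cluster(endpoints, radius))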
>>: If you run the decoder on just the endpoints would you lose information about which one was
connected to which?
>> Benjamin Brown: No. So the spaghetti configurations, they’re arbitrary. You can't get
anything from this. In general it's a really huge mess. But the only thing we are able to identify
is the endpoints and that is useful information.
>>: [inaudible] whether you get triplets or not. It’s only if it’s not triplets that you've got measurement
errors.
>> Benjamin Brown: Exactly. So we can run this zero clustering decoder. So here we go. I
want to figure out how well this really simple algorithm works, and I’m going to do this
through Monte Carlo simulations. So we prepare our code state, we flip the bits, we
give them bit flip errors with probability P, and then we try and infer the stabilizers by
measuring faces, where measurement errors on these face terms occur with probability Q equal to
P, and finally we read out the state. So when we read out the state this is a little more
complicated. We are allowed to measure onto a product state; we are collapsing the code to
learn the logical information. So if we measure single-qubit Pauli Z's everywhere, these single-qubit
Pauli Z's commute with all the Z face terms and with the Z logical operator (I could show
you a picture of the logical operator, but it does commute), and effectively, if measurement errors occur
when we perform this collapse they just look like physical errors. So effectively when we read
out we assume a little bit of extra noise, so we add a bit more bit flip noise to all the qubits and
then we make the measurement as if there were no measurement error anymore. So this is
what happens during readout.
So effectively here is the map. We want to figure out the probability that a code of a given
size, where physical errors happen with probability P and measurement errors happen with
probability Q equal to P, recovers the initially encoded state when we run the code state through
this map: the physical error followed by the measurement followed by the readout. I
want to figure out how many times out of 10,000 this will occur, and we get a threshold. This
isn’t interesting yet. I would talk about this crossing point, but I will tell you why I don't care
about this one just yet. Here we have logical failure rate, physical error rate, and three system
sizes. So of the system sizes we look at, I believe 35 by 35 by 35 is the largest lattice and 23 by 23
by 23 is the smallest.
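The numbers quoted here come from simulating the gauge color code itself, which isn't reproduced below; this sketch runs the same Monte Carlo recipe on a simple repetition code under bit-flip noise, just to show how the failure-rate curves for several sizes are collected.

import random

def logical_failure_rate(d, p, trials=10_000, seed=0):
    # Encode logical 0 in a distance-d repetition code, apply bit flips with
    # probability p, decode by majority vote, and count logical failures.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:
            failures += 1
    return failures / trials

for p in (0.05, 0.10, 0.15):
    print(p, {d: logical_failure_rate(d, p) for d in (11, 23, 35)})
# Curves for different sizes cross near the threshold error rate.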
But this isn’t interesting yet, because here's the problem: I want to run this for a really
long time. So that will mean decoding my gauge color code many, many, many times.
Here's the thing, the noise that’s left behind after I run the decoder isn’t going to be the same
as the noise I put on initially and I want to give you an example of why I expect this. So here's
an error string and here are two syndromes at the end of the error string. So we have minus
one outcomes here and here. That's what we should have got if the measurements were
perfect. Let's say I measure all the face terms and this is what I saw.
So the original error is shaded gray; I can't see it. This guy, the left minus 1 syndrome, was
measured just fine, but then we have this spaghetti of information and we have three endpoints
that don't truly identify this syndrome. So I have to run it through the estimation algorithm to
try and estimate the syndrome and let’s suppose it makes a small mistake. Let's suppose my
decoder predicted the syndrome to be here, not in its true position. So we have an estimated
position and a true position.
So what I'm then going to do is I'm going to put this syndrome into the perfect decoder, the
decoder that tries to identify what bit flips have occurred, and it’s going to come up with a
correction that looks like this. It's going to connect the true syndrome on the left to the
estimated syndrome on the right. So the noise that's effectively introduced to the code will
look like the initial error times the correction I applied. You can deform this onto a small
error that connects the true syndrome position to the estimated syndrome position.
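A one-dimensional cartoon of that statement, with invented sites rather than the 3-D lattice: the residual noise is the initial error times the correction, which deforms to a short string between the true and estimated syndrome positions.

error      = set(range(2, 8))    # hypothetical physical error on sites 2..7, true endpoint near site 8
correction = set(range(2, 10))   # decoder connected site 2 to the estimated endpoint near site 10
residual   = error ^ correction  # the error times the correction (symmetric difference)
print(sorted(residual))          # [8, 9]: a short string between the true and estimated positions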
So other noise is introduced just by mistakes that are made when I make this estimation, and
the error that is introduced corresponds to the discrepancy in position between the true
positions and the estimated positions. So this is a very complicated function. I don't know
what this noise looks like at all. I mean, I do, it looks a bit like this, but the extent to which it
occurs is difficult to see. So it's a complicated function of the physical error rate, the measurement
error rate, and the algorithm I chose to make this syndrome estimation. So the length of this
string I have no idea about, and correlated errors can occur, long strings of errors can occur,
depending on how good or bad my estimation algorithm was, and I don't want to assume that
the noise that my syndrome estimation algorithm left behind was necessarily ideal IID noise.
So to test this out what I want to do is I'm going to run my syndrome estimation many times.
So I'm going to hit it with some physical noise and I'm going to recover it and then I'm going to
do it again and again and again and again. What I'm hoping will happen is if I run this many
times I will still be able to decode the extra noise that's left behind on the code and so I'm going
to do this N times and then I'm going to read out, and I’m going to ask how many times that
gives me the initially encoded state.
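The sketch below mirrors the structure of that numerical experiment on a repetition code rather than the gauge color code: N rounds of noise, noisy readings, and correction based on the noisy readings, so that residual noise from bad readings carries over between rounds, followed by a final readout. The sizes and rates are made up.

import random

def survives(d, p, rounds, rng):
    bits = [0] * d                                          # logical 0 in a repetition code
    for _ in range(rounds):
        bits = [b ^ (rng.random() < p) for b in bits]       # physical bit-flip noise
        readings = [b ^ (rng.random() < p) for b in bits]   # noisy measurement, q = p
        majority = int(sum(readings) > d / 2)
        bits = [b ^ (r != majority)                         # flip bits read as minority;
                for b, r in zip(bits, readings)]            # wrong readings leave residual errors
    return sum(bits) <= d / 2                               # final majority readout succeeds

rng = random.Random(1)
for rounds in (0, 1, 2, 4, 8):
    successes = sum(survives(25, 0.08, rounds, rng) for _ in range(2000))
    print(rounds, successes / 2000)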
This is now the simulation. Here's what we see. So N equals zero: this is what happens if I
didn't run the syndrome estimation at all, I just encoded the information and read it straight
out. So this is effectively the perfect measurement decoder threshold; this is the threshold I
showed you in the earlier threshold plot. And then I run the threshold calculation as a function of
N. So here's the threshold as a function of how many times I run this guy. What we see is
convergence. The threshold, once N is about four, hits this line, and we see what we
call the sustainable error rate. So even though we might appear to be below threshold if I were to look at N
equals one and N equals two, if I ran for longer I'm not going to remain below threshold, so this
is actually the relevant threshold data to collect.
But what we see is that, with the algorithm we run, the noise does equilibrate to something that can also
be decoded. This is numerical evidence that suggests this is the case up to N equals-
>>: This is fixed lattice size obviously. How does that-
>> Benjamin Brown: No, these are thresholds.
>>: [inaudible] sizes?
>> Benjamin Brown: Yeah. So each of these plots is calculated using three different system
sizes. So I'm showing you the crossing point after various N and this is what we see. This is our
result. We find a value of about 0.31 percent, which is about a tenth of what we get with the
toric code.
>>: [inaudible] number of qubits?
>> Benjamin Brown: It’s the threshold value so I'm only talking about thermodynamic
properties at the moment. Overheads are something I’ll talk about in a minute. Here's the
thing, there are a bunch of reasons why this isn't a fair comparison just yet. First of all,
the toric code has received well over a decade of study by now; this has received the three
months that me and Amy spent on it over the Christmas holiday, using the simplest decoder we could
have thought of, so we don’t know what the optimal value is and we can find much better decoders.
Also, even if this were the best threshold, and it’s not the best threshold, but even if it were the best
threshold, it might be worth taking the hit because we don't need to do magic state
distillation, we can just do this gauge fixing trick. What's more, the fact that we can do this
single-shot error correction trick means we can do gauge fixing very, very fast. That's really
nice.
There are other benefits of single-shot error correction that you have to think quite hard about.
So imagine now I want to build a big quantum circuit. There's noise on the quantum circuit,
there's noise on the logical qubits of the quantum circuit, and every time I perform a T gate that
modifies the Pauli noise in a nontrivial way. So every time I perform a T gate, or every small
number of T gates, I need to error correct just to get rid of that new noise I’ve introduced by
adding a T gate. If the error correction procedure requires, as with the toric code, that I
measure for L rounds, this is going to be quite slow, but here with the gauge color code I can do
this in constant time. So there’s this polynomial overhead that is just killed for every single gate
I want to execute.
So what this means is, if I take the full length of a circuit and account for the error
correction I have to perform at the end of every single logical gate, then
I’ve just shrunk the depth of the circuit, because I don't need to perform
nearly as many syndrome measurements to correct this code. And if I shrink the depth of the
circuit, the logical qubits don't need to last as long anymore, so that means I don't need the logical
qubits to have such high distance anymore. So being able to do this single-shot error correction
is going to reduce the physical overhead. So there are a lot of benefits that you have to think a
little bit about.
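A back-of-the-envelope version of that depth argument, with made-up numbers for the distance and the T count:

d = 31                        # hypothetical code distance
t_gates = 10**6               # hypothetical number of T gates in the circuit

rounds_toric = t_gates * d    # roughly d rounds of syndrome measurement after each gate
rounds_single_shot = t_gates  # a constant (here one) round after each gate
print(rounds_toric // rounds_single_shot)   # the roughly d-fold saving in error correction time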
This is one that Hector pointed out: I'm saying that single-shot error correction also gives us
slightly better thresholds than you might expect. So even though we have this complicated 3-D code, by not having to
go to 3-plus-1 dimensions to decode it, we can do it all in this nice 3-D space, and somehow the
thresholds remain comparable to the toric code even though the code is much more complicated. It's
three-dimensional, errors will percolate much more quickly, and still we have reasonable
thresholds. Although the threshold is a bit lower, it's still worth thinking about these codes.
They’re three-dimensional, and that's a bit disappointing. That's okay. Maybe we can build these
with some distributed architecture. So, as I talked about earlier, if we have a bunch of qubits spread
out everywhere and we can link them by fiber optics, well, fiber optics don't really care
whether or not the code is three-dimensional or N-dimensional. So there are ways we could
build this quite easily, or as easily as any other code.
So where do we go from here? So I've argued that this code is certainly worth studying. I'm
not saying it's better than the toric code, but it might be; our results in this direction provide
evidence that supports this. So what we need to do is study the model with more
realistic noise. As I said, we worked with phenomenological noise, and to compare it fairly with
the 2-D toric code we have to take these six-body measurements into account, whereas the toric
code only has four-body measurements, and this will cause the threshold to suffer a little. So
we need better decoding algorithms. We have them, we know what they are, it’s just a case of
sitting down and writing the code and getting some student to do that.
So, as you were saying, we do need to compare the overheads. So we need to go to some regime
where P is quite small, like way below threshold, and then ask how many physical
qubits we actually need to run this circuit. It's a very complicated question for the reasons I was
explaining like the fact that we can shrink this circuit using single-shot error correction. That's
awesome. It’s a 3-D code so you'd lose some qubits again because the distance scales less well.
And the question I'm really interested in is what is single-shot error correction? We have
examples of it, we know it's related to self-correction. So the gauge color code is an example of
a code that performs single-shot error correction, and so is the 4-D toric code that I talked about,
but I'd be keen to see where it's impossible somehow. I think I can nearly show that the 2-D toric
code definitely can't support single-shot error correction. That sounds like a trivial statement,
but there are technical details. I would like to understand single-shot error correction. I want to
know what fundamentally promises single-shot error correction, and then we can, in principle,
find the simplest code that can implement this, and this could be very useful for reducing
overheads of quantum architectures. With that, I'm done. I didn’t put a thank you slide in but
thanks for your attention.
>>: Thanks a lot. We've had a lot of great questions here during this. We've got five minutes
left. I’d like to open the floor for questions.
>>: So your error model, I forget, were you pretty much just considering bit flip errors?
>> Benjamin Brown: That's right.
>>: So maybe just a question slash comment. I know you’re kind of just getting started
in this direction, but with bit flip errors you have some preferred direction, and now you have three
dimensions. So do you expect that realistic error models will be bit flips, or would depolarizing errors
be more, well, they’re not going to favor any particular direction. Have you considered
depolarizing noise?
>> Benjamin Brown: Yeah, we can deal with this. As I said, I wasn't explicit about this
in my talk, and I realize now at the end that I wasn't, but we have Pauli X and Pauli Z stabilizers.
So the way we deal with this, if we were to study depolarizing noise, is we’d measure all the X-type
face operators, and the X faces deal with all the Z-type errors; we’d regard Y errors as
X times Z errors, so we’d correct for those at that point, well, the Z part of those errors
at that point, and then we’d measure the other flavor. I can't remember if I said Z or X, but
we measure the X stabilizers first and then the Z stabilizers second, and then we just repeat
and repeat and repeat.
Maybe if you're really smart, and this is something that we thought to try with the 2-D color code
actually, you could in principle measure X faces, Z faces and Y faces, and you kind of
symmetrize over all the bit flip and phase flip errors there are. If you had more knowledge of your
noise model, so maybe dephasing occurs much more rapidly than bit flip errors, then maybe
you would measure for phase flip errors a few times and then deal with the bit flips a bit later on
and make it a bit anisotropic. We know how to deal with this.
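A sketch of the measurement schedule being described, including the anisotropic variant where the faces that detect the dominant error type are measured more often; this is pure bookkeeping, with no decoding attached, and the round counts are arbitrary.

def schedule(rounds, x_rounds_per_z=1):
    # X-type faces detect Z (phase) errors; Z-type faces detect X (bit flip) errors.
    steps = []
    for _ in range(rounds):
        steps += ["measure X-type faces"] * x_rounds_per_z
        steps += ["measure Z-type faces"]
    return steps

print(schedule(rounds=2))                      # alternating X / Z measurements
print(schedule(rounds=2, x_rounds_per_z=3))    # biased schedule if dephasing dominates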
>>: [inaudible] simulations on that or is that future work?
>> Benjamin Brown: That's a headache. I haven't simulated that, but bit flips are interesting. I
would rather write a better decoder before going on to other types of noise. This, in
principle, shows that I can deal with all the types of noise that you're proposing.
>>: Any other questions? Well, if not, let’s thank our speaker.