>> Krysta Svore: Today we have the privilege of having Jonas Anderson here from University
of New Mexico. He's finishing his Ph.D. up this year under the guidance of Andrew Landahl, and
he's going to talk to us today about homological stabilizer codes.
>> Jonas Anderson: Can you guys hear me? This is on?
>>: Just for recording purposes, here you've got to use your big [inaudible].
>> Jonas Anderson: All right. Sounds good. So I'm going to talk about homological stabilizer
codes, but I'm going to introduce just quantum codes in general first and see if -- if this sounds
scary, hopefully we'll introduce them in a nice way. Although, maybe for some of you, you're
ready to jump right in I'm sure. But, anyway, bear with me.
So this is work I did with my advisor who's at Sandia, and I'm at University of New Mexico in
the CQuIC group. So just real briefly here, so we've heard about this promise of quantum
computers for a long time. And we've -- it always seems to be just over the horizon.
And if you ask different people, they'll probably give you a combination of these three reasons
why we don't have them yet: the states are delicate; they decohere, so we have to worry about
noise; a lot of the physical systems that we set up are constrained geometrically, so a lot in one
dimension, some in two; and then there's just the general idea of control, which could be
anything from trying to pack more lasers into an experimental setup, to actually getting a
universal gate set and a more abstract type of model.
So I'm going to talk about codes that do a very good job with these two here. And additionally
there are ways to get universal gate sets with them, but I'm going to mainly focus on these two
ideas here.
And so this takes care of two out of the three, so that's a good thing. So here's how I like to
think of error correction. We've got some set of qubits here. This is a theorist's description of an
optical lattice with a theorist's description of a bath. And we have a bunch of qubits here in a
nice ordered state. They could be general quantum systems. But we have them here.
So noise comes in and they -- it interacts with the bath. And we get -- it gets disordered. And so
I've illustrated that with these qubits, this one as well, going to a different state.
And so the idea -- and so what we want to do is apply some set of parity checks and say this is -- just to get a notation straight for the rest of the talk, this is how we're going to do a parity check.
And these are going to be our physical qubits, and then we're going to have some ancillary
system, which I haven't drawn here. And we're going to kind of -- we're going to make a parity
check measurement, which is going to consist of some CNOTs and measurements here.
And what that will do is kind of digitize the errors here. And so what I've done is I've put
them -- instead of having a random unitary applied, they kind of have a discrete set of unitaries
applied. And so then error correction has a chance when we've kind of discretized the errors
here.
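As a minimal sketch of that digitizing step (my own illustration, with the four-body Z-type check reduced to its classical content -- for bit-flip errors the measured ancilla just reports the parity of the flipped qubits):

    import random

    def z_parity_check(flips):
        # Outcome of a Z-type parity check over four data qubits:
        # 0 (even parity) if an even number were flipped, 1 (odd) otherwise.
        return sum(flips) % 2

    # Put down X (bit-flip) errors independently with probability p.
    p = 0.1
    flips = [1 if random.random() < p else 0 for _ in range(4)]
    # Measuring the ancilla projects whatever continuous noise occurred
    # onto a definite parity -- this is the discretization of the errors.
    print(flips, "-> syndrome", z_parity_check(flips))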
So this is where I'm going to put the ancillary system, and I'm going to say -- so in just this
picture green is kind of good, an even parity check, and red is an odd parity check. So based on
this information, which I'll refer to as the syndrome, we need to diagnose the most likely errors
that occurred in the system. And if we're -- and if we're lucky, it's like magic, and we've
corrected things to this nice state.
But we didn't really get something for free. In physics, we're always worried you can't get
something for free. So, you know, where did the disorder go, and the answer is it went into the
ancillary system. So we can't just keep repeating this. We have to reset the ancillas, and then we
can repeat the process again.
So the idea of error correction is we offload some of the entropy to this ancillary system, we
apply the most likely errors, clean it up, and then we're good to go. All right. So --
>>: [inaudible].
>> Jonas Anderson: Feel free to ask questions at any time. So at this level I wasn't going to
discuss it yet. Later I will.
So this psi and this particular -- this is a four-body parity check. So this is the ancilla system and
these are kind of the physical qubits. So this is a four-body parity check and doesn't necessarily
apply to this particular lattice.
So the idea is these are -- you've got some quantum information down in these physical qubits or
data qubits, and so the psi just represents where you're storing your information. And then
this is the ancillary system.
>>: He was asking --
>> Jonas Anderson: Oh.
>>: -- the [inaudible] through that you perturb the system, that's actually [inaudible].
>> Jonas Anderson: So the -- so yeah. You have --
>>: [inaudible] or is it -- is it a noisy state coming in [inaudible] state coming out?
>> Jonas Anderson: So it's a noisy state in terms of there was some local error applied to each
one, or there's a probability for a local error being applied to each qubit. And then coming out on
the psi prime, since what this will do, is this measurement will feed back into the quantum
system and kind of project it into a subspace of the -- so I'll talk about it in a little bit, but there's
a large space where this quantum system lives, and each of these measurements sort of projected
into a smaller space which in the end is the codespace if we've done everything correctly. And
so this is kind of the -- where we've digitized the errors, as I've illustrated here.
So we've projected them back toward the -- to either the codespace or something orthogonal to
the codespace, and that's how we're able to actually correct for the errors.
Yeah. Feel free to ask questions at any time. So that was the -- that was the cartoons. Now
we're going to talk about stabilizer codes in the -- with a little bit more formalism here. So we've
got these -- we've got the Pauli group, which I think is probably every physicist's favorite group.
And we've got our four Pauli matrices here, I, Z, X, and Y, which I've written here, I, Z, X, and
Y. And so we can write them out as the elements of our Pauli group on a single qubit, but we
can write them more succinctly by the generators here.
And so -- and that's going to be the -- I'm going to be using the generators mostly in this talk.
And that just means by multiplying these together, I can get all the elements of the group. So
just because it's nice to write two things instead of four, and generally it will be very nice.
Another nice feature of the Pauli group is that any two elements either commute, that's this, or
anti-commute. So that's a nice feature that we're going to take advantage of for error correction.
So we can extend the Pauli group to N qubits, and then we just have a -- we have these two
generators for each qubit. So I should explain what that means. So what that means is we have
this -- this would be like the single Pauli group here, so we have X and Z, which anti-commute,
and then identities everywhere else.
And this is just the tensor product. If you're not familiar with it in quantum mechanics, when we
combine systems together, we combine them with the tensor product. A lot of times in the talk
I'm going to drop the tensor product. But just remember it's explicit -- or implicit, rather.
So we're going to -- so this is the way we could write them out, or we could just write them as the
generators. And I really like writing them this way because these two anti-commute, but they
commute with all other pairs. And so that will also be useful.
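A quick check of these commutation facts, as a sketch with numpy (the matrices are standard; nothing here is specific to the talk):

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    Y = 1j * X @ Z  # Y is generated by X and Z, up to phase

    def commute(A, B):
        # True if A and B commute, False if they anticommute.
        return np.allclose(A @ B, B @ A)

    print(commute(X, Z))                          # False: the pair anticommutes
    # On two qubits, X on qubit 1 and Z on qubit 2 commute:
    print(commute(np.kron(X, I), np.kron(I, Z)))  # True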
So why have we introduced the Pauli group? Well, it's because I want to talk about the
stabilizer. And we're going to be talking about the stabilizer codes, which are far and away the
most popular quantum codes. They're not the only quantum codes, but they're -- they're -- I think
they're the nicest. And they're certainly the most studied.
Okay. So we define the stabilizer group based on some states that are stabilized by a set of Pauli
operators. And so if this psi is on N qubits, these operators will be from the Pauli group on N
qubits. And we specify them by -- or we act upon this quantum state psi and we return that state
with a plus 1 eigenvalue.
So it turns out that this is a subgroup of the Pauli group, and it's actually that we're going to
enforce that it's an abelian subgroup, so all the elements of the stabilizer group have to commute.
So here's a state. It's actually the GHZ state, but that doesn't matter. We can -- and we can think
about writing this in terms of the typical way of writing quantum state. We can write it by its
stabilizers here. And so these are the stabilizer generators of the stabilizer group here. And so
we can check really quick that if we act upon the state with any one of these three that it does
indeed return that state.
X basically takes 0 to 1 and vice versa, while Z puts a phase on the 1. So all of these go to 1, all
of these go to 0, and each of the Zs puts a negative one there twice, so they return the state.
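One way to verify this concretely, sketched with a numpy statevector (the generators XXX, ZZI, IZZ are a standard choice consistent with the description; the code itself is my own illustration):

    import numpy as np
    from functools import reduce

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def op(*paulis):
        # Tensor product of single-qubit operators.
        return reduce(np.kron, paulis)

    # The GHZ state (|000> + |111>)/sqrt(2) as an 8-dimensional vector.
    psi = np.zeros(8, dtype=complex)
    psi[0] = psi[7] = 1 / np.sqrt(2)

    # Acting with any generator returns the state with eigenvalue +1.
    for g in [op(X, X, X), op(Z, Z, I), op(I, Z, Z)]:
        print(np.allclose(g @ psi, psi))  # True, True, True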
I don't know if all of this is too trivial or if it's something you haven't seen because you're used to
thinking about these in a different way, but feel free to ask questions.
>>: [inaudible] with respect to a single state?
>> Jonas Anderson: So that's a great question. On the next slide, I'll show you can relax it. But
so this is a stabilizer state. And so I noticed that I've got a three qubit state and I've specified
three qubit generators. So for an N qubit stabilizer state, I can always specify it with N
generators. But if I choose to specify less than that, then I -- instead of stabilizing a single state,
I stabilize a subspace, and that's where we'll encode information in just a little --
>>: In subgroup you can define a stabilized state [inaudible].
>> Jonas Anderson: So --
>>: [inaudible].
>> Jonas Anderson: So we start with the Pauli group. And actually there's only -- only very few
states can be specified this way. And that's kind of a good and a bad thing. It's good because we
can actually kind of simulate it and it's got these great properties for error correction.
But it's good that all states can't be done that way or we would be able to simulate all of quantum
mechanics in this nice way.
>>: [inaudible].
>> Jonas Anderson: Yes. So there's actually a tensor product here and actually there's kind of a
tensor product hidden in these as well. So yeah.
>>: Just to make sure I understand. So the idea here is that essentially you have a certain state,
you have [inaudible] subspace and you essentially have an operator now that reenforces that
subspace via [inaudible] eigenvalues because you want to -- you want to take away the noise
that's been accumulating back towards that happy [inaudible].
>> Jonas Anderson: Yeah. So what -- so what will end up pushing it back will be the
corrections that we apply. What measuring these will do will either project it -- so you can
imagine you've got your Hilbert space broken up into -- or you just got some, I don't know, space
broken up into regions. And what measuring these will do will kind of force, I don't know, every
member of the class in that to kind of go to kind of let's say the center of that region. And then
from there you can kind of correct back to the codespace.
That's how I think about it. Maybe there are other ways to think about it.
>>: So after this projection step there's still -- there's still a correction that you need to
[inaudible].
>> Jonas Anderson: So I haven't quite got into the codes yet. We're actually just stabilizing a
state here. But yeah. So when we -- let me just go to the next slide, and if you still have a
question, ask me.
Okay. So now stabilizer codes. So what we do is we take some number of logical qubits and we
encode those into a larger space of physical qubits. And then we have similarly to classical
codes, we have a notion of distance.
And for those who are familiar with classical codes, we've basically just added one square
bracket here, because that's what we like to do. And to make things quantum is just add some
decoration to our notation.
So what the distance is, though -- analogous to the classical distance -- is we have some set of
code words that are just ones and zeros, and the distance is the number of bit flips to go from
one code word to another, or the Hamming distance.
So now we'll have a set of code words which are kind of defined by these stabilizers, and the
distance will be the number of single-qubit Pauli operators to take us from one code word to
another. So those could be of type X, Y, or Z.
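For the classical side of this analogy, a tiny sketch (the three-bit repetition code here is my stand-in example, not one from the talk):

    def hamming(a, b):
        # Number of bit positions where two codewords differ.
        return sum(x != y for x, y in zip(a, b))

    # Three-bit repetition code: codewords 000 and 111, distance 3,
    # so classically it is a [3, 1, 3] code.
    print(hamming("000", "111"))  # 3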
Okay. So now we're going to specify a codespace. And so, again, we write the -- we have some
set of stabilizer generators that act like this. They commute, as I mentioned before, but now
we're going to stabilize a space. And so we're going to -- if this is an N qubit -- if we have N
qubit states here, we have less than N stabilizer generators, which are going to be the parity
checks.
Does that answer your question? Okay.
So this is -- this is the way I like to think about these groups. I didn't come up with this way of
thinking about them, but I'm definitely advocating it. It's not as popular as it should be, I feel.
Although some elements are.
So we have our Pauli group on N qubits. And I just mentioned the stabilizer group, which is a
subgroup. So for -- for this state here, which is the example I was trying to explain before, now
we've dropped one of the stabilizers, and so now we -- our logical information lives in the span
of these.
So for each stabilizer generator, we can specify a pure error generator. So we have this
additional group called the pure error group. And we're going to partner each generator from the
stabilizer group with one generator from the pure error group. So notice how these anticommute,
but they commute with all the other elements from both the stabilizer, the pure error and the
logical group.
And similarly for the other stabilizer. And so if we want to think physically about these, these
are things that we're going to check in the code. So these are our parity checks. And what these
would be -- this would correspond to an error that would only kind of be detected by this check.
So that's kind of a physical intuition for what a pure error is. It's an error that's only detected at
one syndrome.
And then the logical group kind of rounds out the group. And this is a nonabelian group. These
two elements don't commute. They anticommute, in fact. And, too, this is how we can break
this Pauli group on N qubits which I wrote before in kind of the canonical form with kind of
pairs of Xs and Zs in a way that's kind of similar to that, although we've written it more in code
terms. Yeah.
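To make this splitting concrete, here is one consistent choice of generators for the three-qubit example above (the pure errors and logicals are my own picks satisfying the stated commutation rules, not necessarily the ones on the slide):

    def commutes(p, q):
        # Two Pauli strings commute iff they differ (both non-identity
        # and unequal) on an even number of positions.
        anti = sum(1 for a, b in zip(p, q)
                   if a != 'I' and b != 'I' and a != b)
        return anti % 2 == 0

    S = ['ZZI', 'IZZ']  # stabilizer generators (the parity checks)
    E = ['IXX', 'IIX']  # pure errors, one partnered with each check
    L = ['XXX', 'ZII']  # logical X and logical Z

    # Each pure error anticommutes with its partner check only:
    for e in E:
        print([not commutes(e, s) for s in S])  # [True, False], [False, True]
    # ...and commutes with the logical operators:
    print(all(commutes(e, l) for e in E for l in L))  # True
    # The logical pair anticommutes with each other, rounding out the group:
    print(not commutes(L[0], L[1]))  # True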
>>: Is there a particular reason why you don't look at [inaudible]?
>> Jonas Anderson: So in the -- so I'm just coming up with a -- so I'm not -- so I think I -- yeah.
So this is -- I'm calling this the pure error group, but it's not going to be my error model.
>>: So why [inaudible]?
>> Jonas Anderson: So the pure error group, which is probably the one that is, I don't know -- people don't use as much. And that's -- so I need -- if I picked a single term, it would either -- it
would not -- I wouldn't be able to make this commute with everything other than this generator.
And so I've just -- I've picked them in a way that -- yeah. So these are paired together, but they
commute with everything else.
>>: So another way to say that is that [inaudible] that is [inaudible].
[multiple people speaking at once].
>>: It actually works.
>> Jonas Anderson: Yeah. And it's just another way of expressing the -- yeah, kind of the
canonical group where on N qubits we had these pairs -- so we had 2N Pauli group generators --
and now we still have 2N generators, but they're kind of split into these different subgroups that have a nice
meaning in error correction.
So, again, these are parity checks, these are kind of -- these are the errors that just excite a single
parity check, and these are the logical operators.
So really quickly, what a logical operator does is if we have a single unencoded qubit, we can act
on that with kind of X operators or Z operators to kind of flip that or add a phase.
But when we encode this information, we also want to be able to act on our logical -- our logical
information. So if we have this state, we can actually -- so this has an interpretation as a logical
X. So applying this to this state here will actually act on this logical qubit as X would act on a
physical qubit. And similarly for the Z. So they also have that nice interpretation too.
All right? So that was the stabilizer codes introduction, and now I'm going to talk about classes
of code called the toric code and the color codes, which are kind of known codes that accomplish
this, that are stabilizer codes so they can correct for errors and they're local in 2D.
So here's the toric code as it's typically written. We have -- we have qubits on the edges here.
And we have -- here would be a stabilizer generator or a parity check. They're four body. This
is a Z type on this face. And this is an X type on this vertex.
So these are the checks. And if we think about -- it's a toric code, so we want to think about it on
a torus where we have periodic boundary conditions here and here. And the logical operators are
then the things that loop around the torus in a nontrivial way.
So this is connected to here, so this loops around that way. And this loops around, but it's on the
dual lattice, actually. So I've drawn them here.
So errors have a nice interpretation. And as physicists, we always like to think of things as
particles, so that's where our intuition comes from. And so we imagine having a single error
here, which would be an X error, and it creates -- it's detected on two Z checks, which are on this
face and this face.
And so I've drawn these circles here to kind of indicate parity checks that give an odd value. So
these are kind of the syndrome values that are nontrivial. And similarly we can have an error
here that's detected right here.
And so, again, I've drawn this squiggly line to kind of make it seem like a particle that is kind of
separated. And in fact they have the nice interpretation as anyons. But here's a single error. And
then we can imagine another error occurring, so the error chain grows, and another error here,
and it grows to here.
So we have -- now our job in decoding is to match these up. Because we don't actually get to see
the squiggly line in the real world.
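As a sketch of what the decoder actually sees, here is the syndrome computation for a short X-error chain on an L-by-L torus (the coordinate conventions are mine; only X errors and Z checks are tracked):

    L = 8  # linear size of the torus; the qubits sit on its 2*L*L edges

    def plaquette(i, j):
        # Edges on the boundary of the face with lower-left corner (i, j).
        return {('h', i, j), ('h', i, (j + 1) % L),
                ('v', i, j), ('v', (i + 1) % L, j)}

    # An X-error chain crossing three faces (a path on the dual lattice).
    errors = {('v', 1, 0), ('v', 2, 0), ('v', 3, 0)}

    # A Z check fires (odd parity) where it overlaps the chain an odd
    # number of times, which happens only at the chain's two endpoints:
    syndrome = [(i, j) for i in range(L) for j in range(L)
                if len(plaquette(i, j) & errors) % 2 == 1]
    print(syndrome)  # [(0, 0), (3, 0)]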
And so this is what actually occurred. And so if we were to apply that, we would be great. We
would have corrected that set of errors. But it turns out that actually if we also apply this error
chain, notice it's of the same weight, this is weight 4, and this is weight 4, if we applied this, the
combination of the two, since it can be expressed as stabilizer generators, turns out to also be
allowed.
So we have this nice degeneracy in our error correction. So as long as we don't apply something
like this, and in this case this is actually of the same weight. So, you know, we could easily
apply this.
As long as we don't apply something like this, because the combination of these two gives us the
nontrivial loop that wraps around the torus, as long as we don't do something like that and
accidentally apply a logical operator, our information is safe.
And so that's the -- that's the nice thing about the toric codes is you just expand to larger lattices
and you get this kind of natural protection against this local noise.
So that's a nice feature. Although, a more typical error pattern is going to look something like
this. And maybe some of you would say that's not too bad, I can just look at the squiggly lines
and correct that. But really this is -- we're going to see something like this.
And so then what's typically done is that you feed this into a minimum-weight perfect
matching algorithm. Which we physicists would call a minimum energy decoder. And so
that's kind of matching these things up such that the squiggly lines are the shortest, which is kind
of the minimum amount of -- that's the minimum energy because that's the minimum errors that
had to occur to create that error pattern. Yeah.
>>: Yeah. So minimum-weight matching on a planar graph is easier than on an arbitrary graph?
>> Jonas Anderson: It is. And it's actually -- I think it's very easy, right, on a planar graph. But
it's still a tractable algorithm. But the -- but there's a bit of a cheat here, and that's that -- well,
it's -- I don't know what they call it. It's -- this graph is misleading. And now we're actually not
doing matching on this graph.
So we take these -- all of these syndromes or all of these kind of nontrivial syndromes, and we
create a new graph where those are each the vertices, and we calculate the distance to each
nearest -- kind of the distance on this graph will give -- that new graph will be a weighted graph,
where the distance between neighbors will correspond to a weight in that graph.
And then you do actually perfect matching on that graph, which is a complete graph. So it's not
a planar graph. But you're right. Yeah. And there are lots of tricks to get the runtime down a
lot. I know Austin Fowler and maybe you've worked on it, Clare, you've worked on some of that
as well, I'm not --
>>: [inaudible].
>> Jonas Anderson: So, yeah. There's lots of tricks to get the complexity down. In fact, I think
even threaded down to order 1 in some of these cases. So -- yeah.
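A sketch of that derived-graph step using networkx's blossom matching (the syndrome positions and the toy metric are mine; negating the weights turns maximum-weight matching into the minimum-weight one):

    import itertools
    import networkx as nx

    # Nontrivial syndromes as (row, col) positions on the lattice.
    syndromes = [(0, 0), (0, 3), (5, 1), (5, 2)]

    def lattice_distance(a, b):
        # Stand-in for the shortest path length on the code graph;
        # a real decoder would also respect the periodic boundaries.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # Complete graph over the syndromes, with negated path lengths.
    G = nx.Graph()
    for a, b in itertools.combinations(syndromes, 2):
        G.add_edge(a, b, weight=-lattice_distance(a, b))

    pairs = nx.max_weight_matching(G, maxcardinality=True)
    print(pairs)  # pairs up (0,0)-(0,3) and (5,1)-(5,2)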
>>: I apologize. I'm a nonquantum person, so a lot of this stuff is new.
>> Jonas Anderson: No worries.
>>: So I want to understand what's going on here. So are these the kind of errors that are
coming out of those parity checking bits, or is this --
>> Jonas Anderson: I think I maybe skipped -- I was trying to cut the talk down and keep the
important things in as an introduction. I didn't talk about the error model. And so what the error
model is on each -- are we here -- on each qubit we have some probability for an X type and a
Z type error. And I've actually set those to be the same probability P.
And so we're putting those down at random, although I actually just drew this, so it's actually
random or pseudorandom. So we've just put some errors down here and just looked at what the
syndrome would be.
>>: [inaudible] from the syndrome check you figured out these are where errors are. Now what
do you do?
>>: [inaudible].
>>: There is a syndrome --
>> Jonas Anderson: We can't see the errors directly.
>>: Right. I understand. And what's the difference between the red and the green?
>> Jonas Anderson: Oh. I -- so see how the red are detected on these vertex checks, which are
four-body X operators, and the green are detected on these placketts [phonetic], which are
four-body Z. It's a little bit confusing. Actually the green are X errors because the Z parity
checks detect X, yeah, and vice versa. But yeah. I should have talked about the error model.
>>: Another question from a nonquantum person. So you make the assumption. So if I have a
qubit, which is in some [inaudible] position, the noise will affect what -- will have the same
effect on all possible -- right? So this is the assumption that's made.
>> Jonas Anderson: It's a local noise assumption. So it's -- yeah, so the assumption is that
there's some -- some local -- some probability for an error at each qubit, even though that qubit
can be, well, fairly entangled with the other qubits --
>>: But even for a single qubit, so the noise is uncorrelated with the state of this qubit.
>>: I think the answer to your question is the quantum error channels are usually assumed to be
symmetric. So the probability of a 0 turning into a 1 is the same as the probability of 1 turning
into a 0 [inaudible]. Christo [phonetic] would be actually probably the person who would most
likely know [inaudible] other kinds of parities.
>>: But is it kind of physically supported [inaudible].
>> Jonas Anderson: So it's maybe -- it's not necessarily the most physical assumption, but it's a
very -- it's a very general assumption where you -- as long as kind of you have local noise, you
can express it this way maybe with some -- so, you know, maybe it's not going to be the same,
but kind of the -- if the transition from 0 to 1 is higher than the opposite, then at least you would
have kind of a bound on that for whichever way is the most probable, you could just say, well,
the errors are like that. So you can map it. You can map it to this as long as your error model is
local.
>>: [inaudible] symmetric would be using energy states of an ion as your qubit, right? Decay is
whole lot more probable than spontaneously going up [inaudible].
>> Jonas Anderson: That's absolutely true. And this does not take advantage of things like that.
>>: I don't know anybody who has looked at that in error correction.
>> Jonas Anderson: Yeah. I don't know -- I don't --
>>: Most people would quickly avoid [inaudible].
>> Jonas Anderson: We look at a much less maybe physically motivated asymmetry, where we
have instead of X and Z type errors occurring with the same probability, we allow those to vary.
But is that a physical assumption? I don't know.
>>: So you guys aren't even doing that?
>> Jonas Anderson: So that particular asymmetry? Yeah, and, I mean, basically I've verified
results that I've seen from others. So I haven't done much myself because most of my work has
been on the color codes, where they're a little less -- it's a little less obvious because the X and Z
checks are so similar of how to take advantage of this, that type of noise.
But I think that's a great question. I mean, more physically motivated noise models are I think
where this stuff should be going. Absolutely.
>>: Will the X and zed [inaudible] atomic systems. Practically [inaudible].
>> Jonas Anderson: From a computation standpoint, yeah, once you start varying the X and Z in
a code, at some point you're probably going to apply the Hadamard in your computation which
inverts those two anyway. And so I feel like maybe it's just the raw memory that you could take
advantage of that asymmetry.
But, yeah, I think generally in computation that would fail. But a model, like you were saying, I
think that's -- I mean, I think that's something that people should look at if they haven't already.
So the -- so actually I can talk about these error models. And so this is the X and Z type errors,
which are independent local errors, and they occur with the same probability. And so actually
it's kind of their -- they're kind of two separate models. We could basically look at one decoding
algorithm for the red and another for the green. And they get 11 percent in this paper. And this
is what I'm going to call the code capacity. It's not really a threshold because the syndromes
themselves are assumed perfect. The measurement is absolutely perfect. Oh.
>>: And this 11 percent stands for?
>> Jonas Anderson: Okay. So it's the code capacity. And what that means is that if the
independent errors here on each qubit occur with probability 11 percent or less, you can
successfully correct.
>>: But does it need to end with a number? So we have N K and so you had the logical -- obviously if you're limited by the amount of logical qubits, you would be -- so what is kind of
[inaudible].
>> Jonas Anderson: So the toric codes are a family of codes that we kind of put on larger and
larger lattices. But the number of logical qubits is fixed for the lattice. So if you put them on the
torus, you have two logical qubits. And so the K is always 2. And you have roughly -- let's see.
I mean, it's a square lattice. So if the length here is L, you have roughly L-squared qubits,
order L-squared qubits.
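Written out, the family parameters are exact (a standard count, not on the slide: qubits on the horizontal plus vertical edges of an L-by-L torus, and the minimum nontrivial loop has length L):

    $[[n, k, d]] = [[\,2L^2,\ 2,\ L\,]], \qquad k/n = 1/L^2 \to 0 \text{ as } L \to \infty.$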
So really recently in the past actually couple weeks, they looked at the depolarizing noise, which
is, again, a very symmetric noise channel on X, Y, and Z. And then you can no longer kind of
separate these two. But you can take advantage of the correlations because Y errors actually
look like a combination of X and Z errors.
And so what they were able to do is show with that error model they could get up to 18 percent,
which is really high. Again, it's in this model that -- it's debatable how physical it is, but it's a
nice value that they got.
>>: There's layers on the figure, actually.
>> Jonas Anderson: So, yeah, some of these you could imagine a Y error creating these two, and
then a Y, and then with additional maybe -- I can't remember the colors, but Xs coming out here.
Yeah. Although the -- they occurred from an independent X and Z in this picture. But yeah.
Absolutely right.
There's a threshold, and because I'm not particularly good at drawing three-dimensional pictures,
I took this from a paper by Fowler, and so now we imagine having -- so we have our lattice here
at kind of some step in time, and we also have syndrome measurements which we can't trust.
And so we kind of repeat -- we repeat the code in time here. And so these are the kind of
nontrivial syndromes but in time as well.
And so by doing that we can diagnose syndrome errors, and so we don't trust that syndrome at
that particular time and so we don't apply a correction and also diagnose local errors. And in a
model where we assume that this whole parity check is a black box and works with some
probability 1 minus P, it turns out that we can go up to -- we can get a 3 percent threshold. So
we've dealt with syndrome errors without too much loss actually in threshold.
When we take into account a full circuit model here, where this is the same circuit again and this
is the kind of the physical qubits, which now I can show you on the lattice they would actually
be -- I think these four would maybe correspond to these four or something, and the ancilla
system would be like in the center, and so we run a lot of CNOTs. And so this kind of correlates
our ancilla with the system. And then we make a measurement, which I said kind of digitizes the
errors.
And in this circuit model we allow with some probability the preparation of the state to be bad.
We allow the measurement to be faulty. And these two-qubit gates could propagate errors as
well as create two-qubit correlated errors.
And so in that model the best thresholds are around 1 percent, which is actually really good.
These are not quite as rigorous as some analyses of the thresholds. These are done with Monte
Carlo simulations on error patterns, while more rigorous derivations of thresholds have actually assumed more adversarial models and [inaudible] bound them. So we don't do that.
>>: [inaudible] 1 percent there mean that you can tolerate only up to 1 percent error in that
measurement, like the [inaudible]?
>> Jonas Anderson: So, again, in these models, they kind of lump everything together. So
there's actually probability P for this preparation to fail, probability P for the CNOTs to fail. And
they can fail in different ways because they're two-body. And also probability for the
measurement to fail. And any time qubits are waiting around idling, there's a probability for an
idle error as well.
And so with all of these probability errors being equal, as well as the properties for single qubit
errors that we talked about, you get this value of 1 percent.
>>: But what I'm asking --
>> Jonas Anderson: Oh, sorry.
>>: -- is that one referred to as a maximum you can tolerate and still [inaudible] what you're going
to show, or is it that's a typical number of [inaudible]?
>> Jonas Anderson: So that is the -- yeah, that's the best. When you go to large lattices and
you -- I don't know. And in this error model, that's -- 1 percent is the highest you can tolerate.
Yes.
>>: Well, that's theoretical [inaudible].
>>: You'll never get there.
>> Jonas Anderson: Never get there.
>>: [inaudible].
>> Jonas Anderson: Absolutely.
>>: That's the absolute most that [inaudible].
>>: Yeah. Normally we would [inaudible].
>>: [inaudible].
>> Jonas Anderson: Yeah. If you were -- if you had this model, you would have a computer
that you could only do error correction with. All that you would get done is you would just
correct in time for new errors to come.
>>: [inaudible] there and correct itself [inaudible].
>> Jonas Anderson: Yeah.
>>: It would also have to be infinitely large.
>> Jonas Anderson: It would also have to be infinitely large. And it would apply the identity
really well. And that's it.
So --
>>: [inaudible] so where does it -- I guess going back to the theoretical question, what is the
assumption now on what the error rate would be of the physical point of the machine
[inaudible]?
>> Jonas Anderson: Oh. In terms of the projections or what's -- kind of what's out there?
>>: Yeah.
>> Jonas Anderson: So I'm not an expert in the real systems so much, but there are systems like
ion traps and things like that where they can get really good single gates. The single gate fidelity
is really high. They can protect kind of single qubits and sometimes even apply two qubit gates,
but with fidelities or -- well, with, you know, rates that are better than this actually.
But in terms of scaling them and doing distant interactions and things like that, that typically is
not as good. And there's other models where they're kind of more scalable.
>>: But can we say -- [inaudible] target [inaudible] level off for tolerance, we can say
[inaudible].
>> Jonas Anderson: What Clare said is --
[multiple people speaking at once].
>> Jonas Anderson: And there's not a single technology that's there. But I think that if the
projections are true, that there are technologies that will have all the different features that we
want at some time, maybe at --
>>: [inaudible] horizon?
>> Jonas Anderson: Yeah.
>>: [inaudible] you have to forgive me for my ignorance. But there are the people who try to
build the computer themselves [inaudible] there are people like yourself who try to do the layer
on top of that. And I'm asking where do you meet. So either kind of a target agreed upon that
says, okay, this is [inaudible] --
>>: The engineers, they talk to each other, yes.
>>: And this is where the -- so what would be the number?
>> Jonas Anderson: So, I mean, I think if you're an experimentalist and you want to
implement this type of error correcting code, it's not at all -- even though these
thresholds are great, it's not at all obvious that when you're operating well below the threshold
that these codes are exactly what you want. Although I don't know of much better codes. I just
don't know. They may exist.
I would say, for an experimentalist, they want to try to make this box right here very, very
good. And once they could do that, there's some caveats with preparing the state because that
involves some other stuff and then actually applying the corrections and things.
But other than that, I think making this black box here work such that it fails, yeah, a few orders
of magnitude lower than 3 percent is a good place to meet. And this is -- it seems reasonable,
just two qubit gates, but it's nontrivial with the current technologies.
>>: [inaudible] has now started saying his technology [inaudible].
>> Jonas Anderson: Even for two qubits?
>>: This is what he says. But he told [inaudible] very gung ho that some people might be a little
scared of.
>> Jonas Anderson: That's great. That's true. Actually, the green isn't showing up too well here.
>>: He's British.
>> Jonas Anderson: Okay. So we're going to talk about one other nice thing about these codes.
And that's that we can introduce these defects. And so one reason why I like to think about the
pure error and the stabilizer group is that here's a -- here's a stabilizer, a stabilizer generator, and
here's the corresponding pure error that anticommutes with this particular stabilizer generator.
Although, it also anticommutes with this other one. Just take my word for it that there's one -- on
the torus there's one redundant -- there's one redundant stabilizer, so it's actually not a stabilizer
generator.
So indeed what I said, I wasn't pulling a fast one on anyone. This actually only anticommutes
with this particular element. And we can do something similar with the plaquettes.
And so that's -- by introducing a defect, it's actually very simple. We just simply stop measuring
that particular check. We stop enforcing it. And so if I stop enforcing it, we transfer these to the
logical group. We've kind of grown our space of logical information by ceasing to measure
those. And so we put a stabilizer with its anticommuting partner in the logical group, and we get
these -- we get these defects over here.
>>: I don't quite know what that means, because E and S are generators really, right?
>> Jonas Anderson: They are generators.
>>: So when you put those generators in the logical group, you add the subgenerator [inaudible]
I guess to that [inaudible].
>> Jonas Anderson: It's true. But they commuted with every other element in all the other
groups. So I've taken them out of this -- so I've basically grown the logical subgroup by two
generators and shrunk each of these by one. So they're still subgroups, they've just changed in
size. I hope that makes sense.
So I'm going to have to abstract the lattice here, maybe because I can't draw it very well, but
we've got these defects here. And this is a defect of one type, this is the defect of the other type.
So on this page we've got these are Z type defects with the plaquettes there, and these are X type
defects with the diamonds.
And so I'm going to hopefully convince you that this applies the CNOT gate. So this is what the
CNOT should do with stabilizers, is it takes the -- the X just goes through the target, and so this
is going to be our -- this is going to be a target qubit and this is going to be our control.
And so we follow these stabilizers through. This is the X logical operator or the X stabilizer for
this defect pair. This is the Z. And similarly this is the Z up here and this is the X here.
So I'm not the best at drawing, so this is my interpretation of the braid. So we can actually
move -- once we've created those defects, we can actually move them around in the plane, and
we can imagine braiding them. So we start here and then we do this.
And I told you before that these -- that any time we apply a trivial loop that can be expressed by
stabilizers, so that doesn't have an effect on the space. So we can actually apply this loop right
here and take from here to here. That's just a -- that's kind of a freedom we have with our
stabilizers. And we do something similar, which I haven't drawn, with what should
be orange, and we get -- and then we move them back together. Not having the best of luck with
my laser pointers.
Okay. So you can see that the -- that the X on the bottom, so -- has an additional -- so the X on
the bottom here had an X applied from the top. So this top X propagated down. And so that's
what we see here. And with the Z on the bottom, it propagated up and added an additional Z up
here.
So I apologize for the picture. But this is a nice feature of these topological codes in that we -- even though we fixed the K with the lattice, we can actually add additional logical qubits as we
go along, and they're kind of -- yeah.
>>: What does it mean to apply an operator to the connecting [inaudible] is it one of the four
functions in there?
>> Jonas Anderson: This part?
>>: Where there is arrow coming from an operator such as X or Z to the -- not to the cloud, but
to the connecting [inaudible].
>> Jonas Anderson: Okay. So --
>>: I mean below Z arrow pointing at orange mark. The very bottom.
>> Jonas Anderson: So we took these defect pairs and we took an element of the pure errors,
which is this line here, and an element in the -- well, of the stabilizer generators, and that gives
us the clouds. And what I'm trying to do is kind of simultaneously show if -- this state itself, this
qubit that's encoded in those two defects, it could have the logical Z applied to it and it could
have the logical X applied to it. It's not necessarily applied. And so I'm trying to kind of track
through, you know, if it was there, what would have happened.
>>: So that's a logical --
>> Jonas Anderson: That's a logical --
>>: [inaudible].
>> Jonas Anderson: That's a logical operator on this defect pair.
>>: Yeah. And the ones that are applied straight, those are [inaudible]?
>> Jonas Anderson: Actually, there's no -- so I've abstracted, everything's logical. And we can
check here that it does indeed do these -- what stabilizers do with the CNOT.
So this is not a universal gate set yet. We actually have -- we need what's called a non-Clifford gate,
the T gate, and we usually do that with magic state distillation. This is just really quick. I won't
talk about these much. And then there's actually two ways that I know of to do the H gate, and
there's this way with code deformation, and then we can actually also use a magic state. So that
gives us our universal gate set so we can do universal computation with our toric code.
So the other code is not -- my logical operators aren't showing up that well. The other code that I
want to talk about is the -- are the color codes. And now we use these 3-valent, 3-colorable
lattices. And we have checks now of X type on each face and a Z type on each face. So there's
an X -- a six-body X check and a six-body Z check on each face.
The qubits are on the vertices now. And we -- both the logical operators are actually on the -- on
kind of the primal lattice. We don't have to go to the dual now to see these nontrivial loops.
And, again, we could put this on a torus or some other manifold.
And when we use a lattice like this one, we get -- or any lattice, actually, where the checks are all
0 mod 4, we can get the entire Clifford group transversally, so we don't need to worry about
coming up with another way of applying the Hadamard; we get it for free.
But the -- do you have a question? No. But the strings now -- we can have an error string that's
blue on kind of the blue sublattice that can split into the green and a red string. And so we no
longer have these nice endpoints to the strings that we can match. And so -- and so I'm going to
call it a string net, although that's slightly different than what they call string nets in some other
fields, if people know about these other topological codes, like the Levin-Wen code.
Anyway, if not, so the matching algorithm breaks down, although there are approximate
matching algorithms that seem to work out okay. And so this was around -- this code was
introduced around the time I started doing research in the field. And one method that we came
up with was to use an integer program to decode this code.
And I want to get on to the homological stabilizer code, so we won't go into too much detail. It's
not an efficient algorithm, but it does do minimum energy, so it's provably a minimum energy
algorithm. So it's kind of good.
But, anyway, so lots of us looked at this independent X and Z error model, and again we got
right around 11 percent. So they're very comparable. And, again, the recent paper that I
mentioned for the toric code, they mentioned something very similar for the color code.
So, again, they're very good and I would say at this point the color codes are a win because they
have these few extra transversal gates. And then we get down to looking at this black box
measurement model. And it's 3 percent again, so that's great.
But when we actually analyzed the circuitry, since we have these four- and eight-body parity
checks, the circuitry gets a bit more complicated. And if you can remember the toric code where
each qubit was being checked by four parity checks, now they're being checked by six because
there's two on each face and it's a 3-colorable graph. So there's a lot more waiting around as
well. And so because of that we actually get an order of magnitude reduction in the circuit
threshold.
And so depending on how much you value having those few extra transversal
gates, you may want to use the toric code because of this feature here. So it's pretty comparable,
but the take-home message is that you get this order of magnitude reduction in the circuit model.
So the last thing I want to talk about is the homological stabilizer codes. And this is some work
done recently. And the idea was we wanted -- we observed that the two codes I discussed, which
were the two kind of known codes, had this homological description. And so we wanted to use
that to try to find new codes. Because, I don't know, new codes are exciting, and we only know
of kind of these two topological codes at least with this type of interpretation. So we wanted to
look for more.
So I'm going to introduce homology real quickly. It's the theory of boundaries. So
mathematicians -- or I assume this is what happened. At one point they decided determining if
two surfaces or two manifolds are equivalent is really hard. Trying to bend the coffee cup into a
doughnut and other things, that's just -- that's a lot of work. So if we could just associate a group
with a surface or generally a manifold, then we could just look at the two groups of these two
surfaces and just see if those groups are the same. And that's a lot easier, I guess, because we
like algebra maybe more than topology.
Anyway, so we say a loop like this that bounds a region is homologically trivial, and we
associate that with the trivial element of a group. And we say that loops that don't bound a
region, like these, are homologically nontrivial, and we give them a nontrivial element.
And so actually we have one nontrivial loop from the B, but it's equivalent to this B prime, so
they're not independent elements of the group that we want to associate with the surface. But
then we have this additional loop here, and that can't be -- we can't go from C to B by kind of
deforming the loop. So they're separate ones.
And so we're going to associate the group Z2 with one of them and Z2 with the other. A lot of
times in homology they use the group Z, so that you can imagine you put one loop and then you
just add another loop, so it's a group of the integers, and you can just keep adding them or you
can put them the other way and you subtract them.
But because of the way the error chains will work in these codes, and if you have like an X
operator going around a string of X operators and then you multiply with another string of X
operators in these codes, X squared is the identity. And so that's why we want to use the Z2
homology group.
And I've tried to illustrate how the Z2 case differs in that if we put a loop down, we don't have to give it an
orientation, because it's just Z2. So if we have two Z2 loops and we put them together, they bound the
region, and we go back to the trivial element.
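A minimal illustration of Z2 chain addition in Python (edges named abstractly, my own example): sets of edges add by symmetric difference, so every loop is its own inverse, just like X squared is the identity.

    # Two homologous loops, as sets of edges (labels are arbitrary).
    loop_b       = {('e', 0), ('e', 1), ('e', 2)}
    loop_b_prime = {('e', 1), ('e', 2), ('e', 3)}

    # Z2 addition is symmetric difference; for homologous loops the
    # sum bounds a region, i.e. it is homologically trivial.
    print(loop_b ^ loop_b_prime)       # {('e', 0), ('e', 3)}

    # Adding any loop to itself gives the empty, trivial chain:
    print(loop_b ^ loop_b == set())    # True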
So it won't be extremely important, but the basic idea was that we want to take a graph and we
want to embed it in a surface. We can't take exactly any graph. We need kind of graphs that
have -- we need graphs with cycles, because we're going to associate the faces to the stabilizers
of the checks. But any graph like that will work. And we're going to put the vertices on the
qubit. We're going to put the qubits on vertices. And we're going to associate the nontrivial
cycles with logical operators.
So by associating the stabilizer generators with faces, anything that's a product of those will also
be trivial because of that. And so by making this association, we want to look at the possible
graphs that we can use, then, to make codes.
But there's one quick thing. The way I've defined -- the way I've defined it, toric codes actually
don't fit into the description. I've got qubits on the edges, and I've got this -- this is on -- this is a
nontrivial cycle, but it's on the dual lattice. And so how do I rectify that?
It turns out that there's a nice transformation called the medial transformation. So if we imagine
that that code from -- or that lattice from the previous page is this lattice here at 45 degrees, we
can take the vertices from that -- we can take the vertices from that lattice and associate them
with the yellow faces in a new lattice, and we can take the faces from that old lattice and
associate them with the blue faces of this new lattice, and the qubits then go from edges to
vertices.
So now we've transformed every toric code. I've done it here for the square lattice, but it turns
out that any planar graph with the toric code could be mapped this way to a new lattice. And so
I've now mapped those into an equivalent homological stabilizer code.
But what's more is that every planar graph that you map this way with qubits on the edges, when
you take them to this medial transformed toric code, they will always go to 2-colorable, 4-valent graphs.
So this particular example is one, but it turns out that every planar graph will do that. And it also
takes planar graph to planar graph. So they'll still be embeddable in the same surface.
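A small consistency check of this transformation for the square-lattice case, sketched in Python (the coordinate conventions are mine): each original edge becomes a medial vertex, adjacent to the four edges that share both an endpoint and a face with it, so the medial graph comes out 4-valent.

    L = 4  # small torus; the original edge-qubits become medial vertices

    def medial_neighbors(edge):
        # The four edges sharing both an endpoint and a face with `edge`.
        kind, i, j = edge
        if kind == 'h':
            return [('v', i, j), ('v', i, (j - 1) % L),
                    ('v', (i + 1) % L, j), ('v', (i + 1) % L, (j - 1) % L)]
        return [('h', i, j), ('h', (i - 1) % L, j),
                ('h', i, (j + 1) % L), ('h', (i - 1) % L, (j + 1) % L)]

    edges = [(k, i, j) for k in 'hv' for i in range(L) for j in range(L)]
    # Every medial vertex is 4-valent, and adjacency is symmetric:
    print(all(len(medial_neighbors(e)) == 4 for e in edges))   # True
    print(all(e in medial_neighbors(n)
              for e in edges for n in medial_neighbors(e)))    # True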
So that's interesting. One thing to observe is that those are distinct from the color codes, which
are on 3-colorable -- 3-face-colorable lattices, and 3-valent as well. So we've got these two
distinct classes of homological stabilizer codes now, which before we had these codes that were
defined on any planar graph.
So maybe that gives us some hope for classifying all of them, because we put them into some
nice groups so far. We've already seen this slide. But just to remind you, this was the color
code. So what else is out there, can we find other -- can we find other exciting codes, other 2D
codes with these properties.
And I won't leave you in suspense for too long. The answer is pretty much no. Sadly. Based on
my definition of homological stabilizer codes, which I think is a pretty good definition, all the
nontrivial ones will either be toric code or color code-like. And then I'll spend the rest of the talk
classifying those and showing you why with some fairly simple arguments.
Oh. So one thing I said is up to label set equivalence. So let me define label set equivalence. So
this is the medial transform toric code. But we have X type checks here and Z type checks here.
They're still four body, but we could have other more general lattices. And at each vertex we're
going to just start at one face and go around and associate those with a set of -- with this label set
here.
So if we've started here, we'd have X, Z, X, Z, and this is just what I'm calling a label set. It just
gives me a way to classify different codes based on this idea of a label set.
And so more generally it looks like this. And these -- these letters all correspond to nontrivial
Pauli operators, so X, Y, or Z. This is also a 4-valent lattice, but generally you can have more
than that. So you can have multiple letters. Or more than four letters, rather.
So what can we do to -- so I said that these codes were equivalent up to label set equivalence.
And so what is that. If somebody gives us a code that looks something like this and they claim
that they have a new code with some great properties, we want to kind of come up with an
operational definition for some equivalent codes, equivalent stabilizer codes.
And so what I'm going to allow is local, so just -- so local Clifford operations. The reason is,
Cliffords take Pauli operators to Pauli operators, and so we wanted -- and so generally local
unitaries actually don't kind of affect the entanglement properties of a system, but we stick with
Clifford because we want the output to be a stabilizer code as well.
So it turns out we can just apply the S gate and it takes us back to the toric code. And so this is
really probably no big surprise, this is -- yeah.
>>: [inaudible] the entire Clifford algebra [inaudible] was generated.
>> Jonas Anderson: So --
>>: [inaudible]?
>> Jonas Anderson: So all the elements of the Clifford group will take not just the generators,
but all the elements will take -- upon conjugation, they'll take Pauli operators to Pauli operators.
And so really any element -- any unitary that has that property on single qubit operations -- I
should mention here that these are strictly single qubit. So these are 2-by-2 Clifford operations.
Did that answer?
So not a big surprise, but that's not a -- you know, that's not a new code. This is a code that's also
equivalent to the toric code. And it actually -- Levin and Wen used it in a model that they
called the Levin-Wen plaquette model, although they acknowledge that it was very similar to the
toric code.
>>: What did you say?
>> Jonas Anderson: Levin and Wen. Yeah. They -- so Levin -- they had the plaquette model,
which was not nearly as well known as their string net model, which is another topological code,
except it's not necessarily known how to decode it or to make it at all fault tolerant, but it has
some really interesting properties from a condensed matter standpoint in that they -- those
particles that arise and braid around each other can do much more exotic things. And so you
could actually get -- you could use -- you could braid these particles and have a universal gate
set.
So, again, it's a whole family of codes, but for certain classes of it it could be universal with
braiding and a few other operations that are topological.
Again, it's not known how to do that. But this is an operation we can allow. And if you want to
think of it as a single qubit error, the error itself, whether you've rotated the label set, it's still
going to be detected by the same number of checks. And so it's not a -- from an ability to error
correct, it doesn't really change things much.
So there's one other bit that we only need to worry about when we have two checks on a face,
and I've tried to illustrate that here. This is like in the color code where we have two checks on
the face. We can -- we just kind of choose an arbitrary order for which check comes first in this
kind of subset. And there's no reason to choose kind of one operator before the other. So we can
just swap them all on a particular face. As long as we swap them all, since it was an arbitrary
choice, there's no -- so these are very kind of trivial equivalences, I hope, and that's all we're
going to need.
And okay. And so it turns out that if we just exhaustively search the nontrivial label sets, and by
nontrivial I mean like label sets that aren't -- well, that commute, so all the different faces
commute and that they're -- they're not all the same -- we show that we couldn't have like all Xs,
because that wouldn't detect like even a single X operator -- single X error, rather.
So it turns out that with those equivalences on 4-valent graphs, so these are graphs that are
4-valent, so they're 4-valent everywhere, although the faces and things can be irregular, that all
the label sets are equivalent to the toric code.
For the 3-valent, 3-colorable graphs, we get the color code here, but we also get these two others.
And they're not -- they're not equivalent in the ways that I defined to the color code, but they're
very similar. The -- let's see. The distance of the codes will be the same for a typical noise
model. The topological properties of the code are very similar.
So if we think about the errors, there's [inaudible] being quasi particles, it turns out to be exactly
the same model. And from, I don't know, all of the things that seemed interesting to me, at least,
there were no fundamental differences. And so it would be great if somebody would give me a
reason why they were interested, but I couldn't find them. So now we've classified these
particular graphs, the 4-valent graphs and the 3-valent, 3-colorable graphs, based on their label
sets.
And so the really simple argument for why 5-valent graphs don't work -- and I feel like if
whether -- sorry about the alignment there. But I feel like if some of those -- the coding stuff
went over your head or maybe it was too simple, that this stuff is completely abstract and
doesn't -- so it doesn't really matter at all what I said before except to give you
motivation for why we should look for these.
Okay. So why -- what about all these other 5-valent and higher graphs, how well do those work
as codes. So if we just arbitrarily pick a face to have an X type face on it, let's see what that
implies.
So this face here touches this face once and this face one time. And since I've demanded that we
put nontrivial checks on each face and these faces have to commute because they're stabilizers, it
turns out that we need to put Xs here just for commutativity. But this face touches this face one
time, and this face touches this face one time, so just from that argument we get Xs everywhere.
And there's a single X error here that's not detected by any of the -- any of the stabilizers. So
those all yield trivial -- those all are trivial label sets. That's a distance one code. So we're not
interested in those. And it turns out the same argument applies for the 6-valent and higher. You
do the same thing really.
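The forcing argument is easy to script. A sketch with a hypothetical trio of faces that pairwise share exactly one qubit (my own toy layout, standing in for the 5-valent case):

    # When neighboring faces share an odd number of qubits, different
    # letters would anticommute there, so commutation forces one letter
    # everywhere -- and then a matching single-qubit error is invisible.

    faces = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]   # each pair overlaps on one qubit
    labels = ['X', 'X', 'X']                    # the only option, up to relabeling

    for i in range(3):
        for j in range(i + 1, 3):
            assert len(faces[i] & faces[j]) % 2 == 1   # odd overlap: letters must match

    def syndrome(err_qubit, err_letter):
        # A check fires iff it touches the error with a different letter.
        return [err_qubit in f and lbl != err_letter
                for f, lbl in zip(faces, labels)]

    # A single X error on qubit 0 trips no checks: a distance-one code.
    assert not any(syndrome(0, 'X'))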
So we've actually now eliminated all of the graphs except -- well, we didn't talk about 2-valent graphs, but those are basically just polygons, and under this type of definition, at least, they turn out not to be that interesting. So really we've classified everything except for the 3-valent, 4-colorable graphs -- and I guess you might have to take my word for that -- but we've been able to eliminate so many with that 5-valent-and-higher argument that really the only class of graphs left are these 4-colorable graphs on 3-valent lattices.
And so what about these? This is the regular lattice that has that property, but we can imagine irregular lattices; basically you just need trivalent graphs that have odd-weight faces, like the triangle here, and they'll be 4-colorable.
So what about these? The answer turns out to be no. It's a little bit more involved, but it's not that bad. I can work through it quickly, or, if there are questions afterwards, I can bring up the paper and we can talk about it. But it's just using some basics from graph theory. So I'll --
>>: What's C?
>> Jonas Anderson: Okay. So I'll get to C. So I'm going to take a genus-g surface, some orientable surface, like a torus or a 2-torus -- so that's genus 1, genus 2, et cetera. Our surface is going to be fixed, and that C is just saying it's a constant.
So we pick a surface that we're going to embed our graph in, and this is fixed. And it's a 3-valent graph. Now, I haven't specified whether it's 3-colorable or 4-colorable, but it's a 3-valent graph.
And we can actually derive this expression simply for F average, which is the average number of vertices per face. And we see that, for a fixed surface, as we take the number of vertices to infinity -- which is the idea if we have a family of codes going to larger and larger codes -- F average is always going to go to 6. And we can just see that from taking V to be very large. So --
>>: Can you tell me what C was?
>> Jonas Anderson: Oh. C is just saying that the genus of the surface is fixed.
>>: Pick a constant.
>> Jonas Anderson: Pick a constant.
>>: That's all it is.
>> Jonas Anderson: Sorry. Yeah. So, the way that I've defined homological stabilizer codes, we have these logical qubits, which are a function of the nontrivial cycles on the surface. And so that's also something that's fixed with the surface.
So we can go to larger and larger lattices on the same surface, but the number of logical operators will be fixed, because of these nontrivial cycles. So we kind of have these two limits: this needs to go to 6, and this needs to go to 0 here.
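The F-average limit he's pointing at follows from Euler's formula; here's a reconstruction in the talk's notation (V, E, F the vertex, edge, and face counts of the embedded graph, g the fixed genus), since the slide itself isn't reproduced here:

    \text{3-valent:}\quad 3V = 2E, \qquad
    \bar{F} = \frac{2E}{F} = \frac{3V}{F}, \qquad
    V - E + F = 2 - 2g \;\Rightarrow\; F = \frac{V}{2} + 2 - 2g,

    \text{so}\qquad
    \bar{F} = \frac{3V}{V/2 + 2 - 2g} \;\xrightarrow{\,V \to \infty\,}\; 6.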
So let's just analyze this ratio a bit more. I'm going to express the number of logical qubits as the number of qubits minus the number of stabilizer generators, where this is the number of physical qubits, or total qubits.
And then, since in homological stabilizer codes we have this nice association with the graph, we can replace the total number of qubits with the total number of vertices, and the number of stabilizer generators with the total number of faces times this M, which is the average number of checks per face. We really didn't have to worry about that before: in the toric code M was always 1, because we had one check per face, and in the color codes M was always 2, because we had two checks per face.
Now I'm just going to allow M to vary. It can't be more than 2, actually, because if you try to add a third check it would be a product of the other two -- well, the way that I've defined it. So, you know, if you have X and Z and then you try to add a Y, the product of the others will give you the Y.
So that's something that we don't allow. So M is going to be something between 1 and 2. And we can do a bit more algebra, and we eventually get to this expression: in this limit we get one minus three times the average number of checks per face divided by F average, which is what I defined on the last page.
But in this limit -- the limit where the number of qubits goes to infinity, which is the same as the number of vertices going to infinity -- we said that F average needed to go to 6. Again, I derived this for 3-valent graphs.
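Putting the pieces together -- again a reconstruction of the slide's algebra, in the same notation, with n = V physical qubits, k logical qubits, and s = MF stabilizer generators:

    \frac{k}{n} = \frac{V - MF}{V} = 1 - M\,\frac{F}{V}
    = 1 - \frac{3M}{\bar{F}} \;\xrightarrow{\,V \to \infty\,}\; 1 - \frac{M}{2}.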
And so the only way for this to go to zero is basically M equals 2. And that actually works, and that gives us the color codes and the other similar codes. But for these graphs that have odd-weight faces, these odd cycles, we can actually only put a single stabilizer generator on those faces. And so if the maximum we can have is 2 and we have these faces with only one check on them, we cannot satisfy this ratio.
And what that means is that there are going to be local logical operators -- logical operators that aren't these nontrivial cycles -- so they don't satisfy the definition of homological stabilizer codes. And that's a property we don't like, because we don't want these local logical operators.
And so that's the idea for why this type of graph doesn't work. Again, I wish I weren't presenting no-go results; I wish I had some new codes to show.
But it turns out this gives us a nice classification based on planar graphs -- it gives us a way to categorize the codes that we already knew and to compare them nicely. But it also tells us that this method seems to fail for finding other codes.
So thanks.
[applause].
>>: So if you add Y into X and Z, it is redundant, correct?
>> Jonas Anderson: So --
>>: Wouldn't that let you get to six faces per vertex, or that sort of thing?
>> Jonas Anderson: So it would be -- I guess the reason I don't allow them is a little more subtle. It's not so much that it's a product of the other two -- I mean, it is that -- but it's also this: when I have a single check on a face, I don't allow any identities, because if I allowed that, the check would no longer be a homologically trivial operator. It would not be a full loop, so it wouldn't have this nice description.
And if I imagine putting two checks on a face where I have, you know, Xs and then some combination of Zs and Xs, when I look at the combination of the two, I could actually re-express those checks on the face as having some identities in them. And that's something I don't want, because then -- those might be okay codes in some circumstances, but they don't have this homological description.
And, similarly, if I add X, Y, and Z checks, I can take the product of two of them to get a Y, multiply by the third, and get identities, and then I don't have these nice loops.
There's no guarantee, I guess, that you have to have loops for these good surface codes. But the observation that I and others made was that they have this description, and I wanted to see what I could get with that description.
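His point that a third check on a face is redundant can be verified mechanically. A toy multiplication routine (mine, with the overall phase ignored) shows the X- and Z-type checks on a face multiply to the Y-type one, so a Y check adds nothing as a generator:

    def multiply(p, q):
        # Multiply two Pauli strings, ignoring the global phase.
        table = {('X', 'Z'): 'Y', ('Z', 'X'): 'Y',
                 ('X', 'Y'): 'Z', ('Y', 'X'): 'Z',
                 ('Y', 'Z'): 'X', ('Z', 'Y'): 'X'}
        out = {}
        for i in set(p) | set(q):
            if i in p and i in q:
                if p[i] != q[i]:
                    out[i] = table[(p[i], q[i])]
                # equal letters square to identity: drop the qubit
            else:
                out[i] = p[i] if i in p else q[i]
        return out

    face = [0, 1, 2, 3]
    x_check = {i: 'X' for i in face}
    z_check = {i: 'Z' for i in face}
    assert multiply(x_check, z_check) == {i: 'Y' for i in face}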
>>: [inaudible] sort of boring questions.
>> Jonas Anderson: That's okay.
>>: Repeatedly you're saying that -- your overhead citations had JTA et al.
>> Jonas Anderson: Uh-huh.
>>: Is that paper available?
>> Jonas Anderson: Yeah. So there were --
>>: Which paper is that?
>> Jonas Anderson: -- two separate papers. This last paper was on homological stabilizer codes. And unless somebody else has used that name, I think you can just search for that --
>>: That was last July. I think it was on the arXiv last July.
>> Jonas Anderson: Yeah. That's right. And then the other one, the stuff on the threshold with the color code -- oh, yeah, actually, if you click on my name, you'll probably get to the other one.
>>: [inaudible].
[multiple people speaking at once].
>> Jonas Anderson: Yeah. We did that one together. And then I did this one with, I don't know, with his help, but mainly --
>>: I just want to make sure I'm looking at the right papers. I'd already read one of these, but I hadn't seen the other one.
>> Jonas Anderson: Yeah, they're completely different papers, but they're -- I don't know, they're nice. They're both fairly long. My advisor doesn't like to break up papers into smaller papers; he'd rather put something out there all at once. So, yeah.
>>: Have I got a student for him. Cody Jones [phonetic] from Stanford actually likes the kitchen sink.
>> Jonas Anderson: Nice.
>>: So the other one is -- your Satanic numbering --
>> Jonas Anderson: Oh. The 6.6.6 lattice?
>>: What are the numbers? I don't actually know that off the top of my head.
>> Jonas Anderson: Yeah. So here's another one. This numbering scheme only works for semiregular lattices. So you pick a vertex; in a semiregular lattice, each vertex sees the same faces. So each vertex here sees a 3-face, a 12-face, and another 12-face. And so what the 6.6.6 --
>>: [inaudible] so that's assuming all vertices [inaudible].
>> Jonas Anderson: Yeah. I didn't need that for my proof about homological stabilizer codes. The valency -- the number of neighbors that each qubit has -- was regular, but the lattices weren't necessarily semiregular. The faces could even have been random, as long as the lattice is 3-valent, or fixed-valent. But, yeah, this is easier to draw.
Any other questions? How did I do for time? Oh. Wow. I don't know if that's perfect. You
guys might have wanted it to be shorter.
[applause]