>> Krysta Svore: Okay. Thanks everyone for coming back. So now we're going
hear from -- a little change of pace for the next two talks. We're going to
focus a little more on the actual hardware devices. So now we're going to
hear from Rob Schoelkopf about using cat states in a microwave cavity for
quantum information. So let's welcome Rob.
[Applause]
>> Rob Schoelkopf: Thanks.
>> Rob Schoelkopf: I see you guys are gluttons for punishment. You're back
for your second dose. Maybe a few of you were at the talk yesterday, too.
So yeah, what I thought I'd do today is give a little bit more technical
talk. I'll try not to repeat material too much from yesterday. And I'll
give you some specifics, but I also want to sort of pull out some general
things that we're thinking about as we're starting to press towards the next
stage which is really I think, as I said yesterday, trying to figure out the
best way to do error correction.
So in particular, so we'll give a little review because part of the goal of
today's talk is to rehabilitate the harmonic oscillator in your eyes. And,
you know, it's always nice to start the day by quantizing the harmonic
oscillator.
But, interestingly, you know, I think in our system, we sometimes talk about
it as qubits. Okay. And if we're in certain regimes, we can, to a large
extent, view it as we really have two level systems, but that's not really
the natural things for us to build in our system. Really, what we build in
our system, as I'll try to explain, is sets of coupled oscillator modes with
varying degrees of non-linearity. So you have to have some non-linearity. And
with interesting kinds of interactions which are not maybe the usual things
that algorithm designers and quantum information theorists think of, right?
I mean, a lot of the stuff I'm aware of people say, oh, you know, take these
two-level systems and do these CNOTs and so on. And that's great. But to
me, as a hardware designer, it's really nice if I can think of a simple piece
of hardware that in some sense implements complicated functions autonomously
or naturally all at once, and that's kind of the spirit of what we're doing
here. And so I want to try to explain to you that kind of crazy red, white,
and blue four globby thing that was on the title slide and explain to you why
we think it's maybe what we call a hardware-efficient way in our system to
realize an error-correctable logical qubit without having to build many, many
Josephson junctions and do things kind of brute force. And then I'll show
you some experiments along those lines that we're doing, how we're encoding
information in multi-photon states in a cavity, and actually about this paper
here, which is on the arXiv and will be in Nature in a week or ten days or
something, which I claim is the first real-time tracking of a quantum
error-correction syndrome.
Okay. So I hope to be pedagogical and not pedantic, and it's a nice small
audience, so please, you know, let's make it a conversation and I'll try to
go quickly.
I'm going to try to keep your intuition going, sort of have some pictures
here of a mechanical harmonic oscillator, but of course what we really build
are electrical harmonic oscillators, right, and instead of x and p, our
variables are Q and phi, but they are the conjugate variables of the system,
and then we
do the usual thing. So I'm going to sort of introduce you to this -- our
version of quantum optics, which is really the way we talk about and think
about our systems all the time. So, okay. So that's what A and A dagger
mean.
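As an aside (an illustrative numpy sketch, not from the talk), the ladder-operator language being used here is easy to check in a truncated Fock space, where a and a† satisfy [a, a†] = 1 everywhere except at the truncation edge:

```python
import numpy as np

def destroy(dim):
    """Truncated annihilation operator a: a|n> = sqrt(n)|n-1>."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 20
a = destroy(dim)
ad = a.conj().T

# Canonical commutator [a, a†] = 1 (exact except in the last Fock level,
# an artifact of truncating the infinite-dimensional space).
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))   # True

# The number operator a†a has eigenvalues 0, 1, 2, ...
n_op = ad @ a
print(np.allclose(np.diag(n_op), np.arange(dim)))     # True
```

The truncation caveat in the comment matters in practice: any numerical check of oscillator algebra should keep the interesting states well below the cutoff.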
And of course, with phi and Q, it's a bit arbitrary which one you consider the
position and which one you consider the momentum. But because of the way a
Josephson junction works, usually we like to say, okay, phi is the coordinate,
because then what the Josephson effect will give us is a certain non-linear
potential.
>>: Does it really matter?
>> Rob Schoelkopf: It doesn't really matter, but then I would have to say,
oh, my non-linearity is I'd have a velocity-dependent mass if I did it the
other way, and that's just not as intuitive or common. So it's absolutely
true, though, it's a bit arbitrary, and you'll see people doing it either
way.
Okay. And we quantize this thing and it has energy levels and it's a lovely,
beautiful quantum system to work with. And since we're talking about error
correction, that's the goal, we're going to talk a lot about parity in
several different forms. All right. And again, this is all very
rudimentary, but just to remind you: when we have our oscillator and we
quantize it, we have our Fock states -- that's what physicists call these
states of definite number of excitations -- and those have these well-known
wave functions in the coordinate space, which are even or odd depending on
whether N is even or odd. Okay. So remember that. We'll come back to a few
versions of parity.
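Both versions of parity mentioned here can be verified numerically (a hedged aside, in oscillator units with hbar = m = omega = 1): photon-number parity exp(i pi a†a) is just (-1)^n on the Fock states, and the same sign shows up in the coordinate wave functions through the Hermite polynomials.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

dim = 10

# Photon-number parity P = exp(i*pi*a†a) is diagonal in the Fock basis
# with eigenvalues (-1)^n: even Fock states have parity +1, odd ones -1.
P = np.diag((-1.0) ** np.arange(dim))
print(np.allclose(P @ P, np.eye(dim)))    # True: P squares to the identity

# The same parity appears in coordinate space: psi_n(-x) = (-1)^n psi_n(x),
# inherited from the Hermite-polynomial symmetry H_n(-x) = (-1)^n H_n(x).
x = 1.3
for n in range(6):
    c = np.zeros(n + 1)
    c[n] = 1.0                             # select the n-th Hermite polynomial
    psi = hermval(x, c) * np.exp(-x**2 / 2)
    psi_mirror = hermval(-x, c) * np.exp(-x**2 / 2)
    print(n, np.isclose(psi_mirror, (-1) ** n * psi))   # all True
```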
So what's the problem with harmonic oscillators or why people don't usually
think about them as being useful for quantum information is, well, they're
too classical in some sense because they're linear, because all these energy
levels are regularly spaced. If I act on it from outside with a laser or
some classical force -- in our case it will be a microwave generator tuned to
the resonance of this thing -- we can excite the system but what we tend to
make right are the Glauber coherent states which are superpositions of number
states and it basically sort of amounts to taking your minimum uncertainty
wave packet and just placing it to the right or to the left. Of course you
can do that with whatever amplitude and phase you want depending on the
amplitude and the phase of the force you use to excite the system. Okay.
And we'll describe those coherent states of course with some complex numbered
alpha, right.
And alpha squared tells you sort of the average energy, or N bar. A coherent
state, of course, is a superposition of many of the Fock states.
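The coherent-state expansion and the statement n̄ = |alpha|² can be checked in a few lines (an illustrative sketch with an assumed alpha, not a device number):

```python
import numpy as np

def coherent(alpha, dim):
    """Fock amplitudes of |alpha>: c_n = e^{-|a|^2/2} alpha^n / sqrt(n!)."""
    n = np.arange(dim)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!) without overflow
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.exp(log_fact / 2)

alpha, dim = 2.0, 40
psi = coherent(alpha, dim)

print(np.isclose(np.vdot(psi, psi), 1.0))            # normalized (True)
nbar = np.sum(np.arange(dim) * np.abs(psi) ** 2)
print(np.isclose(nbar, abs(alpha) ** 2, rtol=1e-6))  # n̄ = |alpha|² (True)
```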
And notice that this state doesn't really have any particular parity, right?
And also, of course, what I do, if I make this thing, you know, if dynamics
is kind of trivial, it just swings back and forth, right? And the only
difference between a quantum oscillator and a classical one is that this
motion, as it's moving back and forth, is just slightly blurred by this
uncertainty.
Okay.
So how do we make that more fun? As I sort of alluded to yesterday, we have
the Josephson junction which lets us put in non-linearity. Okay. And that means
our oscillator now is nonlinear. We have in fact something which is a cosine
potential, and that then lets us, if we want, think of the zero and the one
as the two states of our qubit and we'll never go up to the doubly excited
level. So if you make the sort of transmon qubits that we do mostly in my
lab, that anharmonicity there is pretty large. It's a few hundred megahertz
out of five gigahertz. So in ten nanoseconds, you can really sort of
spectroscopically resolve this transition distinctly from that transition.
>>: [Indiscernible]?
>> Rob Schoelkopf: Yeah. So it's exactly this, right? And this is why I
say, okay, the Josephson effect is giving you a non-linear potential. Right?
So cosine, the first order, is just a parabola. But that's what I've drawn
here. The next order of course, which is important, is the quartic term of the
cosine, and that's the thing which gives you the non-linearity. And because
it's sort of a softening potential, the energy to add a second quantum is
somewhat less than the energy of putting in the first one. But it doesn't
really matter what the sign is, just that it's non-zero. Okay. So,
yeah, that was my first high-level point is it's qubits, sort of. Right?
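The "qubits, sort of" point -- a weakly anharmonic oscillator whose 0-1 transition is spectroscopically distinct -- can be reproduced by diagonalizing the transmon Hamiltonian 4E_C n² − E_J cos(phi) in the charge basis. The parameter values below are assumed, textbook-typical numbers, not the ones from this lab:

```python
import numpy as np

# Illustrative transmon parameters (assumed): E_J/h = 20 GHz, E_C/h = 0.3 GHz,
# so E_J/E_C ~ 67, well into the transmon regime.
EJ, EC = 20.0, 0.3                 # GHz
N = 30                             # charge states n = -N .. N
n = np.arange(-N, N + 1)

H = np.diag(4 * EC * n.astype(float) ** 2)
# cos(phi) couples neighboring charge states with amplitude 1/2.
H -= EJ / 2 * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))

E = np.linalg.eigvalsh(H)          # ascending eigenvalues
f01 = E[1] - E[0]                  # qubit transition ~ sqrt(8*EJ*EC) - EC
f12 = E[2] - E[1]
anharm = f12 - f01                 # ~ -EC: a few hundred MHz out of ~6-7 GHz
print(round(f01, 3), round(anharm, 3))
```

The softening of the cosine shows up as a negative anharmonicity of roughly E_C, which is the "few hundred megahertz out of five gigahertz" scale quoted above.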
And actually, you know, detail that you probably haven't heard about if
you're not working in this field, right, is that almost everything we've ever
done with superconducting qubits and most of the other groups has importantly
involved many of the other levels. So I mean, the way we did the two qubits
gates and that algorithm I showed yesterday involved at least sort of at
least virtually going to the to state which is still fairly coherent in the
qubit. Okay.
So just as, you know, atomic physicists tell you it's just a two-level
system, but then to do this transition, we virtually go through some
optical transition. Well, okay. So it's the same thing here. It's all done
with levels. Right. Okay. So you can make these and they're nice and they
work. And they're getting more and more coherent. Okay. And this is sort
of what they looked like a couple of years ago. Nowadays, this is sort of
the zoom in of a picture of one of the qubits that's working in that 3D
architecture that was on my title slide.
We've made the qubits bigger and bigger, which turns out to make them less
and less sensitive to materials imperfections and stuff, but it really
doesn't matter. We get all the sort of same -- it's still got one Josephson
junction there, the Hamiltonian is exactly the same, the parameters are
exactly the same, so you don't need to worry about those things. Okay.
And here's another version of this progression in coherence with a little bit
more detail. Shows both T1 and T2 and you have to keep improving both of
those but also importantly here I've added another ingredient which is, you
know, previously sort of before 2011 or so, our linear oscillators and our
non-linear Josephson junction-based oscillators both had fairly equivalent
Q's, or basically the same kind of coherence properties. All right. But
nowadays we can get strong coupling of our qubit to a proper
three-dimensional cavity and we've demonstrated already single photon
lifetimes in those of ten milliseconds. And I think there's really no reason
to believe we shouldn't be able to have seconds for those [indiscernible]
times. If you go to Haroche's group, right, they do 50-gigahertz, 1-kelvin
cavities that have a hundred-millisecond lifetime, and an open cavity that the
atoms fly through is much harder, so we should be able to do that. And it's
just sort of beginning here, but I
think there's really the -- these are interesting objects because if
that's -- you know, if you can put your information somehow in an object that
starts with much lower error rate, that should also help me out when I'm
trying to do error correction.
So that's great. Yes, good, we have a question.
>>: Yes. So I heard people say that T1 and T2 were fairly comparable. And
it looks like for a lot of your [indiscernible] your T1 is now --
>> Rob Schoelkopf: Yeah. So this last point here is, you know, on the
bleeding edge so to speak. So this is a more complicated device from my
colleague Michelle Devoret's group called the fluxonium. This one is now
demonstrated T1 times of a couple of milliseconds. And it also has some
interesting differences. It can make for example like a lambda system, if
you know atomic physics, as opposed to just the transmons I've been talking
about so far.
It also allows you to tune the frequency with a magnetic field. And so this
point here is something where you're on a tuning curve and we know that
basically unfortunately the device is there acting as a magnetometer and it's
being dephased by some one over F noise. So if they go to a sweet spot for
that, if they make -- if they fix the frequency, this can be improved
somewhat. But basically, what happens with each generation of device is, I
mean, first you need to improve T1, because if you haven't improved T1, you
can't see anything. You don't know about dephasing. And then you know, if
you find this situation, you say, ah, let me investigate what the dephasing
is, and get it back up. And so these sort of iterations here are, you know,
improve the lifetime, reduce the dissipation, then figure out if it's
homogeneously broadened and T2 is twice T1. If not, figure out what the
dephasing is and bring that up. You have to kind of keep going through this
cycle.
>>: I think T2 is going to probably catch up to [indiscernible].
>> Rob Schoelkopf: I mean, yeah. That's sort of -- we had to do these
things along the way. Right. So you know, in this first generation of the
3D transmons, for example, the T2 was lower and now it's kind of caught up to
the T1. And we realize that was basically due to unintentional measurements
from very small backgrounds of black body photons in the cavity.
>>: [Indiscernible]. If you're introducing cavity shields [indiscernible],
then how do you imagine -- do you imagine the volume of these cavities is
going to end up posing a problem if you want a [indiscernible] device?
>> Rob Schoelkopf: Right. And so that's a good question which people ask a
lot. So even when we go to the full three-dimensional cavities, they're
about a CC in volume. Okay. And so right now in our fridges, we could
easily cool down a qubit meter. So that's a million of these cavities.
The bigger issue, right, than making a million of these or whatever is, as
Dave is going to talk about, wiring them, calibrating them, controlling them,
all those kinds of things. And so we have some ideas about that; we're
working on ways of sort of making three-dimensional structures and shields in
an integrated way, but that's a whole lot of technology that maybe we won't
get into too much.
Okay. So even more fun than having your non-linear oscillator is to couple a
linear oscillator to a non-linear oscillator. In quantum optics, they call
that cavity QED. So you have some bosonic mode here. You have your
two-level atom. So you write your A daggers and As for your oscillator and
your sigma Z for the atom, and you have this Jaynes-Cummings or dipole
interaction between the two. And we can get exactly this Hamiltonian. We
get very strong couplings compared to real atoms, where this G, the coupling
between the two things, can be hundreds of megahertz, corresponding to
ten nanoseconds to do a coherent operation. Okay.
And there are ways of implementing that. This is sort of a cartoon of the
things I showed yesterday, right where you have a planar qubit and a planar
transmission line, but it equally well applies to this sort of Josephson
junction with a great big dipole antenna that's pasted inside of a
three-dimensional cavity.
But in the rest of the talk today, we're not going to think about this sort
of resonant case where the qubit and the cavity exchange energy because they
are at the same frequency. Much more often now, we work in this so-called
dispersive regime. Okay. And so we're going to have the frequency of my
qubit or my atom being different than the frequency of my cavity or the
resonator. And that difference will, in addition, be large, somewhat larger
than G, the kind of coupling between the two elements. Okay.
And what that means is you go to second-order perturbation theory, right? And then
you get an interesting Hamiltonian like this. Okay. So if I -- and I like
to call this doubly QND because you see, the interaction between my two parts
here commutes with the total Hamiltonian. All right. So this is a really
nifty thing. It means we can use the cavity to do a QND measurement of the
qubit, or we can use the qubit to do a QND measurement of the photon number
in the cavity. Okay. And that's where kind of all the fun is going to come
about.
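The "doubly QND" property -- the dispersive term commuting with both the photon number and the qubit's sigma-Z -- is a one-screen numpy check (chi and the frequencies below are assumed placeholder values, not device parameters):

```python
import numpy as np

dim = 10
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
n_cav = a.conj().T @ a                   # photon number a†a
sz = np.diag([1.0, -1.0])                # qubit sigma_z
I2, Ic = np.eye(2), np.eye(dim)

wc, wq, chi = 7.0, 5.0, 0.002            # illustrative GHz-scale numbers
H0 = wc * np.kron(n_cav, I2) + wq / 2 * np.kron(Ic, sz)
Hint = chi * np.kron(n_cav, sz)          # dispersive coupling chi * a†a * sz

def comm(A, B):
    return A @ B - B @ A

# The interaction commutes with the photon number, with sigma_z, and with
# the full bare Hamiltonian: neither observable is disturbed by the coupling.
print(np.allclose(comm(Hint, np.kron(n_cav, I2)), 0))   # True
print(np.allclose(comm(Hint, np.kron(Ic, sz)), 0))      # True
print(np.allclose(comm(Hint, H0), 0))                   # True
```

This is exactly why the same term can be read either as a qubit-state-dependent cavity shift (for readout) or as a photon-number-dependent qubit shift (for the parity tricks later in the talk).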
And in addition, I didn't write this out, but you know, in reality of course
the qubit here is a B dagger B plus some of the higher-order terms for its
non-linearity, so this is really A dagger A, B dagger B. In quantum optics,
they call that the cross-Kerr. All right. I guess you could also call it,
you know, a longitudinal-like interaction, right. It says that if I excite
one of the components, it changes the energy required to excite the other
component. But we'll never actually trade energy between the two. We just
have a kind of dispersive interaction between them.
And so what's special about this? I don't know. It's a natural thing for us
to build. It's not something I really see people writing about a lot in the
quantum information literature. It's similar to things in NMR. It's just
sort of a new and interesting Hamiltonian, and as I'll show you in the
rest of the talk, basically there's neat things we're learning how to do with
it.
So the first thing you can use this for, and we've been doing this since like
2005 because the transmon qubit, as I mentioned, you can't measure it in any
way that's sort of low frequency. The only thing you can do is use this
Hamiltonian to measure.
And so the way we measure kind of conceptually is we send a microwave signal
through the cavity and then the qubit is sort of like a polarizable medium
that has different polarizability. It shifts the frequency of the cavity so
if that's the transmission through the cavity, if the qubit is in the ground
state, that's the transmission. If the qubit's in the excited state, I can
put a pulse through and measure and I will project the qubit in a down or up,
depending on whether I get a high intensity coming out of the cavity or a low
intensity coming out of the cavity, or depending on whether there's a phase
shift of the microwaves going through. Okay. And just to remind you, right,
doing a QND measurement on one of your set of qubits is an absolute
entrance exam requirement for playing with quantum error correction. So I
kind of alluded to this fact yesterday. There's been remarkable progress on
this. When we first did this, we had to measure for microseconds. And the
fidelity was 20 percent, 50 percent sometimes, which was enough to play with
the qubits and understand what they were doing. Nowadays, especially if
you combine that with a paramp, a Josephson-based amplifier that's
running at ten millikelvin, you can basically make a quantum-limited
measurement of that signal going through and it's very highly QND so here is
a qubit that has a 50 microsecond lifetime while it's being continuously
projectively measured. And here you're seeing the jumps, the discrete
quantum jumps between its two lowest energy states. In realtime, it means
that in 300 nanoseconds, which is much less than the coherence time of all
the other stuff around it, I can learn exactly whether that guy is G or E, up
or down, zero or one for my ancilla. Okay. And so that's been a lot of
work, and it's really interesting and it's sort of important in opening up
this new stage of --
>>: [Indiscernible]? I know it's G and E, but what --
>> Rob Schoelkopf: Oh, so, I mean, this scale here is arbitrary funny
voltage units on some demodulated A-to-D converter at room temperature. What
you do of course is you do pi pulses and then you can see what the vertical
axis is. So that's fairly easy to calibrate. Okay.
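The readout physics just described -- a qubit-state-dependent pull of the cavity resonance, read out as an intensity or phase contrast -- fits in a toy Lorentzian model. All numbers below are assumed for illustration:

```python
import numpy as np

# Toy dispersive readout: the cavity resonance sits at wc + chi for qubit |g>
# and wc - chi for |e>; probing at one shifted resonance gives high vs low
# transmitted intensity, probing in between gives a phase contrast.
wc, chi, kappa = 7.0e9, 2.0e6, 1.0e6    # Hz: cavity freq, shift, linewidth

def S21(w, qubit_state):
    w0 = wc + chi if qubit_state == 'g' else wc - chi
    return (kappa / 2) / (kappa / 2 + 1j * (w - w0))   # Lorentzian response

probe = wc + chi                        # sit on the |g> resonance
Ig = abs(S21(probe, 'g')) ** 2          # ~1: full transmission
Ie = abs(S21(probe, 'e')) ** 2          # small: detuned by 2*chi >> kappa
print(round(Ig, 3), round(Ie, 3))       # 1.0 0.015

# Phase contrast if you probe halfway instead:
phase_g = np.angle(S21(wc, 'g'))
phase_e = np.angle(S21(wc, 'e'))
print(phase_g > 0 and phase_e < 0)      # True: opposite phase shifts
```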
So this circuit QED has lots of nice benefits. Putting things in a cavity is
really good. So for these 3D qubits, by the way, if it was in free space,
its T1 time would be a few nanoseconds. So we're already suppressing
spontaneous emission and the coupling to the environment by many orders of
magnitude. Much more than has ever been done with real atoms, by the way.
Of course, you can use it to wire up and entangle qubits. The loss, even in
those planar things, is sort of like 0.1 dB per kilometer. Kind of similar to
fiber. Nobody really wants to string 10-millikelvin coax between
distant locations, but my point is even if I had like a really complicated
thing that was a cubic meter, I don't have to worry a lot about the
interconnects. It's actually more about mode matching and coupling and
stuff. And again, in the microwave, good, we can do this high-fidelity
readout of the qubits, and as you'll notice in the rest of this, it's a
really useful resource for doing error correction. And people are doing lots
of other interesting things including many body -- you know, there's things
you can do with flying photons or you can do non-linear optics like squeezing
and parametric down-conversion. Also there's this nice field of quantum
electromechanics that borrows a lot of concepts and techniques from this. So
there's lots more things than just the quantum computing going on today.
Okay. So yeah. So we're here and we want to do feedback and quantum error
correction. Because now we have developed the coherence, the ability to
entangle and the measurements. So now, if we want to think about doing error
correction, what should we do? So there are several known architectures,
right. There are sort of stabilizer error correction, there's surface codes.
And something we've been thinking a lot about is this so-called modular
approach, which if you like is a bit like taking a quantum network and using it
as a programmable quantum computer. And kind of the downside is that all
these things are pretty complicated. So as I said before, there's kind of a
chicken-and-egg problem. You want to build much more complicated quantum
circuits so you can do error correction, but for those to work, they need to
be error corrected. So do we just put our heads down and engineer the ability
to do
20 of them at once? Maybe, maybe not. With this modular approach, what you
need is sort of an error-corrected memory, some ancillas with good
local gates, and then the ability to do sort of remote entanglement between
those ancillas, which, for example, can be done by having signals that fly
out, like I just showed with the QND measurement, that go on to some
superconducting
detector or paramp, and then basically your gates between your logicals,
these red blobs here, are done using the teleported gate scheme of
[indiscernible] and others. Okay.
And something we like about this is that, well, from the engineering point of
view, I have to learn to make little boxes with a few quantum resources inside
that work very well, and tune all that up, and then I have a small number of
fairly defined operations I need to do between my boxes. And those don't need
to work that well at all, it turns out, for sort of reasonable resource
overheads. You know, if those are 95 percent or 98 percent fidelity, it's not
so bad, we
think. And the other thing I like about this is that if you have a little
bit of error correction built in here and then you have this kind of scheme,
you can think of it as a quantum breadboard. So you can then do error --
you know, if I have seven of these modules, I can then do a stabilizer
between these things or I can make a real toric code or a surface code or a
surface code with variable range interactions or whatever.
So basically, I want to play with my hardware. I want to learn about error
correction, but I don't want to commit yet to a particular architecture
because I'm not really sure what's optimal. So we'll see how that goes.
Okay. So if we were going to do the straightforward Steane code, here's the
issue, right. We'd have seven of our transmons inside this cavity and then I
have
six ancillas and six readouts. I would need six paramps, six FPGAs reading
it at room temperature. I have to do a very complicated classical but fast,
in submicrosecond logic and feedback.
And what I've learned from our theorists and this suggestion of the cat codes
is an interesting thing. This is a very silly way to do error correction.
It's very inefficient. And the reason is as follows. See what you think of
this.
So we have, you know, one qubit and it's got some errors. And what we have
to do is redundantly encode, which we do by adding a bunch of other qubits.
So we made a bigger Hilbert space. But what's the problem? I had T1 and T2,
or bit flips and phase flips, on this guy, bit flips and phase flips on this
guy, bit flips and phase flips on this guy. And essentially, you have these
tiny, not-quite-zero marginal returns, and we keep building and building and
building and we have more and more things that can go wrong, which is why
there are all these syndromes. Okay.
So I also feel like it undersells a little bit the interesting physics here
if you just say, oh, so we have to scale up in order to do error correction.
Right? There's a lot of really cool physics that's going on here in the
process of trying to do error correction, right? So first of all, to remind
you, right, what you need to do, here's one of your ancillas, and these are
conditional gates. You know, it's already been sort of compiled or written
in a concise way here. You want to measure the sort of four-way parity.
Remember parity? Using this ancilla of those four qubits and the X parity of
these and the Z parity of those and so on.
So what are we doing here? Well, we're working with a large-dimensional
Hilbert space. I'll show you [indiscernible] where we're using a comparably
large Hilbert space already. You want to measure a symmetry property, okay,
in a way which projects this thing back onto an eigenstate of
that operator, that interesting multidimensional operator. We need to do it
in a quantum non-demolition way. The act of doing these gates and making the
measurement on that ancilla had better not flip any of these qubits. That
would be bad. And we need to measure fast, with low latency, and then we
need to do stuff in response to that. And eventually, this is only one layer
of error correction, right; we need to concatenate and make everything fault
tolerant. Okay. So there's a lot to be done still.
So here's the kind of proposal that our theorists came up with. From the
hardware point of view, it looks lovely. So we're going to use a really
exotic state of light in a cavity as the register. And then we need one
ancilla and one readout to monitor the relevant error syndrome. And the
reason, I guess, that this thing seems so hardware efficient compared to that
is we're going to use a large Hilbert space in one object without
introducing -- right? This is something I have to check -- any new error
mechanisms. In fact, in our cavities, we know that there's very minimal
dephasing. Matt asked about dephasing. So essentially, we only have one
thing that goes wrong when we put many photons in the cavity. We can have a
finite Q, so we have photon loss that happens at some rate kappa. And that's
the dominant thing we have to correct.
Okay. So first of all, I want to show you how we're encoding information
here. And we made -- this is a paper in Science last year -- what I think are
the world's largest Schrodinger cats. That is not my graduate student Brian.
This guy is working with a large cat, but he's doing a reverse Schrodinger
experiment. I'm pretty sure he's dead already.
Okay. That's Brian. He's still healthy. Brian is doing fine. And Gerhard
is also fine. He's [indiscernible].
So what I want to do then in the remainder of the time is sort of explain to
you these cat states and how they can be used. So what's a cat state of an
oscillator? A little different than a Schrodinger cat. It's not entangled
with anything. It's just a superposition of that displacement and that
displacement, right? So it's like I pushed the pendulum to both sides
simultaneously. Okay.
And if I do alpha and minus alpha with a plus sign here, I have a certain
distance between these, set by the average photon number, and this thing does
decay more rapidly, right, because I have N photons in here. The time
between each individual photon loss event is now one over N bar kappa.
when I make this superposition with a plus sign, you see it's even parity.
Right?
The other thing you can do is make a superposition with a minus sign, so you
have displaced it that way and with a phase displaced it this way. Now, how
do you know you did that? I'm cheating here. I'm showing you the wave
function. You don't have access to the wave function. If I look at the
probability density, both of these have the oscillator both right and left
with equal probability; you can't tell plus from minus, or even from just a
mixture of it went both ways.
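The parity statements about the two cats are easy to verify numerically: the plus cat lives entirely on even Fock states, the minus cat on odd ones, and losing a single photon maps one into the other. A numpy sketch (alpha = 2 is an assumed, modest cat size):

```python
import numpy as np

dim = 40

def coherent(alpha):
    n = np.arange(dim)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.exp(log_fact / 2)

def normalize(v):
    return v / np.linalg.norm(v)

alpha = 2.0
even_cat = normalize(coherent(alpha) + coherent(-alpha))   # |a> + |-a>
odd_cat  = normalize(coherent(alpha) - coherent(-alpha))   # |a> - |-a>

# Photon-number parity expectation: +1 for the plus cat, -1 for the minus cat.
parity = (-1.0) ** np.arange(dim)
print(np.isclose(np.sum(parity * np.abs(even_cat) ** 2),  1.0))   # True
print(np.isclose(np.sum(parity * np.abs(odd_cat) ** 2),  -1.0))   # True

# Losing one photon (acting with a) flips the parity: the cat-code error
# syndrome is a parity jump, not a destroyed state.
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
after_loss = normalize(a @ even_cat)
print(np.isclose(abs(np.vdot(odd_cat, after_loss)), 1.0))         # True
```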
But what you can do, right, is you can wait a certain amount of time and then
the pendulum will swing back and interfere with itself if it's really in a
cat state, if it's really in a coherent superposition. And what you should
see then, if you look at the probability, is some fringes and the fringes get
tighter and tighter and tighter together as the initial displacement or the
number of photons gets larger and larger. And of course those are more
easily destroyed by decoherence, and that's how you kind of recover the
classical situation for N bar is a million or something.
So how do you measure that in an experiment? Well, what you have to do is
look at something like this. This is the Wigner function, as a function of
the conjugate variables -- the x and the p, or the Q and the phi. What you
want to do is you want to measure the photon number parity. That's
what the Wigner function is, if you haven't ever heard of it. So it's that
operator, E to the I pi A dagger A. Okay. And if you have fringes here,
then you -- that corresponds to this interference in the two blobs and you
know you've made a real cat. And ours is a cryogenic cat, so it looks like
that. These are the whiskers on your frozen cat.
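Wigner-as-displaced-parity can be computed directly: W(beta) = (2/pi) <D(beta) P D†(beta)>, and for an even cat the central fringe is positive while neighboring fringes dip negative. A numpy sketch (alpha = 2 and the sampled points are assumed for illustration):

```python
import numpy as np

dim = 60

def destroy(d):
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

def coherent(alpha):
    n = np.arange(dim)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.exp(log_fact / 2)

def displacement(beta):
    """D(beta) = exp(beta*a† - conj(beta)*a), via the Hermitian generator."""
    a = destroy(dim)
    A = beta * a.conj().T - np.conj(beta) * a     # anti-Hermitian
    w, V = np.linalg.eigh(-1j * A)                # A = i * (Hermitian)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

P = np.diag((-1.0) ** np.arange(dim))             # photon-number parity

def wigner(psi, beta):
    """W(beta) = (2/pi) <psi| D(beta) P D†(beta) |psi>: displaced parity."""
    D = displacement(beta)
    return (2 / np.pi) * np.real(psi.conj() @ (D @ P @ D.conj().T) @ psi)

alpha = 2.0
cat = coherent(alpha) + coherent(-alpha)
cat = cat / np.linalg.norm(cat)

print(wigner(cat, 0.0) > 0)     # center fringe of the even cat: positive
print(wigner(cat, 0.4j) < 0)    # a neighboring fringe goes negative
print(wigner(cat, alpha) > 0)   # one of the two coherent-state "blobs"
```

Since the even cat has parity exactly +1, the center of the pattern takes the maximum value 2/pi, which is a useful sanity check on any truncation.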
So people have made complex oscillator states before, sort of with Rydberg
atom things and with other approaches using superconducting qubits, but in
both cases the record is sort of superpositions containing
up to maybe ten photons. So what we were able to do -- let me just check how
we're doing on time. Okay. So all right -- is we make a two-cavity
architecture. So it's this three dimensional thing. We're going to have one
of the cavities be very long-lived, 50 microseconds in this case, and the
other one very short-lived because that's going to be the one which we use to
measure the state of the qubit and then infer what's going on in the other
cavity. And we combine it with one of these parametric amplifiers so we can
do submicrosecond single-shot projective measurement on this qubit which is
simultaneously coupled with this dispersive Hamiltonian to both the storage
cavity which is the A and the readout cavity here, which is the B. Okay. So
again, the interaction we have here is like this. It's this A dagger A times
sigma Z of the qubit. So the B mode, the readout cavity is the classical
readout hardware now. We can just eliminate that from the discussion.
And here's what that Hamiltonian means. If I measure, let's say, the
absorption spectrum of the qubit, I get a peak. But if I have populated the
storage cavity with some photons, I get other peaks. And those other peaks
correspond to the energy it takes to flip the qubit, so this peak is flipping
the qubit if the cavity is in vacuum. This peak corresponds to flipping the
qubit if the cavity has exactly one quantum of energy in it. This peak if it
has exactly two quanta, and so on.
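This "number splitting" of the qubit line is a direct consequence of the dispersive Hamiltonian above, and a toy spectrum reproduces it (all parameters below are assumed for illustration): peaks at the qubit frequency minus 2*chi*n, each weighted by the Poisson probability of finding n photons in the cavity.

```python
import numpy as np
from math import factorial

# Toy number-splitting spectrum (illustrative parameters): a coherent state
# with nbar photons in the storage cavity splits the qubit line into peaks
# at wq - 2*chi*n, weighted by the Poisson distribution of n.
wq, chi, gamma = 5.0e9, 2.0e6, 0.2e6    # Hz: qubit freq, shift, linewidth
nbar = 3.0

def poisson(n):
    return np.exp(-nbar) * nbar ** n / factorial(n)

def spectrum(w, n_max=20):
    s = 0.0
    for n in range(n_max):
        s += poisson(n) * (gamma / 2) ** 2 / (
            (w - (wq - 2 * chi * n)) ** 2 + (gamma / 2) ** 2)
    return s

# When the peaks are well resolved (2*chi >> gamma), each peak height simply
# reports the photon-number probability of the cavity field.
heights = [spectrum(wq - 2 * chi * n) for n in range(6)]
probs = [poisson(n) for n in range(6)]
print(np.allclose(heights, probs, rtol=0.05))   # True
```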
So I can do a non-demolition measurement of the photon number. If I try to
flip the qubit here and it does flip, I know that it's N equals one. You can
play lots of really interesting games here. Okay.
And this is again a sort of interesting Hamiltonian and a regime which has
previously only been accessed by the Rydberg atom guys in the one group that
can do those experiments.
So the first thing that our guys proposed and that we did -- let me do this
quickly so we get to show you the encoding scheme -- is basically we want to
go from a qubit that's in an arbitrary superposition, any location on the
Bloch sphere, to a state where the qubit is going to be in the ground state.
Okay.
But the cavity is going to be in that same superposition of alpha and minus
alpha. I want to make deterministically an arbitrary superposition of the
oscillator displaced both ways at the same time. Okay. And so using that
Hamiltonian, there's a sequence of gates that you can come up with that does
this.
And again, my point I think is that these gates use that Hamiltonian that
we're given and the ability to do like displacements and pi pulses on the
qubit and the cavity and it's a perfectly nice gate language, but not
something that I think people really thought about before. So we can do it.
There's a picture of a cat with alpha about 2.7, so something like seven
photons on average or something. And you see the two
blobs right and left in these nice alternating fringes here that tell you
it's a true cat state of the field. And this is what we get if we start
with, let's say, the qubit on the equator of the Bloch sphere, G plus E. In
this particular scheme, if we start with let's say E, we get only this left
blob. If we start with G, you get only the right blob and you can tell
whether you're this way or that way on the Bloch sphere by the sign of the
interference fringe there at zero. Okay. And this is a reversible
operation. We have done it forward and backward. It's actually already in
the first implementation 80 percent fidelity so the mapping is fairly good
and can get much better we think.
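Why this mapping target is special is easy to check in Fock space. The sketch below (standard coherent-state algebra, not the group's code) shows that the normalized superpositions of the oscillator displaced both ways, |alpha> plus or minus |-alpha>, contain only even or only odd photon numbers respectively:

```python
import math
import numpy as np

# Sketch (not the group's code): the Fock amplitudes of |alpha> show that
# |alpha> + |-alpha> has support only on even photon numbers and
# |alpha> - |-alpha> only on odd ones.

def coherent_amps(alpha, nmax=40):
    """Fock amplitudes <n|alpha> of a coherent state, truncated at nmax."""
    n = np.arange(nmax)
    facts = np.array([math.factorial(int(k)) for k in n], dtype=float)
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(facts)

alpha = 2.0
plus = coherent_amps(alpha) + coherent_amps(-alpha)    # even cat (unnormalized)
minus = coherent_amps(alpha) - coherent_amps(-alpha)   # odd cat (unnormalized)
plus /= np.linalg.norm(plus)
minus /= np.linalg.norm(minus)

print("odd-photon weight in even cat:", np.sum(np.abs(plus[1::2])**2))
print("even-photon weight in odd cat:", np.sum(np.abs(minus[0::2])**2))
```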
And then we could make bigger cats, so you just push harder before this
operation begins on your oscillator and here the fringe is still clearly
going negative and showing you non-classical interference at something over a
hundred photons. That's the biggest Schrodinger cat without [indiscernible].
And by sort of making more complicated protocols with those things, you
can make three-component cats or four-component cats like this. So this is
an experimental measurement and this is a state of -- this is a quantum state
no one has ever made before, I claim.
How are we doing on time?
>>: I'm getting a little --
>> Krysta Svore: Four more minutes.
>> Rob Schoelkopf: Okay. So what's special about those things for error
correction? If I have one of these coherent states and I look at sort of the
qubit spectrum which tells me the probability of all the photon numbers, I
see this kind of Poisson distribution that you will know and love very well,
right?
If I make a positive cat or a negative cat, remember, I said this one had
positive parity and this one had negative parity. And when you look, what
that means is there's sort of a Poissonian envelope, but this state has
essentially only the even photon numbers and this one only the odd photon
numbers. So here's how this error correction scheme is going to work. This
four-component cat can be thought of as a superposition of two things. It's
the superposition of the positive parity cat like this, that's this, and also
the positive parity cat that's like that, and these are two orthogonal basis
states. Okay. To the extent this is a continuous variable system, right,
I'm assuming here that I've pushed my Gaussian blobs far enough away that I
can ignore the tiny overlap. So if you want that to be ten to the minus six,
use on average 4 or 5 photons. It's fine. So okay.
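The arithmetic behind that remark is one line of standard coherent-state algebra. In this sketch, the overlap of the two blobs is |<alpha|-alpha>| = exp(-2 nbar) with nbar = |alpha|^2, so the overlap probability exp(-4 nbar) is already below ten to the minus six by four or five photons on average:

```python
import math

# Sketch: standard coherent-state overlap. |<alpha|-alpha>| = exp(-2 nbar)
# with nbar = |alpha|^2, so the overlap probability exp(-4 nbar) dies off
# very fast with mean photon number.

def blob_overlap(nbar):
    """|<alpha|-alpha>| for mean photon number nbar = |alpha|^2."""
    return math.exp(-2.0 * nbar)

for nbar in (1, 2, 4, 5):
    print(nbar, blob_overlap(nbar), blob_overlap(nbar)**2)
```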
And so I can make any superposition of these two and I have a coherence. I
have a qubit worth of information I can store, but I know that whatever state
I've encoded, it should be in only even photon numbers. Okay. And now of
course what's the idea? If I start here or there or in any superposition and
there's loss because my system is not perfect yet, I will go to that. See
the difference? Blue fringe versus red fringe. Which is even versus odd
photon number.
So if I can track the parity of the photon number, tell me is it even or odd.
Don't tell me the number. Tell me the number, game's over.
>>: You destroyed something.
>> Rob Schoelkopf: Absolutely. I've projected out of this big Hilbert
space into a definite eigenstate of the Hamiltonian. If I can do this
measurement, and I'll show you really quickly here that we can do that, what I do
is I project from this big Hilbert space into a two-dimensional subspace,
the code space or the error space. So this is really the essence of quantum
error correction. All right.
Here's the idea. We're going to start in even parity. There's decoherence,
which is the evil juggler. He may at some random time cause a photon to be
lost, and we go from even parity to odd parity and back. And if I can track
it, I can try to defeat the evil juggler.
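The evil juggler can be caricatured in a few lines. This toy Monte Carlo (all rates are assumed, purely illustrative) models single-photon loss events occurring at roughly nbar times kappa, each of which flips the photon-number parity and bounces the state between the even (code) and odd (error) subspaces:

```python
import random

# Toy Monte Carlo of the "evil juggler" (rates assumed, purely illustrative):
# photon-loss events occur at roughly nbar * kappa, and each lost photon
# flips the photon-number parity.

random.seed(1)
kappa = 1.0e4     # cavity decay rate, 1/s (assumed)
nbar = 4.0        # mean photon number (assumed)
dt = 1.0e-6       # time step, s
steps = 2000

parity = +1       # start in even parity (the code space)
jumps = 0
for _ in range(steps):
    if random.random() < nbar * kappa * dt:  # a photon is lost this step
        parity = -parity                     # even <-> odd
        jumps += 1

print("loss events:", jumps, "final parity:", parity)
```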
So here's a measurement of the parity using this system. Here's the way it
goes. So we have two components. We have the one ancilla qubit. And we
have our cavity state. So if we start in a state of -- I'll just show you
here a state which is unknown parity, so a coherent state that has no
particular parity. And we basically do a pi over two pulse on the qubit.
Now this dispersive Hamiltonian acts, and what happens, remember, the qubit
frequency is shifted by a certain number of hertz for each and every quantum I
have added into my oscillator. Okay.
So my pancake here, that's the N equals zero, N equals one, N equals two, so
on. I've color-coded, of course, the even and the odds red and blue. And at
this point we have a very large entanglement between all the photon number
states and my ancilla. But that Hamiltonian has a symmetry.
We can make things where the shift per photon is almost exactly equal. And
so after a magic amount of time, which is 200 nanoseconds in this case, we
end up with all the even photon number states pointing this way on the Bloch
sphere and all the odd ones pointing that way. Now we've really made a
Schrodinger cat. My spin, my qubit is entangled with many photons in the
cavity in two distinct, macroscopically different states.
Now if I do a pi over two pulse and I projectively measure G or E, I can
force myself into one of these states. And so if you do the experiment, you
see it, it works, the fidelity is actually quite nice. And you can repeat
this measurement many times. It's a QND measurement. So this will be like
my last slide because I'm really extending the time.
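The logic of that parity mapping can be written out directly. In this idealized sketch (no decoherence or measurement error modeled), the qubit phase advances by chi times t per photon after the first pi/2 pulse, so waiting the magic time t = pi/chi gives pi of phase per photon; the second pi/2 pulse then maps even photon number to G and odd to E:

```python
import math

# Idealized parity-mapping logic (no decoherence modeled): phase per photon
# is chi * t; at the magic time t = pi/chi that is pi per photon, so the
# closing pi/2 pulse maps even n -> G and odd n -> E.

def parity_readout(n_photons):
    phase = math.pi * n_photons              # chi * t * n at t = pi / chi
    p_excited = math.sin(phase / 2.0) ** 2   # Ramsey fringe
    return "E" if p_excited > 0.5 else "G"

for n in range(6):
    print(n, parity_readout(n))
```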
So we start here in a state. We make a measurement and put it in a cat.
Let's say it comes out odd in the beginning. That's what the Wigner function
of the state should look like. And then each of these dots here is a
measurement of the photon number parity. And what we can do is track
along; the purple thing is a filter that's using the results of the
measurement to infer whether the state is even or odd. This is what we know
must be happening inside the cavity. So it's a realtime tracking of the
errors as photons are lost one at a time. So again, each of these jumps we
know a photon has been lost, but we don't know what the photon number is. We
only know that we've jumped from code space to error space and back again.
Okay. And what we're doing right now with this is we're actually trying to
see if using this information allows us to extend the lifetime of the
information that we've stored inside. And I think, you know, from that,
we're going to be learning all sorts of interesting things about how you can
really do these kinds of things. And you know, what really matters in the
hardware sense for quantum error correction.
So as I said, this is the first measurement, I think, of photon parity in any
system. It's the first repeated use of an ancilla in superconducting qubits,
the first quantum jumps of light in circuit QED, and I think maybe most
importantly, it's the first realtime tracking of the natural errors in a
system by measuring a syndrome, so there are many examples of quantum jumps.
This is the first example of quantum jumps that are not between energy
eigenstates but between two degenerate subspaces.
Anyway, let me thank all the people who do this work and if there's any time
left for questions, I'm happy to entertain. Thanks.
[Applause]
>> Krysta Svore: So let's take 1 or 2 questions.
>>: You just remember the parity and use it or back it up with a computation
later or are you intending to put photons back?
>> Rob Schoelkopf: So yeah, that's a good question. Obviously the
interesting thing here is the decay has two components. There's the
deterministic loss of alpha as E to the minus kappa T. Okay. The energy
rings down in a smooth way. And then every once in a while the environment
decides, oh, a photon has been lost and you change the parity. So if we
encode and the separation is big enough, we can watch and just use this
tracking and as long as our states are still non-overlapping enough, we can
extract the information at that point in time and everything is fine. We
just have to use it in a -- well, you don't want to do it in a post-selected
way, but use the information in a realtime way.
If you really want to make it live forever, you want to also put energy back
in. But to put the energy back in, you can't just displace again. What you
need to do is add photons two at a time or in the end, four at a time because
really, our code states in the end -- I've sort of glossed over a few details
here. Our code states are really whether you're in zero, 4, 8, or 2, 6, 10.
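That photon-number structure can be sketched directly (my labeling, not the group's notation): the two logical code words live on {0, 4, 8, ...} and {2, 6, 10, ...}. Both are even, so one lost photon sends either code word to an odd photon number, and parity flags the jump without revealing n itself:

```python
# Sketch of the mod-4 code-word structure (labels are mine, illustrative):
# code words on {0,4,8,...} and {2,6,10,...} are both even, so a single
# photon loss lands in the odd "error space", detectable by parity alone.

def subspace(n):
    if n % 2 == 1:
        return "error space (odd)"
    return "code word 0 (n mod 4 == 0)" if n % 4 == 0 else "code word 1 (n mod 4 == 2)"

for n in (0, 2, 4, 6, 8, 3, 7):
    print(n, "->", subspace(n))
```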
So you want to stay where you are but put energy back in and then interleave that
with making a measurement of the parity to know when it's jumped. And so my
colleague, Michel Devoret, using sort of parametric down conversion things
has shown that he continuously -- can continuously pump a two component cat.
That paper will be submitted soon.
And you know, this basically uses a Josephson junction again, and its
nonlinearity to do, you know, fancy quantum optics things you can't
usually do in the microwave domain. So there's a lot more to be done along
the lines of doing these things and I just described how to correct a memory,
not how to do logical gates and stuff, and we're working on those kinds of
things. And there are some neat ideas.
>> Krysta Svore: So let's thank Rob again.
[Applause]
>> Krysta Svore: Okay. So our next speaker, continuing in the device space,
we're going to hear from David Reilly on technology for large scale quantum
computing. So let's welcome David.
>> David Reilly: Thanks, Krysta. So in a similar way to Rob's talk, this is
going to be the advanced course of yesterday's talk with quite a lot more
detail, but I would like to keep it informal, so yeah, please interrupt and
ask questions.
I'm kind of a little scared that this is a pretty diverse audience so I'm
going to sort of go into some details in places but hopefully, if you can
sort of, I don't know, amuse yourself for a few slides, it will refresh again
and then something interesting will appear. Let's hope.
So, okay. So the topic here is really exploring the issues related to
controlling ultimately what we want, a large scale quantum computer. Lots of
qubits. And trying to understand what the problems are in doing that and I'm
really going to focus on this kind of layer here, the layer that interfaces
directly with the physical, quantum physical layer where the actual qubits
are. And try and explore some of the issues related to really this problem.
So what's the complexity class of this classical hardware layer that you
would need to control a useful machine, a useful quantum computer?
I think for sure that even with error correction, it's actually an NP-hard
problem. To do it optimally. Okay. Layout, routing, wiring, optimal pulse
shaping, clock distribution, timing, sequencing, all of this kind of stuff is
known to be NP-hard. We don't have to do it optimally, but we have to go
close; otherwise this is going to be an extremely difficult problem. And
it's kind of ironic that we're trying to build a machine to solve certain hard
difficult problems. In order to build that machine, we have to solve hard
difficult problems. And it would be, I mean, unfortunate but kind of
amusing -- this is meant to be God laughing at you -- if it turned out that,
you know, the barrier between building a quantum computer that can then
solve hard problems was that this hard stuff got in the way. Hopefully
that's not going to be the case.
So if you don't believe me that this is hard, you only need look at what's
involved in doing routing of wiring for the kind of printed circuit boards
that are in your computer. This is just work from my group here and I'm
starting to really appreciate what's involved in doing this kind of wiring.
This is a pretty simple circuit actually. It's a single FPGA and some kind
of slots for other peripherals and the like. It's a few layer printed
circuit board, but it takes the computer some serious amount of time to try
and find not the optimal solution but at least something within some design
rules and parameters. And you know, it's kind of known that optimal routing
is really intractable. These are hard problems, let alone the kind of
complexity that we're imagining for a large-scale quantum computer.
So I believe that it's important, recognizing that this stuff is hard, to
really try and understand at an early stage, way before we need to in some
sense, what would be the smart way to put together
the control circuitry and what is clearly the way that's just not going to
work and some people already I think have a pretty good idea of that but
there's a range of opinions out there in the community and I think that it's
kind of time to start to flesh out some of these issues.
I like this quote from Michael Freedman in this recent article in the New York
Times. It's actually the conclusion of this article where, asked what
he might do with a working quantum computer, he responds that the first
thing he'd do is program it to be a model of an improved version of itself. And
I think that that's exactly right. This is going to be some kind of
bootstrapping evolution as we start to build these machines and figure out
how to use them and how to wire them and how to build a control circuitry and
just like the evolution of classical computing, it's going to take some
evolution from there.
The outline of this talk is to really drill down to some of the details. I
want to go over how you control and read out solid state qubits and I'll tell
you some details there. There's some interesting convergences that have only
really started to happen over the last few years where the time scales
involved for coherence, for readout and the like are more or less the same
across a range of different technologies and that's kind of interesting from
the point of view of control because it means that we can start to develop
generic technology that will be useful and applicable for a range of
different types of physical implementations.
I'll tell you a little bit about this architecture that I have in mind for
kind of scaling control and readout that I discussed yesterday. And then I
want to talk about, well, what is the technology, the classical technology
that is best suited for actually implementing this type of control? Is it
CMOS in silicon? Is it something more advanced? What's that parameter space
look like? And if I have time, I'll drill down into some more details here
about actually what we have in mind, what it looks like, and show you some
pictures of a few cool things.
So this idea of convergence, if you go back ten years, 15 years, and go to a
conference on quantum computing where experimental approaches were
presented, it was extremely diverse in terms of what people imagined
these machines would ultimately look like. The time scales involved for
doing single qubit rotations, the coherence times, the T1 lifetimes, the
readout times stretched many, many orders of magnitude. And what's happened
over the last decade is that some implementations have kind of fallen off the
table. And others have caught up and others have kind of jumped ahead but
there's largely a convergence at least on most of the parameters between the
various flavors of solid state qubits.
I think the superconducting qubits are at the moment way ahead, as Rob
just showed. They're doing extremely interesting and sophisticated things.
They're leading the race there. But in terms of just the time scales that
are relevant for control, I think that they're kind of comparable between
these different systems.
For instance, for readout, as I mentioned yesterday, readout now is really
very similar for different systems. It's about detecting the amplitude or
the phase of a microwave signal. And so how this works just briefly, it's
actually a technique that was pioneered by Rob going back some time now, I
think. That's 15 years, something like that.
Yeah.
(Laughter)
It's been around for a while.
>>: [Indiscernible] point that out.
>> David Reilly: So but it's become the standard way in which people do
measurements now. All qubits in a solid state are basically read out using
this technique. And what it does is take a microwave tone, shine it down a
coax cable onto an impedance matching network, and down here, you could have
a device, an electrometer, something like a single electron transistor, or
it may even be a resonator. The impedance matching network just transforms
that impedance so that it looks like 50 ohms or some characteristic impedance
that you want it to, and then the impedance is going to change depending upon
the charge state or the qubit state of the device.
What you want to do is detect a change in impedance. And if you have a change
of impedance, then what you have is a reflected signal, some partial reflection
here, that's proportional to the change of impedance. And so readout amounts
to detecting that change in phase or amplitude of the microwave signal that
comes back, up your dilution refrigerator, through a chain of amplifiers,
gets mixed with a carrier signal and what spits out the end is just a signal
proportional to the change in impedance, which is then proportional to the
state of the qubit.
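The reflectometry arithmetic is compact. In this illustrative sketch (the impedance values are assumed, not from any real device), the reflected amplitude off the matching network is Gamma = (Z - Z0) / (Z + Z0), so a qubit-state-dependent impedance shift shows up directly as a change in the reflected microwave signal:

```python
# Illustrative reflectometry arithmetic (impedance values assumed): the
# reflected amplitude is Gamma = (Z - Z0) / (Z + Z0), so a qubit-state-
# dependent impedance shift changes the reflected signal.

def reflection_coeff(Z, Z0=50.0):
    return (Z - Z0) / (Z + Z0)

Z_state0 = 50.0   # matched when the qubit is in one state (assumed)
Z_state1 = 55.0   # small impedance shift in the other state (assumed)

g0 = reflection_coeff(Z_state0)
g1 = reflection_coeff(Z_state1)
print("Gamma(state 0) =", g0)
print("Gamma(state 1) =", round(g1, 4))
```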
Even with the best amplifiers around and at these low temperatures, you're
still adding considerable noise to the system. Classical boring noise that
has nothing to do with the state of the qubit. And there's a big push in the
community to try and squash that noise but it's already pretty good. And
that leads to integration times on the order of less than a microsecond
depending on the details. So this is actually a table for spin qubits only,
and you can see that there's a range here, maybe the state of the art is
maybe a little better than this one microsecond now. I think that's
800 nanoseconds from Amelia Kirby's group and similar times, you know, are
kind of popping up in various different contexts. Again, I think the
superconducting guys are maybe a factor of five or something better than
that, but it's all between 100 nanoseconds and one microsecond as the state of
the art.
From the point of view of algorithms and error correction, the time that
you're spending doing readout is important relative to the coherence time.
So if you're spending an enormous amount of time, let's say you're working
with one of these qubits or something down here in the milliseconds, you're
doing a lot of integrating of that readout signal in order to figure out what
the state of the qubit is. Meanwhile, your other qubits are decohering. So
this is an important parameter, so it gets lost a little bit that readout
takes time and not all systems are the same. But more than that -- there are
actually fundamental limits on how fast you can read. So the two parameters
that control how fast you can read are how strongly coupled the readout
detector is to the qubit and how much noise you have in the system. Turns
out that both of those parameters, the coupling and the noise are up against
quantum mechanical limits. So in the case of the noise, there's some quantum
noise. This could be shot noise of electrons in a detector. It's equivalent
to the photon noise that you might have in a cavity or in an optical system,
and you can only couple the readout detector so strongly to your qubit, even
imagining that you can turn it off, do something coherent, turn it on, do a
strong projective measurement. You can't turn it on so strongly, you know,
there are limits on how strong that can be.
It actually turns out -- and this is not something that I think is very well
understood -- but it turns out the fine structure constant sets some limits,
at least for electromagnetic coupling, on how strongly the readout detector can
be coupled on the scale of nanometers to these types of qubits of that scale.
And I think it's those two parameters that mean that the time scales for
readout are converging let's say around the hundred nanoseconds or so
ballpark. So that's an important number I think to have in mind for how
algorithms will proceed. Yeah. So you can see that that's the case here for
the superconducting devices.
Just like readout, there is a convergence of control approaches. And the
semiconductor community for a long time has been using what people call DC
pulses, which is a kind of odd term. It basically means rectangular waves
that tilt and rock the potential, the chemical potential, of a semiconductor device.
Spin qubits and semiconductor qubits are evolving now to what superconducting
qubits have been doing for a long time and that is controlling the state of
the qubit using microwaves in a very similar way to how one does nuclear
magnetic resonance. So if you know anything about NMR or ESR, it's that kind
of approach that's now being used more or less for all of these solid state
qubits. And the way it works is you take a carrier. The carrier frequency
is set to be the energy level splitting or close to the energy level
splitting of your qubit. There's an envelope that's mixed with that carrier
to produce a pulse, and the width and amplitude of the pulse then sets
what's called the tipping angle, the superposition between the two basis
states. So if I have some pulse here, depending on how wide that pulse is in
time, how large it is in its amplitude, that's going to control the angle of my
state vector on the Bloch sphere. So as a function of increasing the width
of the pulse, what you'll see is what are termed Rabi oscillations. I
would love to actually -- this is -- someone should do this. I'd love to
image search Rabi oscillations and go back in time year by year over the last
ten years and see what the images look like.
You'd see the whole community growing, and the images all start to converge and
resemble the same number of fringes, the same kind of duration of pulse
width, the same kind of contrast. That would be a cool thing to do.
But if you have seen these before and you wondered what that wiggling thing
that's decaying is, it is indeed just the state vector precessing around from
north pole to south pole as you increase the width or the amplitude of that
pulse that's driving. So it turns out if you can also control the
phase of it, then you can have arbitrary control. You can control and put
your state vector anywhere you like on the Bloch sphere.
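The textbook version of that pulse-area arithmetic is short. In this sketch (the 1 MHz Rabi rate is an assumed number, not device data), the tipping angle is the pulse area theta = Omega times width, and the excited-state probability traces out sin^2(theta/2), the Rabi fringes, as the width or amplitude grows:

```python
import math

# Textbook Rabi arithmetic (the Rabi rate is an assumed number, not device
# data): tipping angle = pulse area = Omega * amplitude * width, and
# P(excited) = sin^2(theta / 2) gives the Rabi fringes.

RABI_RATE = 2 * math.pi * 1.0e6   # Omega per unit amplitude, rad/s (assumed)

def p_excited(amplitude, width):
    theta = RABI_RATE * amplitude * width   # pulse area = tipping angle
    return math.sin(theta / 2.0) ** 2

# A pi pulse (area pi) takes north pole to south pole; area 2*pi returns it.
print(p_excited(1.0, 0.5e-6))
print(p_excited(1.0, 1.0e-6))
```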
As Rob mentioned, the superconducting qubits are approaching a millisecond
and semiconductor spin qubits are already around that ballpark. It depends
on the particular flavor. Maybe electron spins are hundreds of microseconds.
Nuclear spins are at the second mark, and some are in between the two,
depending on what system. It's kind of like a millisecond or so.
There's still a big variation in the time it takes to do single qubit
rotations and two-qubit gates. I think that's the widest parameter. So that
number actually ranges from picoseconds for exchange coupled electron spins.
There's data that shows that that's in the tens of picoseconds. It could
be faster, but it's hard to detect, hard to measure, up to even tens of
microseconds or hundreds of microseconds for, again, some semiconductor
systems, with the superconducting guys sitting at the few
microsecond level and getting faster.
Again, a convergence though of what it actually looks like from a control
point of view, what you need in the lab to do these types of experiments and
what we want to integrate into some kind of scalable approach to control many
different qubits.
So as I showed yesterday, the universality of quantum computing means that
from a control perspective, life's not so challenging. You really just need
to be able to map some subset of gates, single qubit and two qubit gates into
a control wave form that implements those gates. For the qubits, really all
you're doing is moving that state vector around the Bloch sphere or bringing
two qubits together for coupling and that's being controlled by a microwave
pulse.
So depending on which gate you want to execute, you're going to play a
different microwave pulse to your qubits. The order that you play those
pulses is how the algorithm is going to be implemented. So if you take a
circuit model diagram like this and think of these different symbols as
being of course different gates, but now think of them as being microwave
pulses with different shapes and different width and different amplitudes,
then what you need is a technology that can basically take a family of
different wave forms and then steer them to the appropriate qubit at the
appropriate time. And so what's really evolved I think is an approach that
separates out the generation and playing of those different wave forms that
execute different gates from the problem of steering those pulses to the
appropriate qubit at the appropriate time. And the reason to kind of
separate this is that these guys, you know, microwave signals, they're hard
to generate, they're expensive to generate, we don't want to generate many of
them. We'd like to generate as few as possible. And trying to have one of
those signals or a bunch of those signals tied to each qubit is a real pain.
I want to kind of separate that and let these guys fly on a bus and then pick
them up and drop them where they need to go at the right time.
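This separation of waveform generation from steering can be caricatured as a routing table. In this toy sketch (all names and the schedule are hypothetical), a small shared library of waveforms is generated once, and one entry per (time slot, qubit) says which waveform the switching matrix should drop onto which qubit, instead of wiring a generator to every qubit:

```python
# Toy sketch of the bus-plus-switching idea (all names and the schedule are
# hypothetical): shared waveforms ride on a bus, and a routing table tells
# the switching matrix which pulse to drop on which qubit at each time slot.

waveform_bus = {"X_pi": "wf0", "X_pi/2": "wf1", "CZ": "wf2"}  # shared sources

# Each time slot maps qubit index -> gate, like columns of a circuit diagram.
schedule = [
    {0: "X_pi/2", 1: "X_pi"},
    {0: "CZ", 1: "CZ"},
    {1: "X_pi/2"},
]

routing = []   # switch settings: (time slot, qubit, waveform id)
for t, column in enumerate(schedule):
    for qubit, gate in sorted(column.items()):
        routing.append((t, qubit, waveform_bus[gate]))

for setting in routing:
    print(setting)
```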
The same thing applies for readout. If you want to read out qubit three,
then I need some kind of routing layer here that switches qubit three to a
readout bus and there's some addressing here that makes that happen. So
that's sort of the proposal. That's the dream. It's actually pretty
challenging to imagine what some kind of switching matrix or routing matrix
might look like that can steer analog wave forms at gigahertz frequencies and
can also connect them to a readout bus without any insertion loss. Any loss
here is going to translate into a loss of fidelity of the readout signal and
so we care about that.
So what kind of technology would you have in mind for implementing that kind
of a scheme? At least to get started. So as I showed yesterday, here's a
kind of cartoon Mickey Mouse version of that scheme projected on to today's
dilution refrigerator technology, and you can ask the question: Where should
I locate the different components of this scheme? Where should the microwave
generators be? Where should the switching, routing be? And the first thing
is the switching matrix that steers these pulses, that should be very, very
close to the qubits. Ideally, it should be integrated in the same substrate,
if not the same device. Because then you can use lithography to make all of
these switches. You can bring in very few inputs. You only need the number
of inputs equal to the number of different gates, the number of different
wave forms. But then this is a pretty complex object. It wants to take some
addressing string and then spit that out into however many qubits you have,
you're going to need the same number of switches or even more. So that's a
complicated thing and you want to make that with lithography in a clean room.
Not wire it up with cabling.
So that guy should be cold.
And it should be close to the qubits.
What type of technology could you use to implement these types of switches?
This kind of routing? It's a challenge because you need broad bandwidth. You
need a bandwidth of, let's say, a few gigahertz. Depending on the details of
the pulse you need a few gigahertz. So that's hard. It's going to dissipate
some heat. You don't have a huge heat budget to deal with. You have to switch
very quickly. Okay. It's starting to get challenging. I don't think it's a
mechanical relay. We had in mind using a MEMS device like this based on
these PZT switches, but it turns out that they're pretty slow and they're not
that great at a gigahertz.
After a while, you realize the answer is already there. It's in many devices
already. It's a field-effect transistor. And we've been making HEMTs, actually,
high electron mobility transistors, but they're just -- they're
FETs built into a gallium arsenide substrate because that's also the substrate
of our qubit and we're starting to actually use them in the context of
steering pulses to the qubits.
So I don't want you to pay too much attention to this data. The point is
simply that a university group, a collection of graduate students, you know,
don't have to work too hard to make a switch that looks like this with
reasonable performance. Is that the end game? No. This is where you hand it
over to people who really know what they're doing and they can take
this concept and start to optimize the microwave performance of these types
of devices, but, already, we're getting something like 60 dB of on/off ratio
which is the key parameter. We really want to turn these switches off so
that the qubits don't see anything when they go dark and when you switch them
on, we need that high on/off ratio.
So pushing this out to high frequencies I think is pretty straightforward,
but it's not something that we've done yet.
The size of those guys can be shrunk down to below a micron. In fact, they can
be in the limit where you are starting to see ballistic transport. So very
little power dissipation. HEMTs have been demonstrated to be able to switch
on fast time scales and, you know, can pass gigahertz signals up to actually
20, 30, 40, gigahertz. So that really seems like a viable technology for
implementing something. Yeah.
>>: David, do I see the insertion loss is something like 10 or 20
[indiscernible]?
>> David Reilly: Yeah. So for that particular data, it is, and we don't
care for the control. But for readout, we care big time. So for control, I
can just crank up the signal a little bit and I can live with some
attenuation. There's usually attenuation on the line. But obviously we want
to get rid of that.
I don't see any reason why that can't be squashed. It's -- yeah. I mean
this was sort of generation one. In fact, I think we have data that's round
about 2 or 3 dB.
>>: At least for us now, some of our control pulses, we can't tolerate
20 dB of [indiscernible] on the cold stage.
>> David Reilly: Why is that?
>>: It's just too much -- the peak power is too high.
>> David Reilly: Yeah. So yeah, this is a very key aspect that if you're
going to dump heat even from your pulses, it's got to go somewhere. So I
think there's two approaches to that. One is to bring the pulses in and
bring them back out. Terminate somewhere where you can dump heat and only
pick up, you know, enough of the pulse that you need.
>>: The more launch you have, the better the reflection.
>> David Reilly: The other way to go is to say I don't want a dissipative
switch. I want a reflective switch, and I think that's the better way.
So here's a reflective switch. And it's built out of the same technology,
but it's kind of an interesting device. In fact, I would dare say that it's
an analog of a Josephson junction, but it's lossy. It's not that lossy, but
it's lossy. You can map current to voltage and inductance to capacitance,
and what it is is a voltage-controlled capacitor. But you have the same
non-linearities. In fact, you can -- if you didn't have loss, you could
think of doing some pretty cool things. So the way it works is I take a
transmission line that consists of some ground plane, separated by dielectric
from a conductor on top, and I can make the impedance of that geometry to be
more or less what I like if I have control over that geometry and dielectric
constant.
If I change the impedence, then I change the characteristic and then I change
the reflection and what happens in this device is that the ground plane is
now formed not by a metal but by a two-dimensional electron gas. Basically
the inversion layer in an FET. And so when you put a voltage on the gate and
you deplete that inversion layer, you deplete the ground plane and you change
the characteristic impedance of the device dramatically, thereby changing
the amount of reflected power.
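As a rough numerical sketch of that reflection picture (the impedance values below are invented placeholders, not the device's measured parameters): the voltage reflection coefficient at an impedance step is Γ = (Z_L − Z_0)/(Z_L + Z_0), so as depleting the 2DEG swings the characteristic impedance away from 50 ohms, the line goes from nearly matched to strongly reflecting.

```python
# Illustrative only: reflection at a transmission-line impedance step.
import math

def reflection_coeff(z_load, z0=50.0):
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_load - z0) / (z_load + z0)

def reflected_power_db(gamma):
    """Reflected power relative to incident power, in dB."""
    return 10 * math.log10(gamma ** 2)

# Switch "on" (2DEG ground plane intact): line stays near 50 ohms.
gamma_on = reflection_coeff(55.0)     # slight mismatch (made-up value)
# Switch "off" (2DEG depleted): characteristic impedance changes dramatically.
gamma_off = reflection_coeff(500.0)   # strong mismatch (made-up value)

print(f"on-state  |Gamma| = {abs(gamma_on):.3f}")
print(f"off-state |Gamma| = {abs(gamma_off):.3f}")
print(f"reflected-power contrast ~ "
      f"{reflected_power_db(abs(gamma_off)) - reflected_power_db(abs(gamma_on)):.1f} dB")
```

With these placeholder impedances the contrast comes out around 25 dB; the measured 40 to 60 dB on/off ratios quoted below imply a larger impedance swing than this toy example.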
So again, not trying too hard, this works pretty well out to reasonable
frequencies and again, you're getting something like 40, 50, 60 dB of on/off
ratio. And you can see here, okay, insertion loss is still in this
particular one not great, but I think that that's optimization. That's now
simulation of really what's going on with the electric fields and current
density.
We've taken this concept and basically are using it to do experiments because
it's useful now for the experiments that we're doing as well as for scaling
up to larger numbers. For semiconductor spin qubits, we have the added
challenge of often needing to add DC voltage levels to our microwave signals
and so we have to bring in various bias tee arrangements as well. But it
works pretty well and we're scaling it up to now a ten by ten array of these
types of switches. And I think where this is going is in addition to kind of
lifting the burden of not having to wire a separate microwave generator and
co-ax line to every single qubit, I think the bigger advantage is actually
that it allows you to calibrate the amplitude of the pulse for each qubit. So if you
don't have identical qubits, which you won't, then we need to be able to
adjust the amplitude and ideally the phase of the microwave signal for each
qubit. And so I think that this kind of switching array may allow for that.
That's something that we're trying to benchmark at the moment.
So rather than operate it as a switch where it's open or closed, you can
operate it as a programmable or variable attenuator where you need to go in,
you need to calibrate how open or closed that switch is such that the
amplitude of the pulse gives you the right rotation on the Bloch sphere that
you were after.
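The calibration idea can be sketched with numbers. Under a resonant drive, the Bloch-sphere rotation angle is θ = Ω_Rabi · t with Ω_Rabi proportional to pulse amplitude, so a per-qubit attenuator setting maps directly onto a rotation-angle correction. The Rabi rate per volt below is a hypothetical calibration constant, not a measured one:

```python
# Hedged sketch: per-qubit amplitude calibration for a target rotation angle.
import math

RABI_RATE_PER_VOLT = 2 * math.pi * 10e6   # rad/s per volt (hypothetical)

def amplitude_for_rotation(theta, pulse_len_s):
    """Pulse amplitude (V) giving rotation angle theta in pulse_len_s seconds."""
    return theta / (RABI_RATE_PER_VOLT * pulse_len_s)

def attenuation_db(v_target, v_nominal):
    """Attenuator setting that scales the nominal amplitude to the target."""
    return 20 * math.log10(v_nominal / v_target)

v_pi = amplitude_for_rotation(math.pi, 100e-9)   # pi pulse in 100 ns
print(f"pi-pulse amplitude: {v_pi * 1e3:.1f} mV")
# If one qubit needs 10% less drive than its neighbour, dial in extra attenuation:
print(f"extra attenuation: {attenuation_db(0.9 * v_pi, v_pi):.2f} dB")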
So making IQ modulators, phase shifters, out of this kind of technology seems
pretty straightforward.
So let me shift gears now and move up a stage. I've talked about this
switching array, this routing kind of matrix. I want to talk about what kind
of technology that could be used to implement the logic, the classical logic
and the data converters. We've got to make decisions based on readout
events. So that's an analog wave form that will be turned into a digital
signal from some kind of analog to digital converter and then based on that
signal, we want to make some decision about what to do next. That then
translates into analog wave forms that then go to the qubit chips.
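As a toy sketch of that readout-feedback chain -- digitize, decide, respond -- here in plain Python (every name, threshold, and sample value is invented for illustration; the real decision logic would live in FPGA firmware):

```python
# Toy model of the feedback loop: ADC samples -> threshold decision -> next pulse.

def threshold_decision(samples, threshold):
    """Integrate the digitized readout trace and compare to a threshold."""
    return sum(samples) / len(samples) > threshold

def next_pulse(qubit_was_one):
    """Choose the next control waveform based on the readout outcome,
    e.g. a corrective pi pulse only if the qubit read out as |1>."""
    return "pi_pulse" if qubit_was_one else "identity"

trace = [0.8, 0.9, 0.7, 0.85]   # fake ADC samples
print(next_pulse(threshold_decision(trace, 0.5)))
```

The latency of exactly this loop is one of the arguments below for putting the logic cold, close to the qubit chip.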
So what technology shall we build these ADCs, DACs, and logic out of? And
what temperature should it be located? So as I said, the switching array is
cold, but I think there's also benefits to the data converters and the logic
cold as well. So some of those benefits are the footprint and the scaling.
You can start integrate all of these devices. We'd really like to take
advantage of superconductivity in all of the cabling and the interconnects.
By doing that, you can make very dense interconnects that don't take up much
room and don't bring in very much heat into the cryostat.
The signal fidelity and the bandwidth of superconducting interconnects is
much higher. You can reduce the latency by bringing it cold because then
it's close to the qubit chip. I think one of the key drivers at the moment
for us is the noise performance, the improved noise performance that you have
by lowering the thermal noise for your data converters.
There's some electromagnetic interference issues that seem to improve as
well. Just because of the size and the length of the cabling that you have,
stray capacitance and the like. And there's the enhanced clock speed that you
get by cooling this stuff down.
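The noise argument is easy to quantify: the Johnson-Nyquist voltage noise of a resistive source, v_n = sqrt(4 k_B T R B), falls as the square root of temperature, so going from room temperature to 4 kelvin buys nearly an order of magnitude in voltage noise. A quick check (illustrative source impedance and bandwidth):

```python
# Johnson-Nyquist noise vs temperature, one motivation for cold data converters.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_vrms(temp_k, r_ohm, bw_hz):
    """RMS thermal noise voltage of a resistor: sqrt(4 k_B T R B)."""
    return math.sqrt(4 * K_B * temp_k * r_ohm * bw_hz)

for t in (300.0, 4.0):
    vn = johnson_noise_vrms(t, 50.0, 1e9)   # 50 ohm source, 1 GHz bandwidth
    print(f"T = {t:5.1f} K: v_n = {vn * 1e6:.2f} uV rms")

print(f"improvement: {math.sqrt(300 / 4):.1f}x less voltage noise at 4 K")
```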
>>: David, yeah, superconducting shielding is also very, very good. So it
gives you lots of opportunities to get down electromagnetics.
>> David Reilly: Yup. So using -- yeah, using lead shields or niobium
shields, you can take advantage of that as well. You know, it's a funny thing
that the astronomy community really goes to some length to cool detectors,
and similar for the kind of particle detector guys. They're cooling the
detector to improve its performance, and they're developing cryogenic
electronics so that the detectors can run at those temperatures, because
that's where the noise is lower, that's where the dark count is lower.
For our systems, it's only very recently that we've said, you know what, the
qubits have got to be cold. Why don't we also take advantage of the fact
that it's cold and lower the system noise in various other ways. It seems
kind of strange that we wouldn't have taken advantage of that earlier on but
I think it's just where the technology has evolved.
So how hard is it to get electronics, commercial off-the-shelf electronics
working at four kelvin? What can go wrong? Most people think what goes
wrong is that there's some sort of thermal contraction and that certainly
happens. And if you cool the thing rapidly, yeah, you can crack it. But
that's not the main mechanism of failure. The main reason that semiconductor
devices, silicon devices fail at temperatures, let's say, below about
50 kelvin is that in a silicon transistor, your dopants have some kind of
atomic potential well here, so this is a -- you can think of a positive
nucleus with its inner shell electrons or positive ion core is creating this
atomic potential, and then there's a bunch of bound electron states and of
course, there's an ionization energy, the energy that it takes to kick an
electron out of that potential and into the conduction band where it can then
contribute to a current flow through the device.
And what happens is at low temperatures, below sort of 30 kelvin, the thermal
energy isn't sufficient to ionize those dopants. So you don't then have
electrons in the conduction band. The electrons that were in the conduction
band are now dropping back to their atom, their donor atom, and are locally
bound, unable to contribute to current flow. So that kind of freeze out
means that the characteristics of your transistors change drastically as you
get to temperatures below sort of 30 kelvin and eventually, you don't have
any electrons in the conduction band to participate in transport.
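A back-of-envelope Boltzmann factor shows how sharp that freeze-out is. Taking the textbook ionization energy of about 45 meV for a phosphorus donor in silicon (and ignoring degeneracy and density-of-states prefactors, so this is a sketch, not a device model):

```python
# Dopant freeze-out sketch: thermally ionized fraction ~ exp(-E_ion / k_B T).
import math

K_B_EV = 8.617333262e-5   # Boltzmann constant, eV/K
E_ION = 0.045             # eV, textbook phosphorus-donor ionization in silicon

def boltzmann_factor(temp_k):
    """Crude proxy for the ionized donor fraction at temperature temp_k."""
    return math.exp(-E_ION / (K_B_EV * temp_k))

for t in (300, 77, 30, 4):
    print(f"T = {t:3d} K: exp(-E/kT) = {boltzmann_factor(t):.3e}")
```

At 300 K the factor is of order 0.1; by 30 K it has collapsed by seven orders of magnitude, which is why the transistor characteristics change so drastically below that temperature unless, as described next, a large electric field does the ionizing instead.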
What we found is that of course that's true but what's missing there is the
presence of large electric fields. And if the electric field is sufficiently
large, then the field itself can ionize the electrons and keep them in the
conduction band even down to temperatures well below four kelvin. Okay. So
that's an effect that's known, and it turns out that I guess out of kind of
luck, the evolution of transistors these days to high K dielectric materials
and to much smaller gate dimensions means that the electric fields are
naturally already sufficiently high that you can operate at low temperatures.
So the challenge then is to say, well, I want to work with just the
transistors; everything else, all the supporting circuitry, power
conditioning, clocks -- that's going to fail, so you've got to get rid of that
stuff. Put that at room temperature or higher temperatures and just work with
effectively bare die chips. That's what we've been doing in my lab for the
last sort of 6 to 12 months and these are some of the things that we're
demonstrating. This is an FPGA, an ADC, and a DAC integrated on a six-layer
board. Lots of effort to partition analog and digital signals, and that's
running at four kelvin in an ultra high vacuum in a dilution fridge. Works
pretty well.
Is it compatible with qubits? These things are pretty noisy. You know,
you've got large signals there that are running these types of FPGA and
digital devices. Does that generate a heck of a lot of noise? Is it compatible with
quantum systems?
Well, as far as we can see, at this point in time, and we're only working
with a single qubit, but we don't see any change in the parameters at least
of spin qubit devices. So we've --
>>: So your FPGA and DAC, they're all sort of running at this low temperature
or you have [indiscernible]?
>> David Reilly: The FPGA and the data converters all integrated on this
circuit board are running at four kelvin and the qubit is running at ten
millikelvin, but they're in the same --
>>: FPGA with your high K?
>> David Reilly: It's commercial off the shelf. It's a Xilinx. And I
can tell you the details. And there are some tricks to keeping them alive.
You can reprogram them at four kelvin and they live, but there's some tricks.
Yes?
>>: Do you know what official temperature [indiscernible]?
>>: Published.
>> David Reilly: Oh, yeah, I do. It's minus 30 C or minus -- yeah, minus
40 degrees C. Yeah. And I think that that comes mostly from mechanical
issues related to, yeah, contraction and yeah.
Okay. So if you bring it all together, how does it work? This is, so the
FPGA at four kelvin routing and switching at millikelvin -- I haven't talked
about this, but this is frequency division multiplexing of readout signals
also superconductivity on sapphire at millikelvin, and a spin qubit device.
You can bring them all together, wire them up, and then, if you like, open a
secure shell to your FPGA in the bottom of your dilution fridge, ask it what the
temperature is down there, and then send some instructions to direct these
pulses from one electron or the other.
And so what we see in the data is that you'll see sort of these lines here
that correspond to transitions of single electrons. For spin qubits, this is
the typical diagram that's used to tune up the potential landscape of a spin
qubit and what this shows is that this doubling corresponds to where we're
steering these pulses basically to surface gates on the chip. So it more or
less works. We don't see really any change in the parameters of the qubit at
all.
So, you know, inspired by how straightforward it was to get this
stuff working, we're kind of pushing ahead now and really trying to develop
more sophisticated data converters. We'd like to put arbitrary wave form
generators at four kelvin with greatly improved noise performance. They're
also much cheaper, even if you factor in the labor costs of students and
things like that. We pay them a lot less in Australia, you know.
This is a kind of motherboard solution with a series of daughter cards. You
can choose different bandwidths. We're using DACs at the moment that are
kind of tailored for spin qubits but there's no reason we can't substitute
them out for high-bandwidth DACs that are more appropriate for the
superconducting community as well. That's what it looks like in, again, in
the fridge. You can see there's some power distribution that has to come
down from room temperature. There's two co-ax lines. One's a clock. And
the other one is the digital communication channel for sending and receiving
data from the system.
>>: David, in that picture, the cover plate is the thing that's at four K,
right?
>> David Reilly:
Right.
>>: Looks like there are chips there, kind of like -- that are kind of
hanging out there.
>> David Reilly: These guys. Yeah. I should show you a picture of what
this copper frame mount is. We designed that to push hard up against the
various chips.
Now, you could ask the question, when you say four K, what's the actual
internal temperature if I were to put some kind of thermometer on the chip?
Well, obviously we can't do that. I mean, you can -- if you want to, you
could dissolve the epoxy off the Xilinx processor and try and
get at it, but good luck getting it to work after you do that.
So we don't know. We don't know what the temperature is inside. It probably
self-heats a little bit. At the moment, that's a good thing. I think that
that keeps it alive a little bit. It's probably not at four kelvin. We think
it's probably around about maybe 10, 15 kelvin at the moment.
>>: The goal of the copper plate is that it's --
>> David Reilly: It's [indiscernible].
>>: But it isn't -- you're trying to get it in contact with every one of the
chips on --
>> David Reilly: The key ones that really benefit from being cold. So there
are some things there that we don't care. We just want them close. We want
them on the printed circuit board, and it's really about interconnect and
wiring density and they just need to be located there. We don't care what
temperature they are because the performance doesn't matter.
But for the DACs, we want them as cold as possible because the architectures
of DACs that we're designing for are made to take advantage of the fact that
you're two orders of magnitude lower in temperature than room temperature.
So those guys should be thermalized.
And you know, you use the usual tricks that the low-temperature physics
community knows about how to thermalize things that are not metals.
>>: So how do you handle from, you know, some part is at four K, the others
are at milli-K, so the interface, how do you --
>> David Reilly: Right. So what we're really trying to do is take advantage
of superconducting transmission lines below the -- that go superconducting
let's say below nine kelvin. So if we're using niobium based superconducting
transmission lines, then we can take a lot of signals with a very small
cross-sectional area and bring them down from four kelvin to millikelvin
with, you know, moderate heat loads. But very low loss in the -- you know,
electrical loss at microwave frequencies. And that's what we're really
trying to take advantage of by having some of this stuff located at those
temperatures. Otherwise we have to work with electrically lossy transmission
lines, which is not so bad, depending on what you're trying to do, but I
think ultimately for scale, particularly for readout, less so for control,
lower loss is better.
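The heat-load side of that trade-off can be estimated with the usual conduction formula, Q ≈ (A/L) · κ · ΔT. The conductivity values below are order-of-magnitude placeholders rather than measured numbers; the point is that a superconductor below its transition temperature is a poor thermal conductor, which is what makes dense wiring from four kelvin down to millikelvin feasible:

```python
# Rough conductive heat load through a wire spanning two temperature stages.
import math

def heat_load_w(area_m2, length_m, k_avg, t_hot, t_cold):
    """Conductive heat load: (A / L) * k_avg * (T_hot - T_cold)."""
    return (area_m2 / length_m) * k_avg * (t_hot - t_cold)

wire_area = math.pi * (50e-6) ** 2   # 100 um diameter conductor
length = 0.3                          # 30 cm between stages

# Placeholder conductivities: copper stays a good thermal conductor, while
# superconducting NbTi below its Tc (~9 K) is a comparatively poor one.
for name, k in (("copper", 100.0), ("NbTi (superconducting)", 0.1)):
    q = heat_load_w(wire_area, length, k, 4.0, 0.01)
    print(f"{name:>22}: ~{q * 1e9:.0f} nW per wire")
```

Multiplied by thousands of lines, that factor-of-a-thousand difference per wire is the difference between a viable millikelvin stage and an overwhelmed one.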
So what comes next? How far can you get with silicon CMOS commercial
off-the-shelf technology? Well, hopefully I can report on that, you know,
next year or something. I think so far so good. We've started a program
with my colleague here, Philip Leong, who is also from the
University of Sydney, to start to design ASICs for cryo operation. That's
something that's going to take us some time, but I think that's a path
forward. Beyond that, I'm kind of inspired by the parallel path of the, you
know, semiconductor industry in general to move to various other materials.
Most of these guys -- III-V materials, indium phosphide, which is not on that
list -- are now very embedded in a lot of commercial applications. If you
own one of the latest Agilent oscilloscopes, or storage scopes, those
things are powered almost exclusively at the front end by indium phosphide.
So this stuff is starting to emerge into commercial devices. And the driver for
that is enhanced clock speeds. But we're happy to take advantage of any
progress generally in the kind of semiconductor field.
Beyond that, I think ultimately, superconducting technology -- flux logic,
which is really about manipulating single flux quanta -- is likely to be the
final kind of emergent technology that appears for this type of application.
So let me show you something controversial, see what you think, see whether
you agree. Pay attention to the Y axis. This is agony: extreme physical
or mental suffering, as a function of the number of qubits. And what I'm
drawing here is just the technology for classical control. Okay. So you can
do brute force and it's really easy at the kind of 1 and 2 qubit level, but
then it starts to get pretty painful pretty quickly, as you scale up. I
don't know exactly what the slope is there, but it's something like that. My
view is that silicon CMOS, low voltage, low power CMOS, is going to take you
pretty far. Not to say that there's zero agony, but the agony is not growing
so rapidly until you start to hit I think hundreds, maybe even a thousand
qubits. Based on the numbers that we have at the moment for heat
dissipation, clock speeds and the types of things we think we have to do.
After that, you can probably step up a little to gallium arsenide, indium
phosphide or even something more exotic, maybe graphene, carbon
[indiscernible]. I mean, there's a lot of stuff out there that's likely to
kind of come online when we're at this level and starting to be interested in
those technologies for control. And that takes you a little bit further by
which point, flux quanta technology will likely start to emerge as something
that you can download, install and run in Cadence, and design the circuits
that you want to do and pump them out with foundries, but I think that that's
going to be needed out here, but let's see how far we can get with
semiconductors at the level of hundreds to thousands of qubits.
There's some interesting work to be done, some powerful quantum computing to
be done at this level and this number of qubits.
Okay.
How much time have I got.
>> Krysta Svore:
Two minutes.
>> David Reilly: Oh, okay, I'll go fast. I just want to show you something
pretty cool. How about the cryogenic technology? Often people say you know,
a dilution fridge is a kind of fragile small object. And if you Google
quantum computer, actually, it turns out quantum computers look like dilution
fridges. In fact, they're really kind of one and the same as far as I can
tell. And if you don't know anything about quantum computing, you could be
misled -- let me come back to that. You could be misled to thinking that
it's actually something to do with getting to really low temperatures. So I
don't know how many people saw this article about D-Wave in Time magazine but
it starts off, this is the opening paragraph that astronomers have been
wrong, and that the oldest place in the universe is actually in Canada.
[Laughter]
>> David Reilly: And look, the coldest place is actually the small city
directly east of Vancouver. Okay. And they cool it down, of course you have
to quote it in Fahrenheit. It makes it sound even colder. Minus 459.6,
almost two degrees colder than -- yeah, okay.
A number of people asked me about this, and we're really fascinated by this.
And now that there's no cameras around like there was yesterday, I'm happy to
say that I toured D-Wave and I didn't say anything for the first hour. I was
given a detailed presentation on a dilution refrigerator and what
temperatures it can actually get to and how marvelous that technology is and
you know, out in deep space, it's cold, but here, it's even colder. And
coming away from that, it's not unreasonable to mistake quantum computing
technology with actually a dilution refrigerator. So okay.
What's possible? Is cryogenic technology going to be the bottleneck? Is
that our problem? We dump heat there with classical control. We're bringing
in cables. They're interconnected. Is that the problem? Take a look at
this. This is a paper from 2009 from Giorgio Frossati and Arlette de Waard,
who basically own and run Leiden Cryogenics, a company that supplies
commercial dilution refrigerators. Check out these specs. So this is one-ton-scale
cryogenic detectors for rare event physics. That's what the interest is.
And they're installing this underground, in an underground laboratory. It's a
large cryogen-free cryostat cooled by pulse tubes and a high-powered dilution
fridge; about 10,000 kilograms of lead will be cooled to below one kelvin and
only a few construction materials are acceptable. [Indiscernible] you'll
have a total mass of about 1,500 kilo. Must be cooled to ten millikelvin in
a vibration-free environment.
Here's some more specs. Have a look at this. 10 millikelvin for optimal
operation. The detector array, and I think this kind of ties in with what
Rob was saying about 3D transmons and people get worried about the size of
those things, but the detector array is one meter high and 90 centimeters in
diameter. It needs 2,700 wires. This is up and running. They had to make
the whole thing out of material that doesn't have any radioactivity in the
background of the material. 30 centimeters of lead shields in every
direction. And I think it's this last sentence that is really important for
our community. If we're really serious about building machines and running
them, this is going to be important. So it says that the CUORE measuring
time can be as long as ten years; the experiment needs to be stable,
service-free, and running at high duty cycle for ten years. The fact that
that's up and running now, that's 2009 they wrote the paper, and from what I
understand, it's gone pretty well, that's where we are today. I'm not too
worried about cryogenics. That's what this thing actually looks like. This
is a previous version again of what Frossati has built. That's one ton of
gold-plated copper sitting on the end of a dilution refrigerator in Leiden.
So that kind of technology is pretty advanced.
I'm going to skip very quickly, 30 seconds, just for the aficionados in
the room that are really interested in hardware, let me just show you a few
things, and if you're interested, we can kind of talk maybe at lunchtime.
Quantum computing is a little bit different from the commercial world in
terms of the interconnects. We're operating of course at low temperatures.
Sometimes in magnetic fields. And frequently in a sort of situation where
you want to change out samples all the time so you can't take your chip, coat
it in epoxy, seal it up and forget about it. We need to get at that thing
regularly. So that's led to an evolution in interconnect technology. This
is some recent work from my group trying to bring in large-density wiring.
Again microwaves and DC connections. Our most recent generation actually
looks like this. You can see the dimensions of this thing. It's pretty
small. It's bringing in, okay, not so large numbers, but it's starting to
get up to hundreds of wires that need to interface and fit within a
package that's small enough to go into the bore of a superconducting magnet.
Let me stop there and take any questions.
[Applause]
>> David Reilly: Thanks.
>>: Can I ask one. The curve with [indiscernible], I would like to weigh in
with my --
>> David Reilly: Yes.
>>: -- [indiscernible].
The number on the X axis is physical qubits?
>> David Reilly:
Yeah.
Yeah.
>>: And you're thinking dots or you're thinking other things? Because in most
of our experiments to date, we have only one wire in and one wire out when
there are multiple channels of readout and multiple qubits. So we standardly
run 3 or 4 experiments in a fridge and we have something like 30 co-axes
already there. So for us, the brute force isn't till we're at a hundred,
like we're working on buying systems that are room temperature that will work
at a hundred qubits for us. So it's kind of not --
>> David Reilly:
Yeah.
So I definitely -- no, I agree with you.
>>: In general, the curve is something to think about but it's not --
>> David Reilly: I agree. Most of the agony --
>>: It's not three fourths yet.
>> David Reilly: That's right. Most of the agony, even for spin qubits,
is the microwave waveforms -- the generation, the steering, the logic that's
needed. It's mostly logic, actually: the logic that's going to be needed to
do fast feedback. And I think that's -- okay. I still think superconducting
qubits have some advantages, but it's not so different.
All the slow DC wiring for spin qubits, I don't think that's much agony.
It's interconnect density, but it's not dumping heat and, yeah. It's okay.
>>: [Indiscernible] to under $10,000 just with off-the-shelf room temperature
stuff. So it's only money to [indiscernible].
>> David Reilly: Yeah. In some ways I would say you're on -- you know, this
curve. I didn't say whether this is cold or not, right. So I mean, in some
ways, you're saying you're kind of agreeing with this curve with the caveat
that this is at room temperature. Okay. Then there's going to be some pain
when you go here because it can't be at room temperature.
>> Krysta Svore: Thank you, David.
[Applause]