>> Krysta Svore: So the next session, to start the next session, Sabre Kais is here from Purdue,
and he's going to talk about challenges of electronic structure calculations on quantum
computers.
>> Sabre Kais: Thank you, Krysta, for the invitation. So, yeah, let me start by thanking my
students who did the hard work. As you can see, we are involved in two main projects: one on
finite-size scaling and quantum criticality, and the other on quantum computing and quantum
information. And of course the latter is the topic for discussion today.
So I'm going to discuss quantum algorithms for chemistry, and I'll start with the Schrodinger
Equation. I'm not going to talk about optimization, although we are also interested in global
optimization. And I will end by discussing linear systems of coupled equations and how we can
solve problems in chemistry related to Ax equals b.
And I thought maybe at this point I will mention a couple of problems which might be related to
our discussion, or maybe for tomorrow, about how to take advantage of superposition,
interference and entanglement to understand fundamental problems in chemistry. And for that we
have a couple of problems: one, understanding the coherence and entanglement in photosynthesis.
The second problem has to do with the mechanism by which birds determine magnetic north,
and how entanglement can affect the outcomes of a given chemical reaction.
So for the first one, we started looking at the entanglement and coherence in the FMO complex.
And now we are moving to LH2, which is a larger system [inaudible]. And we're doing this work
with Alan and Peter, trying to understand the role of entanglement and coherence in photosynthesis.
The other problem has to do with entanglement for trapped polar molecular systems in an optical
lattice. And here the idea is to come up with a way to do the quantum gates needed for such a
system, because it's extremely hard to individually manipulate each qubit at each site since the
splitting is so small, and it's really hard to run the quantum gates. So we are involved in this
project to see what we have to do in order to design quantum gates for such a system, and for
that we're collaborating with Dudley Herschbach and Bretislav Friedrich from the Fritz Haber
Institute in Berlin.
And the other problem is the -- As you can see I have an undergraduate from Goshen College
which is about two hours north of Purdue. And, as you can see, he's very happy to work on this
problem. [laughter] And the basic idea is that in the back of the eye of the bird there's this
complex [inaudible] system. And for us, just for simplicity, we can think of a donor and an
acceptor: a blue photon excites an electron to the singlet state, and the electron transfers to the
acceptor. And since the two electrons have different nuclear environments, they experience
different magnetic fields affecting the two spins. So they start oscillating between singlet and
triplet. And this singlet-triplet oscillation, which depends on the environment, leads to two
different chemical reactions. Somehow the link between the chemical reactions and how the bird
decides about the Earth's magnetic field is still unclear. This is an open problem, and [inaudible]
groups are working in this direction, trying to understand how the bird takes the information from
a chemical reaction and knows the direction based on the magnetic field.
And the last problem I'd like to mention is related to entanglement in chemical reactions. We're
trying to understand the experimental results of this gentleman's group, published in Science
back in 2007. And the idea is to do the smallest two-slit experiment. You orient the H2 in space,
and with a high-energy photon you ionize one electron, and by correlation with the other electron
you follow this one. And you look at the angular distribution of the first electron. And it's not clear
at this point how you extract information about entanglement from such an experiment.
So now I'd like to discuss our involvement in developing a quantum algorithm to solve the
Schrodinger Equation. For us in this field we are lucky: we know the Hamiltonian, so we don't
have to make an effort to devise a new Hamiltonian. All the forces are known, Coulombic forces.
And all we have to do is solve this linear second-order differential equation with Coulombic
forces.
So if you look at the development of methods for solving the Schrodinger Equation, and if you
look back at the Hartree-Fock method, this was developed back in 1932. So we're talking about
almost 80 years of research to solve such an equation. And of course over the years people
have developed many different methods. So I can maybe talk about conventional and
unconventional methods to solve the Schrodinger Equation. And I would like to mention just the
unconventional ones because I was involved in their development. So I'll just mention the
algebraic method, where the basic idea is to use Lie groups and Lie algebras to solve this
equation. This was my PhD work. And then I did postdoctoral work on dimensional scaling, and I
got my tenure at Purdue developing the finite-size scaling.
So people ask, what is the difference between conventional and unconventional methods? I will
say an unconventional method is when you go back and look at your papers and you see you are
the only one citing your work. [laughing]
So I just would like to mention the outcome of these different methods so you can see where
we're standing. And the question I'm trying to raise is: are we done developing classical methods
to solve the Schrodinger Equation? This is my point. Can we still design better classical
algorithms to solve the equation? And probably Garnet Chan will talk about it in his [inaudible].
But I mean, there's a question here: are we done, or is there hope that we can develop more
efficient classical methods to solve the Schrodinger Equation?
So I would just like to mention the results from the Lie groups. The idea is to write the
Hamiltonian in terms of the generators of a given Lie group. And probably when I was a graduate
student, it took me one year to figure out that the only system that can be solved is the hydrogen
atom. And you [inaudible]. And then I tried another year of solving the helium atom with no
success, because there was no way to write the Hamiltonian in terms of the generators of any
given group. So before giving up on this problem, at the end, I decided to look at what kind of
interaction we can modify so that we can solve this problem, keeping the electron-electron
interaction of course, because this is the main point here: keeping the electron-electron
interaction. And it turns out that if I change the Coulombic to a harmonic interaction, for only
certain values of k, only k equal to one-fourth, you can solve this analytically.
And it's very interesting because everybody in this field thought that there is no way to solve the
system analytically, and it's not clear yet why with k equal to one-fourth the system admits an
analytical solution. But anyway, you can solve this exactly, analytically, and you get the wave
function [inaudible]. And this is the only system where you keep the electron-electron interaction
and you still solve the system analytically.
And again, there, if you look at the correlation energy -- I still remember the number; it's 0.04
atomic units, where for helium it's 0.042. It's very close to the helium atom if you're looking at the
correlations.
>>: So this is where the electron-electron interaction is Coulombic but the...
>> Sabre Kais: But the...
>>: ...[inaudible]?
>> Sabre Kais: Yeah, the external field, instead of Coulombic you change it to harmonic. So
anyway, maybe there is still hope for an analytical solution in this field. And for the
renormalization group techniques, again, the basic idea is to start with the renormalization group
techniques the way Wilson did it in [inaudible] matter. And here too, very quickly you will realize
that it's impossible to do renormalization the way Wilson did it, directly from the partition function
by reducing the number of degrees of freedom. It took us over a year to solve it for helium, and it
was numerically impossible to follow for any other system. And we stopped there. This was in
1998. But Garnet had a smarter idea: you go to the density-matrix renormalization group instead
of the Wilson renormalization group. So the conclusion with the Wilson renormalization group is
that it's almost impossible to do electronic structure calculations.
And now the dimensional scaling idea: instead of solving the problem in three-dimensional
space, which is hard and which we know scales exponentially, can we do calculations at different
dimensions by generalizing the Hamiltonian to D-dimensional space, solve it at a certain
dimension, and then go back and say, "Okay, now I'll go back to three-dimensional space"? It
turns out for atomic structure it is extremely simple to take the large-D limit, D goes to infinity. All
you have to do is minimize an effective potential. And then you do perturbation theory. So the
large-D limit is good for localization: if you have a localized electron, this is an excellent
approximation. Otherwise, you have to do perturbation theory, and you get all the troubles of
doing perturbation theory.
So again, just to summarize: this method is only good for localized electrons.
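As a hedged illustration of the large-D idea (a textbook-style sketch, not the speaker's actual calculation): for a hydrogen-like atom in suitably scaled units, the large-D limit reduces the quantum problem to minimizing an effective potential V(r) = 1/(2r^2) - 1/r, whose minimum at r = 1 recovers the exact ground-state energy E = -1/2.

```python
# Hedged sketch: minimizing the large-D effective potential for hydrogen.
# V(r) = 1/(2 r^2) - 1/r in scaled atomic units; the minimum sits at r = 1
# with E = -1/2, matching the exact hydrogen ground-state energy.

def effective_potential(r):
    """Centrifugal-like term plus Coulomb attraction."""
    return 1.0 / (2.0 * r**2) - 1.0 / r

def minimize_on_grid(f, lo, hi, n=200_000):
    """Crude grid search -- good enough for a 1-D illustration."""
    best_r, best_v = lo, f(lo)
    for i in range(1, n + 1):
        r = lo + (hi - lo) * i / n
        v = f(r)
        if v < best_v:
            best_r, best_v = r, v
    return best_r, best_v

r_min, e_min = minimize_on_grid(effective_potential, 0.1, 5.0)
print(r_min, e_min)  # close to r = 1, E = -0.5
```

The point of the sketch is only that, in this limit, solving the Schrodinger Equation collapses to a classical minimization, which is why the approximation works best for localized electrons.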
Okay, so now we're dealing with exact calculations, not approximations. We did exact
calculations, and we already discussed that the calculation time for the energy of atoms and
molecules scales exponentially with the size of the system, and where this comes from. If you
are doing full CI calculations and you have N orbitals and m electrons, the number of ways to
distribute the m electrons in N orbitals grows exponentially with the number of electrons. And you
can see the number of configurations for a given number of electrons and orbitals grows
exponentially.
And I always like to give this simple molecular system: methanol, with 18 electrons and 50 basis
functions. So we're not talking about 100 electrons, just 18. The number of configurations
needed to do full CI is 10 to the 17. And maybe you can correct me, but as far as I know I haven't
seen any classical calculation with 10 to the 17 configurations. The maximum I encountered was
10 to the 9, back in '99. I think that people can now go to 10 to the 10 or 10 to the 11.
So even for methanol -- we're not talking about 100 electrons, only 18 -- you cannot do a full CI
calculation on a classical computer.
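The combinatorial growth can be checked directly. A naive count (a sketch assuming 50 spatial orbitals, i.e. 9 alpha and 9 beta electrons for methanol's 18, and ignoring symmetry reductions, so the exact figure differs from the 10^17 quoted) multiplies the number of alpha strings by the number of beta strings:

```python
import math

def fci_determinants(n_spatial, n_alpha, n_beta):
    """Number of Slater determinants in a naive full CI expansion:
    (alpha strings) x (beta strings), with no symmetry reduction."""
    return math.comb(n_spatial, n_alpha) * math.comb(n_spatial, n_beta)

# Methanol-like example from the talk: 18 electrons, 50 basis functions.
n = fci_determinants(50, 9, 9)
print(f"{n:.2e}")  # far beyond 1e17 -- out of reach for classical full CI
```

Whatever the exact counting convention, the order of magnitude makes the speaker's point: the full CI space for even a small molecule is astronomically large.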
Okay, so how do you approach this problem? We already discussed how to find the energy. But
for us in chemistry, I mean, we have the Hamiltonian; we have to represent the Hamiltonian. We
have to initialize the wave function on the quantum computer, and we have the phase estimation
algorithm. We already discussed how to use the phase estimation algorithm to solve this
problem.
So I will not go back to the phase estimation; we already discussed it. And maybe I'll mention
here the work of [inaudible] where you write the Hamiltonian in second-quantized form and then
you try to design the circuit for each term. And maybe I'll just give you the reference. It was in
Molecular Physics. And the idea here is to go from the fermionic representation of the operators
to the [inaudible] matrices. And of course in this field there was the first paper in Science for H2,
by Alan and Peter, and then the experiment was done. Oh, sorry, this was published in Nature.
So we know how to represent the Hamiltonian for H2. We know how to do the simulation. We
know how to do the experiment. It was done in principle. And the question is, what next?
So what is the challenge in this field? I have a student in my group from computer science, and
every time he gave a group meeting talk he would say, "I will apply the black box. I will apply the
black box," and so on. So I told him, in chemistry we'd like to open this box and see what's
inside. And this is the challenge for us. I mean, how do we write U for a given molecular
Hamiltonian?
So for that we said, "Okay, for practical reasons, for a given U, can we use optimization methods
to come up with the best gate sequence for a given matrix U?" Oh, this is [inaudible]. So again,
as you will see in a minute, this is not an efficient way because the maximum number of gates
goes as 4 to the n; this is optimization for practical reasons.
But of course if you look at the literature there are two different kinds of methods: you have
deterministic methods and stochastic methods in optimization, which we all use. And people
before us used genetic algorithms and many other algorithms. But my student was able to design
a new algorithm which was better for such an optimization. And the algorithm is the Group
Leader Optimization Algorithm. And the idea: you divide the population into groups, and in each
group you choose a leader, which is the best one in that group. And then you will see in a minute
-- I will show you the flow chart for this algorithm -- it's very similar to the genetic algorithm.
So the steps in the algorithm are: you generate a random population for each group. You
calculate the fitness -- and I will mention in a minute how you calculate the fitness for such a
matrix -- and you determine the leader for each group. And then, very similar to a genetic
algorithm, you do mutation and crossover between the different populations.
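The steps above can be sketched in code. This is a hypothetical toy version of a group-leader style optimizer (the update weights, annealing schedule, and toy fitness are my assumptions, not the published algorithm): split the population into groups, let each group's best member be its leader, and form new candidates by mixing a member with its leader plus an annealed random mutation, keeping only improvements.

```python
import random

# Toy sketch of a group-leader style optimizer. The 0.5/0.4/0.1 mixing
# weights and the annealed mutation are illustrative assumptions.

def gloa_minimize(f, dim, n_groups=5, group_size=20, iters=200, seed=1):
    rng = random.Random(seed)
    groups = [[[rng.uniform(-5, 5) for _ in range(dim)]
               for _ in range(group_size)] for _ in range(n_groups)]
    for t in range(iters):
        w = 1.0 - t / iters                      # anneal the mutation strength
        for g in groups:
            leader = min(g, key=f)               # best member leads the group
            for i, member in enumerate(g):
                cand = [0.5 * m + 0.4 * l + 0.1 * w * rng.uniform(-5, 5)
                        for m, l in zip(member, leader)]
                if f(cand) < f(member):          # greedy acceptance
                    g[i] = cand
    best = min((m for g in groups for m in g), key=f)
    return best, f(best)

# Toy fitness: squared distance from the origin; minimum 0 at x = 0.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = gloa_minimize(sphere, dim=3)
print(f_best)  # close to 0
```

In the gate-decomposition problem the candidate vector would encode gate choices and rotation angles, and the fitness would be the trace-based fidelity described next.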
So what is the fitness here? We're looking at the trace of the matrix that you'd like to decompose
into a sequence of gates, times the target. And of course if Ua equals the target, this trace is
[inaudible] N. So F will be one, and then epsilon will be zero. So this is the fitness for our
optimization. And at each step we have to calculate this function, choose the leader for each
population, and then do the mutation and crossover.
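The fitness just described can be written out explicitly. A minimal sketch, assuming the standard trace fidelity F = |Tr(Ua† U_target)| / N, so F = 1 (error epsilon = 0) exactly when the candidate matches the target up to a global phase:

```python
# Pure-Python 2x2 sketch of the trace-based fitness F = |Tr(Ua^dag U)| / N.

def dagger(m):
    """Conjugate transpose of a square matrix given as nested lists."""
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fitness(ua, target):
    n = len(ua)
    prod = matmul(dagger(ua), target)
    tr = sum(prod[i][i] for i in range(n))
    return abs(tr) / n

X = [[0, 1], [1, 0]]   # Pauli X
Z = [[1, 0], [0, -1]]  # Pauli Z
print(fitness(X, X))   # 1.0: perfect match, epsilon = 0
print(fitness(X, Z))   # 0.0: orthogonal under the trace inner product
```

The optimizer then maximizes F (equivalently minimizes epsilon = 1 - F) over candidate gate sequences.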
So here are the results for the molecular Hamiltonians for H2 and for water. Here is the fidelity
error as a function of the number of iterations. And you can see the convergence in the fidelity
error. And also, since you optimize the number of two-qubit and one-qubit gates, you have to
assign a higher cost in the cost function to a two-qubit gate compared with a single-qubit gate.
So we are minimizing over the space with the fidelity and the cost of the quantum gates. So here
is the quantum circuit for H2. And for this one he succeeded in doing it with 18 quantum gates.
And I think, if I'm not wrong, with the first paper, with 4 qubits you need 256 quantum gates. So
for practical reasons, for this simple system, a molecular Hamiltonian, we already reduced the
quantum gates from 256 to 18.
And the second one is the same, with 5 qubits: this is the quantum circuit from the optimization
algorithm for water. And again we're discussing this with the experimental group to see if they
can do the experiment for such a quantum circuit, with the difficulty you see in the controls here.
But we're trying to simplify the quantum circuit so they can do the experiment for the H2O. Yes?
>>: So Stephen [inaudible] early, like, NMR work was doing this kind of stuff. It seems like the
problem is you have to be -- basically you have to be able to solve U to do the minimization. So
what's your perspective on kind of like being able to do blocks? Like obviously if you got to a
larger [inaudible]...
>> Sabre Kais: No, of course. I mean if it's [inaudible] -- It's inefficient even for smaller U. I mean
if we have larger matrices, this is not the way to go. So this is a very important point, because if
we have larger matrices then it's very, very hard to come up with the sequence needed.
>>: So water [inaudible] five qubits?
>> Sabre Kais: For this one?
>>: Yeah.
>> Sabre Kais: Yeah, because the matrix for water was larger; we need a larger basis set.
[inaudible] function U go from H2 to other. So we need larger basis functions to represent the
Hamiltonian matrix of water....
>>: For H2 it was just the spin orbitals. So it isn't five spin orbitals. Why isn't it much larger? I
don't understand why it's only five.
>> Sabre Kais: Yeah, for H2 we used two basis functions and then you have two to the -- Yeah.
But for this one we add -- For water we use a multi-reference wave function as an initial guess
for the wave function. So we have more basis functions to represent the Hamiltonian for water,
because in the water we are interested in the excited states, not only the ground state.
Now, so the idea is -- and this is the question here -- can we come up with universal
programmable quantum circuits? I will give you the Hamiltonian; here is the molecular system.
Can you give me the quantum circuit design for a given Hamiltonian?
So we worked on this one, and we have an idea, but it's not really efficient because the scaling of
this one was still 4 to the n. And the idea is to add [inaudible] qubits to control the -- I will show
you in a minute. This is the idea. So we'll start with this matrix. This is the initial input and this is
the output. And the question is, can we enlarge this by ancilla qubits in a way that we can still
read the output of this one in the larger space?
So what is the main point? We'd like to come up with a way such that all we're changing in the
quantum circuit is just the angles of the rotations. So you give me H2, and I can read directly
what the rotation angles are for H2. Give me water, lithium hydride -- all I have to do -- we have a
fixed design, and only the rotation angles of the gates change. So this is our objective here and
our goal.
And of course the way we decided, with doubling the qubits, is not efficient, but at least this way
we have a fixed quantum circuit design for a given Hamiltonian and all we're changing is the
rotation angles.
So this was the basic idea here, and you need these uniformly controlled [inaudible]. So we have
three steps, the formation and the combination. Let me just go directly to the water here. For the
final quantum circuit design, all we have to do is just change the rotation angles in the rotational
gates. So let me just go to the water. This was the Hamiltonian we used for water. You can see
the sparsity; these are the only elements far away from the diagonal. There is a way to bring
those closer to the diagonal and change the bandwidth of the matrix, so it will be easy to control
and add the number of qubits to the design for the water.
Now, when we designed it this way, for the hydrogen molecule we get a larger circuit: we have
32 gates, and 19 of them are rotations and the rest are X gates. So all we have to do is just
change the rotation angles. And here we have the table relating the matrix elements to the
rotational angles in these gates.
So this works for this small system, and the idea is really how to come up with a fixed quantum
circuit design. For us in quantum chemistry this is very important, because once we have this
design, I can go from any molecular Hamiltonian to another molecular Hamiltonian just by
changing the rotation angles. And of course if you look at the history of quantum chemistry and
the development of the computation, once people discovered how to do the [inaudible] Gaussian
fashion, and you have the [inaudible] the Gaussian [inaudible] back in 1973, you have a
systematic way for any given Hamiltonian: you can create the matrix elements, then you put
them in the program and you run it. And we'd like to come up with something similar here in
quantum chemistry, a fixed design. I give you the Hamiltonian, and you change the angles and
do the simulations.
I would also like to mention our work with the algorithm for solving linear equations. The
algorithm was designed by Harrow, Hassidim and Lloyd back in 2009. And the basic idea: for
any given matrix A and a vector b, find the vector x. So you have Ax equals b. And you can see
the basic idea is to invert the matrix A so I get the vector x. And you run into the problems that we
discussed before about the singularity of this matrix. And this also depends on the condition
number. The condition number is the ratio between the largest and the smallest eigenvalue of
the matrix. So if the matrix is close to singular, it will be hard to invert it. And you can see this in
the scaling. The best classical algorithm scales as [inaudible] order N, square root of k, and log 1
over epsilon, where k is the condition number. With the quantum algorithm you still have
exponential speedup, log N, but you have k squared here. And this is the problem with this
algorithm, that the scaling goes from square root of k to k squared. So if the matrix is near
singular then you have problems inverting the matrix.
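To make the condition number concrete, here is a small sketch for a symmetric 2x2 matrix, using the closed-form eigenvalues: kappa = |lambda_max| / |lambda_min|, and kappa blowing up signals a near-singular matrix that is hard to invert, classically or quantumly.

```python
import math

# Condition number of a symmetric 2x2 matrix [[a, b], [b, d]] from the
# quadratic-formula eigenvalues.

def eigvals_sym2x2(a, b, d):
    mean = (a + d) / 2.0
    disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mean + disc, mean - disc

def condition_number(a, b, d):
    hi, lo = eigvals_sym2x2(a, b, d)
    return max(abs(hi), abs(lo)) / min(abs(hi), abs(lo))

print(condition_number(2.0, 0.0, 1.0))    # 2.0: well conditioned
print(condition_number(1.0, 0.999, 1.0))  # ~2000: close to singular
```

The quantum algorithm's k-squared dependence means the second matrix above would already be painful, which is exactly the concern raised in the talk.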
Now what is the basic idea in this algorithm? Instead of working with the matrix A directly, you
run the phase estimation algorithm with the exponential e to the minus iAt, and this lets you
invert the matrix. And then, once you have the eigenvalues, you have the vector x.
So what is the input? The input: you write your vector b as a quantum state [inaudible]. And the
output will be the vector x. And the basic idea in this algorithm is to do the phase estimation
algorithm with e to the minus iAt, where A is the matrix that you're trying to invert and t is a time
parameter.
So essentially we're running, again, the phase estimation algorithm to get the vector x for the Ax
equals b system.
>>: Is it that you assume A is Hermitian?
>> Sabre Kais: Yeah, we're assuming Hermitian here; we discussed it. If it's not Hermitian, you
enlarge it so you have it --. This is the quantum circuit for solving this linear system of equations.
And as I mentioned before the crucial step here with respect to the phase estimation algorithm is
how to run U which is e to the minus iAt. So again you need an algorithm to represent this
exponential. And then once you know how to represent this U, you can run the quantum circuit
and get the vector x.
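The spectral inversion at the heart of the algorithm can be emulated classically (a sketch, not the quantum circuit: the quantum version estimates the eigenvalues via phase estimation on e^{-iAt}): decompose b in the eigenbasis of A, divide each component by its eigenvalue, and reassemble x = sum_j (<u_j|b>/lambda_j) u_j.

```python
import math

# Classical emulation of the eigenbasis inversion x = sum_j (<u_j|b>/l_j) u_j
# for a symmetric 2x2 matrix [[a, b], [b, d]].

def solve_sym2x2_spectral(a, b, d, rhs):
    mean = (a + d) / 2.0
    disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    x = [0.0, 0.0]
    for lam in (mean + disc, mean - disc):
        # Eigenvector (b, lam - a) for off-diagonal b != 0.
        v = (b, lam - a) if abs(b) > 1e-12 else \
            ((1.0, 0.0) if abs(lam - a) < abs(lam - d) else (0.0, 1.0))
        norm = math.hypot(*v)
        u = (v[0] / norm, v[1] / norm)
        coef = (u[0] * rhs[0] + u[1] * rhs[1]) / lam   # <u|b> / lambda
        x[0] += coef * u[0]
        x[1] += coef * u[1]
    return x

# Example: A = [[2, 1], [1, 2]], b = (3, 3)  ->  x = (1, 1)
x = solve_sym2x2_spectral(2.0, 1.0, 2.0, (3.0, 3.0))
print(x)  # approximately [1.0, 1.0]
```

Dividing by each lambda is also where the condition number bites: a tiny eigenvalue amplifies noise in its component of b.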
So for this one we tried to collaborate with an experimental group, with Professor Du's NMR
group in China. And we gave them this simple matrix, which is just two by two. And you can see
the simplicity of the output of this action. And they were able to do the simulation with 4 qubits
using NMR. And this is the molecular system they used: they have 4 qubits, carbon-13, and the 3
following atoms.
And they ran the experiment with 4 qubits and got the results. They ran this experiment three
times, and you can see the fidelity; I think it was 97%. They just finished this experiment like two
weeks ago, and we just started discussing the results with them. And we're trying to summarize
all the results, the quantum circuit that we did and their experiment with 4 qubits to solve Ax
equals b.
So, I mean, this is again a proof of principle: if you have a linear system of equations, then, as
Alan and Vidal and Evelyn did with H2, if we have a quantum computer we can also solve Ax
equals b, as they did in this 4-qubit NMR experiment.
>>: What is its scaling, 4 to the N?
>> Sabre Kais: Yeah.
>>: For the black box [inaudible]?
>> Sabre Kais: [laughing] Yeah, again, I mean the problem with this, with the matrix -- I mean
this one because of the -- And maybe this is the other point I have to mention: with this
algorithm, Ax equals b, it's efficient to calculate a function of the vector x, which is the average of
some operator, not the individual components of the vector x, because then you have to read it
out N times. And until now it's not clear what operator is [inaudible] to us in chemistry, such that
the average of that operator will be [inaudible] to the solution. So we're trying to figure out what
average operator would be [inaudible] chemistry. And we are thinking about scattering theory,
where this equation comes in and this vector will be related to one of the columns of this matrix
in scattering. So I think there's still hope that maybe this can be written in scattering theory, but
it's not clear what operator is needed for such a system.
>>: How you get e to the iAt [inaudible]. I think that's the big challenge.
>> Sabre Kais: Yeah, yeah. This goes back to [inaudible] dimension. Yeah, yeah. So the last
system that I would like to mention is solving the Schrodinger Equation with adiabatic quantum
computing.
So what is the basic idea here? We have an initial Hamiltonian, and we're changing the
parameter s from zero to one. If s is one, we go to the final Hamiltonian; if it's zero, you start with
the initial Hamiltonian. And of course what you need in this system: you need an initial
Hamiltonian that is easy to implement experimentally and has a ground state that is simple to
calculate -- for example, just the sum of the Pauli X matrices. And the idea is to come up with the
final Hamiltonian, again one which can be experimentally implemented, such that the solution of
the final Hamiltonian is the one you are looking for. And an example of a 2-local Hamiltonian is
the Z and Z-Z Hamiltonian. And this was already implemented in the D-wave machine.
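The interpolation H(s) = (1 - s) H_i + s H_f can be illustrated with the smallest possible toy (a one-qubit sketch of my own choosing, with H_i = X and H_f = Z, where the eigenvalues are +-sqrt((1-s)^2 + s^2) in closed form), showing where the minimum gap along the path occurs:

```python
import math

# Spectral gap of H(s) = (1 - s) X + s Z for a single qubit:
# eigenvalues are +-sqrt((1-s)^2 + s^2), so the gap is twice the square root.

def gap(s):
    return 2.0 * math.sqrt((1.0 - s) ** 2 + s ** 2)

# Scan the adiabatic path from s = 0 to s = 1.
gaps = [(s / 100.0, gap(s / 100.0)) for s in range(101)]
s_min, g_min = min(gaps, key=lambda t: t[1])
print(s_min, g_min)  # minimum gap sqrt(2) at s = 0.5
```

The minimum gap along the path is what sets the required evolution time, which is why the later discussion of locating the gap with finite-size scaling matters.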
So the idea is, if we can bring any Hamiltonian into this kind of form, 2-local, at least we can
implement it now with the D-wave machine. But if not, if it's more complex, the question is: can
we start with a more complex Hamiltonian and move to a Hamiltonian with higher locality? You'll
see in a minute, for example in H2, we don't have a 2-local Hamiltonian; we have k-local. And I
will explain it in a minute. And the question is, how do you start in adiabatic quantum computing
with a Hamiltonian which will lead to some Hamiltonian that you can relate to electronic structure
calculations?
So again, this is the spin Hamiltonian in the D-wave machine with a Z-Z interaction. And of
course you can see the importance of this one, because this is just looking for the minimum of
this quadratic function, where J is the matrix here and this w is the vector. And of course, if a
problem can be put in this form of quadratic unconstrained binary optimization, you can run it on
this machine.
And of course if you look back, Ax equals b is the same: you're minimizing a quadratic function.
So in principle we can run Ax equals b on the D-wave machine.
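The equivalence just mentioned is easy to verify in the continuous case (a sketch, assuming symmetric positive-definite A; the D-wave version would additionally need a binary encoding of x): minimizing f(x) = (1/2) x^T A x - b^T x gives Ax = b, since the gradient is Ax - b.

```python
# Plain gradient descent on f(x) = (1/2) x^T A x - b^T x; for symmetric
# positive-definite A the minimizer satisfies Ax = b.

def grad_descent_solve(A, b, step=0.1, iters=5000):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # gradient of f is Ax - b
        grad = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        x = [x[i] - step * grad[i] for i in range(n)]
    return x

A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = grad_descent_solve(A, b)
print(x)  # approaches the exact solution [1.0, 1.0]
```

On an annealer, the same quadratic objective is encoded in the J couplings and local fields, which is the sense in which Ax = b "fits" the machine.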
>>: [inaudible] how many bits [inaudible] do you need [inaudible] because that machine...
>> Sabre Kais: Yeah, yeah, of course.
>>: ...[inaudible]...
>> Sabre Kais: Yeah. I mean, yeah, this does not scale with [inaudible] system with the
coefficient.
>>: Just how much do the coefficients vary in magnitude, that's the question, because they can
program it to see if it's...
>> Sabre Kais: Yeah.
>>: ...some kind of numbers 1, 2, 3, and 4.
>> Sabre Kais: Yeah.
>>: [inaudible] cannot.
>>: The [inaudible] need for your...
>> Sabre Kais: Exactly because we...
>>: ...molecular [inaudible].
>> Sabre Kais: I mean, we ran this Ax equals b on the machine, and the results were not better
than any classical algorithm. So we were not getting good results from the D-wave machine. But
we are not done yet, so we don't really want to discuss it at this point. But the results from the
D-wave machine on this optimization were less efficient than the classical algorithm for solving
this equation. And this is because of your point about the coupling...
>>: The coupling [inaudible].
>> Sabre Kais: Yeah, yeah. So we ran this one, I think, anywhere from small sizes, 2 by 2, up to
16. The D-wave machine is not getting better results than the classical algorithm.
Okay. So I will just go back to the electronic structure Hamiltonians and use the Jordan-Wigner
transformation to go from fermionic operators to spin operators, and see what kind of interactions
we end up with under this transformation. I'm giving the example of H2. If we write this
transformation, this is the Hamiltonian we get for H2. So we have 2-local; we have Z-Z here,
2-local. And, again, the H11 terms are the one-electron integrals, and the ones with four indices
are the two-electron integrals. So you can see that up to this point we can still run this kind of
algorithm in adiabatic quantum computing using the D-wave machine, [inaudible] by the
high-order terms with the four indices here, which [inaudible] exchange. If you look at the order,
this is 8-local.
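A minimal check of the Jordan-Wigner idea (a single-mode sketch, not the full H2 construction): with a = (X + iY)/2 and a† = (X - iY)/2, the fermionic number operator a†a maps to (I - Z)/2, i.e. occupation is read off the qubit Z basis.

```python
# Single-mode Jordan-Wigner check using 2x2 complex matrices.

def mat(m):
    return tuple(tuple(complex(v) for v in row) for row in m)

I = mat([[1, 0], [0, 1]])
X = mat([[0, 1], [1, 0]])
Y = mat([[0, -1j], [1j, 0]])
Z = mat([[1, 0], [0, -1]])

def add(a, b, ca=1, cb=1):
    """Linear combination ca*a + cb*b of two 2x2 matrices."""
    return tuple(tuple(ca * a[i][j] + cb * b[i][j] for j in range(2))
                 for i in range(2))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

a_op = add(X, Y, 0.5, 0.5j)      # annihilation operator (X + iY)/2
adag = add(X, Y, 0.5, -0.5j)     # creation operator (X - iY)/2
number = mul(adag, a_op)         # number operator a_dagger a
expected = add(I, Z, 0.5, -0.5)  # (I - Z)/2
print(number == expected)  # True
```

For multiple modes, strings of Z operators enforce fermionic antisymmetry, and products of four creation/annihilation operators (the two-electron terms) are what generate the high-locality Pauli strings discussed above.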
So in order to do electronic structure calculations with adiabatic quantum computing, we need an
efficient way to go from k-local -- maybe in this case we have 8-local -- to 2-local. And this is
what we're trying to do: to use perturbation theory to reduce this 8-local Hamiltonian to 2-local.
And I know, Vital -- I think you did a 3 to 2?
>>: Yeah. And 2 to 2.
>> Sabre Kais: Yeah. So this is the question we are facing now, and I thought maybe we can
have it for discussion: how do you reduce the 8-local interaction in electronic structure? You can
see why it's 8: because we have these 4 terms here. So if we come up with a method to reduce
the 8 to 2, we'll be able to run electronic structure calculations on adiabatic quantum machines.
So this is one case.
In another case we said, "Okay, I'm not stopping with two. I can do, for example, three or four."
And then the question is, how do you go from eight to that number that will allow you to go from
the initial Hamiltonian to the final Hamiltonian? So I thought this may be the question we can
have for discussion. Yes?
>>: I mean, the fact that you have a 4-index term there is a function of your basis. So if you
choose basis functions which have compact support, which don't overlap, those terms reduce to
2-index terms.
>> Sabre Kais: This one?
>>: Yeah.
>> Sabre Kais: So you mean these? Because, I mean, you can see what we did here. We just
transformed this fermionic operator into Pauli spin matrices for these matrix elements. Are you
saying these all can be reduced? This is the question. I mean, [inaudible]...
>>: Because even if it's in [inaudible] form, you can reduce a 4-index term to a 2-index term if the
basis functions have local support. So if [inaudible] work in real space, the Coulomb operator is a
pair operator. Right? I mean, because you're working in a delta function basis, it's got local
support. So if you work in a finite element basis, the Coulomb operator becomes a pairwise
operator.
>> Sabre Kais: If you work in finite elements, you can reduce it to two?
>>: Yeah.
>>: But think about the overlap matrix. The overlap -- So you need to take P1 -- The integral was
pq times rs, and p and q have to overlap.
>> Sabre Kais: Yeah, I have to think about it because if there is a way to reduce this one to two
then we can...
>>: Well, I mean we can...
>> Sabre Kais: Yeah, I mean let me just finish, and we can discuss it. Yeah, it's interesting
because this is our goal. If we succeed to break it down to 2-local, I mean, in principle we can do
electronic structure. But I mean -- Yeah, I mean this is a good point to discuss because I mean I
didn't think about this one, that you can reduce it to two.
>>: Yeah, yeah.
>> Sabre Kais: No, excellent point. I mean, this is really our goal. Because if this is true then I
can do electronic structure now.
>>: [inaudible] that's great.
>> Sabre Kais: Okay, good? And still the D-wave machine is available with -- like they are
moving to 500 qubits.
>>: But on the current machine you still need it to condense to a Hamiltonian that commutes.
>> Sabre Kais: Say that again. Sorry, I'm not following you.
>>: So the terms you can use are all commuting terms.
>> Sabre Kais: Yeah.
>>: Which is slightly different than the problem of, "Can I go to a 2-qubit form in non-qubit
terms?"
>> Sabre Kais: Oh, okay. I see your point. Yeah.
>>: We're generally -- From a computing perspective, we know that 2-local Hamiltonians are
[inaudible].
>> Sabre Kais: Mmm-hmm.
>>: So I can always simulate a 4-local [inaudible].
>>: You can always put that in the graph.
>>: Yeah.
>>: Just the overhead is going to kill you.
>>: Oh, sure.
>>: So I'm missing one thing about this overlap. Since the Coulombic repulsion is long
distance...
>>: Okay, so the statement I'm making pertains to the first formula here [inaudible] is the sum of
the four terms of the Coulomb operator.
>>: Right.
>>: That's a function of using an underlying basis. But the Coulomb operator, you know, is a
pairwise operator. So why do you have four indices? It's because you're using an underlying
basis which has overlap, which doesn't have [inaudible] support.
>>: Right. Right. So, I agree. So you go to a delta function basis...
>>: A delta function [inaudible] element basis. All right, so that can be reduced to a pairwise. I'm
just answering Sabre's original question: how to reduce the [inaudible] of those interactions? You
just introduce finite support to your basis.
>>: But if it affects the accuracy in representing the wave function, you won't get the same -- you'll
have to have more of those basis functions...
>>: Well, they might not be as compact as a Gaussian basis. But, you know, people use different
kinds of basis functions in quantum chemistry.
>> Sabre Kais: Okay. I mentioned this before, but here at the end I would like to mention what I'm
trying to connect -- and this is an open question: what we did in finite-size scaling to
adiabatic quantum computing. I mean, suppose I have a method which will tell me where the gap
is. Does this help us in reducing the evolution time? Because when you're close to the gap, you
have to be very slow. So if somebody gave me the gap -- where the gap is in a given
Hamiltonian -- can this speed up the calculations?
And for that I'm trying to connect it with what we did before. For any given Hamiltonian in
quantum mechanics with a functional parameter, we devised what we call finite-size scaling, which
gives us the point where the system has a crossing or avoided crossing as a function of that parameter.
And let me just -- So the idea was, in quantum mechanics you take a wave function and you
expand it in a basis set. And now I can look at the criticality not in the thermodynamic limit; I can
look at criticality in the Hilbert space, where the number of basis functions going to infinity is [inaudible]
thermodynamic limit. But if the basis set is finite, I can use finite-size scaling in Hilbert space
to detect where the criticality is or where the gap is going to be.
So we have done this before for any given Hamiltonian with a parameter. And this is maybe
almost over ten years ago. And just to give you an example of how this works: if I have a
Hamiltonian with a given parameter and expand it in a finite basis set, I can do the ansatz in
finite-size scaling instead of the thermodynamic limit. N will be the number of basis functions and
might be [inaudible] qubits.
And then from this one -- this is the same ansatz used in finite-size scaling in thermodynamic
calculations, but we can repeat it in the Hilbert space, and there is an analogy between the Hilbert
space and the thermodynamic limit. And from there I can -- For example, here is the Hamiltonian
with the parameter. We define our finite basis set. And if I run it with increasing basis
set, I can get the crossing point. The crossing point is the critical point for the given Hamiltonian.
So if you give me your Hamiltonian, I can tell you exactly where this crossing is going to be. So
the question is, if I know where the crossing is in adiabatic quantum computing, does this help to
reduce the time? Because I can be very fast in the evolution until I get close to the critical point,
and then I will slow down. And the question is how to turn this into an algorithm where we
can take advantage of this point and get different scaling -- because, I mean, what's killing you
is the slow evolution.
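A minimal numerical sketch of this idea (not from the talk; a toy two-level gap in the spirit of Roland and Cerf's local adiabatic evolution, with a made-up minimum gap of 0.05): run the interpolation at a rate proportional to the square of the gap, so the evolution is fast away from the avoided crossing and slow only near it.

```python
import numpy as np

def gap(s, gmin=0.05):
    """Toy avoided-crossing gap for a two-level H(s), minimal at s = 0.5."""
    return np.sqrt((2.0 * (s - 0.5)) ** 2 + gmin ** 2)

# Local adiabatic schedule: evolve at a rate ~ gap(s)^2, so you are
# fast far from the crossing and slow only near it.  The total runtime
# is then the integral of 1/gap(s)^2 over the path, versus 1/gmin^2
# for a schedule that crawls at the worst-case rate everywhere.
s = np.linspace(0.0, 1.0, 200001)
ds = s[1] - s[0]
T_local = np.sum(1.0 / gap(s) ** 2) * ds
T_uniform = 1.0 / gap(0.5) ** 2

print(T_local, T_uniform)   # roughly 30 vs 400: knowing the gap location pays off
```

So in this toy model, knowing where the gap minimum sits buys an order of magnitude in evolution time, which is exactly the question being raised.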
So this I thought may be interesting for discussion. And I mean the method works because we
also checked the inverse, which is the [inaudible], and we get universal [inaudible]. So there is no
doubt that this analogy works between the thermodynamic limit and the Hilbert space.
Again I would like to end on this positive note: if we have a quantum computer [inaudible]. And I
would like to thank my group, who is doing the hard work. Thank you for your attention.
[applause]
>>: Maybe I missed it. I don't know if you said it, but why have the D-Wave computer implement
something that's not SCSC?
>> Sabre Kais: What we were doing with the D-wave, I mean from Vidal's experiment, I mean
they have Z-Z and they cannot do the x...
>>: Why can't they do anything else?
>>: Well, actually they can.
>>: They can?
>>: So they, for example, [inaudible]. They haven't, and the reason they haven't is just because
they're very interested in these combinatorial optimization problems [inaudible].
>>: [inaudible] restriction they have is that they can only make Hamiltonians which are [inaudible]
meaning all of the [inaudible] elements have the same sign. So that all of the signs and those
weights and those paths [inaudible] have the same sign....
>>: But that means [inaudible] has no sign problem.
>>: Yeah, that’s right. So they can only simulate Hamiltonians which have no sign.
>>: But then...
>>: But they can be spin...
>>: ...what's the point?
>>: ...[inaudible].
>>: Their interactions can be [inaudible].
>>: [inaudible]. And so I mean [inaudible] doing some quadratic rendering optimization which
[inaudible]. So [inaudible]. [inaudible] a machine that did this thing, you can [inaudible]. And so
now we're all [inaudible].
>>: But that would also be a big problem for doing quantum chemistry on it. [inaudible].
>> Sabre Kais: Yeah, this is --.
>> Krysta Svore: Are there any other questions or comments? Okay, let's [inaudible].
[applause]
>> Krysta Svore: Okay, so now we're going to hear from Peter Love from Haverford College. And
he's going to talk about fermionic quantum simulation.
>> Peter Love: So thanks for the -- Is my mic turned on? Can you hear me? Thanks for the
invitation. So I just wanted to say, for those of you who don't know Haverford College: if this is
the largest curiosity-driven enterprise, we might be the smallest curiosity-driven enterprise. So
we're a liberal arts college about 15 minutes outside of Philadelphia. And this was work done with
Jake Seeley, who was a senior at Haverford, as part of his senior thesis. And he's unfortunately
gone into climate science. He'll begin at Berkeley in the spring, which is good news for Station Q
because it means he'll be able to stay on the same floor of the building. You're quite close to the
sea.
So, okay, I just wanted to start with some general comments. Sorry, there's a figure missing here.
So there have been a lot of interesting developments in thinking about the difference between bosons
and fermions. So bosons, by which I mean photons -- let's just stick to things we can literally see.
So that's why you can see me: the photons in this room are bouncing off of me. They're
interacting with the atoms that make me up and they're bouncing back into your eye. They're not
bouncing off each other. So photon-photon interactions are ignorably small, essentially non-existent.
But then, fascinatingly, you know, there's this great KLM theorem that says, in spite of the fact
that photons don't interact with each other, one can still build a universal linear optical quantum
computer by using measurements and feed-forward to create effective interactions between the
photons and [inaudible] gates.
Of course one needs lots of technical things, like lots of good sources for single photons on
demand, which we don't have right now, and good detectors, which we're getting there on. But,
nonetheless, universal quantum computation with linear optics? No problem.
More interestingly still, you know, these bosonic local Hamiltonian problems -- so you might think
that if you restrict to bosons you have no sign problem, so you just get rid of the hardness. This
work was done by [inaudible], who has just got a faculty position at Stony Brook. He told me a
funny story, which is he published the paper saying, "You know, bosonic Hamiltonians are just as
hard as fermionic Hamiltonians." And he said, "You know, I got some angry e-mails from the
quantum Monte Carlo community." I said, "Really? Who from?" And he said, "Well, I got an e-mail
from this guy, Bernie Alder." So this was a somewhat surprising result but, nonetheless, it's true.
And then more recently there's Scott Aaronson's work showing that even sampling from the
distributions implied by simple linear optical circuits can be hard.
So, okay. So that's -- Yep?
>>: I just wanted to ask you a question about your second bullet point. And maybe you or some
computer scientist in the audience can answer. I was reading in Scott Aaronson's recent posting
on the archive. He says that the KLM procedure allows universal quantum computation if
enhanced by post-selection. So does anyone know exactly what that means? In other words, my
impression is that linear optics is not a universal model for quantum computation without some
fancy unrealistic computer science adjectives such as post-selection.
[laughing]
>>: So it's just that state that you need to generate -- What's nice about it is, well, it's post-selected but it's heralded.
>>: It's heralded?
>>: Yeah.
>>: I don't know what that means.
>>: So what it means is -- Okay, so it's post-selected but it doesn't contain any data. So you have
some part of the photons that has all the data and then, you have some state that you need to
generate which you have to post-select.
>>: Okay. So...
>>: [inaudible]...
>>: ...just for myself and maybe -- So post-selection is something that's not physically realistic?
You have to repeat something many times and then only use the small fraction of cases where
what you're post-selecting for actually occurs?
>>: That's right. But you can make it realistic if it doesn't corrupt the data. Right? So the most
powerful...
>>: No, but you might have to repeat some experiment exponentially often before you find the
instance that you're post-selecting on?
>>: Yeah, so you have to make some trade off between space and time, right? And what's nice
about the KLM scheme is that you can improve the probability of getting the state you want at a
cost of space. So you figure out...
>>: Polynomial space or exponential space?
>>: Polynomial space. So you figure out, you know, how big of an algorithm you want to run and
then, you have to put aside a chunk of resources to do the post-selection to start. But it's different
-- Because Scott sometimes talks about if you have a quantum computer where you could
post-select on the quantum computer, which puts you into like a completely different...
>>: Well, that's what I thought...
>>: ...[inaudible] class.
>>: That's what I thought you were discussing.
>>: Yeah, so in this case you need post-selection but it's weaker because it doesn't do anything
to the data. It's just for preparation.
>>: So is linear optics BQP complete or not?
>>: Yeah.
>> Peter Love: Yeah.
>>: With [inaudible], yeah.
>> Peter Love: Yeah.
>>: So all this post-selection that's necessary can actually be, in principle, implemented by
polynomial overhead?
>>: That's right.
>> Peter Love: Yeah.
>>: Okay.
>>: So then can I ask a question [inaudible]...
>> Peter Love: Yes.
>>: ...simple-minded quantum Monte Carlo perspective, can you tell me what the trick here is just
that this is essentially a [inaudible] Hamiltonian and that's why it's hard?
>> Peter Love: Yes, that's right. That's right. So there's also a nice paper by Sergei [inaudible] on
stoquastic Hamiltonians in which he essentially proves quantum Monte Carlo without frustration
and without the sign problem is efficient. Which perhaps you felt that you knew, but now it's nice
to have a more rigorous argument.
Okay, so let's see what trouble I can get in on slide number two. So, okay, let's think about
fermions, meaning electrons. So, of course, electrons have wave-like properties, so we can take
these very nice electron micrographs. And of course electron-electron interactions are strong;
that's worth saying out loud if we're talking about chemistry. And fascinatingly there's a paper by
[inaudible]: as soon as they did KLM, the obvious question was, well, what happens if you do
this construction for fermions, the equivalent of fermionic linear optics? Which I think, perhaps, is
the answer to what Garnet was saying earlier. So if you only have the pairwise term in that
Hamiltonian, that's the linear optical regime. Now fermionic linear optics is not universal, so you
cannot do the equivalent KLM scheme with fermions.
Fermion sampling is easy because we can calculate determinants well, and fermionic local
Hamiltonian is still QMA complete. So it's nice to have these little sketches of a few complexity-theoretic
results so that one can talk a little more generally.
However, this talk is extremely specific. I've really enjoyed today because it's been a lot of
depressing negative-Nancy talk about things I'm very interested in. However, I think that the
questions have been exactly right. You know, when we thought about this a while ago -- you know,
the algorithms we have now are concatenations of constructions. And for those constructions
there's no reason to believe, and no claim even, that they're optimal. In fact, you know, if you really
end up doing serious chemistry on a quantum computer a long time in the future, there's no
reason to believe it would look anything like anything that anyone said today. Perhaps there are
completely new ideas. Hopefully we'll have some tomorrow.
So here are the steps, though, in mapping fermions to qubits. We've already heard quite a lot
about ansatz states, and from Jarrod about, you know, mapping these things onto a real
quantum computer: state preparation of those ansatzes. Phase estimation has already been
defined, and then that gives us a nice digital readout of the energy.
So I want to talk about one of the key steps: the mapping of a fermionic wave function to qubits,
and then how it combines with Trotterization. So today this talk is about the mapping. It's about picking
out just one step of this concatenated construction and trying to improve it. So I'm just going to
talk about how to more efficiently map fermionic states and fermionic operations onto qubits.
Okay, so I'll just briefly blow through this because we've already heard it today. So, okay, here's a
set of spin orbitals for water. They're typically one-electron energy eigenfunctions from some
method. The key point for today is just, okay, the fermions. So each of these orbitals can be
occupied or unoccupied, and so you form a many-electron basis from antisymmetrized products.
So this is the idea of Slater determinants.
So what do we think about here? Well, we have an occupation-number basis. So if we represent a
state of a chemical system in qubits, we use the occupation-number basis. So we assign a qubit
to an orbital. State zero of that qubit means that orbital is unoccupied; state one of the qubit means
that orbital is occupied. This idea has a history. Actually it goes back to Paolo Zanardi, who originally
wrote this down thinking not about chemistry but about entanglement in fermionic
systems.
So this is the figure from the 2005 paper. It just says that if you do that, then your estimate for the
number of qubits is more reassuring and encouraging than your estimate of the number of qubits
for, say, interesting factoring problems. There's another way of saying that, which is that if this
number were comparable to the number of qubits you would need to factor interestingly large
numbers, that's bad news. Because if you build a quantum computer that's that big, it will be
taken away from you because it will have all sorts of horrible military applications that you will not
be allowed to play with. However, we can build this many qubits and people who, you know, live
on the Beltway will not get upset with us, and we can still do interesting chemistry.
Of course this isn't the whole story. The numbers of gates as we've seen are alarmingly large.
Okay, so a lot of repeating here; we've seen this a number of times before. So we just take the
Born-Oppenheimer approximation, treat the nuclei as fixed, and then the game is to find the
electronic energy as a function of the nuclear coordinates. And hey, presto, in second-quantized
form here is our famous molecular electronic Hamiltonian again.
So what I'm going to do in this talk is take this Hamiltonian and reduce it to a Hamiltonian for
qubits in a number of different ways. Okay, so what's the important property? So I defined this
occupation number basis, but that cannot possibly be the whole story, because no information
about the symmetry of the wave function is encoded in anything I've said so far. Instead, the
information about antisymmetry gets encoded in the operators that define the Hamiltonian
through these fermionic commutation relations. And just to hammer the point home, this is the
idea of a good scheme. We have a number of ingredients. So we can take a wave function of
electrons and encode it into a wave function of qubits. Then we have an operator that takes a
state of qubits to a new state of qubits. Then there's a fermionic operator that takes the state of
the electrons to a new state of the electrons. And hopefully, when we inverse the encoding, we get
the same states here. So this is the definition of a successful encoding scheme.
Okay, so how do we usually do this? Or what was the sort of starting point for this work? So it
was the idea that Jarrod has already presented. So we need to define qubit creation and
annihilation operators that act on the occupation number basis but which obey the same algebra
as the fermionic operators. And the way you do that is you attach these long strings of Z
operators. So this little arrow here just says, "I'm qubit j. Everybody who has an index less than me
gets acted on by a Z."
And so I have my Q, my qubit operator, which is just this operator here that takes a zero and
turns it into a one -- that's creation in qubit land. It takes a one and makes it into a zero -- that's
annihilation in qubit land. And then I have this string operator here, and if you plug these formulae
into the anticommutation relations, you will find that these operators do indeed obey the correct
anticommutation relations. Therefore, these are the qubit analogs of the fermionic operators.
So this is originally from Jordan and Wigner in 1928. This use of it was proposed by Rolando
Somma in 2002. And in between these years it was used very extensively in condensed matter
physics for transforming backwards and forwards between fermionic and spin systems to enable
us to solve a whole bunch of systems exactly.
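To make this concrete, here is a small numerical sketch (mine, not from the talk, using the usual convention that the Z string sits on the qubits with index below j) that builds the Jordan-Wigner operators as explicit matrices and checks the fermionic anticommutation relations:

```python
import numpy as np

I = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
# Local qubit lowering operator |0><1|: takes |1> to |0>.
lower = np.array([[0.0, 1.0], [0.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(j, n):
    """Jordan-Wigner annihilation operator a_j on n qubits:
    a Z on every qubit with index less than j, then the local
    lowering operator on qubit j."""
    return kron_all([Z] * j + [lower] + [I] * (n - j - 1))

n = 3
a = [jw_annihilation(j, n) for j in range(n)]
# The point of the Z strings: these matrices obey the *fermionic*
# anticommutation relations {a_i, a_j} = 0 and {a_i, a_j^dag} = delta_ij.
for i in range(n):
    for j in range(n):
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
        lhs = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        rhs = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(lhs, rhs)
```

Dropping the Z strings breaks the i != j relations, which is exactly why they have to be there.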
Okay, so this is a play on words that doesn't work at all in America, where you call this game
Chutes and Ladders for reasons that escape me. But in England it's called Snakes and Ladders.
So we have these long strings of sigma z that have shown up in our Hamiltonian in order to get
the antisymmetry properties right. Okay, so what we are going to end up doing, after we have
Trotterized and so on and so forth, is we're going to exponentiate something that looks like this.
The circuit to do this is simple, actually; it's in Nielsen and Chuang. You do a whole bunch of
CNOTs to effectively compute the parity of this set of qubits, which is what this operator depends
upon. You then do a sigma z rotation and then you uncompute the parity. Yeah?
>>: I think there's a paper by Kitaev from about 12 years ago where he reduces this linear
change to a log factor...
>> Peter Love: That's exactly what I'm about to talk about.
>>: Okay.
>> Peter Love: So these ladders of CNOTs are exactly order N long in Jordan-Wigner. And so,
unfortunately, Matthias, I know you said order N to the fourth makes you afraid. This should make
you terrified because...
>>: [inaudible]
>> Peter Love: ...you get an extra factor of N so it goes up to order N to the fifth.
>>: Or log N. [inaudible] log N and stop.
>> Peter Love: Yeah. So the -- Okay. So, as you said, quite correctly, this problem was
thought about from a different point of view. They thought about -- imagine I had a fermionic
quantum computer, so imagine I was computing with fermionic orbitals. How many fermionic
operations do I have to do to do a single-qubit gate operation? And buried in there is exactly
the mapping that we need to improve this N to the fifth to N to the fourth log N. For pedagogical
reasons in that paper they think about the parity basis. So if you think about what's happening in
the Jordan-Wigner transformation, occupation is stored locally, which means that parity is a non-local
quantity. You have to look at many qubits to compute their parity. And so you end up with these
long string operators.
So in the parity basis, why not make parity the local quantity? So you store in the qubits the parity of
a set of orbitals. So I'm qubit j, and I store the total parity of everyone with index less than or equal
to mine. Then this is great: we can compute these parity quantities that we're interested in just by
looking at a single qubit. Okay, but now we still need a raising or lowering operator, and now
occupation is no longer a local quantity because we're not storing occupation in the qubits.
So now instead of having long strings of Z's that compute parity, we have long strings of X's that
update qubits. So if I want to change one orbital occupancy, I end up changing all of the parities
that involve that orbital occupancy. So instead of having long strings of Z's on the left, I
end up with long strings of X's on the right. And these are, again, order N long, and I get no
advantage.
Okay, so both of these methods, both the occupation number and the parity bases, end
up with long strings of Paulis whose length scales as N. And they're equivalent.
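A quick sketch of why the parity basis just moves the problem around (my illustration, not from the slides): parity becomes a one-qubit lookup, but a single occupation change now touches order-N qubits.

```python
import numpy as np

def to_parity_basis(occ):
    """Parity encoding: qubit j stores (n_0 + ... + n_j) mod 2."""
    return np.cumsum(occ) % 2

occ = np.array([1, 0, 1, 1, 0, 0])
par = to_parity_basis(occ)               # [1 1 0 1 1 1]
# Flipping one occupation (orbital 2) flips every parity qubit from
# index 2 upward -- this is the order-N string of X's.
occ2 = occ.copy()
occ2[2] ^= 1
changed = to_parity_basis(occ2) != par
assert list(np.where(changed)[0]) == [2, 3, 4, 5]
```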
So what about some happy medium between the occupation number basis and the parity basis?
And this is exactly what we call the Bravyi-Kitaev basis. So what do they do? So here's the
occupation number basis; here's the Bravyi-Kitaev basis. What's indicated here by the arrows is
that every time there's an arrow, this qubit stores the sum of all of the occupancies of the orbitals
that have an arrow pointing to it. Okay? So we wrestled with that paper for a while to try and find
an economical explanation or economical description of how this transformation works. You know,
we drew various pictures like this trying to understand what was in Bravyi's mind. The simplest
thing we came up with was the following: to think about the mapping in terms of the following
matrix, which is relatively easy to construct for any number of qubits.
So this is a mapping not from state to state but from qubit labels to qubit labels. So in my
occupation number basis, I have a bit string that tells me the occupations. And then, over here in
my Bravyi-Kitaev basis, I have a bit string that tells me which logical basis state that occupation
number state goes to.
So the way you construct this matrix is you start with a one, so that's easy to remember. Then
you replicate that block here and fill in the bottom row. All right. Now I'm going to do it again: I
take this block, I replicate it again and fill in the bottom row. Now I do it again: I replicate that
block here and I fill in the bottom row. What's happening here, or one of the things that is most
noticeable, is that the rows whose sum runs over all of the qubits are becoming increasingly
sparse. You need fewer and fewer and fewer of those as you add more and more and more
qubits. So N here is the number of orbitals, and this is an N-by-N matrix. And you know, okay, it's
not unitary, but that's okay because it doesn't act on states; it acts on labels of states.
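The replicate-and-fill recursion described here is easy to write down; this is one rendering (my reading of the construction in the Seeley, Richard and Love paper, so treat the conventions as assumptions):

```python
import numpy as np

def bk_matrix(N):
    """Bravyi-Kitaev transformation matrix for N = 2^k orbitals, built by
    the replicate-the-block-and-fill-the-bottom-row recursion."""
    beta = np.array([[1]], dtype=int)
    while beta.shape[0] < N:
        m = beta.shape[0]
        new = np.zeros((2 * m, 2 * m), dtype=int)
        new[:m, :m] = beta          # replicate the block on the diagonal
        new[m:, m:] = beta
        new[-1, :] = 1              # fill in the bottom row
        beta = new
    return beta

print(bk_matrix(4))
# [[1 0 0 0]
#  [1 1 0 0]
#  [0 0 1 0]
#  [1 1 1 1]]

# BK-encoded bit string = matrix times the occupation vector, mod 2.
occ = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # example occupations, 8 orbitals
bk = bk_matrix(8) @ occ % 2
```

Note how only the last row sums over everything: those all-qubit sums get sparser as the matrix doubles, which is where the log N locality comes from.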
So that's the Bravyi-Kitaev basis. That's the new basis that you want to use. Okay, so to think
through exactly how this is going to work -- how we're going to take this transformation, apply it
to the interacting fermion Hamiltonian that we're interested in, and understand the spin
Hamiltonian that comes out -- it's useful to define a whole bunch of sets of qubits. So the first set is
the update set, U of j. So if I take orbital j and I change its occupation, which qubits do I have to
act on? That's the update set. So I update an orbital and then I have to update the update set
of qubits.
Okay. Then we're also interested in parity information because we want to compute phases.
So what set of qubits stores the total parity of all orbitals with index less than j? That's the
parity set, P of j.
Now of course qubits can only be zero or one. So any given qubit can either store the parity of
orbital j or it can store the opposite of the parity of orbital j. And so, for reasons that are too boring
to talk about, it's convenient to define this flip set, F of j, which tells you whether your qubit is
storing the parity or the opposite parity. And the number of qubits in these sets -- this is
not obvious but it's true -- is at most log N.
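One way to make these sets concrete (my own reading of the definitions, computed from the transformation matrix and its mod-2 inverse rather than from the tree picture in the paper, so the conventions here are assumptions):

```python
import numpy as np

def bk_matrix(N):
    """Bravyi-Kitaev transformation matrix for N = 2^k orbitals."""
    beta = np.array([[1]], dtype=int)
    while beta.shape[0] < N:
        m = beta.shape[0]
        new = np.zeros((2 * m, 2 * m), dtype=int)
        new[:m, :m] = beta
        new[m:, m:] = beta
        new[-1, :] = 1
        beta = new
    return beta

def bk_sets(j, N):
    """Update, parity and flip sets for orbital j."""
    beta = bk_matrix(N)
    # beta is unit lower triangular, so its integer inverse taken mod 2
    # is also its inverse over GF(2).
    binv = np.round(np.linalg.inv(beta)).astype(int) % 2
    # Update set: qubits above j whose stored sum includes orbital j.
    update = {i for i in range(j + 1, N) if beta[i, j]}
    # Flip set: qubits below j that decide whether qubit j matches
    # the parity of orbital j or its opposite.
    flip = {i for i in range(j) if binv[j, i]}
    # Parity set: qubits whose values add up (mod 2) to the parity
    # of all orbitals with index less than j.
    col = binv[:j, :].sum(axis=0) % 2
    parity = {k for k in range(N) if col[k]}
    return update, parity, flip

u, p, f = bk_sets(5, 8)
print(u, p, f)   # update {7}, parity {3, 4}, flip {4}
```

So for orbital 5 of 8, everything you have to touch is two or three qubits, not order N of them.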
Okay, so how do we do it? So for even-numbered orbitals, the qubits store the occupation directly;
this is a fact about the transformation. If you go back to this matrix, you'll notice a feature of it
is that every so often there's a one on the diagonal. That corresponds to the even-numbered
orbitals. And so for these, I just have the update set. Okay, so I'm going to change the
occupation. I'm applying a creation or annihilation operator. I have to update everything in the
update set, so I act with X on everything in the update set. I have a qubit creation operator that
just changes that qubit from zero to one, and then on the parity set I apply the phases. But now
none of these operators are order N local. They're order log N local, because they only act on sets
of qubits whose size is bounded above by log N.
Now there are a couple of different sets, so it's not strictly log N. It's, you know, at most 2 log N
plus 1.
And the same thing here. So I just update everything in the update set, I compute the phase from
the parity set, and I apply the correct qubit operator. Okay, so odd-numbered orbitals don't
store directly the occupation of the orbital. So I have to have a qubit creation or annihilation
operator that depends on whether the flip set has even or odd parity. Okay, so if the flip set has
even parity, that means that the qubit stores the occupation of the orbital. If it has odd parity, that
means it's the opposite. And so it just flips the definition of creation and annihilation and applies a
sign, so you just have this slightly more complicated qubit operator. But the update and parity
sets -- Now, because we included the phase here, which, you know, is one of those things you do
on the blackboard and then end up regretting when you write the paper, we end up having
this R set, which is the parity set minus the flip set.
So we get exactly the same structure, however. We have to update a bunch of qubits because
we've touched one; now neither occupation nor parity is a local observable, and we have to
apply the phase, but we only have to act on a number of qubits which is bounded logarithmically.
Okay, if you've never seen a graph of the Hamming weight, there it is. I imagine at Microsoft
Research you may have seen such a graph. So these things are actually bounded above by the
Hamming weight. So, for example, this is the parity set; it's actually related to the Hamming weight.
So if you're careful about counting resources, you should compute the Hamming weight. That will
tell you, averaged over all labelings, the average locality. But the punchline is that you can simulate
these things with order log N Pauli gates.
So we know that this transformation is exponentially better than Jordan-Wigner: we replace an N
by a log N. So by that I mean, imagine having a fixed capability inside your quantum
computer to bring groups of qubits together and act on them with strings of Pauli operators. So if
the experimentalist gives me a fixed resource -- say you can act on five qubits -- well, if I use
Jordan-Wigner that means I can do five orbitals. If I use this, I can do 2 to the 5 orbitals. So,
therefore, it's a distinct improvement.
So we know the asymptotic scaling is better. But a good question is -- you know, this is a more
complicated transformation -- is there a crossover point where for small problems it's
better to use Jordan-Wigner, and then later on it's better to pay the extra cost of the complexity
and use this Bravyi-Kitaev?
Well, so let's do the simplest example, which is H2 in a minimal basis, which has the virtue of being
the only one that's actually been calculated on a quantum computer. The
only one that's been published, perhaps I should say.
[laughing]
Okay, so this is the optical setup from 2010. And these are the curves for the minimal basis H2.
So if we take this Hamiltonian and just work out the details, here are the Hamiltonians. As you
can see by inspection, they are isospectral. I'll just give you a minute to work that out. Okay, so
something that's not so obvious is what the computational cost is. So from that ladder construction
above, you know that if you have an n-fold tensor product of Paulis -- I say Z here, but it could be
any string of Paulis -- this requires 2(n-1) CNOTs and one single-qubit gate,
[inaudible].
So if I flip one of these to an X or a Y, I have to pay two extra single-qubit gates just to do a local
rotation to change my local basis. So if I ask about cost per Trotter step -- now, we haven't at all
solved any of the issues that surround Trotter; we haven't made you unafraid, because it's still N to
the fourth log N. But the cost per Trotter step is as follows. It's good to collect together all the Z
terms because obviously they commute, so you don't have to Trotterize them separately. So the
single-qubit Z gates are exactly the same here. The CNOTs for the Z part of the Hamiltonian are
worse for us because we have a more complicated construction. Significantly, the off-diagonal
terms in the Hamiltonian -- those things that one needs to worry about if you're imagining using
something like the D-Wave machine -- these are significantly better for us. Well, perhaps
"significantly" is a little overstating it. The overall result is that per Trotter step we actually suffer a
little; it's a little bit worse. But overall, depending how you Trotterize, you can actually get a slight
advantage. So just using this, again, ten-year-old-now paper by Bravyi and Kitaev, you can
replace order N ladders of gates with order log N ladders.
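The bookkeeping for a single exponentiated string can be captured in a few lines (a sketch under the assumptions just stated: 2(n-1) CNOTs for the ladder, one z rotation, and a pair of basis-change gates per X or Y factor):

```python
def string_cost(pauli):
    """Gate count for exponentiating a tensor product of n non-identity
    Paulis with the CNOT-ladder construction: 2(n-1) CNOTs, one z
    rotation, plus two basis-change gates for every X or Y factor."""
    n = len(pauli)                      # e.g. "ZZXY" -> n = 4
    cnots = 2 * (n - 1)
    singles = 1 + 2 * sum(p in "XY" for p in pauli)
    return cnots, singles

print(string_cost("ZZZZ"))   # (6, 1)
print(string_cost("XXYY"))   # (6, 9)
```

The crossover question is then just which mapping hands you shorter strings on average for a given molecule.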
So this is an example where, you know, an hour in the library can save you a long time in the lab.
And the improvement increases with more orbitals. So it turns out actually this is a terrible
example, because it's so tightly constrained -- two electrons in four orbitals -- that the actual
advantage of the method is barely exhibited. So this paper's out. I just got the proofs today,
actually, so it'll be in the Journal of Chemical Physics fairly soon.
And if you look at N equals 32 for the Jordan-Wigner decomposition of this thing, you get this
nasty string. And then from Bravyi-Kitaev it's very small. So in addition to the paper we actually
have code that implements all this, so we can take arbitrary sets of -- if you tell us the integrals,
we can tell you the Hamiltonian couplings. And it just remains to thank the NSF and the Air Force
and Howard Hughes. And this is our Quantum Information and Quantum Computation Center,
which hopefully will manage to continue for another five years. And thank you for your attention.
[applause]
>>: So, Peter, I'm going to make a claim that you can do better than that. But I think -- But maybe
[inaudible] can review that.
[laughter]
>>: So the story you are describing sounds similar to the study of fermionic systems with tensor
product states. When tensor product states were first proposed, they were for spin systems. And
then people thought, well, the natural way to treat fermions was to introduce a Jordan-Wigner
transformation. But then you'd have a long string of operators, and that seemed like a bad
thing. And then, as [inaudible] suggested, you should use [inaudible] mapping, but it didn't seem
very practical. And then we know now that if you set up the initial state correctly to [inaudible] parity
constraints, then you can actually absorb all these non-local signs into [inaudible] local operation.
And so what that says, if [inaudible] quantum computer -- what that means is that if you prepare the
quantum state with this parity structure and you apply a physical Hamiltonian, then for all those
types of simulations there is no need for non-local constraints. Would you agree?
>>: Yeah.
>>: With that?
>>: Yeah.
>>: So you don't -- So actually you can bypass this problem...
>>: I think fundamentally...
>>: [inaudible]...
>>: ...the description for it is actually in the same paper by Bravyi and Kitaev.
[simultaneous audience comments]
>>: We're currently thinking about whether the tricks used in [inaudible] can be extended to
this.
>>: Yeah, which means you took the log N factor, the N to the 4...
>>: Oh yes, yes. I know. I know. But just for the mapping of the fermion...
>>: Yes.
>>: But how much of the Hilbert space do you sacrifice by these parity constructions?
>>: Nothing.
>>: Nothing.
[simultaneous audience comments]
>>: I think you said you compute only on...
>>: You need more qubits.
>>: You need to -- I mean your...
[simultaneous audience comments]
>>: But you can safely diagonalize in a parity sector.
>>: As long as you're interested in evolving with Hamiltonians which are parity conserving, so
essentially...
[simultaneous audience comments]
>>: Oh, so you're not using the Hilbert space sparsely to encode the states?
>>: No.
>>: No.
>>: You're in a sense.
>>: You need more qubits than before.
>>: Is it cost effective [inaudible].
[simultaneous audience comments]
>>: You need some [inaudible]...
>>: [inaudible] think it might be more. It's polynomially more qubits.
>>: Well you need a Z-2 degree of freedom if you [inaudible].
>>: But now you have [inaudible], for example.
>>: Yeah.
>>: So polynomially at -- I mean that sounds rather significant. So originally you were taking N
qubits to encode your state and you had N to the fourth log N, and now you're saying you might need
polynomially more qubits to encode your information? To get rid of the log?
>>: I need more qubits to get rid of the log, guys. But I think I also get more gate [inaudible].
>>: So you pay with more qubits and more gate operations...
>>: Yes.
>>: ...and get rid of the log?
>>: Yes.
>>: I don't understand that.
>> Peter Love: But, Garnet, I mean if you use a first-quantized encoding of this then what
you say is true. Right? You have a Hamiltonian that preserves the symmetry of the wave function.
And, therefore, if you initialize in anti-symmetric state then it will simply be preserved and you
don't have to do any of this. But I understand that's not what you're saying. But, yeah, yeah.
>>: But the realization in [inaudible] states is that if you -- it's pretty obvious if you like unitary
circuits [inaudible] some state that you can kind of [inaudible] preserve this parity [inaudible]. And
then, that makes it [inaudible]. I don't think it's quite hitting me yet what [inaudible].
>>: I don't understand Matthias' point. What was Matthias' point [inaudible]...
>>: Well, the original...
[simultaneous audience comments]
>>: ...is that [inaudible]...
>>: Yeah, right.
>>: ...every bond. So that will make you need at least [inaudible] more qubits [inaudible].
>>: Polynomial.
>>: But the [inaudible]. But now if you have [inaudible].
>>: Yeah, for local Hamiltonian it's certainly -- Yeah.
>>: For local one's it is good like...
[simultaneous audience comments]
>> Peter Love: Yeah.
>>: [inaudible]
>>: Oh, I see meaning, oh, for long range...
[simultaneous audience comments]
>>: Right, because in that [inaudible] paper, they get down to a constant locality because they're
on a square grid.
>>: Yes.
>>: So for the local Hamiltonian to be constant...
[simultaneous audience comments]
>>: ...or they're for something non-local [inaudible].
>>: Yeah.
>>: Okay, yeah.
>>: It'll be a constant in the number of [inaudible].
>>: Of course the number of interactions complicates [inaudible].
>> Krysta Svore: [inaudible]. Are there any more questions or comments?
>>: Peter, so one thing I don't like about the Jordan-Wigner transformation is that it generally
gives me no capability for parallelization.
>> Peter Love: Uh-huh.
>>: Basically those ladders create locks which prevent me from switching things. So when I
switched to this [inaudible], you have a sense of what the rate of collisions is. I mean, it's got to
be good. Right?
>>: It's a clean spectra so it's...
>>: [inaudible] I think.
>>: It's [inaudible].
>> Peter Love: Yeah. Yeah, I don't think there's any great saving.
>>: What are you talking about?
>>: [inaudible].
>>: There are more correlations than [inaudible]?
>>: Yeah.
>> Peter Love: Another question we don't know the answer to: this is an N-orbital to N-qubit construction, so
whether you can go from N orbitals to more than N qubits. Whether you can...
>>: You can't have...
>> Peter Love: Yeah. Exactly [inaudible]...
>>: [inaudible].
>> Peter Love: Uh-huh. Yeah.
>>: That's great in this case.
>> Krysta Svore: Okay. Let's thank Peter again.
[applause]
>> Krysta Svore: Is your mic on?
>> Ken Brown: I guess so. Yep, right?
>> Krysta Svore: Oh, wait. I'm forgetting to introduce you. Okay, so now we have Ken Brown
from Georgia Tech and he's going to talk to us about error correction and architectures for
quantum chemistry on a quantum computer.
>> Ken Brown: Thanks, Krysta. Anyway, thanks for having me out. It's nice to see everybody. A
lot of -- I know half of you at least, to start. It's also nice to be back in Washington which is where
I'm from originally. So my interest in quantum computing is mostly about how to actually build
these things, which I think in the end, leads to error correction.
So my lab, we actually do basically three things. So first we try to build ion trap quantum
computers. It's hard, so we do a lot of theory thinking about how to make it easier through error
correction and maybe quantum control. And then, I'd also like to think about in the process of
building this device I'm making very complicated, very sensitive quantum systems. And, can I use
that sensitivity to do other things? And so for instance, we can do single molecular ion mass
spectrometry in a way which is nondestructive. For instance. We work with, you know, Sabre Kais
with quantum [inaudible] quantum computer. I of course also work for these IARPA guys who are
mostly interested in breaking codes. And we work on these devices and then, we work on
thinking about some of these architecture questions which is what I'm going to talk about today.
So these few slides of background I feel like I should just skip, but just so we're all together I
guess. So the way I think about it, with a classical computer you need an exponential number of
bits to represent n electrons. And it's kind of [inaudible] that with a molecule you can use n
electrons to represent n electrons. So as we all know, Feynman realized that, well, if I had one
quantum system to represent another quantum system maybe we can get over that scaling
barrier. And I disagree a little bit with Michael when he was mentioning how physicists can think
of new algorithms because I think the problem with physicists is they think this and then, there's
like no tool.
And so as we know, Peter Shor, a mathematician, was able to actually lay out how to solve
a specific mathematical problem, which is really what got us going. And what's amazing is that
subroutine of the phase estimation algorithm is the basis of the way we think about doing all
these other methods. So as a reminder just to compare these things -- And we've already talked
quite a bit about how the phase estimation algorithm works. So for factoring there are basically
three great things: first, the state that you start with here is very easy to prepare. It's just one.
Second, that there's a way to put this unitary basically 2 to the n times in a way which is
logarithmic in m. And then finally when we get to the measurement, we can do this continued
fraction expansion to get out the factor.
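The continued-fraction step at the end can be sketched in a few lines; the base and measured phase below are hypothetical example values, not from the talk:

```python
from fractions import Fraction
from math import gcd

def factor_from_phase(N, a, phase):
    """Classical post-processing of Shor: the measured phase is approximately
    s/r, where r is the order of a mod N; continued fractions recover r,
    and gcd(a^(r/2) +/- 1, N) yields a factor when r is even."""
    r = Fraction(phase).limit_denominator(N).denominator
    if r % 2:                      # need an even order; otherwise retry
        return None
    x = pow(a, r // 2, N)
    for cand in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < cand < N:
            return cand
    return None

# Hypothetical run: factoring 15 with base a = 7, phase register read as 0.25
print(factor_from_phase(15, 7, 0.25))   # recovers the factor 3
```

Here `limit_denominator` plays the role of the continued-fraction expansion: it finds the best rational approximation s/r with denominator at most N.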
When we look at the way that we talk about these ground state estimation algorithms first
suggested by Seth Lloyd, first it's hard to say whether or not we have the ground state. And, you
know, Jarrod showed like the worse scenario that if we get a larger and larger system it becomes
smaller and smaller. The second thing which I think is Matthias' main point and actually my main
concern as well is that I don't know how to do this any faster than just doing it 2 to the n times. And I
think that cost is like the real -- If we can solve that, I would feel a lot more comfortable.
And finally, you know, the output depends on this input. Somehow I'm optimistic about that. All
right. So what I want to talk about is how the cost is even worse than Matthias said when you add
architectural constraints and constraints involving error correction. Okay. So we know in
principle...
>>: [inaudible]
>> Ken Brown: All right. [laughter] Yeah, I like to talk in an optimistic way and say pessimistic
things. [laughter] So we know from, you know, this paper by Peter and Alan that basically if we
could get very good qubits and forget about the time cost, we could start to solve, you know,
these difficult-to-solve exact spectral physical chemical problems.
All right. So what's nice is the quantum computer in some sense solves the problem because it
allows us to use kn electrons to represent n electrons. But now we have to get down to things like
time and what's this value of k.
All right. So the way I think about it is, you know, classical computer we map bit strings to bit
strings. We can stop at any time, measure the computer, it doesn't change the state of the
computer. Quantum computers we map bit strings to superpositions of bit strings and when we
do a measurement we change the state of the computer. And the most important thing about that
is that it leads kind of directly to this no cloning result. So in a classical computer we can always
take the bit string in the classical computer and copy it. Now in quantum computing we can do
that in one basis but we can't do it in an arbitrary basis because it violates linearity. And that
has, I think, two important consequences: one is for error correction and the
other is for architectures.
So on the error correction side we know classically we can always think about some kind of
redundancy code. And if our error rate is small, we just take a majority vote and then, based on
that majority vote, you know, correct. So I mean I think in some sense that's the basis of a
democracy. We assume that if we take a majority vote of enough people, it won't be so bad.
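The classical redundancy-plus-majority-vote idea just described is a few lines; this is a minimal sketch:

```python
def encode(bit, n=3):
    """Classical repetition code: copy the bit n times."""
    return [bit] * n

def decode(bits):
    """Majority vote: decodes correctly if fewer than half the copies flipped."""
    return int(sum(bits) > len(bits) // 2)

# one flip on an encoded 1 is outvoted by the two surviving copies
print(decode([1, 0, 1]))
```

With a small per-bit error rate p, the chance of two or more flips out of three is order p squared, which is why the vote helps.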
In a quantum computer you can't stop and measure it because you want to maintain this
coherence. So instead of measuring the bit string and taking a majority vote, we measure an
operator which tells us what subspace we're in. So if there's a single error say from this first bit
flips, it takes is from the blue subspace to the green subspace. We just ask what subspace we're
in. We're in the green subspace. We flip it back to the blue subspace, and we're able to keep the
coherence between our encoded quantum bits.
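A classical stand-in for this subspace measurement, using the simple three-bit bit-flip code rather than the full Steane code, looks like this. The point is that the two parity checks identify which subspace you are in without ever reading the encoded value itself:

```python
def syndrome(bits):
    """Two parity checks (analogous to measuring Z0Z1 and Z1Z2).
    Neither check reveals the encoded bit, only which subspace we're in."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# syndrome -> which physical bit to flip back (None = already in the code space)
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Flip back to the code space based on the syndrome alone."""
    loc = CORRECTION[syndrome(bits)]
    if loc is not None:
        bits = list(bits)
        bits[loc] ^= 1
    return bits

print(correct([0, 1, 0]))   # a flip on the middle bit is detected and undone
```

In the quantum version the same parity information is extracted onto ancilla qubits, so the coherence of the encoded state survives the measurement.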
So what is the cost of measuring these subspaces? So a very common concatenated code is the
Steane code which encodes one logical qubit into seven physical qubits. And it requires
measuring basically six different subspaces. And each of the subspaces will be measured using
four ancillary quantum bits which basically serve as the way to dump the entropy. They allow us to
measure which subspace we're in and then we fix it.
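The six subspace measurements of the Steane code can be written down directly: the stabilizer supports come from the parity-check matrix of the classical [7,4] Hamming code, and a quick check confirms they are mutually compatible. A minimal sketch:

```python
from itertools import combinations

# Supports of the X- and Z-type stabilizers of the [[7,1,3]] Steane code,
# read off the rows of the [7,4] Hamming code's parity-check matrix.
HAMMING_ROWS = [{0, 2, 4, 6}, {1, 2, 5, 6}, {3, 4, 5, 6}]
STABILIZERS = [('X', s) for s in HAMMING_ROWS] + [('Z', s) for s in HAMMING_ROWS]

def commute(g, h):
    """X- and Z-type Pauli strings commute iff their supports overlap evenly."""
    (tg, sg), (th, sh) = g, h
    return tg == th or len(sg & sh) % 2 == 0

# six subspace measurements in all, and they can be measured simultaneously
assert len(STABILIZERS) == 6
assert all(commute(g, h) for g, h in combinations(STABILIZERS, 2))
print([sorted(s) for _, s in STABILIZERS])
```

Each of these six operators is what gets measured onto the four ancillary qubits mentioned above.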
So if we want to minimize the number of qubits, we only need four more qubits to do this
subspace measurement on top of the extra six we need for encoding. But typically, because the
quantum computer is going to fall apart -- we need to make sure we get everything done
before it does -- to minimize time it makes sense to sort of parallelize the way we measure
these blocks. So we kind of measure all the sub-blocks at the same time. Now on top of that -- and this goes back to what Krysta was talking about earlier -- for most quantum computing
codes the Clifford group gates, the Hadamard and the CNOT, can be
done in a way which is transversal, which means I just take my encoded qubit and I apply
Hadamard to all of the qubits that make up the smaller pieces. But the non-Clifford group gates,
you can pick any one you want, and usually people pick this T-gate or pi over 8. And I'm sorry
they didn't call it pi over 4 [inaudible] but the name stuck.
You need to prepare this state and then actually teleport it in. And the one thing which I think is
funny about this post-selection business that we talked about earlier is that almost all error
correcting protocols that are done in fault tolerant way rely on post-selection. Because I need to
prepare this state with some underlying fidelity, and I do that by teleporting it in and looking at
these measurements, fixing it if I can but often times rejecting it because it's the wrong state.
It doesn't do anything to the computation; it's just like a resource that I need to build up in order to
get through the computation. All right.
Yeah. So in terms of building a quantum computer, when you think about fault tolerant quantum
error correction, the sort of nightmare is that at the physical level single-qubit operations are usually
easy. Two-qubit operations are hard. But in a fault tolerant setting, single qubit operations are
hard; they're actually much harder. So now given those constraints what we want to do is figure
out how to create this operator U to the 2 to the m.
So Peter just talked about the Bravyi-Kitaev method. I just used as my example this Jordan-Wigner transformation. Again, we're just going to build everything in the Trotter formula. And the
tough part -- and I'm really unhappy to hear about this tree-like nature. So if I look at the Jordan-Wigner transformation, this is that ladder of spins and it does two things. So the first thing it does
is it prevents me from being able to do these things in parallel. It forces me to basically apply
them one at a time. So the N to the four actually becomes part of the time. The second thing is
you'll notice that it reaches over this whole space. And that's kind of an issue because it could be
that the qubits that you want to connect physically are actually very far away from each other.
And because of the no cloning theorem, I can't necessarily put the state onto like a wire and fan it
out all over the place. I actually have to move the bit to the bit it needs to interact with. All right.
So our way to move things quickly is this teleportation scheme. So teleportation uses these
entangled pairs and these two measurements to map a wave function from this qubit to this
qubit. Now the beautiful thing about it is the speed at which that happens, assuming that you
have the EPR pairs floating around all over the system, is just the speed of classical
measurement and communication.
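The teleportation protocol just described can be simulated directly on a small state vector; this is a self-contained sketch (qubit 0 holds the state, qubits 1 and 2 share the EPR pair), verifying that every measurement outcome, after the classical corrections, hands Bob the original state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def single(gate, target, n=3):
    """Lift a 1-qubit gate to qubit `target` of an n-qubit register."""
    out = np.array([[1.]])
    for q in range(n):
        out = np.kron(out, gate if q == target else I2)
    return out

def cnot01(n=3):
    """CNOT with control qubit 0 and target qubit 1 (qubit 0 = leftmost)."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[0]:
            bits[1] ^= 1
        U[sum(b << (n - 1 - q) for q, b in enumerate(bits)), i] = 1.
    return U

def teleport(psi):
    """Return Bob's corrected state for every measurement outcome (m0, m1)."""
    bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)     # EPR pair on qubits 1,2
    state = single(H, 0) @ cnot01() @ np.kron(psi, bell)
    t = state.reshape(2, 2, 2)
    out = []
    for m0 in (0, 1):
        for m1 in (0, 1):
            sub = t[m0, m1, :]                          # qubit 2 post-measurement
            sub = sub / np.linalg.norm(sub)
            # classical corrections sent to Bob: X^m1 then Z^m0
            out.append(np.linalg.matrix_power(Z, m0)
                       @ np.linalg.matrix_power(X, m1) @ sub)
    return out
```

For example, `teleport(np.array([0.6, 0.8]))` returns four states, all equal to the input, which is the sense in which only the classical bits need to travel.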
So relative to everything else that's going on in your quantum computer, that's very fast. So we
can move things around quickly but we still have to move it from point to point to point. So an idea
of how to think about building like an architecture here is you have a block which contains all of
the resources you need to compute. So it'll have the logical qubit, all the ancilla you need for error
correction, all the ancilla you need for [inaudible] preparation and then, some channels by which
you use these entangled pairs to move information around. And this particular architecture is
called QLA from Fred Chong's group at Santa Barbara presented at ISCA in 2006.
So since that 2006 paper and some earlier work, both Fred Chong at Santa Barbara and John
Kubiatowicz at Berkeley have thought about all kinds of different ways to like re-configure the
system, to minimize space or time depending on whether the qubits have different jobs or not. But then of
course we want to try to implement this. So we want to implement it on a real, physical system.
And my interest is in ion traps. So one -- Yeah, sorry. I seem to have lost the movie.
So my basic idea, there's an idea of kind of a charge coupled device ion trap first suggested by
Kielpinski, Monroe and Wineland. Wineland won the Nobel Prize this year. He's also the nicest
guy on the planet. So the idea is the ions will be stored in all of these surface electrode arrays of
traps. They'll move around. They'll go through this dance of error correction kind of continually.
So what we can do is we can say, "Well, let's be optimistic and assume that we can make these
ions very good, have like a very low failure rate." The physical gate speed is kind of slow, 10
microseconds. Decoherence time has been measured to be quite long and is longer even since
these original values. But these are the values used in this ISCA paper for factoring. And the cost
of concatenated error correction, there's the good and the bad. So the good is, you see, that as I
go up in levels of error correction the failure rate drops off dramatically. It actually drops off
double exponentially. The bad is that the gate time increased exponentially, and the number of
qubits and the gates also increased exponentially. So I get this double exponential win so it's
efficient. But this slowing in time and increasing in circuit size is kind of a problem.
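That trade-off can be sketched numerically. The constants below are illustrative assumptions (threshold form of the concatenation theorem, the talk's optimistic physical error rate, and a nominal per-level slowdown), not the paper's exact values:

```python
# Illustrative concatenated-code scaling: logical error falls doubly
# exponentially in level L, while gate time and qubit count grow exponentially.
P_TH = 1e-4        # assumed threshold error rate
P = 1e-7           # assumed physical gate error (the optimistic value quoted)
T0 = 10e-6         # physical gate time, 10 microseconds

def logical_error(level):
    """Threshold-theorem form: p_L = p_th * (p / p_th)^(2^L)."""
    return P_TH * (P / P_TH) ** (2 ** level)

def logical_gate_time(level, slowdown=10):   # assumed per-level slowdown
    return T0 * slowdown ** level

def physical_qubits(level):
    return 7 ** level    # Steane: each level encodes 1 qubit into 7

for L in range(4):
    print(L, logical_error(L), logical_gate_time(L), physical_qubits(L))
```

Each extra level squares the (scaled) error rate but multiplies the time and qubit count by a constant, which is exactly the good news and bad news described.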
So now in a second I'm going to show you a graph with too much information on it. So to get us
set for that graph, here's one with just a triangle.
[laughter]
What I’m going to do is I'm going to think about any computation in terms of the number of logical
computational steps -- So there might be many parallel gates happening in that one time step but
I'm just interested in sort of block cycles if you will -- and then, the number of logical qubits. And
then, because I'm interested in error correction what I want to say is the probability of any gate
failing on any qubit is less than this product, right, so my circuit size basically. So, you know, I
check. If I don't do any error correction, it gives me a line which gives me a space of algorithms
that I can complete without failing.
If I go to level one, you know, the line moves. It's a large space. And I can imagine just moving
this line up until the triangle which represents the algorithm I want to complete that is beneath this
line and I can get the answer with some probability.
That's the plan. Okay. Now at this point I can draw a line here which is the total number of logical
qubits required to do this algorithm. All right, so now here's the too much information. We'll try to
go through it really slowly.
Tzvetan Metodi and Fred Chong's ISCA paper, what they were interested in was factoring. So
this block here is the number of logical qubits and computation time required to factor a 1024-bit
number. Now imagine -- It's kind of the opposite of Peter's thing about the spies taking you away.
What I like to think about is in terms of vacuum tube computers, right, the spies built them during
World War II and then afterwards, you know, gave them to scientists to do what they wished. So
imagine the spies have secretly built this 1024-qubit computer and then, they come and they give
it to you. And they say, "Hey, we build this thing. We've cracked all the codes. You can now use
it." What could you do with it in terms of a quantum simulation?
So we picked something we know the answer to which is the Transverse Ising Model. And we
looked at it for a series of qubit sizes: 50, 100, and 150. And we calculated both the number of
qubits required and the computational time steps. Now the first thing, going this direction what we
have is that the n is increasing and for n it is incredibly efficient. What it's not efficient for is the
precision of which I want to know the energy. Right? It's the same problem: that I basically have
to wait a long time to get a good energy. Now on top of that these diagonal lines correspond to
these levels of error correction. So this diagonal line here is the level 2 error correction. This is
level 1. This is level 3. So this block here is all level 3. And to me that is the extra level of
nightmare. So here at level 2 we have an operation time which is a quarter of a second. For
factoring you still win because you can factor this 1024-bit number in 11 days and everything is
great.
But if you look down here, you see that we can calculate to a precision of, you know, somewhere
between five and ten bits the Transverse Ising model which is not so promising. And when we get
to a higher precision number out here, we end up in level 3 in which the clock speed slows down
to basically one operation per ten seconds. Yeah?
>>: Let me just make sure I understand the scale. So on the Y axis is logical qubits.
>> Ken Brown: That's right, logical qubits.
>>: And in order to calculate the step function where you go to [inaudible] concatenation in the Steane code.
>> Ken Brown: Yeah.
>>: You're inputting fidelities that come from ion trap...
>> Ken Brown: That's right.
>>: What numbers are you using?
>> Ken Brown: So we're using these numbers here. So we're assuming really good gate failure
rates, 10 to minus 7.
>>: So the fidelity is 1 minus 10 to the minus...
>> Ken Brown: 7. That's right.
>>: Oh, and is that assumption used for both one- and two-qubit gates?
>> Ken Brown: That's right. That's right.
>>: [inaudible]...
>> Ken Brown: That's right. That's right. Which is...
>>: Well, what are the current numbers in ion traps for those two types of gates?
>> Ken Brown: So for one-qubit gates, it's 10 to the minus 6 now I believe.
>>: Okay.
>> Ken Brown: Two-qubit gates are 10 to the minus 3.
>>: I see.
>> Ken Brown: So there's a big -- And also -- So that's a big gap right now. I would say in real ion
trap quantum computers our real challenge right now is better two-qubit gates.
>>: Okay. And what would that do if you couldn't get the 10 to the minus 7, two-qubit gates? How
would that affect the step function? You kept the 10...
>> Ken Brown: Yeah.
>>: ...to the minus 5?
>> Ken Brown: Yeah, so the problem is -- Right, so the error out is --.
[silence]
>>: Well, that's a [inaudible].
>> Ken Brown: Right. So but...
>>: I guess I was just wondering if...
>> Ken Brown: So...
>>: ...[inaudible] minus 5 how many layers of concatenation would you need?
>> Ken Brown: Well, so that's the problem. So from Krysta's earlier work we know that the
threshold when you have a logical architecture is not as good as you think. So this number is
probably about 10 to the minus 6 for the Steane code. So at the end I'm going to talk about
surface codes. So honestly I think we've got to move...
>>: Oh, okay.
>> Ken Brown: ...to surface codes.
>>: Okay, so Steane code is really not the right thing to --?
>> Ken Brown: Yeah. So for the Steane code, because we're below threshold, we gain
something. Right? But if we -- But I guess I want to emphasize about this even asymptotic scaling
is that this is a great win if you're far from threshold. But if you're right at the limit, you know, you
win but it's bad. So what would happen is these would basically fall off faster. Right? The area
that you could do level 1, level 2 would -- All of this would shift to the left.
Now what I want to point out is that my optimism comes from the fact that Shor's algorithm,
there's just been a ton of work. So if you implement it in a kind of naïve, simplest method with
smallest number of qubits, it takes forever. But if you throw in more qubits basically to increase
the parallelism, you can shift, right, from something you can't do at level 4 to something you can
do at level 2. And I think that area of quantum chemistry calculations, there's still a lot of space
there.
All right. So this is just -- Okay, so we didn't do this -- Sometimes people think, "Well, how could
it be so bad for the Ising Model?" And so I want to point out that we didn't do this in a naïve way. It
turns out that you can do something like a fan-out if you're preparing Cat states because the Ising
Model pretty much completely commutes. You can do everything in parallel so there's not this
N-to-the-4 scaling. Right? So the times are already bad even with no N-to-the-4 scaling; we have
order-1 scaling.
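The parallelism being claimed is easy to exhibit: every ZZ term of the transverse Ising chain commutes with every other ZZ term, so the bonds can be packed into a constant number of parallel layers regardless of N. A minimal sketch:

```python
# Every ZZ term in the transverse Ising model commutes with every other ZZ
# term, so the diagonal part needs no Trotter ordering, and terms on disjoint
# bonds can be applied simultaneously.
N = 6
zz_terms = [(i, i + 1) for i in range(N - 1)]   # nearest-neighbour chain

# schedule the bonds into two parallel layers: even bonds, then odd bonds
layers = [[t for t in zz_terms if t[0] % 2 == 0],
          [t for t in zz_terms if t[0] % 2 == 1]]

# within a layer no qubit appears twice, so each layer runs in one time step
for layer in layers:
    used = [q for bond in layer for q in bond]
    assert len(used) == len(set(used))

print(layers)   # always two layers, whatever N is: order-1 circuit depth
```

Two layers suffice for any chain length, which is the order-1 scaling referred to, in contrast to the serialized Jordan-Wigner ladders.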
The other thing is there's a lot of room into how do you make an arbitrary rotation from a finite set
of rotations? So quantum error correction forces you to have a finite set of rotations and that
leads into those large numbers. And so the way we ended up calculating that was we assumed
that the Trotter Error had to be less than the bits of phase precision and the Solovay-Kitaev Error
had to be less than the Trotter Error. And from that, we just calculated error.
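That error-budgeting chain can be sketched as follows. The scaling exponents here are generic assumptions (first-order Trotter, a commonly quoted Solovay-Kitaev exponent), not the talk's exact values:

```python
import math

# Toy error budget in the spirit described: Trotter error below the desired
# phase precision, Solovay-Kitaev error below the per-rotation Trotter budget.
def budget(bits_of_precision, t=1.0, h_norm=10.0):
    eps = 2.0 ** (-bits_of_precision)                   # target phase error
    trotter_steps = math.ceil((h_norm * t) ** 2 / eps)  # 1st-order Trotter
    eps_sk = eps / trotter_steps    # each synthesized rotation's error budget
    sk_gates = math.ceil(math.log(1 / eps_sk) ** 3.97)  # assumed SK scaling
    return trotter_steps, sk_gates

print(budget(10))
print(budget(20))   # each extra bit of precision roughly doubles the steps
```

The compounding is the point: more bits of precision means more Trotter steps, and each step then needs a more accurate, hence longer, gate decomposition.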
In terms of the architecture what was interesting was there was sort of a natural set up in which
most of the architecture is a block which is preparing these T-states all the time because we're
constantly doing these rotations and we want to be ready. So half of the whole architecture is just
preparing these states so we can do the rotations on demand.
A quarter are these ancillae which are used to make this Cat state to do this fan-out and try to solve
our communication problem. And then finally, the last quarter is the quantum bits that are the
circuit. So an idea of the overhead.
>>: [inaudible]?
>> Ken Brown: Yeah.
>>: What [inaudible] by the time required to do the Solovay-Kitaev algorithm? Because...
>> Ken Brown: Almost completely. So...
>>: Because there's, you know, an inherent inefficiency there that's a polynomial problem that's
[inaudible]...
>> Ken Brown: Yeah.
>>: ...versus [inaudible]. [inaudible] the theoretical density?
>> Ken Brown: Yeah.
>>: The right answer is sort of there at [inaudible] -- or square root of the y [inaudible]?
>> Ken Brown: Right. Right.
>>: But it's just hard to find it.
>> Ken Brown: That's right.
>>: So if you had a better algorithm for that that would be very significant.
>> Ken Brown: That would be huge. So as an example, you guys have this nice algorithm for
basically not increasing the number of qubits at all but, relative to running the normal Solovay-Kitaev algorithm that everybody uses, it's like at least three orders of magnitude better. Yeah, so
this paper with Krysta and Bocharov, it's great. So it shifts all of these numbers this direction. And
then if we look at this Cody Jones paper, it reduced the time even more but at the cost of the
huge increase in qubits. And this I'm being a little bit generous to them. I haven't yet gone through
[inaudible] paper on like where the...
>> Krysta Svore: We have a new method that's somewhere in between -- it decreases the computational time
better than Cody Jones and uses fewer qubits than Cody Jones.
>> Ken Brown: Great. Great.
>> Krysta Svore: [inaudible].
>> Ken Brown: Yeah, so this one is pretty interesting because in this particular example this one
stays -- it helps quite a bit but we're still in this level 3 error correction which we don't really want
to be at. This one actually takes us to a point we can't do. But if I shift down in my precision
required to just 20 bits then, you know, they both work. And there's some argument about
whether time is more valuable to you or qubits are more valuable to you. And that's actually quite
nice. So I think in the context of a whole algorithm, it might be that the type of decomposition you
use will depend on that algorithm and kind of where you are relative to the space for error
correction.
>> Krysta Svore: [inaudible] have a question. Where are your qubits? In our paper you need a lot
of additional resource states.
>> Ken Brown: That's right. That's right.
>> Krysta Svore: So if you have that ability to place them in the architecture [inaudible] that might
be a better way to go.
>> Ken Brown: Yes. Yes, I agree. And I think there should be some similar method for solving
this N to the 4 problem. Right? That if we can somehow pay in qubits, maybe doubling the
space to take care of the parity and whatever, to shrink everything down is, I think, the good
direction.
All right. So just in terms of a little bit of update on ion trap stuff, one thing that we've been
thinking about is actually having instead of a huge chip with all of these ions, to have smaller
chips and then a few of these ions photon-coupled where again we use basically post-selection to
try to generate EPR pairs. Most of the time it fails but when it succeeds, we know it succeeds.
Right? We have no false positives. And what's great is that kind of architecture has a really
natural mapping onto this same idea of a quantum logic array. It's just now these EPR channels
aren't actually channels in the computer; they're channels that happen through this photon
interconnect switch.
All right. So finally -- I'm going too fast. So I want to just talk about surface codes just a tiny bit. So
when I was a grad student, Michael came and talked at MSRI -- the "misery institute" -- up at Berkeley.
Whatever. And you were wearing this fringe leather jacket. So unfortunately I think I've only seen
you one time since then, and I always think of you in this fringe leather jacket. Like that's my
vision. [laughter]
>>: I think that was the last time for that jacket.
[laughter]
>> Ken Brown: But you were talking about topological quantum computing and you had all of
these pants like coming together. And at the time, like, that just seemed too far out to me.
>>: Well, dimensional fringes were a special touch.
>> Ken Brown: That's right.
>>: [inaudible] is Berkeley 1968.
[laughter]
>> Ken Brown: That's perfect. So this paper by Raussendorf and Harrington and the followup
paper by Raussendorf, Harrington and Goyal, where they show that you don't really need a
topological computer. But if you just take that idea and use it to make a code -- And they show
how to make that code -- the thresholds become around one percent.
Now what happened is I think experimentalists initially were hesitant about this because like all
good error correcting code people, when they show the overhead cost, they show the overhead
cost right at threshold. And the cost right at threshold is ridiculous. But if you could get down to
these 10 to the minus 3 or even like 10 to the minus 5 or 6 then the overhead cost relative to
concatenated codes, I believe, is better. And I will not show that now but I am -- If you actually
look at an algorithm of any scale, it wins.
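That overhead claim can be made concrete with the standard back-of-envelope surface-code model. The constants here are assumptions for illustration, not numbers from the Raussendorf-Harrington papers:

```python
# Rough surface-code overhead model: logical error falls off exponentially in
# the code distance d, p_L(d) ~ 0.1 * (p / p_th)^((d+1)/2), with roughly
# 2*d^2 physical qubits per logical qubit. All constants are assumptions.
P_TH = 1e-2    # assumed ~1% threshold

def distance_needed(p, target):
    """Smallest odd distance whose logical error rate meets the target."""
    d = 3
    while 0.1 * (p / P_TH) ** ((d + 1) / 2) > target:
        d += 2
    return d

def qubits_per_logical(d):
    return 2 * d * d

for p in (1e-3, 1e-5):
    d = distance_needed(p, 1e-15)
    print(p, d, qubits_per_logical(d))
```

Right at threshold the required distance (and the qubit overhead) blows up, but a physical error rate of 10 to the minus 5 needs a far smaller distance than 10 to the minus 3, which is the sense in which the overhead becomes competitive well below threshold.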
So the basic idea is they're going to be a surface code, we think of it as a surface abstractly. It's
going to have different kinds of defects which we can think of as primal and dual. We can think
about the error correcting -- Or sorry, we can think about the quantum circuit for the Clifford gates
as being braidings of these defects in time. You can think about this all being in space but
basically your computer lives at this point in time and you shift these defects around each other.
In the same way you have the same problem as always which is you have to bring in these non-
Clifford gates somehow. And we do that by generating a patch, using that patch to then generate
these magic states doing teleportation like usual.
And yeah, so I think the surface code people are really enamored by making these giant braids.
They're like, "Yeah, we can twist all this stuff together and get this huge ball of spaghetti." But
what I like is you can actually make a really natural mapping onto the architectures we already
know. And the way you do that is you take two of these primal holes to serve as your qubit, and
you assume they don't move anywhere. You force them to be fixed in some location.
Now in this area they may wrap around each other to do single qubit gates. But then, from this
previous slide, which is just a teleportation scheme, I can use the dual defects as these EPR channels floating around. Now I am sure this is not the optimal way to like minimize space for a
surface code, but in terms of like trying to calculate resources and actually building a device
where pretty much all of the device is just doing the same thing all of the time -- Controllers are
easier -- seems pretty promising to me and pretty nice.
Now the thing which is missing here is this ancillary block is going to be the block I need to
generate these magic states in order to do these non-Clifford gates. And so there'll be some like
extra huge chunk here probably around each qubit to take care of it.
So just in conclusion: from my perspective, I think the No Cloning Theorem leads to a pretty
big overhead for quantum computers, both in terms of quantum error correction, which people
have thought a lot about, but also in terms of communication, because I have to somehow carry
the information with me wherever I go.
What's nice is we can solve that to some degree through teleportation: these entangled pairs,
at the cost of, you know, preparing them beforehand, effectively create a computer where
everything is completely connected. Topological codes can, without too much work, be mapped to
current architecture ideas. But then, this again kind of points to Jarrod's thing about, you know,
I'm building a totally universal -- You know, so far I've only thought about pretty much totally
universal ideas. Can I build some either topological code specific architecture or even
concatenated code architecture which is built to be optimal for these quantum chemistry
problems? Like what is the right underlying architecture for Jordan-Wigner? Bravyi-Kitaev?
And that's that. And so this is my group. So in addition to the quantum computing stuff, again, I
work on these cold molecular ions. You can find out more about us here. That's all. Thanks.
[applause]
Yeah?
>>: Why does your group have to wear ties?
>> Ken Brown: Okay. [laughter] So we had a lot of group pictures where everybody is like
standing on a staircase. Graham would always wear this Fawlty Towers Don't-Mention-The-War
shirt. And JB was like, "I'm sick and tired of these photos." And he's like, "Next photo I just want
everybody to wear ties." And we're like, "Okay." So that's it. But normally nobody wears ties.
Yeah.
>>: Are you going to do an addition to that slide, the too-busy slide, showing where your surface code
implementation of that architecture would come out?
>> Ken Brown: Yeah. I need to do that. I haven't done that yet. So one thing, which is nice and
it's very similar to, actually, [inaudible] and Krysta's paper where they talk about when you do
Solovay-Kitaev in this other way there're no longer steps. It's smooth. And so what's nice about
the concatenated code is it's also smooth more or less.
>>: Surface code....
>> Ken Brown: Surface code. Yes. Sorry. Surface code is more or less smooth. There're a little
bit of steps due to the distillation is still concatenated. But, yeah, I would like to remake this part.
Yeah?
>>: So what are the technological difficulties with ion trap [inaudible]?
>> Ken Brown: So it...
>>: [inaudible].
>> Ken Brown: So we have two problems. So one is we're trying to make sure we can scale up.
And I would say that our -- Yeah...
>>: Does that mean reducing the size of elements?
>> Ken Brown: It means showing that we can build elements that we can then put together
without any extra complication. Right? So that's one problem. And I think we're making good
progress there. But our key challenge has been: the best gates people do involve these lasers.
And so then you've got to think about laser delivery to the whole trap. And there's been some
recent work by Dave Wineland's group looking at actually switching to just microwave gates. And
so you actually fabricate these electrodes that have a little bit of magnetic field. And previously
people weren't sure if you could do two-qubit gates because the way the two-qubit gates work is
the ions repel each other and so they have some shared motion. You send in a photon which
knocks this ion, gets the other ion to move, stop it by sending in a photon on this ion. And so the
microwave basically because the wavelength is too long cannot really create that force. But it can
if you add in a little bit of magnetic field gradient. And people are pushing that way, and hopefully
those will lead to really good two-qubit gates.
What is nice about ion traps is, again, one-qubit gates are really great right now. Measurement is
also really excellent. It's this fluorescence detection. It's very easy. I think it's -- The record now is
five nines of measurement fidelity. So I think it's -- Yeah. Scalability, two-qubit gates.
>>: A couple years ago I was hearing about problems with like heating of the [inaudible] sources.
>> Ken Brown: Yeah.
>>: And attempting to scale down to ions on a chip sort of model.
>> Ken Brown: Yeah, so fundamentally you would assume that for perfect devices the heating
should go like one over the distance squared. But in practice it goes like one over distance to the
fourth. People have been able to suppress it by using cryogenically cooled traps, which is a bit of
a bummer because before I thought one of our big advantages was we didn't have to cool
everything. But one thing they've shown recently is that basically ion trap quantum computer
people are bad surface scientists. And it turns out that basically we had spent all this time
fabricating these traps in clean rooms, and we would take it in air and just walk it over to
the trap and put it in. And on the way it basically picks up a lot of carbon. And so Dave Pappas at
Boulder has shown that if you come in and you clean off that carbon, the anomalous heating goes
away.
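The distance scaling Ken mentions is easy to see numerically. This is a minimal sketch assuming an ideal 1/d² law versus the empirically observed 1/d⁴ anomalous-heating law; the trap dimensions are hypothetical, chosen only to illustrate the penalty of miniaturization.

```python
# Illustrates why anomalous heating punishes trap miniaturization:
# ideal electric-field noise predicts heating ~ 1/d^2, but measured
# traps show ~ 1/d^4. The distances below are illustrative only.

def heating_ratio(d_old_um, d_new_um, exponent):
    """Factor by which heating grows when shrinking the ion-electrode distance."""
    return (d_old_um / d_new_um) ** exponent

# Shrink the ion-electrode distance 10x, e.g. 500 um -> 50 um.
ideal = heating_ratio(500, 50, 2)      # 1/d^2: 100x worse
anomalous = heating_ratio(500, 50, 4)  # 1/d^4: 10,000x worse
print(f"ideal 1/d^2: {ideal:.0f}x, anomalous 1/d^4: {anomalous:.0f}x")
```

So a 10x miniaturization costs an extra factor of 100 in heating beyond what ideal surfaces would predict, which is why the surface-cleaning result matters.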
The downside is even at 10 to the minus [inaudible], that carbon will come back. So you need to
think about -- So you have plenty of time to do a computation, because it will still take a few
days for the carbon to come back, but it's not permanently solved. Probably you could solve it
permanently if you did the cleaning plus the cold; then it would never migrate back. But, yeah, I'm very bullish
on ion traps. I'm also very bullish on superconductors. Those are my two favorites. Yeah?
>>: So the heating doesn't work on -- But like can you cover it with something [inaudible] rather
than trying to keep it clean? Can you cover it [inaudible]...
>> Ken Brown: You just get -- The dirt would still get on top of it. But that's the...
>>: [inaudible] cover.
>> Ken Brown: On top of the cover, yeah. Because it's very, very little dirt. But it's just a few
carbon atoms covering this more or less smooth metal surface, and that creates a little bit of
voltage fluctuations which leads to the heating.
>>: But what I mean is like if you covered it with like a very perfect material like graphene or
something, would that protect it?
>> Ken Brown: Yeah, so people have thought about graphene because they think that maybe
things will stick less well to graphene than they stick to metals. The only graphene trap that's
been tested so far, they somehow in the process just got a big piece of dust on it. And so it's --
Like you can see in the laser scatter this big piece of dust. And of course, the heating rate was
terrible. So people are thinking about it, but in some sense, you know, stuff will still stick. And
your graphene would have to be really perfect.
>> Krysta Svore: So how close are we to moving from 10 to the minus 3 [inaudible] in the ion trap to 10 to
the minus 4, 10 to the minus 5? Like how plausible, or how quickly do you think that could --?
>> Ken Brown: So I think it depends on this continued fabrication of these magnetic gradients
with microwaves. If that continues to work well, it's nice. The problem with the way we normally
deal with the lasers is you do this [inaudible] transition, and there are always some spontaneous
emissions. So in principle, if you had infinite laser power, you could solve it. And this paper by --
Yeah, anyway, this 2005 paper by Ozeri -- this number isn't a real number, but it comes from
Ozeri's "what is the physical limit of this?" But unfortunately, it just seems like it takes too
much laser power. But what's nice about 10 to the minus 4 is for surface codes you start to get
low enough below threshold that it could work.
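The "right on the edge" point can be checked with a back-of-the-envelope sketch, using a standard heuristic for the logical error rate, p_L ≈ (p/p_th)^((d+1)/2), with an assumed 1% threshold; the formula and threshold are illustrative, not from the talk.

```python
# Break-even check for a small surface code: at physical error rate p,
# does a distance-3 code beat an unprotected qubit? Uses the heuristic
# p_L ~ (p / p_th) ** ((d + 1) / 2) with an assumed threshold p_th = 1e-2.

P_TH = 1e-2  # assumed threshold; illustrative

def logical_error_rate(p, d):
    """Heuristic logical error rate for a distance-d surface code."""
    return (p / P_TH) ** ((d + 1) // 2)

for p in (1e-3, 1e-4, 1e-5):
    p_l = logical_error_rate(p, 3)
    verdict = "beats" if p_l < p else "does not beat"
    print(f"p={p:.0e}: distance-3 p_L ~ {p_l:.0e} ({verdict} a bare qubit)")
```

Under these assumptions, at p = 10^-4 the distance-3 logical rate comes out at roughly the same order as the physical rate, i.e. right at break-even, while at 10^-5 the code clearly helps.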
>> Krysta Svore: But, yeah, I guess you're right on the edge.
>> Ken Brown: Right. Right.
>> Krysta Svore: Well, depending on what distance code you're using. But it seems with, you
know, the 10 to the minus 4 if you're using a distance 3 surface code, you know, you're right on
the edge of getting some correction off of these from memory. And you...
>> Ken Brown: Right. Right.
>> Krysta Svore: So it'd be -- I'm just thinking, you know, for near-term experimental
demonstration of a surface code, ion traps are pretty close.
>> Ken Brown: That's right. That's right.
>> Krysta Svore: [inaudible] you could get just a little bit further that'd be --.
>> Ken Brown: The downside of the surface code, and the other reason why I think people were kind
of reluctant about it, is the cost of one gate on a surface code is a lot. But the cost of many gates
is not so bad, right?
>> Krysta Svore: Yeah.
>> Ken Brown: And that scaling, it's hard to do the first experiment.
>> Krysta Svore: Well, my other question is if you guys have studied how the threshold
changes? You know, we always study the threshold for the surface code for a memory.
>> Ken Brown: Yeah. So this...
>> Krysta Svore: So [inaudible] operation, do you think the threshold is going to change?
>> Ken Brown: So the gate...
>> Krysta Svore: [inaudible]?
>> Ken Brown: So the T-gate threshold -- I think 0.7% is the latest. Which isn't bad; it's good. Yeah.
>> Krysta Svore: Any other questions? Okay, let's thank -- Oh, let's have Peter ask a question.
>> Peter Love: Sorry. Just to go back to the cryogenics, if that's really the only problem that
seems -- Because one of the things that's happened in the last five years is now all the systems
have become cryogenic so you don't have a ton of infrastructure to [inaudible]...
>> Ken Brown: Oh, yeah. No, it's way nicer. There's some -- Yeah, I mean we have a cryogenic
system. We just have these like terrible problems of like the pump vibrates, trap vibrates. So now
if we lock to the pump, everything is stable. But if we just try to run without locking to the pump,
it's like -- It looks like there're two ions; there's only one ion.
[laughter]
No, no. But the weird thing is -- I guess I can say this because I'm just making fun of my own
work. But when we first did a cryogenic ion trap, we got a huge gain in the heating rate, huge
gain, like four orders of magnitude. But it's partly because the way we were fabricating them was
not that clean. Now that we fabricate them more cleanly, when we use the cryogenic trap we get,
you know, a much more modest gain. And so it seems like, to take the next step, we need this next
level of cleaning first, followed by the cryo stuff.
>> Krysta Svore: Let's thank Ken again.
[applause]