>> Michael Freedman: So welcome, Microsoft regulars and our new visitors who
have come from, in some cases, great distances. So thank you for being here
and thank you for your willingness to teach us in a very rapid tutorial that
Matthias and Krysta have scheduled about your respective subjects.
I think it's probably correct to say that Microsoft is the largest
curiosity-driven research institute in the world. Microsoft Research. And
once our curiosity has taken us some place, it's our tradition to understand
the real world applications and see what's out there.
And in the case of quantum computing, we do have a very strong and longstanding
history of curiosity research in this area for over ten years. And in
particular, with this project, we're very interested in the real world
applications. How does the theoretical acceleration of certain problems
translate into real world applications?
So in a nutshell, what will quantum computers be able to do best if and when
they're built? Now, many of you come from disciplines where perhaps you don't
think of yourselves as quantum computer scientists, but I want to reassure you
that anyone who does quantum mechanics professionally for a living is in a very
good position to discover and design quantum algorithms.
I think you do not need to be familiar with the formalism of the quantum
circuit model, which the next talk will explain. So I don't think you need to
think in
terms of pictures like this, for example. It's too small to read, and I don't
read this language anyway.
And I'd like to give you some evidence for this, and the evidence is that very
interesting ideas, you might call them breakthroughs in thinking about quantum
algorithms, have historically, in many instances, come with their own model.
People who know quantum mechanics and understand some aspect of it think of
something that quantum mechanics can do. And later on, people translate that
into the quantum circuit model, into, you know, pictures like that.
But the initial inspiration may come with its own design. For example, the
idea of using quantum computers to solve logical problems, satisfiability
problems, max-cut problems, things like that. Problems that are typically NP
hard, we don't expect to be able to solve them in polynomial time, but it's
possible that quantum computers can give some interesting speed-up, perhaps a
square root in the exponential, who knows, over classical algorithms.
And the idea here, introduced initially by Farhi, Goldstone and Gutmann, was,
as most of you know, to initialize a Hamiltonian so that the ground state
solved a rather trivial problem, like finding the ground state of a magnet and
then evolving the Hamiltonian slowly so that the solution tracked in the ground
state.
It's a problematic proposal, and how well it will work has to do with the size
of the spectral gaps, but it's an idea that just required knowing the
adiabatic theorem. It didn't require knowing what a quantum circuit model was.
Another example from the same group, actually, is using scattering theory. You
can think of it as a quantum rather than a random walk in a tree. You send wave
packets, perhaps, of light through some kind of optical tree. And depending on
how you've designed the tree, the transmission probabilities can tell you
something about solutions to game theoretic problems, such as values of a NAND
tree.
And, again, this was within a week translated into the quantum circuit model,
but the idea just came from knowing scattering theory.
And a third example is from Aaronson's work, actually with a graduate
student, Arkhipov, whose name I should have included. And, you know, this is
based on the old joke that in quantum mechanics, bosons have a much harder time
of it than fermions. Because for free fermion systems, the ground state will be
a Slater determinant. And we know from a computer science point of view that
determinants are very easy to compute in polynomial time, and it's, you know,
one of the great achievements in complexity theory from 1979, due to Valiant,
that permanents, which are the positive version of determinants, this is where
all the alternating signs are just forgotten and become plus signs, that those
are #P-hard to compute.
So the fact that the ground state of free bosonic systems will tell you
something about permanents is kind of a wedge into a new way of thinking about
quantum computing and new architecture. It's not a complete slam dunk from
that statement, because the thing you have to always think about in quantum
complexity is that quantum computers are probabilistic machines in terms of
their output. So just knowing that an exact solution is extremely difficult
does not necessarily tell you that an approximate solution conveys a lot of
computational power. So that's why this paper of Aaronson and Arkhipov is 95
pages long instead of two. You know, you have to analyze the error.
And this Hong-Ou-Mandel dip, this is like a two-photon experiment that
originally showed the correlations that the machine they proposed would
exploit. So I'm actually not saying, and I wouldn't assert, that any of these
three ideas are practical. I mean, I don't know. I think they're all
interesting, open questions. But it's just evidence that if you know quantum
mechanics and you have problems to solve, you may think of ways of using
quantum mechanics and they may really be out in left field with respect to the
way quantum computer scientists are trained to write down algorithms.
So there's a big coterie of people who will turn your idea into an algorithm if
you have a good idea.
So roughly speaking, the proposed applications of quantum computers seem to
lie in three large baskets, you might say, of increasing murkiness. So the
sharp and clear applications, the very mathematical ones, finding units in a
number field, factoring numbers, discrete logarithms, things like this, these
tend to be algorithms where you can analyze their computational costs and prove
fairly sharp bounds on their efficiency. And it's not the kind of thing where
you need a lot of real world experience to learn how well these things work.
Now, people have proposed using quantum computers for simulation and design of
quantum mechanical systems, nanostructures, maybe going way beyond the Hubbard
models to studying cuprates, things like that. And that's the kind of thing
where one really has to start doing the numbers and seeing whether it's
realistic or farfetched, given the resources we're likely to have in quantum
computing.
And I'd say that maybe the third category is probably closest to the kinds of
applications where you'd expect experience to be essential, rather than theory.
So to figure out how important quantum computers are going to be in these
areas, like general optimization and machine learning problems, it's probable
that we need a quantum computer and we have to interact with it and learn
gradually. It lacks sort of the mathematical crispness, in my opinion. But
that's not to say that maybe some insights can't be gained early.
So I'd like to sort of finish by showing you two slides I recently gave a
public lecture last month at the [indiscernible] institute in Brussels, and
this is sort of typical of the way public lectures on quantum computing either
begin or end. In this case, this was the way mine ended. So it's full of all
these kind of bold assertions. Somewhat shaky. Some caveats about, you know,
what things we know how to do and where we might look for areas that classical
methods have trouble with.
Then I went on to list even more speculative applications and then I got a
little modest and said we don't really know. But my goal for this meeting is
to get past those first sentences, at least in one narrow area. I'll be very
happy if, after we're done tomorrow, I can say sentences two through ten about
one of those subjects. Thank you.
>> Krysta Svore: Okay. Well, thank you, everyone for coming. And thanks,
Mike, for our motivation and introduction for our meeting. I'm Krysta Svore,
and I'm a member of the QuArC group, the quantum architectures and computation
group here at MSR in Redmond. And because we're coming from different
backgrounds, some of us are more on the classical side. Some of us are more on
the quantum computing side. I thought I'd give just a very brief -- this is a
15-minute introduction to the quantum circuit model and how we think about
quantum algorithms in that language. And a really brief introduction to phase
estimation, which is one of the key hammers in our toolbox that we would use
for quantum algorithms, potentially, in this space of materials and chemistry.
So I wanted to start just by introducing you to our group members. Sorry,
Alan. I couldn't find a picture last night. So we have, we're a new group.
We're about a year old here in MSR Redmond. Obviously, I know the Microsoft
people know us. But for those of you who are new and external to Microsoft, we
have a new group here in quantum computation, and we're a sister group to
Station Q. We focus more on the computer science, the quantum circuits, quantum
algorithms, and Station Q focuses more on the theoretical physics. So I think
we're a good team.
And here's our members. So if you see them floating around. The bottom group
of folks are consultants and our key collaborators.
And our group focuses on designing quantum algorithms, so that's definitely one
of the reasons for having this meeting. And we like to think about the
implementation of small, medium and large-scale quantum computers. So what
would we do with hundreds of qubits, thousands, and then maybe millions.
And then we think, as Mike said, you don't need to know the quantum circuit
model or how it will be implemented on the quantum device to design a quantum
algorithm. So our group thinks up the quantum algorithms, but then we think
about taking an algorithmic, you know, idea and actually converting it down to
the device specifics.
So we think about programming languages, optimizing the quantum circuit,
converting the Hamiltonian into a quantum circuit that's going to be efficient,
and questions like this.
And then finally, in support of that goal of breaking down the algorithm into
the device components and the device operations, we're building out a full
comprehensive system architecture that will help in designing and programming a
scalable and, of course, fault tolerant quantum computer. And tomorrow,
you'll have a great tutorial on Liquid. And Liquid is our architecture, our
platform for programming and simulating quantum algorithms and quantum circuits
and also Hamiltonians. So that will be tomorrow morning, and Dave Wecker here
will be giving that then.
So in an introduction to that, I'll introduce some of the language and then
he'll show you how you can take this language and these tools and actually use
it to program or simulate something. So to start with, I know all of us know
quantum mechanics. So I'm going to try to just take quantum mechanics and
translate it into what we do in the quantum computer and quantum circuit
model.
So a qubit lives in a two-dimensional complex vector space. It has an inner
product, so it lives in a Hilbert space. And we can think of these two
vectors, for example, as forming an orthonormal basis, which we typically call
the Z basis. And in the quantum circuit model, we typically write everything
in the Z basis. So the state vectors then are just one, zero for the zero
state and then the vector zero, one if we're talking about the one state.
So in general, we can write any qubit, any two-dimensional quantum state, here
as a general linear superposition of these two basis states. And so then the
vector we use is alpha beta, where the alpha and beta are the amplitudes and
alpha and beta are always complex numbers. And, of course, they have to
satisfy the normalization condition.
So then we can think about, if we move to multiple qubits, the size of that
state space. So, for example, for a two qubit state, we have now the
superposition over four possible basis vectors and, again, we have all of their
amplitudes, and we again have to satisfy the normalization condition that the
sum of the amplitudes squared has to equal one.

And then we can, of course, move to a bigger state where we have N qubits
and, again, it's a linear superposition over all of those orthonormal basis
states. And again, it satisfies the normalization condition. I'm sure this is
all review.
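The basis states and the normalization condition can be written out in a few lines of numpy; this is just an illustrative sketch, and the amplitudes are arbitrary example values, not numbers from the talk.

```python
import numpy as np

# Computational (Z) basis states as column vectors.
zero = np.array([1, 0], dtype=complex)   # |0>
one = np.array([0, 1], dtype=complex)    # |1>

# A general qubit alpha|0> + beta|1>; these example amplitudes are
# arbitrary complex numbers chosen so |alpha|^2 + |beta|^2 = 1.
alpha, beta = 0.6, 0.8j
psi = alpha * zero + beta * one

# Normalization condition.
norm = abs(alpha) ** 2 + abs(beta) ** 2
assert np.isclose(norm, 1.0)

# Multiple qubits: tensor (Kronecker) products build the 2^n-dimensional
# basis, e.g. |00> for two qubits is a length-4 vector.
zero_zero = np.kron(zero, zero)
assert zero_zero.shape == (4,)
```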
So then let's think about the quantum evolution of this state, the system. So
from quantum mechanics, we know that the time evolution of a closed system is
going to follow the Schroedinger equation, so here's the Schroedinger equation
we all know and love.
And then we want to talk about how we can express this, and we know that we can
solve Schroedinger's equation for two time points T1, T2. So we want to know
how the state evolves between time T1 and time T2. And so we start, you
know, with our state psi, our quantum state at time T1, and it evolves under a
unitary evolution, a unitary transformation U. And we can see the relation:
if we define U to be E to the minus I H times the time difference over
H bar, then we can see that the unitary evolution of the system matches the
continuous time dynamics of the Schroedinger equation.
So this is where, you know, you come up with the Hamiltonian and then, you
know, we want to actually find what the unitary is. And this can be
challenging: finding an efficient unitary that computes what we want, that
computes this evolution.
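The relation between a Hamiltonian and its time-evolution unitary can be sketched numerically. This is only a toy illustration, with the Pauli X matrix standing in for the Hamiltonian and hbar set to 1, not something you would do on a quantum device.

```python
import numpy as np

# Toy Hamiltonian: the Pauli X matrix.
X = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(H, t):
    """Time-evolution operator exp(-i H t), built by diagonalizing H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

U = evolve(X, t=0.3)

# U is unitary, so it preserves the norm of any state it acts on.
assert np.allclose(U @ U.conj().T, np.eye(2))
```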
So in general, we want to know what these unitaries are and which ones we can
use in practice and on the device. And in general, any unitary can be
written as U equals E to the I K, where K is some Hermitian operator.
So what unitaries are natural to consider when we actually want to build this
device and make efficient quantum algorithms? So here are a few that we use in
practice. Very common. So for single-qubit operators, unitaries that act on
just a single qubit, the typical ones we know, you know, from quantum
mechanics and physics, of course, are the Pauli operators. So the Pauli
gates, you know, in physics, we refer to them often as sigma I, sigma X,
sigma Y, sigma Z. In the quantum circuit language, we often just call them I,
X, Y or Z.
And so we define, we have the identity and the X operator. So the X operator
is a NOT; we call it NOT in traditional computer science language. The NOT
operator takes the zero state and converts it to the one state, and takes the
one state and converts it to the zero state. And that's defined here by this
two by two matrix.
And then for the other two Pauli operators, we have sigma Y and sigma Z. Sigma
Z changes the phase, and then sigma Y is a combination of X and Z so it does a
bit flip and a phase flip with, also, this additional phase I. So we use these
all the time in our quantum circuits.
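A quick numpy check of the Pauli gates just described; the identity Y = iXZ captures the "bit flip and a phase flip with this additional phase I" remark.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)    # bit flip (NOT)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # bit flip + phase flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # phase flip

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# X acts as NOT on the basis states.
assert np.allclose(X @ zero, one)
assert np.allclose(X @ one, zero)

# Y combines the bit flip and phase flip, with the extra phase i.
assert np.allclose(Y, 1j * X @ Z)
```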
And then we, of course, have a phase gate. So if we want to think about
applying an arbitrary phase theta to our quantum state, then we can do this
with a rotation about the Z axis by angle theta. So in general, then, the
matrix is defined as one, zero, zero, E to the I theta, where theta is
the angle by which we're rotating around the Z axis.
So we all know the Bloch sphere picture, so here the zero state sits at the
north pole, the one state sits at the south pole and then here's the Z axis, so
when we talk about a phase gate, it's rotating there about the Z axis by angle
theta.
And there's some common Z rotation gates that we use a lot. The first, which
we've already seen, is the Pauli Z operator. This is, in fact, a Z rotation
where theta is the angle pi. And the S gate, which is called the phase gate,
the S gate is one, zero, zero, I; this is just where theta is now pi over two.
So it's again a rotation about Z. And then finally, a very important gate, the
T gate, which we call the pi over 8 gate, even though theta equals pi over 4.
So here, this is a rotation by angle pi over 4 about the Z axis. We call it
the pi over 8 gate because, in this form, one, zero, zero, E to the I pi over
4, if we factor out E to the I pi over 8, then you'll be left with
pi over 8s on the diagonal. So it's called the pi over 8 gate. And this is an
important gate, because to have a universal single qubit gate set, you require
something outside the Clifford group. All of these gates, the Pauli gates, are
in the Clifford group, plus one more gate, the Hadamard gate, is in the
Clifford group. And the Clifford group is not a universal set. So we have to
add an additional gate, and one candidate is the T gate, which allows us to
implement any single qubit unitary in practice.
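The Z rotations just described can be checked directly; the relations T squared = S and S squared = Z follow from the angles pi/4, pi/2 and pi, and factoring out e^{i pi/8} from T shows where the "pi over 8" name comes from.

```python
import numpy as np

def phase_gate(theta):
    """Rotation about the Z axis: diag(1, e^{i theta})."""
    return np.array([[1, 0], [0, np.exp(1j * theta)]], dtype=complex)

Z = phase_gate(np.pi)       # Pauli Z: theta = pi
S = phase_gate(np.pi / 2)   # S (phase) gate: diag(1, i)
T = phase_gate(np.pi / 4)   # the "pi over 8" gate

# T^2 = S and S^2 = Z, as the angles suggest.
assert np.allclose(T @ T, S)
assert np.allclose(S @ S, Z)

# Factoring e^{i pi/8} out of T leaves phases of -pi/8 and +pi/8
# on the diagonal, hence the name.
assert np.allclose(
    T,
    np.exp(1j * np.pi / 8)
    * np.diag([np.exp(-1j * np.pi / 8), np.exp(1j * np.pi / 8)]),
)
```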
So the final gate to mention is the Hadamard gate, and this gate is given by
one, one, one, minus one, with this normalization, and this gate allows you to
create a superposition, so you can take the state zero to zero plus one over
root two and the state one to zero minus one over root two. And now we can
start introducing some quantum circuit diagrams; this is the smallest one so
far.
So normally, we denote the gate as a box, just with the name of the gate inside
of it. And the wire is the qubit. So this represents applying the Hadamard
gate to whatever the state of the wire is, the state psi.
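A small sketch of the Hadamard gate's action, matching the states above:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

plus = H @ zero    # (|0> + |1>) / sqrt(2)
minus = H @ one    # (|0> - |1>) / sqrt(2)
assert np.allclose(plus, (zero + one) / np.sqrt(2))
assert np.allclose(minus, (zero - one) / np.sqrt(2))

# H is its own inverse, so applying it twice undoes the superposition.
assert np.allclose(H @ H, np.eye(2))
```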
So then we can move to two qubit operators, and the most common two qubit
operator is probably the controlled NOT gate. This is a controlled X gate, and
what this gate does is it takes -- so it's a two qubit gate, so we have qubit A
and qubit B, and it takes qubit A and qubit B to A and the XOR of A and B. So
essentially, it says if my first qubit is zero, I don't do anything to my
second qubit. If my first qubit is one, then I'm going to flip, apply the X
operation to my second qubit.
And so the matrix here, you can see that in the lower right corner, we have the
X gate. And in the upper left corner, we have the identity. So in general, if
we want to apply any controlled U, which comes up a lot in the quantum
chemistry algorithms, say a controlled phase gate, then if it's a
single qubit phase, for example, we would have the identity here and that
U or that phase gate sitting in the bottom right. So now we can construct any
controlled unitary operation. And the diagram for the controlled NOT is this
here, where this is the control line. So if this qubit is one, then we apply
this, and that's the symbol for -- well, it's the XOR, which is why we're using
it here, but you also might see it drawn with an X in a box on the bottom
qubit line, which would match this notation here. So this is the notation for
any controlled unitary operator U.
And again, the filled-in circle means that qubit has to be in the state one for
it to be applied.
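The controlled-U construction described here, identity in the upper left block and U in the lower right, is easy to write down; the helper name `controlled` is mine, for illustration.

```python
import numpy as np

def controlled(U):
    """Block matrix [[I, 0], [0, U]]: apply U only when the control is |1>."""
    n = U.shape[0]
    C = np.eye(2 * n, dtype=complex)
    C[n:, n:] = U
    return C

X = np.array([[0, 1], [1, 0]], dtype=complex)
CNOT = controlled(X)

# CNOT maps |a>|b> to |a>|a XOR b> on the computational basis.
for a in (0, 1):
    for b in (0, 1):
        inp = np.zeros(4, dtype=complex)
        inp[2 * a + b] = 1            # basis state |a b>
        out = CNOT @ inp
        assert out[2 * a + (a ^ b)] == 1
```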
Another gate that comes up is the swap gate. The swap gate simply exchanges the
states of the two qubits. So A, B goes to B, A. And this can, for example, be
implemented with a sequence of three controlled NOT gates. And the unitary
is given by this matrix here.
So if you, you know, you can solve this and you can see that if you do three
controlled NOTs, where the middle one is in the opposite direction, then you
will see that you have a swap gate. So feel free to matrix multiply away. You
can solve that.
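The claim that three alternating CNOTs implement a swap can be verified by matrix multiplication, as suggested:

```python
import numpy as np

# CNOT with the control on qubit A (first qubit)...
CNOT_ab = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=complex)
# ...and with the control on qubit B (the "opposite direction").
CNOT_ba = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]], dtype=complex)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Three CNOTs, the middle one reversed, compose to a SWAP.
assert np.allclose(CNOT_ab @ CNOT_ba @ CNOT_ab, SWAP)
```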
And then the notation is often these two Xs with a line in between, as we're
swapping the state of those two qubits. So that's a quick introduction to most
of the quantum gates we use.
And then I just want to go through and introduce phase estimation, because I
think this is going to come up in a lot of, maybe a lot of the talks today.
And so I think we all know this well. But in terms of the circuit notation
which we use, which you're going to see a lot of tomorrow in Dave's talk and
maybe some in Matthias's talk, or maybe not, definitely in Dave's talk, this
will give you an idea of how we think about it in more of a computer science
circuit way.
So this is one of the -- or maybe the most common subroutine. Grover is a
candidate as well, of course. But between Grover and phase estimation, those
are our two main tools for designing quantum algorithms. We'd love to have
more. So we should all be thinking about that, of course, but these are good
things to look at when you're thinking about a quantum algorithm.
So in the phase estimation problem, we have some unitary U, and then we have an
eigenvector psi, eigenvector of U and then the eigenvalue of this is E to the 2
pi I phi. So that's, you know, U psi equals E to the 2 pi I phi psi. And the
question we have when we're doing phase estimation is we don't know this phi
value here. And so we want to say, you know, can we get an estimate on what
phi is and to what precision.
So we want to estimate this value phi. And there's a way, a known circuit to
do this, and there's actually several known circuits. I'm presenting one. And
there are different circuits that don't use some of these gates here that you
can do more classical processing, and we can talk about that separately.
But here, I'm just going to introduce one of the most -- probably the most
standard that most people see in the textbooks. So here's the quantum circuit
for performing phase estimation. So we start with a bunch of qubits all
initialized to the zero state, and then we have our eigenvector psi. So
already, one of the assumptions of the subroutine is you can prepare psi in
some efficient way. That's not always possible, of course.
So we want to think about algorithms where we can actually prepare psi
efficiently. And so then we have a large superposition state here in the
top register. So we often refer to a set of qubits as a register, as we would
in more traditional computer science. So this register is going to be
controlling the application of these unitaries. So what happens is, now these
are in a large superposition, so some of the basis states in this
superposition are going to get the Us applied to them and the others won't.
So then we have a sequence of powers of U, controlled powers of U. And then
we're going to apply the Fourier transform.
So what happens, just at this stage in the computation, after we've performed
all of these controlled U operations, the different qubits, the different wires
here, contain the following states. So at this point, this qubit is in the
state zero plus E to the 2 pi I 2 to the zero phi times one, and so on down to
the 2 to the N minus one phi. So you can see where this is starting to give
you, if you can extract this information about the phase here, you can get
information about the different bits, so you can get different precision or
accuracy on the estimate of phi. So what we're trying to do is get information
about each bit of phi, basically, here.
And how do we get that out? Well, we know another trick: we can use the
quantum Fourier transform; in this case, we use the inverse of the quantum
Fourier transform. And what does that do? So this large box here has a bunch
of gates going on inside of it, which I'm not reviewing in this talk, but it's
a straightforward, efficient quantum circuit. It's polynomial time. Actually,
quadratic.
So what this box does is it takes this state, which is the state of the system
at this point, and it's going to map it to this estimated version of phi, and
then it leaves our state psi, our eigenstate, our eigenvector, it leaves it
alone. So the algorithm --

>>: Can I ask on this slide, what is the little slash with the M next to it?

>> Krysta Svore: Oh, that's saying -- sorry. Yeah, so in the circuit model, we
often do this to denote that there's -- this is an M qubit state. So this
register has M qubits. This register has some other number of qubits based on
what accuracy you need.
>>: [indiscernible]. I think the two are unrelated.
>> Krysta Svore: Yeah, sorry. We have N qubits in this register and M qubits
in this register, in this diagram, yeah. And so if we were to write this out
as an actual algorithm, then for the input, we require a black box or a quantum
circuit that's going to allow us to perform controlled U to the J operations.
So depending on the efficiency of this, this could be an efficient method or
not.
We require the ability to prepare this eigenvector psi, and then we also need
some number where this is going to be set based on how accurate and how often
we want -- so based on two things. How accurate we want our estimate of phi to
be and then, also, how often we want it to succeed.
>>: Just to check, the M in this slide is not the same as the N in the prior
slide?
>> Krysta Svore: Oh, I probably switched notations, sorry.

>>: The N is the thing on the right of two plus one.
>> Krysta Svore: Oh, I think I moved some stuff around, sorry. Okay. And
then I changed a letter again. This should be M, I think. Sorry about that.
Okay. So then the output is an approximation to phi. And then the run time of
this: the QFT takes N squared time, where N is now my N here, plus the cost of
this black box, this oracle. So I'm saying we can apply this, you know,
controlled U to the J. If we can do that efficiently, then that's the cost.
>>:
There's no other [indiscernible].
>> Krysta Svore: Yeah, so you can do a little better than this, I think, in
some implementations. We do have, yes, so you can do an approximate -- well,
you're asking about the [indiscernible].
>>: [indiscernible]. This is the N squared?
>>:
No, no, because the N here is different.
>> Krysta Svore: Oh, right, yeah. Sorry about that. We're in the quantum.
And then let's see. So we have some success probability as well that we use.
The algorithm will succeed with some probability of success, and we can set
that, you know, increase the number of qubits if we want to succeed more often,
for example.
And then the procedure, so as we saw in the circuit, as we write this down in
an algorithm, we would say we start with two quantum registers. This register,
according to this slide, contains N qubits and then psi has M. And then that
means -- after we perform the Hadamard gate, which we saw on the previous
slide, then that takes us to a superposition over all the states, plus our
original eigenvector psi.
And then we undergo this black box, this U to the J.
>>: So when you say it takes [indiscernible] the source of the failure means
when -- so where is the random [indiscernible].
>> Krysta Svore: Oh, so we measure. All quantum algorithms are probabilistic.
So here we're measuring this state. And so there's a chance we're going to
measure the state, the zero. If we measure the zero here, let me go back one
more. Let's say when we measure -- imagine we're measuring this state, for
example. There's a chance we're going to measure zero and we don't get any
information. So it's the measurement that has the probability associated with
it. Maybe I should show it here, actually.
>>: I think the --

>> Krysta Svore: We're measuring this.
>>: I think the thing that's interesting is you have basically, you know, this
real number which you're trying to [indiscernible] and the very last part of
it, like that is a random choice. You pick the wrong one, then
[indiscernible].
>> Krysta Svore: Yeah, so every bit that we're trying to approximate has a
probability associated with it. We're not always exactly in this ring. So
we're not going to get exactly every bit with -- we're not actually always
going to get the zero or one bit exactly. So we have a probability associated
with each bit.
>>: Let me just try to answer the question. I think the error comes from a
mismatch of two discretizations. If the phase phi were exactly K over N,
matching the discretization of the top register, then there would be no source
of error. The problem is you're doing a discrete Fourier transform, trying to
detect a unit complex number which is not necessarily in that group. So the
Fourier transform won't be a complete delta function. And then at the end of
the quantum computation, you sample the Fourier transform with respect to the
norm squared. So you will be off a little bit.
>> Krysta Svore: Right. Yeah, I should have probably put, you know, that
typically, this is some number over N and we're sampling in the cyclic group N.
So it might not be exactly N in the denominator as Mike just pointed out.
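This mismatch of two discretizations can be made concrete: the exact outcome distribution of the top register is computable in closed form, and a phase that lands on the sampling grid is read out with certainty, while a generic phase only gives a peaked distribution. A small numpy sketch (the function name is mine):

```python
import numpy as np

def phase_estimation_distribution(phi, n):
    """Exact outcome distribution of n-bit phase estimation for phase phi.

    After the inverse QFT, the amplitude for outcome k is
    A(k) = (1/2^n) * sum_j exp(2 pi i j (phi - k/2^n)),
    and the probability of measuring k is |A(k)|^2.
    """
    N = 2 ** n
    j = np.arange(N)
    probs = np.empty(N)
    for k in range(N):
        amp = np.exp(2j * np.pi * j * (phi - k / N)).sum() / N
        probs[k] = abs(amp) ** 2
    return probs

# An exactly representable phase is recovered with certainty...
p = phase_estimation_distribution(phi=3 / 8, n=3)
assert np.isclose(p[3], 1.0)

# ...while a generic phase gives a peaked distribution: the likeliest
# outcomes are the grid points bracketing phi, but there is a nonzero
# chance of sampling a value further off.
p = phase_estimation_distribution(phi=0.3, n=3)
assert p.argmax() in (2, 3)          # 2/8 and 3/8 bracket 0.3
assert not np.isclose(p.max(), 1.0)  # no longer deterministic
```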
So after we have the superposition state, then we have to apply this black box,
the U to the J. These are all the controlled U operations that were in the
circuit diagram. And we may or may not be able to apply this efficiently. So
this is something we really have to think about: how to do this.
And I think Matthias and Dave will present some work on how you can approximate
this for different algorithms. In Shor's algorithm, for example, this is
modular exponentiation.
So the output after we apply this black box is this superposition state here;
then we apply, as we saw, the inverse QFT and we get out a sampling of phi,
which we're trying to approximate. So this is a general trick, phase
estimation, that I think we'll hear about a lot during the next two days. So
now I'm going to turn it over to Matthias.
>> Matthias Troyer: Okay. Thank you again. So I want to now tell you why
we're meeting here. And the reason is that we want to turn the theory of this
[indiscernible] application. Because once we build the quantum computer, then
you want to use it to take care of something real.
And so now we want to think about actually coding up something on a future
machine. And I don't like oracles, because I don't know how to build an
oracle. I don't have it, I can't buy it. So we need to build this oracle.
And there, I'm working in high performance computing, so I have to think about
what I actually can compute on a machine. And then I'm running into problems.
But first, I have two jobs. One is I'm a professor of computational physics at
ETH Zurich, and one day a week, I work for Microsoft as a consultant on quantum
computing, and this is kind of a merger of both aspects.
So what can quantum computers be used for? The first thing is factoring. We
know it can be done if we build that big machine. The next thing, when you ask
what else besides factoring would be interesting, is simulating quantum
systems, especially fermion systems, which are hard, and that has been around
for 30 years since Feynman mentioned it first, and many of you worked on these
[indiscernible] algorithms.
So we get the exponential speedup here, which is great, because we use quantum
mechanics to evolve the quantum system. We get an exponential speedup in
memory, because we have qubits instead of [indiscernible] bits, and so the wave
function can be stored in N qubits instead of 2 to the N classical bits. And
applying the operators is also exponentially faster.
So we have a huge speedup, which is good. But I always become cautious when
talking to theoretical computer scientists: just because an algorithm is in P
does not mean I can ever compute it in my lifetime. So let's look at what has
to be done if you want to simulate a material. And let's look at the Coulomb
Hamiltonian, the general one.
We have a [indiscernible] term. We have a one [indiscernible] term. So we have
one term with four operators and one with two. And we want to evolve it. And
I don't have the oracle, so I have to build the oracle. And to build the
oracle, I have to evolve it for some time, and I can do that by splitting the
time into small time steps delta T and using the Trotter scheme to evolve it.
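A minimal numerical illustration of the Trotter scheme, with two non-commuting toy terms (Pauli X and Z standing in for the pieces of a real Hamiltonian): evolving in alternating small steps approximates the exact evolution, with an error that shrinks as the step delta T shrinks.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_h(H, t):
    """exp(-i H t) for a Hermitian H, via eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

t, steps = 1.0, 100
dt = t / steps

exact = expm_h(X + Z, t)
# First-order Trotter: alternate small steps of the two terms.
trotter = np.linalg.matrix_power(expm_h(X, dt) @ expm_h(Z, dt), steps)

# The error is small for small dt (it scales like t*dt for first order).
assert np.linalg.norm(trotter - exact) < 0.05
```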
And now that's what I have to do. So what is the complexity? And this is a
rough estimate. I have on the order of N to the 4 terms when I have N basis
functions. When I run for time T, I have T over delta T Trotter steps, and for
each application, I need on the order of ten or so gate operations. But ten is
the lower bound.
So this is roughly the complexity. And when I'm saying N to the 4, then I'm
getting scared. We know it's hard for classical machines. That's a big
problem.
But they haven't found ways of reducing it.
For quantum machines, you also have something that we have to address. So let me now not focus on how I get the wave function. Let me just focus on this: if I have a wave function, how do I get the energy, or a gap, or something? So let's just look at one aspect: you want to get this energy, and you do it by the phase estimation that Krysta has shown.
And so I want to evolve it for a certain time, or to a certain power, and the time is the inverse of the accuracy that I need. So if I want six decimal places, I have to run for a time of 2 pi times ten to the six, in units of the time step.

And now, what is the complexity of this? Essentially, I want to do about T over delta T times ten times N to the 4 operations, choosing an accuracy of epsilon. The Trotter time step -- if we can choose one of 0.1, and that is a big one, I don't think we can go bigger -- then we're ending up with 10 to the 3 times N to the 4 over epsilon.

Then, talking to people in chemistry: we wanted at least six digits; many people say ten. Let's keep it at six. That's a complexity of 10 to the 9 times N to the 4. And now what is the run time if this is my algorithm?
In an ion trap currently, the gate speed is about ten microseconds per operation. With superconducting qubits, one could aim for ten nanoseconds. So let's assume a perfect, decoherence-free quantum computer that operates at 100 megahertz; then the run time is ten times N to the 4 seconds. And let's assume, for a single run, we're going to run it for a month.
So that's how many seconds we have and how many operations we can do. Then the
biggest problem size we can do is 22 spin orbitals. I can do them on my laptop
in a few minutes.
We can do much bigger on a classical machine. So we have a problem in this simple-minded way. So let's be a bit more optimistic. Let's think we can speed it up and get a really good machine that operates at a gigahertz. And I'm going to run it for a year. Then we can do 75 spin orbitals. We've beaten the classical machine.
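The back-of-the-envelope numbers above can be reproduced in a few lines. This is only a sketch of the estimate in the talk; the constants (ten gates per term, a Trotter step of 0.1, accuracy epsilon of 10^-6) are the talk's assumptions:

```python
def total_gates(n, eps=1e-6):
    """Rough gate count: ~10 gates per term, N^4 terms, 10 Trotter steps per
    unit time, and evolution time ~1/eps for phase estimation: 1e3 * N^4 / eps."""
    return 1e3 * n**4 / eps

def largest_n(gate_rate_hz, wall_seconds, eps=1e-6):
    """Largest number of spin orbitals N whose run fits the wall-clock budget."""
    budget = gate_rate_hz * wall_seconds  # total gates we can afford
    return int((budget * eps / 1e3) ** 0.25)

month, year = 30 * 86400, 365 * 86400
print(largest_n(1e8, month))  # 100 MHz for a month -> 22 spin orbitals
print(largest_n(1e9, year))   # 1 GHz for a year -> about 74-75
```

Plugging in the two scenarios from the talk recovers the quoted problem sizes of roughly 22 and 75 spin orbitals.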
What it shows you is that we have a problem. It's not just so simple as to say we can use a quantum computer and it is efficient in polynomial time. It's just the oracle Krysta mentioned: the time evolution of the quantum system. If we do it in a quantum system, like in this pen here, it's efficient because it's analog. If we simulate it in digital hardware, in a program, then we have -- I've got N to the 4 scaling.

So the question now is: what can we do? What important [indiscernible] are there around that we can do if we build something that can do just a bit more than the classical? We can do [indiscernible] maybe 60, 70 spin orbitals, which is a little bit bigger. Is there a big problem there? And the second question is that we need new ideas to speed up those methods. In the simple-minded way, it would just be very, very hard, and so those are the two things that we want to discuss tomorrow in detail.
I first want to hear you talk about the state of the field and methods and then
we want to open it up to brainstorming of what one can do to make it useful.
We have a [indiscernible], so it's not hopeless, but one should not just say you'll have a quantum [indiscernible] that solves all problems of quantum chemistry. It's not that easy once you have your machine.
So now --

>>: [indiscernible] it seems to me like the real progress to make in [indiscernible] is to determine what class of Hamiltonians can beat the sort of [indiscernible] time-energy uncertainty. So we know Shor's algorithm -- just thinking about it number-theoretically -- I think it is a Hamiltonian evolution in time.
>> Matthias Troyer: Yes.

>>: It somehow can beat Heisenberg's [indiscernible]. So I'd be curious if people have a sense of, like, what chemistry Hamiltonians would fit into that category that are not trivial.

>> Matthias Troyer: That's one way of going there. The other way is: can we simplify the [indiscernible] to get the scaling down? That's another way that one can go. We don't have to make it in one step; if you can find something that you can do faster than N to the 4, these would be two ways of proceeding.
>>: Well, there's this other thing that I talked to [indiscernible] a little bit about, which is sort of -- look at an ensemble of initial states rather than a single initial state, and essentially don't try to simulate the entire system for a long period of time with the sequential performance constraint that applied. Instead, do something much more clever about mixing between the states, using a genetic algorithm or something else. This is like what we had with the Metropolis algorithm, after all. I mean, the Metropolis algorithm is a very serial algorithm when you implement the [indiscernible]. But, in fact, we have other techniques that don't rely on sequential simulation of the electron positions, and so that might be a path.
>> Matthias Troyer: Yes, so what we want to say, actually, is that we need to think about new methods, and we should think about the problem, and the problem is not solved at the moment. But we have all day tomorrow for brainstorming, and we're already a bit late for your talk, so let's start. Markus, thank you for coming here.
>> Markus Rieher: Good morning, everybody. My name is Markus Rieher. For those of you, which is most of you, who don't know me: I'm a quantum chemist, basically, so I work with standard many-particle methods, which run on classical computers, to solve chemical problems. And I will actually come back to some of these problems which you had on your list in your first presentation.

And I would like to discuss with you what truly chemical problems are. I mean, it's easy to say that, well, we would like to have dye-sensitized solar cells with some properties and so on. Let's look a bit more into the details here. And I happen to be here because I have been talking to Matthias a few times in the past six months, and he already told me the story which he just presented to you.
Say, given there's a fixed, not too large number of qubits that you could use for quantum computing in chemistry, what kinds of problems could you actually solve? I will focus mostly on chemical reactions, hence the title, and most of these chemical reactions will have to do with catalysis, because this is a problem which is really important in chemistry. You know, when you do stoichiometric reactions, you need an equal number of reactants in order to get a product. That is already a challenge, but it's even more challenging to come up with molecules of which you need only a few in order to carry out a chemical reaction over and over again -- meaning catalysis.
And now let's work under the assumption that you have only, well, I call it here a hundred one-electron states from which you could construct your quantum many-particle state. These 100 one-electron states I have in mind are actually molecular orbitals. These are not qubits yet, because the molecular orbitals I'm talking about have four states: a molecular orbital can be empty, spin up, spin down, or doubly occupied. So you would need two qubits in order to represent such a molecular orbital.
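The counting in that paragraph is easy to make concrete. A minimal sketch, where the two-bit encoding shown is an assumed occupation-number convention (one qubit per spin orbital), not something specified in the talk:

```python
# Two qubits per spatial molecular orbital: one records the spin-up
# occupation, the other the spin-down occupation.
occupation_states = {
    (0, 0): "empty",
    (1, 0): "spin up",
    (0, 1): "spin down",
    (1, 1): "doubly occupied",
}

def qubits_needed(n_spatial_orbitals):
    return 2 * n_spatial_orbitals

print(len(occupation_states))  # 4 states per spatial orbital
print(qubits_needed(100))      # 200 qubits for 100 molecular orbitals
```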
And I just pick a hundred to have one number which is, I would say, sufficiently large to be interesting -- to be competitive with standard methods on classical computers.
Okay. And like I said, I will focus on chemical rather than physicochemical problems, right. So physicochemical problems are problems related to spectroscopy, where you have some sort of spectrum that you would like to compute. I'm after the chemistry.

And the main target for us is the electronic energy, and that is defined by the Born-Oppenheimer approximation, so you have to solve the stationary electronic Schrödinger equation.

Now, what would you need in order to describe catalytic reactions? For most of them, especially if they involve transition metal ions, I would say that you can set up the molecular model with 50 to 300 atoms, okay. So you could do a great deal of chemistry with such a model of 50 to 300 atoms.
And this model, you would then, depending on the chemistry you're doing, embed
into some environment and you would come up with an environment that is easy to
model. For instance, a dielectric continuum, usually used to model solvents.
You could use a quantum-mechanics-in-quantum-mechanics embedding; for instance, one option is frozen-density embedding, where you take your active catalyst and you embed it into the electronic density of some surrounding, which is usually structured.
And, of course, you have electrostatic embedding, which is usually used in protein chemistry: you set up a force field, and into that force field you embed your quantum system.

>>: Can I ask you a question?

>> Markus Rieher: Sure.
>>: When you say 50 to 300 atoms, is each atom requiring 30 qubits to model
because of [indiscernible]?
>> Markus Rieher: I'll come to that point. Before I come to it, let's look at this slide. So first of all, I mean, the nice point is that for these kinds of catalytic reactions, you can get away with a model of, say, 50 to 300 atoms. It could be much more, of course; there are chemical processes where you surely need more than 300 atoms.

>>: Can you explain that -- why you need 300 atoms? I just don't understand.
>> Markus Rieher: That's also on a slide; give me a second. Before I tell you that, another important point to mention is that, I would say, we get away without explicit nuclear dynamics. So what we can do is look at stationary states of the Born-Oppenheimer potential energy surface: you take a structure -- determined by the positions of the atomic nuclei -- solve the electronic structure problem, and extract your chemistry from that. You don't need to move the nuclei around.

For the reactions I'm interested in, that is -- of course, there are chemical problems where you need to do that, but for those problems I'm telling you about, you really don't need it. And these reactions, and that's the most important point, involve the breaking and forming of rather strong chemical bonds. When you do that, this is when you get away with a model which has only a few hundred atoms. And you usually break only one or two atom-atom contacts at a time. This is why it's possible.
Still, the problem is that you need to consider a huge number of nuclear configurations and a huge number of molecular structures. For each of these nuclear configurations, you solve the [indiscernible], and that also means you need to do it for a molecule in different charge and spin states.

Now, coming back to your question: say you take a hundred atoms, and this is a very low estimate, and each atom contributes 10 one-electron basis states. That means you end up with 1,000 molecular orbitals, and I told you we just want to use a hundred. So what can we do about that? That is a problem.
The nice thing is that there are already ideas around for standard techniques to circumvent the problem, and that has to do with the active space concept that was basically developed by the group led by Roos and his coworkers. And the idea is the following. I mean, you know that you have many more molecular orbitals than you can treat in a standard calculation. Now, is it possible to select a reduced set of orbitals which is totally sufficient in order to describe the chemistry?

And this is called the complete active space concept. So you select a few of the most essential molecular orbitals, which are needed to represent the total state, and work in these active orbitals only. In the standard methods, you always end up with a kind of superposition of many-electron basis states, usually with a fixed number of electrons, and you expand your total state, your electronic wave function, in this basis. And the basis is, of course, constructed from those orbitals that you selected.
Okay. Now, there are some drawbacks, of course, because you select only a few orbitals, and that ultimately has consequences for accuracy. Usually, you can make sure that you get qualitatively correct wave functions, but they are not always quantitatively correct, because some contributions might be lacking. In quantum chemistry, we call them dynamical correlation.
Also, it's not guaranteed that it will be possible to select the most relevant
orbitals, but we will see that, say, a hundred molecular orbitals is a
reasonably good number for the catalytic processes I have in mind.
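To see why the active space has to stay small on a classical machine, one can simply count determinants. A rough sketch, ignoring the spin and spatial symmetries that reduce these counts somewhat:

```python
from math import comb

def cas_dim(n_electrons, n_orbitals):
    """Number of determinants in CAS(n_electrons, n_orbitals):
    choose the alpha and beta occupations independently."""
    n_alpha = n_electrons // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

print(cas_dim(18, 18))   # ~2.4e9 determinants: around the practical CAS-SCF limit
print(cas_dim(50, 100))  # vastly larger: out of reach classically
```

The jump from the roughly 18-in-18 limit of classical CAS-SCF mentioned later in the talk to a hundred orbitals is what makes the quantum computer interesting here.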
A little bit more notation; you will see this on the slides to come. CAS means complete active space. So that is a set of molecular orbitals in which you construct your total state in an exact manner, and you saw this on Matthias's slide before. In quantum chemistry, we call that full configuration interaction, full CI. To be more precise, full CI usually refers to the exact solution in the total set of one-particle states; since we restrict our set of one-particle states, we call this exact solution CAS-CI: a full CI done only within the active space that we have chosen.

Now, since you make this restriction, you could also play around with relaxing the orbitals, and when we do that, we call it CAS-SCF. So that's a CAS-CI-type wave function where we also optimize the orbitals.
Now, two examples that should illustrate what kind of chemical problems I do have in mind, which are important. One example is this one. It's not yet solved. Well, despite 50 years of research --

>>: That's an ancient reaction, though.

>> Markus Rieher: Right, a very important one.

>>: You bet.
>> Markus Rieher: So the point is the following. I mean, we have this in air, the nitrogen -- molecular nitrogen, 75 percent. In order to grow food, we need to convert it to ammonia. Usually, it's done in industry through a process which is a hundred years old, the Haber-Bosch process, running at elevated temperature and elevated pressure.

It consumes two percent of the annual energy production. And it would be nice to really convert the nitrogen under ambient conditions -- ambient temperature, ambient pressure -- to ammonia. That had not been achieved by synthetic chemistry until the year 2003. And it is possible, though, because there's an enzyme doing this.
Now, this is the system which first accomplished it in the lab. It was published by the Schrock lab at MIT in 2003, and it works with molybdenum. You see here -- I hope you are somehow used to chemical structures. If you want to solve a chemical problem, you need to get used to them. But, of course, I'll explain this to you.

So you see, this is the active center. It's a metal, a molybdenum, surrounded by four nitrogen atoms. Then in the fifth position here, there's a di-nitrogen binding; this comes from the air. Then all of this here is a barbed wire of atoms, taken from organic chemistry. It's basically a huge system. You see it in a ball-and-stick representation on the next slide.

It's the first system to do, say, a few catalytic cycles. But the turnover number is only six, so, say, after three cycles, it drops dead. And that is very bad for a catalyst; you need to achieve a million cycles, something like that. So we are in the position of having the first synthetic system which can do the job, but it's totally inefficient. And since 2003, nothing has changed.
There's another system based on molybdenum that has been available for two years, which can also do the job; it also drops dead after a few cycles, say four. Okay. And, of course, you need --

>>: [indiscernible].
>> Markus Rieher: That we could study with quantum chemical methods. The
point is, you provide electrons through these compounds here and you have a
proton source. And under these highly reductive conditions, what you do is you
attach hydrogen atoms to these nitrogen atoms and the whole thing falls apart,
because it's no longer stable. And that's kind of the problem.
>>: [indiscernible].

>> Markus Rieher: Yes.

>>: So do they use [indiscernible]?
>> Markus Rieher: The mechanism is unknown. The active site has been known only, basically, since last year; it took 20 years to clarify how the active site is really composed. The mechanism is not yet solved, and one reason is actually that standard quantum chemical methods have problems here, because the number of active states that you can consider is too small. Of course, you can do DFT, but you never know how accurate that is.
>>: Is it known how nitrogen-fixing bacteria work?

>> Markus Rieher: No, that's what I just said. It's not known.

>>: Do they have -- the molybdenum?

>> Markus Rieher: Yes, they do, but it's believed that the reaction takes place on iron. The active site of this enzyme consists of seven iron atoms and one molybdenum atom, and they are all bridged by sulphur. So in terms of selecting molecular orbitals, it's a mess.
>>: But these are not exotic bonds?
>> Markus Rieher: No, no. For me, the conclusion is that it can be done chemically, and so maybe this is also a true chemical problem, which I don't know how to solve algorithmically, because it's basically a combinatorial problem.

The point is you need to know where to place your atomic nuclei, right? That's what I said back in the very beginning, using the Born-Oppenheimer approximation. But if you don't have an experiment which really tells you where to put the nuclei, or from which nuclei the catalyst should be composed, well, there's not much that you can do. And this was the first system that could do the job, so everybody jumped on that system to understand how it works. But it's very hard, and so far impossible, to really improve.
For instance, we substituted this barbed wire of atoms, and whatever we did, it was worse than the original system. So this is a really hard problem.

Well, this is the ball-and-stick representation of the catalyst. You see the molybdenum atom here in the center; these are the nitrogen atoms. This is the [indiscernible] binding. And here you see what I call this organic barbed wire: carbon atoms and hydrogen atoms.

Well, we studied the system basically with DFT. This is the only method which is feasible, actually, but we don't know anything about the accuracy. So having more accurate data would be really helpful here. It also tells you that you need to come up with structural models that incorporate the acid. This is the acid here. Here you see the proton which is going to be transferred. This is actually a view of one of the reactions after the first ammonia molecule has been produced, because you see there's only one nitrogen atom missing here.

And then you need to set up a molecular structure model. Well, we did it first with isolated molecules, and then we let the acid approach through different reaction channels, which is depicted here. So that is what I mean when I say you need to consider really many calculations for different nuclear configurations.
And this is just a glimpse of the full complexity of the process. These calculations were carried out with a specific DFT functional, without really knowing what the accuracy is. Each node here is basically a different chemical species, and you see electrons added, protons added, and things going on -- and that's only for the system with the low turnover numbers. So this is just understanding what's going wrong. It's not yet a solution.

So this is one of my major points here. If you just run one calculation on one structure, in chemical terms it's pretty useless. I mean, you can solve a physicochemical problem. But a chemical problem, no way.
But what could you do with a quantum computer? Well, the point is there are certain essential steps in the mechanism. For instance, one of them is the exchange of the produced ammonia by the newly incoming di-nitrogen ligand. And this is something that the Schrock system is really capable of doing. Many other systems fail just there: they produce ammonia, but the ammonia poisons the catalyst, and you need to get rid of it and bind the new di-nitrogen ligand, and this is something which is possible with the Schrock system.

So in this kind of chemistry, there are only four essential steps which must be accomplished by a catalyst. You need to be able to bind di-nitrogen. Then you need to activate it, meaning you need to transfer the first electron and the first proton onto it; the rest is energetically downhill, so you can forget about it. You need to get rid of the final ammonia molecule which is formed. And you need to increase the turnover number, so you need to deal with a huge number of side reactions.
And especially for the second step, a quantum computer could be helpful, because that is a step which involves electron and proton transfer, and it's very hard to model that within DFT. I'll just show you one example -- a different system for the same purpose -- which would benefit from such a simulation. You see it's a [indiscernible] system, and the M stands for metal. The metal is [indiscernible]; it could be iron. And there's again an organic barbed wire, and the N2 molecule is clamped between the two metal fragments.

The point is you can transfer, through reduction or photochemical activation, an electron onto the N2 unit, and then you see that it will induce a structural change. And because of the structural change, protons will hop over to the di-nitrogen ligand, and that would accomplish the important second step.
>>: What does that [indiscernible] stand for?

>> Markus Rieher: Well, N2 is basically a singlet, but the question is what the spin state of the metal fragment is. So with respect to N2 as a molecule, it's pretty harmless. But the metal fragment is a problem. For instance, if you have iron instead of [indiscernible] here, it's not clear what its spin state is.
The second example is from the hydrogen production business -- clean energy production, basically -- as it is pursued with an enzyme, this time a so-called iron-iron hydrogenase. And I'm not talking about the process of forming H2, because there are many systems which can do that. The point is, whenever you have such a system, it's sensitive to oxygen. So if you expose it to air, it drops dead. And you really need to do something about that.

In biology, basically three evolutionarily independent enzymes are known which can do that: the mono-iron hydrogenase, the di-iron hydrogenase, and the nickel-iron hydrogenase. Except for this one, the problem for all the others is O2 inhibition. For the system here on the left, you see here's the iron atom. The explanation is simple: it does not bind oxygen, but it also has a bias toward splitting hydrogen, so it's not the most efficient hydrogen-producing catalyst.
Well, if you want to study this, you need to consider different reactive oxygen species -- three, four, five of them. You have different pathways; different spin coupling schemes and different charges need to be considered. We did that, also with DFT, and we ran more than 1,000 broken-symmetry DFT calculations. We don't know how accurate they are. The outcome was this.

This is the most efficient system in terms of hydrogen production: the active site of iron-iron hydrogenase. Fe, of course, means an iron atom, S a sulphur atom, and you see it has six iron centers -- four here and two in this sub-cluster over here. It's actually the sub-cluster which binds molecular oxygen, from the air, for instance, and then produces reactive oxygen species, which are converted to an OOH radical and to this oxidizing H2O2 molecule. These reactive oxygen species then attack the cluster, and it decomposes, and that's the reason why it drops dead.

And from DFT, these are the energies for the process. We don't know how accurate they are. All we know is that this is pretty much in line with what is seen experimentally, but we have no independent way to test it. And for this, it would be nice to run calculations on a computer which could treat a hundred molecular orbitals.
Of course, if you understand that, then the next thing is you would increase your atomic model to, say, 700 atoms, because what you can do then is mutagenesis studies. So you can change the environment in order to make it more oxygen stable. Actually, this is something we are also working on, in collaboration with experimental people who can do 600 mutants per day. Although they can do 600 mutants of the enzyme per day, the combinatorial possibilities here are so large that you really need theoretical calculations in order to guide you.
>>: Sorry to interrupt, but my impression for this sort of work is that in
some sense, the experimentalists feel like the calculations are almost good
enough, right? You say -- you have a space of a gazillion, you give them, say,
10,000 things that look good to you. They make 60 mutants. Of those, 20 work,
right? I hear theorists talk a lot about how it would be great if we could do
something better. But the experimentalists are always like look, we've got
these thousand things. We tried 600 of them.
>> Markus Rieher: It's true.

>>: So what do you tell your experimental --
>> Markus Rieher: Well, what you're describing is pretty much what goes on in the drug design industry, because the experimentalists don't care whether this or that proposition will be right or wrong; they just want to nail the millions down to a few thousand and test those experimentally. But I think when it comes to catalysis, you need to change the structure of the catalyst, and in order to do that reliably, you really need high accuracy, because it might be very hard to synthesize. It could take a year or two until you figure out how to do that.
And when you're in such a situation, you really need to be able to trust your
calculations and, well, DFT could work, but you never know.
Okay. That is as much as I wanted to tell you about the two examples, just to give you an impression of what a chemical problem really is. And, of course, there are also molecular properties we are interested in, and one property that we have spent some time calculating is actually the spin density, because that turned out to be very hard with standard methods.
Why is that so? Well, let me explain. What you see here are basically spin density distributions, calculated with different standard methods, all CAS-SCF calculations, for a triatomic molecule -- and you don't see the atoms. There's an iron atom here, a nitrogen atom here, and an oxygen there. So it's FeNO.

And the notation is that the first seven means seven electrons, and the second seven means in seven molecular orbitals. And this means 11 electrons in nine molecular orbitals, and so on. So this directly relates to the question of how many molecular orbitals you can treat with the standard methods. The point is, with standard CAS-SCF, you can treat at most, I would say, 18 electrons in 18 orbitals. This is why it would be great if you could treat a hundred orbitals, or even 50.

Here, you clearly have a problem, because, you see -- well, the point is, this is a spin density, and all these representations are spin density differences, to be more accurate in my description, and you see that the spin density kind of oscillates around the reference, which I just picked as the CAS(11,14) reference.
But we can go to other algorithms which are not yet standard in chemistry, although they're quite standard in physics, like DMRG, the density matrix renormalization group, because with DMRG we can really treat large active spaces.

Here you see just one slide, which should convince you that we can really converge a spin density within DMRG, say, for 13 electrons and 29 molecular orbitals, and then the reference spin density which we produce is this one. Now we can use that in order to compare with CAS-SCF and DFT, just to see how wrong they are.
This is the comparison with CAS-SCF. So these are not the spin densities, but spin density differences. I forgot to tell you that -- let me go back. So this is the spin density, and blue means alpha spin excess and yellow means beta spin excess.
These are spin density differences, and you see that the CAS-SCF calculations with this small number of active orbitals -- 13, 14, 15, 16 -- are too far off. So this is the error in the spin density with one of the standard methods in quantum chemistry, CAS-SCF.

>>: What is the [indiscernible] -- 10 percent error, 20 percent error, 100 percent error?

>> Markus Rieher: It's an iso-surface, and I don't recall the absolute terms, but the error is too large. It's in the paper, of course, if you want to look it up. You can also take the difference to the DFT spin densities, and this is for eight standard density functionals which are used in quantum chemistry. And while they look like spin densities, this is the error in DFT, okay?
We can do that for larger systems. Before, I showed you a triatomic model system. Here we have a full-fledged transition metal complex. In order to describe the spin density here, you need, for instance, 13 electrons and 42 orbitals. It also tells you that if you can treat 50 molecular orbitals, meaning a hundred spin orbitals, then with respect to the spin density, this is the kind of system you could study. The spin density is, of course, important for certain spectroscopic techniques in chemistry.

Now, the final point I would like to discuss with you is whether there is a way to choose the molecular orbitals from the huge set that you need for the description of your molecules. Is there an automatic way to determine those orbitals which are relevant for the active space?
And usually, that is done based on energetic criteria and, mostly, I would say, on chemical intuition. But recently, it turned out that concepts from quantum information theory are quite beneficial here. The concepts that we studied are the single-orbital entropy and the mutual information. The single-orbital entropy is computed from the eigenvalues omega of the reduced density matrix that we obtain when we trace out all the environment states with which a single orbital would interact. We end up with a four-by-four reduced density matrix and four eigenvalues, and from them we can compute the single-orbital entropy.
And in order to understand what the entanglement between two orbitals is, we
could calculate the mutual information. These concepts, by the way, have been
introduced by Legeza almost ten years ago, and the mutual information by
Rissler, Noack and White in chemical physics 2006. So the mutual information
is computed from the single-orbital entropies and from the two-orbital entropy, which you get in pretty much the same way, using the eigenvalues of the two-orbital reduced density matrix. That way, you keep the two orbitals explicitly and trace out all the environmental orbitals.
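These quantities are simple to compute once the reduced density matrix spectra are available. A sketch with made-up eigenvalues -- the numbers are purely illustrative, not from any calculation, and note that some authors include an extra factor of 1/2 in the mutual information:

```python
import math

def entropy(eigvals):
    """Von Neumann entropy -sum(w ln w) of a reduced density matrix spectrum."""
    return -sum(w * math.log(w) for w in eigvals if w > 0)

# One-orbital RDM: four eigenvalues (empty, up, down, doubly occupied).
s_i = entropy([0.05, 0.10, 0.10, 0.75])
s_j = entropy([0.05, 0.10, 0.10, 0.75])
# Two-orbital RDM: up to 16 eigenvalues; five nonzero ones assumed here.
s_ij = entropy([0.60, 0.20, 0.10, 0.05, 0.05])

mutual_information = s_i + s_j - s_ij
print(round(mutual_information, 3))  # ~0.49: positive, so the orbitals are entangled
```

A large mutual information between two orbitals flags them as a pair that should go into the active space together.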
Now, we can study these concepts, and it turns out that we usually see three subsets of orbitals: those with high entanglement, medium entanglement, and weak entanglement. It turns out that they pretty much match the chemical intuition. So in chemistry, people have known, for instance, that if you have active electrons in pi orbitals, you should correlate them with the so-called anti-bonding pi orbitals, and so on.
So we have kind of a measure in our hands that can replace chemical intuition
here, which is really good if you want to pick, say, a hundred orbitals from
the set of 1,000 that I had on one of my slides. We could go into more detail
here, but the pattern in all the molecules that we have studied so far is
pretty much the same. You always have these three classes of orbitals when it
comes to orbital interactions. And we have seen that also for larger systems.
That's basically everything I wanted to tell you about. So my conclusion here
is the following. I think it is possible to define important yet unsolved
chemical problems that would benefit from the availability of a quantum
computer if it could treat 100 molecular orbitals. If it's only 50 molecular
orbitals, well, I don't know whether it's worth the effort. It's important to
know that whenever you do such a quantum simulation, you must be able to beat
all the standard methods that are around, like coupled cluster, CAS-SCF,
perturbation on top of CAS-SCF, DMRG or DFT.
And I was also asked in this email by Leeanne to put up some questions. These
are a few questions that I would have, because some of them might already be
answered by Matthias' slides. So if you have this problem, how fast could it
be solved on the quantum computer? Would it really be more efficient than the
existing schemes? Would it be possible to include what we call the dynamic
correlation effects, to account for the fact that we have omitted most of the
orbitals in the calculation? And how difficult would it be to compute
molecular properties, response properties? And that's it. Thanks for your
attention.
>>: Any more questions? We have time for discussion now. Mike?
>>: So can you describe how the information concepts like entropy and mutual
entropy can be used to select the most important orbitals to keep track of.
But classically, when you're [indiscernible] degrees of freedom, do you settle
on linear combinations of the initial degrees of freedom rather than just
choosing the best ones from the original basis by some principal value method?
For example, [indiscernible] the largest principal values and keep those. So
I'm wondering whether something analogous is possible. Can you select linear
combinations of orbitals?
>>: Anyway, taking combinations of orbitals and occupancies -- I mean, you
actually don't move the electrons one at a time either. Suppose you could
say, okay, there are these quantum mechanical transitions described by
unitaries among, let's call it, a basis given by the occupancies of sets of
orbitals. And what we seek is a combination of those guys that we can model.
I have no idea if that would work, but that's the kind of thing that extends
not only the basis but the linear algebra, and also gets around this picture
of pushing electrons around one at a time.
>> Markus Reiher: There's a lot of literature and knowledge available on how
to choose these one-particle states, and the point is -- I didn't talk about
this, but I considered it not to be a problem, because that knowledge is
already available. For instance, you could pick the molecular orbitals that
come out of a [indiscernible] calculation. But that is not what we did here.
What we did is we used so-called natural orbitals that we computed from a
small CAS-SCF calculation. So we run a CAS-SCF calculation, which also
optimizes the orbitals, and then we pick the linear combination which
diagonalizes the first-order density matrix.
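That step can be sketched as follows: diagonalizing the one-particle density matrix yields natural orbitals (the eigenvectors) and their occupation numbers (the eigenvalues). The matrix entries and the selection threshold below are made up purely for illustration.

```python
import numpy as np

# Hypothetical one-particle (first-order) density matrix in some
# molecular-orbital basis, e.g. taken from a small CAS-SCF run.
D = np.array([[1.995, 0.002, 0.000, 0.000],
              [0.002, 1.100, 0.004, 0.000],
              [0.000, 0.004, 0.900, 0.002],
              [0.000, 0.000, 0.002, 0.005]])

# Natural orbitals are the eigenvectors of D; natural occupation numbers
# are its eigenvalues, sorted from most to least occupied.
occ, U = np.linalg.eigh(D)
order = np.argsort(occ)[::-1]
occ, U = occ[order], U[:, order]

# Orbitals with occupations far from 0 and 2 are the strongly correlated
# candidates for the active space (the 0.02/1.98 cutoff here is an
# illustrative choice, not a universal rule).
active = [k for k, n in enumerate(occ) if 0.02 < n < 1.98]
print(occ, active)
```

Here the nearly doubly occupied and nearly empty natural orbitals are dropped, and the partially occupied ones are kept for the active space.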
>> Krysta Svore: This is already done.
>> Markus Reiher: That's why I didn't talk about it.
>>: I'm confused. It seems like this procedure of choosing the orbitals
requires you to use the bigger system already. If you're going to choose the
orbitals this way, you've got to be able to trace out everything else. So it
seems like you've already solved the system here.
>> Markus Reiher: That's right. The point is what we did here is we ran small
DMRG calculations, small in the sense that it was a limited number of
iterations. Now, I think it's not a problem in principle that you get this
information a posteriori. You can come up with a scheme to sample that
knowledge in advance, I think.
But in any case, it would be good to have such a measure which tells you
whether you picked the right orbitals, even if, for instance, this can be done
only after the quantum computation. But I don't think that's the case; you
could get the knowledge beforehand with standard methods.
>>: So [indiscernible] try to get it up to 50 is very hard to find.
>> Markus Reiher: If you want to be competitive. I don't know whether -- I
guess Garnett agrees, don't you?
>>: Yeah, yeah.
>>: I mean, in the knowledge that you are saying, these are the only -- these
are only the [indiscernible] orbitals underlying the thousands of [inaudible].
>>: So that's the scale that we have to [indiscernible].
>> Markus Reiher: When it comes to chemistry.
>>: So you identify the [indiscernible]. But what about using
[indiscernible] techniques and measurement techniques to probe these things
more directly? [indiscernible] computation on the theoretical end, but think
about the experimental techniques that are developing; later it will be
easier to use, say, tricks that would let you [indiscernible] five years as an
example of that.
>> Markus Reiher: So I don't get your point. Is it in order to solve the
chemical problem, or to replace the universal quantum computer by a system
which just simulates directly?
>>: I'm not thinking about simulation. I'm thinking about experiment now.
Take techniques in experimental quantum computation, use the more direct
[indiscernible]. I guess --
>>: Can I ask a question here?
>>: Sure.
>>: So take these biological [indiscernible]. So we know people do these DFT
calculations, and a couple of things go wrong. One is they don't fold the
right way, so there's a huge hole and all this water comes in and kills the
catalyst. The other thing which goes wrong is that somehow, in some sense,
the [indiscernible] activity of the active site is reduced for some reason.
There's some [indiscernible] or whatever that screwed it up.
So what do you see as the chemical, the physical properties of these things
that make them good catalysts? What do I need to worry about? Do I need to
worry about redox potential? Do I need to worry about the [indiscernible] of
the surroundings?
>> Markus Reiher: That is the point that you need to figure out.
>>: So Peter's question is, can we use our abilities to use coherent
spectroscopy as a way to, like, test --
>> Markus Reiher: How would you do that? I mean, maybe I don't get you right,
but the point is, in theory, you could study a system which is not yet
accessible experimentally. So how would you do that with an experiment? You
see what I mean?
>>: Yeah, but I think --
>>: Let me ask the question another way, which is: if you could order
experiments [indiscernible], what experiments would you ask them to do? What
things would you ask them to measure to best inform [indiscernible]?
>> Markus Reiher: To figure out whether the calculations are correct. Well,
actually, what you really need in the first place is structures. So I don't
think that you really need new experimental techniques. The point is it's
very difficult to get the structures, because, well, if you use x-ray
diffraction, you need a crystal, which is difficult to get for certain
systems.
You could use more indirect techniques, and, of course, that's why I basically
had this point here, molecular properties. Of course, with standard
techniques, we can compute molecular properties. For instance, whether you
have N2 binding or not: vibrational spectroscopy is sufficient because it
tells you where the stretching vibration of N2 is. It's perfect.
So I don't think that there are new techniques in spectroscopy that we really
need. It's just that these systems are very difficult to treat
experimentally, and that is basically a chance for us in theory to really
compete on an equal level with them.
>>: Okay. Thank you.