>> Michael Freedman: So we’ve had a number of really interesting talks. I’m glad to see everyone looks alert and has a lot of energy for the afternoon session. Garnet Chan is going to start us off in the afternoon session with quantum information for quantum chemistry. Garnet.
>> Garnet Chan: Okay, [inaudible]. It’s very interesting to be in this meeting …
>> Michael Freedman: [inaudible] …
>> Garnet Chan: Oh …
>> Michael Freedman: We’ve got to get your mic on.
>> Garnet Chan: Oh, oh yes sorry. The one thing I had to do.
[laughter]
>> Garnet Chan: Alright, now we can start again. Okay, so as I was saying, it’s really very interesting to be here and to meet all these different people in a different community, and also to be in a different setting. So I think I was telling Marcus this is the first time I’ve been outside of an academic environment. It’s not really so different, it’s just, you know, the food is nice and the chairs are nicer.
[laughter]
>> Garnet Chan: But otherwise kind of similar. So that’s me, let’s see, does this pointer work? Okay, so my name’s Garnet Chan as you heard. I’m now on the faculty at Princeton, but for most of the last decade I was in the Chemistry Department at Cornell. So my talk today tries to look at some of the intersections between quantum information theory and quantum chemistry. I’m by training a quantum chemist, but in recent years there’s been a lot of marrying of the two fields, and that’s symbolized by these two figures here. I don’t know which one is which, but one is supposed to be quantum chemistry and one is supposed to be quantum information. You can see they’re not really so different; they differ.
[laughter]
>> Garnet Chan: In fact only by one line in PowerPoint. So I don’t usually think of myself as working in quantum computing. Rather I would say that my group works at the intersection of quantum chemistry, condensed matter theory, and quantum information theory. What we try to do in our work is devise numerical methods, drawing on concepts from these three fields, to simulate molecules and solids with high accuracy and with material specificity. Now some of the problems that we’re interested in are shown here. For example I’m interested in the area of strong correlation. That’s also the title of the next talk you’ll hear, from Matthias. But briefly, this is a scenario for electrons where the energy scale of the electron interaction becomes large or dominant. In that case a traditional mean-field picture breaks down. We’re also interested in devising new simulation methods to treat very large biomolecules, or even infinite systems such as condensed phases, including metal oxides and superconductors. Then finally we’re interested also in quantum dynamics. This is what happens whenever you carry out spectroscopy experiments in chemistry. You shine electromagnetic radiation on a system and it relaxes, and through its relaxation you try to infer the properties of the system.
So all these areas are classic problem areas in chemistry and physics.
They all lie at the frontiers of classical simulation. Now they of course are very challenging to carry out, particularly for large systems. So we really have two choices. One, we can say that classical simulation is very difficult and consequently we should try to go the route of a quantum computer and take advantage of quantum computing to make progress. Again, of course, that’s one of the reasons why we’re meeting here today. But the other way in which you can improve things is to say, well, perhaps we can improve the classical algorithms themselves. Indeed there’s no indication that the optimal algorithms have been found. So a fruitful way to improve the classical simulation algorithms is to incorporate insights which have been gained from quantum information and quantum complexity theory in recent years.
I don’t understand how this clicker works.
>>: I think it’s the right one.
>> Garnet Chan: To the right?
>>: [inaudible]
>> Garnet Chan: Okay.
>>: Oh, there it is.
>> Garnet Chan: I think it’s to do with where I’ve clicked here. Okay, alright so that works now, okay.
Alright, so at the very heart of it, if you take away the applications to the real gory chemistry, quantum chemistry is really concerned with three foundational problems. So the first is a sort of existential problem. Why can we do quantum chemistry? If you think about the vastness of Hilbert space, that suggests that it should be impossible to ever simulate quantum mechanically a large biological system like a piece of DNA. But the fact is we do that, and in practice we can do that, at least within some approximations, you know, reasonably well. So why is this? Does this mean that nature is in fact much simpler than we expect? If indeed nature is that simple, this leads us to the second foundational question: what are the kinds of quantum states that nature explores, okay? What structure do they have? In condensed matter this is related to the classification of electronic phases.
Then finally, on a practical level, the third question is: if I know that there are certain structures of quantum states, can we take advantage of that structure to carry out efficient calculations and construct new algorithms?
So I say that these are the foundational questions for quantum chemistry, but of course much of quantum information theory concerns the same questions. So the two fields are, in fact, in my view very closely related, even though not all practitioners always think so. Okay, nonetheless the different academic heritage of the two fields means that quantum chemistry and quantum information have come up with different kinds of answers to these questions. It’s exploring the intersections of the different approaches which I’m going to be concerned with in this talk. Okay, sort of getting confused by this set up. Okay, so quantum information and quantum chemistry view Hilbert space in a complementary way. Okay, this leads to different ways to characterize and approximate quantum states. So in a nutshell, the two views broadly speaking are the following. The formalism commonly employed in quantum information is to write the quantum state vector in an occupation number representation. In this case the basis states are labeled by local degrees of freedom, and these could be local fermion occupancies, or in other words you can map them to local qubits. The important feature of writing things in this form is that there’s always a tensor product structure to the space.
Okay, the traditional view in quantum chemistry however is a little bit different. In quantum chemistry we typically write a state in an excitation representation. Now what this means is you take your complicated state and you write it in terms of deviations from a simpler state which you know how to handle; this is typically a mean-field state. From that mean-field state you then add on deviations or excitations. In these spaces the excitations involve labels which label individual electrons, and so this is more of a sort of first-quantized view. Also, from a mathematical perspective, this excitation picture doesn’t really have any kind of product structure.
Okay, so how do we construct approximations in these two pictures? Well, I’ll start off with the excitation picture, which is the one most familiar in chemistry. So we’ve seen this type of diagram already in Jerrod’s talk. In the excitation picture we start with a state which is very simple, typically a mean-field state, which is made up of single-particle orbitals which are all occupied, and some unoccupied orbitals, and there’s typically a gap between them. Then we add in the deviations where the occupied orbitals are moved to the unoccupied orbitals. Okay, and of course you can parameterize any quantum state in this way, but the way to construct an approximate ansatz is to limit the number and form of these excitations in some way. This gives rise to many different theories in quantum chemistry, the most successful being coupled cluster theory, which assumes there’s a certain cluster structure to these terms.
So this is a remarkably successful picture of a lot of nature. You can do accurate coupled cluster calculations, with accuracies rivaling the best things you can measure, on systems with more than a thousand orbitals. Okay, we’ve been talking earlier about doing simulations on 50 orbitals, but if your true physical quantum state happens to be reasonably close to a mean-field state then you can simulate many, many degrees of freedom. Nonetheless it’s clear that this picture has to fail in certain situations. In particular, if we’re in a regime of nature where the typical inter-particle interaction is very large compared to this gap, that’s called the strongly interacting regime, then the mean-field becomes unstable and this excitation picture must break down.
Okay, so that’s how people construct approximations in the excitation picture. What’s the natural framework to construct approximations in the occupation number representation? Well, this involves limiting the amount of entanglement in the wave function. So first let me say a little bit about entanglement. Of course all of you know about entanglement, so I’ll be very quick. Okay, so what’s entanglement? What entanglement measures, to some degree, is how independent sub-systems are in the total system. So imagine that we have a system made of two qubits. If the two systems were in fact completely independent, then the quantum state would just factorize, okay. We then would say there’s no entanglement. Okay, now in a general case the quantum state would not factorize, but we can write it in a factored form by using a singular value decomposition. In that case the quantum state is written as a sum of components associated with the individual qubits, and the number of terms in the sum, or basically the singular values here, measures the amount of entanglement in the state.
Okay, from this perspective I’m using entanglement as an analysis tool. If I give you a quantum state I can do an SVD on it and then measure how entangled it is. But you can turn this viewpoint on its head and use it as a way to construct approximations. So for example you can say, well, why don’t I limit myself to considering only those quantum states which have a limited amount of entanglement? Formally this means that they have the mathematical form that they can be written as a sum over product terms of the qubits, and there’s a limited number of terms in the sum. Okay, so that’s the basic low entanglement approximation. Okay, so this can be generalized from two qubits to three qubits. So in the simplest case, if all these things are independent, then you say your state factorizes. But in the more general case, if I want to construct a low entanglement approximation, I say it won’t exactly factorize. But I write my quantum state as a product of terms, each associated with one qubit, and have just a small number of terms in the sum. Okay, so that’s my low entanglement state here.
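[For readers following along: the two-qubit Schmidt construction just described can be sketched in a few lines of NumPy. The state coefficients below are made-up illustrative numbers, not anything from the talk.]

```python
import numpy as np

# Hypothetical two-qubit state written as a 2x2 coefficient matrix psi[i, j].
psi = np.array([[0.8, 0.1],
                [0.1, 0.55]])
psi /= np.linalg.norm(psi)  # normalize the state

# Schmidt (singular value) decomposition: psi[i, j] = sum_a U[i, a] s[a] V[a, j].
U, s, Vt = np.linalg.svd(psi)

# The number of non-negligible singular values measures the entanglement.
# Keeping only the largest one gives the best product-state (rank-1) approximation.
chi = 1
psi_approx = (U[:, :chi] * s[:chi]) @ Vt[:chi, :]
print("singular values:", s)
print("truncation error:", np.linalg.norm(psi - psi_approx))
```

The truncation error is exactly the weight of the discarded singular values, which is the sense in which a state with a rapidly decaying Schmidt spectrum is "low entanglement".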
You notice when we write it out, it’s missing a j here. You notice when we write it out it’s kind of a little bit unwieldy, because there’s summing over all sorts of indices and so on. So it’s nice to introduce a graphical notation. In this low entanglement approximation every qubit is associated with a mathematical tensor, some object with several indices, and I can write this tensor in diagrammatic form where each of the legs represents one of the indices. Then in this low entanglement wave function I’ve just connected those tensors together, equivalent to summing over these indices in a certain way.
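[As a concrete sketch of this "connect the legs" contraction: the tensors below are random illustrative data, and four qubits are used rather than three so that the middle cut is nontrivial.]

```python
import numpy as np

# A four-qubit matrix product state: one tensor per qubit, linked by "bond"
# indices of dimension chi that get summed over.
chi = 2
rng = np.random.default_rng(0)
A1 = rng.standard_normal((2, chi))          # legs: physical i1, bond a
A2 = rng.standard_normal((chi, 2, chi))     # legs: bond a, physical i2, bond b
A3 = rng.standard_normal((chi, 2, chi))     # legs: bond b, physical i3, bond c
A4 = rng.standard_normal((chi, 2))          # legs: bond c, physical i4

# Connecting the legs = summing over the shared bond indices; this
# recovers the full 2x2x2x2 coefficient tensor psi[i1, i2, i3, i4].
psi = np.einsum('ia,ajb,bkc,cl->ijkl', A1, A2, A3, A4)

# Across the middle cut a generic state can have Schmidt rank 4, but an
# MPS with bond dimension chi is limited to rank chi -- low entanglement.
s = np.linalg.svd(psi.reshape(4, 4), compute_uv=False)
print("Schmidt rank across middle cut:", int(np.sum(s > 1e-12)))
```

The Schmidt rank across any cut of such a state is bounded by the bond dimension chi, which is the mathematical content of the diagram.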
Okay …
>>: What do the sigmas represent?
>> Garnet Chan: Those are singular values. So if you were to cut your system here …
>>: [inaudible]
>> Garnet Chan: Pardon.
>>: [inaudible]
>> Garnet Chan: Yes, you can think of this as a diagonal matrix, or also as scalars.
>>: Oh, okay.
>> Garnet Chan: So if you were to cut this here, these sigma_i’s would be the numbers that enter into the entanglement entropy. Okay, so …
>>: They’re all one so [indiscernible] composition rates?
>> Garnet Chan: So these are the Schmidt numbers if you were to decompose between the second and third qubit and the first qubit, right. These are Schmidt numbers if you [inaudible]. So, okay, so it turns out these low entanglement states have a name in quantum information theory; they’re called Matrix Product States. But in fact they have two names, because it turns out that this class of quantum states is the class of quantum states implicitly explored by a very famous algorithm in condensed matter, the density matrix renormalization group. So the density matrix renormalization group is probably one of the most powerful quantum simulation tools, especially when your quantum system has a certain geometry, a kind of one-dimensional geometry. What has quantum information brought to the study of the DMRG? Well, the traditional view of DMRG was in terms of the statistical physics language of renormalization. That really implied this linear connectivity of these tensors. But in quantum information theory, if we see the DMRG as being just a certain mathematical form, or with a certain graphical form, you can construct other connectivities of the tensors which generalize the technique to new states.
So let’s look at some of these generalizations, which are known as tensor networks. Here I take my qubit and I associate it again with a mathematical tensor, but instead of two legs I add three legs now. You can connect these now in more complicated ways: you can connect them to make a tree, or you can connect them to make a lattice, and so on and so forth. So this gives rise to many, many different families of tensor network states. In total, Matrix Product States and their generalizations have significantly advanced our ability to treat strongly interacting systems in real simulations, because you notice in their construction I never made any reference to a mean-field. Okay, I never said let’s start from a mean-field and consider deviations from it. I just built the state up directly by linking together objects associated with each individual qubit.
Okay, why are tensor network states useful in nature? So, if the success of the excitation approximations in chemistry reflects the fact that interactions are to some extent weak in many regimes, the success of tensor network states is a reflection of the fact that nature is local. Okay, so the more mathematical statement of locality is in terms of a property of the entanglement known as an area law, which is a little bit confusing because in two dimensions it’s really what we’d call a perimeter law. Okay, so what is a perimeter law? Well, let us take an arbitrary quantum system and let me cut it like this. Okay, so I’m going to have an inner region and an outer region, and you can ask the question, how much entanglement is there between the inner and outer regions? Well, we know that physical reality is relatively local, right, in the sense that if I tap something over there you don’t see something pop up on the other side of the room. So consequently you’d think that, out of the degrees of freedom in A and B, the ones which must be entangled are the ones that live only along the perimeter. Okay, so this can be turned into a more precise statement, which says that the entanglement entropy of the system scales only like the length of this boundary. This is a statement which is essentially proven in one dimension and which is believed to hold for physical systems in two dimensions. Okay, so an implication of this area law, an implication of this locality, is that the corresponding quantum states can be efficiently represented by these tensor networks, which have this very, very local entanglement arrangement. I just have these bonds which connect the tensors on neighboring qubits.
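[The entanglement entropy across a cut, as it appears in this area-law statement, is computed directly from the Schmidt spectrum. A small sketch using standard textbook states, nothing specific to the talk:]

```python
import numpy as np

def entanglement_entropy(psi, cut):
    """Von Neumann entanglement entropy of a pure qubit state across a bipartition.

    psi: flat state vector for n qubits; cut: number of qubits in region A.
    """
    n = int(np.log2(psi.size))
    # Schmidt coefficients across the cut come from an SVD of the reshaped state.
    s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    p = s**2
    p = p[p > 1e-14]
    return float(-np.sum(p * np.log(p)))

n = 4
# A product state has zero entropy across any cut.
product = np.zeros(2**n); product[0] = 1.0
# A GHZ-like state has entropy log 2 across any cut.
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(entanglement_entropy(product, 2))  # ~0
print(entanglement_entropy(ghz, 2))      # ~log 2
```

An area-law state in one dimension is one where this quantity stays bounded as the cut region grows, which is exactly the regime where a Matrix Product State with modest bond dimension suffices.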
Okay, so to summarize, the two families of quantum states that we have for efficient simulations in Hilbert space are the following. So this is Hilbert space and we have two families. We have the family of states that arise from this occupation number, quantum information picture. That’s computationally efficient because nature is simple, because entanglement is local, and for these states, as I increase the amount of computational resources I devote to a problem, this allows you to capture more and more entanglement. On the other side of Hilbert space you have the states associated with the quantum chemistry excitation picture, which essentially says nature is simple because interactions are usually weak, and here, as I increase the amount of computational resources, this allows us to extend the simulations to larger and larger values of the interaction parameter. The interesting question, of course, for quantum computing is to what extent the states reached by quantum devices, or the black boxes that Jerrod mentioned, can live in the regions outside of these areas, and that’s not entirely resolved.
Okay, so for the rest of this talk I’m going to just give you an overview of what applications of these simulation techniques really look like in quantum chemistry, in particular for difficult problems in quantum chemistry. This is an area which my group has worked on for the last decade. Okay, so here’s a very simple application. Let’s take a fake molecule made of the simplest atoms, which I hope you remember is hydrogen; hydrogen is the simplest atom, okay. I’m just going to stick all the hydrogens together on a line, so it’s, you know, very simple, and when I take this linear chain molecule, if I stretch it, you just stretch the line, you can show that that changes the gap of the system and moves it from a weakly to a strongly interacting regime. So it’s a nice test system. Okay, so formally the Hilbert space of this problem is very large, it has ten to the twenty-eight degrees of freedom, but I can ask the question, well, is the quantum state of the system well approximated by a low entanglement wave function, or Matrix Product State? Okay, so I can construct such a low entanglement wave function by writing down a tensor associated with each qubit, or each atom, and linking all the tensors together. Okay, that’s going to be my approximation.
What you see if I use this kind of state to variationally obtain the energy, well, here’s a table of numbers, but what this index shows is essentially the number of terms in this summation. Here are various bond lengths as I’m stretching, and you can see these are the energies. The energies are converged to all the digits, which really means I have, at least as far as chemistry is concerned, an exact solution of the problem. Okay, even though the Hilbert space size is very large. Why could we obtain an exact solution? Well, it’s because there’s very little long range entanglement. In particular, if I have electrons in some bond here, they are not entangled with electrons in some bond there, okay. So this is a model system, and I could show you many more things like this, but let me now show you some more realistic applications. So, I’m going to turn from the hydrogen atom, the simplest atom, to transition metals, which are very complicated atoms. Okay, transition metal atoms are very complicated because they have things called d orbitals, and these orbitals, if you take sort of a big atom, are sort of shrunken into the inside of the atom and they overlap very poorly with each other, and this gives rise to very small energy gaps in the system. Consequently, on the scale of the energy gap, the electron interaction appears to be very large. Consequently transition metal systems have giant amounts of degeneracy in their spectrum and give rise to highly flexible electronic structure. They exhibit many, many different electronic states. It is no surprise that these transition metal systems are both the most complicated systems to study in molecules as well as the most complicated systems to study in physics. So if you take any sort of exotic phenomenon, like high temperature superconductivity, it always involves a transition metal.
Okay, I’m going to focus on transition metals in biology, or bioinorganic systems. We’ve seen some of these systems already in various talks. Here are two very important reactions, okay: photosynthesis, which powers the planet, right. So in photosynthesis light is used to convert water into oxygen, actually I’ve got this the other way around, okay. And nitrogen fixation, okay, which allows animals to basically build up protein. Okay, so in both of these processes what biology does is something which appears remarkable to the chemist. So Marcus already mentioned that to carry out this reaction, what humans do today is actually the most important industrial process today, the Haber process. You have to carry it out at two hundred atmospheres of pressure and four hundred degrees Celsius, and this just occurs naturally in the root systems of legumes at ambient temperatures and ambient pressures. [indiscernible], this reaction occurs with high quantum efficiency.
So how does nature achieve this? It uses the very flexible correlated electronic structure of transition metals to carry out this remarkable chemistry. It’s one of the big unsolved mysteries of our time exactly how it’s doing this. If we knew how it’s doing this then we could try and do it ourselves. Okay, so I would say up to a few years ago trying to directly understand the quantum mechanics of these systems was considered impossible, but with the advent of very efficient algorithms that use these low entanglement approximations, we can now use tensor networks, or the density matrix renormalization group, to directly simulate the many particle quantum mechanics of these biological systems and study them at the level of their entanglement.
So I’m going to show you, give you two examples today from my work. So, I’m going to show you a study on the oxygen evolving complex in photosynthesis, and a study on something whose image is missing; somehow I don’t know what happened to this image, it looks more interesting than this I can assure you.
[laughter]
>> Garnet Chan: A study of the iron cluster of nitrogenase, which converts atmospheric nitrogen into ammonia. Okay, so, one of the reasons why it’s been so hard to study these systems, quite aside from doing the quantum mechanics, it’s been hard to study them even experimentally, is because people haven’t really been able to crystallize out the relevant proteins, and people don’t even know where the atoms are in these complexes.
>>: But that’s not true.
>> Garnet Chan: Well …
>>: For both systems there’s an x-ray structure from two years ago.
>> Garnet Chan: Yes, yes I know so I’m going to show, I mean traditionally it’s been hard, right. So now we’ve got the structure.
>>: [inaudible].
>> Garnet Chan: Okay, so …
[laughter]
>> Garnet Chan: I’m saying traditionally this has been, so my point is one of the reasons why it’s been hard traditionally to study these things is that you haven’t had a good structure. It may sound like a very simple thing to just take the x-ray structure of a protein, but it’s really, really hard. If you examine the Nobel Prizes in chemistry over the last two decades, probably half of them go to structural biologists, where you can say all they’ve done is wait for a long time until something crystallized and then put it in, okay, ha-ha.
[laughter]
>> Garnet Chan: [indiscernible] but of course it’s not true, it’s only how a theorist sees it. So the amazing thing about the oxygen evolving complex is that last year an x-ray structure at atomic resolution was achieved for the first time, by a Japanese group, okay. So people finally knew where the atoms lay. So what we decided to do was to try to study this system now, not just at the atomic resolution but at the resolution of the electrons, by carrying out a DMRG calculation. I can’t tell you everything about what we learned from this calculation, but it revealed some very interesting things. Okay, so the first thing that we noticed is that the cluster that they studied appears to be damaged by the x-rays. Okay, so the x-ray diffraction structure is believed to be in a certain state called the S1 state. That just means there are a certain number of electrons on the system. But if you study the geometric parameters, this is in disagreement with some other spectroscopy. If we actually carry out the direct simulation, we can show that at this structure the S1 state is unstable. It’s actually a different oxidation state of the cluster which is involved. Okay, the S1 state lies on a different potential energy surface. So this is not that surprising. Okay, when you stick something, a big floppy organic biological molecule, in the x-ray machine, even if you’re very careful it’s not unusual for the x-rays to knock electrons off the atoms and change the structure and change the oxidation state. But our calculations show that this is what was happening.
We also computed various entanglement indicators, and this says something about the reactivity. I mean, there isn’t a direct link between entanglement and reactivity, but you have a sort of indirect link in the form of quantum monogamy. So you know that to some extent if you have more partners you can’t be as entangled with each of them, right, that’s a monogamy theorem. So …
>>: So can I ask you what you even mean by entanglement because at the beginning of your talk you …
>> Garnet Chan: I talked about [indiscernible]…
>>: Right but you said from the quantum chemistry point of view you started from [indiscernible]off the mean-field state.
>> Garnet Chan: Yes …
>>: And …
>> Garnet Chan: That’s not what I’m doing here.
>>: Oh this is more from the quantum information …
>> Garnet Chan: Yeah, yeah.
>>: The ordinary sense of entanglement?
>> Garnet Chan: Yes, this is ordinary entanglement; I’m using here an approximate wave function structure, within the DMRG, which is in occupation number form. Okay, so here I compute, this is the quantity also which Marcus showed. I compute, I guess I don’t know the precise name of this, probably the two-orbital mutual information is the right term, okay. This maps out basically how all the different atoms are communicating with each other in the cluster. You can see that in the different structures, as the enzyme is going through its cycle, there’s one atom which stands out as changing very significantly in its entanglement. People have been wondering, what is the atom in this cluster which is responsible for doing the chemistry? Well, most likely it’s going to be this atom, okay.
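[For reference, a minimal sketch of a two-site mutual information of this kind, I(i:j) = S_i + S_j − S_ij, computed for a toy qubit state. The state and site labels are made up for illustration; the actual calculation in the talk works with orbital reduced density matrices from the DMRG wave function.]

```python
import numpy as np

def reduced_density_matrix(psi, keep, n):
    """Trace out all qubits of an n-qubit pure state except those in `keep`."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, keep, list(range(len(keep))))
    m = psi.reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def vn_entropy(rho):
    """Von Neumann entropy -Tr[rho ln rho]."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-14]
    return float(-np.sum(w * np.log(w)))

def mutual_information(psi, i, j, n):
    """Two-site mutual information I(i:j) = S_i + S_j - S_ij."""
    Si = vn_entropy(reduced_density_matrix(psi, [i], n))
    Sj = vn_entropy(reduced_density_matrix(psi, [j], n))
    Sij = vn_entropy(reduced_density_matrix(psi, [i, j], n))
    return Si + Sj - Sij

# Toy state: a Bell pair on qubits 0 and 1, with qubit 2 uncorrelated.
psi = np.zeros(8)
psi[0b000] = psi[0b110] = 1 / np.sqrt(2)
print(mutual_information(psi, 0, 1, 3))  # 2 ln 2 for the Bell pair
print(mutual_information(psi, 0, 2, 3))  # ~0 for the uncorrelated qubit
```

The thickness of the lines in a diagram like the one on the slide would correspond to the size of this pairwise quantity.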
>>: [inaudible]
>> Garnet Chan: Okay, so this is of course just a pretty picture, okay, a graph that’s flattened out in some pretty way. But on each of these atoms we have a set of atomic orbitals, and I computed the two-orbital mutual information for every pair of atoms. So …
>>: [inaudible] the mutual information …
>> Garnet Chan: Yeah.
>>: It’s defined between two orbitals and not two atoms.
>> Garnet Chan: Yeah, yeah, so every atom has many orbitals. So you notice here, manganese four, these are the five d orbitals.
>>: So you collect all the orbitals in [inaudible].
>> Garnet Chan: So they range from …
>>: [inaudible]
>> Garnet Chan: So the precise arrangement of where these things are is determined by the graph plotting software which is just trying to make the graph look understandable.
>>: [inaudible] the question at least [indiscernible].
>> Garnet Chan: Every circle is an orbital.
>>: Every circle is one …
>> Garnet Chan: An orbital, and the only meaningful information is the labels of the orbitals and the thickness of the lines. Right, the relation of the orbitals in space to each other is not important, but the thickness of the lines indicates the amount of entanglement, or the amount of mutual information, okay. So you can see that manganese five here goes from being essentially not entangled with anything to being extremely strongly entangled in the S1 state. Okay, so this is the atom which has a very big change in its quantum nature and it’s most likely responsible for the chemistry. Okay, so that’s an example from …
>>: So proximity is relatively correlated with entanglement as well?
>>: Proximity he said it means nothing.
>> Garnet Chan: Proximity in the graph is not very important, but that’s determined simply by, have you ever used things like Graphviz? I mean you just …
>>: [inaudible] looking, you know, how do you look at the manganese and say that it’s not entangled
[indiscernible]?
>> Garnet Chan: Oh, so if you look at this here you see that none of the lines are very thick, whereas if you look at this manganese oxygen, it has a giant thick line around it, so thick it’s enveloping both.
>>: Okay, but it’s not entirely clear to me just looking at the left that it doesn’t have a lot of lines connected to a lot of different …
>> Garnet Chan: Okay, so yes, yes, but …
>>: [inaudible] presumably adds up.
>> Garnet Chan: Yes, yes but, so these are already small numbers and this is a giant number. I mean …
>>: I’m sorry the structure also changes?
>> Garnet Chan: Yes the structure changes, yeah.
>>: [indiscernible] structure [indiscernible]?
>> Garnet Chan: No …
>>: [inaudible] …
>> Garnet Chan: I mean it’s not …
>>: [inaudible] structures?
>> Garnet Chan: So for example in this graph here, so this is one structure this is the structure, the x-ray structure and then this is a relaxed structure which is in the S1 state, in the S1 oxidation state.
>>: [inaudible] entangled?
>> Garnet Chan: It’s manganese five; it’s the one that is on the edge here. It’s on the edge of the cluster, this one here, this one here, or this one here.
>>: See in the left figure there’s only [inaudible]?
>> Garnet Chan: Oh, we call calcium one so …
>>: Oh.
>> Garnet Chan: Sorry.
[laughter]
>> Garnet Chan: Sorry, yeah. [indiscernible] that’s the experimental thing I’ve …
>>: [indiscernible]
>> Garnet Chan: Ha-ha …
[laughter]
>> Garnet Chan: Okay …
>>: So sorry, so the thickness of the lines …
>> Garnet Chan: Yeah.
>>: In these pictures, is it the singular values [inaudible]?
>> Garnet Chan: So actually not; this is a reduced quantity, so you can get this from the two-particle reduced …
>>: [inaudible]
>> Garnet Chan: Density matrix.
>>: [inaudible]
>> Garnet Chan: Okay, so this suggests that this is where the water is binding, okay. On to another system, okay, on to the nitrogenase …
[laughter]
>>: So, but you don’t feel like, so the manganese to [indiscernible] oxygen [indiscernible]?
>>: [inaudible] change.
>>: I mean that …
>> Garnet Chan: Yeah, yeah, I mean this also changes a lot. But manganese five basically goes from not communicating with anything to being sort of deeply involved in the system.
This is by no means a definitive proof. I mean, of course, what you do as a theorist is we calculate these things at two different structures. We look at this entanglement and we go, what can we say about it? Ha-ha …
[laughter]
>>: [inaudible]
>> Garnet Chan: Okay, and it looks like this is an important, there seems to be a big change.
>>: But then it should be, you look at what happens when water is really binding.
>> Garnet Chan: Yes, of course we should do more calculations, right. I mean of course we, ha-ha …
[laughter]
>> Garnet Chan: We should study the whole cycle but …
>>: [inaudible] DFT calculations [indiscernible]?
>> Garnet Chan: Yeah.
>>: Is that also what he sees?
>> Garnet Chan: Hilbert’s calculations also implicate this.
>>: Sorry the statement was you think that’s [indiscernible] because that’s what you’re seeing as having the biggest change?
>> Garnet Chan: Yes, of course, so there’s a lot of context there. Okay, why water, why not methane or something like that? We know that during these geometrical changes, at one stage of the geometrical change, what happens biologically is that water comes in and binds onto the cluster. So you would think that what is happening in nature is that the cluster has changed so that one of the atoms becomes available to bind water. We’re hoping to see some echo of that in the entanglement, or in the wave function of the cluster at the different structures. So this is the basis for saying that this is a possible water binding site that seems to be doing something. It’s an indirect measure; the more direct way is to do the next calculation, which is to study the system with water attached and all these other things, and that’s of course what we’re going to do next.
>>: Is there any, I mean is there any intuition specifically as to why having an increase in entanglement with some other thing [indiscernible] would make it a more likely [inaudible]?
>> Garnet Chan: No, no, so, I mean this is why I just say there’s a greater change in entanglement. All this is saying is that the electronic surroundings of this atom are changing a lot during the cycle. Perhaps it’s not being used for anything but, you know, it’s the kind of thing where, if you see smoke, then perhaps something is going on. In this calculation there’s no direct way to say that water is binding there, because there is no water in the calculation.
>>: Are the ions moving in this transition, or is it just that the electronic state is different?
>> Garnet Chan: The ions are moving a bit. They’re not actually moving very much. It’s actually, you know, they just move a tiny amount. If you were to see the two structures you wouldn’t see any difference between them …
>>: Right.
>> Garnet Chan: But you, but the [inaudible] changes a lot.
>>: I see, so that’s like a better measure in your mind than just looking at which part of the molecule changes shape locally the most …
>> Garnet Chan: It’s a better measure for me. I mean the people who have studied the system forever, of course, they’re sensitive to minute changes …
>>: [inaudible]
>> Garnet Chan: In the system and …
>>: I mean maybe for [indiscernible] direct [indiscernible] taken into correlation?
>> Garnet Chan: Yes.
>>: In correlation, you’re following one, so that’s why you see the changes.
>> Garnet Chan: Yeah, so generally you see these thick lines happening when there’s, well, this indicates some spin pairing of the manganese and the oxygen, and there’s a change of the spin state of the manganese like that. Then this might be preparing it to be chemically active. Alright, honestly, we were just happy we could compute the quantum state for this thing, okay.
Alright, now next onto the iron sulfur cluster where there’s a more definitive result.
>>: You are not very lucky with iron sulfur systems.
>> Garnet Chan: No …
[laughter]
>> Garnet Chan: No, I mean this is kind of bad, we can’t see anything. So I think this is a Mac to PC conversion.
>>: You had to [inaudible] if you want to find out …
>>: [inaudible]
>> Garnet Chan: It’s okay, it’s okay, so just imagine that there’s some complex system and embedded in it are four iron atoms. Okay, so this is a very important system. You have all these different iron atoms which have spins. You can think of them as gigantic qubits; they have spin [indiscernible]. They form many, many magnetic levels. The way in which people have understood the magnetic levels in this system has been very indirect: you do an experimental measurement of something like the susceptibility, and this gives you some sort of vague, featureless curve.
Then you fit the susceptibility to an underlying model Hamiltonian, which you guess, to back out the magnetic levels. So this is a system which we can now just directly simulate with the DMRG and calculate these levels better. These are all the singlets that we get out, and they won’t mean very much to you. But the interesting thing we found when we calculated the singlet states is that we didn’t obtain the same number of states, which is a very qualitative thing, as what previous pictures derived from experimental data had said. Now, usually when a theory disagrees with experiment, most of the time the theory is wrong; in theoretical chemistry most of the time the experiment is right. But it turned out in this case, when we looked at it, there was something wrong with the experiment. It turns out the Hamiltonian they were using to back out the indirect data was incorrect, and to explain this is actually very, very simple. We have four irons; they get paired into singlets, and with four irons there are three ways of pairing them. According to the experimental picture, each of these three singlet pairings then gives rise to a ladder of excitations.
But it turns out that when we went through and redid the derivation that people had carried out before, but more carefully, you see that those excitations are not linearly independent. So the experimental picture was just overcounting states, okay. And if you look at who did the derivation, it was the experimentalists. So all we learned is: never trust experimentalists to do a theorist’s job.
[laughter]
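The linear-dependence point above can be checked directly in a few lines. This is a minimal sketch using spin-1/2 sites as a stand-in for the actual iron spins (which carry higher spin): the three ways of pairing four spins into two singlets span only the two-dimensional total-singlet subspace, so they cannot be independent.

```python
import numpy as np
from itertools import product

# Singlet on two spin-1/2 sites: (|01> - |10>)/sqrt(2)
s = np.zeros((2, 2))
s[0, 1], s[1, 0] = 1 / np.sqrt(2), -1 / np.sqrt(2)

def pairing_state(pairs):
    """Product of singlets on the given pairs of the 4 sites, as a 16-vector."""
    psi = np.zeros((2, 2, 2, 2))
    for idx in product(range(2), repeat=4):
        amp = 1.0
        for (a, b) in pairs:
            amp *= s[idx[a], idx[b]]
        psi[idx] = amp
    return psi.reshape(16)

# The three ways of pairing four spins into two singlets
states = [pairing_state(p) for p in
          ([(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)])]

rank = np.linalg.matrix_rank(np.column_stack(states))
print(rank)  # 2: the three pairing states are linearly dependent
```

The rank is 2, not 3, which is exactly why counting one excitation ladder per pairing overcounts the states.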
>> Garnet Chan: Okay, alright, so the previous calculations were using the DMRG, or matrix product states, but more general tensor networks are also promising. For example, we showed that a generalization called correlator product states could be used to describe magnetism in these gigantic magnetic molecules here, with lots and lots of irons. And when you draw these tree tensor networks, they look like they should just naturally fit onto molecules like these dendrimers, which are very interesting for their optical properties. So, I won’t talk more about this, I should just move on; there are other applications of quantum information ideas. For example, quantum information has, starting in our group, significantly influenced how we think about the idea of quantum embedding. This is the question: if I take a very complicated system and I just cut out a small chunk, how do I capture the effect of the environment on this small subsystem? Now, there’s a traditional physics way to think about this in terms of Green’s functions and self-energies. But from a quantum information perspective you say, well, all you have to do to represent the environment is understand the entanglement between the two. Okay, and this gives rise to a new embedding formulation. Also, as I mentioned, we’re interested in time-dependent simulations, and quantum information can now inform them too. Just to show you very quickly the final slide: this is just to show you that the embeddings based on entanglement actually outperform traditional techniques. Again, this is a chemical system where we compare against the standard traditional theory technique, [indiscernible] theory, and the entanglement embedding does much, much better.
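The entanglement view of embedding can be made concrete with a Schmidt decomposition. The following is a toy sketch (a random state stands in for a real ground state): for any pure state, a fragment of Hilbert-space dimension d couples to the environment through at most d bath states, however large the environment is, which is the idea behind entanglement-based embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d_frag, d_env = 4, 256            # small fragment, much larger environment
psi = rng.standard_normal((d_frag, d_env))
psi /= np.linalg.norm(psi)        # normalized state as a fragment-x-env matrix

# Schmidt decomposition across the fragment/environment cut
u, sv, vh = np.linalg.svd(psi, full_matrices=False)

# At most d_frag singular values exist: the environment acts on the fragment
# through only d_frag "bath" states, no matter how big it is.
print(len(sv))                    # 4
bath = vh[:len(sv)]               # the relevant environment (bath) states

# Projecting onto the fragment (x) bath space reproduces the state exactly
psi_emb = (u * sv) @ bath
print(np.allclose(psi_emb, psi))  # True
```

For a real correlated wavefunction the singular values decay, so truncating the bath further is what makes the embedding cheap.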
So to conclude, quantum information really has a natural intersection with quantum chemistry, and I hope I’ve told you about a few of the things that excite me. Entanglement theory and tensor networks have already impacted real chemistry, real systems. It’s not just at the level of model molecules; it’s really things at the frontiers of understanding. And ultimately, though I haven’t really said anything about quantum computing, I think that understanding the limits of classical simulation is essential to understanding the limits of quantum simulation.
So that’s that.
[applause]
>> Michael Freedman: So, do we have time for a few questions for Garnet? [indiscernible]
>> Garnet Chan: Marcus, yeah.
>> Marcus: [inaudible] maybe you, what’s a common, say, given the [indiscernible] of, say, DMRG, what would you say quantum computing needs to achieve in order to be competitive with the methods we know, in terms of [indiscernible]?
>> Garnet Chan: I really think that for low energy prop, okay, so my personal feeling is that for low energy properties of materials and things like ground states, we have so many indications that physical ground states are really simple. I mean, except for perhaps some extremely exotic thing constructed in some weird low temperature laboratory with some, you know, peculiar topological properties, almost anything that you actually care about in chemistry isn’t really that complicated, and for that reason I can’t see the features that people usually tout as being important for quantum computing as impacting the study of these types of states. Now, in the case of the time evolution of quantum systems, there we don’t have similar results to tell us that if we track the time evolution of a quantum state it stays simple. In fact, we have results which show the opposite: arbitrary time-evolved quantum states are complex, and that’s a regime where quantum computers are basically known to work well. Okay, so I think that for quantum dynamics there’s a lot of room, but for a lot of the problems of quantum chemistry, which are not dynamics, my view is I would stick with improving the classical computer. But that’s my perspective.
>> Michael Freedman: Are there other questions or comments? Before we thank you, let me ask you a question. Could you turn your last comment around and use your knowledge of ground states, and how generically they’re rather simple to analyze classically, or not that simple, but could you conceive of an example where the classical techniques would break down rather quickly? In other words, could you imagine, you know, some molecule or material where even to get at the ground state you would need quantum methods?
>> Garnet Chan: Yeah, so there are definitely systems which remain hard. Probably the most difficult thing in classical simulation, I would say, is that though representing the quantum state can be simple, nonetheless finding it can be hard. We often overlook this point: even if we know we can represent the state well, if the cooling surface, so to speak, is very glassy, then it may be very hard to get there. So of course the classical computer has problems with this, but the quantum computer may also have problems with that. But it might be that the quantum computer has different problems, and better ones. I promise Matthias has some views on this question as well. You can definitely make states which classical computers have problems with, and then I think it’s still an open question whether quantum computers can do them.
>> Michael Freedman: Yes.
>>: [inaudible] example of course that’s a one dimensional [indiscernible] simple geometric structure.
>> Garnet Chan: Yes.
>>: Can you make a general comment about the role of geometry in success of these [inaudible].
>> Garnet Chan: Yeah, so I mean everything people have used to build simple entanglement structures into quantum states has induced a geometry on the problem. It would be very nice to relieve this constraint. Every time I see Duncan Haldane he says, you know, oh, but can you get rid of the ladders or something. And I don’t know how, okay, so I don’t know if it’s a fundamental feature or not. Currently I don’t think there’s any …
>>: [inaudible] some things [indiscernible] get rid of geometry in …
>> Garnet Chan: Well, yes …
>>: We’ve done some of that for quantum chemistry but …
>> Garnet Chan: Yeah so the …
>>: [indiscernible] other problems.
>> Garnet Chan: Yeah, there are other problems. So like the giant magnet, that was also using a state in which basically everything is connected to everything else. They don’t seem to work as well as the ones which have [inaudible].
>> Michael Freedman: Well if there are not further questions why don’t we thank Garnet again.
[applause]
>> Michael Freedman: The next talk is from a gentleman you’ve seen before, ha-ha, requires no introduction.
>> Matthias Troyer: Thank you. Okay, so I want to now go away from chemistry, a bit more towards physics [indiscernible] people to physics, but they all had meetings of academies or ambassadors in Washington or somewhere. So now, unfortunately, I have to talk again and try to keep in time so that we can have a nice dinner soon. What we are interested in are materials with [indiscernible] properties that come from strong correlations, the most famous example being the superconductors.
Superconductivity was first found in 1911; then it took nearly forty-six years until it was first explained, and then the big surprise came that, okay, so the [indiscernible] definitely was about [indiscernible]. Then suddenly in 1986 it started jumping up. It jumped up to one hundred and forty [indiscernible] and has remained there ever since. The question is, when it jumped so high, why can’t it just jump another factor of two to [indiscernible] room temperature, and we’d have power lines that are [indiscernible], trains that just glide [indiscernible]. But we haven’t found it yet, so we might want to think about a quantum computer to give us a room temperature superconductor.
Why a quantum computer? We can’t simulate these materials with DFT; it just gets qualitatively wrong states. If you look at the [indiscernible] material and you look at the band structure, you see a beautiful band, the [indiscernible] crossing the [indiscernible] surface, so it’s a metal. If you build it, if you measure it, it’s an insulator. It’s just qualitatively wrong. So one has to go beyond [indiscernible] and try to solve the full quantum problem. What people did in physics first is: we like to simplify. We like effective models that capture the relevant physics. So, simplifying from the material, one sees there are planes of [indiscernible] oxide; one looks at them, one goes further down and finds out the [indiscernible] is the 3D [indiscernible] orbital, so we make a simplified model of just those orbitals. We throw away the rest. The only things we keep are the hybridization between the [indiscernible] neighbors and the Coulomb [indiscernible]; we keep only the local U terms. This is a simple model, the Hubbard model. As long as we don’t understand the Hubbard model, there’s no reason why we should understand the full material.
I will come back to materials later. But even solving that problem is exponentially hard. My master’s thesis more than two decades ago was that I should solve the [indiscernible], and after [indiscernible] I realized, hey, there’s a sign problem. So I haven’t solved my master’s thesis problem yet; maybe that’s [indiscernible], ha-ha. So what happens here is the Mott transition.
If we have a non-interacting system, then we have a band that’s half [indiscernible] metal. As we increase the local interaction, as we make it bigger, and bigger, and bigger, the band starts splitting into two subbands. What goes on here is that if the on-site [indiscernible] U is very large, the electrons don’t want to move, just because we don’t want to have two electrons on the same site, because of the Coulomb [indiscernible]. It becomes insulating; in the [indiscernible] case the band starts splitting into two subbands. This is just not captured by DFT.
When you have materials close to a Mott transition, important things happen. Here we’re looking at the lanthanides, and we’re looking at the volume as a function of pressure, and especially when you look at [indiscernible] it has a huge change in volume at the [indiscernible]; this is just the Mott transition. So a small change in the structure can give you a big change in the volume. It’s even worse in the actinides, but I don’t want to focus on them for now.
>>: [inaudible]?
>> Matthias Troyer: I don’t know the temperature in this case. I know the temperature for the, so for, it’s a bit above room temperature.
>>: [inaudible]?
>> Matthias Troyer: Yes, [indiscernible] it only emits a few hundred degrees.
>>: [inaudible] isn’t this one of the issues with the maintenance of the new [indiscernible] the base transition and the [indiscernible] structure …
>> Matthias Troyer: I think it’s pretty well known experimentally but probably classified so I don’t think you have to understand it to keep it safe. You just have to measure [indiscernible] to observe it but yes one would like to …
>>: [indiscernible] accessible because …
>> Matthias Troyer: [indiscernible] understand that. What makes it interesting there is that the most Mott-like material is plutonium; it’s right there. But we don’t want to talk about that because we’re not interested in …
[laughter]
>> Matthias Troyer: Instead of plutonium, we’re interested in things that you can talk about openly, that are not classified. So let’s look at real challenges. Superconductors: I want to have a room temperature superconductor, to model the superconducting properties of the material. These are tiny energy scales; these are milli-electron-volt scales or less. We want to study the multiferroics, the [indiscernible] here, the materials with switchable electric, magnetic, elastic properties. Topological materials with unusual band structure, topological insulators, to build our quantum bits maybe. We want to have dyes, pigments, non-toxic dyes with the right colors; there we really want an accurate description of transitions between localized orbitals. We want to have non-rare-earth-based permanent magnets so that we are less dependent on China.
[laughter]
>> Matthias Troyer: So there’s a lot to be done if we can accurately describe such materials. But it’s hard on a classical machine; it’s hard on a quantum machine. So we want to do some hybrid classical-quantum method: either look at a simplified model like the Hubbard model and solve that, or mix the methods used for the Hubbard model with DFT to make a self-consistent DFT plus strong correlation solution, and I’ll talk about that. So first, how can we solve these things on a quantum machine? For the Hubbard model, quantum simulators have been built, they’ve been tested, validated, they work. That was one of the runners-up for the Breakthrough of the Year in Science two years ago. So what is the idea here? We can by now control single quantum systems accurately: single atoms and ions, photons, quantum dots. So we can just build a quantum model; we can essentially build a Hubbard model, for example, and then measure on it.
This brings me to the classification of computing devices. The first computing devices were analog. The Greeks had the first known one, the Antikythera mechanism, found in an old Greek shipwreck; it calculated the motion of planets. This was special purpose analog. Nowadays we have programmable digital machines. What we are going to now is the analog quantum machine, the quantum simulator: we want to build the Hubbard model. And then the next step will be the quantum computer running Windows Q, that’s my code name for it. So, but let’s first stay with the quantum simulator. The first example of what we do there was Bose-Einstein condensation. We take an atomic gas and we cool it to nano-Kelvin temperatures; it’s in a [indiscernible] stable gas phase, and at those temperatures these are [indiscernible]. They all condense into the same quantum state in a trap. When you look at the momentum distribution function as you cool it, you see a sharp peak appearing at zero. These are all those atoms in the same quantum state, at rest; that’s the Bose condensate. All of the atoms enter the same quantum state. That was first seen in the year 1995 and gave the Nobel Prize to [indiscernible].
Next we can take this and add optical lattices from lasers on top of it. Then you have bosonic [indiscernible] atoms in the lattice. You can build it; it takes a huge optical table, lots of lasers, [indiscernible], lenses, mirrors, and so on. It takes about three years to really calibrate it and get it controlled, and it’s a couple million dollars to build. Then you have it, and you can take pictures; for example, in the quantum gas microscope of the [indiscernible] you see here the individual atoms in the lattice.
One more thing I want to show is how we can measure properties there, especially how people measure the momentum distribution function. There’s a nice movie by the group of [indiscernible].
It’s very simple: you start with your cloud of atoms in the trap. Now you just release the trap and the atoms fall down. As they fall down the cloud expands, and the faster atoms fly farther, so after a while the distance from the center is proportional to the velocity, or the momentum. So when you then take a picture, you get a picture of the momentum distribution function. Strictly speaking you have to take into account that [indiscernible] short times only, but then this works. This can be done for bosons; this can be done for fermions. Before doing it for fermions, the challenge by Dopper was to validate that it works for bosons. We checked: we got from the experimentalists the parameters, which atoms they used, the trap, the particle number, the entropy, and we calculated what they should see. The point was that the experiment should better reproduce what we simulate, because the experiment is supposed to be an analog quantum computer for the model, so it had better reproduce the model. And here’s the comparison: in the lattice at high temperatures there’s just a high peak in the center; when you go to low temperatures, more peaks appear at [indiscernible] of two pi, which are the condensate peaks, and the [indiscernible] transition. The experiment is noisier than the simulation, but it works and gives quite good [indiscernible]. The bosons, without the gauge [indiscernible], we can solve; very easy to solve.
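The time-of-flight idea above can be sketched in a few lines: after release each atom moves ballistically, x(t) = x0 + (p/m)t, so for long expansion times the position picture is just a scaled momentum picture. A minimal 1D sketch in arbitrary units (classical ballistic expansion only; the real analysis also corrects for the finite initial cloud size):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
m = 1.0
x0 = rng.normal(0.0, 1.0, n)      # initial positions in the trap
p = rng.normal(0.0, 5.0, n)       # initial momenta

t = 50.0                          # expansion time after releasing the trap
x = x0 + (p / m) * t              # ballistic flight: faster atoms fly farther

p_inferred = m * x / t            # read the position picture as momentum
# The error per atom is x0/t, so it shrinks as the expansion time grows:
print(np.max(np.abs(p_inferred - p)))
```

The maximum error is bounded by max|x0|/t, which is why the picture is only a faithful momentum distribution when the cloud has expanded well beyond its initial size.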
Now we go to the hard problem: high temperature superconductivity. It’s hard on a classical computer. It’s easier on a quantum machine: we just change the isotope. We add one more neutron and the boson turns into a fermion. Essentially we use a similar setup, and then one sees the Fermi surface in the [indiscernible] distribution function. But can we solve the problem of room temperature superconductivity with that? The answer is no, because cooling is a real challenge. We would have to cool it to tiny, tiny, tiny energies, to [indiscernible] and lower. Essentially they just cannot at the moment go low enough, and even if they could, they can do the Hubbard model but not the real full material. So one can make progress here, but so far the classical simulation methods keep up. So, it’s hard doing that on a quantum simulator, so I prefer to have a programmable quantum computer.
Here the scaling is much better than the worst case I gave in the morning, because first of all, for the one-body term we have the hopping, and the hopping is local on this lattice. In this case there’s a way, if we use twice as many qubits, to do it in constant time; it doesn’t scale with N. The interaction is local: I have just N terms instead of N to the four terms, it’s just the Hubbard U, and all terms can be parallelized. The hoppings on the odd bonds in one direction, the even bonds in that direction, the same in the other direction, and all of the Hubbard U’s: just five steps, and those five things can each be done in parallel. So the scaling becomes constant instead of N to the four log N. For the full Coulomb problem, that was my ten to the nine times N to the four times a log N, maybe, and more factors. Here I might [indiscernible] them, so there’s a five here which I turn into a factor of ten. Then the other advantage is I’m looking only at the correlated band and not the total energy. So I look at a much smaller energy range, but I still have to be careful about what [indiscernible] I want; I might want a fixed energy accuracy, but the total energy grows with N, so whatever accuracy I want should shrink with one over N. So I get a scaling of about ten to the five times N instead of ten to the nine times N to the four. This now looks much more realistic.
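The five-step grouping can be sketched explicitly: on an L-by-L square lattice, the Hubbard terms split into even and odd hopping bonds in each of the two directions, plus all on-site U terms. Within each group the terms act on disjoint sites, so each group is one parallel layer. A small sketch with open boundaries (this only illustrates the grouping, not the quantum circuit itself):

```python
# Group Hubbard-model terms on an L x L open lattice into layers whose terms
# act on disjoint sites, so each layer can run in parallel: one Trotter step
# then costs 5 layers regardless of the lattice size N = L*L.
def hubbard_layers(L):
    layers = {"x_even": [], "x_odd": [], "y_even": [], "y_odd": [], "U": []}
    for x in range(L):
        for y in range(L):
            site = (x, y)
            layers["U"].append((site,))            # on-site U term
            if x + 1 < L:                          # hopping bond in x
                key = "x_even" if x % 2 == 0 else "x_odd"
                layers[key].append((site, (x + 1, y)))
            if y + 1 < L:                          # hopping bond in y
                key = "y_even" if y % 2 == 0 else "y_odd"
                layers[key].append((site, (x, y + 1)))
    return layers

layers = hubbard_layers(4)
print(len(layers))                                  # 5 layers per Trotter step
# Within each layer no site appears twice, so all its terms can be
# executed simultaneously:
for terms in layers.values():
    sites = [s for term in terms for s in term]
    assert len(sites) == len(set(sites))
```

Because the layer count is independent of L, the depth per Trotter step is constant, which is the point of the improved scaling.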
So here we can do big systems now, fifty by fifty, and we need such big systems here because it’s really strongly correlated. So we have [indiscernible] work with Brian Clark, Dave Wecker, and Chetan Nayak about how we can solve the Hubbard model on a quantum computer.
The basic idea: we take an idea that we had for the quantum simulator and move it to a quantum computer. We take single plaquettes that we build, then we couple them [indiscernible] to build up the ground state of the Hubbard model, and then we measure what we want to measure: spin correlation functions, spectral functions, spectral gaps, response functions, superconducting pairing. All systems of up to eight sites got implemented and coded and run on LIQUiD by Dave Wecker [indiscernible]. That way we [indiscernible] check how the scaling goes, how slowly we have to couple the [indiscernible] so that it works and the state is prepared, and what the cost is for the measurements.
Then we can do anything; the Hubbard model should be solvable. Now, the Hubbard model has been called a spherical cow: consider a spherical cow of radius R. It’s a very useful approximation for a physicist, but it’s not relevant to any material. If you talk to a physicist, they think it’s great. If you talk to a materials scientist, well, I’ve asked one and she said, okay, if you solved the Hubbard model I’d be totally bored because it’s not relevant to anything. Now, in the meantime this has changed, because I’ve challenged her to build me a Hubbard model material. But the methods we have learned for solving the Hubbard model can be applied to materials, especially through something called Dynamical Mean Field Theory, which is quite successful for describing materials near the [indiscernible] where the physics is local. This is basically a mean field theory on a quantum level, so let me quickly review mean field theory for spins. You take a spin in the lattice; you look at just one spin and the rest you replace by an effective mean field; then you solve that self-consistently, so that the mean [indiscernible] of the spin is the same as the mean field coupling to it. In DMFT you do the same for a quantum model, but now it’s not static: the [indiscernible] can hop. You start from a lattice model, you pick out a single site or a cluster of atoms, and all the rest around it you replace by a self-consistent bath, and you let the atoms hop from the site to the bath and back. That way you simplify the model, and as long as the physics is local, like in the Mott transition, where it essentially turns from an itinerant problem to a localized one, this is a pretty accurate method.
It can be improved by using not just one site but two, three, four, or five sites for the impurity. So what is the problem here? I take my lattice model and I take the bath around it; the bath is one of non-interacting electrons with a [indiscernible] function, and I get a site coupled to a bath, so I have to solve a single site coupled to a bath. Once I solve that, I measure the Green’s function on the site, and then I look for the lattice model that has the same local Green’s function. So it’s a mean field theory where the self-consistency is in the local Green’s function. The expensive part is still solving that quantum impurity problem, but it’s now a much smaller problem than solving the full lattice model.
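The self-consistency loop just described can be sketched for the simplest case, the one-band Hubbard model on the Bethe lattice, where the bath update is just Delta(iw) = t^2 G_loc(iw). The impurity "solver" below is the crude atomic (Hubbard-I-like) self-energy at particle-hole symmetry, standing in for a real quantum Monte Carlo solver; everything else is the generic DMFT loop.

```python
import numpy as np

# One-band Hubbard model on the Bethe lattice at particle-hole symmetry.
# DMFT self-consistency: hybridization Delta(iw) = t^2 * G_loc(iw).
beta, t, U = 20.0, 0.5, 2.0
wn = (2 * np.arange(512) + 1) * np.pi / beta      # Matsubara frequencies
iw = 1j * wn

g = 1.0 / iw                                      # initial guess for G_loc
for _ in range(200):
    delta = t**2 * g                              # bath from the lattice
    sigma = U**2 / (4 * iw)                       # toy impurity-solver step
    g_new = 1.0 / (iw - delta - sigma)            # impurity Green's function
    if np.max(np.abs(g_new - g)) < 1e-10:
        break
    g = 0.5 * g + 0.5 * g_new                     # mix for stable convergence

# At convergence the local Green's function reproduces itself:
residual = g - 1.0 / (iw - t**2 * g - sigma)
print(np.max(np.abs(residual)))                   # essentially zero
```

In a real calculation the cheap `sigma` line is replaced by the expensive impurity solve, which is exactly the part one might hand to a quantum computer.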
How do we solve that? On classical machines we made a speedup of [indiscernible] thousand, two hundred thousand over the last few years by going to newer quantum Monte Carlo methods that work in continuous time and are quite fast. There we make [indiscernible] that allow us to go about a thousand times faster, effectively. Where before we needed a supercomputer to solve a single-orbital DMFT, that now takes just minutes on a laptop, and on a supercomputer we can do five to seven orbitals with the full Coulomb interaction, and that way we could look at [indiscernible] materials like cerium, plutonium [indiscernible] actinides. So we can do up to about five to seven orbitals. Yeah.
>>: [inaudible] do you mean [indiscernible] terms …
>>: Yes, yes Robert, yes.
>> Matthias Troyer: There’s no sign problem because we go into the [indiscernible] basis of the localized cluster. So …
>>: Work the, okay …
>> Matthias Troyer: So we work in the [indiscernible] basis of the …
>>: [inaudible]
>> Matthias Troyer: Of these [indiscernible] orbitals.
>>: You’re expanding in the coupling, not in U but in t.
>> Matthias Troyer: We’re expanding in the hopping to the path and that’s the key here then we can do it.
>>: Okay.
>> Matthias Troyer: But it means, because we have to use the [indiscernible] basis of those correlated [indiscernible] orbitals, that the scaling is exponential in the size; that’s why [indiscernible] five typically, maybe seven if we make some [indiscernible]. That’s our basis in which we do the Monte Carlo then.
So this one can do for simplified models, but one can also do it for materials. What we do there is we start from the DFT calculation. We pick out a certain set of local Wannier functions from the bands which model the f or the d orbitals, and we pick those that we want to model in a fully correlated way. We add in the full microscopic Coulomb interaction, the screened one, which makes it a bit harder. But then the problem is we have already counted part of the Coulomb interaction, because we already have it in the DFT. So we have to subtract a double counting correction, because we have already treated the Coulomb term in DFT, and that’s a problem [indiscernible]. Because it’s not a pure f state, it’s the f mixed [indiscernible] with an s or p, it gets hard to separate it really well.
But one can try it. Then one sees here, for example, the dispersion [indiscernible]; we go from [indiscernible] to [indiscernible]. As one goes here, one increases the ratio of the Coulomb [indiscernible], the local U, over the bandwidth, and at some point a gap opens up at the [indiscernible] surface and it becomes the Mott insulator. LDA doesn’t see that; it sees a [indiscernible] metal. In LDA [indiscernible] DMFT you see the Mott gap opening.
>>: [inaudible] experiment or?
>> Matthias Troyer: Experiment, I have a slide that shows some things from experiment. This is a
[indiscernible] which you can show and it’s, have to unhide this …
>>: I didn’t mean to interrupt.
>> Matthias Troyer: No, you’re fine. So here’s some data from experiment for the yttrium titanate: that’s the [indiscernible] spectrum, and you see there’s a gap here, there’s a peak at around minus one point three electron volts. If you go here, there’s a peak around minus one point one. So it’s roughly in the right place, and for the [indiscernible] to validate, that’s still a metal. So it looks qualitatively okay, quantitatively semi-okay …
>>: How did you obtain the J and U parameters?
>> Matthias Troyer: That's a big problem here. So that's one of the problems: you have to find out what is the screened local Coulomb interaction that you should use for the model.
>>: [inaudible] I mean, I'm not saying that this is what happened in this calculation, but certainly in some calculations the parameters are not independent of the experiment, right?
>> Matthias Troyer: Oh, yes, what you do is you have the double counting and you just fudge things until it fits.
>>: Yeah, right, right.
>> Matthias Troyer: If it doesn’t work out you don’t publish it.
[laughter]
>> Matthias Troyer: That’s why I am not doing this I’m inventing methods for them to use.
[laughter]
>> Matthias Troyer: I've written one or two papers there, just to show that it works, but then withdrew again. So I said no. Yes?
>>: [inaudible]
>> Matthias Troyer: The big problem is: what is U? How do you get the Hubbard U? Because it's not the bare Coulomb interaction, it's a screened one. It's screened by the other [indiscernible], yeah. The [indiscernible] is screened, so that's a big problem: how do you get reliable values of U and J.
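The screening problem for U can be caricatured by the scalar RPA-style formula W = v / (1 - P·v). The function name and the numbers below are purely illustrative assumptions, not taken from any actual constrained-RPA calculation.

```python
def screened_U(v_bare, polarization):
    """Scalar caricature of RPA screening, W = v / (1 - P*v).
    In constrained-RPA calculations of the Hubbard U one excludes the
    correlated subspace's own screening channels from the polarization P;
    here P is just a number (negative for ordinary static screening)."""
    return v_bare / (1.0 - polarization * v_bare)

# A bare on-site Coulomb repulsion of ~20 eV screened down to a few eV,
# the order of magnitude typical for a model Hubbard U (illustrative):
U_model = screened_U(20.0, -0.2)
```

The caricature already shows why U is so hard to pin down: small changes in which screening processes are included in P change the resulting U by electron-volts.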
Let me show you another slide then, which is just the next one that I wanted to show. There was a calculation done with a simplified model; this was done ten years ago, before we could do the full Coulomb interaction with the new methods [indiscernible] invented. At that time the Kotliar group looked at plutonium with just a Hubbard-type model, just the U term. The problem in plutonium is that in DFT, in the GGA, the volume comes out thirty percent too small. It's just totally off in the volume of the unit cell of the [indiscernible]. As you add U, and as you change the U, the minimum starts shifting, and then it jumps up, and this [indiscernible] normalized [indiscernible]; at a U of about four it starts agreeing. When you change from the [indiscernible] FCC to the [indiscernible], it will then be [indiscernible] the lowest. So that way you can start getting at least qualitatively right answers for these [indiscernible] materials.
But there are still challenges, because we're limited to about five to seven orbitals. We have the double counting correction, and it's hard to get the U and J parameters, the [indiscernible] Coulomb and the [indiscernible] exchange. And the sensitivity to the double counting also depends on how we choose our correlated orbitals, because the band that I showed is not just purely F; it's F hybridized with S and P. So where can a quantum computer help? Well, if we can go to about fifty instead of just seven orbitals, we can avoid most of the double counting problem by including the S and P bands as well, and then it's much less severe. We can include a few more bands, so the screening problem becomes less severe, and the U is then more microscopic.
We can go to bigger unit cells, like the multi-layer [indiscernible] superconductors, or the alpha phase of the [indiscernible], which is the interesting one at room temperature, where there are four atoms in the unit cell, so I need at least twenty-eight orbitals just to have the F orbitals modeled, and I can make it more accurate by including more unit cells. For the Hubbard model we've shown that if you want to go down to about room temperature, you need to go to about a hundred unit cells to really get it accurate, so here in the hardest regions. So here again, by going to more unit cells, one can make it more accurate.
So I see a big potential here for going way beyond what we can do now, just by going maybe from seven to fifteen orbitals, for which you might need thirty qubits, forty qubits. We could do much more: okay, so we need thirty qubits for the correlated ones and maybe one hundred simple ones for the part which is non-[indiscernible], where things just [indiscernible]. So we might need about a hundred orbitals in total, a hundred qubits or two hundred qubits, but the correlated hard part might be only about fifteen orbitals, and we'd have something that goes way beyond what we can do now. So here things can work relatively easily. I'm not saying that will solve all the problems, but we'd go way beyond what we can do now. So then what we want to do is a kind of hybrid simulation, LDA plus DMFT plus quantum computer, to solve that, and that can have a big impact here, and with second-wave models like the Hubbard model. So what I'd say is, what we should look for is some kind of hybrid classical-quantum methods, where we reduce the complexity of the quantum problem that you solve by treating some things [indiscernible] approximately, and then finding a good way to do the coupling correctly, like in the DMFT. We need something like that also for molecules, where you have the other problem, and that's what we maybe should think about.
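The qubit counting in this estimate is simple arithmetic if one assumes a Jordan-Wigner-style encoding with one qubit per spin-orbital; that encoding is an assumption here, since other fermion-to-qubit mappings trade qubit count against gate depth. The sketch below just makes the bookkeeping explicit, using the talk's rough numbers.

```python
def qubits_for_orbitals(n_orbitals, spins=2):
    """One qubit per spin-orbital under a Jordan-Wigner-style encoding
    (an assumption; other encodings make different trade-offs).
    With spins=2 each spatial orbital carries both spin species."""
    return n_orbitals * spins

# Back-of-envelope budget along the lines of the talk:
correlated = qubits_for_orbitals(15)    # hard, strongly correlated orbitals
weakly = qubits_for_orbitals(100)       # simpler bands carried alongside
total = correlated + weakly             # order one to two hundred qubits
```

The asymmetry is the point: the expensive quantum resource goes to the fifteen or so genuinely correlated orbitals, while the remaining bands are cheap, which is what makes the hybrid LDA+DMFT+quantum-computer picture plausible.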
So if you think I've been pessimistic, it's just because first one has to acknowledge what one just can't do [indiscernible], and now we have to be smart and find out ways to solve it. Thank you for staying, for listening.
[applause]
>>: So you finished very promptly, in which case, do you have time for questions?
>> Matthias Troyer: We have six minutes for questions then we should go to the shuttle.
>>: What is the complexity class of the Hubbard model [inaudible]?
>> Matthias Troyer: The Hubbard model, I think I can't build Hubbard models that are [indiscernible]-hard. But the simplest [indiscernible] Hubbard model for the [indiscernible], my belief is it's in BQP, because nature finds the ground state [indiscernible] superconducting. I can't prove it, but that's my feeling.
>>: I tried to ask that question is …
>>: [indiscernible]
[laughter]
>>: I want to ask a question about scale...
>>: No sorry.
>>: The Hubbard model doesn’t have enough parameters in it unless …
>> Matthias Troyer: I can't put the Hubbard model onto [indiscernible] and turn it into [indiscernible]. I can't build the [indiscernible] from the Hubbard model.
>>: So then you have enough parameters to ask that question reasonably, but then it's not the Hubbard model you want to solve, because it's a spin glass?
>>: It's the Hubbard model you want to solve. There aren't enough parameters to encode essentially enough problems to really talk about whether it's…
>> Matthias Troyer: So, but the Hubbard model on a square lattice, because the materials [indiscernible] quickly, I believe it's in BQP. It might even be in P, but I don't know that yet.
>>: [inaudible] comment on the last spin glass concept. So a number of comments say that spin glasses are a problem because one has to find the ground state, and you know that finding the true ground state in that case is very hard. But it might be misleading; I mean, it might be that even if the Hubbard model is some kind of spin glass, nature doesn't [indiscernible] state. That's actually a situation you see in biology, in proteins. For a long time the protein folding field was concerned with finding the ground state of a protein, which explores an exponentially [indiscernible] space, and was worried about how to find the ground state, but the fact is that in many cases nature doesn't find the ground state; it just finds a metastable state. So the relevant physical state is [indiscernible] a metastable state. But perhaps, you know, all I'm saying is maybe we shouldn't worry so much, since if nature [indiscernible], we don't have to find that [inaudible].
>> Matthias Troyer: But if nature, if there's a spin glass material I want to model, then yes, you might just want to say: hey, it looks like a spin glass, and it's good enough, and nature can cool it much better. But typically the spin glasses you want to solve are because you [indiscernible] an optimization problem [indiscernible] machine, and then we want to at least get a good approximate, a good low-energy state.
>>: Yes, now I think we want a good low energy state…
>> Matthias Troyer: Yes.
>>: That’s what we want. So I was going to say with respect to hybrid approaches, I mean the way you treat some [indiscernible] differently than …
>> Matthias Troyer: Yeah.
>>: And of course I think in chemistry this is what we always [inaudible]…
>> Matthias Troyer: Yes.
>>: So it’s really a question of machinery that hasn’t been transferred into solids…
>> Matthias Troyer: Yeah.
>>: It's just so much work, but [indiscernible] concept and so on and so forth.
>> Michael Freedman: Maybe I'll just make a comment on the distinction Garnet and Matthias were just talking about: finding the exact ground state versus a good low-energy state. One of the most important open problems in quantum computer science is QPCP, the quantum PCP conjecture. It's been known in the classical world for about fifteen years that there are NP-complete problems for which it is just as difficult to approximate the solution as to find the exact solution. The quantum mechanical analog of this is whether the same is true for QMA-complete problems in general: whether there are Hamiltonians for which it's just as difficult to find an approximate, low-energy state as it is to find the exact one. So I don't think there's a betting pool on this conjecture, but it's really like one of the [indiscernible].
>>: Do you see the [indiscernible] machines [indiscernible] starting with the Hubbard model and [indiscernible] computing?
>> Matthias Troyer: We already have the Hubbard model machine, well many of them with …
>>: What do you think about the …
>> Matthias Troyer: Quantum gases, and they work pretty well; they get what we get, and for all of the static and thermodynamic properties they've been a nice challenge for us, so that now we have quantum [indiscernible] methods like the [indiscernible] DMFT that essentially give the same. So for thermodynamic properties these experiments challenged us to beat them, and so far we've won. For dynamic properties, as has been mentioned, it's very different, so there of course they win, because for quantum dynamics we have no way of doing it. There's a huge sign problem that we just can't solve except for very short times. But in these things we take the atoms and use the atoms to model the fermions, and they interact by [indiscernible] interactions [indiscernible], which is short range. So it's a contact interaction; it gives me the Hubbard U. But one cannot really go easily beyond the Hubbard U. I could build a [indiscernible] molecule in the lattice, but that's what people [indiscernible]; everyone can get a [indiscernible] coupling, that's hard, but one can use ions and get the [indiscernible] Coulomb, but they can't really build the [indiscernible] this way. So to go to a real material there are two ways: either use the quantum computer, or I learn from the Hubbard model about a new classical method which I could then apply to the material, trying to go both [indiscernible] and see who wins, the quantum computer or our classical ideas.
>> Michael Freedman: Because of the time we should thank Matthias.
[applause]
>> Michael Freedman: Maybe I should ask …