>> Peter Lee: Good morning. Thank you all for coming. It is my pleasure to introduce
our speaker for today, Professor Matthias Troyer. He comes to MSR all the way from
Zürich, where he is a professor of computational physics at the Institute for Theoretical
Physics. He has also been a many-time visitor at Station Q down in Santa Barbara. He is
a pioneer of cluster computing in Europe, having been responsible for the installation of
the first Beowulf cluster in Europe with more than 500 CPUs in 1999, and the most
energy-efficient general-purpose computer on the Top 500 list in 2008. He is a fellow of
the American Physical Society, and his activities range from quantum simulations and
topological quantum computing to novel simulation algorithms, high-performance
computing and computational provenance. He is the author of the Boost.MPI C++
library for message passing on parallel computers and the leader of the open-source
ALPS library for the simulation of quantum many-body systems. So without further
ado, please join me in welcoming Professor Troyer.
[applause].
>> Matthias Troyer: Thank you. What I was told is that there are two words that scare
people most, and they are physics and quantum. [laughter]. So I am glad that so many
came, and I will keep it simple, so it should be no problem to follow. And if there are
people here who have some physics background and have questions, ask me and I will
show the details later.
The starting point is that the performance of our machines has grown exponentially, and
soon we should reach an exaflop. But the question is how long that can continue, and
essentially, in the next 10 years or so it should slow down or stop. The main physical
problem is that when you look at a CMOS device, the insulating layer now approaches
about a nanometer, so it is four atomic layers thick. When you make it thinner and scale
it down further, it will no longer be insulating. So the challenge is that as one reaches
the limits of scaling, quantum mechanics becomes important. But the main problem that
we face right now is that as we stop scaling we need more power to run the machines,
because as we scaled devices down the power density stayed constant. Now, to get the
performance we need, we need bigger machines and we need more power.
So to get to a petaflop and an exaflop, in the past years we needed about 100 times more
cores to reach 1000 times more performance. In the '90s I used the Cray Y-MP and I
found it a great machine: eight CPUs and a gigaflop. Later we used machines with
thousands of cores, and now my friends in Japan use hundreds of thousands of cores, and
that causes problems. One problem is Amdahl's law; it gets nearly impossible to write
good code for those machines. It gets hard to scale. The problem is that the codes we
write can scale to about 10,000 cores; beyond that we can run some artificial problems,
but they are not the real problems we want to solve. [laughter].
So I am not going for better scaling; I want to solve the real problems and not set
records. But the other problem that we already face is power. The power that our
clusters use grows more and more, and what limits our clusters at ETH is the amount of
money the vice president is willing to give us to pay for the power to run the machines.
So that will be the limiting factor. We can keep scaling things up and improving things
slowly, but at some point there will be an end, and at that point we need to change the
way we do things. According to Rajeeb Hazra, beyond 2025 we need some miracles.
So what we can do is already think about disruptive change, and think about how we can
make use of quantum mechanics to solve some special problems in computing, while
[inaudible] for a while. Quantum mechanics is scary to many. It brought some people to
physics; it keeps others away from physics, [laughter] like we heard before the talk. But
it is already more than a century old, and the last century was a century of physics: there
were many innovations through physics, but almost all of them are classical. There are
very few true quantum devices, and here I have a quantum device that you can buy. It
works on Windows, Mac OS and Linux. So we have some quantum devices that work,
like random number generators and quantum simulators, and in May this year the first
quantum computer was sold by D-Wave Systems. I will not talk about that, because I
will talk about things that work.
[laughter].
>> Matthias Troyer: I can mention something about the D-Wave One later if somebody
asks, but I want to leave that for now; my hope is more for a future Windows Q quantum
computer that we might have some time. But now first to the machines that actually
work, why it is important to actually make sure they work, and what they do.
You want to start with something simple, like a quantum random number generator. The
questions are: where do we need random numbers, where do we need good random
numbers, and why would one use a quantum random number generator? The main
commercial applications for quantum random numbers are online gambling and the
lotteries, because there you really want to be sure it is random. The lotteries really have
to prove that they are unbiased and fair, and in online gambling you don't want a hacker
to make use of some flaw in your generator.
I use them for simulations of quantum systems in physics, and they are also used, for
example, in finance for estimates of the futures in the stock market. And how do we
usually get random numbers? We get random numbers using an algorithm on a
deterministic machine, and of course that is not random, because it always gives the
same results if you start with the same input. So it is called a pseudo-random number:
something that just looks random, that looks unpredictable, and one wants fast, simple
algorithms to generate them.
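To make that concrete, here is a minimal sketch of such a pseudo-random generator, a
linear congruential generator with textbook constants (illustrative only, not a generator
discussed in the talk); the same seed always reproduces the same sequence:

```python
# Minimal linear congruential generator: fully deterministic,
# so the same seed always yields the same "random" sequence.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # "random" float in [0, 1)

stream = lcg(seed=42)
print([next(stream) for _ in range(3)])  # identical on every run
```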
The first problem is that these numbers are not random at all; they are completely
deterministic. There is no entropy: if you know the algorithm, you can predict the next
number. But it might be good enough. So they have been put through many, many tests
checking that they look random and show no correlations, and those that pass the tests
are deemed to be good random number generators. And then every two years a problem
comes up: you do a simulation, you calculate something with high accuracy, and you
find the result is off by 40 standard deviations; it is just completely wrong. It was seen
for the first time back in 1992, and it comes up often now. Yes?
>>: It famously came up in 1963 or '64 with the random number generator that IBM
used. The infamous RANDU. Look it up and you will shudder.
>> Matthias Troyer: So it's a well-known problem, and it comes up every time, and one
uses better and better generators. One tests them and uses them, and then a student
spends a week finding a bug in their code because the test result is wrong, until the
professor says maybe it's a problem with the random number generator, and they say
hey, yes. And then somebody uses a better one. And then somebody runs in and tells me
there's a bug in my code, and I ask them to change the random number generator. And
they say why? It should be good. So the only test that you have is your own application.
If that gives the right result, then the generator is good enough. If it doesn't give the
right result, then it is not good enough. So we essentially have to run everything twice.
There are generators that are better. We tried using cryptographically secure ones; then
at least if you find a problem, you have solved a hard computational problem. The
problem is they are a factor of 1000 to 10,000 slower; too slow. They give too few
numbers.
So the idea is you can get a quantum random number generator and produce random
numbers based on quantum mechanics. How does it work? It costs about €1,000, so
about 1,500 US dollars, and what is in here is a photon source that emits a photon; the
photon flies out and hits a semi-transparent mirror. And here now is all you have to
know about quantum mechanics. A friend of mine, Jean-Claude at Yale, explains
quantum mechanics using quotes from Yogi Berra. There is a famous quote: "when you
come to a fork in the road, take it." [laughter].
That is what really happens. The photon takes the fork and it follows both paths. It
comes here and then it follows both paths. And then we look where the photon is. Is it
here or is it there? When we look where something is, it is always only in one location.
By observing it, by measuring it, there is another Yogi Berra quote for that but I don't
remember it, it shows up in only one location. When you want to pinpoint it, when you
want to localize it somewhere, then it is in only one location, and which one is chosen
purely at random. There is a physical mechanism here that produces randomness. And
then I record a random bit, zero or one, depending on whether it's left or right. Now,
does it work? Of course there is the assumption that quantum mechanics is correct, and
there is good reason to believe it. So does this device work? What you can do is look it
up, and there is a certificate by the Swiss office of metrology that certifies that the tested
device has passed all of the tests and is random.
[laughter].
>> Matthias Troyer: So it is certified. When we saw our results later we asked them, and
they said, of course, a certificate only means as much as the tests can tell.
>>: [inaudible] the fact that the silvered mirror doesn't introduce…
>> Matthias Troyer: That is the first problem: bias. [laughter]. Good point. It is not
perfectly balanced; sometimes it gives you more ones than zeros or more zeros than
ones. What you can do, and there is a version of this device where the driver software
does it, is Von Neumann debiasing: you take two bits. If the two bits are the same, you
discard them. A zero-one you map to zero; a one-zero you map to one. That way the
bias is gone, and your bit rate is reduced to a quarter. So instead of 4 megabits per
second you get only one megabit per second from the device, but without any bias. One
megabit per second is not fast, because we typically need about a gigabit per second.
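A minimal sketch of the Von Neumann debiasing step just described (hypothetical
helper, operating on a list of raw detector bits):

```python
def von_neumann_debias(bits):
    """Look at non-overlapping pairs of raw bits: discard equal pairs
    (00, 11), map 01 -> 0 and 10 -> 1. Any constant bias cancels out,
    at the cost of at least three quarters of the bit rate."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(0 if b1 == 0 else 1)
        # equal pairs carry the bias and are discarded
    return out

# Example: von_neumann_debias([1, 1, 0, 1, 1, 0, 0, 0]) -> [0, 1]
```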
But there is another problem. When you look at correlations, there shouldn't be any,
because by quantum mechanics it should be perfectly random. So we measured the
correlations, and when you measure for a few days, you find there is an autocorrelation
of about 3 x 10^-5. It is tiny, but it is worse than a cryptographically secure
pseudo-random number generator. It is not random. Now, the problem here is not that
quantum mechanics is incorrect; the problem is that the detector has a memory. If the
detector sees a photon, then the next time you ask where the photon is, the detector can
sometimes trigger again because of the previous photon. The photon source can
sometimes emit two photons, and then there is a tiny bias that the same detector triggers
again. One can solve that by just increasing the time between measurements: if you go
to about 4 or 5 times the correlation time, it is safe. But then the bit rate goes down by
another factor of 10. Combined with the debiasing we have now reduced the bit rate by
a factor of 40, and you are still not completely sure that it works.
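A sketch of how one would estimate such an autocorrelation from the bit stream (a
standard estimator; the 3 x 10^-5 figure is the measurement described above, and
resolving a signal that small needs on the order of 10^9 or more samples, hence days of
data at these bit rates):

```python
import numpy as np

def autocorrelation(bits, lag=1):
    """Estimate the lag-k autocorrelation of a 0/1 bit stream."""
    x = np.asarray(bits, dtype=float)
    x -= x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# A detector with memory shows up as a small positive value at short lags.
```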
So what we can do instead: I talked to a person in quantum information theory at
idQuantique in Geneva, and he told me to use randomness extractors. I asked him what
that is, and he told me, well, that has been invented by theoretical computer scientists
just to produce random numbers. And that was then the key to making this thing work.
What we do is we view the bit stream not as a random stream but as an entropy source.
We look at those bits; for example, we record N = 2048 bits and then we look at what
the entropy density in that is, and the best bound we identified was about 0.99, so there
is only a tiny, tiny loss of entropy due to the bias and the correlations. And then what
one can do is use a two-universal hash function, which takes a random bit matrix M,
fixed once and for all, and multiplies the vector of raw bits by that matrix. From the
2048 bits of entropy that we feed in, we can extract k = 1792 bits of random numbers,
and if this matrix M is chosen with true random numbers, then the probability to find
any non-randomness in the output is less than 2^(-(h·N - k)/2), where h is the entropy
density. That gives us a probability to find any non-randomness of less than 10^-35. If
you want it even better, you just extract fewer bits. So that way one can actually prove
that the probability of non-randomness is bounded by a number which makes it work,
and the only change that we need is a change in the driver. Yes?
>>: Are there less expensive forms of entropy that you can use?
>> Matthias Troyer: The question is, are there less expensive sources of entropy? That
certainly has to be explored. There are sources of entropy that have been used, for
example thermal noise in a resistor. That is what we are looking at. The problem with
thermal noise is that you cannot prove on physical grounds what its entropy is; in the
quantum device you know the physical mechanism, so you can actually prove it. With
thermal noise you can only estimate it. So this is the stronger thing. But yes, we want to
find faster and less expensive sources, because this one is really expensive for that low
bit rate.
>>: The proof that you have there seems like it is not only dependent on your
controllable error probability, but also on the assumptions that you are making about the
correlations and…
>> Matthias Troyer: It depends on how good my entropy estimate is and what
assumptions go into that. So how did we get the number? What we did is we assumed
that the correlations decay exponentially, or decay fast enough. And the only thing that
we need to assume for that estimate is essentially that the correlations at long times are
bounded by the values at the shortest distances that we measured. That went into the
estimate. The correlations just should not increase beyond that level anymore. And that
is the physical thing we know about this device. Of course there could be something
hidden in physics beyond quantum mechanics, or there could be some correlations, but
that is very unlikely. Yes.
>>: I don't want to [inaudible] you, but random numbers with only that problem are
perfectly good for crypto, because the bad guys don't have a chance of [inaudible] in
guessing the key. The key [inaudible].
>> Matthias Troyer: Oh, you have a question?
>>: Yeah. I guess I'm still a little unsatisfied because of that assumption, because that
assumption doesn't seem to be any weaker than a typical cryptographic assumption that
could be made about a good one-way hash function or a good pseudo-random number
generator.
>> Matthias Troyer: The assumption that goes in here is that quantum mechanics is
valid and…
>>: At the entropy estimate.
>> Matthias Troyer: The entropy is estimated based on the assumption that the
correlations do not increase anymore after a millisecond.
>>: Right. Is that [inaudible] assuming that a certain pseudo-random number generator
is secure?
>> Matthias Troyer: I think that is more secure, because in this device one can measure
the correlation time, and the correlations decay with an autocorrelation time of about
100 ms. Memory should be lost over that time scale in those devices, so if you saw that
after a second this goes up again, that would be worth a Nobel Prize [laughter].
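Returning to the randomness extractor: a minimal sketch of the two-universal hashing
step described above, multiplying the raw bit vector by a fixed random matrix over
GF(2) (illustrative code using the parameters from the talk, not the vendor's driver):

```python
import numpy as np

rng = np.random.default_rng(2011)   # M is chosen randomly once and for all
N_IN, N_OUT = 2048, 1792            # raw bits in, extracted bits out
M = rng.integers(0, 2, size=(N_OUT, N_IN), dtype=np.int64)

def extract(raw_bits):
    """Two-universal hashing: matrix-vector product over GF(2)."""
    return (M @ raw_bits) % 2       # 1792 nearly uniform output bits

raw = rng.integers(0, 2, size=N_IN, dtype=np.int64)  # stand-in for device bits
out = extract(raw)
```

With entropy density h ≈ 0.99, the bound quoted above, 2^(-(h·N_IN - N_OUT)/2),
comes out below 10^-35.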
So now let me go to a device that costs 1000 times more. If you thought that was
expensive, now we go to something that costs $2-3 million. What we want to do is
simulate quantum systems. First let me show you why simulating quantum systems is
hard, why we need it, and then what we can do about it. The basic problem: here is the
theory of everything relevant in daily life, the Schrödinger equation with Coulomb
forces. Essentially, physics knows the theory of everything unless you go to extreme
conditions. This equation, the Schrödinger equation, explains most of physics, all of
chemistry, biology, the stock market, life and more. It is a simple linear partial
differential equation. The only problem is that it lives in 3N dimensions, and N is on the
order of 10^20. So we just have to solve a linear PDE in a huge number of dimensions,
and the complexity of solving it grows exponentially with the number of dimensions.
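A toy illustration of that exponential growth, anticipating the M^(3N) counting he spells
out next (illustrative numbers only):

```python
M = 10                       # grid points per dimension (illustrative)
for N in (1, 2, 3, 10):      # number of particles
    points = M ** (3 * N)    # grid points needed in 3N dimensions
    print(f"N = {N:2d} particles: {points:.1e} grid points")
# Already at N = 10 this is 1e30 points, far beyond any classical machine.
```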
If you have M points in one dimension, then you have M^(3N) points in 3N dimensions.
So that is the basic problem: it is a simple equation, but it just scales horribly. We have
to reduce it to something that doesn't scale as badly, and since I mentioned Nobel Prizes
before, here is the first one, to Kohn and Pople.
Kohn has proven, and it is a one-line proof, so you can get a Nobel Prize for a one-line
statement, that you can find the ground state of a quantum system not by finding this
huge wave function; the only thing that you have to do is minimize a functional of the
density. The density lives in three dimensions, so you have to solve a three-dimensional
minimization problem: you have to find the density distribution that minimizes the
functional. The only problem is we don't know the last term exactly. The one-line
proof that it exists doesn't tell you how to compute it, and it was recently shown by
[inaudible] that this is hard even on a quantum computer: it is QMA-hard to find the true
functional. So what people do is approximate it and find something that is good enough.
Yes?
>>: What does the xc stand for in E_xc?
>> Matthias Troyer: Exchange and correlation. We have one term which is the external
potential: gravity, the nuclei and so on. You have the kinetic energy of the electrons.
You have the classical part of the Coulomb energy. And then you have the rest. It's
tiny, but important. If you apply this, for example, to silicon, then you find the band
gap, and that band gap [inaudible] is what makes semiconductors work.
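In standard density-functional notation, the decomposition he just listed reads (a
textbook form of the Kohn-Sham functional in atomic units, not a slide from the talk):

```latex
E[\rho] = E_{\mathrm{ext}}[\rho] + T[\rho]
        + \frac{1}{2}\iint
          \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}
          \,d\mathbf{r}\,d\mathbf{r}'
        + E_{xc}[\rho]
```

The third term is the classical Coulomb part; E_xc, the exchange-correlation term, is the
one that is only known approximately.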
Now, to solve this equation you need big machines. To demonstrate, what I am showing
here is the fastest code that runs on Jaguar at Oak Ridge. Essentially all of the codes in
the world that run at faster than a petaflop solve this materials science problem. It needs
huge machines, and essentially no other code reaches a petaflop. So these are still big
problems to solve, and now the question is: can you find interesting things with it? One
interesting question is, can we, for example, find a good superconductor?
A superconductor is a material that has no electrical resistance, and it was discovered
back in 1911 in Holland: as you lower the temperature of mercury, the resistance drops,
and the wire carries the current without resistance. It then took nearly 50 years to
explain it. And of course one wants to increase the temperature at which this happens,
because the first one was at about four Kelvin, which is very cold. Here is the highest
temperature at which a material is superconducting, as a function of time: it grew slowly
and slowly and then saturated at about 30 Kelvin, until in 1986 Bednorz and Müller
found the so-called high-temperature superconductors. Suddenly it jumped up rapidly,
within a few years, to 140 Kelvin, and it is now stuck there.
The big question is why those materials are superconducting, and the more important
question: since the temperature was increased by a factor of 3 to 4, can we increase it by
another factor of two and reach room temperature? Then we would have power lines
without loss, which would be really great, levitating trains and all. So that would be
really useful, and there was big hype in those years that this would happen soon. But
since then we are stuck at this temperature, and the question is, can we simulate it? What
we can do is use the density functional theory that worked so well for silicon and try to
apply it to those materials. The problem is, and I don't want to go into detail, it gives the
wrong results.
When you take this material and do the experiment, it is an insulator. When you
simulate it, it comes out as a metal. It is qualitatively wrong, just because in those
materials this exchange-correlation term, which you don't know, is so important. So
what is the answer? We need to solve the full quantum problem. We want to solve at
least a simplified toy model for those materials, but we need to solve this big problem.
And now I want to show you how we can solve exponentially big problems, and what
the problem is in quantum mechanics.
What we can use to solve high-dimensional problems is the Monte Carlo method. It was
invented in the '40s by Ulam. I don't know if you know the story. He was sick in bed
and he was playing solitaire, and he asked a problem that is also exponentially hard,
namely: what is the probability to win in solitaire? You have 52 cards and 52 factorial
possible games, and if you want to solve it exactly you have to be very, very smart and
find the exact solution. But he instead had another smart idea. He played 100 times,
because he had time; he didn't have Windows yet [laughter]. It took longer, but he
played 100 times and won 15 times, and so he knew that, though he didn't know the
exact answer, it was somewhere around 15%. That was good enough for him. So if you
just want an estimate, we can get as precise as needed by sampling a small subset of the
exponentially large number of states. And the general algorithm for that was invented in
1953, quite a while ago, by Metropolis, Rosenbluth, Rosenbluth, Teller and Teller, and
that is the Metropolis algorithm.
Many people have heard about the method, but very few have ever looked at the paper.
Has anyone here seen the paper? The paper is interesting because there are two
husband-and-wife teams on it, and the person who actually invented the algorithm was a
woman, Arianna Rosenbluth. Metropolis was just alphabetically first, and he was the
one who built the machine. That machine was also special because it was the first
machine that ran at a kiloflop. It was a big challenge at the time to figure out what to do
with such a fast machine, and that is why Teller invented the method. And that is what
we still do now: when we don't know what a big machine is useful for, the Metropolis
algorithm will always work.
It is always important to find a good motivation for a paper and a good first sentence,
and this paper beats them all: "The purpose of this paper is to describe a general method
of calculating the properties of any substance." Wow. Yeah?
>>: [inaudible] plutonium?
>> Matthias Troyer: Including plutonium, yes, which is still a challenge. Another
sentence down here, the second one, says "classical statistics is assumed," and that is the
problem with plutonium. But it is a simple method, and I will just show it, because that
will then point to the problems with quantum systems. What you do is you start with
any configuration of your system, of your particles, whatever. You make a small
change, like moving one particle around. You take the ratio of the weight of the new
configuration to the old one, and you accept the new configuration with probability
min(1, w_new/w_old). Very simple, but a great idea, and in principle it works for
anything. But it can be slow. And let me show you why.
>>: [inaudible]?
>> Matthias Troyer: Those weights might be the weights of the states, which are the
exponential of minus the energy divided by the temperature in a classical system. So it
is just the probability to be in that state at the temperature you are at. If you want to find
the ground state, then at low temperature it essentially means you always accept when
the energy goes down, and when it goes up you almost never accept.
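A minimal sketch of the Metropolis rule for the Ising magnet he demonstrates next (a
standard textbook implementation with coupling J = 1 and k_B = 1, not the actual demo
code):

```python
import numpy as np

def metropolis_sweep(spins, T, rng):
    """One sweep over an L x L Ising magnet (spins are +1 or -1).
    Propose flipping a single spin; accept with probability
    min(1, w_new / w_old) = min(1, exp(-dE / T))."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy change of flipping spin (i, j), periodic boundaries:
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(64, 64))   # hot start: half up, half down
for _ in range(100):
    metropolis_sweep(spins, T=2.0, rng=rng)  # lower T and domains freeze in
```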
Now let me try to show you how it works. Here I have a simple model of a magnet. Let
me try to run it. That is a magnet where each spin can point up or down; up is white and
down is black, and I am at low temperature, so it is just stuck. The simulation is actually
running. I can increase the temperature and it starts to fluctuate. I can increase the
temperature more. Now it is so hot that the magnet is no longer magnetic: it is half black
and half white, half up and half down, so it is no longer magnetic. Now we can cool it
down again to close to the critical point, where the magnetization vanishes, and we see
that some big domains start forming, black and white ones, until all of it is half black
and half white. We also see that now nothing much changes anymore, and that is what I
wanted to show: those slow updates that change single squares from black to white do
not change much in the global structure. This is called the critical slowing down
problem. But it gets even worse if we lower the temperature. Let me lower it again to a
very low temperature, 0.1. It should turn all white or all black, but we cannot wait that
long. In principle it should work, but it can take forever. So we need to be smart and
invent faster methods. The principle works, but we can make huge progress with faster
methods.
Let me go back again to close to this critical point. There one uses cluster update
schemes, which are a smart way to find a whole cluster of spins to flip at once. That
code is about five lines longer, 20 lines instead of 15, and it performs about a factor of
40,000 faster here. So the key point is that it is important to find better algorithms.
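One well-known cluster scheme of this kind is the Wolff algorithm; here is a minimal
sketch for the same Ising magnet (the standard algorithm, shown as an illustration of
how short such code is, not the exact 20-line code from the talk):

```python
import numpy as np

def wolff_update(spins, T, rng):
    """Grow a cluster of aligned spins, adding each aligned neighbor with
    probability p = 1 - exp(-2/T), then flip the whole cluster at once.
    Near the critical point this beats single-spin flips dramatically."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 / T)
    i, j = rng.integers(0, L, size=2)   # random seed site
    seed = spins[i, j]
    spins[i, j] = -seed                 # flipping on addition marks "visited"
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                       (x, (y + 1) % L), (x, (y - 1) % L)):
            if spins[nx, ny] == seed and rng.random() < p_add:
                spins[nx, ny] = -seed
                stack.append((nx, ny))
```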
To demonstrate that, one of my colleagues last year looked at how he has simulated this
model over the years, starting back in 1972, and plotted how much the computer speed
of his machines increased, which is this line following Moore's law, and how much
faster his code became with the combined improvements of algorithms and hardware.
That was an additional exponential growth: in principle, if you ran the modern algorithm
on a 30-year-old machine, if you still had it, it would run faster than the old method on
today's machine. So there have been speedups of 10^5 and 10^6 and more, just due to
methods. I think one good aspect of the end of Moore's law will be that we will focus
more on methods instead of hardware.
We can now often simulate hundreds of thousands of quantum particles. The progress
came through new ideas; new methods made this 10^5 times faster. We have another
new method for the plutonium problem that gives effectively a factor of 10^6, but that is
used now not by us but by others, because I am not that interested in plutonium. We
scale our codes well to about 10,000 cores. There are colleagues who scale further and
get records by scaling to 100,000 cores on artificial problems; I prefer to stick with the
real problems and stop at 10,000 cores. For some codes we now use accelerator chips,
and will use their successor chips in the future; that helps a lot and allows us to solve
bigger problems. What we need to get those codes to run are libraries that help us
simplify the coding, because we essentially have to help physics students write sensible
code that scales well. So they should be using libraries for that, and workflow systems
that are provenance-enabled. The goal, which we have reached now in two of the papers
already, is that just by looking at the paper and clicking on a figure you can reproduce
and rerun the whole calculation. And the next step, which we are working on now, is for
the new machines, which will not be so [inaudible]: runtime support that makes the
codes more fault tolerant, and that works pretty well for those simulations.
>>: Provenance enabled?
>> Matthias Troyer: You want to have a workflow system that helps you record what
you have done, so when you have your final graphic at the end, you have all of the
metadata that tells you where it came from and how it was calculated. If you just take
your numbers, copy them into Excel, make a plot, and copy that into your paper, then
when somebody asks, hey, which version of the code did you run, or on which machine,
the student says, hm, I don't know. It might have been the buggy version, and you redo
the whole calculation. You want to look at the final data and trace its lineage back to
where it came from.
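As an illustration of the idea, a hypothetical helper that stores a figure together with the
metadata needed to trace it back (a sketch only; ALPS uses full provenance-enabled
workflow tools, not this function):

```python
import json
import subprocess

def save_with_provenance(fig, name, parameters):
    """Save a matplotlib figure plus a JSON sidecar recording which
    code version and parameters produced it (hypothetical helper)."""
    commit = subprocess.check_output(
        ["git", "rev-parse", "HEAD"]).decode().strip()
    fig.savefig(name + ".png")
    with open(name + ".json", "w") as f:
        json.dump({"code_version": commit, "parameters": parameters}, f)
```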
So with that, for some types of quantum systems we can simulate up to tens and
hundreds of millions of quantum particles. But the most interesting materials are the
ones with electrons, and electrons are the type of particle that physicists call a fermion.
Those who know quantum physics know about the Pauli principle. For the others, all
you need to know is the effect: you can simulate those electrons, but the weights you get
can become negative. That goes back to the question about the weights. You sample the
states, and some come in with a positive weight and some with a negative weight, so you
get a huge cancellation problem. When you want to sample it and you have a negative
weight, you would have to accept the state with a probability of -1/2, and I don't know
how to do that. So you try to find a way around it. But we could prove that solving this
sign problem in general is an NP-hard problem.
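A toy illustration of why negative weights hurt: one samples with |w| and carries the
sign along, estimating <A> = <A·sign>/<sign>; when positive and negative weights
nearly cancel, the average sign is tiny and the relative error explodes (made-up
probabilities, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Signs of sampled configurations, almost perfectly cancelling:
signs = rng.choice([1, -1], size=10**6, p=[0.5005, 0.4995])
avg_sign = signs.mean()                        # around 1e-3
rel_err = signs.std() / (abs(avg_sign) * np.sqrt(signs.size))
print(avg_sign, rel_err)                       # tiny signal, O(1) relative error
```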
So we might find some cases where it works, but in general it will not, and now we
come back to the quantum hardware, because Feynman was one of the first to discover
the problem, and he said that to solve the quantum problem you need to use quantum
mechanics. So we need to use a computer that is based on quantum mechanics to solve
it. And for those models we don't aim for a full-fledged quantum computer yet; we aim
for a quantum simulator, which is essentially a special-purpose device to solve one
model. And last year Science chose quantum simulators as one of the breakthroughs of
the year. They had a total of 10 scientific topics for the breakthroughs. The winner was
a quantum machine. Eight of the other topics were all in life science, cancer research
[inaudible] and so on. One more was physics, namely the quantum simulators that had
passed their first key test, and that is what I will tell you about now.
So what are the types of machines that we can have to compute? The first ones were
analog, like the Antikythera mechanism that was found in Greece. It is about 2000 years
old, and it was the first machine to calculate where the planets are. Nowadays we have
programmable digital machines. A quantum simulator is essentially a quantum analog
of the Antikythera machine, an analog quantum device. In the future, and it will not take
another 2000 years, it will be fast, there will be the quantum computer. I want to focus
now on how we can use the classical machine to validate the quantum simulator. That
goes back to the DARPA program on optical lattice emulators. They said that since
people can build such a quantum simulator, they would give us money to build it. But in
the first two years we had to build one for a model that we can solve, and we had to
show that this quantum machine gets the right answers for a problem that we can solve.
And then we got more money to build some that solve problems we can't solve on a
classical machine. Of course that was a challenge to me, and I tried to make my codes
faster and faster to solve the same problems.
But what are those machines built on? It goes back to work by Steven Chu, who is now
the Secretary of Energy here, and others. He found a way to cool and trap atoms with
laser light, which won the Nobel Prize in 1997. Then about four years later there was a
new Nobel Prize for the achievement of Bose-Einstein condensation. I will now
mention quickly what that is, and why it is relevant to quantum simulators. Essentially
there are two types of quantum particles that have been found so far in nature: bosons
and fermions. There is a third type [inaudible] that Station Q looks for, for quantum
computing, but they have not been found so far. So we take a gas of atoms down to
very, very low temperatures, down to nano-Kelvin temperatures, and trap a million of
those atoms. Now, depending on whether the total number of neutrons, protons and
electrons is even or odd, an atom is a boson or a fermion. For example, lithium-7 is a
boson, and those bosonic particles, when you cool them down to low temperature, all
want to go into the same state. The fermionic particles, where you have a total odd
number like in lithium-6, want to have just one particle per state. That is the same
principle that explains the structure of the atom, where you have just one electron per
state.
Now, bosons are computationally easy; with fermions we get the sign problem and they
are computationally hard. So a model with bosons we can simulate; the one with
fermions is what we want to solve. So we can now build an atomic quantum gas
simulator. It costs about $2 million and takes about 2 to 3 years to build for a team that
knows exactly what it is doing. The challenges are: you have these gases at nano-Kelvin
temperatures, so you need to control the noise down to the nano-Kelvin level, you need
to line up your lasers perfectly, and you need to be able to take pictures of the atoms. So
it is a big challenge to actually build it. But once you build it, what can you do?
I should say that here in this glass chamber there is the quantum gas. What they do to
find the state is they take the gas and drop it: they drop this cloud of gas and it expands,
and then they just take a picture. That picture shows the gas cloud, and what it shows is
the velocity distribution in the cloud. Those atoms which were at rest drop straight
down; those that fly out fast fly farther. So once you take the picture you see the
momentum distribution of your cloud. And that is where Bose-Einstein condensation
comes in. If you have bosons at very low temperatures, they all occupy the same state,
so they are all at rest in the center of the trap. When you now drop them, they shouldn't
fly apart, because they are all in the state of rest. And here are three pictures as they
cool down the gas: a sharp peak forms in the center at velocity zero. They found the
Bose-Einstein condensate and they got the Nobel Prize.
Now, can we do more? That was a simple gas, but now we want to simulate a more
complicated model, like the electrons in a material. For that we need to add the crystal
lattice. We can do that by adding an optical lattice: six more lasers, six more beams.
You take those six laser beams to form a standing wave pattern of light, and then you
have those atoms hopping around on the lattice. Now you have built a quantum lattice
model, and all you have to do is build the model that describes your material and
measure its properties. Does it work? We wanted to validate it. We can simulate the
same number of atoms, about half a million atoms, if they are bosons; on the big lattice a
single point takes about eight hours on a single core. So it is cheap.
But then we had to model all of the details: the same lattice structure, the same number
of particles, the same temperature, and we measured the same quantities. If this
quantum device worked, then our simulation had better give the same results as they
measure; if not, then the quantum device does not work. So we compared, and of course
it did not agree. Why did it not agree? Because we have to model all of the details of
the experiment. We learned that the cloud drops down only for a finite time, so there are
still corrections to what you actually see; it is not the limiting value, which is the
velocity distribution. Then there is some noise in the lasers, and the laser noise heats the
gas, so the temperature changes. Then we have a lens through which we take the
picture, and the lens broadens the signal that we see. We have the camera with its pixel
size. Once all of that was taken into account, we looked again: do things agree? And
this is what we got at the end. This is the simulation; this is the experiment. Here is low
temperature, there high, and essentially they agree, except that the simulation is still
cleaner; it has smaller errors.
When we look more closely, they are in full agreement. So it works; that is the good
news. We can actually build a device that implements a quantum model, and when we
simulate the device it really gives what we want, so we have actually built a quantum
model in a quantum device. Now the next step: we want to solve a hard problem, for
example the high-temperature superconductor. What do we have to do for that? If you
want to do that on a classical computer, it becomes an exponentially hard problem. If
you want to do it on the quantum simulator, we just have to change the isotope, from
bosonic lithium-7 to fermionic lithium-6. The rest of the machine can essentially stay
the same. Of course there are other problems, but it can be done. And it is not much
harder for them to do fermions than bosons, while for us on the classical machine it is
much harder to do the fermions.
What is the status there? Validation is much, much harder. For the small systems and
the high temperatures they are at now, it is about 1000 times harder, but we could
validate the first results, and we now wait for the first numbers that we didn't know;
those should come soon from them. If we want to simulate a real material, like the
plutonium you mentioned, there is no way we will ever model its lattice structure in such
systems; for that we will need a quantum computer. But simple models one can actually
build and then measure. So one can build a special-purpose quantum device to solve
hard quantum problems.
So, to summarize: we have the first quantum devices and they seem to work. The
quantum random numbers work; they are a bit slow and cost a lot, but they work. The
quantum simulators cost much, much more, but in the next stage they will slowly reach
models and temperatures that cannot be simulated on the fastest classical machines, so
they will become useful. Quantum computers will be the next step, but in my point of
view you shouldn't sell a quantum computer before you validate that it works, and that
will be the next thing to do.
This was funded by the DARPA program I mentioned and the Swiss Center on Quantum
Systems and Technology, where we work on quantum dots, simulations, qubits and
more. The work on the quantum random numbers was done by Raffaele Solca at ETH.
The simulations were done mostly by Lode Pollet at ETH, and the experiments were
done in Munich by Stefan Trotzky, a student there. Thank you.
[applause].
>> Matthias Troyer: Yeah?
>>: [inaudible] In a quantum computer you can't measure them, so what difference does
it make whether they are random or not? It's not that quantum computers couldn't
measure the output [inaudible] some error somewhere else.
>> Matthias Troyer: In a quantum system, when you measure something, you will
change the state of the system; is that what you mean?
>>: Yes.
>> Matthias Troyer: And that is exactly what those quantum random numbers are based
on, because if, say, the photon is in two places and you measure it somewhere, then you
change the system and it is in only one place. But which one? That is chosen randomly,
and that is the process this device exploits: when you measure, you get a random
selection, and that is what we use. For a quantum computer you want it to compute a
single answer that you can measure, so it is in just one state at the end. Yes?
>>: If I remember right, AES in counter mode runs at about two cycles per bit, and it is
secure enough for [inaudible]. Was it running too slow? If you run the standard block
size for AES in counter mode, you get a stream whose only distinction from true random
is that it has no birthday paradox; is that really too slow?
>> Matthias Troyer: 2 seconds per bit, that would be slow compared to some of the ones
we used, but it would approach the range where we…
>>: [inaudible] CPU cycles, not [inaudible] per second.
>> Matthias Troyer: Oh yeah, yes. So it can be useful for some methods; the ones we
tested and tried needed about 10,000 cycles per bit.
>>: [inaudible].
>> Matthias Troyer: Yes. But with any of those generators, the basic thing is that there
is no underlying guarantee that there is entropy in your stream; in principle they are
deterministic, so there can always be problems. And yes, there is the potential that there
is physics we don't know yet that makes this device misbehave at the wrong times, but
that is less likely than a problem in one of the pseudo-random number generators. So it
is a different level, because this device does have true entropy in it and the other
methods don't. Any more questions?