Lecture 3: Quantum simulation algorithms

Dominic Berry
Macquarie University
Simulation of Hamiltonians
(1996, Seth Lloyd)
- We want to simulate the evolution
      |ψ_t⟩ = e^{−iHt} |ψ_0⟩
- The Hamiltonian is a sum of terms:
      H = Σ_{ℓ=1}^{M} H_ℓ
- We can perform e^{−iH_ℓ t}.
- For short times we can use
      e^{−iH_1 δt} e^{−iH_2 δt} ⋯ e^{−iH_{M−1} δt} e^{−iH_M δt} ≈ e^{−iH δt}
- For long times
      (e^{−iH_1 t/r} e^{−iH_2 t/r} ⋯ e^{−iH_M t/r})^r ≈ e^{−iHt}

Simulation of Hamiltonians
(1996, Seth Lloyd)
- For short times we can use
      e^{−iH_1 δt} e^{−iH_2 δt} ⋯ e^{−iH_{M−1} δt} e^{−iH_M δt} ≈ e^{−iH δt}
- This approximation holds because
      e^{−iH_1 δt} e^{−iH_2 δt} ⋯ e^{−iH_M δt}
        = (𝕀 − iH_1 δt + O(δt²)) (𝕀 − iH_2 δt + O(δt²)) ⋯ (𝕀 − iH_M δt + O(δt²))
        = 𝕀 − iH_1 δt − iH_2 δt − ⋯ − iH_M δt + O(δt²)
        = 𝕀 − iH δt + O(δt²)
        = e^{−iH δt} + O(δt²)
- If we divide the long time t into r intervals, then
      e^{−iHt} = (e^{−iHt/r})^r = (e^{−iH_1 t/r} e^{−iH_2 t/r} ⋯ e^{−iH_M t/r} + O((t/r)²))^r
               = (e^{−iH_1 t/r} e^{−iH_2 t/r} ⋯ e^{−iH_M t/r})^r + O(t²/r)
- Typically, we want to simulate a system with some maximum allowable error ε. Then we need r ∝ t²/ε.

Higher-order simulation
(2007, Berry, Ahokas, Cleve, Sanders)
- A higher-order decomposition is
      e^{−iH_1 δt/2} ⋯ e^{−iH_{M−1} δt/2} e^{−iH_M δt/2} e^{−iH_M δt/2} e^{−iH_{M−1} δt/2} ⋯ e^{−iH_1 δt/2}
        = e^{−iH δt} + O((M‖H‖δt)³)
- If we divide the long time t into r intervals, then
      e^{−iHt} = (e^{−iHt/r})^r
               = (e^{−iH_1 t/2r} ⋯ e^{−iH_M t/2r} e^{−iH_M t/2r} ⋯ e^{−iH_1 t/2r} + O((M‖H‖t/r)³))^r
               = (e^{−iH_1 t/2r} ⋯ e^{−iH_M t/2r} e^{−iH_M t/2r} ⋯ e^{−iH_1 t/2r})^r + O((M‖H‖t)³/r²)
- Then we need r ∝ (M‖H‖t)^{1.5}/ε^{1/2}.
- A general product formula can give error O((M‖H‖t/r)^{2k+1}) for time t/r.
- For time t the error is O((M‖H‖t)^{2k+1}/r^{2k}).
- To bound the error by ε, the value of r scales as
      r ∝ (M‖H‖t)^{1+1/2k} / ε^{1/2k}
- The complexity is r × M.

Higher-order simulation
(2007, Berry, Ahokas, Cleve, Sanders)
- r ∝ (M‖H‖t)^{1+1/2k} / ε^{1/2k}
- The complexity is r × M.
- For Suzuki product formulae, we have an additional factor of 5^k in r:
      r ∝ 5^k (M‖H‖t)^{1+1/2k} / ε^{1/2k}
- The complexity then needs to be multiplied by a further factor of 5^k.
- The overall complexity scales as
      M 5^{2k} (M‖H‖t)^{1+1/2k} / ε^{1/2k}
- We can also take an optimal value of k ∝ √(log(M‖H‖t/ε)), which gives scaling
      M² ‖H‖ t exp[2√(ln 5 · ln(M‖H‖t/ε))]

Solving linear systems
(2009, Harrow, Hassidim & Lloyd)
- Consider a large system of linear equations:
      A𝒙 = 𝒚
- First assume that the matrix A is Hermitian.
- It is possible to simulate Hamiltonian evolution under A for time t: e^{−iAt}.
- Encode the initial state in the form
      |𝒚⟩ = Σ_{ℓ=1}^{N} y_ℓ |ℓ⟩
- The state can also be written in terms of the eigenvectors of A as
      |𝒚⟩ = Σ_{k=1}^{N} ψ_k |λ_k⟩
- We can obtain the solution |𝒙⟩ if we can divide each ψ_k by λ_k.
- Use the phase estimation technique to place the estimate of λ_k in an ancillary register to obtain
      Σ_{k=1}^{N} ψ_k |λ_k⟩|λ_k⟩

Solving linear systems
(2009, Harrow, Hassidim & Lloyd)
- Use the phase estimation technique to place the estimate of λ_k in an ancillary register to obtain
      Σ_{k=1}^{N} ψ_k |λ_k⟩|λ_k⟩
- Append an ancilla and rotate it according to the value of λ_k to obtain
      Σ_{k=1}^{N} ψ_k |λ_k⟩|λ_k⟩ ( (1/λ_k)|0⟩ + √(1 − 1/λ_k²) |1⟩ )
- Invert the phase estimation technique to remove the estimate of λ_k from the ancillary register, giving
      Σ_{k=1}^{N} ψ_k |λ_k⟩ ( (1/λ_k)|0⟩ + √(1 − 1/λ_k²) |1⟩ )
- Use amplitude amplification to amplify the |0⟩ component on the ancilla, giving a state proportional to
      |𝒙⟩ ∝ Σ_{k=1}^{N} (ψ_k/λ_k) |λ_k⟩ = Σ_{ℓ=1}^{N} x_ℓ |ℓ⟩

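The eigenvalue-inversion step at the heart of the algorithm can be emulated classically. A minimal numpy sketch (small random symmetric system, arbitrary seed): decompose y in the eigenbasis of A, divide each coefficient by its eigenvalue, and map back.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4))
A = (a + a.T) / 2                 # Hermitian (real symmetric) system matrix
y = rng.normal(size=4)

# Spectral decomposition: A = sum_k lam_k |lam_k><lam_k|
lam, V = np.linalg.eigh(A)
psi = V.T @ y                     # coefficients psi_k = <lam_k | y>
x = V @ (psi / lam)               # divide each psi_k by lam_k, return to node basis

assert np.allclose(A @ x, y)      # x solves A x = y
```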
Solving linear systems
(2009, Harrow, Hassidim & Lloyd)
- What about non-Hermitian A?
- Construct a blockwise matrix
      A′ = [ 0    A ]
           [ A†   0 ]
- The inverse of A′ is then
      A′⁻¹ = [ 0      (A†)⁻¹ ]
             [ A⁻¹    0      ]
- This means that
      A′⁻¹ [ 𝒚 ]  =  [ 0 ]
           [ 0 ]     [ 𝒙 ]
- In terms of the state:
      |0⟩|𝒚⟩ ↦ |1⟩|𝒙⟩

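This Hermitian embedding is easy to verify numerically. A quick sketch with a random non-Hermitian A (arbitrary size and seed):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # non-Hermitian
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = A @ x

# Blockwise matrix A' = [[0, A], [A^dag, 0]] is Hermitian by construction
Ap = np.block([[np.zeros((3, 3)), A],
               [A.conj().T, np.zeros((3, 3))]])
assert np.allclose(Ap, Ap.conj().T)

# Applying A'^{-1} to (y, 0) returns (0, x): the solution moves to the |1> block
sol = np.linalg.solve(Ap, np.concatenate([y, np.zeros(3)]))
assert np.allclose(sol[:3], 0)
assert np.allclose(sol[3:], x)
```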
Solving linear systems: complexity analysis
(2009, Harrow, Hassidim & Lloyd)
- We need to examine:
  1. The complexity of simulating the Hamiltonian to estimate the phase.
  2. The accuracy needed for the phase estimate.
  3. The possibility of 1/λ_k being greater than 1.
- The complexity of simulating the Hamiltonian for time t is approximately ∝ ‖A‖t = |λ_max| t.
- To obtain accuracy δ in the estimate of λ, the Hamiltonian needs to be simulated for time ∝ 1/δ.
- We actually need to multiply the state coefficients by λ_min/λ_k, to give
      Σ_{k=1}^{N} (|λ_min|/λ_k) ψ_k |λ_k⟩
- To obtain accuracy ε in λ_min/λ_k, we need accuracy ε λ_k²/|λ_min| in the estimate of λ.
- The final complexity is
      ∼ κ²/ε,   where κ := |λ_max|/|λ_min|

Differential equations
(2010, Berry)
- Discretise the differential equation, then encode it as a linear system.
- Simplest discretisation: the Euler method.
      dx/dt = Ax + b   ⟹   (x_{j+1} − x_j)/h = A x_j + b
- The first block row sets the initial condition, the middle rows take Euler steps, and the final rows hold x constant:

      [ I          0          0    0   0 ] [x₀]   [x_in]
      [ −(I+Ah)    I          0    0   0 ] [x₁]   [ bh ]
      [ 0          −(I+Ah)    I    0   0 ] [x₂] = [ bh ]
      [ 0          0          −I   I   0 ] [x₃]   [ 0  ]
      [ 0          0          0   −I   I ] [x₄]   [ 0  ]

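The encoding can be sketched classically: build the block matrix for a toy one-dimensional system and solve it, checking that the solution reproduces Euler steps and then stays constant. (The system, step size, and number of steps are arbitrary choices for illustration.)

```python
import numpy as np

# Toy 1x1 system dx/dt = Ax + b, Euler-discretised with step h
A = np.array([[-1.0]])
b = np.array([2.0])
x_in = np.array([0.0])
h, n = 0.01, 1

I, Z = np.eye(n), np.zeros((n, n))
M = np.block([
    [I,            Z,            Z,   Z,   Z],   # sets initial condition
    [-(I + A*h),   I,            Z,   Z,   Z],   # Euler step
    [Z,            -(I + A*h),   I,   Z,   Z],   # Euler step
    [Z,            Z,            -I,  I,   Z],   # holds x constant
    [Z,            Z,            Z,   -I,  I],   # holds x constant
])
rhs = np.concatenate([x_in, b*h, b*h, np.zeros(n), np.zeros(n)])
x = np.linalg.solve(M, rhs)

assert np.isclose(x[0], x_in[0])                          # initial condition
assert np.isclose(x[1], x_in[0] + h*(A[0, 0]*x_in[0] + b[0]))  # one Euler step
assert np.isclose(x[4], x[2])                             # held constant
```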
Quantum walks
- A classical walk has a position which is an integer, x, which jumps either to the left or the right at each step.
- The resulting distribution is a binomial distribution, or a normal distribution in the limit.
- The quantum walk has position and coin values |x, c⟩.
- It then alternates coin and step operators, e.g.
      C|x, ±1⟩ = (|x, −1⟩ ± |x, 1⟩)/√2
      S|x, c⟩ = |x + c, c⟩
- The position can progress linearly in the number of steps.

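The linear spreading can be seen in a direct simulation. A sketch of the coined walk above (symmetric initial coin state, arbitrary walk length): after T steps the standard deviation grows ∝ T, well beyond the classical √T.

```python
import numpy as np

T = 60
dim = 2 * T + 1                            # positions -T .. T
amp = np.zeros((dim, 2), dtype=complex)    # amp[x, c]; c=0 is coin -1, c=1 is coin +1
amp[T, 0] = 1 / np.sqrt(2)                 # symmetric initial coin state
amp[T, 1] = 1j / np.sqrt(2)

# Coin from the slide: C|x,-1> = (|x,-1> - |x,+1>)/sqrt(2),
#                      C|x,+1> = (|x,-1> + |x,+1>)/sqrt(2)
coin = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)

for _ in range(T):
    amp = amp @ coin.T                     # coin step at each position
    stepped = np.zeros_like(amp)
    stepped[:-1, 0] = amp[1:, 0]           # S|x,-1> = |x-1,-1>
    stepped[1:, 1] = amp[:-1, 1]           # S|x,+1> = |x+1,+1>
    amp = stepped

prob = (np.abs(amp) ** 2).sum(axis=1)
xs = np.arange(-T, T + 1)
mean = (xs * prob).sum()
std = np.sqrt(((xs - mean) ** 2 * prob).sum())   # ballistic: proportional to T
```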
Quantum walk on a graph
- The walk position is any node on the graph.
- An edge between a and a′ is denoted aa′.
- Describe the generator matrix K by
      K_{aa′} = γ,         a ≠ a′, aa′ ∈ G
              = 0,         a ≠ a′, aa′ ∉ G
              = −d(a)γ,    a = a′
- The quantity d(a) is the number of edges incident on vertex a.
- The probability distribution for a continuous walk obeys the differential equation
      dp_a(t)/dt = Σ_{a′} K_{aa′} p_{a′}(t)

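A small sketch of this generator (a hypothetical 4-cycle graph, arbitrary γ): the diagonal choice −d(a)γ makes each column of K sum to zero, so dp/dt = Kp conserves total probability.

```python
import numpy as np
from scipy.linalg import expm

gamma = 0.3
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle graph
n = 4

# Generator: gamma on connected off-diagonals, -d(a)*gamma on the diagonal
K = np.zeros((n, n))
for a, b in edges:
    K[a, b] = K[b, a] = gamma
K -= np.diag(K.sum(axis=0))                # diagonal = -degree * gamma

p0 = np.array([1.0, 0, 0, 0])              # walker starts at node 0
p = expm(K * 5.0) @ p0                     # solution of dp/dt = K p at t = 5

assert np.allclose(K.sum(axis=0), 0)       # columns sum to zero
assert np.isclose(p.sum(), 1.0)            # probability conserved
assert np.all(p >= 0)
```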
Quantum walk on a graph
(1998, Farhi)
      dp_a(t)/dt = Σ_{a′} K_{aa′} p_{a′}(t)
- Quantum mechanically we have
      i (d/dt)|ψ(t)⟩ = H|ψ(t)⟩
      i (d/dt)⟨a|ψ(t)⟩ = Σ_{a′} ⟨a|H|a′⟩ ⟨a′|ψ(t)⟩
- The natural quantum analogue is
      ⟨a|H|a′⟩ = K_{aa′}
- We take
      ⟨a|H|a′⟩ = γ,   a ≠ a′, aa′ ∈ G
               = 0,   otherwise.
- Probability is conserved because H is Hermitian.

Quantum walk on a graph
(2002, Childs, Farhi, Gutmann)
[figure: two binary trees glued together, with entrance and exit vertices at the roots]
- The goal is to traverse the graph from entrance to exit.
- Classically the random walk will take exponential time.
- For the quantum walk, define a superposition state over each column of the graph:
      |col j⟩ = (1/√N_j) Σ_{a ∈ column j} |a⟩
      N_j = 2^j,           0 ≤ j ≤ n
          = 2^{2n+1−j},    n + 1 ≤ j ≤ 2n + 1
- On these states the matrix elements of the Hamiltonian are
      ⟨col j|H|col j ± 1⟩ = √2 γ

Quantum walk on a graph
(2003, Childs, Cleve, Deotto, Farhi, Gutmann, Spielman)
[figure: the glued trees with a random cycle between them, entrance at left, exit at right]
- Add random connections between the two trees.
- All vertices (except entrance and exit) have degree 3.
- Again using column states, the matrix elements of the Hamiltonian are
      ⟨col j|H|col j ± 1⟩ = √2 γ,   j ≠ n
                          = 2γ,     j = n
- This is a line with a defect.
- There are reflections off the defect, but the quantum walk still reaches the exit efficiently.

NAND tree quantum walk
(2007, Farhi, Goldstone, Gutmann)
- In a game tree I alternate making moves with an opponent.
- In this example, if I move first then I can always direct the ant to the sugar cube.
- What is the complexity of doing this in general? Do we need to query all the leaves?
[figure: a game tree of alternating AND and OR gates with leaves x₁, …, x₈]

NAND tree quantum walk
(2007, Farhi, Goldstone, Gutmann)
[figure: a tree of alternating OR and AND gates with NOT gates on the inputs x₁, …, x₄, redrawn as an equivalent tree of NAND gates]

NAND tree quantum walk
(2007, Farhi, Goldstone, Gutmann)
- The Hamiltonian is a sum of an oracle Hamiltonian, representing the connections, and a fixed driving Hamiltonian, which is the remainder of the tree:
      H = H_O + H_D
- Prepare a travelling wave packet on the left.
- If the answer to the NAND tree problem is 1, then after a fixed time the wave packet will be found on the right.
- The reflection depends on the solution of the NAND tree problem.

Simulating quantum walks
- A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns all the nodes that are connected.
- The quantum oracle is queried with a node number x and a neighbour number j.
- It returns a result via the quantum operation
      U_O |x, j⟩|0⟩ = |x, j⟩|y⟩
- Here y is the j'th neighbour of x.

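A toy classical model of this oracle (the adjacency list is a hypothetical example graph): a reversible lookup that, given x and j, writes the j'th neighbour into a blank register.

```python
# Hypothetical example graph: a 4-cycle, stored as sorted adjacency lists
adjacency = {
    0: [1, 3],
    1: [0, 2],
    2: [1, 3],
    3: [0, 2],
}

def oracle(x, j, blank=0):
    """Model of U_O |x, j>|0> = |x, j>|y>, with y the j'th neighbour of x (j from 1)."""
    assert blank == 0, "target register must start blank in this simple model"
    y = adjacency[x][j - 1]
    return (x, j, y)

assert oracle(1, 2) == (1, 2, 2)   # the 2nd neighbour of node 1 is node 2
```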
Decomposing the Hamiltonian
(2003, Aharonov, Ta-Shma)
- In the matrix picture, we have a sparse matrix.
- The rows and columns correspond to node numbers.
- The ones indicate connections between nodes.
- The oracle gives us the position of the j'th nonzero element in column x.

      H = [ 0  0  1  0  0  1  ⋯  0 ]
          [ 0  1  0  0  0  1  ⋯  0 ]
          [ 1  0  0  0  0  0  ⋯  1 ]
          [ 0  0  0  1  1  0  ⋯  0 ]
          [ 0  0  0  1  1  0  ⋯  0 ]
          [ 1  1  0  0  0  0  ⋯  0 ]
          [ ⋮  ⋮  ⋮  ⋮  ⋮  ⋮  ⋱  ⋮ ]
          [ 0  0  1  0  0  0  ⋯  1 ]

Decomposing the Hamiltonian
(2003, Aharonov, Ta-Shma)
- As before, the sparse matrix H has rows and columns corresponding to node numbers, ones indicating connections between nodes, and an oracle giving the position of the j'th nonzero element in column x.
- We want to be able to separate the Hamiltonian into 1-sparse parts.
- This is equivalent to a graph colouring – the graph edges are coloured such that the edges at each node have distinct colours.

Graph colouring
(2007, Berry, Ahokas, Cleve, Sanders)
- How do we do this colouring?
- First guess: for each node, assign edges sequentially according to their numbering.
- This does not work because the edge between nodes x and y may be edge 1 (for example) of x, but edge 2 of y.
- Second guess: for the edge between x and y, colour it according to the pair of numbers (j_x, j_y), where it is edge j_x of node x and edge j_y of node y.
- We decide the order such that x < y.
- It is still possible to have ambiguity: say we have x < y < z.

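The ambiguity in the second guess can be exhibited concretely. A sketch on a hypothetical path graph: two edges incident on the same node receive the same pair colour (j_x, j_y).

```python
# Path graph 1 - 2 - 3 - 4; edges written as (x, y) with x < y
edges = [(1, 2), (2, 3), (3, 4)]

# Build sorted neighbour lists, so j(v, u) is u's index among v's neighbours
nbrs = {}
for a, b in edges:
    nbrs.setdefault(a, []).append(b)
    nbrs.setdefault(b, []).append(a)
for v in nbrs:
    nbrs[v].sort()

def j(v, u):
    """Neighbour index of u at node v, counting from 1."""
    return nbrs[v].index(u) + 1

# Colour edge {x, y} (with x < y) by the pair (j_x(y), j_y(x))
colour = {(x, y): (j(x, y), j(y, x)) for x, y in edges}

# Ambiguity: edges {2,3} and {3,4} are both incident on node 3, same colour
assert colour[(2, 3)] == colour[(3, 4)] == (2, 1)
```

This is exactly the x < y < z case from the slide (here x = 2, y = 3, z = 4), which motivates the fix of compressing strings of equal-coloured edges.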
Graph colouring
(2007, Berry, Ahokas, Cleve, Sanders)
- It is still possible to have ambiguity: say we have x < y < z.
[figure: a path of nodes x, y, z with edge numbers 1, 2, 3 at each node, where the edges xy and yz both receive the colour (1, 2)]
- Use a string of nodes with equal edge colours, and compress.

General Hamiltonian oracles
(2003, Aharonov, Ta-Shma)
[figure: an example sparse Hermitian matrix H with complex entries such as √3, −√2 i, 1/2, e^{±iπ/7} and 1/10]
- More generally, we can perform a colouring on a graph with matrix elements of arbitrary (Hermitian) values.
- Then we also require an oracle to give us the values of the matrix elements:
      U_O |x, j⟩|0⟩ = |x, j⟩|y⟩
      U_H |x, y⟩|0⟩ = |x, y⟩|H_{x,y}⟩

Simulating 1-sparse case
(2003, Aharonov, Ta-Shma)
- Assume we have a 1-sparse matrix, e.g.

      H = [ 0      0   0        0  0  √2 i  ⋯  0      ]
          [ 0      √3  0        0  0  0     ⋯  0      ]
          [ 0      0   0        0  0  0     ⋯  −√3+i ]
          [ 0      0   0        1  0  0     ⋯  0      ]
          [ 0      0   0        0  2  0     ⋯  0      ]
          [ −√2 i  0   0        0  0  0     ⋯  0      ]
          [ ⋮      ⋮   ⋮        ⋮  ⋮  ⋮     ⋱  ⋮      ]
          [ 0      0   −√3−i   0  0  0     ⋯  0      ]

- How can we simulate evolution under this Hamiltonian?
- Two cases:
  1. If the element is on the diagonal, then we have a 1D subspace.
  2. If the element is off the diagonal, then we need a 2D subspace.

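A minimal numpy sketch of this case split (a hypothetical 4×4 1-sparse matrix): the Hamiltonian decouples into 1D and 2D subspaces, so exponentiating each block separately reproduces the full evolution exactly.

```python
import numpy as np
from scipy.linalg import expm

# 1-sparse Hermitian matrix: at most one nonzero entry per row/column
H = np.zeros((4, 4), dtype=complex)
H[1, 1] = np.sqrt(3)               # diagonal entry: 1D subspace {1}
H[0, 2] = np.sqrt(2) * 1j          # off-diagonal entry: 2D subspace {0, 2}
H[2, 0] = np.conj(H[0, 2])
# index 3 has no entry: trivial 1D subspace

t = 0.7
U = np.eye(4, dtype=complex)

U[1, 1] = np.exp(-1j * H[1, 1] * t)       # 1D subspace: just a phase
idx = np.ix_([0, 2], [0, 2])
U[idx] = expm(-1j * H[idx] * t)           # 2D subspace: a 2x2 unitary

# Blockwise evolution equals evolution under the full H
assert np.allclose(U, expm(-1j * H * t))
```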
Simulating 1-sparse case
(2003, Aharonov, Ta-Shma)
- We are given a column number x. There are then 5 quantities that we want to calculate:
  1. b_x: a bit registering whether the element is on or off the diagonal; i.e. whether x belongs to a 1D or 2D subspace.
  2. min_x: the minimum number out of the (1D or 2D) subspace to which x belongs.
  3. max_x: the maximum number out of the subspace to which x belongs.
  4. A_x: the entries of H in the subspace to which x belongs.
  5. U_x: the evolution under H for time t in the subspace.
- We have a unitary operation that maps
      |x⟩|0⟩ → |x⟩|b_x, min_x, max_x, A_x, U_x⟩

Simulating 1-sparse case
(2003, Aharonov, Ta-Shma)
- We have a unitary operation that maps
      |x⟩|0⟩ → |x⟩|b_x, min_x, max_x, A_x, U_x⟩
- We consider a superposition of the two states in the subspace,
      |ψ⟩ = μ|min_x⟩ + ν|max_x⟩
- Then we obtain
      |ψ⟩|0⟩ → |ψ⟩|b_x, min_x, max_x, A_x, U_x⟩
- A second operation implements the controlled operation based on the stored approximation of the unitary operation U_x:
      |ψ⟩|U_x, min_x, max_x⟩ → (U_x|ψ⟩)|U_x, min_x, max_x⟩
- This gives us
      (U_x|ψ⟩)|b_x, min_x, max_x, A_x, U_x⟩
- Inverting the first operation then yields
      (U_x|ψ⟩)|0⟩

Applications
- 2007: Discrete query NAND algorithm – Childs, Cleve, Jordan, Yeung
- 2009: Solving linear systems – Harrow, Hassidim, Lloyd
- 2009: Implementing sparse unitaries – Jordan, Wocjan
- 2010: Solving linear differential equations – Berry
- 2013: Algorithm for scattering cross section – Clader, Jacobs, Sprouse

Implementing unitaries
(2009, Jordan, Wocjan)
- Construct a Hamiltonian from the unitary as
      H = [ 0    U ]
          [ U†   0 ]
- Now simulate evolution under this Hamiltonian:
      e^{−iHt} = 𝕀 cos t − iH sin t
- Simulating for time t = π/2 gives
      e^{−iHπ/2} |1⟩|ψ⟩ = −iH |1⟩|ψ⟩ = −i|0⟩U|ψ⟩

Quantum simulation via walks
- Three ingredients:
  1. A Szegedy quantum walk
  2. Coherent phase estimation
  3. Controlled state preparation
- The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian.
- By using phase estimation, we can estimate the eigenvalue, then implement the phase that is actually needed.

Szegedy quantum walk
(2004, Szegedy)
- The walk uses two reflections:
      2CC† − 𝕀,   2RR† − 𝕀
- The first is controlled by the first register and acts on the second register.
- Given some matrix p[i, j], the operator C is defined by
      |p_i⟩ = Σ_{j=1}^{N} √(p[i, j]) |j⟩
      C = Σ_{i=1}^{N} |i⟩⟨i| ⊗ |p_i⟩

Szegedy quantum walk
(2004, Szegedy)
- The diffusion operator 2RR† − 𝕀 is controlled by the second register and acts on the first. Use a similar definition with a matrix r[i, j].
- Both are controlled reflections:
      2CC† − 𝕀 = Σ_{i=1}^{N} |i⟩⟨i| ⊗ (2|p_i⟩⟨p_i| − 𝕀)
      2RR† − 𝕀 = Σ_{i=1}^{N} (2|r_i⟩⟨r_i| − 𝕀) ⊗ |i⟩⟨i|
- The eigenvalues and eigenvectors of the step of the quantum walk
      (2CC† − 𝕀)(2RR† − 𝕀)
  are related to those of a matrix formed from p[i, j] and r[i, j].

Szegedy walk for simulation
(2012, Berry, Childs)
- Use a symmetric system, with
      p[i, j] = r[i, j] = H*_{ij}
- Then the eigenvalues and eigenvectors are related to those of the Hamiltonian.
- In reality we need to modify to a "lazy" quantum walk, with
      |p_i⟩ = √(δ/‖H‖₁) Σ_{j=1}^{N} √(H*_{ij}) |j⟩ + √(1 − σ_i δ/‖H‖₁) |N + 1⟩,
      σ_i := Σ_{j=1}^{N} |H_{ij}|
- Grover preparation gives
      |ψ_p⟩ = (1/√N) Σ_{k=1}^{N} |k⟩ ( ψ_k |0⟩ + √(1 − |ψ_k|²) |1⟩ )

Szegedy walk for simulation
(2012, Berry, Childs)
- Three step process:
  1. Start with the state in one of the subsystems, and perform controlled state preparation.
  2. Perform steps of the quantum walk to approximate Hamiltonian evolution.
  3. Invert the controlled state preparation, so the final state is in one of the subsystems.
- Step 2 can just be performed with small δ for the lazy quantum walk, or can use phase estimation.
- A Hamiltonian has eigenvalues μ, so evolution under the Hamiltonian has eigenvalues
      e^{−iμT}
- V is the step of a quantum walk, and has eigenvalues
      e^{iλ} = ±e^{±i arcsin(μδ)}
- The complexity is the maximum of
      ‖H‖T   and   d ‖H‖_max T / √ε