Lecture 2: Techniques for quantum algorithms
Macquarie University
Grover's search algorithm (Lov Grover, 1996)

• Problem: Given a function $f(x)$ mapping $n$ bits to 1 bit, $f: \{0,1\}^n \to \{0,1\}$, determine the value of $x$ such that $f(x) = 1$.
• Classically we need to evaluate $O(N)$ values, where $N = 2^n$.
• Grover's algorithm enables a search with $O(\sqrt{N})$ queries.
• The quantum algorithm has two crucial steps:
  1. Calculate the function on all values simultaneously.
  2. Reflect about the equal superposition state.
• These steps are repeated $O(\sqrt{N})$ times.
• This is optimal.
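To make the two steps concrete, here is a minimal statevector sketch in plain numpy (not code from the lecture; the bit count, the marked item, and the helper names `oracle` and `reflect_about_s` are illustrative choices). The oracle flips the sign of the marked basis state, the second step reflects about the equal superposition, and after roughly $(\pi/4)\sqrt{N}$ repetitions the marked amplitude dominates.

```python
import numpy as np

n = 8                      # number of bits
N = 2 ** n
marked = 37                # the unknown x with f(x) = 1 (chosen here for illustration)

# Oracle: flips the sign of the marked basis state.
def oracle(state):
    out = state.copy()
    out[marked] *= -1
    return out

# Reflection about the equal superposition state |s> = H^{(x)n} |0>.
s = np.full(N, 1 / np.sqrt(N))
def reflect_about_s(state):
    return 2 * s * (s @ state) - state

state = s.copy()                                   # start in the equal superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state = reflect_about_s(oracle(state))

print(f"{iterations} iterations, P(marked) = {abs(state[marked])**2:.4f}")
# Expect a success probability close to 1, versus 1/N = 1/256 for a random guess.
```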
Optimality of Grover's algorithm (Bennett, Bernstein, Brassard & Vazirani, 1997)

• It is not possible to perform any better.
• NP is the class of problems whose solutions can be verified in polynomial time. That is, given $x$, you can verify that $f(x) = 1$ efficiently, but it is hard to find $x$.
• If it were possible to achieve an exponential speedup for search, it would mean that a quantum computer could solve NP problems in polynomial time.
• It is not possible to search any faster than $O(\sqrt{N})$ queries.
• NP problems are therefore still fundamentally hard on quantum computers.
Optimality of Grover's algorithm (Bennett, Bernstein, Brassard & Vazirani, 1997)

Method of solution:
• Denote the operations between oracle calls by $U_k$. The general state produced by the sequence of $k$ oracle calls is then
  $$|\psi_k^x\rangle = U_k O_x U_{k-1} O_x \cdots U_2 O_x U_1 O_x |\psi\rangle.$$
• If we did not call the oracle, then the state would be
  $$|\psi_k\rangle = U_k U_{k-1} \cdots U_2 U_1 |\psi\rangle.$$
• When we sum over all $x$, we need large $k$ to make the following quantity large:
  $$D_k = \sum_{x=1}^{N} \big\| |\psi_k^x\rangle - |\psi_k\rangle \big\|^2.$$
• We then show that, unless $D_k$ is large, we cannot obtain the solution with high probability.
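For completeness, a sketch of how this quantity is used (the constants follow the standard presentation of the BBBV argument and are not given on the slide): each oracle call can change $|\psi_k^x\rangle$ relative to $|\psi_k\rangle$ only through the component of the state on $|x\rangle$, and summing these deviations over $x$ and over the $k$ calls gives
$$D_k \le 4k^2.$$
On the other hand, if the algorithm identifies the marked item with constant probability for every $x$, then $|\psi_k^x\rangle$ must be close to a state that yields outcome $x$, and the single fixed state $|\psi_k\rangle$ can only be close to a few of these, which forces $D_k \ge cN$ for some constant $c$. Combining the two bounds gives $k = \Omega(\sqrt{N})$.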
Amplitude amplification (Brassard & Høyer, 1997)

• Problem: Given a function $f(x)$ mapping $n$ bits to 1 bit, $f: \{0,1\}^n \to \{0,1\}$, and an algorithm that produces a state that is a superposition of values of $x$, obtain a state that is a superposition only of those values of $x$ such that $f(x) = 1$.
• The idea is that $f(x) = 1$ corresponds to a success, and we want to amplify the successful component of the state.
• Denote the state prepared by the algorithm by
  $$|s\rangle = \sum_{x=1}^{N} a_x |x\rangle.$$
• Let the $M$ success states be $|s_m\rangle$.
• If we were just to measure, the probability of obtaining success would be
  $$p = \sum_{x \in \{s_m\}} |a_x|^2.$$
• Classically, repeating would require $O(1/p)$ repetitions.
• Amplitude amplification only needs $O(1/\sqrt{p})$.
Amplitude amplification (Brassard & Høyer, 1997)

• Method: Alternate reflections about the prepared state $|s\rangle$ and the solution states $|s_m\rangle$.
• As in Grover's algorithm, the reflection about the solution states is provided by the oracle:
  $$S_\chi = I - 2P_M, \qquad P_M = \sum_{m=1}^{M} |s_m\rangle\langle s_m|.$$
• In Grover's algorithm, we can denote the operation that transforms $|0\rangle$ to the equal superposition state by $A$; this is just a tensor product of Hadamards.
• The reflection about the superposition state is obtained by performing $A^\dagger$, reflecting about $|0\rangle$, then performing $A$ again:
  $$S_s = 2|s\rangle\langle s| - I = A\,(2|0\rangle\langle 0| - I)\,A^\dagger.$$
• Here we can instead denote the algorithm by a unitary operation $A$ that transforms $|0\rangle$ to $|s\rangle$.
• The reflection about $|s\rangle$ is obtained in exactly the same way:
  $$S_s = A\,(2|0\rangle\langle 0| - I)\,A^\dagger.$$
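As a quick sanity check of the construction $S_s = A(2|0\rangle\langle 0| - I)A^\dagger$, the sketch below (plain numpy; the unitary $A$ is a random example standing in for "the algorithm") verifies that the result fixes $|s\rangle = A|0\rangle$ and equals $2|s\rangle\langle s| - I$.

```python
import numpy as np

# Build a random unitary A as a stand-in for "the algorithm" (illustrative choice).
rng = np.random.default_rng(0)
N = 8
A, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Reflection about |0>: 2|0><0| - I.
R0 = -np.eye(N, dtype=complex)
R0[0, 0] = 1.0

# S_s = A (2|0><0| - I) A^dagger, which should equal 2|s><s| - I with |s> = A|0>.
S_s = A @ R0 @ A.conj().T
s = A[:, 0]

print(np.allclose(S_s @ s, s))                                   # |s> is left unchanged
print(np.allclose(S_s, 2 * np.outer(s, s.conj()) - np.eye(N)))   # equals 2|s><s| - I
```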
Amplitude amplification (Brassard & Høyer, 1997)

$$S_\chi = I - 2P_M, \qquad S_s = 2|s\rangle\langle s| - I$$

• The initial state can be written as
  $$|s\rangle = \sum_{x=1}^{N} a_x |x\rangle = \sum_{x \notin \{s_m\}} a_x |x\rangle + \sum_{x \in \{s_m\}} a_x |x\rangle = \cos\theta\,|s'\rangle + \sin\theta\,|w\rangle,$$
  where
  $$\sin^2\theta = p, \qquad |s'\rangle = \frac{1}{\sqrt{1-p}} \sum_{x \notin \{s_m\}} a_x |x\rangle, \qquad |w\rangle = \frac{1}{\sqrt{p}} \sum_{x \in \{s_m\}} a_x |x\rangle.$$
• The action of $S_s$ on $|w\rangle$ is
  $$S_s|w\rangle = (2|s\rangle\langle s| - I)|w\rangle = 2\langle s|w\rangle\,|s\rangle - |w\rangle = 2\sin\theta\,|s\rangle - |w\rangle.$$
• This gives
  $$S_s|w\rangle = 2\sin\theta\,(\cos\theta\,|s'\rangle + \sin\theta\,|w\rangle) - |w\rangle = 2\sin\theta\cos\theta\,|s'\rangle - (1 - 2\sin^2\theta)\,|w\rangle = \sin 2\theta\,|s'\rangle - \cos 2\theta\,|w\rangle.$$
Amplitude amplification (Brassard & Høyer, 1997)

$$S_\chi = I - 2P_M, \qquad S_s = 2|s\rangle\langle s| - I, \qquad |s\rangle = \cos\theta\,|s'\rangle + \sin\theta\,|w\rangle, \qquad \sin^2\theta = p, \qquad S_s|w\rangle = \sin 2\theta\,|s'\rangle - \cos 2\theta\,|w\rangle$$

• The action of $S_s$ on $|s'\rangle$ is
  $$S_s|s'\rangle = (2|s\rangle\langle s| - I)|s'\rangle = 2\langle s|s'\rangle\,|s\rangle - |s'\rangle = 2\cos\theta\,|s\rangle - |s'\rangle.$$
• This gives
  $$S_s|s'\rangle = 2\cos\theta\,(\cos\theta\,|s'\rangle + \sin\theta\,|w\rangle) - |s'\rangle = (2\cos^2\theta - 1)\,|s'\rangle + 2\sin\theta\cos\theta\,|w\rangle = \cos 2\theta\,|s'\rangle + \sin 2\theta\,|w\rangle.$$
• Consider the state $|\phi\rangle = \cos\phi\,|s'\rangle + \sin\phi\,|w\rangle$.
• The action of $S_\chi$ is $S_\chi|\phi\rangle = \cos\phi\,|s'\rangle - \sin\phi\,|w\rangle$.
• Then applying $S_s$ gives
  $$S_s S_\chi|\phi\rangle = \cos\phi\,(\cos 2\theta\,|s'\rangle + \sin 2\theta\,|w\rangle) - \sin\phi\,(\sin 2\theta\,|s'\rangle - \cos 2\theta\,|w\rangle)$$
  $$= (\cos\phi\cos 2\theta - \sin\phi\sin 2\theta)\,|s'\rangle + (\cos\phi\sin 2\theta + \sin\phi\cos 2\theta)\,|w\rangle = \cos(\phi + 2\theta)\,|s'\rangle + \sin(\phi + 2\theta)\,|w\rangle.$$
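Equivalently (a restatement of the calculation above, not an additional assumption): in the two-dimensional subspace spanned by $\{|s'\rangle, |w\rangle\}$, the product of the two reflections is represented by the rotation matrix
$$S_s S_\chi \;=\; \begin{pmatrix} \cos 2\theta & -\sin 2\theta \\ \sin 2\theta & \cos 2\theta \end{pmatrix},$$
so each iteration rotates the state by an angle $2\theta$ towards the solution state $|w\rangle$.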
Amplitude amplification (Brassard & Høyer, 1997)

$$S_\chi = I - 2P_M, \qquad S_s = 2|s\rangle\langle s| - I, \qquad |s\rangle = \cos\theta\,|s'\rangle + \sin\theta\,|w\rangle, \qquad \sin^2\theta = p$$
$$S_s|w\rangle = \sin 2\theta\,|s'\rangle - \cos 2\theta\,|w\rangle, \qquad S_s|s'\rangle = \cos 2\theta\,|s'\rangle + \sin 2\theta\,|w\rangle$$

• With the state $|\phi\rangle = \cos\phi\,|s'\rangle + \sin\phi\,|w\rangle$,
  $$S_s S_\chi|\phi\rangle = \cos(\phi + 2\theta)\,|s'\rangle + \sin(\phi + 2\theta)\,|w\rangle.$$
• This means that the algorithm gives
  $$(S_s S_\chi)^m |s\rangle = \cos[(2m+1)\theta]\,|s'\rangle + \sin[(2m+1)\theta]\,|w\rangle.$$
• The optimal number of iterations is therefore the one that brings $(2m+1)\theta$ close to $\pi/2$:
  $$2m + 1 \approx \frac{\pi}{2\theta}, \qquad m \approx \frac{\pi}{4\theta} \approx \frac{\pi}{4\sqrt{p}}.$$
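A minimal numerical sketch of the iteration, assuming a plain-numpy statevector model (the prepared state, the solution set, and the helper names are arbitrary illustrative choices): it applies $S_s S_\chi$ the predicted number of times and compares the resulting success probability with $\sin^2[(2m+1)\theta]$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
solutions = [3, 17, 42]                      # example set {s_m} with f(x) = 1

# An arbitrary prepared state |s> = sum_x a_x |x>.
a = rng.normal(size=N) + 1j * rng.normal(size=N)
a /= np.linalg.norm(a)

p = np.sum(np.abs(a[solutions]) ** 2)        # success probability on direct measurement
theta = np.arcsin(np.sqrt(p))

# Reflections S_chi = I - 2 P_M and S_s = 2|s><s| - I.
def S_chi(state):
    out = state.copy()
    out[solutions] *= -1
    return out

def S_s(state):
    return 2 * a * np.vdot(a, state) - state

m = int(np.floor(np.pi / (4 * theta)))       # optimal number of iterations
state = a.copy()
for _ in range(m):
    state = S_s(S_chi(state))

p_final = np.sum(np.abs(state[solutions]) ** 2)
print(f"p = {p:.4f}, after {m} iterations: {p_final:.4f}, "
      f"predicted sin^2[(2m+1)theta] = {np.sin((2 * m + 1) * theta) ** 2:.4f}")
```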
State preparation (Lov Grover, 2000)

• Consider an alternative formulation of the amplitude amplification problem.
• Our algorithm gives us the state
  $$|s\rangle = \sum_{x=1}^{N} a_x |x\rangle \big(\sin\theta_x\,|0\rangle + \cos\theta_x\,|1\rangle\big).$$
• We want to remove the $|1\rangle$ component, i.e. we want the effect of a measurement on the ancilla with result 0.
• We use these two reflections:
  $$S_\chi = I \otimes (I - 2|0\rangle\langle 0|), \qquad S_s = 2|s\rangle\langle s| - I.$$
• This is now identical to standard amplitude amplification; the oracle simply returns success if the ancilla is in the state $|0\rangle$.
• The number of steps is
  $$\sim \frac{\pi}{4}\, p^{-1/2} = \frac{\pi}{4} \left( \sum_{x=1}^{N} |a_x|^2 \sin^2\theta_x \right)^{-1/2}.$$
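The same iteration in this ancilla-flag formulation, as a hedged numpy sketch (the amplitudes $a_x$ and angles $\theta_x$ below are arbitrary examples): the oracle reflection looks only at the ancilla, and after about $(\pi/4)p^{-1/2}$ steps most of the weight sits in the ancilla-$|0\rangle$ branch, which is proportional to $a_x \sin\theta_x$.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
a = rng.normal(size=N)
a /= np.linalg.norm(a)                              # example amplitudes a_x
th = rng.uniform(0.05, 0.4, size=N)                 # example angles theta_x

# |s> = sum_x a_x |x> (sin(theta_x)|0> + cos(theta_x)|1>), stored as an (N, 2) array.
s = np.stack([a * np.sin(th), a * np.cos(th)], axis=1)

p = np.sum(s[:, 0] ** 2)                            # weight of the ancilla-|0> branch
theta = np.arcsin(np.sqrt(p))

def S_chi(state):
    """I (x) (I - 2|0><0|): flip the sign of the ancilla-|0> branch."""
    out = state.copy()
    out[:, 0] *= -1
    return out

def S_s(state):
    """2|s><s| - I."""
    return 2 * s * np.sum(s * state) - state

state = s.copy()
steps = int(np.floor(np.pi / (4 * theta)))
for _ in range(steps):
    state = S_s(S_chi(state))

print(f"{steps} steps; weight left in the |1> branch: {np.sum(state[:, 1] ** 2):.4f}")
good = state[:, 0] / np.linalg.norm(state[:, 0])    # the ancilla-|0> branch, renormalised
target = a * np.sin(th)
target /= np.linalg.norm(target)
print("overlap with a_x sin(theta_x):", abs(np.dot(good, target)))
```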
State preparation (Lov Grover, 2000)

• We want to prepare the state
  $$|\psi\rangle = \sum_{x=1}^{N} a_x |x\rangle.$$
• Consider an oracle that yields $A|x\rangle|0\rangle = |x\rangle|a_x\rangle$. The procedure is:
1. Start with an equal superposition state with two ancillas:
   $$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle|0\rangle|0\rangle.$$
2. Apply the oracle to give
   $$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle|a_x\rangle|0\rangle.$$
3. Apply a rotation on the second ancilla, controlled by the state of the first ancilla, to give
   $$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle|a_x\rangle \Big( a_x|0\rangle + \sqrt{1 - a_x^2}\,|1\rangle \Big).$$
4. Invert the oracle to give
   $$|s\rangle = \frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle|0\rangle \Big( a_x|0\rangle + \sqrt{1 - a_x^2}\,|1\rangle \Big).$$
5. The procedure in steps 1 to 4 then prepares the state $|s\rangle$ that can be used in the amplitude amplification procedure described earlier.
• The number of steps required in the amplitude amplification is then
  $$\sim \frac{\pi}{4}\, p^{-1/2} = \frac{\pi \sqrt{N}}{4}.$$
• This problem may be used to encode a search problem, for example where $a_m = 1$ for an unknown $m$. That means that this complexity is optimal for worst-case problems.
• For specific problems, the complexity may be improved upon.
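A statevector sketch of steps 1 to 4, under a simplifying assumption: for each $|x\rangle$ the value register holds a single definite value, so it is tracked as a classical array rather than as extra qubits, and the target amplitudes are an arbitrary real example. The sketch produces the state $|s\rangle$ of step 4, whose ancilla-$|0\rangle$ branch is proportional to $\sum_x a_x|x\rangle$; the amplitude-amplification iteration sketched earlier then boosts that branch.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
a = np.abs(rng.normal(size=N))
a /= np.linalg.norm(a)               # target amplitudes a_x (real, example)

# Step 1: equal superposition with both ancillas in |0>.
amp0 = np.full(N, 1 / np.sqrt(N))    # amplitude of |x>|reg>|0>
amp1 = np.zeros(N)                   # amplitude of |x>|reg>|1>
reg = np.zeros(N)                    # contents of the value register for each x

# Step 2: oracle writes a_x into the value register.
reg = a.copy()

# Step 3: controlled rotation on the second ancilla, |0> -> a_x|0> + sqrt(1 - a_x^2)|1>.
amp1 = amp0 * np.sqrt(1 - reg ** 2)
amp0 = amp0 * reg

# Step 4: invert the oracle, returning the value register to |0> so it factors out.
reg = np.zeros(N)

# The |0>-ancilla branch is proportional to the target sum_x a_x |x>.
good = amp0 / np.linalg.norm(amp0)
print("overlap with target:", abs(np.dot(good, a)))        # ~1.0
print("success probability p:", np.sum(amp0 ** 2), "  vs 1/N =", 1 / N)
# Amplitude amplification then needs ~ (pi/4) p^{-1/2} = pi*sqrt(N)/4 iterations.
```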
Quantum phase estimation (Alexei Kitaev, 1995)

• Often in quantum algorithms, one has a unitary operator $U$ and wants to find an eigenvalue.
• Given a state of the form
  $$|\psi\rangle = \sum_j c_j |\lambda_j\rangle,$$
  where $e^{i\lambda_j}$ is an eigenvalue of $U$ and $|\lambda_j\rangle$ is the corresponding eigenvector, one wants to obtain an estimate $\tilde\lambda_j$ of $\lambda_j$ with probability $|c_j|^2$.
• Alternatively, one wants to place the estimate ($m$ bits of $\lambda_j$) in an ancillary register in a coherent way, to obtain (approximately) the state
  $$\sum_j c_j |\lambda_j\rangle |\tilde\lambda_j\rangle.$$
Quantum phase estimation (Alexei Kitaev, 1995)

• Consider an initial eigenstate $|\psi\rangle = |\lambda\rangle$.
• The Hadamard on a control qubit gives $(|0\rangle + |1\rangle)/\sqrt{2}$.
• The controlled-$U^k$ operation then gives $(|0\rangle + e^{ik\lambda}|1\rangle)/\sqrt{2}$.
• The state of the set of $m$ control qubits is
  $$\frac{1}{\sqrt{2^m}} \sum_{x=0}^{2^m - 1} e^{ix\lambda} |x\rangle.$$

[Circuit: each of the $m$ control qubits starts in $|0\rangle$ and is Hadamarded; they control $U, U^2, U^4, U^8, U^{16}, U^{32}, \ldots$ acting on the target register $|\psi\rangle$, and the control register is then passed through an inverse QFT.]
Quantum phase estimation (Alexei Kitaev, 1995)

• For an initial eigenstate $|\psi\rangle = |\lambda\rangle$, the control register is left in the state
  $$\frac{1}{\sqrt{2^m}} \sum_{x=0}^{2^m - 1} e^{ix\lambda} |x\rangle.$$
• The inverse QFT is the operation
  $$\frac{1}{\sqrt{2^m}} \sum_{x,y=0}^{2^m - 1} e^{-2\pi i x y / 2^m}\, |y\rangle\langle x|.$$
• It therefore gives
  $$\frac{1}{2^m} \sum_{x,y=0}^{2^m - 1} e^{-2\pi i x y / 2^m}\, e^{ix\lambda}\, |y\rangle.$$
• For $\lambda = 2\pi k / 2^m$, the sum over $x$ is nonzero only for $y = k$, giving the state $|k\rangle$.
• That is, for $\lambda = 2\pi \times 0.k_1 k_2 k_3 \ldots k_m$ (binary), the output state is $|k_1 k_2 \ldots k_m\rangle$.
• The net result is that the state $|\psi\rangle = \sum_j c_j |\lambda_j\rangle$ becomes $\sum_j c_j |\lambda_j\rangle |\tilde\lambda_j\rangle$.
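A numpy sketch of the circuit for a single-qubit $U$ (the eigenphase $\lambda$ and register size $m$ are arbitrary choices, and the target register is dropped because it stays in its eigenstate): the control register picks up the phases $e^{ix\lambda}$ from the controlled powers, the inverse QFT is applied as a matrix, and the measurement distribution peaks at the nearest $m$-bit estimate.

```python
import numpy as np

m = 6                                   # control qubits
M = 2 ** m
lam = 2 * np.pi * 0.3017                # eigenphase to estimate (example value)

# Target: a single qubit with U = diag(1, e^{i lam}) acting on its eigenstate |1>.
# The target never changes, so we only track the control register: the
# controlled powers U^1, U^2, U^4, ... put the phase e^{i x lam} on |x>.
psi = np.full(M, 1 / np.sqrt(M), dtype=complex)     # after the Hadamards
psi *= np.exp(1j * lam * np.arange(M))              # after the controlled powers

# Inverse QFT on the control register.
x = np.arange(M)
iqft = np.exp(-2j * np.pi * np.outer(x, x) / M) / np.sqrt(M)
psi = iqft @ psi

probs = np.abs(psi) ** 2
k = np.argmax(probs)
print(f"estimate 2*pi*{k}/{M} = {2 * np.pi * k / M:.4f}, true lambda = {lam:.4f}")
print(f"probability of the most likely outcome: {probs[k]:.3f}")
```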
• We can implement one unitary $V$, but we want to implement a different operation $U$.
• The operations share eigenstates, but the eigenvalues are related by a function:
  $$V|\lambda\rangle = e^{i\lambda}|\lambda\rangle, \qquad U|\lambda\rangle = f(\lambda)|\lambda\rangle.$$
• Implement the unitary $V$ many times and perform phase estimation to give
  $$\sum_j c_j |\lambda_j\rangle \;\mapsto\; \sum_j c_j |\lambda_j\rangle|\tilde\lambda_j\rangle.$$
• The eigenvalue $f(\lambda)$ can then be imposed, giving (ignoring the estimation error)
  $$\sum_j c_j f(\tilde\lambda_j)\, |\lambda_j\rangle|\tilde\lambda_j\rangle.$$
• Inverting the phase estimation gives
  $$\sum_j c_j f(\lambda_j)\, |\lambda_j\rangle = U|\psi\rangle.$$
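A matrix-level numpy sketch of this procedure (the eigenphases, the function $f$, and the register size are arbitrary choices; the eigenphases are taken to be exactly representable with $m$ bits so the estimation step is error-free): phase estimation writes $\tilde\lambda$ into the ancilla register, a diagonal phase imposes $f(\tilde\lambda)$, and the inverse of the phase estimation uncomputes the ancillas, leaving $U|\psi\rangle$ on the target.

```python
import numpy as np
from functools import reduce

m, M = 5, 2 ** 5                         # size of the estimation (ancilla) register
d = 2                                    # dimension of the target register
ks = np.array([5, 12])                   # eigenphases lambda_j = 2*pi*k_j/M (exactly representable)
lam = 2 * np.pi * ks / M
V = np.diag(np.exp(1j * lam))            # the unitary we can implement
f = lambda l: np.exp(1j * np.sin(l))     # example eigenvalue function; target U|lam> = f(lam)|lam>
U_target = np.diag(f(lam))

# Phase estimation as one big unitary: Hadamards, controlled powers of V, inverse QFT.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hm = reduce(np.kron, [H1] * m)
xs = np.arange(M)
iqft = np.exp(-2j * np.pi * np.outer(xs, xs) / M) / np.sqrt(M)
CV = np.zeros((M * d, M * d), dtype=complex)        # sum_x |x><x| (x) V^x
for x in xs:
    CV[x * d:(x + 1) * d, x * d:(x + 1) * d] = np.linalg.matrix_power(V, x)
I_d = np.eye(d)
QPE = np.kron(iqft, I_d) @ CV @ np.kron(Hm, I_d)

# Impose the eigenvalue f(lambda~) as a phase that depends only on the ancilla value.
P = np.kron(np.diag(f(2 * np.pi * xs / M)), I_d)

circuit = QPE.conj().T @ P @ QPE                    # QPE, phase, then inverse QPE

psi = np.array([0.6, 0.8], dtype=complex)           # arbitrary target state
state_in = np.kron(np.eye(M)[0], psi)               # ancillas in |0...0>
state_out = circuit @ state_in
print(np.allclose(state_out, np.kron(np.eye(M)[0], U_target @ psi)))   # expect True
```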
Simulation of Hamiltonians (Seth Lloyd, 1996)

• In general, the problem of simulating a physical system is that of simulating Hamiltonian evolution:
  $$\frac{d}{dt}|\psi(t)\rangle = -iH(t)|\psi\rangle.$$
• For a constant Hamiltonian, we would have
  $$|\psi(t)\rangle = e^{-iHt}|\psi_0\rangle.$$
• A typical physical system consists of many subsystems.
• Each subsystem is of limited dimension $d$, but with $n$ subsystems the overall dimension is $d^n$.
• A typical Hamiltonian is a sum of local terms on individual subsystems and interaction terms on pairs of subsystems:
  $$H = \sum_k H_k.$$
Simulation of Hamiltonians (Seth Lloyd, 1996)

• We want to simulate the evolution
  $$|\psi(t)\rangle = e^{-iHt}|\psi_0\rangle,$$
  with a Hamiltonian that is a sum of terms:
  $$H = \sum_k H_k.$$
• Each individual $H_k$ has low dimension, so it can be efficiently simulated on its own.
• That is, we can perform $e^{-iH_k t}$.
• For short times we can use the approximation
  $$e^{-iH_1 \delta t}\, e^{-iH_2 \delta t} \cdots e^{-iH_{K-1} \delta t}\, e^{-iH_K \delta t} \approx e^{-iH\delta t}.$$
Simulation of Hamiltonians (Seth Lloyd, 1996)

• For short times we can use the approximation
  $$e^{-iH_1 \delta t}\, e^{-iH_2 \delta t} \cdots e^{-iH_{K-1} \delta t}\, e^{-iH_K \delta t} \approx e^{-iH\delta t}.$$
• This approximation holds because
  $$e^{-iH_1 \delta t}\, e^{-iH_2 \delta t} \cdots e^{-iH_K \delta t}
    = \big[I - iH_1 \delta t + O(\delta t^2)\big]\big[I - iH_2 \delta t + O(\delta t^2)\big] \cdots \big[I - iH_K \delta t + O(\delta t^2)\big]$$
  $$= I - i(H_1 + H_2 + \cdots + H_K)\,\delta t + O(\delta t^2)
    = I - iH\delta t + O(\delta t^2)
    = e^{-iH\delta t} + O(\delta t^2).$$
• If we divide a long time $t$ into $r$ intervals, then
  $$e^{-iHt} = \big(e^{-iHt/r}\big)^r = \Big(e^{-iH_1 t/r}\, e^{-iH_2 t/r} \cdots e^{-iH_K t/r} + O\big((t/r)^2\big)\Big)^r
    = \Big(e^{-iH_1 t/r}\, e^{-iH_2 t/r} \cdots e^{-iH_K t/r}\Big)^r + O(t^2/r).$$
• Typically, we want to simulate a system with some maximum allowable error $\epsilon$.
• Then we need $r \propto t^2/\epsilon$.
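A small numpy check of this scaling (the two Hamiltonian terms are random $4\times 4$ Hermitian matrices, an arbitrary example): the error of the first-order product formula falls off as $1/r$, consistent with the $O(t^2/r)$ bound.

```python
import numpy as np

rng = np.random.default_rng(4)
def rand_herm(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A + A.conj().T) / 2

def evolve(H, t):
    """exp(-iHt) for Hermitian H, computed via the eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

H1, H2 = rand_herm(4), rand_herm(4)          # example terms of H = H1 + H2
H = H1 + H2
t = 1.0
exact = evolve(H, t)

for r in [1, 10, 100, 1000]:
    step = evolve(H1, t / r) @ evolve(H2, t / r)
    trotter = np.linalg.matrix_power(step, r)
    print(f"r = {r:5d}   error = {np.linalg.norm(trotter - exact, 2):.2e}")
# The error falls roughly tenfold each time r grows tenfold, i.e. error = O(t^2/r),
# so reaching a target error eps requires r ~ t^2/eps.
```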