Lecture 3: Quantum simulation algorithms
Dominic Berry, Macquarie University

Simulation of Hamiltonians (Lloyd, 1996)
- We want to simulate the evolution |ψ_t⟩ = e^{-iHt}|ψ_0⟩.
- The Hamiltonian is a sum of terms: H = Σ_{ℓ=1}^{m} H_ℓ.
- We can perform each e^{-iH_ℓ t} individually.
- For short times we can use e^{-iH_1 δt} e^{-iH_2 δt} ⋯ e^{-iH_{m-1} δt} e^{-iH_m δt} ≈ e^{-iH δt}.
- For long times, (e^{-iH_1 t/r} e^{-iH_2 t/r} ⋯ e^{-iH_m t/r})^r ≈ e^{-iHt}.
- The short-time approximation holds because
  e^{-iH_1 δt} e^{-iH_2 δt} ⋯ e^{-iH_m δt}
    = (I − iH_1 δt + O(δt²))(I − iH_2 δt + O(δt²)) ⋯ (I − iH_m δt + O(δt²))
    = I − iH_1 δt − iH_2 δt − ⋯ − iH_m δt + O(δt²)
    = I − iH δt + O(δt²)
    = e^{-iH δt} + O(δt²).
- If we divide the long time t into r intervals, then
  e^{-iHt} = (e^{-iHt/r})^r
           = (e^{-iH_1 t/r} e^{-iH_2 t/r} ⋯ e^{-iH_m t/r} + O((t/r)²))^r
           = (e^{-iH_1 t/r} e^{-iH_2 t/r} ⋯ e^{-iH_m t/r})^r + O(t²/r).
- Typically we want to simulate the system with some maximum allowable error ε; then we need r ∝ t²/ε.

Higher-order simulation (Berry, Ahokas, Cleve, Sanders, 2007)
- A higher-order decomposition is the symmetric product
  e^{-iH_1 δt/2} ⋯ e^{-iH_{m-1} δt/2} e^{-iH_m δt/2} e^{-iH_m δt/2} e^{-iH_{m-1} δt/2} ⋯ e^{-iH_1 δt/2}
    = e^{-iH δt} + O((m‖H‖δt)³).
- If we divide the long time t into r intervals, then
  e^{-iHt} = (e^{-iHt/r})^r
           = (e^{-iH_1 t/2r} ⋯ e^{-iH_m t/2r} e^{-iH_m t/2r} ⋯ e^{-iH_1 t/2r} + O((m‖H‖t/r)³))^r
           = (e^{-iH_1 t/2r} ⋯ e^{-iH_m t/2r} e^{-iH_m t/2r} ⋯ e^{-iH_1 t/2r})^r + O((m‖H‖t)³/r²).
- Then we need r ∝ (m‖H‖t)^{1.5}/ε^{0.5}.
- General product formulas of order 2k give error O((m‖H‖t/r)^{2k+1}) for time t/r.
- For the full time t the error is then O((m‖H‖t)^{2k+1}/r^{2k}).
- To bound the error by ε, the value of r scales as r ∝ (m‖H‖t)^{1+1/2k}/ε^{1/2k}.
- The complexity is m × r.
- For Suzuki product formulae there is an additional factor in r: r ∝ 5^k (m‖H‖t)^{1+1/2k}/ε^{1/2k}.
- The complexity then needs to be multiplied by a further factor of 5^k, so the overall complexity scales as m 5^{2k} (m‖H‖t)^{1+1/2k}/ε^{1/2k}.
- We can also take an optimal value of k ∝ √(log(m‖H‖t/ε)), which gives scaling m²‖H‖t exp[2√(ln 5 · ln(m‖H‖t/ε))].
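A small numerical sketch of these product formulas (the two-term Hamiltonian, dimension, and step count below are arbitrary choices; for two terms the symmetric formula reduces to the Strang form e^{-iH_1 δt/2} e^{-iH_2 δt} e^{-iH_1 δt/2}). It compares the first-order and symmetric second-order products against the exact evolution; the errors should scale roughly as t²/r and t³/r².

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    """An illustrative random Hermitian matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 8
H1, H2 = random_hermitian(n), random_hermitian(n)
H = H1 + H2

t, r = 1.0, 100          # total time and number of Trotter steps (arbitrary)
dt = t / r

U_exact = expm(-1j * H * t)

# First-order (Lie-Trotter) step: e^{-i H1 dt} e^{-i H2 dt}
step1 = expm(-1j * H1 * dt) @ expm(-1j * H2 * dt)
# Symmetric second-order (Strang/Suzuki) step: e^{-i H1 dt/2} e^{-i H2 dt} e^{-i H1 dt/2}
step2 = expm(-1j * H1 * dt / 2) @ expm(-1j * H2 * dt) @ expm(-1j * H1 * dt / 2)

U1 = np.linalg.matrix_power(step1, r)
U2 = np.linalg.matrix_power(step2, r)

print("first-order error :", np.linalg.norm(U1 - U_exact, 2))
print("second-order error:", np.linalg.norm(U2 - U_exact, 2))
```

Rerunning with different r makes the predicted 1/r and 1/r² scaling of the two errors visible directly.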
Solving linear systems (Harrow, Hassidim & Lloyd, 2009)
- Consider a large system of linear equations: Ax = b.
- First assume that the matrix A is Hermitian.
- It is possible to simulate Hamiltonian evolution under A for time t: e^{-iAt}.
- Encode the initial state in the form |b⟩ = Σ_{ℓ=1}^{N} b_ℓ |ℓ⟩.
- The state can also be written in terms of the eigenvectors of A as |b⟩ = Σ_{j=1}^{N} β_j |λ_j⟩.
- We can obtain the solution |x⟩ if we can divide each β_j by λ_j.
- Use the phase estimation technique to place the estimate of λ_j in an ancillary register, obtaining Σ_j β_j |λ_j⟩|λ_j⟩.
- Append an ancilla and rotate it according to the value of λ_j to obtain
  Σ_j β_j |λ_j⟩|λ_j⟩ ( (1/λ_j)|0⟩ + √(1 − 1/λ_j²) |1⟩ ).
- Invert the phase estimation to remove the estimate of λ_j from the ancillary register, giving
  Σ_j β_j |λ_j⟩ ( (1/λ_j)|0⟩ + √(1 − 1/λ_j²) |1⟩ ).
- Use amplitude amplification to amplify the |0⟩ component of the ancilla, giving a state proportional to
  |x⟩ ∝ Σ_j (β_j/λ_j) |λ_j⟩ = Σ_ℓ x_ℓ |ℓ⟩.
- What about non-Hermitian A? Construct a block matrix A′ = [[0, A], [A†, 0]].
- The inverse of A′ is then A′⁻¹ = [[0, (A†)⁻¹], [A⁻¹, 0]].
- This means that A′⁻¹ (b, 0)ᵀ = (0, x)ᵀ; in terms of the state, |0⟩|b⟩ ↦ |1⟩|x⟩.

Complexity analysis (Harrow, Hassidim & Lloyd, 2009)
- We need to examine:
  1. The complexity of simulating the Hamiltonian to estimate the phase.
  2. The accuracy needed for the phase estimate.
  3. The possibility of 1/λ_j being greater than 1.
- The complexity of simulating the Hamiltonian for time t is approximately ∝ ‖A‖t = |λ_max| t.
- To obtain accuracy δ in the estimate of λ, the Hamiltonian needs to be simulated for time ∝ 1/δ.
- We actually need to multiply the state coefficients by λ_min/λ_j, to give Σ_j (|λ_min|/λ_j) β_j |λ_j⟩.
- To obtain accuracy ε in λ_min/λ_j, we need accuracy ε λ_j²/|λ_min| in the estimate of λ.
- The final complexity is ~ κ²/ε, where κ := |λ_max|/|λ_min| is the condition number.

Differential equations (Berry, 2010)
- Discretise the differential equation, then encode it as a linear system.
- Simplest discretisation: the Euler method. For dx/dt = Ax + b, take (x_{j+1} − x_j)/h = A x_j + b.
- The discretised equations are encoded in a block matrix, for example

    [  I         0         0    0    0 ] [x_0]   [x_in]
    [ -(I+Ah)    I         0    0    0 ] [x_1]   [ bh ]
    [  0        -(I+Ah)    I    0    0 ] [x_2] = [ bh ]
    [  0         0        -I    I    0 ] [x_3]   [  0 ]
    [  0         0         0   -I    I ] [x_4]   [  0 ]

  The first block row sets the initial condition x_0 = x_in, the rows containing -(I+Ah) implement the Euler steps, and the final rows set x to be constant.
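As a purely classical illustration of this encoding (a sketch with arbitrary A, b, step size, and block counts, not the quantum algorithm): the code below assembles the block matrix above, solves it with a dense solver, and checks that the recovered blocks agree with ordinary forward-Euler stepping.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3            # dimension of x (arbitrary)
steps = 3        # number of Euler steps encoded in the system
pad = 2          # extra rows that simply keep x constant
h = 0.01

A = rng.normal(size=(n, n))
b = rng.normal(size=n)
x_in = rng.normal(size=n)

N = 1 + steps + pad              # number of x_j blocks
I = np.eye(n)
M = np.zeros((N * n, N * n))
rhs = np.zeros(N * n)

# Block row 0 sets the initial condition x_0 = x_in.
M[0:n, 0:n] = I
rhs[0:n] = x_in

# Block rows 1..steps implement x_{j+1} - (I + A h) x_j = b h.
for j in range(steps):
    r0 = (j + 1) * n
    M[r0:r0 + n, j * n:(j + 1) * n] = -(I + A * h)
    M[r0:r0 + n, (j + 1) * n:(j + 2) * n] = I
    rhs[r0:r0 + n] = b * h

# Remaining block rows keep x constant: x_{j+1} - x_j = 0.
for j in range(steps, N - 1):
    r0 = (j + 1) * n
    M[r0:r0 + n, j * n:(j + 1) * n] = -I
    M[r0:r0 + n, (j + 1) * n:(j + 2) * n] = I

sol = np.linalg.solve(M, rhs).reshape(N, n)

# Compare against explicit forward-Euler stepping.
x = x_in.copy()
for _ in range(steps):
    x = x + h * (A @ x + b)
print("matches forward Euler:", np.allclose(sol[steps], x))
print("final blocks stay constant:", np.allclose(sol[-1], sol[steps]))
```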
Quantum walks
- A classical walk has a position which is an integer x, and which jumps either to the left or the right at each step.
- The resulting distribution is a binomial distribution, approaching a normal distribution in the limit.
- The quantum walk has position and coin values |x, c⟩.
- It alternates coin and step operators, e.g. C|x, ±1⟩ = (|x, −1⟩ ± |x, 1⟩)/√2 and S|x, c⟩ = |x + c, c⟩.
- The position can progress linearly in the number of steps.

Quantum walk on a graph (Farhi & Gutmann, 1998)
- The walk position is any node on the graph; an edge between j and j′ is denoted jj′.
- Describe the generator matrix K by
  K_{jj′} = γ        for j ≠ j′, jj′ ∈ G,
  K_{jj′} = 0        for j ≠ j′, jj′ ∉ G,
  K_{jj′} = −d(j)γ   for j = j′,
  where d(j) is the number of edges incident on vertex j.
- The probability distribution for a continuous-time random walk obeys dp_j(t)/dt = Σ_{j′} K_{jj′} p_{j′}(t).
- Quantum mechanically we have i d|ψ(t)⟩/dt = H|ψ(t)⟩, i.e. i d⟨j|ψ(t)⟩/dt = Σ_{j′} ⟨j|H|j′⟩⟨j′|ψ(t)⟩.
- The natural quantum analogue is therefore ⟨j|H|j′⟩ = K_{jj′}.
- We take ⟨j|H|j′⟩ = γ for j ≠ j′ with jj′ ∈ G, and 0 otherwise.
- Probability is conserved because H is Hermitian.

Quantum walk on a graph (Childs, Farhi, Gutmann, 2002)
- The goal is to traverse a "glued trees" graph from the entrance to the exit.
- Classically, the random walk takes exponential time.
- For the quantum walk, define the superposition states
  |col j⟩ = (1/√N_j) Σ_{a ∈ column j} |a⟩,  with N_j = 2^j for 0 ≤ j ≤ n and N_j = 2^{2n+1−j} for n+1 ≤ j ≤ 2n+1.
- On these states the matrix elements of the Hamiltonian are ⟨col j|H|col j±1⟩ = √2 γ.

Quantum walk on a graph (Childs, Cleve, Deotto, Farhi, Gutmann, Spielman, 2003)
- Add random connections between the two trees, so that all vertices (except the entrance and exit) have degree 3.
- Again using column states, the matrix elements of the Hamiltonian are
  ⟨col j|H|col j±1⟩ = √2 γ for j ≠ n, and 2γ for j = n.
- This is a walk on a line with a defect.
- There are reflections off the defect, but the quantum walk still reaches the exit efficiently.

NAND tree quantum walk (Farhi, Goldstone, Gutmann, 2007)
- In a game tree, I alternate making moves with an opponent.
- [Figure: a depth-3 game tree of alternating OR and AND gates acting on leaves x_1, …, x_8.]
- In this example, if I move first then I can always direct the ant to the sugar cube.
- What is the complexity of doing this in general? Do we need to query all the leaves?
- [Figure: the alternating AND/OR tree rewritten, using NOT gates, as a tree built entirely from NAND gates.]
- The Hamiltonian is a sum of an oracle Hamiltonian, representing the connections (the leaf values), and a fixed driving Hamiltonian, which is the remainder of the tree: H = H_O + H_D.
- Prepare a travelling wave packet on the left.
- If the answer to the NAND tree problem is 1, then after a fixed time the wave packet will be found on the right.
- The reflection of the wave packet depends on the solution of the NAND tree problem.
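For reference, a brute-force classical evaluation of a NAND tree (the leaf values below are an assumed example instance): every internal node is the NAND of its two children, and the root is the "who wins" answer. The quantum walk algorithm decides this root value in roughly √N time for N leaves, whereas this direct evaluation reads every leaf.

```python
def nand_tree(leaves):
    """Evaluate a balanced binary NAND tree over a list of 0/1 leaves.

    The leaf count must be a power of two; each internal node is
    NAND(left, right) = 1 - left * right.
    """
    level = list(leaves)
    while len(level) > 1:
        level = [1 - level[i] * level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

# Example with 8 leaves, matching the depth-3 tree sketched above.
print(nand_tree([0, 1, 1, 1, 0, 0, 1, 0]))
```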
Simulating quantum walks
- A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns the nodes that are connected to it.
- The quantum oracle is queried with a node number x and a neighbour number i.
- It returns its result via the quantum operation O_N |x, i⟩|0⟩ = |x, i⟩|y⟩, where y is the i'th neighbour of x.

Decomposing the Hamiltonian (Aharonov & Ta-Shma, 2003)
- In the matrix picture, we have a sparse matrix: the rows and columns correspond to node numbers, and the ones indicate connections between nodes.
- The oracle gives us the position of the i'th nonzero element in column x.
- We want to be able to separate the Hamiltonian into 1-sparse parts.
- This is equivalent to a graph colouring: the graph edges are coloured such that the edges meeting at each node all have distinct colours.

Graph colouring (Berry, Ahokas, Cleve, Sanders, 2007)
- How do we do this colouring?
- First guess: for each node, assign edges sequentially according to their numbering. This does not work, because the edge between nodes x and y may be edge 1 (for example) of x but edge 2 of y.
- Second guess: for the edge between x and y, colour it according to the pair of numbers (i_x, i_y), where it is edge i_x of node x and edge i_y of node y, and we fix the order in the pair by requiring x < y.
- It is still possible to have ambiguity: say we have x < y < z; then the edge xy and the edge yz can both receive the colour (1, 2), so two edges at y share a colour.
- To resolve this, use the string of nodes joined by edges of equal colour, and compress.

General Hamiltonian oracles (Aharonov & Ta-Shma, 2003)
- More generally, we can perform a colouring on a graph whose matrix elements take arbitrary (Hermitian) values.
- Then we also require an oracle that gives the values of the matrix elements:
  O_N |x, i⟩|0⟩ = |x, i⟩|y⟩,   O_H |x, y⟩|0⟩ = |x, y⟩|H_{x,y}⟩.

Simulating the 1-sparse case (Aharonov & Ta-Shma, 2003)
- Assume we have a 1-sparse matrix. How can we simulate evolution under this Hamiltonian?
- Two cases:
  1. If the element is on the diagonal, then we have a 1D subspace.
  2. If the element is off the diagonal, then we need a 2D subspace.
- We are given a column number x. There are then 5 quantities that we want to calculate:
  1. b_x: a bit registering whether the element is on or off the diagonal, i.e. whether x belongs to a 1D or 2D subspace.
  2. min_x: the minimum number in the (1D or 2D) subspace to which x belongs.
  3. max_x: the maximum number in the subspace to which x belongs.
  4. A_x: the entries of H in the subspace to which x belongs.
  5. U_x: the evolution under H for time t in that subspace.
- We have a unitary operation that maps |x⟩|0⟩ → |x⟩|b_x, min_x, max_x, A_x, U_x⟩.
- Consider a superposition of the two states in the subspace, |ψ⟩ = a|min_x⟩ + b|max_x⟩. Then we obtain |ψ⟩|0⟩ → |ψ⟩|b_x, min_x, max_x, A_x, U_x⟩.
- A second operation implements the controlled operation based on the stored description of the unitary U_x: |ψ⟩|U_x, min_x, max_x⟩ → (U_x|ψ⟩)|U_x, min_x, max_x⟩.
- This gives us (U_x|ψ⟩)|b_x, min_x, max_x, A_x, U_x⟩.
- Inverting the first operation then yields (U_x|ψ⟩)|0⟩.

Applications
- 2007: Discrete-query NAND algorithm – Childs, Cleve, Jordan, Yeung
- 2009: Solving linear systems – Harrow, Hassidim, Lloyd
- 2009: Implementing sparse unitaries – Jordan, Wocjan
- 2010: Solving linear differential equations – Berry
- 2013: Algorithm for scattering cross section – Clader, Jacobs, Sprouse

Implementing unitaries (Jordan & Wocjan, 2009)
- Construct a Hamiltonian from the unitary as H = [[0, U], [U†, 0]].
- Now simulate evolution under this Hamiltonian: since H² = I, we have e^{-iHt} = I cos t − iH sin t.
- Simulating for time t = π/2 gives e^{-iHπ/2}|1⟩|ψ⟩ = −iH|1⟩|ψ⟩ = −i|0⟩U|ψ⟩.
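A quick numerical check of this identity (a sketch using an arbitrary random test unitary; the dimension is illustrative): because H² = I for this block Hamiltonian, e^{-iHπ/2} = −iH, and the state |1⟩|ψ⟩ is mapped to −i|0⟩U|ψ⟩.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group

n = 4
U = unitary_group.rvs(n, random_state=0)   # an arbitrary test unitary

# Block Hamiltonian H = [[0, U], [U^dagger, 0]]; note that H @ H = identity.
H = np.block([[np.zeros((n, n)), U],
              [U.conj().T, np.zeros((n, n))]])

psi = np.random.default_rng(0).normal(size=n) + 0j
psi /= np.linalg.norm(psi)

state = np.concatenate([np.zeros(n), psi])           # |1>|psi> in block form
out = expm(-1j * H * np.pi / 2) @ state

expected = np.concatenate([-1j * (U @ psi), np.zeros(n)])   # -i |0> U|psi>
print("H^2 = I:", np.allclose(H @ H, np.eye(2 * n)))
print("matches -i|0>U|psi>:", np.allclose(out, expected))
```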
Quantum simulation via quantum walks
- Three ingredients:
  1. A Szegedy quantum walk.
  2. Coherent phase estimation.
  3. Controlled state preparation.
- The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian. By using phase estimation we can estimate the eigenvalue, and then implement the phase that is actually needed.

Szegedy quantum walk (Szegedy, 2004)
- The walk acts on two registers and uses two reflections, 2CC† − I and 2SS† − I.
- The first is controlled by the first register and acts on the second register. Given some matrix P[j, k], the operator C is defined by
  |φ_j⟩ = Σ_{k=1}^{N} √(P[j, k]) |k⟩,   C = Σ_{j=1}^{N} |j⟩⟨j| ⊗ |φ_j⟩.
- The diffusion operator 2SS† − I is controlled by the second register and acts on the first; it uses a similar definition with a matrix Q[j, k].
- Both are controlled reflections:
  2CC† − I = Σ_{j=1}^{N} |j⟩⟨j| ⊗ (2|φ_j⟩⟨φ_j| − I),
  2SS† − I = Σ_{k=1}^{N} (2|ψ_k⟩⟨ψ_k| − I) ⊗ |k⟩⟨k|.
- The eigenvalues and eigenvectors of one step of the quantum walk, (2CC† − I)(2SS† − I), are related to those of a matrix formed from P[j, k] and Q[j, k].

Szegedy walk for simulation (Berry & Childs, 2012)
- Use a symmetric system, with P[j, k] = Q[j, k] = H*_{jk}.
- Then the eigenvalues and eigenvectors of the walk are related to those of the Hamiltonian.
- In reality we need to modify this to a "lazy" quantum walk, with
  |φ_j⟩ = Σ_{k=1}^{N} √(δ H*_{jk}/‖H‖_1) |k⟩ + √(1 − δσ_j/‖H‖_1) |N+1⟩,   σ_j := Σ_ℓ |H_{jℓ}|,
  where the extra state |N+1⟩ carries the "do nothing" amplitude.
- Grover-style preparation over the d nonzero entries of column j gives a state of the form
  |φ_j⟩ = (1/√d) Σ_{k=1}^{d} |k⟩ ( √(δ H*_{jk}) |0⟩ + √(1 − δ|H_{jk}|) |1⟩ ).
- The simulation is then a three-step process:
  1. Start with the state in one of the subsystems, and perform the controlled state preparation.
  2. Perform steps of the quantum walk to approximate the Hamiltonian evolution.
  3. Invert the controlled state preparation, so that the final state is again in one of the subsystems.
- Step 2 can be performed directly with a small δ for the lazy quantum walk, or phase estimation can be used.
- The Hamiltonian has eigenvalues λ, so evolution under the Hamiltonian has eigenvalues e^{-iλt}. One step U of the quantum walk instead has eigenvalues μ_± = ±e^{±i arcsin(λδ)}.
- The complexity is governed by the maximum of ‖H‖t and a term proportional to d‖H‖_max t, together with the dependence on the allowable error ε.
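Finally, a numerical sketch of the spectral correspondence for the plain Szegedy walk (not the lazy Berry–Childs variant used above; the matrix P here is an arbitrary symmetric stochastic example). By Jordan's lemma, the product of the two reflections rotates each invariant plane by twice the principal angle between the two reflected subspaces, so the walk's eigenphases come in pairs ±2θ with cos θ a singular value of the overlap matrix C†S.

```python
import numpy as np

N = 6
# Symmetric, doubly stochastic P: a lazy random walk on a cycle of N nodes.
P = np.zeros((N, N))
for j in range(N):
    P[j, j] = 0.5
    P[j, (j + 1) % N] = 0.25
    P[j, (j - 1) % N] = 0.25
Q = P.copy()

# Isometries C|j> = |j>|phi_j> and S|k> = |psi_k>|k>, with square-root amplitudes.
C = np.zeros((N * N, N))
S = np.zeros((N * N, N))
basis = np.eye(N)
for j in range(N):
    C[:, j] = np.kron(basis[j], np.sqrt(P[j, :]))   # |j>|phi_j>
    S[:, j] = np.kron(np.sqrt(Q[:, j]), basis[j])   # |psi_j>|j>

I_big = np.eye(N * N)
W = (2 * C @ C.T - I_big) @ (2 * S @ S.T - I_big)   # one step of the walk

# Overlap ("discriminant") matrix between the two reflected subspaces.
sigma = np.linalg.svd(C.T @ S, compute_uv=False)

# Walk eigenvalues are e^{±2i*theta} with cos(theta) a singular value of C^T S,
# so cos(eigenphase / 2) should reproduce every singular value.
walk_cos = np.cos(np.angle(np.linalg.eigvals(W)) / 2)
print(all(np.min(np.abs(walk_cos - s)) < 1e-8 for s in sigma))
```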