Lecture 2: Techniques for quantum algorithms

Dominic Berry

Macquarie University

Grover’s search algorithm

• Problem: Given a function $f(x)$ mapping $n$ bits to 1 bit, $f: \{0,1\}^n \to \{0,1\}$, determine the value of $x$ such that $f(x) = 1$.

• Classically we need to evaluate $O(N)$ values, where $N = 2^n$.

• Grover’s algorithm enables a search with $O(\sqrt{N})$ queries.

• The quantum algorithm has two crucial steps:

1. Calculate the function on all values simultaneously.

2. Reflect about the equal superposition state.

• These steps are repeated $O(\sqrt{N})$ times.

• This is optimal.

Lov Grover, 1996
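The two steps above can be sketched numerically. A minimal dense-matrix illustration with numpy (the choices $n = 4$ and marked index 11 are arbitrary, and the explicit matrices are for illustration only, not an efficient implementation):

```python
import numpy as np

# Dense-matrix sketch of Grover's algorithm: n = 4 qubits, N = 16 entries.
n = 4
N = 2 ** n
omega = 11                                # arbitrary marked item, f(omega) = 1

s = np.full(N, 1 / np.sqrt(N))            # equal superposition state |s>
U_f = np.eye(N); U_f[omega, omega] = -1   # oracle: phase flip on omega
U_s = 2 * np.outer(s, s) - np.eye(N)      # reflection about |s>

# Repeat the two steps ~ (pi/4) sqrt(N) times.
k = int(np.floor(np.pi / 4 * np.sqrt(N)))
psi = s.copy()
for _ in range(k):
    psi = U_s @ (U_f @ psi)

print(np.argmax(np.abs(psi)))    # 11: the marked item dominates
print(np.abs(psi[omega]) ** 2)   # success probability, close to 1
```

With $N = 16$ this uses only $k = 3$ oracle calls, versus an average of $N/2 = 8$ classical evaluations.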

Optimality of Grover’s algorithm

• It is not possible to perform any better.

• NP is the class of problems that can be verified in polynomial time. That is, given $\omega$, you can verify that $f(\omega) = 1$ efficiently – but it is hard to find $\omega$.

• If it were possible to achieve an exponential speedup for search, it would mean that a quantum computer could solve NP problems in polynomial time.

• It is not possible to search any faster than $\sqrt{N}$.

• NP problems are still fundamentally hard on quantum computers.

Bennett, Bernstein, Brassard & Vazirani, 1997

Optimality of Grover’s algorithm

Method of solution:

• Denote the operations between oracle calls by $V_\ell$. The general state produced by a sequence of $k$ oracle calls is then
$$|\psi_k^\omega\rangle = V_k U_f V_{k-1} U_f \cdots V_2 U_f V_1 U_f |\psi\rangle$$

• If we did not call the oracle, then the state would be
$$|\psi_k\rangle = V_k V_{k-1} \cdots V_2 V_1 |\psi\rangle$$

• When we sum over all $\omega$, we need large $k$ to make the following quantity large:
$$r_k = \sum_{\omega=1}^{N} \left\| |\psi_k^\omega\rangle - |\psi_k\rangle \right\|^2$$

• We then show that, unless $r_k$ is large, we cannot obtain the solution with high probability.

Bennett, Bernstein, Brassard & Vazirani, 1997
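The quantity $r_k$ can be checked numerically for Grover's algorithm itself, where every $V_\ell$ is the diffusion operator (which fixes $|s\rangle$, so $|\psi_k\rangle = |s\rangle$). A small sketch (sizes arbitrary) confirming the standard hybrid-argument bound $r_k \le 4k^2$:

```python
import numpy as np

# Numerical check of the BBBV quantity r_k for Grover's algorithm.
# Here each V_l is the diffusion operator 2|s><s| - I, which fixes |s>,
# so the oracle-free state |psi_k> stays equal to |s>.
N = 16
s = np.full(N, 1 / np.sqrt(N))
V = 2 * np.outer(s, s) - np.eye(N)

for k in range(1, 6):
    r_k = 0.0
    for omega in range(N):
        U_f = np.eye(N); U_f[omega, omega] = -1
        psi = s.copy()
        for _ in range(k):
            psi = V @ (U_f @ psi)              # psi_k^omega
        r_k += np.linalg.norm(psi - s) ** 2    # |psi_k> = |s> here
    assert r_k <= 4 * k ** 2 + 1e-9            # hybrid-argument bound
    print(k, r_k)
```

Since $r_k$ grows at most quadratically in $k$, but must reach order $N$ for the runs with different $\omega$ to be distinguishable, $k = \Omega(\sqrt{N})$ oracle calls are required.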

Amplitude amplification

• Problem: Given a function $f(x)$ mapping $n$ bits to 1 bit, $f: \{0,1\}^n \to \{0,1\}$, and an algorithm that produces a state that is a superposition of values of $x$, obtain a state that is a superposition only of values of $x$ such that $f(x) = 1$.

• The idea is that $f(x) = 1$ corresponds to a success, and we want to amplify the successful component of the state.

• Denote the state prepared by the algorithm
$$|s\rangle = \sum_{x=1}^{N} \psi_x |x\rangle$$

• Let the $m$ success states be $|\omega_n\rangle$.

• If we were just to measure, the probability of obtaining success would be
$$p = \sum_{x \in \{\omega_n\}} |\psi_x|^2$$

• Classically, repeating would require $O(1/p)$ repetitions.

• Amplitude amplification only needs $O(1/\sqrt{p})$.

Brassard & Høyer, 1997

Amplitude amplification

• Method: Alternate reflections about the prepared state $|s\rangle$ and the solution states $|\omega_n\rangle$.

• As for Grover’s algorithm, the reflection about the solution states is given by the oracle
$$U_f = \mathbb{I} - 2P_\omega, \qquad P_\omega = \sum_{n=1}^{m} |\omega_n\rangle\langle\omega_n|$$

• In Grover’s algorithm, we can denote the operation that transforms $|0\rangle$ to the equal superposition state by $U$. This is just a tensor product of Hadamards.

• The reflection about the superposition state is obtained by performing $U^\dagger$, reflecting about $|0\rangle$, then performing $U$ (for Hadamards, $U^\dagger = U$):
$$U_s = 2|s\rangle\langle s| - \mathbb{I} = U \left( 2|0\rangle\langle 0| - \mathbb{I} \right) U^\dagger$$

• Here we can denote the algorithm by a unitary operation $U$ that transforms $|0\rangle$ to $|s\rangle$.

• The reflection is obtained in exactly the same way:
$$U_s = U \left( 2|0\rangle\langle 0| - \mathbb{I} \right) U^\dagger$$

Brassard & Høyer, 1997
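The identity $U_s = U(2|0\rangle\langle 0| - \mathbb{I})U^\dagger$ is easy to verify numerically. A sketch with a random unitary standing in for the preparation algorithm (the dimension and seed are arbitrary choices):

```python
import numpy as np

# Verify that the reflection about |s> = U|0> can be built from U,
# a reflection about |0>, and U^dagger.
rng = np.random.default_rng(0)
N = 8

# A random unitary U (QR of a random complex matrix), standing in
# for the state-preparation algorithm.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(A)

s = U[:, 0]                                   # |s> = U|0>
e0 = np.eye(N)[0]
R0 = 2 * np.outer(e0, e0) - np.eye(N)         # 2|0><0| - I
U_s = U @ R0 @ U.conj().T

print(np.allclose(U_s, 2 * np.outer(s, s.conj()) - np.eye(N)))  # True
```

The construction matters because a direct reflection about an unknown $|s\rangle$ is not available as a gate, while $U$, $U^\dagger$, and a reflection about $|0\rangle$ all are.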

Amplitude amplification

• The initial state can be written as
$$|s\rangle = \sum_{x=1}^{N} \psi_x |x\rangle = \sum_{x \notin \{\omega_n\}} \psi_x |x\rangle + \sum_{x \in \{\omega_n\}} \psi_x |x\rangle = \cos\phi \, |s'\rangle + \sin\phi \, |\omega\rangle$$
where
$$\sin^2\phi = p, \qquad |s'\rangle = \frac{1}{\sqrt{1-p}} \sum_{x \notin \{\omega_n\}} \psi_x |x\rangle, \qquad |\omega\rangle = \frac{1}{\sqrt{p}} \sum_{x \in \{\omega_n\}} \psi_x |x\rangle$$

• The action of $U_s = 2|s\rangle\langle s| - \mathbb{I}$ on $|\omega\rangle$ is
$$U_s |\omega\rangle = (2|s\rangle\langle s| - \mathbb{I}) |\omega\rangle = 2 \langle s|\omega\rangle \, |s\rangle - |\omega\rangle = 2 \sin\phi \, |s\rangle - |\omega\rangle$$

• This gives
$$U_s |\omega\rangle = 2 \sin\phi \left( \cos\phi \, |s'\rangle + \sin\phi \, |\omega\rangle \right) - |\omega\rangle = 2 \sin\phi \cos\phi \, |s'\rangle - \left( 1 - 2\sin^2\phi \right) |\omega\rangle = \sin 2\phi \, |s'\rangle - \cos 2\phi \, |\omega\rangle$$

Brassard & Høyer, 1997

Amplitude amplification

• Recall $|s\rangle = \cos\phi \, |s'\rangle + \sin\phi \, |\omega\rangle$ with $\sin^2\phi = p$, and $U_s |\omega\rangle = \sin 2\phi \, |s'\rangle - \cos 2\phi \, |\omega\rangle$.

• The action of $U_s$ on $|s'\rangle$ is
$$U_s |s'\rangle = (2|s\rangle\langle s| - \mathbb{I}) |s'\rangle = 2 \langle s|s'\rangle \, |s\rangle - |s'\rangle = 2 \cos\phi \, |s\rangle - |s'\rangle$$

• This gives
$$U_s |s'\rangle = 2 \cos\phi \left( \cos\phi \, |s'\rangle + \sin\phi \, |\omega\rangle \right) - |s'\rangle = \left( 2\cos^2\phi - 1 \right) |s'\rangle + 2 \sin\phi \cos\phi \, |\omega\rangle = \cos 2\phi \, |s'\rangle + \sin 2\phi \, |\omega\rangle$$

• Consider the state $|\psi\rangle = \cos\theta \, |s'\rangle + \sin\theta \, |\omega\rangle$.

• The action of $U_f$ is $U_f |\psi\rangle = \cos\theta \, |s'\rangle - \sin\theta \, |\omega\rangle$.

• Then applying $U_s$ gives
$$U_s U_f |\psi\rangle = \cos\theta \left( \cos 2\phi \, |s'\rangle + \sin 2\phi \, |\omega\rangle \right) - \sin\theta \left( \sin 2\phi \, |s'\rangle - \cos 2\phi \, |\omega\rangle \right)$$
$$= \left( \cos\theta \cos 2\phi - \sin\theta \sin 2\phi \right) |s'\rangle + \left( \cos\theta \sin 2\phi + \sin\theta \cos 2\phi \right) |\omega\rangle = \cos(\theta + 2\phi) \, |s'\rangle + \sin(\theta + 2\phi) \, |\omega\rangle$$

Brassard & Høyer, 1997

Amplitude amplification

• With the state $|\psi\rangle = \cos\theta \, |s'\rangle + \sin\theta \, |\omega\rangle$,
$$U_s U_f |\psi\rangle = \cos(\theta + 2\phi) \, |s'\rangle + \sin(\theta + 2\phi) \, |\omega\rangle$$

• This means that the algorithm gives
$$\left( U_s U_f \right)^k |s\rangle = \cos[(2k+1)\phi] \, |s'\rangle + \sin[(2k+1)\phi] \, |\omega\rangle$$

• The optimal number of iterations is therefore given by
$$(2k+1)\phi \approx \frac{\pi}{2}, \qquad k \approx \frac{\pi}{4\sqrt{p}}$$

Brassard & Høyer, 2000
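The rotation formula and the iteration count can be checked numerically. An illustrative sketch (the amplitudes and the marked indices are arbitrary choices):

```python
import numpy as np

# Amplitude amplification on random real amplitudes with a few marked states.
rng = np.random.default_rng(1)
N = 64
marked = [3, 17, 40]                         # arbitrary success states

psi = rng.normal(size=N)
psi /= np.linalg.norm(psi)                   # |s> = sum_x psi_x |x>
p = sum(psi[x] ** 2 for x in marked)         # initial success probability

U_f = np.eye(N)
for x in marked:
    U_f[x, x] = -1                           # phase flip on success states
U_s = 2 * np.outer(psi, psi) - np.eye(N)     # reflection about |s>

phi = np.arcsin(np.sqrt(p))
k = int(round((np.pi / (2 * phi) - 1) / 2))  # (2k+1) phi ~ pi/2
state = psi.copy()
for _ in range(k):
    state = U_s @ (U_f @ state)

p_final = sum(state[x] ** 2 for x in marked)
print(p, p_final)    # p_final = sin^2((2k+1) phi), close to 1
```

With $(2k+1)\phi$ within $\phi$ of $\pi/2$, the final success probability is at least $\cos^2\phi = 1 - p$, so a single measurement almost certainly yields a marked state after only $O(1/\sqrt{p})$ iterations.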

State preparation

• Consider an alternative formulation of the amplitude amplification problem.

• Our algorithm gives us the state
$$|s\rangle = \sum_{x=1}^{N} s_x |x\rangle \left( \sin\theta_x |0\rangle + \cos\theta_x |1\rangle \right)$$

• We want to remove the $|1\rangle$ component – i.e., we want the effect of a measurement on the ancilla with result $0$.

• We use these two reflections:
$$U_f = \mathbb{I} \otimes \left( \mathbb{I} - 2|0\rangle\langle 0| \right), \qquad U_s = 2|s\rangle\langle s| - \mathbb{I}$$

• This is now identical to standard amplitude amplification. The oracle just returns success if the ancilla is in state $|0\rangle$.

• The number of steps is
$$\sim \frac{\pi}{4\sqrt{p}} = \frac{\pi}{4} \left( \sum_{x=1}^{N} |s_x|^2 \sin^2\theta_x \right)^{-1/2}$$

Lov Grover, 2000


State preparation

• We want to prepare the state
$$|\psi\rangle = \sum_{x=1}^{N} \psi_x |x\rangle$$

• Consider an oracle that yields $U_\psi |x\rangle|0\rangle = |x\rangle|\psi_x\rangle$. The procedure is:

1. Start with an equal superposition state with two ancillas
$$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle |0\rangle |0\rangle$$

2. Apply the oracle to give
$$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle |\psi_x\rangle |0\rangle$$

3. Apply a rotation on the second ancilla controlled by the state of the first ancilla to give
$$\frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle |\psi_x\rangle \left( \psi_x |0\rangle + \sqrt{1 - |\psi_x|^2} \, |1\rangle \right)$$

4. Invert the oracle to give
$$|s\rangle = \frac{1}{\sqrt{N}} \sum_{x=1}^{N} |x\rangle |0\rangle \left( \psi_x |0\rangle + \sqrt{1 - |\psi_x|^2} \, |1\rangle \right)$$

5. The procedure in steps 1 to 4 then prepares the state $|s\rangle$ that can be used in the amplitude amplification procedure described earlier.

• The number of steps required in the amplitude amplification is then
$$\sim \frac{\pi}{4\sqrt{p}} = \frac{\pi \sqrt{N}}{4}$$

• This problem may be used to encode a search problem; for example, where $\psi_\omega = 1$ for an unknown $\omega$. That means that this complexity is optimal for worst-case problems.

• For specific problems, the complexity may be improved upon.

Lov Grover, 2000
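Steps 1 to 4 can be mimicked with plain vectors. A sketch (omitting the $|\psi_x\rangle$ register, which step 4 returns to $|0\rangle$; the target amplitudes are an arbitrary choice):

```python
import numpy as np

# Sketch of the state-preparation procedure: simulate the |x> register
# and the rotated ancilla after step 4, then check that projecting the
# ancilla onto |0> leaves the target state.
rng = np.random.default_rng(2)
N = 16
target = rng.random(N)
target /= np.linalg.norm(target)          # amplitudes psi_x (real, positive)

# After step 4: (1/sqrt(N)) sum_x |x> (psi_x |0> + sqrt(1-psi_x^2) |1>)
state = np.zeros((N, 2))
state[:, 0] = target / np.sqrt(N)
state[:, 1] = np.sqrt(1 - target ** 2) / np.sqrt(N)

p = np.sum(state[:, 0] ** 2)              # success probability
good = state[:, 0] / np.sqrt(p)           # ancilla projected onto |0>

print(np.isclose(p, 1 / N))               # True: p = (1/N) sum_x |psi_x|^2 = 1/N
print(np.allclose(good, target))          # True: the desired state
print(np.pi / 4 * np.sqrt(N))             # ~ number of amplification steps
```

Since the normalised amplitudes give $p = 1/N$ regardless of the target, the amplification count $\pi\sqrt{N}/4$ is independent of which state is being prepared.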


Quantum phase estimation

• Often in quantum algorithms, one has a unitary operator $U$ and wants to find an eigenvalue.

• Given a state of the form
$$|\psi\rangle = \sum_{k=1}^{N} \psi_k |\lambda_k\rangle$$
where $e^{i\lambda_k}$ is an eigenvalue of $U$ and $|\lambda_k\rangle$ is the corresponding eigenvector, one wants to obtain an estimate $\tilde\lambda_k$ of $\lambda_k$ with probability $|\psi_k|^2$.

• Alternatively, one wants to place bits of the estimate in an ancillary register in a coherent way, to obtain (approximately) the state
$$\sum_{k=1}^{N} \psi_k |\lambda_k\rangle |\tilde\lambda_k\rangle$$

Alexei Kitaev, 1995

Quantum phase estimation

• Consider an initial eigenstate $|\psi\rangle = |\lambda\rangle$.

• The Hadamard gives $(|0\rangle + |1\rangle)/\sqrt{2}$.

• The controlled-$U^\ell$ operation gives $(|0\rangle + e^{i\ell\lambda}|1\rangle)/\sqrt{2}$.

• The state of the set of $n$ qubits is
$$\frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n - 1} e^{ix\lambda} |x\rangle$$

[Circuit: $n = 6$ control qubits, each initialised as $|0\rangle$ with a Hadamard $H$; controlled $U$, $U^2$, $U^4$, $U^8$, $U^{16}$, $U^{32}$ operations act on the target register $|\lambda\rangle$; an inverse QFT is then applied to the control qubits.]

Alexei Kitaev, 1995

Quantum phase estimation

• As before, the state of the set of $n$ qubits is
$$\frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n - 1} e^{ix\lambda} |x\rangle$$

• The inverse QFT is the operation
$$\frac{1}{\sqrt{2^n}} \sum_{x,y=0}^{2^n - 1} e^{-2\pi i x y / 2^n} |y\rangle\langle x|$$

• It therefore gives
$$\frac{1}{2^n} \sum_{x,y=0}^{2^n - 1} e^{-2\pi i x y / 2^n} e^{ix\lambda} |y\rangle$$

• For $\lambda = 2\pi m / 2^n$, the sum over $x$ is nonzero only for $y = m$, giving the state $|m\rangle$.

• That is, for $\lambda = 2\pi \times 0.\lambda_1 \lambda_2 \lambda_3 \ldots \lambda_n$, the output state is $|\lambda_1\rangle |\lambda_2\rangle \ldots |\lambda_n\rangle$.

• The net result is that the state $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle$ becomes $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle |\tilde\lambda_k\rangle$.

Alexei Kitaev, 1995
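The effect of the inverse QFT can be verified directly. A sketch with arbitrary choices $n = 5$ and $m = 11$:

```python
import numpy as np

# Phase estimation readout: prepare (1/sqrt(2^n)) sum_x e^{i x lambda} |x>,
# apply the inverse QFT, and check the output is |m> when lambda = 2 pi m / 2^n.
n = 5
dim = 2 ** n
m = 11
lam = 2 * np.pi * m / dim

x = np.arange(dim)
state = np.exp(1j * x * lam) / np.sqrt(dim)

# Inverse QFT matrix: (1/sqrt(2^n)) sum_{x,y} e^{-2 pi i x y / 2^n} |y><x|
iqft = np.exp(-2j * np.pi * np.outer(x, x) / dim) / np.sqrt(dim)
out = iqft @ state

print(np.argmax(np.abs(out)))   # 11: the register reads out m exactly
print(np.abs(out[m]) ** 2)      # 1.0 (up to rounding) for exact lambda
```

For a $\lambda$ that is not an exact multiple of $2\pi/2^n$, the output is instead sharply peaked on the nearest values of $m$, which is the source of the estimation error.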

Operation conversion

• We can implement one unitary $V$, but we want to implement a different operation $O$.

• The operations share eigenstates, but the eigenvalues are related by a function:
$$V|\lambda\rangle = e^{i\lambda} |\lambda\rangle, \qquad O|\lambda\rangle = f(\lambda) |\lambda\rangle$$

• Implement the unitary $V$ many times and perform phase estimation to give
$$|\psi\rangle = \sum_{k=1}^{N} \psi_k |\lambda_k\rangle \mapsto \sum_{k=1}^{N} \psi_k |\lambda_k\rangle |\tilde\lambda_k\rangle$$

• The eigenvalue $f(\lambda)$ can then be imposed, giving (ignoring error)
$$\sum_{k=1}^{N} \psi_k f(\lambda_k) |\lambda_k\rangle |\tilde\lambda_k\rangle$$

• Inverting the phase estimation gives
$$\sum_{k=1}^{N} \psi_k f(\lambda_k) |\lambda_k\rangle = O|\psi\rangle$$
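Classically, the same conversion is just a change of eigenvalues in the shared eigenbasis. A linear-algebra sketch of the relationship (the choice $f(\lambda) = e^{i\lambda/2}$, i.e. $O = V^{1/2}$, is a hypothetical example):

```python
import numpy as np

# V and O share eigenvectors; O's eigenvalues are a function f of V's
# eigenphases. Example choice: f(lam) = e^{i lam / 2}, so O^2 = V.
rng = np.random.default_rng(3)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (H + H.conj().T) / 2                  # Hermitian generator
eigval, eigvec = np.linalg.eigh(H)

V = eigvec @ np.diag(np.exp(1j * eigval)) @ eigvec.conj().T
f = lambda lam: np.exp(1j * lam / 2)
O = eigvec @ np.diag(f(eigval)) @ eigvec.conj().T

print(np.allclose(O @ O, V))              # True: O is a square root of V
```

The quantum procedure achieves the same eigenvalue substitution without ever diagonalising $V$: phase estimation makes $\tilde\lambda_k$ available in a register, where $f$ can be applied.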


Simulation of Hamiltonians

• In general, the problem of simulating a physical system is that of simulating Hamiltonian evolution:
$$\frac{d}{dt} |\psi(t)\rangle = -iH(t) |\psi(t)\rangle$$

• For a constant Hamiltonian, we would have
$$|\psi(t)\rangle = e^{-iHt} |\psi(0)\rangle$$

• A typical physical system consists of many subsystems.

• Each subsystem is of limited dimension $d$, but with $n$ subsystems the overall dimension is $d^n$.

• A typical Hamiltonian is a sum of local terms on individual subsystems, and interaction terms on 2 subsystems:
$$H = \sum_k H_k$$

Seth Lloyd, 1996

Simulation of Hamiltonians

• We want to simulate the evolution $|\psi(t)\rangle = e^{-iHt} |\psi(0)\rangle$ with a Hamiltonian that is a sum of terms:
$$H = \sum_k H_k$$

• Each individual $H_k$ has low dimension, so it can be efficiently simulated on its own.

• That is, we can perform $e^{-iH_k t}$.

• For short times we can use the approximation
$$e^{-iH_1 \delta t} e^{-iH_2 \delta t} \cdots e^{-iH_{K-1} \delta t} e^{-iH_K \delta t} \approx e^{-iH \delta t}$$

Seth Lloyd, 1996

Simulation of Hamiltonians

๏ฎ For short times we can use the approximation that ๐‘’ −๐‘–๐ป

1 ๐›ฟ๐‘ก ๐‘’ −๐‘–๐ป

2 ๐›ฟ๐‘ก … ๐‘’ −๐‘–๐ป

๐พ−1 ๐›ฟ๐‘ก ๐‘’ −๐‘–๐ป

๐พ ๐›ฟ๐‘ก ≈ ๐‘’ −๐‘–๐ป๐›ฟ๐‘ก

๏ฎ

๏ฎ

๏ฎ

๏ฎ

This approximation is because ๐‘’ −๐‘–๐ป

1 ๐›ฟ๐‘ก ๐‘’ −๐‘–๐ป

2 ๐›ฟ๐‘ก … ๐‘’ −๐‘–๐ป

๐พ−1 ๐›ฟ๐‘ก ๐‘’ −๐‘–๐ป

๐พ ๐›ฟ๐‘ก

= ๐•€ − ๐‘–๐ป

1 ๐›ฟ๐‘ก + ๐‘‚ ๐›ฟ๐‘ก 2 ๐•€ − ๐‘–๐ป

2 ๐›ฟ๐‘ก + ๐‘‚ ๐›ฟ๐‘ก 2

… ๐•€ − ๐‘–๐ป

๐พ ๐›ฟ๐‘ก + ๐‘‚ ๐›ฟ๐‘ก 2

= ๐•€ − ๐‘–๐ป

1 ๐›ฟ๐‘ก − ๐‘–๐ป

2

= ๐•€ − ๐‘–๐ป๐›ฟ๐‘ก + ๐‘‚ ๐›ฟ๐‘ก ๐›ฟ๐‘ก … − ๐‘–๐ป

๐พ

2 ๐›ฟ๐‘ก + ๐‘‚ ๐›ฟ๐‘ก 2

= ๐‘’ −๐‘–๐ป๐›ฟ๐‘ก + ๐‘‚(๐›ฟ๐‘ก 2 )

If we divide long time ๐‘ก into ๐‘Ÿ intervals, then ๐‘’ −๐‘–๐ป๐‘ก = ๐‘’ −๐‘–๐ป๐‘ก/๐‘Ÿ ๐‘Ÿ = ๐‘’ −๐‘–๐ป

1 ๐‘ก/๐‘Ÿ ๐‘’ −๐‘–๐ป

2 ๐‘ก/๐‘Ÿ … ๐‘’ −๐‘–๐ป

๐พ ๐‘ก/๐‘Ÿ + ๐‘‚ ๐‘ก/๐‘Ÿ

= ๐‘’ −๐‘–๐ป

1 ๐‘ก/๐‘Ÿ ๐‘’ −๐‘–๐ป

2 ๐‘ก/๐‘Ÿ … ๐‘’ −๐‘–๐ป

๐พ ๐‘ก/๐‘Ÿ ๐‘Ÿ + ๐‘‚ ๐‘ก 2 /๐‘Ÿ

2

Seth Lloyd ๐‘Ÿ

Typically, we want to simulate a system with some maximum allowable error ๐œ€ .

Then we need ๐‘Ÿ ∝ ๐‘ก 2 /๐œ€ .
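The $O(t^2/r)$ error scaling can be checked numerically. A sketch with two non-commuting Hermitian terms on a pair of qubits (the specific Pauli terms are an arbitrary choice):

```python
import numpy as np

def expmh(H, t):
    """e^{-i H t} for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H1 = np.kron(X, I2)        # local term on the first qubit
H2 = np.kron(Z, Z)         # two-qubit interaction term (does not commute with H1)
H = H1 + H2

t = 1.0
exact = expmh(H, t)
errs = []
for r in [1, 10, 100]:
    step = expmh(H1, t / r) @ expmh(H2, t / r)            # one Trotter step
    trotter = np.linalg.matrix_power(step, r)             # r repetitions
    errs.append(np.linalg.norm(trotter - exact, 2))
    print(r, errs[-1])     # error shrinks roughly as 1/r, i.e. as t^2/r
```

Each tenfold increase in $r$ reduces the error by roughly tenfold, consistent with needing $r \propto t^2/\varepsilon$ for target error $\varepsilon$.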
