Information causality from an entropic and a probabilistic perspective
Tony Short
University of Cambridge
(with Sabri Al-Safi – PRA 84, 042323 (2011))
Overview
• Comparing the information-theoretic capabilities of quantum theory with other possible theories can help us:
  - Understand why nature is quantum
  - Hone our intuitions about quantum applications
• Surprisingly, despite entanglement, quantum theory is no better than classical for some non-local tasks... Why?
  - Non-local computation [Linden et al. 2007]
  - Guess your neighbour's input [Almeida et al. 2010]
  - Information causality [Pawlowski et al. 2009]
The CHSH game
[Figure: the CHSH game. Alice receives a random x ∈ {0,1} and outputs a ∈ {0,1}; Bob receives a random y ∈ {0,1} and outputs b ∈ {0,1}. They may use shared resources, but cannot communicate. Task: a ⊕ b = xy.]
• What correlations P(a,b|x,y) are achievable given certain resources?
• What is the maximum success probability p in this game?
• Local (classical): P(a,b|x,y) = Σ_λ q_λ P_λ(a|x) P_λ(b|y)
  p_C ≤ 3/4 (Bell's theorem - CHSH inequality)
• Quantum: P(a,b|x,y) = Tr(P_x^a ⊗ P_y^b ρ)
  p_Q ≤ (2+√2)/4 (Tsirelson's bound)
• General (box-world): any correlations obeying the non-signalling conditions, i.e. Σ_a P(a,b|x,y) is independent of x, and Σ_b P(a,b|x,y) is independent of y
  p_G ≤ 1 (PR-boxes)
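As a quick illustration of the classical bound (not from the talk), a minimal Python sketch that brute-forces p_C by enumerating all deterministic strategies; shared randomness is just a mixture of these, so it cannot do better.

```python
import itertools

# Enumerate all deterministic CHSH strategies: Alice outputs a = fa[x],
# Bob outputs b = fb[y]; they win when a XOR b == x*y (x, y uniform).
best = 0.0
for fa in itertools.product((0, 1), repeat=2):
    for fb in itertools.product((0, 1), repeat=2):
        p = sum((fa[x] ^ fb[y]) == x * y
                for x in (0, 1) for y in (0, 1)) / 4
        best = max(best, p)

print(best)  # 0.75, matching p_C <= 3/4
```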
• PR-box correlations [Popescu, Rohrlich (1994)]: the optimal non-signalling correlations (p = 1)

  P_PR(a,b|x,y) = 1/2 if a ⊕ b = xy
                = 0   if a ⊕ b ≠ xy
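An illustrative sampler for these correlations (function name is mine, not the talk's): each local output is uniform regardless of the distant input, so the box is non-signalling, yet it wins the CHSH game every time.

```python
import random

def pr_box(x, y):
    """Sample (a, b) from P_PR(a,b|x,y): a is uniform, b = a XOR xy."""
    a = random.randint(0, 1)
    return a, a ^ (x & y)

# Each local output is uniform whatever the far input is (non-signalling),
# yet a XOR b = xy on every run, so the CHSH game is won with p = 1.
for x in (0, 1):
    for y in (0, 1):
        a, b = pr_box(x, y)
        assert a ^ b == x & y
```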
• Problem: is there a good, physical intuition behind p_Q ≤ (2 + √2)/4?
Information Causality
• Information causality relates to a particular communication task [Pawlowski et al., Nature 461, 1101 (2009)]

[Figure: the Information Causality game. Alice receives N random bits x_1...x_N and sends Bob m classical bits; Bob receives a random y ∈ {1,...,N} and outputs b_y, his best guess of x_y.]

  Task: maximize Σ_{y=1}^N I(x_y : b_y)
• I(x:y) is the classical mutual information:

  I(X:Y) = H(X) + H(Y) − H(XY), where H(X) = −Σ_x p(x) log₂ p(x)
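For concreteness, a small Python helper (illustrative; the function names are mine) computing I(X:Y) from a joint distribution:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability array (zeros ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(pxy):
    """I(X:Y) = H(X) + H(Y) - H(XY) for a joint distribution matrix pxy."""
    pxy = np.asarray(pxy, dtype=float)
    return shannon(pxy.sum(axis=1)) + shannon(pxy.sum(axis=0)) - shannon(pxy)

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0 (perfectly correlated)
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0 (independent)
```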
• The Information Causality principle states that

  J = Σ_{y=1}^N I(x_y : b_y) ≤ m

• Physical intuition: the total information that Bob can extract about Alice's N bits must be no greater than the m bits Alice sends him.
• However, note that Bob only guesses 1 bit in each game.
• The bound on J can easily be saturated: Alice simply sends Bob the first m of her bits x_1...x_m.
• Information Causality is obeyed in quantum theory and classical theory, and in any theory in which a 'good' measure of mutual information can be defined (see later).
• Information Causality can be violated by general non-signalling correlations. E.g. one can achieve J = N >> m = 1, using

  P(a,b|x,y) = 1/2 if a ⊕ b = x·y
             = 0   otherwise
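For N = 2 this is the standard PR-box protocol; a sketch (function names mine) in which one box use plus a single classical bit lets Bob recover whichever of Alice's two bits he wants, giving J = 2 with m = 1.

```python
import random

def pr_box(x, y):
    """One use of a PR-box: a is uniform, with a XOR b = x AND y."""
    a = random.randint(0, 1)
    return a, a ^ (x & y)

# N = 2, m = 1: Alice puts x1 XOR x2 into her end of the box and sends
# c = x1 XOR a; Bob inputs y (0 to learn x1, 1 to learn x2), outputs c XOR b.
for _ in range(1000):
    x1, x2, y = (random.randint(0, 1) for _ in range(3))
    a, b = pr_box(x1 ^ x2, y)
    c = x1 ^ a                         # the single classical bit sent
    assert c ^ b == (x2 if y else x1)  # Bob's guess is always correct

print("Bob recovers x_y perfectly: J = N = 2 with m = 1")
```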
• Information Causality can be violated using any correlations which violate Tsirelson's bound for the CHSH game (when N = 2^n, m = 1):

  p > (2 + √2)/4  ⟹  J > m
• Hence: Information Causality ⟹ Tsirelson's bound
• Furthermore, it can even generate part of the curved surface of quantum correlations [Allcock, Brunner, Pawlowski, Scarani 2009].
• But why is this particular task and figure of merit J so important?
• What about the probability of success in the game?
• Given that J is a strange non-linear function of the probabilities, how does it yield nice bounds on quantum correlations?
• Is mutual information central to quantum theory?
I.C. - A probabilistic perspective
• If we use the probability of success in the Information Causality game, quantum theory can do better than classical.

[Figure: the same game. Alice receives N random bits x_1...x_N and sends m classical bits; Bob receives a random y ∈ {1,...,N} and outputs b_y, his best guess of x_y.]

  Task: maximize P(b_y = x_y)
• When m = 1, N = 2, the maximum success probabilities are the same as for the CHSH game:

  p_C = 3/4,  p_Q = (2 + √2)/4,  p_G = 1
• The m = 1 case for general N has been studied as 'random access coding' [Ambainis et al. 2008, Pawlowski & Zukowski 2010]:

  p_C = (1/2) [1 + (1/2^(N−1)) C(N−1, ⌊(N−1)/2⌋)]
  p_Q = (1/2) (1 + 1/√N)   (known to be tight for N = 2^k 3^j)
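A quick numerical check of these expressions (assuming the classical formula is the majority-encoding bound, as reconstructed above); at N = 2 they reduce to the CHSH values.

```python
from math import comb, sqrt

def p_classical(N):
    # Majority-vote encoding of Alice's N bits into one message bit.
    return 0.5 * (1 + comb(N - 1, (N - 1) // 2) / 2 ** (N - 1))

def p_quantum(N):
    return 0.5 * (1 + 1 / sqrt(N))

for N in (2, 3, 4):
    print(N, p_classical(N), p_quantum(N))
# N = 2 gives p_C = 0.75 and p_Q = 0.8536... = (2 + sqrt(2))/4
```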
• Furthermore, J = Σ_y I(x_y : b_y) and the success probability are not monotonically related. E.g. for N = 2, m = 1:
  - Strategy 1: Alice sends x_1 with a little noise ε: J ≈ 1 − ε, p = 3/4 − ε′
  - Strategy 2: Alice sends either x_1 or x_2 perfectly, based on a random bit shared with Bob: J ≈ 0.38, p = 3/4
• What is the relation between bounds on J and on the success probability, and how do these relate to Tsirelson's bound?
• Define p_y as the probability of success when Bob is given y, and the corresponding bias E_y = 2p_y − 1.
• When proving Tsirelson's bound, the crucial step uses a quadratic bound on the entropy:

  J = Σ_y I(x_y : b_y) ≥ Σ_y [1 − H(p_y, 1−p_y)] ≥ Σ_y E_y² / (2 ln 2)

  When m = 1, Information Causality therefore implies

  Σ_{y=1}^N E_y² ≤ 2 ln 2
• Can we derive a 'quadratic bias bound' like this directly?
Information Causality as a non-local game
• It is helpful to consider a non-local version of the Information Causality game. This is at least as hard as the previous version with m = 1 (as Alice can send the message a, and Bob can output b_y = a ⊕ b).

[Figure: the non-local version. Alice receives N random bits x_1...x_N and outputs a; Bob receives a random y ∈ {1,...,N} and outputs b. No message is sent. Task: a ⊕ b = x_y.]
• For any quantum strategy,

  E_y = (1/2^N) Σ_x (−1)^(x_y) ⟨â_x ⊗ b̂_y⟩

  where â_x and b̂_y are the ±1-valued observables measured by Alice and Bob on the shared state |ψ⟩.
• Using similar techniques to those in the non-local computation paper [Linden et al. (2007)], we define the vectors

  |α⟩  = (1/√(2^N)) Σ_x |x⟩ ⊗ â_x |ψ⟩
  |β_y⟩ = (1/√(2^N)) Σ_x (−1)^(x_y) |x⟩ ⊗ b̂_y |ψ⟩

  and note that the |β_y⟩ are orthonormal, so

  Σ_y E_y² = Σ_y ⟨α|β_y⟩⟨β_y|α⟩ ≤ ⟨α|α⟩ = 1

• Hence we obtain the quantum bound

  Σ_{y=1}^N E_y² ≤ 1
• This is easily saturated classically (a = x_1, b = 0; see the sketch after this list).
• With this figure of merit, quantum theory is no better than classical. Yet with general non-signalling correlations the sum can equal N.
• It is stronger than the bound given by Information Causality (Σ_y E_y² ≤ 2 ln 2).
• Furthermore, any set of biases E_y satisfying Σ_y E_y² ≤ 1 is quantum realizable. This bound therefore characterizes the achievable set of biases more comprehensively than Information Causality.
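A minimal sketch verifying the classical saturation mentioned above: with a = x_1 and b = 0, written as ±1 values, the bias is 1 for y = 1 and 0 otherwise.

```python
from itertools import product

N = 4
# Classical strategy a = x1, b = 0, as +/-1 values: the bias is
# E_y = 2^(-N) * sum_x (-1)^(x_y XOR x_1), which is 1 for y = 1, else 0.
E = [sum((-1) ** (x[y] ^ x[0]) for x in product((0, 1), repeat=N)) / 2 ** N
     for y in range(N)]

print(E, sum(e * e for e in E))  # [1.0, 0.0, 0.0, 0.0], total 1.0
```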
• When we set all E_y equal, then E_y = 1/√N, and we achieve

  p_Q = (1/2) (1 + 1/√N)
• As this non-local game is at least as hard as the original, we recover the previously known upper bound on the success probability of the (m = 1) Information Causality game for all N.
• We can easily extend the proof to get quadratic bounds for a more general class of inner product games.

[Figure: the inner product game (with Bob's input having any distribution). Alice receives N random bits x_1...x_N and outputs a; Bob receives N bits y_1...y_N and outputs b. Task: a ⊕ b = x·y.]

  The same argument gives Σ_y E_y² ≤ 1.
• When Bob's bit string is restricted to contain a single 1, this implies the Information Causality result. When N = 1, it yields Tsirelson's bound, and the stronger quadratic version [Uffink 2002].
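For N = 1 the bound reads E_0² + E_1² ≤ 1. A sketch showing it is saturated, using the textbook optimal CHSH measurements on a maximally entangled state (these observable choices are mine, not taken from the talk):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def corr(A, B):
    """<A (x) B> in the state phi."""
    return phi @ np.kron(A, B) @ phi

# Standard optimal CHSH observables for the N = 1 inner product game.
A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

E0 = 0.5 * (corr(A0, B0) + corr(A1, B0))  # E_y = (1/2) sum_x (-1)^(xy) <a_x b_y>
E1 = 0.5 * (corr(A0, B1) - corr(A1, B1))

print(E0, E1, E0**2 + E1**2)  # ~0.707, ~0.707, 1.0: the bound is saturated
```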
Summary of probabilistic perspective
• The form of the mutual information does not seem crucial in deriving Tsirelson's bound from Information Causality.
• Instead, quadratic bias bounds seem to naturally characterise quantum correlations.
• The inner product game with figure of merit Σ_y E_y² is another task for which quantum theory is no better than classical, but which slightly-stronger correlations help with.
I.C. - An entropic perspective
• The key role of the mutual information is in deriving Information Causality. The bound J ≤ m follows from the existence of a mutual information I(X:Y) for all systems X, Y, satisfying:
  1. Symmetry: I(X:Y) = I(Y:X)
  2. Consistency: I(X:Y) = the classical mutual information when X, Y are classical
  3. Data processing: I(X:Y) ≥ I(X:T(Y)) for any transformation T
  4. Chain rule: I(XY:Z) − I(X:Z) = I(Y:XZ) − I(X:Y)
  (plus the existence of some natural transformations)
• But mutual information is a complicated quantity (it has two arguments), and this list of properties is quite extensive.
• Instead, we can derive Information Causality from the existence of an entropy H(X), defined for all systems X in the theory, satisfying just 2 conditions:
  1. Consistency: H(X) = the Shannon entropy when X is classical
  2. Local evolution: ΔH(XY) ≥ ΔH(X) + ΔH(Y) for any local transformation on X and Y
• The intuition behind the 2nd condition is that local transformations can destroy but not create correlations, generally leading to more uncertainty than their local effect.
• To derive Information Causality, we can use H to construct a measure of mutual information I(X:Y) = H(X) + H(Y) − H(XY), then use the original proof.
• The desired properties of I(X:Y) follow simply:
  1. Symmetry: trivial
  2. Consistency: from consistency of H(X)
  3. Data processing: equivalent to local evolution of H(X)
  4. Chain rule: trivial
• Hence, Information Causality holds in any theory which admits a 'good' measure of entropy, i.e. one which obeys consistency and local evolution.
• The Shannon and von Neumann entropies are both 'good'.
• We can prove that any 'good' entropy shares the following standard properties of the Shannon and von Neumann entropies:
  - Subadditivity: H(X,Y) ≤ H(X) + H(Y)
  - Strong subadditivity: H(X₁X₂|Y) ≤ H(X₁|Y) + H(X₂|Y)
  - Classical positivity: H(X|Y) ≥ 0 whenever X is classical
  (where we have defined H(X|Y) = H(XY) − H(Y))
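Since the Shannon entropy is 'good', these relations must hold for it; a quick numerical spot-check of strong subadditivity on random three-party distributions (a sketch, not a proof; helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def H(p, axes):
    """Shannon entropy (bits) of the marginal of joint array p on `axes`."""
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    q = p.sum(axis=drop).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

# Random joint distributions over (X1, X2, Y); check strong subadditivity
# H(X1 X2 | Y) <= H(X1 | Y) + H(X2 | Y), with H(A|B) = H(AB) - H(B).
for _ in range(100):
    p = rng.random((2, 2, 2))
    p /= p.sum()
    lhs = H(p, {0, 1, 2}) - H(p, {2})
    rhs = (H(p, {0, 2}) - H(p, {2})) + (H(p, {1, 2}) - H(p, {2}))
    assert lhs <= rhs + 1e-9

print("strong subadditivity holds on all random samples")
```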
• Instead of proceeding via the mutual information, we can use these relations to derive Information Causality directly.
• This actually allows us to prove a slight generalisation of Information Causality:

  Σ_y H(x_y | b_y) ≥ H(x) − m
• This generalized form of Information Causality makes no assumptions about the distribution of Alice's inputs x_1...x_N.
• The intuition here is that the uncertainty Bob has about Alice's bits at the end of the game must be at least the original uncertainty about her inputs, minus the information gained from the message.
Entropy in general probabilistic theories
• We can define an entropy operationally in any theory [Short, Wehner / Barrett et al. / Kimura et al. (2010)]:
  - Measurement entropy: H(X) is the minimal Shannon entropy of the outcomes of a fine-grained measurement on X.
  - Decomposition entropy: H(X) is the minimal Shannon entropy of the coefficients when X is written as a mixture of pure states.
• These both obey consistency, and give the von Neumann entropy for quantum theory. However, for many theories they violate local evolution.
Entropy and Tsirelson’s bound (also in Dahlsten et al. 2011)
• Finally, note that due to Information Causality:

  Existence of a 'good' entropy ⟹ Tsirelson's bound

• The existence of a 'good' measure of entropy seems like a very general property, yet remarkably it leads to a very specific piece of quantum structure.
• This also means that no 'good' measure of entropy exists in physical theories more non-local than Tsirelson's bound (such as box-world, which admits all non-signalling correlations).
Summary and open questions
• Quantum theory satisfies and saturates a simple quadratic bias bound Σ_y E_y² ≤ 1 for the inner product and Information Causality games, which generalises Tsirelson's bound.
  - Can we find other similar quadratic bounds?
• The existence of a 'good' measure of entropy in a theory (satisfying just 2 properties) is sufficient to derive Information Causality and Tsirelson's bound.
  - Is quantum theory the most general case with such an entropy?
  - Is there a connection to thermodynamics?