The Structure of Optimal and Nearly-Optimal
Quantum Strategies for Non-Local XOR Games
by
Dimiter Ostrev
B.S. Mathematics, Yale University (2008)
B.S. Economics, Yale University (2008)
M.A.S. Mathematics, Cambridge University (2009)
Submitted to the Department of Mathematics
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2015
© 2015 Massachusetts Institute of Technology. All rights reserved.
Author: Signature redacted
Department of Mathematics
April 30, 2015

Certified by: Signature redacted
Peter Shor
Morss Professor of Applied Mathematics
Thesis Supervisor

Accepted by: Signature redacted
Peter Shor
Chairman, Department Committee on Graduate Theses
The Structure of Optimal and Nearly-Optimal Quantum
Strategies for Non-Local XOR Games
by
Dimiter Ostrev
Submitted to the Department of Mathematics
on April 30, 2015, in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
Abstract
We study optimal and nearly-optimal quantum strategies for non-local XOR games.
First, we prove the following general result: for every non-local XOR game, there
exists a set of relations with the properties: (1) a quantum strategy is optimal for
the game if and only if it satisfies the relations, and (2) a quantum strategy is nearly
optimal for the game if and only if it approximately satisfies the relations. Next,
we focus attention on a specific infinite family of XOR games: the CHSH(n) games.
This family generalizes the well-known CHSH game. We describe the general form
of CHSH(n) optimal strategies. Then, we adapt the concept of intertwining operator
from representation theory and use that to characterize nearly-optimal CHSH(n)
strategies.
Thesis Supervisor: Peter Shor
Title: Morss Professor of Applied Mathematics
Acknowledgments
I would like to thank my advisor Prof. Peter Shor for his unconditional support through the years. Prof. Shor gave me the freedom I needed to explore, and to find my own way. He was also generous with his time, and patiently listened to my mathematical arguments and ideas.
I would like to thank Thomas Vidick for bringing to my attention the problem of
self-testing and entanglement rigidity. Thomas has always been friendly, enthusiastic,
and open to discussion. The conversations with him have been a source of many great
ideas.
I would like to thank all the professors who have taught me classes in the initial
years of my time at MIT. The classes inspired me and helped shape my way of
thinking.
I would like to thank all the people I have met during my time at MIT for the
friendly and stimulating atmosphere they created.
Finally, I would like to thank my parents for everything they have done for me,
both before and during my graduate studies. They have always offered kind words of
encouragement and advice.
Contents

1 Introduction

2 Preliminaries
  2.1 Quantum computation and information theory
    2.1.1 Dirac's bra-ket notation
    2.1.2 A linear bijection between $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$
    2.1.3 State spaces and state vectors
    2.1.4 Bipartite systems and entanglement
    2.1.5 The Schmidt decomposition
    2.1.6 Measurement, observables, expected values
    2.1.7 Non-local XOR games
    2.1.8 The CHSH(n) XOR games
  2.2 Semi-definite programs
  2.3 Some facts from representation theory
    2.3.1 $2k+1$ anti-commuting $\pm 1$ observables on $\mathbb{C}^{2^k}$
    2.3.2 The general form of $n$ anti-commuting $\pm 1$ observables on $\mathbb{C}^d$
    2.3.3 Anti-commuting $\pm 1$ observables and inner products
    2.3.4 Invariant subspaces and Schur's lemma
    2.3.5 Intertwining operators

3 Overview of Results

4 Relations for optimal and nearly-optimal quantum strategies
  4.1 Non-local XOR games and semi-definite programs
  4.2 Proof idea for Theorem 3.1
  4.3 Decompositions of the dual optimal solution
  4.4 A useful identity and the proof of Theorem 3.1
  4.5 Freedom in the choice of decomposition

5 The structure of CHSH(n) optimal and nearly optimal strategies
  5.1 Relations for CHSH(n) optimal and nearly optimal strategies
  5.2 Classification of CHSH(n) optimal strategies
    5.2.1 An optimal CHSH(n) strategy must have a certain form
    5.2.2 Any strategy of a certain form is optimal for CHSH(n)
  5.3 Approximate intertwining operator construction for CHSH(n) near-optimal strategies
    5.3.1 Orthonormal vectors
    5.3.2 The Frobenius norm of $T$
    5.3.3 The expression for $(A_i \otimes I)T - T(\tilde A_i \otimes I)$
    5.3.4 The expression for $(I \otimes B_{kl})T - T(I \otimes \tilde B_{kl})$
    5.3.5 The first error bound
    5.3.6 The second error bound
    5.3.7 Putting everything together

6 Conclusion and open problems
Chapter 1
Introduction
Non-local XOR games are a framework used to study the correlations that result from
measuring two parts of an entangled quantum state using two spatially separated
devices, each capable of performing one of several possible measurements.
When we think of a non-local XOR game, we imagine two people, usually called
Alice and Bob, in two spatially separated laboratories, and unable to communicate
with each other. Alice and Bob choose a strategy for the game by choosing a particular
setup for their respective measurement devices, and a particular entangled quantum
state shared between them.
Alice and Bob's aim in choosing their strategy is to
maximize a given linear functional acting on the space of correlations. The linear
functional represents the rules of the particular XOR game Alice and Bob are playing;
the higher the value of the linear functional on the correlations produced by Alice
and Bob's strategy, the better Alice and Bob are doing.
It has long been known that for certain non-local XOR games, Alice and Bob can
achieve a higher value using measurements of a shared entangled state than anything
Alice and Bob could do using only a classical shared random string (see, for example,
the surveys [22, 2]).
This has attracted interest both from the point of view of
foundations of physics, and from the point of view of applications. From the point
of view of foundations of physics, the advantage of quantum strategies over classical
ones has been central in the discussion about local realism (see, for example, the
survey [5]). From the point of view of applications, there have been many proposals
for using quantum entanglement as a resource in information processing tasks, such
as performing distributed computation with a lower communication cost (see, for
example, the survey [3]), teleportation of quantum states [1] and the extension to a
full scale computation by teleportation scheme [10], and quantum cryptography (see,
for example, the survey [9]).
In the study of non-local XOR games, the optimal and nearly-optimal quantum
strategies are interesting objects for several reasons. First, their behavior is maximally far away from the behavior of classical strategies. Second, applications often
involve setups related to the optimal strategies. Third, the optimal quantum strategies represent the boundary of the non-local correlations that are achievable in quantum mechanics, and are therefore interesting from the perspective of foundations of
quantum mechanics.
And finally, the optimal and nearly optimal quantum strategies for XOR games have interesting mathematical structure, with connections to
semi-definite programming and representation theory.
In this thesis, we study the optimal and nearly optimal quantum strategies for
non-local XOR games. First, we present the following general result: for every non-local XOR game, there exists a set of relations such that
1. A strategy is optimal for the game if and only if it satisfies the relations.
2. A strategy is nearly-optimal for the game if and only if it approximately satisfies
the relations.
The coefficients of the relations can be computed efficiently by solving a semi-definite
program and finding the eigenvalues and eigenvectors of a positive semi-definite matrix. The precise statement is in Theorem 3.1 and the proof in Chapter 4.
The result in Theorem 3.1 continues the line of work in references [19, 6, 18]. In
[19], a correspondence was established between the quantum non-local correlations
and inner products of vectors in real euclidean space. Later, in [6], it was noticed that
a semi-definite program can be associated to each non-local XOR game. In reference
[18], the dual semi-definite program was used to obtain the so-called marginal biases
for a non-local XOR game. In this thesis, we use the dual semi-definite program to
derive the set of relations for optimal and near-optimal quantum strategies of a given
XOR game.
In the second part of this thesis, we focus on a specific infinite family of non-local XOR games: the CHSH(n) games, $n \in \mathbb{N}$, $n \ge 2$, introduced in [18]. For this
family, we solve the system of relations mentioned above, and precisely characterize
the optimal and nearly-optimal CHSH(n) strategies.
The interest in precisely characterizing optimal and nearly-optimal quantum strategies for XOR games comes from recent results about information processing with
untrusted black-box quantum devices. In these results, one or more parties attempt
to perform an information processing task, such as quantum key distribution, randomness generation, or distributed computation, by interacting via classical inputs
and outputs with quantum devices that cannot be trusted to perform according to
specification. The devices may not be trusted for example for fear of malicious intent,
as in quantum cryptography, or, to take another example, the manufacturing process
used to make the devices may be unreliable and prone to errors.
The task of doing information processing with untrusted black-box devices and
being confident in the result may at first appear daunting.
However, there have
recently been proposals of protocols for quantum key distribution with untrusted
devices, for randomness generation with untrusted devices, and for a protocol in
which a classical verifier commands two untrusted quantum provers to perform a
full-scale quantum computation. References to results of this type may be found for
example as follows: for quantum key distribution, the original proposals are [12, 13],
a more recent result is [21], and the survey [2] lists a number of other results on
p.34-35; for randomness generation, the survey [2] lists a number of results on p.33;
the protocol in which a classical verifier commands two untrusted quantum provers
to perform a full-scale quantum computation is developed in reference [17].
All of these protocols rely on mathematical results that have been given the name
of self-testing or entanglement rigidity (see [14, 15, 17] for three examples of such
results, with different proof techniques in each). These results are a characterization
of optimal and nearly-optimal strategies for the CHSH game (or close cousins of the
CHSH game). The CHSH game is the first member of the family CHSH(n) , n > 2,
mentioned above.
In this thesis we obtain a precise characterization of optimal and nearly-optimal
strategies for all the CHSH(n) XOR games. The techniques used in the proof differ
from the self-testing results mentioned above; here we use ideas from representation
theory.
It has been noticed previously [19, 18] that representation theory is well-suited to
describing exactly optimal quantum strategies for non-local XOR games. In the case
of exactly optimal CHSH(n) strategies, the contribution of this thesis is to give an
explicit and direct statement and proof of a classification theorem for the CHSH(n)
exactly optimal strategies. The precise statement is in Theorem 3.3, and the proof in
Section 5.2.
The situation with nearly-optimal strategies is more subtle; the representation
theory techniques that work so well in the exact case are difficult to generalize to
nearly-optimal strategies (we will say more about the difficulty later). An attempt to
use representation theory in this context has been made in [18], but the error bounds
obtained there depend on the dimension of the Hilbert space used for the strategy; in
the context of untrusted black box devices, this dimension may be arbitrarily large.
In this thesis, we take a different approach to characterizing nearly-optimal quantum strategies. The key insights are to adapt the concept of intertwining operator
from representation theory, to notice the importance of a certain subspace of the space
of a given strategy and to adapt the group averaging technique from representation
theory. The precise statement of the result for CHSH(n) near-optimal strategies is in
Theorem 3.4, and the proof in Section 5.3.
The remainder of this thesis is structured as follows: in Chapter 2, we present
notation, concepts and known facts from quantum computation and information theory, semi-definite programming, and representation theory; this material is necessary
background for the rest of the thesis. In Chapter 3, we give the precise statements of
the results proved in this thesis. In Chapter 4 we prove the result concerning relations
for optimal and nearly-optimal quantum strategies for general XOR games. In Chapter 5 we prove the results characterizing the optimal and nearly-optimal strategies
for the CHSH(n) games. In Chapter 6 we discuss open problems and possible future
work.
Chapter 2
Preliminaries
The goal of this chapter is to cover notation, concepts and known facts that are used
throughout the rest of the thesis. We cover material from quantum computation and
information theory, semi-definite programming, and representation theory.
2.1 Quantum computation and information theory
In this section, we begin with Dirac's bra-ket notation for vectors which is commonly
used in quantum computation and information theory.
Next, we introduce a linear bijection between the space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and the space $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$ of $d_A \times d_B$ matrices. This linear bijection will be used in some of the arguments later on.
Next, we cover concepts related to quantum systems: the state space and state
vector; states of a bipartite system and entanglement; the Schmidt decomposition for
states of a bipartite system; measurement, observables, and expected values.
Finally, we cover concepts related to non-local XOR games, and their quantum
strategies.
2.1.1 Dirac's bra-ket notation

In the bra-ket vector notation, we put $|\,\cdot\,\rangle$, called a ket, around the identifier of a column vector. We illustrate with several examples.

In the space $\mathbb{C}^2$ we have the standard basis
\[ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
In quantum computation, these are commonly denoted by $|0\rangle$ and $|1\rangle$ respectively. So we have
\[ |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
We now take two other column vectors
\[ u = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad v = \begin{pmatrix} c \\ d \end{pmatrix}. \]
In the bra-ket notation, these would be written as
\[ |u\rangle = a|0\rangle + b|1\rangle, \qquad |v\rangle = c|0\rangle + d|1\rangle. \]

Next, we turn attention to the notation $\langle\,\cdot\,|$, called a bra. We put $\langle\,\cdot\,|$ around the identifier of a row vector. If $|w\rangle$ is a column vector, then $\langle w|$ is the conjugate transpose row vector. For the examples above,
\[ \langle 0| = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \langle 1| = \begin{pmatrix} 0 & 1 \end{pmatrix}, \quad \langle u| = \begin{pmatrix} a^* & b^* \end{pmatrix}, \quad \langle v| = \begin{pmatrix} c^* & d^* \end{pmatrix}. \]
We will sometimes want to take just the transpose of a vector, not the conjugate transpose. We adopt the convention of doing so by taking the entry-wise complex conjugate of the conjugate transpose vector, so that $\langle w^*|$ is the transpose of $|w\rangle$. For the examples above
\[ \langle 0^*| = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \langle 1^*| = \begin{pmatrix} 0 & 1 \end{pmatrix}, \quad \langle u^*| = \begin{pmatrix} a & b \end{pmatrix}, \quad \langle v^*| = \begin{pmatrix} c & d \end{pmatrix}. \]
We conclude this section by showing the notation for the inner product, outer product, and tensor product of two vectors. The notations for these are $\langle u|v\rangle$, $|u\rangle\langle v|$, and $|u\rangle \otimes |v\rangle$ respectively. The notation for tensor product also has two shorthand versions: $|u\rangle|v\rangle$ and $|uv\rangle$. For the examples above, we have:
\[ \langle u|v\rangle = \begin{pmatrix} a^* & b^* \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = a^*c + b^*d \]
\[ |u\rangle\langle v| = \begin{pmatrix} a \\ b \end{pmatrix}\begin{pmatrix} c^* & d^* \end{pmatrix} = \begin{pmatrix} ac^* & ad^* \\ bc^* & bd^* \end{pmatrix} \]
\[ |u\rangle \otimes |v\rangle = |u\rangle|v\rangle = |uv\rangle = ac\,|0\rangle|0\rangle + bc\,|1\rangle|0\rangle + ad\,|0\rangle|1\rangle + bd\,|1\rangle|1\rangle. \]
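For readers who prefer to experiment numerically, the following short NumPy sketch (an illustration added here, not part of the original development) mirrors the notation above: kets as column vectors, bras as conjugate transposes, and the inner, outer, and tensor products.

```python
import numpy as np

# Kets |0>, |1> as column vectors in C^2
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

a, b, c, d = 1j, 2, 3, 4 - 1j          # arbitrary coefficients
ket_u = a * ket0 + b * ket1            # |u> = a|0> + b|1>
ket_v = c * ket0 + d * ket1            # |v> = c|0> + d|1>

bra_u = ket_u.conj().T                 # <u| is the conjugate transpose of |u>

inner = (bra_u @ ket_v)[0, 0]          # <u|v> = a*c + b*d
outer = ket_u @ ket_v.conj().T         # |u><v|, a 2x2 matrix
tensor = np.kron(ket_u, ket_v)         # |u> (x) |v>, a vector in C^4

assert np.isclose(inner, np.conj(a) * c + np.conj(b) * d)
print(inner, outer.shape, tensor.ravel())
```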
2.1.2 A linear bijection between $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$
We consider the space $\mathbb{C}^{d_A}$ with its standard basis denoted by $|i\rangle$, $i = 1, \dots, d_A$, and the space $\mathbb{C}^{d_B}$ with its standard basis denoted by $|j\rangle$, $j = 1, \dots, d_B$.

With this notation, we can write the standard basis of $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ as
\[ |i\rangle \otimes |j\rangle, \qquad i = 1, \dots, d_A, \; j = 1, \dots, d_B \]
and we can write the standard basis of $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$ as
\[ |i\rangle\langle j|, \qquad i = 1, \dots, d_A, \; j = 1, \dots, d_B. \]

We define a linear bijection
\[ L : \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B} \to \mathrm{Mat}_{d_A,d_B}(\mathbb{C}) \]
by defining the action of $L$ on the standard basis as
\[ L\big( |i\rangle \otimes |j\rangle \big) = |i\rangle\langle j| \]
and extending to the whole space by linearity; that is,
\[ L\Big( \sum_{i,j} w_{ij}\, |i\rangle \otimes |j\rangle \Big) = \sum_{i,j} w_{ij}\, |i\rangle\langle j|. \]

We collect some useful properties of $L$ in the following lemma.

Lemma 2.1. Let $|u\rangle \in \mathbb{C}^{d_A}$, $|v\rangle \in \mathbb{C}^{d_B}$, $|w\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$, $A \in \mathrm{Mat}_{d_A}(\mathbb{C})$, $B \in \mathrm{Mat}_{d_B}(\mathbb{C})$. Then,

* $L(|u\rangle \otimes |v\rangle) = |u\rangle\langle v^*|$ and consequently, by linearity, $L\big( \sum_i |u_i\rangle \otimes |v_i\rangle \big) = \sum_i |u_i\rangle\langle v_i^*|$
* $A\, L(|w\rangle) = L\big( (A \otimes I)\,|w\rangle \big)$
* $L(|w\rangle)\, B^T = L\big( (I \otimes B)\,|w\rangle \big)$
* $\| L(|w\rangle) \|_F = \| \, |w\rangle \, \|$

All of these properties can be proved by expanding the relevant vectors and matrices with respect to the standard basis and checking that the appropriate identity in the coefficients holds.

The notation $\|\cdot\|_F$ used above denotes the Frobenius norm of a matrix: for an $m \times n$ matrix $A$,
\[ \|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2} = \sqrt{\mathrm{Tr}\, A^\dagger A}. \]
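Concretely, under the standard bases the bijection $L$ amounts to reshaping a vector in $\mathbb{C}^{d_A d_B}$ into a $d_A \times d_B$ matrix. The following NumPy sketch (an added illustration; the function name `L` is ours) checks the properties of Lemma 2.1 on random inputs.

```python
import numpy as np

dA, dB = 3, 4
rng = np.random.default_rng(0)

def L(w, dA, dB):
    """Linear bijection C^{dA} (x) C^{dB} -> Mat_{dA,dB}(C): reshape into a dA x dB matrix."""
    return w.reshape(dA, dB)

w = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
A = rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA))
B = rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB))

# A L(|w>) = L((A (x) I)|w>)
assert np.allclose(A @ L(w, dA, dB), L(np.kron(A, np.eye(dB)) @ w, dA, dB))
# L(|w>) B^T = L((I (x) B)|w>)
assert np.allclose(L(w, dA, dB) @ B.T, L(np.kron(np.eye(dA), B) @ w, dA, dB))
# The Frobenius norm of L(|w>) equals the Euclidean norm of |w>
assert np.isclose(np.linalg.norm(L(w, dA, dB), 'fro'), np.linalg.norm(w))
print("Lemma 2.1 properties verified numerically")
```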
2.1.3 State spaces and state vectors

It is a postulate of quantum mechanics (see [16, p. 80]) that to each isolated quantum system, there is an associated state space: a vector space over $\mathbb{C}$ with inner product. In this thesis we will work with finite dimensional state spaces, so the state spaces will always be of the form $\mathbb{C}^d$.

The state of a system is described by a unit vector in the state space, called a state vector, and sometimes just a state. If $|\psi\rangle$ is a state then we can consider the states $e^{i\theta}|\psi\rangle$, $\theta \in \mathbb{R}$; the states $|\psi\rangle$ and $e^{i\theta}|\psi\rangle$, $\theta \in \mathbb{R}$ give the same predictions for all measurements and are considered equivalent.
2.1.4 Bipartite systems and entanglement

Now we consider two quantum systems: system A with state space $\mathbb{C}^{d_A}$ and system B with state space $\mathbb{C}^{d_B}$. It is a postulate of quantum mechanics (see [16, p. 94]) that the state space of the composite system AB is $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. If the state of system A is $|\psi_A\rangle \in \mathbb{C}^{d_A}$ and the state of system B is $|\psi_B\rangle \in \mathbb{C}^{d_B}$, then the state of the bipartite system AB is $|\psi_A\rangle \otimes |\psi_B\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$.

We have seen that system AB can have a state vector of the form $|\psi_A\rangle \otimes |\psi_B\rangle$. However, the space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ also contains unit vectors that cannot be written in the form $|\psi_A\rangle \otimes |\psi_B\rangle$, for example
\[ \frac{|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle}{\sqrt{2}} \in \mathbb{C}^2 \otimes \mathbb{C}^2. \]
States that cannot be written in the form $|\psi_A\rangle \otimes |\psi_B\rangle$ are called entangled states.

One can count the number of free parameters in a product state (it is $d_A + d_B - 1$) and the number of free parameters in a generic state in $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ (it is $d_A d_B - 1$), and see that entangled states form the vast majority of states in $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$.

One research direction in Quantum Computation and Information Theory is to study the curious properties of entanglement. This thesis also studies one facet of entanglement.
2.1.5 The Schmidt decomposition

Here, we present a very useful decomposition theorem for states of a bipartite system.

Theorem 2.1 (Schmidt Decomposition). Let $|\psi\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ be the state of a bipartite system AB. Then, there exist an orthonormal basis $|u_1\rangle, \dots, |u_{d_A}\rangle$ of $\mathbb{C}^{d_A}$, an orthonormal basis $|v_1\rangle, \dots, |v_{d_B}\rangle$ of $\mathbb{C}^{d_B}$ and positive reals $\lambda_1, \dots, \lambda_r$, $r \le \min(d_A, d_B)$, $\sum_{i=1}^r \lambda_i = 1$ such that
\[ |\psi\rangle = \sum_{i=1}^r \sqrt{\lambda_i}\, |u_i\rangle \otimes |v_i\rangle. \]

One way to prove this theorem is to use the linear bijection between $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$ and the singular value decomposition for a matrix in $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$. For a detailed proof, see [16, p. 109].

In the decomposition
\[ |\psi\rangle = \sum_{i=1}^r \sqrt{\lambda_i}\, |u_i\rangle \otimes |v_i\rangle \]
the number $r$ is called the Schmidt number of $|\psi\rangle$, the $\sqrt{\lambda_i}$ are called the Schmidt coefficients for $|\psi\rangle$, and $|u_1\rangle, \dots, |u_{d_A}\rangle$, $|v_1\rangle, \dots, |v_{d_B}\rangle$ are called the Schmidt bases for systems A and B. The span of those Schmidt vectors that have non-zero Schmidt coefficients we will call the A, respectively B, support of the bipartite state $|\psi\rangle$; that is,
\[ \mathrm{supp}_A|\psi\rangle = \mathrm{span}\big( |u_1\rangle, \dots, |u_r\rangle \big) \subseteq \mathbb{C}^{d_A} \]
\[ \mathrm{supp}_B|\psi\rangle = \mathrm{span}\big( |v_1\rangle, \dots, |v_r\rangle \big) \subseteq \mathbb{C}^{d_B}. \]
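The proof sketch above suggests a direct computation: apply the bijection $L$ and take the singular value decomposition. A minimal NumPy illustration (added here; not from the original text):

```python
import numpy as np

dA, dB = 3, 5
rng = np.random.default_rng(1)

# A random pure state of the bipartite system AB
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)

# L(|psi>) is the dA x dB coefficient matrix; its SVD gives the Schmidt decomposition
W = psi.reshape(dA, dB)
U, s, Vh = np.linalg.svd(W)              # W = U diag(s) Vh

schmidt_coeffs = s[s > 1e-12]            # the sqrt(lambda_i); their count is the Schmidt number
print("Schmidt number:", len(schmidt_coeffs))
assert np.isclose(np.sum(schmidt_coeffs**2), 1.0)   # the lambda_i sum to 1

# Reconstruct |psi> = sum_i sqrt(lambda_i) |u_i> (x) |v_i>.
# The |u_i> are the columns of U; the |v_i> are the rows of Vh.
recon = sum(s[i] * np.kron(U[:, i], Vh[i, :]) for i in range(len(s)))
assert np.allclose(recon, psi)
```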
2.1.6 Measurement, observables, expected values

We present an abbreviated discussion of quantum measurements that is sufficient for the purposes of this thesis; for a more detailed exposition see, for example, [16, p. 84-96].

Consider a quantum system with state space $\mathbb{C}^d$. By an observable on $\mathbb{C}^d$ we mean a self-adjoint linear operator acting on $\mathbb{C}^d$. Each observable describes a possible measurement of the system.

By the spectral decomposition, an observable $M$ can be written as
\[ M = \sum_m \lambda_m P_m \]
where the $P_m$ are a complete set of orthogonal projections.

Now suppose the system is in state $|\psi\rangle \in \mathbb{C}^d$ and a measurement with observable $M$ is performed. The eigenvalues $\lambda_m$ of $M$ are the possible outcomes of the measurement. The outcome $\lambda_m$ occurs with probability $\langle\psi| P_m |\psi\rangle$. The expected value of the outcome of the measurement is
\[ \langle\psi| M |\psi\rangle = \sum_m \lambda_m \langle\psi| P_m |\psi\rangle. \]

Next, consider a setup that is used throughout this thesis. Consider a bipartite system AB with state space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. Consider observables $M = \sum_m \lambda_m P_m$ on $\mathbb{C}^{d_A}$ and $N = \sum_n \nu_n Q_n$ on $\mathbb{C}^{d_B}$. Suppose the system is in state $|\psi\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and consider measurement of $M$ on system A and $N$ on system B.

Look at the identity
\[ \langle\psi| M \otimes N |\psi\rangle = \sum_{m,n} \lambda_m \nu_n \langle\psi| P_m \otimes Q_n |\psi\rangle. \]
Here, $\langle\psi| P_m \otimes Q_n |\psi\rangle$ is the probability that outcome $\lambda_m$ occurs when measuring $M$ on system A, and outcome $\nu_n$ occurs when measuring $N$ on system B. The expression $\langle\psi| M \otimes N |\psi\rangle$ represents the expected value of the product of the outcomes of the two measurements.

We close this section by introducing a special class of observables called $\pm 1$ observables. A $\pm 1$ observable is a self-adjoint operator that only has the eigenvalues $1$ and $-1$. A $\pm 1$ observable $M$ has the properties that
\[ M = M^\dagger = M^{-1}, \qquad M^2 = I, \]
i.e. $M$ is both self-adjoint and unitary. The observables we consider in this thesis will always be $\pm 1$ observables.
2.1.7 Non-local XOR games
In a non-local XOR game two players, traditionally called Alice and Bob, are separated in space and play cooperatively without communicating with each other. A
third party, called a Referee or sometimes a Verifier, runs the game and decides
whether Alice and Bob win or lose.
Formally, a non-local game consists of two finite sets $S$ and $T$, a probability distribution $\pi$ on $S \times T$, and a function $V : S \times T \to \{-1, 1\}$. The game proceeds as follows:

1. The referee selects a pair $(s, t) \in S \times T$ according to the probability distribution $\pi$.
2. The referee sends $s$ as a question to Alice and $t$ as a question to Bob.
3. Alice replies to the referee with $a \in \{-1, 1\}$ and Bob replies to the referee with $b \in \{-1, 1\}$.
4. The referee looks at $V(s,t)ab$. If $V(s,t)ab = 1$, then Alice and Bob win, and if $V(s,t)ab = -1$ then Alice and Bob lose. Notice that $V(s,t) = 1$ means that Alice and Bob must give matching answers to win and $V(s,t) = -1$ means Alice and Bob must give opposite answers to win.¹ We can also think about $V(s,t)ab$ as the payoff that Alice and Bob receive: they either win 1 unit or lose 1 unit.
A quantum strategy for an XOR game consists of a state space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$, a state $|\psi\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$, and $\pm 1$ observables $\{A_s : s \in S\}$ on $\mathbb{C}^{d_A}$ and $\{B_t : t \in T\}$ on $\mathbb{C}^{d_B}$. The interpretation of this strategy is the following: Alice and Bob share a bipartite quantum system with state space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. Prior to the beginning of the game, the system has been prepared in the state $|\psi\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. On receiving question $s$, Alice measures observable $A_s$ and uses the outcome, $1$ or $-1$, as her answer to the referee. Similarly, on receiving question $t$, Bob measures observable $B_t$ and uses the outcome, $1$ or $-1$, as his answer to the referee.
We adopt the point of view that $V(s,t)ab$ is the payoff that Alice and Bob receive from the game and ask: for a given strategy, what is the expected payoff? We see that the expected payoff is
\[ \sum_{s \in S} \sum_{t \in T} \pi(s,t)\, V(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle. \]
This is because $\pi(s,t)$ is the probability that questions $(s,t)$ occur, and $V(s,t)\langle\psi| A_s \otimes B_t |\psi\rangle$ is the expected payoff conditional on questions $(s,t)$ occurring.
We can also ask about the optimal expected payoff for a quantum strategy for a given non-local XOR game; this is
\[ \Gamma = \sup_{A_s, B_t, |\psi\rangle} \sum_{s \in S} \sum_{t \in T} \pi(s,t)\, V(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle \]
where the supremum is taken over all possible strategies. We call the value of the supremum the quantum success bias for the game.²

¹The name "XOR game" is related to the following: if we write $a = (-1)^{a'}$, $b = (-1)^{b'}$ for $a', b' \in \{0, 1\}$, then $V(s,t)ab = V(s,t)(-1)^{a' \oplus b'}$, so that whether Alice and Bob win or lose depends on the XOR of the bits $a'$ and $b'$.

²The phrase quantum value may sound more appropriate here, but it has been used previously in the literature to mean optimal success probability, not optimal expected payoff as we consider here. The success probability $p$ and expected payoff $e$ for a given XOR game and given strategy are related by $e = 2p - 1$.
Next, define a matrix $\gamma$ by $\gamma(s,t) = \pi(s,t)V(s,t)$. The matrix $\gamma$ contains all the information about the game: the set $S$ is the set of row indices of $\gamma$, the set $T$ is the set of column indices of $\gamma$, the probability distribution $\pi$ can be recovered by $\pi(s,t) = |\gamma(s,t)|$, and the function $V$ can be recovered by $V(s,t) = \mathrm{sign}(\gamma(s,t))$. Thus, we can identify non-local XOR games with matrices $\gamma$ normalized so that $\sum_{s,t} |\gamma(s,t)| = 1$.

So far we have
\[ \Gamma = \sup_{A_s, B_t, |\psi\rangle} \sum_{s \in S} \sum_{t \in T} \pi(s,t)\, V(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle = \sup_{A_s, B_t, |\psi\rangle} \sum_{s \in S} \sum_{t \in T} \gamma(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle. \]

We define an optimal strategy for the XOR game with matrix $\gamma$ to be a strategy $A_s, B_t, |\psi\rangle$ such that
\[ \sum_{s \in S} \sum_{t \in T} \gamma(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle = \Gamma \]
and we define an $\varepsilon$-optimal strategy to be $A_s, B_t, |\psi\rangle$ such that
\[ (1 - \varepsilon)\,\Gamma \;\le\; \sum_{s \in S} \sum_{t \in T} \gamma(s,t)\, \langle\psi| A_s \otimes B_t |\psi\rangle \;\le\; \Gamma. \]
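As a concrete check of these definitions, the following NumPy sketch (added here for illustration; the observables chosen are the textbook optimal CHSH strategy, not something taken from this thesis) evaluates $\sum_{s,t} \gamma(s,t)\langle\psi|A_s \otimes B_t|\psi\rangle$ for the CHSH game and recovers the bias $1/\sqrt{2}$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

# CHSH: questions s,t in {0,1}, uniform pi = 1/4, V(s,t) = -1 only on (1,1)
gamma = np.array([[0.25, 0.25],
                  [0.25, -0.25]])

# Shared state: the maximally entangled state (|00> + |11>)/sqrt(2)
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

A = [Z, X]                                          # Alice's +/-1 observables
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]    # Bob's +/-1 observables

payoff = sum(gamma[s, t] * psi @ np.kron(A[s], B[t]) @ psi
             for s in range(2) for t in range(2))
print(payoff, 1 / np.sqrt(2))                       # both ~ 0.7071
```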
2.1.8 The CHSH(n) XOR games

Here, we look at the infinite family of XOR games CHSH(n), $n \in \mathbb{N}$, $n \ge 2$, introduced in [18].

For the CHSH(n) game, the question set $S$ of possible questions for Alice is $\{1, \dots, n\}$ and the question set $T$ of possible questions for Bob is the set of ordered pairs $\{ij : i, j \in \{1, \dots, n\},\, i \neq j\}$.

The probability distribution $\pi(s,t)$ according to which the referee selects the questions can be described as follows:
1. The referee selects a pair $i, j$ uniformly at random among all $\binom{n}{2}$ pairs such that $1 \le i < j \le n$.
2. The referee selects either $i$ or $j$ as question for Alice, and either $ij$ or $ji$ as question for Bob; the four possibilities are equally likely.

The rule for winning or losing $V(s,t)$ is determined like this: to win, Alice and Bob must give matching answers on questions $(i, ij)$, $(i, ji)$ and $(j, ij)$, and give opposite answers on questions $(j, ji)$.
As in the previous subsection, it is convenient to summarize all information about the CHSH(n) game in a matrix $\gamma$. The matrix $\gamma$ for the CHSH(n) game has $n$ rows and $n(n-1)$ columns. It is most convenient to write the matrix $\gamma$ using Dirac's bra-ket notation. Let $|1\rangle, \dots, |n\rangle$ be an orthonormal basis of $\mathbb{R}^n$, and let $|ij\rangle$, $i \neq j \in \{1, \dots, n\}$ be an orthonormal basis of $\mathbb{R}^{n(n-1)}$. Then, we can write:
\[ \gamma = \frac{1}{4\binom{n}{2}} \sum_{1 \le i < j \le n} \big( |i\rangle\langle ij| + |i\rangle\langle ji| + |j\rangle\langle ij| - |j\rangle\langle ji| \big). \]
It was shown in reference [18] that the quantum success bias for all the CHSH(n) games is $\frac{1}{\sqrt{2}}$; that is,
\[ \sup_{A_i, B_{jk}, |\psi\rangle} \frac{1}{4\binom{n}{2}} \sum_{1 \le i < j \le n} \langle\psi| \big( A_i \otimes B_{ij} + A_i \otimes B_{ji} + A_j \otimes B_{ij} - A_j \otimes B_{ji} \big) |\psi\rangle = \frac{1}{\sqrt{2}}. \]

Finally, we remark that the first element of the family, CHSH(2), is the usual CHSH game, based on reference [4]. Thus, the family CHSH(n) is a generalization of the CHSH game.
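The matrix $\gamma$ above can be written down directly; the sketch below (added for illustration; the function name and column ordering are ours) builds it with Alice's questions indexing rows and Bob's ordered pairs indexing columns, and checks the normalization $\sum |\gamma| = 1$.

```python
import numpy as np
from itertools import permutations

def chsh_n_gamma(n):
    """Game matrix gamma for CHSH(n): rows = Alice's questions 1..n,
    columns = Bob's ordered pairs (i, j), i != j."""
    cols = list(permutations(range(n), 2))       # ordered pairs (i, j), i != j
    col_index = {p: k for k, p in enumerate(cols)}
    gamma = np.zeros((n, len(cols)))
    c = 1.0 / (2 * n * (n - 1))                  # = 1 / (4 * binom(n, 2))
    for i in range(n):
        for j in range(i + 1, n):
            gamma[i, col_index[(i, j)]] += c     # (i, ij): matching answers
            gamma[i, col_index[(j, i)]] += c     # (i, ji): matching answers
            gamma[j, col_index[(i, j)]] += c     # (j, ij): matching answers
            gamma[j, col_index[(j, i)]] -= c     # (j, ji): opposite answers
    return gamma, cols

gamma, cols = chsh_n_gamma(3)
print(gamma.shape)                               # (3, 6)
assert np.isclose(np.abs(gamma).sum(), 1.0)      # normalization: sum |gamma| = 1
```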
2.2 Semi-definite programs
In this section we cover some terminology and facts about semi-definite programs that will be used later on. We use an abbreviated discussion on semi-definite programs that is sufficient for the purposes of this thesis; for a more detailed exposition see, for example, [20], or the lecture notes [11].

Look at the space of real symmetric matrices of a given size. For two such matrices $A, B$, we define their inner product
\[ A \cdot B = \mathrm{Tr}\, AB = \sum_{i,j} A_{ij} B_{ij}. \]
Within the space of real symmetric matrices, we look at the positive semi-definite matrices. We use the notation $A \succeq 0$ to mean that $A$ is positive semi-definite, and the notation $A \succ 0$ to mean that $A$ is strictly positive definite. This notation also extends in the following way: $A \succeq B$ means that $(A - B)$ is positive semi-definite and $A \succ B$ means that $(A - B)$ is strictly positive definite.

A semi-definite program is a constraint optimization problem of the form
\[ \sup_{Z \succeq 0,\; F_i \cdot Z = c_i,\; i = 1, \dots, m} G \cdot Z. \]
Here $G, F_i$, $i = 1, \dots, m$ are symmetric matrices, and $c_i$, $i = 1, \dots, m$ are real numbers. We call this semi-definite program the primal. We denote the value of the supremum by $v_{\mathrm{primal}}$.

The dual semi-definite program is
\[ \inf_{\sum_i w_i F_i \succeq G} \sum_i c_i w_i. \]
We denote the value of the infimum by $v_{\mathrm{dual}}$.

Next, we introduce some terminology:

* A primal/dual feasible solution is one that satisfies the constraints.
* A primal/dual strictly feasible solution is one that satisfies the constraints, and satisfies the positive semi-definite constraint strictly.
* A primal/dual optimal solution is a feasible solution $Z$, respectively $\vec w$, such that $G \cdot Z = v_{\mathrm{primal}}$, respectively $\vec c \cdot \vec w = v_{\mathrm{dual}}$.
* A primal/dual $\varepsilon$-optimal solution is a feasible solution $Z$, respectively $\vec w$, such that $G \cdot Z \ge (1 - \varepsilon)\, v_{\mathrm{primal}}$, respectively $\vec c \cdot \vec w \le (1 + \varepsilon)\, v_{\mathrm{dual}}$.
* For a primal feasible $Z$ and a dual feasible $\vec w$, the quantity $\big( \sum_i w_i F_i - G \big) \cdot Z$ is called the duality gap.
We summarize some known facts about semi-definite programs in the following theorem:

Theorem 2.2. Assume throughout that both the primal and the dual have feasible solutions. The following statements hold:

* For a primal feasible $Z$ and a dual feasible $\vec w$, the duality gap is non-negative: $\big( \sum_i w_i F_i - G \big) \cdot Z \ge 0$.
* $\big( \sum_i w_i F_i - G \big) \cdot Z = 0$ if and only if $v_{\mathrm{primal}} = v_{\mathrm{dual}}$, $Z$ is optimal for the primal and $\vec w$ optimal for the dual. This statement is sometimes called the "complementary slackness condition".
* $v_{\mathrm{primal}} \le v_{\mathrm{dual}}$. This statement is sometimes called "weak duality".
* If the primal has a strictly feasible solution, then the dual infimum is attained; if the dual has a strictly feasible solution, then the primal supremum is attained.
* If at least one of the primal and dual has a strictly feasible solution, then $v_{\mathrm{primal}} = v_{\mathrm{dual}}$. This statement is sometimes called "strong duality".
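Such primal/dual pairs are easy to experiment with numerically. The sketch below (an added illustration; it assumes the cvxpy package with its bundled SDP solver, which is not used anywhere in this thesis) solves a small instance with $F_i = E_{ii}$ and $c_i = 1$, the same constraint pattern as the game semi-definite program of Chapter 4, and observes strong duality.

```python
import numpy as np
import cvxpy as cp

k = 4
rng = np.random.default_rng(0)
G = rng.normal(size=(k, k)); G = (G + G.T) / 2        # symmetric objective matrix

# Primal: sup { G . Z : Z >= 0, Z_ii = 1 }
Z = cp.Variable((k, k), symmetric=True)
v_primal = cp.Problem(cp.Maximize(cp.sum(cp.multiply(G, Z))),
                      [Z >> 0, cp.diag(Z) == 1]).solve()

# Dual: inf { sum_i w_i : Diag(w) >= G }
w = cp.Variable(k)
v_dual = cp.Problem(cp.Minimize(cp.sum(w)), [cp.diag(w) >> G]).solve()

# Both problems have strictly feasible points (Z = I, and w with large entries),
# so by strong duality the two values coincide up to solver tolerance.
print(v_primal, v_dual)
```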
2.3 Some facts from representation theory
In this section we cover a few facts and concepts from representation theory that will
be used later on. These facts include properties of anti-commuting $\pm 1$ observables,
invariant subspaces and Schur's lemma, and the notion of an intertwining operator.
For a more detailed exposition of representation theory, see for example [8] or the
lecture notes [7].
2.3.1 $2k+1$ anti-commuting $\pm 1$ observables on $\mathbb{C}^{2^k}$

We give an explicit construction of $2k+1$ anti-commuting $\pm 1$ observables on $\mathbb{C}^{2^k}$ using the isomorphism $\mathbb{C}^{2^k} \cong \underbrace{\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \cdots \otimes \mathbb{C}^2}_{k \text{ terms}}$ and the Pauli matrices.
The Pauli matrices are the $2 \times 2$ matrices
\[ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \]
They are self-adjoint, unitary, and satisfy
\[ \sigma_x \sigma_y = i\sigma_z, \quad \sigma_y \sigma_x = -i\sigma_z, \quad \sigma_y \sigma_z = i\sigma_x, \quad \sigma_z \sigma_y = -i\sigma_x, \quad \sigma_z \sigma_x = i\sigma_y, \quad \sigma_x \sigma_z = -i\sigma_y. \]
From this it also follows that the Pauli matrices anti-commute.
Now consider the following $2k+1$ operators on $\mathbb{C}^{2^k} \cong \underbrace{\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \cdots \otimes \mathbb{C}^2}_{k \text{ terms}}$:
\[
\begin{aligned}
\sigma_{k,1} &= \sigma_x \otimes I \otimes \cdots \otimes I, &\qquad \sigma_{k,2} &= \sigma_z \otimes I \otimes \cdots \otimes I,\\
\sigma_{k,3} &= \sigma_y \otimes \sigma_x \otimes I \otimes \cdots \otimes I, &\qquad \sigma_{k,4} &= \sigma_y \otimes \sigma_z \otimes I \otimes \cdots \otimes I,\\
&\;\;\vdots & &\;\;\vdots\\
\sigma_{k,2k-1} &= \sigma_y \otimes \cdots \otimes \sigma_y \otimes \sigma_x, &\qquad \sigma_{k,2k} &= \sigma_y \otimes \cdots \otimes \sigma_y \otimes \sigma_z,\\
\sigma_{k,2k+1} &= \sigma_y \otimes \sigma_y \otimes \cdots \otimes \sigma_y. & &
\end{aligned}
\tag{2.1}
\]
These operators are self-adjoint, unitary, and anti-commute.
It is known from the representation theory of the Clifford algebra that any collection of $2k$ anti-commuting $\pm 1$ observables on $\mathbb{C}^{2^k}$ is equivalent (by conjugation by a unitary) to the collection $\sigma_{k,1}, \dots, \sigma_{k,2k}$, and any collection of $2k+1$ anti-commuting $\pm 1$ observables on $\mathbb{C}^{2^k}$ is equivalent to either $\sigma_{k,1}, \dots, \sigma_{k,2k+1}$ or $\sigma_{k,1}, \dots, \sigma_{k,2k}, -\sigma_{k,2k+1}$ (the two options are not equivalent because the product of the observables in the first collection is $(-i)^k I$ and the product in the second collection is $-(-i)^k I$).
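The construction (2.1) is easy to realize numerically. The sketch below (added for illustration; the exact ordering convention of the factors follows our reading of (2.1)) builds the operators from Pauli tensor products and checks that they are $\pm 1$ observables and pairwise anti-commute.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron_all(mats):
    return reduce(np.kron, mats)

def clifford_observables(k):
    """The 2k+1 anti-commuting +/-1 observables sigma_{k,i} on C^{2^k},
    built as in relations (2.1)."""
    obs = []
    for j in range(k):                       # the pair sigma_{k,2j+1}, sigma_{k,2j+2}
        prefix = [sy] * j
        suffix = [I2] * (k - j - 1)
        obs.append(kron_all(prefix + [sx] + suffix))
        obs.append(kron_all(prefix + [sz] + suffix))
    obs.append(kron_all([sy] * k))           # sigma_{k,2k+1}
    return obs

k = 3
sigmas = clifford_observables(k)
for a in range(len(sigmas)):
    assert np.allclose(sigmas[a] @ sigmas[a], np.eye(2**k))                        # +/-1 observable
    for b in range(a + 1, len(sigmas)):
        assert np.allclose(sigmas[a] @ sigmas[b] + sigmas[b] @ sigmas[a], 0)       # anti-commute
print("constructed", len(sigmas), "anti-commuting observables on C^(2^%d)" % k)
```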
2.3.2 The general form of $n$ anti-commuting $\pm 1$ observables on $\mathbb{C}^d$
It follows from the representation theory of the Clifford algebra that the following holds for $n$ anti-commuting $\pm 1$ observables on $\mathbb{C}^d$:

Theorem 2.3. Let $A_1, \dots, A_n$ be $\pm 1$ observables on $\mathbb{C}^d$ such that $A_k A_l + A_l A_k = 0$ for $k \neq l$. Then $d = s\, 2^{\lfloor n/2 \rfloor}$ for some $s \in \mathbb{N}$, and there is an orthonormal basis of $\mathbb{C}^d$ with respect to which $A_1, \dots, A_n$ have block-diagonal form with $2^{\lfloor n/2 \rfloor} \times 2^{\lfloor n/2 \rfloor}$ blocks and such that

* For $n = 2k$,
\[ A_i = \begin{pmatrix} \sigma_{k,i} & & \\ & \ddots & \\ & & \sigma_{k,i} \end{pmatrix}, \qquad i = 1, \dots, 2k. \]

* For $n = 2k+1$,
\[ A_i = \begin{pmatrix} \sigma_{k,i} & & \\ & \ddots & \\ & & \sigma_{k,i} \end{pmatrix}, \qquad i = 1, \dots, 2k \]
and
\[ A_{2k+1} = \begin{pmatrix} \sigma_{k,2k+1} & & & \\ & \ddots & & \\ & & -\sigma_{k,2k+1} & \\ & & & \ddots \end{pmatrix}, \]
i.e. some number $s'$, $0 \le s' \le s$, of the diagonal blocks of $A_{2k+1}$ are $\sigma_{k,2k+1}$ and the other $s - s'$ diagonal blocks are $-\sigma_{k,2k+1}$.
2.3.3 Anti-commuting $\pm 1$ observables and inner products

Here we present a property relating $n$ anti-commuting $\pm 1$ observables and inner products of vectors in $\mathbb{R}^n$. We introduce a piece of notation and then state the property.

Let $A_1, \dots, A_n$ be some matrices on $\mathbb{C}^d$, and let $u = \begin{pmatrix} u_1 & \dots & u_n \end{pmatrix}^T$ be a vector in $\mathbb{R}^n$. By $u \cdot \vec{A}$ we mean a linear combination of $A_1, \dots, A_n$ with the coefficients $u_1, \dots, u_n$; that is,
\[ u \cdot \vec{A} = u_1 A_1 + \cdots + u_n A_n. \]
With this notation, we can state the following lemma:

Lemma 2.2. Let $A_1, \dots, A_n$ be anti-commuting $\pm 1$ observables on $\mathbb{C}^d$, and let $u, v \in \mathbb{R}^n$ be two vectors. Then
\[ (u \cdot \vec{A})(v \cdot \vec{A}) + (v \cdot \vec{A})(u \cdot \vec{A}) = 2\Big( \sum_i u_i v_i \Big) I = 2\,(u^T v)\, I. \]

Proof. We expand the left-hand side:
\[ (u \cdot \vec{A})(v \cdot \vec{A}) + (v \cdot \vec{A})(u \cdot \vec{A}) = \sum_{i=1}^n \sum_{j=1}^n u_i v_j \big( A_i A_j + A_j A_i \big) = 2\Big( \sum_{i=1}^n u_i v_i \Big) I. \qquad \square \]
2.3.4 Invariant subspaces and Schur's lemma
Here we present some facts about invariant subspaces. These facts are commonly called Schur's lemma in expositions of representation theory. We introduce the notion of invariant subspace and then state Schur's lemma.

Let $A$ be a matrix on $\mathbb{C}^d$ and let $V$ be a subspace of $\mathbb{C}^d$. We say that $V$ is invariant under $A$ if
\[ |v\rangle \in V \implies A|v\rangle \in V. \]
This also generalizes to a collection of matrices: let $I$ be some index set and let $\{A_i : i \in I\}$ be a collection of matrices. We say that $V$ is invariant under the collection $\{A_i : i \in I\}$ if it is invariant under each individual $A_i$. In the context of representation theory, the index set $I$ has the extra structure of being a group or an algebra, and the mapping $i \mapsto A_i$ has the extra structure of being a group or algebra homomorphism. However, this extra structure is not used in the proof of Schur's lemma, and the lemma holds for general index sets $I$.

Now we are ready to state Schur's lemma:

Lemma 2.3.
1. Let $A$ be a linear operator on a vector space $V$, $B$ a linear operator on a vector space $W$ and $T$ a linear operator $V \to W$. Suppose $TA = BT$. Then $\mathrm{Im}\,T$ is invariant under $B$ and $\mathrm{Ker}\,T$ is invariant under $A$. More generally, if $\{A_i : i \in I\}$ is a collection of linear operators on $V$, $\{B_i : i \in I\}$ is a collection of linear operators on $W$, and $TA_i = B_i T$ for all $i \in I$, then $\mathrm{Im}\,T$ is invariant under the collection $\{B_i : i \in I\}$ and $\mathrm{Ker}\,T$ is invariant under the collection $\{A_i : i \in I\}$.
2. Let $A, T$ be linear operators on a vector space $V$, such that $AT = TA$. Then, all eigenspaces of $T$ are invariant under $A$. More generally, if $\{A_i : i \in I\}$ is a collection of linear operators on $V$ and $A_i T = T A_i$ for all $i \in I$, then all eigenspaces of $T$ are invariant under the collection $\{A_i : i \in I\}$.

All these statements can be proved directly from the definitions.
2.3.5 Intertwining operators
Here we look at the concept of intertwining operator that is implicitly present in the statement of Schur's lemma.

Let $\{A_i : i \in I\}$ be a collection of linear operators on $V$, $\{B_i : i \in I\}$ be a collection of linear operators on $W$, and $T$ a linear operator $V \to W$. We say that $T$ is an intertwining operator for the collections $\{A_i : i \in I\}$, $\{B_i : i \in I\}$ if $TA_i = B_i T$ for all $i \in I$.

In the context of representation theory, the index set $I$ has the extra structure of being a group or an algebra, and the mappings $i \mapsto A_i$, $i \mapsto B_i$ have the extra structure of being group or algebra homomorphisms. Here, we will want the slightly more general definition that allows an arbitrary index set $I$.
Chapter 3
Overview of Results
First, we look at the question: given a non-local XOR game, what can we say about
optimal or nearly optimal strategies for the game? We show that there exists a set of
relations such that any optimal strategy satisfies the relations, and any nearly optimal
strategy nearly satisfies the relations. More specifically, we prove the following:
Theorem 3.1. Consider a non-local XOR game specified by an $n \times m$ matrix $\gamma$ and such that the quantum success bias for the game is $\Gamma$. Then, there exist vectors $\alpha_1, \dots, \alpha_r \in \mathbb{R}^n$ and $\beta_1, \dots, \beta_r \in \mathbb{R}^m$ such that

* $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ and bipartite state $|\psi\rangle$ are an optimal strategy for the game, i.e.,
\[ \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle = \Gamma, \]
if and only if
\[ \forall k = 1, \dots, r \qquad \alpha_k \cdot \vec{A} \otimes I\, |\psi\rangle = I \otimes \beta_k \cdot \vec{B}\, |\psi\rangle. \]

* $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ and bipartite state $|\psi\rangle$ are an $\varepsilon$-optimal strategy for the game, i.e.,
\[ (1 - \varepsilon)\,\Gamma \;\le\; \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle \;\le\; \Gamma, \]
if and only if
\[ \sum_{k=1}^r \big\| \alpha_k \cdot \vec{A} \otimes I\, |\psi\rangle - I \otimes \beta_k \cdot \vec{B}\, |\psi\rangle \big\|^2 \;\le\; \varepsilon\,\Gamma. \]
The proof of Theorem 3.1 is in Chapter 4. The proof relies on the semi-definite program that can be associated to an XOR game, and on an argument that is related to the complementary slackness condition. From the proof, one can see that the vectors $\alpha_1, \dots, \alpha_r \in \mathbb{R}^n$ and $\beta_1, \dots, \beta_r \in \mathbb{R}^m$ from the statement of Theorem 3.1 can be computed efficiently by solving a semi-definite program and finding the eigenvalues and eigenvectors of a positive semi-definite matrix.
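To make this computational claim concrete, the following sketch (added here; it assumes the cvxpy package with its bundled SDP solver, and it follows the recipe given in Chapter 4 rather than any code from the thesis) solves the dual program, forms $\sum_i w_i E_{ii} - G$, and reads the vectors $\alpha_k$, $\beta_k$ off its scaled eigenvectors.

```python
import numpy as np
import cvxpy as cp

def relation_vectors(gamma):
    """Solve the dual SDP for an XOR game matrix gamma; return (Gamma, alphas, betas)
    giving the relations alpha_k . A (x) I |psi> = I (x) beta_k . B |psi>."""
    n, m = gamma.shape
    G = 0.5 * np.block([[np.zeros((n, n)), gamma],
                        [gamma.T, np.zeros((m, m))]])
    w = cp.Variable(n + m)
    value = cp.Problem(cp.Minimize(cp.sum(w)), [cp.diag(w) >> G]).solve()

    M = np.diag(w.value) - G                     # positive semi-definite (up to tolerance)
    eigvals, eigvecs = np.linalg.eigh(M)
    eigvals = np.clip(eigvals, 0, None)          # remove tiny negative round-off
    vs = eigvecs * np.sqrt(eigvals)              # columns v_k with M = sum_k v_k v_k^T
    keep = [k for k in range(n + m) if eigvals[k] > 1e-9]
    alphas = [vs[:n, k] for k in keep]           # v_k = (alpha_k ; -beta_k) in block form
    betas = [-vs[n:, k] for k in keep]
    return value, alphas, betas

# Example: the CHSH game matrix (the n = 2 member of the CHSH(n) family)
gamma = np.array([[0.25, 0.25], [0.25, -0.25]])
Gamma, alphas, betas = relation_vectors(gamma)
print(Gamma)                                     # ~ 1/sqrt(2)
```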
Next, we focus attention on the CHSH(n) XOR games. By specializing the methods from the proof of Theorem 3.1 to the case of CHSH(n), we obtain relations that
must be satisfied by any optimal or nearly-optimal CHSH(n) strategy; these relations
are contained in the following theorem:
Theorem 3.2. The following three statements for $\pm 1$ observables $A_i$, $B_{jk}$ and bipartite state $|\psi\rangle$ are equivalent:

* $A_i, B_{jk}, |\psi\rangle$ is an optimal CHSH(n) strategy, i.e.
\[ \frac{1}{4\binom{n}{2}} \sum_{1 \le i < j \le n} \langle\psi| \big( A_i \otimes B_{ij} + A_i \otimes B_{ji} + A_j \otimes B_{ij} - A_j \otimes B_{ji} \big) |\psi\rangle = \frac{1}{\sqrt{2}}. \]

* The observables and state satisfy, for all $i, j$, $1 \le i < j \le n$,
\[ \frac{A_i + A_j}{\sqrt{2}} \otimes I\, |\psi\rangle = I \otimes B_{ij}\, |\psi\rangle, \qquad \frac{A_i - A_j}{\sqrt{2}} \otimes I\, |\psi\rangle = I \otimes B_{ji}\, |\psi\rangle. \]

* The observables and state satisfy, for all $i, j$, $1 \le i < j \le n$,
\[ A_i \otimes I\, |\psi\rangle = I \otimes \frac{B_{ij} + B_{ji}}{\sqrt{2}}\, |\psi\rangle, \qquad A_j \otimes I\, |\psi\rangle = I \otimes \frac{B_{ij} - B_{ji}}{\sqrt{2}}\, |\psi\rangle. \]

Moreover, the following approximate versions of the three statements are also equivalent:

* $A_i, B_{jk}, |\psi\rangle$ is an $\varepsilon$-optimal CHSH(n) strategy, i.e.
\[ \frac{1}{4\binom{n}{2}} \sum_{1 \le i < j \le n} \langle\psi| \big( A_i \otimes B_{ij} + A_i \otimes B_{ji} + A_j \otimes B_{ij} - A_j \otimes B_{ji} \big) |\psi\rangle \;\ge\; (1 - \varepsilon)\,\frac{1}{\sqrt{2}}. \]

* The observables and state satisfy
\[ \sum_{1 \le i < j \le n} \Big\| \frac{A_i + A_j}{\sqrt{2}} \otimes I\, |\psi\rangle - I \otimes B_{ij}\, |\psi\rangle \Big\|^2 + \Big\| \frac{A_i - A_j}{\sqrt{2}} \otimes I\, |\psi\rangle - I \otimes B_{ji}\, |\psi\rangle \Big\|^2 \;\le\; 2n(n-1)\,\varepsilon. \]

* The observables and state satisfy
\[ \sum_{1 \le i < j \le n} \Big\| A_i \otimes I\, |\psi\rangle - I \otimes \frac{B_{ij} + B_{ji}}{\sqrt{2}}\, |\psi\rangle \Big\|^2 + \Big\| A_j \otimes I\, |\psi\rangle - I \otimes \frac{B_{ij} - B_{ji}}{\sqrt{2}}\, |\psi\rangle \Big\|^2 \;\le\; 2n(n-1)\,\varepsilon. \]
The proof of Theorem 3.2 is in Section 5.1.
Next, we analyze the structure of optimal and nearly optimal CHSH(n) strategies.
For the case of optimal strategies, we obtain the following classification theorem:
Theorem 3.3. $A_i, B_{jk}, |\psi\rangle$ is an optimal CHSH(n) strategy on the space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ if and only if there exist an orthonormal basis $|u_1\rangle, \dots, |u_{d_A}\rangle$ of $\mathbb{C}^{d_A}$ and an orthonormal basis $|v_1\rangle, \dots, |v_{d_B}\rangle$ of $\mathbb{C}^{d_B}$ such that all of the following statements hold:

* The Schmidt decomposition of $|\psi\rangle$ is
\[ |\psi\rangle = \sum_{i=1}^{s\,2^{\lfloor n/2 \rfloor}} \sqrt{\lambda_i}\, |u_i\rangle \otimes |v_i\rangle \]
with the Schmidt coefficients equal in blocks of length $2^{\lfloor n/2 \rfloor}$, i.e.
\[ \lambda_1 = \dots = \lambda_{2^{\lfloor n/2 \rfloor}}, \quad \lambda_{2^{\lfloor n/2 \rfloor}+1} = \dots = \lambda_{2 \cdot 2^{\lfloor n/2 \rfloor}}, \quad \dots, \quad \lambda_{(s-1)2^{\lfloor n/2 \rfloor}+1} = \dots = \lambda_{s\,2^{\lfloor n/2 \rfloor}}. \]

* With respect to the basis $|u_1\rangle, \dots, |u_{d_A}\rangle$ of $\mathbb{C}^{d_A}$, the observables $A_i$, $i = 1, \dots, n$ have the block diagonal form
\[ A_i = \begin{pmatrix} A_i^{(1)} & & & \\ & \ddots & & \\ & & A_i^{(s)} & \\ & & & C_i \end{pmatrix} \]
where each $A_i^{(j)}$ is $2^{\lfloor n/2 \rfloor} \times 2^{\lfloor n/2 \rfloor}$ and acts on $\mathrm{span}\big( |u_{(j-1)2^{\lfloor n/2 \rfloor}+1}\rangle, \dots, |u_{j\,2^{\lfloor n/2 \rfloor}}\rangle \big)$, and, for each $i = 1, \dots, n$, for each $j = 1, \dots, s$, $A_i^{(j)} = \sigma_{\lfloor n/2 \rfloor, i}$,¹ except for the case $n = 2k+1$ and $i = n$, in which case $A_n^{(j)}$ is either $\sigma_{k,2k+1}$ or $-\sigma_{k,2k+1}$. The block $C_i$ is an arbitrary $\pm 1$ observable on the orthogonal complement of $\mathrm{span}\big( |u_1\rangle, \dots, |u_{s\,2^{\lfloor n/2 \rfloor}}\rangle \big)$.

* With respect to the basis $|v_1\rangle, \dots, |v_{d_B}\rangle$ of $\mathbb{C}^{d_B}$, the observables $B_{jk}$, $j \neq k \in \{1, \dots, n\}$ have the block diagonal form
\[ B_{jk} = \begin{pmatrix} B_{jk}^{(1)} & & & \\ & \ddots & & \\ & & B_{jk}^{(s)} & \\ & & & D_{jk} \end{pmatrix} \]
where each $B_{jk}^{(l)}$ is $2^{\lfloor n/2 \rfloor} \times 2^{\lfloor n/2 \rfloor}$ and acts on $\mathrm{span}\big( |v_{(l-1)2^{\lfloor n/2 \rfloor}+1}\rangle, \dots, |v_{l\,2^{\lfloor n/2 \rfloor}}\rangle \big)$, and, for $1 \le j < k \le n$,
\[ B_{jk}^{(l)} = \frac{\big(A_j^{(l)}\big)^T + \big(A_k^{(l)}\big)^T}{\sqrt{2}}, \qquad B_{kj}^{(l)} = \frac{\big(A_j^{(l)}\big)^T - \big(A_k^{(l)}\big)^T}{\sqrt{2}}. \]
The block $D_{jk}$ is an arbitrary $\pm 1$ observable on the orthogonal complement of $\mathrm{span}\big( |v_1\rangle, \dots, |v_{s\,2^{\lfloor n/2 \rfloor}}\rangle \big)$.

¹Here, the observables $\sigma_{k,i}$ are the ones defined in the relations (2.1).
The proof of Theorem 3.3 is in Section 5.2. The proof uses the relations from Theorem 3.2 and the linear bijection $L$ between $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ and $\mathrm{Mat}_{d_A,d_B}(\mathbb{C})$ from subsection 2.1.2. The bipartite state $|\psi\rangle$ from an optimal CHSH(n) strategy is shown to be such that $T = L(|\psi\rangle)$ is an intertwining operator between certain linear combinations of Alice's observables and certain linear combinations of the transposes of Bob's observables. Given the special structure of the relations for the CHSH(n) game, this is enough to imply the conclusions of Theorem 3.3.
One way to interpret Theorem 3.3 is that any optimal CHSH(n) strategy must be a direct sum of elementary optimal strategies on $\mathbb{C}^{2^{\lfloor n/2 \rfloor}} \otimes \mathbb{C}^{2^{\lfloor n/2 \rfloor}}$, possibly with some additional dimensions on each side that are orthogonal to the support of the state. Another interpretation is that the space $\mathrm{supp}_A|\psi\rangle \otimes \mathrm{supp}_B|\psi\rangle \subseteq \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ is a "good subspace" on which the observables from the strategy are "well-behaved", with $A_i$, $i = 1, \dots, n$ leaving the space $\mathrm{supp}_A|\psi\rangle$ invariant and satisfying the canonical anti-commutation relations on that space, and with $B_{jk}$, $j \neq k \in \{1, \dots, n\}$ leaving the space $\mathrm{supp}_B|\psi\rangle$ invariant and being determined there by $B_{jk} = (A_j^T \pm A_k^T)/\sqrt{2}$.
We now turn attention to $\varepsilon$-optimal CHSH(n) strategies. One may at first hope that an approximate version of Theorem 3.3 holds, in the sense that $A_i$, $i = 1, \dots, n$ nearly satisfy the canonical anti-commutation relations on $\mathrm{supp}_A|\psi\rangle$, and $B_{jk} \approx (A_j^T \pm A_k^T)/\sqrt{2}$ on $\mathrm{supp}_B|\psi\rangle$. Unfortunately, that turns out not to be the case; the obstacle is that one can take one of the optimal strategies described in Theorem 3.3 where some blocks of the Schmidt coefficients for $|\psi\rangle$ are arbitrarily small, and then one can change the corresponding blocks of the observables $A_i$, $B_{jk}$ to something arbitrary. The result is that one gets an $\varepsilon$-optimal CHSH(n) strategy such that the observables $A_i$, $i = 1, \dots, n$ are not well-behaved on all of $\mathrm{supp}_A|\psi\rangle$ and the observables $B_{jk}$, $j \neq k \in \{1, \dots, n\}$ are not well-behaved on all of $\mathrm{supp}_B|\psi\rangle$.
The next best thing one could hope for is that the observables $A_i$, $i = 1, \dots, n$, $B_{jk}$, $j \neq k \in \{1, \dots, n\}$, are well-behaved on some subspace of $\mathrm{supp}_A|\psi\rangle \otimes \mathrm{supp}_B|\psi\rangle$. One approach to finding such a subspace is to take a subset of the Schmidt vectors on the A side, and a subset of the Schmidt vectors on the B side. This approach has been pursued in reference [18]. The difficulty with this approach is that it gives error bounds that depend on the dimensions $d_A, d_B$ of the strategy. We have seen in Theorem 3.3 that $d_A, d_B$ can be arbitrarily large even for optimal strategies.
In this thesis, we take a different approach. We start with a strategy $A_i$, $B_{jk}$, $|\psi\rangle$ on $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ that is $\varepsilon$-optimal for CHSH(n). We introduce a new strategy $\tilde A_i$, $\tilde B_{jk}$, $|\tilde\psi\rangle$ on $\mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}}$ that we call the canonical optimal strategy for CHSH(n). Then we construct a non-zero linear operator $T : \mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}} \to \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ that approximately satisfies the intertwining operator property from representation theory. Formally, we prove the following:
Theorem 3.4. Let $A_i$, $B_{jk}$, $|\psi\rangle$ be an $\varepsilon$-optimal CHSH(n) strategy on $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. Let $\tilde A_i$, $\tilde B_{jk}$, $|\tilde\psi\rangle$ be the canonical optimal strategy on $\mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}}$. Then, there exists a non-zero linear operator
\[ T : \mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}} \to \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B} \]
with the properties
\[ \forall i \qquad \big\| (A_i \otimes I)\,T - T\,(\tilde A_i \otimes I) \big\|_F \le 12\, n^2 \sqrt{\varepsilon}\, \|T\|_F \]
\[ \forall j \neq k \qquad \big\| (I \otimes B_{jk})\,T - T\,(I \otimes \tilde B_{jk}) \big\|_F \le 17\, n^2 \sqrt{\varepsilon}\, \|T\|_F. \]
We now define the canonical optimal strategies that are used in the statement of Theorem 3.4. The canonical strategy is defined differently for the cases $n = 2k$ and $n = 2k+1$:

1. For the case $n = 2k$ we define the canonical strategy on the space $\mathbb{C}^{2^k} \otimes \mathbb{C}^{2^k}$ to be as follows:
\[ |\tilde\psi\rangle = \frac{1}{\sqrt{2^k}} \sum_{i=1}^{2^k} |i\rangle \otimes |i\rangle, \qquad \tilde A_i = \sigma_{k,i}, \quad i = 1, \dots, 2k, \]
\[ \tilde B_{jl} = \frac{1}{\sqrt{2}}\big( \tilde A_j^T + \tilde A_l^T \big), \qquad \tilde B_{lj} = \frac{1}{\sqrt{2}}\big( \tilde A_j^T - \tilde A_l^T \big), \qquad 1 \le j < l \le 2k. \]

2. For the case $n = 2k+1$ we define the canonical strategy on the space $\mathbb{C}^{2^{k+1}} \otimes \mathbb{C}^{2^{k+1}}$ to be as follows:
\[ |\tilde\psi\rangle = \frac{1}{\sqrt{2^{k+1}}} \sum_{i=1}^{2^{k+1}} |i\rangle \otimes |i\rangle, \qquad \tilde A_i = \begin{pmatrix} \sigma_{k,i} & 0 \\ 0 & \sigma_{k,i} \end{pmatrix}, \quad i = 1, \dots, 2k, \qquad \tilde A_{2k+1} = \begin{pmatrix} \sigma_{k,2k+1} & 0 \\ 0 & -\sigma_{k,2k+1} \end{pmatrix}, \]
\[ \tilde B_{jl} = \frac{1}{\sqrt{2}}\big( \tilde A_j^T + \tilde A_l^T \big), \qquad \tilde B_{lj} = \frac{1}{\sqrt{2}}\big( \tilde A_j^T - \tilde A_l^T \big), \qquad 1 \le j < l \le 2k+1. \]
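As a sanity check, the following self-contained NumPy sketch (added here for illustration; it builds the $n = 3$ canonical strategy exactly as written above, with the maximally entangled canonical state) confirms that the canonical strategy attains the bias $1/\sqrt{2}$ for CHSH(3).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def blkdiag(a, b):
    return np.block([[a, np.zeros_like(a)], [np.zeros_like(b), b]])

# Canonical optimal strategy for CHSH(3) (n = 2k+1 with k = 1) on C^4 (x) C^4
A = [blkdiag(sx, sx), blkdiag(sz, sz), blkdiag(sy, -sy)]     # A_1, A_2, A_3 (0-based here)
B = {}
for j in range(3):
    for l in range(j + 1, 3):
        B[(j, l)] = (A[j].T + A[l].T) / np.sqrt(2)
        B[(l, j)] = (A[j].T - A[l].T) / np.sqrt(2)

d = 4
psi = sum(np.kron(np.eye(d)[i], np.eye(d)[i]) for i in range(d)) / np.sqrt(d)

bias = 0.0
c = 1.0 / (2 * 3 * 2)                        # 1 / (4 * binom(3, 2))
for i in range(3):
    for j in range(i + 1, 3):
        M = (np.kron(A[i], B[(i, j)]) + np.kron(A[i], B[(j, i)])
             + np.kron(A[j], B[(i, j)]) - np.kron(A[j], B[(j, i)]))
        bias += c * np.real(psi.conj() @ M @ psi)
print(bias, 1 / np.sqrt(2))                  # both ~ 0.7071
```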
The motivation for defining the canonical strategies in this way is that the observables $\tilde A_1, \dots, \tilde A_n$ generate an algebra that is isomorphic to the Clifford algebra with $n$
generators.
Next, we say a few words about the motivation for proving a result of the form
of Theorem 3.4. We look at it from two different points of view: the point of view
of the concept of homomorphism in algebra, and the point of view of identifying a
"good subspace" on which the observables from a strategy are "well-behaved".
Consider the concept of homomorphism in algebra. When we talk of a homomorphism, we have two sets with certain operations on each, and the homomorphism is a map from one set to the other that preserves all the operations. In the context of Theorem 3.4, the two sets are $\mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}}$ and $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. The operations on $\mathbb{C}^{2^{\lceil n/2 \rceil}} \otimes \mathbb{C}^{2^{\lceil n/2 \rceil}}$ are addition, scalar multiplication, and the action of the operators $\tilde A_i \otimes I$, $I \otimes \tilde B_{jk}$. The operations on $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ are addition, scalar multiplication, and the action of the operators $A_i \otimes I$, $I \otimes B_{jk}$. The operator $T$ that we construct in Theorem 3.4 is linear, so it preserves addition and scalar multiplication, and it satisfies the approximate intertwining property, so it approximately maps the action of the operators $\tilde A_i \otimes I$, $I \otimes \tilde B_{jk}$ to the action of the operators $A_i \otimes I$, $I \otimes B_{jk}$.
Next we look at Theorem 3.4 from the point of view of identifying a "good subspace" on which the observables from a strategy are "well-behaved". We mentioned above that we can think about the classification theorem for optimal CHSH(n) strategies as saying that $\mathrm{supp}_A|\psi\rangle \otimes \mathrm{supp}_B|\psi\rangle \subseteq \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$ is a "good subspace" on which the observables from the strategy are "well-behaved". We also saw that trying to generalize this to near optimal strategies encounters difficulties if we look for a good subspace of the form $V \otimes W$ with $V \subseteq \mathrm{supp}_A|\psi\rangle$ and $W \subseteq \mathrm{supp}_B|\psi\rangle$.

At this point, we take a step back to the optimal CHSH(n) strategies. We notice that for an optimal strategy, inside the space $\mathrm{supp}_A|\psi\rangle \otimes \mathrm{supp}_B|\psi\rangle$ there is another space:
\[ \mathrm{span}\,\big\{ A_1^{j_1} \cdots A_n^{j_n} \otimes I\, |\psi\rangle : (j_1, \dots, j_n) \in \{0,1\}^n \big\} \]
and that this space is invariant under $A_i \otimes I$, $I \otimes B_{jk}$. It is also the case that for many optimal CHSH(n) strategies, the space
\[ \mathrm{span}\,\big\{ A_1^{j_1} \cdots A_n^{j_n} \otimes I\, |\psi\rangle : (j_1, \dots, j_n) \in \{0,1\}^n \big\} \]
cannot be written in the form $V \otimes W$; this is why this subspace cannot be found by methods looking for the "good subspace" as $V \otimes W$ with $V \subseteq \mathrm{supp}_A|\psi\rangle$ and $W \subseteq \mathrm{supp}_B|\psi\rangle$.

When we go to the nearly optimal CHSH(n) strategies, it is the space
\[ \mathrm{span}\,\big\{ A_1^{j_1} \cdots A_n^{j_n} \otimes I\, |\psi\rangle : (j_1, \dots, j_n) \in \{0,1\}^n \big\} \]
that we can identify as approximately a "good subspace". It will be clear from the proof of Theorem 3.4 that for the approximate intertwining operator $T$ we construct,
\[ \mathrm{Im}\, T = \mathrm{span}\,\big\{ A_1^{j_1} \cdots A_n^{j_n} \otimes I\, |\psi\rangle : (j_1, \dots, j_n) \in \{0,1\}^n \big\}. \]

The proof of Theorem 3.4 is in Section 5.3. The proof gives an explicit construction of the approximately intertwining operator $T$. The construction is motivated by the above insight about the importance of the space
\[ \mathrm{span}\,\big\{ A_1^{j_1} \cdots A_n^{j_n} \otimes I\, |\psi\rangle : (j_1, \dots, j_n) \in \{0,1\}^n \big\} \]
and by the group averaging technique, a common technique for constructing intertwining operators in representation theory.
Chapter 4
Relations for optimal and
nearly-optimal quantum strategies
The goal of this chapter is to prove Theorem 3.1. In section 4.1 we explain the relationship between non-local XOR games and semi-definite programs. This relationship has been noted previously in [19, 6], but we include here a detailed proof for completeness. In section 4.2 we give the main idea of the proof of Theorem 3.1. In section 4.3 we show how to obtain the vectors $\alpha_1, \dots, \alpha_r$, $\beta_1, \dots, \beta_r$ for the statement of Theorem 3.1 from the solution to the dual semi-definite program, and we show some properties of these vectors. In section 4.4 we prove a useful identity, and obtain Theorem 3.1 as a corollary. In section 4.5 we comment on the freedom of choosing the vectors $\alpha_1, \dots, \alpha_r$, $\beta_1, \dots, \beta_r$ for the statement of Theorem 3.1.
4.1 Non-local XOR games and semi-definite programs
Consider the maximization problem:
\[ \Gamma = \sup_{A_i, B_j, |\psi\rangle} \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle. \tag{4.1} \]
This maximization problem expresses the search for the optimal strategy for the non-local XOR game given by the $n \times m$ matrix $\gamma$. The supremum is taken over all valid strategies, consisting of a state space $\mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$, $\pm 1$ observables $A_1, \dots, A_n$ on $\mathbb{C}^{d_A}$, $\pm 1$ observables $B_1, \dots, B_m$ on $\mathbb{C}^{d_B}$, and a state $|\psi\rangle \in \mathbb{C}^{d_A} \otimes \mathbb{C}^{d_B}$. The value of the supremum, $\Gamma$, is the quantum success bias for the game.
supremum, F, is the quantum success bias for the game.
We now introduce a semi-definite program:
F =
sup
G-Z
(4.2)
Z>-O, Z-Ejj=1, i=1,...(n+m)
Here, Eii is the (n + m) x (n + m) matrix with 1 in the i-th diagonal entry and 0
everywhere else, and G is the (n + m) x (n + m) matrix with block form
0
-
G =
2 7r
T
The two maximization problems (4.1) and (4.2) are related as follows: for each
feasible solution of one of them, there is a feasible solution of the other that achieves
the same value. Formally:
Theorem 4.1.
1. For each quantum strategy $A_i, B_j, |\psi\rangle$, there is an $(n+m) \times (n+m)$ matrix $Z$ that is feasible for the semi-definite program (4.2) and such that
\[ G \cdot Z = \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle. \]
2. For each $(n+m) \times (n+m)$ matrix $Z$ that is feasible for the semi-definite program (4.2) there is a quantum strategy $A_i, B_j, |\psi\rangle$ such that
\[ \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle = G \cdot Z. \]
Proof. First, we prove Part 1. Given a quantum strategy $A_i, B_j, |\psi\rangle$, we seek to find a matrix $Z$ that is feasible for the primal semi-definite program and that achieves the same value.

Form the $(n+m) \times (n+m)$ matrix $Q$ of the inner products of the vectors $A_1 \otimes I\,|\psi\rangle, \dots, A_n \otimes I\,|\psi\rangle, I \otimes B_1\,|\psi\rangle, \dots, I \otimes B_m\,|\psi\rangle$; that is, in block form,
\[ Q = \begin{pmatrix} \big( \langle\psi| (A_i \otimes I)(A_j \otimes I) |\psi\rangle \big)_{i,j} & \big( \langle\psi| (A_i \otimes I)(I \otimes B_j) |\psi\rangle \big)_{i,j} \\[4pt] \big( \langle\psi| (I \otimes B_i)(A_j \otimes I) |\psi\rangle \big)_{i,j} & \big( \langle\psi| (I \otimes B_i)(I \otimes B_j) |\psi\rangle \big)_{i,j} \end{pmatrix}. \]

Next, we observe that the matrix $Q$ has the following properties:
* It is a matrix in $\mathrm{Mat}_{n+m}(\mathbb{C})$, it is self-adjoint, and it is positive semi-definite.
* It has all its diagonal entries equal to $1$.
* It has real entries of the form $\langle\psi| A_i \otimes B_j |\psi\rangle$ in the $n \times m$ upper right block and the $m \times n$ lower left block.

Next we take $Z$ to be the real part of the matrix $Q$. From the properties of the matrix $Q$ we get the following properties of the matrix $Z$:
* It is a matrix in $\mathrm{Mat}_{n+m}(\mathbb{R})$, it is symmetric, and it is positive semi-definite.
* It has all its diagonal entries equal to $1$.
* It has real entries of the form $\langle\psi| A_i \otimes B_j |\psi\rangle$ in the $n \times m$ upper right block and the $m \times n$ lower left block.

From the first two properties, we get that $Z$ is feasible for the primal semi-definite program (4.2). From the third property, we get that
\[ G \cdot Z = \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle. \]
The proof of Part 1 of Theorem 4.1 is complete.
Now, we prove Part 2. Let $Z$ be feasible for the primal semi-definite program (4.2); we seek to find a quantum strategy that achieves the same value.

$Z$ is positive semi-definite, so there exists some $(n+m) \times (n+m)$ matrix $Y$ such that $Z = Y^T Y$. Denote the first $n$ columns of $Y$ by $u_1, \dots, u_n$, and denote the remaining $m$ columns of $Y$ by $v_1, \dots, v_m$. Then, we have
\[ Z = \begin{pmatrix} u_1^T u_1 & \cdots & u_1^T u_n & u_1^T v_1 & \cdots & u_1^T v_m \\ \vdots & & \vdots & \vdots & & \vdots \\ u_n^T u_1 & \cdots & u_n^T u_n & u_n^T v_1 & \cdots & u_n^T v_m \\ v_1^T u_1 & \cdots & v_1^T u_n & v_1^T v_1 & \cdots & v_1^T v_m \\ \vdots & & \vdots & \vdots & & \vdots \\ v_m^T u_1 & \cdots & v_m^T u_n & v_m^T v_1 & \cdots & v_m^T v_m \end{pmatrix}. \]
We get that
\[ G \cdot Z = \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, u_i^T v_j \]
and in addition, from the primal constraint $Z \cdot E_{ii} = 1$, $i = 1, \dots, (n+m)$, we get that the vectors $u_1, \dots, u_n, v_1, \dots, v_m$ all have unit norm.

To complete the proof of Theorem 4.1, we claim that given unit vectors $u_1, \dots, u_n$, $v_1, \dots, v_m$ in $\mathbb{R}^d$, we can find $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ and bipartite state $|\psi\rangle$ such that
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j. \]
This fact was first observed in reference [19]. Here we also give a proof; see Lemma 4.1 below.

Assuming for now that we can find $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ and bipartite state $|\psi\rangle$ such that
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j, \]
it is clear that for this quantum strategy we have
\[ \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle = G \cdot Z. \]
The proof of Theorem 4.1 is complete. $\square$
Now we present the lemma that was used in the proof of Theorem 4.1:
Lemma 4.1. Let $u_1, \dots, u_n, v_1, \dots, v_m$ be unit vectors in $\mathbb{R}^d$. Then, there exist $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ and bipartite state $|\psi\rangle$ on $\mathbb{C}^{2^{\lfloor d/2 \rfloor}} \otimes \mathbb{C}^{2^{\lfloor d/2 \rfloor}}$ such that
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j. \]

Proof. In the space $\mathbb{C}^{2^{\lfloor d/2 \rfloor}} \otimes \mathbb{C}^{2^{\lfloor d/2 \rfloor}}$ we take the state $|\psi\rangle$ to be
\[ |\psi\rangle = \frac{1}{\sqrt{2^{\lfloor d/2 \rfloor}}} \sum_{i=1}^{2^{\lfloor d/2 \rfloor}} |i\rangle \otimes |i\rangle. \]
Next we consider the $\pm 1$ observables $\sigma_{\lfloor d/2 \rfloor, 1}, \dots, \sigma_{\lfloor d/2 \rfloor, d}$ defined in (2.1). We will form the $\pm 1$ observables $A_1, \dots, A_n$, $B_1, \dots, B_m$ as linear combinations of $\sigma_{\lfloor d/2 \rfloor, 1}, \dots, \sigma_{\lfloor d/2 \rfloor, d}$.

Following the notation defined in subsection 2.3.3, we take, for a vector $w \in \mathbb{R}^d$,
\[ w \cdot \vec\sigma = \sum_{i=1}^d w_i\, \sigma_{\lfloor d/2 \rfloor, i}. \]
Using this notation, we form the observables $A_1, \dots, A_n$ from the vectors $u_1, \dots, u_n$ by taking
\[ A_i = u_i \cdot \vec\sigma. \]
We form the observables $B_1, \dots, B_m$ from the vectors $v_1, \dots, v_m$ by taking
\[ B_j = (v_j \cdot \vec\sigma)^T. \]

Using the fact that $u_1, \dots, u_n, v_1, \dots, v_m$ are unit vectors and using Lemma 2.2, we get that $A_i^2 = I$ and $B_j^2 = I$ for all $i, j$. In addition, $A_i$, $B_j$ are self-adjoint by construction; it follows that the $A_1, \dots, A_n$, $B_1, \dots, B_m$ constructed above are $\pm 1$ observables.

It remains to show that
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j. \]
First, we observe that the state
\[ |\psi\rangle = \frac{1}{\sqrt{2^{\lfloor d/2 \rfloor}}} \sum_{i=1}^{2^{\lfloor d/2 \rfloor}} |i\rangle \otimes |i\rangle \]
has the property
\[ M \otimes I\, |\psi\rangle = I \otimes M^T\, |\psi\rangle \]
for all matrices $M$ on $\mathbb{C}^{2^{\lfloor d/2 \rfloor}}$. This follows by using the linear bijection $L$ from subsection 2.1.2 and the properties of $L$ in Lemma 2.1:
\[ L\big( M \otimes I\, |\psi\rangle \big) = M\, L(|\psi\rangle) = \frac{1}{\sqrt{2^{\lfloor d/2 \rfloor}}}\, M = \frac{1}{\sqrt{2^{\lfloor d/2 \rfloor}}}\, I\, M = L(|\psi\rangle)\, M = L\big( I \otimes M^T\, |\psi\rangle \big). \]

From the property
\[ M \otimes I\, |\psi\rangle = I \otimes M^T\, |\psi\rangle \]
it follows that
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = \frac{\big( \langle\psi| A_i \otimes I \big)\big( I \otimes B_j\, |\psi\rangle \big) + \big( \langle\psi| I \otimes B_j \big)\big( A_i \otimes I\, |\psi\rangle \big)}{2} = \langle\psi| \frac{A_i B_j^T + B_j^T A_i}{2} \otimes I\, |\psi\rangle. \]
Finally, we apply Lemma 2.2 and get
\[ \frac{A_i B_j^T + B_j^T A_i}{2} = \frac{(u_i \cdot \vec\sigma)(v_j \cdot \vec\sigma) + (v_j \cdot \vec\sigma)(u_i \cdot \vec\sigma)}{2} = (u_i^T v_j)\, I \]
and from here we obtain
\[ \langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j \]
as needed. The proof of Lemma 4.1 is complete. $\square$
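The construction in Lemma 4.1 is easy to test numerically. The sketch below (added for illustration; variable names are ours) builds the observables $u_i \cdot \vec\sigma$ and $(v_j \cdot \vec\sigma)^T$ from random unit vectors in $\mathbb{R}^d$ and checks that $\langle\psi| A_i \otimes B_j |\psi\rangle = u_i^T v_j$.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def sigmas(k):
    """2k+1 anti-commuting +/-1 observables on C^{2^k}, as in relations (2.1)."""
    out = []
    for j in range(k):
        pre, post = [sy] * j, [I2] * (k - j - 1)
        out.append(reduce(np.kron, pre + [sx] + post))
        out.append(reduce(np.kron, pre + [sz] + post))
    out.append(reduce(np.kron, [sy] * k))
    return out

d = 5                                   # dimension of the real vectors
k = d // 2                              # floor(d/2) qubits on each side
sig = sigmas(k)[:d]                     # use the first d observables
dim = 2 ** k

rng = np.random.default_rng(3)
u = rng.normal(size=(3, d)); u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.normal(size=(2, d)); v /= np.linalg.norm(v, axis=1, keepdims=True)

# Maximally entangled state on C^{dim} (x) C^{dim}
psi = sum(np.kron(np.eye(dim)[i], np.eye(dim)[i]) for i in range(dim)) / np.sqrt(dim)

for ui in u:
    A = sum(ui[t] * sig[t] for t in range(d))            # A_i = u_i . sigma
    for vj in v:
        B = sum(vj[t] * sig[t] for t in range(d)).T      # B_j = (v_j . sigma)^T
        val = np.real(psi.conj() @ np.kron(A, B) @ psi)
        assert np.isclose(val, ui @ vj)
print("Lemma 4.1 construction verified")
```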
Having established the relation between the optimization problem (4.1) and the semi-definite program (4.2), we now turn attention to the dual semi-definite program. The dual to (4.2) is:
\[ \Gamma'' = \inf_{\sum_{i=1}^{m+n} w_i E_{ii} \succeq G} \sum_{i=1}^{m+n} w_i. \tag{4.3} \]
Both the primal and the dual semi-definite programs have strictly feasible solutions; therefore, by Theorem 2.2 the primal supremum is attained, the dual infimum is attained, and both are equal. Combining this with Theorem 4.1, we get that $\Gamma = \Gamma' = \Gamma''$ and that $\Gamma$ is also attained.
To summarize, we have the three optimization problems
\[ \Gamma = \sup_{A_i, B_j, |\psi\rangle} \sum_{i=1}^n \sum_{j=1}^m \gamma_{ij}\, \langle\psi| A_i \otimes B_j |\psi\rangle \]
\[ \Gamma' = \sup_{Z \succeq 0,\; Z \cdot E_{ii} = 1,\; i = 1, \dots, (n+m)} G \cdot Z \]
\[ \Gamma'' = \inf_{\sum_{i=1}^{m+n} w_i E_{ii} \succeq G} \sum_{i=1}^{m+n} w_i \]
and we know that $\Gamma = \Gamma' = \Gamma''$ and we know that all three are attained.
4.2 Proof idea for Theorem 3.1
We are now in a position to show how to use the dual semi-definite program (4.3) to obtain relations that any optimal or nearly optimal quantum strategy must satisfy.

The basic idea of the argument is to look at the duality gap: for $Z$ a feasible primal solution and $w_1, \dots, w_{m+n}$ a feasible dual solution, we have
\[ \Big( \sum_{i=1}^{m+n} w_i E_{ii} - G \Big) \cdot Z \ge 0. \]
Moreover, if $w_1, \dots, w_{m+n}$ is dual optimal and if $\Gamma(1 - \varepsilon) \le G \cdot Z \le \Gamma$, then
\[ \Gamma\varepsilon \ge \Big( \sum_{i=1}^{m+n} w_i E_{ii} - G \Big) \cdot Z \ge 0, \]
so we can use the dual optimal solution to obtain relations on primal optimal and near-optimal solutions. We proceed with the details in the sections below.
4.3 Decompositions of the dual optimal solution
In the statement of Theorem 3.1 we use vectors $\alpha_1, \dots, \alpha_r \in \mathbb{R}^n$, $\beta_1, \dots, \beta_r \in \mathbb{R}^m$. We now show how to obtain these vectors from the dual optimal solution; the argument is contained in the following lemma and its proof.
Lemma 4.2. Let $w_1, \dots, w_{m+n}$ be an optimal solution for the dual semi-definite program (4.3). Then, there exist vectors $\alpha_1, \dots, \alpha_r \in \mathbb{R}^n$, $\beta_1, \dots, \beta_r \in \mathbb{R}^m$ with the properties
\[ \sum_{i=1}^r \alpha_i \alpha_i^T = \sum_{i=1}^n w_i E_{ii} = \mathrm{Diag}(w_1, \dots, w_n), \qquad \sum_{i=1}^r \beta_i \beta_i^T = \sum_{i=1}^m w_{n+i} E_{ii} = \mathrm{Diag}(w_{n+1}, \dots, w_{n+m}), \qquad \sum_{i=1}^r \alpha_i \beta_i^T = \frac{\gamma}{2}. \tag{4.4} \]
Proof. We look at the $(n+m) \times (n+m)$ matrix $\sum_{i=1}^{m+n} w_i E_{ii} - G$. It is positive semi-definite by the dual constraint. Therefore, there exist vectors $v_1, \dots, v_r \in \mathbb{R}^{m+n}$ such that
\[ \sum_{i=1}^{m+n} w_i E_{ii} - G = \sum_{i=1}^r v_i v_i^T. \]
One possible such decomposition comes from the orthonormal eigenvectors of $\sum_{i=1}^{m+n} w_i E_{ii} - G$, each eigenvector multiplied by the square root of the corresponding eigenvalue. There is also freedom in choosing this decomposition; we say more about this in section 4.5.

Now we look at the block decomposition of the matrix $\sum_{i=1}^{m+n} w_i E_{ii} - G$ and of the vectors $v_1, \dots, v_r$. The $(n+m) \times (n+m)$ matrix $\sum_{i=1}^{m+n} w_i E_{ii} - G$ can be written in block form as
\[ \sum_{i=1}^{m+n} w_i E_{ii} - G = \begin{pmatrix} \mathrm{Diag}(w_1, \dots, w_n) & -\gamma/2 \\ -\gamma^T/2 & \mathrm{Diag}(w_{n+1}, \dots, w_{n+m}) \end{pmatrix}. \]
For the vectors $v_1, \dots, v_r$, let $\alpha_1, \dots, \alpha_r \in \mathbb{R}^n$, $\beta_1, \dots, \beta_r \in \mathbb{R}^m$ be such that
\[ v_i = \begin{pmatrix} \alpha_i \\ -\beta_i \end{pmatrix}, \qquad i = 1, \dots, r \]
in block form.

By using the block decompositions, we get
\[ \begin{pmatrix} \mathrm{Diag}(w_1, \dots, w_n) & -\gamma/2 \\ -\gamma^T/2 & \mathrm{Diag}(w_{n+1}, \dots, w_{n+m}) \end{pmatrix} = \sum_{i=1}^r \begin{pmatrix} \alpha_i \\ -\beta_i \end{pmatrix} \begin{pmatrix} \alpha_i^T & -\beta_i^T \end{pmatrix} \]
and from here we get the relations
\[ \sum_{i=1}^r \alpha_i \alpha_i^T = \mathrm{Diag}(w_1, \dots, w_n), \qquad \sum_{i=1}^r \beta_i \beta_i^T = \mathrm{Diag}(w_{n+1}, \dots, w_{n+m}), \qquad \sum_{i=1}^r \alpha_i \beta_i^T = \frac{\gamma}{2}. \]
The lemma is proved. $\square$
4.4
[
A useful identity and the proof of Theorem 3.1
So far, we have obtained the vectors a1 , ... a7r E R,
301,
...
.r
C R"' as in Lemma 4.2.
To complete the proof of Theorem 3.1, we use the following identity:
54
Lemma 4.3. Let A 1
Rn,
,3,
...
An, B1 , ... Bin, 1,0) be a quantum strategy. Let a1 ,.... a, E
R' be vectors with the properties:
r
S aT =T
in
i=1
i=1
r
in+iEi
i=1
2=1
Then, the following identity holds:
2
n
m+n
-kb-~I®/3k.B
Ek
i i
mn
- E E
yij
Ai 9 Bj|')
(4.5)
i=1 j=1
k=1
Proof. We open the squares on the left-hand side:
r
EZ1
2
-4
c~k A
k
BIy5) 1
k=1
i=1
I1k) + r(
(ai
I0
(i- B)
)o
i=1
2
(a - - ) & (8i - B)
1
Now, from the property
n
'wEij
aia T
i=1
i=1
we obtain
n
r
i=1
2
E wjA
GAZAJ
+5
i=1
i7si
Similarly, from the property
r
=
=1
i=1
55
n+iEi
=
(w
\n=
I
Io)
we obtain
mn
2
1(3i - B)
Wn+iBi
=
+ZEOBB,
j=1
WI
-('M
i#j
Finally, from the property
1
EZa3T
we obtain
2
2
n m -yhA%®0Bj
0i- -)=ZZ
(i
i=1 j=1
i=1
The identity
m+ri
r
A®
011~)
-
2
I &/ k
-
ak=
3
i=1
k=1
n
rn
wi - E|A
i=1 j=1
0 Bj|V)
El
follows.
Using Lemma 4.3, we can complete the proof of Theorem 3.1.
Proof of Theorem 3.1. We have chosen w,. . Wn+m to be a dual optimal solution, so
-_+-M'Uw= F. Then, by Lemma 4.3,
r
kak=
A
2
I|'1) - I 03k- - B14>)
=r -
{p|j(01Ai 0 Bj|)
E
j=1
i=1
It follows that
- jj('IAi a Bjf I4V) - P
i=1
j=1
if and only if
Vk = 1,...r
ak
'A
1
M
I1,0) =I-1
A
and that
E
i=1
i~E V i (S B.7 IV) < IF
j=1
56
-/3|
r
Z
2
l lk
k=1
'AOII11P) --
0
]7<cF
Ak.BIO)
Theorem 3.1 is proved.
4.5
Freedom in the choice of decomposition
In this section, we return to the choice of decomposition we made in Section 4.3 when
we wrote
m+n
r
(wjEjj - G=-
T
[-
r
vivi =(ae
=
-p
--#iJ
It is known that for a square positive semi-definite matrix M, the decomposition
of the form M = Ej vivi is not unique. In fact, the following theorem holds:
Theorem 4.2. Let M be a real symmetric positive semi-definite matrix. The following statements hold:
* Let M =
= vivi and let 0 be an r x r orthogonal matrix.
E_.O1k0k. Then,
M
=
E
Ui
Let ul =
T
E
u 1 ,. . . u, with r - s copies of the zero vector.
Then, there exists an r x r
orthogonal matrix 0 such that u1 = EZ= 1 OlkVk.
For a proof of this theorem, see [16][p. 103-1041. The proof there treats self-adjoint
matrices over C, but it easily adapts to the real symmetric case.
From this, we can see that there is freedom in the decomposition
r
m+n
wjLiEj - G=
1~
a
-
Z'
Let
=
Ml
and
=v
_8=
ensemble
the
Complete
s.
r>ugu'
assume
if and only if
0Z T
=1-j
57
-#O
a
and that any two such decompositions are related by
kak
k=1
81'
-O
for some orthogonal matrix 0.
We can see that the different decompositions give rise to equivalent sets of relations; the argument is contained in Lemma 4.4 and Corollary 4.1 below. Despite
the fact that different sets of relations are equivalent, it will be convenient in future
arguments to be able to use more than one set of relations.
We conclude this section by explaining the equivalence of relations arising from
the different decompositions. The argument is contained in the following lemma and
corollary.
Lemma 4.4. Let u1 , ... ur, V1,
Z=
1
OIkVk
...
VT
be vectors in some inner product space such that
for some orthogonal matrix 0. Then,
Proof. We open the squared norms
I112
r_ 1 |uk||2
r
112vkI
and use the fact that 0 has orthonormal
rows and we get:
E IUk 112
1=1
k=1
nt=l \k=1
JIok||2
{VIIVm) =
Oki Ok-m
=
/k=1
I
Corollary 4.1. Let
Kil
M+
r
wjEj -G
1
-
z
=( 1
=1
p
1
--
58
=
-O=
[
-p0-
T
~
T]
I-
be two decompositions. Then
r
EZak
A®I
10)I3k -BIO)
'/ 1
22k
k
k=1
1
(a
2
2j
k=1
and so the two decompositions give equivalent relationsfor Theorem 3.1.
Proof. Assume without loss of generality that r > r'. From Theorem 4.2, we know
that
1
[a
=~Olk [k
k=1
-,3k
for some orthogonal matrix 0.
Then,
a' -A OI|O) - I 0i'
-
@) =
k=1
Olk (a
so we can apply Lemma 4.4 to the vectors ak and a' - A OIIV) -I O #[ -BIV)), k = 1,..r'
59
-A®Ikb) -- I/3k
0 I0
) - I ®k - Bk1), k = 1,
F]
60
Chapter 5
The structure of CHSH(n) optimal
and nearly optimal strategies
In this chapter we prove the results that characterize optimal and nearly optimal
strategies for the CHSH(n) XOR games: Theorems 3.2, 3.3, and 3.4.
Theorem 3.2 gives relations that any optimal CHSH(n) strategy must satisfy,
and any nearly optimal CHSH(n) strategy must approximately satisfy. The proof of
Theorem 3.2 is in section 5.1.
Theorem 3.3 classifies the optimal CHSH(n) strategies. The proof of Theorem 3.3
is in section 5.2.
Theorem 3.4 constructs an approximately intertwining operator between the canonical optimal CHSH(n) strategy and a given E-optimal CHSH(n) strategy. The proof
of Theorem 3.4 is in section 5.3.
61
5.1
Relations for CHSH(n) optimal and nearly optimal strategies
In this section, we aim to prove Theorem 3.2. We will prove the second part of the
theorem; that is, we aim to show the equivalence of the three statements
Bik, 1'0)
1
(1 -
is an E-optimal CHSH(n) strategy, i.e.
)
o Aj,
<(|
4(n)
.
11
n
Ai & Bij + Ai 0 Bjj + Aj 0 Bij - Aj 0 Bjj)|10) <
1
The observables and state satisfy
S
( +A
2
0 BIV,)
-li)-i
I
1 i<j~n
+ I/20
9
I)
- I0 Bji
)
< 2n(n - 1)E
The observables and state satisfy
S
I|@) - I®O
Bij + Bjj1')
2
(
1 i<j~n
v/-22
< 2n(n - 1)E
)
A o I0) -. I®
If we prove the above three statments are equivalent, then we also get the first part
of Theorem 3.2 by taking c = 0.
We use the same techniques that we used in proving Theorem 3.1 in chapter 4.
We look at the primal and dual semi-definite program corresponding to the CHSH(n)
game, and we find two explicit decompositions that, using Theorem 3.1, give us the
62
equivalence of the three statements above.
We take the n x n(n - 1) matrix -y that summarizes the information for the
CHSH(n) game. From subsection 2.1.8 we know that
7 =
(ti)(II + j)(iji + i)(il - Ij)(Aii)
Next, we form the n 2 x n2 matrix G which has the block form:
0
1
-
G =
In this context, it is convenient to think of R" 2 as having an orthonormal basis formed
by concatenating the basis 11),.
Rn(n-1). So,
In) of R" and the basis Iij), i # j
{1,
..
.n} of
we can write
+ (Wij)il + lij)(l + Iji( - jIA)))
Next, we form the primal and dual semi-definite programs corresponding to the
CHSH(n) game; they are
sup
G-Z
Z>-O,Z-Eiti=1
n
2
E wi
inf
i=1 wiEiitG
i=1
We know that the optimal value for both is !;
this follows from the result in ref-
erence [18] about the quantum success bias of the CHSH(n) game, and the discussion
in Section 4.1.
63
Next, we claim that w
-
=
1
2Vf2- I Wn1
=1
dual optimal solution. We can see that
,
=
W,2
=
Wn2=
2 2n (n -1)
isa
i
the dual optimum, so all that
is left to prove is that w 1 , . . . w,2 is dual feasible.
To prove that wi,...
Wn2
is dual feasible, we show that
i12
We define the following vectors for 1 < i < j < N
a
=
Ii)
ji
)
(5.1)
ji) +ii)
-
__ij)
ii)
and observe that the following decomposition holds:
n2
Zwi Eli - G
i=1
S21n(n1i<j~
n
((oik -Oij)
It follows that the matrix
the given wi,...
W,2
(aij - OT)\ +(aji - #i) (aji -#3) (5.2)
wE - G is positive semi-definite, and therefore,
are a dual optimal solution as claimed.
Now, we observe that from the decomposition (5.2), it follows that the vectors
(5.1) satisfy the conditions of Lemma 4.3; therefore, we apply Lemma 4.3, and, as in
the Section 4.4, we conclude that the following two statements are equivalent:
64
o Aj, Bk,
l)
is an E-optimal CHSH(n) strategy, i.e.
1
7(1 -E)
1(2
E
(|
(1Ai
0 Bij + A 0 Bj + A 0 Btj - A 0 By ) |0)
<I
* The observables and state satisfy
110)
2
Bj- B-- 17
A 0Iji)-I
+
(A
2
-
S
1<i<j~rL
B-10Bj + BV2
N
"|2)
2
)
2n(n-1)
Next, we define the following vectors for 1 < i < j < N
ai) +j)
(5.3)
=
|i
)
i
and observe that the following decomposition holds:
n2
5wEj -
G
i=1
2v 2n(n - 1)
-C
ij (01,
_- ij
+ Cf
-
ti~;) (afi
-
I3~T
(5.4)
1 i<j n
Again we apply Lemma 4.3 and this time we conclude that the following two
statements are equivalent:
65
* Aj, BJk, 4) is an E-optimal CHSH(n) strategy, i.e.
(1 - E)
1
(
<K
i
(|j (Ai 0 Bij + Ai D Bji + Aj & Bij - Aj 0 Bji)I)<
e The observables and state satisfy
S
2
A +A
Bij IV,)
+
v1-2I
I Bjilo) 2
2n(n -1)c
This completes the proof of Theorem 3.2.
5.2
Classification of CHSH(n) optimal strategies
The goal of this section is to prove Theorem 3.3. Theorem 3.3 claims the equivalence
of two statements:
" A strategy is optimal for the CHSH(n) game
" There are bases for Alice's space and for Bob's space with respect to which the
strategy has a certain form.
We prove that the first statement implies the second in subsection 5.2.1, and we
prove that the second statement implies the first in subsection 5.2.2.
5.2.1
An optimal CHSH(n) strategy must have a certain form
Let Aj, Bjk, IV) be an arbitrary optimal CHSH(n) strategy on
CIA
0
CdB.
is to show that this strategy has the structure described in Theorem 3.3.
66
Our goal
From Theorem 3.2 we know that the following relations are satisfied for all i, j1 <
i<jin
B --
<Bi3+
A4i 0 I10)
= I (D Bi +B
Aj 9 I1,0) = I0
Bi
1)
- Bj 1)
Let T = L(|,)) be the dA x dB matrix that corresponds to
4')
E CdA 0 CdB
(subsection 2.1.2). To the relations above correspond the following relations in terms
of AP:
+ Bjj
A VA19== XP (Bij
(B
- Bi)T
(5.5)
AV=4
It follows that the space Imx
C
(Bij - Bj)
CA is invariant under the observables Aj, i
=
1, ... n , by using Schur's lemma 2.3.
Let the non-zero terms in the Schmidt decomposition of IV) be
4')
=
>3
AfIu) 0 Ivi)
i=1
IUr) as an orthonormal basis of ImJ, and complete it to an orthonor-
Choose U 1 ),
...
mal basis of
CdA.
With respect to this basis, the observables A, i = 1,... n have the
block form
Ai=
[A'
0
0
Ci
where A' acts on Im4, and Ci acts on the orthgonal complement.
From A' = Ai, A2
=
I, it follows that A't = A', A' 2 = I and C = Ci, C2 = I.
It is clear at this point that the blocks C, i = 1,
...
n may be arbitrary, and that
they don't in any way influence the quantum value achieved by the strategy. From
now on, we focus on the observables A' that act on the space ImT.
67
We now claim that for all i,j, 1 < i <
{Ai~i}~=~( B
BT
+ BT
BT
{A', A'} = 0. This is because
BT
--
=B 9((BT)2 - (B,5)2) =0
,+ B
{Aj, Ajj}W = j
< n,
It follows that A',... A' are anti-commuting I observables on the space ImXI.
We apply Theorem 2.3 and get that the number of non-zero Schmidt coefficients of
10) is an integer multiple of 2Ln/2J. Let r = s2Ln/2J.
We now consider the operator
T, which takes the space ImT to itself. Form
the relations (5.5), it follows that
AiWt=9(Bij + Bj
P
B= +FB
it = TV At _ V t Aj
We now apply Schur's lemma, and conclude that all eigenspaces of XQt must be
invariant spaces for the observables Al, .. .A'. It then follows that all eigenspaces of
xT must have dimension an integer multiple of 2[n/2j.
From this conclusion about the eigenspaces of 'T, and from the expression
s2L-/2j
i=1
we get that the non-zero Schmidt coefficients of |b) must come in blocks of length
2L,,/2] that are equal, i.e.
1
~
A2 Ln/2j+ 1
A(s-1)2Ln/2j+1
j
2 Ln/ 2
.2-/2j
=
As2 L-/21
Returning to the observables A',... A', we apply Theorem 2.3 and get that with
68
respect to the basis
lui),...
IUs2tn/2j), the observables
A',
. . .
A' have the block diag-
onal form:
Ai
[A~oA~s)]
=
where each AYj
for each i
=
is
2 [n/2j
x
2
Ln/21
and acts on span(Iu(- 1 ) 2 Lf/21), -- -
1, . . . n, for each j = 1, . . s,
and i = n, in which case A(
is either
= U-n/2Ji
Uk,2k+1
lgt.2)),
except for the case n
and,
2k+1,
or -Ok,2k+1-
The proof of the forward direction of Theorem 3.3 is now almost complete; it
remains to prove the statement about Bjk, j
k E {1, ...
n} . We take the following
relations from Theorem 3.2:
0 [0=1oB
Iv@) =I®@B~I'|
)
Ai + Aj
0
and we rewrite them in terms of IQT to get
Ai + Aj
Bj q/T=q
Bv
Ai+Ai) T=
BjjIT = jT (AiA
)
3
It follows from Schur's lemma that ImTT = span(vi), ...
under Bjk, j / kE {1, ... n} , and so Bjk, i
(5.6)
IVS2t/2J)) is invariant
kc {l,.. . n} have the block diagonal
form
Bjk
=
Bjk
0
0
Djk
where the t1 observables Bjk act on ImT
orthogonal complement.
69
and the
1 observables Djk act on the
The final thing that is left to show is the block-diagonal decomposition
B(1
B'k- [B.
and the relations on the individual blocks, for 1 < j < k < n
jk
(1
B=A(') + AC
=
B
A(')- A(
T
ki
These follow from the relations (5.6) and from the fact that with respect to the basis
u*),...ju*2 ta/2j) of the source space and the basis vi),.
.. Iv 8 2
n/2J) of the target space,
IQT has the block diagonal form
VT2L 12J
XpT
V'Xs 21I2jI
The forward direction of Theorem 3.3 is proved.
5.2.2
Any strategy of a certain form is optimal for CHSH(n)
We assume that a strategy Aj, Bjk, jI/)) on
CA
® CIB has the form described in The-
orem 3.3; that is, we assume
e The Schmidt decomposition of 1,) is
s2Lt/2j
i=
vTi IUi) (D IVi)
70
with the Schmidt coefficients equal in blocks of length 2 Ln/2J, i.e.
A,
A2Ln/2j
2Ln/2+ 1
2
(s-1)2Ln/ j+1
* With respect to the basis lui),
...
A2-2Ln/2
-s2Ln/2
udA) of
CdA,
the observables Aj, i = 1,... n
have the block diagonal form:
1
[A(')
Ai =
A(s)
Ci
where each A is 2 Ln/2j x 2 Ln/2J and acts on span(lu(- 1 )2 Lf/2j+1),
and, for each i
=
1,.
. . n,
j
for each
=
1, . .. s,
case n = 2k + 1, and i = n, in which case
The block C is an arbitrary
span(Iui), . ..
A
=
0
is either
4n/2Ji
...
except for the
0k,2k+1
or
-Uk,2k+1-
1 observable on the orthogonal complement of
U. 2 [n/2j)).
* With respect to the basis
lvi), .. .VdB)
of
CdB,
the observables Bik,
{1, ... n} have the block diagonal form:
1
Bg
Bjk
Djk
71
j $
k
E
where each B(1) is
2
Ln/2J x 2L/J and acts on spam(v(J_12 tf2j,),
. . .
Ij
2
t2/2)),,
and, for 1 < j < k < n,
T
+AO
A- A
ik
kj ---
v2
The block Djk is an arbitrary +1 observable on the orthogonal complement of
span(ivi), .....
V.22tn/2j ) ).
) is an optimal CHSH(n) strategy.
We have to show that A , Bik,
First, we use the description of the Schmidt decomposition of
4')
(the first bullet),
to write
=
2Ln/2J VA
2 Lf/2 1
41)
1=1
where
12 Ln/2]
1
1N")
2Ln/2j
IU-r) 0 IV)
E
2
r=(1-1)2t/
Next, we claim that for each i,
j +1
j, 1 < i < j < n, the following two statements
hold, the first for indvidual blocks, and the second for the whole observables:
* For each block number 1, 1 < I < s,
=1
A
B()I 4")
1
1'2
B I
)
(,IA(')
72
.
For the whole observables,
1
(O4Ai 0 BigjI) =
(4Ai 0 Bji b)
1
=
(5.7)
1
($|Aj 0 Bigj|)
1
($IAj 0 Bji|V) =0
-v/2
i = 1,
The first statement is true because the blocks A(
Bjk()=
A
n anti-commute on
suPPA|I 0), and because
u1 2 Ln/j))=
1
1), ...
f2
A
-
B k()
+
the space span(u(1- 1 ) 2Lfl/2i
...
We have already seen in the proof of lemma 4.1 that these facts imply that (7PIA( 0
B(1)1,01) equals the inner product of the vectors Ii) E Rn and
see that (
')$'f
0B
14i)
=
,
c R". Thus we
E)+Ij)
and the other three equalities follow similarly.
Now we focus on the statements for the whole observables. They follow from the
statements for the individual blocks. We show this for (4jAi (D BijIV)):
=/2]n/2 AB
0|A2 |
(
1)
((n/ IA|A
1=1
B
1)
=
Z2Ln/ 2
A12 L,,/2] 1
v12
1=1
The other three terms are analogous.
Now, from the relations (5.7), we see that the CHSH(n) value of the strategy
Ai,
Bjk,
4/)
is
(
(01 Ai 0 Bij + Ai 0 Bi + A3 D Bi0 - Aj 3 B)
F21<i<jsn
73
1|p)
=
so Aj, Bk, 1)
is an optimal CHSH(n) strategy. The reverse direction of Theorem 3.3
is proved.
5.3
Approximate intertwining operator construction
for CHSH(n) near-optimal strategies
The goal of this section is to prove Theorem 3.4. That is, given an arbitrary C-optimal
CHSH(n) strategy Aj, Bjk,
LV) on
strategy Ai, B5k, W) on C 2 rn/21
CdA 0 CdB , and the canonical optimal CHSH(n)
C3a
,
we want to show the existence of a non-zero
linear operator
T :C
2
fn'21
&
rn2
2
-+ Cd^ D CdB
with the properties
|(Ai 0 I)T - T(A 0O I)IIF < 12n2 VIITIIF
Vi
Vj # k
|| 1() BJk)T - T (I 0 Bjk)IIF < 17n2
IITIIF
We construct T explicitly:
T =
1
(n 1...
SM)10,1}"
Aj* ..
(j
Akn 0 I|)1
Ai- .. I
The motivation for this construction comes from the insight about the importance of
span {A31 ... An'
1) :(.
)
the space
and from the group averaging technique of constructing intertwining operators. Even
though here we are dealing with strategies for the CHSH(n) game, and do not explicitly have representations of finite groups, the relations on optimal and nearly-optimal
74
CHSH(n) strategies from Theorem 3.2 are strong enough that we can prove that the
T defined above behaves approximately like an intertwining operator with respect to
the observables of the two strategies.
The argument proceeds in the following steps:
1. We prove that the vectors
in) E {0, 1}"
: (01...
A- o 11|N)
J{Ze ...
coming from the canonical strategy are orthonormal.
2. From this, we derive that
lIThF
= 1, and so also T
$
0.
3. Next, we show that we can write
(Ai 0 I)T -T(Aj
-
& I) =
sign(iji,... ')Ai
Here the sign(i,
1
...
(
(AjA3
Aftl ... Afn
ji, ... Jn) notation
. A
I
... Z
10))('I (A
®I
(5.8)
has to do with the sign resulting from chang-
ing the order in a product of anti-commuting observables and will be defined in
detail later.
4. Similarly, we show we can write
1
(I & Bk)T - T(I 0 BkO) =
((A1
...
A n
Bki'P)
(,/2n ---... )M
E 0,1}"
-
+ ti,
i(+sign('i,,.. .j,
1)A ... Ai
In the place where there is t,
k)A'j
. .. A)
we take
75
...
A-k,' . ..
nA® jb)
a (Ai1< .w..
ae i (5.9)
+ if k < I and we take - if k > 1.
A Ai ... Akj
0 I14)
-
. . n},
for all (i
sign(iji,...
A
..
.
..
C {0,
in)
.A
.*..
1}
,
5. Next, we show that for all i E {1,.
kI)
< (6 + 4v/) n2\/ < 12n 2v/6 (5.10)
6. Similarly we show that for all k $ 1 E {1,. . . n}, for all (j 1 ...
A'~ 0BkII~)
An
.
A
-
3i( B1
+ sign(ji, ...
)
n
sign(j,....Jn
110)1
k) Ail1~*
*, 1)A"... Aj* ... A3
A~k
in)
E {0, 1}",
.An & Ib)
I
17 + 6,/2-
n2
/-
7n 2,/-, (5.11)
7. Finally, we combine all the previous steps to show that
Vi
Vj # k
II(Ai 0 I)T - T(Ai 0 I) |F < 12n2 fIITIIF
||(I O Bjk)T -T(I
BJk)IIF <
17n2 v ITIIF
as required for the proof of Theorem 3.4.
The seven subsections below are devoted to the detailed arguments for the seven
steps outlined above.
5.3.1
Orthonormal vectors
Here we aim to show that the vectors
{Zi ... AJ &Ib) : (J1. .jn) E {0, 1}n}
coming from the canonical strategy are orthonormal.
76
First, we reduce this to proving that
j)
nonzero (j1
one can take (ji, .. .j)
=
( A ...
.
.
A
I)
for each
n to show that given (ki ... k.n), (1i,. . .')
E
{0, 1},
(ki E li,... kn D 1n) and have
I
..
j ...
A-,-o I|4))
(
)
1n@I
is orthogonal to A'
Now, we prove that |)
(ji ...
in)
(1,
1) and the second case is all other situations.
...
to A'
E {0, 1}'. This works because we can use the anti-commutation
relations for the Ai, i = 1,.
Di
'l
j4) is orthogonal
...
Zn
0 I1) for each nonzero
e {, 1}. There are two cases: one case is if n is odd and (ji, . . in)
We consider the first case. For n odd, we have
n
0
i1
0
-
I
lA~= (-i)n
Therefore, we have
2Ln/2j
1
2.
[n/2]
n
i=1
and so
Ii)
2 . Ln/21
is orthogonal to A,'
...
An
2.2
11)
E
wj
Ln/2j
E
j=1
j=2Ln/2+1
2L-/2j
2-2Ln/2j
Y, j)
j=1
w|j
Ii) oi))
Ii)oli)
-
=
+
14')
j=2Ln/2] +1
0 I14) in the first case.
Next, we consider the second case. First, we look at the product
Aj1 . . . Aj. We
claim that there exists an index i such that
(5.12)
77
This is because when there are an even number of terms in the product Ail
...
AJ,
we can choose Ai to be one of the observables that appears in the product, and if
there are an odd number of terms, we can choose Ai to be one of the observables that
does not appear in the product. Next, we use the relation (5.12) to write
(4'IA
In
.
.
A1® Oy)
=(4'I
(oT) (A,' A ®I~) (Ai A) 4T)
(|(AiA'... An A) 0 (A. )2 . 1) =-(4'IAj. An) = '4)
and from here we obtain that
) is orthogonal to Ai1 ... A-7 0 11 ) in the second
case as well. This completes the proof that the vectors
{Al
3n) E
(j, ...
...Ai & I|N
0, 1}n
coming from the canonical strategy are orthonormal.
5.3.2
The Frobenius norm of T
Here we aim to prove that IITH|F = 1. This follows from the expression
|ITIIF=
TTT
for the Frobenius norm, combined with the expression
Zgn
Al . . Ai- 01I7P)( j (Zil ...
T =
(1-2-i-MG)10,1}
for T, combined with the fact that the vectors
{Aj1... A
0
I|n )
(ji... j'7 ) E {0, 1}}
78
t
are orthnormal, and combined with the fact that the vectors
I0) : (l ...
in) E {o, j1}
{A31..
A-'
1
all have unit norm.
To combine all these facts, we use the following lemma:
Lemma 5.1. Let
S
S
where the vectors |vi), i = 1,
ui )(viI
are orthnormal. Then,
... ,r
r
Proof.
(viIIvj)(ujI
Sst =
=
i=1 j=1
1
r
r
S
ui)(ui
and so
IISIIF
_ Tr ui)(uI I
r
El
Applying this lemma to the operator T, we conclude that
5.3.3
The expression for (Ai 0 I)T - T(Ai
1TH|F
-1
D I)
Here we aim to show the identity
-
(
sign(i,ji,.. .jn)Ajl ...
A
79
..
.A~n
II))(ipl
b)
(1.
.A!
I
-
(AiA-'...A in(gI
-
(Ai 0 I)T - T(Ao; I)= 1
Consider T(A 0 ):
1
T(A 0 I)=
A',...
- f(j,...-j.)E{0,j}"
1
|
An"Ip)
All~
J
(l.
)
n
(Ai 0 I)
Aj- t l)
..AI)((A &i) I(Al' ...
(ji---j.)C{0,j}"
We now use the anti-commutation relations for
A...
into the product
n.
Ai,
i = 1,. .n
to insert the
Ai
This possibly incurs a minus sign, depending on the
particular i and the particular (j1... Jn) E {0, 1}". We define sign(i, ji,... 3n) to be
such that
(Ai)(A
1
. . Aj-) = sign(i, ji, ... j,)A3 1 . . Aji'el... A3n
Using this, we get
T(A & I)
1
E
Al . .
A 3 no
IV)
sign(zi, j,7 . j.. )A 1'.
A jno 0
A j'*.***lkn
1)
110 )p/
Now we change the index of summation, and use
)
s i 1n11, ... Ji ... in) = sign(i, j, .. . ' (@ 1.. .j
to get
T(A0I) =
1
sign~i 1, . . n)A(1.. . j*l . . . An I@
(j, ...
j.)E{0,1}n
(NI (Ai .. . A ,- & I)
80
From here, the identity
(Ai & I)T - T(A 0D I)
1
SAi Ajl... Ai- 0 I|1
M
- sign(i, ji,..
A
...Ag0I?
...
))(ei (A{1
n
t
A!n
follows.
5.3.4
The expression for (I 0 Bi)T - T(I 0
&0l)
Here we aim to prove the identity
-
(10 Bk)T
T(I & kl)=
1
(j1.---3n)E{,1}"
ksign(i, .-..
+ sign (j1,...
n,
(Al ...
An 0 BklI|)
k) A31 . ..A(* ...Ain@ Il@
AI
(A3 I) {| 1 ...
1a~) Aj ...A3,*') ...Ai-
The argument is similar to the previous section. We consider T(I 0 B k).
T(10
&ki)
I)(I 0 Bk1)|)
..
(ii1... in)E{0,1}
1
A3,...
A'n- 0 1
)(
1
. .l2)(oI) 0
Ak +
t
P1
(v1..2){S1}
where +Ak is taken if k < 1 and -Ak is taken if k > 1.
Next, we use the anti-commutation relations to insert Ak and A, into the product
81
A.A-'-.
T(I
B
A-," ...
Ain
YS
0|)
(ii---.)E{0,j}"
.. jk)Z(1.
ni,)-
sign(*
A-k1,
A@...®Ai)
+ (sign(ji,. . . jr, l)Ajl ... AE . .
.
0I
))
We separate into two sums and change the index of summation in each and we get
1
1(
E
Bk1)
sign(ji,... j,
+ sign0l, ...in 11)AIil ...Aj"')'
...Ain
l
k)Ajl ... A(**E
... Ag 0 ID
)
T(I 0
.A
I14') (4I (A3.
& I)t
From here, the identity
1
1
SE
(Ajl
i72sign(ji, . .. J, k)Ajl ...
+ sign(ji,... In, l)All ... Ajl
.. . n A1 10ri
Aik(D ... Ag
.Ain o I 1 ))
114D
( Ail.. A
)
(I0 Bk1)T - T(I & Bk1)
)
follows.
5.3.5
The first error bound
Here, we aim to show that for all i E {1, ... n}, for all (ji ... J71) E {0, 1},
AAl . .. A.- 011,0) - sign(iji, . .. j)Ajl ... Aji . . An 01I4')
; (6 + 4x)n 2 V/<
82
12n 2 V
/
Sk,)
We get
(5.13)
The situation is the following: we would like to insert Ai into the product A31 ... Ain
as if the Aj, i
=
1,. . . n were anti-commuting. However, we don't know that Aj, i
1,.. .n are anti-commuting; all we know about the Aj, i
=
1,...n is that they are
part of an c-optimal CHSH(n) strategy.
The first step is to recognize that even though Aj, i
1, . . . n may not be anti-
=
commuting as operators, they nearly anti-commute in their action on the strategy
state I$). We prove the following:
Lemma 5.2. Let Aj, Bik,
4V)
be an c-optimal CHSH(n) strategy. Then,
0
AA + AjAj
2<
(1 + V)
2 n(n
_ 1)c
1 i<j<n
Proof. We recognize that the operators
AiA- 01+ I OBij
and
Ai - A
each have operator norm at most (1+
+0B
3
vi), by the triangle inequality.
Next, we see that
=
V2_
A4+
vf2-
I + 1 @ Bij
< (1 + V2)
83
I - I o Bij
V2_
I - I10 B
|,
)
Ai + A
$
2
and similarly,
AAj + AAj
2
+ ! (I
v'-2)BvI- Ijj
|p
1OB
Now we use the relation
A
A(e I'b) - I® miB
(
1<i<j<n
2n(n -
IBil/)
+
)E
from Theorem 3.2. We get
2 Aj
2Z
+ AjA
2
2
1<i<j<n
+ SIl@ I DBij|
A2
)
(I(+V2)2
(Ai<An
+ A
2
2
2n(n - 1)c
AI(9 )I Bjilo) 2) 5(1+V)
-
The lemma is proved.
Now we know that Aj, i = 1,...n almost anti-commute in their action on the
strategy state 10). This is a step forward, but still not enough for proving the bound
(5.13). To see why, consider a product like AiA 1 A 2 0 IJ4). We want to switch the
order of Ai and A 1 . We know that Ai and A 1 nearly anti-commute in their action on
I4), but we don't yet know that they nearly anti-commute in their action on A
2
&I014).
Fortunately, this difficulty can be circumvented: we know from Theorem 3.2 that,
84
for example, A 2 0
I)(I 0 (B 12 - B2 1))10)
I/2
1)
(B12- B 2 1)I$). This helps, because
(AiA 1 0 I)(A 2 0 1)1') ~ (AiA 1
1
(
=
(10
1
(B12 - B 21 ))(AiAl 0
I)|$)
and now we can switch the order of Ai and A 1 in their action on 10).
The preceding discussion shows that we can use the anti-commutation on 10)
(Lemma 5.2) to switch the order of a product of the Ai's acting on
I4'),
as long as we
can "get some of the Ai's out of the way", by replacing their action with the action
of an operator on the B side.
For reason of keeping the errors of approximation under control, we would want
the operators on the B side that we use to have operator norm 1. The operators
(Bi
Bjj) do not necessarily have operator norm 1, but fortunately this difficulty
can also be circumvented.
The discussion in the previous paragraphs motivates us to prove the following
lemma:
Lemma 5.3. Fix k. Then, there exists an I such that
Ak0 I
-
Bki + Bk
kI +B' |
I kB
where +BkI is taken if 1 > k and -Bkl
is taken if I < k. The notation
- Bkl + Bik
I Bkl+ BkI
means that we take all eigenvalues of the operator Bk1
positive ones to 1, the negative ones to -1,
normalized to 1.
85
+ Bik and normalize the
and, by convention, the eigenvalue 0 gets
Proof. The proof proceeds in two steps: first, we approximate Ak 0 I14) by I 0
Bkl Btk
VB
theBweI
1) and then we approximate I
byJ
bB
y 14'
*B
Bkl Blk4)
We prove the first step. We take the relation
+ B-
A i11)-IDBio
2
I@)Ai
- I ov/"2-
-j
n
01kb) -Io
B--j - B~jI, 2
; 2n(n - 1)E
)2
from Theorem 3.2. We focus only on those terms of the sum that contain Ak and we
get
B
Ak 09
I|)-
2
I®D Bk+Bjk
1
j=k+1
k-1
+E
A 0 17P)
-Jo
j +kjIp
V-22
)
4
2
< 2n(ri -1f
j=1
Pick the smallest of these (n - 1) terms. It satisfies
Bkl Blk
Ak 0 11') - I
This is how we approximate Ak 0 14') by I
2
4')
(5.14)
<2ne
BkJBlk
Next we focus on the second step. By Lemma 5.4 which we will prove below,
B+ @k) < I
I&
Bkj +Bj
k B/+B-9
-
so it suffices to give a bound on
For the bound on 11I 0
BkI + Blk
0
I
||I 0
BkIBlk + BlkBkI
2
1)
BkiBlk BlkBkI
BkBlk+BkBkl
I4')Hj
Ak 01 + Io
86
we observe that the operator
BkI+ B k
(5.15)
has operator norm at most (1 + -f), and so
BkIBlk + BlkBkl
2
(Ak
9
- I0
) Ak& II)
(1 +
B
k
I|@) -I®
I+10 I Bkl Blk A
Bk1 + B1
V2_
I
I (I +V)vr2)V-
'
=
Combining this with inequalities (5.14) and (5.15), we get that
Ak 0 I'0) - I
-)
< (2V + 2)B
B
El
as needed. The lemma is proved.
Next, we prove the missing link in the proof of Lemma 5.3, which has to do with
operators of the form
R S
observables.
and
S when
R, S are
1osrals
r k1
nR
v2 R
R+Sl
Lemma 5.4. Let R, S be two
1 observables on C'. Then,
1. The following operator identity holds:
R+S
R+S
jR+S|
2
-1
(RS+SR)
RS+SR
(21
+ 2
S+SR)
+2
2. The operator
2
R+S
R+ S
R+ S|)
<
RS+ SR
)
RS+ SR )
2
2
is positive semi-definite.
Iv),
R+S
v)
-R S
IR+SI IV')I
87
-
2
)
3. For any vector
(2,
RS+SR)
Proof. We first prove the operator identity.
+
We break up Cd into eigenspaces for the self-adjoint operator R + S. Since RS
SR = (R + S)2 - 21, these are also eigenspaces for the operator RS + SR, and so
also eigenspaces for the operator
RS+SR
RS+SR)
(21+
2,
2
+2 +2I+
~S+
2
RS+SR)
SR)
2
We will prove that the operator identity holds on each of the aforementioned eigenspaces.
Consider an eigenspace where R
+ S has eigenvalue A.
On this eigenspace, the operator
R+S 2
R+S|)
R+S
has eigenvalue
((signA)
-
1)2 ;
this holds in all the three cases A > 0, A < 0, A = 0.
.
The eigenvalue of (RS+SR) on this eigenspace is
The eigenvalue of
(
RS+SR
2,
-1
1 RS+ SR +2 fS+ SR)
2
I+ 2
+2
I+
RS +fSR)
2
2
is therfore
A22 2)2
1
2+ A+2
2
1+A 2 2 2
Next, we observe that
2+
A 2 -2
-+
22
1
21+2N
2 -2
2
= A
2
+2
+2 L
((s= nA)A
and that
2
2
(signA)A
)
A2 -2
88
1)2
(signA)A
+ 1)2
+ 1)2
and therefore,
A2
2)2
1
2+
(signA)A
+~f2 1+9
1)2
vf2
2
Next we use the above to conclude that the operators
R+S
v'2
R+S
~\R+S|
2
and
RS+SR>
RS+SR
2)
2
(21+
-1
RS+SR)
I
+2
RS+ SR
have the same eigenvalue on this eigenspace.
This argument holds for any eigenspace, and so the operator identity
R+S
v'-2
R S
R+S)
2
(RS +SR)
RS+SR
(21+
2
2
r+RS +SR
RS+SR)
2)
holds.
Next we prove the second part. We can see from the argument above that the
operator
RS+ SR +I+
(21
+ 2
has eigenvalues of the form
1
(signA)A
89
+ 1)2
_+
and they are all in (0, 1]. Therefore,
(R+S
R+S)2
R+S\)
-1
(RS +SR)
2
+RS+
2ISR 2
RS + SR)
1+ RS+SR)
2
2)
RS+ SR )
2
Finally, we observe that the third part follows directly from the second.
2
The
lemma is proved.
Recall that the goal of this section is to prove
A A. . . Ai -1I4)
-
sign(i,ji,. .. j)A
...
A
... Ai-
114)
< (6 + 4v2)n2v
and the overall strategy is to insert Ai into the product Ai' ...
A
-<
12n 2 VF
as if the Aj, i
=
1, . . .n were anti-commuting.
The results of the lemmas above have prepared the tools necessary for this goal.
Lemma 5.2 tells us that
AkAj 0 Ij|b) ~ -AAk 0 1')
with the error of approximation being at most (2v'2
+ 2)nvF. We call this apporoxi-
mation step an anticommutation switch. Lemma 5.3 tells us that
Ak®0 I')
where
BkL+Bk
ItBki+Bik
19
BkI + Bik
B1I + B1k|
is a suitable t1 observable acting on the B side, and the error of
2
approximation is at most (2v/2- + 2)v/n-f. We call this approximation step an AB90
switch.
The idea is to concatenate a number of these approximation steps to get the
relation (5.13). We present a procedure that goes from
Aj A3 ...Atn & I|$
to
sign(i, i,...
MaA31
...
Afi'') . ..An
I@
using at most n anti-commutator switches and 2n AB-switches. The procedure is the
following:
1. Start with AjA'1 ... A
0 IKn).
2. Switch all elements of the product Ai' ...
A
to the B side using the AB-
switches.
3. Repeat
(a) Switch the last observable on the B side back to the A side
(b) Anti-commute Ai and the newly switched observable
until Ai comes to its proper position.
4. Switch the observables still remaining on the B side back to the A side.
The total approximation error of this procedure is at most
n(2V2+ 2)nf + (2n)(2V2 + 2) /nV
91
< (6 + 4 V2)n 2 V < 12n2
The relation
Ai Aj ... A-
9 I))
... A 1 ... An®I|}
- sign(i, j 1 ,... j,)Aj
< (6 + 4-v/2)n 2 V/E < 12n 2
e
is proved.
5.3.6
The second error bound
The goal of this subsection is to prove that
A 311 ...
Ain
n & Bk1 I
k sign(j1 ,.. .in, k )Ajl .. . A3) ** .. . An- 0
+ sign(
a 1..
)A
Ajl*D ...
Al"
...
11}
& IV7)
17 +6,/2
n
E<17n2 fi
The argument is similar to the previous subsection.
By the triangle inequality, we have
+ sign
1
< A-,...Ak n
. .. jn,
+
1
I|A'' ... A
V22,z0,
1) A',.. Al*l . .. An
BkI|l})- Al
All' . .. Ai-Ak 0 11')
+
A3Y*) . . Ann
k sign J 1,..
-
sign(ji,
A 0 11')
... A3i
. ..
(j1,
-sig
92
Ip
kAk-+ AI
j, k)A'
.. .
1IV4)
0 I|
)
Ai . .. Aj-On Bri|7) -~v/22
n
... A
kE
... Ai-o I0)
, )A3 ... Ajl*e . . A30
I|,)
For the first term we have the following:
AJ, 0 Bkl|O) Ail ...
Aljl
+Ak+
k AI
jn k
Ab
...
n vf2-
IAt
0 110')
Ak + A,
=I @ BkllV)/) -
@ ~/)
< V2n(n - 1),
where we have used the inequalities in Theorem 3.2.
For the second term, we claim that
IAl1
..
. Ag Ak
0114') -
sign(ji,.. . j", k)A
.. .Aik
... A3oII4')
I
(6 + 4V))niE
The argument is similar to the argument in the previous section: we present a procedure that goes from
All . .. A-In Ak 0
1p
to
sign(ji,... jn, k)Aj1 ... A
l ... A
0 11')
using at most n anti-commutator switches and 2n AB-switches. The procedure is the
following:
1. Start with A'.
A
0i 1).
2. Repeat
(a) Anti-commute Ak and the next to last observable on the A side
(b) Move the newly switched observable to the B side
until Ak comes to its proper position.
3. Switch the observables still remaining on the B side back to the A side.
93
The third term is analyzed in the same manner and we get
A
Ai-Al 0 I14) - sign(ji,... j, l)Aj'
...
..
AjD... A
I
< (6
Combining all the preceding bounds, we get
Aj1 ... A
0 BWI10)
n~~~y
-
1
(f-
- sign(j,... jn, k)A'1'
.. A-4.**D .. . Ain (D I10)
1,-n
+sign(jil... In,
' 1)AI31 ...Aj'ED'...
AJn
I
1
1
(6 + 4 V)n 2 fi
(6+4V)n2
/
+
+
2n(n- 1)E
<
lo))
<
+6V2
7
2
2
as needed.
5.3.7
Putting everything together
The aim of this subsection is to put all the previous steps together and show that
Vi
Vj
$
k
| Ai 0 I)T - T(Ai 0 I)IF < 12n2 V ITIF
||(I0 Bjk)T - T(I 0 B[j)|IF < 17n 2
I TF
~i
thereby completing the proof of Theorem 3.4.
We start with the first inequality. We know from subsection 5.3.3 that
(Ai 0 I)T - T( Ai
I) = 1
S
v'2n(j1...-j.)({0,1}"
-
sigr(i,ji,...jn)Ai' . .
SAj Aj'...
Alin
Afil ... Ao I1))(
94
I()1,0
(1
...
An
n
t
We know from subsection 5.3.1 that the vectors
Ai . . AIl) : ()1... in) E{0, 1}"
are orthonormal.
We also know from subsection 5.3.5 that for all i, for all (ji ...
AjAj1 ...
Ag 0 I/)
-
sign(i, j, ...
.. . Ag
'...
j)
E
{0,
1}"
0 Ij/)
; (6 + 4v 2)n2 4 < 12n 2 4
We combine these facts using Lemma 5.1 and we get that for all i,
1| (Ai 0 I)T - T(Aj 0 I)|IF < 12n 2 V
=
22 4IITIF
F/ri
where in the last step we have used the fact that T was chosen so that
IITIIF
= 1
(subsection 5.3.2).
In a similar manner, we take the results of subsections 5.3.1, 5.3.2, 5.3.4, and 5.3.6
and apply Lemma 5.1 and get that for all j # k
I1, .... n}
11(10 Bjk)T - T(I 0 Bjk)IF < 17n2V/IITIIF
The proof of Theorem 3.4 is complete.
95
96
Chapter 6
Conclusion and open problems
In this thesis, we first derived a general result about non-local XOR games:
for
every non-local XOR game, there exists a set of relations such that optimal quantum
strategies satisfy the relations and nearly-optimal quantum strategies approximately
satisfy the relations. Then, we focused on the CHSH(n) XOR games, and derived the
structure of their optimal and nearly-optimal quantum strategies.
One possible direction for future work is whether structure results like the one for
CHSH(n) near-optimal quantum strategies can be proved for other XOR games. The
opinion of this author is that it may be possible to generalize the arguments above to
a few XOR games other than CHSH(n) . The CHSH(n) games have a very regular
structure, and the arguments above make heavy use of this structure; however, it
may be possible to construct an argument of this form, or another form altogether,
for other XOR games with less regular structure.
Another possible direction for future work is whether the CHSH(n) games can
be used in the context of quantum information processing with untrusted black-box
devices. The CHSH game, the first member of the CHSH(n) family, has already been
used in protocols for doing information processing with untrusted devices. Whether
all the CHSH(n) games can be used, and which of the CHSH(n) games gives protocols
97
with the best parameters, are two questions that are still open.
98
Bibliography
[1] Charles H Bennett, Gilles Brassard, Claude Cr6peau, Richard Jozsa, Asher Peres,
and William K Wootters. Teleporting an unknown quantum state via dual classical and einstein-podolsky-rosen channels. Physical review letters, 70(13):1895,
1993.
[2] Nicolas Brunner, Daniel Cavalcanti, Stefano Pironio, Valerio Scarani, and
Stephanie Weiner.
Bell nonlocality.
Reviews of Modern Physics, 86(2):419,
2014.
[3] Harry Buhrman, Richard Cleve, Serge Massar, and Ronald de Wolf. Nonlocality
and communication complexity. Reviews of modern physics, 82(1):665, 2010.
[4] John F Clauser, Michael A Horne, Abner Shimony, and Richard A Holt. Proposed experiment to test local hidden-variable theories. Physical review letters,
23:880-884, 1969.
[5] John F Clauser and Abner Shimony.
Bell's theorem. experimental tests and
implications. Reports on Progress in Physics, 41(12):1881, 1978.
[6] Richard Cleve, Peter Hoyer, Benjamin Toner, and John Watrous. Consequences
and limits of nonlocal strategies. In Computational Complexity, 2004. Proceedings. 19th IEEE Annual Conference on, pages 236-249. IEEE, 2004.
[7] Pavel Etingof.
Introduction to representation theory.
Lecture Notes avail-
able online at http://ocw.mit.edu/courses/mathematics/18-712-introduction-torepresentation-theory-fall-2010/lecture-notes/.
99
[81 William Fulton and Joe Harris.
Representation theory, volume 129. Springer
Science & Business Media, 1991.
[9] Nicolas Gisin, Gr6goire Ribordy, Wolfgang Tittel, and Hugo Zbinden. Quantum
cryptography. Reviews of modern physics, 74(1):145, 2002.
[10] Daniel Gottesman and Isaac L Chuang. Demonstrating the viability of universal
quantum computation using teleportation and single-qubit operations. Nature,
402(6760):390-393, 1999.
[111 L. Lovasz. Semi-definite programs and combinatorial optimization. Lecture Notes
available online at http://www.cs.elte.hu/ lovasz/semidef.ps.
[12] Dominic Mayers and Andrew Yao. Quantum cryptography with imperfect apparatus. In Foundations of Computer Science, 1998. Proceedings. 39th Annual
Symposium on, pages 503-509. IEEE, 1998.
[13] Dominic Mayers and Andrew Yao.
Self testing quantum apparatus.
arXiv
preprint quant-ph/0307205, 2003.
[14] Matthew McKague, Tzyh Haur Yang, and Valerio Scarani. Robust self-testing of
the singlet. Journal of Physics A: Mathematical and Theoretical, 45(45):455304,
2012.
[15] Carl A Miller and Yaoyun Shi. Optimal robust quantum self-testing by binary
nonlocal xor games. arXiv preprint arXiv:1207.1819, 2012.
[16] Michael A Nielsen and Isaac L Chuang.
Quantum computation and quantum
information. Cambridge university press, 2010.
[171 Ben W Reichardt, Falk Unger, and Umesh Vazirani.
quantum system:
A classical leash for a
Command of quantum systems via rigidity of chsh games.
arXiv preprint arXiv:1209.0448, 2012.
[181 William Slofstra. Lower bounds on the entanglement needed to play xor non-local
games. Journal of Mathematical Physics, 52(10):102202, 2011.
100
[191 Boris S Tsirel'son. Quantum analogues of the bell inequalities. the case of two spatially separated domains. Journal of Soviet Mathematics, 36(4):557-570, 1987.
1201 Lieven Vandenberghe and Stephen Boyd.
Semidefinite programming.
SIAM
review, 38(1):49-95, 1996.
[211 Umesh Vazirani and Thomas Vidick.
Fully device independent quantum key
distribution. arXiv preprint arXiv:1210.1810, 2(11), 2012.
[221 Reinhard F Werner and Michael M Wolf. Bell inequalities and entanglement.
arXiv preprint quant-ph//0107093, 2001.
101