>> Konstantin Makarychev: It's a great pleasure to have Yuan Zhou
here. Yuan is a student at Carnegie Mellon University. And he's
graduating this year. Yuan's done some excellent work on approximation
algorithms and today he will tell us about one of his recent results.
>> Yuan Zhou: I'm going to talk about the hardness of robust graph
isomorphism, Lasserre gaps and asymmetry of random graphs. This is
joint work with Ryan O'Donnell, John Wright, and Chenggang Wu. And
most of the slides here were made by my co-author John Wright.
So there are three terms in the title; it's quite a long title:
robust graph isomorphism, Lasserre gaps, and asymmetry of random
graphs. I'll start with the definition of the first term, robust graph
isomorphism, and I'll motivate the definition with this example. Say
we're given this graph, which is yesterday's Facebook graph. The nodes
represent the users and the edges represent the friend relationships.
Here is me and here is my co-author John Wright. And here we are given
another copy of yesterday's Facebook graph, somehow scrambled. Now if
we put these two graphs as input into a graph isomorphism algorithm A,
we expect such an algorithm to tell us that yes, these two graphs are
the same graph, and the algorithm should also unscramble the second
graph.
But let's look at another example. We are still given yesterday's
Facebook graph as the first graph, and here we are given today's
Facebook graph. We see that it's almost the same graph except for a
few differences: say, me and John became friends, and two other users
unfriended each other. Here are the differences. Now we also scramble
this second graph and put them as inputs into the graph isomorphism
algorithm A. Now A will tell us that these are different graphs, and
it just terminates. So this is good; it is what we should expect from
a graph isomorphism algorithm. But it's a little unsatisfactory,
because they're almost the same graphs. So the question now is: can we
detect this almost-isomorphism, and if so, can we unscramble the second
graph?
So this motivates us to define the problem of robust graph isomorphism.
At a high level: given two almost isomorphic graphs, the goal is to
find the best isomorphism between them, or something pretty close to
it.
I'd like to define this in more detail, but let's start with some
basic definitions, such as isomorphism. An isomorphism is just a
bijection pi between the vertex sets of the two graphs, V(G) and V(H).
We say pi is an isomorphism if for every edge in the graph G, after we
apply the permutation (bijection) pi, the resulting edge is an edge in
the graph H, and vice versa.
So, for example, we have these two graphs and we have this permutation
pi. We see that this edge is mapped to this edge, and so on. Now
let's modify the definition a little bit. Assuming first that the two
graphs have the same number of edges, it is equivalent to say that pi
is an isomorphism if and only if, when we choose a uniformly random
edge (u, v) of G and apply the permutation pi, the resulting edge is
an edge of the graph H with probability 1. Nothing different.
But now it motivates us to define something like being close to an
isomorphism. So we say pi is an alpha-isomorphism if this probability
is at least alpha. So still like --
>>: [indiscernible].
>> Yuan Zhou: Yes. So here we assume the two graphs have the same
number of edges, or at least that the numbers are very close. Because
we really care about whether two graphs are very close; if the numbers
of edges differ a lot, the graphs are already very far apart. So now
let's take a look at this isomorphism pi. We see it's truly a
1-isomorphism. And for a different mapping, this pi, we can just do
simple counting and see that this is a one-half isomorphism.
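To make this quantity concrete, here is a minimal sketch in Python; the edge-list representation and the toy graphs are my own illustration, not anything from the talk.

```python
# A minimal sketch of the quantity just defined: the "alpha" of a bijection pi
# is the probability that a uniformly random edge (u, v) of G is mapped by pi
# to an edge of H. Graphs are hypothetical toy inputs given as edge lists.
def isomorphism_fraction(edges_g, edges_h, pi):
    """Fraction of G's edges preserved by the vertex bijection pi (a dict)."""
    edge_set_h = {frozenset(e) for e in edges_h}
    preserved = sum(frozenset((pi[u], pi[v])) in edge_set_h for u, v in edges_g)
    return preserved / len(edges_g)

# Toy example: a triangle versus a relabeled triangle.
G = [(0, 1), (1, 2), (2, 0)]
H = [(10, 11), (11, 12), (12, 10)]
pi = {0: 10, 1: 11, 2: 12}
print(isomorphism_fraction(G, H, pi))  # 1.0, so pi is a 1-isomorphism
```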
So a very simple fact: if two graphs have a beta-isomorphism, then we
say they're beta-isomorphic. With this definition we can introduce the
problem of approximate graph isomorphism. That is, say we have two
constants c and s with c greater than s; the (c, s)-approximate graph
isomorphism problem asks us to distinguish whether G and H are
c-isomorphic or not s-isomorphic. A simple fact is that
(1, s)-approximate graph isomorphism is no harder than normal graph
isomorphism, because normal graph isomorphism is (1, 1)-approximate
graph isomorphism. If we lower the completeness factor from 1 to
1 minus epsilon, it's not very clear how hard this problem is, and this
is where the robust graph isomorphism problem comes in. For robust
graph isomorphism, we're given two graphs G and H which are very close
to being isomorphic, say (1 minus epsilon)-isomorphic, and we want the
algorithm to output an isomorphism which is a (1 minus
r(epsilon))-isomorphism, where r(epsilon) goes to 0 as epsilon goes to
0. So this means that if they're really close to being isomorphic, we
want to certify that they're very close to being isomorphic.
This kind of robustness of algorithms was studied before, for
constraint satisfaction problems: we proposed a conjecture that
characterizes all the constraint satisfaction problems that have a
robust algorithm, and it was later confirmed by Barto and Kozik. We
also introduced this robust graph isomorphism problem in another paper,
where we gave a robust graph isomorphism algorithm for trees.
So a very interesting question is: for other classes of graphs, such as
planar graphs, do they also have robust graph isomorphism algorithms?
Here is a brief history of the approximate graph isomorphism problem.
It is known that when the two graphs are dense, say when the number of
edges is Omega(n squared), then this problem can be well approximated.
But in this talk we're mostly interested in sparse graphs, say where
the number of edges is linear. They also showed some
hardness-of-approximation results for some variants of approximate
graph isomorphism, but in their problem the vertices have colors, which
makes it easier to show hardness results. Here we deal with graphs
without colors. And the exact graph isomorphism problem is famously
not known to be NP-complete, and there's some evidence that it is not
NP-complete. So now the question is: what about the robust graph
isomorphism problem?
And this is one of our main results: we prove that, assuming Feige's
random 3XOR hypothesis, there's no poly-time algorithm for robust graph
isomorphism.
More technically, we show that there is a constant epsilon_0 such that
for every positive constant epsilon, no poly-time algorithm can
distinguish between being (1 minus epsilon)-isomorphic and not being
(1 minus epsilon_0)-isomorphic. In other words, the robust graph
isomorphism problem is hard.
And actually, after we submitted our paper, we also got a better
result: instead of assuming the random 3XOR hypothesis, we can just
assume RP does not equal NP and get the same result.
So this is our first result. Now let me introduce our second result;
but before that, let's look at past algorithms for the normal graph
isomorphism problem. The first algorithm is just brute-force search;
it takes n factorial time. And there is a pretty famous algorithmic
framework called the Weisfeiler-Lehman algorithm, which is a lifted
version of the color refinement algorithm.
It also takes roughly n factorial time in the worst case, but it is
quite useful in the sense that later Babai and Luks combined this
algorithm with other group theory techniques and were able to bring the
running time down to 2 to the square root of n log n. The
Weisfeiler-Lehman algorithm is a pretty well-known heuristic for the
graph isomorphism problem. It has a parameter k, and for a fixed k it
roughly takes time n to the k; when k is bigger it takes more time, but
it is also more powerful.
And it is quite interesting to us, as people in the optimization
community, that recent work showed this Weisfeiler-Lehman algorithm is
actually almost the same, in power, as the level-k Sherali-Adams linear
programming relaxation. This is quite interesting because it shows
that Weisfeiler-Lehman and Sherali-Adams have almost the same power in
terms of graph isomorphism. And the Sherali-Adams LPs are usually
regarded as super LPs -- they're very strong.
>>: So graph isomorphism is mostly [indiscernible].
>> Yuan Zhou: Yes, the feasibility problem. You just write the LP and
check whether it is feasible. So indeed, on the other side, in the
graph isomorphism community, people once speculated that this
Weisfeiler-Lehman -- or equivalently Sherali-Adams -- algorithm could
solve graph isomorphism even for a very, very small k, on the order of
log n. But this conjecture was later refuted by Cai, Fürer, and
Immerman, who showed that for some graphs, in order to prove that the
two graphs are not isomorphic, you actually need k to be of order
Omega(n) for Weisfeiler-Lehman / Sherali-Adams.
Okay. But back to our Sherali-Adams view. Sherali-Adams is a very
strong LP relaxation, but we can still ask questions like: can we use
even stronger relaxation tools, such as SDPs -- Lasserre, or sum of
squares -- which are super duper SDPs?
Okay. So let me first introduce our result. In this work we showed
that using strong SDP relaxation techniques actually does not help. We
show there exist graphs that are very far from being isomorphic --
they're not even .999-isomorphic -- but the Lasserre hierarchy thinks
they're isomorphic. This is our second result.
>>: What is the definition of 1 minus epsilon isomorphic?
>> Yuan Zhou: Just like -- there exists some permutation of the
vertices such that if you permute the vertices of the first graph, a
1 minus epsilon fraction of the edges of the first graph overlap with
the edges of the second graph. And we assume the two graphs have the
same number of edges.
>>: The epsilon_0 in your result -- are there any algorithmic lower
bounds on epsilon_0? Do you know whether for some values you can
distinguish 1 minus a small epsilon from something higher --
>> Yuan Zhou: That's a very good question. I suspect the true answer
might be like (1, delta) for every constant delta, but we don't know
how to prove it. So I'll mention that in the --
>>: Even (1, delta)?
>> Yuan Zhou: For Lasserre. But for hardness it's like (1 minus
epsilon, 1 minus epsilon_0) there.
>>: But you don't get any hardness under any other conditions? These
aren't giving any hardness, essentially.
>> Yuan Zhou: Sorry?
>>: This does not give any hardness for graph isomorphism?
>> Yuan Zhou: No. Our first result gave hardness for robust graph
isomorphism -- the previous one. But this one only speaks about the
Lasserre hierarchy: not only does it fail to solve exact graph
isomorphism, it fails significantly.
So in terms of how we prove it -- in one slide -- we actually look at
the previous work by Cai, Fürer, and Immerman, which shows that
Sherali-Adams cannot distinguish between two non-isomorphic graphs.
Their work is basically a reduction from a 3XOR instance.
So we use this observation and use random 3XOR: it was known, by
Schoenebeck, that random 3XOR is hard for Lasserre, and we use this
reduction.
But there are several new ideas we need to introduce.
Okay.
So --
>>: For hardness, you use --
>> Yuan Zhou: Random 3XOR. The two results basically use the same
proof. In that sense, let me just try to show the proof of our first
theorem: assuming the random 3XOR hypothesis, there is no
polynomial-time algorithm for robust graph isomorphism.
>>: The definition I just missed -- what's the definition of
robustness?
>> Yuan Zhou: Robust graph isomorphism just says that you are given
two graphs that are really, really close, and the algorithmic task is
to find a permutation that certifies this. More technically: if the
graphs are (1 minus epsilon)-close, the output of your algorithm should
be an isomorphism which is a (1 minus f(epsilon))-isomorphism, such
that f(epsilon) goes to 0 as epsilon goes to 0. Think of f(epsilon) as
something like the square root of epsilon, or even 1 over log(1 over
epsilon) -- something that goes to 0 as epsilon goes to 0.
>>: Does it give you anything for the regular graph isomorphism
problem, this result?
>> Yuan Zhou: Regular graphs, strongly regular graphs?
>>: Does it, for example, say that graph isomorphism is NP-hard, or not
NP-hard -- does it give anything?
>> Yuan Zhou: No, no. Okay.
>>: [indiscernible].
>>: Yes [indiscernible] [laughter].
>>: For that problem.
>> Yuan Zhou: Okay. So here is our reduction. One thing about Cai,
Fürer, and Immerman is that their gap is really, really small: their
two graphs differ by only one edge. We need to make them a constant
fraction apart.
Our proof is a reduction from 3XOR. Here is a 3XOR instance: we have
all these linear equations, each linear equation is over arithmetic
modulo 2, and each involves exactly three variables. For every
equation, the three related variables have four possible ways to
satisfy the equation. For example, these are the four ways for this
equation, and these are the four ways for the other equation, and so on
and so forth. The goal is to find an assignment satisfying as many
equations as possible. This is easy when the instance is satisfiable:
we can just use Gaussian elimination, as in the sketch below.
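As an aside, that Gaussian-elimination step is easy to make concrete. Here is a minimal sketch over GF(2); the bitmask encoding of equations is my own choice for the sketch, not anything from the talk.

```python
# Solve a satisfiable XOR system over GF(2) in polynomial time. Each row is an
# (n+1)-bit integer: n coefficient bits plus one right-hand-side bit.
def solve_xor_gf2(equations, n):
    """equations: list of (set_of_variable_indices, rhs_bit).
    Returns a satisfying assignment as a list of bits, or None if unsatisfiable."""
    rows = [sum(1 << i for i in vars_) | (rhs << n) for vars_, rhs in equations]
    pivots = {}  # pivot column -> reduced row with its leading 1 in that column
    for row in rows:
        for col in range(n):
            if not (row >> col) & 1:
                continue
            if col in pivots:
                row ^= pivots[col]  # eliminate the leading 1 in this column
            else:
                pivots[col] = row
                break
        else:
            if row:  # all coefficients cancelled but rhs is 1: contradiction
                return None
    assignment = [0] * n  # free variables default to 0
    for col in sorted(pivots, reverse=True):  # back-substitution
        row, rhs = pivots[col], (pivots[col] >> n) & 1
        for c in range(col + 1, n):
            if (row >> c) & 1:
                rhs ^= assignment[c]
        assignment[col] = rhs
    return assignment

# x0 + x1 + x2 = 1 and x2 + x4 + x6 = 0 (mod 2), over 7 variables.
print(solve_xor_gf2([({0, 1, 2}, 1), ({2, 4, 6}, 0)], n=7))
```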
But once it is only close to being satisfiable -- say
(1 minus epsilon)-satisfiable -- then a theorem by Håstad says that we
cannot even find a solution that satisfies a half plus epsilon fraction
of the equations.
In other words, we cannot distinguish whether the instance is almost
satisfiable or far from being satisfiable. And we want to use this to
prove that graph isomorphism also has this style of hardness.
Okay. So ideally we want this reduction: an almost satisfiable 3XOR
instance is reduced to a pair of almost isomorphic graphs, and if the
instance is far from being satisfiable, then we get two graphs that are
far from being isomorphic. The first step is actually quite easy, but
for the second step we currently don't know how to do it in general.
We only know how to make it work for most of the far-from-satisfiable
3XOR instances, by which I mean random instances. So here is the
definition of random 3XOR. Say the random 3XOR instance has n
variables and m equations, and the m equations are chosen uniformly
from the n choose 3 possible triples of the n variables. Say we have
the first triple x1, x2, x3, and the second being x3, x5, x7, and so on
and so forth. Now we write the equations like this, but we leave the
right-hand sides blank for now. To decide the right-hand sides, we
just flip an unbiased coin for each equation, independently.
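The sampling procedure just described is short enough to write down. A sketch, assuming the natural representation of an instance as a list of (triple, right-hand side) pairs:

```python
# Sample a random 3XOR instance: m triples drawn uniformly from the C(n, 3)
# possibilities, each with an independent fair-coin right-hand side.
import itertools
import random

def random_3xor(n, m, seed=None):
    rng = random.Random(seed)
    # Enumerating all C(n, 3) triples is fine for a small-n sketch.
    triples = list(itertools.combinations(range(n), 3))
    return [(rng.choice(triples), rng.randint(0, 1)) for _ in range(m)]

instance = random_3xor(n=10, m=50, seed=0)
print(instance[0])  # e.g. ((1, 6, 9), 0): the equation x1 + x6 + x9 = 0 (mod 2)
```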
It is quite easy to show that as long as the number of equations is
pretty large -- much bigger than n -- then with very high probability
every assignment satisfies at most a 50-plus-small-epsilon percent
fraction of the equations.
Feige's random 3XOR hypothesis says that there's no poly-time algorithm
that can distinguish between an almost satisfiable 3XOR instance and a
random 3XOR instance, since a random 3XOR instance is pretty far from
being satisfiable.
This is quite well believed; it's a standard complexity assumption that
is widely used in some other results, and the current best algorithm
for this problem takes time 2 to the n over log n -- almost exponential
time. So now we assume Feige's random 3XOR hypothesis, and we want a
reduction from 3XOR instances such that an almost satisfiable instance
goes to an almost isomorphic pair of graphs, and a random 3XOR instance
-- which is far from being satisfiable -- goes to a pair of graphs that
are far from being isomorphic, with high probability. So now we define
our reduction. At a high level, we want to define a reduction function
Graph(.) that maps a 3XOR instance to a graph. Suppose we have such a
function; we actually need to produce two graphs. What we do is: we
start from the 3XOR instance I, and we make a very similar copy of I --
call it I_0 -- where we preserve every equation but change the
right-hand sides to all zeros. We use I_0 because this instance is
always satisfiable by the trivial all-zeros assignment.
And supposing we have this reduction for a single instance, our pair of
graphs is just Graph(I) and Graph(I_0).
>>: What do you mean, Graph(I)?
>> Yuan Zhou: Graph(.) is the reduction function; I'll define it on
the next slide. Okay. So now I'm going to define this Graph function.
I first define some gadgets which work for a single equation. We call
this one the 0-gadget because it works for an equation with right-hand
side 0. For right-hand side 0, these are all the assignments which
satisfy the equation, and in all these assignments there is an even
number of ones. So in my gadget I have six vertices in three groups;
we call them the variable blobs, or variable clouds, for x, y and z.
In each cloud there are two vertices, one for each assignment of the
variable, 0 and 1. And we also have the equation vertices -- there are
four of them, each one corresponding to a satisfying assignment of the
equation. Now I connect the edges in the obvious way: from every
equation vertex I connect three edges out, one to each variable blob,
to the vertex with the corresponding assignment. So for this one,
because x, y, z are all mapped to zeros, I connect it to x0, y0 and z0;
this one I connect to x0, y1 and z1; and so on and so forth. Finally I
get some graph like this.
This is a little messy, so I'm going to abstract it this way: this big
cloud represents the equation vertices for this equation, and these
three smaller clouds represent the variable vertices. Okay.
Similarly, I define the 1-gadget, which works for an equation with
right-hand side 1: I similarly define the equation and variable
vertices, connect the corresponding edges, and abstract it in the same
way. With both the 1-gadget and the 0-gadget, when I zoom out they
look the same; the only difference is how I connect the edges between
the equation vertices and the variable vertices.
So now, finally, I'm ready to define this Graph function. In the
resulting graph I will have an equation cloud for each equation; in
each cloud I have four vertices corresponding to the four satisfying
assignments, no matter whether the right-hand side is 0 or 1. And for
each variable I introduce a variable cloud, which contains two
vertices. And I just apply the gadgets in the very obvious way: for
each equation with right-hand side 0, I put the 0-gadget here; and for
this one, where the right-hand side is 1, I put the 1-gadget here; and
so on and so forth. So finally I have this graph; a sketch of the
whole construction follows.
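Here is a sketch of the Graph(.) reduction as just described. The vertex naming scheme -- ("eq", equation index, satisfying assignment) and ("var", variable, bit) -- is my own encoding for the sketch, not the paper's.

```python
# Map a 3XOR instance [((x, y, z), rhs), ...] to the gadget graph: a two-vertex
# cloud per variable, a four-vertex cloud per equation (one vertex for each
# satisfying assignment), and an edge from each equation vertex to the three
# variable vertices it sets.
import itertools

def graph_of(instance):
    edges = []
    for eq_index, ((x, y, z), rhs) in enumerate(instance):
        for bits in itertools.product((0, 1), repeat=3):
            if sum(bits) % 2 != rhs:
                continue  # keep only the four satisfying assignments
            eq_vertex = ("eq", eq_index, bits)
            for var, bit in zip((x, y, z), bits):
                edges.append((eq_vertex, ("var", var, bit)))
    return edges

def zeroed(instance):
    """The companion instance I_0: same triples, all right-hand sides zero."""
    return [(triple, 0) for triple, _ in instance]

# The reduction's output pair: (Graph(I), Graph(I_0)).
I = [((0, 1, 2), 1), ((2, 4, 6), 0)]
G, H = graph_of(I), graph_of(zeroed(I))
```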
Now take a step back. Our final reduction: because I need two graphs,
we take both the original instance I and the zeroed instance I_0, apply
the Graph function to both instances, and get the two graphs G and H.
So this defines our reduction. Any questions? Now we need to show the
completeness case. The completeness case is pretty easy: I need to
show that for an almost satisfiable 3XOR instance, the two graphs are
almost isomorphic. For simplicity I'll show it for a satisfiable 3XOR
instance and show that the graphs are isomorphic. Let's assume we have
an assignment F which is a satisfying assignment for the instance I,
and let us use this F to construct an isomorphism pi between G and H.
So this is how we construct the isomorphism.
For each variable blob, the two vertices are mapped to the
corresponding two vertices on the other side if this variable is
assigned 0. That is, if x1 is assigned 0, pi maps x1's 0-vertex to the
0-vertex and its 1-vertex to the 1-vertex. If x2 is assigned 1, we
swap the mapping: I map the 0-vertex to the 1-vertex and the 1-vertex
to the 0-vertex, and so on. Now, how do we map the equation vertices
here to the corresponding equation vertices there? We need a simple
observation. For the second graph, all the gadgets we apply are
0-gadgets, because we zeroed out all the right-hand sides. Now if this
equation indeed has right-hand side 0, then for this 0-gadget, and for
every even assignment (because F satisfies the equation), there is an
isomorphism between these equation vertices and those equation
vertices, given how the bottom variable vertices are mapped.
Similarly, if this is a 1-gadget, for every odd assignment there is
such an isomorphism. Let me just demonstrate it with one example. Say
on the right-hand side we have a 0-gadget, and here we have a 1-gadget,
and we choose this assignment to x, y, z: pi maps 0 to 0 and 1 to 1 for
x, because x is assigned 0, and likewise for y. For z, it maps 0 to 1
and 1 to 0, because z is assigned 1. And now we see there is this
isomorphism for the equation vertices. Let me just check it: because
pi flips only z, we can check that this 001 vertex is mapped to 000 on
the right-hand side, and we see all three of its edges are preserved.
We can also check, for this vertex to this vertex, that all three edges
are preserved, and so on. So you can just trust me that there is
always a way to map the equation vertices such that all the edges are
preserved. So we can construct such a pi, which is indeed an
isomorphism between G and H.
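Here is a sketch of this completeness construction, reusing the hypothetical vertex encoding from the graph_of() sketch above; the helper names are mine.

```python
# Given a satisfying assignment F of I, build the isomorphism pi from
# Graph(I) to Graph(zeroed(I)).
import itertools

def pi_from_assignment(instance, F):
    """F: dict variable -> bit, satisfying every equation of the instance."""
    pi = {}
    for eq_index, ((x, y, z), rhs) in enumerate(instance):
        # Since F satisfies the equation, shift has parity rhs.
        shift = (F[x], F[y], F[z])
        for var in (x, y, z):
            for bit in (0, 1):
                # Swap the two vertices of a variable cloud exactly when F(var) = 1.
                pi[("var", var, bit)] = ("var", var, bit ^ F[var])
        for bits in itertools.product((0, 1), repeat=3):
            if sum(bits) % 2 != rhs:
                continue
            # XOR-ing by shift turns a satisfying assignment of "... = rhs" into
            # one of "... = 0", so each equation vertex lands on a vertex that
            # exists in the 0-gadget, and every gadget edge is preserved.
            image = tuple(b ^ s for b, s in zip(bits, shift))
            pi[("eq", eq_index, bits)] = ("eq", eq_index, image)
    return pi
```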
Now we come to the soundness: we want to show that for a random 3XOR
instance, the two graphs are far from being isomorphic. And I'll cheat
during most of this part -- what I say might not be exactly correct,
but it can be corrected. So we want to show that for a random 3XOR
instance, with high probability our reduction produces two graphs that
are far from being isomorphic. We prove the contrapositive: whenever
the two graphs are almost isomorphic, we can decode an assignment which
almost satisfies the instance I. Now let's see what must be true about
pi. Indeed, most pis can go crazy: a pi can map some equation vertex
to some variable vertex, it can map some variable vertex to some
equation vertex, and it can also map some equation vertex to an
equation vertex which does not belong to the same equation. We don't
want this to happen. And you can believe me that after doing some
tricks, if pi is very close to being an isomorphism, then none of this
will happen.
What we can guarantee is almost this: pi maps all the equation vertices
in one blob to all the equation vertices in some blob. They might not
correspond to the same equation, but the map must take one whole blob
to another blob. The same is also true for the variable blobs. Okay.
But this is the scenario we actually dream of: not only is what I just
said true, but pi indeed maps the blob of equation one to the blob of
equation one in the other graph -- I mean, it maps corresponding blobs
to each other. You can believe me that if pi really looks like this,
the rest of the proof goes through really easily: we can just decode
the assignment for each xi by whether its vertices are mapped from 0 to
0 or from 0 to 1. But sometimes this fails. When does it fail? A
simple answer is that it fails when the equation graph has lots of
symmetry. Take this example: say the first equation has x1, x2, x3,
the second equation has x3, x4, x5, and the third one has x5, x6, x7.
In this case, what do G and H look like? At a zoomed-out level, they
look like this, right? Now a pi could just map this equation 1 to
equation 2, mapping x1, x2, x3 to x3, x4, x5, and we see that all these
edges may still be preserved. Also, equation 2 can be mapped to
equation 3 by mapping x3 to x5 and x4 to x6, and so on. So you can
imagine such a case might happen while pi is still almost an
isomorphism, or even exactly an isomorphism. We don't want this case,
of course, because variable x3 is mapped to x5 and we don't know how to
decode it. All right. So this is not what we wanted; but since this
graph is produced randomly, we want to argue that random graphs usually
don't have this kind of symmetry. Then we usually get our dream
scenario, and the proof goes through. So this is the high level of the
soundness proof.
Let me now say a little bit about our last topic, which is robust
asymmetry of random graphs. A symmetric graph is just a graph where
you can rearrange the vertices and get back the same graph. So all of
these are symmetric graphs -- they're obviously symmetric -- while this
one is not symmetric. Formally, we say a symmetric graph is a graph
with a nontrivial automorphism. By automorphism, I mean an isomorphism
between a graph and itself; the trivial automorphism is just the
identity mapping. So a symmetric graph is a graph which has an
automorphism different from the trivial identity mapping. The
asymmetry properties of random graphs were studied a long time ago by
Erdős and Rényi, who showed that a random graph from the G(n, p) model
is asymmetric with high probability when p is between log n / n and
1 minus log n / n.
This was also shown for random d-regular graphs. But in our case, it's
a random graph with exactly m edges -- and actually they're
hypergraphs. These are technical issues we can deal with. The main
problem is that we want something more than the asymmetry property: we
want robust asymmetry. So now we define a permutation pi to be an
alpha-automorphism just as we defined alpha-isomorphism, that is, the
edge-preservation probability is at least alpha between G and G itself.
Okay. So now let us ask: does a random graph G have a good
automorphism? Of course it does, because every graph has a
1-automorphism, the identity permutation; so this definition doesn't
work, for the same reason as before. So another definition we might
try: ignoring the identity permutation, does G have a good
alpha-automorphism? But this is still always true, because we can take
the identity permutation and just swap the first two vertices. Not too
many edges are affected this way, so it's still a very good
automorphism. So our final definition: we ask whether G has an
alpha-automorphism which is far away from the identity mapping. And
indeed, we have our theorem, which says: let G be a random graph; then
with high probability, any very good automorphism -- any
(1 minus epsilon)-automorphism -- of G is epsilon-close to the identity
mapping, even for fairly large epsilon. There are some restrictions on
the number of edges and on this epsilon, but it is enough for our
application.
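To make the two quantities in these definitions concrete, here is a minimal sketch in the same edge-list convention as the earlier sketches; the path example is my own.

```python
def automorphism_fraction(edges, pi):
    """Alpha of pi: fraction of edges (u, v) with (pi(u), pi(v)) also an edge."""
    edge_set = {frozenset(e) for e in edges}
    return sum(frozenset((pi[u], pi[v])) in edge_set for u, v in edges) / len(edges)

def distance_from_identity(pi):
    """Fraction of vertices moved by pi."""
    return sum(pi[v] != v for v in pi) / len(pi)

# The "swap two vertices" observation on a 5-vertex path: swapping the last two
# vertices breaks only one of the four edges, yet moves only 2 of 5 vertices.
# In a large sparse graph, such a swap affects only O(max degree) of m edges.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
swap = {0: 0, 1: 1, 2: 2, 3: 4, 4: 3}
print(automorphism_fraction(path, swap))  # 0.75: a 3/4-automorphism
print(distance_from_identity(swap))       # 0.4: still close to the identity
```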
Okay. So this was the high-level proof, and there are some open
questions. For the first one: we proved our hardness assuming RP does
not equal NP. Can we make it just an NP-hardness proof? For that we
would need to explicitly construct graphs that are robustly asymmetric.
Currently we know that random graphs are such graphs with very high
probability, but we don't know any great candidate graphs for a
deterministic construction. Once we get that, it will give us an
NP-hardness proof for the robust graph isomorphism problem. The second
problem, just as was asked before: can we improve the hardness gap for
robust graph isomorphism? Here is our theorem: for every epsilon, we
cannot distinguish whether the graphs are (1 minus epsilon)-isomorphic
or not (1 minus epsilon_0)-isomorphic. But this epsilon_0 is really,
really bad -- it's like 10 to the minus 14 in our paper. Although we
didn't try to optimize it, it's still really bad. So I'm not sure
whether it is worse or better than the constant in the corresponding
PCP theorem. But with the PCP theorem, you can always use parallel
repetition to amplify the gap. For us the question is whether there is
some procedure similar to parallel repetition to amplify this gap. One
attempt is to try the tensor product (sketched below): given two
graphs, we can tensor them in this way. We know that, under mild
assumptions, if the two graphs are not originally isomorphic, then
their tensor products are not isomorphic. But what we really want to
know is: given that they're a little far from being isomorphic, are
their tensor products much farther from being isomorphic? So this is
our second open question. And that's all I'm going to talk about.
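As a footnote to the tensor-product attempt above, here is a sketch of the tensor (categorical) product of two graphs in the same edge-list convention; whether this is exactly the product intended in the talk is my assumption.

```python
def tensor_product(edges_g, edges_h):
    """Tensor product: (u1, v1) ~ (u2, v2) iff u1 ~ u2 in G and v1 ~ v2 in H."""
    product = []
    for u1, u2 in edges_g:
        for v1, v2 in edges_h:
            # Each pair of edges contributes the two "crossing" product edges.
            product.append(((u1, v1), (u2, v2)))
            product.append(((u1, v2), (u2, v1)))
    return product
```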
[applause].
>> Konstantin Makarychev: Questions?
>>: So you didn't talk about the gap?
>> Yuan Zhou: What gap?
>>: I think you mentioned a gap, an integrality --
>> Yuan Zhou: So the integrality gap follows roughly by the same
proof.
Of course, we use Schoenebeck's integrality gap for random 3XOR. So
the soundness, which is the most technical part of this -- the hardest
part of it -- is the same. We only have to show the completeness in
the Lasserre language, but that is usually what you would expect to be
true.
>>: So for this XOR, is epsilon_0 equal to half or not? Because you
have hardness at half in the [indiscernible], and then rely on the PCP
theorem. But for XOR, if you assume the hardness conjecture, can you
show hardness where epsilon_0 is equal to half -- that it is hard to
distinguish half plus epsilon from 1 minus epsilon [indiscernible]?
>> Yuan Zhou: You're asking: is there a gap like (1 minus epsilon,
one-half plus epsilon)?
>>: Yes. In the current case you're working with.
>> Yuan Zhou: I don't think so. I don't see why one-half would be
magic for graph isomorphism. But you're right: if you're given a
(1, 1/2) gap, you'd usually expect to show a bigger gap. But I think
most of the loss does not come from this gap; it comes from some other
part of the procedure.
>>: So the conjecture is that it's hard. So essentially you get half
plus epsilon and 1 minus epsilon; the soundness would be half plus
epsilon.
>> Yuan Zhou: Yes. But there you can always use variants of that
construction: instead of working modulo 2, you work modulo q and assume
a very big gap. Assuming that big gap, I don't think it's something
significant.
>>: Given your proof, why don't you get the same --
>> Yuan Zhou: One-half?
>>: The same gap as in the original sort of --
>> Yuan Zhou: Because there are many ways to lose. We need to say that
an equation blob goes to an equation blob; we need to add some extra
edges, and there we already lose a lot. And we use some averaging
arguments that easily lose a lot.
>> Konstantin Makarychev: Any questions? Let's thank the speaker
again.
[applause]