>> Dmitry Khovratovich: Great to see you in Redmond. We hope you will enjoy our
workshop on cryptanalysis. As you know, today we have a series of talks, starting
with [inaudible] talks and then later just regular talks, and at the end of the day we have
discussions and brainstorming on what we are going to work on tomorrow. Tomorrow
we will split into groups and work the full day, and this will continue on Wednesday.
Some of us will go for [inaudible], but this doesn't prevent the remaining ones from staying and
continuing the work. I am happy to introduce our first invited speaker, Thomas Peyrin from
Singapore. Thomas is a well-known cryptanalyst who got a best paper award at
[inaudible] for his work on [inaudible]. He is now a researcher at Nanyang Technological
University in Singapore, and he will talk about the unaligned rebound attack on KECCAK.
>> Thomas Peyrin: Thank you. Do you hear me if I… Yeah, okay. So this is joint
work with Alexandre Duc, Jian Guo and Lei Wei. Unaligned rebound attack: basically,
maybe you have seen the paper produced by the KECCAK designers recently about
alignment, which is bad in KECCAK. This is actually supposed to prevent, I mean,
this actually prevents the rebound attack from being easily applied to KECCAK. So here we
don't show that they are [inaudible] wrong, because they didn't say it was impossible, they
just said it would be hard. And this is actually what we are doing: we show that
applying the rebound attack is hard, but you can still get some results; you have to
generalize it a lot, and this is why we call it the unaligned rebound attack. A rebound attack
where things are not easy: you don't have [inaudible] differentials that are so easy to
[inaudible], so you have to consider many more things.
I did not know that everybody would be a specialist on the committee, so maybe I am
going to go fast in the beginning.
>>: Just go to your conclusions [laughter].
>> Thomas Peyrin: Maybe I should do that. The results will not be that
impressive in the end. So you know the NIST announced
the five finalists: BLAKE, Grøstl, JH, KECCAK and Skein. All five are very good
hash functions. Of course none of them is broken, and it is very unlikely that any of them
will be broken by the end of the competition. So we know how to compare their speed,
but we also need to compare their security: if you cannot break them,
how do we compare them? Usually what we do is consider easier attack models. We
attack reduced variants, for example near-collisions or collisions on a subset of the
bits. You can look for distinguishers: zero sums, subspace distinguishers,
limited birthday, et cetera. Or what you can do is attack variants: a common thing to do
is to attack a reduced number of rounds. Or you can attack only a
part of the hash function, for example the compression function, because very often, history shows, you can turn a compression function attack into an attack on
the full hash function.
For KECCAK, it is the internal permutation that we are going to look at,
because KECCAK is completely based on an internal permutation. So here what we
are going to do is analyze the reduced-round internal permutation of KECCAK, and what
we are trying to do is look only for differential distinguishers. So very quickly again, I am
pretty sure you all know, but anyway: KECCAK was introduced by
Bertoni et al. for the SHA-3 competition. It is based on the sponge construction, so the
operating mode is the sponge. Very quickly, you have an internal state of r bits,
which is the bitrate, and c bits, which is the capacity; the total we will name b, b bits,
okay? Then you pad the message, you divide it into blocks, you insert each message block
into the bitrate part, you apply the permutation, et cetera, until you finish
processing all of the message blocks; this is the absorbing phase. Then, in the squeezing
phase, you just take r bits of the internal state, you output them, you apply the
permutation, and you continue doing that until you get the desired hash length.
I am not going to go into details, but basically you get nice security proofs for the sponge
construction, provided the permutation is a good one, and the designers
make no claims for the sponge construction beyond 2 to the c over 2 calls,
where c is the capacity. And since we said that b bits is the size of the permutation,
you cannot make a claim bigger than 2 to the b over 2. So actually we are looking for
attacks which have a cost lower than 2 to the b over 2; otherwise it makes really no sense,
because anyway they didn't claim any security for more than that.
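The absorb/squeeze flow just described can be sketched in a few lines. This is a toy illustration only: the permutation below is a placeholder standing in for KECCAK-f, and the padding is a simplified append-one-byte scheme, not KECCAK's real pad10*1; the parameter names r, c, b follow the talk.

```python
def toy_perm(state, b):
    """Toy placeholder permutation on a b-bit state -- NOT KECCAK-f."""
    mask = (1 << b) - 1
    for _ in range(4):
        state = (state * 0x9E3779B97F4A7C15 + 1) & mask  # affine mix
        state ^= state >> (b // 3)                        # cross-bit mixing
    return state

def sponge(message, r=1088, c=512, out_bits=256, f=toy_perm):
    b = r + c                          # total state size in bits
    # toy padding: append one 0x01 byte, then split into r-bit blocks
    padded = message + b"\x01"
    n = int.from_bytes(padded, "little")
    blocks = [(n >> i) & ((1 << r) - 1) for i in range(0, len(padded) * 8, r)]
    state = 0
    for m in blocks:                   # absorbing phase
        state ^= m                     # XOR block into the r-bit outer part
        state = f(state, b)
    out, got = 0, 0
    while got < out_bits:              # squeezing phase
        out |= (state & ((1 << r) - 1)) << got
        got += r
        state = f(state, b)
    return out & ((1 << out_bits) - 1)
```

The point of the sketch is only the control flow: message blocks are XORed into the outer r bits and interleaved with permutation calls, then output is read r bits at a time.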
So, so far, what previous cryptanalysis is there on KECCAK? Basically there are four
works, I would say. The first one is by J.-P. Aumasson et al., 2009. They showed
zero-sum distinguishers for the internal permutation: you have a set
of inputs and outputs where all the inputs sum to 0 and all the outputs sum to 0 as well. And they
managed to find this for 16 rounds of KECCAK-1600, which is the big version of
KECCAK, okay, the big permutation, and this is the one which is used for the
submitted version. This is the one we are going to study here because it is the most
interesting one. But for this work the complexity was actually quite big.
Then there has been a completely different work. This one is much more practical,
but it is on a very, very reduced number of rounds, three rounds: they can find small
message preimages using SAT solvers. Then Dan Bernstein proposed preimage attacks on
eight rounds, with complexity almost the generic one, and the number of bits of
memory required is very big, so it is very likely that this attack is actually not faster than
the generic one in practice.
In practice, of course, this is far from practical. So maybe the best results are from C. Boura
et al.: they showed zero-sum partition distinguishers. Instead of just a zero sum, you
partition the input and output sets, and all the parts are in fact zero sums.
They managed to find this for up to 24 rounds, so the full version of the big
KECCAK. This is nice, but the complexity is really very high, 2 to the 1590,
very, very high.
So even if those complexities are quite high, the zero-sum distinguishers can actually
attack more rounds than we do, and even for the same number of rounds,
they have lower overall complexity. Still, it is interesting to look at
differential distinguishers, for several reasons. First, our advantage over the generic
complexity is quite big, whereas in their case it is very small, only a factor of two each
time, so it is very, very thin. Also, zero sums are very difficult to exploit: if you want
to find collisions for a slightly lower number of rounds, you don't know what to do using
zero sums, whereas with differential distinguishers it will turn out that for a lower
number of rounds you can actually do something regarding collisions, for example. Also, if
you want to attack the hash function rather than the permutation with zero sums, it seems to
be hard. Maybe it's not impossible, but it seems to be hard. In our case it is going to be
just another constraint on the input, so it is going to be easy. And also just a small
criticism about the zero-sum partitions: in fact, in order to describe the partitions, they
use full KECCAK rounds, and this to me is a little bit of
cheating; it is not clear how you can describe the partition in a compact way. For
example, if you want to attack a low number of rounds with a low complexity, you should
describe the partition with a complexity which is not bigger than the complexity you
claim. This is not clear.
>>: [inaudible] complexity of the size of the partition.
>> Thomas Peyrin: What I mean is that, to me, to describe the
distinguisher, you have to say what your partition will be. So if you say that you define…
>>: [inaudible] KECCAK is a short description…
>> Thomas Peyrin: Yes, but to me it's a little bit of cheating to insert this description
directly into the description of the distinguisher.
>>: [inaudible] distinguisher. My impression is that actually they don't do this
anymore, huh?
>> Thomas Peyrin: In the last version of the paper they don't do that? Okay.
>>: They say [inaudible]
>> Thomas Peyrin: Okay. Because from what I've read of the paper, I thought that they use the
KECCAK rounds to describe their…
>>: Because as you point out in [inaudible] it gets very tricky to write down a…
>> Thomas Peyrin: Yeah, sorry.
>>: Your description of the sponge had c and r, then you had 2 to the n and 2 to the b; are
they here in KECCAK, or are they…
>> Thomas Peyrin: Sorry.
>>: You have c bits and r bits and in the complexity had n over 2 in it.
>>: [inaudible] n is the [inaudible].
>> Thomas Peyrin: n is the [inaudible], sorry.
>>: [inaudible] oh yeah, sorry.
>>: What is b?
>> Thomas Peyrin: b is c + r, the total size of the permutation. Sorry.
Also, there has been no third-party cryptanalysis on the differential aspects of
KECCAK so far. There has been a lot of work done by the designers, actually quite
good work, but nobody tried apart from them. Again, I recall that we focus on
attacks of complexity lower than 2 to the b over 2, because anyway they do not make
more claims than that.
So let us look at the KECCAK permutation, first the internal state. It can be seen as a
rectangular cuboid of 5 x 5 x w bits, so 5 x 5 here, and this is w. For the big version of
KECCAK, w is 64 bits, okay. Each cube represents one bit,
and you can subdivide the internal state into several parts. For example, this is a
slice: if you take a 5 x 5 chunk, this is a slice. Okay, the row is easy, the column as well,
and this is going to be the lane. So you have 25 lanes in a KECCAK internal state. So
how does the internal permutation work? For the big version of KECCAK, where b is
1600, you have 24 rounds in total. Each round is composed of five layers. The first one is
theta (θ), which is really the main component for diffusion in the KECCAK internal
permutation. I am not going to go through the description completely, but basically what
it does is, for each bit of the internal state, you take the parity of the
column located on its left in the same slice, and the parity of the
column located on its right in the previous slice, and then you XOR those two parity
bits into the value of the bit. Then you do that for all of the bits of the internal state of KECCAK.
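The θ step just described can be written down directly. This is a straightforward sketch of θ on a 5 x 5 state of w-bit lanes, following the standard KECCAK indexing (state[x][y] is a lane; z runs along the lane, so the "previous slice" shows up as a rotation by one position):

```python
def rotl(v, n, w=64):
    """Rotate a w-bit lane left by n positions."""
    return ((v << n) | (v >> (w - n))) & ((1 << w) - 1) if n else v

def theta(state, w=64):
    """theta: XOR into each bit the parity of the column to its left
    (same slice) and the parity of the column to its right in the
    previous slice (hence the rotation by one along z)."""
    C = [state[x][0] ^ state[x][1] ^ state[x][2] ^ state[x][3] ^ state[x][4]
         for x in range(5)]                              # column parities
    D = [C[(x - 1) % 5] ^ rotl(C[(x + 1) % 5], 1, w) for x in range(5)]
    return [[state[x][y] ^ D[x] for y in range(5)] for x in range(5)]
```

As described in the talk, a single active bit diffuses to itself plus two full columns, i.e. 11 active bits after one θ.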
Then ρ (rho), the second layer, is just a bit permutation inside each of the lanes:
basically each lane is rotated by a certain
constant value, so it just moves the bits around. The π (pi) layer just moves the bits inside
each slice, so you can describe this layer by a bit permutation, but
you don't really need to know what kind of permutation it is right now; just
remember that it acts only inside the slice. Then χ (chi). Chi is the only nonlinear
layer of the KECCAK internal permutation. It is described here with
AND gates, but it can be seen as a 5-bit Sbox, okay, a 5-bit permutation, and
you can really look at it as a 5-bit Sbox applied to each of the rows of the
KECCAK state independently.
>>: [inaudible] show the rows [inaudible]
>> Thomas Peyrin: So you apply the Sbox on each of the rows. Then the last layer is ι
(iota), but this layer is just a constant-addition layer, so it has really no effect on our
differential analysis, and we consider it nonexistent.
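The row-wise χ layer just described is, on one 5-bit row, the map a[x] ← a[x] ⊕ (¬a[x+1] ∧ a[x+2]) with indices mod 5, which is the standard KECCAK χ. A small sketch:

```python
def chi_row(a):
    """chi on one 5-bit row: a[x] ^= (~a[x+1]) & a[x+2], indices mod 5."""
    bits = [(a >> x) & 1 for x in range(5)]
    out = 0
    for x in range(5):
        out |= (bits[x] ^ ((1 ^ bits[(x + 1) % 5]) & bits[(x + 2) % 5])) << x
    return out

# chi is invertible on a row (the row length 5 is odd), so it really can
# be viewed as a 5-bit Sbox applied to each row independently.
```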
So you can now divide one round of KECCAK into two parts: the linear part, which we call
[inaudible], which is the application of theta, rho and pi, and chi, which acts on each
of the rows. So the first thing we want to do is to be able to build good differential paths for
KECCAK, because that is going to be our main tool for trying to do any rebound
whatsoever. So we first studied the diffusion in KECCAK. Actually, all of the
diffusion of the differences will come from theta, mostly, since π (pi) and ρ (rho) just move
the bits around, so they do not increase the number of differences; and
the Sbox actually has very bad diffusion, because, for example, if you have one active bit
on the Sbox input, you can map it to one active bit on the output as well, and this mapping is
always possible if you want to control the difference in the Sbox. So the diffusion will not
really come from the Sbox [inaudible]. Everything really comes from theta (θ).
So theta has quite good diffusion. For example, here I have inserted just one
bit of difference; those other bits will contain a difference after applying theta. Basically
you are going to have the same bit active, plus one column here and one column here. If
you move the bit here, the same columns will be active; just this bit will move. Now if
you look at theta inverse, things are somehow much worse
[inaudible]. You can insert just one bit, compute backward, and then you have
a huge set of differences everywhere. A good thing for the cryptanalysis is…
>>: [inaudible] to look at what is there. The smallest sound of the [inaudible] so you are
just changing a single bit [inaudible] go backward it makes sense to go five [inaudible]
bits [inaudible]
>> Thomas Peyrin: Actually it is even better: if you use two bits you get something even
better. So this is something very good for controlling diffusion, but this has
already been remarked by the designers. I call it the CPK, okay, for column parity kernel.
The idea is that if you put 2 bits, an even number of bits, into one column, then the action
of theta will be the identity. And you can see that…
>>: I'm talking about forward. I'm not thinking about [inaudible]
>> Thomas Peyrin: Yes, but that part is also true. Basically, you can see that if you
insert a bit here and here, you have the two bits here, but the column parity is zero, so
nothing is added: this is maintained, and theta is the identity. It is the same for
four bits, of course, and you can place the even number of bits wherever you want; the result
is going to be the same. So of course we are going to use that a lot to control the
diffusion. Note that the designers had some kind of proof saying that you cannot stay in
the CPK for more than three rounds, when you model the chi layer as the identity. But this is
not how we are going to model it.
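The kernel property discussed here is easy to check directly: θ only XORs column parities into the state, so any difference in which every column has even parity passes through θ unchanged. A quick sketch (θ is reimplemented here so the snippet is self-contained):

```python
def rotl(v, n, w=64):
    return ((v << n) | (v >> (w - n))) & ((1 << w) - 1) if n else v

def theta(state, w=64):
    C = [state[x][0] ^ state[x][1] ^ state[x][2] ^ state[x][3] ^ state[x][4]
         for x in range(5)]                              # column parities
    D = [C[(x - 1) % 5] ^ rotl(C[(x + 1) % 5], 1, w) for x in range(5)]
    return [[state[x][y] ^ D[x] for y in range(5)] for x in range(5)]

# A difference with two active bits in the same column (even parity in
# every column) is in the column parity kernel: theta leaves it unchanged.
delta = [[0] * 5 for _ in range(5)]
delta[2][0] ^= 1 << 17          # bit at x=2, y=0, z=17
delta[2][3] ^= 1 << 17          # second bit, same column, same z
assert theta(delta) == delta    # theta acts as the identity on it
```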
So our goal is to write an algorithm to look for good differential paths, and of course we
are going to use this CPK. When we
insert some difference before the linear layer, of course we know
for sure what the output difference after this layer is. So all the branching
during our search will come from chi, because we can have
different transitions through chi, and if we look for all of them, it is going to increase our
search tree a lot.
So what we do is consider that all the slices are somehow independent, and we do
some precomputation slice-wise, which means we build a big table of size 2 to
the 25, which holds the 2 to the 25 possible differences for one slice. And what
we do is precompute the best chi transitions for each slice difference, such that after
applying chi, and arriving into the next theta, we stay as much as possible in the
CPK. So we precompute those good transitions for each of the slices. So this is the precomputation. The algorithm is simple. We start with an internal state we call A1, and A1 is composed of a certain number of CPK columns, so we start with
k CPK columns. We can place them randomly in the state, and we can also randomize
where the bits are placed within each column. Then we apply the
linear layer, we obtain the difference in B1, and then we go to the table and look
for the best chi transition for that difference. And we continue, et cetera, et cetera.
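The search loop just described can be summarized as a greedy skeleton. Everything below is a hypothetical stand-in: `precompute_slice_table` and `linear_layer` are toy stubs replacing the real slice-wise KECCAK machinery, kept only to show the control flow (precompute per-slice χ transitions, start from k in-kernel columns, go forward round by round picking among the best transitions with some randomization).

```python
import random

def precompute_slice_table():
    """Stub: the real code stores, for each of the 2^25 slice differences,
    the chi transitions that land back in the kernel as often as possible.
    Here we fake it with a tiny random table of (next_diff, cost) pairs."""
    return {d: [(d, random.randint(1, 4)) for _ in range(3)] for d in range(32)}

def linear_layer(diff):
    """Stub for theta/rho/pi: deterministic toy mixing of slice values."""
    return [(d * 5 + 3) % 32 for d in diff]

def best_transitions(table, diff, keep=2):
    """For each active slice, look up candidate chi transitions and pick
    randomly among the few cheapest ones (greedy with randomization)."""
    out, cost = [], 0
    for d in diff:
        cands = sorted(table[d], key=lambda t: t[1])[:keep]
        nd, c = random.choice(cands)       # randomize inside the best set
        out.append(nd)
        cost += c
    return out, cost

def search_path(rounds=3, k=2, restarts=50):
    table = precompute_slice_table()
    best = None
    for _ in range(restarts):              # random restarts over placements
        diff = [random.randint(1, 31) for _ in range(k)]  # k "CPK columns"
        total = 0
        for _ in range(rounds):
            diff = linear_layer(diff)      # deterministic through linear layer
            diff, cost = best_transitions(table, diff)
            total += cost
        if best is None or total < best[0]:
            best = (total, diff)
    return best
```

The design choice mirrored here is the one debated in the Q&A: pure greedy gets stuck in local optima, so the search keeps a small set of best transitions and samples among them.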
If we look, for example, for a four-round differential path, we do that for three rounds and
then we go backward for one round. This backward round will not cost a lot because we
know A1 has low weight: we started with only k CPK columns and k is small, so the weight
is small. And we cannot do more, because we would have to invert the theta here, and
inverting theta is going to be very, very bad; it is going to spread differences everywhere,
so we cannot invert one more round. So what we do is go forward a lot and just one round
backward.
So once we obtain the results: this table shows, for one, two, three, four and five
rounds and for all the sizes of the KECCAK permutation, the differential paths
obtained, and in red the ones that we improved. This column is the complexity, and
this one the number of conditions for each of the rounds. You can see here, for
example, that we tried to stay as much as possible in the CPK: it is 12, 12, 12, and when
we cannot do it anymore, it explodes and it is over.
>>: [inaudible] improve results. You compared to the original [inaudible] do they go
through [inaudible] permutation [inaudible] what is the least [inaudible]
>> Thomas Peyrin: What they did is they tried to prove that, modeling chi as
the identity, you cannot stay in the CPK
for more than three rounds, so that is enough, which is true, because of course here we
are not going to be attacking…
>>: [inaudible]
>> Thomas Peyrin: I am not sure that they actually did all of the computation, but even
if you do, even if you try to stay in the CPK, you have to do it, maybe not in a smart
way, but you have to do it: you have to find all of the best chi transitions, because of course
the more CPK columns you can start with, the more chance you have to find a
good path, and we had to stop at, I think, five or six CPK columns; we cannot go more. So
the more…
>>: [inaudible] combinations of five or six…
>> Thomas Peyrin: Yes, for starting, yes. We start with all possible combinations of
columns and placements of bits.
>>: [inaudible] search for [inaudible]
>>: So you tried all possible placements? But you didn't try to jump ahead:
you are not looking for the globally best path; in your description you are trying to find
at each phase the best next step. But if you look ahead two steps, for example, it is
possible that the first one is worse, and then it improves the second round.
>> Thomas Peyrin: Here it is the simple greedy algorithm I am describing. We also
have some randomization, because actually we don't pick only the best one but
a set of best ones, and then we pick randomly inside that set, so that we
don't get stuck in this kind of local optimum. But in the end we don't have a clear vision of
how something good can happen here so that it is much better later. The
internal state of KECCAK is quite big for the big version, so it really seems
to me that as long as you control the number of active bits, this is what you should do, at
least in the beginning. Maybe after that you can have some overlapping and
stuff, but in the beginning I think that is the explanation…
>>: You mentioned that for every transition from A1 to B1 you have a set
of possible choices. Do you try all of them? Do you try to sample it?
>> Thomas Peyrin: You mean from A1 to B1?
>>: Yes.
>> Thomas Peyrin: So it [inaudible]. So if I fix the difference…
>>: Oh, I'm sorry [inaudible] B1…
>> Thomas Peyrin: So B1 to A2? I have precomputed tables, and I
make the assumption that all of the slices are independent, and for each of
the active slices I go to my table, see what the best transitions are, and I pick one of
them.
>>: But you pick the best one; you said that you actually list all the good
transitions, right?
>> Thomas Peyrin: Yeah.
>>: Do you then search the whole tree from A1 to B3, all of the possible good transitions,
or do you randomly choose one?
>> Thomas Peyrin: In the beginning we took only the best one, but then we put in some
trade-off: we allow some transitions and we pick one of them randomly.
>>: Kind of branch and bound, so you just bound your search.
>> Thomas Peyrin: Yes, so that it doesn't explode too much. We can also make an
algorithm that doesn't look for all of them but just picks at random. I think
the best results we had, because we tried many different configurations,
the best is actually to take the best transition for a first number of rounds, and then, when
the number of differences inside gets big, you can start to look for all
kinds of differences. But in the beginning you'd better just focus on the number of active
bits and minimize it, because nothing is going to overlap.
So this is pretty much what we get. Maybe the most interesting one is the four-round
differential path, which has a probability of success of 2 to the -142. You can build very
simple distinguishers with that differential path. The obvious one is: you take one of
the differential paths, it maps one fixed input difference to one fixed output difference,
and if your probability of success is better than 2 to the -b, then you get a
distinguisher, since the input and the output difference are both fixed. What
you can do is improve this by using a few degrees of freedom. You add one round, for
example, at the end, and what you do is simply continue the differential path, okay, you see
where the differences are going. Of course this part has high Hamming weight, so it would
cost you a lot, but you can use your degrees of freedom here and choose the values into the
Sboxes, so you attack each of the Sboxes independently, and in the end this extra round costs
almost nothing. So your complexity will be just the same, because you still have to
pay the original differential path probability, and your generic complexity is the same,
because you are still mapping one difference to one difference at the output.
What you can then do is extend it by two more rounds, one backward and one
forward, and what you do here is you don't control the differences anymore; you let
them spread. Your complexity will be the same because you don't pay more, you just
let the differences spread, but the drawback is that you no longer have only one
possible output difference, but a set of output differences, and the same on the
input side: you are going to have a set of possible input differences. Then, for
example, you can look at the limited-birthday distinguisher, and depending on the
probability of your differential path and
on the sizes of the sets of possible input and output
differences, you may or may not have a distinguisher.
>>: [inaudible]
>> Thomas Peyrin: On the edges?
>>: Yes. On this [inaudible] phase.
>> Thomas Peyrin: It depends on what you mean by [inaudible]. If by [inaudible] you
mean something where everything is well aligned, et cetera, this does not happen in
KECCAK.
>>: [inaudible] description.
>> Thomas Peyrin: No. [laughter]. That is exactly why the attack is not really
working well on KECCAK: if you have one differential path and you
want to have another one, it is going to be completely different, and they have nothing in
common.
>>: So you can do nothing better than just explicitly [inaudible]
>> Thomas Peyrin: That's why I was talking about the fact that here I need to be sure
that the cost of describing the sets of input and output differences is lower than my
complexity. And I…
>>: This looks like perhaps six or seven rounds or something?
>> Thomas Peyrin: Seven rounds.
>>: Can you give us some idea of the Hamming weight of the execution of those rounds?
Because it's [inaudible]
>> Thomas Peyrin: You mean here?
>>: Of all of the rounds.
>> Thomas Peyrin: So here it is going to be very small: just before theta we are
going to have, for example, six active columns of two bits each, so 12 bits. But then you
invert theta, so it is going to be like 800.
>>: [inaudible] fast enough to…
>> Thomas Peyrin: Yes. Here it is going to be…
>>: In the forward direction?
>> Thomas Peyrin: In the forward direction, because theta will spread slowly, and then it
is going to be okay.
>>: [inaudible] start [inaudible]
>> Thomas Peyrin: Yes. But from one direction it is going to be like this, and the other
one is going to be like that [laughter].
>>: So what is actually going to happen with a particular difference: usually the probability
is going to be much lower than the probability of the middle part, because each
one of the input differences is not likely to go through and get into
the original differential path. So if you are running backwards, it is not uniquely invertible,
in the sense that most of the differences are not going to go to the front end. So
you're not really losing probability by going backwards and by extending into all the
possibilities. You go backwards one round and [inaudible]
>> Thomas Peyrin: Yes. But I don't pay anything here, because I just let the
differences spread, so I don't…
>>: [inaudible]
>> Thomas Peyrin: Yes, anywhere, I just let it go. But in my real attack I'm not going
to pay for it.
>>: But in a real attack [inaudible] difference. Most likely it will not lead forward…
>> Thomas Peyrin: Yes, but my game is just to say that it is going to be in that set.
This is my game. I know for sure it is going to be inside, because I precomputed that
set, so I am sure it is going to be in.
>>: How big is the [inaudible] set?
>> Thomas Peyrin: Oh, sorry. I can show you, I mean, it depends. It depends on the…
>>: Approximately. I mean are we talking about 1000… 10 trillion or…
>> Thomas Peyrin: I don't know. It is 2 to the power of something, which could be
quite big. It can be really big.
>>: But if you choose an intercept [inaudible] question.
>> Thomas Peyrin: So until you get the complexity…
>>: I have a Delta in.
>> Thomas Peyrin: Yes. This is fixed.
>>: And diffusing, you have a set of differences that would be what you call Delta in. If I pick
one of those, quite likely when I take it forward I don't end up on Delta in.
>> Thomas Peyrin: Yeah.
>>: But then I get noise in my measurements.
>> Thomas Peyrin: Yeah. But in my case I am going to start from here, right? So I
start, I mean, I have my differential path; I just continue the differential path up to here. I
just derive it. I am not finding any difference; I describe what my differences will be. That
would cost a lot the other way. So I start from here.
>>: [inaudible] and then start [inaudible].
>> Thomas Peyrin: That's okay. I will find a pair that goes from a Delta prime in the input
set to a Delta prime in the output set, with complexity basically one over P.
And then, when I am here, I just have to extend it by a round here and a round there, and I
am not going this way; I am going from inside out.
Okay, I have to go faster now. So, very quickly, the rebound attack. I am pretty sure
everybody knows how it works now, so I am going to go fast. It was introduced by
Mendel et al. in 2009 for AES-like permutations, and the reason why is because the
truncated differential paths are very easy to handle there. So this, for example, is a
truncated differential path for AES: black means the byte is active, white
means the byte is inactive. You have, for example, seven rounds, and, as I am pretty sure
you know, the worst part, the costly part, is located in the middle,
because you have the fully active states here; this will cost you a lot.
So of course what you'd better do is start from the middle, and then again go inside out; the
rebound attack allows you to do this middle part at a cost of one on average. How does it
work? You start from a difference Delta-in, actually just after the
SubBytes layer here, and you start with a difference Delta-out here, just
between the third and fourth rounds, and of course you pick differences that fit into the
truncated differential. Then you compute: this part is linear, so you just
compute the difference through the linear permutation and constant addition, and you arrive
at a certain difference at the beginning of the SubBytes. The Delta-out you compute
backwards.
This part is linear as well, so you can know the difference just after the SubBytes layer,
and it is going to be all active; you know this because of the truncated differential.
Then you hope, with a certain probability, to have a match. That
means that you have a certain difference on the inputs and a certain difference
on the outputs, and those differences can be matched with a certain probability, and it
depends on each of the Sboxes. You have 16 Sboxes, all active, and when you
have a random input and output difference for an Sbox, for AES it is simple: it is
with probability about one half that you can map them, i.e., that there are values that can
map this input difference to this output difference.
So the probability of a match here, for random Delta-in and Delta-out, is
going to be 2 to the -16: 16 active Sboxes, and the probability of a match for each. I give it
a name because in our case this P-match will be very complex to compute; remember
that P-match is the probability that, given random differences, you
can match them through the Sbox with some value.
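The P-match notion can be made concrete on KECCAK's own 5-bit Sbox: build the difference distribution table (DDT) of χ and count how often a random nonzero (input, output) difference pair is compatible. This is an illustrative computation, not a claim about the exact value used in the paper.

```python
def chi_row(a):
    """KECCAK chi on one 5-bit row: a[x] ^= (~a[x+1]) & a[x+2], mod 5."""
    bits = [(a >> x) & 1 for x in range(5)]
    out = 0
    for x in range(5):
        out |= (bits[x] ^ ((1 ^ bits[(x + 1) % 5]) & bits[(x + 2) % 5])) << x
    return out

# DDT: ddt[din][dout] = number of values a with chi(a) ^ chi(a ^ din) == dout
ddt = [[0] * 32 for _ in range(32)]
for din in range(32):
    for a in range(32):
        ddt[din][chi_row(a) ^ chi_row(a ^ din)] += 1

# P_match for one Sbox: fraction of nonzero (din, dout) pairs that are
# compatible, i.e. whose DDT entry is nonzero.
compatible = sum(1 for i in range(1, 32) for o in range(1, 32) if ddt[i][o])
p_match = compatible / (31 * 31)
```

Note that, as mentioned earlier in the talk, a single active input bit can map to a single active output bit through χ (the DDT entry for that transition is nonzero).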
Then, when you find such a matching difference, after 2 to the 16 tries, you find all of the
values that realize each of the Sbox transitions. For AES again it is going to be
simple: when there is a match for one Sbox, you have two values on average for
this transition, so you are going to have 2 to the 16 possible values when you find a
match. I call this N-match: the number of solutions you can generate when
you have a match. Again, this is going to be complex to compute in our case. And all of
these values, what you do is propagate them backward and forward. You know for
all of them that the middle part will be verified, because you handled it in the beginning,
and you just have to hope that the forward part behaves as you want, and the backward part
behaves as you want. I call this probability here P-forward and this one here P-backward.
So these are the four parameters that we need to know: P-match; N-match, the number of
solutions that you generate here; PF, the probability of the forward part; and PB, the
probability of the backward part.
So the overall complexity is this formula. Why is that? First, whatever you do, you need a
match at some moment. You cannot do anything unless you get one match, so you
have to pay 1 over P match anyway. This term represents the number of times you have
to have a match in order to have enough solutions to propagate forward and backward, so
that you know the forward part and the backward part will be verified; this is this
term. And then when you get enough solutions, you actually have to run them forward
and backward, and this is why you also have to pay 1 over PB times PF.
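To make the cost concrete, here is one way to write the formula just described (my own notation, reconstructed from the description above rather than taken from the slides):

```latex
C \;\approx\;
  \frac{1}{P_{\mathrm{match}}}\cdot
  \max\!\left(1,\ \frac{1}{N_{\mathrm{match}}\cdot P_F\cdot P_B}\right)
  \;+\;
  \frac{1}{P_F\cdot P_B}
```

The first term counts the matches: each one costs $1/P_{\mathrm{match}}$ and yields $N_{\mathrm{match}}$ solutions, of which only a fraction $P_F\cdot P_B$ survives the outbound, so you may need many matches. The second term is the cost of actually running the surviving candidates forward and backward.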
So there are some improvements of the rebound attack. For example the Super-Sbox,
which is just saying that two AES rounds can be merged somehow: instead of looking
at 8-bit S-boxes, you can look at 32-bit S-boxes, and you do the rebound basically on the
bigger S-boxes, or Super-Sboxes, and this allows you to gain one round. And another one
is, for example, instead of having everything active here, you can actually
gain a little bit of complexity in the rest of the path by allowing maybe a non-fully
active state [inaudible], but we are going to see that in the case of KECCAK it is not going
to work at all.
Our goal, what we wanted to do here, is to apply the rebound attack on KECCAK. So we
took, for example; we want to attack the big version. We take the four-round
[inaudible] in here; let's say we take two instances of this [inaudible], forward and
backward, and we want to merge them, for example. And the complexity seems to be quite
okay. We cannot take five rounds if you consider what the complexities are going to be. So
let's try. What we hope is that, with the four-round differentials in the outbound, we can
hope for one inbound round in the middle, so we hope for a nine-round distinguisher,
maybe with a complexity which should be lower than 2 to the 512. This is what we hope.
But this is not going to work at all, and the main reason is that we
don't have enough differential paths to run the rebound. This is because of the bad
alignment in KECCAK.
We know one good differential path, with probability roughly 2 to the -142, but we know about
only one of them. And if we want to find many, many of them, it is going to be very hard,
and very quickly you will need to relax the constraints on the properties of your differential
paths in order to have more and more paths inside. But then, if you relax too much,
your overall probability of success is going to be too small, and it is not going to work. So the
main reason is that you cannot have enough differential paths as good as [inaudible] in
KECCAK. Also, the analysis of the rebound attack is much harder on KECCAK, because
in AES everything is simple…
>>: [inaudible] slowdown [inaudible] cases by using the symmetry of the [inaudible]
[laughter].
>> Thomas Peyrin: You would have saved us a lot of time. There is a symmetry on the
lane: you can rotate each of the differential paths along the lane, so you have 2 to the 6…
>>: [inaudible] edition of the [inaudible] only the 00 [inaudible] right?
>> Thomas Peyrin: We don't…
>>: Nobody [inaudible]
>> Thomas Peyrin: I can see it in different circumstances, but not here. So, also, the analysis
of the rebound is much harder in KECCAK because, for example, the AES S-box is very
uniform: if you have a random difference on the input and the output, you know there is
a probability of about one half that you can match them, and you know that you are going to
have two values if you have a match. If you look at the DDT, the difference
distribution table, for the KECCAK S-box, which is 5x5, so you get the difference on the input
and the difference on the output, it really looks like a complete mess; I
mean, there is no real structure that you can find. Sometimes you are going to have lots of 2s,
lots of possible matches, so the P match is good but the N match is going to be bad, and
sometimes that is not the case and you have a bad P match and a good N match. And the worst
is that if you look at it from the output, then it looks completely random. So it is going to
be really hard, because when you do your rebound you will need to be able to see what
the distribution of the output differences is for each of the S-boxes, so that you know what
the P match will be. Luckily, we found some kind of structure, which is that the distribution
of the output difference probabilities is the same when the input difference weight is
fixed. So if you look at the input differences: if I put just a Hamming weight of one on
the input, you can see I am always going to have eight, eight, eight, eight, four times.
Eight, eight, eight, eight, four times. If I look here: eight, eight, eight, eight, four times.
[inaudible] And it is the same if I take two bits: I am going to have eight times four, et
cetera, et cetera.
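This DDT structure is easy to check with a few lines of code. The sketch below (my own illustration, not the authors' tool) builds the DDT of the 5-bit chi S-box of KECCAK and verifies the counts mentioned above: every weight-1 input difference reaches exactly four output differences with 8 values each, and every weight-2 input difference reaches eight output differences with 4 values each.

```python
from collections import Counter

def chi(x: int) -> int:
    """Keccak chi on one 5-bit row: b[i] = a[i] XOR (NOT a[i+1] AND a[i+2])."""
    a = [(x >> i) & 1 for i in range(5)]
    b = [a[i] ^ ((a[(i + 1) % 5] ^ 1) & a[(i + 2) % 5]) for i in range(5)]
    return sum(bit << i for i, bit in enumerate(b))

# DDT[din][dout] = number of inputs x with chi(x) XOR chi(x XOR din) == dout
DDT = [[0] * 32 for _ in range(32)]
for x in range(32):
    for din in range(32):
        DDT[din][chi(x) ^ chi(x ^ din)] += 1

def row_profile(din: int) -> Counter:
    """Multiset of the nonzero DDT entries for one fixed input difference."""
    return Counter(v for v in DDT[din] if v)

for din in range(1, 32):
    weight = bin(din).count("1")
    if weight == 1:
        assert row_profile(din) == Counter({8: 4})  # four transitions, 8 values each
    elif weight == 2:
        assert row_profile(din) == Counter({4: 8})  # eight transitions, 4 values each
```

Note that this regularity appears only on the input side; as said in the talk, looking at the table from the output (fixed output difference) shows no such structure.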
So if you fix the Hamming weight on the input, then you can find some kind of structure,
and this is going to help us, because then we can do our reasoning not on the distribution
of the input differences, but just on the distribution of the Hamming weights of the
input differences of all of the S-boxes.
>>: [inaudible] shouldn't your [inaudible] solution table [inaudible] Hamming weights
>> Thomas Peyrin: Yes. But I wanted to show it. Yes, actually I could, but not really,
because actually we have to look at the input and the output. And for the output, I mean,
we didn't find any kind of structure; sometimes it really looks completely random.
Because we are also going backward [inaudible], so we also need to look at the DDT
-1, I mean the inverse of the DDT.
So it seems to look very bad, but we still want to try to run the attack on
KECCAK. So this is our roadmap. We have an inbound of one round, so
this is [inaudible] one round. Then a certain number of rounds for the forward and a
certain number of rounds for the backward, and what we want is all of the
backward paths to have a probability better than a certain probability PB, and all of the
forward paths to have a probability better than a certain probability PF. Of course, this
represents the amount of differences we have in here. So if we want to... I mean, the P match
is going to be bad. The P match, to give you an idea, is going to be roughly 2 to the
480 something negative. It is going to be very, very bad. So you need lots of differences coming
from here and lots of differences coming from here to have a match at some
point. So you need this to be big, this to be big, and you need this to be small. This is the
amount of differences you get here, and this is the amount of differences you get here. Of
course, the smaller the sets are, the better you know what differences you have.
You can play with that, but not too much, otherwise your distinguisher is not going to
work at all. Actually, for a reason I don't have the time to
explain here, because the diffusion of [inaudible] is very good compared to theta, the
number of forward paths is going to be very, very small, and mainly we are
going to be playing with the backward paths, not the forward paths.
What we need to do is to know what is going to happen in the inbound. So we are going
to have lots of different paths coming from here, lots of different paths coming from
here, and we need to compute what the probability of a match is. The difficulty of computing
this probability of a match is that we need to be sure that the same set of S-boxes will be
active. If you don't have the same set of S-boxes active, you are never going to have a
match, you can be sure of that. Then, once you are sure that the same set of S-boxes is
active, you can do some reasoning on the Hamming weight distributions
to try to estimate the [inaudible] match, but first you need to handle this. So what we did
is we modeled this as a balls-and-buckets problem, a limited-capacity balls-and-buckets
problem. One bucket represents one S-box.
The capacity is going to be five, because that is the number of bits inside the S-boxes,
and the number of balls is the number of active bits we are going to have. So you have
two instances of balls and buckets, and you throw the balls randomly, depending on the
Hamming weights you know you are going to have backward and forward, and then
you compute the probability that the set of active S-boxes, I mean the non-[inaudible] set, is the
same. You want to compute this probability. So we did this [inaudible] and we
derived what the probability is that the pattern is the same, and also, when
a pattern match happens, what the average number of active S-boxes is that
we have. You don't care about this theorem, but the [inaudible] is the conclusion:
for the Hamming weights we are going to deal with, which are not too
small, we actually remark that it is very likely that the match on the active S-box
pattern is going to happen. So P pattern is going to be very, very high. And
the reason for that is that, almost all of the time, all the S-boxes are going to
be active or almost all active. It is going to be all active, or all minus 1 or minus 2, almost all the time;
this is going to be okay. This is also the reason why [inaudible] does not work here on KECCAK:
whatever you do, very quickly you are going to end up with everything
active.
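This conclusion can be illustrated with a tiny Monte Carlo experiment (my own sketch; the KECCAK-f[1600] geometry of 320 S-boxes of 5 bits is real, but the Hamming weight of 1400 and the trial count are assumptions chosen only for illustration): throw the active bits of two independent states of the same weight and see how often the two active S-box patterns coincide.

```python
import random

BUCKETS, CAP = 320, 5  # 320 S-boxes ("buckets") of 5 bits each ("capacity")

def active_pattern(weight: int, rng: random.Random) -> frozenset:
    """Throw `weight` distinct active bits into the state; return the active S-box set."""
    bits = rng.sample(range(BUCKETS * CAP), weight)
    return frozenset(b // CAP for b in bits)

rng = random.Random(2012)  # fixed seed so the estimate is reproducible
trials, weight = 200, 1400
matches = sum(active_pattern(weight, rng) == active_pattern(weight, rng)
              for _ in range(trials))
match_rate = matches / trials
# For weights this large, almost every S-box is active in both states,
# so the two patterns coincide with high probability, as claimed in the talk.
```

With a weight of 1400 out of 1600 bits, the chance that any given S-box is inactive is about (200/1600)^5, so on average far less than one inactive S-box per state, and the observed match rate is close to 1.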
Sorry, is there something missing? Okay. Now that we know that all of the S-boxes are
going to have the same pattern with good probability, we build the forward paths and the
backward paths. So the forward paths are going to be easy. So here we represent the
inbound; here is the first round, second round, third round; we try to do three
rounds now. Here we represent the number of active bits; here I represent the probability of each
of the rounds of the chi layer; and here is going to be the number of paths that we are
going to have at this moment. So here, by the arrow, I represent the transition that we
choose through chi. So this is actually the core of the best four-round differential
path. You can see that for about three rounds we stay in the CP-kernel, the column parity
kernel, where the diffusion is very, very controlled. This is what we use as the core of the
path. And here a 6-to-6 transition will cost you 2 to the -12, because you get six individual
bits in six individual S-boxes, and a 1-to-1 transition will cost you 2 to the -2. So this is
why you have -12, -12, [inaudible]. Then, as I said, you have not just one differential path
for this, but 2 to the 6 of them, because when you build one path, you can rotate it through
the lane and it is going to be a completely equivalent path; w is equal to 64, so
you have 2 to the 6 equivalent paths.
So this is the core. But this is not enough paths; we have to try to improve it a little bit.
The first thing we can do is improve…
>>: [inaudible] forwards to backwards symmetry, can't you use rotation of symmetry
with respect to left to right, up or down [inaudible]
>> Thomas Peyrin: We did not find any, because the rho and the pi really mix
everything. We didn't find any, actually. So what we can do first is improve a little bit
the probability here. Instead of forcing the 6-to-6 transition on the output, we let the differences
spread randomly, so we don't have to pay any cost in here. And the good thing is we
don't have to pay anything more in here, and the bad thing is we are increasing the
number of possible paths on the output. We do the same in here: in the beginning, we let
the difference spread randomly, but then the cost we have to pay may be a little bit higher,
because the transition we ran backward was the best one, so if we choose another one we
have to pay a little bit more. This is a drawback. But the good thing is that now we have
more paths, also. So from 2 to the 6 we go to 2 to the 25 in here.
Sorry, there is something I didn't say here: in order to simplify the analysis, we now
consider that all S-boxes will be active in the middle. It is not always the case; I said that
with the best probability all the S-boxes, or all minus one, are active. But here we don't care: it makes it so
much simpler to consider all the S-boxes active. So we throw away all of the paths
where not all of the S-boxes are active. This is why, from 2 to the 25, we have to filter a
little bit, and we end up with 2 to the 23.3 paths here for which all of the S-boxes are active.
This is the easy part. Now the hard part is the backward paths. I think I need to go faster;
how much time do I have left? Okay. So for the backward paths, we start here again with the
CP-kernel, but not on three rounds, only on two rounds, because it is not going
to work on three rounds: it is too constrained. So here, in the CP-kernel, we start with eight active
columns with two bits each, so this way we have 16 active bits. And we use the same 1-to-1
transitions, so this gives a 2 to the -32 and a 2 to the -32. And here, starting with eight columns
with structured bits, you can randomize their relative positions and you can also randomize
where you place the two active bits inside each column, and you end up with 2 to the 77.7
distinct differential paths. So this is a good start, but it is not enough.
So what we do again: we relax the condition here on the input, far away from the
inbound, so that we don't have to pay this probability anymore. The drawback is that we
increase the number of possible input differences. Now, we could do the same in
here, 16 to whatever, but it is not going to work. Why is that? Because if you let
whatever happen, the number of differences will explode too much; we will have too
many paths, and the probability will actually be too bad for the backward part. So we need to
make it explode, but just a little bit; we need to control how much we let it spread. So,
okay, we have programs; there are a lot of parameters in here, and I am just giving you the
final numbers. We figured out that letting just 8 S-boxes spread is actually the best
configuration for us. We end up with the same probability, because when you
compute forward, the differential transitions all have the same probability. So whichever
one you choose, it is going to be the same probability, and this is why this -32 is not
changing in here. Since we allow more transitions, we improve the number of differential paths;
we increase the number of differential paths. And then there is a bunch of [inaudible]
modeling in order to see how things behave. Maybe this is the most complex part
of the paper; I am not going to talk about it. But just to tell you that we can actually
bound the probabilities, and we can say that this last part is going to have a probability
at least, I mean better than, 2 to the -418, and the number of paths we have here is going to
be big enough that we can do the matching [inaudible].
>>: [inaudible].
>> Thomas Peyrin: Very, very not tight. The reason for that is that we had two
choices: we could do the reasoning on the average case or on the worst case. And as
much as we could, we tried to do the worst case. Because I really think that if you ran the
attack (you are not going to do it, of course, but if someone had the possibility
to run the attack with these numbers), it would actually find a much better complexity. For
example, in here you can see many things. For example, for the probability of the paths, we
assumed the worst-case probability. We checked the probabilities and took the worst one; actually,
there were lots of paths with better probabilities, and we don't know about all of the
proportions.
>>: [inaudible] upper bound complexity of the [inaudible] but means the estimate so the
[inaudible] complexity.
>> Thomas Peyrin: No. But we claim that our worst-case complexity is our complexity
in the end. I don't know how to actually give a good... to be honest, this is already quite
a lot. In the beginning we wanted to apply the rebound attack on KECCAK and we hoped
it would be simple, and it turned out to be not simple at all. And I don't want the paper to
be more complex, so I don't want to try to…
>>: [inaudible] do you have an estimate without [inaudible].
>> Thomas Peyrin: It is not. I really don't know, because here I really tried to simplify,
many, many times, because when we apply this we have to study what the
distribution of Hamming weights is for this box, and again here we have to make some
bounds, we have to, because we cannot know exactly what the probability is. So
we have to see what the worst-case scenario is when we look at the distribution of
Hamming weights, and then we get the bounds on that worst case. So there are too many
bounds, to be honest. I don't know.
>>: [inaudible] could be much better.
>> Thomas Peyrin: Yeah, I think it is going to be much better. But I don't know how
much. We took the worst case.
>>: [inaudible].
>> Thomas Peyrin: Yeah, maybe just one.
>>: [inaudible] would be much better, 2 to the 100 would also be much better
[inaudible].
>> Thomas Peyrin: We ran the attack on a small version of KECCAK. The problem is that
on the small version you don't have maybe the same... you get a lot of overlapping. So we
verified our model with a small version, just to see if our model predicts well,
but the problem is that, because of the overlapping, you cannot do the same considerations as we
did in here, because it makes no sense: in the smaller versions you have only one or two
slices.
So what you have in the end is this. You have a set of backward paths with a probability
better than this number, and a set of forward paths with a probability better than this number. You know
that the number of differences in here is at least this number, and in here this number.
And the input and output differences are bounded by a certain amount. So in the end, we
measured, because, of course, depending on what kind of differential path is here and
here, the probability of a match is going to change, so we have some kind of computer program
that tries to find the best configuration. After a few days we managed to find the best
configuration, all those numbers. So we really cannot say directly, just in one shot,
that this is the best complexity we can find. We have to look locally for the best
configuration, the best parameters we can have. So for these parameters we have a P
match of 2 to the -491.5, oh sorry, this is a 5 in here. So you can see that if you compute the
cross product of those two sets, you have enough to find a solution, at least a
very good probability of finding a solution for the match. And then the N match in here is
big enough so that you can [inaudible] this and this from 486, and you have a little bit
more, but that is okay.
>>: [inaudible] around [inaudible] why is it only one round?
>> Thomas Peyrin: Because it is already hard for us. I mean, in fact, here it is increasing
slowly in here. What you mean by two inbound rounds is that it would be looking for bigger S-boxes.
So I think I talked about it, I don't know, maybe it was before; sorry, I forgot to talk
about it. But again, this is because of the alignment: when you try to build Super-Sboxes
for KECCAK, it is not going to work at all, because your Super-Sboxes will
completely overlap…
>>: [inaudible] degrees of freedom and it is not just concentrated in one [inaudible]
really need all of your freedom [inaudible].
>> Thomas Peyrin: So how would you... okay. So what we do is actually spend our
degrees of freedom, because, okay, in fact, here, in our case, the rebound attack tool is not used for
degrees of freedom, it is used for looking for differential paths, actually. This is what we are
doing: here we are looking for differential paths. How to use the
degrees of freedom comes at the end: you just find, for each of the S-boxes
independently, a solution, and of course you can try to choose them in a certain
way, with certain structures, so that this will end in a better way. But in our case our tool
is really just there to find differential paths, not to…
>>: [inaudible] fully active state because that [inaudible] the analysis [inaudible] cost a
lot in times of [inaudible] you would be able to [inaudible].
>> Thomas Peyrin: I really do not see how to not make it [inaudible]. What we saw is that
very quickly the Hamming weight will reach a certain weight in here, because you
cannot control it. Otherwise we would have to make it smaller here, which is not going to, is
not going to give…
>>: [inaudible].
>> Thomas Peyrin: No, because then you are going to make... the thing is that the P match of
the active S-box pattern is very good in our case, and it is very good in our case
because the state is almost fully active. If you have just a few active S-boxes, then even if you
have, say, only 20 active S-boxes over the 320 that there are in total, the chance that
you have the same set of 20, because it is going to be completely randomly distributed,
means it is not going to happen at all. Otherwise it would mean some... I mean, I really don't think
it would mean... what we would need is to find a way so that you know that the pattern will not
go to certain S-boxes, so that you don't have to look at all 320 S-boxes but just a subset of
them; but it is randomly distributed on the [inaudible], so otherwise I don't see how.
>>: I understand that maybe about a year ago or so we had a bit of a similar situation
with respect to [inaudible] very complicated [inaudible] in the very beginning [inaudible]
but then later people [inaudible] programs and so on [inaudible] actually found
[inaudible] in the middle and then they suddenly found [inaudible].
>> Thomas Peyrin: So that would be a nice workshop, or working group, to …
>>: [inaudible] impossible?
>> Thomas Peyrin: No. I would never say that something is impossible. I mean, for
now I don't see any way. We didn't look into that so much.
>>: [inaudible] never say that something is impossible.
>> Thomas Peyrin: Yes, I can still say that. Okay. So what we have now is a certain
set of input and output differences. So here, you asked me about how many
differences: well, this is the number of differences we have on the input and this is the
number of differences we have on the output, and if you apply, for example, the limited
birthday, you can see that we are very much lower than the generic complexity, so you
can distinguish seven rounds, because we have three rounds, three rounds, and one round
in the middle. Now, if you add one more round here, you can have eight rounds; you
are going to increase your number of output differences, but that is still okay, it is still
lower than the generic complexity, and eight rounds can be distinguished. But for nine rounds
it is not going to work, because here, since the diffusion of theta inverse is
very good, your input set will explode completely, and it is not going to work: you are
going to be bigger than the generic complexity. So, as it is, you cannot use it
to distinguish nine rounds.
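For reference, the generic bound being compared against here is the limited-birthday complexity. In my notation (a reconstruction, not taken from the slides), for an n-bit permutation with a set of 2^i admissible input differences and 2^o admissible output differences, the best generic algorithm costs about:

```latex
C_{\mathrm{generic}} \;=\; \max\!\left(\sqrt{\frac{2^{n}}{2^{o}}},\ \frac{2^{n}}{2^{i}\cdot 2^{o}}\right)
```

This shows why letting either set explode hurts: adding a round forward grows the output set (still fine for eight rounds), while the strong backward diffusion of theta inverse blows up the input set, and for nine rounds the attack cost lands above this generic bound.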
Our results: so this is a table of the complexities of the distinguishers depending on
the number of rounds and on the size of the permutation. The only rebound
result is the one in here. Again, it takes a lot of time to adapt, I mean to fine-tune, all of
the parameters and details, so we only applied it on the big KECCAK version, and we
verified the model on the small KECCAK versions. But we did not look for the best parameters,
so it can be expected that maybe we can fill the empty boxes in here as well using
the rebound.
Okay, so I think there is some future work. We can use what we have explained here to
look, for example, for collisions or preimages: now it is going to be not a distinguisher, but trying
to find paths without differences on the output, or with differences placed just where the message is
going to be inserted afterwards, so you can erase the differences. So this is just more constraints
on what we did. Also, so far we are trying to attack the KECCAK challenges that have been
proposed very recently. We already found collisions for the one- and two-round
challenges. We expect three rounds to be possible, four rounds maybe, but my
gut feeling is that five rounds will be very hard, and I think the challenges go
up to 12.
Also, we tried to apply the differential distinguishers to the permutation inside the hash
function, so when the IV is fixed: when you force the IV of KECCAK, so far
we have managed to go up to three rounds. When we can choose the IV, and I mean choose,
that is, of course you cannot put differences in the IV, but you can choose your own IV
values somehow to start from, then we show, whatever that means, that we can go up to
five rounds so far.
>>: So do you apply this [inaudible]?
>> Thomas Peyrin: Yes, but then the output is 512 bits maximum, so it is easier for us
to attack the permutation, which is very wide; actually, reducing it is hard for us.
And of course we can also try to apply this kind of framework, because there are a lot
of tools that can be reused, I think, to other hash functions: [inaudible], and for PRESENT it might
be interesting, SPONGENT also, and JH as well. Since it is not that easy to look
for differential paths there, maybe one can see how to apply the rebound. Okay. So this is the end.
Thank you. [applause]
>> Dmitry Khovratovich: We still have a couple of minutes for questions. I think they
asked all of their questions during the talk. Thank you. [applause]
So let me introduce our second speaker. Yu Sasaki is from the NTT Corporation in Japan.
Yu Sasaki is best known for his excellent work on attacks on hash functions, especially
[inaudible], for example, the MD5 [inaudible] first found by our guest and
[inaudible]. And today Yu Sasaki is going to tell us more about AES.
>> Yu Sasaki: Thank you. Okay. My talk is about AES: toward extending integral-based
known-key distinguishers on AES. This talk consists of three parts. The first part
introduces and explains the current status of known-key distinguishers [inaudible].
And the second part is how to extend the number of rounds that can be attacked by the
known-key distinguishers. I hope one of our distinguishers is a very good distinguisher on
round-reduced AES-128, so I would like to hear your opinion. So, this is a briefing about AES;
I think you may not need it, but anyway I will introduce it. AES is a block cipher
designed by Daemen and Rijmen. It is fast and can be implemented very compactly. It
is widely standardized and is also widely used. AES is now supported by Intel's CPUs, so
we can't ignore the existence of AES. The design is very simple and easy to analyze.
[inaudible] the security of AES; thus AES is an interesting academic object.
>>: Does anyone know if AMD [inaudible] going to introduce [inaudible]?
>> Yu Sasaki: The talk is not about the security of the block cipher itself, because block ciphers
are so far [inaudible] through modes of operation, like hash functions or MACs or stream ciphers.
So actually, when we need a hash function in a very resource-constrained
environment, we can implement only the block cipher and build the hash function from the
block cipher with a mode of operation. And in such a constrained environment we don't
need a big data size: if you implement SHA-3, the output length will be 256
bits, or you can choose 512, but in such an environment just 128 bits will work, or even 64
bits is enough. So building a hash function using the 128-bit block cipher AES is possible
and might be suitable.
So I would like to talk about the hash-function security of AES. So PGV hashing modes
are what we use as a way to build a hash function from a block cipher. So this is a
block cipher: here is the key input, here is the [inaudible] state, and the output is the
ciphertext. One of the PGV modes is called the Matyas-Meyer-Oseas
mode. The key input is regarded as the chaining [inaudible], the
message to be hashed is assigned to the plaintext input, and to make the output, the XOR
of the ciphertext and the message is computed. So this feed-forward computation
makes it a one-way function. And the PGV modes have provable security. That is pretty
nice, [inaudible] of the number of cipher calls. As long as the block cipher is an
ideal cipher it has provable security. If it is not, then the provable
security will be broken.
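As a concrete sketch of the Matyas-Meyer-Oseas mode just described: the chaining value enters as the key, the message block as the plaintext, and the new chaining value is the ciphertext XORed with the message block. The block cipher below is a throwaway toy permutation, not AES; it is there only so that the mode itself is runnable.

```python
MASK = (1 << 64) - 1  # toy 64-bit block size

def toy_block_cipher(key: int, plaintext: int, rounds: int = 16) -> int:
    """Illustrative keyed permutation (xor, rotate, add); no security claim."""
    x = plaintext & MASK
    for r in range(rounds):
        x ^= (key + r) & MASK
        x = ((x << 7) | (x >> 57)) & MASK    # rotate left by 7
        x = (x + 0x9E3779B97F4A7C15) & MASK  # add an odd constant
    return x

def mmo_hash(message_blocks, iv: int = 0x0123456789ABCDEF) -> int:
    """Matyas-Meyer-Oseas: h_{i+1} = E_{h_i}(m_i) XOR m_i (chaining value is the key)."""
    h = iv
    for m in message_blocks:
        h = toy_block_cipher(h, m) ^ (m & MASK)
    return h
```

The XOR feed-forward is what makes the compression function one-way: even with the key (the chaining value) public, recovering m from E_h(m) XOR m is not just a decryption.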
So let me briefly introduce the difference between the security in the secret-key model and
in the known-key model. The security in the known-key model is different from the security in
the secret-key setting. Let me consider the security factors in each model. This
is of course a rough statement, but there are two factors in the secret-key model that
secure the [inaudible]. The first is the existence of the secret information: it hides
the original message. And the second one is the mixture of the data in the [inaudible]
process, so that the data is hard to predict from the ciphertext. But in the known-key model
the key value is a public value. If MMO is used, it is just a public constant, so
the only security factor is the mixture of the data. So that is the reason why providing the
same security in the known-key model is quite hard, and we need a different evaluation
framework for this kind of hash-function use of a block cipher.
So known-key distinguishers were proposed by Knudsen and Rijmen in 2007, and this is
a suitable framework for discussing the security when block ciphers are instantiated in
hashing modes. I will introduce the known-key distinguisher, but before that, let me
consider an example of a problematic instantiation in the hashing-mode use. This is a block
cipher, and this is the key, and here the [inaudible] and the ciphertext. So in this illustration,
for a fixed key k, the block cipher Ek is a fixed permutation. What do you think if
the LSB is the identity map for any key? So the function can be constructed, but such an
instantiation with [inaudible]. Interestingly, it is still secure against a key-recovery attack in
the secret-key model, because for any key the [inaudible] LSB is the identity map,
so this does not provide any information about the key. But of course it is weak against
distinguishing attacks, so the cipher is not secure; still, no information is given for the
key-recovery attack. But if you consider the MMO mode and the hashing modes, then
[inaudible] attack, because if you are given the hash value, then you can immediately
know the [inaudible], or all you have to do is [inaudible]. So this situation is a problem for
the block ciphers, and we need a framework to evaluate the block cipher's security
in the hashing modes.
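The problem can be made concrete with a toy version of the pathological cipher just described (my own construction for illustration): a perfectly good permutation for every key, except that the LSB of the plaintext passes through unchanged. In MMO, h = E_k(m) XOR m, so the LSB of every hash output is forced to 0, an immediately detectable flaw in the hashing mode, even though the LSB leaks nothing usable for key recovery.

```python
MASK63 = (1 << 63) - 1

def weak_cipher(key: int, plaintext: int) -> int:
    """A permutation for every key that leaves the LSB as the identity map."""
    top = (plaintext >> 1) & MASK63  # mix only the top 63 bits
    for r in range(8):
        top ^= (key + r) & MASK63
        top = ((top << 5) | (top >> 58)) & MASK63  # rotate the 63-bit part
    return (top << 1) | (plaintext & 1)            # LSB is untouched

def mmo(chaining: int, message: int) -> int:
    """MMO compression with the weak cipher: the output LSB is always 0."""
    return weak_cipher(chaining, message) ^ message
```

A quick check confirms that (mmo(k, m) & 1) == 0 for every key and message, which is exactly the kind of non-random behavior that a hashing-mode security framework has to capture.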
Okay. So this is the rough idea of the known-key model; the explanation is very informal. The
distinguisher aims to detect a certain property of a random instantiation of the block
cipher which cannot be observed for a random permutation. So this should be a known
[inaudible] property. And Minier et al. proposed a formalization of the known-key
distinguisher model in 2009. This formalization covers a part of the previous known-key
distinguishers. But after this publication, other approaches to known-key distinguishers
were proposed, and unfortunately this formalization does not cover the known-key results
published after it.
>>: [inaudible] publication that you are aware of?
>> Yu Sasaki: This formalization?
>>: You say that the first publication was not very good and that people had all kinds of
added formalizations so which one do you consider to be the one that captures the
intuitive notion the best?
>> Yu Sasaki: As far as I know, this is the only attempt to formalize the known-key
distinguishers. So I can't, really. Okay. This slide explains what will happen if a known-key
distinguisher is discovered for a block cipher. Actually, the known-key
distinguisher shows that something undesired will occur in hashing modes. The impact
depends on which property the distinguisher detects. So if the distinguisher can
find collisions in the hashing modes, then it is pretty great, it is very good. But sometimes it
detects only a zero-sum. It still seems to have some impact, but we don't know how to
exploit the zero-sum property yet.
>>: [inaudible] problem I can see is that you are assuming that the key is known but
randomly chosen. Now, in many of the applications you can influence to some extent the
[inaudible], what the previous hash value coming into the key is. So suppose, for
example, you have a bit property that happens whenever the key's significant half is
zero. This is a kind of very local ability, but then your attack is going to be: first try
to make the chaining value zero, which is a difficult task, and then you will be able to utilize
it in order to reduce the complexity when you are trying to [inaudible] this [inaudible]
message. So maybe it is hard to capture everything by working only when you assume the
knowledge of the key, but it is fully [inaudible].
>> Yu Sasaki: Thank you for the comment. That is a very good comment, I think. This
work does not consider that situation at all. Extra knowledge of the key, or maybe
allowing the [inaudible] to choose a key depending on the application, is a good
motivation for future research, yes.
>>: Part of this model is [inaudible] and the [inaudible] were able to choose some
[inaudible] to get some better properties.
>>: So that's your option.
>>: Sort of a second approach [inaudible] don't just know the key [inaudible].
>>: [inaudible] then this is a stronger statement than the chosen [inaudible].
>>: [inaudible] obvious now where you can choose the [inaudible] key into a [inaudible]
key and [inaudible].
>> Yu Sasaki: [laughter].
>>: Choosing the full key in [inaudible].
>>: We are going to choose a key and then the key recovery [inaudible].
[laughter].
>>: The is [inaudible] we are here.
>> Yu Sasaki: Yeah. I will say formalizing the chosen [inaudible] is more of a headache--it
is very difficult to find the…
>>: [inaudible] for example [inaudible] randomizer constants by some random causes
[inaudible] to me in any case we don't choose [inaudible] practical applications of any of
those keys. We just mostly show the structure of the stuff that we can find the [inaudible]
to me the [inaudible].
>>: This will not help against properties that hold with respect to relations in the key. For
each fixed key [inaudible] I don't find any property, but I can show you that if you
[inaudible] with the key then there are strange relationships between the two
computations. This was not captured by any of the previous definitions.
>>: It is very similar to trying to formalize the security of a hash function, because you
don't have a key for your [inaudible].
>>: [inaudible] the famous Supreme Court judge: I will know it when I see it. But
in reality you show me a property, and you look at it and you see how
it is derived, and if you can do the same simple thing for most ciphers and most hash functions
[inaudible] that is a random property; if the key goes into the structure, that is useful, and it
is probably the best that you can do [inaudible].
>>: My example would be considered very [inaudible] I can relate the properties of the
mapping for two different keys.
>>: Right. That would actually be an attack. But the same statistical
property is sometimes not an attack if it is purely random, but if you have the
structural reason why that key behavior would be there, then it is an attack. So the same statistical
deviation might or might not be an attack depending on what your motivation is and how you found it.
That's why you can't formalize it.
>>: [inaudible] try to figure out if it's like [inaudible] or no. [inaudible] parameter in the
S boxes or whatever [inaudible] chosen keys or something [inaudible].
>>: Go back a few slides. You mentioned, no, leave it, you mentioned a constrained
environment and then you mentioned a new [inaudible] opcode. Isn't that like
mentioning a Mercedes-Benz car in a poverty program? The cheapest [inaudible] is
the high-end when you're talking about [inaudible].
>>: Not for long.
>> Yu Sasaki: I did not get what you said.
>>: A constrained environment, which I would think would be 8-bit or 4-bit [inaudible], and then
you mention Intel opcodes.
>>: The [inaudible] are smaller than the [inaudible].
>>: The third [inaudible] is smaller than the [inaudible].
>>: Okay. I believe that. But the [inaudible] is not.
>> Yu Sasaki: There is a risk that motivations [inaudible] maybe some motivations are
conflicting with each other.
>>: Actually I mentioned something which goes part way in this direction, trying to take
fixed mapping and giving it [inaudible] with a problem [inaudible].
>> Yu Sasaki: Okay. Let me continue. Depending on the case, the known-key distinguisher
just detects non-ideal behavior and we [inaudible]. But anyway, detecting non-ideal
behavior is still meaningful because it invalidates the security proof of hashing modes
such as the PGV constructions. This is the only way that I can state the meaning of such
non-ideal behaviors. Actually in the second part of our [inaudible] second part of this
talk, I am pretty sure that there is no way to exploit the [inaudible] property in the
application. But maybe, first, there is nothing to be predicted. That is just [inaudible].
Okay, the AES specification, or something like that. The AES block size is 128 bits and the
key size can be chosen from three [inaudible]. And the round number depends on the
key length. So this part of the talk focuses on this one, AES-128. In the known-key
distinguisher the key is just given [inaudible] variant, so
considering the shortest-round variant is very natural and we focus on this one, the 128-bit version.
So this is the description of the AES round function. So far the [inaudible] is depicted here,
and there is a whitening key [inaudible], and then the SB (SubBytes) function applies an S-box to each
byte. In SR (ShiftRows), each byte of row j is rotated by j bytes, so in the second row this byte is
[inaudible] and so on. The next operation is the [inaudible] operation, a multiplication by a
linear matrix; this is a linear transformation. The number of rounds in AES-128 is 10,
and in the last round [inaudible] so [inaudible] and then the subkey and [inaudible].
Actually, in the attack we need to analyze the key schedule, but because it is
complicated I don't explain how to analyze the inside of the key schedule; it is just a
one-state-to-one-state transformation. Actually, it is a collection of round-to-round
transformations, but in this talk we regard it as a state-to-state transformation. That is
enough. So these are the previous results on AES-128 [inaudible] the secret-key
setting. In the single-key setting there were two approaches, maybe more, but one
example is the impossible differential attack and another example is the partial-sum attack, by
myself. Many papers have been proposed on differential and rebound attacks for this number of rounds. I
think there are four related papers on this topic and it's hard to [inaudible] so I just picked
one. In the related-key setting there is one [inaudible] that exists for [inaudible] AES-128; the
approach is the boomerang attack. A known-key attack [inaudible] was proposed by Thomas,
and in the chosen-key setting eight rounds were attacked with the boomerang approach.
I don't discuss 192 and 256 so much, but full rounds can be attacked in the secret-key model by
using related-key differential or related-key boomerang attacks. And I want to fill
in the blanks on the next slides. So this is brand-new information. The splice-and-cut
analysis proposed by [inaudible] is a single-key attack on AES-128 and a known-key
distinguisher on 10-round AES. [inaudible] AES. And in this talk I directly
discuss nine rounds with the integral approach and even 10 rounds with the integral
approach. And his approximate…
>>: [inaudible].
>> Yu Sasaki: Okay. Referring to is the [inaudible]. So I direct it at your opinion and so
you regard it as very orthogonal. So let me explain the known-key distinguishers on
AES. There are two approaches. One is the integral-based approach. The other is the
differential-based approach, like the rebound attack, and the differential-based approach is
much more popular now and more powerful than the integral-based approach. And I explain the
[inaudible] approach, but I think many of you know it, so maybe we can skip it. But
anyway, this is the well-known integral property in the forward direction.
So this C means the [inaudible] is still a constant. The A means active, so you collect all
possible [inaudible] for this byte. It varies from 0 to 255. And then after this we run the
computations…
>>: Everybody [inaudible] you're supposed to use P for perturbation.
>> Yu Sasaki: [laughter] yeah, it's true. Okay. So in this slide we use the notation A
for active and the notation B for a balanced slot. That means if you take the [inaudible] of all texts
in the set, the result is zero. This is balanced. So after three rounds of computation all
bytes are balanced; this is the property for three-round encryption, and it is
possible to extend it by half a round by activating more bytes. So activate four
bytes, a collection of [inaudible]; then after 4.5 rounds so four [inaudible] would appear
[inaudible] four rounds. But it's difficult to describe, and we only use the property that
all bytes are balanced. Later we use an additional four or so for this byte.
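The active/balanced notation just introduced can be checked with a toy experiment: an active byte position takes all 256 values, any bijective S-box maps that set onto all 256 values again, and the XOR of a full byte set is zero, so an active byte is in particular balanced. A minimal sketch (the random permutation is a stand-in for any S-box, not the AES S-box):

```python
import random
from functools import reduce

random.seed(1)
sbox = list(range(256))
random.shuffle(sbox)  # any bijective S-box on bytes (a stand-in, not the AES S-box)

active = range(256)  # an "active" byte position: all 256 values appear exactly once
after_sbox = [sbox[v] for v in active]  # still active: a bijection permutes the set

# "Balanced" means the XOR-sum over the set is zero; an active byte is balanced.
print(reduce(lambda x, y: x ^ y, after_sbox))  # 0
```

The linear layers preserve this too, which is what lets the property survive several rounds before and after the S-box layers.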
Okay. So [inaudible] in all directions, and if we just start from this state, then after
five rounds you will have an all-balanced state. A similar property can be
detected in the backward direction. So if you start from this state and compute three
rounds of backward computation, then the state will be an all-balanced state. And the integral
attack combines these two properties. So start from the 2^56 texts and
compute 3.5 rounds forward and three rounds backward; then the [inaudible] is a balanced state, and
for the ciphertext you should note the [inaudible] round makes [inaudible]. So 0.5 rounds, the last
round. So in the ciphertext you also have [inaudible] states, so there could be some
property. So this is the complexity of the previous seven-round attack, and the complexity is
2^56 computations with negligible memory. You just
update the sums to find the zero-sum every time you process a text, so you don't need
memory. So this is the impact of the attack: the attack can find a zero-sum in the
MMO hashing mode. A zero-sum is a set of pairs (P, C) such that the XOR of all the P
is zero and the XOR of all the C, the ciphertexts, is also zero. Okay, so this is the previous
[inaudible] attack.
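The zero-sum definition just given is easy to state as a predicate: a set of (P, C) pairs is a zero-sum if the XOR of all the plaintexts is zero and the XOR of all the ciphertexts is zero. A minimal sketch (the three-pair toy set is hypothetical, not data from the attack):

```python
from functools import reduce

def is_zero_sum(pairs):
    # pairs: iterable of (plaintext, ciphertext) integers.
    p_sum = reduce(lambda x, y: x ^ y, (p for p, _ in pairs), 0)
    c_sum = reduce(lambda x, y: x ^ y, (c for _, c in pairs), 0)
    return p_sum == 0 and c_sum == 0

print(is_zero_sum([(1, 2), (3, 4), (2, 6)]))  # True: 1^3^2 == 0 and 2^4^6 == 0
```

Checking the predicate is cheap; the hard part, which the attack supplies, is producing such a set faster than generic search.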
Then the differential-based approach, which is the rebound attack.
Because Thomas explained the rebound attack [inaudible], I only say that it is
[inaudible] and it uses the Super-Sbox technique, so the cipher is divided into an
inbound path and an outbound path, and you choose many values for the inside path and test if the
outside path is also satisfied or not by using the solutions [inaudible]. And
check that the plaintext and the ciphertext have the particular difference. And this is the point I want to
focus on. The difference [inaudible] is very simple for AES without [inaudible],
because this transformation just changes the [inaudible] positions; it is very simple.
But the attack works even with mixed [inaudible] in the last round. In that case the
distinguisher is called the subspace distinguisher. So check that the ciphertext difference is of the
form written by the [inaudible]. So the [inaudible] at the start is any non-zero
difference and [inaudible] 12 bytes [inaudible] are the difference. And this state is transformed
by [inaudible]. And I emphasize that AES operations are used to define the target space.
This is the subspace distinguisher.
>>: [inaudible] distinguisher the only statement is not a particular description of the
subspace. It is just a mention of the subspace. [inaudible] distinguisher works the only
[inaudible] or something? And as soon as the subspace is different [inaudible] in the
subspace [inaudible] this dimension alone [inaudible].
>> Yu Sasaki: So. You didn't do these sorts of things.
>>: Yes. But that is only a step towards a model which is…
>> Yu Sasaki: Oh, I see.
>>: [inaudible] [multiple speakers].
>>: I agree but then the [inaudible] model is stronger because you need this part of the
[inaudible] described [inaudible].
>> Yu Sasaki: Okay. This is a misuse of the time [inaudible]. [inaudible] in an hour we
get it. This is a rough introduction to [inaudible]; it is the formalization of the known-key
distinguisher. So Minier et al propose a formalization called the
non-adaptive chosen-middletext distinguisher. The sketch is like this. There is a
distinguisher, and there is an oracle that implements either a random instantiation of the cipher or two
random functions. The distinguisher doesn't know which is implemented in the oracle, so
the goal is to detect which is implemented. And what the distinguisher can do is form
the middle texts to [inaudible]. Then the oracle returns the results of the two
functions, F1 and F2. And if the oracle is an instantiation of the target cipher, then F1 and F2 are
half decryption and half encryption. Otherwise the functions are
random instantiations, random functions, maybe random [inaudible]. Suitable.
And the distinguisher defines the acceptance region A(n), where n is the number of queries,
in advance. D judges by checking whether the returned results are in A(n) or
not. I will give an example.
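The game can be sketched with a toy one-round target cipher (an assumption for illustration only, echoing the basic-cipher discussion later in the talk, not AES): E_k(x) = x XOR k on bytes, split in the middle, so F1 and F2 each XOR an unknown key byte. The acceptance region here is key-independent: all returned pairs must share one value of P XOR C.

```python
import secrets

def nacma_game(distinguisher, n, real_world):
    # Toy target cipher: F1(m) = m ^ k1 (half decryption), F2(m) = m ^ k2
    # (half encryption) on 8-bit blocks; in the random world F1, F2 are
    # independent random functions. The distinguisher never sees k1, k2.
    if real_world:
        k1, k2 = secrets.randbelow(256), secrets.randbelow(256)
        f1, f2 = (lambda m: m ^ k1), (lambda m: m ^ k2)
    else:
        t1 = [secrets.randbelow(256) for _ in range(256)]
        t2 = [secrets.randbelow(256) for _ in range(256)]
        f1, f2 = (lambda m: t1[m]), (lambda m: t2[m])
    middles = distinguisher.choose_middles(n)    # non-adaptive: fixed up front
    results = [(f1(m), f2(m)) for m in middles]  # oracle returns (P, C) pairs
    return distinguisher.accept(results)

class XorConstantDistinguisher:
    # Key-independent acceptance region A(n): all pairs share one value of P ^ C.
    # For the toy cipher P ^ C = k1 ^ k2 for every query; for random functions
    # the n values coincide only by chance.
    def choose_middles(self, n):
        return list(range(n))

    def accept(self, results):
        return len({p ^ c for p, c in results}) == 1
```

With `real_world=True` the game always accepts; with `real_world=False` and n = 8 queries, acceptance requires seven independent byte collisions and essentially never happens, so the toy cipher is distinguished.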
>>: [inaudible].
>> Yu Sasaki: If the oracle implements random functions, then we don't have the
[inaudible] key. And if it is the target cipher, then the oracle [inaudible] determines the key, but
the attacker does not know the key. It is secret information.
>>: [inaudible] block the B and C sets or…
>> Yu Sasaki: Let me explain this and maybe we can find what the acceptance region is,
I hope. So this is the known [inaudible] attack, and the middle-text [inaudible] is very easy to
[inaudible] form in this style. The acceptance region is defined as the set of P and
the set of C such that the XOR sum of all the plaintexts in the set is zero, and the same for the
ciphertexts.
>>: [inaudible] key from this information and then try to figure out…
>> Yu Sasaki: I think it is possible.
>>: Yes, well, because the middle say [inaudible] three rounds and then [inaudible] try to
recover the key and then check with the…
>>: [inaudible] maybe it is [inaudible].
>>: If you get to inject chosen things in the middle then you can [inaudible] each half of
the ciphertext [inaudible].
>> Yu Sasaki: I could discuss that, but actually my impression is that it is impossible to
analyze the half rounds independently, so some [inaudible] are needed for each side. In the rest
of the talk I will mention that topic, but if you want to recover the key, then the story
sounds a bit different, a bit strange, [inaudible] and I think [inaudible] this
formalization did not consider [inaudible] the key by using [inaudible] to protect the
integral.
>>: [inaudible] seems a predicate for [inaudible].
>>: [inaudible] the key [inaudible] distinguisher [inaudible].
>> Yu Sasaki: [inaudible] key says some statements.
>>: [inaudible] background to say yes, very efficient. So if you enter the plaintexts that
you [inaudible] the five rounds and they say yes, [inaudible] you run the attack. If the
[inaudible] execution key, which you don't know, and they see a [inaudible] you found the
key, then you have the distinguisher and you can even check it…
>> Yu Sasaki: Okay. Obviously that is a problem for us. So in that case like a is 100
[inaudible] can be attacked. [inaudible] text, ciphertext, okay. The NA-CMA does not
say that the position of the middle text…
>>: [inaudible] last time [inaudible].
>>: I'd like to mention that in [inaudible] there is a new cipher coming out called ASC-1
[inaudible] you can take any structure and use four rounds of it and you can [inaudible]
because it is then obligated ciphertext. [inaudible] you get ciphertext. But using
[inaudible] but during our [laughter]. It is really about [inaudible].
>>: You guys know about [inaudible] structure [inaudible].
>> Yu Sasaki: Okay so. My claim is that analyzing the two parts independently will not
detect the weakness of the 10-round AES-128. So some combination of the backward analysis
and the forward analysis is necessary. What we do is like recovering the key in each
direction, but the source of the detection is shared between the directions. Okay. Let me continue.
This is a remark on how to choose the acceptance region. The acceptance region must be
defined independently of the secret key. This was pointed out by the formalization of
Minier et al, because if the acceptance region can depend on the key, then the attack
trivially works. So now the key [inaudible] is given and then the attacker will choose the
[inaudible] key. Because he knows the key he can compute the C, and he can know the
acceptance region as well. So define the acceptance region for one text such that the resulting
plaintext is this key; the resulting plaintext is just like an encryption of the key.
>>: [inaudible] definition [inaudible] in the case of O mentioned [inaudible] assuming
[inaudible] the key [inaudible] clearly it is an attack, but I don't see how to define a key-dependent
acceptance region, because your condition means that the key you get
from the first half and the key you get from the second half are the same. So it is a
trick that fits right into your framework, right? Because you insist on
finding some acceptance region in advance which is not key dependent. I do not
[inaudible].
>>: Maybe you can express the condition that the keys are equal via the functions
[inaudible] the ciphertext.
>>: Yes. But you have to run the…
>>: The problem is defining the acceptance region; the definition could be exponentially
hard. [inaudible] putting restrictions on how hard it could be, so if you just write the set,
although we have, say, [inaudible] and that is the acceptance region, right? Here you say
the acceptance region must be defined independent of the key, but you're right, there is no
key. You have a P and a C [inaudible] acceptance region is dependent on k, so I think you
have some issues with the formulas here.
>> Yu Sasaki: I think my formula is short, an unknown-key distinguisher--I forget which
part. But the goal is to detect a certain property which can be observed in either case. If the
acceptance region is defined as the whole domain then…
>>: I understand that, but when you say, you said you have to define the acceptance
region, I could define the acceptance region as [inaudible] go back a couple of slides, one more,
the set of plaintexts and the set of ciphertexts must [inaudible]. That is a mathematical
set, but…
>>: Probably have to [inaudible].
>>: Yes. A set of plaintexts and a set of ciphertexts such that there exists an AES key that matches
these one-to-one. That is a mathematical set; we can define it. But that is dodging the
question because the…
>>: [inaudible].
>>: I could go there, [inaudible] the acceptance region. [inaudible].
>>: [inaudible] what they want to avoid is to mention the [inaudible] to them the
acceptance region [inaudible] in the description of the [inaudible] as simple as possible.
>>: [inaudible] set it up as a table, a big table. I am a mathematician; I don't care.
>>: The acceptance, there are two things about the acceptance region. First of all,
[inaudible] if you change the key, the acceptance region in this attack, starting in the
middle and then [inaudible], does not change.
>>: It is not changing?
>>: It is not changing. [inaudible] [multiple speakers].
>>: So this answers [inaudible] plaintext by [inaudible] why changing the key changes
immediately the acceptance region, and in your case if you have a huge acceptance
region, which says this [inaudible] went to this and this [inaudible] into this, I know
>>: [inaudible] he didn't say that; he just said that this [inaudible] goes to some of them.
>>: It will have the same size as the formalization.
>>: I'm not saying the set is small [inaudible] it will be small. The description of it…
>>: [inaudible] complexity of the size of the acceptance region. The distance--the
complexity is the distance between the acceptance region and the ciphertexts that you
were discussing in the random implementation…
>>: I see…
>>: Let's say you get to the solution [inaudible] randomization, they did a good job. And
what we are looking for is a [inaudible] that allows us to say [inaudible] rounds of AES is
bad and 20 rounds of AES is good.
>>: How do you explain the distance between the acceptance regions? Let's keep going.
This will take long. Sorry.
>> Yu Sasaki: That's all right. Which computation is allowed is actually not clear. And
that is the reason that I question the [inaudible] of the test results. Actually we can say
that [inaudible] is incorrect in the [inaudible]; actually we don't know what [inaudible]
is allowed. But whether the [inaudible] in the acceptance region will not change
depending on the key is the important factor, I think. I have no idea if that is
enough or not. And the choice of the acceptance region is a very vague, a very difficult
issue. And I don't think the previous formalization is perfect, but we don't know how
to fix it, how to [inaudible] the formalization of the known-key distinguisher.
Okay. I [inaudible] right here, like I described the subspace; it is not a subspace
[inaudible] but it's some [inaudible] and the [inaudible] of the acceptance
region. So the [inaudible] like a differential attack. And the acceptance region for this
problem: from this set there exists at least one of the paired values whose difference follows the
form [inaudible], so this kind of distinguisher can also be described as an acceptance region.
Then remarks.
NA-CMA is the formalization of the known-key distinguisher, but the distinguisher does
not know the key. I think it captures, this is my impression, it captures the random
factor, the value of the given key, as the secret information. And this is an interesting
point of the formalization. And the game works similarly to the classical distinguisher in
the secret-key model, because the distinguisher does not know the key value.
>>: [inaudible] of your definition with respect to making the [inaudible] system larger.
Let us look at the following example: suppose that you have your 10 rounds of AES.
Now let's assume that I am going to increase that to 20 rounds in the following way. I take
the 10 rounds, then I have one more round, and then nine more identical [inaudible].
Now out of the 20, the middle point is going to have 10 rounds of AES on one side and
one round of AES on the other side. And since you are allowed to give me input-output
relations on the second half, it is trivial to find the key from the [inaudible]. So actually
taking something that might be secure by your definition and adding more rounds can
make it less secure, so it is strange behavior, because the middle point is not well-defined
in the [inaudible], so I think just a little bit of real encryption on one side.
>>: I suppose if you do the definition framework you would have a function and then you
actually define that [inaudible] same transform [inaudible] functions in the [inaudible].
>> Yu Sasaki: Let me explain the discussion part first [laughter]. Questions on middle
text [inaudible]. So this is a question for me as well. And the question we had when we did the
analysis [inaudible] results in the secret key can be [inaudible], because all we can analyze
[inaudible] in either direction independently. The second question is what would happen
if we set the [inaudible] position like [inaudible] round in the [inaudible]
round. First I will explain this, and then I think you will find some understanding for
the last question. Let's stop considering AES, and instead let's consider a basic
cipher, because it is very easy: the backward oracle covers one round and the
forward oracle covers one round. So in the one-round encryption the [inaudible] just
[inaudible] state. So of course the definition for [inaudible]. The middle text would
appear in the ciphertext. So can we say anything like: the cipher can be
distinguished? I want to say partially yes and partially no, because then any
number of rounds can be distinguished. So you don't have to fix this part to only nine;
you can fix any part, any finite number of rounds you can choose.
However, the distinguisher never says anything about the number of rounds, for any number
of rounds is attacked in the same way. So my understanding is that it is useless considering
[inaudible], because all you get to know is that one [inaudible] is in there. This is true, and
this is the reason why we say partially yes, because actually the distinguishing target is the
[inaudible]; it is not the [inaudible] function. This is my current understanding:
making such an unbalanced separation is not meaningful. But please remember, this is
discussion, so we don't say our concrete vision is perfect, I mean [inaudible] but the
[inaudible] covers the theory. If you can say it then probably--I reached the conclusion
that it can attack double rounds. This is my conclusion, because some part of the
analysis must be shared. If you can independently analyze one round and independently
analyze another round, then I think, yeah, because as long as the [inaudible] is
independent, then with the same [inaudible] in this setting, what happens then? Any
number of rounds can be attacked. So in that case, yeah, it distinguishes the shorter-round
parts but not their combined part. So concerning that, some part, I don't
understand which factor is the core, but some part of the analysis must be shared.
>>: [inaudible] the needs, because if one part of the ciphertext for another part, then you
just [inaudible] the ciphertext from [inaudible] the first one and then the ciphertext from the
second one and then [inaudible].
>> Yu Sasaki: In that case, any number of rounds could be attacked. It falls apart if you
don't use it. So actually we had the same conclusion once, but it faces the problem
that it does not reflect the weakness in the number of rounds and--this is an example:
the integral attack on AES-128 in the secret setting works for seven or six rounds, but the
known-key one cannot reach 12 rounds.
>>: Why not?
>> Yu Sasaki: If time allows I would like to explain the attack, but, just to say…
>>: [inaudible] to say the you are feeling.
>> Yu Sasaki: I am feeling, sorry. This discussion is rising. All of the discussions is
rising.
>>: I think we just got into a discussion which I also want to participate in, but we might
need to separate it from the talk [inaudible] [laughter].
[inaudible][multiple speakers].
>> Yu Sasaki: Just a moment. Just two sentences more. The rebound attack is
another example. The differential path is shared in the two directions, so the differential
path cannot be applied separately, and so it is still unclear which analysis is efficient and
which analysis is even possible right now.
Let's go into the attack, our attack. So our goal is combining the integral-based
attack and the subspace distinguisher, and the goal is extending the number of
attacked rounds. The approach is to append several rounds at the beginning and the end
of the previous integral property. The first approach is just to take the integral
property from the previous work and try to extend it as much as possible. In this case,
because we append several rounds at the edges anyway, we don't have to stick to starting
at the beginning of a round and ending at the [inaudible]. So in that case this is a three-round,
3.5-round in the previous integral. And it is [inaudible] that we can extend half
a round more in each direction, because if the state is balanced, then for any linear
transformation the result is also balanced. Now we can say 4.5 rounds forward
and 4 rounds backward. Oops. I am sorry. Four rounds forward and 3.5 rounds in
backward, so that structure is [inaudible], and because it ends up here in the middle,
how about appending a half round at the beginning? Because it ends at the AES true
round, how about appending a last round here?
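The half-round extension above rests on the claim that a balanced state stays balanced through any linear transformation, which follows from linearity of XOR: L(x1) ^ … ^ L(xn) = L(x1 ^ … ^ xn) = L(0) = 0. A minimal check with a hypothetical GF(2)-linear byte map built only from shifts, masking, and XOR (an illustration, not an AES operation):

```python
from functools import reduce

def linear_map(x):
    # Built from bit shifts, a mask, and XOR only, so it is linear over GF(2).
    # (A hypothetical example map, not an AES operation.)
    return ((x << 1) ^ (x >> 3)) & 0xFF

balanced = [0x13, 0x7A, 0x69]  # XOR-sum of the set is zero, i.e. balanced
before = reduce(lambda a, b: a ^ b, balanced)
after = reduce(lambda a, b: a ^ b, (linear_map(v) for v in balanced))
print(before, after)  # 0 0: the image of a balanced set is balanced too
```

The same argument covers ShiftRows, MixColumns, and the final key addition, which is why the balanced property survives the appended half rounds.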
So here it covers 0.5+7.5+0.5 rounds, which is nine-round AES-128. Yeah, and the
property, that is almost the program [inaudible]. So this is the structure. So this is a
middle state, and after four rounds you will still have the balanced state. And after 3.5,
you can't be under the 3.5, but with 3.5 you will have the balanced property. And the idea
is that actually we can [inaudible] the key. So if P is encrypted with the right key then the
balanced state will appear here, and the same for the ciphertext with the right key. So if C is
decrypted with the right key, then B will appear. And the two keys have some
relationship through the function. So if the ciphertext is the same…
>>: [inaudible] huge table that I was mentioning before. I will have to wait and see how
you do it but…
>> Yu Sasaki: It is right, right away.
>>: More like if you [inaudible] the keys this is exactly the problem that you had with
finding the acceptance…
>>: No, no no, go to the next slide.
>>: You don't find the, you don't find the key…
>> Yu Sasaki: Okay. So this is the definition of the acceptance region.
>>: There is a key [inaudible] this is an exponentially large set because you have defined
it.
>>: [inaudible] information the key or a formula for it.
>>: [inaudible] for the whole thing.
>> Yu Sasaki: Is that the same?
>>: Yes. You just verify that the first half of the key and the second half are the same, and that
was describing a lookup table in which you are talking about a bunch of plaintexts and a bunch
of ciphertexts, and you have only the cases where there exists a key that can go with the
ciphertext. The only difference is that here you have an explicit formula which avoids
writing down the existing key, because one round is so simple that you can actually
express the key in terms of [inaudible].
>> Yu Sasaki: I see.
>>: It's a little, in the lookup table I don't care if it is easy to find or not. It is a big table
including all of those cases where you have compatible keys.
>> Yu Sasaki: Hmm.
>>: [inaudible] SubBytes, because you can write SubBytes as a polynomial over the
field like this, and you can, we did some work on this back around 2001 or so.
It allowed us to write formulas for five rounds, and we actually did the algebra on a piece of
paper.
>> Yu Sasaki: Uh-huh.
>>: Tell us about your attack model [inaudible] sure the [inaudible].
>>: This demonstrates that for extending this type of [inaudible].
>>: Requires us explicitly to mention in some large…
>>: [inaudible] the number [inaudible] just shows which number in the closure of the
[inaudible] of course that [inaudible] is weaker, but maybe it does make sense to just make a
[inaudible] and filter out all of the descriptions that don't fit and save the [inaudible]
maybe it is actually better to make a kind of hierarchy, and I said…
>>: [inaudible] becomes very mushy and I…
>>: We can even arrange this acceptance region, which is weaker, so this acceptance
region is more complicated than the seven-round one, and that is actually what
[inaudible] on the 10th.
>>: What possible relationship of that [inaudible] be able to say how secure
[inaudible].
>>: In the middle is a property, and a cipher, right? Extensions of the next rounds work for
any cipher whatsoever, right? If you have a property in the middle then you can always
say, well, these last rounds and this first round have a certain relationship. So I think the
extension is kind of plastic [inaudible] and return. The core property is useful.
>>: [inaudible] so no one has actually written down a formal [inaudible] for the general case
that [inaudible] your…
>> Yu Sasaki: No.
>>: If a framework hasn't really [inaudible].
>>: The problems with this [inaudible] framework…
>>: [inaudible].
>>: Yeah, this is pretty much the same problem as we see for related keys [inaudible]: one
cannot propose a definition which would capture the [inaudible] without [inaudible]
the real one, because it is just [inaudible] in some sense unenforceable [inaudible].
>>: [inaudible].
>>: You cannot make the same definition for the [inaudible] because they are even
weaker, because states [inaudible], and maybe it just doesn't make sense to fit it to a single
definition that [inaudible]; we just accept the procedure, we understand which one is
weaker than the other one…
>>: Not totally [inaudible]. [inaudible] acceptance region and another one [inaudible]
and another one is weaker than the other [inaudible].
>>: [inaudible] it's easier to compare then the [inaudible].
>>: [inaudible] distinguisher we were describing two or three years ago. In fact it might
[inaudible] allow [inaudible] accept, or how to describe the [inaudible]. One thing is to
[inaudible] the power to describe the subspace, which is actually a very strong assumption.
And it turns out that for some of the attacks you can actually rewrite some of the steps you
don't have to do. You can actually ask for [inaudible], okay, that the differences are in a
[inaudible] subspace [inaudible]; that is a very, very weak requirement, and I think in
this case the [inaudible] is clear if you only ask for the dimension [inaudible].
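[Editor's note: the subspace check being discussed can be made concrete with a small sketch. This is an illustrative reconstruction, not code from the talk; the bitmask encoding, the `gf2_rank` helper, and the toy 8-bit subspace are assumptions. Asking that observed output differences lie in a fixed linear subspace of GF(2)^n reduces to a rank computation.]

```python
# Sketch: test whether a set of output differences lies in a low-dimensional
# linear subspace of GF(2)^n, the core check behind a subspace distinguisher.
# Vectors are encoded as Python ints (bitmasks); everything here is a toy.

def gf2_rank(vectors):
    """Rank over GF(2), via Gaussian elimination on bitmask-encoded rows."""
    pivots = {}                      # leading-bit position -> reduced row
    for v in vectors:
        while v:
            h = v.bit_length() - 1   # position of the leading 1-bit
            if h not in pivots:
                pivots[h] = v        # new pivot: v is linearly independent
                break
            v ^= pivots[h]           # eliminate the leading bit and retry
    return len(pivots)

def differences_in_subspace(diffs, basis):
    """True iff every difference lies in span(basis): adding them adds no rank."""
    return gf2_rank(list(basis) + list(diffs)) == gf2_rank(list(basis))

# Toy 8-bit example: a 3-dimensional subspace and differences inside it.
basis = [0b10000001, 0b01000010, 0b00100100]
inside = [x ^ y for x in basis for y in basis]        # XORs stay in the span
print(differences_in_subspace(inside, basis))          # True
print(differences_in_subspace([0b00001000], basis))    # False
```

Requiring only that the rank of the observed differences stays below some bound, rather than specifying the subspace in advance, is the weaker dimension-only requirement mentioned above.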
[inaudible][multiple speakers].
>>: Generally compelling acceptance [inaudible].
>>: It is like comparing different distinguishing attacks that each might be the same, so
maybe it makes more sense to consider all of them, just having some hierarchy
[inaudible].
>>: [inaudible] compare the attacks by saying there is the complexity, because the
complexity is higher than the…
>>: It is easier when you can have the same model, but different models have different…
I don't know.
>>: All of the models without a key, without a secret, have this problem: that there is a
formula in [inaudible] back and show you examples where the same property could be
[inaudible] shows that you can't form [inaudible].
>>: Like hash function extensions [inaudible].
>>: [inaudible] form that supports AES…
>>: It doesn't just support AES; it supports AES and it supports all nine flavors of Rijndael,
and it supports at least two [inaudible] from the hash competition, so you try to make
life difficult by using the AES with the 128-bit block size. Seems like a misfit from day one.
>>: That doesn't support any bigger process right now.
>>: [inaudible] supports three keys right now.
>>: [inaudible] has nine.
>>: [inaudible].
>>: It does not.
>>: [inaudible] it helps you.
>>: Yeah, if you [inaudible].
>>: Yeah, if you don't watch for it.
>>: It doesn't matter because you have instructions for one round in one direction,
instructions for one round in the other direction, [inaudible] round and backward [inaudible]
something for [inaudible] duplication.
>>: Inverse duplication [inaudible] instructions.
>>: They have instructions in the AES [inaudible].
>>: And the DS2 is not part of the ASIs [inaudible].
>>: [inaudible] the same. In any case, I think that the problem here is that your
[inaudible] definition is not independent of the cipher, because if you look at the
last line, the X9 equal to the K except for X0…
>>: [inaudible].
>>: You have actually hidden the [inaudible] AES here.
>> Yu Sasaki: Oh, I see.
>>: Up until then you can say this is independent for any block cipher [inaudible]: you
could have taken four of the 8 1/2 rounds and the [inaudible] round, round [inaudible] and
[inaudible] two rounds of S at the end, I mean at the beginning; you could arrange them in
any way that you want, but this is really hiding the AES. This is where…
>>: No he's not hiding AES [inaudible] hiding.
>>: [inaudible].
>> Yu Sasaki: Okay, well. So this is the coolest part, so I don't want to explain the
details over the edges now, but okay, what I originally brought to the talk is: after I get the
result of 2 to the 56 ciphertexts or plaintexts, how to detect whether the results are in the
set or not, and how to [inaudible] we find whether the results are in the acceptance region or
not. Some of my [inaudible] technique described, it is just a small story and maybe we
can… So I think that this is a…
Okay, so anyway, we are not sure which kind of acceptance region is acceptable or
[inaudible], and which method is efficient. Actually, one purpose of this
presentation is asking some questions of the audience, and I am happy to listen
to your opinions. Okay, this is the original conclusion. We extended the known-key
distinguisher on AES-128. We derived a combination of the integral approach and the
subspace distinguisher, or some differential property, I mean. It follows the… forget this
part. And in general, which kind of analysis is efficient in the known-key model is
unclear. That is future work, and yes, that is my talk. Thank you. [applause].
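[Editor's note: the integral ("square") approach mentioned in the conclusion rests on a simple balance property that can be demonstrated on a toy byte-oriented cipher. This is a sketch under assumptions: the affine S-box, 4-byte state, XOR-only mixing layer, and round keys below are illustrative stand-ins, not the real AES. A delta-set in which one byte takes all 256 values keeps a zero XOR-sum in every byte position through key addition, bijective S-boxes, and GF(2)-linear mixing.]

```python
# Toy integral ("square") property: encrypt 256 states where one byte takes
# every value and the rest are constant, then check that the XOR-sum of each
# byte position is zero after two rounds.  S-box, mixing, and keys are toys.

SBOX = [(7 * x + 3) % 256 for x in range(256)]   # any byte bijection works

def sub_bytes(state):
    return [SBOX[b] for b in state]

def mix(state):
    # toy XOR-only (GF(2)-linear) diffusion layer on a 4-byte state
    return [state[0] ^ state[1],
            state[1] ^ state[2],
            state[2] ^ state[3],
            state[3] ^ state[0]]

def round_fn(state, round_key):
    state = [b ^ k for b, k in zip(state, round_key)]   # key addition
    return mix(sub_bytes(state))

key_schedule = [[0x13, 0x57, 0x9B, 0xDF], [0x24, 0x68, 0xAC, 0xE0]]
delta_set = [[a, 0, 0, 0] for a in range(256)]          # one active byte

for rk in key_schedule:                                  # two rounds
    delta_set = [round_fn(s, rk) for s in delta_set]

sums = [0, 0, 0, 0]
for s in delta_set:                                      # XOR-sum per position
    sums = [acc ^ b for acc, b in zip(sums, s)]
print(sums)   # [0, 0, 0, 0]: every byte position is balanced
```

The zero sum holds because a bijection of a byte that ranges over all 256 values still ranges over all 256 values (XOR-sum 0), constants cancel over an even-sized set, and a GF(2)-linear layer maps a set with zero XOR-sum to a set with zero XOR-sum. Detecting where this balance breaks is what distinguishes round-reduced ciphers from random permutations.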
>>: So, it is time, but lunch can be served here, so in theory we could just pick up
the food and go on with the discussion. [laughter].
>> Yu Sasaki: I would just like to start now by presenting these numbers, because they
represent [inaudible] the first two. And we were working with five rounds on these
numbers.
>>: You mean extend the…
>> Yu Sasaki: No, just five rounds, no I mean 12.
>>: You can detect five rounds in the secret-key model with this complexity?
>>: The six round, the original round [inaudible].
>>: Next question [laughter].
>>: Last round we should do would be in the 80s. [inaudible] by the way that I think it
is going to be [inaudible].
>> Yu Sasaki: The end of the approach is in the middle?
>>: No it is just a square.
>> Yu Sasaki: A square, really?
>>: You're three and a half rounds, and then you just keep doing [inaudible].
>> Yu Sasaki: Yeah, actually, if we don't have to consider combining the two, then
occupying one byte is enough. [inaudible] Okay, thank you.