>> Kristin Lauter: Today we're very pleased to have Jung Hee Cheon visiting us. He's
a professor at Seoul National University. Before becoming a professor at Seoul
National University, he was also a post-doc at Brown University with Joe
Silverman, also held positions at several institutes in Korea, ETRI and ICU. He's
also the winner of the best paper award at Asiacrypt in 2008. And today we're
very pleased to have him speaking to us on some of his work on recent progress
on discrete logarithm problems.
>> Jung Hee Cheon: Thank you, Kristin, for inviting me. I'm happy to give a talk on my current line of research. The previous title of my talk was the discrete logarithm problem with auxiliary inputs, but in this talk I tried to introduce broader topics related to the discrete logarithm problem rather than focusing on only that one problem.
And mainly -- can you hear me? Is it okay? Mainly I will introduce three topics. One is the Pollard rho algorithm on the DLP, the discrete logarithm problem; another is the constrained DLP; and the last one is the DLP with auxiliary inputs.
Let me start with the discrete logarithm problem first. Here we let G be a finite cyclic group of order q generated by g. Then, given an element h in G, the discrete logarithm problem over G is to find the integer x, usually the smallest non-negative integer, satisfying g^x = h. Then x is called the discrete logarithm of h to the base g.
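To fix the notation, here is a tiny Python sketch with illustrative toy numbers (p = 607, q = 101, not parameters from the talk): the group is the order-q subgroup of Z_p^*, and brute force stands in for a real algorithm.

```python
# Toy DLP instance: G = <g> is the subgroup of order q inside Z_p^*,
# h = g^x mod p, and the DLP asks for x.  Brute force is for illustration only.
p, q = 607, 101                 # q is a prime divisor of p - 1 = 606
g = pow(3, (p - 1) // q, p)     # an element of order q
x = 45
h = pow(g, x, p)
assert next(i for i in range(q) if pow(g, i, p) == h) == x
```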
This problem is popular in cryptography; it is used for key agreement, encryption, signatures, and lots of protocols.
In particular, finite fields are widely used to implement these DLP-based cryptosystems, including the Digital Signature Standard. Usually they are implemented on a prime-order subgroup of a finite field.
For these problems we usually have two attacks. One is index calculus and the other is Pollard rho. The complexity of the index calculus algorithm is determined by the size of p, the size of the finite field, and the complexity of Pollard rho is determined by the size of the prime order q of the subgroup.
In practice p is usually taken to be 1,024 bits and q to be 160 bits. We have several improvements for Pollard rho, including the distinguished point method, r-adding walks, parallelization, and the use of endomorphisms, but there has been not much progress on the complexity of the original algorithm for several decades.
Let me briefly recall the Pollard rho algorithm. In the Pollard rho algorithm we iterate a function F from the group G to G to create a sequence g_0, g_1, and so on, where g_1 is defined to be F(g_0), g_2 is F(g_1), and so on. Since G is a finite group, we must arrive at some collision: there are smallest integers mu and lambda such that g_{mu+lambda} = g_mu. That is, you start from g_0 and apply F to get g_1, g_2, g_3, g_4, et cetera, and finally some g_{mu+lambda} is equal to g_mu.
Then the rho length is the length of this path, lambda plus mu, and the average rho length is the square root of pi*q/2. Here the expectation is over the choice of the function F: when we average the rho length over all functions F, the expectation becomes this, on the order of the square root of the group order.
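Since the sqrt(pi*q/2) figure is an average over random functions, it is easy to check empirically; the sketch below (parameters are illustrative) samples random functions lazily and measures the rho length.

```python
import random
from math import pi, sqrt

def rho_length(q, rng):
    """Walk a random function on {0,...,q-1} from a random start until the
    first repeated value; the number of distinct points visited is mu + lambda."""
    f, seen = {}, set()
    x = rng.randrange(q)
    while x not in seen:
        seen.add(x)
        if x not in f:
            f[x] = rng.randrange(q)   # sample the random function lazily
        x = f[x]
    return len(seen)

rng = random.Random(1)
q, trials = 10_000, 200
avg = sum(rho_length(q, rng) for _ in range(trials)) / trials
print(avg, sqrt(pi * q / 2))          # the two numbers should be close
```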
To be useful for discrete logarithms, the function F should be exponent traceable and random. That means F can be expressed in a form like this: when you are given an input g^a h^b, where h is the target element, we should be able to know the exponents of the output. For example, if F is the squaring map, the output of this input is g^{2a} h^{2b}, so we know the exponents are 2a and 2b. So when we know the exponents of the input element, we know the exponents of the output element with respect to g and h.
The Pollard rho algorithm starts with g_0 with known exponents and then computes the next element again and again using the exponent traceable function. When a collision is detected, we can find x, because from the equality g^{a_i} h^{b_i} = g^{a_j} h^{b_j} we obtain a linear equation in x, and with high probability b_i is different from b_j.
Pollard suggested a specific function of this kind, and it takes more than the square root of pi*q/2 iterations. A faster iterating function is the r-adding walk, which uses r random points, where r can be 20: in a 20-adding walk we preselect 20 points, and then, given y and depending on y, we choose one of the 20 points and multiply by it to get the next element.
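A sketch of this exponent-traceable r-adding walk might look as follows in Python; the parameters are toy values, and for simplicity every visited point is stored, which is exactly the memory issue addressed next.

```python
import random

def pollard_rho_dlog(g, h, q, p, r=20, seed=0):
    """Solve h = g^x in the order-q subgroup of Z_p^*, using an r-adding walk
    whose steps are exponent traceable.  Every visited point is stored here
    for simplicity, which costs about sqrt(q) memory."""
    rng = random.Random(seed)
    # Preselect r random multipliers M_s = g^(a_s) * h^(b_s) with known exponents.
    mults = []
    for _ in range(r):
        a_s, b_s = rng.randrange(q), rng.randrange(q)
        mults.append((pow(g, a_s, p) * pow(h, b_s, p) % p, a_s, b_s))
    while True:                            # restart if the collision is useless
        a, b = rng.randrange(q), rng.randrange(q)
        y = pow(g, a, p) * pow(h, b, p) % p
        seen = {}
        while y not in seen:
            seen[y] = (a, b)
            m, da, db = mults[y % r]       # index function s(y) = y mod r
            y = y * m % p                  # next point of the walk
            a, b = (a + da) % q, (b + db) % q
        a2, b2 = seen[y]                   # collision: g^a * h^b = g^a2 * h^b2
        if (b - b2) % q:                   # need b != b2 mod q to solve for x
            return (a2 - a) * pow(b - b2, -1, q) % q

# Toy usage: the subgroup of order q = 101 inside Z_607^*.
p, q = 607, 101
g = pow(3, (p - 1) // q, p)
x_secret = 77
h = pow(g, x_secret, p)
assert pollard_rho_dlog(g, h, q, p) == x_secret
```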
Okay. Then, once the collision occurs, we have to detect it. If we store all the previous points, it's easy to detect the collision, but it requires square root of q storage, which is too much. So instead we use special methods. In Floyd's method we compute g_i and g_{2i} for each i, so it doesn't require any storage. In Brent's method they compute g_i for each i but store only g_{2^k}, so they store g_2, g_4, g_8, g_16, et cetera; if there is a collision at some point, the walk keeps colliding again and again, so finally you find a collision with some g_{2^k}. In practice the best method is the distinguished point method. Okay?
A DP is defined to be an element of G satisfying a certain condition which is easy to detect, for example that the 20 MSBs of g_i are equal to 0. It's very easy to check. And we store only these DP points: we follow the random walk, and whenever a point has 20 zeros in its MSBs we store it, and in each step we compare the point with the elements in the DP set.
Then a collision is detected after an additional 1/zeta steps on average, where zeta is the fraction of DPs in G. This is the most convenient and efficient method.
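A minimal sketch of the distinguished point test just described (the 20-MSB criterion and the bit length are illustrative):

```python
def is_distinguished(y, bitlen=1024, zero_bits=20):
    """A point is 'distinguished' if its top `zero_bits` bits are zero, so the
    test is cheap and roughly one point in 2**zero_bits qualifies."""
    return y >> (bitlen - zero_bits) == 0

# Only distinguished points are stored and compared, so a collision between
# two walks is noticed after roughly 2**zero_bits extra steps on average.
```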
Okay. So my interest was how to speed up this Pollard rho algorithm. There has been not much progress on this algorithm for several years, so we can consider three directions. One is to reduce the rho length. How to reduce the rho length? Since it depends on the iterating function, we should use some good iterating function, but it's not easy to beat a random function with a specific iterating function.
Another one is to devise a fast collision detection method, fast detection after the collision has occurred. But currently the DP method provides an almost optimal method, in the sense that the additional cost is almost negligible compared to the walk itself.
And the third one is to speed up each step. If we cannot reduce the number of steps, we can try to speed up each step of the random function. Okay.
So let me introduce how to speed up each step. Recall the r-adding walk with the DP method. In the i-th iteration, given g_i, we determine the index s of g_i. In Pollard's original algorithm the index is one of three values, 0, 1, 2; in a 20-adding walk, s is one of 20 values from 0 to 19. We first compute s and then compute g_{i+1} = g_i times M_s, and if g_{i+1} is a DP, it is stored in the table.
A storing operation is not very frequent, so the question is: is it necessary to compute the full product at every iteration? What we need to know is only the index, and the index is only partial information. It suffices to compute just the index at each iteration until we reach a distinguished point, and we arrive at a distinguished point only once in a million steps, for example. So until a DP we might not need to compute the full product.
So that --
>>: How do you tell [inaudible]?
>> Jung Hee Cheon: Pardon?
>>: How do you tell if it's a distinguished point if you don't compute it?
>> Jung Hee Cheon: Yeah. But even for the distinguished point we need to know only the 20 MSBs, not the full element.
>>: [inaudible].
>> Jung Hee Cheon: Partial computation. For example, we can consider a 1,024-bit finite field here. Instead of computing all 1,024 bits of this element, we just compute 20 bits. Then one in a million elements can be a DP. Okay.
The idea is like this. Let s be an index function from the group to r values, the index. We take a random multiplier set M, say 20 elements when r is 20, and we precompute all products of l or fewer elements from M, where l can be 100: every product of up to 100 of these elements is precomputed and stored. And then we define s-bar(g, m) to be the index of the product g times m. Okay?
Let me start from g_0. Since we know g_0, we compute the index s_0 of g_0 and choose the multiplier M_{s_0}. If we performed the multiplication we would know g_1, but instead of taking the multiplication we compute only this index; we have a method to compute the index without a full multiplication when we are given the two inputs. But we have a problem in the second step. s_2 is the index of g_2, and g_2 is the product g_1 times M_{s_1}; but since we did not compute the element g_1, we cannot apply s-bar to the pair g_1, M_{s_1}. However, g_1 times M_{s_1} is equal to g_0 times M_{s_0} times M_{s_1}, so instead of computing the former we compute the index of g_0 times (M_{s_0} M_{s_1}), and the product M_{s_0} times M_{s_1} is taken from the memory.
So the products of up to l of the randomly chosen r multipliers are precomputed. Using this technique we can follow the random walk for l steps without computing the full multiplications, and after l steps we do one full multiplication to compute g_l and then repeat the procedure. Yes?
>>: But isn't this way more expensive each step because you have to quickly
[inaudible] every time?
>> Jung Hee Cheon: This multiplication is precomputed, so --
>>: Oh, okay.
>> Jung Hee Cheon: Precomputed, because it will be used again and again. If l is 100 and r is four, the size is just several kilobytes. Not so much.
So whether it's efficient or not depends on whether this index computation is fast compared with the full multiplication. Okay.
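As a structural sketch only of this deferred-multiplication walk: the table of products of up to l multipliers is as described, but the cheap, bits-only index computation is replaced here by an ordinary modular product, so the code shows the bookkeeping rather than the speedup; all names and parameters are illustrative.

```python
from itertools import combinations_with_replacement

def precompute_products(mults, l, p):
    """All products of at most l multipliers, keyed by the sorted multiset of
    their indices (the group is abelian, so the order does not matter)."""
    table = {(): 1}
    for k in range(1, l + 1):
        for idx in combinations_with_replacement(range(len(mults)), k):
            table[idx] = table[idx[:-1]] * mults[idx[-1]] % p
    return table

def deferred_walk(y0, mults, table, l, p, r):
    """Advance the r-adding walk l steps from y0, doing only one full group
    multiplication at the end.  In the talk only a few bits of each product
    y0 * table[used] are computed, just enough to read off the index."""
    used = ()                                  # multiset of multiplier indices so far
    for _ in range(l):
        s = (y0 * table[used] % p) % r         # index of the current point
        used = tuple(sorted(used + (s,)))
    return y0 * table[used] % p                # the single full multiplication

# Consistency check against the plain step-by-step walk on a toy group.
p, q, r, l = 607, 101, 4, 5
g = pow(3, (p - 1) // q, p)
mults = [pow(g, e, p) for e in (2, 3, 5, 7)]   # illustrative multipliers
table = precompute_products(mults, l, p)
y = pow(g, 9, p)
plain = y
for _ in range(l):
    plain = plain * mults[plain % r] % p
assert deferred_walk(y, mults, table, l, p, r) == plain
```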
Then the next issue is how to detect the collisions. Rather than computing only a few bits of information, we compute a bit more, usually 10 or 20 bits. More precisely, g_i is defined to be a distinguished point if its index is a fixed value and a certain number of its leading bits are equal to zero. To check this we actually need more information, and whenever there is a chance of some g_i being a DP, we do a full computation. Okay.
So the question is: can we compute some bits of information about a product without the full multiplication? It's a very interesting question. Without the modular reduction it's easy: if you know the MSBs of x and y, you can guess the MSBs of xy, and if you know the LSBs of x and y, you can guess the LSBs of xy without the full product. But since there has to be a modular reduction, we should perform the modular reduction here, which is more complicated; still we can have an efficient method. That is, write x in words of size W, say W = 2^32, so x is the summation of x_i W^i, and similarly for y. When you multiply two t-word integers you get 2t words, and then you take the modular reduction: for example, if you multiply 1,024-bit numbers it becomes about 2,048 bits, and by modular reduction the top 1,024 bits are moved down here and added. So to know the MSBs it's actually enough to know the MSBs of these two parts.
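As a hedged illustration of the first part of that claim, that the most significant bits of a product can be estimated from the most significant bits of the factors alone (the folding step of the modular reduction is not modeled here, and all sizes are illustrative):

```python
import random

rng = random.Random(0)
matches = 0
for _ in range(1000):
    x, y = rng.getrandbits(1024), rng.getrandbits(1024)
    approx = (x >> 992) * (y >> 992)        # product of just the top 32 bits
    approx_top = approx >> (64 - 20)        # estimated top 20 bits of x*y
    exact_top = (x * y) >> (2048 - 20)      # true top 20 bits of x*y
    matches += (approx_top == exact_top)
# Almost every estimate is exact; the rare misses come from carries out of the
# discarded low words, which is why candidates are re-checked with a full
# computation.
print(matches, "of 1000 estimates matched the true top 20 bits")
```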
Let me omit the details. In this sense, if an element has 32 words, you need to compute only about one word, so we expect a 32-times speedup, but with some miscellaneous computations. In practice this index computation is about 15 times faster than the full multiplication, and when we apply it to Pollard rho it is at least 10 times faster than the original algorithm.
So as a summary, for a 1,000-bit finite field our implementation gives more than a 10-times speedup; asymptotically it is a log p times speedup in finite fields. Actually my goal was to apply this technique to elliptic curves, but the application to elliptic curves requires computing some bits of x/y mod p without taking the full modular division, and I couldn't do that. We need to use the Euclidean algorithm for computing x/y mod p, and I don't know how to compute some bits of this information without computing the full element.
For example, whether this element is a quadratic residue or not would be one bit of information. But even determining whether this element is a quadratic residue or not is hard when x and y are a bit complicated, like algebraic expressions involving additions. Yes?
>>: That little x and that little y, are those coordinates of --
>> Jung Hee Cheon: Okay. For example, yeah, the more precise question is like this: given two points, the x coordinate of the result is something like this, and we want to determine whether it is a quadratic residue or not by a method that is more efficient than computing the whole element. I spent some time, but I was not successful, and one of the comments is that when I do these computations, the best way to get some partial information is to use [inaudible] reductions, and that gives a good way to compute some bits without the full computation. But I couldn't extend it to divisions; division is too hard compared with multiplication. I learned this lesson. Okay.
>>: Is the reason that you wanted the division because of the distinguished -- what's the analog of the distinguished point method? It's that you want to check if something's a quadratic residue, so --
>> Jung Hee Cheon: Yes, for the quadratic residue we follow the random walk -- the simplest random walk is like a two-direction random walk.
>>: Oh, so I see. So your X over Y there, the X is the numerator, the Y two
minus [inaudible].
>> Jung Hee Cheon: Yes, yes. Sorry. So if we had such a method, we could speed up the algorithm on elliptic curves; that's one of my goals. Any questions?
Yes?
>>: [inaudible].
>> Jung Hee Cheon: Not exactly, because we implemented 4-adding walks, and we don't care about the order: the table contains, how to say, the products of up to l elements taken from M, and that means we choose l elements without order, so the number of entries is much less than the ordered count. I can give you the number after this talk, yeah. Any question? Any other questions? Okay.
Next let's go to the next topic, the constrained DLP. Let's consider a DLP on a group G of prime order p. The weight of an integer is the Hamming weight of its binary representation; so, for example, the weight of this one is 3. The Hamming weight t DLP is like this: given y and g, compute x such that y = g^x, where the weight of the exponent x is t. In a group with a fast endomorphism, for example over binary fields or certain elliptic curves, squaring or the Frobenius map is fast, so the cost of exponentiation depends mainly on the weight of the exponent. In binary fields the relevant weight is the Hamming weight of the binary representation; for such elliptic curves it is the Hamming weight of the corresponding representation.
Okay. So an exponent of low Hamming weight has the advantage that the exponentiation is faster than with a randomly chosen x, and there have been several proposals using this low Hamming weight DLP. The question is: does it have the same strength, the same complexity, as the ordinary DLP, and can we apply baby-step giant-step efficiently?
Let X be the set of integers less than q -- oh, sorry, it should be of order p -- with Hamming weight t. The question is to find two sets A and B, each of size about the square root of the size of X, such that X is covered by A minus B modulo p, or by A over B. Then, if an element x of X splits as x = a minus b and h = g^x is given, we check h g^b = g^a over the pairs; the complexity depends on the sizes of A and B, and it is additive rather than multiplicative, about the size of A plus the size of B. It's similar when x is represented as a ratio a/b: we put b here, so h^b is equal to g^a. So we can apply baby-step giant-step here.
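As a sketch of the splitting idea in its crudest form, the toy code below splits the bit positions into a low half and a high half and meets in the middle; the actual Coppersmith/Stinson splitting systems mentioned next do this more cleverly and with balanced set sizes, and the parameters here are illustrative.

```python
from itertools import combinations

def low_weight_dlog(g, h, p, n, t):
    """Find x < 2**n of Hamming weight t with h = g^x (mod p); assumes the
    order of g is at least 2**n.  Exponents are split by bit position: the
    table holds the 'low half' candidates, the loop scans the 'high half'."""
    half = n // 2
    table = {}
    for w in range(t + 1):                              # baby side: low positions
        for pos in combinations(range(half), w):
            a = sum(1 << i for i in pos)
            table[pow(g, a, p)] = a
    g_inv = pow(g, -1, p)
    for w in range(t + 1):                              # giant side: high positions
        for pos in combinations(range(half, n), w):
            b = sum(1 << i for i in pos)
            target = h * pow(g_inv, b, p) % p           # is h * g^(-b) some g^a ?
            if target in table:
                return table[target] + b
    return None

# Toy check: p = 65537 and g = 3 has order 65536 = 2**16, so 16-bit exponents
# are recovered exactly.
p, g, n, t = 65537, 3, 16, 4
x = 0b1000100000100001                                  # weight-4 secret
h = pow(g, x, p)
assert low_weight_dlog(g, h, p, n, t) == x
```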
So the question is how to find A and B of that size. The answer was given by Coppersmith and later Stinson; it's called a splitting system.
This problem was generalized to the low Hamming weight product. In that case they use a product of two exponents of low Hamming weight rather than one. It was proposed by Hoffstein and Silverman, and it provides very efficient exponentiation. Then how do we split this? If x is equal to x_1 times x_2, it's very easy to split like this. So to get a harder DLP, we should have X_2 much larger than X_1; it's not the balanced case. Here X_i is the set of positive integers of Hamming weight t_i, and x_1 is taken from the set X_1 and x_2 is taken from the set X_2.
In this case we need a more flexible splitting system: you split X into A and B, but one side is larger than before and the other is smaller. So we worked on this; it's called parameterized splitting systems. We found a more efficient attack for these exponents than previously suggested, but we still do not know the optimal complexity, so there are several questions here. We do not have a memory-efficient algorithm for this problem; we have only baby-step giant-step style algorithms and no rho-style algorithm for this one.
To use these exponents in some application, we need a lower bound on the complexity. The first step of this is to know the cardinality of the product set, but even for this we don't have any information: there could be several repetitions among the products x_1 x_2 modulo p. So a lower bound on the cardinality of this set would be an interesting problem. If we have a good bound for this, we may use these exponents for some lightweight devices. In 1977, Erdős and Newman suggested a question like this: is there any set S whose generic DLP complexity is greater than the square root of the size of S? It holds for a random set S, but an explicit answer was given by Mironov, Mityagin and Nissim in 2006, who gave a construction of such a set S.
This is another line of research related to DLP. Any questions? I'm making
everybody sleepy.
>>: It's just [inaudible] very recently all three of those authors were all Microsoft
employees.
>> Jung Hee Cheon: All?
>>: Mironov, that's Illya, and Anton Mityagin and Kobbi Nissim was in this room.
>> Jung Hee Cheon: Yeah. We expected more [inaudible] about these. The reason I'm interested in this study is that I actually want to design some lightweight protocol that can be used to identify some sensors, and this low Hamming weight approach is a good candidate. But the problem, as I said, is that there is no bound, no ground, for the complexity of this problem. We don't have a matching attack, and my result is just an improvement of the attack suggested by the [inaudible].
So research on this could lead to an efficient cryptographic protocol that can be used in some lightweight devices. Okay. Let me
move to the third one. We know RSA problems and we know DLP, but usually the security of a cryptosystem does not depend on DLP itself; rather, it depends on the computational Diffie-Hellman problem or related problems. And nowadays we are relaxing the problems more and more. Why? To design new systems with additional properties, like ID-based encryption or attribute-based encryption, et cetera, or to prove security without random oracles, we need relaxed assumptions.
Okay. How to relax? We have two approaches. Let me illustrate it this way: how do you get a good grade in an examination? There are two ways. One is that the grading is flexible, and the other is to be given more hints before the test.
Then we relax the problems by flexible grading. In the RSA problem, given a composite N and a message M, the solution should be M to the 1/3, say, for a fixed exponent, right? But in the flexible version a solution for any exponent e is okay. That flexible version is the strong RSA problem.
And we have variants of DLP with flexible grading. For example, the LRSW problem was used for anonymous credentials; there we have flexibility in H. If H is fixed to g it's essentially the original problem, but here they are allowed to use any H. The other direction is to give more hints. When you are given g and g^alpha, if you can compute g^{1/alpha}, it's called the DH inversion problem; but instead we give more hints, like g^alpha, g^{alpha^2}, up to g^{alpha^l}, and this is called the l-weak DH inversion problem. We have more variants, like this one: without the extra elements it is the same as the DH exponent problem, but we provide l elements and then ask to compute g^{alpha^{l+1}}.
These were used to design short signatures without random oracles and then short group signatures. We can also apply these variants to pairing settings: given bilinear maps, we can ask to compute this quantity, and they are used for identity-based encryption, verifiable random functions, hierarchical ID-based encryption with constant-size ciphertext, public-key broadcast encryption, et cetera.
These problems are very popular, and based on these problems many protocols have been proposed. All of them have auxiliary inputs of this kind, with d powers given. The natural question is: are they as hard as DL? In the generic model, one can prove a complexity bound for what let me call the DL problem with auxiliary inputs, or sometimes strong DL. Actually it's not a stronger problem than DL, but a stronger assumption; to avoid confusion let me just call it strong DL.
The complexity is lower bounded by about the square root of p/d group operations when d is not too large, whereas it is the square root of p for the ordinary DL. So the natural question is whether this bound is optimal in the generic model: that is, can we devise an attack whose complexity matches this lower bound? Okay.
>>: So whose [inaudible] was that [inaudible].
>> Jung Hee Cheon: Oh, it's in the first paper by Boneh et al., when they proposed these problems, and later, whenever people proposed some new problem, it became mandatory to prove the generic complexity in the appendix.
>>: [inaudible] the generic.
>> Jung Hee Cheon: Yes, lower bound.
>>: In the generic.
>> Jung Hee Cheon: In generic model, yes. In appendix. [laughter].
>> Jung Hee Cheon: Okay. For this algorithm we use baby-step giant-step techniques. Given h = g^alpha, we split alpha into a u part and a v part, alpha = u + Mv, where M is about the square root of p -- I cannot see it, maybe it's up there. Okay. So the equation becomes this, and then we construct a lookup table for the left-hand side and sort it by the first component, and then compute the right-hand side for each v and compare it with the first entries in the table. It requires square root of p storage and computation.
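A sketch of this plain baby-step giant-step routine, with a hash table standing in for the sorted table and toy numbers again:

```python
from math import isqrt

def bsgs(g, h, p, q):
    """Find x < q with g^x = h (mod p) using about sqrt(q) multiplications and
    sqrt(q) stored baby steps; x is split as x = u + M*v with M ~ sqrt(q)."""
    M = isqrt(q) + 1
    baby = {pow(g, u, p): u for u in range(M)}   # table of g^u for the u part
    giant = pow(g, -M, p)                        # g^(-M)
    y = h
    for v in range(M + 1):
        if y in baby:                            # h * g^(-M*v) == g^u
            return (baby[y] + M * v) % q
        y = y * giant % p
    return None

p, q = 607, 101
g = pow(3, (p - 1) // q, p)
assert bsgs(g, pow(g, 90, p), p, q) == 90
```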
And we apply this technique to the strong DL, but only for the exponent. The point here is that we work on the exponent rather than on the group itself; we apply the baby-step giant-step technique to the exponent. That is, we are given g, g^alpha and g^{alpha^d}, even though we do not know alpha. Let zeta be a generator of Z_p^*, and let zeta-hat be defined as zeta^d. Then alpha^d has order dividing (p-1)/d and thus lies in the subgroup generated by zeta-hat.
We compute k such that alpha^d is equal to zeta-hat^k. If we can find k, then we can find alpha. But it looks strange, because we do not know alpha^d; we know g^{alpha^d}, but not alpha^d itself. So how do we check this equality? We can check it by raising to a power like this: we know g^{alpha^d}, and since we know zeta-hat^k we can compute g^{zeta-hat^k}. We check this equality, and when we have a collision, then alpha^d should be this quantity. That is, rather than working in the group G, we work in the exponent group Z_p, but the equality is only checked in the base group.
Here we apply the baby-step giant-step technique, and to speed it up, rather than checking it in that form, we check this equivalent form: since the costs of the two sides are different, balancing them makes the split between baby steps and giant steps skewed. Okay, that is the idea and the algorithm itself. Once we have computed alpha^d, using similar techniques we can recover alpha again. Okay. But this attack works only when d is a divisor of p minus one; otherwise, if d is relatively prime to p minus one, alpha^d still has order p minus one, so it does not reduce the complexity. So d must be a divisor of p minus one.
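A toy sketch of this p minus one case, with both baby-step giant-step phases carried out in the exponent as just described, might look as follows; the modulus, the order, the divisor, and the secret below are illustrative values, and for brevity the generator of Z_p^* is found by brute force. Here, following this part of the talk, p is the prime order of the group and q is the ambient field modulus.

```python
from math import isqrt

def prime_factors(n):
    fs, f = set(), 2
    while f * f <= n:
        while n % f == 0:
            fs.add(f)
            n //= f
        f += 1
    if n > 1:
        fs.add(n)
    return fs

def strong_dl_p_minus_1(g, g_a, g_ad, p, q, d):
    """Recover alpha from g, g^alpha, g^(alpha^d) in a subgroup of Z_q^* of
    prime order p, when d divides p - 1.  Toy sketch of the attack described
    above; the cost is about sqrt((p-1)/d) + sqrt(d) steps."""
    # A generator zeta of Z_p^* (p is tiny here, so search directly).
    zeta = next(z for z in range(2, p)
                if all(pow(z, (p - 1) // r, p) != 1 for r in prime_factors(p - 1)))
    # Phase 1: alpha^d lies in <zeta_d>, zeta_d = zeta^d of order (p-1)/d.
    # Find k with alpha^d = zeta_d^k; the equality alpha^d * zeta_d^(-M*v) =
    # zeta_d^u is only ever tested after raising g to both sides.
    n1, zeta_d = (p - 1) // d, pow(zeta, d, p)
    M = isqrt(n1) + 1
    baby = {pow(g, pow(zeta_d, u, p), q): u for u in range(M)}
    shrink = pow(zeta_d, -M, p)
    k = None
    for v in range(M + 1):
        y = pow(g_ad, pow(shrink, v, p), q)   # g^(alpha^d * zeta_d^(-M*v))
        if y in baby:
            k = baby[y] + M * v
            break
    # Phase 2: alpha = zeta^k * eta^j for some j < d, with eta = zeta^((p-1)/d).
    eta = pow(zeta, n1, p)
    M2 = isqrt(d) + 1
    base = pow(zeta, k, p)
    baby2 = {pow(g, base * pow(eta, u, p) % p, q): u for u in range(M2)}
    shrink2 = pow(eta, -M2, p)
    for v in range(M2 + 1):
        y = pow(g_a, pow(shrink2, v, p), q)   # g^(alpha * eta^(-M2*v))
        if y in baby2:
            return base * pow(eta, baby2[y] + M2 * v, p) % p
    return None

# Toy parameters: the group of prime order p = 101 inside Z_607^*, d = 10 | p-1.
p, q, d = 101, 607, 10
g = pow(3, (q - 1) // p, q)                   # an element of order p in Z_q^*
alpha = 58
g_a, g_ad = pow(g, alpha, q), pow(g, pow(alpha, d, p), q)
assert strong_dl_p_minus_1(g, g_a, g_ad, p, q, d) == alpha
```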
Then what if p minus one has no small divisor d? In that case we can use p plus one, or more generally p^n minus one, and the idea is essentially the same; we just have to use more mathematical notation. Let H be the subgroup of order p plus one of the multiplicative group of the finite field GF(p^2), where we can view GF(p^2) as a vector space of dimension 2 over GF(p), two copies of Z_p. Then we embed alpha into an element alpha-bar of this smaller group H. Since H has order p plus one, alpha-bar to the d has order (p plus one)/d; that's the point. The map just embeds alpha into an element alpha-bar of H, and if you raise it to the power d, where d divides p plus one, alpha-bar to the d has order (p plus one)/d. Then we use the same baby-step giant-step technique to recover alpha in about the square root of 2(p+1)/d steps.
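The slide does not spell out the embedding itself, so the sketch below uses one standard choice as an assumption: represent GF(p^2) as F_p[t]/(t^2 - c) for a non-residue c and map alpha to (alpha + t)^(p-1), which lands in the order-(p+1) subgroup H. The parameters are toy values.

```python
def gf2_mul(a, b, p, c):
    """Multiply a = (a0 + a1*t) and b = (b0 + b1*t) in F_p[t]/(t^2 - c)."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 + c * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def gf2_pow(a, e, p, c):
    result, base = (1, 0), a           # (1, 0) is the multiplicative identity
    while e:
        if e & 1:
            result = gf2_mul(result, base, p, c)
        base = gf2_mul(base, base, p, c)
        e >>= 1
    return result

p, c = 103, 5                          # c must be a quadratic non-residue mod p
assert pow(c, (p - 1) // 2, p) == p - 1
alpha = 42
beta = gf2_pow((alpha, 1), p - 1, p, c)      # the embedded element alpha-bar
assert gf2_pow(beta, p + 1, p, c) == (1, 0)  # its order divides p + 1
```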
To apply this, p minus one or p plus one should have small divisors. When I checked the known parameters, p minus one had enough small divisors for this attack. For example, in the BGW broadcast encryption they use this elliptic curve, and by the Pollard rho algorithm the complexity is 2 to the 76 elliptic curve operations, but it is reduced to 2 to the 59 exponentiations when N is 2 to the 32; that is, if they use a billion users it is reduced to this size. Okay.
>>: All billion users have to collude to break the system?
>> Jung Hee Cheon: No, no collusion. To be usable by N users, they have to publish N elements. And then the users, they precompute some -- okay. For the future they usually take a large N; that's why they consider these unbelievable numbers.
>>: So that N is the same N in users.
>> Jung Hee Cheon: N users, yes.
>>: The size of theta, the small factor is like two to the 17 or something?
>> Jung Hee Cheon: Two to the 76.
>>: What's the size of the divisor, the divisor is the group order is like --
>> Jung Hee Cheon: Divisor of the group --
>>: 17 --
>> Jung Hee Cheon: The group order is 151 bits. The group order is p. I didn't write down p minus one, but p minus one has enough small divisors; we can choose as we want. And, yeah.
>>: Can you go back two slides, please. So -- maybe -- sorry, can you go back. Okay.
>>: Right?
>> Jung Hee Cheon: Huh?
>>: [inaudible] P and Gs are generated [inaudible].
>> Jung Hee Cheon: Yeah.
>>: What happens in prime order group?
>> Jung Hee Cheon: Prime order group?
>>: Yes.
>> Jung Hee Cheon: It works in a prime order group, a group of prime order p.
>>: Right.
>> Jung Hee Cheon: But P minus one is not prime.
>>: Right. But what happens in a prime order group?
>> Jung Hee Cheon: I'm working on prime order group.
>>: Right. The order of G is p minus one?
>> Jung Hee Cheon: No, P.
>>: Oh, okay.
>> Jung Hee Cheon: P. What we consider is p minus one.
>>: I see.
>> Jung Hee Cheon: Here -- I think we'd better have some good name for this. It is something like a double exponent: we have the group G, alpha belongs to Z_p, and the exponent of alpha belongs to Z_{p-1}. Yeah.
>>: So [inaudible] actually in the exponent?
>> Jung Hee Cheon: Yes.
>>: Okay. So I mean I was wondering I mean do you really need to know the
order? I mean, does your method work for instance in a group of unknown
order?
>> Jung Hee Cheon: Unknown order?
>>: Yeah. I mean, what if you have only an approximation of the order, the size of the order? I mean, do you really have to know precisely the order of the group, or can you adapt your method to work in a group of unknown order?
>> Jung Hee Cheon: It's a very good question. That's very interesting, but let me have some time and then I can answer it. Yeah, that would be a very interesting question. Okay.
So as a summary, when you are given d plus one elements, if d divides p minus one, the complexity is reduced by a factor of the square root of d, and similarly when d divides p plus one. So in these cases we almost match the maximal generic complexity. But if we have a prime p such that both p minus one and p plus one are almost prime, then we cannot find an appropriate divisor d, and in that case it's still safe.
And the proportion of such primes is quite large; I mean, there are more of them than the susceptible primes, so we can generate these too. We have more room.
>>: So are you saying what's the percentage -- do you know the percentage of
the time that those two are [inaudible] almost prime?
>> Jung Hee Cheon: It's non-negligible, but it's very small.
Okay.
>>: [inaudible] one over log squared p, if you assume it behaves like a random process, which it isn't.
>>: That makes it a lot harder to generate suitable curves. If you have to satisfy both those conditions, it makes it a lot harder to generate such curves, because your flexibility [inaudible].
>> Jung Hee Cheon: Yes, it makes it harder, but it's still possible to generate elliptic curve parameters that are resistant against these attacks. That means that even though you use these stronger assumptions, you don't need to increase the key size: you can take a good prime p that is resistant against this attack. Otherwise it would be better to avoid these assumptions at the same key size -- I mean, we would need to increase the key size when our assumption is the strong DH assumption rather than just the DH assumption.
Okay. But my interest is to generalize this attack. I have been trying to generalize it, but so far the complexity is not lower than the previous one. And time's over, so let me show just this slide. I found some generalized algorithm, but its complexity is still under review. We embed alpha into GF(p^n) rather than into GF(p^2). Okay. Then let p^n minus one be d times e and let zeta be a generator. Then H will be the subgroup of order d, where d is small, smaller than p, generated by zeta-hat, and zeta-hat is defined to be zeta to the e.
Then we embed alpha to an element alpha-bar: we take a linear expression in alpha with some random element r and raise it to a power. The point is that this is an element of order d; H is the subgroup of order d, and our objective is to embed alpha into an order-d element, an order-d group. Then, using the similar method, the complexity becomes the square root of d. Since d is less than p, the square root of d is less than the square root of p, so it becomes more efficient than the usual attack.
So we have this embedding if r is relatively prime to d. Let me omit the details.
This embedding is successful, but there is a problem: since alpha-bar is a power of a linear function of alpha, it is a polynomial in alpha of degree e, and e is too large -- e can be larger than p. So we should be able to make this degree small. Here we use some technique: consider the p-adic representation of e, so e is written as a summation of e_i times p to the i. Since raising to the p-th power is cheap in this field, the p^i parts can be absorbed, and finally this gives a rational function of alpha of smaller degree than e, determined by the p-adic representation of e.
So using this, if the sum of signed digits of e is small, this is successful. What is the sum of digits? Given e, we expand it in base p, and if all the coefficients are positive, the sum of digits is just the sum of these coefficients. But for the sum of signed digits we allow negative coefficients: the positive coefficients determine the degree of the numerator and the negative coefficients determine the degree of the denominator.
So if the sum of signed digits is small, this is a rational function of smaller degree.
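For concreteness, here is a small helper computing a signed (balanced) base-p digit expansion and the two digit sums just described; the numbers are illustrative.

```python
def signed_digits(e, p):
    """Balanced base-p digits of e, each in (-p/2, p/2]."""
    digits = []
    while e:
        r = e % p
        if r > p // 2:
            r -= p
        digits.append(r)
        e = (e - r) // p
    return digits

digits = signed_digits(123456789, 97)
numerator_degree = sum(d for d in digits if d > 0)      # sum of positive digits
denominator_degree = -sum(d for d in digits if d < 0)   # sum of |negative| digits
assert sum(d * 97**i for i, d in enumerate(digits)) == 123456789
```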
So using similar techniques we have an algorithm for strong DL: we are given the prime p and the given elements; find an appropriate divisor d of p^n minus one for some n -- we try n equal to one, two, three, four -- and if we find an appropriate divisor d, then we next try to find r such that the sum of digits is small.
In this case we can use [inaudible] reduction, and then we apply the algorithm to recover alpha, and the complexity depends on this.
As a further study, I'm still working on how to find r such that the sum of digits is small. I know how to find r with a small digit sum, but I need to impose the other condition as well. Another question is how to check whether some prime is resistant against this attack, because there are many attacks here and it's not easy to guarantee that a prime is strong enough.
When we use strong DL assumptions, the designer wants to claim that this prime is good enough and hence that the scheme is secure, but in this case it's not easy to say. We can only use the lower bound on the generic complexity; other than that we don't have any ground. Another question is whether, given these strong DL elements, the complexity is always reduced like this for the given d -- that is, whether we have matching attacks for an arbitrary prime p. Also, these days we have many flexible assumptions, many new assumptions; I think we have more of them these days, so we rely on these assumptions, and we need to investigate the security, the complexity, of these assumptions. And then we may consider embedding into elliptic curves. It's related, but we still have some problems with embedding into elliptic curves. If we succeed in this, we can say that strong DL has its complexity reduced by the square root of d from the ordinary DL, but it is not successful yet.
Okay. That's it. Thank you for your attention.
[applause].
>> Jung Hee Cheon: Yeah?
>>: So I'm so curious. I thought you showed a -- that giant step in the exponent.
So what happens if you use a group of order like a prime P that is written as 2Q
plus one where Q is also prime?
>> Jung Hee Cheon: Yeah.
>>: What happens then if I change the assumption so that I use only quadratic residues in the exponent, so that there is no divisor, because, you know, the --
>> Jung Hee Cheon: Yeah. If p is, for example --
>>: 2q plus one, yeah.
>> Jung Hee Cheon: Then it resists my p minus one attack, okay? But it can be susceptible to the p plus one attack. So p minus one and p plus one should simultaneously be of this form, or you can allow some larger cofactor rather than two. And some people have generated such primes.
>>: So it wouldn't be -- would it resist --
>> Jung Hee Cheon: The attack. But if the [inaudible] is successful, then --
>>: [inaudible].
>> Jung Hee Cheon: [inaudible].
>>: But every extra condition you put on there makes it much harder. You could end up not being able to generate it if you require that order, because generating an elliptic curve of prime order is already hard, and then if you put extra conditions on it you might end up not being able to get lucky.
>> Jung Hee Cheon: It increases the generation time by 100 or log p times.
>>: [inaudible] set parameters.
>>: No, but generating elliptic curve with a certain order.
>>: But if it's a matter of getting 10 curves every 10 years from NIST, right
[inaudible].
[brief talking over].
>>: Yeah, but the size of those [inaudible] polynomials is like the square root of D, which is like the square root of p; if your p is 256 bits, then the square root of D is, you know, like 128 bits, and the size of the polynomials is like e to the square root of D. That's just the size of the coefficients of the polynomials.
>>: Okay.
>>: So that's why it's hard to generate the curves.
So I have a question. So even if you do that, then you also still have your other attack, which uses a divisor of p to the n minus one. So if p to the n minus one is D times E, how large a factor D can you reasonably use, and how much space or time does your algorithm take? Can D be two to the 40, or can it be two to the 20 -- how large a divisor of p to the n minus one is practical for your algorithm?
>> Jung Hee Cheon: Yeah. Our algorithm depends on two factors, this one and this one, only two, and usually the complexity of the algorithm depends on this one.
>>: And what is the complexity?
>> Jung Hee Cheon: That's square root of D.
>>: Oh, square root of D. Oh, okay.
>> Jung Hee Cheon: Yeah. And then --
>>: It handles very large [inaudible]?
>> Jung Hee Cheon: So if D is smaller than p, it can provide a better attack than the original [inaudible], but the problem is that this is related to d, the number of given elements. If it is large, we need too many elements; it's not realistic. So we should be able to reduce this part, and then, yeah, that part is --
>>: Did you find any of the NIST curves that are susceptible to this for some small values of n?
>> Jung Hee Cheon: None yet. Yeah. Actually yesterday I finished my [inaudible], since I changed my data; maybe in several days. I'm trying. The problem is that the current parameters are already set against this [inaudible], p minus [inaudible]. So it's a good motivation.
>> Kristin Lauter: Any other questions? Thank you.
[applause]