>> Kristin Lauter: For our next talk I am very pleased to introduce François Rodier. François is the director of the Institut de Mathématiques de Luminy in Marseille and he will speak about the asymptotic nonlinearity of Boolean functions. >> François Rodier: Thank you, Kristin. I thank Kristin Lauter for inviting me to visit Microsoft and to give a talk. This is joint work with my student, Stéphanie Dib. I will speak about Boolean functions, more specifically about the nonlinearity of Boolean functions, and I will be interested in the asymptotics when the number of variables of the Boolean functions tends to infinity. Here is the outline of the talk: I will talk about Boolean functions, after that about higher-order nonlinearity, then about vectorial Boolean functions, then about resistance against linear cryptanalysis, and finally my conclusions. First I recall what Boolean functions are. A Boolean function with m variables is a map from the vector space F_2^m into F_2, and I will call this vector space V_m. Since these functions are defined on a vector space, one can speak of linearity: a Boolean function is linear if it is a linear form on F_2^m, and it is affine if it is equal to a linear function up to a constant, which can be zero or one. What use are Boolean functions in cryptography? The existence of affine approximations of the Boolean functions used in a cryptosystem allows people to build attacks on the system. In the case of stream ciphers, for instance, these attacks are called fast correlation attacks. What do they consist in? They consist in simplifying the enciphering algorithm by a linear approximation. Therefore a function f is all the more resistant to this attack as it is far away from every affine mapping. Hence the notion of nonlinearity. We call nonlinearity of a Boolean function f from V_m to F_2 the distance from f to the set of affine functions with m variables, which is defined by this formula, where d is the Hamming distance; I denote this nonlinearity NL(f). What are the properties of this nonlinearity? It is also equal to 2^(m-1) minus one half of S(f), where S(f) is given by this formula: you take the Walsh (Fourier) transform of the Boolean function f, take its absolute value, and take the supremum over a in V_m. S(f) is called the spectral amplitude of the Boolean function f. Actually, in this talk I will not speak so much about the nonlinearity itself; I will speak about the spectral amplitude of f, but the spectral amplitude is linked to the nonlinearity by this simple formula, so the higher the spectral amplitude, the lower the nonlinearity. What inequalities can we get on this spectral amplitude? First, each Walsh coefficient is a sum of 2^m terms equal to plus or minus one, so it is clear that the spectral amplitude is smaller than 2^m, and from the Parseval equality you get that S(f) is at least 2^(m/2). There are functions such that S(f) is equal to 2^(m/2); of course m has to be even, because S(f) is always an integer. These functions, for instance certain quadratic forms, are called bent functions, and for odd m the value 2^(m/2) times the square root of 2, that is 2^((m+1)/2), was for a long time the smallest known value of S(f).
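To make these definitions concrete, here is a minimal sketch in plain Python (an editorial illustration, not part of the talk): the Walsh coefficients, that is the sums over x of (-1)^(f(x)+a.x), the spectral amplitude S(f) as the largest absolute coefficient, and NL(f) = 2^(m-1) - S(f)/2, for a Boolean function f given by its truth table.

    from itertools import product

    def walsh_transform(f, m):
        """All Walsh coefficients sum_x (-1)^(f(x) + a.x), for a in V_m."""
        points = list(product((0, 1), repeat=m))
        coeffs = []
        for a in points:
            s = 0
            for x in points:
                dot = sum(ai * xi for ai, xi in zip(a, x)) % 2
                s += (-1) ** (f[x] ^ dot)
            coeffs.append(s)
        return coeffs

    def spectral_amplitude(f, m):
        """S(f) = sup over a of |Walsh coefficient at a|."""
        return max(abs(c) for c in walsh_transform(f, m))

    def nonlinearity(f, m):
        """NL(f) = 2^(m-1) - S(f)/2."""
        return 2 ** (m - 1) - spectral_amplitude(f, m) // 2

    # Example: the quadratic bent function f(x1,...,x4) = x1*x2 + x3*x4 (m = 4).
    m = 4
    f = {x: (x[0] & x[1]) ^ (x[2] & x[3]) for x in product((0, 1), repeat=m)}
    print(spectral_amplitude(f, m))  # 4 = 2^(m/2), the Parseval lower bound
    print(nonlinearity(f, m))        # 6 = 2^(m-1) - 2^(m/2 - 1)

For this bent example every Walsh coefficient equals plus or minus 4, which is the behaviour described above: S(f) meets the Parseval bound exactly when f is bent.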
Getting this value 2^((m+1)/2) for odd m is simple: inside V_m you consider V_(m-1), with m-1 even, take a bent function on V_(m-1), and this gives such functions. In 1983 Patterson and Wiedemann showed that one can do better for m = 15: they produced a Boolean function f such that S(f) equals 2^(m/2) times the square root of 2 multiplied by a fraction smaller than one. They conjectured that the infimum of S(f) is equal to, or around, 2^(m/2) asymptotically. More recently Kavut, Maitra and Yücel have shown that there exists a Boolean function f with m = 9 whose spectral amplitude is only 2^(m/2) times the square root of 2, minus 2. For m = 3, 5 and 7 the minimum is known to be exactly 2^(m/2) times the square root of 2. So much for the known results. Now, what do we not know? There are the constructed bent functions: mainly the Maiorana-McFarland and the partial spread families, which give most of the bent functions we can construct, and their number is roughly 2 to the power (m/2) times 2^(m/2). The total number of bent functions we do not know; we only know an upper bound given by the degree. In fact bent functions in m variables have degree at most m/2, so the number of functions of that degree is 2^(2^(m-1) + C(m, m/2)/2). Actually Claude Carlet has given a slightly better bound, but I will not speak about it now. What we can see is the gap between the number of constructed bent functions and the upper bound given by the degree; it is really a big gap. For m = 8 the constructed bent functions number fewer than 2^59, the upper bound given by the degree is about 2^163, and Langevin and his team have counted the bent functions in eight variables: there are about 2^106.3 of them. So there is a very big gap between the constructed bent functions and the counted bent functions. I will not speak about bent functions only, because for security reasons in cryptography the Boolean functions need to have properties like high nonlinearity, but they also have to be balanced, to have high algebraic degree, high algebraic immunity, and so on. These properties are not all compatible, so it is necessary to be able to choose among many Boolean functions, not only bent functions but also functions which are close to being bent. That is the reason why we have to study the nonlinearity of all Boolean functions. For instance, one can compute the distribution of the nonlinearity for m = 10. This has been computed from random Boolean functions, because you cannot go through all Boolean functions with m variables. The spectral amplitude ranges from 2^(10/2) = 32 to 2^10 = 1024, and the curve these values form has its maximum at about 128. What is the distribution for all Boolean functions? The spectral amplitude lies between √q, where q = 2^m, and q, and for most f it is about √(2q log q). So we can state a theorem, one side of which was proven by Olejár and Stanek and independently by Carlet, the other side by me: the probability that S(f) lies between a·√(2q log q) and b·√(2q log q) tends to 1 as m goes to infinity, for any a and b with a < 1 < b. And there is another way of saying this.
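As a quick numerical illustration of this concentration (an editorial sketch, not the speaker's computation), one can sample random Boolean functions on m = 8 variables and compare their spectral amplitude with √(2q log q), q = 2^m:

    import math, random
    from itertools import product

    def spectral_amplitude(f, m):
        pts = list(product((0, 1), repeat=m))
        best = 0
        for a in pts:
            s = sum((-1) ** (f[x] ^ (sum(u * v for u, v in zip(a, x)) % 2)) for x in pts)
            best = max(best, abs(s))
        return best

    m = 8
    q = 2 ** m
    concentration = math.sqrt(2 * q * math.log(q))   # about 53.3 for m = 8
    samples = [spectral_amplitude({x: random.randint(0, 1)
                                   for x in product((0, 1), repeat=m)}, m)
               for _ in range(30)]
    print(round(concentration, 1), sum(samples) / len(samples))

The sample mean typically lands within a few units of the concentration point, in line with the theorem, although m = 8 is of course far from the asymptotic regime of the statement.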
One can say that if f is a Boolean function, then almost surely the limit, as m tends to infinity, of S(f)/√(2q log q) is equal to one. Of course this statement is not completely correct as stated, because f is a Boolean function on some fixed V_m and yet we let m tend to infinity; so I can give a correct statement by using a space of Boolean functions in infinitely many variables. I call B_m the set of Boolean functions on V_m, I take V_infinity to be the space of infinite sequences of elements of F_2 which are almost all equal to zero, and I call B_infinity the algebra of Boolean functions on V_infinity. We then have restriction mappings from B_infinity to B_m which send f to f_m, simply the restriction of f to V_m. The algebra B_infinity is made into a topological space by considering the compact-open topology, and the probability on B_infinity will be the Haar measure on it. Another way of saying this is that for each f in B_m, the probability of the inverse image of f under the restriction map, which is a subset of B_infinity, is equal to 1/2^q. With these notions we can state the theorem correctly. There are more precise results, by Litsyn and Shpunt. Their theorem is of a different kind: the probability that S(f) is smaller than √(2q log q) minus a correcting term, here minus 0.5 log q minus alpha log q, is a big O of 1/(log q)^alpha, and there is also an analogous expression bounding the probability that S(f) is greater than a corresponding quantity. What can we conclude? We can conclude that the concentration point, which was √(2q log q), is in fact closer to √(2q log q) minus 0.5 log q, so this is more precise. I just want to give a sketch of the proof. The first part of the theorem is to bound the probability that S(f) is greater than 2^m minus 2·Delta: we note that S(f) > 2^m - 2·Delta is equivalent to the fact that f lies in the union, over affine Boolean functions h, of the Hamming balls of center h and radius Delta, so the probability that S(f) > 2^m - 2·Delta is at most the sum of the probabilities that f lies in the ball B_Delta(h), and these probabilities are all the same. For the other side, the probability that S(f) is small: we note that S(f) < 2^m - 2·Delta is equivalent to eta = 0, where eta is the sum of the indicator functions of the balls of center h and radius Delta. Then one bounds the probability that eta is zero by a Chebyshev-type formula, namely the quantity (E[eta^2] - E[eta]^2)/E[eta]^2. Let me come back to the result by Litsyn and Shpunt, and say that it is significant only when alpha is small. For instance, if you consider bent functions, you have to take the bound equal to 2^(m/2), which gives this value for alpha, and the result gives that the probability that S(f) = 2^(m/2) is a big O of 1/q; this is much weaker than the estimate that the probability that S(f) = 2^(m/2) is of the order of 2^(-q/2), up to an epsilon in the exponent, which is obtained from the bound on the degree. Unfortunately, this result does not give a good bound for the functions whose spectral amplitude is smaller than this number. It gives the existence of functions in the red region here, but it does not construct any function in the red region.
But there are functions which one can construct. First, the bent functions for m even, whose spectral amplitude is √q. You can construct functions by the bent concatenation method for m odd, which consists in considering V_(m-1), constructing a bent function there, and then extending it to V_m. You can construct quadratic functions of high rank, and the functions obtained by Maitra and Fontaine, which are rotation-symmetric, idempotent functions. And you can construct the functions x goes to Tr(F(x)), where F is a polynomial. Let me say a word about these. Let K = F_{2^m}, identified with the space F_2^m. If F is a polynomial over K, we construct the Boolean function x goes to Tr(F(x)), where Tr is the trace function from F_{2^m} to F_2, and the theorem is that if d, the degree of F, is at least 3 and odd, then the spectral amplitude is smaller than (d - 1)·√q. I will now speak about the higher-order nonlinearity of Boolean functions. The nonlinearity of order r generalizes the usual nonlinearity: for a given function f it is its Hamming distance to the set of all functions whose algebraic degree does not exceed r. Let NL_r(f) denote the r-th order nonlinearity of f. The cryptanalysis of order r consists in simplifying the enciphering algorithm by an approximation of degree r, and therefore a function f is all the more resistant to this attack as it is far from every mapping of degree at most r. Very little is known about the nonlinearity of order r. To be able to compare with the preceding theorems, we define the spectral amplitude of order r of a Boolean function f as the integer S_r(f) given by this formula. Carlet and Mesnager have shown that the minimum possible spectral amplitude of order two of Boolean functions is bounded below by this number, and they have extended the theory to order r to get this bound. I come now to this theorem of Claude Carlet. He shows that the density of the set of functions satisfying this inequality tends to one as m tends to infinity, if c is a real number greater than one. I wanted a theorem giving the converse inequality. We could describe the distribution of Boolean functions only in the case of the second-order nonlinearity, and the theorem is that the density of the set of functions satisfying the same inequality tends to zero as m tends to infinity when c is smaller than one; we could obtain this theorem only for the second-order inequality. We compute the probability that S(f) is smaller than a given bound by the same procedure as before, where the expectation of eta is the sum of the probabilities of the balls B_Delta(h). The expectation of eta squared reduces to a sum, over pairs of balls, of the probabilities of their intersections. This probability is equal to the probability of the intersection of B_Delta(0) with B_Delta(h) for h in the Reed-Muller code RM(2), multiplied by the number of elements of RM(2). The probability that B_Delta(0) intersects B_Delta(h) depends only on the weight of h, and the weights of the functions of degree 1 or 2 are relatively crowded around 2^(m-1); the weights seem to be distributed similarly, in mean square, for functions of degree 2 and 3, and one can try to prove this using the fact that the corresponding random variable has mean zero and [inaudible] of the order of 2^m.
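As an aside on the construction f(x) = Tr(F(x)) mentioned above, here is a toy check in Python (an editorial sketch; the field F_16 = F_2[t]/(t^4 + t + 1) and the choice F(x) = x^3 are just one example): with d = 3 the bound of the theorem reads S(f) <= (d - 1)·√q = 8.

    # Arithmetic in F_16 = F_2[t]/(t^4 + t + 1); elements are 4-bit integers.
    def mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x10:
                a ^= 0x13       # reduce modulo t^4 + t + 1
            b >>= 1
        return r

    def power(a, e):
        r = 1
        for _ in range(e):
            r = mul(r, a)
        return r

    def trace(a):
        # absolute trace from F_16 to F_2: a + a^2 + a^4 + a^8, always 0 or 1
        return a ^ power(a, 2) ^ power(a, 4) ^ power(a, 8)

    # Boolean function f(x) = Tr(x^3); the linear forms on F_16 are x -> Tr(a*x).
    d = 3
    S = max(abs(sum((-1) ** (trace(power(x, d)) ^ trace(mul(a, x))) for x in range(16)))
            for a in range(16))
    print(S)   # here S = 8, meeting the (d - 1)*sqrt(q) = 2*4 = 8 bound exactly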
But actually that formula about the weights is not strong enough to conclude: the weights it gives are not that close to 2^(m-1), so we can conclude only when we have the exact weight distributions, which we have only for the first orders. I will now talk about vectorial Boolean functions. Linear cryptanalysis exploits non-uniform statistical behaviour in the encryption process; it consists in simplifying the encryption algorithm by a linear approximation. Therefore, again, a function f is all the more resistant to this attack as it is far from every linear mapping. I call an (m, n) vectorial Boolean function with m variables a map from the space V_m into V_n, and I define the component function u·f as the function x goes to u·f(x), where the dot denotes the usual scalar product. We call nonlinearity of a vectorial Boolean function f from V_m to V_n the minimum Hamming distance between the component functions of f and the affine functions in m variables. You have this formula: the nonlinearity is equal to this expression, where S(u·f) is the sup of the absolute value of the Fourier transform of u·f. There is a result by Chabaud and Vaudenay about the minimal spectral amplitude, which gives this bound; actually the only case where this bound is achieved is the case m = n with m odd. The minimal spectral amplitude of vectorial Boolean functions from F_q to F_q is then 2^((m+1)/2); these functions are called almost bent, and they exist when m is odd. We have investigated the spectral amplitude of these vectorial Boolean functions: if f is a vectorial function from F_q to F_q, then the probability that S(f) lies between 2a·√(q log q) and 2b·√(q log q) tends to one as m goes to infinity, for a and b with a < 1 < b. So I can state the theorem: if f is a vectorial Boolean function from F_q to F_q, then almost surely the limit, as m tends to infinity, of S(f)/(2√(q log q)) is equal to one. There is another theorem, for Boolean functions from F_2^m to F_2^n. It says that the probability that S(f) is smaller than b·√(2^(m+1)·(m+n)·log 2) tends to 1 as m goes to infinity, for b greater than one. Unfortunately we have not finished the work; we do not have the other inequality. I can give some functions which have a small spectral amplitude: let d be at least 3 and consider a polynomial F over F_{2^n} with degree of F equal to d. If d is odd, the spectral amplitude is smaller than (d - 1)·2^(n/2 + 1). If d is even, let d' be the largest integer among the odd divisors of the degrees of the terms of F(x); if d' is at least 3, then S(f) is smaller than (d' - 1)·2^(n/2 + 1). I will skip this and go to the conclusion. We have been interested in classifying Boolean functions according to their nonlinearity. We found a concentration point in the case of Boolean functions, in the case of the second-order nonlinearity of Boolean functions, and in the case of vectorial Boolean functions. There is more to be done: for instance, to find bounds for [inaudible] between 2^(m/2) and √(2q log q); also to study the weight distribution of the Reed-Muller code of order r for r at least 3, which should give the concentration point in the case of Boolean functions with order-r nonlinearity. Okay, thank you. [applause].
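To illustrate the vectorial notions in the same way (again an editorial sketch, not from the talk), one can take the cube map over F_8, a standard example of an almost bent function for m = n = 3, and compute S(F) from the component functions u·F:

    # Arithmetic in F_8 = F_2[t]/(t^3 + t + 1); elements are 3-bit integers.
    def mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 8:
                a ^= 0b1011     # reduce modulo t^3 + t + 1
            b >>= 1
        return r

    def dot(u, v):              # scalar product of bit vectors
        return bin(u & v).count("1") % 2

    m = 3
    F = {x: mul(mul(x, x), x) for x in range(8)}    # F(x) = x^3

    S = 0
    for u in range(1, 8):       # nonzero u: the component function x -> u.F(x)
        for a in range(8):      # linear form x -> a.x
            w = sum((-1) ** (dot(u, F[x]) ^ dot(a, x)) for x in range(8))
            S = max(S, abs(w))
    print(S)                      # 4 = 2^((m+1)/2): the almost bent value for odd m
    print(2 ** (m - 1) - S // 2)  # the corresponding nonlinearity, 2

This matches the Chabaud-Vaudenay bound quoted above: for m = n odd the spectral amplitude cannot go below 2^((m+1)/2), and the cube map attains it.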
>>: I wonder if you could explain a little bit more about the weight distribution of the Reed-Muller codes. We usually think of that as connected to coding theory and error-correcting codes, but here you are talking about nonlinearity; are you relating it to a cryptographic application, then? >> François Rodier: Yes. This slide? >>: Yes. >> François Rodier: Yes. The computation of the expectation of eta squared depends on the [inaudible]. The probability that the ball of center zero and radius Delta intersects the ball of center h and radius Delta depends only on the weight of h, because for two centers of the same weight the intersection is the same. We use the fact that, for functions of degree 1 or 2, the weights are relatively crowded around 2^(m-1), and this makes the formula work. We wondered what the distribution is for functions of degree 3 and so on, and we found an argument which says that these weights also seem to be relatively crowded around 2^(m-1). But the formula we found was not strong enough: the weights are in fact much closer to 2^(m-1) than what this formula gives, so, not knowing anything better than this formula, we could not conclude. >> Kristin Lauter: Any more questions? Okay, let's thank François again. [applause]. >> Kristin Lauter: Okay. For our next talk I am pleased to introduce Sorina Ionica, who is visiting us for several weeks from the University of Lorraine and the INRIA lab there in Nancy. She will speak on pairing-based algorithms for Jacobians of genus 2 curves with maximal endomorphism ring. >> Sorina Ionica: Okay. Thank you first for inviting me. This talk is about endomorphism ring computation for Jacobians of genus 2 curves over finite fields. Before starting to speak about that: many of you here are cryptographers, so I would like to start by explaining why we actually care about this problem in cryptography, that is, why we want to compute endomorphism rings. The thing is that in cryptography we are interested in working with an abelian variety of small dimension, like 1 or 2, defined over a finite field, such that its number of points over the finite field is divisible by some large prime number. In order to find such an abelian variety, one solution is to count the number of points of candidate abelian varieties and check whether this condition is verified; the second solution, which is what we actually use when we do pairing-based cryptography, is to use class polynomials in order to generate abelian varieties whose number of points we already know. These class polynomials are really the key to generating such curves, or rather their Jacobians, of course. This slide is meant to explain what class polynomials actually are. We consider an ordinary Jacobian of a genus 2 curve over a finite field; we know that its endomorphism ring is an order of a CM field that we denote by K. What is a CM field? It is a totally imaginary quadratic extension of a totally real number field. The class polynomials are polynomials with coefficients in Q which parametrize the invariants of the abelian varieties over C whose endomorphism ring is the maximal order of this K here.
If we take a prime number which is good, in the sense that these abelian varieties have good reduction modulo some prime ideal above this prime p, then our class polynomials factor into linear factors mod p, and their roots are actually the invariants of abelian varieties with maximal endomorphism ring. As a bonus, what we get is in fact the number of points of such an abelian variety: the number of points of my Jacobian is given by the norm of pi - 1, where pi is the Frobenius endomorphism corresponding to the finite field F_p. So if we are able to factor these class polynomials over F_p, we get the invariants of abelian varieties for which we already know the number of points, and that is what we want for cryptography. What we would like is to be able to compute these class polynomials, and one way to do it is the CRT method. What does the CRT method do? It computes the class polynomials modulo many small primes p, and then it uses the CRT to reconstruct H1, H2 and H3. How does it work? We select a good prime p, just as I said earlier; thanks to the theory of complex multiplication we know what these good primes are, we know what they look like once we fix the CM field. Then, once we have such a prime, we start testing all of the roughly p^3 Jacobians defined over F_p: by computing the zeta function we see whether they are in the right isogeny class, that is, whether they have the right number of points, and if they do, we check whether the endomorphism ring is the maximal order. If it is, we keep the curve; if it is not, we throw it away and start again. We repeat this process until, since there are finitely many such Jacobians, we have got them all, and then we stop. Once we have done this for a sufficient number of primes p, we are able to compute our H_i mod p. Maybe I am not being very clear; I am sorry. Why do we need endomorphism ring computation? It is exactly at this step here. What I should note, and did not say, is that the difficult part is to actually find Jacobians with the right endomorphism ring, that is, Jacobians whose endomorphism ring is the maximal order. Instead of continuing to look for good Jacobians, what we can do, once we have found one Jacobian with maximal order, is to generate other Jacobians with maximal order from it by computing horizontal isogenies from this Jacobian. An isogeny is horizontal if the two endomorphism rings are isomorphic. In my talk I will address these two steps of the algorithm: I will give a method to verify whether the Jacobian of a genus 2 curve defined over a finite field has endomorphism ring equal to the maximal order, and I will also show how to compute these horizontal isogenies. Let me fix some notation. I will be working with endomorphism rings of ordinary Jacobians over a finite field. I denote by K the quartic CM field, which is Q to which we have adjoined this eta here, and I will also assume that the maximal order of the real quadratic subfield, which I denote by O_K0, has class number one. I said that our Jacobians are ordinary, which means that the endomorphism ring is an order of K, and I denote by pi the Frobenius endomorphism.
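Schematically, the CRT loop just described has the following shape (an editorial sketch in Python-like pseudocode; every helper name here, select_good_prime, jacobians_over, in_right_isogeny_class, has_maximal_order, invariants_mod_p, interpolate_class_polynomials, is a hypothetical placeholder, not an existing library routine):

    def class_polynomials_via_crt(K, num_primes):
        """Hypothetical outline of the CRT method for the class polynomials H1, H2, H3."""
        data = []
        for _ in range(num_primes):
            p = select_good_prime(K)                  # prime of good reduction for the CM field K
            kept = []
            for J in jacobians_over(p):               # the roughly p^3 genus-2 Jacobians over F_p
                if not in_right_isogeny_class(J, K):  # zeta function / point-count test
                    continue
                if has_maximal_order(J, K):           # the endomorphism-ring test discussed below
                    kept.append(J)
            data.append((p, invariants_mod_p(kept)))  # values of the invariants, i.e. the H_i mod p
        return interpolate_class_polynomials(data)    # CRT reconstruction of H1, H2, H3

The two steps the talk focuses on are the has_maximal_order test (the pairing-based criterion below) and, as a shortcut for the inner loop, producing further good Jacobians by horizontal isogenies once one has been found.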
So, with this notation, the endomorphism ring of J is always contained in the maximal order and always contains this order here, which is generated by pi and by pi bar. The algorithm which allows us to compute endomorphism rings was given by Eisenträger and Lauter, and then improved, in the case where one only checks whether the endomorphism ring is the maximal order, by Freeman and Kristin Lauter. This algorithm is based on the following remark: given an endomorphism of the Jacobian that we call alpha, then alpha/n is also an endomorphism if and only if J[n], the n-torsion of J, is contained in the kernel of alpha. How does the algorithm work? It writes down a number of generators: we take an order O of our CM field and we want to check whether this order is contained in the endomorphism ring. We write down a basis for this order, whose elements gamma_i are of the form alpha_i/n_i, where alpha_i is a polynomial in pi; to check whether such an element is an endomorphism of our Jacobian, we use the previous remark: we just need to check whether alpha_i is zero on the n_i-torsion. It turns out that these denominators n_i are divisors of the index of Z[pi, pi bar] in O_K. The drawback of this algorithm is that we need to do these computations over the extension fields which contain J[n_i], where the n_i are the various divisors of this index. In the end the n_i might be quite large, so we have to work over extension fields which are pretty large. To understand this better, assume you need to work over the extension field containing J[L], where L is a prime number. The points of this group are defined over an extension field of degree r, where r might be as large as L^4, which is already pretty big. If you need J[L^2], then you have to go up to an extension field of degree r·L, and if you need J[L^3] you get another factor L in the extension degree, and so on; it gets pretty high. I will start with the following observation. There is a theorem of Tate that tells us that the endomorphism ring, computed locally at some prime number L, is given by this subalgebra here, inside the algebra of 4 by 4 matrices over Z_L. But we do not know how to use this information directly to compute the endomorphism ring locally. The goal of my talk is to show you that by computing the action of the Frobenius on the L-power torsion, so just by understanding what the Frobenius matrix looks like up to a certain precision, I can extract some information about the endomorphism ring; more precisely, I will give a criterion to check whether the endomorphism ring is the maximal order. To fix notation, I denote by n a number, n at least one, such that J[L^n] is defined over the finite field F_q and such that J[L^(n+1)] is not defined over the finite field. We work with prime numbers L greater than two, and this assumption also translates into saying that pi - 1, where pi is the Frobenius, is divisible exactly by L^n.
Obviously, under these assumptions, my Frobenius matrix, at precision L^n, will look like that. Some more definitions: we take theta, an element of an order O, and we define the L-adic valuation of theta with respect to O as this quantity here. It may look a bit weird, but let us understand what it is. Write down a Z-basis for our order and take the decomposition of theta over this basis. Then this valuation is in fact the L-adic valuation of the gcd, the greatest common divisor, of these coefficients. That is what it represents. I do not care about this first coefficient, which is just an integer; I look at the other three and take the L-adic valuation of their gcd. We wrote the definition in this somewhat odd way to show that it is independent of the basis. In general the valuation of theta with respect to any order is at most the valuation with respect to the maximal order. Now I take a basis of O_K0, which I denote 1 and omega, and I take the relative basis of O_K with respect to this basis of O_K0, so I write theta like this. Then I have the following simple criterion to verify whether an order O is locally maximal at L or not: under the following assumption on the element theta, the valuation of theta with respect to an order O is strictly smaller than the valuation with respect to the maximal order whenever the index of O in the maximal order is divisible by L. So if we are given such a theta, compute its valuation, and find that it equals the valuation with respect to the maximal order, then our order is in fact the maximal order locally at L. In the following slides the theta we compute with will be pi, the Frobenius. Our criterion works like this: we compute the valuation of pi with respect to the endomorphism ring of the Jacobian, and if this valuation equals the valuation of pi with respect to the maximal order, then we are sure the endomorphism ring is the maximal order; otherwise it is not. What we need is to get this valuation. Earlier I showed you that the valuation can be computed if we have a basis for the order, by computing a gcd, but I cannot do this for my problem, because I do not know what the endomorphism ring is, so I cannot write down a basis for it. I need a different method to compute this valuation. How do we do it? We show that this valuation of the Frobenius is the largest integer m such that the matrix of the Frobenius has this shape here, just some lambdas on the diagonal, and that is it. Now, in order to get this integer m, I would need to compute the action of the Frobenius on all these torsion subgroups, J[L], J[L^2], et cetera, and stop whenever my matrix no longer has this shape. If I do this, as I mentioned earlier, I need to consider large extension fields, because these subgroups are defined over higher and higher extension fields, so it does not seem like the right way to proceed. So what we are actually going to do is use pairings. We will work with the Tate pairing, and the idea is not mine: it comes from Susan Schmoyer, who uses it in cryptographic settings, so in a completely different context, but I reuse the same idea to do my computations. First, let us see some definitions about pairings.
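Before the talk turns to pairings, here is a minimal sketch of the valuation just defined (an editorial illustration; the coordinates in the example are made up): write theta over a Z-basis (1, b1, b2, b3) of the order, ignore the coefficient on 1, and take the L-adic valuation of the gcd of the other three coefficients.

    from math import gcd

    def l_adic_valuation(n, l):
        """Largest v such that l^v divides n (infinite for n = 0)."""
        if n == 0:
            return float("inf")
        v = 0
        while n % l == 0:
            n //= l
            v += 1
        return v

    def valuation_wrt_order(coords, l):
        """coords = (c0, c1, c2, c3): coordinates of theta over a Z-basis (1, b1, b2, b3).
        The coefficient c0 on 1 is ignored; the valuation is v_l(gcd(c1, c2, c3))."""
        return l_adic_valuation(gcd(gcd(coords[1], coords[2]), coords[3]), l)

    # Made-up example: theta = 3 + 10*b1 + 20*b2 + 50*b3, l = 5.
    print(valuation_wrt_order((3, 10, 20, 50), 5))   # v_5(gcd(10, 20, 50)) = v_5(10) = 1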
First I need to introduce the Weil pairing. We take an abelian variety A defined over a field K and its m-torsion A[m]; we know that the m-torsion of the dual variety is isomorphic to this homomorphism group here, and this is exactly what we need to define a bilinear map, which is what we call the Weil pairing. Moreover, if our abelian variety is principally polarized, which is our case because we are working with Jacobians of genus 2 curves, we can get rid of the dual and we get this bilinear map here. Now I want to introduce the Tate pairing. For that I need Galois cohomology, so I first look at this exact sequence here. Taking the Galois cohomology of this sequence gives me a connecting morphism which goes from this H^0 here to H^1 of A[m], and if I quotient by this quantity here, what I get is actually an isomorphism between this quotient and A(K)/mA(K). My connecting morphism can be described as follows: to any point P on my variety I associate the following cocycle. I take a point P bar such that m times P bar equals P, and then any element sigma of the Galois group is sent to sigma(P bar) minus P bar. This element is m-torsion, so it lands where it should, over here, and this allows us to get a bilinear map, which is what we call the Tate pairing: a bilinear map defined on this quotient times the m-torsion of the dual, taking its values in the first cohomology group of the m-th roots of unity. Since we are working with principally polarized varieties, we can get rid of the dual and we get this map here; moreover, since we are working over finite fields, we did the computation of this Galois cohomology group, and it is isomorphic to the group of m-th roots of unity. Basically, if I lost you with these Galois cohomology computations, this is where I can get you back, because what we end up with is this formula here, which is true over finite fields, and there is no Galois cohomology in it. So the formula I am going to use for the Tate pairing is this one: to pair two points P and Q, I compute the Weil pairing of pi(P bar) minus P bar with Q. Now, those of you who work with pairings in cryptography know that this is not the way we actually compute the pairing in practice. Why? Because with this formula we could run Miller's algorithm and get the pairing like this, but note that there is a P bar here, the point lying over my point P, and this point P bar is defined over a larger extension field: if my point P is defined over F_q, P bar is going to live in an extension of F_q, and you do not want to do that; it is not efficient for cryptography. So what I do is use this formula to relate the pairing to the action of the Frobenius on the m-torsion points, but to actually compute the pairing I do it the way we usually do, by considering the [inaudible] pairing and by using Miller's algorithm to compute it. What you get is this expression here; you just need to know that both of them are the same. I need more definitions: I denote by W the set of subgroups of rank two in J[L^n] which are maximal isotropic with respect to the Weil pairing. Now I denote by k(L, J)
the greatest k such that there are two points P and Q in a subgroup belonging to W such that the Tate pairing of P and Q is a primitive L^k-th root of unity. It looks pretty weird, but it will make sense a little later. Now I will study the non-degeneracy of the Tate pairing on such a subgroup G in W. I say that the Tate pairing is non-degenerate on the subgroup G if this map here, with values in the group mu of L-power roots of unity written here, is surjective, and we say it is degenerate otherwise. What is this k(L, J) good for? It is useful because we have a theorem which tells us that if pi - 1 is exactly divisible by L^n, then k(L, J) is equal to 2n minus the L-adic valuation of pi. So this formula gives us a way to compute the L-adic valuation of pi, which is what I wanted earlier, provided we know k(L, J). Since I defined k(L, J) through pairing computations, what I am going to show you is that I can get k(L, J) by computing only a certain finite number of Tate pairings. >>: [inaudible] is it true that the proposition [inaudible] on the slide at the bottom? >> Sorina Ionica: It will come up later, when I talk about horizontal isogenies. Yes? >>: How big is the value of n in practice? Roughly how big are those values? >> Sorina Ionica: How big are those values? >>: Sorry, of L, the size of L. >> Sorina Ionica: It depends on what you get in the decomposition, because the L's that you are going to work with are the L's that appear in the factorization of this index here. If L divides this index, you are not sure that your endomorphism ring is maximal at L, so those are the L's we work with. Sometimes they can be pretty big, but most of the time they are small; you might also have a small prime raised to a large power dividing your index, so it can go either way. >>: Just to give you an idea, when L starts to have three digits, you start to be in big trouble. >>: Yeah. >>: So that is the relevant size; we are not talking cryptographic sizes. >>: You have got to compute ahead for these, right? [inaudible]. >>: The degree of the extension where the points live can potentially be L to the 4th, so that is why… >> Sorina Ionica: Sorry about this definition. To justify it a little bit: basically I consider all of the subgroups which are kernels of isogenies between principally polarized abelian varieties. Since I am interested in understanding which ones are kernels of horizontal isogenies, which would give me some information about my endomorphism ring, I introduced this k(L, J); it somehow distinguishes the horizontal ones from those which are not horizontal. This is how it originally came up. But we will get to this later, because hopefully I will have time to talk about horizontal isogenies. From this theorem, using the criterion I showed you earlier for checking whether an order is locally maximal at L, I get this simple criterion to check whether the endomorphism ring is locally maximal at L: I just need to compute k(L, J) and check whether it equals 2n minus the L-adic valuation of pi with respect to the maximal order. If I get equality, I have what I wanted.
Some computational issues. In order to get the L-adic valuation of the Frobenius, we need to know this k(L, J), and to find it, what we could do is simply run over all of the subgroups which are maximal isotropic with respect to the Weil pairing and check on which of them the Tate pairing is non-degenerate. So basically you would need to compute the Tate pairing on all of these subgroups to get the quantity k(L, J), and there are quite a lot of them, about O(L^3) subgroups which are maximal isotropic with respect to the Weil pairing, so this is a lot and we wanted to avoid it. How did we do that? In practice we compute a symplectic basis for the L^n-torsion, and we showed that k(L, J) is in fact the greatest integer k such that the Tate pairings of Q_i and Q_j, for all j different from -i, are primitive L^k-th roots of unity. We have this formula here, which means that in order to compute k(L, J) we just need to compute the Tate pairings on certain pairs (Q_i, Q_j) of this symplectic basis, so this is a small number of pairing computations. In practice the L-torsion is not going to be defined over F_q. Earlier I said that I need L^n-torsion with n at least one defined over my field F_q, so if it is not, what I do is switch to the extension field containing at least the L-torsion. Then I check whether more of the L-power torsion is defined over that extension field and determine what n is; then I compute a symplectic basis, compute my k(L, J) just as I told you earlier, and the criterion is simply to check whether the L-adic valuation of the Frobenius with respect to the maximal order equals 2n minus k(L, J); if it does, my algorithm returns true, and that is it. Now, to compare with the algorithm by David Freeman and Kristin Lauter, let us first set notation: we will work over the extension field F_{q^r} containing the L-torsion. Their algorithm, in the worst case, needs to do computations over the extension field containing the L^u-torsion, where u is the L-adic valuation of the index; that is what I call u. I call M(r) the cost of a multiplication in an extension field of degree r, and I have this complexity comparison here. Basically, the dominant cost for both algorithms is to get a symplectic basis of the L^n-torsion, respectively of the L^u-torsion, so what we have here is a comparison where you can see that the two complexities look similar, because the dominating part of the computation is just getting the symplectic basis; we work over an extension field of degree r, while their algorithm in the worst case works over an extension field of degree r times L^(u-n). >>: Especially when u is large, that is bad. >> Sorina Ionica: Yes, that is what I am saying, but the thing is that we do not know how to compare u and n here, because we moved to an extension field which contains the L-torsion; it might contain a little more than that, it could contain the L^n-torsion, but I do not know how to relate that to the L^u-torsion, you see. In practice I realized that there is a way to generate examples where u equals n; in that case the two algorithms are just as fast, so I guess it is still not very clear when we are going to be faster.
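Putting the pieces of this part together, the local maximality test has roughly the following shape (an editorial sketch only; symplectic_basis, tate_pairing, l_power_order and frobenius_valuation_in_maximal_order are hypothetical placeholders, no existing library interface is implied, and the way the symplectic partner of Q_i is skipped is only indicative):

    def k_l_J(J, l, n):
        """Largest k such that some Tate pairing of basis points of J[l^n]
        is a primitive l^k-th root of unity (schematic, following the talk)."""
        Q = symplectic_basis(J, l, n)            # hypothetical: Q[0..3] generating J[l^n]
        k = 0
        for i in range(4):
            for j in range(4):
                if j == 3 - i:                   # skip the symplectic partner of Q[i] (indicative)
                    continue
                zeta = tate_pairing(J, Q[i], Q[j], l ** n)   # hypothetical pairing routine
                k = max(k, l_power_order(zeta, l))           # exact l-power order of this root of unity
        return k

    def endomorphism_ring_maximal_at_l(J, K, l, n):
        """Criterion from the talk: locally maximal at l iff the valuation of the
        Frobenius over the maximal order equals 2n - k(l, J)."""
        return frobenius_valuation_in_maximal_order(K, l) == 2 * n - k_l_J(J, l, n)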
>>: Since you announced the running time on the left there, does that take into account the potential failure of our algorithm when you do not have a kind of equidistribution of the L^n-torsion points, or did you… >> Sorina Ionica: Yes. Basically the complexity I wrote here is what I understood from the complexity you gave in your ANTS paper, so I think it is what Damien Robert does when implementing the isogenies. >>: Then we should credit Robert, because that is an improvement over what is [inaudible] in the ANTS paper. He has a better way of getting a uniform distribution of the L-torsion. >> Sorina Ionica: Yes, it is just that I did not completely understand what he does, and I think his paper is not really written up, so… >>: That is true. >> Sorina Ionica: It is not really very clear; I have tried to reproduce it here. Anyway, the difference is the extension field over which we are working; that is what is important in the end. How to compute the symplectic basis, we do in the same way, so we just need to figure out the best way to do it. I think you should write his paper down, because otherwise I do not really understand it. >>: Once again, which part of your complexity comes from computing this symplectic basis? >> Sorina Ionica: It is this part here. This part is the [inaudible] computation. Yeah. So I am not finished: I was talking about class polynomials earlier, and I said we would see how to check whether the endomorphism ring is maximal; we have just done that. Now we are going to look at how to compute horizontal isogenies from a Jacobian with maximal endomorphism ring. I call an L-isogeny an isogeny whose kernel is maximal isotropic with respect to the Weil pairing, and I say that such an isogeny is horizontal if the two endomorphism rings are isomorphic. The way these horizontal isogenies were computed so far was by using the action of the Shimura class group, this class group here. People knew how to do it whenever the integer L is coprime to the discriminant of Z[pi, pi bar]; we did not know how to do it when L divides this index here. Let us talk a little bit about how this used to be done. It was done by using the fact that the kernel of a horizontal isogeny, which corresponds to an ideal A of the Shimura class group, is a subgroup stable under the Frobenius; so people computed the subgroups stable under the Frobenius, and these subgroups corresponded to the horizontal isogenies. But the question is how to do this if L divides the index. I have a solution to this problem, and it is the following. I have a Jacobian whose endomorphism ring I know to be locally maximal at L. Given G, a maximal isotropic subgroup of the L-torsion, and assuming that pi - 1 is divisible exactly by L^n, we can consider a subgroup G bar of the L^n-torsion such that L^(n-1) times G bar is exactly G and such that, I think there is actually a typo here, such that my G bar is in the set of maximal isotropic subgroups that I called W. If I do this, the isogeny of kernel G is horizontal if and only if the Tate pairing is degenerate on G bar times G bar, and otherwise it is descending. I did not say what descending means, but it does not really matter.
We just want to know when it is horizontal. I am not going to bore you with the details of how to actually compute this isogeny, but how does the algorithm work? It is basically the same idea as before: write down a symplectic basis for the L^n-torsion and then, using the bilinearity of the pairing, write down some [inaudible], compute the pairings, and get the subgroups on which the Tate pairing degenerates. Basically the complexity is the same as for our first algorithm; it depends on the cost of computing a symplectic basis for the L^n-torsion. We have one example. I consider a hyperelliptic curve defined over the finite field F_127, whose Jacobian is maximal at 5, and I know that the index here is 50. I have this decomposition of 5 into A times A bar, which means that we actually have two horizontal isogenies, and thanks to [inaudible] we actually computed these isogenies; what I give here are the Mumford coordinates of the generators of one kernel, that is, of one of these two isogenies, so we go with what is written here and… As for future work, what I would like to do is to actually put this method to work, basically to implement the CRT algorithm and see whether my two methods improve the CRT method for class polynomial computation. Another open question is whether, in the general case, by looking at this map here, we can get an algorithm which computes the endomorphism ring all of the time. This concludes my talk. [applause]. >>: I could ask a lot of questions, but I think I will ask them directly to Sorina. Let's thank Sorina and take a coffee break until four.