Polynomial Decomposition Algorithms

Dexter Kozen*
Department of Computer Science
Cornell University
Ithaca, New York 14853

Susan Landau†
Department of Mathematics
Wesleyan University
Middletown, Connecticut 06457

* Supported by NSF grant DCR-8602663.
† Supported by NSF grants DCR-8402175 and DCR-8301766.

Abstract

We examine the question of when a polynomial f over a commutative ring has a nontrivial functional decomposition f = g ∘ h. Previous algorithms [2, 3, 1] are exponential-time in the worst case, require polynomial factorization, and work only over fields of characteristic 0. We present an O(n^2)-time algorithm. We also show that the problem is in NC. The algorithm does not use polynomial factorization, and it works over any commutative ring containing a multiplicative inverse of r, the degree of g. Finally, we give a new structure theorem that leads to necessary and sufficient algebraic conditions for decomposability over any field. We apply this theorem to obtain an NC algorithm for decomposing irreducible polynomials over finite fields, and a subexponential algorithm for decomposing irreducible polynomials over any field admitting efficient polynomial factorization.

1  Introduction

In this paper, we address the following question: given a polynomial f with coefficients in a commutative ring K, does it have a nontrivial functional decomposition f = g ∘ h? If so, can the coefficients of g and h be obtained efficiently?

This problem arises in certain computations in symbolic algebra. The following example, due to Barton and Zippel [2, 3], shows that decomposition can simplify symbolic root-finding. Since the polynomial

    f = x^6 + 6x^4 + x^3 + 9x^2 + 3x − 5

decomposes into f = g ∘ h, where

    g = x^2 + x − 5    and    h = x^3 + 3x ,

one can solve for the roots α of g, and then solve for the roots of h − α to obtain the roots of f. Many symbolic algebra systems (e.g., MACSYMA, MAPLE, and SCRATCHPAD II) support polynomial decomposition for purposes such as this.

Barton and Zippel [2, 3] present two decomposition algorithms. Some simplifications are suggested by Alagar and Thanh [1]. All of these algorithms require K to be a field of characteristic 0, all use polynomial factorization, and all take time exponential in the degree of f in the worst case. No decomposition algorithms over fields of finite characteristic, or over more general rings, were known.

Our first result is a polynomial-time algorithm for computing a decomposition f = g ∘ h over any commutative ring K containing a multiplicative inverse of the degree of g. The algorithm has several advantages over the algorithms of [2, 3, 1]:

1. It is polynomial-time. A straightforward implementation uses O(n^2 r) algebraic operations, where r is the degree of g. All previously known algorithms [2, 3, 1] were exponential. With a bit more care, the complexity can be further reduced to O(n^2) algebraic operations. It also follows from our algorithm and from a general result of Valiant et al. [14] that the problem is in NC.

2. It is more general. The algorithms of [2, 3, 1] require K to be a field of characteristic 0. Our algorithm works in any commutative ring K containing a multiplicative inverse of r, the degree of g.

3. It is simpler. The algorithm itself is only a few lines, and requires little more than high school algebra for its analysis, in contrast to [2, 3, 1]. Most significantly, it does not use polynomial factorization.

4. It is faster in practice. A rough implementation has been coded in about 100 lines of MAPLE by Bruce Char at the University of Waterloo. Although no performance figures are yet available, Char reports that it is faster than Barton and Zippel's algorithm on all examples he has tried. The algorithm has also been implemented by Arash Baratloo, an undergraduate at Cornell. His implementation decomposes a polynomial of degree 10 in less than 1 second of real time.
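To make the root-finding example above concrete, here is a short illustrative sympy session of our own (it is unrelated to the MAPLE and Cornell implementations just mentioned; sympy's built-in decompose routine is a library facility, not the algorithm of this paper):

```python
import sympy as sp

x = sp.symbols('x')
g = x**2 + x - 5
h = x**3 + 3*x
f = sp.expand(g.subs(x, h))      # x**6 + 6*x**4 + x**3 + 9*x**2 + 3*x - 5

# sympy ships its own functional-decomposition routine, which recovers g and h:
print(sp.decompose(f))           # typically [x**2 + x - 5, x**3 + 3*x]

# Root-finding via the decomposition: roots alpha of g, then roots of h - alpha.
roots = [r for alpha in sp.Poly(g, x).nroots()
           for r in sp.Poly(h - alpha, x).nroots()]
print(max(abs(complex(f.subs(x, r))) for r in roots))    # residuals near 0
```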
Our decomposition algorithm is described in §3. In the remainder of the paper, we restrict K to be a field, but place no restriction on its characteristic. In §4, we generalize the notion of block decomposition of the Galois group of an irreducible polynomial f over a field to the case where f need not be irreducible or separable. We then establish necessary and sufficient conditions in terms of this construct for the existence of a nontrivial decomposition of any polynomial f over any field K, and show how the decomposition problem effectively reduces to the problem of polynomial factorization over K. The complexity of decomposition will depend on the complexity of factoring over K, and in general will be at least exponential. This result gives more efficient algorithms in the following special cases. In §5, we give an NC algorithm for decomposing irreducible polynomials over any finite field. In §6, we consider arbitrary fields, and give an O(n^{log n}) sequential-time algorithm for decomposing irreducible polynomials over any field admitting a polynomial-time factorization algorithm.

2  Algebraic Preliminaries

Let

    f = x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0

be a monic polynomial of degree greater than one with coefficients in a commutative ring K. A (functional) decomposition of f is a sequence g_1, . . . , g_k such that f = g_1 ∘ g_2 ∘ · · · ∘ g_k, i.e., f(x) = g_1(g_2(· · · g_k(x) · · ·)). The g_i are called components of the decomposition.

For any unit a of the ring K, the linear polynomials ax + b and (1/a)(x − b) are inverses under composition; thus f always has trivial decompositions in which all but one of the components are linear. If all decompositions of f are trivial, then f is said to be indecomposable. A complete decomposition is one in which each of the components is of degree greater than one and is indecomposable.

Complete decompositions are not unique. Consider the examples

    f ∘ g = f(x + b) ∘ (g − b)
    x^n ∘ x^m = x^m ∘ x^n
    T_n ∘ T_m = T_m ∘ T_n ,

where T_n and T_m are Chebyshev polynomials. In the first example, one decomposition is obtained from the other by inserting linear components. In the second and third, one decomposition is obtained from the other by interchanging commuting components. Ritt's first theorem states that a complete decomposition of f is unique up to these ambiguities, provided the characteristic of K is either 0 or finite but greater than the degree of f [13, 6, 8].

If f is monic and has a decomposition f = g ∘ h, where

    g = b_r x^r + b_{r−1} x^{r−1} + · · · + b_0
    h = c_s x^s + c_{s−1} x^{s−1} + · · · + c_0 ,

then g(c_s x) and h(x)/c_s are monic and also give a decomposition of f. The leading coefficients of g and h are invertible, since b_r c_s^r = 1. It follows that any complete decomposition is equivalent to a decomposition in which all the components are monic.

It will follow from Algorithm 3.4 below that if f ∈ K[x] is decomposable over any extension of K, then it is decomposable over K, provided K contains a multiplicative inverse of r.
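The reduction to monic components described above is easy to check on a small instance; the following sympy sketch (the example polynomials are ours) verifies that g(c_s x) and h(x)/c_s are monic and still compose to f:

```python
import sympy as sp

x = sp.symbols('x')
g = x**2/4 + x + 1            # non-monic outer component, b_r = 1/4
h = 2*x**3 + x                # non-monic inner component, c_s = 2
f = sp.expand(g.subs(x, h))   # f is monic, since b_r * c_s**r = (1/4)*4 = 1

c = sp.LC(h, x)                   # leading coefficient c_s of h
g2 = sp.expand(g.subs(x, c*x))    # g(c_s x) = x**2 + 2*x + 1   (monic)
h2 = sp.expand(h / c)             # h(x)/c_s = x**3 + x/2       (monic)
assert sp.expand(g2.subs(x, h2)) == f
```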
The latter fact generalizes a result of Fried and MacRae [8], who show that if K is a field and f ∈ K[x] is decomposable over an algebraic extension of K, then it is decomposable over K.

3  Decomposition over Commutative Rings

The equation f = g ∘ h, where g, h, and f are monic of degree r, s, and n = rs respectively, yields rs equations in r + s unknowns. When r ≥ s, the longest of these equations can be shown to have at least as many terms as the number of partitions of s, which is 2^{Ω(√s)} [10]; thus a naive implementation results in an exponential algorithm. This led Barton and Zippel to develop two other algorithms which are more efficient in practice, but still exponential in the degree of f in the worst case. These algorithms involve polynomial factorization and assume that K is a field of characteristic 0.

In this section we give an elementary algorithm requiring at most O(n^2 r) algebraic operations in the worst case. The algorithm does not involve polynomial factorization, and works in any commutative ring containing a multiplicative inverse of r. We then show how an alternative implementation using interpolation further reduces the complexity to O(n^2) algebraic operations.

We begin with an elementary lemma.

Lemma 3.1  Let K be a commutative ring, and let f_1, f_2, g ∈ K[x] be monic. If f_1 and f_2 agree on their first k coefficients, then so do f_1 g and f_2 g.

Let

    f = x^{rs} + a_{rs−1} x^{rs−1} + · · · + a_0
    g = x^r + b_{r−1} x^{r−1} + · · · + b_0
    h = x^s + c_{s−1} x^{s−1} + · · · + c_0

such that r, s < rs = n and f = g ∘ h. We can assume without loss of generality that c_0 = 0, since if f = g ∘ h, then f = g(x + c_0) ∘ (h − c_0). Let g factor in an extension of K as

    g = ∏_{i=1}^{r} (x − β_i) .

Then

    f = g(h) = ∏_{i=1}^{r} (h − β_i) .

By r applications of Lemma 3.1, we have

Lemma 3.2  The products h^r and f = ∏_{i=1}^{r} (h − β_i) agree on their first s coefficients.

Let

    q_k = x^s + c_{s−1} x^{s−1} + · · · + c_{s−k} x^{s−k} ,    0 ≤ k ≤ s .

Then q_0 = x^s, q_s = q_{s−1} = h, and

    q_k = q_{k−1} + c_{s−k} x^{s−k} ,    1 ≤ k ≤ s .

Again by Lemma 3.1, we have

Lemma 3.3  The powers h^r and q_k^r agree on their first k + 1 coefficients, 0 ≤ k ≤ s.

The (k + 1)st coefficient of q_k^r is the coefficient of x^{rs−k}; but since

    q_k^r = (q_{k−1} + c_{s−k} x^{s−k})^r
          = q_{k−1}^r + r c_{s−k} x^{s−k} q_{k−1}^{r−1} + p ,

where p ∈ K[x] is of degree at most rs − 2k, this coefficient is just d_k + r c_{s−k}, where d_k is the coefficient of x^{rs−k} in q_{k−1}^r and r c_{s−k} is the coefficient of x^{rs−k} in r c_{s−k} x^{s−k} q_{k−1}^{r−1}. By Lemmas 3.2 and 3.3, this agrees with a_{rs−k}, the (k + 1)st coefficient of f, 1 ≤ k ≤ s − 1. Thus if the earlier coefficients c_{s−1}, . . . , c_{s−k+1} of h are known, then c_{s−k} can be determined by computing

    c_{s−k} = (a_{rs−k} − d_k)/r ,    1 ≤ k ≤ s − 1 ,

provided K contains a multiplicative inverse of r. This gives rise to the following decomposition algorithm.

Algorithm 3.4

    /* coefficients of h */
    set q_0 := x^s and compute q_0^i := x^{is}, 0 ≤ i ≤ r;
    for k := 1 to s − 1 do
        /* assume q_{k−1}^0, . . . , q_{k−1}^r are known from the previous step */
        set d_k := the coefficient of x^{rs−k} in q_{k−1}^r;
        set c_{s−k} := (1/r)(a_{rs−k} − d_k);
            /* then c_{s−k} = the coefficient of x^{s−k} in h */
        calculate c_{s−k}^j, 0 ≤ j ≤ r;
        set q_k^j := Σ_{i=0}^{j} C(j, i) c_{s−k}^i x^{i(s−k)} q_{k−1}^{j−i}, 0 ≤ j ≤ r,
            where C(j, i) denotes the binomial coefficient;
            /* then q_k^j = (q_{k−1} + c_{s−k} x^{s−k})^j */

    /* coefficients of g */
    solve the triangular system

        A b = a ,                                   (1)

    where A_{ij} is the coefficient of x^{is} in h^j, 0 ≤ i, j ≤ r, b = (b_0, . . . , b_r),
    and a = (a_0, a_s, . . . , a_{rs});

    /* consistency check */
    compute g ∘ h and compare with f.
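For concreteness, here is a direct transcription of Algorithm 3.4 into Python, working over the rationals with exact Fraction arithmetic (an illustrative sketch of ours using the straightforward O(n^2 r) power updates; the helper names are not from the paper):

```python
from fractions import Fraction
from math import comb

def poly_mul(p, q):
    """Multiply coefficient lists (index = degree)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def coeff(p, i):
    return p[i] if 0 <= i < len(p) else Fraction(0)

def decompose(f, r):
    """Try to write the monic polynomial f (coefficient list, f[n] = 1, n = r*s)
    as g o h with deg g = r, deg h = s, and h normalized so that h(0) = 0.
    Returns (g, h) as coefficient lists, or None if the consistency check fails."""
    n = len(f) - 1
    assert n % r == 0 and f[n] == 1
    s = n // r
    f = [Fraction(c) for c in f]

    # Powers of q_0 = x^s:  qpow[j] = q_k**j for j = 0..r.
    q0 = [Fraction(0)] * s + [Fraction(1)]
    qpow = [[Fraction(1)]]
    for _ in range(r):
        qpow.append(poly_mul(qpow[-1], q0))

    h = q0[:]                                    # will become q_{s-1}
    for k in range(1, s):
        d = coeff(qpow[r], r * s - k)            # coeff of x^(rs-k) in q_{k-1}^r
        c = (f[r * s - k] - d) / r               # c_{s-k}; this is where 1/r is needed
        h[s - k] = c
        # q_k^j = sum_i C(j,i) * c^i * x^(i(s-k)) * q_{k-1}^(j-i)
        new = []
        for j in range(r + 1):
            pj = [Fraction(0)] * (j * s + 1)
            for i in range(j + 1):
                w = comb(j, i) * c ** i
                for t, a in enumerate(qpow[j - i]):
                    pj[t + i * (s - k)] += w * a
            new.append(pj)
        qpow = new                               # now qpow[j] = q_k**j

    # Coefficients of g: solve the triangular system A b = a by back substitution,
    # where A[i][j] = coeff of x^(i*s) in h^j and a[i] = f[i*s]  (A[i][i] = 1).
    b = [Fraction(0)] * (r + 1)
    for i in range(r, -1, -1):
        b[i] = f[i * s] - sum(coeff(qpow[j], i * s) * b[j] for j in range(i + 1, r + 1))

    # Consistency check: does g o h really equal f?
    goh = [Fraction(0)] * (n + 1)
    for j in range(r + 1):
        for t, a in enumerate(qpow[j]):
            goh[t] += b[j] * a
    return (b, h) if goh == f else None

# f = x^6 + 6x^4 + x^3 + 9x^2 + 3x - 5, listed by ascending degree:
print(decompose([-5, 3, 9, 1, 6, 0, 1], r=2))
# -> g = x^2 + x - 5 and h = x^3 + 3x (coefficients printed as Fractions)
```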
The calculation of h = q_{s−1} takes O(n^2 r) algebraic operations. It also produces the entries of the matrix A used in the calculation of g. The matrix A is triangular with all diagonal elements 1, thus is easily invertible over K in time O(r^2).

The consistency check is necessary, since the system is overconstrained (there are rs equations in r + s unknowns), and the g and h produced by the above algorithm may not compose to give f. The check is easily done in O(nr) time, using the powers of h already computed.

It is easily shown that the above procedure produces an expression for each coefficient of h and g that is of degree at most r in each coefficient of f. Since these coefficients are calculated by a straight-line program involving only algebraic operations, it follows from a general simulation result of Valiant et al. [14] that the algorithm can be parallelized so as to run in log^{O(1)} n parallel time using polynomially many processors; therefore the problem is in the complexity class NC.

We now show that the O(n^2 r) sequential bound can be improved to O(n^2), provided the ring contains at least n + 1 elements, using polynomial interpolation. Let ξ_0, . . . , ξ_n be n + 1 distinct elements of K, and let V be the Vandermonde matrix V_{ij} = ξ_i^j. Computing the transform f̂ = V f amounts to evaluating f at the points ξ_i (here f is represented by its vector of coefficients (a_0, . . . , a_{n−1}, 1), and V f is just the matrix-vector product). We carry out the computations of Algorithm 3.4 in the transformed space to obtain a vector of values ĥ, and then interpolate to obtain h by solving the system V h = ĥ.

In the following, let e_i denote the vector whose ith entry is 1 and whose other entries are 0. We will need to perform the analog of the operation "let d be the coefficient of x^i in the polynomial p" in the transformed space. This is equivalent to computing the inner product e_i · p, so if p̂ = V p, then

    d = e_i · (V^{−1} p̂) = (e_i^T V^{−1}) · p̂ ,

where e_i^T V^{−1} is the ith row of V^{−1}. This can be done in linear time, provided we have the matrix V^{−1}.

Algorithm 3.5

    compute the transform and inverse transform matrices V and V^{−1};
    set f̂ := V f;

    /* coefficients of h */
    set q̂_0 := (ξ_0^s, ξ_1^s, . . . , ξ_n^s);
        /* then q̂_0 = V q_0 */
    for k := 1 to s − 1 do
        /* assume q̂_{k−1} = V q_{k−1} is known from the previous step */
        compute q̂_{k−1}^r := the pointwise rth power of q̂_{k−1};
        set d_k := (e_{rs−k}^T V^{−1}) · q̂_{k−1}^r;
            /* then d_k = the coefficient of x^{rs−k} in q_{k−1}^r */
        set c_{s−k} := (1/r)(a_{rs−k} − d_k);
            /* then c_{s−k} = the coefficient of x^{s−k} in h */
        set q̂_k := q̂_{k−1} + c_{s−k} V e_{s−k};
            /* note that V e_{s−k} is the (s − k)th column of V, and
               c_{s−k} V e_{s−k} = the transform of c_{s−k} x^{s−k} */
    set h := V^{−1} q̂_{s−1};

    /* the calculation of the coefficients of g and the consistency check
       are the same as in Algorithm 3.4 */

The matrices V and V^{−1} can be computed in time O(n^2) [12]. It is also straightforward to show that these computations can be performed in NC. The only nonlinear step in the loop is the calculation of the pointwise rth power of q̂_{k−1}; this takes O(n log r) steps. Thus the total time spent in the loop is O(ns log r) ≤ O(n^2). We have shown

Theorem 3.6  Let K be a commutative ring containing a multiplicative inverse of r. There is an O(n^2) algorithm to determine whether a given monic polynomial f of degree n over K has a nontrivial decomposition f = g ∘ h over K with deg g = r, and to produce the coefficients of g and h if such a decomposition exists.
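A rough sketch of the transformed computation of Algorithm 3.5, worked over the rationals with sympy (the evaluation points, names, and the use of a general-purpose matrix inverse are ours; a production version would compute V^{−1} by the O(n^2) method of [12]):

```python
import sympy as sp

def h_coeffs_transformed(f_coeffs, r):
    """Sketch of the evaluation/interpolation variant (Algorithm 3.5) over Q.
    f_coeffs lists the coefficients of a monic f by ascending degree."""
    n = len(f_coeffs) - 1
    s = n // r
    xi = list(range(n + 1))                             # n+1 distinct points of K
    V = sp.Matrix(n + 1, n + 1, lambda i, j: sp.Integer(xi[i]) ** j)
    Vinv = V.inv()              # general-purpose inverse; O(n^2) methods exist [12]

    qhat = sp.Matrix([sp.Integer(p) ** s for p in xi])  # transform of q_0 = x^s
    h = [sp.Integer(0)] * (s + 1)
    h[s] = sp.Integer(1)
    for k in range(1, s):
        qr = qhat.applyfunc(lambda v: v ** r)           # pointwise r-th power
        d = (Vinv[n - k, :] * qr)[0]                    # coeff of x^(rs-k) in q_{k-1}^r
        c = (f_coeffs[n - k] - d) / sp.Integer(r)
        h[s - k] = c
        qhat = qhat + c * V[:, s - k]                   # add the transform of c*x^(s-k)
    # (The paper recovers h by one interpolation, h = V^{-1} qhat_{s-1}; here we have
    #  tracked its coefficients directly.  g and the consistency check are as before.)
    return h

print(h_coeffs_transformed([-5, 3, 9, 1, 6, 0, 1], r=2))   # [0, 3, 0, 1]
```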
4  A Structure Theorem

The algorithms of §3 require that the ring K contain certain multiplicative inverses. Barton and Zippel [2, 3] and Alagar and Thanh [1] consider only fields of characteristic 0. In this section we give a structure theorem that provides a necessary and sufficient condition for the decomposability of any polynomial over any field. These conditions involve a generalization of the notion of block decomposition of the Galois group of an irreducible polynomial over a field.

Let K be a field of arbitrary characteristic. Let f ∈ K[x] be monic of degree n = rs, not necessarily irreducible or separable. Let K̂ denote the splitting field of f. Let G denote the Galois group of K̂ over K. The following definition reduces to the usual notion of block decomposition for f irreducible and separable.

Definition 4.1  A block decomposition for f is a multiset ∆ of multisets of elements of K̂ such that

    1. f = ∏_{A∈∆} ∏_{α∈A} (x − α) ;
    2. if α ∈ A ∈ ∆, β ∈ B ∈ ∆, and σ ∈ G such that σ(α) = β, then B = {σ(γ) | γ ∈ A}.

A block decomposition ∆ is an r × s block decomposition if |∆| = r and |A| = s for all A ∈ ∆.

Note that non-irreducible polynomials may have block decompositions that are not r × s for any r, s.

Let c_k^m denote the kth elementary symmetric function on m-element multisets:

    c_k^m(A) = Σ_{B ⊆ A, |B| = k} ∏_{β∈B} β .

By convention, c_0^m = 1.

Theorem 4.2  Let f ∈ K[x] be monic of degree n = rs. The following two statements are equivalent:

    (i) f = g ∘ h for some g, h ∈ K[x] of degree r and s, respectively;
    (ii) there exists an r × s block decomposition ∆ for f such that

            c_k^s(A) = c_k^s(B) ∈ K   for all A, B ∈ ∆, 0 ≤ k ≤ s − 1 .

Proof.  (i) → (ii). Let Γ be the multiset of roots of g. Then

    g = ∏_{γ∈Γ} (x − γ) ,

and

    f = g(h) = ∏_{γ∈Γ} (h − γ) .

For each γ ∈ Γ, let A_γ be the multiset of roots of h − γ. Let ∆ = {A_γ | γ ∈ Γ}. Then

    f = ∏_{γ∈Γ} (h − γ) = ∏_{γ∈Γ} ∏_{α∈A_γ} (x − α) .

For each γ ∈ Γ, (−1)^k c_k^s(A_γ) is the coefficient of x^{s−k} in h − γ, 0 ≤ k ≤ s − 1; therefore c_k^s(A_γ) ∈ K, 0 ≤ k ≤ s − 1.

It remains to show that ∆ is a block decomposition. Suppose that γ, δ ∈ Γ, α ∈ A_γ, β ∈ A_δ, σ ∈ G, and σ(α) = β. Since h(α) = γ and h(β) = δ,

    σ(γ) = σ(h(α)) = h(σ(α)) = h(β) = δ .

Extending σ : K̂ → K̂ uniquely to σ : K̂[x] → K̂[x] by taking σ(x) = x, we have

    ∏_{α∈A_γ} (x − σ(α)) = ∏_{α∈A_γ} (σ(x) − σ(α))
                         = σ(∏_{α∈A_γ} (x − α))
                         = σ(h − γ)
                         = h − σ(γ)
                         = h − δ
                         = ∏_{β∈A_δ} (x − β) .

Thus A_δ = {σ(α) | α ∈ A_γ}.

(ii) → (i). Given ∆, let A ∈ ∆ and let

    h = Σ_{k=0}^{s−1} (−1)^k c_k^s(A) x^{s−k} ∈ K[x] .

The choice of A does not matter, by the assumption of (ii). For each A ∈ ∆, let

    γ_A = (−1)^{s+1} ∏_{α∈A} α = (−1)^{s+1} c_s^s(A) ∈ K̂ ,

and let

    g = ∏_{A∈∆} (x − γ_A) .

Then

    h − γ_A = Σ_{k=0}^{s} (−1)^k c_k^s(A) x^{s−k} = ∏_{α∈A} (x − α) ,

and

    f = ∏_{A∈∆} ∏_{α∈A} (x − α) = ∏_{A∈∆} (h − γ_A) = g(h) .

The coefficients of h are in K by the assumption of (ii), and the coefficients of g are in K since they are solutions of the nonsingular system (1).  □

If f is irreducible, then we need only check the condition of Theorem 4.2 for one A ∈ ∆; if it holds for one, then it holds for all, since G is transitive on ∆. The coefficients of h will be the c_k^s(A), 1 ≤ k ≤ s − 1. The constant coefficient of h is 0, without loss of generality. The roots of g are the c_s^s(A), A ∈ ∆. The coefficients of g are obtained by solving the triangular linear system (1).

Theorem 4.2 gives an algebraic condition that can be used to test the decomposability of any f over any field K, provided one can factor over K and thereby construct the splitting field of f. The complexity of the algorithm depends on the complexity of factoring over K.
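As a numerical illustration of Theorem 4.2 (not an algorithm: we already know the blocks, since we built f as g ∘ h), the following sympy sketch of ours computes the c_k^s(A) for the example of §1 and exhibits the behavior described in the proof:

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x')
g = x**2 + x - 5
h = x**3 + 3*x
f = sp.expand(g.subs(x, h))
s = 3

def esf(A, k):
    """k-th elementary symmetric function of the multiset A."""
    return sum(sp.prod(B) for B in combinations(A, k))

# One block A_gamma per root gamma of g: the roots of h - gamma.
blocks = [sp.Poly(h - gamma, x).nroots() for gamma in sp.Poly(g, x).nroots()]

for k in range(s + 1):
    vals = [complex(esf(A, k)) for A in blocks]
    print(k, [round(v.real, 6) + round(v.imag, 6) * 1j for v in vals])
# k = 0, 1, 2: the values agree across blocks and lie in K = Q (here 1, 0, 3),
#              giving h = x^3 - 0*x^2 + 3*x.
# k = 3:       the values differ from block to block; since (-1)^(s+1) = 1 here,
#              they are exactly the roots gamma of g, as in the proof of (ii) -> (i).
```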
5  Decomposition over Finite Fields

The conditions of Theorem 4.2 give an algorithm for decomposing any polynomial over any field, provided one can factor polynomials over that field. In this section, we show that for irreducible polynomials over finite fields we obtain an NC algorithm (parallel time log^{O(1)} mnp on (mnp)^{O(1)} processors), where K = GF(p^m) and n is the degree of f.

Block decompositions of irreducible polynomials over finite fields are particularly easy to compute. If K = GF(p^m) and f is irreducible over K, then K̂ = K(α), where α is any root of f, and the only automorphisms of K(α) over K are of the form β ↦ β^{p^{km}}, 0 ≤ k ≤ n − 1. It follows that there is a unique r × s block decomposition

    ∆ = {A_i | 0 ≤ i ≤ r − 1} ,
    A_i = {α^{p^{(i+jr)m}} | 0 ≤ j ≤ s − 1} ,    0 ≤ i ≤ r − 1

for f. It thus suffices to construct the conjugates α^{p^{km}}, 0 ≤ k ≤ n − 1, so that one of the polynomials

    h − γ_i = ∏_{β∈A_i} (x − β) ,    0 ≤ i ≤ r − 1

can be computed and the condition of Theorem 4.2 checked. These conjugates are represented by the polynomials

    x^{p^{km}} mod f ,    0 ≤ k ≤ n − 1 ,

which can be computed in NC (parallel time log^{O(1)} mnp on (mnp)^{O(1)} processors) by the method of Fich and Tompa [7].

Theorem 5.1  An irreducible polynomial f over a finite field K can be tested in NC for the existence of a nontrivial decomposition g ∘ h. If such a decomposition exists, the coefficients of g and h can be computed in NC.

Proof.  First we compute representations of the roots of f in NC, as described above. For any r, s such that n = rs, these roots can be arranged in the unique r × s block decomposition ∆, and for some A ∈ ∆ the product

    ∏_{α∈A} (x − α)

can be computed and tested for the condition of Theorem 4.2. This computation also gives the coefficients of h, if it exists. Finally, the triangular linear system (1) can be solved in NC [4] to obtain the coefficients of g.  □

Example 5.2  There are exactly two primitive irreducible polynomials of degree 4 over GF(2), namely

    p_1 = x^4 + x + 1
    p_2 = x^4 + x^3 + 1 .

We will show that p_1 has a nontrivial decomposition and p_2 does not.

First consider p_1. The unique 2 × 2 block decomposition is

    ∆ = {{α, α^4}, {α^2, α^8}} ,

where α is a root of p_1. This reduces modulo p_1 to

    {{α, α + 1}, {α^2, α^2 + 1}} .

Computing the two polynomials whose roots are the two multisets of ∆, we obtain

    (x − α)(x − (α + 1)) = x^2 + x + (α^2 + α)
    (x − α^2)(x − (α^2 + 1)) = x^2 + x + (α^2 + α + 1) ;

thus the requirements of Theorem 4.2 are met, and we may take

    h = x^2 + x .

The coefficients of g are obtained by solving the nonsingular triangular system (1). In this case, we have

    [ 1  0  0 ] [ b_0 ]   [ 1 ]
    [ 0  1  1 ] [ b_1 ] = [ 0 ]
    [ 0  0  1 ] [ b_2 ]   [ 1 ]

with solution (1, 1, 1), giving

    g = x^2 + x + 1 .

Computing g ∘ h, we find

    g ∘ h = (x^2 + x)^2 + (x^2 + x) + 1 = p_1 .

For p_2, the unique 2 × 2 block decomposition is

    ∆ = {{β, β^4}, {β^2, β^8}} = {{β, β^3 + 1}, {β^2, β^3 + β^2 + β}} ,

where β is a root of p_2. Computing the two polynomials whose roots are the two multisets of ∆, we obtain

    (x − β)(x − (β^3 + 1)) = x^2 + (β^3 + β + 1)x + (β^3 + β + 1)
    (x − β^2)(x − (β^3 + β^2 + β)) = x^2 + (β^3 + β)x + (β^3 + β) ,

neither of whose second coefficients lies in GF(2); therefore h does not exist, and p_2 has no nontrivial decomposition.  □
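Example 5.2 can be replayed mechanically; the following sympy sketch of ours checks the block condition for the two quartics, working with polynomials reduced modulo p over GF(2):

```python
import sympy as sp

a = sp.symbols('a')

def block_sums(p):
    """For an irreducible quartic p over GF(2), form the unique 2 x 2 block
    decomposition {{a, a^4}, {a^2, a^8}} and return c^2_1(A) for each block,
    reduced mod p.  Decomposability requires both sums to be constants."""
    P = sp.Poly(p, a, modulus=2)
    conj = [(sp.Poly(a, a, modulus=2) ** (2 ** k)).rem(P) for k in range(4)]  # a^(2^k) mod p
    blocks = [(conj[0], conj[2]), (conj[1], conj[3])]
    return [(A[0] + A[1]).rem(P).as_expr() for A in blocks]

print(block_sums(a**4 + a + 1))       # [1, 1]  -> condition met, h = x^2 + x
print(block_sums(a**4 + a**3 + 1))    # [a**3 + a + 1, a**3 + a]  -> no decomposition
```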
6  Decomposition over Arbitrary Fields

Theorem 4.2 also gives the following decomposition result for arbitrary fields:

Theorem 6.1  A complete decomposition of an irreducible polynomial f over any field K admitting a polynomial-time polynomial factorization algorithm can be constructed in time O(n^{log n}).

Proof.  First we dispose of the problem of inseparability. If K is a field of characteristic p, and if f ∈ K[x] is inseparable and irreducible, then f = g(x^{p^k}) for some irreducible and separable g, and thus we have an immediate nontrivial decomposition. Hence we will ignore this case and assume that f is separable.

In Landau and Miller [11], it was shown how to determine minimal blocks of the roots of f in polynomial time. A similar procedure can be used to determine all blocks. By Theorem 4.2, any nontrivial decomposition f = g ∘ h corresponds to an r × s block decomposition ∆, where s = deg h, such that c_i^s(A) ∈ K, 0 ≤ i ≤ s − 1, for one (and hence all) A ∈ ∆. We first use the algorithm of [11] to compute all minimal blocks and test this condition. For each block A which satisfies this condition, the coefficients of h are obtained immediately from the c_i^s(A), and the coefficients of g are obtained by solving the triangular linear system (1). The decomposition algorithm is then applied to g. Note that g is irreducible and h is indecomposable: if g were reducible, then f would have been, and if h were decomposable, then the block A would not have been minimal.

If no minimal block satisfying Theorem 4.2 is found, then the algorithm is repeated using blocks whose only proper subblock is minimal. If any of these satisfy Theorem 4.2, then the appropriate g and h are constructed, and the decomposition algorithm is then performed on g. If not, the next level of blocks is examined. Each time a decomposition is found, the polynomial g is irreducible and the polynomial h is indecomposable. The algorithm halts when the only block left to examine is Ω or when g is indecomposable.

If there is a polynomial-time algorithm for factoring over the base field K, then the algorithm of [11] for finding a block is polynomial in the size of f. The difficulty is that there are potentially n^{log n} blocks, and we must search through all of them to find a decomposition. For each block, the condition of Theorem 4.2 can be tested and the coefficients of g and h obtained in polynomial time.  □

Finally, we observe that over Q, most irreducible polynomials are indecomposable. This is because Theorem 4.2 implies that an irreducible polynomial f over Q can be decomposed only if the Galois group of the splitting field of f over Q acts imprimitively on the roots of f. But Gallagher [9] has shown that almost all irreducible polynomials over Q have Galois group S_n, which acts primitively.

7  Recent Work

Some improvements and extensions have appeared since the results of this paper were first reported. M. Atkinson (personal communication) and, independently, J. von zur Gathen (see [15]) have observed that the coefficients of h in Algorithm 3.4 can be obtained in time O(n log n), using a technique related to Hensel lifting. M. Dickerson [5] has obtained partial results for multivariate polynomials.

Acknowledgments

We are grateful to Bruce Char and Arash Baratloo for their implementations of Algorithms 3.4 and 3.5, and to Mike Atkinson for insightful comments.

References

[1] V. S. Alagar and M. Thanh. Fast polynomial decomposition algorithms. In Proc. EUROCAL 85, pages 150–153. Springer-Verlag Lect. Notes in Comput. Sci. 204, 1985.

[2] D. R. Barton and R. E. Zippel. Polynomial decomposition. In Proceedings SYMSAC '76, pages 356–358, 1976.

[3] D. R. Barton and R. E. Zippel. Polynomial decomposition algorithms. J. Symb. Comput., 1:159–168, 1985.

[4] L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Comput., 5:618–623, 1976.
[5] Matthew Dickerson. Polynomial decomposition algorithms for multivariate polynomials. Technical Report TR87-826, Department of Computer Science, Cornell University, April 1987.

[6] H. T. Engstrom. Polynomial substitutions. Amer. J. Math., 63:249–255, 1941.

[7] Faith E. Fich and Martin Tompa. The parallel complexity of exponentiating polynomials over finite fields. In Proc. 17th Symp. Theory of Computing, pages 38–47. ACM, May 1985.

[8] M. D. Fried and R. E. MacRae. On the invariance of chains of fields. Ill. J. Math., 13:165–171, 1969.

[9] P. X. Gallagher. Analytic number theory. In Proc. Symp. in Pure Math., pages 91–102. Amer. Math. Soc., 1972.

[10] G. H. Hardy and S. Ramanujan. Asymptotic formulae in combinatory analysis. Proc. London Math. Soc., 2(17):75–115, 1918.

[11] Susan Landau and Gary Miller. Solvability by radicals is in polynomial time. J. Comput. Syst. Sci., 30:179–208, 1985.

[12] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 1986.

[13] J. F. Ritt. Prime and composite polynomials. Trans. Amer. Math. Soc., 23:51–66, 1922.

[14] L. G. Valiant, S. Skyum, S. Berkowitz, and C. Rackoff. Fast parallel computation of polynomials using few processors. SIAM J. Comput., 12(4):641–644, 1983.

[15] Joachim von zur Gathen, Dexter Kozen, and Susan Landau. Functional decomposition of polynomials. In Proc. 28th Symp. Foundations of Computer Science, pages 127–131. IEEE, November 1987.