2011
DOI: 10.1137/08073408x
Fast Polynomial Factorization and Modular Composition

Abstract: We obtain randomized algorithms for factoring degree n univariate polynomials over F_q requiring O(n^(1.5+o(1)) log^(1+o(1)) q + n^(1+o(1)) log^(2+o(1)) q) bit operations. When log q < n, this is asymptotically faster than the best previous algorithms (von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998)); for log q ≥ n, it matches the asymptotic running time of the best known algorithms. The improvements come from new algorithms for modular composition of degree n univariate polynomials, which is the asymptotic bottleneck…
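Modular composition, the operation at the heart of the abstract above, takes polynomials f, g, h over F_q and computes f(g) mod h. For orientation only, here is a minimal sketch of the classical Horner-style method (one polynomial multiplication mod h per coefficient of f); it is not the paper's algorithm, and the schoolbook arithmetic, helper names, and coefficient-list representation are our own choices.

```python
def poly_mulmod(a, b, h, p):
    """Schoolbook product of polynomials a and b (coefficient lists, lowest degree
    first), reduced modulo the monic polynomial h, with coefficients in F_p."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    n = len(h) - 1
    for k in range(len(prod) - 1, n - 1, -1):    # eliminate leading terms using h
        c = prod[k]
        if c:
            for j in range(n + 1):
                prod[k - n + j] = (prod[k - n + j] - c * h[j]) % p
    prod = prod[:n]
    return prod + [0] * (n - len(prod))

def modular_composition(f, g, h, p):
    """Compute f(g) mod h over F_p by Horner's rule: deg(f) multiplications mod h."""
    result = [f[-1] % p]
    for coeff in reversed(f[:-1]):
        result = poly_mulmod(result, g, h, p)
        result[0] = (result[0] + coeff) % p
    return result

# Example over F_5: f = x^2 + 1, g = x + 2, h = x^3 - 1 (written x^3 + 4).
# f(g) = (x + 2)^2 + 1 = x^2 + 4x over F_5.
print(modular_composition([1, 0, 1], [2, 1], [4, 0, 0, 1], 5))  # -> [0, 4, 1]
```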

Cited by 169 publications (220 citation statements). References 32 publications.
“…Finally, we argue (17). The inequality follows from the fact that the only ways A′ differs from A are: (i) computing the random map, which needs O(1) work per entry (this includes the time taken to compute the random permutation, since the Fisher-Yates shuffle can be implemented with O(1) work per element); (ii) copying elements from the run to another portion of memory using the random map, which again needs O(1) work per element in the run; and (iii) computing, for each memory access, its location from the random map, which again takes O(1) work per element.…”
Section: Simulation (mentioning)
confidence: 76%
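To make the O(1)-per-element claim in this excerpt concrete, here is a minimal, self-contained sketch of the Fisher-Yates shuffle; it is generic illustration code, not the simulation of the citing paper, and the function name is ours.

```python
import random

def fisher_yates(n, rng=random):
    """Return a uniformly random permutation of 0..n-1 with O(1) work per index."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)             # constant-time random index in [0, i]
        perm[i], perm[j] = perm[j], perm[i]  # constant-time swap
    return perm

# Used as a random map, element k of a run would be copied to position perm[k],
# again O(1) work per element.
print(fisher_yates(8))
```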
“…(As before, all this information is stored in a table.) To get better work complexity, instead of using the naive polynomial evaluation algorithm, we use a recent efficient data structure for polynomial evaluation designed by Kedlaya and Umans [17].…”
Section: Proof Techniques (mentioning)
confidence: 99%
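For context, the naive evaluation this excerpt refers to is Horner's rule, which costs Theta(d) field operations per query for a degree-d polynomial; the Kedlaya-Umans data structure [17] instead preprocesses the polynomial so that later queries are far cheaper. The sketch below shows only the naive baseline plus a toy table-based "preprocessing" for a tiny prime field; it is not the data structure of [17], and all function names are our own.

```python
def horner_eval(coeffs, x, p):
    """O(d) evaluation of a polynomial (coefficients lowest degree first) at x over F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def preprocess_all_values(coeffs, p):
    """Toy preprocessing: tabulate the polynomial on all of F_p (feasible only for tiny p)."""
    return [horner_eval(coeffs, x, p) for x in range(p)]

p = 101
f = [3, 0, 7, 1]                         # f(x) = x^3 + 7x^2 + 3 over F_101
table = preprocess_all_values(f, p)
assert table[5] == horner_eval(f, 5, p)  # after preprocessing, each query is an O(1) lookup
```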
“…Since 6. To find the roots of a polynomial over a finite field, we can first factorize it to get a set of monic polynomials (see [33] for some algorithms), then find the monic degree-1 polynomials' roots.…”
Section: Representing Sets By Polynomials (mentioning)
confidence: 99%
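As a small illustration of the last step described in that excerpt (reading roots off the monic degree-1 factors), here is a hedged sketch over a small prime field F_p; the coefficient-list format and the function name are our own choices, and no particular factorization algorithm from [33] is implied.

```python
def roots_from_monic_factors(factors, p):
    """Given monic factors of a polynomial over F_p (coefficient lists, lowest degree
    first), return the roots coming from the degree-1 factors x + c, namely -c mod p."""
    roots = []
    for f in factors:
        if len(f) == 2 and f[1] % p == 1:   # monic degree-1 factor: c + x
            roots.append((-f[0]) % p)
        # higher-degree irreducible monic factors contribute no roots in F_p
    return roots

# Example over F_7: x^2 - 1 factors as (x - 1)(x + 1) = (x + 6)(x + 1).
print(roots_from_monic_factors([[6, 1], [1, 1]], 7))   # -> [1, 6]
```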
“…If S = Φ(x) and T = Φ(y) are known, a result by Kedlaya and Umans [26] for modular composition, and its extension in [32], yield an algorithm with bit complexity essentially linear in mn and log(p) on a RAM. Unfortunately, making these algorithms competitive in practice is challenging; we are not aware of any implementation of them.…”
Section: Introduction (mentioning)
confidence: 99%
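Since, as the excerpt notes, no practical implementation of the Kedlaya-Umans modular composition algorithm is known, implementations typically rely on the older baby-step/giant-step scheme of Brent and Kung, which uses about 2*sqrt(deg f) multiplications modulo h plus linear combinations. The sketch below uses schoolbook polynomial arithmetic over F_p and is only illustrative, not the algorithm of [26] or [32]; all helper names are our own.

```python
from math import isqrt

def mulmod(a, b, h, p):
    """Schoolbook product of a and b (coefficient lists, lowest degree first),
    reduced modulo the monic polynomial h, over F_p."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    n = len(h) - 1
    for k in range(len(prod) - 1, n - 1, -1):    # eliminate leading terms using h
        c = prod[k]
        if c:
            for j in range(n + 1):
                prod[k - n + j] = (prod[k - n + j] - c * h[j]) % p
    prod = prod[:n]
    return prod + [0] * (n - len(prod))

def add(a, b, p):
    """Coefficient-wise sum of two polynomials over F_p."""
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def scale(a, c, p):
    """Multiply a polynomial by the scalar c over F_p."""
    return [(c * x) % p for x in a]

def brent_kung_compose(f, g, h, p):
    """Compute f(g) mod h over F_p with about 2*sqrt(deg f) multiplications mod h."""
    t = isqrt(len(f) - 1) + 1                    # block size, roughly sqrt(deg f)
    baby = [[1]]                                 # baby steps: g^0, ..., g^(t-1) mod h
    for _ in range(t - 1):
        baby.append(mulmod(baby[-1], g, h, p))
    G = mulmod(baby[-1], g, h, p)                # giant step: g^t mod h
    blocks = [f[j:j + t] for j in range(0, len(f), t)]
    result = [0]
    for block in reversed(blocks):               # Horner over the giant steps
        fj = [0]
        for i, c in enumerate(block):            # F_j(g) mod h as a linear combination
            fj = add(fj, scale(baby[i], c, p), p)
        result = add(mulmod(result, G, h, p), fj, p)
    return result

# Sanity check over F_7: f = x^3 + 2x + 1, g = x + 1, h = x^2 + 1;
# f(g) mod h = 4x + 1, i.e. the coefficient list [1, 4].
print(brent_kung_compose([1, 2, 0, 1], [1, 1], [1, 0, 1], 7))
```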