To obtain more accurate solutions of polynomial systems with numerical continuation methods, we use multiprecision arithmetic. Our goal is to offset the overhead of double double arithmetic by accelerating the path trackers, and in particular Newton's method, with a general purpose graphics processing unit. In this paper we describe algorithms for the massively parallel evaluation and differentiation of sparse polynomials in several variables. We report on our implementation of the algorithmic differentiation of products of variables on the NVIDIA Tesla C2050 Computing Processor, using the NVIDIA CUDA compiler tools.
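The core of the algorithmic differentiation of a product of variables is that the value and all partial derivatives can be computed together in linear time, using forward and backward cumulative products instead of differentiating each factor independently. The sketch below illustrates that idea in plain Python; the function name and scalar arithmetic are illustrative only, not the CUDA kernels of the paper, which apply the same scheme in double double arithmetic on the GPU.

```python
def product_and_gradient(xs):
    """Evaluate p = x[0]*...*x[n-1] and all partials dp/dx[i] in O(n)
    arithmetic operations, via forward and backward cumulative products
    (the classic algorithmic-differentiation trick for a product)."""
    n = len(xs)
    fwd = [1.0] * n                 # fwd[i] = x[0]*...*x[i-1]
    for i in range(1, n):
        fwd[i] = fwd[i - 1] * xs[i - 1]
    bwd = [1.0] * n                 # bwd[i] = x[i+1]*...*x[n-1]
    for i in range(n - 2, -1, -1):
        bwd[i] = bwd[i + 1] * xs[i + 1]
    value = fwd[-1] * xs[-1]
    grads = [fwd[i] * bwd[i] for i in range(n)]   # dp/dx[i] omits x[i]
    return value, grads
```

For example, with `xs = [2, 3, 5]` the product is 30 and the gradient is `[15, 10, 6]`: each partial derivative is the product of the other two variables.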
Homotopy continuation methods to solve polynomial systems scale very well on parallel machines. In this paper we examine their parallel implementation on multiprocessor, multicore workstations using threads. With more cores we can speed up pleasingly parallel path tracking jobs. In addition, we can compute solutions more accurately in the same amount of time with threads, and thus achieve quality up. Focusing on polynomial evaluation and linear system solving (the key ingredients of Newton's method), we can double the accuracy of the results with the quad doubles of QD-2.3.9 in less than double the time, if we use all eight available cores on our workstation.
Our problem is to accurately solve linear systems on a general purpose graphics processing unit with double double and quad double arithmetic. The linear systems originate from the application of Newton's method to polynomial systems. Newton's method is applied as a corrector in a path tracking method, so the linear systems are solved in sequence and not simultaneously. One solution path may require the solution of thousands of linear systems. In previous work we reported good speedups with our implementation to evaluate and differentiate polynomial systems on the NVIDIA Tesla C2050. Although the cost of evaluation and differentiation often dominates the cost of linear system solving in Newton's method, because of the limited bandwidth of the communication between CPU and GPU, we cannot afford to send the linear system to the CPU for solving during path tracking. Because of large degrees, the Jacobian matrix may contain extreme values, requiring extended precision and leading to a significant overhead. This overhead of multiprecision arithmetic is our main motivation to develop a massively parallel algorithm. To allow for overdetermined linear systems, we solve linear systems in the least squares sense, computing the QR decomposition of the matrix by the modified Gram-Schmidt algorithm. We describe our implementation of the modified Gram-Schmidt orthogonalization method using double double and quad double arithmetic for GPUs. Our experimental results on the NVIDIA Tesla C2050 and K20C show that the achieved speedups are sufficiently high to compensate for the overhead of one extra level of precision.
Keywords: double double arithmetic, general purpose graphics processing unit (GPU), massively parallel algorithm, modified Gram-Schmidt method, orthogonalization, quad double arithmetic, quality up.
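Modified Gram-Schmidt computes the QR decomposition by subtracting each projection from the already-updated columns, which is numerically more stable than the classical variant; the least squares solution then follows from back substitution on the triangular factor. A minimal sequential sketch in Python with NumPy (double precision only; the paper's GPU version applies the same algorithm in double double and quad double arithmetic):

```python
import numpy as np

def mgs_qr(A):
    """QR by modified Gram-Schmidt for an m-by-n matrix A with m >= n:
    returns Q (m-by-n, orthonormal columns) and R (n-by-n, upper
    triangular) with A = Q R."""
    Q = np.array(A, dtype=float)
    m, n = Q.shape
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):           # project out of the *updated* columns
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def least_squares(A, b):
    """Solve min ||A x - b|| in the least squares sense via R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ np.asarray(b, dtype=float))
```

For an overdetermined system, such as a 3-by-2 matrix, `least_squares` returns the unique minimizer of the residual norm, which is exactly what the corrector needs when there are more equations than unknowns.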