2019
DOI: 10.1007/s13160-019-00360-8

Benefits from using mixed precision computations in the ELPA-AEO and ESSEX-II eigensolver projects

Abstract: We first briefly report on the status and recent achievements of the ELPA-AEO (Eigenvalue Solvers for Petaflop Applications - Algorithmic Extensions and Optimizations) and ESSEX-II (Equipping Sparse Solvers for Exascale) projects. In both collaborative efforts, scientists from the application areas, mathematicians, and computer scientists work together to develop and make available efficient, highly parallel methods for the solution of eigenvalue problems. Then we focus on a topic addressed in both projects, the…

Cited by 12 publications (16 citation statements) · References 36 publications (51 reference statements)
“…People have long proposed using mixed precision to refine other problems, such as eigenvalue problems [13]. More recently, there has been success with FP32 eigenvalue solvers, which are compute-intensive and are the bottleneck in quantum chemistry problems [14]. These applications consume a huge fraction of large supercomputers.…”
Section: Performance Ramifications
confidence: 99%
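To make the refinement idea in this excerpt concrete, here is a minimal sketch, not the solver of [13] or [14]: the O(n³) eigendecomposition is done once in FP32, and a selected eigenpair is then polished with a few double-precision Rayleigh-quotient steps. The function name, iteration count, and tolerance are assumptions for illustration.

```python
import numpy as np

def refined_eigenpair(A, k=0, iters=3, tol=1e-12):
    # Hypothetical sketch (not the ELPA/ESSEX code): do the heavy
    # O(n^3) eigendecomposition once in FP32, then refine pair k in FP64.
    w32, V32 = np.linalg.eigh(A.astype(np.float32))
    lam = np.float64(w32[k])
    v = V32[:, k].astype(np.float64)
    A64 = A.astype(np.float64)
    n = A64.shape[0]
    for _ in range(iters):
        r = A64 @ v - lam * v              # residual in double precision
        if np.linalg.norm(r) < tol * abs(lam):
            break
        # One Rayleigh-quotient iteration step: shifted solve + update.
        z = np.linalg.solve(A64 - lam * np.eye(n), v)
        v = z / np.linalg.norm(z)
        lam = v @ A64 @ v
    return lam, v

# Usage on a random symmetric test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
lam, v = refined_eigenpair(A)
print(np.linalg.norm(A @ v - lam * v))     # typically near FP64 round-off
```

The appeal is that the expensive full decomposition is paid for in the cheap precision, while each refinement step costs only one double-precision shifted solve.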
“…[fragment of a timing table comparing double- and mixed-precision runs] Therefore, mixed precision has been implemented in all BEAST schemes mentioned above, allowing an adaptive strategy to automatically switch from single to double precision after a given residual tolerance is reached. A comprehensive description and results are presented in [4]. These results and our initial investigations also suggest that increased precision beyond double precision (i.e., quad precision) will have no benefit for the convergence rate until a certain double-precision-specific threshold is reached; convergence beyond this point would require all operations to be carried out with increased precision.…”
Section: Working Precision Level
confidence: 81%
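The adaptive single-to-double switch described in this excerpt can be sketched as follows, assuming a simple block (subspace) iteration with Rayleigh-Ritz projection rather than the actual BEAST schemes; the function name and both tolerances are illustrative assumptions.

```python
import numpy as np

def adaptive_subspace_iteration(A, m=4, switch_tol=1e-4, final_tol=1e-10, maxit=500):
    # Illustrative only (not the BEAST implementation): block power
    # iteration with Rayleigh-Ritz, starting in FP32 and promoting
    # to FP64 once the residual reaches switch_tol.
    n = A.shape[0]
    dtype = np.float32
    Aw = A.astype(dtype)
    rng = np.random.default_rng(1)
    V = np.linalg.qr(rng.standard_normal((n, m)))[0].astype(dtype)
    for it in range(maxit):
        Q, _ = np.linalg.qr(Aw @ V)        # one block power step
        H = Q.T @ Aw @ Q                   # Rayleigh-Ritz projection
        theta, S = np.linalg.eigh(H)
        V = (Q @ S).astype(dtype)
        res = np.linalg.norm(Aw @ V - V * theta, axis=0).max()
        if dtype is np.float32 and res < switch_tol:
            # Residual has stagnated near FP32 round-off: promote everything.
            dtype = np.float64
            Aw, V = A.astype(dtype), V.astype(dtype)
        elif res < final_tol:
            break
    return theta, V, it

# Demo: symmetric matrix with geometrically decaying spectrum 0.9**i.
n = 300
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * 0.9 ** np.arange(n)) @ Q.T
theta, V, its = adaptive_subspace_iteration(A)
print(its, theta[::-1][:4])                # four largest eigenvalues
```

The switch point matters: once the residual stagnates near FP32 round-off, further single-precision iterations are wasted, which mirrors the excerpt's quad-precision observation that extra precision only helps after the coarser precision's own threshold has been reached.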
“…Scalability, performance, and portability have been tested on three top-10 supercomputers covering the full range of architectures available during the ESSEX project time frame: Piz Daint (heterogeneous CPU-GPU), OakForest-PACS (many-core), and SuperMUC-NG (standard multi-core).…”
Section: Introduction
confidence: 99%
“…Third, the emergence of machine learning has enhanced the design of computer architectures for the acceleration of low-precision (single- or half-precision) calculation. The efficient use of low-precision calculation, typically in mixed-precision calculation, will be important in any high-performance computational science field [8,9]. A posteriori verification methods guarantee satisfactory numerical reliability when low-precision calculation is used.…”
Section: Introduction
confidence: 99%
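The residual-based flavor of a posteriori verification mentioned in this excerpt can be illustrated for symmetric matrices, where for any approximate pair (λ̂, v̂) some true eigenvalue lies within ‖Av̂ − λ̂v̂‖₂/‖v̂‖₂ of λ̂. The sketch below evaluates that bound in FP64 after an FP32 solve; it is a sketch only, and rigorous verification methods as in [8,9] additionally control rounding (e.g., with interval arithmetic or directed rounding), which is omitted here.

```python
import numpy as np

def eigenvalue_enclosures(A, w, V):
    # Sketch of a residual-based a posteriori bound for symmetric A:
    # each w[i] is within err[i] of some true eigenvalue of A, since
    # min_j |lambda_j - mu| <= ||A v - mu v|| / ||v|| for any (mu, v).
    # Evaluated in FP64; rigorous variants would also bound rounding.
    A64 = A.astype(np.float64)
    V64 = V.astype(np.float64)
    w64 = w.astype(np.float64)
    R = A64 @ V64 - V64 * w64                        # column-wise residuals
    err = np.linalg.norm(R, axis=0) / np.linalg.norm(V64, axis=0)
    return w64, err

# Usage: verify a single-precision eigendecomposition in double precision.
rng = np.random.default_rng(2)
B = rng.standard_normal((300, 300)).astype(np.float32)
A = (B + B.T) / np.float32(2)
w32, V32 = np.linalg.eigh(A)                         # low-precision solve
w, err = eigenvalue_enclosures(A, w32, V32)
print("largest enclosure radius:", err.max())        # roughly eps_fp32 * ||A||
```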