2020 IEEE International Symposium on Workload Characterization (IISWC)
DOI: 10.1109/iiswc50251.2020.00012
HPC-MixPBench: An HPC Benchmark Suite for Mixed-Precision Analysis

Cited by 11 publications (3 citation statements) · References 34 publications

“…Besides, precision auto-tuning tools aim at providing a mixed-precision version of a program that satisfies accuracy requirements, regardless of the implemented algorithms. In [3], a benchmark suite of programs is introduced for mixed-precision computing analysis. Moreover, the authors present the performance of various precision auto-tuning algorithms, such as combinational (used in FloatSmith [4]), compositional (used in FloatSmith [4]), Delta Debug (introduced in [5], used in Precimonious [6] and PROMISE [1]), hierarchical (used in CRAFT-HPC [7]), hierarchical-compositional (used in FloatSmith [4]), and a Genetic Search Algorithm (GA) (used in AMPT-GA [8])…”
Section: Introduction
confidence: 99%
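As a concrete illustration of the Delta Debug-style tuning this excerpt refers to, the sketch below greedily demotes variables from double to single precision and keeps a demotion only if an accuracy check still passes. It is a simplification under stated assumptions: the program under test, the variable names, and the error bound are hypothetical stand-ins, and the greedy per-variable loop replaces the set-bisection of true delta debugging; it is not the actual algorithm of Precimonious or PROMISE.

```python
import numpy as np

def run_program(precisions):
    """Hypothetical program under test: a dot product in which each
    operand array is cast to the precision chosen by the tuner."""
    rng = np.random.default_rng(0)
    a = rng.random(1000)
    b = rng.random(1000)
    result = np.dot(a.astype(precisions["a"]), b.astype(precisions["b"]))
    reference = np.dot(a, b)  # all-double reference run
    return abs(float(result) - reference) / abs(reference)

def tune(variables, error_bound=1e-5):
    """Greedy precision search: start with everything in double,
    try to demote each variable to single, and keep a demotion
    only if the program still meets the error bound."""
    config = {v: np.float64 for v in variables}
    for v in variables:
        trial = dict(config)
        trial[v] = np.float32          # tentative demotion
        if run_program(trial) <= error_bound:
            config = trial             # demotion accepted
    return config

print(tune(["a", "b"]))
```

A real tuner would re-run the full application per trial and search over many more variables, so the cost of each accuracy check dominates; that is exactly the search-efficiency question the benchmarked algorithms differ on.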
“…Its principles hold that although performing the most exact computation possible at scale requires a large amount of computational resources, allowing certain approximations or occasional violations of numerical consistency can provide significant gains in efficiency. [Parasyris et al 2020] states that Approximate Computing exploits the gap between the level of accuracy required by applications and that provided by the system, and has the potential to benefit a wide range of applications, such as scientific computing and machine learning. [Mittal 2015] states that AC is based on the intuitive observation that while performing the most exact computation possible, or maintaining peak-level service, requires a large amount of resources, allowing selective approximation or occasional violation of the specification can provide gains in efficiency…”
Section: Introduction
confidence: 99%
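To make the accuracy-for-efficiency trade concrete, the sketch below uses loop perforation, a classic approximate-computing technique (compute over every k-th element and accept the resulting error). It is a minimal illustration of the general idea, not drawn from the cited works.

```python
import numpy as np

def perforated_mean(data, skip=4):
    """Approximate the mean from every `skip`-th element only,
    trading a small accuracy loss for ~`skip`x less work."""
    return float(np.mean(data[::skip]))

rng = np.random.default_rng(1)
data = rng.random(1_000_000)
exact = float(np.mean(data))
approx = perforated_mean(data)
print(f"exact={exact:.6f}  approx={approx:.6f}  "
      f"rel. error={abs(approx - exact) / exact:.2e}")
```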
“…Modern computer architectures support multiple levels of precision for floating-point computations to provide trade-offs between accuracy and performance. Several recent studies, such as [Fogerty et al 2017] and [Parasyris et al 2020], have demonstrated the use of mixed precision, i.e., using multiple levels of precision, to significantly increase the performance of scientific applications. With accelerators supporting several levels of floating-point precision, such as half, single, and double precision in NVIDIA GPUs, and with higher peak performance at lower precision on these accelerators, this technique has become a promising approach to boost performance, especially using Tensor Cores [Parasyris et al 2020]…”
Section: Introduction
confidence: 99%
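A minimal sketch of the half/single/double trade-off this excerpt describes: the same dot product evaluated at each precision level and compared against a double-precision baseline. This runs on the CPU with NumPy; on a GPU the same idea would normally go through vendor libraries and Tensor Cores, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random(1_000)
b = rng.random(1_000)
reference = np.dot(a, b)  # double-precision baseline

# Repeat the same dot product at each precision level and compare.
for dtype in (np.float16, np.float32, np.float64):
    approx = np.dot(a.astype(dtype), b.astype(dtype))
    rel_err = abs(float(approx) - reference) / reference
    print(f"{np.dtype(dtype).name:>8}: rel. error = {rel_err:.2e}")
```

The error grows as precision drops, while the arithmetic gets cheaper; mixed-precision tuning is about deciding, per variable or per kernel, where on that curve each part of the application can safely sit.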