2015
DOI: 10.1145/2813885.2737969
Autotuning algorithmic choice for input sensitivity

Abstract: A daunting challenge faced by program performance autotuning is input sensitivity, where the best autotuned configuration may vary with different input sets. This paper presents a novel two-level input learning algorithm to tackle the challenge for an important class of autotuning problems, algorithmic autotuning. The new approach uses a two-level input clustering method to automatically refine input grouping, feature selection, and classifier construction. Its design solves a series of open issues that are pa…
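The core idea the abstract describes (group inputs, then map each group to its best configuration) can be illustrated with a minimal, hypothetical sketch. The feature, centroids, and configuration names below are invented for illustration; this is not the paper's two-level algorithm.

```python
# Hypothetical sketch of input-adaptive configuration selection:
# cluster representative inputs by a feature offline, record the
# best-performing configuration per cluster, then dispatch new
# inputs to the nearest cluster's configuration online.

def nearest_centroid(x, centroids):
    """Index of the centroid closest to feature value x."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

# Offline phase: two input clusters (e.g. small vs. large inputs),
# each with the configuration that autotuning found best for it.
centroids = [10.0, 1000.0]
best_config = ["insertion_sort", "merge_sort"]

def choose_config(input_feature):
    """Online phase: use the configuration of the nearest input cluster."""
    return best_config[nearest_centroid(input_feature, centroids)]

print(choose_config(25.0))    # -> insertion_sort
print(choose_config(5000.0))  # -> merge_sort
```

A real system would use many features and a learned classifier rather than a single scalar and nearest-centroid lookup, but the offline/online split is the same.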



Cited by 45 publications (57 citation statements)
References 46 publications (25 reference statements)
“…• Algorithm Choice [3, 15]: we use IRA to choose between five different algorithmic implementations of Jmeint that offer different accuracy-performance tradeoffs in computing whether pairs of 3D triangles intersect. The most complex algorithm is the exact algorithm, while the simplest algorithm uses computationally cheap heuristics that work well only when triangles are far apart.…”
Section: Methods (mentioning; confidence: 99%)
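The accuracy-performance tradeoff in that citation can be sketched as a guarded dispatch: a cheap geometric heuristic handles the easy far-apart case, and the expensive exact test is reserved for the rest. This is an illustrative reconstruction, not IRA's or Jmeint's actual code; `exact_intersection` is a placeholder.

```python
import math

def centroid(tri):
    """Centroid of a triangle given as three (x, y, z) vertices."""
    xs, ys, zs = zip(*tri)
    return (sum(xs) / 3, sum(ys) / 3, sum(zs) / 3)

def far_apart(t1, t2, threshold=10.0):
    """Cheap heuristic: centroids farther apart than the threshold
    cannot intersect (valid only when the threshold exceeds both
    triangles' extents)."""
    return math.dist(centroid(t1), centroid(t2)) > threshold

def exact_intersection(t1, t2):
    # Placeholder for the most complex, exact algorithm; it
    # conservatively reports True so the sketch stays self-contained.
    return True

def intersects(t1, t2):
    """Guarded dispatch: cheap path first, exact algorithm otherwise."""
    if far_apart(t1, t2):
        return False
    return exact_intersection(t1, t2)
```

The tradeoff knob is the guard itself: a looser threshold sends more pairs down the cheap path, gaining speed at the cost of accuracy.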
“…The general-purpose approximate computing techniques typically applied to regularly-structured computations, such as loop perforation [27, 45], algorithm selection [3, 15], and numerical approximation [25], have been important and successful vehicles for realizing approximation in practice. These approaches can be realized on commodity hardware, apply to a variety of problem types, and are straightforward for programmers to implement.…”
Section: Introduction (mentioning; confidence: 99%)
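Loop perforation, mentioned above alongside algorithm selection, fits in a few lines: skip a fraction of loop iterations and rescale the result. A minimal sketch (the stride and the rescaling strategy are illustrative choices, not a specific system's):

```python
def perforated_sum(xs, k=2):
    """Approximate sum(xs) by executing only every k-th iteration
    and rescaling the partial result; trades accuracy for speed."""
    return sum(xs[::k]) * k

data = list(range(100))      # exact sum is 4950
print(perforated_sum(data))  # -> 4900 (about 1% error for half the work)
```

Larger strides do proportionally less work but widen the error, which is exactly the accuracy-performance dial these techniques expose.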
“…We provide these inputs as a means of increasing the flexibility of our system, for example, to support applications in which the optimization heuristic depends on dynamic values which cannot be statically determined from the program code [3,24]. When present, the values of auxiliary inputs are concatenated with the output of the language model, and fed into the heuristic model.…”
Section: Auxiliary Inputs (mentioning; confidence: 99%)
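The concatenation step that citation describes can be sketched as follows; the embedding values and auxiliary features below are invented for illustration and do not come from the cited system.

```python
def combine_features(model_embedding, auxiliary_inputs):
    """Concatenate static code features (e.g. language-model output)
    with dynamic values known only at run time, forming the input
    vector for the heuristic model."""
    return list(model_embedding) + list(auxiliary_inputs)

embedding = [0.12, -0.5, 0.9]  # e.g. language-model output for the code
aux = [1024, 8]                # e.g. input size and thread count
print(combine_features(embedding, aux))  # -> [0.12, -0.5, 0.9, 1024, 8]
```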
“…Machine learning has emerged as a viable means of automatically constructing heuristics for code optimization [3, 4, 24, 37–39]. Its great advantage is that it can adapt to changing hardware platforms, as it makes no a priori assumptions about their behavior.…”
Section: Related Work (mentioning; confidence: 99%)