2014
DOI: 10.1155/2014/797348
Collective Mind: Towards Practical and Collaborative Auto-Tuning

Abstract: Empirical auto-tuning and machine learning techniques have been showing high potential to improve execution time, power consumption, code size, reliability and other important metrics of various applications for more than two decades. However, they are still far from widespread production use due to lack of native support for auto-tuning in an ever changing and complex software and hardware stack, large and multi-dimensional optimization spaces, excessively long exploration times, and lack of unified…


Cited by 36 publications (59 citation statements)
References 73 publications (98 reference statements)
“…Among the contributions of this paper are: 1) We develop a simple, yet powerful profiling based analysis to capture data and control flow dependences for program executions with different input data sets, 2) we analyze the variability of both data and control flow dependences for the whole CBENCH benchmark suite [18], [19] using 100 randomly chosen input data sets from the KDATASETS [20] collection, and 3) we analyze the performance implications of the dynamically collected dependence information with respect to the ability to exploit loop-level parallelism and compare against static parallelization approaches.…”
Section: B. Contributions
confidence: 99%
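The citing paper's profiling-based analysis can be illustrated with a minimal sketch: instrument memory accesses during a loop's execution, remember the last iteration that wrote each address, and flag a loop-carried flow dependence whenever a later iteration reads that address. This is an illustrative toy, not the authors' actual instrumentation; the trace format and function name are assumptions.

```python
# Hypothetical sketch of profiling-based dependence detection.
# The trace format (iteration, op, address) is an assumption for
# illustration, not the cited tool's real interface.

def find_flow_dependences(trace):
    """Given an access trace of (iteration, op, address) tuples,
    report loop-carried flow (read-after-write) dependences as
    (source iteration, sink iteration) pairs."""
    last_write = {}  # address -> iteration of the most recent write
    deps = set()
    for it, op, addr in trace:
        if op == "read" and addr in last_write and last_write[addr] != it:
            deps.add((last_write[addr], it))
        elif op == "write":
            last_write[addr] = it
    return sorted(deps)

# Toy trace for:  for i in range(3): a[i+1] = a[i] + 1
trace = [
    (0, "read", 0), (0, "write", 1),
    (1, "read", 1), (1, "write", 2),
    (2, "read", 2), (2, "write", 3),
]
print(find_flow_dependences(trace))  # [(0, 1), (1, 2)]
```

Because the dependences are discovered from concrete executions, they vary with the input data set, which is exactly why the citing work profiles 100 inputs per benchmark before drawing conclusions about exploitable parallelism.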
“…We use the MIDATASETS/CBENCH benchmark suite [19], [22] for our evaluation. It contains 32 benchmarks from the MIBENCH [18] suite.…”
Section: Empirical Evaluation, A. Experimental Setup
confidence: 99%
“…Finally, G. Fursin et al [7] have proposed the notion of iterative compilation; they get rid of software characteristics altogether, and consider (among other) the performance of a program for a given set of optimization strategies in order to predict for the same program. While robust, this approach is obviously only relevant for large optimization spaces, which is not the case in our study.…”
Section: Related Work: Using Machine Learning to Improve Compilation
confidence: 99%
“…Whenever a new program needs to be compiled, the model is queried to predict good configurations. Usually, this just aims at focusing the selection of the configurations to be tested through iterative compilation on more promising areas [Agakov et al 2006], but a similar approach can be used to predict a single configuration to be used as the result of the compilation [Fursin and Temam 2010]. Unfortunately, the training phase for building a good model is really long, up to several weeks [Fursin et al 2008], and this limits the applicability of machine learning approaches.…”
Section: Introduction
confidence: 99%
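The workflow this citation describes, training a model offline and then querying it to focus iterative compilation on promising configurations, can be sketched as follows. Everything here is a toy stand-in: the flag names, the synthetic cost function, and the nearest-neighbour surrogate are assumptions for illustration, not the models used in the cited works.

```python
import itertools
import random

# Toy stand-in for ML-focused iterative compilation: a surrogate model
# ranks candidate flag configurations so only the most promising ones
# are "compiled and measured". The cost function is synthetic.

random.seed(0)

FLAGS = ["-funroll-loops", "-fvectorize", "-finline", "-ftree-pre"]

def measure(config):
    # Pretend runtime: each enabled flag shifts it by a fixed amount.
    weights = {"-funroll-loops": -3, "-fvectorize": -5,
               "-finline": -1, "-ftree-pre": 2}
    return 100 + sum(weights[f] for f, on in zip(FLAGS, config) if on)

def predict(config, history):
    # 1-nearest-neighbour surrogate: return the measured runtime of the
    # most similar already-measured configuration (Hamming distance).
    nearest = min(history,
                  key=lambda h: sum(a != b for a, b in zip(h[0], config)))
    return nearest[1]

# "Training phase": seed the model with a few measured configurations.
space = list(itertools.product([0, 1], repeat=len(FLAGS)))
history = [(c, measure(c)) for c in random.sample(space, 4)]

# Focused search: measure only the 3 configurations ranked best by the
# model, instead of exhaustively measuring all 16.
candidates = sorted(space, key=lambda c: predict(c, history))[:3]
best = min(candidates, key=measure)
print("best config:", best, "measured:", measure(best))
```

The sketch also shows why the training cost matters: the surrogate is only as good as its measured history, and in practice building that history is what the citation reports can take weeks.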