20th Annual International Conference on High Performance Computing 2013
DOI: 10.1109/hipc.2013.6799098
MaSiF: Machine learning guided auto-tuning of parallel skeletons

Abstract: We present MaSiF, a novel tool to auto-tune the parallelization parameters of skeleton parallel programs. It reduces the cost of searching the optimization space using a combination of machine learning and linear dimensionality reduction. To auto-tune a new program, a set of program features is determined statically and used to compute the k nearest neighbors from a set of training programs. Previously collected performance data for the nearest neighbors is used to reduce the size of the search space using Principal Component Analysis.
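
The abstract describes a two-stage pipeline: k-nearest-neighbor matching on static program features, followed by Principal Component Analysis over the neighbors' performance data to shrink the tuning search space. Below is a minimal Python sketch of that general technique, not MaSiF's implementation; the feature counts, parameter counts, data values, and choice of k are all invented for illustration.

```python
# Illustrative sketch of the kNN + PCA search-space reduction described in
# the abstract; hypothetical data and parameters, not MaSiF's actual code.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA

# Static features of the training programs (rows) -- placeholder values.
train_features = np.random.rand(50, 8)      # 50 programs, 8 static features
# Best-performing parameter settings found offline for each training program.
train_best_params = np.random.rand(50, 6)   # 6 parallelization parameters

# 1. Find the k training programs most similar to the new program.
k = 5
knn = NearestNeighbors(n_neighbors=k).fit(train_features)
new_program_features = np.random.rand(1, 8)
_, idx = knn.kneighbors(new_program_features)

# 2. Fit PCA to the neighbors' good parameter settings: the leading
#    principal components span a low-dimensional subspace in which the
#    tuner then searches, instead of the full 6-D parameter space.
pca = PCA(n_components=2).fit(train_best_params[idx[0]])

def to_full_params(z):
    """Map a point of the reduced 2-D search space back to all 6 parameters."""
    return pca.inverse_transform(z)

# The tuner now only has to explore 2 dimensions, e.g.:
candidate = to_full_params(np.array([[0.1, -0.3]]))
```

In this sketch the tuner explores only the 2-D PCA subspace and maps each candidate point back to a full parameter vector, which is the sense in which dimensionality reduction cuts the cost of the search.
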

Cited by 7 publications (7 citation statements); the citing works were published between 2014 and 2020. All 7 citation statements are classified as mentioning (0 supporting, 0 contrasting).
References 12 publications.
“…In addition to optimizing sequential programs [Cooper et al. 1999], recent studies have shown that predictive modeling is effective in optimizing parallel programs [Collins et al. 2013; Emani et al. 2013; Wang et al. 2014b] and in scheduling parallel workloads. The Qilin compiler [Luk et al. 2009] uses offline profiling to create a regression model that is employed to predict a data-parallel program's execution time.…”
Section: Related Work (mentioning)
confidence: 99%
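
The approach this excerpt attributes to Qilin, offline profiling feeding a regression model that predicts execution time, can be illustrated roughly as follows. The linear model, the profiled input sizes, and the timings are assumptions made for the sketch, not Qilin's actual formulation.

```python
# Rough illustration of offline-profiling-based execution-time prediction
# in the style the citation attributes to Qilin; all numbers are invented.
import numpy as np

# Offline profiling runs: (input size, measured execution time in ms).
sizes = np.array([1e4, 2e4, 4e4, 8e4, 1.6e5])
times = np.array([3.1, 6.0, 12.4, 24.9, 49.7])

# Fit a simple linear model time ~ a * size + b via least squares.
a, b = np.polyfit(sizes, times, deg=1)

def predict_time(n):
    """Predict execution time for an unseen input size n."""
    return a * n + b

print(predict_time(3.2e5))  # extrapolated runtime for a larger input
```
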
“…For example, varlist my_vars [5] defines a pool named my_vars of size 5. Five variables, named my_vars1 to my_vars5, are in this pool and can be sampled from using the variable construct.…”
Section: Genesis Flow and Constructs (mentioning)
confidence: 99%
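
The quoted varlist construct defines a named pool of variables that can later be drawn from. Genesis is a DSL, so the Python class below is only a loose analogue of the described sampling behavior, written to make the pool-and-sample idea concrete; the class name and method are invented.

```python
# Loose Python analogue of the quoted Genesis constructs; this only mimics
# the described behavior of 'varlist' and 'variable', not Genesis itself.
import random

class VarList:
    """A named pool of variables that can be sampled from."""
    def __init__(self, name, size):
        # varlist my_vars [5] -> variables my_vars1 .. my_vars5
        self.names = [f"{name}{i}" for i in range(1, size + 1)]

    def variable(self):
        """Analogue of the 'variable' construct: sample one name from the pool."""
        return random.choice(self.names)

my_vars = VarList("my_vars", 5)
print(my_vars.variable())  # e.g. "my_vars3"
```
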
“…There is increasing interest in the use of machine learning in automatic tuning of program performance, particularly on parallel architectures such as multi-cores and Graphics Processing Units (GPUs) [1, 5, 8, 9, 16, 19]. Much of this work uses supervised learning methods that rely on training programs, i.e., programs to which various optimizations are applied and whose characteristics and performance are used to build a machine learning model.…”
Section: Introduction (mentioning)
confidence: 99%
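
The supervised workflow this excerpt describes, applying optimizations to training programs and using their characteristics and performance to build a model, could be sketched as below. The feature matrix, the candidate optimizations, the measured runtimes, and the choice of a random-forest classifier are all placeholders, not the setup of any cited work.

```python
# Sketch of the supervised-learning workflow the quote describes: label each
# training program with its best optimization, then fit a classifier.
# Features, optimizations, and measurements below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_programs, n_features, n_opts = 40, 8, 4

features = rng.random((n_programs, n_features))   # program characteristics
runtimes = rng.random((n_programs, n_opts))       # measured per optimization
best_opt = runtimes.argmin(axis=1)                # label: fastest variant

model = RandomForestClassifier().fit(features, best_opt)

# For a new program, predict which optimization to apply.
new_features = rng.random((1, n_features))
print(model.predict(new_features))
```
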
“…In compiler research, the feature sets used for predictive models are often provided without explanation, and rarely is the quality of those features evaluated. More commonly, an initial large, high-dimensional candidate feature space is pruned via feature selection [3] or projected into a lower-dimensional space [43, 44]. FEAST employs a range of existing feature selection methods to select useful candidate features [45].…”
Section: Related Work (mentioning)
confidence: 99%
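
The distinction this excerpt draws, pruning a candidate feature space via feature selection versus projecting it into a lower-dimensional space, can be shown side by side. The data and the particular methods chosen (univariate scoring and PCA) are illustrative stand-ins, not those of the cited work.

```python
# Side-by-side sketch of the two strategies the quote contrasts:
# feature *selection* (keep a subset) vs. *projection* (PCA).
# Random data stands in for a real candidate feature space.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.random((100, 50))        # 100 programs, 50 candidate features
y = X[:, 3] * 2.0 + rng.normal(scale=0.1, size=100)  # target (e.g. speedup)

# Selection: keep the 5 features most correlated with the target.
selected = SelectKBest(f_regression, k=5).fit_transform(X, y)

# Projection: compress all 50 features into 5 principal components.
projected = PCA(n_components=5).fit_transform(X)

print(selected.shape, projected.shape)  # both (100, 5)
```

Selection keeps 5 of the original, interpretable features, while projection mixes all 50 into new axes; the quote's point is that compiler work more often reaches for one of these than for a principled evaluation of feature quality.
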