Optimizing MPI Runtime Parameter Settings by Using Machine Learning (2009)
DOI: 10.1007/978-3-642-03770-2_26

Cited by 20 publications (11 citation statements). References 2 publications.
“…The OTPO framework [18] tests all possible configuration combinations so as to automatically find the best one. Machine learning was also proposed as an alternative method [19]. A training tool identifies important characteristics of the platform and matches them with specific application needs.…”
Section: E. Shared-Buffer Reuse Order and MOESI Protocol
confidence: 99%
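The exhaustive approach described above can be sketched as a grid search over runtime-parameter combinations, in the spirit of OTPO. This is a minimal illustration, not the tool's actual implementation: the parameter names, candidate values, and the `benchmark` scoring function are all hypothetical stand-ins for running a real MPI benchmark under each configuration.

```python
import itertools

# Hypothetical Open MPI MCA parameters and candidate values (illustrative only).
PARAM_SPACE = {
    "btl_sm_eager_limit": [1024, 4096, 16384],
    "mpi_paffinity_alone": [0, 1],
}

def benchmark(config):
    """Stand-in for launching an MPI benchmark under `config` and
    measuring its performance; here a toy score replaces a real run."""
    return config["btl_sm_eager_limit"] / 1024 + config["mpi_paffinity_alone"]

def best_config(param_space, score):
    """Enumerate every combination of parameter values and keep the
    configuration with the highest score."""
    keys = list(param_space)
    best, best_score = None, float("-inf")
    for values in itertools.product(*(param_space[k] for k in keys)):
        config = dict(zip(keys, values))
        s = score(config)
        if s > best_score:
            best, best_score = config, s
    return best, best_score
```

In a real setting, `benchmark` would invoke `mpirun` with the candidate MCA parameters and time the application; the exhaustive loop is why this approach becomes expensive as the parameter space grows, which motivates the machine-learning alternative.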
“…Mohamad C. et al. [6] proposed OTPO, a tool that optimizes Open MPI runtime parameters, giving users and system researchers a way to tune their environment to meet performance requirements. In [28,29], the main idea is to conduct an off-line training phase that derives the "best" Open MPI configurations for each target architecture using machine learning algorithms. Jha et al.…”
Section: Related Work
confidence: 99%
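The off-line training idea mentioned above can be sketched as follows: during training, measure and store the best-known configuration for each (platform, application) feature vector; at run time, return the stored configuration of the nearest known feature vector. This is a hedged sketch only; the feature names, stored configurations, and the nearest-neighbor matching rule are illustrative assumptions, not the method from [28,29].

```python
# Off-line phase (assumed already done): best measured configuration per
# feature vector. Features here are (core count, typical message size in KB).
TRAINING_SET = [
    ((8, 4),    {"eager_limit": 4096}),
    ((64, 256), {"eager_limit": 65536}),
]

def predict_config(features):
    """Pick the stored configuration whose training features are closest
    (squared Euclidean distance) to the observed `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, config = min(TRAINING_SET, key=lambda item: dist(item[0], features))
    return config
```

The design choice this illustrates is the trade-off the citation statement describes: the expensive benchmarking happens once per platform off-line, so each new application run only pays for a cheap lookup rather than an exhaustive search.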
“…Much work has been done on creating machine learning-based models and using them for tuning and auto-tuning, e.g., to determine loop unroll factors, which optimizations to apply for parallel stencil computations, Message Passing Interface (MPI) parameters, and general compiler optimizations. Kulkarni et al. developed a method to determine a good ordering of compiler optimization phases on a per-function basis.…”
Section: Related Work
confidence: 99%
“…Furthermore, analytical performance models for GPUs and heterogeneous systems have been developed [42-45] and used for auto-tuning [5]. Much work has been done on creating machine learning-based models and using them for tuning and auto-tuning, e.g., to determine loop unroll factors [28,46], which optimizations to apply for parallel stencil computations [7,47], Message Passing Interface (MPI) parameters [14], and general compiler optimizations [20-32]. Kulkarni et al. [13] developed a method to determine a good ordering of compiler optimization phases. Machine learning approaches have also been used for performance modeling and auto-tuning in a heterogeneous setting.…”
Section: Related Work
confidence: 99%