International Symposium on Code Generation and Optimization
DOI: 10.1109/cgo.2005.29
Predicting Unroll Factors Using Supervised Classification

Cited by 136 publications (137 citation statements). References 15 publications.
“…Most of these works also require a large number of training runs. Stephenson et al [28] show more complementarity with collective optimization as program matching is solely based on static features.…”
Section: Background and Related Work
Citation type: mentioning (confidence: 99%)
“…Several research works have shown how machine-learning and statistical techniques [25,29,28,34] can be used to select or tune program transformations based on program features. Agakov et al [3] and Cavazos et al [8] use machine-learning to focus iterative search using either syntactic program features or dynamic hardware counters and multiple program transformations.…”
Section: Background and Related Work
Citation type: mentioning (confidence: 99%)
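To make the idea in the statement above concrete, the following is a minimal sketch of predicting an unroll factor from static loop features with a supervised classifier. It is illustrative only: the feature names, toy data, and the nearest-neighbor model are assumptions for the sketch, not the pipeline or features used in the cited paper.

```python
# Sketch: supervised classification of loop unroll factors from static features.
# All features, data, and labels below are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row describes one loop with illustrative static features:
# [estimated trip count, body size in IR ops, memory ops, branches]
X = np.array([
    [100, 12,  4, 1],
    [256,  6,  2, 0],
    [512,  9,  3, 1],
    [  8, 40, 10, 3],
    [  4, 60, 15, 4],
    [ 16, 25,  8, 2],
])
# Label = unroll factor that performed best for that loop in prior training runs.
y = np.array([8, 8, 8, 1, 1, 2])

# Nearest-neighbor classifier: a new loop gets the unroll factor
# favored by the most similar previously seen loops.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

new_loop = np.array([[128, 10, 3, 1]])
print(clf.predict(new_loop))  # predicted unroll factor (depends on the toy data above)
```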
“…Within the scope of this survey, scientific publications that use machine learning for code optimization at compile time include [1,17,34,35,57,69,87,95,99], whereas scientific publications that use meta-heuristics for code optimization include [21,88,92,93]. Table 4 lists the characteristics of the selected primary studies that address code optimization at compile time.…”
Section: Code Optimization
Citation type: mentioning (confidence: 99%)
“…It is critical to choose features that have significant impact on the prediction model. There are different feature selection techniques that can find features that contain the most useful information to distinguish between classes, for example mutual information score (MIS) [27], greedy feature selection [87], or information gain ratio [45].…”
Section: Machine Learning
Citation type: mentioning (confidence: 99%)
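As a brief illustration of the mutual-information approach mentioned in the statement above, here is a minimal sketch using scikit-learn. The feature matrix and labels are synthetic placeholders, not data from any cited study, and the choice of k is arbitrary.

```python
# Sketch: ranking and selecting features by estimated mutual information
# with the class label. Data is synthetic and for illustration only.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)

# 200 samples, 10 candidate static features.
X = rng.normal(size=(200, 10))
# The label depends mostly on features 0 and 3; the rest are noise.
y = (X[:, 0] + 2.0 * X[:, 3] > 0).astype(int)

# Score each feature by its estimated mutual information with the label.
scores = mutual_info_classif(X, y, random_state=0)
print(np.round(scores, 3))

# Keep only the k most informative features for the downstream model.
selector = SelectKBest(mutual_info_classif, k=2)
X_selected = selector.fit_transform(X, y)
print(selector.get_support(indices=True))  # indices of the kept features, likely [0, 3]
```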