Estimating parallel performance (2013)
DOI: 10.1016/j.jpdc.2013.01.011

Cited by 12 publications (7 citation statements). References 27 publications.
“…Lobachev et al. [26] provide a mechanism to estimate the performance of massively parallel code. It also allows exact measurement and prediction of scalability in terms of problem size and processor numbers.…”
Section: GPGPUs for Parallelization (mentioning)
confidence: 99%
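The cited work concerns estimating and predicting parallel performance as both problem size and processor count scale. As a purely illustrative aside (not the method of Lobachev et al.), the sketch below fits a serial fraction to two measured runtimes via Amdahl's law and extrapolates the speedup to larger processor counts; the function names and the sample timings are assumptions.

def serial_fraction(t1, tp, p):
    """Solve Amdahl's law T(p) = T(1) * (s + (1 - s) / p) for the serial fraction s."""
    return (tp / t1 - 1.0 / p) / (1.0 - 1.0 / p)

def predicted_speedup(s, p):
    """Predicted speedup on p processors for serial fraction s."""
    return 1.0 / (s + (1.0 - s) / p)

# Example (assumed timings): 100 s on 1 processor, 30 s on 4 processors.
s = serial_fraction(100.0, 30.0, 4)
for p in (8, 16, 32):
    print(p, round(predicted_speedup(s, p), 2))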
“…Similar to Ulrich et al. [2014b], we perform an optimisation step over all slices at once to minimise the energy of the non-linear distortions based on detected features ('global' registration). In contrast to the previous work, we (1) use a multi-resolution approach: our new method processes not only features at the scale of the initial rigid transform, but also, iteratively, much smaller features.…”
Section: Visual and Operational Comparison with Ulrich et al. (mentioning)
confidence: 99%
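The passage above describes multi-resolution ("coarse-to-fine") refinement on top of an initial rigid transform. The sketch below is only a generic illustration of that idea, not the authors' registration pipeline: it estimates an integer translation between two images, refining the estimate at successively finer scales. The helper names and the synthetic test pattern are assumptions.

import numpy as np

def downsample(img, factor):
    """Block-average the image by `factor` along both axes."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def refine_shift(ref, mov, shift, radius=2):
    """Search a small neighbourhood around `shift` for the best circular shift of `mov`."""
    best, best_err = shift, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = (shift[0] + dy, shift[1] + dx)
            err = np.mean((ref - np.roll(mov, cand, axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

def multiresolution_shift(ref, mov, scales=(8, 4, 2, 1)):
    """Estimate the shift aligning `mov` to `ref`, refining from coarse to fine scales."""
    shift, prev = (0, 0), None
    for s in scales:
        if prev is not None:                 # propagate the coarser estimate to this scale
            f = prev // s
            shift = (shift[0] * f, shift[1] * f)
        shift = refine_shift(downsample(ref, s), downsample(mov, s), shift)
        prev = s
    return shift                             # in full-resolution pixels

# Smooth synthetic pattern shifted by (5, -3); the estimate should recover (-5, 3).
y, x = np.mgrid[0:128, 0:128]
ref = np.sin(x / 9.0) + np.cos(y / 13.0)
mov = np.roll(ref, (5, -3), axis=(0, 1))
print(multiresolution_shift(ref, mov))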
“…It starts by predicting the execution time of the MDFG. Based on [16,17], this term consists of estimating two parameters, namely the computation time and the communication time. The first is predicted by multiplying the cycle period by the number of cycles of the whole application.…”
Section: Technique Choice (mentioning)
confidence: 99%
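Assuming the model described above is simply additive (computation time plus communication time), a minimal sketch of the execution-time estimate follows; the parameter names and example values are illustrative, not taken from [16,17].

def predict_execution_time(cycle_period_s, cycle_count, communication_time_s):
    """Predicted time = computation time (cycle period * cycle count) + communication time."""
    return cycle_period_s * cycle_count + communication_time_s

# Example (assumed values): 2 ns cycle period, 5e9 cycles, 1.5 s of communication.
print(predict_execution_time(2e-9, 5_000_000_000, 1.5))   # 11.5 seconds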