1994
DOI: 10.1006/jpdc.1994.1099
Analyzing Scalability of Parallel Algorithms and Architectures

Cited by 139 publications (48 citation statements)
References 25 publications (35 reference statements)
“…A comprehensive discussion of various scalability and performance measures can be found in the survey by Kumar and Gupta [50]. While over ten years old, the results cited in this survey and their applications are particularly relevant now, as scalable parallel platforms are finally being realized.…”
Section: Cost-Optimality and the Isoefficiency Function
confidence: 99%
“…The crucial step is to identify the overhead as a function of problem scale and system scale. The scalability is evaluated with a realistic tool, the isoefficiency function (Kumar et al. 1994), based on the characterized communication patterns and overhead sources. Kumar et al. (1994) point out that if the parallel efficiency can be maintained constant as the total work grows at least linearly with p, then the parallel implementation is scalable. The authors have previously used the isoefficiency function to analyze the performance of the restarted generalized minimum residual (GMRES) method (Saad 2003) for the iterative solution of large-scale sparse linear systems (Sosonkina et al. 2002).…”
Section: Scalability Analysis
confidence: 99%
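The isoefficiency idea quoted above can be made concrete with a small numeric sketch: efficiency is E = W / (W + T_o(W, p)), and the isoefficiency function asks how fast the work W must grow with the processor count p to hold E constant. The overhead model below (a tree-reduction-style T_o = p log2 p) is a hypothetical illustration, not taken from the cited papers.

```python
import math

def efficiency(W, p, overhead):
    """Parallel efficiency E = W / (W + T_o(W, p))."""
    return W / (W + overhead(W, p))

# Hypothetical overhead model for illustration only: total overhead
# T_o(p) = p * log2(p), roughly the shape of a tree-based reduction.
def tree_overhead(W, p):
    return p * math.log2(p)

def isoefficiency_work(p, E, overhead):
    """Work W needed to sustain efficiency E on p processors.

    From E = W / (W + T_o) we get W = (E / (1 - E)) * T_o.
    Here T_o does not depend on W, so the relation is direct."""
    return E / (1.0 - E) * overhead(None, p)

for p in (2, 4, 8, 16):
    W = isoefficiency_work(p, 0.8, tree_overhead)
    print(p, W, efficiency(W, p, tree_overhead))
```

Since T_o grows faster than linearly in p, W must also grow faster than linearly to keep E at 0.8, which matches the quoted criterion: the implementation is scalable, but only if the problem size scales up with the machine.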
“…A nice summary of existing approaches is presented in [7,11]. We suggest a model for a coarse subdivision of parallel runtime into "good" and "bad" parts.…”
Section: Introduction
confidence: 99%