2010
DOI: 10.1007/978-3-642-13821-8_3
Validating Model-Driven Performance Predictions on Random Software Systems

Abstract: Software performance prediction methods are typically validated by taking an appropriate software system, performing both performance predictions and performance measurements for that system, and comparing the results. The validation includes manual actions, which makes it feasible only for a small number of systems. To significantly increase the number of systems on which software performance prediction methods can be validated, and thus improve the validation, we propose an approach where the syste…

Cited by 1 publication (5 citation statements); references 22 publications (26 reference statements).
“…To overcome this obstacle, we use our tool for generating synthetic software applications [7]. The tool assembles modules taken from industry standard benchmarks [22] and other sources [6] into applications whose architecture is random and whose workload exhibits the effects of resource sharing.…”
Section: Random Software Applications
confidence: 99%
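The quoted statement describes assembling benchmark modules into applications with a random architecture. A minimal, hypothetical Python sketch of that idea follows; the module names, the component structure, and the `generate_application` function are illustrative assumptions, not the actual API of the tool cited as [7]:

```python
import random

# Hypothetical pool of workload modules; in the cited approach the modules
# come from industry-standard benchmarks [22] and other sources [6].
MODULE_POOL = ["compress", "crypto", "sort", "parse", "render", "hash"]

def generate_application(num_components, seed=None):
    """Assemble a synthetic application with a random architecture:
    each component runs a randomly chosen module and calls a random
    subset of earlier components, keeping the call graph acyclic."""
    rng = random.Random(seed)
    components = []
    for i in range(num_components):
        # Each component may call up to two previously created components.
        callees = rng.sample(range(i), k=rng.randint(0, min(i, 2)))
        components.append({
            "id": i,
            "module": rng.choice(MODULE_POOL),
            "calls": callees,  # edges of the random architecture
        })
    return components

if __name__ == "__main__":
    for comp in generate_application(5, seed=42):
        print(comp["id"], comp["module"], comp["calls"])
```

Seeding the generator makes each random architecture reproducible, which matters when the same system must be both predicted and measured.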
“…The original publication is available at www.springerlink.com, http://www.springerlink.com/content/50868p3861927512/. Using a wide range of synthetic software applications makes our results more general in that our observations are not collected on only a few systems, where inadvertent bias, experiment tuning or even plain luck can distort the conclusions significantly. For a more involved discussion of the representativeness of the applications, we refer the reader to [7]; here, we limit ourselves to asserting that our conclusions should be reasonably valid for concurrent systems running industry-standard processor-intensive workloads.…”
Section: Random Software Applications
confidence: 99%