2005 International Symposium on Empirical Software Engineering, 2005
DOI: 10.1109/isese.2005.1541839

Empirical study design in the area of high-performance computing (HPC)

Abstract: The development of High-Performance Computing (HPC) programs is crucial to progress in many fields of scientific endeavor. We have run initial studies of the productivity of HPC developers and of techniques for improving that productivity, which have not previously been the subject of significant study. Because of key differences between development for HPC and for more conventional software engineering applications, this work has required the tailoring of experimental designs and protocols. A major contribut…

Cited by 11 publications (7 citation statements); References 12 publications
“…We found this claim in four studies [2,3,54,55]. There are two primary reasons given for this claim.…”
Section: Table 10
confidence: 72%
“…As was mentioned in the introduction, the cost or danger of performing these physical experiments is often the reason why scientists build software models in the first place, so the physical experiments will not be done before promising software models have been created. Second, experimental validation is frequently impractical since it is usually difficult or impossible to know what the correct result for a piece of software will be until the software is run [2,54,55]. In some cases, scientific software developers treat validation studies as research projects or theses in and of themselves due to the challenge in performing them.…”
Section: Table 10
confidence: 99%
“…The importance, and current lack, of empirical assessment has been demonstrated in many software engineering areas like high performance computing (Shull et al 2005), agile software development (Dybå and Dingsøyr 2008), regression testing (Engstrom et al 2008), variability management (Chen et al 2009), reverse engineering (Tonella et al 2007), and information visualization (Ellis and Dix 2006). The Goal Question Metric (GQM) paradigm is a general approach for the specification of a measurement system targeting a particular set…”
Section: Related Studies
confidence: 99%
“…In this case, the goal of the ESWSs was to understand whether various development approaches could quickly increase the effectiveness of novice programmers. Therefore, because the desired outcome of the ESWSs was to learn about the activities of novice developers, students were exactly the right test population (Shull et al 2005; Hochstein et al 2006; Basili et al 2008).…”
Section: Reasons To Use ESWSs
confidence: 99%
“…(Shull et al 2000; Jaccheri 2001; Shull et al 2001; Baresi and Morasca 2002; Baresi et al 2003; Morasca 2003; Shull et al 2005; Walia and Carver 2006). In an earlier paper, we addressed some issues in ESWSs by presenting a framework for assessing student experiments from four points of view: researcher, student, instructor, and professional.…”
Section: Introduction
confidence: 99%