2008
DOI: 10.1007/s10664-008-9075-7
Presenting software engineering results using structured abstracts: a randomised experiment

Abstract: When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete …

Cited by 68 publications (62 citation statements)
References 20 publications
“…The measurement is the sorting of articles into the systematic map as explained in Section 2 and Section 3, based on a keywording of abstracts. In line with existing studies, we noticed that abstracts often lacked information relevant to our categorization scheme and were at times misleading [6]. Mendes, for example, noted a high number of papers classified incorrectly because terms were used incorrectly (for example, suggesting a specific research strategy such as "experiment") [14].…”
Section: Validity and Reliability (supporting)
confidence: 57%
“…These were respectively assessed using a set of 8 questions similar to those employed in previous studies (Budgen et al. 2011, 2008) and a 10-point Likert-like scale. The completeness score for a specific abstract for each judge was calculated as:…”
Section: Independent and Dependent Variables (mentioning)
confidence: 99%
“…In addition, the title and keywords were removed from each abstract. The questions were derived from those used in the previous studies (Budgen et al. 2011, 2008), with modifications to address the restriction of using only those papers that had an empirical element. For the purpose of data collection, each student judge was required first to complete a consent form, then a short form asking for demographic information, and would then receive the four data collection forms in the defined order, one at a time.…”
Section: Experimental Materials (mentioning)
confidence: 99%
“…This is sometimes hard, as many abstracts omit relevant information [38]. As a consequence, Brereton et al. [36] recommend also reviewing the conclusions of the papers, in addition to the titles and abstracts.…”
Section: Classification of Papers and Missing Information (mentioning)
confidence: 99%