Proceedings of the 12th International Workshop on Principles of Software Evolution and the 7th Annual ERCIM Workshop on Software Evolution, 2011
DOI: 10.1145/2024445.2024451
Towards a benchmark for traceability

Abstract: Rigorously evaluating and comparing traceability link generation techniques is a challenging task. Traceability is still expensive to implement, so it is difficult to find a complete case study that includes both a rich set of artifacts and the traceability links among them. Consequently, researchers usually have to create their own case studies by taking a number of existing artifacts and creating traceability links for them. There are two major issues related to the creation of one's own examples…

Citations: cited by 15 publications (16 citation statements, published 2012–2022)
References: 21 publications
“…Charrada et al. [2] developed a benchmark for traceability based on three criteria identical to those proposed by Dekhtyar et al. [5]: precision, recall, and time. They evaluated their benchmark by analyzing one traceability mining tool and gauging its performance against the proposed criteria.…”
Section: Related Work (mentioning; confidence: 99%)
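
The precision and recall criteria referenced in the statement above are the standard information-retrieval measures, computed by comparing a tool's candidate link set against a gold-standard link set; time is simply the tool's runtime. A minimal Python sketch of the two formulas follows, using hypothetical link sets chosen only to illustrate the computation:

    # Hypothetical (requirement, code artifact) link sets -- illustrative only,
    # not taken from the paper or any real benchmark.
    candidate_links = {("REQ-1", "Order.java"), ("REQ-1", "Cart.java"), ("REQ-2", "Pay.java")}
    gold_links = {("REQ-1", "Order.java"), ("REQ-2", "Pay.java"), ("REQ-3", "Ship.java")}

    true_positives = candidate_links & gold_links  # links that are both retrieved and correct

    precision = len(true_positives) / len(candidate_links)  # share of retrieved links that are correct
    recall = len(true_positives) / len(gold_links)          # share of correct links that were retrieved

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
    # precision = 0.67, recall = 0.67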
“…RR2: Integration with other tools [14]. RR3: Performance [5,2]. RR4: Usability and data visualization [14].…”
Section: Related Work (mentioning; confidence: 99%)
“…This is due to its high cost, complexity, time-consuming nature, and error-proneness [10,16,17]. As a result, there are almost no publicly available projects containing robust benchmarks [5]. Second, we can evaluate a new recovery technique with benchmarks developed by other researchers and/or applied by them to other software projects.…”
Section: Introduction (mentioning; confidence: 99%)
“…Third, we can establish our own benchmarks to meet the specific evaluation needs of our recovery technique. Given the well-known difficulty of obtaining or using the two types of benchmarks above, researchers usually create their own benchmarks to evaluate their new recovery techniques [5,8,9,11]. Nevertheless, a major issue arises: how do we actually go about building affordable and robust traceability benchmarks?…”
Section: Introduction (mentioning; confidence: 99%)