2019
DOI: 10.12688/f1000research.19994.1

On the evaluation of research software: the CDUR procedure

Abstract: Background: Evaluation of the quality of research software is a challenging and relevant issue, still not sufficiently addressed by the scientific community. Methods: Our contribution begins by defining, precisely but widely enough, the notions of research software and of its authors, followed by a study of the evaluation issues, as the basis for the proposition of a sound assessment protocol: the CDUR procedure. Results: CDUR comprises four steps, introduced as follows: Citation, to deal with correct RS identification…
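The abstract excerpt above is truncated before the remaining steps are spelled out. Purely as an illustrative aid (not the authors' tooling), the sketch below shows one way a CDUR-style assessment checklist could be organized in code; the criteria strings, function names, and the decision to model only the recoverable Citation step are assumptions made for illustration.

```python
# Hypothetical sketch of a CDUR-style assessment checklist.
# The example criteria are assumptions for illustration; they are
# not taken from the paper's text.
from dataclasses import dataclass, field


@dataclass
class Step:
    """One step of the assessment protocol."""
    name: str
    question: str
    criteria: list[str] = field(default_factory=list)


CDUR = [
    Step(
        "Citation",
        "Is the research software correctly identified?",
        [
            "software name and version are stated",
            "authors and their roles are listed",
            "a citable identifier (e.g. a DOI) exists",
        ],
    ),
    # The remaining steps are omitted here because the abstract
    # excerpt above is truncated before defining them.
]


def assess(step: Step, answers: dict[str, bool]) -> float:
    """Return the fraction of a step's criteria that are satisfied."""
    if not step.criteria:
        return 0.0
    met = sum(answers.get(criterion, False) for criterion in step.criteria)
    return met / len(step.criteria)


if __name__ == "__main__":
    citation = CDUR[0]
    score = assess(citation, {"software name and version are stated": True})
    print(f"{citation.name}: {score:.0%} of criteria met")
```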

Cited by 6 publications (6 citation statements) · References 51 publications (59 reference statements)
“…The TRAAC framework takes transparency into account by answering three underlying questions ( EC, 2019a ; Gomez-Diaz and Recio, 2019 ; OECD, 2005 ), which lead to the development of four transparency related criteria in Table 1 . Who created the T&M?…”
Section: TRAAC Framework (mentioning)
confidence: 99%
“…The reliability pillar of the TRAAC framework focuses on aforementioned aspects by assessing the correctness and consistency of the tools' outputs (see Table 2 ). Three underlying questions are addressed ( Aerts, 2017 ; EPA, 2017 ; Gomez-Diaz and Recio, 2019 ; Hristozov et al, 2016 ; Isigonis et al, 2019 ; JRC, 2018 ; Morris et al, 2010 ; OECD, 2005 ; Sørensen et al, 2019 ) when considering the reliability of the T&M, which lead to the development of six reliability related criteria in Table 2 . Has the T&M been verified and received support within the scientific community?…”
Section: TRAAC Framework (mentioning)
confidence: 99%
“…This kind of context is equally important when we discuss research software, as the needs of a group or individual clearly frame any subsequent evaluation. Although efforts such as FAIR [1] exist to ensure that software in the research domain is findable, accessible, interoperable, and reusable (FAIR), and there is work to define the life-cycle [2] or measuring of such software [7], these efforts focus on quality or best practices, which is a different task than definition. There is also often an implied bias that the definition is self explanatory, and that research software is simply software that is used in research [3,6].…”
Section: Introduction (mentioning)
confidence: 99%
“…Is it good enough to use metadata purely to discover the functionality of a RS code, or might it be necessary to explore in more details the functions and libraries in RS? It is maybe a naïve vision, but research data features a much broader heterogeneity than RS, so many different types of data can be generated that it makes these procedures difficult to apply or generalise to RD in general (As mentioned in the report of reviewer 1 for RS 2 ). In general, more specific examples are provided for RS than for RD.…”
(mentioning)
confidence: 99%