2019
DOI: 10.1016/j.cels.2019.09.011

FAIRshake: Toolkit to Evaluate the FAIRness of Research Digital Resources

Abstract: As more digital resources are produced by the research community, it is becoming increasingly important to harmonize and organize them for synergistic utilization. The findable, accessible, interoperable, and reusable (FAIR) guiding principles have prompted many stakeholders to consider strategies for tackling this challenge. The FAIRshake toolkit was developed to enable the establishment of community-driven FAIR metrics and rubrics paired with manual and automated FAIR assessments. FAIR assessments are visual…


Cited by 41 publications (40 citation statements)
References 11 publications
“…Some funders such as the EU and NIH are developing policies around FAIR data which may include a more formal assessment of FAIRness. Such tools include FAIRmetrics 1, FAIR Maturity Indicators (Wilkinson et al., 2019), FAIRshake (Clarke et al., 2019) and the FORCE11/Research Data Alliance evaluation criteria (McQuilton et al., 2020).…”
Section: Discussion (mentioning)
confidence: 99%
“…As documented by the Neuroscience Information Framework (NIF; neuinfo.org), on-line data repositories are diverse, each with their own custom user interfaces and few standards as to how they should be designed and the functions they should support [5]. With data repositories increasing in importance, groups have been developing recommendations on a basic set of functions that these repositories should support (e.g., [6][7][8][9][10][11]). Many of these focus on FAIR, e.g., FAIRshake [6], but they are by no means the only criteria.…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition, by vertically stacking representations for different datasets, we can visually compare FAIRness levels for each maturity indicator. In the literature, another example of visualization is the insignia, created for the platform FAIRshake [15]. It consists of multiple squares colored from blue (satisfactory) to red (unsatisfactory) for different levels of FAIRness.…”
Section: Discussion (mentioning)
confidence: 99%
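The insignia scheme quoted above (per-metric squares colored from blue for satisfactory to red for unsatisfactory, stacked per resource) can be sketched in a few lines. This is a hypothetical illustration of the color-grid idea only, not FAIRshake's actual implementation; the names `score_to_rgb` and `insignia` are made up for this sketch.

```python
# Illustrative sketch (assumed, not FAIRshake's code): encode each FAIR
# metric score in [0, 1] as a colored square, red (0.0, unsatisfactory)
# through blue (1.0, satisfactory), then lay the squares out in rows.

def score_to_rgb(score):
    """Linearly interpolate an RGB color from red (0.0) to blue (1.0)."""
    score = max(0.0, min(1.0, score))  # clamp out-of-range scores
    return (round(255 * (1 - score)), 0, round(255 * score))

def insignia(scores, per_row=4):
    """Arrange per-metric colors into rows of squares (list of rows)."""
    colors = [score_to_rgb(s) for s in scores]
    return [colors[i:i + per_row] for i in range(0, len(colors), per_row)]

# One resource assessed against eight hypothetical metrics:
grid = insignia([1.0, 0.9, 0.5, 0.0, 1.0, 0.2, 0.7, 0.4])
```

Stacking one such grid per dataset, as the quoted passage suggests, then lets a reader compare FAIRness per maturity indicator column by column.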
“…Following these criteria, two platforms have recently been developed to automatically compute FAIR maturity indicators: FAIR Evaluation Services and FAIRshake. The first platform offers an evaluation of maturity indicators and compliance tests [14], whereas the second provides metrics, rubrics, and evaluators for registered digital resources [15]. Both platforms provide use cases for FAIRness assessment; however, they do not provide systematic analysis of evaluated datasets and repositories.…”
Section: Introduction (mentioning)
confidence: 99%
“…Sharing experimental data has major advantages for the scientific community. With the amount of biological data produced increasing each year, structured databases are a crucial tool to store, share and maintain data, improving quality and reproducibility (1). Being able to have this information aggregated in a single discipline-specific repository could save a great deal of time.…”
Section: Introduction (mentioning)
confidence: 99%