Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis 2016
DOI: 10.18653/v1/w16-6110
Replicability of Research in Biomedical Natural Language Processing: a pilot evaluation for a coding task

Abstract: The scientific community is facing rising concerns about the reproducibility of research in many fields. To address this issue in Natural Language Processing, the CLEF eHealth 2016 lab offered a replication track together with the Clinical Information Extraction task. Herein, we report detailed results of the replication experiments carried out with the three systems submitted to the track. While all results were ultimately replicated, we found that the systems were poorly rated by analysts on documentation a…

Cited by 4 publications (2 citation statements) · References 11 publications (4 reference statements)
“…must exist along with published papers. As part of a NLP challenge, Névéol et al (2016) report results on replicating experiments from three systems submitted to the CLEF eHealth track. They show that replication is feasible although "ease of replicating results varied".…”
Section: Related Work
confidence: 99%
“…The next steps include, but are not limited to, analyzing whether the supplementary material and appendices actually do improve replicability, as stated by Névéol et al (2016). Furthermore, evaluating the repository offered by Fares et al (2017), whether other researchers actually build on top of it and with what results.…”
Section: Future Work
confidence: 99%