2014
DOI: 10.1136/amiajnl-2013-002544
Evaluating the state of the art in disorder recognition and normalization of the clinical narrative

Abstract: Objective The ShARe/CLEF eHealth 2013 Evaluation Lab Task 1 was organized to evaluate the state of the art in clinical text processing for (i) disorder mention identification/recognition based on the Unified Medical Language System (UMLS) definition (Task 1a) and (ii) disorder mention normalization to an ontology (Task 1b). Such a community evaluation has not been previously executed. Task 1a included a total of 22 system submissions, and Task 1b included 17. Most of the systems employed a combination of rules and machine…

Cited by 106 publications (74 citation statements) · References 20 publications
“…Two annotators tagged each document, and conflicting annotations were checked by a third independent expert. The ShARe/CLEF 2013 eHealth shared task [22] can be viewed as a continuation of the i2b2 NLP challenges, focusing on disorder mention identification and normalization of disorders. The normalization step maps each disorder mention to the closest equivalent UMLS CUI in the SNOMED CT subset.…”
Section: Related Work
confidence: 99%
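As a minimal illustration of the normalization step described above, the sketch below maps a disorder mention string to a UMLS CUI by exact dictionary lookup. The lexicon entries and the lookup strategy are assumptions for demonstration only; the shared-task systems used far richer resources (abbreviation expansion, fuzzy matching, contextual disambiguation).

```python
# Toy normalization: map a disorder mention to a UMLS CUI via
# case-insensitive exact lookup. Real systems layer abbreviation
# expansion, fuzzy matching, and context on top of this.
LEXICON = {
    "atrial fibrillation": "C0004238",  # UMLS CUI for atrial fibrillation
    "afib": "C0004238",                 # common shorthand, same concept
}

def normalize(mention):
    """Return the CUI for a disorder mention, or None (CUI-less)."""
    return LEXICON.get(mention.strip().lower())

print(normalize("Afib"))          # C0004238
print(normalize("unknown term"))  # None
```

Mentions that fall outside the lexicon return None, mirroring the task's "CUI-less" category for mentions with no suitable SNOMED CT concept.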
“…Several open evaluations such as ShARe/CLEF (Pradhan et al., 2013) and SemEval (Pradhan et al., …) … The example sentence in Figure 1 satisfies these two unique criteria. Since entities may not occur contiguously, a BIO (Begin-Inside-Outside) style sequence tagger is no longer directly applicable (Bodnari et al., 2013).…”
Section: Introduction
confidence: 99%
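The BIO limitation noted in the statement above can be made concrete with a short sketch. The tokens and labels below are invented for illustration, not drawn from the shared-task data: BIO decoding assumes every entity is a contiguous token span, so a discontiguous disorder mention has no faithful BIO encoding.

```python
# BIO (Begin-Inside-Outside) tagging assigns one label per token and
# assumes each entity is a contiguous span of tokens.
def decode_bio(tokens, tags):
    """Recover entity spans from a BIO-tagged token sequence."""
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # start of a new entity
            if current:
                entities.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continuation of the open entity
            current.append(token)
        else:                          # "O": outside any entity
            if current:
                entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

# Contiguous mention: BIO works as expected.
tokens = ["left", "atrium", "is", "dilated"]
print(decode_bio(tokens, ["B", "I", "O", "O"]))  # ['left atrium']

# A discontiguous mention spanning "left atrium ... dilated" cannot be
# encoded: labeling all three content tokens B/I/I would wrongly pull
# "is" into the span, while leaving "dilated" as O loses part of the
# mention. Either way, plain BIO misrepresents the entity.
```

This is why systems handling discontiguous mentions extend the label set (e.g., BIO variants with discontinuity markers) or move away from per-token span tagging altogether.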
“…This is especially challenging due to the properties of clinical text: formal grammar is typically not complied with, while misspellings and non-standard shorthand abound (Allvin et al., 2011). Testament to the growing importance of domain-adapted NER systems are the many shared tasks and challenges that have been organised in recent years (Uzuner et al., 2010; Uzuner et al., 2011; Pradhan et al., 2014; Pradhan et al., 2015). However, most of the existing NER modules used in clinical NLP systems, such as MedLEE (Friedman, 1997), MetaMap (Aronson and Lang, 2010) and cTAKES (Savova et al., 2010), are rule-based (i.e., built on hand-crafted rules) and thus rely heavily on comprehensive medical dictionaries.…”
Section: Named Entity Recognition In Clinical Text
confidence: 99%