2004
DOI: 10.1197/jamia.m1552
Automated Encoding of Clinical Documents Based on Natural Language Processing

Abstract: Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
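The abstract describes mapping free-text clinical phrases to UMLS codes while keeping the coded output linked to the source text. As a rough illustration only (this is not the paper's MedLEE system, and the lexicon below is a hypothetical hand-made table), a minimal dictionary-lookup encoder might look like:

```python
import re

# Hypothetical mini-lexicon mapping clinical phrases to UMLS CUIs.
# A real system uses the full UMLS Metathesaurus plus NLP parsing.
LEXICON = {
    "chest pain": "C0008031",
    "shortness of breath": "C0013404",
    "pneumonia": "C0032285",
}

def encode(text):
    """Return (phrase, cui, start, end) tuples for each match,
    keeping the text span alongside the code so that retrieval
    stays linked to the original document."""
    lowered = text.lower()
    results = []
    for phrase, cui in LEXICON.items():
        for m in re.finditer(re.escape(phrase), lowered):
            results.append((phrase, cui, m.start(), m.end()))
    return sorted(results, key=lambda r: r[2])

note = "Patient reports chest pain and shortness of breath."
print(encode(note))
```

This sketch captures only the mapping idea; the method evaluated in the paper additionally handles modifiers, negation, and other contextual information that make the coded output suitable for effective retrieval.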

Cited by 388 publications (275 citation statements)
References 23 publications
“…Quantitatively the precision of the tool presented here is on par with other similar tools such as MedLEE; 0.89 (Friedman 2004) and the tool presented in Meystre 2006; 0.76, despite that a relatively simple approach presented here.…”
Section: Fuzzy Mapping Features (supporting, confidence: 48%)
“…Regarding biomedical text mining, tools like BioMedLEE [Friedman et al, 2004], MetaMap [Aronson and Lang, 2010] or SemRep [Liu et al, 2012] are closely related to our approach. The tools mostly focus on annotation of texts with concepts from standard biomedical vocabularies like UMLS which is very useful for many practical applications.…”
Section: Related Work (mentioning, confidence: 99%)
“…The existing work on clinical document annotation focused on explicit entity mentions with contiguous phrases (Aronson, 2006) (Savova et al, 2010) (Friedman et al, 2004) (Fu and Ananiadou, 2014). Going one step beyond, the SemEval 2014 task 7 recognized the need for identifying discontiguous mentions of explicit entities.…”
Section: Related Work (mentioning, confidence: 99%)