2016
DOI: 10.1016/j.mehy.2016.04.031
The lexeme hypotheses: Their use to generate highly grammatical and completely computerized medical records

Cited by 3 publications (1 citation statement)
References 11 publications
“…This is further complicated by the prevalence of unstructured free-text entries, which commonly contain misspellings, local abbreviations, and other errors that make them unsuitable for computerized decision support, information exchange, and secondary analysis. [1][2][3][4][5] The need for standardization has been well described in the past, 6 but no solution has achieved wide adoption. Any potential solution must also facilitate clinical workflows and support clinicians, rather than being optimized for secondary reuse.…”
Section: Background and Significance
confidence: 99%