2016
DOI: 10.4137/becb.s36155
Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes

Abstract: The ability to find highly related clinical concepts is essential for many applications such as hypothesis generation, query expansion for medical literature search, search results filtering, ICD-10 code filtering and many other applications. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate as they depend on expertise of several subject matter experts making the terminology curation process open to geographic and lan…

Cited by 5 publications
(7 citation statements)
References 3 publications
“…Also, creating ontologies and keeping them up to date is a time-consuming manual process. Unsupervised neural networks, such as Word2Vec, capture a larger number of related concepts in the clinical note corpus that are beyond the static ontology terms (e.g., generic and user-defined abbreviations and common misspellings), which supports the work of Ganesan et al. 10 While the intelligent assistance of DeepSuggest is rudimentary, it is surprisingly effective. DeepSuggest recommends a larger number of relevant words due to its much larger dictionary derived from the local corpus (►Table 4).…”
Section: Discussionmentioning
confidence: 52%
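The excerpt above describes surfacing related terms (abbreviations, misspellings) from distributional statistics of a local clinical corpus. The cited work uses Word2Vec for this; the sketch below illustrates the same principle with a simpler window-based co-occurrence model and cosine similarity. The toy corpus, window size, and function names are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy clinical-note corpus; in the cited work the vocabulary comes from a
# large local corpus of real notes, which is why abbreviations like "sob"
# (shortness of breath) appear alongside their expansions.
corpus = [
    "patient denies chest pain and shortness of breath".split(),
    "chest pain radiating to left arm".split(),
    "no chest pain or sob reported".split(),
    "shortness of breath and sob noted on exertion".split(),
]

WINDOW = 2  # context window on each side of the target word

# Build a sparse co-occurrence vector for every word in the corpus.
vectors = defaultdict(Counter)
for sent in corpus:
    for i, word in enumerate(sent):
        for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
            if i != j:
                vectors[word][sent[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def most_similar(word, topn=3):
    """Rank other corpus words by distributional similarity to `word`."""
    return sorted(
        ((cosine(vectors[word], vectors[w]), w) for w in vectors if w != word),
        reverse=True,
    )[:topn]

print(most_similar("sob"))
```

A neural embedding such as Word2Vec replaces the raw co-occurrence counts with dense learned vectors, but the query interface (nearest neighbours in vector space) is the same idea.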
“…The residents conducted 11 predefined queries and marked the top 60 suggested terms as relevant or not relevant. This set of 11 queries was primarily inspired by the previous work of Ganesan et al, 10 and modified by the authors to accommodate a broad category of medical words in a pediatric setting (e.g., drugs, devices, procedures, diseases, and symptoms). For instance, "iPhone" (Apple, United States) was added to test the capability of the system to handle nonmedical terms.…”
Section: Evaluation Of Precision On Suggested Wordsmentioning
confidence: 99%
“…Third, the construction of user types and task types can be formalized and made more fine-grained, for example, by categorizing MD users by discipline or skill. Fourth, we utilized semantic embeddings to identify similar words, while other methods, such as graphical models [33], could be used to find related terms. Fifth, we only included unigrams when training EMR-based embeddings in our current study. We did try word embeddings based on bi-grams or tri-grams.…”
Section: Discussionmentioning
confidence: 99%
“…Removing stop words avoids generating style-indicator keywords that are not useful in the desired context. We have used the list of clinical stop words provided by Ganesan et al [36].…”
Section: Methodsmentioning
confidence: 99%
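The excerpt above applies a clinical stop-word list before keyword generation. A minimal sketch of that filtering step follows; the stop-word set here is a small illustrative sample of my own, not the list published by Ganesan et al [36].

```python
# Illustrative subset of clinical stop words: terms so frequent in notes
# that they indicate documentation style rather than clinical content.
CLINICAL_STOP_WORDS = {"patient", "history", "noted", "denies", "status", "normal"}

def remove_stop_words(tokens):
    """Drop stop words (case-insensitively), keeping content-bearing terms."""
    return [t for t in tokens if t.lower() not in CLINICAL_STOP_WORDS]

tokens = "Patient denies chest pain history of hypertension noted".split()
print(remove_stop_words(tokens))
# → ['chest', 'pain', 'of', 'hypertension']
```

In practice the full published list would be loaded from a file rather than hard-coded, but the filter itself is this one comprehension.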