Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, 2017
DOI: 10.18653/v1/e17-1014
Recognizing Mentions of Adverse Drug Reaction in Social Media Using Knowledge-Infused Recurrent Models

Abstract: Recognizing mentions of Adverse Drug Reactions (ADR) in social media is challenging: ADR mentions are context-dependent and include long, varied and unconventional descriptions as compared to more formal medical symptom terminology. We use the CADEC corpus to train a recurrent neural network (RNN) transducer, integrated with knowledge graph embeddings of DBpedia, and show the resulting model to be highly accurate (93.4 F1). Furthermore, even when lacking high quality expert annotations, we show that by employin…

Cited by 48 publications (39 citation statements)
References 23 publications
“…can help [47], it makes sense to consider the "best order" to ask the user for input in the hopes of achieving a sufficiently performant system with minimal human effort.…”
Section: Methods
confidence: 99%
“…
Method                      Precision        Recall           F1-score
Baseline [5]                0.7067 ± 0.057   0.7207 ± 0.074   0.7102 ± 0.049
Baseline with adam          0.7065 ± 0.058   0.7576 ± 0.083   0.7272 ± 0.051
KB-Embedding Baseline [22]  0.7171 ± 0.058   0.7713 ± 0.091   0.7397 ± 0.055
Self-training               0.6999 ± 0.047   0.8304 ± 0.039   0.7588 ± 0.039
Joint MTL (Section 3.3)     0.7177 ± 0.027   0.8482 ± 0.068   0.7770 ± 0.043
MTL (Section 3.2)           0.7569 ± 0.044   0.8386 ± 0.078   0.7935 ± 0.045

…set to 64, with the confidence threshold value empirically set to 0.5. The stopping criteria for the self-training kick in when the number of iterations reaches 5 or the unlabeled tweets pool is exhausted, whichever occurs first.…”
Section: Methods
confidence: 99%
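The self-training procedure quoted above can be sketched as a short loop. This is a minimal, hypothetical illustration, not the citing paper's code: `train_fn` and `predict_fn` are illustrative stand-ins, and only the parameters stated in the excerpt (confidence threshold 0.5, at most 5 iterations, early stop on an empty unlabeled pool) come from the source.

```python
CONF_THRESHOLD = 0.5  # confidence threshold, empirically set per the excerpt
MAX_ITERS = 5         # stopping criterion: at most 5 self-training rounds

def self_train(labeled, unlabeled, train_fn, predict_fn):
    """labeled: list of (x, y) pairs; unlabeled: list of x.
    predict_fn(model, x) -> (pseudo_label, confidence). All names hypothetical."""
    model = train_fn(labeled)
    for _ in range(MAX_ITERS):
        if not unlabeled:  # pool exhausted -> stop early
            break
        confident, remaining = [], []
        for x in unlabeled:
            y, conf = predict_fn(model, x)
            if conf >= CONF_THRESHOLD:
                confident.append((x, y))  # promote confident pseudo-label
            else:
                remaining.append(x)       # keep for a later round
        labeled = labeled + confident
        unlabeled = remaining
        model = train_fn(labeled)         # retrain on the enlarged set
    return model, labeled, unlabeled
```

Low-confidence examples stay in the pool and get another chance once the retrained model has seen more pseudo-labeled data.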
“…
Method                      Precision        Recall           F1-score
Baseline [5]                0.6120 ± 0.116   0.5149 ± 0.099   0.5601 ± 0.100
Baseline with adam          0.6281 ± 0.094   0.5614 ± 0.110   0.5859 ± 0.079
KB-Embedding Baseline [22]  0.5960 ± 0.081   0.6144 ± 0.068   0.6042 ± 0.060
Self-training               0.5717 ± 0.056   0.7141 ± 0.082   0.6332 ± 0.057
Joint MTL (Section 3.3)     0.5675 ± 0.049   0.7384 ± 0.079   0.6401 ± 0.051
MTL (Section 3.2)           0.6656 ± 0.083   0.6380 ± 0.077   0.6482 ± 0.065

The KB-embedding baseline [22] replaces word embeddings of the medical entities in the sentence with the corresponding embeddings learned from a knowledge base. The corresponding results can be seen in row 3 of the tables.…”
Section: Methods
confidence: 99%
“…Deep learning methods have been increasingly applied to NLP tasks in the medical domain, achieving better performance than CRFs. In the case of ADR mention recognition, Stanovsky et al. [40] employed RNNs with word embeddings trained on a Blekko medical corpus in conjunction with entity embeddings trained on DBpedia. If an entity mention was a lexical match with a DBpedia entity, then the entity embeddings trained on DBpedia replaced the word embeddings of all words in the entity mention.…”
Section: Related Work
confidence: 99%
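The embedding-substitution step described in that excerpt can be sketched as follows. This is a hypothetical illustration under assumed data structures (lookup dictionaries and `(start, end, label)` token spans are my conventions, not the paper's): on an exact lexical match with a DBpedia entity, the knowledge-base entity embedding replaces the word embedding of every word in the mention.

```python
def embed_tokens(tokens, mentions, word_emb, entity_emb):
    """tokens: list of words; mentions: list of (start, end, entity_label)
    token spans; word_emb / entity_emb: dicts mapping strings to vectors.
    All names and the span format are illustrative assumptions."""
    vectors = [word_emb.get(tok) for tok in tokens]
    for start, end, label in mentions:
        if label in entity_emb:                 # lexical match against DBpedia
            for i in range(start, end):
                vectors[i] = entity_emb[label]  # replace every word in mention
    return vectors
```

A toy call: for tokens `["severe", "headache", "today"]` with the span `(0, 2, "Headache")` matched in the entity table, the first two token vectors are both replaced by the "Headache" entity embedding, while "today" keeps its word embedding.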