2019
DOI: 10.1007/s40264-018-0762-z

Overview of the First Natural Language Processing Challenge for Extracting Medication, Indication, and Adverse Drug Events from Electronic Health Record Notes (MADE 1.0)

Abstract: Introduction: This work describes the MADE 1.0 corpus and provides an overview of the MADE 2018 challenge for Extracting Medication, Indication and Adverse Drug Events from Electronic Health Record Notes. Objective: The goal of MADE is to provide a set of common evaluation tasks to assess the state of the art for NLP systems applied to electronic health records (EHRs) supporting drug safety surveillance and pharmacovigilance. We also provide benchmarks on the MADE dataset using the system submissions received …

Citations: Cited by 124 publications (92 citation statements)
References: 66 publications

“…Hence, Table 4: Medical Named Entity Recognition: Medication Attributes. Benchmark Datasets: n2c2 [56]; i2b2 2009 [57]; MADE 1.0 [59]; DDI [60]…”
Section: Citations
Classification: mentioning
Confidence: 99%
“…The top performers for the medication attribute REX task [62] employed a joint learning approach based on CNN-RNN. Table 5: Medical Relation Extraction: Medication-Attribute Relations (including ADEs). Benchmark Datasets: n2c2 [56]; MADE 1.0 [59]…”
Section: Medication-attribute Relations
Classification: mentioning
Confidence: 99%
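
The "joint learning approach based on CNN-RNN" named in the statement above is only named, not described; the snippet below is a hypothetical PyTorch sketch of a relation classifier in that general CNN-RNN family, combining a BiLSTM (RNN) sentence encoder with a convolution-and-pooling (CNN) layer. It is an illustration under stated assumptions, not the referenced team's actual joint model, and all class and parameter names are invented for the example.

    import torch
    import torch.nn as nn

    class CNNRNNRelationClassifier(nn.Module):
        """Hypothetical CNN-RNN classifier for medication-attribute relations (illustrative only)."""
        def __init__(self, vocab_size, num_relations, emb_dim=100, hidden=128, conv_channels=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # RNN component: BiLSTM over the sentence containing the two candidate entities.
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            # CNN component: 1-D convolution plus max-pooling over the BiLSTM states.
            self.conv = nn.Conv1d(2 * hidden, conv_channels, kernel_size=3, padding=1)
            self.classifier = nn.Linear(conv_channels, num_relations)

        def forward(self, token_ids):
            states, _ = self.lstm(self.embed(token_ids))               # (batch, seq, 2*hidden)
            features = torch.relu(self.conv(states.transpose(1, 2)))   # (batch, channels, seq)
            pooled = features.max(dim=2).values                        # max-pool over the sequence
            return self.classifier(pooled)                             # relation logits (e.g. ADE, dosage, none)

A truly joint setup would additionally share this encoder with an NER tagging head and optimize both losses together; that detail is omitted here for brevity.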
“…More detailed information on the challenge can be found at [96]. Jagannatha et al. reported that, out of the 11 participating teams, the highest F1 scores in each category were 0.8290 in NER, 0.8684 in RI, and 0.6170 in NER + RI, where the F1 score is the harmonic mean of precision and recall and ranges from 0 (worst) to 1 (best) [97]…”
Section: MADE 1.0 Challenge: Pharmacovigilance on Cancer Patient EMRs
Classification: mentioning
Confidence: 99%
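
As a quick reference for the F1 values quoted above, here is a minimal Python sketch of how entity-level precision, recall, and F1 are typically computed from true-positive/false-positive/false-negative counts. The exact matching criteria and evaluation scripts used in MADE 1.0 are not reproduced here, and the counts in the example are invented for illustration.

    def precision_recall_f1(tp, fp, fn):
        """Precision, recall, and their harmonic mean (F1) from entity-level counts."""
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    # Hypothetical counts: 829 correctly extracted entities, 171 spurious, 171 missed
    # give precision = recall = F1 = 0.829, roughly the level of the best NER submission.
    print(precision_recall_f1(829, 171, 171))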
“…It is designed for classification and prediction on time-series data, in which events may occur with significant and unknown time lags in the sequence [99]. Teams involved in the MADE 1.0 challenge used pre-trained embeddings to initialize the RNNs or as feature inputs to CRF training [97]. Among the NER models in this challenge, conditional random fields (CRF) and long short-term memory (LSTM) networks were the most frequently used frameworks [97]…”
Section: MADE 1.0 Challenge: Pharmacovigilance on Cancer Patient EMRs
Classification: mentioning
Confidence: 99%
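
The last statement describes LSTM and CRF as the most common NER frameworks in the challenge, with pre-trained embeddings used to initialize the networks or as CRF features. Below is a minimal, hypothetical BiLSTM-CRF tagger sketch, assuming PyTorch and the pytorch-crf package are available; it illustrates the general architecture only and is not any participating team's system.

    import torch
    import torch.nn as nn
    from torchcrf import CRF  # assumes the pytorch-crf package is installed

    class BiLSTMCRFTagger(nn.Module):
        """Hypothetical BiLSTM-CRF tagger for medication/ADE entity recognition (illustrative only)."""
        def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
            super().__init__()
            # Pre-trained word embeddings could be copied into this layer before training.
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.proj = nn.Linear(2 * hidden, num_tags)   # per-token tag emission scores
            self.crf = CRF(num_tags, batch_first=True)    # learns tag-transition scores, decodes with Viterbi

        def forward(self, token_ids, tags=None, mask=None):
            emissions = self.proj(self.lstm(self.embed(token_ids))[0])
            if tags is not None:
                return -self.crf(emissions, tags, mask=mask)   # negative log-likelihood for training
            return self.crf.decode(emissions, mask=mask)       # best BIO tag sequence per note

The CRF layer is what enforces consistent BIO label sequences (for example, no I-ADE tag immediately after O), which is the usual reason it is paired with LSTM encoders in clinical NER systems.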