2016
DOI: 10.1155/2016/1850404
Multichannel Convolutional Neural Network for Biological Relation Extraction

Abstract: The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work has been restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of the “vocabulary gap”, data sparseness, and a feature extraction process that cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relati…
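The abstract describes the core MCCNN idea: the same sentence is looked up in several parallel word-embedding tables ("channels"), convolution filters slide over all channels at once, and max-over-time pooling feeds a relation classifier. Below is a minimal PyTorch sketch of that idea; the channel count, filter widths, embedding size, and class count are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    """Minimal sketch of a multichannel CNN relation classifier.

    Each "channel" is the same sentence looked up in a different
    pretrained word-embedding table; convolution filters slide over
    all channels at once, followed by max-over-time pooling.
    All dimensions below are illustrative assumptions, not the paper's.
    """
    def __init__(self, num_channels=5, emb_dim=200, num_filters=100,
                 window_sizes=(3, 5, 7), num_classes=5):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(num_channels, num_filters, kernel_size=(w, emb_dim))
            for w in window_sizes
        ])
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(window_sizes), num_classes)

    def forward(self, x):
        # x: (batch, num_channels, seq_len, emb_dim), i.e. one embedding
        # version of the sentence per channel
        pooled = []
        for conv in self.convs:
            h = torch.relu(conv(x)).squeeze(3)   # (batch, filters, seq_len - w + 1)
            h = torch.max(h, dim=2).values       # max-over-time pooling
            pooled.append(h)
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(features)                 # relation logits

# Usage: five embedding channels of a 50-token sentence
model = MultiChannelCNN()
sentence = torch.randn(2, 5, 50, 200)            # (batch, channels, tokens, dim)
logits = model(sentence)                          # -> (2, num_classes)
```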
Cited by 120 publications (96 citation statements)
References 24 publications
“…We also show the performance of the existing models with negative instance filtering for reference. In the comparison without negative instance filtering, our model outperformed the existing CNN models (Quan et al., 2016; Zhao et al., 2016). The model was competitive with the Joint AB-LSTM model (Sahu and Anand, 2017), which was composed of multiple RNN models.…”
Section: Comparison With Existing Models
confidence: 84%
“…We mainly compare the performance without negative instance filtering, which omits some apparent negative instance pairs with rules (Chowdhury and Lavelli, 2013), since we did not incorporate it. We also show the performance of the existing models with negative instance filtering for reference.…”

Table 6: Comparison with existing models

Methods                                   P (%)   R (%)   F (%)
No negative instance filtering
  CNN                                     75.29   60.37   67.01
  MCCNN (Quan et al., 2016)               -       -       67.80
  SCNN (Zhao et al., 2016)                -       -       68
  (2015)                                  -       -       67.0
With negative instance filtering
  CNN                                     75.72   64.66   69.75
  MCCNN (Quan et al., 2016)               75.99   65.25   70.21
  SCNN (Zhao et al., 2016)                72.5    65.1    68.6
  Joint AB-LSTM (Sahu and Anand, 2017)    73.41   69.66   71.48

Comparison of attention mechanisms on CNN models with ranking objective function

Section: Comparison With Existing Models
confidence: 99%
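The "negative instance filtering" referred to above prunes drug pairs that are almost certainly non-interacting before classification, using the rules of Chowdhury and Lavelli (2013). The snippet below is only a rough sketch of that kind of surface-level heuristic (identical mentions, one mention nested in the other); the rules shown are hypothetical stand-ins, not the published rule set.

```python
def filter_negative_pairs(pairs):
    """Rough sketch of rule-based negative-instance filtering.

    `pairs` is a list of (drug1, drug2, sentence) triples. The two
    example rules below (same surface form, nested mentions) are
    illustrative stand-ins for the rules of Chowdhury and Lavelli (2013),
    not a faithful reimplementation.
    """
    kept = []
    for drug1, drug2, sentence in pairs:
        d1, d2 = drug1.lower(), drug2.lower()
        if d1 == d2:                # same drug mentioned twice: assume no DDI
            continue
        if d1 in d2 or d2 in d1:    # one mention nested in the other
            continue
        kept.append((drug1, drug2, sentence))
    return kept

# Example: the identical pair is dropped, the distinct pair is kept
pairs = [("aspirin", "aspirin", "aspirin ... aspirin ..."),
         ("warfarin", "aspirin", "warfarin increases the effect of aspirin")]
print(filter_negative_pairs(pairs))
```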
“…In Figure 1, first-layer entities would be carbamazepine and HLA-B, and second-layer entities are carbamazepine hypersensitivity and HLA-B*1502. Finally, we trained a model similar to the one described in [48] to extract relationships between identified entities. Reasonable meta-parameters were selected according to previous experiments.…”
Section: With the Biomedical Literature
confidence: 99%
“…Quan et al. used a multichannel CNN model with five types of word embedding to extract DDIs from unstructured biomedical literature [22]. Asada et al. encoded textual drug pairs with CNNs and their molecular pairs with graph convolutional networks (GCNs) to predict DDIs [23], obtaining the best result so far among CNN-based DDI extraction methods.…”
Section: Introduction
confidence: 99%
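The Asada et al. approach summarized above pairs a sentence-level CNN vector with graph convolutional encodings of the two drugs' molecular structures. A rough sketch of such a fusion is given below; the single hand-written GCN layer, the mean pooling, the concatenation, and every dimension are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_hat, node_feats):
        # adj_hat: (nodes, nodes) normalized adjacency with self-loops
        # node_feats: (nodes, in_dim) atom features
        return torch.relu(adj_hat @ self.linear(node_feats))

class TextPlusMoleculeDDI(nn.Module):
    """Sketch: concatenate a sentence-CNN vector with pooled GCN vectors
    of the two drugs' molecular graphs, then classify the pair."""
    def __init__(self, emb_dim=200, num_filters=100, atom_dim=32,
                 gcn_dim=64, num_classes=5):
        super().__init__()
        self.text_conv = nn.Conv1d(emb_dim, num_filters, kernel_size=3, padding=1)
        self.gcn = SimpleGCNLayer(atom_dim, gcn_dim)
        self.fc = nn.Linear(num_filters + 2 * gcn_dim, num_classes)

    def forward(self, sent_emb, mol1, mol2):
        # sent_emb: (batch, seq_len, emb_dim); mol1/mol2: (adj_hat, node_feats)
        h = torch.relu(self.text_conv(sent_emb.transpose(1, 2)))
        text_vec = torch.max(h, dim=2).values                 # (batch, num_filters)
        mol_vecs = []
        for adj_hat, feats in (mol1, mol2):
            nodes = self.gcn(adj_hat, feats)                   # (nodes, gcn_dim)
            mol_vecs.append(nodes.mean(dim=0, keepdim=True))   # graph-level mean pool
        mol_vec = torch.cat(mol_vecs, dim=1).expand(text_vec.size(0), -1)
        return self.fc(torch.cat([text_vec, mol_vec], dim=1))  # DDI logits
```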