2019
DOI: 10.1016/j.knosys.2018.11.020

Feature assisted stacked attentive shortest dependency path based Bi-LSTM model for protein–protein interaction

Cited by 64 publications (43 citation statements)
References 27 publications
“…Other bi-directional stacked neural networks [33], [34] have shown excellent experimental results. The basic idea of Bi-S-SRU is to superimpose a forward and a backward SRU into each training sequence, and the two SRUs are connected to an output layer.…”
Section: B. Principle of Bi-S-SRU Deep Learning Model
confidence: 99%
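To make the stacked bi-directional recurrence described in the excerpt above concrete, here is a minimal sketch assuming PyTorch. nn.GRU is used as a stand-in for the SRU cell (which is not in the standard library), and the layer sizes, class count, and the BiStackedRNN name are illustrative, not the configuration of the cited work.

```python
# Hedged sketch: a forward and a backward recurrent pass over each sequence,
# both connected to a single output layer (nn.GRU stands in for the SRU cell).
import torch
import torch.nn as nn

class BiStackedRNN(nn.Module):
    def __init__(self, emb_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.fwd = nn.GRU(emb_dim, hidden, batch_first=True)   # left-to-right pass
        self.bwd = nn.GRU(emb_dim, hidden, batch_first=True)   # right-to-left pass (on reversed input)
        self.out = nn.Linear(2 * hidden, num_classes)          # both directions feed one output layer

    def forward(self, x):                      # x: (batch, seq_len, emb_dim)
        h_fwd, _ = self.fwd(x)
        h_bwd, _ = self.bwd(torch.flip(x, dims=[1]))
        h_bwd = torch.flip(h_bwd, dims=[1])    # re-align backward states with token positions
        h = torch.cat([h_fwd, h_bwd], dim=-1)  # superimpose the two directions
        return self.out(h[:, -1, :])           # classify from the final time step

logits = BiStackedRNN()(torch.randn(4, 20, 100))   # toy batch of 4 sequences
```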
“…Yadav et al proposed a novel algorithm based on the deep bidirectional long short-term memory (Bi-LSTM) that exploited word sequences and dependency path related information to predict PPI [26]. They also proposed an algorithm based on the attentive deep RNN, which combined multiple levels of representations using word sequences and dependency path related information to predict PPI [27]. Ahmed proposed a novel tree RNN with the attention mechanism to predict PPI [28].…”
Section: Background and Related Work
confidence: 99%
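The two-channel idea summarized in the excerpt above, encoding the word sequence and the dependency-path sequence with Bi-LSTMs and pooling each with soft attention before a joint classifier, can be sketched as follows. This is a hedged illustration assuming PyTorch; the AttentivePool and PPIClassifier names, the dimensions, and the pooling choice are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of an attentive Bi-LSTM relation classifier over two inputs:
# the word sequence and the (shortest) dependency path between the entities.
import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                         # h: (batch, seq_len, dim)
        w = torch.softmax(self.score(h), dim=1)   # attention weights over time steps
        return (w * h).sum(dim=1)                 # weighted sum -> (batch, dim)

class PPIClassifier(nn.Module):
    def __init__(self, emb_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.seq_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sdp_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.pool = AttentivePool(2 * hidden)
        self.out = nn.Linear(4 * hidden, num_classes)

    def forward(self, words, sdp):                # embedded word sequence and dependency path
        hw, _ = self.seq_enc(words)
        hp, _ = self.sdp_enc(sdp)
        joint = torch.cat([self.pool(hw), self.pool(hp)], dim=-1)
        return self.out(joint)

logits = PPIClassifier()(torch.randn(2, 30, 100), torch.randn(2, 8, 100))
```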
“…Zhang et al [28] showed how leveraging the complementary advantages of RNNs and CNNs in a combined hybrid model improves biomedical relation extraction. Yadav et al [29] experimented with a bidirectional LSTM network with an attention mechanism, exploiting word sequences and the shortest dependency path between the entities, whereas Zhang et al [30] introduced a residual CNN to tackle the task. Ahmed et al [31] exploited a tree LSTM network using a structured attention architecture, showing how the attention mechanism improves the performance in relation extraction.…”
Section: Related Work
confidence: 99%
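Several of the citing works above rely on the shortest dependency path (SDP) between the two candidate entities as an input feature. Below is a minimal sketch of how such a path can be extracted, assuming spaCy (with the en_core_web_sm model) and networkx are available; the example sentence and the entity token indices are toy values, not drawn from the cited papers.

```python
# Hedged sketch: extract the shortest dependency path between two entity tokens
# by treating the dependency parse as an undirected graph.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
doc = nlp("MDM2 directly binds the p53 protein and inhibits its activity.")

# Build an undirected graph over the dependency tree (token index <-> head index).
edges = [(tok.i, tok.head.i) for tok in doc if tok.i != tok.head.i]
g = nx.Graph(edges)

e1, e2 = 0, 4                      # token indices of "MDM2" and "p53" in this toy sentence
sdp = nx.shortest_path(g, source=e1, target=e2)
print([doc[i].text for i in sdp])  # e.g. ['MDM2', 'binds', 'protein', 'p53'], depending on the parse
```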