2022
DOI: 10.1016/j.patter.2022.100551

Accurate prediction of virus-host protein-protein interactions via a Siamese neural network using deep protein sequence embeddings


Cited by 10 publications (14 citation statements)
References 54 publications (129 reference statements)
“…We conducted a comparative analysis between SENSE-PPI and several existing sequence-based deep learning models, namely PIPR [4], D-SCRIPT [53], Topsy-Turvy [52] and STEP [28].…”
Section: Other Deep Learning Architectures Used For Comparison
mentioning, confidence: 99%
“…Comparative analysis of the performance of SENSE-PPI and several other DL architectures on Guo's yeast dataset. Comparison is made based on a 5-fold cross-validation test between SENSE-PPI, PIPR, STEP, and the original model from Guo et al. Values reported for all DL architectures other than SENSE-PPI were taken from [28]. The best values across architectures are shown in bold. …of sequence pairs at each cross-validation fold, in the validation step each model mostly evaluates proteins already "seen" during training. Such pairs are known to show better performance in comparison to those excluded from training [35,13]. 3.2 Training on the human proteome and testing on model and non-model organisms: The recent global efforts to sequence the biodiversity of species [60,3,23,24,41] make PPI predictions in yet-unexplored organisms a major challenge.…”
mentioning, confidence: 99%
“…Methods for such an approach include autoencoders, convolutional neural networks [3], or sequential models like recurrent neural networks (RNN) [4] or transformer-based models [5][6][7][8][9]. Transformer-based models originate from natural language processing (NLP) and have recently gained much attention since they have achieved excellent results in many areas [10][11][12][13][14]. A principal advantage of transformer models is the ability to train them in a parallel fashion and the ability to weigh different parts of a time series differently due to their inbuilt attention mechanism.…”
Section: Introduction
mentioning, confidence: 99%
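The attention mechanism the quote above credits transformer models with — weighing different parts of a sequence differently — can be sketched in a few lines of plain Python. The three-token "sequence" and 2-dimensional embeddings below are hypothetical toy values for illustration only, not taken from any of the cited models.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted
    average of the values, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # weights sum to 1 for each query
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy sequence of three token embeddings; self-attention uses Q = K = V.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(seq, seq, seq)
```

Each context vector `ctx[i]` is a convex combination of the input embeddings, so tokens most similar to the query dominate its representation; this is the "weigh different parts of a time series differently" property the quote refers to.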
“…Most of the work in this area focuses on prediction of interactions from sequence, especially using deep learning techniques. Some recent publications reported highly accurate prediction results from sequence alone that caught our attention ( Tsukiyama et al, 2021 ; Asim et al, 2022 ; Madan et al, 2022 ). As long-time practitioners of machine learning in this area, we approach such results with a healthy dose of skepticism.…”
Section: Introduction
mentioning, confidence: 99%
“…Although some researchers have rightfully shunned the technique of similarity-constrained negative example selection ( Liu-Wei et al, 2021 ; Madan et al, 2022 ), this practice remains present in the field of HPI prediction ( Basit et al, 2018 ; Zhou et al, 2018 ; Yang et al, 2020 ; Pitta et al, 2021 ; Tsukiyama et al, 2021 ; Yang et al, 2021 ; Asim et al, 2022 ) and also in PPI prediction ( Chen et al, 2022 ), necessitating this paper to alert researchers to this issue. We have also observed the use of similarity-based choice of negative examples in other sequence-based prediction problems such as anti-microbial peptide prediction ( Veltri et al, 2018 ).…”
Section: Introduction
mentioning, confidence: 99%
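The bias this last quote warns about can be illustrated with a toy construction (all sequences, thresholds, and the "classifier" below are hypothetical, not taken from any cited study): when negatives are accepted only if they are dissimilar from every positive, a model that looks at amino-acid composition alone, and never at the interaction itself, already separates the two classes.

```python
import random

random.seed(0)
AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def composition(seq):
    """Fraction of each amino acid in the sequence."""
    return {a: seq.count(a) / len(seq) for a in AA}

def dissimilarity(s1, s2):
    """L1 distance between amino-acid compositions."""
    c1, c2 = composition(s1), composition(s2)
    return sum(abs(c1[a] - c2[a]) for a in AA)

# Toy positives: sequences drawn from a hydrophobic-rich alphabet.
pos_alphabet = "AILMVF"
positives = ["".join(random.choice(pos_alphabet) for _ in range(50))
             for _ in range(20)]

# Similarity-constrained negatives: random sequences kept only if
# compositionally dissimilar from EVERY positive -- mimicking the
# criticized selection strategy.
negatives = []
while len(negatives) < 20:
    cand = "".join(random.choice(AA) for _ in range(50))
    if all(dissimilarity(cand, p) > 0.8 for p in positives):
        negatives.append(cand)

# A trivial "predictor" that never sees interactions, only composition,
# still separates the classes perfectly on this construction.
def hydrophobic_fraction(seq):
    return sum(seq.count(a) for a in "AILMVF") / len(seq)

pos_scores = [hydrophobic_fraction(s) for s in positives]
neg_scores = [hydrophobic_fraction(s) for s in negatives]
```

The apparent perfect accuracy here reflects the sampling scheme, not any learned biology — which is why benchmarks built this way can overstate predictor performance.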